EVA
EVA is a Chinese pre-trained dialogue model with 2.8 billion parameters.
EVA is based on an encoder-decoder architecture and performs well on many dialogue tasks, especially multi-turn human-bot conversation.
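For readers who want to try the model, the following is a minimal sketch of multi-turn chat with an encoder-decoder dialogue model. The checkpoint id, the use of the Hugging Face seq2seq interface, and the turn separator are illustrative assumptions; see the GitHub repository for the official loading and inference instructions.

```python
# Minimal multi-turn chat sketch; the checkpoint id below is hypothetical.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "thu-coai/EVA2.0-base"  # illustrative checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

history = []  # alternating user / bot turns

def chat(user_utterance: str) -> str:
    history.append(user_utterance)
    # Concatenate the dialogue history as encoder input, one turn per segment.
    sep = tokenizer.sep_token or "\n"  # fall back if no SEP token is defined
    context = sep.join(history)
    inputs = tokenizer(context, return_tensors="pt", truncation=True)
    output_ids = model.generate(
        **inputs, max_new_tokens=64, do_sample=True, top_p=0.9
    )
    reply = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    history.append(reply)
    return reply

print(chat("你好！"))
```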
Features
Large-scale
A large-scale Chinese dialogue model with 2.8 billion parameters
High-quality Corpus
Uses WDC-Dialogue, a 181GB high-quality dialogue corpus constructed with strict cleaning rules (a sketch of such rule-based filtering follows)
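As an illustration of what strict cleaning rules can look like, here is a minimal sketch of rule-based dialogue filtering. The specific thresholds and rules are assumptions for illustration, not the published WDC-Dialogue pipeline.

```python
# Illustrative heuristic filters; not the actual WDC-Dialogue rules.
import re

def keep_turn(utterance: str) -> bool:
    """Apply simple heuristic filters to a single dialogue turn."""
    if not (2 <= len(utterance) <= 128):      # drop too-short / too-long turns
        return False
    if re.search(r"https?://", utterance):    # drop turns containing URLs
        return False
    # Drop turns dominated by a single repeated character (spam-like text).
    if max(utterance.count(c) for c in set(utterance)) > 0.5 * len(utterance):
        return False
    return True

def keep_dialogue(turns: list[str]) -> bool:
    """Keep a dialogue only if it has at least two valid turns."""
    return len(turns) >= 2 and all(keep_turn(t) for t in turns)

print(keep_dialogue(["你好！", "你好，很高兴认识你。"]))  # True
```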
Efficient Training
Uses a data sampling strategy based on attention masks to accelerate training, as sketched below
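One common way to realize such a strategy is to pack several short dialogues into a single training sequence and build a block-diagonal attention mask so that tokens never attend across sample boundaries. The sketch below illustrates this idea; the exact packing scheme EVA uses may differ.

```python
# Sketch of sample packing with a block-diagonal attention mask;
# shapes and conventions here are illustrative assumptions.
import torch

def pack_with_mask(samples: list[list[int]], pad_id: int = 0, max_len: int = 16):
    ids = torch.full((max_len,), pad_id, dtype=torch.long)
    mask = torch.zeros(max_len, max_len, dtype=torch.bool)
    pos = 0
    for sample in samples:
        n = len(sample)
        if pos + n > max_len:
            break  # remaining samples go into the next packed sequence
        ids[pos:pos + n] = torch.tensor(sample)
        mask[pos:pos + n, pos:pos + n] = True  # attend only within the sample
        pos += n
    return ids, mask

ids, mask = pack_with_mask([[5, 6, 7], [8, 9], [10, 11, 12, 13]])
print(ids)
print(mask.int())
```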
Performance
EVA performs well on zero-shot dialogue tasks
Performance on Zero-shot Tasks
Applications
Open-domain Dialogue
Demo
Question Answering