ConvLab-2 is an open-source toolkit that enables researchers to build task-oriented dialogue systems with state-of-the-art models, perform end-to-end evaluation, and diagnose the weaknesses of systems. As the successor of ConvLab, ConvLab-2 inherits ConvLab's framework but integrates more powerful dialogue models and supports more datasets.
The code of ConvLab-2 has been released here.
If you use ConvLab-2 in your research, please cite ConvLab-2: An Open-Source Toolkit for Building, Evaluating, and Diagnosing Dialogue Systems.
07/15/2020 -- A cleaned version of the MultiWOZ 2.1 train/val/test dataset has been added at data/multiwoz/MultiWOZ2.1_Cleaned.zip.
As part of the Ninth Dialog System Technology Challenge (DSTC9), Microsoft Research and Tsinghua University are hosting Multi-domain Task-oriented Dialog Challenge II, aiming to solve two tasks in the multi-domain task completion setting:
End-to-end Multi-domain Task Completion Dialog: In recent years there has been increasing interest in building complex task completion bots that span multiple domains. In this task, participants will develop an end-to-end dialog system that receives natural language as input and generates natural language as output in the travel planning setting. There is no restriction on the modeling approaches, and all resources/datasets/pre-trained models in the community can be used for model training. The system will be evaluated in the MultiWOZ 2.1 dataset setting with ConvLab-2.
Cross-lingual Multi-domain Dialog State Tracking: Building a dialog system that handles multiple languages becomes increasingly important with the rapid progress of globalization. To advance state-of-the-art technologies in handling cross-lingual multi-domain dialogs, we offer the task of building cross-lingual dialog state trackers with a training set in a resource-rich language and dev/test sets in a resource-poor language. In particular, this task consists of two sub-tasks. One uses English as the resource-rich language and Chinese as the resource-poor language on the MultiWOZ dataset, and the other uses Chinese as the resource-rich language and English as the resource-poor language on the CrossWOZ dataset.
| Date | Milestone |
|------|-----------|
| Jun 15, 2020 | Competition starts |
| Sep 21, 2020 | Test data released |
| Oct 5, 2020 | Entry submission deadline |
| Oct 19, 2020 | Results announced |
| Nov 2020 | Paper submission deadline |
- Automatic End-to-End Evaluation: The submitted system (code) will be evaluated using the user-simulator setting `bertnlu + ruleDST + rulePolicy + templateNLG` as in ConvLab-2. We will use the evaluator `MultiWozEvaluator` in `convlab2/evaluator/multiwoz_eval.py` to report metrics including success rate, average reward, number of turns, precision, recall, and F1 score.
- Human Evaluation: The submitted system will be evaluated on Amazon Mechanical Turk. Crowd-workers will communicate with your submitted system and provide a rating based on the whole experience (language understanding, appropriateness, etc.).
We evaluate the performance of the dialog state tracker using two metrics:
- Joint Goal Accuracy. This metric evaluates whether the predicted dialog state is exactly equal to the ground truth.
- Slot Precision/Recall/F1. These metrics evaluate whether the predicted labels for individual slots in the dialog state match the ground truth, micro-averaged over all slots.
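To make the two metrics concrete, here is a minimal plain-Python sketch. The slot names and dialog states are made up for illustration and do not follow ConvLab-2's exact state schema; the official evaluation uses ConvLab-2's own scripts.

```python
# Minimal sketch of the two DST metrics. States are dicts mapping slot -> value;
# the slot names below are illustrative, not ConvLab-2's exact schema.

def joint_goal_accuracy(preds, golds):
    """Fraction of turns whose predicted state exactly equals the gold state."""
    exact = sum(1 for p, g in zip(preds, golds) if p == g)
    return exact / len(golds)

def slot_prf1(preds, golds):
    """Micro-averaged precision/recall/F1 over individual (slot, value) pairs."""
    tp = fp = fn = 0
    for p, g in zip(preds, golds):
        p_pairs, g_pairs = set(p.items()), set(g.items())
        tp += len(p_pairs & g_pairs)   # correctly predicted pairs
        fp += len(p_pairs - g_pairs)   # predicted but not in gold
        fn += len(g_pairs - p_pairs)   # in gold but not predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: two turns, one predicted perfectly, one with a wrong value.
golds = [{"hotel-area": "north", "hotel-stars": "4"},
         {"hotel-area": "north", "hotel-stars": "4", "train-day": "friday"}]
preds = [{"hotel-area": "north", "hotel-stars": "4"},
         {"hotel-area": "north", "hotel-stars": "4", "train-day": "monday"}]

print(joint_goal_accuracy(preds, golds))  # 0.5: only the first turn matches exactly
print(slot_prf1(preds, golds))            # (0.8, 0.8, 0.8): tp=4, fp=1, fn=1
```

Note how joint goal accuracy is strict (one wrong slot zeroes out the whole turn), while the micro-averaged slot metrics still give partial credit for the correct slots.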
- Submit the participation form here. Your identities will NOT be made public.
- Participate at https://aka.ms/dstc-mdtc (sign up if you do not have a CodaLab account). Participation is welcome from any team.
- Extend ConvLab-2 with your code, and submit up to 5 agents. In the main directory, please create a directory called `end2end`, with sub-directories named `submission[1-5]`. In each sub-directory, add your runnable main Python scripts for both automatic evaluation and human evaluation. For automatic evaluation, please use a format similar to `tests/test_end2end.py` in ConvLab-2, with the main script named `automatic.py`. For human evaluation, please use a format similar to `convlab2/human_eval/run_agent.py` in ConvLab-2, with the main script named `human.py`. Human evaluation is executed on Amazon Mechanical Turk; please make sure that your agent is compatible with `convlab2/human_eval/run.py` for evaluation there.
- If your code uses external packages beyond the existing Docker environment, please choose one of the following two approaches to specify your environment requirements:
  - Add `install.sh` under the main directory. Running `install.sh` should install all required extra packages.
  - Create your own Dockerfile with the name `dev.dockerfile`.
- Zip the system and submit.
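For orientation, the end-to-end submission layout described above could be assembled like this. This is a hypothetical sketch: the directory and file names follow the instructions, but the contents of the scripts (and whether you need `install.sh` at all) depend on your system.

```shell
# Hypothetical sketch of the end-to-end submission layout described above.
# Only submission1 is shown; up to submission5 is allowed.
mkdir -p end2end/submission1

# automatic.py: entry point for automatic evaluation,
# modeled on tests/test_end2end.py in ConvLab-2.
touch end2end/submission1/automatic.py

# human.py: entry point for human evaluation,
# modeled on convlab2/human_eval/run_agent.py in ConvLab-2.
touch end2end/submission1/human.py

# Optional: install.sh (or dev.dockerfile) for extra dependencies
# beyond the provided Docker environment.
touch install.sh
```

Zipping the resulting main directory then yields the archive to submit.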
- Extend ConvLab-2 with your code, and submit up to 5 results. In the main directory, please create a directory called `multiwoz-dst` or `crosswoz-dst` (or both, based on your selected task(s)), and include your prediction results with the name `submission[1-5]`.
- Zip them and submit.
If you are participating in both tasks, you can submit one zip file containing the results of both tasks together.
SOLOIST: Few-shot Task-Oriented Dialog with A Single Pre-trained Auto-regressive Model
DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
Please email [email protected] if you have any questions. For special enquiries, feel free to contact: jincli (at) microsoft (dot) com; zhu-q18 (at) mails (dot) tsinghua (dot) edu (dot) cn