SQuARE is a flexible and extensible Question Answering (QA) platform that enables users to easily implement, manage, and share their custom QA pipelines (called Skills in SQuARE).
There are two ways to use SQuARE:
- 🌐 Get access to the existing QA Skills (and even deploy your Skill!) via our demo page;
- 💾 Or clone and install SQuARE to host services on a local machine.
Recent advances in NLP and information retrieval have given rise to a diverse set of question answering tasks that come in different formats (e.g., extractive, abstractive), require different model architectures (e.g., generative, discriminative), and involve different setups (e.g., with or without retrieval). Although there are many powerful, specialized QA pipelines (a.k.a. Skills) that each target a single domain, model, or setup, there exists no framework where users can easily explore and compare such pipelines and extend them according to their needs.
To address this issue, we present SQuARE, an extensible online QA platform for researchers which allows users to query and analyze a large collection of modern Skills via a user-friendly web interface and integrated behavioural tests. In addition, QA researchers can develop, manage and share their custom Skills using our microservices that support a wide range of models (Transformers, Adapters, ONNX), datastores and retrieval techniques (e.g., sparse and dense).
Find out more about the project on UKP's website.
👉 If you want to use the public SQuARE service online, refer to Online Service for querying the existing Skills and to Add New Skills for adding your own.
👉 If you want to deploy SQuARE locally yourself, please refer to Local Installation.
👉 For an illustration of the architecture, please refer to Architecture.
👉 And feel free to contact us.
Try out the available Skills on the demo page! They include span-extraction, abstractive, and multiple-choice QA, with or without contexts (open-domain QA based on retrieval).
To add new skills, please see the skills section.
To run UKP-SQuARE locally, you need the following software:
Next, change `environment` to `local` and `os` to your operating system in the `config.yaml`. For installation we provide a script that takes care of the entire setup for you. After installing the previous requirements, simply run:
```bash
bash install.sh
```
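For reference, the relevant part of `config.yaml` could look like the following (a sketch; the exact keys and layout in your checkout may differ):

```yaml
# Sketch of the two settings mentioned above -- verify against your config.yaml.
environment: local   # use "local" for a local deployment
os: linux            # your operating system, e.g. linux, macos, windows
```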
Finally, you can run the full system with docker-compose. Before doing so, you might want to reduce the number of running models, depending on your resources. To do so, remove the respective services from the docker-compose file.
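For example, you could comment out one of the model services in the compose file to save resources (the service name below is illustrative; check your docker-compose file for the actual ones):

```yaml
services:
  # model_bert:          # commented out to reduce resource usage
  #   image: ...
  frontend:              # other services are left untouched
    ...
```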
```bash
docker-compose up -d
```
Check with
```bash
docker-compose logs -f
```
if all systems have started successfully. Once they are up and running, go to square.ukp-lab.local.
👉 Accept that the browser cannot verify the certificate.
Add Skills according to the Add New Skills section. Note that for open-domain Skills, the datastore needs to be created first.
A full (open-domain QA) Skill pipeline involves six steps:
- First a user selects a Skill and issues a query via the user interface;
- The selected QA Skill forwards the query to the respective Datastore for document retrieval;
- The Datastore gets the query embedding from the Models, uses it for semantic document retrieval and returns the top documents to the Skill;
- The Skill sends the query and retrieved documents to the reader model for answer extraction;
- Finally, the answers are shown to the user;
- Optionally, the user can view the results of the predefined behavioural tests for the Skill.
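The steps above can be sketched in plain Python. All names and the toy retrieval/reading logic here are illustrative only; they do not reflect the actual SQuARE API:

```python
# Minimal sketch of the open QA Skill pipeline described above.
# Retrieval and reading are toy word-overlap heuristics, purely for illustration.

def retrieve(query, documents, top_k=2):
    # Steps 2-3: the Datastore returns the top documents for the query.
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:top_k]

def read(query, documents):
    # Step 4: a reader model extracts an answer from the retrieved documents.
    # Toy reader: return the first document sharing a word with the query.
    q = set(query.lower().split())
    for doc in documents:
        if q & set(doc.lower().split()):
            return doc
    return ""

def answer(query, documents):
    # Steps 1 and 5: the Skill receives the query and returns the answer.
    top_docs = retrieve(query, documents)
    return read(query, top_docs)

docs = [
    "Darmstadt is a city in Hesse.",
    "SQuARE is a QA platform.",
]
print(answer("what is square", docs))  # → SQuARE is a QA platform.
```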
The main contributors of this repository are:
- Tim Baumgärtner, Kexin Wang, Rachneet Singh Sachdeva, Max Eichler, Gregor Geigle, Clifton Poth, Hannah Sterz
Contact person: Tim Baumgärtner (Skills and general questions), Kexin Wang (Datastores), Rachneet Singh Sachdeva (Models)
https://www.ukp.tu-darmstadt.de/
Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.
This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.