Comments (8)
Hi @dxlong2000,
Unfortunately, we don't have our fine-tuned models anymore. You need to fine-tune BERT yourself first.
Hope this helps!
from dialogentailment.
Hi @dxlong2000,
For Semantic Similarity, take a look here. You need to write code like the following:

```python
from dialogentail.semantic_similarity import SemanticSimilarity

ss = SemanticSimilarity()
ss.compute(conversation_history, actual_response, generated_response)
```

Here, `conversation_history` is a list of strings, while `actual_response` and `generated_response` are both strings.
Semantic Similarity measures cosine similarity between embedding vectors. An updated version of it would be BERTScore.
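To make the cosine-similarity step concrete, here is a generic sketch with NumPy. This is not the library's actual implementation; the vectors below are made-up stand-ins for sentence embeddings that would normally come from an encoder such as ELMo or BERT:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for sentence embeddings (real ones come from ELMo/BERT).
resp_vec = np.array([0.2, 0.7, 0.1])
hist_vec = np.array([0.3, 0.6, 0.2])
score = cosine_similarity(resp_vec, hist_vec)  # value in [-1, 1]
```

Identical vectors score 1.0, orthogonal vectors score 0.0, so higher values indicate closer embeddings.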
`actual_response` is not really necessary. You can pass the same string as `generated_response`.
If you want to use an entailment model, the coherence metric, here, is what you need:
```python
from dialogentail.coherence import BertCoherence

c = BertCoherence("/path/to/model")
c.compute(conversation_history, actual_response, generated_response)
```
The constructor argument is the path to a fine-tuned BERT model.
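To make the idea concrete: an entailment-based coherence metric typically treats the conversation history as the premise and the response as the hypothesis, and uses the classifier's entailment probability as the coherence score. Below is a minimal sketch of that scoring step only, assuming a 3-way (contradiction/neutral/entailment) classifier whose raw logits are already available. The logits are made up and this is not the repository's actual code:

```python
import math

def entailment_probability(logits):
    """Softmax over 3-way NLI logits; returns P(entailment).

    Assumes label order [contradiction, neutral, entailment].
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    return exps[2] / sum(exps)

# Made-up logits from a hypothetical fine-tuned BERT entailment model.
coherence_score = entailment_probability([-1.2, 0.3, 2.1])
```

A response the model judges as entailed by the history gets a score near 1, while a contradictory one gets a score near 0.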
Thanks for your reply, I see. Would you mind uploading the inference code, i.e., how to load and evaluate a new dialogue? Thanks
Our code supports evaluation. You can find it here. We didn't implement inference, where the predicted labels are saved for input data, but it would be quite similar to the evaluation code.
Hi @ehsk ,
Your evaluation code only reports `eval_accuracy`, `eval_loss`, `global_step`, and `loss`. May I ask how I can get the SS scores? Looking forward to hearing from you soon.
Thanks!
Hi @ehsk ,
Thanks for your quick response. My understanding is that the entailment model is trained on the ground-truth responses, and then we can take that trained model to evaluate a new conversation without knowing the `actual_response`, am I correct?
I still see that the computation of `ss` includes `actual_response`. In the paper I saw: "It measures the distance between the generated response and the utterances in the conversation history", but there is no mention of the `actual_response`. Would you mind clarifying this for me?
Thanks a lot!
I saw you already provided `sim_generated_resp`. That answers my question above. Is there any way I can load my above pre-trained BERT instead of ELMo?
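On swapping the embedder: in general, all the cosine-similarity step needs is a function that maps a sentence to one fixed-size vector, so any encoder (ELMo or BERT) can plug in. A minimal sketch of the common mean-pooling approach, assuming per-token vectors are already available from some encoder; the token vectors below are made up:

```python
import numpy as np

def mean_pool(token_vectors):
    """Average per-token vectors into one fixed-size sentence embedding."""
    return np.mean(np.asarray(token_vectors, dtype=float), axis=0)

# Made-up 4-dim token vectors standing in for encoder outputs.
tokens = [[1.0, 0.0, 2.0, 0.0],
          [3.0, 0.0, 0.0, 2.0]]
sentence_vec = mean_pool(tokens)  # → array([2., 0., 1., 1.])
```

The resulting sentence vectors can then be compared with cosine similarity regardless of which encoder produced the token vectors.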