richardpaulhudson / coreferee
Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further languages
License: MIT License
I'm new to Python, so please forgive me for any simple errors. I'm using spaCy 3.1.7 and Python 3.12.2 on 64-bit Windows. I successfully installed spaCy and was able to run it without a problem, but I run into this error when trying to install coreferee. I can post more error logs if needed.
python3 -m pip install coreferee
I get this error:
Collecting coreferee
Using cached coreferee-1.1.3-py3-none-any.whl.metadata (2.2 kB)
Collecting spacy<3.2.0,>=3.1.0 (from coreferee)
Using cached spacy-3.1.7.tar.gz (1.0 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [771 lines of output]
Collecting setuptools
Using cached setuptools-69.2.0-py3-none-any.whl.metadata (6.3 kB)
Collecting cython<3.0,>=0.25
Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting cymem<2.1.0,>=2.0.2
Using cached cymem-2.0.8-cp312-cp312-win_amd64.whl.metadata (8.6 kB)
Collecting preshed<3.1.0,>=3.0.2
Using cached preshed-3.0.9-cp312-cp312-win_amd64.whl.metadata (2.2 kB)
Collecting murmurhash<1.1.0,>=0.28.0
Using cached murmurhash-1.0.10-cp312-cp312-win_amd64.whl.metadata (2.0 kB)
Collecting thinc<8.1.0,>=8.0.12
Using cached thinc-8.0.17.tar.gz (189 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting blis<0.8.0,>=0.4.0
Using cached blis-0.7.11-cp312-cp312-win_amd64.whl.metadata (7.6 kB)
Collecting pathy
Using cached pathy-0.11.0-py3-none-any.whl.metadata (16 kB)
Collecting numpy>=1.15.0
Using cached numpy-1.26.4-cp312-cp312-win_amd64.whl.metadata (61 kB)
Collecting wasabi<1.1.0,>=0.8.1 (from thinc<8.1.0,>=8.0.12)
Using cached wasabi-0.10.1-py3-none-any.whl.metadata (28 kB)
Collecting srsly<3.0.0,>=2.4.0 (from thinc<8.1.0,>=8.0.12)
Using cached srsly-2.4.8-cp312-cp312-win_amd64.whl.metadata (20 kB)
Collecting catalogue<2.1.0,>=2.0.4 (from thinc<8.1.0,>=8.0.12)
Using cached catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
Collecting pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4 (from thinc<8.1.0,>=8.0.12)
Using cached pydantic-1.8.2-py3-none-any.whl.metadata (103 kB)
Collecting smart-open<7.0.0,>=5.2.1 (from pathy)
Using cached smart_open-6.4.0-py3-none-any.whl.metadata (21 kB)
Collecting typer<1.0.0,>=0.3.0 (from pathy)
Using cached typer-0.9.0-py3-none-any.whl.metadata (14 kB)
Collecting pathlib-abc==0.1.1 (from pathy)
Using cached pathlib_abc-0.1.1-py3-none-any.whl.metadata (18 kB)
Collecting typing-extensions>=3.7.4.3 (from pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4->thinc<8.1.0,>=8.0.12)
Using cached typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB)
Collecting click<9.0.0,>=7.1.1 (from typer<1.0.0,>=0.3.0->pathy)
Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting colorama (from click<9.0.0,>=7.1.1->typer<1.0.0,>=0.3.0->pathy)
Using cached colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)
Using cached setuptools-69.2.0-py3-none-any.whl (821 kB)
Using cached Cython-0.29.37-py2.py3-none-any.whl (989 kB)
Using cached cymem-2.0.8-cp312-cp312-win_amd64.whl (39 kB)
Using cached preshed-3.0.9-cp312-cp312-win_amd64.whl (122 kB)
Using cached murmurhash-1.0.10-cp312-cp312-win_amd64.whl (25 kB)
Using cached blis-0.7.11-cp312-cp312-win_amd64.whl (6.6 MB)
Using cached pathy-0.11.0-py3-none-any.whl (47 kB)
Using cached pathlib_abc-0.1.1-py3-none-any.whl (23 kB)
Using cached numpy-1.26.4-cp312-cp312-win_amd64.whl (15.5 MB)
Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Using cached pydantic-1.8.2-py3-none-any.whl (126 kB)
Using cached smart_open-6.4.0-py3-none-any.whl (57 kB)
Using cached srsly-2.4.8-cp312-cp312-win_amd64.whl (478 kB)
Using cached typer-0.9.0-py3-none-any.whl (45 kB)
Using cached wasabi-0.10.1-py3-none-any.whl (26 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB)
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Building wheels for collected packages: thinc
Building wheel for thinc (pyproject.toml): started
Building wheel for thinc (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
Building wheel for thinc (pyproject.toml) did not run successfully.
exit code: 1
Hi,
While testing coreferee in French on this simple example:
"Robin est un garçon, il est gentils. La Reine Elisabeth II est aussi gentille"
with this code:
import spacy
import coreferee
nlp = spacy.load("fr_core_news_lg")
nlp.add_pipe('coreferee')
doc = nlp(text)
doc._.coref_chains.print()
I get this error message:
Unexpected error in Coreferee annotating document, skipping ....
⚠ <class 'TypeError'>
⚠ unsupported operand type(s) for |: 'dict' and 'dict'
versions:
python==3.8.1
spacy==3.2.0
fr_core_news_lg==3.2.0
coreferee==1.3.1
I think this issue is due to an operation on dicts that is not yet supported in Python 3.8 (the | dict union operator was only added in Python 3.9). The syntax to merge two dicts needs to be changed as follows:
a = {"exemple_1":5,"exemple_2":3}
b = {"exemple_2":5,"exemple_3":3}
c = {**a,**b}
instead of :
a = {"exemple_1":5,"exemple_2":3}
b = {"exemple_2":5,"exemple_3":3}
c = a|b
For French, the following change needs to be applied, at least in this file:
"coreferee/lang/fr/language_specific_rules.py", line 1276
After changing this file, the bug disappears on my example, but it might still occur for other languages or use cases.
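The two merge spellings behave identically wherever both are available; a quick self-contained check in plain Python (no coreferee needed):

```python
import sys

a = {"exemple_1": 5, "exemple_2": 3}
b = {"exemple_2": 5, "exemple_3": 3}

# {**a, **b} works on Python 3.5+; on a key clash the right-hand dict wins,
# exactly as with the | operator.
merged = {**a, **b}
assert merged == {"exemple_1": 5, "exemple_2": 5, "exemple_3": 3}

if sys.version_info >= (3, 9):
    # The dict union operator (PEP 584) only exists from Python 3.9 onwards,
    # which is why the original code crashes under 3.8.
    assert (a | b) == merged
```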
Hope this helps,
Thank you for your amazing work,
When executing the following code:
nlp = spacy.load('en_core_web_md')
nlp.add_pipe('coreferee')
I get the following error:
coreferee.errors.ModelNotSupportedError: en_core_web_md version 3.1.0
Any idea why this is happening? And what can be done in order to resolve this?
Earlier I tried training my custom spaCy NER model on the LitBank dataset, which was working. But when I tried training on my own data, it seems that the coref_chains attribute doesn't mark any text as true. Can you help me? How can I proceed?
I have attached the self-annotated sample dataset too; can you check whether it is alright?
Thanks in advance!
(Link to custom dataset)
https://drive.google.com/drive/folders/1WzRogtvg81TMCHmVR0Kw4iqrbVWCFgO7?usp=sharing
While trying to use Coreferee to replace proper nouns with their corresponding references, Coreferee returns the wrong token indexes. This issue only occurs if a merge was done beforehand.
# nlp = spacy.load('en_core_web_trf')
# nlp.add_pipe('coreferee')
doc = nlp("the big bad wolf is small, he is also bad")
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[1:4])  # merge "big bad wolf" into a single token

def coref(doc):
    resolved_text = ""
    for token in doc:
        print('token:', token)
        # resolve() returns the referent tokens for an anaphor, or None
        repres = doc._.coref_chains.resolve(token)
        if repres:
            print("refer to:", repres)
            resolved_text += " " + " and ".join([t.text for t in repres])
        else:
            resolved_text += " " + token.text
    return resolved_text

resolved_text = coref(doc)
print(resolved_text)
I expect "he" to refer to "big bad wolf", but I get "small" instead.
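The index shift itself is easy to reproduce without any trained model; a minimal sketch using only spaCy's tokenizer (no coreferee involved), which suggests why annotations computed before a merge end up pointing at the wrong tokens:

```python
import spacy

nlp = spacy.blank("en")  # tokenizer-only pipeline, no model download needed
doc = nlp("the big bad wolf is small, he is also bad")

with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[1:4])  # "big bad wolf" becomes a single token

# Every token after the merge shifts left by two positions, so token
# indices recorded before the merge (as coreferee's chain annotations are)
# no longer point at the tokens they were computed for.
assert doc[1].text == "big bad wolf"
assert doc[5].text == "he"
```

If the chains still hold pre-merge indices, resolve() would report whichever token now happens to sit at the stored position, which matches the "small" result above.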
Apologies if this is an inappropriate place to ask this question. Is there any way to bypass Coreferee not capturing coreferences that are unambiguously evident from the structure of a sentence? I suspect this step was taken for reasons of efficiency, and that I may be able to add a flag in a suitable location to achieve this.
With many thanks,
Andy
After following the installation instructions for Holmes Extractor, I run into the following error:
"spaCy model en_core_web_trf version 3.4.1 is not supported by Coreferee. Please examine /coreferee/lang/en/config.cfg to see the supported models/versions."
However, if I try:
python -m spacy download en_core_web_trf==3.4.0
I get the error:
✘ No compatible package found for 'en_core_web_trf==3.4.0' (spaCy v3.4.1)
Hints on how to solve this issue, i.e. how to uninstall/install a set of libraries that work together, would be highly appreciated :-)
Thanks,
Is it possible to create a release allowing Python 3.11? We would like to upgrade everything to the latest versions if possible.
I followed the instructions, but it doesn't work. I'm getting the same error every time, and there is nothing left that I haven't tried in order to fix it.
Here is my spacy info:
And here are my environment's versions:
coreferee==1.4.1
coreferee-model-en @ https://github.com/richardpaulhudson/coreferee/raw/master/models/coreferee_model_en.zip#sha256=aec5662b4af38fbf4b8c67e4aada8b828c51d4a224b5e08f7b2b176c02d8780f
spacy==3.4.4
spacy-alignments==0.9.0
spacy-experimental==0.6.2
spacy-legacy==3.0.12
spacy-loggers==1.0.4
spacy-transformers==1.1.9
What is wrong here? It is so annoying. I really need this module.
Complete error below:
✘ spaCy model en_coreference_web_trf version 3.4.0a2 is not supported
by Coreferee. Please examine /coreferee/lang/en/config.cfg to see the supported
models/versions.
---------------------------------------------------------------------------
ModelNotSupportedError Traceback (most recent call last)
Cell In[5], line 2
1 nlp_corr = spacy.load("en_coreference_web_trf")
----> 2 nlp_corr.add_pipe('coreferee')
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/spacy/language.py:801, in Language.add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate)
793 if not self.has_factory(factory_name):
794 err = Errors.E002.format(
795 name=factory_name,
796 opts=", ".join(self.factory_names),
(...)
799 lang_code=self.lang,
800 )
--> 801 pipe_component = self.create_pipe(
802 factory_name,
803 name=name,
804 config=config,
805 raw_config=raw_config,
806 validate=validate,
807 )
808 pipe_index = self._get_pipe_index(before, after, first, last)
809 self._pipe_meta[name] = self.get_factory_meta(factory_name)
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/spacy/language.py:680, in Language.create_pipe(self, factory_name, name, config, raw_config, validate)
677 cfg = {factory_name: config}
678 # We're calling the internal _fill here to avoid constructing the
679 # registered functions twice
--> 680 resolved = registry.resolve(cfg, validate=validate)
681 filled = registry.fill({"cfg": cfg[factory_name]}, validate=validate)["cfg"]
682 filled = Config(filled)
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/confection/__init__.py:728, in registry.resolve(cls, config, schema, overrides, validate)
719 @classmethod
720 def resolve(
721 cls,
(...)
726 validate: bool = True,
727 ) -> Dict[str, Any]:
--> 728 resolved, _ = cls._make(
729 config, schema=schema, overrides=overrides, validate=validate, resolve=True
730 )
731 return resolved
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/confection/__init__.py:777, in registry._make(cls, config, schema, overrides, resolve, validate)
775 if not is_interpolated:
776 config = Config(orig_config).interpolate()
--> 777 filled, _, resolved = cls._fill(
778 config, schema, validate=validate, overrides=overrides, resolve=resolve
779 )
780 filled = Config(filled, section_order=section_order)
781 # Check that overrides didn't include invalid properties not in config
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/confection/__init__.py:849, in registry._fill(cls, config, schema, validate, resolve, parent, overrides)
846 getter = cls.get(reg_name, func_name)
847 # We don't want to try/except this and raise our own error
848 # here, because we want the traceback if the function fails.
--> 849 getter_result = getter(*args, **kwargs)
850 else:
851 # We're not resolving and calling the function, so replace
852 # the getter_result with a Promise class
853 getter_result = Promise(
854 registry=reg_name, name=func_name, args=args, kwargs=kwargs
855 )
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/coreferee/manager.py:140, in CorefereeBroker.__init__(self, nlp, name)
138 self.nlp = nlp
139 self.pid = os.getpid()
--> 140 self.annotator = CorefereeManager().get_annotator(nlp)
File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/coreferee/manager.py:132, in CorefereeManager.get_annotator(nlp)
118 error_msg = "".join(
119 (
120 "spaCy model ",
(...)
129 )
130 )
131 msg.fail(error_msg)
--> 132 raise ModelNotSupportedError(error_msg)
ModelNotSupportedError: spaCy model en_coreference_web_trf version 3.4.0a2 is not supported by Coreferee. Please examine /coreferee/lang/en/config.cfg to see the supported models/versions.
Hi @richardpaulhudson, many thanks for your great work on coreferee! I'm working with @Pantalaymon on a project that does French coreference resolution, and we are trying to get coreferee working with spaCy 3.2 for better quality results. I believe @Pantalaymon has made great progress with training a new French coreferee model with spaCy 3.2 (with new rules) and has seen improved results on the benchmarks, so we were wondering, how would the upcoming versions of coreferee handle new PRs and updates?
For now there's an open PR on the old coreferee repo -- however, the commit history and refs from that repo wouldn't transfer to this new repo -- is there a reason you didn't transfer ownership of the msg-systems repo to the explosion org so that the commit histories would carry over? Please advise on next steps when you can, thanks!
Hello,
I encountered an issue while trying to install the Chinese coreference resolution model for coreferee. Following the documentation, I attempted to download the model file from the following link:
https://github.com/richardpaulhudson/coreferee/raw/master/models/coreferee_model_zh.zip
However, this link returns a 404 error, and I am unable to download the model file.
Could you please provide an updated download link or installation instructions? If there are any new model files or alternative solutions available, could you share the relevant information? Thank you very much!
Best regards,
rtc
ModuleNotFoundError: No module named 'coreferee_model_en.trf_3_0_0'
Hi,
I am interested in annotating my own custom dataset for fine-tuning an existing pretrained model.
I have tried reviewing some of the publicly available datasets, but I am a little confused, as they are not all similar to each other. Can you suggest some basic guidelines for annotation? It would be a great help.
Thanks in advance!
I use this code:
import coreferee
import spacy
nlp = spacy.load("en_core_web_trf")
nlp.add_pipe("coreferee") <<<
I got this error:
*** ValueError: [E002] Can't find factory for 'coreferee' for language English (en). This usually happens when spaCy calls nlp.create_pipe with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator @Language.component (for function components) or @Language.factory (for class components).
I followed the instructions as defined here: https://github.com/explosion/coreferee#version-131 and installed 'spacy-transformers'.
reuven
Hello,
I am unable to test coreferee with spaCy 3.7:
✘ spaCy model fr_core_news_lg version 3.7.0 is not supported by
Coreferee. Please examine /coreferee/lang/fr/config.cfg to see the supported
models/versions.
Are there any plans to support spaCy 3.7 with the en_core_web_lg and fr_core_news_lg models?
Thanks so much,
Yann
To explain better: I want to get a confidence percentage for a specific coreference. I am sorry if that feature is already present in the code, but I dug through it and could not find something that I could use myself.
Examples:
"Peter and Jane went to the park. He forgot to bring his phone."
Mention: "He", Reference: "Peter", Confidence: "92%"
"Peter went to the park. He forgot to bring his phone."
Mention: "He", Reference: "Peter", Confidence: "99%"
Hi Richard,
I've been doing some tests comparing the performance of neuralcoref (on an older version of Python/spaCy) with coreferee for English, and I'm noticing some rather concerning degradations in performance with newer versions of coreferee. I'm not ready to share the comparison report for neuralcoref/coreferee yet -- the data and tests need to be cleaned up -- but in the interim, I've been inspecting coreferee's coreference chains across the following versions (both using coreferee 1.2.0): spaCy 3.2.4 and spaCy 3.3.x, each with en_core_web_md and en_core_web_lg.
I tried generating chains for the below sentences:
Victoria Chen, a well-known business executive, says she is 'really honoured' to see her pay jump to $2.3 million, as she became MegaBucks Corporation's first female executive. Her colleague and long-time business partner, Peter Zhang, says he is extremely pleased with this development. The firm's CEO, Lawrence Willis will be onboarding the new CFO in a few months. He said he is looking forward to the whole experience.
en_core_web_md (spaCy 3.2.4)
▶ python test_coref.py
Loaded spaCy language model: en_core_web_md
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: Zhang(47), he(50)
3: Willis(64), He(76), he(78)
None
en_core_web_md (spaCy 3.3.x)
▶ python test_coref.py
Loaded spaCy language model: en_core_web_md
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: Zhang(47), he(50), He(76), he(78)
None
en_core_web_lg (spaCy 3.2.4)
▶ python test_coref.py
Loaded spaCy language model: en_core_web_lg
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: colleague(38), he(50), He(76), he(78)
None
en_core_web_lg (spaCy 3.3.x)
▶ python test_coref.py
Loaded spaCy language model: en_core_web_lg
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: colleague(38), he(50)
3: Willis(64), He(76), he(78)
None
In both cases, the en_core_web_lg
language model returns a result that's considerably worse than the en_core_web_md
model, which is itself quite surprising. I'd expect the dependency parse from the large model to be far superior to the medium model's, and so it should not produce such a noticeably different result. As can be seen, the en_core_web_lg
model is missing entire named entities altogether, and the total number of results in the chain is lower than what we get from the medium model.
The best result (in which we capture all three named entities -- "Chen", "Zhang" and "Willis" in the coref chain) is obtained with the smallest (en_core_web_md
) model in spaCy 3.2.4, and not the newest version with the largest model, which is rather counter-intuitive.
I understand that the most general guideline you can offer is that these sorts of examples are single cases, and that statistically the models should be more or less comparable. But that's definitely not true in the case of my own private tests (which I will attempt to share shortly): in those tests, in which I perform a range of tasks, including parsing, named entity recognition, coreference resolution and gender identification across a dataset of ~100 news articles, I am noticing a recognizable drop in coreferee performance across both these dimensions.
Again, I fully understand that the one-off example I gave above might seem that it's indeed one-off, but I was wondering if there's something you've noticed in terms of accuracy numbers in your tests. My concern is that the rules for coreferee's English version are not carrying over well with the new spaCy models, particularly in v3.3.x, potentially due to whatever internal changes were made to the language models in the recent release.
The issue comparing neuralcoref (whose performance also seems to be better than coreferee's) is a totally different one, and is unrelated to the one I've posted here. I'll do my best to clean up my comparison tests of neuralcoref and coreferee and document them (I'm currently trying to separate the different functions I'm performing for my own project, so that I document only the coreference resolution results as clearly as possible). Looking forward to hearing your thoughts!
Hello,
I would like to use coreferee with a custom spaCy model that is a slight variation of en_core_web_lg version 3.4.1 (it's basically the same model, trained to recognize one additional entity type using the standard spaCy training process).
Trying to add coreferee to the trained pipeline with .add_pipe fails with a model-version-not-supported error. The readme says I'm supposed to train a new coreferee model for this custom model; however, I would like to essentially reuse the en_core_web_lg coreferee model, as my custom model is very similar. Is there any way to just lift that coreferee model for use with a custom spaCy model?
Hi,
Would you consider mirroring the pre-trained models to Hugging Face for faster downloads?
Thank you!
What is the primary difference, if any, between Coreferee and NeuralCoref? I realize one specific difference is that NeuralCoref is English-only. I'm just trying to understand any limitations or deltas between the two implementations.
NeuralCoref: https://github.com/huggingface/neuralcoref
Is it possible to host the en model on conda or PyPI so that I can list it in a .yml, similar to the spaCy models? Basically, I'm just trying to do this:
name: dev
channels:
  - conda-forge
  - defaults
dependencies:
  - pip:
    - spacy
    - coreferee
    - spacy-model-en_core_web_lg
    - spacy-model-en_core_web_trf
    - coreferee-model-en
I can't do the command line install in my setup. Thank you!
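Until the model is mirrored, one possible workaround (an untested sketch): the model archive that `python -m coreferee install en` fetches is itself a pip-installable zip hosted in the repo, so its URL can be listed directly in the pip section:

```yaml
name: dev
channels:
  - conda-forge
  - defaults
dependencies:
  - pip:
    - spacy
    - coreferee
    - spacy-model-en_core_web_lg
    - spacy-model-en_core_web_trf
    # direct install of the model archive, bypassing the command-line step
    - https://github.com/richardpaulhudson/coreferee/raw/master/models/coreferee_model_en.zip
```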
Hi,
Thanks for writing this library! I'm trying to replace pronouns with proper nouns (except in quotations). Is there an example on how to do this?
Thank you!
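A minimal sketch of one way to approach this with `doc._.coref_chains.resolve()` (which returns the referent tokens for an anaphoric token, or None otherwise). The commented-out usage, including the model name, is an assumption, and quotation handling would need an extra check of your own:

```python
def rewrite_with_referents(doc):
    # doc must come from a pipeline that includes the 'coreferee' component.
    # resolve() returns a list of referent tokens for an anaphoric token,
    # or None for tokens that are not anaphors.
    pieces = []
    for token in doc:
        referents = doc._.coref_chains.resolve(token)
        if referents:
            pieces.append(" and ".join(t.text for t in referents))
        else:
            pieces.append(token.text)
    return " ".join(pieces)

# Typical usage (hypothetical model choice; skipping tokens inside quote
# marks would handle the "except in quotations" requirement):
#   import spacy, coreferee
#   nlp = spacy.load("en_core_web_lg")
#   nlp.add_pipe("coreferee")
#   print(rewrite_with_referents(nlp("Peter went out. He forgot his phone.")))
```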
Thank you for your contributions to the NLP field. I would like to know more about the 1.4.2 model performance, such as the meaning of "Anaphors in 20%" and "Accuracy (%)", as well as how to align coreferee's output format with a corpus. Since "A mention within Coreferee does not consist of a span", the output of coreferee seems incompatible with the answer keys of most corpora (I tried OntoNotes and LitBank). For example, the answer key is "Gaza Strip", but the output of coreferee is "Gaza". Thank you!
Hi there and thanks for sharing this incredible model!
I plugged in the following example and was surprised not to see a chain for the first person. I would expect the instances of "my" to eventually chain with "I" later in the text. I am very new to coreference, so I'm curious why this might be happening. Thanks for any insight you may provide.
"Thank you for your videos. The situation with my mom is now that she is older and has thinner skin she gets really cold. She doesn’t believe this is why she gets colder. She insists that we are the only people that has a cold house. Our temp is set around 72 or 73 degrees. She says everyone else keeps there house temp at 80 degrees and she insists that we kept the house temp at 80 degrees year round for our whole lives. Ex: When my parents were in their 30’s and I was a young child she claims our house temp was always set at 80 degrees. If you tell her it was not and that she gets colder now because of her age she gets really mad. I should also mention this is not a once in a while conversation she has. She talks about this multiple times every day."
0: mom(10), she(14), she(21), She(27), she(34), She(39), She(64), she(76)
1: parents(100), their(103)
2: 30(104), it(129)
3: child(111), she(112), her(128), she(134), her(140), she(142), she(161), She(165)
All the best,
David
Hello,
First, great project and good job!
I'm trying to use coreferee on French data. I tried the public French example from your documentation.
I get the following error when running it with Python 3.8.13.
⚠ Unexpected error in Coreferee annotating document, skipping ....
⚠ <class 'TypeError'>
⚠ unsupported operand type(s) for |: 'dict' and 'dict'
File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/manager.py", line 144, in __call__
self.annotator.annotate(doc)
File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/annotation.py", line 377, in annotate
self.rules_analyzer.initialize(doc)
File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/rules.py", line 314, in initialize
if self.language_independent_is_potential_anaphoric_pair(
File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/rules.py", line 474, in language_independent_is_potential_anaphoric_pair
if self.is_potential_coreferring_noun_pair(
File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/lang/fr/language_specific_rules.py", line 1276, in is_potential_coreferring_noun_pair
new_reverse_entity_noun_dictionary = {
The error does not occur with Python 3.9.
Would it be possible to fix this problem? (The project I want to use it in is still on Python 3.8...)
Best,
Jérôme