Comments (6)
Thank you for showing some impacts. This is helpful.
from spacy-stanza.
That's an interesting problem. For reference, it looks like your text is using the Unicode hyphen and no-break space characters.
- Unicode Character 'HYPHEN' (U+2010)
- Unicode Character 'NO-BREAK SPACE' (U+00A0)
spaCy (and I guess Stanza) don't have any special treatment of these characters, which means they can end up being treated differently than their ASCII equivalents. If you're working with English text and don't have to worry about losing diacritics then maybe you can preprocess your text with unidecode.
If you need Unicode characters in general but don't want to keep these particular ones, then I would recommend doing a simple string replace on your input text, like this:
text = text.replace("\u2010", "-")
text = text.replace("\u00a0", " ")
You are right, I used 'utf-8' encoding while reading the .txt file. The problem is not limited to one or two Unicode characters. I am working on a bigger dataset, so as you can imagine there will be many Unicode characters. Therefore, I need to replace all of them. In that case, could you help me further?
OK, in that case maybe unidecode can help you. Is all your text in English? Is it OK if you strip all diacritics, so that "Erdős Pál" becomes "Erdos Pal"? If so then you can just do this:
# set up spaCy first
from unidecode import unidecode
text = ... # your text goes here
doc = nlp(unidecode(text))
If that's not OK, you'll need to describe your data in more detail.
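If it helps, here's a quick way to inspect exactly which non-ASCII characters your text contains before deciding how to handle them (a minimal sketch using only the standard library; the sample string is made up):

```python
from collections import Counter
import unicodedata

# Hypothetical sample; in practice, load your own text here
text = "Add 250 µl Buffer\u00a0at 50°C \u2010 mix well."

# Count every non-ASCII character and look up its official Unicode name
counts = Counter(ch for ch in text if ord(ch) > 127)
for ch, n in counts.most_common():
    print(f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}: {n}")
```

Running this over your whole dataset would give you a complete inventory of the characters you need to decide about, rather than discovering them one at a time.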
My complete dataset is in English. You can have a look at the dataset for more clarity.
https://github.com/chaitanya2334/WLP-Dataset
Thanks for the link! It's much easier to give advice when the data is open like this.
Here are some example sentences:
Add 250 µl PB2 Lysis Buffer.
Centrifuge for 5 min at 11,000 x g at room temperature.
HB101 or strains of the JM series), perform a wash step with 500 µl PB4 Wash Buffer pre-warmed to 50°C.
Unfortunately it looks like the data has Unicode characters without clear ASCII equivalents. For example, unidecode would convert µl to ul, or 50°C to 50degC. That might actually be OK, since ul isn't otherwise a word, but you'd have to be careful, and it might make your output hard to understand in some cases.
Based on the sample data I've seen, while there are a number of Unicode characters, only a few like the hyphen or space would actually cause strange behavior in spaCy's tokenizer. Given that, I would first try making a list of characters and replacing them in preprocessing, and if that doesn't work, then try unidecode. If neither of those works, what I'd do next would depend on what the problem is.
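For the first option, a sketch of what that preprocessing could look like using str.maketrans; the character list here is just an illustrative starting point, not a complete inventory of your data:

```python
# Map each problematic Unicode character to an ASCII stand-in.
# Illustrative list only; extend it as you find more in your data.
REPLACEMENTS = {
    "\u2010": "-",   # HYPHEN -> ASCII hyphen-minus
    "\u2013": "-",   # EN DASH
    "\u00a0": " ",   # NO-BREAK SPACE -> plain space
    "\u2019": "'",   # RIGHT SINGLE QUOTATION MARK
}
TABLE = str.maketrans(REPLACEMENTS)

def preprocess(text: str) -> str:
    """Replace known troublesome characters before tokenization."""
    return text.translate(TABLE)

print(preprocess("pre\u2010warmed to 50°C\u00a0overnight"))
# -> pre-warmed to 50°C overnight
```

Characters not in the table (like the degree sign) pass through unchanged, so this approach only touches the characters you've explicitly decided are problems.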