Comments (3)
Really cool! I have actually worked on a small prototype that mimics the output of the current token classification models:
from flair.data import Sentence
from flair.models import SequenceTagger

tagger: SequenceTagger = SequenceTagger.load("dbmdz/flair-historic-ner-onb")

sentence: Sentence = Sentence(input_text)

# Also show scores for recognized NEs
tagger.predict(sentence, all_tag_prob=True, label_name="predicted")

entities = []
for span in sentence.get_spans("predicted"):
    idx = [token.idx for token in span.tokens]
    current_entity = {
        "entity_group": span.tag,
        "word": span.to_original_text(),
        "start": idx[0] - 1,  # Because Flair starts at 1
        "end": idx[-1],
        "score": span.score,
    }
    entities.append(current_entity)

response = {"entities": entities}
return jsonify(response)
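The `idx[0] - 1` adjustment is the key detail here: Flair's `Token.idx` is 1-based, while the `start`/`end` fields in the token-classification API response are 0-based. A minimal sketch of that conversion, using hypothetical token indices (not tied to Flair itself):

```python
# Flair-style 1-based indices for a hypothetical span covering
# the 3rd through 5th tokens of a sentence.
token_indices = [3, 4, 5]

entity_offsets = {
    "start": token_indices[0] - 1,  # shift the first index to 0-based
    "end": token_indices[-1],       # last 1-based index doubles as exclusive end
}

print(entity_offsets)  # {'start': 2, 'end': 5}
```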
Bookmark for token classification API: https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html#named-entity-recognition-ner-task
from api-inference-community.
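For comparison, the linked detailed-parameters page describes the hosted NER endpoint as a plain POST with an `{"inputs": text}` payload. A minimal sketch of assembling such a request (the model id is reused from the snippet above, the token `hf_xxx` is a placeholder, and nothing is actually sent here):

```python
API_URL = "https://api-inference.huggingface.co/models/dbmdz/flair-historic-ner-onb"

def build_ner_request(text: str, api_token: str):
    """Assemble url, headers, and payload for a hosted-inference NER call."""
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {"inputs": text}
    return API_URL, headers, payload

url, headers, payload = build_ner_request("Wien ist die Hauptstadt von Österreich.", "hf_xxx")
# Actually sending it (needs the `requests` package and a real token) would be:
#   entities = requests.post(url, headers=headers, json=payload).json()
# which returns a list of {"entity_group", "word", "start", "end", "score"} dicts.
print(payload)  # {'inputs': 'Wien ist die Hauptstadt von Österreich.'}
```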
Thanks! Yeah, it looks pretty easy!
We'll discuss internally whether to implement this here or directly in our api-inference repo (which hosts the transformers model inference).
This was implemented in our api-inference repo (@Narsil @n1t0), so closing this here.
Related Issues (20)
- Doesn't work with the bart-large-cnn model HOT 1
- What is the ratelimit for inference api for pro users? HOT 1
- Hosted Inference Api, all models error 422 HOT 6
- pydantic.errors.PydanticImportError: `pydantic:ConstrainedFloat` has been removed in V2. HOT 4
- How do we use the detailed Parameters for the api? HOT 1
- Many of the docker images seem to be out of sync with latest inference community version HOT 7
- Update docker images to latest version of api-inference-community version HOT 2
- meta-llama/Llama-2-70b-chat-hf Inference API shows incomplete output HOT 1
- About using Hosted Inference API
- Proper parameters for `HuggingFaceM4/idefics-9b-instruct` HOT 1
- An exception occurs when running the NER model. HOT 1
- Return max_input_length if sequence is too long (NLP tasks) HOT 9
- No image-to-text task in pipelines!
- [Bug] Audio task accept headers are not respected HOT 2
- any pipeline using huggingface_hub.model_info is not offline compatible
- Adding End-Of-Generation-Token parameter for text generation Inference API
- Bumping docker version of SpeechBrain? HOT 1
- [Bug] Multiple Image Outputs are returned in a single byte buffer