edenai / edenai-apis

Eden AI: simplify the use and deployment of AI technologies by providing a unique API that connects to the best possible AI engines

Home Page: https://www.edenai.co/

License: Apache License 2.0

Python 100.00%
aggregator ai ai-as-a-service api computer-vision document-parsing image-processing machine-translation natural-language-processing nlp ocr optical-character-recognition pre-trained-model python speech-recognition speech-to-text text-to-speech video-recognition

edenai-apis's Introduction


Eden AI Logo


Eden AI APIs

Eden AI aims to simplify the use and deployment of AI technologies by providing a unique API that connects to all the best AI engines.

With the rise of AI as a Service, many companies provide off-the-shelf trained models that you can access directly through an API. These companies are either tech giants (Google, Microsoft, Amazon) or smaller, more specialized companies, and there are hundreds of them. Some of the best known are DeepL (translation), OpenAI (text and image analysis), and AssemblyAI (speech analysis).

We're bringing the best of them together in one place!

➡️ Read more about it ...

Why aren't you regrouping open-source models (instead of proprietary APIs) into one repo? Because it doesn't make sense to deploy and maintain large PyTorch (or other framework) AI models in every solution that needs AI capabilities, especially for document parsing, image and video moderation, or speech recognition. Using APIs makes much more sense. Deployed open-source models are still included through different APIs, such as Hugging Face and other equivalents.

EdenAI Gif

Package Installation

You can install the package with pip:

pip install git+https://github.com/edenai/edenai-apis 

Quick Start

To make calls to the different AI providers, first add the API keys/secrets for the provider you will use in edenai_apis.api_keys.<provider_name>_settings_templates.json, then rename the file to <provider_name>_settings.json.
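As an illustration, a provider settings file is typically a small JSON object holding credentials. The field name below is purely hypothetical; keep the keys already present in the provider's _settings_templates.json file:

```json
{
  "api_key": "your-provider-api-key"
}
```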

When that's done, you can start using edenai_apis directly. Here is a quick example using the Microsoft and IBM keyword extraction APIs:

from edenai_apis import Text

keyword_extraction = Text.keyword_extraction("microsoft")
microsoft_res = keyword_extraction(language="en", text="as simple as that")

# Provider's response
print(microsoft_res.original_response)

# Standardized version of Provider's response
print(microsoft_res.standardized_response)

for item in microsoft_res.standardized_response.items:
    print(f"keyword: {item.keyword}, importance: {item.importance}")


# What if we want to try another provider?
ibm_kw = Text.keyword_extraction("ibm")
ibm_res = ibm_kw(language="en", text="same api & unified inputs for all providers")


# `original_response` will obviously differ between providers, and you will
# have to check each provider's documentation to know how to parse it
print(ibm_res.original_response)

# We can however easily parse `standardized_response`
# the same way as we did for microsoft:
for item in ibm_res.standardized_response.items:
    print(f"keyword: {item.keyword}, importance: {item.importance}")
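The value of the standardized response is that provider-specific payloads are mapped onto one shared schema. A minimal sketch of that idea in plain Python (the payload shapes and field names below are illustrative, not Eden AI's actual internals):

```python
# Illustrative only: map two hypothetical provider payloads to one schema.

def standardize_microsoft(raw):
    # Hypothetical Microsoft-style payload: {"keyPhrases": ["...", ...]}
    return [{"keyword": kw, "importance": None} for kw in raw["keyPhrases"]]

def standardize_ibm(raw):
    # Hypothetical IBM-style payload: {"keywords": [{"text": ..., "relevance": ...}]}
    return [{"keyword": k["text"], "importance": k["relevance"]} for k in raw["keywords"]]

ms_items = standardize_microsoft({"keyPhrases": ["api", "unified inputs"]})
ibm_items = standardize_ibm({"keywords": [{"text": "api", "relevance": 0.9}]})
# Both lists now share the same {"keyword", "importance"} shape,
# so downstream code can iterate over either one identically.
```

This is why the two `for item in ...standardized_response.items` loops above are byte-for-byte identical even though the underlying providers differ.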

Asynchronous features

If you need features such as speech-to-text or object extraction from videos, you will have to use asynchronous operations: you first make a call to launch an asynchronous job, which returns a job ID; you then use that ID in further calls to check the job status and retrieve the result once the job has finished.

from edenai_apis import Audio

provider = "google"  # could also be "assemblyai", "deepgram", "microsoft", etc.
stt_launch = Audio.speech_to_text_async__launch_job(provider)
stt_get_result = Audio.speech_to_text_async__get_job_result(provider)


res = stt_launch(
    file=your_file.wav,
    language="en",
    speakers=2,
    profanity_filter=False,
)

job_id = res.provider_job_id

res = stt_get_result(provider_job_id=job_id)
print(res.status)  # "pending" | "succeeded" | "failed"
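A typical consumption pattern is to poll until the job leaves the "pending" state. A generic sketch of that loop (independent of edenai_apis; `get_result` stands in for a callable like `stt_get_result`, and responses are modeled as plain dicts here):

```python
import time

def poll_until_done(get_result, job_id, interval=2.0, timeout=60.0):
    """Poll `get_result(provider_job_id=...)` until the status leaves 'pending'.

    Returns the final response, or raises TimeoutError if the job is still
    pending when `timeout` seconds have elapsed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        res = get_result(provider_job_id=job_id)
        if res["status"] != "pending":
            return res  # "succeeded" or "failed"
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still pending after {timeout}s")
```

In production you would likely add exponential backoff and treat "failed" explicitly, but the shape of the loop stays the same.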

Available Features & Providers

⚠️ You can find a list of all available features and providers here ⚠️

Contribute

We would love to have your contribution. Please follow our guidelines for adding a new AI provider's API or a new AI feature. You can check the package structure for more details on how it is organized. We use GitHub issues for tracking requests and bugs. For broader discussions, you can join our Discord.

Don’t want to create accounts for all providers and host the project by yourself?

You can create an account on Eden AI and get access to all the AI technologies and providers directly through our API.

Join the community!

Join our friendly community to improve your skills, focus on the integration of AI engines, get help using the Eden AI API, and much more!

Linkedin Medium

License

Apache License 2.0

edenai-apis's Issues

OpenAPI support

It would be nice if this API allowed me to generate a native SDK rather than calling the API directly. At my company, we use Node.js for our backend and Flutter/Angular for our frontend.

Tools worth considering:

Wrong endpoint DeepL using personal API key

When using a personal API key (only in this case) with DeepL for translation, it appears that the wrong endpoint is used:

{
  "deepl": {
    "error": {
      "message": "Deepl has returned an error: Wrong endpoint. Use https://api.deepl.com",
      "type": "ProviderException"
    },
    "status": "fail",
    "provider_status_code": 403,
    "own_api_keys": true,
    "cost": 0
  }
}

Any quick fix, please?

Thanks in advance!

Google Object Localizer results parsed incorrectly

Thanks for this tool!

There seems to be a problem parsing the response for an image detection API call with the google provider.

To reproduce

Google's documentation for Cloud Vision API "Detect multiple objects" has a live example (scroll to the bottom, "Try this method").

For the reference image, https://cloud.google.com/vision/docs/images/bicycle_example.png, it produces the following output:

  • Bicycle wheel (x2)
  • Bicycle
  • Picture frame
Correct (raw Google response):
{
  "responses": [
    {
      "localizedObjectAnnotations": [
        {
          "mid": "/m/01bqk0",
          "name": "Bicycle wheel",
          "score": 0.94234306,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.31524897,
                "y": 0.78658724
              },
              {
                "x": 0.44186485,
                "y": 0.78658724
              },
              {
                "x": 0.44186485,
                "y": 0.9692919
              },
              {
                "x": 0.31524897,
                "y": 0.9692919
              }
            ]
          }
        },
        {
          "mid": "/m/01bqk0",
          "name": "Bicycle wheel",
          "score": 0.9337022,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.50342137,
                "y": 0.7553652
              },
              {
                "x": 0.6289583,
                "y": 0.7553652
              },
              {
                "x": 0.6289583,
                "y": 0.9428141
              },
              {
                "x": 0.50342137,
                "y": 0.9428141
              }
            ]
          }
        },
        {
          "mid": "/m/0199g",
          "name": "Bicycle",
          "score": 0.8973106,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.31594256,
                "y": 0.66489404
              },
              {
                "x": 0.63338375,
                "y": 0.66489404
              },
              {
                "x": 0.63338375,
                "y": 0.9687162
              },
              {
                "x": 0.31594256,
                "y": 0.9687162
              }
            ]
          }
        },
        {
          "mid": "/m/06z37_",
          "name": "Picture frame",
          "score": 0.7171168,
          "boundingPoly": {
            "normalizedVertices": [
              {
                "x": 0.7882889,
                "y": 0.16610023
              },
              {
                "x": 0.9662418,
                "y": 0.16610023
              },
              {
                "x": 0.9662418,
                "y": 0.3178568
              },
              {
                "x": 0.7882889,
                "y": 0.3178568
              }
            ]
          }
        }
      ]
    }
  ]
}

When I pass the same image to Eden AI (with providers=google), the output contains duplicate items.

  • In the "google" JSON object, the *_min/*_max fields appear to be populated incrementally across four "copies" of each item. For each item, note that

    • the first "copy" has no logical size (x_min == x_max && y_min == y_max);
    • the second "copy" has no logical height (y_min == y_max);
    • the third and fourth "copies" are identical.
  • The "eden-ai" JSON object contains the redundant "copies" of each item.
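For reference, the expected reduction from Google's `normalizedVertices` to a single bounding box is a min/max over all four vertices, along these lines (a sketch of the expected behavior, not Eden AI's actual code):

```python
def bounding_box(normalized_vertices):
    """Reduce a list of {"x": ..., "y": ...} vertices to one x/y min/max box."""
    xs = [v["x"] for v in normalized_vertices]
    ys = [v["y"] for v in normalized_vertices]
    return {"x_min": min(xs), "x_max": max(xs), "y_min": min(ys), "y_max": max(ys)}

# First "Bicycle wheel" from the raw Google response above:
box = bounding_box([
    {"x": 0.31524897, "y": 0.78658724},
    {"x": 0.44186485, "y": 0.78658724},
    {"x": 0.44186485, "y": 0.9692919},
    {"x": 0.31524897, "y": 0.9692919},
])
# box == {"x_min": 0.31524897, "x_max": 0.44186485,
#         "y_min": 0.78658724, "y_max": 0.9692919}
```

Each annotation in the raw response should therefore yield exactly one item, so the correct output would contain 4 items, not 16.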

Eden AI output
{
  "google": {
    "status": "success",
    "items": [
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.31524897,
        "y_min": 0.78658724,
        "y_max": 0.78658724
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.44186485,
        "y_min": 0.78658724,
        "y_max": 0.78658724
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.44186485,
        "y_min": 0.78658724,
        "y_max": 0.9692919
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.44186485,
        "y_min": 0.78658724,
        "y_max": 0.9692919
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.50342137,
        "y_min": 0.7553652,
        "y_max": 0.7553652
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.6289583,
        "y_min": 0.7553652,
        "y_max": 0.7553652
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.6289583,
        "y_min": 0.7553652,
        "y_max": 0.9428141
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.6289583,
        "y_min": 0.7553652,
        "y_max": 0.9428141
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.31594256,
        "y_min": 0.66489404,
        "y_max": 0.66489404
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.63338375,
        "y_min": 0.66489404,
        "y_max": 0.66489404
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.63338375,
        "y_min": 0.66489404,
        "y_max": 0.9687162
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.63338375,
        "y_min": 0.66489404,
        "y_max": 0.9687162
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.7882889,
        "y_min": 0.16610023,
        "y_max": 0.16610023
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.9662418,
        "y_min": 0.16610023,
        "y_max": 0.16610023
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.9662418,
        "y_min": 0.16610023,
        "y_max": 0.3178568
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.9662418,
        "y_min": 0.16610023,
        "y_max": 0.3178568
      }
    ],
    "cost": 0.00225
  },
  "eden-ai": {
    "status": "success",
    "items": [
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.31524897,
        "y_min": 0.78658724,
        "y_max": 0.78658724
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.44186485,
        "y_min": 0.78658724,
        "y_max": 0.78658724
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.94234306,
        "x_min": 0.31524897,
        "x_max": 0.44186485,
        "y_min": 0.78658724,
        "y_max": 0.9692919
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.50342137,
        "y_min": 0.7553652,
        "y_max": 0.7553652
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.6289583,
        "y_min": 0.7553652,
        "y_max": 0.7553652
      },
      {
        "label": "Bicycle wheel",
        "confidence": 0.93370223,
        "x_min": 0.50342137,
        "x_max": 0.6289583,
        "y_min": 0.7553652,
        "y_max": 0.9428141
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.31594256,
        "y_min": 0.66489404,
        "y_max": 0.66489404
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.63338375,
        "y_min": 0.66489404,
        "y_max": 0.66489404
      },
      {
        "label": "Bicycle",
        "confidence": 0.89731073,
        "x_min": 0.31594256,
        "x_max": 0.63338375,
        "y_min": 0.66489404,
        "y_max": 0.9687162
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.7882889,
        "y_min": 0.16610023,
        "y_max": 0.16610023
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.9662418,
        "y_min": 0.16610023,
        "y_max": 0.16610023
      },
      {
        "label": "Picture frame",
        "confidence": 0.7171168,
        "x_min": 0.7882889,
        "x_max": 0.9662418,
        "y_min": 0.16610023,
        "y_max": 0.3178568
      }
    ]
  }
}

OCR endpoint not working

The endpoint for OCR is not working.

It throws a validation error saying the provider is not set, while within Guzzle it is clearly being set and passed.

Your live preview at https://docs.edenai.co/reference/ocr_ocr_create cannot test the case where the header is set to 'Content-Type' => 'multipart/form-data;'.

$client = new \GuzzleHttp\Client();

$response = $client->post('https://api.edenai.run/v2/ocr/ocr', [
    'multipart' => [
        [
            'name' => 'providers',
            'contents' => 'api4ai'
        ]
    ],
    'headers' => [
        'Authorization' => 'Bearer ' . $this->token,
        'Content-Type' => 'multipart/form-data;'
    ]
]);

Response

 Client error: `POST https://api.edenai.run/v2/ocr/ocr` resulted in a `400 Bad Request` response:
    {"error":{"type":"Invalid request","message":{"providers":["Please enter the name of the provider(s)"]}}}

Conflict in speech-to-text parametrization with provider_params and language selection

In order to use multi-language identification with Amazon, I want to enable the IdentifyMultipleLanguages argument. Therefore I do not set any language in data.

import requests
import json
 
headers = {"Authorization": "Bearer Your API KEY"}
 
url = "https://api.edenai.run/v2/audio/speech_to_text_async"
 
amazon_params = json.dumps({"amazon": {"IdentifyMultipleLanguages": True}})
data={"providers": "amazon", "provider_params": amazon_params}
 
files = {'file': open("multilingual_audio.mp3",'rb')}
 
response = requests.post(url, data=data, files=files, headers=headers)
data = response.json()

But it seems that not setting the language in data switches directly to language identification. In this particular case with Amazon, it appears to enable IdentifyLanguage rather than the desired IdentifyMultipleLanguages.

Non-specified provider parameters and example use for Cohere summarization

I would like to make use of additional provider parameters, notably, in this case, additional_command from Cohere for summarization. But it seems that it is not taken into account.

And in general, is it possible to use parameters that you did not specify in edenapis' payloads?
Because it seems like all non-specified parameters are discarded.

I do not know if there is a way to allow the use of non-specified provider parameters.
If possible, would you have a solution? It would allow the users more flexibility.

Thanks!

Error setting up a local python development environment

repro

  • Clone this repo
  • Create a new python virtualenv. These repro steps work for python3.9 and 3.10
  • run pip install -r requirements.txt

expected

  • The required packages are installed. We are off to the races.

actual

  • Process exited with error. See below.
error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [37 lines of output]
      /Users/beijbom/.virtualenvs/edenai9/lib/python3.9/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
      !!
      
              ********************************************************************************
              The license_file parameter is deprecated, use license_files instead.
      
              By 2023-Oct-30, you need to update your project and remove deprecated calls
              or your builds will no longer be supported.
      
              See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
              ********************************************************************************
      
      !!
        parsed = self.parsers.get(option_name, lambda x: x)(value)
      running egg_info
      creating /private/var/folders/59/565v8hvn3_db0sqz6w0f8snh0000gn/T/pip-pip-egg-info-ll2022vk/psycopg2.egg-info
      writing /private/var/folders/59/565v8hvn3_db0sqz6w0f8snh0000gn/T/pip-pip-egg-info-ll2022vk/psycopg2.egg-info/PKG-INFO
      writing dependency_links to /private/var/folders/59/565v8hvn3_db0sqz6w0f8snh0000gn/T/pip-pip-egg-info-ll2022vk/psycopg2.egg-info/dependency_links.txt
      writing top-level names to /private/var/folders/59/565v8hvn3_db0sqz6w0f8snh0000gn/T/pip-pip-egg-info-ll2022vk/psycopg2.egg-info/top_level.txt
      writing manifest file '/private/var/folders/59/565v8hvn3_db0sqz6w0f8snh0000gn/T/pip-pip-egg-info-ll2022vk/psycopg2.egg-info/SOURCES.txt'
