
Helicone


Open-source observability platform for LLMs

Helicone is an open-source observability platform for large language models (LLMs). It offers the following features:

  • ๐Ÿ“ Logs all of your requests to OpenAI in a user-friendly UI

  • ๐Ÿ’พ Caching, custom rate limits, and retries

  • ๐Ÿ“Š Track costs and latencies by users and custom properties

  • ๐ŸŽฎ Every log is a playground: iterate on prompts and chat conversations in a UI

  • ๐Ÿš€ Share results and collaborate with your friends or teammates

  • ๐Ÿ”œ (Coming soon) APIs to log feedback and evaluate results

Quick Use ⚡️

Get your API key by signing up here.

export HELICONE_API_KEY=<your API key>
pip install helicone

from helicone.openai_proxy import openai

response = openai.Completion.create(
	model="text-davinci-003",
	prompt="What is Helicone?",
	user="[email protected]",
	# Optional Helicone features:
	cache=True,
	properties={"conversation_id": 12},
	rate_limit_policy={"quota": 100, "time_window": 60, "segment": "user"}
)

👉 Then view your logs at Helicone.

More resources

Local Setup 💻

Helicone's cloud offering is deployed on Cloudflare, ensuring the lowest-latency add-on to your API requests.

To get started locally, you will run Helicone's five services:

  • Web: Frontend Platform (Next.js)
  • Worker: Proxy & Async Logging (Cloudflare Workers)
  • Jawn: Dedicated Server for serving Web (Express)
  • Supabase: Application Database and Auth
  • ClickHouse: Analytics Database

If you have any questions, contact [email protected] or join our Discord.

Install Wrangler and Yarn

nvm install 18.11.0
nvm use 18.11.0
npm install -g wrangler
npm install -g yarn

Install Supabase

brew install supabase/tap/supabase

Install and set up ClickHouse

# This will start clickhouse locally
python3 clickhouse/ch_hcone.py --start

Install and set up MinIO

# Install minio
python3 -m pip install minio

# Start minio
python3 minio_hcone.py --restart

# Dashboard will be available at http://localhost:9001
# Default credentials:
# Username: minioadmin
# Password: minioadmin

Run all services

cd web

# start supabase to log all the db stuff...
supabase start

# start frontend
yarn
yarn dev

# start workers (for proxying, async logging and some API requests)
# in another terminal
cd worker
yarn
chmod +x run_all_workers.sh
./run_all_workers.sh

# start jawn (for serving the FE and handling API requests)
# in another terminal
cd valhalla/jawn
cp .env.example .env
yarn && yarn dev

# Make your request to localhost
curl --request POST \
  --url http://127.0.0.1:8787/v1/chat/completions \
  --header 'Authorization: Bearer <KEY>' \
  --data '{
	"model": "gpt-3.5-turbo",
	"messages": [
		{
			"role": "user",
			"content": "Can you give me a random number?"
		}
	],
	"temperature": 1,
	"max_tokens": 7
}'

# Now you can go to localhost:3000 and create an account and see your request.
# When creating an account on localhost, you will automatically be signed in.

Set up the .env file

Make sure your .env file is at web/.env. Here is an example:

NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=""
STRIPE_SECRET_KEY=""
NEXT_PUBLIC_HELICONE_BILLING_PORTAL_LINK=""
NEXT_PUBLIC_HELICONE_CONTACT_LINK="https://calendly.com/d/x5d-9q9-v7x/helicone-discovery-call"
STRIPE_PRICE_ID=""
STRIPE_STARTER_PRICE_ID=""
STRIPE_ENTERPRISE_PRODUCT_ID=""
STRIPE_STARTER_PRODUCT_ID=""
DATABASE_URL="postgresql://postgres:postgres@localhost:54322/postgres"
NEXT_PUBLIC_SUPABASE_ANON_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0"
NEXT_PUBLIC_SUPABASE_URL="http://localhost:54321"
SUPABASE_URL="http://localhost:54321"
SUPABASE_SERVICE_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU"
NEXT_PUBLIC_HELICONE_JAWN_SERVICE="http://localhost:8585"

Community 🌍

Learn this repo with Onboard AI

learnthisrepo.com/helicone

Supported Projects

  • nextjs-chat-app (Docs)
  • langchain (Docs)
  • langchainjs (Docs)
  • ModelFusion (Docs)

Contributing

We are extremely open to contributors on documentation, integrations, and feature requests.

Update Cost Data

  1. Add new cost data to the costs/src/ directory. If a provider folder already exists, add to its index.ts; if not, create a new folder with the provider name and an index.ts that exports a cost object.

    Example:

    File name: costs/src/anthropic/index.ts

    export const costs: ModelRow[] = [
      {
        model: {
          operator: "equals",
          value: "claude-instant-1",
        },
        cost: {
          prompt_token: 0.00000163,
          completion_token: 0.0000551,
        },
      },
    ];

    We can match in 3 ways:

    • equals: The model name must be exactly the same as the value
    • startsWith: The model name must start with the value
    • includes: The model name must include the value

    Use whichever operator is most appropriate for the model.

    The cost object is the cost per token for the prompt and the completion. (A sketch of how such a match could be evaluated follows these steps.)

  2. Import the new cost data into src/providers/mappings.ts and add it to the providers array

    Example:

    File name: src/providers/mappings.ts

    import { costs as anthropicCosts } from "./providers/anthropic";
    
    // 1. Add the pattern for the API so it is a valid gateway.
    const anthropicPattern = /^https:\/\/api\.anthropic\.com/;
    
    // 2. Add Anthropic pattern, provider tag, and costs array from the generated list
    export const providers: {
      pattern: RegExp;
      provider: string;
      costs?: ModelRow[];
    }[] = [
      // ...
      {
        pattern: anthropicPattern,
        provider: "ANTHROPIC",
        costs: anthropicCosts,
      },
      // ...
    ];
  3. Run yarn test -- -u in the costs/ directory to update the snapshot tests

  4. Run yarn copy in the costs/ directory to copy the cost data into other directories
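
A minimal sketch of how the three match operators could be evaluated against an incoming model name (the findCost helper and the exact ModelRow shape are assumptions based on the excerpt above, not the repo's actual code):

type ModelRow = {
  model: { operator: "equals" | "startsWith" | "includes"; value: string };
  cost: { prompt_token: number; completion_token: number };
};

// Hypothetical lookup: return the cost entry whose matcher accepts the model name.
function findCost(rows: ModelRow[], modelName: string): ModelRow["cost"] | undefined {
  const row = rows.find(({ model }) => {
    switch (model.operator) {
      case "equals":
        return modelName === model.value;
      case "startsWith":
        return modelName.startsWith(model.value);
      case "includes":
        return modelName.includes(model.value);
    }
  });
  return row?.cost;
}

// e.g. findCost(costs, "claude-instant-1")?.prompt_token === 0.00000163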

License

Helicone is licensed under the Apache v2.0 License.


Contributors

adaptive, andrewtran10, andyscho, asim-shrestha, barakoshri, bellardia, beydogan, chitalian, colegottdank, dapama, dmwyatt, fauh45, flexchar, fredguth, grvydev, h4r5h4, handotdev, hkd987, joshcolts18, krrishdholakia, levankvirkvelia, lgrammel, linalam, maamalama, scottmktn, skrish13, umuthopeyildirim, use-tusk[bot], waynehamadi, yashkarthik


Helicone's Issues

Request Table Model field is empty

The Request Table uses the request body for the model, but we should use the response body; the issue is that the request body has the engine key rather than the model key. A sketch of the fallback is below.
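
A minimal sketch of the intended fallback, assuming plain JSON request/response bodies (the resolveModel helper is illustrative, not the actual table code):

// Hypothetical model extraction: prefer the response body, fall back to the request.
function resolveModel(requestBody: any, responseBody: any): string {
  return responseBody?.model ?? requestBody?.model ?? requestBody?.engine ?? "unknown";
}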

Usage gauge incorrect

Thanks for the awesome work in this service! Have found it immensely helpful as we start testing our application with users.

I noticed that #309 introduced increased limits, but the gauge is still counting against the old limit. I'm sure this is going to be fixed but just wanted you to be aware 😁


Notifications

  • Get email notifications when your app errors or OpenAI is down.

Passing user in OpenAI params doesn't work, but headers work

The docs show two methods for passing in a user.
This method, when tested, does not work (the user is not stored in the request):

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "How do I log users?",
  user: "[email protected]",
});

This method works 👍 (the user is stored in the request):

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  basePath: "https://oai.hconeai.com/v1",
  baseOptions: {
    headers: {
      "Helicone-User-Id": "[email protected]",
    },
  },
});

Prompt Injection detection and alerting

We can compute a similarity score between the response and the initial prompt to estimate the likelihood of a prompt injection, and flag anything over some default threshold (80%?). See the sketch below.
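
A minimal sketch of that flagging logic, assuming embedding vectors for the prompt and response are already available (how the embeddings are produced is left out, and the threshold is the one proposed above):

// Hypothetical injection check via cosine similarity of embeddings.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function flagPossibleInjection(promptVec: number[], responseVec: number[], threshold = 0.8): boolean {
  return cosineSimilarity(promptVec, responseVec) >= threshold;
}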

Allow proxying to custom LLM APIs

Currently, Helicone only allows people to proxy to the following services:

private validateApiConfiguration(api_base: string | undefined): boolean {
  const openAiPattern = /^https:\/\/api\.openai\.com\/v\d+\/?$/;
  const anthropicPattern = /^https:\/\/api\.anthropic\.com\/v\d+\/?$/;
  const azurePattern = /^(https?:\/\/)?([^.]*\.)?(openai\.azure\.com|azure-api\.net)(\/.*)?$/;
  const localProxyPattern = /^http:\/\/127\.0\.0\.1:\d+\/v\d+\/?$/;
  const heliconeProxyPattern = /^https:\/\/oai\.hconeai\.com\/v\d+\/?$/;
  // ... (rest of the method omitted in this issue)

However, there are many other OpenAI-compatible services, and people are building OpenAI interfaces to open-source models like LLaMA and company, so Helicone could provide metrics without any code modifications. (A sketch of a more permissive check is below.)
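
A hedged sketch of what a more permissive check could look like, merging user-configured gateways into the built-in allowlist (the customPatterns parameter is an illustration, not an existing option):

// Hypothetical extension: accept any API base matched by built-in or user-supplied patterns.
const builtInPatterns: RegExp[] = [
  /^https:\/\/api\.openai\.com\/v\d+\/?$/,
  /^https:\/\/api\.anthropic\.com\/v\d+\/?$/,
];

function isAllowedApiBase(apiBase: string, customPatterns: RegExp[] = []): boolean {
  return [...builtInPatterns, ...customPatterns].some((p) => p.test(apiBase));
}

// e.g. allowing a self-hosted LLaMA server:
// isAllowedApiBase("http://llama.internal:8080/v1", [/^http:\/\/llama\.internal:\d+\/v\d+\/?$/])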

User rate limiting

Taken from #140

We should have the ability to easily rate limit a user for X # of requests per day or something like that.

Rate limiting by user. See this super simple implementation of a user ID rate limiter implemented with Upstash

Something like this:

{
  "helicone-enable-user-rate-limit": "true",
  "helicone-user-requests": "# of requests per cadence",
  "helicone-user-cadence": "# of seconds"
}

From @aavetis

Well those two params are pretty much the only thing people seem to use at the moment with Redis caches :)
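
A minimal sketch of a fixed-window, per-user limiter driven by those proposed headers, using an in-memory map (a real deployment would use Redis/Upstash as suggested; the header names above are a proposal, not a shipped API):

// Hypothetical fixed-window rate limiter keyed by user ID.
const windows = new Map<string, { count: number; resetAt: number }>();

function allowRequest(userId: string, maxRequests: number, cadenceSeconds: number): boolean {
  const now = Date.now();
  const w = windows.get(userId);
  if (!w || now >= w.resetAt) {
    // Start a fresh window for this user.
    windows.set(userId, { count: 1, resetAt: now + cadenceSeconds * 1000 });
    return true;
  }
  if (w.count >= maxRequests) return false;
  w.count++;
  return true;
}

// e.g. 100 requests per user per day:
// allowRequest("user@example.com", 100, 86400)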

Add docker image?

This looks like a cool project! I'd love to try it out quickly, and the easiest way to do that (instead of setting up NPM, Supabase, and the Cloudflare workers) would be to run a Docker image. I'd also suggest putting screenshots or a demo GIF in the GitHub README so people can get a sense of the UI without going to the website. Thanks for your work on this and for open-sourcing it!

Email summaries

We should add weekly reports on app performance and cost analytics for that week.

Embedding Requests show up on /requests as "Invalid Prompt"

What I'd expect

  • Truncated text of what was being embedded

What did I observe
(screenshot)

  • In the Request column, it prints "Invalid Prompt" whenever the model is text-embedding-ada-002
  • On clicking the request and selecting JSON, the input text and model appear to be working as normal

What's the problem?

  • It's confusing, so it had me verifying whether there was a real issue or whether an end user was trying to jailbreak the model or something

Add dynamic time filters

Currently you can only choose 3 months, 1 month, 7 days, 24 hours, and 1 hour.

We want to allow our users to select dynamic time ranges.

Add retention TTL to sensitive request fields

From OpenAI's API data usage policy:

Any data sent through the API will be retained for abuse and misuse monitoring purposes for a maximum of 30 days, after which it will be deleted (unless otherwise required by law).

Helicone retains all request data at the moment, and this is totally understandable if you wish to debug issues or tune your model. However, if you are developing an application where end-users could potentially send sensitive data to the Helicone API, it would be best to be able to tell users "your request data will be deleted automatically in x time" or immediately, with no TTL.

This setting would be fine to set account-wide, or on messages with a certain header such as sensitive: true.

Helicone has a distinct advantage in being able to track costs per user, model, key, etc., but tracking the actual messages themselves may not be necessary. This could also be implemented as a "metadata only mode" where only the following fields are kept (sketched as a type below):

  • time
  • total tokens
  • user
  • model
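
A hedged sketch of what a metadata-only record could look like (field names are illustrative, not Helicone's actual schema):

// Hypothetical metadata-only request record.
interface MetadataOnlyRequest {
  time: string;        // ISO timestamp of the request
  totalTokens: number; // prompt + completion token count
  user: string;        // user identifier, e.g. from Helicone-User-Id
  model: string;       // model name from the response
}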

Getting Cloudflare "523: Origin is unreachable" errors

Trying to use the helicone.openai module out of the box, but I get the error:

(screenshot of the 523 error page)

With an HTML page with the following paragraph:

"Check your DNS Settings. A 523 error means that Cloudflare could not reach your host web server. The most common cause is that your DNS settings are incorrect. Please contact your hosting provider to confirm your origin IP and then make sure the correct IP is listed for your A record in your Cloudflare DNS Settings page."

Prompt formatting tracking

Let's kick off the discussion about how we can best support the ability to track prompt formats and how prompts were constructed.

Our main focus is ensuring the UX is good and there are no major code changes/workflow disruptions to add this logging.

We currently have formatted templates: https://docs.helicone.ai/advanced-usage/templated-prompts. But as @ianbicking from HN mentioned, users might want to add formatting details and not have Helicone format the prompt for them.

Here is one idea:

template = {
    "prompt": "Write an amazing poem about Helicone",
    "promptFormat": "Write an amazing poem about Helicone"
}

serialized_template = json.dumps(template)

openai.Completion.create(
    model="text-davinci-003",
    prompt=serialized_template,
    headers={
        'Helicone-Prompt-Format': 'on',
    }
)

Support CORS

Allow CORS so that Helicone can be called from within a browser. A sketch is below.
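
A minimal sketch of what CORS support in the worker could look like (a deliberately permissive example, not the shipped behavior):

// Hypothetical CORS handling for a Cloudflare Worker response.
function withCors(response: Response): Response {
  const headers = new Headers(response.headers);
  headers.set("Access-Control-Allow-Origin", "*");
  headers.set("Access-Control-Allow-Headers", "Authorization, Content-Type, Helicone-Auth");
  headers.set("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
  return new Response(response.body, { status: response.status, headers });
}

// Preflight requests would short-circuit before proxying:
// if (request.method === "OPTIONS") return withCors(new Response(null, { status: 204 }));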

Anomaly detection

Automatically detect anomalies.

Example anomalies

  • Empty responses
  • Repeated or offensive responses (how closely does the response match the input)
  • Maybe integrate with Guardrails

We can flag them on the UI or send a push notification.

Feedback on requests

We should have the ability to give feedback on specific requests.

For a given request, we should be able to provide feedback, report human feedback, and compare across prompts and models.

Please remove hard print statements

Can we please remove these kinds of print statements?

print("logging", request, provider)
print("logging", async_log, Provider.OPENAI)

I used a workaround to hide these, but it would be better to remove them.

Exposure of SUPABASE_SERVICE_ROLE_KEY on GitHub

Description: A team member accidentally published the SUPABASE_SERVICE_ROLE_KEY on GitHub, which is a secret key used by the Supabase service to authenticate requests and perform operations on behalf of a service role. This key is used to grant permissions to specific resources within the Supabase project. As a result, the key is now accessible to anyone who has access to the repository, which can lead to potential security breaches and unauthorized access to team resources.

Actions Taken:

  • The team member immediately removed the key from the repository to prevent unauthorized access.
  • The team member revoked the exposed key from the Supabase service to prevent anyone from using it to access team resources.
  • The team member generated a new key and updated the service to use the new key.
  • The team reviewed the code to ensure that no other sensitive information is being exposed.
  • The team member notified the team about the issue and the actions taken to fix it.

It is important for the team to be aware of the potential risks associated with accidentally publishing sensitive information on public repositories. It is recommended to implement security measures such as:

  • Use environment variables to store sensitive information instead of hardcoding them in the code.
  • Implement code reviews and automated tools to detect and prevent the publishing of sensitive information.
  • Provide training and education to team members on the importance of keeping sensitive information secure.

By taking these steps, the team can help prevent potential security breaches and protect sensitive information from unauthorized access.

Sweep (slow): Add a new way to authenticate proxy requests

The authentication spec allows the "Authorization" header to accept multiple keys. Change the integration from requiring Helicone-Auth to using only the Authorization header.

We want the UX to be like this:

import openai

openai.api_base = "https://oai.hconeai.com/v1"
openai.api_key = "Bearer <OpenAI API Key>, Bearer helicone-sk-<KEY>"
openai.Completion.create(
  # ...other parameters
)

Make the changes in RequestWrapper.ts and HeliconeHeaders.ts.

Make sure that you take the OpenAI API key and map it to the correct "Authorization" header so that we don't send OpenAI our Helicone key.

Only make changes to the /worker directory. A parsing sketch is below.
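
A hedged sketch of how the combined Authorization header could be split into the provider key and the Helicone key (the helicone-sk- prefix and this parsing are assumptions taken from the example above):

// Hypothetical parser for "Bearer <OpenAI key>, Bearer helicone-sk-<key>".
function splitAuthHeader(authorization: string): { providerKey?: string; heliconeKey?: string } {
  const tokens = authorization
    .split(",")
    .map((part) => part.trim().replace(/^Bearer\s+/i, ""));
  return {
    providerKey: tokens.find((t) => !t.startsWith("helicone-sk-")),
    heliconeKey: tokens.find((t) => t.startsWith("helicone-sk-")),
  };
}

// The proxy would then forward "Authorization: Bearer <providerKey>" upstream,
// so the Helicone key is never sent to OpenAI.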

OpenAI function calling not supported

The error shows as below:

Application error: a client-side exception has occurred (see the browser console for more information).

High-level stats (e.g. Tokens used by user id)

It would be great to get an overview of token usage by users without exporting the data and doing the calculations elsewhere. This is the main thing I'm using Helicone to keep track of.

Open AI calls with stream are not working

Hello,

I'm using the OpenAI API like this:

const completion = await this.openaiApi.createChatCompletion(
  {
    model: this.model,
    messages: messages,
    functions: this.functions,
    stream: true,
    temperature: 1,
    function_call: "auto",
  },
  { responseType: "stream" }
);

After setting basePath and baseOptions following the documentation.

Many chunks (not all) are truncated. A truncated chunk looks like this:

data: {"id":"chatcmpl-7jOVWL

Instead of this:

data: {"id":"chatcmpl-7jOVWLdYdXUnf2omQlYBBZPOawnRR","object":"chat.completion.chunk","created":1691053322,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":"?"},"finish_reason":null}]}

Does not cache when user properties are changed

Love the caching feature! Makes the request so much quicker when it's seen before, and more predictable too.

However, I noticed an issue when using it together with user properties. Say I have the following headers:

const headers = {
  "Helicone-Cache-Enabled": "true",
  "Helicone-Property-App": "my-application",
  "Helicone-Property-Session": "12345689",
};

Helicone-Property-App is static, so that's fine, but when the session ID in Helicone-Property-Session changes, the request no longer uses the cache.

Would love to be able to have the cache ignore certain headers.
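
A hedged sketch of the requested behavior, deriving the cache key from the request while skipping an ignore list of headers (the key derivation here is illustrative, not Helicone's actual cache logic):

// Hypothetical cache key that ignores selected headers (e.g. per-session properties).
async function cacheKey(
  url: string,
  body: string,
  headers: Record<string, string>,
  ignore: string[]
): Promise<string> {
  const kept = Object.entries(headers)
    .filter(([name]) => !ignore.includes(name.toLowerCase()))
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([name, value]) => `${name.toLowerCase()}:${value}`)
    .join("|");
  const data = new TextEncoder().encode(`${url}|${body}|${kept}`);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}

// e.g. await cacheKey(url, body, headers, ["helicone-property-session"])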

Pipe to custom domain (Azure OpenAI service support)

Taken from #139

Azure lets users deploy their own OpenAI service as part of its Cognitive Services offerings. By the looks of it, Microsoft will quickly mirror functionality and models as they are made available by OpenAI (e.g. they added chat-turbo the day after it was available).

See below example from the OpenAI cookbook on how the Azure OpenAI endpoints look. Users give their deployment a name and end up with https://my-service-name.openai.azure.com. No indication that this url pattern will change, but not sure.

https://github.com/openai/openai-python/blob/main/README.md#microsoft-azure-endpoints

Potential options:

  • Pass in the Azure endpoint along with the rest of the Helicone params
  • Set up something in the UI for users to specify which API key should go where (either create an Azure service record or send to OpenAI)

Update: the documentation link has been updated; you need to provide api_type, api_base, and api_version.

Credit: @aavetis

support third-party, like api2d

Just like Helicone, it has its own API base and api_key, as below:

import os
os.environ['OPENAI_API_KEY'] = 'fk-xxxxxxxxxxxxxxxxxxxxxxxx'
import openai

openai.api_base = "https://openai.api2d.net/v1"

Langchain + Azure OpenAI proxy getting resource not found error

When trying to add the Helicone proxy to AzureOpenAI with the langchain wrapper, I get a consistent "resource not found" error from OpenAI:

from langchain.chat_models import AzureChatOpenAI

helicone_headers = {
    "Helicone-Auth": f"Bearer {helicone_api_key}",
    "Helicone-Property-Env": helicone_env,
    "Helicone-Cache-Enabled": "true",
    "Helicone-OpenAI-Api-Base": "https://<model_name>.openai.azure.com/",
}

self.model = AzureChatOpenAI(
    openai_api_base="https://oai.hconeai.com/v1",
    deployment_name="gpt-35-turbo",
    openai_api_key=<AZURE_OPENAI_API_KEY>,
    openai_api_version="2023-05-15",
    openai_api_type="azure",
    max_retries=max_retries,
    headers=helicone_headers,
    **kwargs,
)

'error': 'Resource not found'

Calling the model without the wrapper works fine:

import openai
openai.api_base = 'https://oai.hconeai.com/v1'
response = openai.Completion.create(
   engine='gpt-35-turbo', 
   prompt='Write a tagline for an ice cream shop. ', 
   max_tokens=10, 
   headers={
        "Helicone-Auth": f"Bearer {os.environ.get('HELICONE_API_KEY')}", 
        "Helicone-Cache-Enabled": "true", 
        "Helicone-OpenAI-Api-Base": "https://<MODEL_NAME>.openai.azure.com"
    }, 
    api_version="2023-05-15", 
    api_type='azure', 
    api_key=<AZURE_OPENAI_API_KEY>, 
    api_base="https://<MODEL_NAME>.openai.azure.com")

<OpenAIObject text_completion id=cmpl-7e5iIQeevfdKYouYoyu2kFHi2xWgK at 0x10593fbf0> JSON: {
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "logprobs": null,
      "text": "2 days left\n\n...a tagline we can"
    }
  ],
  "created": 1689789438,
  "helicone_meta": {},
  "id": "cmpl-7e5iIQeevfdKYouYoyu2kFHi2xWgK",
  "model": "gpt-35-turbo",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 10,
    "prompt_tokens": 11,
    "total_tokens": 21
  }
}

Make our cloudflare worker more robust

We want to wrap our entire worker in a try/catch, where the catch will log the error and then make a best-effort attempt to forward the request to OpenAI. A sketch is below.
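
A minimal sketch of that fallback shape for a Cloudflare Worker (handleProxy and logError are placeholders for the worker's real logic, not existing functions):

declare function handleProxy(req: Request): Promise<Response>;
declare function logError(e: unknown): void;

// Hypothetical top-level try/catch: on any failure, log and forward the raw request.
export default {
  async fetch(request: Request): Promise<Response> {
    try {
      return await handleProxy(request); // the worker's normal proxy path
    } catch (err) {
      logError(err); // best effort: record the failure somewhere durable
      // Then pass the original request straight through to OpenAI.
      const url = new URL(request.url);
      url.host = "api.openai.com";
      return fetch(new Request(url.toString(), request));
    }
  },
};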

Show failed requests in the UI

I'm getting the error "Invalid API base" when using the Helicone-OpenAI-Api-Base header via curl and Python. I'm confused about how this is happening, so it would be great if the Helicone UI included failed requests too, with the full outgoing request and headers to the external API, and its response. Or maybe just add a more detailed error message for this problem.

Be able to block users

Taken from #140

We should be able to block bad actors or users. We need some user management section within the UI to say "restrict traffic for this user".

Build errors when following set-up instructions

Hey!

I was trying to follow the set-up instructions currently described in README.md; however, I receive the following errors when trying to execute wrangler dev:

✘ [ERROR] Could not resolve "@supabase/supabase-js"

    src/index.ts:1:45:
      1 │ import { createClient, SupabaseClient } from "@supabase/supabase-js";
        ╵                                              ~~~~~~~~~~~~~~~~~~~~~~~

  You can mark the path "@supabase/supabase-js" as external to exclude it from the bundle, which will remove this error.


✘ [ERROR] Could not resolve "gpt3-tokenizer"

    src/index.ts:11:26:
      11 │ import GPT3Tokenizer from "gpt3-tokenizer";
         ╵                           ~~~~~~~~~~~~~~~~

  You can mark the path "gpt3-tokenizer" as external to exclude it from the bundle, which will remove this error.


✘ [ERROR] Could not resolve "events"

    src/index.ts:12:29:
      12 │ import { EventEmitter } from "events";
         ╵                              ~~~~~~~~

  The package "events" wasn't found on the file system but is built into node.
  Add "node_compat = true" to your wrangler.toml file to enable Node compatibility.


✘ [ERROR] Could not resolve "@supabase/supabase-js"

    src/properties.ts:1:29:
      1 │ import { createClient } from "@supabase/supabase-js";
        ╵                              ~~~~~~~~~~~~~~~~~~~~~~~

  You can mark the path "@supabase/supabase-js" as external to exclude it from the bundle, which will remove this error.


✘ [ERROR] Could not resolve "async-retry"

    src/retry.ts:2:18:
      2 │ import retry from 'async-retry';
        ╵                   ~~~~~~~~~~~~~

  You can mark the path "async-retry" as external to exclude it from the bundle, which will remove this error.


✘ [ERROR] Build failed with 5 errors:

  src/index.ts:1:45: ERROR: Could not resolve "@supabase/supabase-js"
  src/index.ts:11:26: ERROR: Could not resolve "gpt3-tokenizer"
  src/index.ts:12:29: ERROR: Could not resolve "events"
  src/properties.ts:1:29: ERROR: Could not resolve "@supabase/supabase-js"
  src/retry.ts:2:18: ERROR: Could not resolve "async-retry"

How do I best fix these? I might unintentionally be doing something completely stupid as well to get this error.

Any help is appreciated!

Allow users to encrypt their prompts

We want to allow users to encrypt their prompts. We will do this by passing an extra header called helicone-encrypt-prompt and then encrypting the prompt before it is stored. A sketch is below.
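
A hedged sketch of what the storage-side encryption could look like with WebCrypto AES-GCM (key management, and exactly how the helicone-encrypt-prompt header would be honored, are open design questions; nothing here is shipped behavior):

// Hypothetical prompt encryption with AES-GCM via WebCrypto.
async function encryptPrompt(
  prompt: string,
  key: CryptoKey
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per prompt
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(prompt)
  );
  return { iv, ciphertext };
}

// Key generation (in a real design the key would be stored or derived per user):
// const key = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);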
