
langchain-java's Introduction

🦜️ LangChain Java

A Java implementation of LangChain, bringing LLM-powered capabilities to the Big Data ecosystem.

It serves as a bridge between the realm of LLMs and the Big Data domain, primarily for the Java stack.

If you are interested, you can add me on WeChat (HamaWhite) or send me an email.

1. What is this?

This is the Java-language implementation of LangChain, which makes it as easy as possible to develop LLM-powered applications.

The following examples can be found in langchain-example.

2. Integrations

2.1 LLMs

2.2 Vector stores

3. Quickstart Guide

The API documentation is available at the following link:
https://hamawhitegg.github.io/langchain-java

3.1 Maven Repository

Prerequisites for building:

  • Java 17 or later
  • Unix-like environment (we use Linux, Mac OS X)
  • Maven (we recommend version 3.8.6 and require at least 3.5.4)

Maven Central

<dependency>
    <groupId>io.github.hamawhitegg</groupId>
    <artifactId>langchain-core</artifactId>
    <version>0.2.1</version>
</dependency>

3.2 Environment Setup

Using LangChain usually requires integrations with one or more model providers, data stores, APIs, etc. For this example, we will use OpenAI's APIs.

We first need to set the environment variable:

export OPENAI_API_KEY=xxx

# If a proxy is needed, set the OPENAI_PROXY environment variable.
export OPENAI_PROXY=http://host:port

If you want to set the API key and proxy dynamically, you can use the openaiApiKey and openaiProxy parameters when initializing the OpenAI class.

var llm = OpenAI.builder()
        .openaiOrganization("xxx")
        .openaiApiKey("xxx")
        .openaiProxy("http://host:port")
        .requestTimeout(16)
        .build()
        .init();

3.3 LLMs

Get predictions from a language model. The basic building block of LangChain is the LLM, which takes in text and generates more text.

OpenAI Example

var llm = OpenAI.builder()
        .temperature(0.9f)
        .build()
        .init();

var result = llm.predict("What would be a good company name for a company that makes colorful socks?");
print(result);

And now we can pass in text and get predictions!

Feetful of Fun

3.4 Chat models

Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

OpenAI Chat Example

var chat = ChatOpenAI.builder()
        .temperature(0)
        .build()
        .init();

var result = chat.predictMessages(List.of(new HumanMessage("Translate this sentence from English to French. I love programming.")));
println(result);
AIMessage{content='J'adore la programmation.', additionalKwargs={}}

It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM. You can access this through the predict interface.

var output = chat.predict("Translate this sentence from English to French. I love programming.");
println(output);
J'adore la programmation.

3.5 Chains

Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.

3.5.1 LLMs

The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.

LLM Chain Example

var prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}?");

var chain = new LLMChain(llm, prompt);
var result = chain.run("colorful socks");
println(result);
Feetful of Fun

3.5.2 Chat models

The LLMChain can be used with chat models as well:

LLM Chat Chain Example

var template = "You are a helpful assistant that translates {input_language} to {output_language}.";
var systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);
var humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate("{text}");
var chatPrompt = ChatPromptTemplate.fromMessages(List.of(systemMessagePrompt, humanMessagePrompt));

var chain = new LLMChain(chat, chatPrompt);
var result = chain.run(Map.of("input_language", "English", "output_language", "French", "text", "I love programming."));
println(result);
J'adore la programmation.

3.5.3 SQL Chains Example

LLMs make it possible to interact with SQL databases using natural language, and LangChain offers SQL Chains to build and run SQL queries based on natural language prompts.


SQL Chain Example

var database = SQLDatabase.fromUri("jdbc:mysql://127.0.0.1:3306/demo", "xxx", "xxx");

var chain = SQLDatabaseChain.fromLLM(llm, database);
var result = chain.run("How many students are there?");
println(result);
There are 6 students.

result = chain.run("Who got zero score? Show me her parent's contact information.");
println(result);
The parent of the student who got zero score is Tracy and their contact information is 088124.

The available languages are as follows.

Language             Value
English (default)    en_US
Portuguese (Brazil)  pt_BR

To use a language other than English, set the following environment variable on your host. If it is not set, en_US is used by default.

export USE_LANGUAGE=pt_BR

3.6 Agents

Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.

Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.
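The loop described above can be sketched in plain Java. This is an illustrative sketch only, not langchain-java's actual implementation; the planner function, the tool map, and the "FINAL:" convention are all invented for the example.

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of an agent loop: a planner (standing in for the LLM)
// picks a tool by name, the tool runs, and the observation is appended to the
// scratchpad for the next planning step, until a final answer appears.
public class AgentLoopSketch {
    public static String run(Function<String, String> planner,
                             Map<String, Function<String, String>> tools,
                             String input,
                             int maxSteps) {
        String scratchpad = input;
        for (int step = 0; step < maxSteps; step++) {
            // Ask the planner what to do next, e.g. "calculator:2+2" or "FINAL:4"
            String decision = planner.apply(scratchpad);
            if (decision.startsWith("FINAL:")) {
                return decision.substring("FINAL:".length());
            }
            String[] parts = decision.split(":", 2);
            // Run the chosen tool and record its output for the next step
            String observation = tools.get(parts[0]).apply(parts[1]);
            scratchpad += "\nObservation: " + observation;
        }
        return "Agent stopped after max steps";
    }
}
```

The maxSteps cap mirrors the iteration limit real agent executors use to prevent infinite tool-use loops.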

Set the appropriate environment variables.

export SERPAPI_API_KEY=xxx

3.6.1 Google Search Agent Example

This example augments OpenAI's knowledge beyond its 2021 training cutoff, and its computational abilities, through the use of the Search and Calculator tools.

Google Search Agent Example

// the 'llm-math' tool uses an LLM
var tools = loadTools(List.of("serpapi", "llm-math"), llm);

var agent = initializeAgent(tools, chat, AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION);
var query = "How many countries and regions participated in the 2023 Hangzhou Asian Games? " +
        "What is that number raised to the .023 power?";

agent.run(query);


4. Run Test Cases from Source

git clone https://github.com/HamaWhiteGG/langchain-java.git
cd langchain-java

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn clean test
mvn clean test

This project uses Spotless to format the code. If you make any modifications, please remember to format the code using the following command.

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn spotless:apply
mvn spotless:apply

5. Support

Don’t hesitate to ask!

Open an issue if you find a bug in langchain-java.

6. Reward

If the project has been helpful to you, you can treat me to a cup of coffee via the WeChat appreciation code.

langchain-java's People

Contributors

ashtonhogan, dependabot[bot], fwborges, hamawhitegg, mzhu-ai, sandiegoe, tokuhirom, wangmiao-1981, zhangxiaojiawow


langchain-java's Issues

UnrecognizedPropertyException in CompletionResp

When running RetrievalQaExample there is an error like:
java.lang.RuntimeException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "warning" (class com.hw.openai.entity.completions.CompletionResp), not marked as ignorable

Exception in thread "main" retrofit2.adapter.rxjava2.HttpException: HTTP 429 Too Many Requests

When I use the demo on GitHub and execute the chain example, an error occurs. It is unclear why, as there are no further details beyond the error message below.

//  The language model we're going to use to control the agent.
var llm = OpenAI.builder().temperature(0).build().init();

// The tools we'll give the Agent access to. Note that the 'llm-math' tool uses an LLM, so we need to pass that in.
var tools = loadTools(List.of("serpapi", "llm-math"), llm);

//  Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
var agent = initializeAgent(tools, llm, AgentType.ZERO_SHOT_REACT_DESCRIPTION);

// Let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?");

Exception in thread "main" retrofit2.adapter.rxjava2.HttpException: HTTP 429 Too Many Requests
at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:57)
at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:38)
at retrofit2.adapter.rxjava2.CallExecuteObservable.subscribeActual(CallExecuteObservable.java:48)
at io.reactivex.Observable.subscribe(Observable.java:12284)
at retrofit2.adapter.rxjava2.BodyObservable.subscribeActual(BodyObservable.java:35)
at io.reactivex.Observable.subscribe(Observable.java:12284)
at io.reactivex.internal.operators.observable.ObservableSingleSingle.subscribeActual(ObservableSingleSingle.java:35)
at io.reactivex.Single.subscribe(Single.java:3666)
at io.reactivex.Single.blockingGet(Single.java:2869)
at com.hw.openai.OpenAiClient.create(OpenAiClient.java:197)
at com.hw.langchain.llms.openai.BaseOpenAI._generate(BaseOpenAI.java:181)
at com.hw.langchain.llms.base.BaseLLM.generate(BaseLLM.java:62)
at com.hw.langchain.llms.base.BaseLLM.generatePrompt(BaseLLM.java:70)
at com.hw.langchain.chains.llm.LLMChain.generate(LLMChain.java:111)
at com.hw.langchain.chains.llm.LLMChain.innerCall(LLMChain.java:101)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.llm.LLMChain.predict(LLMChain.java:164)
at com.hw.langchain.agents.agent.Agent.plan(Agent.java:109)
at com.hw.langchain.agents.agent.AgentExecutor.takeNextStep(AgentExecutor.java:112)
at com.hw.langchain.agents.agent.AgentExecutor.innerCall(AgentExecutor.java:153)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.base.Chain.call(Chain.java:89)
at com.hw.langchain.chains.base.Chain.run(Chain.java:171)
at com.higuava.OpenAIAPIExample.main(OpenAIAPIExample.java:37)

Exception when using an index restored from Collections in Pinecone

java.lang.RuntimeException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "source_collection" (class com.hw.pinecone.entity.index.Database), not marked as ignorable (7 known properties: "metric", "pod_type", "shards", "dimension", "name", "pods", "replicas"])
 at [Source: (okhttp3.ResponseBody$BomAwareReader); line: 1, column: 120] (through reference chain: com.hw.pinecone.entity.index.IndexDescription["database"]->com.hw.pinecone.entity.index.Database["source_collection"])
	at io.reactivex.internal.util.ExceptionHelper.wrapOrThrow(ExceptionHelper.java:46)
	at io.reactivex.internal.observers.BlockingMultiObserver.blockingGet(BlockingMultiObserver.java:93)
	at io.reactivex.Single.blockingGet(Single.java:2870)
	at com.hw.pinecone.PineconeClient.describeIndex(PineconeClient.java:183)
	at com.hw.pinecone.PineconeClient.indexClient(PineconeClient.java:91)
	at com.hw.langchain.vectorstores.pinecone.Pinecone.init(Pinecone.java:79)
	at com.beavers.aichat.service.VectorDatabaseService.match(VectorDatabaseService.java:61)

Version: pinecone-client:0.1.11

How can the token-limit problem be solved?

I want to use langchain + GraphQL for question answering. When the graph has many data nodes, the prompt langchain sends to OpenAI becomes too long, and the API reports that the token limit has been exceeded. Is there a good way to solve this?
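One common workaround is to split the oversized context into chunks and keep each request under the limit. Below is a minimal, character-based sketch; a real solution would count tokens with the model's tokenizer rather than characters, and PromptChunker is a hypothetical helper, not part of langchain-java.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: splits a long context into fixed-size chunks so each
// request stays under the model's limit. Characters are a rough proxy for
// tokens here; use a real tokenizer for accurate budgeting.
public class PromptChunker {
    public static List<String> split(String text, int maxChars) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < text.length(); i += maxChars) {
            chunks.add(text.substring(i, Math.min(text.length(), i + maxChars)));
        }
        return chunks;
    }
}
```

Each chunk can then be summarized or embedded separately, and only the most relevant pieces included in the final prompt (a map-reduce or retrieval approach).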

Dependency convergence errors when adding langchain-java as a Maven dependency

Thanks for this great library!
When importing it as a dependency in Maven, we observe the following dependency convergence errors between direct dependencies of langchain-java and indirect ones. We'd be very grateful if you could find a consistent choice of dependencies or use <exclusion>-tags in your pom.xml to avoid the convergence errors. Thank you very much in advance:

Dependency convergence error for io.netty:netty-resolver-dns:jar:4.1.77.Final paths to dependency are:

<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-io.netty:netty-resolver-dns:jar:4.1.77.Final:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.netty:netty-resolver-dns:jar:4.1.43.Final:compile

[ERROR]
Dependency convergence error for org.apache.commons:commons-collections4:jar:4.3 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.milvus:milvus-sdk-java:jar:2.2.9:compile
      +-org.apache.commons:commons-collections4:jar:4.3:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.apache.commons:commons-collections4:jar:4.4:compile

[ERROR]
Dependency convergence error for io.netty:netty-resolver:jar:4.1.77.Final paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-io.netty:netty-transport:jar:4.1.77.Final:compile
        +-io.netty:netty-resolver:jar:4.1.77.Final:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-io.netty:netty-resolver:jar:4.1.77.Final:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-io.netty:netty-handler:jar:4.1.77.Final:compile
        +-io.netty:netty-resolver:jar:4.1.77.Final:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.netty:netty-resolver:jar:4.1.43.Final:compile

[ERROR]
Dependency convergence error for io.reactivex.rxjava2:rxjava:jar:2.0.0 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.github.hamawhitegg:openai-client:jar:0.2.0:compile
      +-com.squareup.retrofit2:adapter-rxjava2:jar:2.9.0:compile
        +-io.reactivex.rxjava2:rxjava:jar:2.0.0:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.reactivex.rxjava2:rxjava:jar:2.2.21:compile

[ERROR]
Dependency convergence error for org.apache.httpcomponents:httpcore:jar:4.4.15 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.http-client:google-http-client-apache-v2:jar:1.42.3:compile
        +-org.apache.httpcomponents:httpcore:jar:4.4.15:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-org.apache.httpcomponents:httpcore:jar:4.4.16:compile

[ERROR]
Dependency convergence error for org.apache.httpcomponents:httpclient:jar:4.5.13 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.http-client:google-http-client-apache-v2:jar:1.42.3:compile
        +-org.apache.httpcomponents:httpclient:jar:4.5.13:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-org.apache.httpcomponents:httpclient:jar:4.5.14:compile

[ERROR]
Dependency convergence error for io.projectreactor:reactor-core:jar:3.4.13 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-io.projectreactor:reactor-core:jar:3.4.13:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.projectreactor:reactor-core:jar:3.5.8:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.projectreactor.addons:reactor-adapter:jar:3.5.1:compile
      +-io.projectreactor:reactor-core:jar:3.5.4:compile

[ERROR]
Dependency convergence error for org.reactivestreams:reactive-streams:jar:1.0.3 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.github.hamawhitegg:openai-client:jar:0.2.0:compile
      +-com.squareup.retrofit2:adapter-rxjava2:jar:2.9.0:compile
        +-org.reactivestreams:reactive-streams:jar:1.0.3:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-org.reactivestreams:reactive-streams:jar:1.0.3:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.redisson:redisson:jar:3.17.3:compile
      +-io.reactivex.rxjava3:rxjava:jar:3.1.6:compile
        +-org.reactivestreams:reactive-streams:jar:1.0.4:compile

[ERROR]
Dependency convergence error for com.google.http-client:google-http-client-gson:jar:1.42.0 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.oauth-client:google-oauth-client:jar:1.34.1:compile
        +-com.google.http-client:google-http-client-gson:jar:1.42.0:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.http-client:google-http-client-gson:jar:1.42.3:compile

[ERROR]
Dependency convergence error for org.apache.commons:commons-text:jar:1.6 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-io.milvus:milvus-sdk-java:jar:2.2.9:compile
      +-org.apache.commons:commons-text:jar:1.6:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-org.apache.commons:commons-text:jar:1.10.0:compile

[ERROR]
Dependency convergence error for com.google.http-client:google-http-client:jar:1.42.0 paths to dependency are:
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.oauth-client:google-oauth-client:jar:1.34.1:compile
        +-com.google.http-client:google-http-client:jar:1.42.0:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.http-client:google-http-client-apache-v2:jar:1.42.3:compile
        +-com.google.http-client:google-http-client:jar:1.42.3:compile
and
<our project>
  +-io.github.hamawhitegg:langchain-core:jar:0.2.0:compile
    +-com.google.api-client:google-api-client:jar:2.2.0:compile
      +-com.google.http-client:google-http-client:jar:1.42.3:compile

HttpException HTTP 429

The problem is that requests are sent one after another without any delay. How can I add a delay between them?

Exception:
Exception in thread "main" retrofit2.adapter.rxjava2.HttpException: HTTP 429
at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:57)
at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:38)
at retrofit2.adapter.rxjava2.CallExecuteObservable.subscribeActual(CallExecuteObservable.java:48)
at io.reactivex.Observable.subscribe(Observable.java:10151)
at retrofit2.adapter.rxjava2.BodyObservable.subscribeActual(BodyObservable.java:35)
at io.reactivex.Observable.subscribe(Observable.java:10151)
at io.reactivex.internal.operators.observable.ObservableSingleSingle.subscribeActual(ObservableSingleSingle.java:35)
at io.reactivex.Single.subscribe(Single.java:2517)
at io.reactivex.Single.blockingGet(Single.java:2001)
at com.hw.openai.OpenAiClient.create(OpenAiClient.java:214)
at com.hw.langchain.llms.openai.BaseOpenAI._generate(BaseOpenAI.java:197)
at com.hw.langchain.llms.base.BaseLLM.generate(BaseLLM.java:62)
at com.hw.langchain.llms.base.BaseLLM.generatePrompt(BaseLLM.java:70)
at com.hw.langchain.chains.llm.LLMChain.generate(LLMChain.java:111)
at com.hw.langchain.chains.llm.LLMChain.innerCall(LLMChain.java:101)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.llm.LLMChain.predict(LLMChain.java:164)
at com.hw.langchain.chains.sql.database.base.SQLDatabaseChain.innerCall(SQLDatabaseChain.java:150)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.base.Chain.call(Chain.java:89)
at com.hw.langchain.chains.base.Chain.run(Chain.java:171)
at com.hw.langchain.tools.base.Tool.innerRun(Tool.java:76)
at com.hw.langchain.tools.base.BaseTool.run(BaseTool.java:114)
at com.hw.langchain.agents.agent.AgentExecutor.takeNextStep(AgentExecutor.java:126)
at com.hw.langchain.agents.agent.AgentExecutor.innerCall(AgentExecutor.java:153)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.base.Chain.call(Chain.java:89)
at com.hw.langchain.chains.base.Chain.run(Chain.java:171)
at me.moteloff.demo.application.LangchainWithPostgresApplication.main(LangchainWithPostgresApplication.java:111)
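A common client-side workaround for HTTP 429 is retrying with exponential backoff. Below is a minimal sketch; RetryWithBackoff is a hypothetical helper, not part of langchain-java, and production code should also honor any Retry-After header the API returns.

```java
import java.util.concurrent.Callable;

// Hypothetical helper: retries a call, doubling the wait after each failure.
public class RetryWithBackoff {
    public static <T> T call(Callable<T> task, int maxAttempts, long initialDelayMillis) {
        long delay = initialDelayMillis;
        for (int attempt = 1; ; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw new RuntimeException("giving up after " + maxAttempts + " attempts", e);
                }
                try {
                    Thread.sleep(delay); // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
                delay *= 2; // exponential backoff
            }
        }
    }
}
```

A chain call could then be wrapped as `RetryWithBackoff.call(() -> chain.run(query), 5, 1000)`.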

retrofit2.adapter.rxjava2.HttpException: HTTP 500

I want to create a knowledge base, but an exception occurs

var client = PineconeClient.builder().pineconeApiKey("xx").pineconeEnv("us-west1-gcp-free").requestTimeout(30).build().init();
if (!client.listIndexes().contains(INDEX_NAME)) {
    // the text-embedding-ada-002 model has an output dimension of 1536.
    var request = CreateIndexRequest.builder()
            .name(INDEX_NAME)
            .build();
    client.createIndex(request);
    awaitIndexReady(client);
}

var embeddings = OpenAIEmbeddings.builder().openaiApiBase("https://xx/v1/").openaiApiKey("xx").requestTimeout(600).build().init();
var pinecone = Pinecone.builder()
        .client(client)
        .indexName(INDEX_NAME)
        .namespace(namespace)
        .embeddingFunction(embeddings::embedQuery)
        .build().init();

var request = new DescribeIndexStatsRequest();
var response = pinecone.getIndex().describeIndexStats(request);

if (!response.getNamespaces().containsKey(namespace)) {
    pinecone.fromDocuments(docs, embeddings);
}

Error message:

2023/08/16 15:24:54.439 ERROR [http-nio-8089-exec-3] c.b.a.c.ControllerExceptionHandler : 捕获异常:
retrofit2.adapter.rxjava2.HttpException: HTTP 500 
	at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:57)
	at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:38)
	at retrofit2.adapter.rxjava2.CallExecuteObservable.subscribeActual(CallExecuteObservable.java:48)
	at io.reactivex.Observable.subscribe(Observable.java:12284)
	at retrofit2.adapter.rxjava2.BodyObservable.subscribeActual(BodyObservable.java:35)
	at io.reactivex.Observable.subscribe(Observable.java:12284)
	at io.reactivex.internal.operators.observable.ObservableSingleSingle.subscribeActual(ObservableSingleSingle.java:35)
	at io.reactivex.Single.subscribe(Single.java:3666)
	at io.reactivex.Single.blockingGet(Single.java:2869)
	at com.hw.openai.OpenAiClient.embedding(OpenAiClient.java:247)
	at com.hw.langchain.embeddings.openai.OpenAIEmbeddings.embedWithRetry(OpenAIEmbeddings.java:214)
	at com.hw.langchain.embeddings.openai.OpenAIEmbeddings.getLenSafeEmbeddings(OpenAIEmbeddings.java:140)
	at com.hw.langchain.embeddings.openai.OpenAIEmbeddings.embedDocuments(OpenAIEmbeddings.java:195)
	at com.hw.langchain.vectorstores.pinecone.Pinecone.fromTexts(Pinecone.java:202)
	at com.hw.langchain.vectorstores.base.VectorStore.fromDocuments(VectorStore.java:195)
	at com.beaver.asura.service.impl.FileServiceImpl.initializePineconeIndex(FileServiceImpl.java:98)
	at com.beaver.asura.service.impl.FileServiceImpl.upload(FileServiceImpl.java:65)
	at com.beaver.asura.service.impl.FileServiceImpl$$FastClassBySpringCGLIB$$496fdd7c.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
	at org.springframework.aop.framework.CglibAopProxy.invokeMethod(CglibAopProxy.java:386)
	at org.springframework.aop.framework.CglibAopProxy.access$000(CglibAopProxy.java:85)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:704)
	at com.beaver.asura.service.impl.FileServiceImpl$$EnhancerBySpringCGLIB$$e70f8772.upload(<generated>)
	at com.beaver.asura.controller.AppController.upload(AppController.java:84)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1072)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:965)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:555)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:623)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:209)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
	at com.github.xiaoymin.knife4j.spring.filter.SecurityBasicAuthFilter.doFilter(SecurityBasicAuthFilter.java:87)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:481)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:130)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:390)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:926)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1791)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
	at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
	at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.base/java.lang.Thread.run(Thread.java:833)
2023/08/16 15:24:54.440 WARN  [http-nio-8089-exec-3] o.s.w.s.m.m.a.ExceptionHandlerExceptionResolver : Resolved [retrofit2.adapter.rxjava2.HttpException: HTTP 500 ]






Jackson frequently throws a JSON parsing error

The test fails with the code below:

ChatCompletion chatCompletion = ChatCompletion.builder()
        .model("gpt-3.5-turbo-16k")
        .messages(List.of(message))
        .temperature(0.5f)
        .stream(true)
        .build();

String msg = client.chatCompletion(chatCompletion);
System.out.println(msg);
client.close();

Is a streaming response supported?

I am using RetrievalQa chain to build a document-based conversational tool, but every time I ask a question about the content of the document, I have to wait for the large language model to complete the entire answer. Sometimes it takes a long time, and I am not sure if there is an error. Therefore, I am considering whether we can support a streaming response interface for this conversational chain.

Please help: a 400 exception occurred while transferring data to Pinecone (Exception in thread "main" retrofit2.adapter.rxjava2.HttpException: HTTP 400)

// upsert to Pinecone
var response = index.upsert(new UpsertRequest(vectors, namespace));
This line of code throws the 400 error.

var client = PineconeClient.builder().pineconeApiKey("").pineconeEnv("").projectName("").requestTimeout(30).build().init();
createPineconeIndex(client);

I have set all of these. What is the problem? Please help me.

[Question] - It looks like a proxy is needed every time

When initializing without providing any proxy (I do not need one):

@Bean
public OpenAI llm() {
  return OpenAI.builder()
                .openaiApiKey(openAiKey)
                .openaiProxy("")
                .temperature(0.9f)
                .build()
                .init();
}

I got the error:

Did not find OPENAI_PROXY, please add an environment variable `OPENAI_PROXY` which contains it, or pass `OPENAI_PROXY` as a named parameter.
	at com.hw.langchain.utils.Utils.getFromEnv(Utils.java:72)

It seems the parameter is required:
https://github.com/HamaWhiteGG/langchain-java/blob/main/openai-client/src/main/java/com/hw/openai/OpenAiClient.java#LL71C1-L71C1

Does the client support proxy servers that require authentication?

I use my own proxy server to access the OpenAI API, but the proxy requires authentication. I do not know how to set the authentication credentials on the current interface, so I get the following error:

Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Failed to authenticate with proxy
	at io.reactivex.internal.util.ExceptionHelper.wrapOrThrow(ExceptionHelper.java:45)
	at io.reactivex.internal.observers.BlockingMultiObserver.blockingGet(BlockingMultiObserver.java:90)
	at io.reactivex.Single.blockingGet(Single.java:2002)
	at com.hw.openai.OpenAiClient.create(OpenAiClient.java:195)
	at com.hw.langchain.llms.openai.BaseOpenAI._generate(BaseOpenAI.java:192)
	at com.hw.langchain.llms.base.BaseLLM.generate(BaseLLM.java:61)
	at com.hw.langchain.llms.base.BaseLLM.call(BaseLLM.java:50)
	at com.hw.langchain.llms.base.BaseLLM.call(BaseLLM.java:54)
	at com.netease.mail.demo.langchain.Application.main(Application.java:19)
Caused by: java.io.IOException: Failed to authenticate with proxy
	at okhttp3.internal.connection.RealConnection.createTunnel(RealConnection.java:418)
	at okhttp3.internal.connection.RealConnection.connectTunnel(RealConnection.java:236)
	at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:177)
	at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
	at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
	at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
	at okhttp3.internal.connection.Transmitter.newExchange(Transmitter.java:169)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:223)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at com.hw.openai.OpenAiClient.lambda$init$0(OpenAiClient.java:101)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
	at okhttp3.RealCall.execute(RealCall.java:81)
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:204)
	at retrofit2.adapter.rxjava2.CallExecuteObservable.subscribeActual(CallExecuteObservable.java:46)
	at io.reactivex.Observable.subscribe(Observable.java:10151)
	at retrofit2.adapter.rxjava2.BodyObservable.subscribeActual(BodyObservable.java:35)
	at io.reactivex.Observable.subscribe(Observable.java:10151)
	at io.reactivex.internal.operators.observable.ObservableSingleSingle.subscribeActual(ObservableSingleSingle.java:35)
	at io.reactivex.Single.subscribe(Single.java:2517)
	at io.reactivex.Single.blockingGet(Single.java:2001)
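Until the client exposes proxy credentials, the standard fix for a 407 challenge is to register an authenticator on the HTTP client. langchain-java uses OkHttp internally (which has its own `proxyAuthenticator` hook), but the pattern can be sketched with the JDK's built-in `HttpClient`; the host, port, and credentials below are placeholders:

```java
import java.net.Authenticator;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.ProxySelector;
import java.net.http.HttpClient;

public class ProxyAuthSketch {

    // Build an HttpClient that routes through an authenticating HTTP proxy.
    public static HttpClient buildClient(String proxyHost, int proxyPort,
                                         String user, String pass) {
        // Answers the proxy's authentication challenge with the given credentials.
        Authenticator auth = new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(user, pass.toCharArray());
            }
        };
        return HttpClient.newBuilder()
                .proxy(ProxySelector.of(new InetSocketAddress(proxyHost, proxyPort)))
                .authenticator(auth)
                .build();
    }
}
```

With OkHttp the equivalent hooks are `OkHttpClient.Builder#proxy` plus `#proxyAuthenticator`, which is presumably where a patch to `OpenAiClient` would go.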

Setting timeouts

Hey, thanks for writing this library. It works perfectly for my use case. However, I use gpt-4 for my queries and 99% of my requests time out, because the requests take a bit longer than 10 seconds. There doesn't seem to be a way to increase the timeout; an environment variable for this parameter would be perfect.

Thank you.
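For what it's worth, the project's README builder example already shows a `requestTimeout` parameter, so the timeout can be raised per client today; a sketch (that the unit is seconds is an assumption based on the README's `.requestTimeout(16)` example, and this is not the requested environment variable):

```java
var llm = OpenAI.builder()
        .openaiApiKey("xxx")
        .temperature(0.9f)
        // raise the request timeout for slower gpt-4 completions
        // (assumed to be in seconds, per the README example)
        .requestTimeout(120)
        .build()
        .init();
```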

Java version

Why require Java 17? Could a Java 11 build be provided?

Streaming does not work

Hello, I enabled streaming but it does not work, and I cannot find the implementation of the callback. Have you implemented it yet?

ChatGLMExample test fails

There are actually two problems here.
The model used is ChatGLM2.
The test case used is ChatGLMExample.

1. withHistory(true) throws an error.

Below is a Postman call against ChatGLM2's api.py.

POST body:
{"prompt": "咖啡怎么样?","history": [

]}

Response:
{
"response": "咖啡是一种受欢迎的饮料,由于其香味和提神的效果而备受青睐。咖啡可以在不同的地方种植,每种地方的咖啡味道也不同,这使得咖啡成为一种有趣的饮品。总的来说,咖啡是一种好喝的饮料,但也要适量饮用,因为过量摄入咖啡因可能会导致不良影响。",
"history": [
[
"咖啡怎么样?",
"咖啡是一种受欢迎的饮料,由于其香味和提神的效果而备受青睐。咖啡可以在不同的地方种植,每种地方的咖啡味道也不同,这使得咖啡成为一种有趣的饮品。总的来说,咖啡是一种好喝的饮料,但也要适量饮用,因为过量摄入咖啡因可能会导致不良影响。"
]
],
"status": 200,
"time": "2023-10-12 19:42:16"
}

In ChatGLM2's api.py, the history parameter is typed List[Tuple[str, str]]:

    def chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, max_length: int = 8192, num_beams=1,
             do_sample=True, top_p=0.8, temperature=0.8, logits_processor=None, **kwargs):

so the Java model should send history back in the shape [[query, answer], [query, answer], ...].

2. In the Python transformers package, the history parameter supplies the conversation context for multi-turn chat. Could the Java ChatGLM expose a hook for handling history flexibly (for example, trimming the number of context entries when appropriate)? After many turns in one session, won't an ever-growing history blow up the request?
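The pair-shaped history, together with the turn-trimming asked about in point 2, can be sketched in plain Java. `trimHistory` is a hypothetical helper, not part of the current API:

```java
import java.util.ArrayList;
import java.util.List;

public class ChatGlmHistory {

    // ChatGLM2's api.py expects history as List[Tuple[str, str]], which
    // serializes to JSON as [["q1", "a1"], ["q2", "a2"], ...].
    // Keep only the last maxTurns turns so the context cannot grow unbounded.
    public static List<List<String>> trimHistory(List<List<String>> history, int maxTurns) {
        if (history.size() <= maxTurns) {
            return history;
        }
        return new ArrayList<>(history.subList(history.size() - maxTurns, history.size()));
    }
}
```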

HTTP 429 when running ChatAgentExample

Running ChatAgentExample fails with:

Exception in thread "main" retrofit2.adapter.rxjava2.HttpException: HTTP 429
at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:57)
at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:38)
at retrofit2.adapter.rxjava2.CallExecuteObservable.subscribeActual(CallExecuteObservable.java:48)
at io.reactivex.Observable.subscribe(Observable.java:10151)
at retrofit2.adapter.rxjava2.BodyObservable.subscribeActual(BodyObservable.java:35)
at io.reactivex.Observable.subscribe(Observable.java:10151)
at io.reactivex.internal.operators.observable.ObservableSingleSingle.subscribeActual(ObservableSingleSingle.java:35)
at io.reactivex.Single.subscribe(Single.java:2517)
at io.reactivex.Single.blockingGet(Single.java:2001)
at com.hw.openai.OpenAiClient.create(OpenAiClient.java:237)
at com.hw.langchain.chat.models.openai.ChatOpenAI._generate(ChatOpenAI.java:174)
at com.hw.langchain.chat.models.base.BaseChatModel.lambda$generate$0(BaseChatModel.java:53)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
at com.hw.langchain.chat.models.base.BaseChatModel.generate(BaseChatModel.java:54)
at com.hw.langchain.chat.models.base.BaseChatModel.generatePrompt(BaseChatModel.java:72)
at com.hw.langchain.chains.llm.LLMChain.generate(LLMChain.java:111)
at com.hw.langchain.chains.llm.LLMChain.innerCall(LLMChain.java:101)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.llm.LLMChain.predict(LLMChain.java:164)
at com.hw.langchain.agents.agent.Agent.plan(Agent.java:109)
at com.hw.langchain.agents.agent.AgentExecutor.takeNextStep(AgentExecutor.java:112)
at com.hw.langchain.agents.agent.AgentExecutor.innerCall(AgentExecutor.java:153)
at com.hw.langchain.chains.base.Chain.call(Chain.java:103)
at com.hw.langchain.chains.base.Chain.call(Chain.java:89)
at com.hw.langchain.chains.base.Chain.run(Chain.java:171)
at com.hw.langchain.examples.agents.ChatAgentExample.main(ChatAgentExample.java:53)

Azure OpenAI endpoint support

Can an Azure OpenAI endpoint be used instead of OpenAI? How do I configure the endpoint, model, deployment, and API version needed for Azure?

OpenAI callback support

I retrieved documents via similarity_search from a vector database for a query, and I would like to pass them to the OpenAI model through something like LangChain's:

    with get_openai_callback() as cb:
        print(chain.run(input_documents=docs, question=query))
        print(cb)

Can this be implemented in this project?
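The Java port does not appear to have a get_openai_callback() equivalent yet. A hypothetical accumulator that a response hook could feed usage numbers into might look like this (the class and method names are illustrative, not the project's API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class UsageTracker {

    // Token counters, accumulated across every LLM call made inside a chain run.
    private final AtomicLong promptTokens = new AtomicLong();
    private final AtomicLong completionTokens = new AtomicLong();

    // A response hook would call this with the usage block of each response.
    public void record(long prompt, long completion) {
        promptTokens.addAndGet(prompt);
        completionTokens.addAndGet(completion);
    }

    public long getPromptTokens() {
        return promptTokens.get();
    }

    public long totalTokens() {
        return promptTokens.get() + completionTokens.get();
    }
}
```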

How to implement LangChain-style document loading and text splitting

Like this:

    documents = load_docs(app.config['UPLOAD_FOLDER'])
    # initialize the splitter
    text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
    # split the loaded documents
    split_docs = text_splitter.split_documents(documents)
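A fixed-window character splitter with overlap, in the spirit of Python's CharacterTextSplitter, can be sketched in plain Java. SimpleCharacterSplitter is illustrative and not the project's API; the real CharacterTextSplitter also prefers splitting on a separator:

```java
import java.util.ArrayList;
import java.util.List;

public class SimpleCharacterSplitter {

    // Split text into chunks of at most chunkSize characters, with
    // chunkOverlap characters shared between consecutive chunks.
    public static List<String> split(String text, int chunkSize, int chunkOverlap) {
        if (chunkOverlap >= chunkSize) {
            throw new IllegalArgumentException("chunkOverlap must be smaller than chunkSize");
        }
        List<String> chunks = new ArrayList<>();
        int step = chunkSize - chunkOverlap;
        for (int start = 0; start < text.length(); start += step) {
            int end = Math.min(start + chunkSize, text.length());
            chunks.add(text.substring(start, end));
            if (end == text.length()) {
                break;
            }
        }
        return chunks;
    }
}
```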

Jvm crash

I got a JVM crash using both azul/zulu-openjdk-alpine:17-latest and amazoncorretto:17-alpine-jdk when I tried to index documents through Pinecone. Curious if anyone else has experienced the same issue.

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000002026, pid=1, tid=68
#
# JRE version: OpenJDK Runtime Environment Zulu17.44+15-CA (17.0.8+7) (build 17.0.8+7-LTS)
# Java VM: OpenJDK 64-Bit Server VM Zulu17.44+15-CA (17.0.8+7-LTS, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C [libquadmath.so.0+0x26b0]
#
# Core dump will be written. Default location: /srv/core-api/core.1
#
# An error report file with more information is saved as:
# /srv/core-api/hs_err_pid1.log
#
# If you would like to submit a bug report, please visit:
# http://www.azul.com/support/
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Support agent runs with maxRetryTimes

Once an agent has 3-5 tools to run, LangChain will call the LLM (e.g. OpenAI) more than 3-5 times, and if the LLM times out or throws some other exception, the whole agent run fails. We need a maxRetryTimes mechanism like the Python edition has.
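A minimal sketch of the requested behavior, independent of the library's actual agent interfaces (`callWithRetry` is hypothetical):

```java
import java.util.function.Supplier;

public class RetrySketch {

    // Invoke the call up to maxRetryTimes attempts; rethrow the last failure
    // only after every attempt has failed.
    public static <T> T callWithRetry(Supplier<T> call, int maxRetryTimes) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxRetryTimes; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;  // remember the failure and retry
            }
        }
        throw last;
    }
}
```

A production version would likely add a backoff delay between attempts, which matters for the HTTP 429 rate-limit case above.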

why use JDK17?

Many enterprises are currently using JDK 8 in their production scenarios.

Using a higher version of JDK may limit the usage scenarios.

I suggest downgrading to JDK 8.
