microsoft / typechat
TypeChat is a library that makes it easy to build natural language interfaces using types.
Home Page: https://microsoft.github.io/TypeChat/
License: MIT License
Hey I’m solo developing https://github.com/jxnl/openai_function_call
Would love to work together to find abstractions. I’ll likely take and cite a lot of the great documentation you have.
From the restaurant example, we find this type in the schema:
export type Pizza = {
itemType: "pizza";
// default: large
size?: "small" | "medium" | "large" | "extra large";
// toppings requested (examples: pepperoni, arugula)
addedToppings?: string[];
// toppings requested to be removed (examples: fresh garlic, anchovies)
removedToppings?: string[];
// default: 1
quantity?: number;
// used if the requester references a pizza by name
name?: "Hawaiian" | "Yeti" | "Pig In a Forest" | "Cherry Bomb";
};
It appears that values in strings and comments (in English) are used to help construct the response JSON.
Would there need to be one schema file per language where all strings and comments are in a given language?
Does the library allow the use of the current application state? For example, if a user requests "place an order from the basket".
From my understanding, you currently only get the answer and a success flag back.
It would also be quite interesting to get information on tokens used, etc. (which OpenAI provides, for example).
Is there a way to get that information out, or does this need a modification in the code? I'd be happy to help, I just need a little guidance on where to start.
This is the usage information you get back when using the OpenAI API:
"usage":{
"prompt_tokens":13,
"completion_tokens":7,
"total_tokens":20
},
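One possible shape for this (a sketch only; UsageTracker and CompletionWithUsage are hypothetical names, not TypeChat API): wrap the raw completion response and record its usage object before handing the text on.

```typescript
// Sketch: TypeChat's language model interface does not expose usage today,
// so this wraps a hypothetical raw completion response and carries usage through.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface CompletionWithUsage {
  content: string;
  usage?: Usage;
}

// Collects usage from each call so callers can inspect token consumption afterwards.
class UsageTracker {
  readonly calls: Usage[] = [];

  record(response: CompletionWithUsage): string {
    if (response.usage) {
      this.calls.push(response.usage);
    }
    return response.content;
  }

  get totalTokens(): number {
    return this.calls.reduce((sum, u) => sum + u.total_tokens, 0);
  }
}

const tracker = new UsageTracker();
tracker.record({
  content: "{}",
  usage: { prompt_tokens: 13, completion_tokens: 7, total_tokens: 20 },
});
console.log(tracker.totalTokens); // 20
```

A tracker like this could be threaded through whatever completion function you hand to TypeChat.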
The "Restaurant" example has wrong output in the "Usage" section of the README. If I understand the input correctly, it should be:
-2 large pizza with mushrooms
+1 large pizza with mushrooms
+1 large pizza with sausage
1 small pizza with sausage
1 whole Greek salad
1 Pale Ale
1 Mack and Jacks
If this is intentional, then maybe there should be a sentence below it explaining this to avoid confusion, for example:
This shows that TypeChat may not be 100% accurate, and you may want to consider asking the user for confirmation before performing any action. The output here erroneously shows 2 mushroom pizzas and 1 sausage pizza, while it should be 1 mushroom pizza and 2 sausage pizzas (one large and one small).
And the "Input" might be incorrect as well: shouldn't it be only 🍕> instead of 😀> 🍕>?
What do we need to put into the .env file to set up connectivity to ChatGPT?
The following schema is introduced in #20.
// A program consists of a sequence of expressions that are evaluated in order.
export type Program = {
"@steps": Expression[];
}
// An expression is a JSON value, a function call, or a reference to the result of a preceding expression.
export type Expression = JsonValue | FunctionCall | ResultReference;
// A JSON value is a string, a number, a boolean, null, an object, or an array. Function calls and result
// references can be nested in objects and arrays.
export type JsonValue = string | number | boolean | null | { [x: string]: Expression } | Expression[];
// A function call specifies a function name and a list of argument expressions. Arguments may contain
// nested function calls and result references.
export type FunctionCall = {
// Name of the function
"@func": string;
// Arguments for the function
"@args": Expression[];
};
// A result reference represents the value of an expression from a preceding step.
export type ResultReference = {
// Index of the previous expression in the "@steps" array
"@ref": number;
};
However, I don't think FunctionCall should be part of the Expression type, because { [x: string]: Expression } is part of JsonValue. The output of the GPT model is "@steps": Expression[], so the execution order is determined by the steps array. But when we allow a FunctionCall inside an object-like value, the execution order is undetermined. Example:
{
  "@steps": [
    {
      "@func": "func1",
      "@args": [
        {
          "a": {
            "@func": "func2",
            "@args": []
          },
          "b": {
            "@func": "func3",
            "@args": []
          }
        }
      ]
    }
  ]
}
we don't know the execution order of func2 vs func3. With result references instead:
{
  "@steps": [
    {
      "@func": "func3",
      "@args": []
    },
    {
      "@func": "func2",
      "@args": []
    },
    {
      "@func": "func1",
      "@args": [
        {
          "a": {
            "@ref": 0
          },
          "b": {
            "@ref": 1
          }
        }
      ]
    }
  ]
}
Here, func3 is executed first, then func2.
I'm not sure in what scenarios the GPT model can produce @steps with a nested FunctionCall; in the tests I ran, only ResultReference was produced. Is the GPT model able to understand the difference between "a reference to the result of a previous function call" and "execute it every time it is evaluated"?
I think it would be 'safer' to design the generic schema to be as deterministic as possible. For example, in a financial application, the execution order matters for state mutations of a user's funds.
Therefore, a stricter schema version is:
export type Program = {
"@steps": FunctionCall[];
}
export type Expression = JsonValue | ResultReference;
export type JsonValue = string | number | boolean | null | { [x: string]: Expression } | Expression[];
export type FunctionCall = {
"@func": string;
"@args": Expression[];
};
export type ResultReference = {
"@ref": number;
};
@ahejlsberg @steveluc does this make sense? Have you seen any example that includes FunctionCall in Expression?
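To illustrate why the stricter shape is deterministic, here is a minimal evaluator sketch (runProgram is hypothetical, not TypeChat's actual program runner): steps execute strictly top to bottom, and "@ref" can only point at an earlier result.

```typescript
// Stricter schema from above: every step is a FunctionCall, and expressions
// inside arguments can only be JSON values or references to earlier results.
type Expression = JsonValue | ResultReference;
type JsonValue = string | number | boolean | null | { [x: string]: Expression } | Expression[];
type FunctionCall = { "@func": string; "@args": Expression[] };
type ResultReference = { "@ref": number };
type Program = { "@steps": FunctionCall[] };

// Runs steps strictly in order; "@ref" resolves against already-computed results.
function runProgram(
  program: Program,
  funcs: Record<string, (...args: unknown[]) => unknown>
): unknown[] {
  const results: unknown[] = [];
  const evalExpr = (expr: Expression): unknown => {
    if (expr !== null && typeof expr === "object") {
      if (!Array.isArray(expr) && "@ref" in expr) {
        return results[(expr as ResultReference)["@ref"]];
      }
      if (Array.isArray(expr)) {
        return expr.map(evalExpr);
      }
      return Object.fromEntries(
        Object.entries(expr).map(([key, value]) => [key, evalExpr(value)])
      );
    }
    return expr; // string | number | boolean | null
  };
  for (const step of program["@steps"]) {
    results.push(funcs[step["@func"]](...step["@args"].map(evalExpr)));
  }
  return results;
}

// Mirrors the deterministic example above: func3 runs first, then func2.
const order: string[] = [];
const results = runProgram(
  {
    "@steps": [
      { "@func": "func3", "@args": [] },
      { "@func": "func2", "@args": [] },
      { "@func": "func1", "@args": [{ a: { "@ref": 0 }, b: { "@ref": 1 } }] },
    ],
  },
  {
    func1: (arg) => arg,
    func2: () => { order.push("func2"); return 2; },
    func3: () => { order.push("func3"); return 3; },
  }
);
console.log(order); // order is ["func3", "func2"]
```

Because only ResultReference can appear inside arguments, no nested call can sneak in an unordered side effect.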
I am currently deploying a model and exposing an OpenAI-compatible API. I am modifying model.ts to fit my own model, but I'm experiencing instability when using mpt-30b-chat and mpt-30b-instruct: math functions are frequently called incorrectly, and JSON templates often fail to validate in complex projects. Do you have any suggestions, such as adding a system prompt or modifying the template to accommodate Chinese?
Top-level functions exported:
def add(x: float, y: float): float
def sub(x: float, y: float): float
# ...
string, but then it's generated on the fly.
Tracking issue for:
This is relatively low risk since it only involves interaction with the Spotify service (the route is not open to the Internet), but we will mitigate it anyway since it is easy to check that it is only called once.
Hi all,
When looking at samples like CoffeeShop or Restaurant, we can see schemas like:
export interface BakeryProducts {
type: 'BakeryProducts';
name: 'apple bran muffin' | 'blueberry muffin' | 'lemon poppyseed muffin' | 'bagel';
options: (BakeryOptions | BakeryPreparations)[];
}
or
export type Pizza = {
itemType: 'pizza';
// default: large
size?: 'small' | 'medium' | 'large' | 'extra large';
// toppings requested (examples: pepperoni, arugula)
addedToppings?: string[];
// toppings requested to be removed (examples: fresh garlic, anchovies)
removedToppings?: string[];
// default: 1
quantity?: number;
// used if the requester references a pizza by name
name?: "Hawaiian" | "Yeti" | "Pig In a Forest" | "Cherry Bomb";
};
Here, we use data like the pizza names or the bakery products inside the schema definitions.
Is this a realistic approach?
Usually, we have data from a data source/store, e.g., all the pizzas a restaurant offers. They won't/cannot live inside the schema definitions file in real life :-).
Do you have any thoughts on this design?
Thank you.
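One pragmatic answer (a sketch, not an official TypeChat pattern): since TypeChat consumes the schema as a plain string, that string can be generated from the data source at request time instead of being hard-coded. The functions below are illustrative names.

```typescript
// Turn runtime data (e.g. pizza names from a database) into a literal union.
function toLiteralUnion(values: string[]): string {
  return values.map((v) => JSON.stringify(v)).join(" | ");
}

// Build the schema text on the fly; TypeChat only needs the resulting string.
function buildPizzaSchema(pizzaNames: string[]): string {
  return [
    "export type Pizza = {",
    '  itemType: "pizza";',
    "  // used if the requester references a pizza by name",
    `  name?: ${toLiteralUnion(pizzaNames)};`,
    "};",
  ].join("\n");
}

const schema = buildPizzaSchema(["Hawaiian", "Yeti"]);
console.log(schema);
```

The generated string can then be passed wherever a schema source is expected, so the menu can change without editing a schema file.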
I was using TypeChat in a project I worked on, and it worked fine during development. The deployed version, however, raised the following error from the library: Cannot find module 'readline/promises'.
Could it be that it's not bundled properly, or that it doesn't work in a Lambda environment for some reason?
I need some help here.
I'm wondering how this compares to Open AI's function calling as that's also made "to more reliably get structured data back from the model."
I see that TypeChat aims to be model-agnostic and lets me pass in TS types.
How does the quality of the answer compare? OpenAI finetuned the models to work for function calling. Are TypeChat results as reliable? Or could it be combined?
Did you consider using @azure/openai? If yes, we would love to hear what the pain points were and whether you have any feedback 😊
If not, please check out the following samples of using completions:
As you'll notice, the library works with both openai.com and Azure OpenAI and provides a seamless experience when switching between the two.
Thank you so much for developing such a useful tool. I usually only use Python and other mathematical software, and I am not familiar with TypeScript. I don't know from this library how to give a complete prompt for a specific problem. Can you add a description to the documentation? For example, what is the full prompt for the sentiment task?
Not sure if this is intentional, but the package.json in each of the samples is missing the dependency on typechat, e.g. https://github.com/microsoft/TypeChat/blob/main/examples/calendar/package.json
Does this library consume a lot of the OpenAI API?
Does it change the LLM decoding process, or does it repeat the OpenAI API call for each blank in the JSON format, as Guidance does?
function createRequestPrompt(request: string) {
  return `${prefixPrompt}\nYou are a service that translates user requests into JSON objects of type "${validator.typeName}" according to the following TypeScript definitions:\n` +
    `\`\`\`\n${validator.schema}\`\`\`\n` +
    `The following is a user request:\n` +
    `"""\n${request}\n"""\n` +
    `The following is the user request translated into a JSON object with 2 spaces of indentation and no properties with the value undefined:\n`;
}
Is it possible to support adding a prefix prompt? With this approach, we can define a more customized prompt at the global level.
e.g.
const prefixPrompt = "You need to make inferences about what the user hasn't mentioned, based on what has been provided."
Exporting createAxiosLanguageModel would solve the following problems:
/v1/chat/completions
/**
* An object representing a successful operation with a result of type `T`.
*/
export type Success<T> = {
success: true;
data: T;
};
/**
* An object representing an operation that failed for the reason given in `message`.
*/
export type Error = {
success: false;
message: string;
};
/**
* An object representing a successful or failed operation of type `T`.
*/
export type Result<T> = Success<T> | Error;
/**
* Returns a `Success<T>` object.
* @param data The value for the `data` property of the result.
* @returns A `Success<T>` object.
*/
export declare function success<T>(data: T): Success<T>;
/**
* Returns an `Error` object.
* @param message The value for the `message` property of the result.
* @returns An `Error` object.
*/
export declare function error(message: string): Error;
/**
* Obtains the value associated with a successful `Result<T>` or throws an exception if
* the result is an error.
* @param result The `Result<T>` from which to obtain the `data` property.
* @returns The value of the `data` property.
*/
export declare function getData<T>(result: Result<T>): T;
It would be useful to get access to data typed as unknown even if result.success === false. This would enable you to use the data and create your own repair pipeline.
I'm using this function to get a summary:
export const getSummary = async (mails: string) => {
  const response = await translator.translate(mails);
  if (!response.success) {
    console.log(response);
    throw response;
  }
  const summarizedMail = response.data;
  console.log(JSON.stringify(summarizedMail, undefined, 2));
  if (summarizedMail.summaryObject.type === "unknown") {
    console.log("I didn't understand the following:");
    console.log(summarizedMail.summaryObject.text);
  }
  return response;
};
Here's the schema file:
export interface SummarizedMailItems {
  type: 'mailSummary';
  summarizationLanguage: 'arabic' | 'english' | 'french' | 'spanish';
  summaryParagraph: string;
  summaryBulletpoints: [string, string, string];
}
export interface UnknownText {
  type: 'unknown';
  text: string; // The text that wasn't understood
}
export type SummarizedMail = {
  summaryObject: SummarizedMailItems | UnknownText;
};
I'm getting this error
{
success: false,
message: "JSON validation failed: File '/schema.ts' is not a module.\n'Object' only refers to a type, but is being used as a value here.\nCannot find name 'exports'.\n{\n \"paragraphSummary\": \"..........",
}
Based on TypeChat, I built a small tool that extracts natural language into a specified structure for customizing API endpoints. Is it useful? Please point out any mistakes or leave a comment.
source:
https://github.com/cooder-org/json-translator
demo:
https://nts.cooder.org/
TypeChat currently only distributes CJS targeting a very old ES version. Can TypeChat be configured to distribute both CJS and ESM? It's a relatively straightforward configuration.
A function role that fits within a conversation. We have experiments in Python (e.g. Pypechat).
People use Pydantic a lot for validation. Sending Python doesn't work all that well as a spec language compared to TypeScript, and JSON Schema dumps of Pydantic models don't work as well as TypeScript as a spec language either.
Could we generate TypeScript from something like Pydantic data structures?
Libraries like Pydantic also have certain kinds of validation beyond what any static type system can encode. We can encode those in comments.
We could do the same thing with something like Zod as well.
We don't know how well libraries like Pydantic work on discriminated unions and collections of literals.
One of the nice things about these solutions is that dynamic schema generation (i.e. "my information is all in a database; generate a schema out of that") can be achieved because they all have programmatic APIs.
Using a runtime type validation library sounds nice, but what about TypeChat programs?
Have to extend Pydantic in some way to describe APIs
Something where each ref is inlined and type-checked in that manner.
Will that work? What about the csv example? Table types are basically opaque, but exist across values.
Problem with this approach and opaque values (things that can't be JSONy) is... well, let's dive into the current programs approach.
Given the following API...
interface API {
getThing(...): Thing;
processStuff({ thing: Thing, a: ..., b: ... }): ...;
}
for an intent, a language model will generate something like...
{
"@steps": [
...,
{
"@func": {
"name": "...",
"args": [
{
"thing": { "@ref": 0 },
"a": "...",
"b": "..."
}
]
}
}
]
}
The runtime replaces { "@ref": 0 } with the earlier value. If we did this for Python and .NET, we would probably do the same for TypeScript as well.
Does this validation approach work? Don't you need an exemplar value for each return type?
Forget Python, how does this work with up-front validation?
interface API {
getThing(): { x: number, y: number };
eatThing(value: { x: number, y: number }): void
}
could generate
{
  "@steps": [
    {
      "@func": {
        "name": "getThing",
        "args": []
      }
    },
    {
      "@func": {
        "name": "eatThing",
        "args": [{ "@ref": 0 }]
      }
    }
  ]
}
which turns into...
{
  "@steps": [
    {
      "@func": {
        "name": "getThing",
        "args": []
      }
    },
    {
      "@func": {
        "name": "eatThing",
        "args": [{ "@func": { "name": "getThing", "args": [] } }]
      }
    }
  ]
}
so the call to getThing is inlined into the arguments of eatThing. But that's not the same thing that's in TypeChat today: this doesn't do up-front validation, it validates at each step of evaluation.
We might be able to figure something out with runtime type validation libraries to do up-front validation.
Is up-front validation important?
type StuffArg = {
thing: Thing,
a: number,
b: number
}
interface API {
eatThing(value: StuffArg): void
}
But TypeChat programs permit some amount of structural construction - object literals etc.
Could come up with a very minimal type-checker across APIs.
How do you deal with the divergence between how this type-checks versus how it all type-checks in the behind-the-scenes implementation of the API?
We will need to prototype this out a bit.
With this interface
export interface DateTime {
type: 'dateTime',
dateTime?: Date;
};
The JSON results look good, but validation fails:
JSON validation failed: Cannot find name 'Date'.
{
"entities": [
{
"type": "dateTime",
"dateTime": "2022-08-20T08:30:00.000Z"
}
]
}
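A common workaround (a sketch; DateTimeEntity is a renamed stand-in for the interface above): JSON has no Date values and the validator compiles the schema without the lib's Date type, so declare the field as a string in the schema and convert after validation.

```typescript
// Schema-side type uses a string; comments guide the model toward ISO 8601.
interface DateTimeEntity {
  type: "dateTime";
  // ISO 8601 timestamp, e.g. "2022-08-20T08:30:00.000Z"
  dateTime?: string;
}

// Convert to a real Date only after the JSON has been validated.
function toDate(entity: DateTimeEntity): Date | undefined {
  return entity.dateTime === undefined ? undefined : new Date(entity.dateTime);
}

const entity: DateTimeEntity = {
  type: "dateTime",
  dateTime: "2022-08-20T08:30:00.000Z",
};
console.log(toDate(entity)?.toISOString()); // "2022-08-20T08:30:00.000Z"
```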
/**
* A request processor for interactive input or input from a text file. If an input file name is specified,
* the callback function is invoked for each line in file. Otherwise, the callback function is invoked for
* each line of interactive input until the user types "quit" or "exit".
* @param interactivePrompt Prompt to present to user.
* @param inputFileName Input text file name, if any.
* @param processRequest Async callback function that is invoked for each interactive input or each line in text file.
*/
export declare function processRequests(interactivePrompt: string, inputFileName: string | undefined, processRequest: (request: string) => Promise<void>): Promise<void>;
If I didn't miss anything, there is currently no way to pass multiline text to processRequests. Since prompts can be written over multiple lines, it would be convenient to have that option.
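As a stopgap, input can be pre-split into multi-line requests outside the library. splitRequests below is a hypothetical helper, assuming requests in a file are separated by blank lines.

```typescript
// Split file content into multi-line requests separated by one or more blank lines.
function splitRequests(text: string): string[] {
  return text
    .split(/\r?\n\s*\r?\n/)
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0);
}

const requests = splitRequests("first line\nstill first request\n\nsecond request\n");
console.log(requests.length); // 2
```

Each resulting chunk can then be passed to your request callback in place of a single line.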
Thank you for crafting such a valuable library. Integrating it with Deno would offer enhanced flexibility for its users.
Any thoughts on how to use TypeChat in conversation-style interactions? In my use case, there is a need to go back and forth with the LLM, refining queries. In your coffee shop example, something like this:
User: Two tall lattes. The first one with no foam.
Assistant: Two tall lattes coming up.
User: The second one with whole milk. Actually make the first one a grande.
Assistant: One grande latte, one tall latte with whole milk. Coming up.
Hello!
I included some enums in my schema, but unfortunately the library seems incompatible with schemas containing enums.
My JSON data includes values that should correspond to valid enum keys, but the validator seems to incorrectly reject these values.
Here is part of the error and the CURRENCY enum:
Error: Type '"EUR"' is not assignable to type 'CURRENCY'. {... "currency": "EUR", ...}
export enum CURRENCY { USD = 'USD', EUR = 'EUR', GBP = 'GBP', }
That worked using a union type like the following:
type CURRENCY = 'USD' | 'EUR' | 'GBP';
Will enums be supported in the future, or is my implementation wrong?
Do you think type safety is guaranteed with a union type?
Thank you!
Is there any possibility to integrate NLP providers, like:
Text analysis https://azure.microsoft.com/en-us/products/ai-services/text-analytics
Amazon Comprehend
What I understand is that TypeChat can be an alternative to the LangChain library, but in the whole documentation I only see it being used with OpenAI's GPT models. So, is there any option to use this with an open-source model like LLaMA or Falcon?
Hi @steveluc! Can we make TypeChat platform-agnostic in the near future? Remove Node.js built-in dependencies like the fs module so that it runs in an edge runtime or a web browser without any concern.
We could provide a higher-level abstraction layer or an adapter that lets users supply their own fs or readFile implementation to avoid the dependency.
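A sketch of what such an adapter could look like (FileReader, createMemoryFileReader, and loadSchema are hypothetical names, not proposed TypeChat API): Node can back the interface with fs, while edge runtimes pass a fetch- or memory-backed implementation.

```typescript
// The only capability the library would need from the host environment.
interface FileReader {
  readFile(path: string): Promise<string>;
}

// An in-memory implementation, usable in browsers, workers, and tests.
function createMemoryFileReader(files: Record<string, string>): FileReader {
  return {
    async readFile(path: string): Promise<string> {
      const content = files[path];
      if (content === undefined) {
        throw new Error(`File not found: ${path}`);
      }
      return content;
    },
  };
}

// Library code depends only on the interface, never on Node's fs module.
async function loadSchema(reader: FileReader, path: string): Promise<string> {
  return reader.readFile(path);
}

const reader = createMemoryFileReader({ "schema.ts": "export type T = { x: number };" });
loadSchema(reader, "schema.ts").then((s) => console.log(s));
```

On Node, the same interface could simply delegate to fs/promises readFile.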
Hello
I've been playing around with a meal plan app, for which I've spent a lot of time figuring out how to get ChatGPT to return results as JSON.
For this reason, when I heard about TypeChat, it sounded like a perfect solution.
I'm not sure whether it's because my type definitions are a bit complex, but the resulting prompts make ChatGPT think that I want an example of what a resulting JSON could look like.
Here's a link to a chat I made with the prompt generated by TypeChat:
https://chat.openai.com/share/5bce537a-f9b8-4fbc-afd7-7ebb724b3f89
In my own approach I've had success with ending my prompt with the type definition. Like for example:
"Your response should be in JSON format {meals: {"description": string, "ingredients": {"name": string, "quantity": number, "unit": string}[], "directions": string[]}[]}."
Have you considered this sort of approach?
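The approach described above can be sketched as a plain prompt builder (buildPrompt and the MealPlan type are illustrative, not TypeChat code): the type definition goes at the very end of the prompt.

```typescript
// Build a prompt that ends with the type definition, as described above.
function buildPrompt(request: string, schema: string, typeName: string): string {
  return (
    `Translate the following user request into JSON:\n` +
    `"""\n${request}\n"""\n` +
    `Your response should be a JSON object of type "${typeName}" ` +
    `matching this TypeScript definition:\n${schema}\n`
  );
}

const prompt = buildPrompt(
  "a week of vegetarian dinners",
  "type MealPlan = { meals: { description: string }[] };",
  "MealPlan"
);
console.log(prompt.endsWith("};\n")); // true
```

Whether trailing placement actually reduces the "here's an example" misinterpretation would need A/B testing against TypeChat's default prompt.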
When I want to use TypeChat to do something great on a Cloudflare Worker, I hit some errors at build time:
- warn No build cache found. Please configure build caching for faster rebuilds. Read more: https://nextjs.org/docs/messages/no-cache
Attention: Next.js now collects completely anonymous telemetry regarding usage.
This information is used to shape Next.js' roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://nextjs.org/telemetry
- info Creating an optimized production build...
Warning: For production Image Optimization with Next.js, the optional 'sharp' package is strongly recommended. Run 'yarn add sharp', and Next.js will use it automatically for Image Optimization.
Read more: https://nextjs.org/docs/messages/sharp-missing-in-production
Failed to compile.
../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/interactive.js:7:29
Module not found: Can't resolve 'fs'
https://nextjs.org/docs/messages/module-not-found
Import trace for requested module:
../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/index.js
./app/api/typeChat/route.ts
../node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/next/dist/build/webpack/loaders/next-edge-app-route-loader/index.js?absolutePagePath=private-next-app-dir%2Fapi%2FtypeChat%2Froute.ts&page=%2Fapi%2FtypeChat%2Froute&appDirLoader=bmV4dC1hcHAtbG9hZGVyP25hbWU9YXBwJTJGYXBpJTJGdHlwZUNoYXQlMkZyb3V0ZSZwYWdlPSUyRmFwaSUyRnR5cGVDaGF0JTJGcm91dGUmcGFnZVBhdGg9cHJpdmF0ZS1uZXh0LWFwcC1kaXIlMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlLnRzJmFwcERpcj0lMkZob21lJTJGcnVubmVyJTJGd29yayUyRmFzay1jb2RlYmFzZSUyRmFzay1jb2RlYmFzZSUyRnNyYyUyRmFwcCZhcHBQYXRocz0lMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlJnBhZ2VFeHRlbnNpb25zPXRzeCZwYWdlRXh0ZW5zaW9ucz10cyZwYWdlRXh0ZW5zaW9ucz1qc3gmcGFnZUV4dGVuc2lvbnM9anMmYmFzZVBhdGg9JmFzc2V0UHJlZml4PSZuZXh0Q29uZmlnT3V0cHV0PSZwcmVmZXJyZWRSZWdpb249Jm1pZGRsZXdhcmVDb25maWc9ZTMwJTNEIQ%3D%3D&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!
../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/interactive.js:8:35
Module not found: Can't resolve 'readline/promises'
https://nextjs.org/docs/messages/module-not-found
Import trace for requested module:
../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/index.js
./app/api/typeChat/route.ts
../node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/next/dist/build/webpack/loaders/next-edge-app-route-loader/index.js?absolutePagePath=private-next-app-dir%2Fapi%2FtypeChat%2Froute.ts&page=%2Fapi%2FtypeChat%2Froute&appDirLoader=bmV4dC1hcHAtbG9hZGVyP25hbWU9YXBwJTJGYXBpJTJGdHlwZUNoYXQlMkZyb3V0ZSZwYWdlPSUyRmFwaSUyRnR5cGVDaGF0JTJGcm91dGUmcGFnZVBhdGg9cHJpdmF0ZS1uZXh0LWFwcC1kaXIlMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlLnRzJmFwcERpcj0lMkZob21lJTJGcnVubmVyJTJGd29yayUyRmFzay1jb2RlYmFzZSUyRmFzay1jb2RlYmFzZSUyRnNyYyUyRmFwcCZhcHBQYXRocz0lMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlJnBhZ2VFeHRlbnNpb25zPXRzeCZwYWdlRXh0ZW5zaW9ucz10cyZwYWdlRXh0ZW5zaW9ucz1qc3gmcGFnZUV4dGVuc2lvbnM9anMmYmFzZVBhdGg9JmFzc2V0UHJlZml4PSZuZXh0Q29uZmlnT3V0cHV0PSZwcmVmZXJyZWRSZWdpb249Jm1pZGRsZXdhcmVDb25maWc9ZTMwJTNEIQ%3D%3D&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!
> Build failed because of webpack errors
ELIFECYCLE Command failed with exit code 1.
Error: Process completed with exit code 1.
The calendar example's actions are similar to the intents one would define for Dialogflow or Alexa. How can OpenAI be used as an intent NLU engine?
The issue is that "yes" and "yes, please" match the YesIntent, but "ok" and other affirmative responses do not.
Inputs:
Type schema:
// The following types define the structure of an object of type BotIntent that represents a user request that matches most closely to the sample or synonyms
export type BotIntent = YesIntent | NoIntent | UnknownIntent;
// if the user types text that closely matches 'yes' or a synonym, this intent is used
export type YesIntent = {
intentName: 'YesIntent';
sample: 'yes';
text: string;
};
// if the user types text that closely matches 'no' or a synonym, this intent is used
export type NoIntent = {
intentName: 'NoIntent';
sample: 'no';
text: string;
};
// if the user types text that can not easily be understood as a bot intent, this intent is used
export interface UnknownIntent {
intentName: 'UnknownIntent';
sample: 'unknown';
// text typed by the user that the system did not understand
text: string;
}
How to model more complicated intents with required and optional entities?
I'd like to customize the prompt for some fields and have that imported from the schema. For the calendar example, I'd like to prompt the LLM to give me startTime and endTime in ISO 8601 format:
export type EventTimeRange = {
  /** Provide the time in the system's timezone; assume the user is referring to a future date */
  startTime?: Date;
  /** Expect endTime to be in ISO 8601 format; if not otherwise specified, assume the user is referring to a future date; it should always be equal to or later than startTime */
  endTime?: string;
  /** Expect duration to be in minutes */
  duration?: string;
};
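Comments like these are hints to the model, not guarantees, so a post-validation check may still be needed. A sketch, assuming both fields arrive as ISO 8601 strings rather than Date values:

```typescript
// Loose ISO 8601 check: shape via regex, then parseability via Date.parse.
function isIso8601(value: string): boolean {
  return (
    /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}(:\d{2}(\.\d+)?)?(Z|[+-]\d{2}:\d{2})?$/.test(value) &&
    !Number.isNaN(Date.parse(value))
  );
}

// Enforce the schema comments: valid timestamps, and endTime >= startTime.
function checkRange(startTime?: string, endTime?: string): boolean {
  if (startTime && !isIso8601(startTime)) return false;
  if (endTime && !isIso8601(endTime)) return false;
  if (startTime && endTime && Date.parse(endTime) < Date.parse(startTime)) return false;
  return true;
}

console.log(checkRange("2024-05-01T09:00:00Z", "2024-05-01T10:00:00Z")); // true
console.log(checkRange("2024-05-01T10:00:00Z", "2024-05-01T09:00:00Z")); // false
```

A failed check could then feed a repair prompt back to the model.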
This library is great, but how can I add additional validation?
For example, how can I ensure that a string doesn't contain newlines?
GPT-3 will often ignore instructions to return a string in a certain format, and it would be great if this library could retry until the right response (e.g. one passing validation) is returned.
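A retry loop along those lines can be sketched as follows (completeWithRetry and the model stand-in are hypothetical; TypeChat's translator already retries with a repair prompt on type errors, but a custom check like "no newlines" needs your own loop):

```typescript
// Keep asking the model until a custom check passes or attempts run out.
async function completeWithRetry(
  model: (prompt: string) => Promise<string>,
  prompt: string,
  isValid: (response: string) => boolean,
  maxAttempts = 3
): Promise<string> {
  let last = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    last = await model(prompt);
    if (isValid(last)) return last;
    prompt += `\nThe previous response was invalid. Try again.`;
  }
  throw new Error(`No valid response after ${maxAttempts} attempts: ${last}`);
}

// Example check: reject strings containing newlines.
const noNewlines = (s: string) => !s.includes("\n");

// Mock model that fails once, then succeeds.
const flaky = (() => {
  let calls = 0;
  return async (_prompt: string) => (++calls < 2 ? "bad\nresponse" : "good response");
})();

completeWithRetry(flaky, "say something", noNewlines).then((r) => console.log(r));
```

The same wrapper works around any completion function, including one that internally calls TypeChat.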
Hi :-)
we are not actually sending over a revised JSON program object here:
Line 215 in ec3a37c
Is this an oversight?
Thank you.
When I run the sentiment example, the response from chatglm is always '{\nsentiment : "xxx"\n}', which fails to parse with JSON.parse().
I used FastChat and chatglm2-6b-4bit to mock the OpenAI API.
Because chatglm2-6b-4bit has lower performance than ChatGPT, it cannot return the correct result for the example.
I added one prompt line in the source code ("Please note that ..."):
function createRequestPrompt(request) {
  return `You are a service that translates user requests into JSON objects of type "${validator.typeName}" according to the following TypeScript definitions:\n` +
    `\`\`\`\n${validator.schema}\`\`\`\n` +
    `Please note that the response string must be parseable as a JSON object via JSON.parse(), such as {"aaa": "bbb"}\n` +
    `The following is a user request:\n` +
    `"""\n${request}\n"""\n` +
    `The following is the user request translated into a JSON object with 2 spaces of indentation and no properties with the value undefined:\n`;
}
Now the response from chatglm is '{"sentiment": "xxx"}', which can be parsed as JSON correctly.
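Another mitigation, similar to what TypeChat itself does before parsing: slice from the first "{" to the last "}" so surrounding chatter is discarded (this won't fix unquoted keys, which the added prompt instruction addresses).

```typescript
// Extract the outermost JSON object from a chatty model response.
function extractJson(responseText: string): string | undefined {
  const start = responseText.indexOf("{");
  const end = responseText.lastIndexOf("}");
  return start >= 0 && end > start ? responseText.slice(start, end + 1) : undefined;
}

const raw = 'Sure, here you go:\n{\n  "sentiment": "positive"\n}\nHope that helps!';
console.log(extractJson(raw));
```

Combining both techniques (strict prompt plus extraction) tends to help weaker models the most.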
Typechat currently assumes that schemas are self-contained in one file. In reality, schemas are often spread out over multiple files. Ideally, typechat would enable the usage of multi-file schemas out of the box by resolving imports before stringifying the result.
Is this project still under active development? It has been 3 months since a new npm version was released.
cause: Error: connect ETIMEDOUT 199.59.150.49:443
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -4039,
code: 'ETIMEDOUT',
syscall: 'connect',
address: '199.59.150.49',
port: 443
Due to my use of a VPN, I have set HTTP_PROXY, but it doesn't seem to work, and the code still reports errors.