Comments (8)
Oops, my bad, sendMessage is something else. You can use the finalMessages() method returned from the stream to get the response result.

runStream.finalMessages().then((finalMessages) => {
  console.log("finalMessages", finalMessages);
});
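For context, runStream here would typically be the AssistantStream returned by the OpenAI beta streaming helper; a minimal sketch, assuming the official openai Node client (threadId and assistantId are placeholders):

import OpenAI from 'openai';

const openai = new OpenAI();

// threadId and assistantId are assumed placeholders for your own values
async function logFinalMessages(threadId: string, assistantId: string) {
  const runStream = openai.beta.threads.runs.stream(threadId, {
    assistant_id: assistantId
  });

  // finalMessages() resolves once the run completes, with the messages
  // the assistant produced during this run
  const finalMessages = await runStream.finalMessages();
  console.log('finalMessages', finalMessages);
}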
Hope this works for your use case, let me know.
Hi @ayepRahman,
You can use the "sendMessage" function present on the AssistantResponse callback to save the current response to your external database:
https://sdk.vercel.ai/docs/api-reference/providers/assistant-response#process-assistantresponsecallback
As of now, the documentation on "sendMessage" has not been updated yet.
If you want to get the whole chat history, you can use the threadId to fetch all the messages:
https://platform.openai.com/docs/api-reference/messages/listMessages
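A minimal sketch of fetching that history, assuming the official openai Node client:

import OpenAI from 'openai';

const openai = new OpenAI();

async function getThreadHistory(threadId: string) {
  // list() returns a paginated result; .data holds the message objects
  const page = await openai.beta.threads.messages.list(threadId);
  return page.data;
}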
Do you by any chance have an example of using sendMessage()?
@ravvi-kumar do you know how to map the response of openai.beta.threads.messages.list to the useAssistant hook's messages properly?

const { setMessages } = useAssistant({
  // ...
})

useEffect(() => {
  if (props.messages) {
    const mapped = props.messages.map((message) => {
      // ...
    })
    setMessages(mapped)
  }
}, [props.messages, setMessages])
I just save the string that the model outputs at the very end of my action.tsx:

const result = await experimental_streamText({
  model: anthropic('claude-3-opus-20240229'),
  maxTokens: 4000,
  temperature: 0,
  frequencyPenalty: 0.5,
  system: systemPromptTemplate,
  messages: [
    ...aiState.get().map((info: any) => ({
      role: info.role,
      content: info.content,
      name: info.name
    }))
  ]
});

let fullResponse = '';

// Accumulate the streamed deltas and re-render the UI as they arrive
for await (const textDelta of result.textStream) {
  fullResponse += textDelta;
  uiStream.update(<BotMessage>{fullResponse}</BotMessage>);
}
uiStream.done();

aiState.done([
  ...aiState.get(),
  {
    role: 'assistant',
    content: fullResponse,
    id: uuidv4()
  }
]);

// Persist the completed response once streaming has finished
saveChatToRedis(
  CurrentChatSessionId,
  session.id,
  currentUserMessage,
  fullResponse,
  Array.from(uniqueReferences)
);
})();

return {
  id: Date.now(),
  display: uiStream.value,
  chatId: CurrentChatSessionId
};
}
I have not had any issues with saving them.
In an API route I used the LangChainStream callbacks:
let partialCompletion = '';

const { stream, handlers } = LangChainStream({
  onToken: (token: string) => {
    partialCompletion += token;
  },
  onFinal: (completion) => {
    try {
      saveChatToRedis(
        chatSessionId,
        userId,
        messages[messages.length - 1].content,
        partialCompletion,
        Array.from(uniqueReferences)
      );
      cacheChatCompletionVectorDB(
        messages[messages.length - 1].content,
        completion,
        'cache-chat',
        Array.from(uniqueReferences)
      );
    } catch (error) {
      console.error('Error saving chat to database:', error);
    }
    revalidatePath('/chatai', 'layout');
  }
});
The partial accumulation in onToken is there in case the user stops the chat in the middle of the streaming response, so we always have the latest tokens stored in the database. :)
Hope this helps.
> @ravvi-kumar do you know how to map the response of openai.beta.threads.messages.list to the useAssistant hook's messages properly? [...]
Something like this here:
// Interleave user and assistant messages into a single ordered list
for (
  let i = 0;
  i < Math.max(userMessages.length, assistantMessages.length);
  i++
) {
  if (userMessages[i]) {
    combinedMessages.push({
      role: 'user',
      id: `user-${i}`,
      content: userMessages[i]
    });
  }
  if (assistantMessages[i]) {
    combinedMessages.push({
      role: 'assistant',
      id: `assistant-${i}`,
      content: assistantMessages[i]
    });
  }
}
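For completeness, here is a hedged sketch of how userMessages and assistantMessages could be built from the openai.beta.threads.messages.list response (assuming text-only content parts; the helper name is mine):

import OpenAI from 'openai';

const openai = new OpenAI();

// Hypothetical helper: pulls plain strings out of a thread, oldest first
async function splitThreadMessages(threadId: string) {
  const page = await openai.beta.threads.messages.list(threadId, {
    order: 'asc'
  });

  const userMessages: string[] = [];
  const assistantMessages: string[] = [];

  for (const message of page.data) {
    // Each message holds an array of content parts; keep only the text ones
    const text = message.content
      .map((part) => (part.type === 'text' ? part.text.value : ''))
      .join('');

    if (message.role === 'user') userMessages.push(text);
    else assistantMessages.push(text);
  }

  return { userMessages, assistantMessages };
}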
> Oops, my bad, sendMessage is something else. You can use the finalMessages() method returned from the stream to get the response result. [...]
Yes, it works, but at the same time I was wondering: how do I properly convert the messages coming from OpenAI into the same format that forwardStream() produces?
The LangChainStream from the Vercel SDK does not work (still today) with LangChain's llm.stream(). For your API route, you can create your own LangChainStream to handle it.
This is what LangChainStreamCustom() looks like. It only has onCompletion, but you get the gist and can implement the other callbacks like onToken etc. if you need them:
export const LangChainStreamCustom = (
  stream: any,
  { onCompletion }: { onCompletion: (completion: string) => Promise<void> }
) => {
  let completion = ''

  const transformStream = new TransformStream({
    transform(chunk, controller) {
      // Accumulate the decoded text while passing each chunk through untouched
      completion += new TextDecoder('utf-8').decode(chunk)
      controller.enqueue(chunk)
    },
    flush(controller) {
      // The source stream is done: hand the full completion to the callback
      onCompletion(completion)
        .then(() => {
          controller.terminate()
        })
        .catch((e: any) => {
          console.error('Error', e)
          controller.terminate()
        })
    }
  })

  stream.pipeThrough(transformStream)
  return transformStream.readable
}
Then in your API route, you get the stream or response from the LangChain .stream() call:

//....
const response = await llm.stream({}) // assuming this comes from a typical LangChain llm
Then you can use it like all the other Vercel stream helpers (OpenAIStream, AnthropicStream, etc.):
const stream = LangChainStreamCustom(response, {
  onCompletion: async (completion: string) => {
    console.log('COMPLETE!', completion)
  }
})

return new StreamingTextResponse(stream)