Comments (14)
I might need to call the LLM with tool results, but so far, for this use case, getting back something like `{ role: "tool", content: { input: "email", message: "Please provide your email" } }` would be more than enough.
Bonus points if the message could be streamed as an object as well.
from ai.
Well, we can try to figure it out together :D I'm thinking that agents could serve as a sort of router in the chat, e.g. deciding when the AI should respond with a message plus an input, when it should respond with just a message, and when it should execute some other action based on the whole conversation.
I think agents are designed specifically for that as a concept. In my case I don't need streaming that much; even if the response is not streamed, I would be fine with that.
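The "agent as router" idea above could be sketched as a plain function over the conversation. This is purely illustrative: the `Message`, `RouteDecision`, and `route` names and the toy regex rules are my own, not a Vercel AI SDK API; a real router would likely be an LLM call or an agent.

```typescript
// Minimal sketch of the "agent as router" idea: inspect the conversation
// and decide what kind of response the assistant should produce.
// All names and rules here are illustrative, not part of the Vercel AI SDK.
type Message = { role: "user" | "assistant"; content: string };

type RouteDecision =
  | { kind: "message" }                            // plain text reply
  | { kind: "message_with_input"; input: string }  // reply plus a UI input field
  | { kind: "action"; action: string };            // execute some other action

function route(messages: Message[]): RouteDecision {
  const last = messages[messages.length - 1];
  // Toy keyword rules standing in for an LLM-based router.
  // Check "unsubscribe" first so it is not shadowed by "subscribe".
  if (/\bunsubscribe\b/i.test(last.content)) {
    return { kind: "action", action: "unsubscribe" };
  }
  if (/\bsubscribe\b|sign me up/i.test(last.content)) {
    return { kind: "message_with_input", input: "email" };
  }
  return { kind: "message" };
}
```

For example, `route([{ role: "user", content: "Please sign me up" }])` yields `{ kind: "message_with_input", input: "email" }`, which the UI layer could map to "render message + email input".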
Currently this is not supported. I'm exploring options for adding core tool call and core tool result support to useChat.
In-progress PR: #1514. This will be a larger change; I expect it to take a few days.
A few days seems more than reasonable! I am not 100% bound to this right now, so I can wait. Is there maybe a different way to achieve what I want in the meantime?
@lgrammel thanks a lot for your great work and prompt response, you rock 🚀
Thanks! You could try using the legacy providers, but then you'd need to refactor once this lands.
@lgrammel I saw there was an example of annotations that might be of use here. My use case is that when there is a specific message response (with some metadata), I want to render that message plus some UI. E.g. I would expect something like:

```ts
{ role: 'assistant', message: 'Provide your email', annotations: { emailInput: true } }
```

Is this a good use case for this feature? If so, is there good documentation or an example of how to do this? I'm happy to contribute to the docs myself if you give me some basic directions and I manage to figure this out.
I know that RSCs might be a better fit for this whole thing, but I prefer to use useChat() and core in this project.
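The rendering side of this use case can be sketched as a pure function. This is a minimal sketch assuming the annotation shape from the example above (`{ emailInput: true }`); the `Annotated` type and `extraUiFor` helper are illustrative, not a Vercel AI SDK API.

```typescript
// Sketch: decide which extra UI to render based on a message's annotations.
// The annotation shape follows the example above; nothing here is an SDK API.
type Annotated = {
  role: "user" | "assistant";
  message: string;
  annotations?: { emailInput?: boolean };
};

function extraUiFor(msg: Annotated): "email-input" | null {
  // A React client would switch on this value to mount the matching component.
  return msg.annotations?.emailInput ? "email-input" : null;
}
```

For example, `extraUiFor({ role: "assistant", message: "Provide your email", annotations: { emailInput: true } })` returns `"email-input"`, while a message without annotations yields `null`.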
@d-ivashchuk yes, this should be possible. You could define tools without an execute method, handle the tool calls on the client to add information to an array (or alternatively handle the tool calls on the server and forward stream data), and then render the components on the client.
I've added an example of how to show a client component with useChat in v3.1.2: https://github.com/vercel/ai/pull/1523/files, but it will get easier and cleaner with the new feature.
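The client-side half of that pattern, collecting tool calls into an array for rendering, can be sketched in isolation. The `ToolCall` shape below is simplified and illustrative, not the SDK's exact type.

```typescript
// Sketch of the pattern above: tools defined without an execute method
// produce tool calls that the client accumulates and renders as components.
// The ToolCall shape is illustrative, not the SDK's exact type.
type ToolCall = { toolName: string; args: Record<string, unknown> };

function collectToolCalls(pending: ToolCall[], call: ToolCall): ToolCall[] {
  // Return a new array rather than mutating, so this works as a
  // React state update (e.g. setCalls(prev => collectToolCalls(prev, call))).
  return [...pending, call];
}
```

A client would then map the accumulated array to components, e.g. rendering an email input for every `askForEmail` tool call it has collected.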
@lgrammel thanks a lot for the example! Makes sense.
Anecdotally, the best docs I could find yesterday were your announcement tweet for the streamData feature 🙈
That's what I tried yesterday:

```ts
import { StreamData, StreamingTextResponse } from "ai";

// Inside a POST route handler; `result` comes from an upstream streaming call.
const data = new StreamData();
const stream = result.toAIStream({
  async onFinal(completion) {
    console.log(completion);
    // Attach an extra payload to the stream for the client.
    data.append({ test: "hello" });
    await data.close();
  },
});
return new StreamingTextResponse(stream, {}, data);
```
This is not ideal, tbh, as it is missing something. I think I could use appendMessageAnnotation:
```ts
async onFinal(completion) {
  // Pseudocode: a separate LLM call that classifies the completion, e.g.
  // "Is this message prompting for an email? If yes, return { email: true }".
  // classifyWithLlm is a placeholder for that call.
  const jsonCompletion = await classifyWithLlm(completion);
  if (jsonCompletion.email) {
    data.appendMessageAnnotation({ email: true });
  }
  await data.close();
}
```

It requires a separate call to the completions API, which makes it not ideal here. I guess with tools this would all be much simpler.
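To make the shape of that classification step concrete without a second LLM call, here is a deterministic stand-in. The regex heuristic and both function names are purely illustrative; a real setup would use the extra LLM call described above, or better, a tool call, in place of `classifyCompletion`.

```typescript
// Deterministic stand-in for the extra classification call described above:
// check whether the assistant's completion is asking the user for an email.
// Purely illustrative; a real setup would use an LLM or a tool call instead.
function classifyCompletion(completion: string): { email: boolean } {
  return { email: /\b(your |an )?e-?mail\b/i.test(completion) };
}

function annotationFor(completion: string): { email: true } | null {
  // Mirrors the appendMessageAnnotation branch: return the annotation
  // payload when the completion asks for an email, otherwise nothing.
  return classifyCompletion(completion).email ? { email: true } : null;
}
```

So `annotationFor("Please provide your email")` yields `{ email: true }`, while an unrelated completion yields `null` and no annotation is appended.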
@d-ivashchuk yes, I think this is a great use case for tools. Do you need to call the LLM with the tool results, or are they just needed to display a UI component?
Hey @lgrammel, did you manage to release it in the form we discussed? I see a related PR merged, but I'm not sure if all the work has been done. Thanks a lot for the answer!
@d-ivashchuk I have implemented a slightly different version that focuses on user interactions as client-side tools. I have the sense that you might need something different, e.g. stream data, for your use case. I'm thinking about some additions in that area. Would it be sufficient for you if you could attach metadata to an assistant response?
An unrelated question to the above, but still in the domain of tool calling:
when I call a tool, on the frontend I now have access to the result in toolInvocations, but the message content is empty. Is there a way to invoke a tool that responds to the user and still executes the tool code, @lgrammel?
> when I call a tool, on the frontend I now have access to the result in toolInvocations, but the message content is empty. Is there a way to invoke a tool that responds to the user and still executes the tool code, @lgrammel?

Running into the same issue with content being empty 💯
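One client-side workaround for the empty-content case can be sketched as follows. This is a minimal sketch under my own assumptions: the `ToolInvocation` and `ChatMessage` shapes are simplified from what useChat actually exposes, and the fallback logic is not an SDK feature.

```typescript
// Sketch: when an assistant message's content is empty but a tool was
// invoked, derive display text from the tool result instead.
// The shapes below are simplified assumptions; check the SDK's actual types.
type ToolInvocation = { toolName: string; result?: { message?: string } };
type ChatMessage = { content: string; toolInvocations?: ToolInvocation[] };

function displayText(msg: ChatMessage): string {
  if (msg.content) return msg.content;
  // Fall back to the first tool result that carries a user-facing message.
  const withMessage = msg.toolInvocations?.find((t) => t.result?.message);
  return withMessage?.result?.message ?? "";
}
```

A chat UI could render `displayText(message)` instead of `message.content` so that tool-only turns still show something to the user.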