anthropic[patch]: add vertex and bedrock support, streamResponseChunk… #6206
Conversation
```json
"@anthropic-ai/sdk": "^0.22.0",
"@anthropic-ai/vertex-sdk": "^0.4.0",
```
Making this a required, rather than an optional, dependency adds a library for an edge case. (It also suggests that we can't include this in the google-vertex package, since we're not using this library there.)
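If the dependency were kept at all, one way to avoid shipping it to everyone would be to declare it as an optional peer dependency, so consumers who never touch Vertex don't install it. A minimal sketch of the relevant package.json fields (versions illustrative, not taken from this PR):

```json
{
  "peerDependencies": {
    "@anthropic-ai/vertex-sdk": "^0.4.0"
  },
  "peerDependenciesMeta": {
    "@anthropic-ai/vertex-sdk": {
      "optional": true
    }
  }
}
```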
I just spoke async with the rest of the LangChain team, and we decided not to include Bedrock/Vertex support via the Anthropic package. For Anthropic via Bedrock, the new recommended approach is via `ChatBedrockConverse` in the `@langchain/aws` package. As for Vertex, any changes should be pushed to the `@langchain/google-common` package, which will then be used in `@langchain/google-vertexai`.
I pushed up a commit removing these additions, but still keeping the other refactors you made. Thanks for cleaning that up!
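For anyone landing here, a minimal sketch of that recommended Bedrock path (the model ID and region are illustrative, and `@langchain/aws` is assumed to be installed):

```typescript
import { ChatBedrockConverse } from "@langchain/aws";

// Anthropic models on Bedrock go through the Converse API wrapper.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-5-sonnet-20240620-v1:0", // illustrative Bedrock model ID
  region: "us-east-1",
});

const response = await model.invoke("Hello from Bedrock!");
console.log(response.content);
```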
After this update (0.2.10), Anthropic will stream with bound tools, but it will not stream tool chunks. Related issue: langchain-ai/langgraphjs#253 (closed, but the problem is still ongoing).
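A minimal sketch reproducing the reported behavior (model name illustrative; `tool_call_chunks` is the field that reportedly stays empty):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Return the weather for a city.",
  schema: z.object({ city: z.string() }),
});

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" })
  .bindTools([getWeather]);

const stream = await model.stream("What is the weather in Paris?");
for await (const chunk of stream) {
  // Reported behavior after 0.2.10: the stream yields chunks, but
  // tool-call deltas never show up in chunk.tool_call_chunks.
  console.log(chunk.tool_call_chunks);
}
```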
Add support for vertex and bedrock, optimize stream response chunks

This pull request adds support for vertex and bedrock, and optimizes the `_streamResponseChunks` method for better efficiency.

Changes:
- Added vertex and bedrock support.
- Optimized the `_streamResponseChunks` method.

Question:
- Should the `extractToken` method return tool_use-related tokens? Currently, `extractToken` returns tool_use-related tokens as new tokens to `handleLLMNewToken`. This might be problematic because these tokens are not actual language model output intended for humans.
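To make the question concrete, here is a hypothetical sketch of the distinction (the actual `extractToken` in this PR may differ; the event shapes follow Anthropic's streaming API, where text arrives as `text_delta` events and tool_use arguments as `input_json_delta` events):

```typescript
// Hypothetical, trimmed shape of an Anthropic streaming event.
interface StreamEvent {
  type: string;
  delta?: { type?: string; text?: string; partial_json?: string };
}

// One possible extractToken: only surface human-readable text deltas.
// Returning delta.partial_json for input_json_delta events would feed
// tool_use argument fragments into handleLLMNewToken as if they were text.
function extractToken(event: StreamEvent): string | undefined {
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    return event.delta.text;
  }
  return undefined;
}
```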