feat: runnable generator #7691

Draft · wants to merge 1 commit into base: main

Conversation

zabealbe

Add Support for RunnableGenerator in LangChain.js

Summary

This PR fixes an issue where streaming stops working in LangChain.js when .withStructuredOutput() is followed by .pipe() to transform the output structure. The stream should keep emitting incremental results, but the current implementation emits only the last chunk.

Background & Motivation

Issue: #4936

  • When using a LangChain model with .withStructuredOutput() and then applying .pipe() to transform the output structure, streaming no longer works.
  • The issue arises because .pipe() internally creates a RunnableLambda, which buffers all results before invoking the function, preventing the incremental emission of outputs required for streaming.
  • Example of broken behavior:
const model = new ChatOpenAI().withStructuredOutput<MyOutputType>(schema); // schema: a Zod schema describing MyOutputType

const mappedChain = model.pipe((output: MyOutputType) => ({
  newField: output.oldField,
}));

for await (const chunk of mappedChain.stream({ input: "Hello" })) {
  console.log(chunk); // Logs only the last chunk
}
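The buffering problem above can be sketched with plain async iterators, independent of LangChain. Here `modelStream`, `lambdaTransform`, and `generatorTransform` are illustrative names (not library APIs): the lambda-style transform must drain the stream before it can map, while the generator-style transform maps each chunk as it arrives.

```typescript
// A stand-in for a model's structured-output stream, emitting partial results.
async function* modelStream(): AsyncGenerator<{ oldField: string }> {
  for (const part of ["Hel", "Hello", "Hello!"]) {
    yield { oldField: part };
  }
}

// Lambda-style transform: consumes the whole stream before mapping, so the
// caller only ever sees one final value -- the buffering behavior described above.
async function lambdaTransform(
  input: AsyncIterable<{ oldField: string }>
): Promise<{ newField: string }> {
  let last = { oldField: "" };
  for await (const chunk of input) last = chunk; // buffers to the end
  return { newField: last.oldField };
}

// Generator-style transform: maps each chunk as it arrives, so streaming
// is preserved end-to-end.
async function* generatorTransform(
  input: AsyncIterable<{ oldField: string }>
): AsyncGenerator<{ newField: string }> {
  for await (const chunk of input) {
    yield { newField: chunk.oldField };
  }
}
```

Awaiting `lambdaTransform(modelStream())` produces a single final value, whereas iterating `generatorTransform(modelStream())` yields three incremental chunks.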

Key Changes

  • Implemented RunnableGenerator to handle streaming transformations without accumulating input.
const model = new ChatOpenAI().withStructuredOutput<MyOutputType>(schema); // schema: a Zod schema describing MyOutputType

const mappedChain = model.pipe(new RunnableGenerator({
  func: (output: MyOutputType) => ({
    newField: output.oldField,
  }),
}));

for await (const chunk of mappedChain.stream({ input: "Hello" })) {
  console.log(chunk); // Streams each transformed chunk as it arrives
}
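The core idea behind such a runnable can be sketched as follows. This is a hypothetical `GeneratorTransform` class for illustration only, not the PR's actual implementation: the wrapper holds a per-chunk function and applies it lazily to an input stream instead of collecting the stream first.

```typescript
// Hypothetical illustration of a generator-backed runnable: `func` is applied
// to each incoming chunk and the result is yielded immediately.
class GeneratorTransform<In, Out> {
  constructor(private readonly func: (chunk: In) => Out) {}

  // Chunk-by-chunk transformation: nothing is accumulated, so upstream
  // streaming semantics survive the pipe boundary.
  async *transform(input: AsyncIterable<In>): AsyncGenerator<Out> {
    for await (const chunk of input) {
      yield this.func(chunk);
    }
  }
}

// Example source stream and a transform that renames a field on each chunk.
async function* source(): AsyncGenerator<{ oldField: number }> {
  yield { oldField: 1 };
  yield { oldField: 2 };
}

const renamer = new GeneratorTransform((c: { oldField: number }) => ({
  newField: c.oldField,
}));
```

Because `transform` yields inside the `for await` loop, each mapped chunk reaches the consumer before the next input chunk is even requested.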

Expected Impact

  • Streaming will now work when transforming structured outputs using .pipe().
  • Simplifies handling streaming transformations, reducing complexity for users.
  • Improves real-time processing for cases where output structures need to be modified dynamically.
  • LangChain's Python library already includes RunnableGenerator; this PR adds the equivalent to LangChain.js.

Fixes

Fixes #4936

🚧 Draft: Needs Testing 🚧

This is a draft PR and still needs testing to ensure everything works as expected in all edge cases. Please help by running some tests and sharing feedback!


vercel bot commented Feb 13, 2025

The latest updates on your projects.

langchainjs-docs — ✅ Ready — Updated (UTC): Feb 13, 2025 7:27pm
langchainjs-api-refs — ⬜️ Ignored — Updated (UTC): Feb 13, 2025 7:27pm (1 skipped deployment)
