Function calling metadata missing in streamed chunk response from LMStudio (qwen2.5-7b-instruct-1m) #7708
Example Code

```ts
// server.ts snippet
import express, { Request, Response } from "express";
import { ChatOpenAI } from "@langchain/openai";
import {
  AIMessage,
  AIMessageChunk,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";
import {
  startNewGameTool,
  playMoveTool,
  drawArrowTool,
  calculatorTool,
} from "./tools";

// SYSTEM_PROMPT is defined elsewhere in the project
const app = express();
app.use(express.json());
const model = new ChatOpenAI({
// modelName: "deepseek-r1-distill-qwen-7b",
modelName: "qwen2.5-7b-instruct-1m",
openAIApiKey: "lmstudio", // Placeholder
configuration: {
baseURL: "http://localhost:1234/v1"
},
streaming: true,
});
app.post("/stream-text", async (req: Request, res: Response) => {
try {
const { messages } = req.body;
const chatMessages = [
new SystemMessage(SYSTEM_PROMPT),
...messages.map((m: any) =>
m.role === "user"
? new HumanMessage(m.content)
: new AIMessage(m.content)
),
];
const tools = [
startNewGameTool,
playMoveTool,
drawArrowTool,
calculatorTool,
];
const boundModel = model.bindTools(tools);
const streamIterator = await boundModel.stream(chatMessages);
// Accumulate the streamed chunks into a single final message
let final: AIMessageChunk | undefined;
for await (const chunk of streamIterator) {
if (!final) {
final = chunk;
} else {
final = final.concat(chunk);
}
// I'm checking this chunk here for the appropriate function calling metadata and not seeing it
console.dir(chunk, { depth: null });
}
console.log("Final streaming result:", final);
res.end();
} catch (error) {
console.error("Error in streaming:", error);
res.end();
}
});
```

```ts
// tools.ts
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
export const startNewGameTool = new DynamicStructuredTool({
name: "start_new_game",
description: "Clears the board and starts a new game.",
schema: z.object({}),
func: async () => {
return "Starting a new game now.";
},
});
export const playMoveTool = new DynamicStructuredTool({
name: "play_move",
description:
"Places a stone on the board at a given position for a given color.",
schema: z.object({
color: z.enum(["black", "white"]).describe("Color of the stone."),
position: z
.tuple([z.number(), z.number()])
.describe("Coordinates [x, y] of the move."),
}),
func: async ({ color, position }) => {
return `Playing a ${color} stone at position ${position}`;
},
});
export const drawArrowTool = new DynamicStructuredTool({
name: "draw_arrow",
description: "Draws an arrow between two points on the board.",
schema: z.object({
start: z.tuple([z.number(), z.number()]).describe("Start position [x, y]."),
end: z.tuple([z.number(), z.number()]).describe("End position [x, y]."),
color: z.string().describe("Color of the arrow."),
}),
func: async ({ start, end, color }) => {
return `Drawing an arrow from ${start} to ${end} in ${color}.`;
},
});
export const calculatorTool = new DynamicStructuredTool({
name: "calculator",
description: "Can perform mathematical operations.",
schema: z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
}),
func: async ({ operation, number1, number2 }) => {
// Functions must return strings
if (operation === "add") {
return `${number1 + number2}`;
} else if (operation === "subtract") {
return `${number1 - number2}`;
} else if (operation === "multiply") {
return `${number1 * number2}`;
} else if (operation === "divide") {
return `${number1 / number2}`;
} else {
throw new Error("Invalid operation.");
}
},
});
```

Description

I am running qwen2.5-7b-instruct-1m locally through LMStudio's OpenAI-compatible server (http://localhost:1234/v1, as configured above).

Behavior

This setup was working fine until I added function calling support. Now the streamed response does not include the expected function call metadata. I'm using the console.dir(chunk, { depth: null }) call in the loop above to dive into and examine the data coming from the stream.
If none of the messages in the conversation explicitly invokes a function call (for example, simply saying "Hello"), the response is correctly streamed as a sequence of AIMessageChunk objects containing plain text content. This is good because I can see the text arriving incrementally. However, if I prompt the model to explicitly call a function (e.g., "What is 5+5?" for the calculator tool), the streamed chunks still contain no function call metadata.
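For reference, this is how I understand the metadata is supposed to surface, based on the LangChain.js tool-streaming docs (a sketch that slots into the for-await loop above; `final` is the accumulated chunk):

```ts
// Inside the for-await loop: partial tool calls should arrive on each chunk.
for (const tc of chunk.tool_call_chunks ?? []) {
  console.log("partial tool call:", tc.name, tc.args, tc.index);
}

// After the loop: concat() merges the partial chunks, so the accumulated
// message should expose fully parsed calls.
console.log(final?.tool_calls);
// e.g. [{ name: "calculator", args: { operation: "add", number1: 5, number2: 5 } }]
```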
Expected behavior

When the model decides to call a function, the streamed response should contain tool_calls or tool_call_chunks metadata so that I can parse and execute the function calls.

Observations
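To illustrate what I'm observing, here is an abridged sketch of what console.dir prints for plain text versus what I expected for a tool call (field names per @langchain/core's AIMessageChunk; values illustrative):

```ts
// Observed for a plain-text turn (abridged):
// AIMessageChunk { content: "Hello! How can I help?", tool_call_chunks: [] }

// Expected (but never observed) when the model invokes a tool:
// AIMessageChunk {
//   content: "",
//   tool_call_chunks: [
//     { name: "calculator", args: '{"operation":"add"', id: "call_abc123", index: 0 },
//   ],
// }
```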
Question

Is there something wrong with how I am handling streaming function calls? Should boundModel.stream(chatMessages) be expected to always return function call metadata in the streamed chunks? Additionally, it is entirely possible that this is an issue with LMStudio or the qwen model rather than LangChain. Please advise if you can! Thank you.

System Info

[email protected] | MIT | deps: 12 | versions: 310
Published 2 weeks ago by jacoblee93
Platform: Windows
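One way to narrow down where the metadata disappears is to bypass LangChain entirely and read LMStudio's SSE stream directly, looking for delta.tool_calls in the raw events. A minimal sketch, assuming Node 18+ (global fetch) and the standard OpenAI chat-completions tool format:

```ts
// If delta.tool_calls never appears in the raw SSE lines below, the metadata
// is missing on the wire (an LMStudio/model issue), not dropped by LangChain.
const res = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5-7b-instruct-1m",
    stream: true,
    messages: [{ role: "user", content: "What is 5+5?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "calculator",
          description: "Can perform mathematical operations.",
          parameters: {
            type: "object",
            properties: {
              operation: { type: "string", enum: ["add", "subtract", "multiply", "divide"] },
              number1: { type: "number" },
              number2: { type: "number" },
            },
            required: ["operation", "number1", "number2"],
          },
        },
      },
    ],
  }),
});

// Print the raw "data: {...}" SSE lines as they arrive.
for await (const chunk of res.body as any) {
  process.stdout.write(Buffer.from(chunk).toString("utf8"));
}
```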
Hello @juliarvalenti! I'm a bot designed to help you with bug fixes, questions, and contributions. I'll be here to assist you while you wait for a human maintainer. When handling streamed responses in LangChain, it's important to ensure that the function call metadata is correctly processed and included in the streamed chunks. Here are a few things to check and consider:
By ensuring that these aspects are correctly implemented, you should be able to receive and process function call metadata in your streamed responses. If the issue persists, it might be worth investigating further into the model's capabilities or any specific configurations required by LMStudio.
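For example, one way to rule out the model simply declining to call a tool is to force a specific tool and compare a non-streamed invocation against the stream (a sketch; it assumes LMStudio honors the OpenAI tool_choice parameter, which it may not):

```ts
// Force the calculator tool so the model cannot answer in plain prose.
const forced = model.bindTools(tools, { tool_choice: "calculator" });

// Non-streamed call: if tool_calls appear here but never in streamed chunks,
// the streaming path is dropping the metadata rather than the model opting out.
const result = await forced.invoke([new HumanMessage("What is 5+5?")]);
console.log(result.tool_calls);
```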
@jacoblee93
I have replaced LMStudio with OpenAI directly and the problem does not reproduce there, so I don't think this is a LangChain issue. I've opened a ticket on the LMStudio bug tracker:
lmstudio-ai/lmstudio-bug-tracker#445
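For anyone comparing, the only change in my test was the model configuration; everything downstream (bindTools, stream, the accumulation loop) stayed identical. A sketch with an illustrative model name:

```ts
// Same code path, pointed at OpenAI instead of the local LMStudio server;
// here the streamed chunks do carry tool_call_chunks as expected.
const model = new ChatOpenAI({
  modelName: "gpt-4o-mini", // illustrative choice
  openAIApiKey: process.env.OPENAI_API_KEY,
  streaming: true,
  // no custom baseURL: defaults to https://api.openai.com/v1
});
```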