Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result #13571
Conversation
Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result langgenius#13569
Typo Fix
Please fix the lint errors
Sure, will do
@crazywoola I have fixed the lint issues; it does not complain locally. Can we try running the pipeline?
I think my IDE is incorrectly formatting the indents, I am checking.
@crazywoola I have fixed my code editor issue, can you please try again?
Next time you can run this cmd
LGTM
…_result (langgenius#13571) Co-authored-by: crazywoola <[email protected]>
Fix: Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result
Summary
This PR fixes an issue where models that do not support streaming (stream=False) fail to return results properly. The _handle_invoke_result function was not yielding any events when it received an LLMResult, leading to an empty generator. As a result, the for event in generator: loop in _run() never executed, causing missing responses for non-streaming models.
Issue:
Non-streaming models (stream=False) return an LLMResult, but _handle_invoke_result does not yield anything.
This causes _run() to receive an empty generator, resulting in no response being sent.
Streaming models (stream=True) work fine, as their results are handled iteratively.
Root Cause:
The _handle_invoke_result method contained this condition:

```python
if isinstance(invoke_result, LLMResult):
    return  # ❌ This exits without yielding any event
```

Since _handle_invoke_result did not yield an event for non-streaming models, the generator was empty. _run() expects to iterate over the generator, but since it was empty, the loop never executed.

Fixes #13569
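The empty-generator behavior described above is easy to reproduce in isolation. The snippet below is a minimal illustration of the buggy pattern, not Dify code:

```python
from collections.abc import Generator


def broken_handler(invoke_result) -> Generator:
    # Mirrors the buggy condition: a non-streaming result hits `return`
    # before any `yield`, so the call produces a generator with no items.
    if isinstance(invoke_result, str):
        return  # exits without yielding any event
    yield invoke_result  # the presence of `yield` makes this a generator function


loop_ran = False
for event in broken_handler("final result"):
    loop_ran = True

# The loop body never executes, just like _run() with stream=False.
assert loop_ran is False
```

Because a `return` inside a generator function only stops iteration, the caller's `for` loop silently runs zero times instead of raising an error, which is why the missing responses were easy to miss.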
Fix Implemented
The fix ensures _handle_invoke_result always yields a ModelInvokeCompletedEvent when stream=False, preventing an empty generator.
Updated _handle_invoke_result implementation:

```python
def _handle_invoke_result(self, invoke_result: LLMResult | Generator) -> Generator[NodeEvent, None, None]:
```
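The PR shows only the signature here. The sketch below fills in the corrected control flow under stated assumptions: `LLMResult`, `ModelInvokeCompletedEvent`, `RunStreamChunkEvent`, and `LLMNode` are simplified stand-ins, not Dify's actual classes, and the chunk handling is reduced to plain strings for illustration:

```python
from collections.abc import Generator
from typing import Union


# Simplified stand-ins for Dify's real types; the names and fields here
# are illustrative assumptions, not the actual codebase classes.
class LLMResult:
    def __init__(self, text: str):
        self.text = text


class ModelInvokeCompletedEvent:
    def __init__(self, text: str):
        self.text = text


class RunStreamChunkEvent:
    def __init__(self, chunk_content: str):
        self.chunk_content = chunk_content


NodeEvent = Union[ModelInvokeCompletedEvent, RunStreamChunkEvent]


class LLMNode:
    def _handle_invoke_result(
        self, invoke_result: Union[LLMResult, Generator]
    ) -> Generator[NodeEvent, None, None]:
        # Non-streaming: yield a completion event instead of returning
        # from an otherwise-empty generator (the bug this PR fixes).
        if isinstance(invoke_result, LLMResult):
            yield ModelInvokeCompletedEvent(text=invoke_result.text)
            return
        # Streaming: forward each chunk, then emit the completion event.
        full_text = ""
        for chunk in invoke_result:
            full_text += chunk
            yield RunStreamChunkEvent(chunk_content=chunk)
        yield ModelInvokeCompletedEvent(text=full_text)
```

With this shape, both branches end by yielding a `ModelInvokeCompletedEvent`, so the `for event in generator:` loop in `_run()` always receives at least one event regardless of the `stream` setting.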
Impact of the Fix
✅ Non-streaming models now return responses correctly.
✅ Streaming models (stream=True) remain unaffected.
✅ Prevents missing responses by ensuring _handle_invoke_result always yields an event.
Testing & Validation
Verified that non-streaming models (stream=False) now return valid responses.
Ensured that streaming models (stream=True) function as expected.
Confirmed that _handle_invoke_result always yields at least one event, preventing an empty generator.