
Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result #13571

Merged · 8 commits · Feb 14, 2025

Conversation

Contributor

@vasu-negi vasu-negi commented Feb 12, 2025

Fix: Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result
Summary
This PR fixes an issue where models that do not support streaming (stream=False) fail to return results properly. The _handle_invoke_result function was not yielding any events when it received an LLMResult, leading to an empty generator. As a result, the for event in generator: loop in _run() never executed, causing missing responses for non-streaming models.

Fixes #13569

Issue:
- Non-streaming models (stream=False) return an LLMResult, but _handle_invoke_result does not yield anything.
- This causes _run() to receive an empty generator, resulting in no response being sent.
- Streaming models (stream=True) work fine, as their results are handled iteratively.
Root Cause:
The _handle_invoke_result method contained this condition:

if isinstance(invoke_result, LLMResult):
    return  # ❌ This exits without yielding any event
Since _handle_invoke_result did not yield any event for non-streaming models, the generator was empty. _run() expects to iterate over the generator, but because it was empty, the loop never executed, as the sketch below illustrates.
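
For illustration, here is a minimal, self-contained sketch of the pitfall using hypothetical stand-in classes (not the actual dify types): in Python, a generator function that hits a bare return before any yield still produces a generator object, just an empty one, so the caller's loop silently does nothing.

from collections.abc import Generator
from typing import Any

class FakeLLMResult:  # hypothetical stand-in for LLMResult, illustration only
    pass

def handle_invoke_result(invoke_result: Any) -> Generator[Any, None, None]:
    # Mirrors the buggy shape: bare return before any yield.
    if isinstance(invoke_result, FakeLLMResult):
        return
    yield from invoke_result

events = list(handle_invoke_result(FakeLLMResult()))
print(events)  # [] -- a caller's `for event in generator:` loop never runs
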
Fix Implemented
The fix ensures _handle_invoke_result always yields a ModelInvokeCompletedEvent when stream=False, preventing an empty generator.

Updated _handle_invoke_result Implementation:

def _handle_invoke_result(self, invoke_result: LLMResult | Generator) -> Generator[NodeEvent, None, None]:
    # Handle non-streaming models: yield a completion event instead of
    # returning with an empty generator.
    if isinstance(invoke_result, LLMResult):
        yield ModelInvokeCompletedEvent(
            text=invoke_result.message.content,
            usage=invoke_result.usage,
            finish_reason=None,  # Adjust as needed
        )
        return  # Ensures at least one event is yielded
    # Streaming results (stream=True) are handled as before (omitted here).

Impact of the Fix
✅ Non-streaming models now return responses correctly.
✅ Streaming models (stream=True) remain unaffected.
✅ Prevents missing responses by ensuring _handle_invoke_result always yields an event.

Testing & Validation
- Verified that non-streaming models (stream=False) now return valid responses.
- Ensured that streaming models (stream=True) function as expected.
- Confirmed that _handle_invoke_result always yields at least one event, preventing an empty generator (see the regression-test sketch below).
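
A regression check along the following lines could guard against this in the future. This is only a sketch: the fixture names (llm_node, sample_llm_result) are assumptions for illustration, not the project's actual test setup, and ModelInvokeCompletedEvent would need to be imported from the appropriate dify module.

# Sketch of a regression test; the fixtures are hypothetical and would have to
# be built from the project's real node and LLMResult factories.
def test_non_streaming_invoke_yields_completion_event(llm_node, sample_llm_result):
    events = list(llm_node._handle_invoke_result(sample_llm_result))
    # At least one event must be yielded, and it must be the completion event.
    assert any(isinstance(e, ModelInvokeCompletedEvent) for e in events)
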

Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result langgenius#13569
Typo Fix
@dosubot dosubot bot added size:XS This PR changes 0-9 lines, ignoring generated files. 🐞 bug Something isn't working labels Feb 12, 2025
@crazywoola crazywoola requested a review from laipz8200 February 12, 2025 08:58
@crazywoola
Member

Please fix the lint errors

@vasu-negi
Contributor Author

Sure, will do

@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. and removed size:XS This PR changes 0-9 lines, ignoring generated files. labels Feb 13, 2025
@vasu-negi
Contributor Author

@crazywoola I have fixed the lint issues; it does not complain locally. Can we try running the pipeline?

@vasu-negi
Contributor Author

vasu-negi commented Feb 14, 2025

I think my IDE is incorrectly formatting the indents; I am checking.

@vasu-negi
Contributor Author

@crazywoola I have fixed my code editor issue, can you please try again?

@crazywoola
Member

Next time you can run ./dev/reformat in the root dir. :)

Member

@laipz8200 laipz8200 left a comment


LGTM

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 14, 2025
@laipz8200 laipz8200 merged commit 8a0aa91 into langgenius:main Feb 14, 2025
6 checks passed
vinhdevie pushed a commit to vinhdevie/dify that referenced this pull request Feb 16, 2025