
feature(litellm): pandasai litellm wrapper #1648

Merged: 5 commits merged into sinaptik-ai:main on Feb 28, 2025

Conversation

scaliseraoul
Contributor

@scaliseraoul scaliseraoul commented Feb 28, 2025


Important

Introduces a LiteLLM wrapper for unified LLM interaction in PandaAI, replacing several provider-specific LLM integrations.

  • New Feature:
    • Introduces a LiteLLM wrapper in litellm.py for unified LLM interaction across multiple providers.
    • Supports models from OpenAI, Anthropic, Google, Azure, AWS, and more.
  • Documentation:
    • Updates large-language-models.mdx to include LiteLLM setup instructions.
    • Adds a README.md for the litellm extension.
  • Testing:
    • Adds test_litellm.py with unit tests for the LiteLLM wrapper.
  • Removals:
    • Deletes the old LLM extensions for Google, Bedrock, HuggingFace, IBM, LangChain, and Local models.
  • Miscellaneous:
    • Updates pyproject.toml for the litellm extension.
    • Removes LangChain-specific LLM handling from state.py and config.py.
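As a rough illustration of the unified-wrapper pattern this PR introduces (this is not the actual pandasai_litellm code; the class name, parameters, and stand-in backend below are all assumptions), such a wrapper typically normalizes the prompt into a messages list, delegates to a single provider-agnostic completion entry point, and extracts the first choice's text:

```python
# Hypothetical sketch of a unified LLM wrapper in the spirit of the
# LiteLLM integration described above. `completion_fn` stands in for a
# provider-agnostic entry point such as litellm.completion; the real
# pandasai_litellm implementation may differ.

class UnifiedLLM:
    """Wraps one completion function behind a single provider-agnostic interface."""

    def __init__(self, model, completion_fn, **default_params):
        self.model = model                  # e.g. an OpenAI or Anthropic model name
        self.completion_fn = completion_fn  # single entry point for all providers
        self.default_params = default_params

    def call(self, prompt, **overrides):
        # Per-call overrides win over defaults set at construction time.
        params = {**self.default_params, **overrides}
        response = self.completion_fn(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            **params,
        )
        # OpenAI-style response shape: text of the first choice's message.
        return response["choices"][0]["message"]["content"]


def fake_completion(model, messages, **params):
    # Stand-in backend so the sketch runs without any provider API keys.
    return {"choices": [{"message": {"content": f"echo:{messages[0]['content']}"}}]}


llm = UnifiedLLM("some-model", fake_completion, temperature=0)
print(llm.call("hello"))  # echo:hello
```

The point of the pattern is that swapping providers only changes the model string, not the calling code.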

This description was created by Ellipsis for b5e87ee. It will automatically update as commits are pushed.


@ellipsis-dev ellipsis-dev bot left a comment


❌ Changes requested. Reviewed everything up to b5e87ee in 2 minutes and 39 seconds

More details
  • Looked at 2451 lines of code in 40 files
  • Skipped 6 files when reviewing.
  • Skipped posting 4 drafted comments based on config settings.
1. extensions/llms/litellm/tests/test_litellm.py:87
  • Draft comment:
    The 'test_completion_with_extra_params' expects the extra parameters to appear within an 'optional_params' -> 'extra_body' structure. Verify that LiteLLM’s completion function consistently wraps extra kwargs this way, or adjust the test to match the actual behavior.
  • Reason this comment was not posted:
    Comment did not seem useful: usefulness confidence = 30% vs. threshold = 50%.
    The comment is asking the author to verify the behavior of a function, which violates the rule against asking the author to confirm their intention or ensure behavior. However, it does point out a potential inconsistency between the test and the actual behavior, which could be useful if rephrased to suggest a specific action.
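One way to resolve the ambiguity the draft comment raises is to assert on the kwargs the completion function actually receives, via a capture-based fake, rather than on an assumed nesting such as optional_params -> extra_body. A hedged sketch (the function and test names here are illustrative, not the PR's actual test code):

```python
# Illustrative test pattern: capture the kwargs a wrapper forwards to its
# completion backend, instead of assuming how a real backend re-wraps them.

def call_completion_with_extras(completion_fn, **extra_params):
    # Hypothetical wrapper that forwards extra kwargs straight through;
    # whether a real backend re-wraps them (e.g. under extra_body) is
    # exactly what a capture-based test makes visible.
    return completion_fn(
        model="test-model",
        messages=[{"role": "user", "content": "hi"}],
        **extra_params,
    )

def test_extra_params_forwarded():
    captured = {}

    def fake_completion(**kwargs):
        captured.update(kwargs)
        return {"choices": []}

    call_completion_with_extras(fake_completion, max_tokens=64, seed=7)
    # Assert on what was actually received, not on an assumed structure.
    assert captured["max_tokens"] == 64
    assert captured["seed"] == 7

test_extra_params_forwarded()
print("ok")  # ok
```

Tests written this way stay valid even if the backend changes how it packages extra parameters internally.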
2. extensions/llms/litellm/pandasai_litellm/litellm.py:9
  • Draft comment:
    Consider updating the docstring to be provider-agnostic. The current text mentions “new OpenAI LLM”, but LiteLLM supports multiple providers.
  • Reason this comment was not posted:
    Marked as duplicate.
3. docs/v3/large-language-models.mdx:166
  • Draft comment:
    Typo: In the example comment on line 166, 'answer should me (mostly) consistent across devices.' should be corrected to 'answer should be (mostly) consistent across devices.'
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. extensions/llms/litellm/tests/test_litellm.py:34
  • Draft comment:
    In the definition of the function test_successful_completion, the parameters are written as llm,prompt without a space after the comma. For better readability and consistency with other functions (e.g., test_invalid_api_key), please add a space (i.e., change to llm, prompt).
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50%
    The rules state not to make comments that are obvious or unimportant. While this is a real formatting inconsistency, it's a very minor style issue that could be automatically fixed by code formatters. It doesn't affect functionality or code quality in any meaningful way. Most teams would handle this via automated formatting rather than PR comments.
    The inconsistent formatting could make the code slightly less readable. Some teams do care about maintaining consistent style manually.
    While consistency is good, this is too minor an issue to warrant a PR comment. It would be better handled by automated tooling like black, autopep8, or similar.
    Delete this comment as it points out a trivial formatting issue that doesn't meaningfully impact code quality.

Workflow ID: wflow_HLZJfzCCT1NWyOub


Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.

scaliseraoul and others added 2 commits February 28, 2025 15:42
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
@gventuri gventuri merged commit b67caf8 into sinaptik-ai:main Feb 28, 2025
12 checks passed