
feat(community): Perplexity integration #7817

Open · wants to merge 7 commits into main
Conversation

anadi45 (Contributor) commented Mar 8, 2025

  1. Adds support for the Perplexity client.

vercel bot commented Mar 8, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name                  Status      Updated (UTC)
langchainjs-docs      ❌ Failed    Mar 8, 2025 10:26pm

1 Skipped Deployment

Name                  Status      Updated (UTC)
langchainjs-api-refs  ⬜️ Ignored   Mar 8, 2025 10:26pm

anadi45 changed the title from "Perplexity integration" to "feat(community): Perplexity integration" Mar 8, 2025
anadi45 marked this pull request as ready for review March 8, 2025 22:17
dosubot bot added the size:L (This PR changes 100-499 lines, ignoring generated files), auto:documentation (Changes to documentation and examples), and auto:enhancement (A large net-new component, integration, or chain) labels Mar 8, 2025
return "ChatPerplexity";
}

modelName = "llama-3.1-sonar-small-128k-online";
Collaborator:
Let's not take a default here

Also, we are standardizing on model over modelName
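
A minimal sketch of what that suggestion could look like, assuming the class takes its constructor fields as a plain input interface; the interface name and the required-field check below are illustrative, not the final PR code:

// Hypothetical input shape: expose `model` (not `modelName`) and require the
// caller to pick a model explicitly instead of baking in a default.
export interface ChatPerplexityInput {
  model: string;
  apiKey?: string;
}

export class ChatPerplexity {
  model: string;

  constructor(fields: ChatPerplexityInput) {
    if (!fields.model) {
      throw new Error("A Perplexity model must be specified explicitly.");
    }
    this.model = fields.model;
  }
}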

jacoblee93 (Collaborator) left a comment:

Thanks so much for this!

We will also want to use our standard template for docs

I am happy to take over and land this this week if you don't have time


modelName = "llama-3.1-sonar-small-128k-online";

temperature = 0.2;
Collaborator:

Prefer not setting defaults


streaming = false;

topP = 0.9;
Collaborator:

Prefer not setting defaults
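
As a rough sketch covering both of the "no defaults" comments above (temperature and topP), the sampling parameters could be declared optional and only forwarded to the API when the caller actually sets them; the field and method names here are assumptions based on the snippets in this review:

export class ChatPerplexity {
  model: string;

  // Optional sampling parameters: leaving them undefined lets the API apply its own defaults.
  temperature?: number;
  topP?: number;

  constructor(fields: { model: string; temperature?: number; topP?: number }) {
    this.model = fields.model;
    this.temperature = fields.temperature;
    this.topP = fields.topP;
  }

  invocationParams() {
    return {
      model: this.model,
      // Only forward parameters the caller set explicitly.
      ...(this.temperature !== undefined ? { temperature: this.temperature } : {}),
      ...(this.topP !== undefined ? { top_p: this.topP } : {}),
    };
  }
}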

@@ -432,6 +434,7 @@ export const config = {
"chat_models/bedrock",
"chat_models/bedrock/web",
"chat_models/llama_cpp",
"chat_models/perplexity",
Collaborator:

You don't need this since it doesn't use any optional deps
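
For context, a hedged reading of this config: the array being edited appears to list entrypoints that pull in optional dependencies, so the suggestion is simply to leave the new entrypoint out of it. The sketch below assumes the list is something like a requiresOptionalDependency array; the key name is an assumption:

// Assumed shape of the config: only entrypoints with optional deps belong in this list.
export const config = {
  requiresOptionalDependency: [
    "chat_models/bedrock",
    "chat_models/bedrock/web",
    "chat_models/llama_cpp",
    // "chat_models/perplexity" omitted: it has no optional dependencies.
  ],
};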

yield generationChunk;

// Emit the chunk to the callback manager if provided
if (_runManager) {
Collaborator:

nit: don't prefix with _ since this is used in the function
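
A short sketch of the rename, assuming the streaming method follows the usual LangChain.js chat-model shape; the provider request is elided and the variable names are illustrative:

import { AIMessageChunk, BaseMessage } from "@langchain/core/messages";
import { ChatGenerationChunk } from "@langchain/core/outputs";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

// `runManager` (no underscore prefix) because the parameter is used in the body.
async *_streamResponseChunks(
  messages: BaseMessage[],
  options: this["ParsedCallOptions"],
  runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
  // ...provider request elided; `delta` stands for each streamed text fragment...
  const generationChunk = new ChatGenerationChunk({
    text: delta,
    message: new AIMessageChunk({ content: delta }),
  });
  yield generationChunk;

  // Emit the new token to the callback manager if one was provided.
  await runManager?.handleLLMNewToken(delta);
}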

const response = await this.client.chat.completions.create({
  messages: messagesList,
  ...this.invocationParams(),
  stream: false,
Collaborator:

We do also support calling .invoke/.generate with stream: true and aggregating

This is niche but convenient if you want to use the final output while also streaming back tokens
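
A rough sketch of that aggregation pattern, assuming a `streaming` flag on the class and reusing the chunk generator; the names and result shape below are assumptions, not the final PR code:

import { BaseMessage } from "@langchain/core/messages";
import { ChatGenerationChunk, ChatResult } from "@langchain/core/outputs";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

// Hypothetical _generate that streams tokens to callbacks while still
// returning a single aggregated result to the caller.
async _generate(
  messages: BaseMessage[],
  options: this["ParsedCallOptions"],
  runManager?: CallbackManagerForLLMRun
): Promise<ChatResult> {
  if (this.streaming) {
    let finalChunk: ChatGenerationChunk | undefined;
    for await (const chunk of this._streamResponseChunks(messages, options, runManager)) {
      // ChatGenerationChunk.concat merges text, message content, and generation info.
      finalChunk = finalChunk === undefined ? chunk : finalChunk.concat(chunk);
    }
    if (finalChunk === undefined) {
      throw new Error("Received no chunks from the Perplexity stream.");
    }
    return {
      generations: [{ text: finalChunk.text, message: finalChunk.message }],
    };
  }

  // Otherwise make the existing non-streaming request with stream: false
  // (the call shown in the diff above) and map it to a ChatResult; elided here.
  throw new Error("Non-streaming path elided in this sketch.");
}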
