
Add code for Envoy extension that support body-to-header translation #355

Open · wants to merge 7 commits into main
Conversation

rramkumar1

@rramkumar1 rramkumar1 commented Feb 18, 2025

Ref: #321

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 18, 2025
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 18, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rramkumar1
Once this PR has been reviewed and has the lgtm label, please assign kfswain for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @rramkumar1!

It looks like this is your first PR to kubernetes-sigs/gateway-api-inference-extension 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/gateway-api-inference-extension has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 18, 2025
@k8s-ci-robot
Contributor

Hi @rramkumar1. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 18, 2025

netlify bot commented Feb 18, 2025

Deploy Preview for gateway-api-inference-extension ready!

Name Link
🔨 Latest commit 5f6daff
🔍 Latest deploy log https://app.netlify.com/sites/gateway-api-inference-extension/deploys/67c1fb5547b1b700084db541
😎 Deploy Preview https://deploy-preview-355--gateway-api-inference-extension.netlify.app

@ahg-g
Contributor

ahg-g commented Feb 18, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 18, 2025
Member

@hzxuzhonghu hzxuzhonghu left a comment


I am not sure how this is related to this project. I mean, you can deploy this without depending on the gateway inference extension.

@rramkumar1
Author

> I am not sure how this is related to this project. I mean, you can deploy this without depending on the gateway inference extension.

@hzxuzhonghu Sorry I forgot to link the related issue in the initial comment. Fixed that.

@hzxuzhonghu
Member

Thanks, now I get the intention. But I am wondering: should we separate it from the ext-proc binary? IMO, it can be merged into the current binary. Correct me if I am missing some context.

@rramkumar1
Author

> Thanks, now I get the intention. But I am wondering: should we separate it from the ext-proc binary? IMO, it can be merged into the current binary. Correct me if I am missing some context.

I think it should be a separate binary, because this particular extension is intended to execute before the routing decision, while the endpoint picker is intended to execute after the routing decision.
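To make the pre-routing step concrete: all a body-to-header extension fundamentally does is pull the model name out of the request body and surface it as a header that route matching can see. The sketch below is a hypothetical illustration, not the code in this PR; the function shape and the `x-gateway-model-name` header name are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractModelHeader parses an OpenAI-style JSON request body and returns
// the header name/value a pre-routing ext_proc extension could inject so
// the gateway can match on model name before the routing decision is made.
// The header name here is illustrative only.
func extractModelHeader(body []byte) (name, value string, err error) {
	var req struct {
		Model string `json:"model"`
	}
	if err := json.Unmarshal(body, &req); err != nil {
		return "", "", err
	}
	if req.Model == "" {
		return "", "", fmt.Errorf("request body has no model field")
	}
	return "x-gateway-model-name", req.Model, nil
}

func main() {
	name, value, err := extractModelHeader([]byte(`{"model":"foo","prompt":"hi"}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s\n", name, value)
}
```

In the real extension this logic would sit inside an Envoy ext_proc gRPC stream handler that buffers the request body and emits a header mutation; the JSON extraction itself is the whole trick.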

@hzxuzhonghu
Member

Hmm, the model name header will be used by the HTTPRoute match later. What confuses me most is that the EPP currently only supports one model pool. It seems no model match needs to be specified in the HTTPRoute.

@robscott
Member

> Hmm, the model name header will be used by the HTTPRoute match later. What confuses me most is that the EPP currently only supports one model pool. It seems no model match needs to be specified in the HTTPRoute.

@hzxuzhonghu for a bit more context, there are different points in time where we can attach an ext_proc extension. We want to attach this extension early in the process so it can add headers before routing logic is computed. That would allow someone to say that requests for the "foo" model should go to the "bar" InferencePool while requests for "baz" model should go to a different InferencePool.

Then when the request gets to an InferencePool, the Endpoint Picker extension can select the best endpoint to serve a request for that model within that InferencePool.
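A sketch of what that could look like on the routing side, assuming a hypothetical `x-gateway-model-name` header injected by the extension; the header name, pool names, and exact `group`/`kind` values are illustrative and may differ from what the project ships:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: model-routing
spec:
  parentRefs:
  - name: inference-gateway
  rules:
  # Requests for the "foo" model go to the "bar" InferencePool.
  - matches:
    - headers:
      - name: x-gateway-model-name
        value: foo
    backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool
      name: bar
  # Requests for the "baz" model go to a different InferencePool.
  - matches:
    - headers:
      - name: x-gateway-model-name
        value: baz
    backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool
      name: baz-pool
```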

@hzxuzhonghu
Member

Ah, that makes sense. Do we plan to support multiple InferencePools within an EPP process?

@robscott
Member

> Ah, that makes sense. Do we plan to support multiple InferencePools within an EPP process?

That's definitely been a point of discussion. I think it's inevitable that this will be a supported mode at some point, but it's not part of the initial releases. The rationale:

  1. We currently rely on very frequent probing of every endpoint an EPP is serving; this would not scale well to a large number of endpoints.
  2. We rely on keeping some state in memory, which also would not scale to a large number of endpoints.
  3. Having a 1:1 mapping between InferencePool and EPP lets you effectively traffic-split between different EPPs or InferencePools.

With all that said, I think these could all be temporary limitations.

Member

@robscott robscott left a comment


Thanks @rramkumar1! Once we get the image pipeline set up and some corresponding docs, this will be a great part of the v0.2 release.

@@ -0,0 +1,29 @@
# Dockerfile has specific requirement to put this ARG at the beginning:
Member


@kfswain Are you able to help get automated builds set up for this image as well? Can't remember all the steps required offhand

Contributor

@ahg-g ahg-g Feb 21, 2025


See #320 as an example; we added a build rule and a cloudbuild step to automatically create the image for the LoRA syncer sidecar.

@ahg-g
Contributor

ahg-g commented Feb 21, 2025

Can you also please add a README.md file? I added one for pkg/epp in #386.

@robscott robscott mentioned this pull request Feb 28, 2025
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 28, 2025
@rramkumar1 rramkumar1 changed the title [WIP] Add code for Envoy extension that support body-to-header translation Add code for Envoy extension that support body-to-header translation Feb 28, 2025
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 28, 2025
@k8s-ci-robot
Contributor

@rramkumar1: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-gateway-api-inference-extension-verify-main 5f6daff link true /test pull-gateway-api-inference-extension-verify-main

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

