What happened:
Updating an existing LoadBalancer-type Kubernetes Service to bind it to a user-created PIP does not release the previously assigned PIP when the user-created PIP is in a different resource group.
What you expected to happen:
The PIP that was previously bound to the Service should be released and deleted.
How to reproduce it (as minimally and precisely as possible):
Set variables to use in future commands
export TEST_RG=test-orphaned-pip
Create a resource group to test with
az group create \
--location eastus \
--resource-group $TEST_RG
Create an AKS cluster
az aks create \
--resource-group $TEST_RG \
--name my-cluster \
--node-count 1 \
--generate-ssh-keys
Export cluster credentials to a temporary file and set KUBECONFIG
az aks get-credentials \
--resource-group $TEST_RG \
--name my-cluster \
-f /tmp/my-cluster.kubeconfig
export KUBECONFIG=/tmp/my-cluster.kubeconfig
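The intermediate steps between fetching credentials and the observations below can be sketched as follows. Names such as my-pip, my-app, and my-svc are placeholders, not the exact ones from the original report:

```shell
# Create a user-managed PIP in the test resource group (outside the node RG)
az network public-ip create \
  --resource-group $TEST_RG \
  --name my-pip \
  --sku Standard \
  --allocation-method Static

# Expose a test deployment through a LoadBalancer Service; AKS provisions a
# PIP for it in the cluster's node resource group
kubectl create deployment my-app --image=nginx
kubectl expose deployment my-app --name my-svc --type LoadBalancer --port 80

# Point the Service at the user-created PIP in the other resource group
kubectl annotate service my-svc \
  service.beta.kubernetes.io/azure-load-balancer-resource-group=$TEST_RG
PIP_IP=$(az network public-ip show \
  --resource-group $TEST_RG --name my-pip --query ipAddress -o tsv)
kubectl patch service my-svc \
  -p "{\"spec\":{\"loadBalancerIP\":\"$PIP_IP\"}}"
```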
Observe that the Service has been reconciled and now has the user-created PIP assigned
Observe that the original PIP remains untouched in the cluster's node resource group
Note: If the user-created PIP is created in the cluster's node resource group, the previously bound PIP is released properly, as expected. However, according to the documentation, it is bad practice to create resources in this resource group.
Environment:
Kubernetes version (use kubectl version):
Cloud provider or hardware configuration:
OS (e.g: cat /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Network plugin and version (if this is a network-related bug):
Others:
I've been looking at the code, and while this is not my area of expertise, I'm starting to think the issue is that the code looks at the Service object, sees the service.beta.kubernetes.io/azure-load-balancer-resource-group annotation, and operates only on the resource group it specifies. It disregards the possibility that the annotation was absent before the Service object was updated, so it never examines the PIPs in the cluster's resource group.
I also want to reiterate: If the Service is deleted after it has been annotated and reconciled with the user-provided PIP from the other RG, the original PIP remains and is never deleted
Since the Kubernetes cloud-provider interface does not pass the old object to the controller, one way to solve this would be to also consider az.ResourceGroup, in addition to the resource group specified in the annotation, when enumerating and iterating over PIPs in az.reconcilePublicIPs. That way, even though we look for the desired PIP in the specified resource group, we would also check the cluster's resource group to determine whether any PIPs need to be removed.
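In CLI terms, the proposal amounts to enumerating PIPs in both resource groups rather than only the annotated one. A sketch of what that wider enumeration would cover, assuming provider-managed PIPs carry a service tag of the form namespace/name (my-cluster and default/my-svc are placeholder names):

```shell
# Resolve the cluster's own (node) resource group
NODE_RG=$(az aks show \
  --resource-group $TEST_RG \
  --name my-cluster \
  --query nodeResourceGroup -o tsv)

# Today az.reconcilePublicIPs only enumerates the annotated resource group;
# the suggestion is to also walk the cluster's resource group so that
# provider-managed PIPs tagged for this Service can be garbage-collected.
for rg in "$TEST_RG" "$NODE_RG"; do
  az network public-ip list \
    --resource-group "$rg" \
    --query "[?tags.service=='default/my-svc'].{name:name,rg:resourceGroup}" \
    -o table
done
```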