
Statically provisioned PersistentVolume with persistentVolumeReclaimPolicy: Delete does not delete #1826

rhaps0dy opened this issue Feb 10, 2025 · 5 comments

rhaps0dy commented Feb 10, 2025

What happened:

Just like #1616, the underlying Azure Blob container was not deleted when I deleted a PV with persistentVolumeReclaimPolicy: Delete.

Here is my PV and PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: az-pv-example
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  csi:
    driver: blob.csi.azure.com
    nodeStageSecretRef:
      name: csi-myaccountname
      namespace: storage
    volumeAttributes:
      containerName: adria-catastrophic-goodhart
      isHnsEnabled: "true"
    volumeHandle: az-pv-example
  persistentVolumeReclaimPolicy: Delete
  storageClassName: az-myaccountname
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: az-pvc-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: az-myaccountname
  volumeMode: Filesystem
  volumeName: az-pv-example

What you expected to happen:

The container adria-catastrophic-goodhart in account myaccountname in Azure Blob should be deleted, but it is not.

How to reproduce it:

Apply the YAMLs above, editing them to use your account name and a container that exists.

Verify that the volume works by creating a pod that mounts it (optional but recommended). E.g. something like this:

apiVersion: v1
kind: Pod
metadata:
  name: minimal-pod
spec:
  containers:
  - name: dummy
    image: busybox
    command:
      - sleep
      - "600" # 600s
    volumeMounts:
    - name: az-pvc-example
      mountPath: /data
  volumes:
  - name: az-pvc-example
    persistentVolumeClaim:
      claimName: az-pvc-example
  restartPolicy: Never

Delete the pod. Then run kubectl delete pvc az-pvc-example.

Anything else we need to know?:

Environment:

  • CSI Driver version: v1.25.1
  • Kubernetes version (use kubectl version):

Client Version: v1.29.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.10

  • OS (e.g. from /etc/os-release): Ubuntu 22.04.4 LTS
  • Kernel (e.g. uname -a):
Linux node02 5.15.0-131-generic #141-Ubuntu SMP Fri Jan 10 21:18:28 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:
@andyzhangx (Member)

What do the CSI driver controller logs say? See https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/csi-debug.md#case1-volume-createdelete-issue — there should be DeleteVolume logs.

@rhaps0dy (Author)
Thanks for the pointer!

Nothing interesting in the default (csi-provisioner) container:

kubectl logs -n storage csi-blob-controller-85d77755f6-sbqlr

Defaulted container "csi-provisioner" out of: csi-provisioner, liveness-probe, blob, csi-resizer
I0210 00:40:23.148498       1 feature_gate.go:387] feature gates: {map[HonorPVReclaimPolicy:true]}
I0210 00:40:23.148764       1 csi-provisioner.go:154] Version: v5.1.0-0-g656955bc2
I0210 00:40:23.148772       1 csi-provisioner.go:177] Building kube configs for running in cluster...
I0210 00:40:25.353932       1 common.go:143] "Probing CSI driver for readiness"
I0210 00:40:25.357431       1 csi-provisioner.go:230] Detected CSI driver blob.csi.azure.com
I0210 00:40:25.359511       1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I0210 00:40:25.360193       1 controller.go:744] "Using saving PVs to API server in background"
I0210 00:40:25.360464       1 leaderelection.go:254] attempting to acquire leader lease storage/blob-csi-azure-com...
I0210 00:40:45.176210       1 leaderelection.go:268] successfully acquired lease storage/blob-csi-azure-com
I0210 00:40:45.176365       1 leader_election.go:184] "became leader, starting"
I0210 00:40:45.176912       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0210 00:40:45.176929       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0210 00:40:45.178846       1 reflector.go:368] Caches populated for *v1.StorageClass from k8s.io/client-go/informers/factory.go:160
I0210 00:40:45.188281       1 reflector.go:368] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:160
I0210 00:40:45.276689       1 controller.go:824] "Starting provisioner controller" component="blob.csi.azure.com_csi-blob-controller-85d77755f6-sbqlr_eff4ef0e-e83a-4ec4-8832-a22679c12796"
I0210 00:40:45.276737       1 volume_store.go:98] "Starting save volume queue"
I0210 00:40:45.276779       1 clone_controller.go:66] Starting CloningProtection controller
I0210 00:40:45.276927       1 clone_controller.go:82] Started CloningProtection controller
I0210 00:40:45.278153       1 reflector.go:368] Caches populated for *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller/controller.go:861
I0210 00:40:45.294438       1 reflector.go:368] Caches populated for *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller/controller.go:858
I0210 00:40:45.377603       1 controller.go:873] "Started provisioner controller" component="blob.csi.azure.com_csi-blob-controller-85d77755f6-sbqlr_eff4ef0e-e83a-4ec4-8832-a22679c12796"

But if we look at the logs of the blob container in the same pod, we get something interesting:

E0210 00:52:58.811447       1 controllerserver.go:528] GetContainerInfo(az-pv-example) in DeleteVolume failed with error: error parsing volume id: "az-pv-example", should at least contain two #

I guess the volumeHandle needs to contain at least two # characters? Where should they go?

@rhaps0dy
Copy link
Author

The error message appears to be in this function, which also provides examples of what the ID should look like:

// GetContainerInfo get container info according to volume id
// the format of VolumeId is: rg#accountName#containerName#uuid#secretNamespace#subsID
//
// e.g.
// input: "rg#f5713de20cde511e8ba4900#containerName#uuid#"
// output: rg, f5713de20cde511e8ba4900, containerName, "" , ""
// input: "rg#f5713de20cde511e8ba4900#containerName#uuid#namespace#"
// output: rg, f5713de20cde511e8ba4900, containerName, namespace, ""
// input: "rg#f5713de20cde511e8ba4900#containerName#uuid#namespace#subsID"
// output: rg, f5713de20cde511e8ba4900, containerName, namespace, subsID
func GetContainerInfo(id string) (string, string, string, string, string, error) {
	segments := strings.Split(id, separator)
	if len(segments) < 3 {
		return "", "", "", "", "", fmt.Errorf("error parsing volume id: %q, should at least contain two #", id)
	}
	var secretNamespace, subsID string
	if len(segments) > 4 {
		secretNamespace = segments[4]
	}
	if len(segments) > 5 {
		subsID = segments[5]
	}
	return segments[0], segments[1], segments[2], secretNamespace, subsID, nil
}
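To illustrate why the handle az-pv-example fails, here is a minimal, self-contained sketch of the same parsing logic (parseVolumeHandle is a hypothetical rename; the driver's real function is GetContainerInfo above, with separator = "#"):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVolumeHandle mirrors GetContainerInfo above: split the volume ID on '#'
// and pick fields by position (rg#accountName#containerName#uuid#secretNamespace#subsID).
func parseVolumeHandle(id string) (rg, account, container, secretNamespace, subsID string, err error) {
	segments := strings.Split(id, "#")
	if len(segments) < 3 {
		return "", "", "", "", "", fmt.Errorf("error parsing volume id: %q, should at least contain two #", id)
	}
	rg, account, container = segments[0], segments[1], segments[2]
	if len(segments) > 4 {
		secretNamespace = segments[4]
	}
	if len(segments) > 5 {
		subsID = segments[5]
	}
	return rg, account, container, secretNamespace, subsID, nil
}

func main() {
	// The handle from the PV above has no '#', so parsing fails --
	// this is exactly the DeleteVolume error in the controller log.
	_, _, _, _, _, err := parseVolumeHandle("az-pv-example")
	fmt.Println(err)

	// A handle of the form ##container##namespace# parses cleanly:
	// segments[2] is the container, segments[4] is the secret namespace.
	_, _, c, ns, _, _ := parseVolumeHandle("##my-container##storage#")
	fmt.Printf("container=%q secretNamespace=%q\n", c, ns)
}
```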

I suppose I'll dynamically provision some volumes and take a look at what the IDs look like. But this should be documented, given that it is essential for the driver to function.

@rhaps0dy
Copy link
Author

Using a volumeHandle of the form ##{container_name}##{namespace}# works well. Here container_name is the name of the Blob Storage container that backs the PV, and namespace lands in the secretNamespace slot of the format quoted above.

This issue is solved for me, but I will leave it open because the static provisioning documentation should cover the required volumeHandle format.
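For reference, this is roughly what the corrected PV from the original report would look like (a sketch based on my own manifest; substitute your account, secret, and container names):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: az-pv-example
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  csi:
    driver: blob.csi.azure.com
    nodeStageSecretRef:
      name: csi-myaccountname
      namespace: storage
    volumeAttributes:
      containerName: adria-catastrophic-goodhart
      isHnsEnabled: "true"
    # Handle of the form ##{containerName}##{secretNamespace}# so that
    # GetContainerInfo can parse it and DeleteVolume can find the container.
    volumeHandle: "##adria-catastrophic-goodhart##storage#"
  persistentVolumeReclaimPolicy: Delete
  storageClassName: az-myaccountname
  volumeMode: Filesystem
```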
