What happened: While using file.csi.azure.com as the driver for a custom StorageClass, new storage accounts keep being created and the file shares moved between them, leaving a trail of empty storage accounts behind.
What you expected to happen: When a new storage account is created for the file shares, the old, now-empty storage account should be removed.
How to reproduce it: I'm not able to reproduce this on demand, but I'll update this ticket if I spot it happening again. I've included my csidriver, storageclass, pvc, and pv definitions below to help isolate the cause:
csidriver
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  annotations:
    csiDriver: v1.34.2
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"CSIDriver","metadata":{"annotations":{"csiDriver":"v1.34.2","snapshot":"v8.4.0"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","kubernetes.io/cluster-service":"true"},"name":"file.csi.azure.com"},"spec":{"attachRequired":false,"fsGroupPolicy":"ReadWriteOnceWithFSType","podInfoOnMount":true,"tokenRequests":[{"audience":"api://AzureADTokenExchange"}],"volumeLifecycleModes":["Persistent","Ephemeral"]}}
    snapshot: v8.4.0
  creationTimestamp: "2025-09-11T09:10:33Z"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: file.csi.azure.com
  resourceVersion: "124284652"
  uid: 44fa1367-7f57-4dca-8aaa-05636bbd21da
spec:
  attachRequired: false
  fsGroupPolicy: ReadWriteOnceWithFSType
  podInfoOnMount: true
  requiresRepublish: false
  seLinuxMount: false
  storageCapacity: false
  tokenRequests:
  - audience: api://AzureADTokenExchange
  volumeLifecycleModes:
  - Persistent
  - Ephemeral
storageclass
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2025-09-11T09:10:12Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: azurefile-csi-premium
  resourceVersion: "464"
  uid: a2f1df6c-2886-4d28-8cd2-21ebb0b7e559
mountOptions:
- mfsymlinks
- actimeo=30
- nosharesock
parameters:
  skuName: Premium_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
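(Aside: if a workaround is needed while this is investigated, the driver documents a storageAccount parameter that pins provisioning to a single pre-created account instead of letting the driver pick or create one. A minimal, untested sketch - the account name below is a placeholder I made up:)

```yaml
# Untested sketch: pin share provisioning to one pre-created storage account
# via the documented storageAccount parameter, so the driver does not create
# (and then abandon) its own accounts. "mypinnedaccount" is a placeholder.
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-premium-pinned
mountOptions:
- mfsymlinks
- actimeo=30
- nosharesock
parameters:
  skuName: Premium_LRS
  storageAccount: mypinnedaccount  # placeholder; must exist and match skuName
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```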
pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: file.csi.azure.com
    volume.kubernetes.io/storage-provisioner: file.csi.azure.com
  creationTimestamp: "2026-05-06T16:17:31Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: azure-generic-46z8q-runner-cbdsk-docker-cache
  namespace: arc-runner
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: Pod
    name: azure-generic-46z8q-runner-cbdsk
    uid: 4a5630bc-e72e-4169-b578-65033a8d7f33
  resourceVersion: "150424683"
  uid: 698a0d07-2f08-41e6-a5e7-88e5dce258ab
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 64G
  storageClassName: azurefile-csi-premium
  volumeMode: Filesystem
  volumeName: pvc-698a0d07-2f08-41e6-a5e7-88e5dce258ab
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  phase: Bound
pv
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  creationTimestamp: "2026-05-06T16:17:32Z"
  finalizers:
  - external-provisioner.volume.kubernetes.io/finalizer
  - kubernetes.io/pv-protection
  name: pvc-698a0d07-2f08-41e6-a5e7-88e5dce258ab
  resourceVersion: "150424681"
  uid: ab7000be-c6bb-4934-9b72-1be2b6aa13a4
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: azure-generic-46z8q-runner-cbdsk-docker-cache
    namespace: arc-runner
    resourceVersion: "150424658"
    uid: 698a0d07-2f08-41e6-a5e7-88e5dce258ab
  csi:
    driver: file.csi.azure.com
    volumeAttributes:
      csi.storage.k8s.io/pv/name: pvc-698a0d07-2f08-41e6-a5e7-88e5dce258ab
      csi.storage.k8s.io/pvc/name: azure-generic-46z8q-runner-cbdsk-docker-cache
      csi.storage.k8s.io/pvc/namespace: arc-runner
      secretnamespace: arc-runner
      skuName: Premium_LRS
      storage.kubernetes.io/csiProvisionerIdentity: 1777595565687-6797-file.csi.azure.com
    volumeHandle: rg-node-general-platform-workloads-prod-uksouth#f08b9d089433e4639b1d5f8#pvc-698a0d07-2f08-41e6-a5e7-88e5dce258ab###arc-runner
  mountOptions:
  - mfsymlinks
  - actimeo=30
  - nosharesock
  persistentVolumeReclaimPolicy: Delete
  storageClassName: azurefile-csi-premium
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2026-05-06T16:17:32Z"
  phase: Bound
The storage accounts are all named similarly to f08b9d089433e4639b1d5f8: random-looking strings of the same length, each beginning with f.
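For triage, here's a rough way to enumerate the empty ones, assuming the az CLI and the node resource group taken from the volumeHandle above:

```sh
# Rough sketch: list storage accounts in the node resource group that hold
# zero file shares - candidates for the orphaned accounts described above.
# Assumes az CLI is logged in; RG is taken from the volumeHandle in the PV.
RG=rg-node-general-platform-workloads-prod-uksouth
for acct in $(az storage account list -g "$RG" --query "[].name" -o tsv); do
  n=$(az storage share-rm list -g "$RG" --storage-account "$acct" --query "length(@)" -o tsv)
  [ "$n" = "0" ] && echo "$acct: no file shares"
done
```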
Anything else we need to know?: I suspect this might be related to #980, but it may be a new edge case, as the volumes are used on a node pool that resizes frequently (the cluster just runs GitHub runners, so the pool scales up as jobs queue).
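As a cross-check (assuming jq is available), the account backing each live PV can be read out of the volumeHandle, whose format here is resourceGroup#storageAccount#shareName#...; any account in the resource group that never appears in this list should be one of the orphans:

```sh
# Map each azurefile PV to its backing storage account by splitting the
# volumeHandle (resourceGroup#storageAccount#shareName#...). Accounts in the
# resource group that never show up here are the abandoned ones.
kubectl get pv -o json | jq -r '
  .items[]
  | select(.spec.csi.driver? == "file.csi.azure.com")
  | "\(.metadata.name)\t\(.spec.csi.volumeHandle | split("#")[1])"'
```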
Environment:
CSI Driver version: mcr.microsoft.com/oss/v2/kubernetes-csi/azurefile-csi:v1.34.4
Kubernetes version (use kubectl version): Client Version: v1.33.2
Kustomize Version: v5.6.0
Server Version: v1.34.6
OS (e.g. from /etc/os-release): Ubuntu 22.04.5 LTS
Kernel (e.g. uname -a): Linux aks-ghr-72317509-vmss00008Z 5.15.0-1102-azure #111-Ubuntu SMP Fri Nov 21 22:22:11 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Others: