"Deep Dive into Managing Persistent Volumes in Kubernetes (K8S)"
- Ingress now
- Apr 29, 2024
- 6 min read
Persistent Volume
· A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator.
· A PV is a resource in the cluster, just like a node is a cluster resource.
· PVs are volume plugins, but they have a lifecycle independent of any individual Pod that uses them.
A PersistentVolumeClaim (PVC) is a request for storage by a user.
Note: Pods consume node resources in the same way that PVCs consume PV resources.
Ex: a Pod can request specific levels of resources (CPU and memory);
in the same way, a claim can request a specific size and access modes
(ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod).
· While PVCs allow a user to consume abstract storage resources, administrators need to be able to offer a variety of PVs that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented.
Note: this need is addressed by the StorageClass resource (see Storage Classes).
Lifecycle of a volume:
The interaction between PVs and PVCs follows this lifecycle.
Provisioning:
There are two ways PVs may be provisioned: statically or dynamically.
Static:
The administrator creates a number of PVs that carry the details of the real storage,
which is available for use by cluster users.
Dynamic:
When none of the static PVs the administrator created match a user's PersistentVolumeClaim,
the cluster may try to dynamically provision a volume specially for the PVC.
This provisioning is based on StorageClasses: the PVC must request a storage class,
and the administrator must have created and configured that class for dynamic provisioning to occur.
Claims that request the class "" effectively disable dynamic provisioning for themselves.
How to enable dynamic provisioning?
The cluster administrator must enable the DefaultStorageClass admission plugin on the API server, via the --enable-admission-plugins flag.
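For example, a claim that triggers dynamic provisioning simply names a StorageClass in storageClassName. A minimal sketch; the claim name and class name here are hypothetical:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim                      # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-vol-default    # must match a StorageClass the administrator has created
  resources:
    requests:
      storage: 10Gi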
Binding:
A control loop in the control plane watches for new PVCs, finds a matching PV (if possible), and binds them together.
If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC.
A PVC-to-PV binding is a one-to-one mapping, using a ClaimRef, which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.
Ex: a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi; the PVC can be bound once a 100Gi PV is added to the cluster.
Storage Object in Use Protection:
The purpose of the Storage Object in Use Protection feature is to ensure that PersistentVolumeClaims (PVCs) in active use by a Pod, and PersistentVolumes (PVs) that are bound to PVCs, are not removed from the system, as this may result in data loss.
· If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately.
· PVC removal is postponed until the PVC is no longer actively used by any Pods.
· Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately.
· PV removal is postponed until the PV is no longer bound to a PVC.
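Under the hood this protection is implemented with a finalizer that the control plane adds to the claim automatically. A protected PVC looks roughly like this (a sketch with a hypothetical claim; the finalizer is not something you set yourself):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  finalizers:
    - kubernetes.io/pvc-protection   # added automatically by the control plane; delays deletion while a Pod uses the claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi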
Reclaiming:
When a user is done with their volume, they can delete the PVC object from the API, which allows reclamation of the resource.
The reclaim policy tells the cluster what to do with a released volume: volumes can either be Retained, Recycled, or Deleted.
Retain:
The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted,
the PersistentVolume still exists and the volume is considered "released".
But it is not yet available for another claim, because the previous claimant's data remains on the volume.
An administrator can manually reclaim the volume with the following steps:
1. Delete the PersistentVolume. The associated storage asset in external infrastructure still exists after the PV is deleted.
2. Manually clean up the data on the associated storage asset accordingly.
3. Manually delete the associated storage asset.
Delete:
For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes and the associated storage asset in the external infrastructure.
Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete.
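If you want dynamically provisioned PVs to be kept instead, you can set reclaimPolicy on the StorageClass. A sketch, using a hypothetical class name and provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-on-delete                          # hypothetical name
provisioner: vendor-name.example/magicstorage     # hypothetical provisioner
reclaimPolicy: Retain                             # PVs provisioned from this class are Retained instead of Deleted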
Recycle:
Warning: The Recycle reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning.
If supported by the underlying volume plugin, the Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim.
PersistentVolume deletion protection finalizer:
Finalizers can be added on a PV to ensure that PersistentVolumes with a Delete reclaim policy are deleted only after the backing storage is deleted.
The newly introduced finalizers kubernetes.io/pv-controller and external-provisioner.volume.kubernetes.io/finalizer are only added to dynamically provisioned volumes.
Reserving a PersistentVolume:
The control plane can bind PVCs to matching PVs in the cluster.
However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
If the PV exists and has not reserved a PVC through its claimRef field, then the PV and the PVC will be bound.
If you want a PV to be reserved only by a specific claim, you first need to reserve that storage volume: specify the relevant PVC in the claimRef field of the PV so that other PVCs cannot bind to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  storageClassName: ""
  claimRef:
    name: foo-pvc
    namespace: foo
  ...
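On the claim side, the counterpart names that PV explicitly through volumeName (a sketch reusing the same foo-pv / foo-pvc names):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: ""   # empty string must be explicitly set, otherwise the default StorageClass is used
  volumeName: foo-pv
  ...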
Expanding Persistent Volume Claims:
You can only expand a PVC if its storage class's allowVolumeExpansion field is set to true.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-vol-default
provisioner: vendor-name.example/magicstorage
parameters:
  resturl: "http://192.168.10.100:8080"
  restuser: ""
  secretNamespace: ""
  secretName: ""
allowVolumeExpansion: true
Warning: Directly editing the size of a PV can prevent an automatic resize of that volume. If you edit the capacity of a PV, and then edit the .spec of a matching PVC to make the size of the PVC match the PV, then no storage resize happens. The Kubernetes control plane will see that the desired state of both resources matches, and conclude that the backing volume size has been manually increased and that no resize is necessary.
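The supported way to expand is on the claim side: edit only .spec.resources.requests.storage of the PVC. A sketch, assuming a hypothetical claim bound to a class with allowVolumeExpansion: true:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: resizable-claim                    # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-vol-default
  resources:
    requests:
      storage: 16Gi                        # increased from the original request (e.g. 8Gi); triggers the resize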
CSI Volume expansion:
Support for expanding CSI volumes is enabled by default.
but it also requires a specific CSI driver to support volume expansion.
Resizing a volume containing a file system:
You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4.
When a volume contains a file system, the file system is only resized
when a new Pod is using the PVC in ReadWrite mode.
Resizing an in-use PVC:
In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC.
Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded.
Recovering from Failure when Expanding Volumes:
1. Mark the PV that is bound to the PVC with the Retain reclaim policy.
2. Delete the PVC. Since the PV has the Retain reclaim policy, we will not lose any data when we recreate the PVC.
3. Delete the claimRef entry from the PV spec, so that a new PVC can bind to it. This should make the PV Available.
4. Recreate the PVC with a smaller size than the PV and set the volumeName field of the PVC to the name of the PV (see the sketch below). This should bind the new PVC to the existing PV.
5. Don't forget to restore the original reclaim policy of the PV.
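A sketch of the recreated claim from step 4, using hypothetical names; the requested size must not exceed the PV's capacity:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: recovered-claim              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pv-to-recover          # hypothetical PV name; binds this claim to the existing PV
  resources:
    requests:
      storage: 5Gi                   # smaller than (or equal to) the PV's capacity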
Types of Persistent Volumes:
PersistentVolume types are implemented as plugins.
csi - Container Storage Interface (CSI) storage
fc - Fibre Channel (FC) storage
hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using local volume instead)
iscsi - iSCSI (SCSI over IP) storage
local - local storage devices mounted on nodes.
nfs - Network File System (NFS) storage
Persistent Volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
Kubernetes supports two volumeModes of PV: Filesystem and Block
Access Modes:
ReadWriteOnce
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode can still allow multiple Pods to access the volume when the Pods are running on the same node.
ReadOnlyMany
the volume can be mounted as read-only by many nodes.
ReadWriteMany
the volume can be mounted as read-write by many nodes.
ReadWriteOncePod
the volume can be mounted as read-write by a single Pod. This access mode is only supported for CSI volumes.
In the CLI, the access modes are abbreviated to:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
RWOP - ReadWriteOncePod
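For completeness, a claim requesting the CSI-only ReadWriteOncePod mode would look like this (a sketch with a hypothetical name):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-claim             # hypothetical name
spec:
  accessModes:
    - ReadWriteOncePod               # only one Pod in the whole cluster can use the volume; CSI volumes only
  resources:
    requests:
      storage: 5Gi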
Class:
A PV can have a class, which is specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class.
Reclaim Policy
Current reclaim policies are:
Retain -- manual reclamation
Recycle -- basic scrub (rm -rf /thevolume/*)
Delete -- delete the volume
For Kubernetes 1.30, only nfs and hostPath volume types support recycling
Phase
A PersistentVolume will be in one of the following phases:
Available
a free resource that is not yet bound to a claim
Bound
the volume is bound to a claim
Released
the claim has been deleted, but the associated storage resource is not yet reclaimed by the cluster
Failed
the volume has failed its (automated) reclamation
PersistentVolumeClaims:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
Claims As Volumes (Pod creation):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim


