This page explains how to enable dynamic provisioning of Hyperdisk Balanced High Availability volumes and regional persistent disks, and how to provision them manually, in Google Kubernetes Engine (GKE). For machine type compatibility, see Limitations for Regional Disk and Machine series support for Hyperdisk. Generally, you should use Hyperdisk Balanced High Availability volumes for 3rd generation machine series or newer, and regional persistent disks for 2nd generation machine series or older. For more information on machine series generations, see Compute Engine terminology.
To create end-to-end solutions for high-availability applications with regional persistent disks, see Increase stateful app availability with Stateful HA Operator. That feature does not support Hyperdisk Balanced High Availability volumes.
Hyperdisk Balanced High Availability
This section shows how Hyperdisk Balanced High Availability volumes can be dynamically provisioned as needed, or manually provisioned in advance by the cluster administrator.
Dynamic provisioning
Save the following manifest in a file named balanced-ha-storage.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-ha-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced-high-availability
  provisioned-throughput-on-create: "250Mi"
  provisioned-iops-on-create: "7000"
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - ZONE1
    - ZONE2
Replace the following:

- ZONE1, ZONE2: the zones within the region where the dynamically provisioned volume will be replicated.
Create the StorageClass:
kubectl create -f balanced-ha-storage.yaml
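To verify that the StorageClass was created, you can run:

kubectl get storageclass balanced-ha-storage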
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ACCESS_MODE
  storageClassName: balanced-ha-storage
  resources:
    requests:
      storage: 20Gi
Replace the following:

- ACCESS_MODE: Hyperdisk Balanced High Availability supports ReadWriteOnce, ReadWriteMany, and ReadWriteOncePod. For the differences and use cases of each access mode, see Persistent Volume Access Modes.
Apply the PersistentVolumeClaim that references the StorageClass you created earlier:
kubectl apply -f pvc-example.yaml
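Because the StorageClass sets volumeBindingMode: WaitForFirstConsumer, the claim remains in the Pending state until a Pod that uses it is scheduled. You can check its status with:

kubectl get pvc podpvc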
Manual provisioning
Follow the Compute Engine documentation to create a Hyperdisk Balanced High Availability volume manually.
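For reference, the creation command might look like the following sketch. The disk name gce-disk-1 matches the volumeHandle used in the next step; verify the full set of supported flags, such as provisioned IOPS and throughput, in the Compute Engine documentation:

gcloud compute disks create gce-disk-1 \
    --type hyperdisk-balanced-high-availability \
    --size 500GB \
    --region REGION \
    --replica-zones ZONE1,ZONE2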
Save the following PersistentVolume manifest in a file named pv-example.yaml. The manifest references the Hyperdisk Balanced High Availability volume you just created:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ACCESS_MODE
  claimRef:
    namespace: default
    name: podpvc
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PROJECT_ID/regions/REGION/disks/gce-disk-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.gke.io/zone
          operator: In
          values:
          - ZONE1
          - ZONE2
Replace the following:

- PROJECT_ID: the project ID of the volume you created.
- REGION: the region of the disk you created. Refer to the Compute Engine documentation for the latest regional availability.
- ZONE1, ZONE2: the zones within the region where the volume you created is replicated.
- ACCESS_MODE: Hyperdisk Balanced High Availability supports ReadWriteOnce, ReadWriteMany, and ReadWriteOncePod. For the differences and use cases of each access mode, see Persistent Volume Access Modes.
Create the PersistentVolume that references the Hyperdisk Balanced High Availability volume you created earlier:
kubectl apply -f pv-example.yaml
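Until the matching claim is created, the PersistentVolume reports an Available status, which you can confirm with:

kubectl get pv pv-demo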
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ACCESS_MODE
  storageClassName: balanced-ha-storage
  resources:
    requests:
      storage: 20Gi
Replace the following:

- ACCESS_MODE: Hyperdisk Balanced High Availability supports ReadWriteOnce, ReadWriteMany, and ReadWriteOncePod. The access mode must match the one specified in the PersistentVolume from the previous step. For the differences and use cases of each access mode, see Persistent Volume Access Modes.
Apply the PersistentVolumeClaim that references the PersistentVolume you created earlier:
kubectl apply -f pvc-example.yaml
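To consume the volume, reference the claim from a Pod. The following is a minimal sketch; the Pod name, image, and mount path are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: ha-volume
  volumes:
  - name: ha-volume
    persistentVolumeClaim:
      claimName: podpvc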
Regional persistent disks
As with zonal persistent disks, regional persistent disks can be dynamically
provisioned as needed or manually provisioned in advance by the cluster
administrator, although dynamic provisioning is recommended.
To use regional persistent disks of the pd-standard type, set the PersistentVolumeClaim's spec.resources.requests.storage attribute to a minimum of 200 GiB. If your use case requires a smaller volume, consider using pd-balanced or pd-ssd instead.
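For example, a claim against a pd-standard-backed StorageClass (regionalpd-standard is an assumed name, for illustration only) must request at least 200 GiB:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: standard-regional-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: regionalpd-standard
  resources:
    requests:
      storage: 200Gi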
Dynamic provisioning
To enable dynamic provisioning of regional persistent disks, create a
StorageClass
with the replication-type
parameter, and specify zone
constraints in allowedTopologies
.
For example, the following manifest describes a StorageClass named regionalpd-storageclass that uses balanced persistent disks (pd-balanced) and that replicates data to the europe-west1-b and europe-west1-c zones:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c
If you use a regional cluster, you can leave allowedTopologies unspecified. If you do, then when you create a Pod that consumes a PersistentVolumeClaim which uses this StorageClass, a regional persistent disk is provisioned across two zones: one is the zone the Pod is scheduled in, and the other is picked randomly from the zones available to the cluster.

If you use a zonal cluster, allowedTopologies must be set.
After the StorageClass is created, create a PersistentVolumeClaim object that refers to the StorageClass through the storageClassName field. For example, the following manifest creates a PersistentVolumeClaim named regional-pvc that references regionalpd-storageclass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: regional-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: regionalpd-storageclass
Since the StorageClass
is configured with
volumeBindingMode: WaitForFirstConsumer
, the PersistentVolume
is not
provisioned until a Pod using the PersistentVolumeClaim
has been created.
The following manifest is an example Pod using the previously created
PersistentVolumeClaim
:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: regional-pvc
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
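Putting the pieces together, the workflow looks like the following; the file names are illustrative and assume you saved each of the manifests above to its own file:

kubectl apply -f regionalpd-storageclass.yaml
kubectl apply -f regional-pvc.yaml
kubectl apply -f task-pv-pod.yaml

# The claim binds once the Pod is scheduled.
kubectl get pvc regional-pvc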
Manual provisioning
First, create a regional persistent disk using the
gcloud compute disks create
command. The following example creates a disk named gce-disk-1
replicated to the europe-west1-b
and europe-west1-c
zones:
gcloud compute disks create gce-disk-1 \
  --size 500GB \
  --region europe-west1 \
  --replica-zones europe-west1-b,europe-west1-c
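You can confirm the disk's size and replica zones with:

gcloud compute disks describe gce-disk-1 \
  --region europe-west1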
You can then create a PersistentVolume that references the regional persistent disk you just created. In addition to the objects described in Using preexisting Persistent Disks as PersistentVolumes, the PersistentVolume for a regional persistent disk should also specify nodeAffinity.
If you use a StorageClass, it should specify the persistent disk CSI driver. Here's an example of a StorageClass manifest that uses balanced persistent disks (pd-balanced) and that replicates data to the europe-west1-b and europe-west1-c zones:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c
Here's an example manifest that creates a PersistentVolume
named
pv-demo
and references the regionalpd-storageclass
:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "regionalpd-storageclass"
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: default
    name: pv-claim-demo
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PROJECT_ID/regions/europe-west1/disks/gce-disk-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.gke.io/zone
          operator: In
          values:
          - europe-west1-b
          - europe-west1-c
Note the following for the PersistentVolume example:

- The volumeHandle field contains details from the gcloud compute disks create call, including your PROJECT_ID.
- The claimRef.namespace field must be specified even when it is set to default.
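The claimRef above points to a PersistentVolumeClaim named pv-claim-demo that you still need to create. A minimal sketch of a matching claim might look like the following; the capacity and access mode mirror the PersistentVolume above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-demo
  namespace: default
spec:
  storageClassName: "regionalpd-storageclass"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi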
Naming persistent disks
Kubernetes cannot distinguish between zonal and regional persistent disks with the same name. As a workaround, ensure that persistent disks have unique names. This issue does not occur when using dynamically provisioned persistent disks.
What's next
- Take a tutorial to learn about Deploying WordPress on GKE with Persistent Disks and Cloud SQL.