This page shows you how to enforce encryption of data in-use in your nodes and workloads by using Confidential Google Kubernetes Engine Nodes. Enforcing encryption can help increase the security of your workloads.
This page is for Security specialists who implement security measures on GKE. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Before reading this page, ensure that you're familiar with the concept of data-in-use.
What is Confidential GKE Nodes?
You can encrypt your workloads with Confidential GKE Nodes or Confidential mode for Hyperdisk Balanced.
Confidential GKE Nodes
Confidential GKE Nodes is built on top of Compute Engine Confidential VM, which uses hardware-based memory encryption to protect data in use. Confidential GKE Nodes supports the following Confidential Computing technologies:
- AMD Secure Encrypted Virtualization (SEV)
- AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP)
- Intel Trust Domain Extensions (TDX)
For details about these technologies and for help choosing the best fit for your requirements, see the Confidential VM overview.
Confidential GKE Nodes doesn't change the security measures that GKE applies to cluster control planes. To learn about these measures, see Control plane security. For visibility over who accesses control planes in your Google Cloud projects, use Access Transparency.
You can do the following to enable Confidential GKE Nodes:
- Create a new cluster
- Deploy a workload with node auto-provisioning
- Create a node pool
- Update an existing node pool
You can't update an existing cluster to change the cluster-level Confidential GKE Nodes setting.
The following table shows you the GKE behavior that applies when you enable Confidential GKE Nodes:
Confidential GKE Nodes setting | How to configure | Behavior |
---|---|---|
Cluster level | Create a new Autopilot or Standard mode cluster | All nodes use Confidential GKE Nodes. This operation is irreversible. You can't override the setting for individual nodes. In GKE Autopilot clusters, all nodes automatically use the default machine series for the Balanced compute class, which is N2D. |
Node pool level | Create a new node pool or update an existing node pool | GKE encrypts the memory contents of nodes in that node pool. This is only possible if Confidential GKE Nodes is disabled at the cluster level. |
Confidential mode for Hyperdisk Balanced
You can also enable Confidential mode for Hyperdisk Balanced on your boot disk storage, which encrypts your data on additional hardware-backed enclaves.
You can enable Confidential mode for Hyperdisk Balanced when doing one of the following:
- Create a new cluster
- Create a new node pool
You cannot update an existing cluster or a node pool to change the Confidential mode for Hyperdisk Balanced setting.
The following table shows you the GKE behavior that applies when you enable Confidential mode for Hyperdisk Balanced setting at the cluster level or at the node pool level:
Confidential mode for Hyperdisk Balanced setting | How to configure | Behavior |
---|---|---|
Cluster level | Create a new cluster | Only the default node pool in the cluster uses Confidential mode for Hyperdisk Balanced. You can't update the existing cluster to change this setting, and new node pools don't inherit it; you must enable the setting when you create each new node pool. |
Node pool level | Create a new node pool | You can configure Confidential mode for Hyperdisk Balanced setting for any new node pools at creation time. You can't update existing node pools to use Confidential mode for Hyperdisk Balanced setting. |
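For example, a node pool creation command that enables Confidential mode for Hyperdisk Balanced might look like the following sketch. The --enable-confidential-storage and --disk-type flag names are assumptions inferred from the enableConfidentialStorage output field shown later on this page; verify them against the gcloud reference before you rely on them.

# Hedged sketch: create a node pool whose boot disks use Confidential mode
# for Hyperdisk Balanced. MACHINE_TYPE must support both AMD SEV and
# Hyperdisk Balanced.
gcloud container node-pools create NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --machine-type=MACHINE_TYPE \
    --confidential-node-type=sev \
    --disk-type=hyperdisk-balanced \
    --enable-confidential-storage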
Pricing
The following pricing applies:
Autopilot:
- You incur costs based on the Balanced compute class pricing because enabling Confidential GKE Nodes changes the default machine series in the cluster to N2D. For pricing details, see Autopilot pricing.
- You incur costs for Confidential GKE Nodes in addition to the GKE Autopilot pricing. For details, see the "Confidential GKE Nodes on GKE Autopilot pricing" section in Confidential VM pricing.
Standard: There is no additional cost to deploy Confidential GKE Nodes, other than the cost of Compute Engine Confidential VM. However, Confidential GKE Nodes might generate slightly more log data on startup than standard nodes. For information on logs pricing, see Pricing for Google Cloud Observability.
Availability
Confidential GKE Nodes has the following availability requirements:
- Your nodes must be in a zone or a region that supports the Confidential Computing technology that you select. For more information, see View supported zones.
- Your Autopilot clusters must use GKE version 1.30.2 or later.
- Your Standard node pools must use one of the supported machine types and the Container-Optimized OS node image.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Requirements
- Your Autopilot clusters must use GKE version 1.30.2 or later.
- Your Standard node pools must use one of the supported machine types and the Container-Optimized OS node image.
- Your Standard clusters and node pools must use one of the following GKE versions, depending on the Confidential Computing technology that you choose:
- AMD SEV: any available GKE version.
- AMD SEV-SNP: 1.32.2-gke.1297000 or later.
- Intel TDX: 1.32.2-gke.1297000 or later.
Use Confidential GKE Nodes in Autopilot
You can enable Confidential GKE Nodes for an entire Autopilot cluster, which makes every node a confidential node. All your workloads run on confidential nodes with no changes needed to workload manifests. Enabling Confidential GKE Nodes changes the default machine series in the cluster to N2D.
Enable Confidential GKE Nodes on a new Autopilot cluster
Run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--location=LOCATION \
--enable-confidential-nodes
Replace the following:
- CLUSTER_NAME: the name of the Autopilot cluster.
- LOCATION: the Compute Engine location for the cluster.
The cluster must run version 1.30.2 or later. To set a specific version when you create a cluster, see Set the version and release channel of a new Autopilot cluster.
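As a hedged illustration, you can pin the version at creation time with the --cluster-version flag, where CLUSTER_VERSION is a placeholder for a supported version (1.30.2 or later):

gcloud container clusters create-auto CLUSTER_NAME \
    --location=LOCATION \
    --enable-confidential-nodes \
    --cluster-version=CLUSTER_VERSION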
Use Confidential GKE Nodes in Standard mode
You can enable Confidential GKE Nodes at the cluster level or at the node pool level in Standard mode.
Enable Confidential GKE Nodes on Standard clusters
You can specify a Confidential Computing technology for your nodes when you create a cluster. Specifying the technology when you create a cluster has all of the following effects:
- You can't create node pools that don't use Confidential GKE Nodes in that cluster.
- You can't update the cluster to disable Confidential GKE Nodes.
- You can't override the cluster-level Confidential Computing technology in individual node pools.
Configuring a Confidential Computing setting at the cluster level is permanent. As a result, consider the following use cases before you create your cluster:
- To use node auto-provisioning in your cluster, you must do all of the following:
  - Use the gcloud CLI to create your cluster and specify the --enable-confidential-nodes flag in your cluster creation command.
  - Select a Confidential Computing technology that node auto-provisioning supports.
  For details, see the Use Confidential GKE Nodes with node auto-provisioning section.
- To use different Confidential Computing technologies to encrypt specific node pools in the cluster, skip this section and specify the technology at the node pool level.
To create a Standard mode cluster that uses Confidential GKE Nodes, select one of the following options:
gcloud
When creating a new cluster, specify the --confidential-node-type
option
in the gcloud CLI:
gcloud container clusters create CLUSTER_NAME \
--location=LOCATION \
--machine-type=MACHINE_TYPE \
--confidential-node-type=CONFIDENTIAL_COMPUTE_TECHNOLOGY
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- LOCATION: the Compute Engine location for the cluster. The location must support the Confidential Computing technology that you specify. For details, see the Availability section.
- MACHINE_TYPE: a machine type that supports the Confidential Computing technology that you specify. For details, see the Availability section.
- CONFIDENTIAL_COMPUTE_TECHNOLOGY: the Confidential Computing technology to use. The following values are supported:
  - sev: AMD SEV
  - sev_snp: AMD SEV-SNP
  - tdx: Intel TDX
You can also use the --enable-confidential-nodes flag in your cluster creation command. If you specify only this flag in your command, the cluster uses AMD SEV. The machine type that you specify in the command must support AMD SEV. However, if you specify the --confidential-node-type flag in the same command, GKE uses the value that you specify in the --confidential-node-type flag.
Console
In the Google Cloud console, go to the Create a Kubernetes cluster page.
From the navigation pane, under Cluster, click Security.
Select the Enable Confidential GKE Nodes checkbox.
Configure your cluster as needed.
Click Create.
See Creating a regional cluster for more details about creating clusters.
If you enable Confidential mode for Hyperdisk Balanced on a node pool, the setting applies only to the nodes in that node pool. For any new node pools that you create in the cluster, you must enable Confidential mode for Hyperdisk Balanced at creation time.
Enable Confidential GKE Nodes on node pools
You can enable Confidential GKE Nodes on specific node pools if Confidential GKE Nodes is disabled at the cluster level.
You must specify the Confidential mode for Hyperdisk Balanced setting when you create the node pool.
Create a new node pool
To create a new node pool with Confidential GKE Nodes enabled, run the following command:
gcloud container node-pools create NODE_POOL_NAME \
--location=LOCATION \
--cluster=CLUSTER_NAME \
--machine-type=MACHINE_TYPE \
--confidential-node-type=CONFIDENTIAL_COMPUTE_TECHNOLOGY
Replace the following:
- NODE_POOL_NAME: the name of your new node pool.
- LOCATION: the location for your new node pool. The location must support the Confidential Computing technology that you specify. For details, see the Availability section.
- CLUSTER_NAME: the name of your cluster.
- MACHINE_TYPE: a machine type that supports the Confidential Computing technology that you specify. For details, see the Availability section.
- CONFIDENTIAL_COMPUTE_TECHNOLOGY: the Confidential Computing technology to use. The following values are supported:
  - sev: AMD SEV
  - sev_snp: AMD SEV-SNP
  - tdx: Intel TDX
You can also use the --enable-confidential-nodes flag in your node pool creation command. If you specify only this flag in your command, the node pool uses AMD SEV. The machine type that you specify in the command must support AMD SEV. However, if you specify the --confidential-node-type flag in the same command, GKE uses the value that you specify in the --confidential-node-type flag.
Update an existing node pool
This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy without respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.
You can update existing node pools to use Confidential GKE Nodes or to switch the Confidential Computing technology that the nodes use. To update an existing node pool to use Confidential GKE Nodes, run the following command:
gcloud container node-pools update NODE_POOL_NAME \
--cluster=CLUSTER_NAME \
--confidential-node-type=CONFIDENTIAL_COMPUTE_TECHNOLOGY
Replace the following:
- NODE_POOL_NAME: the name of your node pool.
- CLUSTER_NAME: the name of your cluster.
- CONFIDENTIAL_COMPUTE_TECHNOLOGY: the Confidential Computing technology to use. The following values are supported:
  - sev: AMD SEV
  - sev_snp: AMD SEV-SNP
  - tdx: Intel TDX
The nodes must already use a machine type that supports the Confidential Computing technology that you're updating the nodes to use. If your nodes use a machine type that doesn't support your chosen technology (for example, if you use a machine type that has AMD CPUs and you want to enable Intel TDX), do the following:
- If the node pool already uses Confidential GKE Nodes, disable Confidential GKE Nodes.
- Change the machine type of the node pool.
- Update the node pool to use the new Confidential Computing setting by running the preceding command.
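The following commands are a hedged sketch of that sequence for switching a node pool to Intel TDX. The c3-standard-4 machine type and the use of --machine-type with node-pools update are assumptions; confirm both against the gcloud reference and the supported machine types for your chosen technology.

# 1. Disable Confidential GKE Nodes on the node pool.
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --no-enable-confidential-nodes

# 2. Change to a machine type that supports the target technology
#    (c3-standard-4 is an assumed example for Intel TDX).
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --machine-type=c3-standard-4

# 3. Enable Confidential GKE Nodes with the new technology.
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --confidential-node-type=tdx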
Use Confidential GKE Nodes with node auto-provisioning
You can configure node auto-provisioning to use Confidential GKE Nodes in auto-provisioned node pools. Node auto-provisioning supports the following Confidential Computing technologies:
- AMD SEV
- AMD SEV-SNP
To use Confidential GKE Nodes with node auto-provisioning, specify the --enable-confidential-nodes gcloud CLI flag when you create a cluster, create a node pool, or update a node pool. The following additional considerations apply:
- Create a new auto-provisioned node pool: ensure that the Confidential Computing technology that you choose is supported in node auto-provisioning.
- Update an existing node pool: ensure that the Confidential Computing technology that you choose is supported in node auto-provisioning.
- Create a new cluster: ensure that the Confidential Computing technology that you choose is supported in node auto-provisioning. This choice is irreversible at the cluster level.
- Update an existing cluster: the cluster must already use Confidential GKE Nodes. The Confidential GKE Nodes technology that the cluster uses must be one that node auto-provisioning supports.
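For example, a minimal sketch of a cluster creation command that combines Confidential GKE Nodes with node auto-provisioning might look like the following. The machine type and resource limits are placeholder values; because only --enable-confidential-nodes is specified, the cluster uses AMD SEV.

gcloud container clusters create CLUSTER_NAME \
    --location=LOCATION \
    --machine-type=n2d-standard-4 \
    --enable-confidential-nodes \
    --enable-autoprovisioning \
    --max-cpu=64 \
    --max-memory=256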
Place workloads on only Confidential GKE Nodes node pools
If you enable Confidential GKE Nodes at the cluster level, all of your workloads run on confidential nodes. You don't need to make changes to your manifests. However, if you only enable Confidential GKE Nodes for specific Standard mode node pools, you should declaratively express that your workloads must only run on node pools with Confidential GKE Nodes.
To require that a workload runs on a specific Confidential Computing technology, use a node selector with the cloud.google.com/gke-confidential-nodes-instance-type label, like in the following example:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: my-confidential-app
    image: us-docker.pkg.dev/myproject/myrepo/my-confidential-app
  nodeSelector:
    cloud.google.com/gke-confidential-nodes-instance-type: "CONFIDENTIAL_COMPUTE_TECHNOLOGY"
Replace CONFIDENTIAL_COMPUTE_TECHNOLOGY with the name of the technology that the node pool uses. The following values are supported:
- sev: AMD SEV
- sev_snp: AMD SEV-SNP
- tdx: Intel TDX
To let a workload run on any confidential nodes, regardless of the Confidential Computing technology, use a node affinity rule, like in the following example:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: confidential-app
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-confidential-nodes-instance-type
            operator: Exists
To let a workload run on nodes that use only a subset of the available Confidential Computing technologies, use a node affinity rule that's similar to the following example:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: confidential-app
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-confidential-nodes-instance-type
            operator: In
            values:
            - SEV
            - SEV_SNP
            - TDX
In the values field, specify only the Confidential Computing technologies that you want to run the workload on.
Verify that Confidential GKE Nodes are enabled
You can check whether your clusters or nodes use Confidential GKE Nodes by inspecting the clusters or nodes.
On Autopilot mode or Standard mode clusters
You can verify that your Autopilot or Standard cluster is using Confidential GKE Nodes with the gcloud CLI or the Google Cloud console.
gcloud
Describe the cluster:
gcloud container clusters describe CLUSTER_NAME
If Confidential GKE Nodes is enabled, the output is similar to the following, depending on your cluster mode of operation.
Standard mode clusters
confidentialNodes:
confidentialInstanceType: CONFIDENTIAL_COMPUTE_TECHNOLOGY
Autopilot mode clusters
confidentialNodes:
enabled: true
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click the name of the cluster you want to inspect.
Under Security, in the Confidential GKE Nodes field, verify that Confidential GKE Nodes is Enabled.
On Autopilot mode or Standard mode nodes
To verify whether specific Autopilot or Standard nodes use Confidential GKE Nodes, do the following:
Find the node name:
kubectl get nodes
Describe the node:
kubectl describe node NODE_NAME

Replace NODE_NAME with the name of a node to inspect.
The output is similar to the following:
Name: gke-cluster-1-default-pool-affsf335r-asdf
Roles: <none>
Labels: cloud.google.com/gke-boot-disk=pd-balanced
cloud.google.com/gke-container-runtime=containerd
cloud.google.com/gke-confidential-nodes-instance-type=CONFIDENTIAL_COMPUTE_TECHNOLOGY
cloud.google.com/gke-nodepool=default-pool
cloud.google.com/gke-os-distribution=cos
cloud.google.com/machine-family=e2
# lines omitted for clarity
In this output, the cloud.google.com/gke-confidential-nodes-instance-type
node label indicates that the node is a confidential node.
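To check all nodes at once instead of describing them individually, you can print that label as a column in the kubectl get output:

kubectl get nodes -L cloud.google.com/gke-confidential-nodes-instance-type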
On Standard mode node pools
To verify that your node pool is using Confidential GKE Nodes, run the following command:
gcloud container node-pools describe NODE_POOL_NAME \
--cluster=CLUSTER_NAME
If Confidential GKE Nodes is enabled, the output is similar to the following:
confidentialNodes:
  confidentialInstanceType: CONFIDENTIAL_COMPUTE_TECHNOLOGY
If Confidential mode for Hyperdisk Balanced setting is enabled, the output is similar to the following:
enableConfidentialStorage: true
On individual Standard mode nodes
To validate the confidentiality of specific nodes in Standard clusters, do any of the following:
Set organization policy constraints
You can define an organization policy constraint to ensure that all VM resources created across your organization are Confidential VM instances. For GKE, you can customize the Restrict Non-Confidential Computing constraint to require that all new clusters are created with one of the available Confidential Computing technologies enabled. Add the container.googleapis.com API service name to the deny list when enforcing organization policy constraints, like in the following example:
gcloud resource-manager org-policies deny \
constraints/compute.restrictNonConfidentialComputing compute.googleapis.com container.googleapis.com \
--project=PROJECT_ID
Replace PROJECT_ID with your project ID.
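To confirm the resulting policy on a project, you can describe the constraint and view the effective policy, for example:

gcloud resource-manager org-policies describe \
    constraints/compute.restrictNonConfidentialComputing \
    --project=PROJECT_ID \
    --effective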
Create a PersistentVolume for Confidential mode for Hyperdisk Balanced
For guidance on allowable values for throughput or IOPS, see Plan the performance level for your Hyperdisk volume.
The following examples show how you can create a Confidential mode for Hyperdisk Balanced StorageClass for each Hyperdisk type:
Hyperdisk Balanced
Save the following manifest in a file named confidential-hdb-example-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "250Mi"
  provisioned-iops-on-create: "7000"
  enable-confidential-storage: "true"
  disk-encryption-kms-key: "projects/KMS_PROJECT_ID/locations/REGION/keyRings/KEY_RING/cryptoKeys/HSM_KEY_NAME"
Replace the following:
- KMS_PROJECT_ID: the project that owns the Cloud KMS key
- REGION: the region where the disk is located
- KEY_RING: the name of the key ring that includes the key
- HSM_KEY_NAME: the name of the HSM key used to encrypt the disk
Create the StorageClass:
kubectl create -f confidential-hdb-example-class.yaml
Create a Hyperdisk Persistent Volume Claim for GKE that uses your Confidential mode for Hyperdisk Balanced volume.
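As a minimal sketch, a PersistentVolumeClaim that references the balanced-storage StorageClass from the previous step might look like the following; the claim name and requested size are placeholder values.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: confidential-hdb-pvc   # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: balanced-storage
  resources:
    requests:
      storage: 100Gi   # placeholder size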
To find the name of the StorageClasses available in your cluster, run the following command:
kubectl get sc
Limitations
Confidential GKE Nodes has the following limitations:
- All the limitations of Compute Engine Confidential VM instances apply to Confidential GKE Nodes.
- All of the limitations of using CMEK to encrypt disks apply to Confidential mode for Hyperdisk Balanced.
- Confidential GKE Nodes that have the C2D machine type can only use node auto-provisioning in GKE version 1.24 or later.
- Confidential GKE Nodes only supports PersistentVolumes backed by persistent disks if your control plane runs GKE version 1.22 or later. For instructions, refer to Using the Compute Engine persistent disk CSI Driver .
- Confidential GKE Nodes is not compatible with sole tenant nodes.
- Confidential GKE Nodes supports local SSDs only for ephemeral storage; other uses of local SSDs aren't supported.
- Only Container-Optimized OS nodes are supported. Ubuntu and Windows nodes are not supported.
- Confidential mode for Hyperdisk Balanced is supported only on Confidential GKE Nodes that use AMD SEV as the Confidential Computing technology.
- GKE Autopilot clusters support only AMD SEV. AMD SEV-SNP and Intel TDX aren't supported.
- To use
node auto-provisioning
with Confidential GKE Nodes, you must use the
--enable-confidential-nodes
flag in your Standard mode cluster or node pool gcloud CLI commands. Node auto-provisioning doesn't support Intel TDX.
Live migration limitations
Compute Engine Confidential VM instances that use the N2D machine type and AMD SEV as the Confidential Computing technology support live migration, which minimizes potential workload disruption from a host maintenance event. Live migration is available in the following GKE versions:
- 1.27.10-gke.1218000 and later
- 1.28.6-gke.1393000 and later
- 1.29.1-gke.1621000 and later
If your node pools were already running a supported version when live migration was added, manually upgrade the node pools to the same or a different supported version. Upgrading the nodes triggers node recreation, and the new nodes have live migration enabled.
For details about which Compute Engine machine types support live migration, see Supported configurations.
If a host maintenance event occurs on a node that doesn't support live migration, the node enters a NotReady state. Running Pods experience disruptions until the node becomes ready again. If the maintenance takes more than five minutes, GKE might try to recreate the Pods on other nodes.
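To observe this behavior during a maintenance event, you can watch node status transitions as they happen:

kubectl get nodes --watch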
Disable Confidential GKE Nodes
You can only disable Confidential GKE Nodes in Standard mode node pools. If the node pool is in a cluster that uses Confidential GKE Nodes at the cluster level, you can't disable the feature at the node pool level.
Run the following command to disable Confidential GKE Nodes on a node pool:
gcloud container node-pools update NODE_POOL_NAME \
--cluster=CLUSTER_NAME \
--no-enable-confidential-nodes
This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy without respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.
What's next
- Learn more about Confidential VM
- Learn more about Google Cloud encryption at rest
- Learn more about Google Cloud encryption in transit
- Learn more about customer-managed encryption keys (CMEK)
- Learn how to remotely attest that workloads are running on Confidential VM
- Learn how to run GPUs on Confidential GKE Nodes (Preview)