Share disks between VMs


To access the same disk from multiple virtual machine (VM) instances, you can enable read-only or multi-writer sharing.

Read-only sharing allows static access to the data on a disk from multiple VMs. The VMs that the disk is attached to can only read data from the disk, and can't write to the disk.

Multi-writer mode grants multiple VMs read-write access to the same disk.

VMs must be in the same zone to share a zonal disk. Similarly, regional disks can only be shared with VMs in the same zones as the disk's replicas.

This document discusses disk sharing in Compute Engine and how to enable it.

Before you begin

  • To share a Hyperdisk ML volume between VMs, you must set the access mode of the Hyperdisk ML volume to read-only. To change a Hyperdisk ML volume's access mode, see Change the access mode of a Hyperdisk ML volume.
  • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.

    Java

    To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. To initialize the gcloud CLI, run the following command:

      gcloud init
    3. Create local authentication credentials for your Google Account:

      gcloud auth application-default login

    For more information, see Set up authentication for a local development environment.

    Python

    To use the Python samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.
    2. To initialize the gcloud CLI, run the following command:

      gcloud init
    3. Create local authentication credentials for your Google Account:

      gcloud auth application-default login

    For more information, see Set up authentication for a local development environment.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Required roles and permissions

To get the permissions that you need to share a disk between VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project.

For more information about granting roles, see Manage access.

These predefined roles contain the permissions required to share a disk between VMs. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to share a disk between VMs:

  • To attach a disk to a VM:
    • compute.instances.attachDisk on the VM
    • compute.disks.use on the disk that you want to attach to the VM

You might also be able to get these permissions with custom roles or other predefined roles.

Overview of read-only mode

To share static data on a disk between multiple VMs, attach the disk to each VM in read-only mode. Sharing a single disk between multiple VMs is less expensive than having copies of the same data on multiple disks.
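The cost advantage can be illustrated with back-of-envelope arithmetic. The per-GiB monthly price below is a hypothetical placeholder, not a published rate; substitute the current price for your disk type.

```python
# Hypothetical per-GiB monthly price; replace with the current rate for your disk type.
PRICE_PER_GIB_MONTH = 0.10

def monthly_cost(disk_size_gib: int, num_disks: int) -> float:
    """Monthly storage cost for num_disks disks of the given size."""
    return disk_size_gib * num_disks * PRICE_PER_GIB_MONTH

# A 500 GiB dataset needed by 10 VMs:
copies = monthly_cost(500, num_disks=10)   # one copy of the data per VM
shared = monthly_cost(500, num_disks=1)    # one disk attached read-only to all 10 VMs
print(f"copies: ${copies:.2f}/mo, shared: ${shared:.2f}/mo")
```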

Supported disk types for read-only mode

These Persistent Disk and Google Cloud Hyperdisk types support attaching multiple VMs in read-only mode:

  • Hyperdisk ML
  • Zonal and regional Balanced Persistent Disk
  • SSD Persistent Disk
  • Standard Persistent Disk

Performance in read-only mode

Attaching a read-only disk to multiple VMs does not affect the disk's performance. Each VM can still reach the maximum disk performance possible for the VM's machine series.

Restrictions for sharing disks in read-only mode

  • If you share a Hyperdisk ML volume in read-only mode, you can't re-enable write access to the disk.
  • You can attach a Hyperdisk ML volume to up to 100 VMs during every 30-second interval.
  • The maximum number of VMs a disk can be attached to varies by disk type:
    • For Hyperdisk ML volumes, the maximum number of VMs depends on the provisioned size, as follows:
      • Volumes less than 256 GiB in size: 2,500
      • Volumes with capacity of 256 GiB or more, and less than 1 TiB: 1,500
      • Volumes with capacity of 1 TiB or more, and less than 2 TiB: 600
      • Volumes with 2 TiB or more of capacity: 30
    • Zonal or regional Balanced Persistent Disk volumes in read-only mode support at most 10 VMs.
    • For SSD Persistent Disk, Google recommends at most 100 VMs.
    • For Standard Persistent Disk volumes, the recommended maximum is 10 VMs.
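The Hyperdisk ML attachment limits above can be expressed as a small helper. The function name is illustrative; the thresholds come from the size tiers listed in this section.

```python
def max_attached_vms_hyperdisk_ml(size_gib: int) -> int:
    """Maximum number of VMs a Hyperdisk ML volume can be attached to,
    based on its provisioned size (tiers from the restrictions above)."""
    if size_gib < 256:
        return 2500
    if size_gib < 1024:   # 256 GiB up to, but not including, 1 TiB
        return 1500
    if size_gib < 2048:   # 1 TiB up to, but not including, 2 TiB
        return 600
    return 30             # 2 TiB or more

print(max_attached_vms_hyperdisk_ml(100))   # 2500
```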

Prepare to share a disk in read-only mode

If you're not using Hyperdisk ML, you don't need to follow any additional steps. You can share the disk by following the instructions in Share a disk in read-only mode between multiple VMs.

To share a Hyperdisk ML volume in read-only mode, you must set the disk's access mode property to read-only mode. The access mode indicates the type of access granted to any VM the disk is attached to. If you're using a Persistent Disk volume, you don't have to manually set the access mode.

The available access modes for Hyperdisk volumes are as follows:

  • Read-only mode (READ_ONLY_MANY): grants read-only access to all VMs attached to the disk.
  • Read-write mode (READ_WRITE_SINGLE): allows only 1 VM to be attached to the disk, and grants the attached VM read-write access. This is the default access mode.

To share a Hyperdisk ML volume between VMs, change the access mode to READ_ONLY_MANY.

After you enable read-only mode, follow the steps to Share a disk in read-only mode between multiple VMs.

Share a disk in read-only mode between VMs

This section describes how to attach a non-boot Hyperdisk ML or Persistent Disk volume in read-only mode to multiple VMs.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. In the list of VMs in your project, click the name of the VM where you want to attach the disk. The VM instance details page opens.

  3. On the instance details page, click Edit.

  4. In the Additional disks section, do the following:

    1. Click Add a disk.
    2. Click Attach existing disk to select an existing disk and attach it in read-only mode to your VM.
  5. In the Disk list, select the disk you want to attach. If the disk isn't listed, make sure it's in the same location as the VM. This means the same zone for a zonal disk and the same region for a regional disk.

  6. For Disk attachment mode, select Read-only.

  7. Specify other options for your disk.

  8. To apply the changes to the disk, click Done.

  9. To apply your changes to the VM, click Save.

  10. Connect to the VM and mount the disk.

  11. Repeat this process to attach the disk to other VMs in read-only mode.

gcloud

In the gcloud CLI, use the compute instances attach-disk command and specify the --mode flag with the ro option.

gcloud compute instances attach-disk INSTANCE_NAME \
  --disk DISK_NAME \
  --mode ro

Replace the following:

  • INSTANCE_NAME: the name of the VM where you want to attach the zonal Persistent Disk volume
  • DISK_NAME: the name of the disk that you want to attach

After you attach the disk, connect to the VM and mount the disk.

Repeat this command for each VM where you want to add this disk in read-only mode.
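When you attach the disk to many readers, it can help to generate the commands first and review them before running; a minimal sketch (the instance and disk names are hypothetical):

```python
def attach_disk_ro_command(instance_name: str, disk_name: str) -> str:
    """Build the gcloud command that attaches a disk in read-only mode."""
    return (
        f"gcloud compute instances attach-disk {instance_name} "
        f"--disk {disk_name} --mode ro"
    )

# One attach command per reader VM.
for vm in ["vm-reader-1", "vm-reader-2", "vm-reader-3"]:
    print(attach_disk_ro_command(vm, "shared-dataset"))
```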

Java

Before trying this sample, follow the Java setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Java API reference documentation.

To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.compute.v1.AttachDiskInstanceRequest;
import com.google.cloud.compute.v1.AttachedDisk;
import com.google.cloud.compute.v1.InstancesClient;
import com.google.cloud.compute.v1.Operation;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AttachDisk {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project you want to use.
    String projectId = "your-project-id";

    // Name of the zone in which the instance you want to use resides.
    String zone = "zone-name";

    // Name of the compute instance you want to attach a disk to.
    String instanceName = "instance-name";

    // Full or partial URL of a persistent disk that you want to attach. This can be
    // either a regional or zonal disk.
    // Valid formats:
    //     * https://www.googleapis.com/compute/v1/projects/{project}/zones/{zone}/disks/{disk_name}
    //     * /projects/{project}/zones/{zone}/disks/{disk_name}
    //     * /projects/{project}/regions/{region}/disks/{disk_name}
    String diskLink = String.format("/projects/%s/zones/%s/disks/%s",
        "project", "zone", "disk_name");

    // Specifies in what mode the disk will be attached to the instance. Available options are
    // `READ_ONLY` and `READ_WRITE`. Disk in `READ_ONLY` mode can be attached to
    // multiple instances at once.
    String mode = "READ_ONLY";

    attachDisk(projectId, zone, instanceName, diskLink, mode);
  }

  // Attaches a non-boot persistent disk to a specified compute instance.
  // The disk might be zonal or regional.
  // You need following permissions to execute this action:
  // https://cloud.google.com/compute/docs/disks/regional-persistent-disk#expandable-1
  public static void attachDisk(String projectId, String zone, String instanceName, String diskLink,
      String mode)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the `instancesClient.close()` method on the client to safely
    // clean up any remaining background resources.
    try (InstancesClient instancesClient = InstancesClient.create()) {

      AttachDiskInstanceRequest attachDiskInstanceRequest = AttachDiskInstanceRequest.newBuilder()
          .setProject(projectId)
          .setZone(zone)
          .setInstance(instanceName)
          .setAttachedDiskResource(AttachedDisk.newBuilder()
              .setSource(diskLink)
              .setMode(mode)
              .build())
          .build();

      Operation response = instancesClient.attachDiskAsync(attachDiskInstanceRequest)
          .get(3, TimeUnit.MINUTES);

      if (response.hasError()) {
        System.out.println("Attach disk failed! " + response);
        return;
      }
      System.out.println("Attach disk - operation status: " + response.getStatus());
    }
  }
}

Python

Before trying this sample, follow the Python setup instructions in the Compute Engine quickstart using client libraries. For more information, see the Compute Engine Python API reference documentation.

To authenticate to Compute Engine, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def attach_disk(
    project_id: str, zone: str, instance_name: str, disk_link: str, mode: str
) -> None:
    """
    Attaches a non-boot persistent disk to a specified compute instance. The disk might be zonal or regional.

    You need following permissions to execute this action:
    https://cloud.google.com/compute/docs/disks/regional-persistent-disk#expandable-1

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone in which the instance you want to use resides.
        instance_name: name of the compute instance you want to attach a disk to.
        disk_link: full or partial URL to a persistent disk that you want to attach. This can be
            either a regional or a zonal disk.
            Expected formats:
                * https://www.googleapis.com/compute/v1/projects/[project]/zones/[zone]/disks/[disk_name]
                * /projects/[project]/zones/[zone]/disks/[disk_name]
                * /projects/[project]/regions/[region]/disks/[disk_name]
        mode: Specifies in what mode the disk will be attached to the instance. Available options are `READ_ONLY`
            and `READ_WRITE`. Disk in `READ_ONLY` mode can be attached to multiple instances at once.
    """
    instances_client = compute_v1.InstancesClient()

    request = compute_v1.AttachDiskInstanceRequest()
    request.project = project_id
    request.zone = zone
    request.instance = instance_name
    request.attached_disk_resource = compute_v1.AttachedDisk()
    request.attached_disk_resource.source = disk_link
    request.attached_disk_resource.mode = mode

    operation = instances_client.attach_disk(request)

    wait_for_extended_operation(operation, "disk attachment")

REST

In the API, construct a POST request to the compute.instances.attachDisk method. In the request body, specify the mode parameter as READ_ONLY.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

{
 "source": "zones/ZONE/disks/DISK_NAME",
 "mode": "READ_ONLY"
}

Replace the following:

  • INSTANCE_NAME: the name of the VM where you want to attach the zonal Persistent Disk volume
  • PROJECT_ID: your project ID
  • ZONE: the zone where your disk is located
  • DISK_NAME: the name of the disk that you are attaching

After you attach the disk, connect to the VM and mount the disk.

Repeat this request for each VM where you want to add this disk in read-only mode.
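To issue this request programmatically, you can assemble the URL and body per VM. A sketch that mirrors only the fields shown above; project, zone, and resource names are placeholders:

```python
import json

def attach_disk_request(project_id: str, zone: str, instance_name: str,
                        disk_name: str) -> tuple[str, str]:
    """Return the (url, json_body) pair for an attachDisk call in READ_ONLY mode."""
    url = (
        "https://compute.googleapis.com/compute/v1/"
        f"projects/{project_id}/zones/{zone}/instances/{instance_name}/attachDisk"
    )
    body = json.dumps({
        "source": f"zones/{zone}/disks/{disk_name}",
        "mode": "READ_ONLY",
    })
    return url, body

url, body = attach_disk_request("my-project", "us-central1-a", "vm-1", "shared-disk")
print(url)
```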

Overview of multi-writer mode

You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk.

If you have more than 2 N2 VMs, or if you're using any other machine series, consider an alternative shared-storage option, such as mounting a Filestore file share on your VMs.

To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.

Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale.

Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage. For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.

Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.

The following SCSI PR commands are supported:

  • IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
  • OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}

For instructions on sharing a disk in multi-writer mode, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.

Supported disk types for multi-writer mode

You can simultaneously attach SSD Persistent Disk in multi-writer mode to up to 2 N2 VMs.

Best practices for multi-writer mode

  • I/O fencing using SCSI PR commands results in a crash consistent state of Persistent Disk data. Some file systems don't have crash consistency and therefore might become corrupt if you use SCSI PR commands.
  • Many file systems such as EXT4, XFS, and NTFS are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
  • Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple VMs.

Performance in multi-writer mode

Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.

Zonal SSD Persistent Disk in multi-writer mode:

  Maximum sustained IOPS
    • Read IOPS per GB: 30
    • Write IOPS per GB: 30
    • Read IOPS per instance: 15,000–100,000*
    • Write IOPS per instance: 15,000–100,000*

  Maximum sustained throughput (MB/s)
    • Read throughput per GB: 0.48
    • Write throughput per GB: 0.48
    • Read throughput per instance: 240–1,200*
    • Write throughput per instance: 240–1,200*

* Persistent Disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Attaching a multi-writer disk to multiple virtual machine instances does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit.
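The per-GB rates combine with the per-instance caps as the minimum of the two, as this sketch shows. The default caps below are the upper ends of the ranges in the table; the actual per-instance cap depends on machine shape and other factors.

```python
def multi_writer_read_iops_limit(size_gb: int, instance_cap: int = 100_000) -> int:
    """Per-disk read IOPS limit: 30 IOPS per GB, capped by the instance limit
    (which ranges from 15,000 to 100,000 depending on the machine)."""
    return min(30 * size_gb, instance_cap)

def multi_writer_read_throughput_limit(size_gb: int,
                                       instance_cap_mbs: float = 1200.0) -> float:
    """Per-disk read throughput limit: 0.48 MB/s per GB, capped by the instance limit."""
    return min(0.48 * size_gb, instance_cap_mbs)

print(multi_writer_read_iops_limit(500))   # 15000
print(multi_writer_read_throughput_limit(500))
```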


Restrictions for sharing a disk in multi-writer mode

  • Multi-writer mode is only supported for SSD type Persistent Disk volumes.
  • You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
    • australia-southeast1
    • europe-west1
    • us-central1 (us-central1-a and us-central1-c zones only)
    • us-east1 (us-east1-d zone only)
    • us-west1 (us-west1-b and us-west1-c zones only)
  • Attached VMs must have an N2 machine type.
  • Minimum disk size is 10 GiB.
  • Disks in multi-writer mode don't support attaching more than 2 VMs at a time.
  • Multi-writer mode Persistent Disk volumes don't support Persistent Disk metrics.
  • Disks in multi-writer mode cannot change to read-only mode.
  • You cannot use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
  • You can't create snapshots or images from Persistent Disk volumes in multi-writer mode.
  • Lower IOPS limits. See disk performance for details.
  • You can't resize a multi-writer Persistent Disk volume.
  • When creating a VM using the Google Cloud CLI, you can't create a multi-writer Persistent Disk volume using the --create-disk flag.
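A pre-flight check that encodes the restrictions above can catch configuration mistakes before you call the API. The helper name and zone handling are illustrative, not part of any Google Cloud API:

```python
# Zones from the restrictions above; australia-southeast1 and europe-west1
# are listed at the region level, so any of their zones is accepted.
MULTI_WRITER_ZONES = {
    "us-central1-a", "us-central1-c",
    "us-east1-d",
    "us-west1-b", "us-west1-c",
}

def check_multi_writer_config(disk_type: str, size_gib: int, zone: str,
                              machine_type: str, num_vms: int) -> list[str]:
    """Return a list of restriction violations (an empty list means OK)."""
    problems = []
    if disk_type != "pd-ssd":
        problems.append("multi-writer mode requires an SSD Persistent Disk (pd-ssd)")
    if size_gib < 10:
        problems.append("minimum disk size is 10 GiB")
    if (zone not in MULTI_WRITER_ZONES
            and not zone.startswith(("australia-southeast1", "europe-west1"))):
        problems.append(f"zone {zone} does not support attaching multi-writer disks")
    if not machine_type.startswith("n2-"):
        problems.append("attached VMs must use an N2 machine type")
    if num_vms > 2:
        problems.append("at most 2 VMs can attach a multi-writer disk")
    return problems

print(check_multi_writer_config("pd-ssd", 100, "us-central1-a", "n2-standard-4", 2))
```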

Share an SSD Persistent Disk volume in multi-writer mode between VMs

You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:

gcloud

Create and attach a zonal Persistent Disk volume by using the gcloud CLI:

  1. Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

    gcloud beta compute disks create DISK_NAME \
       --size DISK_SIZE \
       --type pd-ssd \
       --multi-writer
    

    Replace the following:

    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk INSTANCE_NAME \
       --disk DISK_NAME
    

    Replace the following:

    • INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
    • DISK_NAME: the name of the new disk that you are attaching to the VM
  3. Repeat the gcloud compute instances attach-disk command but replace INSTANCE_NAME with the name of your second VM.

After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.

REST

Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.

  1. In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of true to indicate that the disk must be shareable between the VMs in multi-writer mode.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks
    
    {
    "name": "DISK_NAME",
    "sizeGb": "DISK_SIZE",
    "type": "zones/ZONE/diskTypes/pd-ssd",
    "multiWriter": true
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • DISK_NAME: the name of the new disk
    • DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
  2. To attach the disk to a VM, construct a POST request to the compute.instances.attachDisk method. Include the URL to the zonal Persistent Disk volume that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    
    {
    "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
    

    Replace the following:

    • PROJECT_ID: your project ID
    • ZONE: the zone where your VM and new disk are located
    • INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume.
    • DISK_NAME: the name of the new disk
  3. To attach the disk to a second VM, repeat the instances.attachDisk command from the previous step. Set the INSTANCE_NAME to the name of the second VM.

After you create and attach a new disk to a VM, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.
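The disks.insert body shown above can be assembled programmatically; a sketch that only mirrors the fields from the example (note that multiWriter must be a JSON boolean, not the string "True"):

```python
import json

def multi_writer_disk_body(disk_name: str, size_gb: int, zone: str) -> str:
    """JSON request body for disks.insert creating a multi-writer pd-ssd volume."""
    return json.dumps({
        "name": disk_name,
        "sizeGb": str(size_gb),
        "type": f"zones/{zone}/diskTypes/pd-ssd",
        # A JSON boolean, as required by the API.
        "multiWriter": True,
    })

body = multi_writer_disk_body("shared-mw-disk", 100, "us-central1-a")
print(body)
```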

What's next