Add new zvol

A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported as a LUN to hosts over iSCSI, Fibre Channel, or NVMe-oF. Zvols are usually used to:

  • Provide block storage for virtual machines, databases, or other applications that expect a disk device.
  • Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings).
  • Control how space is consumed by different applications or tenants at the pool level.

You first create a zvol, and then (optionally) attach it to a target to make it available to hosts.

Creating a zvol

  1. Go to the zpool management view in the GUI.
  2. Select and expand the zpool in which you want to create the zvol.
  3. Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section.
  4. Click Add zvol to open the Add new zvol dialog.
  5. Configure Encryption settings and Zvol properties, and optionally attach to an iSCSI target, NVMe-oF subsystem, or assign to FC groups.
  6. Review the configuration and click Add. The new zvol appears in the selected zpool.

After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.
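
For orientation, the dialog's core action corresponds to the OpenZFS zfs create command. A minimal command-line sketch, assuming a pool named pool01 and a hypothetical zvol name vol01 (the GUI applies its own defaults and may set additional properties):

    # Create a 100 GiB thin-provisioned zvol with the dialog's defaults
    # (-s = sparse/thin, -b = 64 KiB volume block size, lz4 compression)
    zfs create -s -V 100G -b 64K -o compression=lz4 pool01/vol01

    # Verify the resulting properties
    zfs get volsize,volblocksize,compression pool01/vol01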

Encryption settings

This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.

Encrypt resource

Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.

Encryption method

Defines the encryption algorithm used when the zvol is encrypted.

  • By default, the method is inherited from Configuration → Resource encryption (for example, aes-256-gcm).
  • You can select a different supported method for this zvol if required by policy or performance.

For information about keys, unlocking behaviour, and error handling, see the Encryption article.
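
Under the hood this corresponds to OpenZFS native encryption, which likewise can only be chosen at creation time. A minimal sketch, with the appliance's key management replaced by an interactive passphrase purely for illustration (hypothetical names):

    # Create an encrypted zvol; encryption cannot be added or removed later
    zfs create -V 50G \
        -o encryption=aes-256-gcm \
        -o keyformat=passphrase \
        pool01/secure01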

Zvol properties

These fields define the behaviour and performance characteristics of the zvol.

Name

  • The zvol name must be unique within the pool.
  • Allowed characters: a–z  A–Z  0–9  .  _  -

Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.
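
The internal path referred to above is the zvol's block device node. On Linux-based systems, a zvol named vol01 in pool pool01 is exposed as /dev/zvol/pool01/vol01, so renaming moves the device node (hypothetical names):

    # Rename the zvol; its device node path changes with it
    zfs rename pool01/vol01 pool01/vm-disk01
    # Targets must now reference /dev/zvol/pool01/vm-disk01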

Size

  • Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB).
  • The dialog shows the currently available physical space in the pool below the field.

Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.
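
To see the difference between logical capacity and actual consumption, the equivalent OpenZFS queries look like this (hypothetical names):

    # Logical capacity vs. space actually consumed by this zvol
    zfs get volsize,used,refreservation pool01/vol01
    # Free physical space in the pool
    zpool list pool01
    # Growing a zvol later is safe; shrinking generally is not
    zfs set volsize=200G pool01/vol01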

Provisioning

Controls how space is allocated in the pool.

  • Thin provisioned (default): Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space.
  • Thick provisioned: The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.

Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.
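
In OpenZFS terms, thin provisioning corresponds to a sparse volume and thick provisioning to a volume backed by a full refreservation. A sketch with hypothetical names:

    # Thin: -s skips the reservation; space is allocated as data is written
    zfs create -s -V 500G pool01/thin01

    # Thick: without -s, a refreservation guarantees the full 500G up front
    zfs create -V 500G pool01/thick01
    zfs get refreservation pool01/thick01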

Deduplication

Enables ZFS block-level deduplication for the zvol. Available options include:

  • Disabled (default) – deduplication is off.
  • On – alias for sha256.
  • Verify – alias for sha256,verify; performs an extra block comparison step.
  • sha256 – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy.
  • sha256,verify – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.

Use deduplication only when you expect a high ratio of repeated blocks and have sufficient RAM (e.g., many similar VM images). For general-purpose workloads, leaving deduplication disabled is usually recommended.
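
These options are the values of the ZFS dedup property. For example, enabling the verifying variant on an existing zvol (hypothetical name):

    # Applies to new writes only; existing blocks are not re-deduplicated
    zfs set dedup=sha256,verify pool01/vol01
    # The DEDUP column shows the pool-wide deduplication ratio
    zpool list pool01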

Number of data copies

Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.).

  • Allowed values: 1 (default), 2, 3.
  • When possible, copies are placed on different physical disks.
  • Additional copies increase used space and count against pool capacity.
  • Only new writes use the current setting; existing data keeps the number of copies that was in effect when it was written.

Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency.
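
This corresponds to the ZFS copies property, for example:

    # Store two copies of each newly written block (hypothetical zvol)
    zfs set copies=2 pool01/critical01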

Compression

Defines the on-the-fly compression algorithm for zvol data.

  • lz4 (default) – high-performance, general-purpose method that is recommended for most workloads.
  • None – disables compression.
  • Additional algorithms in the list:
    • gzip-1 … gzip-9 (higher levels compress more but are slower),
    • lzjb,
    • zle (effective mainly for blocks of zeros).

Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and extremely latency-sensitive.
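
These options are the values of the ZFS compression property, which can also be changed later (affecting new writes only). A brief sketch with hypothetical names:

    zfs set compression=gzip-6 pool01/backup01
    # Ratio achieved across the data written so far
    zfs get compressratio pool01/backup01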

Volume block size

Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.

  • Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB.
  • Default value in the dialog: 64 KiB.
  • The chosen size cannot be changed once meaningful data has been written.

Guidelines:

  • Smaller blocks (e.g., 4–16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead.
  • Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.

Choose the block size based on the typical I/O pattern of the applications that will use the zvol.
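
The setting corresponds to the ZFS volblocksize property, which must be fixed at creation. A sketch for a hypothetical database zvol:

    # Small blocks for random, small-request I/O (e.g., a database LUN)
    zfs create -V 200G -b 16K pool01/db01
    # volblocksize is read-only once set
    zfs get volblocksize pool01/db01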

Write cache sync requests

Controls how synchronous write operations are handled for this zvol (ZFS sync property).

  • Always (default): All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available.
  • Standard: Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS.
  • Disabled: Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS.
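
The three options are the values of the ZFS sync property, for example:

    # Force every write to stable storage before acknowledging it
    zfs set sync=always pool01/vol01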

Write cache sync request handling (logbias)

Provides a hint about how synchronous writes should use log devices (if present in the pool).

  • Write log device (Latency): If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads.
  • In pool (Throughput): Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.

This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage.
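
The two choices correspond to the ZFS logbias values latency and throughput, for example:

    # Prefer aggregate throughput over per-write latency (hypothetical zvol)
    zfs set logbias=throughput pool01/media01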

Read cache (primary, ARC) scope

Specifies what is cached in the primary memory cache (ARC) for this zvol.

  • All (default) – cache both data and metadata.
  • Metadata – cache only metadata; user data is read directly from disk.
  • None – do not cache anything for this zvol in ARC.

For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to Metadata or None.
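
This maps to the ZFS primarycache property, for example:

    # Keep only metadata in ARC for a low-reuse backup zvol (hypothetical)
    zfs set primarycache=metadata pool01/backup01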

Read cache (secondary, L2ARC) scope

Controls use of secondary cache devices (L2ARC), typically SSDs.

  • All (default) – cache both metadata and user data on L2ARC.
  • Metadata – cache only metadata on L2ARC.
  • None – exclude this zvol from L2ARC caching.

Use Metadata or None for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads.
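
Likewise, this maps to the ZFS secondarycache property, for example:

    # Exclude a streaming zvol from L2ARC entirely (hypothetical name)
    zfs set secondarycache=none pool01/media01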

Attach to target

The Attach to target section at the bottom of the dialog allows you to export the newly created zvol as a LUN immediately. This section is optional; you can also attach the zvol later from the target configuration views.

General behaviour

  • When the Attach to target checkbox is cleared, the zvol is created but not attached to any target.
  • Enabling the checkbox expands the configuration panel and allows you to select or configure how the zvol will be presented to initiators.

Fields

Target name: Select an existing target from the drop-down list. The zvol will be attached as a new LUN under this target.

SCSI ID

  • Automatic – uses an automatically generated SCSI identifier.
  • Generate – creates a new random identifier; use this when you need to refresh or explicitly control the ID.

In most cases, leaving the default automatic value is sufficient.

LUN

  • automatic – assigns the next available LUN number on the selected target.
  • manual entry – specify a particular LUN number if your environment uses a specific numbering scheme; the number must not already be in use on that target.
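
The GUI manages target configuration itself, but the underlying idea is the same as exporting the zvol's device node through any iSCSI target stack. As a generic illustration only (standard Linux LIO targetcli with hypothetical names, not the appliance's internal mechanism):

    # Register the zvol's device node as a block backstore
    targetcli /backstores/block create name=vol01 dev=/dev/zvol/pool01/vol01
    # Create a target and export the backstore as its next free LUN
    targetcli /iscsi create iqn.2003-01.com.example:target01
    targetcli /iscsi/iqn.2003-01.com.example:target01/tpg1/luns \
        create /backstores/block/vol01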

Write cache settings

Defines how write caching is presented to the initiator for this LUN.

  • Write-through – Block I/O (default): All writes are committed directly to stable storage before completion is reported to the initiator. This mode prioritizes data integrity and is recommended for most environments.
  • Read only – Block I/O: Exposes the LUN as read-only on the Block I/O path. Any write attempts from the initiator are rejected. Use this option for volumes that must not be modified.
  • Write-through – File I/O: Similar to Write-through – Block I/O, but handled through the File I/O path. Writes are acknowledged only after they are safely stored on disk.
  • Write-back – File I/O: Enables write-back caching on the File I/O path. Write requests are acknowledged after being stored in cache rather than on disk, which provides the highest write performance. However, cached data may be lost in case of a power failure or node crash (this risk exists even in HA cluster configurations), and resource failover can take noticeably longer. Use this option only when the environment can tolerate potential data loss and when additional protection (e.g., a battery-backed cache and a reliable UPS) is in place.
  • Read only – File I/O: Exposes the LUN as read-only on the File I/O path. Use when the initiator must have read access only, for example, for archival or reference datasets.

TRIM support

  • When enabled, the zvol honours TRIM/UNMAP requests from the initiator so that released blocks can be returned to the pool.
  • Use this option only when the operating system and initiator software fully support TRIM for the relevant protocol.

TRIM can improve space efficiency for thin-provisioned zvols, but misconfigured initiators or unsupported combinations may cause unexpected behaviour.
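
For example, with a thin-provisioned zvol formatted and mounted on a Linux initiator, released space can be reclaimed like this (hypothetical mount point; requires working TRIM support end to end):

    # On the initiator: discard unused blocks of the mounted filesystem
    fstrim -v /mnt/lun01
    # On the storage side: the zvol's consumed space should shrink
    zfs get used pool01/vol01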

After you confirm the configuration and click Add, the zvol is created with the specified properties. If attachment is enabled, the zvol is also exposed as a LUN on the selected target and becomes available to connected initiators after they rescan their devices.
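
For instance, a Linux open-iscsi initiator that is already logged in to the target can pick up the new LUN with a session rescan (illustrative only):

    # Rescan all active iSCSI sessions for newly exposed LUNs
    iscsiadm -m session --rescan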