Add FC volume
A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported to hosts as a LUN over iSCSI or Fibre Channel, or as a namespace over NVMe-oF. Zvols are usually used to:
- Provide block storage for virtual machines, databases, or other applications that expect a disk device.
- Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings).
- Control how space is consumed by different applications or tenants at the pool level.
Creating a zvol
1. Go to the zpool management view in the GUI.
2. Select and expand the zpool in which you want to create the zvol.
3. Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section.
4. Click Add zvol to open the Add new zvol dialog.
5. Configure Encryption settings and Zvol properties, and optionally attach the zvol to an iSCSI target or NVMe-oF subsystem, or assign it to FC groups.
6. Review the configuration and click Add. The new zvol appears in the selected zpool.
After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.
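The GUI performs these steps internally. Purely for orientation, the sketch below shows what an equivalent creation looks like on a plain OpenZFS command line; the pool and zvol names are hypothetical, and the appliance may set additional properties of its own.

```python
import subprocess

POOL = "Pool-0"                 # hypothetical pool name
ZVOL = f"{POOL}/vm-disk-1"      # hypothetical zvol name

# Create a thin-provisioned 100 GiB zvol with the dialog's defaults:
# 64 KiB volume block size and lz4 compression.
subprocess.run(
    ["zfs", "create",
     "-s",                        # sparse, i.e. thin provisioned
     "-V", "100G",                # logical size
     "-o", "volblocksize=64K",
     "-o", "compression=lz4",
     ZVOL],
    check=True,
)

# Verify the result.
subprocess.run(["zfs", "get", "volsize,volblocksize,compression", ZVOL],
               check=True)
```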
Encryption settings
This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.
Encrypt resource
Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.
Encryption method
Defines the encryption algorithm used when the zvol is encrypted.
- By default, the method is inherited from Configuration → Resource encryption (for example, aes-256-gcm).
- You can select a different supported method for this zvol if required by policy or performance.
For information about keys, unlocking behaviour, and error handling, see the Encryption article.
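Key management is handled by the product as described there. As an illustration of why the setting is creation-only, on plain OpenZFS encryption parameters are likewise fixed when the dataset is created; the name and key handling below are hypothetical.

```python
import subprocess

ZVOL = "Pool-0/secure-vol"   # hypothetical name

# Encryption must be supplied at creation time; it cannot be toggled
# on an existing zvol afterwards.
subprocess.run(
    ["zfs", "create",
     "-V", "50G",
     "-o", "encryption=aes-256-gcm",   # mirrors the inherited default above
     "-o", "keyformat=passphrase",     # key handling here is illustrative only
     ZVOL],
    check=True,
)
```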
Zvol properties
These fields define the behaviour and performance characteristics of the zvol.
Name
- The zvol name must be unique within the pool.
- Allowed characters: letters (a–z, A–Z), digits (0–9), dot (.), underscore (_), and hyphen (-).
Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.
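For illustration, on a plain OpenZFS system a rename looks like the sketch below (hypothetical names); the device node under /dev/zvol/ moves accordingly, which is why exported targets must be updated.

```python
import subprocess

POOL = "Pool-0"   # hypothetical pool name

# After the rename, /dev/zvol/Pool-0/old-name no longer exists and the
# zvol is exposed as /dev/zvol/Pool-0/new-name instead.
subprocess.run(["zfs", "rename", f"{POOL}/old-name", f"{POOL}/new-name"],
               check=True)
```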
Size
- Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB).
- The dialog shows the currently available physical space in the pool below the field.
Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.
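To see how logical capacity relates to actual consumption, a sketch like the following queries the relevant properties (hypothetical zvol name; the GUI presents the same numbers in its usage views):

```python
import subprocess

ZVOL = "Pool-0/vm-disk-1"   # hypothetical name

# volsize is the logical capacity; used/referenced reflect what the pool
# actually stores after thin provisioning and compression are applied.
out = subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value",
     "volsize,used,referenced,compressratio", ZVOL],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```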
Provisioning
Controls how space is allocated in the pool.
- Thin provisioned (default): Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space.
- Thick provisioned: The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.
Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.
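The difference is visible in the reservation OpenZFS places on the zvol. A minimal sketch, assuming hypothetical names:

```python
import subprocess

POOL = "Pool-0"   # hypothetical pool name

# Thin: '-s' skips the reservation, so only written data consumes space.
subprocess.run(["zfs", "create", "-s", "-V", "500G", f"{POOL}/thin-vol"],
               check=True)

# Thick: without '-s', a refreservation covering the full volume size is
# set at creation, guaranteeing the capacity up front.
subprocess.run(["zfs", "create", "-V", "500G", f"{POOL}/thick-vol"],
               check=True)

# Compare: the thin zvol reports refreservation=none.
subprocess.run(["zfs", "get", "refreservation",
                f"{POOL}/thin-vol", f"{POOL}/thick-vol"], check=True)
```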
Deduplication
Enables ZFS block-level deduplication for the zvol. Available options include:
- Disabled (default) – deduplication is off.
- On – alias for sha256.
- Verify – alias for sha256,verify; performs an extra block comparison step.
- sha256 – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy.
- sha256,verify – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.
Use deduplication only when you expect a high ratio of repeated blocks and have sufficient RAM (e.g., many similar VM images). For general-purpose workloads, leaving deduplication disabled is usually recommended.
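To judge whether "sufficient RAM" is realistic, a rough sizing of the deduplication table (DDT) helps. The ~320 bytes per unique block used below is a commonly cited rule of thumb, not an exact figure:

```python
# Rough DDT sizing: roughly 320 bytes of RAM per unique block is a
# common rule of thumb; treat the result as an order of magnitude.
def ddt_ram_gib(logical_bytes: int, volblocksize: int,
                bytes_per_entry: int = 320) -> float:
    """Approximate DDT size in GiB for a fully written zvol."""
    return (logical_bytes / volblocksize) * bytes_per_entry / 2**30

# A 1 TiB zvol with the 64 KiB default block size: ~5 GiB of DDT.
print(f"{ddt_ram_gib(2**40, 64 * 2**10):.1f} GiB")   # -> 5.0 GiB
# The same zvol with 8 KiB blocks: ~40 GiB, eight times more.
print(f"{ddt_ram_gib(2**40, 8 * 2**10):.1f} GiB")    # -> 40.0 GiB
```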
Number of data copies
Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.).
- Allowed values: 1 (default), 2, 3.
- When possible, copies are placed on different physical disks.
- Additional copies increase used space and count against pool capacity.
- Only new writes use the current setting; existing blocks keep the number of copies that was in effect when they were written.
Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency.
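The capacity cost is linear, and because only new writes are affected, raising the value later does not retroactively protect existing blocks. A small sketch with a hypothetical zvol name:

```python
import subprocess

ZVOL = "Pool-0/critical-vol"   # hypothetical name

# Only blocks written after this point are stored twice; existing blocks
# keep the copies count they were written with.
subprocess.run(["zfs", "set", "copies=2", ZVOL], check=True)

# Capacity cost is a simple multiple of the data written.
for copies in (1, 2, 3):
    print(f"copies={copies}: 100 GiB of data consumes ~{100 * copies} GiB")
```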
Compression
Defines the on-the-fly compression algorithm for zvol data.
- lz4 (default) – high-performance, general-purpose method that is recommended for most workloads.
- None – disables compression.
- Additional algorithms in the list:
  - gzip-1 … gzip-9 – higher levels compress more but are slower.
  - lzjb – an older algorithm; lz4 generally performs better.
  - zle – effective mainly for blocks of zeros.
Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and extremely latency-sensitive.
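Compression can be changed later (the new algorithm applies only to subsequently written blocks), and the achieved ratio is easy to inspect. A sketch with a hypothetical name:

```python
import subprocess

ZVOL = "Pool-0/vm-disk-1"   # hypothetical name

# Switch the algorithm; existing blocks keep their current encoding.
subprocess.run(["zfs", "set", "compression=lz4", ZVOL], check=True)

# compressratio shows whether compression is paying off for this data.
subprocess.run(["zfs", "get", "compression,compressratio", ZVOL], check=True)
```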
Volume block size
Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.
- Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB.
- Default value in the dialog: 64 KiB.
- The chosen size cannot be changed once meaningful data has been written.
Guidelines:
- Smaller blocks (e.g., 4-16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead.
- Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.
Choose the block size based on the typical I/O pattern of the applications that will use the zvol.
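A quick way to reason about the trade-off is worst-case write amplification: a write smaller than the volume block forces the whole block to be read, modified, and rewritten. The numbers below ignore caching and aggregation:

```python
# Worst-case bytes physically rewritten per byte the application writes
# when the request is smaller than the volume block (read-modify-write).
def write_amplification(io_kib: int, volblocksize_kib: int) -> float:
    return max(volblocksize_kib / io_kib, 1.0)

# An 8 KiB database page on the 64 KiB default rewrites 8x the data...
print(write_amplification(8, 64))   # -> 8.0
# ...while a matching 8 KiB volume block size avoids the penalty.
print(write_amplification(8, 8))    # -> 1.0
```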
Write cache sync requests
Controls how synchronous write operations are handled for this zvol (ZFS sync property).
- Always (default): All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available.
- Standard: Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS.
- Disabled: Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS.
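Unlike encryption or block size, sync can be adjusted at any time, so a zvol can start with the safe default and be relaxed once a reliable UPS is in place. A minimal sketch, hypothetical name:

```python
import subprocess

ZVOL = "Pool-0/vm-disk-1"   # hypothetical name

# Relax the policy after confirming UPS protection; takes effect immediately.
subprocess.run(["zfs", "set", "sync=standard", ZVOL], check=True)
subprocess.run(["zfs", "get", "sync", ZVOL], check=True)
```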
Write cache sync request handling (logbias)
Provides a hint about how synchronous writes should use log devices (if present in the pool).
- Write log device (Latency): If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads.
- In pool (Throughput): Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.
This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage.
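The underlying property is logbias, which only matters when the pool actually contains dedicated log devices. A minimal sketch, hypothetical names:

```python
import subprocess

ZVOL = "Pool-0/backup-vol"   # hypothetical name

# Bias synchronous writes toward pool throughput instead of the log
# devices; 'latency' is the default.
subprocess.run(["zfs", "set", "logbias=throughput", ZVOL], check=True)

# 'zpool status' lists a 'logs' section when dedicated log vdevs exist.
subprocess.run(["zpool", "status", "Pool-0"], check=True)
```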
Read cache (primary, ARC) scope
Specifies what is cached in the primary memory cache (ARC) for this zvol.
- All (default) – cache both data and metadata.
- Metadata – cache only metadata; user data is read directly from disk.
- None – do not cache anything for this zvol in ARC.
For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to Metadata or None.
Read cache (secondary, L2ARC) scope
Controls use of secondary cache devices (L2ARC), typically SSDs.
- All (default) – cache both metadata and user data on L2ARC.
- Metadata – cache only metadata on L2ARC.
- None – exclude this zvol from L2ARC caching.
Use Metadata or None for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads.
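Both scopes map to the primarycache and secondarycache properties and can be changed at runtime. For example, a backup zvol whose data is read at most once (hypothetical name):

```python
import subprocess

ZVOL = "Pool-0/backup-vol"   # hypothetical name

# Keep metadata in ARC but stop this zvol's bulk data from evicting
# hotter blocks; exclude it from L2ARC entirely.
subprocess.run(["zfs", "set", "primarycache=metadata", ZVOL], check=True)
subprocess.run(["zfs", "set", "secondarycache=none", ZVOL], check=True)
```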
Attach to Fibre Channel groups
This section allows you to assign the new zvol to one or more Fibre Channel (FC) groups and control how it will be presented to FC initiators.
FC membership properties
SCSI ID
A unique SCSI identifier under which the zvol is presented to initiators.
- Automatic – a SCSI identifier is generated automatically.
- Generate – creates a new random SCSI identifier.
In most cases, leaving the value set to Automatic is sufficient.
Write cache settings
Defines how write caching is exposed for this FC LUN:
- Write-through (Block I/O) (default) – write requests are completed only after data is safely stored on disk. Recommended for most environments.
- Write-through (File I/O) – write-through behaviour using the File I/O path.
- Write-back (File I/O) – enables write-back caching on the File I/O path. This provides the highest write performance, but cached data can be lost in case of a power outage or node failure (even in HA cluster mode). Use it only when the environment can tolerate potential data loss and appropriate protection, such as a UPS and battery-backed cache, is in place.
- Read only (File I/O) – exposes the LUN as read-only over the File I/O path.
- Read only (Block I/O) – exposes the LUN as read-only over the Block I/O path.
TRIM support
When enabled, allows the FC LUN to accept TRIM/UNMAP commands, returning freed blocks to the pool. Use this option only when the initiator and operating system fully support TRIM over Fibre Channel.
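On a Linux initiator, discard support for the mapped LUN can be checked before relying on TRIM; non-zero DISC-GRAN and DISC-MAX values mean the device advertises UNMAP. /dev/sdX below is a placeholder for the FC LUN:

```python
import subprocess

# '-D' prints the discard (TRIM/UNMAP) capabilities of the device.
subprocess.run(["lsblk", "-D", "/dev/sdX"], check=True)
```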
FC groups
The FC groups table lists all configured Fibre Channel groups.
- Alias – name of the FC group. Select the check box to assign the zvol to that group.
- LUN – the LUN number under which the zvol will be exposed in the selected group.
  - Auto – assigns the next available LUN number.
  - If manual entry is allowed, you can type a specific LUN number that is free within that group.
If no FC group is selected, the zvol will not be available over Fibre Channel and can be assigned later from the FC configuration view.
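Once a group is selected, a Linux FC initiator can rescan its SCSI hosts and confirm that the LUN number chosen in the table is visible. A sketch assuming the lsscsi utility is installed on the initiator:

```python
import glob
import subprocess

# Ask every SCSI host to rescan so the newly mapped FC LUN appears.
for scan_path in glob.glob("/sys/class/scsi_host/host*/scan"):
    with open(scan_path, "w") as f:
        f.write("- - -")

# lsscsi prints [host:channel:target:lun]; the LUN number assigned in
# the FC groups table should appear as the last field of the new entry.
subprocess.run(["lsscsi"], check=True)
```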