Zpool wizard

From Open-E Wiki

Latest revision as of 07:43, 15 January 2026

The Zpool Wizard guides you through the process of creating and configuring a new ZFS pool (zpool) from available disks. A zpool is the foundational storage construct in ZFS. It serves as a logical storage pool that combines multiple physical storage devices (disks) into vdevs (virtual devices), which collectively form the unified zpool. The wizard consists of multiple steps that allow you to configure data groups (vdevs), add optional device groups, adjust pool settings, and enable encryption if required.


Accessing the wizard

  1. Navigate to Storage.
  2. Click Add zpool.
  3. The Zpool creation wizard will launch.
  4. Follow the guided steps to configure your zpool.


Zpool configuration steps

Add data group 

In this step, available disks are listed. You can filter only unused disks using the toggle.

  1. Select one or more disks from the list.
  2. Choose the desired redundancy level for the group:
    • Single - No redundancy. Any disk failure results in data loss.
    • Mirror - Data is stored on multiple disks. Capacity equals the size of one disk per mirror.
      • Mirror (Single Group): All selected disks will be combined into a single mirrored group.
      • Mirror (Multiple Groups): The selected disks will be paired into multiple mirrored groups, each consisting of two disks.
    • Z-1 - Single-parity redundancy. One disk may fail without losing data. A minimum of three disks is required for a RAIDZ-1 group.
    • Z-2 - Double-parity redundancy. Two disks may fail without losing data. A minimum of four disks is required for a RAIDZ-2 group.
    • Z-3 - Triple-parity redundancy. Three disks may fail without losing data. A minimum of five disks is required for a RAIDZ-3 group.
  3. Click Add group to add the selected configuration. 
    • The selected data group will appear in the right-hand panel. The total zpool capacity and licensed storage usage are displayed below. 
    • To learn more about vdev types, refer to the following article: Redundancy in Disk Groups
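
The minimum-disk and capacity rules listed above can be sketched as a small helper. This is an illustrative calculation only, not part of the Open-E wizard; the function name and the simplification (ZFS metadata overhead is ignored) are assumptions for the example.

```python
# Illustrative sketch of the redundancy rules above; not Open-E code.
# Approximates usable capacity of ONE group (vdev), ignoring ZFS overhead.

MIN_DISKS = {"single": 1, "mirror": 2, "raidz1": 3, "raidz2": 4, "raidz3": 5}
PARITY = {"raidz1": 1, "raidz2": 2, "raidz3": 3}

def usable_capacity(redundancy: str, disk_sizes: list) -> float:
    if len(disk_sizes) < MIN_DISKS[redundancy]:
        raise ValueError(f"{redundancy} needs at least {MIN_DISKS[redundancy]} disks")
    smallest = min(disk_sizes)  # mixed sizes are limited to the smallest disk
    if redundancy == "single":
        return sum(disk_sizes)  # independent drives, no redundancy
    if redundancy == "mirror":
        return smallest         # capacity equals one disk per mirror group
    return smallest * (len(disk_sizes) - PARITY[redundancy])

# Four 4 TB disks in RAIDZ-2 keep two disks' worth of usable capacity:
print(usable_capacity("raidz2", [4, 4, 4, 4]))  # 8
```

For example, mirroring a 4 TB disk with a 2 TB disk yields 2 TB usable, which is also why the Remember section below warns about mixed disk sizes.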


Add write log (optional)

This feature allows you to configure the write log function with a selected redundancy level (single drive or mirror). The write log utilizes a separate intent log (SLOG) device. A fast SSD/NVMe should be used for this vdev.

  1. Select disks from the available list.
  2. Choose redundancy type (Single or Mirror) for added reliability.
  3. Add the group to the zpool.

Write log groups are displayed separately in the Other groups section.


Key points to consider:

• If multiple log devices are specified, write operations are load-balanced between the devices.
• Log devices can be configured with redundancy by using mirrors to enhance fault tolerance.
• RAIDZ vdev types are not supported for the intent log.

This ensures efficient and reliable write operations while leveraging the selected redundancy level.
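
The load-balancing point above can be illustrated with a toy round-robin scheduler. This is a conceptual sketch only (ZFS's real log allocator is more sophisticated); the function and device names are invented for the example.

```python
from itertools import cycle

# Conceptual sketch: with multiple log devices, synchronous writes are
# load-balanced between them. Round-robin shown here for illustration.

def balance_writes(writes, log_devices):
    assignment = {dev: [] for dev in log_devices}
    for write, dev in zip(writes, cycle(log_devices)):
        assignment[dev].append(write)
    return assignment

print(balance_writes(["w1", "w2", "w3", "w4"], ["slog0", "slog1"]))
# {'slog0': ['w1', 'w3'], 'slog1': ['w2', 'w4']}
```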


Add read cache (optional)

This step allows you to assign SSDs as L2ARC (Level 2 Adaptive Replacement Cache) devices to boost read performance. Adding a read cache improves performance and reduces latency for storage systems under heavy read load. A cache device stores frequently accessed data from the storage pool, providing an additional layer of caching between main memory and disk. These devices cannot be configured as mirrors or RAIDZ groups. A fast SSD/NVMe should be used for this vdev.

  1. Select a disk to be used as a cache device. Only Single redundancy is available.
  2. Confirm by adding the group.


Key benefits and considerations:

• Cache devices are particularly useful for read-heavy workloads where the working dataset size exceeds the capacity of main memory.
• By utilizing cache devices, a larger portion of the working dataset can be served from low-latency storage, improving performance significantly.
• The greatest performance improvements are seen in workloads characterized by random reads of primarily static content.
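
The benefit described above ("a larger portion of the working dataset served from low-latency storage") can be put into a rough back-of-the-envelope estimate. The function below is a hedged rule of thumb, not an Open-E or ZFS formula; it assumes the working set is accessed uniformly.

```python
# Rough illustration (not a ZFS formula): what fraction of the working set
# fits in low-latency storage, i.e. main memory (ARC) plus an L2ARC device?

def low_latency_fraction(working_set_gb: float, ram_gb: float, l2arc_gb: float) -> float:
    return min(1.0, (ram_gb + l2arc_gb) / working_set_gb)

# A 512 GB working set with 128 GB RAM: adding a 256 GB cache device
# raises the low-latency share from 25% to 75%.
print(low_latency_fraction(512, 128, 256))  # 0.75
```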


Add special devices group (optional)

Special and deduplication vdevs require at least the same level of redundancy as data vdevs. 
Because RAIDZ vdevs do not provide compatible redundancy for these device groups, special vdevs and deduplication vdevs cannot be used in a ZFS pool that contains RAIDZ1, RAIDZ2, or RAIDZ3.

A special devices group stores metadata and small-block data to improve performance. A fast SSD/NVMe should be used for this vdev.

  1. Select one or more disks.
  2. Choose redundancy (Single or Mirror). The mirror redundancy level is recommended to prevent data loss.
  3. Add them as a group.


Key features and benefits:

• Storing metadata on special devices improves performance for metadata-intensive operations, such as file lookups and directory traversals.
• Small files below a certain size threshold can also be stored on these devices, enhancing read and write speeds for such workloads.
• Special devices are particularly beneficial for environments with a large number of small files or high metadata activity.

Using special devices optimizes the overall performance of the ZFS pool by offloading critical metadata and small-file operations to faster storage. 
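
The compatibility rule stated at the top of this section can be sketched as a simple check. This is an illustrative validation only, with invented names; it is not how the wizard is implemented.

```python
# Illustrative check of the rule above: special/dedup vdevs need at least
# the data vdevs' redundancy, and RAIDZ pools cannot take them at all.

RAIDZ_TYPES = {"raidz1", "raidz2", "raidz3"}

def special_vdev_allowed(data_vdev_types, special_type):
    if RAIDZ_TYPES & set(data_vdev_types):
        return False                      # pool contains RAIDZ: not supported
    if "mirror" in data_vdev_types:
        return special_type == "mirror"   # must match the data redundancy
    return special_type in {"single", "mirror"}

print(special_vdev_allowed(["mirror"], "mirror"))  # True
print(special_vdev_allowed(["raidz2"], "mirror"))  # False
```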

Add deduplication group (optional)

A deduplication group is a dedicated storage group that holds deduplication tables. It can be explicitly excluded from the special device group, so the deduplication tables are stored separately from the special device class.

  1. Select disks for this purpose. Redundancy can be set to Single or Mirror. The mirror redundancy level is recommended to prevent data loss.
  2. Add the group to confirm.


Key features and considerations:

• Storing deduplication tables in a dedicated group improves the efficiency of deduplication processes by isolating them from other metadata operations.
• This configuration provides flexibility in optimizing storage layout based on workload requirements.
• Using a deduplication group is particularly beneficial for systems with high deduplication demands, ensuring better performance and management.

This setup enhances deduplication performance while maintaining a clear separation of metadata and deduplication operations.


Add spare disks (optional)

A spare disk is a special pseudo-vdev used to track available spare devices for a zpool. Using spare disks enhances the storage pool's reliability by enabling seamless drive replacement and reducing the risk of data loss.

  1. Select the disk and add it to the Spare group.


Configuration

In this step, you configure the final pool settings:

  • Zpool name - Enter a unique name for the zpool for easy identification.
  • AutoTRIM - If supported by your devices, enable the AutoTRIM feature to reclaim unused space automatically. AutoTRIM helps optimize SSD performance and lifespan by notifying the controller when blocks are no longer in use.
  • Initialize the zpool after creation - Writes patterns to unallocated space to avoid initial-write latency, especially in virtualized environments. The process may extend creation time and briefly affect performance.

Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.


Resource encryption (optional)

Encryption applies to datasets and zvols created in the ZFS pool. The zpool itself remains unencrypted.

  1. Enable Configure encryption passphrase.
  2. Select a Default encryption method.
  3. Enter and confirm the passphrase.


Note:
• The passphrase cannot be recovered.
• Encrypted resources inherit the passphrase unless changed later.

Summary

The summary page displays the complete zpool configuration before finalization. Click Add zpool to complete pool creation. The wizard will create the zpool with the selected configuration.

Remember:
• Redundancy level cannot be changed after the ZFS pool is created.
• Mixed disk sizes reduce usable capacity to the smallest disk in a vdev.
• SSDs are recommended for write log, special devices, and deduplication groups.
• Encryption passphrases cannot be recovered.