Zpool wizard

From Open-E Wiki
__NOTOC__
The '''Zpool Wizard''' guides you through the process of creating and configuring a new ZFS pool (zpool) from available disks. A zpool is the foundational storage construct in ZFS. It serves as a logical storage pool that combines multiple physical storage devices (disks) into '''vdevs''' (virtual devices), which collectively form the unified zpool.&nbsp;The wizard consists of multiple steps that allow you to configure data groups (vdevs), add optional device groups, adjust pool settings, and enable encryption if required.


== Accessing the wizard ==


#Navigate to '''Storage'''.
#Click '''Add zpool'''.
#The Zpool creation wizard will launch.
#Follow the guided steps to configure your zpool.


== Zpool configuration steps ==


=== Add data group ===


This step lists the available disks. Use the toggle to show only unused disks.
 
#Select one or more disks from the list.
#Choose the desired redundancy level for the group:
#*'''Single''' - No redundancy. Any disk failure results in data loss.
#*'''Mirror''' - Data is stored on multiple disks. Capacity equals the size of one disk per mirror.
#**'''Mirror (Single Group)''': All selected disks will be combined into a single mirrored group.
#**'''Mirror (Multiple Groups)''': The selected disks will be paired into multiple mirrored groups, each consisting of two disks.
#*'''Z-1''' - Single-parity redundancy. One disk may fail without losing data. A minimum of three disks is required for a RAIDZ-1 group.
#*'''Z-2''' - Double-parity redundancy. Two disks may fail without losing data. A minimum of four disks is required for a RAIDZ-2 group.
#*'''Z-3''' - Triple-parity redundancy. Three disks may fail without losing data. A minimum of five disks is required for a RAIDZ-3 group.
#Click '''Add group''' to add the selected configuration.&nbsp;
#*The selected data group will appear in the right-hand panel. The total zpool capacity and licensed storage usage are displayed below.&nbsp;
#*To learn more about vdev types, see the article [[Redundancy in Disks Groups|Redundancy in Disk Groups]].
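The minimum-disk and capacity rules above can be modeled with a small helper. This is an illustrative sketch, not Open-E code; it applies the smallest-disk rule noted in the Summary section.

```python
# Illustrative model of the wizard's redundancy rules (not Open-E code).
MIN_DISKS = {"single": 1, "mirror": 2, "z1": 3, "z2": 4, "z3": 5}
PARITY = {"z1": 1, "z2": 2, "z3": 3}

def group_capacity(redundancy, disk_sizes):
    """Usable capacity of one data group, in the same unit as disk_sizes."""
    if len(disk_sizes) < MIN_DISKS[redundancy]:
        raise ValueError(f"{redundancy} needs at least {MIN_DISKS[redundancy]} disks")
    if redundancy == "single":
        return sum(disk_sizes)              # no redundancy: every disk adds capacity
    smallest = min(disk_sizes)              # mixed sizes count as the smallest disk
    if redundancy == "mirror":
        return smallest                     # one disk's worth per mirrored group
    return (len(disk_sizes) - PARITY[redundancy]) * smallest  # RAIDZ: n minus parity

print(group_capacity("mirror", [4, 4]))    # 4
print(group_capacity("z1", [4, 4, 4, 4]))  # 12
```

For Mirror (Multiple Groups), the wizard pairs the selected disks two by two, so the total is the sum of the per-pair capacities.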
 
 
 
=== Add write log (optional) ===
 
This feature allows you to configure the write log function with a selected redundancy level (single drive or mirror). The write log utilizes a separate intent log (SLOG) device. A fast SSD/NVMe should be used for this vdev.
 
#Select disks from the available list.
#Choose redundancy type ('''Single''' or '''Mirror''') for added reliability.
#Add the group to the zpool.
 
Write log groups are displayed separately in the Other groups section.
 
 
 
'''Key points to consider''':
 
*If multiple log devices are specified, write operations are load-balanced between the devices.
*Log devices can be configured with redundancy by using mirrors to enhance fault tolerance.
*RAIDZ vdev types are not supported for the intent log.

This ensures efficient and reliable write operations while leveraging the selected redundancy level.
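The load-balancing behaviour can be pictured with a minimal round-robin sketch; the real ZFS allocator is more sophisticated, and the device names here are hypothetical.

```python
from itertools import cycle

# Round-robin sketch only: ZFS's actual log allocation is more sophisticated.
def balance_writes(log_devices, writes):
    """Assign each queued write to the next log device in turn."""
    assignment = {dev: [] for dev in log_devices}
    devices = cycle(log_devices)
    for w in writes:
        assignment[next(devices)].append(w)
    return assignment

# Hypothetical device names; four writes spread across two log devices.
print(balance_writes(["log0", "log1"], ["w1", "w2", "w3", "w4"]))
# {'log0': ['w1', 'w3'], 'log1': ['w2', 'w4']}
```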
 
=== Add read cache (optional) ===
 
This step allows you to assign SSDs as L2ARC (Level 2 Adaptive Replacement Cache) devices to boost read performance. Adding a read cache improves performance and reduces latency for storage systems under heavy read load. A cache device stores frequently accessed data from the storage pool, providing an additional layer of caching between main memory and disk. These devices cannot be configured as mirrors or RAIDZ groups. A fast SSD/NVMe should be used for this vdev.
 
#Select a disk to be used as a cache device.&nbsp;Only '''Single''' redundancy is available.
#Confirm by adding the group.
 
'''Key benefits and considerations''':

*Cache devices are particularly useful for '''read-heavy workloads''' where the working dataset size exceeds the capacity of main memory.
*By utilizing cache devices, a larger portion of the working dataset can be served from low-latency storage, improving performance significantly.
*The greatest performance improvements are seen in workloads characterized by random reads of primarily static content.
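The caching idea can be illustrated with a toy two-level cache: blocks evicted from the fast in-memory tier land on the SSD tier, so a later read is a cache hit instead of a disk read. The real ARC/L2ARC uses far more elaborate replacement policies, and the slot counts below are arbitrary.

```python
from collections import OrderedDict

# Toy two-level read cache (illustrative; the real ARC/L2ARC is far more complex).
class TwoLevelCache:
    def __init__(self, ram_slots, l2_slots):
        self.ram = OrderedDict()   # fast main-memory cache (ARC stand-in)
        self.l2 = OrderedDict()    # SSD read cache (L2ARC stand-in)
        self.ram_slots, self.l2_slots = ram_slots, l2_slots

    def read(self, block):
        if block in self.ram:
            self.ram.move_to_end(block)
            return "ram hit"
        hit = "l2 hit" if self.l2.pop(block, None) is not None else "disk read"
        self.ram[block] = True
        if len(self.ram) > self.ram_slots:         # evict oldest RAM entry into L2
            evicted, _ = self.ram.popitem(last=False)
            self.l2[evicted] = True
            if len(self.l2) > self.l2_slots:
                self.l2.popitem(last=False)
        return hit

cache = TwoLevelCache(ram_slots=1, l2_slots=4)
cache.read("a"); cache.read("b")   # "a" is evicted from RAM into L2
print(cache.read("a"))             # l2 hit
```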
 
=== Add special devices group (optional) ===
 
Special and deduplication vdevs require at least the same level of redundancy as data vdevs.
Because RAIDZ vdevs do not provide compatible redundancy for these device groups, special vdevs and deduplication vdevs cannot be used in a ZFS pool that contains RAIDZ1, RAIDZ2, or RAIDZ3.
 
A special devices group stores metadata and small-block data to improve performance. A fast SSD/NVMe should be used for this vdev.
 
#Select one or more disks.
#Choose redundancy ('''Single''' or '''Mirror'''). '''The mirror redundancy level is recommended to prevent data loss'''.
#Add them as a group.
 
'''Key features and benefits''':

*Storing metadata on special devices improves performance for metadata-intensive operations, such as file lookups and directory traversals.
*Small files below a certain size threshold can also be stored on these devices, enhancing read and write speeds for such workloads.
*Special devices are particularly beneficial for environments with a large number of small files or high metadata activity.

Using special devices optimizes the overall performance of the ZFS pool by offloading critical metadata and small-file operations to faster storage.
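The small-block behaviour can be sketched as a size-based routing decision. In upstream OpenZFS the threshold corresponds to the `special_small_blocks` dataset property; the 32 KiB value below is only an example, not a default.

```python
# Size-based routing sketch; the 32 KiB threshold is an example value, not a default.
SPECIAL_SMALL_BLOCKS = 32 * 1024  # bytes

def device_class(block_size, is_metadata):
    """Pick the device class an illustrative block would be written to."""
    if is_metadata:
        return "special"               # metadata lives on the special vdev
    if block_size <= SPECIAL_SMALL_BLOCKS:
        return "special"               # small data blocks qualify too
    return "data"                      # everything else goes to the data vdevs

print(device_class(4096, False))       # special
print(device_class(1 << 20, False))    # data
```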
 
=== Add deduplication group (optional) ===
 
A deduplication group holds the deduplication tables on dedicated devices, storing them separately from the special device class.
 
#Select disks for this purpose.&nbsp;Redundancy can be set to '''Single''' or '''Mirror'''.&nbsp;'''The mirror redundancy level is recommended to prevent data loss'''.
#Add the group to confirm.
 
'''Key features and considerations''':

*Storing deduplication tables in a dedicated group improves the efficiency of deduplication processes by isolating them from other metadata operations.
*This configuration provides flexibility in optimizing storage layout based on workload requirements.
*Using a deduplication group is particularly beneficial for systems with high deduplication demands, ensuring better performance and management.

This setup enhances deduplication performance while maintaining a clear separation of metadata and deduplication operations.
 
=== Add spare disks (optional) ===
 
A spare disk is a special pseudo-vdev used to track available spare devices for a zpool. Using spare disks enhances the storage pool's reliability by enabling seamless drive replacement and reducing the risk of data loss.
 
#Select the disk and add it to the '''Spare''' group.
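The drive replacement that a hot spare enables can be sketched as simple bookkeeping; the device names are hypothetical, and ZFS performs the replacement automatically.

```python
# Minimal hot-spare bookkeeping (illustrative; ZFS handles this automatically).
def replace_failed(active, spares, failed_disk):
    """Swap a failed active disk for the first available hot spare."""
    if failed_disk not in active:
        raise ValueError(f"{failed_disk} is not an active disk")
    if not spares:
        raise RuntimeError("no hot spare available")
    replacement = spares.pop(0)                      # spare leaves the Spare group
    active[active.index(failed_disk)] = replacement  # ...and takes the failed slot
    return replacement

disks, spares = ["sda", "sdb"], ["sdc"]              # hypothetical device names
print(replace_failed(disks, spares, "sdb"))          # sdc
```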
 
 
 
=== Configuration ===
 
In this step, you configure the final pool settings:
 
*'''Zpool name''' - Enter a unique name for the zpool for easy identification.
*'''autoTRIM''' - If supported by your devices, enable the AutoTRIM feature to reclaim unused space automatically. AutoTRIM helps optimize SSD performance and lifespan by notifying the controller when blocks are no longer in use.
*'''Initialize the zpool after creation''' - Writes patterns to unallocated space to avoid initial-write latency, especially in virtualized environments.&nbsp;The process may extend creation time and briefly affect performance.
 
Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.
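As a sketch of what makes a pool name acceptable, the check below follows the upstream OpenZFS zpool(8) naming rules (name starts with a letter, limited punctuation, reserved vdev keywords are rejected); the wizard may apply its own validation.

```python
import re

# Rough validity check after the upstream OpenZFS zpool(8) naming rules;
# the wizard may enforce different or additional rules.
RESERVED = {"mirror", "raidz", "draid", "spare", "log"}

def valid_zpool_name(name):
    """Return True if the name looks like a legal zpool name."""
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_.:-]*", name):
        return False                   # must start with a letter
    if name in RESERVED:
        return False                   # vdev keywords are reserved
    return not name.startswith(("mirror", "raidz", "draid", "spare"))

print(valid_zpool_name("tank01"))      # True
print(valid_zpool_name("raidz2"))      # False
```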
 
 
 
=== Resource encryption (optional) ===
 
Encryption applies to datasets and zvols created in the ZFS pool. The zpool itself remains unencrypted.
 
#Enable '''Configure encryption passphrase'''.&nbsp;
#Select a '''Default encryption method'''.&nbsp;
#Enter and confirm the passphrase.
 
 
'''Note''':

*The passphrase cannot be recovered.
*Encrypted resources inherit the passphrase unless changed later.
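The passphrase entry step can be sketched as a simple validation; the 8-character minimum here is an assumption for illustration, not a documented Open-E requirement.

```python
# Illustrative passphrase check; the 8-character minimum is an assumption,
# not a documented Open-E requirement.
MIN_LEN = 8

def check_passphrase(passphrase, confirmation):
    """Validate the entered passphrase before the pool is created."""
    if len(passphrase) < MIN_LEN:
        return "too short"
    if passphrase != confirmation:
        return "entries do not match"
    return "ok"

print(check_passphrase("correct horse", "correct horse"))  # ok
```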
 
=== Summary ===
 
The summary page displays the complete zpool configuration before finalization. Click '''Add zpool''' to complete pool creation.&nbsp;The wizard will create the zpool with the selected configuration.
 
'''Remember''':

*Redundancy level cannot be changed after the ZFS pool is created.
*Mixed disk sizes reduce usable capacity to the smallest disk in a vdev.
*SSDs are recommended for write log, special devices, and deduplication groups.
*Encryption passphrases cannot be recovered.
[[Category:Help topics]]

Latest revision as of 07:43, 15 January 2026
