<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.open-e.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Da-F</id>
	<title>Open-E Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.open-e.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Da-F"/>
	<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/Special:Contributions/Da-F"/>
	<updated>2026-05-03T08:17:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=HTTPS_Certificate&amp;diff=12390</id>
		<title>HTTPS Certificate</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=HTTPS_Certificate&amp;diff=12390"/>
		<updated>2026-02-23T14:48:41Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;__NOTOC__ The HTTPS Certificate option allows you to configure the SSL/TLS certificate used for secure communication with the system. You can either use the default self-signed certificate delivered with the software or upload your own custom certificate.  == Certificate Options ==  === Self-signed certificate (default) ===  A self-signed certificate is an identity certificate signed by the same entity whose identity it certifies. This is the default certificate provided...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The HTTPS Certificate option allows you to configure the SSL/TLS certificate used for secure communication with the system. You can either use the default self-signed certificate delivered with the software or upload your own custom certificate.&lt;br /&gt;
&lt;br /&gt;
== Certificate Options ==&lt;br /&gt;
&lt;br /&gt;
=== Self-signed certificate (default) ===&lt;br /&gt;
&lt;br /&gt;
A self-signed certificate is an identity certificate signed by the same entity whose identity it certifies. This is the default certificate provided with the software. The system displays information about the certificate:&lt;br /&gt;
&lt;br /&gt;
*Certificate name&lt;br /&gt;
*Certificate Authority&lt;br /&gt;
*Expiration date&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Note&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 Self-signed certificates are not trusted by browsers by default and may generate security warnings. &lt;br /&gt;
&lt;br /&gt;
=== Custom certificate ===&lt;br /&gt;
&lt;br /&gt;
A custom certificate is signed by a &#039;&#039;&#039;Certificate Authority (CA)&#039;&#039;&#039; or can be self-generated. Use this option if you want to replace the self-signed certificate with your own trusted certificate. To configure, you must upload both:&lt;br /&gt;
&lt;br /&gt;
*Private key file&lt;br /&gt;
*Certificate file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Requirements&#039;&#039;&#039;:&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
 • Only &#039;&#039;&#039;RSA&#039;&#039;&#039; or &#039;&#039;&#039;ECC (Elliptic Curve Cryptography)&#039;&#039;&#039; private keys are supported. &lt;br /&gt;
 • &#039;&#039;&#039;RSA&#039;&#039;&#039;: Key length must be at least 2048 bits. &lt;br /&gt;
 • &#039;&#039;&#039;ECC&#039;&#039;&#039;: Supported curves: P-256 (secp256r1 or prime256v1) and P-384 (secp384r1)&lt;br /&gt;
&lt;br /&gt;
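The requirements above amount to a simple parameter check. The following is an illustrative sketch only; the helper name and calling convention are invented here and are not part of the product:

```python
# Illustrative check of the documented private-key requirements.
# Assumption: the key parameters are already known (e.g. from your CA
# tooling); this sketch does not parse key files.

SUPPORTED_ECC_CURVES = {"secp256r1", "prime256v1", "secp384r1"}  # P-256 and P-384
MIN_RSA_BITS = 2048

def key_meets_requirements(algorithm: str, rsa_bits: int = 0, curve: str = "") -> bool:
    """Return True when the declared key parameters satisfy the upload rules."""
    algorithm = algorithm.upper()
    if algorithm == "RSA":
        return rsa_bits >= MIN_RSA_BITS
    if algorithm == "ECC":
        return curve.lower() in SUPPORTED_ECC_CURVES
    return False  # only RSA and ECC private keys are supported
```
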
After selecting the files, click &#039;&#039;&#039;Apply&#039;&#039;&#039; to activate the certificate.&lt;br /&gt;
&lt;br /&gt;
=== Applying Changes ===&lt;br /&gt;
&lt;br /&gt;
#Select the certificate type (&#039;&#039;&#039;Self-signed&#039;&#039;&#039; or &#039;&#039;&#039;Custom&#039;&#039;&#039;).&lt;br /&gt;
#If &#039;&#039;&#039;Custom&#039;&#039;&#039;, upload the required private key and certificate files.&lt;br /&gt;
#Click &#039;&#039;&#039;Apply&#039;&#039;&#039;.&lt;br /&gt;
#The system will immediately apply the new certificate.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;“Invalid certificate or key” error&#039;&#039;&#039;&lt;br /&gt;
**Check that the private key and certificate match. Ensure the private key is at least 2048 bits (RSA) or uses a supported ECC curve.&lt;br /&gt;
*&#039;&#039;&#039;Connection fails after applying a custom certificate&#039;&#039;&#039;&lt;br /&gt;
**Verify that the uploaded files are not corrupted and the certificate chain is complete (including intermediate certificates if required).&lt;br /&gt;
*&#039;&#039;&#039;Expired certificate&#039;&#039;&#039;&lt;br /&gt;
**Replace the certificate with a new one before the expiration date to avoid service interruption.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=About&amp;diff=12389</id>
		<title>About</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=About&amp;diff=12389"/>
		<updated>2026-02-19T09:48:38Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Port number required for successful product activation.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This function summarizes all information connected with your license. The detailed information is divided into two panels:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ABOUT&#039;&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Version&#039;&#039;&#039; - shows detailed information about the installed JovianDSS release.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Serial Number&#039;&#039;&#039; - the serial number of your JovianDSS license.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Storage&#039;&#039;&#039; - the usable storage size limit. Either a maximum storage size is displayed, or the storage is unlimited.&amp;lt;br/&amp;gt;The &#039;&#039;&#039;TRIAL&#039;&#039;&#039; license is an Unlimited Storage license. After the &#039;&#039;&#039;TRIAL&#039;&#039;&#039; period is over, the system&#039;s performance will be significantly reduced.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Version status&#039;&#039;&#039; - shows the current activation status, either &#039;Activated&#039; or &#039;Not Activated&#039;. If the status is &#039;Not Activated&#039;, the &#039;&#039;&#039;ACTIVATE&#039;&#039;&#039; button is enabled. To activate the product, the system needs access to the Internet. The communication port required for successful activation is 443 (source and destination). In case of activation errors, verify your firewall rules.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Expiration date&#039;&#039;&#039;&amp;amp;nbsp;: trial license expiration date.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;LICENSES&#039;&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
&lt;br /&gt;
License keys are provided to you by an Open-E partner.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Product Key&#039;&#039;&#039; - the Product Key format is XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX&amp;lt;br/&amp;gt;&#039;&#039;&#039;Storage Key&#039;&#039;&#039; - allows extending the storage capacity managed by the JovianDSS system.&amp;lt;br/&amp;gt;&#039;&#039;&#039;Feature Pack keys&#039;&#039;&#039; - add additional functions to the system.&lt;br /&gt;
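As a sketch, the stated key layout (seven dash-separated groups of four characters) can be checked with a regular expression. The accepted character set is an assumption here; the real alphabet may differ:

```python
import re

# Hypothetical validator for the documented key layout:
# seven dash-separated groups of four characters (XXXX-XXXX-...-XXXX).
# Assumption: groups are alphanumeric.
KEY_PATTERN = re.compile(r"(?:[A-Z0-9]{4}-){6}[A-Z0-9]{4}")

def looks_like_product_key(key: str) -> bool:
    """Return True when the string matches the documented key layout."""
    return KEY_PATTERN.fullmatch(key.strip().upper()) is not None
```
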
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Encryption&amp;diff=12388</id>
		<title>Encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Encryption&amp;diff=12388"/>
		<updated>2026-02-17T09:15:56Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Encryption protects data stored in datasets and zvols within a ZFS pool (zpool). The encryption feature is available for every zpool, but encrypted resources can be created only after you configure a pool-wide encryption passphrase.&lt;br /&gt;
&lt;br /&gt;
Key characteristics:&lt;br /&gt;
&lt;br /&gt;
*Encryption applies to datasets and zvols; the zpool itself is not encrypted.&lt;br /&gt;
*All encrypted resources in one zpool share the same passphrase.&lt;br /&gt;
*Datasets and zvols can only be encrypted during their creation.&lt;br /&gt;
*You can later change the pool-wide encryption passphrase and the default encryption method.&lt;br /&gt;
&lt;br /&gt;
Use encryption when you need at-rest data protection within a specific zpool.&lt;br /&gt;
&lt;br /&gt;
== Configuring resource encryption ==&lt;br /&gt;
&lt;br /&gt;
#Go to &#039;&#039;&#039;Storage&#039;&#039;&#039;.&lt;br /&gt;
#Select the zpool you want to configure.&lt;br /&gt;
#Open the &#039;&#039;&#039;Configuration&#039;&#039;&#039; tab.&lt;br /&gt;
#Expand the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will see either the initial configuration fields or the current encryption status, depending on whether encryption was already configured or was configured during [[Zpool_wizard|zpool creation]]. When no passphrase is configured for a zpool, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039; – algorithm that is preselected in the drop-down list and used by default for new encrypted datasets and zvols in this zpool, if you do not choose a different method during resource creation.&lt;br /&gt;
*&#039;&#039;&#039;Encryption passphrase&#039;&#039;&#039; – shared passphrase used to unlock all encrypted resources in this zpool.&lt;br /&gt;
*&#039;&#039;&#039;Confirm passphrase&#039;&#039;&#039; – repeat the passphrase for verification.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enter the passphrase twice, select the default method, and then click &#039;&#039;&#039;Save settings&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Important&#039;&#039;&#039;: The passphrase cannot be recovered if it is lost. Without the passphrase, encrypted resources in this zpool cannot be accessed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the passphrase is configured, you can start creating encrypted datasets and zvols in this zpool. More details on how to use encryption in resources can be found here:&lt;br /&gt;
&lt;br /&gt;
*Create a new zvol for iSCSI Target&lt;br /&gt;
*Create a new zvol for FC Group&lt;br /&gt;
*Create a new dataset&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
*Encryption can be enabled only at creation time. Existing datasets and zvols cannot be switched to encrypted mode by editing their properties.&lt;br /&gt;
*To protect existing data that is currently unencrypted, you must:&lt;br /&gt;
**Create a new encrypted dataset or zvol.&lt;br /&gt;
**Copy or replicate data from the old resource to the new encrypted one.&lt;br /&gt;
**Remove the unencrypted original if it is no longer needed.&lt;br /&gt;
&lt;br /&gt;
== Managing a zpool with configured resource encryption ==&lt;br /&gt;
&lt;br /&gt;
When a passphrase is already configured, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Passphrase status&#039;&#039;&#039; (for example, configured).&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039;.&lt;br /&gt;
*Buttons:&lt;br /&gt;
**Change passphrase&lt;br /&gt;
**Change encryption method&lt;br /&gt;
&lt;br /&gt;
=== Changing the encryption passphrase ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
#In the dialog:&lt;br /&gt;
##Enter &#039;&#039;&#039;New passphrase&#039;&#039;&#039;.&lt;br /&gt;
##Confirm passphrase.&lt;br /&gt;
##Enter the &#039;&#039;&#039;Administrator&#039;&#039;&#039; password to authorize the change.&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
After you confirm the change, the new passphrase is propagated to all existing encrypted datasets and zvols in the zpool. This synchronization may take some time, depending on the number of encrypted resources. A notification of the operation&#039;s start and completion is recorded in &#039;&#039;&#039;Event Viewer&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 While the synchronization is in progress, the User Interface is locked for changes and cannot be used until the operation finishes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Changing the default encryption method ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change encryption method&#039;&#039;&#039;.&lt;br /&gt;
#Select a new &#039;&#039;&#039;Default encryption method&#039;&#039;&#039; from the drop-down list.&lt;br /&gt;
#Click &#039;&#039;&#039;Save method&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The selected method becomes the default only for encrypted datasets and zvols created after this change. Existing encrypted resources keep their original encryption method, which cannot be changed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Available encryption methods ====&lt;br /&gt;
&lt;br /&gt;
The following methods are available for resource encryption:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in CCM (Counter with CBC-MAC) mode.&lt;br /&gt;
*Provides authenticated encryption with moderate CPU usage.&lt;br /&gt;
*Suitable when you need a balance between performance and security.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in CCM mode.&lt;br /&gt;
*Higher security margin than 128-bit, with slightly higher CPU cost.&lt;br /&gt;
*Use when you prefer stronger keys and can accept a small performance impact.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in CCM mode.&lt;br /&gt;
*Maximum key length in the CCM group.&lt;br /&gt;
*Use when the security margin is more important than performance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in GCM (Galois/Counter Mode).&lt;br /&gt;
*Authenticated encryption optimized for performance on modern CPUs.&lt;br /&gt;
*Good choice when you need strong encryption with high throughput.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in GCM mode.&lt;br /&gt;
*Increases key size over AES-128-GCM while remaining performant.&lt;br /&gt;
*Use when you want a higher security margin but similar behavior to AES-128-GCM.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in GCM mode.&lt;br /&gt;
*Provides strong authenticated encryption and is widely used as a best-practice choice.&lt;br /&gt;
*Recommended default when your hardware can handle the additional CPU load.&lt;br /&gt;
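The method names above follow a cipher-keylength-mode convention, which can be captured in a small lookup. This is an illustrative table only, not a product API:

```python
# The methods documented above, as an illustrative lookup table
# (names are the GUI labels; this helper is invented for illustration).
ENCRYPTION_METHODS = {
    "AES-128-CCM", "AES-192-CCM", "AES-256-CCM",
    "AES-128-GCM", "AES-192-GCM", "AES-256-GCM",
}

def parse_method(name: str):
    """Split a supported method label into (cipher, key_bits, mode)."""
    if name not in ENCRYPTION_METHODS:
        raise ValueError(f"unsupported encryption method: {name}")
    cipher, bits, mode = name.split("-")
    return cipher, int(bits), mode
```
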
&lt;br /&gt;
&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
== Handling invalid or missing passphrase ==&lt;br /&gt;
&lt;br /&gt;
If the encryption passphrase is invalid or not configured on the current host, all encrypted datasets and zvols in the affected zpool are locked and cannot be accessed. When a locked zvol is attached to an iSCSI target, FC group, or NVMe-oF subsystem, these objects are effectively blocked as well, and no data can be accessed through them. For an encrypted dataset, all shares configured on it are also blocked.&lt;br /&gt;
&lt;br /&gt;
To restore access, enter the correct passphrase in &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; for the zpool. After a valid passphrase is provided, all locked, encrypted resources are automatically unlocked and become active again, provided that the related targets, groups, subsystems, or datasets were not manually deactivated beforehand.&lt;br /&gt;
&lt;br /&gt;
Such situations may occur, for example, when a zpool is imported on a different host or moved between cluster nodes. In a cluster environment, the passphrase is usually synchronized between nodes, so after a failover, the other node already has the required passphrase. However, if the passphrase change operation was interrupted, some encrypted resources may have been updated to the new passphrase while others still use the old one. On the original host, access may still work, but after exporting the zpool and importing it on another host, some or all encrypted resources can become partially locked. In this case, an event is recorded in the Event Viewer indicating that the passphrase change did not complete successfully.&lt;br /&gt;
&lt;br /&gt;
If this happens, first try to unlock the resources by entering the latest passphrase (the one you intended to change to). If this does not unlock all encrypted resources, enter the previous passphrase (the one used before the change), allow the passphrase change process to complete, and then change the passphrase again to the desired new value. This sequence should unify the passphrase across all encrypted resources in the zpool. Always monitor Event Viewer logs when working with encrypted resources and when changing passphrases.&lt;br /&gt;
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=FC_group&amp;diff=12387</id>
		<title>FC group</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=FC_group&amp;diff=12387"/>
		<updated>2026-02-17T09:11:12Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__Each pool contains its own FC configuration. This configuration is assigned to a particular pool and can be used on any machine where the pool is imported. However, FC targets are local to a particular machine, and each pool contains a mapping of the FC targets to be used on a given machine. When a pool is imported on a machine where targets are not assigned to the pool configuration, it is possible to specify them after the pool import.&lt;br /&gt;
&lt;br /&gt;
The FC configuration consists of groups that define which volumes are available on given ports to configured initiators. Two types of groups are available: a public group and initiator groups. The public group allows any initiator connected to the configured ports to access the LUNs assigned to this group. The public group is present on a pool by default and cannot be removed or created. Initially, it has no volumes or ports assigned, so nothing is available until it is configured manually. Because this group allows any initiator to connect, it is not possible to assign initiators to it. The second type of FC group is the initiator group. With this group it is possible to define which initiators can connect to the LUNs assigned to the group. An initiator that is not assigned to an FC group will not be able to connect through ports to LUNs. It is possible to configure many initiator groups to provide different access configurations to volumes through FC targets. An alias can be defined for each initiator group to allow easier identification of the group&#039;s purpose.&lt;br /&gt;
&lt;br /&gt;
In general, a group gathers a set of ports, volumes, and initiators. LUNs added to a group define which volumes are available in that group. Ports assigned to a group define on which ports it is possible to connect to the LUNs in that group. Finally, initiators (in the case of an initiator group) define which initiators (ports of remote machines) will be able to connect to the LUNs in the group using the ports assigned to the group. For example, to allow initiators Ini0 and Ini1 to access volume Vol-01 through ports P0 and P1, create an initiator group with ports P0 and P1 assigned, and then add volume Vol-01 and initiators Ini0 and Ini1 to this group. It is possible to assign the same volume, initiator, or port to more than one group. However, there are some limitations, and the system will not allow a configuration that violates them:&lt;br /&gt;
&lt;br /&gt;
#The same target cannot be assigned to two groups that share a set of initiators.&lt;br /&gt;
#Due to the rule above, the same initiator cannot be assigned to two groups that share a set of ports.&lt;br /&gt;
#A target assigned to the public group cannot be used by an initiator group, and the other way around: a target assigned to an initiator group cannot be used in the public group.&lt;br /&gt;
#A target can be assigned to only one pool - the same port cannot be used in two groups that belong to different pools. If a target is used by an active pool and another pool that also uses this target is imported, the group using the conflicting target will be deactivated upon import.&lt;br /&gt;
&lt;br /&gt;
Moreover, a volume used by iSCSI cannot be assigned to any FC group and the other way around.&lt;br /&gt;
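The first two limitations above boil down to one rule: two groups must not share both a target and an initiator. A minimal sketch, assuming a simplified data model in which each group is just a set of targets plus a set of initiators (this is not the product API):

```python
# Illustrative model of limitations 1-2 above: two FC groups conflict
# when they share at least one target (local port) AND at least one
# initiator, because an initiator would then see the same target twice.

def groups_conflict(targets_a: set, initiators_a: set,
                    targets_b: set, initiators_b: set) -> bool:
    """Return True when the two groups violate limitations 1-2."""
    share_target = not targets_a.isdisjoint(targets_b)
    share_initiator = not initiators_a.isdisjoint(initiators_b)
    return share_target and share_initiator
```
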
&lt;br /&gt;
&amp;lt;br/&amp;gt;A created group can be modified at any time. It is possible to assign or remove initiators, ports, or volumes. Volumes assigned to groups can be modified; however, be careful, because in some cases the connected initiator may lose access to the volume during this operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to deactivate any FC group. When a group is inactive, the configuration represented by that group is not applied to the ports, so the LUNs are not available to initiators from that group. A group can be deactivated either manually or by the system in case of configuration conflicts. Configuration conflicts are possible mainly during a foreign pool import. A group is deactivated on the imported pool in case of the following conflicts:&lt;br /&gt;
&lt;br /&gt;
#A target used by the pool is already used by another active pool.&lt;br /&gt;
#One of the LUNs uses a SCSI ID that is already used by an FC or iSCSI LUN on another pool.&lt;br /&gt;
&lt;br /&gt;
A group that was deactivated due to a conflict can be activated manually after resolving the conflict by modifying the configuration.&lt;br /&gt;
&lt;br /&gt;
A bit more explanation is required for SCSI ID uniqueness. This LUN identifier consists of 16 characters; however, two SCSI IDs that share the same first 8 characters are considered conflicting. This is due to the way some initiators read these identifiers: some honor only the first 8 characters of the SCSI ID, which could lead to issues if two LUNs shared that part of the identifier. In most cases you do not have to worry about this setting, because the system assigns a unique SCSI ID to the volume based on its name and creation time stamp. When a SCSI ID is not specified, the one assigned to the volume is used for a LUN. It is recommended to use the default (system-generated) SCSI ID; to do so, simply do not specify any SCSI ID when configuring the LUN.&lt;br /&gt;
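The 8-character conflict rule described above can be sketched as a simple comparison (an illustrative helper, not a product API; the comparison is exact, which is an assumption):

```python
# Some initiators honor only the first 8 characters of a 16-character
# SCSI ID, so two IDs sharing that prefix are treated as conflicting.
SIGNIFICANT_PREFIX = 8

def scsi_ids_conflict(id_a: str, id_b: str) -> bool:
    """Return True when two SCSI IDs share the significant 8-character prefix."""
    return id_a[:SIGNIFICANT_PREFIX] == id_b[:SIGNIFICANT_PREFIX]
```
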
&lt;br /&gt;
&amp;lt;br/&amp;gt;FC targets assigned to groups require additional configuration to be used in a cluster environment. For a detailed description of this configuration, please refer to the Edit FC target properties section.&lt;br /&gt;
&lt;br /&gt;
== FC groups and encrypted resources ==&lt;br /&gt;
FC Groups can contain a mix of unencrypted and encrypted zvols. However, encryption introduces strict dependency rules that affect the availability of the entire group.&lt;br /&gt;
&lt;br /&gt;
=== Group locking mechanism ===&lt;br /&gt;
If any encrypted zvol assigned to an FC Group cannot be accessed due to an encryption issue, the system will:&lt;br /&gt;
&lt;br /&gt;
* Automatically set the FC Group status to &#039;&#039;&#039;Locked&#039;&#039;&#039; &lt;br /&gt;
* Block access to &#039;&#039;&#039;all zvols&#039;&#039;&#039; in that group, including unencrypted ones &lt;br /&gt;
* Prevent initiators from accessing any LUNs assigned to the group &lt;br /&gt;
&lt;br /&gt;
This behavior is intentional and ensures data consistency and security.&lt;br /&gt;
&lt;br /&gt;
In the GUI, encrypted zvols with access issues are marked with an &#039;&#039;&#039;error indicator&#039;&#039;&#039;, and a tooltip may display the cause of the issue (e.g., an incorrect encryption passphrase).&lt;br /&gt;
&lt;br /&gt;
=== Resolving a Locked FC Group ===&lt;br /&gt;
If an FC Group is locked:&lt;br /&gt;
&lt;br /&gt;
# Identify encrypted zvols in the group.&lt;br /&gt;
# Check their encryption status.&lt;br /&gt;
# Unlock the affected zvols by providing the correct encryption passphrase.&lt;br /&gt;
# Verify that all encrypted zvols are accessible.&lt;br /&gt;
&lt;br /&gt;
In general, once all encryption issues are resolved, encrypted resources are unlocked automatically and the FC Group returns to &#039;&#039;&#039;Active&#039;&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
In rare cases, automatic activation may fail even though encryption issues have already been resolved. In such situations, deactivating and then reactivating the FC Group can be used to trigger the same validation procedures that are executed after resolving encryption-related errors. If encryption issues are fully resolved, the FC Group and all its resources will activate successfully. If not, the system will prevent activation and display an error.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Important:&#039;&#039;&#039;&lt;br /&gt;
 If the FC Group was &#039;&#039;&#039;manually deactivated while it was locked&#039;&#039;&#039;, resolving the encryption issues will still unlock the encrypted resources, but the FC Group will &#039;&#039;&#039;remain inactive&#039;&#039;&#039;. In this case, the group must be &#039;&#039;&#039;manually activated&#039;&#039;&#039; to make its resources available to initiators.&lt;br /&gt;
&lt;br /&gt;
=== Detaching blocked zvols as an alternative ===&lt;br /&gt;
As an alternative recovery method, a locked FC Group can be restored to an &#039;&#039;&#039;Active&#039;&#039;&#039; state by &#039;&#039;&#039;detaching blocked zvols&#039;&#039;&#039; (e.g., encrypted and inaccessible ones) from the FC Group.&lt;br /&gt;
&lt;br /&gt;
Detaching blocked zvols removes them from the group configuration. As a result:&lt;br /&gt;
&lt;br /&gt;
* The FC Group becomes &#039;&#039;&#039;Active&#039;&#039;&#039; again &lt;br /&gt;
* Initiators regain access to the remaining, non-blocked zvols assigned to the group&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After detaching blocked zvols:&lt;br /&gt;
&lt;br /&gt;
* The FC Group operates normally with the remaining zvols. &lt;br /&gt;
* Detached zvols remain unavailable until their encryption issues are resolved.&lt;br /&gt;
&lt;br /&gt;
If detached encrypted zvols are later unlocked:&lt;br /&gt;
&lt;br /&gt;
* They are &#039;&#039;&#039;not automatically reattached or activated.&#039;&#039;&#039; &lt;br /&gt;
* To make them available again, you must &#039;&#039;&#039;manually attach them to the FC Group.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This behavior ensures predictable recovery of FC Groups while preventing unintended exposure of storage resources after encryption-related access issues.&lt;br /&gt;
{{:Encryption}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Encryption&amp;diff=12386</id>
		<title>Encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Encryption&amp;diff=12386"/>
		<updated>2026-02-17T09:10:48Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Encryption protects data stored in datasets and zvols within a ZFS pool (zpool). The encryption feature is available for every zpool, but encrypted resources can be created only after you configure a pool-wide encryption passphrase.&lt;br /&gt;
&lt;br /&gt;
Key characteristics:&lt;br /&gt;
&lt;br /&gt;
*Encryption applies to datasets and zvols; the zpool itself is not encrypted.&lt;br /&gt;
*All encrypted resources in one zpool share the same passphrase.&lt;br /&gt;
*Datasets and zvols can only be encrypted during their creation.&lt;br /&gt;
*You can later change the pool-wide encryption passphrase and the default encryption method.&lt;br /&gt;
&lt;br /&gt;
Use encryption when you need at-rest data protection within a specific zpool.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring resource encryption ==&lt;br /&gt;
&lt;br /&gt;
#Go to &#039;&#039;&#039;Storage&#039;&#039;&#039;.&lt;br /&gt;
#Select the zpool you want to configure.&lt;br /&gt;
#Open the &#039;&#039;&#039;Configuration&#039;&#039;&#039; tab.&lt;br /&gt;
#Expand the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will see either the initial configuration fields or the current encryption status, depending on whether encryption was already configured or was configured during [[Zpool_wizard|zpool creation]]. When no passphrase is configured for a zpool, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039; – algorithm that is preselected in the drop-down list and used by default for new encrypted datasets and zvols in this zpool, if you do not choose a different method during resource creation.&lt;br /&gt;
*&#039;&#039;&#039;Encryption passphrase&#039;&#039;&#039; – shared passphrase used to unlock all encrypted resources in this zpool.&lt;br /&gt;
*&#039;&#039;&#039;Confirm passphrase&#039;&#039;&#039; – repeat the passphrase for verification.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enter the passphrase twice, select the default method, and then click &#039;&#039;&#039;Save settings&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Important&#039;&#039;&#039;: The passphrase cannot be recovered if it is lost. Without the passphrase, encrypted resources in this zpool cannot be accessed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the passphrase is configured, you can start creating encrypted datasets and zvols in this zpool. More details on how to use encryption in resources can be found here:&lt;br /&gt;
&lt;br /&gt;
*Create a new zvol for iSCSI Target&lt;br /&gt;
*Create a new zvol for FC Group&lt;br /&gt;
*Create a new dataset&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
*Encryption can be enabled only at creation time. Existing datasets and zvols cannot be switched to encrypted mode by editing their properties.&lt;br /&gt;
*To protect existing data that is currently unencrypted, you must:&lt;br /&gt;
**Create a new encrypted dataset or zvol.&lt;br /&gt;
**Copy or replicate data from the old resource to the new encrypted one.&lt;br /&gt;
**Remove the unencrypted original if it is no longer needed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Managing a zpool with configured resource encryption ==&lt;br /&gt;
&lt;br /&gt;
When a passphrase is already configured, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Passphrase status&#039;&#039;&#039; (for example, configured).&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039;.&lt;br /&gt;
*Buttons:&lt;br /&gt;
**Change passphrase&lt;br /&gt;
**Change encryption method&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Changing the encryption passphrase ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
#In the dialog:&lt;br /&gt;
##Enter &#039;&#039;&#039;New passphrase&#039;&#039;&#039;.&lt;br /&gt;
##Confirm passphrase.&lt;br /&gt;
##Enter the &#039;&#039;&#039;Administrator&#039;&#039;&#039; password to authorize the change.&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
After you confirm the change, the new passphrase is propagated to all existing encrypted datasets and zvols in the zpool. This synchronization may take some time, depending on the number of encrypted resources. A notification of the operation&#039;s start and completion is recorded in &#039;&#039;&#039;Event Viewer&#039;&#039;&#039;.&lt;br /&gt;
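&lt;br /&gt;
In plain OpenZFS terms, this operation corresponds roughly to zfs change-key on each encryption root. A minimal sketch with a placeholder name (the GUI repeats the equivalent step for every encrypted resource in the zpool):&lt;br /&gt;
&lt;br /&gt;
```shell
# Hedged sketch of a passphrase change; "pool/secure" is a placeholder.
# The command prompts for the new passphrase, mirroring the
# New passphrase / Confirm passphrase fields in the dialog.
zfs change-key -o keyformat=passphrase pool/secure
```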
&lt;br /&gt;
 While the synchronization is in progress, the User Interface is locked for changes and cannot be used until the operation finishes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Changing the default encryption method ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change encryption method&#039;&#039;&#039;.&lt;br /&gt;
#Select a new &#039;&#039;&#039;Default encryption method&#039;&#039;&#039; from the drop-down list.&lt;br /&gt;
#Click &#039;&#039;&#039;Save method&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The selected method becomes the default only for encrypted datasets and zvols created after this change. Existing encrypted resources keep their original encryption method, which cannot be changed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Available encryption methods ====&lt;br /&gt;
&lt;br /&gt;
The following methods are available for resource encryption:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in CCM (Counter with CBC-MAC) mode.&lt;br /&gt;
*Provides authenticated encryption with moderate CPU usage.&lt;br /&gt;
*Suitable when you need a balance between performance and security.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in CCM mode.&lt;br /&gt;
*Higher security margin than 128-bit, with slightly higher CPU cost.&lt;br /&gt;
*Use when you prefer stronger keys and can accept a small performance impact.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in CCM mode.&lt;br /&gt;
*Maximum key length in the CCM group.&lt;br /&gt;
*Use when the security margin is more important than performance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in GCM (Galois/Counter Mode).&lt;br /&gt;
*Authenticated encryption optimized for performance on modern CPUs.&lt;br /&gt;
*Good choice when you need strong encryption with high throughput.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in GCM mode.&lt;br /&gt;
*Increases key size over AES-128-GCM while remaining performant.&lt;br /&gt;
*Use when you want a higher security margin but similar behavior to AES-128-GCM.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in GCM mode.&lt;br /&gt;
*Provides strong authenticated encryption and is widely used as a best-practice choice.&lt;br /&gt;
*Recommended default when your hardware can handle the additional CPU load.&lt;br /&gt;
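&lt;br /&gt;
The method names in the drop-down list match the values accepted by the OpenZFS encryption property. A minimal sketch of creating a resource with an explicit method (the name pool/secure is a placeholder, not a product default):&lt;br /&gt;
&lt;br /&gt;
```shell
# "pool/secure" is a placeholder name. The encryption property accepts
# exactly the methods listed above (aes-128-ccm ... aes-256-gcm).
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           pool/secure

# Verify which method is actually in use:
zfs get encryption pool/secure
```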
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
== Handling invalid or missing passphrase ==&lt;br /&gt;
&lt;br /&gt;
If the encryption passphrase is invalid or not configured on the current host, all encrypted datasets and zvols in the affected zpool are locked and cannot be accessed. When a locked zvol is attached to an iSCSI target, FC group, or NVMe-oF subsystem, these objects are effectively blocked as well, and no data can be accessed through them. For an encrypted dataset, all shares configured on it are also blocked.&lt;br /&gt;
&lt;br /&gt;
To restore access, enter the correct passphrase in &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; for the zpool. After a valid passphrase is provided, all locked, encrypted resources are automatically unlocked and become active again, provided that the related targets, groups, subsystems, or datasets were not manually deactivated beforehand.&lt;br /&gt;
&lt;br /&gt;
Such situations may occur, for example, when a zpool is imported on a different host or moved between cluster nodes. In a cluster environment, the passphrase is usually synchronized between nodes, so after a failover, the other node already has the required passphrase. However, if the passphrase change operation was interrupted, some encrypted resources may have been updated to the new passphrase while others still use the old one. On the original host, access may still work, but after exporting the zpool and importing it on another host, some or all encrypted resources can become partially locked. In this case, an event is recorded in the Event Viewer indicating that the passphrase change did not complete successfully.&lt;br /&gt;
&lt;br /&gt;
If this happens, first try to unlock the resources by entering the latest passphrase (the one you intended to change to). If this does not unlock all encrypted resources, enter the previous passphrase (the one used before the change), allow the passphrase change process to complete, and then change the passphrase again to the desired new value. This sequence should unify the passphrase across all encrypted resources in the zpool. Always monitor Event Viewer logs when working with encrypted resources and when changing passphrases.&lt;br /&gt;
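&lt;br /&gt;
On a plain OpenZFS system, the unlock step after moving a pool corresponds roughly to importing the pool and loading the key; the pool name below is a placeholder, and the GUI performs the equivalent steps for you:&lt;br /&gt;
&lt;br /&gt;
```shell
# Hedged sketch: unlock encrypted resources after importing a zpool
# on another host. "pool" is a placeholder name.
zpool import pool
zfs load-key -r pool    # prompts for the passphrase
zfs mount -a            # encrypted filesystems become accessible again

# Check which resources are still locked (keystatus = unavailable):
zfs get -r keystatus pool
```
&lt;br /&gt;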
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Add_new_zvol&amp;diff=12377</id>
		<title>Add new zvol</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Add_new_zvol&amp;diff=12377"/>
		<updated>2026-01-15T08:09:27Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported as a LUN to hosts over iSCSI, Fibre Channel, or NVMe-oF. Zvols are usually used to:&lt;br /&gt;
&lt;br /&gt;
* Provide block storage for virtual machines, databases, or other applications that expect a disk device. &lt;br /&gt;
* Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings). &lt;br /&gt;
* Control how space is consumed by different applications or tenants at the pool level.&amp;lt;/onlyinclude&amp;gt; &lt;br /&gt;
&lt;br /&gt;
 You first create a zvol, and then (optionally) attach it to a target to make it available to hosts.&lt;br /&gt;
&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
== Creating a zvol ==&lt;br /&gt;
&lt;br /&gt;
# Go to the zpool management view in the GUI. &lt;br /&gt;
# Select and expand the zpool in which you want to create the zvol. &lt;br /&gt;
# Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section. &lt;br /&gt;
# Click &#039;&#039;&#039;Add zvol&#039;&#039;&#039; to open the &#039;&#039;&#039;Add new zvol&#039;&#039;&#039; dialog. &lt;br /&gt;
# Configure &#039;&#039;&#039;Encryption settings&#039;&#039;&#039; and &#039;&#039;&#039;Zvol properties&#039;&#039;&#039;, and optionally attach the zvol to an &#039;&#039;&#039;iSCSI target&#039;&#039;&#039; or &#039;&#039;&#039;NVMe-oF subsystem&#039;&#039;&#039;, or assign it to &#039;&#039;&#039;FC groups&#039;&#039;&#039;. &lt;br /&gt;
# Review the configuration and click &#039;&#039;&#039;Add&#039;&#039;&#039;. The new zvol appears in the selected zpool.&lt;br /&gt;
&lt;br /&gt;
After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.&lt;br /&gt;
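&lt;br /&gt;
The dialog fields map closely to standard OpenZFS zvol properties. A hedged CLI sketch using the dialog defaults (pool and zvol names are placeholders; the product may set additional properties internally):&lt;br /&gt;
&lt;br /&gt;
```shell
# Thin-provisioned (-s) 100 GiB zvol with the dialog defaults;
# "pool/vol1" is a placeholder name.
zfs create -s -V 100G \
    -o volblocksize=64K \
    -o compression=lz4 \
    -o sync=always \
    pool/vol1

# Omit -s for thick provisioning (the full size is reserved immediately).
```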
&lt;br /&gt;
=== Encryption settings ===&lt;br /&gt;
This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.&lt;br /&gt;
&lt;br /&gt;
==== Encrypt resource ====&lt;br /&gt;
Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.&lt;br /&gt;
&lt;br /&gt;
==== Encryption method ====&lt;br /&gt;
Defines the encryption algorithm used when the zvol is encrypted.&lt;br /&gt;
&lt;br /&gt;
* By default, the method is inherited from &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; (for example, aes-256-gcm). &lt;br /&gt;
* You can select a different supported method for this zvol if required by policy or performance.&lt;br /&gt;
&lt;br /&gt;
For information about keys, unlocking behaviour, and error handling, see the [[Encryption]] article.&lt;br /&gt;
&lt;br /&gt;
=== Zvol properties ===&lt;br /&gt;
These fields define the behaviour and performance characteristics of the zvol.&lt;br /&gt;
&lt;br /&gt;
==== Name ====&lt;br /&gt;
&lt;br /&gt;
* The zvol name must be unique within the pool. &lt;br /&gt;
* Allowed characters: a–z, A–Z, 0–9, period (.), underscore (_), and hyphen (-).&lt;br /&gt;
&lt;br /&gt;
Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.&lt;br /&gt;
&lt;br /&gt;
==== Size ====&lt;br /&gt;
&lt;br /&gt;
* Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB). &lt;br /&gt;
* The dialog shows the currently available physical space in the pool below the field.&lt;br /&gt;
&lt;br /&gt;
Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.&lt;br /&gt;
&lt;br /&gt;
==== Provisioning ====&lt;br /&gt;
Controls how space is allocated in the pool.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Thin provisioned (default)&#039;&#039;&#039;: Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space. &lt;br /&gt;
* &#039;&#039;&#039;Thick provisioned&#039;&#039;&#039;: The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.&lt;br /&gt;
&lt;br /&gt;
Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.&lt;br /&gt;
&lt;br /&gt;
==== Deduplication ====&lt;br /&gt;
Enables ZFS block-level deduplication for the zvol. Available options include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disabled (default)&#039;&#039;&#039; – deduplication is off. &lt;br /&gt;
* &#039;&#039;&#039;On&#039;&#039;&#039; – alias for sha256. &lt;br /&gt;
* &#039;&#039;&#039;Verify&#039;&#039;&#039; – alias for sha256,verify; performs an extra block comparison step. &lt;br /&gt;
* &#039;&#039;&#039;sha256&#039;&#039;&#039; – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy. &lt;br /&gt;
* &#039;&#039;&#039;sha256,verify&#039;&#039;&#039; – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.&lt;br /&gt;
&lt;br /&gt;
Use deduplication only when you expect a high ratio of repeated blocks (e.g., many similar VM images) and have sufficient RAM, because the deduplication table must be held in memory to perform well. For general-purpose workloads, leaving deduplication disabled is usually recommended. &lt;br /&gt;
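&lt;br /&gt;
The options above correspond to values of the OpenZFS dedup property. A hedged sketch with placeholder names:&lt;br /&gt;
&lt;br /&gt;
```shell
# "pool/vol1" is a placeholder name. Equivalent OpenZFS property values
# for the options above: off | on | verify | sha256 | sha256,verify
zfs set dedup=sha256,verify pool/vol1

# The achieved deduplication ratio is reported at pool level:
zpool get dedupratio pool
```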
&lt;br /&gt;
==== Number of data copies ====&lt;br /&gt;
Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.). &lt;br /&gt;
&lt;br /&gt;
* Allowed values: &#039;&#039;&#039;1 (default), 2, 3&#039;&#039;&#039;. &lt;br /&gt;
* When possible, copies are placed on different physical disks. &lt;br /&gt;
* Additional copies increase used space and count against pool capacity. &lt;br /&gt;
* Only new writes use the current setting; existing data keeps the number of copies that was in effect when it was written.&lt;br /&gt;
&lt;br /&gt;
Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency. &lt;br /&gt;
&lt;br /&gt;
==== Compression ====&lt;br /&gt;
Defines the on-the-fly compression algorithm for zvol data.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;lz4 (default)&#039;&#039;&#039; – high-performance, general-purpose method that is recommended for most workloads. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – disables compression. &lt;br /&gt;
* Additional algorithms in the list: &lt;br /&gt;
** gzip-1 … gzip-9 (higher levels compress more but are slower), &lt;br /&gt;
** lzjb, &lt;br /&gt;
** zle (effective mainly for blocks of zeros).&lt;br /&gt;
&lt;br /&gt;
Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and extremely latency-sensitive. &lt;br /&gt;
&lt;br /&gt;
==== Volume block size ====&lt;br /&gt;
Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.&lt;br /&gt;
&lt;br /&gt;
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB. &lt;br /&gt;
* Default value in the dialog: 64 KiB. &lt;br /&gt;
* The chosen size cannot be changed once meaningful data has been written.&lt;br /&gt;
&lt;br /&gt;
Guidelines:&lt;br /&gt;
&lt;br /&gt;
* Smaller blocks (e.g., 4–16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead. &lt;br /&gt;
* Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.&lt;br /&gt;
&lt;br /&gt;
Choose the block size based on the typical I/O pattern of the applications that will use the zvol.&lt;br /&gt;
&lt;br /&gt;
==== Write cache sync requests ====&lt;br /&gt;
Controls how synchronous write operations are handled for this zvol (ZFS sync property).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Always (default)&#039;&#039;&#039;: All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available. &lt;br /&gt;
* &#039;&#039;&#039;Standard&#039;&#039;&#039;: Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS. &lt;br /&gt;
* &#039;&#039;&#039;Disabled&#039;&#039;&#039;: Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS. &lt;br /&gt;
&lt;br /&gt;
==== Write cache sync request handling (logbias) ====&lt;br /&gt;
Provides a hint about how synchronous writes should use log devices (if present in the pool).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write log device (Latency)&#039;&#039;&#039;: If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads. &lt;br /&gt;
* &#039;&#039;&#039;In pool (Throughput)&#039;&#039;&#039;: Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.&lt;br /&gt;
&lt;br /&gt;
This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage. &lt;br /&gt;
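&lt;br /&gt;
Both of these settings map to standard OpenZFS properties. A hedged sketch with a placeholder zvol name:&lt;br /&gt;
&lt;br /&gt;
```shell
# "pool/vol1" is a placeholder name.
zfs set sync=standard pool/vol1     # always | standard | disabled
zfs set logbias=latency pool/vol1   # latency    = prefer the log device,
                                    # throughput = write directly in pool
```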
&lt;br /&gt;
==== Read cache (primary, ARC) scope ====&lt;br /&gt;
Specifies what is cached in the primary memory cache (ARC) for this zvol.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata; user data is read directly from disk. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache anything for this zvol in ARC. &lt;br /&gt;
&lt;br /&gt;
For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (secondary, L2ARC) scope ====&lt;br /&gt;
Controls use of secondary cache devices (L2ARC), typically SSDs.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both metadata and user data on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – exclude this zvol from L2ARC caching.&lt;br /&gt;
&lt;br /&gt;
Use &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039; for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads.&amp;lt;/onlyinclude&amp;gt; &lt;br /&gt;
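&lt;br /&gt;
The ARC and L2ARC scopes above correspond to the OpenZFS primarycache and secondarycache properties. A hedged sketch with a placeholder zvol name:&lt;br /&gt;
&lt;br /&gt;
```shell
# "pool/vol1" is a placeholder name. Both properties accept
# the same three values: all | metadata | none
zfs set primarycache=metadata pool/vol1   # ARC (RAM) scope
zfs set secondarycache=none pool/vol1     # L2ARC (SSD) scope
```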
&lt;br /&gt;
=== Attach to target ===&lt;br /&gt;
The &#039;&#039;&#039;Attach to target&#039;&#039;&#039; section at the bottom of the dialog allows you to export the newly created zvol as a LUN immediately. This section is optional; you can also attach the zvol later from the target configuration views.&lt;br /&gt;
&lt;br /&gt;
==== General behaviour ====&lt;br /&gt;
&lt;br /&gt;
* When the Attach to target checkbox is disabled, the zvol is created but not attached to any target. &lt;br /&gt;
* Enabling the checkbox expands the configuration panel and allows you to select or configure how the zvol will be presented to initiators.&lt;br /&gt;
&lt;br /&gt;
==== Fields ====&lt;br /&gt;
&#039;&#039;&#039;Target name&#039;&#039;&#039;: Select an existing target from the drop-down list. The zvol will be attached as a new LUN under this target.&lt;br /&gt;
&lt;br /&gt;
==== SCSI ID ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Automatic&#039;&#039;&#039; – uses an automatically generated SCSI identifier. &lt;br /&gt;
* &#039;&#039;&#039;Generate&#039;&#039;&#039; – creates a new random identifier if you need to control or refresh the ID.&lt;br /&gt;
&lt;br /&gt;
In most cases, leaving the default automatic value is sufficient.&lt;br /&gt;
&lt;br /&gt;
==== LUN ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;automatic&#039;&#039;&#039; – assigns the next available LUN number on the selected target. &lt;br /&gt;
* &#039;&#039;&#039;manual entry&#039;&#039;&#039; – specify a particular LUN number if your environment uses a specific numbering scheme; the number must not already be in use on that target.&lt;br /&gt;
&lt;br /&gt;
==== Write cache settings ====&lt;br /&gt;
Defines how write caching is presented to the initiator for this LUN.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write-through --- Block I/O (default)&#039;&#039;&#039;: All writes are committed directly to stable storage before completion is reported to the initiator. This mode prioritizes data integrity and is recommended for most environments. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- Block I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the Block I/O path. Any write attempts from the initiator are rejected. Use this option for volumes that must not be modified. &lt;br /&gt;
* &#039;&#039;&#039;Write-through --- File I/O&#039;&#039;&#039;: Similar to Write-through --- Block I/O, but handled through the File I/O path. Writes are acknowledged only after they are safely stored on disk. &lt;br /&gt;
* &#039;&#039;&#039;Write-back --- File I/O&#039;&#039;&#039;: Enables write-back caching on the File I/O path. Write requests are acknowledged after being stored in cache rather than on disk, which provides the highest write performance. However, cached data may be lost in case of a power failure or node crash (this risk exists even in HA cluster configurations), and resource failover can take noticeably longer. Use this option only when the environment can tolerate potential data loss and when additional protection (e.g., a battery-backed cache and a reliable UPS) is in place. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- File I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the File I/O path. Use when the initiator must have read access only, for example, for archival or reference datasets.&lt;br /&gt;
&lt;br /&gt;
==== TRIM support ====&lt;br /&gt;
&lt;br /&gt;
* When enabled, the zvol honors TRIM/UNMAP requests from the initiator so that released blocks can be returned to the pool. &lt;br /&gt;
* Use this option only when the operating system and initiator software fully support TRIM for the relevant protocol. &lt;br /&gt;
&lt;br /&gt;
TRIM can improve space efficiency for thin-provisioned zvols, but misconfigured initiators or unsupported combinations may cause unexpected behaviour.&lt;br /&gt;
&lt;br /&gt;
After you confirm the configuration and click Add, the zvol is created with the specified properties. If attachment is enabled, the zvol is also exposed as a LUN on the selected target and becomes available to connected initiators after they rescan their devices.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Add_FC_volume&amp;diff=12378</id>
		<title>Add FC volume</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Add_FC_volume&amp;diff=12378"/>
		<updated>2026-01-15T08:07:28Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__{{:Add new zvol}}&lt;br /&gt;
=== Attach to Fibre Channel groups ===&lt;br /&gt;
This section allows you to assign the new zvol to one or more Fibre Channel (FC) groups and control how it will be presented to FC initiators.&lt;br /&gt;
&lt;br /&gt;
==== FC membership properties ====&lt;br /&gt;
&lt;br /&gt;
===== SCSI ID =====&lt;br /&gt;
A unique identifier of a device.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Automatic&#039;&#039;&#039; – a SCSI identifier is generated automatically. &lt;br /&gt;
* &#039;&#039;&#039;Generate&#039;&#039;&#039; – creates a new random SCSI identifier.&lt;br /&gt;
&lt;br /&gt;
In most cases, leaving the value set to &#039;&#039;&#039;automatic&#039;&#039;&#039; is sufficient.&lt;br /&gt;
&lt;br /&gt;
===== Write cache settings =====&lt;br /&gt;
Defines how write caching is exposed for this FC LUN:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write-through --- Block I/O (default)&#039;&#039;&#039; – write requests are completed only after data is safely stored on disk. Recommended for most environments. &lt;br /&gt;
* &#039;&#039;&#039;Write-through --- File I/O&#039;&#039;&#039; – write-through behaviour using the File I/O path. &lt;br /&gt;
* &#039;&#039;&#039;Write-back --- File I/O&#039;&#039;&#039; – enables write-back caching on the File I/O path. This provides the highest write performance, but cached data can be lost in case of a power outage or node failure (even in HA cluster mode). Use only when the environment can tolerate potential data loss and when appropriate protection such as UPS and battery-backed cache is in place. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- File I/O&#039;&#039;&#039; – exposes the LUN as read-only over File I/O. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- Block I/O&#039;&#039;&#039; – exposes the LUN as read-only over Block I/O.&lt;br /&gt;
&lt;br /&gt;
===== TRIM support =====&lt;br /&gt;
When enabled, allows the FC LUN to accept TRIM/UNMAP commands, returning freed blocks to the pool. Use this option only when the initiator and operating system fully support TRIM over Fibre Channel.&lt;br /&gt;
&lt;br /&gt;
===== FC groups =====&lt;br /&gt;
The &#039;&#039;&#039;FC groups&#039;&#039;&#039; table lists all configured Fibre Channel groups.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Alias&#039;&#039;&#039; – name of the FC group. Select the check box to assign the zvol to that group. &lt;br /&gt;
* &#039;&#039;&#039;LUN&#039;&#039;&#039; – LUN number under which the zvol will be exposed in the selected group. &lt;br /&gt;
** Auto-assigns the next available LUN number. &lt;br /&gt;
** If manual entry is allowed, you can type a specific LUN number that is free within that group.&lt;br /&gt;
&lt;br /&gt;
If no FC group is selected, the zvol will not be available over Fibre Channel and can be assigned later from the FC configuration view.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Add_new_zvol&amp;diff=12376</id>
		<title>Add new zvol</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Add_new_zvol&amp;diff=12376"/>
		<updated>2026-01-15T08:02:53Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&amp;lt;onlyinclude&amp;gt;A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported as a LUN to hosts over iSCSI, Fibre Channel, or NVMe-oF. Zvols are usually used to:&lt;br /&gt;
&lt;br /&gt;
* Provide block storage for virtual machines, databases, or other applications that expect a disk device. &lt;br /&gt;
* Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings). &lt;br /&gt;
* Control how space is consumed by different applications or tenants at the pool level.&amp;lt;/onlyinclude&amp;gt; &lt;br /&gt;
&lt;br /&gt;
 You first create a zvol, and then (optionally) attach it to a target to make it available to hosts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;onlyinclude&amp;gt;== Creating a zvol ==&lt;br /&gt;
&lt;br /&gt;
# Go to the zpool management view in the GUI. &lt;br /&gt;
# Select and expand the zpool in which you want to create the zvol. &lt;br /&gt;
# Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section. &lt;br /&gt;
# Click &#039;&#039;&#039;Add zvol&#039;&#039;&#039; to open the &#039;&#039;&#039;Add new zvol&#039;&#039;&#039; dialog. &lt;br /&gt;
# Configure &#039;&#039;&#039;Encryption settings&#039;&#039;&#039; and &#039;&#039;&#039;Zvol properties&#039;&#039;&#039;, and optionally attach the zvol to an &#039;&#039;&#039;iSCSI target&#039;&#039;&#039; or &#039;&#039;&#039;NVMe-oF subsystem&#039;&#039;&#039;, or assign it to &#039;&#039;&#039;FC groups&#039;&#039;&#039;. &lt;br /&gt;
# Review the configuration and click &#039;&#039;&#039;Add&#039;&#039;&#039;. The new zvol appears in the selected zpool.&lt;br /&gt;
&lt;br /&gt;
After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.&lt;br /&gt;
&lt;br /&gt;
=== Encryption settings ===&lt;br /&gt;
This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.&lt;br /&gt;
&lt;br /&gt;
==== Encrypt resource ====&lt;br /&gt;
Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.&lt;br /&gt;
&lt;br /&gt;
==== Encryption method ====&lt;br /&gt;
Defines the encryption algorithm used when the zvol is encrypted.&lt;br /&gt;
&lt;br /&gt;
* By default, the method is inherited from &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; (for example, aes-256-gcm). &lt;br /&gt;
* You can select a different supported method for this zvol if required by policy or performance.&lt;br /&gt;
&lt;br /&gt;
For information about keys, unlocking behaviour, and error handling, see the [[Encryption]] article.&lt;br /&gt;
&lt;br /&gt;
=== Zvol properties ===&lt;br /&gt;
These fields define the behaviour and performance characteristics of the zvol.&lt;br /&gt;
&lt;br /&gt;
==== Name ====&lt;br /&gt;
&lt;br /&gt;
* The zvol name must be unique within the pool. &lt;br /&gt;
* Allowed characters: a–z, A–Z, 0–9, period (.), underscore (_), and hyphen (-).&lt;br /&gt;
&lt;br /&gt;
Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.&lt;br /&gt;
&lt;br /&gt;
==== Size ====&lt;br /&gt;
&lt;br /&gt;
* Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB). &lt;br /&gt;
* The dialog shows the currently available physical space in the pool below the field.&lt;br /&gt;
&lt;br /&gt;
Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.&lt;br /&gt;
&lt;br /&gt;
==== Provisioning ====&lt;br /&gt;
Controls how space is allocated in the pool.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Thin provisioned (default)&#039;&#039;&#039;: Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space. &lt;br /&gt;
* &#039;&#039;&#039;Thick provisioned&#039;&#039;&#039;: The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.&lt;br /&gt;
&lt;br /&gt;
Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.&lt;br /&gt;
&lt;br /&gt;
==== Deduplication ====&lt;br /&gt;
Enables ZFS block-level deduplication for the zvol. Available options include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disabled (default)&#039;&#039;&#039; – deduplication is off. &lt;br /&gt;
* &#039;&#039;&#039;On&#039;&#039;&#039; – alias for sha256. &lt;br /&gt;
* &#039;&#039;&#039;Verify&#039;&#039;&#039; – alias for sha256,verify; performs an extra block comparison step. &lt;br /&gt;
* &#039;&#039;&#039;sha256&#039;&#039;&#039; – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy. &lt;br /&gt;
* &#039;&#039;&#039;sha256,verify&#039;&#039;&#039; – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.&lt;br /&gt;
&lt;br /&gt;
Use deduplication only when you expect a high ratio of repeated blocks (e.g., many similar VM images) and have sufficient RAM, because the deduplication table must be held in memory to perform well. For general-purpose workloads, leaving deduplication disabled is usually recommended. &lt;br /&gt;
&lt;br /&gt;
==== Number of data copies ====&lt;br /&gt;
Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.). &lt;br /&gt;
&lt;br /&gt;
* Allowed values: &#039;&#039;&#039;1 (default), 2, 3&#039;&#039;&#039;. &lt;br /&gt;
* When possible, copies are placed on different physical disks. &lt;br /&gt;
* Additional copies increase used space and count against pool capacity. &lt;br /&gt;
* Only new writes use the current setting; existing data keeps the number of copies that was in effect when it was written.&lt;br /&gt;
&lt;br /&gt;
Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency. &lt;br /&gt;
&lt;br /&gt;
==== Compression ====&lt;br /&gt;
Defines the on-the-fly compression algorithm for zvol data.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;lz4 (default)&#039;&#039;&#039; – high-performance, general-purpose method that is recommended for most workloads. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – disables compression. &lt;br /&gt;
* Additional algorithms in the list: &lt;br /&gt;
** gzip-1 … gzip-9 (higher levels compress more but are slower), &lt;br /&gt;
** lzjb, &lt;br /&gt;
** zle (effective mainly for blocks of zeros).&lt;br /&gt;
&lt;br /&gt;
Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and the workload is extremely latency-sensitive. &lt;br /&gt;
&lt;br /&gt;
==== Volume block size ====&lt;br /&gt;
Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.&lt;br /&gt;
&lt;br /&gt;
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB. &lt;br /&gt;
* Default value in the dialog: 64 KiB. &lt;br /&gt;
* The chosen size cannot be changed once meaningful data has been written.&lt;br /&gt;
&lt;br /&gt;
Guidelines:&lt;br /&gt;
&lt;br /&gt;
* Smaller blocks (e.g., 4–16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead. &lt;br /&gt;
* Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.&lt;br /&gt;
&lt;br /&gt;
Choose the block size based on the typical I/O pattern of the applications that will use the zvol.&lt;br /&gt;
&lt;br /&gt;
==== Write cache sync requests ====&lt;br /&gt;
Controls how synchronous write operations are handled for this zvol (ZFS sync property).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Always (default)&#039;&#039;&#039;: All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available. &lt;br /&gt;
* &#039;&#039;&#039;Standard&#039;&#039;&#039;: Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS. &lt;br /&gt;
* &#039;&#039;&#039;Disabled&#039;&#039;&#039;: Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS. &lt;br /&gt;
&lt;br /&gt;
==== Write cache sync request handling (logbias) ====&lt;br /&gt;
Provides a hint about how synchronous writes should use log devices (if present in the pool).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write log device (Latency)&#039;&#039;&#039;: If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads. &lt;br /&gt;
* &#039;&#039;&#039;In pool (Throughput)&#039;&#039;&#039;: Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.&lt;br /&gt;
&lt;br /&gt;
This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (primary, ARC) scope ====&lt;br /&gt;
Specifies what is cached in the primary memory cache (ARC) for this zvol.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata; user data is read directly from disk. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache anything for this zvol in ARC. &lt;br /&gt;
&lt;br /&gt;
For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (secondary, L2ARC) scope ====&lt;br /&gt;
Controls use of secondary cache devices (L2ARC), typically SSDs.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both metadata and user data on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – exclude this zvol from L2ARC caching.&lt;br /&gt;
&lt;br /&gt;
Use &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039; for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads. &lt;br /&gt;
&lt;br /&gt;
=== Attach to target ===&lt;br /&gt;
The &#039;&#039;&#039;Attach to target&#039;&#039;&#039; section at the bottom of the dialog allows you to export the newly created zvol as a LUN immediately. This section is optional; you can also attach the zvol later from the target configuration views.&lt;br /&gt;
&lt;br /&gt;
==== General behaviour ====&lt;br /&gt;
&lt;br /&gt;
* When the Attach to target checkbox is disabled, the zvol is created but not attached to any target. &lt;br /&gt;
* Enabling the checkbox expands the configuration panel and allows you to select or configure how the zvol will be presented to initiators.&lt;br /&gt;
&lt;br /&gt;
==== Fields ====&lt;br /&gt;
&#039;&#039;&#039;Target name&#039;&#039;&#039;: Select an existing target from the drop-down list. The zvol will be attached as a new LUN under this target.&lt;br /&gt;
&lt;br /&gt;
==== SCSI ID ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Automatic&#039;&#039;&#039; – uses an automatically generated SCSI identifier. &lt;br /&gt;
* &#039;&#039;&#039;Generate&#039;&#039;&#039; – creates a new random identifier if you need to control or refresh the ID.&lt;br /&gt;
&lt;br /&gt;
In most cases, leaving the default automatic value is sufficient.&lt;br /&gt;
&lt;br /&gt;
==== LUN ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;automatic&#039;&#039;&#039; – assigns the next available LUN number on the selected target. &lt;br /&gt;
* &#039;&#039;&#039;manual entry&#039;&#039;&#039; – specify a particular LUN number if your environment uses a specific numbering scheme; the number must not already be in use on that target.&lt;br /&gt;
&lt;br /&gt;
==== Write cache settings ====&lt;br /&gt;
Defines how write caching is presented to the initiator for this LUN.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write-through --- Block I/O (default)&#039;&#039;&#039;: All writes are committed directly to stable storage before completion is reported to the initiator. This mode prioritizes data integrity and is recommended for most environments. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- Block I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the Block I/O path. Any write attempts from the initiator are rejected. Use this option for volumes that must not be modified. &lt;br /&gt;
* &#039;&#039;&#039;Write-through --- File I/O&#039;&#039;&#039;: Similar to Write-through --- Block I/O, but handled through the File I/O path. Writes are acknowledged only after they are safely stored on disk. &lt;br /&gt;
* &#039;&#039;&#039;Write-back --- File I/O&#039;&#039;&#039;: Enables write-back caching on the File I/O path. Write requests are acknowledged after being stored in cache rather than on disk, which provides the highest write performance. However, cached data may be lost in case of a power failure or node crash (this risk exists even in HA cluster configurations), and resource failover can take noticeably longer. Use this option only when the environment can tolerate potential data loss and when additional protection (e.g., a battery-backed cache and a reliable UPS) is in place. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- File I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the File I/O path. Use when the initiator must have read access only, for example, for archival or reference datasets.&lt;br /&gt;
&lt;br /&gt;
==== TRIM support ====&lt;br /&gt;
&lt;br /&gt;
* When enabled, the zvol honors TRIM / UNMAP requests from the initiator so that released blocks can be returned to the pool. &lt;br /&gt;
* Use this option only when the operating system and initiator software fully support TRIM for the relevant protocol. &lt;br /&gt;
&lt;br /&gt;
TRIM can improve space efficiency for thin-provisioned zvols, but misconfigured initiators or unsupported combinations may cause unexpected behaviour.&lt;br /&gt;
&lt;br /&gt;
After you confirm the configuration and click Add, the zvol is created with the specified properties. If attachment is enabled, the zvol is also exposed as a LUN on the selected target and becomes available to connected initiators after they rescan their devices.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Add_new_zvol&amp;diff=12375</id>
		<title>Add new zvol</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Add_new_zvol&amp;diff=12375"/>
		<updated>2026-01-15T07:58:32Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported as a LUN to hosts over iSCSI, Fibre Channel, or NVMe-oF. Zvols are usually used to:&lt;br /&gt;
&lt;br /&gt;
* Provide block storage for virtual machines, databases, or other applications that expect a disk device. &lt;br /&gt;
* Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings). &lt;br /&gt;
* Control how space is consumed by different applications or tenants at the pool level. &lt;br /&gt;
&lt;br /&gt;
You first create a zvol, and then (optionally) attach it to a target to make it available to hosts.&lt;br /&gt;
&lt;br /&gt;
== Creating a zvol ==&lt;br /&gt;
&lt;br /&gt;
# Go to the zpool management view in the GUI. &lt;br /&gt;
# Select and expand the zpool in which you want to create the zvol. &lt;br /&gt;
# Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section. &lt;br /&gt;
# Click &#039;&#039;&#039;Add zvol&#039;&#039;&#039; to open the &#039;&#039;&#039;Add new zvol&#039;&#039;&#039; dialog. &lt;br /&gt;
# Configure &#039;&#039;&#039;Encryption settings&#039;&#039;&#039; and &#039;&#039;&#039;Zvol properties&#039;&#039;&#039;, and optionally attach the zvol to an &#039;&#039;&#039;iSCSI target&#039;&#039;&#039; or &#039;&#039;&#039;NVMe-oF subsystem&#039;&#039;&#039;, or assign it to &#039;&#039;&#039;FC groups&#039;&#039;&#039;. &lt;br /&gt;
# Review the configuration and click &#039;&#039;&#039;Add&#039;&#039;&#039;. The new zvol appears in the selected zpool.&lt;br /&gt;
&lt;br /&gt;
After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.&lt;br /&gt;
&lt;br /&gt;
=== Encryption settings ===&lt;br /&gt;
This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.&lt;br /&gt;
&lt;br /&gt;
==== Encrypt resource ====&lt;br /&gt;
Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.&lt;br /&gt;
&lt;br /&gt;
==== Encryption method ====&lt;br /&gt;
Defines the encryption algorithm used when the zvol is encrypted.&lt;br /&gt;
&lt;br /&gt;
* By default, the method is inherited from &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; (for example, aes-256-gcm). &lt;br /&gt;
* You can select a different supported method for this zvol if required by policy or performance.&lt;br /&gt;
&lt;br /&gt;
For information about keys, unlocking behaviour, and error handling, see the [[Encryption]] article.&lt;br /&gt;
&lt;br /&gt;
=== Zvol properties ===&lt;br /&gt;
These fields define the behaviour and performance characteristics of the zvol.&lt;br /&gt;
&lt;br /&gt;
==== Name ====&lt;br /&gt;
&lt;br /&gt;
* The zvol name must be unique within the pool. &lt;br /&gt;
* Allowed characters: a–z  A–Z  0–9  .  _  -&lt;br /&gt;
&lt;br /&gt;
Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.&lt;br /&gt;
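The allowed-character rule above can be checked up front, for example in a provisioning script. The sketch below is purely illustrative (not a product API); the pattern simply mirrors the characters listed above.&lt;br /&gt;

```python
import re

# Mirrors the documented rule: letters, digits, dot, underscore, hyphen.
ZVOL_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def valid_zvol_name(name: str) -> bool:
    return bool(ZVOL_NAME.match(name))

print(valid_zvol_name("vm-images_01"))  # True
print(valid_zvol_name("bad name!"))     # False (space and ! are not allowed)
```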
&lt;br /&gt;
==== Size ====&lt;br /&gt;
&lt;br /&gt;
* Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB). &lt;br /&gt;
* The dialog shows the currently available physical space in the pool below the field.&lt;br /&gt;
&lt;br /&gt;
Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.&lt;br /&gt;
&lt;br /&gt;
==== Provisioning ====&lt;br /&gt;
Controls how space is allocated in the pool.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Thin provisioned (default)&#039;&#039;&#039;: Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space. &lt;br /&gt;
* &#039;&#039;&#039;Thick provisioned&#039;&#039;&#039;: The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.&lt;br /&gt;
&lt;br /&gt;
Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.&lt;br /&gt;
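To illustrate the overcommitment risk with thin provisioning, the following sketch (plain Python with hypothetical numbers) compares the total logical size of thin zvols against the pool&#039;s free space:&lt;br /&gt;

```python
def overcommit_ratio(zvol_sizes_bytes, pool_free_bytes):
    """Return total logical zvol size divided by free pool space."""
    return sum(zvol_sizes_bytes) / pool_free_bytes

GiB = 1024 ** 3
# Three thin zvols of 500 GiB each against 1 TiB of free space:
ratio = overcommit_ratio([500 * GiB] * 3, 1024 * GiB)
print(f"overcommit ratio: {ratio:.2f}")  # 1.46
if ratio > 1.0:
    print("warning: pool is overcommitted; monitor free space")
```

A ratio above 1.0 means the pool cannot hold all zvols if they fill up, which is exactly the situation that requires monitoring.&lt;br /&gt;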
&lt;br /&gt;
==== Deduplication ====&lt;br /&gt;
Enables ZFS block-level deduplication for the zvol. Available options include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disabled (default)&#039;&#039;&#039; – deduplication is off. &lt;br /&gt;
* &#039;&#039;&#039;On&#039;&#039;&#039; – alias for sha256. &lt;br /&gt;
* &#039;&#039;&#039;Verify&#039;&#039;&#039; – alias for sha256,verify; performs an extra block comparison step. &lt;br /&gt;
* &#039;&#039;&#039;sha256&#039;&#039;&#039; – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy. &lt;br /&gt;
* &#039;&#039;&#039;sha256,verify&#039;&#039;&#039; – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.&lt;br /&gt;
&lt;br /&gt;
Use deduplication only when you expect a high ratio of repeated blocks and have sufficient RAM (e.g., many similar VM images). For general-purpose workloads, leaving deduplication disabled is usually recommended. &lt;br /&gt;
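The RAM requirement can be estimated before enabling deduplication. The sketch below uses the commonly cited figure of roughly 320 bytes of dedup-table metadata per unique block; this is a general ZFS rule of thumb, not a product-specific value:&lt;br /&gt;

```python
def ddt_ram_estimate_bytes(pool_used_bytes, volblocksize_bytes, bytes_per_entry=320):
    # One dedup-table entry per unique block; 320 bytes per in-core
    # entry is a widely used approximation (assumption, not exact).
    blocks = pool_used_bytes // volblocksize_bytes
    return blocks * bytes_per_entry

TiB, KiB = 1024 ** 4, 1024
est = ddt_ram_estimate_bytes(10 * TiB, 64 * KiB)
print(f"~{est / 1024 ** 3:.1f} GiB of RAM for the dedup table")  # ~50.0 GiB
```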
&lt;br /&gt;
==== Number of data copies ====&lt;br /&gt;
Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.). &lt;br /&gt;
&lt;br /&gt;
* Allowed values: &#039;&#039;&#039;1 (default), 2, 3&#039;&#039;&#039;. &lt;br /&gt;
* When possible, copies are placed on different physical disks. &lt;br /&gt;
* Additional copies increase used space and count against pool capacity. &lt;br /&gt;
* Only new writes use the current setting; existing data keeps the number of copies that was in effect when it was written.&lt;br /&gt;
&lt;br /&gt;
Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency. &lt;br /&gt;
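The space cost of extra copies (and the offsetting effect of compression) is simple arithmetic; the helper below is an illustrative estimate only, since real usage also includes metadata overhead:&lt;br /&gt;

```python
def physical_estimate(logical_bytes, copies=1, compress_ratio=1.0):
    """Rough physical usage: extra copies multiply, compression divides."""
    return logical_bytes * copies / compress_ratio

GiB = 1024 ** 3
# 100 GiB zvol with copies=2 consumes about twice the space:
print(physical_estimate(100 * GiB, copies=2) / GiB)                      # 200.0
# A 2:1 compression ratio cancels that doubling out:
print(physical_estimate(100 * GiB, copies=2, compress_ratio=2.0) / GiB)  # 100.0
```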
&lt;br /&gt;
==== Compression ====&lt;br /&gt;
Defines the on-the-fly compression algorithm for zvol data.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;lz4 (default)&#039;&#039;&#039; – high-performance, general-purpose method that is recommended for most workloads. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – disables compression. &lt;br /&gt;
* Additional algorithms in the list: &lt;br /&gt;
** gzip-1 … gzip-9 (higher levels compress more but are slower), &lt;br /&gt;
** lzjb, &lt;br /&gt;
** zle (effective mainly for blocks of zeros).&lt;br /&gt;
&lt;br /&gt;
Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and the workload is extremely latency-sensitive. &lt;br /&gt;
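The gzip level trade-off can be demonstrated with Python&#039;s zlib, which implements the same DEFLATE algorithm that gzip uses. This only illustrates the level semantics, not the product&#039;s internal implementation:&lt;br /&gt;

```python
import zlib

# Highly repetitive sample data compresses well at any level.
data = b"open-e zvol documentation " * 4000

for level in (1, 6, 9):
    out = zlib.compress(data, level)
    # Higher levels trade CPU time for (usually) smaller output.
    print(f"gzip-like level {level}: {len(out)} bytes (from {len(data)})")
```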
&lt;br /&gt;
==== Volume block size ====&lt;br /&gt;
Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.&lt;br /&gt;
&lt;br /&gt;
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB. &lt;br /&gt;
* Default value in the dialog: 64 KiB. &lt;br /&gt;
* The chosen size cannot be changed once meaningful data has been written.&lt;br /&gt;
&lt;br /&gt;
Guidelines:&lt;br /&gt;
&lt;br /&gt;
* Smaller blocks (e.g., 4–16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead. &lt;br /&gt;
* Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.&lt;br /&gt;
&lt;br /&gt;
Choose the block size based on the typical I/O pattern of the applications that will use the zvol.&lt;br /&gt;
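One way to reason about this trade-off: a write smaller than the volume block size forces a read-modify-write of the whole block. A small arithmetic sketch (illustrative only):&lt;br /&gt;

```python
def write_amplification(io_bytes, volblock_bytes):
    """A sub-block write touches the full block; larger writes do not."""
    if io_bytes >= volblock_bytes:
        return 1.0
    return volblock_bytes / io_bytes

KiB = 1024
print(write_amplification(8 * KiB, 64 * KiB))    # 8.0: 8 KiB random writes on a 64 KiB zvol
print(write_amplification(128 * KiB, 64 * KiB))  # 1.0: large sequential writes
```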
&lt;br /&gt;
==== Write cache sync requests ====&lt;br /&gt;
Controls how synchronous write operations are handled for this zvol (ZFS sync property).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Always (default)&#039;&#039;&#039;: All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available. &lt;br /&gt;
* &#039;&#039;&#039;Standard&#039;&#039;&#039;: Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS. &lt;br /&gt;
* &#039;&#039;&#039;Disabled&#039;&#039;&#039;: Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS. &lt;br /&gt;
&lt;br /&gt;
==== Write cache sync request handling (logbias) ====&lt;br /&gt;
Provides a hint about how synchronous writes should use log devices (if present in the pool).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write log device (Latency)&#039;&#039;&#039;: If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads. &lt;br /&gt;
* &#039;&#039;&#039;In pool (Throughput)&#039;&#039;&#039;: Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.&lt;br /&gt;
&lt;br /&gt;
This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (primary, ARC) scope ====&lt;br /&gt;
Specifies what is cached in the primary memory cache (ARC) for this zvol.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata; user data is read directly from disk. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache anything for this zvol in ARC. &lt;br /&gt;
&lt;br /&gt;
For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (secondary, L2ARC) scope ====&lt;br /&gt;
Controls use of secondary cache devices (L2ARC), typically SSDs.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both metadata and user data on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – exclude this zvol from L2ARC caching.&lt;br /&gt;
&lt;br /&gt;
Use &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039; for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads. &lt;br /&gt;
&lt;br /&gt;
=== Attach to target ===&lt;br /&gt;
The &#039;&#039;&#039;Attach to target&#039;&#039;&#039; section at the bottom of the dialog allows you to export the newly created zvol as a LUN immediately. This section is optional; you can also attach the zvol later from the target configuration views.&lt;br /&gt;
&lt;br /&gt;
==== General behaviour ====&lt;br /&gt;
&lt;br /&gt;
* When the Attach to target checkbox is disabled, the zvol is created but not attached to any target. &lt;br /&gt;
* Enabling the checkbox expands the configuration panel and allows you to select or configure how the zvol will be presented to initiators.&lt;br /&gt;
&lt;br /&gt;
==== Fields ====&lt;br /&gt;
&#039;&#039;&#039;Target name&#039;&#039;&#039;: Select an existing target from the drop-down list. The zvol will be attached as a new LUN under this target.&lt;br /&gt;
&lt;br /&gt;
==== SCSI ID ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Automatic&#039;&#039;&#039; – uses an automatically generated SCSI identifier. &lt;br /&gt;
* &#039;&#039;&#039;Generate&#039;&#039;&#039; – creates a new random identifier if you need to control or refresh the ID.&lt;br /&gt;
&lt;br /&gt;
In most cases, leaving the default automatic value is sufficient.&lt;br /&gt;
&lt;br /&gt;
==== LUN ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;automatic&#039;&#039;&#039; – assigns the next available LUN number on the selected target. &lt;br /&gt;
* &#039;&#039;&#039;manual entry&#039;&#039;&#039; – specify a particular LUN number if your environment uses a specific numbering scheme; the number must not already be in use on that target.&lt;br /&gt;
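The automatic behaviour can be pictured as picking the lowest LUN number not yet in use on the target, as in this hypothetical sketch:&lt;br /&gt;

```python
def next_free_lun(used_luns):
    """Mimic 'automatic' assignment: lowest LUN number not in use."""
    lun = 0
    while lun in used_luns:
        lun += 1
    return lun

print(next_free_lun({0, 1, 3}))  # 2 (the gap is filled first)
print(next_free_lun(set()))      # 0 (empty target)
```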
&lt;br /&gt;
==== Write cache settings ====&lt;br /&gt;
Defines how write caching is presented to the initiator for this LUN.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write-through --- Block I/O (default)&#039;&#039;&#039;: All writes are committed directly to stable storage before completion is reported to the initiator. This mode prioritizes data integrity and is recommended for most environments. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- Block I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the Block I/O path. Any write attempts from the initiator are rejected. Use this option for volumes that must not be modified. &lt;br /&gt;
* &#039;&#039;&#039;Write-through --- File I/O&#039;&#039;&#039;: Similar to Write-through --- Block I/O, but handled through the File I/O path. Writes are acknowledged only after they are safely stored on disk. &lt;br /&gt;
* &#039;&#039;&#039;Write-back --- File I/O&#039;&#039;&#039;: Enables write-back caching on the File I/O path. Write requests are acknowledged after being stored in cache rather than on disk, which provides the highest write performance. However, cached data may be lost in case of a power failure or node crash (this risk exists even in HA cluster configurations), and resource failover can take noticeably longer. Use this option only when the environment can tolerate potential data loss and when additional protection (e.g., a battery-backed cache and a reliable UPS) is in place. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- File I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the File I/O path. Use when the initiator must have read access only, for example, for archival or reference datasets.&lt;br /&gt;
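The difference between write-through and write-back acknowledgement can be modelled with a toy example. The class below is purely illustrative and has nothing to do with the product&#039;s actual I/O path; it only shows why acknowledged data can vanish in write-back mode:&lt;br /&gt;

```python
class ToyLun:
    """Toy model: when is a write considered stable at acknowledge time?"""

    def __init__(self, write_back):
        self.write_back = write_back
        self.cache, self.disk = [], []

    def write(self, block):
        # Both modes acknowledge once this method returns.
        self.cache.append(block)
        if not self.write_back:
            self.flush()  # write-through: stable before acknowledging

    def flush(self):
        self.disk.extend(self.cache)
        self.cache.clear()

    def power_loss(self):
        self.cache.clear()  # volatile cache contents are lost

wb, wt = ToyLun(write_back=True), ToyLun(write_back=False)
for lun in (wb, wt):
    lun.write("acknowledged-data")
    lun.power_loss()
print("write-back kept:", wb.disk)     # []: acknowledged data was lost
print("write-through kept:", wt.disk)  # ['acknowledged-data']
```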
&lt;br /&gt;
==== TRIM support ====&lt;br /&gt;
&lt;br /&gt;
* When enabled, the zvol honors TRIM / UNMAP requests from the initiator so that released blocks can be returned to the pool. &lt;br /&gt;
* Use this option only when the operating system and initiator software fully support TRIM for the relevant protocol. &lt;br /&gt;
&lt;br /&gt;
TRIM can improve space efficiency for thin-provisioned zvols, but misconfigured initiators or unsupported combinations may cause unexpected behaviour.&lt;br /&gt;
&lt;br /&gt;
After you confirm the configuration and click Add, the zvol is created with the specified properties. If attachment is enabled, the zvol is also exposed as a LUN on the selected target and becomes available to connected initiators after they rescan their devices.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Dataset&amp;diff=12374</id>
		<title>Dataset</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Dataset&amp;diff=12374"/>
		<updated>2026-01-15T07:52:58Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__A &#039;&#039;&#039;dataset&#039;&#039;&#039; is a ZFS file system created inside a ZFS pool (zpool). In this documentation, dataset always refers to a file-system resource, typically used as NAS storage for SMB / NFS shares.&lt;br /&gt;
&lt;br /&gt;
Datasets are typically used to: &lt;br /&gt;
&lt;br /&gt;
* Provide NAS volumes for SMB and NFS shares.&lt;br /&gt;
* Separate data sets with different performance or data-protection policies (for example, different compression, recordsize, or deduplication settings).&lt;br /&gt;
* Apply independent quota and reservation limits for different workloads or tenants.&lt;br /&gt;
&lt;br /&gt;
You create a dataset first, then create shares (SMB / NFS, etc.) that point to it.&lt;br /&gt;
&lt;br /&gt;
== Creating a dataset ==&lt;br /&gt;
&lt;br /&gt;
# Go to the pool management view in the GUI. &lt;br /&gt;
# Select and expand the zpool where you want to create the dataset. &lt;br /&gt;
# Navigate to the &#039;&#039;&#039;Shares&#039;&#039;&#039; section for the selected zpool. &lt;br /&gt;
# Click &#039;&#039;&#039;Add dataset&#039;&#039;&#039; to open the dataset creation dialog. &lt;br /&gt;
# Configure the parameters.&lt;br /&gt;
# Review and confirm creation. The new dataset appears in the dataset list for the selected zpool. &lt;br /&gt;
&lt;br /&gt;
After creation, you can:&lt;br /&gt;
&lt;br /&gt;
* Assign SMB or NFS shares to the dataset in the appropriate shares configuration pages. &lt;br /&gt;
* Adjust most dataset properties later; however, &#039;&#039;&#039;encryption settings&#039;&#039;&#039; and some layout-related properties cannot be changed after creation.&lt;br /&gt;
&lt;br /&gt;
Below is a description of all dataset properties.&lt;br /&gt;
&lt;br /&gt;
=== Encryption settings ===&lt;br /&gt;
This section is displayed at the top of the dialog. &#039;&#039;&#039;Encryption can be enabled only during dataset creation and cannot be disabled later&#039;&#039;&#039;; once the dataset is created, you cannot turn encryption off for it.&lt;br /&gt;
&lt;br /&gt;
==== Encrypt resource ====&lt;br /&gt;
Enable this switch to create an encrypted dataset. When disabled, the dataset is created unencrypted.&lt;br /&gt;
&lt;br /&gt;
==== Encryption method ====&lt;br /&gt;
Shows the algorithm used when the dataset is encrypted.&lt;br /&gt;
&lt;br /&gt;
* By default, it inherits the value from the Configuration → Resource encryption setting (for example, aes-256-gcm).&lt;br /&gt;
* You can select a different supported method for this dataset.&lt;br /&gt;
&lt;br /&gt;
For details about keys, unlocking, and error handling, see [[Encryption]].&lt;br /&gt;
&lt;br /&gt;
=== Dataset properties ===&lt;br /&gt;
These fields define the behaviour of the dataset itself.&lt;br /&gt;
&lt;br /&gt;
==== Name ====&lt;br /&gt;
&lt;br /&gt;
* The dataset name must be unique within the pool. &lt;br /&gt;
* Allowed characters: a–z  A–Z  0–9  .  _  -&lt;br /&gt;
&lt;br /&gt;
Changing the name of an existing dataset breaks paths used by its shares; clients will lose access until the share definitions are adjusted.&lt;br /&gt;
&lt;br /&gt;
==== Deduplication ====&lt;br /&gt;
Enables ZFS block-level deduplication for this dataset. Options:&lt;br /&gt;
&lt;br /&gt;
* Disabled (default) – deduplication is turned off. &lt;br /&gt;
* On – alias for “sha256”. &lt;br /&gt;
* Verify – alias for &amp;quot;sha256, Verify&amp;quot;; additionally compares blocks to reduce the risk of false matches. &lt;br /&gt;
* sha256 – uses the SHA-256 checksum for deduplication. When two blocks have the same checksum, they are treated as identical and only a single copy is stored. &lt;br /&gt;
* sha256, Verify – uses SHA-256 for deduplication and additionally verifies candidate duplicate blocks to detect possible hash collisions. This mode is very resource-intensive and is not recommended for general use.&lt;br /&gt;
&lt;br /&gt;
Use deduplication only for workloads with a high ratio of identical blocks and sufficient RAM (e.g., many similar VM images). For general data, it is usually better to keep it disabled.&lt;br /&gt;
&lt;br /&gt;
==== Number of data copies ====&lt;br /&gt;
Controls the number of ZFS data copies stored for this dataset, in addition to pool redundancy (mirror, RAIDZ, and so on).&lt;br /&gt;
&lt;br /&gt;
* Possible values: &#039;&#039;&#039;1 (default)&#039;&#039;&#039;, &#039;&#039;&#039;2&#039;&#039;&#039;, &#039;&#039;&#039;3&#039;&#039;&#039;. &lt;br /&gt;
* Copies are stored on different disks when possible. &lt;br /&gt;
* Extra copies increase used space and are counted towards quota and reservation. &lt;br /&gt;
* Only new writes use the current setting.&lt;br /&gt;
&lt;br /&gt;
Use higher values only for small but critical datasets where local redundancy is more important than capacity.&lt;br /&gt;
&lt;br /&gt;
==== Compression ====&lt;br /&gt;
The compression algorithm used for this dataset.&lt;br /&gt;
&lt;br /&gt;
* Default: &#039;&#039;&#039;lz4 (default)&#039;&#039;&#039; – fast, generally recommended.&lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – disables compression. &lt;br /&gt;
* Other algorithms that can appear in the list: &lt;br /&gt;
** gzip levels 1–9 (1 = fastest, lowest compression; 9 = slowest, highest compression), &lt;br /&gt;
** lzjb, &lt;br /&gt;
** zle.&lt;br /&gt;
&lt;br /&gt;
Keep lz4 for most datasets. Disable compression only when the data is already compressed and the workload is very latency-sensitive.&lt;br /&gt;
&lt;br /&gt;
==== Record size ====&lt;br /&gt;
Suggested block size for files stored in this dataset.&lt;br /&gt;
&lt;br /&gt;
* Designed primarily for database-type workloads that access large files in fixed-size records. &lt;br /&gt;
* For such workloads, setting the “record size” to at least match the database record size can significantly improve performance. &lt;br /&gt;
* For general-purpose datasets, changing the default is not recommended and may reduce performance. &lt;br /&gt;
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB and 1 MiB; newer software versions allow values up to 16 MiB. &lt;br /&gt;
* Default: &#039;&#039;&#039;128 KiB&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
The new record size applies only to data written after the change; existing files keep their original block size.&lt;br /&gt;
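This setting corresponds to the standard OpenZFS recordsize property. For reference, on a plain ZFS system (dataset names are placeholders):&lt;br /&gt;

```shell
# Match a database that performs 16 KiB I/O.
zfs set recordsize=16K tank/db

# Restore the general-purpose default.
zfs set recordsize=128K tank/db
```

As noted above, only data written after the change uses the new block size.&lt;br /&gt;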
&lt;br /&gt;
==== Write cache sync requests ====&lt;br /&gt;
Controls the ZFS &#039;&#039;&#039;sync&#039;&#039;&#039; property – how synchronous write operations are handled.&lt;br /&gt;
&lt;br /&gt;
Options in the drop-down:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Always&#039;&#039;&#039;: All file-system transactions are committed and flushed to stable storage before returning to the application. Best data safety; lower performance. &lt;br /&gt;
* &#039;&#039;&#039;Standard (default)&#039;&#039;&#039;: Synchronous operations are logged and flushed; however, to improve performance, the most recent cached data (approximately one second) may be lost if a sudden power failure occurs. Recommended only when the environment is protected by a reliable UPS, as indicated by the warning in the dialog. &lt;br /&gt;
* &#039;&#039;&#039;Disabled&#039;&#039;&#039;: Synchronous requests are treated as asynchronous; data is committed only when the next transaction group is written. This provides maximum performance but the highest risk of data loss and inconsistency. Use only for non-critical workloads where this risk is acceptable.&lt;br /&gt;
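The three options correspond to the values of the standard ZFS sync property. A sketch with plain ZFS tooling (dataset names are placeholders):&lt;br /&gt;

```shell
# Maximum safety: flush every synchronous write to stable storage.
zfs set sync=always tank/vms

# Default POSIX behaviour: honor sync requests from applications.
zfs set sync=standard tank/vms

# Treat sync writes as async (risk of data loss on power failure).
zfs set sync=disabled tank/scratch
```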
&lt;br /&gt;
==== Write cache sync request handling (logbias) ====&lt;br /&gt;
Provides a hint about whether synchronous writes for this dataset should use separate log devices.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write log device (Latency)&#039;&#039;&#039; – if the pool has separate log devices, they are used to minimize latency of synchronous writes. Recommended default. &lt;br /&gt;
* &#039;&#039;&#039;In pool (Throughput)&#039;&#039;&#039; – log devices are not used; the software optimizes for overall pool throughput and efficient use of resources.&lt;br /&gt;
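These two options map to the values of the standard ZFS logbias property (dataset names below are placeholders):&lt;br /&gt;

```shell
# Use a separate log device (SLOG), if present, to minimize
# the latency of synchronous writes.
zfs set logbias=latency tank/vms

# Bypass log devices and optimize for overall pool throughput.
zfs set logbias=throughput tank/backup
```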
&lt;br /&gt;
==== Read cache (primary, ARC) scope ====&lt;br /&gt;
Controls what is cached in main memory (ARC) for this dataset.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache anything from this dataset in ARC. &lt;br /&gt;
&lt;br /&gt;
You can reduce ARC pressure for large streaming or low-priority datasets by switching to “Metadata” or “None”.&lt;br /&gt;
&lt;br /&gt;
==== Read cache (secondary, L2ARC) scope ====&lt;br /&gt;
Controls what is cached on L2ARC devices (if present).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache this dataset in L2ARC.&lt;br /&gt;
&lt;br /&gt;
Use “Metadata” or “None” for datasets that would otherwise fill L2ARC with low-value data.&lt;br /&gt;
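The ARC and L2ARC scopes correspond to the standard ZFS primarycache and secondarycache properties. For reference (dataset names are placeholders):&lt;br /&gt;

```shell
# Cache only metadata in RAM (ARC) for a large streaming dataset.
zfs set primarycache=metadata tank/media

# Keep the same dataset out of the SSD read cache (L2ARC) entirely.
zfs set secondarycache=none tank/media
```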
&lt;br /&gt;
==== Access time ====&lt;br /&gt;
Controls whether file access time (atime) is updated on reads.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disabled (default)&#039;&#039;&#039; – access time is not updated, which avoids extra writes and can significantly improve performance. &lt;br /&gt;
* &#039;&#039;&#039;Enabled&#039;&#039;&#039; – access time is updated on each read; required by some legacy applications (for example, certain mailers).&lt;br /&gt;
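This switch corresponds to the standard ZFS atime property (dataset names below are placeholders):&lt;br /&gt;

```shell
# Avoid an extra write on every read (matches the default here).
zfs set atime=off tank/data

# Enable for legacy applications that rely on access times.
zfs set atime=on tank/mail
```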
&lt;br /&gt;
=== Small data blocks policy ===&lt;br /&gt;
This section controls how &#039;&#039;&#039;small data blocks&#039;&#039;&#039; of this dataset are placed when the pool has a &#039;&#039;&#039;special devices group&#039;&#039;&#039; configured.&lt;br /&gt;
&lt;br /&gt;
* If no special devices group exists in the pool, the section is disabled, and an information banner appears: “Available only when a special devices group exists.” In this case, all data blocks are stored on regular data vdevs. &lt;br /&gt;
* When a special devices group exists and is healthy, the &#039;&#039;&#039;Small data block size&#039;&#039;&#039; list becomes active.&lt;br /&gt;
&lt;br /&gt;
==== Small data block size ====&lt;br /&gt;
Defines the maximum size of blocks that will be stored on special devices instead of regular data vdevs (this corresponds to the ZFS special_small_blocks property for the dataset). More info available in the “[[Small blocks policy settings]]” article.&lt;br /&gt;
&lt;br /&gt;
Available options in the drop-down:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disable for the dataset&#039;&#039;&#039;: The small data blocks policy is disabled for this dataset, regardless of the pool settings. All data blocks (including small ones) are stored on regular data vdevs. &lt;br /&gt;
* &#039;&#039;&#039;4 KiB, 8 KiB, 16 KiB, 32 KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, 1 MiB, 2 MiB, 4 MiB, 8 MiB, 16 MiB&#039;&#039;&#039;: Any data block with a logical size less than or equal to the selected value is stored on special devices. Larger blocks are stored on regular data vdevs. &lt;br /&gt;
* &#039;&#039;&#039;Inherit from the pool settings (default) [X KiB]&#039;&#039;&#039;: The dataset inherits the pool-level small blocks setting. The value in brackets ([X KiB]) shows the current pool threshold; e.g.: &lt;br /&gt;
** [0 KiB] – small data blocks policy is effectively disabled on the pool. &lt;br /&gt;
** [128 KiB] – blocks up to 128 KiB are redirected to special devices according to pool settings.&lt;br /&gt;
&lt;br /&gt;
Notes and recommendations:&lt;br /&gt;
&lt;br /&gt;
* A &#039;&#039;&#039;higher threshold&#039;&#039;&#039; moves more data to special devices, which can improve performance for small, random I/O, but also increases capacity usage on the special devices group. &lt;br /&gt;
* A &#039;&#039;&#039;very small value&#039;&#039;&#039; (e.g., 4 KiB or 8 KiB) typically limits the placement mostly to metadata and very small files. &lt;br /&gt;
* If special or dedup devices are not supported by the pool layout (e.g., the pool contains RAIDZ data groups instead of mirror-based data vdevs), the small data blocks policy cannot be effectively used. Plan the pool layout accordingly. &lt;br /&gt;
* If the special devices group becomes degraded or unavailable, the performance and behaviour of datasets using the small data blocks policy can be affected; always monitor pool health.&lt;br /&gt;
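As mentioned above, this threshold corresponds to the ZFS special_small_blocks property. A sketch with plain ZFS tooling (dataset names are placeholders):&lt;br /&gt;

```shell
# Send blocks of 64 KiB or smaller to the special devices group.
zfs set special_small_blocks=64K tank/projects

# Disable the small blocks policy for this dataset only.
zfs set special_small_blocks=0 tank/projects
```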
&lt;br /&gt;
=== Space management – quota and reservation ===&lt;br /&gt;
The bottom part of the dialog controls space limits for the dataset.&lt;br /&gt;
&lt;br /&gt;
==== Enable quota ====&lt;br /&gt;
When this switch is enabled:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Quota definition&#039;&#039;&#039; &lt;br /&gt;
** Hard limit on the total space that the dataset and all its descendants (child datasets, snapshots, clones) can consume. &lt;br /&gt;
** A unit (MiB, GiB, TiB) can be selected from the drop-down. &lt;br /&gt;
* &#039;&#039;&#039;Include snapshots and clones&#039;&#039;&#039; (checkbox) &lt;br /&gt;
** When checked (default), space used by snapshots and clones counts towards the quota. This matches standard ZFS behaviour and is usually recommended.&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
&lt;br /&gt;
* Quota cannot be smaller than reservation (if reservation is enabled). &lt;br /&gt;
* When the quota is reached, further writes fail with “out of space” for this dataset even if the pool still has free capacity.&lt;br /&gt;
&lt;br /&gt;
==== Enable reservation ====&lt;br /&gt;
When this switch is enabled:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Reserved space&#039;&#039;&#039; &lt;br /&gt;
** Amount of pool space reserved exclusively for this dataset. &lt;br /&gt;
** You cannot reserve more than the currently available free space in the pool. The dialog shows the currently available physical space below the field. &lt;br /&gt;
* &#039;&#039;&#039;Include snapshots and clones&#039;&#039;&#039; (checkbox) &lt;br /&gt;
** When checked, the reserved space covers the dataset and all its descendants (snapshots and clones). &lt;br /&gt;
** When unchecked, reserved space applies only to the dataset itself (behaviour similar to ZFS refreservation).&lt;br /&gt;
&lt;br /&gt;
Additional rules:&lt;br /&gt;
&lt;br /&gt;
* The sum of all reservations in a pool cannot exceed its free space. &lt;br /&gt;
* Quota must be greater than or equal to reservation.&lt;br /&gt;
&lt;br /&gt;
Use reservation only for datasets that must have guaranteed space, for example critical databases or backup targets.&lt;br /&gt;
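The quota and reservation switches, together with the “Include snapshots and clones” checkbox, map onto the four standard ZFS space-management properties: quota/reservation (descendants included) versus refquota/refreservation (dataset only). For reference (dataset names and sizes are placeholders):&lt;br /&gt;

```shell
# Hard limit that includes snapshots, clones, and child datasets.
zfs set quota=500G tank/projects

# Limit only the dataset itself (snapshots excluded).
zfs set refquota=500G tank/projects

# Guarantee space for the dataset and all its descendants.
zfs set reservation=100G tank/projects

# Guarantee space for the dataset itself only.
zfs set refreservation=100G tank/projects
```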
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Zpool_wizard&amp;diff=12373</id>
		<title>Zpool wizard</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Zpool_wizard&amp;diff=12373"/>
		<updated>2026-01-15T07:43:55Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The &#039;&#039;&#039;Zpool Wizard&#039;&#039;&#039; guides you through the process of creating and configuring a new ZFS pool (zpool) from available disks. A zpool is the foundational storage construct in ZFS. It serves as a logical storage pool that combines multiple physical storage devices (disks) into &#039;&#039;&#039;vdevs&#039;&#039;&#039; (virtual devices), which collectively form the unified zpool.&amp;amp;nbsp;The wizard consists of multiple steps that allow you to configure data groups (vdevs), add optional device groups, adjust pool settings, and enable encryption if required.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Accessing the wizard ==&lt;br /&gt;
&lt;br /&gt;
#Navigate to &#039;&#039;&#039;Storage&#039;&#039;&#039;.&lt;br /&gt;
#Click &#039;&#039;&#039;Add zpool&#039;&#039;&#039;.&lt;br /&gt;
#The Zpool creation wizard will launch.&lt;br /&gt;
#Follow the guided steps to configure your zpool.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Zpool configuration steps ==&lt;br /&gt;
&lt;br /&gt;
=== Add data group&amp;amp;nbsp; ===&lt;br /&gt;
&lt;br /&gt;
In this step, available disks are listed. You can filter only unused disks using the toggle.&lt;br /&gt;
&lt;br /&gt;
#Select one or more disks from the list.&lt;br /&gt;
#Choose the desired redundancy level for the group:&lt;br /&gt;
#*&#039;&#039;&#039;Single&#039;&#039;&#039; - No redundancy. Any disk failure results in data loss.&lt;br /&gt;
#*&#039;&#039;&#039;Mirror&#039;&#039;&#039; - Data is stored on multiple disks. Capacity equals the size of one disk per mirror.&lt;br /&gt;
#**&#039;&#039;&#039;Mirror (Single Group)&#039;&#039;&#039;: All selected disks will be combined into a single mirrored group.&lt;br /&gt;
#**&#039;&#039;&#039;Mirror (Multiple Groups)&#039;&#039;&#039;: The selected disks will be paired into multiple mirrored groups, each consisting of two disks.&lt;br /&gt;
#*&#039;&#039;&#039;Z-1&#039;&#039;&#039; - Single-parity redundancy. One disk may fail without losing data. A minimum of three disks is required for a RAIDZ-1 group.&lt;br /&gt;
#*&#039;&#039;&#039;Z-2&#039;&#039;&#039; - Double-parity redundancy. Two disks may fail without losing data. A minimum of four disks is required for a RAIDZ-2 group.&lt;br /&gt;
#*&#039;&#039;&#039;Z-3&#039;&#039;&#039; - Triple-parity redundancy. Three disks may fail without losing data. A minimum of five disks is required for a RAIDZ-3 group.&lt;br /&gt;
#Click &#039;&#039;&#039;Add group&#039;&#039;&#039; to add the selected configuration.&amp;amp;nbsp;&lt;br /&gt;
#*The selected data group will appear in the right-hand panel. The total zpool capacity and licensed storage usage are displayed below.&amp;amp;nbsp;&lt;br /&gt;
#*To learn more about vdev types, please refer to the following article: [[Redundancy in Disks Groups|Redundancy in Disk Groups]]&amp;amp;nbsp;&lt;br /&gt;
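The wizard performs the equivalent of a standard zpool create with the chosen vdev layout. Two illustrative sketches (pool and disk names are placeholders):&lt;br /&gt;

```shell
# Two mirrored pairs ("Mirror (Multiple Groups)").
zpool create tank mirror sda sdb mirror sdc sdd

# A single RAIDZ-2 group built from six disks.
zpool create tank2 raidz2 sde sdf sdg sdh sdi sdj
```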
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Add write log (optional) ===&lt;br /&gt;
&lt;br /&gt;
This feature allows you to configure the write log function with a selected redundancy level (single drive or mirror). The write log utilizes a separate intent log (SLOG) device. A fast SSD/NVMe should be used for this vdev.&lt;br /&gt;
&lt;br /&gt;
#Select disks from the available list.&lt;br /&gt;
#Choose redundancy type (&#039;&#039;&#039;Single&#039;&#039;&#039; or &#039;&#039;&#039;Mirror&#039;&#039;&#039;) for added reliability.&lt;br /&gt;
#Add the group to the zpool.&lt;br /&gt;
&lt;br /&gt;
Write log groups are displayed separately in the Other groups section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Key points to consider&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • If multiple log devices are specified, write operations are load-balanced between the devices.&lt;br /&gt;
 • Log devices can be configured with redundancy by using mirrors to enhance fault tolerance.&lt;br /&gt;
 • RAIDZ vdev types are not supported for the intent log.&lt;br /&gt;
 &lt;br /&gt;
 This ensures efficient and reliable write operations while leveraging the selected redundancy level.&lt;br /&gt;
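Adding a write log group corresponds to attaching a log vdev with standard ZFS tooling (pool and device names are placeholders):&lt;br /&gt;

```shell
# Mirrored SLOG on two fast NVMe devices for fault tolerance.
zpool add tank log mirror nvme0n1 nvme1n1
```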
&lt;br /&gt;
=== Add read cache (optional) ===&lt;br /&gt;
&lt;br /&gt;
This step allows you to assign SSDs as L2ARC (Level 2 Adaptive Replacement Cache) devices to boost read performance. Adding a read cache improves performance and reduces latency for storage systems under heavy read load. A cache device stores frequently accessed data from the storage pool, providing an additional layer of caching between main memory and disk. These devices cannot be configured as mirrors or RAIDZ groups. A fast SSD/NVMe should be used for this vdev.&lt;br /&gt;
&lt;br /&gt;
#Select a disk to be used as a cache device.&amp;amp;nbsp;Only &#039;&#039;&#039;Single&#039;&#039;&#039; redundancy is available.&lt;br /&gt;
#Confirm by adding the group.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Key benefits and considerations&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • Cache devices are particularly useful for &#039;&#039;&#039;read-heavy workloads&#039;&#039;&#039; where the working dataset size exceeds the capacity of main memory.&lt;br /&gt;
 • By utilizing cache devices, a larger portion of the working dataset can be served from low-latency storage, improving performance significantly.&lt;br /&gt;
 • The greatest performance improvements are seen in workloads characterized by random reads of primarily static content.&lt;br /&gt;
&lt;br /&gt;
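Adding a read cache corresponds to attaching a cache vdev with standard ZFS tooling (pool and device names are placeholders):&lt;br /&gt;

```shell
# Add a single NVMe device as an L2ARC read cache.
# Cache vdevs cannot be mirrored or combined into RAIDZ.
zpool add tank cache nvme2n1
```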
=== Add special devices group (optional) ===&lt;br /&gt;
&lt;br /&gt;
 Special and deduplication vdevs require at least the same level of redundancy as data vdevs. &lt;br /&gt;
 Because RAIDZ vdevs do not provide compatible redundancy for these device groups, special vdevs and deduplication vdevs cannot be used in a ZFS pool that contains RAIDZ1, RAIDZ2, or RAIDZ3.&lt;br /&gt;
&lt;br /&gt;
A special devices group stores metadata and small-block data to improve performance. A fast SSD/NVMe should be used for this vdev.&lt;br /&gt;
&lt;br /&gt;
#Select one or more disks.&lt;br /&gt;
#Choose redundancy (&#039;&#039;&#039;Single&#039;&#039;&#039; or &#039;&#039;&#039;Mirror&#039;&#039;&#039;). &#039;&#039;&#039;The mirror redundancy level is recommended to prevent data loss&#039;&#039;&#039;.&lt;br /&gt;
#Add them as a group.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Key features and benefits&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • Storing metadata on special devices improves performance for metadata-intensive operations, such as file lookups and directory traversals.&lt;br /&gt;
 • Small files below a certain size threshold can also be stored on these devices, enhancing read and write speeds for such workloads.&lt;br /&gt;
 • Special devices are particularly beneficial for environments with a large number of small files or high metadata activity.&lt;br /&gt;
 &lt;br /&gt;
 Using special devices optimizes the overall performance of the ZFS pool by offloading critical metadata and small-file operations to faster storage.&amp;amp;nbsp;&lt;br /&gt;
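Adding a special devices group corresponds to attaching a special vdev with standard ZFS tooling (pool and device names are placeholders):&lt;br /&gt;

```shell
# Mirrored special vdev for metadata and small data blocks.
zpool add tank special mirror nvme0n1 nvme1n1
```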
&lt;br /&gt;
=== Add deduplication group (optional) ===&lt;br /&gt;
&lt;br /&gt;
A deduplication group is a dedicated device group that holds the deduplication tables (DDT) separately from the special device class. This keeps deduplication metadata isolated from other metadata stored on special devices.&lt;br /&gt;
&lt;br /&gt;
#Select disks for this purpose.&amp;amp;nbsp;Redundancy can be set to &#039;&#039;&#039;Single&#039;&#039;&#039; or &#039;&#039;&#039;Mirror&#039;&#039;&#039;.&amp;amp;nbsp;&#039;&#039;&#039;The mirror redundancy level is recommended to prevent data loss&#039;&#039;&#039;.&lt;br /&gt;
#Add the group to confirm.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Key features and considerations&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • Storing deduplication tables in a dedicated group improves the efficiency of deduplication processes by isolating them from other metadata operations.&lt;br /&gt;
 • This configuration provides flexibility in optimizing storage layout based on workload requirements.&lt;br /&gt;
 • Using a deduplication group is particularly beneficial for systems with high deduplication demands, ensuring better performance and management.&lt;br /&gt;
 &lt;br /&gt;
 This setup enhances deduplication performance while maintaining a clear separation of metadata and deduplication operations.&lt;br /&gt;
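Adding a deduplication group corresponds to attaching a dedup vdev with standard ZFS tooling (pool and device names are placeholders):&lt;br /&gt;

```shell
# Mirrored dedup vdev dedicated to deduplication tables.
zpool add tank dedup mirror nvme2n1 nvme3n1
```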
&lt;br /&gt;
=== Add spare disks (optional) ===&lt;br /&gt;
&lt;br /&gt;
A spare disk is a special pseudo-vdev used to track available spare devices for a zpool. Using spare disks enhances the storage pool&#039;s reliability by enabling seamless drive replacement and reducing the risk of data loss.&lt;br /&gt;
&lt;br /&gt;
#Select the disk and add it to the &#039;&#039;&#039;Spare&#039;&#039;&#039; group.&lt;br /&gt;
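Registering a spare corresponds to the standard spare pseudo-vdev in ZFS tooling (pool and device names are placeholders):&lt;br /&gt;

```shell
# Register a hot spare that can replace a failed disk.
zpool add tank spare sdx
```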
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
&lt;br /&gt;
In this step, you configure the final pool settings:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Zpool name&#039;&#039;&#039; - Enter a unique name for the zpool for easy identification.&lt;br /&gt;
*&#039;&#039;&#039;autoTRIM&#039;&#039;&#039; - If supported by your devices, enable the AutoTRIM feature to reclaim unused space automatically. AutoTRIM helps optimize SSD performance and lifespan by notifying the controller when blocks are no longer in use.&lt;br /&gt;
*&#039;&#039;&#039;Initialize the zpool after creation&#039;&#039;&#039; - Writes patterns to unallocated space to avoid initial-write latency, especially in virtualized environments.&amp;amp;nbsp;The process may extend creation time and briefly affect performance.&lt;br /&gt;
&lt;br /&gt;
Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Resource encryption (optional) ===&lt;br /&gt;
&lt;br /&gt;
Encryption applies to datasets and zvols created in the ZFS pool. The zpool itself remains unencrypted.&lt;br /&gt;
&lt;br /&gt;
#Enable &#039;&#039;&#039;Configure encryption passphrase&#039;&#039;&#039;.&amp;amp;nbsp;&lt;br /&gt;
#Select a &#039;&#039;&#039;Default encryption method&#039;&#039;&#039;.&amp;amp;nbsp;&lt;br /&gt;
#Enter and confirm the passphrase.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note&#039;&#039;&#039;:&lt;br /&gt;
 • The passphrase cannot be recovered.&lt;br /&gt;
 • Encrypted resources inherit the passphrase unless changed later.&lt;br /&gt;
&lt;br /&gt;
=== Summary ===&lt;br /&gt;
&lt;br /&gt;
The summary page displays the complete zpool configuration before finalization. Click &#039;&#039;&#039;Add zpool&#039;&#039;&#039; to complete pool creation.&amp;amp;nbsp;The wizard will create the zpool with the selected configuration.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Remember&#039;&#039;&#039;:&lt;br /&gt;
 • Redundancy level cannot be changed after the ZFS pool is created.&lt;br /&gt;
 • Mixed disk sizes reduce usable capacity to the smallest disk in a vdev.&lt;br /&gt;
 • SSDs are recommended for write log, special devices, and deduplication groups.&lt;br /&gt;
 • Encryption passphrases cannot be recovered.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Encryption&amp;diff=12372</id>
		<title>Encryption</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Encryption&amp;diff=12372"/>
		<updated>2026-01-14T11:51:33Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;__NOTOC__ Encryption protects data stored in datasets and zvols within a ZFS pool (zpool). The encryption feature is available for every zpool, but encrypted resources can be created only after you configure a pool-wide encryption passphrase.  Key characteristics:  *Encryption applies to datasets and zvols; the zpool itself is not encrypted. *All encrypted resources in one zpool share the same passphrase. *Datasets and zvols can only be encrypted during their creation. *...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Encryption protects data stored in datasets and zvols within a ZFS pool (zpool). The encryption feature is available for every zpool, but encrypted resources can be created only after you configure a pool-wide encryption passphrase.&lt;br /&gt;
&lt;br /&gt;
Key characteristics:&lt;br /&gt;
&lt;br /&gt;
*Encryption applies to datasets and zvols; the zpool itself is not encrypted.&lt;br /&gt;
*All encrypted resources in one zpool share the same passphrase.&lt;br /&gt;
*Datasets and zvols can only be encrypted during their creation.&lt;br /&gt;
*You can later change the pool-wide encryption passphrase and the default encryption method.&lt;br /&gt;
&lt;br /&gt;
Use encryption when you need at-rest data protection within a specific zpool.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring resource encryption ==&lt;br /&gt;
&lt;br /&gt;
#Go to &#039;&#039;&#039;Storage&#039;&#039;&#039;.&lt;br /&gt;
#Select the zpool you want to configure.&lt;br /&gt;
#Open the &#039;&#039;&#039;Configuration&#039;&#039;&#039; tab.&lt;br /&gt;
#Expand the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will see either the initial configuration fields (if no passphrase has been set yet) or the current encryption status (if encryption was already configured, for example during [[Zpool_wizard|zpool creation]]). When no passphrase is configured for a zpool, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039; – algorithm that is preselected in the drop-down list and used by default for new encrypted datasets and zvols in this zpool, if you do not choose a different method during resource creation.&lt;br /&gt;
*&#039;&#039;&#039;Encryption passphrase&#039;&#039;&#039; – shared passphrase used to unlock all encrypted resources in this zpool.&lt;br /&gt;
*&#039;&#039;&#039;Confirm passphrase&#039;&#039;&#039; – repeat the passphrase for verification.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enter the passphrase twice, select the default method, and then click &#039;&#039;&#039;Save settings&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Important&#039;&#039;&#039;: The passphrase cannot be recovered if it is lost. Without the passphrase, encrypted resources in this zpool cannot be accessed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the passphrase is configured, you can start creating encrypted datasets and zvols in this zpool. More details on how to use encryption in resources can be found here:&lt;br /&gt;
&lt;br /&gt;
*Create a new zvol for iSCSI Target&lt;br /&gt;
*Create a new zvol for FC Group&lt;br /&gt;
*Create a new dataset&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
*Encryption can be enabled only at creation time. Existing datasets and zvols cannot be switched to encrypted mode by editing their properties.&lt;br /&gt;
*To protect existing data that is currently unencrypted, you must:&lt;br /&gt;
**Create a new encrypted dataset or zvol.&lt;br /&gt;
**Copy or replicate data from the old resource to the new encrypted one.&lt;br /&gt;
**Remove the unencrypted original if it is no longer needed.&lt;br /&gt;
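The GUI workflow builds on standard OpenZFS native encryption. For reference, on a plain ZFS system an encrypted dataset is created and unlocked roughly as follows (names are placeholders; the appliance manages keys through its passphrase dialog instead):&lt;br /&gt;

```shell
# Create a dataset encrypted with AES-256-GCM; the command
# prompts interactively for the passphrase.
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

# After a reboot or pool import, load the key to unlock it.
zfs load-key tank/secure
zfs mount tank/secure
```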
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Managing a zpool with configured resource encryption ==&lt;br /&gt;
&lt;br /&gt;
When a passphrase is already configured, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Passphrase status&#039;&#039;&#039; (for example, configured).&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039;.&lt;br /&gt;
*Buttons:&lt;br /&gt;
**Change passphrase&lt;br /&gt;
**Change encryption method&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Changing the encryption passphrase ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
#In the dialog:&lt;br /&gt;
##Enter &#039;&#039;&#039;New passphrase&#039;&#039;&#039;.&lt;br /&gt;
##Confirm passphrase.&lt;br /&gt;
##Enter the &#039;&#039;&#039;Administrator&#039;&#039;&#039; password to authorize the change.&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
After you confirm the change, the new passphrase is propagated to all existing encrypted datasets and zvols in the zpool. This synchronization may take some time, depending on the number of encrypted resources. A notification of the operation&#039;s start and completion is recorded in &#039;&#039;&#039;Event Viewer&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 While the synchronization is in progress, the User Interface is locked for changes and cannot be used until the operation finishes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Changing the default encryption method ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change encryption method&#039;&#039;&#039;.&lt;br /&gt;
#Select a new &#039;&#039;&#039;Default encryption method&#039;&#039;&#039; from the drop-down list.&lt;br /&gt;
#Click &#039;&#039;&#039;Save method&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The selected method is used as the default only for encrypted datasets and zvols created after this change. Existing encrypted resources keep their original encryption method, which cannot be changed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Available encryption methods ====&lt;br /&gt;
&lt;br /&gt;
The following methods are available for resource encryption:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in CCM (Counter with CBC-MAC) mode.&lt;br /&gt;
*Provides authenticated encryption with moderate CPU usage.&lt;br /&gt;
*Suitable when you need a balance between performance and security.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in CCM mode.&lt;br /&gt;
*Higher security margin than 128-bit, with slightly higher CPU cost.&lt;br /&gt;
*Use when you prefer stronger keys and can accept a small performance impact.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in CCM mode.&lt;br /&gt;
*Maximum key length in the CCM group.&lt;br /&gt;
*Use when the security margin is more important than performance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in GCM (Galois/Counter Mode).&lt;br /&gt;
*Authenticated encryption optimized for performance on modern CPUs.&lt;br /&gt;
*Good choice when you need strong encryption with high throughput.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in GCM mode.&lt;br /&gt;
*Increases key size over AES-128-GCM while remaining performant.&lt;br /&gt;
*Use when you want a higher security margin but similar behavior to AES-128-GCM.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in GCM mode.&lt;br /&gt;
*Provides strong authenticated encryption and is widely used as a best-practice choice.&lt;br /&gt;
*Recommended default when your hardware can handle the additional CPU load.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Handling invalid or missing passphrase ==&lt;br /&gt;
&lt;br /&gt;
If the encryption passphrase is invalid or not configured on the current host, all encrypted datasets and zvols in the affected zpool are locked and cannot be accessed. When a locked zvol is attached to an iSCSI target, FC group, or NVMe-oF subsystem, these objects are effectively blocked as well, and no data can be accessed through them. For an encrypted dataset, all shares configured on it are also blocked.&lt;br /&gt;
&lt;br /&gt;
To restore access, enter the correct passphrase in &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; for the zpool. After a valid passphrase is provided, all locked, encrypted resources are automatically unlocked and become active again, provided that the related targets, groups, subsystems, or datasets were not manually deactivated beforehand.&lt;br /&gt;
&lt;br /&gt;
Such situations may occur, for example, when a zpool is imported on a different host or moved between cluster nodes. In a cluster environment, the passphrase is usually synchronized between nodes, so after a failover, the other node already has the required passphrase. However, if the passphrase change operation was interrupted, some encrypted resources may have been updated to the new passphrase while others still use the old one. On the original host, access may still work, but after exporting the zpool and importing it on another host, some or all encrypted resources can become partially locked. In this case, an event is recorded in the Event Viewer indicating that the passphrase change did not complete successfully.&lt;br /&gt;
&lt;br /&gt;
If this happens, first try to unlock the resources by entering the latest passphrase (the one you intended to change to). If this does not unlock all encrypted resources, enter the previous passphrase (the one used before the change), allow the passphrase change process to complete, and then change the passphrase again to the desired new value. This sequence should unify the passphrase across all encrypted resources in the zpool. Always monitor Event Viewer logs when working with encrypted resources and when changing passphrases.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12237</id>
		<title>NVMe-oF Initiator</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12237"/>
		<updated>2025-10-31T11:51:11Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and high-performance management of remote storage solutions, overcoming traditional cabling limitations by allowing substantial distances between servers and storage arrays.&lt;br /&gt;
&lt;br /&gt;
== Supported Protocols ==&lt;br /&gt;
&lt;br /&gt;
The software supports two principal NVMe-oF initiator protocols:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;TCP&#039;&#039;&#039; – A widely adopted protocol ensuring ease of implementation and compatibility with conventional networking infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;RDMA&#039;&#039;&#039; – A protocol providing lower latency and higher performance, ideal for environments requiring exceptional throughput. RDMA requires specialized hardware, such as Mellanox/NVIDIA ConnectX or ATTO network interface cards, to fully utilize its capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Follow these steps to configure the NVMe-oF initiator:&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Start Discovery&#039;&#039;&#039;&lt;br /&gt;
#;Click the &amp;quot;&#039;&#039;&#039;Discover&#039;&#039;&#039;&amp;quot; button to start the discovery wizard.&lt;br /&gt;
#&#039;&#039;&#039;Enter Connection Details&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Server IP&#039;&#039;&#039;: IP address of the NVMe storage target.&lt;br /&gt;
#*&#039;&#039;&#039;Server port&#039;&#039;&#039;: Network port for communication (&#039;&#039;&#039;default is 4420&#039;&#039;&#039;).&lt;br /&gt;
#*&#039;&#039;&#039;Server protocol&#039;&#039;&#039;: Choose between TCP and RDMA.&lt;br /&gt;
#*&#039;&#039;&#039;Advanced settings (optional)&#039;&#039;&#039;: Enable and specify the number of I/O queues. Leave blank or disabled to use the system default, or enter a specific number to override.&lt;br /&gt;
#:The number of I/O queues refers to the parallel channels through which data is transferred between the NVMe initiator and the target. Increasing this number can improve performance by enabling higher parallelism and reducing latency. However, each queue consumes system resources, and setting the number too high may exceed hardware or network capabilities, leading to connection issues. Adjust this value based on performance requirements and available resources.&lt;br /&gt;
#&#039;&#039;&#039;Proceed to Subsystems&#039;&#039;&#039;&lt;br /&gt;
#;Click &amp;quot;&#039;&#039;&#039;Next&#039;&#039;&#039;&amp;quot;. A list of available NVMe-oF subsystems will appear. Select the subsystems you want to connect to and click “&#039;&#039;&#039;Connect&#039;&#039;&#039;”.&lt;br /&gt;
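The wizard fields above correspond roughly to a single connect call in the upstream nvme-cli tool. The sketch below assembles such a command line; the flag names (`-t`, `-a`, `-s`, `-n`, `--nr-io-queues`) are taken from nvme-cli and are an assumption here, since the GUI performs the equivalent internally:

```python
def build_connect_cmd(ip, nqn, protocol="tcp", port=4420, io_queues=None):
    """Assemble the argv for an nvme-cli style connect call (illustrative)."""
    cmd = ["nvme", "connect", "-t", protocol, "-a", ip, "-s", str(port), "-n", nqn]
    if io_queues is not None:  # advanced setting: override the system default
        cmd += ["--nr-io-queues", str(io_queues)]
    return cmd

print(" ".join(build_connect_cmd("192.168.0.220",
                                 "nqn.2024-01.com.example:subsys1",
                                 io_queues=8)))
# nvme connect -t tcp -a 192.168.0.220 -s 4420 -n nqn.2024-01.com.example:subsys1 --nr-io-queues 8
```

The server IP and NQN shown are placeholder values; leaving `io_queues` unset mirrors leaving the advanced setting disabled.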
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Manage Connection Paths ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Add a new path&#039;&#039;&#039;: Click the “&#039;&#039;&#039;Options&#039;&#039;&#039;” dropdown menu and select “&#039;&#039;&#039;Add path&#039;&#039;&#039;”. Enter the required connection details (Server IP, port, protocol, and optionally the number of I/O queues).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disconnect a subsystem&#039;&#039;&#039;: Use the “&#039;&#039;&#039;Options&#039;&#039;&#039;” menu and select “&#039;&#039;&#039;Disconnect subsystem&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
You can perform additional discoveries at any time to connect new subsystems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Practical Implementation ==&lt;br /&gt;
&lt;br /&gt;
After connecting to a subsystem, a list of available namespaces will be displayed, including:&lt;br /&gt;
&lt;br /&gt;
*Namespace ID&lt;br /&gt;
*Namespace capacity&lt;br /&gt;
*Namespace aliases&lt;br /&gt;
&lt;br /&gt;
Namespaces are sections of the NVMe controller on the storage array. They appear as independent NVMe disks to the server, can be identified by their alias, and are managed in the same manner as standard NVMe disks. Namespaces can be partitioned and added to storage pools.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Only one partition per disk can be active within a single pool or data group to maintain redundancy and reliability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Multi-path Connectivity ==&lt;br /&gt;
&lt;br /&gt;
The initiator supports multi-path connectivity, allowing multiple redundant network paths to a single NVMe target. Each path requires a distinct IP address (Virtual IP) to ensure redundancy and high availability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
If you encounter connection issues (e.g., “Could not connect to subsystem(s)” error), consider the following actions:&lt;br /&gt;
&lt;br /&gt;
#Check Network Connectivity:&lt;br /&gt;
#*Ensure that the server can ping the target’s IP address.&lt;br /&gt;
#*Verify that the correct port (default 4420) is open and not blocked by a firewall.&lt;br /&gt;
#Validate Target Configuration:&lt;br /&gt;
#*Verify that the NVMe target is online and properly configured to support NVMe over Fabrics (NVMe-oF) connections.&lt;br /&gt;
#*Ensure that access control lists (ACLs) or authentication settings on the target allow the initiator to establish a connection.&lt;br /&gt;
#Adjust I/O Queues:&lt;br /&gt;
#*If connection errors occur due to queue limits, try lowering the number of I/O queues in the advanced settings to match target capabilities.&lt;br /&gt;
#Use Alternative Paths:&lt;br /&gt;
#*If multiple network interfaces are available (typical in JBOD or HA environments), try using an alternative IP address or configure multi-path connectivity.&lt;br /&gt;
#Review Logs:&lt;br /&gt;
#*Check logs for detailed error messages that can guide further troubleshooting.&lt;br /&gt;
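For step 1, a quick reachability probe from any host with Python can tell a closed or firewalled port apart from a target-side configuration problem. This is a generic TCP check, not part of the product, and it does not cover RDMA transports:

```python
import socket

def target_reachable(ip, port=4420, timeout=3.0):
    """Return True if a TCP connection to the target port can be opened."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example (placeholder address): target_reachable("192.168.0.220")
# True means the port is open and not blocked by a firewall.
```

If this returns True but the subsystem connection still fails, the problem is more likely in step 2 (target configuration or ACLs) than in the network path.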
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=System_volume_upgrade&amp;diff=12245</id>
		<title>System volume upgrade</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=System_volume_upgrade&amp;diff=12245"/>
		<updated>2025-10-31T10:51:43Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;__NOTOC__  &amp;#039;&amp;#039;&amp;#039;Note&amp;#039;&amp;#039;&amp;#039;: This upgrade improves system stability and performance but is irreversible. Pools upgraded to a 64 KB system volume volblocksize cannot be accessed by o...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
 &#039;&#039;&#039;Note&#039;&#039;&#039;: This upgrade improves system stability and performance but is irreversible. Pools upgraded to a 64 KB system volume volblocksize cannot be accessed by older software versions. &lt;br /&gt;
&lt;br /&gt;
After installing a software version that supports the latest ZFS file system, you may be prompted to upgrade the system volume on each storage pool. This operation improves stability and performance by setting the system volume&#039;s volblocksize to &#039;&#039;&#039;64 KB&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Upgrade Notification ==&lt;br /&gt;
&lt;br /&gt;
When a pool uses an older system volume format, an information banner appears in the Storage view. It recommends performing the upgrade and specifies the required free space on the pool: &#039;&#039;&#039;8 GB&#039;&#039;&#039;. To begin, open the &#039;&#039;&#039;Options&#039;&#039;&#039; menu for the selected pool, then select &#039;&#039;&#039;Upgrade system volume&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Upgrade Process ==&lt;br /&gt;
&lt;br /&gt;
#A confirmation window appears, displaying a warning that the operation cannot be undone. To confirm, type the word &#039;&#039;&#039;upgrade&#039;&#039;&#039; into the confirmation field.&lt;br /&gt;
#Click &#039;&#039;&#039;Upgrade&#039;&#039;&#039; to start the process.&lt;br /&gt;
#A progress window is shown during the upgrade.&lt;br /&gt;
#When completed, a message appears indicating that the system volume has been upgraded to 64 KB. To finalize, you must &#039;&#039;&#039;export and import&#039;&#039;&#039; the pool or use the &#039;&#039;&#039;Move&#039;&#039;&#039; option if the pool belongs to a cluster.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== After Upgrade ==&lt;br /&gt;
&lt;br /&gt;
Once the system volume has been successfully updated, the pool’s status panel displays a message indicating that a &#039;&#039;&#039;pool export/import&#039;&#039;&#039; is required to complete the process. After performing this step, the system volume upgrade is fully applied.&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=10792</id>
		<title>Active Directory (AD) server authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=10792"/>
		<updated>2025-10-16T13:32:46Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
This functionality is available in &#039;&#039;&#039;User Management &amp;gt; Share users/groups &amp;gt; Authorization protocols&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To configure a connection to an existing Active Directory server:&lt;br /&gt;
&lt;br /&gt;
#Navigate to the &#039;&#039;&#039;User Management&#039;&#039;&#039; section in the left menu.&lt;br /&gt;
#Go to the &#039;&#039;&#039;Share users/groups&#039;&#039;&#039; tab.&lt;br /&gt;
#Find the &#039;&#039;&#039;Active Directory (AD) server authentication&#039;&#039;&#039; block.&lt;br /&gt;
#Enable the &#039;&#039;&#039;Enable protocol&#039;&#039;&#039; option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication status ==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Connection&#039;&#039;&#039; - shows whether you are connected to an AD server or not.&lt;br /&gt;
*&#039;&#039;&#039;Users/groups list&#039;&#039;&#039; - shows when the lists of users and groups were last synchronized or if the synchronization is taking place at the moment.&lt;br /&gt;
&lt;br /&gt;
Users and groups are synchronized with an Active Directory server every 2 hours. Synchronization can also be started manually by using the &#039;&#039;&#039;Synchronize&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication settings ==&lt;br /&gt;
&lt;br /&gt;
To connect to an existing AD server, fill in the following fields with the credentials provided by the AD server administrator and click the &#039;&#039;&#039;Apply&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Realm&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Administrator name&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;&amp;lt;br/&amp;gt;&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The password cannot contain:&lt;br /&gt;
**special characters such as &#039; &amp;quot; ` ^ &amp;amp; $ # ~ [ ] \ / | *&amp;amp;nbsp;:&amp;amp;nbsp;? &amp;amp;lt; &amp;amp;gt;&lt;br /&gt;
**spaces&lt;br /&gt;
**fewer than 12 or more than 16 characters&lt;br /&gt;
*&#039;&#039;&#039;Organizational Unit (OU)&#039;&#039;&#039; - the direct path to the container where the Computer Organizational Unit is placed. The path must be entered starting from the primary container name within the domain structure. The default container name is &#039;&#039;&#039;Computers&#039;&#039;&#039;. If another container name is used instead, replace &#039;&#039;&#039;Computers&#039;&#039;&#039; with the appropriate name. If the path to the container is nested, use a slash as the separator. In the screenshot below, the OU is in the &#039;&#039;&#039;Computers&#039;&#039;&#039; container, which is nested in &#039;&#039;&#039;AllComputers &amp;gt; Marketing&#039;&#039;&#039;. In this example, the path to the OU is: &#039;&#039;&#039;AllComputers/Marketing/Computers&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;[[File:Ad-structure.png]]&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The container name cannot contain:&lt;br /&gt;
**special characters such as , + &amp;quot; \ &amp;amp;lt; &amp;amp;gt;&amp;amp;nbsp;; = / #&lt;br /&gt;
**spaces&lt;br /&gt;
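The two character-set rules above can be checked mechanically before submitting the form. This is an illustrative validator built only from the lists in this page, not code from the product:

```python
# Forbidden characters, taken verbatim from the two lists above
# (a space is also rejected in both fields).
PASSWORD_FORBIDDEN = set("'\"`^&$#~[]\\/|*:?<> ")
CONTAINER_FORBIDDEN = set(',+"\\<>;=/# ')

def password_ok(pw):
    """12-16 characters, none of them forbidden."""
    return 12 <= len(pw) <= 16 and not set(pw) & PASSWORD_FORBIDDEN

def container_path_ok(path):
    # Each slash-separated level is checked on its own; the slash itself
    # is valid only as the path separator.
    return all(seg and not set(seg) & CONTAINER_FORBIDDEN
               for seg in path.split("/"))

assert password_ok("Secret-Pass12")
assert not password_ok("too short")            # space, and fewer than 12 chars
assert container_path_ok("AllComputers/Marketing/Computers")
```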
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div&amp;gt;&#039;&#039;&#039;The following reasons might prevent you from connecting to Active Directory:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
#Time difference between this system and the Active Directory server - if the clocks differ by more than 5 minutes, the connection is not possible.&lt;br /&gt;
#The method of authenticating trusted domains - the authentication has to be set to two-way trust. Otherwise, it is not possible to read users and groups from trusted domains.&lt;br /&gt;
#DNS configuration - a round-robin DNS mechanism cannot be used for the Active Directory domain, because only one IP address is authorized. As soon as DNS returns a different IP address, the connection fails.&lt;br /&gt;
#The &#039;&#039;&#039;server name&#039;&#039;&#039; is the same as a computer object already present in the Computer Organizational Unit (OU) on the Active Directory (AD) server. If an object with the same name exists and the user you log in to the AD server with does not have permission to access it, the connection will fail. The solution is to delete the existing computer object from the AD server, as follows:&lt;br /&gt;
&amp;lt;ul style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Log on to the Domain Controller with the domain administrator account. Press Windows Logo + R, enter &amp;quot;dsa.msc&amp;quot; and press Enter.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;In the &amp;quot;Active Directory Users and Computers&amp;quot; window, select the domain container in which the OU you are looking for is located.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Select the computer object and delete it.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&#039;&#039;&#039;Note&#039;&#039;&#039;: By default, any created Organizational Unit is protected from accidental deletion. To delete the OU, clear the &amp;quot;Protect object from accidental deletion&amp;quot; checkbox, found on the &amp;quot;Object&amp;quot; tab of the object properties. Deleting an OU also deletes all nested objects it contains.&lt;br /&gt;
:::&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Users and user groups management ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Management mode:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan single domain (default)&#039;&#039;&#039; - Obtains users and groups from the main domain only.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan all trusted domains&#039;&#039;&#039; - Obtains users and groups from the main domain and all trusted domains.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;ID mapping backend:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;rid + tdb (default)&#039;&#039;&#039; - This option utilizes the rid backend for ID mapping to AD users. The UID/GIDs range has to be entered manually. The tdb backend is used when no other configuration is set. Recommended for large databases. Samba Wiki link for the rid backend: [https://wiki.samba.org/index.php/Idmap_config_rid https://wiki.samba.org/index.php/Idmap_config_rid]&lt;br /&gt;
*&#039;&#039;&#039;ad (with RFC2307 schema) + tdb&#039;&#039;&#039; - Allows reading ID mappings from an AD server, provided that the uidNumber attributes for users and gidNumber attributes for groups were added in advance in the AD. This backend requires additional configuration of uidNumber and gidNumber on the AD server side. The tdb backend is used when no other configuration is set. Samba Wiki link for the ad backend: [https://wiki.samba.org/index.php/Idmap_config_ad https://wiki.samba.org/index.php/Idmap_config_ad]&lt;br /&gt;
*&#039;&#039;&#039;autorid&#039;&#039;&#039; - Automatically configures the range to be used for each domain. The only configuration needed is the range of UID/GIDs used for user/group mappings and the number of IDs per domain. Samba Wiki link for autorid backend: [https://wiki.samba.org/index.php/Idmap_config_autorid https://wiki.samba.org/index.php/Idmap_config_autorid]&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note&#039;&#039;&#039;: Autorid is not recommended in cluster environments.&lt;br /&gt;
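For the rid backend, the mapping is simple arithmetic: per the Samba idmap documentation linked above, a Unix ID is the account's RID offset into the configured range (ID = RID - BASE_RID + LOW_RANGE_ID). A minimal sketch, assuming the default base_rid of 0 and the example ranges used in the Troubleshooting section below:

```python
def rid_to_unix_id(rid, range_low=2_000_000, range_high=2_999_999, base_rid=0):
    """Map a Windows RID into the configured Unix UID/GID range (rid backend)."""
    unix_id = range_low + rid - base_rid
    if not range_low <= unix_id <= range_high:
        raise ValueError("RID falls outside the configured idmap range")
    return unix_id

assert rid_to_unix_id(1105) == 2_001_105  # hypothetical user RID 1105
```

This also shows why multi-domain setups need a separate, non-overlapping range per domain: the same RID in two domains would otherwise map to the same Unix ID.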
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;span style=&amp;quot;font-size:small&amp;quot;&amp;gt;The TDB UID/GIDs mapping does not work properly.&amp;lt;/span&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Single-Domain Environments&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;It is recommended to use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings. Alternatively, you can use the &amp;quot;rid+tdb&amp;quot; option. If you choose &amp;quot;rid+tdb,&amp;quot; set the UID/GIDs mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999). The range 1,000,000 to 1,999,999 is reserved.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Multi-Domain Environments&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;The &amp;quot;autorid&amp;quot; option cannot be used. Instead, use &amp;quot;rid+tdb&amp;quot; or &amp;quot;ad (with RFC2307 schema) + tdb.&amp;quot; Ensure the UID/GIDs mapping is set to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12236</id>
		<title>NVMe-oF Initiator</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12236"/>
		<updated>2025-07-30T12:28:32Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and high-performance management of remote storage solutions, overcoming traditional cabling limitations by allowing substantial distances between servers and storage arrays.&lt;br /&gt;
&lt;br /&gt;
== Supported Protocols ==&lt;br /&gt;
&lt;br /&gt;
The software supports two principal NVMe-oF initiator protocols:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;TCP&#039;&#039;&#039; – A widely adopted protocol ensuring ease of implementation and compatibility with conventional networking infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;RDMA&#039;&#039;&#039; – A protocol providing lower latency and higher performance, ideal for environments requiring exceptional throughput. RDMA requires specialized hardware, such as Mellanox/NVIDIA ConnectX or ATTO network interface cards, to fully utilize its capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Follow these steps to configure the NVMe-oF initiator:&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Start Discovery&#039;&#039;&#039;&lt;br /&gt;
#;Click the &amp;quot;&#039;&#039;&#039;Discover&#039;&#039;&#039;&amp;quot; button to start the discovery wizard.&lt;br /&gt;
#&#039;&#039;&#039;Enter Connection Details&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Server IP&#039;&#039;&#039;: IP address of the NVMe storage target.&lt;br /&gt;
#*&#039;&#039;&#039;Server port&#039;&#039;&#039;: Network port for communication (&#039;&#039;&#039;default is 4420&#039;&#039;&#039;).&lt;br /&gt;
#*&#039;&#039;&#039;Server protocol&#039;&#039;&#039;: Choose between TCP and RDMA.&lt;br /&gt;
#*&#039;&#039;&#039;Advanced settings (optional)&#039;&#039;&#039;: Enable and specify the number of I/O queues. Leave blank or disabled to use the system default, or enter a specific number to override.&lt;br /&gt;
#:The number of I/O queues refers to the parallel channels through which data is transferred between the NVMe initiator and the target. Increasing this number can improve performance by enabling higher parallelism and reducing latency. However, each queue consumes system resources, and setting the number too high may exceed hardware or network capabilities, leading to connection issues. Adjust this value based on performance requirements and available resources.&lt;br /&gt;
#&#039;&#039;&#039;Proceed to Subsystems&#039;&#039;&#039;&lt;br /&gt;
#;Click &amp;quot;&#039;&#039;&#039;Next&#039;&#039;&#039;&amp;quot;. A list of available NVMe-oF subsystems will appear. Select the subsystems you want to connect to and click “&#039;&#039;&#039;Connect&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Manage Connection Paths ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Add a new path&#039;&#039;&#039;: Click the “&#039;&#039;&#039;Options&#039;&#039;&#039;” dropdown menu and select “&#039;&#039;&#039;Add path&#039;&#039;&#039;”. Enter the required connection details (Server IP, port, protocol, and optionally the number of I/O queues).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disconnect a subsystem&#039;&#039;&#039;: Use the “&#039;&#039;&#039;Options&#039;&#039;&#039;” menu and select “&#039;&#039;&#039;Disconnect subsystem&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
You can perform additional discoveries at any time to connect new subsystems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Practical Implementation ==&lt;br /&gt;
&lt;br /&gt;
After connecting to a subsystem, a list of available namespaces will be displayed, including:&lt;br /&gt;
&lt;br /&gt;
*Namespace ID&lt;br /&gt;
*Namespace capacity&lt;br /&gt;
*Namespace aliases&lt;br /&gt;
&lt;br /&gt;
Namespaces are sections of the NVMe controller on the storage array. They appear as independent NVMe disks to the server, can be identified by their alias, and are managed in the same manner as standard NVMe disks. Namespaces can be partitioned and added to storage pools.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Only one partition per disk can be active within a single pool or data group to maintain redundancy and reliability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Multi-path Connectivity ==&lt;br /&gt;
&lt;br /&gt;
The initiator supports multi-path connectivity, allowing multiple redundant network paths to a single NVMe target. Each path requires a distinct IP address (Virtual IP) to ensure redundancy and high availability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
If you encounter connection issues (e.g., “Could not connect to subsystem(s)” error), consider the following actions:&lt;br /&gt;
&lt;br /&gt;
#Check Network Connectivity:&lt;br /&gt;
#*Ensure that the server can ping the target’s IP address.&lt;br /&gt;
#*Verify that the correct port (default 4420) is open and not blocked by a firewall.&lt;br /&gt;
#Validate Target Configuration:&lt;br /&gt;
#*Verify that the NVMe target is online and properly configured to support NVMe over Fabrics (NVMe-oF) connections.&lt;br /&gt;
#*Ensure that access control lists (ACLs) or authentication settings on the target allow the initiator to establish a connection.&lt;br /&gt;
#Adjust I/O Queues:&lt;br /&gt;
#*If connection errors occur due to queue limits, try lowering the number of I/O queues in the advanced settings to match target capabilities.&lt;br /&gt;
#Use Alternative Paths:&lt;br /&gt;
#*If multiple network interfaces are available (typical in JBOD or HA environments), try using an alternative IP address or configure multi-path connectivity.&lt;br /&gt;
#Review Logs:&lt;br /&gt;
#*Check logs for detailed error messages that can guide further troubleshooting.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help_topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_discover&amp;diff=12240</id>
		<title>NVMe-oF discover</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_discover&amp;diff=12240"/>
		<updated>2025-07-30T12:09:57Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Redirected page to NVMe-oF Initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[NVMe-oF Initiator]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Subsystem_Connection_Problems&amp;diff=12239</id>
		<title>NVMe-oF Subsystem Connection Problems</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Subsystem_Connection_Problems&amp;diff=12239"/>
		<updated>2025-07-30T11:29:08Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Redirected page to NVMe-oF Initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[NVMe-oF Initiator]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Subsystems&amp;diff=12238</id>
		<title>NVMe-oF Subsystems</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Subsystems&amp;diff=12238"/>
		<updated>2025-07-30T10:31:48Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Redirected page to NVMe-oF Initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[NVMe-oF Initiator]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12235</id>
		<title>NVMe-oF Initiator</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12235"/>
		<updated>2025-07-30T10:07:14Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and high-performance management of remote storage solutions, overcoming traditional cabling limitations by allowing substantial distances between servers and storage arrays.&lt;br /&gt;
&lt;br /&gt;
== Supported Protocols ==&lt;br /&gt;
&lt;br /&gt;
The software supports two principal NVMe-oF initiator protocols:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;TCP&#039;&#039;&#039; – A widely adopted protocol ensuring ease of implementation and compatibility with conventional networking infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;RDMA&#039;&#039;&#039; – A protocol providing lower latency and higher performance, ideal for environments requiring exceptional throughput. RDMA requires specialized hardware, such as Mellanox/NVIDIA ConnectX or ATTO network interface cards, to fully utilize its capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Follow these steps to configure the NVMe-oF initiator:&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Start Discovery&#039;&#039;&#039;&lt;br /&gt;
#;Click the &amp;quot;&#039;&#039;&#039;Discover&#039;&#039;&#039;&amp;quot; button to start the discovery wizard.&lt;br /&gt;
#&#039;&#039;&#039;Enter Connection Details&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Server IP&#039;&#039;&#039;: IP address of the NVMe storage target.&lt;br /&gt;
#*&#039;&#039;&#039;Server port&#039;&#039;&#039;: Network port for communication (&#039;&#039;&#039;default is 4420&#039;&#039;&#039;).&lt;br /&gt;
#*&#039;&#039;&#039;Server protocol&#039;&#039;&#039;: Choose between TCP and RDMA.&lt;br /&gt;
#*&#039;&#039;&#039;Advanced settings (optional)&#039;&#039;&#039;: Enable and specify the number of I/O queues. Leave blank or disabled to use the system default, or enter a specific number to override.&lt;br /&gt;
#:The number of I/O queues refers to the parallel channels through which data is transferred between the NVMe initiator and the target. Increasing this number can improve performance by enabling higher parallelism and reducing latency. However, each queue consumes system resources, and setting the number too high may exceed hardware or network capabilities, leading to connection issues. Adjust this value based on performance requirements and available resources.&lt;br /&gt;
#&#039;&#039;&#039;Proceed to Subsystems&#039;&#039;&#039;&lt;br /&gt;
#;Click &amp;quot;&#039;&#039;&#039;Next&#039;&#039;&#039;&amp;quot;. A list of available NVMe-oF subsystems will appear. Select the subsystems you want to connect to and click “&#039;&#039;&#039;Connect&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Manage Connection Paths ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Add a new path&#039;&#039;&#039;: Click the “&#039;&#039;&#039;Options&#039;&#039;&#039;” dropdown menu and select “&#039;&#039;&#039;Add path&#039;&#039;&#039;”. Enter the required connection details (Server IP, port, protocol, and optionally the number of I/O queues).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disconnect a subsystem&#039;&#039;&#039;: Use the “&#039;&#039;&#039;Options&#039;&#039;&#039;” menu and select “&#039;&#039;&#039;Disconnect subsystem&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
You can perform additional discoveries at any time to connect new subsystems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Practical Implementation ==&lt;br /&gt;
&lt;br /&gt;
After connecting to a subsystem, a list of available namespaces will be displayed, including:&lt;br /&gt;
&lt;br /&gt;
*Namespace ID&lt;br /&gt;
*Namespace capacity&lt;br /&gt;
*Namespace aliases&lt;br /&gt;
&lt;br /&gt;
Namespaces are sections of the NVMe controller on the storage array. They appear as independent NVMe disks to the server, can be identified by their alias, and are managed in the same manner as standard NVMe disks. Namespaces can be partitioned and added to storage pools.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Only one partition per disk can be active within a single pool or data group to maintain redundancy and reliability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Multi-path Connectivity ==&lt;br /&gt;
&lt;br /&gt;
The initiator supports multi-path connectivity, allowing multiple redundant network paths to a single NVMe target. Each path requires a distinct IP address (Virtual IP) to ensure redundancy and high availability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
If you encounter connection issues (e.g., “Could not connect to subsystem(s)” error), consider the following actions:&lt;br /&gt;
&lt;br /&gt;
#Check Network Connectivity:&lt;br /&gt;
#*Ensure that the server can ping the target’s IP address.&lt;br /&gt;
#*Verify that the correct port (default 4420) is open and not blocked by a firewall.&lt;br /&gt;
#Validate Target Configuration:&lt;br /&gt;
#*Verify that the NVMe target is online and properly configured to support NVMe over Fabrics (NVMe-oF) connections.&lt;br /&gt;
#*Ensure that access control lists (ACLs) or authentication settings on the target allow the initiator to establish a connection.&lt;br /&gt;
#Adjust I/O Queues:&lt;br /&gt;
#*If connection errors occur due to queue limits, try lowering the number of I/O queues in the advanced settings to match target capabilities.&lt;br /&gt;
#Use Alternative Paths:&lt;br /&gt;
#*If multiple network interfaces are available (typical in JBOD or HA environments), try using an alternative IP address or configure multi-path connectivity.&lt;br /&gt;
#Review Logs:&lt;br /&gt;
#*Check logs for detailed error messages that can guide further troubleshooting.&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12234</id>
		<title>NVMe-oF Initiator</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12234"/>
		<updated>2025-07-30T10:05:00Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and high-performance management of remote storage solutions, overcoming traditional cabling limitations by allowing substantial distances between servers and storage arrays.&lt;br /&gt;
&lt;br /&gt;
== Supported Protocols ==&lt;br /&gt;
&lt;br /&gt;
The software supports two principal NVMe-oF initiator protocols:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;TCP&#039;&#039;&#039; – A widely adopted protocol ensuring ease of implementation and compatibility with conventional networking infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;RDMA&#039;&#039;&#039; – A protocol providing lower latency and higher performance, ideal for environments requiring exceptional throughput. RDMA requires specialized hardware, such as Mellanox/NVIDIA ConnectX or ATTO network interface cards, to fully utilize its capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Follow these steps to configure the NVMe-oF initiator:&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Start Discovery&#039;&#039;&#039;&lt;br /&gt;
#;Click the &amp;quot;&#039;&#039;&#039;Discover&#039;&#039;&#039;&amp;quot; button to start the discovery wizard.&lt;br /&gt;
#&#039;&#039;&#039;Enter Connection Details&#039;&#039;&#039;&lt;br /&gt;
#:The number of I/O queues refers to the parallel channels through which data is transferred between the NVMe initiator and the target. Increasing this number can improve performance by enabling higher parallelism and reducing latency. However, each queue consumes system resources, and setting the number too high may exceed hardware or network capabilities, leading to connection issues. Adjust this value based on performance requirements and available resources.&lt;br /&gt;
#*&#039;&#039;&#039;Server IP&#039;&#039;&#039;: IP address of the NVMe storage target.&lt;br /&gt;
#*&#039;&#039;&#039;Server port&#039;&#039;&#039;: Network port for communication (&#039;&#039;&#039;default is 4420&#039;&#039;&#039;).&lt;br /&gt;
#*&#039;&#039;&#039;Server protocol&#039;&#039;&#039;: Choose between TCP and RDMA.&lt;br /&gt;
#*&#039;&#039;&#039;Advanced settings (optional)&#039;&#039;&#039;: Enable and specify the number of I/O queues. Leave blank or disabled to use the system default, or enter a specific number to override.&lt;br /&gt;
#&#039;&#039;&#039;Proceed to Subsystems&#039;&#039;&#039;&lt;br /&gt;
#;Click &amp;quot;&#039;&#039;&#039;Next&#039;&#039;&#039;&amp;quot;. A list of available NVMe-oF subsystems will appear. Select the subsystems you want to connect to and click “&#039;&#039;&#039;Connect&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
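The discovery wizard above performs standard NVMe-oF discovery and connect operations. As an illustration only, on a generic Linux host the same steps can be sketched with the nvme-cli tool (the IP address and subsystem NQN below are placeholders, not values from this article):&lt;br /&gt;

```shell
# Discover subsystems advertised by the target (TCP transport, default port 4420)
nvme discover -t tcp -a 192.168.10.100 -s 4420

# Connect to a discovered subsystem; --nr-io-queues limits the number of I/O queues
nvme connect -t tcp -a 192.168.10.100 -s 4420 \
     -n nqn.2024-01.com.example:subsys1 --nr-io-queues=4

# The connected namespaces now appear as regular NVMe block devices
nvme list
```
&lt;br /&gt;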
&lt;br /&gt;
&lt;br /&gt;
== Manage Connection Paths ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Add a new path&#039;&#039;&#039;: Click the “&#039;&#039;&#039;Options&#039;&#039;&#039;” dropdown menu and select “&#039;&#039;&#039;Add path&#039;&#039;&#039;”. Enter the required connection details (Server IP, port, protocol, and optionally the number of I/O queues).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disconnect a subsystem&#039;&#039;&#039;: Use the “&#039;&#039;&#039;Options&#039;&#039;&#039;” menu and select “&#039;&#039;&#039;Disconnect subsystem&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
You can perform additional discoveries at any time to connect new subsystems.&lt;br /&gt;
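For comparison, on a plain Linux initiator the same path-management operations map onto nvme-cli commands along these lines (the NQN and Virtual IP addresses are placeholders):&lt;br /&gt;

```shell
# Add a second path to an already-connected subsystem via another Virtual IP
nvme connect -t tcp -a 192.168.20.100 -s 4420 -n nqn.2024-01.com.example:subsys1

# Show subsystems together with their paths
nvme list-subsys

# Disconnect all paths to a subsystem
nvme disconnect -n nqn.2024-01.com.example:subsys1
```
&lt;br /&gt;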
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Practical Implementation ==&lt;br /&gt;
&lt;br /&gt;
After connecting to a subsystem, a list of available namespaces will be displayed, including:&lt;br /&gt;
&lt;br /&gt;
*Namespace ID&lt;br /&gt;
*Namespace capacity&lt;br /&gt;
*Namespace aliases&lt;br /&gt;
&lt;br /&gt;
Namespaces are sections of the NVMe controller on the storage array. They appear as independent NVMe disks to the server, can be identified by their alias, and are managed in the same manner as standard NVMe disks. Namespaces can be partitioned and added to storage pools.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Only one partition per disk can be active within a single pool or data group to maintain redundancy and reliability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Multi-path Connectivity ==&lt;br /&gt;
&lt;br /&gt;
The initiator supports multi-path connectivity, allowing multiple redundant network paths to a single NVMe target. Each path requires a distinct IP address (Virtual IP) to ensure redundancy and high availability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
If you encounter connection issues (e.g., “Could not connect to subsystem(s)” error), consider the following actions:&lt;br /&gt;
&lt;br /&gt;
#Check Network Connectivity:&lt;br /&gt;
#*Ensure that the server can ping the target’s IP address.&lt;br /&gt;
#*Verify that the correct port (default 4420) is open and not blocked by a firewall.&lt;br /&gt;
#Validate Target Configuration:&lt;br /&gt;
#*Verify that the NVMe target is online and properly configured to support NVMe over Fabrics (NVMe-oF) connections.&lt;br /&gt;
#*Ensure that access control lists (ACLs) or authentication settings on the target allow the initiator to establish a connection.&lt;br /&gt;
#Adjust I/O Queues:&lt;br /&gt;
#*If connection errors occur due to queue limits, try lowering the number of I/O queues in the advanced settings to match target capabilities.&lt;br /&gt;
#Use Alternative Paths:&lt;br /&gt;
#*If multiple network interfaces are available (typical in JBOD or HA environments), try using an alternative IP address or configure multi-path connectivity.&lt;br /&gt;
#Review Logs:&lt;br /&gt;
#*Check logs for detailed error messages that can guide further troubleshooting.&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12233</id>
		<title>NVMe-oF Initiator</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NVMe-oF_Initiator&amp;diff=12233"/>
		<updated>2025-07-30T10:02:48Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;__NOTOC__ The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and hi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and high-performance management of remote storage solutions, overcoming traditional cabling limitations by allowing substantial distances between servers and storage arrays.&lt;br /&gt;
&lt;br /&gt;
== Supported Protocols ==&lt;br /&gt;
&lt;br /&gt;
The software supports two principal NVMe-oF initiator protocols:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;TCP&#039;&#039;&#039; – A widely adopted protocol ensuring ease of implementation and compatibility with conventional networking infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;RDMA&#039;&#039;&#039; – A protocol providing lower latency and higher performance, ideal for environments requiring exceptional throughput. RDMA requires specialized hardware, such as Mellanox/NVIDIA ConnectX or ATTO network interface cards, to fully utilize its capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Follow these steps to configure the NVMe-oF initiator:&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Start Discovery&#039;&#039;&#039;&lt;br /&gt;
#;Click the &amp;quot;&#039;&#039;&#039;Discover&#039;&#039;&#039;&amp;quot; button to start the discovery wizard.&lt;br /&gt;
#&#039;&#039;&#039;Enter Connection Details&#039;&#039;&#039;&lt;br /&gt;
#:The number of I/O queues refers to the parallel channels through which data is transferred between the NVMe initiator and the target. Increasing this number can improve performance by enabling higher parallelism and reducing latency. However, each queue consumes system resources, and setting the number too high may exceed hardware or network capabilities, leading to connection issues. Adjust this value based on performance requirements and available resources.&lt;br /&gt;
#*&#039;&#039;&#039;Server IP&#039;&#039;&#039;: IP address of the NVMe storage target.&lt;br /&gt;
#*&#039;&#039;&#039;Server port&#039;&#039;&#039;: Network port for communication (&#039;&#039;&#039;default is 4420&#039;&#039;&#039;).&lt;br /&gt;
#*&#039;&#039;&#039;Server protocol&#039;&#039;&#039;: Choose between TCP and RDMA.&lt;br /&gt;
#*&#039;&#039;&#039;Advanced settings (optional)&#039;&#039;&#039;: Enable and specify the number of I/O queues. Leave blank or disabled to use the system default, or enter a specific number to override.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proceed to Subsystems&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
;Click &amp;quot;&#039;&#039;&#039;Next&#039;&#039;&#039;&amp;quot;. A list of available NVMe-oF subsystems will appear. Select the subsystems you want to connect to and click “&#039;&#039;&#039;Connect&#039;&#039;&#039;”.&lt;br /&gt;
;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Manage Connection Paths ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Add a new path&#039;&#039;&#039;: Click the “&#039;&#039;&#039;Options&#039;&#039;&#039;” dropdown menu and select “&#039;&#039;&#039;Add path&#039;&#039;&#039;”. Enter the required connection details (Server IP, port, protocol, and optionally the number of I/O queues).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disconnect a subsystem&#039;&#039;&#039;: Use the “&#039;&#039;&#039;Options&#039;&#039;&#039;” menu and select “&#039;&#039;&#039;Disconnect subsystem&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
You can perform additional discoveries at any time to connect new subsystems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Practical Implementation ==&lt;br /&gt;
&lt;br /&gt;
After connecting to a subsystem, a list of available namespaces will be displayed, including:&lt;br /&gt;
&lt;br /&gt;
*Namespace ID&lt;br /&gt;
*Namespace capacity&lt;br /&gt;
*Namespace aliases&lt;br /&gt;
&lt;br /&gt;
Namespaces are sections of the NVMe controller on the storage array. They appear as independent NVMe disks to the server, can be identified by their alias, and are managed in the same manner as standard NVMe disks. Namespaces can be partitioned and added to storage pools.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Only one partition per disk can be active within a single pool or data group to maintain redundancy and reliability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Multi-path Connectivity ==&lt;br /&gt;
&lt;br /&gt;
The initiator supports multi-path connectivity, allowing multiple redundant network paths to a single NVMe target. Each path requires a distinct IP address (Virtual IP) to ensure redundancy and high availability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
If you encounter connection issues (e.g., “Could not connect to subsystem(s)” error), consider the following actions:&lt;br /&gt;
&lt;br /&gt;
#Check Network Connectivity:&lt;br /&gt;
#*Ensure that the server can ping the target’s IP address.&lt;br /&gt;
#*Verify that the correct port (default 4420) is open and not blocked by a firewall.&lt;br /&gt;
#Validate Target Configuration:&lt;br /&gt;
#*Verify that the NVMe target is online and properly configured to support NVMe over Fabrics (NVMe-oF) connections.&lt;br /&gt;
#*Ensure that access control lists (ACLs) or authentication settings on the target allow the initiator to establish a connection.&lt;br /&gt;
#Adjust I/O Queues:&lt;br /&gt;
#*If connection errors occur due to queue limits, try lowering the number of I/O queues in the advanced settings to match target capabilities.&lt;br /&gt;
#Use Alternative Paths:&lt;br /&gt;
#*If multiple network interfaces are available (typical in JBOD or HA environments), try using an alternative IP address or configure multi-path connectivity.&lt;br /&gt;
#Review Logs:&lt;br /&gt;
#*Check logs for detailed error messages that can guide further troubleshooting.&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=JBODs_%26_JBOFs&amp;diff=12113</id>
		<title>JBODs &amp; JBOFs</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=JBODs_%26_JBOFs&amp;diff=12113"/>
		<updated>2025-04-29T09:09:52Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in the &#039;&#039;&#039;Storage Settings &amp;gt; JBODs &amp;amp; JBOFs&#039;&#039;&#039; tab.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It’s used to obtain additional information about the disks in a JBOD or JBOF through external services, e.g. Redfish. For this reason, it is dedicated to disk enclosures with out-of-band management.&amp;lt;br/&amp;gt;&#039;&#039;&#039;The functionality only works on currently supported devices such as SUPERMICRO SSG-136R-N32JBF.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the case of SUPERMICRO SSG-136R-N32JBF, the Redfish service is used to gain more information about disks, so an account with this service will be needed. To link an enclosure to the service, click on the &amp;quot;&#039;&#039;&#039;Add device&#039;&#039;&#039;&amp;quot; button. A pop-up with a form will appear. Fill in the form.&lt;br /&gt;
&lt;br /&gt;
To link a device through the service, the following information must be provided:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Name (alias)&#039;&#039;&#039; - Set a name for the enclosure that allows the device to be recognized should there be a few machines of the same model.&lt;br /&gt;
*&#039;&#039;&#039;IP address / domain&#039;&#039;&#039; - The domain name or IP address connecting the device to the network.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039; - Enter the number of the port used to communicate with the device through the Redfish service. The default port number is 443. Change as needed.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039; - Enter the username to the Redfish service.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039; - Enter the password associated with the username entered above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After filling in every field, click the &amp;quot;&#039;&#039;&#039;Add&#039;&#039;&#039;&amp;quot; button. The system will then connect to the service and start scanning all available disks. This may take some time; the more disks an enclosure contains, the longer the scan takes. Once all disks are scanned, the information will be available in the disk details section. Additional data such as:&lt;br /&gt;
&lt;br /&gt;
*Name of the enclosure in which the disk is located,&lt;br /&gt;
*Number of the slot in which the disk is located&lt;br /&gt;
&lt;br /&gt;
will also be displayed generally (i.e., in pool&#039;s disk groups, pool wizard, etc.).&lt;br /&gt;
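Under the hood this relies on standard Redfish REST endpoints. Purely as an illustration (the address, port, and credentials below are placeholders), an enclosure’s chassis inventory can be queried with curl; drive resources hang off each chassis entry:&lt;br /&gt;

```shell
# Query the Redfish service root (HTTPS, default port 443; -k accepts a self-signed certificate)
curl -k -u admin:secret https://192.168.0.50:443/redfish/v1/

# List the chassis collection of the enclosure
curl -k -u admin:secret https://192.168.0.50:443/redfish/v1/Chassis
```
&lt;br /&gt;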
&lt;br /&gt;
&#039;&#039;&#039;NOTE!&#039;&#039;&#039; When the connection status changes, a rescan of all disks is required. This occurs, e.g.:&lt;br /&gt;
&lt;br /&gt;
*When a device configuration is changing,&lt;br /&gt;
*When the system is restarted,&lt;br /&gt;
*After network reconnection (when the connection has been lost), etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The connection status of the enclosure is displayed in the table in the &amp;quot;&#039;&#039;&#039;JBODs &amp;amp; JBOFs&#039;&#039;&#039;&amp;quot; tab at all times. Next to the connection status there’s a power state displayed that shows if the device is turned on.&amp;lt;br/&amp;gt;Every device that has been added can be edited, removed from the table, or selected to display its details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To do any of the above, use the context menu.&amp;lt;br/&amp;gt;The “&#039;&#039;&#039;Edit&#039;&#039;&#039;” option allows changing the device’s data or credentials.&amp;lt;br/&amp;gt;The “&#039;&#039;&#039;Details&#039;&#039;&#039;” option shows more information about an enclosure such as:&lt;br /&gt;
&lt;br /&gt;
*Name (alias)&lt;br /&gt;
*Model&lt;br /&gt;
*Vendor name&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The “&#039;&#039;&#039;Remove&#039;&#039;&#039;” option removes a device from the table. Removing a device disconnects it from the external service, and any additional information provided via the service will no longer be displayed. In some cases, the option to turn on the LED for disks in the JBOD/JBOF may also become disabled.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=NFS_service&amp;diff=10796</id>
		<title>NFS service</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=NFS_service&amp;diff=10796"/>
		<updated>2025-02-26T13:06:47Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With this function you can enable or disable the Network File System (NFS) service and set the NFS thread count. If a large number of clients access this NFS server and experience lag in their operations, try increasing the thread count. &#039;&#039;&#039;Enable NFS VAAI support&#039;&#039;&#039;: Speeds up virtual machine cloning by offloading copy/clone operations to the storage backend. Requires the virtual machine to be powered off.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up31_ZFS_Upgrade&amp;diff=12218</id>
		<title>Open-E JovianDSS ver.1.0 up31 ZFS Upgrade</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up31_ZFS_Upgrade&amp;diff=12218"/>
		<updated>2025-01-21T07:43:46Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Redirected page to File system upgrade&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[File system upgrade]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=IDs_range_conflict_within_the_cluster&amp;diff=12217</id>
		<title>IDs range conflict within the cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=IDs_range_conflict_within_the_cluster&amp;diff=12217"/>
		<updated>2025-01-20T14:23:09Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;&amp;lt;div&amp;gt;In a clustered environment connected to an Active Directory (AD) server, user IDs (UIDs) and group IDs (GIDs) must remain consistent across all nodes. However, using the ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;In a clustered environment connected to an Active Directory (AD) server, user IDs (UIDs) and group IDs (GIDs) must remain consistent across all nodes. However, using the autorid backend for mapping IDs can create problems during failover. When one node fails and another takes over, autorid may map user IDs differently, causing users to lose access to shared files and folders.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;If the system detects that different ID ranges are assigned to the same AD domain across cluster nodes, it will display the warning: &#039;&#039;&#039;&amp;quot;ID range conflict within the cluster.&amp;quot;&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
== Steps to resolve the problem ==&lt;br /&gt;
&amp;lt;div&amp;gt;Follow these steps to resolve the issue and ensure consistent ID mappings across the cluster:&amp;lt;/div&amp;gt;&lt;br /&gt;
#&#039;&#039;&#039;Identify the node with correct mapping&#039;&#039;&#039;:&amp;lt;br/&amp;gt;Log in to the node where user access is working correctly and user IDs are mapped as expected.&lt;br /&gt;
#&#039;&#039;&#039;Change the ID mapping backend&#039;&#039;&#039;:&amp;lt;br/&amp;gt;Update the ID mapping backend from autorid to rid+tdb. The rid+tdb backend assigns IDs based on the relative identifier (RID) from the AD, ensuring the same IDs are used across all cluster nodes, while also allowing for dynamic ID assignment through the tdb backend for greater flexibility. &amp;lt;u&amp;gt;&#039;&#039;&#039;Do not change the tdb backend ranges, so that the UID/GID ranges already assigned to domains are preserved&#039;&#039;&#039;&amp;lt;/u&amp;gt;.&lt;br /&gt;
#&#039;&#039;&#039;Synchronize settings across nodes&#039;&#039;&#039;:&amp;lt;br/&amp;gt;After switching the backend, the ID settings from the updated node will automatically synchronize with the other node in the cluster.&lt;br /&gt;
#&#039;&#039;&#039;Verify and check configuration&#039;&#039;&#039;:&amp;lt;br/&amp;gt;Check that users can still access their files and folders. Perform a failover test to confirm that permissions remain correct when a node takes over.&lt;br /&gt;
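The setting changed in step 2 corresponds to standard Samba idmap configuration. Purely as an illustration (the domain name and ranges are placeholders, and the actual values are managed by the system), an rid+tdb mapping in smb.conf has this shape:&lt;br /&gt;

```ini
[global]
   # Deterministic SID -> UID/GID mapping for the AD domain (identical on every node)
   idmap config EXAMPLE : backend = rid
   idmap config EXAMPLE : range = 100000-999999
   # Default (*) tdb backend kept with its existing ranges unchanged
   idmap config * : backend = tdb
   idmap config * : range = 3000-7999
```
&lt;br /&gt;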
&lt;br /&gt;
== Why use the rid+tdb backend? ==&lt;br /&gt;
&amp;lt;div&amp;gt;Switching to the rid+tdb backend provides a reliable and predictable way to map IDs across nodes, avoiding conflicts caused by the dynamic ID assignment of autorid. It is a proven solution for maintaining stable user ID mappings in clustered environments.&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File_system_upgrade&amp;diff=12216</id>
		<title>File system upgrade</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File_system_upgrade&amp;diff=12216"/>
		<updated>2025-01-16T10:22:50Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;After upgrading to a version with a newer ZFS filesystem, the following notification will be displayed upon first access:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;q&amp;gt;Zpools available for ZFS filesystem upgrade Upgrading Zpools to the latest ZFS file system is recommended. Although the file system upgrade is absolutely safe for your data and its integrity and will only take few minutes please be aware that this operation cannot be undone and accessing this zpool data will not be possible with older software versions. In order to upgrade a single Zpool, please use “Upgrade file system&amp;quot; from Zpool&#039;s option menu.&amp;lt;/q&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Additionally, the zpool itself will display the following zpool status:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;q&amp;gt;Some supported features are not enabled on the pool. The pool can still be used but it is recommended to upgrade it in order to fully utilize all system features. Action: Upgrade the pool using &amp;quot;Upgrade file system&amp;quot; in pool options menu. Once this is done, the pool will no longer be accessible by software that does not support new features.&amp;lt;/q&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;As prompted, expand the zpool options and choose &amp;quot;Upgrade file system&amp;quot;:&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Upgrade file system option.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The following window will appear. Type ‘upgrade’ and click the Upgrade button to proceed:&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Upgrade file system confirmation.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Once completed, the system will notify you that the zpool has been updated successfully.&amp;lt;/div&amp;gt;&lt;br /&gt;
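The GUI option corresponds to the standard ZFS upgrade command. As a sketch only (the pool name is a placeholder), the same operation from a shell would be:&lt;br /&gt;

```shell
# With no arguments, list pools that do not yet have all supported features enabled
zpool upgrade

# Enable all supported feature flags on one pool (this cannot be undone)
zpool upgrade Pool-0
```
&lt;br /&gt;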
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File_system_upgrade&amp;diff=12215</id>
		<title>File system upgrade</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File_system_upgrade&amp;diff=12215"/>
		<updated>2025-01-16T10:20:46Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;&amp;lt;div&amp;gt;After upgrading to a version with a newer ZFS filesystem, the following notification will be displayed upon first access:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt; &amp;lt;q&amp;gt;Zpools available for ZFS filesyste...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;After upgrading to a version with a newer ZFS filesystem, the following notification will be displayed upon first access:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;q&amp;gt;Zpools available for ZFS filesystem upgrade Upgrading Zpools to the latest ZFS file system is recommended. Although the file system upgrade is absolutely safe for your data and its integrity and will only take few minutes please be aware that this operation cannot be undone and accessing this zpool data will not be possible with older software versions. In order to upgrade a single Zpool, please use “Upgrade file system&amp;quot; from Zpool&#039;s option menu.&amp;lt;/q&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Additionally, the zpool itself will display the following zpool status:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;q&amp;gt;Some supported features are not enabled on the pool. The pool can still be used but it is recommended to upgrade it in order to fully utilize all system features. Action: Upgrade the pool using &amp;quot;Upgrade file system&amp;quot; in pool options menu. Once this is done, the pool will no longer be accessible by software that does not support new features.&amp;lt;/q&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;As prompted, expand the zpool options and choose &amp;quot;Upgrade file system&amp;quot;:&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Upgrade file system option.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The following window will appear. Type ‘upgrade’ and click the Upgrade button to proceed:&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Upgrade file system confirmation.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;Once completed, the system will notify you that the zpool has been updated successfully.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[Category:Help_topics]]&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Upgrade_file_system_confirmation.png&amp;diff=12214</id>
		<title>File:Upgrade file system confirmation.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Upgrade_file_system_confirmation.png&amp;diff=12214"/>
		<updated>2025-01-16T10:18:50Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Upgrade_file_system_option.png&amp;diff=12213</id>
		<title>File:Upgrade file system option.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Upgrade_file_system_option.png&amp;diff=12213"/>
		<updated>2025-01-16T10:18:40Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Redundancy_in_Disks_Groups&amp;diff=12196</id>
		<title>Redundancy in Disks Groups</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Redundancy_in_Disks_Groups&amp;diff=12196"/>
		<updated>2025-01-15T15:02:51Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Disk group redundancy refers to the ability of a zpool to maintain data integrity and availability in the event of disk failures. This is achieved through mirrored or RAID-Z configurations, which store multiple copies of data across different disks. When a disk fails or data corruption is detected, ZFS can use the redundant copies to repair or reconstruct the lost data, ensuring the system continues to operate without data loss.&lt;br /&gt;
&lt;br /&gt;
It is important not to mix different types of data groups (vdevs) within a storage zpool, as doing so can cause performance and reliability issues; it is strongly recommended to use only one vdev type consistently.&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 2-way mirror (2 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 2-way mirror accepts a single disk failure in a given vdev.&lt;br /&gt;
*The 2-way mirrors can be used for mission critical applications, but it is recommended not to exceed 12 vdevs in a zpool (recommended up to 12 x 2 = 24 disks for mission-critical applications and 24 x 2 = 48 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, the zpool performance increases with the number of vdevs in the pool. For mission-critical applications using more than 12 groups, it is recommended to use 3-way mirrors, RAID-Z2, or RAID-Z3.&lt;br /&gt;
*For mission critical applications it is not recommended to use HDDs bigger than 4TB.&lt;br /&gt;
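In ZFS terms, a pool of 2-way mirror data groups is built from mirror vdevs. A minimal command-line sketch with placeholder pool and disk names:&lt;br /&gt;

```shell
# Create a zpool of two 2-way mirror vdevs (four disks total)
zpool create tank mirror sda sdb mirror sdc sdd

# Replace a failed disk in one of the mirror vdevs
zpool replace tank sda sde

# Check redundancy state and resilvering progress
zpool status tank
```
&lt;br /&gt;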
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 3-way mirror (3 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 3-way mirror accepts up to two disk failures in a given vdev.&lt;br /&gt;
*3-way mirrors can be used for mission critical applications, but it is recommended not to exceed 16 vdevs in a zpool (recommended up to 16 x 3 = 48 disks for mission critical applications and 24 x 3 = 72 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance increases with the number of vdevs in a zpool. For mission-critical applications, it is recommended to use RAID-Z3.&lt;br /&gt;
*For mission critical applications it is not recommended to use HDDs bigger than 10TB.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 4-way mirror (4 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with number of mirror vdevs in the zpool.&lt;br /&gt;
*The 4-way mirror accepts up to three disks failures in a given vdev.&lt;br /&gt;
*The 4-way mirror is recommended for a Non-Shared HA Cluster that can be used for mission-critical applications.&lt;br /&gt;
*It is also recommended not to exceed 24 4-way mirror vdevs in a zpool, as damage to a single group results in the destruction of the entire zpool (up to 24 x 4 = 96 disks for mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, zpool performance increases with the number of vdevs in the pool.&lt;br /&gt;
*HDDs bigger than 16TB should be avoided for mission-critical applications.&lt;br /&gt;
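The mirror sizing guidance above can be summarized in a short sketch. This is an illustrative calculation only: the vdev limits are this article's recommendations, not ZFS constraints, and the helper name is hypothetical.

```python
# Illustrative sketch only: the vdev limits below are this article's
# recommendations for mirror pools, not hard ZFS constraints, and the
# helper name "max_disks" is hypothetical.

MIRROR_GUIDANCE = {
    # mirror width: (max vdevs, mission-critical; max vdevs, non-mission-critical)
    2: (12, 24),
    3: (16, 24),
    4: (24, 24),  # the article states only the mission-critical figure
}

def max_disks(mirror_width, mission_critical=True):
    """Recommended maximum number of disks in a zpool of N-way mirrors."""
    mc_vdevs, non_mc_vdevs = MIRROR_GUIDANCE[mirror_width]
    vdevs = mc_vdevs if mission_critical else non_mc_vdevs
    return vdevs * mirror_width

print(max_disks(2))         # 24 disks (12 x 2, mission-critical)
print(max_disks(2, False))  # 48 disks (24 x 2, non-mission-critical)
print(max_disks(3))         # 48 disks (16 x 3)
print(max_disks(4))         # 96 disks (24 x 4)
```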
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-1 (3-8 disks in a group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in a RAID-Z1 vdev.&lt;br /&gt;
*RAID-Z1 accepts one disk failure in a given vdev.&lt;br /&gt;
*RAID-Z1 can be used for non-mission-critical applications; it is not recommended to exceed 8 disks in a vdev. HDDs bigger than 4TB should be avoided.&lt;br /&gt;
*It is also not recommended to exceed 8 RAID-Z1 vdevs in a zpool, as damage to a single group results in the destruction of the entire zpool (up to 8 x 8 = 64 disks for non-mission-critical applications in a zpool).&lt;br /&gt;
*RAID-Z1 configuration is not possible in a Non-Shared HA Cluster configuration.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: zpool performance roughly doubles with 2 x RAID-Z1 vdevs of 4 disks each compared to a single RAID-Z1 vdev with 8 disks.&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-2 (4-24 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z2 group.&lt;br /&gt;
*The RAID-Z2 accepts up to two disk failures in a given vdev.&lt;br /&gt;
*The RAID-Z2 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 12 disks in a vdev for mission-critical and 24 disks for non-mission-critical applications.&lt;br /&gt;
*It is also not recommended to exceed 16 RAID-Z2 groups in a zpool, as damage to a single group results in the destruction of the entire zpool (up to 16 x 12 = 192 disks for mission-critical applications and 16 x 24 = 384 disks for non-mission-critical applications in a zpool). HDDs bigger than 16TB should be avoided.&lt;br /&gt;
*If tolerance of three disk failures in a vdev is required, it is recommended to use RAID-Z3.&lt;br /&gt;
*RAID-Z2 configuration is not possible in a Non-Shared HA Cluster configuration.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: pool performance roughly doubles with 2 x RAID-Z2 vdevs of 6 disks each compared to a single RAID-Z2 vdev with 12 disks.&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-3 (5-48 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z3 group.&lt;br /&gt;
*The RAID-Z3 accepts up to three disk failures in a given vdev.&lt;br /&gt;
*The RAID-Z3 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 24 disks in a vdev for mission-critical and 48 disks for non-mission-critical applications.&lt;br /&gt;
*It is also not recommended to exceed 24 RAID-Z3 groups in a zpool, as damage to a single group results in the destruction of the entire zpool (up to 24 x 24 = 576 disks for mission-critical applications and 24 x 48 = 1152 disks for non-mission-critical applications in a zpool). HDDs bigger than 16TB should be avoided.&lt;br /&gt;
*RAID-Z3 configuration is not possible in a Non-Shared HA Cluster configuration.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: zpool performance roughly doubles with 2 x RAID-Z3 vdevs of 12 disks each compared to a single RAID-Z3 vdev with 24 disks.&lt;br /&gt;
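As a rough illustration of the RAID-Z trade-offs described above, the following sketch computes the data-bearing disks per vdev. The helper names are hypothetical, and real usable capacity also depends on padding, ashift, and metadata overhead.

```python
# Rough illustration of the RAID-Z trade-offs above. Helper names are
# hypothetical; real usable capacity also depends on padding, ashift
# and metadata overhead.

RAIDZ_PARITY = {"raidz1": 1, "raidz2": 2, "raidz3": 3}

def data_disks(level, disks_in_vdev):
    """Data-bearing disks in one RAID-Z vdev (total minus parity)."""
    parity = RAIDZ_PARITY[level]
    # Minimum group size is parity + 2 (3 for RAID-Z1, 4 for RAID-Z2, 5 for RAID-Z3).
    assert disks_in_vdev >= parity + 2, "group too small"
    return disks_in_vdev - parity

# One 8-disk RAID-Z1 vdev vs. two 4-disk RAID-Z1 vdevs: splitting costs
# one data disk but roughly doubles pool performance, since performance
# scales with the number of vdevs.
print(data_disks("raidz1", 8))      # 7 data disks in one vdev
print(2 * data_disks("raidz1", 4))  # 6 data disks across two vdevs
print(data_disks("raidz2", 12))     # 10 data disks
print(data_disks("raidz3", 24))     # 21 data disks
```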
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Write Log redundancy level ==&lt;br /&gt;
&lt;br /&gt;
*In both single nodes and HA clusters, it should be configured as a 2-way mirror.&lt;br /&gt;
*When choosing a disk model for the Write Log, make sure to take the endurance parameter into consideration. Selecting a disk classified by the manufacturer as write intensive is strongly recommended.&lt;br /&gt;
*When selecting a disk size for the write log, consider the amount of data that can reach the server within three consecutive ZFS transactions, e.g. based on the bandwidth of the network card used for data transfer. If the transaction length is set to 5 seconds (default), the write log device should be able to accommodate the data that can be transferred within three transaction groups, i.e. 15 seconds of writing. A larger disk does not make sense economically, while a smaller one can become a performance bottleneck during synchronous writes. &#039;&#039;&#039;Practically speaking, 100GB for a write log should be more than enough.&#039;&#039;&#039;&lt;br /&gt;
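The sizing rule above is simple arithmetic. A back-of-the-envelope sketch, assuming a line-rate network link and the default 5-second transaction length (the function name is illustrative, not an official formula):

```python
# Back-of-the-envelope sketch of the write log sizing rule above,
# assuming a line-rate network link and the default 5-second
# transaction length. The function name "slog_size_gib" is illustrative.

def slog_size_gib(link_gbit_s, txg_seconds=5, txg_groups=3):
    """GiB of data a link can deliver during txg_groups transaction groups."""
    bytes_per_second = link_gbit_s * 1e9 / 8   # ignores protocol overhead
    total_bytes = bytes_per_second * txg_seconds * txg_groups
    return total_bytes / 2**30

# A 10 GbE link must absorb about 15 seconds of writes:
print(round(slog_size_gib(10), 1))   # 17.5 GiB, so 100 GB is ample
print(round(slog_size_gib(100), 1))  # 174.6 GiB for a 100 GbE link
```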
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Read Cache redundancy level ==&lt;br /&gt;
&lt;br /&gt;
Read Cache disks can only be configured as single disks, but any number of them can be configured. In an HA cluster, Read Cache can only be configured as local disks on both nodes of the cluster.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Special devices and deduplication group redundancy level ==&lt;br /&gt;
&lt;br /&gt;
In both single nodes and HA clusters, it should be configured as a 2-way mirror.&lt;br /&gt;
&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Zpool_wizard&amp;diff=9891</id>
		<title>Zpool wizard</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Zpool_wizard&amp;diff=9891"/>
		<updated>2025-01-15T15:02:22Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;A &#039;&#039;&#039;zpool&#039;&#039;&#039; is the foundational storage construct in ZFS. It serves as a logical storage pool that combines multiple physical storage devices (disks) into &#039;&#039;&#039;vdevs&#039;&#039;&#039; (virtual devices), which collectively form the unified zpool. From this zpool, ZFS creates and manages &#039;&#039;&#039;datasets&#039;&#039;&#039; (file systems) and &#039;&#039;&#039;zvols&#039;&#039;&#039; (block storage volumes).&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The zpool wizard is made up of the following steps:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;1. Add data group&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section provides information about all storage devices connected to the storage server. To add the first Data Group to your Zpool, follow these steps:&lt;br /&gt;
&lt;br /&gt;
#Select the desired disks from the list on the left.&lt;br /&gt;
#Choose the redundancy type.&lt;br /&gt;
#Click the &amp;quot;Add group&amp;quot; button.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The available redundancy options for groups are as follows:&lt;br /&gt;
*&#039;&#039;&#039;Single&#039;&#039;&#039;: Each disk operates as an independent drive with no redundancy.&lt;br /&gt;
*&#039;&#039;&#039;Mirror&#039;&#039;&#039;: All data written to one device in the mirror is automatically replicated to another device, ensuring data redundancy. A minimum of two disks is required to create a mirrored vdev.&lt;br /&gt;
**&#039;&#039;&#039;Mirror (Single Group)&#039;&#039;&#039;: All selected disks will be combined into a single mirrored group.&lt;br /&gt;
**&#039;&#039;&#039;Mirror (Multiple Groups)&#039;&#039;&#039;: The selected disks will be paired into multiple mirrored groups, each consisting of two disks.&lt;br /&gt;
*&#039;&#039;&#039;RAIDZ-1&#039;&#039;&#039;: Allows for the failure of one disk per RAIDZ-1 group without losing data. A minimum of three disks is required for a RAIDZ-1 group.&lt;br /&gt;
*&#039;&#039;&#039;RAIDZ-2&#039;&#039;&#039;: Allows for the failure of two disks per RAIDZ-2 group without losing data. A minimum of four disks is required for a RAIDZ-2 group.&lt;br /&gt;
*&#039;&#039;&#039;RAIDZ-3&#039;&#039;&#039;: Allows for the failure of three disks per RAIDZ-3 group without losing data. A minimum of five disks is required for a RAIDZ-3 group.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To learn more about vdev types, please refer to the following article:&amp;amp;nbsp;[[Redundancy in Disks Groups|Redundancy_in_Disks_Groups]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
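The minimum disk counts listed above can be expressed as a small validation sketch. This is a hypothetical helper for illustration, not part of the product's code:

```python
# Hypothetical validation helper (not product code) encoding the
# minimum disk counts per redundancy type listed above.

MIN_DISKS = {
    "single": 1,
    "mirror": 2,
    "raidz1": 3,
    "raidz2": 4,
    "raidz3": 5,
}

def can_add_group(redundancy, selected_disks):
    """True if the selection meets the wizard's minimum for this type."""
    return selected_disks >= MIN_DISKS[redundancy]

print(can_add_group("raidz2", 4))  # True: 4 disks is the RAIDZ-2 minimum
print(can_add_group("raidz3", 4))  # False: RAIDZ-3 needs at least 5 disks
```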
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;2. Add write log&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;This feature allows you to configure the write log function using a chosen redundancy level (either a single drive or a mirror). The write log utilizes a separate intent log (SLOG) device. A fast SSD/NVME should be used for this vdev.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key points to consider:&lt;br /&gt;
*If multiple log devices are specified, write operations are load-balanced between the devices.&lt;br /&gt;
*Log devices can be configured with redundancy by using mirrors to enhance fault tolerance.&lt;br /&gt;
*RAIDZ vdev types are not supported for the intent log.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;This ensures efficient and reliable write operations while leveraging the selected redundancy level.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;3. Add read cache&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;A cache device is used to store frequently accessed storage pool data, providing an additional layer of caching between main memory and disk. These devices cannot be configured as mirrors or RAIDZ groups. A fast SSD/NVME should be used for this vdev.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key benefits and considerations:&lt;br /&gt;
*Cache devices are particularly useful for &#039;&#039;&#039;read-heavy workloads&#039;&#039;&#039; where the working dataset size exceeds the capacity of main memory.&lt;br /&gt;
*By utilizing cache devices, a larger portion of the working dataset can be served from low-latency storage, improving performance significantly.&lt;br /&gt;
*The greatest performance improvements are seen in workloads characterized by &#039;&#039;&#039;random reads&#039;&#039;&#039; of primarily static content.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Adding a read cache helps enhance performance and reduces latency for storage systems with high read demands.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;4. Add special devices group&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;Special devices are used to store specific types of data, such as metadata or small files, on dedicated storage devices separate from the main data pool. A fast SSD/NVME should be used for this vdev.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key features and benefits:&lt;br /&gt;
*Storing metadata on special devices improves performance for metadata-intensive operations, such as file lookups and directory traversals.&lt;br /&gt;
*Small files below a certain size threshold can also be stored on these devices, enhancing read and write speeds for such workloads.&lt;br /&gt;
*Special devices are particularly beneficial for environments with a large number of small files or high metadata activity.&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;Using special devices optimizes the overall performance of the ZFS pool by offloading critical metadata and small-file operations to faster storage.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;5. Add deduplication group&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;A deduplication group is a dedicated storage group used to hold deduplication tables, allowing them to be kept separate from the special device class.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key features and considerations:&lt;br /&gt;
*Storing deduplication tables in a dedicated group improves the efficiency of deduplication processes by isolating them from other metadata operations.&lt;br /&gt;
*This configuration provides flexibility in optimizing storage layout based on workload requirements.&lt;br /&gt;
*Using a deduplication group is particularly beneficial for systems with high deduplication demands, ensuring better performance and management.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;This setup enhances deduplication performance while maintaining a clear separation of metadata and deduplication operations.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;6. Add spare disks&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;A spare disk is a special pseudo-vdev used to track available spare devices for a zpool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Using spare disks enhances the reliability of the storage pool by allowing seamless drive replacement and reducing the risk of data loss.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;7. Configuration&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;During this step, you can configure the Zpool by naming it and enabling additional features if required.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key configurations:&lt;br /&gt;
*&#039;&#039;&#039;Zpool Name&#039;&#039;&#039;: Assign a unique and descriptive name to the Zpool for easy identification.&lt;br /&gt;
*&#039;&#039;&#039;Enable AutoTRIM&#039;&#039;&#039;: If supported by your devices, enable the AutoTRIM feature to automatically reclaim unused space. AutoTRIM helps optimize the performance and lifespan of SSDs by informing them when blocks are no longer in use.&lt;br /&gt;
*&#039;&#039;&#039;Small blocks policy settings&#039;&#039;&#039;: available if a special devices group has been configured in Step 4. When the small block size is set for the pool, all datasets inherit this value by default. It can be changed for a particular dataset in its settings.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
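The interaction between the small blocks policy and the dataset record size can be sketched as follows. This models only the behavior described above and is an assumption for illustration, not the product's internals:

```python
# Hedged sketch of the documented small-blocks behavior (an assumption
# based on this description, not product internals): a block is written
# to the special devices group when its size does not exceed the
# configured small-block threshold.

def goes_to_special(block_size, small_block_threshold, record_size):
    # Warning case: a threshold at or above the dataset record size
    # offloads every block to the special devices group.
    if small_block_threshold >= record_size:
        return True
    return small_block_threshold >= block_size

KIB = 1024
print(goes_to_special(16 * KIB, 64 * KIB, 128 * KIB))    # True: small block
print(goes_to_special(128 * KIB, 64 * KIB, 128 * KIB))   # False: full record
print(goes_to_special(128 * KIB, 128 * KIB, 128 * KIB))  # True: threshold equals record size
```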
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;8. Summary&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;This step provides a summary of the zpool configuration, detailing the arrangement of disk groups and their roles within the pool. Click ‘Add zpool’ to create a zpool.&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span&amp;gt;Video tutorial related to this article&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
{{#ev:youtube|aJEZg-F6WQQ}}&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Small_blocks_policy_settings&amp;diff=12211</id>
		<title>Small blocks policy settings</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Small_blocks_policy_settings&amp;diff=12211"/>
		<updated>2025-01-15T11:58:03Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;&amp;lt;div&amp;gt;This feature is available only when the special devices group exists in the pool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Devices assigned to the special devices group are designated for storing ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This feature is available only when the special devices group exists in the pool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Devices assigned to the special devices group are designated for storing specific data, including metadata, indirect blocks of user data, and deduplication tables. Additionally, devices in the special devices group can be configured to handle small file blocks that are not listed above by applying the small blocks policy.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The size of the small block refers to the size of a single block of data configured on the dataset. The maximum size of such blocks can be set for each dataset under the “Record size” option.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Deduplication tables can alternatively be placed in a separate group known as the deduplication group.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;WARNING&#039;&#039;&#039;: If the size of the small block is greater than or equal to the value of record size on the dataset, all the blocks will be offloaded to the special devices group.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The small block size can be configured for the whole Pool or for each dataset separately. Options to choose range from 4 KiB to 16 MiB.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Configuring a bigger size for the small block policy can be helpful when the administrator expects to have a substantial number of small files that require low access times, kept separate from the bigger files. 
Using this option is recommended only if the administrator understands what kind of data will be stored in the configured datasets and knows the exact maximum size of offloaded files, to avoid accidental data offload and congestion of the special devices.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[Category:Help_topics]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Critical_system_error_response_policy&amp;diff=11462</id>
		<title>Critical system error response policy</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Critical_system_error_response_policy&amp;diff=11462"/>
		<updated>2025-01-15T11:39:50Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039;&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;Please be aware that these settings are applicable only for a single node configuration or for a cluster configuration with the other node being unavailable.&amp;lt;br/&amp;gt;&#039;&#039;&#039;For cluster configuration with both nodes available the policy is set to immediate reboot in all cases&#039;&#039;&#039;.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;A system reboot may be necessary when a critical error is detected. The administrator may choose to handle different errors in a different manner.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Possible critical errors are divided into three categories:&lt;br /&gt;
*&#039;&#039;&#039;ZFS pool I/O suspend&#039;&#039;&#039;: errors from this group are raised in case an uncorrectable I/O failure is encountered during read/write operation to the Pool. The I/O operation is suspended and the system awaits a reboot.&lt;br /&gt;
*&#039;&#039;&#039;Kernel oops or bug&#039;&#039;&#039;: a kernel oops is defined as a deviation from the correct behavior of the Linux kernel that produces a certain error log. Such an error is not fatal to the system but may endanger the system’s stability. A kernel oops often precedes a kernel panic, causing the system to shut down immediately. A kernel bug refers to an internal error in the kernel code. Such errors put the system integrity at risk. It is highly recommended that a reboot is performed immediately to avoid unexpected failures.&lt;br /&gt;
*&#039;&#039;&#039;Out-of-memory error&#039;&#039;&#039;: this error, abbreviated as OOM, refers to the state of the system where no additional memory can be allocated for use by programs or the operating system. Memory must be freed up or added to restore system operation. Once this error occurs, the system enters an unresponsive state until the memory issue is resolved. It is highly recommended to reboot the system as soon as possible.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;For each of the mentioned categories the following behavior patterns can be configured:&lt;br /&gt;
*&#039;&#039;&#039;Immediate&#039;&#039;&#039;: system will reboot the machine immediately after the error occurs (the event will not be recorded in the event viewer).&lt;br /&gt;
*&#039;&#039;&#039;Automatic&#039;&#039;&#039;: system will restart in 30 seconds from when the errors appear.&lt;br /&gt;
*&#039;&#039;&#039;Manual&#039;&#039;&#039;: system will prompt for manual restart.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Critical_IO_Errors&amp;diff=12210</id>
		<title>Critical IO Errors</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Critical_IO_Errors&amp;diff=12210"/>
		<updated>2025-01-15T11:38:36Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Da-F moved page Critical IO Errors to Critical system error response policy&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Critical system error response policy]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Critical_system_error_response_policy&amp;diff=11461</id>
		<title>Critical system error response policy</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Critical_system_error_response_policy&amp;diff=11461"/>
		<updated>2025-01-15T11:38:36Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Da-F moved page Critical IO Errors to Critical system error response policy&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If JovianDSS encounters hardware errors, a reboot will be needed.&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
The reboot may be handled in one of three ways:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Immediate&#039;&#039;&#039; - The system will reboot the machine immediately after a pool enters the I/O-suspended state. No event will be recorded about the reason for it. This option is recommended for cluster configurations because it immediately triggers the failover and is therefore the fastest way to restore access to the data.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Automatic&#039;&#039;&#039; - The system will restart in 30 seconds from when the errors appear.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Manual&#039;&#039;&#039; - The system will prompt for a manual restart.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_replication_task&amp;diff=10888</id>
		<title>Backup &amp; Recovery replication task</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_replication_task&amp;diff=10888"/>
		<updated>2025-01-15T11:08:06Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Backup &amp;amp; Recovery: Overview Tabs ==&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Tasks&#039;&#039;&#039;: View the list of all tasks with their current statuses.&lt;br /&gt;
#&#039;&#039;&#039;Destination Servers&#039;&#039;&#039;: View all added destination servers. Use the &#039;&#039;&#039;Add Server&#039;&#039;&#039; button to configure a new server outside of the Backup Task Wizard.&lt;br /&gt;
#&#039;&#039;&#039;vCenter/vSphere Servers&#039;&#039;&#039;: View all added vCenter/vSphere servers. Use the &#039;&#039;&#039;Add Server&#039;&#039;&#039; button to configure a new server outside of the Backup Task Wizard.&lt;br /&gt;
&lt;br /&gt;
For additional support or detailed guidance, refer to the article [[On- and Off-site Data Protection|On-_and_Off-site_Data_Protection]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
== Backup &amp;amp; Recovery: Creating a Replication Task ==&lt;br /&gt;
&lt;br /&gt;
To create a replication task, navigate to &#039;&#039;&#039;Backup &amp;amp; Recovery&#039;&#039;&#039; and click on the &#039;&#039;&#039;Add Replication Task&#039;&#039;&#039; button. This launches the Backup Task Wizard, which consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Source Configuration ===&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Resource Path&#039;&#039;&#039;: Browse and select the ZVOLs or datasets to be backed up. Confirm your selection by clicking &#039;&#039;&#039;Apply&#039;&#039;&#039;.&lt;br /&gt;
#&#039;&#039;&#039;Retention Interval Plan&#039;&#039;&#039;: Specify how often snapshots should be taken and how long they should be retained.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 2: Destination Configuration ===&lt;br /&gt;
&lt;br /&gt;
*If the &#039;&#039;&#039;Toggle Bar&#039;&#039;&#039; is disabled, no destination will be configured. Enable the toggle bar to activate &#039;&#039;&#039;Destination 1&#039;&#039;&#039;.&lt;br /&gt;
*The destination server can be either:&lt;br /&gt;
**&#039;&#039;&#039;Local Server&#039;&#039;&#039;: The same machine as the source.&lt;br /&gt;
**&#039;&#039;&#039;Remote Server&#039;&#039;&#039;: A different server. To configure a remote server:&lt;br /&gt;
**#Select &#039;&#039;&#039;Add New Server&#039;&#039;&#039;.&lt;br /&gt;
**#Provide the following details:&lt;br /&gt;
**#*&#039;&#039;&#039;IP Address/Domain&#039;&#039;&#039;&lt;br /&gt;
**#*&#039;&#039;&#039;Port&#039;&#039;&#039; (default: 40000)&lt;br /&gt;
**#*&#039;&#039;&#039;Password&#039;&#039;&#039;&lt;br /&gt;
**#*&#039;&#039;&#039;Description&#039;&#039;&#039; (optional)&lt;br /&gt;
**#After adding the server, select the appropriate &#039;&#039;&#039;Resource Path&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;&#039;The resource path cannot have iSCSI targets attached (for ZVOLs) or shared datasets.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Specify the &#039;&#039;&#039;Retention Interval Plan&#039;&#039;&#039; for the destination.&lt;br /&gt;
*To configure additional destinations, click &#039;&#039;&#039;Add Another Destination&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
For detailed explanations of these options, refer to the article [[On- and Off-site Data Protection|On-_and_Off-site_Data_Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 3: vCenter/vSphere Server Integration ===&lt;br /&gt;
&lt;br /&gt;
*Add a vCenter/vSphere server to enable consistent snapshots.&lt;br /&gt;
&lt;br /&gt;
For detailed instructions, refer to the article [[On- and Off-site Data Protection|On-_and_Off-site_Data_Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 4: Task Properties ===&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Task Description&#039;&#039;&#039;: Create a custom description for the task.&lt;br /&gt;
#&#039;&#039;&#039;Enable MBuffer&#039;&#039;&#039;: Buffer the data stream on the source and destination to prevent buffer underruns. Configure:&lt;br /&gt;
#*&#039;&#039;&#039;Buffer Size&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Rate Limit&#039;&#039;&#039;&lt;br /&gt;
#&#039;&#039;&#039;Send Compressed Data&#039;&#039;&#039;: Enable this option to transfer compressed data directly without decompression, which speeds up the process and reduces network bandwidth usage.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 5: Summary ===&lt;br /&gt;
&lt;br /&gt;
*Review a summary of the configured settings.&lt;br /&gt;
*Click &#039;&#039;&#039;Add&#039;&#039;&#039; to finalize the task.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_replication_task&amp;diff=10887</id>
		<title>Backup &amp; Recovery replication task</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_replication_task&amp;diff=10887"/>
		<updated>2025-01-15T11:07:27Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Backup &amp;amp; Recovery: Overview Tabs ==&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Tasks&#039;&#039;&#039;: View the list of all tasks with their current statuses.&lt;br /&gt;
#&#039;&#039;&#039;Destination Servers&#039;&#039;&#039;: View all added destination servers. Use the &#039;&#039;&#039;Add Server&#039;&#039;&#039; button to configure a new server outside of the Backup Task Wizard.&lt;br /&gt;
#&#039;&#039;&#039;vCenter/vSphere Servers&#039;&#039;&#039;: View all added vCenter/vSphere servers. Use the &#039;&#039;&#039;Add Server&#039;&#039;&#039; button to configure a new server outside of the Backup Task Wizard.&lt;br /&gt;
&lt;br /&gt;
For additional support or detailed guidance, refer to the article [[On- and Off-site Data Protection|On-_and_Off-site_Data_Protection]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
== Backup &amp;amp; Recovery: Creating a Replication Task ==&lt;br /&gt;
&lt;br /&gt;
To create a replication task, navigate to &#039;&#039;&#039;Backup &amp;amp; Recovery&#039;&#039;&#039; and click on the &#039;&#039;&#039;Add Replication Task&#039;&#039;&#039; button. This launches the Backup Task Wizard, which consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Source Configuration ===&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Resource Path&#039;&#039;&#039;: Browse and select the ZVOLs or datasets to be backed up. Confirm your selection by clicking &#039;&#039;&#039;Apply&#039;&#039;&#039;.&lt;br /&gt;
#&#039;&#039;&#039;Retention Interval Plan&#039;&#039;&#039;: Specify how often snapshots should be taken and how long they should be retained.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 2: Destination Configuration ===&lt;br /&gt;
&lt;br /&gt;
*If the &#039;&#039;&#039;Toggle Bar&#039;&#039;&#039; is disabled, no destination will be configured. Enable the toggle bar to activate &#039;&#039;&#039;Destination 1&#039;&#039;&#039;.&lt;br /&gt;
*The destination server can be either:&lt;br /&gt;
**&#039;&#039;&#039;Local Server&#039;&#039;&#039;: The same machine as the source.&lt;br /&gt;
**&#039;&#039;&#039;Remote Server&#039;&#039;&#039;: A different server. To configure a remote server:&lt;br /&gt;
**#Select &#039;&#039;&#039;Add New Server&#039;&#039;&#039;.&lt;br /&gt;
**#Provide the following details:&lt;br /&gt;
**#*&#039;&#039;&#039;IP Address/Domain&#039;&#039;&#039;&lt;br /&gt;
**#*&#039;&#039;&#039;Port&#039;&#039;&#039; (default: 40000)&lt;br /&gt;
**#*&#039;&#039;&#039;Password&#039;&#039;&#039;&lt;br /&gt;
**#*&#039;&#039;&#039;Description&#039;&#039;&#039; (optional)&lt;br /&gt;
**#After adding the server, select the appropriate &#039;&#039;&#039;Resource Path&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;&#039;The resource path cannot have iSCSI targets attached (for ZVOLs) or shared datasets.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Specify the &#039;&#039;&#039;Retention Interval Plan&#039;&#039;&#039; for the destination.&lt;br /&gt;
*To configure additional destinations, click &#039;&#039;&#039;Add Another Destination&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
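Before adding a remote destination in the wizard, it can save time to confirm that the server actually answers on the replication port (40000 by default). The following is a generic TCP reachability check using only the Python standard library, not an Open-E tool; the address is a placeholder:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address; 40000 is the wizard's default replication port.
print(tcp_reachable("192.0.2.10", 40000, timeout=2.0))
```

A `False` result points at a firewall or addressing problem to resolve before the wizard's connection test fails with less context.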
For detailed explanations of these options, refer to the article [[On- and Off-site Data Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 3: vCenter/vSphere Server Integration ===&lt;br /&gt;
&lt;br /&gt;
*Add a vCenter/vSphere server to enable consistent snapshots.&lt;br /&gt;
&lt;br /&gt;
For detailed instructions, refer to the article [[On- and Off-site Data Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 4: Task Properties ===&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Task Description&#039;&#039;&#039;: Create a custom description for the task.&lt;br /&gt;
#&#039;&#039;&#039;Enable MBuffer&#039;&#039;&#039;: Buffer the data stream on the source and destination to prevent buffer underruns. Configure:&lt;br /&gt;
#*&#039;&#039;&#039;Buffer Size&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Rate Limit&#039;&#039;&#039;&lt;br /&gt;
#&#039;&#039;&#039;Send Compressed Data&#039;&#039;&#039;: Enable this option to transfer compressed data directly without decompression, which speeds up the process and reduces network bandwidth usage.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
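The practical effect of the Rate Limit setting can be estimated with simple arithmetic: the minimum transfer time is the amount of data divided by the configured rate. A sketch with hypothetical numbers (not product defaults):

```python
def transfer_time_s(size_bytes: int, rate_bytes_per_s: int) -> float:
    """Lower bound on transfer duration under a fixed rate limit."""
    return size_bytes / rate_bytes_per_s

# Hypothetical: 500 GiB of changed data at a 100 MiB/s rate limit.
size = 500 * 1024**3
rate = 100 * 1024**2
print(transfer_time_s(size, rate))  # 5120.0 seconds (about 85 minutes)
```

This kind of estimate helps check that a chosen rate limit still lets each replication run finish before the next scheduled snapshot.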
=== Step 5: Summary ===&lt;br /&gt;
&lt;br /&gt;
*Review a summary of the configured settings.&lt;br /&gt;
*Click &#039;&#039;&#039;Add&#039;&#039;&#039; to finalize the task.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=12208</id>
		<title>Backup &amp; Recovery Esx server</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=12208"/>
		<updated>2025-01-15T10:05:52Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;&amp;lt;div&amp;gt;Snapshot feature can be integrated with VMware ESX/vSphere snapshots. To integrate a new VMware server with click on the “Add server” option.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt; File:V-cent...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;The Snapshot feature can be integrated with VMware ESX/vSphere snapshots. To integrate a new VMware server, click the “Add server” option.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:V-center add server option.png|800px|V-center add server option.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address&#039;&#039;&#039;: provide the IP address or a hostname associated with your VMware server.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 443 is set by default. This value should not be changed unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039;: provide the username for the VMware user you wish to use during integration. It is recommended to use the root user for this purpose.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the password for the VMware user account specified in the previous step.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:Add VMware server form.png|450px|Add VMware server form.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;The integrated server will be visible in the table below. The user can browse the configured datastores and Virtual Machines by going to Options &amp;gt; Details.&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Details of WMware server.png|800px|Details of WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:VMware server details.png|450px|VMware server details.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;It is also possible to remove the server from integration by clicking the Remove option. &amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Remove WMware server.png|800px|Remove WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Help_topics]]&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Add_VMware_server_form.png&amp;diff=12207</id>
		<title>File:Add VMware server form.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Add_VMware_server_form.png&amp;diff=12207"/>
		<updated>2025-01-15T10:03:42Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:VMware_server_details.png&amp;diff=12206</id>
		<title>File:VMware server details.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:VMware_server_details.png&amp;diff=12206"/>
		<updated>2025-01-15T10:03:17Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Details_of_WMware_server.png&amp;diff=12205</id>
		<title>File:Details of WMware server.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Details_of_WMware_server.png&amp;diff=12205"/>
		<updated>2025-01-15T10:03:02Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Remove_WMware_server.png&amp;diff=12204</id>
		<title>File:Remove WMware server.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Remove_WMware_server.png&amp;diff=12204"/>
		<updated>2025-01-15T10:02:21Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:V-center_add_server_option.png&amp;diff=12203</id>
		<title>File:V-center add server option.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:V-center_add_server_option.png&amp;diff=12203"/>
		<updated>2025-01-15T10:02:03Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=12201</id>
		<title>Backup &amp; Recovery destination server</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=12201"/>
		<updated>2025-01-14T13:24:20Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;Remote servers that are added as snapshot targets are called Destination servers. To add one, click the “Add server” button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server option.png|none|800px|Add server option.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address/domain&#039;&#039;&#039;: provide the IP address or a hostname associated with your Destination server.&amp;lt;br/&amp;gt;&#039;&#039;&#039;&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: If the snapshot target is an HA cluster, it is highly recommended to use a Virtual IP configured on a Pool. This keeps the connection stable even after the pool is exported to the other node&#039;&#039;&#039;.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 40000 is set by default. This value should not be changed unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the administrator password of the remote server.&lt;br /&gt;
*&#039;&#039;&#039;Description&#039;&#039;&#039;: This field is optional. Provide a short description of your server to identify it in the future.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server form.png|none|450px|Add server form.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Destination servers configured here will be available during the Destination configuration step of the Replication task wizard.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;If you wish to remove the Destination server, click the “Remove” button. &#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Remove server option.png|none|800px|Remove server option.png]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[Category:Help_topics]]&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=12200</id>
		<title>Backup &amp; Recovery destination server</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=12200"/>
		<updated>2025-01-14T13:23:49Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Created page with &amp;quot;&amp;lt;div&amp;gt;Remote servers that are added as snapshot targets are called Destination servers. To add one, click the “Add server” button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;File:Add server option.p...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;Remote servers that are added as snapshot targets are called Destination servers. To add one, click the “Add server” button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server option.png|none|800px|Add server option.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address/domain&#039;&#039;&#039;: provide the IP address or a hostname associated with your Destination server.&amp;lt;br/&amp;gt;&#039;&#039;&#039;&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: If the snapshot target is an HA cluster, it is highly recommended to use a Virtual IP configured on a Pool. This keeps the connection stable even after the pool is exported to the other node&#039;&#039;&#039;.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 40000 is set by default. This value should not be changed unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the administrator password of the remote server.&lt;br /&gt;
*&#039;&#039;&#039;Description&#039;&#039;&#039;: This field is optional. Provide a short description of your server to identify it in the future.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server form.png|none|450px|Add server form.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Destination servers configured here will be available during the Destination configuration step of the Replication task wizard.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;If you wish to remove the Destination server, click the “Remove” button. &#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Remove server option.png|none|800px|Remove server option.png]]&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Add_server_form.png&amp;diff=12199</id>
		<title>File:Add server form.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Add_server_form.png&amp;diff=12199"/>
		<updated>2025-01-14T13:22:47Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Add_server_option.png&amp;diff=12198</id>
		<title>File:Add server option.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Add_server_option.png&amp;diff=12198"/>
		<updated>2025-01-14T13:22:23Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Remove_server_option.png&amp;diff=12197</id>
		<title>File:Remove server option.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Remove_server_option.png&amp;diff=12197"/>
		<updated>2025-01-14T13:22:09Z</updated>

		<summary type="html">&lt;p&gt;Da-F: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Ad-structure.png&amp;diff=12115</id>
		<title>File:Ad-structure.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Ad-structure.png&amp;diff=12115"/>
		<updated>2024-12-19T12:44:52Z</updated>

		<summary type="html">&lt;p&gt;Da-F: Da-F uploaded a new version of &amp;amp;quot;File:Ad-structure.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Da-F</name></author>
	</entry>
</feed>