<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.open-e.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ma-W</id>
	<title>Open-E Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.open-e.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ma-W"/>
	<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/Special:Contributions/Ma-W"/>
	<updated>2026-04-30T19:07:44Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Small_blocks_policy_settings&amp;diff=12212</id>
		<title>Small blocks policy settings</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Small_blocks_policy_settings&amp;diff=12212"/>
		<updated>2025-01-21T09:49:10Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This feature is available only when the special devices group exists in the pool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Devices assigned to the special devices group are designated for storing specific data, including metadata, indirect blocks of user data, and deduplication tables. Additionally, devices in the special devices group can be configured to handle small file blocks that are not listed above by applying the small blocks policy.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The size of the small block refers to the size of a single block of data configured on the dataset. The maximum size of such blocks can be set for each dataset under the “Record size” option.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Deduplication tables can alternatively be placed in a separate group known as the deduplication group.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;WARNING&#039;&#039;&#039;: If the size of the small block is greater than or equal to the record size on the dataset, all the blocks will be offloaded to the special devices group.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The small block size can be configured for the whole Pool or for each dataset separately. The available options range from 4 KiB to 16 MiB.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Configuring a larger size for the small blocks policy can be helpful when the administrator expects a substantial number of small files that require low access times to be kept separate from the larger files.
Using this option is recommended only if the administrator knows what kind of data will be stored in the configured datasets and the exact maximum size of the files to be offloaded, to avoid accidental data offload and congestion of the special devices.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
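For administrators working from a shell, the behavior described above corresponds to ZFS's special_small_blocks and recordsize dataset properties. A minimal sketch, assuming a pool named Pool-0 with a dataset named data (both names are examples, not taken from the product):

```shell
# Offload blocks of 64 KiB and smaller to the special devices group
# (pool/dataset names are examples):
zfs set special_small_blocks=64K Pool-0/data

# Keep the record size above the small-block threshold; if the
# threshold is >= recordsize, ALL blocks land on the special vdevs
# (see the WARNING above):
zfs set recordsize=128K Pool-0/data
```

In day-to-day use, the same settings are made through the GUI; the commands only illustrate what the options control.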
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=12202</id>
		<title>Backup &amp; Recovery destination server</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=12202"/>
		<updated>2025-01-20T12:04:40Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;Remote servers that are added as snapshot targets are called Destination servers. To add one, click the “Add server” button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server option.png|none|800px|Add server option.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address/domain&#039;&#039;&#039;: provide the IP address or a hostname associated with your Destination server.&amp;lt;br/&amp;gt;&#039;&#039;&#039;&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: If the snapshot target is an HA cluster, it is highly recommended to use a Virtual IP configured on a Pool. This keeps the connection stable even after the pool is exported to the other node&#039;&#039;&#039;.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 40000 is set by default. This value should not be changed, unless explicitly asked by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the administrator password for the remote server.&lt;br /&gt;
*&#039;&#039;&#039;Description&#039;&#039;&#039;: This field is optional. Provide a short description of your server to identify it in the future.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server form.png|none|450px|Add server form.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Destination servers configured here will be available during the Destination configuration step of the Replication task wizard.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;If you wish to remove the Destination server, click the “Remove” button. &#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Remove server option.png|none|800px|Remove server option.png]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=12209</id>
		<title>Backup &amp; Recovery Esx server</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=12209"/>
		<updated>2025-01-20T12:04:21Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;The Snapshot feature can be integrated with VMware ESX/vSphere snapshots. To integrate a new VMware server, click the “Add server” option.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:V-center add server option.png|800px|V-center add server option.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address&#039;&#039;&#039;: provide the IP address or a hostname associated with your VMware server.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 443 is set by default. This value should not be changed, unless explicitly asked by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039;: provide the username for the VMware user you wish to use during integration. It is recommended to use the root user for this purpose.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the password for the VMware user account specified above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:Add VMware server form.png|450px|Add VMware server form.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;The integrated server will be visible in the table below. The user can browse through the configured datastores and Virtual Machines by going to Options &amp;gt; Details.&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Details of WMware server.png|800px|Details of WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:VMware server details.png|450px|VMware server details.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;It is also possible to remove the server from integration by clicking the Remove option. &amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Remove WMware server.png|800px|Remove WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Redundancy_in_Disks_Groups&amp;diff=12195</id>
		<title>Redundancy in Disks Groups</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Redundancy_in_Disks_Groups&amp;diff=12195"/>
		<updated>2024-12-19T10:12:28Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disk group redundancy refers to the ability of a zpool to maintain data integrity and availability in the event of disk failures. This is achieved through mirrored or RAID-Z configurations, which store multiple copies of data across different disks. When a disk fails or data corruption is detected, ZFS can use the redundant copies to repair or reconstruct the lost data, ensuring the system continues to operate without data loss.&lt;br /&gt;
&lt;br /&gt;
It is important not to mix different types of data groups (vdevs) inside your storage zpool, as this can lead to issues; it is strongly recommended to use only one type of vdev consistently.&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 2-way mirror (2 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 2-way mirror accepts a single disk failure in a given vdev.&lt;br /&gt;
*The 2-way mirrors can be used for mission critical applications, but it is recommended not to exceed 12 vdevs in a zpool (recommended up to 12 x 2 = 24 disks for mission-critical applications and 24 x 2 = 48 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, the zpool performance increases with the number of vdevs in the pool. For mission-critical applications using more than 12 groups, it is recommended to use 3-way mirrors, RAID-Z2, or RAID-Z3.&lt;br /&gt;
*For mission critical applications it is not recommended to use HDDs bigger than 4TB.&lt;br /&gt;
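The layout described above can be sketched with standard ZFS commands (pool and device names are examples, not taken from the product):

```shell
# Create a zpool from two 2-way mirror vdevs; each mirror survives
# one disk failure, and throughput scales with the number of vdevs.
zpool create Pool-0 \
  mirror /dev/disk/by-id/ata-DISK01 /dev/disk/by-id/ata-DISK02 \
  mirror /dev/disk/by-id/ata-DISK03 /dev/disk/by-id/ata-DISK04

zpool status Pool-0   # verify the vdev layout
```

In JovianDSS itself, pools are normally built through the GUI; the commands only illustrate the resulting structure.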
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 3-way mirror (3 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 3-way mirror accepts up to two disk failures in a given vdev.&lt;br /&gt;
*3-way mirrors can be used for mission critical applications, but it is recommended not to exceed 16 vdevs in a zpool (recommended up to 16 x 3 = 48 disks for mission critical applications and 24 x 3 = 72 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance increases with the number of vdevs in a zpool. For mission-critical applications, it is recommended to use RAID-Z3.&lt;br /&gt;
*For mission critical applications it is not recommended to use HDDs bigger than 10TB.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 4-way mirror (4 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 4-way mirror accepts up to three disk failures in a given vdev.&lt;br /&gt;
*The 4-way mirror is recommended for a Non-Shared HA Cluster that can be used for mission-critical applications.&lt;br /&gt;
*It is also recommended not to exceed 24 4-way mirror vdevs in a zpool, as the loss of a single group results in the destruction of the entire zpool (recommended up to 24 x 4 = 96 disks for mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, the zpool performance increases with the number of vdevs in the pool.&lt;br /&gt;
*HDDs bigger than 16TB should be avoided for mission critical applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-1 (3-8 disks in a group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in a RAID-Z1 vdev.&lt;br /&gt;
*RAID-Z1 accepts one disk failure in a given vdev.&lt;br /&gt;
*The RAID-Z1 can be used for non-mission critical applications; it is not recommended to exceed 8 disks in a vdev. HDDs bigger than 4TB should be avoided.&lt;br /&gt;
*It is also not recommended to exceed 8 RAID-Z1 vdevs in a zpool, as the loss of a single group results in the destruction of the entire zpool (recommended up to 8 x 8 = 64 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*RAID-Z1 configuration is not possible in a Non-Shared HA Cluster configuration.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance is doubled with 2 x RAID-Z1 vdevs with 4 disks each compared to a single RAID-Z1 vdev with 8 disks.&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-2 (4-24 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z2 group.&lt;br /&gt;
*The RAID-Z2 accepts up to two disk failures in a given vdev.&lt;br /&gt;
*The RAID-Z2 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 12 disks in a vdev for mission-critical and 24 disks for non-mission critical applications.&lt;br /&gt;
*It is also not recommended to exceed 16 RAID-Z2 groups in a zpool, as the loss of a single group results in the destruction of the entire zpool (recommended up to 16 x 12 = 192 disks for mission-critical applications and 16 x 24 = 384 disks for non-mission critical in a zpool). HDDs bigger than 16 TB should be avoided.&lt;br /&gt;
*If tolerance of three disk failures in a vdev is required, it is recommended to use RAID-Z3.&lt;br /&gt;
*RAID-Z2 configuration is not possible in a Non-Shared HA Cluster configuration.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the pool performance is doubled with 2 x RAID-Z2 vdevs with 6 disks each compared to a single RAID-Z2 vdev with 12 disks.&lt;br /&gt;
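The performance note above can be illustrated with a sketch in standard ZFS syntax (pool and disk names are examples): two 6-disk RAID-Z2 vdevs instead of a single 12-disk one keep double-parity protection in each group while roughly doubling throughput.

```shell
# Two RAID-Z2 groups of 6 disks each; each group tolerates two
# disk failures, and the pool stripes writes across both groups.
zpool create Pool-0 \
  raidz2 disk01 disk02 disk03 disk04 disk05 disk06 \
  raidz2 disk07 disk08 disk09 disk10 disk11 disk12
```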
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-3 (5-48 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z3 group.&lt;br /&gt;
*The RAID-Z3 accepts up to three disk failures in a given vdev.&lt;br /&gt;
*The RAID-Z3 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 24 disks in a vdev for mission-critical and 48 disks for non-mission critical applications.&lt;br /&gt;
*It is also not recommended to exceed 24 RAID-Z3 groups in a zpool, as the loss of a single group results in the destruction of the entire zpool (recommended up to 24 x 24 = 576 disks for mission critical applications and 24 x 48 = 1152 disks for non-mission critical applications in a zpool). HDDs bigger than 16TB should be avoided.&lt;br /&gt;
*RAID-Z3 configuration is not possible in a Non-Shared HA Cluster configuration.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance is doubled with 2 x RAID-Z3 vdevs with 12 disks each compared to a single RAID-Z3 vdev with 24 disks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Write Log redundancy level ==&lt;br /&gt;
&lt;br /&gt;
*In both single nodes and HA clusters, it should be configured as a 2-way mirror.&lt;br /&gt;
*When choosing a disk model for the Write Log, make sure to take the endurance parameter into consideration. Selecting a disk classified by the manufacturer as write intensive is strongly recommended.&lt;br /&gt;
*When selecting a disk size for the write log, consider the amount of data that can reach the server during three consecutive ZFS transactions, e.g. based on the bandwidth of the network card used for data transfer. If the transaction length is set to 5 seconds (default), the write log device should be able to accommodate the amount of data that can be transferred within three transaction groups, i.e. 15 seconds of writing. Using a larger disk does not make sense economically, while a smaller one can become a performance bottleneck during synchronous writes. &#039;&#039;&#039;Practically speaking, 100GB for a write log should be more than enough.&#039;&#039;&#039;&lt;br /&gt;
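As a worked example of the sizing rule above (assuming a single 10 GbE link, i.e. roughly 1.25 GB/s of usable bandwidth):

```shell
# Back-of-the-envelope write log sizing: bandwidth x transaction
# length x three transaction groups.
BYTES_PER_SEC=$((1250 * 1000 * 1000))   # ~10 Gbit/s expressed in bytes/s
TXG_LENGTH=5                            # default transaction length, seconds
TXG_GROUPS=3                            # data kept for three transaction groups
SLOG_BYTES=$((BYTES_PER_SEC * TXG_LENGTH * TXG_GROUPS))
echo "$((SLOG_BYTES / 1000000000)) GB"  # prints "18 GB"
```

A single 10 GbE link therefore needs well under the 100GB figure quoted above; scale the same arithmetic up for faster or bonded links.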
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Read Cache redundancy level ==&lt;br /&gt;
&lt;br /&gt;
Read Cache disks can only be configured as single disks, but any number of them can be configured. In an HA cluster, Read Cache can only be configured as local disks on both nodes of the cluster.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Special devices and deduplication group redundancy level ==&lt;br /&gt;
&lt;br /&gt;
In both single nodes and HA clusters, it should be configured as a 2-way mirror.&lt;br /&gt;
&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11438</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11438"/>
		<updated>2024-12-19T09:24:47Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;[[Release Notes|All release notes »]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 50&lt;br /&gt;
count= 50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 100&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = categorysortkey&lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Video tutorials:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Open-E JovianDSS Video Tutorials&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Open-E JovianDSS Video Tutorials &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Setting up Open-E JovianDSS Standard HA Cluster&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up Open-E JovianDSS Standard HA Cluster &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Setting up Open-E JovianDSS Advanced Metro HA Cluster&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up Open-E JovianDSS Advanced Metro HA Cluster &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up30r2_Release_Notes&amp;diff=12159</id>
		<title>Open-E JovianDSS ver.1.0 up30r2 Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up30r2_Release_Notes&amp;diff=12159"/>
		<updated>2024-04-25T15:01:59Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2024-03-11&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 55016&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact the Open-E sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== Support for LED disk location for NVMe drives on Intel platforms ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.1.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Ledctl (v0.97) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.1) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The Hot-Plug mechanism for NVMe drives does not work properly on several environments ===&lt;br /&gt;
&lt;br /&gt;
=== The system restart or shutdown procedure does not function correctly in environments utilizing the HP Smart Array controller (hpsa driver) ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for JovianDSS HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use the sync=always option for zvols and datasets in a cluster ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run the Scrub scanner after a failover action triggered by a power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use a UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to change any settings when both nodes do not have the same JovianDSS version, for example during a software update ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization and cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;You can find details below:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;[https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101 https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult Broadcom vendor for specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
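The same parameters can also be changed from the ESXi command line; a sketch using esxcli (the adapter name vmhba64 is an example for this host — list adapters with "esxcli iscsi adapter list"):

```shell
# Set the iSCSI initiator parameters from the ESXi shell
# (adapter name is an example; substitute your own).
esxcli iscsi adapter param set --adapter=vmhba64 --key=MaxOutstandingR2T --value=8
esxcli iscsi adapter param set --adapter=vmhba64 --key=FirstBurstLength --value=65536
esxcli iscsi adapter param set --adapter=vmhba64 --key=MaxBurstLength --value=1048576
esxcli iscsi adapter param set --adapter=vmhba64 --key=MaxRecvDataSegLen --value=1048576
```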
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares the Synchronous data record option is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record option, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
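On the ZFS side, the sync behavior discussed above is a per-dataset property; a minimal sketch (pool and zvol names are examples):

```shell
# Safest setting: every write is committed to stable storage
# before being acknowledged (pair with a mirrored write log).
zfs set sync=always Pool-0/zvol00

# Verify the current setting:
zfs get sync Pool-0/zvol00
```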
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you predict frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, please change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, use (if possible) a single zvol on a zpool dedicated for deduplication, and delete the whole zpool that includes this single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&lt;br /&gt;
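The worst-case RAM estimates above can be reproduced with a short script. This is only an illustrative sketch of the arithmetic (the helper name `dedup_ram_bytes` is ours); the 320-byte DDT entry size and the 0.75/0.25 reservation factors are the figures given in the formula above:

```python
# Worst-case RAM estimate for ZFS deduplication, following the formula
# (size of zvol / volume block size) * 320B / 0.75 / 0.25 from the text.
DDT_ENTRY_BYTES = 320    # size of one entry in the DDT table
ARC_RAM_FRACTION = 0.75  # share of system RAM reserved for ARC
DDT_ARC_FRACTION = 0.25  # share of ARC reserved for the DDT

def dedup_ram_bytes(zvol_bytes, block_bytes):
    """RAM needed to hold the whole DDT when every block is unique."""
    entries = zvol_bytes / block_bytes
    return entries * DDT_ENTRY_BYTES / ARC_RAM_FRACTION / DDT_ARC_FRACTION

TIB = 1024 ** 4
GIB = 1024 ** 3

for block_kib in (64, 128, 1024):
    ram = dedup_ram_bytes(1 * TIB, block_kib * 1024)
    print(f"{block_kib:>5} KiB blocks: {ram / GIB:.2f} GiB RAM per TiB of data")
```

Running it reproduces the 26.67 GB, 13.33 GB and 1.67 GB figures for 64 KB, 128 KB and 1 MB volume block sizes respectively.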
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, where the data is completely unique and will not be deduplicated. For deduplicable data, the need for RAM decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will work with good performance using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the format block size of the user file system with the zvol volume-block-size. A simple example is a Windows NTFS file system with the default format block size of 4k on a zvol with the default volume-block-size of 128k. With such defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will already match, as both alignment options exist on the pool. To achieve matching for all files together with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
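The alignment arithmetic above can be sketched in a few lines (the helper `alignment_positions` is hypothetical, introduced only to illustrate the 128/4 and 128/64 figures from the text):

```python
# Number of positions at which a file written with the client's format
# block size can start inside one zvol volume block. Deduplication only
# matches blocks that share the same alignment, so fewer possible
# positions means more deduplication hits.
def alignment_positions(zvol_block_kib, fs_block_kib):
    return zvol_block_kib // fs_block_kib

print(alignment_positions(128, 4))   # NTFS 4k on a 128k zvol: 32 positions
print(alignment_positions(128, 64))  # NTFS 64k on a 128k zvol: 2 positions
print(alignment_positions(64, 64))   # NTFS 64k on a 64k zvol: 1 position
```

With a single possible alignment (64k/64k), every identical block lands at the same offset within a volume block, which is why this pairing deduplicates best.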
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Then copy new data and run the scrub again; it will report the new physical data space. Comparing the data size seen from the storage client side with the growth of the data space reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by the l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
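The Read Cache header overhead can likewise be checked with a short script. This is an illustrative sketch of the arithmetic only (the helper name `read_cache_ram_bytes` is ours); the 4718592-byte reserved size and the 432-byte l2hdr header are the figures given in the examples above:

```python
# RAM consumed by L2ARC (read cache) headers, following the formula
# (read cache size - reserved size and labels) * l2hdr bytes / block size.
RESERVED_BYTES = 4718592  # reserved size and labels
L2HDR_BYTES = 432         # bytes reserved by the l2hdr structure per block

def read_cache_ram_bytes(cache_bytes, block_bytes):
    """RAM needed to track every block of the read cache in memory."""
    cached_blocks = (cache_bytes - RESERVED_BYTES) / block_bytes
    return cached_blocks * L2HDR_BYTES

TIB = 1024 ** 4
GIB = 1024 ** 3

for block_kib in (8, 64, 128):
    ram = read_cache_ram_bytes(1 * TIB, block_kib * 1024)
    print(f"{block_kib:>4} KiB blocks: {ram / GIB:.2f} GiB RAM per TiB of cache")
```

Running it reproduces the 54 GB, 6.75 GB and 3.37 GB figures, and makes the earlier recommendation concrete: halving the volume block size doubles the header RAM needed for the same cache.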
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause a subsequent detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Occasionally, after disks are removed from groups, Spare/Read Cache/Write Log disks appear on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After an exported Zpool is deleted, not all disks that belonged to it become available immediately. Disks previously used as a Spare or a Read Cache must be cleaned before they can be reused. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and the Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After many snapshots, clones, or zvols have been created, some GUI forms become very slow. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to operate on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Open-E VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota prevents removing files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, increase the quota first and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 25 datasets, the WebGUI becomes slow.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system version. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you use Open-E JovianDSS with an Intel® Ethernet Controller XL710 Family adapter, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the motherboard has x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors used to generate NFS FSIDs. Consequently, when the Zpool name changes, e.g. during an export and import under a different name, the FSIDs of the NFS shares located on that Zpool change as well.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when Infiniband controllers are used for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and a failover needs to move all 3 of them, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V clusters. In some environments, cluster resource switching may also take too long under heavy load. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;If you are using Windows, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039; registry key&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;If you are using VMware, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;If you are using XenServer, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change value from default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach SRs. This will update the new iSCSI timeout settings for the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change value from default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
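The XenServer steps above boil down to replacing one value in iscsid.conf. A minimal sketch with sed follows; it is demonstrated on a scratch copy under /tmp so nothing on the host is modified, while on a real XenServer host the file is /etc/iscsi/iscsid.conf (back it up before editing in place):

```shell
# Sketch: raise node.session.timeo.replacement_timeout from the default 120
# to 600 seconds. A scratch copy stands in for /etc/iscsi/iscsid.conf here.
conf=/tmp/iscsid.conf.example
printf 'node.session.timeo.replacement_timeout = 120\n' "$conf" 1>/dev/null
printf 'node.session.timeo.replacement_timeout = 120\n' 1>"$conf"
sed -i 's/^node\.session\.timeo\.replacement_timeout = .*/node.session.timeo.replacement_timeout = 600/' "$conf"
cat "$conf"
```

Remember that, as described above, existing SRs pick up the new timeout only after being detached and reattached.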
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after an update to up11 the system may require re-activation. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When Open-E JovianDSS runs as a Hyper-V or VMware guest, the ALB, Round-Robin, and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can make deleting a VMware snapshot take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset backing a VMware guest that performs many I/O operations can make the process of deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Enable the quota before the dataset is used in a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If JovianDSS nodes are connected to the same AD server, they cannot have the same Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot be named the same as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing MAC addresses, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so downgrading to an earlier version is impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown, or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the On- &amp;amp; Off-site Data Protection functionality was used in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, power-cycling both nodes simultaneously may leave VMware unable to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown, or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. Alternatively, restart the Failover in Jovian’s GUI. Refresh vSphere’s Adapters and Devices tabs afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, remove the damaged disks from the system before starting the administrative work of replacing them. The order of these actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from JovianDSS up24 to JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after one node is updated from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path may indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation has finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of Jovian DSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of JovianDSS up25, the default Write Cache Log bias for zvols was set to “In Pool (Throughput)”. In the final release of up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a performance drop in environments with many random-access workloads, which is common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To configure the FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of Jovian up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters, select qla2xxx_scst QLogic Fibre Channel HBA Driver, and make sure the value of the parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference in FC configuration between up24 (and earlier) and later versions. Those earlier versions allowed FC ports to be configured in initiator mode only, while later versions allow both target and initiator modes, with target as the default. Therefore, if storage is connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). To bring the read cache back to the pool, we recommend using the console tools as follows: press Ctrl+Alt+X -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear under Unassigned disks at the bottom of the page, which allows you to select it in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button, and select Add read cache. The missing disk should now be available for selection as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. In such an environment, we recommend adding the following entry to the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== In case of large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, JDSS should be informed about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on JDSS leads to a situation where the renamed user is deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from JovianDSS up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from JovianDSS up29, we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset with the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.open-e.com/support/hardware-compatibility-list/open-e-jovian-dss/ https://www.open-e.com/support/hardware-compatibility-list/open-e-jovian-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain certain special characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters # : + cannot be used in the password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The e-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers that require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving IP address of the NFS share’s IP read only access list to read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on an NFS access list and you would like to move it to the other access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load, especially waiting I/O, may increase and cause unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the WebGUI shows the old value taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes or small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down JovianDSS ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value that JovianDSS interprets as running on battery. When the timeout expires, JovianDSS shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to JovianDSS up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from JovianDSS up29, we disable the write-back cache on all HDDs by default, but we do not disable it on SSD drives or hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it is enabled after the update. Open the TUI, invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from JovianDSS up29, we disable the write-back cache on HDDs by default during the boot procedure. We do not disable it on SSD drives or hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand of them ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there may be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take anywhere from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can behave unstably while data is copied to storage. Use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 500 zvols in the system, the responsiveness of the Web-GUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== It is recommended to use Fibre Channel groups in Fibre Channel Target HA Cluster environments that use the Fibre Channel switches. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When Fibre Channel switches are used in FC Target HA Cluster environments, it is recommended to use only Fibre Channel groups (using the Fibre Channel Public group is not recommended).&lt;br /&gt;
&lt;br /&gt;
=== Manual export and import of zpool in the system or deactivation of the Fibre Channel group without first suspending or turning off the virtual machines on the VMware ESXi side may cause loss of access to the data by VMware ESXi. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before manually exporting and importing a zpool or deactivating the Fibre Channel group in a Fibre Channel Target HA Cluster environment, you must suspend or turn off the virtual machines on the VMware ESXi side. Otherwise, VMware ESXi may lose access to the data and will need to be restarted.&lt;br /&gt;
&lt;br /&gt;
=== In Fibre Channel Target HA Cluster environments the VMware ESXi 6.7 must be used instead of VMware ESXi 7.0. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When VMware ESXi 7.0 is used in a Fibre Channel Target HA Cluster environment, restarting one of the cluster nodes may cause the Fibre Channel paths to report a dead state.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes cluster nodes hang during boot of Open-E JovianDSS. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If one of the cluster nodes hangs during Open-E JovianDSS boot, it must be restarted manually.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes, when using IPMI hardware solutions, the cluster node may be restarted again by the IPMI watchdog. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, it is recommended to wait 5 minutes before turning the cluster node back on after it has been turned off.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes restarting one of the cluster nodes may cause some disks to be missing in the zpool configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, click the “Rescan storage” button on the WebGUI to solve this problem.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer reported an error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This information should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of a new installation of JovianDSS up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of JovianDSS up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the JovianDSS TUI and press the Ctrl+Alt+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, then type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of JovianDSS, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
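For reference, on a generic Linux system with OpenZFS this module parameter is exposed through sysfs. A minimal sketch, assuming direct shell access (on JovianDSS itself the TUI procedure above is the supported method):

```shell
# Read the current value of the OpenZFS per-vdev queue depth limit.
cat /sys/module/zfs/parameters/zfs_vdev_max_active

# Raise the limit to 1000 for the running kernel (root required).
# Note: a sysfs write is not persistent across reboots; this is a
# generic OpenZFS illustration, not the supported JovianDSS procedure.
echo 1000 &gt; /sys/module/zfs/parameters/zfs_vdev_max_active
```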
&lt;br /&gt;
=== In case of no local storage disks in any Non-Shared storage HA Cluster node, the remote disks mirroring path connection status shows incorrect state: Disconnected. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; By design, each cluster node in a Non-Shared storage HA Cluster must have at least one local storage disk before the remote disk mirroring path connection is created.&lt;br /&gt;
&lt;br /&gt;
=== In some environments, when using RDMA for the remote disks mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In some environments, when using RDMA for the remote disks mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the ATTO Fibre Channel Target in the HA cluster environment. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the ATTO Fibre Channel Target in an HA Cluster environment, after a power cycle of one of the cluster nodes, the Fibre Channel paths report a dead state. In order to restore the correct status of these Fibre Channel paths, the VMware server must be restarted.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In addition, when using the ATTO Fibre Channel Target in an HA cluster environment, restarting the cluster node with both zpools imported in the system causes the second cluster node to be unexpectedly restarted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;Therefore, using the ATTO Fibre Channel Target in the HA cluster environment is not recommended.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Open-E JovianDSS enables the use of drives with a verified SED configuration only - they are tagged as &amp;quot;SED&amp;quot; and listed on the Open-E JovianDSS HCL. In order to properly configure the functionality, please follow the steps described in the Knowledge Base article: [https://kb.open-e.com/jdss-sed-support-in-joviandss_3381.html https://kb.open-e.com/jdss-sed-support-in-joviandss_3381.html]&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check if a given device is supported, see the HCL list available on the Open-E webpage ([https://www.open-e.com/support/hardware-compatibility-list/open-e-jovian-dss/ https://www.open-e.com/support/hardware-compatibility-list/open-e-jovian-dss/]). To find devices for which we support the SED functionality, on the Open-E HCL page in the form: &amp;quot;Search by component&amp;quot;, enter: “SED” in the keyword field and click the search button (loupe icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually on demand and at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To ensure stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6 controllers.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through the IPMI settings, under Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - this combo box manages the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of unavailability of a given subdomain, information about its status will not be updated on the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it’s recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #104059 --&amp;gt;SAS Multipath configuration is not supported in the Non-Shared Storage Cluster. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of the Non-Shared Storage Cluster, the SAS Multipath configuration is not supported at all. In this scenario, all the disks need to be connected through one path only. In the case of using the JBOD configuration with disks connected through a pair of SAS cables, one of them must be disconnected.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the Storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Open-E JovianDSS (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115494 --&amp;gt;Resilver progress bar in the HA Non-shared Cluster Storage environment may show values over 100%. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the HA Non-Shared storage cluster with compression and deduplication enabled it has been observed that the resilver progress bar on the WebGUI may display values exceeding 100%.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Service_discovery&amp;diff=12061</id>
		<title>Service discovery</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Service_discovery&amp;diff=12061"/>
		<updated>2023-12-12T07:44:57Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
=== Zeroconf service discovery ===&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality allows NAS devices to be discovered by any operating system that supports zero-configuration networking (zeroconf), e.g., macOS.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Two types of services are currently supported:&lt;br /&gt;
*Discovering SMB services&lt;br /&gt;
*Discovering NAS for Time Machine via SMB&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering SMB services =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows the server to broadcast information about its SMB services.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering NAS for Time Machine via SMB =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows macOS users to perform backup tasks from the Mac computer to shared folders via SMB.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The following conditions must be met for the functionality to work:&lt;br /&gt;
*The “Discovering SMB services” option must be enabled.&lt;br /&gt;
*The “Discovering NAS for Time Machine via SMB” option must be enabled.&lt;br /&gt;
*The SMB protocol must be enabled, and the following options must be set as follows:&lt;br /&gt;
**Vfs_fruit ON&lt;br /&gt;
**oplocks ON&lt;br /&gt;
**level2 oplocks ON&lt;br /&gt;
**SMB2 leases ON&lt;br /&gt;
**kernel oplocks OFF&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;When all conditions above are met, the option that allows shares to be discovered by Time Machine becomes active. The next step is to select the &amp;quot;Enable macOS Time Machine support&amp;quot; option in every share the user wishes to make visible for Time Machine. All shares with this option enabled will be visible for Time Machine.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;Note!&amp;lt;/span&amp;gt; If any of the above settings are changed, shares will no longer appear in Time Machine.&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
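For reference, the options listed above map onto standard Samba parameters. A sketch of the equivalent raw smb.conf settings, assuming plain upstream Samba rather than the JovianDSS WebGUI (the share name and path below are hypothetical examples):

```ini
# Global settings roughly equivalent to the WebGUI options above
[global]
    vfs objects = fruit streams_xattr   ; loads the vfs_fruit module
    oplocks = yes
    level2 oplocks = yes
    smb2 leases = yes
    kernel oplocks = no

# Per-share setting equivalent to "Enable macOS Time Machine support"
[timemachine]
    path = /pool/share/timemachine      ; hypothetical share path
    fruit:time machine = yes
```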
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Service_discovery&amp;diff=12060</id>
		<title>Service discovery</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Service_discovery&amp;diff=12060"/>
		<updated>2023-12-12T07:44:15Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zeroconf service discovery ===&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality allows discovering NAS devices by any operating system that supports zero-configuration networking (zeroconf), e.g., macOS.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Two types of services are currently supported:&lt;br /&gt;
*Discovering SMB services&lt;br /&gt;
*Discovering NAS for Time Machine via SMB&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering SMB services =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows the server to broadcast information about its SMB services.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering NAS for Time Machine via SMB =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows macOS users to perform backup tasks from the Mac computer to shared folders via SMB.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The following conditions must be met for the functionality to work:&lt;br /&gt;
*The “Discovering SMB services” option must be enabled.&lt;br /&gt;
*The “Discovering NAS for Time Machine via SMB” option must be enabled.&lt;br /&gt;
*The SMB protocol must be enabled, and the following options must be set as follows:&lt;br /&gt;
**Vfs_fruit ON&lt;br /&gt;
**oplocks ON&lt;br /&gt;
**level2 oplocks ON&lt;br /&gt;
**SMB2 leases ON&lt;br /&gt;
**kernel oplocks OFF&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;When all conditions above are met, the option that allows shares to be discovered by Time Machine becomes active. The next step is to select the &amp;quot;Enable macOS Time Machine support&amp;quot; option in every share the user wishes to make visible for Time Machine. All shares with this option enabled will be visible for Time Machine.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;Note!&amp;lt;/span&amp;gt; If any of the above settings are changed, shares will no longer appear in Time Machine.&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Service_discovery&amp;diff=12059</id>
		<title>Service discovery</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Service_discovery&amp;diff=12059"/>
		<updated>2023-12-12T07:43:13Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zeroconf service discovery ===&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality allows discovering NAS devices by any operating system that supports zero-configuration networking (zeroconf), e.g., macOS.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Two types of services are currently supported:&lt;br /&gt;
*Discovering SMB services&lt;br /&gt;
*Discovering NAS for Time Machine via SMB&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering SMB services =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows the server to broadcast information about its SMB services.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering NAS for Time Machine via SMB =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows macOS users to perform backup tasks from the Mac computer to shared folders via SMB.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The following conditions must be met for the functionality to work:&lt;br /&gt;
*The “Discovering SMB services” option must be enabled.&lt;br /&gt;
*The “Discovering NAS for Time Machine via SMB” option must be enabled.&lt;br /&gt;
*The SMB protocol must be enabled, and the following options must be set as follows:&lt;br /&gt;
**Vfs_fruit ON&lt;br /&gt;
**oplocks ON&lt;br /&gt;
**level2 oplocks ON&lt;br /&gt;
**SMB2 leases ON&lt;br /&gt;
**kernel oplocks OFF&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;When all conditions above are met, the option that allows shares to be discovered by Time Machine becomes active. The next step is to select the &amp;quot;Enable macOS Time Machine support&amp;quot; option in every share the user wishes to make visible for Time Machine. All shares with this option enabled will be visible for Time Machine.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;Note!&amp;lt;/span&amp;gt; If any of the above settings are changed, shares will no longer appear in Time Machine.&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Storage_performance_test_tool&amp;diff=11811</id>
		<title>Storage performance test tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Storage_performance_test_tool&amp;diff=11811"/>
		<updated>2023-07-05T13:20:52Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== General information ===&lt;br /&gt;
&lt;br /&gt;
The storage performance test tool allows you to test the speed of raw disks or zvols. The tool is available via the TUI (Text-based User Interface). To access the tool, perform the following steps:&lt;br /&gt;
&lt;br /&gt;
#While in TUI, press&amp;amp;nbsp;&#039;&#039;&#039;Ctrl&#039;&#039;&#039;+&#039;&#039;&#039;Alt&#039;&#039;&#039;+&#039;&#039;&#039;T&#039;&#039;&#039;.&lt;br /&gt;
#Select&amp;amp;nbsp;&#039;&#039;&#039;Add-ons&#039;&#039;&#039;&amp;amp;nbsp;from the menu.&lt;br /&gt;
#Select&amp;amp;nbsp;&#039;&#039;&#039;Storage performance test&#039;&#039;&#039;&amp;amp;nbsp;from the list.&lt;br /&gt;
&lt;br /&gt;
You can select either a raw device test or a zvol device test, and you can also view test results. Once the test type is selected, you will be asked to select a device to be tested. Next, a test profile has to be selected. The following profiles can be chosen:&lt;br /&gt;
&lt;br /&gt;
*ran_write (blocksize 512B)&lt;br /&gt;
*ran_read (blocksize 512B)&lt;br /&gt;
*seq_write (blocksize 1MB)&lt;br /&gt;
*seq_read (blocksize 1MB)&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;div&amp;gt;Every random test takes 100 seconds and every sequential test takes 150 seconds. Testing progress is shown while the test is running. To interrupt the test, press the Ctrl+\ keyboard shortcut. Test results are available from the tool or in logs downloaded from the GUI.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
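The four profiles roughly correspond to standard fio workloads. A hedged sketch, assuming shell access to a test host with fio installed (the TUI tool itself does not require this, and /dev/sdX is a placeholder device):

```shell
# Approximate fio equivalents of the four test profiles (illustrative only).
# WARNING: the write profiles destroy data on the target device.
fio --name=ran_write --filename=/dev/sdX --rw=randwrite --bs=512 --direct=1 --runtime=100 --time_based
fio --name=ran_read  --filename=/dev/sdX --rw=randread  --bs=512 --direct=1 --runtime=100 --time_based
fio --name=seq_write --filename=/dev/sdX --rw=write     --bs=1M  --direct=1 --runtime=150 --time_based
fio --name=seq_read  --filename=/dev/sdX --rw=read      --bs=1M  --direct=1 --runtime=150 --time_based
```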
=== Raw device test ===&lt;br /&gt;
&lt;br /&gt;
This test allows you to choose disks that are not assigned to zpool.&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
=== Zvol device test ===&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;This test allows you to choose&amp;amp;nbsp;zvols. To ensure data safety on zvols in a zpool, the zvol name must contain:&amp;amp;nbsp;&#039;&#039;&#039;data_DESTRUCTIVE_benchmark_test_ONLY.&#039;&#039;&#039;&amp;amp;nbsp;Otherwise, it won’t be listed under the&amp;amp;nbsp;&#039;&#039;&#039;Zvol device selection&#039;&#039;&#039;&amp;amp;nbsp;list.&amp;amp;nbsp;Note that you can overwrite data on this zvol using the ran_write or seq_write profile. Once a write profile is selected, you will need to confirm that you understand the risk of using such profiles.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Test results viewer ===&lt;br /&gt;
&amp;lt;div&amp;gt;Here you can view the results or remove them one by one.&amp;amp;nbsp;Test results are also available in logs downloaded from GUI in the “TEST_RESULTS” folder.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11436</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11436"/>
		<updated>2023-07-05T13:20:38Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;[[Release Notes|All release notes »]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 40&lt;br /&gt;
count= 40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 80&lt;br /&gt;
count=40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod=categorysortkey&lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Video tutorials:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Open-E JovianDSS Video Tutorials&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Open-E JovianDSS Video Tutorials &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Setting up Open-E JovianDSS Standard HA Cluster&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up Open-E JovianDSS Standard HA Cluster &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Setting up Open-E JovianDSS Advanced Metro HA Cluster&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up Open-E JovianDSS Advanced Metro HA Cluster &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11435</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11435"/>
		<updated>2022-11-28T15:48:51Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;[[Release Notes|All release notes »]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 40&lt;br /&gt;
count= 40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 80&lt;br /&gt;
count=40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod=lastedit&lt;br /&gt;
order = descending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Video tutorials:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Open-E JovianDSS Video Tutorials&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Open-E JovianDSS Video Tutorials &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Setting up Open-E JovianDSS Standard HA Cluster&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up Open-E JovianDSS Standard HA Cluster &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Setting up Open-E JovianDSS Advanced Metro HA Cluster&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up Open-E JovianDSS Advanced Metro HA Cluster &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Release_Notes&amp;diff=11994</id>
		<title>Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Release_Notes&amp;diff=11994"/>
		<updated>2022-11-28T15:47:48Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Created page with &amp;quot;{{ #tag:DynamicPageList|  category = Release Notes  ordermethod = categorysortkey  order = descending }}&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up29r2_Release_Notes&amp;diff=11889</id>
		<title>Open-E JovianDSS ver.1.0 up29r2 Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up29r2_Release_Notes&amp;diff=11889"/>
		<updated>2022-07-06T13:29:16Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2022-07-06&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 48155&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== NVMe Write Log devices mirror over Ethernet for HA Shared Storage Cluster ===&lt;br /&gt;
&lt;br /&gt;
=== RDMA protocol support for mirroring path connection dedicated to Mellanox NIC ===&lt;br /&gt;
&lt;br /&gt;
=== Intel Optane™ Persistent Memory support (PMem) ===&lt;br /&gt;
&lt;br /&gt;
=== Self-Encrypting Drives (SED) feature support ===&lt;br /&gt;
&lt;br /&gt;
=== SSD TRIM functionality ===&lt;br /&gt;
&lt;br /&gt;
=== Custom OU (Organization Unit) parameter in Active Directory ===&lt;br /&gt;
&lt;br /&gt;
=== Read and Write list for SMB shares ===&lt;br /&gt;
&lt;br /&gt;
=== ‘Hosts allow’ parameter for Samba available in WebGUI ===&lt;br /&gt;
&lt;br /&gt;
=== Additional scheduler to pause a scheduled scrub process, e.g. to suspend it during working hours ===&lt;br /&gt;
&lt;br /&gt;
=== Paging mechanism for snapshots ===&lt;br /&gt;
&lt;br /&gt;
=== ‘Expand pool size’ option for replacing disks with larger-sized ones, available in TUI (CTRL+ALT+X) ===&lt;br /&gt;
&lt;br /&gt;
=== Severity level included in the notification email’s subject ===&lt;br /&gt;
&lt;br /&gt;
=== Option to launch the system in RESCUE MODE (skipping pools import) ===&lt;br /&gt;
&lt;br /&gt;
=== The most important statistics regarding the ZFS Pool usage on WebGUI ===&lt;br /&gt;
&lt;br /&gt;
=== TLS 1.2 and 1.3 available in the SMTP server configuration ===&lt;br /&gt;
&lt;br /&gt;
=== Checkmk agent checks for Percent Capacity per Pool replaced checks of available space per volume ===&lt;br /&gt;
&lt;br /&gt;
=== VORTEX SHELF JBOD status monitoring ===&lt;br /&gt;
&lt;br /&gt;
=== The source of force reboot is logged in IPMI ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Config Tool (v4.36) ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Kernel (v4.19.190) ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.1.1-1) ===&lt;br /&gt;
&lt;br /&gt;
=== Checkmk agent (v1.5.0p8) ===&lt;br /&gt;
&lt;br /&gt;
=== Network UPS Tools (upsmon, v2.7.4) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE driver (igb, v5.8.5) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE driver (e1000e, v3.8.7-NAPI) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE driver (ixgbe, v5.13.4) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 40GbE driver (i40e, v2.17.4) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE driver (i40e, v2.17.4) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM573xx and Broadcom BCM574xx controllers (bnxt_en, v1.10.2-219.0.55.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM577xx and Broadcom BCM578xx controllers (bnx2x, v1.715.10) ===&lt;br /&gt;
&lt;br /&gt;
=== Marvell FastLinQ 41000 Series driver (qede, v8.55.13.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Solarflare 10GbE Driver (sfc, v4.15.13.1000) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-3 driver (mlx4_core, v4.9-3.1.5) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-4/5 driver (mlx5_core, v5.2-2.2.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Emulex LightPulse Fibre Channel Adapter driver (lpfc, v12.8.614.14) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.04.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.21.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 6Gb/s HBA (esas2hba, v2.38.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA (esas4hba, v1.48.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA (esas5hba, v1.03.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca RAID controllers driver (arcmsr, v1.50.0X.07-20210712) ===&lt;br /&gt;
&lt;br /&gt;
=== HP Smart Array driver (hpsa, v3.4.20-208) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec Series SAS/SATA 6/12GB RAID driver (aacraid, v1.2.1.60001src) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID driver (smartpqi, v2.1.16-030) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom 12Gb SAS HBA driver (mpt3sas, v39.00.00.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom 12Gb MegaRAID driver (megaraid_sas, v07.719.03.00) ===&lt;br /&gt;
&lt;br /&gt;
=== MegaRAID Storage Manager (MSM, v.17.05.02.01) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool v3.10.24308 ===&lt;br /&gt;
&lt;br /&gt;
=== LSI Storage Authority v007.019.006.000 ===&lt;br /&gt;
&lt;br /&gt;
=== HPE Smart Storage Administrator v4.21.7.0 ===&lt;br /&gt;
&lt;br /&gt;
=== HPE System Management Homepage v7.6.7 ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== Very long pool export/moving time in case of a large number of O&amp;amp;ODP tasks. ===&lt;br /&gt;
&lt;br /&gt;
=== System hang-ups during the reboot procedure. ===&lt;br /&gt;
&lt;br /&gt;
=== Negative network speed values in the network usage charts. ===&lt;br /&gt;
&lt;br /&gt;
=== It takes a long time to display a large number of snapshots after a reboot. ===&lt;br /&gt;
&lt;br /&gt;
=== It is not possible to export a pool when the share subdirectory is mounted through NFS. ===&lt;br /&gt;
&lt;br /&gt;
=== Attempting to establish hundreds or thousands of connections to the resource&#039;s SMB simultaneously causes an LDAP service timeout. ===&lt;br /&gt;
&lt;br /&gt;
=== When logging in to SMB shares, both as a guest and as a user with a password, providing the Workgroup name is obligatory. ===&lt;br /&gt;
&lt;br /&gt;
=== Listing a very large number of files (several million and above) in one SMB share takes a long time or fails. NOTE: The problem was solved by adding the Metadata pinning functionality (System Console -&amp;gt; Addons -&amp;gt; Metadata pinning) ===&lt;br /&gt;
&lt;br /&gt;
=== Removing a dataset/zvol with thousands of snapshots causes timeouts in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Persistent Reservation Sync (PRS) mechanism on some environments can saturate one of the CPU cores to 100%. ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for JovianDSS HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use the sync=always option for zvols and datasets in a cluster ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run the Scrub scanner after a failover action triggered by a power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use a UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to change any settings while the nodes are running different JovianDSS versions, for example during a software update ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with the ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization and cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;You can find details below:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;[https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101 https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult Broadcom vendor for specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Setting write cache sync requests (sync) to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, synchronous data recording is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use asynchronous data recording, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. Other browsers may exhibit slight problems displaying content.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
In case of using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on zpools dedicated for deduplication, and delete the zpool which includes that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for the ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of the ARC reserved for the DDT (25%)&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&lt;br /&gt;
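The worst-case RAM formula above can be sanity-checked with a short script. This is an illustrative sketch only (the function name is ours, not part of JovianDSS); it reproduces the arithmetic of the three examples, assuming binary units (1TB = 1024^4 B).

```python
# Worst-case RAM needed for ZFS deduplication, per the release notes formula:
# (zvol size / volume block size) * 320 B / 0.75 / 0.25
def dedup_ram_bytes(zvol_bytes, block_bytes):
    entries = zvol_bytes // block_bytes   # number of DDT entries, worst case
    return entries * 320 / 0.75 / 0.25    # 320 B per DDT entry; DDT limited to
                                          # 25% of ARC, ARC limited to 75% of RAM

TIB = 1024 ** 4
for block in (64 * 1024, 128 * 1024, 1024 * 1024):
    gib = dedup_ram_bytes(TIB, block) / 1024 ** 3
    print(round(gib, 2))  # GiB of RAM per 1TB of stored data
```

Halving the volume block size doubles the DDT entry count, which is why the 64KB example needs twice the RAM of the 128KB one.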
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when data is completely unique and will not be deduplicated. For deduplicable data, the need for RAM decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will work with good performance using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file-system format block size with the zvol volume-block-size. A simple example: the Windows NTFS file system has a default format block size of 4k, while the default zvol volume-block-size is 128k. With such defaults, deduplication will mostly NOT match, because files can be aligned in 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail only once, because a file can be aligned in 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To achieve matching for all files with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
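The alignment arithmetic in the note above can be made concrete with a tiny sketch (the helper name is ours, for illustration only): the number of positions a client file can occupy relative to zvol block boundaries is the ratio of the two block sizes.

```python
# Illustrative only: how many distinct alignments a client file can have
# relative to zvol block boundaries, given both block sizes in KiB.
def alignment_positions(zvol_block_kib, fs_block_kib):
    return zvol_block_kib // fs_block_kib

print(alignment_positions(128, 4))   # NTFS 4k on a 128k zvol: 32 positions
print(alignment_positions(128, 64))  # NTFS 64k on a 128k zvol: 2 positions
print(alignment_positions(64, 64))   # matched sizes: 1 position, always matches
```

A ratio of 1 means every identical block lands on the same boundary, so deduplication always matches.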
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol Physical size cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will now report the new physical data space. Comparing the data size seen on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact pool deduplication ratio can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to a single Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or Read Cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
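The three examples follow a single formula, sketched here (the 4718592B reserved-size and 432B l2hdr constants are the values given above):

```python
# RAM needed to index a Read Cache (L2ARC) device, per the formula above.
RESERVED = 4718592   # reserved size and labels, in bytes
L2HDR = 432          # bytes of RAM reserved per cached block (l2hdr structure)

def l2arc_ram_bytes(cache_bytes, volume_block_bytes):
    return (cache_bytes - RESERVED) * L2HDR // volume_block_bytes

one_tb = 1 << 40
for vbs in (8192, 65536, 131072):
    ram = l2arc_ram_bytes(one_tb, vbs)
    print(vbs // 1024, "KiB block:", ram, "B =", round(ram / (1 << 30), 2), "GB")
```

Larger volume block sizes mean fewer cached blocks to index, which is why the RAM requirement drops from roughly 54GB at 8KB to about 3.37GB at 128KB.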
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the group-adding form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones, or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Open-E VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota for dataset does not allow to remove files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 data groups causes some forms in the WebGUI to work very slowly. If you need to create many data groups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Open-E JovianDSS with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import under a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when Infiniband controllers are used for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and a failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V clusters. In some environments, under heavy load, switching cluster resources may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using Windows, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039; registry key&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using VMware, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using XenServer, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This will apply the new iSCSI timeout settings to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after an update to up11, the system may require re-activation. This issue will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB and Round-Robin do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using JovianDSS as a Hyper-V or VMware guest, ALB and Round-Robin bonding are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause VMware snapshot deletion to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest that performs many I/O operations can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for the Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt file transfers. Please enable quota on a dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If JovianDSS nodes are connected to the same AD server, they cannot have the same Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Share cannot be named the same as Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has an SW RAID structure, so downgrading to an earlier version is impossible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in next releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown, or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, power-cycling both nodes simultaneously may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown, or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. A second option is to restart the Failover in Jovian’s GUI. Refresh vSphere’s Adapters and Devices tab afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from JovianDSS up24 to JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after updating one node from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of JovianDSS up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of JovianDSS up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with a lot of random-access workloads, which is a common factor in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, an error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of JovianDSS up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there&#039;s more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, it can cause instability of the FC connection with client machines. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, and it requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference in FC configuration between up24 and earlier versions and later ones. Those earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator mode, with target as the default. Therefore, when using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. In consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). In order to bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+x -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear in Unassigned disks at the bottom of the page, which allows you to select that disk in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button and select Add read cache. The missing disk should now be available for selection as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. For such an environment, we recommend adding the following entry in the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
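As a quick sanity check that the entry took effect, the device section can be scanned with a short script. This is a sketch only: the simplified parser and the sample text below are illustrative and not part of the product.

```python
def has_no_path_retry_queue(conf_text):
    """Return True if a device section sets no_path_retry to queue.

    Simplified line-based scan; a full multipath.conf parser would
    also need to honor nesting and quoting.
    """
    in_device = False
    for raw in conf_text.splitlines():
        line = raw.strip()
        if line.startswith("device") and line.endswith("{"):
            in_device = True
        elif line.endswith("}"):
            in_device = False
        elif in_device:
            if line.split() == ["no_path_retry", "queue"]:
                return True
    return False

# Illustrative sample mirroring the structure shown above.
sample = """device {
        vendor                  "SCST_FIO|SCST_BIO"
        product                 "*"
        path_grouping_policy    multibus
        no_path_retry           queue
        }"""
print(has_no_path_retry_queue(sample))  # True
```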
=== In case of a large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No support for VMD option in BIOS leads to a problem with listing PCI devices ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; On some servers, an enabled VMD option in the BIOS causes PCI devices not to be listed properly. If this is the case, please disable the VMD option in the BIOS. This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled back data are not properly refreshed in both Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback operation, and reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, we should let JDSS know about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on JDSS leads to a situation where the changed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from JovianDSS up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from JovianDSS up29, we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.open-e.com/support/hardware-compatibility-list/open-e-jovian-dss/ https://www.open-e.com/support/hardware-compatibility-list/open-e-jovian-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters cannot be used in a password for the e-mail notification feature: #&amp;amp;nbsp;: +. They can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on an NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Next, edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on a zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool reaches more than 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase (especially waiting I/O), causing unstable operation. Expanding the pool space is recommended.&lt;br /&gt;
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the system shows the old value taken directly from cache memory. We recommend using the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down JovianDSS ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which JovianDSS interprets as being on battery. When it times out, it shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in the future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to JovianDSS up29 write-back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from JovianDSS up29, we disable the write-back cache on all HDD disks by default, but we do not disable the write-back cache on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, then restarting or disconnecting the JBOD can lead to data inconsistency. Starting from JovianDSS up29, we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on how large the number of snapshots is, it may take a few minutes or up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the gzip-9 compression algorithm, the system can become unstable when copying data to storage. It is advisable to use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using more than 500 zvols in the system, the responsiveness of the Web-GUI may be low and the system may have problems with the import of zpools.&lt;br /&gt;
&lt;br /&gt;
=== It is recommended to use Fibre Channel groups in Fibre Channel Target HA Cluster environments that use the Fibre Channel switches. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Fibre Channel switches in FC Target HA Cluster environments, it is recommended to use only Fibre Channel groups (using the Fibre Channel Public group is not recommended).&lt;br /&gt;
&lt;br /&gt;
=== Manual export and import of zpool in the system or deactivation of the Fibre Channel group without first suspending or turning off the virtual machines on the VMware ESXi side may cause loss of access to the data by VMware ESXi. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before a manual export and import of a zpool in the system or deactivation of the Fibre Channel group in Fibre Channel Target HA Cluster environment, you must suspend or turn off the virtual machines on the VMware ESXi side. Otherwise, the VMware ESXi may lose access to the data, and restarting it will be necessary.&lt;br /&gt;
&lt;br /&gt;
=== In Fibre Channel Target HA Cluster environments the VMware ESXi 6.7 must be used instead of VMware ESXi 7.0. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the VMware ESXi 7.0 in Fibre Channel Target HA Cluster environment, restarting one of the cluster nodes may cause the Fibre Channel paths to report a dead state.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes cluster nodes hang up during boot of Open-E JovianDSS. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case one of the cluster nodes hangs up during Open-E JovianDSS boot, it must be manually restarted.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes when using the IPMI hardware solutions, the cluster node may be restarted again by the IPMI watchdog ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, it is recommended to wait 5 minutes before turning on the cluster node after it was turned off.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes restarting one of the cluster nodes may cause some disks to be missing in the zpool configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, click the “Rescan storage” button on the WebGUI to solve this problem.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer reported an error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This information should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== The Self-Encrypting Drives (SED) feature is supported only for specific devices. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; At this moment, the Self-Encrypting Drives (SED) functionality is supported for Samsung PM1643a devices only.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Delete_zpool&amp;diff=11969</id>
		<title>Delete zpool</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Delete_zpool&amp;diff=11969"/>
		<updated>2022-02-21T15:49:23Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Created page with &amp;quot;This functionality is available in: Storage &amp;gt; Pool &amp;gt; Options  This option allows you to delete a zpool. The whole structure will be unmounted and then deleted. Note that all d...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in: Storage &amp;gt; Pool &amp;gt; Options&lt;br /&gt;
&lt;br /&gt;
This option allows you to delete a zpool. The whole structure will be unmounted and then deleted. Note that all data will be lost from the selected zpool. To delete a zpool:&lt;br /&gt;
&lt;br /&gt;
#Go to the Storage section.&lt;br /&gt;
#Go to the Pool subsection.&lt;br /&gt;
#Click the &#039;&#039;&#039;Options&#039;&#039;&#039; button.&lt;br /&gt;
#Click &#039;&#039;&#039;Delete Zpool&#039;&#039;&#039;.&lt;br /&gt;
#Enter the word &amp;quot;delete&amp;quot;.&lt;br /&gt;
#Click the &#039;&#039;&#039;Delete&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
You can also clean metadata from cache using Text-based User Interface (TUI). To do so:&lt;br /&gt;
&lt;br /&gt;
#Activate TUI.&lt;br /&gt;
#Press &#039;&#039;&#039;Ctrl&#039;&#039;&#039; + &#039;&#039;&#039;Alt&#039;&#039;&#039; + &#039;&#039;&#039;X&#039;&#039;&#039;.&lt;br /&gt;
#In the window that appears, click &#039;&#039;&#039;Yes&#039;&#039;&#039;.&lt;br /&gt;
#In the window that appears, enter the administrator password.&lt;br /&gt;
#Select &#039;&#039;&#039;Remove ZFS data structures and disks partitions&#039;&#039;&#039;.&lt;br /&gt;
#Choose &#039;&#039;&#039;Select&#039;&#039;&#039;.&lt;br /&gt;
#Press &#039;&#039;&#039;Enter&#039;&#039;&#039;.&lt;br /&gt;
#In the window that appears, enter the administrator password.&lt;br /&gt;
#Select disks to clear from the list.&lt;br /&gt;
#Choose &#039;&#039;&#039;OK&#039;&#039;&#039;.&lt;br /&gt;
#Press &#039;&#039;&#039;Enter&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11414</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Main_Page&amp;diff=11414"/>
		<updated>2021-04-12T11:30:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Release Notes:&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 40&lt;br /&gt;
count= 40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 80&lt;br /&gt;
count=40&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Video tutorials:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;Open-E JovianDSS Video Tutorials&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039;&#039;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Open-E JovianDSS Video Tutorials &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending }}&amp;lt;br/&amp;gt;&#039;&#039;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span&amp;gt;&amp;lt;span style=&amp;quot;color:#696969&amp;quot;&amp;gt;&#039;&#039;&#039;&amp;lt;span&amp;gt;Setting up a Failover service for Open-E JovianDSS&amp;lt;/span&amp;gt;&#039;&#039;&#039;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&#039;&#039; {{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Setting up a Failover service for Open-E JovianDSS &lt;br /&gt;
ordermethod = sortkey &lt;br /&gt;
order = ascending &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=SNMP_settings&amp;diff=10018</id>
		<title>SNMP settings</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=SNMP_settings&amp;diff=10018"/>
		<updated>2021-03-08T14:16:27Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This function enables you to configure access over the &#039;&#039;&#039;SNMP&#039;&#039;&#039; protocol in versions 2 or 3.&lt;br /&gt;
&lt;br /&gt;
With SNMP enabled, you receive a wealth of information (CPU usage, system load, memory info, ethernet traffic, running processes).&amp;lt;br/&amp;gt;System location and system contact are only for your information.&amp;amp;nbsp;&amp;amp;nbsp;For example, when you connect from an SNMP client, you will see your location and name.&lt;br /&gt;
&lt;br /&gt;
SNMP version 3 has an encrypted transmission feature as well as authentication by username and password.&amp;lt;br/&amp;gt;SNMP version 2 does not have encrypted transmission, and authentication is done only via the community string.&lt;br /&gt;
&lt;br /&gt;
The community string you set can contain up to 20 characters, while the password needs to have at least 8 characters.&lt;br /&gt;
&lt;br /&gt;
Links to SNMP clients:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.muonics.com http://www.muonics.com]&amp;amp;nbsp;&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
:&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.mg-soft.com http://www.mg-soft.com]&amp;amp;nbsp;&amp;lt;/span&amp;gt;&lt;br /&gt;
:&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.manageengine.com http://www.manageengine.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
Our storage system supports the SNMP protocol in MIB-II standard.&amp;amp;nbsp; List of MIBs:&lt;br /&gt;
&lt;br /&gt;
*- mib-2.host&lt;br /&gt;
&lt;br /&gt;
*- mib-2.ip&lt;br /&gt;
&lt;br /&gt;
*- mib-2.tcp&lt;br /&gt;
&lt;br /&gt;
*- mib-2.udp&lt;br /&gt;
&lt;br /&gt;
*- mib-2.interfaces&lt;br /&gt;
&lt;br /&gt;
*- mib-2.at&lt;br /&gt;
&lt;br /&gt;
*- system&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
JovianDSS offers additional SNMP values to monitor Pool and ZFS attributes.&amp;lt;br/&amp;gt;It is necessary to query specific OIDs in order to receive those attributes.&lt;br /&gt;
&lt;br /&gt;
For basic ZFS parameters, the NYMNETWORKS-MIB MIB is included: [[:Media:NYMNETWORKS-MIB.txt|NYMNETWORKS-MIB]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;snmpwalk -v 2c -m NYMNETWORKS-MIB -c community 192.168.251.79 .1.3.6.1.4.1.25359.1&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemName.1 = STRING: &amp;quot;Pool-0&amp;quot;&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemName.2 = STRING: &amp;quot;Pool-1&amp;quot;&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableKB.1 = Gauge32: 15861464&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableKB.2 = Gauge32: 15861672&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedKB.1 = Gauge32: 4327720&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedKB.2 = Gauge32: 4327512&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsPoolHealth.1 = INTEGER: online(1)&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsPoolHealth.2 = INTEGER: online(1)&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeKB.1 = Wrong Type (should be INTEGER): Gauge32: 20189184&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeKB.2 = Wrong Type (should be INTEGER): Gauge32: 20189184&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableMB.1 = Gauge32: 15489&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableMB.2 = Gauge32: 15489&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedMB.1 = Gauge32: 4226&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedMB.2 = Gauge32: 4226&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeMB.1 = Gauge32: 19716&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeMB.2 = Gauge32: 19716&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCSizeKB.0 = Gauge32: 61086&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMetadataSizeKB.0 = Gauge32: 9278&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCDataSizeKB.0 = Gauge32: 51808&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCHits.0 = Counter32: 229308&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMisses.0 = Counter32: 41260&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCTargetSize.0 = Gauge32: 64287&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMru.0 = Gauge32: 59529&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCHits.0 = Counter32: 
0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCMisses.0 = Counter32: 0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCReads.0 = Counter32: 0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCWrites.0 = Counter32: 0&lt;br /&gt;
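For monitoring scripts, walk output in the NAME = TYPE: VALUE form shown above can be reduced to a simple name-to-value mapping. A minimal sketch (the line layout is assumed from the example; values are kept as strings and surrounding quotes are stripped):

```python
def parse_snmpwalk(output):
    """Parse snmpwalk lines of the form NAME = TYPE: VALUE into a dict.

    Values are kept as strings; surrounding quotes on STRING values
    are stripped. Lines without the " = " separator are skipped.
    """
    result = {}
    for line in output.splitlines():
        if " = " not in line:
            continue
        name, _, rest = line.partition(" = ")
        _, _, value = rest.partition(": ")
        result[name.strip()] = value.strip().strip('"')
    return result

# Illustrative lines taken from the example output above.
sample = (
    'NYMNETWORKS-MIB::zfsFilesystemName.1 = STRING: "Pool-0"\n'
    'NYMNETWORKS-MIB::zfsPoolHealth.1 = INTEGER: online(1)\n'
    'NYMNETWORKS-MIB::zfsARCHits.0 = Counter32: 229308'
)
parsed = parse_snmpwalk(sample)
print(parsed["NYMNETWORKS-MIB::zfsFilesystemName.1"])  # Pool-0
```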
&lt;br /&gt;
Additional information, like compression ratio, deduplication ratio, available space (in bytes), and the age (in seconds) of the latest snapshot on a volume,&amp;lt;br/&amp;gt;can be obtained with the standard NET-SNMP-EXTEND-MIB:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Examples:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;deduplication&amp;quot; = STRING:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;deduplication Pool-0 1.00&amp;quot;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;compression&amp;quot; = STRING:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;compression Pool-0/vol00 1.01&amp;lt;br/&amp;gt;compression Pool-0/clone-vol00 1.00&amp;quot;&lt;br /&gt;
&lt;br /&gt;
NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;volumes_list&amp;quot; = STRING:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;quot;available Pool-0/vol00 11981377536&amp;lt;br/&amp;gt;available Pool-0/clone-vol00 11981377536&amp;quot;&lt;br /&gt;
&lt;br /&gt;
NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;snapshots_age&amp;quot; = STRING:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;quot;snapshot_age Pool-0/vol00 3&amp;lt;br/&amp;gt;snapshot_age Pool-0/vol01 371&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Untranslated OIDs:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;root@p-GA-880GM-USB3:/home/p# snmpwalk -v2c -c public 192.168.0.80&amp;amp;nbsp; 1.3.6.1.4.1.8072.1.3.2.3&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.11.99.111.109.112.114.101.115.115.105.111.110 = STRING: &amp;quot;compression Pool-0/vol00 1.01&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.12.115.110.97.112.115.104.111.116.95.97.103.101 = STRING: &amp;quot;snapshot_age Pool-0/vol00 3&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.12.118.111.108.117.109.101.115.95.108.105.115.116 = STRING: &amp;quot;available Pool-0/vol00 11981377536&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.13.100.101.100.117.112.108.105.99.97.116.105.111.110 = STRING: &amp;quot;deduplication Pool-0 1.00&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.11.99.111.109.112.114.101.115.115.105.111.110 = STRING: &amp;quot;compression Pool-0/vol00 1.01&amp;lt;br/&amp;gt;compression Pool-0/clone-vol00 1.00&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.12.115.110.97.112.115.104.111.116.95.97.103.101 = STRING: &amp;quot;snapshot_age Pool-0/vol00 3&amp;lt;br/&amp;gt;snapshot_age Pool-0/vol01 371&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.12.118.111.108.117.109.101.115.95.108.105.115.116 = STRING: &amp;quot;available Pool-0/vol00 11981377536&amp;lt;br/&amp;gt;available Pool-0/clone-vol00 11981377536&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.13.100.101.100.117.112.108.105.99.97.116.105.111.110 = STRING: &amp;quot;deduplication Pool-0 1.00&amp;quot;&lt;br /&gt;
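The numeric index in the untranslated OIDs above is simply the extend entry name encoded as its length followed by the ASCII codes of its characters (e.g. `.11.99.111…110` is the 11-character name "compression"). A minimal shell sketch of decoding such an index:

```shell
# Decode an nsExtendOutputFull string index back to its name.
# The first sub-identifier is the length; the rest are ASCII codes.
oid_index="11.99.111.109.112.114.101.115.115.105.111.110"
name=$(echo "$oid_index" | cut -d. -f2- | tr '.' '\n' | awk '{printf "%c", $1+0}')
echo "$name"   # prints: compression
```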
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:NYMNETWORKS-MIB.txt&amp;diff=11855</id>
		<title>File:NYMNETWORKS-MIB.txt</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:NYMNETWORKS-MIB.txt&amp;diff=11855"/>
		<updated>2021-03-08T13:25:53Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11742</id>
		<title>Open-E JovianDSS ver.1.0 up28 Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11742"/>
		<updated>2020-11-17T10:52:51Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date:&amp;amp;nbsp;2020-01-28&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build:&amp;amp;nbsp;37311&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For build 38496 go to: [http://wiki.open-e.com/default/wiki/Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== HA Cluster Ring can be configured as two single connections ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster Ping-nodes can be configured within any available interfaces and subnetworks ===&lt;br /&gt;
&lt;br /&gt;
=== Static routing configuration is available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Custom SSL/TLS certificates can be manually imported in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS Datasets can get a record size value from a range of 4KiB up to 16MiB (the default record size value is 1MiB) ===&lt;br /&gt;
&lt;br /&gt;
=== Fibre Channel Target mode is available for ATTO Fiber Channel Adapter (supported only with VMware client) ===&lt;br /&gt;
&lt;br /&gt;
=== Improved performance of LDAP database replication mechanism ===&lt;br /&gt;
&lt;br /&gt;
=== Storage performance test tool is available in TUI (System console -&amp;gt; Ctrl+Alt+t -&amp;gt; Add-ons -&amp;gt; Storage performance tool) ===&lt;br /&gt;
&lt;br /&gt;
=== HPE tools for managing HP Smart Array controllers are available in Web-GUI and TUI ===&lt;br /&gt;
&lt;br /&gt;
=== macOS Spotlight search support allows users to quickly locate files and search through their contents ===&lt;br /&gt;
&lt;br /&gt;
=== Installer creates a 128GB boot medium partition (more space for further upgrade processes) ===&lt;br /&gt;
&lt;br /&gt;
=== New filtering options for Event Viewer (selection by error, warning, information, and date range) ===&lt;br /&gt;
&lt;br /&gt;
=== Kdump (kernel crash dumping mechanism) ===&lt;br /&gt;
&lt;br /&gt;
=== The default SCSI ID for iSCSI and FC LUNs can be manually set in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Deduplication statistics for zpool are available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Detailed Ethernet card statistics (amount of data sent and received) are available in the system logs ===&lt;br /&gt;
&lt;br /&gt;
=== Statistics for MPIO devices are displayed in GUI (Diagnostics -&amp;gt; Disk usage) ===&lt;br /&gt;
&lt;br /&gt;
=== Linux iostat and S.M.A.R.T data are available in Checkmk monitoring system ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Samba 4.9.4 ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-3 driver (mlx4_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-4/5 driver (mlx5_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE driver i40e (i40e, v2.9.21) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM5706/5708/5709/5716 driver (bnx2, v2.2.5x) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57710/57711/57711E/57712/57712_MF/57800/57800_MF/57810/57810_MF/57840/57840_MF driver (bnx2x, v.1.715.0) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v1.76.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec RAID and HBA driver (aacraid, v1.2.1.57013src) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartRAID and SmartHBA driver (smartpqi, v1.2.6-015) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID SAS Driver (megaraid_sas, v07.709.08.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool (v3.02-23600) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca SAS/SATA RAID Controller Driver (arcmsr, v1.40.0X.10-20181227) ===&lt;br /&gt;
&lt;br /&gt;
=== Smartmontools 7.0 ===&lt;br /&gt;
&lt;br /&gt;
=== VMware tools v10.3.10.10540 ===&lt;br /&gt;
&lt;br /&gt;
=== Page cache for zvol File I/O mode is reduced to 50% ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== RSS does not check whether the gateway is set up and whether the RSS server is available ===&lt;br /&gt;
&lt;br /&gt;
=== System activation on XEN VSA does not work ===&lt;br /&gt;
&lt;br /&gt;
=== Cannot use XEN drives for Metro Cluster in XEN VSA ===&lt;br /&gt;
&lt;br /&gt;
=== Zvol configured as a destination in OODP can still be set as a LUN for a target ===&lt;br /&gt;
&lt;br /&gt;
=== Dataset configured as a destination in OODP can still be used as a location for a Share ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VCenter/VSphere snapshot autoremove mechanism deletes all ESX snapshots ===&lt;br /&gt;
&lt;br /&gt;
=== Listing of OODP snapshots takes a very long time ===&lt;br /&gt;
&lt;br /&gt;
=== activation.xml is cleared when the activation server is unavailable, e.g. because of firewall settings ===&lt;br /&gt;
&lt;br /&gt;
=== System restart in watchdog for processes which work for more than 300 sec. ===&lt;br /&gt;
&lt;br /&gt;
=== Problems with ssh and jumboframes (MTU) ===&lt;br /&gt;
&lt;br /&gt;
=== The SIDs are not mapped to usernames and groups for shares in Windows (fixed for new JovianDSS installations only) ===&lt;br /&gt;
&lt;br /&gt;
=== Unstable operation of Intel X710/XL710 and Intel X722 network cards configured in LACP or Balance Round Robin bonding mode ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for JovianDSS HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use the sync=always option for zvols and datasets in a cluster ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run Scrub scanner after failover action triggered by power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to change any settings when the nodes do not have the same JovianDSS version, for example during a software update ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization and cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the Broadcom SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;You can find details below:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;[https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101 https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult Broadcom vendor for specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recently cached data (up to 5 seconds) can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record option is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record option, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
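On the command line, these options correspond to the standard ZFS `sync` property; an illustrative sketch (the pool and zvol names are examples only, not taken from a real configuration):

```shell
# Illustrative ZFS equivalents of the sync settings described above
# (Pool-0/vol00 is an example name).
zfs set sync=always Pool-0/vol00     # safest: every write is flushed to stable storage
zfs set sync=standard Pool-0/vol00   # honor only explicit client sync requests
zfs set sync=disabled Pool-0/vol00   # fastest: up to ~5 s of cached writes can be lost
zfs get sync Pool-0/vol00            # verify the current setting
```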
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser’s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As this operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools.
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the whole zpool that includes that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - the percentage of RAM reserved for the ARC (75%)&amp;lt;br/&amp;gt;0.25 - the percentage of the ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 128KB volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 1MB volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
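The RAM formula used in these examples can be wrapped in a small helper to check the numbers (a sketch; sizes are given in bytes):

```shell
# Estimate RAM (in GB) needed for the dedup table, per the formula above:
# (size / block size) * 320B / 0.75 (ARC share of RAM) / 0.25 (DDT share of ARC).
ddt_ram_gb() {
  awk -v size="$1" -v bs="$2" \
    'BEGIN { printf "%.2f\n", (size / bs) * 320 / 0.75 / 0.25 / 1024 / 1024 / 1024 }'
}
ddt_ram_gb 1099511627776 65536     # 1TB at 64KB  -> 26.67
ddt_ram_gb 1099511627776 131072    # 1TB at 128KB -> 13.33
ddt_ram_gb 1099511627776 1048576   # 1TB at 1MB   -> 1.67
```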
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when the data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will work with good performance while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume-block-size. A simple example is the Windows NTFS file system with its default format block size of 4k, while the default zvol volume-block-size is 128k. With defaults like this, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool. Every subsequent write will match, as both alignment options already exist on the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol’s physical size cannot show the deduplication benefit. To prove that deduplication saved space, run a scrub and note the current physical data size on the pool reported by the scrub. Next, copy new data and run the scrub again; it will now report the new physical data size. Comparing the data size from the storage-client side with the data-space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.&lt;br /&gt;
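The comparison described above boils down to simple arithmetic; a sketch with hypothetical numbers (in practice, the physical sizes would come from the two scrub reports):

```python
GIB = 1024 ** 3

def dedup_advantage(phys_before: int, phys_after: int, client_written: int):
    """Compare client-side data written with physical pool growth."""
    pool_growth = phys_after - phys_before
    ratio = client_written / pool_growth     # effective dedup ratio
    return pool_growth, ratio

# Hypothetical example: the client copied 100 GiB of data, but the two
# scrub reports show the pool only grew by 40 GiB.
growth, ratio = dedup_advantage(500 * GIB, 540 * GIB, 100 * GIB)
print(growth // GIB, round(ratio, 2))        # 40 2.5
```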
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should also be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to a single Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. Clear those disks before installation using the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with a broken write log disk cannot be imported using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or Read Cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
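The three worked examples can be reproduced with one small helper (a sketch; the constants 4718592B and 432B are taken from the legend above, not from any JovianDSS API):

```python
# RAM needed to index a Read Cache (L2ARC) device, per the formula above:
# (cache size - reserved size and labels) * l2hdr bytes / volume block size

RESERVED = 4718592   # reserved size and labels (from the legend above)
L2HDR = 432          # bytes reserved by the l2hdr structure per block

def read_cache_ram_bytes(cache_bytes: int, block_bytes: int) -> float:
    return (cache_bytes - RESERVED) * L2HDR / block_bytes

TIB = 1024 ** 4
for block in (8192, 65536, 131072):          # 8KB, 64KB, 128KB
    gib = read_cache_ram_bytes(1 * TIB, block) / 1024 ** 3
    print(f"{block // 1024}KB block: {gib:.2f} GB")   # ~54, ~6.75, ~3.37 GB
```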
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Open-E VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota on a dataset prevents removing files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 data groups causes some forms in the WebGUI to work very slowly. If you need to create many data groups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 25 datasets, the WebGUI works slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s options menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Open-E JovianDSS with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import under a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For technical reasons, the High Availability shared storage cluster does not work properly when Infiniband controllers are used for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover mechanism moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and a failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V clusters. In some environments under heavy load, switching cluster resources may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;When using Windows, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039; registry key&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;When using VMware, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;When using XenServer, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This applies the new iSCSI timeout settings to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after updating to up11, the system may require re-activation. This issue will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB and Round-Robin do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using JovianDSS as a Hyper-V or VMware guest, ALB and Round-Robin bonding are not supported. Please use another bonding type.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest that performs many I/O operations can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt file transfers. Enable quota on the dataset before using it in a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If JovianDSS nodes are connected to the same AD server, they must have unique Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Going back to an earlier version is therefore impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing operations such as reboot, shutdown or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with the incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, power cycling both nodes simultaneously may lead to a situation in which VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. A second option is to restart the Failover in Jovian’s GUI. Refresh vSphere’s Adapters and Devices tab afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from JovianDSS up24 to JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after updating one node from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of Jovian DSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of JovianDSS up25, the default value of the Write Cache Log bias for zvols was set to “In Pool (Throughput)”. In the final release of JovianDSS up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is common for the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to enter a Target alias name for every port. Otherwise an error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of JovianDSS up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password, and choose the option Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, one can observe significantly decreased performance on FIO/WT.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference in FC configuration between up24 (and earlier) and later versions. Those earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator modes, with target as the default. Therefore, in case of using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). In order to bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+X -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear under Unassigned disks at the bottom of the page, which allows selecting that disk in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button and select Add read cache. The missing disk should now be available for selection as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. In such an environment, we recommend adding the following entry to the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11741</id>
		<title>Open-E JovianDSS ver.1.0 up28 Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11741"/>
		<updated>2020-11-17T10:50:48Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date:&amp;amp;nbsp;2020-01-28&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build:&amp;amp;nbsp;37311&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For build 38496 go to: [http://wiki.open-e.com/default/wiki/Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== HA Cluster Ring can be configured as two single connections ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster Ping-nodes can be configured within any available interfaces and subnetworks ===&lt;br /&gt;
&lt;br /&gt;
=== Static routing configuration is available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Custom SSL/TLS certificates can be manually imported in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS Datasets can get a record size value from range of 4KiB up to 16MiB (default record size value is 1MiB) ===&lt;br /&gt;
&lt;br /&gt;
=== Fibre Channel Target mode is available for ATTO Fiber Channel Adapter (supported only with VMware client) ===&lt;br /&gt;
&lt;br /&gt;
=== Improved performance of LDAP database replication mechanism ===&lt;br /&gt;
&lt;br /&gt;
=== Storage performance test tool is available in TUI (System console -&amp;gt; Ctrl+Alt+t -&amp;gt; Add-ons -&amp;gt; Storage performance tool) ===&lt;br /&gt;
&lt;br /&gt;
=== HPE tools for managing HP Smart Array controllers are available in Web-GUI and TUI ===&lt;br /&gt;
&lt;br /&gt;
=== MacOS Spotlight search support allows you to quickly locate files and search through their contents ===&lt;br /&gt;
&lt;br /&gt;
=== Installer creates 128GB boot medium partition size (more space for further upgrade processes) ===&lt;br /&gt;
&lt;br /&gt;
=== New filtering options for Event Viewer (selection by: error, warning, information and the date ranges) ===&lt;br /&gt;
&lt;br /&gt;
=== Kdump (kernel crash dumping mechanism) ===&lt;br /&gt;
&lt;br /&gt;
=== The default SCSI ID for iSCSI and FC luns can be manually set in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Deduplication statistics for zpool are available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Ethernet cards detailed statistics (amount of data sent and received) are available in the system logs ===&lt;br /&gt;
&lt;br /&gt;
=== Statistics for MPIO devices are displayed in GUI (Diagnostics -&amp;gt; Disk usage) ===&lt;br /&gt;
&lt;br /&gt;
=== Linux iostat and S.M.A.R.T data are available in Checkmk monitoring system ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Samba 4.9.4 ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-3 driver (mlx4_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-4/5 driver (mlx5_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE driver i40e (i40e, v2.9.21) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM5706/5708/5709/5716 driver (bnx2, v2.2.5x) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57710/57711/57711E/57712/57712_MF/57800/57800_MF/57810/57810_MF/57840/57840_MF driver (bnx2x, v.1.715.0) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v1.76.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec RAID and HBA driver (aacraid, v1.2.1.57013src) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartRAID and SmartHBA driver (smartpqi, v1.2.6-015) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID SAS Driver (megaraid_sas, v07.709.08.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool (v3.02-23600) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca SAS/SATA RAID Controller Driver (arcmsr, v1.40.0X.10-20181227) ===&lt;br /&gt;
&lt;br /&gt;
=== Smartmontools 7.0 ===&lt;br /&gt;
&lt;br /&gt;
=== VMware tools v10.3.10.10540 ===&lt;br /&gt;
&lt;br /&gt;
=== Page cache for zvol File I/O mode is reduced to 50% ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== RSS does not check if gateway is set up and if RSS server is available ===&lt;br /&gt;
&lt;br /&gt;
=== System activation on XEN VSA does not work ===&lt;br /&gt;
&lt;br /&gt;
=== Cannot use XEN drives for Metro Cluster in XEN VSA ===&lt;br /&gt;
&lt;br /&gt;
=== Zvol configured as a destination in OODP still can be set as a LUN for target ===&lt;br /&gt;
&lt;br /&gt;
=== Dataset configured as a destination in OODP still can be used as a location for a Share ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VCenter/VSphere snapshot autoremove mechanism deletes all ESX snapshots ===&lt;br /&gt;
&lt;br /&gt;
=== Listing of OODP snapshots lasts very long ===&lt;br /&gt;
&lt;br /&gt;
=== activation.xml is cleaned while activation server was unavailable e.g. because of firewall settings ===&lt;br /&gt;
&lt;br /&gt;
=== System restart in watchdog for processes which work more than 300 sec. ===&lt;br /&gt;
&lt;br /&gt;
=== Problems with ssh and jumboframes (MTU) ===&lt;br /&gt;
&lt;br /&gt;
=== The SIDs are not mapped to usernames and groups for shares in Windows (fixed for new JovianDSS installations only) ===&lt;br /&gt;
&lt;br /&gt;
=== Unstable working of Intel X710/XL710 and Intel X722 network cards configured as LACP or Balance Round Robin bonding mode ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for JovianDSS HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use sync always option for zvols and datasets in cluster&amp;amp;nbsp; ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run Scrub scanner after failover action triggered by power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly not recommended to change any settings when both nodes do not have the same JovianDSS version, for example during software updating ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization and it cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the Broadcom SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;You can find details below:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;[https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101 https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult Broadcom vendor for specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
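For scripted hosts, the same four values can also be applied from the ESXi shell with esxcli. A minimal sketch, assuming the software iSCSI adapter is named vmhba64 (a placeholder; list your adapters first) and that the parameter keys match the names shown in the vSphere GUI:

```shell
# Sketch only: vmhba64 is a placeholder adapter name; check yours first.
esxcli iscsi adapter list

# Apply the four advanced parameters recommended above.
esxcli iscsi adapter param set -A vmhba64 -k MaxOutstandingR2T -v 8
esxcli iscsi adapter param set -A vmhba64 -k FirstBurstLength  -v 65536
esxcli iscsi adapter param set -A vmhba64 -k MaxBurstLength    -v 1048576
esxcli iscsi adapter param set -A vmhba64 -k MaxRecvDataSegLen -v 1048576
```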
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. In case of using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record option is enabled by default. This option worsens performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record option, but in such a case it is strongly recommended to use a UPS.&lt;br /&gt;
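At the ZFS level, this setting corresponds to the sync property of the zvol. A minimal sketch, assuming a hypothetical pool named Pool-0 with a zvol named zvol00 (substitute your own names):

```shell
# Sketch only: Pool-0/zvol00 is a hypothetical pool/zvol name.
# Show the current sync policy of the zvol.
zfs get sync Pool-0/zvol00

# Safest setting (the default described above): commit every write to stable storage.
zfs set sync=always Pool-0/zvol00

# Faster but riskier alternatives; use only in environments with a UPS.
zfs set sync=standard Pool-0/zvol00
zfs set sync=disabled Pool-0/zvol00
```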
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser’s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming the deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you predict frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, please change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As this operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, there is no possibility to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
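The registry steps above map to a single fsutil switch on Windows Server 2012; a hedged equivalent, run from an elevated command prompt:

```shell
# Windows commands, shown here as a sketch; run from an elevated prompt.
# Query the current delete-notification (reclaim/TRIM) state.
fsutil behavior query disabledeletenotify

# Turn automatic reclaim off (same effect as DisableDeleteNotification = 1).
fsutil behavior set disabledeletenotify 1

# Turn it back on before running the Optimize tool to reclaim space.
fsutil behavior set disabledeletenotify 0
```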
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on zpools dedicated for deduplication and delete the zpool which includes that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of the ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706,66B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 1789569706,66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;so for every extra 1TB of storage, system needs extra 1.66GB RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
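All three worked examples above follow the same formula. A minimal Python sketch (the 320 B per-entry size and the 0.75 and 0.25 divisors are taken directly from the examples above):

```python
# Deduplication-table RAM estimate used in the examples above:
# one 320 B table entry per data block, divided by the 0.75 and 0.25
# factors from the worked examples.
def dedup_ram_bytes(data_bytes: int, block_bytes: int) -> float:
    return (data_bytes / block_bytes) * 320 / 0.75 / 0.25

TIB = 1024 ** 4
GIB = 1024 ** 3

for block_kib in (64, 128, 1024):
    ram_gb = dedup_ram_bytes(TIB, block_kib * 1024) / GIB
    # prints roughly 26.67, 13.33 and 1.67 GB respectively
    print(f"{block_kib:>4} KiB block: {ram_gb:.2f} GB RAM per 1 TB of data")
```

Larger volume block sizes shrink the deduplication table linearly, which is why the 1 MB example needs only a fraction of the RAM of the 64 KB one.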
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when the data is completely unique and cannot be deduplicated. For deduplicable data, the RAM requirement drastically decreases. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume-block-size. A simple example is the Windows NTFS file system with its default format block size of 4k, while the zvol default volume-block-size is 128k. With defaults like this, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail at most once, because a file can be aligned at only 2 (128/64) different positions on the pool. Every subsequent write will already match, as both alignment options exist on the pool. To make all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
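The alignment arithmetic in this note can be illustrated with a short sketch; the function below only reproduces the 128/4 and 128/64 divisions discussed above:

```python
# Number of distinct positions a file-system block can occupy inside one
# zvol block; the more positions, the more ways deduplication can miss.
def alignment_positions(zvol_block: int, fs_block: int) -> int:
    assert zvol_block % fs_block == 0, "zvol block must be a multiple of fs block"
    return zvol_block // fs_block

print(alignment_positions(128 * 1024, 4 * 1024))   # NTFS 4k on a 128k zvol -> 32
print(alignment_positions(128 * 1024, 64 * 1024))  # NTFS 64k on a 128k zvol -> 2
print(alignment_positions(64 * 1024, 64 * 1024))   # NTFS 64k on a 64k zvol -> 1
```

One possible alignment means every identical block lands in the same position, so deduplication can always match.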
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol Physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will now report the new physical data space. Comparing the data size seen on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs in zfs.log.&lt;br /&gt;
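The scrub-based check described above boils down to a single division; here is a sketch with hypothetical numbers (the 100 GB / 25 GB figures are made up for illustration only):

```python
# Deduplication advantage estimated from two scrubs, as described above:
# data written from the client divided by the pool's physical growth.
def dedup_advantage(client_bytes_written: int,
                    physical_before: int, physical_after: int) -> float:
    return client_bytes_written / (physical_after - physical_before)

GIB = 1024 ** 3
# hypothetical example: 100 GB copied, pool physical space grew by only 25 GB
print(dedup_advantage(100 * GIB, 500 * GIB, 525 * GIB))  # -> 4.0
```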
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium like a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI will show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. In order to reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
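The three worked examples above can be reproduced with a short sketch of the same formula (the 4718592 B reserved size and the 432 B l2hdr entry size are taken from the examples above):

```python
# Read Cache RAM formula from the examples above:
# RAM = (cache size - reserved size and labels) * 432 B l2hdr / block size
def l2arc_ram_bytes(cache_bytes: int, block_bytes: int,
                    reserved_bytes: int = 4718592,
                    l2hdr_bytes: int = 432) -> float:
    return (cache_bytes - reserved_bytes) * l2hdr_bytes / block_bytes

TIB = 1024 ** 4
GIB = 1024 ** 3

for block_kib in (8, 64, 128):
    ram_gb = l2arc_ram_bytes(TIB, block_kib * 1024) / GIB
    # prints roughly 54.00, 6.75 and 3.37 GB respectively
    print(f"{block_kib:>3} KiB block: {ram_gb:.2f} GB RAM")
```

As with deduplication, the RAM cost scales inversely with the volume block size, which is why small block sizes combined with a large Read Cache are so memory-hungry.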
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the following error: &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks which were part of the Zpool become immediately available. Before you can reuse disks which were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Open-E VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota for dataset does not allow to remove files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please resize the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to become very slow. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Open-E JovianDSS with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import with a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when using Infiniband controllers for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA-cluster. If all pools are active on a single node and failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V Clusters. In some environments under heavy load, cluster resource switching may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to up to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using Windows, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run regedit tool and find: &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime registry key&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
2. Change value of the key from default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using VMware, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using XenServer, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change value from default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This applies the new iSCSI timeout settings to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change value from default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
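The edit from steps 1-3 can also be done non-interactively. A minimal sketch using sed, shown here against an illustrative copy of the file (/tmp/iscsid.conf.sample) rather than the live /etc/iscsi/iscsid.conf:

```shell
# Illustrative copy of the relevant iscsid.conf line
printf 'node.session.timeo.replacement_timeout = 120\n' > /tmp/iscsid.conf.sample

# Raise the iSCSI replacement timeout from the default 120 to 600 seconds
sed -i 's/^node\.session\.timeo\.replacement_timeout = .*/node.session.timeo.replacement_timeout = 600/' /tmp/iscsid.conf.sample

cat /tmp/iscsid.conf.sample
```

Point the same sed expression at /etc/iscsi/iscsid.conf on the XenServer host, then detach and reattach the SRs as described above.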
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after an update to up11. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB and Round-Robin do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When running JovianDSS as a Hyper-V or VMware guest, ALB and Round-Robin bonding are not supported. Please use another bonding type.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest that performs many I/O operations can make deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt file transfers. Please enable quota on the dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; JovianDSS nodes connected to the same AD server cannot have the same Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is not possible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, power cycling both nodes simultaneously may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. Alternatively, restart the Failover in Jovian’s GUI and refresh vSphere’s Adapters and Devices tab afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from JovianDSS up24 to JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after one node is updated from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of Jovian DSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of JovianDSS up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of Jovian up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is the case for the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of Jovian up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message and type in the password. Choose the option Kernel module parameters, select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on the FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference in FC configuration between up24 and earlier versions and later ones. The earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator mode, with target as the default. Therefore, when storage is connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). To bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+x -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear under Unassigned disks at the bottom of the page, which allows selecting it in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button and select Add read cache. The missing disk should now be available to select as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. For such an environment, we recommend adding the following entry in the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11740</id>
		<title>Open-E JovianDSS ver.1.0 up28 Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11740"/>
		<updated>2020-11-17T08:28:04Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date:&amp;amp;nbsp;2020-01-28&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build:&amp;amp;nbsp;37311&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For build 38496 go to: [http://wiki.open-e.com/default/wiki/Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes http://wiki.open-e.com/default/wiki/Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== HA Cluster Ring can be configured as two single connections ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster Ping-nodes can be configured within any available interfaces and subnetworks ===&lt;br /&gt;
&lt;br /&gt;
=== Static routing configuration is available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Custom SSL/TLS certificates can be manually imported in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS Datasets can get a record size value from the range of 4 KiB up to 16 MiB (the default record size is 1 MiB) ===&lt;br /&gt;
&lt;br /&gt;
=== Fibre Channel Target mode is available for ATTO Fibre Channel Adapter (supported only with VMware client) ===&lt;br /&gt;
&lt;br /&gt;
=== Improved performance of LDAP database replication mechanism ===&lt;br /&gt;
&lt;br /&gt;
=== Storage performance test tool is available in TUI (System console -&amp;gt; Ctrl+Alt+t -&amp;gt; Add-ons -&amp;gt; Storage performance tool) ===&lt;br /&gt;
&lt;br /&gt;
=== HPE tools for managing HP Smart Array controllers are available in Web-GUI and TUI ===&lt;br /&gt;
&lt;br /&gt;
=== macOS Spotlight search support allows users to quickly locate files and search through their contents ===&lt;br /&gt;
&lt;br /&gt;
=== Installer creates a 128GB boot medium partition (more space for future upgrade processes) ===&lt;br /&gt;
&lt;br /&gt;
=== New filtering options for Event Viewer (selection by: error, warning, information and the date ranges) ===&lt;br /&gt;
&lt;br /&gt;
=== Kdump (kernel crash dumping mechanism) ===&lt;br /&gt;
&lt;br /&gt;
=== The default SCSI ID for iSCSI and FC LUNs can be manually set in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Deduplication statistics for zpool are available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Ethernet cards detailed statistics (amount of data sent and received) are available in the system logs ===&lt;br /&gt;
&lt;br /&gt;
=== Statistics for MPIO devices are displayed in GUI (Diagnostics -&amp;gt; Disk usage) ===&lt;br /&gt;
&lt;br /&gt;
=== Linux iostat and S.M.A.R.T data are available in Checkmk monitoring system ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Samba 4.9.4 ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-3 driver (mlx4_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-4/5 driver (mlx5_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE driver i40e (i40e, v2.9.21) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM5706/5708/5709/5716 driver (bnx2, v2.2.5x) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57710/57711/57711E/57712/57712_MF/57800/57800_MF/57810/57810_MF/57840/57840_MF driver (bnx2x, v.1.715.0) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v1.76.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec RAID and HBA driver (aacraid, v1.2.1.57013src) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartRAID and SmartHBA driver (smartpqi, v1.2.6-015) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID SAS Driver (megaraid_sas, v07.709.08.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool (v3.02-23600) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca SAS/SATA RAID Controller Driver (arcmsr, v1.40.0X.10-20181227) ===&lt;br /&gt;
&lt;br /&gt;
=== Smartmontools 7.0 ===&lt;br /&gt;
&lt;br /&gt;
=== VMware tools v10.3.10.10540 ===&lt;br /&gt;
&lt;br /&gt;
=== Page cache for zvol File I/O mode is reduced to 50% ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== RSS does not check if gateway is set up and if RSS server is available ===&lt;br /&gt;
&lt;br /&gt;
=== System activation on XEN VSA does not work ===&lt;br /&gt;
&lt;br /&gt;
=== Cannot use XEN drives for Metro Cluster in XEN VSA ===&lt;br /&gt;
&lt;br /&gt;
=== Zvol configured as a destination in OODP still can be set as a LUN for target ===&lt;br /&gt;
&lt;br /&gt;
=== Dataset configured as a destination in OODP still can be used as a location for a Share ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VCenter/VSphere snapshot autoremove mechanism deletes all ESX snapshots ===&lt;br /&gt;
&lt;br /&gt;
=== Listing of OODP snapshots lasts very long ===&lt;br /&gt;
&lt;br /&gt;
=== activation.xml is cleaned while activation server was unavailable e.g. because of firewall settings ===&lt;br /&gt;
&lt;br /&gt;
=== System restart triggered by the watchdog for processes which work for more than 300 sec. ===&lt;br /&gt;
&lt;br /&gt;
=== Problems with ssh and jumboframes (MTU) ===&lt;br /&gt;
&lt;br /&gt;
=== The SIDs are not mapped to usernames and groups for shares in Windows (fixed for new JovianDSS installations only) ===&lt;br /&gt;
&lt;br /&gt;
=== Unstable working of Intel X710/XL710 and Intel X722 network cards configured as LACP or Balance Round Robin bonding mode ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for JovianDSS HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use sync always option for zvols and datasets in cluster&amp;amp;nbsp; ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run Scrub scanner after failover action triggered by power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly not recommended to change any settings when both nodes do not have the same JovianDSS version, for example during software updating ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization and it cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the Broadcom SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;You can find details below:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;[https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101 https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult Broadcom vendor for specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
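The same parameters can usually also be set from the ESXi shell with esxcli (a sketch; vmhba33 is a placeholder for your iSCSI software adapter, and the exact key names and syntax may vary between ESXi versions):&lt;br /&gt;

```
esxcli iscsi adapter param set -A vmhba33 -k MaxOutstandingR2T -v 8
esxcli iscsi adapter param set -A vmhba33 -k MaxRecvDataSegLen -v 1048576
```

and analogously for the remaining parameters listed above.&lt;br /&gt;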
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recently cached data (up to 5 seconds) can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record option is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record option, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
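At the ZFS level these modes correspond to the standard sync property. An illustrative sketch (pool-0/zvol00 is a hypothetical zvol name; in JovianDSS the setting is normally changed via the GUI):&lt;br /&gt;

```
zfs set sync=always pool-0/zvol00
zfs get sync pool-0/zvol00
```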
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with the WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaimed deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you predict frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012 please change the&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use &amp;quot;Optimize&amp;quot; tool located in Disc Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load in the system, it is recommended to perform it after-hours. &amp;amp;nbsp;&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, there is no possibility to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
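On newer Windows Server versions, the same delete-notification switch described above can also be toggled with the built-in fsutil tool instead of editing the registry directly (a sketch; verify the behavior on your Windows version):&lt;br /&gt;

```
fsutil behavior set DisableDeleteNotify 1
fsutil behavior query DisableDeleteNotify
```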
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after hours. To avoid this issue, please use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the zpool which includes that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;so for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
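The worst-case DDT sizing rule above can be sketched in a few lines. This is an illustrative helper, not an Open-E tool; the constants (320 B per DDT entry, 75% ARC share of RAM, 25% DDT share of ARC) are taken directly from the formula in this article, and all names are invented for the example:

```python
# Worst-case RAM needed for deduplication, per the wiki formula:
# (zvol size / volume block size) * 320B / 0.75 / 0.25
DDT_ENTRY_BYTES = 320   # size of one deduplication-table entry
ARC_RAM_SHARE = 0.75    # fraction of RAM reserved for ARC
DDT_ARC_SHARE = 0.25    # fraction of ARC reserved for the DDT

def dedup_ram_bytes(zvol_bytes: int, block_bytes: int) -> float:
    """RAM needed to hold the DDT for a fully unique (worst-case) zvol."""
    entries = zvol_bytes / block_bytes
    return entries * DDT_ENTRY_BYTES / ARC_RAM_SHARE / DDT_ARC_SHARE

TIB = 1024 ** 4
GIB = 1024 ** 3

for block in (64 * 1024, 128 * 1024, 1024 * 1024):
    ram = dedup_ram_bytes(1 * TIB, block)
    print(f"{block // 1024:>4} KiB blocks -> {ram / GIB:.2f} GiB RAM per TiB")
```

Running this reproduces the three worked examples (26.67, 13.33 and 1.67 GiB per TiB of data).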
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations only apply to the worst-case scenario, where the data is completely unique and cannot be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system&#039;s format block size with the zvol volume block size. A simple example is a Windows NTFS file system with the default 4k format block size on a zvol with the default 128k volume block size. With such defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size remains 128k, a deduplication match can fail only once, because a file can be aligned at only 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To achieve matching for all files with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
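The alignment argument above reduces to a simple ratio. The following sketch (illustrative only, not an Open-E utility) counts how many distinct positions a file written with a given file-system block size can occupy inside one zvol block; the more positions, the fewer deduplication matches:

```python
def alignment_positions(zvol_block_kib: int, fs_block_kib: int) -> int:
    """Number of alignments a file can have inside one zvol block.
    More positions mean a lower chance of deduplication matches."""
    # the file-system block size must evenly divide the zvol block size
    assert zvol_block_kib % fs_block_kib == 0
    return zvol_block_kib // fs_block_kib

print(alignment_positions(128, 4))    # NTFS 4k on a 128k zvol -> 32
print(alignment_positions(128, 64))   # NTFS 64k on a 128k zvol -> 2
print(alignment_positions(64, 64))    # matched sizes -> 1, every write matches
```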
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS), deduplication matching always works, because the data blocks are aligned by ZFS natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol&#039;s physical size cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will show the new physical data space. Comparing the data size on the storage client side with the growth of the data space reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the LOGs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. In order to reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB, when used with deduplication or read cache, will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
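The Read Cache sizing rule can likewise be expressed as a short helper. This is an illustrative sketch, not an Open-E tool; the constants (432 B per l2hdr structure, 4718592 B of reserved size and labels) come from the formula above, and the function name is invented for the example:

```python
# RAM needed to index an SSD Read Cache, per the wiki formula:
# (cache size - reserved size and labels) * 432B / volume block size
L2HDR_BYTES = 432          # bytes of RAM per cached block (l2hdr structure)
RESERVED_BYTES = 4718592   # reserved size and labels on the cache device

def read_cache_ram_bytes(cache_bytes: int, block_bytes: int) -> float:
    """RAM needed to index a read cache device of cache_bytes."""
    return (cache_bytes - RESERVED_BYTES) * L2HDR_BYTES / block_bytes

TIB = 1024 ** 4
GIB = 1024 ** 3

for block in (8 * 1024, 64 * 1024, 128 * 1024):
    ram = read_cache_ram_bytes(1 * TIB, block)
    print(f"{block // 1024:>4} KiB blocks -> {ram / GIB:.2f} GiB RAM")
```

Running this reproduces the worked examples (54, 6.75 and roughly 3.37 GiB of RAM for a 1 TiB cache).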
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but do not appear on the list of available disks. In this case, click the rescan button located in the group-adding form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones, or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Open-E VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota for dataset does not allow to remove files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s options menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Open-E JovianDSS with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version: f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import under a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when using Infiniband controllers for the VIP interface configuration. This limitation will be removed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and a failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V clusters. In some environments under heavy load, switching cluster resources may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using Windows, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the registry key: &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using VMware, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change the value from the default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using XenServer, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This will apply the new iSCSI timeout settings to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
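The iscsid.conf edit described in the steps above can also be scripted. Below is a minimal sketch, assuming the stock iscsid.conf key layout; it operates on a local copy (./iscsid.conf) so it can be run safely, whereas on a real XenServer host the file is /etc/iscsi/iscsid.conf:&lt;br /&gt;

```shell
# Raise the iSCSI replacement timeout to 600 s in an iscsid.conf-style file.
# CONF points at a local stand-in copy; on a real host use /etc/iscsi/iscsid.conf.
CONF=./iscsid.conf
# Create a stand-in file containing the default setting.
printf 'node.session.timeo.replacement_timeout = 120\n' > "$CONF"
# Rewrite the timeout line in place.
sed -i 's/^node\.session\.timeo\.replacement_timeout *=.*/node.session.timeo.replacement_timeout = 600/' "$CONF"
# Show the resulting setting.
grep '^node.session.timeo.replacement_timeout' "$CONF"
```

After editing the real file, remember to detach and reattach the SRs (or create the new SR) as described above so the new timeout takes effect.&lt;br /&gt;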
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after updating to up11, the system may require re-activation. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB and Round-Robin do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using JovianDSS as a Hyper-V or VMware guest, the ALB and Round-Robin bonding modes are not supported. Please use another bonding type.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause VMware snapshot deletion to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable the quota on the dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If JovianDSS nodes are connected to the same AD server, they cannot have the same Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is therefore impossible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown, or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, a simultaneous power cycle of both nodes may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. Alternatively, restart the Failover in Jovian’s GUI. Refresh vSphere’s Adapters and Devices tabs afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from JovianDSS up24 to JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after updating one node from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of Jovian DSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of JovianDSS up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of JovianDSS up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is a common factor in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of JovianDSS up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, one can observe significantly decreased performance of the FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference in FC configurations between up24 (and earlier versions) and later releases. Those earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator mode, with target as the default. Therefore, when using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). In order to bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+x -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear in Unassigned disks at the bottom of the page, which allows it to be selected in the pool’s Disk group tab. Open the Disk group tab of the pool, press the Add group button, and select Add read cache. The missing disk should now be available to select as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, the power-off of one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. In such an environment, we recommend adding the following entry to the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
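Adding this section can be scripted as well. Below is a minimal sketch that operates on a local copy (./multipath.conf) rather than the real /etc/multipath.conf; after editing the real file, reload the multipath maps (e.g. with multipath -r):&lt;br /&gt;

```shell
# Append the recommended SCST device section to a multipath.conf copy.
# CONF points at a local stand-in; on a real client edit /etc/multipath.conf.
CONF=./multipath.conf
cat >> "$CONF" <<'EOF'
device {
        vendor                  "SCST_FIO|SCST_BIO"
        product                 "*"
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        rr_min_io               100
        no_path_retry           queue
        }
EOF
# Confirm the key setting made it into the file.
grep 'no_path_retry' "$CONF"
```

The no_path_retry queue setting makes the multipath layer queue I/O while all paths are down instead of failing it, which is what prevents the dmesg error flood during the long failover.&lt;br /&gt;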
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11739</id>
		<title>Open-E JovianDSS ver.1.0 up28 Release Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Open-E_JovianDSS_ver.1.0_up28_Release_Notes&amp;diff=11739"/>
		<updated>2020-11-17T08:27:22Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date:&amp;amp;nbsp;2020-01-28&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build:&amp;amp;nbsp;37311&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For build 38496 go to: [http://wiki.open-e.com/default/wiki/Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes http://wiki.open-e.com/default/wiki/Open-E_JovianDSS_ver.1.0_up28r2_Release_Notes]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== HA Cluster Ring can be configured as two single connections ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster Ping-nodes can be configured within any available interfaces and subnetworks ===&lt;br /&gt;
&lt;br /&gt;
=== Static routing configuration is available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Custom SSL/TLS certificates can be manually imported in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS Datasets can get a record size value from range of 4KiB up to 16MiB (default record size value is 1MiB) ===&lt;br /&gt;
&lt;br /&gt;
=== Fibre Channel Target mode is available for ATTO Fiber Channel Adapter (supported only with VMware client) ===&lt;br /&gt;
&lt;br /&gt;
=== Improved performance of LDAP database replication mechanism ===&lt;br /&gt;
&lt;br /&gt;
=== Storage performance test tool is available in TUI (System console -&amp;gt; Ctrl+Alt+t -&amp;gt; Add-ons -&amp;gt; Storage performance tool) ===&lt;br /&gt;
&lt;br /&gt;
=== HPE tools for managing HP Smart Array controllers are available in Web-GUI and TUI ===&lt;br /&gt;
&lt;br /&gt;
=== MacOS Spotlight search support allows users to quickly locate files and search through their contents ===&lt;br /&gt;
&lt;br /&gt;
=== Installer creates 128GB boot medium partition size (more space for further upgrade processes) ===&lt;br /&gt;
&lt;br /&gt;
=== New filtering options for Event Viewer (selection by: error, warning, information and the date ranges) ===&lt;br /&gt;
&lt;br /&gt;
=== Kdump (kernel crash dumping mechanism) ===&lt;br /&gt;
&lt;br /&gt;
=== The default SCSI ID for iSCSI and FC luns can be manually set in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Deduplication statistics for zpool are available in Web-GUI ===&lt;br /&gt;
&lt;br /&gt;
=== Ethernet cards detailed statistics (amount of data sent and received) are available in the system logs ===&lt;br /&gt;
&lt;br /&gt;
=== Statistics for MPIO devices are displayed in GUI (Diagnostics -&amp;gt; Disk usage) ===&lt;br /&gt;
&lt;br /&gt;
=== Linux iostat and S.M.A.R.T data are available in Checkmk monitoring system ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Samba 4.9.4 ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-3 driver (mlx4_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-4/5 driver (mlx5_core, v4.4-2.0.7) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE driver i40e (i40e, v2.9.21) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM5706/5708/5709/5716 driver (bnx2, v2.2.5x) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57710/57711/57711E/57712/57712_MF/57800/57800_MF/57810/57810_MF/57840/57840_MF driver (bnx2x, v.1.715.0) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v1.76.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec RAID and HBA driver (aacraid, v1.2.1.57013src) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartRAID and SmartHBA driver (smartpqi, v1.2.6-015) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID SAS Driver (megaraid_sas, v07.709.08.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool (v3.02-23600) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca SAS/SATA RAID Controller Driver (arcmsr, v1.40.0X.10-20181227) ===&lt;br /&gt;
&lt;br /&gt;
=== Smartmontools 7.0 ===&lt;br /&gt;
&lt;br /&gt;
=== VMware tools v10.3.10.10540 ===&lt;br /&gt;
&lt;br /&gt;
=== Page cache for zvol File I/O mode is reduced to 50% ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== RSS does not check if gateway is set up and if RSS server is available ===&lt;br /&gt;
&lt;br /&gt;
=== System activation on XEN VSA does not work ===&lt;br /&gt;
&lt;br /&gt;
=== Cannot use XEN drives for Metro Cluster in XEN VSA ===&lt;br /&gt;
&lt;br /&gt;
=== Zvol configured as a destination in OODP still can be set as a LUN for target ===&lt;br /&gt;
&lt;br /&gt;
=== Dataset configured as a destination in OODP still can be used as a location for a Share ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VCenter/VSphere snapshot autoremove mechanism deletes all ESX snapshots ===&lt;br /&gt;
&lt;br /&gt;
=== Listing of OODP snapshots takes a very long time ===&lt;br /&gt;
&lt;br /&gt;
=== activation.xml is cleared when the activation server is unavailable, e.g. because of firewall settings ===&lt;br /&gt;
&lt;br /&gt;
=== System restart by the watchdog for processes which run for more than 300 sec. ===&lt;br /&gt;
&lt;br /&gt;
=== Problems with ssh and jumbo frames (MTU) ===&lt;br /&gt;
&lt;br /&gt;
=== The SIDs are not mapped to usernames and groups for shares in Windows (fixed for new JovianDSS installations only) ===&lt;br /&gt;
&lt;br /&gt;
=== Unstable operation of Intel X710/XL710 and Intel X722 network cards configured in LACP or Balance Round Robin bonding mode ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for JovianDSS HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use sync always option for zvols and datasets in cluster&amp;amp;nbsp; ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run Scrub scanner after failover action triggered by power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly not recommended to change any settings when both nodes do not have the same JovianDSS version, for example during software updating ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with the ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization and it cannot be used as a storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the Broadcom SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;You can find details below:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;[https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101 https://kb.open-e.com/index.php?View=entry&amp;amp;EntryID=3101]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult Broadcom vendor for specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. In case of using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record is enabled by default. This option reduces performance, but data is written safely. In order to improve NFS performance, you can use the Asynchronous data record, but in such a case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser’s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When large amounts of data are deleted, reclaiming the deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As this operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
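For illustration, the registry steps above correspond to importing a .reg fragment such as the following (a sketch under the key path given above; setting the value back to dword:00000000 re-enables file-delete notification):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"DisableDeleteNotification"=dword:00000001
```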
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, if possible use a single zvol on a zpool dedicated to deduplication, and delete the zpool that contains that single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of System RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for the ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of the ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;so for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
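As a sanity check on the arithmetic above, the formula can be expressed in a few lines of code (a sketch; the 320 B DDT entry size and the 0.75/0.25 reservation factors are the values stated in this section):

```python
# Sketch of the deduplication RAM formula documented above.
# Assumptions (all taken from this section): 320 B per DDT entry,
# 75% of RAM reserved for the ARC, 25% of the ARC reserved for the DDT.
DDT_ENTRY_BYTES = 320
ARC_FRACTION = 0.75
DDT_IN_ARC_FRACTION = 0.25

def dedup_ram_bytes(zvol_size_bytes, volume_block_size_bytes):
    """Worst-case RAM for the dedup table (completely unique data)."""
    entries = zvol_size_bytes / volume_block_size_bytes
    return entries * DDT_ENTRY_BYTES / ARC_FRACTION / DDT_IN_ARC_FRACTION

GIB = 1024 ** 3
TIB = 1024 ** 4
for block_kib in (64, 128, 1024):
    ram = dedup_ram_bytes(1 * TIB, block_kib * 1024)
    print(f"{block_kib}KB volume block: {ram / GIB:.2f} GB RAM per 1TB of data")
```

Running this reproduces the three worked examples for 64KB, 128KB, and 1MB volume block sizes.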
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, in which the data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will perform well using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume block size. A simple example: Windows NTFS uses a default format block size of 4k, while the zvol default volume block size is 128k. With these defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail at most once, because a file can be aligned at only 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To make all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
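The alignment counting above can be sketched as follows (a minimal illustration, assuming the number of possible alignments is simply the zvol volume block size divided by the client file-system block size):

```python
# Minimal illustration of the alignment argument above (assumption:
# a file can start at zvol_block / fs_block distinct offsets inside
# one zvol volume block, so dedup only matches when the offsets agree).
def alignment_positions(zvol_block_kib, fs_block_kib):
    """Possible alignments of a client file-system block inside a zvol block."""
    return zvol_block_kib // fs_block_kib

print(alignment_positions(128, 4))   # NTFS 4k on a 128k zvol: 32 positions
print(alignment_positions(128, 64))  # NTFS 64k on a 128k zvol: 2 positions
print(alignment_positions(64, 64))   # matched sizes: 1 position, always aligns
```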
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are natively aligned by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data onto the pool and run the scrub again; the scrub will now show the new physical data space. Comparing the data size seen on the storage-client side with the data-space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to a single Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with a broken write-log disk cannot be imported using the system functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 
57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block 
size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 
7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 /1024 = 
6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
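The arithmetic above can be checked with a short shell snippet (the constants are taken verbatim from the example; the division by 131072 is exact here, so integer arithmetic gives the same byte count):

```shell
# RAM needed to index a 1TB read cache with a 128KB volume block size,
# using the constants from the example above.
CACHE_BYTES=$(( 1099511627776 - 4718592 ))   # usable read cache in bytes
BLOCK_SIZE=131072                            # 128KB volume block size
HEADER_SIZE=432                              # per-block RAM overhead from the example
RAM_NEEDED=$(( CACHE_BYTES / BLOCK_SIZE * HEADER_SIZE ))
echo "RAM needed: ${RAM_NEEDED} bytes"       # RAM needed: 3623863104 bytes
echo "RAM needed: $(( RAM_NEEDED / 1024 / 1024 / 1024 )) GB (rounded down)"
```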
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Repeatedly adding disks to and detaching disks from groups can cause a subsequent detach operation to fail while the disk is still shown on the list of available disks. An attempt to add this disk to a group will then fail with the following error: &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes, after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the Rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them with the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones, or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Open-E VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota for dataset does not allow to remove files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets whose quota has been exceeded cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Open-E JovianDSS with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import under a different name, the FSIDs for NFS shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when Infiniband controllers are used for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and a failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V clusters. In some environments, a problem with an excessively long cluster resource switching time may also occur under heavy load. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using Windows, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039; registry key&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using VMware, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using XenServer, to increase the iSCSI initiator timeout, please perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This will apply the new iSCSI timeout setting to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout setting.&lt;br /&gt;
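As a sketch, steps 1–3 above can be automated with sed (this assumes the stock iscsid.conf line format shown in step 2; verify the file on your XenServer host before and after editing):

```shell
# Raise the iSCSI replacement timeout from the default 120 sec to 600 sec.
CONF=/etc/iscsi/iscsid.conf
sed -i 's/^node\.session\.timeo\.replacement_timeout *= *[0-9]*/node.session.timeo.replacement_timeout = 600/' "$CONF"
# Confirm the change took effect.
grep '^node.session.timeo.replacement_timeout' "$CONF"
```

Afterwards, detach and reattach the existing SRs (or create the new SR) as described above so that the new timeout is applied.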
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after the update to up11, the system may require re-activation. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB and Round-Robin do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using JovianDSS as a Hyper-V or VMware guest, ALB and Round-Robin bonding are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in VMware guest can cause that deleting VMware snapshot can take long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest that performs many I/O operations can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable the quota on the dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If JovianDSS nodes are connected to the same AD server, they cannot have the same Server name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Share cannot be named the same as Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Going back to an earlier version is therefore impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation like reboot, shutdown, or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk that has incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created by the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, power-cycling both nodes simultaneously may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. A second option is to restart the Failover in Jovian’s GUI. Refresh vSphere’s Adapters and Devices tabs afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disk from the system before starting the administrative work to replace it. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from JovianDSS up24 to JovianDSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after updating one node from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of Jovian DSS up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of JovianDSS up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of JovianDSS up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with a lot of random-access workloads, which are common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of JovianDSS up25, qlini_mode was set to “enabled”). In order to verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (like overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference between the FC configurations in up24 and earlier versions and in later ones. The earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator mode, with target as the default. Therefore, when using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). In order to bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+X -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear under Unassigned disks at the bottom of the page, which allows you to select that disk in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button, and select Add read cache. The missing disk should now be available for selection as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. In such an environment, we recommend adding the following entry to the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Share_group&amp;diff=11177</id>
		<title>Share group</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Share_group&amp;diff=11177"/>
		<updated>2020-09-18T08:31:49Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in: User Management &amp;gt; Share users/groups &amp;gt;&amp;amp;nbsp;Share users and groups &amp;gt; Share groups &amp;gt; Add group&lt;br /&gt;
&lt;br /&gt;
Here you can configure the name for the new group.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039;&amp;amp;nbsp;The name should have 1 to 20 characters: a-z A-Z 0-9 - _. It can&#039;t start or end with a space or contain several spaces in a row.&lt;br /&gt;
&lt;br /&gt;
You are also able to add users to a group. To do so:&lt;br /&gt;
&lt;br /&gt;
#Specify the name for a group.&lt;br /&gt;
#Go to the Group users section.&lt;br /&gt;
#Select users to be added to the group.&lt;br /&gt;
#Click the &#039;&#039;&#039;Apply&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Share_user&amp;diff=11174</id>
		<title>Share user</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Share_user&amp;diff=11174"/>
		<updated>2020-09-18T08:30:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in: User Management &amp;gt; Share users/groups &amp;gt; Share users and groups&lt;br /&gt;
&lt;br /&gt;
Here you can configure the name and the password for the new user. To add a new user to the system, the Lightweight Directory Access Protocol (LDAP) should be enabled.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&#039;&#039;&#039;Name&#039;&#039;&#039;: The value in the&amp;amp;nbsp;&#039;&#039;&#039;Name&#039;&#039;&#039;&amp;amp;nbsp;field should have 1 to 20 characters: a-z A-Z 0-9 - _. It can&#039;t start or end with a space or contain several spaces in a row.&amp;lt;br/&amp;gt;&#039;&#039;&#039;Password&#039;&#039;&#039;: The minimum length for this field is 6 characters.&lt;br /&gt;
&lt;br /&gt;
To add a new user:&lt;br /&gt;
&lt;br /&gt;
#Go to the User Management section.&lt;br /&gt;
#Go to the Share users/groups tab.&lt;br /&gt;
#Go to the&amp;amp;nbsp;Lightweight Directory Access Protocol (LDAP) subsection.&lt;br /&gt;
#Make sure LDAP is enabled.&lt;br /&gt;
#Go to the Share users and groups section.&lt;br /&gt;
#Expand Share users subsection.&lt;br /&gt;
#Click &#039;&#039;&#039;Add user&#039;&#039;&#039;.&lt;br /&gt;
#In the window that appears, enter the values in the &#039;&#039;&#039;Name&#039;&#039;&#039;, &#039;&#039;&#039;Password&#039;&#039;&#039;, and&amp;amp;nbsp;&#039;&#039;&#039;Password confirmation&#039;&#039;&#039; fields.&lt;br /&gt;
#Click &#039;&#039;&#039;Apply&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
To add a new group:&lt;br /&gt;
&lt;br /&gt;
#Go to the User Management section.&lt;br /&gt;
#Go to the Share users/groups tab.&lt;br /&gt;
#Go to the&amp;amp;nbsp;Lightweight Directory Access Protocol (LDAP) subsection.&lt;br /&gt;
#Make sure LDAP is enabled.&lt;br /&gt;
#Go to the Share users and groups section.&lt;br /&gt;
#Expand share groups subsection.&lt;br /&gt;
#Click &#039;&#039;&#039;Add group&#039;&#039;&#039;.&lt;br /&gt;
#In the window that appears, enter the value in the &#039;&#039;&#039;Name&#039;&#039;&#039; field.&lt;br /&gt;
#Click&amp;amp;nbsp;&#039;&#039;&#039;Apply&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=CreateNewPool&amp;diff=10032</id>
		<title>CreateNewPool</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=CreateNewPool&amp;diff=10032"/>
		<updated>2020-04-16T13:33:48Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Zpool wizard&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Zpool wizard]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=SetTime&amp;diff=10095</id>
		<title>SetTime</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=SetTime&amp;diff=10095"/>
		<updated>2020-04-16T13:31:55Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Time and date settings&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Time and date settings]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Snmp_Settings&amp;diff=10047</id>
		<title>Snmp Settings</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Snmp_Settings&amp;diff=10047"/>
		<updated>2020-04-16T13:30:43Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to SNMP settings&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[SNMP settings]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=SnmpConfig&amp;diff=10040</id>
		<title>SnmpConfig</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=SnmpConfig&amp;diff=10040"/>
		<updated>2020-04-16T13:26:41Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to SNMP settings&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[SNMP settings]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=SnmpConfig&amp;diff=10039</id>
		<title>SnmpConfig</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=SnmpConfig&amp;diff=10039"/>
		<updated>2020-04-16T13:23:12Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Snmp settings&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Snmp settings]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=DefaultGateway&amp;diff=10084</id>
		<title>DefaultGateway</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=DefaultGateway&amp;diff=10084"/>
		<updated>2020-04-16T12:44:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Default gateway&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Default gateway]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=CreateNewUser&amp;diff=10035</id>
		<title>CreateNewUser</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=CreateNewUser&amp;diff=10035"/>
		<updated>2020-04-16T11:37:42Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Discovery CHAP user access&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Discovery CHAP user access]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Data_Migration_Tool&amp;diff=10817</id>
		<title>Data Migration Tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Data_Migration_Tool&amp;diff=10817"/>
		<updated>2020-03-31T11:49:18Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality allows the user to migrate data from DSSV7 NAS shares to Open-E JovianDSS.&lt;br /&gt;
&lt;br /&gt;
Below is the list of steps that need to be done in order to migrate data:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;On Open-E JovianDSS:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#Create zpool&lt;br /&gt;
#Create dataset&lt;br /&gt;
#Enable Data Migration Tool in Extended tools under console (CTRL+ALT+X)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;On DSSV7:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#Create a Data Replication Task (DREP) using the IP of JovianDSS as the destination IP, and select the source and destination shares between which the data needs to be migrated.&lt;br /&gt;
#Start the task and wait until it completes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Important notes:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#While the Data Migration Tool is turned on, data sharing protocols on JovianDSS will be disabled.&lt;br /&gt;
#Any changes made to the share configuration on JovianDSS while the Data Migration Tool is running will be reflected by the tool after restarting it.&lt;br /&gt;
#After the migration process completes, it is the Administrator’s role to turn off the Data Migration Tool.&lt;br /&gt;
#After a system reboot, the Data Migration Tool is automatically turned off.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Edit_fc_group_properties&amp;diff=11646</id>
		<title>Edit fc group properties</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Edit_fc_group_properties&amp;diff=11646"/>
		<updated>2020-03-31T11:49:08Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This function allows modifying the FC group alias. The alias of an FC group can be modified at any time; it doesn’t have any influence on the physical FC configuration. For more information about FC groups, please refer to the FC groups section.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Virtual_IPs&amp;diff=11697</id>
		<title>Virtual IPs</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Virtual_IPs&amp;diff=11697"/>
		<updated>2020-03-12T10:37:55Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This option allows you to add a virtual IP to provide services. To add a virtual IP, click the &#039;&#039;&#039;Add Virtual IP&#039;&#039;&#039; button in the Storage section of the Virtual IP tab. The following data is required:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Virtual IP address&#039;&#039;&#039; - the address by which the services will be provided.&lt;br /&gt;
*&#039;&#039;&#039;Name&#039;&#039;&#039; - the name used by the virtual IP.&lt;br /&gt;
*&#039;&#039;&#039;Netmask&#039;&#039;&#039; - the subnet mask assigned to the virtual IP address.&lt;br /&gt;
*&#039;&#039;&#039;Network interface&#039;&#039;&#039; - the network interface used on the local server by the virtual IP.&lt;br /&gt;
*&#039;&#039;&#039;Remote network interface&#039;&#039;&#039; - the network interface used on the remote server by the virtual IP.&lt;br /&gt;
&lt;br /&gt;
Once everything is set, press &#039;&#039;&#039;Apply&#039;&#039;&#039; to add the virtual IP.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=FC_target_assigned_to_pool_properties&amp;diff=11016</id>
		<title>FC target assigned to pool properties</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=FC_target_assigned_to_pool_properties&amp;diff=11016"/>
		<updated>2020-03-12T10:22:32Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article describes the FC target assigned to pool properties when using an HA cluster or a single node configuration. In general, this function allows modifying the target alias and assigning a remote target. In addition, if you would like to learn about FC target replacement, read this article:&amp;amp;nbsp;[http://wiki.open-e.com/default/wiki/FC_target_replacement http://wiki.open-e.com/default/wiki/FC_target_replacement]&lt;br /&gt;
&lt;br /&gt;
== High Availability cluster configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration of the remote target is crucial for a correct FC cluster configuration. FC targets are local to a particular machine because, in general, these are the physical ports of adapters attached to a given server. In a cluster environment, when a pool has to be moved to the other node, it is required to know which target should be used there to serve FC group resources. To properly configure FC for cluster use, a remote target must be assigned to each of the local targets present in FC groups.&lt;br /&gt;
&lt;br /&gt;
The cluster uses Asymmetric Logical Unit Access (ALUA) to configure the paired targets. LUNs are visible on both configured targets to the initiator, which accesses those LUNs through paths. Depending on the path status, the initiator knows which path should be used to access the LUNs. The initiator accesses LUNs through an active path, while a standby path is used for the target that doesn’t have access to the LUNs. The active path is set for a target when the pool is present on the same node as the target; the standby path is used for a target when the pool is present on the other node. When the cluster moves the pool from one node to another, path statuses are modified accordingly. If the remote target is not configured, the ALUA configuration is not created by the cluster on the remote node; in that case, after moving the pool, all resources served by that target won’t be accessible. Once the remote target is assigned, the cluster performs the ALUA configuration and the given target can be safely used by both cluster nodes. It is possible to assign the remote target before or after the cluster is started, but make sure that it is configured before the pool move is performed (either manually or by failover). Moreover, after assigning a remote target, make sure that it is in target mode on the remote node. If the remote target is in initiator mode, it won’t be possible to access target LUNs after a pool move, despite the modification of path statuses.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;In summary, to correctly set up targets for a cluster usage, do the following:&lt;br /&gt;
&lt;br /&gt;
*Select a remote target for each target assigned to FC groups.&lt;br /&gt;
*Switch a selected remote target on a remote node to a target mode.&lt;br /&gt;
&lt;br /&gt;
== Single node configuration ==&lt;br /&gt;
&lt;br /&gt;
In a single node configuration, there is no possibility to pair the target with a remote node because ALUA is not used. If a pool is imported with an FC configuration from another machine, it is necessary to assign FC targets to the FC ports of the current server, because the system is unable to determine on which ports it has to share resources.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Static_routing&amp;diff=11700</id>
		<title>Static routing</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Static_routing&amp;diff=11700"/>
		<updated>2020-03-12T09:57:44Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color: rgb(0, 0, 0);  font-family: sans-serif;  font-size: 12.8px&amp;quot;&amp;gt;By using this manager, you are able to manually enter and remove static routes to/from a routing table. The manager allows configuring routes to a subnetwork or to specific hosts. The order of entries in the routing table is important.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: rgb(0, 0, 0);  font-family: sans-serif;  font-size: 12.8px&amp;quot;&amp;gt;&#039;&#039;&#039;Note:&#039;&#039;&#039; Configuring static routes for interfaces that are a part of the cluster is impossible.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=MPIO_disks&amp;diff=11631</id>
		<title>MPIO disks</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=MPIO_disks&amp;diff=11631"/>
		<updated>2020-03-12T09:40:48Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The MPIO disks function lists all the configured MPIO devices (disks which are available through more than one path will be automatically configured as an MPIO device and will also be listed).&lt;br /&gt;
&lt;br /&gt;
For each MPIO device, the available and currently used paths, as well as the MPIO status, are visible.&amp;lt;br/&amp;gt;Available MPIO statuses: Pending setup, OK, Unknown.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pending setup&#039;&#039;&#039; - the configuration is postponed until the Pool is exported, and later until: &lt;br /&gt;
&lt;br /&gt;
*the Pool is imported &lt;br /&gt;
*or the system restarted &lt;br /&gt;
*or, in the case of a cross SAS cluster, the resources (Pool) are moved to the second node and back.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Unknown&#039;&#039;&#039; - MPIO configuration present in the system but not active&amp;lt;br/&amp;gt;&#039;&#039;&#039;OK&#039;&#039;&#039; - MPIO device configured and running&lt;br /&gt;
&lt;br /&gt;
MPIO devices can be edited collectively (by selecting specific devices and using the “&#039;&#039;&#039;Edit selected&#039;&#039;&#039;” button) or separately by using “&#039;&#039;&#039;Edit disk&#039;&#039;&#039;” from the form in which the “&#039;&#039;&#039;Remove failed paths&#039;&#039;&#039;”, “&#039;&#039;&#039;Disk details&#039;&#039;&#039;” and “&#039;&#039;&#039;Delete&#039;&#039;&#039;” functions are available.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is not possible to delete an MPIO device which has two or more available paths. In order to delete such a device, it is necessary to remove the paths (unplug the cables) and then use the &amp;quot;Delete&amp;quot; function.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When using remote disks for the cluster over Ethernet functionality, the MPIO device configuration will be postponed until the system is restarted.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=MPIO_available_disks&amp;diff=11642</id>
		<title>MPIO available disks</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=MPIO_available_disks&amp;diff=11642"/>
		<updated>2020-03-12T09:35:53Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Available disks function lists the disks which are connected with one path only and are not configured as part of MPIO devices.&amp;lt;br/&amp;gt;This function enables a manual configuration of those disks (with one path) as part of MPIO devices. It is also possible to check the details of individual disks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039;&amp;lt;br/&amp;gt;Disks on which Pools are located are also listed – in this case, the configuration of an MPIO device will be postponed as pending setup until the Pool is exported,&amp;lt;br/&amp;gt;and later until:&lt;br /&gt;
&lt;br /&gt;
*the Pool is imported &lt;br /&gt;
*or the system restarted &lt;br /&gt;
*or, when using a cross SAS cluster, the resources (Pool) are moved to the second node and back.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039;&amp;lt;br/&amp;gt;When using remote disks for the cluster over Ethernet functionality, the MPIO device configuration will be postponed until the system is restarted.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039;&amp;lt;br/&amp;gt;Also listed are the disks which are available through more than one path and which were deleted from the MPIO devices in the MPIO disks section form - keep in mind&amp;lt;br/&amp;gt;that the MPIO device will be automatically configured again after a system restart, as is the case for all disks connected to the system through more than one path.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=File:Workgroup_guest.png&amp;diff=11778</id>
		<title>File:Workgroup guest.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=File:Workgroup_guest.png&amp;diff=11778"/>
		<updated>2020-03-02T10:56:58Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Ma-W uploaded a new version of &amp;amp;quot;File:Workgroup guest.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Share_groups&amp;diff=11179</id>
		<title>Share groups</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Share_groups&amp;diff=11179"/>
		<updated>2020-02-26T09:21:39Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Group access&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Group access]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Share_users&amp;diff=11181</id>
		<title>Share users</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Share_users&amp;diff=11181"/>
		<updated>2020-02-26T08:08:19Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to User access&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[User access]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Spotlight_service&amp;diff=11752</id>
		<title>Spotlight service</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Spotlight_service&amp;diff=11752"/>
		<updated>2020-01-31T15:53:58Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Spotlight service, available only for macOS machines, allows you to search files on the attached share by name, metadata, and size. Open-E JovianDSS creates an index which is then used by the client&#039;s machine; no local copy of the index is created on the client&#039;s side. Thus, each new user does not have to generate an index for files once the Spotlight service has been enabled in Open-E JovianDSS. Support for the Spotlight service has to be enabled, and the SMB service has to be restarted after enabling Spotlight. This option is available after enabling SMB for a given share.&lt;br /&gt;
&lt;br /&gt;
The Spotlight service&#039;s database and index are updated during an upload or modification of data on datasets. If the pool currently in use is exported, or cannot be used due to lack of space, the last imported or created pool will be used for the Spotlight service. The index is also placed on the pool. For example:&lt;br /&gt;
&lt;br /&gt;
*Pool-0 is created, Spotlight is enabled and pool-0 database is used.&lt;br /&gt;
*Pool-1 is created, Spotlight is enabled and still database from pool-0 is used.&lt;br /&gt;
*Pool-0 is exported and the database from pool-1 is used.&lt;br /&gt;
*Pool-0 is imported and the database from pool-1 is used.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Once the service has been enabled, a dataset called _spotlight-db-dataset will be added.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Spotlight_service&amp;diff=11751</id>
		<title>Spotlight service</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Spotlight_service&amp;diff=11751"/>
		<updated>2020-01-31T15:53:28Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Spotlight service, available only for macOS machines, allows you to search files on the attached share by name, metadata, and size. Open-E JovianDSS creates an index which is then used by the client&#039;s machine; no local copy of the index is created on the client&#039;s side. Thus, each new user does not have to generate an index for files once the Spotlight service has been enabled in Open-E JovianDSS. Support for the Spotlight service has to be enabled, and the SMB service has to be restarted after enabling Spotlight. This option is available after enabling SMB for a given share.&lt;br /&gt;
&lt;br /&gt;
The Spotlight service&#039;s database and index are updated during an upload or modification of data on datasets. If the pool currently in use is exported, or cannot be used due to lack of space, the last imported or created pool will be used for the Spotlight service. The index is also placed on the pool. For example:&lt;br /&gt;
&lt;br /&gt;
*Pool-0 is created, Spotlight is enabled and pool-0 database is used.&lt;br /&gt;
*Pool-1 is created, Spotlight is enabled and still database from pool-0 is used.&lt;br /&gt;
*Pool-0 is exported and the database from pool-1 is used.&lt;br /&gt;
*Pool-0 is imported and the database from pool-1 is used.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Once the service has been enabled, a dataset called _spotlight-db-dataset will be added.&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Repair_failed_disk&amp;diff=10478</id>
		<title>Repair failed disk</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Repair_failed_disk&amp;diff=10478"/>
		<updated>2020-01-31T15:09:09Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find the list of all the available disks in the system. Each disk in the list is represented by:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Name&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Serial Number&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Model&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Size&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To select the disk that should be replaced by a new disk, point to the&amp;amp;nbsp;&#039;&#039;&#039;indicator&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
In the main&amp;amp;nbsp;&#039;&#039;&#039;Group&#039;&#039;&#039;&amp;amp;nbsp;view, you will see the position &amp;quot;replacing-xy&amp;quot;, which will contain the names of the drives being replaced.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Deduplication&amp;diff=11787</id>
		<title>Deduplication</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Deduplication&amp;diff=11787"/>
		<updated>2020-01-31T15:05:40Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Deduplication is a process that eliminates redundant copies of data and reduces storage overhead. In turn, the deduplication ratio is the measurement of the zpool&#039;s original data size versus the data size after removing redundancy. Deduplication can be set on zvols or datasets, but the&amp;amp;nbsp;deduplication ratio is displayed per pool rather than per zvol or dataset, because they are located on the pool.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&amp;amp;nbsp;&#039;&#039;&#039;It is not recommended to use deduplication in systems whose primary aim is efficiency and fast data access when a High Availability cluster is enabled. In certain conditions, the deduplication table may become corrupted, and rebuilding it while importing the pool may take a lot of time or even lead to improper functioning of failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Things to be taken into consideration ==&lt;br /&gt;
&lt;br /&gt;
Before using deduplication, the following matters should be taken into account:&lt;br /&gt;
&lt;br /&gt;
*hardware - deduplication is very memory-consuming, so if the system simultaneously has to process its current tasks, its efficiency may be significantly reduced&lt;br /&gt;
*need for quick access to the data -&amp;amp;nbsp;deduplication pays off when archiving or backing up data, as it saves disk space.&amp;amp;nbsp;When there is only a small amount of repetitive data, deduplication may merely cause longer write times.&lt;br /&gt;
&lt;br /&gt;
It is also worth calculating the memory requirements in the following way:&lt;br /&gt;
&lt;br /&gt;
*each deduplication table (DDT) entry takes about 320 bytes of memory, so the number of allocated blocks should be multiplied by 320. For example: 1.08 million allocated blocks x 320 bytes ≈&amp;amp;nbsp;345.6 MB of memory required for deduplication&lt;br /&gt;
&lt;br /&gt;
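The estimate above can be sketched as a short calculation (a minimal illustration only; the 320-bytes-per-entry figure is the approximate value quoted above, and the function name is ours):

```python
# Approximate size of one deduplication table (DDT) entry in RAM.
DDT_ENTRY_BYTES = 320

def ddt_memory_mb(allocated_blocks):
    """Approximate RAM (in MB) needed for the DDT, one entry per allocated block."""
    return allocated_blocks * DDT_ENTRY_BYTES / 1_000_000

# 1.08 million allocated blocks -> about 345.6 MB of RAM
print(ddt_memory_mb(1_080_000))  # prints 345.6
```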
Once deduplication is enabled, it has an impact on the whole pool since it creates the global DDT array with deduplication indicators. When deduplication is disabled,&amp;amp;nbsp;&#039;&#039;&#039;Zpool storage deduplication rate&#039;&#039;&#039;&amp;amp;nbsp;is set to 1. If the value is greater than 1, then the deduplication operation has taken place. Disabling deduplication will not cause the value to return to 1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Removing deduplicated data ==&lt;br /&gt;
&lt;br /&gt;
After deduplication has been disabled on a zpool, the deduplicated data must be rearranged by performing a send/receive operation on the pool. This resets the&amp;amp;nbsp;&#039;&#039;&#039;Zpool storage deduplication rate&#039;&#039;&#039;&amp;amp;nbsp;to 1; otherwise, old data will be left in a deduplicated state.&lt;br /&gt;
&lt;br /&gt;
To remove deduplicated data, disable deduplication and transfer data from sources which had deduplication enabled to a different place where deduplication is not enabled.&lt;br /&gt;
&lt;br /&gt;
If the deduplication ratio on the given pool returns to 1.0, it is safe to assume that there is no deduplicated data left on the pool.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Deduplication&amp;diff=11786</id>
		<title>Deduplication</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Deduplication&amp;diff=11786"/>
		<updated>2020-01-31T15:04:34Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Deduplication is a process that eliminates redundant copies of data and reduces storage overhead. In turn, the deduplication ratio is the measurement of the zpool&#039;s original data size versus the data size after removing redundancy. Deduplication can be set on zvols or datasets, but the&amp;amp;nbsp;deduplication ratio is displayed per pool rather than per zvol or dataset, because they are located on the pool.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&amp;amp;nbsp;&#039;&#039;&#039;It is not recommended to use deduplication in systems whose primary aim is efficiency and fast data access when a High Availability cluster is enabled. In certain conditions, the deduplication table may become corrupted, and rebuilding it while importing the pool may take a lot of time or even lead to improper functioning of failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Things to be taken into consideration ==&lt;br /&gt;
&lt;br /&gt;
Before using deduplication, the following matters should be taken into account:&lt;br /&gt;
&lt;br /&gt;
*hardware - deduplication is very memory-consuming, so if the system simultaneously has to process its current tasks, its efficiency may be significantly reduced&lt;br /&gt;
*need for quick access to the data -&amp;amp;nbsp;deduplication pays off when archiving or backing up data, as it saves disk space.&amp;amp;nbsp;When there is only a small amount of repetitive data, deduplication may merely cause longer write times.&lt;br /&gt;
&lt;br /&gt;
It is also worth calculating the memory requirements in the following way:&lt;br /&gt;
&lt;br /&gt;
*each deduplication table (DDT) entry takes about 320 bytes of memory, so the number of allocated blocks should be multiplied by 320. For example: 1.08 million allocated blocks x 320 bytes ≈&amp;amp;nbsp;345.6 MB of memory required for deduplication&lt;br /&gt;
&lt;br /&gt;
Once deduplication is enabled, it has an impact on the whole pool since it creates the global DDT array with deduplication indicators. When deduplication is disabled,&amp;amp;nbsp;&#039;&#039;&#039;Zpool storage deduplication rate&#039;&#039;&#039;&amp;amp;nbsp;is set to 1. If the value is greater than 1, then the deduplication operation has taken place. Disabling deduplication will not cause the value to return to 1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Removing deduplicated data ==&lt;br /&gt;
&lt;br /&gt;
After deduplication has been disabled on a zpool, the deduplicated data must be rearranged by performing a send/receive operation on the pool. This resets the&amp;amp;nbsp;&#039;&#039;&#039;Zpool storage deduplication rate&#039;&#039;&#039;&amp;amp;nbsp;to 1; otherwise, old data will be left in a deduplicated state.&lt;br /&gt;
&lt;br /&gt;
To remove deduplicated data, disable deduplication and transfer data from sources which had deduplication enabled to a different place where deduplication is not enabled.&lt;br /&gt;
&lt;br /&gt;
If the deduplication ratio on the given pool returns to 1.0, it is safe to assume that there is no deduplicated data left on the pool.&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Add_Pool_Initiator_window&amp;diff=11809</id>
		<title>Add Pool Initiator window</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Add_Pool_Initiator_window&amp;diff=11809"/>
		<updated>2020-01-31T12:35:26Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to FC initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[FC_initiator]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Share_users_list&amp;diff=11808</id>
		<title>Share users list</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Share_users_list&amp;diff=11808"/>
		<updated>2020-01-31T12:33:47Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to User access&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[User access]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Hardware_monitoring&amp;diff=11807</id>
		<title>Hardware monitoring</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Hardware_monitoring&amp;diff=11807"/>
		<updated>2020-01-31T12:33:05Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Created page with &amp;quot;&amp;lt;span style=&amp;quot;font-size:13px; color:#000000; font-weight:normal; text-decoration:none; font-family:&amp;#039;Arial&amp;#039;; font-style:normal; text-decoration-skip-ink:none&amp;quot;&amp;gt;Hardware monitorin...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:13px; color:#000000; font-weight:normal; text-decoration:none; font-family:&#039;Arial&#039;; font-style:normal; text-decoration-skip-ink:none&amp;quot;&amp;gt;Hardware monitoring&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>https://wiki.open-e.com/default/wiki/index.php?title=Edit_Sensor&amp;diff=11806</id>
		<title>Edit Sensor</title>
		<link rel="alternate" type="text/html" href="https://wiki.open-e.com/default/wiki/index.php?title=Edit_Sensor&amp;diff=11806"/>
		<updated>2020-01-31T12:30:23Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Created page with &amp;quot;&amp;lt;span style=&amp;quot;font-size:13px; color:#000000; font-weight:normal; text-decoration:none; font-family:&amp;#039;Arial&amp;#039;; font-style:normal; text-decoration-skip-ink:none&amp;quot;&amp;gt;Edit Sensor&amp;lt;/span&amp;gt;&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:13px; color:#000000; font-weight:normal; text-decoration:none; font-family:&#039;Arial&#039;; font-style:normal; text-decoration-skip-ink:none&amp;quot;&amp;gt;Edit Sensor&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
</feed>