Add new zvol

From Open-E Wiki
__NOTOC__A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported as a LUN to hosts over iSCSI, Fibre Channel, or NVMe-oF. Zvols are usually used to:


* Provide block storage for virtual machines, databases, or other applications that expect a disk device.
* Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings).  
* Control how space is consumed by different applications or tenants at the pool level.  


You first create a zvol, and then (optionally) attach it to a target to make it available to hosts.


== Creating a zvol ==


# Go to the zpool management view in the GUI.  
# Select and expand the zpool in which you want to create the zvol.
# Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section.  
# Click '''Add zvol''' to open the '''Add new zvol''' dialog.
# Configure '''Encryption settings''' and '''Zvol properties''', and optionally attach the zvol to an iSCSI target or NVMe-oF subsystem, or assign it to FC groups.
# Review the configuration and click '''Add'''. The new zvol appears in the selected zpool.


After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.


=== Encryption settings ===
This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.


==== Encrypt resource ====
Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.


==== Encryption method ====
Defines the encryption algorithm used when the zvol is encrypted.


* By default, the method is inherited from '''Configuration → Resource encryption''' (for example, aes-256-gcm).
* You can select a different supported method for this zvol if required by policy or performance.
For information about keys, unlocking behaviour, and error handling, see the [[Encryption]] article.
=== Zvol properties ===
These fields define the behaviour and performance characteristics of the zvol.
==== Name ====
* The zvol name must be unique within the pool.
* Allowed characters: a–z  A–Z  0–9  .  _  -
Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.
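The character rule above can be sketched as a quick validity check (illustrative only; the exact pattern is an assumption based on the characters listed, and the GUI's own validator is authoritative):

```python
import re

# Letters, digits, dot, underscore, hyphen -- per the list above.
# The exact pattern is an assumption; the GUI validator is authoritative.
ZVOL_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def is_valid_zvol_name(name):
    return bool(ZVOL_NAME.match(name))

print(is_valid_zvol_name("vm-disk_01"))  # True
print(is_valid_zvol_name("vm disk"))     # False (space not allowed)
```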
==== Size ====
* Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB).
* The dialog shows the currently available physical space in the pool below the field.
Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.
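As a rough illustration of how these factors interact, the worst-case physical usage of a thick-provisioned zvol can be estimated as logical size × copies ÷ compression ratio (a simplified model; real ZFS accounting also includes metadata, padding, and snapshots):

```python
GIB = 1024 ** 3  # 1 GiB = 2**30 bytes

def estimated_pool_usage(logical_gib, copies=1, compress_ratio=1.0):
    """Worst-case physical usage in bytes for a thick-provisioned zvol.

    compress_ratio is logical/physical (2.0 means data halves in size);
    copies multiplies physical usage. Simplified model: real ZFS
    accounting also includes metadata, padding, and snapshots.
    """
    return logical_gib * GIB * copies / compress_ratio

# 100 GiB zvol, 2 data copies, 2.0x compression:
print(estimated_pool_usage(100, copies=2, compress_ratio=2.0) / GIB)  # 100.0
```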
==== Provisioning ====
Controls how space is allocated in the pool.
* '''Thin provisioned (default)''': Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space.
* '''Thick provisioned''': The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.
Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.
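The overcommitment risk mentioned above can be quantified by comparing the promised thin capacity with the actual free space (an illustrative sketch, not a product feature):

```python
def overcommit_ratio(thin_logical_sizes_gib, pool_free_gib):
    """Promised thin capacity divided by actual free space.

    Values above 1.0 mean the pool is overcommitted and usage should be
    monitored closely. Illustrative arithmetic only.
    """
    return sum(thin_logical_sizes_gib) / pool_free_gib

# Three thin zvols of 500, 300 and 400 GiB against 1000 GiB free:
print(overcommit_ratio([500, 300, 400], pool_free_gib=1000))  # 1.2
```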
==== Deduplication ====
Enables ZFS block-level deduplication for the zvol. Available options include:
* '''Disabled (default)''' – deduplication is off.
* '''On''' – alias for sha256.
* '''Verify''' – alias for sha256,verify; performs an extra block comparison step.
* '''sha256''' – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy.
* '''sha256,verify''' – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.
Use deduplication only when you expect a high ratio of repeated blocks and have sufficient RAM (e.g., many similar VM images). For general-purpose workloads, leaving deduplication disabled is usually recommended.
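The checksum-based mechanism described above can be modelled in a few lines (a conceptual sketch of the sha256 and sha256,verify modes, not the ZFS implementation):

```python
import hashlib

def dedup_store(blocks, verify=False):
    """Store blocks, keeping one physical copy per SHA-256 digest.

    verify=True adds a full byte comparison against the stored copy,
    mirroring the sha256,verify mode above. Conceptual model only.
    """
    store = {}  # digest -> block bytes
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest in store:
            if verify and store[digest] != block:
                raise ValueError("hash collision detected")
            continue  # duplicate block: reference the existing copy
        store[digest] = block
    return store

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
print(len(dedup_store(blocks, verify=True)))  # 2 unique blocks stored
```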
==== Number of data copies ====
Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.).
* Allowed values: '''1 (default), 2, 3'''.
* When possible, copies are placed on different physical disks.
* Additional copies increase used space and count against pool capacity.
* Only new writes use the current setting; existing data keeps the number of copies that were in effect when they were written.
Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency.
==== Compression ====
Defines the on-the-fly compression algorithm for zvol data.
* '''lz4 (default)''' – high-performance, general-purpose method that is recommended for most workloads.
* '''None''' – disables compression.
* Additional algorithms in the list:
** gzip-1 … gzip-9 (higher levels compress more but are slower),
** lzjb,
** zle (effective mainly for blocks of zeros).
Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and extremely latency-sensitive.
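The level trade-off for the gzip family can be demonstrated with Python's standard zlib module, which implements the same DEFLATE levels 1–9 (lz4, lzjb, and zle are not in the standard library, so this only illustrates the general speed-versus-ratio idea):

```python
import zlib

# Sample data with plenty of repetition, so every level compresses well.
data = b"The quick brown fox jumps over the lazy dog. " * 1000

# zlib levels 1-9 correspond to the gzip-1..gzip-9 trade-off described
# above: higher levels compress more but take longer.
for level in (1, 6, 9):
    compressed = zlib.compress(data, level)
    print(f"gzip-{level}: {len(data)} -> {len(compressed)} bytes")
```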
==== Volume block size ====
Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB.
* Default value in the dialog: 64 KiB.
* The chosen size cannot be changed once meaningful data has been written.
Guidelines:
* Smaller blocks (e.g., 4-16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead.
* Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.
Choose the block size based on the typical I/O pattern of the applications that will use the zvol.
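The effect of block size on small random writes can be illustrated with a simplified read-modify-write model: any write smaller than the volume block size still rewrites a whole block.

```python
KIB = 1024

def write_amplification(request_bytes, volblocksize_bytes):
    """Bytes physically rewritten for one small random write.

    A request smaller than the volume block size forces the whole block
    to be read, modified, and rewritten. Simplified model for illustration.
    """
    blocks = -(-request_bytes // volblocksize_bytes)  # ceiling division
    return blocks * volblocksize_bytes

# A 4 KiB database write against different volume block sizes:
for vbs_kib in (4, 16, 128):
    written = write_amplification(4 * KIB, vbs_kib * KIB)
    print(f"volblocksize {vbs_kib} KiB -> {written} bytes rewritten")
```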
==== Write cache sync requests ====
Controls how synchronous write operations are handled for this zvol (ZFS sync property).
* '''Always (default)''': All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available.
* '''Standard''': Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS.
* '''Disabled''': Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS.
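The difference between these modes is essentially whether data is forced to stable storage before the write returns. At the application level the same idea looks like this (a generic fsync sketch, not JovianDSS code):

```python
import os
import tempfile

def write_block(path, data, synchronous):
    """Write data, optionally forcing it to stable storage first.

    os.fsync() mirrors the guarantee of sync=always: the call does not
    return until the data is on disk. Without it, the data may sit in
    the OS cache for a while, which is the risk window of the standard
    and disabled modes. Generic illustration, not JovianDSS code.
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        if synchronous:
            os.fsync(f.fileno())

with tempfile.TemporaryDirectory() as d:
    write_block(os.path.join(d, "sync.bin"), b"x" * 4096, synchronous=True)
    print("synchronous write is durable before this line runs")
```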
==== Write cache sync request handling (logbias) ====
Provides a hint about how synchronous writes should use log devices (if present in the pool).
* '''Write log device (Latency)''': If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads.
* '''In pool (Throughput)''': Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.
This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage.
==== Read cache (primary, ARC) scope ====
Specifies what is cached in the primary memory cache (ARC) for this zvol.
* '''All (default)''' – cache both data and metadata.
* '''Metadata''' – cache only metadata; user data is read directly from disk.
* '''None''' – do not cache anything for this zvol in ARC.
For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to '''Metadata''' or '''None'''.
==== Read cache (secondary, L2ARC) scope ====
Controls use of secondary cache devices (L2ARC), typically SSDs.
* '''All (default)''' – cache both metadata and user data on L2ARC.
* '''Metadata''' – cache only metadata on L2ARC.
* '''None''' – exclude this zvol from L2ARC caching.
Use '''Metadata''' or '''None''' for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads.
=== Attach to target ===
The '''Attach to target''' section at the bottom of the dialog allows you to export the newly created zvol as a LUN immediately. This section is optional; you can also attach the zvol later from the target configuration views.
==== General behaviour ====
* When the Attach to target checkbox is disabled, the zvol is created but not attached to any target.
* Enabling the checkbox expands the configuration panel and allows you to select or configure how the zvol will be presented to initiators.
==== Fields ====
'''Target name''': Select an existing target from the drop-down list. The zvol will be attached as a new LUN under this target.
==== SCSI ID ====
* '''Automatic''' – uses an automatically generated SCSI identifier.
* '''Generate''' – creates a new random identifier if you need to control or refresh the ID.
In most cases, leaving the default automatic value is sufficient.
==== LUN ====
* '''automatic''' – assigns the next available LUN number on the selected target.
* '''manual entry''' – specify a particular LUN number if your environment uses a specific numbering scheme; the number must not already be in use on that target.
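Automatic assignment can be pictured as picking the lowest free number (an assumed sketch for illustration; the product's actual algorithm is not documented here):

```python
def next_free_lun(used_luns):
    """Return the lowest LUN number not already in use on the target.

    An assumed model of 'automatic' assignment, for illustration only.
    """
    lun = 0
    while lun in used_luns:
        lun += 1
    return lun

print(next_free_lun({0, 1, 3}))  # 2
```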
==== Write cache settings ====
Defines how write caching is presented to the initiator for this LUN.
* '''Write-through – Block I/O (default)''': All writes are committed directly to stable storage before completion is reported to the initiator. This mode prioritizes data integrity and is recommended for most environments.
* '''Read only – Block I/O''': Exposes the LUN as read-only on the Block I/O path. Any write attempts from the initiator are rejected. Use this option for volumes that must not be modified.
* '''Write-through – File I/O''': Similar to Write-through – Block I/O, but handled through the File I/O path. Writes are acknowledged only after they are safely stored on disk.
* '''Write-back – File I/O''': Enables write-back caching on the File I/O path. Write requests are acknowledged after being stored in cache rather than on disk, which provides the highest write performance. However, cached data may be lost in case of a power failure or node crash (this risk exists even in HA cluster configurations), and resource failover can take noticeably longer. Use this option only when the environment can tolerate potential data loss and when additional protection (e.g., a battery-backed cache and a reliable UPS) is in place.
* '''Read only – File I/O''': Exposes the LUN as read-only on the File I/O path. Use when the initiator must have read access only, for example, for archival or reference datasets.
==== TRIM support ====
* When enabled, the zvol honors TRIM / UNMAP requests from the initiator so that released blocks can be returned to the pool.
* Use this option only when the operating system and initiator software fully support TRIM for the relevant protocol.
TRIM can improve space efficiency for thin-provisioned zvols, but misconfigured initiators or unsupported combinations may cause unexpected behaviour.
After you confirm the configuration and click Add, the zvol is created with the specified properties. If attachment is enabled, the zvol is also exposed as a LUN on the selected target and becomes available to connected initiators after they rescan their devices.
[[Category:Help topics]]

Revision as of 07:58, 15 January 2026
