NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
re 13748 added zpool export -c option
The zpool export -c command exports the specified pool while keeping its
latest configuration in the cache file for a subsequent zpool import -c.
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c

@@ -10,36 +10,38 @@
      zpool clear pool [device]
      zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
            [-O file-system-property=value]... [-R root] pool vdev...
      zpool destroy [-f] pool
      zpool detach pool device
-     zpool export [-f] pool...
+     zpool export [-cfF] [-t numthreads] pool...
      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
      zpool history [-il] [pool]...
      zpool import [-D] [-d dir]
      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
-           [-o property=value]... [-R root]
+           [-o property=value]... [-R root] [-t numthreads]
      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
-           [-o property=value]... [-R root] pool|id [newpool]
+           [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
      zpool labelclear [-f] device
      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
            [interval [count]]
      zpool offline [-t] pool device...
      zpool online [-e] pool device...
      zpool reguid pool
      zpool reopen pool
-     zpool remove [-np] pool device...
-     zpool remove -s pool
+     zpool remove pool device...
      zpool replace [-f] pool device [new_device]
-     zpool scrub [-s | -p] pool...
+     zpool scrub [-m|-M|-p|-s] pool...
      zpool set property=value pool
      zpool split [-n] [-o property=value]... [-R root] pool newpool
      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
+     zpool trim [-r rate|-s] pool...
      zpool upgrade
      zpool upgrade -v
      zpool upgrade [-V version] -a|pool...
+     zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
+     zpool vdev-set property=value pool vdev-name|vdev-guid
 
 DESCRIPTION
      The zpool command configures ZFS storage pools.  A storage pool is a
      collection of devices that provides physical storage and data replication
      for ZFS datasets.  All datasets within a storage pool share the same

@@ -216,10 +218,13 @@
      An in-progress spare replacement can be cancelled by detaching the hot
      spare.  If the original faulted device is detached, then the hot spare
      assumes its place in the configuration, and is removed from the spare
      list of all active pools.
 
+     See the sparegroup vdev property in the Device Properties section for
+     information on how to control spare selection.
+
      Spares cannot replace log devices.
 
    Intent Log
      The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
      transactions.  For instance, databases often require their transactions

@@ -234,12 +239,12 @@
      Multiple log devices can also be specified, and they can be mirrored.
      See the EXAMPLES section for an example of mirroring multiple log
      devices.
 
      Log devices can be added, replaced, attached, detached, and imported and
-     exported as part of the larger pool.  Mirrored devices can be removed by
-     specifying the top-level mirror vdev.
+     exported as part of the larger pool.  Mirrored log devices can be removed
+     by specifying the top-level mirror for the log.
 
    Cache Devices
      Devices can be added to a storage pool as "cache devices".  These devices
      provide an additional layer of caching between main memory and disk.  For
      read-heavy workloads, where the working set size is much larger than what

@@ -259,11 +264,11 @@
      raidz configuration.
 
      The content of the cache devices is considered volatile, as is the case
      with other system caches.
 
-   Properties
+   Pool Properties
      Each pool has several properties associated with it.  Some properties are
      read-only statistics while others are configurable and change the
      behavior of the pool.
 
      The following are read-only properties:

@@ -278,10 +283,15 @@
 
      capacity
              Percentage of pool space used.  This property can also be
              referred to by its shortened column name, cap.
 
+     ddt_capped=on|off
+             When ddt_capped is on, it indicates that DDT growth has been
+             stopped.  New unique writes will not be deduplicated, to
+             prevent further DDT growth.
+
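+             For example, whether deduplication has been capped on a pool
+             can be checked as follows (the pool name tank is illustrative):
+
+               # zpool get ddt_capped tank
+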
      expandsize
              Amount of uninitialized space within the pool or device that can
              be used to increase the total capacity of the pool.
              Uninitialized space consists of any space on an EFI labeled vdev
              which has not been brought online (e.g, using zpool online -e).

@@ -291,13 +301,13 @@
              The amount of fragmentation in the pool.
 
      free    The amount of free space available in the pool.
 
      freeing
-             After a file system or snapshot is destroyed, the space it was
-             using is returned to the pool asynchronously.  freeing is the
-             amount of space remaining to be reclaimed.  Over time freeing
+             freeing is the amount of pool space remaining to be reclaimed.
+             After a file, dataset or snapshot is destroyed, the space it was
+             using is returned to the pool asynchronously.  Over time freeing
              will decrease while free increases.
 
      health  The current health of the pool.  Health can be one of ONLINE,
              DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
 

@@ -357,10 +367,34 @@
              the same physical location as a device that previously belonged
              to the pool, is automatically formatted and replaced.  The
              default behavior is off.  This property can also be referred to
              by its shortened column name, replace.
 
+     autotrim=on|off
+             When set to on, while deleting data, ZFS will inform the
+             underlying vdevs of any blocks that have been marked as freed.
+             This allows thinly provisioned vdevs to reclaim unused blocks.
+             Currently, this feature supports sending SCSI UNMAP commands to
+             SCSI and SAS disk vdevs, and using file hole punching on file-
+             backed vdevs.  SATA TRIM is currently not implemented.  The
+             default setting for this property is off.
+
+             Please note that automatic trimming of data blocks can put
+             significant stress on the underlying storage devices if they do
+             not handle these commands in a background, low-priority manner.
+             In that case, it may be possible to achieve most of the benefits
+             of trimming free space on the pool by running an on-demand
+             (manual) trim every once in a while during a maintenance window
+             using the zpool trim command.
+
+             Automatic trim does not reclaim blocks immediately after a
+             delete.  Instead, it waits approximately 32-64 TXGs (or as
+             defined by the zfs_txgs_per_trim tunable) to allow for more
+             efficient aggregation of smaller portions of free space into
+             fewer larger regions, as well as to allow for longer pool
+             corruption recovery via zpool import -F.
+
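+             For example, automatic trimming can be enabled on a pool named
+             tank (illustrative) as follows:
+
+               # zpool set autotrim=on tank
+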
      bootfs=pool/dataset
              Identifies the default bootable dataset for the root pool.  This
              property is expected to be set mainly by the installation and
              upgrade programs.
 

@@ -425,24 +459,98 @@
              The value of this property is the current state of feature_name.
              The only valid value when setting this property is enabled which
              moves feature_name to the enabled state.  See zpool-features(5)
              for details on feature states.
 
+     forcetrim=on|off
+             Controls whether device support is taken into consideration when
+             issuing TRIM commands to the underlying vdevs of the pool.
+             Normally, both automatic trim and on-demand (manual) trim only
+             issue TRIM commands if a vdev indicates support for it.  Setting
+             the forcetrim property to on will force ZFS to issue TRIMs even
+             if it thinks a device does not support it.  The default value is
+             off.
+
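+             For example, TRIM commands can be forced on regardless of
+             reported device support as follows (pool name is illustrative):
+
+               # zpool set forcetrim=on tank
+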
      listsnapshots=on|off
              Controls whether information about snapshots associated with this
              pool is output when zfs list is run without the -t option.  The
              default value is off.  This property can also be referred to by
              its shortened name, listsnaps.
 
+     scrubprio=0-100
+             Sets the priority of scrub I/O for this pool.  This is a number
+             from 0 to 100, higher numbers meaning a higher priority and thus
+             more bandwidth allocated to scrub I/O, provided there is other
+             I/O competing for bandwidth.  If no other I/O is competing for
+             bandwidth, scrub is allowed to consume as much bandwidth as the
+             pool is capable of providing.  A priority of 100 means that scrub
+             I/O has equal priority to any other user-generated I/O.  The
+             value 0 is special, because it turns off per-pool scrub priority
+             control.  In that case, scrub I/O priority is determined by the
+             zfs_vdev_scrub_min_active and zfs_vdev_scrub_max_active tunables.
+             The default value is 5.
+
+     resilverprio=0-100
+             Same as the scrubprio property, but controls the priority for
+             resilver I/O.  The default value is 10.  When set to 0, the
+             global tunables used for queue sizing are
+             zfs_vdev_resilver_min_active
+             tunables used for queue sizing are zfs_vdev_resilver_min_active
+             and zfs_vdev_resilver_max_active.
+
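+             For example, scrub I/O can be given a higher-than-default
+             priority on a pool as follows (the value 20 and the pool name
+             are illustrative):
+
+               # zpool set scrubprio=20 tank
+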
      version=version
              The current on-disk version of the pool.  This can be increased,
              but never decreased.  The preferred method of updating pools is
              with the zpool upgrade command, though this property can be used
              when a specific version is needed for backwards compatibility.
              Once feature flags are enabled on a pool this property will no
              longer have a value.
 
+   Device Properties
+     Each device can have several properties associated with it.  These
+     properties override global tunables and are designed to provide more
+     control over the operational parameters of this specific device, as well
+     as to help manage this device.
+
+     The cos device property can reference a CoS property descriptor by name,
+     in which case the values of device properties are determined according
+     to the following rule: device settings override CoS settings, which, in
+     turn, override the global tunables.
+
+     The following device properties are available:
+
+     cos=cos-name
+             This property indicates whether the device is associated with a
+             CoS property descriptor object.  If so, the properties from the
+             CoS descriptor that are not explicitly overridden by the device
+             properties are in effect for this device.
+
+     l2arc_ddt=on|off
+             This property is meaningful for L2ARC devices.  If this property
+             is turned on, ZFS will dedicate the L2ARC device to caching
+             deduplication table (DDT) buffers only.
+
+     prefread=1..100
+             This property is meaningful for devices that belong to a mirror.
+             The property determines the preference that is given to the
+             device when reading from the mirror.  The ratio of this value to
+             the sum of the values of this property across all devices in the
+             mirror determines the relative frequency (in effect, the
+             probability) of reading from this specific device.
+
+     sparegroup=group-name
+             This property indicates whether the device is a part of a spare
+             device group.  Devices in the pool (including spares) can be
+             labeled with strings that are meaningful in the context of the
+             management workflow in effect.  When a failed device is
+             automatically replaced by a spare, spares whose sparegroup
+             property matches the failed device's property are used first.
+
+     {read|aread|write|awrite|scrub|resilver}_{minactive|maxactive}=1..1000
+             These properties define the minimum/maximum number of outstanding
+             active requests for the queueable classes of I/O requests as
+             defined by the ZFS I/O scheduler.  The classes are read,
+             asynchronous read, write, asynchronous write, scrub, and
+             resilver.
+
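+     For example, a device can be assigned to a spare group as follows (the
+     pool, device, and group names are illustrative):
+
+       # zpool vdev-set sparegroup=rack1 tank c0t0d0
+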
    Subcommands
      All subcommands that modify state are logged persistently to the pool in
      their original form.
 
      The zpool command provides subcommands to create and destroy storage

@@ -543,11 +651,11 @@
                      actually creating the pool.  The actual pool creation can
                      still fail due to insufficient privileges or device
                      sharing.
 
              -o property=value
-                     Sets the given pool properties.  See the Properties
+                     Sets the given pool properties.  See the Pool Properties
                      section for a list of valid properties that can be set.
 
              -O file-system-property=value
                      Sets the given file system properties in the root file
                      system of the pool.  See the Properties section of

@@ -566,11 +674,11 @@
 
      zpool detach pool device
              Detaches device from a mirror.  The operation is refused if there
              are no other valid replicas of the data.
 
-     zpool export [-f] pool...
+     zpool export [-cfF] [-t numthreads] pool...
              Exports the given pools from the system.  All devices are marked
              as exported, but are still considered in use by other subsystems.
              The devices can be moved between systems (even those of different
              endianness) and imported as long as a sufficient number of
              devices are present.

@@ -582,17 +690,27 @@
              For pools to be portable, you must give the zpool command whole
              disks, not just slices, so that ZFS can label the disks with
              portable EFI labels.  Otherwise, disk drivers on platforms of
              different endianness will not recognize the disks.
 
+             -c      Keep the configuration information of the exported pool
+                     in the cache file.
+
              -f      Forcefully unmount all datasets, using the unmount -f
                      command.
 
                      This command will forcefully export the pool even if it
                      has a shared spare that is currently being used.  This
                      may lead to potential data corruption.
 
+             -F      Do not update device labels or the cache file with the
+                     new configuration.
+
+             -t numthreads
+                     Unmount datasets in parallel using up to numthreads
+                     threads.
+
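+             For example, a pool can be exported while keeping its
+             configuration in the cache file, unmounting its datasets with
+             up to 8 threads (illustrative), as follows:
+
+               # zpool export -c -t 8 tank
+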
      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
              Retrieves the given list of properties (or all properties if all
              is used) for the specified storage pool(s).  These properties are
              displayed with the following fields:
 

@@ -599,12 +717,12 @@
                      name          Name of storage pool
                      property      Property name
                      value         Property value
                      source        Property source, either 'default' or 'local'.
 
-             See the Properties section for more information on the available
-             pool properties.
+             See the Pool Properties section for more information on the
+             available pool properties.
 
              -H      Scripted mode.  Do not display headers, and separate
                      fields by a single tab instead of arbitrary space.
 
              -o field

@@ -700,11 +818,11 @@
                      mounting datasets within the pool.  See zfs(1M) for a
                      description of dataset properties and mount options.
 
              -o property=value
                      Sets the specified property on the imported pool.  See
-                     the Properties section for more information on the
+                     the Pool Properties section for more information on the
                      available pool properties.
 
              -R root
                      Sets the cachefile property to none and the altroot
                      property to root.

@@ -759,17 +877,21 @@
                      mounting datasets within the pool.  See zfs(1M) for a
                      description of dataset properties and mount options.
 
              -o property=value
                      Sets the specified property on the imported pool.  See
-                     the Properties section for more information on the
+                     the Pool Properties section for more information on the
                      available pool properties.
 
              -R root
                      Sets the cachefile property to none and the altroot
                      property to root.
 
+             -t numthreads
+                     Mount datasets in parallel using up to numthreads
+                     threads.
+
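+             For example, a pool with many datasets can be imported,
+             mounting them with up to 8 threads (illustrative), as follows:
+
+               # zpool import -t 8 tank
+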
      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
              Displays I/O statistics for the given pools.  When given an
              interval, the statistics are printed every interval seconds until
              ^C is pressed.  If no pools are specified, statistics for every
              pool in the system is shown.  If count is specified, the command

@@ -801,13 +923,14 @@
              -H      Scripted mode.  Do not display headers, and separate
                      fields by a single tab instead of arbitrary space.
 
              -o property
                      Comma-separated list of properties to display.  See the
-                     Properties section for a list of valid properties.  The
-                     default list is name, size, allocated, free, expandsize,
-                     fragmentation, capacity, dedupratio, health, altroot.
+                     Pool Properties section for a list of valid properties.
+                     The default list is name, size, allocated, free,
+                     expandsize, fragmentation, capacity, dedupratio, health,
+                     altroot.
 
              -p      Display numbers in parsable (exact) values.
 
              -T u|d  Display a time stamp.  Specify -u for a printed
                      representation of the internal representation of time.

@@ -841,41 +964,19 @@
              performing this action.
 
      zpool reopen pool
              Reopen all the vdevs associated with the pool.
 
-     zpool remove [-np] pool device...
+     zpool remove pool device...
              Removes the specified device from the pool.  This command
-             currently only supports removing hot spares, cache, log devices
-             and mirrored top-level vdevs (mirror of leaf devices); but not
-             raidz.
+             currently only supports removing hot spares, cache, log and
+             special devices.  A mirrored log device can be removed by
+             specifying the top-level mirror for the log.  Non-log devices
+             that are part of a mirrored configuration can be removed using
+             the zpool detach command.  Non-redundant and raidz devices cannot
+             be removed from a pool.
 
-             Removing a top-level vdev reduces the total amount of space in
-             the storage pool.  The specified device will be evacuated by
-             copying all allocated space from it to the other devices in the
-             pool.  In this case, the zpool remove command initiates the
-             removal and returns, while the evacuation continues in the
-             background.  The removal progress can be monitored with zpool
-             status. This feature must be enabled to be used, see
-             zpool-features(5)
-
-             A mirrored top-level device (log or data) can be removed by
-             specifying the top-level mirror for the same.  Non-log devices or
-             data devices that are part of a mirrored configuration can be
-             removed using the zpool detach command.
-
-             -n      Do not actually perform the removal ("no-op").  Instead,
-                     print the estimated amount of memory that will be used by
-                     the mapping table after the removal completes.  This is
-                     nonzero only for top-level vdevs.
-
-             -p      Used in conjunction with the -n flag, displays numbers as
-                     parsable (exact) values.
-
-     zpool remove -s pool
-             Stops and cancels an in-progress removal of a top-level vdev.
-
      zpool replace [-f] pool device [new_device]
              Replaces old_device with new_device.  This is equivalent to
              attaching new_device, waiting for it to resilver, and then
              detaching old_device.
 

@@ -891,11 +992,11 @@
              actually a different disk.  ZFS recognizes this.
 
              -f      Forces use of new_device, even if its appears to be in
                      use.  Not all devices can be overridden in this manner.
 
-     zpool scrub [-s | -p] pool...
+     zpool scrub [-m|-M|-p|-s] pool...
              Begins a scrub or resumes a paused scrub.  The scrub examines all
              data in the specified pools to verify that it checksums
              correctly.  For replicated (mirror or raidz) devices, ZFS
              automatically repairs any damage discovered during the scrub.
              The zpool status command reports the progress of the scrub and

@@ -911,22 +1012,28 @@
              Because scrubbing and resilvering are I/O-intensive operations,
              ZFS only allows one at a time.  If a scrub is paused, the zpool
              scrub resumes it.  If a resilver is in progress, ZFS does not
              allow a scrub to be started until the resilver completes.
 
-             -s      Stop scrubbing.
+             A partial scrub may be requested using the -m or -M option; see
+             the example below.
 
+             -m      Scrub only metadata blocks.
+
+             -M      Scrub only MOS blocks.
+
              -p      Pause scrubbing.  Scrub pause state and progress are
                      periodically synced to disk.  If the system is restarted
                      or pool is exported during a paused scrub, even after
                      import, scrub will remain paused until it is resumed.
                      Once resumed the scrub will pick up from the place where
                      it was last checkpointed to disk.  To resume a paused
                      scrub issue zpool scrub again.
 
+             -s      Stop scrubbing.
+
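+             For example, a metadata-only scrub can be started, and later
+             stopped, as follows:
+
+               # zpool scrub -m tank
+               # zpool scrub -s tank
+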
      zpool set property=value pool
-             Sets the given property on the specified pool.  See the
+             Sets the given property on the specified pool.  See the Pool
              Properties section for more information on what properties can be
              set and acceptable values.
 
      zpool split [-n] [-o property=value]... [-R root] pool newpool
              Splits devices off pool creating newpool.  All vdevs in pool must

@@ -935,11 +1042,11 @@
 
              -n      Do dry run, do not actually perform the split.  Print out
                      the expected configuration of newpool.
 
              -o property=value
-                     Sets the specified property for newpool.  See the
+                     Sets the specified property for newpool.  See the Pool
                      Properties section for more information on the available
                      pool properties.
 
              -R root
                      Set altroot for newpool to root and automatically import

@@ -972,10 +1079,66 @@
 
              -x      Only display status for pools that are exhibiting errors
                      or are otherwise unavailable.  Warnings about pools not
                      using the latest on-disk format will not be included.
 
+     zpool trim [-r rate|-s] pool...
+             Initiates an on-demand TRIM operation on all of the free space
+             of a pool.  This informs the underlying storage devices of all
+             the blocks that the pool no longer considers allocated, thus
+             allowing thinly provisioned storage devices to reclaim them.
+             Please note that this collects all space marked as "freed" in
+             the pool immediately and does not wait for the
+             zfs_txgs_per_trim delay as automatic TRIM does.  Hence, this
+             can limit pool corruption recovery options during and
+             immediately following the on-demand TRIM to 1-2 TXGs into the
+             past (instead of the standard 32-64 of automatic TRIM).  This
+             approach, however, allows you to recover the maximum amount of
+             free space from the pool immediately without having to wait.
+
+             Also note that an on-demand TRIM operation can be initiated
+             irrespective of the autotrim pool property setting.  It does,
+             however, respect the forcetrim pool property.
+
+             An on-demand TRIM operation does not conflict with an ongoing
+             scrub, but it can put significant I/O stress on the underlying
+             vdevs.  A resilver, however, automatically stops an on-demand
+             TRIM operation.  You can manually reinitiate the TRIM operation
+             after the resilver has started, by simply reissuing the zpool
+             trim command.
+
+             Adding a vdev during TRIM is supported, although the progress
+             display in zpool status might not be entirely accurate in that
+             case (TRIM will complete before reaching 100%).  Removing or
+             detaching a vdev will prematurely terminate an on-demand TRIM
+             operation.
+
+             -r rate
+                     Controls the speed at which the TRIM operation
+                     progresses.  Without this option, TRIM is executed in
+                     parallel on all top-level vdevs as quickly as possible.
+                     This option allows you to control how fast (in bytes per
+                     second) the TRIM is executed.  This rate is applied on a
+                     per-vdev basis, i.e. every top-level vdev in the pool
+                     tries to match this speed.
+
+                     Due to limitations in how the algorithm is designed,
+                     TRIMs are executed in whole-metaslab increments.  Each
+                     top-level vdev contains approximately 200 metaslabs, so a
+                     rate-limited TRIM progresses in steps, i.e. it TRIMs one
+                     metaslab completely and then waits for a while so that
+                     over the whole device, the speed averages out.
+
+                     When an on-demand TRIM operation is already in progress,
+                     this option changes its rate.  To change a rate-limited
+                     TRIM to an unlimited one, simply execute the zpool trim
+                     command without the -r option.
+
+             -s      Stop trimming.  If an on-demand TRIM operation is not
+                     ongoing at the moment, this does nothing and the command
+                     returns success.
+
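+             For example, an unlimited-rate on-demand TRIM can be started,
+             and later stopped, as follows:
+
+               # zpool trim tank
+               # zpool trim -s tank
+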
      zpool upgrade
              Displays pools which do not have all supported features enabled
              and pools formatted using a legacy ZFS version number.  These
              pools can continue to be used, but some features may not be
              available.  Use zpool upgrade -a to enable all features on all

@@ -999,10 +1162,24 @@
                      Upgrade to the specified legacy version.  If the -V flag
                      is specified, no features will be enabled on the pool.
                      This option can only be used to increase the version
                      number up to the last supported legacy version number.
 
+     zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
+             Retrieves the given list of vdev properties (or all properties if
+             all is used) for the specified vdev of the specified storage
+             pool.  These properties are displayed in the same manner as the
+             pool properties.  The operation is supported for leaf-level vdevs
+             only.  See the Device Properties section for more information on
+             the available properties.
+
+     zpool vdev-set property=value pool vdev-name|vdev-guid
+             Sets the given property on the specified device of the specified
+             pool.  If a top-level vdev is specified, sets the property on
+             all its child devices.  See the Device Properties section for
+             more information on what properties can be set and acceptable
+             values.
+
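+             For example, all device properties of a leaf vdev can be
+             displayed as follows (the device name c0t0d0 is illustrative):
+
+               # zpool vdev-get all tank c0t0d0
+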
 EXIT STATUS
      The following exit values are returned:
 
      0       Successful completion.
 

@@ -1130,14 +1307,12 @@
              could take over an hour for them to fill.  Capacity and reads can
              be monitored using the iostat option as follows:
 
              # zpool iostat -v pool 5
 
-     Example 14 Removing a Mirrored top-level (Log or Data) Device
-             The following commands remove the mirrored log device mirror-2
-             and mirrored top-level data device mirror-1.
-
+     Example 14 Removing a Mirrored Log Device
+             The following command removes the mirrored log device mirror-2.
              Given this configuration:
 
                pool: tank
               state: ONLINE
               scrub: none requested

@@ -1158,14 +1333,10 @@
 
              The command to remove the mirrored log mirror-2 is:
 
              # zpool remove tank mirror-2
 
-             The command to remove the mirrored data mirror-1 is:
-
-             # zpool remove tank mirror-1
-
      Example 15 Displaying expanded space on a device
              The following command displays the detailed information for the
              pool data.  This pool is comprised of a single raidz vdev where
              one of its devices increased its capacity by 10GB.  In this
              example, the pool will not be able to utilize this extra capacity