NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
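The partial scrub options above might be invoked as follows; the pool name tank is illustrative, not taken from the change itself:

    # zpool scrub -M tank    # scrub the MOS only
    # zpool scrub -m tank    # scrub all pool metadata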
re 13748 added zpool export -c option
zpool export -c command exports specified pool while keeping its latest
configuration in the cache file for subsequent zpool import -c.
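A sketch of that workflow; the pool name and cachefile path are illustrative:

    # zpool export -c tank
    # zpool import -c /etc/zfs/zpool.cache tank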
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c

          --- old/usr/src/man/man1m/zpool.1m
          +++ new/usr/src/man/man1m/zpool.1m
(12 lines elided)
  13   13  .\" When distributing Covered Code, include this CDDL HEADER in each
  14   14  .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15   15  .\" If applicable, add the following below this CDDL HEADER, with the
  16   16  .\" fields enclosed by brackets "[]" replaced with your own identifying
  17   17  .\" information: Portions Copyright [yyyy] [name of copyright owner]
  18   18  .\"
  19   19  .\" CDDL HEADER END
  20   20  .\"
  21   21  .\"
  22   22  .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
  23      -.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
       23 +.\" Copyright (c) 2013 by Delphix. All rights reserved.
  24   24  .\" Copyright 2017 Nexenta Systems, Inc.
  25   25  .\" Copyright (c) 2017 Datto Inc.
  26   26  .\" Copyright (c) 2017 George Melikov. All Rights Reserved.
  27   27  .\"
  28   28  .Dd December 6, 2017
  29   29  .Dt ZPOOL 1M
  30   30  .Os
  31   31  .Sh NAME
  32   32  .Nm zpool
  33   33  .Nd configure ZFS storage pools
(23 lines elided)
  57   57  .Ar pool vdev Ns ...
  58   58  .Nm
  59   59  .Cm destroy
  60   60  .Op Fl f
  61   61  .Ar pool
  62   62  .Nm
  63   63  .Cm detach
  64   64  .Ar pool device
  65   65  .Nm
  66   66  .Cm export
  67      -.Op Fl f
       67 +.Op Fl cfF
       68 +.Op Fl t Ar numthreads
  68   69  .Ar pool Ns ...
  69   70  .Nm
  70   71  .Cm get
  71   72  .Op Fl Hp
  72   73  .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
  73   74  .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
  74   75  .Ar pool Ns ...
  75   76  .Nm
  76   77  .Cm history
  77   78  .Op Fl il
(4 lines elided)
  82   83  .Op Fl d Ar dir
  83   84  .Nm
  84   85  .Cm import
  85   86  .Fl a
  86   87  .Op Fl DfmN
  87   88  .Op Fl F Op Fl n
  88   89  .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
  89   90  .Op Fl o Ar mntopts
  90   91  .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
  91   92  .Op Fl R Ar root
       93 +.Op Fl t Ar numthreads
  92   94  .Nm
  93   95  .Cm import
  94   96  .Op Fl Dfm
  95   97  .Op Fl F Op Fl n
  96   98  .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
  97   99  .Op Fl o Ar mntopts
  98  100  .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
  99  101  .Op Fl R Ar root
      102 +.Op Fl t Ar numthreads
 100  103  .Ar pool Ns | Ns Ar id
 101  104  .Op Ar newpool
 102  105  .Nm
 103  106  .Cm iostat
 104  107  .Op Fl v
 105  108  .Op Fl T Sy u Ns | Ns Sy d
 106  109  .Oo Ar pool Oc Ns ...
 107  110  .Op Ar interval Op Ar count
 108  111  .Nm
 109  112  .Cm labelclear
(15 lines elided)
 125  128  .Op Fl e
 126  129  .Ar pool Ar device Ns ...
 127  130  .Nm
 128  131  .Cm reguid
 129  132  .Ar pool
 130  133  .Nm
 131  134  .Cm reopen
 132  135  .Ar pool
 133  136  .Nm
 134  137  .Cm remove
 135      -.Op Fl np
 136  138  .Ar pool Ar device Ns ...
 137  139  .Nm
 138      -.Cm remove
 139      -.Fl s
 140      -.Ar pool
 141      -.Nm
 142  140  .Cm replace
 143  141  .Op Fl f
 144  142  .Ar pool Ar device Op Ar new_device
 145  143  .Nm
 146  144  .Cm scrub
 147      -.Op Fl s | Fl p
      145 +.Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
 148  146  .Ar pool Ns ...
 149  147  .Nm
 150  148  .Cm set
 151  149  .Ar property Ns = Ns Ar value
 152  150  .Ar pool
 153  151  .Nm
 154  152  .Cm split
 155  153  .Op Fl n
 156  154  .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
 157  155  .Op Fl R Ar root
 158  156  .Ar pool newpool
 159  157  .Nm
 160  158  .Cm status
 161  159  .Op Fl Dvx
 162  160  .Op Fl T Sy u Ns | Ns Sy d
 163  161  .Oo Ar pool Oc Ns ...
 164  162  .Op Ar interval Op Ar count
 165  163  .Nm
      164 +.Cm trim
      165 +.Op Fl r Ar rate Ns | Ns Fl s
      166 +.Ar pool Ns ...
      167 +.Nm
 166  168  .Cm upgrade
 167  169  .Nm
 168  170  .Cm upgrade
 169  171  .Fl v
 170  172  .Nm
 171  173  .Cm upgrade
 172  174  .Op Fl V Ar version
 173  175  .Fl a Ns | Ns Ar pool Ns ...
      176 +.Nm
      177 +.Cm vdev-get
      178 +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
      179 +.Ar pool
      180 +.Ar vdev-name Ns | Ns Ar vdev-guid
      181 +.Nm
      182 +.Cm vdev-set
      183 +.Ar property Ns = Ns Ar value
      184 +.Ar pool
      185 +.Ar vdev-name Ns | Ns Ar vdev-guid
 174  186  .Sh DESCRIPTION
 175  187  The
 176  188  .Nm
 177  189  command configures ZFS storage pools.
 178  190  A storage pool is a collection of devices that provides physical storage and
 179  191  data replication for ZFS datasets.
 180  192  All datasets within a storage pool share the same space.
 181  193  See
 182  194  .Xr zfs 1M
 183  195  for information on managing datasets.
(209 lines elided)
 393  405  .Pp
 394  406  If a pool has a shared spare that is currently being used, the pool can not be
 395  407  exported since other pools may use this shared spare, which may lead to
 396  408  potential data corruption.
 397  409  .Pp
 398  410  An in-progress spare replacement can be cancelled by detaching the hot spare.
 399  411  If the original faulted device is detached, then the hot spare assumes its
 400  412  place in the configuration, and is removed from the spare list of all active
 401  413  pools.
 402  414  .Pp
      415 +See
      416 +.Sy sparegroup
      417 +vdev property in
      418 +.Sx Device Properties
      419 +section for information on how to control spare selection.
      420 +.Pp
 403  421  Spares cannot replace log devices.
 404  422  .Ss Intent Log
 405  423  The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 406  424  transactions.
 407  425  For instance, databases often require their transactions to be on stable storage
 408  426  devices when returning from a system call.
 409  427  NFS and other applications can also use
 410  428  .Xr fsync 3C
 411  429  to ensure data stability.
 412  430  By default, the intent log is allocated from blocks within the main pool.
(4 lines elided)
 417  435  # zpool create pool c0d0 c1d0 log c2d0
 418  436  .Ed
 419  437  .Pp
 420  438  Multiple log devices can also be specified, and they can be mirrored.
 421  439  See the
 422  440  .Sx EXAMPLES
 423  441  section for an example of mirroring multiple log devices.
 424  442  .Pp
 425  443  Log devices can be added, replaced, attached, detached, and imported and
 426  444  exported as part of the larger pool.
 427      -Mirrored devices can be removed by specifying the top-level mirror vdev.
      445 +Mirrored log devices can be removed by specifying the top-level mirror for the
      446 +log.
 428  447  .Ss Cache Devices
 429  448  Devices can be added to a storage pool as
 430  449  .Qq cache devices .
 431  450  These devices provide an additional layer of caching between main memory and
 432  451  disk.
 433  452  For read-heavy workloads, where the working set size is much larger than what
  435  454  can be cached in main memory, using cache devices allows much more of this
 435  454  working set to be served from low latency media.
 436  455  Using cache devices provides the greatest performance improvement for random
 437  456  read-workloads of mostly static content.
(6 lines elided)
 444  463  # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 445  464  .Ed
 446  465  .Pp
 447  466  Cache devices cannot be mirrored or part of a raidz configuration.
 448  467  If a read error is encountered on a cache device, that read I/O is reissued to
 449  468  the original storage pool device, which might be part of a mirrored or raidz
 450  469  configuration.
 451  470  .Pp
 452  471  The content of the cache devices is considered volatile, as is the case with
 453  472  other system caches.
 454      -.Ss Properties
      473 +.Ss Pool Properties
 455  474  Each pool has several properties associated with it.
 456  475  Some properties are read-only statistics while others are configurable and
 457  476  change the behavior of the pool.
 458  477  .Pp
 459  478  The following are read-only properties:
 460  479  .Bl -tag -width Ds
 461  480  .It Cm allocated
 462  481  Amount of storage space used within the pool.
 463  482  .It Sy bootsize
 464  483  The size of the system boot partition.
 465  484  This property can only be set at pool creation time and is read-only once pool
 466  485  is created.
 467  486  Setting this property implies using the
 468  487  .Fl B
 469  488  option.
 470  489  .It Sy capacity
 471  490  Percentage of pool space used.
 472  491  This property can also be referred to by its shortened column name,
 473  492  .Sy cap .
      493 +.It Sy ddt_capped Ns = Ns Sy on Ns | Ns Sy off
       494 +When the
       495 +.Sy ddt_capped
       496 +property is
       497 +.Sy on ,
       498 +DDT growth has been stopped.
       499 +To prevent further DDT growth, new unique writes will not be deduplicated.
 474  500  .It Sy expandsize
 475  501  Amount of uninitialized space within the pool or device that can be used to
 476  502  increase the total capacity of the pool.
 477  503  Uninitialized space consists of any space on an EFI labeled vdev which has not
 478  504  been brought online
 479  505  .Po e.g, using
 480  506  .Nm zpool Cm online Fl e
 481  507  .Pc .
 482  508  This space occurs when a LUN is dynamically expanded.
 483  509  .It Sy fragmentation
 484  510  The amount of fragmentation in the pool.
 485  511  .It Sy free
 486  512  The amount of free space available in the pool.
 487  513  .It Sy freeing
 488      -After a file system or snapshot is destroyed, the space it was using is
 489      -returned to the pool asynchronously.
 490  514  .Sy freeing
 491      -is the amount of space remaining to be reclaimed.
      515 +is the amount of pool space remaining to be reclaimed.
      516 +After a file, dataset or snapshot is destroyed, the space it was using is
      517 +returned to the pool asynchronously.
 492  518  Over time
 493  519  .Sy freeing
 494  520  will decrease while
 495  521  .Sy free
 496  522  increases.
 497  523  .It Sy health
 498  524  The current health of the pool.
 499  525  Health can be one of
 500  526  .Sy ONLINE , DEGRADED , FAULTED , OFFLINE, REMOVED , UNAVAIL .
 501  527  .It Sy guid
(75 lines elided)
 577  603  .Nm zpool Cm replace
 578  604  command.
 579  605  If set to
 580  606  .Sy on ,
 581  607  any new device, found in the same physical location as a device that previously
 582  608  belonged to the pool, is automatically formatted and replaced.
 583  609  The default behavior is
 584  610  .Sy off .
 585  611  This property can also be referred to by its shortened column name,
 586  612  .Sy replace .
      613 +.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
      614 +When set to
      615 +.Sy on ,
      616 +while deleting data, ZFS will inform the underlying vdevs of any blocks that
      617 +have been marked as freed.
      618 +This allows thinly provisioned vdevs to reclaim unused blocks.
      619 +Currently, this feature supports sending SCSI UNMAP commands to SCSI and SAS
      620 +disk vdevs, and using file hole punching on file-backed vdevs.
      621 +SATA TRIM is currently not implemented.
      622 +The default setting for this property is
      623 +.Sy off .
      624 +.Pp
      625 +Please note that automatic trimming of data blocks can put significant stress on
      626 +the underlying storage devices if they do not handle these commands in a
      627 +background, low-priority manner.
      628 +In that case, it may be possible to achieve most of the benefits of trimming
      629 +free space on the pool by running an on-demand
      630 +.Pq manual
      631 +trim every once in a while during a maintenance window using the
      632 +.Nm zpool Cm trim
      633 +command.
      634 +.Pp
       635 +Automatic trim does not reclaim blocks immediately after a delete.
      636 +Instead, it waits approximately 32-64 TXGs
      637 +.Po or as defined by the
      638 +.Sy zfs_txgs_per_trim
      639 +tunable
      640 +.Pc
      641 +to allow for more efficient aggregation of smaller portions of free space into
      642 +fewer larger regions, as well as to allow for longer pool corruption recovery
      643 +via
      644 +.Nm zpool Cm import Fl F .
 587  645  .It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
 588  646  Identifies the default bootable dataset for the root pool.
 589  647  This property is expected to be set mainly by the installation and upgrade
 590  648  programs.
 591  649  .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
 592  650  Controls the location of where the pool configuration is cached.
 593  651  Discovering all pools on system startup requires a cached copy of the
 594  652  configuration data that is stored on the root file system.
 595  653  All pools in this cache are automatically imported when the system boots.
 596  654  Some environments, such as install and clustering, need to cache this
(58 lines elided)
 655  713  The value of this property is the current state of
 656  714  .Ar feature_name .
 657  715  The only valid value when setting this property is
 658  716  .Sy enabled
 659  717  which moves
 660  718  .Ar feature_name
 661  719  to the enabled state.
 662  720  See
 663  721  .Xr zpool-features 5
 664  722  for details on feature states.
      723 +.It Sy forcetrim Ns = Ns Sy on Ns | Ns Sy off
      724 +Controls whether device support is taken into consideration when issuing TRIM
      725 +commands to the underlying vdevs of the pool.
      726 +Normally, both automatic trim and on-demand
      727 +.Pq manual
      728 +trim only issue TRIM commands if a vdev indicates support for it.
      729 +Setting the
      730 +.Sy forcetrim
      731 +property to
      732 +.Sy on
      733 +will force ZFS to issue TRIMs even if it thinks a device does not support it.
      734 +The default value is
      735 +.Sy off .
 665  736  .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
 666  737  Controls whether information about snapshots associated with this pool is
 667  738  output when
 668  739  .Nm zfs Cm list
 669  740  is run without the
 670  741  .Fl t
 671  742  option.
 672  743  The default value is
 673  744  .Sy off .
 674  745  This property can also be referred to by its shortened name,
 675  746  .Sy listsnaps .
       747 +.It Sy scrubprio Ns = Ns Ar 0-100
      748 +Sets the priority of scrub I/O for this pool.
      749 +This is a number from 0 to 100, higher numbers meaning a higher priority
      750 +and thus more bandwidth allocated to scrub I/O, provided there is other
      751 +I/O competing for bandwidth.
      752 +If no other I/O is competing for bandwidth, scrub is allowed to consume
      753 +as much bandwidth as the pool is capable of providing.
      754 +A priority of
      755 +.Ar 100
      756 +means that scrub I/O has equal priority to any other user-generated I/O.
      757 +The value
      758 +.Ar 0
       759 +is special, because it turns off per-pool scrub priority control.
      760 +In that case, scrub I/O priority is determined by the
      761 +.Sy zfs_vdev_scrub_min_active
      762 +and
      763 +.Sy zfs_vdev_scrub_max_active
      764 +tunables.
      765 +The default value is
      766 +.Ar 5 .
       767 +.It Sy resilverprio Ns = Ns Ar 0-100
      768 +Same as the
      769 +.Sy scrubprio
      770 +property, but controls the priority for resilver I/O.
      771 +The default value is
      772 +.Ar 10 .
      773 +When set to
      774 +.Ar 0
      775 +the global tunables used for queue sizing are
      776 +.Sy zfs_vdev_resilver_min_active
      777 +and
      778 +.Sy zfs_vdev_resilver_max_active .
 676  779  .It Sy version Ns = Ns Ar version
 677  780  The current on-disk version of the pool.
 678  781  This can be increased, but never decreased.
 679  782  The preferred method of updating pools is with the
 680  783  .Nm zpool Cm upgrade
 681  784  command, though this property can be used when a specific version is needed for
 682  785  backwards compatibility.
 683  786  Once feature flags are enabled on a pool this property will no longer have a
 684  787  value.
 685  788  .El
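The tunable pool properties added in this change might be exercised as follows; the pool name tank and the values shown are illustrative, not taken from the change itself:

    # zpool set autotrim=on tank      # trim freed blocks continuously
    # zpool trim tank                 # or: one-shot on-demand trim instead
    # zpool set forcetrim=on tank     # issue TRIMs even if support is not detected
    # zpool set scrubprio=20 tank     # more bandwidth for scrub I/O under load
    # zpool set resilverprio=0 tank   # defer to the global resilver tunables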
      789 +.Ss Device Properties
      790 +Each device can have several properties associated with it.
       791 +These properties override global tunables and are designed to provide more
      792 +control over the operational parameters of this specific device, as well as to
      793 +help manage this device.
      794 +.Pp
      795 +The
      796 +.Sy cos
      797 +device property can reference a CoS property descriptor by name, in which case,
      798 +the values of device properties are determined according to the following rule:
       799 +the device settings override CoS settings, which, in turn, override the global
      800 +tunables.
      801 +.Pp
      802 +The following device properties are available:
      803 +.Bl -tag -width Ds
      804 +.It Sy cos Ns = Ns Ar cos-name
      805 +This property indicates whether the device is associated with a CoS property
      806 +descriptor object.
      807 +If so, the properties from the CoS descriptor that are not explicitly overridden
      808 +by the device properties are in effect for this device.
      809 +.It Sy l2arc_ddt Ns = Ns Sy on Ns | Ns Sy off
      810 +This property is meaningful for L2ARC devices.
      811 +If this property is turned
      812 +.Sy on
      813 +ZFS will dedicate the L2ARC device to cache deduplication table
      814 +.Pq DDT
      815 +buffers only.
      816 +.It Sy prefread Ns = Ns Sy 1 Ns .. Ns Sy 100
      817 +This property is meaningful for devices that belong to a mirror.
      818 +The property determines the preference that is given to the device when reading
      819 +from the mirror.
      820 +The ratio of the value to the sum of the values of this property for all the
      821 +devices in the mirror determines the relative frequency
       822 +.Po which can also be viewed as the
      823 +.Qq probability
      824 +.Pc
      825 +of reading from this specific device.
      826 +.It Sy sparegroup Ns = Ns Ar group-name
      827 +This property indicates whether the device is a part of a spare device group.
      828 +Devices in the pool
      829 +.Pq including spares
      830 +can be labeled with strings that are meaningful in the context of the management
      831 +workflow in effect.
      832 +When a failed device is automatically replaced by spares, the spares whose
      833 +.Sy sparegroup
       834 +property matches the failed device's property are used first.
      835 +.It Xo
      836 +.Bro Sy read Ns | Ns Sy aread Ns | Ns Sy write Ns | Ns
      837 +.Sy awrite Ns | Ns Sy scrub Ns | Ns Sy resilver Brc Ns _ Ns
      838 +.Bro Sy minactive Ns | Ns Sy maxactive Brc Ns = Ns
      839 +.Sy 1 Ns .. Ns Sy 1000
      840 +.Xc
       841 +These properties define the minimum/maximum number of outstanding active
       842 +requests for the queueable classes of I/O requests as defined by the
       843 +ZFS I/O scheduler.
       844 +The classes are read, asynchronous read, write, asynchronous write, scrub,
       845 +and resilver.
      846 +.El
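Per the vdev-get/vdev-set synopsis above, device properties might be inspected and set like this; the pool and vdev names are illustrative:

    # zpool vdev-get all tank c0t1d0
    # zpool vdev-set sparegroup=shelf0 tank c0t1d0
    # zpool vdev-set prefread=80 tank c0t1d0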
 686  847  .Ss Subcommands
 687  848  All subcommands that modify state are logged persistently to the pool in their
 688  849  original form.
 689  850  .Pp
 690  851  The
 691  852  .Nm
 692  853  command provides subcommands to create and destroy storage pools, add capacity
 693  854  to storage pools, and provide information about the storage pools.
 694  855  The following subcommands are supported:
 695  856  .Bl -tag -width Ds
(186 lines elided)
 882 1043  For more information on dataset mount points, see
 883 1044  .Xr zfs 1M .
 884 1045  .It Fl n
 885 1046  Displays the configuration that would be used without actually creating the
 886 1047  pool.
 887 1048  The actual pool creation can still fail due to insufficient privileges or
 888 1049  device sharing.
 889 1050  .It Fl o Ar property Ns = Ns Ar value
 890 1051  Sets the given pool properties.
 891 1052  See the
 892      -.Sx Properties
     1053 +.Sx Pool Properties
 893 1054  section for a list of valid properties that can be set.
 894 1055  .It Fl O Ar file-system-property Ns = Ns Ar value
 895 1056  Sets the given file system properties in the root file system of the pool.
 896 1057  See the
 897 1058  .Sx Properties
 898 1059  section of
 899 1060  .Xr zfs 1M
 900 1061  for a list of valid properties that can be set.
 901 1062  .It Fl R Ar root
 902 1063  Equivalent to
(16 lines elided)
 919 1080  .Cm detach
 920 1081  .Ar pool device
 921 1082  .Xc
 922 1083  Detaches
 923 1084  .Ar device
 924 1085  from a mirror.
 925 1086  The operation is refused if there are no other valid replicas of the data.
 926 1087  .It Xo
 927 1088  .Nm
 928 1089  .Cm export
 929      -.Op Fl f
     1090 +.Op Fl cfF
     1091 +.Op Fl t Ar numthreads
 930 1092  .Ar pool Ns ...
 931 1093  .Xc
 932 1094  Exports the given pools from the system.
 933 1095  All devices are marked as exported, but are still considered in use by other
 934 1096  subsystems.
 935 1097  The devices can be moved between systems
 936 1098  .Pq even those of different endianness
 937 1099  and imported as long as a sufficient number of devices are present.
 938 1100  .Pp
 939 1101  Before exporting the pool, all datasets within the pool are unmounted.
 940 1102  A pool can not be exported if it has a shared spare that is currently being
 941 1103  used.
 942 1104  .Pp
 943 1105  For pools to be portable, you must give the
 944 1106  .Nm
 945 1107  command whole disks, not just slices, so that ZFS can label the disks with
 946 1108  portable EFI labels.
 947 1109  Otherwise, disk drivers on platforms of different endianness will not recognize
 948 1110  the disks.
 949 1111  .Bl -tag -width Ds
     1112 +.It Fl c
      1113 +Keep the configuration information of the exported pool in the cache file.
 950 1114  .It Fl f
 951 1115  Forcefully unmount all datasets, using the
 952 1116  .Nm unmount Fl f
 953 1117  command.
 954 1118  .Pp
 955 1119  This command will forcefully export the pool even if it has a shared spare that
 956 1120  is currently being used.
 957 1121  This may lead to potential data corruption.
     1122 +.It Fl F
      1123 +Do not update device labels or the cache file with the new configuration.
     1124 +.It Fl t Ar numthreads
     1125 +Unmount datasets in parallel using up to
     1126 +.Ar numthreads
     1127 +threads.
 958 1128  .El
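A sketch combining the new export flags; the pool name and thread count are illustrative:

    # zpool export -c -t 8 tank    # keep the cachefile entry; unmount with up to 8 threads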
 959 1129  .It Xo
 960 1130  .Nm
 961 1131  .Cm get
 962 1132  .Op Fl Hp
 963 1133  .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
 964 1134  .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
 965 1135  .Ar pool Ns ...
 966 1136  .Xc
 967 1137  Retrieves the given list of properties
(5 lines elided)
 973 1143  for the specified storage pool(s).
 974 1144  These properties are displayed with the following fields:
 975 1145  .Bd -literal
 976 1146          name          Name of storage pool
 977 1147          property      Property name
 978 1148          value         Property value
 979 1149          source        Property source, either 'default' or 'local'.
 980 1150  .Ed
 981 1151  .Pp
 982 1152  See the
 983      -.Sx Properties
     1153 +.Sx Pool Properties
 984 1154  section for more information on the available pool properties.
 985 1155  .Bl -tag -width Ds
 986 1156  .It Fl H
 987 1157  Scripted mode.
 988 1158  Do not display headers, and separate fields by a single tab instead of arbitrary
 989 1159  space.
 990 1160  .It Fl o Ar field
 991 1161  A comma-separated list of columns to display.
 992 1162  .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
 993 1163  is the default value.
(130 lines elided)
1124 1294  Import the pool without mounting any file systems.
1125 1295  .It Fl o Ar mntopts
1126 1296  Comma-separated list of mount options to use when mounting datasets within the
1127 1297  pool.
1128 1298  See
1129 1299  .Xr zfs 1M
1130 1300  for a description of dataset properties and mount options.
1131 1301  .It Fl o Ar property Ns = Ns Ar value
1132 1302  Sets the specified property on the imported pool.
1133 1303  See the
1134      -.Sx Properties
     1304 +.Sx Pool Properties
1135 1305  section for more information on the available pool properties.
1136 1306  .It Fl R Ar root
1137 1307  Sets the
1138 1308  .Sy cachefile
1139 1309  property to
1140 1310  .Sy none
1141 1311  and the
1142 1312  .Sy altroot
1143 1313  property to
1144 1314  .Ar root .
(73 lines elided)
1218 1388  option, above.
1219 1389  .It Fl o Ar mntopts
1220 1390  Comma-separated list of mount options to use when mounting datasets within the
1221 1391  pool.
1222 1392  See
1223 1393  .Xr zfs 1M
1224 1394  for a description of dataset properties and mount options.
1225 1395  .It Fl o Ar property Ns = Ns Ar value
1226 1396  Sets the specified property on the imported pool.
1227 1397  See the
1228      -.Sx Properties
     1398 +.Sx Pool Properties
1229 1399  section for more information on the available pool properties.
1230 1400  .It Fl R Ar root
1231 1401  Sets the
1232 1402  .Sy cachefile
1233 1403  property to
1234 1404  .Sy none
1235 1405  and the
1236 1406  .Sy altroot
1237 1407  property to
1238 1408  .Ar root .
     1409 +.It Fl t Ar numthreads
     1410 +Mount datasets in parallel using up to
     1411 +.Ar numthreads
     1412 +threads.
1239 1413  .El
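The parallel mount option might be used as follows; the pool name and thread count are illustrative:

    # zpool import -t 8 tank    # mount datasets with up to 8 threads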
1240 1414  .It Xo
1241 1415  .Nm
1242 1416  .Cm iostat
1243 1417  .Op Fl v
1244 1418  .Op Fl T Sy u Ns | Ns Sy d
1245 1419  .Oo Ar pool Oc Ns ...
1246 1420  .Op Ar interval Op Ar count
1247 1421  .Xc
1248 1422  Displays I/O statistics for the given pools.
(66 lines elided)
1315 1489  .Ar count
1316 1490  reports are printed.
1317 1491  .Bl -tag -width Ds
1318 1492  .It Fl H
1319 1493  Scripted mode.
1320 1494  Do not display headers, and separate fields by a single tab instead of arbitrary
1321 1495  space.
1322 1496  .It Fl o Ar property
1323 1497  Comma-separated list of properties to display.
1324 1498  See the
1325      -.Sx Properties
     1499 +.Sx Pool Properties
1326 1500  section for a list of valid properties.
1327 1501  The default list is
1328 1502  .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1329 1503  .Cm dedupratio , health , altroot .
1330 1504  .It Fl p
1331 1505  Display numbers in parsable
1332 1506  .Pq exact
1333 1507  values.
1334 1508  .It Fl T Sy u Ns | Ns Sy d
1335 1509  Display a time stamp.
(52 lines elided)
1388 1562  performing this action.
1389 1563  .It Xo
1390 1564  .Nm
1391 1565  .Cm reopen
1392 1566  .Ar pool
1393 1567  .Xc
1394 1568  Reopen all the vdevs associated with the pool.
1395 1569  .It Xo
1396 1570  .Nm
1397 1571  .Cm remove
1398      -.Op Fl np
1399 1572  .Ar pool Ar device Ns ...
1400 1573  .Xc
1401 1574  Removes the specified device from the pool.
1402      -This command currently only supports removing hot spares, cache, log
1403      -devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz.
1404      -.sp
1405      -Removing a top-level vdev reduces the total amount of space in the storage pool.
1406      -The specified device will be evacuated by copying all allocated space from it to
1407      -the other devices in the pool.
1408      -In this case, the
1409      -.Nm zpool Cm remove
1410      -command initiates the removal and returns, while the evacuation continues in
1411      -the background.
1412      -The removal progress can be monitored with
1413      -.Nm zpool Cm status.
1414      -This feature must be enabled to be used, see
1415      -.Xr zpool-features 5
1416      -.Pp
1417      -A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the
1418      -same.
1419      -Non-log devices or data devices that are part of a mirrored configuration can be removed using
     1575 +This command currently only supports removing hot spares, cache, log and special
     1576 +devices.
     1577 +A mirrored log device can be removed by specifying the top-level mirror for the
     1578 +log.
     1579 +Non-log devices that are part of a mirrored configuration can be removed using
1420 1580  the
1421 1581  .Nm zpool Cm detach
1422 1582  command.
1423      -.Bl -tag -width Ds
1424      -.It Fl n
1425      -Do not actually perform the removal ("no-op").
1426      -Instead, print the estimated amount of memory that will be used by the
1427      -mapping table after the removal completes.
1428      -This is nonzero only for top-level vdevs.
1429      -.El
1430      -.Bl -tag -width Ds
1431      -.It Fl p
1432      -Used in conjunction with the
1433      -.Fl n
1434      -flag, displays numbers as parsable (exact) values.
1435      -.El
     1583 +Non-redundant and raidz devices cannot be removed from a pool.
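.Pp
For example, a cache device can be removed from a pool named
.Em tank
as follows
.Pq the pool and device names are illustrative :
.Bd -literal
# zpool remove tank c0t3d0
.Ed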
1436 1584  .It Xo
1437 1585  .Nm
1438      -.Cm remove
1439      -.Fl s
1440      -.Ar pool
1441      -.Xc
1442      -Stops and cancels an in-progress removal of a top-level vdev.
1443      -.It Xo
1444      -.Nm
1445 1586  .Cm replace
1446 1587  .Op Fl f
1447 1588  .Ar pool Ar device Op Ar new_device
1448 1589  .Xc
1449 1590  Replaces
1450 1591  .Ar old_device
1451 1592  with
1452 1593  .Ar new_device .
1453 1594  This is equivalent to attaching
1454 1595  .Ar new_device ,
↓ open down ↓ 20 lines elided ↑ open up ↑
1475 1616  .Bl -tag -width Ds
1476 1617  .It Fl f
1477 1618  Forces use of
1478 1619  .Ar new_device ,
1479 1620  even if it appears to be in use.
1480 1621  Not all devices can be overridden in this manner.
1481 1622  .El
1482 1623  .It Xo
1483 1624  .Nm
1484 1625  .Cm scrub
1485      -.Op Fl s | Fl p
     1626 +.Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
1486 1627  .Ar pool Ns ...
1487 1628  .Xc
1488 1629  Begins a scrub or resumes a paused scrub.
1489 1630  The scrub examines all data in the specified pools to verify that it checksums
1490 1631  correctly.
1491 1632  For replicated
1492 1633  .Pq mirror or raidz
1493 1634  devices, ZFS automatically repairs any damage discovered during the scrub.
1494 1635  The
1495 1636  .Nm zpool Cm status
↓ open down ↓ 10 lines elided ↑ open up ↑
1506 1647  whereas scrubbing examines all data to discover silent errors due to hardware
1507 1648  faults or disk failure.
1508 1649  .Pp
1509 1650  Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1510 1651  one at a time.
1511 1652  If a scrub is paused, the
1512 1653  .Nm zpool Cm scrub
1513 1654  resumes it.
1514 1655  If a resilver is in progress, ZFS does not allow a scrub to be started until the
1515 1656  resilver completes.
     1657 +.Pp
     1658 +A partial scrub may be requested using the
     1659 +.Fl m
     1660 +or
     1661 +.Fl M
     1662 +option.
1516 1663  .Bl -tag -width Ds
1517      -.It Fl s
1518      -Stop scrubbing.
1519      -.El
1520      -.Bl -tag -width Ds
     1664 +.It Fl m
     1665 +Scrub only metadata blocks.
     1666 +.It Fl M
     1667 +Scrub only MOS blocks.
1521 1668  .It Fl p
1522 1669  Pause scrubbing.
1523 1670  Scrub pause state and progress are periodically synced to disk.
1524 1671  If the system is restarted or pool is exported during a paused scrub,
1525 1672  even after import, scrub will remain paused until it is resumed.
1526 1673  Once resumed the scrub will pick up from the place where it was last
1527 1674  checkpointed to disk.
1528 1675  To resume a paused scrub issue
1529 1676  .Nm zpool Cm scrub
1530 1677  again.
     1678 +.It Fl s
     1679 +Stop scrubbing.
1531 1680  .El
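.Pp
For example, a metadata-only scrub of a pool named
.Em tank
.Pq the pool name is illustrative
can be started, paused, and resumed as follows:
.Bd -literal
# zpool scrub -m tank
# zpool scrub -p tank
# zpool scrub tank
.Ed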
1532 1681  .It Xo
1533 1682  .Nm
1534 1683  .Cm set
1535 1684  .Ar property Ns = Ns Ar value
1536 1685  .Ar pool
1537 1686  .Xc
1538 1687  Sets the given property on the specified pool.
1539 1688  See the
1540      -.Sx Properties
     1689 +.Sx Pool Properties
1541 1690  section for more information on what properties can be set and acceptable
1542 1691  values.
1543 1692  .It Xo
1544 1693  .Nm
1545 1694  .Cm split
1546 1695  .Op Fl n
1547 1696  .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1548 1697  .Op Fl R Ar root
1549 1698  .Ar pool newpool
1550 1699  .Xc
↓ open down ↓ 10 lines elided ↑ open up ↑
1561 1710  .Ar pool .
1562 1711  .Bl -tag -width Ds
1563 1712  .It Fl n
1564 1713  Do a dry run; do not actually perform the split.
1565 1714  Print out the expected configuration of
1566 1715  .Ar newpool .
1567 1716  .It Fl o Ar property Ns = Ns Ar value
1568 1717  Sets the specified property for
1569 1718  .Ar newpool .
1570 1719  See the
1571      -.Sx Properties
     1720 +.Sx Pool Properties
1572 1721  section for more information on the available pool properties.
1573 1722  .It Fl R Ar root
1574 1723  Set
1575 1724  .Sy altroot
1576 1725  for
1577 1726  .Ar newpool
1578 1727  to
1579 1728  .Ar root
1580 1729  and automatically import it.
1581 1730  .El
↓ open down ↓ 39 lines elided ↑ open up ↑
1621 1770  .It Fl v
1622 1771  Displays verbose data error information, printing out a complete list of all
1623 1772  data errors since the last complete pool scrub.
1624 1773  .It Fl x
1625 1774  Only display status for pools that are exhibiting errors or are otherwise
1626 1775  unavailable.
1627 1776  Warnings about pools not using the latest on-disk format will not be included.
1628 1777  .El
1629 1778  .It Xo
1630 1779  .Nm
     1780 +.Cm trim
     1781 +.Op Fl r Ar rate Ns | Ns Fl s
     1782 +.Ar pool Ns ...
     1783 +.Xc
     1784 +Initiates an on-demand TRIM operation on all of the free space of a pool.
     1785 +This informs the underlying storage devices of all of the blocks that the pool
     1786 +no longer considers allocated, thus allowing thinly provisioned storage devices
     1787 +to reclaim them.
     1788 +Please note that this collects all space marked as
     1789 +.Qq freed
     1790 +in the pool immediately and doesn't wait for the
     1791 +.Sy zfs_txgs_per_trim
     1792 +delay as automatic TRIM does.
     1793 +Hence, this can limit pool corruption recovery options during and immediately
     1794 +following the on-demand TRIM to 1-2 TXGs into the past
     1795 +.Pq instead of the standard 32-64 TXGs of automatic TRIM .
     1796 +This approach, however, allows you to recover the maximum amount of free space
     1797 +from the pool immediately without having to wait.
     1798 +.Pp
     1799 +Also note that an on-demand TRIM operation can be initiated irrespective of the
     1800 +.Sy autotrim
     1801 +pool property setting.
     1802 +It does, however, respect the
     1803 +.Sy forcetrim
     1804 +pool property.
     1805 +.Pp
     1806 +An on-demand TRIM operation does not conflict with an ongoing scrub, but it can
     1807 +put significant I/O stress on the underlying vdevs.
     1808 +A resilver, however, automatically stops an on-demand TRIM operation.
     1809 +You can manually reinitiate the TRIM operation after the resilver has completed,
     1810 +by simply reissuing the
     1811 +.Nm zpool Cm trim
     1812 +command.
     1813 +.Pp
     1814 +Adding a vdev during TRIM is supported, although the progression display in
     1815 +.Nm zpool Cm status
     1816 +might not be entirely accurate in that case
     1817 +.Pq TRIM will complete before reaching 100% .
     1818 +Removing or detaching a vdev will prematurely terminate an on-demand TRIM
     1819 +operation.
     1820 +.Bl -tag -width Ds
     1821 +.It Fl r Ar rate
     1822 +Controls the speed at which the TRIM operation progresses.
     1823 +Without this option, TRIM is executed in parallel on all top-level vdevs as
     1824 +quickly as possible.
     1825 +This option allows you to control how fast
     1826 +.Pq in bytes per second
     1827 +the TRIM is executed.
     1828 +This rate is applied on a per-vdev basis, i.e. every top-level vdev in the pool
     1829 +tries to match this speed.
     1830 +.Pp
     1831 +Due to limitations in how the algorithm is designed, TRIMs are executed in
     1832 +whole-metaslab increments.
     1833 +Each top-level vdev contains approximately 200 metaslabs, so a rate-limited TRIM
     1834 +progresses in steps, i.e. it TRIMs one metaslab completely and then waits for a
     1835 +while, so that the speed averages out over the whole device.
     1836 +.Pp
     1837 +When an on-demand TRIM operation is already in progress, this option changes its
     1838 +rate.
     1839 +To change a rate-limited TRIM to an unlimited one, simply execute the
     1840 +.Nm zpool Cm trim
     1841 +command without the
     1842 +.Fl r
     1843 +option.
     1844 +.It Fl s
     1845 +Stop trimming.
     1846 +If an on-demand TRIM operation is not ongoing at the moment, this does nothing
     1847 +and the command returns success.
     1848 +.El
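.Pp
For example, an on-demand TRIM of a pool named
.Em tank
.Pq the pool name and the rate value of 100 MB/s are illustrative
can be started at a limited rate, changed to an unlimited rate, and stopped
as follows:
.Bd -literal
# zpool trim -r 104857600 tank
# zpool trim tank
# zpool trim -s tank
.Ed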
     1849 +.It Xo
     1850 +.Nm
1631 1851  .Cm upgrade
1632 1852  .Xc
1633 1853  Displays pools which do not have all supported features enabled and pools
1634 1854  formatted using a legacy ZFS version number.
1635 1855  These pools can continue to be used, but some features may not be available.
1636 1856  Use
1637 1857  .Nm zpool Cm upgrade Fl a
1638 1858  to enable all features on all pools.
1639 1859  .It Xo
1640 1860  .Nm
↓ open down ↓ 21 lines elided ↑ open up ↑
1662 1882  .It Fl a
1663 1883  Enables all supported features on all pools.
1664 1884  .It Fl V Ar version
1665 1885  Upgrade to the specified legacy version.
1666 1886  If the
1667 1887  .Fl V
1668 1888  flag is specified, no features will be enabled on the pool.
1669 1889  This option can only be used to increase the version number up to the last
1670 1890  supported legacy version number.
1671 1891  .El
     1892 +.It Xo
     1893 +.Nm
     1894 +.Cm vdev-get
     1895 +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
     1896 +.Ar pool
     1897 +.Ar vdev-name Ns | Ns Ar vdev-guid
     1898 +.Xc
     1899 +Retrieves the given list of vdev properties
     1900 +.Po or all properties if
     1901 +.Sy all
     1902 +is used
     1903 +.Pc
     1904 +for the specified vdev of the specified storage pool.
     1905 +These properties are displayed in the same manner as the pool properties.
     1906 +The operation is supported for leaf-level vdevs only.
     1907 +See the
     1908 +.Sx Device Properties
     1909 +section for more information on the available properties.
     1910 +.It Xo
     1911 +.Nm
     1912 +.Cm vdev-set
     1913 +.Ar property Ns = Ns Ar value
     1914 +.Ar pool
     1915 +.Ar vdev-name Ns | Ns Ar vdev-guid
     1916 +.Xc
     1917 +Sets the given property on the specified device of the specified pool.
     1918 +If a top-level vdev is specified, sets the property on all its child devices.
     1919 +See the
     1920 +.Sx Device Properties
     1921 +section for more information on what properties can be set and acceptable values.
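.Pp
For example, all properties of a leaf vdev can be retrieved, and a single
property set, as follows
.Pq the pool name, device name, property name, and value are illustrative :
.Bd -literal
# zpool vdev-get all tank c0t0d0
# zpool vdev-set resilver_maxactive=3 tank c0t0d0
.Ed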
1672 1922  .El
1673 1923  .Sh EXIT STATUS
1674 1924  The following exit values are returned:
1675 1925  .Bl -tag -width Ds
1676 1926  .It Sy 0
1677 1927  Successful completion.
1678 1928  .It Sy 1
1679 1929  An error occurred.
1680 1930  .It Sy 2
1681 1931  Invalid command line options were specified.
↓ open down ↓ 122 lines elided ↑ open up ↑
1804 2054  .Pp
1805 2055  Once added, the cache devices gradually fill with content from main memory.
1806 2056  Depending on the size of your cache devices, it could take over an hour for
1807 2057  them to fill.
1808 2058  Capacity and reads can be monitored using the
1809 2059  .Cm iostat
1810 2060  option as follows:
1811 2061  .Bd -literal
1812 2062  # zpool iostat -v pool 5
1813 2063  .Ed
1814      -.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
1815      -The following commands remove the mirrored log device
1816      -.Sy mirror-2
1817      -and mirrored top-level data device
1818      -.Sy mirror-1 .
1819      -.Pp
     2064 +.It Sy Example 14 No Removing a Mirrored Log Device
     2065 +The following command removes the mirrored log device
     2066 +.Sy mirror-2 .
1820 2067  Given this configuration:
1821 2068  .Bd -literal
1822 2069    pool: tank
1823 2070   state: ONLINE
1824 2071   scrub: none requested
1825 2072  config:
1826 2073  
1827 2074           NAME        STATE     READ WRITE CKSUM
1828 2075           tank        ONLINE       0     0     0
1829 2076             mirror-0  ONLINE       0     0     0
↓ open down ↓ 7 lines elided ↑ open up ↑
1837 2084               c4t0d0  ONLINE       0     0     0
1838 2085               c4t1d0  ONLINE       0     0     0
1839 2086  .Ed
1840 2087  .Pp
1841 2088  The command to remove the mirrored log
1842 2089  .Sy mirror-2
1843 2090  is:
1844 2091  .Bd -literal
1845 2092  # zpool remove tank mirror-2
1846 2093  .Ed
1847      -.Pp
1848      -The command to remove the mirrored data
1849      -.Sy mirror-1
1850      -is:
1851      -.Bd -literal
1852      -# zpool remove tank mirror-1
1853      -.Ed
1854 2094  .It Sy Example 15 No Displaying expanded space on a device
1855 2095  The following command displays the detailed information for the pool
1856 2096  .Em data .
1857 2097  This pool is comprised of a single raidz vdev where one of its devices
1858 2098  increased its capacity by 10GB.
1859 2099  In this example, the pool will not be able to utilize this extra capacity until
1860 2100  all the devices under the raidz vdev have been expanded.
1861 2101  .Bd -literal
1862 2102  # zpool list -v data
1863 2103  NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
↓ open down ↓ 13 lines elided ↑ open up ↑
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX