NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
re 13748 added zpool export -c option
zpool export -c command exports specified pool while keeping its latest
configuration in the cache file for subsequent zpool import -c.
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c

          --- old/usr/src/man/man1m/zpool.1m.man.txt
          +++ new/usr/src/man/man1m/zpool.1m.man.txt
↓ open down ↓ 4 lines elided ↑ open up ↑
   5    5  
   6    6  SYNOPSIS
   7    7       zpool -?
   8    8       zpool add [-fn] pool vdev...
   9    9       zpool attach [-f] pool device new_device
  10   10       zpool clear pool [device]
  11   11       zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
  12   12             [-O file-system-property=value]... [-R root] pool vdev...
  13   13       zpool destroy [-f] pool
  14   14       zpool detach pool device
  15      -     zpool export [-f] pool...
       15 +     zpool export [-cfF] [-t numthreads] pool...
  16   16       zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
  17   17       zpool history [-il] [pool]...
  18   18       zpool import [-D] [-d dir]
  19   19       zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  20      -           [-o property=value]... [-R root]
       20 +           [-o property=value]... [-R root] [-t numthreads]
  21   21       zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  22      -           [-o property=value]... [-R root] pool|id [newpool]
       22 +           [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
  23   23       zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
  24   24       zpool labelclear [-f] device
  25   25       zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
  26   26             [interval [count]]
  27   27       zpool offline [-t] pool device...
  28   28       zpool online [-e] pool device...
  29   29       zpool reguid pool
  30   30       zpool reopen pool
  31      -     zpool remove [-np] pool device...
  32      -     zpool remove -s pool
       31 +     zpool remove pool device...
  33   32       zpool replace [-f] pool device [new_device]
  34      -     zpool scrub [-s | -p] pool...
       33 +     zpool scrub [-m|-M|-p|-s] pool...
  35   34       zpool set property=value pool
  36   35       zpool split [-n] [-o property=value]... [-R root] pool newpool
  37   36       zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
       37 +     zpool trim [-r rate|-s] pool...
  38   38       zpool upgrade
  39   39       zpool upgrade -v
  40   40       zpool upgrade [-V version] -a|pool...
       41 +     zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
       42 +     zpool vdev-set property=value pool vdev-name|vdev-guid
  41   43  
  42   44  DESCRIPTION
  43   45       The zpool command configures ZFS storage pools.  A storage pool is a
  44   46       collection of devices that provides physical storage and data replication
  45   47       for ZFS datasets.  All datasets within a storage pool share the same
  46   48       space.  See zfs(1M) for information on managing datasets.
  47   49  
  48   50     Virtual Devices (vdevs)
  49   51       A "virtual device" describes a single device or a collection of devices
  50   52       organized according to certain performance and fault characteristics.
↓ open down ↓ 160 lines elided ↑ open up ↑
 211  213  
 212  214       If a pool has a shared spare that is currently being used, the pool can
 213  215       not be exported since other pools may use this shared spare, which may
 214  216       lead to potential data corruption.
 215  217  
 216  218       An in-progress spare replacement can be cancelled by detaching the hot
 217  219       spare.  If the original faulted device is detached, then the hot spare
 218  220       assumes its place in the configuration, and is removed from the spare
 219  221       list of all active pools.
 220  222  
       223 +     See the sparegroup vdev property in the Device Properties section for
       224 +     information on how to control spare selection.
      225 +
 221  226       Spares cannot replace log devices.
 222  227  
 223  228     Intent Log
 224  229       The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 225  230       transactions.  For instance, databases often require their transactions
 226  231       to be on stable storage devices when returning from a system call.  NFS
 227  232       and other applications can also use fsync(3C) to ensure data stability.
 228  233       By default, the intent log is allocated from blocks within the main pool.
 229  234       However, it might be possible to get better performance using separate
 230  235       intent log devices such as NVRAM or a dedicated disk.  For example:
 231  236  
 232  237       # zpool create pool c0d0 c1d0 log c2d0
 233  238  
 234  239       Multiple log devices can also be specified, and they can be mirrored.
 235  240       See the EXAMPLES section for an example of mirroring multiple log
 236  241       devices.
 237  242  
 238  243       Log devices can be added, replaced, attached, detached, and imported and
 239      -     exported as part of the larger pool.  Mirrored devices can be removed by
 240      -     specifying the top-level mirror vdev.
      244 +     exported as part of the larger pool.  Mirrored log devices can be removed
      245 +     by specifying the top-level mirror for the log.
 241  246  
 242  247     Cache Devices
 243  248       Devices can be added to a storage pool as "cache devices".  These devices
 244  249       provide an additional layer of caching between main memory and disk.  For
 245  250       read-heavy workloads, where the working set size is much larger than what
 246  251       can be cached in main memory, using cache devices allow much more of this
 247  252       working set to be served from low latency media.  Using cache devices
 248  253       provides the greatest performance improvement for random read-workloads
 249  254       of mostly static content.
 250  255  
↓ open down ↓ 3 lines elided ↑ open up ↑
 254  259       # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 255  260  
 256  261       Cache devices cannot be mirrored or part of a raidz configuration.  If a
 257  262       read error is encountered on a cache device, that read I/O is reissued to
 258  263       the original storage pool device, which might be part of a mirrored or
 259  264       raidz configuration.
 260  265  
 261  266       The content of the cache devices is considered volatile, as is the case
 262  267       with other system caches.
 263  268  
 264      -   Properties
      269 +   Pool Properties
 265  270       Each pool has several properties associated with it.  Some properties are
 266  271       read-only statistics while others are configurable and change the
 267  272       behavior of the pool.
 268  273  
 269  274       The following are read-only properties:
 270  275  
 271  276       allocated
 272  277               Amount of storage space used within the pool.
 273  278  
 274  279       bootsize
 275  280               The size of the system boot partition.  This property can only be
 276  281               set at pool creation time and is read-only once pool is created.
 277  282               Setting this property implies using the -B option.
 278  283  
 279  284       capacity
 280  285               Percentage of pool space used.  This property can also be
 281  286               referred to by its shortened column name, cap.
 282  287  
      288 +     ddt_capped=on|off
       289 +             When ddt_capped is on, this indicates that DDT growth has been
       290 +             stopped.  New unique writes will not be deduplicated, to prevent
       291 +             further DDT growth.
      292 +
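           +             For example, assuming a pool named tank, the current state of
           +             this property might be inspected with:
           +
           +               # zpool get ddt_capped tank
           +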
 283  293       expandsize
 284  294               Amount of uninitialized space within the pool or device that can
 285  295               be used to increase the total capacity of the pool.
 286  296               Uninitialized space consists of any space on an EFI labeled vdev
 287  297               which has not been brought online (e.g, using zpool online -e).
 288  298               This space occurs when a LUN is dynamically expanded.
 289  299  
 290  300       fragmentation
 291  301               The amount of fragmentation in the pool.
 292  302  
 293  303       free    The amount of free space available in the pool.
 294  304  
 295  305       freeing
 296      -             After a file system or snapshot is destroyed, the space it was
 297      -             using is returned to the pool asynchronously.  freeing is the
 298      -             amount of space remaining to be reclaimed.  Over time freeing
      306 +             freeing is the amount of pool space remaining to be reclaimed.
      307 +             After a file, dataset or snapshot is destroyed, the space it was
      308 +             using is returned to the pool asynchronously.  Over time freeing
 299  309               will decrease while free increases.
 300  310  
 301  311       health  The current health of the pool.  Health can be one of ONLINE,
 302  312               DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
 303  313  
 304  314       guid    A unique identifier for the pool.
 305  315  
 306  316       size    Total size of the storage pool.
 307  317  
 308  318       unsupported@feature_guid
↓ open down ↓ 43 lines elided ↑ open up ↑
 352  362  
 353  363       autoreplace=on|off
 354  364               Controls automatic device replacement.  If set to off, device
 355  365               replacement must be initiated by the administrator by using the
 356  366               zpool replace command.  If set to on, any new device, found in
 357  367               the same physical location as a device that previously belonged
 358  368               to the pool, is automatically formatted and replaced.  The
 359  369               default behavior is off.  This property can also be referred to
 360  370               by its shortened column name, replace.
 361  371  
      372 +     autotrim=on|off
      373 +             When set to on, while deleting data, ZFS will inform the
      374 +             underlying vdevs of any blocks that have been marked as freed.
      375 +             This allows thinly provisioned vdevs to reclaim unused blocks.
      376 +             Currently, this feature supports sending SCSI UNMAP commands to
      377 +             SCSI and SAS disk vdevs, and using file hole punching on file-
      378 +             backed vdevs.  SATA TRIM is currently not implemented.  The
      379 +             default setting for this property is off.
      380 +
      381 +             Please note that automatic trimming of data blocks can put
      382 +             significant stress on the underlying storage devices if they do
      383 +             not handle these commands in a background, low-priority manner.
      384 +             In that case, it may be possible to achieve most of the benefits
      385 +             of trimming free space on the pool by running an on-demand
      386 +             (manual) trim every once in a while during a maintenance window
      387 +             using the zpool trim command.
      388 +
       389 +             Automatic trim does not reclaim blocks immediately after a
       390 +             delete.  Instead, it waits approximately 32-64 TXGs (or as
      391 +             defined by the zfs_txgs_per_trim tunable) to allow for more
      392 +             efficient aggregation of smaller portions of free space into
      393 +             fewer larger regions, as well as to allow for longer pool
      394 +             corruption recovery via zpool import -F.
      395 +
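           +             For example, assuming a pool named tank, automatic trimming
           +             might be enabled with:
           +
           +               # zpool set autotrim=on tank
           +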
 362  396       bootfs=pool/dataset
 363  397               Identifies the default bootable dataset for the root pool.  This
 364  398               property is expected to be set mainly by the installation and
 365  399               upgrade programs.
 366  400  
 367  401       cachefile=path|none
 368  402               Controls the location of where the pool configuration is cached.
 369  403               Discovering all pools on system startup requires a cached copy of
 370  404               the configuration data that is stored on the root file system.
 371  405               All pools in this cache are automatically imported when the
↓ open down ↓ 48 lines elided ↑ open up ↑
 420  454  
 421  455               panic     Prints out a message to the console and generates a
 422  456                         system crash dump.
 423  457  
 424  458       feature@feature_name=enabled
 425  459               The value of this property is the current state of feature_name.
 426  460               The only valid value when setting this property is enabled which
 427  461               moves feature_name to the enabled state.  See zpool-features(5)
 428  462               for details on feature states.
 429  463  
      464 +     forcetrim=on|off
      465 +             Controls whether device support is taken into consideration when
      466 +             issuing TRIM commands to the underlying vdevs of the pool.
      467 +             Normally, both automatic trim and on-demand (manual) trim only
      468 +             issue TRIM commands if a vdev indicates support for it.  Setting
      469 +             the forcetrim property to on will force ZFS to issue TRIMs even
      470 +             if it thinks a device does not support it.  The default value is
      471 +             off.
      472 +
 430  473       listsnapshots=on|off
 431  474               Controls whether information about snapshots associated with this
 432  475               pool is output when zfs list is run without the -t option.  The
 433  476               default value is off.  This property can also be referred to by
 434  477               its shortened name, listsnaps.
 435  478  
      479 +     scrubprio=0-100
      480 +             Sets the priority of scrub I/O for this pool.  This is a number
      481 +             from 0 to 100, higher numbers meaning a higher priority and thus
      482 +             more bandwidth allocated to scrub I/O, provided there is other
      483 +             I/O competing for bandwidth.  If no other I/O is competing for
      484 +             bandwidth, scrub is allowed to consume as much bandwidth as the
      485 +             pool is capable of providing.  A priority of 100 means that scrub
      486 +             I/O has equal priority to any other user-generated I/O.  The
       487 +             value 0 is special, because it turns off per-pool scrub priority
      488 +             control.  In that case, scrub I/O priority is determined by the
      489 +             zfs_vdev_scrub_min_active and zfs_vdev_scrub_max_active tunables.
      490 +             The default value is 5.
      491 +
      492 +     resilverprio=0-100
      493 +             Same as the scrubprio property, but controls the priority for
      494 +             resilver I/O.  The default value is 10.  When set to 0 the global
      495 +             tunables used for queue sizing are zfs_vdev_resilver_min_active
      496 +             and zfs_vdev_resilver_max_active.
      497 +
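           +             For example, assuming a pool named tank, scrub I/O might be
           +             given a larger share of bandwidth than the default with:
           +
           +               # zpool set scrubprio=20 tank
           +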
 436  498       version=version
 437  499               The current on-disk version of the pool.  This can be increased,
 438  500               but never decreased.  The preferred method of updating pools is
 439  501               with the zpool upgrade command, though this property can be used
 440  502               when a specific version is needed for backwards compatibility.
 441  503               Once feature flags are enabled on a pool this property will no
 442  504               longer have a value.
 443  505  
      506 +   Device Properties
      507 +     Each device can have several properties associated with it.  These
       508 +     properties override global tunables and are designed to provide more
      509 +     control over the operational parameters of this specific device, as well
      510 +     as to help manage this device.
      511 +
      512 +     The cos device property can reference a CoS property descriptor by name,
      513 +     in which case, the values of device properties are determined according
      514 +     to the following rule: the device settings override CoS settings, which
      515 +     in turn, override the global tunables.
      516 +
      517 +     The following device properties are available:
      518 +
      519 +     cos=cos-name
      520 +             This property indicates whether the device is associated with a
      521 +             CoS property descriptor object.  If so, the properties from the
      522 +             CoS descriptor that are not explicitly overridden by the device
      523 +             properties are in effect for this device.
      524 +
      525 +     l2arc_ddt=on|off
      526 +             This property is meaningful for L2ARC devices.  If this property
       527 +             is turned on, ZFS will dedicate the L2ARC device to cache
      528 +             deduplication table (DDT) buffers only.
      529 +
      530 +     prefread=1..100
      531 +             This property is meaningful for devices that belong to a mirror.
      532 +             The property determines the preference that is given to the
      533 +             device when reading from the mirror.  The ratio of the value to
      534 +             the sum of the values of this property for all the devices in the
      535 +             mirror determines the relative frequency (which also is
      536 +             considered "probability") of reading from this specific device.
      537 +
      538 +     sparegroup=group-name
      539 +             This property indicates whether the device is a part of a spare
      540 +             device group.  Devices in the pool (including spares) can be
      541 +             labeled with strings that are meaningful in the context of the
      542 +             management workflow in effect.  When a failed device is
      543 +             automatically replaced by spares, the spares whose sparegroup
       544 +             property matches the failed device's property are used first.
      545 +
      546 +     {read|aread|write|awrite|scrub|resilver}_{minactive|maxactive}=1..1000
       547 +             These properties define the minimum/maximum number of outstanding
      548 +             active requests for the queueable classes of I/O requests as
      549 +             defined by the ZFS I/O scheduler.  The classes include read,
       550 +             asynchronous read, write, asynchronous write, scrub, and resilver.
      551 +
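           +     For example, assuming a pool named tank with a data disk c2t0d0 and
           +     a spare c4t0d0, both devices might be placed in the same spare group
           +     with:
           +
           +       # zpool vdev-set sparegroup=shelf0 tank c2t0d0
           +       # zpool vdev-set sparegroup=shelf0 tank c4t0d0
           +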
 444  552     Subcommands
 445  553       All subcommands that modify state are logged persistently to the pool in
 446  554       their original form.
 447  555  
 448  556       The zpool command provides subcommands to create and destroy storage
 449  557       pools, add capacity to storage pools, and provide information about the
 450  558       storage pools.  The following subcommands are supported:
 451  559  
 452  560       zpool -?
 453  561               Displays a help message.
↓ open down ↓ 84 lines elided ↑ open up ↑
 538  646                       specified.  The mount point must be an absolute path,
 539  647                       legacy, or none.  For more information on dataset mount
 540  648                       points, see zfs(1M).
 541  649  
 542  650               -n      Displays the configuration that would be used without
 543  651                       actually creating the pool.  The actual pool creation can
 544  652                       still fail due to insufficient privileges or device
 545  653                       sharing.
 546  654  
 547  655               -o property=value
 548      -                     Sets the given pool properties.  See the Properties
      656 +                     Sets the given pool properties.  See the Pool Properties
 549  657                       section for a list of valid properties that can be set.
 550  658  
 551  659               -O file-system-property=value
 552  660                       Sets the given file system properties in the root file
 553  661                       system of the pool.  See the Properties section of
 554  662                       zfs(1M) for a list of valid properties that can be set.
 555  663  
 556  664               -R root
 557  665                       Equivalent to -o cachefile=none -o altroot=root
 558  666  
↓ open down ↓ 2 lines elided ↑ open up ↑
 561  669               This command tries to unmount any active datasets before
 562  670               destroying the pool.
 563  671  
 564  672               -f      Forces any active datasets contained within the pool to
 565  673                       be unmounted.
 566  674  
 567  675       zpool detach pool device
 568  676               Detaches device from a mirror.  The operation is refused if there
 569  677               are no other valid replicas of the data.
 570  678  
 571      -     zpool export [-f] pool...
      679 +     zpool export [-cfF] [-t numthreads] pool...
 572  680               Exports the given pools from the system.  All devices are marked
 573  681               as exported, but are still considered in use by other subsystems.
 574  682               The devices can be moved between systems (even those of different
 575  683               endianness) and imported as long as a sufficient number of
 576  684               devices are present.
 577  685  
 578  686               Before exporting the pool, all datasets within the pool are
 579  687               unmounted.  A pool can not be exported if it has a shared spare
 580  688               that is currently being used.
 581  689  
 582  690               For pools to be portable, you must give the zpool command whole
 583  691               disks, not just slices, so that ZFS can label the disks with
 584  692               portable EFI labels.  Otherwise, disk drivers on platforms of
 585  693               different endianness will not recognize the disks.
 586  694  
       695 +             -c      Keep the configuration information of the exported pool
       696 +                     in the cache file.
      697 +
 587  698               -f      Forcefully unmount all datasets, using the unmount -f
 588  699                       command.
 589  700  
 590  701                       This command will forcefully export the pool even if it
 591  702                       has a shared spare that is currently being used.  This
 592  703                       may lead to potential data corruption.
 593  704  
      705 +             -F      Do not update device labels or cache file with new
      706 +                     configuration.
      707 +
      708 +             -t numthreads
      709 +                     Unmount datasets in parallel using up to numthreads
      710 +                     threads.
      711 +
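           +             For example, assuming a pool named tank, the pool might be
           +             exported while keeping its configuration in the cache file,
           +             unmounting its datasets with up to 8 threads:
           +
           +               # zpool export -c -t 8 tank
           +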
 594  712       zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
 595  713               Retrieves the given list of properties (or all properties if all
 596  714               is used) for the specified storage pool(s).  These properties are
 597  715               displayed with the following fields:
 598  716  
 599  717                       name          Name of storage pool
 600  718                       property      Property name
 601  719                       value         Property value
 602  720                       source        Property source, either 'default' or 'local'.
 603  721  
 604      -             See the Properties section for more information on the available
 605      -             pool properties.
      722 +             See the Pool Properties section for more information on the
      723 +             available pool properties.
 606  724  
 607  725               -H      Scripted mode.  Do not display headers, and separate
 608  726                       fields by a single tab instead of arbitrary space.
 609  727  
 610  728               -o field
 611  729                       A comma-separated list of columns to display.
 612  730                       name,property,value,source is the default value.
 613  731  
 614  732               -p      Display numbers in parsable (exact) values.
 615  733  
↓ open down ↓ 79 lines elided ↑ open up ↑
 695  813  
 696  814               -N      Import the pool without mounting any file systems.
 697  815  
 698  816               -o mntopts
 699  817                       Comma-separated list of mount options to use when
 700  818                       mounting datasets within the pool.  See zfs(1M) for a
 701  819                       description of dataset properties and mount options.
 702  820  
 703  821               -o property=value
 704  822                       Sets the specified property on the imported pool.  See
 705      -                     the Properties section for more information on the
      823 +                     the Pool Properties section for more information on the
 706  824                       available pool properties.
 707  825  
 708  826               -R root
 709  827                       Sets the cachefile property to none and the altroot
 710  828                       property to root.
 711  829  
 712  830       zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
 713  831               property=value]... [-R root] pool|id [newpool]
 714  832               Imports a specific pool.  A pool can be identified by its name or
 715  833               the numeric identifier.  If newpool is specified, the pool is
↓ open down ↓ 38 lines elided ↑ open up ↑
 754  872                       details about pool recovery mode, see the -F option,
 755  873                       above.
 756  874  
 757  875               -o mntopts
 758  876                       Comma-separated list of mount options to use when
 759  877                       mounting datasets within the pool.  See zfs(1M) for a
 760  878                       description of dataset properties and mount options.
 761  879  
 762  880               -o property=value
 763  881                       Sets the specified property on the imported pool.  See
 764      -                     the Properties section for more information on the
      882 +                     the Pool Properties section for more information on the
 765  883                       available pool properties.
 766  884  
 767  885               -R root
 768  886                       Sets the cachefile property to none and the altroot
 769  887                       property to root.
 770  888  
      889 +             -t numthreads
      890 +                     Mount datasets in parallel using up to numthreads
      891 +                     threads.
      892 +
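           +             For example, assuming a pool named tank, the pool might be
           +             imported with its datasets mounted in parallel by up to 8
           +             threads:
           +
           +               # zpool import -t 8 tank
           +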
 771  893       zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
 772  894               Displays I/O statistics for the given pools.  When given an
 773  895               interval, the statistics are printed every interval seconds until
 774  896               ^C is pressed.  If no pools are specified, statistics for every
 775  897               pool in the system is shown.  If count is specified, the command
 776  898               exits after count reports are printed.
 777  899  
 778  900               -T u|d  Display a time stamp.  Specify u for a printed
 779  901                       representation of the internal representation of time.
 780  902                       See time(2).  Specify d for standard date format.  See
↓ open down ↓ 15 lines elided ↑ open up ↑
 796  918               If no pools are specified, all pools in the system are listed.
 797  919               When given an interval, the information is printed every interval
 798  920               seconds until ^C is pressed.  If count is specified, the command
 799  921               exits after count reports are printed.
 800  922  
 801  923               -H      Scripted mode.  Do not display headers, and separate
 802  924                       fields by a single tab instead of arbitrary space.
 803  925  
 804  926               -o property
 805  927                       Comma-separated list of properties to display.  See the
 806      -                     Properties section for a list of valid properties.  The
 807      -                     default list is name, size, allocated, free, expandsize,
 808      -                     fragmentation, capacity, dedupratio, health, altroot.
      928 +                     Pool Properties section for a list of valid properties.
      929 +                     The default list is name, size, allocated, free,
      930 +                     expandsize, fragmentation, capacity, dedupratio, health,
      931 +                     altroot.
 809  932  
 810  933               -p      Display numbers in parsable (exact) values.
 811  934  
 812  935               -T u|d  Display a time stamp.  Specify -u for a printed
 813  936                       representation of the internal representation of time.
 814  937                       See time(2).  Specify -d for standard date format.  See
 815  938                       date(1).
 816  939  
 817  940               -v      Verbose statistics.  Reports usage statistics for
 818  941                       individual vdevs within the pool, in addition to the
↓ open down ↓ 17 lines elided ↑ open up ↑
 836  959                       the pool.
 837  960  
 838  961       zpool reguid pool
 839  962               Generates a new unique identifier for the pool.  You must ensure
 840  963               that all devices in this pool are online and healthy before
 841  964               performing this action.
 842  965  
 843  966       zpool reopen pool
 844  967               Reopen all the vdevs associated with the pool.
 845  968  
 846      -     zpool remove [-np] pool device...
      969 +     zpool remove pool device...
 847  970               Removes the specified device from the pool.  This command
 848      -             currently only supports removing hot spares, cache, log devices
 849      -             and mirrored top-level vdevs (mirror of leaf devices); but not
 850      -             raidz.
      971 +             currently only supports removing hot spares, cache, log and
      972 +             special devices.  A mirrored log device can be removed by
      973 +             specifying the top-level mirror for the log.  Non-log devices
      974 +             that are part of a mirrored configuration can be removed using
      975 +             the zpool detach command.  Non-redundant and raidz devices cannot
      976 +             be removed from a pool.
 851  977  
 852      -             Removing a top-level vdev reduces the total amount of space in
 853      -             the storage pool.  The specified device will be evacuated by
 854      -             copying all allocated space from it to the other devices in the
 855      -             pool.  In this case, the zpool remove command initiates the
 856      -             removal and returns, while the evacuation continues in the
 857      -             background.  The removal progress can be monitored with zpool
 858      -             status. This feature must be enabled to be used, see
 859      -             zpool-features(5)
 860      -
 861      -             A mirrored top-level device (log or data) can be removed by
 862      -             specifying the top-level mirror for the same.  Non-log devices or
 863      -             data devices that are part of a mirrored configuration can be
 864      -             removed using the zpool detach command.
 865      -
 866      -             -n      Do not actually perform the removal ("no-op").  Instead,
 867      -                     print the estimated amount of memory that will be used by
 868      -                     the mapping table after the removal completes.  This is
 869      -                     nonzero only for top-level vdevs.
 870      -
 871      -             -p      Used in conjunction with the -n flag, displays numbers as
 872      -                     parsable (exact) values.
 873      -
 874      -     zpool remove -s pool
 875      -             Stops and cancels an in-progress removal of a top-level vdev.
 876      -
 877  978       zpool replace [-f] pool device [new_device]
 878  979               Replaces old_device with new_device.  This is equivalent to
 879  980               attaching new_device, waiting for it to resilver, and then
 880  981               detaching old_device.
 881  982  
 882  983               The size of new_device must be greater than or equal to the
 883  984               minimum size of all the devices in a mirror or raidz
 884  985               configuration.
 885  986  
 886  987               new_device is required if the pool is not redundant.  If
 887  988               new_device is not specified, it defaults to old_device.  This
 888  989               form of replacement is useful after an existing disk has failed
 889  990               and has been physically replaced.  In this case, the new disk may
 890  991               have the same /dev/dsk path as the old device, even though it is
 891  992               actually a different disk.  ZFS recognizes this.
 892  993  
 893  994               -f      Forces use of new_device, even if its appears to be in
 894  995                       use.  Not all devices can be overridden in this manner.
 895  996  
 896      -     zpool scrub [-s | -p] pool...
      997 +     zpool scrub [-m|-M|-p|-s] pool...
 897  998               Begins a scrub or resumes a paused scrub.  The scrub examines all
 898  999               data in the specified pools to verify that it checksums
 899 1000               correctly.  For replicated (mirror or raidz) devices, ZFS
 900 1001               automatically repairs any damage discovered during the scrub.
 901 1002               The zpool status command reports the progress of the scrub and
 902 1003               summarizes the results of the scrub upon completion.
 903 1004  
 904 1005               Scrubbing and resilvering are very similar operations.  The
 905 1006               difference is that resilvering only examines data that ZFS knows
 906 1007               to be out of date (for example, when attaching a new device to a
 907 1008               mirror or replacing an existing device), whereas scrubbing
 908 1009               examines all data to discover silent errors due to hardware
 909 1010               faults or disk failure.
 910 1011  
 911 1012               Because scrubbing and resilvering are I/O-intensive operations,
 912 1013               ZFS only allows one at a time.  If a scrub is paused, the zpool
 913 1014               scrub resumes it.  If a resilver is in progress, ZFS does not
 914 1015               allow a scrub to be started until the resilver completes.
 915 1016  
 916      -             -s      Stop scrubbing.
      1017 +             A partial scrub may be requested using the -m or -M option.
 917 1018  
     1019 +             -m      Scrub only metadata blocks.
     1020 +
     1021 +             -M      Scrub only MOS blocks.
     1022 +
 918 1023               -p      Pause scrubbing.  Scrub pause state and progress are
 919 1024                       periodically synced to disk.  If the system is restarted
 920 1025                       or pool is exported during a paused scrub, even after
 921 1026                       import, scrub will remain paused until it is resumed.
 922 1027                       Once resumed the scrub will pick up from the place where
 923 1028                       it was last checkpointed to disk.  To resume a paused
 924 1029                       scrub issue zpool scrub again.
 925 1030  
     1031 +             -s      Stop scrubbing.
     1032 +
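           +             For example, assuming a pool named tank, a metadata-only scrub
           +             might be started with:
           +
           +               # zpool scrub -m tank
           +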
 926 1033       zpool set property=value pool
 927      -             Sets the given property on the specified pool.  See the
     1034 +             Sets the given property on the specified pool.  See the Pool
 928 1035               Properties section for more information on what properties can be
 929 1036               set and acceptable values.
 930 1037  
 931 1038       zpool split [-n] [-o property=value]... [-R root] pool newpool
 932 1039               Splits devices off pool creating newpool.  All vdevs in pool must
 933 1040               be mirrors.  At the time of the split, newpool will be a replica
 934 1041               of pool.
 935 1042  
 936 1043               -n      Do dry run, do not actually perform the split.  Print out
 937 1044                       the expected configuration of newpool.
 938 1045  
 939 1046               -o property=value
 940      -                     Sets the specified property for newpool.  See the
     1047 +                     Sets the specified property for newpool.  See the Pool
 941 1048                       Properties section for more information on the available
 942 1049                       pool properties.
 943 1050  
 944 1051               -R root
 945 1052                       Set altroot for newpool to root and automatically import
 946 1053                       it.
 947 1054  
 948 1055       zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
 949 1056               Displays the detailed health status for the given pools.  If no
 950 1057               pool is specified, then the status of each pool in the system is
↓ open down ↓ 16 lines elided ↑ open up ↑
 967 1074                       date(1).
 968 1075  
 969 1076               -v      Displays verbose data error information, printing out a
 970 1077                       complete list of all data errors since the last complete
 971 1078                       pool scrub.
 972 1079  
 973 1080               -x      Only display status for pools that are exhibiting errors
 974 1081                       or are otherwise unavailable.  Warnings about pools not
 975 1082                       using the latest on-disk format will not be included.
 976 1083  
     1084 +     zpool trim [-r rate|-s] pool...
      1085 +             Initiates an on-demand TRIM operation on all of the free space of
     1086 +             a pool.  This informs the underlying storage devices of all of
     1087 +             the blocks that the pool no longer considers allocated, thus
     1088 +             allowing thinly provisioned storage devices to reclaim them.
     1089 +             Please note that this collects all space marked as "freed" in the
      1090 +             pool immediately and doesn't wait for the zfs_txgs_per_trim delay
      1091 +             as automatic TRIM does.  Hence, this can limit pool corruption
     1092 +             recovery options during and immediately following the on-demand
     1093 +             TRIM to 1-2 TXGs into the past (instead of the standard 32-64 of
     1094 +             automatic TRIM).  This approach, however, allows you to recover
     1095 +             the maximum amount of free space from the pool immediately
     1096 +             without having to wait.
     1097 +
     1098 +             Also note that an on-demand TRIM operation can be initiated
     1099 +             irrespective of the autotrim pool property setting.  It does,
     1100 +             however, respect the forcetrim pool property.
     1101 +
     1102 +             An on-demand TRIM operation does not conflict with an ongoing
     1103 +             scrub, but it can put significant I/O stress on the underlying
     1104 +             vdevs.  A resilver, however, automatically stops an on-demand
     1105 +             TRIM operation.  You can manually reinitiate the TRIM operation
     1106 +             after the resilver has started, by simply reissuing the zpool
     1107 +             trim command.
     1108 +
     1109 +             Adding a vdev during TRIM is supported, although the progression
     1110 +             display in zpool status might not be entirely accurate in that
     1111 +             case (TRIM will complete before reaching 100%).  Removing or
     1112 +             detaching a vdev will prematurely terminate an on-demand TRIM
     1113 +             operation.
     1114 +
     1115 +             -r rate
     1116 +                     Controls the speed at which the TRIM operation
     1117 +                     progresses.  Without this option, TRIM is executed in
     1118 +                     parallel on all top-level vdevs as quickly as possible.
     1119 +                     This option allows you to control how fast (in bytes per
     1120 +                     second) the TRIM is executed.  This rate is applied on a
     1121 +                     per-vdev basis, i.e. every top-level vdev in the pool
     1122 +                     tries to match this speed.
     1123 +
     1124 +                     Due to limitations in how the algorithm is designed,
     1125 +                     TRIMs are executed in whole-metaslab increments.  Each
     1126 +                     top-level vdev contains approximately 200 metaslabs, so a
     1127 +                     rate-limited TRIM progresses in steps, i.e. it TRIMs one
     1128 +                     metaslab completely and then waits for a while so that
     1129 +                     over the whole device, the speed averages out.
     1130 +
     1131 +                     When an on-demand TRIM operation is already in progress,
     1132 +                     this option changes its rate.  To change a rate-limited
     1133 +                     TRIM to an unlimited one, simply execute the zpool trim
     1134 +                     command without the -r option.
     1135 +
     1136 +             -s      Stop trimming.  If an on-demand TRIM operation is not
     1137 +                     ongoing at the moment, this does nothing and the command
     1138 +                     returns success.
     1139 +
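           +             For example, assuming a pool named tank and a rate given in
           +             bytes per second, an on-demand TRIM limited to roughly 100 MB/s
           +             per top-level vdev might be started, and later stopped, with:
           +
           +               # zpool trim -r 100000000 tank
           +               # zpool trim -s tank
           +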
 977 1140       zpool upgrade
 978 1141               Displays pools which do not have all supported features enabled
 979 1142               and pools formatted using a legacy ZFS version number.  These
 980 1143               pools can continue to be used, but some features may not be
 981 1144               available.  Use zpool upgrade -a to enable all features on all
 982 1145               pools.
 983 1146  
 984 1147       zpool upgrade -v
 985 1148               Displays legacy ZFS versions supported by the current software.
 986 1149               See zpool-features(5) for a description of feature flags features
↓ open down ↓ 7 lines elided ↑ open up ↑
 994 1157               support all features enabled on the pool.
 995 1158  
 996 1159               -a      Enables all supported features on all pools.
 997 1160  
 998 1161               -V version
 999 1162                       Upgrade to the specified legacy version.  If the -V flag
1000 1163                       is specified, no features will be enabled on the pool.
1001 1164                       This option can only be used to increase the version
1002 1165                       number up to the last supported legacy version number.
1003 1166  
     1167 +     zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
     1168 +             Retrieves the given list of vdev properties (or all properties if
     1169 +             all is used) for the specified vdev of the specified storage
     1170 +             pool.  These properties are displayed in the same manner as the
     1171 +             pool properties.  The operation is supported for leaf-level vdevs
     1172 +             only.  See the Device Properties section for more information on
     1173 +             the available properties.
     1174 +
     1175 +     zpool vdev-set property=value pool vdev-name|vdev-guid
     1176 +             Sets the given property on the specified device of the specified
      1177 +             pool.  If a top-level vdev is specified, sets the property on all
     1178 +             the child devices.  See the Device Properties section for more
      1179 +             information on what properties can be set and acceptable values.
     1180 +
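           +             For example, assuming a pool named tank with a leaf vdev
           +             c2t0d0, all device properties might be listed with:
           +
           +               # zpool vdev-get all tank c2t0d0
           +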
1004 1181  EXIT STATUS
1005 1182       The following exit values are returned:
1006 1183  
1007 1184       0       Successful completion.
1008 1185  
1009 1186       1       An error occurred.
1010 1187  
1011 1188       2       Invalid command line options were specified.
1012 1189  
1013 1190  EXAMPLES
↓ open down ↓ 111 lines elided ↑ open up ↑
1125 1302  
1126 1303               # zpool add pool cache c2d0 c3d0
1127 1304  
1128 1305               Once added, the cache devices gradually fill with content from
1129 1306               main memory.  Depending on the size of your cache devices, it
1130 1307               could take over an hour for them to fill.  Capacity and reads can
1131 1308               be monitored using the iostat option as follows:
1132 1309  
1133 1310               # zpool iostat -v pool 5
1134 1311  
1135      -     Example 14 Removing a Mirrored top-level (Log or Data) Device
1136      -             The following commands remove the mirrored log device mirror-2
1137      -             and mirrored top-level data device mirror-1.
1138      -
     1312 +     Example 14 Removing a Mirrored Log Device
     1313 +             The following command removes the mirrored log device mirror-2.
1139 1314               Given this configuration:
1140 1315  
1141 1316                 pool: tank
1142 1317                state: ONLINE
1143 1318                scrub: none requested
1144 1319               config:
1145 1320  
1146 1321                        NAME        STATE     READ WRITE CKSUM
1147 1322                        tank        ONLINE       0     0     0
1148 1323                          mirror-0  ONLINE       0     0     0
↓ open down ↓ 4 lines elided ↑ open up ↑
1153 1328                            c6t3d0  ONLINE       0     0     0
1154 1329                        logs
1155 1330                          mirror-2  ONLINE       0     0     0
1156 1331                            c4t0d0  ONLINE       0     0     0
1157 1332                            c4t1d0  ONLINE       0     0     0
1158 1333  
1159 1334               The command to remove the mirrored log mirror-2 is:
1160 1335  
1161 1336               # zpool remove tank mirror-2
1162 1337  
1163      -             The command to remove the mirrored data mirror-1 is:
1164      -
1165      -             # zpool remove tank mirror-1
1166      -
1167 1338       Example 15 Displaying expanded space on a device
1168 1339               The following command displays the detailed information for the
1169 1340               pool data.  This pool is comprised of a single raidz vdev where
1170 1341               one of its devices increased its capacity by 10GB.  In this
1171 1342               example, the pool will not be able to utilize this extra capacity
1172 1343               until all the devices under the raidz vdev have been expanded.
1173 1344  
1174 1345               # zpool list -v data
1175 1346               NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1176 1347               data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
↓ open down ↓ 12 lines elided ↑ open up ↑