NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
re 13748 added zpool export -c option
zpool export -c command exports the specified pool while keeping its latest
configuration in the cache file for subsequent zpool import -c.
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c
   1 ZPOOL(1M)                    Maintenance Commands                    ZPOOL(1M)
   2 
   3 NAME
   4      zpool - configure ZFS storage pools
   5 
   6 SYNOPSIS
   7      zpool -?
   8      zpool add [-fn] pool vdev...
   9      zpool attach [-f] pool device new_device
  10      zpool clear pool [device]
  11      zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
  12            [-O file-system-property=value]... [-R root] pool vdev...
  13      zpool destroy [-f] pool
  14      zpool detach pool device
  15      zpool export [-f] pool...
  16      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
  17      zpool history [-il] [pool]...
  18      zpool import [-D] [-d dir]
  19      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  20            [-o property=value]... [-R root]
  21      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  22            [-o property=value]... [-R root] pool|id [newpool]
  23      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
  24      zpool labelclear [-f] device
  25      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
  26            [interval [count]]
  27      zpool offline [-t] pool device...
  28      zpool online [-e] pool device...
  29      zpool reguid pool
  30      zpool reopen pool
  31      zpool remove [-np] pool device...
  32      zpool remove -s pool
  33      zpool replace [-f] pool device [new_device]
  34      zpool scrub [-s | -p] pool...
  35      zpool set property=value pool
  36      zpool split [-n] [-o property=value]... [-R root] pool newpool
  37      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]

  38      zpool upgrade
  39      zpool upgrade -v
  40      zpool upgrade [-V version] -a|pool...


  41 
  42 DESCRIPTION
  43      The zpool command configures ZFS storage pools.  A storage pool is a
  44      collection of devices that provides physical storage and data replication
  45      for ZFS datasets.  All datasets within a storage pool share the same
  46      space.  See zfs(1M) for information on managing datasets.
  47 
  48    Virtual Devices (vdevs)
  49      A "virtual device" describes a single device or a collection of devices
  50      organized according to certain performance and fault characteristics.
  51      The following virtual devices are supported:
  52 
  53      disk    A block device, typically located under /dev/dsk.  ZFS can use
  54              individual slices or partitions, though the recommended mode of
  55              operation is to use whole disks.  A disk can be specified by a
  56              full path, or it can be a shorthand name (the relative portion of
  57              the path under /dev/dsk).  A whole disk can be specified by
  58              omitting the slice or partition designation.  For example, c0t0d0
  59              is equivalent to /dev/dsk/c0t0d0s2.  When given a whole disk, ZFS
  60              automatically labels the disk, if necessary.


 201      example,
 202 
 203      # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
 204 
 205      Spares can be shared across multiple pools, and can be added with the
 206      zpool add command and removed with the zpool remove command.  Once a
 207      spare replacement is initiated, a new spare vdev is created within the
 208      configuration that will remain there until the original device is
 209      replaced.  At this point, the hot spare becomes available again if
 210      another device fails.
 211 
 212      If a pool has a shared spare that is currently being used, the pool can
 213      not be exported since other pools may use this shared spare, which may
 214      lead to potential data corruption.
 215 
 216      An in-progress spare replacement can be cancelled by detaching the hot
 217      spare.  If the original faulted device is detached, then the hot spare
 218      assumes its place in the configuration, and is removed from the spare
 219      list of all active pools.
 220 
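     For example (pool and device names are illustrative), if the hot spare
     c2d0 is currently replacing a failed device in the pool tank, the
     in-progress spare replacement can be cancelled with:

     # zpool detach tank c2d0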



 221      Spares cannot replace log devices.
 222 
 223    Intent Log
 224      The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 225      transactions.  For instance, databases often require their transactions
 226      to be on stable storage devices when returning from a system call.  NFS
 227      and other applications can also use fsync(3C) to ensure data stability.
 228      By default, the intent log is allocated from blocks within the main pool.
 229      However, it might be possible to get better performance using separate
 230      intent log devices such as NVRAM or a dedicated disk.  For example:
 231 
 232      # zpool create pool c0d0 c1d0 log c2d0
 233 
 234      Multiple log devices can also be specified, and they can be mirrored.
 235      See the EXAMPLES section for an example of mirroring multiple log
 236      devices.
 237 
 238      Log devices can be added, replaced, attached, detached, and imported and
 239      exported as part of the larger pool.  Mirrored devices can be removed by
 240      specifying the top-level mirror vdev.
 241 
 242    Cache Devices
 243      Devices can be added to a storage pool as "cache devices".  These devices
 244      provide an additional layer of caching between main memory and disk.  For
 245      read-heavy workloads, where the working set size is much larger than what
 246      can be cached in main memory, using cache devices allows much more of this
 247      working set to be served from low latency media.  Using cache devices
 248      provides the greatest performance improvement for random read workloads
 249      of mostly static content.
 250 
 251      To create a pool with cache devices, specify a cache vdev with any number
 252      of devices.  For example:
 253 
 254      # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 255 
 256      Cache devices cannot be mirrored or part of a raidz configuration.  If a
 257      read error is encountered on a cache device, that read I/O is reissued to
 258      the original storage pool device, which might be part of a mirrored or
 259      raidz configuration.
 260 
 261      The content of the cache devices is considered volatile, as is the case
 262      with other system caches.
 263 
 264    Properties
 265      Each pool has several properties associated with it.  Some properties are
 266      read-only statistics while others are configurable and change the
 267      behavior of the pool.
 268 
 269      The following are read-only properties:
 270 
 271      allocated
 272              Amount of storage space used within the pool.
 273 
 274      bootsize
 275              The size of the system boot partition.  This property can only be
 276              set at pool creation time and is read-only once the pool is created.
 277              Setting this property implies using the -B option.
 278 
 279      capacity
 280              Percentage of pool space used.  This property can also be
 281              referred to by its shortened column name, cap.
 282 





 283      expandsize
 284              Amount of uninitialized space within the pool or device that can
 285              be used to increase the total capacity of the pool.
 286              Uninitialized space consists of any space on an EFI labeled vdev
 287              which has not been brought online (e.g., using zpool online -e).
 288              This space occurs when a LUN is dynamically expanded.
 289 
 290      fragmentation
 291              The amount of fragmentation in the pool.
 292 
 293      free    The amount of free space available in the pool.
 294 
 295      freeing
 296              After a file system or snapshot is destroyed, the space it was
 297              using is returned to the pool asynchronously.  freeing is the
 298              amount of space remaining to be reclaimed.  Over time freeing
 299              will decrease while free increases.
 300 
 301      health  The current health of the pool.  Health can be one of ONLINE,
 302              DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
 303 
 304      guid    A unique identifier for the pool.
 305 
 306      size    Total size of the storage pool.
 307 
 308      unsupported@feature_guid
 309              Information about unsupported features that are enabled on the
 310              pool.  See zpool-features(5) for details.
 311 
 312      The space usage properties report actual physical space available to the
 313      storage pool.  The physical space can be different from the total amount
 314      of space that any contained datasets can actually use.  The amount of
 315      space used in a raidz configuration depends on the characteristics of the
 316      data being written.  In addition, ZFS reserves some space for internal
 317      accounting that the zfs(1M) command takes into account, but the zpool
 318      command does not.  For non-full pools of a reasonable size, these effects


 342      later changed with the zpool set command:
 343 
 344      autoexpand=on|off
 345              Controls automatic pool expansion when the underlying LUN is
 346              grown.  If set to on, the pool will be resized according to the
 347              size of the expanded device.  If the device is part of a mirror
 348              or raidz then all devices within that mirror/raidz group must be
 349              expanded before the new space is made available to the pool.  The
 350              default behavior is off.  This property can also be referred to
 351              by its shortened column name, expand.
 352 
 353      autoreplace=on|off
 354              Controls automatic device replacement.  If set to off, device
 355              replacement must be initiated by the administrator by using the
 356              zpool replace command.  If set to on, any new device, found in
 357              the same physical location as a device that previously belonged
 358              to the pool, is automatically formatted and replaced.  The
 359              default behavior is off.  This property can also be referred to
 360              by its shortened column name, replace.
 361 
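             For example (pool name is illustrative), both behaviors can be
             enabled on an existing pool with:

             # zpool set autoexpand=on tank
             # zpool set autoreplace=on tank
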
 362      bootfs=pool/dataset
 363              Identifies the default bootable dataset for the root pool.  This
 364              property is expected to be set mainly by the installation and
 365              upgrade programs.
 366 
 367      cachefile=path|none
 368              Controls the location of where the pool configuration is cached.
 369              Discovering all pools on system startup requires a cached copy of
 370              the configuration data that is stored on the root file system.
 371              All pools in this cache are automatically imported when the
 372              system boots.  Some environments, such as install and clustering,
 373              need to cache this information in a different location so that
 374              pools are not automatically imported.  Setting this property
 375              caches the pool configuration in a different location that can
 376              later be imported with zpool import -c.  Setting it to the
 377              special value none creates a temporary pool that is never cached,
 378              and the special value "" (empty string) uses the default
 379              location.
 380 
 381              Multiple pools can share the same cache file.  Because the kernel


 410              determined as follows:
 411 
 412              wait      Blocks all I/O access until the device connectivity is
 413                        recovered and the errors are cleared.  This is the
 414                        default behavior.
 415 
 416              continue  Returns EIO to any new write I/O requests but allows
 417                        reads to any of the remaining healthy devices.  Any
 418                        write requests that have yet to be committed to disk
 419                        would be blocked.
 420 
 421              panic     Prints out a message to the console and generates a
 422                        system crash dump.
 423 
 424      feature@feature_name=enabled
 425              The value of this property is the current state of feature_name.
 426              The only valid value when setting this property is enabled which
 427              moves feature_name to the enabled state.  See zpool-features(5)
 428              for details on feature states.
 429 
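             For example (pool name is illustrative), a single feature such as
             async_destroy can be moved to the enabled state with:

             # zpool set feature@async_destroy=enabled tank
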
 430      listsnapshots=on|off
 431              Controls whether information about snapshots associated with this
 432              pool is output when zfs list is run without the -t option.  The
 433              default value is off.  This property can also be referred to by
 434              its shortened name, listsnaps.
 435 
 436      version=version
 437              The current on-disk version of the pool.  This can be increased,
 438              but never decreased.  The preferred method of updating pools is
 439              with the zpool upgrade command, though this property can be used
 440              when a specific version is needed for backwards compatibility.
 441              Once feature flags are enabled on a pool this property will no
 442              longer have a value.
 443 
 444    Subcommands
 445      All subcommands that modify state are logged persistently to the pool in
 446      their original form.
 447 
 448      The zpool command provides subcommands to create and destroy storage
 449      pools, add capacity to storage pools, and provide information about the
 450      storage pools.  The following subcommands are supported:
 451 
 452      zpool -?
 453              Displays a help message.
 454 
 455      zpool add [-fn] pool vdev...
 456              Adds the specified virtual devices to the given pool.  The vdev
 457              specification is described in the Virtual Devices section.  The
 458              behavior of the -f option, and the device checks performed are
 459              described in the zpool create subcommand.
 460 
 461              -f      Forces use of vdevs, even if they appear in use or
 462                      specify a conflicting replication level.  Not all devices
 463                      can be overridden in this manner.
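
             For example (pool and device names are illustrative), an
             additional mirrored vdev can be added to an existing pool with:

             # zpool add tank mirror c4t0d0 c5t0d0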


 528                      properties to enabled with the -o option.  See
 529                      zpool-features(5) for details about feature properties.
 530 
 531              -f      Forces use of vdevs, even if they appear in use or
 532                      specify a conflicting replication level.  Not all devices
 533                      can be overridden in this manner.
 534 
 535              -m mountpoint
 536                      Sets the mount point for the root dataset.  The default
 537                      mount point is /pool or altroot/pool if altroot is
 538                      specified.  The mount point must be an absolute path,
 539                      legacy, or none.  For more information on dataset mount
 540                      points, see zfs(1M).
 541 
 542              -n      Displays the configuration that would be used without
 543                      actually creating the pool.  The actual pool creation can
 544                      still fail due to insufficient privileges or device
 545                      sharing.
 546 
 547              -o property=value
 548                      Sets the given pool properties.  See the Properties
 549                      section for a list of valid properties that can be set.
 550 
 551              -O file-system-property=value
 552                      Sets the given file system properties in the root file
 553                      system of the pool.  See the Properties section of
 554                      zfs(1M) for a list of valid properties that can be set.
 555 
 556              -R root
 557                      Equivalent to -o cachefile=none -o altroot=root
 558 
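             For example (pool, mount point, and device names are
             illustrative), a mirrored pool with a custom mount point can be
             created with:

             # zpool create -m /export/tank tank mirror c0t0d0 c1t0d0
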
 559      zpool destroy [-f] pool
 560              Destroys the given pool, freeing up any devices for other use.
 561              This command tries to unmount any active datasets before
 562              destroying the pool.
 563 
 564              -f      Forces any active datasets contained within the pool to
 565                      be unmounted.
 566 
 567      zpool detach pool device
 568              Detaches device from a mirror.  The operation is refused if there
 569              are no other valid replicas of the data.
 570 
 571      zpool export [-f] pool...
 572              Exports the given pools from the system.  All devices are marked
 573              as exported, but are still considered in use by other subsystems.
 574              The devices can be moved between systems (even those of different
 575              endianness) and imported as long as a sufficient number of
 576              devices are present.
 577 
 578              Before exporting the pool, all datasets within the pool are
 579              unmounted.  A pool can not be exported if it has a shared spare
 580              that is currently being used.
 581 
 582              For pools to be portable, you must give the zpool command whole
 583              disks, not just slices, so that ZFS can label the disks with
 584              portable EFI labels.  Otherwise, disk drivers on platforms of
 585              different endianness will not recognize the disks.
 586 



 587              -f      Forcefully unmount all datasets, using the unmount -f
 588                      command.
 589 
 590                      This command will forcefully export the pool even if it
 591                      has a shared spare that is currently being used.  This
 592                      may lead to potential data corruption.
 593 
 594      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
 595              Retrieves the given list of properties (or all properties if all
 596              is used) for the specified storage pool(s).  These properties are
 597              displayed with the following fields:
 598 
 599                      name          Name of storage pool
 600                      property      Property name
 601                      value         Property value
 602                      source        Property source, either 'default' or 'local'.
 603 
 604              See the Properties section for more information on the available
 605              pool properties.
 606 
 607              -H      Scripted mode.  Do not display headers, and separate
 608                      fields by a single tab instead of arbitrary space.
 609 
 610              -o field
 611                      A comma-separated list of columns to display.
 612                      name,property,value,source is the default value.
 613 
 614              -p      Display numbers in parsable (exact) values.
 615 
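             For example (pool name is illustrative), the capacity and health
             of a pool can be retrieved in scripted form with:

             # zpool get -H -o name,value capacity,health tank
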
 616      zpool history [-il] [pool]...
 617              Displays the command history of the specified pool(s) or all
 618              pools if no pool is specified.
 619 
 620              -i      Displays internally logged ZFS events in addition to user
 621                      initiated events.
 622 
 623              -l      Displays log records in long format, which in addition to
 624                      standard format includes, the user name, the hostname,
 625                      and the zone in which the operation was performed.
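
             For example (pool name is illustrative), the full long-format
             history, including internally logged events, can be displayed
             with:

             # zpool history -il tank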


 685 
 686              -m      Allows a pool to import when there is a missing log
 687                      device.  Recent transactions can be lost because the log
 688                      device will be discarded.
 689 
 690              -n      Used with the -F recovery option.  Determines whether a
 691                      non-importable pool can be made importable again, but
 692                      does not actually perform the pool recovery.  For more
 693                      details about pool recovery mode, see the -F option,
 694                      above.
 695 
 696              -N      Import the pool without mounting any file systems.
 697 
 698              -o mntopts
 699                      Comma-separated list of mount options to use when
 700                      mounting datasets within the pool.  See zfs(1M) for a
 701                      description of dataset properties and mount options.
 702 
 703              -o property=value
 704                      Sets the specified property on the imported pool.  See
 705                      the Properties section for more information on the
 706                      available pool properties.
 707 
 708              -R root
 709                      Sets the cachefile property to none and the altroot
 710                      property to root.
 711 
 712      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
 713              property=value]... [-R root] pool|id [newpool]
 714              Imports a specific pool.  A pool can be identified by its name or
 715              the numeric identifier.  If newpool is specified, the pool is
 716              imported using the name newpool.  Otherwise, it is imported with
 717              the same name as its exported name.
 718 
 719              If a device is removed from a system without running zpool export
 720              first, the device appears as potentially active.  It cannot be
 721              determined if this was a failed export, or whether the device is
 722              really in use from another host.  To import a pool in this state,
 723              the -f option is required.
 724 
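             For example (pool names are illustrative), an exported pool can
             be imported under a new name with:

             # zpool import tank newtank
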
 725              -c cachefile


 744                      This option is ignored if the pool is importable or
 745                      already imported.
 746 
 747              -m      Allows a pool to import when there is a missing log
 748                      device.  Recent transactions can be lost because the log
 749                      device will be discarded.
 750 
 751              -n      Used with the -F recovery option.  Determines whether a
 752                      non-importable pool can be made importable again, but
 753                      does not actually perform the pool recovery.  For more
 754                      details about pool recovery mode, see the -F option,
 755                      above.
 756 
 757              -o mntopts
 758                      Comma-separated list of mount options to use when
 759                      mounting datasets within the pool.  See zfs(1M) for a
 760                      description of dataset properties and mount options.
 761 
 762              -o property=value
 763                      Sets the specified property on the imported pool.  See
 764                      the Properties section for more information on the
 765                      available pool properties.
 766 
 767              -R root
 768                      Sets the cachefile property to none and the altroot
 769                      property to root.
 770 




 771      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
 772              Displays I/O statistics for the given pools.  When given an
 773              interval, the statistics are printed every interval seconds until
 774              ^C is pressed.  If no pools are specified, statistics for every
 775              pool in the system are shown.  If count is specified, the command
 776              exits after count reports are printed.
 777 
 778              -T u|d  Display a time stamp.  Specify u for a printed
 779                      representation of the internal representation of time.
 780                      See time(2).  Specify d for standard date format.  See
 781                      date(1).
 782 
 783              -v      Verbose statistics.  Reports usage statistics for
 784                      individual vdevs within the pool, in addition to the
 785                      pool-wide statistics.
 786 
 787      zpool labelclear [-f] device
 788              Removes ZFS label information from the specified device.  The
 789              device must not be part of an active pool configuration.
 790 
 791              -f      Treat exported or foreign devices as inactive.
 792 
 793      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
 794              [interval [count]]
 795              Lists the given pools along with a health status and space usage.
 796              If no pools are specified, all pools in the system are listed.
 797              When given an interval, the information is printed every interval
 798              seconds until ^C is pressed.  If count is specified, the command
 799              exits after count reports are printed.
 800 
 801              -H      Scripted mode.  Do not display headers, and separate
 802                      fields by a single tab instead of arbitrary space.
 803 
 804              -o property
 805                      Comma-separated list of properties to display.  See the
 806                      Properties section for a list of valid properties.  The
 807                      default list is name, size, allocated, free, expandsize,
 808                      fragmentation, capacity, dedupratio, health, altroot.

 809 
 810              -p      Display numbers in parsable (exact) values.
 811 
 812              -T u|d  Display a time stamp.  Specify u for a printed
 813                      representation of the internal representation of time.
 814                      See time(2).  Specify d for standard date format.  See
 815                      date(1).
 816 
 817              -v      Verbose statistics.  Reports usage statistics for
 818                      individual vdevs within the pool, in addition to the
 819                      pool-wide statistics.
 820 
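             For example (pool name is illustrative), selected properties can
             be listed in scripted, parsable form with:

             # zpool list -Hp -o name,size,allocated,free tank
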
 821      zpool offline [-t] pool device...
 822              Takes the specified physical device offline.  While the device is
 823              offline, no attempt is made to read or write to the device.  This
 824              command is not applicable to spares.
 825 
 826              -t      Temporary.  Upon reboot, the specified physical device
 827                      reverts to its previous state.
 828 
 829      zpool online [-e] pool device...
 830              Brings the specified physical device online.  This command is not
 831              applicable to spares.
 832 
 833              -e      Expand the device to use all available space.  If the
 834                      device is part of a mirror or raidz then all devices must
 835                      be expanded before the new space will become available to
 836                      the pool.
 837 
 838      zpool reguid pool
 839              Generates a new unique identifier for the pool.  You must ensure
 840              that all devices in this pool are online and healthy before
 841              performing this action.
 842 
 843      zpool reopen pool
 844              Reopen all the vdevs associated with the pool.
 845 
 846      zpool remove [-np] pool device...
 847              Removes the specified device from the pool.  This command
 848              currently only supports removing hot spares, cache, log devices
 849              and mirrored top-level vdevs (mirror of leaf devices); but not
 850              raidz.



 851 
 852              Removing a top-level vdev reduces the total amount of space in
 853              the storage pool.  The specified device will be evacuated by
 854              copying all allocated space from it to the other devices in the
 855              pool.  In this case, the zpool remove command initiates the
 856              removal and returns, while the evacuation continues in the
 857              background.  The removal progress can be monitored with zpool
 858              status.  This feature must be enabled to be used; see
 859              zpool-features(5) for details.
 860 
 861              A mirrored top-level device (log or data) can be removed by
 862              specifying the top-level mirror for the same.  Non-log devices or
 863              data devices that are part of a mirrored configuration can be
 864              removed using the zpool detach command.
 865 
 866              -n      Do not actually perform the removal ("no-op").  Instead,
 867                      print the estimated amount of memory that will be used by
 868                      the mapping table after the removal completes.  This is
 869                      nonzero only for top-level vdevs.
 870 
 871              -p      Used in conjunction with the -n flag, displays numbers as
 872                      parsable (exact) values.
 873 
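             For example (pool and vdev names are illustrative), the memory
             cost of removing a mirrored top-level vdev can be previewed with:

             # zpool remove -np tank mirror-1
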
 874      zpool remove -s pool
 875              Stops and cancels an in-progress removal of a top-level vdev.
 876 
 877      zpool replace [-f] pool device [new_device]
 878              Replaces old_device with new_device.  This is equivalent to
 879              attaching new_device, waiting for it to resilver, and then
 880              detaching old_device.
 881 
 882              The size of new_device must be greater than or equal to the
 883              minimum size of all the devices in a mirror or raidz
 884              configuration.
 885 
 886              new_device is required if the pool is not redundant.  If
 887              new_device is not specified, it defaults to old_device.  This
 888              form of replacement is useful after an existing disk has failed
 889              and has been physically replaced.  In this case, the new disk may
 890              have the same /dev/dsk path as the old device, even though it is
 891              actually a different disk.  ZFS recognizes this.
 892 
 893              -f      Forces use of new_device, even if it appears to be in
 894                      use.  Not all devices can be overridden in this manner.
 895 
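             For example (pool and device names are illustrative), a failed
             disk that has been physically replaced in the same slot can be
             placed back in service with:

             # zpool replace tank c0t3d0
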
 896      zpool scrub [-s | -p] pool...
 897              Begins a scrub or resumes a paused scrub.  The scrub examines all
 898              data in the specified pools to verify that it checksums
 899              correctly.  For replicated (mirror or raidz) devices, ZFS
 900              automatically repairs any damage discovered during the scrub.
 901              The zpool status command reports the progress of the scrub and
 902              summarizes the results of the scrub upon completion.
 903 
 904              Scrubbing and resilvering are very similar operations.  The
 905              difference is that resilvering only examines data that ZFS knows
 906              to be out of date (for example, when attaching a new device to a
 907              mirror or replacing an existing device), whereas scrubbing
 908              examines all data to discover silent errors due to hardware
 909              faults or disk failure.
 910 
 911              Because scrubbing and resilvering are I/O-intensive operations,
 912              ZFS only allows one at a time.  If a scrub is paused, the zpool
 913              scrub resumes it.  If a resilver is in progress, ZFS does not
 914              allow a scrub to be started until the resilver completes.
 915 
 916              -s      Stop scrubbing.
 917 




 918              -p      Pause scrubbing.  Scrub pause state and progress are
 919                      periodically synced to disk.  If the system is restarted
 920                      or pool is exported during a paused scrub, even after
 921                      import, scrub will remain paused until it is resumed.
 922                      Once resumed, the scrub will pick up from the place where
 923                      it was last checkpointed to disk.  To resume a paused
 924                      scrub, issue zpool scrub again.
 925 
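             For example (pool name is illustrative), a scrub can be paused
             and later resumed with:

             # zpool scrub -p tank
             # zpool scrub tank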


 926      zpool set property=value pool
 927              Sets the given property on the specified pool.  See the
 928              Properties section for more information on what properties can be
 929              set and acceptable values.
 930 
 931      zpool split [-n] [-o property=value]... [-R root] pool newpool
 932              Splits devices off pool creating newpool.  All vdevs in pool must
 933              be mirrors.  At the time of the split, newpool will be a replica
 934              of pool.
 935 
 936              -n      Do dry run, do not actually perform the split.  Print out
 937                      the expected configuration of newpool.
 938 
 939              -o property=value
 940                      Sets the specified property for newpool.  See the
 941                      Properties section for more information on the available
 942                      pool properties.
 943 
 944              -R root
 945                      Set altroot for newpool to root and automatically import
 946                      it.
 947 
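             For example (pool names are illustrative), the second half of
             each mirror can be split off into a new, independent pool with:

             # zpool split tank tank2
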
 948      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
 949              Displays the detailed health status for the given pools.  If no
 950              pool is specified, then the status of each pool in the system is
 951              displayed.  For more information on pool and device health, see
 952              the Device Failure and Recovery section.
 953 
 954              If a scrub or resilver is in progress, this command reports the
 955              percentage done and the estimated time to completion.  Both of
 956              these are only approximate, because the amount of data in the
 957              pool and the other workloads on the system can change.
 958 
 959              -D      Display a histogram of deduplication statistics, showing
 960                      the allocated (physically present on disk) and referenced
 961                      (logically referenced in the pool) block counts and sizes
 962                      by reference count.
 963 
 964              -T u|d  Display a time stamp.  Specify u for a printed
 965                      representation of the internal representation of time.
 966                      See time(2).  Specify d for standard date format.  See
 967                      date(1).
 968 
 969              -v      Displays verbose data error information, printing out a
 970                      complete list of all data errors since the last complete
 971                      pool scrub.
 972 
 973              -x      Only display status for pools that are exhibiting errors
 974                      or are otherwise unavailable.  Warnings about pools not
 975                      using the latest on-disk format will not be included.
 976 
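             For example (pool name is illustrative), detailed status
             including the deduplication histogram can be displayed with:

             # zpool status -Dv tank
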
 977      zpool upgrade
 978              Displays pools which do not have all supported features enabled
 979              and pools formatted using a legacy ZFS version number.  These
 980              pools can continue to be used, but some features may not be
 981              available.  Use zpool upgrade -a to enable all features on all
 982              pools.
 983 
 984      zpool upgrade -v
 985              Displays legacy ZFS versions supported by the current software.
 986              See zpool-features(5) for a description of feature flags features
 987              supported by the current software.
 988 
 989      zpool upgrade [-V version] -a|pool...
 990              Enables all supported features on the given pool.  Once this is
 991              done, the pool will no longer be accessible on systems that do
 992              not support feature flags.  See zpool-features(5) for details on
 993              compatibility with systems that support feature flags, but do not
 994              support all features enabled on the pool.
 995 
 996              -a      Enables all supported features on all pools.
 997 
 998              -V version
 999                      Upgrade to the specified legacy version.  If the -V flag
1000                      is specified, no features will be enabled on the pool.
1001                      This option can only be used to increase the version
1002                      number up to the last supported legacy version number.
1003 
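             For example (pool name is illustrative), a pool can be upgraded
             to a newer legacy version, without enabling feature flags, with:

             # zpool upgrade -V 28 tank
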
1004 EXIT STATUS
1005      The following exit values are returned:
1006 
1007      0       Successful completion.
1008 
1009      1       An error occurred.
1010 
1011      2       Invalid command line options were specified.
1012 
1013 EXAMPLES
1014      Example 1 Creating a RAID-Z Storage Pool
1015              The following command creates a pool with a single raidz root
1016              vdev that consists of six disks.
1017 
1018              # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1019 
1020      Example 2 Creating a Mirrored Storage Pool
1021              The following command creates a pool with two mirrors, where each
1022              mirror contains two disks.
1023 


1115      Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1116              The following command creates a ZFS storage pool consisting of
1117              two, two-way mirrors and mirrored log devices:
1118 
1119              # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1120                c4d0 c5d0
1121 
1122      Example 13 Adding Cache Devices to a ZFS Pool
1123              The following command adds two disks for use as cache devices to
1124              a ZFS storage pool:
1125 
1126              # zpool add pool cache c2d0 c3d0
1127 
1128              Once added, the cache devices gradually fill with content from
1129              main memory.  Depending on the size of your cache devices, it
1130              could take over an hour for them to fill.  Capacity and reads can
1131              be monitored using the iostat option as follows:
1132 
1133              # zpool iostat -v pool 5
1134 
1135      Example 14 Removing a Mirrored top-level (Log or Data) Device
1136              The following commands remove the mirrored log device mirror-2
1137              and mirrored top-level data device mirror-1.
1138 
1139              Given this configuration:
1140 
1141                pool: tank
1142               state: ONLINE
1143               scrub: none requested
1144              config:
1145 
1146                       NAME        STATE     READ WRITE CKSUM
1147                       tank        ONLINE       0     0     0
1148                         mirror-0  ONLINE       0     0     0
1149                           c6t0d0  ONLINE       0     0     0
1150                           c6t1d0  ONLINE       0     0     0
1151                         mirror-1  ONLINE       0     0     0
1152                           c6t2d0  ONLINE       0     0     0
1153                           c6t3d0  ONLINE       0     0     0
1154                       logs
1155                         mirror-2  ONLINE       0     0     0
1156                           c4t0d0  ONLINE       0     0     0
1157                           c4t1d0  ONLINE       0     0     0
1158 
1159              The command to remove the mirrored log mirror-2 is:
1160 
1161              # zpool remove tank mirror-2
1162 
1163              The command to remove the mirrored data mirror-1 is:
1164 
1165              # zpool remove tank mirror-1
1166 
1167      Example 15 Displaying expanded space on a device
1168              The following command displays the detailed information for the
1169              pool data.  This pool is comprised of a single raidz vdev where
1170              one of its devices increased its capacity by 10GB.  In this
1171              example, the pool will not be able to utilize this extra capacity
1172              until all the devices under the raidz vdev have been expanded.
1173 
1174              # zpool list -v data
1175              NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1176              data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
1177                raidz1    23.9G  14.6G  9.30G    48%         -
1178                  c1t1d0      -      -      -      -         -
1179                  c1t2d0      -      -      -      -       10G
1180                  c1t3d0      -      -      -      -         -
1181 
1182 INTERFACE STABILITY
1183      Evolving
1184 
1185 SEE ALSO
1186      zfs(1M), attributes(5), zpool-features(5)
   1 ZPOOL(1M)                    Maintenance Commands                    ZPOOL(1M)
   2 
   3 NAME
   4      zpool - configure ZFS storage pools
   5 
   6 SYNOPSIS
   7      zpool -?
   8      zpool add [-fn] pool vdev...
   9      zpool attach [-f] pool device new_device
  10      zpool clear pool [device]
  11      zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
  12            [-O file-system-property=value]... [-R root] pool vdev...
  13      zpool destroy [-f] pool
  14      zpool detach pool device
  15      zpool export [-cfF] [-t numthreads] pool...
  16      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
  17      zpool history [-il] [pool]...
  18      zpool import [-D] [-d dir]
  19      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  20            [-o property=value]... [-R root] [-t numthreads]
  21      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  22            [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
  23      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
  24      zpool labelclear [-f] device
  25      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
  26            [interval [count]]
  27      zpool offline [-t] pool device...
  28      zpool online [-e] pool device...
  29      zpool reguid pool
  30      zpool reopen pool
  31      zpool remove pool device...

  32      zpool replace [-f] pool device [new_device]
  33      zpool scrub [-m|-M|-p|-s] pool...
  34      zpool set property=value pool
  35      zpool split [-n] [-o property=value]... [-R root] pool newpool
  36      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
  37      zpool trim [-r rate|-s] pool...
  38      zpool upgrade
  39      zpool upgrade -v
  40      zpool upgrade [-V version] -a|pool...
  41      zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
  42      zpool vdev-set property=value pool vdev-name|vdev-guid
  43 
  44 DESCRIPTION
  45      The zpool command configures ZFS storage pools.  A storage pool is a
  46      collection of devices that provides physical storage and data replication
  47      for ZFS datasets.  All datasets within a storage pool share the same
  48      space.  See zfs(1M) for information on managing datasets.
  49 
  50    Virtual Devices (vdevs)
  51      A "virtual device" describes a single device or a collection of devices
  52      organized according to certain performance and fault characteristics.
  53      The following virtual devices are supported:
  54 
  55      disk    A block device, typically located under /dev/dsk.  ZFS can use
  56              individual slices or partitions, though the recommended mode of
  57              operation is to use whole disks.  A disk can be specified by a
  58              full path, or it can be a shorthand name (the relative portion of
  59              the path under /dev/dsk).  A whole disk can be specified by
  60              omitting the slice or partition designation.  For example, c0t0d0
  61              is equivalent to /dev/dsk/c0t0d0s2.  When given a whole disk, ZFS
  62              automatically labels the disk, if necessary.


 203      example,
 204 
 205      # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
 206 
 207      Spares can be shared across multiple pools, and can be added with the
 208      zpool add command and removed with the zpool remove command.  Once a
 209      spare replacement is initiated, a new spare vdev is created within the
 210      configuration that will remain there until the original device is
 211      replaced.  At this point, the hot spare becomes available again if
 212      another device fails.
 213 
 214      If a pool has a shared spare that is currently being used, the pool can
 215      not be exported since other pools may use this shared spare, which may
 216      lead to potential data corruption.
 217 
 218      An in-progress spare replacement can be cancelled by detaching the hot
 219      spare.  If the original faulted device is detached, then the hot spare
 220      assumes its place in the configuration, and is removed from the spare
 221      list of all active pools.
 222 
 223      See the sparegroup vdev property in the Device Properties section for
 224      information on how to control spare selection.
 225 
 226      Spares cannot replace log devices.
 227 
 228    Intent Log
 229      The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 230      transactions.  For instance, databases often require their transactions
 231      to be on stable storage devices when returning from a system call.  NFS
 232      and other applications can also use fsync(3C) to ensure data stability.
 233      By default, the intent log is allocated from blocks within the main pool.
 234      However, it might be possible to get better performance using separate
 235      intent log devices such as NVRAM or a dedicated disk.  For example:
 236 
 237      # zpool create pool c0d0 c1d0 log c2d0
 238 
 239      Multiple log devices can also be specified, and they can be mirrored.
 240      See the EXAMPLES section for an example of mirroring multiple log
 241      devices.
 242 
 243      Log devices can be added, replaced, attached, detached, and imported and
 244      exported as part of the larger pool.  Mirrored log devices can be removed
 245      by specifying the top-level mirror for the log.
 246 
 247    Cache Devices
 248      Devices can be added to a storage pool as "cache devices".  These devices
 249      provide an additional layer of caching between main memory and disk.  For
 250      read-heavy workloads, where the working set size is much larger than what
 251      can be cached in main memory, using cache devices allows much more of this
 252      working set to be served from low latency media.  Using cache devices
 253      provides the greatest performance improvement for random read workloads
 254      of mostly static content.
 255 
 256      To create a pool with cache devices, specify a cache vdev with any number
 257      of devices.  For example:
 258 
 259      # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 260 
 261      Cache devices cannot be mirrored or part of a raidz configuration.  If a
 262      read error is encountered on a cache device, that read I/O is reissued to
 263      the original storage pool device, which might be part of a mirrored or
 264      raidz configuration.
 265 
 266      The content of the cache devices is considered volatile, as is the case
 267      with other system caches.
 268 
 269    Pool Properties
 270      Each pool has several properties associated with it.  Some properties are
 271      read-only statistics while others are configurable and change the
 272      behavior of the pool.
 273 
 274      The following are read-only properties:
 275 
 276      allocated
 277              Amount of storage space used within the pool.
 278 
 279      bootsize
 280              The size of the system boot partition.  This property can only be
 281              set at pool creation time and is read-only once the pool is created.
 282              Setting this property implies using the -B option.
 283 
 284      capacity
 285              Percentage of pool space used.  This property can also be
 286              referred to by its shortened column name, cap.
 287 
 288      ddt_capped=on|off
 289              When ddt_capped is on, this indicates that DDT growth has been
 290              stopped.  New unique writes will not be deduped, to prevent
 291              further DDT growth.
 292 
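             For example (pool name is illustrative), whether the cap is
             currently in effect can be checked with:

             # zpool get ddt_capped tank
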
 293      expandsize
 294              Amount of uninitialized space within the pool or device that can
 295              be used to increase the total capacity of the pool.
 296              Uninitialized space consists of any space on an EFI labeled vdev
 297              which has not been brought online (e.g., using zpool online -e).
 298              This space occurs when a LUN is dynamically expanded.
 299 
 300      fragmentation
 301              The amount of fragmentation in the pool.
 302 
 303      free    The amount of free space available in the pool.
 304 
 305      freeing
 306              freeing is the amount of pool space remaining to be reclaimed.
 307              After a file, dataset or snapshot is destroyed, the space it was
 308              using is returned to the pool asynchronously.  Over time freeing
 309              will decrease while free increases.
 310 
 311      health  The current health of the pool.  Health can be one of ONLINE,
 312              DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
 313 
 314      guid    A unique identifier for the pool.
 315 
 316      size    Total size of the storage pool.
 317 
 318      unsupported@feature_guid
 319              Information about unsupported features that are enabled on the
 320              pool.  See zpool-features(5) for details.
 321 
 322      The space usage properties report actual physical space available to the
 323      storage pool.  The physical space can be different from the total amount
 324      of space that any contained datasets can actually use.  The amount of
 325      space used in a raidz configuration depends on the characteristics of the
 326      data being written.  In addition, ZFS reserves some space for internal
 327      accounting that the zfs(1M) command takes into account, but the zpool
 328      command does not.  For non-full pools of a reasonable size, these effects


 352      later changed with the zpool set command:
 353 
 354      autoexpand=on|off
 355              Controls automatic pool expansion when the underlying LUN is
 356              grown.  If set to on, the pool will be resized according to the
 357              size of the expanded device.  If the device is part of a mirror
 358              or raidz then all devices within that mirror/raidz group must be
 359              expanded before the new space is made available to the pool.  The
 360              default behavior is off.  This property can also be referred to
 361              by its shortened column name, expand.
 362 
 363      autoreplace=on|off
 364              Controls automatic device replacement.  If set to off, device
 365              replacement must be initiated by the administrator by using the
 366              zpool replace command.  If set to on, any new device, found in
 367              the same physical location as a device that previously belonged
 368              to the pool, is automatically formatted and replaced.  The
 369              default behavior is off.  This property can also be referred to
 370              by its shortened column name, replace.
 371 
 372      autotrim=on|off
 373              When set to on, while deleting data, ZFS will inform the
 374              underlying vdevs of any blocks that have been marked as freed.
 375              This allows thinly provisioned vdevs to reclaim unused blocks.
 376              Currently, this feature supports sending SCSI UNMAP commands to
 377              SCSI and SAS disk vdevs, and using file hole punching on file-
 378              backed vdevs.  SATA TRIM is currently not implemented.  The
 379              default setting for this property is off.
 380 
 381              Please note that automatic trimming of data blocks can put
 382              significant stress on the underlying storage devices if they do
 383              not handle these commands in a background, low-priority manner.
 384              In that case, it may be possible to achieve most of the benefits
 385              of trimming free space on the pool by running an on-demand
 386              (manual) trim every once in a while during a maintenance window
 387              using the zpool trim command.
 388 
 389              Automatic trim does not reclaim blocks after a delete
 390              immediately.  Instead, it waits approximately 32-64 TXGs (or as
 391              defined by the zfs_txgs_per_trim tunable) to allow for more
 392              efficient aggregation of smaller portions of free space into
 393              fewer larger regions, as well as to allow for longer pool
 394              corruption recovery via zpool import -F.
 395 
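             For example (pool name is illustrative), automatic trim can be
             enabled on a pool, or an on-demand trim run during a maintenance
             window, with:

             # zpool set autotrim=on tank
             # zpool trim tank
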
 396      bootfs=pool/dataset
 397              Identifies the default bootable dataset for the root pool.  This
 398              property is expected to be set mainly by the installation and
 399              upgrade programs.
 400 
 401      cachefile=path|none
 402              Controls the location where the pool configuration is cached.
 403              Discovering all pools on system startup requires a cached copy of
 404              the configuration data that is stored on the root file system.
 405              All pools in this cache are automatically imported when the
 406              system boots.  Some environments, such as install and clustering,
 407              need to cache this information in a different location so that
 408              pools are not automatically imported.  Setting this property
 409              caches the pool configuration in a different location that can
 410              later be imported with zpool import -c.  Setting it to the
 411              special value none creates a temporary pool that is never cached,
 412              and the special value "" (empty string) uses the default
 413              location.
 414 
 415              Multiple pools can share the same cache file.  Because the kernel


 444              determined as follows:
 445 
 446              wait      Blocks all I/O access until the device connectivity is
 447                        recovered and the errors are cleared.  This is the
 448                        default behavior.
 449 
 450              continue  Returns EIO to any new write I/O requests but allows
 451                        reads to any of the remaining healthy devices.  Any
 452                        write requests that have yet to be committed to disk
 453                        would be blocked.
 454 
 455              panic     Prints out a message to the console and generates a
 456                        system crash dump.
 457 
 458      feature@feature_name=enabled
 459              The value of this property is the current state of feature_name.
 460              The only valid value when setting this property is enabled which
 461              moves feature_name to the enabled state.  See zpool-features(5)
 462              for details on feature states.
 463 
 464      forcetrim=on|off
 465              Controls whether device support is taken into consideration when
 466              issuing TRIM commands to the underlying vdevs of the pool.
 467              Normally, both automatic trim and on-demand (manual) trim only
 468              issue TRIM commands if a vdev indicates support for it.  Setting
 469              the forcetrim property to on will force ZFS to issue TRIMs even
 470              if it thinks a device does not support it.  The default value is
 471              off.
 472 
 473      listsnapshots=on|off
 474              Controls whether information about snapshots associated with this
 475              pool is output when zfs list is run without the -t option.  The
 476              default value is off.  This property can also be referred to by
 477              its shortened name, listsnaps.
 478 
 479      scrubprio=0-100
 480              Sets the priority of scrub I/O for this pool.  This is a number
 481              from 0 to 100, higher numbers meaning a higher priority and thus
 482              more bandwidth allocated to scrub I/O, provided there is other
 483              I/O competing for bandwidth.  If no other I/O is competing for
 484              bandwidth, scrub is allowed to consume as much bandwidth as the
 485              pool is capable of providing.  A priority of 100 means that scrub
 486              I/O has equal priority to any other user-generated I/O.  The
 487              value 0 is special: it turns off per-pool scrub priority
 488              control.  In that case, scrub I/O priority is determined by the
 489              zfs_vdev_scrub_min_active and zfs_vdev_scrub_max_active tunables.
 490              The default value is 5.
 491 
 492      resilverprio=0-100
 493              Same as the scrubprio property, but controls the priority for
 494              resilver I/O.  The default value is 10.  When set to 0 the global
 495              tunables used for queue sizing are zfs_vdev_resilver_min_active
 496              and zfs_vdev_resilver_max_active.
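
                  For example, the following illustrative commands raise the
                  scrub priority on a hypothetical pool named tank and defer
                  resilver queue sizing to the global tunables:

                          # zpool set scrubprio=20 tank
                          # zpool set resilverprio=0 tank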
 497 
 498      version=version
 499              The current on-disk version of the pool.  This can be increased,
 500              but never decreased.  The preferred method of updating pools is
 501              with the zpool upgrade command, though this property can be used
 502              when a specific version is needed for backwards compatibility.
 503              Once feature flags are enabled on a pool this property will no
 504              longer have a value.
 505 
 506    Device Properties
 507      Each device can have several properties associated with it.  These
 508      properties override global tunables and are designed to provide more
 509      control over the operational parameters of this specific device, as well
 510      as to help manage this device.
 511 
 512      The cos device property can reference a CoS property descriptor by name,
 513      in which case, the values of device properties are determined according
 514      to the following rule: the device settings override CoS settings,
 515      which, in turn, override the global tunables.
 516 
 517      The following device properties are available:
 518 
 519      cos=cos-name
 520              This property indicates whether the device is associated with a
 521              CoS property descriptor object.  If so, the properties from the
 522              CoS descriptor that are not explicitly overridden by the device
 523              properties are in effect for this device.
 524 
 525      l2arc_ddt=on|off
 526              This property is meaningful for L2ARC devices.  If this property
 527              is turned on, ZFS will dedicate the L2ARC device to caching
 528              deduplication table (DDT) buffers only.
 529 
 530      prefread=1..100
 531              This property is meaningful for devices that belong to a mirror.
 532              The property determines the preference that is given to the
 533              device when reading from the mirror.  The ratio of the value to
 534              the sum of the values of this property for all the devices in the
 535              mirror determines the relative frequency (effectively, the
 536              probability) of reading from this specific device.
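
                  For example, in a two-way mirror of the hypothetical devices
                  c0t0d0 and c0t1d0 in a hypothetical pool named tank, the
                  following illustrative commands direct roughly three out of
                  four reads (3 / (3 + 1) = 75%) to c0t0d0:

                          # zpool vdev-set prefread=3 tank c0t0d0
                          # zpool vdev-set prefread=1 tank c0t1d0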
 537 
 538      sparegroup=group-name
 539              This property indicates whether the device is a part of a spare
 540              device group.  Devices in the pool (including spares) can be
 541              labeled with strings that are meaningful in the context of the
 542              management workflow in effect.  When a failed device is
 543              automatically replaced by spares, the spares whose sparegroup
 544              property matches that of the failed device are used first.
 545 
 546      {read|aread|write|awrite|scrub|resilver}_{minactive|maxactive}=1..1000
 547              These properties define the minimum/maximum number of outstanding
 548              active requests for the queueable classes of I/O requests as
 549              defined by the ZFS I/O scheduler.  The classes include read,
 550              asynchronous read, write, asynchronous write, scrub, and resilver.
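
                  For example, the following illustrative commands raise the
                  maximum number of active resilver requests on a leaf vdev of
                  a hypothetical pool named tank and read the setting back (the
                  pool and device names are hypothetical):

                          # zpool vdev-set resilver_maxactive=10 tank c0t1d0
                          # zpool vdev-get resilver_maxactive tank c0t1d0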
 551 
 552    Subcommands
 553      All subcommands that modify state are logged persistently to the pool in
 554      their original form.
 555 
 556      The zpool command provides subcommands to create and destroy storage
 557      pools, add capacity to storage pools, and provide information about the
 558      storage pools.  The following subcommands are supported:
 559 
 560      zpool -?
 561              Displays a help message.
 562 
 563      zpool add [-fn] pool vdev...
 564              Adds the specified virtual devices to the given pool.  The vdev
 565              specification is described in the Virtual Devices section.  The
 566              behavior of the -f option, and the device checks performed are
 567              described in the zpool create subcommand.
 568 
 569              -f      Forces use of vdevs, even if they appear in use or
 570                      specify a conflicting replication level.  Not all devices
 571                      can be overridden in this manner.


 636                      properties to enabled with the -o option.  See
 637                      zpool-features(5) for details about feature properties.
 638 
 639              -f      Forces use of vdevs, even if they appear in use or
 640                      specify a conflicting replication level.  Not all devices
 641                      can be overridden in this manner.
 642 
 643              -m mountpoint
 644                      Sets the mount point for the root dataset.  The default
 645                      mount point is /pool or altroot/pool if altroot is
 646                      specified.  The mount point must be an absolute path,
 647                      legacy, or none.  For more information on dataset mount
 648                      points, see zfs(1M).
 649 
 650              -n      Displays the configuration that would be used without
 651                      actually creating the pool.  The actual pool creation can
 652                      still fail due to insufficient privileges or device
 653                      sharing.
 654 
 655              -o property=value
 656                      Sets the given pool properties.  See the Pool Properties
 657                      section for a list of valid properties that can be set.
 658 
 659              -O file-system-property=value
 660                      Sets the given file system properties in the root file
 661                      system of the pool.  See the Properties section of
 662                      zfs(1M) for a list of valid properties that can be set.
 663 
 664              -R root
 665                      Equivalent to -o cachefile=none -o altroot=root
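
                  For example, the following illustrative command creates a
                  mirrored pool named tank (the pool and device names are
                  hypothetical) with a custom mount point, a pool property,
                  and a root file system property:

                          # zpool create -m /export/tank -o autoexpand=on \
                            -O compression=on tank mirror c0t0d0 c0t1d0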
 666 
 667      zpool destroy [-f] pool
 668              Destroys the given pool, freeing up any devices for other use.
 669              This command tries to unmount any active datasets before
 670              destroying the pool.
 671 
 672              -f      Forces any active datasets contained within the pool to
 673                      be unmounted.
 674 
 675      zpool detach pool device
 676              Detaches device from a mirror.  The operation is refused if there
 677              are no other valid replicas of the data.
 678 
 679      zpool export [-cfF] [-t numthreads] pool...
 680              Exports the given pools from the system.  All devices are marked
 681              as exported, but are still considered in use by other subsystems.
 682              The devices can be moved between systems (even those of different
 683              endianness) and imported as long as a sufficient number of
 684              devices are present.
 685 
 686              Before exporting the pool, all datasets within the pool are
 687              unmounted.  A pool cannot be exported if it has a shared spare
 688              that is currently being used.
 689 
 690              For pools to be portable, you must give the zpool command whole
 691              disks, not just slices, so that ZFS can label the disks with
 692              portable EFI labels.  Otherwise, disk drivers on platforms of
 693              different endianness will not recognize the disks.
 694 
 695              -c      Keep configuration information of exported pool in the
 696                      cache file.
 697 
 698              -f      Forcefully unmount all datasets, using the unmount -f
 699                      command.
 700 
 701                      This command will forcefully export the pool even if it
 702                      has a shared spare that is currently being used.  This
 703                      may lead to potential data corruption.
 704 
 705              -F      Do not update device labels or cache file with new
 706                      configuration.
 707 
 708              -t numthreads
 709                      Unmount datasets in parallel using up to numthreads
 710                      threads.
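
                  For example, the following illustrative command exports a
                  hypothetical pool named tank, keeps its configuration in the
                  cache file, and unmounts its datasets using up to 8 threads:

                          # zpool export -c -t 8 tank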
 711 
 712      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
 713              Retrieves the given list of properties (or all properties if all
 714              is used) for the specified storage pool(s).  These properties are
 715              displayed with the following fields:
 716 
 717                      name          Name of storage pool
 718                      property      Property name
 719                      value         Property value
 720                      source        Property source, either 'default' or 'local'.
 721 
 722              See the Pool Properties section for more information on the
 723              available pool properties.
 724 
 725              -H      Scripted mode.  Do not display headers, and separate
 726                      fields by a single tab instead of arbitrary space.
 727 
 728              -o field
 729                      A comma-separated list of columns to display.
 730                      name,property,value,source is the default value.
 731 
 732              -p      Display numbers in parsable (exact) values.
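
                  For example, the following illustrative command prints the
                  health and capacity of a hypothetical pool named tank in
                  scripted, parsable form:

                          # zpool get -Hp -o name,property,value health,capacity tank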
 733 
 734      zpool history [-il] [pool]...
 735              Displays the command history of the specified pool(s) or all
 736              pools if no pool is specified.
 737 
 738              -i      Displays internally logged ZFS events in addition to user
 739                      initiated events.
 740 
 741              -l      Displays log records in long format, which in addition to
 742                      the standard format includes the user name, the hostname,
 743                      and the zone in which the operation was performed.
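
                  For example, the following illustrative command shows the
                  long form of the command history, including internally
                  logged events, for a hypothetical pool named tank:

                          # zpool history -il tank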


 803 
 804              -m      Allows a pool to import when there is a missing log
 805                      device.  Recent transactions can be lost because the log
 806                      device will be discarded.
 807 
 808              -n      Used with the -F recovery option.  Determines whether a
 809                      non-importable pool can be made importable again, but
 810                      does not actually perform the pool recovery.  For more
 811                      details about pool recovery mode, see the -F option,
 812                      above.
 813 
 814              -N      Import the pool without mounting any file systems.
 815 
 816              -o mntopts
 817                      Comma-separated list of mount options to use when
 818                      mounting datasets within the pool.  See zfs(1M) for a
 819                      description of dataset properties and mount options.
 820 
 821              -o property=value
 822                      Sets the specified property on the imported pool.  See
 823                      the Pool Properties section for more information on the
 824                      available pool properties.
 825 
 826              -R root
 827                      Sets the cachefile property to none and the altroot
 828                      property to root.
 829 
 830      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
 831              property=value]... [-R root] [-t numthreads] pool|id [newpool]
 832              Imports a specific pool.  A pool can be identified by its name or
 833              the numeric identifier.  If newpool is specified, the pool is
 834              imported using the name newpool.  Otherwise, it is imported with
 835              the same name as its exported name.
 836 
 837              If a device is removed from a system without running zpool export
 838              first, the device appears as potentially active.  It cannot be
 839              determined if this was a failed export, or whether the device is
 840              really in use from another host.  To import a pool in this state,
 841              the -f option is required.
 842 
 843              -c cachefile


 862                      This option is ignored if the pool is importable or
 863                      already imported.
 864 
 865              -m      Allows a pool to import when there is a missing log
 866                      device.  Recent transactions can be lost because the log
 867                      device will be discarded.
 868 
 869              -n      Used with the -F recovery option.  Determines whether a
 870                      non-importable pool can be made importable again, but
 871                      does not actually perform the pool recovery.  For more
 872                      details about pool recovery mode, see the -F option,
 873                      above.
 874 
 875              -o mntopts
 876                      Comma-separated list of mount options to use when
 877                      mounting datasets within the pool.  See zfs(1M) for a
 878                      description of dataset properties and mount options.
 879 
 880              -o property=value
 881                      Sets the specified property on the imported pool.  See
 882                      the Pool Properties section for more information on the
 883                      available pool properties.
 884 
 885              -R root
 886                      Sets the cachefile property to none and the altroot
 887                      property to root.
 888 
 889              -t numthreads
 890                      Mount datasets in parallel using up to numthreads
 891                      threads.
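
                  For example, the following illustrative command imports a
                  hypothetical exported pool named tank under the new name
                  tank2, mounting its datasets with up to 8 threads:

                          # zpool import -t 8 tank tank2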
 892 
 893      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
 894              Displays I/O statistics for the given pools.  When given an
 895              interval, the statistics are printed every interval seconds until
 896              ^C is pressed.  If no pools are specified, statistics for every
 897              pool in the system are shown.  If count is specified, the command
 898              exits after count reports are printed.
 899 
 900              -T u|d  Display a time stamp.  Specify u for a printed
 901                      representation of the internal representation of time.
 902                      See time(2).  Specify d for standard date format.  See
 903                      date(1).
 904 
 905              -v      Verbose statistics.  Reports usage statistics for
 906                      individual vdevs within the pool, in addition to the
 907                      pool-wide statistics.
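
                  For example, the following illustrative command prints
                  per-vdev statistics for a hypothetical pool named tank every
                  5 seconds, prefixed with a date-formatted time stamp:

                          # zpool iostat -T d -v tank 5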
 908 
 909      zpool labelclear [-f] device
 910              Removes ZFS label information from the specified device.  The
 911              device must not be part of an active pool configuration.
 912 
 913              -f      Treat exported or foreign devices as inactive.
 914 
 915      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
 916              [interval [count]]
 917              Lists the given pools along with a health status and space usage.
 918              If no pools are specified, all pools in the system are listed.
 919              When given an interval, the information is printed every interval
 920              seconds until ^C is pressed.  If count is specified, the command
 921              exits after count reports are printed.
 922 
 923              -H      Scripted mode.  Do not display headers, and separate
 924                      fields by a single tab instead of arbitrary space.
 925 
 926              -o property
 927                      Comma-separated list of properties to display.  See the
 928                      Pool Properties section for a list of valid properties.
 929                      The default list is name, size, allocated, free,
 930                      expandsize, fragmentation, capacity, dedupratio, health,
 931                      altroot.
 932 
 933              -p      Display numbers in parsable (exact) values.
 934 
 935              -T u|d  Display a time stamp.  Specify u for a printed
 936                      representation of the internal representation of time.
 937                      See time(2).  Specify d for standard date format.  See
 938                      date(1).
 939 
 940              -v      Verbose statistics.  Reports usage statistics for
 941                      individual vdevs within the pool, in addition to the
 942                      pool-wide statistics.
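
                  For example, the following illustrative command lists
                  selected properties of a hypothetical pool named tank in
                  scripted, parsable form:

                          # zpool list -Hp -o name,size,allocated,free,health tank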
 943 
 944      zpool offline [-t] pool device...
 945              Takes the specified physical device offline.  While the device is
 946              offline, no attempt is made to read or write to the device.  This
 947              command is not applicable to spares.
 948 
 949              -t      Temporary.  Upon reboot, the specified physical device
 950                      reverts to its previous state.
 951 
 952      zpool online [-e] pool device...
 953              Brings the specified physical device online.  This command is not
 954              applicable to spares.
 955 
 956              -e      Expand the device to use all available space.  If the
 957                      device is part of a mirror or raidz then all devices must
 958                      be expanded before the new space will become available to
 959                      the pool.
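
                  For example, the following illustrative commands take a
                  hypothetical device offline until the next reboot and later
                  bring it back online, expanding it to use any new capacity
                  (the pool and device names are hypothetical):

                          # zpool offline -t tank c0t2d0
                          # zpool online -e tank c0t2d0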
 960 
 961      zpool reguid pool
 962              Generates a new unique identifier for the pool.  You must ensure
 963              that all devices in this pool are online and healthy before
 964              performing this action.
 965 
 966      zpool reopen pool
 967              Reopens all the vdevs associated with the pool.
 968 
 969      zpool remove pool device...
 970              Removes the specified device from the pool.  This command
 971              currently only supports removing hot spares, cache, log and
 972              special devices.  A mirrored log device can be removed by
 973              specifying the top-level mirror for the log.  Non-log devices
 974              that are part of a mirrored configuration can be removed using
 975              the zpool detach command.  Non-redundant and raidz devices cannot
 976              be removed from a pool.
 977 
 978      zpool replace [-f] pool device [new_device]
 979              Replaces old_device with new_device.  This is equivalent to
 980              attaching new_device, waiting for it to resilver, and then
 981              detaching old_device.
 982 
 983              The size of new_device must be greater than or equal to the
 984              minimum size of all the devices in a mirror or raidz
 985              configuration.
 986 
 987              new_device is required if the pool is not redundant.  If
 988              new_device is not specified, it defaults to old_device.  This
 989              form of replacement is useful after an existing disk has failed
 990              and has been physically replaced.  In this case, the new disk may
 991              have the same /dev/dsk path as the old device, even though it is
 992              actually a different disk.  ZFS recognizes this.
 993 
 994              -f      Forces use of new_device, even if it appears to be in
 995                      use.  Not all devices can be overridden in this manner.
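
                  For example, assuming a hypothetical pool named tank and
                  hypothetical device names, the first command below replaces
                  a failed disk that was physically swapped in the same slot,
                  and the second replaces a disk with a different one:

                          # zpool replace tank c0t3d0
                          # zpool replace tank c0t3d0 c0t4d0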
 996 
 997      zpool scrub [-m|-M|-p|-s] pool...
 998              Begins a scrub or resumes a paused scrub.  The scrub examines all
 999              data in the specified pools to verify that it checksums
1000              correctly.  For replicated (mirror or raidz) devices, ZFS
1001              automatically repairs any damage discovered during the scrub.
1002              The zpool status command reports the progress of the scrub and
1003              summarizes the results of the scrub upon completion.
1004 
1005              Scrubbing and resilvering are very similar operations.  The
1006              difference is that resilvering only examines data that ZFS knows
1007              to be out of date (for example, when attaching a new device to a
1008              mirror or replacing an existing device), whereas scrubbing
1009              examines all data to discover silent errors due to hardware
1010              faults or disk failure.
1011 
1012              Because scrubbing and resilvering are I/O-intensive operations,
1013              ZFS only allows one at a time.  If a scrub is paused, the zpool
1014              scrub resumes it.  If a resilver is in progress, ZFS does not
1015              allow a scrub to be started until the resilver completes.
1016 
1017              A partial scrub may be requested using the -m or -M option.
1018 
1019              -m      Scrub only metadata blocks.
1020 
1021              -M      Scrub only MOS blocks.
1022 
1023              -p      Pause scrubbing.  Scrub pause state and progress are
1024                      periodically synced to disk.  If the system is restarted
1025                      or the pool is exported during a paused scrub, even after
1026                      import, the scrub will remain paused until it is resumed.
1027                      Once resumed, the scrub picks up from the place where it
1028                      was last checkpointed to disk.  To resume a paused scrub,
1029                      issue zpool scrub again.
1030 
1031              -s      Stop scrubbing.
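
                  For example, the following illustrative commands start a
                  metadata-only scrub of a hypothetical pool named tank,
                  pause it, and later resume it:

                          # zpool scrub -m tank
                          # zpool scrub -p tank
                          # zpool scrub tank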
1032 
1033      zpool set property=value pool
1034              Sets the given property on the specified pool.  See the Pool
1035              Properties section for more information on what properties can be
1036              set and acceptable values.
1037 
1038      zpool split [-n] [-o property=value]... [-R root] pool newpool
1039              Splits devices off pool, creating newpool.  All vdevs in pool must
1040              be mirrors.  At the time of the split, newpool will be a replica
1041              of pool.
1042 
1043              -n      Do a dry run; do not actually perform the split.  Print out
1044                      the expected configuration of newpool.
1045 
1046              -o property=value
1047                      Sets the specified property for newpool.  See the Pool
1048                      Properties section for more information on the available
1049                      pool properties.
1050 
1051              -R root
1052                      Set altroot for newpool to root and automatically import
1053                      it.
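
                  For example, the following illustrative commands preview and
                  then perform splitting a hypothetical mirrored pool named
                  tank into a new pool named tank2:

                          # zpool split -n tank tank2
                          # zpool split -R /mnt tank tank2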
1054 
1055      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
1056              Displays the detailed health status for the given pools.  If no
1057              pool is specified, then the status of each pool in the system is
1058              displayed.  For more information on pool and device health, see
1059              the Device Failure and Recovery section.
1060 
1061              If a scrub or resilver is in progress, this command reports the
1062              percentage done and the estimated time to completion.  Both of
1063              these are only approximate, because the amount of data in the
1064              pool and the other workloads on the system can change.
1065 
1066              -D      Display a histogram of deduplication statistics, showing
1067                      the allocated (physically present on disk) and referenced
1068                      (logically referenced in the pool) block counts and sizes
1069                      by reference count.
1070 
1071              -T u|d  Display a time stamp.  Specify u for a printed
1072                      representation of the internal representation of time.
1073                      See time(2).  Specify d for standard date format.  See
1074                      date(1).
1075 
1076              -v      Displays verbose data error information, printing out a
1077                      complete list of all data errors since the last complete
1078                      pool scrub.
1079 
1080              -x      Only display status for pools that are exhibiting errors
1081                      or are otherwise unavailable.  Warnings about pools not
1082                      using the latest on-disk format will not be included.
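
                  For example, the following illustrative command prints
                  verbose error information and a deduplication histogram for
                  a hypothetical pool named tank:

                          # zpool status -Dv tank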
1083 
1084      zpool trim [-r rate|-s] pool...
1085              Initiates an on-demand TRIM operation on all of the free space of
1086              a pool.  This informs the underlying storage devices of all of
1087              the blocks that the pool no longer considers allocated, thus
1088              allowing thinly provisioned storage devices to reclaim them.
1089              Please note that this collects all space marked as "freed" in the
1090              pool immediately and does not wait for the zfs_txgs_per_trim delay as
1091              automatic TRIM does.  Hence, this can limit pool corruption
1092              recovery options during and immediately following the on-demand
1093              TRIM to 1-2 TXGs into the past (instead of the standard 32-64 of
1094              automatic TRIM).  This approach, however, allows you to recover
1095              the maximum amount of free space from the pool immediately
1096              without having to wait.
1097 
1098              Also note that an on-demand TRIM operation can be initiated
1099              irrespective of the autotrim pool property setting.  It does,
1100              however, respect the forcetrim pool property.
1101 
1102              An on-demand TRIM operation does not conflict with an ongoing
1103              scrub, but it can put significant I/O stress on the underlying
1104              vdevs.  A resilver, however, automatically stops an on-demand
1105              TRIM operation.  You can manually reinitiate the TRIM operation
1106              after the resilver has started, by simply reissuing the zpool
1107              trim command.
1108 
1109              Adding a vdev during TRIM is supported, although the progression
1110              display in zpool status might not be entirely accurate in that
1111              case (TRIM will complete before reaching 100%).  Removing or
1112              detaching a vdev will prematurely terminate an on-demand TRIM
1113              operation.
1114 
1115              -r rate
1116                      Controls the speed at which the TRIM operation
1117                      progresses.  Without this option, TRIM is executed in
1118                      parallel on all top-level vdevs as quickly as possible.
1119                      This option allows you to control how fast (in bytes per
1120                      second) the TRIM is executed.  This rate is applied on a
1121                      per-vdev basis, i.e. every top-level vdev in the pool
1122                      tries to match this speed.
1123 
1124                      Due to limitations in how the algorithm is designed,
1125                      TRIMs are executed in whole-metaslab increments.  Each
1126                      top-level vdev contains approximately 200 metaslabs, so a
1127                      rate-limited TRIM progresses in steps, i.e. it TRIMs one
1128                      metaslab completely and then waits for a while so that
1129                      over the whole device, the speed averages out.
1130 
1131                      When an on-demand TRIM operation is already in progress,
1132                      this option changes its rate.  To change a rate-limited
1133                      TRIM to an unlimited one, simply execute the zpool trim
1134                      command without the -r option.
1135 
1136              -s      Stop trimming.  If an on-demand TRIM operation is not
1137                      ongoing at the moment, this does nothing and the command
1138                      returns success.
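
                  For example, the following illustrative commands start an
                  on-demand TRIM of a hypothetical pool named tank, restart it
                  rate-limited to roughly 100 MB/s per top-level vdev (the
                  rate is given in bytes per second), and finally stop it:

                          # zpool trim tank
                          # zpool trim -r 104857600 tank
                          # zpool trim -s tank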
1139 
1140      zpool upgrade
1141              Displays pools which do not have all supported features enabled
1142              and pools formatted using a legacy ZFS version number.  These
1143              pools can continue to be used, but some features may not be
1144              available.  Use zpool upgrade -a to enable all features on all
1145              pools.
1146 
1147      zpool upgrade -v
1148              Displays legacy ZFS versions supported by the current software.
1149              See zpool-features(5) for a description of the feature flags
1150              supported by the current software.
1151 
1152      zpool upgrade [-V version] -a|pool...
1153              Enables all supported features on the given pool.  Once this is
1154              done, the pool will no longer be accessible on systems that do
1155              not support feature flags.  See zpool-features(5) for details on
1156              compatibility with systems that support feature flags, but do not
1157              support all features enabled on the pool.
1158 
1159              -a      Enables all supported features on all pools.
1160 
1161              -V version
1162                      Upgrade to the specified legacy version.  If the -V flag
1163                      is specified, no features will be enabled on the pool.
1164                      This option can only be used to increase the version
1165                      number up to the last supported legacy version number.
1166 
1167      zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
1168              Retrieves the given list of vdev properties (or all properties if
1169              all is used) for the specified vdev of the specified storage
1170              pool.  These properties are displayed in the same manner as the
1171              pool properties.  The operation is supported for leaf-level vdevs
1172              only.  See the Device Properties section for more information on
1173              the available properties.
1174 
1175      zpool vdev-set property=value pool vdev-name|vdev-guid
1176              Sets the given property on the specified device of the specified
1177              pool.  If a top-level vdev is specified, sets the property on all
1178              the child devices.  See the Device Properties section for more
1179              information on what properties can be set and acceptable values.
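
                  For example, the following illustrative commands display all
                  device properties of a leaf vdev of a hypothetical pool
                  named tank and then associate that vdev with a spare group
                  (the pool, device, and group names are hypothetical):

                          # zpool vdev-get all tank c0t1d0
                          # zpool vdev-set sparegroup=shelf0 tank c0t1d0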
1180 
1181 EXIT STATUS
1182      The following exit values are returned:
1183 
1184      0       Successful completion.
1185 
1186      1       An error occurred.
1187 
1188      2       Invalid command line options were specified.
1189 
1190 EXAMPLES
1191      Example 1 Creating a RAID-Z Storage Pool
1192              The following command creates a pool with a single raidz root
1193              vdev that consists of six disks.
1194 
1195              # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1196 
1197      Example 2 Creating a Mirrored Storage Pool
1198              The following command creates a pool with two mirrors, where each
1199              mirror contains two disks.
1200 


1292      Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1293              The following command creates a ZFS storage pool consisting of
1294              two, two-way mirrors and mirrored log devices:
1295 
1296              # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1297                c4d0 c5d0
1298 
1299      Example 13 Adding Cache Devices to a ZFS Pool
1300              The following command adds two disks for use as cache devices to
1301              a ZFS storage pool:
1302 
1303              # zpool add pool cache c2d0 c3d0
1304 
1305              Once added, the cache devices gradually fill with content from
1306              main memory.  Depending on the size of your cache devices, it
1307              could take over an hour for them to fill.  Capacity and reads can
1308              be monitored using the iostat subcommand as follows:
1309 
1310              # zpool iostat -v pool 5
1311 
1312      Example 14 Removing a Mirrored Log Device
1313              The following command removes the mirrored log device mirror-2.
1314              Given this configuration:
1315 
1316                pool: tank
1317               state: ONLINE
1318               scrub: none requested
1319              config:
1320 
1321                       NAME        STATE     READ WRITE CKSUM
1322                       tank        ONLINE       0     0     0
1323                         mirror-0  ONLINE       0     0     0
1324                           c6t0d0  ONLINE       0     0     0
1325                           c6t1d0  ONLINE       0     0     0
1326                         mirror-1  ONLINE       0     0     0
1327                           c6t2d0  ONLINE       0     0     0
1328                           c6t3d0  ONLINE       0     0     0
1329                       logs
1330                         mirror-2  ONLINE       0     0     0
1331                           c4t0d0  ONLINE       0     0     0
1332                           c4t1d0  ONLINE       0     0     0
1333 
1334              The command to remove the mirrored log mirror-2 is:
1335 
1336              # zpool remove tank mirror-2
1337 
1338      Example 15 Displaying expanded space on a device
1339              The following command displays the detailed information for the
1340              pool data.  This pool is composed of a single raidz vdev where
1341              one of its devices increased its capacity by 10GB.  In this
1342              example, the pool will not be able to utilize this extra capacity
1343              until all the devices under the raidz vdev have been expanded.
1344 
1345              # zpool list -v data
1346              NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1347              data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
1348                raidz1    23.9G  14.6G  9.30G    48%         -
1349                  c1t1d0      -      -      -      -         -
1350                  c1t2d0      -      -      -      -       10G
1351                  c1t3d0      -      -      -      -         -
1352 
1353 INTERFACE STABILITY
1354      Evolving
1355 
1356 SEE ALSO
1357      zfs(1M), attributes(5), zpool-features(5)