1 ZPOOL(1M)                    Maintenance Commands                    ZPOOL(1M)
   2 
   3 NAME
   4      zpool - configure ZFS storage pools
   5 
   6 SYNOPSIS
   7      zpool -?
   8      zpool add [-fn] pool vdev...
   9      zpool attach [-f] pool device new_device
  10      zpool checkpoint [-d, --discard] pool
  11      zpool clear pool [device]
  12      zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
  13            [-O file-system-property=value]... [-R root] [-t tempname]
  14            pool vdev...
  15      zpool destroy [-f] pool
  16      zpool detach pool device
  17      zpool export [-f] pool...
  18      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
  19      zpool history [-il] [pool]...
  20      zpool import [-D] [-d dir]
  21      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  22            [-o property=value]... [-R root]
  23      zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint]
  24            [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root]
  25            pool|id [newpool]
  26      zpool initialize [-cs] pool [device...]
  27      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
  28      zpool labelclear [-f] device
  29      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
  30            [interval [count]]
  31      zpool offline [-t] pool device...
  32      zpool online [-e] pool device...
  33      zpool reguid pool
  34      zpool reopen pool
  35      zpool remove [-np] pool device...
  36      zpool remove -s pool
  37      zpool replace [-f] pool device [new_device]
  38      zpool scrub [-s | -p] pool...
  39      zpool set property=value pool
  40      zpool split [-n] [-o property=value]... [-R root] pool newpool
  41      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
  42      zpool upgrade
  43      zpool upgrade -v
  44      zpool upgrade [-V version] -a|pool...
  45 
  46 DESCRIPTION
  47      The zpool command configures ZFS storage pools.  A storage pool is a
  48      collection of devices that provides physical storage and data replication
  49      for ZFS datasets.  All datasets within a storage pool share the same
  50      space.  See zfs(1M) for information on managing datasets.
  51 
  52    Virtual Devices (vdevs)
  53      A "virtual device" describes a single device or a collection of devices
  54      organized according to certain performance and fault characteristics.
  55      The following virtual devices are supported:
  56 
  57      disk    A block device, typically located under /dev/dsk.  ZFS can use
  58              individual slices or partitions, though the recommended mode of
  59              operation is to use whole disks.  A disk can be specified by a
  60              full path, or it can be a shorthand name (the relative portion of
  61              the path under /dev/dsk).  A whole disk can be specified by
  62              omitting the slice or partition designation.  For example, c0t0d0
  63              is equivalent to /dev/dsk/c0t0d0s2.  When given a whole disk, ZFS
  64              automatically labels the disk, if necessary.
  65 
  66      file    A regular file.  The use of files as a backing store is strongly
  67              discouraged.  It is designed primarily for experimental purposes,
  68              as the fault tolerance of a file is only as good as the file
  69              system of which it is a part.  A file must be specified by a full
  70              path.
  71 
  72      mirror  A mirror of two or more devices.  Data is replicated in an
  73              identical fashion across all components of a mirror.  A mirror
  74              with N disks of size X can hold X bytes and can withstand (N-1)
  75              devices failing before data integrity is compromised.
  76 
  77      raidz, raidz1, raidz2, raidz3
  78              A variation on RAID-5 that allows for better distribution of
  79              parity and eliminates the RAID-5 "write hole" (in which data and
  80              parity become inconsistent after a power loss).  Data and parity
             are striped across all disks within a raidz group.
  82 
  83              A raidz group can have single-, double-, or triple-parity,
  84              meaning that the raidz group can sustain one, two, or three
  85              failures, respectively, without losing any data.  The raidz1 vdev
  86              type specifies a single-parity raidz group; the raidz2 vdev type
  87              specifies a double-parity raidz group; and the raidz3 vdev type
  88              specifies a triple-parity raidz group.  The raidz vdev type is an
  89              alias for raidz1.
  90 
  91              A raidz group with N disks of size X with P parity disks can hold
  92              approximately (N-P)*X bytes and can withstand P device(s) failing
  93              before data integrity is compromised.  The minimum number of
  94              devices in a raidz group is one more than the number of parity
  95              disks.  The recommended number is between 3 and 9 to help
  96              increase performance.
  97 
  98      spare   A special pseudo-vdev which keeps track of available hot spares
  99              for a pool.  For more information, see the Hot Spares section.
 100 
 101      log     A separate intent log device.  If more than one log device is
 102              specified, then writes are load-balanced between devices.  Log
 103              devices can be mirrored.  However, raidz vdev types are not
 104              supported for the intent log.  For more information, see the
 105              Intent Log section.
 106 
 107      cache   A device used to cache storage pool data.  A cache device cannot
 108              be configured as a mirror or raidz group.  For more information,
 109              see the Cache Devices section.
 110 
 111      Virtual devices cannot be nested, so a mirror or raidz virtual device can
 112      only contain files or disks.  Mirrors of mirrors (or other combinations)
 113      are not allowed.
 114 
 115      A pool can have any number of virtual devices at the top of the
 116      configuration (known as "root vdevs").  Data is dynamically distributed
 117      across all top-level devices to balance data among devices.  As new
 118      virtual devices are added, ZFS automatically places data on the newly
 119      available devices.
 120 
 121      Virtual devices are specified one at a time on the command line,
 122      separated by whitespace.  The keywords mirror and raidz are used to
 123      distinguish where a group ends and another begins.  For example, the
 124      following creates two root vdevs, each a mirror of two disks:
 125 
 126      # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
 127 
 128    Device Failure and Recovery
 129      ZFS supports a rich set of mechanisms for handling device failure and
 130      data corruption.  All metadata and data is checksummed, and ZFS
 131      automatically repairs bad data from a good copy when corruption is
 132      detected.
 133 
 134      In order to take advantage of these features, a pool must make use of
 135      some form of redundancy, using either mirrored or raidz groups.  While
 136      ZFS supports running in a non-redundant configuration, where each root
 137      vdev is simply a disk or file, this is strongly discouraged.  A single
 138      case of bit corruption can render some or all of your data unavailable.
 139 
 140      A pool's health status is described by one of three states: online,
 141      degraded, or faulted.  An online pool has all devices operating normally.
 142      A degraded pool is one in which one or more devices have failed, but the
 143      data is still available due to a redundant configuration.  A faulted pool
 144      has corrupted metadata, or one or more faulted devices, and insufficient
 145      replicas to continue functioning.
 146 
     The health of a top-level vdev, such as a mirror or raidz device, is
 148      potentially impacted by the state of its associated vdevs, or component
 149      devices.  A top-level vdev or component device is in one of the following
 150      states:
 151 
 152      DEGRADED  One or more top-level vdevs is in the degraded state because
 153                one or more component devices are offline.  Sufficient replicas
 154                exist to continue functioning.
 155 
 156                One or more component devices is in the degraded or faulted
 157                state, but sufficient replicas exist to continue functioning.
 158                The underlying conditions are as follows:
 159 
 160                o   The number of checksum errors exceeds acceptable levels and
 161                    the device is degraded as an indication that something may
 162                    be wrong.  ZFS continues to use the device as necessary.
 163 
 164                o   The number of I/O errors exceeds acceptable levels.  The
 165                    device could not be marked as faulted because there are
 166                    insufficient replicas to continue functioning.
 167 
 168      FAULTED   One or more top-level vdevs is in the faulted state because one
 169                or more component devices are offline.  Insufficient replicas
 170                exist to continue functioning.
 171 
 172                One or more component devices is in the faulted state, and
 173                insufficient replicas exist to continue functioning.  The
 174                underlying conditions are as follows:
 175 
 176                o   The device could be opened, but the contents did not match
 177                    expected values.
 178 
 179                o   The number of I/O errors exceeds acceptable levels and the
 180                    device is faulted to prevent further use of the device.
 181 
 182      OFFLINE   The device was explicitly taken offline by the zpool offline
 183                command.
 184 
 185      ONLINE    The device is online and functioning.
 186 
 187      REMOVED   The device was physically removed while the system was running.
 188                Device removal detection is hardware-dependent and may not be
 189                supported on all platforms.
 190 
 191      UNAVAIL   The device could not be opened.  If a pool is imported when a
 192                device was unavailable, then the device will be identified by a
 193                unique identifier instead of its path since the path was never
 194                correct in the first place.
 195 
 196      If a device is removed and later re-attached to the system, ZFS attempts
 197      to put the device online automatically.  Device attach detection is
 198      hardware-dependent and might not be supported on all platforms.
 199 
 200    Hot Spares
 201      ZFS allows devices to be associated with pools as "hot spares".  These
 202      devices are not actively used in the pool, but when an active device
 203      fails, it is automatically replaced by a hot spare.  To create a pool
 204      with hot spares, specify a spare vdev with any number of devices.  For
 205      example,
 206 
 207      # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
 208 
 209      Spares can be shared across multiple pools, and can be added with the
 210      zpool add command and removed with the zpool remove command.  Once a
 211      spare replacement is initiated, a new spare vdev is created within the
 212      configuration that will remain there until the original device is
 213      replaced.  At this point, the hot spare becomes available again if
 214      another device fails.
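
     For example, the following adds an additional spare to an existing pool
     and later removes it again (the pool and device names are illustrative):

     # zpool add pool spare c4d0
     # zpool remove pool c4d0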
 215 
     If a pool has a shared spare that is currently being used, the pool
     cannot be exported since other pools may use this shared spare, which may
 218      lead to potential data corruption.
 219 
 220      An in-progress spare replacement can be cancelled by detaching the hot
 221      spare.  If the original faulted device is detached, then the hot spare
 222      assumes its place in the configuration, and is removed from the spare
 223      list of all active pools.
 224 
 225      Spares cannot replace log devices.
 226 
 227    Intent Log
 228      The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 229      transactions.  For instance, databases often require their transactions
 230      to be on stable storage devices when returning from a system call.  NFS
 231      and other applications can also use fsync(3C) to ensure data stability.
 232      By default, the intent log is allocated from blocks within the main pool.
 233      However, it might be possible to get better performance using separate
 234      intent log devices such as NVRAM or a dedicated disk.  For example:
 235 
 236      # zpool create pool c0d0 c1d0 log c2d0
 237 
 238      Multiple log devices can also be specified, and they can be mirrored.
 239      See the EXAMPLES section for an example of mirroring multiple log
 240      devices.
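
     For example, the following creates a pool with a mirrored separate
     intent log device (the device names are illustrative):

     # zpool create pool mirror c0d0 c1d0 log mirror c2d0 c3d0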
 241 
 242      Log devices can be added, replaced, attached, detached, and imported and
 243      exported as part of the larger pool.  Mirrored devices can be removed by
 244      specifying the top-level mirror vdev.
 245 
 246    Cache Devices
 247      Devices can be added to a storage pool as "cache devices".  These devices
 248      provide an additional layer of caching between main memory and disk.  For
 249      read-heavy workloads, where the working set size is much larger than what
     can be cached in main memory, using cache devices allows much more of
     this working set to be served from low-latency media.  Using cache
     devices provides the greatest performance improvement for random read
     workloads of mostly static content.
 254 
 255      To create a pool with cache devices, specify a cache vdev with any number
 256      of devices.  For example:
 257 
 258      # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 259 
 260      Cache devices cannot be mirrored or part of a raidz configuration.  If a
 261      read error is encountered on a cache device, that read I/O is reissued to
 262      the original storage pool device, which might be part of a mirrored or
 263      raidz configuration.
 264 
 265      The content of the cache devices is considered volatile, as is the case
 266      with other system caches.
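
     Cache devices can also be added to or removed from an existing pool; for
     example (the device name is illustrative):

     # zpool add pool cache c4d0
     # zpool remove pool c4d0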
 267 
 268    Pool checkpoint
     Before starting critical procedures that include destructive actions
     (e.g. zfs destroy), an administrator can checkpoint the pool's state
     and, in the case of a mistake or failure, rewind the entire pool back to
     the checkpoint.  Otherwise, the checkpoint can be discarded when the
     procedure has completed successfully.
 274 
 275      A pool checkpoint can be thought of as a pool-wide snapshot and should be
 276      used with care as it contains every part of the pool's state, from
     properties to vdev configuration.  Thus, while a pool has a checkpoint,
     certain operations are not allowed; specifically, vdev
     removal/attach/detach, mirror splitting, and changing the pool's guid.
 280      Adding a new vdev is supported but in the case of a rewind it will have
 281      to be added again.  Finally, users of this feature should keep in mind
 282      that scrubs in a pool that has a checkpoint do not repair checkpointed
 283      data.
 284 
 285      To create a checkpoint for a pool:
 286 
 287      # zpool checkpoint pool
 288 
 289      To later rewind to its checkpointed state, you need to first export it
 290      and then rewind it during import:
 291 
 292      # zpool export pool
 293      # zpool import --rewind-to-checkpoint pool
 294 
 295      To discard the checkpoint from a pool:
 296 
 297      # zpool checkpoint -d pool
 298 
 299      Dataset reservations (controlled by the reservation or refreservation zfs
 300      properties) may be unenforceable while a checkpoint exists, because the
 301      checkpoint is allowed to consume the dataset's reservation.  Finally,
 302      data that is part of the checkpoint but has been freed in the current
 303      state of the pool won't be scanned during a scrub.
 304 
 305    Properties
 306      Each pool has several properties associated with it.  Some properties are
 307      read-only statistics while others are configurable and change the
 308      behavior of the pool.
 309 
 310      The following are read-only properties:
 311 
 312      allocated
 313              Amount of storage space used within the pool.
 314 
 315      bootsize
 316              The size of the system boot partition.  This property can only be
             set at pool creation time and is read-only once the pool is
             created.  Setting this property implies using the -B option.
 319 
 320      capacity
 321              Percentage of pool space used.  This property can also be
 322              referred to by its shortened column name, cap.
 323 
 324      expandsize
 325              Amount of uninitialized space within the pool or device that can
 326              be used to increase the total capacity of the pool.
 327              Uninitialized space consists of any space on an EFI labeled vdev
             which has not been brought online (e.g., using zpool online -e).
 329              This space occurs when a LUN is dynamically expanded.
 330 
 331      fragmentation
 332              The amount of fragmentation in the pool.
 333 
 334      free    The amount of free space available in the pool.
 335 
 336      freeing
 337              After a file system or snapshot is destroyed, the space it was
 338              using is returned to the pool asynchronously.  freeing is the
 339              amount of space remaining to be reclaimed.  Over time freeing
 340              will decrease while free increases.
 341 
 342      health  The current health of the pool.  Health can be one of ONLINE,
 343              DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
 344 
 345      guid    A unique identifier for the pool.
 346 
 347      size    Total size of the storage pool.
 348 
 349      unsupported@feature_guid
 350              Information about unsupported features that are enabled on the
 351              pool.  See zpool-features(5) for details.
 352 
 353      The space usage properties report actual physical space available to the
 354      storage pool.  The physical space can be different from the total amount
 355      of space that any contained datasets can actually use.  The amount of
 356      space used in a raidz configuration depends on the characteristics of the
 357      data being written.  In addition, ZFS reserves some space for internal
 358      accounting that the zfs(1M) command takes into account, but the zpool
 359      command does not.  For non-full pools of a reasonable size, these effects
 360      should be invisible.  For small pools, or pools that are close to being
 361      completely full, these discrepancies may become more noticeable.
 362 
 363      The following property can be set at creation time and import time:
 364 
 365      altroot
 366              Alternate root directory.  If set, this directory is prepended to
 367              any mount points within the pool.  This can be used when
 368              examining an unknown pool where the mount points cannot be
 369              trusted, or in an alternate boot environment, where the typical
 370              paths are not valid.  altroot is not a persistent property.  It
 371              is valid only while the system is up.  Setting altroot defaults
 372              to using cachefile=none, though this may be overridden using an
 373              explicit setting.
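
             For example, the following imports a pool with all of its mount
             points prefixed by an alternate root (the pool name and path are
             illustrative):

             # zpool import -R /a pool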
 374 
 375      The following property can be set only at import time:
 376 
 377      readonly=on|off
 378              If set to on, the pool will be imported in read-only mode.  This
 379              property can also be referred to by its shortened column name,
 380              rdonly.
 381 
 382      The following properties can be set at creation time and import time, and
 383      later changed with the zpool set command:
 384 
 385      autoexpand=on|off
 386              Controls automatic pool expansion when the underlying LUN is
 387              grown.  If set to on, the pool will be resized according to the
 388              size of the expanded device.  If the device is part of a mirror
 389              or raidz then all devices within that mirror/raidz group must be
 390              expanded before the new space is made available to the pool.  The
 391              default behavior is off.  This property can also be referred to
 392              by its shortened column name, expand.
 393 
 394      autoreplace=on|off
 395              Controls automatic device replacement.  If set to off, device
 396              replacement must be initiated by the administrator by using the
 397              zpool replace command.  If set to on, any new device, found in
 398              the same physical location as a device that previously belonged
 399              to the pool, is automatically formatted and replaced.  The
 400              default behavior is off.  This property can also be referred to
 401              by its shortened column name, replace.
 402 
 403      bootfs=pool/dataset
 404              Identifies the default bootable dataset for the root pool.  This
 405              property is expected to be set mainly by the installation and
 406              upgrade programs.
 407 
 408      cachefile=path|none
 409              Controls the location of where the pool configuration is cached.
 410              Discovering all pools on system startup requires a cached copy of
 411              the configuration data that is stored on the root file system.
 412              All pools in this cache are automatically imported when the
 413              system boots.  Some environments, such as install and clustering,
 414              need to cache this information in a different location so that
 415              pools are not automatically imported.  Setting this property
 416              caches the pool configuration in a different location that can
 417              later be imported with zpool import -c.  Setting it to the
 418              special value none creates a temporary pool that is never cached,
 419              and the special value "" (empty string) uses the default
 420              location.
 421 
 422              Multiple pools can share the same cache file.  Because the kernel
 423              destroys and recreates this file when pools are added and
 424              removed, care should be taken when attempting to access this
 425              file.  When the last pool using a cachefile is exported or
 426              destroyed, the file is removed.
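
             For example, the following stores the configuration of an
             existing pool in an alternate cache file that can later be used
             by zpool import -c (the file path is illustrative):

             # zpool set cachefile=/etc/zfs/alternate.cache pool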
 427 
 428      comment=text
 429              A text string consisting of printable ASCII characters that will
 430              be stored such that it is available even if the pool becomes
 431              faulted.  An administrator can provide additional information
 432              about a pool using this property.
 433 
 434      dedupditto=number
 435              Threshold for the number of block ditto copies.  If the reference
 436              count for a deduplicated block increases above this number, a new
 437              ditto copy of this block is automatically stored.  The default
 438              setting is 0 which causes no ditto copies to be created for
 439              deduplicated blocks.  The minimum legal nonzero setting is 100.
 440 
 441      delegation=on|off
 442              Controls whether a non-privileged user is granted access based on
 443              the dataset permissions defined on the dataset.  See zfs(1M) for
 444              more information on ZFS delegated administration.
 445 
 446      failmode=wait|continue|panic
 447              Controls the system behavior in the event of catastrophic pool
 448              failure.  This condition is typically a result of a loss of
 449              connectivity to the underlying storage device(s) or a failure of
 450              all devices within the pool.  The behavior of such an event is
 451              determined as follows:
 452 
 453              wait      Blocks all I/O access until the device connectivity is
 454                        recovered and the errors are cleared.  This is the
 455                        default behavior.
 456 
 457              continue  Returns EIO to any new write I/O requests but allows
 458                        reads to any of the remaining healthy devices.  Any
 459                        write requests that have yet to be committed to disk
 460                        would be blocked.
 461 
 462              panic     Prints out a message to the console and generates a
 463                        system crash dump.
 464 
 465      feature@feature_name=enabled
 466              The value of this property is the current state of feature_name.
 467              The only valid value when setting this property is enabled which
 468              moves feature_name to the enabled state.  See zpool-features(5)
 469              for details on feature states.
 470 
 471      listsnapshots=on|off
 472              Controls whether information about snapshots associated with this
 473              pool is output when zfs list is run without the -t option.  The
 474              default value is off.  This property can also be referred to by
 475              its shortened name, listsnaps.
 476 
 477      version=version
 478              The current on-disk version of the pool.  This can be increased,
 479              but never decreased.  The preferred method of updating pools is
 480              with the zpool upgrade command, though this property can be used
 481              when a specific version is needed for backwards compatibility.
 482              Once feature flags are enabled on a pool this property will no
 483              longer have a value.
 484 
 485    Subcommands
 486      All subcommands that modify state are logged persistently to the pool in
 487      their original form.
 488 
 489      The zpool command provides subcommands to create and destroy storage
 490      pools, add capacity to storage pools, and provide information about the
 491      storage pools.  The following subcommands are supported:
 492 
 493      zpool -?
 494              Displays a help message.
 495 
 496      zpool add [-fn] pool vdev...
 497              Adds the specified virtual devices to the given pool.  The vdev
 498              specification is described in the Virtual Devices section.  The
 499              behavior of the -f option, and the device checks performed are
 500              described in the zpool create subcommand.
 501 
 502              -f      Forces use of vdevs, even if they appear in use or
 503                      specify a conflicting replication level.  Not all devices
 504                      can be overridden in this manner.
 505 
 506              -n      Displays the configuration that would be used without
                     actually adding the vdevs.  The actual addition can still
                     fail due to insufficient privileges or device sharing.
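
             For example, the following displays the configuration that would
             result from adding a mirrored pair of disks to an existing pool,
             without actually adding them (the device names are illustrative):

             # zpool add -n pool mirror c2t0d0 c2t1d0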
 510 
 511      zpool attach [-f] pool device new_device
 512              Attaches new_device to the existing device.  The existing device
 513              cannot be part of a raidz configuration.  If device is not
 514              currently part of a mirrored configuration, device automatically
 515              transforms into a two-way mirror of device and new_device.  If
 516              device is part of a two-way mirror, attaching new_device creates
 517              a three-way mirror, and so on.  In either case, new_device begins
 518              to resilver immediately.
 519 
             -f      Forces use of new_device, even if it appears to be in
 521                      use.  Not all devices can be overridden in this manner.
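
             For example, the following attaches a second disk to an existing
             single-disk pool, converting it into a two-way mirror (the device
             names are illustrative):

             # zpool attach pool c0t0d0 c1t0d0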
 522 
 523      zpool checkpoint [-d, --discard] pool
             Checkpoints the current state of pool, which can later be
 525              restored by zpool import --rewind-to-checkpoint.  The existence
 526              of a checkpoint in a pool prohibits the following zpool commands:
 527              remove, attach, detach, split, and reguid.  In addition, it may
 528              break reservation boundaries if the pool lacks free space.  The
 529              zpool status command indicates the existence of a checkpoint or
 530              the progress of discarding a checkpoint from a pool.  The zpool
 531              list command reports how much space the checkpoint takes from the
 532              pool.
 533 
 534              -d, --discard
 535                      Discards an existing checkpoint from pool.
 536 
 537      zpool clear pool [device]
 538              Clears device errors in a pool.  If no arguments are specified,
 539              all device errors within the pool are cleared.  If one or more
             devices are specified, only those errors associated with the
 541              specified device or devices are cleared.
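
             For example, the following clears the errors associated with a
             single device in a pool (the device name is illustrative):

             # zpool clear pool c0t0d0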
 542 
 543      zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-O
 544              file-system-property=value]... [-R root] [-t tempname] pool
 545              vdev...
 546              Creates a new storage pool containing the virtual devices
 547              specified on the command line.  The pool name must begin with a
 548              letter, and can only contain alphanumeric characters as well as
 549              underscore ("_"), dash ("-"), and period (".").  The pool names
 550              mirror, raidz, spare and log are reserved, as are names beginning
 551              with the pattern c[0-9].  The vdev specification is described in
 552              the Virtual Devices section.
 553 
 554              The command verifies that each device specified is accessible and
 555              not currently in use by another subsystem.  There are some uses,
 556              such as being currently mounted, or specified as the dedicated
             dump device, that prevent a device from ever being used by ZFS.
 558              Other uses, such as having a preexisting UFS file system, can be
 559              overridden with the -f option.
 560 
 561              The command also checks that the replication strategy for the
 562              pool is consistent.  An attempt to combine redundant and non-
 563              redundant storage in a single pool, or to mix disks and files,
 564              results in an error unless -f is specified.  The use of
 565              differently sized devices within a single raidz or mirror group
 566              is also flagged as an error unless -f is specified.
 567 
 568              Unless the -R option is specified, the default mount point is
 569              /pool.  The mount point must not exist or must be empty, or else
 570              the root dataset cannot be mounted.  This can be overridden with
 571              the -m option.
 572 
 573              By default all supported features are enabled on the new pool
 574              unless the -d option is specified.
 575 
             -B      Create a whole disk pool with an EFI System partition to
                     support booting a system with UEFI firmware.  The default
                     size is 256MB.  To create a boot partition with a custom
                     size, set the bootsize property with the -o option.  See
                     the Properties section for details.
 581 
 582              -d      Do not enable any features on the new pool.  Individual
 583                      features can be enabled by setting their corresponding
 584                      properties to enabled with the -o option.  See
 585                      zpool-features(5) for details about feature properties.
 586 
 587              -f      Forces use of vdevs, even if they appear in use or
 588                      specify a conflicting replication level.  Not all devices
 589                      can be overridden in this manner.
 590 
 591              -m mountpoint
 592                      Sets the mount point for the root dataset.  The default
 593                      mount point is /pool or altroot/pool if altroot is
 594                      specified.  The mount point must be an absolute path,
 595                      legacy, or none.  For more information on dataset mount
 596                      points, see zfs(1M).
 597 
 598              -n      Displays the configuration that would be used without
 599                      actually creating the pool.  The actual pool creation can
 600                      still fail due to insufficient privileges or device
 601                      sharing.
 602 
 603              -o property=value
 604                      Sets the given pool properties.  See the Properties
 605                      section for a list of valid properties that can be set.
 606 
 607              -O file-system-property=value
 608                      Sets the given file system properties in the root file
 609                      system of the pool.  See the Properties section of
 610                      zfs(1M) for a list of valid properties that can be set.
 611 
 612              -R root
 613                      Equivalent to -o cachefile=none -o altroot=root
 614 
 615              -t tempname
 616                      Sets the in-core pool name to tempname while the on-disk
 617                      name will be the name specified as the pool name pool.
 618                      This will set the default cachefile property to none.
 619                      This is intended to handle name space collisions when
 620                      creating pools for other systems, such as virtual
 621                      machines or physical machines whose pools live on network
 622                      block devices.
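
             For example, the following creates a single-disk pool named home
             with a custom mount point and automatic expansion enabled (the
             device name is illustrative):

             # zpool create -m /export/zfs -o autoexpand=on home c0t0d0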
 623 
 624      zpool destroy [-f] pool
 625              Destroys the given pool, freeing up any devices for other use.
 626              This command tries to unmount any active datasets before
 627              destroying the pool.
 628 
 629              -f      Forces any active datasets contained within the pool to
 630                      be unmounted.
 631 
 632      zpool detach pool device
 633              Detaches device from a mirror.  The operation is refused if there
 634              are no other valid replicas of the data.
 635 
 636      zpool export [-f] pool...
 637              Exports the given pools from the system.  All devices are marked
 638              as exported, but are still considered in use by other subsystems.
 639              The devices can be moved between systems (even those of different
 640              endianness) and imported as long as a sufficient number of
 641              devices are present.
 642 
 643              Before exporting the pool, all datasets within the pool are
             unmounted.  A pool cannot be exported if it has a shared spare
 645              that is currently being used.
 646 
 647              For pools to be portable, you must give the zpool command whole
 648              disks, not just slices, so that ZFS can label the disks with
 649              portable EFI labels.  Otherwise, disk drivers on platforms of
 650              different endianness will not recognize the disks.
 651 
 652              -f      Forcefully unmount all datasets, using the unmount -f
 653                      command.
 654 
 655                      This command will forcefully export the pool even if it
 656                      has a shared spare that is currently being used.  This
 657                      may lead to potential data corruption.
 658 
 659      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
 660              Retrieves the given list of properties (or all properties if all
 661              is used) for the specified storage pool(s).  These properties are
 662              displayed with the following fields:
 663 
 664                      name          Name of storage pool
 665                      property      Property name
 666                      value         Property value
 667                      source        Property source, either 'default' or 'local'.
 668 
 669              See the Properties section for more information on the available
 670              pool properties.
 671 
 672              -H      Scripted mode.  Do not display headers, and separate
 673                      fields by a single tab instead of arbitrary space.
 674 
 675              -o field
 676                      A comma-separated list of columns to display.
 677                      name,property,value,source is the default value.
 678 
 679              -p      Display numbers in parsable (exact) values.
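
             For example, the following retrieves the size and capacity of a
             pool in scripted, parsable form:

             # zpool get -Hp size,capacity pool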
 680 
 681      zpool history [-il] [pool]...
 682              Displays the command history of the specified pool(s) or all
 683              pools if no pool is specified.
 684 
 685              -i      Displays internally logged ZFS events in addition to user
 686                      initiated events.
 687 
 688              -l      Displays log records in long format, which in addition to
                     standard format includes the user name, the hostname,
 690                      and the zone in which the operation was performed.
 691 
 692      zpool import [-D] [-d dir]
 693              Lists pools available to import.  If the -d option is not
 694              specified, this command searches for devices in /dev/dsk.  The -d
 695              option can be specified multiple times, and all directories are
 696              searched.  If the device appears to be part of an exported pool,
 697              this command displays a summary of the pool with the name of the
 698              pool, a numeric identifier, as well as the vdev layout and
 699              current health of the device for each device or file.  Destroyed
 700              pools, pools that were previously destroyed with the zpool
 701              destroy command, are not listed unless the -D option is
 702              specified.
 703 
 704              The numeric identifier is unique, and can be used instead of the
 705              pool name when multiple exported pools of the same name are
 706              available.
 707 
 708              -c cachefile
 709                      Reads configuration from the given cachefile that was
 710                      created with the cachefile pool property.  This cachefile
 711                      is used instead of searching for devices.
 712 
 713              -d dir  Searches for devices or files in dir.  The -d option can
 714                      be specified multiple times.
 715 
 716              -D      Lists destroyed pools only.
 717 
 718      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
 719              property=value]... [-R root]
 720              Imports all pools found in the search directories.  Identical to
 721              the previous command, except that all pools with a sufficient
 722              number of devices available are imported.  Destroyed pools, pools
 723              that were previously destroyed with the zpool destroy command,
 724              will not be imported unless the -D option is specified.
 725 
 726              -a      Searches for and imports all pools found.
 727 
 728              -c cachefile
 729                      Reads configuration from the given cachefile that was
 730                      created with the cachefile pool property.  This cachefile
 731                      is used instead of searching for devices.
 732 
 733              -d dir  Searches for devices or files in dir.  The -d option can
 734                      be specified multiple times.  This option is incompatible
 735                      with the -c option.
 736 
 737              -D      Imports destroyed pools only.  The -f option is also
 738                      required.
 739 
 740              -f      Forces import, even if the pool appears to be potentially
 741                      active.
 742 
 743              -F      Recovery mode for a non-importable pool.  Attempt to
 744                      return the pool to an importable state by discarding the
 745                      last few transactions.  Not all damaged pools can be
 746                      recovered by using this option.  If successful, the data
 747                      from the discarded transactions is irretrievably lost.
 748                      This option is ignored if the pool is importable or
 749                      already imported.
 750 
 751              -m      Allows a pool to import when there is a missing log
 752                      device.  Recent transactions can be lost because the log
 753                      device will be discarded.
 754 
 755              -n      Used with the -F recovery option.  Determines whether a
 756                      non-importable pool can be made importable again, but
 757                      does not actually perform the pool recovery.  For more
 758                      details about pool recovery mode, see the -F option,
 759                      above.
 760 
 761              -N      Import the pool without mounting any file systems.
 762 
 763              -o mntopts
 764                      Comma-separated list of mount options to use when
 765                      mounting datasets within the pool.  See zfs(1M) for a
 766                      description of dataset properties and mount options.
 767 
 768              -o property=value
 769                      Sets the specified property on the imported pool.  See
 770                      the Properties section for more information on the
 771                      available pool properties.
 772 
 773              -R root
 774                      Sets the cachefile property to none and the altroot
 775                      property to root.
 776 
 777      zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint] [-c cachefile|-d
 778              dir] [-o mntopts] [-o property=value]... [-R root] pool|id
 779              [newpool]
 780              Imports a specific pool.  A pool can be identified by its name or
 781              the numeric identifier.  If newpool is specified, the pool is
 782              imported using the name newpool.  Otherwise, it is imported with
 783              the same name as its exported name.
 784 
 785              If a device is removed from a system without running zpool export
 786              first, the device appears as potentially active.  It cannot be
 787              determined if this was a failed export, or whether the device is
 788              really in use from another host.  To import a pool in this state,
 789              the -f option is required.
 790 
 791              -c cachefile
 792                      Reads configuration from the given cachefile that was
 793                      created with the cachefile pool property.  This cachefile
 794                      is used instead of searching for devices.
 795 
 796              -d dir  Searches for devices or files in dir.  The -d option can
 797                      be specified multiple times.  This option is incompatible
 798                      with the -c option.
 799 
             -D      Imports a destroyed pool.  The -f option is also required.
 801 
 802              -f      Forces import, even if the pool appears to be potentially
 803                      active.
 804 
 805              -F      Recovery mode for a non-importable pool.  Attempt to
 806                      return the pool to an importable state by discarding the
 807                      last few transactions.  Not all damaged pools can be
 808                      recovered by using this option.  If successful, the data
 809                      from the discarded transactions is irretrievably lost.
 810                      This option is ignored if the pool is importable or
 811                      already imported.
 812 
 813              -m      Allows a pool to import when there is a missing log
 814                      device.  Recent transactions can be lost because the log
 815                      device will be discarded.
 816 
 817              -n      Used with the -F recovery option.  Determines whether a
 818                      non-importable pool can be made importable again, but
 819                      does not actually perform the pool recovery.  For more
 820                      details about pool recovery mode, see the -F option,
 821                      above.
 822 
 823              -o mntopts
 824                      Comma-separated list of mount options to use when
 825                      mounting datasets within the pool.  See zfs(1M) for a
 826                      description of dataset properties and mount options.
 827 
 828              -o property=value
 829                      Sets the specified property on the imported pool.  See
 830                      the Properties section for more information on the
 831                      available pool properties.
 832 
 833              -R root
 834                      Sets the cachefile property to none and the altroot
 835                      property to root.
 836 
 837              -t      Used with newpool.  Specifies that newpool is temporary.
 838                      Temporary pool names last until export.  Ensures that the
 839                      original pool name will be used in all label updates and
 840                      therefore is retained upon export.  Will also set
 841                      cachefile property to none when not explicitly specified.
 842 
 843              --rewind-to-checkpoint
 844                      Rewinds pool to the checkpointed state.  Once the pool is
 845                      imported with this flag there is no way to undo the
 846                      rewind.  All changes and data that were written after the
 847                      checkpoint are lost!  The only exception is when the
 848                      readonly mounting option is enabled.  In this case, the
 849                      checkpointed state of the pool is opened and an
                     administrator can see how the pool would look if
 851                      they were to fully rewind.
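
             For example, the following imports an exported pool under a new
             name (the pool names are illustrative):

             # zpool import pool newpool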
 852 
 853      zpool initialize [-cs] pool [device...]
 854              Begins initializing by writing to all unallocated regions on the
 855              specified devices, or all eligible devices in the pool if no
 856              individual devices are specified.  Only leaf data or log devices
 857              may be initialized.
 858 
 859              -c, --cancel
 860                      Cancel initializing on the specified devices, or all
 861                      eligible devices if none are specified.  If one or more
 862                      target devices are invalid or are not currently being
 863                      initialized, the command will fail and no cancellation
 864                      will occur on any device.
 865 
             -s, --suspend
 867                      Suspend initializing on the specified devices, or all
 868                      eligible devices if none are specified.  If one or more
 869                      target devices are invalid or are not currently being
 870                      initialized, the command will fail and no suspension will
 871                      occur on any device.  Initializing can then be resumed by
 872                      running zpool initialize with no flags on the relevant
 873                      target devices.
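
             For example, the following starts initializing all eligible
             devices in a pool and later suspends it:

             # zpool initialize pool
             # zpool initialize -s pool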
 874 
 875      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
 876              Displays I/O statistics for the given pools.  When given an
 877              interval, the statistics are printed every interval seconds until
 878              ^C is pressed.  If no pools are specified, statistics for every
             pool in the system are shown.  If count is specified, the command
 880              exits after count reports are printed.
 881 
 882              -T u|d  Display a time stamp.  Specify u for a printed
 883                      representation of the internal representation of time.
 884                      See time(2).  Specify d for standard date format.  See
 885                      date(1).
 886 
             -v      Verbose statistics.  Reports usage statistics for
 888                      individual vdevs within the pool, in addition to the
 889                      pool-wide statistics.
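
             For example, the following prints per-vdev I/O statistics for a
             pool every five seconds until interrupted:

             # zpool iostat -v pool 5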
 890 
 891      zpool labelclear [-f] device
 892              Removes ZFS label information from the specified device.  The
 893              device must not be part of an active pool configuration.
 894 
 895              -f      Treat exported or foreign devices as inactive.
 896 
 897      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
 898              [interval [count]]
 899              Lists the given pools along with a health status and space usage.
 900              If no pools are specified, all pools in the system are listed.
 901              When given an interval, the information is printed every interval
 902              seconds until ^C is pressed.  If count is specified, the command
 903              exits after count reports are printed.
 904 
 905              -H      Scripted mode.  Do not display headers, and separate
 906                      fields by a single tab instead of arbitrary space.
 907 
 908              -o property
 909                      Comma-separated list of properties to display.  See the
 910                      Properties section for a list of valid properties.  The
 911                      default list is name, size, allocated, free, checkpoint,
 912                      expandsize, fragmentation, capacity, dedupratio, health,
 913                      altroot.
 914 
 915              -p      Display numbers in parsable (exact) values.
 916 
             -T u|d  Display a time stamp.  Specify u for a printed
                     representation of the internal representation of time.
                     See time(2).  Specify d for standard date format.  See
 920                      date(1).
 921 
 922              -v      Verbose statistics.  Reports usage statistics for
 923                      individual vdevs within the pool, in addition to the
                     pool-wide statistics.
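
             For example, the following lists the name, size, and free space
             of every pool in scripted, parsable form:

             # zpool list -Hp -o name,size,free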
 925 
 926      zpool offline [-t] pool device...
 927              Takes the specified physical device offline.  While the device is
 928              offline, no attempt is made to read or write to the device.  This
 929              command is not applicable to spares.
 930 
 931              -t      Temporary.  Upon reboot, the specified physical device
 932                      reverts to its previous state.
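
             For example, the following takes a device offline until the next
             reboot (the device name is illustrative):

             # zpool offline -t pool c0t0d0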
 933 
 934      zpool online [-e] pool device...
 935              Brings the specified physical device online.  This command is not
 936              applicable to spares.
 937 
 938              -e      Expand the device to use all available space.  If the
 939                      device is part of a mirror or raidz then all devices must
 940                      be expanded before the new space will become available to
 941                      the pool.
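
             For example, the following brings a device back online and
             expands it to use all available space (the device name is
             illustrative):

             # zpool online -e pool c0t0d0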
 942 
 943      zpool reguid pool
 944              Generates a new unique identifier for the pool.  You must ensure
 945              that all devices in this pool are online and healthy before
 946              performing this action.
 947 
 948      zpool reopen pool
             Reopens all the vdevs associated with the pool.
 950 
 951      zpool remove [-np] pool device...
             Removes the specified device from the pool.  This command
             currently only supports removing hot spares, cache devices, log
             devices, and mirrored top-level vdevs (mirrors of leaf devices),
             but not raidz vdevs.
 956 
 957              Removing a top-level vdev reduces the total amount of space in
 958              the storage pool.  The specified device will be evacuated by
 959              copying all allocated space from it to the other devices in the
 960              pool.  In this case, the zpool remove command initiates the
 961              removal and returns, while the evacuation continues in the
 962              background.  The removal progress can be monitored with zpool
             status.  This feature must be enabled to be used; see
             zpool-features(5) for details.
 965 
 966              A mirrored top-level device (log or data) can be removed by
 967              specifying the top-level mirror for the same.  Non-log devices or
 968              data devices that are part of a mirrored configuration can be
 969              removed using the zpool detach command.
 970 
 971              -n      Do not actually perform the removal ("no-op").  Instead,
 972                      print the estimated amount of memory that will be used by
 973                      the mapping table after the removal completes.  This is
 974                      nonzero only for top-level vdevs.
 975 
 976              -p      Used in conjunction with the -n flag, displays numbers as
 977                      parsable (exact) values.
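
             For example, assuming the pool contains a mirrored top-level vdev
             named mirror-1 (as reported by zpool status), the following
             prints the estimated memory use of removing it without performing
             the removal:

             # zpool remove -np pool mirror-1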
 978 
 979      zpool remove -s pool
 980              Stops and cancels an in-progress removal of a top-level vdev.
 981 
 982      zpool replace [-f] pool device [new_device]
 983              Replaces old_device with new_device.  This is equivalent to
 984              attaching new_device, waiting for it to resilver, and then
 985              detaching old_device.
 986 
 987              The size of new_device must be greater than or equal to the
 988              minimum size of all the devices in a mirror or raidz
 989              configuration.
 990 
 991              new_device is required if the pool is not redundant.  If
 992              new_device is not specified, it defaults to old_device.  This
 993              form of replacement is useful after an existing disk has failed
 994              and has been physically replaced.  In this case, the new disk may
 995              have the same /dev/dsk path as the old device, even though it is
 996              actually a different disk.  ZFS recognizes this.
 997 
 998              -f      Forces use of new_device, even if its appears to be in
 999                      use.  Not all devices can be overridden in this manner.
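
             For example, the following replaces a failed disk that has been
             physically swapped out in place (the device name is
             illustrative):

             # zpool replace pool c0t2d0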
1000 
1001      zpool scrub [-s | -p] pool...
1002              Begins a scrub or resumes a paused scrub.  The scrub examines all
1003              data in the specified pools to verify that it checksums
1004              correctly.  For replicated (mirror or raidz) devices, ZFS
1005              automatically repairs any damage discovered during the scrub.
1006              The zpool status command reports the progress of the scrub and
1007              summarizes the results of the scrub upon completion.
1008 
1009              Scrubbing and resilvering are very similar operations.  The
1010              difference is that resilvering only examines data that ZFS knows
1011              to be out of date (for example, when attaching a new device to a
1012              mirror or replacing an existing device), whereas scrubbing
1013              examines all data to discover silent errors due to hardware
1014              faults or disk failure.
1015 
1016              Because scrubbing and resilvering are I/O-intensive operations,
             ZFS only allows one at a time.  If a scrub is paused, running
             zpool scrub again resumes it.  If a resilver is in progress, ZFS
             does not allow a scrub to be started until the resilver
             completes.
1020 
1021              -s      Stop scrubbing.
1022 
             -p      Pause scrubbing.  Scrub pause state and progress are
                     periodically synced to disk.  If the system is restarted
                     or the pool is exported during a paused scrub, the scrub
                     remains paused after import until it is resumed.  Once
                     resumed, the scrub picks up from the place where it was
                     last checkpointed to disk.  To resume a paused scrub,
                     issue zpool scrub again.
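
             For example, a scrub of a hypothetical pool named tank could be
             started, paused, and later resumed; the same command both starts
             and resumes a scrub:

             # zpool scrub tank
             # zpool scrub -p tank
             # zpool scrub tank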
1030 
1031      zpool set property=value pool
1032              Sets the given property on the specified pool.  See the
1033              Properties section for more information on what properties can be
1034              set and acceptable values.
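
             For example, to enable automatic expansion on a hypothetical pool
             named tank, assuming the autoexpand property described in the
             Properties section:

             # zpool set autoexpand=on tank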
1035 
1036      zpool split [-n] [-o property=value]... [-R root] pool newpool
1037              Splits devices off pool creating newpool.  All vdevs in pool must
1038              be mirrors.  At the time of the split, newpool will be a replica
1039              of pool.
1040 
             -n      Perform a dry run; do not actually perform the split.
                     Print out the expected configuration of newpool.
1043 
1044              -o property=value
1045                      Sets the specified property for newpool.  See the
1046                      Properties section for more information on the available
1047                      pool properties.
1048 
1049              -R root
1050                      Set altroot for newpool to root and automatically import
1051                      it.
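
             For example, a two-way mirrored pool named tank could be split
             into a new pool tank2, previewing the result with a dry run
             first (pool names are illustrative):

             # zpool split -n tank tank2
             # zpool split tank tank2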
1052 
1053      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
1054              Displays the detailed health status for the given pools.  If no
1055              pool is specified, then the status of each pool in the system is
1056              displayed.  For more information on pool and device health, see
1057              the Device Failure and Recovery section.
1058 
1059              If a scrub or resilver is in progress, this command reports the
1060              percentage done and the estimated time to completion.  Both of
1061              these are only approximate, because the amount of data in the
1062              pool and the other workloads on the system can change.
1063 
1064              -D      Display a histogram of deduplication statistics, showing
1065                      the allocated (physically present on disk) and referenced
1066                      (logically referenced in the pool) block counts and sizes
1067                      by reference count.
1068 
1069              -T u|d  Display a time stamp.  Specify -u for a printed
1070                      representation of the internal representation of time.
1071                      See time(2).  Specify -d for standard date format.  See
1072                      date(1).
1073 
1074              -v      Displays verbose data error information, printing out a
1075                      complete list of all data errors since the last complete
1076                      pool scrub.
1077 
1078              -x      Only display status for pools that are exhibiting errors
1079                      or are otherwise unavailable.  Warnings about pools not
1080                      using the latest on-disk format will not be included.
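
             For example, the following commands would list only pools that
             are degraded or otherwise unhealthy, and then show verbose error
             information for a hypothetical pool named tank:

             # zpool status -x
             # zpool status -v tank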
1081 
1082      zpool upgrade
1083              Displays pools which do not have all supported features enabled
1084              and pools formatted using a legacy ZFS version number.  These
1085              pools can continue to be used, but some features may not be
1086              available.  Use zpool upgrade -a to enable all features on all
1087              pools.
1088 
1089      zpool upgrade -v
1090              Displays legacy ZFS versions supported by the current software.
             See zpool-features(5) for a description of the features that the
             current software supports through feature flags.
1093 
1094      zpool upgrade [-V version] -a|pool...
1095              Enables all supported features on the given pool.  Once this is
1096              done, the pool will no longer be accessible on systems that do
1097              not support feature flags.  See zpool-features(5) for details on
1098              compatibility with systems that support feature flags, but do not
1099              support all features enabled on the pool.
1100 
1101              -a      Enables all supported features on all pools.
1102 
1103              -V version
1104                      Upgrade to the specified legacy version.  If the -V flag
1105                      is specified, no features will be enabled on the pool.
1106                      This option can only be used to increase the version
1107                      number up to the last supported legacy version number.
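
             For example, a pool still formatted with a legacy version could
             be upgraded only as far as a specific legacy version, without
             enabling any feature flags (pool name and version number are
             illustrative):

             # zpool upgrade -V 28 tank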
1108 
1109 EXIT STATUS
1110      The following exit values are returned:
1111 
1112      0       Successful completion.
1113 
1114      1       An error occurred.
1115 
1116      2       Invalid command line options were specified.
1117 
1118 EXAMPLES
1119      Example 1 Creating a RAID-Z Storage Pool
1120              The following command creates a pool with a single raidz root
1121              vdev that consists of six disks.
1122 
1123              # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1124 
1125      Example 2 Creating a Mirrored Storage Pool
1126              The following command creates a pool with two mirrors, where each
1127              mirror contains two disks.
1128 
1129              # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1130 
1131      Example 3 Creating a ZFS Storage Pool by Using Slices
1132              The following command creates an unmirrored pool using two disk
1133              slices.
1134 
1135              # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1136 
1137      Example 4 Creating a ZFS Storage Pool by Using Files
1138              The following command creates an unmirrored pool using files.
1139              While not recommended, a pool based on files can be useful for
1140              experimental purposes.
1141 
1142              # zpool create tank /path/to/file/a /path/to/file/b
1143 
1144      Example 5 Adding a Mirror to a ZFS Storage Pool
1145              The following command adds two mirrored disks to the pool tank,
1146              assuming the pool is already made up of two-way mirrors.  The
1147              additional space is immediately available to any datasets within
1148              the pool.
1149 
1150              # zpool add tank mirror c1t0d0 c1t1d0
1151 
1152      Example 6 Listing Available ZFS Storage Pools
1153              The following command lists all available pools on the system.
1154              In this case, the pool zion is faulted due to a missing device.
1155              The results from this command are similar to the following:
1156 
1157              # zpool list
1158              NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1159              rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
1160              tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
1161              zion       -      -      -      -         -      -      -  FAULTED -
1162 
1163      Example 7 Destroying a ZFS Storage Pool
1164              The following command destroys the pool tank and any datasets
1165              contained within.
1166 
1167              # zpool destroy -f tank
1168 
1169      Example 8 Exporting a ZFS Storage Pool
1170              The following command exports the devices in pool tank so that
1171              they can be relocated or later imported.
1172 
1173              # zpool export tank
1174 
1175      Example 9 Importing a ZFS Storage Pool
1176              The following command displays available pools, and then imports
1177              the pool tank for use on the system.  The results from this
1178              command are similar to the following:
1179 
1180              # zpool import
1181                pool: tank
1182                  id: 15451357997522795478
1183               state: ONLINE
1184              action: The pool can be imported using its name or numeric identifier.
1185              config:
1186 
1187                      tank        ONLINE
1188                        mirror    ONLINE
1189                          c1t2d0  ONLINE
1190                          c1t3d0  ONLINE
1191 
1192              # zpool import tank
1193 
1194      Example 10 Upgrading All ZFS Storage Pools to the Current Version
             The following command upgrades all ZFS storage pools to the
1196              current version of the software.
1197 
1198              # zpool upgrade -a
1199              This system is currently running ZFS version 2.
1200 
1201      Example 11 Managing Hot Spares
1202              The following command creates a new pool with an available hot
1203              spare:
1204 
1205              # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1206 
1207              If one of the disks were to fail, the pool would be reduced to
1208              the degraded state.  The failed device can be replaced using the
1209              following command:
1210 
1211              # zpool replace tank c0t0d0 c0t3d0
1212 
1213              Once the data has been resilvered, the spare is automatically
1214              removed and is made available for use should another device fail.
1215              The hot spare can be permanently removed from the pool using the
1216              following command:
1217 
1218              # zpool remove tank c0t2d0
1219 
1220      Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
             The following command creates a ZFS storage pool consisting of
             two two-way mirrors and mirrored log devices:
1223 
1224              # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1225                c4d0 c5d0
1226 
1227      Example 13 Adding Cache Devices to a ZFS Pool
1228              The following command adds two disks for use as cache devices to
1229              a ZFS storage pool:
1230 
1231              # zpool add pool cache c2d0 c3d0
1232 
1233              Once added, the cache devices gradually fill with content from
1234              main memory.  Depending on the size of your cache devices, it
             could take over an hour for them to fill.  Capacity and reads can
             be monitored using the iostat subcommand as follows:
1237 
1238              # zpool iostat -v pool 5
1239 
     Example 14 Removing a Mirrored Top-Level (Log or Data) Device
1241              The following commands remove the mirrored log device mirror-2
1242              and mirrored top-level data device mirror-1.
1243 
1244              Given this configuration:
1245 
1246                pool: tank
1247               state: ONLINE
1248               scrub: none requested
1249              config:
1250 
1251                       NAME        STATE     READ WRITE CKSUM
1252                       tank        ONLINE       0     0     0
1253                         mirror-0  ONLINE       0     0     0
1254                           c6t0d0  ONLINE       0     0     0
1255                           c6t1d0  ONLINE       0     0     0
1256                         mirror-1  ONLINE       0     0     0
1257                           c6t2d0  ONLINE       0     0     0
1258                           c6t3d0  ONLINE       0     0     0
1259                       logs
1260                         mirror-2  ONLINE       0     0     0
1261                           c4t0d0  ONLINE       0     0     0
1262                           c4t1d0  ONLINE       0     0     0
1263 
1264              The command to remove the mirrored log mirror-2 is:
1265 
1266              # zpool remove tank mirror-2
1267 
1268              The command to remove the mirrored data mirror-1 is:
1269 
1270              # zpool remove tank mirror-1
1271 
     Example 15 Displaying Expanded Space on a Device
1273              The following command displays the detailed information for the
             pool data.  This pool is composed of a single raidz vdev where
1275              one of its devices increased its capacity by 10GB.  In this
1276              example, the pool will not be able to utilize this extra capacity
1277              until all the devices under the raidz vdev have been expanded.
1278 
1279              # zpool list -v data
1280              NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1281              data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
1282                raidz1    23.9G  14.6G  9.30G    48%         -
1283                  c1t1d0      -      -      -      -         -
1284                  c1t2d0      -      -      -      -       10G
1285                  c1t3d0      -      -      -      -         -
1286 
1287 INTERFACE STABILITY
1288      Evolving
1289 
1290 SEE ALSO
1291      zfs(1M), attributes(5), zpool-features(5)
1292 
1293 illumos                         April 27, 2018                         illumos