ZPOOL(1M)                    Maintenance Commands                    ZPOOL(1M)



NAME
       zpool - configures ZFS storage pools

SYNOPSIS
       zpool [-?]


       zpool add [-fn] pool vdev ...


       zpool attach [-f] pool device new_device


       zpool clear pool [device]


       zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
            ... [-m mountpoint] [-R root] pool vdev ...


       zpool destroy [-f] pool


       zpool detach pool device


       zpool export [-f] pool ...


       zpool get [-Hp] [-o field[,...]] "all" | property[,...] pool ...


       zpool history [-il] [pool] ...


       zpool import [-d dir] [-D]


       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a


       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-m] [-R root] [-F [-n]] pool | id [newpool]


       zpool iostat [-T u | d ] [-v] [pool] ... [interval [count]]


       zpool list [-T u | d ] [-Hpv] [-o property[,...]] [pool] ... [interval [count]]


       zpool offline [-t] pool device ...


       zpool online pool device ...


       zpool reguid pool


       zpool reopen pool


       zpool remove pool device ...


       zpool replace [-f] pool device [new_device]


       zpool scrub [-s] pool ...


       zpool set property=value pool


       zpool split [-n] [-R altroot] [-o mntopts] [-o property=value] pool newpool
            [device ...]


       zpool status [-xvD] [-T u | d ] [pool] ... [interval [count]]


       zpool upgrade


       zpool upgrade -v


       zpool upgrade [-V version] -a | pool ...


DESCRIPTION
       The zpool command configures ZFS storage pools. A storage pool is a
       collection of devices that provides physical storage and data
       replication for ZFS datasets.


       All datasets within a storage pool share the same space. See zfs(1M)
       for information on managing datasets.

   Virtual Devices (vdevs)
       A "virtual device" describes a single device or a collection of devices
       organized according to certain performance and fault characteristics.
       The following virtual devices are supported:

       disk
                 A block device, typically located under /dev/dsk. ZFS can use
                 individual slices or partitions, though the recommended mode
                 of operation is to use whole disks. A disk can be specified
                 by a full path, or it can be a shorthand name (the relative
                 portion of the path under "/dev/dsk"). A whole disk can be
                 specified by omitting the slice or partition designation. For
                 example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When
                 given a whole disk, ZFS automatically labels the disk, if
                 necessary.
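
                 For example, a pool might be created from a single whole
                 disk as follows (the pool and device names are
                 illustrative):

                   # zpool create tank c0t0d0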


       file
                 A regular file. The use of files as a backing store is
                 strongly discouraged. It is designed primarily for
                 experimental purposes, as the fault tolerance of a file is
                 only as good as the file system of which it is a part. A file
                 must be specified by a full path.


       mirror
                 A mirror of two or more devices. Data is replicated in an
                 identical fashion across all components of a mirror. A mirror
                 with N disks of size X can hold X bytes and can withstand
                 (N-1) devices failing before data integrity is compromised.
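
                 For example, a two-way mirror might be created as follows
                 (the pool and device names are illustrative):

                   # zpool create tank mirror c0t0d0 c0t1d0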


       raidz
       raidz1
       raidz2
       raidz3
                 A variation on RAID-5 that allows for better distribution of
                 parity and eliminates the "RAID-5 write hole" (in which data
                 and parity become inconsistent after a power loss). Data and
                 parity are striped across all disks within a raidz group.

                 A raidz group can have single, double, or triple parity,
                 meaning that the raidz group can sustain one, two, or three
                 failures, respectively, without losing any data. The raidz1
                 vdev type specifies a single-parity raidz group; the raidz2
                 vdev type specifies a double-parity raidz group; and the
                 raidz3 vdev type specifies a triple-parity raidz group. The
                 raidz vdev type is an alias for raidz1.

                 A raidz group with N disks of size X with P parity disks can
                 hold approximately (N-P)*X bytes and can withstand P device(s)
                 failing before data integrity is compromised. The minimum
                 number of devices in a raidz group is one more than the
                 number of parity disks. The recommended number is between 3
                 and 9 to help increase performance.
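
                 For example, a double-parity raidz group of four disks might
                 be created as follows (the pool and device names are
                 illustrative):

                   # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0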


       spare
                 A special pseudo-vdev which keeps track of available hot
                 spares for a pool. For more information, see the "Hot Spares"
                 section.


       log
                 A separate intent log device. If more than one log device is
                 specified, then writes are load-balanced between devices. Log
                 devices can be mirrored. However, raidz vdev types are not
                 supported for the intent log. For more information, see the
                 "Intent Log" section.


       cache
                 A device used to cache storage pool data. A cache device
                 cannot be configured as a mirror or raidz group. For more
                 information, see the "Cache Devices" section.



       Virtual devices cannot be nested, so a mirror or raidz virtual device
       can only contain files or disks. Mirrors of mirrors (or other
       combinations) are not allowed.


       A pool can have any number of virtual devices at the top of the
       configuration (known as "root vdevs"). Data is dynamically distributed
       across all top-level devices to balance data among devices. As new
       virtual devices are added, ZFS automatically places data on the newly
       available devices.


       Virtual devices are specified one at a time on the command line,
       separated by whitespace. The keywords "mirror" and "raidz" are used to
       distinguish where a group ends and another begins. For example, the
       following creates two root vdevs, each a mirror of two disks:

         # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0



   Device Failure and Recovery
       ZFS supports a rich set of mechanisms for handling device failure and
       data corruption. All metadata and data is checksummed, and ZFS
       automatically repairs bad data from a good copy when corruption is
       detected.


       In order to take advantage of these features, a pool must make use of
       some form of redundancy, using either mirrored or raidz groups. While
       ZFS supports running in a non-redundant configuration, where each root
       vdev is simply a disk or file, this is strongly discouraged. A single
       case of bit corruption can render some or all of your data unavailable.


       A pool's health status is described by one of three states: online,
       degraded, or faulted. An online pool has all devices operating
       normally. A degraded pool is one in which one or more devices have
       failed, but the data is still available due to a redundant
       configuration. A faulted pool has corrupted metadata, or one or more
       faulted devices, and insufficient replicas to continue functioning.


       The health of a top-level vdev, such as a mirror or raidz device, is
       potentially impacted by the state of its associated vdevs, or component
       devices. A top-level vdev or component device is in one of the following
       states:

       DEGRADED
                   One or more top-level vdevs is in the degraded state because
                   one or more component devices are offline. Sufficient
                   replicas exist to continue functioning.

                   One or more component devices is in the degraded or faulted
                   state, but sufficient replicas exist to continue
                   functioning. The underlying conditions are as follows:

                       o      The number of checksum errors exceeds acceptable
                              levels and the device is degraded as an
                              indication that something may be wrong. ZFS
                              continues to use the device as necessary.

                       o      The number of I/O errors exceeds acceptable
                              levels. The device could not be marked as
                              faulted because there are insufficient replicas
                              to continue functioning.


       FAULTED
                   One or more top-level vdevs is in the faulted state because
                   one or more component devices are offline. Insufficient
                   replicas exist to continue functioning.

                   One or more component devices is in the faulted state, and
                   insufficient replicas exist to continue functioning. The
                   underlying conditions are as follows:

                       o      The device could be opened, but the contents did
                              not match expected values.

                       o      The number of I/O errors exceeds acceptable
                              levels and the device is faulted to prevent
                              further use of the device.


       OFFLINE
                   The device was explicitly taken offline by the "zpool
                   offline" command.


       ONLINE
                   The device is online and functioning.


       REMOVED
                   The device was physically removed while the system was
                   running. Device removal detection is hardware-dependent and
                   may not be supported on all platforms.


       UNAVAIL
                   The device could not be opened. If a pool is imported when
                   a device was unavailable, then the device will be
                   identified by a unique identifier instead of its path since
                   the path was never correct in the first place.



       If a device is removed and later re-attached to the system, ZFS attempts
       to put the device online automatically. Device attach detection is
       hardware-dependent and might not be supported on all platforms.
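
       The state of each pool and its devices can be inspected with the
       "zpool status" command; for example, to report only on pools that are
       not healthy (the output format may vary between releases):

         # zpool status -x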

   Hot Spares
       ZFS allows devices to be associated with pools as "hot spares". These
       devices are not actively used in the pool, but when an active device
       fails, it is automatically replaced by a hot spare. To create a pool
       with hot spares, specify a "spare" vdev with any number of devices. For
       example,

         # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0




       Spares can be shared across multiple pools, and can be added with the
       "zpool add" command and removed with the "zpool remove" command. Once a
       spare replacement is initiated, a new "spare" vdev is created within
       the configuration that will remain there until the original device is
       replaced. At this point, the hot spare becomes available again if
       another device fails.


       If a pool has a shared spare that is currently being used, the pool
       cannot be exported, since other pools may use this shared spare, which
       could lead to data corruption.


       An in-progress spare replacement can be cancelled by detaching the hot
       spare. If the original faulted device is detached, then the hot spare
       assumes its place in the configuration, and is removed from the spare
       list of all active pools.


       Spares cannot replace log devices.
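
       For example, a hot spare might be added to, and later removed from, an
       existing pool as follows (the pool and device names are illustrative):

         # zpool add tank spare c4d0
         # zpool remove tank c4d0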

   Intent Log
       The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
       transactions. For instance, databases often require their transactions
       to be on stable storage devices when returning from a system call. NFS
       and other applications can also use fsync() to ensure data stability.
       By default, the intent log is allocated from blocks within the main
       pool. However, it might be possible to get better performance using
       separate intent log devices such as NVRAM or a dedicated disk. For
       example:

         # zpool create pool c0d0 c1d0 log c2d0




       Multiple log devices can also be specified, and they can be mirrored.
       See the EXAMPLES section for an example of mirroring multiple log
       devices.
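
       For example, a pool with a mirrored log might be created as follows
       (the pool and device names are illustrative):

         # zpool create pool c0d0 c1d0 log mirror c2d0 c3d0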


       Log devices can be added, replaced, attached, detached, and imported
       and exported as part of the larger pool. Mirrored log devices can be
       removed by specifying the top-level mirror for the log.

   Cache Devices
       Devices can be added to a storage pool as "cache devices." These
       devices provide an additional layer of caching between main memory and
       disk. For read-heavy workloads, where the working set size is much
       larger than what can be cached in main memory, using cache devices
       allows much more of this working set to be served from low latency
       media. Using cache devices provides the greatest performance
       improvement for random-read workloads of mostly static content.


       To create a pool with cache devices, specify a "cache" vdev with any
       number of devices. For example:

         # zpool create pool c0d0 c1d0 cache c2d0 c3d0




       Cache devices cannot be mirrored or part of a raidz configuration. If a
       read error is encountered on a cache device, that read I/O is reissued
       to the original storage pool device, which might be part of a mirrored
       or raidz configuration.


       The content of the cache devices is considered volatile, as is the case
       with other system caches.
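
       Cache devices can also be added to an existing pool; for example (the
       pool and device names are illustrative):

         # zpool add pool cache c4d0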

   Properties
       Each pool has several properties associated with it. Some properties
       are read-only statistics while others are configurable and change the
       behavior of the pool. The following are read-only properties:

       available
                           Amount of storage available within the pool. This
                           property can also be referred to by its shortened
                           column name, "avail".


       capacity
                           Percentage of pool space used. This property can
                           also be referred to by its shortened column name,
                           "cap".


       expandsize
                           Amount of uninitialized space within the pool or
                           device that can be used to increase the total
                           capacity of the pool. Uninitialized space consists
                           of any space on an EFI labeled vdev which has not
                           been brought online (i.e. zpool online -e). This
                           space occurs when a LUN is dynamically expanded.


       fragmentation
                           The amount of fragmentation in the pool.


       free
                           The amount of free space available in the pool.


       freeing
                           After a file system or snapshot is destroyed, the
                           space it was using is returned to the pool
                           asynchronously. freeing is the amount of space
                           remaining to be reclaimed. Over time freeing will
                           decrease while free increases.


       health
                           The current health of the pool. Health can be
                           "ONLINE", "DEGRADED", "FAULTED", "OFFLINE",
                           "REMOVED", or "UNAVAIL".


       guid
                           A unique identifier for the pool.


       size
                           Total size of the storage pool.


       unsupported@feature_guid
                           Information about unsupported features that are
                           enabled on the pool. See zpool-features(5) for
                           details.


       used
                           Amount of storage space used within the pool.



       The space usage properties report actual physical space available to
       the storage pool. The physical space can be different from the total
       amount of space that any contained datasets can actually use. The
       amount of space used in a raidz configuration depends on the
       characteristics of the data being written. In addition, ZFS reserves
       some space for internal accounting that the zfs(1M) command takes into
       account, but the zpool command does not. For non-full pools of a
       reasonable size, these effects should be invisible. For small pools, or
       pools that are close to being completely full, these discrepancies may
       become more noticeable.
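
       Read-only properties can be examined with the "zpool get" command; for
       example (the pool name is illustrative):

         # zpool get size,capacity,health tank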


       The following property can be set at creation time and import time:

       altroot
           Alternate root directory. If set, this directory is prepended to
           any mount points within the pool. This can be used when examining
           an unknown pool where the mount points cannot be trusted, or in an
           alternate boot environment, where the typical paths are not valid.
           altroot is not a persistent property. It is valid only while the
           system is up. Setting altroot defaults to using cachefile=none,
           though this may be overridden using an explicit setting.
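
           For example, a pool might be imported under an alternate root as
           follows (the pool name and path are illustrative):

             # zpool import -R /mnt tank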



       The following property can be set only at import time:

       readonly=on | off
           If set to on, the pool will be imported in read-only mode. This
           property can also be referred to by its shortened column name,
           rdonly.



       The following properties can be set at creation time and import time,
       and later changed with the zpool set command:

       autoexpand=on | off
           Controls automatic pool expansion when the underlying LUN is grown.
           If set to on, the pool will be resized according to the size of the
           expanded device. If the device is part of a mirror or raidz then
           all devices within that mirror/raidz group must be expanded before
           the new space is made available to the pool. The default behavior
           is off. This property can also be referred to by its shortened
           column name, expand.
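
           For example, this property might be enabled on an existing pool as
           follows (the pool name is illustrative):

             # zpool set autoexpand=on tank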


       autoreplace=on | off
           Controls automatic device replacement. If set to "off", device
           replacement must be initiated by the administrator by using the
           "zpool replace" command. If set to "on", any new device found in
           the same physical location as a device that previously belonged to
           the pool is automatically formatted and replaced. The default
           behavior is "off". This property can also be referred to by its
           shortened column name, "replace".


       bootfs=pool/dataset
           Identifies the default bootable dataset for the root pool. This
           property is expected to be set mainly by the installation and
           upgrade programs.


       cachefile=path | none
           Controls the location where the pool configuration is cached.
           Discovering all pools on system startup requires a cached copy of
           the configuration data that is stored on the root file system. All
           pools in this cache are automatically imported when the system
           boots. Some environments, such as install and clustering, need to
           cache this information in a different location so that pools are
           not automatically imported. Setting this property caches the pool
           configuration in a different location that can later be imported
           with "zpool import -c". Setting it to the special value "none"
           creates a temporary pool that is never cached, and the special
           value '' (empty string) uses the default location.

           Multiple pools can share the same cache file. Because the kernel
           destroys and recreates this file when pools are added and removed,
           care should be taken when attempting to access this file. When the
           last pool using a cachefile is exported or destroyed, the file is
           removed.


       comment=text
           A text string consisting of printable ASCII characters that will be
           stored such that it is available even if the pool becomes faulted.
           An administrator can provide additional information about a pool
           using this property.


       dedupditto=number
           Threshold for the number of block ditto copies. If the reference
           count for a deduplicated block increases above this number, a new
           ditto copy of this block is automatically stored. The default
           setting is 0 which causes no ditto copies to be created for
           deduplicated blocks. The minimum legal nonzero setting is 100.


       delegation=on | off
           Controls whether a non-privileged user is granted access based on
           the dataset permissions defined on the dataset. See zfs(1M) for
           more information on ZFS delegated administration.


       failmode=wait | continue | panic
           Controls the system behavior in the event of catastrophic pool
           failure. This condition is typically a result of a loss of
           connectivity to the underlying storage device(s) or a failure of
           all devices within the pool. The behavior of such an event is
           determined as follows:

           wait
                       Blocks all I/O access until the device connectivity is
                       recovered and the errors are cleared. This is the
                       default behavior.


           continue
                       Returns EIO to any new write I/O requests but allows
                       reads to any of the remaining healthy devices. Any
                       write requests that have yet to be committed to disk
                       would be blocked.


           panic
                       Prints out a message to the console and generates a
                       system crash dump.


       feature@feature_name=enabled
           The value of this property is the current state of feature_name.
           The only valid value when setting this property is enabled, which
           moves feature_name to the enabled state. See zpool-features(5) for
           details on feature states.


       listsnaps=on | off
           Controls whether information about snapshots associated with this
           pool is output when "zfs list" is run without the -t option. The
           default value is "off".


       version=version
           The current on-disk version of the pool. This can be increased, but
           never decreased. The preferred method of updating pools is with the
           "zpool upgrade" command, though this property can be used when a
           specific version is needed for backwards compatibility. Once
           feature flags are enabled on a pool, this property no longer has a
           value.


   Subcommands
       All subcommands that modify state are logged persistently to the pool
       in their original form.


       The zpool command provides subcommands to create and destroy storage
       pools, add capacity to storage pools, and provide information about the
       storage pools. The following subcommands are supported:

       zpool -?
           Displays a help message.


       zpool add [-fn] pool vdev ...
           Adds the specified virtual devices to the given pool. The vdev
           specification is described in the "Virtual Devices" section. The
           behavior of the -f option, and the device checks performed, are
           described in the "zpool create" subcommand.

           -f
                 Forces use of vdevs, even if they appear in use or specify a
                 conflicting replication level. Not all devices can be
                 overridden in this manner.


           -n
                 Displays the configuration that would be used without
                 actually adding the vdevs. The actual addition can still
                 fail due to insufficient privileges or device sharing.

           Do not add a disk that is currently configured as a quorum device
           to a zpool. After a disk is in the pool, that disk can then be
           configured as a quorum device.
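
           For example, the following previews the addition of a second
           mirror to an existing pool (the pool and device names are
           illustrative):

             # zpool add -n tank mirror c2t0d0 c2t1d0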


       zpool attach [-f] pool device new_device
           Attaches new_device to an existing zpool device. The existing
           device cannot be part of a raidz configuration. If device is not
           currently part of a mirrored configuration, device automatically
           transforms into a two-way mirror of device and new_device. If device
           is part of a two-way mirror, attaching new_device creates a three-way
           mirror, and so on. In either case, new_device begins to resilver
           immediately.

           -f
                 Forces use of new_device, even if it appears to be in use.
                 Not all devices can be overridden in this manner.
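
           For example, the following converts the single device c0t0d0 into
           a two-way mirror by attaching c0t1d0 (the pool and device names
           are illustrative):

             # zpool attach tank c0t0d0 c0t1d0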



       zpool clear pool [device] ...
           Clears device errors in a pool. If no arguments are specified, all
           device errors within the pool are cleared. If one or more devices
           is specified, only those errors associated with the specified
           device or devices are cleared.
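
           For example, errors associated with a single device might be
           cleared as follows (the pool and device names are illustrative):

             # zpool clear tank c0t0d0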
 653 
 654 
 655        zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
 656        ... [-m mountpoint] [-R root] pool vdev ...
 657            Creates a new storage pool containing the virtual devices specified
 658            on the command line. The pool name must begin with a letter, and
 659            can only contain alphanumeric characters as well as underscore
 660            ("_"), dash ("-"), and period ("."). The pool names "mirror",
 661            "raidz", "spare" and "log" are reserved, as are names beginning
 662            with the pattern "c[0-9]". The vdev specification is described in
 663            the "Virtual Devices" section.
 664 
 665            The command verifies that each device specified is accessible and
 666            not currently in use by another subsystem. There are some uses,
 667            such as being currently mounted, or specified as the dedicated dump
 668            device, that prevents a device from ever being used by ZFS. Other
 669            uses, such as having a preexisting UFS file system, can be
 670            overridden with the -f option.
 671 
 672            The command also checks that the replication strategy for the pool
 673            is consistent. An attempt to combine redundant and non-redundant
 674            storage in a single pool, or to mix disks and files, results in an
 675            error unless -f is specified. The use of differently sized devices
 676            within a single raidz or mirror group is also flagged as an error
 677            unless -f is specified.
 678 
 679            Unless the -R option is specified, the default mount point is
 680            "/pool". The mount point must not exist or must be empty, or else
 681            the root dataset cannot be mounted. This can be overridden with the
 682            -m option.
 683 
 684            By default all supported features are enabled on the new pool
 685            unless the -d option is specified.
 686 
 687            -f
 688                Forces use of vdevs, even if they appear in use or specify a
 689                conflicting replication level. Not all devices can be
 690                overridden in this manner.
 691 
 692 
 693            -n
 694                Displays the configuration that would be used without actually
 695                creating the pool. The actual pool creation can still fail due
 696                to insufficient privileges or device sharing.
 697 
 698 
 699            -d
 700                Do not enable any features on the new pool. Individual features
 701                can be enabled by setting their corresponding properties to
 702                enabled with the -o option. See zpool-features(5) for details
 703                about feature properties.
 704 
 705 
 706            -o property=value [-o property=value] ...
 707                Sets the given pool properties. See the "Properties" section
 708                for a list of valid properties that can be set.
 709 
 710 
 711            -O file-system-property=value
 712            [-O file-system-property=value] ...
 713                Sets the given file system properties in the root file system
 714                of the pool. See the "Properties" section of zfs(1M) for a list
 715                of valid properties that can be set.
 716 
 717 
 718            -R root
 719                Equivalent to "-o cachefile=none,altroot=root"
 720 
 721 
 722            -m mountpoint
 723                Sets the mount point for the root dataset. The default mount
 724                point is "/pool" or "altroot/pool" if altroot is specified. The
 725                mount point must be an absolute path, "legacy", or "none". For
 726                more information on dataset mount points, see zfs(1M).
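
           For example, the -o, -O, and -m options described above can be
           combined in a single invocation. The pool name, devices, and
           mount point below are only illustrative:

             # zpool create -o autoexpand=on -O compression=on \
                  -m /export/tank tank mirror c0t0d0 c0t1d0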
 727 
 728 
 729 
 730        zpool destroy [-f] pool
 731            Destroys the given pool, freeing up any devices for other use. This
 732            command tries to unmount any active datasets before destroying the
 733            pool.
 734 
 735            -f
 736                  Forces any active datasets contained within the pool to be
 737                  unmounted.
 738 
 739 
 740 
 741        zpool detach pool device
 742            Detaches device from a mirror. The operation is refused if there
 743            are no other valid replicas of the data.
 744 
 745 
 746        zpool export [-f] pool ...
 747            Exports the given pools from the system. All devices are marked as
 748            exported, but are still considered in use by other subsystems. The
 749            devices can be moved between systems (even those of different
 750            endianness) and imported as long as a sufficient number of devices
 751            are present.
 752 
 753            Before exporting the pool, all datasets within the pool are
           unmounted. A pool cannot be exported if it has a shared spare that
           is currently being used.
 756 
 757            For pools to be portable, you must give the zpool command whole
 758            disks, not just slices, so that ZFS can label the disks with
 759            portable EFI labels. Otherwise, disk drivers on platforms of
 760            different endianness will not recognize the disks.
 761 
 762            -f
 763                  Forcefully unmount all datasets, using the "unmount -f"
 764                  command.
 765 
 766                  This command will forcefully export the pool even if it has a
 767                  shared spare that is currently being used. This may lead to
 768                  potential data corruption.
 769 
 770 
 771 
       zpool get [-Hp] [-o field[,...]] "all" | property[,...] pool ...
 773            Retrieves the given list of properties (or all properties if "all"
 774            is used) for the specified storage pool(s). These properties are
 775            displayed with the following fields:
 776 
 777                      name          Name of storage pool
 778                      property      Property name
 779                      value         Property value
 780                      source        Property source, either 'default' or 'local'.
 781 
 782 
 783            See the "Properties" section for more information on the available
 784            pool properties.
 785 
 786 
 787            -H
 788                        Scripted mode. Do not display headers, and separate
 789                        fields by a single tab instead of arbitrary space.
 790 
 791 
 792            -p
 793                  Display numbers in parsable (exact) values.
 794 
 795 
 796            -o field
 797                  A comma-separated list of columns to display.
 798                  name,property,value,source is the default value.
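
           For example, a script might retrieve a single property value,
           without headers, for a hypothetical pool named "tank":

             # zpool get -H -o value capacity tank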
 799 
 800 
 801        zpool history [-il] [pool] ...
 802            Displays the command history of the specified pools or all pools if
 803            no pool is specified.
 804 
 805            -i
 806                  Displays internally logged ZFS events in addition to user
 807                  initiated events.
 808 
 809 
 810            -l
                  Displays log records in long format, which in addition to
                  the standard format includes the user name, the hostname,
                  and the zone in which the operation was performed.
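
           For example, to display the long-format history, including
           internally logged events, for a hypothetical pool named "tank":

             # zpool history -il tank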
 814 
 815 
 816 
 817        zpool import [-d dir | -c cachefile] [-D]
 818            Lists pools available to import. If the -d option is not specified,
 819            this command searches for devices in "/dev/dsk". The -d option can
 820            be specified multiple times, and all directories are searched. If
 821            the device appears to be part of an exported pool, this command
 822            displays a summary of the pool with the name of the pool, a numeric
 823            identifier, as well as the vdev layout and current health of the
           device for each device or file. Destroyed pools (pools that were
           previously destroyed with the "zpool destroy" command) are not
           listed unless the -D option is specified.
 827 
 828            The numeric identifier is unique, and can be used instead of the
 829            pool name when multiple exported pools of the same name are
 830            available.
 831 
 832            -c cachefile
 833                            Reads configuration from the given cachefile that
 834                            was created with the "cachefile" pool property.
 835                            This cachefile is used instead of searching for
 836                            devices.
 837 
 838 
 839            -d dir
 840                            Searches for devices or files in dir. The -d option
 841                            can be specified multiple times.
 842 
 843 
 844            -D
 845                            Lists destroyed pools only.
 846 
 847 
 848 
       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
       [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a
 851            Imports all pools found in the search directories. Identical to the
 852            previous command, except that all pools with a sufficient number of
           devices available are imported. Destroyed pools (pools that were
           previously destroyed with the "zpool destroy" command) will not be
           imported unless the -D option is specified.
 856 
 857            -o mntopts
 858                                 Comma-separated list of mount options to use
 859                                 when mounting datasets within the pool. See
 860                                 zfs(1M) for a description of dataset
 861                                 properties and mount options.
 862 
 863 
 864            -o property=value
 865                                 Sets the specified property on the imported
 866                                 pool. See the "Properties" section for more
 867                                 information on the available pool properties.
 868 
 869 
 870            -c cachefile
 871                                 Reads configuration from the given cachefile
 872                                 that was created with the "cachefile" pool
 873                                 property. This cachefile is used instead of
 874                                 searching for devices.
 875 
 876 
 877            -d dir
 878                                 Searches for devices or files in dir. The -d
 879                                 option can be specified multiple times. This
 880                                 option is incompatible with the -c option.
 881 
 882 
 883            -D
 884                                 Imports destroyed pools only. The -f option is
 885                                 also required.
 886 
 887 
 888            -f
 889                                 Forces import, even if the pool appears to be
 890                                 potentially active.
 891 
 892 
 893            -F
 894                                 Recovery mode for a non-importable pool.
 895                                 Attempt to return the pool to an importable
 896                                 state by discarding the last few transactions.
 897                                 Not all damaged pools can be recovered by
 898                                 using this option. If successful, the data
 899                                 from the discarded transactions is
 900                                 irretrievably lost. This option is ignored if
 901                                 the pool is importable or already imported.
 902 
 903 
 904            -a
 905                                 Searches for and imports all pools found.
 906 
 907 
 908            -m
 909                                 Allows a pool to import when there is a
 910                                 missing log device. Recent transactions can be
 911                                 lost because the log device will be discarded.
 912 
 913 
 914            -R root
 915                                 Sets the "cachefile" property to "none" and
 916                                 the "altroot" property to "root".
 917 
 918 
 919            -N
 920                                 Import the pool without mounting any file
 921                                 systems.
 922 
 923 
 924            -n
 925                                 Used with the -F recovery option. Determines
 926                                 whether a non-importable pool can be made
 927                                 importable again, but does not actually
 928                                 perform the pool recovery. For more details
 929                                 about pool recovery mode, see the -F option,
 930                                 above.
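
           For example, to import all pools found in an alternate search
           directory under a temporary alternate root (the paths below are
           illustrative):

             # zpool import -d /dev/dsk -R /mnt -a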
 931 
 932 
 933 
       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
       [-D] [-f] [-m] [-R root] [-F [-n]] pool | id [newpool]
 936            Imports a specific pool. A pool can be identified by its name or
 937            the numeric identifier. If newpool is specified, the pool is
 938            imported using the name newpool. Otherwise, it is imported with the
 939            same name as its exported name.
 940 
 941            If a device is removed from a system without running "zpool export"
 942            first, the device appears as potentially active. It cannot be
 943            determined if this was a failed export, or whether the device is
 944            really in use from another host. To import a pool in this state,
 945            the -f option is required.
 946 
 947            -o mntopts
 948                Comma-separated list of mount options to use when mounting
 949                datasets within the pool. See zfs(1M) for a description of
 950                dataset properties and mount options.
 951 
 952 
 953            -o property=value
 954                Sets the specified property on the imported pool. See the
 955                "Properties" section for more information on the available pool
 956                properties.
 957 
 958 
 959            -c cachefile
 960                Reads configuration from the given cachefile that was created
 961                with the "cachefile" pool property. This cachefile is used
 962                instead of searching for devices.
 963 
 964 
 965            -d dir
 966                Searches for devices or files in dir. The -d option can be
 967                specified multiple times. This option is incompatible with the
 968                -c option.
 969 
 970 
 971            -D
                Imports a destroyed pool. The -f option is also required.
 973 
 974 
 975            -f
 976                Forces import, even if the pool appears to be potentially
 977                active.
 978 
 979 
 980            -F
 981                Recovery mode for a non-importable pool. Attempt to return the
 982                pool to an importable state by discarding the last few
 983                transactions. Not all damaged pools can be recovered by using
 984                this option. If successful, the data from the discarded
 985                transactions is irretrievably lost. This option is ignored if
 986                the pool is importable or already imported.
 987 
 988 
 989            -R root
 990                Sets the "cachefile" property to "none" and the "altroot"
 991                property to "root".
 992 
 993 
 994            -n
 995                Used with the -F recovery option. Determines whether a non-
 996                importable pool can be made importable again, but does not
 997                actually perform the pool recovery. For more details about pool
 998                recovery mode, see the -F option, above.
 999 
1000 
1001            -m
1002                Allows a pool to import when there is a missing log device.
1003                Recent transactions can be lost because the log device will be
1004                discarded.
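
           For example, to test whether a damaged pool could be returned to
           an importable state without committing to the loss of recent
           transactions, or to import a pool by its numeric identifier under
           a new name (the identifier and names are illustrative):

             # zpool import -F -n tank
             # zpool import 15451357997522795478 tank2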
1005 
1006 
1007 
1008        zpool iostat [-T u | d] [-v] [pool] ...  [interval[count]]
1009            Displays I/O statistics for the given pools. When given an
1010            interval, the statistics are printed every interval seconds until
           Ctrl-C is pressed. If no pools are specified, statistics for every
           pool in the system are shown. If count is specified, the command
1013            exits after count reports are printed.
1014 
1015            -T u | d
1016                        Display a time stamp.
1017 
1018                        Specify u for a printed representation of the internal
1019                        representation of time. See time(2). Specify d for
1020                        standard date format. See date(1).
1021 
1022 
1023            -v
1024                        Verbose statistics. Reports usage statistics for
1025                        individual vdevs within the pool, in addition to the
1026                        pool-wide statistics.
1027 
1028 
1029 
       zpool list [-T u | d] [-Hpv] [-o props[,...]] [pool] ...  [interval[count]]
1031            Lists the given pools along with a health status and space usage.
1032            If no pools are specified, all pools in the system are listed. When
1033            given an interval, the information is printed every interval
1034            seconds until Ctrl-C is pressed. If count is specified, the command
1035            exits after count reports are printed.
1036 
1037            -T u | d
1038                        Display a time stamp.
1039 
1040                        Specify u for a printed representation of the internal
1041                        representation of time. See time(2). Specify d for
1042                        standard date format. See date(1).
1043 
1044 
1045            -H
1046                        Scripted mode. Do not display headers, and separate
1047                        fields by a single tab instead of arbitrary space.
1048 
1049 
1050            -p
1051                        Display numbers in parsable (exact) values.
1052 
1053 
1054            -o props
1055                        Comma-separated list of properties to display. See the
1056                        "Properties" section for a list of valid properties.
                        The default list is "name, size, allocated, free,
                        fragmentation, expandsize, capacity, dedupratio,
                        health, altroot"
1060 
1061 
1062            -v
                        Verbose statistics. Reports usage statistics for
                        individual vdevs within the pool, in addition to the
                        pool-wide statistics.
1066 
1067 
1068 
1069        zpool offline [-t] pool device ...
1070            Takes the specified physical device offline. While the device is
1071            offline, no attempt is made to read or write to the device.
1072 
1073            This command is not applicable to spares or cache devices.
1074 
1075            -t
1076                  Temporary. Upon reboot, the specified physical device reverts
1077                  to its previous state.
1078 
1079 
1080 
1081        zpool online [-e] pool device...
1082            Brings the specified physical device online.
1083 
1084            This command is not applicable to spares or cache devices.
1085 
1086            -e
1087                  Expand the device to use all available space. If the device
1088                  is part of a mirror or raidz then all devices must be
1089                  expanded before the new space will become available to the
1090                  pool.
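
           For example, after every disk in a mirror has been replaced with a
           larger one, the new space can be made available to a hypothetical
           pool named "tank":

             # zpool online -e tank c0t0d0 c0t1d0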
1091 
1092 
1093 
1094        zpool reguid pool
1095            Generates a new unique identifier for the pool. You must ensure
1096            that all devices in this pool are online and healthy before
1097            performing this action.
1098 
1099 
1100        zpool reopen pool
1101            Reopen all the vdevs associated with the pool.
1102 
1103 
1104        zpool remove pool device ...
1105            Removes the specified device from the pool. This command currently
1106            only supports removing hot spares, cache, and log devices. A
1107            mirrored log device can be removed by specifying the top-level
1108            mirror for the log. Non-log devices that are part of a mirrored
1109            configuration can be removed using the zpool detach command. Non-
1110            redundant and raidz devices cannot be removed from a pool.
1111 
1112 
1113        zpool replace [-f] pool old_device [new_device]
1114            Replaces old_device with new_device. This is equivalent to
1115            attaching new_device, waiting for it to resilver, and then
1116            detaching old_device.
1117 
1118            The size of new_device must be greater than or equal to the minimum
1119            size of all the devices in a mirror or raidz configuration.
1120 
1121            new_device is required if the pool is not redundant. If new_device
1122            is not specified, it defaults to old_device. This form of
1123            replacement is useful after an existing disk has failed and has
1124            been physically replaced. In this case, the new disk may have the
1125            same /dev/dsk path as the old device, even though it is actually a
1126            different disk. ZFS recognizes this.
1127 
1128            -f
                  Forces use of new_device, even if it appears to be in use.
1130                  Not all devices can be overridden in this manner.
1131 
1132 
1133 
1134        zpool scrub [-s] pool ...
1135            Begins a scrub. The scrub examines all data in the specified pools
1136            to verify that it checksums correctly. For replicated (mirror or
1137            raidz) devices, ZFS automatically repairs any damage discovered
1138            during the scrub. The "zpool status" command reports the progress
1139            of the scrub and summarizes the results of the scrub upon
1140            completion.
1141 
1142            Scrubbing and resilvering are very similar operations. The
1143            difference is that resilvering only examines data that ZFS knows to
1144            be out of date (for example, when attaching a new device to a
1145            mirror or replacing an existing device), whereas scrubbing examines
1146            all data to discover silent errors due to hardware faults or disk
1147            failure.
1148 
1149            Because scrubbing and resilvering are I/O-intensive operations, ZFS
1150            only allows one at a time. If a scrub is already in progress, the
1151            "zpool scrub" command terminates it and starts a new scrub. If a
1152            resilver is in progress, ZFS does not allow a scrub to be started
1153            until the resilver completes.
1154 
1155            -s
1156                  Stop scrubbing.
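
           For example, to begin verifying all data in a hypothetical pool
           named "tank", and later to stop the scrub before it completes:

             # zpool scrub tank
             # zpool scrub -s tank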
1157 
1158 
1159 
1160        zpool set property=value pool
1161            Sets the given property on the specified pool. See the "Properties"
1162            section for more information on what properties can be set and
1163            acceptable values.
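
           For example, to enable automatic expansion on a hypothetical pool
           named "tank":

             # zpool set autoexpand=on tank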
1164 
1165 
1166        zpool split [-n] [-R altroot] [-o mntopts] [-o property=value] pool newpool
1167        [device ... ]
1168 
1169            Splits off one disk from each mirrored top-level vdev in a pool and
1170            creates a new pool from the split-off disks. The original pool must
1171            be made up of one or more mirrors and must not be in the process of
1172            resilvering. The split subcommand chooses the last device in each
1173            mirror vdev unless overridden by a device specification on the
1174            command line.
1175 
1176            When using a device argument, split includes the specified
1177            device(s) in a new pool and, should any devices remain unspecified,
1178            assigns the last device in each mirror vdev to that pool, as it
1179            does normally. If you are uncertain about the outcome of a split
1180            command, use the -n ("dry-run") option to ensure your command will
1181            have the effect you intend.
1182 
1183 
1184            -n
1185                Displays the configuration that would be created without
1186                actually splitting the pool. The actual pool split could still
1187                fail due to insufficient privileges or device status.
1188 
1189 
1190            -R altroot
1191                Automatically import the newly created pool after splitting,
1192                using the specified altroot parameter for the new pool's
1193                alternate root. See the altroot description in the "Properties"
1194                section, above.
1195 
1196 
1197            -o mntopts
1198                Comma-separated list of mount options to use when mounting
1199                datasets within the pool. See zfs(1M) for a description of
1200                dataset properties and mount options. Valid only in conjunction
1201                with the -R option.
1202 
1203 
1204            -o property=value
1205                Sets the specified property on the new pool. See the
1206                "Properties" section, above, for more information on the
1207                available pool properties.
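
           For example, a mirrored pool named "tank" could be split into a
           new pool "tank2", previewing the result with -n first (the pool
           names are illustrative):

             # zpool split -n tank tank2
             # zpool split tank tank2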
1208 
1209 
1210 
1211        zpool status [-xvD] [-T u | d ] [pool] ... [interval [count]]
1212            Displays the detailed health status for the given pools. If no pool
1213            is specified, then the status of each pool in the system is
1214            displayed. For more information on pool and device health, see the
1215            "Device Failure and Recovery" section.
1216 
1217            If a scrub or resilver is in progress, this command reports the
1218            percentage done and the estimated time to completion. Both of these
1219            are only approximate, because the amount of data in the pool and
1220            the other workloads on the system can change.
1221 
1222            -x
1223                  Only display status for pools that are exhibiting errors or
1224                  are otherwise unavailable. Warnings about pools not using the
1225                  latest on-disk format will not be included.
1226 
1227 
1228            -v
1229                  Displays verbose data error information, printing out a
1230                  complete list of all data errors since the last complete pool
1231                  scrub.
1232 
1233 
1234            -D
1235                  Display a histogram of deduplication statistics, showing the
1236                  allocated (physically present on disk) and referenced
1237                  (logically referenced in the pool) block counts and sizes by
1238                  reference count.
1239 
1240 
1241            -T u | d
1242                        Display a time stamp.
1243 
1244                        Specify u for a printed representation of the internal
1245                        representation of time. See time(2). Specify d for
1246                        standard date format. See date(1).
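
           For example, to display status only for unhealthy pools, with
           verbose data error information:

             # zpool status -xv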
1247 
1248 
1249 
1250        zpool upgrade
1251            Displays pools which do not have all supported features enabled and
1252            pools formatted using a legacy ZFS version number. These pools can
1253            continue to be used, but some features may not be available. Use
1254            "zpool upgrade -a" to enable all features on all pools.
1255 
1256 
1257        zpool upgrade -v
           Displays legacy ZFS versions supported by the current software. See
           zpool-features(5) for a description of the feature flags supported
           by the current software.
1261 
1262 
1263        zpool upgrade [-V version] -a | pool ...
1264            Enables all supported features on the given pool. Once this is
1265            done, the pool will no longer be accessible on systems that do not
1266            support feature flags. See zpool-features(5) for details on
1267            compatibility with systems that support feature flags, but do not
1268            support all features enabled on the pool.
1269 
1270            -a
1271                          Enables all supported features on all pools.
1272 
1273 
1274            -V version
1275                          Upgrade to the specified legacy version. If the -V
1276                          flag is specified, no features will be enabled on the
1277                          pool. This option can only be used to increase the
1278                          version number up to the last supported legacy
1279                          version number.
1280 
1281 
1282 
1283 EXAMPLES
1284        Example 1 Creating a RAID-Z Storage Pool
1285 
1286 
1287        The following command creates a pool with a single raidz root vdev that
1288        consists of six disks.
1289 
1290 
1291          # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1292 
1293 
1294 
1295        Example 2 Creating a Mirrored Storage Pool
1296 
1297 
1298        The following command creates a pool with two mirrors, where each
1299        mirror contains two disks.
1300 
1301 
1302          # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1303 
1304 
1305 
1306        Example 3 Creating a ZFS Storage Pool by Using Slices
1307 
1308 
1309        The following command creates an unmirrored pool using two disk slices.
1310 
1311 
1312          # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1313 
1314 
1315 
1316        Example 4 Creating a ZFS Storage Pool by Using Files
1317 
1318 
1319        The following command creates an unmirrored pool using files. While not
1320        recommended, a pool based on files can be useful for experimental
1321        purposes.
1322 
1323 
1324          # zpool create tank /path/to/file/a /path/to/file/b
1325 
1326 
1327 
1328        Example 5 Adding a Mirror to a ZFS Storage Pool
1329 
1330 
1331        The following command adds two mirrored disks to the pool "tank",
1332        assuming the pool is already made up of two-way mirrors. The additional
1333        space is immediately available to any datasets within the pool.
1334 
1335 
1336          # zpool add tank mirror c1t0d0 c1t1d0
1337 
1338 
1339 
1340        Example 6 Listing Available ZFS Storage Pools
1341 
1342 
1343        The following command lists all available pools on the system. In this
1344        case, the pool zion is faulted due to a missing device.
1345 
1346 
1347 
1348        The results from this command are similar to the following:
1349 
1350 
1351          # zpool list
1352          NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1353          rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
1354          tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
1355          zion       -      -      -      -         -      -      -  FAULTED -
1356 
1357 
1358 
1359        Example 7 Destroying a ZFS Storage Pool
1360 
1361 
1362        The following command destroys the pool "tank" and any datasets
1363        contained within.
1364 
1365 
1366          # zpool destroy -f tank
1367 
1368 
1369 
1370        Example 8 Exporting a ZFS Storage Pool
1371 
1372 
1373        The following command exports the devices in pool tank so that they can
1374        be relocated or later imported.
1375 
1376 
1377          # zpool export tank
1378 
1379 
1380 
1381        Example 9 Importing a ZFS Storage Pool
1382 
1383 
1384        The following command displays available pools, and then imports the
1385        pool "tank" for use on the system.
1386 
1387 
1388 
1389        The results from this command are similar to the following:
1390 
1391 
1392          # zpool import
1393            pool: tank
1394              id: 15451357997522795478
1395           state: ONLINE
1396          action: The pool can be imported using its name or numeric identifier.
1397          config:
1398 
1399                  tank        ONLINE
1400                    mirror    ONLINE
1401                      c1t2d0  ONLINE
1402                      c1t3d0  ONLINE
1403 
1404          # zpool import tank
1405 
1406 
1407 
1408        Example 10 Upgrading All ZFS Storage Pools to the Current Version
1409 
1410 
1411        The following command upgrades all ZFS Storage pools to the current
1412        version of the software.
1413 
1414 
1415          # zpool upgrade -a
1416          This system is currently running ZFS version 2.
1417 
1418 
1419 
1420        Example 11 Managing Hot Spares
1421 
1422 
1423        The following command creates a new pool with an available hot spare:
1424 
1425 
1426          # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1427 
1428 
1429 
1430 
1431        If one of the disks were to fail, the pool would be reduced to the
1432        degraded state. The failed device can be replaced using the following
1433        command:
1434 
1435 
1436          # zpool replace tank c0t0d0 c0t3d0
1437 
1438 
1439 
1440 
       Once the data has been resilvered, the spare is automatically removed
       and is made available should another device fail. The hot spare can be
       permanently removed from the pool using the following command:
1444 
1445 
1446          # zpool remove tank c0t2d0
1447 
1448 
1449 
1450        Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1451 
1452 
       The following command creates a ZFS storage pool consisting of two
       two-way mirrors and mirrored log devices:
1455 
1456 
1457          # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1458             c4d0 c5d0
1459 
1460 
1461 
1462        Example 13 Adding Cache Devices to a ZFS Pool
1463 
1464 
1465        The following command adds two disks for use as cache devices to a ZFS
1466        storage pool:
1467 
1468 
1469          # zpool add pool cache c2d0 c3d0
1470 
1471 
1472 
1473 
1474        Once added, the cache devices gradually fill with content from main
1475        memory.  Depending on the size of your cache devices, it could take
1476        over an hour for them to fill. Capacity and reads can be monitored
1477        using the iostat option as follows:
1478 
1479 
1480          # zpool iostat -v pool 5
1481 
1482 
1483 
1484        Example 14 Removing a Mirrored Log Device
1485 
1486 
1487        The following command removes the mirrored log device mirror-2.
1488 
1489 
1490 
1491        Given this configuration:
1492 
1493 
1494             pool: tank
1495            state: ONLINE
1496            scrub: none requested
1497          config:
1498 
1499                   NAME        STATE     READ WRITE CKSUM
1500                   tank        ONLINE       0     0     0
1501                     mirror-0  ONLINE       0     0     0
1502                       c6t0d0  ONLINE       0     0     0
1503                       c6t1d0  ONLINE       0     0     0
1504                     mirror-1  ONLINE       0     0     0
1505                       c6t2d0  ONLINE       0     0     0
1506                       c6t3d0  ONLINE       0     0     0
1507                   logs
1508                     mirror-2  ONLINE       0     0     0
1509                       c4t0d0  ONLINE       0     0     0
1510                       c4t1d0  ONLINE       0     0     0
1511 
1512 
1513 
1514 
1515        The command to remove the mirrored log mirror-2 is:
1516 
1517 
1518          # zpool remove tank mirror-2
1519 
1520 
1521 
1522        Example 15 Displaying expanded space on a device
1523 
1524 
1525        The following command displays detailed information for the data
1526        pool. This pool consists of a single raidz vdev in which one of the
1527        devices increased its capacity by 10GB. In this example, the pool
1528        cannot utilize this extra capacity until all the devices under the
1529        raidz vdev have been expanded.
1530 
1531 
1532          # zpool list -v data
1533          NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1534          data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
1535            raidz1    23.9G  14.6G  9.30G    48%         -
1536              c1t1d0      -      -      -      -         -
1537              c1t2d0      -      -      -      -       10G
1538              c1t3d0      -      -      -      -         -
1539 
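       Once every device under the raidz vdev has grown, the pool can be
       directed to absorb the new space. A minimal sketch, assuming the
       autoexpand pool property is supported on this release:

```shell
# Enable automatic expansion so the pool grows once all devices
# under a top-level vdev have increased in size (assumes the
# autoexpand pool property is available on this platform).
zpool set autoexpand=on data
```

       With autoexpand enabled, the EXPANDSZ column shown above should
       return to - and the additional capacity should appear in SIZE the
       next time the vdev is reopened.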
1540 
1541 EXIT STATUS
1542        The following exit values are returned:
1543 
1544        0
1545             Successful completion.
1546 
1547 
1548        1
1549             An error occurred.
1550 
1551 
1552        2
1553             Invalid command line options were specified.
1554 
1555 
1556 ATTRIBUTES
1557        See attributes(5) for descriptions of the following attributes:
1558 
1559 
1560 
1561 
1562        +--------------------+-----------------+
1563        |  ATTRIBUTE TYPE    | ATTRIBUTE VALUE |
1564        +--------------------+-----------------+
1565        |Interface Stability | Evolving        |
1566        +--------------------+-----------------+
1567 
1568 SEE ALSO
1569        zfs(1M), zpool-features(5), attributes(5)
1570 
1571 
1572 
1573                                  March 6, 2014                       ZPOOL(1M)