1 .\"
   2 .\" CDDL HEADER START
   3 .\"
   4 .\" The contents of this file are subject to the terms of the
   5 .\" Common Development and Distribution License (the "License").
   6 .\" You may not use this file except in compliance with the License.
   7 .\"
   8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
   9 .\" or http://www.opensolaris.org/os/licensing.
  10 .\" See the License for the specific language governing permissions
  11 .\" and limitations under the License.
  12 .\"
  13 .\" When distributing Covered Code, include this CDDL HEADER in each
  14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15 .\" If applicable, add the following below this CDDL HEADER, with the
  16 .\" fields enclosed by brackets "[]" replaced with your own identifying
  17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
  18 .\"
  19 .\" CDDL HEADER END
  20 .\"
  21 .\"
  22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
  23 .\" Copyright (c) 2013 by Delphix. All rights reserved.
  24 .\" Copyright 2017 Nexenta Systems, Inc.
  25 .\" Copyright (c) 2017 Datto Inc.
  26 .\" Copyright (c) 2017 George Melikov. All Rights Reserved.
  27 .\"
  28 .Dd December 6, 2017
  29 .Dt ZPOOL 1M
  30 .Os
  31 .Sh NAME
  32 .Nm zpool
  33 .Nd configure ZFS storage pools
  34 .Sh SYNOPSIS
  35 .Nm
  36 .Fl \?
  37 .Nm
  38 .Cm add
  39 .Op Fl fn
  40 .Ar pool vdev Ns ...
  41 .Nm
  42 .Cm attach
  43 .Op Fl f
  44 .Ar pool device new_device
  45 .Nm
  46 .Cm clear
  47 .Ar pool
  48 .Op Ar device
  49 .Nm
  50 .Cm create
  51 .Op Fl dfn
  52 .Op Fl B
  53 .Op Fl m Ar mountpoint
  54 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
  55 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
  56 .Op Fl R Ar root
  57 .Ar pool vdev Ns ...
  58 .Nm
  59 .Cm destroy
  60 .Op Fl f
  61 .Ar pool
  62 .Nm
  63 .Cm detach
  64 .Ar pool device
  65 .Nm
  66 .Cm export
  67 .Op Fl cfF
  68 .Op Fl t Ar numthreads
  69 .Ar pool Ns ...
  70 .Nm
  71 .Cm get
  72 .Op Fl Hp
  73 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
  74 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
  75 .Ar pool Ns ...
  76 .Nm
  77 .Cm history
  78 .Op Fl il
  79 .Oo Ar pool Oc Ns ...
  80 .Nm
  81 .Cm import
  82 .Op Fl D
  83 .Op Fl d Ar dir
  84 .Nm
  85 .Cm import
  86 .Fl a
  87 .Op Fl DfmN
  88 .Op Fl F Op Fl n
  89 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
  90 .Op Fl o Ar mntopts
  91 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
  92 .Op Fl R Ar root
  93 .Op Fl t Ar numthreads
  94 .Nm
  95 .Cm import
  96 .Op Fl Dfm
  97 .Op Fl F Op Fl n
  98 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
  99 .Op Fl o Ar mntopts
 100 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
 101 .Op Fl R Ar root
 102 .Op Fl t Ar numthreads
 103 .Ar pool Ns | Ns Ar id
 104 .Op Ar newpool
 105 .Nm
 106 .Cm iostat
 107 .Op Fl v
 108 .Op Fl T Sy u Ns | Ns Sy d
 109 .Oo Ar pool Oc Ns ...
 110 .Op Ar interval Op Ar count
 111 .Nm
 112 .Cm labelclear
 113 .Op Fl f
 114 .Ar device
 115 .Nm
 116 .Cm list
 117 .Op Fl Hpv
 118 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
 119 .Op Fl T Sy u Ns | Ns Sy d
 120 .Oo Ar pool Oc Ns ...
 121 .Op Ar interval Op Ar count
 122 .Nm
 123 .Cm offline
 124 .Op Fl t
 125 .Ar pool Ar device Ns ...
 126 .Nm
 127 .Cm online
 128 .Op Fl e
 129 .Ar pool Ar device Ns ...
 130 .Nm
 131 .Cm reguid
 132 .Ar pool
 133 .Nm
 134 .Cm reopen
 135 .Ar pool
 136 .Nm
 137 .Cm remove
 138 .Ar pool Ar device Ns ...
 139 .Nm
 140 .Cm replace
 141 .Op Fl f
 142 .Ar pool Ar device Op Ar new_device
 143 .Nm
 144 .Cm scrub
 145 .Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
 146 .Ar pool Ns ...
 147 .Nm
 148 .Cm set
 149 .Ar property Ns = Ns Ar value
 150 .Ar pool
 151 .Nm
 152 .Cm split
 153 .Op Fl n
 154 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
 155 .Op Fl R Ar root
 156 .Ar pool newpool
 157 .Nm
 158 .Cm status
 159 .Op Fl Dvx
 160 .Op Fl T Sy u Ns | Ns Sy d
 161 .Oo Ar pool Oc Ns ...
 162 .Op Ar interval Op Ar count
 163 .Nm
 164 .Cm trim
 165 .Op Fl r Ar rate Ns | Ns Fl s
 166 .Ar pool Ns ...
 167 .Nm
 168 .Cm upgrade
 169 .Nm
 170 .Cm upgrade
 171 .Fl v
 172 .Nm
 173 .Cm upgrade
 174 .Op Fl V Ar version
 175 .Fl a Ns | Ns Ar pool Ns ...
 176 .Nm
 177 .Cm vdev-get
 178 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
 179 .Ar pool
 180 .Ar vdev-name Ns | Ns Ar vdev-guid
 181 .Nm
 182 .Cm vdev-set
 183 .Ar property Ns = Ns Ar value
 184 .Ar pool
 185 .Ar vdev-name Ns | Ns Ar vdev-guid
 186 .Sh DESCRIPTION
 187 The
 188 .Nm
 189 command configures ZFS storage pools.
 190 A storage pool is a collection of devices that provides physical storage and
 191 data replication for ZFS datasets.
 192 All datasets within a storage pool share the same space.
 193 See
 194 .Xr zfs 1M
 195 for information on managing datasets.
 196 .Ss Virtual Devices (vdevs)
 197 A "virtual device" describes a single device or a collection of devices
 198 organized according to certain performance and fault characteristics.
 199 The following virtual devices are supported:
 200 .Bl -tag -width Ds
 201 .It Sy disk
 202 A block device, typically located under
 203 .Pa /dev/dsk .
 204 ZFS can use individual slices or partitions, though the recommended mode of
 205 operation is to use whole disks.
 206 A disk can be specified by a full path, or it can be a shorthand name
 207 .Po the relative portion of the path under
 208 .Pa /dev/dsk
 209 .Pc .
 210 A whole disk can be specified by omitting the slice or partition designation.
 211 For example,
 212 .Pa c0t0d0
 213 is equivalent to
 214 .Pa /dev/dsk/c0t0d0s2 .
 215 When given a whole disk, ZFS automatically labels the disk, if necessary.
 216 .It Sy file
 217 A regular file.
 218 The use of files as a backing store is strongly discouraged.
 219 It is designed primarily for experimental purposes, as the fault tolerance of a
 220 file is only as good as the file system of which it is a part.
 221 A file must be specified by a full path.
 222 .It Sy mirror
 223 A mirror of two or more devices.
 224 Data is replicated in an identical fashion across all components of a mirror.
 225 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
 226 failing before data integrity is compromised.
 227 .It Sy raidz , raidz1 , raidz2 , raidz3
 228 A variation on RAID-5 that allows for better distribution of parity and
 229 eliminates the RAID-5
 230 .Qq write hole
 231 .Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
 233 .Pp
 234 A raidz group can have single-, double-, or triple-parity, meaning that the
 235 raidz group can sustain one, two, or three failures, respectively, without
 236 losing any data.
 237 The
 238 .Sy raidz1
 239 vdev type specifies a single-parity raidz group; the
 240 .Sy raidz2
 241 vdev type specifies a double-parity raidz group; and the
 242 .Sy raidz3
 243 vdev type specifies a triple-parity raidz group.
 244 The
 245 .Sy raidz
 246 vdev type is an alias for
 247 .Sy raidz1 .
 248 .Pp
 249 A raidz group with N disks of size X with P parity disks can hold approximately
 250 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
 251 compromised.
 252 The minimum number of devices in a raidz group is one more than the number of
 253 parity disks.
 254 The recommended number is between 3 and 9 to help increase performance.
 255 .It Sy spare
 256 A special pseudo-vdev which keeps track of available hot spares for a pool.
 257 For more information, see the
 258 .Sx Hot Spares
 259 section.
 260 .It Sy log
 261 A separate intent log device.
 262 If more than one log device is specified, then writes are load-balanced between
 263 devices.
 264 Log devices can be mirrored.
 265 However, raidz vdev types are not supported for the intent log.
 266 For more information, see the
 267 .Sx Intent Log
 268 section.
 269 .It Sy cache
 270 A device used to cache storage pool data.
 271 A cache device cannot be configured as a mirror or raidz group.
 272 For more information, see the
 273 .Sx Cache Devices
 274 section.
 275 .El
 276 .Pp
 277 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
 278 contain files or disks.
 279 Mirrors of mirrors
 280 .Pq or other combinations
 281 are not allowed.
 282 .Pp
 283 A pool can have any number of virtual devices at the top of the configuration
 284 .Po known as
 285 .Qq root vdevs
 286 .Pc .
 287 Data is dynamically distributed across all top-level devices to balance data
 288 among devices.
 289 As new virtual devices are added, ZFS automatically places data on the newly
 290 available devices.
 291 .Pp
 292 Virtual devices are specified one at a time on the command line, separated by
 293 whitespace.
 294 The keywords
 295 .Sy mirror
 296 and
 297 .Sy raidz
 298 are used to distinguish where a group ends and another begins.
 299 For example, the following creates two root vdevs, each a mirror of two disks:
 300 .Bd -literal
 301 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
 302 .Ed
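.Pp
For example, the following command would create a pool with a single
double-parity raidz group of five disks
.Po the pool and disk names here are illustrative
.Pc :
.Bd -literal
# zpool create mypool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
.Ed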
 303 .Ss Device Failure and Recovery
 304 ZFS supports a rich set of mechanisms for handling device failure and data
 305 corruption.
 306 All metadata and data is checksummed, and ZFS automatically repairs bad data
 307 from a good copy when corruption is detected.
 308 .Pp
 309 In order to take advantage of these features, a pool must make use of some form
 310 of redundancy, using either mirrored or raidz groups.
 311 While ZFS supports running in a non-redundant configuration, where each root
 312 vdev is simply a disk or file, this is strongly discouraged.
 313 A single case of bit corruption can render some or all of your data unavailable.
 314 .Pp
 315 A pool's health status is described by one of three states: online, degraded,
 316 or faulted.
 317 An online pool has all devices operating normally.
 318 A degraded pool is one in which one or more devices have failed, but the data is
 319 still available due to a redundant configuration.
 320 A faulted pool has corrupted metadata, or one or more faulted devices, and
 321 insufficient replicas to continue functioning.
 322 .Pp
 323 The health of the top-level vdev, such as mirror or raidz device, is
 324 potentially impacted by the state of its associated vdevs, or component
 325 devices.
 326 A top-level vdev or component device is in one of the following states:
 327 .Bl -tag -width "DEGRADED"
 328 .It Sy DEGRADED
 329 One or more top-level vdevs is in the degraded state because one or more
 330 component devices are offline.
 331 Sufficient replicas exist to continue functioning.
 332 .Pp
 333 One or more component devices is in the degraded or faulted state, but
 334 sufficient replicas exist to continue functioning.
 335 The underlying conditions are as follows:
 336 .Bl -bullet
 337 .It
 338 The number of checksum errors exceeds acceptable levels and the device is
 339 degraded as an indication that something may be wrong.
 340 ZFS continues to use the device as necessary.
 341 .It
 342 The number of I/O errors exceeds acceptable levels.
 343 The device could not be marked as faulted because there are insufficient
 344 replicas to continue functioning.
 345 .El
 346 .It Sy FAULTED
 347 One or more top-level vdevs is in the faulted state because one or more
 348 component devices are offline.
 349 Insufficient replicas exist to continue functioning.
 350 .Pp
 351 One or more component devices is in the faulted state, and insufficient
 352 replicas exist to continue functioning.
 353 The underlying conditions are as follows:
 354 .Bl -bullet
 355 .It
 356 The device could be opened, but the contents did not match expected values.
 357 .It
 358 The number of I/O errors exceeds acceptable levels and the device is faulted to
 359 prevent further use of the device.
 360 .El
 361 .It Sy OFFLINE
 362 The device was explicitly taken offline by the
 363 .Nm zpool Cm offline
 364 command.
 365 .It Sy ONLINE
 366 The device is online and functioning.
 367 .It Sy REMOVED
 368 The device was physically removed while the system was running.
 369 Device removal detection is hardware-dependent and may not be supported on all
 370 platforms.
 371 .It Sy UNAVAIL
 372 The device could not be opened.
 373 If a pool is imported when a device was unavailable, then the device will be
 374 identified by a unique identifier instead of its path since the path was never
 375 correct in the first place.
 376 .El
 377 .Pp
 378 If a device is removed and later re-attached to the system, ZFS attempts
 379 to put the device online automatically.
 380 Device attach detection is hardware-dependent and might not be supported on all
 381 platforms.
 382 .Ss Hot Spares
 383 ZFS allows devices to be associated with pools as
 384 .Qq hot spares .
 385 These devices are not actively used in the pool, but when an active device
 386 fails, it is automatically replaced by a hot spare.
 387 To create a pool with hot spares, specify a
 388 .Sy spare
 389 vdev with any number of devices.
 390 For example,
 391 .Bd -literal
 392 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
 393 .Ed
 394 .Pp
 395 Spares can be shared across multiple pools, and can be added with the
 396 .Nm zpool Cm add
 397 command and removed with the
 398 .Nm zpool Cm remove
 399 command.
 400 Once a spare replacement is initiated, a new
 401 .Sy spare
 402 vdev is created within the configuration that will remain there until the
 403 original device is replaced.
 404 At this point, the hot spare becomes available again if another device fails.
 405 .Pp
If a pool has a shared spare that is currently in use, the pool cannot be
exported, since other pools may use this shared spare, which could lead to
data corruption.
 409 .Pp
 410 An in-progress spare replacement can be cancelled by detaching the hot spare.
 411 If the original faulted device is detached, then the hot spare assumes its
 412 place in the configuration, and is removed from the spare list of all active
 413 pools.
 414 .Pp
See the
.Sy sparegroup
vdev property in the
.Sx Device Properties
section for information on how to control spare selection.
 420 .Pp
 421 Spares cannot replace log devices.
 422 .Ss Intent Log
 423 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 424 transactions.
 425 For instance, databases often require their transactions to be on stable storage
 426 devices when returning from a system call.
 427 NFS and other applications can also use
 428 .Xr fsync 3C
 429 to ensure data stability.
 430 By default, the intent log is allocated from blocks within the main pool.
 431 However, it might be possible to get better performance using separate intent
 432 log devices such as NVRAM or a dedicated disk.
 433 For example:
 434 .Bd -literal
 435 # zpool create pool c0d0 c1d0 log c2d0
 436 .Ed
 437 .Pp
 438 Multiple log devices can also be specified, and they can be mirrored.
 439 See the
 440 .Sx EXAMPLES
 441 section for an example of mirroring multiple log devices.
 442 .Pp
 443 Log devices can be added, replaced, attached, detached, and imported and
 444 exported as part of the larger pool.
 445 Mirrored log devices can be removed by specifying the top-level mirror for the
 446 log.
 447 .Ss Cache Devices
 448 Devices can be added to a storage pool as
 449 .Qq cache devices .
 450 These devices provide an additional layer of caching between main memory and
 451 disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
 457 .Pp
 458 To create a pool with cache devices, specify a
 459 .Sy cache
 460 vdev with any number of devices.
 461 For example:
 462 .Bd -literal
 463 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 464 .Ed
 465 .Pp
 466 Cache devices cannot be mirrored or part of a raidz configuration.
 467 If a read error is encountered on a cache device, that read I/O is reissued to
 468 the original storage pool device, which might be part of a mirrored or raidz
 469 configuration.
 470 .Pp
 471 The content of the cache devices is considered volatile, as is the case with
 472 other system caches.
 473 .Ss Pool Properties
 474 Each pool has several properties associated with it.
 475 Some properties are read-only statistics while others are configurable and
 476 change the behavior of the pool.
 477 .Pp
 478 The following are read-only properties:
 479 .Bl -tag -width Ds
 480 .It Cm allocated
 481 Amount of storage space used within the pool.
 482 .It Sy bootsize
The size of the system boot partition.
This property can only be set at pool creation time and is read-only once the
pool is created.
 486 Setting this property implies using the
 487 .Fl B
 488 option.
 489 .It Sy capacity
 490 Percentage of pool space used.
 491 This property can also be referred to by its shortened column name,
 492 .Sy cap .
.It Sy ddt_capped Ns = Ns Sy on Ns | Ns Sy off
When
.Sy ddt_capped
is
.Sy on ,
DDT growth has been stopped.
New unique writes will not be deduplicated, in order to prevent further DDT
growth.
 500 .It Sy expandsize
 501 Amount of uninitialized space within the pool or device that can be used to
 502 increase the total capacity of the pool.
 503 Uninitialized space consists of any space on an EFI labeled vdev which has not
 504 been brought online
.Po e.g., using
 506 .Nm zpool Cm online Fl e
 507 .Pc .
 508 This space occurs when a LUN is dynamically expanded.
 509 .It Sy fragmentation
 510 The amount of fragmentation in the pool.
 511 .It Sy free
 512 The amount of free space available in the pool.
 513 .It Sy freeing
 514 .Sy freeing
 515 is the amount of pool space remaining to be reclaimed.
 516 After a file, dataset or snapshot is destroyed, the space it was using is
 517 returned to the pool asynchronously.
 518 Over time
 519 .Sy freeing
 520 will decrease while
 521 .Sy free
 522 increases.
 523 .It Sy health
 524 The current health of the pool.
 525 Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
 527 .It Sy guid
 528 A unique identifier for the pool.
 529 .It Sy size
 530 Total size of the storage pool.
 531 .It Sy unsupported@ Ns Em feature_guid
 532 Information about unsupported features that are enabled on the pool.
 533 See
 534 .Xr zpool-features 5
 535 for details.
 536 .El
 537 .Pp
 538 The space usage properties report actual physical space available to the
 539 storage pool.
 540 The physical space can be different from the total amount of space that any
 541 contained datasets can actually use.
 542 The amount of space used in a raidz configuration depends on the characteristics
 543 of the data being written.
 544 In addition, ZFS reserves some space for internal accounting that the
 545 .Xr zfs 1M
 546 command takes into account, but the
 547 .Nm
 548 command does not.
 549 For non-full pools of a reasonable size, these effects should be invisible.
 550 For small pools, or pools that are close to being completely full, these
 551 discrepancies may become more noticeable.
 552 .Pp
 553 The following property can be set at creation time and import time:
 554 .Bl -tag -width Ds
 555 .It Sy altroot
 556 Alternate root directory.
 557 If set, this directory is prepended to any mount points within the pool.
 558 This can be used when examining an unknown pool where the mount points cannot be
 559 trusted, or in an alternate boot environment, where the typical paths are not
 560 valid.
 561 .Sy altroot
 562 is not a persistent property.
 563 It is valid only while the system is up.
 564 Setting
 565 .Sy altroot
 566 defaults to using
 567 .Sy cachefile Ns = Ns Sy none ,
 568 though this may be overridden using an explicit setting.
 569 .El
 570 .Pp
 571 The following property can be set only at import time:
 572 .Bl -tag -width Ds
 573 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
 574 If set to
 575 .Sy on ,
 576 the pool will be imported in read-only mode.
 577 This property can also be referred to by its shortened column name,
 578 .Sy rdonly .
 579 .El
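.Pp
For example, a pool could be imported read-only as follows
.Po the pool name is illustrative
.Pc :
.Bd -literal
# zpool import -o readonly=on mypool
.Ed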
 580 .Pp
 581 The following properties can be set at creation time and import time, and later
 582 changed with the
 583 .Nm zpool Cm set
 584 command:
 585 .Bl -tag -width Ds
 586 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
 587 Controls automatic pool expansion when the underlying LUN is grown.
 588 If set to
 589 .Sy on ,
 590 the pool will be resized according to the size of the expanded device.
 591 If the device is part of a mirror or raidz then all devices within that
 592 mirror/raidz group must be expanded before the new space is made available to
 593 the pool.
 594 The default behavior is
 595 .Sy off .
 596 This property can also be referred to by its shortened column name,
 597 .Sy expand .
 598 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
 599 Controls automatic device replacement.
 600 If set to
 601 .Sy off ,
 602 device replacement must be initiated by the administrator by using the
 603 .Nm zpool Cm replace
 604 command.
 605 If set to
 606 .Sy on ,
 607 any new device, found in the same physical location as a device that previously
 608 belonged to the pool, is automatically formatted and replaced.
 609 The default behavior is
 610 .Sy off .
 611 This property can also be referred to by its shortened column name,
 612 .Sy replace .
 613 .It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
 614 When set to
 615 .Sy on ,
 616 while deleting data, ZFS will inform the underlying vdevs of any blocks that
 617 have been marked as freed.
 618 This allows thinly provisioned vdevs to reclaim unused blocks.
 619 Currently, this feature supports sending SCSI UNMAP commands to SCSI and SAS
 620 disk vdevs, and using file hole punching on file-backed vdevs.
 621 SATA TRIM is currently not implemented.
 622 The default setting for this property is
 623 .Sy off .
 624 .Pp
 625 Please note that automatic trimming of data blocks can put significant stress on
 626 the underlying storage devices if they do not handle these commands in a
 627 background, low-priority manner.
 628 In that case, it may be possible to achieve most of the benefits of trimming
 629 free space on the pool by running an on-demand
 630 .Pq manual
 631 trim every once in a while during a maintenance window using the
 632 .Nm zpool Cm trim
 633 command.
 634 .Pp
 635 Automatic trim does not reclaim blocks after a delete immediately.
 636 Instead, it waits approximately 32-64 TXGs
 637 .Po or as defined by the
 638 .Sy zfs_txgs_per_trim
 639 tunable
 640 .Pc
 641 to allow for more efficient aggregation of smaller portions of free space into
 642 fewer larger regions, as well as to allow for longer pool corruption recovery
 643 via
 644 .Nm zpool Cm import Fl F .
 645 .It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
 646 Identifies the default bootable dataset for the root pool.
 647 This property is expected to be set mainly by the installation and upgrade
 648 programs.
 649 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
 650 Controls the location of where the pool configuration is cached.
 651 Discovering all pools on system startup requires a cached copy of the
 652 configuration data that is stored on the root file system.
 653 All pools in this cache are automatically imported when the system boots.
 654 Some environments, such as install and clustering, need to cache this
 655 information in a different location so that pools are not automatically
 656 imported.
 657 Setting this property caches the pool configuration in a different location that
 658 can later be imported with
 659 .Nm zpool Cm import Fl c .
 660 Setting it to the special value
 661 .Sy none
 662 creates a temporary pool that is never cached, and the special value
 663 .Qq
 664 .Pq empty string
 665 uses the default location.
 666 .Pp
 667 Multiple pools can share the same cache file.
 668 Because the kernel destroys and recreates this file when pools are added and
 669 removed, care should be taken when attempting to access this file.
 670 When the last pool using a
 671 .Sy cachefile
 672 is exported or destroyed, the file is removed.
 673 .It Sy comment Ns = Ns Ar text
 674 A text string consisting of printable ASCII characters that will be stored
 675 such that it is available even if the pool becomes faulted.
 676 An administrator can provide additional information about a pool using this
 677 property.
 678 .It Sy dedupditto Ns = Ns Ar number
 679 Threshold for the number of block ditto copies.
 680 If the reference count for a deduplicated block increases above this number, a
 681 new ditto copy of this block is automatically stored.
 682 The default setting is
 683 .Sy 0
 684 which causes no ditto copies to be created for deduplicated blocks.
 685 The minimum legal nonzero setting is
 686 .Sy 100 .
 687 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
 688 Controls whether a non-privileged user is granted access based on the dataset
 689 permissions defined on the dataset.
 690 See
 691 .Xr zfs 1M
 692 for more information on ZFS delegated administration.
 693 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
 694 Controls the system behavior in the event of catastrophic pool failure.
 695 This condition is typically a result of a loss of connectivity to the underlying
 696 storage device(s) or a failure of all devices within the pool.
 697 The behavior of such an event is determined as follows:
 698 .Bl -tag -width "continue"
 699 .It Sy wait
 700 Blocks all I/O access until the device connectivity is recovered and the errors
 701 are cleared.
 702 This is the default behavior.
 703 .It Sy continue
 704 Returns
 705 .Er EIO
 706 to any new write I/O requests but allows reads to any of the remaining healthy
 707 devices.
 708 Any write requests that have yet to be committed to disk would be blocked.
 709 .It Sy panic
 710 Prints out a message to the console and generates a system crash dump.
 711 .El
 712 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
 713 The value of this property is the current state of
 714 .Ar feature_name .
 715 The only valid value when setting this property is
 716 .Sy enabled
 717 which moves
 718 .Ar feature_name
 719 to the enabled state.
 720 See
 721 .Xr zpool-features 5
 722 for details on feature states.
 723 .It Sy forcetrim Ns = Ns Sy on Ns | Ns Sy off
 724 Controls whether device support is taken into consideration when issuing TRIM
 725 commands to the underlying vdevs of the pool.
 726 Normally, both automatic trim and on-demand
 727 .Pq manual
 728 trim only issue TRIM commands if a vdev indicates support for it.
 729 Setting the
 730 .Sy forcetrim
 731 property to
 732 .Sy on
 733 will force ZFS to issue TRIMs even if it thinks a device does not support it.
 734 The default value is
 735 .Sy off .
 736 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
 737 Controls whether information about snapshots associated with this pool is
 738 output when
 739 .Nm zfs Cm list
 740 is run without the
 741 .Fl t
 742 option.
 743 The default value is
 744 .Sy off .
 745 This property can also be referred to by its shortened name,
 746 .Sy listsnaps .
 747 .It Sy scrubprio Ns = Ns Ar 0-100 Ns
 748 Sets the priority of scrub I/O for this pool.
 749 This is a number from 0 to 100, higher numbers meaning a higher priority
 750 and thus more bandwidth allocated to scrub I/O, provided there is other
 751 I/O competing for bandwidth.
 752 If no other I/O is competing for bandwidth, scrub is allowed to consume
 753 as much bandwidth as the pool is capable of providing.
 754 A priority of
 755 .Ar 100
 756 means that scrub I/O has equal priority to any other user-generated I/O.
 757 The value
 758 .Ar 0
is special, because it turns off per-pool scrub priority control.
 760 In that case, scrub I/O priority is determined by the
 761 .Sy zfs_vdev_scrub_min_active
 762 and
 763 .Sy zfs_vdev_scrub_max_active
 764 tunables.
 765 The default value is
 766 .Ar 5 .
 767 .It Sy resilverprio Ns = Ns Ar 0-100 Ns
 768 Same as the
 769 .Sy scrubprio
 770 property, but controls the priority for resilver I/O.
 771 The default value is
 772 .Ar 10 .
 773 When set to
 774 .Ar 0
 775 the global tunables used for queue sizing are
 776 .Sy zfs_vdev_resilver_min_active
 777 and
 778 .Sy zfs_vdev_resilver_max_active .
 779 .It Sy version Ns = Ns Ar version
 780 The current on-disk version of the pool.
 781 This can be increased, but never decreased.
 782 The preferred method of updating pools is with the
 783 .Nm zpool Cm upgrade
 784 command, though this property can be used when a specific version is needed for
 785 backwards compatibility.
 786 Once feature flags are enabled on a pool this property will no longer have a
 787 value.
 788 .El
 789 .Ss Device Properties
 790 Each device can have several properties associated with it.
These properties override global tunables and are designed to provide more
 792 control over the operational parameters of this specific device, as well as to
 793 help manage this device.
 794 .Pp
 795 The
 796 .Sy cos
 797 device property can reference a CoS property descriptor by name, in which case,
 798 the values of device properties are determined according to the following rule:
 799 the device settings override CoS settings, which in turn, override the global
 800 tunables.
 801 .Pp
 802 The following device properties are available:
 803 .Bl -tag -width Ds
 804 .It Sy cos Ns = Ns Ar cos-name
 805 This property indicates whether the device is associated with a CoS property
 806 descriptor object.
 807 If so, the properties from the CoS descriptor that are not explicitly overridden
 808 by the device properties are in effect for this device.
 809 .It Sy l2arc_ddt Ns = Ns Sy on Ns | Ns Sy off
 810 This property is meaningful for L2ARC devices.
 811 If this property is turned
 812 .Sy on
 813 ZFS will dedicate the L2ARC device to cache deduplication table
 814 .Pq DDT
 815 buffers only.
 816 .It Sy prefread Ns = Ns Sy 1 Ns .. Ns Sy 100
 817 This property is meaningful for devices that belong to a mirror.
 818 The property determines the preference that is given to the device when reading
 819 from the mirror.
 820 The ratio of the value to the sum of the values of this property for all the
 821 devices in the mirror determines the relative frequency
 822 .Po which also is considered
 823 .Qq probability
 824 .Pc
 825 of reading from this specific device.
 826 .It Sy sparegroup Ns = Ns Ar group-name
 827 This property indicates whether the device is a part of a spare device group.
 828 Devices in the pool
 829 .Pq including spares
 830 can be labeled with strings that are meaningful in the context of the management
 831 workflow in effect.
 832 When a failed device is automatically replaced by spares, the spares whose
 833 .Sy sparegroup
property matches the failed device's property are used first.
 835 .It Xo
 836 .Bro Sy read Ns | Ns Sy aread Ns | Ns Sy write Ns | Ns
 837 .Sy awrite Ns | Ns Sy scrub Ns | Ns Sy resilver Brc Ns _ Ns
 838 .Bro Sy minactive Ns | Ns Sy maxactive Brc Ns = Ns
 839 .Sy 1 Ns .. Ns Sy 1000
 840 .Xc
These properties define the minimum/maximum number of outstanding active
requests for the queueable classes of I/O requests as defined by the
ZFS I/O scheduler.
The classes are read, asynchronous read, write, asynchronous write, scrub,
and resilver.
 846 .El
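.Pp
Device properties can be examined and modified with the
.Nm zpool Cm vdev-get
and
.Nm zpool Cm vdev-set
subcommands.
For example
.Po the pool and device names are illustrative
.Pc :
.Bd -literal
# zpool vdev-set prefread=50 mypool c0t1d0
# zpool vdev-get all mypool c0t1d0
.Ed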
 847 .Ss Subcommands
 848 All subcommands that modify state are logged persistently to the pool in their
 849 original form.
 850 .Pp
 851 The
 852 .Nm
 853 command provides subcommands to create and destroy storage pools, add capacity
 854 to storage pools, and provide information about the storage pools.
 855 The following subcommands are supported:
 856 .Bl -tag -width Ds
 857 .It Xo
 858 .Nm
 859 .Fl \?
 860 .Xc
 861 Displays a help message.
 862 .It Xo
 863 .Nm
 864 .Cm add
 865 .Op Fl fn
 866 .Ar pool vdev Ns ...
 867 .Xc
 868 Adds the specified virtual devices to the given pool.
 869 The
 870 .Ar vdev
 871 specification is described in the
 872 .Sx Virtual Devices
 873 section.
 874 The behavior of the
 875 .Fl f
 876 option, and the device checks performed are described in the
 877 .Nm zpool Cm create
 878 subcommand.
 879 .Bl -tag -width Ds
 880 .It Fl f
 881 Forces use of
 882 .Ar vdev Ns s ,
 883 even if they appear in use or specify a conflicting replication level.
 884 Not all devices can be overridden in this manner.
 885 .It Fl n
 886 Displays the configuration that would be used without actually adding the
 887 .Ar vdev Ns s .
The actual addition of devices can still fail due to insufficient privileges
or device sharing.
 890 .El
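.Pp
For example, the following would add a new mirrored top-level vdev to an
existing pool
.Po the pool and disk names are illustrative
.Pc :
.Bd -literal
# zpool add mypool mirror c2t0d0 c2t1d0
.Ed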
 891 .It Xo
 892 .Nm
 893 .Cm attach
 894 .Op Fl f
 895 .Ar pool device new_device
 896 .Xc
 897 Attaches
 898 .Ar new_device
 899 to the existing
 900 .Ar device .
 901 The existing device cannot be part of a raidz configuration.
 902 If
 903 .Ar device
 904 is not currently part of a mirrored configuration,
 905 .Ar device
 906 automatically transforms into a two-way mirror of
 907 .Ar device
 908 and
 909 .Ar new_device .
 910 If
 911 .Ar device
 912 is part of a two-way mirror, attaching
 913 .Ar new_device
 914 creates a three-way mirror, and so on.
 915 In either case,
 916 .Ar new_device
 917 begins to resilver immediately.
 918 .Bl -tag -width Ds
 919 .It Fl f
 920 Forces use of
 921 .Ar new_device ,
even if it appears to be in use.
 923 Not all devices can be overridden in this manner.
 924 .El
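.Pp
For example, the following would convert an existing single device into a
two-way mirror by attaching a second device
.Po the pool and disk names are illustrative
.Pc :
.Bd -literal
# zpool attach mypool c0t0d0 c3t0d0
.Ed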
 925 .It Xo
 926 .Nm
 927 .Cm clear
 928 .Ar pool
 929 .Op Ar device
 930 .Xc
 931 Clears device errors in a pool.
 932 If no arguments are specified, all device errors within the pool are cleared.
 933 If one or more devices is specified, only those errors associated with the
 934 specified device or devices are cleared.
 935 .It Xo
 936 .Nm
 937 .Cm create
 938 .Op Fl dfn
 939 .Op Fl B
 940 .Op Fl m Ar mountpoint
 941 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
 942 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
 943 .Op Fl R Ar root
 944 .Ar pool vdev Ns ...
 945 .Xc
 946 Creates a new storage pool containing the virtual devices specified on the
 947 command line.
 948 The pool name must begin with a letter, and can only contain
 949 alphanumeric characters as well as underscore
 950 .Pq Qq Sy _ ,
 951 dash
 952 .Pq Qq Sy - ,
 953 and period
 954 .Pq Qq Sy \&. .
 955 The pool names
 956 .Sy mirror ,
 957 .Sy raidz ,
 958 .Sy spare
 959 and
 960 .Sy log
 961 are reserved, as are names beginning with the pattern
 962 .Sy c[0-9] .
 963 The
 964 .Ar vdev
 965 specification is described in the
 966 .Sx Virtual Devices
 967 section.
 968 .Pp
 969 The command verifies that each device specified is accessible and not currently
 970 in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
 973 Other uses, such as having a preexisting UFS file system, can be overridden with
 974 the
 975 .Fl f
 976 option.
 977 .Pp
 978 The command also checks that the replication strategy for the pool is
 979 consistent.
 980 An attempt to combine redundant and non-redundant storage in a single pool, or
 981 to mix disks and files, results in an error unless
 982 .Fl f
 983 is specified.
 984 The use of differently sized devices within a single raidz or mirror group is
 985 also flagged as an error unless
 986 .Fl f
 987 is specified.
 988 .Pp
 989 Unless the
 990 .Fl R
 991 option is specified, the default mount point is
 992 .Pa / Ns Ar pool .
 993 The mount point must not exist or must be empty, or else the root dataset
 994 cannot be mounted.
 995 This can be overridden with the
 996 .Fl m
 997 option.
 998 .Pp
 999 By default all supported features are enabled on the new pool unless the
1000 .Fl d
1001 option is specified.
1002 .Bl -tag -width Ds
1003 .It Fl B
Create a whole-disk pool with an EFI System partition to support booting the
system with UEFI firmware.
The default partition size is 256MB.
To create a boot partition with a custom size, set the
.Sy bootsize
property with the
.Fl o
option.
See the
.Sx Pool Properties
section for details.
1015 .It Fl d
1016 Do not enable any features on the new pool.
1017 Individual features can be enabled by setting their corresponding properties to
1018 .Sy enabled
1019 with the
1020 .Fl o
1021 option.
1022 See
1023 .Xr zpool-features 5
1024 for details about feature properties.
1025 .It Fl f
1026 Forces use of
1027 .Ar vdev Ns s ,
1028 even if they appear in use or specify a conflicting replication level.
1029 Not all devices can be overridden in this manner.
1030 .It Fl m Ar mountpoint
1031 Sets the mount point for the root dataset.
1032 The default mount point is
1033 .Pa /pool
1034 or
1035 .Pa altroot/pool
1036 if
1037 .Ar altroot
1038 is specified.
1039 The mount point must be an absolute path,
1040 .Sy legacy ,
1041 or
1042 .Sy none .
1043 For more information on dataset mount points, see
1044 .Xr zfs 1M .
1045 .It Fl n
1046 Displays the configuration that would be used without actually creating the
1047 pool.
1048 The actual pool creation can still fail due to insufficient privileges or
1049 device sharing.
1050 .It Fl o Ar property Ns = Ns Ar value
1051 Sets the given pool properties.
1052 See the
1053 .Sx Pool Properties
1054 section for a list of valid properties that can be set.
1055 .It Fl O Ar file-system-property Ns = Ns Ar value
1056 Sets the given file system properties in the root file system of the pool.
1057 See the
1058 .Sx Properties
1059 section of
1060 .Xr zfs 1M
1061 for a list of valid properties that can be set.
1062 .It Fl R Ar root
1063 Equivalent to
1064 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1065 .El
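.Pp
For example, the following would create a mirrored pool with an alternate
mount point for the root dataset and a pool property set at creation time
.Po the pool, mount point, and disk names are illustrative
.Pc :
.Bd -literal
# zpool create -m /export/data -o autoexpand=on mypool mirror c0t0d0 c0t1d0
.Ed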
1066 .It Xo
1067 .Nm
1068 .Cm destroy
1069 .Op Fl f
1070 .Ar pool
1071 .Xc
1072 Destroys the given pool, freeing up any devices for other use.
1073 This command tries to unmount any active datasets before destroying the pool.
1074 .Bl -tag -width Ds
1075 .It Fl f
1076 Forces any active datasets contained within the pool to be unmounted.
1077 .El
1078 .It Xo
1079 .Nm
1080 .Cm detach
1081 .Ar pool device
1082 .Xc
1083 Detaches
1084 .Ar device
1085 from a mirror.
1086 The operation is refused if there are no other valid replicas of the data.
1087 .It Xo
1088 .Nm
1089 .Cm export
1090 .Op Fl cfF
1091 .Op Fl t Ar numthreads
1092 .Ar pool Ns ...
1093 .Xc
1094 Exports the given pools from the system.
1095 All devices are marked as exported, but are still considered in use by other
1096 subsystems.
1097 The devices can be moved between systems
1098 .Pq even those of different endianness
1099 and imported as long as a sufficient number of devices are present.
1100 .Pp
1101 Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
1104 .Pp
1105 For pools to be portable, you must give the
1106 .Nm
1107 command whole disks, not just slices, so that ZFS can label the disks with
1108 portable EFI labels.
1109 Otherwise, disk drivers on platforms of different endianness will not recognize
1110 the disks.
1111 .Bl -tag -width Ds
1112 .It Fl c
1113 Keep configuration information of exported pool in the cache file.
1114 .It Fl f
1115 Forcefully unmount all datasets, using the
1116 .Nm unmount Fl f
1117 command.
1118 .Pp
1119 This command will forcefully export the pool even if it has a shared spare that
1120 is currently being used.
1121 This may lead to potential data corruption.
1122 .It Fl F
1123 Do not update device labels or cache file with new configuration.
1124 .It Fl t Ar numthreads
1125 Unmount datasets in parallel using up to
1126 .Ar numthreads
1127 threads.
1128 .El
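.Pp
For example, the following would export a pool while unmounting its datasets
in parallel
.Po the pool name and thread count are illustrative
.Pc :
.Bd -literal
# zpool export -t 8 mypool
.Ed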
1129 .It Xo
1130 .Nm
1131 .Cm get
1132 .Op Fl Hp
1133 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1134 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1135 .Ar pool Ns ...
1136 .Xc
1137 Retrieves the given list of properties
1138 .Po
1139 or all properties if
1140 .Sy all
1141 is used
1142 .Pc
1143 for the specified storage pool(s).
1144 These properties are displayed with the following fields:
1145 .Bd -literal
1146         name          Name of storage pool
1147         property      Property name
1148         value         Property value
1149         source        Property source, either 'default' or 'local'.
1150 .Ed
1151 .Pp
1152 See the
1153 .Sx Pool Properties
1154 section for more information on the available pool properties.
1155 .Bl -tag -width Ds
1156 .It Fl H
1157 Scripted mode.
1158 Do not display headers, and separate fields by a single tab instead of arbitrary
1159 space.
1160 .It Fl o Ar field
1161 A comma-separated list of columns to display.
1162 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1163 is the default value.
1164 .It Fl p
1165 Display numbers in parsable (exact) values.
1166 .El
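.Pp
For example, the following would print the capacity and health of a pool in
scripted mode
.Po the pool name is illustrative
.Pc :
.Bd -literal
# zpool get -H -o name,value capacity,health mypool
.Ed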
1167 .It Xo
1168 .Nm
1169 .Cm history
1170 .Op Fl il
1171 .Oo Ar pool Oc Ns ...
1172 .Xc
1173 Displays the command history of the specified pool(s) or all pools if no pool is
1174 specified.
1175 .Bl -tag -width Ds
1176 .It Fl i
1177 Displays internally logged ZFS events in addition to user initiated events.
1178 .It Fl l
1179 Displays log records in long format, which in addition to standard format
1180 includes, the user name, the hostname, and the zone in which the operation was
1181 performed.
1182 .El
1183 .It Xo
1184 .Nm
1185 .Cm import
1186 .Op Fl D
1187 .Op Fl d Ar dir
1188 .Xc
1189 Lists pools available to import.
1190 If the
1191 .Fl d
1192 option is not specified, this command searches for devices in
1193 .Pa /dev/dsk .
1194 The
1195 .Fl d
1196 option can be specified multiple times, and all directories are searched.
1197 If the device appears to be part of an exported pool, this command displays a
1198 summary of the pool with the name of the pool, a numeric identifier, as well as
1199 the vdev layout and current health of the device for each device or file.
1200 Destroyed pools, pools that were previously destroyed with the
1201 .Nm zpool Cm destroy
1202 command, are not listed unless the
1203 .Fl D
1204 option is specified.
1205 .Pp
1206 The numeric identifier is unique, and can be used instead of the pool name when
1207 multiple exported pools of the same name are available.
1208 .Bl -tag -width Ds
1209 .It Fl c Ar cachefile
1210 Reads configuration from the given
1211 .Ar cachefile
1212 that was created with the
1213 .Sy cachefile
1214 pool property.
1215 This
1216 .Ar cachefile
1217 is used instead of searching for devices.
1218 .It Fl d Ar dir
1219 Searches for devices or files in
1220 .Ar dir .
1221 The
1222 .Fl d
1223 option can be specified multiple times.
1224 .It Fl D
1225 Lists destroyed pools only.
1226 .El
1227 .It Xo
1228 .Nm
1229 .Cm import
1230 .Fl a
1231 .Op Fl DfmN
1232 .Op Fl F Op Fl n
1233 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1234 .Op Fl o Ar mntopts
1235 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1236 .Op Fl R Ar root
1237 .Xc
1238 Imports all pools found in the search directories.
1239 Identical to the previous command, except that all pools with a sufficient
1240 number of devices available are imported.
1241 Destroyed pools, pools that were previously destroyed with the
1242 .Nm zpool Cm destroy
1243 command, will not be imported unless the
1244 .Fl D
1245 option is specified.
1246 .Bl -tag -width Ds
1247 .It Fl a
1248 Searches for and imports all pools found.
1249 .It Fl c Ar cachefile
1250 Reads configuration from the given
1251 .Ar cachefile
1252 that was created with the
1253 .Sy cachefile
1254 pool property.
1255 This
1256 .Ar cachefile
1257 is used instead of searching for devices.
1258 .It Fl d Ar dir
1259 Searches for devices or files in
1260 .Ar dir .
1261 The
1262 .Fl d
1263 option can be specified multiple times.
1264 This option is incompatible with the
1265 .Fl c
1266 option.
1267 .It Fl D
1268 Imports destroyed pools only.
1269 The
1270 .Fl f
1271 option is also required.
1272 .It Fl f
1273 Forces import, even if the pool appears to be potentially active.
1274 .It Fl F
1275 Recovery mode for a non-importable pool.
1276 Attempt to return the pool to an importable state by discarding the last few
1277 transactions.
1278 Not all damaged pools can be recovered by using this option.
1279 If successful, the data from the discarded transactions is irretrievably lost.
1280 This option is ignored if the pool is importable or already imported.
1281 .It Fl m
1282 Allows a pool to import when there is a missing log device.
1283 Recent transactions can be lost because the log device will be discarded.
1284 .It Fl n
1285 Used with the
1286 .Fl F
1287 recovery option.
1288 Determines whether a non-importable pool can be made importable again, but does
1289 not actually perform the pool recovery.
1290 For more details about pool recovery mode, see the
1291 .Fl F
1292 option, above.
1293 .It Fl N
1294 Import the pool without mounting any file systems.
1295 .It Fl o Ar mntopts
1296 Comma-separated list of mount options to use when mounting datasets within the
1297 pool.
1298 See
1299 .Xr zfs 1M
1300 for a description of dataset properties and mount options.
1301 .It Fl o Ar property Ns = Ns Ar value
1302 Sets the specified property on the imported pool.
1303 See the
1304 .Sx Pool Properties
1305 section for more information on the available pool properties.
1306 .It Fl R Ar root
1307 Sets the
1308 .Sy cachefile
1309 property to
1310 .Sy none
1311 and the
1312 .Sy altroot
1313 property to
1314 .Ar root .
1315 .El
1316 .It Xo
1317 .Nm
1318 .Cm import
1319 .Op Fl Dfm
1320 .Op Fl F Op Fl n
1321 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1322 .Op Fl o Ar mntopts
1323 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1324 .Op Fl R Ar root
1325 .Ar pool Ns | Ns Ar id
1326 .Op Ar newpool
1327 .Xc
1328 Imports a specific pool.
1329 A pool can be identified by its name or the numeric identifier.
1330 If
1331 .Ar newpool
1332 is specified, the pool is imported using the name
1333 .Ar newpool .
1334 Otherwise, it is imported with the same name as its exported name.
1335 .Pp
1336 If a device is removed from a system without running
1337 .Nm zpool Cm export
1338 first, the device appears as potentially active.
1339 It cannot be determined if this was a failed export, or whether the device is
1340 really in use from another host.
1341 To import a pool in this state, the
1342 .Fl f
1343 option is required.
1344 .Bl -tag -width Ds
1345 .It Fl c Ar cachefile
1346 Reads configuration from the given
1347 .Ar cachefile
1348 that was created with the
1349 .Sy cachefile
1350 pool property.
1351 This
1352 .Ar cachefile
1353 is used instead of searching for devices.
1354 .It Fl d Ar dir
1355 Searches for devices or files in
1356 .Ar dir .
1357 The
1358 .Fl d
1359 option can be specified multiple times.
1360 This option is incompatible with the
1361 .Fl c
1362 option.
1363 .It Fl D
1364 Imports destroyed pool.
1365 The
1366 .Fl f
1367 option is also required.
1368 .It Fl f
1369 Forces import, even if the pool appears to be potentially active.
1370 .It Fl F
1371 Recovery mode for a non-importable pool.
1372 Attempt to return the pool to an importable state by discarding the last few
1373 transactions.
1374 Not all damaged pools can be recovered by using this option.
1375 If successful, the data from the discarded transactions is irretrievably lost.
1376 This option is ignored if the pool is importable or already imported.
1377 .It Fl m
1378 Allows a pool to import when there is a missing log device.
1379 Recent transactions can be lost because the log device will be discarded.
1380 .It Fl n
1381 Used with the
1382 .Fl F
1383 recovery option.
1384 Determines whether a non-importable pool can be made importable again, but does
1385 not actually perform the pool recovery.
1386 For more details about pool recovery mode, see the
1387 .Fl F
1388 option, above.
1389 .It Fl o Ar mntopts
1390 Comma-separated list of mount options to use when mounting datasets within the
1391 pool.
1392 See
1393 .Xr zfs 1M
1394 for a description of dataset properties and mount options.
1395 .It Fl o Ar property Ns = Ns Ar value
1396 Sets the specified property on the imported pool.
1397 See the
1398 .Sx Pool Properties
1399 section for more information on the available pool properties.
1400 .It Fl R Ar root
1401 Sets the
1402 .Sy cachefile
1403 property to
1404 .Sy none
1405 and the
1406 .Sy altroot
1407 property to
1408 .Ar root .
1409 .It Fl t Ar numthreads
1410 Mount datasets in parallel using up to
1411 .Ar numthreads
1412 threads.
1413 .El
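.Pp
For example, the following would import an exported pool under a new name
.Po the pool names are illustrative
.Pc :
.Bd -literal
# zpool import mypool newpool
.Ed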
1414 .It Xo
1415 .Nm
1416 .Cm iostat
1417 .Op Fl v
1418 .Op Fl T Sy u Ns | Ns Sy d
1419 .Oo Ar pool Oc Ns ...
1420 .Op Ar interval Op Ar count
1421 .Xc
1422 Displays I/O statistics for the given pools.
1423 When given an
1424 .Ar interval ,
1425 the statistics are printed every
1426 .Ar interval
1427 seconds until ^C is pressed.
1428 If no
1429 .Ar pool Ns s
are specified, statistics for every pool in the system are shown.
1431 If
1432 .Ar count
1433 is specified, the command exits after
1434 .Ar count
1435 reports are printed.
1436 .Bl -tag -width Ds
1437 .It Fl T Sy u Ns | Ns Sy d
1438 Display a time stamp.
1439 Specify
1440 .Sy u
1441 for a printed representation of the internal representation of time.
1442 See
1443 .Xr time 2 .
1444 Specify
1445 .Sy d
1446 for standard date format.
1447 See
1448 .Xr date 1 .
1449 .It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1452 .El
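.Pp
For example, the following would report per-vdev statistics for a pool every
five seconds, ten times
.Po the pool name and timing values are illustrative
.Pc :
.Bd -literal
# zpool iostat -v mypool 5 10
.Ed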
1453 .It Xo
1454 .Nm
1455 .Cm labelclear
1456 .Op Fl f
1457 .Ar device
1458 .Xc
1459 Removes ZFS label information from the specified
1460 .Ar device .
1461 The
1462 .Ar device
1463 must not be part of an active pool configuration.
1464 .Bl -tag -width Ds
1465 .It Fl f
1466 Treat exported or foreign devices as inactive.
1467 .El
1468 .It Xo
1469 .Nm
1470 .Cm list
1471 .Op Fl Hpv
1472 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1473 .Op Fl T Sy u Ns | Ns Sy d
1474 .Oo Ar pool Oc Ns ...
1475 .Op Ar interval Op Ar count
1476 .Xc
1477 Lists the given pools along with a health status and space usage.
1478 If no
1479 .Ar pool Ns s
1480 are specified, all pools in the system are listed.
1481 When given an
1482 .Ar interval ,
1483 the information is printed every
1484 .Ar interval
1485 seconds until ^C is pressed.
1486 If
1487 .Ar count
1488 is specified, the command exits after
1489 .Ar count
1490 reports are printed.
1491 .Bl -tag -width Ds
1492 .It Fl H
1493 Scripted mode.
1494 Do not display headers, and separate fields by a single tab instead of arbitrary
1495 space.
1496 .It Fl o Ar property
1497 Comma-separated list of properties to display.
1498 See the
1499 .Sx Pool Properties
1500 section for a list of valid properties.
1501 The default list is
1502 .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1503 .Cm dedupratio , health , altroot .
1504 .It Fl p
1505 Display numbers in parsable
1506 .Pq exact
1507 values.
1508 .It Fl T Sy u Ns | Ns Sy d
1509 Display a time stamp.
1510 Specify
.Sy u
1512 for a printed representation of the internal representation of time.
1513 See
1514 .Xr time 2 .
1515 Specify
.Sy d
1517 for standard date format.
1518 See
1519 .Xr date 1 .
1520 .It Fl v
1521 Verbose statistics.
1522 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1524 .El
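.Pp
For example, the following would list the name, size, and free space of every
pool in scripted mode:
.Bd -literal
# zpool list -H -o name,size,free
.Ed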
1525 .It Xo
1526 .Nm
1527 .Cm offline
1528 .Op Fl t
1529 .Ar pool Ar device Ns ...
1530 .Xc
1531 Takes the specified physical device offline.
1532 While the
1533 .Ar device
1534 is offline, no attempt is made to read or write to the device.
1535 This command is not applicable to spares.
1536 .Bl -tag -width Ds
1537 .It Fl t
1538 Temporary.
1539 Upon reboot, the specified physical device reverts to its previous state.
1540 .El
1541 .It Xo
1542 .Nm
1543 .Cm online
1544 .Op Fl e
1545 .Ar pool Ar device Ns ...
1546 .Xc
1547 Brings the specified physical device online.
1548 This command is not applicable to spares.
1549 .Bl -tag -width Ds
1550 .It Fl e
1551 Expand the device to use all available space.
1552 If the device is part of a mirror or raidz then all devices must be expanded
1553 before the new space will become available to the pool.
1554 .El
1555 .It Xo
1556 .Nm
1557 .Cm reguid
1558 .Ar pool
1559 .Xc
1560 Generates a new unique identifier for the pool.
1561 You must ensure that all devices in this pool are online and healthy before
1562 performing this action.
1563 .It Xo
1564 .Nm
1565 .Cm reopen
1566 .Ar pool
1567 .Xc
1568 Reopen all the vdevs associated with the pool.
1569 .It Xo
1570 .Nm
1571 .Cm remove
1572 .Ar pool Ar device Ns ...
1573 .Xc
1574 Removes the specified device from the pool.
1575 This command currently only supports removing hot spares, cache, log and special
1576 devices.
1577 A mirrored log device can be removed by specifying the top-level mirror for the
1578 log.
1579 Non-log devices that are part of a mirrored configuration can be removed using
1580 the
1581 .Nm zpool Cm detach
1582 command.
1583 Non-redundant and raidz devices cannot be removed from a pool.
1584 .It Xo
1585 .Nm
1586 .Cm replace
1587 .Op Fl f
1588 .Ar pool Ar device Op Ar new_device
1589 .Xc
1590 Replaces
1591 .Ar old_device
1592 with
1593 .Ar new_device .
1594 This is equivalent to attaching
1595 .Ar new_device ,
1596 waiting for it to resilver, and then detaching
1597 .Ar old_device .
1598 .Pp
1599 The size of
1600 .Ar new_device
1601 must be greater than or equal to the minimum size of all the devices in a mirror
1602 or raidz configuration.
1603 .Pp
1604 .Ar new_device
1605 is required if the pool is not redundant.
1606 If
1607 .Ar new_device
1608 is not specified, it defaults to
1609 .Ar old_device .
1610 This form of replacement is useful after an existing disk has failed and has
1611 been physically replaced.
1612 In this case, the new disk may have the same
1613 .Pa /dev/dsk
1614 path as the old device, even though it is actually a different disk.
1615 ZFS recognizes this.
1616 .Bl -tag -width Ds
1617 .It Fl f
1618 Forces use of
1619 .Ar new_device ,
even if it appears to be in use.
1621 Not all devices can be overridden in this manner.
1622 .El
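.Pp
For example, the following commands would replace a failed disk with a new
one, or rebuild onto a disk that has been physically replaced at the same
path
.Po the pool and disk names are illustrative
.Pc :
.Bd -literal
# zpool replace mypool c0t2d0 c4t2d0
# zpool replace mypool c0t2d0
.Ed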
1623 .It Xo
1624 .Nm
1625 .Cm scrub
1626 .Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
1627 .Ar pool Ns ...
1628 .Xc
1629 Begins a scrub or resumes a paused scrub.
1630 The scrub examines all data in the specified pools to verify that it checksums
1631 correctly.
1632 For replicated
1633 .Pq mirror or raidz
1634 devices, ZFS automatically repairs any damage discovered during the scrub.
1635 The
1636 .Nm zpool Cm status
1637 command reports the progress of the scrub and summarizes the results of the
1638 scrub upon completion.
1639 .Pp
1640 Scrubbing and resilvering are very similar operations.
1641 The difference is that resilvering only examines data that ZFS knows to be out
1642 of date
1643 .Po
1644 for example, when attaching a new device to a mirror or replacing an existing
1645 device
1646 .Pc ,
1647 whereas scrubbing examines all data to discover silent errors due to hardware
1648 faults or disk failure.
1649 .Pp
1650 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1651 one at a time.
1652 If a scrub is paused, the
1653 .Nm zpool Cm scrub
command resumes it.
1655 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1656 resilver completes.
1657 .Pp
A partial scrub may be requested using the
.Fl m
or
.Fl M
option.
1663 .Bl -tag -width Ds
1664 .It Fl m
1665 Scrub only metadata blocks.
1666 .It Fl M
1667 Scrub only MOS blocks.
1668 .It Fl p
1669 Pause scrubbing.
1670 Scrub pause state and progress are periodically synced to disk.
1671 If the system is restarted or pool is exported during a paused scrub,
1672 even after import, scrub will remain paused until it is resumed.
Once resumed, the scrub will pick up from the place where it was last
checkpointed to disk.
1675 To resume a paused scrub issue
1676 .Nm zpool Cm scrub
1677 again.
1678 .It Fl s
1679 Stop scrubbing.
1680 .El
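.Pp
For example, the following would start a scrub, later pause it, and then
resume it
.Po the pool name is illustrative
.Pc :
.Bd -literal
# zpool scrub mypool
# zpool scrub -p mypool
# zpool scrub mypool
.Ed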
1681 .It Xo
1682 .Nm
1683 .Cm set
1684 .Ar property Ns = Ns Ar value
1685 .Ar pool
1686 .Xc
1687 Sets the given property on the specified pool.
1688 See the
1689 .Sx Pool Properties
1690 section for more information on what properties can be set and acceptable
1691 values.
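.Pp
For example, assuming a pool named
.Em tank ,
a property from the
.Sx Pool Properties
section, such as
.Sy comment ,
could be set with:
.Bd -literal
# zpool set comment=production tank
.Ed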
1692 .It Xo
1693 .Nm
1694 .Cm split
1695 .Op Fl n
1696 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1697 .Op Fl R Ar root
1698 .Ar pool newpool
1699 .Xc
1700 Splits devices off
1701 .Ar pool
1702 creating
1703 .Ar newpool .
1704 All vdevs in
1705 .Ar pool
1706 must be mirrors.
1707 At the time of the split,
1708 .Ar newpool
1709 will be a replica of
1710 .Ar pool .
1711 .Bl -tag -width Ds
1712 .It Fl n
Do a dry run; do not actually perform the split.
1714 Print out the expected configuration of
1715 .Ar newpool .
1716 .It Fl o Ar property Ns = Ns Ar value
1717 Sets the specified property for
1718 .Ar newpool .
1719 See the
1720 .Sx Pool Properties
1721 section for more information on the available pool properties.
1722 .It Fl R Ar root
1723 Set
1724 .Sy altroot
1725 for
1726 .Ar newpool
1727 to
1728 .Ar root
1729 and automatically import it.
1730 .El
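.Pp
For example, assuming a pool named
.Em tank
made up entirely of two-way mirrors, one half of each mirror could be split
off into a new pool named
.Em tank2
with:
.Bd -literal
# zpool split tank tank2
.Ed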
1731 .It Xo
1732 .Nm
1733 .Cm status
1734 .Op Fl Dvx
1735 .Op Fl T Sy u Ns | Ns Sy d
1736 .Oo Ar pool Oc Ns ...
1737 .Op Ar interval Op Ar count
1738 .Xc
1739 Displays the detailed health status for the given pools.
1740 If no
1741 .Ar pool
1742 is specified, then the status of each pool in the system is displayed.
1743 For more information on pool and device health, see the
1744 .Sx Device Failure and Recovery
1745 section.
1746 .Pp
1747 If a scrub or resilver is in progress, this command reports the percentage done
1748 and the estimated time to completion.
1749 Both of these are only approximate, because the amount of data in the pool and
1750 the other workloads on the system can change.
1751 .Bl -tag -width Ds
1752 .It Fl D
1753 Display a histogram of deduplication statistics, showing the allocated
1754 .Pq physically present on disk
1755 and referenced
1756 .Pq logically referenced in the pool
1757 block counts and sizes by reference count.
1758 .It Fl T Sy u Ns | Ns Sy d
1759 Display a time stamp.
1760 Specify
1761 .Fl u
1762 for a printed representation of the internal representation of time.
1763 See
1764 .Xr time 2 .
1765 Specify
1766 .Fl d
1767 for standard date format.
1768 See
1769 .Xr date 1 .
1770 .It Fl v
1771 Displays verbose data error information, printing out a complete list of all
1772 data errors since the last complete pool scrub.
1773 .It Fl x
1774 Only display status for pools that are exhibiting errors or are otherwise
1775 unavailable.
1776 Warnings about pools not using the latest on-disk format will not be included.
1777 .El
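.Pp
For example, the status of only unhealthy pools, or verbose error information
for a pool named
.Em tank ,
could be displayed with:
.Bd -literal
# zpool status -x
# zpool status -v tank
.Ed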
1778 .It Xo
1779 .Nm
1780 .Cm trim
1781 .Op Fl r Ar rate Ns | Ns Fl s
1782 .Ar pool Ns ...
1783 .Xc
Initiates an on-demand TRIM operation on all of the free space of a pool.
1785 This informs the underlying storage devices of all of the blocks that the pool
1786 no longer considers allocated, thus allowing thinly provisioned storage devices
1787 to reclaim them.
1788 Please note that this collects all space marked as
1789 .Qq freed
in the pool immediately and does not wait for the
1791 .Sy zfs_txgs_per_trim
1792 delay as automatic TRIM does.
1793 Hence, this can limit pool corruption recovery options during and immediately
1794 following the on-demand TRIM to 1-2 TXGs into the past
1795 .Pq instead of the standard 32-64 of automatic TRIM .
1796 This approach, however, allows you to recover the maximum amount of free space
1797 from the pool immediately without having to wait.
1798 .Pp
1799 Also note that an on-demand TRIM operation can be initiated irrespective of the
1800 .Sy autotrim
1801 pool property setting.
1802 It does, however, respect the
1803 .Sy forcetrim
1804 pool property.
1805 .Pp
1806 An on-demand TRIM operation does not conflict with an ongoing scrub, but it can
1807 put significant I/O stress on the underlying vdevs.
1808 A resilver, however, automatically stops an on-demand TRIM operation.
You can manually reinitiate the TRIM operation after the resilver has started
1810 by simply reissuing the
1811 .Nm zpool Cm trim
1812 command.
1813 .Pp
1814 Adding a vdev during TRIM is supported, although the progression display in
1815 .Nm zpool Cm status
1816 might not be entirely accurate in that case
1817 .Pq TRIM will complete before reaching 100% .
1818 Removing or detaching a vdev will prematurely terminate an on-demand TRIM
1819 operation.
1820 .Bl -tag -width Ds
1821 .It Fl r Ar rate
1822 Controls the speed at which the TRIM operation progresses.
1823 Without this option, TRIM is executed in parallel on all top-level vdevs as
1824 quickly as possible.
1825 This option allows you to control how fast
1826 .Pq in bytes per second
1827 the TRIM is executed.
1828 This rate is applied on a per-vdev basis, i.e. every top-level vdev in the pool
1829 tries to match this speed.
1830 .Pp
1831 Due to limitations in how the algorithm is designed, TRIMs are executed in
1832 whole-metaslab increments.
Each top-level vdev contains approximately 200 metaslabs, so a rate-limited TRIM
progresses in steps: it TRIMs one metaslab completely and then waits so that,
averaged over the whole device, the speed matches the requested rate.
1836 .Pp
1837 When an on-demand TRIM operation is already in progress, this option changes its
1838 rate.
1839 To change a rate-limited TRIM to an unlimited one, simply execute the
1840 .Nm zpool Cm trim
1841 command without the
1842 .Fl r
1843 option.
1844 .It Fl s
1845 Stop trimming.
1846 If an on-demand TRIM operation is not ongoing at the moment, this does nothing
1847 and the command returns success.
1848 .El
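.Pp
For example, assuming a pool named
.Em tank ,
an on-demand TRIM could be started and, if necessary, stopped again with:
.Bd -literal
# zpool trim tank
# zpool trim -s tank
.Ed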
1849 .It Xo
1850 .Nm
1851 .Cm upgrade
1852 .Xc
1853 Displays pools which do not have all supported features enabled and pools
1854 formatted using a legacy ZFS version number.
1855 These pools can continue to be used, but some features may not be available.
1856 Use
1857 .Nm zpool Cm upgrade Fl a
1858 to enable all features on all pools.
1859 .It Xo
1860 .Nm
1861 .Cm upgrade
1862 .Fl v
1863 .Xc
1864 Displays legacy ZFS versions supported by the current software.
1865 See
1866 .Xr zpool-features 5
for a description of the feature-flag features supported by the current
software.
1868 .It Xo
1869 .Nm
1870 .Cm upgrade
1871 .Op Fl V Ar version
1872 .Fl a Ns | Ns Ar pool Ns ...
1873 .Xc
1874 Enables all supported features on the given pool.
1875 Once this is done, the pool will no longer be accessible on systems that do not
1876 support feature flags.
1877 See
1878 .Xr zpool-features 5
1879 for details on compatibility with systems that support feature flags, but do not
1880 support all features enabled on the pool.
1881 .Bl -tag -width Ds
1882 .It Fl a
1883 Enables all supported features on all pools.
1884 .It Fl V Ar version
1885 Upgrade to the specified legacy version.
1886 If the
1887 .Fl V
1888 flag is specified, no features will be enabled on the pool.
1889 This option can only be used to increase the version number up to the last
1890 supported legacy version number.
1891 .El
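.Pp
For example, all supported features could be enabled on a single pool named
.Em tank
with:
.Bd -literal
# zpool upgrade tank
.Ed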
1892 .It Xo
1893 .Nm
1894 .Cm vdev-get
1895 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1896 .Ar pool
1897 .Ar vdev-name Ns | Ns Ar vdev-guid
1898 .Xc
1899 Retrieves the given list of vdev properties
1900 .Po or all properties if
1901 .Sy all
1902 is used
1903 .Pc
1904 for the specified vdev of the specified storage pool.
1905 These properties are displayed in the same manner as the pool properties.
1906 The operation is supported for leaf-level vdevs only.
1907 See the
1908 .Sx Device Properties
1909 section for more information on the available properties.
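.Pp
For example, assuming illustrative device names, all properties of a leaf vdev
in pool
.Em tank
could be displayed with:
.Bd -literal
# zpool vdev-get all tank c0t0d0
.Ed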
1910 .It Xo
1911 .Nm
1912 .Cm vdev-set
1913 .Ar property Ns = Ns Ar value
1914 .Ar pool
1915 .Ar vdev-name Ns | Ns Ar vdev-guid
1916 .Xc
1917 Sets the given property on the specified device of the specified pool.
If a top-level vdev is specified, sets the property on all of its child devices.
1919 See the
1920 .Sx Device Properties
section for more information on what properties can be set and acceptable
values.
1922 .El
1923 .Sh EXIT STATUS
1924 The following exit values are returned:
1925 .Bl -tag -width Ds
1926 .It Sy 0
1927 Successful completion.
1928 .It Sy 1
1929 An error occurred.
1930 .It Sy 2
1931 Invalid command line options were specified.
1932 .El
1933 .Sh EXAMPLES
1934 .Bl -tag -width Ds
1935 .It Sy Example 1 No Creating a RAID-Z Storage Pool
1936 The following command creates a pool with a single raidz root vdev that
1937 consists of six disks.
1938 .Bd -literal
1939 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1940 .Ed
1941 .It Sy Example 2 No Creating a Mirrored Storage Pool
1942 The following command creates a pool with two mirrors, where each mirror
1943 contains two disks.
1944 .Bd -literal
1945 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1946 .Ed
1947 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
1948 The following command creates an unmirrored pool using two disk slices.
1949 .Bd -literal
1950 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1951 .Ed
1952 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
1953 The following command creates an unmirrored pool using files.
1954 While not recommended, a pool based on files can be useful for experimental
1955 purposes.
1956 .Bd -literal
1957 # zpool create tank /path/to/file/a /path/to/file/b
1958 .Ed
1959 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
1960 The following command adds two mirrored disks to the pool
1961 .Em tank ,
1962 assuming the pool is already made up of two-way mirrors.
1963 The additional space is immediately available to any datasets within the pool.
1964 .Bd -literal
1965 # zpool add tank mirror c1t0d0 c1t1d0
1966 .Ed
1967 .It Sy Example 6 No Listing Available ZFS Storage Pools
1968 The following command lists all available pools on the system.
1969 In this case, the pool
1970 .Em zion
1971 is faulted due to a missing device.
1972 The results from this command are similar to the following:
1973 .Bd -literal
1974 # zpool list
1975 NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1976 rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
1977 tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
1978 zion       -      -      -      -         -      -      -  FAULTED -
1979 .Ed
1980 .It Sy Example 7 No Destroying a ZFS Storage Pool
1981 The following command destroys the pool
1982 .Em tank
1983 and any datasets contained within.
1984 .Bd -literal
1985 # zpool destroy -f tank
1986 .Ed
1987 .It Sy Example 8 No Exporting a ZFS Storage Pool
1988 The following command exports the devices in pool
1989 .Em tank
1990 so that they can be relocated or later imported.
1991 .Bd -literal
1992 # zpool export tank
1993 .Ed
1994 .It Sy Example 9 No Importing a ZFS Storage Pool
1995 The following command displays available pools, and then imports the pool
1996 .Em tank
1997 for use on the system.
1998 The results from this command are similar to the following:
1999 .Bd -literal
2000 # zpool import
2001   pool: tank
2002     id: 15451357997522795478
2003  state: ONLINE
2004 action: The pool can be imported using its name or numeric identifier.
2005 config:
2006 
2007         tank        ONLINE
2008           mirror    ONLINE
2009             c1t2d0  ONLINE
2010             c1t3d0  ONLINE
2011 
2012 # zpool import tank
2013 .Ed
2014 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
2016 the software.
2017 .Bd -literal
2018 # zpool upgrade -a
2019 This system is currently running ZFS version 2.
2020 .Ed
2021 .It Sy Example 11 No Managing Hot Spares
2022 The following command creates a new pool with an available hot spare:
2023 .Bd -literal
2024 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
2025 .Ed
2026 .Pp
2027 If one of the disks were to fail, the pool would be reduced to the degraded
2028 state.
2029 The failed device can be replaced using the following command:
2030 .Bd -literal
2031 # zpool replace tank c0t0d0 c0t3d0
2032 .Ed
2033 .Pp
2034 Once the data has been resilvered, the spare is automatically removed and is
2035 made available for use should another device fail.
2036 The hot spare can be permanently removed from the pool using the following
2037 command:
2038 .Bd -literal
2039 # zpool remove tank c0t2d0
2040 .Ed
2041 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
2043 mirrors and mirrored log devices:
2044 .Bd -literal
2045 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
2046   c4d0 c5d0
2047 .Ed
2048 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2049 The following command adds two disks for use as cache devices to a ZFS storage
2050 pool:
2051 .Bd -literal
2052 # zpool add pool cache c2d0 c3d0
2053 .Ed
2054 .Pp
2055 Once added, the cache devices gradually fill with content from main memory.
2056 Depending on the size of your cache devices, it could take over an hour for
2057 them to fill.
2058 Capacity and reads can be monitored using the
2059 .Cm iostat
2060 option as follows:
2061 .Bd -literal
2062 # zpool iostat -v pool 5
2063 .Ed
2064 .It Sy Example 14 No Removing a Mirrored Log Device
2065 The following command removes the mirrored log device
2066 .Sy mirror-2 .
2067 Given this configuration:
2068 .Bd -literal
2069   pool: tank
2070  state: ONLINE
2071  scrub: none requested
2072 config:
2073 
2074          NAME        STATE     READ WRITE CKSUM
2075          tank        ONLINE       0     0     0
2076            mirror-0  ONLINE       0     0     0
2077              c6t0d0  ONLINE       0     0     0
2078              c6t1d0  ONLINE       0     0     0
2079            mirror-1  ONLINE       0     0     0
2080              c6t2d0  ONLINE       0     0     0
2081              c6t3d0  ONLINE       0     0     0
2082          logs
2083            mirror-2  ONLINE       0     0     0
2084              c4t0d0  ONLINE       0     0     0
2085              c4t1d0  ONLINE       0     0     0
2086 .Ed
2087 .Pp
2088 The command to remove the mirrored log
2089 .Sy mirror-2
2090 is:
2091 .Bd -literal
2092 # zpool remove tank mirror-2
2093 .Ed
2094 .It Sy Example 15 No Displaying expanded space on a device
2095 The following command displays the detailed information for the pool
2096 .Em data .
This pool consists of a single raidz vdev where one of its devices
2098 increased its capacity by 10GB.
2099 In this example, the pool will not be able to utilize this extra capacity until
2100 all the devices under the raidz vdev have been expanded.
2101 .Bd -literal
2102 # zpool list -v data
2103 NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2104 data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
2105   raidz1    23.9G  14.6G  9.30G    48%         -
2106     c1t1d0      -      -      -      -         -
2107     c1t2d0      -      -      -      -       10G
2108     c1t3d0      -      -      -      -         -
2109 .Ed
2110 .El
2111 .Sh INTERFACE STABILITY
2112 .Sy Evolving
2113 .Sh SEE ALSO
2114 .Xr zfs 1M ,
2115 .Xr attributes 5 ,
2116 .Xr zpool-features 5