1 ZPOOL(1M) Maintenance Commands ZPOOL(1M)
2
3 NAME
4 zpool - configure ZFS storage pools
5
6 SYNOPSIS
7 zpool -?
8 zpool add [-fn] pool vdev...
9 zpool attach [-f] pool device new_device
10 zpool clear pool [device]
11 zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
12 [-O file-system-property=value]... [-R root] pool vdev...
13 zpool destroy [-f] pool
14 zpool detach pool device
15 zpool export [-cfF] [-t numthreads] pool...
16 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
17 zpool history [-il] [pool]...
18 zpool import [-D] [-c cachefile|-d dir]
19 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
20 [-o property=value]... [-R root] [-t numthreads]
21 zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
22 [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
23 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
24 zpool labelclear [-f] device
25 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
26 [interval [count]]
27 zpool offline [-t] pool device...
28 zpool online [-e] pool device...
29 zpool reguid pool
30 zpool reopen pool
31 zpool remove pool device...
32 zpool replace [-f] pool device [new_device]
33 zpool scrub [-m|-M|-p|-s] pool...
34 zpool set property=value pool
35 zpool split [-n] [-o property=value]... [-R root] pool newpool
36 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
37 zpool trim [-r rate|-s] pool...
38 zpool upgrade
39 zpool upgrade -v
40 zpool upgrade [-V version] -a|pool...
41 zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
42 zpool vdev-set property=value pool vdev-name|vdev-guid
43
44 DESCRIPTION
45 The zpool command configures ZFS storage pools. A storage pool is a
46 collection of devices that provides physical storage and data replication
47 for ZFS datasets. All datasets within a storage pool share the same
48 space. See zfs(1M) for information on managing datasets.
49
50 Virtual Devices (vdevs)
51 A "virtual device" describes a single device or a collection of devices
52 organized according to certain performance and fault characteristics.
53 The following virtual devices are supported:
54
55 disk A block device, typically located under /dev/dsk. ZFS can use
56 individual slices or partitions, though the recommended mode of
57 operation is to use whole disks. A disk can be specified by a
58 full path, or it can be a shorthand name (the relative portion of
59 the path under /dev/dsk). A whole disk can be specified by
60 omitting the slice or partition designation. For example, c0t0d0
61 is equivalent to /dev/dsk/c0t0d0s2. When given a whole disk, ZFS
62 automatically labels the disk, if necessary.
63
64 file A regular file. The use of files as a backing store is strongly
65 discouraged. It is designed primarily for experimental purposes,
66 as the fault tolerance of a file is only as good as the file
67 system of which it is a part. A file must be specified by a full
68 path.
69
70 mirror A mirror of two or more devices. Data is replicated in an
71 identical fashion across all components of a mirror. A mirror
72 with N disks of size X can hold X bytes and can withstand (N-1)
73 devices failing before data integrity is compromised.
74
75 raidz, raidz1, raidz2, raidz3
76 A variation on RAID-5 that allows for better distribution of
77 parity and eliminates the RAID-5 "write hole" (in which data and
78 parity become inconsistent after a power loss). Data and parity
79 is striped across all disks within a raidz group.
80
81 A raidz group can have single-, double-, or triple-parity,
82 meaning that the raidz group can sustain one, two, or three
83 failures, respectively, without losing any data. The raidz1 vdev
84 type specifies a single-parity raidz group; the raidz2 vdev type
85 specifies a double-parity raidz group; and the raidz3 vdev type
86 specifies a triple-parity raidz group. The raidz vdev type is an
87 alias for raidz1.
88
89 A raidz group with N disks of size X with P parity disks can hold
90 approximately (N-P)*X bytes and can withstand P device(s) failing
91 before data integrity is compromised. The minimum number of
92 devices in a raidz group is one more than the number of parity
93 disks. The recommended number is between 3 and 9 to help
94 increase performance.
95
96 spare A special pseudo-vdev which keeps track of available hot spares
97 for a pool. For more information, see the Hot Spares section.
98
99 log A separate intent log device. If more than one log device is
100 specified, then writes are load-balanced between devices. Log
101 devices can be mirrored. However, raidz vdev types are not
102 supported for the intent log. For more information, see the
103 Intent Log section.
104
105 cache A device used to cache storage pool data. A cache device cannot
106 be configured as a mirror or raidz group. For more information,
107 see the Cache Devices section.
108
109 Virtual devices cannot be nested, so a mirror or raidz virtual device can
110 only contain files or disks. Mirrors of mirrors (or other combinations)
111 are not allowed.
112
113 A pool can have any number of virtual devices at the top of the
114 configuration (known as "root vdevs"). Data is dynamically distributed
115 across all top-level devices to balance data among devices. As new
116 virtual devices are added, ZFS automatically places data on the newly
117 available devices.
118
119 Virtual devices are specified one at a time on the command line,
120 separated by whitespace. The keywords mirror and raidz are used to
121 distinguish where a group ends and another begins. For example, the
122 following creates two root vdevs, each a mirror of two disks:
123
124 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
125
126 Device Failure and Recovery
127 ZFS supports a rich set of mechanisms for handling device failure and
128 data corruption. All metadata and data is checksummed, and ZFS
129 automatically repairs bad data from a good copy when corruption is
130 detected.
131
132 In order to take advantage of these features, a pool must make use of
133 some form of redundancy, using either mirrored or raidz groups. While
134 ZFS supports running in a non-redundant configuration, where each root
135 vdev is simply a disk or file, this is strongly discouraged. A single
136 case of bit corruption can render some or all of your data unavailable.
137
138 A pool's health status is described by one of three states: online,
139 degraded, or faulted. An online pool has all devices operating normally.
140 A degraded pool is one in which one or more devices have failed, but the
141 data is still available due to a redundant configuration. A faulted pool
142 has corrupted metadata, or one or more faulted devices, and insufficient
143 replicas to continue functioning.
144
145 The health of a top-level vdev, such as a mirror or raidz device, is
146 potentially impacted by the state of its associated vdevs, or component
147 devices. A top-level vdev or component device is in one of the following
148 states:
149
150 DEGRADED One or more top-level vdevs is in the degraded state because
151 one or more component devices are offline. Sufficient replicas
152 exist to continue functioning.
153
154 One or more component devices is in the degraded or faulted
155 state, but sufficient replicas exist to continue functioning.
156 The underlying conditions are as follows:
157
158 o The number of checksum errors exceeds acceptable levels and
159 the device is degraded as an indication that something may
160 be wrong. ZFS continues to use the device as necessary.
161
162 o The number of I/O errors exceeds acceptable levels. The
163 device could not be marked as faulted because there are
164 insufficient replicas to continue functioning.
165
166 FAULTED One or more top-level vdevs is in the faulted state because one
167 or more component devices are offline. Insufficient replicas
168 exist to continue functioning.
169
170 One or more component devices is in the faulted state, and
171 insufficient replicas exist to continue functioning. The
172 underlying conditions are as follows:
173
174 o The device could be opened, but the contents did not match
175 expected values.
176
177 o The number of I/O errors exceeds acceptable levels and the
178 device is faulted to prevent further use of the device.
179
180 OFFLINE The device was explicitly taken offline by the zpool offline
181 command.
182
183 ONLINE The device is online and functioning.
184
185 REMOVED The device was physically removed while the system was running.
186 Device removal detection is hardware-dependent and may not be
187 supported on all platforms.
188
189 UNAVAIL The device could not be opened. If a pool is imported while a
190 device is unavailable, then the device will be identified by a
191 unique identifier instead of its path since the path was never
192 correct in the first place.
193
194 If a device is removed and later re-attached to the system, ZFS attempts
195 to put the device online automatically. Device attach detection is
196 hardware-dependent and might not be supported on all platforms.
197
198 Hot Spares
199 ZFS allows devices to be associated with pools as "hot spares". These
200 devices are not actively used in the pool, but when an active device
201 fails, it is automatically replaced by a hot spare. To create a pool
202 with hot spares, specify a spare vdev with any number of devices. For
203 example,
204
205 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
206
207 Spares can be shared across multiple pools, and can be added with the
208 zpool add command and removed with the zpool remove command. Once a
209 spare replacement is initiated, a new spare vdev is created within the
210 configuration that will remain there until the original device is
211 replaced. At this point, the hot spare becomes available again if
212 another device fails.
213
214 If a pool has a shared spare that is currently being used, the pool
215 cannot be exported, since other pools may use this shared spare, which
216 may lead to data corruption.
217
218 An in-progress spare replacement can be cancelled by detaching the hot
219 spare. If the original faulted device is detached, then the hot spare
220 assumes its place in the configuration, and is removed from the spare
221 list of all active pools.
222
223 See sparegroup vdev property in Device Properties section for information
224 on how to control spare selection.
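
     For example, a data disk and a preferred spare might be assigned to the
     same spare group with the zpool vdev-set command (the pool, device, and
     group names here are illustrative):

         # zpool vdev-set sparegroup=shelf0 tank c2t0d0
         # zpool vdev-set sparegroup=shelf0 tank c5t0d0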
225
226 Spares cannot replace log devices.
227
228 Intent Log
229 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
230 transactions. For instance, databases often require their transactions
231 to be on stable storage devices when returning from a system call. NFS
232 and other applications can also use fsync(3C) to ensure data stability.
233 By default, the intent log is allocated from blocks within the main pool.
234 However, it might be possible to get better performance using separate
235 intent log devices such as NVRAM or a dedicated disk. For example:
236
237 # zpool create pool c0d0 c1d0 log c2d0
238
239 Multiple log devices can also be specified, and they can be mirrored.
240 See the EXAMPLES section for an example of mirroring multiple log
241 devices.
242
243 Log devices can be added, replaced, attached, detached, and imported and
244 exported as part of the larger pool. Mirrored log devices can be removed
245 by specifying the top-level mirror for the log.
246
247 Cache Devices
248 Devices can be added to a storage pool as "cache devices". These devices
249 provide an additional layer of caching between main memory and disk. For
250 read-heavy workloads, where the working set size is much larger than what
251 can be cached in main memory, using cache devices allows much more of this
252 working set to be served from low-latency media. Using cache devices
253 provides the greatest performance improvement for random read workloads
254 of mostly static content.
255
256 To create a pool with cache devices, specify a cache vdev with any number
257 of devices. For example:
258
259 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
260
261 Cache devices cannot be mirrored or part of a raidz configuration. If a
262 read error is encountered on a cache device, that read I/O is reissued to
263 the original storage pool device, which might be part of a mirrored or
264 raidz configuration.
265
266 The content of the cache devices is considered volatile, as is the case
267 with other system caches.
268
269 Pool Properties
270 Each pool has several properties associated with it. Some properties are
271 read-only statistics while others are configurable and change the
272 behavior of the pool.
273
274 The following are read-only properties:
275
276 allocated
277 Amount of storage space used within the pool.
278
279 bootsize
280 The size of the system boot partition. This property can only be
281 set at pool creation time and is read-only once the pool is
282 created. Setting this property implies using the -B option.
283
284 capacity
285 Percentage of pool space used. This property can also be
286 referred to by its shortened column name, cap.
287
288 ddt_capped=on|off
289 When ddt_capped is on, DDT growth has been stopped. New unique
290 writes will not be deduplicated, in order to prevent further
291 DDT growth.
292
293 expandsize
294 Amount of uninitialized space within the pool or device that can
295 be used to increase the total capacity of the pool.
296 Uninitialized space consists of any space on an EFI labeled vdev
297 which has not been brought online (e.g., using zpool online -e).
298 This space occurs when a LUN is dynamically expanded.
299
300 fragmentation
301 The amount of fragmentation in the pool.
302
303 free The amount of free space available in the pool.
304
305 freeing
306 freeing is the amount of pool space remaining to be reclaimed.
307 After a file, dataset or snapshot is destroyed, the space it was
308 using is returned to the pool asynchronously. Over time freeing
309 will decrease while free increases.
310
311 health The current health of the pool. Health can be one of ONLINE,
312 DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
313
314 guid A unique identifier for the pool.
315
316 size Total size of the storage pool.
317
318 unsupported@feature_guid
319 Information about unsupported features that are enabled on the
320 pool. See zpool-features(5) for details.
321
322 The space usage properties report actual physical space available to the
323 storage pool. The physical space can be different from the total amount
324 of space that any contained datasets can actually use. The amount of
325 space used in a raidz configuration depends on the characteristics of the
326 data being written. In addition, ZFS reserves some space for internal
327 accounting that the zfs(1M) command takes into account, but the zpool
328 command does not. For non-full pools of a reasonable size, these effects
329 should be invisible. For small pools, or pools that are close to being
330 completely full, these discrepancies may become more noticeable.
331
332 The following property can be set at creation time and import time:
333
334 altroot
335 Alternate root directory. If set, this directory is prepended to
336 any mount points within the pool. This can be used when
337 examining an unknown pool where the mount points cannot be
338 trusted, or in an alternate boot environment, where the typical
339 paths are not valid. altroot is not a persistent property. It
340 is valid only while the system is up. Setting altroot defaults
341 to using cachefile=none, though this may be overridden using an
342 explicit setting.
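
     For example, a pool might be imported under an alternate root for
     inspection (the pool name and path here are illustrative):

         # zpool import -R /a tank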
343
344 The following property can be set only at import time:
345
346 readonly=on|off
347 If set to on, the pool will be imported in read-only mode. This
348 property can also be referred to by its shortened column name,
349 rdonly.
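
     For example, a pool might be imported read-only so that no changes
     can be written to it (the pool name here is illustrative):

         # zpool import -o readonly=on tank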
350
351 The following properties can be set at creation time and import time, and
352 later changed with the zpool set command:
353
354 autoexpand=on|off
355 Controls automatic pool expansion when the underlying LUN is
356 grown. If set to on, the pool will be resized according to the
357 size of the expanded device. If the device is part of a mirror
358 or raidz then all devices within that mirror/raidz group must be
359 expanded before the new space is made available to the pool. The
360 default behavior is off. This property can also be referred to
361 by its shortened column name, expand.
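
     For example, automatic expansion might be enabled on an existing
     pool as follows (the pool name here is illustrative):

         # zpool set autoexpand=on tank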
362
363 autoreplace=on|off
364 Controls automatic device replacement. If set to off, device
365 replacement must be initiated by the administrator by using the
366 zpool replace command. If set to on, any new device, found in
367 the same physical location as a device that previously belonged
368 to the pool, is automatically formatted and replaced. The
369 default behavior is off. This property can also be referred to
370 by its shortened column name, replace.
371
372 autotrim=on|off
373 When set to on, while deleting data, ZFS will inform the
374 underlying vdevs of any blocks that have been marked as freed.
375 This allows thinly provisioned vdevs to reclaim unused blocks.
376 Currently, this feature supports sending SCSI UNMAP commands to
377 SCSI and SAS disk vdevs, and using file hole punching on file-
378 backed vdevs. SATA TRIM is currently not implemented. The
379 default setting for this property is off.
380
381 Please note that automatic trimming of data blocks can put
382 significant stress on the underlying storage devices if they do
383 not handle these commands in a background, low-priority manner.
384 In that case, it may be possible to achieve most of the benefits
385 of trimming free space on the pool by running an on-demand
386 (manual) trim every once in a while during a maintenance window
387 using the zpool trim command.
388
389 Automatic trim does not reclaim blocks after a delete
390 immediately. Instead, it waits approximately 32-64 TXGs (or as
391 defined by the zfs_txgs_per_trim tunable) to allow for more
392 efficient aggregation of smaller portions of free space into
393 fewer larger regions, as well as to allow for longer pool
394 corruption recovery via zpool import -F.
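
     For example, automatic trimming might be enabled on a pool backed
     by thinly provisioned storage (the pool name here is illustrative):

         # zpool set autotrim=on tank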
395
396 bootfs=pool/dataset
397 Identifies the default bootable dataset for the root pool. This
398 property is expected to be set mainly by the installation and
399 upgrade programs.
400
401 cachefile=path|none
402 Controls the location where the pool configuration is cached.
403 Discovering all pools on system startup requires a cached copy of
404 the configuration data that is stored on the root file system.
405 All pools in this cache are automatically imported when the
406 system boots. Some environments, such as install and clustering,
407 need to cache this information in a different location so that
408 pools are not automatically imported. Setting this property
409 caches the pool configuration in a different location that can
410 later be imported with zpool import -c. Setting it to the
411 special value none creates a temporary pool that is never cached,
412 and the special value "" (empty string) uses the default
413 location.
414
415 Multiple pools can share the same cache file. Because the kernel
416 destroys and recreates this file when pools are added and
417 removed, care should be taken when attempting to access this
418 file. When the last pool using a cachefile is exported or
419 destroyed, the file is removed.
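
     For example, a pool might be created with an alternate cache file
     and later imported from it (the pool name and cache file path here
     are illustrative):

         # zpool create -o cachefile=/etc/zfs/alt.cache tank mirror c0t0d0 c0t1d0
         # zpool import -c /etc/zfs/alt.cache tank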
420
421 comment=text
422 A text string consisting of printable ASCII characters that will
423 be stored such that it is available even if the pool becomes
424 faulted. An administrator can provide additional information
425 about a pool using this property.
426
427 dedupditto=number
428 Threshold for the number of block ditto copies. If the reference
429 count for a deduplicated block increases above this number, a new
430 ditto copy of this block is automatically stored. The default
431 setting is 0 which causes no ditto copies to be created for
432 deduplicated blocks. The minimum legal nonzero setting is 100.
433
434 delegation=on|off
435 Controls whether a non-privileged user is granted access based on
436 the dataset permissions defined on the dataset. See zfs(1M) for
437 more information on ZFS delegated administration.
438
439 failmode=wait|continue|panic
440 Controls the system behavior in the event of catastrophic pool
441 failure. This condition is typically a result of a loss of
442 connectivity to the underlying storage device(s) or a failure of
443 all devices within the pool. The behavior of such an event is
444 determined as follows:
445
446 wait Blocks all I/O access until the device connectivity is
447 recovered and the errors are cleared. This is the
448 default behavior.
449
450 continue Returns EIO to any new write I/O requests but allows
451 reads to any of the remaining healthy devices. Any
452 write requests that have yet to be committed to disk
453 would be blocked.
454
455 panic Prints out a message to the console and generates a
456 system crash dump.
457
458 feature@feature_name=enabled
459 The value of this property is the current state of feature_name.
460 The only valid value when setting this property is enabled which
461 moves feature_name to the enabled state. See zpool-features(5)
462 for details on feature states.
463
464 forcetrim=on|off
465 Controls whether device support is taken into consideration when
466 issuing TRIM commands to the underlying vdevs of the pool.
467 Normally, both automatic trim and on-demand (manual) trim only
468 issue TRIM commands if a vdev indicates support for it. Setting
469 the forcetrim property to on will force ZFS to issue TRIMs even
470 if it thinks a device does not support it. The default value is
471 off.
472
473 listsnapshots=on|off
474 Controls whether information about snapshots associated with this
475 pool is output when zfs list is run without the -t option. The
476 default value is off. This property can also be referred to by
477 its shortened name, listsnaps.
478
479 scrubprio=0-100
480 Sets the priority of scrub I/O for this pool. This is a number
481 from 0 to 100, higher numbers meaning a higher priority and thus
482 more bandwidth allocated to scrub I/O, provided there is other
483 I/O competing for bandwidth. If no other I/O is competing for
484 bandwidth, scrub is allowed to consume as much bandwidth as the
485 pool is capable of providing. A priority of 100 means that scrub
486 I/O has equal priority to any other user-generated I/O. The
487 value 0 is special, because it turns off per-pool scrub priority
488 control. In that case, scrub I/O priority is determined by the
489 zfs_vdev_scrub_min_active and zfs_vdev_scrub_max_active tunables.
490 The default value is 5.
491
492 resilverprio=0-100
493 Same as the scrubprio property, but controls the priority for
494 resilver I/O. The default value is 10. When set to 0 the global
495 tunables used for queue sizing are zfs_vdev_resilver_min_active
496 and zfs_vdev_resilver_max_active.
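
     For example, scrub and resilver I/O might be given a larger share
     of bandwidth on a lightly loaded pool (the pool name and values
     here are illustrative):

         # zpool set scrubprio=20 tank
         # zpool set resilverprio=40 tank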
497
498 version=version
499 The current on-disk version of the pool. This can be increased,
500 but never decreased. The preferred method of updating pools is
501 with the zpool upgrade command, though this property can be used
502 when a specific version is needed for backwards compatibility.
503 Once feature flags are enabled on a pool this property will no
504 longer have a value.
505
506 Device Properties
507 Each device can have several properties associated with it. These
508 properties override global tunables and are designed to provide more
509 control over the operational parameters of this specific device, as well
510 as to help manage this device.
511
512 The cos device property can reference a CoS property descriptor by name,
513 in which case, the values of device properties are determined according
514 to the following rule: the device settings override CoS settings, which
515 in turn, override the global tunables.
516
517 The following device properties are available:
518
519 cos=cos-name
520 This property indicates whether the device is associated with a
521 CoS property descriptor object. If so, the properties from the
522 CoS descriptor that are not explicitly overridden by the device
523 properties are in effect for this device.
524
525 l2arc_ddt=on|off
526 This property is meaningful for L2ARC devices. If this property
527 is turned on, ZFS will dedicate the L2ARC device to caching
528 deduplication table (DDT) buffers only.
529
530 prefread=1..100
531 This property is meaningful for devices that belong to a mirror.
532 The property determines the preference that is given to the
533 device when reading from the mirror. The ratio of the value to
534 the sum of the values of this property for all the devices in the
535 mirror determines the relative frequency (effectively, the
536 probability) of reading from this specific device.
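
     For example, reads might be biased toward the faster side of a
     mirror as follows (the pool and device names here are illustrative):

         # zpool vdev-set prefread=75 tank c1t0d0
         # zpool vdev-set prefread=25 tank c1t1d0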
537
538 sparegroup=group-name
539 This property indicates whether the device is a part of a spare
540 device group. Devices in the pool (including spares) can be
541 labeled with strings that are meaningful in the context of the
542 management workflow in effect. When a failed device is
543 automatically replaced by spares, the spares whose sparegroup
544 property matches the failed device's property are used first.
545
546 {read|aread|write|awrite|scrub|resilver}_{minactive|maxactive}=1..1000
547 These properties define the minimum/maximum number of outstanding
548 active requests for the queueable classes of I/O requests as
549 defined by the ZFS I/O scheduler. The classes are read,
550 asynchronous read, write, asynchronous write, scrub, and resilver.
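
     For example, the scrub queue depth of a single device might be
     raised as follows (the pool name, device name, and values here are
     illustrative):

         # zpool vdev-set scrub_minactive=2 tank c1t0d0
         # zpool vdev-set scrub_maxactive=10 tank c1t0d0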
551
552 Subcommands
553 All subcommands that modify state are logged persistently to the pool in
554 their original form.
555
556 The zpool command provides subcommands to create and destroy storage
557 pools, add capacity to storage pools, and provide information about the
558 storage pools. The following subcommands are supported:
559
560 zpool -?
561 Displays a help message.
562
563 zpool add [-fn] pool vdev...
564 Adds the specified virtual devices to the given pool. The vdev
565 specification is described in the Virtual Devices section. The
566 behavior of the -f option, and the device checks performed are
567 described in the zpool create subcommand.
568
569 -f Forces use of vdevs, even if they appear in use or
570 specify a conflicting replication level. Not all devices
571 can be overridden in this manner.
572
573 -n Displays the configuration that would be used without
574 actually adding the vdevs. The actual addition of the
575 vdevs can still fail due to insufficient privileges or
576 device sharing.
577
578 zpool attach [-f] pool device new_device
579 Attaches new_device to the existing device. The existing device
580 cannot be part of a raidz configuration. If device is not
581 currently part of a mirrored configuration, device automatically
582 transforms into a two-way mirror of device and new_device. If
583 device is part of a two-way mirror, attaching new_device creates
584 a three-way mirror, and so on. In either case, new_device begins
585 to resilver immediately.
586
587 -f Forces use of new_device, even if it appears to be in
588 use. Not all devices can be overridden in this manner.
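
     For example, a single-disk vdev might be converted into a two-way
     mirror by attaching a second disk (the pool and device names here
     are illustrative):

         # zpool attach tank c0t0d0 c0t4d0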
589
590 zpool clear pool [device]
591 Clears device errors in a pool. If no arguments are specified,
592 all device errors within the pool are cleared. If one or more
593 devices is specified, only those errors associated with the
594 specified device or devices are cleared.
595
596 zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-O
597 file-system-property=value]... [-R root] pool vdev...
598 Creates a new storage pool containing the virtual devices
599 specified on the command line. The pool name must begin with a
600 letter, and can only contain alphanumeric characters as well as
601 underscore ("_"), dash ("-"), and period ("."). The pool names
602 mirror, raidz, spare and log are reserved, as are names beginning
603 with the pattern c[0-9]. The vdev specification is described in
604 the Virtual Devices section.
605
606 The command verifies that each device specified is accessible and
607 not currently in use by another subsystem. There are some uses,
608 such as being currently mounted, or specified as the dedicated
609 dump device, that prevent a device from ever being used by ZFS.
610 Other uses, such as having a preexisting UFS file system, can be
611 overridden with the -f option.
612
613 The command also checks that the replication strategy for the
614 pool is consistent. An attempt to combine redundant and non-
615 redundant storage in a single pool, or to mix disks and files,
616 results in an error unless -f is specified. The use of
617 differently sized devices within a single raidz or mirror group
618 is also flagged as an error unless -f is specified.
619
620 Unless the -R option is specified, the default mount point is
621 /pool. The mount point must not exist or must be empty, or else
622 the root dataset cannot be mounted. This can be overridden with
623 the -m option.
624
625 By default all supported features are enabled on the new pool
626 unless the -d option is specified.
627
628 -B Creates a whole-disk pool with an EFI System partition
629 to support booting the system with UEFI firmware. The
630 default size is 256MB. To create a boot partition with a
631 custom size, set the bootsize property with the -o
632 option. See the Pool Properties section for details.
633
634 -d Do not enable any features on the new pool. Individual
635 features can be enabled by setting their corresponding
636 properties to enabled with the -o option. See
637 zpool-features(5) for details about feature properties.
638
639 -f Forces use of vdevs, even if they appear in use or
640 specify a conflicting replication level. Not all devices
641 can be overridden in this manner.
642
643 -m mountpoint
644 Sets the mount point for the root dataset. The default
645 mount point is /pool or altroot/pool if altroot is
646 specified. The mount point must be an absolute path,
647 legacy, or none. For more information on dataset mount
648 points, see zfs(1M).
649
650 -n Displays the configuration that would be used without
651 actually creating the pool. The actual pool creation can
652 still fail due to insufficient privileges or device
653 sharing.
654
655 -o property=value
656 Sets the given pool properties. See the Pool Properties
657 section for a list of valid properties that can be set.
658
659 -O file-system-property=value
660 Sets the given file system properties in the root file
661 system of the pool. See the Properties section of
662 zfs(1M) for a list of valid properties that can be set.
663
664 -R root
665 Equivalent to -o cachefile=none -o altroot=root
666
667 zpool destroy [-f] pool
668 Destroys the given pool, freeing up any devices for other use.
669 This command tries to unmount any active datasets before
670 destroying the pool.
671
672 -f Forces any active datasets contained within the pool to
673 be unmounted.
674
675 zpool detach pool device
676 Detaches device from a mirror. The operation is refused if there
677 are no other valid replicas of the data.
678
679 zpool export [-cfF] [-t numthreads] pool...
680 Exports the given pools from the system. All devices are marked
681 as exported, but are still considered in use by other subsystems.
682 The devices can be moved between systems (even those of different
683 endianness) and imported as long as a sufficient number of
684 devices are present.
685
686 Before exporting the pool, all datasets within the pool are
687 unmounted. A pool can not be exported if it has a shared spare
688 that is currently being used.
689
690 For pools to be portable, you must give the zpool command whole
691 disks, not just slices, so that ZFS can label the disks with
692 portable EFI labels. Otherwise, disk drivers on platforms of
693 different endianness will not recognize the disks.
694
695 -c Keep the configuration information of the exported pool
696 in the cache file.
697
698 -f Forcefully unmount all datasets, using the unmount -f
699 command.
700
701 This command will forcefully export the pool even if it
702 has a shared spare that is currently being used. This
703 may lead to potential data corruption.
704
705 -F Do not update device labels or cache file with new
706 configuration.
707
708 -t numthreads
709 Unmount datasets in parallel using up to numthreads
710 threads.
711
712 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
713 Retrieves the given list of properties (or all properties if all
714 is used) for the specified storage pool(s). These properties are
715 displayed with the following fields:
716
717 name Name of storage pool
718 property Property name
719 value Property value
720 source Property source, either 'default' or 'local'.
721
722 See the Pool Properties section for more information on the
723 available pool properties.
724
725 -H Scripted mode. Do not display headers, and separate
726 fields by a single tab instead of arbitrary space.
727
728 -o field
729 A comma-separated list of columns to display.
730 name,property,value,source is the default value.
731
732 -p Display numbers in parsable (exact) values.
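
     For example, selected properties might be retrieved in a
     script-friendly form (the pool name here is illustrative):

         # zpool get -Hp -o name,property,value size,free,health tank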
733
734 zpool history [-il] [pool]...
735 Displays the command history of the specified pool(s) or all
736 pools if no pool is specified.
737
738 -i Displays internally logged ZFS events in addition to user
739 initiated events.
740
741 -l Displays log records in long format, which in addition to
742 standard format includes, the user name, the hostname,
743 and the zone in which the operation was performed.
744
745 zpool import [-D] [-c cachefile|-d dir]
746 Lists pools available to import. If the -d option is not
747 specified, this command searches for devices in /dev/dsk. The -d
748 option can be specified multiple times, and all directories are
749 searched. If the device appears to be part of an exported pool,
750 this command displays a summary of the pool, including the pool
751 name, a numeric identifier, and the vdev layout and current
752 health of each device or file. Destroyed
753 pools, pools that were previously destroyed with the zpool
754 destroy command, are not listed unless the -D option is
755 specified.
756
757 The numeric identifier is unique, and can be used instead of the
758 pool name when multiple exported pools of the same name are
759 available.
760
761 -c cachefile
762 Reads configuration from the given cachefile that was
763 created with the cachefile pool property. This cachefile
764 is used instead of searching for devices.
765
766 -d dir Searches for devices or files in dir. The -d option can
767 be specified multiple times.
768
769 -D Lists destroyed pools only.
770
771 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
772 property=value]... [-R root]
773 Imports all pools found in the search directories. Identical to
774 the previous command, except that all pools with a sufficient
775 number of devices available are imported. Destroyed pools, pools
776 that were previously destroyed with the zpool destroy command,
777 will not be imported unless the -D option is specified.
778
779 -a Searches for and imports all pools found.
780
781 -c cachefile
782 Reads configuration from the given cachefile that was
783 created with the cachefile pool property. This cachefile
784 is used instead of searching for devices.
785
786 -d dir Searches for devices or files in dir. The -d option can
787 be specified multiple times. This option is incompatible
788 with the -c option.
789
790 -D Imports destroyed pools only. The -f option is also
791 required.
792
793 -f Forces import, even if the pool appears to be potentially
794 active.
795
796 -F Recovery mode for a non-importable pool. Attempt to
797 return the pool to an importable state by discarding the
798 last few transactions. Not all damaged pools can be
799 recovered by using this option. If successful, the data
800 from the discarded transactions is irretrievably lost.
801 This option is ignored if the pool is importable or
802 already imported.
803
804 -m Allows a pool to import when there is a missing log
805 device. Recent transactions can be lost because the log
806 device will be discarded.
807
808 -n Used with the -F recovery option. Determines whether a
809 non-importable pool can be made importable again, but
810 does not actually perform the pool recovery. For more
811 details about pool recovery mode, see the -F option,
812 above.
813
814 -N Import the pool without mounting any file systems.
815
816 -o mntopts
817 Comma-separated list of mount options to use when
818 mounting datasets within the pool. See zfs(1M) for a
819 description of dataset properties and mount options.
820
821 -o property=value
822 Sets the specified property on the imported pool. See
823 the Pool Properties section for more information on the
824 available pool properties.
825
826 -R root
827 Sets the cachefile property to none and the altroot
828 property to root.
829
830 zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
831 property=value]... [-R root] [-t numthreads] pool|id [newpool]
832 Imports a specific pool. A pool can be identified by its name or
833 the numeric identifier. If newpool is specified, the pool is
834 imported using the name newpool. Otherwise, it is imported with
835 the same name as its exported name.
836
837 If a device is removed from a system without running zpool export
838 first, the device appears as potentially active. It cannot be
839 determined if this was a failed export, or whether the device is
840 really in use from another host. To import a pool in this state,
841 the -f option is required.
842
843 -c cachefile
844 Reads configuration from the given cachefile that was
845 created with the cachefile pool property. This cachefile
846 is used instead of searching for devices.
847
848 -d dir Searches for devices or files in dir. The -d option can
849 be specified multiple times. This option is incompatible
850 with the -c option.
851
852 -D Imports destroyed pool. The -f option is also required.
853
854 -f Forces import, even if the pool appears to be potentially
855 active.
856
857 -F Recovery mode for a non-importable pool. Attempt to
858 return the pool to an importable state by discarding the
859 last few transactions. Not all damaged pools can be
860 recovered by using this option. If successful, the data
861 from the discarded transactions is irretrievably lost.
862 This option is ignored if the pool is importable or
863 already imported.
864
865 -m Allows a pool to import when there is a missing log
866 device. Recent transactions can be lost because the log
867 device will be discarded.
868
869 -n Used with the -F recovery option. Determines whether a
870 non-importable pool can be made importable again, but
871 does not actually perform the pool recovery. For more
872 details about pool recovery mode, see the -F option,
873 above.
874
875 -o mntopts
876 Comma-separated list of mount options to use when
877 mounting datasets within the pool. See zfs(1M) for a
878 description of dataset properties and mount options.
879
880 -o property=value
881 Sets the specified property on the imported pool. See
882 the Pool Properties section for more information on the
883 available pool properties.
884
885 -R root
886 Sets the cachefile property to none and the altroot
887 property to root.
888
889 -t numthreads
890 Mount datasets in parallel using up to numthreads
891 threads.
892
893 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
894 Displays I/O statistics for the given pools. When given an
895 interval, the statistics are printed every interval seconds until
896 ^C is pressed. If no pools are specified, statistics for every
897 pool in the system is shown. If count is specified, the command
898 exits after count reports are printed.
899
900 -T u|d Display a time stamp. Specify u for a printed
901 representation of the internal representation of time.
902 See time(2). Specify d for standard date format. See
903 date(1).
904
905 -v Verbose statistics. Reports usage statistics for
906 individual vdevs within the pool, in addition to the
907 pool-wide statistics.
908
909 zpool labelclear [-f] device
910 Removes ZFS label information from the specified device. The
911 device must not be part of an active pool configuration.
912
913 -f Treat exported or foreign devices as inactive.
914
915 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
916 [interval [count]]
917 Lists the given pools along with a health status and space usage.
918 If no pools are specified, all pools in the system are listed.
919 When given an interval, the information is printed every interval
920 seconds until ^C is pressed. If count is specified, the command
921 exits after count reports are printed.
922
923 -H Scripted mode. Do not display headers, and separate
924 fields by a single tab instead of arbitrary space.
925
926 -o property
927 Comma-separated list of properties to display. See the
928 Pool Properties section for a list of valid properties.
929 The default list is name, size, allocated, free,
930 expandsize, fragmentation, capacity, dedupratio, health,
931 altroot.
932
933 -p Display numbers in parsable (exact) values.
934
935 -T u|d Display a time stamp. Specify u for a printed
936 representation of the internal representation of time.
937 See time(2). Specify d for standard date format. See
938 date(1).
939
940 -v Verbose statistics. Reports usage statistics for
941 individual vdevs within the pool, in addition to the
942 pool-wide statistics.
943
944 zpool offline [-t] pool device...
945 Takes the specified physical device offline. While the device is
946 offline, no attempt is made to read or write to the device. This
947 command is not applicable to spares.
948
949 -t Temporary. Upon reboot, the specified physical device
950 reverts to its previous state.
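
     For example, a disk might be taken offline temporarily before it
     is physically serviced (the pool and device names here are
     illustrative):

         # zpool offline -t tank c0t2d0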
951
952 zpool online [-e] pool device...
953 Brings the specified physical device online. This command is not
954 applicable to spares.
955
956 -e Expand the device to use all available space. If the
957 device is part of a mirror or raidz then all devices must
958 be expanded before the new space will become available to
959 the pool.
960
961 zpool reguid pool
962 Generates a new unique identifier for the pool. You must ensure
963 that all devices in this pool are online and healthy before
964 performing this action.
965
966 zpool reopen pool
967 Reopens all the vdevs associated with the pool.
968
969 zpool remove pool device...
970 Removes the specified device from the pool. This command
971 currently only supports removing hot spares, cache, log and
972 special devices. A mirrored log device can be removed by
973 specifying the top-level mirror for the log. Non-log devices
974 that are part of a mirrored configuration can be removed using
975 the zpool detach command. Non-redundant and raidz devices cannot
976 be removed from a pool.
977
978 zpool replace [-f] pool device [new_device]
979 Replaces old_device with new_device. This is equivalent to
980 attaching new_device, waiting for it to resilver, and then
981 detaching old_device.
982
983 The size of new_device must be greater than or equal to the
984 minimum size of all the devices in a mirror or raidz
985 configuration.
986
987 new_device is required if the pool is not redundant. If
988 new_device is not specified, it defaults to old_device. This
989 form of replacement is useful after an existing disk has failed
990 and has been physically replaced. In this case, the new disk may
991 have the same /dev/dsk path as the old device, even though it is
992 actually a different disk. ZFS recognizes this.
993
994 -f Forces use of new_device, even if it appears to be in
995 use. Not all devices can be overridden in this manner.
996
997 zpool scrub [-m|-M|-p|-s] pool...
998 Begins a scrub or resumes a paused scrub. The scrub examines all
999 data in the specified pools to verify that it checksums
1000 correctly. For replicated (mirror or raidz) devices, ZFS
1001 automatically repairs any damage discovered during the scrub.
1002 The zpool status command reports the progress of the scrub and
1003 summarizes the results of the scrub upon completion.
1004
1005 Scrubbing and resilvering are very similar operations. The
1006 difference is that resilvering only examines data that ZFS knows
1007 to be out of date (for example, when attaching a new device to a
1008 mirror or replacing an existing device), whereas scrubbing
1009 examines all data to discover silent errors due to hardware
1010 faults or disk failure.
1011
1012 Because scrubbing and resilvering are I/O-intensive operations,
1013 ZFS only allows one at a time. If a scrub is paused, the zpool
1014 scrub resumes it. If a resilver is in progress, ZFS does not
1015 allow a scrub to be started until the resilver completes.
1016
1017 A partial scrub may be requested using the -m or -M option.
1018
1019 -m Scrub only metadata blocks.
1020
1021 -M Scrub only MOS blocks.
1022
1023 -p Pause scrubbing. Scrub pause state and progress are
1024 periodically synced to disk. If the system is restarted
1025 or the pool is exported during a paused scrub, even after
1026 import, the scrub will remain paused until it is resumed.
1027 Once resumed, the scrub will pick up from the place where
1028 it was last checkpointed to disk. To resume a paused
1029 scrub, issue zpool scrub again.
1030
1031 -s Stop scrubbing.
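
     For example, a scrub might be started, paused, and later resumed
     as follows (the pool name here is illustrative):

         # zpool scrub tank
         # zpool scrub -p tank
         # zpool scrub tank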
1032
1033 zpool set property=value pool
1034 Sets the given property on the specified pool. See the Pool
1035 Properties section for more information on what properties can be
1036 set and acceptable values.
1037
1038 zpool split [-n] [-o property=value]... [-R root] pool newpool
1039 Splits devices off pool creating newpool. All vdevs in pool must
1040 be mirrors. At the time of the split, newpool will be a replica
1041 of pool.
1042
1043 -n Do a dry run; do not actually perform the split. Print
1044 out the expected configuration of newpool.
1045
1046 -o property=value
1047 Sets the specified property for newpool. See the Pool
1048 Properties section for more information on the available
1049 pool properties.
1050
1051 -R root
1052 Set altroot for newpool to root and automatically import
1053 it.
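
     For example, one half of each mirror might be split off into a new
     pool and imported under an alternate root (the pool names and path
     here are illustrative):

         # zpool split -R /backup tank tank2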
1054
1055 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
1056 Displays the detailed health status for the given pools. If no
1057 pool is specified, then the status of each pool in the system is
1058 displayed. For more information on pool and device health, see
1059 the Device Failure and Recovery section.
1060
1061 If a scrub or resilver is in progress, this command reports the
1062 percentage done and the estimated time to completion. Both of
1063 these are only approximate, because the amount of data in the
1064 pool and the other workloads on the system can change.
1065
1066 -D Display a histogram of deduplication statistics, showing
1067 the allocated (physically present on disk) and referenced
1068 (logically referenced in the pool) block counts and sizes
1069 by reference count.
1070
1071 -T u|d Display a time stamp. Specify u for a printed
1072 representation of the internal representation of time.
1073 See time(2). Specify d for standard date format. See
1074 date(1).
1075
1076 -v Displays verbose data error information, printing out a
1077 complete list of all data errors since the last complete
1078 pool scrub.
1079
1080 -x Only display status for pools that are exhibiting errors
1081 or are otherwise unavailable. Warnings about pools not
1082 using the latest on-disk format will not be included.
1083
1084 zpool trim [-r rate|-s] pool...
1085 Initiates an on-demand TRIM operation on all of the free space of
1086 a pool. This informs the underlying storage devices of all of
1087 the blocks that the pool no longer considers allocated, thus
1088 allowing thinly provisioned storage devices to reclaim them.
1089 Please note that this collects all space marked as "freed" in the
1090 pool immediately and doesn't wait for the zfs_txgs_per_trim delay
1091 automatic TRIM does. Hence, this can limit pool corruption
1092 recovery options during and immediately following the on-demand
1093 TRIM to 1-2 TXGs into the past (instead of the standard 32-64 of
1094 automatic TRIM). This approach, however, allows you to recover
1095 the maximum amount of free space from the pool immediately
1096 without having to wait.
1097
1098 Also note that an on-demand TRIM operation can be initiated
1099 irrespective of the autotrim pool property setting. It does,
1100 however, respect the forcetrim pool property.
1101
1102 An on-demand TRIM operation does not conflict with an ongoing
1103 scrub, but it can put significant I/O stress on the underlying
1104 vdevs. A resilver, however, automatically stops an on-demand
1105 TRIM operation. You can manually reinitiate the TRIM operation
1106 after the resilver has started, by simply reissuing the zpool
1107 trim command.
1108
1109 Adding a vdev during TRIM is supported, although the progression
1110 display in zpool status might not be entirely accurate in that
1111 case (TRIM will complete before reaching 100%). Removing or
1112 detaching a vdev will prematurely terminate an on-demand TRIM
1113 operation.
1114
1115 -r rate
1116 Controls the speed at which the TRIM operation
1117 progresses. Without this option, TRIM is executed in
1118 parallel on all top-level vdevs as quickly as possible.
1119 This option allows you to control how fast (in bytes per
1120 second) the TRIM is executed. This rate is applied on a
1121 per-vdev basis, i.e. every top-level vdev in the pool
1122 tries to match this speed.
1123
1124 Due to limitations in how the algorithm is designed,
1125 TRIMs are executed in whole-metaslab increments. Each
1126 top-level vdev contains approximately 200 metaslabs, so a
1127 rate-limited TRIM progresses in steps, i.e. it TRIMs one
1128 metaslab completely and then waits for a while so that
1129 over the whole device, the speed averages out.
1130
1131 When an on-demand TRIM operation is already in progress,
1132 this option changes its rate. To change a rate-limited
1133 TRIM to an unlimited one, simply execute the zpool trim
1134 command without the -r option.
1135
1136 -s Stop trimming. If an on-demand TRIM operation is not
1137 ongoing at the moment, this does nothing and the command
1138 returns success.
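
     For example, an on-demand TRIM might be started at a limited rate
     and later stopped (the pool name here is illustrative, and the
     rate is given as a plain byte-per-second count):

         # zpool trim -r 104857600 tank
         # zpool trim -s tank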
1139
1140 zpool upgrade
1141 Displays pools which do not have all supported features enabled
1142 and pools formatted using a legacy ZFS version number. These
1143 pools can continue to be used, but some features may not be
1144 available. Use zpool upgrade -a to enable all features on all
1145 pools.
1146
1147 zpool upgrade -v
1148 Displays legacy ZFS versions supported by the current software.
1149 See zpool-features(5) for a description of the feature-flag
1150 features supported by the current software.
1151
1152 zpool upgrade [-V version] -a|pool...
1153 Enables all supported features on the given pool. Once this is
1154 done, the pool will no longer be accessible on systems that do
1155 not support feature flags. See zpool-features(5) for details on
1156 compatibility with systems that support feature flags, but do not
1157 support all features enabled on the pool.
1158
1159 -a Enables all supported features on all pools.
1160
1161 -V version
1162 Upgrade to the specified legacy version. If the -V flag
1163 is specified, no features will be enabled on the pool.
1164 This option can only be used to increase the version
1165 number up to the last supported legacy version number.
1166
1167 zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
1168 Retrieves the given list of vdev properties (or all properties if
1169 all is used) for the specified vdev of the specified storage
1170 pool. These properties are displayed in the same manner as the
1171 pool properties. The operation is supported for leaf-level vdevs
1172 only. See the Device Properties section for more information on
1173 the available properties.
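
     For example, all properties of a single leaf vdev might be
     displayed as follows (the pool and device names here are
     illustrative):

         # zpool vdev-get all tank c0t0d0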
1174
1175 zpool vdev-set property=value pool vdev-name|vdev-guid
1176 Sets the given property on the specified device of the specified
1177 pool. If a top-level vdev is specified, the property is set on
1178 all of the child devices. See the Device Properties section for
1179 more information on what properties can be set and acceptable values.
1180
1181 EXIT STATUS
1182 The following exit values are returned:
1183
1184 0 Successful completion.
1185
1186 1 An error occurred.
1187
1188 2 Invalid command line options were specified.
1189
1190 EXAMPLES
1191 Example 1 Creating a RAID-Z Storage Pool
1192 The following command creates a pool with a single raidz root
1193 vdev that consists of six disks.
1194
1195 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1196
1197 Example 2 Creating a Mirrored Storage Pool
1198 The following command creates a pool with two mirrors, where each
1199 mirror contains two disks.
1200
1201 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1202
1203 Example 3 Creating a ZFS Storage Pool by Using Slices
1204 The following command creates an unmirrored pool using two disk
1205 slices.
1206
1207 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1208
1209 Example 4 Creating a ZFS Storage Pool by Using Files
1210 The following command creates an unmirrored pool using files.
1211 While not recommended, a pool based on files can be useful for
1212 experimental purposes.
1213
1214 # zpool create tank /path/to/file/a /path/to/file/b
1215
1216 Example 5 Adding a Mirror to a ZFS Storage Pool
1217 The following command adds two mirrored disks to the pool tank,
1218 assuming the pool is already made up of two-way mirrors. The
1219 additional space is immediately available to any datasets within
1220 the pool.
1221
1222 # zpool add tank mirror c1t0d0 c1t1d0
1223
1224 Example 6 Listing Available ZFS Storage Pools
1225 The following command lists all available pools on the system.
1226 In this case, the pool zion is faulted due to a missing device.
1227 The results from this command are similar to the following:
1228
1229 # zpool list
1230 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1231 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1232 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1233 zion - - - - - - - FAULTED -
1234
1235 Example 7 Destroying a ZFS Storage Pool
1236 The following command destroys the pool tank and any datasets
1237 contained within.
1238
1239 # zpool destroy -f tank
1240
1241 Example 8 Exporting a ZFS Storage Pool
1242 The following command exports the devices in pool tank so that
1243 they can be relocated or later imported.
1244
1245 # zpool export tank
1246
1247 Example 9 Importing a ZFS Storage Pool
1248 The following command displays available pools, and then imports
1249 the pool tank for use on the system. The results from this
1250 command are similar to the following:
1251
1252 # zpool import
1253 pool: tank
1254 id: 15451357997522795478
1255 state: ONLINE
1256 action: The pool can be imported using its name or numeric identifier.
1257 config:
1258
1259 tank ONLINE
1260 mirror ONLINE
1261 c1t2d0 ONLINE
1262 c1t3d0 ONLINE
1263
1264 # zpool import tank
1265
1266 Example 10 Upgrading All ZFS Storage Pools to the Current Version
1267 The following command upgrades all ZFS Storage pools to the
1268 current version of the software.
1269
1270 # zpool upgrade -a
1271 This system is currently running ZFS version 2.
1272
1273 Example 11 Managing Hot Spares
1274 The following command creates a new pool with an available hot
1275 spare:
1276
1277 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1278
1279 If one of the disks were to fail, the pool would be reduced to
1280 the degraded state. The failed device can be replaced using the
1281 following command:
1282
1283 # zpool replace tank c0t0d0 c0t3d0
1284
1285 Once the data has been resilvered, the spare is automatically
1286 removed and is made available for use should another device fail.
1287 The hot spare can be permanently removed from the pool using the
1288 following command:
1289
1290 # zpool remove tank c0t2d0
1291
1292 Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1293 The following command creates a ZFS storage pool consisting of
1294 two, two-way mirrors and mirrored log devices:
1295
1296 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1297 c4d0 c5d0
1298
1299 Example 13 Adding Cache Devices to a ZFS Pool
1300 The following command adds two disks for use as cache devices to
1301 a ZFS storage pool:
1302
1303 # zpool add pool cache c2d0 c3d0
1304
1305 Once added, the cache devices gradually fill with content from
1306 main memory. Depending on the size of your cache devices, it
1307 could take over an hour for them to fill. Capacity and reads can
1308 be monitored using the zpool iostat subcommand as follows:
1309
1310 # zpool iostat -v pool 5
1311
1312 Example 14 Removing a Mirrored Log Device
1313 The following command removes the mirrored log device mirror-2.
1314 Given this configuration:
1315
1316 pool: tank
1317 state: ONLINE
1318 scrub: none requested
1319 config:
1320
1321 NAME STATE READ WRITE CKSUM
1322 tank ONLINE 0 0 0
1323 mirror-0 ONLINE 0 0 0
1324 c6t0d0 ONLINE 0 0 0
1325 c6t1d0 ONLINE 0 0 0
1326 mirror-1 ONLINE 0 0 0
1327 c6t2d0 ONLINE 0 0 0
1328 c6t3d0 ONLINE 0 0 0
1329 logs
1330 mirror-2 ONLINE 0 0 0
1331 c4t0d0 ONLINE 0 0 0
1332 c4t1d0 ONLINE 0 0 0
1333
1334 The command to remove the mirrored log mirror-2 is:
1335
1336 # zpool remove tank mirror-2
1337
1338 Example 15 Displaying expanded space on a device
1339 The following command displays the detailed information for the
1340 pool data. This pool consists of a single raidz vdev where
1341 one of its devices increased its capacity by 10GB. In this
1342 example, the pool will not be able to utilize this extra capacity
1343 until all the devices under the raidz vdev have been expanded.
1344
1345 # zpool list -v data
1346 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1347 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1348 raidz1 23.9G 14.6G 9.30G 48% -
1349 c1t1d0 - - - - -
1350 c1t2d0 - - - - 10G
1351 c1t3d0 - - - - -
1352
1353 INTERFACE STABILITY
1354 Evolving
1355
1356 SEE ALSO
1357 zfs(1M), attributes(5), zpool-features(5)
1358
1359 illumos December 6, 2017 illumos