\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...
.fi

.LP
.nf
\fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR [\fInew_device\fR]
.fi

.LP
.nf
\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...
.fi

.LP
.nf
\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR
.fi

.LP
.nf
\fBzpool split\fR [\fB-n\fR] [\fB-R\fR \fIaltroot\fR] [\fB-o\fR \fImntopts\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fInewpool\fR [\fIdevice\fR ... ]
.fi

.LP
.nf
\fBzpool status\fR [\fB-xvD\fR] [\fB-T\fR \fBu\fR | \fBd\fR ] [\fIpool\fR] ... [\fIinterval\fR [\fIcount\fR]]
.fi

.LP
.nf
\fBzpool upgrade\fR
.fi

.LP
.nf
\fBzpool upgrade\fR \fB-v\fR
.fi

.LP
.nf
\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...
.fi

.SH DESCRIPTION
.LP
The \fBzpool\fR command configures \fBZFS\fR storage pools. A storage pool is a
collection of devices that provides physical storage and data replication for
\fBZFS\fR datasets.
.sp
.LP
All datasets within a storage pool share the same space. See \fBzfs\fR(1M) for
information on managing datasets.
.SS "Virtual Devices (\fBvdev\fRs)"
.LP
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics. The
following virtual devices are supported:
.sp
.ne 2
.na
\fB\fBdisk\fR\fR
.ad
.RS 10n
A block device, typically located under \fB/dev/dsk\fR. \fBZFS\fR can use
individual slices or partitions, though the recommended mode of operation is to
use whole disks. A disk can be specified by a full path, or it can be a
shorthand name (the relative portion of the path under "/dev/dsk"). A whole
disk can be specified by omitting the slice or partition designation. For
example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole
disk, \fBZFS\fR automatically labels the disk, if necessary.
.RE

.sp
.LP
A pool can have any number of virtual devices at the top of the configuration
(known as "root vdevs"). Data is dynamically distributed across all top-level
devices to balance data among devices. As new virtual devices are added,
\fBZFS\fR automatically places data on the newly available devices.
.sp
.LP
Virtual devices are specified one at a time on the command line, separated by
whitespace. The keywords "mirror" and "raidz" are used to distinguish where a
group ends and another begins. For example, the following creates two root
vdevs, each a mirror of two disks:
.sp
.in +2
.nf
# \fBzpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0\fR
.fi
.in -2
.sp

.SS "Device Failure and Recovery"
.LP
\fBZFS\fR supports a rich set of mechanisms for handling device failure and
data corruption. All metadata and data is checksummed, and \fBZFS\fR
automatically repairs bad data from a good copy when corruption is detected.
.sp
.LP
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR
supports running in a non-redundant configuration, where each root vdev is
simply a disk or file, this is strongly discouraged. A single case of bit
corruption can render some or all of your data unavailable.
.sp
.LP
A pool's health status is described by one of three states: online, degraded,
or faulted. An online pool has all devices operating normally. A degraded pool
is one in which one or more devices have failed, but the data is still
available due to a redundant configuration. A faulted pool has corrupted
metadata, or one or more faulted devices, and insufficient replicas to continue
functioning.
.sp
detection is hardware-dependent and may not be supported on all platforms.
.RE

.sp
.ne 2
.na
\fB\fBUNAVAIL\fR\fR
.ad
.RS 12n
The device could not be opened. If a pool is imported while a device is
unavailable, the device is identified by a unique identifier instead of its
path, since the path was never valid in the first place.
.RE

.sp
.LP
If a device is removed and later re-attached to the system, \fBZFS\fR attempts
to put the device online automatically. Device attach detection is
hardware-dependent and might not be supported on all platforms.
.SS "Hot Spares"
.LP
\fBZFS\fR allows devices to be associated with pools as "hot spares". These
devices are not actively used in the pool, but when an active device fails, it
is automatically replaced by a hot spare. To create a pool with hot spares,
specify a "spare" \fBvdev\fR with any number of devices. For example,
.sp
.in +2
.nf
# \fBzpool create pool mirror c0d0 c1d0 spare c2d0 c3d0\fR
.fi
.in -2
.sp

.sp
.LP
Spares can be shared across multiple pools, and can be added with the "\fBzpool
add\fR" command and removed with the "\fBzpool remove\fR" command. Once a spare
replacement is initiated, a new "spare" \fBvdev\fR is created within the
configuration that will remain there until the original device is replaced. At
this point, the hot spare becomes available again if another device fails.
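.sp
.LP
For example, a spare can be added to and later removed from an existing pool
(the device name is illustrative):
.sp
.in +2
.nf
# \fBzpool add pool spare c4d0\fR
# \fBzpool remove pool c4d0\fR
.fi
.in -2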
.sp
.LP
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may be using this shared spare, which could lead to
data corruption.
.sp
.LP
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.sp
.LP
Spares cannot replace log devices.
.SS "Intent Log"
.LP
The \fBZFS\fR Intent Log (\fBZIL\fR) satisfies \fBPOSIX\fR requirements for
synchronous transactions. For instance, databases often require their
transactions to be on stable storage devices when returning from a system call.
\fBNFS\fR and other applications can also use \fBfsync\fR() to ensure data
stability. By default, the intent log is allocated from blocks within the main
pool. However, it might be possible to get better performance using separate
intent log devices such as \fBNVRAM\fR or a dedicated disk. For example:
.sp
.in +2
.nf
# \fBzpool create pool c0d0 c1d0 log c2d0\fR
.fi
.in -2
.sp

.sp
.LP
Multiple log devices can also be specified, and they can be mirrored. See the
EXAMPLES section for an example of mirroring multiple log devices.
.sp
.LP
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool. Mirrored log devices can be removed by
specifying the top-level mirror for the log.
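.sp
.LP
For example, a mirrored log can be added to an existing pool (device names are
illustrative):
.sp
.in +2
.nf
# \fBzpool add pool log mirror c2d0 c3d0\fR
.fi
.in -2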
.SS "Cache Devices"
.LP
Devices can be added to a storage pool as "cache devices." These devices
provide an additional layer of caching between main memory and disk. For
read-heavy workloads, where the working set size is much larger than what can
be cached in main memory, using cache devices allows much more of this working
set to be served from low-latency media. Using cache devices provides the
greatest performance improvement for random read workloads of mostly static
content.
.sp
.LP
To create a pool with cache devices, specify a "cache" \fBvdev\fR with any
number of devices. For example:
.sp
.in +2
.nf
# \fBzpool create pool c0d0 c1d0 cache c2d0 c3d0\fR
.fi
.in -2
.sp

.sp
.LP
Cache devices cannot be mirrored or part of a \fBraidz\fR configuration. If a
read error is encountered on a cache device, that read \fBI/O\fR is reissued to
the original storage pool device, which might be part of a mirrored or
\fBraidz\fR configuration.
.sp
.LP
The content of the cache devices is considered volatile, as is the case with
other system caches.
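.sp
.LP
Cache devices can likewise be added to an existing pool (the device name is
illustrative):
.sp
.in +2
.nf
# \fBzpool add pool cache c4d0\fR
.fi
.in -2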
.SS "Properties"
.LP
Each pool has several properties associated with it. Some properties are
read-only statistics while others are configurable and change the behavior of
the pool. The following are read-only properties:
.sp
.ne 2
.na
\fB\fBavailable\fR\fR
.ad
.RS 20n
Amount of storage available within the pool. This property can also be referred
to by its shortened column name, "avail".
.RE

.sp
.ne 2
.na
\fB\fBcapacity\fR\fR
.ad
.RS 20n
Controls whether information about snapshots associated with this pool is
output when "\fBzfs list\fR" is run without the \fB-t\fR option. The default
value is "off".
.RE

.sp
.ne 2
.na
\fB\fBversion\fR=\fIversion\fR\fR
.ad
.sp .6
.RS 4n
The current on-disk version of the pool. This can be increased, but never
decreased. The preferred method of updating pools is with the "\fBzpool
upgrade\fR" command, though this property can be used when a specific version
is needed for backwards compatibility. Once feature flags are enabled on a
pool, this property no longer has a value.
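.sp
For example, to set a pool to a specific legacy version (the version number is
illustrative):
.sp
.in +2
.nf
# \fBzpool set version=28 pool\fR
.fi
.in -2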
.RE

.SS "Subcommands"
.LP
All subcommands that modify state are logged persistently to the pool in their
original form.
.sp
.LP
The \fBzpool\fR command provides subcommands to create and destroy storage
pools, add capacity to storage pools, and provide information about the storage
pools. The following subcommands are supported:
.sp
.ne 2
.na
\fB\fBzpool\fR \fB-?\fR\fR
.ad
.sp .6
.RS 4n
Displays a help message.
.RE

.sp
.ne 2
.RS 6n
Stop scrubbing.
.RE

.RE

.sp
.ne 2
.na
\fB\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR\fR
.ad
.sp .6
.RS 4n
Sets the given property on the specified pool. See the "Properties" section for
more information on what properties can be set and acceptable values.
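.sp
For example, using a property and value described in the "Properties" section:
.sp
.in +2
.nf
# \fBzpool set listsnapshots=on pool\fR
.fi
.in -2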
.RE

.sp
.ne 2
.na
\fBzpool split\fR [\fB-n\fR] [\fB-R\fR \fIaltroot\fR] [\fB-o\fR \fImntopts\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fInewpool\fR [\fIdevice\fR ... ]
.ad
.sp .6
.RS 4n
Splits off one disk from each mirrored top-level vdev in a pool and creates a
new pool from the split-off disks. The original pool must be made up of one or
more mirrors and must not be in the process of resilvering. The \fBsplit\fR
subcommand chooses the last device in each mirror vdev unless overridden by a
device specification on the command line.
.sp
When a \fIdevice\fR argument is given, \fBsplit\fR includes the specified
device(s) in the new pool and, for any mirror vdev left unspecified, assigns
its last device to the new pool as it does normally. If you are uncertain
about the outcome of a \fBsplit\fR command, use the \fB-n\fR ("dry run")
option to verify that the command will have the effect you intend.
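.sp
For example, a dry run of splitting a pool of mirrors might look like this
(pool names are illustrative):
.sp
.in +2
.nf
# \fBzpool split -n mypool mynewpool\fR
.fi
.in -2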

.sp
.ne 2
.na
\fB\fB-n\fR \fR
.ad
.sp .6
.RS 4n
Displays the configuration that would be created without actually splitting
the pool. The actual pool split could still fail due to insufficient
privileges or device status.
.RE

.sp
.ne 2
.na
\fB\fB-R\fR \fIaltroot\fR \fR
.ad
.sp .6
.RS 4n
Automatically import the newly created pool after splitting, using the
specified \fIaltroot\fR parameter for the new pool's alternate root. See the
\fBaltroot\fR description in the "Properties" section, above.
.RE

.sp
.ne 2
.na
\fB\fB-o\fR \fImntopts\fR \fR
.ad
.sp .6
.RS 4n
Comma-separated list of mount options to use when mounting datasets within
the pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
options. Valid only in conjunction with the \fB-R\fR option.
.RE

.sp
.ne 2
.na
\fB\fB-o\fR \fIproperty=value\fR \fR
.ad
.sp .6
.RS 4n
Sets the specified property on the new pool. See the "Properties" section,
above, for more information on the available pool properties.
.RE

.RE

.sp
.ne 2
.na
\fBzpool status\fR [\fB-xvD\fR] [\fB-T\fR \fBu\fR | \fBd\fR ] [\fIpool\fR] ... [\fIinterval\fR [\fIcount\fR]]
.ad
.sp .6
.RS 4n
Displays the detailed health status for the given pools. If no \fIpool\fR is
specified, then the status of each pool in the system is displayed. For more
information on pool and device health, see the "Device Failure and Recovery"
section.
.sp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion. Both of these are only approximate,
because the amount of data in the pool and the other workloads on the system
can change.
.sp
.ne 2
.na
\fB\fB-x\fR\fR
.ad
.RS 6n
Only display status for pools that are exhibiting errors or are otherwise
The following command displays detailed information for the \fIdata\fR
pool. This pool consists of a single \fIraidz\fR vdev in which one of the
devices has increased its capacity by 10GB. In this example, the pool cannot
use this extra capacity until all the devices under the \fIraidz\fR vdev
have been expanded.

.sp
.in +2
.nf
# \fBzpool list -v data\fR
NAME        SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
data       23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
  raidz1   23.9G  14.6G  9.30G    48%         -
    c1t1d0     -      -      -      -         -
    c1t2d0     -      -      -      -       10G
    c1t3d0     -      -      -      -         -
.fi
.in -2

.SH EXIT STATUS
.LP
The following exit values are returned:
.sp
.ne 2
.na
\fB\fB0\fR\fR
.ad
.RS 5n
Successful completion.
.RE

.sp
.ne 2
.na
\fB\fB1\fR\fR
.ad
.RS 5n
An error occurred.
.RE

.sp
.ne 2
.na
\fB\fB2\fR\fR
.ad
.RS 5n
Invalid command line options were specified.
.RE

.SH ATTRIBUTES
.LP
See \fBattributes\fR(5) for descriptions of the following attributes:
.sp

.sp
.TS
box;
c | c
l | l .
ATTRIBUTE TYPE	ATTRIBUTE VALUE
_
Interface Stability	Evolving
.TE

.SH SEE ALSO
.LP
\fBzfs\fR(1M), \fBzpool-features\fR(5), \fBattributes\fR(5)