1 '\" te
2 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
3 .\" Copyright 2011, Nexenta Systems, Inc. All Rights Reserved.
4 .\" Copyright (c) 2012 by Delphix. All Rights Reserved.
5 .\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
6 .\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the
7 .\" fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]
8 .TH ZPOOL 1M "Nov 14, 2011"
9 .SH NAME
10 zpool \- configures ZFS storage pools
11 .SH SYNOPSIS
12 .LP
13 .nf
14 \fBzpool\fR [\fB-?\fR]
15 .fi
16
17 .LP
18 .nf
19 \fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...
20 .fi
21
22 .LP
23 .nf
24 \fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR
25 .fi
26
27 .LP
28 .nf
29 \fBzpool clear\fR \fIpool\fR [\fIdevice\fR]
30 .fi
31
32 .LP
33 .nf
34 \fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR]
35 ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR \fIvdev\fR ...
36 .fi
37
38 .LP
39 .nf
40 \fBzpool destroy\fR [\fB-f\fR] \fIpool\fR
41 .fi
42
43 .LP
44 .nf
45 \fBzpool detach\fR \fIpool\fR \fIdevice\fR
46 .fi
47
48 .LP
49 .nf
50 \fBzpool export\fR [\fB-f\fR] \fIpool\fR ...
51 .fi
52
53 .LP
54 .nf
55 \fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...
56 .fi
57
58 .LP
59 .nf
60 \fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...
61 .fi
62
63 .LP
64 .nf
65 \fBzpool import\fR [\fB-d\fR \fIdir\fR] [\fB-D\fR]
66 .fi
67
68 .LP
69 .nf
70 \fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
71 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR
72 .fi
73
74 .LP
75 .nf
76 \fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
77 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR |\fIid\fR [\fInewpool\fR]
78 .fi
79
80 .LP
81 .nf
82 \fBzpool iostat\fR [\fB-T\fR u | d ] [\fB-v\fR] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]]
83 .fi
84
85 .LP
86 .nf
87 \fBzpool list\fR [\fB-Hv\fR] [\fB-o\fR \fIproperty\fR[,...]] [\fIpool\fR] ...
88 .fi
89
90 .LP
91 .nf
92 \fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...
93 .fi
94
95 .LP
96 .nf
97 \fBzpool online\fR \fIpool\fR \fIdevice\fR ...
98 .fi
99
100 .LP
101 .nf
102 \fBzpool reguid\fR \fIpool\fR
103 .fi
104
105 .LP
106 .nf
107 \fBzpool remove\fR \fIpool\fR \fIdevice\fR ...
108 .fi
109
110 .LP
111 .nf
112 \fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR [\fInew_device\fR]
113 .fi
114
115 .LP
116 .nf
117 \fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...
118 .fi
119
120 .LP
121 .nf
122 \fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR
123 .fi
124
125 .LP
126 .nf
127 \fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...
128 .fi
129
130 .LP
131 .nf
132 \fBzpool upgrade\fR
133 .fi
134
135 .LP
136 .nf
137 \fBzpool upgrade\fR \fB-v\fR
138 .fi
139
140 .LP
141 .nf
142 \fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...
143 .fi
144
145 .SH DESCRIPTION
146 .sp
147 .LP
148 The \fBzpool\fR command configures \fBZFS\fR storage pools. A storage pool is a
149 collection of devices that provides physical storage and data replication for
150 \fBZFS\fR datasets.
151 .sp
152 .LP
153 All datasets within a storage pool share the same space. See \fBzfs\fR(1M) for
154 information on managing datasets.
155 .SS "Virtual Devices (\fBvdev\fRs)"
156 .sp
157 .LP
158 A "virtual device" describes a single device or a collection of devices
159 organized according to certain performance and fault characteristics. The
160 following virtual devices are supported:
161 .sp
162 .ne 2
163 .na
164 \fB\fBdisk\fR\fR
165 .ad
166 .RS 10n
167 A block device, typically located under \fB/dev/dsk\fR. \fBZFS\fR can use
168 individual slices or partitions, though the recommended mode of operation is to
169 use whole disks. A disk can be specified by a full path, or it can be a
170 shorthand name (the relative portion of the path under "/dev/dsk"). A whole
171 disk can be specified by omitting the slice or partition designation. For
172 example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole
173 disk, \fBZFS\fR automatically labels the disk, if necessary.
174 .RE
175
176 .sp
177 .ne 2
178 .na
179 \fB\fBfile\fR\fR
180 .ad
181 .RS 10n
182 A regular file. The use of files as a backing store is strongly discouraged. It
183 is designed primarily for experimental purposes, as the fault tolerance of a
184 file is only as good as the file system of which it is a part. A file must be
185 specified by a full path.
186 .RE
187
188 .sp
189 .ne 2
190 .na
191 \fB\fBmirror\fR\fR
192 .ad
193 .RS 10n
194 A mirror of two or more devices. Data is replicated in an identical fashion
195 across all components of a mirror. A mirror with \fIN\fR disks of size \fIX\fR
196 can hold \fIX\fR bytes and can withstand (\fIN-1\fR) devices failing before
197 data integrity is compromised.
198 .RE
199
200 .sp
201 .ne 2
202 .na
203 \fB\fBraidz\fR\fR
204 .ad
205 .br
206 .na
207 \fB\fBraidz1\fR\fR
208 .ad
209 .br
210 .na
211 \fB\fBraidz2\fR\fR
212 .ad
213 .br
214 .na
215 \fB\fBraidz3\fR\fR
216 .ad
217 .RS 10n
218 A variation on \fBRAID-5\fR that allows for better distribution of parity and
219 eliminates the "\fBRAID-5\fR write hole" (in which data and parity become
220 inconsistent after a power loss). Data and parity is striped across all disks
221 within a \fBraidz\fR group.
222 .sp
223 A \fBraidz\fR group can have single-, double- , or triple parity, meaning that
224 the \fBraidz\fR group can sustain one, two, or three failures, respectively,
225 without losing any data. The \fBraidz1\fR \fBvdev\fR type specifies a
226 single-parity \fBraidz\fR group; the \fBraidz2\fR \fBvdev\fR type specifies a
227 double-parity \fBraidz\fR group; and the \fBraidz3\fR \fBvdev\fR type specifies
228 a triple-parity \fBraidz\fR group. The \fBraidz\fR \fBvdev\fR type is an alias
229 for \fBraidz1\fR.
230 .sp
231 A \fBraidz\fR group with \fIN\fR disks of size \fIX\fR with \fIP\fR parity
232 disks can hold approximately (\fIN-P\fR)*\fIX\fR bytes and can withstand
233 \fIP\fR device(s) failing before data integrity is compromised. The minimum
234 number of devices in a \fBraidz\fR group is one more than the number of parity
235 disks. The recommended number is between 3 and 9 to help increase performance.
236 .RE
237
238 .sp
239 .ne 2
240 .na
241 \fB\fBspare\fR\fR
242 .ad
243 .RS 10n
244 A special pseudo-\fBvdev\fR which keeps track of available hot spares for a
245 pool. For more information, see the "Hot Spares" section.
246 .RE
247
248 .sp
249 .ne 2
250 .na
251 \fB\fBlog\fR\fR
252 .ad
253 .RS 10n
254 A separate-intent log device. If more than one log device is specified, then
255 writes are load-balanced between devices. Log devices can be mirrored. However,
256 \fBraidz\fR \fBvdev\fR types are not supported for the intent log. For more
257 information, see the "Intent Log" section.
258 .RE
259
260 .sp
261 .ne 2
262 .na
263 \fB\fBcache\fR\fR
264 .ad
265 .RS 10n
266 A device used to cache storage pool data. A cache device cannot be cannot be
267 configured as a mirror or \fBraidz\fR group. For more information, see the
268 "Cache Devices" section.
269 .RE
270
271 .sp
272 .LP
273 Virtual devices cannot be nested, so a mirror or \fBraidz\fR virtual device can
274 only contain files or disks. Mirrors of mirrors (or other combinations) are not
275 allowed.
276 .sp
277 .LP
278 A pool can have any number of virtual devices at the top of the configuration
279 (known as "root vdevs"). Data is dynamically distributed across all top-level
280 devices to balance data among devices. As new virtual devices are added,
281 \fBZFS\fR automatically places data on the newly available devices.
282 .sp
283 .LP
284 Virtual devices are specified one at a time on the command line, separated by
285 whitespace. The keywords "mirror" and "raidz" are used to distinguish where a
286 group ends and another begins. For example, the following creates two root
287 vdevs, each a mirror of two disks:
288 .sp
289 .in +2
290 .nf
291 # \fBzpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0\fR
292 .fi
293 .in -2
294 .sp
295
296 .SS "Device Failure and Recovery"
297 .sp
298 .LP
299 \fBZFS\fR supports a rich set of mechanisms for handling device failure and
300 data corruption. All metadata and data is checksummed, and \fBZFS\fR
301 automatically repairs bad data from a good copy when corruption is detected.
302 .sp
303 .LP
304 In order to take advantage of these features, a pool must make use of some form
305 of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR
306 supports running in a non-redundant configuration, where each root vdev is
307 simply a disk or file, this is strongly discouraged. A single case of bit
308 corruption can render some or all of your data unavailable.
309 .sp
310 .LP
311 A pool's health status is described by one of three states: online, degraded,
312 or faulted. An online pool has all devices operating normally. A degraded pool
313 is one in which one or more devices have failed, but the data is still
314 available due to a redundant configuration. A faulted pool has corrupted
315 metadata, or one or more faulted devices, and insufficient replicas to continue
316 functioning.
317 .sp
318 .LP
319 The health of the top-level vdev, such as mirror or \fBraidz\fR device, is
320 potentially impacted by the state of its associated vdevs, or component
321 devices. A top-level vdev or component device is in one of the following
322 states:
323 .sp
324 .ne 2
325 .na
326 \fB\fBDEGRADED\fR\fR
327 .ad
328 .RS 12n
329 One or more top-level vdevs is in the degraded state because one or more
330 component devices are offline. Sufficient replicas exist to continue
331 functioning.
332 .sp
333 One or more component devices is in the degraded or faulted state, but
334 sufficient replicas exist to continue functioning. The underlying conditions
335 are as follows:
336 .RS +4
337 .TP
338 .ie t \(bu
339 .el o
340 The number of checksum errors exceeds acceptable levels and the device is
341 degraded as an indication that something may be wrong. \fBZFS\fR continues to
342 use the device as necessary.
343 .RE
344 .RS +4
345 .TP
346 .ie t \(bu
347 .el o
348 The number of I/O errors exceeds acceptable levels. The device could not be
349 marked as faulted because there are insufficient replicas to continue
350 functioning.
351 .RE
352 .RE
353
354 .sp
355 .ne 2
356 .na
357 \fB\fBFAULTED\fR\fR
358 .ad
359 .RS 12n
360 One or more top-level vdevs is in the faulted state because one or more
361 component devices are offline. Insufficient replicas exist to continue
362 functioning.
363 .sp
364 One or more component devices is in the faulted state, and insufficient
365 replicas exist to continue functioning. The underlying conditions are as
366 follows:
367 .RS +4
368 .TP
369 .ie t \(bu
370 .el o
371 The device could be opened, but the contents did not match expected values.
372 .RE
373 .RS +4
374 .TP
375 .ie t \(bu
376 .el o
377 The number of I/O errors exceeds acceptable levels and the device is faulted to
378 prevent further use of the device.
379 .RE
380 .RE
381
382 .sp
383 .ne 2
384 .na
385 \fB\fBOFFLINE\fR\fR
386 .ad
387 .RS 12n
388 The device was explicitly taken offline by the "\fBzpool offline\fR" command.
389 .RE
390
391 .sp
392 .ne 2
393 .na
394 \fB\fBONLINE\fR\fR
395 .ad
396 .RS 12n
397 The device is online and functioning.
398 .RE
399
400 .sp
401 .ne 2
402 .na
403 \fB\fBREMOVED\fR\fR
404 .ad
405 .RS 12n
406 The device was physically removed while the system was running. Device removal
407 detection is hardware-dependent and may not be supported on all platforms.
408 .RE
409
410 .sp
411 .ne 2
412 .na
413 \fB\fBUNAVAIL\fR\fR
414 .ad
415 .RS 12n
416 The device could not be opened. If a pool is imported when a device was
417 unavailable, then the device will be identified by a unique identifier instead
418 of its path since the path was never correct in the first place.
419 .RE
420
421 .sp
422 .LP
423 If a device is removed and later re-attached to the system, \fBZFS\fR attempts
424 to put the device online automatically. Device attach detection is
425 hardware-dependent and might not be supported on all platforms.
426 .SS "Hot Spares"
427 .sp
428 .LP
429 \fBZFS\fR allows devices to be associated with pools as "hot spares". These
430 devices are not actively used in the pool, but when an active device fails, it
431 is automatically replaced by a hot spare. To create a pool with hot spares,
432 specify a "spare" \fBvdev\fR with any number of devices. For example,
433 .sp
434 .in +2
435 .nf
436 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
437 .fi
438 .in -2
439 .sp
440
441 .sp
442 .LP
443 Spares can be shared across multiple pools, and can be added with the "\fBzpool
444 add\fR" command and removed with the "\fBzpool remove\fR" command. Once a spare
445 replacement is initiated, a new "spare" \fBvdev\fR is created within the
446 configuration that will remain there until the original device is replaced. At
447 this point, the hot spare becomes available again if another device fails.
448 .sp
449 .LP
450 If a pool has a shared spare that is currently being used, the pool can not be
451 exported since other pools may use this shared spare, which may lead to
452 potential data corruption.
453 .sp
454 .LP
455 An in-progress spare replacement can be cancelled by detaching the hot spare.
456 If the original faulted device is detached, then the hot spare assumes its
457 place in the configuration, and is removed from the spare list of all active
458 pools.
459 .sp
460 .LP
461 Spares cannot replace log devices.
462 .SS "Intent Log"
463 .sp
464 .LP
465 The \fBZFS\fR Intent Log (\fBZIL\fR) satisfies \fBPOSIX\fR requirements for
466 synchronous transactions. For instance, databases often require their
467 transactions to be on stable storage devices when returning from a system call.
468 \fBNFS\fR and other applications can also use \fBfsync\fR() to ensure data
469 stability. By default, the intent log is allocated from blocks within the main
470 pool. However, it might be possible to get better performance using separate
471 intent log devices such as \fBNVRAM\fR or a dedicated disk. For example:
472 .sp
473 .in +2
474 .nf
475 \fB# zpool create pool c0d0 c1d0 log c2d0\fR
476 .fi
477 .in -2
478 .sp
479
480 .sp
481 .LP
482 Multiple log devices can also be specified, and they can be mirrored. See the
483 EXAMPLES section for an example of mirroring multiple log devices.
484 .sp
485 .LP
486 Log devices can be added, replaced, attached, detached, and imported and
487 exported as part of the larger pool. Mirrored log devices can be removed by
488 specifying the top-level mirror for the log.
489 .SS "Cache Devices"
490 .sp
491 .LP
492 Devices can be added to a storage pool as "cache devices." These devices
493 provide an additional layer of caching between main memory and disk. For
494 read-heavy workloads, where the working set size is much larger than what can
495 be cached in main memory, using cache devices allow much more of this working
496 set to be served from low latency media. Using cache devices provides the
497 greatest performance improvement for random read-workloads of mostly static
498 content.
499 .sp
500 .LP
501 To create a pool with cache devices, specify a "cache" \fBvdev\fR with any
502 number of devices. For example:
503 .sp
504 .in +2
505 .nf
506 \fB# zpool create pool c0d0 c1d0 cache c2d0 c3d0\fR
507 .fi
508 .in -2
509 .sp
510
511 .sp
512 .LP
513 Cache devices cannot be mirrored or part of a \fBraidz\fR configuration. If a
514 read error is encountered on a cache device, that read \fBI/O\fR is reissued to
515 the original storage pool device, which might be part of a mirrored or
516 \fBraidz\fR configuration.
517 .sp
518 .LP
519 The content of the cache devices is considered volatile, as is the case with
520 other system caches.
521 .SS "Properties"
522 .sp
523 .LP
524 Each pool has several properties associated with it. Some properties are
525 read-only statistics while others are configurable and change the behavior of
526 the pool. The following are read-only properties:
527 .sp
528 .ne 2
529 .na
530 \fB\fBavailable\fR\fR
531 .ad
532 .RS 20n
533 Amount of storage available within the pool. This property can also be referred
534 to by its shortened column name, "avail".
535 .RE
536
537 .sp
538 .ne 2
539 .na
540 \fB\fBcapacity\fR\fR
541 .ad
542 .RS 20n
543 Percentage of pool space used. This property can also be referred to by its
544 shortened column name, "cap".
545 .RE
546
547 .sp
548 .ne 2
549 .na
550 \fB\fBcomment\fR\fR
551 .ad
552 .RS 20n
553 A text string consisting of printable ASCII characters that will be stored
554 such that it is available even if the pool becomes faulted. An administrator
555 can provide additional information about a pool using this property.
556 .RE
557
558 .sp
559 .ne 2
560 .na
561 \fB\fBexpandsize\fR\fR
562 .ad
563 .RS 20n
564 Amount of uninitialized space within the pool or device that can be used to
565 increase the total capacity of the pool. Uninitialized space consists of
566 any space on an EFI labeled vdev which has not been brought online
567 (i.e. zpool online -e). This space occurs when a LUN is dynamically expanded.
568 .RE
569
570 .sp
571 .ne 2
572 .na
573 \fB\fBhealth\fR\fR
574 .ad
575 .RS 20n
576 The current health of the pool. Health can be "\fBONLINE\fR", "\fBDEGRADED\fR",
577 "\fBFAULTED\fR", " \fBOFFLINE\fR", "\fBREMOVED\fR", or "\fBUNAVAIL\fR".
578 .RE
579
580 .sp
581 .ne 2
582 .na
583 \fB\fBguid\fR\fR
584 .ad
585 .RS 20n
586 A unique identifier for the pool.
587 .RE
588
589 .sp
590 .ne 2
591 .na
592 \fB\fBsize\fR\fR
593 .ad
594 .RS 20n
595 Total size of the storage pool.
596 .RE
597
598 .sp
599 .ne 2
600 .na
601 \fB\fBused\fR\fR
602 .ad
603 .RS 20n
604 Amount of storage space used within the pool.
605 .RE
606
607 .sp
608 .LP
609 These space usage properties report actual physical space available to the
610 storage pool. The physical space can be different from the total amount of
611 space that any contained datasets can actually use. The amount of space used in
612 a \fBraidz\fR configuration depends on the characteristics of the data being
613 written. In addition, \fBZFS\fR reserves some space for internal accounting
614 that the \fBzfs\fR(1M) command takes into account, but the \fBzpool\fR command
615 does not. For non-full pools of a reasonable size, these effects should be
616 invisible. For small pools, or pools that are close to being completely full,
617 these discrepancies may become more noticeable.
618 .sp
619 .LP
620 The following property can be set at creation time and import time:
621 .sp
622 .ne 2
623 .na
624 \fB\fBaltroot\fR\fR
625 .ad
626 .sp .6
627 .RS 4n
628 Alternate root directory. If set, this directory is prepended to any mount
629 points within the pool. This can be used when examining an unknown pool where
630 the mount points cannot be trusted, or in an alternate boot environment, where
631 the typical paths are not valid. \fBaltroot\fR is not a persistent property. It
632 is valid only while the system is up. Setting \fBaltroot\fR defaults to using
633 \fBcachefile\fR=none, though this may be overridden using an explicit setting.
634 .RE
635
636 .sp
637 .LP
638 The following properties can be set at creation time and import time, and later
639 changed with the \fBzpool set\fR command:
640 .sp
641 .ne 2
642 .na
643 \fB\fBautoexpand\fR=\fBon\fR | \fBoff\fR\fR
644 .ad
645 .sp .6
646 .RS 4n
647 Controls automatic pool expansion when the underlying LUN is grown. If set to
648 \fBon\fR, the pool will be resized according to the size of the expanded
649 device. If the device is part of a mirror or \fBraidz\fR then all devices
650 within that mirror/\fBraidz\fR group must be expanded before the new space is
651 made available to the pool. The default behavior is \fBoff\fR. This property
652 can also be referred to by its shortened column name, \fBexpand\fR.
653 .RE
654
655 .sp
656 .ne 2
657 .na
658 \fB\fBautoreplace\fR=\fBon\fR | \fBoff\fR\fR
659 .ad
660 .sp .6
661 .RS 4n
662 Controls automatic device replacement. If set to "\fBoff\fR", device
663 replacement must be initiated by the administrator by using the "\fBzpool
664 replace\fR" command. If set to "\fBon\fR", any new device, found in the same
665 physical location as a device that previously belonged to the pool, is
666 automatically formatted and replaced. The default behavior is "\fBoff\fR". This
667 property can also be referred to by its shortened column name, "replace".
668 .RE
669
670 .sp
671 .ne 2
672 .na
673 \fB\fBbootfs\fR=\fIpool\fR/\fIdataset\fR\fR
674 .ad
675 .sp .6
676 .RS 4n
677 Identifies the default bootable dataset for the root pool. This property is
678 expected to be set mainly by the installation and upgrade programs.
679 .RE
680
681 .sp
682 .ne 2
683 .na
684 \fB\fBcachefile\fR=\fIpath\fR | \fBnone\fR\fR
685 .ad
686 .sp .6
687 .RS 4n
688 Controls the location of where the pool configuration is cached. Discovering
689 all pools on system startup requires a cached copy of the configuration data
690 that is stored on the root file system. All pools in this cache are
691 automatically imported when the system boots. Some environments, such as
692 install and clustering, need to cache this information in a different location
693 so that pools are not automatically imported. Setting this property caches the
694 pool configuration in a different location that can later be imported with
695 "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a
696 temporary pool that is never cached, and the special value \fB\&''\fR (empty
697 string) uses the default location.
698 .sp
699 Multiple pools can share the same cache file. Because the kernel destroys and
700 recreates this file when pools are added and removed, care should be taken when
701 attempting to access this file. When the last pool using a \fBcachefile\fR is
702 exported or destroyed, the file is removed.
703 .RE
704
705 .sp
706 .ne 2
707 .na
708 \fB\fBdelegation\fR=\fBon\fR | \fBoff\fR\fR
709 .ad
710 .sp .6
711 .RS 4n
712 Controls whether a non-privileged user is granted access based on the dataset
713 permissions defined on the dataset. See \fBzfs\fR(1M) for more information on
714 \fBZFS\fR delegated administration.
715 .RE
716
717 .sp
718 .ne 2
719 .na
720 \fB\fBfailmode\fR=\fBwait\fR | \fBcontinue\fR | \fBpanic\fR\fR
721 .ad
722 .sp .6
723 .RS 4n
724 Controls the system behavior in the event of catastrophic pool failure. This
725 condition is typically a result of a loss of connectivity to the underlying
726 storage device(s) or a failure of all devices within the pool. The behavior of
727 such an event is determined as follows:
728 .sp
729 .ne 2
730 .na
731 \fB\fBwait\fR\fR
732 .ad
733 .RS 12n
734 Blocks all \fBI/O\fR access until the device connectivity is recovered and the
735 errors are cleared. This is the default behavior.
736 .RE
737
738 .sp
739 .ne 2
740 .na
741 \fB\fBcontinue\fR\fR
742 .ad
743 .RS 12n
744 Returns \fBEIO\fR to any new write \fBI/O\fR requests but allows reads to any
745 of the remaining healthy devices. Any write requests that have yet to be
746 committed to disk would be blocked.
747 .RE
748
749 .sp
750 .ne 2
751 .na
752 \fB\fBpanic\fR\fR
753 .ad
754 .RS 12n
755 Prints out a message to the console and generates a system crash dump.
756 .RE
757
758 .RE
759
760 .sp
761 .ne 2
762 .na
763 \fB\fBlistsnaps\fR=on | off\fR
764 .ad
765 .sp .6
766 .RS 4n
767 Controls whether information about snapshots associated with this pool is
768 output when "\fBzfs list\fR" is run without the \fB-t\fR option. The default
769 value is "off".
770 .RE
771
772 .sp
773 .ne 2
774 .na
775 \fB\fBversion\fR=\fIversion\fR\fR
776 .ad
777 .sp .6
778 .RS 4n
779 The current on-disk version of the pool. This can be increased, but never
780 decreased. The preferred method of updating pools is with the "\fBzpool
781 upgrade\fR" command, though this property can be used when a specific version
782 is needed for backwards compatibility. This property can be any number between
783 1 and the current version reported by "\fBzpool upgrade -v\fR".
784 .RE
785
786 .SS "Subcommands"
787 .sp
788 .LP
789 All subcommands that modify state are logged persistently to the pool in their
790 original form.
791 .sp
792 .LP
793 The \fBzpool\fR command provides subcommands to create and destroy storage
794 pools, add capacity to storage pools, and provide information about the storage
795 pools. The following subcommands are supported:
796 .sp
797 .ne 2
798 .na
799 \fB\fBzpool\fR \fB-?\fR\fR
800 .ad
801 .sp .6
802 .RS 4n
803 Displays a help message.
804 .RE
805
806 .sp
807 .ne 2
808 .na
809 \fB\fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...\fR
810 .ad
811 .sp .6
812 .RS 4n
813 Adds the specified virtual devices to the given pool. The \fIvdev\fR
814 specification is described in the "Virtual Devices" section. The behavior of
815 the \fB-f\fR option, and the device checks performed are described in the
816 "zpool create" subcommand.
817 .sp
818 .ne 2
819 .na
820 \fB\fB-f\fR\fR
821 .ad
822 .RS 6n
823 Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting
824 replication level. Not all devices can be overridden in this manner.
825 .RE
826
827 .sp
828 .ne 2
829 .na
830 \fB\fB-n\fR\fR
831 .ad
832 .RS 6n
833 Displays the configuration that would be used without actually adding the
834 \fBvdev\fRs. The actual pool creation can still fail due to insufficient
835 privileges or device sharing.
836 .RE
837
838 Do not add a disk that is currently configured as a quorum device to a zpool.
839 After a disk is in the pool, that disk can then be configured as a quorum
840 device.
841 .RE
842
843 .sp
844 .ne 2
845 .na
846 \fB\fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR\fR
847 .ad
848 .sp .6
849 .RS 4n
850 Attaches \fInew_device\fR to an existing \fBzpool\fR device. The existing
851 device cannot be part of a \fBraidz\fR configuration. If \fIdevice\fR is not
852 currently part of a mirrored configuration, \fIdevice\fR automatically
853 transforms into a two-way mirror of \fIdevice\fR and \fInew_device\fR. If
854 \fIdevice\fR is part of a two-way mirror, attaching \fInew_device\fR creates a
855 three-way mirror, and so on. In either case, \fInew_device\fR begins to
856 resilver immediately.
857 .sp
858 .ne 2
859 .na
860 \fB\fB-f\fR\fR
861 .ad
862 .RS 6n
863 Forces use of \fInew_device\fR, even if its appears to be in use. Not all
864 devices can be overridden in this manner.
865 .RE
866
867 .RE
868
869 .sp
870 .ne 2
871 .na
872 \fB\fBzpool clear\fR \fIpool\fR [\fIdevice\fR] ...\fR
873 .ad
874 .sp .6
875 .RS 4n
876 Clears device errors in a pool. If no arguments are specified, all device
877 errors within the pool are cleared. If one or more devices is specified, only
878 those errors associated with the specified device or devices are cleared.
879 .RE
880
881 .sp
882 .ne 2
883 .na
884 \fB\fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR
885 \fIfile-system-property=value\fR] ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR
886 \fIroot\fR] \fIpool\fR \fIvdev\fR ...\fR
887 .ad
888 .sp .6
889 .RS 4n
890 Creates a new storage pool containing the virtual devices specified on the
891 command line. The pool name must begin with a letter, and can only contain
892 alphanumeric characters as well as underscore ("_"), dash ("-"), and period
893 ("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are
894 names beginning with the pattern "c[0-9]". The \fBvdev\fR specification is
895 described in the "Virtual Devices" section.
896 .sp
897 The command verifies that each device specified is accessible and not currently
898 in use by another subsystem. There are some uses, such as being currently
899 mounted, or specified as the dedicated dump device, that prevents a device from
900 ever being used by \fBZFS\fR. Other uses, such as having a preexisting
901 \fBUFS\fR file system, can be overridden with the \fB-f\fR option.
902 .sp
903 The command also checks that the replication strategy for the pool is
904 consistent. An attempt to combine redundant and non-redundant storage in a
905 single pool, or to mix disks and files, results in an error unless \fB-f\fR is
906 specified. The use of differently sized devices within a single \fBraidz\fR or
907 mirror group is also flagged as an error unless \fB-f\fR is specified.
908 .sp
909 Unless the \fB-R\fR option is specified, the default mount point is
910 "/\fIpool\fR". The mount point must not exist or must be empty, or else the
911 root dataset cannot be mounted. This can be overridden with the \fB-m\fR
912 option.
913 .sp
914 .ne 2
915 .na
916 \fB\fB-f\fR\fR
917 .ad
918 .sp .6
919 .RS 4n
920 Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting
921 replication level. Not all devices can be overridden in this manner.
922 .RE
923
924 .sp
925 .ne 2
926 .na
927 \fB\fB-n\fR\fR
928 .ad
929 .sp .6
930 .RS 4n
931 Displays the configuration that would be used without actually creating the
932 pool. The actual pool creation can still fail due to insufficient privileges or
933 device sharing.
934 .RE
935
936 .sp
937 .ne 2
938 .na
939 \fB\fB-o\fR \fIproperty=value\fR [\fB-o\fR \fIproperty=value\fR] ...\fR
940 .ad
941 .sp .6
942 .RS 4n
943 Sets the given pool properties. See the "Properties" section for a list of
944 valid properties that can be set.
945 .RE
946
947 .sp
948 .ne 2
949 .na
950 \fB\fB-O\fR \fIfile-system-property=value\fR\fR
951 .ad
952 .br
953 .na
954 \fB[\fB-O\fR \fIfile-system-property=value\fR] ...\fR
955 .ad
956 .sp .6
957 .RS 4n
958 Sets the given file system properties in the root file system of the pool. See
959 the "Properties" section of \fBzfs\fR(1M) for a list of valid properties that
960 can be set.
961 .RE
962
963 .sp
964 .ne 2
965 .na
966 \fB\fB-R\fR \fIroot\fR\fR
967 .ad
968 .sp .6
969 .RS 4n
970 Equivalent to "-o cachefile=none,altroot=\fIroot\fR"
971 .RE
972
973 .sp
974 .ne 2
975 .na
976 \fB\fB-m\fR \fImountpoint\fR\fR
977 .ad
978 .sp .6
979 .RS 4n
980 Sets the mount point for the root dataset. The default mount point is
981 "/\fIpool\fR" or "\fBaltroot\fR/\fIpool\fR" if \fBaltroot\fR is specified. The
982 mount point must be an absolute path, "\fBlegacy\fR", or "\fBnone\fR". For more
983 information on dataset mount points, see \fBzfs\fR(1M).
984 .RE
985
986 .RE
987
988 .sp
989 .ne 2
990 .na
991 \fB\fBzpool destroy\fR [\fB-f\fR] \fIpool\fR\fR
992 .ad
993 .sp .6
994 .RS 4n
995 Destroys the given pool, freeing up any devices for other use. This command
996 tries to unmount any active datasets before destroying the pool.
997 .sp
998 .ne 2
999 .na
1000 \fB\fB-f\fR\fR
1001 .ad
1002 .RS 6n
1003 Forces any active datasets contained within the pool to be unmounted.
1004 .RE
1005
1006 .RE
1007
1008 .sp
1009 .ne 2
1010 .na
1011 \fB\fBzpool detach\fR \fIpool\fR \fIdevice\fR\fR
1012 .ad
1013 .sp .6
1014 .RS 4n
1015 Detaches \fIdevice\fR from a mirror. The operation is refused if there are no
1016 other valid replicas of the data.
1017 .RE
1018
1019 .sp
1020 .ne 2
1021 .na
1022 \fB\fBzpool export\fR [\fB-f\fR] \fIpool\fR ...\fR
1023 .ad
1024 .sp .6
1025 .RS 4n
1026 Exports the given pools from the system. All devices are marked as exported,
1027 but are still considered in use by other subsystems. The devices can be moved
1028 between systems (even those of different endianness) and imported as long as a
1029 sufficient number of devices are present.
1030 .sp
1031 Before exporting the pool, all datasets within the pool are unmounted. A pool
1032 can not be exported if it has a shared spare that is currently being used.
1033 .sp
1034 For pools to be portable, you must give the \fBzpool\fR command whole disks,
1035 not just slices, so that \fBZFS\fR can label the disks with portable \fBEFI\fR
1036 labels. Otherwise, disk drivers on platforms of different endianness will not
1037 recognize the disks.
1038 .sp
1039 .ne 2
1040 .na
1041 \fB\fB-f\fR\fR
1042 .ad
1043 .RS 6n
1044 Forcefully unmount all datasets, using the "\fBunmount -f\fR" command.
1045 .sp
1046 This command will forcefully export the pool even if it has a shared spare that
1047 is currently being used. This may lead to potential data corruption.
1048 .RE
1049
1050 .RE
1051
1052 .sp
1053 .ne 2
1054 .na
1055 \fB\fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...\fR
1056 .ad
1057 .sp .6
1058 .RS 4n
1059 Retrieves the given list of properties (or all properties if "\fBall\fR" is
1060 used) for the specified storage pool(s). These properties are displayed with
1061 the following fields:
1062 .sp
1063 .in +2
1064 .nf
1065 name Name of storage pool
1066 property Property name
1067 value Property value
1068 source Property source, either 'default' or 'local'.
1069 .fi
1070 .in -2
1071 .sp
1072
1073 See the "Properties" section for more information on the available pool
1074 properties.
1075 .RE
1076
1077 .sp
1078 .ne 2
1079 .na
1080 \fB\fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...\fR
1081 .ad
1082 .sp .6
1083 .RS 4n
1084 Displays the command history of the specified pools or all pools if no pool is
1085 specified.
1086 .sp
1087 .ne 2
1088 .na
1089 \fB\fB-i\fR\fR
1090 .ad
1091 .RS 6n
1092 Displays internally logged \fBZFS\fR events in addition to user initiated
1093 events.
1094 .RE
1095
1096 .sp
1097 .ne 2
1098 .na
1099 \fB\fB-l\fR\fR
1100 .ad
1101 .RS 6n
1102 Displays log records in long format, which in addition to standard format
1103 includes, the user name, the hostname, and the zone in which the operation was
1104 performed.
1105 .RE
1106
1107 .RE
1108
1109 .sp
1110 .ne 2
1111 .na
1112 \fB\fBzpool import\fR [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1113 [\fB-D\fR]\fR
1114 .ad
1115 .sp .6
1116 .RS 4n
1117 Lists pools available to import. If the \fB-d\fR option is not specified, this
1118 command searches for devices in "/dev/dsk". The \fB-d\fR option can be
1119 specified multiple times, and all directories are searched. If the device
1120 appears to be part of an exported pool, this command displays a summary of the
1121 pool with the name of the pool, a numeric identifier, as well as the \fIvdev\fR
1122 layout and current health of the device for each device or file. Destroyed
1123 pools, pools that were previously destroyed with the "\fBzpool destroy\fR"
1124 command, are not listed unless the \fB-D\fR option is specified.
1125 .sp
1126 The numeric identifier is unique, and can be used instead of the pool name when
1127 multiple exported pools of the same name are available.
1128 .sp
1129 .ne 2
1130 .na
1131 \fB\fB-c\fR \fIcachefile\fR\fR
1132 .ad
1133 .RS 16n
1134 Reads configuration from the given \fBcachefile\fR that was created with the
1135 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1136 searching for devices.
1137 .RE
1138
1139 .sp
1140 .ne 2
1141 .na
1142 \fB\fB-d\fR \fIdir\fR\fR
1143 .ad
1144 .RS 16n
1145 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1146 specified multiple times.
1147 .RE
1148
1149 .sp
1150 .ne 2
1151 .na
1152 \fB\fB-D\fR\fR
1153 .ad
1154 .RS 16n
1155 Lists destroyed pools only.
1156 .RE
1157
1158 .RE
1159
1160 .sp
1161 .ne 2
1162 .na
1163 \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR
1164 \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1165 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR\fR
1166 .ad
1167 .sp .6
1168 .RS 4n
1169 Imports all pools found in the search directories. Identical to the previous
1170 command, except that all pools with a sufficient number of devices available
1171 are imported. Destroyed pools, pools that were previously destroyed with the
1172 "\fBzpool destroy\fR" command, will not be imported unless the \fB-D\fR option
1173 is specified.
1174 .sp
1175 .ne 2
1176 .na
1177 \fB\fB-o\fR \fImntopts\fR\fR
1178 .ad
1179 .RS 21n
1180 Comma-separated list of mount options to use when mounting datasets within the
1181 pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
1182 options.
1183 .RE
1184
1185 .sp
1186 .ne 2
1187 .na
1188 \fB\fB-o\fR \fIproperty=value\fR\fR
1189 .ad
1190 .RS 21n
1191 Sets the specified property on the imported pool. See the "Properties" section
1192 for more information on the available pool properties.
1193 .RE
1194
1195 .sp
1196 .ne 2
1197 .na
1198 \fB\fB-c\fR \fIcachefile\fR\fR
1199 .ad
1200 .RS 21n
1201 Reads configuration from the given \fBcachefile\fR that was created with the
1202 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1203 searching for devices.
1204 .RE
1205
1206 .sp
1207 .ne 2
1208 .na
1209 \fB\fB-d\fR \fIdir\fR\fR
1210 .ad
1211 .RS 21n
1212 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1213 specified multiple times. This option is incompatible with the \fB-c\fR option.
1214 .RE
1215
1216 .sp
1217 .ne 2
1218 .na
1219 \fB\fB-D\fR\fR
1220 .ad
1221 .RS 21n
1222 Imports destroyed pools only. The \fB-f\fR option is also required.
1223 .RE
1224
1225 .sp
1226 .ne 2
1227 .na
1228 \fB\fB-f\fR\fR
1229 .ad
1230 .RS 21n
1231 Forces import, even if the pool appears to be potentially active.
1232 .RE
1233
1234 .sp
1235 .ne 2
1236 .na
1237 \fB\fB-a\fR\fR
1238 .ad
1239 .RS 21n
1240 Searches for and imports all pools found.
1241 .RE
1242
1243 .sp
1244 .ne 2
1245 .na
1246 \fB\fB-R\fR \fIroot\fR\fR
1247 .ad
1248 .RS 21n
1249 Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR"
1250 property to "\fIroot\fR".
1251 .RE
1252
1253 .RE
1254
1255 .sp
1256 .ne 2
1257 .na
1258 \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR
1259 \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1260 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR | \fIid\fR
1261 [\fInewpool\fR]\fR
1262 .ad
1263 .sp .6
1264 .RS 4n
1265 Imports a specific pool. A pool can be identified by its name or the numeric
1266 identifier. If \fInewpool\fR is specified, the pool is imported using the name
1267 \fInewpool\fR. Otherwise, it is imported with the same name as its exported
1268 name.
1269 .sp
1270 If a device is removed from a system without running "\fBzpool export\fR"
1271 first, the device appears as potentially active. It cannot be determined if
1272 this was a failed export, or whether the device is really in use from another
1273 host. To import a pool in this state, the \fB-f\fR option is required.
1274 .sp
1275 .ne 2
1276 .na
1277 \fB\fB-o\fR \fImntopts\fR\fR
1278 .ad
1279 .sp .6
1280 .RS 4n
1281 Comma-separated list of mount options to use when mounting datasets within the
1282 pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
1283 options.
1284 .RE
1285
1286 .sp
1287 .ne 2
1288 .na
1289 \fB\fB-o\fR \fIproperty=value\fR\fR
1290 .ad
1291 .sp .6
1292 .RS 4n
1293 Sets the specified property on the imported pool. See the "Properties" section
1294 for more information on the available pool properties.
1295 .RE
1296
1297 .sp
1298 .ne 2
1299 .na
1300 \fB\fB-c\fR \fIcachefile\fR\fR
1301 .ad
1302 .sp .6
1303 .RS 4n
1304 Reads configuration from the given \fBcachefile\fR that was created with the
1305 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1306 searching for devices.
1307 .RE
1308
1309 .sp
1310 .ne 2
1311 .na
1312 \fB\fB-d\fR \fIdir\fR\fR
1313 .ad
1314 .sp .6
1315 .RS 4n
1316 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1317 specified multiple times. This option is incompatible with the \fB-c\fR option.
1318 .RE
1319
1320 .sp
1321 .ne 2
1322 .na
1323 \fB\fB-D\fR\fR
1324 .ad
1325 .sp .6
1326 .RS 4n
1327 Imports destroyed pool. The \fB-f\fR option is also required.
1328 .RE
1329
1330 .sp
1331 .ne 2
1332 .na
1333 \fB\fB-f\fR\fR
1334 .ad
1335 .sp .6
1336 .RS 4n
1337 Forces import, even if the pool appears to be potentially active.
1338 .RE
1339
1340 .sp
1341 .ne 2
1342 .na
1343 \fB\fB-R\fR \fIroot\fR\fR
1344 .ad
1345 .sp .6
1346 .RS 4n
1347 Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR"
1348 property to "\fIroot\fR".
1349 .RE
1350
1351 .RE
1352
1353 .sp
1354 .ne 2
1355 .na
1356 \fB\fBzpool iostat\fR [\fB-T\fR \fBu\fR | \fBd\fR] [\fB-v\fR] [\fIpool\fR] ...
1357 [\fIinterval\fR[\fIcount\fR]]\fR
1358 .ad
1359 .sp .6
1360 .RS 4n
1361 Displays \fBI/O\fR statistics for the given pools. When given an interval, the
1362 statistics are printed every \fIinterval\fR seconds until \fBCtrl-C\fR is
1363 pressed. If no \fIpools\fR are specified, statistics for every pool in the
1364 system is shown. If \fIcount\fR is specified, the command exits after
1365 \fIcount\fR reports are printed.
1366 .sp
1367 .ne 2
1368 .na
1369 \fB\fB-T\fR \fBu\fR | \fBd\fR\fR
1370 .ad
1371 .RS 12n
1372 Display a time stamp.
1373 .sp
1374 Specify \fBu\fR for a printed representation of the internal representation of
1375 time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See
1376 \fBdate\fR(1).
1377 .RE
1378
1379 .sp
1380 .ne 2
1381 .na
1382 \fB\fB-v\fR\fR
1383 .ad
1384 .RS 12n
1385 Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within
1386 the pool, in addition to the pool-wide statistics.
1387 .RE
1388
1389 .RE
1390
1391 .sp
1392 .ne 2
1393 .na
1394 \fB\fBzpool list\fR [\fB-Hv\fR] [\fB-o\fR \fIprops\fR[,...]] [\fIpool\fR] ...\fR
1395 .ad
1396 .sp .6
1397 .RS 4n
1398 Lists the given pools along with a health status and space usage. When given no
1399 arguments, all pools in the system are listed.
1400 .sp
1401 .ne 2
1402 .na
1403 \fB\fB-H\fR\fR
1404 .ad
1405 .RS 12n
1406 Scripted mode. Do not display headers, and separate fields by a single tab
1407 instead of arbitrary space.
1408 .RE
1409
1410 .sp
1411 .ne 2
1412 .na
1413 \fB\fB-o\fR \fIprops\fR\fR
1414 .ad
1415 .RS 12n
1416 Comma-separated list of properties to display. See the "Properties" section for
1417 a list of valid properties. The default list is "name, size, used, available,
1418 expandsize, capacity, dedupratio, health, altroot"
1419 .RE
1420
1421 .sp
1422 .ne 2
1423 .na
1424 \fB\fB-v\fR\fR
1425 .ad
1426 .RS 12n
1427 Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within
1428 the pool, in addition to the pool-wise statistics.
1429 .RE
1430
1431 .RE
1432
1433 .sp
1434 .ne 2
1435 .na
1436 \fB\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...\fR
1437 .ad
1438 .sp .6
1439 .RS 4n
1440 Takes the specified physical device offline. While the \fIdevice\fR is offline,
1441 no attempt is made to read or write to the device.
1442 .sp
1443 This command is not applicable to spares or cache devices.
1444 .sp
1445 .ne 2
1446 .na
1447 \fB\fB-t\fR\fR
1448 .ad
1449 .RS 6n
1450 Temporary. Upon reboot, the specified physical device reverts to its previous
1451 state.
1452 .RE
1453
1454 .RE
1455
1456 .sp
1457 .ne 2
1458 .na
1459 \fB\fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR...\fR
1460 .ad
1461 .sp .6
1462 .RS 4n
1463 Brings the specified physical device online.
1464 .sp
1465 This command is not applicable to spares or cache devices.
1466 .sp
1467 .ne 2
1468 .na
1469 \fB\fB-e\fR\fR
1470 .ad
1471 .RS 6n
1472 Expand the device to use all available space. If the device is part of a mirror
1473 or \fBraidz\fR then all devices must be expanded before the new space will
1474 become available to the pool.
1475 .RE
1476
1477 .RE
1478
1479 .sp
1480 .ne 2
1481 .na
1482 \fB\fBzpool reguid\fR \fIpool\fR
1483 .ad
1484 .sp .6
1485 .RS 4n
1486 Generates a new unique identifier for the pool. You must ensure that all devices in this pool are online and
1487 healthy before performing this action.
1488 .RE
1489
1490 .sp
1491 .ne 2
1492 .na
1493 \fB\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...\fR
1494 .ad
1495 .sp .6
1496 .RS 4n
1497 Removes the specified device from the pool. This command currently only
1498 supports removing hot spares, cache, and log devices. A mirrored log device can
1499 be removed by specifying the top-level mirror for the log. Non-log devices that
1500 are part of a mirrored configuration can be removed using the \fBzpool
1501 detach\fR command. Non-redundant and \fBraidz\fR devices cannot be removed from
1502 a pool.
1503 .RE
1504
1505 .sp
1506 .ne 2
1507 .na
1508 \fB\fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIold_device\fR
1509 [\fInew_device\fR]\fR
1510 .ad
1511 .sp .6
1512 .RS 4n
1513 Replaces \fIold_device\fR with \fInew_device\fR. This is equivalent to
1514 attaching \fInew_device\fR, waiting for it to resilver, and then detaching
1515 \fIold_device\fR.
1516 .sp
1517 The size of \fInew_device\fR must be greater than or equal to the minimum size
1518 of all the devices in a mirror or \fBraidz\fR configuration.
1519 .sp
1520 \fInew_device\fR is required if the pool is not redundant. If \fInew_device\fR
1521 is not specified, it defaults to \fIold_device\fR. This form of replacement is
1522 useful after an existing disk has failed and has been physically replaced. In
1523 this case, the new disk may have the same \fB/dev/dsk\fR path as the old
1524 device, even though it is actually a different disk. \fBZFS\fR recognizes this.
1525 .sp
1526 .ne 2
1527 .na
1528 \fB\fB-f\fR\fR
1529 .ad
1530 .RS 6n
1531 Forces use of \fInew_device\fR, even if its appears to be in use. Not all
1532 devices can be overridden in this manner.
1533 .RE
1534
1535 .RE
1536
1537 .sp
1538 .ne 2
1539 .na
1540 \fB\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...\fR
1541 .ad
1542 .sp .6
1543 .RS 4n
1544 Begins a scrub. The scrub examines all data in the specified pools to verify
1545 that it checksums correctly. For replicated (mirror or \fBraidz\fR) devices,
1546 \fBZFS\fR automatically repairs any damage discovered during the scrub. The
1547 "\fBzpool status\fR" command reports the progress of the scrub and summarizes
1548 the results of the scrub upon completion.
1549 .sp
1550 Scrubbing and resilvering are very similar operations. The difference is that
1551 resilvering only examines data that \fBZFS\fR knows to be out of date (for
1552 example, when attaching a new device to a mirror or replacing an existing
1553 device), whereas scrubbing examines all data to discover silent errors due to
1554 hardware faults or disk failure.
1555 .sp
1556 Because scrubbing and resilvering are \fBI/O\fR-intensive operations, \fBZFS\fR
1557 only allows one at a time. If a scrub is already in progress, the "\fBzpool
1558 scrub\fR" command terminates it and starts a new scrub. If a resilver is in
1559 progress, \fBZFS\fR does not allow a scrub to be started until the resilver
1560 completes.
1561 .sp
1562 .ne 2
1563 .na
1564 \fB\fB-s\fR\fR
1565 .ad
1566 .RS 6n
1567 Stop scrubbing.
1568 .RE
1569
1570 .RE
1571
1572 .sp
1573 .ne 2
1574 .na
1575 \fB\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR\fR
1576 .ad
1577 .sp .6
1578 .RS 4n
1579 Sets the given property on the specified pool. See the "Properties" section for
1580 more information on what properties can be set and acceptable values.
1581 .RE
1582
1583 .sp
1584 .ne 2
1585 .na
1586 \fB\fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...\fR
1587 .ad
1588 .sp .6
1589 .RS 4n
1590 Displays the detailed health status for the given pools. If no \fIpool\fR is
1591 specified, then the status of each pool in the system is displayed. For more
1592 information on pool and device health, see the "Device Failure and Recovery"
1593 section.
1594 .sp
1595 If a scrub or resilver is in progress, this command reports the percentage done
1596 and the estimated time to completion. Both of these are only approximate,
1597 because the amount of data in the pool and the other workloads on the system
1598 can change.
1599 .sp
1600 .ne 2
1601 .na
1602 \fB\fB-x\fR\fR
1603 .ad
1604 .RS 6n
1605 Only display status for pools that are exhibiting errors or are otherwise
1606 unavailable.
1607 .RE
1608
1609 .sp
1610 .ne 2
1611 .na
1612 \fB\fB-v\fR\fR
1613 .ad
1614 .RS 6n
1615 Displays verbose data error information, printing out a complete list of all
1616 data errors since the last complete pool scrub.
1617 .RE
1618
1619 .RE
1620
1621 .sp
1622 .ne 2
1623 .na
1624 \fB\fBzpool upgrade\fR\fR
1625 .ad
1626 .sp .6
1627 .RS 4n
1628 Displays all pools formatted using a different \fBZFS\fR on-disk version. Older
1629 versions can continue to be used, but some features may not be available. These
1630 pools can be upgraded using "\fBzpool upgrade -a\fR". Pools that are formatted
1631 with a more recent version are also displayed, although these pools will be
1632 inaccessible on the system.
1633 .RE
1634
1635 .sp
1636 .ne 2
1637 .na
1638 \fB\fBzpool upgrade\fR \fB-v\fR\fR
1639 .ad
1640 .sp .6
1641 .RS 4n
1642 Displays \fBZFS\fR versions supported by the current software. The current
1643 \fBZFS\fR versions and all previous supported versions are displayed, along
1644 with an explanation of the features provided with each version.
1645 .RE
1646
1647 .sp
1648 .ne 2
1649 .na
1650 \fB\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...\fR
1651 .ad
1652 .sp .6
1653 .RS 4n
1654 Upgrades the given pool to the latest on-disk version. Once this is done, the
1655 pool will no longer be accessible on systems running older versions of the
1656 software.
1657 .sp
1658 .ne 2
1659 .na
1660 \fB\fB-a\fR\fR
1661 .ad
1662 .RS 14n
1663 Upgrades all pools.
1664 .RE
1665
1666 .sp
1667 .ne 2
1668 .na
1669 \fB\fB-V\fR \fIversion\fR\fR
1670 .ad
1671 .RS 14n
1672 Upgrade to the specified version. If the \fB-V\fR flag is not specified, the
1673 pool is upgraded to the most recent version. This option can only be used to
1674 increase the version number, and only up to the most recent version supported
1675 by this software.
1676 .RE
1677
1678 .RE
1679
1680 .SH EXAMPLES
1681 .LP
1682 \fBExample 1 \fRCreating a RAID-Z Storage Pool
1683 .sp
1684 .LP
1685 The following command creates a pool with a single \fBraidz\fR root \fIvdev\fR
1686 that consists of six disks.
1687
1688 .sp
1689 .in +2
1690 .nf
1691 # \fBzpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0\fR
1692 .fi
1693 .in -2
1694 .sp
1695
1696 .LP
1697 \fBExample 2 \fRCreating a Mirrored Storage Pool
1698 .sp
1699 .LP
1700 The following command creates a pool with two mirrors, where each mirror
1701 contains two disks.
1702
1703 .sp
1704 .in +2
1705 .nf
1706 # \fBzpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0\fR
1707 .fi
1708 .in -2
1709 .sp
1710
1711 .LP
1712 \fBExample 3 \fRCreating a ZFS Storage Pool by Using Slices
1713 .sp
1714 .LP
1715 The following command creates an unmirrored pool using two disk slices.
1716
1717 .sp
1718 .in +2
1719 .nf
1720 # \fBzpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4\fR
1721 .fi
1722 .in -2
1723 .sp
1724
1725 .LP
1726 \fBExample 4 \fRCreating a ZFS Storage Pool by Using Files
1727 .sp
1728 .LP
1729 The following command creates an unmirrored pool using files. While not
1730 recommended, a pool based on files can be useful for experimental purposes.
1731
1732 .sp
1733 .in +2
1734 .nf
1735 # \fBzpool create tank /path/to/file/a /path/to/file/b\fR
1736 .fi
1737 .in -2
1738 .sp
1739
1740 .LP
1741 \fBExample 5 \fRAdding a Mirror to a ZFS Storage Pool
1742 .sp
1743 .LP
1744 The following command adds two mirrored disks to the pool "\fItank\fR",
1745 assuming the pool is already made up of two-way mirrors. The additional space
1746 is immediately available to any datasets within the pool.
1747
1748 .sp
1749 .in +2
1750 .nf
1751 # \fBzpool add tank mirror c1t0d0 c1t1d0\fR
1752 .fi
1753 .in -2
1754 .sp
1755
1756 .LP
1757 \fBExample 6 \fRListing Available ZFS Storage Pools
1758 .sp
1759 .LP
1760 The following command lists all available pools on the system. In this case,
1761 the pool \fIzion\fR is faulted due to a missing device.
1762
1763 .sp
1764 .LP
1765 The results from this command are similar to the following:
1766
1767 .sp
1768 .in +2
1769 .nf
1770 # \fBzpool list\fR
1771 NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
1772 rpool 19.9G 8.43G 11.4G - 42% 1.00x ONLINE -
1773 tank 61.5G 20.0G 41.5G - 32% 1.00x ONLINE -
1774 zion - - - - - - FAULTED -
1775 .fi
1776 .in -2
1777 .sp
1778
1779 .LP
1780 \fBExample 7 \fRDestroying a ZFS Storage Pool
1781 .sp
1782 .LP
1783 The following command destroys the pool "\fItank\fR" and any datasets contained
1784 within.
1785
1786 .sp
1787 .in +2
1788 .nf
1789 # \fBzpool destroy -f tank\fR
1790 .fi
1791 .in -2
1792 .sp
1793
1794 .LP
1795 \fBExample 8 \fRExporting a ZFS Storage Pool
1796 .sp
1797 .LP
1798 The following command exports the devices in pool \fItank\fR so that they can
1799 be relocated or later imported.
1800
1801 .sp
1802 .in +2
1803 .nf
1804 # \fBzpool export tank\fR
1805 .fi
1806 .in -2
1807 .sp
1808
1809 .LP
1810 \fBExample 9 \fRImporting a ZFS Storage Pool
1811 .sp
1812 .LP
1813 The following command displays available pools, and then imports the pool
1814 "tank" for use on the system.
1815
1816 .sp
1817 .LP
1818 The results from this command are similar to the following:
1819
1820 .sp
1821 .in +2
1822 .nf
1823 # \fBzpool import\fR
1824 pool: tank
1825 id: 15451357997522795478
1826 state: ONLINE
1827 action: The pool can be imported using its name or numeric identifier.
1828 config:
1829
1830 tank ONLINE
1831 mirror ONLINE
1832 c1t2d0 ONLINE
1833 c1t3d0 ONLINE
1834
1835 # \fBzpool import tank\fR
1836 .fi
1837 .in -2
1838 .sp
1839
1840 .LP
1841 \fBExample 10 \fRUpgrading All ZFS Storage Pools to the Current Version
1842 .sp
1843 .LP
1844 The following command upgrades all ZFS Storage pools to the current version of
1845 the software.
1846
1847 .sp
1848 .in +2
1849 .nf
1850 # \fBzpool upgrade -a\fR
1851 This system is currently running ZFS version 2.
1852 .fi
1853 .in -2
1854 .sp
1855
1856 .LP
1857 \fBExample 11 \fRManaging Hot Spares
1858 .sp
1859 .LP
1860 The following command creates a new pool with an available hot spare:
1861
1862 .sp
1863 .in +2
1864 .nf
1865 # \fBzpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0\fR
1866 .fi
1867 .in -2
1868 .sp
1869
1870 .sp
1871 .LP
1872 If one of the disks were to fail, the pool would be reduced to the degraded
1873 state. The failed device can be replaced using the following command:
1874
1875 .sp
1876 .in +2
1877 .nf
1878 # \fBzpool replace tank c0t0d0 c0t3d0\fR
1879 .fi
1880 .in -2
1881 .sp
1882
1883 .sp
1884 .LP
1885 Once the data has been resilvered, the spare is automatically removed and is
1886 made available should another device fails. The hot spare can be permanently
1887 removed from the pool using the following command:
1888
1889 .sp
1890 .in +2
1891 .nf
1892 # \fBzpool remove tank c0t2d0\fR
1893 .fi
1894 .in -2
1895 .sp
1896
1897 .LP
1898 \fBExample 12 \fRCreating a ZFS Pool with Mirrored Separate Intent Logs
1899 .sp
1900 .LP
1901 The following command creates a ZFS storage pool consisting of two, two-way
1902 mirrors and mirrored log devices:
1903
1904 .sp
1905 .in +2
1906 .nf
1907 # \fBzpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
1908 c4d0 c5d0\fR
1909 .fi
1910 .in -2
1911 .sp
1912
1913 .LP
1914 \fBExample 13 \fRAdding Cache Devices to a ZFS Pool
1915 .sp
1916 .LP
1917 The following command adds two disks for use as cache devices to a ZFS storage
1918 pool:
1919
1920 .sp
1921 .in +2
1922 .nf
1923 # \fBzpool add pool cache c2d0 c3d0\fR
1924 .fi
1925 .in -2
1926 .sp
1927
1928 .sp
1929 .LP
1930 Once added, the cache devices gradually fill with content from main memory.
1931 Depending on the size of your cache devices, it could take over an hour for
1932 them to fill. Capacity and reads can be monitored using the \fBiostat\fR option
1933 as follows:
1934
1935 .sp
1936 .in +2
1937 .nf
1938 # \fBzpool iostat -v pool 5\fR
1939 .fi
1940 .in -2
1941 .sp
1942
1943 .LP
1944 \fBExample 14 \fRRemoving a Mirrored Log Device
1945 .sp
1946 .LP
1947 The following command removes the mirrored log device \fBmirror-2\fR.
1948
1949 .sp
1950 .LP
1951 Given this configuration:
1952
1953 .sp
1954 .in +2
1955 .nf
1956 pool: tank
1957 state: ONLINE
1958 scrub: none requested
1959 config:
1960
1961 NAME STATE READ WRITE CKSUM
1962 tank ONLINE 0 0 0
1963 mirror-0 ONLINE 0 0 0
1964 c6t0d0 ONLINE 0 0 0
1965 c6t1d0 ONLINE 0 0 0
1966 mirror-1 ONLINE 0 0 0
1967 c6t2d0 ONLINE 0 0 0
1968 c6t3d0 ONLINE 0 0 0
1969 logs
1970 mirror-2 ONLINE 0 0 0
1971 c4t0d0 ONLINE 0 0 0
1972 c4t1d0 ONLINE 0 0 0
1973 .fi
1974 .in -2
1975 .sp
1976
1977 .sp
1978 .LP
1979 The command to remove the mirrored log \fBmirror-2\fR is:
1980
1981 .sp
1982 .in +2
1983 .nf
1984 # \fBzpool remove tank mirror-2\fR
1985 .fi
1986 .in -2
1987 .sp
1988
1989 .LP
1990 \fBExample 15 \fRDisplaying expanded space on a device
1991 .sp
1992 .LP
1993 The following command dipslays the detailed information for the \fIdata\fR
1994 pool. This pool is comprised of a single \fIraidz\fR vdev where one of its
1995 devices increased its capacity by 1GB. In this example, the pool will not
1996 be able to utilized this extra capacity until all the devices under the
1997 \fIraidz\fR vdev have been expanded.
1998
1999 .sp
2000 .in +2
2001 .nf
2002 # \fBzpool list -v data\fR
2003 NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
2004 data 17.9G 174K 17.9G - 0% 1.00x ONLINE -
2005 raidz1 17.9G 174K 17.9G -
2006 c4t2d0 - - - 1G
2007 c4t3d0 - - - -
2008 c4t4d0 - - - -
2009 .fi
2010 .in -2
2011
2012 .SH EXIT STATUS
2013 .sp
2014 .LP
2015 The following exit values are returned:
2016 .sp
2017 .ne 2
2018 .na
2019 \fB\fB0\fR\fR
2020 .ad
2021 .RS 5n
2022 Successful completion.
2023 .RE
2024
2025 .sp
2026 .ne 2
2027 .na
2028 \fB\fB1\fR\fR
2029 .ad
2030 .RS 5n
2031 An error occurred.
2032 .RE
2033
2034 .sp
2035 .ne 2
2036 .na
2037 \fB\fB2\fR\fR
2038 .ad
2039 .RS 5n
2040 Invalid command line options were specified.
2041 .RE
2042
2043 .SH ATTRIBUTES
2044 .sp
2045 .LP
2046 See \fBattributes\fR(5) for descriptions of the following attributes:
2047 .sp
2048
2049 .sp
2050 .TS
2051 box;
2052 c | c
2053 l | l .
2054 ATTRIBUTE TYPE ATTRIBUTE VALUE
2055 _
2056 Interface Stability Evolving
2057 .TE
2058
2059 .SH SEE ALSO
2060 .sp
2061 .LP
2062 \fBzfs\fR(1M), \fBattributes\fR(5)