2619 asynchronous destruction of ZFS file systems
2747 SPA versioning with zfs feature flags
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <gwilson@delphix.com>
Reviewed by: Richard Lowe <richlowe@richlowe.net>
Reviewed by: Dan Kruchinin <dan.kruchinin@gmail.com>
Approved by: Dan McDonald <danmcd@nexenta.com>
--- old/usr/src/man/man1m/zpool.1m
+++ new/usr/src/man/man1m/zpool.1m
1 1 '\" te
2 2 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
3 3 .\" Copyright 2011, Nexenta Systems, Inc. All Rights Reserved.
4 -.\" Copyright (c) 2012 by Delphix. All Rights Reserved.
5 -.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
6 -.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the
7 -.\" fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]
8 -.TH ZPOOL 1M "Nov 14, 2011"
4 +.\" Copyright (c) 2012 by Delphix. All rights reserved.
5 +.\" The contents of this file are subject to the terms of the Common Development
6 +.\" and Distribution License (the "License"). You may not use this file except
7 +.\" in compliance with the License. You can obtain a copy of the license at
8 +.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
9 +.\"
10 +.\" See the License for the specific language governing permissions and
11 +.\" limitations under the License. When distributing Covered Code, include this
12 +.\" CDDL HEADER in each file and include the License file at
13 +.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
14 +.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
15 +.\" own identifying information:
16 +.\" Portions Copyright [yyyy] [name of copyright owner]
17 +.TH ZPOOL 1M "Mar 16, 2012"
9 18 .SH NAME
10 19 zpool \- configures ZFS storage pools
11 20 .SH SYNOPSIS
12 21 .LP
13 22 .nf
14 23 \fBzpool\fR [\fB-?\fR]
15 24 .fi
16 25
17 26 .LP
18 27 .nf
19 28 \fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...
20 29 .fi
21 30
22 31 .LP
23 32 .nf
(5 lines elided)
24 33 \fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR
25 34 .fi
26 35
27 36 .LP
28 37 .nf
29 38 \fBzpool clear\fR \fIpool\fR [\fIdevice\fR]
30 39 .fi
31 40
32 41 .LP
33 42 .nf
34 -\fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR]
43 +\fBzpool create\fR [\fB-fnd\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR]
35 44 ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR \fIvdev\fR ...
36 45 .fi
37 46
38 47 .LP
39 48 .nf
40 49 \fBzpool destroy\fR [\fB-f\fR] \fIpool\fR
41 50 .fi
42 51
43 52 .LP
44 53 .nf
45 54 \fBzpool detach\fR \fIpool\fR \fIdevice\fR
46 55 .fi
47 56
48 57 .LP
49 58 .nf
50 59 \fBzpool export\fR [\fB-f\fR] \fIpool\fR ...
51 60 .fi
52 61
53 62 .LP
54 63 .nf
55 64 \fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...
56 65 .fi
57 66
58 67 .LP
59 68 .nf
60 69 \fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...
61 70 .fi
62 71
63 72 .LP
64 73 .nf
65 74 \fBzpool import\fR [\fB-d\fR \fIdir\fR] [\fB-D\fR]
66 75 .fi
67 76
68 77 .LP
69 78 .nf
70 79 \fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
71 80 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR
72 81 .fi
73 82
74 83 .LP
75 84 .nf
76 85 \fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
77 86 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR |\fIid\fR [\fInewpool\fR]
78 87 .fi
79 88
80 89 .LP
81 90 .nf
82 91 \fBzpool iostat\fR [\fB-T\fR u | d ] [\fB-v\fR] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]]
83 92 .fi
84 93
85 94 .LP
86 95 .nf
87 96 \fBzpool list\fR [\fB-Hv\fR] [\fB-o\fR \fIproperty\fR[,...]] [\fIpool\fR] ...
88 97 .fi
89 98
90 99 .LP
91 100 .nf
92 101 \fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...
93 102 .fi
94 103
95 104 .LP
96 105 .nf
97 106 \fBzpool online\fR \fIpool\fR \fIdevice\fR ...
98 107 .fi
99 108
100 109 .LP
101 110 .nf
102 111 \fBzpool reguid\fR \fIpool\fR
103 112 .fi
104 113
105 114 .LP
106 115 .nf
107 116 \fBzpool remove\fR \fIpool\fR \fIdevice\fR ...
108 117 .fi
109 118
110 119 .LP
111 120 .nf
112 121 \fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR [\fInew_device\fR]
113 122 .fi
114 123
115 124 .LP
116 125 .nf
117 126 \fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...
118 127 .fi
119 128
120 129 .LP
121 130 .nf
122 131 \fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR
123 132 .fi
124 133
125 134 .LP
126 135 .nf
127 136 \fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...
128 137 .fi
129 138
130 139 .LP
131 140 .nf
132 141 \fBzpool upgrade\fR
133 142 .fi
134 143
135 144 .LP
136 145 .nf
137 146 \fBzpool upgrade\fR \fB-v\fR
138 147 .fi
139 148
140 149 .LP
141 150 .nf
142 151 \fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...
143 152 .fi
144 153
145 154 .SH DESCRIPTION
146 155 .sp
147 156 .LP
148 157 The \fBzpool\fR command configures \fBZFS\fR storage pools. A storage pool is a
149 158 collection of devices that provides physical storage and data replication for
150 159 \fBZFS\fR datasets.
151 160 .sp
152 161 .LP
153 162 All datasets within a storage pool share the same space. See \fBzfs\fR(1M) for
154 163 information on managing datasets.
155 164 .SS "Virtual Devices (\fBvdev\fRs)"
156 165 .sp
157 166 .LP
158 167 A "virtual device" describes a single device or a collection of devices
159 168 organized according to certain performance and fault characteristics. The
160 169 following virtual devices are supported:
161 170 .sp
162 171 .ne 2
163 172 .na
164 173 \fB\fBdisk\fR\fR
165 174 .ad
166 175 .RS 10n
167 176 A block device, typically located under \fB/dev/dsk\fR. \fBZFS\fR can use
168 177 individual slices or partitions, though the recommended mode of operation is to
169 178 use whole disks. A disk can be specified by a full path, or it can be a
170 179 shorthand name (the relative portion of the path under "/dev/dsk"). A whole
171 180 disk can be specified by omitting the slice or partition designation. For
172 181 example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole
173 182 disk, \fBZFS\fR automatically labels the disk, if necessary.
174 183 .RE
175 184
176 185 .sp
177 186 .ne 2
178 187 .na
179 188 \fB\fBfile\fR\fR
180 189 .ad
181 190 .RS 10n
182 191 A regular file. The use of files as a backing store is strongly discouraged. It
183 192 is designed primarily for experimental purposes, as the fault tolerance of a
184 193 file is only as good as the file system of which it is a part. A file must be
185 194 specified by a full path.
186 195 .RE
187 196
188 197 .sp
189 198 .ne 2
190 199 .na
191 200 \fB\fBmirror\fR\fR
192 201 .ad
193 202 .RS 10n
194 203 A mirror of two or more devices. Data is replicated in an identical fashion
195 204 across all components of a mirror. A mirror with \fIN\fR disks of size \fIX\fR
196 205 can hold \fIX\fR bytes and can withstand (\fIN-1\fR) devices failing before
197 206 data integrity is compromised.
198 207 .RE
199 208
200 209 .sp
201 210 .ne 2
202 211 .na
203 212 \fB\fBraidz\fR\fR
204 213 .ad
205 214 .br
206 215 .na
207 216 \fB\fBraidz1\fR\fR
208 217 .ad
209 218 .br
210 219 .na
211 220 \fB\fBraidz2\fR\fR
212 221 .ad
213 222 .br
214 223 .na
215 224 \fB\fBraidz3\fR\fR
216 225 .ad
217 226 .RS 10n
218 227 A variation on \fBRAID-5\fR that allows for better distribution of parity and
219 228 eliminates the "\fBRAID-5\fR write hole" (in which data and parity become
220 229 inconsistent after a power loss). Data and parity are striped across all disks
221 230 within a \fBraidz\fR group.
222 231 .sp
223 232 A \fBraidz\fR group can have single-, double-, or triple-parity, meaning that
224 233 the \fBraidz\fR group can sustain one, two, or three failures, respectively,
225 234 without losing any data. The \fBraidz1\fR \fBvdev\fR type specifies a
226 235 single-parity \fBraidz\fR group; the \fBraidz2\fR \fBvdev\fR type specifies a
227 236 double-parity \fBraidz\fR group; and the \fBraidz3\fR \fBvdev\fR type specifies
228 237 a triple-parity \fBraidz\fR group. The \fBraidz\fR \fBvdev\fR type is an alias
229 238 for \fBraidz1\fR.
230 239 .sp
231 240 A \fBraidz\fR group with \fIN\fR disks of size \fIX\fR with \fIP\fR parity
232 241 disks can hold approximately (\fIN-P\fR)*\fIX\fR bytes and can withstand
233 242 \fIP\fR device(s) failing before data integrity is compromised. The minimum
234 243 number of devices in a \fBraidz\fR group is one more than the number of parity
235 244 disks. The recommended number is between 3 and 9 to help increase performance.
236 245 .RE
237 246
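As a quick sanity check on the capacity formula above, the approximate usable
size of a raidz group can be computed with shell arithmetic. The disk count and
size below are hypothetical; a real pool reports exact figures via
\fBzpool list\fR:

```shell
# Approximate usable raidz capacity: (N - P) * X
# Hypothetical example: a 5-disk raidz1 group built from 2000 GB disks.
N=5        # total disks in the raidz group
P=1        # parity disks (1 for raidz1)
X=2000     # size of each disk, in GB
echo "usable: $(( (N - P) * X )) GB"   # prints "usable: 8000 GB"
```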
238 247 .sp
239 248 .ne 2
240 249 .na
241 250 \fB\fBspare\fR\fR
242 251 .ad
243 252 .RS 10n
244 253 A special pseudo-\fBvdev\fR which keeps track of available hot spares for a
245 254 pool. For more information, see the "Hot Spares" section.
246 255 .RE
247 256
248 257 .sp
249 258 .ne 2
250 259 .na
251 260 \fB\fBlog\fR\fR
252 261 .ad
253 262 .RS 10n
254 263 A separate-intent log device. If more than one log device is specified, then
255 264 writes are load-balanced between devices. Log devices can be mirrored. However,
256 265 \fBraidz\fR \fBvdev\fR types are not supported for the intent log. For more
257 266 information, see the "Intent Log" section.
258 267 .RE
259 268
260 269 .sp
261 270 .ne 2
262 271 .na
263 272 \fB\fBcache\fR\fR
264 273 .ad
265 274 .RS 10n
266 275 A device used to cache storage pool data. A cache device cannot be
267 276 configured as a mirror or \fBraidz\fR group. For more information, see the
268 277 "Cache Devices" section.
269 278 .RE
270 279
271 280 .sp
272 281 .LP
273 282 Virtual devices cannot be nested, so a mirror or \fBraidz\fR virtual device can
274 283 only contain files or disks. Mirrors of mirrors (or other combinations) are not
275 284 allowed.
276 285 .sp
277 286 .LP
278 287 A pool can have any number of virtual devices at the top of the configuration
279 288 (known as "root vdevs"). Data is dynamically distributed across all top-level
280 289 devices to balance data among devices. As new virtual devices are added,
281 290 \fBZFS\fR automatically places data on the newly available devices.
282 291 .sp
283 292 .LP
284 293 Virtual devices are specified one at a time on the command line, separated by
285 294 whitespace. The keywords "mirror" and "raidz" are used to distinguish where a
286 295 group ends and another begins. For example, the following creates two root
287 296 vdevs, each a mirror of two disks:
288 297 .sp
289 298 .in +2
290 299 .nf
291 300 # \fBzpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0\fR
292 301 .fi
293 302 .in -2
294 303 .sp
295 304
296 305 .SS "Device Failure and Recovery"
297 306 .sp
298 307 .LP
299 308 \fBZFS\fR supports a rich set of mechanisms for handling device failure and
300 309 data corruption. All metadata and data are checksummed, and \fBZFS\fR
301 310 automatically repairs bad data from a good copy when corruption is detected.
302 311 .sp
303 312 .LP
304 313 In order to take advantage of these features, a pool must make use of some form
305 314 of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR
306 315 supports running in a non-redundant configuration, where each root vdev is
307 316 simply a disk or file, this is strongly discouraged. A single case of bit
308 317 corruption can render some or all of your data unavailable.
309 318 .sp
310 319 .LP
311 320 A pool's health status is described by one of three states: online, degraded,
312 321 or faulted. An online pool has all devices operating normally. A degraded pool
313 322 is one in which one or more devices have failed, but the data is still
314 323 available due to a redundant configuration. A faulted pool has corrupted
315 324 metadata, or one or more faulted devices, and insufficient replicas to continue
316 325 functioning.
317 326 .sp
318 327 .LP
319 328 The health of the top-level vdev, such as mirror or \fBraidz\fR device, is
320 329 potentially impacted by the state of its associated vdevs, or component
321 330 devices. A top-level vdev or component device is in one of the following
322 331 states:
323 332 .sp
324 333 .ne 2
325 334 .na
326 335 \fB\fBDEGRADED\fR\fR
327 336 .ad
328 337 .RS 12n
329 338 One or more top-level vdevs is in the degraded state because one or more
330 339 component devices are offline. Sufficient replicas exist to continue
331 340 functioning.
332 341 .sp
333 342 One or more component devices is in the degraded or faulted state, but
334 343 sufficient replicas exist to continue functioning. The underlying conditions
335 344 are as follows:
336 345 .RS +4
337 346 .TP
338 347 .ie t \(bu
339 348 .el o
340 349 The number of checksum errors exceeds acceptable levels and the device is
341 350 degraded as an indication that something may be wrong. \fBZFS\fR continues to
342 351 use the device as necessary.
343 352 .RE
344 353 .RS +4
345 354 .TP
346 355 .ie t \(bu
347 356 .el o
348 357 The number of I/O errors exceeds acceptable levels. The device could not be
349 358 marked as faulted because there are insufficient replicas to continue
350 359 functioning.
351 360 .RE
352 361 .RE
353 362
354 363 .sp
355 364 .ne 2
356 365 .na
357 366 \fB\fBFAULTED\fR\fR
358 367 .ad
359 368 .RS 12n
360 369 One or more top-level vdevs is in the faulted state because one or more
361 370 component devices are offline. Insufficient replicas exist to continue
362 371 functioning.
363 372 .sp
364 373 One or more component devices is in the faulted state, and insufficient
365 374 replicas exist to continue functioning. The underlying conditions are as
366 375 follows:
367 376 .RS +4
368 377 .TP
369 378 .ie t \(bu
370 379 .el o
371 380 The device could be opened, but the contents did not match expected values.
372 381 .RE
373 382 .RS +4
374 383 .TP
375 384 .ie t \(bu
376 385 .el o
377 386 The number of I/O errors exceeds acceptable levels and the device is faulted to
378 387 prevent further use of the device.
379 388 .RE
380 389 .RE
381 390
382 391 .sp
383 392 .ne 2
384 393 .na
385 394 \fB\fBOFFLINE\fR\fR
386 395 .ad
387 396 .RS 12n
388 397 The device was explicitly taken offline by the "\fBzpool offline\fR" command.
389 398 .RE
390 399
391 400 .sp
392 401 .ne 2
393 402 .na
394 403 \fB\fBONLINE\fR\fR
395 404 .ad
396 405 .RS 12n
397 406 The device is online and functioning.
398 407 .RE
399 408
400 409 .sp
401 410 .ne 2
402 411 .na
403 412 \fB\fBREMOVED\fR\fR
404 413 .ad
405 414 .RS 12n
406 415 The device was physically removed while the system was running. Device removal
407 416 detection is hardware-dependent and may not be supported on all platforms.
408 417 .RE
409 418
410 419 .sp
411 420 .ne 2
412 421 .na
413 422 \fB\fBUNAVAIL\fR\fR
414 423 .ad
415 424 .RS 12n
416 425 The device could not be opened. If a pool is imported while a device is
417 426 unavailable, then the device is identified by a unique identifier instead
418 427 of its path, since the path was never correct in the first place.
419 428 .RE
420 429
421 430 .sp
422 431 .LP
423 432 If a device is removed and later re-attached to the system, \fBZFS\fR attempts
424 433 to put the device online automatically. Device attach detection is
425 434 hardware-dependent and might not be supported on all platforms.
426 435 .SS "Hot Spares"
427 436 .sp
428 437 .LP
429 438 \fBZFS\fR allows devices to be associated with pools as "hot spares". These
430 439 devices are not actively used in the pool, but when an active device fails, it
431 440 is automatically replaced by a hot spare. To create a pool with hot spares,
432 441 specify a "spare" \fBvdev\fR with any number of devices. For example,
433 442 .sp
434 443 .in +2
435 444 .nf
436 445 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
437 446 .fi
438 447 .in -2
439 448 .sp
440 449
441 450 .sp
442 451 .LP
443 452 Spares can be shared across multiple pools, and can be added with the "\fBzpool
444 453 add\fR" command and removed with the "\fBzpool remove\fR" command. Once a spare
445 454 replacement is initiated, a new "spare" \fBvdev\fR is created within the
446 455 configuration that will remain there until the original device is replaced. At
447 456 this point, the hot spare becomes available again if another device fails.
448 457 .sp
449 458 .LP
450 459 If a pool has a shared spare that is currently being used, the pool cannot be
451 460 exported since other pools may use this shared spare, which may lead to
452 461 potential data corruption.
453 462 .sp
454 463 .LP
455 464 An in-progress spare replacement can be cancelled by detaching the hot spare.
456 465 If the original faulted device is detached, then the hot spare assumes its
457 466 place in the configuration, and is removed from the spare list of all active
458 467 pools.
459 468 .sp
460 469 .LP
461 470 Spares cannot replace log devices.
462 471 .SS "Intent Log"
463 472 .sp
464 473 .LP
465 474 The \fBZFS\fR Intent Log (\fBZIL\fR) satisfies \fBPOSIX\fR requirements for
466 475 synchronous transactions. For instance, databases often require their
467 476 transactions to be on stable storage devices when returning from a system call.
468 477 \fBNFS\fR and other applications can also use \fBfsync\fR() to ensure data
469 478 stability. By default, the intent log is allocated from blocks within the main
470 479 pool. However, it might be possible to get better performance using separate
471 480 intent log devices such as \fBNVRAM\fR or a dedicated disk. For example:
472 481 .sp
473 482 .in +2
474 483 .nf
475 484 \fB# zpool create pool c0d0 c1d0 log c2d0\fR
476 485 .fi
477 486 .in -2
478 487 .sp
479 488
480 489 .sp
481 490 .LP
482 491 Multiple log devices can also be specified, and they can be mirrored. See the
483 492 EXAMPLES section for an example of mirroring multiple log devices.
484 493 .sp
485 494 .LP
486 495 Log devices can be added, replaced, attached, detached, and imported and
487 496 exported as part of the larger pool. Mirrored log devices can be removed by
488 497 specifying the top-level mirror for the log.
489 498 .SS "Cache Devices"
490 499 .sp
491 500 .LP
492 501 Devices can be added to a storage pool as "cache devices." These devices
493 502 provide an additional layer of caching between main memory and disk. For
494 503 read-heavy workloads, where the working set size is much larger than what can
495 504 be cached in main memory, using cache devices allows much more of this working
496 505 set to be served from low latency media. Using cache devices provides the
497 506 greatest performance improvement for random read-workloads of mostly static
498 507 content.
499 508 .sp
500 509 .LP
501 510 To create a pool with cache devices, specify a "cache" \fBvdev\fR with any
502 511 number of devices. For example:
503 512 .sp
504 513 .in +2
505 514 .nf
506 515 \fB# zpool create pool c0d0 c1d0 cache c2d0 c3d0\fR
507 516 .fi
508 517 .in -2
509 518 .sp
510 519
511 520 .sp
512 521 .LP
513 522 Cache devices cannot be mirrored or part of a \fBraidz\fR configuration. If a
514 523 read error is encountered on a cache device, that read \fBI/O\fR is reissued to
515 524 the original storage pool device, which might be part of a mirrored or
516 525 \fBraidz\fR configuration.
517 526 .sp
518 527 .LP
519 528 The content of the cache devices is considered volatile, as is the case with
520 529 other system caches.
521 530 .SS "Properties"
522 531 .sp
523 532 .LP
524 533 Each pool has several properties associated with it. Some properties are
525 534 read-only statistics while others are configurable and change the behavior of
526 535 the pool. The following are read-only properties:
527 536 .sp
528 537 .ne 2
529 538 .na
530 539 \fB\fBavailable\fR\fR
531 540 .ad
532 541 .RS 20n
533 542 Amount of storage available within the pool. This property can also be referred
534 543 to by its shortened column name, "avail".
535 544 .RE
536 545
537 546 .sp
538 547 .ne 2
539 548 .na
(495 lines elided)
540 549 \fB\fBcapacity\fR\fR
541 550 .ad
542 551 .RS 20n
543 552 Percentage of pool space used. This property can also be referred to by its
544 553 shortened column name, "cap".
545 554 .RE
546 555
547 556 .sp
548 557 .ne 2
549 558 .na
550 -\fB\fBcomment\fR\fR
559 +\fB\fBexpandsize\fR\fR
551 560 .ad
552 561 .RS 20n
553 -A text string consisting of printable ASCII characters that will be stored
554 -such that it is available even if the pool becomes faulted. An administrator
555 -can provide additional information about a pool using this property.
562 +Amount of uninitialized space within the pool or device that can be used to
563 +increase the total capacity of the pool. Uninitialized space consists of
564 +any space on an EFI labeled vdev which has not been brought online
565 +(i.e. zpool online -e). This space occurs when a LUN is dynamically expanded.
556 566 .RE
557 567
558 568 .sp
559 569 .ne 2
560 570 .na
561 -\fB\fBexpandsize\fR\fR
571 +\fB\fBfree\fR\fR
562 572 .ad
563 573 .RS 20n
564 -Amount of uninitialized space within the pool or device that can be used to
565 -increase the total capacity of the pool. Uninitialized space consists of
566 -any space on an EFI labeled vdev which has not been brought online
567 -(i.e. zpool online -e). This space occurs when a LUN is dynamically expanded.
574 +The amount of free space available in the pool.
568 575 .RE
569 576
570 577 .sp
571 578 .ne 2
572 579 .na
580 +\fB\fBfreeing\fR\fR
581 +.ad
582 +.RS 20n
583 +After a file system or snapshot is destroyed, the space it was using is
584 +returned to the pool asynchronously. \fB\fBfreeing\fR\fR is the amount of
585 +space remaining to be reclaimed. Over time \fB\fBfreeing\fR\fR will decrease
586 +while \fB\fBfree\fR\fR increases.
587 +.RE
588 +
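The relationship between \fBfreeing\fR and \fBfree\fR described above can be
modeled with toy numbers. This is an illustrative simulation only, not real
\fBzpool\fR output: as the pool asynchronously reclaims the destroyed
dataset's space, \fBfreeing\fR drains to zero while \fBfree\fR grows by the
same amount.

```shell
# Toy model of asynchronous reclamation after a dataset is destroyed:
# "freeing" decreases while "free" increases; their sum stays constant.
free=1000      # GB currently free (hypothetical)
freeing=500    # GB still to be reclaimed (hypothetical)
while [ "$freeing" -gt 0 ]; do
  step=100                       # amount reclaimed per pass (made up)
  freeing=$(( freeing - step ))
  free=$(( free + step ))
done
echo "free=$free freeing=$freeing"   # prints "free=1500 freeing=0"
```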
589 +.sp
590 +.ne 2
591 +.na
573 592 \fB\fBhealth\fR\fR
574 593 .ad
575 594 .RS 20n
576 595 The current health of the pool. Health can be "\fBONLINE\fR", "\fBDEGRADED\fR",
577 596 "\fBFAULTED\fR", " \fBOFFLINE\fR", "\fBREMOVED\fR", or "\fBUNAVAIL\fR".
578 597 .RE
579 598
580 599 .sp
581 600 .ne 2
582 601 .na
583 602 \fB\fBguid\fR\fR
584 603 .ad
585 604 .RS 20n
586 605 A unique identifier for the pool.
587 606 .RE
588 607
589 608 .sp
590 609 .ne 2
(8 lines elided)
591 610 .na
592 611 \fB\fBsize\fR\fR
593 612 .ad
594 613 .RS 20n
595 614 Total size of the storage pool.
596 615 .RE
597 616
598 617 .sp
599 618 .ne 2
600 619 .na
620 +\fB\fBunsupported@\fR\fIfeature_guid\fR\fR
621 +.ad
622 +.RS 20n
623 +Information about unsupported features that are enabled on the pool. See
624 +\fBzpool-features\fR(5) for details.
625 +.RE
626 +
627 +.sp
628 +.ne 2
629 +.na
601 630 \fB\fBused\fR\fR
602 631 .ad
603 632 .RS 20n
604 633 Amount of storage space used within the pool.
605 634 .RE
606 635
607 636 .sp
608 637 .LP
609 -These space usage properties report actual physical space available to the
638 +The space usage properties report actual physical space available to the
610 639 storage pool. The physical space can be different from the total amount of
611 640 space that any contained datasets can actually use. The amount of space used in
612 641 a \fBraidz\fR configuration depends on the characteristics of the data being
613 642 written. In addition, \fBZFS\fR reserves some space for internal accounting
614 643 that the \fBzfs\fR(1M) command takes into account, but the \fBzpool\fR command
615 644 does not. For non-full pools of a reasonable size, these effects should be
616 645 invisible. For small pools, or pools that are close to being completely full,
617 646 these discrepancies may become more noticeable.
618 647 .sp
619 648 .LP
620 649 The following property can be set at creation time and import time:
621 650 .sp
622 651 .ne 2
623 652 .na
624 653 \fB\fBaltroot\fR\fR
625 654 .ad
626 655 .sp .6
627 656 .RS 4n
628 657 Alternate root directory. If set, this directory is prepended to any mount
629 658 points within the pool. This can be used when examining an unknown pool where
630 659 the mount points cannot be trusted, or in an alternate boot environment, where
631 660 the typical paths are not valid. \fBaltroot\fR is not a persistent property. It
632 661 is valid only while the system is up. Setting \fBaltroot\fR defaults to using
633 662 \fBcachefile\fR=none, though this may be overridden using an explicit setting.
634 663 .RE
635 664
636 665 .sp
637 666 .LP
638 667 The following properties can be set at creation time and import time, and later
639 668 changed with the \fBzpool set\fR command:
640 669 .sp
641 670 .ne 2
642 671 .na
643 672 \fB\fBautoexpand\fR=\fBon\fR | \fBoff\fR\fR
644 673 .ad
645 674 .sp .6
646 675 .RS 4n
647 676 Controls automatic pool expansion when the underlying LUN is grown. If set to
648 677 \fBon\fR, the pool will be resized according to the size of the expanded
649 678 device. If the device is part of a mirror or \fBraidz\fR then all devices
650 679 within that mirror/\fBraidz\fR group must be expanded before the new space is
651 680 made available to the pool. The default behavior is \fBoff\fR. This property
652 681 can also be referred to by its shortened column name, \fBexpand\fR.
653 682 .RE
654 683
655 684 .sp
656 685 .ne 2
657 686 .na
658 687 \fB\fBautoreplace\fR=\fBon\fR | \fBoff\fR\fR
659 688 .ad
660 689 .sp .6
661 690 .RS 4n
662 691 Controls automatic device replacement. If set to "\fBoff\fR", device
663 692 replacement must be initiated by the administrator by using the "\fBzpool
664 693 replace\fR" command. If set to "\fBon\fR", any new device, found in the same
665 694 physical location as a device that previously belonged to the pool, is
666 695 automatically formatted and replaced. The default behavior is "\fBoff\fR". This
667 696 property can also be referred to by its shortened column name, "replace".
668 697 .RE
669 698
670 699 .sp
671 700 .ne 2
672 701 .na
673 702 \fB\fBbootfs\fR=\fIpool\fR/\fIdataset\fR\fR
674 703 .ad
675 704 .sp .6
676 705 .RS 4n
677 706 Identifies the default bootable dataset for the root pool. This property is
678 707 expected to be set mainly by the installation and upgrade programs.
679 708 .RE
680 709
681 710 .sp
682 711 .ne 2
683 712 .na
684 713 \fB\fBcachefile\fR=\fIpath\fR | \fBnone\fR\fR
685 714 .ad
686 715 .sp .6
687 716 .RS 4n
688 717 Controls the location of where the pool configuration is cached. Discovering
689 718 all pools on system startup requires a cached copy of the configuration data
690 719 that is stored on the root file system. All pools in this cache are
691 720 automatically imported when the system boots. Some environments, such as
692 721 install and clustering, need to cache this information in a different location
693 722 so that pools are not automatically imported. Setting this property caches the
694 723 pool configuration in a different location that can later be imported with
695 724 "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a
696 725 temporary pool that is never cached, and the special value \fB\&''\fR (empty
697 726 string) uses the default location.
(78 lines elided)
698 727 .sp
699 728 Multiple pools can share the same cache file. Because the kernel destroys and
700 729 recreates this file when pools are added and removed, care should be taken when
701 730 attempting to access this file. When the last pool using a \fBcachefile\fR is
702 731 exported or destroyed, the file is removed.
703 732 .RE
704 733
705 734 .sp
706 735 .ne 2
707 736 .na
737 +\fB\fBcomment\fR=\fB\fItext\fR\fR
738 +.ad
739 +.RS 4n
740 +A text string consisting of printable ASCII characters that will be stored
741 +such that it is available even if the pool becomes faulted. An administrator
742 +can provide additional information about a pool using this property.
743 +.RE
744 +
745 +.sp
746 +.ne 2
747 +.na
708 748 \fB\fBdelegation\fR=\fBon\fR | \fBoff\fR\fR
709 749 .ad
710 750 .sp .6
711 751 .RS 4n
712 752 Controls whether a non-privileged user is granted access based on the dataset
713 753 permissions defined on the dataset. See \fBzfs\fR(1M) for more information on
714 754 \fBZFS\fR delegated administration.
715 755 .RE
716 756
717 757 .sp
718 758 .ne 2
719 759 .na
720 760 \fB\fBfailmode\fR=\fBwait\fR | \fBcontinue\fR | \fBpanic\fR\fR
721 761 .ad
722 762 .sp .6
723 763 .RS 4n
724 764 Controls the system behavior in the event of catastrophic pool failure. This
725 765 condition is typically a result of a loss of connectivity to the underlying
726 766 storage device(s) or a failure of all devices within the pool. The behavior of
727 767 such an event is determined as follows:
728 768 .sp
729 769 .ne 2
730 770 .na
731 771 \fB\fBwait\fR\fR
732 772 .ad
733 773 .RS 12n
734 774 Blocks all \fBI/O\fR access until the device connectivity is recovered and the
735 775 errors are cleared. This is the default behavior.
736 776 .RE
737 777
738 778 .sp
739 779 .ne 2
740 780 .na
741 781 \fB\fBcontinue\fR\fR
742 782 .ad
743 783 .RS 12n
744 784 Returns \fBEIO\fR to any new write \fBI/O\fR requests but allows reads to any
745 785 of the remaining healthy devices. Any write requests that have yet to be
746 786 committed to disk would be blocked.
747 787 .RE
748 788
749 789 .sp
750 790 .ne 2
751 791 .na
752 792 \fB\fBpanic\fR\fR
(35 lines elided)
753 793 .ad
754 794 .RS 12n
755 795 Prints out a message to the console and generates a system crash dump.
756 796 .RE
757 797
758 798 .RE
759 799
760 800 .sp
761 801 .ne 2
762 802 .na
803 +\fB\fBfeature@\fR\fIfeature_name\fR=\fBenabled\fR\fR
804 +.ad
805 +.RS 4n
806 +The value of this property is the current state of \fIfeature_name\fR. The
807 +only valid value when setting this property is \fBenabled\fR which moves
808 +\fIfeature_name\fR to the enabled state. See \fBzpool-features\fR(5) for
809 +details on feature states.
810 +.RE
811 +
812 +.sp
813 +.ne 2
814 +.na
763 815 \fB\fBlistsnaps\fR=on | off\fR
764 816 .ad
765 817 .sp .6
766 818 .RS 4n
767 819 Controls whether information about snapshots associated with this pool is
768 820 output when "\fBzfs list\fR" is run without the \fB-t\fR option. The default
769 821 value is "off".
770 822 .RE
771 823
772 824 .sp
773 825 .ne 2
774 826 .na
775 827 \fB\fBversion\fR=\fIversion\fR\fR
776 828 .ad
777 829 .sp .6
778 830 .RS 4n
779 831 The current on-disk version of the pool. This can be increased, but never
780 832 decreased. The preferred method of updating pools is with the "\fBzpool
781 833 upgrade\fR" command, though this property can be used when a specific version
782 -is needed for backwards compatibility. This property can be any number between
783 -1 and the current version reported by "\fBzpool upgrade -v\fR".
834 +is needed for backwards compatibility. Once feature flags are enabled on a
835 +pool, this property will no longer have a value.
784 836 .RE
785 837
786 838 .SS "Subcommands"
787 839 .sp
788 840 .LP
789 841 All subcommands that modify state are logged persistently to the pool in their
790 842 original form.
791 843 .sp
792 844 .LP
793 845 The \fBzpool\fR command provides subcommands to create and destroy storage
794 846 pools, add capacity to storage pools, and provide information about the storage
795 847 pools. The following subcommands are supported:
796 848 .sp
797 849 .ne 2
798 850 .na
799 851 \fB\fBzpool\fR \fB-?\fR\fR
800 852 .ad
801 853 .sp .6
802 854 .RS 4n
803 855 Displays a help message.
804 856 .RE
805 857
806 858 .sp
807 859 .ne 2
808 860 .na
809 861 \fB\fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...\fR
810 862 .ad
811 863 .sp .6
812 864 .RS 4n
813 865 Adds the specified virtual devices to the given pool. The \fIvdev\fR
814 866 specification is described in the "Virtual Devices" section. The behavior of
815 867 the \fB-f\fR option, and the device checks performed are described in the
816 868 "zpool create" subcommand.
817 869 .sp
818 870 .ne 2
819 871 .na
820 872 \fB\fB-f\fR\fR
821 873 .ad
822 874 .RS 6n
823 875 Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting
824 876 replication level. Not all devices can be overridden in this manner.
825 877 .RE
826 878
827 879 .sp
828 880 .ne 2
829 881 .na
830 882 \fB\fB-n\fR\fR
831 883 .ad
832 884 .RS 6n
833 885 Displays the configuration that would be used without actually adding the
834 886 \fBvdev\fRs. The actual addition can still fail due to insufficient
835 887 privileges or device sharing.
836 888 .RE
837 889
838 890 Do not add a disk that is currently configured as a quorum device to a zpool.
839 891 After a disk is in the pool, that disk can then be configured as a quorum
840 892 device.
841 893 .RE
842 894
843 895 .sp
844 896 .ne 2
845 897 .na
846 898 \fB\fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR\fR
847 899 .ad
848 900 .sp .6
849 901 .RS 4n
850 902 Attaches \fInew_device\fR to an existing \fBzpool\fR device. The existing
851 903 device cannot be part of a \fBraidz\fR configuration. If \fIdevice\fR is not
852 904 currently part of a mirrored configuration, \fIdevice\fR automatically
853 905 transforms into a two-way mirror of \fIdevice\fR and \fInew_device\fR. If
854 906 \fIdevice\fR is part of a two-way mirror, attaching \fInew_device\fR creates a
855 907 three-way mirror, and so on. In either case, \fInew_device\fR begins to
856 908 resilver immediately.
857 909 .sp
858 910 .ne 2
859 911 .na
860 912 \fB\fB-f\fR\fR
861 913 .ad
862 914 .RS 6n
863 915 Forces use of \fInew_device\fR, even if it appears to be in use. Not all
864 916 devices can be overridden in this manner.
865 917 .RE
866 918
867 919 .RE
868 920
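A minimal sketch of attaching a device (pool and device names are illustrative): a single-disk pool becomes a two-way mirror, and resilvering starts immediately.

```shell
# Attach c1t0d0 to the existing device c0t0d0 in pool "tank",
# turning the vdev into a two-way mirror.
zpool attach tank c0t0d0 c1t0d0

# The status output reports resilver progress.
zpool status tank
```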
869 921 .sp
870 922 .ne 2
871 923 .na
872 924 \fB\fBzpool clear\fR \fIpool\fR [\fIdevice\fR] ...\fR
873 925 .ad
874 926 .sp .6
875 927 .RS 4n
876 928 Clears device errors in a pool. If no arguments are specified, all device
877 929 errors within the pool are cleared. If one or more devices is specified, only
878 930 those errors associated with the specified device or devices are cleared.
879 931 .RE
880 932
881 933 .sp
882 934 .ne 2
883 935 .na
884 -\fB\fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR
936 +\fB\fBzpool create\fR [\fB-fnd\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR
885 937 \fIfile-system-property=value\fR] ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR
886 938 \fIroot\fR] \fIpool\fR \fIvdev\fR ...\fR
887 939 .ad
888 940 .sp .6
889 941 .RS 4n
890 942 Creates a new storage pool containing the virtual devices specified on the
891 943 command line. The pool name must begin with a letter, and can only contain
892 944 alphanumeric characters as well as underscore ("_"), dash ("-"), and period
893 945 ("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are
894 946 names beginning with the pattern "c[0-9]". The \fBvdev\fR specification is
895 947 described in the "Virtual Devices" section.
896 948 .sp
897 949 The command verifies that each device specified is accessible and not currently
898 950 in use by another subsystem. There are some uses, such as being currently
899 951 mounted, or specified as the dedicated dump device, that prevent a device from
900 952 ever being used by \fBZFS\fR. Other uses, such as having a preexisting
901 953 \fBUFS\fR file system, can be overridden with the \fB-f\fR option.
902 954 .sp
903 955 The command also checks that the replication strategy for the pool is
904 956 consistent. An attempt to combine redundant and non-redundant storage in a
905 957 single pool, or to mix disks and files, results in an error unless \fB-f\fR is
906 958 specified. The use of differently sized devices within a single \fBraidz\fR or
907 959 mirror group is also flagged as an error unless \fB-f\fR is specified.
908 960 .sp
909 961 Unless the \fB-R\fR option is specified, the default mount point is
910 962 "/\fIpool\fR". The mount point must not exist or must be empty, or else the
911 963 root dataset cannot be mounted. This can be overridden with the \fB-m\fR
912 964 option.
913 965 .sp
966 +By default, all supported features are enabled on the new pool unless the
967 +\fB-d\fR option is specified.
968 +.sp
914 969 .ne 2
915 970 .na
916 971 \fB\fB-f\fR\fR
917 972 .ad
918 973 .sp .6
919 974 .RS 4n
920 975 Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting
921 976 replication level. Not all devices can be overridden in this manner.
922 977 .RE
923 978
924 979 .sp
925 980 .ne 2
926 981 .na
927 982 \fB\fB-n\fR\fR
928 983 .ad
929 984 .sp .6
930 985 .RS 4n
931 986 Displays the configuration that would be used without actually creating the
932 987 pool. The actual pool creation can still fail due to insufficient privileges or
933 988 device sharing.
934 989 .RE
935 990
936 991 .sp
937 992 .ne 2
938 993 .na
994 +\fB\fB-d\fR\fR
995 +.ad
996 +.sp .6
997 +.RS 4n
998 +Do not enable any features on the new pool. Individual features can be enabled
999 +by setting their corresponding properties to \fBenabled\fR with the \fB-o\fR
1000 +option. See \fBzpool-features\fR(5) for details about feature properties.
1001 +.RE
1002 +
1003 +.sp
1004 +.ne 2
1005 +.na
939 1006 \fB\fB-o\fR \fIproperty=value\fR [\fB-o\fR \fIproperty=value\fR] ...\fR
940 1007 .ad
941 1008 .sp .6
942 1009 .RS 4n
943 1010 Sets the given pool properties. See the "Properties" section for a list of
944 1011 valid properties that can be set.
945 1012 .RE
946 1013
947 1014 .sp
948 1015 .ne 2
949 1016 .na
950 1017 \fB\fB-O\fR \fIfile-system-property=value\fR\fR
951 1018 .ad
952 1019 .br
953 1020 .na
954 1021 \fB[\fB-O\fR \fIfile-system-property=value\fR] ...\fR
955 1022 .ad
956 1023 .sp .6
957 1024 .RS 4n
958 1025 Sets the given file system properties in the root file system of the pool. See
959 1026 the "Properties" section of \fBzfs\fR(1M) for a list of valid properties that
960 1027 can be set.
961 1028 .RE
962 1029
963 1030 .sp
964 1031 .ne 2
965 1032 .na
966 1033 \fB\fB-R\fR \fIroot\fR\fR
967 1034 .ad
968 1035 .sp .6
969 1036 .RS 4n
970 1037 Equivalent to "-o cachefile=none,altroot=\fIroot\fR"
971 1038 .RE
972 1039
973 1040 .sp
974 1041 .ne 2
975 1042 .na
976 1043 \fB\fB-m\fR \fImountpoint\fR\fR
977 1044 .ad
978 1045 .sp .6
979 1046 .RS 4n
980 1047 Sets the mount point for the root dataset. The default mount point is
981 1048 "/\fIpool\fR" or "\fBaltroot\fR/\fIpool\fR" if \fBaltroot\fR is specified. The
982 1049 mount point must be an absolute path, "\fBlegacy\fR", or "\fBnone\fR". For more
983 1050 information on dataset mount points, see \fBzfs\fR(1M).
984 1051 .RE
985 1052
986 1053 .RE
987 1054
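As an illustration of the \fB-d\fR option (pool, device, and feature names below are illustrative), features can be left disabled at creation time and enabled selectively afterwards, or enabled individually at creation with \fB-o\fR:

```shell
# Create a pool with no features enabled, then enable one feature
# explicitly.
zpool create -d tank c0t0d0
zpool set feature@async_destroy=enabled tank

# Alternatively, enable an individual feature at creation time.
zpool create -d -o feature@async_destroy=enabled tank2 c0t1d0
```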
988 1055 .sp
989 1056 .ne 2
990 1057 .na
991 1058 \fB\fBzpool destroy\fR [\fB-f\fR] \fIpool\fR\fR
992 1059 .ad
993 1060 .sp .6
994 1061 .RS 4n
995 1062 Destroys the given pool, freeing up any devices for other use. This command
996 1063 tries to unmount any active datasets before destroying the pool.
997 1064 .sp
998 1065 .ne 2
999 1066 .na
1000 1067 \fB\fB-f\fR\fR
1001 1068 .ad
1002 1069 .RS 6n
1003 1070 Forces any active datasets contained within the pool to be unmounted.
1004 1071 .RE
1005 1072
1006 1073 .RE
1007 1074
1008 1075 .sp
1009 1076 .ne 2
1010 1077 .na
1011 1078 \fB\fBzpool detach\fR \fIpool\fR \fIdevice\fR\fR
1012 1079 .ad
1013 1080 .sp .6
1014 1081 .RS 4n
1015 1082 Detaches \fIdevice\fR from a mirror. The operation is refused if there are no
1016 1083 other valid replicas of the data.
1017 1084 .RE
1018 1085
1019 1086 .sp
1020 1087 .ne 2
1021 1088 .na
1022 1089 \fB\fBzpool export\fR [\fB-f\fR] \fIpool\fR ...\fR
1023 1090 .ad
1024 1091 .sp .6
1025 1092 .RS 4n
1026 1093 Exports the given pools from the system. All devices are marked as exported,
1027 1094 but are still considered in use by other subsystems. The devices can be moved
1028 1095 between systems (even those of different endianness) and imported as long as a
1029 1096 sufficient number of devices are present.
1030 1097 .sp
1031 1098 Before exporting the pool, all datasets within the pool are unmounted. A pool
1032 1099 cannot be exported if it has a shared spare that is currently being used.
1033 1100 .sp
1034 1101 For pools to be portable, you must give the \fBzpool\fR command whole disks,
1035 1102 not just slices, so that \fBZFS\fR can label the disks with portable \fBEFI\fR
1036 1103 labels. Otherwise, disk drivers on platforms of different endianness will not
1037 1104 recognize the disks.
1038 1105 .sp
1039 1106 .ne 2
1040 1107 .na
1041 1108 \fB\fB-f\fR\fR
1042 1109 .ad
1043 1110 .RS 6n
1044 1111 Forcefully unmount all datasets, using the "\fBunmount -f\fR" command.
1045 1112 .sp
1046 1113 This command will forcefully export the pool even if it has a shared spare that
1047 1114 is currently being used. This may lead to potential data corruption.
1048 1115 .RE
1049 1116
1050 1117 .RE
1051 1118
1052 1119 .sp
1053 1120 .ne 2
1054 1121 .na
1055 1122 \fB\fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...\fR
1056 1123 .ad
1057 1124 .sp .6
1058 1125 .RS 4n
1059 1126 Retrieves the given list of properties (or all properties if "\fBall\fR" is
1060 1127 used) for the specified storage pool(s). These properties are displayed with
1061 1128 the following fields:
1062 1129 .sp
1063 1130 .in +2
1064 1131 .nf
1065 1132 name Name of storage pool
1066 1133 property Property name
1067 1134 value Property value
1068 1135 source Property source, either 'default' or 'local'.
1069 1136 .fi
1070 1137 .in -2
1071 1138 .sp
1072 1139
1073 1140 See the "Properties" section for more information on the available pool
1074 1141 properties.
1075 1142 .RE
1076 1143
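A short sketch of retrieving pool properties (the pool name is illustrative):

```shell
# Retrieve selected properties for pool "tank", or all of them.
zpool get size,capacity,health tank
zpool get all tank
```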
1077 1144 .sp
1078 1145 .ne 2
1079 1146 .na
1080 1147 \fB\fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...\fR
1081 1148 .ad
1082 1149 .sp .6
1083 1150 .RS 4n
1084 1151 Displays the command history of the specified pools or all pools if no pool is
1085 1152 specified.
1086 1153 .sp
1087 1154 .ne 2
1088 1155 .na
1089 1156 \fB\fB-i\fR\fR
1090 1157 .ad
1091 1158 .RS 6n
1092 1159 Displays internally logged \fBZFS\fR events in addition to user initiated
1093 1160 events.
1094 1161 .RE
1095 1162
1096 1163 .sp
1097 1164 .ne 2
1098 1165 .na
1099 1166 \fB\fB-l\fR\fR
1100 1167 .ad
1101 1168 .RS 6n
1102 1169 Displays log records in long format, which, in addition to the standard
1103 1170 format, includes the user name, the hostname, and the zone in which the
1104 1171 operation was performed.
1105 1172 .RE
1106 1173
1107 1174 .RE
1108 1175
1109 1176 .sp
1110 1177 .ne 2
1111 1178 .na
1112 1179 \fB\fBzpool import\fR [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1113 1180 [\fB-D\fR]\fR
1114 1181 .ad
1115 1182 .sp .6
1116 1183 .RS 4n
1117 1184 Lists pools available to import. If the \fB-d\fR option is not specified, this
1118 1185 command searches for devices in "/dev/dsk". The \fB-d\fR option can be
1119 1186 specified multiple times, and all directories are searched. If the device
1120 1187 appears to be part of an exported pool, this command displays a summary of the
1121 1188 pool with its name, a numeric identifier, the \fIvdev\fR layout, and the
1122 1189 current health of each device or file. Destroyed
1123 1190 pools, pools that were previously destroyed with the "\fBzpool destroy\fR"
1124 1191 command, are not listed unless the \fB-D\fR option is specified.
1125 1192 .sp
1126 1193 The numeric identifier is unique, and can be used instead of the pool name when
1127 1194 multiple exported pools of the same name are available.
1128 1195 .sp
1129 1196 .ne 2
1130 1197 .na
1131 1198 \fB\fB-c\fR \fIcachefile\fR\fR
1132 1199 .ad
1133 1200 .RS 16n
1134 1201 Reads configuration from the given \fBcachefile\fR that was created with the
1135 1202 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1136 1203 searching for devices.
1137 1204 .RE
1138 1205
1139 1206 .sp
1140 1207 .ne 2
1141 1208 .na
1142 1209 \fB\fB-d\fR \fIdir\fR\fR
1143 1210 .ad
1144 1211 .RS 16n
1145 1212 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1146 1213 specified multiple times.
1147 1214 .RE
1148 1215
1149 1216 .sp
1150 1217 .ne 2
1151 1218 .na
1152 1219 \fB\fB-D\fR\fR
1153 1220 .ad
1154 1221 .RS 16n
1155 1222 Lists destroyed pools only.
1156 1223 .RE
1157 1224
1158 1225 .RE
1159 1226
1160 1227 .sp
1161 1228 .ne 2
1162 1229 .na
1163 1230 \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR
1164 1231 \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1165 1232 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR\fR
1166 1233 .ad
1167 1234 .sp .6
1168 1235 .RS 4n
1169 1236 Imports all pools found in the search directories. Identical to the previous
1170 1237 command, except that all pools with a sufficient number of devices available
1171 1238 are imported. Destroyed pools, pools that were previously destroyed with the
1172 1239 "\fBzpool destroy\fR" command, will not be imported unless the \fB-D\fR option
1173 1240 is specified.
1174 1241 .sp
1175 1242 .ne 2
1176 1243 .na
1177 1244 \fB\fB-o\fR \fImntopts\fR\fR
1178 1245 .ad
1179 1246 .RS 21n
1180 1247 Comma-separated list of mount options to use when mounting datasets within the
1181 1248 pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
1182 1249 options.
1183 1250 .RE
1184 1251
1185 1252 .sp
1186 1253 .ne 2
1187 1254 .na
1188 1255 \fB\fB-o\fR \fIproperty=value\fR\fR
1189 1256 .ad
1190 1257 .RS 21n
1191 1258 Sets the specified property on the imported pool. See the "Properties" section
1192 1259 for more information on the available pool properties.
1193 1260 .RE
1194 1261
1195 1262 .sp
1196 1263 .ne 2
1197 1264 .na
1198 1265 \fB\fB-c\fR \fIcachefile\fR\fR
1199 1266 .ad
1200 1267 .RS 21n
1201 1268 Reads configuration from the given \fBcachefile\fR that was created with the
1202 1269 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1203 1270 searching for devices.
1204 1271 .RE
1205 1272
1206 1273 .sp
1207 1274 .ne 2
1208 1275 .na
1209 1276 \fB\fB-d\fR \fIdir\fR\fR
1210 1277 .ad
1211 1278 .RS 21n
1212 1279 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1213 1280 specified multiple times. This option is incompatible with the \fB-c\fR option.
1214 1281 .RE
1215 1282
1216 1283 .sp
1217 1284 .ne 2
1218 1285 .na
1219 1286 \fB\fB-D\fR\fR
1220 1287 .ad
1221 1288 .RS 21n
1222 1289 Imports destroyed pools only. The \fB-f\fR option is also required.
1223 1290 .RE
1224 1291
1225 1292 .sp
1226 1293 .ne 2
1227 1294 .na
1228 1295 \fB\fB-f\fR\fR
1229 1296 .ad
1230 1297 .RS 21n
1231 1298 Forces import, even if the pool appears to be potentially active.
1232 1299 .RE
1233 1300
1234 1301 .sp
1235 1302 .ne 2
1236 1303 .na
1237 1304 \fB\fB-a\fR\fR
1238 1305 .ad
1239 1306 .RS 21n
1240 1307 Searches for and imports all pools found.
1241 1308 .RE
1242 1309
1243 1310 .sp
1244 1311 .ne 2
1245 1312 .na
1246 1313 \fB\fB-R\fR \fIroot\fR\fR
1247 1314 .ad
1248 1315 .RS 21n
1249 1316 Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR"
1250 1317 property to "\fIroot\fR".
1251 1318 .RE
1252 1319
1253 1320 .RE
1254 1321
1255 1322 .sp
1256 1323 .ne 2
1257 1324 .na
1258 1325 \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR
1259 1326 \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1260 1327 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR | \fIid\fR
1261 1328 [\fInewpool\fR]\fR
1262 1329 .ad
1263 1330 .sp .6
1264 1331 .RS 4n
1265 1332 Imports a specific pool. A pool can be identified by its name or the numeric
1266 1333 identifier. If \fInewpool\fR is specified, the pool is imported using the name
1267 1334 \fInewpool\fR. Otherwise, it is imported with the same name as its exported
1268 1335 name.
1269 1336 .sp
1270 1337 If a device is removed from a system without running "\fBzpool export\fR"
1271 1338 first, the device appears as potentially active. It cannot be determined if
1272 1339 this was a failed export, or whether the device is really in use from another
1273 1340 host. To import a pool in this state, the \fB-f\fR option is required.
1274 1341 .sp
1275 1342 .ne 2
1276 1343 .na
1277 1344 \fB\fB-o\fR \fImntopts\fR\fR
1278 1345 .ad
1279 1346 .sp .6
1280 1347 .RS 4n
1281 1348 Comma-separated list of mount options to use when mounting datasets within the
1282 1349 pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
1283 1350 options.
1284 1351 .RE
1285 1352
1286 1353 .sp
1287 1354 .ne 2
1288 1355 .na
1289 1356 \fB\fB-o\fR \fIproperty=value\fR\fR
1290 1357 .ad
1291 1358 .sp .6
1292 1359 .RS 4n
1293 1360 Sets the specified property on the imported pool. See the "Properties" section
1294 1361 for more information on the available pool properties.
1295 1362 .RE
1296 1363
1297 1364 .sp
1298 1365 .ne 2
1299 1366 .na
1300 1367 \fB\fB-c\fR \fIcachefile\fR\fR
1301 1368 .ad
1302 1369 .sp .6
1303 1370 .RS 4n
1304 1371 Reads configuration from the given \fBcachefile\fR that was created with the
1305 1372 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1306 1373 searching for devices.
1307 1374 .RE
1308 1375
1309 1376 .sp
1310 1377 .ne 2
1311 1378 .na
1312 1379 \fB\fB-d\fR \fIdir\fR\fR
1313 1380 .ad
1314 1381 .sp .6
1315 1382 .RS 4n
1316 1383 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1317 1384 specified multiple times. This option is incompatible with the \fB-c\fR option.
1318 1385 .RE
1319 1386
1320 1387 .sp
1321 1388 .ne 2
1322 1389 .na
1323 1390 \fB\fB-D\fR\fR
1324 1391 .ad
1325 1392 .sp .6
1326 1393 .RS 4n
1327 1394 Imports destroyed pools only. The \fB-f\fR option is also required.
1328 1395 .RE
1329 1396
1330 1397 .sp
1331 1398 .ne 2
1332 1399 .na
1333 1400 \fB\fB-f\fR\fR
1334 1401 .ad
1335 1402 .sp .6
1336 1403 .RS 4n
1337 1404 Forces import, even if the pool appears to be potentially active.
1338 1405 .RE
1339 1406
1340 1407 .sp
1341 1408 .ne 2
1342 1409 .na
1343 1410 \fB\fB-R\fR \fIroot\fR\fR
1344 1411 .ad
1345 1412 .sp .6
1346 1413 .RS 4n
1347 1414 Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR"
1348 1415 property to "\fIroot\fR".
1349 1416 .RE
1350 1417
1351 1418 .RE
1352 1419
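For example, a pool could be imported under a new name when another pool with the original name already exists on the system (pool names are illustrative; a numeric identifier from the "\fBzpool import\fR" listing may be given instead of the old name):

```shell
# List importable pools, then import "tank" under the name "newtank".
zpool import
zpool import tank newtank
```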
1353 1420 .sp
1354 1421 .ne 2
1355 1422 .na
1356 1423 \fB\fBzpool iostat\fR [\fB-T\fR \fBu\fR | \fBd\fR] [\fB-v\fR] [\fIpool\fR] ...
1357 1424 [\fIinterval\fR[\fIcount\fR]]\fR
1358 1425 .ad
1359 1426 .sp .6
1360 1427 .RS 4n
1361 1428 Displays \fBI/O\fR statistics for the given pools. When given an interval, the
1362 1429 statistics are printed every \fIinterval\fR seconds until \fBCtrl-C\fR is
1363 1430 pressed. If no \fIpools\fR are specified, statistics for every pool in the
1364 1431 system are shown. If \fIcount\fR is specified, the command exits after
1365 1432 \fIcount\fR reports are printed.
1366 1433 .sp
1367 1434 .ne 2
1368 1435 .na
1369 1436 \fB\fB-T\fR \fBu\fR | \fBd\fR\fR
1370 1437 .ad
1371 1438 .RS 12n
1372 1439 Display a time stamp.
1373 1440 .sp
1374 1441 Specify \fBu\fR for a printed representation of the internal representation of
1375 1442 time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See
1376 1443 \fBdate\fR(1).
1377 1444 .RE
1378 1445
1379 1446 .sp
1380 1447 .ne 2
1381 1448 .na
1382 1449 \fB\fB-v\fR\fR
1383 1450 .ad
1384 1451 .RS 12n
1385 1452 Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within
1386 1453 the pool, in addition to the pool-wide statistics.
1387 1454 .RE
1388 1455
1389 1456 .RE
1390 1457
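A sketch combining the interval, count, and option flags described above (the pool name is illustrative):

```shell
# Print per-vdev I/O statistics for pool "tank" every 5 seconds,
# stopping after 10 reports, with a standard-date timestamp.
zpool iostat -T d -v tank 5 10
```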
1391 1458 .sp
1392 1459 .ne 2
1393 1460 .na
1394 1461 \fB\fBzpool list\fR [\fB-Hv\fR] [\fB-o\fR \fIprops\fR[,...]] [\fIpool\fR] ...\fR
1395 1462 .ad
1396 1463 .sp .6
1397 1464 .RS 4n
1398 1465 Lists the given pools along with a health status and space usage. When given no
1399 1466 arguments, all pools in the system are listed.
1400 1467 .sp
1401 1468 .ne 2
1402 1469 .na
1403 1470 \fB\fB-H\fR\fR
1404 1471 .ad
1405 1472 .RS 12n
1406 1473 Scripted mode. Do not display headers, and separate fields by a single tab
1407 1474 instead of arbitrary space.
1408 1475 .RE
1409 1476
1410 1477 .sp
1411 1478 .ne 2
1412 1479 .na
1413 1480 \fB\fB-o\fR \fIprops\fR\fR
1414 1481 .ad
1415 1482 .RS 12n
1416 1483 Comma-separated list of properties to display. See the "Properties" section for
1417 1484 a list of valid properties. The default list is "name, size, allocated, free,
1418 1485 expandsize, capacity, dedupratio, health, altroot".
1419 1486 .RE
1420 1487
1421 1488 .sp
1422 1489 .ne 2
1423 1490 .na
1424 1491 \fB\fB-v\fR\fR
1425 1492 .ad
1426 1493 .RS 12n
1427 1494 Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within
1428 1495 the pool, in addition to the pool-wide statistics.
1429 1496 .RE
1430 1497
1431 1498 .RE
1432 1499
1433 1500 .sp
1434 1501 .ne 2
1435 1502 .na
1436 1503 \fB\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...\fR
1437 1504 .ad
1438 1505 .sp .6
1439 1506 .RS 4n
1440 1507 Takes the specified physical device offline. While the \fIdevice\fR is offline,
1441 1508 no attempt is made to read or write to the device.
1442 1509 .sp
1443 1510 This command is not applicable to spares or cache devices.
1444 1511 .sp
1445 1512 .ne 2
1446 1513 .na
1447 1514 \fB\fB-t\fR\fR
1448 1515 .ad
1449 1516 .RS 6n
1450 1517 Temporary. Upon reboot, the specified physical device reverts to its previous
1451 1518 state.
1452 1519 .RE
1453 1520
1454 1521 .RE
1455 1522
1456 1523 .sp
1457 1524 .ne 2
1458 1525 .na
1459 1526 \fB\fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR...\fR
1460 1527 .ad
1461 1528 .sp .6
1462 1529 .RS 4n
1463 1530 Brings the specified physical device online.
1464 1531 .sp
1465 1532 This command is not applicable to spares or cache devices.
1466 1533 .sp
1467 1534 .ne 2
1468 1535 .na
1469 1536 \fB\fB-e\fR\fR
1470 1537 .ad
1471 1538 .RS 6n
1472 1539 Expand the device to use all available space. If the device is part of a mirror
1473 1540 or \fBraidz\fR then all devices must be expanded before the new space will
1474 1541 become available to the pool.
1475 1542 .RE
1476 1543
1477 1544 .RE
1478 1545
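A brief sketch of the offline/online cycle described above (pool and device names are illustrative):

```shell
# Take a device offline temporarily; it reverts to its previous
# state on reboot.
zpool offline -t tank c0t2d0

# Bring the device back online.
zpool online tank c0t2d0

# After all devices in a mirror or raidz have been replaced with
# larger ones, expand each to make the new space available.
zpool online -e tank c0t2d0
```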
1479 1546 .sp
1480 1547 .ne 2
1481 1548 .na
1482 1549 \fB\fBzpool reguid\fR \fIpool\fR\fR
1483 1550 .ad
1484 1551 .sp .6
1485 1552 .RS 4n
1486 1553 Generates a new unique identifier for the pool. You must ensure that all
1487 1554 devices in this pool are online and healthy before performing this action.
1488 1555 .RE
1489 1556
1490 1557 .sp
1491 1558 .ne 2
1492 1559 .na
1493 1560 \fB\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...\fR
1494 1561 .ad
1495 1562 .sp .6
1496 1563 .RS 4n
1497 1564 Removes the specified device from the pool. This command currently only
1498 1565 supports removing hot spares, cache, and log devices. A mirrored log device can
1499 1566 be removed by specifying the top-level mirror for the log. Non-log devices that
1500 1567 are part of a mirrored configuration can be removed using the \fBzpool
1501 1568 detach\fR command. Non-redundant and \fBraidz\fR devices cannot be removed from
1502 1569 a pool.
1503 1570 .RE
1504 1571
1505 1572 .sp
1506 1573 .ne 2
1507 1574 .na
1508 1575 \fB\fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIold_device\fR
1509 1576 [\fInew_device\fR]\fR
1510 1577 .ad
1511 1578 .sp .6
1512 1579 .RS 4n
1513 1580 Replaces \fIold_device\fR with \fInew_device\fR. This is equivalent to
1514 1581 attaching \fInew_device\fR, waiting for it to resilver, and then detaching
1515 1582 \fIold_device\fR.
1516 1583 .sp
1517 1584 The size of \fInew_device\fR must be greater than or equal to the minimum size
1518 1585 of all the devices in a mirror or \fBraidz\fR configuration.
1519 1586 .sp
1520 1587 \fInew_device\fR is required if the pool is not redundant. If \fInew_device\fR
1521 1588 is not specified, it defaults to \fIold_device\fR. This form of replacement is
1522 1589 useful after an existing disk has failed and has been physically replaced. In
1523 1590 this case, the new disk may have the same \fB/dev/dsk\fR path as the old
1524 1591 device, even though it is actually a different disk. \fBZFS\fR recognizes this.
1525 1592 .sp
1526 1593 .ne 2
1527 1594 .na
1528 1595 \fB\fB-f\fR\fR
1529 1596 .ad
1530 1597 .RS 6n
1531 1598 Forces use of \fInew_device\fR, even if it appears to be in use. Not all
1532 1599 devices can be overridden in this manner.
1533 1600 .RE
1534 1601
1535 1602 .RE
1536 1603
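The two forms of replacement can be sketched as follows (device names are illustrative):

```shell
# One-argument form: a failed disk was physically swapped in the
# same slot, so the new disk has the same /dev/dsk path.
zpool replace tank c0t1d0

# Two-argument form: replace with a different disk.
zpool replace tank c0t1d0 c2t0d0
```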
1537 1604 .sp
1538 1605 .ne 2
1539 1606 .na
1540 1607 \fB\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...\fR
1541 1608 .ad
1542 1609 .sp .6
1543 1610 .RS 4n
1544 1611 Begins a scrub. The scrub examines all data in the specified pools to verify
1545 1612 that it checksums correctly. For replicated (mirror or \fBraidz\fR) devices,
1546 1613 \fBZFS\fR automatically repairs any damage discovered during the scrub. The
1547 1614 "\fBzpool status\fR" command reports the progress of the scrub and summarizes
1548 1615 the results of the scrub upon completion.
1549 1616 .sp
1550 1617 Scrubbing and resilvering are very similar operations. The difference is that
1551 1618 resilvering only examines data that \fBZFS\fR knows to be out of date (for
1552 1619 example, when attaching a new device to a mirror or replacing an existing
1553 1620 device), whereas scrubbing examines all data to discover silent errors due to
1554 1621 hardware faults or disk failure.
1555 1622 .sp
1556 1623 Because scrubbing and resilvering are \fBI/O\fR-intensive operations, \fBZFS\fR
1557 1624 only allows one at a time. If a scrub is already in progress, the "\fBzpool
1558 1625 scrub\fR" command terminates it and starts a new scrub. If a resilver is in
1559 1626 progress, \fBZFS\fR does not allow a scrub to be started until the resilver
1560 1627 completes.
1561 1628 .sp
1562 1629 .ne 2
1563 1630 .na
1564 1631 \fB\fB-s\fR\fR
1565 1632 .ad
1566 1633 .RS 6n
1567 1634 Stop scrubbing.
1568 1635 .RE
1569 1636
1570 1637 .RE
1571 1638
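A minimal sketch of starting, monitoring, and stopping a scrub (the pool name is illustrative):

```shell
# Begin a scrub of pool "tank".
zpool scrub tank

# Report scrub progress and results.
zpool status tank

# Cancel an in-progress scrub.
zpool scrub -s tank
```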
1572 1639 .sp
1573 1640 .ne 2
1574 1641 .na
1575 1642 \fB\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR\fR
1576 1643 .ad
1577 1644 .sp .6
1578 1645 .RS 4n
1579 1646 Sets the given property on the specified pool. See the "Properties" section for
1580 1647 more information on what properties can be set and acceptable values.
1581 1648 .RE
1582 1649
1583 1650 .sp
1584 1651 .ne 2
1585 1652 .na
1586 1653 \fB\fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...\fR
1587 1654 .ad
1588 1655 .sp .6
1589 1656 .RS 4n
1590 1657 Displays the detailed health status for the given pools. If no \fIpool\fR is
1591 1658 specified, then the status of each pool in the system is displayed. For more
1592 1659 information on pool and device health, see the "Device Failure and Recovery"
1593 1660 section.
1594 1661 .sp
1595 1662 If a scrub or resilver is in progress, this command reports the percentage done
1596 1663 and the estimated time to completion. Both of these are only approximate,
1597 1664 because the amount of data in the pool and the other workloads on the system
1598 1665 can change.
1599 1666 .sp
1600 1667 .ne 2
1601 1668 .na
1602 1669 \fB\fB-x\fR\fR
1603 1670 .ad
1604 1671 .RS 6n
1605 1672 Only display status for pools that are exhibiting errors or are otherwise
1606 1673 unavailable.
1607 1674 .RE
1608 1675
1609 1676 .sp
1610 1677 .ne 2
1611 1678 .na
1612 1679 \fB\fB-v\fR\fR
1613 1680 .ad
1614 1681 .RS 6n
1615 1682 Displays verbose data error information, printing out a complete list of all
1616 1683 data errors since the last complete pool scrub.
1617 1684 .RE
1618 1685
1619 1686 .RE
1620 1687
1621 1688 .sp
1622 1689 .ne 2
1623 1690 .na
1624 1691 \fB\fBzpool upgrade\fR\fR
1625 1692 .ad
1626 1693 .sp .6
1627 1694 .RS 4n
1628 1695 Displays all pools formatted using a different \fBZFS\fR on-disk version. Older
1629 1696 versions can continue to be used, but some features may not be available. These
1630 1697 pools can be upgraded using "\fBzpool upgrade -a\fR". Pools that are formatted
1631 1698 with a more recent version are also displayed, although these pools will be
1632 1699 inaccessible on the system.
1633 1700 .RE
1634 1701
1635 1702 .sp
1636 1703 .ne 2
1637 1704 .na
1638 1705 \fB\fBzpool upgrade\fR \fB-v\fR\fR
1639 1706 .ad
1640 1707 .sp .6
1641 1708 .RS 4n
1642 1709 Displays \fBZFS\fR versions supported by the current software. The current
1643 1710 \fBZFS\fR versions and all previous supported versions are displayed, along
1644 1711 with an explanation of the features provided with each version.
1645 1712 .RE
1646 1713
1647 1714 .sp
1648 1715 .ne 2
1649 1716 .na
1650 1717 \fB\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...\fR
1651 1718 .ad
1652 1719 .sp .6
1653 1720 .RS 4n
1654 1721 Upgrades the given pool to the latest on-disk version. Once this is done, the
1655 1722 pool will no longer be accessible on systems running older versions of the
1656 1723 software.
1657 1724 .sp
1658 1725 .ne 2
1659 1726 .na
1660 1727 \fB\fB-a\fR\fR
1661 1728 .ad
1662 1729 .RS 14n
1663 1730 Upgrades all pools.
1664 1731 .RE
1665 1732
1666 1733 .sp
1667 1734 .ne 2
1668 1735 .na
1669 1736 \fB\fB-V\fR \fIversion\fR\fR
1670 1737 .ad
1671 1738 .RS 14n
1672 1739 Upgrade to the specified version. If the \fB-V\fR flag is not specified, the
1673 1740 pool is upgraded to the most recent version. This option can only be used to
1674 1741 increase the version number, and only up to the most recent version supported
1675 1742 by this software.
1676 1743 .RE
1677 1744
1678 1745 .RE
1679 1746
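An illustrative upgrade workflow (the pool name and version number below are placeholders):

```shell
# List pools formatted with an older on-disk version.
zpool upgrade

# Upgrade one pool, but only to a specific version.
zpool upgrade -V 28 tank

# Upgrade every pool on the system to the latest supported version.
zpool upgrade -a
```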
1680 1747 .SH EXAMPLES
1681 1748 .LP
1682 1749 \fBExample 1 \fRCreating a RAID-Z Storage Pool
1683 1750 .sp
1684 1751 .LP
1685 1752 The following command creates a pool with a single \fBraidz\fR root \fIvdev\fR
1686 1753 that consists of six disks.
1687 1754
1688 1755 .sp
1689 1756 .in +2
1690 1757 .nf
1691 1758 # \fBzpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0\fR
1692 1759 .fi
1693 1760 .in -2
1694 1761 .sp
1695 1762
1696 1763 .LP
1697 1764 \fBExample 2 \fRCreating a Mirrored Storage Pool
1698 1765 .sp
1699 1766 .LP
1700 1767 The following command creates a pool with two mirrors, where each mirror
1701 1768 contains two disks.
1702 1769
1703 1770 .sp
1704 1771 .in +2
1705 1772 .nf
1706 1773 # \fBzpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0\fR
1707 1774 .fi
1708 1775 .in -2
1709 1776 .sp
1710 1777
1711 1778 .LP
1712 1779 \fBExample 3 \fRCreating a ZFS Storage Pool by Using Slices
1713 1780 .sp
1714 1781 .LP
1715 1782 The following command creates an unmirrored pool using two disk slices.
1716 1783
1717 1784 .sp
1718 1785 .in +2
1719 1786 .nf
1720 1787 # \fBzpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4\fR
1721 1788 .fi
1722 1789 .in -2
1723 1790 .sp
1724 1791
1725 1792 .LP
1726 1793 \fBExample 4 \fRCreating a ZFS Storage Pool by Using Files
1727 1794 .sp
1728 1795 .LP
1729 1796 The following command creates an unmirrored pool using files. While not
1730 1797 recommended, a pool based on files can be useful for experimental purposes.
1731 1798
1732 1799 .sp
1733 1800 .in +2
1734 1801 .nf
1735 1802 # \fBzpool create tank /path/to/file/a /path/to/file/b\fR
1736 1803 .fi
1737 1804 .in -2
1738 1805 .sp
1739 1806
1740 1807 .LP
1741 1808 \fBExample 5 \fRAdding a Mirror to a ZFS Storage Pool
1742 1809 .sp
1743 1810 .LP
1744 1811 The following command adds two mirrored disks to the pool "\fItank\fR",
1745 1812 assuming the pool is already made up of two-way mirrors. The additional space
1746 1813 is immediately available to any datasets within the pool.
1747 1814
1748 1815 .sp
1749 1816 .in +2
1750 1817 .nf
1751 1818 # \fBzpool add tank mirror c1t0d0 c1t1d0\fR
1752 1819 .fi
1753 1820 .in -2
1754 1821 .sp
1755 1822
1756 1823 .LP
1757 1824 \fBExample 6 \fRListing Available ZFS Storage Pools
1758 1825 .sp
1759 1826 .LP
1760 1827 The following command lists all available pools on the system. In this case,
1761 1828 the pool \fIzion\fR is faulted due to a missing device.
1762 1829
1763 1830 .sp
1764 1831 .LP
1765 1832 The results from this command are similar to the following:
1766 1833
1767 1834 .sp
1768 1835 .in +2
1769 1836 .nf
1770 1837 # \fBzpool list\fR
1771 1838 NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
1772 1839 rpool 19.9G 8.43G 11.4G - 42% 1.00x ONLINE -
1773 1840 tank 61.5G 20.0G 41.5G - 32% 1.00x ONLINE -
1774 1841 zion - - - - - - FAULTED -
1775 1842 .fi
1776 1843 .in -2
1777 1844 .sp
1778 1845
1779 1846 .LP
1780 1847 \fBExample 7 \fRDestroying a ZFS Storage Pool
1781 1848 .sp
1782 1849 .LP
1783 1850 The following command destroys the pool "\fItank\fR" and any datasets contained
1784 1851 within.
1785 1852
1786 1853 .sp
1787 1854 .in +2
1788 1855 .nf
1789 1856 # \fBzpool destroy -f tank\fR
1790 1857 .fi
1791 1858 .in -2
1792 1859 .sp
1793 1860
1794 1861 .LP
1795 1862 \fBExample 8 \fRExporting a ZFS Storage Pool
1796 1863 .sp
1797 1864 .LP
1798 1865 The following command exports the devices in pool \fItank\fR so that they can
1799 1866 be relocated or later imported.
1800 1867
1801 1868 .sp
1802 1869 .in +2
1803 1870 .nf
1804 1871 # \fBzpool export tank\fR
1805 1872 .fi
1806 1873 .in -2
1807 1874 .sp
1808 1875
1809 1876 .LP
1810 1877 \fBExample 9 \fRImporting a ZFS Storage Pool
1811 1878 .sp
1812 1879 .LP
1813 1880 The following command displays available pools, and then imports the pool
1814 1881 "tank" for use on the system.
1815 1882
1816 1883 .sp
1817 1884 .LP
1818 1885 The results from this command are similar to the following:
1819 1886
1820 1887 .sp
1821 1888 .in +2
1822 1889 .nf
1823 1890 # \fBzpool import\fR
1824 1891 pool: tank
1825 1892 id: 15451357997522795478
1826 1893 state: ONLINE
1827 1894 action: The pool can be imported using its name or numeric identifier.
1828 1895 config:
1829 1896
1830 1897 tank ONLINE
1831 1898 mirror ONLINE
1832 1899 c1t2d0 ONLINE
1833 1900 c1t3d0 ONLINE
1834 1901
1835 1902 # \fBzpool import tank\fR
1836 1903 .fi
1837 1904 .in -2
1838 1905 .sp
1839 1906
1840 1907 .LP
1841 1908 \fBExample 10 \fRUpgrading All ZFS Storage Pools to the Current Version
1842 1909 .sp
1843 1910 .LP
1844 1911 The following command upgrades all ZFS Storage pools to the current version of
1845 1912 the software.
1846 1913
1847 1914 .sp
1848 1915 .in +2
1849 1916 .nf
1850 1917 # \fBzpool upgrade -a\fR
1851 1918 This system is currently running ZFS version 2.
1852 1919 .fi
1853 1920 .in -2
1854 1921 .sp
1855 1922
1856 1923 .LP
1857 1924 \fBExample 11 \fRManaging Hot Spares
1858 1925 .sp
1859 1926 .LP
1860 1927 The following command creates a new pool with an available hot spare:
1861 1928
1862 1929 .sp
1863 1930 .in +2
1864 1931 .nf
1865 1932 # \fBzpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0\fR
1866 1933 .fi
1867 1934 .in -2
1868 1935 .sp
1869 1936
1870 1937 .sp
1871 1938 .LP
1872 1939 If one of the disks were to fail, the pool would be reduced to the degraded
1873 1940 state. The failed device can be replaced using the following command:
1874 1941
1875 1942 .sp
1876 1943 .in +2
1877 1944 .nf
1878 1945 # \fBzpool replace tank c0t0d0 c0t3d0\fR
1879 1946 .fi
1880 1947 .in -2
1881 1948 .sp
1882 1949
1883 1950 .sp
1884 1951 .LP
1885 1952 Once the data has been resilvered, the spare is automatically removed and is
1886 1953 made available should another device fails. The hot spare can be permanently
1887 1954 removed from the pool using the following command:
1888 1955
1889 1956 .sp
1890 1957 .in +2
1891 1958 .nf
1892 1959 # \fBzpool remove tank c0t2d0\fR
1893 1960 .fi
1894 1961 .in -2
1895 1962 .sp
1896 1963
1897 1964 .LP
1898 1965 \fBExample 12 \fRCreating a ZFS Pool with Mirrored Separate Intent Logs
1899 1966 .sp
1900 1967 .LP
1901 1968 The following command creates a ZFS storage pool consisting of two, two-way
1902 1969 mirrors and mirrored log devices:
1903 1970
1904 1971 .sp
1905 1972 .in +2
1906 1973 .nf
1907 1974 # \fBzpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
1908 1975 c4d0 c5d0\fR
1909 1976 .fi
1910 1977 .in -2
1911 1978 .sp
1912 1979
1913 1980 .LP
1914 1981 \fBExample 13 \fRAdding Cache Devices to a ZFS Pool
1915 1982 .sp
1916 1983 .LP
1917 1984 The following command adds two disks for use as cache devices to a ZFS storage
1918 1985 pool:
1919 1986
1920 1987 .sp
1921 1988 .in +2
1922 1989 .nf
1923 1990 # \fBzpool add pool cache c2d0 c3d0\fR
1924 1991 .fi
1925 1992 .in -2
1926 1993 .sp
1927 1994
1928 1995 .sp
1929 1996 .LP
1930 1997 Once added, the cache devices gradually fill with content from main memory.
1931 1998 Depending on the size of your cache devices, it could take over an hour for
1932 1999 them to fill. Capacity and reads can be monitored using the \fBiostat\fR option
1933 2000 as follows:
1934 2001
1935 2002 .sp
1936 2003 .in +2
1937 2004 .nf
1938 2005 # \fBzpool iostat -v pool 5\fR
1939 2006 .fi
1940 2007 .in -2
1941 2008 .sp
1942 2009
1943 2010 .LP
1944 2011 \fBExample 14 \fRRemoving a Mirrored Log Device
1945 2012 .sp
1946 2013 .LP
1947 2014 The following command removes the mirrored log device \fBmirror-2\fR.
1948 2015
1949 2016 .sp
1950 2017 .LP
1951 2018 Given this configuration:
1952 2019
1953 2020 .sp
1954 2021 .in +2
1955 2022 .nf
1956 2023 pool: tank
1957 2024 state: ONLINE
1958 2025 scrub: none requested
1959 2026 config:
1960 2027
1961 2028 NAME STATE READ WRITE CKSUM
1962 2029 tank ONLINE 0 0 0
1963 2030 mirror-0 ONLINE 0 0 0
1964 2031 c6t0d0 ONLINE 0 0 0
1965 2032 c6t1d0 ONLINE 0 0 0
1966 2033 mirror-1 ONLINE 0 0 0
1967 2034 c6t2d0 ONLINE 0 0 0
1968 2035 c6t3d0 ONLINE 0 0 0
1969 2036 logs
1970 2037 mirror-2 ONLINE 0 0 0
1971 2038 c4t0d0 ONLINE 0 0 0
1972 2039 c4t1d0 ONLINE 0 0 0
1973 2040 .fi
1974 2041 .in -2
1975 2042 .sp
1976 2043
1977 2044 .sp
1978 2045 .LP
1979 2046 The command to remove the mirrored log \fBmirror-2\fR is:
1980 2047
1981 2048 .sp
1982 2049 .in +2
1983 2050 .nf
1984 2051 # \fBzpool remove tank mirror-2\fR
1985 2052 .fi
1986 2053 .in -2
1987 2054 .sp
1988 2055
1989 2056 .LP
1990 2057 \fBExample 15 \fRDisplaying expanded space on a device
1991 2058 .sp
1992 2059 .LP
1993 2060 The following command dipslays the detailed information for the \fIdata\fR
1994 2061 pool. This pool is comprised of a single \fIraidz\fR vdev where one of its
1995 2062 devices increased its capacity by 1GB. In this example, the pool will not
1996 2063 be able to utilized this extra capacity until all the devices under the
1997 2064 \fIraidz\fR vdev have been expanded.
1998 2065
1999 2066 .sp
2000 2067 .in +2
2001 2068 .nf
2002 2069 # \fBzpool list -v data\fR
2003 2070 NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
2004 2071 data 17.9G 174K 17.9G - 0% 1.00x ONLINE -
2005 2072 raidz1 17.9G 174K 17.9G -
2006 2073 c4t2d0 - - - 1G
2007 2074 c4t3d0 - - - -
2008 2075 c4t4d0 - - - -
2009 2076 .fi
2010 2077 .in -2
2011 2078
2012 2079 .SH EXIT STATUS
2013 2080 .sp
2014 2081 .LP
2015 2082 The following exit values are returned:
2016 2083 .sp
2017 2084 .ne 2
2018 2085 .na
2019 2086 \fB\fB0\fR\fR
2020 2087 .ad
2021 2088 .RS 5n
2022 2089 Successful completion.
2023 2090 .RE
2024 2091
2025 2092 .sp
2026 2093 .ne 2
2027 2094 .na
2028 2095 \fB\fB1\fR\fR
2029 2096 .ad
2030 2097 .RS 5n
2031 2098 An error occurred.
2032 2099 .RE
2033 2100
2034 2101 .sp
2035 2102 .ne 2
2036 2103 .na
2037 2104 \fB\fB2\fR\fR
2038 2105 .ad
2039 2106 .RS 5n
2040 2107 Invalid command line options were specified.
2041 2108 .RE
2042 2109
2043 2110 .SH ATTRIBUTES
2044 2111 .sp
2045 2112 .LP
2046 2113 See \fBattributes\fR(5) for descriptions of the following attributes:
2047 2114 .sp
2048 2115
2049 2116 .sp
2050 2117 .TS
2051 2118 box;
|
↓ open down ↓ |
1103 lines elided |
↑ open up ↑ |
2052 2119 c | c
2053 2120 l | l .
2054 2121 ATTRIBUTE TYPE ATTRIBUTE VALUE
2055 2122 _
2056 2123 Interface Stability Evolving
2057 2124 .TE
2058 2125
2059 2126 .SH SEE ALSO
2060 2127 .sp
2061 2128 .LP
2062 -\fBzfs\fR(1M), \fBattributes\fR(5)
2129 +\fBzfs\fR(1M), \fBzpool-features\fR(5), \fBattributes\fR(5)
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX