1693 persistent 'comment' field for a zpool
--- old/usr/src/man/man1m/zpool.1m
+++ new/usr/src/man/man1m/zpool.1m
1 1 '\" te
2 2 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
3 +.\" Copyright 2011, Nexenta Systems, Inc. All Rights Reserved.
3 4 .\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
4 5 .\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the
5 6 .\" fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]
6 -.\" Copyright 2011 Nexenta Systems, Inc. All rights reserved.
7 -.TH ZPOOL 1M "Oct 25, 2011"
7 +.TH ZPOOL 1M "Nov 14, 2011"
8 8 .SH NAME
9 9 zpool \- configures ZFS storage pools
10 10 .SH SYNOPSIS
11 11 .LP
12 12 .nf
13 13 \fBzpool\fR [\fB-?\fR]
14 14 .fi
15 15
16 16 .LP
17 17 .nf
18 18 \fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...
19 19 .fi
20 20
21 21 .LP
22 22 .nf
23 23 \fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR
24 24 .fi
25 25
26 26 .LP
27 27 .nf
28 28 \fBzpool clear\fR \fIpool\fR [\fIdevice\fR]
29 29 .fi
30 30
31 31 .LP
32 32 .nf
33 33 \fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR]
34 34 ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR \fIvdev\fR ...
35 35 .fi
36 36
37 37 .LP
38 38 .nf
39 39 \fBzpool destroy\fR [\fB-f\fR] \fIpool\fR
40 40 .fi
41 41
42 42 .LP
43 43 .nf
44 44 \fBzpool detach\fR \fIpool\fR \fIdevice\fR
45 45 .fi
46 46
47 47 .LP
48 48 .nf
49 49 \fBzpool export\fR [\fB-f\fR] \fIpool\fR ...
50 50 .fi
51 51
52 52 .LP
53 53 .nf
54 54 \fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...
55 55 .fi
56 56
57 57 .LP
58 58 .nf
59 59 \fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...
60 60 .fi
61 61
62 62 .LP
63 63 .nf
64 64 \fBzpool import\fR [\fB-d\fR \fIdir\fR] [\fB-D\fR]
65 65 .fi
66 66
67 67 .LP
68 68 .nf
69 69 \fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
70 70 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR
71 71 .fi
72 72
73 73 .LP
74 74 .nf
75 75 \fBzpool import\fR [\fB-o \fImntopts\fR\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
76 76 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR |\fIid\fR [\fInewpool\fR]
77 77 .fi
78 78
79 79 .LP
80 80 .nf
81 81 \fBzpool iostat\fR [\fB-T\fR u | d ] [\fB-v\fR] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]]
82 82 .fi
83 83
84 84 .LP
85 85 .nf
86 86 \fBzpool list\fR [\fB-H\fR] [\fB-o\fR \fIproperty\fR[,...]] [\fIpool\fR] ...
87 87 .fi
88 88
89 89 .LP
90 90 .nf
91 91 \fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...
92 92 .fi
93 93
94 94 .LP
95 95 .nf
96 96 \fBzpool online\fR \fIpool\fR \fIdevice\fR ...
97 97 .fi
98 98
99 99 .LP
100 100 .nf
101 101 \fBzpool reguid\fR \fIpool\fR
102 102 .fi
103 103
104 104 .LP
105 105 .nf
106 106 \fBzpool remove\fR \fIpool\fR \fIdevice\fR ...
107 107 .fi
108 108
109 109 .LP
110 110 .nf
111 111 \fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR [\fInew_device\fR]
112 112 .fi
113 113
114 114 .LP
115 115 .nf
116 116 \fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...
117 117 .fi
118 118
119 119 .LP
120 120 .nf
121 121 \fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR
122 122 .fi
123 123
124 124 .LP
125 125 .nf
126 126 \fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...
127 127 .fi
128 128
129 129 .LP
130 130 .nf
131 131 \fBzpool upgrade\fR
132 132 .fi
133 133
134 134 .LP
135 135 .nf
136 136 \fBzpool upgrade\fR \fB-v\fR
137 137 .fi
138 138
139 139 .LP
140 140 .nf
141 141 \fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...
142 142 .fi
143 143
144 144 .SH DESCRIPTION
145 145 .sp
146 146 .LP
147 147 The \fBzpool\fR command configures \fBZFS\fR storage pools. A storage pool is a
148 148 collection of devices that provides physical storage and data replication for
149 149 \fBZFS\fR datasets.
150 150 .sp
151 151 .LP
152 152 All datasets within a storage pool share the same space. See \fBzfs\fR(1M) for
153 153 information on managing datasets.
154 154 .SS "Virtual Devices (\fBvdev\fRs)"
155 155 .sp
156 156 .LP
157 157 A "virtual device" describes a single device or a collection of devices
158 158 organized according to certain performance and fault characteristics. The
159 159 following virtual devices are supported:
160 160 .sp
161 161 .ne 2
162 162 .na
163 163 \fB\fBdisk\fR\fR
164 164 .ad
165 165 .RS 10n
166 166 A block device, typically located under \fB/dev/dsk\fR. \fBZFS\fR can use
167 167 individual slices or partitions, though the recommended mode of operation is to
168 168 use whole disks. A disk can be specified by a full path, or it can be a
169 169 shorthand name (the relative portion of the path under "/dev/dsk"). A whole
170 170 disk can be specified by omitting the slice or partition designation. For
171 171 example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole
172 172 disk, \fBZFS\fR automatically labels the disk, if necessary.
173 173 .RE
174 174
175 175 .sp
176 176 .ne 2
177 177 .na
178 178 \fB\fBfile\fR\fR
179 179 .ad
180 180 .RS 10n
181 181 A regular file. The use of files as a backing store is strongly discouraged. It
182 182 is designed primarily for experimental purposes, as the fault tolerance of a
183 183 file is only as good as the file system of which it is a part. A file must be
184 184 specified by a full path.
185 185 .RE
186 186
187 187 .sp
188 188 .ne 2
189 189 .na
190 190 \fB\fBmirror\fR\fR
191 191 .ad
192 192 .RS 10n
193 193 A mirror of two or more devices. Data is replicated in an identical fashion
194 194 across all components of a mirror. A mirror with \fIN\fR disks of size \fIX\fR
195 195 can hold \fIX\fR bytes and can withstand (\fIN-1\fR) devices failing before
196 196 data integrity is compromised.
197 197 .RE
198 198
199 199 .sp
200 200 .ne 2
201 201 .na
202 202 \fB\fBraidz\fR\fR
203 203 .ad
204 204 .br
205 205 .na
206 206 \fB\fBraidz1\fR\fR
207 207 .ad
208 208 .br
209 209 .na
210 210 \fB\fBraidz2\fR\fR
211 211 .ad
212 212 .br
213 213 .na
214 214 \fB\fBraidz3\fR\fR
215 215 .ad
216 216 .RS 10n
217 217 A variation on \fBRAID-5\fR that allows for better distribution of parity and
218 218 eliminates the "\fBRAID-5\fR write hole" (in which data and parity become
219 219 inconsistent after a power loss). Data and parity is striped across all disks
220 220 within a \fBraidz\fR group.
221 221 .sp
     222  222 A \fBraidz\fR group can have single-, double-, or triple-parity, meaning that
223 223 the \fBraidz\fR group can sustain one, two, or three failures, respectively,
224 224 without losing any data. The \fBraidz1\fR \fBvdev\fR type specifies a
225 225 single-parity \fBraidz\fR group; the \fBraidz2\fR \fBvdev\fR type specifies a
226 226 double-parity \fBraidz\fR group; and the \fBraidz3\fR \fBvdev\fR type specifies
227 227 a triple-parity \fBraidz\fR group. The \fBraidz\fR \fBvdev\fR type is an alias
228 228 for \fBraidz1\fR.
229 229 .sp
230 230 A \fBraidz\fR group with \fIN\fR disks of size \fIX\fR with \fIP\fR parity
231 231 disks can hold approximately (\fIN-P\fR)*\fIX\fR bytes and can withstand
232 232 \fIP\fR device(s) failing before data integrity is compromised. The minimum
233 233 number of devices in a \fBraidz\fR group is one more than the number of parity
234 234 disks. The recommended number is between 3 and 9 to help increase performance.
235 235 .RE
236 236
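The capacity arithmetic above can be illustrated with a minimal sketch. The device names below are placeholders, not part of the change under review:

```shell
# Double-parity group: 6 disks, 2 of them parity, so for disks of
# size X the usable capacity is approximately (6 - 2) * X, and the
# group survives any 2 devices failing.
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
```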
237 237 .sp
238 238 .ne 2
239 239 .na
240 240 \fB\fBspare\fR\fR
241 241 .ad
242 242 .RS 10n
243 243 A special pseudo-\fBvdev\fR which keeps track of available hot spares for a
244 244 pool. For more information, see the "Hot Spares" section.
245 245 .RE
246 246
247 247 .sp
248 248 .ne 2
249 249 .na
250 250 \fB\fBlog\fR\fR
251 251 .ad
252 252 .RS 10n
253 253 A separate-intent log device. If more than one log device is specified, then
254 254 writes are load-balanced between devices. Log devices can be mirrored. However,
255 255 \fBraidz\fR \fBvdev\fR types are not supported for the intent log. For more
256 256 information, see the "Intent Log" section.
257 257 .RE
258 258
259 259 .sp
260 260 .ne 2
261 261 .na
262 262 \fB\fBcache\fR\fR
263 263 .ad
264 264 .RS 10n
     265  265 A device used to cache storage pool data. A cache device cannot be
     266  266 configured as a mirror or \fBraidz\fR group. For more information, see the
267 267 "Cache Devices" section.
268 268 .RE
269 269
270 270 .sp
271 271 .LP
272 272 Virtual devices cannot be nested, so a mirror or \fBraidz\fR virtual device can
273 273 only contain files or disks. Mirrors of mirrors (or other combinations) are not
274 274 allowed.
275 275 .sp
276 276 .LP
277 277 A pool can have any number of virtual devices at the top of the configuration
278 278 (known as "root vdevs"). Data is dynamically distributed across all top-level
279 279 devices to balance data among devices. As new virtual devices are added,
280 280 \fBZFS\fR automatically places data on the newly available devices.
281 281 .sp
282 282 .LP
283 283 Virtual devices are specified one at a time on the command line, separated by
284 284 whitespace. The keywords "mirror" and "raidz" are used to distinguish where a
285 285 group ends and another begins. For example, the following creates two root
286 286 vdevs, each a mirror of two disks:
287 287 .sp
288 288 .in +2
289 289 .nf
290 290 # \fBzpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0\fR
291 291 .fi
292 292 .in -2
293 293 .sp
294 294
295 295 .SS "Device Failure and Recovery"
296 296 .sp
297 297 .LP
298 298 \fBZFS\fR supports a rich set of mechanisms for handling device failure and
299 299 data corruption. All metadata and data is checksummed, and \fBZFS\fR
300 300 automatically repairs bad data from a good copy when corruption is detected.
301 301 .sp
302 302 .LP
303 303 In order to take advantage of these features, a pool must make use of some form
304 304 of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR
305 305 supports running in a non-redundant configuration, where each root vdev is
306 306 simply a disk or file, this is strongly discouraged. A single case of bit
307 307 corruption can render some or all of your data unavailable.
308 308 .sp
309 309 .LP
310 310 A pool's health status is described by one of three states: online, degraded,
311 311 or faulted. An online pool has all devices operating normally. A degraded pool
312 312 is one in which one or more devices have failed, but the data is still
313 313 available due to a redundant configuration. A faulted pool has corrupted
314 314 metadata, or one or more faulted devices, and insufficient replicas to continue
315 315 functioning.
316 316 .sp
317 317 .LP
     318  318 The health of the top-level vdev, such as a mirror or \fBraidz\fR device, is
319 319 potentially impacted by the state of its associated vdevs, or component
320 320 devices. A top-level vdev or component device is in one of the following
321 321 states:
322 322 .sp
323 323 .ne 2
324 324 .na
325 325 \fB\fBDEGRADED\fR\fR
326 326 .ad
327 327 .RS 12n
328 328 One or more top-level vdevs is in the degraded state because one or more
329 329 component devices are offline. Sufficient replicas exist to continue
330 330 functioning.
331 331 .sp
332 332 One or more component devices is in the degraded or faulted state, but
333 333 sufficient replicas exist to continue functioning. The underlying conditions
334 334 are as follows:
335 335 .RS +4
336 336 .TP
337 337 .ie t \(bu
338 338 .el o
339 339 The number of checksum errors exceeds acceptable levels and the device is
340 340 degraded as an indication that something may be wrong. \fBZFS\fR continues to
341 341 use the device as necessary.
342 342 .RE
343 343 .RS +4
344 344 .TP
345 345 .ie t \(bu
346 346 .el o
347 347 The number of I/O errors exceeds acceptable levels. The device could not be
348 348 marked as faulted because there are insufficient replicas to continue
349 349 functioning.
350 350 .RE
351 351 .RE
352 352
353 353 .sp
354 354 .ne 2
355 355 .na
356 356 \fB\fBFAULTED\fR\fR
357 357 .ad
358 358 .RS 12n
359 359 One or more top-level vdevs is in the faulted state because one or more
360 360 component devices are offline. Insufficient replicas exist to continue
361 361 functioning.
362 362 .sp
363 363 One or more component devices is in the faulted state, and insufficient
364 364 replicas exist to continue functioning. The underlying conditions are as
365 365 follows:
366 366 .RS +4
367 367 .TP
368 368 .ie t \(bu
369 369 .el o
370 370 The device could be opened, but the contents did not match expected values.
371 371 .RE
372 372 .RS +4
373 373 .TP
374 374 .ie t \(bu
375 375 .el o
376 376 The number of I/O errors exceeds acceptable levels and the device is faulted to
377 377 prevent further use of the device.
378 378 .RE
379 379 .RE
380 380
381 381 .sp
382 382 .ne 2
383 383 .na
384 384 \fB\fBOFFLINE\fR\fR
385 385 .ad
386 386 .RS 12n
387 387 The device was explicitly taken offline by the "\fBzpool offline\fR" command.
388 388 .RE
389 389
390 390 .sp
391 391 .ne 2
392 392 .na
393 393 \fB\fBONLINE\fR\fR
394 394 .ad
395 395 .RS 12n
396 396 The device is online and functioning.
397 397 .RE
398 398
399 399 .sp
400 400 .ne 2
401 401 .na
402 402 \fB\fBREMOVED\fR\fR
403 403 .ad
404 404 .RS 12n
405 405 The device was physically removed while the system was running. Device removal
406 406 detection is hardware-dependent and may not be supported on all platforms.
407 407 .RE
408 408
409 409 .sp
410 410 .ne 2
411 411 .na
412 412 \fB\fBUNAVAIL\fR\fR
413 413 .ad
414 414 .RS 12n
415 415 The device could not be opened. If a pool is imported when a device was
416 416 unavailable, then the device will be identified by a unique identifier instead
417 417 of its path since the path was never correct in the first place.
418 418 .RE
419 419
420 420 .sp
421 421 .LP
422 422 If a device is removed and later re-attached to the system, \fBZFS\fR attempts
423 423 to put the device online automatically. Device attach detection is
424 424 hardware-dependent and might not be supported on all platforms.
425 425 .SS "Hot Spares"
426 426 .sp
427 427 .LP
428 428 \fBZFS\fR allows devices to be associated with pools as "hot spares". These
429 429 devices are not actively used in the pool, but when an active device fails, it
430 430 is automatically replaced by a hot spare. To create a pool with hot spares,
431 431 specify a "spare" \fBvdev\fR with any number of devices. For example,
432 432 .sp
433 433 .in +2
434 434 .nf
435 435 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
436 436 .fi
437 437 .in -2
438 438 .sp
439 439
440 440 .sp
441 441 .LP
442 442 Spares can be shared across multiple pools, and can be added with the "\fBzpool
443 443 add\fR" command and removed with the "\fBzpool remove\fR" command. Once a spare
444 444 replacement is initiated, a new "spare" \fBvdev\fR is created within the
445 445 configuration that will remain there until the original device is replaced. At
446 446 this point, the hot spare becomes available again if another device fails.
447 447 .sp
448 448 .LP
     449  449 If a pool has a shared spare that is currently being used, the pool cannot be
     450  450 exported, since other pools may use this shared spare, which may lead to
451 451 potential data corruption.
452 452 .sp
453 453 .LP
454 454 An in-progress spare replacement can be cancelled by detaching the hot spare.
455 455 If the original faulted device is detached, then the hot spare assumes its
456 456 place in the configuration, and is removed from the spare list of all active
457 457 pools.
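The cancellation described above is just a detach of the spare itself; a hedged sketch, with hypothetical pool and device names:

```shell
# Cancel an in-progress spare replacement: detach the hot spare
# (here c2t3d0) rather than the original device, and the original
# device stays in the configuration.
zpool detach tank c2t3d0
```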
458 458 .sp
459 459 .LP
460 460 Spares cannot replace log devices.
461 461 .SS "Intent Log"
462 462 .sp
463 463 .LP
464 464 The \fBZFS\fR Intent Log (\fBZIL\fR) satisfies \fBPOSIX\fR requirements for
465 465 synchronous transactions. For instance, databases often require their
466 466 transactions to be on stable storage devices when returning from a system call.
467 467 \fBNFS\fR and other applications can also use \fBfsync\fR() to ensure data
468 468 stability. By default, the intent log is allocated from blocks within the main
469 469 pool. However, it might be possible to get better performance using separate
470 470 intent log devices such as \fBNVRAM\fR or a dedicated disk. For example:
471 471 .sp
472 472 .in +2
473 473 .nf
474 474 \fB# zpool create pool c0d0 c1d0 log c2d0\fR
475 475 .fi
476 476 .in -2
477 477 .sp
478 478
479 479 .sp
480 480 .LP
481 481 Multiple log devices can also be specified, and they can be mirrored. See the
482 482 EXAMPLES section for an example of mirroring multiple log devices.
483 483 .sp
484 484 .LP
485 485 Log devices can be added, replaced, attached, detached, and imported and
486 486 exported as part of the larger pool. Mirrored log devices can be removed by
487 487 specifying the top-level mirror for the log.
488 488 .SS "Cache Devices"
489 489 .sp
490 490 .LP
491 491 Devices can be added to a storage pool as "cache devices." These devices
492 492 provide an additional layer of caching between main memory and disk. For
493 493 read-heavy workloads, where the working set size is much larger than what can
     494  494 be cached in main memory, using cache devices allows much more of this working
     495  495 set to be served from low-latency media. Using cache devices provides the
496 496 greatest performance improvement for random read-workloads of mostly static
497 497 content.
498 498 .sp
499 499 .LP
500 500 To create a pool with cache devices, specify a "cache" \fBvdev\fR with any
501 501 number of devices. For example:
502 502 .sp
503 503 .in +2
504 504 .nf
505 505 \fB# zpool create pool c0d0 c1d0 cache c2d0 c3d0\fR
506 506 .fi
507 507 .in -2
508 508 .sp
509 509
510 510 .sp
511 511 .LP
512 512 Cache devices cannot be mirrored or part of a \fBraidz\fR configuration. If a
513 513 read error is encountered on a cache device, that read \fBI/O\fR is reissued to
514 514 the original storage pool device, which might be part of a mirrored or
515 515 \fBraidz\fR configuration.
516 516 .sp
517 517 .LP
518 518 The content of the cache devices is considered volatile, as is the case with
519 519 other system caches.
520 520 .SS "Properties"
521 521 .sp
522 522 .LP
523 523 Each pool has several properties associated with it. Some properties are
524 524 read-only statistics while others are configurable and change the behavior of
525 525 the pool. The following are read-only properties:
526 526 .sp
527 527 .ne 2
528 528 .na
529 529 \fB\fBavailable\fR\fR
530 530 .ad
531 531 .RS 20n
532 532 Amount of storage available within the pool. This property can also be referred
533 533 to by its shortened column name, "avail".
534 534 .RE
535 535
(518 lines elided)
536 536 .sp
537 537 .ne 2
538 538 .na
539 539 \fB\fBcapacity\fR\fR
540 540 .ad
541 541 .RS 20n
542 542 Percentage of pool space used. This property can also be referred to by its
543 543 shortened column name, "cap".
544 544 .RE
545 545
546 +.sp
547 +.ne 2
548 +.na
549 +\fB\fBcomment\fR\fR
550 +.ad
551 +.RS 20n
     552 +A text string consisting of printable ASCII characters that will be stored
553 +such that it is available even if the pool becomes faulted. An administrator
554 +can provide additional information about a pool using this property.
555 +.RE
556 +
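Since \fBcomment\fR is a settable pool property, it follows the usual \fBzpool set\fR/\fBzpool get\fR pattern; a sketch with a hypothetical pool name:

```shell
# Record why this pool exists; per the description above, the string
# remains readable even if the pool later becomes faulted.
zpool set comment="backup pool for host alpha" tank
zpool get comment tank
```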
546 557 .sp
547 558 .ne 2
548 559 .na
549 560 \fB\fBhealth\fR\fR
550 561 .ad
551 562 .RS 20n
552 563 The current health of the pool. Health can be "\fBONLINE\fR", "\fBDEGRADED\fR",
     553  564 "\fBFAULTED\fR", "\fBOFFLINE\fR", "\fBREMOVED\fR", or "\fBUNAVAIL\fR".
554 565 .RE
555 566
556 567 .sp
557 568 .ne 2
558 569 .na
559 570 \fB\fBguid\fR\fR
560 571 .ad
561 572 .RS 20n
562 573 A unique identifier for the pool.
563 574 .RE
564 575
565 576 .sp
566 577 .ne 2
567 578 .na
568 579 \fB\fBsize\fR\fR
569 580 .ad
570 581 .RS 20n
571 582 Total size of the storage pool.
572 583 .RE
573 584
574 585 .sp
575 586 .ne 2
576 587 .na
577 588 \fB\fBused\fR\fR
578 589 .ad
579 590 .RS 20n
580 591 Amount of storage space used within the pool.
581 592 .RE
582 593
583 594 .sp
584 595 .LP
585 596 These space usage properties report actual physical space available to the
586 597 storage pool. The physical space can be different from the total amount of
587 598 space that any contained datasets can actually use. The amount of space used in
588 599 a \fBraidz\fR configuration depends on the characteristics of the data being
589 600 written. In addition, \fBZFS\fR reserves some space for internal accounting
590 601 that the \fBzfs\fR(1M) command takes into account, but the \fBzpool\fR command
591 602 does not. For non-full pools of a reasonable size, these effects should be
592 603 invisible. For small pools, or pools that are close to being completely full,
593 604 these discrepancies may become more noticeable.
594 605 .sp
595 606 .LP
596 607 The following property can be set at creation time and import time:
597 608 .sp
598 609 .ne 2
599 610 .na
600 611 \fB\fBaltroot\fR\fR
601 612 .ad
602 613 .sp .6
603 614 .RS 4n
604 615 Alternate root directory. If set, this directory is prepended to any mount
605 616 points within the pool. This can be used when examining an unknown pool where
606 617 the mount points cannot be trusted, or in an alternate boot environment, where
607 618 the typical paths are not valid. \fBaltroot\fR is not a persistent property. It
608 619 is valid only while the system is up. Setting \fBaltroot\fR defaults to using
609 620 \fBcachefile\fR=none, though this may be overridden using an explicit setting.
610 621 .RE
611 622
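Because \fBaltroot\fR can be set at import time, the common use is inspecting a pool from a rescue or alternate boot environment; a sketch, with a hypothetical pool name and mount root:

```shell
# Import the pool with all of its mount points prefixed by /a, so
# untrusted or colliding mount points never touch the live system.
zpool import -R /a tank
```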
612 623 .sp
613 624 .LP
614 625 The following properties can be set at creation time and import time, and later
615 626 changed with the \fBzpool set\fR command:
616 627 .sp
617 628 .ne 2
618 629 .na
619 630 \fB\fBautoexpand\fR=\fBon\fR | \fBoff\fR\fR
620 631 .ad
621 632 .sp .6
622 633 .RS 4n
623 634 Controls automatic pool expansion when the underlying LUN is grown. If set to
624 635 \fBon\fR, the pool will be resized according to the size of the expanded
625 636 device. If the device is part of a mirror or \fBraidz\fR then all devices
626 637 within that mirror/\fBraidz\fR group must be expanded before the new space is
627 638 made available to the pool. The default behavior is \fBoff\fR. This property
628 639 can also be referred to by its shortened column name, \fBexpand\fR.
629 640 .RE
630 641
631 642 .sp
632 643 .ne 2
633 644 .na
634 645 \fB\fBautoreplace\fR=\fBon\fR | \fBoff\fR\fR
635 646 .ad
636 647 .sp .6
637 648 .RS 4n
638 649 Controls automatic device replacement. If set to "\fBoff\fR", device
639 650 replacement must be initiated by the administrator by using the "\fBzpool
640 651 replace\fR" command. If set to "\fBon\fR", any new device, found in the same
641 652 physical location as a device that previously belonged to the pool, is
642 653 automatically formatted and replaced. The default behavior is "\fBoff\fR". This
643 654 property can also be referred to by its shortened column name, "replace".
644 655 .RE
645 656
646 657 .sp
647 658 .ne 2
648 659 .na
649 660 \fB\fBbootfs\fR=\fIpool\fR/\fIdataset\fR\fR
650 661 .ad
651 662 .sp .6
652 663 .RS 4n
653 664 Identifies the default bootable dataset for the root pool. This property is
654 665 expected to be set mainly by the installation and upgrade programs.
655 666 .RE
656 667
657 668 .sp
658 669 .ne 2
659 670 .na
660 671 \fB\fBcachefile\fR=\fIpath\fR | \fBnone\fR\fR
661 672 .ad
662 673 .sp .6
663 674 .RS 4n
664 675 Controls the location of where the pool configuration is cached. Discovering
665 676 all pools on system startup requires a cached copy of the configuration data
666 677 that is stored on the root file system. All pools in this cache are
667 678 automatically imported when the system boots. Some environments, such as
668 679 install and clustering, need to cache this information in a different location
669 680 so that pools are not automatically imported. Setting this property caches the
670 681 pool configuration in a different location that can later be imported with
671 682 "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a
672 683 temporary pool that is never cached, and the special value \fB\&''\fR (empty
673 684 string) uses the default location.
674 685 .sp
675 686 Multiple pools can share the same cache file. Because the kernel destroys and
676 687 recreates this file when pools are added and removed, care should be taken when
677 688 attempting to access this file. When the last pool using a \fBcachefile\fR is
678 689 exported or destroyed, the file is removed.
679 690 .RE
680 691
681 692 .sp
682 693 .ne 2
683 694 .na
684 695 \fB\fBdelegation\fR=\fBon\fR | \fBoff\fR\fR
685 696 .ad
686 697 .sp .6
687 698 .RS 4n
688 699 Controls whether a non-privileged user is granted access based on the dataset
689 700 permissions defined on the dataset. See \fBzfs\fR(1M) for more information on
690 701 \fBZFS\fR delegated administration.
691 702 .RE
692 703
693 704 .sp
694 705 .ne 2
695 706 .na
696 707 \fB\fBfailmode\fR=\fBwait\fR | \fBcontinue\fR | \fBpanic\fR\fR
697 708 .ad
698 709 .sp .6
699 710 .RS 4n
700 711 Controls the system behavior in the event of catastrophic pool failure. This
701 712 condition is typically a result of a loss of connectivity to the underlying
702 713 storage device(s) or a failure of all devices within the pool. The behavior of
703 714 such an event is determined as follows:
704 715 .sp
705 716 .ne 2
706 717 .na
707 718 \fB\fBwait\fR\fR
708 719 .ad
709 720 .RS 12n
710 721 Blocks all \fBI/O\fR access until the device connectivity is recovered and the
711 722 errors are cleared. This is the default behavior.
712 723 .RE
713 724
714 725 .sp
715 726 .ne 2
716 727 .na
717 728 \fB\fBcontinue\fR\fR
718 729 .ad
719 730 .RS 12n
720 731 Returns \fBEIO\fR to any new write \fBI/O\fR requests but allows reads to any
721 732 of the remaining healthy devices. Any write requests that have yet to be
722 733 committed to disk would be blocked.
723 734 .RE
724 735
725 736 .sp
726 737 .ne 2
727 738 .na
728 739 \fB\fBpanic\fR\fR
729 740 .ad
730 741 .RS 12n
731 742 Prints out a message to the console and generates a system crash dump.
732 743 .RE
733 744
734 745 .RE
735 746
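The \fBfailmode\fR modes above are selected like any other settable property; a sketch with a hypothetical pool name:

```shell
# Prefer serving reads from surviving devices over blocking all I/O
# (the default "wait" behavior) after a catastrophic pool failure.
zpool set failmode=continue tank
```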
736 747 .sp
737 748 .ne 2
738 749 .na
739 750 \fB\fBlistsnaps\fR=on | off\fR
740 751 .ad
741 752 .sp .6
742 753 .RS 4n
743 754 Controls whether information about snapshots associated with this pool is
744 755 output when "\fBzfs list\fR" is run without the \fB-t\fR option. The default
745 756 value is "off".
746 757 .RE
747 758
748 759 .sp
749 760 .ne 2
750 761 .na
751 762 \fB\fBversion\fR=\fIversion\fR\fR
752 763 .ad
753 764 .sp .6
754 765 .RS 4n
755 766 The current on-disk version of the pool. This can be increased, but never
756 767 decreased. The preferred method of updating pools is with the "\fBzpool
757 768 upgrade\fR" command, though this property can be used when a specific version
758 769 is needed for backwards compatibility. This property can be any number between
759 770 1 and the current version reported by "\fBzpool upgrade -v\fR".
760 771 .RE
761 772
762 773 .SS "Subcommands"
763 774 .sp
764 775 .LP
765 776 All subcommands that modify state are logged persistently to the pool in their
766 777 original form.
767 778 .sp
768 779 .LP
769 780 The \fBzpool\fR command provides subcommands to create and destroy storage
770 781 pools, add capacity to storage pools, and provide information about the storage
771 782 pools. The following subcommands are supported:
772 783 .sp
773 784 .ne 2
774 785 .na
775 786 \fB\fBzpool\fR \fB-?\fR\fR
776 787 .ad
777 788 .sp .6
778 789 .RS 4n
779 790 Displays a help message.
780 791 .RE
781 792
782 793 .sp
783 794 .ne 2
784 795 .na
785 796 \fB\fBzpool add\fR [\fB-fn\fR] \fIpool\fR \fIvdev\fR ...\fR
786 797 .ad
787 798 .sp .6
788 799 .RS 4n
789 800 Adds the specified virtual devices to the given pool. The \fIvdev\fR
790 801 specification is described in the "Virtual Devices" section. The behavior of
791 802 the \fB-f\fR option, and the device checks performed are described in the
792 803 "zpool create" subcommand.
793 804 .sp
794 805 .ne 2
795 806 .na
796 807 \fB\fB-f\fR\fR
797 808 .ad
798 809 .RS 6n
799 810 Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting
800 811 replication level. Not all devices can be overridden in this manner.
801 812 .RE
802 813
803 814 .sp
804 815 .ne 2
805 816 .na
806 817 \fB\fB-n\fR\fR
807 818 .ad
808 819 .RS 6n
809 820 Displays the configuration that would be used without actually adding the
810 821 \fBvdev\fRs. The actual pool creation can still fail due to insufficient
811 822 privileges or device sharing.
812 823 .RE
813 824
814 825 Do not add a disk that is currently configured as a quorum device to a zpool.
815 826 After a disk is in the pool, that disk can then be configured as a quorum
816 827 device.
817 828 .RE
818 829
819 830 .sp
820 831 .ne 2
821 832 .na
822 833 \fB\fBzpool attach\fR [\fB-f\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR\fR
823 834 .ad
824 835 .sp .6
825 836 .RS 4n
826 837 Attaches \fInew_device\fR to an existing \fBzpool\fR device. The existing
827 838 device cannot be part of a \fBraidz\fR configuration. If \fIdevice\fR is not
828 839 currently part of a mirrored configuration, \fIdevice\fR automatically
829 840 transforms into a two-way mirror of \fIdevice\fR and \fInew_device\fR. If
830 841 \fIdevice\fR is part of a two-way mirror, attaching \fInew_device\fR creates a
831 842 three-way mirror, and so on. In either case, \fInew_device\fR begins to
832 843 resilver immediately.
833 844 .sp
834 845 .ne 2
835 846 .na
836 847 \fB\fB-f\fR\fR
837 848 .ad
838 849 .RS 6n
     839  850 Forces use of \fInew_device\fR, even if it appears to be in use. Not all
840 851 devices can be overridden in this manner.
841 852 .RE
842 853
843 854 .RE
844 855
845 856 .sp
846 857 .ne 2
847 858 .na
848 859 \fB\fBzpool clear\fR \fIpool\fR [\fIdevice\fR] ...\fR
849 860 .ad
850 861 .sp .6
851 862 .RS 4n
852 863 Clears device errors in a pool. If no arguments are specified, all device
853 864 errors within the pool are cleared. If one or more devices is specified, only
854 865 those errors associated with the specified device or devices are cleared.
855 866 .RE
856 867
857 868 .sp
858 869 .ne 2
859 870 .na
860 871 \fB\fBzpool create\fR [\fB-fn\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-O\fR
861 872 \fIfile-system-property=value\fR] ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR
862 873 \fIroot\fR] \fIpool\fR \fIvdev\fR ...\fR
863 874 .ad
864 875 .sp .6
865 876 .RS 4n
866 877 Creates a new storage pool containing the virtual devices specified on the
867 878 command line. The pool name must begin with a letter, and can only contain
868 879 alphanumeric characters as well as underscore ("_"), dash ("-"), and period
869 880 ("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are
870 881 names beginning with the pattern "c[0-9]". The \fBvdev\fR specification is
871 882 described in the "Virtual Devices" section.
872 883 .sp
873 884 The command verifies that each device specified is accessible and not currently
874 885 in use by another subsystem. There are some uses, such as being currently
     875  886 mounted, or specified as the dedicated dump device, that prevent a device from
876 887 ever being used by \fBZFS\fR. Other uses, such as having a preexisting
877 888 \fBUFS\fR file system, can be overridden with the \fB-f\fR option.
878 889 .sp
879 890 The command also checks that the replication strategy for the pool is
880 891 consistent. An attempt to combine redundant and non-redundant storage in a
881 892 single pool, or to mix disks and files, results in an error unless \fB-f\fR is
882 893 specified. The use of differently sized devices within a single \fBraidz\fR or
883 894 mirror group is also flagged as an error unless \fB-f\fR is specified.
884 895 .sp
885 896 Unless the \fB-R\fR option is specified, the default mount point is
886 897 "/\fIpool\fR". The mount point must not exist or must be empty, or else the
887 898 root dataset cannot be mounted. This can be overridden with the \fB-m\fR
888 899 option.
889 900 .sp
890 901 .ne 2
891 902 .na
892 903 \fB\fB-f\fR\fR
893 904 .ad
894 905 .sp .6
895 906 .RS 4n
896 907 Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting
897 908 replication level. Not all devices can be overridden in this manner.
898 909 .RE
899 910
900 911 .sp
901 912 .ne 2
902 913 .na
903 914 \fB\fB-n\fR\fR
904 915 .ad
905 916 .sp .6
906 917 .RS 4n
907 918 Displays the configuration that would be used without actually creating the
908 919 pool. The actual pool creation can still fail due to insufficient privileges or
909 920 device sharing.
910 921 .RE
911 922
912 923 .sp
913 924 .ne 2
914 925 .na
915 926 \fB\fB-o\fR \fIproperty=value\fR [\fB-o\fR \fIproperty=value\fR] ...\fR
916 927 .ad
917 928 .sp .6
918 929 .RS 4n
919 930 Sets the given pool properties. See the "Properties" section for a list of
920 931 valid properties that can be set.
921 932 .RE
922 933
923 934 .sp
924 935 .ne 2
925 936 .na
926 937 \fB\fB-O\fR \fIfile-system-property=value\fR\fR
927 938 .ad
928 939 .br
929 940 .na
930 941 \fB[\fB-O\fR \fIfile-system-property=value\fR] ...\fR
931 942 .ad
932 943 .sp .6
933 944 .RS 4n
934 945 Sets the given file system properties in the root file system of the pool. See
935 946 the "Properties" section of \fBzfs\fR(1M) for a list of valid properties that
936 947 can be set.
937 948 .RE
938 949
939 950 .sp
940 951 .ne 2
941 952 .na
942 953 \fB\fB-R\fR \fIroot\fR\fR
943 954 .ad
944 955 .sp .6
945 956 .RS 4n
946 957 Equivalent to "-o cachefile=none,altroot=\fIroot\fR"
947 958 .RE
948 959
949 960 .sp
950 961 .ne 2
951 962 .na
952 963 \fB\fB-m\fR \fImountpoint\fR\fR
953 964 .ad
954 965 .sp .6
955 966 .RS 4n
956 967 Sets the mount point for the root dataset. The default mount point is
957 968 "/\fIpool\fR" or "\fBaltroot\fR/\fIpool\fR" if \fBaltroot\fR is specified. The
958 969 mount point must be an absolute path, "\fBlegacy\fR", or "\fBnone\fR". For more
959 970 information on dataset mount points, see \fBzfs\fR(1M).
960 971 .RE
961 972
962 973 .RE
963 974
964 975 .sp
965 976 .ne 2
966 977 .na
967 978 \fB\fBzpool destroy\fR [\fB-f\fR] \fIpool\fR\fR
968 979 .ad
969 980 .sp .6
970 981 .RS 4n
971 982 Destroys the given pool, freeing up any devices for other use. This command
972 983 tries to unmount any active datasets before destroying the pool.
973 984 .sp
974 985 .ne 2
975 986 .na
976 987 \fB\fB-f\fR\fR
977 988 .ad
978 989 .RS 6n
979 990 Forces any active datasets contained within the pool to be unmounted.
980 991 .RE
981 992
982 993 .RE
983 994
984 995 .sp
985 996 .ne 2
986 997 .na
987 998 \fB\fBzpool detach\fR \fIpool\fR \fIdevice\fR\fR
988 999 .ad
989 1000 .sp .6
990 1001 .RS 4n
991 1002 Detaches \fIdevice\fR from a mirror. The operation is refused if there are no
992 1003 other valid replicas of the data.
993 1004 .RE
994 1005
995 1006 .sp
996 1007 .ne 2
997 1008 .na
998 1009 \fB\fBzpool export\fR [\fB-f\fR] \fIpool\fR ...\fR
999 1010 .ad
1000 1011 .sp .6
1001 1012 .RS 4n
1002 1013 Exports the given pools from the system. All devices are marked as exported,
1003 1014 but are still considered in use by other subsystems. The devices can be moved
1004 1015 between systems (even those of different endianness) and imported as long as a
1005 1016 sufficient number of devices are present.
1006 1017 .sp
1007 1018 Before exporting the pool, all datasets within the pool are unmounted. A pool
1008 1019 cannot be exported if it has a shared spare that is currently being used.
1009 1020 .sp
1010 1021 For pools to be portable, you must give the \fBzpool\fR command whole disks,
1011 1022 not just slices, so that \fBZFS\fR can label the disks with portable \fBEFI\fR
1012 1023 labels. Otherwise, disk drivers on platforms of different endianness will not
1013 1024 recognize the disks.
1014 1025 .sp
1015 1026 .ne 2
1016 1027 .na
1017 1028 \fB\fB-f\fR\fR
1018 1029 .ad
1019 1030 .RS 6n
1020 1031 Forcefully unmount all datasets, using the "\fBunmount -f\fR" command.
1021 1032 .sp
1022 1033 This command will forcefully export the pool even if it has a shared spare that
1023 1034 is currently being used. This may lead to potential data corruption.
1024 1035 .RE
1025 1036
1026 1037 .RE
1027 1038
1028 1039 .sp
1029 1040 .ne 2
1030 1041 .na
1031 1042 \fB\fBzpool get\fR "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...\fR
1032 1043 .ad
1033 1044 .sp .6
1034 1045 .RS 4n
1035 1046 Retrieves the given list of properties (or all properties if "\fBall\fR" is
1036 1047 used) for the specified storage pool(s). These properties are displayed with
1037 1048 the following fields:
1038 1049 .sp
1039 1050 .in +2
1040 1051 .nf
1041 1052 name Name of storage pool
1042 1053 property Property name
1043 1054 value Property value
1044 1055 source Property source, either 'default' or 'local'.
1045 1056 .fi
1046 1057 .in -2
1047 1058 .sp
1048 1059
1049 1060 See the "Properties" section for more information on the available pool
1050 1061 properties.
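.sp
For example, the following command retrieves the \fBsize\fR and \fBhealth\fR
properties of the pool \fItank\fR (the pool name is illustrative):
.sp
.in +2
.nf
# \fBzpool get size,health tank\fR
.fi
.in -2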
1051 1062 .RE
1052 1063
1053 1064 .sp
1054 1065 .ne 2
1055 1066 .na
1056 1067 \fB\fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...\fR
1057 1068 .ad
1058 1069 .sp .6
1059 1070 .RS 4n
1060 1071 Displays the command history of the specified pools or all pools if no pool is
1061 1072 specified.
1062 1073 .sp
1063 1074 .ne 2
1064 1075 .na
1065 1076 \fB\fB-i\fR\fR
1066 1077 .ad
1067 1078 .RS 6n
1068 1079 Displays internally logged \fBZFS\fR events in addition to user initiated
1069 1080 events.
1070 1081 .RE
1071 1082
1072 1083 .sp
1073 1084 .ne 2
1074 1085 .na
1075 1086 \fB\fB-l\fR\fR
1076 1087 .ad
1077 1088 .RS 6n
1078 1089 Displays log records in long format, which, in addition to the standard
1079 1090 format, includes the user name, the hostname, and the zone in which the
1080 1091 operation was performed.
1081 1092 .RE
1082 1093
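.sp
For example, the following command displays the long-format history, including
internally logged events, for the pool \fItank\fR (the pool name is
illustrative):
.sp
.in +2
.nf
# \fBzpool history -il tank\fR
.fi
.in -2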
1083 1094 .RE
1084 1095
1085 1096 .sp
1086 1097 .ne 2
1087 1098 .na
1088 1099 \fB\fBzpool import\fR [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1089 1100 [\fB-D\fR]\fR
1090 1101 .ad
1091 1102 .sp .6
1092 1103 .RS 4n
1093 1104 Lists pools available to import. If the \fB-d\fR option is not specified, this
1094 1105 command searches for devices in "/dev/dsk". The \fB-d\fR option can be
1095 1106 specified multiple times, and all directories are searched. If the device
1096 1107 appears to be part of an exported pool, this command displays a summary of the
1097 1108 pool with the name of the pool, a numeric identifier, as well as the \fIvdev\fR
1098 1109 layout and current health of the device for each device or file. Destroyed
1099 1110 pools, pools that were previously destroyed with the "\fBzpool destroy\fR"
1100 1111 command, are not listed unless the \fB-D\fR option is specified.
1101 1112 .sp
1102 1113 The numeric identifier is unique, and can be used instead of the pool name when
1103 1114 multiple exported pools of the same name are available.
1104 1115 .sp
1105 1116 .ne 2
1106 1117 .na
1107 1118 \fB\fB-c\fR \fIcachefile\fR\fR
1108 1119 .ad
1109 1120 .RS 16n
1110 1121 Reads configuration from the given \fBcachefile\fR that was created with the
1111 1122 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1112 1123 searching for devices.
1113 1124 .RE
1114 1125
1115 1126 .sp
1116 1127 .ne 2
1117 1128 .na
1118 1129 \fB\fB-d\fR \fIdir\fR\fR
1119 1130 .ad
1120 1131 .RS 16n
1121 1132 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1122 1133 specified multiple times.
1123 1134 .RE
1124 1135
1125 1136 .sp
1126 1137 .ne 2
1127 1138 .na
1128 1139 \fB\fB-D\fR\fR
1129 1140 .ad
1130 1141 .RS 16n
1131 1142 Lists destroyed pools only.
1132 1143 .RE
1133 1144
1134 1145 .RE
1135 1146
1136 1147 .sp
1137 1148 .ne 2
1138 1149 .na
1139 1150 \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR
1140 1151 \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1141 1152 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fB-a\fR\fR
1142 1153 .ad
1143 1154 .sp .6
1144 1155 .RS 4n
1145 1156 Imports all pools found in the search directories. Identical to the previous
1146 1157 command, except that all pools with a sufficient number of devices available
1147 1158 are imported. Destroyed pools, pools that were previously destroyed with the
1148 1159 "\fBzpool destroy\fR" command, will not be imported unless the \fB-D\fR option
1149 1160 is specified.
1150 1161 .sp
1151 1162 .ne 2
1152 1163 .na
1153 1164 \fB\fB-o\fR \fImntopts\fR\fR
1154 1165 .ad
1155 1166 .RS 21n
1156 1167 Comma-separated list of mount options to use when mounting datasets within the
1157 1168 pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
1158 1169 options.
1159 1170 .RE
1160 1171
1161 1172 .sp
1162 1173 .ne 2
1163 1174 .na
1164 1175 \fB\fB-o\fR \fIproperty=value\fR\fR
1165 1176 .ad
1166 1177 .RS 21n
1167 1178 Sets the specified property on the imported pool. See the "Properties" section
1168 1179 for more information on the available pool properties.
1169 1180 .RE
1170 1181
1171 1182 .sp
1172 1183 .ne 2
1173 1184 .na
1174 1185 \fB\fB-c\fR \fIcachefile\fR\fR
1175 1186 .ad
1176 1187 .RS 21n
1177 1188 Reads configuration from the given \fBcachefile\fR that was created with the
1178 1189 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1179 1190 searching for devices.
1180 1191 .RE
1181 1192
1182 1193 .sp
1183 1194 .ne 2
1184 1195 .na
1185 1196 \fB\fB-d\fR \fIdir\fR\fR
1186 1197 .ad
1187 1198 .RS 21n
1188 1199 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1189 1200 specified multiple times. This option is incompatible with the \fB-c\fR option.
1190 1201 .RE
1191 1202
1192 1203 .sp
1193 1204 .ne 2
1194 1205 .na
1195 1206 \fB\fB-D\fR\fR
1196 1207 .ad
1197 1208 .RS 21n
1198 1209 Imports destroyed pools only. The \fB-f\fR option is also required.
1199 1210 .RE
1200 1211
1201 1212 .sp
1202 1213 .ne 2
1203 1214 .na
1204 1215 \fB\fB-f\fR\fR
1205 1216 .ad
1206 1217 .RS 21n
1207 1218 Forces import, even if the pool appears to be potentially active.
1208 1219 .RE
1209 1220
1210 1221 .sp
1211 1222 .ne 2
1212 1223 .na
1213 1224 \fB\fB-a\fR\fR
1214 1225 .ad
1215 1226 .RS 21n
1216 1227 Searches for and imports all pools found.
1217 1228 .RE
1218 1229
1219 1230 .sp
1220 1231 .ne 2
1221 1232 .na
1222 1233 \fB\fB-R\fR \fIroot\fR\fR
1223 1234 .ad
1224 1235 .RS 21n
1225 1236 Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR"
1226 1237 property to "\fIroot\fR".
1227 1238 .RE
1228 1239
1229 1240 .RE
1230 1241
1231 1242 .sp
1232 1243 .ne 2
1233 1244 .na
1234 1245 \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [ \fB-o\fR
1235 1246 \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR]
1236 1247 [\fB-D\fR] [\fB-f\fR] [\fB-R\fR \fIroot\fR] \fIpool\fR | \fIid\fR
1237 1248 [\fInewpool\fR]\fR
1238 1249 .ad
1239 1250 .sp .6
1240 1251 .RS 4n
1241 1252 Imports a specific pool. A pool can be identified by its name or the numeric
1242 1253 identifier. If \fInewpool\fR is specified, the pool is imported using the name
1243 1254 \fInewpool\fR. Otherwise, it is imported with the same name as its exported
1244 1255 name.
1245 1256 .sp
1246 1257 If a device is removed from a system without running "\fBzpool export\fR"
1247 1258 first, the device appears as potentially active. It cannot be determined if
1248 1259 this was a failed export, or whether the device is really in use from another
1249 1260 host. To import a pool in this state, the \fB-f\fR option is required.
1250 1261 .sp
1251 1262 .ne 2
1252 1263 .na
1253 1264 \fB\fB-o\fR \fImntopts\fR\fR
1254 1265 .ad
1255 1266 .sp .6
1256 1267 .RS 4n
1257 1268 Comma-separated list of mount options to use when mounting datasets within the
1258 1269 pool. See \fBzfs\fR(1M) for a description of dataset properties and mount
1259 1270 options.
1260 1271 .RE
1261 1272
1262 1273 .sp
1263 1274 .ne 2
1264 1275 .na
1265 1276 \fB\fB-o\fR \fIproperty=value\fR\fR
1266 1277 .ad
1267 1278 .sp .6
1268 1279 .RS 4n
1269 1280 Sets the specified property on the imported pool. See the "Properties" section
1270 1281 for more information on the available pool properties.
1271 1282 .RE
1272 1283
1273 1284 .sp
1274 1285 .ne 2
1275 1286 .na
1276 1287 \fB\fB-c\fR \fIcachefile\fR\fR
1277 1288 .ad
1278 1289 .sp .6
1279 1290 .RS 4n
1280 1291 Reads configuration from the given \fBcachefile\fR that was created with the
1281 1292 "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of
1282 1293 searching for devices.
1283 1294 .RE
1284 1295
1285 1296 .sp
1286 1297 .ne 2
1287 1298 .na
1288 1299 \fB\fB-d\fR \fIdir\fR\fR
1289 1300 .ad
1290 1301 .sp .6
1291 1302 .RS 4n
1292 1303 Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be
1293 1304 specified multiple times. This option is incompatible with the \fB-c\fR option.
1294 1305 .RE
1295 1306
1296 1307 .sp
1297 1308 .ne 2
1298 1309 .na
1299 1310 \fB\fB-D\fR\fR
1300 1311 .ad
1301 1312 .sp .6
1302 1313 .RS 4n
1303 1314 Imports a destroyed pool. The \fB-f\fR option is also required.
1304 1315 .RE
1305 1316
1306 1317 .sp
1307 1318 .ne 2
1308 1319 .na
1309 1320 \fB\fB-f\fR\fR
1310 1321 .ad
1311 1322 .sp .6
1312 1323 .RS 4n
1313 1324 Forces import, even if the pool appears to be potentially active.
1314 1325 .RE
1315 1326
1316 1327 .sp
1317 1328 .ne 2
1318 1329 .na
1319 1330 \fB\fB-R\fR \fIroot\fR\fR
1320 1331 .ad
1321 1332 .sp .6
1322 1333 .RS 4n
1323 1334 Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR"
1324 1335 property to "\fIroot\fR".
1325 1336 .RE
1326 1337
1327 1338 .RE
1328 1339
1329 1340 .sp
1330 1341 .ne 2
1331 1342 .na
1332 1343 \fB\fBzpool iostat\fR [\fB-T\fR \fBu\fR | \fBd\fR] [\fB-v\fR] [\fIpool\fR] ...
1333 1344 [\fIinterval\fR[\fIcount\fR]]\fR
1334 1345 .ad
1335 1346 .sp .6
1336 1347 .RS 4n
1337 1348 Displays \fBI/O\fR statistics for the given pools. When given an interval, the
1338 1349 statistics are printed every \fIinterval\fR seconds until \fBCtrl-C\fR is
1339 1350 pressed. If no \fIpools\fR are specified, statistics for every pool in the
1340 1351 system are shown. If \fIcount\fR is specified, the command exits after
1341 1352 \fIcount\fR reports are printed.
1342 1353 .sp
1343 1354 .ne 2
1344 1355 .na
1345 1356 \fB\fB-T\fR \fBu\fR | \fBd\fR\fR
1346 1357 .ad
1347 1358 .RS 12n
1348 1359 Display a time stamp.
1349 1360 .sp
1350 1361 Specify \fBu\fR for a printed representation of the internal representation of
1351 1362 time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See
1352 1363 \fBdate\fR(1).
1353 1364 .RE
1354 1365
1355 1366 .sp
1356 1367 .ne 2
1357 1368 .na
1358 1369 \fB\fB-v\fR\fR
1359 1370 .ad
1360 1371 .RS 12n
1361 1372 Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within
1362 1373 the pool, in addition to the pool-wide statistics.
1363 1374 .RE
1364 1375
1365 1376 .RE
1366 1377
1367 1378 .sp
1368 1379 .ne 2
1369 1380 .na
1370 1381 \fB\fBzpool list\fR [\fB-H\fR] [\fB-o\fR \fIprops\fR[,...]] [\fIpool\fR] ...\fR
1371 1382 .ad
1372 1383 .sp .6
1373 1384 .RS 4n
1374 1385 Lists the given pools along with a health status and space usage. When given no
1375 1386 arguments, all pools in the system are listed.
1376 1387 .sp
1377 1388 .ne 2
1378 1389 .na
1379 1390 \fB\fB-H\fR\fR
1380 1391 .ad
1381 1392 .RS 12n
1382 1393 Scripted mode. Do not display headers, and separate fields by a single tab
1383 1394 instead of arbitrary space.
1384 1395 .RE
1385 1396
1386 1397 .sp
1387 1398 .ne 2
1388 1399 .na
1389 1400 \fB\fB-o\fR \fIprops\fR\fR
1390 1401 .ad
1391 1402 .RS 12n
1392 1403 Comma-separated list of properties to display. See the "Properties" section for
1393 1404 a list of valid properties. The default list is "name, size, used, available,
1394 1405 capacity, health, altroot"
1395 1406 .RE
1396 1407
1397 1408 .RE
1398 1409
1399 1410 .sp
1400 1411 .ne 2
1401 1412 .na
1402 1413 \fB\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...\fR
1403 1414 .ad
1404 1415 .sp .6
1405 1416 .RS 4n
1406 1417 Takes the specified physical device offline. While the \fIdevice\fR is offline,
1407 1418 no attempt is made to read or write to the device.
1408 1419 .sp
1409 1420 This command is not applicable to spares or cache devices.
1410 1421 .sp
1411 1422 .ne 2
1412 1423 .na
1413 1424 \fB\fB-t\fR\fR
1414 1425 .ad
1415 1426 .RS 6n
1416 1427 Temporary. Upon reboot, the specified physical device reverts to its previous
1417 1428 state.
1418 1429 .RE
1419 1430
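.sp
For example, the following command takes a device offline only until the next
reboot (the pool and device names are illustrative):
.sp
.in +2
.nf
# \fBzpool offline -t tank c1t0d0\fR
.fi
.in -2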
1420 1431 .RE
1421 1432
1422 1433 .sp
1423 1434 .ne 2
1424 1435 .na
1425 1436 \fB\fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR...\fR
1426 1437 .ad
1427 1438 .sp .6
1428 1439 .RS 4n
1429 1440 Brings the specified physical device online.
1430 1441 .sp
1431 1442 This command is not applicable to spares or cache devices.
1432 1443 .sp
1433 1444 .ne 2
1434 1445 .na
1435 1446 \fB\fB-e\fR\fR
1436 1447 .ad
1437 1448 .RS 6n
1438 1449 Expand the device to use all available space. If the device is part of a mirror
1439 1450 or \fBraidz\fR then all devices must be expanded before the new space will
1440 1451 become available to the pool.
1441 1452 .RE
1442 1453
1443 1454 .RE
1444 1455
1445 1456 .sp
1446 1457 .ne 2
1447 1458 .na
1448 1459 \fB\fBzpool reguid\fR \fIpool\fR\fR
1449 1460 .ad
1450 1461 .sp .6
1451 1462 .RS 4n
1452 1463 Generates a new unique identifier for the pool. You must ensure that all
1453 1464 devices in this pool are online and healthy before performing this action.
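.sp
For example (the pool name is illustrative):
.sp
.in +2
.nf
# \fBzpool reguid tank\fR
.fi
.in -2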
1454 1465 .RE
1455 1466
1456 1467 .sp
1457 1468 .ne 2
1458 1469 .na
1459 1470 \fB\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...\fR
1460 1471 .ad
1461 1472 .sp .6
1462 1473 .RS 4n
1463 1474 Removes the specified device from the pool. This command currently only
1464 1475 supports removing hot spares, cache, and log devices. A mirrored log device can
1465 1476 be removed by specifying the top-level mirror for the log. Non-log devices that
1466 1477 are part of a mirrored configuration can be removed using the \fBzpool
1467 1478 detach\fR command. Non-redundant and \fBraidz\fR devices cannot be removed from
1468 1479 a pool.
1469 1480 .RE
1470 1481
1471 1482 .sp
1472 1483 .ne 2
1473 1484 .na
1474 1485 \fB\fBzpool replace\fR [\fB-f\fR] \fIpool\fR \fIold_device\fR
1475 1486 [\fInew_device\fR]\fR
1476 1487 .ad
1477 1488 .sp .6
1478 1489 .RS 4n
1479 1490 Replaces \fIold_device\fR with \fInew_device\fR. This is equivalent to
1480 1491 attaching \fInew_device\fR, waiting for it to resilver, and then detaching
1481 1492 \fIold_device\fR.
1482 1493 .sp
1483 1494 The size of \fInew_device\fR must be greater than or equal to the minimum size
1484 1495 of all the devices in a mirror or \fBraidz\fR configuration.
1485 1496 .sp
1486 1497 \fInew_device\fR is required if the pool is not redundant. If \fInew_device\fR
1487 1498 is not specified, it defaults to \fIold_device\fR. This form of replacement is
1488 1499 useful after an existing disk has failed and has been physically replaced. In
1489 1500 this case, the new disk may have the same \fB/dev/dsk\fR path as the old
1490 1501 device, even though it is actually a different disk. \fBZFS\fR recognizes this.
1491 1502 .sp
1492 1503 .ne 2
1493 1504 .na
1494 1505 \fB\fB-f\fR\fR
1495 1506 .ad
1496 1507 .RS 6n
1497 1508 Forces use of \fInew_device\fR, even if it appears to be in use. Not all
1498 1509 devices can be overridden in this manner.
1499 1510 .RE
1500 1511
1501 1512 .RE
1502 1513
1503 1514 .sp
1504 1515 .ne 2
1505 1516 .na
1506 1517 \fB\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...\fR
1507 1518 .ad
1508 1519 .sp .6
1509 1520 .RS 4n
1510 1521 Begins a scrub. The scrub examines all data in the specified pools to verify
1511 1522 that it checksums correctly. For replicated (mirror or \fBraidz\fR) devices,
1512 1523 \fBZFS\fR automatically repairs any damage discovered during the scrub. The
1513 1524 "\fBzpool status\fR" command reports the progress of the scrub and summarizes
1514 1525 the results of the scrub upon completion.
1515 1526 .sp
1516 1527 Scrubbing and resilvering are very similar operations. The difference is that
1517 1528 resilvering only examines data that \fBZFS\fR knows to be out of date (for
1518 1529 example, when attaching a new device to a mirror or replacing an existing
1519 1530 device), whereas scrubbing examines all data to discover silent errors due to
1520 1531 hardware faults or disk failure.
1521 1532 .sp
1522 1533 Because scrubbing and resilvering are \fBI/O\fR-intensive operations, \fBZFS\fR
1523 1534 only allows one at a time. If a scrub is already in progress, the "\fBzpool
1524 1535 scrub\fR" command terminates it and starts a new scrub. If a resilver is in
1525 1536 progress, \fBZFS\fR does not allow a scrub to be started until the resilver
1526 1537 completes.
1527 1538 .sp
1528 1539 .ne 2
1529 1540 .na
1530 1541 \fB\fB-s\fR\fR
1531 1542 .ad
1532 1543 .RS 6n
1533 1544 Stop scrubbing.
1534 1545 .RE
1535 1546
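.sp
For example, the following commands start a scrub of the pool \fItank\fR and
later stop it (the pool name is illustrative):
.sp
.in +2
.nf
# \fBzpool scrub tank\fR
# \fBzpool scrub -s tank\fR
.fi
.in -2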
1536 1547 .RE
1537 1548
1538 1549 .sp
1539 1550 .ne 2
1540 1551 .na
1541 1552 \fB\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR\fR
1542 1553 .ad
1543 1554 .sp .6
1544 1555 .RS 4n
1545 1556 Sets the given property on the specified pool. See the "Properties" section for
1546 1557 more information on what properties can be set and acceptable values.
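.sp
For example, the following command enables the \fBautoreplace\fR property on
the pool \fItank\fR (the pool name is illustrative):
.sp
.in +2
.nf
# \fBzpool set autoreplace=on tank\fR
.fi
.in -2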
1547 1558 .RE
1548 1559
1549 1560 .sp
1550 1561 .ne 2
1551 1562 .na
1552 1563 \fB\fBzpool status\fR [\fB-xv\fR] [\fIpool\fR] ...\fR
1553 1564 .ad
1554 1565 .sp .6
1555 1566 .RS 4n
1556 1567 Displays the detailed health status for the given pools. If no \fIpool\fR is
1557 1568 specified, then the status of each pool in the system is displayed. For more
1558 1569 information on pool and device health, see the "Device Failure and Recovery"
1559 1570 section.
1560 1571 .sp
1561 1572 If a scrub or resilver is in progress, this command reports the percentage done
1562 1573 and the estimated time to completion. Both of these are only approximate,
1563 1574 because the amount of data in the pool and the other workloads on the system
1564 1575 can change.
1565 1576 .sp
1566 1577 .ne 2
1567 1578 .na
1568 1579 \fB\fB-x\fR\fR
1569 1580 .ad
1570 1581 .RS 6n
1571 1582 Only display status for pools that are exhibiting errors or are otherwise
1572 1583 unavailable.
1573 1584 .RE
1574 1585
1575 1586 .sp
1576 1587 .ne 2
1577 1588 .na
1578 1589 \fB\fB-v\fR\fR
1579 1590 .ad
1580 1591 .RS 6n
1581 1592 Displays verbose data error information, printing out a complete list of all
1582 1593 data errors since the last complete pool scrub.
1583 1594 .RE
1584 1595
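.sp
For example, the following command reports verbose status only for pools that
are exhibiting errors or are otherwise unavailable (no pool argument is
needed):
.sp
.in +2
.nf
# \fBzpool status -xv\fR
.fi
.in -2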
1585 1596 .RE
1586 1597
1587 1598 .sp
1588 1599 .ne 2
1589 1600 .na
1590 1601 \fB\fBzpool upgrade\fR\fR
1591 1602 .ad
1592 1603 .sp .6
1593 1604 .RS 4n
1594 1605 Displays all pools formatted using a different \fBZFS\fR on-disk version. Older
1595 1606 versions can continue to be used, but some features may not be available. These
1596 1607 pools can be upgraded using "\fBzpool upgrade -a\fR". Pools that are formatted
1597 1608 with a more recent version are also displayed, although these pools will be
1598 1609 inaccessible on the system.
1599 1610 .RE
1600 1611
1601 1612 .sp
1602 1613 .ne 2
1603 1614 .na
1604 1615 \fB\fBzpool upgrade\fR \fB-v\fR\fR
1605 1616 .ad
1606 1617 .sp .6
1607 1618 .RS 4n
1608 1619 Displays \fBZFS\fR versions supported by the current software. The current
1609 1620 \fBZFS\fR versions and all previous supported versions are displayed, along
1610 1621 with an explanation of the features provided with each version.
1611 1622 .RE
1612 1623
1613 1624 .sp
1614 1625 .ne 2
1615 1626 .na
1616 1627 \fB\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...\fR
1617 1628 .ad
1618 1629 .sp .6
1619 1630 .RS 4n
1620 1631 Upgrades the given pool to the latest on-disk version. Once this is done, the
1621 1632 pool will no longer be accessible on systems running older versions of the
1622 1633 software.
1623 1634 .sp
1624 1635 .ne 2
1625 1636 .na
1626 1637 \fB\fB-a\fR\fR
1627 1638 .ad
1628 1639 .RS 14n
1629 1640 Upgrades all pools.
1630 1641 .RE
1631 1642
1632 1643 .sp
1633 1644 .ne 2
1634 1645 .na
1635 1646 \fB\fB-V\fR \fIversion\fR\fR
1636 1647 .ad
1637 1648 .RS 14n
1638 1649 Upgrade to the specified version. If the \fB-V\fR flag is not specified, the
1639 1650 pool is upgraded to the most recent version. This option can only be used to
1640 1651 increase the version number, and only up to the most recent version supported
1641 1652 by this software.
1642 1653 .RE
1643 1654
1644 1655 .RE
1645 1656
1646 1657 .SH EXAMPLES
1647 1658 .LP
1648 1659 \fBExample 1 \fRCreating a RAID-Z Storage Pool
1649 1660 .sp
1650 1661 .LP
1651 1662 The following command creates a pool with a single \fBraidz\fR root \fIvdev\fR
1652 1663 that consists of six disks.
1653 1664
1654 1665 .sp
1655 1666 .in +2
1656 1667 .nf
1657 1668 # \fBzpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0\fR
1658 1669 .fi
1659 1670 .in -2
1660 1671 .sp
1661 1672
1662 1673 .LP
1663 1674 \fBExample 2 \fRCreating a Mirrored Storage Pool
1664 1675 .sp
1665 1676 .LP
1666 1677 The following command creates a pool with two mirrors, where each mirror
1667 1678 contains two disks.
1668 1679
1669 1680 .sp
1670 1681 .in +2
1671 1682 .nf
1672 1683 # \fBzpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0\fR
1673 1684 .fi
1674 1685 .in -2
1675 1686 .sp
1676 1687
1677 1688 .LP
1678 1689 \fBExample 3 \fRCreating a ZFS Storage Pool by Using Slices
1679 1690 .sp
1680 1691 .LP
1681 1692 The following command creates an unmirrored pool using two disk slices.
1682 1693
1683 1694 .sp
1684 1695 .in +2
1685 1696 .nf
1686 1697 # \fBzpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4\fR
1687 1698 .fi
1688 1699 .in -2
1689 1700 .sp
1690 1701
1691 1702 .LP
1692 1703 \fBExample 4 \fRCreating a ZFS Storage Pool by Using Files
1693 1704 .sp
1694 1705 .LP
1695 1706 The following command creates an unmirrored pool using files. While not
1696 1707 recommended, a pool based on files can be useful for experimental purposes.
1697 1708
1698 1709 .sp
1699 1710 .in +2
1700 1711 .nf
1701 1712 # \fBzpool create tank /path/to/file/a /path/to/file/b\fR
1702 1713 .fi
1703 1714 .in -2
1704 1715 .sp
1705 1716
1706 1717 .LP
1707 1718 \fBExample 5 \fRAdding a Mirror to a ZFS Storage Pool
1708 1719 .sp
1709 1720 .LP
1710 1721 The following command adds two mirrored disks to the pool "\fItank\fR",
1711 1722 assuming the pool is already made up of two-way mirrors. The additional space
1712 1723 is immediately available to any datasets within the pool.
1713 1724
1714 1725 .sp
1715 1726 .in +2
1716 1727 .nf
1717 1728 # \fBzpool add tank mirror c1t0d0 c1t1d0\fR
1718 1729 .fi
1719 1730 .in -2
1720 1731 .sp
1721 1732
1722 1733 .LP
1723 1734 \fBExample 6 \fRListing Available ZFS Storage Pools
1724 1735 .sp
1725 1736 .LP
1726 1737 The following command lists all available pools on the system. In this case,
1727 1738 the pool \fIzion\fR is faulted due to a missing device.
1728 1739
1729 1740 .sp
1730 1741 .LP
1731 1742 The results from this command are similar to the following:
1732 1743
1733 1744 .sp
1734 1745 .in +2
1735 1746 .nf
1736 1747 # \fBzpool list\fR
1737 1748 NAME SIZE USED AVAIL CAP HEALTH ALTROOT
1738 1749 pool 67.5G 2.92M 67.5G 0% ONLINE -
1739 1750 tank 67.5G 2.92M 67.5G 0% ONLINE -
1740 1751 zion - - - 0% FAULTED -
1741 1752 .fi
1742 1753 .in -2
1743 1754 .sp
1744 1755
1745 1756 .LP
1746 1757 \fBExample 7 \fRDestroying a ZFS Storage Pool
1747 1758 .sp
1748 1759 .LP
1749 1760 The following command destroys the pool "\fItank\fR" and any datasets contained
1750 1761 within.
1751 1762
1752 1763 .sp
1753 1764 .in +2
1754 1765 .nf
1755 1766 # \fBzpool destroy -f tank\fR
1756 1767 .fi
1757 1768 .in -2
1758 1769 .sp
1759 1770
1760 1771 .LP
1761 1772 \fBExample 8 \fRExporting a ZFS Storage Pool
1762 1773 .sp
1763 1774 .LP
1764 1775 The following command exports the devices in pool \fItank\fR so that they can
1765 1776 be relocated or later imported.
1766 1777
1767 1778 .sp
1768 1779 .in +2
1769 1780 .nf
1770 1781 # \fBzpool export tank\fR
1771 1782 .fi
1772 1783 .in -2
1773 1784 .sp
1774 1785
1775 1786 .LP
1776 1787 \fBExample 9 \fRImporting a ZFS Storage Pool
1777 1788 .sp
1778 1789 .LP
1779 1790 The following command displays available pools, and then imports the pool
1780 1791 "tank" for use on the system.
1781 1792
1782 1793 .sp
1783 1794 .LP
1784 1795 The results from this command are similar to the following:
1785 1796
1786 1797 .sp
1787 1798 .in +2
1788 1799 .nf
1789 1800 # \fBzpool import\fR
1790 1801 pool: tank
1791 1802 id: 15451357997522795478
1792 1803 state: ONLINE
1793 1804 action: The pool can be imported using its name or numeric identifier.
1794 1805 config:
1795 1806
1796 1807 tank ONLINE
1797 1808 mirror ONLINE
1798 1809 c1t2d0 ONLINE
1799 1810 c1t3d0 ONLINE
1800 1811
1801 1812 # \fBzpool import tank\fR
1802 1813 .fi
1803 1814 .in -2
1804 1815 .sp
1805 1816
1806 1817 .LP
1807 1818 \fBExample 10 \fRUpgrading All ZFS Storage Pools to the Current Version
1808 1819 .sp
1809 1820 .LP
1810 1821 The following command upgrades all ZFS storage pools to the current version of
1811 1822 the software.
1812 1823
1813 1824 .sp
1814 1825 .in +2
1815 1826 .nf
1816 1827 # \fBzpool upgrade -a\fR
1817 1828 This system is currently running ZFS version 2.
1818 1829 .fi
1819 1830 .in -2
1820 1831 .sp
1821 1832
1822 1833 .LP
1823 1834 \fBExample 11 \fRManaging Hot Spares
1824 1835 .sp
1825 1836 .LP
1826 1837 The following command creates a new pool with an available hot spare:
1827 1838
1828 1839 .sp
1829 1840 .in +2
1830 1841 .nf
1831 1842 # \fBzpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0\fR
1832 1843 .fi
1833 1844 .in -2
1834 1845 .sp
1835 1846
1836 1847 .sp
1837 1848 .LP
1838 1849 If one of the disks were to fail, the pool would be reduced to the degraded
1839 1850 state. The failed device can be replaced using the following command:
1840 1851
1841 1852 .sp
1842 1853 .in +2
1843 1854 .nf
1844 1855 # \fBzpool replace tank c0t0d0 c0t3d0\fR
1845 1856 .fi
1846 1857 .in -2
1847 1858 .sp
1848 1859
1849 1860 .sp
1850 1861 .LP
1851 1862 Once the data has been resilvered, the spare is automatically removed and is
1852 1863 made available should another device fail. The hot spare can be permanently
1853 1864 removed from the pool using the following command:
1854 1865
1855 1866 .sp
1856 1867 .in +2
1857 1868 .nf
1858 1869 # \fBzpool remove tank c0t2d0\fR
1859 1870 .fi
1860 1871 .in -2
1861 1872 .sp
1862 1873
1863 1874 .LP
1864 1875 \fBExample 12 \fRCreating a ZFS Pool with Mirrored Separate Intent Logs
1865 1876 .sp
1866 1877 .LP
1867 1878 The following command creates a ZFS storage pool consisting of two two-way
1868 1879 mirrors and mirrored log devices:
1869 1880
1870 1881 .sp
1871 1882 .in +2
1872 1883 .nf
1873 1884 # \fBzpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
1874 1885 c4d0 c5d0\fR
1875 1886 .fi
1876 1887 .in -2
1877 1888 .sp
1878 1889
1879 1890 .LP
1880 1891 \fBExample 13 \fRAdding Cache Devices to a ZFS Pool
1881 1892 .sp
1882 1893 .LP
1883 1894 The following command adds two disks for use as cache devices to a ZFS storage
1884 1895 pool:
1885 1896
1886 1897 .sp
1887 1898 .in +2
1888 1899 .nf
1889 1900 # \fBzpool add pool cache c2d0 c3d0\fR
1890 1901 .fi
1891 1902 .in -2
1892 1903 .sp
1893 1904
1894 1905 .sp
1895 1906 .LP
1896 1907 Once added, the cache devices gradually fill with content from main memory.
1897 1908 Depending on the size of your cache devices, it could take over an hour for
1898 1909 them to fill. Capacity and reads can be monitored using the \fBiostat\fR
1899 1910 subcommand as follows:
1900 1911
1901 1912 .sp
1902 1913 .in +2
1903 1914 .nf
1904 1915 # \fBzpool iostat -v pool 5\fR
1905 1916 .fi
1906 1917 .in -2
1907 1918 .sp
1908 1919
1909 1920 .LP
1910 1921 \fBExample 14 \fRRemoving a Mirrored Log Device
1911 1922 .sp
1912 1923 .LP
1913 1924 The following command removes the mirrored log device \fBmirror-2\fR.
1914 1925
1915 1926 .sp
1916 1927 .LP
1917 1928 Given this configuration:
1918 1929
1919 1930 .sp
1920 1931 .in +2
1921 1932 .nf
1922 1933 pool: tank
1923 1934 state: ONLINE
1924 1935 scrub: none requested
1925 1936 config:
1926 1937
1927 1938 NAME STATE READ WRITE CKSUM
1928 1939 tank ONLINE 0 0 0
1929 1940 mirror-0 ONLINE 0 0 0
1930 1941 c6t0d0 ONLINE 0 0 0
1931 1942 c6t1d0 ONLINE 0 0 0
1932 1943 mirror-1 ONLINE 0 0 0
1933 1944 c6t2d0 ONLINE 0 0 0
1934 1945 c6t3d0 ONLINE 0 0 0
1935 1946 logs
1936 1947 mirror-2 ONLINE 0 0 0
1937 1948 c4t0d0 ONLINE 0 0 0
1938 1949 c4t1d0 ONLINE 0 0 0
1939 1950 .fi
1940 1951 .in -2
1941 1952 .sp
1942 1953
1943 1954 .sp
1944 1955 .LP
1945 1956 The command to remove the mirrored log \fBmirror-2\fR is:
1946 1957
1947 1958 .sp
1948 1959 .in +2
1949 1960 .nf
1950 1961 # \fBzpool remove tank mirror-2\fR
1951 1962 .fi
1952 1963 .in -2
1953 1964 .sp
1954 1965
1955 1966 .SH EXIT STATUS
1956 1967 .sp
1957 1968 .LP
1958 1969 The following exit values are returned:
1959 1970 .sp
1960 1971 .ne 2
1961 1972 .na
1962 1973 \fB\fB0\fR\fR
1963 1974 .ad
1964 1975 .RS 5n
1965 1976 Successful completion.
1966 1977 .RE
1967 1978
1968 1979 .sp
1969 1980 .ne 2
1970 1981 .na
1971 1982 \fB\fB1\fR\fR
1972 1983 .ad
1973 1984 .RS 5n
1974 1985 An error occurred.
1975 1986 .RE
1976 1987
1977 1988 .sp
1978 1989 .ne 2
1979 1990 .na
1980 1991 \fB\fB2\fR\fR
1981 1992 .ad
1982 1993 .RS 5n
1983 1994 Invalid command line options were specified.
1984 1995 .RE
1985 1996
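The exit codes listed above make \fBzpool\fR straightforward to drive from scripts. A minimal sketch in POSIX shell, assuming only the three documented codes; the \fBstatus\fR variable stands in for \fB$?\fR captured after a real \fBzpool\fR command (no pool is touched here, and \fBzpool\fR itself is not invoked):

```shell
# Sketch: translate the documented zpool exit codes into messages.
# In a real script, 'status' would be $? captured right after a zpool command,
# e.g.:  zpool list tank >/dev/null 2>&1; status=$?
status=2
case "$status" in
  0) echo "zpool: successful completion" ;;
  1) echo "zpool: an error occurred" ;;
  2) echo "zpool: invalid command line options" ;;
  *) echo "zpool: unexpected exit code $status" ;;
esac
```

Running the sketch as shown prints the message for code 2, the "invalid command line options" case.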
1986 1997 .SH ATTRIBUTES
1987 1998 .sp
1988 1999 .LP
1989 2000 See \fBattributes\fR(5) for descriptions of the following attributes:
1990 2001 .sp
1991 2002
1992 2003 .sp
1993 2004 .TS
1994 2005 box;
1995 2006 c | c
1996 2007 l | l .
1997 2008 ATTRIBUTE TYPE ATTRIBUTE VALUE
1998 2009 _
1999 2010 Interface Stability Evolving
2000 2011 .TE
2001 2012
2002 2013 .SH SEE ALSO
2003 2014 .sp
2004 2015 .LP
2005 2016 \fBzfs\fR(1M), \fBattributes\fR(5)