2897 "zpool split" documentation missing from manpage
Reviewed by: Dan McDonald <danmcd@omniti.com>
--- old/usr/src/man/man1m/zpool.1m.man.txt
+++ new/usr/src/man/man1m/zpool.1m.man.txt
1 1 ZPOOL(1M) Maintenance Commands ZPOOL(1M)
2 2
3 3
4 4
5 5 NAME
6 6 zpool - configures ZFS storage pools
7 7
8 8 SYNOPSIS
9 9 zpool [-?]
10 10
11 11
12 12 zpool add [-fn] pool vdev ...
13 13
14 14
15 15 zpool attach [-f] pool device new_device
16 16
17 17
18 18 zpool clear pool [device]
19 19
20 20
21 21 zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
22 22 ... [-m mountpoint] [-R root] pool vdev ...
23 23
24 24
25 25 zpool destroy [-f] pool
26 26
27 27
28 28 zpool detach pool device
29 29
30 30
31 31 zpool export [-f] pool ...
32 32
33 33
34 34 zpool get [-Hp] [-o field[,...]] "all" | property[,...] pool ...
35 35
36 36
37 37 zpool history [-il] [pool] ...
38 38
39 39
40 40 zpool import [-d dir] [-D]
41 41
42 42
43 43 zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
44 44 [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a
45 45
46 46
47 47 zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
48 48 [-D] [-f] [-m] [-R root] [-F [-n]] pool |id [newpool]
49 49
50 50
51 51 zpool iostat [-T u | d ] [-v] [pool] ... [interval[count]]
52 52
53 53
54 54 zpool list [-T u | d ] [-Hpv] [-o property[,...]] [pool] ... [interval[count]]
55 55
56 56
57 57 zpool offline [-t] pool device ...
58 58
59 59
60 60 zpool online pool device ...
61 61
62 62
63 63 zpool reguid pool
64 64
65 65
66 66 zpool reopen pool
67 67
68 68
69 69 zpool remove pool device ...
70 70
(70 lines elided)
71 71
72 72 zpool replace [-f] pool device [new_device]
73 73
74 74
75 75 zpool scrub [-s] pool ...
76 76
77 77
78 78 zpool set property=value pool
79 79
80 80
81 + zpool split [-n] [-R altroot] [-o mntopts] [-o property=value] pool newpool [device ... ]
82 +
83 +
81 84 zpool status [-xvD] [-T u | d ] [pool] ... [interval [count]]
82 85
83 86
84 87 zpool upgrade
85 88
86 89
87 90 zpool upgrade -v
88 91
89 92
90 93 zpool upgrade [-V version] -a | pool ...
91 94
92 95
93 96 DESCRIPTION
94 97 The zpool command configures ZFS storage pools. A storage pool is a
95 98 collection of devices that provides physical storage and data
96 99 replication for ZFS datasets.
97 100
98 101
99 102 All datasets within a storage pool share the same space. See zfs(1M)
100 103 for information on managing datasets.
101 104
102 105 Virtual Devices (vdevs)
103 106 A "virtual device" describes a single device or a collection of devices
104 107 organized according to certain performance and fault characteristics.
105 108 The following virtual devices are supported:
106 109
107 110 disk
108 111 A block device, typically located under /dev/dsk. ZFS can use
109 112 individual slices or partitions, though the recommended mode
110 113 of operation is to use whole disks. A disk can be specified
111 114 by a full path, or it can be a shorthand name (the relative
112 115 portion of the path under "/dev/dsk"). A whole disk can be
113 116 specified by omitting the slice or partition designation. For
114 117 example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When
115 118 given a whole disk, ZFS automatically labels the disk, if
116 119 necessary.
117 120
118 121
119 122 file
120 123 A regular file. The use of files as a backing store is
121 124 strongly discouraged. It is designed primarily for
122 125 experimental purposes, as the fault tolerance of a file is
123 126 only as good as the file system of which it is a part. A file
124 127 must be specified by a full path.
125 128
126 129
127 130 mirror
128 131 A mirror of two or more devices. Data is replicated in an
129 132 identical fashion across all components of a mirror. A mirror
130 133 with N disks of size X can hold X bytes and can withstand
131 134 (N-1) devices failing before data integrity is compromised.
132 135
133 136
134 137 raidz
135 138 raidz1
136 139 raidz2
137 140 raidz3
138 141 A variation on RAID-5 that allows for better distribution of
139 142 parity and eliminates the "RAID-5 write hole" (in which data
140 143 and parity become inconsistent after a power loss). Data and
141 144 parity is striped across all disks within a raidz group.
142 145
 143  146             A raidz group can have single-, double-, or triple-parity,
144 147 meaning that the raidz group can sustain one, two, or three
145 148 failures, respectively, without losing any data. The raidz1
146 149 vdev type specifies a single-parity raidz group; the raidz2
147 150 vdev type specifies a double-parity raidz group; and the
148 151 raidz3 vdev type specifies a triple-parity raidz group. The
149 152 raidz vdev type is an alias for raidz1.
150 153
151 154 A raidz group with N disks of size X with P parity disks can
152 155 hold approximately (N-P)*X bytes and can withstand P device(s)
153 156 failing before data integrity is compromised. The minimum
154 157 number of devices in a raidz group is one more than the
155 158 number of parity disks. The recommended number is between 3
156 159 and 9 to help increase performance.
157 160
158 161
159 162 spare
160 163 A special pseudo-vdev which keeps track of available hot
161 164 spares for a pool. For more information, see the "Hot Spares"
162 165 section.
163 166
164 167
165 168 log
166 169 A separate-intent log device. If more than one log device is
167 170 specified, then writes are load-balanced between devices. Log
168 171 devices can be mirrored. However, raidz vdev types are not
169 172 supported for the intent log. For more information, see the
170 173 "Intent Log" section.
171 174
172 175
173 176 cache
174 177 A device used to cache storage pool data. A cache device
 175  178                  cannot be configured as a mirror or raidz group.
176 179 For more information, see the "Cache Devices" section.
177 180
178 181
179 182
180 183 Virtual devices cannot be nested, so a mirror or raidz virtual device
181 184 can only contain files or disks. Mirrors of mirrors (or other
182 185 combinations) are not allowed.
183 186
184 187
185 188 A pool can have any number of virtual devices at the top of the
186 189 configuration (known as "root vdevs"). Data is dynamically distributed
187 190 across all top-level devices to balance data among devices. As new
188 191 virtual devices are added, ZFS automatically places data on the newly
189 192 available devices.
190 193
191 194
192 195 Virtual devices are specified one at a time on the command line,
193 196 separated by whitespace. The keywords "mirror" and "raidz" are used to
194 197 distinguish where a group ends and another begins. For example, the
195 198 following creates two root vdevs, each a mirror of two disks:
196 199
197 200 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
198 201
199 202
200 203
201 204 Device Failure and Recovery
202 205 ZFS supports a rich set of mechanisms for handling device failure and
203 206 data corruption. All metadata and data is checksummed, and ZFS
204 207 automatically repairs bad data from a good copy when corruption is
205 208 detected.
206 209
207 210
208 211 In order to take advantage of these features, a pool must make use of
209 212 some form of redundancy, using either mirrored or raidz groups. While
210 213 ZFS supports running in a non-redundant configuration, where each root
211 214 vdev is simply a disk or file, this is strongly discouraged. A single
212 215 case of bit corruption can render some or all of your data unavailable.
213 216
214 217
215 218 A pool's health status is described by one of three states: online,
216 219 degraded, or faulted. An online pool has all devices operating
217 220 normally. A degraded pool is one in which one or more devices have
218 221 failed, but the data is still available due to a redundant
219 222 configuration. A faulted pool has corrupted metadata, or one or more
220 223 faulted devices, and insufficient replicas to continue functioning.
221 224
222 225
223 226 The health of the top-level vdev, such as mirror or raidz device, is
224 227 potentially impacted by the state of its associated vdevs, or component
225 228 devices. A top-level vdev or component device is in one of the following
226 229 states:
227 230
228 231 DEGRADED
229 232 One or more top-level vdevs is in the degraded state because
230 233 one or more component devices are offline. Sufficient
231 234 replicas exist to continue functioning.
232 235
233 236 One or more component devices is in the degraded or faulted
234 237 state, but sufficient replicas exist to continue
235 238 functioning. The underlying conditions are as follows:
236 239
237 240 o The number of checksum errors exceeds acceptable
238 241 levels and the device is degraded as an
239 242 indication that something may be wrong. ZFS
240 243 continues to use the device as necessary.
241 244
242 245 o The number of I/O errors exceeds acceptable
243 246 levels. The device could not be marked as
244 247 faulted because there are insufficient replicas
245 248 to continue functioning.
246 249
247 250
248 251 FAULTED
249 252 One or more top-level vdevs is in the faulted state because
250 253 one or more component devices are offline. Insufficient
251 254 replicas exist to continue functioning.
252 255
253 256 One or more component devices is in the faulted state, and
254 257 insufficient replicas exist to continue functioning. The
255 258 underlying conditions are as follows:
256 259
257 260 o The device could be opened, but the contents did
258 261 not match expected values.
259 262
260 263 o The number of I/O errors exceeds acceptable
261 264 levels and the device is faulted to prevent
262 265 further use of the device.
263 266
264 267
265 268 OFFLINE
266 269 The device was explicitly taken offline by the "zpool
267 270 offline" command.
268 271
269 272
270 273 ONLINE
271 274 The device is online and functioning.
272 275
273 276
274 277 REMOVED
275 278 The device was physically removed while the system was
276 279 running. Device removal detection is hardware-dependent and
277 280 may not be supported on all platforms.
278 281
279 282
280 283 UNAVAIL
281 284 The device could not be opened. If a pool is imported when
282 285 a device was unavailable, then the device will be
283 286 identified by a unique identifier instead of its path since
284 287 the path was never correct in the first place.
285 288
286 289
287 290
288 291 If a device is removed and later re-attached to the system, ZFS attempts
289 292 to put the device online automatically. Device attach detection is
290 293 hardware-dependent and might not be supported on all platforms.
291 294
292 295 Hot Spares
293 296 ZFS allows devices to be associated with pools as "hot spares". These
294 297 devices are not actively used in the pool, but when an active device
295 298 fails, it is automatically replaced by a hot spare. To create a pool
296 299 with hot spares, specify a "spare" vdev with any number of devices. For
297 300 example,
298 301
299 302 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
300 303
301 304
302 305
303 306
304 307 Spares can be shared across multiple pools, and can be added with the
305 308 "zpool add" command and removed with the "zpool remove" command. Once a
306 309 spare replacement is initiated, a new "spare" vdev is created within
307 310 the configuration that will remain there until the original device is
308 311 replaced. At this point, the hot spare becomes available again if
309 312 another device fails.
310 313
311 314
 312  315       If a pool has a shared spare that is currently being used, the pool
 313  316       cannot be exported since other pools may use this shared spare, which may
314 317 lead to potential data corruption.
315 318
316 319
317 320 An in-progress spare replacement can be cancelled by detaching the hot
318 321 spare. If the original faulted device is detached, then the hot spare
319 322 assumes its place in the configuration, and is removed from the spare
320 323 list of all active pools.
321 324
322 325
323 326 Spares cannot replace log devices.
324 327
325 328 Intent Log
326 329 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
327 330 transactions. For instance, databases often require their transactions
328 331 to be on stable storage devices when returning from a system call. NFS
329 332 and other applications can also use fsync() to ensure data stability.
330 333 By default, the intent log is allocated from blocks within the main
331 334 pool. However, it might be possible to get better performance using
332 335 separate intent log devices such as NVRAM or a dedicated disk. For
333 336 example:
334 337
335 338 # zpool create pool c0d0 c1d0 log c2d0
336 339
337 340
338 341
339 342
340 343 Multiple log devices can also be specified, and they can be mirrored.
341 344 See the EXAMPLES section for an example of mirroring multiple log
342 345 devices.
343 346
344 347
345 348 Log devices can be added, replaced, attached, detached, and imported
346 349 and exported as part of the larger pool. Mirrored log devices can be
347 350 removed by specifying the top-level mirror for the log.
348 351
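As a hedged illustration of the log-device lifecycle described above (pool and device names are hypothetical, and the "mirror-N" vdev name is assumed to match what "zpool status" reports for the log mirror on a given system):

```shell
# Add a mirrored log vdev to an existing pool
zpool add tank log mirror c4t0d0 c5t0d0

# A mirrored log is later removed by naming its top-level mirror vdev
zpool remove tank mirror-1
```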
349 352 Cache Devices
350 353 Devices can be added to a storage pool as "cache devices." These
351 354 devices provide an additional layer of caching between main memory and
352 355 disk. For read-heavy workloads, where the working set size is much
353 356 larger than what can be cached in main memory, using cache devices
 354  357       allows much more of this working set to be served from low latency
355 358 media. Using cache devices provides the greatest performance
356 359 improvement for random read-workloads of mostly static content.
357 360
358 361
359 362 To create a pool with cache devices, specify a "cache" vdev with any
360 363 number of devices. For example:
361 364
362 365 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
363 366
364 367
365 368
366 369
367 370 Cache devices cannot be mirrored or part of a raidz configuration. If a
368 371 read error is encountered on a cache device, that read I/O is reissued
369 372 to the original storage pool device, which might be part of a mirrored
370 373 or raidz configuration.
371 374
372 375
373 376 The content of the cache devices is considered volatile, as is the case
374 377 with other system caches.
375 378
376 379 Properties
377 380 Each pool has several properties associated with it. Some properties
378 381 are read-only statistics while others are configurable and change the
379 382 behavior of the pool. The following are read-only properties:
380 383
381 384 available
382 385 Amount of storage available within the pool. This
383 386 property can also be referred to by its shortened
384 387 column name, "avail".
385 388
386 389
387 390 capacity
388 391 Percentage of pool space used. This property can
389 392 also be referred to by its shortened column name,
390 393 "cap".
391 394
392 395
393 396 expandsize
394 397 Amount of uninitialized space within the pool or
395 398 device that can be used to increase the total
396 399 capacity of the pool. Uninitialized space consists
397 400 of any space on an EFI labeled vdev which has not
398 401 been brought online (i.e. zpool online -e). This
399 402 space occurs when a LUN is dynamically expanded.
400 403
401 404
402 405 fragmentation
403 406 The amount of fragmentation in the pool.
404 407
405 408
406 409 free
407 410 The amount of free space available in the pool.
408 411
409 412
410 413 freeing
411 414 After a file system or snapshot is destroyed, the
412 415 space it was using is returned to the pool
413 416 asynchronously. freeing is the amount of space
414 417 remaining to be reclaimed. Over time freeing will
415 418 decrease while free increases.
416 419
417 420
418 421 health
419 422 The current health of the pool. Health can be
 420  423                  "ONLINE", "DEGRADED", "FAULTED", "OFFLINE",
421 424 "REMOVED", or "UNAVAIL".
422 425
423 426
424 427 guid
425 428 A unique identifier for the pool.
426 429
427 430
428 431 size
429 432 Total size of the storage pool.
430 433
431 434
432 435 unsupported@feature_guid
433 436 Information about unsupported features that are
434 437 enabled on the pool. See zpool-features(5) for
435 438 details.
436 439
437 440
438 441 used
439 442 Amount of storage space used within the pool.
440 443
441 444
442 445
443 446 The space usage properties report actual physical space available to
444 447 the storage pool. The physical space can be different from the total
445 448 amount of space that any contained datasets can actually use. The
446 449 amount of space used in a raidz configuration depends on the
447 450 characteristics of the data being written. In addition, ZFS reserves
448 451 some space for internal accounting that the zfs(1M) command takes into
449 452 account, but the zpool command does not. For non-full pools of a
450 453 reasonable size, these effects should be invisible. For small pools, or
451 454 pools that are close to being completely full, these discrepancies may
452 455 become more noticeable.
453 456
454 457
455 458 The following property can be set at creation time and import time:
456 459
457 460 altroot
458 461 Alternate root directory. If set, this directory is prepended to
459 462 any mount points within the pool. This can be used when examining
460 463 an unknown pool where the mount points cannot be trusted, or in an
461 464 alternate boot environment, where the typical paths are not valid.
462 465 altroot is not a persistent property. It is valid only while the
463 466 system is up. Setting altroot defaults to using cachefile=none,
464 467 though this may be overridden using an explicit setting.
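A minimal sketch of how altroot is typically supplied, assuming a hypothetical pool name and inspection directory:

```shell
# Import an unknown pool with all of its mount points prefixed by /mnt;
# -R is shorthand for -o cachefile=none,altroot=/mnt
zpool import -R /mnt tank
```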
465 468
466 469
467 470
468 471 The following property can be set only at import time:
469 472
470 473 readonly=on | off
471 474 If set to on, the pool will be imported in read-only mode. This
472 475 property can also be referred to by its shortened column name,
473 476 rdonly.
474 477
475 478
476 479
477 480 The following properties can be set at creation time and import time,
478 481 and later changed with the zpool set command:
479 482
480 483 autoexpand=on | off
481 484 Controls automatic pool expansion when the underlying LUN is grown.
482 485 If set to on, the pool will be resized according to the size of the
483 486 expanded device. If the device is part of a mirror or raidz then
484 487 all devices within that mirror/raidz group must be expanded before
485 488 the new space is made available to the pool. The default behavior
486 489 is off. This property can also be referred to by its shortened
487 490 column name, expand.
488 491
489 492
490 493 autoreplace=on | off
491 494 Controls automatic device replacement. If set to "off", device
492 495 replacement must be initiated by the administrator by using the
493 496 "zpool replace" command. If set to "on", any new device, found in
494 497 the same physical location as a device that previously belonged to
495 498 the pool, is automatically formatted and replaced. The default
496 499 behavior is "off". This property can also be referred to by its
497 500 shortened column name, "replace".
498 501
499 502
500 503 bootfs=pool/dataset
501 504 Identifies the default bootable dataset for the root pool. This
502 505 property is expected to be set mainly by the installation and
503 506 upgrade programs.
504 507
505 508
506 509 cachefile=path | none
507 510 Controls the location of where the pool configuration is cached.
508 511 Discovering all pools on system startup requires a cached copy of
509 512 the configuration data that is stored on the root file system. All
510 513 pools in this cache are automatically imported when the system
511 514 boots. Some environments, such as install and clustering, need to
512 515 cache this information in a different location so that pools are
513 516 not automatically imported. Setting this property caches the pool
514 517 configuration in a different location that can later be imported
515 518 with "zpool import -c". Setting it to the special value "none"
516 519 creates a temporary pool that is never cached, and the special
517 520 value '' (empty string) uses the default location.
518 521
519 522 Multiple pools can share the same cache file. Because the kernel
520 523 destroys and recreates this file when pools are added and removed,
521 524 care should be taken when attempting to access this file. When the
522 525 last pool using a cachefile is exported or destroyed, the file is
523 526 removed.
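The cachefile behavior can be sketched as follows (the cache file path and pool name are hypothetical examples):

```shell
# Create a pool whose configuration is cached in a non-default location,
# so it is not auto-imported at boot
zpool create -o cachefile=/etc/zfs/alt-zpool.cache tank c0d0

# Import it later by pointing at that cache file instead of scanning devices
zpool import -c /etc/zfs/alt-zpool.cache tank
```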
524 527
525 528
526 529 comment=text
527 530 A text string consisting of printable ASCII characters that will be
528 531 stored such that it is available even if the pool becomes faulted.
529 532 An administrator can provide additional information about a pool
530 533 using this property.
531 534
532 535
533 536 dedupditto=number
534 537 Threshold for the number of block ditto copies. If the reference
535 538 count for a deduplicated block increases above this number, a new
536 539 ditto copy of this block is automatically stored. The default
537 540 setting is 0 which causes no ditto copies to be created for
 538  541       setting is 0 which causes no ditto copies to be created for
 539  542       deduplicated blocks. The minimum legal nonzero setting is 100.
539 542
540 543
541 544 delegation=on | off
542 545 Controls whether a non-privileged user is granted access based on
543 546 the dataset permissions defined on the dataset. See zfs(1M) for
544 547 more information on ZFS delegated administration.
545 548
546 549
547 550 failmode=wait | continue | panic
548 551 Controls the system behavior in the event of catastrophic pool
549 552 failure. This condition is typically a result of a loss of
550 553 connectivity to the underlying storage device(s) or a failure of
551 554 all devices within the pool. The behavior of such an event is
552 555 determined as follows:
553 556
554 557 wait
555 558 Blocks all I/O access until the device connectivity is
556 559 recovered and the errors are cleared. This is the
557 560 default behavior.
558 561
559 562
560 563 continue
561 564 Returns EIO to any new write I/O requests but allows
562 565 reads to any of the remaining healthy devices. Any
563 566 write requests that have yet to be committed to disk
564 567 would be blocked.
565 568
566 569
567 570 panic
568 571 Prints out a message to the console and generates a
569 572 system crash dump.
570 573
571 574
572 575
573 576 feature@feature_name=enabled
574 577 The value of this property is the current state of feature_name.
575 578 The only valid value when setting this property is enabled which
576 579 moves feature_name to the enabled state. See zpool-features(5) for
577 580 details on feature states.
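A hedged example of enabling a single feature this way; "async_destroy" is used only as an assumed feature name, and the features actually available on a given system are listed in zpool-features(5):

```shell
# Move one feature from "disabled" to "enabled" on an existing pool
zpool set feature@async_destroy=enabled tank
```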
578 581
579 582
580 583 listsnaps=on | off
581 584 Controls whether information about snapshots associated with this
582 585 pool is output when "zfs list" is run without the -t option. The
583 586 default value is "off".
584 587
585 588
586 589 version=version
587 590 The current on-disk version of the pool. This can be increased, but
588 591 never decreased. The preferred method of updating pools is with the
589 592 "zpool upgrade" command, though this property can be used when a
590 593 specific version is needed for backwards compatibility. Once
 591  594       feature flags are enabled on a pool this property will no longer
592 595 have a value.
593 596
594 597
595 598 Subcommands
596 599 All subcommands that modify state are logged persistently to the pool
597 600 in their original form.
598 601
599 602
600 603 The zpool command provides subcommands to create and destroy storage
601 604 pools, add capacity to storage pools, and provide information about the
602 605 storage pools. The following subcommands are supported:
603 606
604 607 zpool -?
605 608 Displays a help message.
606 609
607 610
608 611 zpool add [-fn] pool vdev ...
609 612 Adds the specified virtual devices to the given pool. The vdev
610 613 specification is described in the "Virtual Devices" section. The
611 614 behavior of the -f option, and the device checks performed are
612 615 described in the "zpool create" subcommand.
613 616
614 617 -f
615 618 Forces use of vdevs, even if they appear in use or specify a
616 619 conflicting replication level. Not all devices can be
617 620 overridden in this manner.
618 621
619 622
620 623 -n
621 624 Displays the configuration that would be used without
622 625 actually adding the vdevs. The actual pool creation can still
623 626 fail due to insufficient privileges or device sharing.
624 627
625 628 Do not add a disk that is currently configured as a quorum device
626 629 to a zpool. After a disk is in the pool, that disk can then be
627 630 configured as a quorum device.
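The -n dry run described above can be combined with a normal add, as in this hedged sketch (device names are hypothetical):

```shell
# Preview the layout that would result, without modifying the pool
zpool add -n tank mirror c2t0d0 c3t0d0

# If the preview looks right, repeat without -n to commit the change
zpool add tank mirror c2t0d0 c3t0d0
```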
628 631
629 632
630 633 zpool attach [-f] pool device new_device
631 634 Attaches new_device to an existing zpool device. The existing
632 635 device cannot be part of a raidz configuration. If device is not
633 636 currently part of a mirrored configuration, device automatically
634 637 transforms into a two-way mirror of device and new_device. If device
635 638 is part of a two-way mirror, attaching new_device creates a three-way
636 639 mirror, and so on. In either case, new_device begins to resilver
637 640 immediately.
638 641
639 642 -f
 640  643             Forces use of new_device, even if it appears to be in use.
641 644 Not all devices can be overridden in this manner.
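A brief sketch of the mirror-widening behavior described above, assuming hypothetical device names where c0t0d0 already belongs to the pool:

```shell
# Turn a single-disk top-level vdev into a two-way mirror
zpool attach tank c0t0d0 c1t0d0

# Attaching another device to that mirror widens it to three-way
zpool attach tank c0t0d0 c2t0d0
```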
642 645
643 646
644 647
645 648 zpool clear pool [device] ...
646 649 Clears device errors in a pool. If no arguments are specified, all
647 650 device errors within the pool are cleared. If one or more devices
648 651 is specified, only those errors associated with the specified
649 652 device or devices are cleared.
650 653
651 654
652 655 zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
653 656 ... [-m mountpoint] [-R root] pool vdev ...
654 657 Creates a new storage pool containing the virtual devices specified
655 658 on the command line. The pool name must begin with a letter, and
656 659 can only contain alphanumeric characters as well as underscore
657 660 ("_"), dash ("-"), and period ("."). The pool names "mirror",
658 661 "raidz", "spare" and "log" are reserved, as are names beginning
659 662 with the pattern "c[0-9]". The vdev specification is described in
660 663 the "Virtual Devices" section.
661 664
662 665 The command verifies that each device specified is accessible and
663 666 not currently in use by another subsystem. There are some uses,
664 667 such as being currently mounted, or specified as the dedicated dump
 665  668       device, that prevent a device from ever being used by ZFS. Other
666 669 uses, such as having a preexisting UFS file system, can be
667 670 overridden with the -f option.
668 671
669 672 The command also checks that the replication strategy for the pool
670 673 is consistent. An attempt to combine redundant and non-redundant
671 674 storage in a single pool, or to mix disks and files, results in an
672 675 error unless -f is specified. The use of differently sized devices
673 676 within a single raidz or mirror group is also flagged as an error
674 677 unless -f is specified.
675 678
676 679 Unless the -R option is specified, the default mount point is
677 680 "/pool". The mount point must not exist or must be empty, or else
678 681 the root dataset cannot be mounted. This can be overridden with the
679 682 -m option.
680 683
681 684 By default all supported features are enabled on the new pool
682 685 unless the -d option is specified.
683 686
684 687 -f
685 688 Forces use of vdevs, even if they appear in use or specify a
686 689 conflicting replication level. Not all devices can be
687 690 overridden in this manner.
688 691
689 692
690 693 -n
691 694 Displays the configuration that would be used without actually
692 695 creating the pool. The actual pool creation can still fail due
693 696 to insufficient privileges or device sharing.
694 697
695 698
696 699 -d
697 700 Do not enable any features on the new pool. Individual features
698 701 can be enabled by setting their corresponding properties to
699 702 enabled with the -o option. See zpool-features(5) for details
700 703 about feature properties.
701 704
702 705
703 706 -o property=value [-o property=value] ...
704 707 Sets the given pool properties. See the "Properties" section
705 708 for a list of valid properties that can be set.
706 709
707 710
708 711 -O file-system-property=value
709 712 [-O file-system-property=value] ...
710 713 Sets the given file system properties in the root file system
711 714 of the pool. See the "Properties" section of zfs(1M) for a list
712 715 of valid properties that can be set.
713 716
714 717
715 718 -R root
716 719 Equivalent to "-o cachefile=none,altroot=root"
717 720
718 721
719 722 -m mountpoint
720 723 Sets the mount point for the root dataset. The default mount
721 724 point is "/pool" or "altroot/pool" if altroot is specified. The
722 725 mount point must be an absolute path, "legacy", or "none". For
723 726 more information on dataset mount points, see zfs(1M).
724 727
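A hedged sketch combining several of the options above (pool, property, and device names are hypothetical):

```shell
# Create a pool with no features enabled (-d), one pool property set,
# and an explicit mount point for the root dataset
zpool create -d -o autoexpand=on -m /export/tank tank mirror c0t0d0 c1t0d0
```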
725 728
726 729
727 730 zpool destroy [-f] pool
728 731 Destroys the given pool, freeing up any devices for other use. This
729 732 command tries to unmount any active datasets before destroying the
730 733 pool.
731 734
732 735 -f
733 736 Forces any active datasets contained within the pool to be
734 737 unmounted.
735 738
736 739
737 740
738 741 zpool detach pool device
739 742 Detaches device from a mirror. The operation is refused if there
740 743 are no other valid replicas of the data.
741 744
742 745
743 746 zpool export [-f] pool ...
744 747 Exports the given pools from the system. All devices are marked as
745 748 exported, but are still considered in use by other subsystems. The
746 749 devices can be moved between systems (even those of different
747 750 endianness) and imported as long as a sufficient number of devices
748 751 are present.
749 752
750 753 Before exporting the pool, all datasets within the pool are
 751  754       unmounted. A pool cannot be exported if it has a shared spare that
752 755 is currently being used.
753 756
754 757 For pools to be portable, you must give the zpool command whole
755 758 disks, not just slices, so that ZFS can label the disks with
756 759 portable EFI labels. Otherwise, disk drivers on platforms of
757 760 different endianness will not recognize the disks.
758 761
759 762 -f
760 763 Forcefully unmount all datasets, using the "unmount -f"
761 764 command.
762 765
763 766 This command will forcefully export the pool even if it has a
764 767 shared spare that is currently being used. This may lead to
765 768 potential data corruption.
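The export behavior above can be illustrated with a short transcript (pool name hypothetical):

```shell
# Cleanly export a pool so its disks can be moved to another system
zpool export tank

# Force the export if datasets cannot be unmounted normally; note the
# shared-spare caveat described above
zpool export -f tank
```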
766 769
767 770
768 771
769 772 zpool get [-Hp] [-o field[,...]] "all" | property[,...] pool ...
770 773 Retrieves the given list of properties (or all properties if "all"
771 774 is used) for the specified storage pool(s). These properties are
772 775 displayed with the following fields:
773 776
774 777 name Name of storage pool
775 778 property Property name
776 779 value Property value
777 780 source Property source, either 'default' or 'local'.
778 781
779 782
780 783 See the "Properties" section for more information on the available
781 784 pool properties.
782 785
783 786
784 787 -H
785 788 Scripted mode. Do not display headers, and separate
786 789 fields by a single tab instead of arbitrary space.
787 790
788 791
789 792 -p
790 793 Display numbers in parsable (exact) values.
791 794
792 795
793 796 -o field
794 797 A comma-separated list of columns to display.
795 798 name,property,value,source is the default value.
796 799
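The scripting options above can be combined, as in this hedged sketch (pool name hypothetical):

```shell
# Print just the health of a pool, tab-separated, with no headers
zpool get -H -o name,value health tank

# Show every property with exact (parsable) numeric values
zpool get -Hp all tank
```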
797 800
798 801 zpool history [-il] [pool] ...
799 802 Displays the command history of the specified pools or all pools if
800 803 no pool is specified.
801 804
802 805 -i
803 806 Displays internally logged ZFS events in addition to user
804 807 initiated events.
805 808
806 809
807 810 -l
808 811 Displays log records in long format, which in addition to
809 812 the standard format includes the user name, the hostname,
810 813 and the zone in which the operation was performed.
811 814
812 815
813 816
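For example, both flags can be combined in one invocation; this sketch
assumes a pool named tank:

```shell
# Show user-initiated commands recorded for pool "tank".
zpool history tank

# Also show internally logged events, in long format with the user
# name, hostname, and zone of each operation.
zpool history -il tank
```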
814 817 zpool import [-d dir | -c cachefile] [-D]
815 818 Lists pools available to import. If the -d option is not specified,
816 819 this command searches for devices in "/dev/dsk". The -d option can
817 820 be specified multiple times, and all directories are searched. If
818 821 the device appears to be part of an exported pool, this command
819 822 displays a summary of the pool with the name of the pool, a numeric
820 823 identifier, as well as the vdev layout and current health of the
821 824 device for each device or file. Destroyed pools, pools that were
822 825 previously destroyed with the "zpool destroy" command, are not
823 826 listed unless the -D option is specified.
824 827
825 828 The numeric identifier is unique, and can be used instead of the
826 829 pool name when multiple exported pools of the same name are
827 830 available.
828 831
829 832 -c cachefile
830 833 Reads configuration from the given cachefile that
831 834 was created with the "cachefile" pool property.
832 835 This cachefile is used instead of searching for
833 836 devices.
834 837
835 838
836 839 -d dir
837 840 Searches for devices or files in dir. The -d option
838 841 can be specified multiple times.
839 842
840 843
841 844 -D
842 845 Lists destroyed pools only.
843 846
844 847
845 848
846 849 zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c cachefile]
847 850 [-D] [-f] [-m] [-R root] [-F [-n]] -a
848 851 Imports all pools found in the search directories. Identical to the
849 852 previous command, except that all pools with a sufficient number of
850 853 devices available are imported. Destroyed pools, pools that were
851 854 previously destroyed with the "zpool destroy" command, will not be
852 855 imported unless the -D option is specified.
853 856
854 857 -o mntopts
855 858 Comma-separated list of mount options to use
856 859 when mounting datasets within the pool. See
857 860 zfs(1M) for a description of dataset
858 861 properties and mount options.
859 862
860 863
861 864 -o property=value
862 865 Sets the specified property on the imported
863 866 pool. See the "Properties" section for more
864 867 information on the available pool properties.
865 868
866 869
867 870 -c cachefile
868 871 Reads configuration from the given cachefile
869 872 that was created with the "cachefile" pool
870 873 property. This cachefile is used instead of
871 874 searching for devices.
872 875
873 876
874 877 -d dir
875 878 Searches for devices or files in dir. The -d
876 879 option can be specified multiple times. This
877 880 option is incompatible with the -c option.
878 881
879 882
880 883 -D
881 884 Imports destroyed pools only. The -f option is
882 885 also required.
883 886
884 887
885 888 -f
886 889 Forces import, even if the pool appears to be
887 890 potentially active.
888 891
889 892
890 893 -F
891 894 Recovery mode for a non-importable pool.
892 895 Attempt to return the pool to an importable
893 896 state by discarding the last few transactions.
894 897 Not all damaged pools can be recovered by
895 898 using this option. If successful, the data
896 899 from the discarded transactions is
897 900 irretrievably lost. This option is ignored if
898 901 the pool is importable or already imported.
899 902
900 903
901 904 -a
902 905 Searches for and imports all pools found.
903 906
904 907
905 908 -m
906 909 Allows a pool to import when there is a
907 910 missing log device. Recent transactions can be
908 911 lost because the log device will be discarded.
909 912
910 913
911 914 -R root
912 915 Sets the "cachefile" property to "none" and
913 916 the "altroot" property to "root".
914 917
915 918
916 919 -N
917 920 Import the pool without mounting any file
918 921 systems.
919 922
920 923
921 924 -n
922 925 Used with the -F recovery option. Determines
923 926 whether a non-importable pool can be made
924 927 importable again, but does not actually
925 928 perform the pool recovery. For more details
926 929 about pool recovery mode, see the -F option,
927 930 above.
928 931
929 932
930 933
931 934 zpool import [-o mntopts] [ -o property=value] ... [-d dir | -c cachefile]
932 935 [-D] [-f] [-m] [-R root] [-F [-n]] pool | id [newpool]
933 936 Imports a specific pool. A pool can be identified by its name or
934 937 the numeric identifier. If newpool is specified, the pool is
935 938 imported using the name newpool. Otherwise, it is imported with the
936 939 same name as its exported name.
937 940
938 941 If a device is removed from a system without running "zpool export"
939 942 first, the device appears as potentially active. It cannot be
940 943 determined if this was a failed export, or whether the device is
941 944 really in use from another host. To import a pool in this state,
942 945 the -f option is required.
943 946
944 947 -o mntopts
945 948 Comma-separated list of mount options to use when mounting
946 949 datasets within the pool. See zfs(1M) for a description of
947 950 dataset properties and mount options.
948 951
949 952
950 953 -o property=value
951 954 Sets the specified property on the imported pool. See the
952 955 "Properties" section for more information on the available pool
953 956 properties.
954 957
955 958
956 959 -c cachefile
957 960 Reads configuration from the given cachefile that was created
958 961 with the "cachefile" pool property. This cachefile is used
959 962 instead of searching for devices.
960 963
961 964
962 965 -d dir
963 966 Searches for devices or files in dir. The -d option can be
964 967 specified multiple times. This option is incompatible with the
965 968 -c option.
966 969
967 970
968 971 -D
969 972 Imports destroyed pools only. The -f option is also required.
970 973
971 974
972 975 -f
973 976 Forces import, even if the pool appears to be potentially
974 977 active.
975 978
976 979
977 980 -F
978 981 Recovery mode for a non-importable pool. Attempt to return the
979 982 pool to an importable state by discarding the last few
980 983 transactions. Not all damaged pools can be recovered by using
981 984 this option. If successful, the data from the discarded
982 985 transactions is irretrievably lost. This option is ignored if
983 986 the pool is importable or already imported.
984 987
985 988
986 989 -R root
987 990 Sets the "cachefile" property to "none" and the "altroot"
988 991 property to "root".
989 992
990 993
991 994 -n
992 995 Used with the -F recovery option. Determines whether a non-
993 996 importable pool can be made importable again, but does not
994 997 actually perform the pool recovery. For more details about pool
995 998 recovery mode, see the -F option, above.
996 999
997 1000
998 1001 -m
999 1002 Allows a pool to import when there is a missing log device.
1000 1003 Recent transactions can be lost because the log device will be
1001 1004 discarded.
1002 1005
1003 1006
1004 1007
1005 1008 zpool iostat [-T u | d] [-v] [pool] ... [interval[count]]
1006 1009 Displays I/O statistics for the given pools. When given an
1007 1010 interval, the statistics are printed every interval seconds until
1008 1011 Ctrl-C is pressed. If no pools are specified, statistics for every
1009 1012 pool in the system are shown. If count is specified, the command
1010 1013 exits after count reports are printed.
1011 1014
1012 1015 -T u | d
1013 1016 Display a time stamp.
1014 1017
1015 1018 Specify u for a printed representation of the internal
1016 1019 representation of time. See time(2). Specify d for
1017 1020 standard date format. See date(1).
1018 1021
1019 1022
1020 1023 -v
1021 1024 Verbose statistics. Reports usage statistics for
1022 1025 individual vdevs within the pool, in addition to the
1023 1026 pool-wide statistics.
1024 1027
1025 1028
1026 1029
1027 1030 zpool list [-T u | d] [-Hpv] [-o props[,...]] [pool] ... [interval[count]]
1028 1031 Lists the given pools along with a health status and space usage.
1029 1032 If no pools are specified, all pools in the system are listed. When
1030 1033 given an interval, the information is printed every interval
1031 1034 seconds until Ctrl-C is pressed. If count is specified, the command
1032 1035 exits after count reports are printed.
1033 1036
1034 1037 -T u | d
1035 1038 Display a time stamp.
1036 1039
1037 1040 Specify u for a printed representation of the internal
1038 1041 representation of time. See time(2). Specify d for
1039 1042 standard date format. See date(1).
1040 1043
1041 1044
1042 1045 -H
1043 1046 Scripted mode. Do not display headers, and separate
1044 1047 fields by a single tab instead of arbitrary space.
1045 1048
1046 1049
1047 1050 -p
1048 1051 Display numbers in parsable (exact) values.
1049 1052
1050 1053
1051 1054 -o props
1052 1055 Comma-separated list of properties to display. See the
1053 1056 "Properties" section for a list of valid properties.
1054 1057 The default list is "name, size, used, available,
1055 1058 fragmentation, expandsize, capacity, dedupratio,
1056 1059 health, altroot"
1057 1060
1058 1061
1059 1062 -v
1060 1063 Verbose statistics. Reports usage statistics for
1061 1064 individual vdevs within the pool, in addition to the
1062 1065 pool-wide statistics.
1063 1066
1064 1067
1065 1068
1066 1069 zpool offline [-t] pool device ...
1067 1070 Takes the specified physical device offline. While the device is
1068 1071 offline, no attempt is made to read or write to the device.
1069 1072
1070 1073 This command is not applicable to spares or cache devices.
1071 1074
1072 1075 -t
1073 1076 Temporary. Upon reboot, the specified physical device reverts
1074 1077 to its previous state.
1075 1078
1076 1079
1077 1080
1078 1081 zpool online [-e] pool device...
1079 1082 Brings the specified physical device online.
1080 1083
1081 1084 This command is not applicable to spares or cache devices.
1082 1085
1083 1086 -e
1084 1087 Expand the device to use all available space. If the device
1085 1088 is part of a mirror or raidz then all devices must be
1086 1089 expanded before the new space will become available to the
1087 1090 pool.
1088 1091
1089 1092
1090 1093
1091 1094 zpool reguid pool
1092 1095 Generates a new unique identifier for the pool. You must ensure
1093 1096 that all devices in this pool are online and healthy before
1094 1097 performing this action.
1095 1098
1096 1099
1097 1100 zpool reopen pool
1098 1101 Reopen all the vdevs associated with the pool.
1099 1102
1100 1103
1101 1104 zpool remove pool device ...
1102 1105 Removes the specified device from the pool. This command currently
1103 1106 only supports removing hot spares, cache, and log devices. A
1104 1107 mirrored log device can be removed by specifying the top-level
1105 1108 mirror for the log. Non-log devices that are part of a mirrored
1106 1109 configuration can be removed using the zpool detach command. Non-
1107 1110 redundant and raidz devices cannot be removed from a pool.
1108 1111
1109 1112
1110 1113 zpool replace [-f] pool old_device [new_device]
1111 1114 Replaces old_device with new_device. This is equivalent to
1112 1115 attaching new_device, waiting for it to resilver, and then
1113 1116 detaching old_device.
1114 1117
1115 1118 The size of new_device must be greater than or equal to the minimum
1116 1119 size of all the devices in a mirror or raidz configuration.
1117 1120
1118 1121 new_device is required if the pool is not redundant. If new_device
1119 1122 is not specified, it defaults to old_device. This form of
1120 1123 replacement is useful after an existing disk has failed and has
1121 1124 been physically replaced. In this case, the new disk may have the
1122 1125 same /dev/dsk path as the old device, even though it is actually a
1123 1126 different disk. ZFS recognizes this.
1124 1127
1125 1128 -f
1126 1129 Forces use of new_device, even if it appears to be in use.
1127 1130 Not all devices can be overridden in this manner.
1128 1131
1129 1132
1130 1133
1131 1134 zpool scrub [-s] pool ...
1132 1135 Begins a scrub. The scrub examines all data in the specified pools
1133 1136 to verify that it checksums correctly. For replicated (mirror or
1134 1137 raidz) devices, ZFS automatically repairs any damage discovered
1135 1138 during the scrub. The "zpool status" command reports the progress
1136 1139 of the scrub and summarizes the results of the scrub upon
1137 1140 completion.
1138 1141
1139 1142 Scrubbing and resilvering are very similar operations. The
1140 1143 difference is that resilvering only examines data that ZFS knows to
1141 1144 be out of date (for example, when attaching a new device to a
1142 1145 mirror or replacing an existing device), whereas scrubbing examines
1143 1146 all data to discover silent errors due to hardware faults or disk
1144 1147 failure.
1145 1148
1146 1149 Because scrubbing and resilvering are I/O-intensive operations, ZFS
1147 1150 only allows one at a time. If a scrub is already in progress, the
1148 1151 "zpool scrub" command terminates it and starts a new scrub. If a
1149 1152 resilver is in progress, ZFS does not allow a scrub to be started
1150 1153 until the resilver completes.
1151 1154
1152 1155 -s
1153 1156 Stop scrubbing.
1154 1157
1155 1158
1156 1159
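To make the behavior concrete (the pool name tank is hypothetical),
starting and stopping a scrub is simply:

```shell
# Begin verifying all data in pool "tank"; damage found on redundant
# (mirror or raidz) vdevs is repaired automatically.
zpool scrub tank

# Stop an in-progress scrub.
zpool scrub -s tank
```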
1157 1160 zpool set property=value pool
1158 1161 Sets the given property on the specified pool. See the "Properties"
1159 1162 section for more information on what properties can be set and
1160 1163 acceptable values.
1161 1164
1162 1165
1166 + zpool split [-n] [-R altroot] [-o mntopts] [-o property=value] pool newpool
1167 + [device ... ]
1168 +
1169 + Splits off one disk from each mirrored top-level vdev in a pool and
1170 + creates a new pool from the split-off disks. The original pool must
1171 + be made up of one or more mirrors and must not be in the process of
1172 + resilvering. The split subcommand chooses the last device in each
1173 + mirror vdev unless overridden by a device specification on the
1174 + command line.
1175 +
1176 + When using a device argument, split includes the specified
1177 + device(s) in a new pool and, should any devices remain unspecified,
1178 + assigns the last device in each mirror vdev to that pool, as it
1179 + does normally. If you are uncertain about the outcome of a split
1180 + command, use the -n ("dry-run") option to ensure your command will
1181 + have the effect you intend.
1182 +
1183 +
1184 + -n
1185 + Displays the configuration that would be created without
1186 + actually splitting the pool. The actual pool split could still
1187 + fail due to insufficient privileges or device status.
1188 +
1189 +
1190 + -R altroot
1191 + Automatically import the newly created pool after splitting,
1192 + using the specified altroot parameter for the new pool's
1193 + alternate root. See the altroot description in the "Properties"
1194 + section, above.
1195 +
1196 +
1197 + -o mntopts
1198 + Comma-separated list of mount options to use when mounting
1199 + datasets within the pool. See zfs(1M) for a description of
1200 + dataset properties and mount options. Valid only in conjunction
1201 + with the -R option.
1202 +
1203 +
1204 + -o property=value
1205 + Sets the specified property on the new pool. See the
1206 + "Properties" section, above, for more information on the
1207 + available pool properties.
1208 +
1209 +
1210 +
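As a hedged sketch of the workflow described above (the pool, new-pool,
and device names are hypothetical), a dry run can be used to preview the
split before committing to it:

```shell
# Preview the configuration "tank2" would receive; nothing is changed.
zpool split -n tank tank2

# Split off one disk per mirror, naming c0t1d0 explicitly, and import
# the new pool immediately under the alternate root /mnt.
zpool split -R /mnt tank tank2 c0t1d0
```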
1163 1211 zpool status [-xvD] [-T u | d ] [pool] ... [interval [count]]
1164 1212 Displays the detailed health status for the given pools. If no pool
1165 1213 is specified, then the status of each pool in the system is
1166 1214 displayed. For more information on pool and device health, see the
1167 1215 "Device Failure and Recovery" section.
1168 1216
1169 1217 If a scrub or resilver is in progress, this command reports the
1170 1218 percentage done and the estimated time to completion. Both of these
1171 1219 are only approximate, because the amount of data in the pool and
1172 1220 the other workloads on the system can change.
1173 1221
1174 1222 -x
1175 1223 Only display status for pools that are exhibiting errors or
1176 1224 are otherwise unavailable. Warnings about pools not using the
1177 1225 latest on-disk format will not be included.
1178 1226
1179 1227
1180 1228 -v
1181 1229 Displays verbose data error information, printing out a
1182 1230 complete list of all data errors since the last complete pool
1183 1231 scrub.
1184 1232
1185 1233
1186 1234 -D
1187 1235 Display a histogram of deduplication statistics, showing the
1188 1236 allocated (physically present on disk) and referenced
1189 1237 (logically referenced in the pool) block counts and sizes by
1190 1238 reference count.
1191 1239
1192 1240
1193 1241 -T u | d
1194 1242 Display a time stamp.
1195 1243
1196 1244 Specify u for a printed representation of the internal
1197 1245 representation of time. See time(2). Specify d for
1198 1246 standard date format. See date(1).
1199 1247
1200 1248
1201 1249
1202 1250 zpool upgrade
1203 1251 Displays pools which do not have all supported features enabled and
1204 1252 pools formatted using a legacy ZFS version number. These pools can
1205 1253 continue to be used, but some features may not be available. Use
1206 1254 "zpool upgrade -a" to enable all features on all pools.
1207 1255
1208 1256
1209 1257 zpool upgrade -v
1210 1258 Displays legacy ZFS versions supported by the current software. See
1211 1259 zpool-features(5) for a description of the feature flags
1212 1260 supported by the current software.
1213 1261
1214 1262
1215 1263 zpool upgrade [-V version] -a | pool ...
1216 1264 Enables all supported features on the given pool. Once this is
1217 1265 done, the pool will no longer be accessible on systems that do not
1218 1266 support feature flags. See zpool-features(5) for details on
1219 1267 compatibility with systems that support feature flags, but do not
1220 1268 support all features enabled on the pool.
1221 1269
1222 1270 -a
1223 1271 Enables all supported features on all pools.
1224 1272
1225 1273
1226 1274 -V version
1227 1275 Upgrade to the specified legacy version. If the -V
1228 1276 flag is specified, no features will be enabled on the
1229 1277 pool. This option can only be used to increase the
1230 1278 version number up to the last supported legacy
1231 1279 version.
1232 1280
1233 1281
1234 1282
1235 1283 EXAMPLES
1236 1284 Example 1 Creating a RAID-Z Storage Pool
1237 1285
1238 1286
1239 1287 The following command creates a pool with a single raidz root vdev that
1240 1288 consists of six disks.
1241 1289
1242 1290
1243 1291 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1244 1292
1245 1293
1246 1294
1247 1295 Example 2 Creating a Mirrored Storage Pool
1248 1296
1249 1297
1250 1298 The following command creates a pool with two mirrors, where each
1251 1299 mirror contains two disks.
1252 1300
1253 1301
1254 1302 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1255 1303
1256 1304
1257 1305
1258 1306 Example 3 Creating a ZFS Storage Pool by Using Slices
1259 1307
1260 1308
1261 1309 The following command creates an unmirrored pool using two disk slices.
1262 1310
1263 1311
1264 1312 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1265 1313
1266 1314
1267 1315
1268 1316 Example 4 Creating a ZFS Storage Pool by Using Files
1269 1317
1270 1318
1271 1319 The following command creates an unmirrored pool using files. While not
1272 1320 recommended, a pool based on files can be useful for experimental
1273 1321 purposes.
1274 1322
1275 1323
1276 1324 # zpool create tank /path/to/file/a /path/to/file/b
1277 1325
1278 1326
1279 1327
1280 1328 Example 5 Adding a Mirror to a ZFS Storage Pool
1281 1329
1282 1330
1283 1331 The following command adds two mirrored disks to the pool "tank",
1284 1332 assuming the pool is already made up of two-way mirrors. The additional
1285 1333 space is immediately available to any datasets within the pool.
1286 1334
1287 1335
1288 1336 # zpool add tank mirror c1t0d0 c1t1d0
1289 1337
1290 1338
1291 1339
1292 1340 Example 6 Listing Available ZFS Storage Pools
1293 1341
1294 1342
1295 1343 The following command lists all available pools on the system. In this
1296 1344 case, the pool zion is faulted due to a missing device.
1297 1345
1298 1346
1299 1347
1300 1348 The results from this command are similar to the following:
1301 1349
1302 1350
1303 1351 # zpool list
1304 1352 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1305 1353 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1306 1354 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1307 1355 zion - - - - - - - FAULTED -
1308 1356
1309 1357
1310 1358
1311 1359 Example 7 Destroying a ZFS Storage Pool
1312 1360
1313 1361
1314 1362 The following command destroys the pool "tank" and any datasets
1315 1363 contained within.
1316 1364
1317 1365
1318 1366 # zpool destroy -f tank
1319 1367
1320 1368
1321 1369
1322 1370 Example 8 Exporting a ZFS Storage Pool
1323 1371
1324 1372
1325 1373 The following command exports the devices in pool tank so that they can
1326 1374 be relocated or later imported.
1327 1375
1328 1376
1329 1377 # zpool export tank
1330 1378
1331 1379
1332 1380
1333 1381 Example 9 Importing a ZFS Storage Pool
1334 1382
1335 1383
1336 1384 The following command displays available pools, and then imports the
1337 1385 pool "tank" for use on the system.
1338 1386
1339 1387
1340 1388
1341 1389 The results from this command are similar to the following:
1342 1390
1343 1391
1344 1392 # zpool import
1345 1393 pool: tank
1346 1394 id: 15451357997522795478
1347 1395 state: ONLINE
1348 1396 action: The pool can be imported using its name or numeric identifier.
1349 1397 config:
1350 1398
1351 1399 tank ONLINE
1352 1400 mirror ONLINE
1353 1401 c1t2d0 ONLINE
1354 1402 c1t3d0 ONLINE
1355 1403
1356 1404 # zpool import tank
1357 1405
1358 1406
1359 1407
1360 1408 Example 10 Upgrading All ZFS Storage Pools to the Current Version
1361 1409
1362 1410
1363 1411 The following command upgrades all ZFS Storage pools to the current
1364 1412 version of the software.
1365 1413
1366 1414
1367 1415 # zpool upgrade -a
1368 1416 This system is currently running ZFS version 2.
1369 1417
1370 1418
1371 1419
1372 1420 Example 11 Managing Hot Spares
1373 1421
1374 1422
1375 1423 The following command creates a new pool with an available hot spare:
1376 1424
1377 1425
1378 1426 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1379 1427
1380 1428
1381 1429
1382 1430
1383 1431 If one of the disks were to fail, the pool would be reduced to the
1384 1432 degraded state. The failed device can be replaced using the following
1385 1433 command:
1386 1434
1387 1435
1388 1436 # zpool replace tank c0t0d0 c0t3d0
1389 1437
1390 1438
1391 1439
1392 1440
1393 1441 Once the data has been resilvered, the spare is automatically removed
1394 1442 and is made available should another device fail. The hot spare can be
1395 1443 permanently removed from the pool using the following command:
1396 1444
1397 1445
1398 1446 # zpool remove tank c0t2d0
1399 1447
1400 1448
1401 1449
1402 1450 Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1403 1451
1404 1452
1405 1453 The following command creates a ZFS storage pool consisting of two
1406 1454 two-way mirrors and mirrored log devices:
1407 1455
1408 1456
1409 1457 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1410 1458 c4d0 c5d0
1411 1459
1412 1460
1413 1461
1414 1462 Example 13 Adding Cache Devices to a ZFS Pool
1415 1463
1416 1464
1417 1465 The following command adds two disks for use as cache devices to a ZFS
1418 1466 storage pool:
1419 1467
1420 1468
1421 1469 # zpool add pool cache c2d0 c3d0
1422 1470
1423 1471
1424 1472
1425 1473
1426 1474 Once added, the cache devices gradually fill with content from main
1427 1475 memory. Depending on the size of your cache devices, it could take
1428 1476 over an hour for them to fill. Capacity and reads can be monitored
1429 1477 using the iostat option as follows:
1430 1478
1431 1479
1432 1480 # zpool iostat -v pool 5
1433 1481
1434 1482
1435 1483
1436 1484 Example 14 Removing a Mirrored Log Device
1437 1485
1438 1486
1439 1487 The following command removes the mirrored log device mirror-2.
1440 1488
1441 1489
1442 1490
1443 1491 Given this configuration:
1444 1492
1445 1493
1446 1494 pool: tank
1447 1495 state: ONLINE
1448 1496 scrub: none requested
1449 1497 config:
1450 1498
1451 1499 NAME STATE READ WRITE CKSUM
1452 1500 tank ONLINE 0 0 0
1453 1501 mirror-0 ONLINE 0 0 0
1454 1502 c6t0d0 ONLINE 0 0 0
1455 1503 c6t1d0 ONLINE 0 0 0
1456 1504 mirror-1 ONLINE 0 0 0
1457 1505 c6t2d0 ONLINE 0 0 0
1458 1506 c6t3d0 ONLINE 0 0 0
1459 1507 logs
1460 1508 mirror-2 ONLINE 0 0 0
1461 1509 c4t0d0 ONLINE 0 0 0
1462 1510 c4t1d0 ONLINE 0 0 0
1463 1511
1464 1512
1465 1513
1466 1514
1467 1515 The command to remove the mirrored log mirror-2 is:
1468 1516
1469 1517
1470 1518 # zpool remove tank mirror-2
1471 1519
1472 1520
1473 1521
1474 1522 Example 15 Displaying expanded space on a device
1475 1523
1476 1524
1477 1525 The following command displays the detailed information for the data
1478 1526 pool. This pool consists of a single raidz vdev where one of its
1479 1527 devices increased its capacity by 10GB. In this example, the pool will
1480 1528 not be able to utilize this extra capacity until all the devices under
1481 1529 the raidz vdev have been expanded.
1482 1530
1483 1531
1484 1532 # zpool list -v data
1485 1533 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1486 1534 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1487 1535 raidz1 23.9G 14.6G 9.30G 48% -
1488 1536 c1t1d0 - - - - -
1489 1537 c1t2d0 - - - - 10G
1490 1538 c1t3d0 - - - - -
1491 1539
1492 1540
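Following on from the example above, and assuming the remaining devices
have since been grown as well, the expanded space could be brought online
with the -e flag of zpool online (a sketch; device names as in the
listing):

```shell
# Expand each raidz member to use all available space. The pool gains
# capacity only once every device in the vdev has been expanded.
zpool online -e data c1t1d0 c1t2d0 c1t3d0
```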
1493 1541 EXIT STATUS
1494 1542 The following exit values are returned:
1495 1543
1496 1544 0
1497 1545 Successful completion.
1498 1546
1499 1547
1500 1548 1
1501 1549 An error occurred.
1502 1550
1503 1551
1504 1552 2
1505 1553 Invalid command line options were specified.
1506 1554
1507 1555
1508 1556 ATTRIBUTES
1509 1557 See attributes(5) for descriptions of the following attributes:
1510 1558
1511 1559
1512 1560
1513 1561
1514 1562 +--------------------+-----------------+
1515 1563 | ATTRIBUTE TYPE | ATTRIBUTE VALUE |
1516 1564 +--------------------+-----------------+
1517 1565 |Interface Stability | Evolving |
1518 1566 +--------------------+-----------------+
1519 1567
1520 1568 SEE ALSO
1521 1569 zfs(1M), zpool-features(5), attributes(5)
1522 1570
1523 1571
1524 1572
1525 1573 March 6, 2014 ZPOOL(1M)