5882 Temporary pool names
Reviewed by: Matt Ahrens <matt@delphix.com>
Reviewed by: Igor Kozhukhov <igor@dilos.org>
Reviewed by: John Kennedy <john.kennedy@delphix.com>
Approved by: Dan McDonald <danmcd@joyent.com>
--- old/usr/src/man/man1m/zpool.1m.man.txt
+++ new/usr/src/man/man1m/zpool.1m.man.txt
1 1 ZPOOL(1M) Maintenance Commands ZPOOL(1M)
2 2
3 3 NAME
4 4 zpool - configure ZFS storage pools
5 5
6 6 SYNOPSIS
7 7 zpool -?
8 8 zpool add [-fn] pool vdev...
9 9 zpool attach [-f] pool device new_device
10 10 zpool checkpoint [-d, --discard] pool
11 11 zpool clear pool [device]
12 12 zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
13 - [-O file-system-property=value]... [-R root] pool vdev...
13 + [-O file-system-property=value]... [-R root] [-t tempname]
14 + pool vdev...
14 15 zpool destroy [-f] pool
15 16 zpool detach pool device
16 17 zpool export [-f] pool...
17 18 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
18 19 zpool history [-il] [pool]...
19 20 zpool import [-D] [-d dir]
20 21 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
21 22 [-o property=value]... [-R root]
22 - zpool import [-Dfm] [-F [-n]] [--rewind-to-checkpoint]
23 + zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint]
23 24 [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root]
24 25 pool|id [newpool]
25 26 zpool initialize [-cs] pool [device...]
26 27 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
27 28 zpool labelclear [-f] device
28 29 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
29 30 [interval [count]]
30 31 zpool offline [-t] pool device...
31 32 zpool online [-e] pool device...
32 33 zpool reguid pool
33 34 zpool reopen pool
34 35 zpool remove [-np] pool device...
35 36 zpool remove -s pool
36 37 zpool replace [-f] pool device [new_device]
37 38 zpool scrub [-s | -p] pool...
38 39 zpool set property=value pool
39 40 zpool split [-n] [-o property=value]... [-R root] pool newpool
40 41 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
41 42 zpool upgrade
42 43 zpool upgrade -v
43 44 zpool upgrade [-V version] -a|pool...
44 45
45 46 DESCRIPTION
46 47 The zpool command configures ZFS storage pools. A storage pool is a
47 48 collection of devices that provides physical storage and data replication
48 49 for ZFS datasets. All datasets within a storage pool share the same
49 50 space. See zfs(1M) for information on managing datasets.
50 51
51 52 Virtual Devices (vdevs)
52 53 A "virtual device" describes a single device or a collection of devices
53 54 organized according to certain performance and fault characteristics.
54 55 The following virtual devices are supported:
55 56
56 57 disk A block device, typically located under /dev/dsk. ZFS can use
57 58 individual slices or partitions, though the recommended mode of
58 59 operation is to use whole disks. A disk can be specified by a
59 60 full path, or it can be a shorthand name (the relative portion of
60 61 the path under /dev/dsk). A whole disk can be specified by
61 62 omitting the slice or partition designation. For example, c0t0d0
62 63 is equivalent to /dev/dsk/c0t0d0s2. When given a whole disk, ZFS
63 64 automatically labels the disk, if necessary.
64 65
65 66 file A regular file. The use of files as a backing store is strongly
66 67 discouraged. It is designed primarily for experimental purposes,
67 68 as the fault tolerance of a file is only as good as the file
68 69 system of which it is a part. A file must be specified by a full
69 70 path.
70 71
71 72 mirror A mirror of two or more devices. Data is replicated in an
72 73 identical fashion across all components of a mirror. A mirror
73 74 with N disks of size X can hold X bytes and can withstand (N-1)
74 75 devices failing before data integrity is compromised.
75 76
76 77 raidz, raidz1, raidz2, raidz3
77 78 A variation on RAID-5 that allows for better distribution of
78 79 parity and eliminates the RAID-5 "write hole" (in which data and
79 80 parity become inconsistent after a power loss). Data and parity
80 81 is striped across all disks within a raidz group.
81 82
82 83 A raidz group can have single-, double-, or triple-parity,
83 84 meaning that the raidz group can sustain one, two, or three
84 85 failures, respectively, without losing any data. The raidz1 vdev
85 86 type specifies a single-parity raidz group; the raidz2 vdev type
86 87 specifies a double-parity raidz group; and the raidz3 vdev type
87 88 specifies a triple-parity raidz group. The raidz vdev type is an
88 89 alias for raidz1.
89 90
90 91 A raidz group with N disks of size X with P parity disks can hold
91 92 approximately (N-P)*X bytes and can withstand P device(s) failing
92 93 before data integrity is compromised. The minimum number of
93 94 devices in a raidz group is one more than the number of parity
94 95 disks. The recommended number is between 3 and 9 to help
95 96 increase performance.
96 97
97 98 spare A special pseudo-vdev which keeps track of available hot spares
98 99 for a pool. For more information, see the Hot Spares section.
99 100
100 101 log A separate intent log device. If more than one log device is
101 102 specified, then writes are load-balanced between devices. Log
102 103 devices can be mirrored. However, raidz vdev types are not
103 104 supported for the intent log. For more information, see the
104 105 Intent Log section.
105 106
106 107 cache A device used to cache storage pool data. A cache device cannot
107 108 be configured as a mirror or raidz group. For more information,
108 109 see the Cache Devices section.
109 110
110 111 Virtual devices cannot be nested, so a mirror or raidz virtual device can
111 112 only contain files or disks. Mirrors of mirrors (or other combinations)
112 113 are not allowed.
113 114
114 115 A pool can have any number of virtual devices at the top of the
115 116 configuration (known as "root vdevs"). Data is dynamically distributed
116 117 across all top-level devices to balance data among devices. As new
117 118 virtual devices are added, ZFS automatically places data on the newly
118 119 available devices.
119 120
120 121 Virtual devices are specified one at a time on the command line,
121 122 separated by whitespace. The keywords mirror and raidz are used to
122 123 distinguish where a group ends and another begins. For example, the
123 124 following creates two root vdevs, each a mirror of two disks:
124 125
125 126 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
126 127
127 128 Device Failure and Recovery
128 129 ZFS supports a rich set of mechanisms for handling device failure and
129 130 data corruption. All metadata and data is checksummed, and ZFS
130 131 automatically repairs bad data from a good copy when corruption is
131 132 detected.
132 133
133 134 In order to take advantage of these features, a pool must make use of
134 135 some form of redundancy, using either mirrored or raidz groups. While
135 136 ZFS supports running in a non-redundant configuration, where each root
136 137 vdev is simply a disk or file, this is strongly discouraged. A single
137 138 case of bit corruption can render some or all of your data unavailable.
138 139
139 140 A pool's health status is described by one of three states: online,
140 141 degraded, or faulted. An online pool has all devices operating normally.
141 142 A degraded pool is one in which one or more devices have failed, but the
142 143 data is still available due to a redundant configuration. A faulted pool
143 144 has corrupted metadata, or one or more faulted devices, and insufficient
144 145 replicas to continue functioning.
145 146
146 147 The health of the top-level vdev, such as mirror or raidz device, is
147 148 potentially impacted by the state of its associated vdevs, or component
148 149 devices. A top-level vdev or component device is in one of the following
149 150 states:
150 151
151 152 DEGRADED One or more top-level vdevs is in the degraded state because
152 153 one or more component devices are offline. Sufficient replicas
153 154 exist to continue functioning.
154 155
155 156 One or more component devices is in the degraded or faulted
156 157 state, but sufficient replicas exist to continue functioning.
157 158 The underlying conditions are as follows:
158 159
159 160 o The number of checksum errors exceeds acceptable levels and
160 161 the device is degraded as an indication that something may
161 162 be wrong. ZFS continues to use the device as necessary.
162 163
163 164 o The number of I/O errors exceeds acceptable levels. The
164 165 device could not be marked as faulted because there are
165 166 insufficient replicas to continue functioning.
166 167
167 168 FAULTED One or more top-level vdevs is in the faulted state because one
168 169 or more component devices are offline. Insufficient replicas
169 170 exist to continue functioning.
170 171
171 172 One or more component devices is in the faulted state, and
172 173 insufficient replicas exist to continue functioning. The
173 174 underlying conditions are as follows:
174 175
175 176 o The device could be opened, but the contents did not match
176 177 expected values.
177 178
178 179 o The number of I/O errors exceeds acceptable levels and the
179 180 device is faulted to prevent further use of the device.
180 181
181 182 OFFLINE The device was explicitly taken offline by the zpool offline
182 183 command.
183 184
184 185 ONLINE The device is online and functioning.
185 186
186 187 REMOVED The device was physically removed while the system was running.
187 188 Device removal detection is hardware-dependent and may not be
188 189 supported on all platforms.
189 190
190 191 UNAVAIL The device could not be opened. If a pool is imported when a
191 192 device was unavailable, then the device will be identified by a
192 193 unique identifier instead of its path since the path was never
193 194 correct in the first place.
194 195
195 196 If a device is removed and later re-attached to the system, ZFS attempts
196 197 to put the device online automatically. Device attach detection is
197 198 hardware-dependent and might not be supported on all platforms.
198 199
199 200 Hot Spares
200 201 ZFS allows devices to be associated with pools as "hot spares". These
201 202 devices are not actively used in the pool, but when an active device
202 203 fails, it is automatically replaced by a hot spare. To create a pool
203 204 with hot spares, specify a spare vdev with any number of devices. For
204 205 example,
205 206
206 207 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
207 208
208 209 Spares can be shared across multiple pools, and can be added with the
209 210 zpool add command and removed with the zpool remove command. Once a
210 211 spare replacement is initiated, a new spare vdev is created within the
211 212 configuration that will remain there until the original device is
212 213 replaced. At this point, the hot spare becomes available again if
213 214 another device fails.
214 215
215 216        If a pool has a shared spare that is currently being used, the pool
216 217        cannot be exported since other pools may use this shared spare, which may
217 218 lead to potential data corruption.
218 219
219 220 An in-progress spare replacement can be cancelled by detaching the hot
220 221 spare. If the original faulted device is detached, then the hot spare
221 222 assumes its place in the configuration, and is removed from the spare
222 223 list of all active pools.
223 224
224 225 Spares cannot replace log devices.
225 226
226 227 Intent Log
227 228 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
228 229 transactions. For instance, databases often require their transactions
229 230 to be on stable storage devices when returning from a system call. NFS
230 231 and other applications can also use fsync(3C) to ensure data stability.
231 232 By default, the intent log is allocated from blocks within the main pool.
232 233 However, it might be possible to get better performance using separate
233 234 intent log devices such as NVRAM or a dedicated disk. For example:
234 235
235 236 # zpool create pool c0d0 c1d0 log c2d0
236 237
237 238 Multiple log devices can also be specified, and they can be mirrored.
238 239 See the EXAMPLES section for an example of mirroring multiple log
239 240 devices.
240 241
241 242 Log devices can be added, replaced, attached, detached, and imported and
242 243 exported as part of the larger pool. Mirrored devices can be removed by
243 244 specifying the top-level mirror vdev.
244 245
245 246 Cache Devices
246 247 Devices can be added to a storage pool as "cache devices". These devices
247 248 provide an additional layer of caching between main memory and disk. For
248 249 read-heavy workloads, where the working set size is much larger than what
249 250        can be cached in main memory, using cache devices allows much more of this
250 251 working set to be served from low latency media. Using cache devices
251 252        provides the greatest performance improvement for random-read workloads
252 253 of mostly static content.
253 254
254 255 To create a pool with cache devices, specify a cache vdev with any number
255 256 of devices. For example:
256 257
257 258 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
258 259
259 260 Cache devices cannot be mirrored or part of a raidz configuration. If a
260 261 read error is encountered on a cache device, that read I/O is reissued to
261 262 the original storage pool device, which might be part of a mirrored or
262 263 raidz configuration.
263 264
264 265 The content of the cache devices is considered volatile, as is the case
265 266 with other system caches.
266 267
267 268 Pool checkpoint
268 269        Before starting critical procedures that include destructive actions (e.g.
269 270        zfs destroy), an administrator can checkpoint the pool's state and, in
270 271 the case of a mistake or failure, rewind the entire pool back to the
271 272 checkpoint. Otherwise, the checkpoint can be discarded when the
272 273 procedure has completed successfully.
273 274
274 275 A pool checkpoint can be thought of as a pool-wide snapshot and should be
275 276 used with care as it contains every part of the pool's state, from
276 277 properties to vdev configuration. Thus, while a pool has a checkpoint
277 278 certain operations are not allowed. Specifically, vdev
278 279 removal/attach/detach, mirror splitting, and changing the pool's guid.
279 280 Adding a new vdev is supported but in the case of a rewind it will have
280 281 to be added again. Finally, users of this feature should keep in mind
281 282 that scrubs in a pool that has a checkpoint do not repair checkpointed
282 283 data.
283 284
284 285 To create a checkpoint for a pool:
285 286
286 287 # zpool checkpoint pool
287 288
288 289 To later rewind to its checkpointed state, you need to first export it
289 290 and then rewind it during import:
290 291
291 292 # zpool export pool
292 293 # zpool import --rewind-to-checkpoint pool
293 294
294 295 To discard the checkpoint from a pool:
295 296
296 297 # zpool checkpoint -d pool
297 298
298 299 Dataset reservations (controlled by the reservation or refreservation zfs
299 300 properties) may be unenforceable while a checkpoint exists, because the
300 301 checkpoint is allowed to consume the dataset's reservation. Finally,
301 302 data that is part of the checkpoint but has been freed in the current
302 303 state of the pool won't be scanned during a scrub.
303 304
304 305 Properties
305 306 Each pool has several properties associated with it. Some properties are
306 307 read-only statistics while others are configurable and change the
307 308 behavior of the pool.
308 309
309 310 The following are read-only properties:
310 311
311 312 allocated
312 313 Amount of storage space used within the pool.
313 314
314 315 bootsize
315 316 The size of the system boot partition. This property can only be
315 316            set at pool creation time and is read-only once the pool is created.
317 318 Setting this property implies using the -B option.
318 319
319 320 capacity
320 321 Percentage of pool space used. This property can also be
321 322 referred to by its shortened column name, cap.
322 323
323 324 expandsize
324 325 Amount of uninitialized space within the pool or device that can
325 326 be used to increase the total capacity of the pool.
326 327 Uninitialized space consists of any space on an EFI labeled vdev
327 328            which has not been brought online (e.g., using zpool online -e).
328 329 This space occurs when a LUN is dynamically expanded.
329 330
330 331 fragmentation
331 332 The amount of fragmentation in the pool.
332 333
333 334 free The amount of free space available in the pool.
334 335
335 336 freeing
336 337 After a file system or snapshot is destroyed, the space it was
337 338 using is returned to the pool asynchronously. freeing is the
338 339 amount of space remaining to be reclaimed. Over time freeing
339 340 will decrease while free increases.
340 341
341 342 health The current health of the pool. Health can be one of ONLINE,
342 343 DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
343 344
344 345 guid A unique identifier for the pool.
345 346
346 347 size Total size of the storage pool.
347 348
348 349 unsupported@feature_guid
349 350 Information about unsupported features that are enabled on the
350 351 pool. See zpool-features(5) for details.
351 352
352 353 The space usage properties report actual physical space available to the
353 354 storage pool. The physical space can be different from the total amount
354 355 of space that any contained datasets can actually use. The amount of
355 356 space used in a raidz configuration depends on the characteristics of the
356 357 data being written. In addition, ZFS reserves some space for internal
357 358 accounting that the zfs(1M) command takes into account, but the zpool
358 359 command does not. For non-full pools of a reasonable size, these effects
359 360 should be invisible. For small pools, or pools that are close to being
360 361 completely full, these discrepancies may become more noticeable.
361 362
362 363 The following property can be set at creation time and import time:
363 364
364 365 altroot
365 366 Alternate root directory. If set, this directory is prepended to
366 367 any mount points within the pool. This can be used when
367 368 examining an unknown pool where the mount points cannot be
368 369 trusted, or in an alternate boot environment, where the typical
369 370 paths are not valid. altroot is not a persistent property. It
370 371 is valid only while the system is up. Setting altroot defaults
371 372 to using cachefile=none, though this may be overridden using an
372 373 explicit setting.
373 374
374 375 The following property can be set only at import time:
375 376
376 377 readonly=on|off
377 378 If set to on, the pool will be imported in read-only mode. This
378 379 property can also be referred to by its shortened column name,
379 380 rdonly.
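
           For example (the pool name tank is a placeholder), a pool can be
           imported read-only with:

             # zpool import -o readonly=on tank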
380 381
381 382 The following properties can be set at creation time and import time, and
382 383 later changed with the zpool set command:
383 384
384 385 autoexpand=on|off
385 386 Controls automatic pool expansion when the underlying LUN is
386 387 grown. If set to on, the pool will be resized according to the
387 388 size of the expanded device. If the device is part of a mirror
388 389 or raidz then all devices within that mirror/raidz group must be
389 390 expanded before the new space is made available to the pool. The
390 391 default behavior is off. This property can also be referred to
391 392 by its shortened column name, expand.
392 393
393 394 autoreplace=on|off
394 395 Controls automatic device replacement. If set to off, device
395 396 replacement must be initiated by the administrator by using the
396 397 zpool replace command. If set to on, any new device, found in
397 398 the same physical location as a device that previously belonged
398 399 to the pool, is automatically formatted and replaced. The
399 400 default behavior is off. This property can also be referred to
400 401 by its shortened column name, replace.
401 402
402 403 bootfs=pool/dataset
403 404 Identifies the default bootable dataset for the root pool. This
404 405 property is expected to be set mainly by the installation and
405 406 upgrade programs.
406 407
407 408 cachefile=path|none
408 409 Controls the location of where the pool configuration is cached.
409 410 Discovering all pools on system startup requires a cached copy of
410 411 the configuration data that is stored on the root file system.
411 412 All pools in this cache are automatically imported when the
412 413 system boots. Some environments, such as install and clustering,
413 414 need to cache this information in a different location so that
414 415 pools are not automatically imported. Setting this property
415 416 caches the pool configuration in a different location that can
416 417 later be imported with zpool import -c. Setting it to the
417 418 special value none creates a temporary pool that is never cached,
418 419 and the special value "" (empty string) uses the default
419 420 location.
420 421
421 422 Multiple pools can share the same cache file. Because the kernel
422 423 destroys and recreates this file when pools are added and
423 424 removed, care should be taken when attempting to access this
424 425 file. When the last pool using a cachefile is exported or
425 426 destroyed, the file is removed.
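
           As an illustrative sketch (the cache file path and device name are
           placeholders), a pool can be created with an alternate cache file
           and later imported from it:

             # zpool create -o cachefile=/var/tmp/pools.cache tank c0t0d0
             # zpool import -c /var/tmp/pools.cache tank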
426 427
427 428 comment=text
428 429 A text string consisting of printable ASCII characters that will
429 430 be stored such that it is available even if the pool becomes
430 431 faulted. An administrator can provide additional information
431 432 about a pool using this property.
432 433
433 434 dedupditto=number
434 435 Threshold for the number of block ditto copies. If the reference
435 436 count for a deduplicated block increases above this number, a new
436 437 ditto copy of this block is automatically stored. The default
437 438            setting is 0, which causes no ditto copies to be created for
438 439 deduplicated blocks. The minimum legal nonzero setting is 100.
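
           For example (pool name illustrative), to store an additional copy
           of any deduplicated block that is referenced at least 100 times:

             # zpool set dedupditto=100 tank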
439 440
440 441 delegation=on|off
441 442 Controls whether a non-privileged user is granted access based on
442 443 the dataset permissions defined on the dataset. See zfs(1M) for
443 444 more information on ZFS delegated administration.
444 445
445 446 failmode=wait|continue|panic
446 447 Controls the system behavior in the event of catastrophic pool
447 448 failure. This condition is typically a result of a loss of
448 449 connectivity to the underlying storage device(s) or a failure of
449 450 all devices within the pool. The behavior of such an event is
450 451 determined as follows:
451 452
452 453 wait Blocks all I/O access until the device connectivity is
453 454 recovered and the errors are cleared. This is the
454 455 default behavior.
455 456
456 457 continue Returns EIO to any new write I/O requests but allows
457 458 reads to any of the remaining healthy devices. Any
458 459 write requests that have yet to be committed to disk
459 460 would be blocked.
460 461
461 462 panic Prints out a message to the console and generates a
462 463 system crash dump.
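
           For example (pool name illustrative), to allow reads to continue
           after a catastrophic failure rather than blocking all I/O:

             # zpool set failmode=continue tank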
463 464
464 465 feature@feature_name=enabled
465 466 The value of this property is the current state of feature_name.
466 467 The only valid value when setting this property is enabled which
467 468 moves feature_name to the enabled state. See zpool-features(5)
468 469 for details on feature states.
469 470
470 471 listsnapshots=on|off
471 472 Controls whether information about snapshots associated with this
472 473 pool is output when zfs list is run without the -t option. The
473 474 default value is off. This property can also be referred to by
474 475 its shortened name, listsnaps.
475 476
476 477 version=version
477 478 The current on-disk version of the pool. This can be increased,
478 479 but never decreased. The preferred method of updating pools is
479 480 with the zpool upgrade command, though this property can be used
480 481 when a specific version is needed for backwards compatibility.
481 482 Once feature flags are enabled on a pool this property will no
482 483 longer have a value.
483 484
484 485 Subcommands
485 486 All subcommands that modify state are logged persistently to the pool in
486 487 their original form.
487 488
488 489 The zpool command provides subcommands to create and destroy storage
489 490 pools, add capacity to storage pools, and provide information about the
490 491 storage pools. The following subcommands are supported:
491 492
492 493 zpool -?
493 494 Displays a help message.
494 495
495 496 zpool add [-fn] pool vdev...
496 497 Adds the specified virtual devices to the given pool. The vdev
497 498 specification is described in the Virtual Devices section. The
498 499 behavior of the -f option, and the device checks performed are
499 500 described in the zpool create subcommand.
500 501
501 502 -f Forces use of vdevs, even if they appear in use or
502 503 specify a conflicting replication level. Not all devices
503 504 can be overridden in this manner.
504 505
505 506 -n Displays the configuration that would be used without
506 507 actually adding the vdevs. The actual pool creation can
507 508 still fail due to insufficient privileges or device
508 509 sharing.
509 510
510 511 zpool attach [-f] pool device new_device
511 512 Attaches new_device to the existing device. The existing device
512 513 cannot be part of a raidz configuration. If device is not
513 514 currently part of a mirrored configuration, device automatically
514 515 transforms into a two-way mirror of device and new_device. If
515 516 device is part of a two-way mirror, attaching new_device creates
516 517 a three-way mirror, and so on. In either case, new_device begins
517 518 to resilver immediately.
518 519
519 520        -f      Forces use of new_device, even if it appears to be in
520 521 use. Not all devices can be overridden in this manner.
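
           As an illustrative sketch (device names are placeholders), a
           single-disk pool can be converted to a two-way mirror with:

             # zpool attach tank c0t0d0 c0t1d0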
521 522
522 523 zpool checkpoint [-d, --discard] pool
523 524      Checkpoints the current state of pool, which can later be
524 525 restored by zpool import --rewind-to-checkpoint. The existence
525 526 of a checkpoint in a pool prohibits the following zpool commands:
526 527 remove, attach, detach, split, and reguid. In addition, it may
527 528 break reservation boundaries if the pool lacks free space. The
528 529 zpool status command indicates the existence of a checkpoint or
529 530 the progress of discarding a checkpoint from a pool. The zpool
530 531 list command reports how much space the checkpoint takes from the
531 532 pool.
532 533
532 533
533 534 -d, --discard
534 535 Discards an existing checkpoint from pool.
535 536
536 537 zpool clear pool [device]
537 538 Clears device errors in a pool. If no arguments are specified,
538 539 all device errors within the pool are cleared. If one or more
539 540 devices is specified, only those errors associated with the
540 541 specified device or devices are cleared.
541 542
542 543 zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-O
543 - file-system-property=value]... [-R root] pool vdev...
544 + file-system-property=value]... [-R root] [-t tempname] pool
545 + vdev...
544 546 Creates a new storage pool containing the virtual devices
545 547 specified on the command line. The pool name must begin with a
546 548 letter, and can only contain alphanumeric characters as well as
547 549 underscore ("_"), dash ("-"), and period ("."). The pool names
548 550 mirror, raidz, spare and log are reserved, as are names beginning
549 551 with the pattern c[0-9]. The vdev specification is described in
550 552 the Virtual Devices section.
551 553
552 554 The command verifies that each device specified is accessible and
553 555 not currently in use by another subsystem. There are some uses,
554 556 such as being currently mounted, or specified as the dedicated
555 557 dump device, that prevents a device from ever being used by ZFS.
556 558 Other uses, such as having a preexisting UFS file system, can be
557 559 overridden with the -f option.
558 560
559 561 The command also checks that the replication strategy for the
560 562 pool is consistent. An attempt to combine redundant and non-
561 563 redundant storage in a single pool, or to mix disks and files,
562 564 results in an error unless -f is specified. The use of
563 565 differently sized devices within a single raidz or mirror group
564 566 is also flagged as an error unless -f is specified.
565 567
566 568 Unless the -R option is specified, the default mount point is
567 569 /pool. The mount point must not exist or must be empty, or else
568 570 the root dataset cannot be mounted. This can be overridden with
569 571 the -m option.
570 572
571 573 By default all supported features are enabled on the new pool
572 574 unless the -d option is specified.
573 575
574 576        -B      Create a whole-disk pool with an EFI System partition
575 577                to support booting the system with UEFI firmware.  The
576 578                default size is 256MB.  To create a boot partition with
577 579                a custom size, set the bootsize property with the -o
578 580                option.  See the Properties section for details.
579 581
580 582 -d Do not enable any features on the new pool. Individual
581 583 features can be enabled by setting their corresponding
582 584 properties to enabled with the -o option. See
583 585 zpool-features(5) for details about feature properties.
584 586
585 587 -f Forces use of vdevs, even if they appear in use or
586 588 specify a conflicting replication level. Not all devices
587 589 can be overridden in this manner.
588 590
589 591 -m mountpoint
590 592 Sets the mount point for the root dataset. The default
591 593 mount point is /pool or altroot/pool if altroot is
592 594 specified. The mount point must be an absolute path,
593 595 legacy, or none. For more information on dataset mount
594 596 points, see zfs(1M).
595 597
596 598 -n Displays the configuration that would be used without
597 599 actually creating the pool. The actual pool creation can
598 600 still fail due to insufficient privileges or device
599 601 sharing.
600 602
601 603 -o property=value
602 604 Sets the given pool properties. See the Properties
603 605 section for a list of valid properties that can be set.
604 606
605 607 -O file-system-property=value
606 608 Sets the given file system properties in the root file
607 609 system of the pool. See the Properties section of
608 610 zfs(1M) for a list of valid properties that can be set.
609 611
610 612 -R root
611 613 Equivalent to -o cachefile=none -o altroot=root
612 614
615 + -t tempname
616 + Sets the in-core pool name to tempname while the on-disk
617 +                  name will be the name specified as pool.
618 + This will set the default cachefile property to none.
619 + This is intended to handle name space collisions when
620 + creating pools for other systems, such as virtual
621 + machines or physical machines whose pools live on network
622 + block devices.
623 +
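           As a hedged example of the new flag (names are placeholders), a
           pool whose on-disk name is rpool can be created under the
           temporary in-core name tmprpool to avoid colliding with the
           running system's pool:

             # zpool create -t tmprpool rpool c0t0d0
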
613 624 zpool destroy [-f] pool
614 625 Destroys the given pool, freeing up any devices for other use.
615 626 This command tries to unmount any active datasets before
616 627 destroying the pool.
617 628
618 629 -f Forces any active datasets contained within the pool to
619 630 be unmounted.
620 631
621 632 zpool detach pool device
622 633 Detaches device from a mirror. The operation is refused if there
623 634 are no other valid replicas of the data.
624 635
625 636 zpool export [-f] pool...
626 637 Exports the given pools from the system. All devices are marked
627 638 as exported, but are still considered in use by other subsystems.
628 639 The devices can be moved between systems (even those of different
629 640 endianness) and imported as long as a sufficient number of
630 641 devices are present.
631 642
632 643 Before exporting the pool, all datasets within the pool are
633 644      unmounted.  A pool cannot be exported if it has a shared spare
634 645 that is currently being used.
635 646
636 647 For pools to be portable, you must give the zpool command whole
637 648 disks, not just slices, so that ZFS can label the disks with
638 649 portable EFI labels. Otherwise, disk drivers on platforms of
639 650 different endianness will not recognize the disks.
640 651
641 652 -f Forcefully unmount all datasets, using the unmount -f
642 653 command.
643 654
644 655 This command will forcefully export the pool even if it
645 656 has a shared spare that is currently being used. This
646 657 may lead to potential data corruption.
647 658
648 659 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
649 660 Retrieves the given list of properties (or all properties if all
650 661 is used) for the specified storage pool(s). These properties are
651 662 displayed with the following fields:
652 663
653 664 name Name of storage pool
654 665 property Property name
655 666 value Property value
656 667 source Property source, either 'default' or 'local'.
657 668
658 669 See the Properties section for more information on the available
659 670 pool properties.
660 671
661 672 -H Scripted mode. Do not display headers, and separate
662 673 fields by a single tab instead of arbitrary space.
663 674
664 675 -o field
665 676 A comma-separated list of columns to display.
666 677 name,property,value,source is the default value.
667 678
668 679 -p Display numbers in parsable (exact) values.
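
           For example (pool name illustrative):

             # zpool get capacity,free tank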
669 680
670 681 zpool history [-il] [pool]...
671 682 Displays the command history of the specified pool(s) or all
672 683 pools if no pool is specified.
673 684
674 685 -i Displays internally logged ZFS events in addition to user
675 686 initiated events.
676 687
677 688 -l Displays log records in long format, which in addition to
678 689              the standard format includes the user name, the hostname,
679 690 and the zone in which the operation was performed.
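
           For example (pool name illustrative):

             # zpool history -il tank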
680 691
681 692 zpool import [-D] [-d dir]
682 693 Lists pools available to import. If the -d option is not
683 694 specified, this command searches for devices in /dev/dsk. The -d
684 695 option can be specified multiple times, and all directories are
685 696 searched. If the device appears to be part of an exported pool,
686 697 this command displays a summary of the pool with the name of the
687 698 pool, a numeric identifier, as well as the vdev layout and
688 699 current health of the device for each device or file. Destroyed
689 700 pools, pools that were previously destroyed with the zpool
690 701 destroy command, are not listed unless the -D option is
691 702 specified.
692 703
693 704 The numeric identifier is unique, and can be used instead of the
694 705 pool name when multiple exported pools of the same name are
695 706 available.
696 707
697 708 -c cachefile
698 709 Reads configuration from the given cachefile that was
699 710 created with the cachefile pool property. This cachefile
700 711 is used instead of searching for devices.
701 712
702 713 -d dir Searches for devices or files in dir. The -d option can
703 714 be specified multiple times.
704 715
705 716 -D Lists destroyed pools only.
706 717
707 718 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
708 719 property=value]... [-R root]
709 720 Imports all pools found in the search directories. Identical to
710 721 the previous command, except that all pools with a sufficient
711 722 number of devices available are imported. Destroyed pools, pools
712 723 that were previously destroyed with the zpool destroy command,
713 724 will not be imported unless the -D option is specified.
714 725
715 726 -a Searches for and imports all pools found.
716 727
717 728 -c cachefile
718 729 Reads configuration from the given cachefile that was
719 730 created with the cachefile pool property. This cachefile
720 731 is used instead of searching for devices.
721 732
722 733 -d dir Searches for devices or files in dir. The -d option can
723 734 be specified multiple times. This option is incompatible
724 735 with the -c option.
725 736
726 737 -D Imports destroyed pools only. The -f option is also
727 738 required.
728 739
729 740 -f Forces import, even if the pool appears to be potentially
730 741 active.
731 742
732 743 -F Recovery mode for a non-importable pool. Attempt to
733 744 return the pool to an importable state by discarding the
734 745 last few transactions. Not all damaged pools can be
735 746 recovered by using this option. If successful, the data
736 747 from the discarded transactions is irretrievably lost.
737 748 This option is ignored if the pool is importable or
738 749 already imported.
739 750
740 751 -m Allows a pool to import when there is a missing log
741 752 device. Recent transactions can be lost because the log
742 753 device will be discarded.
743 754
744 755 -n Used with the -F recovery option. Determines whether a
745 756 non-importable pool can be made importable again, but
746 757 does not actually perform the pool recovery. For more
747 758 details about pool recovery mode, see the -F option,
748 759 above.
749 760
750 761 -N Import the pool without mounting any file systems.
751 762
752 763 -o mntopts
753 764 Comma-separated list of mount options to use when
754 765 mounting datasets within the pool. See zfs(1M) for a
755 766 description of dataset properties and mount options.
756 767
757 768 -o property=value
758 769 Sets the specified property on the imported pool. See
759 770 the Properties section for more information on the
760 771 available pool properties.
761 772
762 773 -R root
763 774 Sets the cachefile property to none and the altroot
764 775 property to root.
765 776
766 - zpool import [-Dfm] [-F [-n]] [--rewind-to-checkpoint] [-c cachefile|-d
777 + zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint] [-c cachefile|-d
767 778 dir] [-o mntopts] [-o property=value]... [-R root] pool|id
768 779 [newpool]
769 780 Imports a specific pool. A pool can be identified by its name or
770 781 the numeric identifier. If newpool is specified, the pool is
771 782 imported using the name newpool. Otherwise, it is imported with
772 783 the same name as its exported name.
773 784
774 785 If a device is removed from a system without running zpool export
775 786 first, the device appears as potentially active. It cannot be
776 787 determined if this was a failed export, or whether the device is
777 788 really in use from another host. To import a pool in this state,
778 789 the -f option is required.
779 790
780 791 -c cachefile
781 792 Reads configuration from the given cachefile that was
782 793 created with the cachefile pool property. This cachefile
783 794 is used instead of searching for devices.
784 795
785 796 -d dir Searches for devices or files in dir. The -d option can
786 797 be specified multiple times. This option is incompatible
787 798 with the -c option.
788 799
789 800 -D Imports destroyed pool. The -f option is also required.
790 801
791 802 -f Forces import, even if the pool appears to be potentially
792 803 active.
793 804
794 805 -F Recovery mode for a non-importable pool. Attempt to
795 806 return the pool to an importable state by discarding the
796 807 last few transactions. Not all damaged pools can be
797 808 recovered by using this option. If successful, the data
798 809 from the discarded transactions is irretrievably lost.
799 810 This option is ignored if the pool is importable or
800 811 already imported.
801 812
802 813 -m Allows a pool to import when there is a missing log
803 814 device. Recent transactions can be lost because the log
804 815 device will be discarded.
805 816
806 817 -n Used with the -F recovery option. Determines whether a
807 818 non-importable pool can be made importable again, but
808 819 does not actually perform the pool recovery. For more
809 820 details about pool recovery mode, see the -F option,
810 821 above.
811 822
812 823 -o mntopts
813 824 Comma-separated list of mount options to use when
814 825 mounting datasets within the pool. See zfs(1M) for a
815 826 description of dataset properties and mount options.
816 827
817 828 -o property=value
818 829 Sets the specified property on the imported pool. See
819 830 the Properties section for more information on the
820 831 available pool properties.
821 832
822 833 -R root
823 834 Sets the cachefile property to none and the altroot
824 835 property to root.
825 836
837 + -t Used with newpool. Specifies that newpool is temporary.
838 + Temporary pool names last until export. Ensures that the
839 + original pool name will be used in all label updates and
840 +              therefore is retained upon export.  Will also set the
841 + cachefile property to none when not explicitly specified.
842 +
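           As a hedged example of the new flag (pool names are placeholders),
           the pool rpool can be imported under the temporary name temprpool:

             # zpool import -t rpool temprpool
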
826 843 --rewind-to-checkpoint
827 844 Rewinds pool to the checkpointed state. Once the pool is
828 845 imported with this flag there is no way to undo the
829 846 rewind. All changes and data that were written after the
830 847 checkpoint are lost! The only exception is when the
831 848 readonly mounting option is enabled. In this case, the
832 849 checkpointed state of the pool is opened and an
833 850              administrator can see how the pool would look if
834 851 they were to fully rewind.
835 852
836 853 zpool initialize [-cs] pool [device...]
837 854 Begins initializing by writing to all unallocated regions on the
838 855 specified devices, or all eligible devices in the pool if no
839 856 individual devices are specified. Only leaf data or log devices
840 857 may be initialized.
841 858
842 859 -c, --cancel
843 860 Cancel initializing on the specified devices, or all
844 861 eligible devices if none are specified. If one or more
845 862 target devices are invalid or are not currently being
846 863 initialized, the command will fail and no cancellation
847 864 will occur on any device.
848 865
849 866        -s, --suspend
850 867 Suspend initializing on the specified devices, or all
851 868 eligible devices if none are specified. If one or more
852 869 target devices are invalid or are not currently being
853 870 initialized, the command will fail and no suspension will
854 871 occur on any device. Initializing can then be resumed by
855 872 running zpool initialize with no flags on the relevant
856 873 target devices.
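
           For example (names illustrative), to begin initializing one
           device and later suspend it:

             # zpool initialize tank c0t0d0
             # zpool initialize -s tank c0t0d0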
857 874
858 875 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
859 876 Displays I/O statistics for the given pools. When given an
860 877 interval, the statistics are printed every interval seconds until
861 878 ^C is pressed. If no pools are specified, statistics for every
862 879      pool in the system are shown.  If count is specified, the command
863 880 exits after count reports are printed.
864 881
865 882 -T u|d Display a time stamp. Specify u for a printed
866 883 representation of the internal representation of time.
867 884 See time(2). Specify d for standard date format. See
868 885 date(1).
869 886
870 887        -v      Verbose statistics.  Reports usage statistics for
871 888 individual vdevs within the pool, in addition to the
872 889 pool-wide statistics.
873 890
874 891 zpool labelclear [-f] device
875 892 Removes ZFS label information from the specified device. The
876 893 device must not be part of an active pool configuration.
877 894
878 895 -f Treat exported or foreign devices as inactive.
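
           For example (device name illustrative), assuming c0t3d0 carries a
           stale label from an exported pool:

             # zpool labelclear -f c0t3d0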
879 896
880 897 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
881 898 [interval [count]]
882 899 Lists the given pools along with a health status and space usage.
883 900 If no pools are specified, all pools in the system are listed.
884 901 When given an interval, the information is printed every interval
885 902 seconds until ^C is pressed. If count is specified, the command
886 903 exits after count reports are printed.
887 904
888 905 -H Scripted mode. Do not display headers, and separate
889 906 fields by a single tab instead of arbitrary space.
890 907
891 908 -o property
892 909 Comma-separated list of properties to display. See the
893 910 Properties section for a list of valid properties. The
894 911 default list is name, size, allocated, free, checkpoint,
895 912 expandsize, fragmentation, capacity, dedupratio, health,
896 913 altroot.
897 914
898 915 -p Display numbers in parsable (exact) values.
899 916
900 917        -T u|d  Display a time stamp.  Specify u for a printed
901 918 representation of the internal representation of time.
902 919                See time(2).  Specify d for standard date format.  See
903 920 date(1).
904 921
905 922 -v Verbose statistics. Reports usage statistics for
906 923 individual vdevs within the pool, in addition to the
907 924                pool-wide statistics.
908 925
909 926 zpool offline [-t] pool device...
910 927 Takes the specified physical device offline. While the device is
911 928 offline, no attempt is made to read or write to the device. This
912 929 command is not applicable to spares.
913 930
914 931 -t Temporary. Upon reboot, the specified physical device
915 932 reverts to its previous state.
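
           For example (names illustrative), to take a device offline only
           until the next reboot:

             # zpool offline -t tank c0t0d0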
916 933
917 934 zpool online [-e] pool device...
918 935 Brings the specified physical device online. This command is not
919 936 applicable to spares.
920 937
921 938 -e Expand the device to use all available space. If the
922 939 device is part of a mirror or raidz then all devices must
923 940 be expanded before the new space will become available to
924 941 the pool.
925 942
926 943 zpool reguid pool
927 944 Generates a new unique identifier for the pool. You must ensure
928 945 that all devices in this pool are online and healthy before
929 946 performing this action.
930 947
931 948 zpool reopen pool
932 949 Reopen all the vdevs associated with the pool.
933 950
934 951 zpool remove [-np] pool device...
935 952 Removes the specified device from the pool. This command
936 953 currently only supports removing hot spares, cache, log devices
937 954 and mirrored top-level vdevs (mirror of leaf devices); but not
938 955 raidz.
939 956
940 957 Removing a top-level vdev reduces the total amount of space in
941 958 the storage pool. The specified device will be evacuated by
942 959 copying all allocated space from it to the other devices in the
943 960 pool. In this case, the zpool remove command initiates the
944 961 removal and returns, while the evacuation continues in the
945 962 background. The removal progress can be monitored with zpool
946 963      status.  This feature must be enabled to be used; see
947 964      zpool-features(5).
948 965
949 966 A mirrored top-level device (log or data) can be removed by
950 967      specifying the top-level mirror itself.  Non-log devices or
951 968 data devices that are part of a mirrored configuration can be
952 969 removed using the zpool detach command.
953 970
954 971 -n Do not actually perform the removal ("no-op"). Instead,
955 972 print the estimated amount of memory that will be used by
956 973 the mapping table after the removal completes. This is
957 974 nonzero only for top-level vdevs.
958 975
959 976 -p Used in conjunction with the -n flag, displays numbers as
960 977 parsable (exact) values.
961 978
962 979 zpool remove -s pool
963 980 Stops and cancels an in-progress removal of a top-level vdev.
964 981
965 982 zpool replace [-f] pool device [new_device]
966 983 Replaces old_device with new_device. This is equivalent to
967 984 attaching new_device, waiting for it to resilver, and then
968 985 detaching old_device.
969 986
970 987 The size of new_device must be greater than or equal to the
971 988 minimum size of all the devices in a mirror or raidz
972 989 configuration.
973 990
974 991 new_device is required if the pool is not redundant. If
975 992 new_device is not specified, it defaults to old_device. This
976 993 form of replacement is useful after an existing disk has failed
977 994 and has been physically replaced. In this case, the new disk may
978 995 have the same /dev/dsk path as the old device, even though it is
979 996 actually a different disk. ZFS recognizes this.
980 997
981 998        -f      Forces use of new_device, even if it appears to be in
982 999 use. Not all devices can be overridden in this manner.
983 1000
984 1001 zpool scrub [-s | -p] pool...
985 1002 Begins a scrub or resumes a paused scrub. The scrub examines all
986 1003 data in the specified pools to verify that it checksums
987 1004 correctly. For replicated (mirror or raidz) devices, ZFS
988 1005 automatically repairs any damage discovered during the scrub.
989 1006 The zpool status command reports the progress of the scrub and
990 1007 summarizes the results of the scrub upon completion.
991 1008
992 1009 Scrubbing and resilvering are very similar operations. The
993 1010 difference is that resilvering only examines data that ZFS knows
994 1011 to be out of date (for example, when attaching a new device to a
995 1012 mirror or replacing an existing device), whereas scrubbing
996 1013 examines all data to discover silent errors due to hardware
997 1014 faults or disk failure.
998 1015
999 1016 Because scrubbing and resilvering are I/O-intensive operations,
1000 1017 ZFS only allows one at a time. If a scrub is paused, the zpool
1001 1018      scrub command resumes it.  If a resilver is in progress, ZFS does not
1002 1019 allow a scrub to be started until the resilver completes.
1003 1020
1004 1021 -s Stop scrubbing.
1005 1022
1006 1023 -p Pause scrubbing. Scrub pause state and progress are
1007 1024 periodically synced to disk. If the system is restarted
1008 1025 or pool is exported during a paused scrub, even after
1009 1026 import, scrub will remain paused until it is resumed.
1010 1027 Once resumed the scrub will pick up from the place where
1011 1028 it was last checkpointed to disk. To resume a paused
1012 1029              scrub, issue zpool scrub again.
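
           For example (pool name illustrative), pausing and later resuming
           a scrub:

             # zpool scrub -p tank
             # zpool scrub tank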
1013 1030
1014 1031 zpool set property=value pool
1015 1032 Sets the given property on the specified pool. See the
1016 1033 Properties section for more information on what properties can be
1017 1034 set and acceptable values.
1018 1035
1019 1036 zpool split [-n] [-o property=value]... [-R root] pool newpool
1020 1037 Splits devices off pool creating newpool. All vdevs in pool must
1021 1038 be mirrors. At the time of the split, newpool will be a replica
1022 1039 of pool.
1023 1040
1024 1041 -n Do dry run, do not actually perform the split. Print out
1025 1042 the expected configuration of newpool.
1026 1043
1027 1044 -o property=value
1028 1045 Sets the specified property for newpool. See the
1029 1046 Properties section for more information on the available
1030 1047 pool properties.
1031 1048
1032 1049 -R root
1033 1050 Set altroot for newpool to root and automatically import
1034 1051 it.
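
           For example (pool names illustrative), to split one half of each
           mirror off into a new pool:

             # zpool split tank tank2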
1035 1052
1036 1053 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
1037 1054 Displays the detailed health status for the given pools. If no
1038 1055 pool is specified, then the status of each pool in the system is
1039 1056 displayed. For more information on pool and device health, see
1040 1057 the Device Failure and Recovery section.
1041 1058
1042 1059 If a scrub or resilver is in progress, this command reports the
1043 1060 percentage done and the estimated time to completion. Both of
1044 1061 these are only approximate, because the amount of data in the
1045 1062 pool and the other workloads on the system can change.
1046 1063
1047 1064 -D Display a histogram of deduplication statistics, showing
1048 1065 the allocated (physically present on disk) and referenced
1049 1066 (logically referenced in the pool) block counts and sizes
1050 1067 by reference count.
1051 1068
1052 1069      -T u|d  Display a time stamp.  Specify u for a printed
1053 1070 representation of the internal representation of time.
1054 1071              See time(2).  Specify d for standard date format.  See
1055 1072 date(1).
1056 1073
1057 1074 -v Displays verbose data error information, printing out a
1058 1075 complete list of all data errors since the last complete
1059 1076 pool scrub.
1060 1077
1061 1078 -x Only display status for pools that are exhibiting errors
1062 1079 or are otherwise unavailable. Warnings about pools not
1063 1080 using the latest on-disk format will not be included.
1064 1081
1065 1082 zpool upgrade
1066 1083 Displays pools which do not have all supported features enabled
1067 1084 and pools formatted using a legacy ZFS version number. These
1068 1085 pools can continue to be used, but some features may not be
1069 1086 available. Use zpool upgrade -a to enable all features on all
1070 1087 pools.
1071 1088
1072 1089 zpool upgrade -v
1073 1090 Displays legacy ZFS versions supported by the current software.
1074 1091 See zpool-features(5) for a description of feature flags features
1075 1092 supported by the current software.
1076 1093
1077 1094 zpool upgrade [-V version] -a|pool...
1078 1095 Enables all supported features on the given pool. Once this is
1079 1096 done, the pool will no longer be accessible on systems that do
1080 1097 not support feature flags. See zpool-features(5) for details on
1081 1098 compatibility with systems that support feature flags, but do not
1082 1099 support all features enabled on the pool.
1083 1100
1084 1101 -a Enables all supported features on all pools.
1085 1102
1086 1103 -V version
1087 1104 Upgrade to the specified legacy version. If the -V flag
1088 1105 is specified, no features will be enabled on the pool.
1089 1106 This option can only be used to increase the version
1090 1107 number up to the last supported legacy version number.
1091 1108
1092 1109 EXIT STATUS
1093 1110 The following exit values are returned:
1094 1111
1095 1112 0 Successful completion.
1096 1113
1097 1114 1 An error occurred.
1098 1115
1099 1116 2 Invalid command line options were specified.
1100 1117
1101 1118 EXAMPLES
1102 1119 Example 1 Creating a RAID-Z Storage Pool
1103 1120 The following command creates a pool with a single raidz root
1104 1121 vdev that consists of six disks.
1105 1122
1106 1123 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1107 1124
1108 1125 Example 2 Creating a Mirrored Storage Pool
1109 1126 The following command creates a pool with two mirrors, where each
1110 1127 mirror contains two disks.
1111 1128
1112 1129 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1113 1130
1114 1131 Example 3 Creating a ZFS Storage Pool by Using Slices
1115 1132 The following command creates an unmirrored pool using two disk
1116 1133 slices.
1117 1134
1118 1135 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1119 1136
1120 1137 Example 4 Creating a ZFS Storage Pool by Using Files
1121 1138 The following command creates an unmirrored pool using files.
1122 1139 While not recommended, a pool based on files can be useful for
1123 1140 experimental purposes.
1124 1141
1125 1142 # zpool create tank /path/to/file/a /path/to/file/b
1126 1143
1127 1144 Example 5 Adding a Mirror to a ZFS Storage Pool
1128 1145 The following command adds two mirrored disks to the pool tank,
1129 1146 assuming the pool is already made up of two-way mirrors. The
1130 1147 additional space is immediately available to any datasets within
1131 1148 the pool.
1132 1149
1133 1150 # zpool add tank mirror c1t0d0 c1t1d0
1134 1151
1135 1152 Example 6 Listing Available ZFS Storage Pools
1136 1153 The following command lists all available pools on the system.
1137 1154 In this case, the pool zion is faulted due to a missing device.
1138 1155 The results from this command are similar to the following:
1139 1156
1140 1157 # zpool list
1141 1158 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1142 1159 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1143 1160 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1144 1161 zion - - - - - - - FAULTED -
1145 1162
1146 1163 Example 7 Destroying a ZFS Storage Pool
1147 1164 The following command destroys the pool tank and any datasets
1148 1165 contained within.
1149 1166
1150 1167 # zpool destroy -f tank
1151 1168
1152 1169 Example 8 Exporting a ZFS Storage Pool
1153 1170 The following command exports the devices in pool tank so that
1154 1171 they can be relocated or later imported.
1155 1172
1156 1173 # zpool export tank
1157 1174
1158 1175 Example 9 Importing a ZFS Storage Pool
1159 1176 The following command displays available pools, and then imports
1160 1177 the pool tank for use on the system. The results from this
1161 1178 command are similar to the following:
1162 1179
1163 1180 # zpool import
1164 1181 pool: tank
1165 1182 id: 15451357997522795478
1166 1183 state: ONLINE
1167 1184 action: The pool can be imported using its name or numeric identifier.
1168 1185 config:
1169 1186
1170 1187 tank ONLINE
1171 1188 mirror ONLINE
1172 1189 c1t2d0 ONLINE
1173 1190 c1t3d0 ONLINE
1174 1191
1175 1192 # zpool import tank
1176 1193
1177 1194 Example 10 Upgrading All ZFS Storage Pools to the Current Version
1178 1195 The following command upgrades all ZFS Storage pools to the
1179 1196 current version of the software.
1180 1197
1181 1198 # zpool upgrade -a
1182 1199 This system is currently running ZFS version 2.
1183 1200
1184 1201 Example 11 Managing Hot Spares
1185 1202 The following command creates a new pool with an available hot
1186 1203 spare:
1187 1204
1188 1205 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1189 1206
1190 1207 If one of the disks were to fail, the pool would be reduced to
1191 1208 the degraded state. The failed device can be replaced using the
1192 1209 following command:
1193 1210
1194 1211 # zpool replace tank c0t0d0 c0t3d0
1195 1212
1196 1213 Once the data has been resilvered, the spare is automatically
1197 1214 removed and is made available for use should another device fail.
1198 1215 The hot spare can be permanently removed from the pool using the
1199 1216 following command:
1200 1217
1201 1218 # zpool remove tank c0t2d0
1202 1219
1203 1220 Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1204 1221 The following command creates a ZFS storage pool consisting of
1205 1222 two, two-way mirrors and mirrored log devices:
1206 1223
1207 1224 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1208 1225 c4d0 c5d0
1209 1226
1210 1227 Example 13 Adding Cache Devices to a ZFS Pool
1211 1228 The following command adds two disks for use as cache devices to
1212 1229 a ZFS storage pool:
1213 1230
1214 1231 # zpool add pool cache c2d0 c3d0
1215 1232
1216 1233 Once added, the cache devices gradually fill with content from
1217 1234 main memory. Depending on the size of your cache devices, it
1218 1235 could take over an hour for them to fill. Capacity and reads can
1219 1236      be monitored using the iostat subcommand as follows:
1220 1237
1221 1238 # zpool iostat -v pool 5
1222 1239
1223 1240 Example 14 Removing a Mirrored top-level (Log or Data) Device
1224 1241 The following commands remove the mirrored log device mirror-2
1225 1242 and mirrored top-level data device mirror-1.
1226 1243
1227 1244 Given this configuration:
1228 1245
1229 1246 pool: tank
1230 1247 state: ONLINE
1231 1248 scrub: none requested
1232 1249 config:
1233 1250
1234 1251 NAME STATE READ WRITE CKSUM
1235 1252 tank ONLINE 0 0 0
1236 1253 mirror-0 ONLINE 0 0 0
1237 1254 c6t0d0 ONLINE 0 0 0
1238 1255 c6t1d0 ONLINE 0 0 0
1239 1256 mirror-1 ONLINE 0 0 0
1240 1257 c6t2d0 ONLINE 0 0 0
1241 1258 c6t3d0 ONLINE 0 0 0
1242 1259 logs
1243 1260 mirror-2 ONLINE 0 0 0
1244 1261 c4t0d0 ONLINE 0 0 0
1245 1262 c4t1d0 ONLINE 0 0 0
1246 1263
1247 1264 The command to remove the mirrored log mirror-2 is:
1248 1265
1249 1266 # zpool remove tank mirror-2
1250 1267
1251 1268 The command to remove the mirrored data mirror-1 is:
1252 1269
1253 1270 # zpool remove tank mirror-1
1254 1271
1255 1272 Example 15 Displaying expanded space on a device
1256 1273 The following command displays the detailed information for the
1257 1274      pool data.  This pool is composed of a single raidz vdev where
1258 1275 one of its devices increased its capacity by 10GB. In this
1259 1276 example, the pool will not be able to utilize this extra capacity
1260 1277 until all the devices under the raidz vdev have been expanded.
1261 1278
1262 1279 # zpool list -v data
1263 1280 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1264 1281 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1265 1282 raidz1 23.9G 14.6G 9.30G 48% -
1266 1283 c1t1d0 - - - - -
1267 1284 c1t2d0 - - - - 10G
1268 1285 c1t3d0 - - - - -
1269 1286
1270 1287 INTERFACE STABILITY
1271 1288 Evolving
1272 1289
1273 1290 SEE ALSO
1274 1291 zfs(1M), attributes(5), zpool-features(5)
1275 1292
1276 1293 illumos April 27, 2018 illumos