NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
usr/src/common/zfs/zpool_prop.c
usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
usr/src/uts/common/io/scsi/targets/sd.c
usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
re 13748 added zpool export -c option
zpool export -c command exports specified pool while keeping its latest
configuration in the cache file for subsequent zpool import -c.
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c
--- old/usr/src/man/man1m/zpool.1m
+++ new/usr/src/man/man1m/zpool.1m
1 1 .\"
2 2 .\" CDDL HEADER START
3 3 .\"
4 4 .\" The contents of this file are subject to the terms of the
5 5 .\" Common Development and Distribution License (the "License").
6 6 .\" You may not use this file except in compliance with the License.
7 7 .\"
8 8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 .\" or http://www.opensolaris.org/os/licensing.
10 10 .\" See the License for the specific language governing permissions
11 11 .\" and limitations under the License.
12 12 .\"
13 13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 .\" If applicable, add the following below this CDDL HEADER, with the
16 16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 18 .\"
19 19 .\" CDDL HEADER END
20 20 .\"
21 21 .\"
22 22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 -.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
23 +.\" Copyright (c) 2013 by Delphix. All rights reserved.
24 24 .\" Copyright 2017 Nexenta Systems, Inc.
25 25 .\" Copyright (c) 2017 Datto Inc.
26 26 .\" Copyright (c) 2017 George Melikov. All Rights Reserved.
27 27 .\"
28 28 .Dd December 6, 2017
29 29 .Dt ZPOOL 1M
30 30 .Os
31 31 .Sh NAME
32 32 .Nm zpool
33 33 .Nd configure ZFS storage pools
34 34 .Sh SYNOPSIS
35 35 .Nm
36 36 .Fl \?
37 37 .Nm
38 38 .Cm add
39 39 .Op Fl fn
40 40 .Ar pool vdev Ns ...
41 41 .Nm
42 42 .Cm attach
43 43 .Op Fl f
44 44 .Ar pool device new_device
45 45 .Nm
46 46 .Cm clear
47 47 .Ar pool
48 48 .Op Ar device
49 49 .Nm
50 50 .Cm create
51 51 .Op Fl dfn
52 52 .Op Fl B
53 53 .Op Fl m Ar mountpoint
54 54 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
55 55 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
56 56 .Op Fl R Ar root
57 57 .Ar pool vdev Ns ...
58 58 .Nm
59 59 .Cm destroy
60 60 .Op Fl f
61 61 .Ar pool
62 62 .Nm
63 63 .Cm detach
64 64 .Ar pool device
65 65 .Nm
66 66 .Cm export
67 -.Op Fl f
67 +.Op Fl cfF
68 +.Op Fl t Ar numthreads
68 69 .Ar pool Ns ...
69 70 .Nm
70 71 .Cm get
71 72 .Op Fl Hp
72 73 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
73 74 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
74 75 .Ar pool Ns ...
75 76 .Nm
76 77 .Cm history
77 78 .Op Fl il
78 79 .Oo Ar pool Oc Ns ...
79 80 .Nm
80 81 .Cm import
81 82 .Op Fl D
82 83 .Op Fl d Ar dir
83 84 .Nm
84 85 .Cm import
85 86 .Fl a
86 87 .Op Fl DfmN
87 88 .Op Fl F Op Fl n
88 89 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
89 90 .Op Fl o Ar mntopts
90 91 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
91 92 .Op Fl R Ar root
93 +.Op Fl t Ar numthreads
92 94 .Nm
93 95 .Cm import
94 96 .Op Fl Dfm
95 97 .Op Fl F Op Fl n
96 98 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
97 99 .Op Fl o Ar mntopts
98 100 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
99 101 .Op Fl R Ar root
102 +.Op Fl t Ar numthreads
100 103 .Ar pool Ns | Ns Ar id
101 104 .Op Ar newpool
102 105 .Nm
103 106 .Cm iostat
104 107 .Op Fl v
105 108 .Op Fl T Sy u Ns | Ns Sy d
106 109 .Oo Ar pool Oc Ns ...
107 110 .Op Ar interval Op Ar count
108 111 .Nm
109 112 .Cm labelclear
110 113 .Op Fl f
111 114 .Ar device
112 115 .Nm
113 116 .Cm list
114 117 .Op Fl Hpv
115 118 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
116 119 .Op Fl T Sy u Ns | Ns Sy d
117 120 .Oo Ar pool Oc Ns ...
118 121 .Op Ar interval Op Ar count
119 122 .Nm
120 123 .Cm offline
121 124 .Op Fl t
122 125 .Ar pool Ar device Ns ...
123 126 .Nm
124 127 .Cm online
125 128 .Op Fl e
126 129 .Ar pool Ar device Ns ...
127 130 .Nm
128 131 .Cm reguid
129 132 .Ar pool
130 133 .Nm
131 134 .Cm reopen
132 135 .Ar pool
133 136 .Nm
134 137 .Cm remove
135 -.Op Fl np
136 138 .Ar pool Ar device Ns ...
137 139 .Nm
138 -.Cm remove
139 -.Fl s
140 -.Ar pool
141 -.Nm
142 140 .Cm replace
143 141 .Op Fl f
144 142 .Ar pool Ar device Op Ar new_device
145 143 .Nm
146 144 .Cm scrub
147 -.Op Fl s | Fl p
145 +.Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
148 146 .Ar pool Ns ...
149 147 .Nm
150 148 .Cm set
151 149 .Ar property Ns = Ns Ar value
152 150 .Ar pool
153 151 .Nm
154 152 .Cm split
155 153 .Op Fl n
156 154 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
157 155 .Op Fl R Ar root
158 156 .Ar pool newpool
159 157 .Nm
160 158 .Cm status
161 159 .Op Fl Dvx
162 160 .Op Fl T Sy u Ns | Ns Sy d
163 161 .Oo Ar pool Oc Ns ...
164 162 .Op Ar interval Op Ar count
165 163 .Nm
164 +.Cm trim
165 +.Op Fl r Ar rate Ns | Ns Fl s
166 +.Ar pool Ns ...
167 +.Nm
166 168 .Cm upgrade
167 169 .Nm
168 170 .Cm upgrade
169 171 .Fl v
170 172 .Nm
171 173 .Cm upgrade
172 174 .Op Fl V Ar version
173 175 .Fl a Ns | Ns Ar pool Ns ...
176 +.Nm
177 +.Cm vdev-get
178 +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
179 +.Ar pool
180 +.Ar vdev-name Ns | Ns Ar vdev-guid
181 +.Nm
182 +.Cm vdev-set
183 +.Ar property Ns = Ns Ar value
184 +.Ar pool
185 +.Ar vdev-name Ns | Ns Ar vdev-guid
174 186 .Sh DESCRIPTION
175 187 The
176 188 .Nm
177 189 command configures ZFS storage pools.
178 190 A storage pool is a collection of devices that provides physical storage and
179 191 data replication for ZFS datasets.
180 192 All datasets within a storage pool share the same space.
181 193 See
182 194 .Xr zfs 1M
183 195 for information on managing datasets.
184 196 .Ss Virtual Devices (vdevs)
185 197 A "virtual device" describes a single device or a collection of devices
186 198 organized according to certain performance and fault characteristics.
187 199 The following virtual devices are supported:
188 200 .Bl -tag -width Ds
189 201 .It Sy disk
190 202 A block device, typically located under
191 203 .Pa /dev/dsk .
192 204 ZFS can use individual slices or partitions, though the recommended mode of
193 205 operation is to use whole disks.
194 206 A disk can be specified by a full path, or it can be a shorthand name
195 207 .Po the relative portion of the path under
196 208 .Pa /dev/dsk
197 209 .Pc .
198 210 A whole disk can be specified by omitting the slice or partition designation.
199 211 For example,
200 212 .Pa c0t0d0
201 213 is equivalent to
202 214 .Pa /dev/dsk/c0t0d0s2 .
203 215 When given a whole disk, ZFS automatically labels the disk, if necessary.
204 216 .It Sy file
205 217 A regular file.
206 218 The use of files as a backing store is strongly discouraged.
207 219 It is designed primarily for experimental purposes, as the fault tolerance of a
208 220 file is only as good as the file system of which it is a part.
209 221 A file must be specified by a full path.
210 222 .It Sy mirror
211 223 A mirror of two or more devices.
212 224 Data is replicated in an identical fashion across all components of a mirror.
213 225 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
214 226 failing before data integrity is compromised.
215 227 .It Sy raidz , raidz1 , raidz2 , raidz3
216 228 A variation on RAID-5 that allows for better distribution of parity and
217 229 eliminates the RAID-5
218 230 .Qq write hole
219 231 .Pq in which data and parity become inconsistent after a power loss .
220 232 Data and parity is striped across all disks within a raidz group.
221 233 .Pp
222 234 A raidz group can have single-, double-, or triple-parity, meaning that the
223 235 raidz group can sustain one, two, or three failures, respectively, without
224 236 losing any data.
225 237 The
226 238 .Sy raidz1
227 239 vdev type specifies a single-parity raidz group; the
228 240 .Sy raidz2
229 241 vdev type specifies a double-parity raidz group; and the
230 242 .Sy raidz3
231 243 vdev type specifies a triple-parity raidz group.
232 244 The
233 245 .Sy raidz
234 246 vdev type is an alias for
235 247 .Sy raidz1 .
236 248 .Pp
237 249 A raidz group with N disks of size X with P parity disks can hold approximately
238 250 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
239 251 compromised.
240 252 The minimum number of devices in a raidz group is one more than the number of
241 253 parity disks.
242 254 The recommended number is between 3 and 9 to help increase performance.
243 255 .It Sy spare
244 256 A special pseudo-vdev which keeps track of available hot spares for a pool.
245 257 For more information, see the
246 258 .Sx Hot Spares
247 259 section.
248 260 .It Sy log
249 261 A separate intent log device.
250 262 If more than one log device is specified, then writes are load-balanced between
251 263 devices.
252 264 Log devices can be mirrored.
253 265 However, raidz vdev types are not supported for the intent log.
254 266 For more information, see the
255 267 .Sx Intent Log
256 268 section.
257 269 .It Sy cache
258 270 A device used to cache storage pool data.
259 271 A cache device cannot be configured as a mirror or raidz group.
260 272 For more information, see the
261 273 .Sx Cache Devices
262 274 section.
263 275 .El
264 276 .Pp
265 277 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
266 278 contain files or disks.
267 279 Mirrors of mirrors
268 280 .Pq or other combinations
269 281 are not allowed.
270 282 .Pp
271 283 A pool can have any number of virtual devices at the top of the configuration
272 284 .Po known as
273 285 .Qq root vdevs
274 286 .Pc .
275 287 Data is dynamically distributed across all top-level devices to balance data
276 288 among devices.
277 289 As new virtual devices are added, ZFS automatically places data on the newly
278 290 available devices.
279 291 .Pp
280 292 Virtual devices are specified one at a time on the command line, separated by
281 293 whitespace.
282 294 The keywords
283 295 .Sy mirror
284 296 and
285 297 .Sy raidz
286 298 are used to distinguish where a group ends and another begins.
287 299 For example, the following creates two root vdevs, each a mirror of two disks:
288 300 .Bd -literal
289 301 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
290 302 .Ed
291 303 .Ss Device Failure and Recovery
292 304 ZFS supports a rich set of mechanisms for handling device failure and data
293 305 corruption.
294 306 All metadata and data is checksummed, and ZFS automatically repairs bad data
295 307 from a good copy when corruption is detected.
296 308 .Pp
297 309 In order to take advantage of these features, a pool must make use of some form
298 310 of redundancy, using either mirrored or raidz groups.
299 311 While ZFS supports running in a non-redundant configuration, where each root
300 312 vdev is simply a disk or file, this is strongly discouraged.
301 313 A single case of bit corruption can render some or all of your data unavailable.
302 314 .Pp
303 315 A pool's health status is described by one of three states: online, degraded,
304 316 or faulted.
305 317 An online pool has all devices operating normally.
306 318 A degraded pool is one in which one or more devices have failed, but the data is
307 319 still available due to a redundant configuration.
308 320 A faulted pool has corrupted metadata, or one or more faulted devices, and
309 321 insufficient replicas to continue functioning.
310 322 .Pp
311 323 The health of the top-level vdev, such as mirror or raidz device, is
312 324 potentially impacted by the state of its associated vdevs, or component
313 325 devices.
314 326 A top-level vdev or component device is in one of the following states:
315 327 .Bl -tag -width "DEGRADED"
316 328 .It Sy DEGRADED
317 329 One or more top-level vdevs is in the degraded state because one or more
318 330 component devices are offline.
319 331 Sufficient replicas exist to continue functioning.
320 332 .Pp
321 333 One or more component devices is in the degraded or faulted state, but
322 334 sufficient replicas exist to continue functioning.
323 335 The underlying conditions are as follows:
324 336 .Bl -bullet
325 337 .It
326 338 The number of checksum errors exceeds acceptable levels and the device is
327 339 degraded as an indication that something may be wrong.
328 340 ZFS continues to use the device as necessary.
329 341 .It
330 342 The number of I/O errors exceeds acceptable levels.
331 343 The device could not be marked as faulted because there are insufficient
332 344 replicas to continue functioning.
333 345 .El
334 346 .It Sy FAULTED
335 347 One or more top-level vdevs is in the faulted state because one or more
336 348 component devices are offline.
337 349 Insufficient replicas exist to continue functioning.
338 350 .Pp
339 351 One or more component devices is in the faulted state, and insufficient
340 352 replicas exist to continue functioning.
341 353 The underlying conditions are as follows:
342 354 .Bl -bullet
343 355 .It
344 356 The device could be opened, but the contents did not match expected values.
345 357 .It
346 358 The number of I/O errors exceeds acceptable levels and the device is faulted to
347 359 prevent further use of the device.
348 360 .El
349 361 .It Sy OFFLINE
350 362 The device was explicitly taken offline by the
351 363 .Nm zpool Cm offline
352 364 command.
353 365 .It Sy ONLINE
354 366 The device is online and functioning.
355 367 .It Sy REMOVED
356 368 The device was physically removed while the system was running.
357 369 Device removal detection is hardware-dependent and may not be supported on all
358 370 platforms.
359 371 .It Sy UNAVAIL
360 372 The device could not be opened.
361 373 If a pool is imported when a device was unavailable, then the device will be
362 374 identified by a unique identifier instead of its path since the path was never
363 375 correct in the first place.
364 376 .El
365 377 .Pp
366 378 If a device is removed and later re-attached to the system, ZFS attempts
367 379 to put the device online automatically.
368 380 Device attach detection is hardware-dependent and might not be supported on all
369 381 platforms.
370 382 .Ss Hot Spares
371 383 ZFS allows devices to be associated with pools as
372 384 .Qq hot spares .
373 385 These devices are not actively used in the pool, but when an active device
374 386 fails, it is automatically replaced by a hot spare.
375 387 To create a pool with hot spares, specify a
376 388 .Sy spare
377 389 vdev with any number of devices.
378 390 For example,
379 391 .Bd -literal
380 392 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
381 393 .Ed
382 394 .Pp
383 395 Spares can be shared across multiple pools, and can be added with the
384 396 .Nm zpool Cm add
385 397 command and removed with the
386 398 .Nm zpool Cm remove
387 399 command.
388 400 Once a spare replacement is initiated, a new
389 401 .Sy spare
390 402 vdev is created within the configuration that will remain there until the
391 403 original device is replaced.
392 404 At this point, the hot spare becomes available again if another device fails.
393 405 .Pp
394 406 If a pool has a shared spare that is currently being used, the pool can not be
395 407 exported since other pools may use this shared spare, which may lead to
396 408 potential data corruption.
397 409 .Pp
398 410 An in-progress spare replacement can be cancelled by detaching the hot spare.
399 411 If the original faulted device is detached, then the hot spare assumes its
400 412 place in the configuration, and is removed from the spare list of all active
401 413 pools.
402 414 .Pp
415 +See
416 +.Sy sparegroup
417 +vdev property in
418 +.Sx Device Properties
419 +section for information on how to control spare selection.
420 +.Pp
403 421 Spares cannot replace log devices.
404 422 .Ss Intent Log
405 423 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
406 424 transactions.
407 425 For instance, databases often require their transactions to be on stable storage
408 426 devices when returning from a system call.
409 427 NFS and other applications can also use
410 428 .Xr fsync 3C
411 429 to ensure data stability.
412 430 By default, the intent log is allocated from blocks within the main pool.
413 431 However, it might be possible to get better performance using separate intent
414 432 log devices such as NVRAM or a dedicated disk.
415 433 For example:
416 434 .Bd -literal
417 435 # zpool create pool c0d0 c1d0 log c2d0
418 436 .Ed
419 437 .Pp
420 438 Multiple log devices can also be specified, and they can be mirrored.
421 439 See the
422 440 .Sx EXAMPLES
423 441 section for an example of mirroring multiple log devices.
424 442 .Pp
425 443 Log devices can be added, replaced, attached, detached, and imported and
426 444 exported as part of the larger pool.
427 -Mirrored devices can be removed by specifying the top-level mirror vdev.
445 +Mirrored log devices can be removed by specifying the top-level mirror for the
446 +log.
428 447 .Ss Cache Devices
429 448 Devices can be added to a storage pool as
430 449 .Qq cache devices .
431 450 These devices provide an additional layer of caching between main memory and
432 451 disk.
433 452 For read-heavy workloads, where the working set size is much larger than what
434 453 can be cached in main memory, using cache devices allow much more of this
435 454 working set to be served from low latency media.
436 455 Using cache devices provides the greatest performance improvement for random
437 456 read-workloads of mostly static content.
438 457 .Pp
439 458 To create a pool with cache devices, specify a
440 459 .Sy cache
441 460 vdev with any number of devices.
442 461 For example:
443 462 .Bd -literal
444 463 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
445 464 .Ed
446 465 .Pp
447 466 Cache devices cannot be mirrored or part of a raidz configuration.
448 467 If a read error is encountered on a cache device, that read I/O is reissued to
449 468 the original storage pool device, which might be part of a mirrored or raidz
450 469 configuration.
451 470 .Pp
452 471 The content of the cache devices is considered volatile, as is the case with
453 472 other system caches.
454 -.Ss Properties
473 +.Ss Pool Properties
455 474 Each pool has several properties associated with it.
456 475 Some properties are read-only statistics while others are configurable and
457 476 change the behavior of the pool.
458 477 .Pp
459 478 The following are read-only properties:
460 479 .Bl -tag -width Ds
461 480 .It Cm allocated
462 481 Amount of storage space used within the pool.
463 482 .It Sy bootsize
464 483 The size of the system boot partition.
465 484 This property can only be set at pool creation time and is read-only once pool
466 485 is created.
467 486 Setting this property implies using the
468 487 .Fl B
469 488 option.
470 489 .It Sy capacity
471 490 Percentage of pool space used.
472 491 This property can also be referred to by its shortened column name,
473 492 .Sy cap .
493 +.It Sy ddt_capped Ns = Ns Sy on Ns | Ns Sy off
494 +When the
495 +.Sy ddt_capped
496 +property is
497 +.Sy on ,
498 +this indicates that DDT growth has been stopped.
499 +New unique writes will not be deduped to prevent further DDT growth.
474 500 .It Sy expandsize
475 501 Amount of uninitialized space within the pool or device that can be used to
476 502 increase the total capacity of the pool.
477 503 Uninitialized space consists of any space on an EFI labeled vdev which has not
478 504 been brought online
479 505 .Po e.g, using
480 506 .Nm zpool Cm online Fl e
481 507 .Pc .
482 508 This space occurs when a LUN is dynamically expanded.
483 509 .It Sy fragmentation
484 510 The amount of fragmentation in the pool.
485 511 .It Sy free
486 512 The amount of free space available in the pool.
487 513 .It Sy freeing
488 -After a file system or snapshot is destroyed, the space it was using is
489 -returned to the pool asynchronously.
490 514 .Sy freeing
491 -is the amount of space remaining to be reclaimed.
515 +is the amount of pool space remaining to be reclaimed.
516 +After a file, dataset or snapshot is destroyed, the space it was using is
517 +returned to the pool asynchronously.
492 518 Over time
493 519 .Sy freeing
494 520 will decrease while
495 521 .Sy free
496 522 increases.
497 523 .It Sy health
498 524 The current health of the pool.
499 525 Health can be one of
500 526 .Sy ONLINE , DEGRADED , FAULTED , OFFLINE, REMOVED , UNAVAIL .
501 527 .It Sy guid
502 528 A unique identifier for the pool.
503 529 .It Sy size
504 530 Total size of the storage pool.
505 531 .It Sy unsupported@ Ns Em feature_guid
506 532 Information about unsupported features that are enabled on the pool.
507 533 See
508 534 .Xr zpool-features 5
509 535 for details.
510 536 .El
511 537 .Pp
512 538 The space usage properties report actual physical space available to the
513 539 storage pool.
514 540 The physical space can be different from the total amount of space that any
515 541 contained datasets can actually use.
516 542 The amount of space used in a raidz configuration depends on the characteristics
517 543 of the data being written.
518 544 In addition, ZFS reserves some space for internal accounting that the
519 545 .Xr zfs 1M
520 546 command takes into account, but the
521 547 .Nm
522 548 command does not.
523 549 For non-full pools of a reasonable size, these effects should be invisible.
524 550 For small pools, or pools that are close to being completely full, these
525 551 discrepancies may become more noticeable.
526 552 .Pp
527 553 The following property can be set at creation time and import time:
528 554 .Bl -tag -width Ds
529 555 .It Sy altroot
530 556 Alternate root directory.
531 557 If set, this directory is prepended to any mount points within the pool.
532 558 This can be used when examining an unknown pool where the mount points cannot be
533 559 trusted, or in an alternate boot environment, where the typical paths are not
534 560 valid.
535 561 .Sy altroot
536 562 is not a persistent property.
537 563 It is valid only while the system is up.
538 564 Setting
539 565 .Sy altroot
540 566 defaults to using
541 567 .Sy cachefile Ns = Ns Sy none ,
542 568 though this may be overridden using an explicit setting.
543 569 .El
544 570 .Pp
545 571 The following property can be set only at import time:
546 572 .Bl -tag -width Ds
547 573 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
548 574 If set to
549 575 .Sy on ,
550 576 the pool will be imported in read-only mode.
551 577 This property can also be referred to by its shortened column name,
552 578 .Sy rdonly .
553 579 .El
554 580 .Pp
555 581 The following properties can be set at creation time and import time, and later
556 582 changed with the
557 583 .Nm zpool Cm set
558 584 command:
559 585 .Bl -tag -width Ds
560 586 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
561 587 Controls automatic pool expansion when the underlying LUN is grown.
562 588 If set to
563 589 .Sy on ,
564 590 the pool will be resized according to the size of the expanded device.
565 591 If the device is part of a mirror or raidz then all devices within that
566 592 mirror/raidz group must be expanded before the new space is made available to
567 593 the pool.
568 594 The default behavior is
569 595 .Sy off .
570 596 This property can also be referred to by its shortened column name,
571 597 .Sy expand .
572 598 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
573 599 Controls automatic device replacement.
574 600 If set to
575 601 .Sy off ,
576 602 device replacement must be initiated by the administrator by using the
577 603 .Nm zpool Cm replace
578 604 command.
579 605 If set to
580 606 .Sy on ,
581 607 any new device, found in the same physical location as a device that previously
582 608 belonged to the pool, is automatically formatted and replaced.
583 609 The default behavior is
584 610 .Sy off .
585 611 This property can also be referred to by its shortened column name,
586 612 .Sy replace .
613 +.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
614 +When set to
615 +.Sy on ,
616 +while deleting data, ZFS will inform the underlying vdevs of any blocks that
617 +have been marked as freed.
618 +This allows thinly provisioned vdevs to reclaim unused blocks.
619 +Currently, this feature supports sending SCSI UNMAP commands to SCSI and SAS
620 +disk vdevs, and using file hole punching on file-backed vdevs.
621 +SATA TRIM is currently not implemented.
622 +The default setting for this property is
623 +.Sy off .
624 +.Pp
625 +Please note that automatic trimming of data blocks can put significant stress on
626 +the underlying storage devices if they do not handle these commands in a
627 +background, low-priority manner.
628 +In that case, it may be possible to achieve most of the benefits of trimming
629 +free space on the pool by running an on-demand
630 +.Pq manual
631 +trim periodically, during a maintenance window, using the
632 +.Nm zpool Cm trim
633 +command.
634 +.Pp
635 +Automatic trim does not reclaim blocks immediately after a delete.
636 +Instead, it waits approximately 32-64 TXGs
637 +.Po or as defined by the
638 +.Sy zfs_txgs_per_trim
639 +tunable
640 +.Pc
641 +to allow for more efficient aggregation of smaller portions of free space into
642 +fewer larger regions, as well as to allow for longer pool corruption recovery
643 +via
644 +.Nm zpool Cm import Fl F .
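.Pp
As an illustrative sketch, automatic trim could be enabled on a hypothetical
pool named tank, or a one-off manual trim could be run instead:
.Bd -literal
# zpool set autotrim=on tank
# zpool trim tank
.Ed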
587 645 .It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
588 646 Identifies the default bootable dataset for the root pool.
589 647 This property is expected to be set mainly by the installation and upgrade
590 648 programs.
591 649 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
592 650 Controls the location of where the pool configuration is cached.
593 651 Discovering all pools on system startup requires a cached copy of the
594 652 configuration data that is stored on the root file system.
595 653 All pools in this cache are automatically imported when the system boots.
596 654 Some environments, such as install and clustering, need to cache this
597 655 information in a different location so that pools are not automatically
598 656 imported.
599 657 Setting this property caches the pool configuration in a different location that
600 658 can later be imported with
601 659 .Nm zpool Cm import Fl c .
602 660 Setting it to the special value
603 661 .Sy none
604 662 creates a temporary pool that is never cached, and the special value
605 663 .Qq
606 664 .Pq empty string
607 665 uses the default location.
608 666 .Pp
609 667 Multiple pools can share the same cache file.
610 668 Because the kernel destroys and recreates this file when pools are added and
611 669 removed, care should be taken when attempting to access this file.
612 670 When the last pool using a
613 671 .Sy cachefile
614 672 is exported or destroyed, the file is removed.
615 673 .It Sy comment Ns = Ns Ar text
616 674 A text string consisting of printable ASCII characters that will be stored
617 675 such that it is available even if the pool becomes faulted.
618 676 An administrator can provide additional information about a pool using this
619 677 property.
620 678 .It Sy dedupditto Ns = Ns Ar number
621 679 Threshold for the number of block ditto copies.
622 680 If the reference count for a deduplicated block increases above this number, a
623 681 new ditto copy of this block is automatically stored.
624 682 The default setting is
625 683 .Sy 0
626 684 which causes no ditto copies to be created for deduplicated blocks.
627 685 The minimum legal nonzero setting is
628 686 .Sy 100 .
629 687 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
630 688 Controls whether a non-privileged user is granted access based on the dataset
631 689 permissions defined on the dataset.
632 690 See
633 691 .Xr zfs 1M
634 692 for more information on ZFS delegated administration.
635 693 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
636 694 Controls the system behavior in the event of catastrophic pool failure.
637 695 This condition is typically a result of a loss of connectivity to the underlying
638 696 storage device(s) or a failure of all devices within the pool.
639 697 The behavior of such an event is determined as follows:
640 698 .Bl -tag -width "continue"
641 699 .It Sy wait
642 700 Blocks all I/O access until the device connectivity is recovered and the errors
643 701 are cleared.
644 702 This is the default behavior.
645 703 .It Sy continue
646 704 Returns
647 705 .Er EIO
648 706 to any new write I/O requests but allows reads to any of the remaining healthy
649 707 devices.
650 708 Any write requests that have yet to be committed to disk would be blocked.
651 709 .It Sy panic
652 710 Prints out a message to the console and generates a system crash dump.
653 711 .El
654 712 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
655 713 The value of this property is the current state of
656 714 .Ar feature_name .
657 715 The only valid value when setting this property is
658 716 .Sy enabled
659 717 which moves
660 718 .Ar feature_name
661 719 to the enabled state.
662 720 See
663 721 .Xr zpool-features 5
664 722 for details on feature states.
723 +.It Sy forcetrim Ns = Ns Sy on Ns | Ns Sy off
724 +Controls whether device support is taken into consideration when issuing TRIM
725 +commands to the underlying vdevs of the pool.
726 +Normally, both automatic trim and on-demand
727 +.Pq manual
728 +trim only issue TRIM commands if a vdev indicates support for it.
729 +Setting the
730 +.Sy forcetrim
731 +property to
732 +.Sy on
733 +will force ZFS to issue TRIMs even if it thinks a device does not support it.
734 +The default value is
735 +.Sy off .
665 736 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
666 737 Controls whether information about snapshots associated with this pool is
667 738 output when
668 739 .Nm zfs Cm list
669 740 is run without the
670 741 .Fl t
671 742 option.
672 743 The default value is
673 744 .Sy off .
674 745 This property can also be referred to by its shortened name,
675 746 .Sy listsnaps .
747 +.It Sy scrubprio Ns = Ns Ar 0-100
748 +Sets the priority of scrub I/O for this pool.
749 +This is a number from 0 to 100, with higher numbers meaning a higher priority
750 +and thus more bandwidth allocated to scrub I/O, provided there is other
751 +I/O competing for bandwidth.
752 +If no other I/O is competing for bandwidth, scrub is allowed to consume
753 +as much bandwidth as the pool is capable of providing.
754 +A priority of
755 +.Ar 100
756 +means that scrub I/O has equal priority to any other user-generated I/O.
757 +The value
758 +.Ar 0
759 +is special, because it turns off per-pool scrub priority control.
760 +In that case, scrub I/O priority is determined by the
761 +.Sy zfs_vdev_scrub_min_active
762 +and
763 +.Sy zfs_vdev_scrub_max_active
764 +tunables.
765 +The default value is
766 +.Ar 5 .
767 +.It Sy resilverprio Ns = Ns Ar 0-100
768 +Same as the
769 +.Sy scrubprio
770 +property, but controls the priority for resilver I/O.
771 +The default value is
772 +.Ar 10 .
773 +When set to
774 +.Ar 0 ,
775 +the global tunables used for queue sizing are
776 +.Sy zfs_vdev_resilver_min_active
777 +and
778 +.Sy zfs_vdev_resilver_max_active .
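.Pp
For example, scrub and resilver I/O could be given a larger share of bandwidth
on a hypothetical pool named tank:
.Bd -literal
# zpool set scrubprio=20 tank
# zpool set resilverprio=50 tank
.Ed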
676 779 .It Sy version Ns = Ns Ar version
677 780 The current on-disk version of the pool.
678 781 This can be increased, but never decreased.
679 782 The preferred method of updating pools is with the
680 783 .Nm zpool Cm upgrade
681 784 command, though this property can be used when a specific version is needed for
682 785 backwards compatibility.
683 786 Once feature flags are enabled on a pool this property will no longer have a
684 787 value.
685 788 .El
789 +.Ss Device Properties
790 +Each device can have several properties associated with it.
791 +These properties override global tunables and are designed to provide more
792 +control over the operational parameters of this specific device, as well as to
793 +help manage this device.
794 +.Pp
795 +The
796 +.Sy cos
797 +device property can reference a CoS property descriptor by name, in which case
798 +the values of device properties are determined according to the following rule:
799 +the device settings override CoS settings, which, in turn, override the global
800 +tunables.
801 +.Pp
802 +The following device properties are available:
803 +.Bl -tag -width Ds
804 +.It Sy cos Ns = Ns Ar cos-name
805 +This property indicates whether the device is associated with a CoS property
806 +descriptor object.
807 +If so, the properties from the CoS descriptor that are not explicitly overridden
808 +by the device properties are in effect for this device.
809 +.It Sy l2arc_ddt Ns = Ns Sy on Ns | Ns Sy off
810 +This property is meaningful for L2ARC devices.
811 +If this property is turned
812 +.Sy on ,
813 +ZFS will dedicate the L2ARC device to cache deduplication table
814 +.Pq DDT
815 +buffers only.
816 +.It Sy prefread Ns = Ns Sy 1 Ns .. Ns Sy 100
817 +This property is meaningful for devices that belong to a mirror.
818 +The property determines the preference that is given to the device when reading
819 +from the mirror.
820 +The ratio of the value to the sum of the values of this property for all the
821 +devices in the mirror determines the relative frequency
822 +.Po which can also be viewed as a
823 +.Qq probability
824 +.Pc
825 +of reading from this specific device.
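.Pp
For example, in a two-way mirror where one device has prefread set to 75 and
the other to 25, approximately 75% of the reads from that mirror are expected
to be directed to the first device.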
826 +.It Sy sparegroup Ns = Ns Ar group-name
827 +This property indicates whether the device is part of a spare device group.
828 +Devices in the pool
829 +.Pq including spares
830 +can be labeled with strings that are meaningful in the context of the management
831 +workflow in effect.
832 +When a failed device is automatically replaced by spares, the spares whose
833 +.Sy sparegroup
834 +property matches the failed device's property are used first.
835 +.It Xo
836 +.Bro Sy read Ns | Ns Sy aread Ns | Ns Sy write Ns | Ns
837 +.Sy awrite Ns | Ns Sy scrub Ns | Ns Sy resilver Brc Ns _ Ns
838 +.Bro Sy minactive Ns | Ns Sy maxactive Brc Ns = Ns
839 +.Sy 1 Ns .. Ns Sy 1000
840 +.Xc
841 +These properties define the minimum/maximum number of outstanding active
842 +requests for the queueable classes of I/O as defined by the
843 +ZFS I/O scheduler.
844 +These classes are read, asynchronous read, write, asynchronous write, scrub,
845 +and resilver.
846 +.El
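.Pp
Device properties are read and modified with the
.Nm zpool Cm vdev-get
and
.Nm zpool Cm vdev-set
subcommands.
As an illustrative sketch, using a hypothetical pool named tank and
hypothetical device names, where the queue-depth property name is composed
according to the pattern shown above:
.Bd -literal
# zpool vdev-set prefread=75 tank c0t0d0
# zpool vdev-set sparegroup=shelf0 tank c2t0d0
# zpool vdev-set resilver_maxactive=20 tank c0t0d0
# zpool vdev-get all tank c0t0d0
.Ed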
686 847 .Ss Subcommands
687 848 All subcommands that modify state are logged persistently to the pool in their
688 849 original form.
689 850 .Pp
690 851 The
691 852 .Nm
692 853 command provides subcommands to create and destroy storage pools, add capacity
693 854 to storage pools, and provide information about the storage pools.
694 855 The following subcommands are supported:
695 856 .Bl -tag -width Ds
696 857 .It Xo
697 858 .Nm
698 859 .Fl \?
699 860 .Xc
700 861 Displays a help message.
701 862 .It Xo
702 863 .Nm
703 864 .Cm add
704 865 .Op Fl fn
705 866 .Ar pool vdev Ns ...
706 867 .Xc
707 868 Adds the specified virtual devices to the given pool.
708 869 The
709 870 .Ar vdev
710 871 specification is described in the
711 872 .Sx Virtual Devices
712 873 section.
713 874 The behavior of the
714 875 .Fl f
715 876 option, and the device checks performed are described in the
716 877 .Nm zpool Cm create
717 878 subcommand.
718 879 .Bl -tag -width Ds
719 880 .It Fl f
720 881 Forces use of
721 882 .Ar vdev Ns s ,
722 883 even if they appear in use or specify a conflicting replication level.
723 884 Not all devices can be overridden in this manner.
724 885 .It Fl n
725 886 Displays the configuration that would be used without actually adding the
726 887 .Ar vdev Ns s .
727 888 The actual pool creation can still fail due to insufficient privileges or
728 889 device sharing.
729 890 .El
730 891 .It Xo
731 892 .Nm
732 893 .Cm attach
733 894 .Op Fl f
734 895 .Ar pool device new_device
735 896 .Xc
736 897 Attaches
737 898 .Ar new_device
738 899 to the existing
739 900 .Ar device .
740 901 The existing device cannot be part of a raidz configuration.
741 902 If
742 903 .Ar device
743 904 is not currently part of a mirrored configuration,
744 905 .Ar device
745 906 automatically transforms into a two-way mirror of
746 907 .Ar device
747 908 and
748 909 .Ar new_device .
749 910 If
750 911 .Ar device
751 912 is part of a two-way mirror, attaching
752 913 .Ar new_device
753 914 creates a three-way mirror, and so on.
754 915 In either case,
755 916 .Ar new_device
756 917 begins to resilver immediately.
757 918 .Bl -tag -width Ds
758 919 .It Fl f
759 920 Forces use of
760 921 .Ar new_device ,
761 922 even if it appears to be in use.
762 923 Not all devices can be overridden in this manner.
763 924 .El
764 925 .It Xo
765 926 .Nm
766 927 .Cm clear
767 928 .Ar pool
768 929 .Op Ar device
769 930 .Xc
770 931 Clears device errors in a pool.
771 932 If no arguments are specified, all device errors within the pool are cleared.
772 933 If one or more devices is specified, only those errors associated with the
773 934 specified device or devices are cleared.
774 935 .It Xo
775 936 .Nm
776 937 .Cm create
777 938 .Op Fl dfn
778 939 .Op Fl B
779 940 .Op Fl m Ar mountpoint
780 941 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
781 942 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
782 943 .Op Fl R Ar root
783 944 .Ar pool vdev Ns ...
784 945 .Xc
785 946 Creates a new storage pool containing the virtual devices specified on the
786 947 command line.
787 948 The pool name must begin with a letter, and can only contain
788 949 alphanumeric characters as well as underscore
789 950 .Pq Qq Sy _ ,
790 951 dash
791 952 .Pq Qq Sy - ,
792 953 and period
793 954 .Pq Qq Sy \&. .
794 955 The pool names
795 956 .Sy mirror ,
796 957 .Sy raidz ,
797 958 .Sy spare
798 959 and
799 960 .Sy log
800 961 are reserved, as are names beginning with the pattern
801 962 .Sy c[0-9] .
802 963 The
803 964 .Ar vdev
804 965 specification is described in the
805 966 .Sx Virtual Devices
806 967 section.
807 968 .Pp
808 969 The command verifies that each device specified is accessible and not currently
809 970 in use by another subsystem.
810 971 There are some uses, such as being currently mounted, or specified as the
811 972 dedicated dump device, that prevent a device from ever being used by ZFS.
812 973 Other uses, such as having a preexisting UFS file system, can be overridden with
813 974 the
814 975 .Fl f
815 976 option.
816 977 .Pp
817 978 The command also checks that the replication strategy for the pool is
818 979 consistent.
819 980 An attempt to combine redundant and non-redundant storage in a single pool, or
820 981 to mix disks and files, results in an error unless
821 982 .Fl f
822 983 is specified.
823 984 The use of differently sized devices within a single raidz or mirror group is
824 985 also flagged as an error unless
825 986 .Fl f
826 987 is specified.
827 988 .Pp
828 989 Unless the
829 990 .Fl R
830 991 option is specified, the default mount point is
831 992 .Pa / Ns Ar pool .
832 993 The mount point must not exist or must be empty, or else the root dataset
833 994 cannot be mounted.
834 995 This can be overridden with the
835 996 .Fl m
836 997 option.
837 998 .Pp
838 999 By default all supported features are enabled on the new pool unless the
839 1000 .Fl d
840 1001 option is specified.
841 1002 .Bl -tag -width Ds
842 1003 .It Fl B
843 1004 Create whole disk pool with EFI System partition to support booting system
844 1005 with UEFI firmware.
845 1006 Default size is 256MB.
846 1007 To create boot partition with custom size, set the
847 1008 .Sy bootsize
848 1009 property with the
849 1010 .Fl o
850 1011 option.
851 1012 See the
852 1013 .Sx Properties
853 1014 section for details.
854 1015 .It Fl d
855 1016 Do not enable any features on the new pool.
856 1017 Individual features can be enabled by setting their corresponding properties to
857 1018 .Sy enabled
858 1019 with the
859 1020 .Fl o
860 1021 option.
861 1022 See
862 1023 .Xr zpool-features 5
863 1024 for details about feature properties.
864 1025 .It Fl f
865 1026 Forces use of
866 1027 .Ar vdev Ns s ,
867 1028 even if they appear in use or specify a conflicting replication level.
868 1029 Not all devices can be overridden in this manner.
869 1030 .It Fl m Ar mountpoint
870 1031 Sets the mount point for the root dataset.
871 1032 The default mount point is
872 1033 .Pa /pool
873 1034 or
874 1035 .Pa altroot/pool
875 1036 if
876 1037 .Ar altroot
877 1038 is specified.
878 1039 The mount point must be an absolute path,
879 1040 .Sy legacy ,
880 1041 or
881 1042 .Sy none .
882 1043 For more information on dataset mount points, see
883 1044 .Xr zfs 1M .
884 1045 .It Fl n
885 1046 Displays the configuration that would be used without actually creating the
886 1047 pool.
887 1048 The actual pool creation can still fail due to insufficient privileges or
888 1049 device sharing.
889 1050 .It Fl o Ar property Ns = Ns Ar value
890 1051 Sets the given pool properties.
891 1052 See the
892 -.Sx Properties
1053 +.Sx Pool Properties
893 1054 section for a list of valid properties that can be set.
894 1055 .It Fl O Ar file-system-property Ns = Ns Ar value
895 1056 Sets the given file system properties in the root file system of the pool.
896 1057 See the
897 1058 .Sx Properties
898 1059 section of
899 1060 .Xr zfs 1M
900 1061 for a list of valid properties that can be set.
901 1062 .It Fl R Ar root
902 1063 Equivalent to
903 1064 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
904 1065 .El
905 1066 .It Xo
906 1067 .Nm
907 1068 .Cm destroy
908 1069 .Op Fl f
909 1070 .Ar pool
910 1071 .Xc
911 1072 Destroys the given pool, freeing up any devices for other use.
912 1073 This command tries to unmount any active datasets before destroying the pool.
913 1074 .Bl -tag -width Ds
914 1075 .It Fl f
915 1076 Forces any active datasets contained within the pool to be unmounted.
916 1077 .El
917 1078 .It Xo
918 1079 .Nm
919 1080 .Cm detach
920 1081 .Ar pool device
921 1082 .Xc
922 1083 Detaches
923 1084 .Ar device
924 1085 from a mirror.
925 1086 The operation is refused if there are no other valid replicas of the data.
926 1087 .It Xo
927 1088 .Nm
928 1089 .Cm export
929 -.Op Fl f
1090 +.Op Fl cfF
1091 +.Op Fl t Ar numthreads
930 1092 .Ar pool Ns ...
931 1093 .Xc
932 1094 Exports the given pools from the system.
933 1095 All devices are marked as exported, but are still considered in use by other
934 1096 subsystems.
935 1097 The devices can be moved between systems
936 1098 .Pq even those of different endianness
937 1099 and imported as long as a sufficient number of devices are present.
938 1100 .Pp
939 1101 Before exporting the pool, all datasets within the pool are unmounted.
940 1102 A pool can not be exported if it has a shared spare that is currently being
941 1103 used.
942 1104 .Pp
943 1105 For pools to be portable, you must give the
944 1106 .Nm
945 1107 command whole disks, not just slices, so that ZFS can label the disks with
946 1108 portable EFI labels.
947 1109 Otherwise, disk drivers on platforms of different endianness will not recognize
948 1110 the disks.
949 1111 .Bl -tag -width Ds
1112 +.It Fl c
1113 +Keep the configuration information of the exported pool in the cache file.
950 1114 .It Fl f
951 1115 Forcefully unmount all datasets, using the
952 1116 .Nm unmount Fl f
953 1117 command.
954 1118 .Pp
955 1119 This command will forcefully export the pool even if it has a shared spare that
956 1120 is currently being used.
957 1121 This may lead to potential data corruption.
1122 +.It Fl F
1123 +Do not update device labels or the cache file with the new configuration.
1124 +.It Fl t Ar numthreads
1125 +Unmount datasets in parallel using up to
1126 +.Ar numthreads
1127 +threads.
958 1128 .El
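.Pp
For example, a hypothetical pool named tank could be exported while keeping
its configuration in the cache file, unmounting its datasets with up to 8
threads in parallel:
.Bd -literal
# zpool export -c -t 8 tank
.Ed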
959 1129 .It Xo
960 1130 .Nm
961 1131 .Cm get
962 1132 .Op Fl Hp
963 1133 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
964 1134 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
965 1135 .Ar pool Ns ...
966 1136 .Xc
967 1137 Retrieves the given list of properties
968 1138 .Po
969 1139 or all properties if
970 1140 .Sy all
971 1141 is used
972 1142 .Pc
973 1143 for the specified storage pool(s).
974 1144 These properties are displayed with the following fields:
975 1145 .Bd -literal
976 1146 name Name of storage pool
977 1147 property Property name
978 1148 value Property value
979 1149 source Property source, either 'default' or 'local'.
980 1150 .Ed
981 1151 .Pp
982 1152 See the
983 -.Sx Properties
1153 +.Sx Pool Properties
984 1154 section for more information on the available pool properties.
985 1155 .Bl -tag -width Ds
986 1156 .It Fl H
987 1157 Scripted mode.
988 1158 Do not display headers, and separate fields by a single tab instead of arbitrary
989 1159 space.
990 1160 .It Fl o Ar field
991 1161 A comma-separated list of columns to display.
992 1162 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
993 1163 is the default value.
994 1164 .It Fl p
995 1165 Display numbers in parsable (exact) values.
996 1166 .El
997 1167 .It Xo
998 1168 .Nm
999 1169 .Cm history
1000 1170 .Op Fl il
1001 1171 .Oo Ar pool Oc Ns ...
1002 1172 .Xc
1003 1173 Displays the command history of the specified pool(s) or all pools if no pool is
1004 1174 specified.
1005 1175 .Bl -tag -width Ds
1006 1176 .It Fl i
1007 1177 Displays internally logged ZFS events in addition to user initiated events.
1008 1178 .It Fl l
1009 1179 Displays log records in long format, which in addition to standard format
1010 1180 includes the user name, the hostname, and the zone in which the operation was
1011 1181 performed.
1012 1182 .El
1013 1183 .It Xo
1014 1184 .Nm
1015 1185 .Cm import
1016 1186 .Op Fl D
1017 1187 .Op Fl d Ar dir
1018 1188 .Xc
1019 1189 Lists pools available to import.
1020 1190 If the
1021 1191 .Fl d
1022 1192 option is not specified, this command searches for devices in
1023 1193 .Pa /dev/dsk .
1024 1194 The
1025 1195 .Fl d
1026 1196 option can be specified multiple times, and all directories are searched.
1027 1197 If the device appears to be part of an exported pool, this command displays a
1028 1198 summary of the pool with the name of the pool, a numeric identifier, as well as
1029 1199 the vdev layout and current health of the device for each device or file.
1030 1200 Destroyed pools, pools that were previously destroyed with the
1031 1201 .Nm zpool Cm destroy
1032 1202 command, are not listed unless the
1033 1203 .Fl D
1034 1204 option is specified.
1035 1205 .Pp
1036 1206 The numeric identifier is unique, and can be used instead of the pool name when
1037 1207 multiple exported pools of the same name are available.
1038 1208 .Bl -tag -width Ds
1039 1209 .It Fl c Ar cachefile
1040 1210 Reads configuration from the given
1041 1211 .Ar cachefile
1042 1212 that was created with the
1043 1213 .Sy cachefile
1044 1214 pool property.
1045 1215 This
1046 1216 .Ar cachefile
1047 1217 is used instead of searching for devices.
1048 1218 .It Fl d Ar dir
1049 1219 Searches for devices or files in
1050 1220 .Ar dir .
1051 1221 The
1052 1222 .Fl d
1053 1223 option can be specified multiple times.
1054 1224 .It Fl D
1055 1225 Lists destroyed pools only.
1056 1226 .El
1057 1227 .It Xo
1058 1228 .Nm
1059 1229 .Cm import
1060 1230 .Fl a
1061 1231 .Op Fl DfmN
1062 1232 .Op Fl F Op Fl n
1063 1233 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1064 1234 .Op Fl o Ar mntopts
1065 1235 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1066 1236 .Op Fl R Ar root
1067 1237 .Xc
1068 1238 Imports all pools found in the search directories.
1069 1239 Identical to the previous command, except that all pools with a sufficient
1070 1240 number of devices available are imported.
1071 1241 Destroyed pools, pools that were previously destroyed with the
1072 1242 .Nm zpool Cm destroy
1073 1243 command, will not be imported unless the
1074 1244 .Fl D
1075 1245 option is specified.
1076 1246 .Bl -tag -width Ds
1077 1247 .It Fl a
1078 1248 Searches for and imports all pools found.
1079 1249 .It Fl c Ar cachefile
1080 1250 Reads configuration from the given
1081 1251 .Ar cachefile
1082 1252 that was created with the
1083 1253 .Sy cachefile
1084 1254 pool property.
1085 1255 This
1086 1256 .Ar cachefile
1087 1257 is used instead of searching for devices.
1088 1258 .It Fl d Ar dir
1089 1259 Searches for devices or files in
1090 1260 .Ar dir .
1091 1261 The
1092 1262 .Fl d
1093 1263 option can be specified multiple times.
1094 1264 This option is incompatible with the
1095 1265 .Fl c
1096 1266 option.
1097 1267 .It Fl D
1098 1268 Imports destroyed pools only.
1099 1269 The
1100 1270 .Fl f
1101 1271 option is also required.
1102 1272 .It Fl f
1103 1273 Forces import, even if the pool appears to be potentially active.
1104 1274 .It Fl F
1105 1275 Recovery mode for a non-importable pool.
1106 1276 Attempt to return the pool to an importable state by discarding the last few
1107 1277 transactions.
1108 1278 Not all damaged pools can be recovered by using this option.
1109 1279 If successful, the data from the discarded transactions is irretrievably lost.
1110 1280 This option is ignored if the pool is importable or already imported.
1111 1281 .It Fl m
1112 1282 Allows a pool to import when there is a missing log device.
1113 1283 Recent transactions can be lost because the log device will be discarded.
1114 1284 .It Fl n
1115 1285 Used with the
1116 1286 .Fl F
1117 1287 recovery option.
1118 1288 Determines whether a non-importable pool can be made importable again, but does
1119 1289 not actually perform the pool recovery.
1120 1290 For more details about pool recovery mode, see the
1121 1291 .Fl F
1122 1292 option, above.
1123 1293 .It Fl N
1124 1294 Import the pool without mounting any file systems.
1125 1295 .It Fl o Ar mntopts
1126 1296 Comma-separated list of mount options to use when mounting datasets within the
1127 1297 pool.
1128 1298 See
1129 1299 .Xr zfs 1M
1130 1300 for a description of dataset properties and mount options.
1131 1301 .It Fl o Ar property Ns = Ns Ar value
1132 1302 Sets the specified property on the imported pool.
1133 1303 See the
1134 -.Sx Properties
1304 +.Sx Pool Properties
1135 1305 section for more information on the available pool properties.
1136 1306 .It Fl R Ar root
1137 1307 Sets the
1138 1308 .Sy cachefile
1139 1309 property to
1140 1310 .Sy none
1141 1311 and the
1142 1312 .Sy altroot
1143 1313 property to
1144 1314 .Ar root .
1145 1315 .El
1146 1316 .It Xo
1147 1317 .Nm
1148 1318 .Cm import
1149 1319 .Op Fl Dfm
1150 1320 .Op Fl F Op Fl n
1151 1321 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1152 1322 .Op Fl o Ar mntopts
1153 1323 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1154 1324 .Op Fl R Ar root
1155 1325 .Ar pool Ns | Ns Ar id
1156 1326 .Op Ar newpool
1157 1327 .Xc
1158 1328 Imports a specific pool.
1159 1329 A pool can be identified by its name or the numeric identifier.
1160 1330 If
1161 1331 .Ar newpool
1162 1332 is specified, the pool is imported using the name
1163 1333 .Ar newpool .
1164 1334 Otherwise, it is imported with the same name as its exported name.
1165 1335 .Pp
1166 1336 If a device is removed from a system without running
1167 1337 .Nm zpool Cm export
1168 1338 first, the device appears as potentially active.
1169 1339 It cannot be determined if this was a failed export, or whether the device is
1170 1340 really in use from another host.
1171 1341 To import a pool in this state, the
1172 1342 .Fl f
1173 1343 option is required.
1174 1344 .Bl -tag -width Ds
1175 1345 .It Fl c Ar cachefile
1176 1346 Reads configuration from the given
1177 1347 .Ar cachefile
1178 1348 that was created with the
1179 1349 .Sy cachefile
1180 1350 pool property.
1181 1351 This
1182 1352 .Ar cachefile
1183 1353 is used instead of searching for devices.
1184 1354 .It Fl d Ar dir
1185 1355 Searches for devices or files in
1186 1356 .Ar dir .
1187 1357 The
1188 1358 .Fl d
1189 1359 option can be specified multiple times.
1190 1360 This option is incompatible with the
1191 1361 .Fl c
1192 1362 option.
1193 1363 .It Fl D
1194 1364 Imports destroyed pool.
1195 1365 The
1196 1366 .Fl f
1197 1367 option is also required.
1198 1368 .It Fl f
1199 1369 Forces import, even if the pool appears to be potentially active.
1200 1370 .It Fl F
1201 1371 Recovery mode for a non-importable pool.
1202 1372 Attempt to return the pool to an importable state by discarding the last few
1203 1373 transactions.
1204 1374 Not all damaged pools can be recovered by using this option.
1205 1375 If successful, the data from the discarded transactions is irretrievably lost.
1206 1376 This option is ignored if the pool is importable or already imported.
1207 1377 .It Fl m
1208 1378 Allows a pool to import when there is a missing log device.
1209 1379 Recent transactions can be lost because the log device will be discarded.
1210 1380 .It Fl n
1211 1381 Used with the
1212 1382 .Fl F
1213 1383 recovery option.
1214 1384 Determines whether a non-importable pool can be made importable again, but does
1215 1385 not actually perform the pool recovery.
1216 1386 For more details about pool recovery mode, see the
1217 1387 .Fl F
1218 1388 option, above.
1219 1389 .It Fl o Ar mntopts
1220 1390 Comma-separated list of mount options to use when mounting datasets within the
1221 1391 pool.
1222 1392 See
1223 1393 .Xr zfs 1M
1224 1394 for a description of dataset properties and mount options.
1225 1395 .It Fl o Ar property Ns = Ns Ar value
1226 1396 Sets the specified property on the imported pool.
1227 1397 See the
1228 -.Sx Properties
1398 +.Sx Pool Properties
1229 1399 section for more information on the available pool properties.
1230 1400 .It Fl R Ar root
1231 1401 Sets the
1232 1402 .Sy cachefile
1233 1403 property to
1234 1404 .Sy none
1235 1405 and the
1236 1406 .Sy altroot
1237 1407 property to
1238 1408 .Ar root .
1409 +.It Fl t Ar numthreads
1410 +Mount datasets in parallel using up to
1411 +.Ar numthreads
1412 +threads.
1239 1413 .El
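.Pp
For example, the following hypothetical invocation imports the exported pool
.Em tank
under the new name
.Em tank2 ,
mounting its datasets with up to 8 parallel threads:
.Bd -literal
# zpool import -t 8 tank tank2
.Ed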
1240 1414 .It Xo
1241 1415 .Nm
1242 1416 .Cm iostat
1243 1417 .Op Fl v
1244 1418 .Op Fl T Sy u Ns | Ns Sy d
1245 1419 .Oo Ar pool Oc Ns ...
1246 1420 .Op Ar interval Op Ar count
1247 1421 .Xc
1248 1422 Displays I/O statistics for the given pools.
1249 1423 When given an
1250 1424 .Ar interval ,
1251 1425 the statistics are printed every
1252 1426 .Ar interval
1253 1427 seconds until ^C is pressed.
1254 1428 If no
1255 1429 .Ar pool Ns s
1256 1430 are specified, statistics for every pool in the system are shown.
1257 1431 If
1258 1432 .Ar count
1259 1433 is specified, the command exits after
1260 1434 .Ar count
1261 1435 reports are printed.
1262 1436 .Bl -tag -width Ds
1263 1437 .It Fl T Sy u Ns | Ns Sy d
1264 1438 Display a time stamp.
1265 1439 Specify
1266 1440 .Sy u
1267 1441 for a printed representation of the internal representation of time.
1268 1442 See
1269 1443 .Xr time 2 .
1270 1444 Specify
1271 1445 .Sy d
1272 1446 for standard date format.
1273 1447 See
1274 1448 .Xr date 1 .
1275 1449 .It Fl v
1276 1450 Verbose statistics.
1277 1451 Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
1278 1452 .El
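.Pp
For example, the following hypothetical invocation prints per-vdev statistics
for the pool
.Em tank
every 5 seconds, 10 times:
.Bd -literal
# zpool iostat -v tank 5 10
.Ed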
1279 1453 .It Xo
1280 1454 .Nm
1281 1455 .Cm labelclear
1282 1456 .Op Fl f
1283 1457 .Ar device
1284 1458 .Xc
1285 1459 Removes ZFS label information from the specified
1286 1460 .Ar device .
1287 1461 The
1288 1462 .Ar device
1289 1463 must not be part of an active pool configuration.
1290 1464 .Bl -tag -width Ds
1291 1465 .It Fl f
1292 1466 Treat exported or foreign devices as inactive.
1293 1467 .El
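.Pp
For example, the following hypothetical invocation clears the label from device
.Em c1t3d0 ,
which once belonged to an exported pool:
.Bd -literal
# zpool labelclear -f c1t3d0
.Ed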
1294 1468 .It Xo
1295 1469 .Nm
1296 1470 .Cm list
1297 1471 .Op Fl Hpv
1298 1472 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1299 1473 .Op Fl T Sy u Ns | Ns Sy d
1300 1474 .Oo Ar pool Oc Ns ...
1301 1475 .Op Ar interval Op Ar count
1302 1476 .Xc
1303 1477 Lists the given pools along with a health status and space usage.
1304 1478 If no
1305 1479 .Ar pool Ns s
1306 1480 are specified, all pools in the system are listed.
1307 1481 When given an
1308 1482 .Ar interval ,
1309 1483 the information is printed every
1310 1484 .Ar interval
1311 1485 seconds until ^C is pressed.
1312 1486 If
1313 1487 .Ar count
1314 1488 is specified, the command exits after
1315 1489 .Ar count
1316 1490 reports are printed.
1317 1491 .Bl -tag -width Ds
1318 1492 .It Fl H
1319 1493 Scripted mode.
1320 1494 Do not display headers, and separate fields by a single tab instead of arbitrary
1321 1495 space.
1322 1496 .It Fl o Ar property
1323 1497 Comma-separated list of properties to display.
1324 1498 See the
1325 -.Sx Properties
1499 +.Sx Pool Properties
1326 1500 section for a list of valid properties.
1327 1501 The default list is
1328 1502 .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1329 1503 .Cm dedupratio , health , altroot .
1330 1504 .It Fl p
1331 1505 Display numbers in parsable
1332 1506 .Pq exact
1333 1507 values.
1334 1508 .It Fl T Sy u Ns | Ns Sy d
1335 1509 Display a time stamp.
1336 1510 Specify
1337 1511 .Sy u
1338 1512 for a printed representation of the internal representation of time.
1339 1513 See
1340 1514 .Xr time 2 .
1341 1515 Specify
1342 1516 .Sy d
1343 1517 for standard date format.
1344 1518 See
1345 1519 .Xr date 1 .
1346 1520 .It Fl v
1347 1521 Verbose statistics.
1348 1522 Reports usage statistics for individual vdevs within the pool, in addition to
1349 1523 the pool-wide statistics.
1350 1524 .El
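.Pp
For example, the following invocation prints the name, size, and health of
every pool in scripted, parsable form:
.Bd -literal
# zpool list -Hp -o name,size,health
.Ed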
1351 1525 .It Xo
1352 1526 .Nm
1353 1527 .Cm offline
1354 1528 .Op Fl t
1355 1529 .Ar pool Ar device Ns ...
1356 1530 .Xc
1357 1531 Takes the specified physical device offline.
1358 1532 While the
1359 1533 .Ar device
1360 1534 is offline, no attempt is made to read or write to the device.
1361 1535 This command is not applicable to spares.
1362 1536 .Bl -tag -width Ds
1363 1537 .It Fl t
1364 1538 Temporary.
1365 1539 Upon reboot, the specified physical device reverts to its previous state.
1366 1540 .El
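.Pp
For example, the following hypothetical invocation takes device
.Em c0t2d0
in pool
.Em tank
offline until the next reboot:
.Bd -literal
# zpool offline -t tank c0t2d0
.Ed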
1367 1541 .It Xo
1368 1542 .Nm
1369 1543 .Cm online
1370 1544 .Op Fl e
1371 1545 .Ar pool Ar device Ns ...
1372 1546 .Xc
1373 1547 Brings the specified physical device online.
1374 1548 This command is not applicable to spares.
1375 1549 .Bl -tag -width Ds
1376 1550 .It Fl e
1377 1551 Expand the device to use all available space.
1378 1552 If the device is part of a mirror or raidz then all devices must be expanded
1379 1553 before the new space will become available to the pool.
1380 1554 .El
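.Pp
For example, the following hypothetical invocation brings the same device back
online and expands it to use all available space:
.Bd -literal
# zpool online -e tank c0t2d0
.Ed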
1381 1555 .It Xo
1382 1556 .Nm
1383 1557 .Cm reguid
1384 1558 .Ar pool
1385 1559 .Xc
1386 1560 Generates a new unique identifier for the pool.
1387 1561 You must ensure that all devices in this pool are online and healthy before
1388 1562 performing this action.
1389 1563 .It Xo
1390 1564 .Nm
1391 1565 .Cm reopen
1392 1566 .Ar pool
1393 1567 .Xc
1394 1568 Reopens all the vdevs associated with the pool.
1395 1569 .It Xo
1396 1570 .Nm
1397 1571 .Cm remove
1398 -.Op Fl np
1399 1572 .Ar pool Ar device Ns ...
1400 1573 .Xc
1401 1574 Removes the specified device from the pool.
1402 -This command currently only supports removing hot spares, cache, log
1403 -devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz.
1404 -.sp
1405 -Removing a top-level vdev reduces the total amount of space in the storage pool.
1406 -The specified device will be evacuated by copying all allocated space from it to
1407 -the other devices in the pool.
1408 -In this case, the
1409 -.Nm zpool Cm remove
1410 -command initiates the removal and returns, while the evacuation continues in
1411 -the background.
1412 -The removal progress can be monitored with
1413 -.Nm zpool Cm status.
1414 -This feature must be enabled to be used, see
1415 -.Xr zpool-features 5
1416 -.Pp
1417 -A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the
1418 -same.
1419 -Non-log devices or data devices that are part of a mirrored configuration can be removed using
1575 +This command currently only supports removing hot spares, cache, log and special
1576 +devices.
1577 +A mirrored log device can be removed by specifying the top-level mirror for the
1578 +log.
1579 +Non-log devices that are part of a mirrored configuration can be removed using
1420 1580 the
1421 1581 .Nm zpool Cm detach
1422 1582 command.
1423 -.Bl -tag -width Ds
1424 -.It Fl n
1425 -Do not actually perform the removal ("no-op").
1426 -Instead, print the estimated amount of memory that will be used by the
1427 -mapping table after the removal completes.
1428 -This is nonzero only for top-level vdevs.
1429 -.El
1430 -.Bl -tag -width Ds
1431 -.It Fl p
1432 -Used in conjunction with the
1433 -.Fl n
1434 -flag, displays numbers as parsable (exact) values.
1435 -.El
1583 +Non-redundant and raidz devices cannot be removed from a pool.
1436 1584 .It Xo
1437 1585 .Nm
1438 -.Cm remove
1439 -.Fl s
1440 -.Ar pool
1441 -.Xc
1442 -Stops and cancels an in-progress removal of a top-level vdev.
1443 -.It Xo
1444 -.Nm
1445 1586 .Cm replace
1446 1587 .Op Fl f
1447 1588 .Ar pool Ar device Op Ar new_device
1448 1589 .Xc
1449 1590 Replaces
1450 1591 .Ar device
1451 1592 with
1452 1593 .Ar new_device .
1453 1594 This is equivalent to attaching
1454 1595 .Ar new_device ,
1455 1596 waiting for it to resilver, and then detaching
1456 1597 .Ar device .
1457 1598 .Pp
1458 1599 The size of
1459 1600 .Ar new_device
1460 1601 must be greater than or equal to the minimum size of all the devices in a mirror
1461 1602 or raidz configuration.
1462 1603 .Pp
1463 1604 .Ar new_device
1464 1605 is required if the pool is not redundant.
1465 1606 If
1466 1607 .Ar new_device
1467 1608 is not specified, it defaults to
1468 1609 .Ar device .
1469 1610 This form of replacement is useful after an existing disk has failed and has
1470 1611 been physically replaced.
1471 1612 In this case, the new disk may have the same
1472 1613 .Pa /dev/dsk
1473 1614 path as the old device, even though it is actually a different disk.
1474 1615 ZFS recognizes this.
1475 1616 .Bl -tag -width Ds
1476 1617 .It Fl f
1477 1618 Forces use of
1478 1619 .Ar new_device ,
1479 1620 even if it appears to be in use.
1480 1621 Not all devices can be overridden in this manner.
1481 1622 .El
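.Pp
For example, after a failed disk has been physically replaced at the same
.Pa /dev/dsk
path, the following hypothetical invocation replaces it in place by omitting
.Ar new_device :
.Bd -literal
# zpool replace tank c0t1d0
.Ed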
1482 1623 .It Xo
1483 1624 .Nm
1484 1625 .Cm scrub
1485 -.Op Fl s | Fl p
1626 +.Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
1486 1627 .Ar pool Ns ...
1487 1628 .Xc
1488 1629 Begins a scrub or resumes a paused scrub.
1489 1630 The scrub examines all data in the specified pools to verify that it checksums
1490 1631 correctly.
1491 1632 For replicated
1492 1633 .Pq mirror or raidz
1493 1634 devices, ZFS automatically repairs any damage discovered during the scrub.
1494 1635 The
1495 1636 .Nm zpool Cm status
1496 1637 command reports the progress of the scrub and summarizes the results of the
1497 1638 scrub upon completion.
1498 1639 .Pp
1499 1640 Scrubbing and resilvering are very similar operations.
1500 1641 The difference is that resilvering only examines data that ZFS knows to be out
1501 1642 of date
1502 1643 .Po
1503 1644 for example, when attaching a new device to a mirror or replacing an existing
1504 1645 device
1505 1646 .Pc ,
1506 1647 whereas scrubbing examines all data to discover silent errors due to hardware
1507 1648 faults or disk failure.
1508 1649 .Pp
1509 1650 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1510 1651 one at a time.
1511 1652 If a scrub is paused, the
1512 1653 .Nm zpool Cm scrub
1513 1654 command resumes it.
1514 1655 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1515 1656 resilver completes.
1657 +.Pp
1658 +A partial scrub may be requested using the
1659 +.Fl m
1660 +or
1661 +.Fl M
1662 +option.
1516 1663 .Bl -tag -width Ds
1517 -.It Fl s
1518 -Stop scrubbing.
1519 -.El
1520 -.Bl -tag -width Ds
1664 +.It Fl m
1665 +Scrub only metadata blocks.
1666 +.It Fl M
1667 +Scrub only MOS blocks.
1521 1668 .It Fl p
1522 1669 Pause scrubbing.
1523 1670 Scrub pause state and progress are periodically synced to disk.
1524 1671 If the system is restarted or the pool is exported during a paused scrub,
1525 1672 the scrub remains paused after import until it is resumed.
1526 1673 Once resumed, the scrub picks up from the place where it was last
1527 1674 checkpointed to disk.
1528 1675 To resume a paused scrub, issue
1529 1676 .Nm zpool Cm scrub
1530 1677 again.
1678 +.It Fl s
1679 +Stop scrubbing.
1531 1680 .El
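.Pp
For example, the following hypothetical sequence starts a scrub of pool
.Em tank ,
pauses it, and later resumes it:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed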
1532 1681 .It Xo
1533 1682 .Nm
1534 1683 .Cm set
1535 1684 .Ar property Ns = Ns Ar value
1536 1685 .Ar pool
1537 1686 .Xc
1538 1687 Sets the given property on the specified pool.
1539 1688 See the
1540 -.Sx Properties
1689 +.Sx Pool Properties
1541 1690 section for more information on what properties can be set and acceptable
1542 1691 values.
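.Pp
For example, assuming the
.Sy autotrim
property is supported on the system, the following hypothetical invocation
enables it on pool
.Em tank :
.Bd -literal
# zpool set autotrim=on tank
.Ed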
1543 1692 .It Xo
1544 1693 .Nm
1545 1694 .Cm split
1546 1695 .Op Fl n
1547 1696 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1548 1697 .Op Fl R Ar root
1549 1698 .Ar pool newpool
1550 1699 .Xc
1551 1700 Splits devices off
1552 1701 .Ar pool
1553 1702 creating
1554 1703 .Ar newpool .
1555 1704 All vdevs in
1556 1705 .Ar pool
1557 1706 must be mirrors.
1558 1707 At the time of the split,
1559 1708 .Ar newpool
1560 1709 will be a replica of
1561 1710 .Ar pool .
1562 1711 .Bl -tag -width Ds
1563 1712 .It Fl n
1564 1713 Do a dry run; do not actually perform the split.
1565 1714 Print out the expected configuration of
1566 1715 .Ar newpool .
1567 1716 .It Fl o Ar property Ns = Ns Ar value
1568 1717 Sets the specified property for
1569 1718 .Ar newpool .
1570 1719 See the
1571 -.Sx Properties
1720 +.Sx Pool Properties
1572 1721 section for more information on the available pool properties.
1573 1722 .It Fl R Ar root
1574 1723 Set
1575 1724 .Sy altroot
1576 1725 for
1577 1726 .Ar newpool
1578 1727 to
1579 1728 .Ar root
1580 1729 and automatically import it.
1581 1730 .El
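.Pp
For example, the following hypothetical invocation performs a dry run of
splitting pool
.Em tank
into a new pool
.Em tank2 :
.Bd -literal
# zpool split -n tank tank2
.Ed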
1582 1731 .It Xo
1583 1732 .Nm
1584 1733 .Cm status
1585 1734 .Op Fl Dvx
1586 1735 .Op Fl T Sy u Ns | Ns Sy d
1587 1736 .Oo Ar pool Oc Ns ...
1588 1737 .Op Ar interval Op Ar count
1589 1738 .Xc
1590 1739 Displays the detailed health status for the given pools.
1591 1740 If no
1592 1741 .Ar pool
1593 1742 is specified, then the status of each pool in the system is displayed.
1594 1743 For more information on pool and device health, see the
1595 1744 .Sx Device Failure and Recovery
1596 1745 section.
1597 1746 .Pp
1598 1747 If a scrub or resilver is in progress, this command reports the percentage done
1599 1748 and the estimated time to completion.
1600 1749 Both of these are only approximate, because the amount of data in the pool and
1601 1750 the other workloads on the system can change.
1602 1751 .Bl -tag -width Ds
1603 1752 .It Fl D
1604 1753 Display a histogram of deduplication statistics, showing the allocated
1605 1754 .Pq physically present on disk
1606 1755 and referenced
1607 1756 .Pq logically referenced in the pool
1608 1757 block counts and sizes by reference count.
1609 1758 .It Fl T Sy u Ns | Ns Sy d
1610 1759 Display a time stamp.
1611 1760 Specify
1612 1761 .Sy u
1613 1762 for a printed representation of the internal representation of time.
1614 1763 See
1615 1764 .Xr time 2 .
1616 1765 Specify
1617 1766 .Sy d
1618 1767 for standard date format.
1619 1768 See
1620 1769 .Xr date 1 .
1621 1770 .It Fl v
1622 1771 Displays verbose data error information, printing out a complete list of all
1623 1772 data errors since the last complete pool scrub.
1624 1773 .It Fl x
1625 1774 Only display status for pools that are exhibiting errors or are otherwise
1626 1775 unavailable.
1627 1776 Warnings about pools not using the latest on-disk format will not be included.
1628 1777 .El
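.Pp
For example, the following hypothetical invocation prints verbose status, with
a date-formatted time stamp, every 30 seconds, only for pools exhibiting
errors:
.Bd -literal
# zpool status -xv -T d 30
.Ed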
1629 1778 .It Xo
1630 1779 .Nm
1780 +.Cm trim
1781 +.Op Fl r Ar rate Ns | Ns Fl s
1782 +.Ar pool Ns ...
1783 +.Xc
1784 +Initiates an on-demand TRIM operation on all of the free space of a pool.
1785 +This informs the underlying storage devices of all of the blocks that the pool
1786 +no longer considers allocated, thus allowing thinly provisioned storage devices
1787 +to reclaim them.
1788 +Please note that this collects all space marked as
1789 +.Qq freed
1790 +in the pool immediately and does not wait for the
1791 +.Sy zfs_txgs_per_trim
1792 +delay as automatic TRIM does.
1793 +Hence, this can limit pool corruption recovery options during and immediately
1794 +following the on-demand TRIM to 1-2 TXGs into the past
1795 +.Pq instead of the standard 32-64 TXGs of automatic TRIM .
1796 +This approach, however, allows you to recover the maximum amount of free space
1797 +from the pool immediately without having to wait.
1798 +.Pp
1799 +Also note that an on-demand TRIM operation can be initiated irrespective of the
1800 +.Sy autotrim
1801 +pool property setting.
1802 +It does, however, respect the
1803 +.Sy forcetrim
1804 +pool property.
1805 +.Pp
1806 +An on-demand TRIM operation does not conflict with an ongoing scrub, but it can
1807 +put significant I/O stress on the underlying vdevs.
1808 +A resilver, however, automatically stops an on-demand TRIM operation.
1809 +You can manually reinitiate the TRIM operation after the resilver has started
1810 +by reissuing the
1811 +.Nm zpool Cm trim
1812 +command.
1813 +.Pp
1814 +Adding a vdev during TRIM is supported, although the progression display in
1815 +.Nm zpool Cm status
1816 +might not be entirely accurate in that case
1817 +.Pq TRIM will complete before reaching 100% .
1818 +Removing or detaching a vdev will prematurely terminate an on-demand TRIM
1819 +operation.
1820 +.Bl -tag -width Ds
1821 +.It Fl r Ar rate
1822 +Controls the speed at which the TRIM operation progresses.
1823 +Without this option, TRIM is executed in parallel on all top-level vdevs as
1824 +quickly as possible.
1825 +This option allows you to control how fast
1826 +.Pq in bytes per second
1827 +the TRIM is executed.
1828 +This rate is applied on a per-vdev basis, i.e. every top-level vdev in the pool
1829 +tries to match this speed.
1830 +.Pp
1831 +Due to limitations in how the algorithm is designed, TRIMs are executed in
1832 +whole-metaslab increments.
1833 +Each top-level vdev contains approximately 200 metaslabs, so a rate-limited TRIM
1834 +progresses in steps: it TRIMs one metaslab completely, then waits so that the
1835 +speed averages out over the whole device.
1836 +.Pp
1837 +When an on-demand TRIM operation is already in progress, this option changes its
1838 +rate.
1839 +To change a rate-limited TRIM to an unlimited one, simply execute the
1840 +.Nm zpool Cm trim
1841 +command without the
1842 +.Fl r
1843 +option.
1844 +.It Fl s
1845 +Stop trimming.
1846 +If an on-demand TRIM operation is not ongoing at the moment, this does nothing
1847 +and the command returns success.
1848 +.El
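.Pp
For example, the following hypothetical sequence starts a TRIM of pool
.Em tank
rate-limited to 104857600 bytes per second
.Pq roughly 100 MB/s
per top-level vdev, then lifts the rate limit, and finally stops the
operation:
.Bd -literal
# zpool trim -r 104857600 tank
# zpool trim tank
# zpool trim -s tank
.Ed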
1849 +.It Xo
1850 +.Nm
1631 1851 .Cm upgrade
1632 1852 .Xc
1633 1853 Displays pools which do not have all supported features enabled and pools
1634 1854 formatted using a legacy ZFS version number.
1635 1855 These pools can continue to be used, but some features may not be available.
1636 1856 Use
1637 1857 .Nm zpool Cm upgrade Fl a
1638 1858 to enable all features on all pools.
1639 1859 .It Xo
1640 1860 .Nm
1641 1861 .Cm upgrade
1642 1862 .Fl v
1643 1863 .Xc
1644 1864 Displays legacy ZFS versions supported by the current software.
1645 1865 See
1646 1866 .Xr zpool-features 5
1647 1867 for a description of the feature flags supported by the current software.
1648 1868 .It Xo
1649 1869 .Nm
1650 1870 .Cm upgrade
1651 1871 .Op Fl V Ar version
1652 1872 .Fl a Ns | Ns Ar pool Ns ...
1653 1873 .Xc
1654 1874 Enables all supported features on the given pool.
1655 1875 Once this is done, the pool will no longer be accessible on systems that do not
1656 1876 support feature flags.
1657 1877 See
1658 1878 .Xr zpool-features 5
1659 1879 for details on compatibility with systems that support feature flags, but do not
1660 1880 support all features enabled on the pool.
1661 1881 .Bl -tag -width Ds
1662 1882 .It Fl a
1663 1883 Enables all supported features on all pools.
1664 1884 .It Fl V Ar version
1665 1885 Upgrade to the specified legacy version.
1666 1886 If the
1667 1887 .Fl V
1668 1888 flag is specified, no features will be enabled on the pool.
1669 1889 This option can only be used to increase the version number up to the last
1670 1890 supported legacy version number.
1671 1891 .El
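.Pp
For example, the following hypothetical invocation upgrades a legacy pool
.Em tank
only as far as legacy version 28, without enabling any feature flags:
.Bd -literal
# zpool upgrade -V 28 tank
.Ed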
1892 +.It Xo
1893 +.Nm
1894 +.Cm vdev-get
1895 +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1896 +.Ar pool
1897 +.Ar vdev-name Ns | Ns Ar vdev-guid
1898 +.Xc
1899 +Retrieves the given list of vdev properties
1900 +.Po or all properties if
1901 +.Sy all
1902 +is used
1903 +.Pc
1904 +for the specified vdev of the specified storage pool.
1905 +These properties are displayed in the same manner as the pool properties.
1906 +The operation is supported for leaf-level vdevs only.
1907 +See the
1908 +.Sx Device Properties
1909 +section for more information on the available properties.
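.Pp
For example, the following hypothetical invocation retrieves all device
properties of the leaf vdev
.Em c1t0d0
in pool
.Em tank :
.Bd -literal
# zpool vdev-get all tank c1t0d0
.Ed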
1910 +.It Xo
1911 +.Nm
1912 +.Cm vdev-set
1913 +.Ar property Ns = Ns Ar value
1914 +.Ar pool
1915 +.Ar vdev-name Ns | Ns Ar vdev-guid
1916 +.Xc
1917 +Sets the given property on the specified device of the specified pool.
1918 +If a top-level vdev is specified, the property is set on all of its child devices.
1919 +See the
1920 +.Sx Device Properties
1921 +section for more information on what properties can be set and acceptable values.
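.Pp
For example, the following hypothetical invocation, with
.Ar property
and
.Ar value
standing in for an actual device property assignment, sets it on the same leaf
vdev:
.Bd -literal
# zpool vdev-set property=value tank c1t0d0
.Ed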
1672 1922 .El
1673 1923 .Sh EXIT STATUS
1674 1924 The following exit values are returned:
1675 1925 .Bl -tag -width Ds
1676 1926 .It Sy 0
1677 1927 Successful completion.
1678 1928 .It Sy 1
1679 1929 An error occurred.
1680 1930 .It Sy 2
1681 1931 Invalid command line options were specified.
1682 1932 .El
1683 1933 .Sh EXAMPLES
1684 1934 .Bl -tag -width Ds
1685 1935 .It Sy Example 1 No Creating a RAID-Z Storage Pool
1686 1936 The following command creates a pool with a single raidz root vdev that
1687 1937 consists of six disks.
1688 1938 .Bd -literal
1689 1939 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1690 1940 .Ed
1691 1941 .It Sy Example 2 No Creating a Mirrored Storage Pool
1692 1942 The following command creates a pool with two mirrors, where each mirror
1693 1943 contains two disks.
1694 1944 .Bd -literal
1695 1945 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1696 1946 .Ed
1697 1947 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
1698 1948 The following command creates an unmirrored pool using two disk slices.
1699 1949 .Bd -literal
1700 1950 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1701 1951 .Ed
1702 1952 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
1703 1953 The following command creates an unmirrored pool using files.
1704 1954 While not recommended, a pool based on files can be useful for experimental
1705 1955 purposes.
1706 1956 .Bd -literal
1707 1957 # zpool create tank /path/to/file/a /path/to/file/b
1708 1958 .Ed
1709 1959 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
1710 1960 The following command adds two mirrored disks to the pool
1711 1961 .Em tank ,
1712 1962 assuming the pool is already made up of two-way mirrors.
1713 1963 The additional space is immediately available to any datasets within the pool.
1714 1964 .Bd -literal
1715 1965 # zpool add tank mirror c1t0d0 c1t1d0
1716 1966 .Ed
1717 1967 .It Sy Example 6 No Listing Available ZFS Storage Pools
1718 1968 The following command lists all available pools on the system.
1719 1969 In this case, the pool
1720 1970 .Em zion
1721 1971 is faulted due to a missing device.
1722 1972 The results from this command are similar to the following:
1723 1973 .Bd -literal
1724 1974 # zpool list
1725 1975 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1726 1976 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1727 1977 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1728 1978 zion - - - - - - - FAULTED -
1729 1979 .Ed
1730 1980 .It Sy Example 7 No Destroying a ZFS Storage Pool
1731 1981 The following command destroys the pool
1732 1982 .Em tank
1733 1983 and any datasets contained within.
1734 1984 .Bd -literal
1735 1985 # zpool destroy -f tank
1736 1986 .Ed
1737 1987 .It Sy Example 8 No Exporting a ZFS Storage Pool
1738 1988 The following command exports the devices in pool
1739 1989 .Em tank
1740 1990 so that they can be relocated or later imported.
1741 1991 .Bd -literal
1742 1992 # zpool export tank
1743 1993 .Ed
1744 1994 .It Sy Example 9 No Importing a ZFS Storage Pool
1745 1995 The following command displays available pools, and then imports the pool
1746 1996 .Em tank
1747 1997 for use on the system.
1748 1998 The results from this command are similar to the following:
1749 1999 .Bd -literal
1750 2000 # zpool import
1751 2001 pool: tank
1752 2002 id: 15451357997522795478
1753 2003 state: ONLINE
1754 2004 action: The pool can be imported using its name or numeric identifier.
1755 2005 config:
1756 2006
1757 2007 tank ONLINE
1758 2008 mirror ONLINE
1759 2009 c1t2d0 ONLINE
1760 2010 c1t3d0 ONLINE
1761 2011
1762 2012 # zpool import tank
1763 2013 .Ed
1764 2014 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
1765 2015 The following command upgrades all ZFS Storage pools to the current version of
1766 2016 the software.
1767 2017 .Bd -literal
1768 2018 # zpool upgrade -a
1769 2019 This system is currently running ZFS version 2.
1770 2020 .Ed
1771 2021 .It Sy Example 11 No Managing Hot Spares
1772 2022 The following command creates a new pool with an available hot spare:
1773 2023 .Bd -literal
1774 2024 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1775 2025 .Ed
1776 2026 .Pp
1777 2027 If one of the disks were to fail, the pool would be reduced to the degraded
1778 2028 state.
1779 2029 The failed device can be replaced using the following command:
1780 2030 .Bd -literal
1781 2031 # zpool replace tank c0t0d0 c0t3d0
1782 2032 .Ed
1783 2033 .Pp
1784 2034 Once the data has been resilvered, the spare is automatically removed and is
1785 2035 made available for use should another device fail.
1786 2036 The hot spare can be permanently removed from the pool using the following
1787 2037 command:
1788 2038 .Bd -literal
1789 2039 # zpool remove tank c0t2d0
1790 2040 .Ed
1791 2041 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
1792 2042 The following command creates a ZFS storage pool consisting of two, two-way
1793 2043 mirrors and mirrored log devices:
1794 2044 .Bd -literal
1795 2045 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
1796 2046 c4d0 c5d0
1797 2047 .Ed
1798 2048 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
1799 2049 The following command adds two disks for use as cache devices to a ZFS storage
1800 2050 pool:
1801 2051 .Bd -literal
1802 2052 # zpool add pool cache c2d0 c3d0
1803 2053 .Ed
1804 2054 .Pp
1805 2055 Once added, the cache devices gradually fill with content from main memory.
1806 2056 Depending on the size of your cache devices, it could take over an hour for
1807 2057 them to fill.
1808 2058 Capacity and reads can be monitored using the
1809 2059 .Cm iostat
1810 2060 option as follows:
1811 2061 .Bd -literal
1812 2062 # zpool iostat -v pool 5
1813 2063 .Ed
1814 -.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
1815 -The following commands remove the mirrored log device
1816 -.Sy mirror-2
1817 -and mirrored top-level data device
1818 -.Sy mirror-1 .
1819 -.Pp
2064 +.It Sy Example 14 No Removing a Mirrored Log Device
2065 +The following command removes the mirrored log device
2066 +.Sy mirror-2 .
1820 2067 Given this configuration:
1821 2068 .Bd -literal
1822 2069 pool: tank
1823 2070 state: ONLINE
1824 2071 scrub: none requested
1825 2072 config:
1826 2073
1827 2074 NAME STATE READ WRITE CKSUM
1828 2075 tank ONLINE 0 0 0
1829 2076 mirror-0 ONLINE 0 0 0
1830 2077 c6t0d0 ONLINE 0 0 0
1831 2078 c6t1d0 ONLINE 0 0 0
1832 2079 mirror-1 ONLINE 0 0 0
1833 2080 c6t2d0 ONLINE 0 0 0
1834 2081 c6t3d0 ONLINE 0 0 0
1835 2082 logs
1836 2083 mirror-2 ONLINE 0 0 0
1837 2084 c4t0d0 ONLINE 0 0 0
1838 2085 c4t1d0 ONLINE 0 0 0
1839 2086 .Ed
1840 2087 .Pp
1841 2088 The command to remove the mirrored log
1842 2089 .Sy mirror-2
1843 2090 is:
1844 2091 .Bd -literal
1845 2092 # zpool remove tank mirror-2
1846 2093 .Ed
1847 -.Pp
1848 -The command to remove the mirrored data
1849 -.Sy mirror-1
1850 -is:
1851 -.Bd -literal
1852 -# zpool remove tank mirror-1
1853 -.Ed
1854 2094 .It Sy Example 15 No Displaying expanded space on a device
1855 2095 The following command displays the detailed information for the pool
1856 2096 .Em data .
1857 2097 This pool consists of a single raidz vdev where one of its devices
1858 2098 increased its capacity by 10GB.
1859 2099 In this example, the pool will not be able to utilize this extra capacity until
1860 2100 all the devices under the raidz vdev have been expanded.
1861 2101 .Bd -literal
1862 2102 # zpool list -v data
1863 2103 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1864 2104 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1865 2105 raidz1 23.9G 14.6G 9.30G 48% -
1866 2106 c1t1d0 - - - - -
1867 2107 c1t2d0 - - - - 10G
1868 2108 c1t3d0 - - - - -
1869 2109 .Ed
1870 2110 .El
1871 2111 .Sh INTERFACE STABILITY
1872 2112 .Sy Evolving
1873 2113 .Sh SEE ALSO
1874 2114 .Xr zfs 1M ,
1875 2115 .Xr attributes 5 ,
1876 2116 .Xr zpool-features 5