NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
usr/src/common/zfs/zpool_prop.c
usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
usr/src/uts/common/io/scsi/targets/sd.c
usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
re 13748 added zpool export -c option
zpool export -c command exports specified pool while keeping its latest
configuration in the cache file for subsequent zpool import -c.
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c
--- old/usr/src/man/man1m/zpool.1m.man.txt
+++ new/usr/src/man/man1m/zpool.1m.man.txt
1 1 ZPOOL(1M) Maintenance Commands ZPOOL(1M)
2 2
3 3 NAME
4 4 zpool - configure ZFS storage pools
( 4 lines elided )
5 5
6 6 SYNOPSIS
7 7 zpool -?
8 8 zpool add [-fn] pool vdev...
9 9 zpool attach [-f] pool device new_device
10 10 zpool clear pool [device]
11 11 zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
12 12 [-O file-system-property=value]... [-R root] pool vdev...
13 13 zpool destroy [-f] pool
14 14 zpool detach pool device
15 - zpool export [-f] pool...
15 + zpool export [-cfF] [-t numthreads] pool...
16 16 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
17 17 zpool history [-il] [pool]...
18 18 zpool import [-D] [-d dir]
19 19 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
20 - [-o property=value]... [-R root]
20 + [-o property=value]... [-R root] [-t numthreads]
21 21 zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
22 - [-o property=value]... [-R root] pool|id [newpool]
22 + [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
23 23 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
24 24 zpool labelclear [-f] device
25 25 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
26 26 [interval [count]]
27 27 zpool offline [-t] pool device...
28 28 zpool online [-e] pool device...
29 29 zpool reguid pool
30 30 zpool reopen pool
31 - zpool remove [-np] pool device...
32 - zpool remove -s pool
31 + zpool remove pool device...
33 32 zpool replace [-f] pool device [new_device]
34 - zpool scrub [-s | -p] pool...
33 + zpool scrub [-m|-M|-p|-s] pool...
35 34 zpool set property=value pool
36 35 zpool split [-n] [-o property=value]... [-R root] pool newpool
37 36 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
37 + zpool trim [-r rate|-s] pool...
38 38 zpool upgrade
39 39 zpool upgrade -v
40 40 zpool upgrade [-V version] -a|pool...
41 + zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
42 + zpool vdev-set property=value pool vdev-name|vdev-guid
41 43
42 44 DESCRIPTION
43 45 The zpool command configures ZFS storage pools. A storage pool is a
44 46 collection of devices that provides physical storage and data replication
45 47 for ZFS datasets. All datasets within a storage pool share the same
46 48 space. See zfs(1M) for information on managing datasets.
47 49
48 50 Virtual Devices (vdevs)
49 51 A "virtual device" describes a single device or a collection of devices
50 52 organized according to certain performance and fault characteristics.
51 53 The following virtual devices are supported:
52 54
53 55 disk A block device, typically located under /dev/dsk. ZFS can use
54 56 individual slices or partitions, though the recommended mode of
55 57 operation is to use whole disks. A disk can be specified by a
56 58 full path, or it can be a shorthand name (the relative portion of
57 59 the path under /dev/dsk). A whole disk can be specified by
58 60 omitting the slice or partition designation. For example, c0t0d0
59 61 is equivalent to /dev/dsk/c0t0d0s2. When given a whole disk, ZFS
60 62 automatically labels the disk, if necessary.
61 63
62 64 file A regular file. The use of files as a backing store is strongly
63 65 discouraged. It is designed primarily for experimental purposes,
64 66 as the fault tolerance of a file is only as good as the file
65 67 system of which it is a part. A file must be specified by a full
66 68 path.
67 69
68 70 mirror A mirror of two or more devices. Data is replicated in an
69 71 identical fashion across all components of a mirror. A mirror
70 72 with N disks of size X can hold X bytes and can withstand (N-1)
71 73 devices failing before data integrity is compromised.
72 74
73 75 raidz, raidz1, raidz2, raidz3
74 76 A variation on RAID-5 that allows for better distribution of
75 77 parity and eliminates the RAID-5 "write hole" (in which data and
76 78 parity become inconsistent after a power loss). Data and parity
77 79 is striped across all disks within a raidz group.
78 80
79 81 A raidz group can have single-, double-, or triple-parity,
80 82 meaning that the raidz group can sustain one, two, or three
81 83 failures, respectively, without losing any data. The raidz1 vdev
82 84 type specifies a single-parity raidz group; the raidz2 vdev type
83 85 specifies a double-parity raidz group; and the raidz3 vdev type
84 86 specifies a triple-parity raidz group. The raidz vdev type is an
85 87 alias for raidz1.
86 88
87 89 A raidz group with N disks of size X with P parity disks can hold
88 90 approximately (N-P)*X bytes and can withstand P device(s) failing
89 91 before data integrity is compromised. The minimum number of
90 92 devices in a raidz group is one more than the number of parity
91 93 disks. The recommended number is between 3 and 9 to help
92 94 increase performance.
93 95
94 96 spare A special pseudo-vdev which keeps track of available hot spares
95 97 for a pool. For more information, see the Hot Spares section.
96 98
97 99 log A separate intent log device. If more than one log device is
98 100 specified, then writes are load-balanced between devices. Log
99 101 devices can be mirrored. However, raidz vdev types are not
100 102 supported for the intent log. For more information, see the
101 103 Intent Log section.
102 104
103 105 cache A device used to cache storage pool data. A cache device cannot
104 106 be configured as a mirror or raidz group. For more information,
105 107 see the Cache Devices section.
106 108
107 109 Virtual devices cannot be nested, so a mirror or raidz virtual device can
108 110 only contain files or disks. Mirrors of mirrors (or other combinations)
109 111 are not allowed.
110 112
111 113 A pool can have any number of virtual devices at the top of the
112 114 configuration (known as "root vdevs"). Data is dynamically distributed
113 115 across all top-level devices to balance data among devices. As new
114 116 virtual devices are added, ZFS automatically places data on the newly
115 117 available devices.
116 118
117 119 Virtual devices are specified one at a time on the command line,
118 120 separated by whitespace. The keywords mirror and raidz are used to
119 121 distinguish where a group ends and another begins. For example, the
120 122 following creates two root vdevs, each a mirror of two disks:
121 123
122 124 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
123 125
124 126 Device Failure and Recovery
125 127 ZFS supports a rich set of mechanisms for handling device failure and
126 128 data corruption. All metadata and data is checksummed, and ZFS
127 129 automatically repairs bad data from a good copy when corruption is
128 130 detected.
129 131
130 132 In order to take advantage of these features, a pool must make use of
131 133 some form of redundancy, using either mirrored or raidz groups. While
132 134 ZFS supports running in a non-redundant configuration, where each root
133 135 vdev is simply a disk or file, this is strongly discouraged. A single
134 136 case of bit corruption can render some or all of your data unavailable.
135 137
136 138 A pool's health status is described by one of three states: online,
137 139 degraded, or faulted. An online pool has all devices operating normally.
138 140 A degraded pool is one in which one or more devices have failed, but the
139 141 data is still available due to a redundant configuration. A faulted pool
140 142 has corrupted metadata, or one or more faulted devices, and insufficient
141 143 replicas to continue functioning.
142 144
143 145 The health of the top-level vdev, such as mirror or raidz device, is
144 146 potentially impacted by the state of its associated vdevs, or component
145 147 devices. A top-level vdev or component device is in one of the following
146 148 states:
147 149
148 150 DEGRADED One or more top-level vdevs is in the degraded state because
149 151 one or more component devices are offline. Sufficient replicas
150 152 exist to continue functioning.
151 153
152 154 One or more component devices is in the degraded or faulted
153 155 state, but sufficient replicas exist to continue functioning.
154 156 The underlying conditions are as follows:
155 157
156 158 o The number of checksum errors exceeds acceptable levels and
157 159 the device is degraded as an indication that something may
158 160 be wrong. ZFS continues to use the device as necessary.
159 161
160 162 o The number of I/O errors exceeds acceptable levels. The
161 163 device could not be marked as faulted because there are
162 164 insufficient replicas to continue functioning.
163 165
164 166 FAULTED One or more top-level vdevs is in the faulted state because one
165 167 or more component devices are offline. Insufficient replicas
166 168 exist to continue functioning.
167 169
168 170 One or more component devices is in the faulted state, and
169 171 insufficient replicas exist to continue functioning. The
170 172 underlying conditions are as follows:
171 173
172 174 o The device could be opened, but the contents did not match
173 175 expected values.
174 176
175 177 o The number of I/O errors exceeds acceptable levels and the
176 178 device is faulted to prevent further use of the device.
177 179
178 180 OFFLINE The device was explicitly taken offline by the zpool offline
179 181 command.
180 182
181 183 ONLINE The device is online and functioning.
182 184
183 185 REMOVED The device was physically removed while the system was running.
184 186 Device removal detection is hardware-dependent and may not be
185 187 supported on all platforms.
186 188
187 189 UNAVAIL The device could not be opened. If a pool is imported when a
188 190 device was unavailable, then the device will be identified by a
189 191 unique identifier instead of its path since the path was never
190 192 correct in the first place.
191 193
192 194 If a device is removed and later re-attached to the system, ZFS attempts
193 195 to put the device online automatically. Device attach detection is
194 196 hardware-dependent and might not be supported on all platforms.
195 197
196 198 Hot Spares
197 199 ZFS allows devices to be associated with pools as "hot spares". These
198 200 devices are not actively used in the pool, but when an active device
199 201 fails, it is automatically replaced by a hot spare. To create a pool
200 202 with hot spares, specify a spare vdev with any number of devices. For
201 203 example,
202 204
203 205 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
204 206
205 207 Spares can be shared across multiple pools, and can be added with the
206 208 zpool add command and removed with the zpool remove command. Once a
207 209 spare replacement is initiated, a new spare vdev is created within the
208 210 configuration that will remain there until the original device is
209 211 replaced. At this point, the hot spare becomes available again if
210 212 another device fails.
( 160 lines elided )
211 213
212 214 If a pool has a shared spare that is currently being used, the pool can
213 215 not be exported since other pools may use this shared spare, which may
214 216 lead to potential data corruption.
215 217
216 218 An in-progress spare replacement can be cancelled by detaching the hot
217 219 spare. If the original faulted device is detached, then the hot spare
218 220 assumes its place in the configuration, and is removed from the spare
219 221 list of all active pools.
220 222
   223 +	     See the sparegroup device property in the Device Properties section
   224 +	     for information on how to control spare selection.
225 +
221 226 Spares cannot replace log devices.
222 227
223 228 Intent Log
224 229 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
225 230 transactions. For instance, databases often require their transactions
226 231 to be on stable storage devices when returning from a system call. NFS
227 232 and other applications can also use fsync(3C) to ensure data stability.
228 233 By default, the intent log is allocated from blocks within the main pool.
229 234 However, it might be possible to get better performance using separate
230 235 intent log devices such as NVRAM or a dedicated disk. For example:
231 236
232 237 # zpool create pool c0d0 c1d0 log c2d0
233 238
234 239 Multiple log devices can also be specified, and they can be mirrored.
235 240 See the EXAMPLES section for an example of mirroring multiple log
236 241 devices.
237 242
238 243 Log devices can be added, replaced, attached, detached, and imported and
239 - exported as part of the larger pool. Mirrored devices can be removed by
240 - specifying the top-level mirror vdev.
244 + exported as part of the larger pool. Mirrored log devices can be removed
245 + by specifying the top-level mirror for the log.
241 246
242 247 Cache Devices
243 248 Devices can be added to a storage pool as "cache devices". These devices
244 249 provide an additional layer of caching between main memory and disk. For
245 250 read-heavy workloads, where the working set size is much larger than what
   246  251	     can be cached in main memory, using cache devices allows much more of this
247 252 working set to be served from low latency media. Using cache devices
248 253 provides the greatest performance improvement for random read-workloads
249 254 of mostly static content.
250 255
251 256 To create a pool with cache devices, specify a cache vdev with any number
252 257 of devices. For example:
253 258
( 3 lines elided )
254 259 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
255 260
256 261 Cache devices cannot be mirrored or part of a raidz configuration. If a
257 262 read error is encountered on a cache device, that read I/O is reissued to
258 263 the original storage pool device, which might be part of a mirrored or
259 264 raidz configuration.
260 265
261 266 The content of the cache devices is considered volatile, as is the case
262 267 with other system caches.
263 268
264 - Properties
269 + Pool Properties
265 270 Each pool has several properties associated with it. Some properties are
266 271 read-only statistics while others are configurable and change the
267 272 behavior of the pool.
268 273
269 274 The following are read-only properties:
270 275
271 276 allocated
272 277 Amount of storage space used within the pool.
273 278
274 279 bootsize
275 280 The size of the system boot partition. This property can only be
276 281 set at pool creation time and is read-only once pool is created.
277 282 Setting this property implies using the -B option.
278 283
279 284 capacity
280 285 Percentage of pool space used. This property can also be
281 286 referred to by its shortened column name, cap.
282 287
288 + ddt_capped=on|off
   289 +             When ddt_capped is on, it indicates that DDT growth has
   290 +             been stopped.  New unique writes will not be deduplicated,
   291 +             to prevent further DDT growth.
292 +
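             For example, the current value of this read-only property can
             be checked with zpool get; the pool name tank below is
             illustrative:

               # zpool get ddt_capped tank
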
283 293 expandsize
284 294 Amount of uninitialized space within the pool or device that can
285 295 be used to increase the total capacity of the pool.
286 296 Uninitialized space consists of any space on an EFI labeled vdev
287 297 which has not been brought online (e.g, using zpool online -e).
288 298 This space occurs when a LUN is dynamically expanded.
289 299
290 300 fragmentation
291 301 The amount of fragmentation in the pool.
292 302
293 303 free The amount of free space available in the pool.
294 304
295 305 freeing
296 - After a file system or snapshot is destroyed, the space it was
297 - using is returned to the pool asynchronously. freeing is the
298 - amount of space remaining to be reclaimed. Over time freeing
306 + freeing is the amount of pool space remaining to be reclaimed.
307 + After a file, dataset or snapshot is destroyed, the space it was
308 + using is returned to the pool asynchronously. Over time freeing
299 309 will decrease while free increases.
300 310
301 311 health The current health of the pool. Health can be one of ONLINE,
302 312 DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
303 313
304 314 guid A unique identifier for the pool.
305 315
306 316 size Total size of the storage pool.
307 317
308 318 unsupported@feature_guid
309 319 Information about unsupported features that are enabled on the
310 320 pool. See zpool-features(5) for details.
311 321
312 322 The space usage properties report actual physical space available to the
313 323 storage pool. The physical space can be different from the total amount
314 324 of space that any contained datasets can actually use. The amount of
315 325 space used in a raidz configuration depends on the characteristics of the
316 326 data being written. In addition, ZFS reserves some space for internal
317 327 accounting that the zfs(1M) command takes into account, but the zpool
318 328 command does not. For non-full pools of a reasonable size, these effects
319 329 should be invisible. For small pools, or pools that are close to being
320 330 completely full, these discrepancies may become more noticeable.
321 331
322 332 The following property can be set at creation time and import time:
323 333
324 334 altroot
325 335 Alternate root directory. If set, this directory is prepended to
326 336 any mount points within the pool. This can be used when
327 337 examining an unknown pool where the mount points cannot be
328 338 trusted, or in an alternate boot environment, where the typical
329 339 paths are not valid. altroot is not a persistent property. It
330 340 is valid only while the system is up. Setting altroot defaults
331 341 to using cachefile=none, though this may be overridden using an
332 342 explicit setting.
333 343
334 344 The following property can be set only at import time:
335 345
336 346 readonly=on|off
337 347 If set to on, the pool will be imported in read-only mode. This
338 348 property can also be referred to by its shortened column name,
339 349 rdonly.
340 350
341 351 The following properties can be set at creation time and import time, and
342 352 later changed with the zpool set command:
343 353
344 354 autoexpand=on|off
345 355 Controls automatic pool expansion when the underlying LUN is
346 356 grown. If set to on, the pool will be resized according to the
347 357 size of the expanded device. If the device is part of a mirror
348 358 or raidz then all devices within that mirror/raidz group must be
349 359 expanded before the new space is made available to the pool. The
350 360 default behavior is off. This property can also be referred to
351 361 by its shortened column name, expand.
( 43 lines elided )
352 362
353 363 autoreplace=on|off
354 364 Controls automatic device replacement. If set to off, device
355 365 replacement must be initiated by the administrator by using the
356 366 zpool replace command. If set to on, any new device, found in
357 367 the same physical location as a device that previously belonged
358 368 to the pool, is automatically formatted and replaced. The
359 369 default behavior is off. This property can also be referred to
360 370 by its shortened column name, replace.
361 371
372 + autotrim=on|off
373 + When set to on, while deleting data, ZFS will inform the
374 + underlying vdevs of any blocks that have been marked as freed.
375 + This allows thinly provisioned vdevs to reclaim unused blocks.
376 + Currently, this feature supports sending SCSI UNMAP commands to
377 + SCSI and SAS disk vdevs, and using file hole punching on file-
378 + backed vdevs. SATA TRIM is currently not implemented. The
379 + default setting for this property is off.
380 +
381 + Please note that automatic trimming of data blocks can put
382 + significant stress on the underlying storage devices if they do
383 + not handle these commands in a background, low-priority manner.
384 + In that case, it may be possible to achieve most of the benefits
385 + of trimming free space on the pool by running an on-demand
386 + (manual) trim every once in a while during a maintenance window
387 + using the zpool trim command.
388 +
   389 +             Automatic trim does not reclaim blocks immediately after a
   390 +             delete.  Instead, it waits approximately 32-64 TXGs (or as
391 + defined by the zfs_txgs_per_trim tunable) to allow for more
392 + efficient aggregation of smaller portions of free space into
393 + fewer larger regions, as well as to allow for longer pool
394 + corruption recovery via zpool import -F.
395 +
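             For example, assuming an illustrative pool named tank,
             automatic trim can be enabled on an existing pool with:

               # zpool set autotrim=on tank
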
362 396 bootfs=pool/dataset
363 397 Identifies the default bootable dataset for the root pool. This
364 398 property is expected to be set mainly by the installation and
365 399 upgrade programs.
366 400
367 401 cachefile=path|none
368 402 Controls the location of where the pool configuration is cached.
369 403 Discovering all pools on system startup requires a cached copy of
370 404 the configuration data that is stored on the root file system.
371 405 All pools in this cache are automatically imported when the
372 406 system boots. Some environments, such as install and clustering,
373 407 need to cache this information in a different location so that
374 408 pools are not automatically imported. Setting this property
375 409 caches the pool configuration in a different location that can
376 410 later be imported with zpool import -c. Setting it to the
377 411 special value none creates a temporary pool that is never cached,
378 412 and the special value "" (empty string) uses the default
379 413 location.
380 414
381 415 Multiple pools can share the same cache file. Because the kernel
382 416 destroys and recreates this file when pools are added and
383 417 removed, care should be taken when attempting to access this
384 418 file. When the last pool using a cachefile is exported or
385 419 destroyed, the file is removed.
386 420
387 421 comment=text
388 422 A text string consisting of printable ASCII characters that will
389 423 be stored such that it is available even if the pool becomes
390 424 faulted. An administrator can provide additional information
391 425 about a pool using this property.
392 426
393 427 dedupditto=number
394 428 Threshold for the number of block ditto copies. If the reference
395 429 count for a deduplicated block increases above this number, a new
396 430 ditto copy of this block is automatically stored. The default
397 431 setting is 0 which causes no ditto copies to be created for
398 432 deduplicated blocks. The minimum legal nonzero setting is 100.
399 433
400 434 delegation=on|off
401 435 Controls whether a non-privileged user is granted access based on
402 436 the dataset permissions defined on the dataset. See zfs(1M) for
403 437 more information on ZFS delegated administration.
404 438
405 439 failmode=wait|continue|panic
406 440 Controls the system behavior in the event of catastrophic pool
407 441 failure. This condition is typically a result of a loss of
408 442 connectivity to the underlying storage device(s) or a failure of
409 443 all devices within the pool. The behavior of such an event is
410 444 determined as follows:
411 445
412 446 wait Blocks all I/O access until the device connectivity is
413 447 recovered and the errors are cleared. This is the
414 448 default behavior.
415 449
416 450 continue Returns EIO to any new write I/O requests but allows
417 451 reads to any of the remaining healthy devices. Any
418 452 write requests that have yet to be committed to disk
419 453 would be blocked.
( 48 lines elided )
420 454
421 455 panic Prints out a message to the console and generates a
422 456 system crash dump.
423 457
424 458 feature@feature_name=enabled
425 459 The value of this property is the current state of feature_name.
426 460 The only valid value when setting this property is enabled which
427 461 moves feature_name to the enabled state. See zpool-features(5)
428 462 for details on feature states.
429 463
464 + forcetrim=on|off
465 + Controls whether device support is taken into consideration when
466 + issuing TRIM commands to the underlying vdevs of the pool.
467 + Normally, both automatic trim and on-demand (manual) trim only
468 + issue TRIM commands if a vdev indicates support for it. Setting
469 + the forcetrim property to on will force ZFS to issue TRIMs even
470 + if it thinks a device does not support it. The default value is
471 + off.
472 +
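             For example, to force TRIM commands to be issued to all vdevs
             of an illustrative pool named tank, regardless of reported
             device support:

               # zpool set forcetrim=on tank
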
430 473 listsnapshots=on|off
431 474 Controls whether information about snapshots associated with this
432 475 pool is output when zfs list is run without the -t option. The
433 476 default value is off. This property can also be referred to by
434 477 its shortened name, listsnaps.
435 478
479 + scrubprio=0-100
480 + Sets the priority of scrub I/O for this pool. This is a number
481 + from 0 to 100, higher numbers meaning a higher priority and thus
482 + more bandwidth allocated to scrub I/O, provided there is other
483 + I/O competing for bandwidth. If no other I/O is competing for
484 + bandwidth, scrub is allowed to consume as much bandwidth as the
485 + pool is capable of providing. A priority of 100 means that scrub
486 + I/O has equal priority to any other user-generated I/O. The
   487 +             value 0 is special, because it turns off per-pool scrub priority
488 + control. In that case, scrub I/O priority is determined by the
489 + zfs_vdev_scrub_min_active and zfs_vdev_scrub_max_active tunables.
490 + The default value is 5.
491 +
492 + resilverprio=0-100
493 + Same as the scrubprio property, but controls the priority for
494 + resilver I/O. The default value is 10. When set to 0 the global
495 + tunables used for queue sizing are zfs_vdev_resilver_min_active
496 + and zfs_vdev_resilver_max_active.
497 +
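             For example, assuming an illustrative pool named tank, the
             following raises the scrub priority and returns resilver
             priority control to the global tunables:

               # zpool set scrubprio=20 tank
               # zpool set resilverprio=0 tank
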
436 498 version=version
437 499 The current on-disk version of the pool. This can be increased,
438 500 but never decreased. The preferred method of updating pools is
439 501 with the zpool upgrade command, though this property can be used
440 502 when a specific version is needed for backwards compatibility.
441 503 Once feature flags are enabled on a pool this property will no
442 504 longer have a value.
443 505
506 + Device Properties
507 + Each device can have several properties associated with it. These
   508 +	   properties override global tunables and are designed to provide more
509 + control over the operational parameters of this specific device, as well
510 + as to help manage this device.
511 +
   512 +	   The cos device property can reference a CoS property descriptor by
   513 +	   name, in which case the values of device properties are determined
   514 +	   according to the following rule: device settings override CoS
   515 +	   settings, which in turn override the global tunables.
516 +
517 + The following device properties are available:
518 +
519 + cos=cos-name
520 + This property indicates whether the device is associated with a
521 + CoS property descriptor object. If so, the properties from the
522 + CoS descriptor that are not explicitly overridden by the device
523 + properties are in effect for this device.
524 +
525 + l2arc_ddt=on|off
526 + This property is meaningful for L2ARC devices. If this property
   527 +             is turned on, ZFS will dedicate the L2ARC device to caching
528 + deduplication table (DDT) buffers only.
529 +
530 + prefread=1..100
531 + This property is meaningful for devices that belong to a mirror.
532 + The property determines the preference that is given to the
533 + device when reading from the mirror. The ratio of the value to
534 + the sum of the values of this property for all the devices in the
   535 +             mirror determines the relative frequency (in effect, the
   536 +             probability) of reading from this specific device.
537 +
538 + sparegroup=group-name
539 + This property indicates whether the device is a part of a spare
540 + device group. Devices in the pool (including spares) can be
541 + labeled with strings that are meaningful in the context of the
542 + management workflow in effect. When a failed device is
543 + automatically replaced by spares, the spares whose sparegroup
   544 +             property matches the failed device's property are used first.
545 +
546 + {read|aread|write|awrite|scrub|resilver}_{minactive|maxactive}=1..1000
   547 +             These properties define the minimum/maximum number of outstanding
   548 +             active requests for the queueable classes of I/O requests as
   549 +             defined by the ZFS I/O scheduler.  The classes are read,
   550 +             asynchronous read, write, asynchronous write, scrub, and resilver.
551 +
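             For example, device properties can be inspected and set with
             the zpool vdev-get and zpool vdev-set subcommands (see the
             SYNOPSIS); the pool name tank and the device name c0t1d0 are
             illustrative:

               # zpool vdev-get all tank c0t1d0
               # zpool vdev-set prefread=80 tank c0t1d0
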
444 552 Subcommands
445 553 All subcommands that modify state are logged persistently to the pool in
446 554 their original form.
447 555
448 556 The zpool command provides subcommands to create and destroy storage
449 557 pools, add capacity to storage pools, and provide information about the
450 558 storage pools. The following subcommands are supported:
451 559
452 560 zpool -?
453 561 Displays a help message.
454 562
455 563 zpool add [-fn] pool vdev...
456 564 Adds the specified virtual devices to the given pool. The vdev
457 565 specification is described in the Virtual Devices section. The
458 566 behavior of the -f option, and the device checks performed are
459 567 described in the zpool create subcommand.
460 568
461 569 -f Forces use of vdevs, even if they appear in use or
462 570 specify a conflicting replication level. Not all devices
463 571 can be overridden in this manner.
464 572
465 573 -n Displays the configuration that would be used without
466 574 actually adding the vdevs. The actual pool creation can
467 575 still fail due to insufficient privileges or device
468 576 sharing.
469 577
470 578 zpool attach [-f] pool device new_device
471 579 Attaches new_device to the existing device. The existing device
472 580 cannot be part of a raidz configuration. If device is not
473 581 currently part of a mirrored configuration, device automatically
474 582 transforms into a two-way mirror of device and new_device. If
475 583 device is part of a two-way mirror, attaching new_device creates
476 584 a three-way mirror, and so on. In either case, new_device begins
477 585 to resilver immediately.
478 586
   479  587	       -f      Forces use of new_device, even if it appears to be in
480 588 use. Not all devices can be overridden in this manner.
481 589
482 590 zpool clear pool [device]
483 591 Clears device errors in a pool. If no arguments are specified,
484 592 all device errors within the pool are cleared. If one or more
485 593 devices is specified, only those errors associated with the
486 594 specified device or devices are cleared.
487 595
488 596 zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-O
489 597 file-system-property=value]... [-R root] pool vdev...
490 598 Creates a new storage pool containing the virtual devices
491 599 specified on the command line. The pool name must begin with a
492 600 letter, and can only contain alphanumeric characters as well as
493 601 underscore ("_"), dash ("-"), and period ("."). The pool names
494 602 mirror, raidz, spare and log are reserved, as are names beginning
495 603 with the pattern c[0-9]. The vdev specification is described in
496 604 the Virtual Devices section.
497 605
498 606 The command verifies that each device specified is accessible and
499 607 not currently in use by another subsystem. There are some uses,
500 608 such as being currently mounted, or specified as the dedicated
   501  609	     dump device, that prevent a device from ever being used by ZFS.
502 610 Other uses, such as having a preexisting UFS file system, can be
503 611 overridden with the -f option.
504 612
505 613 The command also checks that the replication strategy for the
506 614 pool is consistent. An attempt to combine redundant and non-
507 615 redundant storage in a single pool, or to mix disks and files,
508 616 results in an error unless -f is specified. The use of
509 617 differently sized devices within a single raidz or mirror group
510 618 is also flagged as an error unless -f is specified.
511 619
512 620 Unless the -R option is specified, the default mount point is
513 621 /pool. The mount point must not exist or must be empty, or else
514 622 the root dataset cannot be mounted. This can be overridden with
515 623 the -m option.
516 624
517 625 By default all supported features are enabled on the new pool
518 626 unless the -d option is specified.
519 627
520 628 -B Create whole disk pool with EFI System partition to
521 629 support booting system with UEFI firmware. Default size
522 630 is 256MB. To create boot partition with custom size, set
523 631 the bootsize property with the -o option. See the
524 632 Properties section for details.
525 633
526 634 -d Do not enable any features on the new pool. Individual
527 635 features can be enabled by setting their corresponding
528 636 properties to enabled with the -o option. See
529 637 zpool-features(5) for details about feature properties.
530 638
531 639 -f Forces use of vdevs, even if they appear in use or
532 640 specify a conflicting replication level. Not all devices
533 641 can be overridden in this manner.
534 642
535 643 -m mountpoint
536 644 Sets the mount point for the root dataset. The default
537 645 mount point is /pool or altroot/pool if altroot is
( 84 lines elided )
538 646 specified. The mount point must be an absolute path,
539 647 legacy, or none. For more information on dataset mount
540 648 points, see zfs(1M).
541 649
542 650 -n Displays the configuration that would be used without
543 651 actually creating the pool. The actual pool creation can
544 652 still fail due to insufficient privileges or device
545 653 sharing.
546 654
547 655 -o property=value
548 - Sets the given pool properties. See the Properties
656 + Sets the given pool properties. See the Pool Properties
549 657 section for a list of valid properties that can be set.
550 658
551 659 -O file-system-property=value
552 660 Sets the given file system properties in the root file
553 661 system of the pool. See the Properties section of
554 662 zfs(1M) for a list of valid properties that can be set.
555 663
556 664 -R root
557 665 Equivalent to -o cachefile=none -o altroot=root
558 666
559 667 zpool destroy [-f] pool
560 668 Destroys the given pool, freeing up any devices for other use.
( 2 lines elided )
561 669 This command tries to unmount any active datasets before
562 670 destroying the pool.
563 671
564 672 -f Forces any active datasets contained within the pool to
565 673 be unmounted.
566 674
567 675 zpool detach pool device
568 676 Detaches device from a mirror. The operation is refused if there
569 677 are no other valid replicas of the data.
570 678
571 - zpool export [-f] pool...
679 + zpool export [-cfF] [-t numthreads] pool...
572 680 Exports the given pools from the system. All devices are marked
573 681 as exported, but are still considered in use by other subsystems.
574 682 The devices can be moved between systems (even those of different
575 683 endianness) and imported as long as a sufficient number of
576 684 devices are present.
577 685
578 686 Before exporting the pool, all datasets within the pool are
579 687 unmounted. A pool can not be exported if it has a shared spare
580 688 that is currently being used.
581 689
582 690 For pools to be portable, you must give the zpool command whole
583 691 disks, not just slices, so that ZFS can label the disks with
584 692 portable EFI labels. Otherwise, disk drivers on platforms of
585 693 different endianness will not recognize the disks.
586 694
   695 +	       -c      Keep the configuration information of the exported pool
   696 +	               in the cache file.
697 +
587 698 -f Forcefully unmount all datasets, using the unmount -f
588 699 command.
589 700
590 701 This command will forcefully export the pool even if it
591 702 has a shared spare that is currently being used. This
592 703 may lead to potential data corruption.
593 704
705 + -F Do not update device labels or cache file with new
706 + configuration.
707 +
708 + -t numthreads
709 + Unmount datasets in parallel using up to numthreads
710 + threads.
711 +
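             For example, the following exports an illustrative pool named
             tank, keeping its configuration in the cache file and
             unmounting its datasets with up to 8 threads (the thread
             count is illustrative):

               # zpool export -c -t 8 tank
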
594 712 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
595 713 Retrieves the given list of properties (or all properties if all
596 714 is used) for the specified storage pool(s). These properties are
597 715 displayed with the following fields:
598 716
599 717 name Name of storage pool
600 718 property Property name
601 719 value Property value
602 720 source Property source, either 'default' or 'local'.
603 721
604 - See the Properties section for more information on the available
605 - pool properties.
722 + See the Pool Properties section for more information on the
723 + available pool properties.
606 724
607 725 -H Scripted mode. Do not display headers, and separate
608 726 fields by a single tab instead of arbitrary space.
609 727
610 728 -o field
611 729 A comma-separated list of columns to display.
612 730 name,property,value,source is the default value.
613 731
614 732 -p Display numbers in parsable (exact) values.
615 733
616 734 zpool history [-il] [pool]...
617 735 Displays the command history of the specified pool(s) or all
618 736 pools if no pool is specified.
619 737
620 738 -i Displays internally logged ZFS events in addition to user
621 739 initiated events.
622 740
623 741 -l Displays log records in long format, which in addition to
624 742 standard format includes, the user name, the hostname,
625 743 and the zone in which the operation was performed.
626 744
627 745 zpool import [-D] [-d dir]
628 746 Lists pools available to import. If the -d option is not
629 747 specified, this command searches for devices in /dev/dsk. The -d
630 748 option can be specified multiple times, and all directories are
631 749 searched. If the device appears to be part of an exported pool,
632 750 this command displays a summary of the pool with the name of the
633 751 pool, a numeric identifier, as well as the vdev layout and
634 752 current health of the device for each device or file. Destroyed
635 753 pools, pools that were previously destroyed with the zpool
636 754 destroy command, are not listed unless the -D option is
637 755 specified.
638 756
639 757 The numeric identifier is unique, and can be used instead of the
640 758 pool name when multiple exported pools of the same name are
641 759 available.
642 760
643 761 -c cachefile
644 762 Reads configuration from the given cachefile that was
645 763 created with the cachefile pool property. This cachefile
646 764 is used instead of searching for devices.
647 765
648 766 -d dir Searches for devices or files in dir. The -d option can
649 767 be specified multiple times.
650 768
651 769 -D Lists destroyed pools only.
652 770
653 771 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
654 772 property=value]... [-R root]
655 773 Imports all pools found in the search directories. Identical to
656 774 the previous command, except that all pools with a sufficient
657 775 number of devices available are imported. Destroyed pools, pools
658 776 that were previously destroyed with the zpool destroy command,
659 777 will not be imported unless the -D option is specified.
660 778
661 779 -a Searches for and imports all pools found.
662 780
663 781 -c cachefile
664 782 Reads configuration from the given cachefile that was
665 783 created with the cachefile pool property. This cachefile
666 784 is used instead of searching for devices.
667 785
668 786 -d dir Searches for devices or files in dir. The -d option can
669 787 be specified multiple times. This option is incompatible
670 788 with the -c option.
671 789
672 790 -D Imports destroyed pools only. The -f option is also
673 791 required.
674 792
675 793 -f Forces import, even if the pool appears to be potentially
676 794 active.
677 795
678 796 -F Recovery mode for a non-importable pool. Attempt to
679 797 return the pool to an importable state by discarding the
680 798 last few transactions. Not all damaged pools can be
681 799 recovered by using this option. If successful, the data
682 800 from the discarded transactions is irretrievably lost.
683 801 This option is ignored if the pool is importable or
684 802 already imported.
685 803
686 804 -m Allows a pool to import when there is a missing log
687 805 device. Recent transactions can be lost because the log
688 806 device will be discarded.
689 807
690 808 -n Used with the -F recovery option. Determines whether a
691 809 non-importable pool can be made importable again, but
692 810 does not actually perform the pool recovery. For more
693 811 details about pool recovery mode, see the -F option,
694 812 above.
( 79 lines elided )
695 813
696 814 -N Import the pool without mounting any file systems.
697 815
698 816 -o mntopts
699 817 Comma-separated list of mount options to use when
700 818 mounting datasets within the pool. See zfs(1M) for a
701 819 description of dataset properties and mount options.
702 820
703 821 -o property=value
704 822 Sets the specified property on the imported pool. See
705 - the Properties section for more information on the
823 + the Pool Properties section for more information on the
706 824 available pool properties.
707 825
708 826 -R root
709 827 Sets the cachefile property to none and the altroot
710 828 property to root.
711 829
712 830 zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
713 831 property=value]... [-R root] pool|id [newpool]
714 832 Imports a specific pool. A pool can be identified by its name or
715 833 the numeric identifier. If newpool is specified, the pool is
716 834 imported using the name newpool. Otherwise, it is imported with
717 835 the same name as its exported name.
718 836
719 837 If a device is removed from a system without running zpool export
720 838 first, the device appears as potentially active. It cannot be
721 839 determined if this was a failed export, or whether the device is
722 840 really in use from another host. To import a pool in this state,
723 841 the -f option is required.
724 842
725 843 -c cachefile
726 844 Reads configuration from the given cachefile that was
727 845 created with the cachefile pool property. This cachefile
728 846 is used instead of searching for devices.
729 847
730 848 -d dir Searches for devices or files in dir. The -d option can
731 849 be specified multiple times. This option is incompatible
732 850 with the -c option.
733 851
734 852 -D Imports destroyed pool. The -f option is also required.
735 853
736 854 -f Forces import, even if the pool appears to be potentially
737 855 active.
738 856
739 857 -F Recovery mode for a non-importable pool. Attempt to
740 858 return the pool to an importable state by discarding the
741 859 last few transactions. Not all damaged pools can be
742 860 recovered by using this option. If successful, the data
743 861 from the discarded transactions is irretrievably lost.
744 862 This option is ignored if the pool is importable or
745 863 already imported.
746 864
747 865 -m Allows a pool to import when there is a missing log
748 866 device. Recent transactions can be lost because the log
749 867 device will be discarded.
750 868
751 869 -n Used with the -F recovery option. Determines whether a
752 870 non-importable pool can be made importable again, but
753 871 does not actually perform the pool recovery. For more
( 38 lines elided )
754 872 details about pool recovery mode, see the -F option,
755 873 above.
756 874
757 875 -o mntopts
758 876 Comma-separated list of mount options to use when
759 877 mounting datasets within the pool. See zfs(1M) for a
760 878 description of dataset properties and mount options.
761 879
762 880 -o property=value
763 881 Sets the specified property on the imported pool. See
764 - the Properties section for more information on the
882 + the Pool Properties section for more information on the
765 883 available pool properties.
766 884
767 885 -R root
768 886 Sets the cachefile property to none and the altroot
769 887 property to root.
770 888
889 + -t numthreads
890 + Mount datasets in parallel using up to numthreads
891 + threads.
892 +
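             For example, the following imports an illustrative pool named
             tank and mounts its datasets with up to 8 threads (the thread
             count is illustrative):

               # zpool import -t 8 tank
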
771 893 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
772 894 Displays I/O statistics for the given pools. When given an
773 895 interval, the statistics are printed every interval seconds until
774 896 ^C is pressed. If no pools are specified, statistics for every
775 897 pool in the system is shown. If count is specified, the command
776 898 exits after count reports are printed.
777 899
778 900 -T u|d Display a time stamp. Specify u for a printed
779 901 representation of the internal representation of time.
780 902 See time(2). Specify d for standard date format. See
781 903 date(1).
782 904
   783  905	       -v      Verbose statistics.  Reports usage statistics for
784 906 individual vdevs within the pool, in addition to the
785 907 pool-wide statistics.
786 908
787 909 zpool labelclear [-f] device
788 910 Removes ZFS label information from the specified device. The
789 911 device must not be part of an active pool configuration.
790 912
791 913 -f Treat exported or foreign devices as inactive.
792 914
793 915 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
794 916 [interval [count]]
795 917 Lists the given pools along with a health status and space usage.
( 15 lines elided )
796 918 If no pools are specified, all pools in the system are listed.
797 919 When given an interval, the information is printed every interval
798 920 seconds until ^C is pressed. If count is specified, the command
799 921 exits after count reports are printed.
800 922
801 923 -H Scripted mode. Do not display headers, and separate
802 924 fields by a single tab instead of arbitrary space.
803 925
804 926 -o property
805 927 Comma-separated list of properties to display. See the
806 - Properties section for a list of valid properties. The
807 - default list is name, size, allocated, free, expandsize,
808 - fragmentation, capacity, dedupratio, health, altroot.
928 + Pool Properties section for a list of valid properties.
929 + The default list is name, size, allocated, free,
930 + expandsize, fragmentation, capacity, dedupratio, health,
931 + altroot.
809 932
810 933 -p Display numbers in parsable (exact) values.
811 934
   812  935	       -T u|d  Display a time stamp.  Specify u for a printed
813 936 representation of the internal representation of time.
   814  937	               See time(2).  Specify d for standard date format.  See
815 938 date(1).
816 939
817 940 -v Verbose statistics. Reports usage statistics for
818 941 individual vdevs within the pool, in addition to the
   819  942	               pool-wide statistics.
820 943
821 944 zpool offline [-t] pool device...
822 945 Takes the specified physical device offline. While the device is
823 946 offline, no attempt is made to read or write to the device. This
824 947 command is not applicable to spares.
825 948
826 949 -t Temporary. Upon reboot, the specified physical device
827 950 reverts to its previous state.
828 951
829 952 zpool online [-e] pool device...
830 953 Brings the specified physical device online. This command is not
831 954 applicable to spares.
832 955
833 956 -e Expand the device to use all available space. If the
834 957 device is part of a mirror or raidz then all devices must
835 958 be expanded before the new space will become available to
( 17 lines elided )
836 959 the pool.
837 960
838 961 zpool reguid pool
839 962 Generates a new unique identifier for the pool. You must ensure
840 963 that all devices in this pool are online and healthy before
841 964 performing this action.
842 965
843 966 zpool reopen pool
844 967 Reopen all the vdevs associated with the pool.
845 968
846 - zpool remove [-np] pool device...
969 + zpool remove pool device...
847 970 Removes the specified device from the pool. This command
848 - currently only supports removing hot spares, cache, log devices
849 - and mirrored top-level vdevs (mirror of leaf devices); but not
850 - raidz.
971 + currently only supports removing hot spares, cache, log and
972 + special devices. A mirrored log device can be removed by
973 + specifying the top-level mirror for the log. Non-log devices
974 + that are part of a mirrored configuration can be removed using
975 + the zpool detach command. Non-redundant and raidz devices cannot
976 + be removed from a pool.
851 977
852 - Removing a top-level vdev reduces the total amount of space in
853 - the storage pool. The specified device will be evacuated by
854 - copying all allocated space from it to the other devices in the
855 - pool. In this case, the zpool remove command initiates the
856 - removal and returns, while the evacuation continues in the
857 - background. The removal progress can be monitored with zpool
858 - status. This feature must be enabled to be used, see
859 - zpool-features(5)
860 -
861 - A mirrored top-level device (log or data) can be removed by
862 - specifying the top-level mirror for the same. Non-log devices or
863 - data devices that are part of a mirrored configuration can be
864 - removed using the zpool detach command.
865 -
866 - -n Do not actually perform the removal ("no-op"). Instead,
867 - print the estimated amount of memory that will be used by
868 - the mapping table after the removal completes. This is
869 - nonzero only for top-level vdevs.
870 -
871 - -p Used in conjunction with the -n flag, displays numbers as
872 - parsable (exact) values.
873 -
874 - zpool remove -s pool
875 - Stops and cancels an in-progress removal of a top-level vdev.
876 -
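             For example, assuming an illustrative pool named tank, a hot
             spare with the illustrative device name c2t3d0 can be removed
             with:

               # zpool remove tank c2t3d0
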
877 978 zpool replace [-f] pool device [new_device]
878 979 Replaces old_device with new_device. This is equivalent to
879 980 attaching new_device, waiting for it to resilver, and then
880 981 detaching old_device.
881 982
882 983 The size of new_device must be greater than or equal to the
883 984 minimum size of all the devices in a mirror or raidz
884 985 configuration.
885 986
886 987 new_device is required if the pool is not redundant. If
887 988 new_device is not specified, it defaults to old_device. This
888 989 form of replacement is useful after an existing disk has failed
889 990 and has been physically replaced. In this case, the new disk may
890 991 have the same /dev/dsk path as the old device, even though it is
891 992 actually a different disk. ZFS recognizes this.
892 993
   893  994	       -f      Forces use of new_device, even if it appears to be in
894 995 use. Not all devices can be overridden in this manner.
895 996
896 - zpool scrub [-s | -p] pool...
997 + zpool scrub [-m|-M|-p|-s] pool...
897 998 Begins a scrub or resumes a paused scrub. The scrub examines all
898 999 data in the specified pools to verify that it checksums
899 1000 correctly. For replicated (mirror or raidz) devices, ZFS
900 1001 automatically repairs any damage discovered during the scrub.
901 1002 The zpool status command reports the progress of the scrub and
902 1003 summarizes the results of the scrub upon completion.
903 1004
904 1005 Scrubbing and resilvering are very similar operations. The
905 1006 difference is that resilvering only examines data that ZFS knows
906 1007 to be out of date (for example, when attaching a new device to a
907 1008 mirror or replacing an existing device), whereas scrubbing
908 1009 examines all data to discover silent errors due to hardware
909 1010 faults or disk failure.
910 1011
911 1012 Because scrubbing and resilvering are I/O-intensive operations,
912 1013 ZFS only allows one at a time. If a scrub is paused, the zpool
913 1014 scrub resumes it. If a resilver is in progress, ZFS does not
914 1015 allow a scrub to be started until the resilver completes.
915 1016
916 - -s Stop scrubbing.
1017 + Partial scrub may be requested using -m or -M option.
917 1018
1019 + -m Scrub only metadata blocks.
1020 +
1021 + -M Scrub only MOS blocks.
1022 +
918 1023 -p Pause scrubbing. Scrub pause state and progress are
919 1024 periodically synced to disk. If the system is restarted
920 1025 or pool is exported during a paused scrub, even after
921 1026 import, scrub will remain paused until it is resumed.
922 1027 Once resumed the scrub will pick up from the place where
923 1028 it was last checkpointed to disk. To resume a paused
924 1029 scrub issue zpool scrub again.
925 1030
1031 + -s Stop scrubbing.
1032 +
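             For example, assuming an illustrative pool named tank, a
             metadata-only scrub or an MOS-only scrub can be started with:

               # zpool scrub -m tank
               # zpool scrub -M tank
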
926 1033 zpool set property=value pool
927 - Sets the given property on the specified pool. See the
1034 + Sets the given property on the specified pool. See the Pool
928 1035 Properties section for more information on what properties can be
929 1036 set and acceptable values.
930 1037
931 1038 zpool split [-n] [-o property=value]... [-R root] pool newpool
932 1039 Splits devices off pool creating newpool. All vdevs in pool must
933 1040 be mirrors. At the time of the split, newpool will be a replica
934 1041 of pool.
935 1042
936 1043 -n Do dry run, do not actually perform the split. Print out
937 1044 the expected configuration of newpool.
938 1045
939 1046 -o property=value
940 - Sets the specified property for newpool. See the
1047 + Sets the specified property for newpool. See the Pool
941 1048 Properties section for more information on the available
942 1049 pool properties.
943 1050
944 1051 -R root
945 1052 Set altroot for newpool to root and automatically import
946 1053 it.
947 1054
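As an illustration (both pool names are arbitrary), a dry run followed by the actual split of the mirrored pool tank into a new pool tank2 looks like:

# zpool split -n tank tank2
# zpool split tank tank2
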
948 1055 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
949 1056 Displays the detailed health status for the given pools. If no
950 1057 pool is specified, then the status of each pool in the system is
951 1058 displayed. For more information on pool and device health, see
952 1059 the Device Failure and Recovery section.
953 1060
954 1061 If a scrub or resilver is in progress, this command reports the
955 1062 percentage done and the estimated time to completion. Both of
956 1063 these are only approximate, because the amount of data in the
957 1064 pool and the other workloads on the system can change.
958 1065
959 1066 -D Display a histogram of deduplication statistics, showing
960 1067 the allocated (physically present on disk) and referenced
961 1068 (logically referenced in the pool) block counts and sizes
962 1069 by reference count.
963 1070
964 1071 -T u|d Display a time stamp. Specify -u for a printed
965 1072 representation of the internal representation of time.
966 1073 See time(2). Specify -d for standard date format. See
967 1074 date(1).
968 1075
969 1076 -v Displays verbose data error information, printing out a
970 1077 complete list of all data errors since the last complete
971 1078 pool scrub.
972 1079
973 1080 -x Only display status for pools that are exhibiting errors
974 1081 or are otherwise unavailable. Warnings about pools not
975 1082 using the latest on-disk format will not be included.
976 1083
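For example, to report only pools that have errors or are otherwise unavailable, with verbose data error details:

# zpool status -xv
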
1084 + zpool trim [-r rate|-s] pool...
1085 + Initiates an on-demand TRIM operation on all of the free space of
1086 + a pool. This informs the underlying storage devices of all of
1087 + the blocks that the pool no longer considers allocated, thus
1088 + allowing thinly provisioned storage devices to reclaim them.
1089 + Please note that this collects all space marked as "freed" in the
1090 + pool immediately and doesn't wait for the zfs_txgs_per_trim delay as
1091 + automatic TRIM does. Hence, this can limit pool corruption
1092 + recovery options during and immediately following the on-demand
1093 + TRIM to 1-2 TXGs into the past (instead of the standard 32-64 of
1094 + automatic TRIM). This approach, however, allows you to recover
1095 + the maximum amount of free space from the pool immediately
1096 + without having to wait.
1097 +
1098 + Also note that an on-demand TRIM operation can be initiated
1099 + irrespective of the autotrim pool property setting. It does,
1100 + however, respect the forcetrim pool property.
1101 +
1102 + An on-demand TRIM operation does not conflict with an ongoing
1103 + scrub, but it can put significant I/O stress on the underlying
1104 + vdevs. A resilver, however, automatically stops an on-demand
1105 + TRIM operation. You can manually reinitiate the TRIM operation
1106 + after the resilver has started, by simply reissuing the zpool
1107 + trim command.
1108 +
1109 + Adding a vdev during TRIM is supported, although the progression
1110 + display in zpool status might not be entirely accurate in that
1111 + case (TRIM will complete before reaching 100%). Removing or
1112 + detaching a vdev will prematurely terminate an on-demand TRIM
1113 + operation.
1114 +
1115 + -r rate
1116 + Controls the speed at which the TRIM operation
1117 + progresses. Without this option, TRIM is executed in
1118 + parallel on all top-level vdevs as quickly as possible.
1119 + This option allows you to control how fast (in bytes per
1120 + second) the TRIM is executed. This rate is applied on a
1121 + per-vdev basis, i.e. every top-level vdev in the pool
1122 + tries to match this speed.
1123 +
1124 + Due to limitations in how the algorithm is designed,
1125 + TRIMs are executed in whole-metaslab increments. Each
1126 + top-level vdev contains approximately 200 metaslabs, so a
1127 + rate-limited TRIM progresses in steps, i.e. it TRIMs one
1128 + metaslab completely and then waits for a while so that
1129 + over the whole device, the speed averages out.
1130 +
1131 + When an on-demand TRIM operation is already in progress,
1132 + this option changes its rate. To change a rate-limited
1133 + TRIM to an unlimited one, simply execute the zpool trim
1134 + command without the -r option.
1135 +
1136 + -s Stop trimming. If an on-demand TRIM operation is not
1137 + ongoing at the moment, this does nothing and the command
1138 + returns success.
1139 +
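As an illustration (the pool name and rate are arbitrary; the rate is given in bytes per second), the following starts an on-demand TRIM, later limits it to roughly 100 MB/s per top-level vdev, and finally stops it:

# zpool trim tank
# zpool trim -r 104857600 tank
# zpool trim -s tank
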
977 1140 zpool upgrade
978 1141 Displays pools which do not have all supported features enabled
979 1142 and pools formatted using a legacy ZFS version number. These
980 1143 pools can continue to be used, but some features may not be
981 1144 available. Use zpool upgrade -a to enable all features on all
982 1145 pools.
983 1146
984 1147 zpool upgrade -v
985 1148 Displays legacy ZFS versions supported by the current software.
986 1149 See zpool-features(5) for a description of feature flags features
987 1150 supported by the current software.
988 1151
989 1152 zpool upgrade [-V version] -a|pool...
990 1153 Enables all supported features on the given pool. Once this is
991 1154 done, the pool will no longer be accessible on systems that do
992 1155 not support feature flags. See zpool-features(5) for details on
993 1156 compatibility with systems that support feature flags, but do not
994 1157 support all features enabled on the pool.
995 1158
996 1159 -a Enables all supported features on all pools.
997 1160
998 1161 -V version
999 1162 Upgrade to the specified legacy version. If the -V flag
1000 1163 is specified, no features will be enabled on the pool.
1001 1164 This option can only be used to increase the version
1002 1165 number up to the last supported legacy version number.
1003 1166
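For example, assuming the pool tank is at an older legacy version, it can be brought up to legacy version 28 without enabling any feature flags:

# zpool upgrade -V 28 tank
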
1167 + zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
1168 + Retrieves the given list of vdev properties (or all properties if
1169 + all is used) for the specified vdev of the specified storage
1170 + pool. These properties are displayed in the same manner as the
1171 + pool properties. The operation is supported for leaf-level vdevs
1172 + only. See the Device Properties section for more information on
1173 + the available properties.
1174 +
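For example (pool and device names are arbitrary), all properties of the leaf vdev c1t2d0 in pool tank can be displayed with:

# zpool vdev-get all tank c1t2d0
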
1175 + zpool vdev-set property=value pool vdev-name|vdev-guid
1176 + Sets the given property on the specified device of the specified
1177 + pool. If a top-level vdev is specified, sets the property on all
1178 + the child devices. See the Device Properties section for more
1179 + information on what properties can be set and acceptable values.
1180 +
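For example, assuming a hypothetical vdev property named example_prop (a placeholder; see the Device Properties section for the actual property names) and arbitrary pool and device names:

# zpool vdev-set example_prop=value tank c1t2d0
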
1004 1181 EXIT STATUS
1005 1182 The following exit values are returned:
1006 1183
1007 1184 0 Successful completion.
1008 1185
1009 1186 1 An error occurred.
1010 1187
1011 1188 2 Invalid command line options were specified.
1012 1189
1013 1190 EXAMPLES
1014 1191 Example 1 Creating a RAID-Z Storage Pool
1015 1192 The following command creates a pool with a single raidz root
1016 1193 vdev that consists of six disks.
1017 1194
1018 1195 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1019 1196
1020 1197 Example 2 Creating a Mirrored Storage Pool
1021 1198 The following command creates a pool with two mirrors, where each
1022 1199 mirror contains two disks.
1023 1200
1024 1201 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1025 1202
1026 1203 Example 3 Creating a ZFS Storage Pool by Using Slices
1027 1204 The following command creates an unmirrored pool using two disk
1028 1205 slices.
1029 1206
1030 1207 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1031 1208
1032 1209 Example 4 Creating a ZFS Storage Pool by Using Files
1033 1210 The following command creates an unmirrored pool using files.
1034 1211 While not recommended, a pool based on files can be useful for
1035 1212 experimental purposes.
1036 1213
1037 1214 # zpool create tank /path/to/file/a /path/to/file/b
1038 1215
1039 1216 Example 5 Adding a Mirror to a ZFS Storage Pool
1040 1217 The following command adds two mirrored disks to the pool tank,
1041 1218 assuming the pool is already made up of two-way mirrors. The
1042 1219 additional space is immediately available to any datasets within
1043 1220 the pool.
1044 1221
1045 1222 # zpool add tank mirror c1t0d0 c1t1d0
1046 1223
1047 1224 Example 6 Listing Available ZFS Storage Pools
1048 1225 The following command lists all available pools on the system.
1049 1226 In this case, the pool zion is faulted due to a missing device.
1050 1227 The results from this command are similar to the following:
1051 1228
1052 1229 # zpool list
1053 1230 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1054 1231 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1055 1232 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1056 1233 zion - - - - - - - FAULTED -
1057 1234
1058 1235 Example 7 Destroying a ZFS Storage Pool
1059 1236 The following command destroys the pool tank and any datasets
1060 1237 contained within.
1061 1238
1062 1239 # zpool destroy -f tank
1063 1240
1064 1241 Example 8 Exporting a ZFS Storage Pool
1065 1242 The following command exports the devices in pool tank so that
1066 1243 they can be relocated or later imported.
1067 1244
1068 1245 # zpool export tank
1069 1246
1070 1247 Example 9 Importing a ZFS Storage Pool
1071 1248 The following command displays available pools, and then imports
1072 1249 the pool tank for use on the system. The results from this
1073 1250 command are similar to the following:
1074 1251
1075 1252 # zpool import
1076 1253 pool: tank
1077 1254 id: 15451357997522795478
1078 1255 state: ONLINE
1079 1256 action: The pool can be imported using its name or numeric identifier.
1080 1257 config:
1081 1258
1082 1259 tank ONLINE
1083 1260 mirror ONLINE
1084 1261 c1t2d0 ONLINE
1085 1262 c1t3d0 ONLINE
1086 1263
1087 1264 # zpool import tank
1088 1265
1089 1266 Example 10 Upgrading All ZFS Storage Pools to the Current Version
1090 1267 The following command upgrades all ZFS Storage pools to the
1091 1268 current version of the software.
1092 1269
1093 1270 # zpool upgrade -a
1094 1271 This system is currently running ZFS version 2.
1095 1272
1096 1273 Example 11 Managing Hot Spares
1097 1274 The following command creates a new pool with an available hot
1098 1275 spare:
1099 1276
1100 1277 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1101 1278
1102 1279 If one of the disks were to fail, the pool would be reduced to
1103 1280 the degraded state. The failed device can be replaced using the
1104 1281 following command:
1105 1282
1106 1283 # zpool replace tank c0t0d0 c0t3d0
1107 1284
1108 1285 Once the data has been resilvered, the spare is automatically
1109 1286 removed and is made available for use should another device fail.
1110 1287 The hot spare can be permanently removed from the pool using the
1111 1288 following command:
1112 1289
1113 1290 # zpool remove tank c0t2d0
1114 1291
1115 1292 Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1116 1293 The following command creates a ZFS storage pool consisting of
1117 1294 two, two-way mirrors and mirrored log devices:
1118 1295
1119 1296 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1120 1297 c4d0 c5d0
1121 1298
1122 1299 Example 13 Adding Cache Devices to a ZFS Pool
1123 1300 The following command adds two disks for use as cache devices to
1124 1301 a ZFS storage pool:
1125 1302
1126 1303 # zpool add pool cache c2d0 c3d0
1127 1304
1128 1305 Once added, the cache devices gradually fill with content from
1129 1306 main memory. Depending on the size of your cache devices, it
1130 1307 could take over an hour for them to fill. Capacity and reads can
1131 1308 be monitored using the iostat option as follows:
1132 1309
1133 1310 # zpool iostat -v pool 5
1134 1311
1135 - Example 14 Removing a Mirrored top-level (Log or Data) Device
1136 - The following commands remove the mirrored log device mirror-2
1137 - and mirrored top-level data device mirror-1.
1138 -
1312 + Example 14 Removing a Mirrored Log Device
1313 + The following command removes the mirrored log device mirror-2.
1139 1314 Given this configuration:
1140 1315
1141 1316 pool: tank
1142 1317 state: ONLINE
1143 1318 scrub: none requested
1144 1319 config:
1145 1320
1146 1321 NAME STATE READ WRITE CKSUM
1147 1322 tank ONLINE 0 0 0
1148 1323 mirror-0 ONLINE 0 0 0
1149 1324 c6t0d0 ONLINE 0 0 0
1150 1325 c6t1d0 ONLINE 0 0 0
1151 1326 mirror-1 ONLINE 0 0 0
1152 1327 c6t2d0 ONLINE 0 0 0
1153 1328 c6t3d0 ONLINE 0 0 0
1154 1329 logs
1155 1330 mirror-2 ONLINE 0 0 0
1156 1331 c4t0d0 ONLINE 0 0 0
1157 1332 c4t1d0 ONLINE 0 0 0
1158 1333
1159 1334 The command to remove the mirrored log mirror-2 is:
1160 1335
1161 1336 # zpool remove tank mirror-2
1162 1337
1163 - The command to remove the mirrored data mirror-1 is:
1164 -
1165 - # zpool remove tank mirror-1
1166 -
1167 1338 Example 15 Displaying expanded space on a device
1168 1339 The following command displays the detailed information for the
1169 1340 pool data. This pool is comprised of a single raidz vdev where
1170 1341 one of its devices increased its capacity by 10GB. In this
1171 1342 example, the pool will not be able to utilize this extra capacity
1172 1343 until all the devices under the raidz vdev have been expanded.
1173 1344
1174 1345 # zpool list -v data
1175 1346 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1176 1347 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1177 1348 raidz1 23.9G 14.6G 9.30G 48% -
1178 1349 c1t1d0 - - - - -
1179 1350 c1t2d0 - - - - 10G
1180 1351 c1t3d0 - - - - -
1181 1352
1182 1353 INTERFACE STABILITY
1183 1354 Evolving
1184 1355
1185 1356 SEE ALSO
1186 1357 zfs(1M), attributes(5), zpool-features(5)
1187 1358
1188 1359 illumos December 6, 2017 illumos