ZPOOL-FEATURES(5)    Standards, Environments, and Macros    ZPOOL-FEATURES(5)



NAME
     zpool-features - ZFS pool feature descriptions

DESCRIPTION
     ZFS pool on-disk format versions are specified via "features", which
     replace the old on-disk format numbers (the last supported on-disk
     format number is 28). To enable a feature on a pool, use the upgrade
     subcommand of the zpool(1M) command, or set the feature@feature_name
     property to enabled.

     The pool format does not affect file system version compatibility or
     the ability to send file systems between pools.

     Since most features can be enabled independently of each other, the
     on-disk format of the pool is specified by the set of all features
     marked as active on the pool. If the pool was created by another
     software version, this set may include unsupported features.

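As a sketch of both methods described above (the pool name tank is illustrative, not part of this manual):

```shell
# Enable a single feature by setting its feature@ property.
zpool set feature@async_destroy=enabled tank

# Or enable every feature supported by the running software.
zpool upgrade tank
```

Running zpool upgrade with no arguments displays pools that have features which are not yet enabled.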
  Identifying features
     Every feature has a guid of the form com.example:feature_name. The
     reverse-DNS name ensures that the feature's guid is unique across all
     ZFS implementations. When unsupported features are encountered on a
     pool, they are identified by their guids. Refer to the documentation
     for the ZFS implementation that created the pool for information
     about those features.

     Each supported feature also has a short name. By convention, a
     feature's short name is the portion of its guid which follows the ':'
     (e.g. com.example:feature_name would have the short name
     feature_name); however, a feature's short name may differ across ZFS
     implementations if following the convention would result in name
     conflicts.

  Feature states
     Features can be in one of three states:

     active
             This feature's on-disk format changes are in effect on the
             pool. Support for this feature is required to import the pool
             in read-write mode. If this feature is not read-only
             compatible, support is also required to import the pool in
             read-only mode (see "Read-only compatibility").

     enabled
             An administrator has marked this feature as enabled on the
             pool, but the feature's on-disk format changes have not been
             made yet. The pool can still be imported by software that
             does not support this feature, but changes may be made to the
             on-disk format at any time which will move the feature to the
             active state. Some features may support returning to the
             enabled state after becoming active. See feature-specific
             documentation for details.

     disabled
             This feature's on-disk format changes have not been made and
             will not be made unless an administrator moves the feature to
             the enabled state. Features cannot be disabled once they have
             been enabled.

     The state of supported features is exposed through pool properties
     of the form feature@short_name.

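These states can be inspected through the properties just described; for example (the pool name tank is hypothetical):

```shell
# Query the state of a single feature: disabled, enabled, or active.
zpool get feature@async_destroy tank

# Show all pool properties, including every feature@short_name entry.
zpool get all tank
```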
  Read-only compatibility
     Some features may make on-disk format changes that do not interfere
     with other software's ability to read from the pool. These features
     are referred to as "read-only compatible". If all unsupported
     features on a pool are read-only compatible, the pool can be imported
     in read-only mode by setting the readonly property during import (see
     zpool(1M) for details on importing pools).

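A minimal sketch of such an import (the pool name is illustrative):

```shell
# Import read-only; this succeeds even if the pool has active
# unsupported features, provided all of them are read-only compatible.
zpool import -o readonly=on tank
```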
  Unsupported features
     For each unsupported feature enabled on an imported pool, a pool
     property named unsupported@feature_guid will indicate why the import
     was allowed despite the unsupported feature. Possible values for this
     property are:

     inactive
             The feature is in the enabled state and therefore the pool's
             on-disk format is still compatible with software that does
             not support this feature.

     readonly
             The feature is read-only compatible and the pool has been
             imported in read-only mode.

  Feature dependencies
     Some features depend on other features being enabled in order to
     function properly. Enabling a feature will automatically enable any
     features it depends on.

FEATURES
     The following features are supported on this system:

     async_destroy

         GUID                    com.delphix:async_destroy
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            none

         Destroying a file system requires traversing all of its data in
         order to return its used space to the pool. Without
         async_destroy, the file system is not fully removed until all
         space has been reclaimed. If the destroy operation is interrupted
         by a reboot or power outage, the next attempt to open the pool
         will need to complete the destroy operation synchronously.

         When async_destroy is enabled, the file system's data will be
         reclaimed by a background process, allowing the destroy operation
         to complete without traversing the entire file system. The
         background process is able to resume interrupted destroys after
         the pool has been opened, eliminating the need to finish
         interrupted destroys as part of the open operation. The amount of
         space remaining to be reclaimed by the background process is
         available through the freeing property.

         This feature is only active while freeing is non-zero.

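For example, the background reclamation described above can be observed as follows (the dataset and pool names are hypothetical):

```shell
# The destroy returns quickly; space is reclaimed in the background.
zfs destroy tank/bigfs

# Watch the amount of space still to be reclaimed drop toward zero.
zpool get freeing tank
```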
     empty_bpobj

         GUID                    com.delphix:empty_bpobj
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            none

         This feature increases the performance of creating and using a
         large number of snapshots of a single filesystem or volume, and
         also reduces the disk space required.

         When there are many snapshots, each snapshot uses many Block
         Pointer Objects (bpobjs) to track blocks associated with that
         snapshot. However, in common use cases, most of these bpobjs are
         empty. This feature allows us to create each bpobj on demand,
         thus eliminating the empty bpobjs.

         This feature is active while there are any filesystems, volumes,
         or snapshots which were created after enabling this feature.

     filesystem_limits

         GUID                    com.joyent:filesystem_limits
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            extensible_dataset

         This feature enables filesystem and snapshot limits. These limits
         can be used to control how many filesystems and/or snapshots can
         be created at the point in the tree on which the limits are set.

         This feature is active once either of the limit properties has
         been set on a dataset. Once activated, the feature is never
         deactivated.

     lz4_compress

         GUID                    org.illumos:lz4_compress
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            none

         lz4 is a high-performance real-time compression algorithm that
         features significantly faster compression and decompression as
         well as a higher compression ratio than the older lzjb
         compression. Typically, lz4 compression is approximately 50%
         faster on compressible data and 200% faster on incompressible
         data than lzjb. It is also approximately 80% faster on
         decompression, while giving approximately a 10% better
         compression ratio.

         When the lz4_compress feature is set to enabled, the
         administrator can turn on lz4 compression on any dataset on the
         pool using the zfs(1M) command. Also, all newly written metadata
         will be compressed with the lz4 algorithm. Since this feature is
         not read-only compatible, this operation will render the pool
         unimportable on systems without support for the lz4_compress
         feature. Booting off of lz4-compressed root pools is supported.

         This feature becomes active as soon as it is enabled and will
         never return to being enabled.

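A minimal sketch of turning on lz4 for one dataset (the names are hypothetical):

```shell
# Requires the lz4_compress feature to be enabled on the pool first.
zfs set compression=lz4 tank/data

# Confirm the setting took effect.
zfs get compression tank/data
```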
     spacemap_histogram

         GUID                    com.delphix:spacemap_histogram
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            none

         This feature allows ZFS to maintain more information about how
         free space is organized within the pool. If this feature is
         enabled, ZFS will set this feature to active when a new space map
         object is created or an existing space map is upgraded to the new
         format. Once the feature is active, it will remain in that state
         until the pool is destroyed.

     multi_vdev_crash_dump

         GUID                    com.joyent:multi_vdev_crash_dump
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            none

         This feature allows a dump device to be configured with a pool
         comprised of multiple vdevs. Those vdevs may be arranged in any
         mirrored or raidz configuration.

         When the multi_vdev_crash_dump feature is set to enabled, the
         administrator can use the dumpadm(1M) command to configure a dump
         device on a pool comprised of multiple vdevs.

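For instance, assuming a zvol named tank/dump has already been created to hold crash dumps (both names are hypothetical):

```shell
# Point the system dump device at the zvol; with this feature enabled,
# tank may be built from multiple mirrored or raidz vdevs.
dumpadm -d /dev/zvol/dsk/tank/dump
```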
     extensible_dataset

         GUID                    com.delphix:extensible_dataset
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            none

         This feature allows more flexible use of internal ZFS data
         structures, and exists for other features to depend on.

         This feature will be active when the first dependent feature uses
         it, and will be returned to the enabled state when all datasets
         that use this feature are destroyed.

     bookmarks

         GUID                    com.delphix:bookmarks
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            extensible_dataset

         This feature enables use of the zfs bookmark subcommand.

         This feature is active while any bookmarks exist in the pool. All
         bookmarks in the pool can be listed by running zfs list -t
         bookmark -r poolname.

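A short sketch of both operations (the snapshot, filesystem, and pool names are hypothetical):

```shell
# Create a bookmark from an existing snapshot.
zfs bookmark tank/fs@snap1 tank/fs#mark1

# List every bookmark in the pool.
zfs list -t bookmark -r tank
```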
     enabled_txg

         GUID                    com.delphix:enabled_txg
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            none

         Once this feature is enabled, ZFS records the transaction group
         number in which new features are enabled. This has no
         user-visible impact, but other features may depend on this
         feature.

         This feature becomes active as soon as it is enabled and will
         never return to being enabled.

     hole_birth

         GUID                    com.delphix:hole_birth
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            enabled_txg

         This feature improves performance of incremental sends ("zfs send
         -i") and receives for objects with many holes. The most common
         case of hole-filled objects is zvols.

         An incremental send stream from snapshot A to snapshot B contains
         information about every block that changed between A and B.
         Blocks which did not change between those snapshots can be
         identified and omitted from the stream using a piece of metadata
         called the 'block birth time', but birth times are not recorded
         for holes (blocks filled only with zeroes). Since holes created
         after A cannot be distinguished from holes created before A,
         information about every hole in the entire filesystem or zvol is
         included in the send stream.

         For workloads where holes are rare this is not a problem.
         However, when incrementally replicating filesystems or zvols with
         many holes (for example, a zvol formatted with another
         filesystem), a lot of time will be spent sending and receiving
         unnecessary information about holes that already exist on the
         receiving side.

         Once the hole_birth feature has been enabled, the block birth
         times of all new holes will be recorded. Incremental sends
         between snapshots created after this feature is enabled will use
         this new metadata to avoid sending information about holes that
         already exist on the receiving side.

         This feature becomes active as soon as it is enabled and will
         never return to being enabled.

     embedded_data

         GUID                    com.delphix:embedded_data
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            none

         This feature improves the performance and compression ratio of
         highly-compressible blocks. Blocks whose contents can compress to
         112 bytes or smaller can take advantage of this feature.

         When this feature is enabled, the contents of highly-compressible
         blocks are stored in the block "pointer" itself (a misnomer in
         this case, as it contains the compressed data, rather than a
         pointer to its location on disk). Thus the space of the block
         (one sector, typically 512 bytes or 4KB) is saved, and no
         additional I/O is needed to read and write the data block.

         This feature becomes active as soon as it is enabled and will
         never return to being enabled.

     device_removal

         GUID                    com.delphix:device_removal
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            none

         This feature enables the "zpool remove" subcommand to remove
         top-level vdevs, evacuating them to reduce the total size of the
         pool.

         This feature becomes active when the "zpool remove" command is
         used on a top-level vdev, and will never return to being enabled.

     obsolete_counts

         GUID                    com.delphix:obsolete_counts
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            device_removal

         This feature is an enhancement of device_removal, which will over
         time reduce the memory used to track removed devices. When
         indirect blocks are freed or remapped, we note that their part of
         the indirect mapping is "obsolete", i.e. no longer needed. See
         also the zfs remap subcommand in zfs(1M).

         This feature becomes active when the "zpool remove" command is
         used on a top-level vdev, and will never return to being enabled.

     large_blocks

         GUID                    org.open-zfs:large_blocks
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            extensible_dataset

         The large_blocks feature allows the record size on a dataset to
         be set larger than 128KB.

         This feature becomes active once a recordsize property has been
         set larger than 128KB, and will return to being enabled once all
         filesystems that have ever had their recordsize larger than 128KB
         are destroyed.

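For example (the dataset name is hypothetical):

```shell
# Raising recordsize past 128KB activates the large_blocks feature.
zfs set recordsize=1M tank/media
```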
     vdev_properties

         GUID                    com.nexenta:vdev_properties
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            none

         This feature indicates that the pool includes on-disk format
         changes that support persistent vdev-specific properties. This
         feature will be active when the first vdev-specific property is
         set.

     cos_properties

         GUID                    com.nexenta:cos_properties
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            com.nexenta:vdev_properties

         This feature indicates that the pool includes on-disk format
         changes that support persistent Class of Storage (CoS)
         properties. Such properties can be associated with a collection
         of devices that belong to a common class from a storage
         management standpoint. This feature will be active when the first
         CoS property is set.

     meta_devices

         GUID                    com.nexenta:meta_devices
         READ-ONLY COMPATIBLE    yes
         DEPENDENCIES            none

         This feature flag indicates the presence of a special vdev in the
         pool. A special vdev is used to speed up read and write
         operations and can be used to store ZFS metadata and/or the write
         log (ZIL). In addition, a special vdev can function as a
         writeback cache (WBC) within the pool, accelerating ZFS writes
         via underlying fast media (typically, a write-optimized SSD).

         Note that unlike the first two functions, the WBC function is
         configurable on a per-dataset-tree basis.

     wbc

         GUID                    com.nexenta:wbc
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            com.nexenta:meta_devices

         When enabled, this feature indicates that the pool supports
         writeback caching. The latter can be activated on a specific
         filesystem (and all its children) or a volume (zvol) within the
         pool by setting the corresponding wbc_mode property to 'on'. This
         feature will show up as disabled if the pool does not contain a
         special vdev.

     sha512

         GUID                    org.illumos:sha512
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            extensible_dataset

         This feature enables the use of the SHA-512/256 truncated hash
         algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit
         arithmetic of SHA-512 provides an approximate 50% performance
         boost over SHA-256 on 64-bit hardware, and is thus a good
         minimum-change replacement candidate for systems where hash
         performance is important but which cannot, for whatever reason,
         utilize the faster skein and edonr algorithms.

         When the sha512 feature is set to enabled, the administrator can
         turn on the sha512 checksum on any dataset using the zfs set
         checksum=sha512 command. This feature becomes active once a
         checksum property has been set to sha512, and will return to
         being enabled once all filesystems that have ever had their
         checksum set to sha512 are destroyed.

         Booting off of pools utilizing SHA-512/256 is supported (provided
         that the updated GRUB stage2 module is installed).

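Putting the commands above together (the dataset and pool names are hypothetical):

```shell
# Switch an existing dataset to the SHA-512/256 checksum.
zfs set checksum=sha512 tank/secure

# The pool-level feature should now report "active".
zpool get feature@sha512 tank
```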
     skein

         GUID                    org.illumos:skein
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            extensible_dataset

         This feature enables the use of the Skein hash algorithm for
         checksum and dedup. Skein is a high-performance secure hash
         algorithm that was a finalist in the NIST SHA-3 competition. It
         provides a very high security margin and high performance on
         64-bit hardware (80% faster than SHA-256). This implementation
         also utilizes the new salted checksumming functionality in ZFS,
         which means that the checksum is pre-seeded with a secret 256-bit
         random key (stored on the pool) before being fed the data block
         to be checksummed. Thus the produced checksums are unique to a
         given pool, preventing hash collision attacks on systems with
         dedup.

         When the skein feature is set to enabled, the administrator can
         turn on the skein checksum on any dataset using the zfs set
         checksum=skein command. This feature becomes active once a
         checksum property has been set to skein, and will return to being
         enabled once all filesystems that have ever had their checksum
         set to skein are destroyed.

         Booting off of pools using skein is NOT supported -- any attempt
         to enable skein on a root pool will fail with an error.

     edonr

         GUID                    org.illumos:edonr
         READ-ONLY COMPATIBLE    no
         DEPENDENCIES            extensible_dataset

         This feature enables the use of the Edon-R hash algorithm for
         checksum, including for nopwrite (if compression is also enabled,
         an overwrite of a block whose checksum matches the data being
         written will be ignored). In an abundance of caution, Edon-R
         cannot be used with dedup (without verification).

         Edon-R is a very high-performance hash algorithm that was part of
         the NIST SHA-3 competition. It provides extremely high hash
         performance (over 350% faster than SHA-256), but was not selected
         because of its unsuitability as a general-purpose secure hash
         algorithm. This implementation utilizes the new salted
         checksumming functionality in ZFS, which means that the checksum
         is pre-seeded with a secret 256-bit random key (stored on the
         pool) before being fed the data block to be checksummed. Thus the
         produced checksums are unique to a given pool.

         When the edonr feature is set to enabled, the administrator can
         turn on the edonr checksum on any dataset using the zfs set
         checksum=edonr command. This feature becomes active once a
         checksum property has been set to edonr, and will return to being
         enabled once all filesystems that have ever had their checksum
         set to edonr are destroyed.

         Booting off of pools using edonr is NOT supported -- any attempt
         to enable edonr on a root pool will fail with an error.

SEE ALSO
     zfs(1M), zpool(1M)



                               March 25, 2016              ZPOOL-FEATURES(5)