1 ZFS(1M) Maintenance Commands ZFS(1M)
2
3 NAME
4 zfs - configures ZFS file systems
5
6 SYNOPSIS
7 zfs [-?]
8 zfs create [-p] [-o property=value]... filesystem
9 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
10 zfs destroy [-Rfnprv] filesystem|volume
11 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
12 zfs destroy filesystem|volume#bookmark
13 zfs snapshot [-r] [-o property=value]...
14 filesystem@snapname|volume@snapname...
15 zfs rollback [-Rfr] snapshot
16 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
17 zfs promote clone-filesystem
18 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
19 zfs rename [-fp] filesystem|volume filesystem|volume
20 zfs rename -r snapshot snapshot
21 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
22 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
23 zfs set property=value [property=value]... filesystem|volume|snapshot...
24 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
25 [-t type[,type]...] all | property[,property]...
26 filesystem|volume|snapshot|bookmark...
27 zfs inherit [-rS] property filesystem|volume|snapshot...
28 zfs upgrade
29 zfs upgrade -v
30 zfs upgrade [-r] [-V version] -a | filesystem
31 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
32 [-t type[,type]...] filesystem|snapshot
33 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
34 [-t type[,type]...] filesystem|snapshot
35 zfs mount
36 zfs mount [-Ov] [-o options] -a | filesystem
37 zfs unmount [-f] -a | filesystem|mountpoint
38 zfs share -a | filesystem
39 zfs unshare -a | filesystem|mountpoint
40 zfs bookmark snapshot bookmark
41 zfs send [-DLPRcenpsv] [[-I|-i] snapshot] snapshot
42 zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
43 zfs send [-Penv] -t receive_resume_token
44 zfs receive [-FKnsuv] [-l filesystem|volume]... [-o property=value]...
45 [-x property]... filesystem|volume|snapshot
46 zfs receive [-FKnsuv] [-d|-e] [-l filesystem|volume]...
47 [-o property=value]... [-x property]... filesystem
48 zfs receive -A filesystem|volume
49 zfs allow filesystem|volume
50 zfs allow [-dglu] user|group[,user|group]...
51 perm|@setname[,perm|@setname]... filesystem|volume
52 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
53 filesystem|volume
54 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
55 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
56 zfs unallow [-dglru] user|group[,user|group]...
57 [perm|@setname[,perm|@setname]...] filesystem|volume
58 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
59 filesystem|volume
60 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
61 zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...]
62 filesystem|volume
63 zfs hold [-r] tag snapshot...
64 zfs holds [-r] snapshot...
65 zfs release [-r] tag snapshot...
66 zfs diff [-FHt] snapshot snapshot|filesystem
67 zfs program [-n] [-t timeout] [-m memory_limit] pool script [arg1 ...]
68
69 DESCRIPTION
70 The zfs command configures ZFS datasets within a ZFS storage pool, as
71 described in zpool(1M). A dataset is identified by a unique path within
72 the ZFS namespace. For example:
73
74 pool/{filesystem,volume,snapshot}
75
76 where the maximum length of a dataset name is MAXNAMELEN (256 bytes).
77
78 A dataset can be one of the following:
79
80 file system A ZFS dataset of type filesystem can be mounted within the
81 standard system namespace and behaves like other file
82 systems. While ZFS file systems are designed to be POSIX
83 compliant, known issues exist that prevent compliance in
84 some cases. Applications that depend on standards
85 conformance might fail due to non-standard behavior when
86 checking file system free space.
87
88 volume A logical volume exported as a raw or block device. This
89 type of dataset should only be used under special
90 circumstances. File systems are typically used in most
91 environments.
92
93 snapshot A read-only version of a file system or volume at a given
94 point in time. It is specified as filesystem@name or
95 volume@name.
96
97 ZFS File System Hierarchy
98 A ZFS storage pool is a logical collection of devices that provide space
99 for datasets. A storage pool is also the root of the ZFS file system
100 hierarchy.
101
102 The root of the pool can be accessed as a file system, such as mounting
103 and unmounting, taking snapshots, and setting properties. The physical
104 storage characteristics, however, are managed by the zpool(1M) command.
105
106 See zpool(1M) for more information on creating and administering pools.
107
108 Snapshots
109 A snapshot is a read-only copy of a file system or volume. Snapshots can
110 be created extremely quickly, and initially consume no additional space
111 within the pool. As data within the active dataset changes, the snapshot
112 consumes more data than would otherwise be shared with the active
113 dataset.
114
115 Snapshots can have arbitrary names. Snapshots of volumes can be cloned
116 or rolled back, but cannot be accessed independently.
117
118 File system snapshots can be accessed under the .zfs/snapshot directory
119 in the root of the file system. Snapshots are automatically mounted on
120 demand and may be unmounted at regular intervals. The visibility of the
121 .zfs directory can be controlled by the snapdir property.
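
For example, assuming a file system mounted at its default mount point
of /pool/home/user (the dataset and snapshot names are illustrative), a
snapshot can be created and then browsed through the .zfs/snapshot
directory:

  # zfs snapshot pool/home/user@monday
  # ls /pool/home/user/.zfs/snapshot/monday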
122
123 Clones
124 A clone is a writable volume or file system whose initial contents are
125 the same as another dataset. As with snapshots, creating a clone is
126 nearly instantaneous, and initially consumes no additional space.
127
128 Clones can only be created from a snapshot. When a snapshot is cloned,
129 it creates an implicit dependency between the parent and child. Even
130 though the clone is created somewhere else in the dataset hierarchy, the
131 original snapshot cannot be destroyed as long as a clone exists. The
132 origin property exposes this dependency, and the destroy command lists
133 any such dependencies, if they exist.
134
135 The clone parent-child dependency relationship can be reversed by using
136 the promote subcommand. This causes the "origin" file system to become a
137 clone of the specified file system, which makes it possible to destroy
138 the file system that the clone was created from.
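
For example, the following commands (with illustrative names) create a
snapshot and then a clone based on it:

  # zfs snapshot pool/project/production@today
  # zfs clone pool/project/production@today pool/project/beta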
139
140 Mount Points
141 Creating a ZFS file system is a simple operation, so the number of file
142 systems per system is likely to be large. To cope with this, ZFS
143 automatically manages mounting and unmounting file systems without the
144 need to edit the /etc/vfstab file. All automatically managed file
145 systems are mounted by ZFS at boot time.
146
147 By default, file systems are mounted under /path, where path is the name
148 of the file system in the ZFS namespace. Directories are created and
149 destroyed as needed.
150
151 A file system can also have a mount point set in the mountpoint property.
152 This directory is created as needed, and ZFS automatically mounts the
153 file system when the zfs mount -a command is invoked (without editing
154 /etc/vfstab). The mountpoint property can be inherited, so if pool/home
155 has a mount point of /export/stuff, then pool/home/user automatically
156 inherits a mount point of /export/stuff/user.
157
158 A file system mountpoint property of none prevents the file system from
159 being mounted.
160
161 If needed, ZFS file systems can also be managed with traditional tools
162 (mount, umount, /etc/vfstab). If a file system's mount point is set to
163 legacy, ZFS makes no attempt to manage the file system, and the
164 administrator is responsible for mounting and unmounting the file system.
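
For example, the following commands (with illustrative names) set an
inheritable mount point on a parent file system and switch a child file
system to legacy mounting:

  # zfs set mountpoint=/export/stuff pool/home
  # zfs set mountpoint=legacy pool/home/user

One possible /etc/vfstab entry for the legacy file system might then be:

  pool/home/user  -  /export/home/user  zfs  -  yes  -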
165
166 Zones
167 A ZFS file system can be added to a non-global zone by using the zonecfg
168 add fs subcommand. A ZFS file system that is added to a non-global zone
169 must have its mountpoint property set to legacy.
170
171 The physical properties of an added file system are controlled by the
172 global administrator. However, the zone administrator can create,
173 modify, or destroy files within the added file system, depending on how
174 the file system is mounted.
175
176 A dataset can also be delegated to a non-global zone by using the zonecfg
177 add dataset subcommand. You cannot delegate a dataset to one zone and
178 the children of the same dataset to another zone. The zone administrator
179 can change properties of the dataset or any of its children. However,
180 the quota, filesystem_limit and snapshot_limit properties of the
181 delegated dataset can be modified only by the global administrator.
182
183 A ZFS volume can be added as a device to a non-global zone by using the
184 zonecfg add device subcommand. However, its physical properties can be
185 modified only by the global administrator.
186
187 For more information about zonecfg syntax, see zonecfg(1M).
188
189 After a dataset is delegated to a non-global zone, the zoned property is
190 automatically set. A zoned file system cannot be mounted in the global
191 zone, since the zone administrator might have to set the mount point to
192 an unacceptable value.
193
194 The global administrator can forcibly clear the zoned property, though
195 this should be done with extreme care. The global administrator should
196 verify that all the mount points are acceptable before clearing the
197 property.
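
For example, one possible zonecfg(1M) session that delegates a dataset
to a hypothetical zone named zion (the dataset and zone names are
illustrative) is:

  # zonecfg -z zion
  zonecfg:zion> add dataset
  zonecfg:zion:dataset> set name=pool/zone/zion
  zonecfg:zion:dataset> end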
198
199 Native Properties
200 Properties are divided into two types, native properties and user-defined
201 (or "user") properties. Native properties either export internal
202 statistics or control ZFS behavior. In addition, native properties are
203 either editable or read-only. User properties have no effect on ZFS
204 behavior, but you can use them to annotate datasets in a way that is
205 meaningful in your environment. For more information about user
206 properties, see the User Properties section, below.
207
208 Every dataset has a set of properties that export statistics about the
209 dataset as well as control various behaviors. Properties are inherited
210 from the parent unless overridden by the child. Some properties apply
211 only to certain types of datasets (file systems, volumes, or snapshots).
212
213 The values of numeric properties can be specified using human-readable
214 suffixes (for example, k, KB, M, Gb, and so forth, up to Z for
215 zettabyte). The following are all valid (and equal) specifications:
216 1536M, 1.5g, 1.50GB.
217
218 The values of non-numeric properties are case sensitive and must be
219 lowercase, except for mountpoint, sharenfs, and sharesmb.
220
221 The following native properties consist of read-only statistics about the
222 dataset. These properties can be neither set, nor inherited. Native
223 properties apply to all dataset types unless otherwise noted.
224
225 available The amount of space available to the dataset and
226 all its children, assuming that there is no other
227 activity in the pool. Because space is shared
228 within a pool, availability can be limited by any
229 number of factors, including physical pool size,
230 quotas, reservations, or other datasets within the
231 pool.
232
233 This property can also be referred to by its
234 shortened column name, avail.
235
236 compressratio For non-snapshots, the compression ratio achieved
237 for the used space of this dataset, expressed as a
238 multiplier. The used property includes descendant
239 datasets, and, for clones, does not include the
240 space shared with the origin snapshot. For
241 snapshots, the compressratio is the same as the
242 refcompressratio property. Compression can be
243 turned on by running: zfs set compression=on
244 dataset. The default value is off.
245
246 creation The time this dataset was created.
247
248 clones For snapshots, this property is a comma-separated
249 list of filesystems or volumes which are clones of
250 this snapshot. The clones' origin property is this
251 snapshot. If the clones property is not empty,
252 then this snapshot cannot be destroyed (even with
253 the -r or -f options).
254
255 defer_destroy This property is on if the snapshot has been marked
256 for deferred destroy by using the zfs destroy -d
257 command. Otherwise, the property is off.
258
259 filesystem_count The total number of filesystems and volumes that
260 exist under this location in the dataset tree.
261 This value is only available when a
262 filesystem_limit has been set somewhere in the tree
263 under which the dataset resides.
264
265 logicalreferenced The amount of space that is "logically" accessible
266 by this dataset. See the referenced property. The
267 logical space ignores the effect of the compression
268 and copies properties, giving a quantity closer to
269 the amount of data that applications see. However,
270 it does include space consumed by metadata.
271
272 This property can also be referred to by its
273 shortened column name, lrefer.
274
275 logicalused The amount of space that is "logically" consumed by
276 this dataset and all its descendents. See the used
277 property. The logical space ignores the effect of
278 the compression and copies properties, giving a
279 quantity closer to the amount of data that
280 applications see. However, it does include space
281 consumed by metadata.
282
283 This property can also be referred to by its
284 shortened column name, lused.
285
286 modified For a snapshot, indicates whether the parent
287 filesystem or volume has been modified since the
288 snapshot. This property can be either yes or no.
289
290 mounted For file systems, indicates whether the file system
291 is currently mounted. This property can be either
292 yes or no.
293
294 origin For cloned file systems or volumes, the snapshot
295 from which the clone was created. See also the
296 clones property.
297
298 receive_resume_token For filesystems or volumes which have saved
299 partially-completed state from zfs receive -s, this
300 opaque token can be provided to zfs send -t to
301 resume and complete the zfs receive.
302
303 referenced The amount of data that is accessible by this
304 dataset, which may or may not be shared with other
305 datasets in the pool. When a snapshot or clone is
306 created, it initially references the same amount of
307 space as the file system or snapshot it was created
308 from, since its contents are identical.
309
310 This property can also be referred to by its
311 shortened column name, refer.
312
313 refcompressratio The compression ratio achieved for the referenced
314 space of this dataset, expressed as a multiplier.
315 See also the compressratio property.
316
317 snapshot_count The total number of snapshots that exist under this
318 location in the dataset tree. This value is only
319 available when a snapshot_limit has been set
320 somewhere in the tree under which the dataset
321 resides.
322
323 type The type of dataset: filesystem, volume, or
324 snapshot.
325
326 used The amount of space consumed by this dataset and
327 all its descendents. This is the value that is
328 checked against this dataset's quota and
329 reservation. The space used does not include this
330 dataset's reservation, but does take into account
331 the reservations of any descendent datasets. The
332 amount of space that a dataset consumes from its
333 parent, as well as the amount of space that is
334 freed if this dataset is recursively destroyed, is
335 the greater of its space used and its reservation.
336
337 The used space of a snapshot (see the Snapshots
338 section) is space that is referenced exclusively by
339 this snapshot. If this snapshot is destroyed, the
340 amount of used space will be freed. Space that is
341 shared by multiple snapshots isn't accounted for in
342 this metric. When a snapshot is destroyed, space
343 that was previously shared with this snapshot can
344 become unique to snapshots adjacent to it, thus
345 changing the used space of those snapshots. The
346 used space of the latest snapshot can also be
347 affected by changes in the file system. Note that
348 the used space of a snapshot is a subset of the
349 written space of the snapshot.
350
351 The amount of space used, available, or referenced
352 does not take into account pending changes.
353 Pending changes are generally accounted for within
354 a few seconds. Committing a change to a disk using
355 fsync(3C) or O_SYNC does not necessarily guarantee
356 that the space usage information is updated
357 immediately.
358
359 usedby* The usedby* properties decompose the used
360 properties into the various reasons that space is
361 used. Specifically, used = usedbychildren +
362 usedbydataset + usedbyrefreservation +
363 usedbysnapshots. These properties are only
364 available for datasets created on zpool "version
365 13" pools.
366
367 usedbychildren The amount of space used by children of this
368 dataset, which would be freed if all the dataset's
369 children were destroyed.
370
371 usedbydataset The amount of space used by this dataset itself,
372 which would be freed if the dataset were destroyed
373 (after first removing any refreservation and
374 destroying any necessary snapshots or descendents).
375
376 usedbyrefreservation The amount of space used by a refreservation set on
377 this dataset, which would be freed if the
378 refreservation was removed.
379
380 usedbysnapshots The amount of space consumed by snapshots of this
381 dataset. In particular, it is the amount of space
382 that would be freed if all of this dataset's
383 snapshots were destroyed. Note that this is not
384 simply the sum of the snapshots' used properties
385 because space can be shared by multiple snapshots.
386
387 userused@user The amount of space consumed by the specified user
388 in this dataset. Space is charged to the owner of
389 each file, as displayed by ls -l. The amount of
390 space charged is displayed by du and ls -s. See
391 the zfs userspace subcommand for more information.
392
393 Unprivileged users can access only their own space
394 usage. The root user, or a user who has been
395 granted the userused privilege with zfs allow, can
396 access everyone's usage.
397
398 The userused@... properties are not displayed by
399 zfs get all. The user's name must be appended
400 after the @ symbol, using one of the following
401 forms:
402
403 o POSIX name (for example, joe)
404
405 o POSIX numeric ID (for example, 789)
406
407 o SID name (for example, joe.smith@mydomain)
408
409 o SID numeric ID (for example, S-1-123-456-789)
410
411 userrefs This property is set to the number of user holds on
412 this snapshot. User holds are set by using the zfs
413 hold command.
414
415 groupused@group The amount of space consumed by the specified group
416 in this dataset. Space is charged to the group of
417 each file, as displayed by ls -l. See the
418 userused@user property for more information.
419
420 Unprivileged users can only access their own
421 groups' space usage. The root user, or a user who
422 has been granted the groupused privilege with zfs
423 allow, can access all groups' usage.
424
425 volblocksize For volumes, specifies the block size of the
426 volume. The blocksize cannot be changed once the
427 volume has been written, so it should be set at
428 volume creation time. The default blocksize for
429 volumes is 8 Kbytes. Any power of 2 from 512 bytes
430 to 128 Kbytes is valid.
431
432 This property can also be referred to by its
433 shortened column name, volblock.
434
435 written The amount of space referenced by this dataset,
436 that was written since the previous snapshot (i.e.
437 that is not referenced by the previous snapshot).
438
439 written@snapshot The amount of referenced space written to this
440 dataset since the specified snapshot. This is the
441 space that is referenced by this dataset but was
442 not referenced by the specified snapshot.
443
444 The snapshot may be specified as a short snapshot
445 name (just the part after the @), in which case it
446 will be interpreted as a snapshot in the same
447 filesystem as this dataset. The snapshot may be a
448 full snapshot name (filesystem@snapshot), which for
449 clones may be a snapshot in the origin's filesystem
450 (or the origin of the origin's filesystem, etc.)
451
452 The following native properties can be used to change the behavior of a
453 ZFS dataset.
454
455 aclinherit=discard|noallow|restricted|passthrough|passthrough-x
456 Controls how ACEs are inherited when files and directories are created.
457
458 discard does not inherit any ACEs.
459
460 noallow only inherits inheritable ACEs that specify "deny"
461 permissions.
462
463 restricted default, removes the write_acl and write_owner
464 permissions when the ACE is inherited.
465
466 passthrough inherits all inheritable ACEs without any modifications.
467
468 passthrough-x same meaning as passthrough, except that the owner@,
469 group@, and everyone@ ACEs inherit the execute
470 permission only if the file creation mode also requests
471 the execute bit.
472
473 When the property value is set to passthrough, files are created with a
474 mode determined by the inheritable ACEs. If no inheritable ACEs exist
475 that affect the mode, then the mode is set in accordance to the
476 requested mode from the application.
477
478 aclmode=discard|groupmask|passthrough|restricted
479 Controls how an ACL is modified during chmod(2) and how inherited ACEs
480 are modified by the file creation mode.
481
482 discard default, deletes all ACEs except for those representing
483 the mode of the file or directory requested by chmod(2).
484
485 groupmask reduces permissions granted by all ALLOW entries found in
486 the ACL such that they are no greater than the group
487 permissions specified by the mode.
488
489 passthrough indicates that no changes are made to the ACL other than
490 creating or updating the necessary ACEs to represent the
491 new mode of the file or directory.
492
493 restricted causes the chmod(2) operation to return an error when used
494 on any file or directory which has a non-trivial ACL, with
495 entries in addition to those that represent the mode.
496
497 chmod(2) is required to change the set user ID, set group ID, or sticky
498 bit on a file or directory, as they do not have equivalent ACEs. In
499 order to use chmod(2) on a file or directory with a non-trivial ACL
500 when aclmode is set to restricted, you must first remove all ACEs
501 except for those that represent the current mode.
502
503 atime=on|off
504 Controls whether the access time for files is updated when they are
505 read. Turning this property off avoids producing write traffic when
506 reading files and can result in significant performance gains, though
507 it might confuse mailers and other similar utilities. The default
508 value is on.
509
510 canmount=on|off|noauto
511 If this property is set to off, the file system cannot be mounted, and
512 is ignored by zfs mount -a. Setting this property to off is similar to
513 setting the mountpoint property to none, except that the dataset still
514 has a normal mountpoint property, which can be inherited. Setting this
515 property to off allows datasets to be used solely as a mechanism to
516 inherit properties. One example of setting canmount=off is to have two
517 datasets with the same mountpoint, so that the children of both
518 datasets appear in the same directory, but might have different
519 inherited characteristics.
520
521 When set to noauto, a dataset can only be mounted and unmounted
522 explicitly. The dataset is not mounted automatically when the dataset
523 is created or imported, nor is it mounted by the zfs mount -a command
524 or unmounted by the zfs unmount -a command.
525
526 This property is not inherited.
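
For example, the following commands (with illustrative names) use
canmount=off so that the children of two separate datasets share the
/export/home directory while inheriting different properties:

  # zfs create -o canmount=off -o mountpoint=/export/home pool/groupA
  # zfs create -o canmount=off -o mountpoint=/export/home pool/groupB
  # zfs create -o compression=on pool/groupA/user1
  # zfs create pool/groupB/user2

Both child file systems are then mounted under /export/home, while
pool/groupA and pool/groupB themselves are never mounted.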
527
528 checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
529 Controls the checksum used to verify data integrity. The default value
530 is on, which automatically selects an appropriate algorithm (currently,
531 fletcher4, but this may change in future releases). The value off
532 disables integrity checking on user data. The value noparity not only
533 disables integrity but also disables maintaining parity for user data.
534 This setting is used internally by a dump device residing on a RAID-Z
535 pool and should not be used by any other dataset. Disabling checksums
536 is NOT a recommended practice.
537
538 The sha512, skein, and edonr checksum algorithms require enabling the
539 appropriate features on the pool. Please see zpool-features(5) for
540 more information on these algorithms.
541
542 Changing this property affects only newly-written data.
543
544 Salted checksum algorithms (edonr, skein) are currently not supported
545 for any filesystem on the boot pools.
546
547 compression=on|off|gzip|gzip-N|lz4|lzjb|zle
548 Controls the compression algorithm used for this dataset.
549
550 Setting compression to on indicates that the current default
551 compression algorithm should be used. The default balances compression
552 and decompression speed with compression ratio, and is expected to work
553 well on a wide variety of workloads. Unlike all other settings for
554 this property, on does not select a fixed compression type. As new
555 compression algorithms are added to ZFS and enabled on a pool, the
556 default compression algorithm may change. The current default
557 compression algorithm is either lzjb or, if the lz4_compress feature is
558 enabled, lz4.
559
560 The lz4 compression algorithm is a high-performance replacement for the
561 lzjb algorithm. It features significantly faster compression and
562 decompression, as well as a moderately higher compression ratio than
563 lzjb, but can only be used on pools with the lz4_compress feature set
564 to enabled. See zpool-features(5) for details on ZFS feature flags and
565 the lz4_compress feature.
566
567 The lzjb compression algorithm is optimized for performance while
568 providing decent data compression.
569
570 The gzip compression algorithm uses the same compression as the gzip(1)
571 command. You can specify the gzip level by using the value gzip-N,
572 where N is an integer from 1 (fastest) to 9 (best compression ratio).
573 Currently, gzip is equivalent to gzip-6 (which is also the default for
574 gzip(1)).
575
576 The zle compression algorithm compresses runs of zeros.
577
578 This property can also be referred to by its shortened column name
579 compress. Changing this property affects only newly-written data.
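
For example, assuming the lz4_compress feature is enabled on the pool,
compression can be enabled on an existing file system (the dataset name
is illustrative) and the achieved ratio inspected later:

  # zfs set compression=lz4 pool/data
  # zfs get compressratio pool/data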
580
581 smartcompression=on|off
582 Smart compression is a feature which optimizes compression performance
583 on filesystems which contain a mixture of compressible and
584 incompressible data. When compression is enabled on a filesystem,
585 smart compression dynamically tracks per-file compression ratios to
586 determine if a file is compressible or not. When the compression ratio
587 being achieved is too low, smart compression progressively backs off
588 attempting to compress the file.
589
590 The algorithm periodically checks whether new data written to a file
591 previously deemed incompressible is still not compressible and adjusts
592 behavior accordingly. Certain types of files, such as virtual machine
593 disk files or large database files, can contain a mixture of both types
594 of data. Although smart compression tries to detect these situations,
595 in marginal cases it can be too pessimistic, which results in a
596 reduction of the overall compression ratio. In this case, setting the
597 smartcompression property to off turns off smart compression on a
598 filesystem, so that data is always compressed regardless of the
599 compression ratio achieved.
600
601 The default value is on.
602
603 copies=1|2|3
604 Controls the number of copies of data stored for this dataset. These
605 copies are in addition to any redundancy provided by the pool, for
606 example, mirroring or RAID-Z. The copies are stored on different
607 disks, if possible. The space used by multiple copies is charged to
608 the associated file and dataset, changing the used property and
609 counting against quotas and reservations.
610
611 Changing this property only affects newly-written data. Therefore, set
612 this property at file system creation time by using the -o copies=N
613 option.
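
For example, the following command (with an illustrative name) sets the
number of copies at creation time, so that all data written to the new
file system is stored twice:

  # zfs create -o copies=2 pool/important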
614
615 devices=on|off
616 Controls whether device nodes can be opened on this file system. The
617 default value is on.
618
619 exec=on|off
620 Controls whether processes can be executed from within this file
621 system. The default value is on.
622
623 filesystem_limit=count|none
624 Limits the number of filesystems and volumes that can exist under this
625 point in the dataset tree. The limit is not enforced if the user is
626 allowed to change the limit. Setting a filesystem_limit on a
627 descendent of a filesystem that already has a filesystem_limit does not
628 override the ancestor's filesystem_limit, but rather imposes an
629 additional limit. This feature must be enabled to be used (see
630 zpool-features(5)).
631
632 mountpoint=path|none|legacy
633 Controls the mount point used for this file system. See the Mount
634 Points section for more information on how this property is used.
635
636 When the mountpoint property is changed for a file system, the file
637 system and any children that inherit the mount point are unmounted. If
638 the new value is legacy, then they remain unmounted. Otherwise, they
639 are automatically remounted in the new location if the property was
640 previously legacy or none, or if they were mounted before the property
641 was changed. In addition, any shared file systems are unshared and
642 shared in the new location.
643
644 nbmand=on|off
645 Controls whether the file system should be mounted with nbmand (Non
646 Blocking mandatory locks). This is used for SMB clients. Changes to
647 this property only take effect when the file system is umounted and
648 remounted. See mount(1M) for more information on nbmand mounts.
649
650 primarycache=all|none|metadata
651 Controls what is cached in the primary cache (ARC). If this property
652 is set to all, then both user data and metadata is cached. If this
653 property is set to none, then neither user data nor metadata is cached.
654 If this property is set to metadata, then only metadata is cached. The
655 default value is all.
656
657 quota=size|none
658 Limits the amount of space a dataset and its descendents can consume.
659 This property enforces a hard limit on the amount of space used. This
660 includes all space consumed by descendents, including file systems and
661 snapshots. Setting a quota on a descendent of a dataset that already
662 has a quota does not override the ancestor's quota, but rather imposes
663 an additional limit.
664
665 Quotas cannot be set on volumes, as the volsize property acts as an
666 implicit quota.
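
For example, the following commands (with illustrative names) place a
hard limit on a user's home file system and then display it:

  # zfs set quota=20G pool/home/user
  # zfs get quota pool/home/user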
667
668 snapshot_limit=count|none
669 Limits the number of snapshots that can be created on a dataset and its
670 descendents. Setting a snapshot_limit on a descendent of a dataset
671 that already has a snapshot_limit does not override the ancestor's
672 snapshot_limit, but rather imposes an additional limit. The limit is
673 not enforced if the user is allowed to change the limit. For example,
674 this means that recursive snapshots taken from the global zone are
675 counted against each delegated dataset within a zone. This feature
676 must be enabled to be used (see zpool-features(5)).
677
678 userquota@user=size|none
679 Limits the amount of space consumed by the specified user. User space
680 consumption is identified by the userused@user property.
681
682 Enforcement of user quotas may be delayed by several seconds. This
683 delay means that a user might exceed their quota before the system
684 notices that they are over quota and begins to refuse additional writes
685 with the EDQUOT error message. See the zfs userspace subcommand for
686 more information.
687
688 Unprivileged users can access only their own space usage. The
689 root user, or a user who has been granted the userquota privilege with
690 zfs allow, can get and set everyone's quota.
691
692 This property is not available on volumes, on file systems before
693 version 4, or on pools before version 15. The userquota@... properties
694 are not displayed by zfs get all. The user's name must be appended
695 after the @ symbol, using one of the following forms:
696
697 o POSIX name (for example, joe)
698
699 o POSIX numeric ID (for example, 789)
700
701 o SID name (for example, joe.smith@mydomain)
702
703 o SID numeric ID (for example, S-1-123-456-789)
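
For example, the following commands (with illustrative names) set and
then query a quota for the POSIX user joe:

  # zfs set userquota@joe=50G pool/home
  # zfs get userquota@joe pool/home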
704
705 groupquota@group=size|none
706 Limits the amount of space consumed by the specified group. Group
707 space consumption is identified by the groupused@group property.
708
709 Unprivileged users can access only their own groups' space usage. The
710 root user, or a user who has been granted the groupquota privilege with
711 zfs allow, can get and set all groups' quotas.
712
713 readonly=on|off
714 Controls whether this dataset can be modified. The default value is
715 off.
716
717 This property can also be referred to by its shortened column name,
718 rdonly.
719
720 recordsize=size
721 Specifies a suggested block size for files in the file system. This
722 property is designed solely for use with database workloads that access
723 files in fixed-size records. ZFS automatically tunes block sizes
724 according to internal algorithms optimized for typical access patterns.
725
726 For databases that create very large files but access them in small
727 random chunks, these algorithms may be suboptimal. Specifying a
728 recordsize greater than or equal to the record size of the database can
729 result in significant performance gains. Use of this property for
730 general purpose file systems is strongly discouraged, and may adversely
731 affect performance.
732
733 The size specified must be a power of two greater than or equal to 512
734 and less than or equal to 128 Kbytes. If the large_blocks feature is
735 enabled on the pool, the size may be up to 1 Mbyte. See
736 zpool-features(5) for details on ZFS feature flags.
737
738 Changing the file system's recordsize affects only files created
739 afterward; existing files are unaffected.
740
741 This property can also be referred to by its shortened column name,
742 recsize.
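
For example, a file system intended to hold a database that performs
8 Kbyte random I/O could be created with a matching record size (the
dataset name is illustrative):

  # zfs create -o recordsize=8K pool/db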
743
744 redundant_metadata=all|most
745 Controls what types of metadata are stored redundantly. ZFS stores an
746 extra copy of metadata, so that if a single block is corrupted, the
747 amount of user data lost is limited. This extra copy is in addition to
748 any redundancy provided at the pool level (e.g. by mirroring or
749 RAID-Z), and is in addition to an extra copy specified by the copies
750 property (up to a total of 3 copies). For example, if the pool is
751 mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6
752 copies of most metadata, and 4 copies of data and some metadata.
753
754 When set to all, ZFS stores an extra copy of all metadata. If a single
755 on-disk block is corrupt, at worst a single block of user data (which
756 is recordsize bytes long) can be lost.
757
758 When set to most, ZFS stores an extra copy of most types of metadata.
759 This can improve performance of random writes, because less metadata
760 must be written. In practice, at worst about 100 blocks (of recordsize
761 bytes each) of user data can be lost if a single on-disk block is
762 corrupt. The exact behavior of which metadata blocks are stored
763 redundantly may change in future releases.
764
765 The default value is all.
766
767 refquota=size|none
768 Limits the amount of space a dataset can consume. This property
769 enforces a hard limit on the amount of space used. This hard limit
770 does not include space used by descendents, including file systems and
771 snapshots.
772
773 refreservation=size|none
774 The minimum amount of space guaranteed to a dataset, not including its
775 descendents. When the amount of space used is below this value, the
776 dataset is treated as if it were taking up the amount of space
777 specified by refreservation. The refreservation reservation is
778 accounted for in the parent datasets' space used, and counts against
779 the parent datasets' quotas and reservations.
780
781 If refreservation is set, a snapshot is only allowed if there is enough
782 free pool space outside of this reservation to accommodate the current
783 number of "referenced" bytes in the dataset.
784
785 This property can also be referred to by its shortened column name,
786 refreserv.
787
788 reservation=size|none
789 The minimum amount of space guaranteed to a dataset and its
790 descendants. When the amount of space used is below this value, the
791 dataset is treated as if it were taking up the amount of space
792 specified by its reservation. Reservations are accounted for in the
793 parent datasets' space used, and count against the parent datasets'
794 quotas and reservations.
795
796 This property can also be referred to by its shortened column name,
797 reserv.
798
799 secondarycache=all|none|metadata
800 Controls what is cached in the secondary cache (L2ARC). If this
801 property is set to all, then both user data and metadata is cached. If
802 this property is set to none, then neither user data nor metadata is
803 cached. If this property is set to metadata, then only metadata is
804 cached. The default value is all.
805
806 setuid=on|off
807 Controls whether the setuid bit is respected for the file system. The
808 default value is on.
809
810 sharesmb=on|off|opts
811 Controls whether the file system is shared via SMB, and what options
812 are to be used. A file system with the sharesmb property set to off is
813 managed through traditional tools such as sharemgr(1M). Otherwise, the
814 file system is automatically shared and unshared with the zfs share and
815 zfs unshare commands. See sharesmb(5) for the share options
816 description.
817
818 Because SMB shares require a resource name, a unique resource name is
819 constructed from the dataset name. The constructed name is a copy of
820 the dataset name except that the characters in the dataset name, which
821 would be invalid in the resource name, are replaced with underscore (_)
822 characters. A pseudo property "name" is also supported that allows you
823 to replace the dataset name with a specified name. The specified name
824 is then used to replace the prefix dataset in the case of inheritance.
825 For example, if the dataset data/home/john is set to name=john, then
826 data/home/john has a resource name of john. If a child dataset
827 data/home/john/backups is shared, it has a resource name of
828 john_backups.
829
830 When SMB shares are created, the SMB share name appears as an entry in
831 the .zfs/shares directory. You can use the ls or chmod command to
832 display the share-level ACLs on the entries in this directory.
833
834 When the sharesmb property is changed for a dataset, the dataset and
835 any children inheriting the property are re-shared with the new
836 options, only if the property was previously set to off, or if they
837 were shared before the property was changed. If the new property is
838 set to off, the file systems are unshared.
839
840 sharenfs=on|off|opts
841 Controls whether the file system is shared via NFS, and what options
842 are to be used. A file system with a sharenfs property of off is
843 managed through traditional tools such as share(1M), unshare(1M), and
844 dfstab(4). Otherwise, the file system is automatically shared and
845 unshared with the zfs share and zfs unshare commands. See sharenfs(5)
846 for the share options description.
847
848 When the sharenfs property is changed for a dataset, the dataset and
849 any children inheriting the property are re-shared with the new
850 options, only if the property was previously off, or if they were
851 shared before the property was changed. If the new property is off,
852 the file systems are unshared.
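
For example, NFS sharing can be enabled with the default options, or
with an option string in the sharenfs(5) format (the dataset name,
network, and host below are illustrative):

  # zfs set sharenfs=on pool/home
  # zfs set sharenfs='rw=@192.168.1.0/24,root=adminhost' pool/home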
853
854 logbias=latency|throughput
855 Provide a hint to ZFS about handling of synchronous requests in this
856 dataset. If logbias is set to latency (the default), ZFS will use pool
857 log devices (if configured) to handle the requests at low latency. If
858 logbias is set to throughput, ZFS will not use configured pool log
859 devices. ZFS will instead optimize synchronous operations for global
860 pool throughput and efficient use of resources.
861
862 snapdir=hidden|visible
863 Controls whether the .zfs directory is hidden or visible in the root of
864 the file system as discussed in the Snapshots section. The default
865 value is hidden.
866
867 sync=standard|always|disabled
868 Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC).
869 standard is the POSIX specified behavior of ensuring all synchronous
870 requests are written to stable storage and all devices are flushed to
871 ensure data is not cached by device controllers (this is the default).
872 always causes every file system transaction to be written and flushed
873 before its system call returns. This has a large performance penalty.
874 disabled disables synchronous requests. File system transactions are
875 only committed to stable storage periodically. This option will give
876 the highest performance. However, it is very dangerous as ZFS would be
877 ignoring the synchronous transaction demands of applications such as
878 databases or NFS. Administrators should only use this option when the
879 risks are understood.
880
881 version=N|current
882 The on-disk version of this file system, which is independent of the
883 pool version. This property can only be set to later supported
884 versions. See the zfs upgrade command.
885
886 volsize=size
887 For volumes, specifies the logical size of the volume. By default,
888 creating a volume establishes a reservation of equal size. For storage
889 pools with a version number of 9 or higher, a refreservation is set
890 instead. Any changes to volsize are reflected in an equivalent change
891 to the reservation (or refreservation). The volsize can only be set to
892 a multiple of volblocksize, and cannot be zero.
893
894 The reservation is kept equal to the volume's logical size to prevent
895 unexpected behavior for consumers. Without the reservation, the volume
896 could run out of space, resulting in undefined behavior or data
897 corruption, depending on how the volume is used. These effects can
898 also occur when the volume size is changed while it is in use
899 (particularly when shrinking the size). Extreme care should be used
900 when adjusting the volume size.
901
902 Though not recommended, a "sparse volume" (also known as "thin
903 provisioning") can be created by specifying the -s option to the zfs
904 create -V command, or by changing the reservation after the volume has
905 been created. A "sparse volume" is a volume where the reservation is
906 less than the volume size. Consequently, writes to a sparse volume can
907 fail with ENOSPC when the pool is low on space. For a sparse volume,
908 changes to volsize are not reflected in the reservation.
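
For example, the following commands (with illustrative names) create a
fully reserved volume and a sparse ("thin provisioned") volume of the
same logical size:

  # zfs create -V 100G pool/vol01
  # zfs create -s -V 100G pool/vol02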
909
910 vscan=on|off
911 Controls whether regular files should be scanned for viruses when a
912 file is opened and closed. In addition to enabling this property, the
913 virus scan service must also be enabled for virus scanning to occur.
914 The default value is off.
915
916 wbc_mode=on|off
917 Controls the write back cache mode. After the property has been set on
918 a dataset, all child datasets inherit it. Because the property is
919 inherited recursively, it cannot be enabled on a dataset if it is
920 already enabled on any parent or child dataset. The property cannot be
921 set if the target pool does not have a special device (special vdev).
922 The default value is off. This property cannot be enabled together
923 with the dedup property.
924
925 xattr=on|off
926 Controls whether extended attributes are enabled for this file system.
927 The default value is on.
928
929 zoned=on|off
930 Controls whether the dataset is managed from a non-global zone. See
931 the Zones section for more information. The default value is off.
932
933 The following three properties cannot be changed after the file system is
934 created, and therefore, should be set when the file system is created.
935 If the properties are not set with the zfs create or zpool create
936 commands, these properties are inherited from the parent dataset. If the
937 parent dataset lacks these properties due to having been created prior to
938 these features being supported, the new file system will have the default
939 values for these properties.
940
941 casesensitivity=sensitive|insensitive|mixed
942 Indicates whether the file name matching algorithm used by the file
943 system should be case-sensitive, case-insensitive, or allow a
944 combination of both styles of matching. The default value for the
945 casesensitivity property is sensitive. Traditionally, UNIX and POSIX
946 file systems have case-sensitive file names.
947
948 The mixed value for the casesensitivity property indicates that the
949 file system can support requests for both case-sensitive and case-
950 insensitive matching behavior. Currently, case-insensitive matching
951 behavior on a file system that supports mixed behavior is limited to
952 the SMB server product. For more information about the mixed value
953 behavior, see the "ZFS Administration Guide".
954
955 normalization=none|formC|formD|formKC|formKD
956 Indicates whether the file system should perform a unicode
957 normalization of file names whenever two file names are compared, and
958 which normalization algorithm should be used. File names are always
959 stored unmodified; names are normalized as part of any comparison
960 process. If this property is set to a legal value other than none, and
961 the utf8only property was left unspecified, the utf8only property is
962 automatically set to on. The default value of the normalization
963 property is none. This property cannot be changed after the file
964 system is created.
965
966 utf8only=on|off
967 Indicates whether the file system should reject file names that include
968 characters that are not present in the UTF-8 character code set. If
969 this property is explicitly set to off, the normalization property must
970 either not be explicitly set or be set to none. The default value for
971 the utf8only property is off. This property cannot be changed after
972 the file system is created.
973
974 The casesensitivity, normalization, and utf8only properties are also new
975 permissions that can be assigned to non-privileged users by using the ZFS
976 delegated administration feature.
977
978 Temporary Mount Point Properties
979 When a file system is mounted, either through mount(1M) for legacy mounts
980 or the zfs mount command for normal file systems, its mount options are
981 set according to its properties. The correlation between properties and
982 mount options is as follows:
983
984 PROPERTY MOUNT OPTION
985 devices devices/nodevices
986 exec exec/noexec
987 readonly ro/rw
988 setuid setuid/nosetuid
989 xattr xattr/noxattr
990
991 In addition, these options can be set on a per-mount basis using the -o
992 option, without affecting the property that is stored on disk. The
993 values specified on the command line override the values stored in the
994 dataset. The nosuid option is an alias for nodevices,nosetuid. These
995 properties are reported as "temporary" by the zfs get command. If the
996 properties are changed while the dataset is mounted, the new setting
997 overrides any temporary settings.
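
For example, a file system can be mounted read-only for a single mount
without changing its stored readonly property (the dataset name is
illustrative):

  # zfs mount -o ro pool/home/user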
998
999 User Properties
1000 In addition to the standard native properties, ZFS supports arbitrary
1001 user properties. User properties have no effect on ZFS behavior, but
1002 applications or administrators can use them to annotate datasets (file
1003 systems, volumes, and snapshots).
1004
1005 User property names must contain a colon (":") character to distinguish
1006 them from native properties. They may contain lowercase letters,
1007 numbers, and the following punctuation characters: colon (":"), dash
1008 ("-"), period ("."), and underscore ("_"). The expected convention is
1009 that the property name is divided into two portions such as
1010 module:property, but this namespace is not enforced by ZFS. User
1011 property names can be at most 256 characters, and cannot begin with a
1012 dash ("-").
1013
1014 When making programmatic use of user properties, it is strongly suggested
1015 to use a reversed DNS domain name for the module component of property
1016 names to reduce the chance that two independently-developed packages use
1017 the same property name for different purposes.
1018
1019 The values of user properties are arbitrary strings, are always
1020 inherited, and are never validated. All of the commands that operate on
1021 properties (zfs list, zfs get, zfs set, and so forth) can be used to
1022 manipulate both native properties and user properties. Use the zfs
1023 inherit command to clear a user property. If the property is not defined
1024 in any parent dataset, it is removed entirely. Property values are
1025 limited to 8192 bytes.
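
For example, the following commands (with an illustrative property name
using a reversed DNS module component) set, query, and then clear a
user property:

  # zfs set com.example:department=12345 pool/accounting
  # zfs get com.example:department pool/accounting
  # zfs inherit com.example:department pool/accounting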
1026
1027 ZFS Volumes as Swap or Dump Devices
1028 During an initial installation a swap device and dump device are created
1029 on ZFS volumes in the ZFS root pool. By default, the swap area size is
1030 based on 1/2 the size of physical memory up to 2 Gbytes. The size of the
1031 dump device depends on the kernel's requirements at installation time.
1032 Separate ZFS volumes must be used for the swap area and dump devices. Do
1033 not swap to a file on a ZFS file system. A ZFS swap file configuration
1034 is not supported.
1035
1036 If you need to change your swap area or dump device after the system is
1037 installed or upgraded, use the swap(1M) and dumpadm(1M) commands.
1038
1039 SUBCOMMANDS
1040 All subcommands that modify state are logged persistently to the pool in
1041 their original form.
1042
1043 zfs -?
1044 Displays a help message.
1045
1046 zfs create [-p] [-o property=value]... filesystem
1047 Creates a new ZFS file system. The file system is automatically
1048 mounted according to the mountpoint property inherited from the parent.
1049
1050 -o property=value
1051 Sets the specified property as if the command zfs set
1052 property=value was invoked at the same time the dataset was
1053 created. Any editable ZFS property can also be set at creation
1054 time. Multiple -o options can be specified. An error results if
1055 the same property is specified in multiple -o options.
1056
1057 -p Creates all the non-existing parent datasets. Datasets created in
1058 this manner are automatically mounted according to the mountpoint
1059 property inherited from their parent. Any property specified on
1060 the command line using the -o option is ignored. If the target
1061 filesystem already exists, the operation completes successfully.
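
For example, the following commands (with illustrative names) create a
file system with properties set at creation time, and then create a
deeply nested file system together with its missing parents:

  # zfs create -o mountpoint=/export/home -o compression=on pool/home
  # zfs create -p pool/home/user/projects/alpha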
1062
1063 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
1064 Creates a volume of the given size. The volume is exported as a block
1065 device in /dev/zvol/{dsk,rdsk}/path, where path is the name of the
1066 volume in the ZFS namespace. The size represents the logical size as
1067 exported by the device. By default, a reservation of equal size is
1068 created.
1069
1070 size is automatically rounded up to the nearest 128 Kbytes to ensure
1071 that the volume has an integral number of blocks regardless of
1072 blocksize.
1073
1074 -b blocksize
1075 Equivalent to -o volblocksize=blocksize. If this option is
1076 specified in conjunction with -o volblocksize, the resulting
1077 behavior is undefined.
1078
1079 -o property=value
1080 Sets the specified property as if the zfs set property=value
1081 command was invoked at the same time the dataset was created. Any
1082 editable ZFS property can also be set at creation time. Multiple
1083 -o options can be specified. An error results if the same property
1084 is specified in multiple -o options.
1085
1086 -p Creates all the non-existing parent datasets. Datasets created in
1087 this manner are automatically mounted according to the mountpoint
1088 property inherited from their parent. Any property specified on
1089 the command line using the -o option is ignored. If the target
1090 filesystem already exists, the operation completes successfully.
1091
1092 -s Creates a sparse volume with no reservation. See volsize in the
1093 Native Properties section for more information about sparse
1094 volumes.
1095
1096 zfs destroy [-Rfnprv] filesystem|volume
1097 Destroys the given dataset. By default, the command unshares any file
1098 systems that are currently shared, unmounts any file systems that are
1099 currently mounted, and refuses to destroy a dataset that has active
1100 dependents (children or clones).
1101
1102 -R Recursively destroy all dependents, including cloned file systems
1103 outside the target hierarchy.
1104
1105 -f Force an unmount of any file systems using the unmount -f command.
1106 This option has no effect on non-file systems or unmounted file
1107 systems.
1108
1109 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1110 useful in conjunction with the -v or -p flags to determine what
1111 data would be deleted.
1112
1113 -p Print machine-parsable verbose information about the deleted data.
1114
1115 -r Recursively destroy all children.
1116
1117 -v Print verbose information about the deleted data.
1118
1119 Extreme care should be taken when applying either the -r or the -R
1120 options, as they can destroy large portions of a pool and cause
1121 unexpected behavior for mounted file systems in use.
1122
1123 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
1124 The given snapshots are destroyed immediately if and only if the zfs
1125 destroy command without the -d option would have destroyed them. Such
1126 immediate destruction would occur, for example, if the snapshot had no
1127 clones and the user-initiated reference count were zero.
1128
1129 If a snapshot does not qualify for immediate destruction, it is marked
1130 for deferred deletion. In this state, it exists as a usable, visible
1131 snapshot until both of the preconditions listed above are met, at which
1132 point it is destroyed.
1133
1134 An inclusive range of snapshots may be specified by separating the
1135 first and last snapshots with a percent sign. The first and/or last
1136 snapshots may be left blank, in which case the filesystem's oldest or
1137 newest snapshot will be implied.
1138
1139 Multiple snapshots (or ranges of snapshots) of the same filesystem or
1140 volume may be specified in a comma-separated list of snapshots. Only
1141 the snapshot's short name (the part after the @) should be specified
1142 when using a range or comma-separated list to identify multiple
1143 snapshots.
1144
1145 -R Recursively destroy all clones of these snapshots, including the
1146 clones, snapshots, and children. If this flag is specified, the -d
1147 flag will have no effect.
1148
1149 -d Defer snapshot deletion.
1150
1151 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1152 useful in conjunction with the -p or -v flags to determine what
1153 data would be deleted.
1154
1155 -p Print machine-parsable verbose information about the deleted data.
1156
1157 -r Destroy (or mark for deferred deletion) all snapshots with this
1158 name in descendent file systems.
1159
1160 -v Print verbose information about the deleted data.
1161
1162 Extreme care should be taken when applying either the -r or the -R
1163 options, as they can destroy large portions of a pool and cause
1164 unexpected behavior for mounted file systems in use.
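
For example, the following command (with illustrative names) performs a
dry run that reports what destroying an inclusive range of snapshots
would free, without deleting anything:

  # zfs destroy -nv pool/home/user@monday%friday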
1165
1166 zfs destroy filesystem|volume#bookmark
1167 The given bookmark is destroyed.
1168
1169 zfs snapshot [-r] [-o property=value]...
1170 filesystem@snapname|volume@snapname...
1171 Creates snapshots with the given names. All previous modifications by
1172 successful system calls to the file system are part of the snapshots.
1173 Snapshots are taken atomically, so that all snapshots correspond to the
1174 same moment in time. See the Snapshots section for details.
1175
1176 -o property=value
1177 Sets the specified property; see zfs create for details.
1178
1179 -r Recursively create snapshots of all descendent datasets
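
For example, the following command (with illustrative names) atomically
snapshots a file system and all of its descendent file systems:

  # zfs snapshot -r pool/home@yesterday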
1180
1181 zfs rollback [-Rfr] snapshot
1182 Roll back the given dataset to a previous snapshot. When a dataset is
1183 rolled back, all data that has changed since the snapshot is discarded,
1184 and the dataset reverts to the state at the time of the snapshot. By
1185 default, the command refuses to roll back to a snapshot other than the
1186 most recent one. In order to do so, all intermediate snapshots and
1187 bookmarks must be destroyed by specifying the -r option.
1188
1189 The -rR options do not recursively destroy the child snapshots of a
1190 recursive snapshot. Only direct snapshots of the specified filesystem
1191 are destroyed by either of these options. To completely roll back a
recursive snapshot, you must roll back the individual child snapshots.
1193
1194 -R Destroy any more recent snapshots and bookmarks, as well as any
1195 clones of those snapshots.
1196
1197 -f Used with the -R option to force an unmount of any clone file
1198 systems that are to be destroyed.
1199
1200 -r Destroy any snapshots and bookmarks more recent than the one
1201 specified.
1202
1203 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
1204 Creates a clone of the given snapshot. See the Clones section for
1205 details. The target dataset can be located anywhere in the ZFS
1206 hierarchy, and is created as the same type as the original.
1207
1208 -o property=value
1209 Sets the specified property; see zfs create for details.
1210
1211 -p Creates all the non-existing parent datasets. Datasets created in
1212 this manner are automatically mounted according to the mountpoint
1213 property inherited from their parent. If the target filesystem or
1214 volume already exists, the operation completes successfully.
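
For example, a snapshot might be cloned under a parent that does not
yet exist (hypothetical dataset names):

# zfs clone -p tank/prod@today tank/restore/prod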
1215
1216 zfs promote clone-filesystem
1217 Promotes a clone file system to no longer be dependent on its "origin"
1218 snapshot. This makes it possible to destroy the file system that the
1219 clone was created from. The clone parent-child dependency relationship
1220 is reversed, so that the origin file system becomes a clone of the
1221 specified file system.
1222
1223 The snapshot that was cloned, and any snapshots previous to this
1224 snapshot, are now owned by the promoted clone. The space they use
1225 moves from the origin file system to the promoted clone, so enough
1226 space must be available to accommodate these snapshots. No new space
1227 is consumed by this operation, but the space accounting is adjusted.
1228 The promoted clone must not have any conflicting snapshot names of its
1229 own. The rename subcommand can be used to rename any conflicting
1230 snapshots.
1231
1232 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
1233
1234 zfs rename [-fp] filesystem|volume filesystem|volume
1235 Renames the given dataset. The new target can be located anywhere in
1236 the ZFS hierarchy, with the exception of snapshots. Snapshots can only
1237 be renamed within the parent file system or volume. When renaming a
1238 snapshot, the parent file system of the snapshot does not need to be
1239 specified as part of the second argument. Renamed file systems can
1240 inherit new mount points, in which case they are unmounted and
1241 remounted at the new mount point.
1242
1243 -f Force unmount any filesystems that need to be unmounted in the
1244 process.
1245
1246 -p Creates all the nonexistent parent datasets. Datasets created in
1247 this manner are automatically mounted according to the mountpoint
1248 property inherited from their parent.
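
For example, a file system might be moved under a parent that does not
yet exist (hypothetical dataset names):

# zfs rename -p tank/home/old tank/archive/old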
1249
1250 zfs rename -r snapshot snapshot
1251 Recursively rename the snapshots of all descendent datasets. Snapshots
1252 are the only dataset that can be renamed recursively.
1253
1254 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
1255 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
Lists the property information for the given datasets in tabular form.
Datasets specified on the command line may be identified by absolute
pathname or by relative pathname. By default, all file systems and
volumes are displayed. Snapshots are displayed if the listsnaps
property is on (the default is off). The following fields are
displayed: name, used, available, referenced, mountpoint.
1262
1263 -H Used for scripting mode. Do not print headers and separate fields
1264 by a single tab instead of arbitrary white space.
1265
1266 -S property
1267 Same as the -s option, but sorts by property in descending order.
1268
1269 -d depth
1270 Recursively display any children of the dataset, limiting the
1271 recursion to depth. A depth of 1 will display only the dataset and
1272 its direct children.
1273
1274 -o property
1275 A comma-separated list of properties to display. The property must
1276 be:
1277
1278 o One of the properties described in the Native Properties
1279 section
1280
1281 o A user property
1282
1283 o The value name to display the dataset name
1284
1285 o The value space to display space usage properties on file
1286 systems and volumes. This is a shortcut for specifying -o
1287 name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t
1288 filesystem,volume syntax.
1289
1290 -p Display numbers in parsable (exact) values.
1291
1292 -r Recursively display any children of the dataset on the command
1293 line.
1294
1295 -s property
1296 A property for sorting the output by column in ascending order
1297 based on the value of the property. The property must be one of
1298 the properties described in the Properties section, or the special
1299 value name to sort by the dataset name. Multiple properties can be
1300 specified at one time using multiple -s property options. Multiple
1301 -s options are evaluated from left to right in decreasing order of
1302 importance. The following is a list of sorting criteria:
1303
1304 o Numeric types sort in numeric order.
1305
1306 o String types sort in alphabetical order.
1307
o Types inappropriate for a row cause that row to sort to the
bottom, regardless of the specified ordering.

If no sorting options are specified, the existing behavior of zfs
list is preserved.
1313
1314 -t type
1315 A comma-separated list of types to display, where type is one of
1316 filesystem, snapshot, volume, bookmark, or all. For example,
1317 specifying -t snapshot displays only snapshots.
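
For example, the snapshots under a hypothetical file system tank/home
could be listed with the largest first:

# zfs list -r -t snapshot -o name,used -S used tank/home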
1318
1319 zfs set property=value [property=value]... filesystem|volume|snapshot...
1320 Sets the property or list of properties to the given value(s) for each
1321 dataset. Only some properties can be edited. See the Properties
1322 section for more information on what properties can be set and
1323 acceptable values. Numeric values can be specified as exact values, or
1324 in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for
1325 bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
1326 or zettabytes, respectively). User properties can be set on snapshots.
1327 For more information, see the User Properties section.
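
For example, a user property might be attached to an existing snapshot
(hypothetical dataset and property names):

# zfs set com.example:locked=on tank/home@backup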
1328
1329 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
1330 [-t type[,type]...] all | property[,property]...
1331 filesystem|volume|snapshot|bookmark...
1332 Displays properties for the given datasets. If no datasets are
1333 specified, then the command displays properties for all datasets on the
1334 system. For each property, the following columns are displayed:
1335
1336 name Dataset name
1337 property Property name
1338 value Property value
1339 source Property source. Can either be local, default,
1340 temporary, inherited, or none (-).
1341
1342 All columns are displayed by default, though this can be controlled by
1343 using the -o option. This command takes a comma-separated list of
1344 properties as described in the Native Properties and User Properties
1345 sections.
1346
1347 The special value all can be used to display all properties that apply
1348 to the given dataset's type (filesystem, volume, snapshot, or
1349 bookmark).
1350
1351 -H Display output in a form more easily parsed by scripts. Any
1352 headers are omitted, and fields are explicitly separated by a
1353 single tab instead of an arbitrary amount of space.
1354
1355 -d depth
1356 Recursively display any children of the dataset, limiting the
1357 recursion to depth. A depth of 1 will display only the dataset and
1358 its direct children.
1359
1360 -o field
1361 A comma-separated list of columns to display.
1362 name,property,value,source is the default value.
1363
1364 -p Display numbers in parsable (exact) values.
1365
1366 -r Recursively display properties for any children.
1367
1368 -s source
1369 A comma-separated list of sources to display. Those properties
1370 coming from a source other than those in this list are ignored.
1371 Each source must be one of the following: local, default,
1372 inherited, temporary, and none. The default value is all sources.
1373
1374 -t type
1375 A comma-separated list of types to display, where type is one of
1376 filesystem, snapshot, volume, bookmark, or all.
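
For example, the space used by every snapshot under a hypothetical
file system tank/home could be shown with:

# zfs get -r -t snapshot -o name,value used tank/home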
1377
1378 zfs inherit [-rS] property filesystem|volume|snapshot...
1379 Clears the specified property, causing it to be inherited from an
1380 ancestor, restored to default if no ancestor has the property set, or
1381 with the -S option reverted to the received value if one exists. See
1382 the Properties section for a listing of default values, and details on
1383 which properties can be inherited.
1384
1385 -r Recursively inherit the given property for all children.
1386
1387 -S Revert the property to the received value if one exists; otherwise
1388 operate as if the -S option was not specified.
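
For example, a compression setting overridden locally after a zfs
receive could be reverted to its received value (hypothetical dataset
name):

# zfs inherit -S compression poolB/received/fs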
1389
1390 zfs upgrade
1391 Displays a list of file systems that are not the most recent version.
1392
1393 zfs upgrade -v
1394 Displays a list of currently supported file system versions.
1395
1396 zfs upgrade [-r] [-V version] -a | filesystem
1397 Upgrades file systems to a new on-disk version. Once this is done, the
1398 file systems will no longer be accessible on systems running older
1399 versions of the software. zfs send streams generated from new
1400 snapshots of these file systems cannot be accessed on systems running
1401 older versions of the software.
1402
1403 In general, the file system version is independent of the pool version.
1404 See zpool(1M) for information on the zpool upgrade command.
1405
1406 In some cases, the file system version and the pool version are
1407 interrelated and the pool version must be upgraded before the file
1408 system version can be upgraded.
1409
1410 -V version
1411 Upgrade to the specified version. If the -V flag is not specified,
1412 this command upgrades to the most recent version. This option can
1413 only be used to increase the version number, and only up to the
1414 most recent version supported by this software.
1415
1416 -a Upgrade all file systems on all imported pools.
1417
1418 filesystem
1419 Upgrade the specified file system.
1420
1421 -r Upgrade the specified file system and all descendent file systems.
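
For example, a hypothetical file system and all of its descendents
could be upgraded to the most recent supported version with:

# zfs upgrade -r tank/home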
1422
1423 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1424 [-t type[,type]...] filesystem|snapshot
1425 Displays space consumed by, and quotas on, each user in the specified
1426 filesystem or snapshot. This corresponds to the userused@user and
1427 userquota@user properties.
1428
1429 -H Do not print headers, use tab-delimited output.
1430
1431 -S field
1432 Sort by this field in reverse order. See -s.
1433
1434 -i Translate SID to POSIX ID. The POSIX ID may be ephemeral if no
1435 mapping exists. Normal POSIX interfaces (for example, stat(2), ls
1436 -l) perform this translation, so the -i option allows the output
1437 from zfs userspace to be compared directly with those utilities.
1438 However, -i may lead to confusion if some files were created by an
1439 SMB user before a SMB-to-POSIX name mapping was established. In
1440 such a case, some files will be owned by the SMB entity and some by
1441 the POSIX entity. However, the -i option will report that the
1442 POSIX entity has the total usage and quota for both.
1443
1444 -n Print numeric ID instead of user/group name.
1445
1446 -o field[,field]...
1447 Display only the specified fields from the following set: type,
1448 name, used, quota. The default is to display all fields.
1449
1450 -p Use exact (parsable) numeric output.
1451
1452 -s field
1453 Sort output by this field. The -s and -S flags may be specified
1454 multiple times to sort first by one field, then by another. The
1455 default is -s type -s name.
1456
1457 -t type[,type]...
1458 Print only the specified types from the following set: all,
1459 posixuser, smbuser, posixgroup, smbgroup. The default is -t
1460 posixuser,smbuser. The default can be changed to include group
1461 types.
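
For example, per-user consumption on a hypothetical file system could
be listed with the largest consumers first:

# zfs userspace -o name,used,quota -S used tank/home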
1462
1463 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1464 [-t type[,type]...] filesystem|snapshot
1465 Displays space consumed by, and quotas on, each group in the specified
1466 filesystem or snapshot. This subcommand is identical to zfs userspace,
1467 except that the default types to display are -t posixgroup,smbgroup.
1468
1469 zfs mount
1470 Displays all ZFS file systems currently mounted.
1471
1472 zfs mount [-Ov] [-o options] -a | filesystem
1473 Mounts ZFS file systems.
1474
1475 -O Perform an overlay mount. See mount(1M) for more information.
1476
1477 -a Mount all available ZFS file systems. Invoked automatically as
1478 part of the boot process.
1479
1480 filesystem
1481 Mount the specified filesystem.
1482
1483 -o options
1484 An optional, comma-separated list of mount options to use
1485 temporarily for the duration of the mount. See the Temporary Mount
1486 Point Properties section for details.
1487
1488 -v Report mount progress.
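
For example, assuming ro is among the supported temporary mount
options (see the Temporary Mount Point Properties section), a
hypothetical file system could be mounted read-only for the duration
of the mount:

# zfs mount -o ro tank/home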
1489
1490 zfs unmount [-f] -a | filesystem|mountpoint
1491 Unmounts currently mounted ZFS file systems.
1492
1493 -a Unmount all available ZFS file systems. Invoked automatically as
1494 part of the shutdown process.
1495
1496 filesystem|mountpoint
1497 Unmount the specified filesystem. The command can also be given a
1498 path to a ZFS file system mount point on the system.
1499
1500 -f Forcefully unmount the file system, even if it is currently in use.
1501
1502 zfs share -a | filesystem
1503 Shares available ZFS file systems.
1504
1505 -a Share all available ZFS file systems. Invoked automatically as
1506 part of the boot process.
1507
1508 filesystem
1509 Share the specified filesystem according to the sharenfs and
1510 sharesmb properties. File systems are shared when the sharenfs or
1511 sharesmb property is set.
1512
1513 zfs unshare -a | filesystem|mountpoint
1514 Unshares currently shared ZFS file systems.
1515
1516 -a Unshare all available ZFS file systems. Invoked automatically as
1517 part of the shutdown process.
1518
1519 filesystem|mountpoint
1520 Unshare the specified filesystem. The command can also be given a
1521 path to a ZFS file system shared on the system.
1522
1523 zfs bookmark snapshot bookmark
1524 Creates a bookmark of the given snapshot. Bookmarks mark the point in
1525 time when the snapshot was created, and can be used as the incremental
1526 source for a zfs send command.
1527
1528 This feature must be enabled to be used. See zpool-features(5) for
1529 details on ZFS feature flags and the bookmarks feature.
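
For example, a bookmark might be created from an existing snapshot and
later used as an incremental source for zfs send (hypothetical names):

# zfs bookmark tank/home@monday tank/home#monday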
1530
1531 zfs send [-DLPRcenpsv] [[-I|-i] snapshot] snapshot
1532 Creates a stream representation of the second snapshot, which is
1533 written to standard output. The output can be redirected to a file or
1534 to a different system (for example, using ssh(1)). By default, a full
1535 stream is generated.
1536
1537 -D, --dedup
1538 Generate a deduplicated stream. Blocks which would have been sent
1539 multiple times in the send stream will only be sent once. The
1540 receiving system must also support this feature to receive a
1541 deduplicated stream. This flag can be used regardless of the
1542 dataset's dedup property, but performance will be much better if
1543 the filesystem uses a dedup-capable checksum (for example, sha256).
1544
1545 -I snapshot
1546 Generate a stream package that sends all intermediary snapshots
1547 from the first snapshot to the second snapshot. For example, -I @a
1548 fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The
1549 incremental source may be specified as with the -i option.
1550
1551 -L, --large-block
1552 Generate a stream which may contain blocks larger than 128KB. This
1553 flag has no effect if the large_blocks pool feature is disabled, or
1554 if the recordsize property of this filesystem has never been set
1555 above 128KB. The receiving system must have the large_blocks pool
1556 feature enabled as well. See zpool-features(5) for details on ZFS
1557 feature flags and the large_blocks feature.
1558
1559 -P, --parsable
1560 Print machine-parsable verbose information about the stream package
1561 generated.
1562
1563 -R, --replicate
1564 Generate a replication stream package, which will replicate the
1565 specified file system, and all descendent file systems, up to the
1566 named snapshot. When received, all properties, snapshots,
1567 descendent file systems, and clones are preserved.
1568
1569 If the -i or -I flags are used in conjunction with the -R flag, an
1570 incremental replication stream is generated. The current values of
1571 properties, and current snapshot and file system names are set when
1572 the stream is received. If the -F flag is specified when this
1573 stream is received, snapshots and file systems that do not exist on
the sending side are destroyed. If the -K flag is specified in
conjunction with the -F flag, then it modifies the conventional force-
1576 receive behavior to not destroy destination snapshots that are not
1577 present at the replication source.
1578
1579 -e, --embed
1580 Generate a more compact stream by using WRITE_EMBEDDED records for
1581 blocks which are stored more compactly on disk by the embedded_data
1582 pool feature. This flag has no effect if the embedded_data feature
1583 is disabled. The receiving system must have the embedded_data
1584 feature enabled. If the lz4_compress feature is active on the
1585 sending system, then the receiving system must have that feature
1586 enabled as well. See zpool-features(5) for details on ZFS feature
1587 flags and the embedded_data feature.
1588
1589 -c, --compressed
1590 Generate a more compact stream by using compressed WRITE records
1591 for blocks which are compressed on disk and in memory (see the
1592 compression property for details). If the lz4_compress feature is
1593 active on the sending system, then the receiving system must have
1594 that feature enabled as well. If the large_blocks feature is
1595 enabled on the sending system but the -L option is not supplied in
1596 conjunction with -c, then the data will be decompressed before
1597 sending so it can be split into smaller block sizes.
1598
1599 -i snapshot
1600 Generate an incremental stream from the first snapshot (the
1601 incremental source) to the second snapshot (the incremental
1602 target). The incremental source can be specified as the last
1603 component of the snapshot name (the @ character and following) and
1604 it is assumed to be from the same file system as the incremental
1605 target.
1606
1607 If the destination is a clone, the source may be the origin
1608 snapshot, which must be fully specified (for example,
1609 pool/fs@origin, not just @origin).
1610
1611 -n, --dryrun
1612 Do a dry-run ("No-op") send. Do not generate any actual send data.
1613 This is useful in conjunction with the -v or -P flags to determine
1614 what data will be sent. In this case, the verbose output will be
1615 written to standard output (contrast with a non-dry-run, where the
1616 stream is written to standard output and the verbose output goes to
1617 standard error).
1618
1619 -p, --props
1620 Include the dataset's properties in the stream. This flag is
1621 implicit when -R is specified. The receiving system must also
1622 support this feature.
1623
1624 -s Calculate send stream size. Do not generate any actual send data.
1625 This is useful when one needs to know stream size in order to store
1626 the stream externally. With -v specified, provides info on stream
1627 header and stream data portion sizes, in addition to the total
1628 stream size.
1629
1630 -v, --verbose
1631 Print verbose information about the stream package generated. This
1632 information includes a per-second report of how much data has been
1633 sent.
1634
The format of the stream is committed. You will be able to receive
your streams on future versions of ZFS.
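
For example, a stream package containing all intermediary snapshots
between two hypothetical snapshots @a and @d might be written to a
file (hypothetical dataset and file names):

# zfs send -v -I tank/home@a tank/home@d > /backup/home-a-d.stream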
1637
1638 zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
1639 Generate a send stream, which may be of a filesystem, and may be
1640 incremental from a bookmark. If the destination is a filesystem or
1641 volume, the pool must be read-only, or the filesystem must not be
1642 mounted. When the stream generated from a filesystem or volume is
1643 received, the default snapshot name will be "--head--".
1644
1645 -L, --large-block
1646 Generate a stream which may contain blocks larger than 128KB. This
1647 flag has no effect if the large_blocks pool feature is disabled, or
1648 if the recordsize property of this filesystem has never been set
1649 above 128KB. The receiving system must have the large_blocks pool
1650 feature enabled as well. See zpool-features(5) for details on ZFS
1651 feature flags and the large_blocks feature.
1652
1653 -c, --compressed
1654 Generate a more compact stream by using compressed WRITE records
1655 for blocks which are compressed on disk and in memory (see the
1656 compression property for details). If the lz4_compress feature is
1657 active on the sending system, then the receiving system must have
1658 that feature enabled as well. If the large_blocks feature is
1659 enabled on the sending system but the -L option is not supplied in
1660 conjunction with -c, then the data will be decompressed before
1661 sending so it can be split into smaller block sizes.
1662
1663 -e, --embed
1664 Generate a more compact stream by using WRITE_EMBEDDED records for
1665 blocks which are stored more compactly on disk by the embedded_data
1666 pool feature. This flag has no effect if the embedded_data feature
1667 is disabled. The receiving system must have the embedded_data
1668 feature enabled. If the lz4_compress feature is active on the
1669 sending system, then the receiving system must have that feature
1670 enabled as well. See zpool-features(5) for details on ZFS feature
1671 flags and the embedded_data feature.
1672
1673 -i snapshot|bookmark
1674 Generate an incremental send stream. The incremental source must
1675 be an earlier snapshot in the destination's history. It will
1676 commonly be an earlier snapshot in the destination's file system,
1677 in which case it can be specified as the last component of the name
1678 (the # or @ character and following).
1679
1680 If the incremental target is a clone, the incremental source can be
1681 the origin snapshot, or an earlier snapshot in the origin's
1682 filesystem, or the origin's origin, etc.
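
For example, an incremental stream relative to a previously created
bookmark might be generated with (hypothetical dataset and file
names):

# zfs send -i tank/home#monday tank/home@tuesday > /backup/home-incr.stream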
1683
1684 zfs send [-Penv] -t receive_resume_token
1685 Creates a send stream which resumes an interrupted receive. The
1686 receive_resume_token is the value of this property on the filesystem or
1687 volume that was being received into. See the documentation for zfs
1688 receive -s for more details.
1689
zfs receive [-FKnsuv] [-l filesystem|volume]... [-o property=value]...
1691 [-x property]... filesystem|volume|snapshot
1692
1693 zfs receive [-FKnsuv] [-d|-e] [-l filesystem|volume]... [-o
1694 property=value]... [-x property]... filesystem
1695 Creates a snapshot whose contents are as specified in the stream
1696 provided on standard input. If a full stream is received, then a new
1697 file system is created as well. Streams are created using the zfs send
1698 subcommand, which by default creates a full stream. zfs recv can be
1699 used as an alias for zfs receive.
1700
1701 If an incremental stream is received, then the destination file system
1702 must already exist, and its most recent snapshot must match the
1703 incremental stream's source. For zvols, the destination device link is
1704 destroyed and recreated, which means the zvol cannot be accessed during
1705 the receive operation.
1706
1707 When a snapshot replication package stream that is generated by using
1708 the zfs send -R command is received, any snapshots that do not exist on
1709 the sending location are destroyed by using the zfs destroy -d command.
1710
1711 The name of the snapshot (and file system, if a full stream is
1712 received) that this subcommand creates depends on the argument type and
1713 the use of the -d or -e options.
1714
1715 If the argument is a snapshot name, the specified snapshot is created.
1716 If the argument is a file system or volume name, a snapshot with the
1717 same name as the sent snapshot is created within the specified
1718 filesystem or volume. If neither of the -d or -e options are
1719 specified, the provided target snapshot name is used exactly as
1720 provided.
1721
1722 The -d and -e options cause the file system name of the target snapshot
1723 to be determined by appending a portion of the sent snapshot's name to
1724 the specified target filesystem. If the -d option is specified, all
1725 but the first element of the sent snapshot's file system path (usually
1726 the pool name) is used and any required intermediate file systems
1727 within the specified one are created. If the -e option is specified,
1728 then only the last element of the sent snapshot's file system name
1729 (i.e. the name of the source file system itself) is used as the target
1730 file system name.
1731
1732 -F Force a rollback of the file system to the most recent snapshot
1733 before performing the receive operation. If receiving an
1734 incremental replication stream (for example, one generated by zfs
1735 send -R [-i|-I]), destroy snapshots and file systems that do not
1736 exist on the sending side.
1737
1738 -K When force receive is enabled, do not destroy snapshots on the
1739 receiving side that do not exist on the sending side.
1740
1741 -d Discard the first element of the sent snapshot's file system name,
1742 using the remaining elements to determine the name of the target
1743 file system for the new snapshot as described in the paragraph
1744 above.
1745
1746 -e Discard all but the last element of the sent snapshot's file system
1747 name, using that element to determine the name of the target file
1748 system for the new snapshot as described in the paragraph above.
1749
1750 -l filesystem|volume
1751 Limits the receive to only the filesystem or volume specified. As
1752 multiple options may be specified, this can be used to restore
1753 specific filesystems or volumes from the received stream.
1754
1755 -n Do not actually receive the stream. This can be useful in
1756 conjunction with the -v option to verify the name the receive
1757 operation would use.
1758
1759 -o property=value
1760 Sets the specified property to value during receive of the stream.
1761 Specifying multiple -o options is allowed.
1762
1763 -o origin=snapshot
1764 Forces the stream to be received as a clone of the given snapshot.
1765 If the stream is a full send stream, this will create the
1766 filesystem described by the stream as a clone of the specified
1767 snapshot. Which snapshot was specified will not affect the success
1768 or failure of the receive, as long as the snapshot does exist. If
1769 the stream is an incremental send stream, all the normal
1770 verification will be performed.
1771
1772 -u File system that is associated with the received stream is not
1773 mounted.
1774
1775 -v Print verbose information about the stream and the time required to
1776 perform the receive operation.
1777
1778 -x property
1779 Excludes the specified property from the received stream as if it
1780 was not included in the send stream. Specifying multiple -x
1781 options is allowed.
1782
1783 -s If the receive is interrupted, save the partially received state,
1784 rather than deleting it. Interruption may be due to premature
1785 termination of the stream (e.g. due to network failure or failure
1786 of the remote system if the stream is being read over a network
1787 connection), a checksum error in the stream, termination of the zfs
1788 receive process, or unclean shutdown of the system.
1789
1790 The receive can be resumed with a stream generated by zfs send -t
1791 token, where the token is the value of the receive_resume_token
1792 property of the filesystem or volume which is received into.
1793
1794 To use this flag, the storage pool must have the extensible_dataset
1795 feature enabled. See zpool-features(5) for details on ZFS feature
1796 flags.
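
For example, a hypothetical resumable receive might be started and,
after an interruption, resumed by feeding the saved token back to zfs
send -t (hypothetical dataset and file names):

# zfs receive -s poolB/received/fs < /backup/full.stream
# zfs send -t $(zfs get -H -o value receive_resume_token poolB/received/fs) | \
      zfs receive -s poolB/received/fs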
1797
1798 zfs receive -A filesystem|volume
1799 Abort an interrupted zfs receive -s, deleting its saved partially
1800 received state.
1801
1802 zfs allow filesystem|volume
1803 Displays permissions that have been delegated on the specified
1804 filesystem or volume. See the other forms of zfs allow for more
1805 information.
1806
1807 zfs allow [-dglu] user|group[,user|group]...
1808 perm|@setname[,perm|@setname]... filesystem|volume
zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
filesystem|volume
1815 Delegates ZFS administration permission for the file systems to non-
1816 privileged users.
1817
1818 -d Allow only for the descendent file systems.
1819
1820 -e|everyone
1821 Specifies that the permissions be delegated to everyone.
1822
1823 -g group[,group]...
1824 Explicitly specify that permissions are delegated to the group.
1825
1826 -l Allow "locally" only for the specified file system.
1827
1828 -u user[,user]...
1829 Explicitly specify that permissions are delegated to the user.
1830
1831 user|group[,user|group]...
1832 Specifies to whom the permissions are delegated. Multiple entities
1833 can be specified as a comma-separated list. If neither of the -gu
1834 options are specified, then the argument is interpreted
1835 preferentially as the keyword everyone, then as a user name, and
lastly as a group name. To specify a user or group named
"everyone", use the -g or -u options. To specify a group with the
same name as a user, use the -g option.
1839
1840 perm|@setname[,perm|@setname]...
1841 The permissions to delegate. Multiple permissions may be specified
1842 as a comma-separated list. Permission names are the same as ZFS
1843 subcommand and property names. See the property list below.
1844 Property set names, which begin with @, may be specified. See the
1845 -s form below for details.
1846
1847 If neither of the -dl options are specified, or both are, then the
1848 permissions are allowed for the file system or volume, and all of its
1849 descendents.
1850
1851 Permissions are generally the ability to use a ZFS subcommand or change
1852 a ZFS property. The following permissions are available:
1853
1854 NAME TYPE NOTES
1855 allow subcommand Must also have the permission that is
1856 being allowed
1857 clone subcommand Must also have the 'create' ability and
1858 'mount' ability in the origin file system
1859 create subcommand Must also have the 'mount' ability
1860 destroy subcommand Must also have the 'mount' ability
1861 diff subcommand Allows lookup of paths within a dataset
1862 given an object number, and the ability
1863 to create snapshots necessary to
1864 'zfs diff'.
1865 mount subcommand Allows mount/umount of ZFS datasets
1866 promote subcommand Must also have the 'mount' and 'promote'
1867 ability in the origin file system
1868 receive subcommand Must also have the 'mount' and 'create'
1869 ability
1870 rename subcommand Must also have the 'mount' and 'create'
1871 ability in the new parent
1872 rollback subcommand Must also have the 'mount' ability
1873 send subcommand
1874 share subcommand Allows sharing file systems over NFS
1875 or SMB protocols
1876 snapshot subcommand Must also have the 'mount' ability
1877
1878 groupquota other Allows accessing any groupquota@...
1879 property
1880 groupused other Allows reading any groupused@... property
1881 userprop other Allows changing any user property
1882 userquota other Allows accessing any userquota@...
1883 property
1884 userused other Allows reading any userused@... property
1885
1886 aclinherit property
1887 aclmode property
1888 atime property
1889 canmount property
1890 casesensitivity property
1891 checksum property
1892 compression property
1893 copies property
1894 devices property
1895 exec property
1896 filesystem_limit property
1897 mountpoint property
1898 nbmand property
1899 normalization property
1900 primarycache property
1901 quota property
1902 readonly property
1903 recordsize property
1904 refquota property
1905 refreservation property
1906 reservation property
1907 secondarycache property
1908 setuid property
1909 sharenfs property
1910 sharesmb property
1911 snapdir property
1912 snapshot_limit property
1913 utf8only property
1914 version property
1915 volblocksize property
1916 volsize property
1917 vscan property
1918 xattr property
1919 zoned property
1920
1921 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
1922 Sets "create time" permissions. These permissions are granted
1923 (locally) to the creator of any newly-created descendent file system.
1924
1925 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
1926 Defines or adds permissions to a permission set. The set can be used
1927 by other zfs allow commands for the specified file system and its
1928 descendents. Sets are evaluated dynamically, so changes to a set are
1929 immediately reflected. Permission sets follow the same naming
1930 restrictions as ZFS file systems, but the name must begin with @, and
1931 can be no more than 64 characters long.
1932
1933 zfs unallow [-dglru] user|group[,user|group]...
1934 [perm|@setname[,perm|@setname]...] filesystem|volume
1935 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
1936 filesystem|volume
zfs unallow [-r] -c [perm|@setname[,perm|@setname]...]
filesystem|volume
Removes permissions that were granted with the zfs allow command. No
permissions are explicitly denied, so other permissions granted are
still in effect; for example, the same permission may still be granted
by an ancestor. If no permissions are specified, then all permissions for
1947 the specified user, group, or everyone are removed. Specifying
1948 everyone (or using the -e option) only removes the permissions that
1949 were granted to everyone, not all permissions for every user and group.
1950 See the zfs allow command for a description of the -ldugec options.
1951
1952 -r Recursively remove the permissions from this file system and all
1953 descendents.
1954
1955 zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...]
1956 filesystem|volume
1957 Removes permissions from a permission set. If no permissions are
1958 specified, then all permissions are removed, thus removing the set
1959 entirely.
1960
1961 zfs hold [-r] tag snapshot...
1962 Adds a single reference, named with the tag argument, to the specified
1963 snapshot or snapshots. Each snapshot has its own tag namespace, and
1964 tags must be unique within that space.
1965
1966 If a hold exists on a snapshot, attempts to destroy that snapshot by
1967 using the zfs destroy command return EBUSY.
1968
1969 -r Specifies that a hold with the given tag is applied recursively to
1970 the snapshots of all descendent file systems.
1971
1972 zfs holds [-r] snapshot...
1973 Lists all existing user references for the given snapshot or snapshots.
1974
1975 -r Lists the holds that are set on the named descendent snapshots, in
1976 addition to listing the holds on the named snapshot.
1977
1978 zfs release [-r] tag snapshot...
1979 Removes a single reference, named with the tag argument, from the
1980 specified snapshot or snapshots. The tag must already exist for each
1981 snapshot. If a hold exists on a snapshot, attempts to destroy that
1982 snapshot by using the zfs destroy command return EBUSY.
1983
1984 -r Recursively releases a hold with the given tag on the snapshots of
1985 all descendent file systems.
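
A hypothetical hold lifecycle, using a tag named keepme on an existing
snapshot, might look like:

# zfs hold -r keepme tank/home@yesterday
# zfs holds tank/home@yesterday
# zfs release -r keepme tank/home@yesterday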
1986
1987 zfs diff [-FHt] snapshot snapshot|filesystem
1988 Display the difference between a snapshot of a given filesystem and
1989 another snapshot of that filesystem from a later time or the current
1990 contents of the filesystem. The first column is a character indicating
1991 the type of change, the other columns indicate pathname, new pathname
1992 (in case of rename), change in link count, and optionally file type
1993 and/or change time. The types of change are:
1994
1995 - The path has been removed
1996 + The path has been created
1997 M The path has been modified
1998 R The path has been renamed
1999
-F Display an indication of the type of file, in a manner similar to
the -F option of ls(1).
2002
2003 B Block device
2004 C Character device
2005 / Directory
2006 > Door
2007 | Named pipe
2008 @ Symbolic link
2009 P Event port
2010 = Socket
2011 F Regular file
2012
2013 -H Give more parsable tab-separated output, without header lines and
2014 without arrows.
2015
2016 -t Display the path's inode change time as the first column of output.
2017
2018 zfs program [-n] [-t timeout] [-m memory_limit] pool script [arg1 ...]
2019 Executes script as a ZFS channel program on pool. The ZFS channel
2020 program interface allows ZFS administrative operations to be run
2021 programmatically via a Lua script. The entire script is executed
2022 atomically, with no other administrative operations taking effect
2023 concurrently. A library of ZFS calls is made available to channel
2024 program scripts. Channel programs may only be run with root
2025 privileges.
2026
For full documentation of the ZFS channel program interface, see the
manual page for zfs-program(1M).
2029
2030 -n
2031 Executes a read-only channel program, which runs faster. The program
2032 cannot change on-disk state by calling functions from the zfs.sync
submodule. The program can be used to gather information such as
properties and to determine whether changes would succeed (zfs.check.*).
2035 Without this flag, all pending changes must be synced to disk before
2036 a channel program can complete.
2037
2038 -t timeout
2039 Execution time limit, in milliseconds. If a channel program executes
2040 for longer than the provided timeout, it will be stopped and an error
2041 will be returned. The default timeout is 1000 ms, and can be set to
2042 a maximum of 10000 ms.
2043
-m memory_limit
2045 Memory limit, in bytes. If a channel program attempts to allocate
2046 more memory than the given limit, it will be stopped and an error
2047 returned. The default memory limit is 10 MB, and can be set to a
2048 maximum of 100 MB.
2049
2050 All remaining argument strings are passed directly to the channel
2051 program as arguments. See zfs-program(1M) for more information.
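
For example, a read-only channel program might be run against a
hypothetical pool with a raised time limit and one argument passed to
the script (hypothetical script path and argument):

# zfs program -n -t 5000 tank /path/to/script.lua tank/home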
2052
2053 EXIT STATUS
2054 The zfs utility exits 0 on success, 1 if an error occurs, and 2 if
2055 invalid command line options were specified.
2056
2057 EXAMPLES
2058 Example 1 Creating a ZFS File System Hierarchy
2059 The following commands create a file system named pool/home and a file
2060 system named pool/home/bob. The mount point /export/home is set for
2061 the parent file system, and is automatically inherited by the child
2062 file system.
2063
2064 # zfs create pool/home
2065 # zfs set mountpoint=/export/home pool/home
2066 # zfs create pool/home/bob
2067
2068 Example 2 Creating a ZFS Snapshot
2069 The following command creates a snapshot named yesterday. This
2070 snapshot is mounted on demand in the .zfs/snapshot directory at the
2071 root of the pool/home/bob file system.
2072
2073 # zfs snapshot pool/home/bob@yesterday
2074
2075 Example 3 Creating and Destroying Multiple Snapshots
2076 The following command creates snapshots named yesterday of pool/home
2077 and all of its descendent file systems. Each snapshot is mounted on
2078 demand in the .zfs/snapshot directory at the root of its file system.
2079 The second command destroys the newly created snapshots.
2080
2081 # zfs snapshot -r pool/home@yesterday
2082 # zfs destroy -r pool/home@yesterday
2083
2084 Example 4 Disabling and Enabling File System Compression
2085 The following command disables the compression property for all file
2086 systems under pool/home. The next command explicitly enables
2087 compression for pool/home/anne.
2088
2089 # zfs set compression=off pool/home
2090 # zfs set compression=on pool/home/anne
2091
2092 Example 5 Listing ZFS Datasets
2093 The following command lists all active file systems and volumes in the
2094 system. Snapshots are displayed if the listsnaps property is on. The
2095 default is off. See zpool(1M) for more information on pool properties.
2096
2097 # zfs list
2098 NAME USED AVAIL REFER MOUNTPOINT
2099 pool 450K 457G 18K /pool
2100 pool/home 315K 457G 21K /export/home
2101 pool/home/anne 18K 457G 18K /export/home/anne
2102 pool/home/bob 276K 457G 276K /export/home/bob
2103
2104 Example 6 Setting a Quota on a ZFS File System
2105 The following command sets a quota of 50 Gbytes for pool/home/bob.
2106
2107 # zfs set quota=50G pool/home/bob
2108
2109 Example 7 Listing ZFS Properties
2110 The following command lists all properties for pool/home/bob.
2111
2112 # zfs get all pool/home/bob
2113 NAME PROPERTY VALUE SOURCE
2114 pool/home/bob type filesystem -
2115 pool/home/bob creation Tue Jul 21 15:53 2009 -
2116 pool/home/bob used 21K -
2117 pool/home/bob available 20.0G -
2118 pool/home/bob referenced 21K -
2119 pool/home/bob compressratio 1.00x -
2120 pool/home/bob mounted yes -
2121 pool/home/bob quota 20G local
2122 pool/home/bob reservation none default
2123 pool/home/bob recordsize 128K default
2124 pool/home/bob mountpoint /pool/home/bob default
2125 pool/home/bob sharenfs off default
2126 pool/home/bob checksum on default
2127 pool/home/bob compression on local
2128 pool/home/bob atime on default
2129 pool/home/bob devices on default
2130 pool/home/bob exec on default
2131 pool/home/bob setuid on default
2132 pool/home/bob readonly off default
2133 pool/home/bob zoned off default
2134 pool/home/bob snapdir hidden default
2135 pool/home/bob aclmode discard default
2136 pool/home/bob aclinherit restricted default
2137 pool/home/bob canmount on default
2138 pool/home/bob xattr on default
2139 pool/home/bob copies 1 default
2140 pool/home/bob version 4 -
2141 pool/home/bob utf8only off -
2142 pool/home/bob normalization none -
2143 pool/home/bob casesensitivity sensitive -
2144 pool/home/bob vscan off default
2145 pool/home/bob nbmand off default
2146 pool/home/bob sharesmb off default
2147 pool/home/bob refquota none default
2148 pool/home/bob refreservation none default
2149 pool/home/bob primarycache all default
2150 pool/home/bob secondarycache all default
2151 pool/home/bob usedbysnapshots 0 -
2152 pool/home/bob usedbydataset 21K -
2153 pool/home/bob usedbychildren 0 -
2154 pool/home/bob usedbyrefreservation 0 -
2155
2156 The following command gets a single property value.
2157
2158 # zfs get -H -o value compression pool/home/bob
on

2160 The following command lists all properties with local settings for
2161 pool/home/bob.
2162
2163 # zfs get -r -s local -o name,property,value all pool/home/bob
2164 NAME PROPERTY VALUE
2165 pool/home/bob quota 20G
2166 pool/home/bob compression on
2167
2168 Example 8 Rolling Back a ZFS File System
2169 The following command reverts the contents of pool/home/anne to the
2170 snapshot named yesterday, deleting all intermediate snapshots.
2171
2172 # zfs rollback -r pool/home/anne@yesterday
2173
2174 Example 9 Creating a ZFS Clone
2175 The following command creates a writable file system whose initial
2176 contents are the same as pool/home/bob@yesterday.
2177
2178 # zfs clone pool/home/bob@yesterday pool/clone
2179
2180 Example 10 Promoting a ZFS Clone
2181 The following commands illustrate how to test out changes to a file
2182 system, and then replace the original file system with the changed one,
2183 using clones, clone promotion, and renaming:
2184
2185 # zfs create pool/project/production
2186 populate /pool/project/production with data
2187 # zfs snapshot pool/project/production@today
2188 # zfs clone pool/project/production@today pool/project/beta
2189 make changes to /pool/project/beta and test them
2190 # zfs promote pool/project/beta
2191 # zfs rename pool/project/production pool/project/legacy
2192 # zfs rename pool/project/beta pool/project/production
2193 once the legacy version is no longer needed, it can be destroyed
2194 # zfs destroy pool/project/legacy
2195
2196 Example 11 Inheriting ZFS Properties
2197 The following command causes pool/home/bob and pool/home/anne to
2198 inherit the checksum property from their parent.
2199
2200 # zfs inherit checksum pool/home/bob pool/home/anne
2201
2202 Example 12 Remotely Replicating ZFS Data
2203 The following commands send a full stream and then an incremental
2204 stream to a remote machine, restoring them into poolB/received/fs@a and
2205 poolB/received/fs@b, respectively. poolB must contain the file system
2206 poolB/received, and must not initially contain poolB/received/fs.
2207
2208 # zfs send pool/fs@a | \
2209 ssh host zfs receive poolB/received/fs@a
2210 # zfs send -i a pool/fs@b | \
2211 ssh host zfs receive poolB/received/fs
2212
2213 Example 13 Using the zfs receive -d Option
2214 The following command sends a full stream of poolA/fsA/fsB@snap to a
2215 remote machine, receiving it into poolB/received/fsA/fsB@snap. The
2216 fsA/fsB@snap portion of the received snapshot's name is determined from
2217 the name of the sent snapshot. poolB must contain the file system
2218 poolB/received. If poolB/received/fsA does not exist, it is created as
2219 an empty file system.
2220
2221 # zfs send poolA/fsA/fsB@snap | \
2222 ssh host zfs receive -d poolB/received
2223
2224 Example 14 Setting User Properties
2225 The following example sets the user-defined com.example:department
2226 property for a dataset.
2227
2228 # zfs set com.example:department=12345 tank/accounting
2229
2230 Example 15 Performing a Rolling Snapshot
2231 The following example shows how to maintain a history of snapshots with
2232 a consistent naming scheme. To keep a week's worth of snapshots, the
2233 user destroys the oldest snapshot, renames the remaining snapshots, and
2234 then creates a new snapshot, as follows:
2235
2236 # zfs destroy -r pool/users@7daysago
2237 # zfs rename -r pool/users@6daysago @7daysago
2238 # zfs rename -r pool/users@5daysago @6daysago
# zfs rename -r pool/users@4daysago @5daysago
# zfs rename -r pool/users@3daysago @4daysago
# zfs rename -r pool/users@2daysago @3daysago
# zfs rename -r pool/users@yesterday @2daysago
2243 # zfs rename -r pool/users@today @yesterday
2244 # zfs snapshot -r pool/users@today
2245
2246 Example 16 Setting sharenfs Property Options on a ZFS File System
2247 The following commands show how to set sharenfs property options to
2248 enable rw access for a set of IP addresses and to enable root access
2249 for system neo on the tank/home file system.
2250
2251 # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
2252
2253 If you are using DNS for host name resolution, specify the fully
2254 qualified hostname.
2255
2256 Example 17 Delegating ZFS Administration Permissions on a ZFS Dataset
2257 The following example shows how to set permissions so that user cindys
2258 can create, destroy, mount, and take snapshots on tank/cindys. The
2259 permissions on tank/cindys are also displayed.
2260
2261 # zfs allow cindys create,destroy,mount,snapshot tank/cindys
2262 # zfs allow tank/cindys
2263 ---- Permissions on tank/cindys --------------------------------------
2264 Local+Descendent permissions:
2265 user cindys create,destroy,mount,snapshot
2266
2267 Because the tank/cindys mount point permission is set to 755 by
2268 default, user cindys will be unable to mount file systems under
2269 tank/cindys. Add an ACE similar to the following syntax to provide
2270 mount point access:
2271
2272 # chmod A+user:cindys:add_subdirectory:allow /tank/cindys
2273
2274 Example 18 Delegating Create Time Permissions on a ZFS Dataset
The following example shows how to grant anyone in the group staff
permission to create file systems in tank/users. This syntax also
allows staff
2277 members to destroy their own file systems, but not destroy anyone
2278 else's file system. The permissions on tank/users are also displayed.
2279
2280 # zfs allow staff create,mount tank/users
2281 # zfs allow -c destroy tank/users
2282 # zfs allow tank/users
2283 ---- Permissions on tank/users ---------------------------------------
2284 Permission sets:
2285 destroy
2286 Local+Descendent permissions:
2287 group staff create,mount
2288
2289 Example 19 Defining and Granting a Permission Set on a ZFS Dataset
2290 The following example shows how to define and grant a permission set on
2291 the tank/users file system. The permissions on tank/users are also
2292 displayed.
2293
2294 # zfs allow -s @pset create,destroy,snapshot,mount tank/users
2295 # zfs allow staff @pset tank/users
2296 # zfs allow tank/users
2297 ---- Permissions on tank/users ---------------------------------------
2298 Permission sets:
2299 @pset create,destroy,mount,snapshot
2300 Local+Descendent permissions:
2301 group staff @pset
2302
2303 Example 20 Delegating Property Permissions on a ZFS Dataset
The following example shows how to grant the ability to set quotas and
2305 reservations on the users/home file system. The permissions on
2306 users/home are also displayed.
2307
2308 # zfs allow cindys quota,reservation users/home
2309 # zfs allow users/home
2310 ---- Permissions on users/home ---------------------------------------
2311 Local+Descendent permissions:
2312 user cindys quota,reservation
2313 cindys% zfs set quota=10G users/home/marks
2314 cindys% zfs get quota users/home/marks
2315 NAME PROPERTY VALUE SOURCE
2316 users/home/marks quota 10G local
2317
2318 Example 21 Removing ZFS Delegated Permissions on a ZFS Dataset
2319 The following example shows how to remove the snapshot permission from
2320 the staff group on the tank/users file system. The permissions on
2321 tank/users are also displayed.
2322
2323 # zfs unallow staff snapshot tank/users
2324 # zfs allow tank/users
2325 ---- Permissions on tank/users ---------------------------------------
2326 Permission sets:
2327 @pset create,destroy,mount,snapshot
2328 Local+Descendent permissions:
2329 group staff @pset
2330
2331 Example 22 Showing the differences between a snapshot and a ZFS Dataset
2332 The following example shows how to see what has changed between a prior
2333 snapshot of a ZFS dataset and its current state. The -F option is used
2334 to indicate type information for the files affected.
2335
2336 # zfs diff -F tank/test@before tank/test
2337 M / /tank/test/
2338 M F /tank/test/linked (+1)
2339 R F /tank/test/oldname -> /tank/test/newname
2340 - F /tank/test/deleted
2341 + F /tank/test/created
2342 M F /tank/test/modified
2343
2344 INTERFACE STABILITY
2345 Committed.
2346
2347 SEE ALSO
2348 gzip(1), ssh(1), mount(1M), sharemgr(1M), zonecfg(1M), zpool(1M),
2349 chmod(2), stat(2), write(2), fsync(3C), dfstab(4), acl(5), attributes(5),
2350 sharenfs(5), sharesmb(5)
2351
2352 illumos December 6, 2017 illumos