.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2017 George Melikov. All Rights Reserved.
.\"
.Dd December 6, 2017
.Dt ZPOOL 1M
.Os
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl \?
.Nm
.Cm add
.Op Fl fn
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Ar pool device new_device
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl B
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Nm
.Cm import
.Fl a
.Op Fl DfmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dfm
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Nm
.Cm iostat
.Op Fl v
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl Hpv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl n
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Nm
.Cm status
.Op Fl Dvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 1M
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev/dsk .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
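.Pp
For example, a hot spare can be added to and later removed from an existing
pool (the device name here is illustrative):
.Bd -literal
# zpool add pool spare c4d0
# zpool remove pool c4d0
.Ed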
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 3C
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 log c2d0
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage space used within the pool.
.It Sy bootsize
The size of the system boot partition.
This property can only be set at pool creation time and is read-only once the
pool is created.
Setting this property implies using the
.Fl B
option.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl \?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fn
.Ar pool vdev Ns ...
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 1M .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 1M
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.El
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
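.Pp
For example, a pool with mounted datasets can be destroyed forcibly (the pool
name is illustrative):
.Bd -literal
# zpool destroy -f tank
.Ed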
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
.It Xo
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just slices, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
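.Pp
For example, a pool can be exported before its disks are moved to another
system (the pool name is illustrative):
.Bd -literal
# zpool export tank
.Ed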
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name         Name of storage pool
        property     Property name
        value        Property value
        source       Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
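.Pp
For example, selected properties can be retrieved in a script-friendly form
(the pool name is illustrative):
.Bd -literal
# zpool get -Hp -o name,value capacity,health tank
.Ed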
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 1M
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
.It Xo
.Nm
.Cm import
.Op Fl Dfm
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 1M
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
.It Xo
.Nm
.Cm iostat
.Op Fl v
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If no
.Ar pool Ns s
are specified, statistics for every pool in the system are shown.
If
.Ar count
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
.Cm dedupratio , health , altroot .
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Fl u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Fl d
for standard date format.
See
.Xr date 1 .
If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.It Xo
.Nm
.Cm reopen
.Ar pool
.Xc
Reopens all the vdevs associated with the pool.
.It Xo
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Xc
Removes the specified device from the pool.
This command currently only supports removing hot spares, cache devices, log
devices, and mirrored top-level vdevs (mirrors of leaf devices), but not raidz
vdevs.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage pool.
The specified device will be evacuated by copying all allocated space from it to
the other devices in the pool.
In this case, the
.Nm zpool Cm remove
command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used; see
.Xr zpool-features 5
for details.
.Pp
A mirrored top-level device
.Pq log or data
can be removed by specifying its top-level mirror vdev.
Individual devices that are part of a mirrored configuration can be removed
using the
.Nm zpool Cm detach
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
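.Pp
For example, the memory needed by the mapping table can be estimated before
actually removing a top-level vdev (the names are illustrative):
.Bd -literal
# zpool remove -np tank mirror-1
.Ed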
.It Xo
.Nm
.Cm remove
.Fl s
.Ar pool
.Xc
Stops and cancels an in-progress removal of a top-level vdev.
.It Xo
.Nm
.Cm replace
.Op Fl f
.Ar pool Ar device Op Ar new_device
.Xc
Replaces
.Ar device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a mirror
or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
.Pa /dev/dsk
path as the old device, even though it is actually a different disk.
ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.El
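.Pp
For example, a disk in a redundant pool can be replaced with a new device
(the device names are illustrative):
.Bd -literal
# zpool replace tank c0t1d0 c0t5d0
.Ed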
.It Xo
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Xc
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub.
The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations.
The difference is that resilvering only examines data that ZFS knows to be out
of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started until the
resilver completes.
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub, the
scrub remains paused until it is resumed, even after the pool is imported
again.
Once resumed, the scrub picks up from the place where it was last checkpointed
to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
.El
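.Pp
For example, an in-progress scrub can be paused and later resumed (the pool
name is illustrative):
.Bd -literal
# zpool scrub -p tank
# zpool scrub tank
.Ed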
.It Xo
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Xc
Sets the given property on the specified pool.
See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
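.Pp
For example, automatic device replacement can be enabled on a pool (the pool
name is illustrative):
.Bd -literal
# zpool set autoreplace=on tank
.Ed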
.It Xo
.Nm
.Cm split
.Op Fl n
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Xc
Splits devices off
.Ar pool
creating
.Ar newpool .
All vdevs in
.Ar pool
must be mirrors.
At the time of the split,
.Ar newpool
will be a replica of
.Ar pool .
.Bl -tag -width Ds
.It Fl n
Do a dry run; do not actually perform the split.
Print out the expected configuration of
.Ar newpool .
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
.Ar newpool .
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Set
.Sy altroot
for
.Ar newpool
to
.Ar root
and automatically import it.
.El
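.Pp
For example, a mirrored pool can be split, with the new pool imported under an
alternate root (the names are illustrative):
.Bd -literal
# zpool split -R /mnt tank tank2
.Ed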
.It Xo
.Nm
.Cm status
.Op Fl Dvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools.
If no
Specify
.Fl u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Fl d
for standard date format.
See
.Xr date 1 .
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number.
These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool.
Once this is done, the pool will no longer be accessible on systems that do not
support feature flags.
See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do not
support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
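.Pp
For example, a pool can be upgraded to a specific legacy version without
enabling feature flags (the pool name and version are illustrative):
.Bd -literal
# zpool upgrade -V 28 tank
.Ed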
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
.Bd -literal
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
  c4d0 c5d0
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache c2d0 c3d0
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
   pool: tank
  state: ONLINE
  scrub: none requested
 config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             c6t0d0  ONLINE       0     0     0
             c6t1d0  ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             c6t2d0  ONLINE       0     0     0
             c6t3d0  ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             c4t0d0  ONLINE       0     0     0
             c4t1d0  ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
This pool consists of a single raidz vdev in which one of the devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G   48%         -
    c1t1d0      -      -      -     -         -
    c1t2d0      -      -      -     -       10G
    c1t3d0      -      -      -     -         -
.Ed
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
28 .Dd December 6, 2017
29 .Dt ZPOOL 1M
30 .Os
31 .Sh NAME
32 .Nm zpool
33 .Nd configure ZFS storage pools
34 .Sh SYNOPSIS
35 .Nm
36 .Fl \?
37 .Nm
38 .Cm add
39 .Op Fl fn
40 .Ar pool vdev Ns ...
41 .Nm
42 .Cm attach
43 .Op Fl f
47 .Ar pool
48 .Op Ar device
49 .Nm
50 .Cm create
51 .Op Fl dfn
52 .Op Fl B
53 .Op Fl m Ar mountpoint
54 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
55 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
56 .Op Fl R Ar root
57 .Ar pool vdev Ns ...
58 .Nm
59 .Cm destroy
60 .Op Fl f
61 .Ar pool
62 .Nm
63 .Cm detach
64 .Ar pool device
65 .Nm
66 .Cm export
67 .Op Fl cfF
68 .Op Fl t Ar numthreads
69 .Ar pool Ns ...
70 .Nm
71 .Cm get
72 .Op Fl Hp
73 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
74 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
75 .Ar pool Ns ...
76 .Nm
77 .Cm history
78 .Op Fl il
79 .Oo Ar pool Oc Ns ...
80 .Nm
81 .Cm import
82 .Op Fl D
83 .Op Fl d Ar dir
84 .Nm
85 .Cm import
86 .Fl a
87 .Op Fl DfmN
88 .Op Fl F Op Fl n
89 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
90 .Op Fl o Ar mntopts
91 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
92 .Op Fl R Ar root
93 .Op Fl t Ar numthreads
94 .Nm
95 .Cm import
96 .Op Fl Dfm
97 .Op Fl F Op Fl n
98 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
99 .Op Fl o Ar mntopts
100 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
101 .Op Fl R Ar root
102 .Op Fl t Ar numthreads
103 .Ar pool Ns | Ns Ar id
104 .Op Ar newpool
105 .Nm
106 .Cm iostat
107 .Op Fl v
108 .Op Fl T Sy u Ns | Ns Sy d
109 .Oo Ar pool Oc Ns ...
110 .Op Ar interval Op Ar count
111 .Nm
112 .Cm labelclear
113 .Op Fl f
114 .Ar device
115 .Nm
116 .Cm list
117 .Op Fl Hpv
118 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
119 .Op Fl T Sy u Ns | Ns Sy d
120 .Oo Ar pool Oc Ns ...
121 .Op Ar interval Op Ar count
122 .Nm
123 .Cm offline
124 .Op Fl t
125 .Ar pool Ar device Ns ...
126 .Nm
127 .Cm online
128 .Op Fl e
129 .Ar pool Ar device Ns ...
130 .Nm
131 .Cm reguid
132 .Ar pool
133 .Nm
134 .Cm reopen
135 .Ar pool
136 .Nm
137 .Cm remove
138 .Ar pool Ar device Ns ...
139 .Nm
140 .Cm replace
141 .Op Fl f
142 .Ar pool Ar device Op Ar new_device
143 .Nm
144 .Cm scrub
145 .Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
146 .Ar pool Ns ...
147 .Nm
148 .Cm set
149 .Ar property Ns = Ns Ar value
150 .Ar pool
151 .Nm
152 .Cm split
153 .Op Fl n
154 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
155 .Op Fl R Ar root
156 .Ar pool newpool
157 .Nm
158 .Cm status
159 .Op Fl Dvx
160 .Op Fl T Sy u Ns | Ns Sy d
161 .Oo Ar pool Oc Ns ...
162 .Op Ar interval Op Ar count
163 .Nm
164 .Cm trim
165 .Op Fl r Ar rate Ns | Ns Fl s
166 .Ar pool Ns ...
167 .Nm
168 .Cm upgrade
169 .Nm
170 .Cm upgrade
171 .Fl v
172 .Nm
173 .Cm upgrade
174 .Op Fl V Ar version
175 .Fl a Ns | Ns Ar pool Ns ...
176 .Nm
177 .Cm vdev-get
178 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
179 .Ar pool
180 .Ar vdev-name Ns | Ns Ar vdev-guid
181 .Nm
182 .Cm vdev-set
183 .Ar property Ns = Ns Ar value
184 .Ar pool
185 .Ar vdev-name Ns | Ns Ar vdev-guid
186 .Sh DESCRIPTION
187 The
188 .Nm
189 command configures ZFS storage pools.
190 A storage pool is a collection of devices that provides physical storage and
191 data replication for ZFS datasets.
192 All datasets within a storage pool share the same space.
193 See
194 .Xr zfs 1M
195 for information on managing datasets.
196 .Ss Virtual Devices (vdevs)
197 A "virtual device" describes a single device or a collection of devices
198 organized according to certain performance and fault characteristics.
199 The following virtual devices are supported:
200 .Bl -tag -width Ds
201 .It Sy disk
202 A block device, typically located under
203 .Pa /dev/dsk .
204 ZFS can use individual slices or partitions, though the recommended mode of
205 operation is to use whole disks.
395 Spares can be shared across multiple pools, and can be added with the
396 .Nm zpool Cm add
397 command and removed with the
398 .Nm zpool Cm remove
399 command.
400 Once a spare replacement is initiated, a new
401 .Sy spare
402 vdev is created within the configuration that will remain there until the
403 original device is replaced.
404 At this point, the hot spare becomes available again if another device fails.
405 .Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to data
corruption.
409 .Pp
410 An in-progress spare replacement can be cancelled by detaching the hot spare.
411 If the original faulted device is detached, then the hot spare assumes its
412 place in the configuration, and is removed from the spare list of all active
413 pools.
414 .Pp
See the
.Sy sparegroup
vdev property in the
.Sx Device Properties
section for information on how to control spare selection.
420 .Pp
421 Spares cannot replace log devices.
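.Pp
For example, assuming a pool named
.Em tank
and a hypothetical device
.Pa c2t3d0 ,
a hot spare can be added and later removed as follows:
.Bd -literal
# zpool add tank spare c2t3d0
# zpool remove tank c2t3d0
.Ed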
422 .Ss Intent Log
423 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
424 transactions.
425 For instance, databases often require their transactions to be on stable storage
426 devices when returning from a system call.
427 NFS and other applications can also use
428 .Xr fsync 3C
429 to ensure data stability.
430 By default, the intent log is allocated from blocks within the main pool.
431 However, it might be possible to get better performance using separate intent
432 log devices such as NVRAM or a dedicated disk.
433 For example:
434 .Bd -literal
435 # zpool create pool c0d0 c1d0 log c2d0
436 .Ed
437 .Pp
438 Multiple log devices can also be specified, and they can be mirrored.
439 See the
440 .Sx EXAMPLES
441 section for an example of mirroring multiple log devices.
442 .Pp
443 Log devices can be added, replaced, attached, detached, and imported and
444 exported as part of the larger pool.
445 Mirrored log devices can be removed by specifying the top-level mirror for the
446 log.
447 .Ss Cache Devices
448 Devices can be added to a storage pool as
449 .Qq cache devices .
450 These devices provide an additional layer of caching between main memory and
451 disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
455 Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
457 .Pp
458 To create a pool with cache devices, specify a
459 .Sy cache
460 vdev with any number of devices.
461 For example:
462 .Bd -literal
463 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
464 .Ed
465 .Pp
466 Cache devices cannot be mirrored or part of a raidz configuration.
467 If a read error is encountered on a cache device, that read I/O is reissued to
468 the original storage pool device, which might be part of a mirrored or raidz
469 configuration.
470 .Pp
471 The content of the cache devices is considered volatile, as is the case with
472 other system caches.
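.Pp
Cache devices can also be added to or removed from an existing pool.
For example, assuming a pool named
.Em tank
and a hypothetical device
.Pa c2d0 :
.Bd -literal
# zpool add tank cache c2d0
# zpool remove tank c2d0
.Ed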
473 .Ss Pool Properties
474 Each pool has several properties associated with it.
475 Some properties are read-only statistics while others are configurable and
476 change the behavior of the pool.
477 .Pp
478 The following are read-only properties:
479 .Bl -tag -width Ds
480 .It Cm allocated
481 Amount of storage space used within the pool.
482 .It Sy bootsize
483 The size of the system boot partition.
This property can only be set at pool creation time and is read-only once the
pool is created.
486 Setting this property implies using the
487 .Fl B
488 option.
489 .It Sy capacity
490 Percentage of pool space used.
491 This property can also be referred to by its shortened column name,
492 .Sy cap .
493 .It Sy ddt_capped Ns = Ns Sy on Ns | Ns Sy off
When
.Sy ddt_capped
is
.Sy on ,
this indicates that DDT growth has been stopped.
New unique writes will not be deduplicated, in order to prevent further DDT
growth.
500 .It Sy expandsize
501 Amount of uninitialized space within the pool or device that can be used to
502 increase the total capacity of the pool.
503 Uninitialized space consists of any space on an EFI labeled vdev which has not
504 been brought online
.Po e.g., using
506 .Nm zpool Cm online Fl e
507 .Pc .
508 This space occurs when a LUN is dynamically expanded.
509 .It Sy fragmentation
510 The amount of fragmentation in the pool.
511 .It Sy free
512 The amount of free space available in the pool.
513 .It Sy freeing
514 .Sy freeing
515 is the amount of pool space remaining to be reclaimed.
516 After a file, dataset or snapshot is destroyed, the space it was using is
517 returned to the pool asynchronously.
518 Over time
519 .Sy freeing
520 will decrease while
521 .Sy free
522 increases.
523 .It Sy health
524 The current health of the pool.
525 Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
527 .It Sy guid
528 A unique identifier for the pool.
529 .It Sy size
530 Total size of the storage pool.
531 .It Sy unsupported@ Ns Em feature_guid
532 Information about unsupported features that are enabled on the pool.
533 See
534 .Xr zpool-features 5
535 for details.
536 .El
537 .Pp
593 the pool.
594 The default behavior is
595 .Sy off .
596 This property can also be referred to by its shortened column name,
597 .Sy expand .
598 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
599 Controls automatic device replacement.
600 If set to
601 .Sy off ,
602 device replacement must be initiated by the administrator by using the
603 .Nm zpool Cm replace
604 command.
605 If set to
606 .Sy on ,
any new device found in the same physical location as a device that previously
belonged to the pool is automatically formatted and replaced.
609 The default behavior is
610 .Sy off .
611 This property can also be referred to by its shortened column name,
612 .Sy replace .
613 .It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
614 When set to
615 .Sy on ,
616 while deleting data, ZFS will inform the underlying vdevs of any blocks that
617 have been marked as freed.
618 This allows thinly provisioned vdevs to reclaim unused blocks.
619 Currently, this feature supports sending SCSI UNMAP commands to SCSI and SAS
620 disk vdevs, and using file hole punching on file-backed vdevs.
621 SATA TRIM is currently not implemented.
622 The default setting for this property is
623 .Sy off .
624 .Pp
625 Please note that automatic trimming of data blocks can put significant stress on
626 the underlying storage devices if they do not handle these commands in a
627 background, low-priority manner.
628 In that case, it may be possible to achieve most of the benefits of trimming
629 free space on the pool by running an on-demand
630 .Pq manual
631 trim every once in a while during a maintenance window using the
632 .Nm zpool Cm trim
633 command.
634 .Pp
Automatic trim does not reclaim blocks immediately after a delete.
636 Instead, it waits approximately 32-64 TXGs
637 .Po or as defined by the
638 .Sy zfs_txgs_per_trim
639 tunable
640 .Pc
641 to allow for more efficient aggregation of smaller portions of free space into
642 fewer larger regions, as well as to allow for longer pool corruption recovery
643 via
644 .Nm zpool Cm import Fl F .
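.Pp
For example, a manual trim of a hypothetical pool named
.Em tank
can be started during a maintenance window with:
.Bd -literal
# zpool trim tank
.Ed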
645 .It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
646 Identifies the default bootable dataset for the root pool.
647 This property is expected to be set mainly by the installation and upgrade
648 programs.
649 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached.
651 Discovering all pools on system startup requires a cached copy of the
652 configuration data that is stored on the root file system.
653 All pools in this cache are automatically imported when the system boots.
654 Some environments, such as install and clustering, need to cache this
655 information in a different location so that pools are not automatically
656 imported.
657 Setting this property caches the pool configuration in a different location that
658 can later be imported with
659 .Nm zpool Cm import Fl c .
660 Setting it to the special value
661 .Sy none
662 creates a temporary pool that is never cached, and the special value
663 .Qq
664 .Pq empty string
703 .It Sy continue
704 Returns
705 .Er EIO
706 to any new write I/O requests but allows reads to any of the remaining healthy
707 devices.
708 Any write requests that have yet to be committed to disk would be blocked.
709 .It Sy panic
710 Prints out a message to the console and generates a system crash dump.
711 .El
712 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
713 The value of this property is the current state of
714 .Ar feature_name .
715 The only valid value when setting this property is
716 .Sy enabled
717 which moves
718 .Ar feature_name
719 to the enabled state.
720 See
721 .Xr zpool-features 5
722 for details on feature states.
723 .It Sy forcetrim Ns = Ns Sy on Ns | Ns Sy off
724 Controls whether device support is taken into consideration when issuing TRIM
725 commands to the underlying vdevs of the pool.
726 Normally, both automatic trim and on-demand
727 .Pq manual
728 trim only issue TRIM commands if a vdev indicates support for it.
729 Setting the
730 .Sy forcetrim
731 property to
732 .Sy on
733 will force ZFS to issue TRIMs even if it thinks a device does not support it.
734 The default value is
735 .Sy off .
736 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
737 Controls whether information about snapshots associated with this pool is
738 output when
739 .Nm zfs Cm list
740 is run without the
741 .Fl t
742 option.
743 The default value is
744 .Sy off .
745 This property can also be referred to by its shortened name,
746 .Sy listsnaps .
747 .It Sy scrubprio Ns = Ns Ar 0-100 Ns
748 Sets the priority of scrub I/O for this pool.
749 This is a number from 0 to 100, higher numbers meaning a higher priority
750 and thus more bandwidth allocated to scrub I/O, provided there is other
751 I/O competing for bandwidth.
752 If no other I/O is competing for bandwidth, scrub is allowed to consume
753 as much bandwidth as the pool is capable of providing.
754 A priority of
755 .Ar 100
756 means that scrub I/O has equal priority to any other user-generated I/O.
757 The value
758 .Ar 0
is special, because it turns off per-pool scrub priority control.
760 In that case, scrub I/O priority is determined by the
761 .Sy zfs_vdev_scrub_min_active
762 and
763 .Sy zfs_vdev_scrub_max_active
764 tunables.
765 The default value is
766 .Ar 5 .
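.Pp
For example, assuming a pool named
.Em tank ,
the scrub priority can be raised with:
.Bd -literal
# zpool set scrubprio=20 tank
.Ed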
767 .It Sy resilverprio Ns = Ns Ar 0-100 Ns
768 Same as the
769 .Sy scrubprio
770 property, but controls the priority for resilver I/O.
771 The default value is
772 .Ar 10 .
773 When set to
774 .Ar 0
775 the global tunables used for queue sizing are
776 .Sy zfs_vdev_resilver_min_active
777 and
778 .Sy zfs_vdev_resilver_max_active .
779 .It Sy version Ns = Ns Ar version
780 The current on-disk version of the pool.
781 This can be increased, but never decreased.
782 The preferred method of updating pools is with the
783 .Nm zpool Cm upgrade
784 command, though this property can be used when a specific version is needed for
785 backwards compatibility.
786 Once feature flags are enabled on a pool this property will no longer have a
787 value.
788 .El
789 .Ss Device Properties
790 Each device can have several properties associated with it.
These properties override global tunables and are designed to provide more
792 control over the operational parameters of this specific device, as well as to
793 help manage this device.
794 .Pp
795 The
796 .Sy cos
797 device property can reference a CoS property descriptor by name, in which case,
798 the values of device properties are determined according to the following rule:
799 the device settings override CoS settings, which in turn, override the global
800 tunables.
801 .Pp
802 The following device properties are available:
803 .Bl -tag -width Ds
804 .It Sy cos Ns = Ns Ar cos-name
805 This property indicates whether the device is associated with a CoS property
806 descriptor object.
807 If so, the properties from the CoS descriptor that are not explicitly overridden
808 by the device properties are in effect for this device.
809 .It Sy l2arc_ddt Ns = Ns Sy on Ns | Ns Sy off
810 This property is meaningful for L2ARC devices.
811 If this property is turned
.Sy on ,
ZFS will dedicate the L2ARC device to caching deduplication table
814 .Pq DDT
815 buffers only.
816 .It Sy prefread Ns = Ns Sy 1 Ns .. Ns Sy 100
817 This property is meaningful for devices that belong to a mirror.
818 The property determines the preference that is given to the device when reading
819 from the mirror.
820 The ratio of the value to the sum of the values of this property for all the
821 devices in the mirror determines the relative frequency
822 .Po which also is considered
823 .Qq probability
824 .Pc
825 of reading from this specific device.
826 .It Sy sparegroup Ns = Ns Ar group-name
827 This property indicates whether the device is a part of a spare device group.
828 Devices in the pool
829 .Pq including spares
830 can be labeled with strings that are meaningful in the context of the management
831 workflow in effect.
832 When a failed device is automatically replaced by spares, the spares whose
833 .Sy sparegroup
property matches the failed device's property are used first.
835 .It Xo
836 .Bro Sy read Ns | Ns Sy aread Ns | Ns Sy write Ns | Ns
837 .Sy awrite Ns | Ns Sy scrub Ns | Ns Sy resilver Brc Ns _ Ns
838 .Bro Sy minactive Ns | Ns Sy maxactive Brc Ns = Ns
839 .Sy 1 Ns .. Ns Sy 1000
840 .Xc
These properties define the minimum/maximum number of outstanding active
requests for the queueable classes of I/O requests as defined by the
ZFS I/O scheduler.
The classes include read, asynchronous read, write, asynchronous write, scrub,
and resilver.
846 .El
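.Pp
Device properties are read and written with the
.Nm zpool Cm vdev-get
and
.Nm zpool Cm vdev-set
subcommands.
For example, assuming a pool named
.Em tank
and a hypothetical device
.Pa c2t0d0 :
.Bd -literal
# zpool vdev-set sparegroup=rack1 tank c2t0d0
# zpool vdev-get all tank c2t0d0
.Ed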
847 .Ss Subcommands
848 All subcommands that modify state are logged persistently to the pool in their
849 original form.
850 .Pp
851 The
852 .Nm
853 command provides subcommands to create and destroy storage pools, add capacity
854 to storage pools, and provide information about the storage pools.
855 The following subcommands are supported:
856 .Bl -tag -width Ds
857 .It Xo
858 .Nm
859 .Fl \?
860 .Xc
861 Displays a help message.
862 .It Xo
863 .Nm
864 .Cm add
865 .Op Fl fn
866 .Ar pool vdev Ns ...
1033 .Pa /pool
1034 or
1035 .Pa altroot/pool
1036 if
1037 .Ar altroot
1038 is specified.
1039 The mount point must be an absolute path,
1040 .Sy legacy ,
1041 or
1042 .Sy none .
1043 For more information on dataset mount points, see
1044 .Xr zfs 1M .
1045 .It Fl n
1046 Displays the configuration that would be used without actually creating the
1047 pool.
1048 The actual pool creation can still fail due to insufficient privileges or
1049 device sharing.
1050 .It Fl o Ar property Ns = Ns Ar value
1051 Sets the given pool properties.
1052 See the
1053 .Sx Pool Properties
1054 section for a list of valid properties that can be set.
1055 .It Fl O Ar file-system-property Ns = Ns Ar value
1056 Sets the given file system properties in the root file system of the pool.
1057 See the
1058 .Sx Properties
1059 section of
1060 .Xr zfs 1M
1061 for a list of valid properties that can be set.
1062 .It Fl R Ar root
1063 Equivalent to
1064 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1065 .El
1066 .It Xo
1067 .Nm
1068 .Cm destroy
1069 .Op Fl f
1070 .Ar pool
1071 .Xc
1072 Destroys the given pool, freeing up any devices for other use.
1073 This command tries to unmount any active datasets before destroying the pool.
1074 .Bl -tag -width Ds
1075 .It Fl f
1076 Forces any active datasets contained within the pool to be unmounted.
1077 .El
1078 .It Xo
1079 .Nm
1080 .Cm detach
1081 .Ar pool device
1082 .Xc
1083 Detaches
1084 .Ar device
1085 from a mirror.
1086 The operation is refused if there are no other valid replicas of the data.
1087 .It Xo
1088 .Nm
1089 .Cm export
1090 .Op Fl cfF
1091 .Op Fl t Ar numthreads
1092 .Ar pool Ns ...
1093 .Xc
1094 Exports the given pools from the system.
1095 All devices are marked as exported, but are still considered in use by other
1096 subsystems.
1097 The devices can be moved between systems
1098 .Pq even those of different endianness
1099 and imported as long as a sufficient number of devices are present.
1100 .Pp
1101 Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
1104 .Pp
1105 For pools to be portable, you must give the
1106 .Nm
1107 command whole disks, not just slices, so that ZFS can label the disks with
1108 portable EFI labels.
1109 Otherwise, disk drivers on platforms of different endianness will not recognize
1110 the disks.
1111 .Bl -tag -width Ds
1112 .It Fl c
Keep the configuration information of the exported pool in the cache file.
1114 .It Fl f
1115 Forcefully unmount all datasets, using the
1116 .Nm unmount Fl f
1117 command.
1118 .Pp
1119 This command will forcefully export the pool even if it has a shared spare that
1120 is currently being used.
1121 This may lead to potential data corruption.
1122 .It Fl F
1123 Do not update device labels or cache file with new configuration.
1124 .It Fl t Ar numthreads
1125 Unmount datasets in parallel using up to
1126 .Ar numthreads
1127 threads.
1128 .El
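.Pp
For example, assuming a pool named
.Em tank ,
the pool can be exported while unmounting its datasets in parallel:
.Bd -literal
# zpool export -t 8 tank
.Ed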
1129 .It Xo
1130 .Nm
1131 .Cm get
1132 .Op Fl Hp
1133 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1134 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1135 .Ar pool Ns ...
1136 .Xc
1137 Retrieves the given list of properties
1138 .Po
1139 or all properties if
1140 .Sy all
1141 is used
1142 .Pc
1143 for the specified storage pool(s).
1144 These properties are displayed with the following fields:
1145 .Bd -literal
1146 name Name of storage pool
1147 property Property name
1148 value Property value
1149 source Property source, either 'default' or 'local'.
1150 .Ed
1151 .Pp
1152 See the
1153 .Sx Pool Properties
1154 section for more information on the available pool properties.
1155 .Bl -tag -width Ds
1156 .It Fl H
1157 Scripted mode.
1158 Do not display headers, and separate fields by a single tab instead of arbitrary
1159 space.
1160 .It Fl o Ar field
1161 A comma-separated list of columns to display.
1162 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1163 is the default value.
1164 .It Fl p
1165 Display numbers in parsable (exact) values.
1166 .El
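.Pp
For example, assuming a pool named
.Em tank ,
the pool capacity can be retrieved in a script-friendly form with:
.Bd -literal
# zpool get -H -o value capacity tank
.Ed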
1167 .It Xo
1168 .Nm
1169 .Cm history
1170 .Op Fl il
1171 .Oo Ar pool Oc Ns ...
1172 .Xc
1173 Displays the command history of the specified pool(s) or all pools if no pool is
1284 .It Fl n
1285 Used with the
1286 .Fl F
1287 recovery option.
1288 Determines whether a non-importable pool can be made importable again, but does
1289 not actually perform the pool recovery.
1290 For more details about pool recovery mode, see the
1291 .Fl F
1292 option, above.
1293 .It Fl N
1294 Import the pool without mounting any file systems.
1295 .It Fl o Ar mntopts
1296 Comma-separated list of mount options to use when mounting datasets within the
1297 pool.
1298 See
1299 .Xr zfs 1M
1300 for a description of dataset properties and mount options.
1301 .It Fl o Ar property Ns = Ns Ar value
1302 Sets the specified property on the imported pool.
1303 See the
1304 .Sx Pool Properties
1305 section for more information on the available pool properties.
1306 .It Fl R Ar root
1307 Sets the
1308 .Sy cachefile
1309 property to
1310 .Sy none
1311 and the
1312 .Sy altroot
1313 property to
1314 .Ar root .
1315 .El
1316 .It Xo
1317 .Nm
1318 .Cm import
1319 .Op Fl Dfm
1320 .Op Fl F Op Fl n
1321 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1322 .Op Fl o Ar mntopts
1323 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1324 .Op Fl R Ar root
1378 Allows a pool to import when there is a missing log device.
1379 Recent transactions can be lost because the log device will be discarded.
1380 .It Fl n
1381 Used with the
1382 .Fl F
1383 recovery option.
1384 Determines whether a non-importable pool can be made importable again, but does
1385 not actually perform the pool recovery.
1386 For more details about pool recovery mode, see the
1387 .Fl F
1388 option, above.
1389 .It Fl o Ar mntopts
1390 Comma-separated list of mount options to use when mounting datasets within the
1391 pool.
1392 See
1393 .Xr zfs 1M
1394 for a description of dataset properties and mount options.
1395 .It Fl o Ar property Ns = Ns Ar value
1396 Sets the specified property on the imported pool.
1397 See the
1398 .Sx Pool Properties
1399 section for more information on the available pool properties.
1400 .It Fl R Ar root
1401 Sets the
1402 .Sy cachefile
1403 property to
1404 .Sy none
1405 and the
1406 .Sy altroot
1407 property to
1408 .Ar root .
1409 .It Fl t Ar numthreads
1410 Mount datasets in parallel using up to
1411 .Ar numthreads
1412 threads.
1413 .El
1414 .It Xo
1415 .Nm
1416 .Cm iostat
1417 .Op Fl v
1418 .Op Fl T Sy u Ns | Ns Sy d
1419 .Oo Ar pool Oc Ns ...
1420 .Op Ar interval Op Ar count
1421 .Xc
1422 Displays I/O statistics for the given pools.
1423 When given an
1424 .Ar interval ,
1425 the statistics are printed every
1426 .Ar interval
1427 seconds until ^C is pressed.
1428 If no
1429 .Ar pool Ns s
are specified, statistics for every pool in the system are shown.
1431 If
1432 .Ar count
1479 .Ar pool Ns s
1480 are specified, all pools in the system are listed.
1481 When given an
1482 .Ar interval ,
1483 the information is printed every
1484 .Ar interval
1485 seconds until ^C is pressed.
1486 If
1487 .Ar count
1488 is specified, the command exits after
1489 .Ar count
1490 reports are printed.
1491 .Bl -tag -width Ds
1492 .It Fl H
1493 Scripted mode.
1494 Do not display headers, and separate fields by a single tab instead of arbitrary
1495 space.
1496 .It Fl o Ar property
1497 Comma-separated list of properties to display.
1498 See the
1499 .Sx Pool Properties
1500 section for a list of valid properties.
1501 The default list is
1502 .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1503 .Cm dedupratio , health , altroot .
1504 .It Fl p
1505 Display numbers in parsable
1506 .Pq exact
1507 values.
1508 .It Fl T Sy u Ns | Ns Sy d
1509 Display a time stamp.
1510 Specify
1511 .Fl u
1512 for a printed representation of the internal representation of time.
1513 See
1514 .Xr time 2 .
1515 Specify
1516 .Fl d
1517 for standard date format.
1518 See
1519 .Xr date 1 .
If the device is part of a mirror or raidz, then all devices must be expanded
1553 before the new space will become available to the pool.
1554 .El
1555 .It Xo
1556 .Nm
1557 .Cm reguid
1558 .Ar pool
1559 .Xc
1560 Generates a new unique identifier for the pool.
1561 You must ensure that all devices in this pool are online and healthy before
1562 performing this action.
1563 .It Xo
1564 .Nm
1565 .Cm reopen
1566 .Ar pool
1567 .Xc
Reopens all the vdevs associated with the pool.
1569 .It Xo
1570 .Nm
1571 .Cm remove
1572 .Ar pool Ar device Ns ...
1573 .Xc
1574 Removes the specified device from the pool.
This command currently only supports removing hot spares, cache, log, and
special devices.
1577 A mirrored log device can be removed by specifying the top-level mirror for the
1578 log.
1579 Non-log devices that are part of a mirrored configuration can be removed using
1580 the
1581 .Nm zpool Cm detach
1582 command.
1583 Non-redundant and raidz devices cannot be removed from a pool.
1584 .It Xo
1585 .Nm
1586 .Cm replace
1587 .Op Fl f
1588 .Ar pool Ar device Op Ar new_device
1589 .Xc
1590 Replaces
1591 .Ar old_device
1592 with
1593 .Ar new_device .
1594 This is equivalent to attaching
1595 .Ar new_device ,
1596 waiting for it to resilver, and then detaching
1597 .Ar old_device .
1598 .Pp
1599 The size of
1600 .Ar new_device
1601 must be greater than or equal to the minimum size of all the devices in a mirror
1602 or raidz configuration.
1603 .Pp
1604 .Ar new_device
1605 is required if the pool is not redundant.
1606 If
1607 .Ar new_device
1608 is not specified, it defaults to
1609 .Ar old_device .
1610 This form of replacement is useful after an existing disk has failed and has
1611 been physically replaced.
1612 In this case, the new disk may have the same
1613 .Pa /dev/dsk
1614 path as the old device, even though it is actually a different disk.
1615 ZFS recognizes this.
1616 .Bl -tag -width Ds
1617 .It Fl f
1618 Forces use of
1619 .Ar new_device ,
even if it appears to be in use.
1621 Not all devices can be overridden in this manner.
1622 .El
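.Pp
For example, assuming a pool named
.Em tank
in which the hypothetical disk
.Pa c1t1d0
has failed and been physically replaced in the same location:
.Bd -literal
# zpool replace tank c1t1d0
.Ed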
1623 .It Xo
1624 .Nm
1625 .Cm scrub
1626 .Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s
1627 .Ar pool Ns ...
1628 .Xc
1629 Begins a scrub or resumes a paused scrub.
1630 The scrub examines all data in the specified pools to verify that it checksums
1631 correctly.
1632 For replicated
1633 .Pq mirror or raidz
1634 devices, ZFS automatically repairs any damage discovered during the scrub.
1635 The
1636 .Nm zpool Cm status
1637 command reports the progress of the scrub and summarizes the results of the
1638 scrub upon completion.
1639 .Pp
1640 Scrubbing and resilvering are very similar operations.
1641 The difference is that resilvering only examines data that ZFS knows to be out
1642 of date
1643 .Po
1644 for example, when attaching a new device to a mirror or replacing an existing
1645 device
1646 .Pc ,
1647 whereas scrubbing examines all data to discover silent errors due to hardware
1648 faults or disk failure.
1649 .Pp
1650 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1651 one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
1655 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1656 resilver completes.
1657 .Pp
A partial scrub may be requested using the
.Fl m
or
.Fl M
option.
1663 .Bl -tag -width Ds
1664 .It Fl m
1665 Scrub only metadata blocks.
1666 .It Fl M
1667 Scrub only MOS blocks.
1668 .It Fl p
1669 Pause scrubbing.
1670 Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub, the
scrub remains paused until it is resumed, even after import.
Once resumed, the scrub picks up from the place where it was last checkpointed
to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
1678 .It Fl s
1679 Stop scrubbing.
1680 .El
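.Pp
For example, a scrub of a hypothetical pool named
.Em tank
can be started, paused, and later resumed as follows:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed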
1681 .It Xo
1682 .Nm
1683 .Cm set
1684 .Ar property Ns = Ns Ar value
1685 .Ar pool
1686 .Xc
1687 Sets the given property on the specified pool.
1688 See the
1689 .Sx Pool Properties
1690 section for more information on what properties can be set and acceptable
1691 values.
1692 .It Xo
1693 .Nm
1694 .Cm split
1695 .Op Fl n
1696 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1697 .Op Fl R Ar root
1698 .Ar pool newpool
1699 .Xc
1700 Splits devices off
1701 .Ar pool
1702 creating
1703 .Ar newpool .
1704 All vdevs in
1705 .Ar pool
1706 must be mirrors.
1707 At the time of the split,
1708 .Ar newpool
1709 will be a replica of
1710 .Ar pool .
1711 .Bl -tag -width Ds
1712 .It Fl n
1713 Do a dry run; do not actually perform the split.
1714 Print out the expected configuration of
1715 .Ar newpool .
1716 .It Fl o Ar property Ns = Ns Ar value
1717 Sets the specified property for
1718 .Ar newpool .
1719 See the
1720 .Sx Pool Properties
1721 section for more information on the available pool properties.
1722 .It Fl R Ar root
1723 Set
1724 .Sy altroot
1725 for
1726 .Ar newpool
1727 to
1728 .Ar root
1729 and automatically import it.
1730 .El
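.Pp
For example, to preview and then perform a split of a hypothetical mirrored
pool
.Em tank
into a new pool
.Em tank2 :
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed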
1731 .It Xo
1732 .Nm
1733 .Cm status
1734 .Op Fl Dvx
1735 .Op Fl T Sy u Ns | Ns Sy d
1736 .Oo Ar pool Oc Ns ...
1737 .Op Ar interval Op Ar count
1738 .Xc
1739 Displays the detailed health status for the given pools.
1740 If no
1741 pools are specified, then the status of each pool in the system is
1742 displayed.
1743 For more information on pool and device health, see the
1744 .Sx Device Failure and Recovery
1745 section.
1746 .Pp
1747 If a scrub or resilver is in progress, the command reports the percentage done
1748 and the estimated time to completion.
1749 Both of these are only approximate, because the amount of data in the pool and
1750 the other workloads on the system can change.
1751 .Bl -tag -width Ds
1752 .It Fl D
1753 Display a histogram of deduplication statistics, showing the allocated
1754 .Pq physically present on disk
1755 and referenced
1756 .Pq logically referenced in the pool
1757 block counts and sizes by reference count.
1758 .It Fl T Sy u Ns | Ns Sy d
1759 Display a time stamp.
1760 Specify
1761 .Fl u
1762 for a printed representation of the internal representation of time.
1763 See
1764 .Xr time 2 .
1765 Specify
1766 .Fl d
1767 for standard date format.
1768 See
1769 .Xr date 1 .
1770 .It Fl v
1771 Displays verbose data error information, printing out a complete list of all
1772 data errors since the last complete pool scrub.
1773 .It Fl x
1774 Only display status for pools that are exhibiting errors or are otherwise
1775 unavailable.
1776 Warnings about pools not using the latest on-disk format will not be included.
1777 .El
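.Pp
For example, to check whether any pool in the system is exhibiting errors or
is otherwise unavailable:
.Bd -literal
# zpool status -x
.Ed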
1778 .It Xo
1779 .Nm
1780 .Cm trim
1781 .Op Fl r Ar rate Ns | Ns Fl s
1782 .Ar pool Ns ...
1783 .Xc
1784 Initiates an on-demand TRIM operation on all of the free space of a pool.
1785 This informs the underlying storage devices of all of the blocks that the pool
1786 no longer considers allocated, thus allowing thinly provisioned storage devices
1787 to reclaim them.
1788 Note that this collects all space marked as
1789 .Qq freed
1790 in the pool immediately and does not wait for the
1791 .Sy zfs_txgs_per_trim
1792 delay as automatic TRIM does.
1793 Hence, this can limit pool corruption recovery options during and immediately
1794 following the on-demand TRIM to 1-2 TXGs into the past
1795 .Pq instead of the standard 32-64 of automatic TRIM .
1796 This approach, however, allows you to recover the maximum amount of free space
1797 from the pool immediately without having to wait.
1798 .Pp
1799 Also note that an on-demand TRIM operation can be initiated irrespective of the
1800 .Sy autotrim
1801 pool property setting.
1802 It does, however, respect the
1803 .Sy forcetrim
1804 pool property.
1805 .Pp
1806 An on-demand TRIM operation does not conflict with an ongoing scrub, but it can
1807 put significant I/O stress on the underlying vdevs.
1808 A resilver, however, automatically stops an on-demand TRIM operation.
1809 You can manually reinitiate the TRIM operation after the resilver has started,
1810 by simply reissuing the
1811 .Nm zpool Cm trim
1812 command.
1813 .Pp
1814 Adding a vdev during TRIM is supported, although the progress display in
1815 .Nm zpool Cm status
1816 might not be entirely accurate in that case
1817 .Pq TRIM will complete before reaching 100% .
1818 Removing or detaching a vdev will prematurely terminate an on-demand TRIM
1819 operation.
1820 .Bl -tag -width Ds
1821 .It Fl r Ar rate
1822 Controls the speed at which the TRIM operation progresses.
1823 Without this option, TRIM is executed in parallel on all top-level vdevs as
1824 quickly as possible.
1825 This option allows you to control how fast
1826 .Pq in bytes per second
1827 the TRIM is executed.
1828 This rate is applied on a per-vdev basis, i.e. every top-level vdev in the pool
1829 tries to match this speed.
1830 .Pp
1831 Due to limitations in how the algorithm is designed, TRIMs are executed in
1832 whole-metaslab increments.
1833 Each top-level vdev contains approximately 200 metaslabs, so a rate-limited
1834 TRIM progresses in steps: it TRIMs one metaslab completely, then waits long
1835 enough that, averaged over the whole device, the requested speed is met.
1836 .Pp
1837 When an on-demand TRIM operation is already in progress, this option changes its
1838 rate.
1839 To change a rate-limited TRIM to an unlimited one, simply execute the
1840 .Nm zpool Cm trim
1841 command without the
1842 .Fl r
1843 option.
1844 .It Fl s
1845 Stop trimming.
1846 If an on-demand TRIM operation is not ongoing at the moment, this does nothing
1847 and the command returns success.
1848 .El
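.Pp
For example, to start a TRIM of a hypothetical pool
.Em tank
limited to roughly 100 MiB per second per top-level vdev (the rate is given
in bytes per second), then lift the limit, and finally stop it:
.Bd -literal
# zpool trim -r 104857600 tank
# zpool trim tank
# zpool trim -s tank
.Ed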
1849 .It Xo
1850 .Nm
1851 .Cm upgrade
1852 .Xc
1853 Displays pools which do not have all supported features enabled and pools
1854 formatted using a legacy ZFS version number.
1855 These pools can continue to be used, but some features may not be available.
1856 Use
1857 .Nm zpool Cm upgrade Fl a
1858 to enable all features on all pools.
1859 .It Xo
1860 .Nm
1861 .Cm upgrade
1862 .Fl v
1863 .Xc
1864 Displays legacy ZFS versions supported by the current software.
1865 See
1866 .Xr zpool-features 5
1867 for a description of the feature flags supported by the current software.
1868 .It Xo
1869 .Nm
1870 .Cm upgrade
1871 .Op Fl V Ar version
1872 .Fl a Ns | Ns Ar pool Ns ...
1873 .Xc
1874 Enables all supported features on the given pool.
1875 Once this is done, the pool will no longer be accessible on systems that do not
1876 support feature flags.
1877 See
1878 .Xr zpool-features 5
1879 for details on compatibility with systems that support feature flags, but do not
1880 support all features enabled on the pool.
1881 .Bl -tag -width Ds
1882 .It Fl a
1883 Enables all supported features on all pools.
1884 .It Fl V Ar version
1885 Upgrade to the specified legacy version.
1886 If the
1887 .Fl V
1888 flag is specified, no features will be enabled on the pool.
1889 This option can only be used to increase the version number up to the last
1890 supported legacy version number.
1891 .El
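.Pp
For example, to upgrade a hypothetical pool
.Em tank
only to legacy version 28, without enabling any feature flags:
.Bd -literal
# zpool upgrade -V 28 tank
.Ed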
1892 .It Xo
1893 .Nm
1894 .Cm vdev-get
1895 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1896 .Ar pool
1897 .Ar vdev-name Ns | Ns Ar vdev-guid
1898 .Xc
1899 Retrieves the given list of vdev properties
1900 .Po or all properties if
1901 .Sy all
1902 is used
1903 .Pc
1904 for the specified vdev of the specified storage pool.
1905 These properties are displayed in the same manner as the pool properties.
1906 The operation is supported for leaf-level vdevs only.
1907 See the
1908 .Sx Device Properties
1909 section for more information on the available properties.
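.Pp
For example, to display all vdev properties of a hypothetical leaf device
.Em c0t0d0
in pool
.Em tank :
.Bd -literal
# zpool vdev-get all tank c0t0d0
.Ed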
1910 .It Xo
1911 .Nm
1912 .Cm vdev-set
1913 .Ar property Ns = Ns Ar value
1914 .Ar pool
1915 .Ar vdev-name Ns | Ns Ar vdev-guid
1916 .Xc
1917 Sets the given property on the specified device of the specified pool.
1918 If a top-level vdev is specified, the property is set on all its child devices.
1919 See the
1920 .Sx Device Properties
1921 section for more information on what properties can be set and accepted values.
1922 .El
1923 .Sh EXIT STATUS
1924 The following exit values are returned:
1925 .Bl -tag -width Ds
1926 .It Sy 0
1927 Successful completion.
1928 .It Sy 1
1929 An error occurred.
1930 .It Sy 2
1931 Invalid command line options were specified.
1932 .El
1933 .Sh EXAMPLES
1934 .Bl -tag -width Ds
1935 .It Sy Example 1 No Creating a RAID-Z Storage Pool
1936 The following command creates a pool with a single raidz root vdev that
1937 consists of six disks.
1938 .Bd -literal
1939 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1940 .Ed
1941 .It Sy Example 2 No Creating a Mirrored Storage Pool
2044 .Bd -literal
2045 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
2046 c4d0 c5d0
2047 .Ed
2048 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2049 The following command adds two disks for use as cache devices to a ZFS storage
2050 pool:
2051 .Bd -literal
2052 # zpool add pool cache c2d0 c3d0
2053 .Ed
2054 .Pp
2055 Once added, the cache devices gradually fill with content from main memory.
2056 Depending on the size of your cache devices, it could take over an hour for
2057 them to fill.
2058 Capacity and reads can be monitored using the
2059 .Cm iostat
2060 subcommand as follows:
2061 .Bd -literal
2062 # zpool iostat -v pool 5
2063 .Ed
2064 .It Sy Example 14 No Removing a Mirrored Log Device
2065 The following command removes the mirrored log device
2066 .Sy mirror-2 .
2067 Given this configuration:
2068 .Bd -literal
2069 pool: tank
2070 state: ONLINE
2071 scrub: none requested
2072 config:
2073
2074 NAME STATE READ WRITE CKSUM
2075 tank ONLINE 0 0 0
2076 mirror-0 ONLINE 0 0 0
2077 c6t0d0 ONLINE 0 0 0
2078 c6t1d0 ONLINE 0 0 0
2079 mirror-1 ONLINE 0 0 0
2080 c6t2d0 ONLINE 0 0 0
2081 c6t3d0 ONLINE 0 0 0
2082 logs
2083 mirror-2 ONLINE 0 0 0
2084 c4t0d0 ONLINE 0 0 0
2085 c4t1d0 ONLINE 0 0 0
2086 .Ed
2087 .Pp
2088 The command to remove the mirrored log
2089 .Sy mirror-2
2090 is:
2091 .Bd -literal
2092 # zpool remove tank mirror-2
2093 .Ed
2094 .It Sy Example 15 No Displaying expanded space on a device
2095 The following command displays the detailed information for the pool
2096 .Em data .
2097 This pool is composed of a single raidz vdev where one of its devices
2098 increased its capacity by 10 GB.
2099 In this example, the pool will not be able to utilize this extra capacity until
2100 all the devices under the raidz vdev have been expanded.
2101 .Bd -literal
2102 # zpool list -v data
2103 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
2104 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
2105 raidz1 23.9G 14.6G 9.30G 48% -
2106 c1t1d0 - - - - -
2107 c1t2d0 - - - - 10G
2108 c1t3d0 - - - - -
2109 .Ed
2110 .El
2111 .Sh INTERFACE STABILITY
2112 .Sy Evolving
2113 .Sh SEE ALSO