NAME
zpool — configure ZFS storage pools

SYNOPSIS
zpool add [-fn] pool vdev...
zpool attach [-f] pool device new_device
zpool clear pool [device]
zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-O file-system-property=value]... [-R root] pool vdev...
zpool destroy [-f] pool
zpool detach pool device
zpool export [-cfF] [-t numthreads] pool...
zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
zpool history [-il] [pool]...
zpool import [-D] [-d dir]
zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-t numthreads]
zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
zpool labelclear [-f] device
zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]... [interval [count]]
zpool offline [-t] pool device...
zpool online [-e] pool device...
zpool reguid pool
zpool reopen pool
zpool remove pool device...
zpool replace [-f] pool device [new_device]
zpool scrub [-m|-M|-p|-s] pool...
zpool set property=value pool
zpool split [-n] [-o property=value]... [-R root] pool newpool
zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
zpool trim [-r rate|-s] pool...
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version] -a|pool...
zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
zpool vdev-set property=value pool vdev-name|vdev-guid

DESCRIPTION
The
zpool command configures ZFS storage
pools. A storage pool is a collection of devices that provides physical
storage and data replication for ZFS datasets. All datasets within a storage
pool share the same space. See
zfs(1M) for
information on managing datasets.
A "virtual device" describes a single device or a collection of
devices organized according to certain performance and fault characteristics.
The following virtual devices are supported:
-
-
- disk
- A block device, typically located under
/dev/dsk. ZFS can use individual slices
or partitions, though the recommended mode of operation is to use whole
disks. A disk can be specified by a full path, or it can be a shorthand
name (the relative portion of the path under
/dev/dsk). A whole disk can be
specified by omitting the slice or partition designation. For example,
c0t0d0 is equivalent to
/dev/dsk/c0t0d0s2. When given a whole
disk, ZFS automatically labels the disk, if necessary.
file
A regular file. The use of files as a backing store is strongly
discouraged. It is designed primarily for experimental purposes, as the
fault tolerance of a file is only as good as the file system of which it
is a part. A file must be specified by a full path.
mirror
A mirror of two or more devices. Data is replicated in an identical
fashion across all components of a mirror. A mirror with N disks of size X
can hold X bytes and can withstand (N-1) devices failing before data
integrity is compromised.
raidz, raidz1, raidz2, raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5 “write hole” (in which data and parity
become inconsistent after a power loss). Data and parity is striped across
all disks within a raidz group.
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data. The raidz1 vdev type
specifies a single-parity raidz group; the
raidz2 vdev type specifies a double-parity
raidz group; and the raidz3 vdev type
specifies a triple-parity raidz group. The
raidz vdev type is an alias for
raidz1.
A raidz group with N disks of size X with P parity disks can hold
approximately (N-P)*X bytes and can withstand P device(s) failing before
data integrity is compromised. The minimum number of devices in a raidz
group is one more than the number of parity disks. The recommended number
is between 3 and 9 to help increase performance.
spare
A special pseudo-vdev which keeps track of available hot spares for a
pool. For more information, see the
Hot Spares section.
log
A separate intent log device. If more than one log device is specified,
then writes are load-balanced between devices. Log devices can be
mirrored. However, raidz vdev types are not supported for the intent log.
For more information, see the
Intent Log section.
cache
A device used to cache storage pool data. A cache device cannot be
configured as a mirror or raidz group. For more information, see the
Cache Devices
section.
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks. Mirrors of mirrors (or other combinations) are not
allowed.
A pool can have any number of virtual devices at the top of the configuration
(known as “root vdevs”). Data is dynamically distributed across
all top-level devices to balance data among devices. As new virtual devices
are added, ZFS automatically places data on the newly available devices.
Virtual devices are specified one at a time on the command line, separated by
whitespace. The keywords
mirror and
raidz are used to distinguish where a group ends
and another begins. For example, the following creates two root vdevs, each a
mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption. All metadata and data is checksummed, and ZFS automatically
repairs bad data from a good copy when corruption is detected.
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups. While ZFS supports
running in a non-redundant configuration, where each root vdev is simply a
disk or file, this is strongly discouraged. A single case of bit corruption
can render some or all of your data unavailable.
A pool's health status is described by one of three states: online, degraded, or
faulted. An online pool has all devices operating normally. A degraded pool is
one in which one or more devices have failed, but the data is still available
due to a redundant configuration. A faulted pool has corrupted metadata, or
one or more faulted devices, and insufficient replicas to continue
functioning.
The health of a top-level vdev, such as a mirror or raidz device, is potentially
impacted by the state of its associated vdevs, or component devices. A
top-level vdev or component device is in one of the following states:
DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline. Sufficient replicas exist to continue
functioning.
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning. The underlying
conditions are as follows:
- The number of checksum errors exceeds acceptable levels and the device
is degraded as an indication that something may be wrong. ZFS
continues to use the device as necessary.
- The number of I/O errors exceeds acceptable levels. The device could
not be marked as faulted because there are insufficient replicas to
continue functioning.
FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline. Insufficient replicas exist to continue
functioning.
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning. The underlying conditions are as
follows:
- The device could be opened, but the contents did not match expected
values.
- The number of I/O errors exceeds acceptable levels and the device is
faulted to prevent further use of the device.
OFFLINE
The device was explicitly taken offline by the
zpool
offline command.
ONLINE
The device is online and functioning.
REMOVED
The device was physically removed while the system was running. Device
removal detection is hardware-dependent and may not be supported on all
platforms.
UNAVAIL
The device could not be opened. If a pool is imported when a device was
unavailable, then the device will be identified by a unique identifier
instead of its path since the path was never correct in the first
place.
If a device is removed and later re-attached to the system, ZFS attempts to put
the device online automatically. Device attach detection is hardware-dependent
and might not be supported on all platforms.
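For example, a quick health check of all pools on a system can be run with the
zpool status -x subcommand described below; on a healthy system it typically
prints a single summary line:
# zpool status -x
all pools are healthy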
Hot Spares
ZFS allows devices to be associated with pools as “hot spares”.
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare. To create a pool with hot
spares, specify a
spare vdev with any number of
devices. For example,
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
Spares can be shared across multiple pools, and can be added with the
zpool
add command and removed with the
zpool
remove command. Once a spare replacement is
initiated, a new
spare vdev is created within the
configuration that will remain there until the original device is replaced. At
this point, the hot spare becomes available again if another device fails.
If a pool has a shared spare that is currently being used, the pool cannot be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
An in-progress spare replacement can be cancelled by detaching the hot spare. If
the original faulted device is detached, then the hot spare assumes its place
in the configuration, and is removed from the spare list of all active pools.
See the
sparegroup vdev property in the
Device Properties
section for information on how to control spare selection.
Spares cannot replace log devices.
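For example, a spare might be added to and later removed from an existing pool
with commands along these lines (the pool and disk names are illustrative):
# zpool add tank spare c2t3d0
# zpool remove tank c2t3d0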
Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions. For instance, databases often require their transactions to be
on stable storage devices when returning from a system call. NFS and other
applications can also use
fsync(3C) to ensure
data stability. By default, the intent log is allocated from blocks within the
main pool. However, it might be possible to get better performance using
separate intent log devices such as NVRAM or a dedicated disk. For example:
# zpool create pool c0d0 c1d0 log c2d0
Multiple log devices can also be specified, and they can be mirrored. See the
EXAMPLES section for an example
of mirroring multiple log devices.
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool. Mirrored log devices can be removed by
specifying the top-level mirror for the log.
Cache Devices
Devices can be added to a storage pool as “cache devices”. These
devices provide an additional layer of caching between main memory and disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media. Using cache devices provides
the greatest performance improvement for random read-workloads of mostly
static content.
To create a pool with cache devices, specify a
cache vdev with any number of devices. For
example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
Cache devices cannot be mirrored or part of a raidz configuration. If a read
error is encountered on a cache device, that read I/O is reissued to the
original storage pool device, which might be part of a mirrored or raidz
configuration.
The content of the cache devices is considered volatile, as is the case with
other system caches.
Pool Properties
Each pool has several properties associated with it. Some properties are
read-only statistics while others are configurable and change the behavior of
the pool.
The following are read-only properties:
allocated
Amount of storage space used within the pool.
bootsize
The size of the system boot partition. This property can only be set at
pool creation time and is read-only once the pool is created. Setting this
property implies using the
-B
option.
capacity
Percentage of pool space used. This property can also be referred to by
its shortened column name, cap.
ddt_capped=on|off
When ddt_capped is
on, DDT growth has been stopped. New unique
writes will not be deduplicated, to prevent further DDT growth.
expandsize
Amount of uninitialized space within the pool or device that can be used
to increase the total capacity of the pool. Uninitialized space consists
of any space on an EFI-labeled vdev which has not been brought online
(e.g., using
zpool
online
-e). This space occurs when a LUN is
dynamically expanded.
fragmentation
The amount of fragmentation in the pool.
free
The amount of free space available in the pool.
freeing
The amount of pool space remaining
to be reclaimed. After a file, dataset, or snapshot is destroyed, the space
it was using is returned to the pool asynchronously. Over time
freeing will decrease while
free increases.
health
The current health of the pool. Health can be one of
ONLINE,
DEGRADED,
FAULTED, OFFLINE,
REMOVED, or UNAVAIL.
guid
A unique identifier for the pool.
size
Total size of the storage pool.
unsupported@feature_guid
Information about unsupported features that are enabled on the pool. See
zpool-features(5) for details.
The space usage properties report actual physical space available to the storage
pool. The physical space can be different from the total amount of space that
any contained datasets can actually use. The amount of space used in a raidz
configuration depends on the characteristics of the data being written. In
addition, ZFS reserves some space for internal accounting that the
zfs(1M) command takes into account, but the
zpool command does not. For non-full pools
of a reasonable size, these effects should be invisible. For small pools, or
pools that are close to being completely full, these discrepancies may become
more noticeable.
The following property can be set at creation time and import time:
altroot
Alternate root directory. If set, this directory is prepended to any mount
points within the pool. This can be used when examining an unknown pool
where the mount points cannot be trusted, or in an alternate boot
environment, where the typical paths are not valid.
altroot is not a persistent property. It is
valid only while the system is up. Setting
altroot defaults to using
cachefile=none,
though this may be overridden using an explicit setting.
The following property can be set only at import time:
readonly=on|off
If set to on, the pool will be imported in
read-only mode. This property can also be referred to by its shortened
column name, rdonly.
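For example, a pool might be imported read-only with a command along these
lines (the pool name is illustrative):
# zpool import -o readonly=on tank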
The following properties can be set at creation time and import time, and later
changed with the
zpool
set command:
autoexpand=on|off
Controls automatic pool expansion when the underlying LUN is grown. If set
to on, the pool will be resized according to
the size of the expanded device. If the device is part of a mirror or
raidz then all devices within that mirror/raidz group must be expanded
before the new space is made available to the pool. The default behavior
is off. This property can also be referred to
by its shortened column name, expand.
autoreplace=on|off
Controls automatic device replacement. If set to
off, device replacement must be initiated by
the administrator by using the
zpool
replace command. If set to
on, any new device, found in the same
physical location as a device that previously belonged to the pool, is
automatically formatted and replaced. The default behavior is
off. This property can also be referred to by
its shortened column name, replace.
autotrim=on|off
When set to on, while deleting data, ZFS will
inform the underlying vdevs of any blocks that have been marked as freed.
This allows thinly provisioned vdevs to reclaim unused blocks. Currently,
this feature supports sending SCSI UNMAP commands to SCSI and SAS disk
vdevs, and using file hole punching on file-backed vdevs. SATA TRIM is
currently not implemented. The default setting for this property is
off.
Please note that automatic trimming of data blocks can put significant
stress on the underlying storage devices if they do not handle these
commands in a background, low-priority manner. In that case, it may be
possible to achieve most of the benefits of trimming free space on the
pool by running an on-demand (manual) trim every once in a while during a
maintenance window using the
zpool
trim command.
Automatic trim does not reclaim blocks after a delete immediately. Instead,
it waits approximately 32-64 TXGs (or as defined by the
zfs_txgs_per_trim tunable) to allow for more
efficient aggregation of smaller portions of free space into fewer larger
regions, as well as to allow for longer pool corruption recovery via
zpool
import
-F.
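As a sketch, automatic trimming could be enabled on an existing pool (the pool
name is illustrative) with:
# zpool set autotrim=on tank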
bootfs=pool/dataset
Identifies the default bootable dataset for the root pool. This property
is expected to be set mainly by the installation and upgrade
programs.
cachefile=path|none
Controls where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system. All pools in
this cache are automatically imported when the system boots. Some
environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported. Setting this property caches the pool configuration in a
different location that can later be imported with
zpool
import
-c. Setting it to the special value
none creates a temporary pool that is never
cached, and the special value “” (empty string) uses the
default location.
Multiple pools can share the same cache file. Because the kernel destroys
and recreates this file when pools are added and removed, care should be
taken when attempting to access this file. When the last pool using a
cachefile is exported or destroyed, the file
is removed.
comment=text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted. An
administrator can provide additional information about a pool using this
property.
dedupditto=number
Threshold for the number of block ditto copies. If the reference count for
a deduplicated block increases above this number, a new ditto copy of this
block is automatically stored. The default setting is
0 which causes no ditto copies to be created
for deduplicated blocks. The minimum legal nonzero setting is
100.
delegation=on|off
Controls whether a non-privileged user is granted access based on the
dataset permissions defined on the dataset. See
zfs(1M) for more information on ZFS delegated
administration.
failmode=wait|continue|panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the
underlying storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
wait
Blocks all I/O access until the device connectivity is recovered and
the errors are cleared. This is the default behavior.
continue
Returns
EIO to any new write I/O
requests but allows reads to any of the remaining healthy devices. Any
write requests that have yet to be committed to disk would be
blocked.
panic
Prints out a message to the console and generates a system crash
dump.
feature@feature_name=enabled
The value of this property is the current state of
feature_name. The only valid value when
setting this property is enabled which moves
feature_name to the enabled state. See
zpool-features(5) for details on feature
states.
forcetrim=on|off
Controls whether device support is taken into consideration when issuing
TRIM commands to the underlying vdevs of the pool. Normally, both
automatic trim and on-demand (manual) trim only issue TRIM commands if a
vdev indicates support for it. Setting the
forcetrim property to
on will force ZFS to issue TRIMs even if it
thinks a device does not support it. The default value is
off.
listsnapshots=on|off
Controls whether information about snapshots associated with this pool is
output when
zfs
list is run without the
-t option. The default value is
off. This property can also be referred to by
its shortened name, listsnaps.
scrubprio=0-100
Sets the priority of scrub I/O for this pool. This is a number from 0 to
100, higher numbers meaning a higher priority and thus more bandwidth
allocated to scrub I/O, provided there is other I/O competing for
bandwidth. If no other I/O is competing for bandwidth, scrub is allowed to
consume as much bandwidth as the pool is capable of providing. A priority
of 100 means that scrub I/O has equal
priority to any other user-generated I/O. The value
0 is special, because it turns off per-pool
scrub priority control. In that case, scrub I/O priority is determined by
the zfs_vdev_scrub_min_active and
zfs_vdev_scrub_max_active tunables. The
default value is 5.
resilverprio=0-100
Same as the scrubprio property, but controls
the priority for resilver I/O. The default value is
10. When set to
0, queue sizing is determined by the global
tunables zfs_vdev_resilver_min_active and
zfs_vdev_resilver_max_active.
version=version
The current on-disk version of the pool. This can be increased, but never
decreased. The preferred method of updating pools is with the
zpool
upgrade command, though this property
can be used when a specific version is needed for backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have
a value.
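As an illustration, a settable property can be changed and then read back with
the set and get subcommands described below (the pool name is hypothetical,
and the output will resemble the following):
# zpool set listsnapshots=on tank
# zpool get listsnapshots tank
NAME  PROPERTY       VALUE  SOURCE
tank  listsnapshots  on     local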
Device Properties
Each device can have several properties associated with it. These properties
override global tunables and are designed to provide more control over the
operational parameters of this specific device, as well as to help manage this
device.
The
cos device property can reference a CoS
property descriptor by name, in which case, the values of device properties
are determined according to the following rule: the device settings override
CoS settings, which in turn, override the global tunables.
The following device properties are available:
cos=cos-name
This property indicates whether the device is associated with a CoS
property descriptor object. If so, the properties from the CoS descriptor
that are not explicitly overridden by the device properties are in effect
for this device.
l2arc_ddt=on|off
This property is meaningful for L2ARC devices. If this property is turned
on, ZFS will dedicate the L2ARC device to
caching deduplication table (DDT) buffers only.
prefread=1..100
This property is meaningful for devices that belong to a mirror. The
property determines the preference that is given to the device when
reading from the mirror. The ratio of the value to the sum of the values
of this property for all the devices in the mirror determines the relative
frequency (which also is considered “probability”) of
reading from this specific device.
sparegroup=group-name
This property indicates whether the device is a part of a spare device
group. Devices in the pool (including spares) can be labeled with strings
that are meaningful in the context of the management workflow in effect.
When a failed device is automatically replaced by spares, the spares whose
sparegroup property matches the failed device's
property are used first.
{read|aread|write|awrite|scrub|resilver}_{minactive|maxactive}=1..1000
These properties define the minimum/maximum number of outstanding active
requests for the queueable classes of I/O requests as defined by the ZFS
I/O scheduler. The classes include read, asynchronous read, write,
asynchronous write, and scrub classes.
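For example, a device's properties might be inspected and adjusted with the
vdev-get and vdev-set subcommands described below (the pool, device, and group
names are illustrative):
# zpool vdev-get all tank c1t0d0
# zpool vdev-set sparegroup=shelf0 tank c1t0d0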
Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
The
zpool command provides subcommands to
create and destroy storage pools, add capacity to storage pools, and provide
information about the storage pools. The following subcommands are supported:
zpool -?
Displays a help message.
zpool add [-fn] pool vdev...
Adds the specified virtual devices to the given pool. The
vdev specification is described in the
Virtual Devices
section. The behavior of the
-f option,
and the device checks performed are described in the
zpool
create subcommand.
-f
Forces use of vdevs, even if they
appear in use or specify a conflicting replication level. Not all
devices can be overridden in this manner.
-n
Displays the configuration that would be used without actually adding
the vdevs. The actual addition
can still fail due to insufficient privileges or device sharing.
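For example, a dry run of adding a mirrored pair might look like this (pool
and device names are illustrative):
# zpool add -n tank mirror c2t0d0 c2t1d0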
zpool attach [-f] pool device new_device
Attaches new_device to the existing
device. The existing device cannot be
part of a raidz configuration. If device
is not currently part of a mirrored configuration,
device automatically transforms into a
two-way mirror of device and
new_device. If
device is part of a two-way mirror,
attaching new_device creates a three-way
mirror, and so on. In either case,
new_device begins to resilver
immediately.
-f
Forces use of new_device, even if it
appears to be in use. Not all devices can be overridden in this
manner.
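For example, a single disk might be converted into a two-way mirror (pool and
device names are illustrative):
# zpool attach tank c0t0d0 c0t1d0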
zpool clear pool [device]
Clears device errors in a pool. If no arguments are specified, all device
errors within the pool are cleared. If one or more devices are specified,
only those errors associated with the specified device or devices are
cleared.
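For example (pool and device names are illustrative):
# zpool clear tank
# zpool clear tank c0t0d0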
zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-O file-system-property=value]... [-R root] pool vdev...
Creates a new storage pool containing the virtual devices specified on the
command line. The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
(“_”), dash
(“-”), and period
(“.”). The pool names
mirror, raidz,
spare and log
are reserved, as are names beginning with the pattern
c[0-9]. The
vdev specification is described in the
Virtual Devices
section.
The command verifies that each device specified is accessible and not
currently in use by another subsystem. There are some uses, such as being
currently mounted, or specified as the dedicated dump device, that
prevent a device from ever being used by ZFS. Other uses, such as having
a preexisting UFS file system, can be overridden with the
-f option.
The command also checks that the replication strategy for the pool is
consistent. An attempt to combine redundant and non-redundant storage in a
single pool, or to mix disks and files, results in an error unless
-f is specified. The use of differently
sized devices within a single raidz or mirror group is also flagged as an
error unless -f is specified.
Unless the -R option is specified, the
default mount point is
/pool.
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted. This can be overridden with the
-m option.
By default all supported features are enabled on the new pool unless the
-d option is specified.
-B
Create a whole-disk pool with an EFI System partition to support booting
a system with UEFI firmware. The default size is 256MB. To create a boot
partition with a custom size, set the
bootsize property with the
-o option. See the
Properties section for
details.
-d
Do not enable any features on the new pool. Individual features can be
enabled by setting their corresponding properties to
enabled with the
-o option. See
zpool-features(5) for details about
feature properties.
-f
Forces use of vdevs, even if they
appear in use or specify a conflicting replication level. Not all
devices can be overridden in this manner.
-m mountpoint
Sets the mount point for the root dataset. The default mount point is
/pool or
altroot/pool if
altroot is specified. The mount point
must be an absolute path, legacy, or
none. For more information on dataset
mount points, see zfs(1M).
-n
Displays the configuration that would be used without actually
creating the pool. The actual pool creation can still fail due to
insufficient privileges or device sharing.
-o property=value
Sets the given pool properties. See the
Pool Properties
section for a list of valid properties that can be set.
-O file-system-property=value
Sets the given file system properties in the root file system of the
pool. See the
Properties section of
zfs(1M) for a list of valid properties
that can be set.
-R root
Equivalent to
-o cachefile=none -o altroot=root
zpool destroy [-f] pool
Destroys the given pool, freeing up any devices for other use. This
command tries to unmount any active datasets before destroying the pool.
-f
Forces any active datasets contained within the pool to be
unmounted.
zpool detach pool device
Detaches device from a mirror. The
operation is refused if there are no other valid replicas of the
data.
zpool export [-cfF] [-t numthreads] pool...
Exports the given pools from the system. All devices are marked as
exported, but are still considered in use by other subsystems. The devices
can be moved between systems (even those of different endianness) and
imported as long as a sufficient number of devices are present.
Before exporting the pool, all datasets within the pool are unmounted. A
pool cannot be exported if it has a shared spare that is currently being
used.
For pools to be portable, you must give the
zpool command whole disks, not just
slices, so that ZFS can label the disks with portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not
recognize the disks.
-c
Keep the configuration information of the exported pool in the cache
file.
-f
Forcefully unmount all datasets, using the
unmount
-f command.
This command will forcefully export the pool even if it has a shared
spare that is currently being used. This may lead to potential data
corruption.
-F
Do not update device labels or cache file with new configuration.
-t numthreads
Unmount datasets in parallel using up to
numthreads threads.
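For example, a pool might be exported with forced unmounting and several
unmount threads (the values are illustrative):
# zpool export -f -t 8 tank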
zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
Retrieves the given list of properties (or all properties if
all is used) for the specified storage
pool(s). These properties are displayed with the following fields:
        name        Name of storage pool
        property    Property name
        value       Property value
        source      Property source, either 'default' or 'local'.
See the Pool Properties
section for more information on the available pool properties.
-H
Scripted mode. Do not display headers, and separate fields by a single
tab instead of arbitrary space.
-o field
A comma-separated list of columns to display.
name,property,value,source
is the default value.
-p
Display numbers in parsable (exact) values.
zpool history [-il] [pool]...
Displays the command history of the specified pool(s) or all pools if no
pool is specified.
-i
Displays internally logged ZFS events in addition to user-initiated
events.
-l
Displays log records in long format, which in addition to the standard
format includes the user name, the hostname, and the zone in which
the operation was performed.
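For example, a long-format history including internal events (the pool name is
illustrative):
# zpool history -il tank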
zpool import [-D] [-d dir]
Lists pools available to import. If the
-d option is not specified, this
command searches for devices in
/dev/dsk. The
-d option can be specified multiple
times, and all directories are searched. If the device appears to be part
of an exported pool, this command displays a summary of the pool with the
name of the pool, a numeric identifier, as well as the vdev layout and
current health of the device for each device or file. Destroyed pools,
pools that were previously destroyed with the
zpool
destroy command, are not listed unless
the -D option is specified.
The numeric identifier is unique, and can be used instead of the pool name
when multiple exported pools of the same name are available.
-c cachefile
Reads configuration from the given
cachefile that was created with the
cachefile pool property. This
cachefile is used instead of
searching for devices.
-d dir
Searches for devices or files in dir.
The
-d option can be specified
multiple times.
-D
Lists destroyed pools only.
zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-t numthreads]
Imports all pools found in the search directories. Identical to the
previous command, except that all pools with a sufficient number of
devices available are imported. Destroyed pools, pools that were
previously destroyed with the
zpool
destroy command, will not be imported
unless the -D option is specified.
-a
Searches for and imports all pools found.
-c cachefile
Reads configuration from the given
cachefile that was created with the
cachefile pool property. This
cachefile is used instead of
searching for devices.
-d dir
Searches for devices or files in dir.
The
-d option can be specified
multiple times. This option is incompatible with the
-c option.
-D
Imports destroyed pools only. The
-f option is also required.
-f
Forces import, even if the pool appears to be potentially active.
-F
Recovery mode for a non-importable pool. Attempt to return the pool to
an importable state by discarding the last few transactions. Not all
damaged pools can be recovered by using this option. If successful,
the data from the discarded transactions is irretrievably lost. This
option is ignored if the pool is importable or already imported.
-m
Allows a pool to import when there is a missing log device. Recent
transactions can be lost because the log device will be
discarded.
-n
Used with the
-F recovery option.
Determines whether a non-importable pool can be made importable again,
but does not actually perform the pool recovery. For more details
about pool recovery mode, see the
-F option, above.
-N
Import the pool without mounting any file systems.
-o mntopts
Comma-separated list of mount options to use when mounting datasets
within the pool. See zfs(1M) for a
description of dataset properties and mount options.
-o property=value
Sets the specified property on the imported pool. See the
Pool Properties
section for more information on the available pool properties.
-R root
Sets the cachefile property to
none and the
altroot property to
root.
zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-t numthreads] pool|id [newpool]
Imports a specific pool. A pool can be identified by its name or the
numeric identifier. If newpool is
specified, the pool is imported using the name
newpool. Otherwise, it is imported with
the same name as its exported name.
If a device is removed from a system without running
zpool
export first, the device appears as
potentially active. It cannot be determined if this was a failed export,
or whether the device is really in use from another host. To import a pool
in this state, the -f option is
required.
-c cachefile
Reads configuration from the given
cachefile that was created with the
cachefile pool property. This
cachefile is used instead of
searching for devices.
-d dir
Searches for devices or files in dir.
The
-d option can be specified
multiple times. This option is incompatible with the
-c option.
-D
Imports a destroyed pool. The
-f
option is also required.
-f
Forces import, even if the pool appears to be potentially active.
-F
Recovery mode for a non-importable pool. Attempt to return the pool to
an importable state by discarding the last few transactions. Not all
damaged pools can be recovered by using this option. If successful,
the data from the discarded transactions is irretrievably lost. This
option is ignored if the pool is importable or already imported.
-m
Allows a pool to import when there is a missing log device. Recent
transactions can be lost because the log device will be
discarded.
-n
Used with the
-F recovery option.
Determines whether a non-importable pool can be made importable again,
but does not actually perform the pool recovery. For more details
about pool recovery mode, see the
-F option, above.
-o mntopts
Comma-separated list of mount options to use when mounting datasets
within the pool. See zfs(1M) for a
description of dataset properties and mount options.
-o property=value
Sets the specified property on the imported pool. See the
Pool Properties
section for more information on the available pool properties.
-R root
Sets the cachefile property to
none and the
altroot property to
root.
-t numthreads
Mount datasets in parallel using up to
numthreads threads.
zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
Displays I/O statistics for the given pools. When given an
interval, the statistics are printed
every interval seconds until ^C is
pressed. If no pools are specified,
statistics for every pool in the system are shown. If
count is specified, the command exits
after count reports are printed.
-T u|d
Display a time stamp. Specify u for a
printed representation of the internal representation of time. See
time(2). Specify
d for standard date format. See
date(1).
-v
Verbose statistics. Reports usage statistics for individual vdevs
within the pool, in addition to the pool-wide statistics.
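For example, per-vdev statistics might be printed every 5 seconds, 3 times
(the pool name is illustrative):
# zpool iostat -v tank 5 3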
zpool labelclear [-f] device
Removes ZFS label information from the specified
device. The
device must not be part of an active pool
configuration.
-f
Treat exported or foreign devices as inactive.
zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]... [interval [count]]
Lists the given pools along with a health status and space usage. If no
pools are specified, all pools in the
system are listed. When given an
interval, the information is printed
every interval seconds until ^C is
pressed. If count is specified, the
command exits after count reports are
printed.
-H
Scripted mode. Do not display headers, and separate fields by a single
tab instead of arbitrary space.
-
-o property
Comma-separated list of properties to display. See the
Pool Properties
section for a list of valid properties. The default list is
name, size, allocated, free, expandsize, fragmentation, capacity,
dedupratio, health, altroot.
-p
Display numbers in parsable (exact) values.
-T u|d
Display a time stamp. Specify
u for a printed representation of the internal representation of time.
See time(2). Specify
d for standard date format. See
date(1).
-v
Verbose statistics. Reports usage statistics for individual vdevs
within the pool, in addition to the pool-wide statistics.
zpool offline [-t] pool device...
Takes the specified physical device offline. While the
device is offline, no attempt is made to
read or write to the device. This command is not applicable to spares.
-t
Temporary. Upon reboot, the specified physical device reverts to its
previous state.
zpool online [-e] pool device...
Brings the specified physical device online. This command is not
applicable to spares.
-e
Expand the device to use all available space. If the device is part of
a mirror or raidz then all devices must be expanded before the new
space will become available to the pool.
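For example, a device might be taken offline temporarily and later brought
back online with expansion (pool and device names are illustrative):
# zpool offline -t tank c0t2d0
# zpool online -e tank c0t2d0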
zpool reguid pool
Generates a new unique identifier for the pool. You must ensure that all
devices in this pool are online and healthy before performing this
action.
zpool reopen pool
Reopens all the vdevs associated with the pool.
zpool remove pool device...
Removes the specified device from the pool. This command currently only
supports removing hot spares, cache, log, and special devices. A mirrored
log device can be removed by specifying the top-level mirror for the log.
Non-log devices that are part of a mirrored configuration can be removed
using the
zpool
detach command. Non-redundant and raidz
devices cannot be removed from a pool.
zpool replace [-f] pool device [new_device]
Replaces old_device with
new_device. This is equivalent to
attaching new_device, waiting for it to
resilver, and then detaching old_device.
The size of new_device must be greater than
or equal to the minimum size of all the devices in a mirror or raidz
configuration.
new_device is required if the pool is not
redundant. If new_device is not
specified, it defaults to old_device.
This form of replacement is useful after an existing disk has failed and
has been physically replaced. In this case, the new disk may have the same
/dev/dsk path as the old device, even
though it is actually a different disk. ZFS recognizes this.
-f
Forces use of new_device, even if it
appears to be in use. Not all devices can be overridden in this
manner.
zpool scrub [-m|-M|-p|-s] pool...
Begins a scrub or resumes a paused scrub. The scrub examines all data in
the specified pools to verify that it checksums correctly. For replicated
(mirror or raidz) devices, ZFS automatically repairs any damage discovered
during the scrub. The
zpool
status command reports the progress of
the scrub and summarizes the results of the scrub upon completion.
Scrubbing and resilvering are very similar operations. The difference is
that resilvering only examines data that ZFS knows to be out of date (for
example, when attaching a new device to a mirror or replacing an existing
device), whereas scrubbing examines all data to discover silent errors due
to hardware faults or disk failure.
Because scrubbing and resilvering are I/O-intensive operations, ZFS only
allows one at a time. If a scrub is paused, the
zpool
scrub resumes it. If a resilver is in
progress, ZFS does not allow a scrub to be started until the resilver
completes.
A partial scrub may be requested using the -m
or -M option.
-m
Scrub only metadata blocks.
-M
Scrub only MOS blocks.
-p
Pause scrubbing. Scrub pause state and progress are periodically
synced to disk. If the system is restarted or the pool is exported during
a paused scrub, even after import, the scrub will remain paused until it
is resumed. Once resumed, the scrub will pick up from the place where
it was last checkpointed to disk. To resume a paused scrub, issue
zpool
scrub again.
-s
Stop scrubbing.
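For example, a scrub might be started, paused, resumed, and finally stopped
(the pool name is illustrative):
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
# zpool scrub -s tank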
zpool set property=value pool
Sets the given property on the specified pool. See the
Pool Properties
section for more information on what properties can be set and acceptable
values.
zpool split [-n] [-o property=value]... [-R root] pool newpool
Splits devices off pool creating
newpool. All vdevs in
pool must be mirrors. At the time of the
split, newpool will be a replica of
pool.
-n
Do a dry run; do not actually perform the split. Print out the expected
configuration of newpool.
-o property=value
Sets the specified property for
newpool. See the
Pool Properties
section for more information on the available pool properties.
-R root
Set altroot for
newpool to
root and automatically import
it.
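For example, a dry run of splitting a mirrored pool might look like this (the
pool names are illustrative):
# zpool split -n tank tank2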
zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
Displays the detailed health status for the given pools. If no
pool is specified, then the status of
each pool in the system is displayed. For more information on pool and
device health, see the
Device
Failure and Recovery section.
If a scrub or resilver is in progress, this command reports the percentage
done and the estimated time to completion. Both of these are only
approximate, because the amount of data in the pool and the other
workloads on the system can change.
-D
Display a histogram of deduplication statistics, showing the allocated
(physically present on disk) and referenced (logically referenced in
the pool) block counts and sizes by reference count.
-T u|d
Display a time stamp. Specify
u for a printed representation of the internal representation of time.
See time(2). Specify
d for standard date format. See
date(1).
-v
Displays verbose data error information, printing out a complete list
of all data errors since the last complete pool scrub.
-x
Only display status for pools that are exhibiting errors or are
otherwise unavailable. Warnings about pools not using the latest
on-disk format will not be included.
zpool trim [-r rate|-s] pool...
Initiates an on-demand TRIM operation on all of the free space of a pool.
This informs the underlying storage devices of all of the blocks that the
pool no longer considers allocated, thus allowing thinly provisioned
storage devices to reclaim them. Please note that this collects all space
marked as “freed” in the pool immediately and doesn't wait
the zfs_txgs_per_trim delay as automatic TRIM
does. Hence, this can limit pool corruption recovery options during and
immediately following the on-demand TRIM to 1-2 TXGs into the past
(instead of the standard 32-64 of automatic TRIM). This approach, however,
allows you to recover the maximum amount of free space from the pool
immediately without having to wait.
Also note that an on-demand TRIM operation can be initiated irrespective of
the autotrim pool property setting. It does,
however, respect the forcetrim pool property.
An on-demand TRIM operation does not conflict with an ongoing scrub, but it
can put significant I/O stress on the underlying vdevs. A resilver,
however, automatically stops an on-demand TRIM operation. You can manually
reinitiate the TRIM operation after the resilver has started, by simply
reissuing the
zpool
trim command.
Adding a vdev during TRIM is supported, although the progression display in
zpool
status might not be entirely accurate
in that case (TRIM will complete before reaching 100%). Removing or
detaching a vdev will prematurely terminate an on-demand TRIM operation.
-r rate
Controls the speed at which the TRIM operation progresses. Without
this option, TRIM is executed in parallel on all top-level vdevs as
quickly as possible. This option allows you to control how fast (in
bytes per second) the TRIM is executed. This rate is applied on a
per-vdev basis, i.e. every top-level vdev in the pool tries to match
this speed.
Due to limitations in how the algorithm is designed, TRIMs are executed
in whole-metaslab increments. Each top-level vdev contains
approximately 200 metaslabs, so a rate-limited TRIM progresses in
steps, i.e. it TRIMs one metaslab completely and then waits for a
while so that over the whole device, the speed averages out.
When an on-demand TRIM operation is already in progress, this option
changes its rate. To change a rate-limited TRIM to an unlimited one,
simply execute the
zpool
trim command without the
-r option.
-s
Stop trimming. If an on-demand TRIM operation is not ongoing at the
moment, this does nothing and the command returns success.
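For example, an on-demand TRIM might be started and later stopped (the pool
name is illustrative):
# zpool trim tank
# zpool trim -s tank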
zpool upgrade
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number. These pools can continue to
be used, but some features may not be available. Use
zpool
upgrade
-a to enable all features on all
pools.
zpool upgrade -v
Displays legacy ZFS versions supported by the current software. See
zpool-features(5) for a description of
the feature flags supported by the current software.
zpool upgrade [-V version] -a|pool...
Enables all supported features on the given pool. Once this is done, the
pool will no longer be accessible on systems that do not support feature
flags. See zpool-features(5) for details on
compatibility with systems that support feature flags, but do not support
all features enabled on the pool.
-a
Enables all supported features on all pools.
-V version
Upgrade to the specified legacy version. If the
-V flag is specified, no features
will be enabled on the pool. This option can only be used to increase
the version number up to the last supported legacy version
number.
zpool vdev-get all|property[,property]... pool vdev-name|vdev-guid
Retrieves the given list of vdev properties (or all properties if
all is used) for the specified vdev of the
specified storage pool. These properties are displayed in the same manner
as the pool properties. The operation is supported for leaf-level vdevs
only. See the Device
Properties section for more information on the available
properties.
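For example (pool and device names are illustrative):
# zpool vdev-get prefread tank c1t0d0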
zpool vdev-set property=value pool vdev-name|vdev-guid
Sets the given property on the specified device of the specified pool. If
a top-level vdev is specified, the property is set on all the child devices.
See the Device
Properties section for more information on what properties can be set
and accepted values.
EXIT STATUS
The following exit values are returned:
0
Successful completion.
1
An error occurred.
2
Invalid command line options were specified.
EXAMPLES
Example 1 Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
Example 2 Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
Example 3 Creating a ZFS Storage Pool by Using Slices
The following command creates an unmirrored pool using two disk slices.
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
Example 4 Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files. While not
recommended, a pool based on files can be useful for experimental
purposes.
# zpool create tank /path/to/file/a /path/to/file/b
Example 5 Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
tank, assuming the pool is already made up of
two-way mirrors. The additional space is immediately available to any
datasets within the pool.
# zpool add tank mirror c1t0d0 c1t1d0
Example 6 Listing Available ZFS Storage Pools
The following command lists all available pools on the system. In this
case, the pool zion is faulted due to a
missing device. The results from this command are similar to the
following:
# zpool list
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
zion - - - - - - - FAULTED -
Example 7 Destroying a ZFS Storage Pool
The following command destroys the pool tank
and any datasets contained within:
# zpool destroy -f tank
-
Example 8 Exporting a ZFS Storage Pool
The following command exports the devices in pool
tank so that they can be relocated or later
imported:
# zpool export tank
Example 9 Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
tank for use on the system. The results from
this command are similar to the following:
# zpool import
pool: tank
id: 15451357997522795478
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        tank        ONLINE
          mirror    ONLINE
            c1t2d0  ONLINE
            c1t3d0  ONLINE
# zpool import tank
Example 10 Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current
version of the software.
# zpool upgrade -a
This system is currently running ZFS version 2.
Example 11 Managing Hot Spares
The following command creates a new pool with an available hot spare:
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
If one of the disks were to fail, the pool would be reduced to the degraded
state. The failed device can be replaced using the following command:
# zpool replace tank c0t0d0 c0t3d0
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail. The hot spare can be
permanently removed from the pool using the following command:
# zpool remove tank c0t2d0
Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two,
two-way mirrors and mirrored log devices:
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
c4d0 c5d0
Example 13 Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS
storage pool:
# zpool add pool cache c2d0 c3d0
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour
for them to fill. Capacity and reads can be monitored using the
iostat subcommand as follows:
# zpool iostat -v pool 5
Example 14 Removing a Mirrored Log Device
The following command removes the mirrored log device
mirror-2. Given this configuration:
pool: tank
state: ONLINE
scrub: none requested
config:
        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            c6t0d0     ONLINE       0     0     0
            c6t1d0     ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            c6t2d0     ONLINE       0     0     0
            c6t3d0     ONLINE       0     0     0
        logs
          mirror-2     ONLINE       0     0     0
            c4t0d0     ONLINE       0     0     0
            c4t1d0     ONLINE       0     0     0
The command to remove the mirrored log mirror-2
is:
# zpool remove tank mirror-2
Example 15 Displaying Expanded Space on a Device
The following command displays the detailed information for the pool
data. This pool is comprised of a single
raidz vdev where one of its devices increased its capacity by 10GB. In
this example, the pool will not be able to utilize this extra capacity
until all the devices under the raidz vdev have been expanded.
# zpool list -v data
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
  raidz1   23.9G  14.6G  9.30G    48%         -
    c1t1d0     -      -      -       -        -
    c1t2d0     -      -      -       -      10G
    c1t3d0     -      -      -       -        -
INTERFACE STABILITY
Evolving

SEE ALSO
zfs(1M), attributes(5), zpool-features(5)