NEX-18069 Unable to get/set VDEV_PROP_RESILVER_MAXACTIVE/VDEV_PROP_RESILVER_MINACTIVE props
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9552 zfs_scan_idle throttling harms performance and needs to be removed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-5284 need to document and update default for import -t option
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4934 Add capability to remove special vdev
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4258 restore and update vdev-get & vdev-set in zpool man page
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3502 dedup ceiling should set a pool prop when cap is in effect
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
SUP-817 Removed references to special device from man and help
Revert "SUP-817 Removed references to special device"
This reverts commit f8970e28f0d8bd6b69711722f341e3e1d0e1babf.
SUP-817 Removed references to special device
OS-102 add man page info and tests for vdev/CoS properties and ZFS meta features
Issue #26: partial scrub
Added partial scrub options:
-M for MOS only scrub
-m for metadata scrub
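For illustration, a partial scrub could be requested as follows (the pool name tank is hypothetical):
    # zpool scrub -M tank    # scrub MOS blocks only
    # zpool scrub -m tank    # scrub metadata blocks only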
re 13748 added zpool export -c option
zpool export -c command exports the specified pool while keeping its latest
configuration in the cache file for subsequent zpool import -c.
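A minimal usage sketch (the pool name tank is hypothetical; /etc/zfs/zpool.cache is the usual default cache file):
    # zpool export -c tank
    # zpool import -c /etc/zfs/zpool.cache tank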
re #11781 rb3701 Update man related tools (add missed files)
re #11781 rb3701 Update man related tools
--HG--
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE => usr/src/cmd/man/THIRDPARTYLICENSE
rename : usr/src/cmd/man/src/THIRDPARTYLICENSE.descrip => usr/src/cmd/man/THIRDPARTYLICENSE.descrip
rename : usr/src/cmd/man/src/man.c => usr/src/cmd/man/man.c

*** 18,28 **** .\" .\" CDDL HEADER END .\" .\" .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. ! .\" Copyright (c) 2012, 2017 by Delphix. All rights reserved. .\" Copyright 2017 Nexenta Systems, Inc. .\" Copyright (c) 2017 Datto Inc. .\" Copyright (c) 2017 George Melikov. All Rights Reserved. .\" .Dd December 6, 2017 --- 18,28 ---- .\" .\" CDDL HEADER END .\" .\" .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. ! .\" Copyright (c) 2013 by Delphix. All rights reserved. .\" Copyright 2017 Nexenta Systems, Inc. .\" Copyright (c) 2017 Datto Inc. .\" Copyright (c) 2017 George Melikov. All Rights Reserved. .\" .Dd December 6, 2017
*** 62,72 **** .Nm .Cm detach .Ar pool device .Nm .Cm export ! .Op Fl f .Ar pool Ns ... .Nm .Cm get .Op Fl Hp .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... --- 62,73 ---- .Nm .Cm detach .Ar pool device .Nm .Cm export ! .Op Fl cfF ! .Op Fl t Ar numthreads .Ar pool Ns ... .Nm .Cm get .Op Fl Hp .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
*** 87,104 **** --- 88,107 ---- .Op Fl F Op Fl n .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir .Op Fl o Ar mntopts .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ... .Op Fl R Ar root + .Op Fl t Ar numthreads .Nm .Cm import .Op Fl Dfm .Op Fl F Op Fl n .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir .Op Fl o Ar mntopts .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ... .Op Fl R Ar root + .Op Fl t Ar numthreads .Ar pool Ns | Ns Ar id .Op Ar newpool .Nm .Cm iostat .Op Fl v
*** 130,152 **** .Nm .Cm reopen .Ar pool .Nm .Cm remove - .Op Fl np .Ar pool Ar device Ns ... .Nm - .Cm remove - .Fl s - .Ar pool - .Nm .Cm replace .Op Fl f .Ar pool Ar device Op Ar new_device .Nm .Cm scrub ! .Op Fl s | Fl p .Ar pool Ns ... .Nm .Cm set .Ar property Ns = Ns Ar value .Ar pool --- 133,150 ---- .Nm .Cm reopen .Ar pool .Nm .Cm remove .Ar pool Ar device Ns ... .Nm .Cm replace .Op Fl f .Ar pool Ar device Op Ar new_device .Nm .Cm scrub ! .Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s .Ar pool Ns ... .Nm .Cm set .Ar property Ns = Ns Ar value .Ar pool
*** 161,178 **** --- 159,190 ---- .Op Fl Dvx .Op Fl T Sy u Ns | Ns Sy d .Oo Ar pool Oc Ns ... .Op Ar interval Op Ar count .Nm + .Cm trim + .Op Fl r Ar rate Ns | Ns Fl s + .Ar pool Ns ... + .Nm .Cm upgrade .Nm .Cm upgrade .Fl v .Nm .Cm upgrade .Op Fl V Ar version .Fl a Ns | Ns Ar pool Ns ... + .Nm + .Cm vdev-get + .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ... + .Ar pool + .Ar vdev-name Ns | Ns Ar vdev-guid + .Nm + .Cm vdev-set + .Ar property Ns = Ns Ar value + .Ar pool + .Ar vdev-name Ns | Ns Ar vdev-guid .Sh DESCRIPTION The .Nm command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and
*** 398,407 **** --- 410,425 ---- An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools. .Pp + See + .Sy sparegroup + vdev property in + .Sx Device Properties + section for information on how to control spare selection. + .Pp Spares cannot replace log devices. .Ss Intent Log The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage
*** 422,432 **** .Sx EXAMPLES section for an example of mirroring multiple log devices. .Pp Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool. ! Mirrored devices can be removed by specifying the top-level mirror vdev. .Ss Cache Devices Devices can be added to a storage pool as .Qq cache devices . These devices provide an additional layer of caching between main memory and disk. --- 440,451 ---- .Sx EXAMPLES section for an example of mirroring multiple log devices. .Pp Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool. ! Mirrored log devices can be removed by specifying the top-level mirror for the ! log. .Ss Cache Devices Devices can be added to a storage pool as .Qq cache devices . These devices provide an additional layer of caching between main memory and disk.
*** 449,459 **** the original storage pool device, which might be part of a mirrored or raidz configuration. .Pp The content of the cache devices is considered volatile, as is the case with other system caches. ! .Ss Properties Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. .Pp The following are read-only properties: --- 468,478 ---- the original storage pool device, which might be part of a mirrored or raidz configuration. .Pp The content of the cache devices is considered volatile, as is the case with other system caches. ! .Ss Pool Properties Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. .Pp The following are read-only properties:
*** 469,478 **** --- 488,504 ---- option. .It Sy capacity Percentage of pool space used. This property can also be referred to by its shortened column name, .Sy cap . + .It Sy ddt_capped Ns = Ns Sy on Ns | Ns Sy off + When + .Sy ddt_capped + is + .Sy on , + this indicates that DDT growth has been stopped. + New unique writes will not be deduped to prevent further DDT growth. .It Sy expandsize Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. Uninitialized space consists of any space on an EFI labeled vdev which has not been brought online
*** 483,496 **** .It Sy fragmentation The amount of fragmentation in the pool. .It Sy free The amount of free space available in the pool. .It Sy freeing - After a file system or snapshot is destroyed, the space it was using is - returned to the pool asynchronously. .Sy freeing ! is the amount of space remaining to be reclaimed. Over time .Sy freeing will decrease while .Sy free increases. --- 509,522 ---- .It Sy fragmentation The amount of fragmentation in the pool. .It Sy free The amount of free space available in the pool. .It Sy freeing .Sy freeing ! is the amount of pool space remaining to be reclaimed. ! After a file, dataset or snapshot is destroyed, the space it was using is ! returned to the pool asynchronously. Over time .Sy freeing will decrease while .Sy free increases.
*** 582,591 **** --- 608,649 ---- belonged to the pool, is automatically formatted and replaced. The default behavior is .Sy off . This property can also be referred to by its shortened column name, .Sy replace . + .It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off + When set to + .Sy on , + while deleting data, ZFS will inform the underlying vdevs of any blocks that + have been marked as freed. + This allows thinly provisioned vdevs to reclaim unused blocks. + Currently, this feature supports sending SCSI UNMAP commands to SCSI and SAS + disk vdevs, and using file hole punching on file-backed vdevs. + SATA TRIM is currently not implemented. + The default setting for this property is + .Sy off . + .Pp + Please note that automatic trimming of data blocks can put significant stress on + the underlying storage devices if they do not handle these commands in a + background, low-priority manner. + In that case, it may be possible to achieve most of the benefits of trimming + free space on the pool by running an on-demand + .Pq manual + trim every once in a while during a maintenance window using the + .Nm zpool Cm trim + command. + .Pp + Automatic trim does not reclaim blocks after a delete immediately. + Instead, it waits approximately 32-64 TXGs + .Po or as defined by the + .Sy zfs_txgs_per_trim + tunable + .Pc + to allow for more efficient aggregation of smaller portions of free space into + fewer larger regions, as well as to allow for longer pool corruption recovery + via + .Nm zpool Cm import Fl F . .It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset Identifies the default bootable dataset for the root pool. This property is expected to be set mainly by the installation and upgrade programs. .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
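As a hedged illustration of the autotrim property described above (pool name is hypothetical):
    # zpool set autotrim=on tank
    # zpool get autotrim tank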
*** 660,669 **** --- 718,740 ---- .Ar feature_name to the enabled state. See .Xr zpool-features 5 for details on feature states. + .It Sy forcetrim Ns = Ns Sy on Ns | Ns Sy off + Controls whether device support is taken into consideration when issuing TRIM + commands to the underlying vdevs of the pool. + Normally, both automatic trim and on-demand + .Pq manual + trim only issue TRIM commands if a vdev indicates support for it. + Setting the + .Sy forcetrim + property to + .Sy on + will force ZFS to issue TRIMs even if it thinks a device does not support it. + The default value is + .Sy off . .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off Controls whether information about snapshots associated with this pool is output when .Nm zfs Cm list is run without the
*** 671,680 **** --- 742,783 ---- option. The default value is .Sy off . This property can also be referred to by its shortened name, .Sy listsnaps . + .It Sy scrubprio Ns = Ns Ar 0-100 Ns + Sets the priority of scrub I/O for this pool. + This is a number from 0 to 100, with higher numbers meaning a higher priority + and thus more bandwidth allocated to scrub I/O, provided there is other + I/O competing for bandwidth. + If no other I/O is competing for bandwidth, scrub is allowed to consume + as much bandwidth as the pool is capable of providing. + A priority of + .Ar 100 + means that scrub I/O has equal priority to any other user-generated I/O. + The value + .Ar 0 + is special, because it turns off per-pool scrub priority control. + In that case, scrub I/O priority is determined by the + .Sy zfs_vdev_scrub_min_active + and + .Sy zfs_vdev_scrub_max_active + tunables. + The default value is + .Ar 5 . + .It Sy resilverprio Ns = Ns Ar 0-100 Ns + Same as the + .Sy scrubprio + property, but controls the priority for resilver I/O. + The default value is + .Ar 10 . + When set to + .Ar 0 , + the global tunables used for queue sizing are + .Sy zfs_vdev_resilver_min_active + and + .Sy zfs_vdev_resilver_max_active . .It Sy version Ns = Ns Ar version The current on-disk version of the pool. This can be increased, but never decreased. The preferred method of updating pools is with the .Nm zpool Cm upgrade
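A sketch of tuning these priorities (pool name and values are illustrative):
    # zpool set scrubprio=20 tank       # more bandwidth for scrub I/O
    # zpool set resilverprio=0 tank     # fall back to the global resilver tunables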
*** 681,690 **** --- 784,851 ---- command, though this property can be used when a specific version is needed for backwards compatibility. Once feature flags are enabled on a pool this property will no longer have a value. .El + .Ss Device Properties + Each device can have several properties associated with it. + These properties override global tunables and are designed to provide more + control over the operational parameters of this specific device, as well as to + help manage this device. + .Pp + The + .Sy cos + device property can reference a CoS property descriptor by name, in which case, + the values of device properties are determined according to the following rule: + the device settings override CoS settings, which, in turn, override the global + tunables. + .Pp + The following device properties are available: + .Bl -tag -width Ds + .It Sy cos Ns = Ns Ar cos-name + This property indicates whether the device is associated with a CoS property + descriptor object. + If so, the properties from the CoS descriptor that are not explicitly overridden + by the device properties are in effect for this device. + .It Sy l2arc_ddt Ns = Ns Sy on Ns | Ns Sy off + This property is meaningful for L2ARC devices. + If this property is turned + .Sy on , + ZFS will dedicate the L2ARC device to cache deduplication table + .Pq DDT + buffers only. + .It Sy prefread Ns = Ns Sy 1 Ns .. Ns Sy 100 + This property is meaningful for devices that belong to a mirror. + The property determines the preference that is given to the device when reading + from the mirror. + The ratio of the value to the sum of the values of this property for all the + devices in the mirror determines the relative frequency + .Po which also is considered + .Qq probability + .Pc + of reading from this specific device. + .It Sy sparegroup Ns = Ns Ar group-name + This property indicates whether the device is a part of a spare device group. + Devices in the pool + .Pq including spares + can be labeled with strings that are meaningful in the context of the management + workflow in effect. + When a failed device is automatically replaced by spares, the spares whose + .Sy sparegroup + property matches the failed device's property are used first. + .It Xo + .Bro Sy read Ns | Ns Sy aread Ns | Ns Sy write Ns | Ns + .Sy awrite Ns | Ns Sy scrub Ns | Ns Sy resilver Brc Ns _ Ns + .Bro Sy minactive Ns | Ns Sy maxactive Brc Ns = Ns + .Sy 1 Ns .. Ns Sy 1000 + .Xc + These properties define the minimum/maximum number of outstanding active + requests for the queueable classes of I/O requests as defined by the + ZFS I/O scheduler. + The classes include read, asynchronous read, write, asynchronous write, scrub, + and resilver. + .El .Ss Subcommands All subcommands that modify state are logged persistently to the pool in their original form. .Pp The
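A hedged sketch of setting the device properties described above, using the vdev-set subcommand documented later in this page (pool, device, and group names are hypothetical):
    # zpool vdev-set prefread=75 tank c0t1d0       # favor this mirror side for reads
    # zpool vdev-set sparegroup=rack1 tank c0t1d0  # group the device for spare selection
    # zpool vdev-set read_maxactive=32 tank c0t1d0 # raise the read queue depth for this device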
*** 887,897 **** The actual pool creation can still fail due to insufficient privileges or device sharing. .It Fl o Ar property Ns = Ns Ar value Sets the given pool properties. See the ! .Sx Properties section for a list of valid properties that can be set. .It Fl O Ar file-system-property Ns = Ns Ar value Sets the given file system properties in the root file system of the pool. See the .Sx Properties --- 1048,1058 ---- The actual pool creation can still fail due to insufficient privileges or device sharing. .It Fl o Ar property Ns = Ns Ar value Sets the given pool properties. See the ! .Sx Pool Properties section for a list of valid properties that can be set. .It Fl O Ar file-system-property Ns = Ns Ar value Sets the given file system properties in the root file system of the pool. See the .Sx Properties
*** 924,934 **** from a mirror. The operation is refused if there are no other valid replicas of the data. .It Xo .Nm .Cm export ! .Op Fl f .Ar pool Ns ... .Xc Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. --- 1085,1096 ---- from a mirror. The operation is refused if there are no other valid replicas of the data. .It Xo .Nm .Cm export ! .Op Fl cfF ! .Op Fl t Ar numthreads .Ar pool Ns ... .Xc Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems.
*** 945,962 **** --- 1107,1132 ---- command whole disks, not just slices, so that ZFS can label the disks with portable EFI labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks. .Bl -tag -width Ds + .It Fl c + Keep configuration information of exported pool in the cache file. .It Fl f Forcefully unmount all datasets, using the .Nm unmount Fl f command. .Pp This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption. + .It Fl F + Do not update device labels or cache file with new configuration. + .It Fl t Ar numthreads + Unmount datasets in parallel using up to + .Ar numthreads + threads. .El .It Xo .Nm .Cm get .Op Fl Hp
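An illustrative invocation of the extended export (pool name and thread count are hypothetical):
    # zpool export -c -t 8 tank    # keep the cached config, unmount with 8 threads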
*** 978,988 **** value Property value source Property source, either 'default' or 'local'. .Ed .Pp See the ! .Sx Properties section for more information on the available pool properties. .Bl -tag -width Ds .It Fl H Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary --- 1148,1158 ---- value Property value source Property source, either 'default' or 'local'. .Ed .Pp See the ! .Sx Pool Properties section for more information on the available pool properties. .Bl -tag -width Ds .It Fl H Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary
*** 1129,1139 **** .Xr zfs 1M for a description of dataset properties and mount options. .It Fl o Ar property Ns = Ns Ar value Sets the specified property on the imported pool. See the ! .Sx Properties section for more information on the available pool properties. .It Fl R Ar root Sets the .Sy cachefile property to --- 1299,1309 ---- .Xr zfs 1M for a description of dataset properties and mount options. .It Fl o Ar property Ns = Ns Ar value Sets the specified property on the imported pool. See the ! .Sx Pool Properties section for more information on the available pool properties. .It Fl R Ar root Sets the .Sy cachefile property to
*** 1223,1233 **** .Xr zfs 1M for a description of dataset properties and mount options. .It Fl o Ar property Ns = Ns Ar value Sets the specified property on the imported pool. See the ! .Sx Properties section for more information on the available pool properties. .It Fl R Ar root Sets the .Sy cachefile property to --- 1393,1403 ---- .Xr zfs 1M for a description of dataset properties and mount options. .It Fl o Ar property Ns = Ns Ar value Sets the specified property on the imported pool. See the ! .Sx Pool Properties section for more information on the available pool properties. .It Fl R Ar root Sets the .Sy cachefile property to
*** 1234,1243 **** --- 1404,1417 ---- .Sy none and the .Sy altroot property to .Ar root . + .It Fl t Ar numthreads + Mount datasets in parallel using up to + .Ar numthreads + threads. .El .It Xo .Nm .Cm iostat .Op Fl v
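For example, datasets could be mounted in parallel on import (values are illustrative):
    # zpool import -t 16 tank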
*** 1320,1330 **** Do not display headers, and separate fields by a single tab instead of arbitrary space. .It Fl o Ar property Comma-separated list of properties to display. See the ! .Sx Properties section for a list of valid properties. The default list is .Cm name , size , allocated , free , expandsize , fragmentation , capacity , .Cm dedupratio , health , altroot . .It Fl p --- 1494,1504 ---- Do not display headers, and separate fields by a single tab instead of arbitrary space. .It Fl o Ar property Comma-separated list of properties to display. See the ! .Sx Pool Properties section for a list of valid properties. The default list is .Cm name , size , allocated , free , expandsize , fragmentation , capacity , .Cm dedupratio , health , altroot . .It Fl p
*** 1393,1449 **** .Xc Reopen all the vdevs associated with the pool. .It Xo .Nm .Cm remove - .Op Fl np .Ar pool Ar device Ns ... .Xc Removes the specified device from the pool. ! This command currently only supports removing hot spares, cache, log ! devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz. ! .sp ! Removing a top-level vdev reduces the total amount of space in the storage pool. ! The specified device will be evacuated by copying all allocated space from it to ! the other devices in the pool. ! In this case, the ! .Nm zpool Cm remove ! command initiates the removal and returns, while the evacuation continues in ! the background. ! The removal progress can be monitored with ! .Nm zpool Cm status. ! This feature must be enabled to be used, see ! .Xr zpool-features 5 ! .Pp ! A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the ! same. ! Non-log devices or data devices that are part of a mirrored configuration can be removed using the .Nm zpool Cm detach command. ! .Bl -tag -width Ds ! .It Fl n ! Do not actually perform the removal ("no-op"). ! Instead, print the estimated amount of memory that will be used by the ! mapping table after the removal completes. ! This is nonzero only for top-level vdevs. ! .El ! .Bl -tag -width Ds ! .It Fl p ! Used in conjunction with the ! .Fl n ! flag, displays numbers as parsable (exact) values. ! .El .It Xo .Nm - .Cm remove - .Fl s - .Ar pool - .Xc - Stops and cancels an in-progress removal of a top-level vdev. - .It Xo - .Nm .Cm replace .Op Fl f .Ar pool Ar device Op Ar new_device .Xc Replaces --- 1567,1590 ---- .Xc Reopen all the vdevs associated with the pool. .It Xo .Nm .Cm remove .Ar pool Ar device Ns ... .Xc Removes the specified device from the pool. ! This command currently only supports removing hot spares, cache, log and special ! devices. ! A mirrored log device can be removed by specifying the top-level mirror for the ! log. ! Non-log devices that are part of a mirrored configuration can be removed using the .Nm zpool Cm detach command. ! Non-redundant and raidz devices cannot be removed from a pool. .It Xo .Nm .Cm replace .Op Fl f .Ar pool Ar device Op Ar new_device .Xc Replaces
*** 1480,1490 **** Not all devices can be overridden in this manner. .El .It Xo .Nm .Cm scrub ! .Op Fl s | Fl p .Ar pool Ns ... .Xc Begins a scrub or resumes a paused scrub. The scrub examines all data in the specified pools to verify that it checksums correctly. --- 1621,1631 ---- Not all devices can be overridden in this manner. .El .It Xo .Nm .Cm scrub ! .Op Fl m Ns | Ns Fl M Ns | Ns Fl p Ns | Ns Fl s .Ar pool Ns ... .Xc Begins a scrub or resumes a paused scrub. The scrub examines all data in the specified pools to verify that it checksums correctly.
*** 1511,1525 **** If a scrub is paused, the .Nm zpool Cm scrub resumes it. If a resilver is in progress, ZFS does not allow a scrub to be started until the resilver completes. .Bl -tag -width Ds ! .It Fl s ! Stop scrubbing. ! .El ! .Bl -tag -width Ds .It Fl p Pause scrubbing. Scrub pause state and progress are periodically synced to disk. If the system is restarted or pool is exported during a paused scrub, even after import, scrub will remain paused until it is resumed. --- 1652,1672 ---- If a scrub is paused, the .Nm zpool Cm scrub resumes it. If a resilver is in progress, ZFS does not allow a scrub to be started until the resilver completes. + .Pp + Partial scrub may be requested using + .Fl m + or + .Fl M + option. .Bl -tag -width Ds ! .It Fl m ! Scrub only metadata blocks. ! .It Fl M ! Scrub only MOS blocks. .It Fl p Pause scrubbing. Scrub pause state and progress are periodically synced to disk. If the system is restarted or pool is exported during a paused scrub, even after import, scrub will remain paused until it is resumed.
*** 1526,1545 **** Once resumed the scrub will pick up from the place where it was last checkpointed to disk. To resume a paused scrub issue .Nm zpool Cm scrub again. .El .It Xo .Nm .Cm set .Ar property Ns = Ns Ar value .Ar pool .Xc Sets the given property on the specified pool. See the ! .Sx Properties section for more information on what properties can be set and acceptable values. .It Xo .Nm .Cm split --- 1673,1694 ---- Once resumed the scrub will pick up from the place where it was last checkpointed to disk. To resume a paused scrub issue .Nm zpool Cm scrub again. + .It Fl s + Stop scrubbing. .El .It Xo .Nm .Cm set .Ar property Ns = Ns Ar value .Ar pool .Xc Sets the given property on the specified pool. See the ! .Sx Pool Properties section for more information on what properties can be set and acceptable values. .It Xo .Nm .Cm split
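A sketch of the pause/resume/stop cycle described above (pool name is hypothetical):
    # zpool scrub tank       # start a scrub
    # zpool scrub -p tank    # pause it; progress is synced to disk
    # zpool scrub tank       # resume from the last checkpoint
    # zpool scrub -s tank    # stop scrubbing entirely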
*** 1566,1576 **** .Ar newpool . .It Fl o Ar property Ns = Ns Ar value Sets the specified property for .Ar newpool . See the ! .Sx Properties section for more information on the available pool properties. .It Fl R Ar root Set .Sy altroot for --- 1715,1725 ---- .Ar newpool . .It Fl o Ar property Ns = Ns Ar value Sets the specified property for .Ar newpool . See the ! .Sx Pool Properties section for more information on the available pool properties. .It Fl R Ar root Set .Sy altroot for
*** 1626,1635 **** --- 1775,1855 ---- unavailable. Warnings about pools not using the latest on-disk format will not be included. .El .It Xo .Nm + .Cm trim + .Op Fl r Ar rate Ns | Ns Fl s + .Ar pool Ns ... + .Xc + Initiates an on-demand TRIM operation on all of the free space of a pool. + This informs the underlying storage devices of all of the blocks that the pool + no longer considers allocated, thus allowing thinly provisioned storage devices + to reclaim them. + Please note that this collects all space marked as + .Qq freed + in the pool immediately and doesn't wait for the + .Sy zfs_txgs_per_trim + delay as automatic TRIM does. + Hence, this can limit pool corruption recovery options during and immediately + following the on-demand TRIM to 1-2 TXGs into the past + .Pq instead of the standard 32-64 of automatic TRIM . + This approach, however, allows you to recover the maximum amount of free space + from the pool immediately without having to wait. + .Pp + Also note that an on-demand TRIM operation can be initiated irrespective of the + .Sy autotrim + pool property setting. + It does, however, respect the + .Sy forcetrim + pool property. + .Pp + An on-demand TRIM operation does not conflict with an ongoing scrub, but it can + put significant I/O stress on the underlying vdevs. + A resilver, however, automatically stops an on-demand TRIM operation. + You can manually reinitiate the TRIM operation after the resilver has started, + by simply reissuing the + .Nm zpool Cm trim + command. + .Pp + Adding a vdev during TRIM is supported, although the progression display in + .Nm zpool Cm status + might not be entirely accurate in that case + .Pq TRIM will complete before reaching 100% . + Removing or detaching a vdev will prematurely terminate an on-demand TRIM + operation. + .Bl -tag -width Ds + .It Fl r Ar rate + Controls the speed at which the TRIM operation progresses. + Without this option, TRIM is executed in parallel on all top-level vdevs as + quickly as possible. + This option allows you to control how fast + .Pq in bytes per second + the TRIM is executed. + This rate is applied on a per-vdev basis, i.e. every top-level vdev in the pool + tries to match this speed. + .Pp + Due to limitations in how the algorithm is designed, TRIMs are executed in + whole-metaslab increments. + Each top-level vdev contains approximately 200 metaslabs, so a rate-limited TRIM + progresses in steps, i.e. it TRIMs one metaslab completely and then waits for a + while so that over the whole device, the speed averages out. + .Pp + When an on-demand TRIM operation is already in progress, this option changes its + rate. + To change a rate-limited TRIM to an unlimited one, simply execute the + .Nm zpool Cm trim + command without the + .Fl r + option. + .It Fl s + Stop trimming. + If an on-demand TRIM operation is not ongoing at the moment, this does nothing + and the command returns success. + .El + .It Xo + .Nm .Cm upgrade .Xc Displays pools which do not have all supported features enabled and pools formatted using a legacy ZFS version number. These pools can continue to be used, but some features may not be available.
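An illustrative on-demand TRIM session (pool name and rate are hypothetical; the rate is assumed to be given in raw bytes per second):
    # zpool trim tank                  # trim all free space as fast as possible
    # zpool trim -r 104857600 tank     # limit each top-level vdev to about 100 MB/s
    # zpool trim -s tank               # stop the on-demand trim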
*** 1667,1676 **** --- 1887,1926 ---- .Fl V flag is specified, no features will be enabled on the pool. This option can only be used to increase the version number up to the last supported legacy version number. .El + .It Xo + .Nm + .Cm vdev-get + .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ... + .Ar pool + .Ar vdev-name Ns | Ns Ar vdev-guid + .Xc + Retrieves the given list of vdev properties + .Po or all properties if + .Sy all + is used + .Pc + for the specified vdev of the specified storage pool. + These properties are displayed in the same manner as the pool properties. + The operation is supported for leaf-level vdevs only. + See the + .Sx Device Properties + section for more information on the available properties. + .It Xo + .Nm + .Cm vdev-set + .Ar property Ns = Ns Ar value + .Ar pool + .Ar vdev-name Ns | Ns Ar vdev-guid + .Xc + Sets the given property on the specified device of the specified pool. + If a top-level vdev is specified, sets the property on all the child devices. + See the + .Sx Device Properties + section for more information on what properties can be set and acceptable values. .El .Sh EXIT STATUS The following exit values are returned: .Bl -tag -width Ds .It Sy 0
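A hedged example of the vdev property subcommands (pool, device, and CoS descriptor names are hypothetical):
    # zpool vdev-get all tank c0t1d0
    # zpool vdev-set cos=fastcos tank c0t1d0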
*** 1809,1824 **** .Cm iostat option as follows: .Bd -literal # zpool iostat -v pool 5 .Ed ! .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device ! The following commands remove the mirrored log device ! .Sy mirror-2 ! and mirrored top-level data device ! .Sy mirror-1 . ! .Pp Given this configuration: .Bd -literal pool: tank state: ONLINE scrub: none requested --- 2059,2071 ---- .Cm iostat option as follows: .Bd -literal # zpool iostat -v pool 5 .Ed ! .It Sy Example 14 No Removing a Mirrored Log Device ! The following command removes the mirrored log device ! .Sy mirror-2 . Given this configuration: .Bd -literal pool: tank state: ONLINE scrub: none requested
*** 1842,1858 **** .Sy mirror-2 is: .Bd -literal # zpool remove tank mirror-2 .Ed - .Pp - The command to remove the mirrored data - .Sy mirror-1 - is: - .Bd -literal - # zpool remove tank mirror-1 - .Ed .It Sy Example 15 No Displaying expanded space on a device The following command displays the detailed information for the pool .Em data . This pool is comprised of a single raidz vdev where one of its devices increased its capacity by 10GB. --- 2089,2098 ----