NEX-19592 zfs_dbgmsg should not contain info calculated latency
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Evan Layton <evan.layton@nexenta.com>
Reviewed by: Rick McNeal <rick.mcneal@nexenta.com>
NEX-17348 The ZFS deadman timer is currently set too high
Reviewed by: Evan Layton <evan.layton@nexenta.com>
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Joyce McIntosh <joyce.macintosh@nexenta.com>
NEX-9200 Improve the scalability of attribute locking in zfs_zget
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-13140 DVA-throttle support for special-class
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9989 Changing volume names can result in double imports and data corruption
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-10069 ZFS_READONLY is a little too strict (fix test lint)
NEX-9553 Move ss_fill gap logic from scan algorithm into range_tree.c
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-6088 ZFS scrub/resilver take excessively long due to issuing lots of random IO
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5856 ddt_capped isn't reset when deduped dataset is destroyed
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-5553 ZFS auto-trim, manual-trim and scrub can race and deadlock
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5795 Rename 'wrc' as 'wbc' in the source and in the tech docs
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-5064 On-demand trim should store operation start and stop time
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-5188 Removed special-vdev causes panic on read or on get size of special-bp
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5186 smf-tests contains built files and it shouldn't
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
NEX-5168 cleanup and productize non-default latency based writecache load-balancer
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-3729 KRRP changes mess up iostat(1M)
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-4807 writecache load-balancing statistics: several distinct problems, must be revisited and revised
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4876 On-demand TRIM shouldn't use system_taskq and should queue jobs
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4683 WRC: Special block pointer must know that it is special
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-4677 Fix for NEX-4619 build breakage
NEX-4620 ZFS autotrim triggering is unreliable
NEX-4622 On-demand TRIM code illogically enumerates metaslabs via mg_ms_tree
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
NEX-4619 Want kstats to monitor TRIM and UNMAP operation
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
4185 add new cryptographic checksums to ZFS: SHA-512, Skein, Edon-R (fix studio build)
4185 add new cryptographic checksums to ZFS: SHA-512, Skein, Edon-R
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Richard Lowe <richlowe@richlowe.net>
Approved by: Garrett D'Amore <garrett@damore.org>
5818 zfs {ref}compressratio is incorrect with 4k sector size
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Richard Elling <richard.elling@richardelling.com>
Reviewed by: Steven Hartland <killing@multiplay.co.uk>
Reviewed by: Don Brady <dev.fs.zfs@gmail.com>
Approved by: Albert Lee <trisk@omniti.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Revert "NEX-4476 WRC: Allow to use write back cache per tree of datasets"
This reverts commit fe97b74444278a6f36fec93179133641296312da.
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4245 WRC: Code cleanup and refactoring to simplify merge with upstream
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4203 spa_config_tryenter incorrectly handles the multiple-lock case
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3965 System may panic on the importing of pool with WRC
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Revert "NEX-3965 System may panic on the importing of pool with WRC"
This reverts commit 45bc50222913cddafde94621d28b78d6efaea897.
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3965 System may panic on the importing of pool with WRC
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3558 KRRP Integration
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
NEX-3165 need some dedup improvements
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
4391 panic system rather than corrupting pool if we hit bug 4390
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Approved by: Gordon Ross <gwr@nexenta.com>
4370 avoid transmitting holes during zfs send
4371 DMU code clean up
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Approved by: Garrett D'Amore <garrett@damore.org>
OS-114 Heap leak when exporting/destroying pools with CoS
SUP-577 deadlock between zpool detach and syseventd
OS-80 support for vdev and CoS properties for the new I/O scheduler
OS-95 lint warning introduced by OS-61
Fixup merge results
re #13333 rb4362 - eliminated spa_update_iotime() to fix the stats
re #12643 rb4064 ZFS meta refactoring - vdev utilization tracking, auto-dedup
re #12585 rb4049 ZFS++ work port - refactoring to improve separation of open/closed code, bug fixes, performance improvements - open code
re #8346 rb2639 KT disk failures
Bug 11205: add missing libzfs_closed_stubs.c to fix opensource-only build.
ZFS plus work: special vdevs, cos, cos/vdev properties

          --- old/usr/src/uts/common/fs/zfs/spa_misc.c
          +++ new/usr/src/uts/common/fs/zfs/spa_misc.c
↓ open down ↓ 13 lines elided ↑ open up ↑
  14   14   * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15   15   * If applicable, add the following below this CDDL HEADER, with the
  16   16   * fields enclosed by brackets "[]" replaced with your own identifying
  17   17   * information: Portions Copyright [yyyy] [name of copyright owner]
  18   18   *
  19   19   * CDDL HEADER END
  20   20   */
  21   21  /*
  22   22   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
  23   23   * Copyright (c) 2011, 2017 by Delphix. All rights reserved.
  24      - * Copyright 2015 Nexenta Systems, Inc.  All rights reserved.
  25   24   * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
       25 + * Copyright 2019 Nexenta Systems, Inc.  All rights reserved.
  26   26   * Copyright 2013 Saso Kiselkov. All rights reserved.
  27   27   * Copyright (c) 2014 Integros [integros.com]
  28   28   * Copyright (c) 2017 Datto Inc.
  29   29   */
  30   30  
  31   31  #include <sys/zfs_context.h>
  32   32  #include <sys/spa_impl.h>
  33   33  #include <sys/spa_boot.h>
  34   34  #include <sys/zio.h>
  35   35  #include <sys/zio_checksum.h>
↓ open down ↓ 9 lines elided ↑ open up ↑
  45   45  #include <sys/avl.h>
  46   46  #include <sys/unique.h>
  47   47  #include <sys/dsl_pool.h>
  48   48  #include <sys/dsl_dir.h>
  49   49  #include <sys/dsl_prop.h>
  50   50  #include <sys/dsl_scan.h>
  51   51  #include <sys/fs/zfs.h>
  52   52  #include <sys/metaslab_impl.h>
  53   53  #include <sys/arc.h>
  54   54  #include <sys/ddt.h>
       55 +#include <sys/cos.h>
  55   56  #include "zfs_prop.h"
  56   57  #include <sys/zfeature.h>
  57   58  
  58   59  /*
  59   60   * SPA locking
  60   61   *
  61   62   * There are four basic locks for managing spa_t structures:
  62   63   *
  63   64   * spa_namespace_lock (global mutex)
  64   65   *
↓ open down ↓ 154 lines elided ↑ open up ↑
 219  220   *                              cache, and release the namespace lock.
 220  221   *
 221  222   * vdev state is protected by spa_vdev_state_enter() / spa_vdev_state_exit().
 222  223   * Like spa_vdev_enter/exit, these are convenience wrappers -- the actual
 223  224   * locking is, always, based on spa_namespace_lock and spa_config_lock[].
 224  225   *
 225  226   * spa_rename() is also implemented within this file since it requires
 226  227   * manipulation of the namespace.
 227  228   */
 228  229  
      230 +struct spa_trimstats {
      231 +        kstat_named_t   st_extents;             /* # of extents issued to zio */
      232 +        kstat_named_t   st_bytes;               /* # of bytes issued to zio */
      233 +        kstat_named_t   st_extents_skipped;     /* # of extents too small */
      234 +        kstat_named_t   st_bytes_skipped;       /* bytes in extents_skipped */
      235 +        kstat_named_t   st_auto_slow;           /* trim slow, exts dropped */
      236 +};
      237 +
 229  238  static avl_tree_t spa_namespace_avl;
 230  239  kmutex_t spa_namespace_lock;
 231  240  static kcondvar_t spa_namespace_cv;
 232  241  static int spa_active_count;
 233  242  int spa_max_replication_override = SPA_DVAS_PER_BP;
 234  243  
 235  244  static kmutex_t spa_spare_lock;
 236  245  static avl_tree_t spa_spare_avl;
 237  246  static kmutex_t spa_l2cache_lock;
 238  247  static avl_tree_t spa_l2cache_avl;
 239  248  
 240  249  kmem_cache_t *spa_buffer_pool;
 241  250  int spa_mode_global;
 242  251  
 243  252  #ifdef ZFS_DEBUG
 244      -/*
 245      - * Everything except dprintf, spa, and indirect_remap is on by default
 246      - * in debug builds.
 247      - */
 248      -int zfs_flags = ~(ZFS_DEBUG_DPRINTF | ZFS_DEBUG_SPA | ZFS_DEBUG_INDIRECT_REMAP);
      253 +/* Everything except dprintf and spa is on by default in debug builds */
      254 +int zfs_flags = ~(ZFS_DEBUG_DPRINTF | ZFS_DEBUG_SPA);
 249  255  #else
 250  256  int zfs_flags = 0;
 251  257  #endif
 252  258  
      259 +#define ZFS_OBJ_MTX_DEFAULT_SZ  64
      260 +uint64_t spa_obj_mtx_sz = ZFS_OBJ_MTX_DEFAULT_SZ;
      261 +
 253  262  /*
 254  263   * zfs_recover can be set to nonzero to attempt to recover from
 255  264   * otherwise-fatal errors, typically caused by on-disk corruption.  When
 256  265   * set, calls to zfs_panic_recover() will turn into warning messages.
 257  266   * This should only be used as a last resort, as it typically results
 258  267   * in leaked space, or worse.
 259  268   */
 260  269  boolean_t zfs_recover = B_FALSE;
 261  270  
 262  271  /*
↓ open down ↓ 21 lines elided ↑ open up ↑
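The struct spa_trimstats introduced above only declares the TRIM/UNMAP counters (NEX-4619); they are registered per pool by spa_trimstats_create(), whose body begins at the very end of this diff. Purely as an illustration of the illumos kstat pattern involved (a hedged sketch, not the code from this change; the counter names and the "misc" class string are invented for the example), a named-kstat group for these fields would be set up along these lines:

        /*
         * Illustrative sketch only -- not spa_trimstats_create() itself.
         * Registers the counters declared in struct spa_trimstats as a
         * named kstat under the "zfs" module.
         */
        static kstat_t *
        example_trimstats_kstat(spa_t *spa, struct spa_trimstats *st)
        {
                kstat_t *ksp;

                ksp = kstat_create("zfs", 0, spa_name(spa), "misc",
                    KSTAT_TYPE_NAMED,
                    sizeof (*st) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL);
                if (ksp == NULL)
                        return (NULL);

                kstat_named_init(&st->st_extents, "extents", KSTAT_DATA_UINT64);
                kstat_named_init(&st->st_bytes, "bytes", KSTAT_DATA_UINT64);
                kstat_named_init(&st->st_extents_skipped, "extents_skipped",
                    KSTAT_DATA_UINT64);
                kstat_named_init(&st->st_bytes_skipped, "bytes_skipped",
                    KSTAT_DATA_UINT64);
                kstat_named_init(&st->st_auto_slow, "auto_slow",
                    KSTAT_DATA_UINT64);

                ksp->ks_data = st;   /* KSTAT_FLAG_VIRTUAL: caller owns buffer */
                kstat_install(ksp);
                return (ksp);
        }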
 284  293   * forward progress regardless.  In case (b), because the error is
 285  294   * permanent, the best we can do is leak the minimum amount of space,
 286  295   * which is what setting this flag will do.  Therefore, it is reasonable
 287  296   * for this flag to normally be set, but we chose the more conservative
 288  297   * approach of not setting it, so that there is no possibility of
 289  298   * leaking space in the "partial temporary" failure case.
 290  299   */
 291  300  boolean_t zfs_free_leak_on_eio = B_FALSE;
 292  301  
 293  302  /*
      303 + * alpha for spa_update_latency() rolling average of pool latency, which
      304 + * is updated on every txg commit.
      305 + */
      306 +int64_t zfs_root_latency_alpha = 10;
      307 +
      308 +/*
 294  309   * Expiration time in milliseconds. This value has two meanings. First it is
 295  310   * used to determine when the spa_deadman() logic should fire. By default the
 296      - * spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
      311 + * spa_deadman() will fire if spa_sync() has not completed in 250 seconds.
 297  312   * Secondly, the value determines if an I/O is considered "hung". Any I/O that
 298  313   * has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
 299  314   * in a system panic.
 300  315   */
 301      -uint64_t zfs_deadman_synctime_ms = 1000000ULL;
      316 +uint64_t zfs_deadman_synctime_ms = 250000ULL;
 302  317  
 303  318  /*
 304  319   * Check time in milliseconds. This defines the frequency at which we check
 305  320   * for hung I/O.
 306  321   */
 307  322  uint64_t zfs_deadman_checktime_ms = 5000ULL;
 308  323  
 309  324  /*
 310  325   * Override the zfs deadman behavior via /etc/system. By default the
 311  326   * deadman is enabled except on VMware and sparc deployments.
↓ open down ↓ 35 lines elided ↑ open up ↑
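For reference, the deadman tunables above (the sync timeout is lowered to 250 seconds by NEX-17348) can be overridden from /etc/system in the usual illumos fashion; the values below are only examples:

        set zfs:zfs_deadman_synctime_ms = 1000000
        set zfs:zfs_deadman_checktime_ms = 5000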
 347  362   *
 348  363   * Note that on very small pools, the slop space will be larger than
 349  364   * 3.2%, in an effort to have it be at least spa_min_slop (128MB),
 350  365   * but we never allow it to be more than half the pool size.
 351  366   *
 352  367   * See also the comments in zfs_space_check_t.
 353  368   */
 354  369  int spa_slop_shift = 5;
 355  370  uint64_t spa_min_slop = 128 * 1024 * 1024;
 356  371  
 357      -/*PRINTFLIKE2*/
 358      -void
 359      -spa_load_failed(spa_t *spa, const char *fmt, ...)
 360      -{
 361      -        va_list adx;
 362      -        char buf[256];
      372 +static void spa_trimstats_create(spa_t *spa);
      373 +static void spa_trimstats_destroy(spa_t *spa);
 363  374  
 364      -        va_start(adx, fmt);
 365      -        (void) vsnprintf(buf, sizeof (buf), fmt, adx);
 366      -        va_end(adx);
 367      -
 368      -        zfs_dbgmsg("spa_load(%s, config %s): FAILED: %s", spa->spa_name,
 369      -            spa->spa_trust_config ? "trusted" : "untrusted", buf);
 370      -}
 371      -
 372      -/*PRINTFLIKE2*/
 373      -void
 374      -spa_load_note(spa_t *spa, const char *fmt, ...)
 375      -{
 376      -        va_list adx;
 377      -        char buf[256];
 378      -
 379      -        va_start(adx, fmt);
 380      -        (void) vsnprintf(buf, sizeof (buf), fmt, adx);
 381      -        va_end(adx);
 382      -
 383      -        zfs_dbgmsg("spa_load(%s, config %s): %s", spa->spa_name,
 384      -            spa->spa_trust_config ? "trusted" : "untrusted", buf);
 385      -}
 386      -
 387  375  /*
 388  376   * ==========================================================================
 389  377   * SPA config locking
 390  378   * ==========================================================================
 391  379   */
 392  380  static void
 393  381  spa_config_lock_init(spa_t *spa)
 394  382  {
 395  383          for (int i = 0; i < SCL_LOCKS; i++) {
 396  384                  spa_config_lock_t *scl = &spa->spa_config_lock[i];
↓ open down ↓ 72 lines elided ↑ open up ↑
 469  457                          while (!refcount_is_zero(&scl->scl_count)) {
 470  458                                  scl->scl_write_wanted++;
 471  459                                  cv_wait(&scl->scl_cv, &scl->scl_lock);
 472  460                                  scl->scl_write_wanted--;
 473  461                          }
 474  462                          scl->scl_writer = curthread;
 475  463                  }
 476  464                  (void) refcount_add(&scl->scl_count, tag);
 477  465                  mutex_exit(&scl->scl_lock);
 478  466          }
 479      -        ASSERT3U(wlocks_held, <=, locks);
      467 +        ASSERT(wlocks_held <= locks);
 480  468  }
 481  469  
 482  470  void
 483  471  spa_config_exit(spa_t *spa, int locks, void *tag)
 484  472  {
 485  473          for (int i = SCL_LOCKS - 1; i >= 0; i--) {
 486  474                  spa_config_lock_t *scl = &spa->spa_config_lock[i];
 487  475                  if (!(locks & (1 << i)))
 488  476                          continue;
 489  477                  mutex_enter(&scl->scl_lock);
↓ open down ↓ 90 lines elided ↑ open up ↑
 580  568   * spa_namespace_lock.  The caller must ensure that the spa_t doesn't already
 581  569   * exist by calling spa_lookup() first.
 582  570   */
 583  571  spa_t *
 584  572  spa_add(const char *name, nvlist_t *config, const char *altroot)
 585  573  {
 586  574          spa_t *spa;
 587  575          spa_config_dirent_t *dp;
 588  576          cyc_handler_t hdlr;
 589  577          cyc_time_t when;
      578 +        uint64_t guid;
 590  579  
 591  580          ASSERT(MUTEX_HELD(&spa_namespace_lock));
 592  581  
 593  582          spa = kmem_zalloc(sizeof (spa_t), KM_SLEEP);
 594  583  
 595  584          mutex_init(&spa->spa_async_lock, NULL, MUTEX_DEFAULT, NULL);
 596  585          mutex_init(&spa->spa_errlist_lock, NULL, MUTEX_DEFAULT, NULL);
 597  586          mutex_init(&spa->spa_errlog_lock, NULL, MUTEX_DEFAULT, NULL);
 598  587          mutex_init(&spa->spa_evicting_os_lock, NULL, MUTEX_DEFAULT, NULL);
 599  588          mutex_init(&spa->spa_history_lock, NULL, MUTEX_DEFAULT, NULL);
 600  589          mutex_init(&spa->spa_proc_lock, NULL, MUTEX_DEFAULT, NULL);
 601  590          mutex_init(&spa->spa_props_lock, NULL, MUTEX_DEFAULT, NULL);
 602  591          mutex_init(&spa->spa_cksum_tmpls_lock, NULL, MUTEX_DEFAULT, NULL);
 603  592          mutex_init(&spa->spa_scrub_lock, NULL, MUTEX_DEFAULT, NULL);
 604  593          mutex_init(&spa->spa_suspend_lock, NULL, MUTEX_DEFAULT, NULL);
 605  594          mutex_init(&spa->spa_vdev_top_lock, NULL, MUTEX_DEFAULT, NULL);
 606  595          mutex_init(&spa->spa_iokstat_lock, NULL, MUTEX_DEFAULT, NULL);
 607      -        mutex_init(&spa->spa_alloc_lock, NULL, MUTEX_DEFAULT, NULL);
      596 +        mutex_init(&spa->spa_cos_props_lock, NULL, MUTEX_DEFAULT, NULL);
      597 +        mutex_init(&spa->spa_vdev_props_lock, NULL, MUTEX_DEFAULT, NULL);
      598 +        mutex_init(&spa->spa_perfmon.perfmon_lock, NULL, MUTEX_DEFAULT, NULL);
 608  599  
      600 +        mutex_init(&spa->spa_auto_trim_lock, NULL, MUTEX_DEFAULT, NULL);
      601 +        mutex_init(&spa->spa_man_trim_lock, NULL, MUTEX_DEFAULT, NULL);
      602 +
 609  603          cv_init(&spa->spa_async_cv, NULL, CV_DEFAULT, NULL);
 610  604          cv_init(&spa->spa_evicting_os_cv, NULL, CV_DEFAULT, NULL);
 611  605          cv_init(&spa->spa_proc_cv, NULL, CV_DEFAULT, NULL);
 612  606          cv_init(&spa->spa_scrub_io_cv, NULL, CV_DEFAULT, NULL);
 613  607          cv_init(&spa->spa_suspend_cv, NULL, CV_DEFAULT, NULL);
      608 +        cv_init(&spa->spa_auto_trim_done_cv, NULL, CV_DEFAULT, NULL);
      609 +        cv_init(&spa->spa_man_trim_update_cv, NULL, CV_DEFAULT, NULL);
      610 +        cv_init(&spa->spa_man_trim_done_cv, NULL, CV_DEFAULT, NULL);
 614  611  
 615  612          for (int t = 0; t < TXG_SIZE; t++)
 616  613                  bplist_create(&spa->spa_free_bplist[t]);
 617  614  
 618  615          (void) strlcpy(spa->spa_name, name, sizeof (spa->spa_name));
 619  616          spa->spa_state = POOL_STATE_UNINITIALIZED;
 620  617          spa->spa_freeze_txg = UINT64_MAX;
 621  618          spa->spa_final_txg = UINT64_MAX;
 622  619          spa->spa_load_max_txg = UINT64_MAX;
 623  620          spa->spa_proc = &p0;
 624  621          spa->spa_proc_state = SPA_PROC_NONE;
 625      -        spa->spa_trust_config = B_TRUE;
      622 +        if (spa_obj_mtx_sz < 1 || spa_obj_mtx_sz > INT_MAX)
      623 +                spa->spa_obj_mtx_sz = ZFS_OBJ_MTX_DEFAULT_SZ;
      624 +        else
      625 +                spa->spa_obj_mtx_sz = spa_obj_mtx_sz;
 626  626  
      627 +        /*
      628 +         * Grabbing the guid here is just so that spa_config_guid_exists can
      629 +         * check early on to protect against doubled imports of the same pool
      630 +         * under different names. If the GUID isn't provided here, we will
      631 +         * let spa generate one later on during spa_load, although in that
      632 +         * case we might not be able to provide the double-import protection.
      633 +         */
      634 +        if (nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_GUID, &guid) == 0) {
      635 +                spa->spa_config_guid = guid;
      636 +                ASSERT(!spa_config_guid_exists(guid));
      637 +        }
      638 +
 627  639          hdlr.cyh_func = spa_deadman;
 628  640          hdlr.cyh_arg = spa;
 629  641          hdlr.cyh_level = CY_LOW_LEVEL;
 630  642  
 631  643          spa->spa_deadman_synctime = MSEC2NSEC(zfs_deadman_synctime_ms);
 632  644  
 633  645          /*
 634  646           * This determines how often we need to check for hung I/Os after
 635  647           * the cyclic has already fired. Since checking for hung I/Os is
 636  648           * an expensive operation we don't want to check too frequently.
↓ open down ↓ 11 lines elided ↑ open up ↑
 648  660          avl_add(&spa_namespace_avl, spa);
 649  661  
 650  662          /*
 651  663           * Set the alternate root, if there is one.
 652  664           */
 653  665          if (altroot) {
 654  666                  spa->spa_root = spa_strdup(altroot);
 655  667                  spa_active_count++;
 656  668          }
 657  669  
 658      -        avl_create(&spa->spa_alloc_tree, zio_bookmark_compare,
 659      -            sizeof (zio_t), offsetof(zio_t, io_alloc_node));
 660      -
 661  670          /*
 662  671           * Every pool starts with the default cachefile
 663  672           */
 664  673          list_create(&spa->spa_config_list, sizeof (spa_config_dirent_t),
 665  674              offsetof(spa_config_dirent_t, scd_link));
 666  675  
 667  676          dp = kmem_zalloc(sizeof (spa_config_dirent_t), KM_SLEEP);
 668  677          dp->scd_path = altroot ? NULL : spa_strdup(spa_config_path);
 669  678          list_insert_head(&spa->spa_config_list, dp);
 670  679  
↓ open down ↓ 11 lines elided ↑ open up ↑
 682  691  
 683  692                  VERIFY(nvlist_dup(config, &spa->spa_config, 0) == 0);
 684  693          }
 685  694  
 686  695          if (spa->spa_label_features == NULL) {
 687  696                  VERIFY(nvlist_alloc(&spa->spa_label_features, NV_UNIQUE_NAME,
 688  697                      KM_SLEEP) == 0);
 689  698          }
 690  699  
 691  700          spa->spa_iokstat = kstat_create("zfs", 0, name,
 692      -            "disk", KSTAT_TYPE_IO, 1, 0);
      701 +            "zfs", KSTAT_TYPE_IO, 1, 0);
 693  702          if (spa->spa_iokstat) {
 694  703                  spa->spa_iokstat->ks_lock = &spa->spa_iokstat_lock;
 695  704                  kstat_install(spa->spa_iokstat);
 696  705          }
 697  706  
      707 +        spa_trimstats_create(spa);
      708 +
 698  709          spa->spa_debug = ((zfs_flags & ZFS_DEBUG_SPA) != 0);
 699  710  
      711 +        autosnap_init(spa);
      712 +
      713 +        spa_cos_init(spa);
      714 +
      715 +        spa_special_init(spa);
      716 +
 700  717          spa->spa_min_ashift = INT_MAX;
 701  718          spa->spa_max_ashift = 0;
      719 +        wbc_init(&spa->spa_wbc, spa);
 702  720  
 703  721          /*
 704  722           * As a pool is being created, treat all features as disabled by
 705  723           * setting SPA_FEATURE_DISABLED for all entries in the feature
 706  724           * refcount cache.
 707  725           */
 708  726          for (int i = 0; i < SPA_FEATURES; i++) {
 709  727                  spa->spa_feat_refcount_cache[i] = SPA_FEATURE_DISABLED;
 710  728          }
 711  729  
↓ open down ↓ 24 lines elided ↑ open up ↑
 736  754                  spa_active_count--;
 737  755          }
 738  756  
 739  757          while ((dp = list_head(&spa->spa_config_list)) != NULL) {
 740  758                  list_remove(&spa->spa_config_list, dp);
 741  759                  if (dp->scd_path != NULL)
 742  760                          spa_strfree(dp->scd_path);
 743  761                  kmem_free(dp, sizeof (spa_config_dirent_t));
 744  762          }
 745  763  
 746      -        avl_destroy(&spa->spa_alloc_tree);
 747  764          list_destroy(&spa->spa_config_list);
 748  765  
      766 +        wbc_fini(&spa->spa_wbc);
      767 +
      768 +        spa_special_fini(spa);
      769 +
      770 +        spa_cos_fini(spa);
      771 +
      772 +        autosnap_fini(spa);
      773 +
 749  774          nvlist_free(spa->spa_label_features);
 750  775          nvlist_free(spa->spa_load_info);
 751  776          spa_config_set(spa, NULL);
 752  777  
 753  778          mutex_enter(&cpu_lock);
 754  779          if (spa->spa_deadman_cycid != CYCLIC_NONE)
 755  780                  cyclic_remove(spa->spa_deadman_cycid);
 756  781          mutex_exit(&cpu_lock);
 757  782          spa->spa_deadman_cycid = CYCLIC_NONE;
 758  783  
 759  784          refcount_destroy(&spa->spa_refcount);
 760  785  
 761  786          spa_config_lock_destroy(spa);
 762  787  
      788 +        spa_trimstats_destroy(spa);
      789 +
 763  790          kstat_delete(spa->spa_iokstat);
 764  791          spa->spa_iokstat = NULL;
 765  792  
 766  793          for (int t = 0; t < TXG_SIZE; t++)
 767  794                  bplist_destroy(&spa->spa_free_bplist[t]);
 768  795  
 769  796          zio_checksum_templates_free(spa);
 770  797  
 771  798          cv_destroy(&spa->spa_async_cv);
 772  799          cv_destroy(&spa->spa_evicting_os_cv);
 773  800          cv_destroy(&spa->spa_proc_cv);
 774  801          cv_destroy(&spa->spa_scrub_io_cv);
 775  802          cv_destroy(&spa->spa_suspend_cv);
      803 +        cv_destroy(&spa->spa_auto_trim_done_cv);
      804 +        cv_destroy(&spa->spa_man_trim_update_cv);
      805 +        cv_destroy(&spa->spa_man_trim_done_cv);
 776  806  
 777      -        mutex_destroy(&spa->spa_alloc_lock);
 778  807          mutex_destroy(&spa->spa_async_lock);
 779  808          mutex_destroy(&spa->spa_errlist_lock);
 780  809          mutex_destroy(&spa->spa_errlog_lock);
 781  810          mutex_destroy(&spa->spa_evicting_os_lock);
 782  811          mutex_destroy(&spa->spa_history_lock);
 783  812          mutex_destroy(&spa->spa_proc_lock);
 784  813          mutex_destroy(&spa->spa_props_lock);
 785  814          mutex_destroy(&spa->spa_cksum_tmpls_lock);
 786  815          mutex_destroy(&spa->spa_scrub_lock);
 787  816          mutex_destroy(&spa->spa_suspend_lock);
 788  817          mutex_destroy(&spa->spa_vdev_top_lock);
 789  818          mutex_destroy(&spa->spa_iokstat_lock);
      819 +        mutex_destroy(&spa->spa_cos_props_lock);
      820 +        mutex_destroy(&spa->spa_vdev_props_lock);
      821 +        mutex_destroy(&spa->spa_auto_trim_lock);
      822 +        mutex_destroy(&spa->spa_man_trim_lock);
 790  823  
 791  824          kmem_free(spa, sizeof (spa_t));
 792  825  }
 793  826  
 794  827  /*
 795  828   * Given a pool, return the next pool in the namespace, or NULL if there is
 796  829   * none.  If 'prev' is NULL, return the first pool.
 797  830   */
 798  831  spa_t *
 799  832  spa_next(spa_t *prev)
↓ open down ↓ 168 lines elided ↑ open up ↑
 968 1001          found = avl_find(avl, &search, &where);
 969 1002          ASSERT(found != NULL);
 970 1003          ASSERT(found->aux_pool == 0ULL);
 971 1004  
 972 1005          found->aux_pool = spa_guid(vd->vdev_spa);
 973 1006  }
 974 1007  
 975 1008  /*
 976 1009   * Spares are tracked globally due to the following constraints:
 977 1010   *
 978      - *      - A spare may be part of multiple pools.
 979      - *      - A spare may be added to a pool even if it's actively in use within
     1011 + *      - A spare may be part of multiple pools.
     1012 + *      - A spare may be added to a pool even if it's actively in use within
 980 1013   *        another pool.
 981      - *      - A spare in use in any pool can only be the source of a replacement if
     1014 + *      - A spare in use in any pool can only be the source of a replacement if
 982 1015   *        the target is a spare in the same pool.
 983 1016   *
 984 1017   * We keep track of all spares on the system through the use of a reference
 985 1018   * counted AVL tree.  When a vdev is added as a spare, or used as a replacement
 986 1019   * spare, then we bump the reference count in the AVL tree.  In addition, we set
 987 1020   * the 'vdev_isspare' member to indicate that the device is a spare (active or
 988 1021   * inactive).  When a spare is made active (used to replace a device in the
 989 1022   * pool), we also keep track of which pool its been made a part of.
 990 1023   *
 991 1024   * The 'spa_spare_lock' protects the AVL tree.  These functions are normally
↓ open down ↓ 111 lines elided ↑ open up ↑
1103 1136  /*
1104 1137   * Lock the given spa_t for the purpose of adding or removing a vdev.
1105 1138   * Grabs the global spa_namespace_lock plus the spa config lock for writing.
1106 1139   * It returns the next transaction group for the spa_t.
1107 1140   */
1108 1141  uint64_t
1109 1142  spa_vdev_enter(spa_t *spa)
1110 1143  {
1111 1144          mutex_enter(&spa->spa_vdev_top_lock);
1112 1145          mutex_enter(&spa_namespace_lock);
     1146 +        mutex_enter(&spa->spa_auto_trim_lock);
     1147 +        mutex_enter(&spa->spa_man_trim_lock);
     1148 +        spa_trim_stop_wait(spa);
1113 1149          return (spa_vdev_config_enter(spa));
1114 1150  }
1115 1151  
1116 1152  /*
1117 1153   * Internal implementation for spa_vdev_enter().  Used when a vdev
1118 1154   * operation requires multiple syncs (i.e. removing a device) while
1119 1155   * keeping the spa_namespace_lock held.
1120 1156   */
1121 1157  uint64_t
1122 1158  spa_vdev_config_enter(spa_t *spa)
↓ open down ↓ 28 lines elided ↑ open up ↑
1151 1187          if (error == 0 && !list_is_empty(&spa->spa_config_dirty_list)) {
1152 1188                  config_changed = B_TRUE;
1153 1189                  spa->spa_config_generation++;
1154 1190          }
1155 1191  
1156 1192          /*
1157 1193           * Verify the metaslab classes.
1158 1194           */
1159 1195          ASSERT(metaslab_class_validate(spa_normal_class(spa)) == 0);
1160 1196          ASSERT(metaslab_class_validate(spa_log_class(spa)) == 0);
     1197 +        ASSERT(metaslab_class_validate(spa_special_class(spa)) == 0);
1161 1198  
1162 1199          spa_config_exit(spa, SCL_ALL, spa);
1163 1200  
1164 1201          /*
1165 1202           * Panic the system if the specified tag requires it.  This
1166 1203           * is useful for ensuring that configurations are updated
1167 1204           * transactionally.
1168 1205           */
1169 1206          if (zio_injection_enabled)
1170 1207                  zio_handle_panic_injection(spa, tag, 0);
↓ open down ↓ 10 lines elided ↑ open up ↑
1181 1218                  ASSERT(!vd->vdev_detached || vd->vdev_dtl_sm == NULL);
1182 1219                  spa_config_enter(spa, SCL_ALL, spa, RW_WRITER);
1183 1220                  vdev_free(vd);
1184 1221                  spa_config_exit(spa, SCL_ALL, spa);
1185 1222          }
1186 1223  
1187 1224          /*
1188 1225           * If the config changed, update the config cache.
1189 1226           */
1190 1227          if (config_changed)
1191      -                spa_write_cachefile(spa, B_FALSE, B_TRUE);
     1228 +                spa_config_sync(spa, B_FALSE, B_TRUE);
1192 1229  }
1193 1230  
1194 1231  /*
1195 1232   * Unlock the spa_t after adding or removing a vdev.  Besides undoing the
1196 1233   * locking of spa_vdev_enter(), we also want make sure the transactions have
1197 1234   * synced to disk, and then update the global configuration cache with the new
1198 1235   * information.
1199 1236   */
1200 1237  int
1201 1238  spa_vdev_exit(spa_t *spa, vdev_t *vd, uint64_t txg, int error)
1202 1239  {
1203 1240          spa_vdev_config_exit(spa, vd, txg, error, FTAG);
     1241 +        mutex_exit(&spa->spa_man_trim_lock);
     1242 +        mutex_exit(&spa->spa_auto_trim_lock);
1204 1243          mutex_exit(&spa_namespace_lock);
1205 1244          mutex_exit(&spa->spa_vdev_top_lock);
1206 1245  
1207 1246          return (error);
1208 1247  }
1209 1248  
1210 1249  /*
1211 1250   * Lock the given spa_t for the purpose of changing vdev state.
1212 1251   */
1213 1252  void
↓ open down ↓ 51 lines elided ↑ open up ↑
1265 1304           * when the command completes, you expect no further I/O from ZFS.
1266 1305           */
1267 1306          if (vd != NULL)
1268 1307                  txg_wait_synced(spa->spa_dsl_pool, 0);
1269 1308  
1270 1309          /*
1271 1310           * If the config changed, update the config cache.
1272 1311           */
1273 1312          if (config_changed) {
1274 1313                  mutex_enter(&spa_namespace_lock);
1275      -                spa_write_cachefile(spa, B_FALSE, B_TRUE);
     1314 +                spa_config_sync(spa, B_FALSE, B_TRUE);
1276 1315                  mutex_exit(&spa_namespace_lock);
1277 1316          }
1278 1317  
1279 1318          return (error);
1280 1319  }
1281 1320  
1282 1321  /*
1283 1322   * ==========================================================================
1284 1323   * Miscellaneous functions
1285 1324   * ==========================================================================
↓ open down ↓ 57 lines elided ↑ open up ↑
1343 1382           */
1344 1383          vdev_config_dirty(spa->spa_root_vdev);
1345 1384  
1346 1385          spa_config_exit(spa, SCL_ALL, FTAG);
1347 1386  
1348 1387          txg_wait_synced(spa->spa_dsl_pool, 0);
1349 1388  
1350 1389          /*
1351 1390           * Sync the updated config cache.
1352 1391           */
1353      -        spa_write_cachefile(spa, B_FALSE, B_TRUE);
     1392 +        spa_config_sync(spa, B_FALSE, B_TRUE);
1354 1393  
1355 1394          spa_close(spa, FTAG);
1356 1395  
1357 1396          mutex_exit(&spa_namespace_lock);
1358 1397  
1359 1398          return (0);
1360 1399  }
1361 1400  
1362 1401  /*
1363 1402   * Return the spa_t associated with given pool_guid, if it exists.  If
↓ open down ↓ 37 lines elided ↑ open up ↑
1401 1440  
1402 1441  /*
1403 1442   * Determine whether a pool with the given pool_guid exists.
1404 1443   */
1405 1444  boolean_t
1406 1445  spa_guid_exists(uint64_t pool_guid, uint64_t device_guid)
1407 1446  {
1408 1447          return (spa_by_guid(pool_guid, device_guid) != NULL);
1409 1448  }
1410 1449  
     1450 +/*
     1451 + * Similar to spa_guid_exists, but uses the spa_config_guid and doesn't
     1452 + * filter the check by pool state (as spa_guid_exists does). This is
     1453 + * used to protect against attempting to spa_add the same pool (with the
     1454 + * same pool GUID) under different names. This situation can happen if
     1455 + * the boot_archive contains an outdated zpool.cache file after a pool
     1456 + * rename. That would make us import the pool twice, resulting in data
     1457 + * corruption. Normally the boot_archive shouldn't contain a zpool.cache
     1458 + * file, but if due to misconfiguration it does, this function serves as
     1459 + * a failsafe to prevent the double import.
     1460 + */
     1461 +boolean_t
     1462 +spa_config_guid_exists(uint64_t pool_guid)
     1463 +{
     1464 +        spa_t *spa;
     1465 +
     1466 +        ASSERT(MUTEX_HELD(&spa_namespace_lock));
     1467 +        if (pool_guid == 0)
     1468 +                return (B_FALSE);
     1469 +
     1470 +        for (spa = avl_first(&spa_namespace_avl); spa != NULL;
     1471 +            spa = AVL_NEXT(&spa_namespace_avl, spa)) {
     1472 +                if (spa->spa_config_guid == pool_guid)
     1473 +                        return (B_TRUE);
     1474 +        }
     1475 +
     1476 +        return (B_FALSE);
     1477 +}
     1478 +
1411 1479  char *
1412 1480  spa_strdup(const char *s)
1413 1481  {
1414 1482          size_t len;
1415 1483          char *new;
1416 1484  
1417 1485          len = strlen(s);
1418 1486          new = kmem_alloc(len + 1, KM_SLEEP);
1419 1487          bcopy(s, new, len);
1420 1488          new[len] = '\0';
↓ open down ↓ 138 lines elided ↑ open up ↑
1559 1627  {
1560 1628          return (spa->spa_dsl_pool);
1561 1629  }
1562 1630  
1563 1631  boolean_t
1564 1632  spa_is_initializing(spa_t *spa)
1565 1633  {
1566 1634          return (spa->spa_is_initializing);
1567 1635  }
1568 1636  
1569      -boolean_t
1570      -spa_indirect_vdevs_loaded(spa_t *spa)
1571      -{
1572      -        return (spa->spa_indirect_vdevs_loaded);
1573      -}
1574      -
1575 1637  blkptr_t *
1576 1638  spa_get_rootblkptr(spa_t *spa)
1577 1639  {
1578 1640          return (&spa->spa_ubsync.ub_rootbp);
1579 1641  }
1580 1642  
1581 1643  void
1582 1644  spa_set_rootblkptr(spa_t *spa, const blkptr_t *bp)
1583 1645  {
1584 1646          spa->spa_uberblock.ub_rootbp = *bp;
↓ open down ↓ 106 lines elided ↑ open up ↑
1691 1753  }
1692 1754  
1693 1755  /* ARGSUSED */
1694 1756  uint64_t
1695 1757  spa_get_worst_case_asize(spa_t *spa, uint64_t lsize)
1696 1758  {
1697 1759          return (lsize * spa_asize_inflation);
1698 1760  }
1699 1761  
1700 1762  /*
     1763 + * Get either on disk (phys == B_TRUE) or possible in core DDT size
     1764 + */
     1765 +uint64_t
     1766 +spa_get_ddts_size(spa_t *spa, boolean_t phys)
     1767 +{
     1768 +        if (phys)
     1769 +                return (spa->spa_ddt_dsize);
     1770 +
     1771 +        return (spa->spa_ddt_msize);
     1772 +}
     1773 +
     1774 +/*
     1775 + * Check to see if we need to stop DDT growth to stay within some limit
     1776 + */
     1777 +boolean_t
     1778 +spa_enable_dedup_cap(spa_t *spa)
     1779 +{
     1780 +        if (zfs_ddt_byte_ceiling != 0) {
     1781 +                if (zfs_ddts_msize > zfs_ddt_byte_ceiling) {
     1782 +                        /* need to limit DDT to an in core bytecount */
     1783 +                        return (B_TRUE);
     1784 +                }
     1785 +        } else if (zfs_ddt_limit_type == DDT_LIMIT_TO_ARC) {
     1786 +                if (zfs_ddts_msize > *arc_ddt_evict_threshold) {
     1787 +                        /* need to limit DDT to fit into ARC */
     1788 +                        return (B_TRUE);
     1789 +                }
     1790 +        } else if (zfs_ddt_limit_type == DDT_LIMIT_TO_L2ARC) {
     1791 +                if (spa->spa_l2arc_ddt_devs_size != 0) {
     1792 +                        if (spa_get_ddts_size(spa, B_TRUE) >
     1793 +                            spa->spa_l2arc_ddt_devs_size) {
     1794 +                                /* limit DDT to fit into L2ARC DDT dev */
     1795 +                                return (B_TRUE);
     1796 +                        }
     1797 +                } else if (zfs_ddts_msize > *arc_ddt_evict_threshold) {
     1798 +                        /* no L2ARC DDT dev - limit DDT to fit into ARC */
     1799 +                        return (B_TRUE);
     1800 +                }
     1801 +        }
     1802 +
     1803 +        return (B_FALSE);
     1804 +}
     1805 +
     1806 +/*
1701 1807   * Return the amount of slop space in bytes.  It is 1/32 of the pool (3.2%),
1702 1808   * or at least 128MB, unless that would cause it to be more than half the
1703 1809   * pool size.
1704 1810   *
1705 1811   * See the comment above spa_slop_shift for details.
1706 1812   */
1707 1813  uint64_t
1708 1814  spa_get_slop_space(spa_t *spa)
1709 1815  {
1710 1816          uint64_t space = spa_get_dspace(spa);
↓ open down ↓ 4 lines elided ↑ open up ↑
1715 1821  spa_get_dspace(spa_t *spa)
1716 1822  {
1717 1823          return (spa->spa_dspace);
1718 1824  }
1719 1825  
1720 1826  void
1721 1827  spa_update_dspace(spa_t *spa)
1722 1828  {
1723 1829          spa->spa_dspace = metaslab_class_get_dspace(spa_normal_class(spa)) +
1724 1830              ddt_get_dedup_dspace(spa);
1725      -        if (spa->spa_vdev_removal != NULL) {
1726      -                /*
1727      -                 * We can't allocate from the removing device, so
1728      -                 * subtract its size.  This prevents the DMU/DSL from
1729      -                 * filling up the (now smaller) pool while we are in the
1730      -                 * middle of removing the device.
1731      -                 *
1732      -                 * Note that the DMU/DSL doesn't actually know or care
1733      -                 * how much space is allocated (it does its own tracking
1734      -                 * of how much space has been logically used).  So it
1735      -                 * doesn't matter that the data we are moving may be
1736      -                 * allocated twice (on the old device and the new
1737      -                 * device).
1738      -                 */
1739      -                vdev_t *vd = spa->spa_vdev_removal->svr_vdev;
1740      -                spa->spa_dspace -= spa_deflate(spa) ?
1741      -                    vd->vdev_stat.vs_dspace : vd->vdev_stat.vs_space;
     1831 +}
     1832 +
     1833 +/*
     1834 + * EXPERIMENTAL
     1835 + * Use exponential moving average to track root vdev iotime, as well as top
     1836 + * level vdev iotime.
     1837 + * The principle: avg_new = avg_prev + (cur - avg_prev) * a / 100; a is
     1838 + * tuneable. For example, if a = 10 (alpha = 0.1), it will take 20 iterations,
     1839 + * or 100 seconds at 5 second txg commit intervals for the values from last 20
     1840 + * iterations to account for 66% of the moving average.
     1841 + * Currently, the challenge is that we keep track of iotime in cumulative
     1842 + * nanoseconds since zpool import, both for leaf and top vdevs, so a way of
     1843 + * getting delta pre/post txg commit is required.
     1844 + */
     1845 +
     1846 +void
     1847 +spa_update_latency(spa_t *spa)
     1848 +{
     1849 +        vdev_t *rvd = spa->spa_root_vdev;
     1850 +        vdev_stat_t *rvs = &rvd->vdev_stat;
     1851 +        for (int c = 0; c < rvd->vdev_children; c++) {
     1852 +                vdev_t *cvd = rvd->vdev_child[c];
     1853 +                vdev_stat_t *cvs = &cvd->vdev_stat;
     1854 +                mutex_enter(&rvd->vdev_stat_lock);
     1855 +
     1856 +                for (int t = 0; t < ZIO_TYPES; t++) {
     1857 +
     1858 +                        /*
     1859 +                         * Non-trivial bit here. We update the moving latency
     1860 +                         * average for each child vdev separately, but since we
     1861 +                         * want the average to settle at the same rate
     1862 +                         * regardless of top level vdev count, we effectively
     1863 +                         * divide our alpha by number of children of the root
     1864 +                         * vdev to account for that.
     1865 +                         */
     1866 +                        rvs->vs_latency[t] += ((((int64_t)cvs->vs_latency[t] -
     1867 +                            (int64_t)rvs->vs_latency[t]) *
     1868 +                            (int64_t)zfs_root_latency_alpha) / 100) /
     1869 +                            (int64_t)(rvd->vdev_children);
     1870 +                }
     1871 +                mutex_exit(&rvd->vdev_stat_lock);
1742 1872          }
1743 1873  }
1744 1874  
     1875 +
1745 1876  /*
1746 1877   * Return the failure mode that has been set to this pool. The default
1747 1878   * behavior will be to block all I/Os when a complete failure occurs.
1748 1879   */
1749 1880  uint8_t
1750 1881  spa_get_failmode(spa_t *spa)
1751 1882  {
1752 1883          return (spa->spa_failmode);
1753 1884  }
1754 1885  
↓ open down ↓ 2 lines elided ↑ open up ↑
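To put the spa_update_latency() smoothing above in concrete terms: with a per-commit smoothing factor of zfs_root_latency_alpha/100, the newest n txg commits carry a combined weight of 1 - (1 - alpha/100)^n in the average. A throwaway user-space sketch (illustrative only; compile with -lm) that tabulates this for the default alpha of 10:

        #include <stdio.h>
        #include <math.h>

        /* Cumulative weight of the newest n samples in an EMA with factor a. */
        int
        main(void)
        {
                double a = 10.0 / 100.0;        /* zfs_root_latency_alpha = 10 */

                for (int n = 5; n <= 40; n += 5)
                        printf("last %2d commits: %4.1f%% of the average\n",
                            n, 100.0 * (1.0 - pow(1.0 - a, n)));
                return (0);
        }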
1757 1888  {
1758 1889          return (spa->spa_suspended);
1759 1890  }
1760 1891  
1761 1892  uint64_t
1762 1893  spa_version(spa_t *spa)
1763 1894  {
1764 1895          return (spa->spa_ubsync.ub_version);
1765 1896  }
1766 1897  
     1898 +int
     1899 +spa_get_obj_mtx_sz(spa_t *spa)
     1900 +{
     1901 +        return (spa->spa_obj_mtx_sz);
     1902 +}
     1903 +
1767 1904  boolean_t
1768 1905  spa_deflate(spa_t *spa)
1769 1906  {
1770 1907          return (spa->spa_deflate);
1771 1908  }
1772 1909  
1773 1910  metaslab_class_t *
1774 1911  spa_normal_class(spa_t *spa)
1775 1912  {
1776 1913          return (spa->spa_normal_class);
1777 1914  }
1778 1915  
1779 1916  metaslab_class_t *
1780 1917  spa_log_class(spa_t *spa)
1781 1918  {
1782 1919          return (spa->spa_log_class);
1783 1920  }
1784 1921  
     1922 +metaslab_class_t *
     1923 +spa_special_class(spa_t *spa)
     1924 +{
     1925 +        return (spa->spa_special_class);
     1926 +}
     1927 +
1785 1928  void
1786 1929  spa_evicting_os_register(spa_t *spa, objset_t *os)
1787 1930  {
1788 1931          mutex_enter(&spa->spa_evicting_os_lock);
1789 1932          list_insert_head(&spa->spa_evicting_os_list, os);
1790 1933          mutex_exit(&spa->spa_evicting_os_lock);
1791 1934  }
1792 1935  
1793 1936  void
1794 1937  spa_evicting_os_deregister(spa_t *spa, objset_t *os)
↓ open down ↓ 8 lines elided ↑ open up ↑
1803 1946  spa_evicting_os_wait(spa_t *spa)
1804 1947  {
1805 1948          mutex_enter(&spa->spa_evicting_os_lock);
1806 1949          while (!list_is_empty(&spa->spa_evicting_os_list))
1807 1950                  cv_wait(&spa->spa_evicting_os_cv, &spa->spa_evicting_os_lock);
1808 1951          mutex_exit(&spa->spa_evicting_os_lock);
1809 1952  
1810 1953          dmu_buf_user_evict_wait();
1811 1954  }
1812 1955  
     1956 +uint64_t
     1957 +spa_class_alloc_percentage(metaslab_class_t *mc)
     1958 +{
     1959 +        uint64_t capacity = mc->mc_space;
     1960 +        uint64_t alloc = mc->mc_alloc;
     1961 +        uint64_t one_percent = capacity / 100;
     1962 +
     1963 +        return (alloc / one_percent);
     1964 +}
     1965 +
1813 1966  int
1814 1967  spa_max_replication(spa_t *spa)
1815 1968  {
1816 1969          /*
1817 1970           * As of SPA_VERSION == SPA_VERSION_DITTO_BLOCKS, we are able to
1818 1971           * handle BPs with more than one DVA allocated.  Set our max
1819 1972           * replication level accordingly.
1820 1973           */
1821 1974          if (spa_version(spa) < SPA_VERSION_DITTO_BLOCKS)
1822 1975                  return (1);
↓ open down ↓ 5 lines elided ↑ open up ↑
1828 1981  {
1829 1982          return (spa->spa_prev_software_version);
1830 1983  }
1831 1984  
1832 1985  uint64_t
1833 1986  spa_deadman_synctime(spa_t *spa)
1834 1987  {
1835 1988          return (spa->spa_deadman_synctime);
1836 1989  }
1837 1990  
     1991 +spa_force_trim_t
     1992 +spa_get_force_trim(spa_t *spa)
     1993 +{
     1994 +        return (spa->spa_force_trim);
     1995 +}
     1996 +
     1997 +spa_auto_trim_t
     1998 +spa_get_auto_trim(spa_t *spa)
     1999 +{
     2000 +        return (spa->spa_auto_trim);
     2001 +}
     2002 +
1838 2003  uint64_t
1839 2004  dva_get_dsize_sync(spa_t *spa, const dva_t *dva)
1840 2005  {
1841 2006          uint64_t asize = DVA_GET_ASIZE(dva);
1842 2007          uint64_t dsize = asize;
1843 2008  
1844 2009          ASSERT(spa_config_held(spa, SCL_ALL, RW_READER) != 0);
1845 2010  
1846 2011          if (asize != 0 && spa->spa_deflate) {
1847 2012                  vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
1848 2013                  dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
1849 2014          }
1850 2015  
1851 2016          return (dsize);
1852 2017  }
1853 2018  
     2019 +/*
     2020 + * This function walks over the all DVAs of the given BP and
     2021 + * adds up their sizes.
     2022 + */
1854 2023  uint64_t
1855 2024  bp_get_dsize_sync(spa_t *spa, const blkptr_t *bp)
1856 2025  {
     2026 +        /*
     2027 +         * SPECIAL-BP has two DVAs, but DVA[0] in this case is a
     2028 +         * temporary DVA, and after migration only the DVA[1]
     2029 +         * contains valid data. Therefore, we start walking for
     2030 +         * these BPs from DVA[1].
     2031 +         */
     2032 +        int start_dva = BP_IS_SPECIAL(bp) ? 1 : 0;
1857 2033          uint64_t dsize = 0;
1858 2034  
1859      -        for (int d = 0; d < BP_GET_NDVAS(bp); d++)
     2035 +        for (int d = start_dva; d < BP_GET_NDVAS(bp); d++) {
1860 2036                  dsize += dva_get_dsize_sync(spa, &bp->blk_dva[d]);
     2037 +        }
1861 2038  
1862 2039          return (dsize);
1863 2040  }
1864 2041  
1865 2042  uint64_t
1866 2043  bp_get_dsize(spa_t *spa, const blkptr_t *bp)
1867 2044  {
1868      -        uint64_t dsize = 0;
     2045 +        uint64_t dsize;
1869 2046  
1870 2047          spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
1871 2048  
1872      -        for (int d = 0; d < BP_GET_NDVAS(bp); d++)
1873      -                dsize += dva_get_dsize_sync(spa, &bp->blk_dva[d]);
     2049 +        dsize = bp_get_dsize_sync(spa, bp);
1874 2050  
1875 2051          spa_config_exit(spa, SCL_VDEV, FTAG);
1876 2052  
1877 2053          return (dsize);
1878 2054  }
1879 2055  
1880 2056  /*
1881 2057   * ==========================================================================
1882 2058   * Initialization and Termination
1883 2059   * ==========================================================================
↓ open down ↓ 38 lines elided ↑ open up ↑
1922 2098              offsetof(spa_t, spa_avl));
1923 2099  
1924 2100          avl_create(&spa_spare_avl, spa_spare_compare, sizeof (spa_aux_t),
1925 2101              offsetof(spa_aux_t, aux_avl));
1926 2102  
1927 2103          avl_create(&spa_l2cache_avl, spa_l2cache_compare, sizeof (spa_aux_t),
1928 2104              offsetof(spa_aux_t, aux_avl));
1929 2105  
1930 2106          spa_mode_global = mode;
1931 2107  
     2108 +        /*
     2109 +         * logevent_max_q_sz from log_sysevent.c gives us upper bound on
     2110 +         * the number of taskq entries; queueing of sysevents is serialized,
     2111 +         * so there is no need for more than one worker thread
     2112 +         */
     2113 +        spa_sysevent_taskq = taskq_create("spa_sysevent_tq", 1,
     2114 +            minclsyspri, 1, 5000, TASKQ_DYNAMIC);
     2115 +
1932 2116  #ifdef _KERNEL
1933 2117          spa_arch_init();
1934 2118  #else
1935 2119          if (spa_mode_global != FREAD && dprintf_find_string("watch")) {
1936 2120                  arc_procfd = open("/proc/self/ctl", O_WRONLY);
1937 2121                  if (arc_procfd == -1) {
1938 2122                          perror("could not enable watchpoints: "
1939 2123                              "opening /proc/self/ctl failed: ");
1940 2124                  } else {
1941 2125                          arc_watch = B_TRUE;
↓ open down ↓ 5 lines elided ↑ open up ↑
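The spa_sysevent_taskq created in spa_init() above is deliberately single-threaded so that sysevent posting stays serialized. As a hedged sketch of how work would typically be handed to it (the helper names here are hypothetical, and the taskq's extern declaration lives in a header not shown in this diff):

        /* Hypothetical consumer of spa_sysevent_taskq; not part of this change. */
        static void
        example_sysevent_task(void *arg)
        {
                /* post the queued sysevent described by arg, then free it */
        }

        void
        example_post_sysevent(void *ev)
        {
                (void) taskq_dispatch(spa_sysevent_taskq, example_sysevent_task,
                    ev, TQ_SLEEP);
        }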
1947 2131          unique_init();
1948 2132          range_tree_init();
1949 2133          metaslab_alloc_trace_init();
1950 2134          zio_init();
1951 2135          dmu_init();
1952 2136          zil_init();
1953 2137          vdev_cache_stat_init();
1954 2138          zfs_prop_init();
1955 2139          zpool_prop_init();
1956 2140          zpool_feature_init();
     2141 +        vdev_prop_init();
     2142 +        cos_prop_init();
1957 2143          spa_config_load();
1958 2144          l2arc_start();
     2145 +        ddt_init();
     2146 +        dsl_scan_global_init();
1959 2147  }
1960 2148  
1961 2149  void
1962 2150  spa_fini(void)
1963 2151  {
     2152 +        ddt_fini();
     2153 +
1964 2154          l2arc_stop();
1965 2155  
1966 2156          spa_evict_all();
1967 2157  
1968 2158          vdev_cache_stat_fini();
1969 2159          zil_fini();
1970 2160          dmu_fini();
1971 2161          zio_fini();
1972 2162          metaslab_alloc_trace_fini();
1973 2163          range_tree_fini();
1974 2164          unique_fini();
1975 2165          refcount_fini();
1976 2166  
     2167 +        taskq_destroy(spa_sysevent_taskq);
     2168 +
1977 2169          avl_destroy(&spa_namespace_avl);
1978 2170          avl_destroy(&spa_spare_avl);
1979 2171          avl_destroy(&spa_l2cache_avl);
1980 2172  
1981 2173          cv_destroy(&spa_namespace_cv);
1982 2174          mutex_destroy(&spa_namespace_lock);
1983 2175          mutex_destroy(&spa_spare_lock);
1984 2176          mutex_destroy(&spa_l2cache_lock);
1985 2177  }
1986 2178  
↓ open down ↓ 22 lines elided ↑ open up ↑
2009 2201  
2010 2202  boolean_t
2011 2203  spa_is_root(spa_t *spa)
2012 2204  {
2013 2205          return (spa->spa_is_root);
2014 2206  }
2015 2207  
2016 2208  boolean_t
2017 2209  spa_writeable(spa_t *spa)
2018 2210  {
2019      -        return (!!(spa->spa_mode & FWRITE) && spa->spa_trust_config);
     2211 +        return (!!(spa->spa_mode & FWRITE));
2020 2212  }
2021 2213  
2022 2214  /*
2023 2215   * Returns true if there is a pending sync task in any of the current
2024 2216   * syncing txg, the current quiescing txg, or the current open txg.
2025 2217   */
2026 2218  boolean_t
2027 2219  spa_has_pending_synctask(spa_t *spa)
2028 2220  {
2029 2221          return (!txg_all_lists_empty(&spa->spa_dsl_pool->dp_sync_tasks));
2030 2222  }
2031 2223  
     2224 +boolean_t
     2225 +spa_has_special(spa_t *spa)
     2226 +{
     2227 +        return (spa->spa_special_class->mc_rotor != NULL);
     2228 +}
     2229 +
2032 2230  int
2033 2231  spa_mode(spa_t *spa)
2034 2232  {
2035 2233          return (spa->spa_mode);
2036 2234  }
2037 2235  
2038 2236  uint64_t
2039 2237  spa_bootfs(spa_t *spa)
2040 2238  {
2041 2239          return (spa->spa_bootfs);
↓ open down ↓ 24 lines elided ↑ open up ↑
2066 2264  spa_scan_stat_init(spa_t *spa)
2067 2265  {
2068 2266          /* data not stored on disk */
2069 2267          spa->spa_scan_pass_start = gethrestime_sec();
2070 2268          if (dsl_scan_is_paused_scrub(spa->spa_dsl_pool->dp_scan))
2071 2269                  spa->spa_scan_pass_scrub_pause = spa->spa_scan_pass_start;
2072 2270          else
2073 2271                  spa->spa_scan_pass_scrub_pause = 0;
2074 2272          spa->spa_scan_pass_scrub_spent_paused = 0;
2075 2273          spa->spa_scan_pass_exam = 0;
     2274 +        spa->spa_scan_pass_work = 0;
2076 2275          vdev_scan_stat_init(spa->spa_root_vdev);
2077 2276  }
2078 2277  
2079 2278  /*
2080 2279   * Get scan stats for zpool status reports
2081 2280   */
2082 2281  int
2083 2282  spa_scan_get_stats(spa_t *spa, pool_scan_stat_t *ps)
2084 2283  {
2085 2284          dsl_scan_t *scn = spa->spa_dsl_pool ? spa->spa_dsl_pool->dp_scan : NULL;
↓ open down ↓ 5 lines elided ↑ open up ↑
2091 2290          /* data stored on disk */
2092 2291          ps->pss_func = scn->scn_phys.scn_func;
2093 2292          ps->pss_start_time = scn->scn_phys.scn_start_time;
2094 2293          ps->pss_end_time = scn->scn_phys.scn_end_time;
2095 2294          ps->pss_to_examine = scn->scn_phys.scn_to_examine;
2096 2295          ps->pss_examined = scn->scn_phys.scn_examined;
2097 2296          ps->pss_to_process = scn->scn_phys.scn_to_process;
2098 2297          ps->pss_processed = scn->scn_phys.scn_processed;
2099 2298          ps->pss_errors = scn->scn_phys.scn_errors;
2100 2299          ps->pss_state = scn->scn_phys.scn_state;
     2300 +        mutex_enter(&scn->scn_status_lock);
     2301 +        ps->pss_issued = scn->scn_bytes_issued;
     2302 +        mutex_exit(&scn->scn_status_lock);
2101 2303  
2102 2304          /* data not stored on disk */
2103 2305          ps->pss_pass_start = spa->spa_scan_pass_start;
2104 2306          ps->pss_pass_exam = spa->spa_scan_pass_exam;
     2307 +        ps->pss_pass_work = spa->spa_scan_pass_work;
2105 2308          ps->pss_pass_scrub_pause = spa->spa_scan_pass_scrub_pause;
2106 2309          ps->pss_pass_scrub_spent_paused = spa->spa_scan_pass_scrub_spent_paused;
2107 2310  
2108 2311          return (0);
2109 2312  }
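Illustrative only (not part of the change): with the new pss_issued and
pss_pass_work fields filled in above, a consumer of pool_scan_stat_t could
estimate the per-pass issue rate roughly as below. This sketch assumes
pss_pass_scrub_spent_paused is in seconds, matching
spa_scan_pass_scrub_spent_paused, and uses the kernel's gethrestime_sec() for
the current time; a userland reporter would substitute its own wall clock.

        /* seconds this pass has actually been scanning (excluding pauses) */
        uint64_t elapsed = gethrestime_sec() - ps->pss_pass_start -
            ps->pss_pass_scrub_spent_paused;
        uint64_t issue_rate = (elapsed > 0) ? ps->pss_pass_work / elapsed : 0;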
2110 2313  
2111 2314  boolean_t
2112 2315  spa_debug_enabled(spa_t *spa)
2113 2316  {
2114 2317          return (spa->spa_debug);
↓ open down ↓ 1 lines elided ↑ open up ↑
2116 2319  
2117 2320  int
2118 2321  spa_maxblocksize(spa_t *spa)
2119 2322  {
2120 2323          if (spa_feature_is_enabled(spa, SPA_FEATURE_LARGE_BLOCKS))
2121 2324                  return (SPA_MAXBLOCKSIZE);
2122 2325          else
2123 2326                  return (SPA_OLD_MAXBLOCKSIZE);
2124 2327  }
2125 2328  
     2329 +boolean_t
     2330 +spa_wbc_present(spa_t *spa)
     2331 +{
     2332 +        return (spa->spa_wbc_mode != WBC_MODE_OFF);
     2333 +}
     2334 +
     2335 +boolean_t
     2336 +spa_wbc_active(spa_t *spa)
     2337 +{
     2338 +        return (spa->spa_wbc_mode == WBC_MODE_ACTIVE);
     2339 +}
     2340 +
     2341 +int
     2342 +spa_wbc_mode(const char *name)
     2343 +{
     2344 +        int ret = 0;
     2345 +        spa_t *spa;
     2346 +
     2347 +        mutex_enter(&spa_namespace_lock);
     2348 +        spa = spa_lookup(name);
     2349 +        if (!spa) {
     2350 +                mutex_exit(&spa_namespace_lock);
     2351 +                return (-1);
     2352 +        }
     2353 +
     2354 +        ret = (int)spa->spa_wbc_mode;
     2355 +        mutex_exit(&spa_namespace_lock);
     2356 +        return (ret);
     2357 +}
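A hedged usage sketch for spa_wbc_mode() (the pool name and error handling are
placeholders, inside some hypothetical int-returning caller): -1 means the pool
was not found under spa_namespace_lock; any other value is the pool's wbc mode.

        int mode = spa_wbc_mode("tank");        /* placeholder pool name */
        boolean_t wbc_configured;

        if (mode == -1)
                return (SET_ERROR(ENOENT));     /* no such pool */
        wbc_configured = (mode != WBC_MODE_OFF);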
     2358 +
     2359 +struct zfs_autosnap *
     2360 +spa_get_autosnap(spa_t *spa)
     2361 +{
     2362 +        return (&spa->spa_autosnap);
     2363 +}
     2364 +
     2365 +wbc_data_t *
     2366 +spa_get_wbc_data(spa_t *spa)
     2367 +{
     2368 +        return (&spa->spa_wbc);
     2369 +}
     2370 +
2126 2371  /*
2127      - * Returns the txg that the last device removal completed. No indirect mappings
2128      - * have been added since this txg.
     2372 + * Creates the trim kstats structure for a spa.
2129 2373   */
2130      -uint64_t
2131      -spa_get_last_removal_txg(spa_t *spa)
     2374 +static void
     2375 +spa_trimstats_create(spa_t *spa)
2132 2376  {
2133      -        uint64_t vdevid;
2134      -        uint64_t ret = -1ULL;
     2377 +        /* truncate pool name to accommodate "_trimstats" suffix */
     2378 +        char short_spa_name[KSTAT_STRLEN - 10];
     2379 +        char name[KSTAT_STRLEN];
2135 2380  
2136      -        spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
2137      -        /*
2138      -         * sr_prev_indirect_vdev is only modified while holding all the
2139      -         * config locks, so it is sufficient to hold SCL_VDEV as reader when
2140      -         * examining it.
2141      -         */
2142      -        vdevid = spa->spa_removing_phys.sr_prev_indirect_vdev;
     2381 +        ASSERT3P(spa->spa_trimstats, ==, NULL);
     2382 +        ASSERT3P(spa->spa_trimstats_ks, ==, NULL);
2143 2383  
2144      -        while (vdevid != -1ULL) {
2145      -                vdev_t *vd = vdev_lookup_top(spa, vdevid);
2146      -                vdev_indirect_births_t *vib = vd->vdev_indirect_births;
     2384 +        (void) snprintf(short_spa_name, sizeof (short_spa_name), "%s",
     2385 +            spa->spa_name);
     2386 +        (void) snprintf(name, sizeof (name), "%s_trimstats", short_spa_name);
2147 2387  
2148      -                ASSERT3P(vd->vdev_ops, ==, &vdev_indirect_ops);
     2388 +        spa->spa_trimstats_ks = kstat_create("zfs", 0, name, "misc",
     2389 +            KSTAT_TYPE_NAMED, sizeof (*spa->spa_trimstats) /
     2390 +            sizeof (kstat_named_t), 0);
     2391 +        if (spa->spa_trimstats_ks) {
     2392 +                spa->spa_trimstats = spa->spa_trimstats_ks->ks_data;
2149 2393  
2150      -                /*
2151      -                 * If the removal did not remap any data, we don't care.
2152      -                 */
2153      -                if (vdev_indirect_births_count(vib) != 0) {
2154      -                        ret = vdev_indirect_births_last_entry_txg(vib);
2155      -                        break;
2156      -                }
     2394 +#ifdef _KERNEL
     2395 +                kstat_named_init(&spa->spa_trimstats->st_extents,
     2396 +                    "extents", KSTAT_DATA_UINT64);
     2397 +                kstat_named_init(&spa->spa_trimstats->st_bytes,
     2398 +                    "bytes", KSTAT_DATA_UINT64);
     2399 +                kstat_named_init(&spa->spa_trimstats->st_extents_skipped,
     2400 +                    "extents_skipped", KSTAT_DATA_UINT64);
     2401 +                kstat_named_init(&spa->spa_trimstats->st_bytes_skipped,
     2402 +                    "bytes_skipped", KSTAT_DATA_UINT64);
     2403 +                kstat_named_init(&spa->spa_trimstats->st_auto_slow,
     2404 +                    "auto_slow", KSTAT_DATA_UINT64);
     2405 +#endif  /* _KERNEL */
2157 2406  
2158      -                vdevid = vd->vdev_indirect_config.vic_prev_indirect_vdev;
     2407 +                kstat_install(spa->spa_trimstats_ks);
     2408 +        } else {
     2409 +                cmn_err(CE_NOTE, "!Cannot create trim kstats for pool %s",
     2410 +                    spa->spa_name);
2159 2411          }
2160      -        spa_config_exit(spa, SCL_VDEV, FTAG);
     2412 +}
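A hedged userland sketch (libkstat; the pool name "tank" and the helper name
are placeholders) showing how the kstats installed above can be read back. The
module, instance, and statistic names follow the kstat_create() and
kstat_named_init() calls in spa_trimstats_create().

#include <kstat.h>
#include <stdio.h>

static void
print_trimmed_bytes(void)
{
        kstat_ctl_t *kc = kstat_open();

        if (kc == NULL)
                return;
        kstat_t *ksp = kstat_lookup(kc, "zfs", 0, "tank_trimstats");
        if (ksp != NULL && kstat_read(kc, ksp, NULL) != -1) {
                kstat_named_t *kn = kstat_data_lookup(ksp, "bytes");
                if (kn != NULL)
                        (void) printf("trimmed bytes: %llu\n",
                            (u_longlong_t)kn->value.ui64);
        }
        (void) kstat_close(kc);
}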
2161 2413  
2162      -        IMPLY(ret != -1ULL,
2163      -            spa_feature_is_active(spa, SPA_FEATURE_DEVICE_REMOVAL));
     2414 +/*
     2415 + * Destroys the trim kstats for a spa.
     2416 + */
     2417 +static void
     2418 +spa_trimstats_destroy(spa_t *spa)
     2419 +{
     2420 +        if (spa->spa_trimstats_ks) {
     2421 +                kstat_delete(spa->spa_trimstats_ks);
     2422 +                spa->spa_trimstats = NULL;
     2423 +                spa->spa_trimstats_ks = NULL;
     2424 +        }
     2425 +}
2164 2426  
2165      -        return (ret);
     2427 +/*
     2428 + * Updates the numerical trim kstats for a spa.
     2429 + */
     2430 +void
     2431 +spa_trimstats_update(spa_t *spa, uint64_t extents, uint64_t bytes,
     2432 +    uint64_t extents_skipped, uint64_t bytes_skipped)
     2433 +{
     2434 +        spa_trimstats_t *st = spa->spa_trimstats;
     2435 +        if (st) {
     2436 +                atomic_add_64(&st->st_extents.value.ui64, extents);
     2437 +                atomic_add_64(&st->st_bytes.value.ui64, bytes);
     2438 +                atomic_add_64(&st->st_extents_skipped.value.ui64,
     2439 +                    extents_skipped);
     2440 +                atomic_add_64(&st->st_bytes_skipped.value.ui64,
     2441 +                    bytes_skipped);
     2442 +        }
2166 2443  }
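A hypothetical caller sketch (the counters are placeholders, not the actual
trim code): after walking one batch of free extents and issuing TRIMs, the
trim path would account both for what was issued and for what was skipped,
e.g. extents too small to be worth trimming.

        /* counts accumulated while walking one batch of free extents */
        uint64_t trim_exts = 0, trim_bytes = 0;
        uint64_t skip_exts = 0, skip_bytes = 0;

        /* ... issue TRIMs, bumping the counters above ... */

        spa_trimstats_update(spa, trim_exts, trim_bytes,
            skip_exts, skip_bytes);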
2167 2444  
2168      -boolean_t
2169      -spa_trust_config(spa_t *spa)
     2445 +/*
     2446 + * Increments the slow-trim kstat for a spa.
     2447 + */
     2448 +void
     2449 +spa_trimstats_auto_slow_incr(spa_t *spa)
2170 2450  {
2171      -        return (spa->spa_trust_config);
     2451 +        spa_trimstats_t *st = spa->spa_trimstats;
     2452 +        if (st)
     2453 +                atomic_inc_64(&st->st_auto_slow.value.ui64);
2172 2454  }
2173 2455  
2174      -uint64_t
2175      -spa_missing_tvds_allowed(spa_t *spa)
     2456 +/*
     2457 + * Creates the taskq used for dispatching auto-trim. This is called only
     2458 + * when the autotrim property is set to `on' or when a pool is loaded with
     2459 + * the autotrim property already set to `on'.
     2460 + */
     2461 +void
     2462 +spa_auto_trim_taskq_create(spa_t *spa)
2176 2463  {
2177      -        return (spa->spa_missing_tvds_allowed);
     2464 +        char name[MAXPATHLEN];
     2465 +        ASSERT(MUTEX_HELD(&spa->spa_auto_trim_lock));
     2466 +        ASSERT(spa->spa_auto_trim_taskq == NULL);
     2467 +        (void) snprintf(name, sizeof (name), "%s_auto_trim", spa->spa_name);
     2468 +        spa->spa_auto_trim_taskq = taskq_create(name, 1, minclsyspri, 1,
     2469 +            spa->spa_root_vdev->vdev_children, TASKQ_DYNAMIC);
     2470 +        VERIFY(spa->spa_auto_trim_taskq != NULL);
2178 2471  }
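A hedged dispatch sketch (the callback vdev_auto_trim_cb and its vdev argument
vd are illustrative, not the actual callers): work is handed to the per-pool
taskq under spa_auto_trim_lock, and spa_num_auto_trimming tracks in-flight
jobs so spa_auto_trim_taskq_destroy() below can wait on
spa_auto_trim_done_cv for them to drain. The callback is expected to
decrement the count and broadcast the cv when it finishes.

        mutex_enter(&spa->spa_auto_trim_lock);
        if (spa->spa_auto_trim_taskq != NULL) {
                spa->spa_num_auto_trimming++;
                (void) taskq_dispatch(spa->spa_auto_trim_taskq,
                    vdev_auto_trim_cb, vd, TQ_SLEEP);
        }
        mutex_exit(&spa->spa_auto_trim_lock);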
2179 2472  
     2473 +/*
     2474 + * Creates the taskq for dispatching manual trim. This taskq is recreated
     2475 + * each time `zpool trim <poolname>' is issued and destroyed from an
     2476 + * async spa request after the run completes.
     2477 + */
2180 2478  void
2181      -spa_set_missing_tvds(spa_t *spa, uint64_t missing)
     2479 +spa_man_trim_taskq_create(spa_t *spa)
2182 2480  {
2183      -        spa->spa_missing_tvds = missing;
     2481 +        char name[MAXPATHLEN];
     2482 +        ASSERT(MUTEX_HELD(&spa->spa_man_trim_lock));
     2483 +        spa_async_unrequest(spa, SPA_ASYNC_MAN_TRIM_TASKQ_DESTROY);
     2484 +        if (spa->spa_man_trim_taskq != NULL)
     2485 +                /*
     2486 +                 * The async taskq destroy has been preempted, so just
     2487 +                 * return; the taskq is still good to use.
     2488 +                 */
     2489 +                return;
     2490 +        (void) snprintf(name, sizeof (name), "%s_man_trim", spa->spa_name);
     2491 +        spa->spa_man_trim_taskq = taskq_create(name, 1, minclsyspri, 1,
     2492 +            spa->spa_root_vdev->vdev_children, TASKQ_DYNAMIC);
     2493 +        VERIFY(spa->spa_man_trim_taskq != NULL);
     2494 +}
     2495 +
     2496 +/*
     2497 + * Destroys the taskq created in spa_auto_trim_taskq_create. The taskq
     2498 + * is only destroyed when the autotrim property is set to `off'.
     2499 + */
     2500 +void
     2501 +spa_auto_trim_taskq_destroy(spa_t *spa)
     2502 +{
     2503 +        ASSERT(MUTEX_HELD(&spa->spa_auto_trim_lock));
     2504 +        ASSERT(spa->spa_auto_trim_taskq != NULL);
     2505 +        while (spa->spa_num_auto_trimming != 0)
     2506 +                cv_wait(&spa->spa_auto_trim_done_cv, &spa->spa_auto_trim_lock);
     2507 +        taskq_destroy(spa->spa_auto_trim_taskq);
     2508 +        spa->spa_auto_trim_taskq = NULL;
     2509 +}
     2510 +
     2511 +/*
     2512 + * Destroys the taskq created in spa_man_trim_taskq_create. The taskq is
     2513 + * destroyed from an async spa request after a manual trim run completes.
     2514 + * There is a bit of lag between the async request being issued at the
     2515 + * completion of a trim run and it finally being acted on, which is why
     2516 + * this function checks whether new manual trimming threads have been
     2517 + * re-spawned in the meantime. If they have, we assume the async spa
     2518 + * request has been preempted by another manual trim request and back off.
     2519 + */
     2520 +void
     2521 +spa_man_trim_taskq_destroy(spa_t *spa)
     2522 +{
     2523 +        ASSERT(MUTEX_HELD(&spa->spa_man_trim_lock));
     2524 +        ASSERT(spa->spa_man_trim_taskq != NULL);
     2525 +        if (spa->spa_num_man_trimming != 0)
     2526 +                /* another trim got started before we got here, back off */
     2527 +                return;
     2528 +        taskq_destroy(spa->spa_man_trim_taskq);
     2529 +        spa->spa_man_trim_taskq = NULL;
2184 2530  }
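A hedged sketch of the completion side that drives this teardown (illustrative;
the actual trim worker body lives elsewhere): the last manual-trim worker
schedules the taskq destruction through the async framework rather than
destroying the taskq from its own context, which is what produces the lag
described in the comment above.

        mutex_enter(&spa->spa_man_trim_lock);
        spa->spa_num_man_trimming--;
        if (spa->spa_num_man_trimming == 0)
                spa_async_request(spa, SPA_ASYNC_MAN_TRIM_TASKQ_DESTROY);
        mutex_exit(&spa->spa_man_trim_lock);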
    