NEX-9673 Add capability to replicate cloned datasets relative to origin
Reviewed by: Alex Deiter <alex.deiter@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-7479 Autosnap may dispatch duplicated sync-tasks
Reviewed by: Alex Deiter <alex.deiter@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-7543 backout async delete (NEX-5085 and NEX-6151)
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5795 Rename 'wrc' as 'wbc' in the source and in the tech docs
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Marcel Telka <marcel.telka@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-5024 Slow performance with a single large file delete
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4830 writecache=off leaks data on special vdev (the data will never migrate)
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
5981 Deadlock in dmu_objset_find_dp
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Approved by: Robert Mustacchi <rm@joyent.com>
5269 zpool import slow
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Approved by: Dan McDonald <danmcd@omniti.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Revert "NEX-4476 WRC: Allow to use write back cache per tree of datasets"
This reverts commit fe97b74444278a6f36fec93179133641296312da.
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-3964 It should not be allowed to rename a snapshot such that its new name matches the in-kernel autosnapshot prefix
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3485 Deferred deletes causing loss of service for NFS clients on cluster failover
Reviewed by: Marcel Telka <marcel.telka@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-3558 KRRP Integration
NEX-3079 port illumos ARC improvements
Fixup merge results
re #13253 rb4328 ssh: openssl version checking needs updating
re #11441 rb4292 panic in apic_record_rdt_entry on VMware hardware version 9
re #12619, rb4287 Deadlocked zfs txg processing in dsl_sync_task_group_sync()
re #12585 rb4049 ZFS++ work port - refactoring to improve separation of open/closed code, bug fixes, performance improvements - open code
Bug 11205: add missing libzfs_closed_stubs.c to fix opensource-only build.
ZFS plus work: special vdevs, cos, cos/vdev properties

          --- old/usr/src/uts/common/fs/zfs/dsl_pool.c
          +++ new/usr/src/uts/common/fs/zfs/dsl_pool.c
(16 lines elided)
  17   17   * information: Portions Copyright [yyyy] [name of copyright owner]
  18   18   *
  19   19   * CDDL HEADER END
  20   20   */
  21   21  /*
  22   22   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
  23   23   * Copyright (c) 2011, 2017 by Delphix. All rights reserved.
  24   24   * Copyright (c) 2013 Steven Hartland. All rights reserved.
  25   25   * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
  26   26   * Copyright (c) 2014 Integros [integros.com]
  27      - * Copyright 2016 Nexenta Systems, Inc.  All rights reserved.
       27 + * Copyright 2016 Nexenta Systems, Inc. All rights reserved.
  28   28   */
  29   29  
       30 +#include <sys/autosnap.h>
  30   31  #include <sys/dsl_pool.h>
  31   32  #include <sys/dsl_dataset.h>
  32   33  #include <sys/dsl_prop.h>
  33   34  #include <sys/dsl_dir.h>
  34   35  #include <sys/dsl_synctask.h>
       36 +#include <sys/dsl_dataset.h>
  35   37  #include <sys/dsl_scan.h>
  36   38  #include <sys/dnode.h>
  37   39  #include <sys/dmu_tx.h>
  38   40  #include <sys/dmu_objset.h>
       41 +#include <sys/dmu_traverse.h>
  39   42  #include <sys/arc.h>
  40   43  #include <sys/zap.h>
  41   44  #include <sys/zio.h>
  42   45  #include <sys/zfs_context.h>
  43   46  #include <sys/fs/zfs.h>
  44   47  #include <sys/zfs_znode.h>
  45   48  #include <sys/spa_impl.h>
  46   49  #include <sys/dsl_deadlist.h>
  47   50  #include <sys/bptree.h>
  48   51  #include <sys/zfeature.h>
  49   52  #include <sys/zil_impl.h>
  50   53  #include <sys/dsl_userhold.h>
  51   54  
       55 +#include <sys/wbc.h>
       56 +#include <sys/time.h>
       57 +
  52   58  /*
  53   59   * ZFS Write Throttle
  54   60   * ------------------
  55   61   *
  56   62   * ZFS must limit the rate of incoming writes to the rate at which it is able
  57   63   * to sync data modifications to the backend storage. Throttling by too much
  58   64   * creates an artificial limit; throttling by too little can only be sustained
  59   65   * for short periods and would lead to highly lumpy performance. On a per-pool
  60   66   * basis, ZFS tracks the amount of modified (dirty) data. As operations change
  61   67   * data, the amount of dirty data increases; as ZFS syncs out data, the amount
(93 lines elided)
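The throttle described above reduces to tracking a per-pool count of dirty bytes and delaying incoming writes once that count crosses a threshold (see dsl_pool_need_dirty_delay() later in this file). Below is a minimal, self-contained sketch of that idea; the names and constants are hypothetical and this is not the illumos implementation.

/* Illustrative sketch only; names and constants are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

#define DIRTY_DATA_MAX          (4ULL << 30)    /* hypothetical cap: 4 GiB */
#define DELAY_MIN_DIRTY_PCT     60              /* begin delaying at 60% full */

struct pool {
        uint64_t dirty_total;                   /* bytes of dirty data */
};

/* Writers call this as they dirty data. */
static void
pool_dirty_space(struct pool *p, uint64_t space)
{
        p->dirty_total += space;
}

/* Sync context calls this as dirty data reaches stable storage. */
static void
pool_undirty_space(struct pool *p, uint64_t space)
{
        p->dirty_total -= (space < p->dirty_total) ? space : p->dirty_total;
}

/* Returns true when incoming writes should be delayed. */
static bool
pool_need_dirty_delay(const struct pool *p)
{
        uint64_t delay_min = DIRTY_DATA_MAX * DELAY_MIN_DIRTY_PCT / 100;

        return (p->dirty_total > delay_min);
}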
 155  161   * result in zil_itxg_clean() being called synchronously from zil_clean()
 156  162   * (which can adversely affect performance of spa_sync()).
 157  163   *
 158  164   * Additionally, the number of threads used by the taskq can be
 159  165   * configured via the "zfs_zil_clean_taskq_nthr_pct" tunable.
 160  166   */
 161  167  int zfs_zil_clean_taskq_nthr_pct = 100;
 162  168  int zfs_zil_clean_taskq_minalloc = 1024;
 163  169  int zfs_zil_clean_taskq_maxalloc = 1024 * 1024;
 164  170  
      171 +/*
      172 + * Tunable to control max number of tasks available for processing of
      173 + * deferred deletes.
      174 + */
      175 +int zfs_vn_rele_max_tasks = 256;
      176 +
 165  177  int
 166  178  dsl_pool_open_special_dir(dsl_pool_t *dp, const char *name, dsl_dir_t **ddp)
 167  179  {
 168  180          uint64_t obj;
 169  181          int err;
 170  182  
 171  183          err = zap_lookup(dp->dp_meta_objset,
 172  184              dsl_dir_phys(dp->dp_root_dir)->dd_child_dir_zapobj,
 173  185              name, sizeof (obj), 1, &obj);
 174  186          if (err)
(5 lines elided)
 180  192  static dsl_pool_t *
 181  193  dsl_pool_open_impl(spa_t *spa, uint64_t txg)
 182  194  {
 183  195          dsl_pool_t *dp;
 184  196          blkptr_t *bp = spa_get_rootblkptr(spa);
 185  197  
 186  198          dp = kmem_zalloc(sizeof (dsl_pool_t), KM_SLEEP);
 187  199          dp->dp_spa = spa;
 188  200          dp->dp_meta_rootbp = *bp;
 189  201          rrw_init(&dp->dp_config_rwlock, B_TRUE);
      202 +
      203 +        dp->dp_sync_history[0] = dp->dp_sync_history[1] = 0;
      204 +
 190  205          txg_init(dp, txg);
 191  206  
 192  207          txg_list_create(&dp->dp_dirty_datasets, spa,
 193  208              offsetof(dsl_dataset_t, ds_dirty_link));
 194  209          txg_list_create(&dp->dp_dirty_zilogs, spa,
 195  210              offsetof(zilog_t, zl_dirty_link));
 196  211          txg_list_create(&dp->dp_dirty_dirs, spa,
 197  212              offsetof(dsl_dir_t, dd_dirty_link));
 198  213          txg_list_create(&dp->dp_sync_tasks, spa,
 199  214              offsetof(dsl_sync_task_t, dst_node));
(4 lines elided)
 204  219  
 205  220          dp->dp_zil_clean_taskq = taskq_create("dp_zil_clean_taskq",
 206  221              zfs_zil_clean_taskq_nthr_pct, minclsyspri,
 207  222              zfs_zil_clean_taskq_minalloc,
 208  223              zfs_zil_clean_taskq_maxalloc,
 209  224              TASKQ_PREPOPULATE | TASKQ_THREADS_CPU_PCT);
 210  225  
 211  226          mutex_init(&dp->dp_lock, NULL, MUTEX_DEFAULT, NULL);
 212  227          cv_init(&dp->dp_spaceavail_cv, NULL, CV_DEFAULT, NULL);
 213  228  
 214      -        dp->dp_vnrele_taskq = taskq_create("zfs_vn_rele_taskq", 1, minclsyspri,
 215      -            1, 4, 0);
      229 +        dp->dp_vnrele_taskq = taskq_create("zfs_vn_rele_taskq",
      230 +            zfs_vn_rele_max_tasks, minclsyspri,
      231 +            1, zfs_vn_rele_max_tasks, TASKQ_DYNAMIC);
 216  232  
 217  233          return (dp);
 218  234  }
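The zfs_vn_rele_taskq above is now sized by the zfs_vn_rele_max_tasks tunable introduced earlier in this file. On illumos, module globals like this can normally be overridden from /etc/system; a hypothetical example (the value 512 is for illustration only, and a reboot is needed for the change to take effect):

* /etc/system: raise the ceiling on deferred-delete (vn_rele) taskq entries
set zfs:zfs_vn_rele_max_tasks = 512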
 219  235  
 220  236  int
 221  237  dsl_pool_init(spa_t *spa, uint64_t txg, dsl_pool_t **dpp)
 222  238  {
 223  239          int err;
 224  240          dsl_pool_t *dp = dsl_pool_open_impl(spa, txg);
 225  241  
(55 lines elided)
 281  297                          goto out;
 282  298  
 283  299                  err = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
 284  300                      DMU_POOL_FREE_BPOBJ, sizeof (uint64_t), 1, &obj);
 285  301                  if (err)
 286  302                          goto out;
 287  303                  VERIFY0(bpobj_open(&dp->dp_free_bpobj,
 288  304                      dp->dp_meta_objset, obj));
 289  305          }
 290  306  
 291      -        if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_OBSOLETE_COUNTS)) {
 292      -                err = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
 293      -                    DMU_POOL_OBSOLETE_BPOBJ, sizeof (uint64_t), 1, &obj);
 294      -                if (err == 0) {
 295      -                        VERIFY0(bpobj_open(&dp->dp_obsolete_bpobj,
 296      -                            dp->dp_meta_objset, obj));
 297      -                } else if (err == ENOENT) {
 298      -                        /*
 299      -                         * We might not have created the remap bpobj yet.
 300      -                         */
 301      -                        err = 0;
 302      -                } else {
 303      -                        goto out;
 304      -                }
 305      -        }
 306      -
 307  307          /*
 308      -         * Note: errors ignored, because the these special dirs, used for
 309      -         * space accounting, are only created on demand.
      308 +         * Note: errors ignored, because the leak dir will not exist if we
      309 +         * have not encountered a leak yet.
 310  310           */
 311  311          (void) dsl_pool_open_special_dir(dp, LEAK_DIR_NAME,
 312  312              &dp->dp_leak_dir);
 313  313  
 314  314          if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_ASYNC_DESTROY)) {
 315  315                  err = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
 316  316                      DMU_POOL_BPTREE_OBJ, sizeof (uint64_t), 1,
 317  317                      &dp->dp_bptree_obj);
 318  318                  if (err != 0)
 319  319                          goto out;
(25 lines elided)
 345  345  void
 346  346  dsl_pool_close(dsl_pool_t *dp)
 347  347  {
 348  348          /*
 349  349           * Drop our references from dsl_pool_open().
 350  350           *
 351  351           * Since we held the origin_snap from "syncing" context (which
 352  352           * includes pool-opening context), it actually only got a "ref"
 353  353           * and not a hold, so just drop that here.
 354  354           */
 355      -        if (dp->dp_origin_snap != NULL)
      355 +        if (dp->dp_origin_snap)
 356  356                  dsl_dataset_rele(dp->dp_origin_snap, dp);
 357      -        if (dp->dp_mos_dir != NULL)
      357 +        if (dp->dp_mos_dir)
 358  358                  dsl_dir_rele(dp->dp_mos_dir, dp);
 359      -        if (dp->dp_free_dir != NULL)
      359 +        if (dp->dp_free_dir)
 360  360                  dsl_dir_rele(dp->dp_free_dir, dp);
 361      -        if (dp->dp_leak_dir != NULL)
      361 +        if (dp->dp_leak_dir)
 362  362                  dsl_dir_rele(dp->dp_leak_dir, dp);
 363      -        if (dp->dp_root_dir != NULL)
      363 +        if (dp->dp_root_dir)
 364  364                  dsl_dir_rele(dp->dp_root_dir, dp);
 365  365  
 366  366          bpobj_close(&dp->dp_free_bpobj);
 367      -        bpobj_close(&dp->dp_obsolete_bpobj);
 368  367  
 369  368          /* undo the dmu_objset_open_impl(mos) from dsl_pool_open() */
 370      -        if (dp->dp_meta_objset != NULL)
      369 +        if (dp->dp_meta_objset)
 371  370                  dmu_objset_evict(dp->dp_meta_objset);
 372  371  
 373  372          txg_list_destroy(&dp->dp_dirty_datasets);
 374  373          txg_list_destroy(&dp->dp_dirty_zilogs);
 375  374          txg_list_destroy(&dp->dp_sync_tasks);
 376  375          txg_list_destroy(&dp->dp_dirty_dirs);
 377  376  
 378  377          taskq_destroy(dp->dp_zil_clean_taskq);
 379  378          taskq_destroy(dp->dp_sync_taskq);
 380  379  
 381  380          /*
 382  381           * We can't set retry to TRUE since we're explicitly specifying
 383  382           * a spa to flush. This is good enough; any missed buffers for
 384  383           * this spa won't cause trouble, and they'll eventually fall
 385  384           * out of the ARC just like any other unused buffer.
 386  385           */
 387      -        arc_flush(dp->dp_spa, FALSE);
 388      -
      386 +        arc_flush(dp->dp_spa, B_FALSE);
 389  387          txg_fini(dp);
 390  388          dsl_scan_fini(dp);
 391  389          dmu_buf_user_evict_wait();
 392  390  
 393  391          rrw_destroy(&dp->dp_config_rwlock);
 394  392          mutex_destroy(&dp->dp_lock);
 395  393          taskq_destroy(dp->dp_vnrele_taskq);
 396      -        if (dp->dp_blkstats != NULL)
      394 +        if (dp->dp_blkstats)
 397  395                  kmem_free(dp->dp_blkstats, sizeof (zfs_all_blkstats_t));
 398  396          kmem_free(dp, sizeof (dsl_pool_t));
 399  397  }
 400  398  
 401      -void
 402      -dsl_pool_create_obsolete_bpobj(dsl_pool_t *dp, dmu_tx_t *tx)
 403      -{
 404      -        uint64_t obj;
 405      -        /*
 406      -         * Currently, we only create the obsolete_bpobj where there are
 407      -         * indirect vdevs with referenced mappings.
 408      -         */
 409      -        ASSERT(spa_feature_is_active(dp->dp_spa, SPA_FEATURE_DEVICE_REMOVAL));
 410      -        /* create and open the obsolete_bpobj */
 411      -        obj = bpobj_alloc(dp->dp_meta_objset, SPA_OLD_MAXBLOCKSIZE, tx);
 412      -        VERIFY0(bpobj_open(&dp->dp_obsolete_bpobj, dp->dp_meta_objset, obj));
 413      -        VERIFY0(zap_add(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
 414      -            DMU_POOL_OBSOLETE_BPOBJ, sizeof (uint64_t), 1, &obj, tx));
 415      -        spa_feature_incr(dp->dp_spa, SPA_FEATURE_OBSOLETE_COUNTS, tx);
 416      -}
 417      -
 418      -void
 419      -dsl_pool_destroy_obsolete_bpobj(dsl_pool_t *dp, dmu_tx_t *tx)
 420      -{
 421      -        spa_feature_decr(dp->dp_spa, SPA_FEATURE_OBSOLETE_COUNTS, tx);
 422      -        VERIFY0(zap_remove(dp->dp_meta_objset,
 423      -            DMU_POOL_DIRECTORY_OBJECT,
 424      -            DMU_POOL_OBSOLETE_BPOBJ, tx));
 425      -        bpobj_free(dp->dp_meta_objset,
 426      -            dp->dp_obsolete_bpobj.bpo_object, tx);
 427      -        bpobj_close(&dp->dp_obsolete_bpobj);
 428      -}
 429      -
 430  399  dsl_pool_t *
 431  400  dsl_pool_create(spa_t *spa, nvlist_t *zplprops, uint64_t txg)
 432  401  {
 433  402          int err;
 434  403          dsl_pool_t *dp = dsl_pool_open_impl(spa, txg);
 435  404          dmu_tx_t *tx = dmu_tx_create_assigned(dp, txg);
 436  405          dsl_dataset_t *ds;
 437  406          uint64_t obj;
 438  407  
 439  408          rrw_enter(&dp->dp_config_rwlock, RW_WRITER, FTAG);
(106 lines elided)
 546  515  }
 547  516  
 548  517  void
 549  518  dsl_pool_sync(dsl_pool_t *dp, uint64_t txg)
 550  519  {
 551  520          zio_t *zio;
 552  521          dmu_tx_t *tx;
 553  522          dsl_dir_t *dd;
 554  523          dsl_dataset_t *ds;
 555  524          objset_t *mos = dp->dp_meta_objset;
      525 +        spa_t *spa = dp->dp_spa;
 556  526          list_t synced_datasets;
      527 +        dsl_sync_task_t *iter;
      528 +        boolean_t wbc_skip_txg = B_FALSE;
      529 +        boolean_t sync_ops = B_FALSE;
      530 +        boolean_t user_snap = B_FALSE;
      531 +        zfs_autosnap_t *autosnap = spa_get_autosnap(spa);
      532 +        boolean_t autosnap_initialized = autosnap->initialized;
      533 +        char snap[ZFS_MAX_DATASET_NAME_LEN];
 557  534  
       535 +        /* check if there are any sync ops in the txg */
      536 +        if (txg_list_head(&dp->dp_sync_tasks, txg) != NULL)
      537 +                sync_ops = B_TRUE;
      538 +
      539 +        /* check if there are user snaps in the txg */
      540 +        for (iter = txg_list_head(&dp->dp_sync_tasks, txg);
      541 +            iter != NULL;
      542 +            iter = txg_list_next(&dp->dp_sync_tasks, iter, txg)) {
      543 +                if (iter->dst_syncfunc == dsl_dataset_snapshot_sync) {
      544 +                        user_snap = B_TRUE;
      545 +                        break;
      546 +                }
      547 +        }
      548 +
      549 +
 558  550          list_create(&synced_datasets, sizeof (dsl_dataset_t),
 559  551              offsetof(dsl_dataset_t, ds_synced_link));
 560  552  
 561  553          tx = dmu_tx_create_assigned(dp, txg);
 562  554  
      555 +        (void) sprintf(snap, "%s%llu", AUTOSNAP_PREFIX,
      556 +            (unsigned long long int) txg);
      557 +
      558 +        if (autosnap_initialized && spa->spa_sync_pass == 1) {
      559 +                autosnap_zone_t *azone;
      560 +
      561 +                rrw_enter(&dp->dp_config_rwlock, RW_READER, FTAG);
      562 +                mutex_enter(&autosnap->autosnap_lock);
      563 +
       564 +                /*
       565 +                 * WBC: ensure that all WBC-ed dirty datasets are
       566 +                 * synchronously auto-snapshotted within (or by) the
       567 +                 * same TXG sync.  Keeping the rightmost boundary of
       568 +                 * the WBC window synchronous is important to avoid
       569 +                 * used-space leaks on the special vdev.
       570 +                 * Note that WBC-ed datasets that are already fully
       571 +                 * migrated and hold no data on the special vdev are
       572 +                 * skipped here.
       573 +                 */
      574 +
      575 +                for (ds = txg_list_head(&dp->dp_dirty_datasets, txg);
      576 +                    ds != NULL;
      577 +                    ds = txg_list_next(&dp->dp_dirty_datasets, ds, txg)) {
      578 +                        char ds_name[ZFS_MAX_DATASET_NAME_LEN];
      579 +                        boolean_t wbc_azone;
      580 +
      581 +                        dsl_dataset_name(ds, ds_name);
      582 +
      583 +                        azone = autosnap_find_zone(autosnap, ds_name, B_TRUE);
      584 +                        if (azone == NULL)
      585 +                                continue;
      586 +
      587 +                        if ((azone->flags & AUTOSNAP_CREATOR) == 0)
      588 +                                continue;
      589 +
      590 +                        if (azone->created)
      591 +                                continue;
      592 +
      593 +                        azone->delayed = B_TRUE;
      594 +                        azone->dirty = B_TRUE;
      595 +                        wbc_azone = (azone->flags & AUTOSNAP_WBC) != 0;
      596 +
      597 +                        if (autosnap_confirm_snap(azone, txg)) {
      598 +                                if (!wbc_azone && !user_snap && !sync_ops) {
      599 +                                        autosnap_create_snapshot(azone,
      600 +                                            snap, dp, txg, tx);
      601 +                                }
      602 +                        } else if (wbc_azone) {
      603 +                                wbc_skip_txg = B_TRUE;
      604 +                        }
      605 +                }
      606 +
      607 +                azone = list_head(&autosnap->autosnap_zones);
      608 +                while (azone != NULL) {
      609 +                        boolean_t wbc_azone =
      610 +                            ((azone->flags & AUTOSNAP_WBC) != 0);
      611 +
      612 +                        if (user_snap) {
      613 +                                azone->delayed = B_TRUE;
      614 +                        } else if (!azone->dirty && azone->delayed) {
      615 +                                if (autosnap_confirm_snap(azone, txg)) {
      616 +                                        if (!wbc_azone && !user_snap &&
      617 +                                            !sync_ops) {
      618 +                                                autosnap_create_snapshot(azone,
      619 +                                                    snap, dp, txg, tx);
      620 +                                        }
      621 +                                } else if (wbc_azone) {
      622 +                                        wbc_skip_txg = B_TRUE;
      623 +                                }
      624 +                        }
      625 +
      626 +                        azone = list_next(&autosnap->autosnap_zones, azone);
      627 +                }
      628 +
      629 +                mutex_exit(&autosnap->autosnap_lock);
      630 +                rrw_exit(&dp->dp_config_rwlock, FTAG);
      631 +        }
      632 +
      633 +
 563  634          /*
 564  635           * Write out all dirty blocks of dirty datasets.
 565  636           */
 566  637          zio = zio_root(dp->dp_spa, NULL, NULL, ZIO_FLAG_MUSTSUCCEED);
 567  638          while ((ds = txg_list_remove(&dp->dp_dirty_datasets, txg)) != NULL) {
      639 +
 568  640                  /*
 569  641                   * We must not sync any non-MOS datasets twice, because
 570  642                   * we may have taken a snapshot of them.  However, we
 571  643                   * may sync newly-created datasets on pass 2.
 572  644                   */
 573  645                  ASSERT(!list_link_active(&ds->ds_synced_link));
 574  646                  list_insert_tail(&synced_datasets, ds);
 575  647                  dsl_dataset_sync(ds, zio, tx);
 576  648          }
      649 +
 577  650          VERIFY0(zio_wait(zio));
 578  651  
      652 +        if (autosnap_initialized && spa->spa_sync_pass == 1 &&
      653 +            !user_snap) {
      654 +                autosnap_zone_t *azone;
      655 +
      656 +                rrw_enter(&dp->dp_config_rwlock, RW_READER, FTAG);
      657 +                mutex_enter(&autosnap->autosnap_lock);
      658 +
      659 +                /*
       660 +                 * At this stage we walk over all delayed zones and
       661 +                 * create their autosnaps.
      662 +                 */
      663 +
      664 +                azone = list_head(&autosnap->autosnap_zones);
      665 +                while (azone != NULL) {
      666 +                        boolean_t skip_zone =
      667 +                            ((azone->flags & AUTOSNAP_CREATOR) == 0);
      668 +
      669 +                        if (azone->delayed && !skip_zone) {
      670 +                                boolean_t wbc_azone =
      671 +                                    ((azone->flags & AUTOSNAP_WBC) != 0);
      672 +
      673 +                                if ((!wbc_azone || !wbc_skip_txg) &&
      674 +                                    autosnap_confirm_snap(azone, txg)) {
      675 +                                        autosnap_create_snapshot(azone,
      676 +                                            snap, dp, txg, tx);
      677 +                                }
      678 +                        }
      679 +
      680 +                        if (skip_zone)
      681 +                                azone->delayed = B_FALSE;
      682 +
      683 +                        azone = list_next(&autosnap->autosnap_zones, azone);
      684 +                }
      685 +
      686 +                mutex_exit(&autosnap->autosnap_lock);
      687 +                rrw_exit(&dp->dp_config_rwlock, FTAG);
      688 +        }
      689 +
 579  690          /*
 580  691           * We have written all of the accounted dirty data, so our
 581  692           * dp_space_towrite should now be zero.  However, some seldom-used
 582  693           * code paths do not adhere to this (e.g. dbuf_undirty(), also
 583  694           * rounding error in dbuf_write_physdone).
 584  695           * Shore up the accounting of any dirtied space now.
 585  696           */
 586  697          dsl_pool_undirty_space(dp, dp->dp_dirty_pertxg[txg & TXG_MASK], txg);
 587  698  
 588  699          /*
 589  700           * Update the long range free counter after
 590  701           * we're done syncing user data
 591  702           */
 592  703          mutex_enter(&dp->dp_lock);
 593  704          ASSERT(spa_sync_pass(dp->dp_spa) == 1 ||
 594  705              dp->dp_long_free_dirty_pertxg[txg & TXG_MASK] == 0);
      706 +        dp->dp_long_freeing_total -=
      707 +            dp->dp_long_free_dirty_pertxg[txg & TXG_MASK];
 595  708          dp->dp_long_free_dirty_pertxg[txg & TXG_MASK] = 0;
 596  709          mutex_exit(&dp->dp_lock);
 597  710  
 598  711          /*
 599  712           * After the data blocks have been written (ensured by the zio_wait()
 600  713           * above), update the user/group space accounting.  This happens
 601  714           * in tasks dispatched to dp_sync_taskq, so wait for them before
 602  715           * continuing.
 603  716           */
 604  717          for (ds = list_head(&synced_datasets); ds != NULL;
(2 lines elided)
 607  720          }
 608  721          taskq_wait(dp->dp_sync_taskq);
 609  722  
 610  723          /*
 611  724           * Sync the datasets again to push out the changes due to
 612  725           * userspace updates.  This must be done before we process the
 613  726           * sync tasks, so that any snapshots will have the correct
 614  727           * user accounting information (and we won't get confused
 615  728           * about which blocks are part of the snapshot).
 616  729           */
      730 +
 617  731          zio = zio_root(dp->dp_spa, NULL, NULL, ZIO_FLAG_MUSTSUCCEED);
 618  732          while ((ds = txg_list_remove(&dp->dp_dirty_datasets, txg)) != NULL) {
 619  733                  ASSERT(list_link_active(&ds->ds_synced_link));
 620  734                  dmu_buf_rele(ds->ds_dbuf, ds);
 621  735                  dsl_dataset_sync(ds, zio, tx);
 622  736          }
 623  737          VERIFY0(zio_wait(zio));
 624  738  
 625  739          /*
 626  740           * Now that the datasets have been completely synced, we can
(31 lines elided)
 658  772  
 659  773          /*
 660  774           * If we modify a dataset in the same txg that we want to destroy it,
 661  775           * its dsl_dir's dd_dbuf will be dirty, and thus have a hold on it.
 662  776           * dsl_dir_destroy_check() will fail if there are unexpected holds.
 663  777           * Therefore, we want to sync the MOS (thus syncing the dd_dbuf
 664  778           * and clearing the hold on it) before we process the sync_tasks.
 665  779           * The MOS data dirtied by the sync_tasks will be synced on the next
 666  780           * pass.
 667  781           */
      782 +
 668  783          if (!txg_list_empty(&dp->dp_sync_tasks, txg)) {
 669  784                  dsl_sync_task_t *dst;
 670  785                  /*
 671  786                   * No more sync tasks should have been added while we
 672  787                   * were syncing.
 673  788                   */
 674  789                  ASSERT3U(spa_sync_pass(dp->dp_spa), ==, 1);
 675  790                  while ((dst = txg_list_remove(&dp->dp_sync_tasks, txg)) != NULL)
 676  791                          dsl_sync_task_sync(dst, tx);
 677  792          }
 678  793  
      794 +        if (spa_feature_is_active(spa, SPA_FEATURE_WBC)) {
      795 +                wbc_trigger_wbcthread(dp->dp_spa,
      796 +                    ((dp->dp_sync_history[0] + dp->dp_sync_history[1]) / 2));
      797 +        }
      798 +
 679  799          dmu_tx_commit(tx);
 680  800  
 681  801          DTRACE_PROBE2(dsl_pool_sync__done, dsl_pool_t *dp, dp, uint64_t, txg);
 682  802  }
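For reference, the autosnap names built in dsl_pool_sync() above are simply AUTOSNAP_PREFIX followed by the decimal txg number. A tiny standalone illustration of that formatting (the prefix value shown is hypothetical; the real one is defined in sys/autosnap.h):

#include <stdio.h>
#include <stdint.h>

#define AUTOSNAP_PREFIX "autosnap-"     /* hypothetical value for illustration */

int
main(void)
{
        char snap[256];
        uint64_t txg = 123456;

        /* Same pattern as dsl_pool_sync(): prefix immediately followed by txg. */
        (void) snprintf(snap, sizeof (snap), "%s%llu", AUTOSNAP_PREFIX,
            (unsigned long long)txg);
        (void) printf("%s\n", snap);    /* prints "autosnap-123456" */
        return (0);
}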
 683  803  
 684  804  void
 685  805  dsl_pool_sync_done(dsl_pool_t *dp, uint64_t txg)
 686  806  {
 687  807          zilog_t *zilog;
 688  808  
(43 lines elided)
 732  852          return (space - resv);
 733  853  }
 734  854  
 735  855  boolean_t
 736  856  dsl_pool_need_dirty_delay(dsl_pool_t *dp)
 737  857  {
 738  858          uint64_t delay_min_bytes =
 739  859              zfs_dirty_data_max * zfs_delay_min_dirty_percent / 100;
 740  860          boolean_t rv;
 741  861  
 742      -        mutex_enter(&dp->dp_lock);
 743  862          if (dp->dp_dirty_total > zfs_dirty_data_sync)
 744  863                  txg_kick(dp);
 745  864          rv = (dp->dp_dirty_total > delay_min_bytes);
 746      -        mutex_exit(&dp->dp_lock);
      865 +
 747  866          return (rv);
 748  867  }
 749  868  
 750  869  void
 751  870  dsl_pool_dirty_space(dsl_pool_t *dp, int64_t space, dmu_tx_t *tx)
 752  871  {
 753  872          if (space > 0) {
 754  873                  mutex_enter(&dp->dp_lock);
 755  874                  dp->dp_dirty_pertxg[tx->tx_txg & TXG_MASK] += space;
 756  875                  dsl_pool_dirty_delta(dp, space);
(421 lines elided)