NEX-9673 Add capability to replicate cloned datasets relative to origin
Reviewed by: Alex Deiter <alex.deiter@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Revert "NEX-5085 implement async delete for large files"
This reverts commit 65aa8f42d93fcbd6e0efb3d4883170a20d760611.
Fails regression testing of the zfs test mirror_stress_004.
NEX-5085 implement async delete for large files
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Kirill Davydychev <kirill.davydychev@nexenta.com>
NEX-7479 Autosnap may dispatch duplicated sync-tasks
Reviewed by: Alex Deiter <alex.deiter@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-7543 backout async delete (NEX-5085 and NEX-6151)
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5795 Rename 'wrc' as 'wbc' in the source and in the tech docs
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-5078 Want ability to see progress of freeing data and how much is left to free after large file delete patch
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5085 implement async delete for large files
Reviewed by: Marcel Telka <marcel.telka@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-5024 Slow performance with a single large file delete
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4830 writecache=off leaks data on special vdev (the data will never migrate)
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
5981 Deadlock in dmu_objset_find_dp
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Approved by: Robert Mustacchi <rm@joyent.com>
5269 zpool import slow
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Approved by: Dan McDonald <danmcd@omniti.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Revert "NEX-4476 WRC: Allow to use write back cache per tree of datasets"
This reverts commit fe97b74444278a6f36fec93179133641296312da.
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-3964 It should not be allowed to rename a snapshot so that its new name matches the prefix of in-kernel autosnapshots
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3485 Deferred deletes causing loss of service for NFS clients on cluster failover
Reviewed by: Marcel Telka <marcel.telka@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-3558 KRRP Integration
NEX-3079 port illumos ARC improvements
Fixup merge results
re #13253 rb4328 ssh: openssl version checking needs updating
re #11441 rb4292 panic in apic_record_rdt_entry on VMware hardware version 9
re #12619, rb4287 Deadlocked zfs txg processing in dsl_sync_task_group_sync()
re #12585 rb4049 ZFS++ work port - refactoring to improve separation of open/closed code, bug fixes, performance improvements - open code
Bug 11205: add missing libzfs_closed_stubs.c to fix opensource-only build.
ZFS plus work: special vdevs, cos, cos/vdev properties

*** 25,43 ****
--- 25,46 ----
   * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
   * Copyright (c) 2014 Integros [integros.com]
   * Copyright 2016 Nexenta Systems, Inc. All rights reserved.
   */
  
+ #include <sys/autosnap.h>
  #include <sys/dsl_pool.h>
  #include <sys/dsl_dataset.h>
  #include <sys/dsl_prop.h>
  #include <sys/dsl_dir.h>
  #include <sys/dsl_synctask.h>
+ #include <sys/dsl_dataset.h>
  #include <sys/dsl_scan.h>
  #include <sys/dnode.h>
  #include <sys/dmu_tx.h>
  #include <sys/dmu_objset.h>
+ #include <sys/dmu_traverse.h>
  #include <sys/arc.h>
  #include <sys/zap.h>
  #include <sys/zio.h>
  #include <sys/zfs_context.h>
  #include <sys/fs/zfs.h>
*** 47,56 ****
--- 50,62 ----
  #include <sys/bptree.h>
  #include <sys/zfeature.h>
  #include <sys/zil_impl.h>
  #include <sys/dsl_userhold.h>
+ #include <sys/wbc.h>
+ #include <sys/time.h>
+ 
  /*
   * ZFS Write Throttle
   * ------------------
   *
   * ZFS must limit the rate of incoming writes to the rate at which it is able
*** 160,169 ****
--- 166,181 ----
   */
  int zfs_zil_clean_taskq_nthr_pct = 100;
  int zfs_zil_clean_taskq_minalloc = 1024;
  int zfs_zil_clean_taskq_maxalloc = 1024 * 1024;
  
+ /*
+  * Tunable to control max number of tasks available for processing of
+  * deferred deletes.
+  */
+ int zfs_vn_rele_max_tasks = 256;
+ 
  int
  dsl_pool_open_special_dir(dsl_pool_t *dp, const char *name, dsl_dir_t **ddp)
  {
  	uint64_t obj;
  	int err;
*** 185,194 ****
--- 197,209 ----
  	dp = kmem_zalloc(sizeof (dsl_pool_t), KM_SLEEP);
  	dp->dp_spa = spa;
  	dp->dp_meta_rootbp = *bp;
  	rrw_init(&dp->dp_config_rwlock, B_TRUE);
+ 
+ 	dp->dp_sync_history[0] = dp->dp_sync_history[1] = 0;
+ 
  	txg_init(dp, txg);
  
  	txg_list_create(&dp->dp_dirty_datasets, spa,
  	    offsetof(dsl_dataset_t, ds_dirty_link));
  	txg_list_create(&dp->dp_dirty_zilogs, spa,
*** 209,220 ****
  	    TASKQ_PREPOPULATE | TASKQ_THREADS_CPU_PCT);
  	mutex_init(&dp->dp_lock, NULL, MUTEX_DEFAULT, NULL);
  	cv_init(&dp->dp_spaceavail_cv, NULL, CV_DEFAULT, NULL);
  
! 	dp->dp_vnrele_taskq = taskq_create("zfs_vn_rele_taskq", 1, minclsyspri,
! 	    1, 4, 0);
  
  	return (dp);
  }
  
  int
--- 224,236 ----
  	    TASKQ_PREPOPULATE | TASKQ_THREADS_CPU_PCT);
  	mutex_init(&dp->dp_lock, NULL, MUTEX_DEFAULT, NULL);
  	cv_init(&dp->dp_spaceavail_cv, NULL, CV_DEFAULT, NULL);
  
! 	dp->dp_vnrele_taskq = taskq_create("zfs_vn_rele_taskq",
! 	    zfs_vn_rele_max_tasks, minclsyspri,
! 	    1, zfs_vn_rele_max_tasks, TASKQ_DYNAMIC);
  
  	return (dp);
  }
  
  int
*** 286,315 ****
  			goto out;
  		VERIFY0(bpobj_open(&dp->dp_free_bpobj, dp->dp_meta_objset, obj));
  	}
- 	if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_OBSOLETE_COUNTS)) {
- 		err = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
- 		    DMU_POOL_OBSOLETE_BPOBJ, sizeof (uint64_t), 1, &obj);
- 		if (err == 0) {
- 			VERIFY0(bpobj_open(&dp->dp_obsolete_bpobj,
- 			    dp->dp_meta_objset, obj));
- 		} else if (err == ENOENT) {
  			/*
! 			 * We might not have created the remap bpobj yet.
  			 */
- 			err = 0;
- 		} else {
- 			goto out;
- 		}
- 	}
- 
- 	/*
- 	 * Note: errors ignored, because the these special dirs, used for
- 	 * space accounting, are only created on demand.
- 	 */
  	(void) dsl_pool_open_special_dir(dp, LEAK_DIR_NAME, &dp->dp_leak_dir);
  
  	if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_ASYNC_DESTROY)) {
  		err = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
--- 302,315 ----
  			goto out;
  		VERIFY0(bpobj_open(&dp->dp_free_bpobj, dp->dp_meta_objset, obj));
  	}
  	/*
! 	 * Note: errors ignored, because the leak dir will not exist if we
! 	 * have not encountered a leak yet.
  	 */
  	(void) dsl_pool_open_special_dir(dp, LEAK_DIR_NAME, &dp->dp_leak_dir);
  
  	if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_ASYNC_DESTROY)) {
  		err = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
*** 350,375 ****
  	 *
  	 * Since we held the origin_snap from "syncing" context (which
  	 * includes pool-opening context), it actually only got a "ref"
  	 * and not a hold, so just drop that here.
  	 */
! 	if (dp->dp_origin_snap != NULL)
  		dsl_dataset_rele(dp->dp_origin_snap, dp);
! 	if (dp->dp_mos_dir != NULL)
  		dsl_dir_rele(dp->dp_mos_dir, dp);
! 	if (dp->dp_free_dir != NULL)
  		dsl_dir_rele(dp->dp_free_dir, dp);
! 	if (dp->dp_leak_dir != NULL)
  		dsl_dir_rele(dp->dp_leak_dir, dp);
! 	if (dp->dp_root_dir != NULL)
  		dsl_dir_rele(dp->dp_root_dir, dp);
  	bpobj_close(&dp->dp_free_bpobj);
- 	bpobj_close(&dp->dp_obsolete_bpobj);
  
  	/* undo the dmu_objset_open_impl(mos) from dsl_pool_open() */
! 	if (dp->dp_meta_objset != NULL)
  		dmu_objset_evict(dp->dp_meta_objset);
  
  	txg_list_destroy(&dp->dp_dirty_datasets);
  	txg_list_destroy(&dp->dp_dirty_zilogs);
  	txg_list_destroy(&dp->dp_sync_tasks);
--- 350,374 ----
  	 *
  	 * Since we held the origin_snap from "syncing" context (which
  	 * includes pool-opening context), it actually only got a "ref"
  	 * and not a hold, so just drop that here.
  	 */
! 	if (dp->dp_origin_snap)
  		dsl_dataset_rele(dp->dp_origin_snap, dp);
! 	if (dp->dp_mos_dir)
  		dsl_dir_rele(dp->dp_mos_dir, dp);
! 	if (dp->dp_free_dir)
  		dsl_dir_rele(dp->dp_free_dir, dp);
! 	if (dp->dp_leak_dir)
  		dsl_dir_rele(dp->dp_leak_dir, dp);
! 	if (dp->dp_root_dir)
  		dsl_dir_rele(dp->dp_root_dir, dp);
  	bpobj_close(&dp->dp_free_bpobj);
  
  	/* undo the dmu_objset_open_impl(mos) from dsl_pool_open() */
! 	if (dp->dp_meta_objset)
  		dmu_objset_evict(dp->dp_meta_objset);
  
  	txg_list_destroy(&dp->dp_dirty_datasets);
  	txg_list_destroy(&dp->dp_dirty_zilogs);
  	txg_list_destroy(&dp->dp_sync_tasks);
*** 382,434 ****
  	 * We can't set retry to TRUE since we're explicitly specifying
  	 * a spa to flush. This is good enough; any missed buffers for
  	 * this spa won't cause trouble, and they'll eventually fall
  	 * out of the ARC just like any other unused buffer.
  	 */
! 	arc_flush(dp->dp_spa, FALSE);
! 	txg_fini(dp);
  	dsl_scan_fini(dp);
  	dmu_buf_user_evict_wait();
  	rrw_destroy(&dp->dp_config_rwlock);
  	mutex_destroy(&dp->dp_lock);
  	taskq_destroy(dp->dp_vnrele_taskq);
! 	if (dp->dp_blkstats != NULL)
  		kmem_free(dp->dp_blkstats, sizeof (zfs_all_blkstats_t));
  	kmem_free(dp, sizeof (dsl_pool_t));
  }
  
- void
- dsl_pool_create_obsolete_bpobj(dsl_pool_t *dp, dmu_tx_t *tx)
- {
- 	uint64_t obj;
- 	/*
- 	 * Currently, we only create the obsolete_bpobj where there are
- 	 * indirect vdevs with referenced mappings.
- 	 */
- 	ASSERT(spa_feature_is_active(dp->dp_spa, SPA_FEATURE_DEVICE_REMOVAL));
- 	/* create and open the obsolete_bpobj */
- 	obj = bpobj_alloc(dp->dp_meta_objset, SPA_OLD_MAXBLOCKSIZE, tx);
- 	VERIFY0(bpobj_open(&dp->dp_obsolete_bpobj, dp->dp_meta_objset, obj));
- 	VERIFY0(zap_add(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
- 	    DMU_POOL_OBSOLETE_BPOBJ, sizeof (uint64_t), 1, &obj, tx));
- 	spa_feature_incr(dp->dp_spa, SPA_FEATURE_OBSOLETE_COUNTS, tx);
- }
- 
- void
- dsl_pool_destroy_obsolete_bpobj(dsl_pool_t *dp, dmu_tx_t *tx)
- {
- 	spa_feature_decr(dp->dp_spa, SPA_FEATURE_OBSOLETE_COUNTS, tx);
- 	VERIFY0(zap_remove(dp->dp_meta_objset,
- 	    DMU_POOL_DIRECTORY_OBJECT,
- 	    DMU_POOL_OBSOLETE_BPOBJ, tx));
- 	bpobj_free(dp->dp_meta_objset,
- 	    dp->dp_obsolete_bpobj.bpo_object, tx);
- 	bpobj_close(&dp->dp_obsolete_bpobj);
- }
- 
  dsl_pool_t *
  dsl_pool_create(spa_t *spa, nvlist_t *zplprops, uint64_t txg)
  {
  	int err;
  	dsl_pool_t *dp = dsl_pool_open_impl(spa, txg);
--- 381,403 ----
  	 * We can't set retry to TRUE since we're explicitly specifying
  	 * a spa to flush. This is good enough; any missed buffers for
  	 * this spa won't cause trouble, and they'll eventually fall
  	 * out of the ARC just like any other unused buffer.
  	 */
! 	arc_flush(dp->dp_spa, B_FALSE);
  	txg_fini(dp);
  	dsl_scan_fini(dp);
  	dmu_buf_user_evict_wait();
  	rrw_destroy(&dp->dp_config_rwlock);
  	mutex_destroy(&dp->dp_lock);
  	taskq_destroy(dp->dp_vnrele_taskq);
! 	if (dp->dp_blkstats)
  		kmem_free(dp->dp_blkstats, sizeof (zfs_all_blkstats_t));
  	kmem_free(dp, sizeof (dsl_pool_t));
  }
  
  dsl_pool_t *
  dsl_pool_create(spa_t *spa, nvlist_t *zplprops, uint64_t txg)
  {
  	int err;
  	dsl_pool_t *dp = dsl_pool_open_impl(spa, txg);
*** 551,584 ****
--- 520,695 ----
  	zio_t *zio;
  	dmu_tx_t *tx;
  	dsl_dir_t *dd;
  	dsl_dataset_t *ds;
  	objset_t *mos = dp->dp_meta_objset;
+ 	spa_t *spa = dp->dp_spa;
  	list_t synced_datasets;
+ 	dsl_sync_task_t *iter;
+ 	boolean_t wbc_skip_txg = B_FALSE;
+ 	boolean_t sync_ops = B_FALSE;
+ 	boolean_t user_snap = B_FALSE;
+ 	zfs_autosnap_t *autosnap = spa_get_autosnap(spa);
+ 	boolean_t autosnap_initialized = autosnap->initialized;
+ 	char snap[ZFS_MAX_DATASET_NAME_LEN];
+ 
+ 	/* check if there are ny sync ops in the txg */
+ 	if (txg_list_head(&dp->dp_sync_tasks, txg) != NULL)
+ 		sync_ops = B_TRUE;
+ 
+ 	/* check if there are user snaps in the txg */
+ 	for (iter = txg_list_head(&dp->dp_sync_tasks, txg);
+ 	    iter != NULL;
+ 	    iter = txg_list_next(&dp->dp_sync_tasks, iter, txg)) {
+ 		if (iter->dst_syncfunc == dsl_dataset_snapshot_sync) {
+ 			user_snap = B_TRUE;
+ 			break;
+ 		}
+ 	}
+ 
+ 
  	list_create(&synced_datasets, sizeof (dsl_dataset_t),
  	    offsetof(dsl_dataset_t, ds_synced_link));
  
  	tx = dmu_tx_create_assigned(dp, txg);
  
+ 	(void) sprintf(snap, "%s%llu", AUTOSNAP_PREFIX,
+ 	    (unsigned long long int) txg);
+ 
+ 	if (autosnap_initialized && spa->spa_sync_pass == 1) {
+ 		autosnap_zone_t *azone;
+ 
+ 		rrw_enter(&dp->dp_config_rwlock, RW_READER, FTAG);
+ 		mutex_enter(&autosnap->autosnap_lock);
+ 		/*
+ 		 * WBC: the mechanism to ensure all WBC-ed dirty datasets
+ 		 * are synchronously auto-snapshotted
+ 		 * within (or by) the same TXG sync
+ 		 * The "synchronicity" of the rightmost boundary of the WBC
+ 		 * window is important to avoid used-space leakages
+ 		 * on special vdev.
+ 		 * Note that we skip here the WBC-ed datasets that are
+ 		 * already fully migrated and don't have data on special
+ 		 */
+ 
+ 		for (ds = txg_list_head(&dp->dp_dirty_datasets, txg);
+ 		    ds != NULL;
+ 		    ds = txg_list_next(&dp->dp_dirty_datasets, ds, txg)) {
+ 			char ds_name[ZFS_MAX_DATASET_NAME_LEN];
+ 			boolean_t wbc_azone;
+ 
+ 			dsl_dataset_name(ds, ds_name);
+ 
+ 			azone = autosnap_find_zone(autosnap, ds_name, B_TRUE);
+ 			if (azone == NULL)
+ 				continue;
+ 
+ 			if ((azone->flags & AUTOSNAP_CREATOR) == 0)
+ 				continue;
+ 
+ 			if (azone->created)
+ 				continue;
+ 
+ 			azone->delayed = B_TRUE;
+ 			azone->dirty = B_TRUE;
+ 			wbc_azone = (azone->flags & AUTOSNAP_WBC) != 0;
+ 
+ 			if (autosnap_confirm_snap(azone, txg)) {
+ 				if (!wbc_azone && !user_snap && !sync_ops) {
+ 					autosnap_create_snapshot(azone,
+ 					    snap, dp, txg, tx);
+ 				}
+ 			} else if (wbc_azone) {
+ 				wbc_skip_txg = B_TRUE;
+ 			}
+ 		}
+ 
+ 		azone = list_head(&autosnap->autosnap_zones);
+ 		while (azone != NULL) {
+ 			boolean_t wbc_azone =
+ 			    ((azone->flags & AUTOSNAP_WBC) != 0);
+ 
+ 			if (user_snap) {
+ 				azone->delayed = B_TRUE;
+ 			} else if (!azone->dirty && azone->delayed) {
+ 				if (autosnap_confirm_snap(azone, txg)) {
+ 					if (!wbc_azone && !user_snap &&
+ 					    !sync_ops) {
+ 						autosnap_create_snapshot(azone,
+ 						    snap, dp, txg, tx);
+ 					}
+ 				} else if (wbc_azone) {
+ 					wbc_skip_txg = B_TRUE;
+ 				}
+ 			}
+ 
+ 			azone = list_next(&autosnap->autosnap_zones, azone);
+ 		}
+ 
+ 		mutex_exit(&autosnap->autosnap_lock);
+ 		rrw_exit(&dp->dp_config_rwlock, FTAG);
+ 	}
+ 
+ 
+ 
  	/*
  	 * Write out all dirty blocks of dirty datasets.
  	 */
  	zio = zio_root(dp->dp_spa, NULL, NULL, ZIO_FLAG_MUSTSUCCEED);
  	while ((ds = txg_list_remove(&dp->dp_dirty_datasets, txg)) != NULL) {
+ 
  		/*
  		 * We must not sync any non-MOS datasets twice, because
  		 * we may have taken a snapshot of them. However, we
  		 * may sync newly-created datasets on pass 2.
  		 */
  		ASSERT(!list_link_active(&ds->ds_synced_link));
  		list_insert_tail(&synced_datasets, ds);
  		dsl_dataset_sync(ds, zio, tx);
  	}
+ 
  	VERIFY0(zio_wait(zio));
+ 	if (autosnap_initialized && spa->spa_sync_pass == 1 &&
+ 	    !user_snap) {
+ 		autosnap_zone_t *azone;
+ 
+ 		rrw_enter(&dp->dp_config_rwlock, RW_READER, FTAG);
+ 		mutex_enter(&autosnap->autosnap_lock);
+ 		/*
+ 		 * At this stage we are walking over all delayed zones
+ 		 * to create autosnaps
+ 		 */
+ 
+ 		azone = list_head(&autosnap->autosnap_zones);
+ 		while (azone != NULL) {
+ 			boolean_t skip_zone =
+ 			    ((azone->flags & AUTOSNAP_CREATOR) == 0);
+ 
+ 			if (azone->delayed && !skip_zone) {
+ 				boolean_t wbc_azone =
+ 				    ((azone->flags & AUTOSNAP_WBC) != 0);
+ 
+ 				if ((!wbc_azone || !wbc_skip_txg) &&
+ 				    autosnap_confirm_snap(azone, txg)) {
+ 					autosnap_create_snapshot(azone,
+ 					    snap, dp, txg, tx);
+ 				}
+ 			}
+ 
+ 			if (skip_zone)
+ 				azone->delayed = B_FALSE;
+ 
+ 			azone = list_next(&autosnap->autosnap_zones, azone);
+ 		}
+ 
+ 		mutex_exit(&autosnap->autosnap_lock);
+ 		rrw_exit(&dp->dp_config_rwlock, FTAG);
+ 	}
+ 
+ 
  	/*
  	 * We have written all of the accounted dirty data, so our
  	 * dp_space_towrite should now be zero. However, some seldom-used
  	 * code paths do not adhere to this (e.g. dbuf_undirty(), also
  	 * rounding error in dbuf_write_physdone).
  	 * Shore up the accounting of any dirtied space now.
*** 590,599 ****
--- 701,712 ----
  	 * we're done syncing user data
  	 */
  	mutex_enter(&dp->dp_lock);
  	ASSERT(spa_sync_pass(dp->dp_spa) == 1 ||
  	    dp->dp_long_free_dirty_pertxg[txg & TXG_MASK] == 0);
+ 	dp->dp_long_freeing_total -=
+ 	    dp->dp_long_free_dirty_pertxg[txg & TXG_MASK];
  	dp->dp_long_free_dirty_pertxg[txg & TXG_MASK] = 0;
  	mutex_exit(&dp->dp_lock);
  
  	/*
  	 * After the data blocks have been written (ensured by the zio_wait()
*** 612,621 ****
--- 725,735 ----
  	 * userspace updates. This must be done before we process the
  	 * sync tasks, so that any snapshots will have the correct
  	 * user accounting information (and we won't get confused
  	 * about which blocks are part of the snapshot).
  	 */
+ 
  	zio = zio_root(dp->dp_spa, NULL, NULL, ZIO_FLAG_MUSTSUCCEED);
  	while ((ds = txg_list_remove(&dp->dp_dirty_datasets, txg)) != NULL) {
  		ASSERT(list_link_active(&ds->ds_synced_link));
  		dmu_buf_rele(ds->ds_dbuf, ds);
  		dsl_dataset_sync(ds, zio, tx);
*** 663,672 ****
--- 777,787 ----
  	 * Therefore, we want to sync the MOS (thus syncing the dd_dbuf
  	 * and clearing the hold on it) before we process the sync_tasks.
  	 * The MOS data dirtied by the sync_tasks will be synced on the next
  	 * pass.
  	 */
+ 
  	if (!txg_list_empty(&dp->dp_sync_tasks, txg)) {
  		dsl_sync_task_t *dst;
  		/*
  		 * No more sync tasks should have been added while we
  		 * were syncing.
*** 674,683 ****
--- 789,803 ----
  		ASSERT3U(spa_sync_pass(dp->dp_spa), ==, 1);
  		while ((dst = txg_list_remove(&dp->dp_sync_tasks, txg)) != NULL)
  			dsl_sync_task_sync(dst, tx);
  	}
  
+ 	if (spa_feature_is_active(spa, SPA_FEATURE_WBC)) {
+ 		wbc_trigger_wbcthread(dp->dp_spa,
+ 		    ((dp->dp_sync_history[0] + dp->dp_sync_history[1]) / 2));
+ 	}
+ 
  	dmu_tx_commit(tx);
  
  	DTRACE_PROBE2(dsl_pool_sync__done, dsl_pool_t *dp, dp, uint64_t, txg);
  }
*** 737,751 ****
  {
  	uint64_t delay_min_bytes =
  	    zfs_dirty_data_max * zfs_delay_min_dirty_percent / 100;
  	boolean_t rv;
  
- 	mutex_enter(&dp->dp_lock);
  	if (dp->dp_dirty_total > zfs_dirty_data_sync)
  		txg_kick(dp);
  	rv = (dp->dp_dirty_total > delay_min_bytes);
! 	mutex_exit(&dp->dp_lock);
  	return (rv);
  }
  
  void
  dsl_pool_dirty_space(dsl_pool_t *dp, int64_t space, dmu_tx_t *tx)
--- 857,870 ----
  {
  	uint64_t delay_min_bytes =
  	    zfs_dirty_data_max * zfs_delay_min_dirty_percent / 100;
  	boolean_t rv;
  
  	if (dp->dp_dirty_total > zfs_dirty_data_sync)
  		txg_kick(dp);
  	rv = (dp->dp_dirty_total > delay_min_bytes);
! 
  	return (rv);
  }
  
  void
  dsl_pool_dirty_space(dsl_pool_t *dp, int64_t space, dmu_tx_t *tx)