NEX-13140 DVA-throttle support for special-class
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-13135 Running BDD tests exposes a panic in ZFS TRIM due to a trimset overlap
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-10069 ZFS_READONLY is a little too strict (fix test lint)
NEX-9553 Move ss_fill gap logic from scan algorithm into range_tree.c
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-6088 ZFS scrub/resilver take excessively long due to issuing lots of random IO
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5553 ZFS auto-trim, manual-trim and scrub can race and deadlock
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5795 Rename 'wrc' as 'wbc' in the source and in the tech docs
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-4720 WRC: DVA allocation bypass for special BPs works incorrect
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-4683 WRC: Special block pointer must know that it is special
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-4620 ZFS autotrim triggering is unreliable
NEX-4622 On-demand TRIM code illogically enumerates metaslabs via mg_ms_tree
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
6295 metaslab_condense's dbgmsg should include vdev id
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andriy Gapon <avg@freebsd.org>
Reviewed by: Xin Li <delphij@freebsd.org>
Reviewed by: Justin Gibbs <gibbs@scsiguy.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-4245 WRC: Code cleanup and refactoring to simplify merge with upstream
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-4059 On-demand TRIM can sometimes race in metaslab_load
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3984 On-demand TRIM
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Conflicts:
        usr/src/common/zfs/zpool_prop.c
        usr/src/uts/common/sys/fs/zfs.h
NEX-3710 WRC improvements and bug-fixes
 * refactored the WRC move logic to use zio kmem caches
 * replaced the size and compression fields with a blk_prop field
   (the same as in blkptr_t) to slightly reduce the size of wrc_block_t,
   and use blkptr_t-style macros to get PSIZE, LSIZE and COMPRESSION
 * reduced the number of atomic operations to lower CPU overhead
 * removed unused code
 * fixed variable naming
 * fixed a possible system panic after restarting a system
   with WRC enabled
 * fixed a race that could cause a system panic
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3558 KRRP Integration
NEX-3508 CLONE - Port NEX-2946 Add UNMAP/TRIM functionality to ZFS and illumos
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Conflicts:
    usr/src/uts/common/io/scsi/targets/sd.c
    usr/src/uts/common/sys/scsi/targets/sddef.h
OS-197 Series of zpool exports and imports can hang the system
Reviewed by: Sarah Jelinek <sarah.jelinek@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Tony Nguyen <tony.nguyen@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
re #8346 rb2639 KT disk failures


   6  * You may not use this file except in compliance with the License.
   7  *
   8  * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
   9  * or http://www.opensolaris.org/os/licensing.
  10  * See the License for the specific language governing permissions
  11  * and limitations under the License.
  12  *
  13  * When distributing Covered Code, include this CDDL HEADER in each
  14  * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15  * If applicable, add the following below this CDDL HEADER, with the
  16  * fields enclosed by brackets "[]" replaced with your own identifying
  17  * information: Portions Copyright [yyyy] [name of copyright owner]
  18  *
  19  * CDDL HEADER END
  20  */
  21 /*
  22  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
  23  * Copyright (c) 2011, 2015 by Delphix. All rights reserved.
  24  * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
  25  * Copyright (c) 2014 Integros [integros.com]

  26  */
  27 
  28 #include <sys/zfs_context.h>
  29 #include <sys/dmu.h>
  30 #include <sys/dmu_tx.h>
  31 #include <sys/space_map.h>
  32 #include <sys/metaslab_impl.h>
  33 #include <sys/vdev_impl.h>
  34 #include <sys/zio.h>
  35 #include <sys/spa_impl.h>
  36 #include <sys/zfeature.h>
  37 #include <sys/vdev_indirect_mapping.h>
  38 
  39 #define GANG_ALLOCATION(flags) \
  40         ((flags) & (METASLAB_GANG_CHILD | METASLAB_GANG_HEADER))
  41 
  42 uint64_t metaslab_aliquot = 512ULL << 10;
  43 uint64_t metaslab_gang_bang = SPA_MAXBLOCKSIZE + 1;     /* force gang blocks */
  44 
  45 /*
  46  * The in-core space map representation is more compact than its on-disk form.
  47  * The zfs_condense_pct determines how much more compact the in-core
  48  * space map representation must be before we compact it on-disk.
  49  * Values should be greater than or equal to 100.
  50  */
  51 int zfs_condense_pct = 200;
  52 
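/*
 * Hedged illustration (standalone sketch, not the in-kernel check) of how
 * zfs_condense_pct is meant to be read: with the default of 200, the
 * on-disk space map must be at least twice the size of its minimal
 * one-entry-per-segment form before condensing looks worthwhile.  The
 * real decision also weighs additional criteria; see the comment above
 * metaslab_should_condense() further down in this file.
 */
#include <stdint.h>
#include <stdbool.h>

static bool
worth_condensing_sketch(uint64_t sm_len_on_disk, uint64_t nsegments)
{
	uint64_t minimal = nsegments * sizeof (uint64_t);	/* one 64-bit entry each */

	return (sm_len_on_disk >= (minimal * 200) / 100);	/* zfs_condense_pct */
}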
  53 /*
  54  * Condensing a metaslab is not guaranteed to actually reduce the amount of
  55  * space used on disk. In particular, a space map uses data in increments of
  56  * MAX(1 << ashift, space_map_blksize), so a metaslab might use the
  57  * same number of blocks after condensing. Since the goal of condensing is to


 150  * Enable/disable preloading of metaslab.
 151  */
 152 boolean_t metaslab_preload_enabled = B_TRUE;
 153 
 154 /*
 155  * Enable/disable fragmentation weighting on metaslabs.
 156  */
 157 boolean_t metaslab_fragmentation_factor_enabled = B_TRUE;
 158 
 159 /*
 160  * Enable/disable lba weighting (i.e. outer tracks are given preference).
 161  */
 162 boolean_t metaslab_lba_weighting_enabled = B_TRUE;
 163 
 164 /*
 165  * Enable/disable metaslab group biasing.
 166  */
 167 boolean_t metaslab_bias_enabled = B_TRUE;
 168 
 169 /*
 170  * Enable/disable remapping of indirect DVAs to their concrete vdevs.
 171  */
 172 boolean_t zfs_remap_blkptr_enable = B_TRUE;
 173 
 174 /*
 175  * Enable/disable segment-based metaslab selection.
 176  */
 177 boolean_t zfs_metaslab_segment_weight_enabled = B_TRUE;
 178 
 179 /*
 180  * When using segment-based metaslab selection, we will continue
 181  * allocating from the active metaslab until we have exhausted
 182  * zfs_metaslab_switch_threshold of its buckets.
 183  */
 184 int zfs_metaslab_switch_threshold = 2;
 185 
 186 /*
 187  * Internal switch to enable/disable the metaslab allocation tracing
 188  * facility.
 189  */
 190 boolean_t metaslab_trace_enabled = B_TRUE;
 191 
 192 /*
 193  * Maximum entries that the metaslab allocation tracing facility will keep
 194  * in a given list when running in non-debug mode. We limit the number
 195  * of entries in non-debug mode to prevent us from using up too much memory.
 196  * The limit should be sufficiently large that we don't expect any allocation
 197  * to ever exceed this value. In debug mode, the system will panic if this
 198  * limit is ever reached, allowing for further investigation.
 199  */
 200 uint64_t metaslab_trace_max_entries = 5000;
 201 
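/*
 * Hedged sketch (standalone, not the in-kernel code) of the bounded-list
 * behavior described above: non-debug builds simply stop recording once
 * metaslab_trace_max_entries is reached, while a debug build fails
 * loudly so the overflow can be investigated.  The list type here is a
 * stand-in.
 */
#include <stdint.h>
#include <stdlib.h>

#define	TRACE_MAX_ENTRIES	5000ULL	/* mirrors metaslab_trace_max_entries */

typedef struct trace_list {
	uint64_t tl_count;
	/* ... trace entries would hang off here ... */
} trace_list_t;

static int
trace_list_add_sketch(trace_list_t *tl)
{
	if (tl->tl_count >= TRACE_MAX_ENTRIES) {
#ifdef DEBUG
		abort();	/* debug: fail loudly, akin to a panic */
#endif
		return (-1);	/* non-debug: drop the entry */
	}
	tl->tl_count++;
	return (0);
}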
 202 static uint64_t metaslab_weight(metaslab_t *);
 203 static void metaslab_set_fragmentation(metaslab_t *);
 204 static void metaslab_free_impl(vdev_t *, uint64_t, uint64_t, uint64_t);
 205 static void metaslab_check_free_impl(vdev_t *, uint64_t, uint64_t);
 206 
 207 kmem_cache_t *metaslab_alloc_trace_cache;
 208 
 209 /*

 210  * ==========================================================================
 211  * Metaslab classes
 212  * ==========================================================================
 213  */
 214 metaslab_class_t *
 215 metaslab_class_create(spa_t *spa, metaslab_ops_t *ops)
 216 {
 217         metaslab_class_t *mc;
 218 
 219         mc = kmem_zalloc(sizeof (metaslab_class_t), KM_SLEEP);
 220 




 221         mc->mc_spa = spa;
 222         mc->mc_rotor = NULL;
 223         mc->mc_ops = ops;
 224         mutex_init(&mc->mc_lock, NULL, MUTEX_DEFAULT, NULL);
 225         refcount_create_tracked(&mc->mc_alloc_slots);
 226 
 227         return (mc);
 228 }
 229 
 230 void
 231 metaslab_class_destroy(metaslab_class_t *mc)
 232 {
 233         ASSERT(mc->mc_rotor == NULL);
 234         ASSERT(mc->mc_alloc == 0);
 235         ASSERT(mc->mc_deferred == 0);
 236         ASSERT(mc->mc_space == 0);
 237         ASSERT(mc->mc_dspace == 0);
 238 



 239         refcount_destroy(&mc->mc_alloc_slots);
 240         mutex_destroy(&mc->mc_lock);
 241         kmem_free(mc, sizeof (metaslab_class_t));
 242 }
 243 
 244 int
 245 metaslab_class_validate(metaslab_class_t *mc)
 246 {
 247         metaslab_group_t *mg;
 248         vdev_t *vd;
 249 
 250         /*
 251          * Must hold one of the spa_config locks.
 252          */
 253         ASSERT(spa_config_held(mc->mc_spa, SCL_ALL, RW_READER) ||
 254             spa_config_held(mc->mc_spa, SCL_ALL, RW_WRITER));
 255 
 256         if ((mg = mc->mc_rotor) == NULL)
 257                 return (0);
 258 


 305 metaslab_class_histogram_verify(metaslab_class_t *mc)
 306 {
 307         vdev_t *rvd = mc->mc_spa->spa_root_vdev;
 308         uint64_t *mc_hist;
 309         int i;
 310 
 311         if ((zfs_flags & ZFS_DEBUG_HISTOGRAM_VERIFY) == 0)
 312                 return;
 313 
 314         mc_hist = kmem_zalloc(sizeof (uint64_t) * RANGE_TREE_HISTOGRAM_SIZE,
 315             KM_SLEEP);
 316 
 317         for (int c = 0; c < rvd->vdev_children; c++) {
 318                 vdev_t *tvd = rvd->vdev_child[c];
 319                 metaslab_group_t *mg = tvd->vdev_mg;
 320 
 321                 /*
 322                  * Skip any holes, uninitialized top-levels, or
  323                  * vdevs that are not in this metaslab class.
 324                  */
 325                 if (!vdev_is_concrete(tvd) || tvd->vdev_ms_shift == 0 ||
 326                     mg->mg_class != mc) {
 327                         continue;
 328                 }
 329 
 330                 for (i = 0; i < RANGE_TREE_HISTOGRAM_SIZE; i++)
 331                         mc_hist[i] += mg->mg_histogram[i];
 332         }
 333 
 334         for (i = 0; i < RANGE_TREE_HISTOGRAM_SIZE; i++)
 335                 VERIFY3U(mc_hist[i], ==, mc->mc_histogram[i]);
 336 
 337         kmem_free(mc_hist, sizeof (uint64_t) * RANGE_TREE_HISTOGRAM_SIZE);
 338 }
 339 
 340 /*
 341  * Calculate the metaslab class's fragmentation metric. The metric
 342  * is weighted based on the space contribution of each metaslab group.
 343  * The return value will be a number between 0 and 100 (inclusive), or
 344  * ZFS_FRAG_INVALID if the metric has not been set. See comment above the
 345  * zfs_frag_table for more information about the metric.
 346  */
 347 uint64_t
 348 metaslab_class_fragmentation(metaslab_class_t *mc)
 349 {
 350         vdev_t *rvd = mc->mc_spa->spa_root_vdev;
 351         uint64_t fragmentation = 0;
 352 
 353         spa_config_enter(mc->mc_spa, SCL_VDEV, FTAG, RW_READER);
 354 
 355         for (int c = 0; c < rvd->vdev_children; c++) {
 356                 vdev_t *tvd = rvd->vdev_child[c];
 357                 metaslab_group_t *mg = tvd->vdev_mg;
 358 
 359                 /*
 360                  * Skip any holes, uninitialized top-levels,
  361                  * or vdevs that are not in this metaslab class.
 362                  */
 363                 if (!vdev_is_concrete(tvd) || tvd->vdev_ms_shift == 0 ||
 364                     mg->mg_class != mc) {
 365                         continue;
 366                 }
 367 
 368                 /*
 369                  * If a metaslab group does not contain a fragmentation
 370                  * metric then just bail out.
 371                  */
 372                 if (mg->mg_fragmentation == ZFS_FRAG_INVALID) {
 373                         spa_config_exit(mc->mc_spa, SCL_VDEV, FTAG);
 374                         return (ZFS_FRAG_INVALID);
 375                 }
 376 
 377                 /*
 378                  * Determine how much this metaslab_group is contributing
 379                  * to the overall pool fragmentation metric.
 380                  */
 381                 fragmentation += mg->mg_fragmentation *
 382                     metaslab_group_get_space(mg);
 383         }


 389 }
 390 
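/*
 * Standalone sketch (hypothetical helper, made-up numbers) of the
 * space-weighted average described above: each group's fragmentation is
 * scaled by its space contribution, so a small, badly fragmented group
 * cannot dominate the class-wide metric.  Example: mg0 at 10% over 600G
 * and mg1 at 40% over 200G give (10*600 + 40*200) / 800 = 17, not the
 * unweighted 25.
 */
#include <stdint.h>

static uint64_t
weighted_fragmentation_sketch(const uint64_t *frag, const uint64_t *space, int n)
{
	uint64_t sum = 0, total = 0;

	for (int i = 0; i < n; i++) {
		sum += frag[i] * space[i];
		total += space[i];
	}
	return (total == 0 ? 0 : sum / total);
}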
 391 /*
 392  * Calculate the amount of expandable space that is available in
 393  * this metaslab class. If a device is expanded then its expandable
 394  * space will be the amount of allocatable space that is currently not
 395  * part of this metaslab class.
 396  */
 397 uint64_t
 398 metaslab_class_expandable_space(metaslab_class_t *mc)
 399 {
 400         vdev_t *rvd = mc->mc_spa->spa_root_vdev;
 401         uint64_t space = 0;
 402 
 403         spa_config_enter(mc->mc_spa, SCL_VDEV, FTAG, RW_READER);
 404         for (int c = 0; c < rvd->vdev_children; c++) {
 405                 uint64_t tspace;
 406                 vdev_t *tvd = rvd->vdev_child[c];
 407                 metaslab_group_t *mg = tvd->vdev_mg;
 408 
 409                 if (!vdev_is_concrete(tvd) || tvd->vdev_ms_shift == 0 ||
 410                     mg->mg_class != mc) {
 411                         continue;
 412                 }
 413 
 414                 /*
 415                  * Calculate if we have enough space to add additional
 416                  * metaslabs. We report the expandable space in terms
 417                  * of the metaslab size since that's the unit of expansion.
 418                  * Adjust by efi system partition size.
 419                  */
 420                 tspace = tvd->vdev_max_asize - tvd->vdev_asize;
 421                 if (tspace > mc->mc_spa->spa_bootsize) {
 422                         tspace -= mc->mc_spa->spa_bootsize;
 423                 }
 424                 space += P2ALIGN(tspace, 1ULL << tvd->vdev_ms_shift);
 425         }
 426         spa_config_exit(mc->mc_spa, SCL_VDEV, FTAG);
 427         return (space);
 428 }
 429 
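/*
 * Worked example of the rounding above (made-up numbers, standalone
 * sketch): with 10 GiB of raw headroom, a 256 MiB EFI system partition
 * and vdev_ms_shift = 30 (1 GiB metaslabs), only nine whole metaslabs
 * fit, so 9 GiB of expandable space is reported for this vdev.
 */
#include <stdint.h>

#define	P2ALIGN_SKETCH(x, align)	((x) & -(align))	/* align: power of two */

static uint64_t
expandable_example(void)
{
	uint64_t tspace = (10ULL << 30) - (256ULL << 20);	/* 9.75 GiB */
	uint64_t ms_size = 1ULL << 30;				/* 1 GiB metaslabs */

	return (P2ALIGN_SKETCH(tspace, ms_size));		/* 9 GiB */
}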


 501  * ==========================================================================
 502  */
 503 /*
 504  * Update the allocatable flag and the metaslab group's capacity.
 505  * The allocatable flag is set to true if the capacity is below
 506  * the zfs_mg_noalloc_threshold or has a fragmentation value that is
 507  * greater than zfs_mg_fragmentation_threshold. If a metaslab group
 508  * transitions from allocatable to non-allocatable or vice versa then the
 509  * metaslab group's class is updated to reflect the transition.
 510  */
 511 static void
 512 metaslab_group_alloc_update(metaslab_group_t *mg)
 513 {
 514         vdev_t *vd = mg->mg_vd;
 515         metaslab_class_t *mc = mg->mg_class;
 516         vdev_stat_t *vs = &vd->vdev_stat;
 517         boolean_t was_allocatable;
 518         boolean_t was_initialized;
 519 
 520         ASSERT(vd == vd->vdev_top);
 521         ASSERT3U(spa_config_held(mc->mc_spa, SCL_ALLOC, RW_READER), ==,
 522             SCL_ALLOC);
 523 
 524         mutex_enter(&mg->mg_lock);
 525         was_allocatable = mg->mg_allocatable;
 526         was_initialized = mg->mg_initialized;
 527 
 528         mg->mg_free_capacity = ((vs->vs_space - vs->vs_alloc) * 100) /
 529             (vs->vs_space + 1);
 530 
 531         mutex_enter(&mc->mc_lock);
 532 
 533         /*
 534          * If the metaslab group was just added then it won't
 535          * have any space until we finish syncing out this txg.
 536          * At that point we will consider it initialized and available
 537          * for allocations.  We also don't consider non-activated
 538          * metaslab groups (e.g. vdevs that are in the middle of being removed)
 539          * to be initialized, because they can't be used for allocation.
 540          */
 541         mg->mg_initialized = metaslab_group_initialized(mg);
 542         if (!was_initialized && mg->mg_initialized) {


 600         refcount_create_tracked(&mg->mg_alloc_queue_depth);
 601 
 602         mg->mg_taskq = taskq_create("metaslab_group_taskq", metaslab_load_pct,
 603             minclsyspri, 10, INT_MAX, TASKQ_THREADS_CPU_PCT);
 604 
 605         return (mg);
 606 }
 607 
 608 void
 609 metaslab_group_destroy(metaslab_group_t *mg)
 610 {
 611         ASSERT(mg->mg_prev == NULL);
 612         ASSERT(mg->mg_next == NULL);
 613         /*
 614          * We may have gone below zero with the activation count
 615          * either because we never activated in the first place or
 616          * because we're done, and possibly removing the vdev.
 617          */
 618         ASSERT(mg->mg_activation_count <= 0);
 619 

 620         taskq_destroy(mg->mg_taskq);
 621         avl_destroy(&mg->mg_metaslab_tree);
 622         mutex_destroy(&mg->mg_lock);
 623         refcount_destroy(&mg->mg_alloc_queue_depth);
 624         kmem_free(mg, sizeof (metaslab_group_t));
 625 }
 626 
 627 void
 628 metaslab_group_activate(metaslab_group_t *mg)
 629 {
 630         metaslab_class_t *mc = mg->mg_class;
 631         metaslab_group_t *mgprev, *mgnext;
 632 
 633         ASSERT3U(spa_config_held(mc->mc_spa, SCL_ALLOC, RW_WRITER), !=, 0);
 634 
 635         ASSERT(mc->mc_rotor != mg);
 636         ASSERT(mg->mg_prev == NULL);
 637         ASSERT(mg->mg_next == NULL);
 638         ASSERT(mg->mg_activation_count <= 0);
 639 
 640         if (++mg->mg_activation_count <= 0)
 641                 return;
 642 
 643         mg->mg_aliquot = metaslab_aliquot * MAX(1, mg->mg_vd->vdev_children);
 644         metaslab_group_alloc_update(mg);
 645 
 646         if ((mgprev = mc->mc_rotor) == NULL) {
 647                 mg->mg_prev = mg;
 648                 mg->mg_next = mg;
 649         } else {
 650                 mgnext = mgprev->mg_next;
 651                 mg->mg_prev = mgprev;
 652                 mg->mg_next = mgnext;
 653                 mgprev->mg_next = mg;
 654                 mgnext->mg_prev = mg;
 655         }
 656         mc->mc_rotor = mg;
 657 }
 658 
 659 /*
 660  * Passivate a metaslab group and remove it from the allocation rotor.
 661  * Callers must hold both the SCL_ALLOC and SCL_ZIO lock prior to passivating
 662  * a metaslab group. This function will momentarily drop spa_config_locks
 663  * that are lower than the SCL_ALLOC lock (see comment below).
 664  */
 665 void
 666 metaslab_group_passivate(metaslab_group_t *mg)
 667 {
 668         metaslab_class_t *mc = mg->mg_class;
 669         spa_t *spa = mc->mc_spa;
 670         metaslab_group_t *mgprev, *mgnext;
 671         int locks = spa_config_held(spa, SCL_ALL, RW_WRITER);
 672 
 673         ASSERT3U(spa_config_held(spa, SCL_ALLOC | SCL_ZIO, RW_WRITER), ==,
 674             (SCL_ALLOC | SCL_ZIO));
 675 
 676         if (--mg->mg_activation_count != 0) {
 677                 ASSERT(mc->mc_rotor != mg);
 678                 ASSERT(mg->mg_prev == NULL);
 679                 ASSERT(mg->mg_next == NULL);
 680                 ASSERT(mg->mg_activation_count < 0);
 681                 return;
 682         }
 683 
 684         /*
 685          * The spa_config_lock is an array of rwlocks, ordered as
 686          * follows (from highest to lowest):
 687          *      SCL_CONFIG > SCL_STATE > SCL_L2ARC > SCL_ALLOC >
 688          *      SCL_ZIO > SCL_FREE > SCL_VDEV
 689          * (For more information about the spa_config_lock see spa_misc.c)
 690          * The higher the lock, the broader its coverage. When we passivate
 691          * a metaslab group, we must hold both the SCL_ALLOC and the SCL_ZIO
 692          * config locks. However, the metaslab group's taskq might be trying
 693          * to preload metaslabs so we must drop the SCL_ZIO lock and any
 694          * lower locks to allow the I/O to complete. At a minimum,
 695          * we continue to hold the SCL_ALLOC lock, which prevents any future
 696          * allocations from taking place and any changes to the vdev tree.
 697          */
 698         spa_config_exit(spa, locks & ~(SCL_ZIO - 1), spa);
 699         taskq_wait(mg->mg_taskq);
 700         spa_config_enter(spa, locks & ~(SCL_ZIO - 1), spa, RW_WRITER);
 701         metaslab_group_alloc_update(mg);
 702 
 703         mgprev = mg->mg_prev;
 704         mgnext = mg->mg_next;
 705 
 706         if (mg == mgnext) {
 707                 mc->mc_rotor = NULL;
 708         } else {
 709                 mc->mc_rotor = mgnext;
 710                 mgprev->mg_next = mgnext;
 711                 mgnext->mg_prev = mgprev;
 712         }
 713 
 714         mg->mg_prev = NULL;
 715         mg->mg_next = NULL;
 716 }
 717 
 718 boolean_t
 719 metaslab_group_initialized(metaslab_group_t *mg)
 720 {


1124         range_seg_t *rs, rsearch;
1125         avl_index_t where;
1126 
1127         rsearch.rs_start = start;
1128         rsearch.rs_end = start + size;
1129 
1130         rs = avl_find(t, &rsearch, &where);
1131         if (rs == NULL) {
1132                 rs = avl_nearest(t, where, AVL_AFTER);
1133         }
1134 
1135         return (rs);
1136 }
1137 
1138 /*
1139  * This is a helper function that can be used by the allocator to find
1140  * a suitable block to allocate. This will search the specified AVL
1141  * tree looking for a block that matches the specified criteria.
1142  */
1143 static uint64_t
1144 metaslab_block_picker(avl_tree_t *t, uint64_t *cursor, uint64_t size,
1145     uint64_t align)
1146 {
1147         range_seg_t *rs = metaslab_block_find(t, *cursor, size);
1148 
1149         while (rs != NULL) {
1150                 uint64_t offset = P2ROUNDUP(rs->rs_start, align);
1151 
1152                 if (offset + size <= rs->rs_end) {


1153                         *cursor = offset + size;
1154                         return (offset);
1155                 }
1156                 rs = AVL_NEXT(t, rs);
1157         }
1158 
1159         /*
1160          * If we know we've searched the whole map (*cursor == 0), give up.
1161          * Otherwise, reset the cursor to the beginning and try again.
1162          */
1163         if (*cursor == 0)
1164                 return (-1ULL);
1165 
1166         *cursor = 0;
1167         return (metaslab_block_picker(t, cursor, size, align));
1168 }
1169 
1170 /*
1171  * ==========================================================================
1172  * The first-fit block allocator
1173  * ==========================================================================
1174  */
1175 static uint64_t
1176 metaslab_ff_alloc(metaslab_t *msp, uint64_t size)
1177 {
1178         /*
1179          * Find the largest power of 2 block size that evenly divides the
1180          * requested size. This is used to try to allocate blocks with similar
1181          * alignment from the same area of the metaslab (i.e. same cursor
1182          * bucket) but it does not guarantee that other allocation sizes
1183          * may exist in the same region.
1184          */
1185         uint64_t align = size & -size;
1186         uint64_t *cursor = &msp->ms_lbas[highbit64(align) - 1];
1187         avl_tree_t *t = &msp->ms_tree->rt_root;
1188 
1189         return (metaslab_block_picker(t, cursor, size, align));
1190 }
1191 
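/*
 * Worked example (standalone sketch) of the cursor-bucket choice in
 * metaslab_ff_alloc() above: size & -size isolates the lowest set bit,
 * i.e. the largest power of two that evenly divides the request.
 * For size = 0x6000 (24 KiB), align = 0x2000 (8 KiB); highbit64(0x2000)
 * is 14, so the allocation uses cursor bucket ms_lbas[13].
 */
#include <stdint.h>

static uint64_t
alignment_of(uint64_t size)
{
	return (size & -size);		/* 0x6000 -> 0x2000 */
}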
1192 static metaslab_ops_t metaslab_ff_ops = {
1193         metaslab_ff_alloc
1194 };
1195 
1196 /*
1197  * ==========================================================================
1198  * Dynamic block allocator -
1199  * Uses the first fit allocation scheme until space gets low and then
1200  * adjusts to a best fit allocation method. Uses metaslab_df_alloc_threshold
1201  * and metaslab_df_free_pct to determine when to switch the allocation scheme.
1202  * ==========================================================================
1203  */
1204 static uint64_t
1205 metaslab_df_alloc(metaslab_t *msp, uint64_t size)
1206 {
1207         /*
1208          * Find the largest power of 2 block size that evenly divides the
1209          * requested size. This is used to try to allocate blocks with similar


1217         avl_tree_t *t = &rt->rt_root;
1218         uint64_t max_size = metaslab_block_maxsize(msp);
1219         int free_pct = range_tree_space(rt) * 100 / msp->ms_size;
1220 
1221         ASSERT(MUTEX_HELD(&msp->ms_lock));
1222         ASSERT3U(avl_numnodes(t), ==, avl_numnodes(&msp->ms_size_tree));
1223 
1224         if (max_size < size)
1225                 return (-1ULL);
1226 
1227         /*
1228          * If we're running low on space switch to using the size
1229          * sorted AVL tree (best-fit).
1230          */
1231         if (max_size < metaslab_df_alloc_threshold ||
1232             free_pct < metaslab_df_free_pct) {
1233                 t = &msp->ms_size_tree;
1234                 *cursor = 0;
1235         }
1236 
1237         return (metaslab_block_picker(t, cursor, size, 1ULL));
1238 }
1239 
1240 static metaslab_ops_t metaslab_df_ops = {
1241         metaslab_df_alloc
1242 };
1243 
1244 /*
1245  * ==========================================================================
1246  * Cursor fit block allocator -
1247  * Select the largest region in the metaslab, set the cursor to the beginning
1248  * of the range and the cursor_end to the end of the range. As allocations
1249  * are made advance the cursor. Continue allocating from the cursor until
1250  * the range is exhausted and then find a new range.
1251  * ==========================================================================
1252  */
1253 static uint64_t
1254 metaslab_cf_alloc(metaslab_t *msp, uint64_t size)
1255 {
1256         range_tree_t *rt = msp->ms_tree;
1257         avl_tree_t *t = &msp->ms_size_tree;
1258         uint64_t *cursor = &msp->ms_lbas[0];
1259         uint64_t *cursor_end = &msp->ms_lbas[1];
1260         uint64_t offset = 0;
1261 
1262         ASSERT(MUTEX_HELD(&msp->ms_lock));
1263         ASSERT3U(avl_numnodes(t), ==, avl_numnodes(&rt->rt_root));
1264 
1265         ASSERT3U(*cursor_end, >=, *cursor);
1266 
1267         if ((*cursor + size) > *cursor_end) {
1268                 range_seg_t *rs;
1269 
1270                 rs = avl_last(&msp->ms_size_tree);
1271                 if (rs == NULL || (rs->rs_end - rs->rs_start) < size)
1272                         return (-1ULL);
1273 
1274                 *cursor = rs->rs_start;
1275                 *cursor_end = rs->rs_end;




1276         }




1277 
1278         offset = *cursor;
1279         *cursor += size;
1280 
1281         return (offset);
1282 }
1283 
1284 static metaslab_ops_t metaslab_cf_ops = {
1285         metaslab_cf_alloc
1286 };
1287 
1288 /*
1289  * ==========================================================================
1290  * New dynamic fit allocator -
1291  * Select a region that is large enough to allocate 2^metaslab_ndf_clump_shift
1292  * contiguous blocks. If no region is found then just use the largest segment
1293  * that remains.
1294  * ==========================================================================
1295  */
1296 
1297 /*
1298  * Determines desired number of contiguous blocks (2^metaslab_ndf_clump_shift)
1299  * to request from the allocator.
1300  */
1301 uint64_t metaslab_ndf_clump_shift = 4;
1302 
1303 static uint64_t
1304 metaslab_ndf_alloc(metaslab_t *msp, uint64_t size)
1305 {
1306         avl_tree_t *t = &msp->ms_tree->rt_root;
1307         avl_index_t where;
1308         range_seg_t *rs, rsearch;
1309         uint64_t hbit = highbit64(size);
1310         uint64_t *cursor = &msp->ms_lbas[hbit - 1];
1311         uint64_t max_size = metaslab_block_maxsize(msp);


1312 
1313         ASSERT(MUTEX_HELD(&msp->ms_lock));
1314         ASSERT3U(avl_numnodes(t), ==, avl_numnodes(&msp->ms_size_tree));
1315 
1316         if (max_size < size)
1317                 return (-1ULL);
1318 
1319         rsearch.rs_start = *cursor;
1320         rsearch.rs_end = *cursor + size;
1321 
1322         rs = avl_find(t, &rsearch, &where);
1323         if (rs == NULL || (rs->rs_end - rs->rs_start) < size) {





1324                 t = &msp->ms_size_tree;
1325 
1326                 rsearch.rs_start = 0;
1327                 rsearch.rs_end = MIN(max_size,
1328                     1ULL << (hbit + metaslab_ndf_clump_shift));
1329                 rs = avl_find(t, &rsearch, &where);
1330                 if (rs == NULL)
1331                         rs = avl_nearest(t, where, AVL_AFTER);
1332                 ASSERT(rs != NULL);






1333         }
1334 
1335         if ((rs->rs_end - rs->rs_start) >= size) {
1336                 *cursor = rs->rs_start + size;
1337                 return (rs->rs_start);
1338         }
1339         return (-1ULL);


1340 }
1341 
1342 static metaslab_ops_t metaslab_ndf_ops = {
1343         metaslab_ndf_alloc
1344 };
1345 
1346 metaslab_ops_t *zfs_metaslab_ops = &metaslab_df_ops;
1347 
1348 /*
1349  * ==========================================================================
1350  * Metaslabs
1351  * ==========================================================================
1352  */
1353 
1354 /*
1355  * Wait for any in-progress metaslab loads to complete.
1356  */
1357 void
1358 metaslab_load_wait(metaslab_t *msp)
1359 {
1360         ASSERT(MUTEX_HELD(&msp->ms_lock));
1361 
1362         while (msp->ms_loading) {
1363                 ASSERT(!msp->ms_loaded);
1364                 cv_wait(&msp->ms_load_cv, &msp->ms_lock);
1365         }
1366 }
1367 
1368 int
1369 metaslab_load(metaslab_t *msp)
1370 {
1371         int error = 0;
1372         boolean_t success = B_FALSE;
1373 
1374         ASSERT(MUTEX_HELD(&msp->ms_lock));
1375         ASSERT(!msp->ms_loaded);
1376         ASSERT(!msp->ms_loading);
1377 
1378         msp->ms_loading = B_TRUE;
1379         /*
1380          * Nobody else can manipulate a loading metaslab, so it's now safe
1381          * to drop the lock.  This way we don't have to hold the lock while
1382          * reading the spacemap from disk.
1383          */
1384         mutex_exit(&msp->ms_lock);
1385 
1386         /*
1387          * If the space map has not been allocated yet, then treat
1388          * all the space in the metaslab as free and add it to the
1389          * ms_tree.
1390          */
1391         if (msp->ms_sm != NULL)
1392                 error = space_map_load(msp->ms_sm, msp->ms_tree, SM_FREE);
1393         else
1394                 range_tree_add(msp->ms_tree, msp->ms_start, msp->ms_size);
1395 
1396         success = (error == 0);
1397 
1398         mutex_enter(&msp->ms_lock);
1399         msp->ms_loading = B_FALSE;
1400 
1401         if (success) {
1402                 ASSERT3P(msp->ms_group, !=, NULL);
1403                 msp->ms_loaded = B_TRUE;
1404 
1405                 for (int t = 0; t < TXG_DEFER_SIZE; t++) {
1406                         range_tree_walk(msp->ms_defertree[t],
1407                             range_tree_remove, msp->ms_tree);


1408                 }
1409                 msp->ms_max_size = metaslab_block_maxsize(msp);
1410         }
1411         cv_broadcast(&msp->ms_load_cv);
1412         return (error);
1413 }
1414 
1415 void
1416 metaslab_unload(metaslab_t *msp)
1417 {
1418         ASSERT(MUTEX_HELD(&msp->ms_lock));
1419         range_tree_vacate(msp->ms_tree, NULL, NULL);
1420         msp->ms_loaded = B_FALSE;
1421         msp->ms_weight &= ~METASLAB_ACTIVE_MASK;
1422         msp->ms_max_size = 0;
1423 }
1424 
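/*
 * Condensed sketch of the typical caller pattern for the two routines
 * above (hypothetical helper name; callers such as metaslab_activate()
 * follow this shape): hold ms_lock, wait out any in-flight load, then
 * load only if still needed.  metaslab_load() drops and re-takes ms_lock
 * internally, so ms_loading is what serializes concurrent loaders.
 */
static int
metaslab_ensure_loaded_sketch(metaslab_t *msp)
{
	int error = 0;

	ASSERT(MUTEX_HELD(&msp->ms_lock));
	metaslab_load_wait(msp);
	if (!msp->ms_loaded)
		error = metaslab_load(msp);
	return (error);
}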
1425 int
1426 metaslab_init(metaslab_group_t *mg, uint64_t id, uint64_t object, uint64_t txg,
1427     metaslab_t **msp)
1428 {
1429         vdev_t *vd = mg->mg_vd;
1430         objset_t *mos = vd->vdev_spa->spa_meta_objset;
1431         metaslab_t *ms;
1432         int error;
1433 
1434         ms = kmem_zalloc(sizeof (metaslab_t), KM_SLEEP);
1435         mutex_init(&ms->ms_lock, NULL, MUTEX_DEFAULT, NULL);
1436         mutex_init(&ms->ms_sync_lock, NULL, MUTEX_DEFAULT, NULL);
1437         cv_init(&ms->ms_load_cv, NULL, CV_DEFAULT, NULL);

1438         ms->ms_id = id;
1439         ms->ms_start = id << vd->vdev_ms_shift;
1440         ms->ms_size = 1ULL << vd->vdev_ms_shift;
1441 
1442         /*
1443          * We only open space map objects that already exist. All others
1444          * will be opened when we finally allocate an object for it.
1445          */
1446         if (object != 0) {
1447                 error = space_map_open(&ms->ms_sm, mos, object, ms->ms_start,
1448                     ms->ms_size, vd->vdev_ashift);
1449 
1450                 if (error != 0) {
1451                         kmem_free(ms, sizeof (metaslab_t));
1452                         return (error);
1453                 }
1454 
1455                 ASSERT(ms->ms_sm != NULL);
1456         }
1457 


1458         /*
1459          * We create the main range tree here, but we don't create the
1460          * other range trees until metaslab_sync_done().  This serves
1461          * two purposes: it allows metaslab_sync_done() to detect the
1462          * addition of new space; and for debugging, it ensures that we'd
1463          * data fault on any attempt to use this metaslab before it's ready.
1464          */
1465         ms->ms_tree = range_tree_create(&metaslab_rt_ops, ms);
1466         metaslab_group_add(mg, ms);
1467 
1468         metaslab_set_fragmentation(ms);
1469 
1470         /*
1471          * If we're opening an existing pool (txg == 0) or creating
1472          * a new one (txg == TXG_INITIAL), all space is available now.
1473          * If we're adding space to an existing pool, the new space
1474          * does not become available until after this txg has synced.
1475          * The metaslab's weight will also be initialized when we sync
1476          * out this txg. This ensures that we don't attempt to allocate
1477          * from it before we have initialized it completely.
1478          */
1479         if (txg <= TXG_INITIAL)
1480                 metaslab_sync_done(ms, 0);
1481 
1482         /*
1483          * If metaslab_debug_load is set and we're initializing a metaslab
1484          * that has an allocated space map object then load its space
1485          * map so that we can verify frees.


1509 
1510         mutex_enter(&msp->ms_lock);
1511         VERIFY(msp->ms_group == NULL);
1512         vdev_space_update(mg->mg_vd, -space_map_allocated(msp->ms_sm),
1513             0, -msp->ms_size);
1514         space_map_close(msp->ms_sm);
1515 
1516         metaslab_unload(msp);
1517         range_tree_destroy(msp->ms_tree);
1518         range_tree_destroy(msp->ms_freeingtree);
1519         range_tree_destroy(msp->ms_freedtree);
1520 
1521         for (int t = 0; t < TXG_SIZE; t++) {
1522                 range_tree_destroy(msp->ms_alloctree[t]);
1523         }
1524 
1525         for (int t = 0; t < TXG_DEFER_SIZE; t++) {
1526                 range_tree_destroy(msp->ms_defertree[t]);
1527         }
1528 





1529         ASSERT0(msp->ms_deferspace);
1530 
1531         mutex_exit(&msp->ms_lock);
1532         cv_destroy(&msp->ms_load_cv);

1533         mutex_destroy(&msp->ms_lock);
1534         mutex_destroy(&msp->ms_sync_lock);
1535 
1536         kmem_free(msp, sizeof (metaslab_t));
1537 }
1538 
1539 #define FRAGMENTATION_TABLE_SIZE        17
1540 
1541 /*
1542  * This table defines a segment size based fragmentation metric that will
1543  * allow each metaslab to derive its own fragmentation value. This is done
1544  * by calculating the space in each bucket of the spacemap histogram and
1545  * multiplying that by the fragmentation metric in this table. Doing
1546  * this for all buckets and dividing it by the total amount of free
1547  * space in this metaslab (i.e. the total free space in all buckets) gives
1548  * us the fragmentation metric. This means that a high fragmentation metric
1549  * equates to most of the free space being comprised of small segments.
1550  * Conversely, if the metric is low, then most of the free space is in
1551  * large segments. A 10% change in fragmentation equates to approximately
1552  * double the number of segments.
1553  *
1554  * This table defines 0% fragmented space using 16MB segments. Testing has


1880                  */
1881                 should_allocate = (asize <
1882                     1ULL << (WEIGHT_GET_INDEX(msp->ms_weight) + 1));
1883         } else {
1884                 should_allocate = (asize <=
1885                     (msp->ms_weight & ~METASLAB_WEIGHT_TYPE));
1886         }
1887         return (should_allocate);
1888 }
1889 
1890 static uint64_t
1891 metaslab_weight(metaslab_t *msp)
1892 {
1893         vdev_t *vd = msp->ms_group->mg_vd;
1894         spa_t *spa = vd->vdev_spa;
1895         uint64_t weight;
1896 
1897         ASSERT(MUTEX_HELD(&msp->ms_lock));
1898 
1899         /*
1900          * If this vdev is in the process of being removed, there is nothing
1901          * for us to do here.
1902          */
1903         if (vd->vdev_removing)


1904                 return (0);

1905 
1906         metaslab_set_fragmentation(msp);
1907 
1908         /*
1909          * Update the maximum size if the metaslab is loaded. This will
1910          * ensure that we get an accurate maximum size if newly freed space
1911          * has been added back into the free tree.
1912          */
1913         if (msp->ms_loaded)
1914                 msp->ms_max_size = metaslab_block_maxsize(msp);
1915 
1916         /*
1917          * Segment-based weighting requires space map histogram support.
1918          */
1919         if (zfs_metaslab_segment_weight_enabled &&
1920             spa_feature_is_enabled(spa, SPA_FEATURE_SPACEMAP_HISTOGRAM) &&
1921             (msp->ms_sm == NULL || msp->ms_sm->sm_dbuf->db_size ==
1922             sizeof (space_map_phys_t))) {
1923                 weight = metaslab_segment_weight(msp);
1924         } else {


2016         if (!msp->ms_loaded)
2017                 (void) metaslab_load(msp);
2018         msp->ms_selected_txg = spa_syncing_txg(spa);
2019         mutex_exit(&msp->ms_lock);
2020 }
2021 
2022 static void
2023 metaslab_group_preload(metaslab_group_t *mg)
2024 {
2025         spa_t *spa = mg->mg_vd->vdev_spa;
2026         metaslab_t *msp;
2027         avl_tree_t *t = &mg->mg_metaslab_tree;
2028         int m = 0;
2029 
2030         if (spa_shutting_down(spa) || !metaslab_preload_enabled) {
2031                 taskq_wait(mg->mg_taskq);
2032                 return;
2033         }
2034 
2035         mutex_enter(&mg->mg_lock);
2036 
2037         /*
2038          * Load the next potential metaslabs
2039          */
2040         for (msp = avl_first(t); msp != NULL; msp = AVL_NEXT(t, msp)) {
2041                 ASSERT3P(msp->ms_group, ==, mg);
2042 
2043                 /*
2044                  * We preload only the maximum number of metaslabs specified
2045                  * by metaslab_preload_limit. If a metaslab is being forced
2046                  * to condense then we preload it too. This will ensure
2047                  * that force condensing happens in the next txg.
2048                  */
2049                 if (++m > metaslab_preload_limit && !msp->ms_condense_wanted) {
2050                         continue;
2051                 }
2052 
2053                 VERIFY(taskq_dispatch(mg->mg_taskq, metaslab_preload,
2054                     msp, TQ_SLEEP) != NULL);
2055         }
2056         mutex_exit(&mg->mg_lock);
2057 }
2058 
2059 /*
2060  * Determine if the space map's on-disk footprint is past our tolerance
2061  * for inefficiency. We would like to use the following criteria to make
2062  * our decision:
2063  *
2064  * 1. The size of the space map object should not dramatically increase as a
2065  * result of writing out the free space range tree.
2066  *
2067  * 2. The minimal on-disk space map representation is zfs_condense_pct/100
2068  * times the size of the free space range tree representation
2069  * (i.e. zfs_condense_pct = 110 and in-core = 1MB, minimal = 1.1MB).
2070  *
2071  * 3. The on-disk size of the space map should actually decrease.
2072  *
2073  * Checking the first condition is tricky since we don't want to walk
2074  * the entire AVL tree calculating the estimated on-disk size. Instead we
2075  * use the size-ordered range tree in the metaslab and calculate the
2076  * size required to write out the largest segment in our free tree. If the
2077  * size required to represent that segment on disk is larger than the space
2078  * map object then we avoid condensing this map.
2079  *
2080  * To determine the second criterion we use a best-case estimate and assume
2081  * each segment can be represented on-disk as a single 64-bit entry. We refer
2082  * to this best-case estimate as the space map's minimal form.
2083  *
2084  * Unfortunately, we cannot compute the on-disk size of the space map in this
2085  * context because we cannot accurately compute the effects of compression, etc.
2086  * Instead, we apply the heuristic described in the block comment for
2087  * zfs_metaslab_condense_block_threshold - we only condense if the space used
2088  * is greater than a threshold number of blocks.
2089  */
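/*
 * Hedged sketch of how the three criteria above might combine into a
 * single check (hypothetical helper and parameter names; the actual
 * decision is made by metaslab_should_condense(), whose body is not
 * shown in this hunk):
 */
static boolean_t
should_condense_sketch(uint64_t largest_seg_ondisk, uint64_t sm_object_size,
    uint64_t minimal_size, uint64_t block_size)
{
	return (
	    /* 1. writing out the largest free segment won't grow the object */
	    largest_seg_ondisk <= sm_object_size &&
	    /* 2. the object is zfs_condense_pct/100 times its minimal form */
	    sm_object_size >= (minimal_size * zfs_condense_pct) / 100 &&
	    /* 3. the object spans enough blocks that it can actually shrink */
	    sm_object_size > zfs_metaslab_condense_block_threshold * block_size);
}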


2146         ASSERT3U(spa_sync_pass(spa), ==, 1);
2147         ASSERT(msp->ms_loaded);
2148 
2149 
2150         spa_dbgmsg(spa, "condensing: txg %llu, msp[%llu] %p, vdev id %llu, "
2151             "spa %s, smp size %llu, segments %lu, forcing condense=%s", txg,
2152             msp->ms_id, msp, msp->ms_group->mg_vd->vdev_id,
2153             msp->ms_group->mg_vd->vdev_spa->spa_name,
2154             space_map_length(msp->ms_sm), avl_numnodes(&msp->ms_tree->rt_root),
2155             msp->ms_condense_wanted ? "TRUE" : "FALSE");
2156 
2157         msp->ms_condense_wanted = B_FALSE;
2158 
2159         /*
2160          * Create a range tree that is 100% allocated. We remove segments
2161          * that have been freed in this txg, any deferred frees that exist,
2162          * and any allocation in the future. Removing segments should be
2163          * a relatively inexpensive operation since we expect these trees to
2164          * have a small number of nodes.
2165          */
2166         condense_tree = range_tree_create(NULL, NULL);
2167         range_tree_add(condense_tree, msp->ms_start, msp->ms_size);
2168 
2169         /*
2170          * Remove what's been freed in this txg from the condense_tree.
2171          * Since we're in sync_pass 1, we know that all the frees from
2172          * this txg are in the freeingtree.
2173          */
2174         range_tree_walk(msp->ms_freeingtree, range_tree_remove, condense_tree);
2175 
2176         for (int t = 0; t < TXG_DEFER_SIZE; t++) {
2177                 range_tree_walk(msp->ms_defertree[t],
2178                     range_tree_remove, condense_tree);
2179         }
2180 
2181         for (int t = 1; t < TXG_CONCURRENT_STATES; t++) {
2182                 range_tree_walk(msp->ms_alloctree[(txg + t) & TXG_MASK],
2183                     range_tree_remove, condense_tree);
2184         }
2185 
2186         /*
2187          * We're about to drop the metaslab's lock thus allowing
2188          * other consumers to change its content. Set the
2189          * metaslab's ms_condensing flag to ensure that
2190          * allocations on this metaslab do not occur while we're
2191          * in the middle of committing it to disk. This is only critical
2192          * for the ms_tree as all other range trees use per txg
2193          * views of their content.
2194          */
2195         msp->ms_condensing = B_TRUE;
2196 
2197         mutex_exit(&msp->ms_lock);
2198         space_map_truncate(sm, tx);

2199 
2200         /*
2201          * While we would ideally like to create a space map representation
2202          * that consists only of allocation records, doing so can be
2203          * prohibitively expensive because the in-core free tree can be
2204          * large, and therefore computationally expensive to subtract
2205          * from the condense_tree. Instead we sync out two trees, a cheap
2206          * allocation only tree followed by the in-core free tree. While not
2207          * optimal, this is typically close to optimal, and much cheaper to
2208          * compute.
2209          */
2210         space_map_write(sm, condense_tree, SM_ALLOC, tx);
2211         range_tree_vacate(condense_tree, NULL, NULL);
2212         range_tree_destroy(condense_tree);
2213 
2214         space_map_write(sm, msp->ms_tree, SM_FREE, tx);
2215         mutex_enter(&msp->ms_lock);
2216         msp->ms_condensing = B_FALSE;
2217 }
2218 
2219 /*
2220  * Write a metaslab to disk in the context of the specified transaction group.
2221  */
2222 void
2223 metaslab_sync(metaslab_t *msp, uint64_t txg)
2224 {
2225         metaslab_group_t *mg = msp->ms_group;
2226         vdev_t *vd = mg->mg_vd;
2227         spa_t *spa = vd->vdev_spa;
2228         objset_t *mos = spa_meta_objset(spa);
2229         range_tree_t *alloctree = msp->ms_alloctree[txg & TXG_MASK];
2230         dmu_tx_t *tx;
2231         uint64_t object = space_map_object(msp->ms_sm);
2232 
2233         ASSERT(!vd->vdev_ishole);
2234 


2235         /*
2236          * This metaslab has just been added so there's no work to do now.
2237          */
2238         if (msp->ms_freeingtree == NULL) {
2239                 ASSERT3P(alloctree, ==, NULL);

2240                 return;
2241         }
2242 
2243         ASSERT3P(alloctree, !=, NULL);
2244         ASSERT3P(msp->ms_freeingtree, !=, NULL);
2245         ASSERT3P(msp->ms_freedtree, !=, NULL);
2246 
2247         /*
2248          * Normally, we don't want to process a metaslab if there
2249          * are no allocations or frees to perform. However, if the metaslab
2250          * is being forced to condense and it's loaded, we need to let it
2251          * through.
2252          */
2253         if (range_tree_space(alloctree) == 0 &&
2254             range_tree_space(msp->ms_freeingtree) == 0 &&
2255             !(msp->ms_loaded && msp->ms_condense_wanted))

2256                 return;

2257 
2258 
2259         VERIFY(txg <= spa_final_dirty_txg(spa));
2260 
2261         /*
2262          * The only state that can actually be changing concurrently with
2263          * metaslab_sync() is the metaslab's ms_tree.  No other thread can
2264          * be modifying this txg's alloctree, freeingtree, freedtree, or
2265          * space_map_phys_t.  We drop ms_lock whenever we could call
2266          * into the DMU, because the DMU can call down to us
2267          * (e.g. via zio_free()) at any time.
2268          *
2269          * The spa_vdev_remove_thread() can be reading metaslab state
2270          * concurrently, and it is locked out by the ms_sync_lock.  Note
2271          * that the ms_lock is insufficient for this, because it is dropped
2272          * by space_map_write().
2273          */
2274 
2275         tx = dmu_tx_create_assigned(spa_get_dsl(spa), txg);
2276 
2277         if (msp->ms_sm == NULL) {
2278                 uint64_t new_object;
2279 
2280                 new_object = space_map_alloc(mos, tx);
2281                 VERIFY3U(new_object, !=, 0);
2282 
2283                 VERIFY0(space_map_open(&msp->ms_sm, mos, new_object,
2284                     msp->ms_start, msp->ms_size, vd->vdev_ashift));

2285                 ASSERT(msp->ms_sm != NULL);
2286         }
2287 
2288         mutex_enter(&msp->ms_sync_lock);
2289         mutex_enter(&msp->ms_lock);
2290 
2291         /*
2292          * Note: metaslab_condense() clears the space map's histogram.
2293          * Therefore we must verify and remove this histogram before
2294          * condensing.
2295          */
2296         metaslab_group_histogram_verify(mg);
2297         metaslab_class_histogram_verify(mg->mg_class);
2298         metaslab_group_histogram_remove(mg, msp);
2299 
2300         if (msp->ms_loaded && spa_sync_pass(spa) == 1 &&
2301             metaslab_should_condense(msp)) {
2302                 metaslab_condense(msp, txg, tx);
2303         } else {
2304                 mutex_exit(&msp->ms_lock);
2305                 space_map_write(msp->ms_sm, alloctree, SM_ALLOC, tx);
2306                 space_map_write(msp->ms_sm, msp->ms_freeingtree, SM_FREE, tx);
2307                 mutex_enter(&msp->ms_lock);
2308         }
2309 
2310         if (msp->ms_loaded) {
2311                 /*
2312                  * When the space map is loaded, we have an accurate
2313                  * histogram in the range tree. This gives us an opportunity
2314                  * to bring the space map's histogram up-to-date so we clear
2315                  * it first before updating it.
2316                  */
2317                 space_map_histogram_clear(msp->ms_sm);
2318                 space_map_histogram_add(msp->ms_sm, msp->ms_tree, tx);
2319 
2320                 /*
2321                  * Since we've cleared the histogram we need to add back
2322                  * any free space that has already been processed, plus
2323                  * any deferred space. This allows the on-disk histogram
2324                  * to accurately reflect all free space even if some space
2325                  * is not yet available for allocation (i.e. deferred).
2326                  */
2327                 space_map_histogram_add(msp->ms_sm, msp->ms_freedtree, tx);
2328 
2329                 /*
2330                  * Add back any deferred free space that has not been
2331                  * added back into the in-core free tree yet. This will
2332                  * ensure that we don't end up with a space map histogram


2360          */
2361         if (spa_sync_pass(spa) == 1) {
2362                 range_tree_swap(&msp->ms_freeingtree, &msp->ms_freedtree);
2363         } else {
2364                 range_tree_vacate(msp->ms_freeingtree,
2365                     range_tree_add, msp->ms_freedtree);
2366         }
2367         range_tree_vacate(alloctree, NULL, NULL);
2368 
2369         ASSERT0(range_tree_space(msp->ms_alloctree[txg & TXG_MASK]));
2370         ASSERT0(range_tree_space(msp->ms_alloctree[TXG_CLEAN(txg) & TXG_MASK]));
2371         ASSERT0(range_tree_space(msp->ms_freeingtree));
2372 
2373         mutex_exit(&msp->ms_lock);
2374 
2375         if (object != space_map_object(msp->ms_sm)) {
2376                 object = space_map_object(msp->ms_sm);
2377                 dmu_write(mos, vd->vdev_ms_array, sizeof (uint64_t) *
2378                     msp->ms_id, sizeof (uint64_t), &object, tx);
2379         }
2380         mutex_exit(&msp->ms_sync_lock);
2381         dmu_tx_commit(tx);
2382 }
2383 
2384 /*
2385  * Called after a transaction group has completely synced to mark
2386  * all of the metaslab's free space as usable.
2387  */
2388 void
2389 metaslab_sync_done(metaslab_t *msp, uint64_t txg)
2390 {
2391         metaslab_group_t *mg = msp->ms_group;
2392         vdev_t *vd = mg->mg_vd;
2393         spa_t *spa = vd->vdev_spa;
2394         range_tree_t **defer_tree;
2395         int64_t alloc_delta, defer_delta;
2396         boolean_t defer_allowed = B_TRUE;
2397 
2398         ASSERT(!vd->vdev_ishole);
2399 
2400         mutex_enter(&msp->ms_lock);
2401 
2402         /*
2403          * If this metaslab is just becoming available, initialize its
2404          * range trees and add its capacity to the vdev.
2405          */
2406         if (msp->ms_freedtree == NULL) {
2407                 for (int t = 0; t < TXG_SIZE; t++) {
2408                         ASSERT(msp->ms_alloctree[t] == NULL);
2409 
2410                         msp->ms_alloctree[t] = range_tree_create(NULL, NULL);

2411                 }
2412 
2413                 ASSERT3P(msp->ms_freeingtree, ==, NULL);
2414                 msp->ms_freeingtree = range_tree_create(NULL, NULL);

2415 
2416                 ASSERT3P(msp->ms_freedtree, ==, NULL);
2417                 msp->ms_freedtree = range_tree_create(NULL, NULL);

2418 
2419                 for (int t = 0; t < TXG_DEFER_SIZE; t++) {
2420                         ASSERT(msp->ms_defertree[t] == NULL);
2421 
2422                         msp->ms_defertree[t] = range_tree_create(NULL, NULL);

2423                 }
2424 
2425                 vdev_space_update(vd, 0, 0, msp->ms_size);
2426         }
2427 
2428         defer_tree = &msp->ms_defertree[txg % TXG_DEFER_SIZE];
2429 
2430         uint64_t free_space = metaslab_class_get_space(spa_normal_class(spa)) -
2431             metaslab_class_get_alloc(spa_normal_class(spa));
2432         if (free_space <= spa_get_slop_space(spa) || vd->vdev_removing) {
2433                 defer_allowed = B_FALSE;
2434         }
2435 
2436         defer_delta = 0;
2437         alloc_delta = space_map_alloc_delta(msp->ms_sm);
2438         if (defer_allowed) {
2439                 defer_delta = range_tree_space(msp->ms_freedtree) -
2440                     range_tree_space(*defer_tree);
2441         } else {
2442                 defer_delta -= range_tree_space(*defer_tree);
2443         }
2444 
2445         vdev_space_update(vd, alloc_delta + defer_delta, defer_delta, 0);
2446 
2447         /*
2448          * If there's a metaslab_load() in progress, wait for it to complete
2449          * so that we have a consistent view of the in-core space map.
2450          */
2451         metaslab_load_wait(msp);
2452 
2453         /*
2454          * Move the frees from the defer_tree back to the free
2455          * range tree (if it's loaded). Swap the freed_tree and the
2456          * defer_tree -- this is safe to do because we've just emptied out
2457          * the defer_tree.
2458          */








2459         range_tree_vacate(*defer_tree,
2460             msp->ms_loaded ? range_tree_add : NULL, msp->ms_tree);
2461         if (defer_allowed) {
2462                 range_tree_swap(&msp->ms_freedtree, defer_tree);
2463         } else {
2464                 range_tree_vacate(msp->ms_freedtree,
2465                     msp->ms_loaded ? range_tree_add : NULL, msp->ms_tree);
2466         }
2467 
2468         space_map_update(msp->ms_sm);
2469 
2470         msp->ms_deferspace += defer_delta;
2471         ASSERT3S(msp->ms_deferspace, >=, 0);
2472         ASSERT3S(msp->ms_deferspace, <=, msp->ms_size);
2473         if (msp->ms_deferspace != 0) {
2474                 /*
2475                  * Keep syncing this metaslab until all deferred frees
2476                  * are back in circulation.
2477                  */
2478                 vdev_dirty(vd, VDD_METASLAB, msp, txg + 1);


2482          * Calculate the new weights before unloading any metaslabs.
2483          * This will give us the most accurate weighting.
2484          */
2485         metaslab_group_sort(mg, msp, metaslab_weight(msp));
2486 
2487         /*
2488          * If the metaslab is loaded and we've not tried to load or allocate
2489          * from it in 'metaslab_unload_delay' txgs, then unload it.
2490          */
2491         if (msp->ms_loaded &&
2492             msp->ms_selected_txg + metaslab_unload_delay < txg) {
2493                 for (int t = 1; t < TXG_CONCURRENT_STATES; t++) {
2494                         VERIFY0(range_tree_space(
2495                             msp->ms_alloctree[(txg + t) & TXG_MASK]));
2496                 }
2497 
2498                 if (!metaslab_debug_unload)
2499                         metaslab_unload(msp);
2500         }
2501 
2502         ASSERT0(range_tree_space(msp->ms_alloctree[txg & TXG_MASK]));
2503         ASSERT0(range_tree_space(msp->ms_freeingtree));
2504         ASSERT0(range_tree_space(msp->ms_freedtree));
2505 
2506         mutex_exit(&msp->ms_lock);
2507 }
2508 
2509 void
2510 metaslab_sync_reassess(metaslab_group_t *mg)
2511 {
2512         spa_t *spa = mg->mg_class->mc_spa;
2513 
2514         spa_config_enter(spa, SCL_ALLOC, FTAG, RW_READER);
2515         metaslab_group_alloc_update(mg);
2516         mg->mg_fragmentation = metaslab_group_fragmentation(mg);
2517 
2518         /*
2519          * Preload the next potential metaslabs but only on active
2520          * metaslab groups. We can get into a state where the metaslab
2521          * is no longer active since we dirty metaslabs as we remove a
2522          * a device, thus potentially making the metaslab group eligible
2523          * for preloading.
2524          */
2525         if (mg->mg_activation_count > 0) {
2526                 metaslab_group_preload(mg);
2527         }
2528         spa_config_exit(spa, SCL_ALLOC, FTAG);
2529 }
2530 
2531 static uint64_t
2532 metaslab_distance(metaslab_t *msp, dva_t *dva)
2533 {
2534         uint64_t ms_shift = msp->ms_group->mg_vd->vdev_ms_shift;
2535         uint64_t offset = DVA_GET_OFFSET(dva) >> ms_shift;
2536         uint64_t start = msp->ms_id;
2537 
2538         if (msp->ms_group->mg_vd->vdev_id != DVA_GET_VDEV(dva))
2539                 return (1ULL << 63);
2540 
2541         if (offset < start)
2542                 return ((start - offset) << ms_shift);
2543         if (offset > start)
2544                 return ((offset - start) << ms_shift);
2545         return (0);
2546 }
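/*
 * Worked example, assuming a hypothetical vdev with 16 GB metaslabs
 * (vdev_ms_shift == 34): for a DVA on the same vdev whose offset lands in
 * metaslab 10 while msp->ms_id == 4, the distance is (10 - 4) << 34,
 * i.e. 96 GB.  A DVA on a different vdev always reports 1 << 63,
 * effectively "infinitely far away".
 */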
2547 
2548 /*


2702 }
2703 
2704 static uint64_t
2705 metaslab_block_alloc(metaslab_t *msp, uint64_t size, uint64_t txg)
2706 {
2707         uint64_t start;
2708         range_tree_t *rt = msp->ms_tree;
2709         metaslab_class_t *mc = msp->ms_group->mg_class;
2710 
2711         VERIFY(!msp->ms_condensing);
2712 
2713         start = mc->mc_ops->msop_alloc(msp, size);
2714         if (start != -1ULL) {
2715                 metaslab_group_t *mg = msp->ms_group;
2716                 vdev_t *vd = mg->mg_vd;
2717 
2718                 VERIFY0(P2PHASE(start, 1ULL << vd->vdev_ashift));
2719                 VERIFY0(P2PHASE(size, 1ULL << vd->vdev_ashift));
2720                 VERIFY3U(range_tree_space(rt) - size, <=, msp->ms_size);
2721                 range_tree_remove(rt, start, size);

2722 
2723                 if (range_tree_space(msp->ms_alloctree[txg & TXG_MASK]) == 0)
2724                         vdev_dirty(mg->mg_vd, VDD_METASLAB, msp, txg);
2725 
2726                 range_tree_add(msp->ms_alloctree[txg & TXG_MASK], start, size);
2727 
2728                 /* Track the last successful allocation */
2729                 msp->ms_alloc_txg = txg;
2730                 metaslab_verify_space(msp, txg);
2731         }
2732 
2733         /*
2734          * Now that we've attempted the allocation we need to update the
2735          * metaslab's maximum block size since it may have changed.
2736          */
2737         msp->ms_max_size = metaslab_block_maxsize(msp);
2738         return (start);
2739 }
2740 
2741 static uint64_t
2742 metaslab_group_alloc_normal(metaslab_group_t *mg, zio_alloc_list_t *zal,
2743     uint64_t asize, uint64_t txg, uint64_t min_distance, dva_t *dva, int d)

2744 {
2745         metaslab_t *msp = NULL;
2746         uint64_t offset = -1ULL;
2747         uint64_t activation_weight;
2748         uint64_t target_distance;
2749         int i;
2750 
2751         activation_weight = METASLAB_WEIGHT_PRIMARY;
2752         for (i = 0; i < d; i++) {
2753                 if (DVA_GET_VDEV(&dva[i]) == mg->mg_vd->vdev_id) {
2754                         activation_weight = METASLAB_WEIGHT_SECONDARY;
2755                         break;
2756                 }
2757         }
2758 
2759         metaslab_t *search = kmem_alloc(sizeof (*search), KM_SLEEP);
2760         search->ms_weight = UINT64_MAX;
2761         search->ms_start = 0;
2762         for (;;) {
2763                 boolean_t was_active;

2764                 avl_tree_t *t = &mg->mg_metaslab_tree;
2765                 avl_index_t idx;
2766 
2767                 mutex_enter(&mg->mg_lock);
2768 
2769                 /*
2770                  * Find the metaslab with the highest weight that is less
2771                  * than what we've already tried.  In the common case, this
2772                  * means that we will examine each metaslab at most once.
2773                  * Note that concurrent callers could reorder metaslabs
2774                  * by activation/passivation once we have dropped the mg_lock.
2775                  * If a metaslab is activated by another thread, and we fail
2776                  * to allocate from the metaslab we have selected, we may
2777                  * not try the newly-activated metaslab, and instead activate
2778                  * another metaslab.  This is not optimal, but generally
2779                  * does not cause any problems (a possible exception being
2780                  * if every metaslab is completely full except for the
2781                  * newly-activated metaslab which we fail to examine).
2782                  */
2783                 msp = avl_find(t, search, &idx);
2784                 if (msp == NULL)
2785                         msp = avl_nearest(t, idx, AVL_AFTER);
2786                 for (; msp != NULL; msp = AVL_NEXT(t, msp)) {
2787 
2788                         if (!metaslab_should_allocate(msp, asize)) {
2789                                 metaslab_trace_add(zal, mg, msp, asize, d,
2790                                     TRACE_TOO_SMALL);
2791                                 continue;
2792                         }
2793 
2794                         /*
2795                          * If the selected metaslab is condensing, skip it.
2796                          */
2797                         if (msp->ms_condensing)
2798                                 continue;
2799 
2800                         was_active = msp->ms_weight & METASLAB_ACTIVE_MASK;
2801                         if (activation_weight == METASLAB_WEIGHT_PRIMARY)
2802                                 break;

2803 
2804                         target_distance = min_distance +
2805                             (space_map_allocated(msp->ms_sm) != 0 ? 0 :
2806                             min_distance >> 1);
2807 
2808                         for (i = 0; i < d; i++) {
2809                                 if (metaslab_distance(msp, &dva[i]) <
2810                                     target_distance)
2811                                         break;
2812                         }
2813                         if (i == d)
2814                                 break;
2815                 }

2816                 mutex_exit(&mg->mg_lock);
2817                 if (msp == NULL) {
2818                         kmem_free(search, sizeof (*search));
2819                         return (-1ULL);
2820                 }
2821                 search->ms_weight = msp->ms_weight;
2822                 search->ms_start = msp->ms_start + 1;
2823 
2824                 mutex_enter(&msp->ms_lock);
2825 
2826                 /*
2827                  * Ensure that the metaslab we have selected is still
2828                  * capable of handling our request. It's possible that
2829                  * another thread may have changed the weight while we
2830                  * were blocked on the metaslab lock. We check the
2831                  * active status first to see if we need to reselect
2832                  * a new metaslab.
2833                  */
2834                 if (was_active && !(msp->ms_weight & METASLAB_ACTIVE_MASK)) {
2835                         mutex_exit(&msp->ms_lock);


2916                         metaslab_passivate(msp,
2917                             metaslab_weight_from_range_tree(msp));
2918                 }
2919 
2920                 /*
2921                  * We have just failed an allocation attempt, check
2922                  * that metaslab_should_allocate() agrees. Otherwise,
2923                  * we may end up in an infinite loop retrying the same
2924                  * metaslab.
2925                  */
2926                 ASSERT(!metaslab_should_allocate(msp, asize));
2927                 mutex_exit(&msp->ms_lock);
2928         }
2929         mutex_exit(&msp->ms_lock);
2930         kmem_free(search, sizeof (*search));
2931         return (offset);
2932 }
2933 
2934 static uint64_t
2935 metaslab_group_alloc(metaslab_group_t *mg, zio_alloc_list_t *zal,
2936     uint64_t asize, uint64_t txg, uint64_t min_distance, dva_t *dva, int d)

2937 {
2938         uint64_t offset;
2939         ASSERT(mg->mg_initialized);
2940 
2941         offset = metaslab_group_alloc_normal(mg, zal, asize, txg,
2942             min_distance, dva, d);
2943 
2944         mutex_enter(&mg->mg_lock);
2945         if (offset == -1ULL) {
2946                 mg->mg_failed_allocations++;
2947                 metaslab_trace_add(zal, mg, NULL, asize, d,
2948                     TRACE_GROUP_FAILURE);
2949                 if (asize == SPA_GANGBLOCKSIZE) {
2950                         /*
2951                          * This metaslab group was unable to allocate
2952                          * the minimum gang block size so it must be out of
2953                          * space. We must notify the allocation throttle
2954                          * to start skipping allocation attempts to this
2955                          * metaslab group until more space becomes available.
2956                          * Note: this failure cannot be caused by the
2957                          * allocation throttle since the allocation throttle
2958                          * is only responsible for skipping devices and
2959                          * not failing block allocations.
2960                          */
2961                         mg->mg_no_free_space = B_TRUE;
2962                 }
2963         }
2964         mg->mg_allocations++;
2965         mutex_exit(&mg->mg_lock);
2966         return (offset);
2967 }
2968 
2969 /*
2970  * If we have to write a ditto block (i.e. more than one DVA for a given BP)
2971  * on the same vdev as an existing DVA of this BP, then try to allocate it
2972  * at least (vdev_asize / (2 ^ ditto_same_vdev_distance_shift)) away from the
2973  * existing DVAs.
2974  */
2975 int ditto_same_vdev_distance_shift = 3;
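/*
 * For example, on a hypothetical 1 TB top-level vdev the default shift of 3
 * asks for a minimum ditto distance of 1 TB / 2^3 = 128 GB.  As the
 * allocation code below shows, if that distance would not span more than a
 * single metaslab (1 << vdev_ms_shift) it is dropped to 0, and it is also
 * ignored entirely once we are trying hard.
 */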
2976 
2977 /*
2978  * Allocate a block for the specified i/o.
2979  */
2980 int
2981 metaslab_alloc_dva(spa_t *spa, metaslab_class_t *mc, uint64_t psize,
2982     dva_t *dva, int d, dva_t *hintdva, uint64_t txg, int flags,
2983     zio_alloc_list_t *zal)
2984 {
2985         metaslab_group_t *mg, *rotor;
2986         vdev_t *vd;
2987         boolean_t try_hard = B_FALSE;
2988 
2989         ASSERT(!DVA_IS_VALID(&dva[d]));
2990 
2991         /*
2992          * For testing, make some blocks above a certain size be gang blocks.
2993          */
2994         if (psize >= metaslab_gang_bang && (ddi_get_lbolt() & 3) == 0) {
2995                 metaslab_trace_add(zal, NULL, NULL, psize, d, TRACE_FORCE_GANG);
2996                 return (SET_ERROR(ENOSPC));
2997         }
2998 
2999         /*
3000          * Start at the rotor and loop through all mgs until we find something.


3006          * consecutive vdevs.  If we're forced to reuse a vdev before we've
3007          * allocated all of our ditto blocks, then try and spread them out on
3008          * that vdev as much as possible.  If it turns out to not be possible,
3009          * gradually lower our standards until anything becomes acceptable.
3010          * Also, allocating on consecutive vdevs (as opposed to random vdevs)
3011          * gives us hope of containing our fault domains to something we're
3012          * able to reason about.  Otherwise, any two top-level vdev failures
3013          * will guarantee the loss of data.  With consecutive allocation,
3014          * only two adjacent top-level vdev failures will result in data loss.
3015          *
3016          * If we are doing gang blocks (hintdva is non-NULL), try to keep
3017          * ourselves on the same vdev as our gang block header.  That
3018          * way, we can hope for locality in vdev_cache, plus it makes our
3019          * fault domains something tractable.
3020          */
3021         if (hintdva) {
3022                 vd = vdev_lookup_top(spa, DVA_GET_VDEV(&hintdva[d]));
3023 
3024                 /*
3025                  * It's possible the vdev we're using as the hint no
3026                  * longer exists or its mg has been closed (e.g. by
3027                  * device removal).  Consult the rotor when
3028                  * all else fails.
3029                  */
3030                 if (vd != NULL && vd->vdev_mg != NULL) {
3031                         mg = vd->vdev_mg;
3032 
3033                         if (flags & METASLAB_HINTBP_AVOID &&
3034                             mg->mg_next != NULL)
3035                                 mg = mg->mg_next;
3036                 } else {
3037                         mg = mc->mc_rotor;
3038                 }
3039         } else if (d != 0) {
3040                 vd = vdev_lookup_top(spa, DVA_GET_VDEV(&dva[d - 1]));
3041                 mg = vd->vdev_mg->mg_next;
3042         } else {
3043                 mg = mc->mc_rotor;
3044         }
3045 
3046         /*
3047          * If the hint put us into the wrong metaslab class, or into a
3048          * metaslab group that has been passivated, just follow the rotor.
3049          */
3050         if (mg->mg_class != mc || mg->mg_activation_count <= 0)


3105                 ASSERT(mg->mg_class == mc);
3106 
3107                 /*
3108                  * If we don't need to try hard, then require that the
3109                  * block be 1/8th of the device away from any other DVAs
3110                  * in this BP.  If we are trying hard, allow any offset
3111                  * to be used (distance=0).
3112                  */
3113                 uint64_t distance = 0;
3114                 if (!try_hard) {
3115                         distance = vd->vdev_asize >>
3116                             ditto_same_vdev_distance_shift;
3117                         if (distance <= (1ULL << vd->vdev_ms_shift))
3118                                 distance = 0;
3119                 }
3120 
3121                 uint64_t asize = vdev_psize_to_asize(vd, psize);
3122                 ASSERT(P2PHASE(asize, 1ULL << vd->vdev_ashift) == 0);
3123 
3124                 uint64_t offset = metaslab_group_alloc(mg, zal, asize, txg,
3125                     distance, dva, d);
3126 
3127                 if (offset != -1ULL) {
3128                         /*
3129                          * If we've just selected this metaslab group,
3130                          * figure out whether the corresponding vdev is
3131                          * over- or under-used relative to the pool,
3132                          * and set an allocation bias to even it out.
3133                          */
3134                         if (mc->mc_aliquot == 0 && metaslab_bias_enabled) {
3135                                 vdev_stat_t *vs = &vd->vdev_stat;
3136                                 int64_t vu, cu;

3137 
3138                                 vu = (vs->vs_alloc * 100) / (vs->vs_space + 1);
3139                                 cu = (mc->mc_alloc * 100) / (mc->mc_space + 1);
3140 
3141                                 /*
3142                                  * Calculate how much more or less we should
3143                                  * try to allocate from this device during
3144                                  * this iteration around the rotor.
3145                                  * For example, if a device is 80% full
3146                                  * and the pool is 20% full then we should
3147                                  * reduce allocations by 60% on this device.
3148                                  *
3149                                  * mg_bias = (20 - 80) * 512K / 100 = -307K
3150                                  *
3151                                  * This reduces allocations by 307K for this
3152                                  * iteration.
3153                                  */
3154                                 mg->mg_bias = ((cu - vu) *
3155                                     (int64_t)mg->mg_aliquot) / 100;
3156                         } else if (!metaslab_bias_enabled) {
3157                                 mg->mg_bias = 0;
3158                         }
3159 
3160                         if (atomic_add_64_nv(&mc->mc_aliquot, asize) >=
3161                             mg->mg_aliquot + mg->mg_bias) {
3162                                 mc->mc_rotor = mg->mg_next;
3163                                 mc->mc_aliquot = 0;
3164                         }
3165 
3166                         DVA_SET_VDEV(&dva[d], vd->vdev_id);
3167                         DVA_SET_OFFSET(&dva[d], offset);
3168                         DVA_SET_GANG(&dva[d], !!(flags & METASLAB_GANG_HEADER));
3169                         DVA_SET_ASIZE(&dva[d], asize);


3170 
3171                         return (0);
3172                 }
3173 next:
3174                 mc->mc_rotor = mg->mg_next;
3175                 mc->mc_aliquot = 0;
3176         } while ((mg = mg->mg_next) != rotor);
3177 
3178         /*
3179          * If we haven't tried hard, do so now.
3180          */
3181         if (!try_hard) {
3182                 try_hard = B_TRUE;
3183                 goto top;
3184         }
3185 
3186         bzero(&dva[d], sizeof (dva_t));
3187 
3188         metaslab_trace_add(zal, rotor, NULL, psize, d, TRACE_ENOSPC);
3189         return (SET_ERROR(ENOSPC));
3190 }
3191 
3192 void
3193 metaslab_free_concrete(vdev_t *vd, uint64_t offset, uint64_t asize,
3194     uint64_t txg)
3195 {
3196         metaslab_t *msp;
3197         spa_t *spa = vd->vdev_spa;
3198 
3199         ASSERT3U(txg, ==, spa->spa_syncing_txg);
3200         ASSERT(vdev_is_concrete(vd));
3201         ASSERT3U(spa_config_held(spa, SCL_ALL, RW_READER), !=, 0);
3202         ASSERT3U(offset >> vd->vdev_ms_shift, <, vd->vdev_ms_count);
3203 
3204         msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3205 
3206         VERIFY(!msp->ms_condensing);
3207         VERIFY3U(offset, >=, msp->ms_start);
3208         VERIFY3U(offset + asize, <=, msp->ms_start + msp->ms_size);
3209         VERIFY0(P2PHASE(offset, 1ULL << vd->vdev_ashift));
3210         VERIFY0(P2PHASE(asize, 1ULL << vd->vdev_ashift));
3211 
3212         metaslab_check_free_impl(vd, offset, asize);
3213         mutex_enter(&msp->ms_lock);
3214         if (range_tree_space(msp->ms_freeingtree) == 0) {
3215                 vdev_dirty(vd, VDD_METASLAB, msp, txg);
3216         }
3217         range_tree_add(msp->ms_freeingtree, offset, asize);
3218         mutex_exit(&msp->ms_lock);
3219 }
3220 
3221 /* ARGSUSED */
3222 void
3223 metaslab_free_impl_cb(uint64_t inner_offset, vdev_t *vd, uint64_t offset,
3224     uint64_t size, void *arg)
3225 {
3226         uint64_t *txgp = arg;
3227 
3228         if (vd->vdev_ops->vdev_op_remap != NULL)
3229                 vdev_indirect_mark_obsolete(vd, offset, size, *txgp);
3230         else
3231                 metaslab_free_impl(vd, offset, size, *txgp);
3232 }
3233 
3234 static void
3235 metaslab_free_impl(vdev_t *vd, uint64_t offset, uint64_t size,
3236     uint64_t txg)
3237 {
3238         spa_t *spa = vd->vdev_spa;
3239 
3240         ASSERT3U(spa_config_held(spa, SCL_ALL, RW_READER), !=, 0);
3241 
3242         if (txg > spa_freeze_txg(spa))
3243                 return;
3244 
3245         if (spa->spa_vdev_removal != NULL &&
3246             spa->spa_vdev_removal->svr_vdev == vd &&
3247             vdev_is_concrete(vd)) {
3248                 /*
3249                  * Note: we check if the vdev is concrete because when
3250                  * we complete the removal, we first change the vdev to be
3251                  * an indirect vdev (in open context), and then (in syncing
3252                  * context) clear spa_vdev_removal.
3253                  */
3254                 free_from_removing_vdev(vd, offset, size, txg);
3255         } else if (vd->vdev_ops->vdev_op_remap != NULL) {
3256                 vdev_indirect_mark_obsolete(vd, offset, size, txg);
3257                 vd->vdev_ops->vdev_op_remap(vd, offset, size,
3258                     metaslab_free_impl_cb, &txg);
3259         } else {
3260                 metaslab_free_concrete(vd, offset, size, txg);
3261         }
3262 }
3263 
3264 typedef struct remap_blkptr_cb_arg {
3265         blkptr_t *rbca_bp;
3266         spa_remap_cb_t rbca_cb;
3267         vdev_t *rbca_remap_vd;
3268         uint64_t rbca_remap_offset;
3269         void *rbca_cb_arg;
3270 } remap_blkptr_cb_arg_t;
3271 
3272 void
3273 remap_blkptr_cb(uint64_t inner_offset, vdev_t *vd, uint64_t offset,
3274     uint64_t size, void *arg)
3275 {
3276         remap_blkptr_cb_arg_t *rbca = arg;
3277         blkptr_t *bp = rbca->rbca_bp;
3278 
3279         /* We can not remap split blocks. */
3280         if (size != DVA_GET_ASIZE(&bp->blk_dva[0]))
3281                 return;
3282         ASSERT0(inner_offset);
3283 
3284         if (rbca->rbca_cb != NULL) {
3285                 /*
3286                  * At this point we know that we are not handling split
3287                  * blocks and we invoke the callback on the previous
3288                  * vdev which must be indirect.
3289                  */
3290                 ASSERT3P(rbca->rbca_remap_vd->vdev_ops, ==, &vdev_indirect_ops);
3291 
3292                 rbca->rbca_cb(rbca->rbca_remap_vd->vdev_id,
3293                     rbca->rbca_remap_offset, size, rbca->rbca_cb_arg);
3294 
3295                 /* set up remap_blkptr_cb_arg for the next call */
3296                 rbca->rbca_remap_vd = vd;
3297                 rbca->rbca_remap_offset = offset;
3298         }
3299 
3300         /*
3301          * The phys birth time is that of dva[0].  This ensures that we know
3302          * when each dva was written, so that resilver can determine which
3303          * blocks need to be scrubbed (i.e. those written during the time
3304          * the vdev was offline).  It also ensures that the key used in
3305          * the ARC hash table is unique (i.e. dva[0] + phys_birth).  If
3306          * we didn't change the phys_birth, a lookup in the ARC for a
3307          * remapped BP could find the data that was previously stored at
3308          * this vdev + offset.
3309          */
3310         vdev_t *oldvd = vdev_lookup_top(vd->vdev_spa,
3311             DVA_GET_VDEV(&bp->blk_dva[0]));
3312         vdev_indirect_births_t *vib = oldvd->vdev_indirect_births;
3313         bp->blk_phys_birth = vdev_indirect_births_physbirth(vib,
3314             DVA_GET_OFFSET(&bp->blk_dva[0]), DVA_GET_ASIZE(&bp->blk_dva[0]));
3315 
3316         DVA_SET_VDEV(&bp->blk_dva[0], vd->vdev_id);
3317         DVA_SET_OFFSET(&bp->blk_dva[0], offset);
3318 }
3319 
3320 /*
3321  * If the block pointer contains any indirect DVAs, modify them to refer to
3322  * concrete DVAs.  Note that this will sometimes not be possible, leaving
3323  * the indirect DVA in place.  This happens if the indirect DVA spans multiple
3324  * segments in the mapping (i.e. it is a "split block").
3325  *
3326  * If the BP was remapped, calls the callback on the original dva (note the
3327  * callback can be called multiple times if the original indirect DVA refers
3328  * to another indirect DVA, etc).
3329  *
3330  * Returns TRUE if the BP was remapped.
3331  */
3332 boolean_t
3333 spa_remap_blkptr(spa_t *spa, blkptr_t *bp, spa_remap_cb_t callback, void *arg)
3334 {
3335         remap_blkptr_cb_arg_t rbca;
3336 
3337         if (!zfs_remap_blkptr_enable)
3338                 return (B_FALSE);
3339 
3340         if (!spa_feature_is_enabled(spa, SPA_FEATURE_OBSOLETE_COUNTS))
3341                 return (B_FALSE);
3342 
3343         /*
3344          * Dedup BP's can not be remapped, because ddt_phys_select() depends
3345          * on DVA[0] being the same in the BP as in the DDT (dedup table).
3346          */
3347         if (BP_GET_DEDUP(bp))
3348                 return (B_FALSE);
3349 
3350         /*
3351          * Gang blocks can not be remapped, because
3352          * zio_checksum_gang_verifier() depends on the DVA[0] that's in
3353          * the BP used to read the gang block header (GBH) being the same
3354          * as the DVA[0] that we allocated for the GBH.
3355          */
3356         if (BP_IS_GANG(bp))
3357                 return (B_FALSE);
3358 
3359         /*
3360          * Embedded BP's have no DVA to remap.
3361          */
3362         if (BP_GET_NDVAS(bp) < 1)
3363                 return (B_FALSE);
3364 
3365         /*
3366          * Note: we only remap dva[0].  If we remapped other dvas, we
3367          * would no longer know what their phys birth txg is.
3368          */
3369         dva_t *dva = &bp->blk_dva[0];
3370 
3371         uint64_t offset = DVA_GET_OFFSET(dva);
3372         uint64_t size = DVA_GET_ASIZE(dva);
3373         vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
3374 
3375         if (vd->vdev_ops->vdev_op_remap == NULL)
3376                 return (B_FALSE);
3377 
3378         rbca.rbca_bp = bp;
3379         rbca.rbca_cb = callback;
3380         rbca.rbca_remap_vd = vd;
3381         rbca.rbca_remap_offset = offset;
3382         rbca.rbca_cb_arg = arg;
3383 
3384         /*
3385          * remap_blkptr_cb() will be called in order for each level of
3386          * indirection, until a concrete vdev is reached or a split block is
3387          * encountered. old_vd and old_offset are updated within the callback
3388          * as we go from the one indirect vdev to the next one (either concrete
3389          * or indirect again) in that order.
3390          */
3391         vd->vdev_ops->vdev_op_remap(vd, offset, size, remap_blkptr_cb, &rbca);
3392 
3393         /* Check if the DVA wasn't remapped because it is a split block */
3394         if (DVA_GET_VDEV(&rbca.rbca_bp->blk_dva[0]) == vd->vdev_id)
3395                 return (B_FALSE);
3396 
3397         return (B_TRUE);
3398 }
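/*
 * A minimal usage sketch (illustrative only, not part of this file): a
 * hypothetical caller that remaps a BP and counts the original locations it
 * is told about, one callback invocation per level of indirection.  The
 * callback's argument order matches the rbca_cb invocation above:
 * (vdev id, offset, asize, caller argument).
 */
#if 0
static void
example_remap_note_cb(uint64_t vdev_id, uint64_t offset, uint64_t size,
    void *arg)
{
	uint64_t *levels = arg;

	/* Each call reports one former (indirect) location of the block. */
	(*levels)++;
	zfs_dbgmsg("bp previously lived at vdev %llu offset 0x%llx "
	    "size 0x%llx", (u_longlong_t)vdev_id, (u_longlong_t)offset,
	    (u_longlong_t)size);
}

static void
example_remap(spa_t *spa, blkptr_t *bp)
{
	uint64_t levels = 0;

	if (spa_remap_blkptr(spa, bp, example_remap_note_cb, &levels))
		zfs_dbgmsg("remapped dva[0] through %llu level(s) of "
		    "indirection", (u_longlong_t)levels);
}
#endif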
3399 
3400 /*
3401  * Undo the allocation of a DVA which happened in the given transaction group.
3402  */
3403 void
3404 metaslab_unalloc_dva(spa_t *spa, const dva_t *dva, uint64_t txg)
3405 {
3406         metaslab_t *msp;
3407         vdev_t *vd;
3408         uint64_t vdev = DVA_GET_VDEV(dva);
3409         uint64_t offset = DVA_GET_OFFSET(dva);
3410         uint64_t size = DVA_GET_ASIZE(dva);


3411 



3412         ASSERT(DVA_IS_VALID(dva));
3413         ASSERT3U(spa_config_held(spa, SCL_ALL, RW_READER), !=, 0);
3414 
3415         if (txg > spa_freeze_txg(spa))
3416                 return;
3417 
3418         if ((vd = vdev_lookup_top(spa, vdev)) == NULL ||
3419             (offset >> vd->vdev_ms_shift) >= vd->vdev_ms_count) {
3420                     cmn_err(CE_WARN, "metaslab_unalloc_dva(): bad DVA %llu:%llu",
3421                     (u_longlong_t)vdev, (u_longlong_t)offset);
3422                 ASSERT(0);
3423                 return;
3424         }
3425 
3426         ASSERT(!vd->vdev_removing);
3427         ASSERT(vdev_is_concrete(vd));
3428         ASSERT0(vd->vdev_indirect_config.vic_mapping_object);
3429         ASSERT3P(vd->vdev_indirect_mapping, ==, NULL);
3430 
3431         if (DVA_GET_GANG(dva))
3432                 size = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
3433 
3434         msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3435 
3436         mutex_enter(&msp->ms_lock);


3437         range_tree_remove(msp->ms_alloctree[txg & TXG_MASK],
3438             offset, size);
3439 
3440         VERIFY(!msp->ms_condensing);
3441         VERIFY3U(offset, >=, msp->ms_start);
3442         VERIFY3U(offset + size, <=, msp->ms_start + msp->ms_size);
3443         VERIFY3U(range_tree_space(msp->ms_tree) + size, <=,
3444             msp->ms_size);
3445         VERIFY0(P2PHASE(offset, 1ULL << vd->vdev_ashift));
3446         VERIFY0(P2PHASE(size, 1ULL << vd->vdev_ashift));
3447         range_tree_add(msp->ms_tree, offset, size);
3448         mutex_exit(&msp->ms_lock);
3449 }
3450 
3451 /*
3452  * Free the block represented by DVA in the context of the specified
3453  * transaction group.


3454  */
3455 void
3456 metaslab_free_dva(spa_t *spa, const dva_t *dva, uint64_t txg)
3457 {
3458         uint64_t vdev = DVA_GET_VDEV(dva);
3459         uint64_t offset = DVA_GET_OFFSET(dva);
3460         uint64_t size = DVA_GET_ASIZE(dva);
3461         vdev_t *vd = vdev_lookup_top(spa, vdev);


3462 
3463         ASSERT(DVA_IS_VALID(dva));
3464         ASSERT3U(spa_config_held(spa, SCL_ALL, RW_READER), !=, 0);
3465 
3466         if (DVA_GET_GANG(dva)) {
3467                 size = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
3468         }
3469 
3470         metaslab_free_impl(vd, offset, size, txg);
3471 }
3472 
3473 /*
3474  * Reserve some allocation slots. The reservation system must be called
3475  * before we call into the allocator. If there aren't any available slots
3476  * then the I/O will be throttled until an I/O completes and its slots are
3477  * freed up. The function returns true if it was successful in placing
3478  * the reservation.
3479  */
3480 boolean_t
3481 metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, zio_t *zio,
3482     int flags)
3483 {
3484         uint64_t available_slots = 0;
3485         boolean_t slot_reserved = B_FALSE;
3486 
3487         ASSERT(mc->mc_alloc_throttle_enabled);
3488         mutex_enter(&mc->mc_lock);
3489 
3490         uint64_t reserved_slots = refcount_count(&mc->mc_alloc_slots);


3501                 }
3502                 zio->io_flags |= ZIO_FLAG_IO_ALLOCATING;
3503                 slot_reserved = B_TRUE;
3504         }
3505 
3506         mutex_exit(&mc->mc_lock);
3507         return (slot_reserved);
3508 }
3509 
3510 void
3511 metaslab_class_throttle_unreserve(metaslab_class_t *mc, int slots, zio_t *zio)
3512 {
3513         ASSERT(mc->mc_alloc_throttle_enabled);
3514         mutex_enter(&mc->mc_lock);
3515         for (int d = 0; d < slots; d++) {
3516                 (void) refcount_remove(&mc->mc_alloc_slots, zio);
3517         }
3518         mutex_exit(&mc->mc_lock);
3519 }
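/*
 * A minimal sketch of how a caller is expected to pair the two routines
 * above (illustrative only; loosely modeled on the zio DVA-allocation path,
 * so the function name and flag values here are placeholders): reserve one
 * slot per DVA before calling into the allocator, and release the slots
 * again if the allocation fails, or later when the allocating I/O completes.
 */
#if 0
static int
example_throttled_alloc(spa_t *spa, metaslab_class_t *mc, uint64_t psize,
    blkptr_t *bp, int ndvas, uint64_t txg, zio_t *zio)
{
	int error;

	/* Either we get ndvas slots now, or we back off and retry later. */
	if (!metaslab_class_throttle_reserve(mc, ndvas, zio, 0))
		return (SET_ERROR(EAGAIN));	/* caller would re-queue the zio */

	error = metaslab_alloc(spa, mc, psize, bp, ndvas, txg, NULL, 0,
	    &zio->io_alloc_list, zio);
	if (error != 0) {
		/* Nothing was allocated, so give the slots back right away. */
		metaslab_class_throttle_unreserve(mc, ndvas, zio);
	}
	return (error);
}
#endif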
3520 
3521 static int
3522 metaslab_claim_concrete(vdev_t *vd, uint64_t offset, uint64_t size,
3523     uint64_t txg)
3524 {
3525         metaslab_t *msp;
3526         spa_t *spa = vd->vdev_spa;
3527         int error = 0;
3528 
3529         if (offset >> vd->vdev_ms_shift >= vd->vdev_ms_count)
3530                 return (ENXIO);
3531 
3532         ASSERT3P(vd->vdev_ms, !=, NULL);
3533         msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3534 
3535         mutex_enter(&msp->ms_lock);
3536 
3537         if ((txg != 0 && spa_writeable(spa)) || !msp->ms_loaded)
3538                 error = metaslab_activate(msp, METASLAB_WEIGHT_SECONDARY);
3539 
3540         if (error == 0 && !range_tree_contains(msp->ms_tree, offset, size))
3541                 error = SET_ERROR(ENOENT);
3542 
3543         if (error || txg == 0) {        /* txg == 0 indicates dry run */
3544                 mutex_exit(&msp->ms_lock);
3545                 return (error);
3546         }
3547 
3548         VERIFY(!msp->ms_condensing);
3549         VERIFY0(P2PHASE(offset, 1ULL << vd->vdev_ashift));
3550         VERIFY0(P2PHASE(size, 1ULL << vd->vdev_ashift));
3551         VERIFY3U(range_tree_space(msp->ms_tree) - size, <=, msp->ms_size);
3552         range_tree_remove(msp->ms_tree, offset, size);
3553 
3554         if (spa_writeable(spa)) {       /* don't dirty if we're zdb(1M) */
3555                 if (range_tree_space(msp->ms_alloctree[txg & TXG_MASK]) == 0)
3556                         vdev_dirty(vd, VDD_METASLAB, msp, txg);
3557                 range_tree_add(msp->ms_alloctree[txg & TXG_MASK], offset, size);
3558         }
3559 
3560         mutex_exit(&msp->ms_lock);
3561 
3562         return (0);
3563 }
3564 
3565 typedef struct metaslab_claim_cb_arg_t {
3566         uint64_t        mcca_txg;
3567         int             mcca_error;
3568 } metaslab_claim_cb_arg_t;
3569 
3570 /* ARGSUSED */
3571 static void
3572 metaslab_claim_impl_cb(uint64_t inner_offset, vdev_t *vd, uint64_t offset,
3573     uint64_t size, void *arg)
3574 {
3575         metaslab_claim_cb_arg_t *mcca_arg = arg;
3576 
3577         if (mcca_arg->mcca_error == 0) {
3578                 mcca_arg->mcca_error = metaslab_claim_concrete(vd, offset,
3579                     size, mcca_arg->mcca_txg);
3580         }
3581 }
3582 
3583 int
3584 metaslab_claim_impl(vdev_t *vd, uint64_t offset, uint64_t size, uint64_t txg)
3585 {
3586         if (vd->vdev_ops->vdev_op_remap != NULL) {
3587                 metaslab_claim_cb_arg_t arg;
3588 
3589                 /*
3590                  * Only zdb(1M) can claim on indirect vdevs.  This is used
3591                  * to detect leaks of mapped space (that are not accounted
3592                  * for in the obsolete counts, spacemap, or bpobj).
3593                  */
3594                 ASSERT(!spa_writeable(vd->vdev_spa));
3595                 arg.mcca_error = 0;
3596                 arg.mcca_txg = txg;
3597 
3598                 vd->vdev_ops->vdev_op_remap(vd, offset, size,
3599                     metaslab_claim_impl_cb, &arg);
3600 
3601                 if (arg.mcca_error == 0) {
3602                         arg.mcca_error = metaslab_claim_concrete(vd,
3603                             offset, size, txg);
3604                 }
3605                 return (arg.mcca_error);
3606         } else {
3607                 return (metaslab_claim_concrete(vd, offset, size, txg));
3608         }
3609 }
3610 
3611 /*
3612  * Intent log support: upon opening the pool after a crash, notify the SPA
3613  * of blocks that the intent log has allocated for immediate write, but
3614  * which are still considered free by the SPA because the last transaction
3615  * group didn't commit yet.
3616  */
3617 static int
3618 metaslab_claim_dva(spa_t *spa, const dva_t *dva, uint64_t txg)
3619 {
3620         uint64_t vdev = DVA_GET_VDEV(dva);
3621         uint64_t offset = DVA_GET_OFFSET(dva);
3622         uint64_t size = DVA_GET_ASIZE(dva);
3623         vdev_t *vd;
3624 
3625         if ((vd = vdev_lookup_top(spa, vdev)) == NULL) {
3626                 return (SET_ERROR(ENXIO));
3627         }
3628 
3629         ASSERT(DVA_IS_VALID(dva));
3630 
3631         if (DVA_GET_GANG(dva))
3632                 size = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
3633 
3634         return (metaslab_claim_impl(vd, offset, size, txg));
3635 }
3636 
3637 int
3638 metaslab_alloc(spa_t *spa, metaslab_class_t *mc, uint64_t psize, blkptr_t *bp,
3639     int ndvas, uint64_t txg, blkptr_t *hintbp, int flags,
3640     zio_alloc_list_t *zal, zio_t *zio)
3641 {
3642         dva_t *dva = bp->blk_dva;
3643         dva_t *hintdva = hintbp->blk_dva;
3644         int error = 0;
3645 
3646         ASSERT(bp->blk_birth == 0);
3647         ASSERT(BP_PHYSICAL_BIRTH(bp) == 0);
3648 
3649         spa_config_enter(spa, SCL_ALLOC, FTAG, RW_READER);
3650 
3651         if (mc->mc_rotor == NULL) {  /* no vdevs in this class */
3652                 spa_config_exit(spa, SCL_ALLOC, FTAG);
3653                 return (SET_ERROR(ENOSPC));
3654         }
3655 
3656         ASSERT(ndvas > 0 && ndvas <= spa_max_replication(spa));
3657         ASSERT(BP_GET_NDVAS(bp) == 0);
3658         ASSERT(hintbp == NULL || ndvas <= BP_GET_NDVAS(hintbp));
3659         ASSERT3P(zal, !=, NULL);
3660 
3661         for (int d = 0; d < ndvas; d++) {
3662                 error = metaslab_alloc_dva(spa, mc, psize, dva, d, hintdva,
3663                     txg, flags, zal);
3664                 if (error != 0) {
3665                         for (d--; d >= 0; d--) {
3666                                 metaslab_unalloc_dva(spa, &dva[d], txg);

3667                                 metaslab_group_alloc_decrement(spa,
3668                                     DVA_GET_VDEV(&dva[d]), zio, flags);
3669                                 bzero(&dva[d], sizeof (dva_t));
3670                         }
3671                         spa_config_exit(spa, SCL_ALLOC, FTAG);
3672                         return (error);
3673                 } else {
3674                         /*
3675                          * Update the metaslab group's queue depth
3676                          * based on the newly allocated dva.
3677                          */
3678                         metaslab_group_alloc_increment(spa,
3679                             DVA_GET_VDEV(&dva[d]), zio, flags);
3680                 }
3681 
3682         }
3683         ASSERT(error == 0);
3684         ASSERT(BP_GET_NDVAS(bp) == ndvas);


3685 
3686         spa_config_exit(spa, SCL_ALLOC, FTAG);
3687 
3688         BP_SET_BIRTH(bp, txg, txg);
3689 
3690         return (0);
3691 }
3692 
3693 void
3694 metaslab_free(spa_t *spa, const blkptr_t *bp, uint64_t txg, boolean_t now)
3695 {
3696         const dva_t *dva = bp->blk_dva;
3697         int ndvas = BP_GET_NDVAS(bp);
3698 
3699         ASSERT(!BP_IS_HOLE(bp));
3700         ASSERT(!now || bp->blk_birth >= spa_syncing_txg(spa));
3701 
3702         spa_config_enter(spa, SCL_FREE, FTAG, RW_READER);
3703 
3704         for (int d = 0; d < ndvas; d++) {
3705                 if (now) {
3706                         metaslab_unalloc_dva(spa, &dva[d], txg);
3707                 } else {
3708                         metaslab_free_dva(spa, &dva[d], txg);

3709                 }
3710         }
3711 
3712         spa_config_exit(spa, SCL_FREE, FTAG);
3713 }
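/*
 * A note on the 'now' flag: per the ASSERT above, 'now' is only set for
 * blocks whose birth txg is at least the currently syncing txg, i.e. blocks
 * allocated so recently that they have never been part of a fully synced
 * txg.  Those can simply be handed back to the allocator via
 * metaslab_unalloc_dva(); everything else goes through the ordinary
 * freeing/freed/defer pipeline via metaslab_free_dva().
 */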
3714 
3715 int
3716 metaslab_claim(spa_t *spa, const blkptr_t *bp, uint64_t txg)
3717 {
3718         const dva_t *dva = bp->blk_dva;
3719         int ndvas = BP_GET_NDVAS(bp);
3720         int error = 0;
3721 
3722         ASSERT(!BP_IS_HOLE(bp));
3723 
3724         if (txg != 0) {
3725                 /*
3726                  * First do a dry run to make sure all DVAs are claimable,
3727                  * so we don't have to unwind from partial failures below.
3728                  */
3729                 if ((error = metaslab_claim(spa, bp, 0)) != 0)
3730                         return (error);
3731         }
3732 
3733         spa_config_enter(spa, SCL_ALLOC, FTAG, RW_READER);
3734 
3735         for (int d = 0; d < ndvas; d++)
3736                 if ((error = metaslab_claim_dva(spa, &dva[d], txg)) != 0)

3737                         break;

3738 
3739         spa_config_exit(spa, SCL_ALLOC, FTAG);
3740 
3741         ASSERT(error == 0 || txg == 0);
3742 
3743         return (error);
3744 }
3745 
3746 /* ARGSUSED */
3747 static void
3748 metaslab_check_free_impl_cb(uint64_t inner, vdev_t *vd, uint64_t offset,
3749     uint64_t size, void *arg)
3750 {
3751         if (vd->vdev_ops == &vdev_indirect_ops)
3752                 return;
3753 
3754         metaslab_check_free_impl(vd, offset, size);
3755 }
3756 
3757 static void
3758 metaslab_check_free_impl(vdev_t *vd, uint64_t offset, uint64_t size)
3759 {
3760         metaslab_t *msp;
3761         spa_t *spa = vd->vdev_spa;
3762 
3763         if ((zfs_flags & ZFS_DEBUG_ZIO_FREE) == 0)
3764                 return;
3765 
3766         if (vd->vdev_ops->vdev_op_remap != NULL) {
3767                 vd->vdev_ops->vdev_op_remap(vd, offset, size,
3768                     metaslab_check_free_impl_cb, NULL);
3769                 return;
3770         }
3771 
3772         ASSERT(vdev_is_concrete(vd));
3773         ASSERT3U(offset >> vd->vdev_ms_shift, <, vd->vdev_ms_count);
3774         ASSERT3U(spa_config_held(spa, SCL_ALL, RW_READER), !=, 0);
3775 
3776         msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3777 
3778         mutex_enter(&msp->ms_lock);
3779         if (msp->ms_loaded)
3780                 range_tree_verify(msp->ms_tree, offset, size);
3781 
3782         range_tree_verify(msp->ms_freeingtree, offset, size);
3783         range_tree_verify(msp->ms_freedtree, offset, size);
3784         for (int j = 0; j < TXG_DEFER_SIZE; j++)
3785                 range_tree_verify(msp->ms_defertree[j], offset, size);
3786         mutex_exit(&msp->ms_lock);
3787 }
3788 
3789 void
3790 metaslab_check_free(spa_t *spa, const blkptr_t *bp)
3791 {
3792         if ((zfs_flags & ZFS_DEBUG_ZIO_FREE) == 0)
3793                 return;
3794 
3795         spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
3796         for (int i = 0; i < BP_GET_NDVAS(bp); i++) {
3797                 uint64_t vdev = DVA_GET_VDEV(&bp->blk_dva[i]);
3798                 vdev_t *vd = vdev_lookup_top(spa, vdev);
3799                 uint64_t offset = DVA_GET_OFFSET(&bp->blk_dva[i]);
3800                 uint64_t size = DVA_GET_ASIZE(&bp->blk_dva[i]);
3801 
3802                 if (DVA_GET_GANG(&bp->blk_dva[i]))
3803                         size = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
3804 
3805                 ASSERT3P(vd, !=, NULL);
3806 
3807                 metaslab_check_free_impl(vd, offset, size);
3808         }
3809         spa_config_exit(spa, SCL_VDEV, FTAG);
3810 }


   6  * You may not use this file except in compliance with the License.
   7  *
   8  * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
   9  * or http://www.opensolaris.org/os/licensing.
  10  * See the License for the specific language governing permissions
  11  * and limitations under the License.
  12  *
  13  * When distributing Covered Code, include this CDDL HEADER in each
  14  * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15  * If applicable, add the following below this CDDL HEADER, with the
  16  * fields enclosed by brackets "[]" replaced with your own identifying
  17  * information: Portions Copyright [yyyy] [name of copyright owner]
  18  *
  19  * CDDL HEADER END
  20  */
  21 /*
  22  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
  23  * Copyright (c) 2011, 2015 by Delphix. All rights reserved.
  24  * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
  25  * Copyright (c) 2014 Integros [integros.com]
  26  * Copyright 2017 Nexenta Systems, Inc. All rights reserved.
  27  */
  28 
  29 #include <sys/zfs_context.h>
  30 #include <sys/dmu.h>
  31 #include <sys/dmu_tx.h>
  32 #include <sys/space_map.h>
  33 #include <sys/metaslab_impl.h>
  34 #include <sys/vdev_impl.h>
  35 #include <sys/zio.h>
  36 #include <sys/spa_impl.h>
  37 #include <sys/zfeature.h>
  38 #include <sys/wbc.h>
  39 
  40 #define GANG_ALLOCATION(flags) \
  41         ((flags) & (METASLAB_GANG_CHILD | METASLAB_GANG_HEADER))
  42 
  43 uint64_t metaslab_aliquot = 512ULL << 10;
  44 uint64_t metaslab_gang_bang = SPA_MAXBLOCKSIZE + 1;     /* force gang blocks */
  45 
  46 /*
  47  * The in-core space map representation is more compact than its on-disk form.
  48  * The zfs_condense_pct determines how much more compact the in-core
  49  * space map representation must be before we compact it on-disk.
  50  * Values should be greater than or equal to 100.
  51  */
  52 int zfs_condense_pct = 200;
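/*
 * For example, at the default of 200 a metaslab's space map is only
 * considered for condensing once its on-disk representation is roughly
 * twice the size that its in-core range tree would need if written out.
 */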
  53 
  54 /*
  55  * Condensing a metaslab is not guaranteed to actually reduce the amount of
  56  * space used on disk. In particular, a space map uses data in increments of
  57  * MAX(1 << ashift, space_map_blksize), so a metaslab might use the
  58  * same number of blocks after condensing. Since the goal of condensing is to


 151  * Enable/disable preloading of metaslabs.
 152  */
 153 boolean_t metaslab_preload_enabled = B_TRUE;
 154 
 155 /*
 156  * Enable/disable fragmentation weighting on metaslabs.
 157  */
 158 boolean_t metaslab_fragmentation_factor_enabled = B_TRUE;
 159 
 160 /*
 161  * Enable/disable lba weighting (i.e. outer tracks are given preference).
 162  */
 163 boolean_t metaslab_lba_weighting_enabled = B_TRUE;
 164 
 165 /*
 166  * Enable/disable metaslab group biasing.
 167  */
 168 boolean_t metaslab_bias_enabled = B_TRUE;
 169 
 170 /*
 171  * Enable/disable segment-based metaslab selection.
 172  */
 173 boolean_t zfs_metaslab_segment_weight_enabled = B_TRUE;
 174 
 175 /*
 176  * When using segment-based metaslab selection, we will continue
 177  * allocating from the active metaslab until we have exhausted
 178  * zfs_metaslab_switch_threshold of its buckets.
 179  */
 180 int zfs_metaslab_switch_threshold = 2;
 181 
 182 /*
 183  * Internal switch to enable/disable the metaslab allocation tracing
 184  * facility.
 185  */
 186 boolean_t metaslab_trace_enabled = B_TRUE;
 187 
 188 /*
 189  * Maximum entries that the metaslab allocation tracing facility will keep
 190  * in a given list when running in non-debug mode. We limit the number
 191  * of entries in non-debug mode to prevent us from using up too much memory.
 192  * The limit should be sufficiently large that we don't expect any allocation
 193  * to ever exceed this value. In debug mode, the system will panic if this
 194  * limit is ever reached, allowing for further investigation.
 195  */
 196 uint64_t metaslab_trace_max_entries = 5000;
 197 
 198 static uint64_t metaslab_weight(metaslab_t *);
 199 static void metaslab_set_fragmentation(metaslab_t *);


 200 
 201 kmem_cache_t *metaslab_alloc_trace_cache;
 202 
 203 /*
 204  * Select the DVA allocator: 0 = space-based, 1 = latency-based, 2 = hybrid.
 205  * Any value other than 0, 1 or 2 is treated as 0 (the default).
 206  */
 207 int metaslab_alloc_dva_algorithm = 0;
 208 
 209 /*
 210  * How many TXG's worth of updates should be aggregated per TRIM/UNMAP
 211  * issued to the underlying vdev. We keep two range trees of extents
 212  * (called "trim sets") to be trimmed per metaslab, the `current' and
 213  * the `previous' TS. New free's are added to the current TS. Then,
 214  * the `previous' TS. New frees are added to the current TS. Then,
 215  * TS becomes the `previous' TS and a new, blank TS is created to be
 216  * the new `current', which will then start accumulating any new frees.
 217  * Once another zfs_txgs_per_trim TXGs have passed, the previous TS's
 218  * extents are trimmed, the TS is destroyed and the current TS again
 219  * becomes the previous TS.
 220  * This serves to fulfill two functions: aggregate many small frees
 221  * into fewer larger trim operations (which should help with devices
 222  * which do not take so kindly to them) and to allow for disaster
 223  * recovery (extents won't get trimmed immediately, but instead only
 224  * after passing this rather long timeout, thus not preserving
 225  * after passing this rather long timeout, thus preserving
 226  */
 227 unsigned int zfs_txgs_per_trim = 32;
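/*
 * A sketch of the rotation described above, assuming the default of 32:
 * frees from txgs [N, N+31] accumulate in the `current' trimset; at txg
 * N+32 that set becomes the `previous' one and a fresh `current' set is
 * started; at txg N+64 the previous set's extents are finally issued as
 * TRIM/UNMAP and the set is destroyed.  Any given free therefore sits
 * untouched on disk for between 32 and 64 txgs before being trimmed.
 */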
 228 
 229 static void metaslab_trim_remove(void *arg, uint64_t offset, uint64_t size);
 230 static void metaslab_trim_add(void *arg, uint64_t offset, uint64_t size);
 231 
 232 static zio_t *metaslab_exec_trim(metaslab_t *msp);
 233 
 234 static metaslab_trimset_t *metaslab_new_trimset(uint64_t txg, kmutex_t *lock);
 235 static void metaslab_free_trimset(metaslab_trimset_t *ts);
 236 static boolean_t metaslab_check_trim_conflict(metaslab_t *msp,
 237     uint64_t *offset, uint64_t size, uint64_t align, uint64_t limit);
 238 
 239 /*
 240  * ==========================================================================
 241  * Metaslab classes
 242  * ==========================================================================
 243  */
 244 metaslab_class_t *
 245 metaslab_class_create(spa_t *spa, metaslab_ops_t *ops)
 246 {
 247         metaslab_class_t *mc;
 248 
 249         mc = kmem_zalloc(sizeof (metaslab_class_t), KM_SLEEP);
 250 
 251         mutex_init(&mc->mc_alloc_lock, NULL, MUTEX_DEFAULT, NULL);
 252         avl_create(&mc->mc_alloc_tree, zio_bookmark_compare,
 253             sizeof (zio_t), offsetof(zio_t, io_alloc_node));
 254 
 255         mc->mc_spa = spa;
 256         mc->mc_rotor = NULL;
 257         mc->mc_ops = ops;
 258         mutex_init(&mc->mc_lock, NULL, MUTEX_DEFAULT, NULL);
 259         refcount_create_tracked(&mc->mc_alloc_slots);
 260 
 261         return (mc);
 262 }
 263 
 264 void
 265 metaslab_class_destroy(metaslab_class_t *mc)
 266 {
 267         ASSERT(mc->mc_rotor == NULL);
 268         ASSERT(mc->mc_alloc == 0);
 269         ASSERT(mc->mc_deferred == 0);
 270         ASSERT(mc->mc_space == 0);
 271         ASSERT(mc->mc_dspace == 0);
 272 
 273         avl_destroy(&mc->mc_alloc_tree);
 274         mutex_destroy(&mc->mc_alloc_lock);
 275 
 276         refcount_destroy(&mc->mc_alloc_slots);
 277         mutex_destroy(&mc->mc_lock);
 278         kmem_free(mc, sizeof (metaslab_class_t));
 279 }
 280 
 281 int
 282 metaslab_class_validate(metaslab_class_t *mc)
 283 {
 284         metaslab_group_t *mg;
 285         vdev_t *vd;
 286 
 287         /*
 288          * Must hold one of the spa_config locks.
 289          */
 290         ASSERT(spa_config_held(mc->mc_spa, SCL_ALL, RW_READER) ||
 291             spa_config_held(mc->mc_spa, SCL_ALL, RW_WRITER));
 292 
 293         if ((mg = mc->mc_rotor) == NULL)
 294                 return (0);
 295 


 342 metaslab_class_histogram_verify(metaslab_class_t *mc)
 343 {
 344         vdev_t *rvd = mc->mc_spa->spa_root_vdev;
 345         uint64_t *mc_hist;
 346         int i;
 347 
 348         if ((zfs_flags & ZFS_DEBUG_HISTOGRAM_VERIFY) == 0)
 349                 return;
 350 
 351         mc_hist = kmem_zalloc(sizeof (uint64_t) * RANGE_TREE_HISTOGRAM_SIZE,
 352             KM_SLEEP);
 353 
 354         for (int c = 0; c < rvd->vdev_children; c++) {
 355                 vdev_t *tvd = rvd->vdev_child[c];
 356                 metaslab_group_t *mg = tvd->vdev_mg;
 357 
 358                 /*
 359                  * Skip any holes, uninitialized top-levels, or
 360                  * vdevs that are not in this metaslab class.
 361                  */
 362                 if (tvd->vdev_ishole || tvd->vdev_ms_shift == 0 ||
 363                     mg->mg_class != mc) {
 364                         continue;
 365                 }
 366 
 367                 for (i = 0; i < RANGE_TREE_HISTOGRAM_SIZE; i++)
 368                         mc_hist[i] += mg->mg_histogram[i];
 369         }
 370 
 371         for (i = 0; i < RANGE_TREE_HISTOGRAM_SIZE; i++)
 372                 VERIFY3U(mc_hist[i], ==, mc->mc_histogram[i]);
 373 
 374         kmem_free(mc_hist, sizeof (uint64_t) * RANGE_TREE_HISTOGRAM_SIZE);
 375 }
 376 
 377 /*
 378  * Calculate the metaslab class's fragmentation metric. The metric
 379  * is weighted based on the space contribution of each metaslab group.
 380  * The return value will be a number between 0 and 100 (inclusive), or
 381  * ZFS_FRAG_INVALID if the metric has not been set. See comment above the
 382  * zfs_frag_table for more information about the metric.
 383  */
 384 uint64_t
 385 metaslab_class_fragmentation(metaslab_class_t *mc)
 386 {
 387         vdev_t *rvd = mc->mc_spa->spa_root_vdev;
 388         uint64_t fragmentation = 0;
 389 
 390         spa_config_enter(mc->mc_spa, SCL_VDEV, FTAG, RW_READER);
 391 
 392         for (int c = 0; c < rvd->vdev_children; c++) {
 393                 vdev_t *tvd = rvd->vdev_child[c];
 394                 metaslab_group_t *mg = tvd->vdev_mg;
 395 
 396                 /*
 397                  * Skip any holes, uninitialized top-levels, or
 398                  * vdevs that are not in this metaslab class.
 399                  */
 400                 if (tvd->vdev_ishole || tvd->vdev_ms_shift == 0 ||
 401                     mg->mg_class != mc) {
 402                         continue;
 403                 }
 404 
 405                 /*
 406                  * If a metaslab group does not contain a fragmentation
 407                  * metric then just bail out.
 408                  */
 409                 if (mg->mg_fragmentation == ZFS_FRAG_INVALID) {
 410                         spa_config_exit(mc->mc_spa, SCL_VDEV, FTAG);
 411                         return (ZFS_FRAG_INVALID);
 412                 }
 413 
 414                 /*
 415                  * Determine how much this metaslab_group is contributing
 416                  * to the overall pool fragmentation metric.
 417                  */
 418                 fragmentation += mg->mg_fragmentation *
 419                     metaslab_group_get_space(mg);
 420         }


 426 }
 427 
 428 /*
 429  * Calculate the amount of expandable space that is available in
 430  * this metaslab class. If a device is expanded then its expandable
 431  * space will be the amount of allocatable space that is currently not
 432  * part of this metaslab class.
 433  */
 434 uint64_t
 435 metaslab_class_expandable_space(metaslab_class_t *mc)
 436 {
 437         vdev_t *rvd = mc->mc_spa->spa_root_vdev;
 438         uint64_t space = 0;
 439 
 440         spa_config_enter(mc->mc_spa, SCL_VDEV, FTAG, RW_READER);
 441         for (int c = 0; c < rvd->vdev_children; c++) {
 442                 uint64_t tspace;
 443                 vdev_t *tvd = rvd->vdev_child[c];
 444                 metaslab_group_t *mg = tvd->vdev_mg;
 445 
 446                 if (tvd->vdev_ishole || tvd->vdev_ms_shift == 0 ||
 447                     mg->mg_class != mc) {
 448                         continue;
 449                 }
 450 
 451                 /*
 452                  * Calculate if we have enough space to add additional
 453                  * metaslabs. We report the expandable space in terms
 454                  * of the metaslab size since that's the unit of expansion.
 455                  * Adjust by efi system partition size.
 456                  */
 457                 tspace = tvd->vdev_max_asize - tvd->vdev_asize;
 458                 if (tspace > mc->mc_spa->spa_bootsize) {
 459                         tspace -= mc->mc_spa->spa_bootsize;
 460                 }
 461                 space += P2ALIGN(tspace, 1ULL << tvd->vdev_ms_shift);
 462         }
 463         spa_config_exit(mc->mc_spa, SCL_VDEV, FTAG);
 464         return (space);
 465 }
 466 


 538  * ==========================================================================
 539  */
 540 /*
 541  * Update the allocatable flag and the metaslab group's capacity.
 542  * The allocatable flag is set to true if the capacity is below
 543  * the zfs_mg_noalloc_threshold or has a fragmentation value that is
 544  * greater than zfs_mg_fragmentation_threshold. If a metaslab group
 545  * transitions from allocatable to non-allocatable or vice versa then the
 546  * metaslab group's class is updated to reflect the transition.
 547  */
 548 static void
 549 metaslab_group_alloc_update(metaslab_group_t *mg)
 550 {
 551         vdev_t *vd = mg->mg_vd;
 552         metaslab_class_t *mc = mg->mg_class;
 553         vdev_stat_t *vs = &vd->vdev_stat;
 554         boolean_t was_allocatable;
 555         boolean_t was_initialized;
 556 
 557         ASSERT(vd == vd->vdev_top);


 558 
 559         mutex_enter(&mg->mg_lock);
 560         was_allocatable = mg->mg_allocatable;
 561         was_initialized = mg->mg_initialized;
 562 
 563         mg->mg_free_capacity = ((vs->vs_space - vs->vs_alloc) * 100) /
 564             (vs->vs_space + 1);
 565 
 566         mutex_enter(&mc->mc_lock);
 567 
 568         /*
 569          * If the metaslab group was just added then it won't
 570          * have any space until we finish syncing out this txg.
 571          * At that point we will consider it initialized and available
 572          * for allocations.  We also don't consider non-activated
 573          * metaslab groups (e.g. vdevs that are in the middle of being removed)
 574          * to be initialized, because they can't be used for allocation.
 575          */
 576         mg->mg_initialized = metaslab_group_initialized(mg);
 577         if (!was_initialized && mg->mg_initialized) {


 635         refcount_create_tracked(&mg->mg_alloc_queue_depth);
 636 
 637         mg->mg_taskq = taskq_create("metaslab_group_taskq", metaslab_load_pct,
 638             minclsyspri, 10, INT_MAX, TASKQ_THREADS_CPU_PCT);
 639 
 640         return (mg);
 641 }
 642 
 643 void
 644 metaslab_group_destroy(metaslab_group_t *mg)
 645 {
 646         ASSERT(mg->mg_prev == NULL);
 647         ASSERT(mg->mg_next == NULL);
 648         /*
 649          * We may have gone below zero with the activation count
 650          * either because we never activated in the first place or
 651          * because we're done, and possibly removing the vdev.
 652          */
 653         ASSERT(mg->mg_activation_count <= 0);
 654 
 655         if (mg->mg_taskq)
 656                 taskq_destroy(mg->mg_taskq);
 657         avl_destroy(&mg->mg_metaslab_tree);
 658         mutex_destroy(&mg->mg_lock);
 659         refcount_destroy(&mg->mg_alloc_queue_depth);
 660         kmem_free(mg, sizeof (metaslab_group_t));
 661 }
 662 
 663 void
 664 metaslab_group_activate(metaslab_group_t *mg)
 665 {
 666         metaslab_class_t *mc = mg->mg_class;
 667         metaslab_group_t *mgprev, *mgnext;
 668 
 669         ASSERT(spa_config_held(mc->mc_spa, SCL_ALLOC, RW_WRITER));
 670 
 671         ASSERT(mc->mc_rotor != mg);
 672         ASSERT(mg->mg_prev == NULL);
 673         ASSERT(mg->mg_next == NULL);
 674         ASSERT(mg->mg_activation_count <= 0);
 675 
 676         if (++mg->mg_activation_count <= 0)
 677                 return;
 678 
 679         mg->mg_aliquot = metaslab_aliquot * MAX(1, mg->mg_vd->vdev_children);
 680         metaslab_group_alloc_update(mg);
 681 
 682         if ((mgprev = mc->mc_rotor) == NULL) {
 683                 mg->mg_prev = mg;
 684                 mg->mg_next = mg;
 685         } else {
 686                 mgnext = mgprev->mg_next;
 687                 mg->mg_prev = mgprev;
 688                 mg->mg_next = mgnext;
 689                 mgprev->mg_next = mg;
 690                 mgnext->mg_prev = mg;
 691         }
 692         mc->mc_rotor = mg;
 693 }
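
The class's metaslab groups form a circular, doubly linked ring anchored at mc_rotor; activation either starts a singleton ring or splices the group in next to the current rotor and then points the rotor at it. A standalone sketch of just that list manipulation, on a simplified node type rather than metaslab_group_t:

#include <stddef.h>

/* Simplified stand-in for a metaslab group on the class's rotor ring. */
typedef struct ring_node {
	struct ring_node *prev;
	struct ring_node *next;
} ring_node_t;

/*
 * Splice 'node' into the ring next to '*rotor' (or start a singleton ring
 * if the rotor is empty), then advance the rotor to the new node -- the
 * same shape as the list handling in metaslab_group_activate() above.
 */
void
ring_activate(ring_node_t **rotor, ring_node_t *node)
{
	ring_node_t *prev = *rotor;

	if (prev == NULL) {
		node->prev = node;
		node->next = node;
	} else {
		ring_node_t *next = prev->next;

		node->prev = prev;
		node->next = next;
		prev->next = node;
		next->prev = node;
	}
	*rotor = node;
}
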
 694 
 695 void
 696 metaslab_group_passivate(metaslab_group_t *mg)
 697 {
 698         metaslab_class_t *mc = mg->mg_class;

 699         metaslab_group_t *mgprev, *mgnext;

 700 
 701         ASSERT(spa_config_held(mc->mc_spa, SCL_ALLOC, RW_WRITER));

 702 
 703         if (--mg->mg_activation_count != 0) {
 704                 ASSERT(mc->mc_rotor != mg);
 705                 ASSERT(mg->mg_prev == NULL);
 706                 ASSERT(mg->mg_next == NULL);
 707                 ASSERT(mg->mg_activation_count < 0);
 708                 return;
 709         }
 710 
 711         taskq_wait(mg->mg_taskq);

 712         metaslab_group_alloc_update(mg);
 713 
 714         mgprev = mg->mg_prev;
 715         mgnext = mg->mg_next;
 716 
 717         if (mg == mgnext) {
 718                 mc->mc_rotor = NULL;
 719         } else {
 720                 mc->mc_rotor = mgnext;
 721                 mgprev->mg_next = mgnext;
 722                 mgnext->mg_prev = mgprev;
 723         }
 724 
 725         mg->mg_prev = NULL;
 726         mg->mg_next = NULL;
 727 }
 728 
 729 boolean_t
 730 metaslab_group_initialized(metaslab_group_t *mg)
 731 {


1135         range_seg_t *rs, rsearch;
1136         avl_index_t where;
1137 
1138         rsearch.rs_start = start;
1139         rsearch.rs_end = start + size;
1140 
1141         rs = avl_find(t, &rsearch, &where);
1142         if (rs == NULL) {
1143                 rs = avl_nearest(t, where, AVL_AFTER);
1144         }
1145 
1146         return (rs);
1147 }
1148 
1149 /*
1150  * This is a helper function that can be used by the allocator to find
1151  * a suitable block to allocate. This will search the specified AVL
1152  * tree looking for a block that matches the specified criteria.
1153  */
1154 static uint64_t
1155 metaslab_block_picker(metaslab_t *msp, avl_tree_t *t, uint64_t *cursor,
1156     uint64_t size, uint64_t align)
1157 {
1158         range_seg_t *rs = metaslab_block_find(t, *cursor, size);
1159 
1160         for (; rs != NULL; rs = AVL_NEXT(t, rs)) {
1161                 uint64_t offset = P2ROUNDUP(rs->rs_start, align);
1162 
1163                 if (offset + size <= rs->rs_end &&
1164                     !metaslab_check_trim_conflict(msp, &offset, size, align,
1165                     rs->rs_end)) {
1166                         *cursor = offset + size;
1167                         return (offset);
1168                 }

1169         }
1170 
1171         /*
1172          * If we know we've searched the whole map (*cursor == 0), give up.
1173          * Otherwise, reset the cursor to the beginning and try again.
1174          */
1175         if (*cursor == 0)
1176                 return (-1ULL);
1177 
1178         *cursor = 0;
1179         return (metaslab_block_picker(msp, t, cursor, size, align));
1180 }
1181 
1182 /*
1183  * ==========================================================================
1184  * The first-fit block allocator
1185  * ==========================================================================
1186  */
1187 static uint64_t
1188 metaslab_ff_alloc(metaslab_t *msp, uint64_t size)
1189 {
1190         /*
1191          * Find the largest power of 2 block size that evenly divides the
1192          * requested size. This is used to try to allocate blocks with similar
1193          * alignment from the same area of the metaslab (i.e. same cursor
1194          * bucket), but it does not guarantee that allocations of other
1195          * sizes will not also be made in the same region.
1196          */
1197         uint64_t align = size & -size;
1198         uint64_t *cursor = &msp->ms_lbas[highbit64(align) - 1];
1199         avl_tree_t *t = &msp->ms_tree->rt_root;
1200 
1201         return (metaslab_block_picker(msp, t, cursor, size, align));
1202 }
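
The size & -size expression above isolates the lowest set bit of the request, i.e. the largest power of two that evenly divides it, and highbit64() (the 1-based index of the highest set bit) turns that into an ms_lbas[] cursor slot. A standalone worked example, using a simplified stand-in for highbit64():

#include <stdint.h>
#include <stdio.h>

/* 1-based index of the highest set bit, mirroring the kernel's highbit64(). */
static int
highbit64_sketch(uint64_t v)
{
	int h = 0;

	while (v != 0) {
		h++;
		v >>= 1;
	}
	return (h);
}

int
main(void)
{
	uint64_t size = 0x3000;				/* a 12 KiB request */
	uint64_t align = size & -size;			/* largest power-of-2 divisor: 4 KiB */
	int bucket = highbit64_sketch(align) - 1;	/* ms_lbas[] slot 12 */

	printf("align=%llu bucket=%d\n", (unsigned long long)align, bucket);
	return (0);
}
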
1203 
1204 static metaslab_ops_t metaslab_ff_ops = {
1205         metaslab_ff_alloc
1206 };
1207 
1208 /*
1209  * ==========================================================================
1210  * Dynamic block allocator -
1211  * Uses the first fit allocation scheme until space gets low and then
1212  * adjusts to a best fit allocation method. Uses metaslab_df_alloc_threshold
1213  * and metaslab_df_free_pct to determine when to switch the allocation scheme.
1214  * ==========================================================================
1215  */
1216 static uint64_t
1217 metaslab_df_alloc(metaslab_t *msp, uint64_t size)
1218 {
1219         /*
1220          * Find the largest power of 2 block size that evenly divides the
1221          * requested size. This is used to try to allocate blocks with similar


1229         avl_tree_t *t = &rt->rt_root;
1230         uint64_t max_size = metaslab_block_maxsize(msp);
1231         int free_pct = range_tree_space(rt) * 100 / msp->ms_size;
1232 
1233         ASSERT(MUTEX_HELD(&msp->ms_lock));
1234         ASSERT3U(avl_numnodes(t), ==, avl_numnodes(&msp->ms_size_tree));
1235 
1236         if (max_size < size)
1237                 return (-1ULL);
1238 
1239         /*
1240          * If we're running low on space switch to using the size
1241          * sorted AVL tree (best-fit).
1242          */
1243         if (max_size < metaslab_df_alloc_threshold ||
1244             free_pct < metaslab_df_free_pct) {
1245                 t = &msp->ms_size_tree;
1246                 *cursor = 0;
1247         }
1248 
1249         return (metaslab_block_picker(msp, t, cursor, size, 1ULL));
1250 }
1251 
1252 static metaslab_ops_t metaslab_df_ops = {
1253         metaslab_df_alloc
1254 };
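
The dynamic-fit allocator stays in first-fit mode until either the largest free segment or the free-space percentage falls below its tunables, then switches to the size-sorted (best-fit) tree. A condensed standalone restatement of that decision; the thresholds are taken as parameters here because their default values are not shown in this excerpt:

#include <stdint.h>

typedef enum { DF_FIRST_FIT, DF_BEST_FIT } df_mode_t;

/*
 * Mirrors the switch in metaslab_df_alloc() above: fall back to the
 * size-sorted (best-fit) tree once the metaslab is running low on space.
 */
df_mode_t
df_pick_mode(uint64_t max_seg_size, uint64_t free_space, uint64_t ms_size,
    uint64_t alloc_threshold, int free_pct_threshold)
{
	int free_pct = (int)(free_space * 100 / ms_size);

	if (max_seg_size < alloc_threshold || free_pct < free_pct_threshold)
		return (DF_BEST_FIT);
	return (DF_FIRST_FIT);
}
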
1255 
1256 /*
1257  * ==========================================================================
1258  * Cursor fit block allocator -
1259  * Select the largest region in the metaslab, set the cursor to the beginning
1260  * of the range and the cursor_end to the end of the range. As allocations
1261  * are made, advance the cursor. Continue allocating from the cursor until
1262  * the range is exhausted and then find a new range.
1263  * ==========================================================================
1264  */
1265 static uint64_t
1266 metaslab_cf_alloc(metaslab_t *msp, uint64_t size)
1267 {
1268         range_tree_t *rt = msp->ms_tree;
1269         avl_tree_t *t = &msp->ms_size_tree;
1270         uint64_t *cursor = &msp->ms_lbas[0];
1271         uint64_t *cursor_end = &msp->ms_lbas[1];
1272         uint64_t offset = 0;
1273 
1274         ASSERT(MUTEX_HELD(&msp->ms_lock));
1275         ASSERT3U(avl_numnodes(t), ==, avl_numnodes(&rt->rt_root));
1276 
1277         ASSERT3U(*cursor_end, >=, *cursor);
1278 
1279         if ((*cursor + size) > *cursor_end) {
1280                 range_seg_t *rs;
1281                 for (rs = avl_last(&msp->ms_size_tree);
1282                     rs != NULL && rs->rs_end - rs->rs_start >= size;
1283                     rs = AVL_PREV(&msp->ms_size_tree, rs)) {


1284                         *cursor = rs->rs_start;
1285                         *cursor_end = rs->rs_end;
1286                         if (!metaslab_check_trim_conflict(msp, cursor, size,
1287                             1, *cursor_end)) {
1288                                 /* segment appears to be acceptable */
1289                                 break;
1290                         }
1291                 }
1292                 if (rs == NULL || rs->rs_end - rs->rs_start < size)
1293                         return (-1ULL);
1294         }
1295 
1296         offset = *cursor;
1297         *cursor += size;
1298 
1299         return (offset);
1300 }
1301 
1302 static metaslab_ops_t metaslab_cf_ops = {
1303         metaslab_cf_alloc
1304 };
1305 
1306 /*
1307  * ==========================================================================
1308  * New dynamic fit allocator -
1309  * Select a region that is large enough to allocate 2^metaslab_ndf_clump_shift
1310  * contiguous blocks. If no region is found then just use the largest segment
1311  * that remains.
1312  * ==========================================================================
1313  */
1314 
1315 /*
1316  * Determines desired number of contiguous blocks (2^metaslab_ndf_clump_shift)
1317  * to request from the allocator.
1318  */
1319 uint64_t metaslab_ndf_clump_shift = 4;
1320 
1321 static uint64_t
1322 metaslab_ndf_alloc(metaslab_t *msp, uint64_t size)
1323 {
1324         avl_tree_t *t = &msp->ms_tree->rt_root;
1325         avl_index_t where;
1326         range_seg_t *rs, rsearch;
1327         uint64_t hbit = highbit64(size);
1328         uint64_t *cursor = &msp->ms_lbas[hbit - 1];
1329         uint64_t max_size = metaslab_block_maxsize(msp);
1330         /* mutable copy for adjustment by metaslab_check_trim_conflict */
1331         uint64_t adjustable_start;
1332 
1333         ASSERT(MUTEX_HELD(&msp->ms_lock));
1334         ASSERT3U(avl_numnodes(t), ==, avl_numnodes(&msp->ms_size_tree));
1335 
1336         if (max_size < size)
1337                 return (-1ULL);
1338 
1339         rsearch.rs_start = *cursor;
1340         rsearch.rs_end = *cursor + size;
1341 
1342         rs = avl_find(t, &rsearch, &where);
1343         if (rs != NULL)
1344                 adjustable_start = rs->rs_start;
1345         if (rs == NULL || rs->rs_end - adjustable_start < size ||
1346             metaslab_check_trim_conflict(msp, &adjustable_start, size, 1,
1347             rs->rs_end)) {
1348                 /* segment not usable, try the largest remaining one */
1349                 t = &msp->ms_size_tree;
1350 
1351                 rsearch.rs_start = 0;
1352                 rsearch.rs_end = MIN(max_size,
1353                     1ULL << (hbit + metaslab_ndf_clump_shift));
1354                 rs = avl_find(t, &rsearch, &where);
1355                 if (rs == NULL)
1356                         rs = avl_nearest(t, where, AVL_AFTER);
1357                 ASSERT(rs != NULL);
1358                 adjustable_start = rs->rs_start;
1359                 if (rs->rs_end - adjustable_start < size ||
1360                     metaslab_check_trim_conflict(msp, &adjustable_start,
1361                     size, 1, rs->rs_end)) {
1362                         /* even largest remaining segment not usable */
1363                         return (-1ULL);
1364                 }
1365         }
1366 
1367         *cursor = adjustable_start + size;
1368         return (*cursor);
1369 }
1370 
1371 static metaslab_ops_t metaslab_ndf_ops = {
1372         metaslab_ndf_alloc
1373 };
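
When the cursor segment does not work out, metaslab_ndf_alloc() falls back to searching the size tree for a clump of up to 1 << (hbit + metaslab_ndf_clump_shift) bytes, capped at the largest free segment. A small standalone sketch of that sizing, using the shift of 4 declared above:

#include <stdint.h>

/*
 * Clump target used by the fallback search: for an 8 KiB (2^13) request,
 * highbit64() is 1-based so hbit = 14, and with clump_shift = 4 the target
 * is min(max_size, 1 << 18), i.e. up to 256 KiB.
 */
uint64_t
ndf_clump_target(uint64_t hbit, uint64_t clump_shift, uint64_t max_size)
{
	uint64_t target = 1ULL << (hbit + clump_shift);

	return (target < max_size ? target : max_size);
}
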
1374 
1375 metaslab_ops_t *zfs_metaslab_ops = &metaslab_df_ops;
1376 
1377 /*
1378  * ==========================================================================
1379  * Metaslabs
1380  * ==========================================================================
1381  */
1382 
1383 /*
1384  * Wait for any in-progress metaslab loads to complete.
1385  */
1386 void
1387 metaslab_load_wait(metaslab_t *msp)
1388 {
1389         ASSERT(MUTEX_HELD(&msp->ms_lock));
1390 
1391         while (msp->ms_loading) {
1392                 ASSERT(!msp->ms_loaded);
1393                 cv_wait(&msp->ms_load_cv, &msp->ms_lock);
1394         }
1395 }
1396 
1397 int
1398 metaslab_load(metaslab_t *msp)
1399 {
1400         int error = 0;
1401         boolean_t success = B_FALSE;
1402 
1403         ASSERT(MUTEX_HELD(&msp->ms_lock));
1404         ASSERT(!msp->ms_loaded);
1405         ASSERT(!msp->ms_loading);
1406 
1407         msp->ms_loading = B_TRUE;
1408 
1409         /*
1410          * If the space map has not been allocated yet, then treat
1411          * all the space in the metaslab as free and add it to the
1412          * ms_tree.
1413          */
1414         if (msp->ms_sm != NULL)
1415                 error = space_map_load(msp->ms_sm, msp->ms_tree, SM_FREE);
1416         else
1417                 range_tree_add(msp->ms_tree, msp->ms_start, msp->ms_size);
1418 
1419         success = (error == 0);


1420         msp->ms_loading = B_FALSE;
1421 
1422         if (success) {
1423                 ASSERT3P(msp->ms_group, !=, NULL);
1424                 msp->ms_loaded = B_TRUE;
1425 
1426                 for (int t = 0; t < TXG_DEFER_SIZE; t++) {
1427                         range_tree_walk(msp->ms_defertree[t],
1428                             range_tree_remove, msp->ms_tree);
1429                         range_tree_walk(msp->ms_defertree[t],
1430                             metaslab_trim_remove, msp);
1431                 }
1432                 msp->ms_max_size = metaslab_block_maxsize(msp);
1433         }
1434         cv_broadcast(&msp->ms_load_cv);
1435         return (error);
1436 }
1437 
1438 void
1439 metaslab_unload(metaslab_t *msp)
1440 {
1441         ASSERT(MUTEX_HELD(&msp->ms_lock));
1442         range_tree_vacate(msp->ms_tree, NULL, NULL);
1443         msp->ms_loaded = B_FALSE;
1444         msp->ms_weight &= ~METASLAB_ACTIVE_MASK;
1445         msp->ms_max_size = 0;
1446 }
1447 
1448 int
1449 metaslab_init(metaslab_group_t *mg, uint64_t id, uint64_t object, uint64_t txg,
1450     metaslab_t **msp)
1451 {
1452         vdev_t *vd = mg->mg_vd;
1453         objset_t *mos = vd->vdev_spa->spa_meta_objset;
1454         metaslab_t *ms;
1455         int error;
1456 
1457         ms = kmem_zalloc(sizeof (metaslab_t), KM_SLEEP);
1458         mutex_init(&ms->ms_lock, NULL, MUTEX_DEFAULT, NULL);

1459         cv_init(&ms->ms_load_cv, NULL, CV_DEFAULT, NULL);
1460         cv_init(&ms->ms_trim_cv, NULL, CV_DEFAULT, NULL);
1461         ms->ms_id = id;
1462         ms->ms_start = id << vd->vdev_ms_shift;
1463         ms->ms_size = 1ULL << vd->vdev_ms_shift;
1464 
1465         /*
1466          * We only open space map objects that already exist. All others
1467          * will be opened when we finally allocate an object for them.
1468          */
1469         if (object != 0) {
1470                 error = space_map_open(&ms->ms_sm, mos, object, ms->ms_start,
1471                     ms->ms_size, vd->vdev_ashift, &ms->ms_lock);
1472 
1473                 if (error != 0) {
1474                         kmem_free(ms, sizeof (metaslab_t));
1475                         return (error);
1476                 }
1477 
1478                 ASSERT(ms->ms_sm != NULL);
1479         }
1480 
1481         ms->ms_cur_ts = metaslab_new_trimset(0, &ms->ms_lock);
1482 
1483         /*
1484          * We create the main range tree here, but we don't create the
1485          * other range trees until metaslab_sync_done().  This serves
1486          * two purposes: it allows metaslab_sync_done() to detect the
1487          * addition of new space; and for debugging, it ensures that we'd
1488          * data fault on any attempt to use this metaslab before it's ready.
1489          */
1490         ms->ms_tree = range_tree_create(&metaslab_rt_ops, ms, &ms->ms_lock);
1491         metaslab_group_add(mg, ms);
1492 
1493         metaslab_set_fragmentation(ms);
1494 
1495         /*
1496          * If we're opening an existing pool (txg == 0) or creating
1497          * a new one (txg == TXG_INITIAL), all space is available now.
1498          * If we're adding space to an existing pool, the new space
1499          * does not become available until after this txg has synced.
1500          * The metaslab's weight will also be initialized when we sync
1501          * out this txg. This ensures that we don't attempt to allocate
1502          * from it before we have initialized it completely.
1503          */
1504         if (txg <= TXG_INITIAL)
1505                 metaslab_sync_done(ms, 0);
1506 
1507         /*
1508          * If metaslab_debug_load is set and we're initializing a metaslab
1509          * that has an allocated space map object, then load its space
1510          * map so that we can verify frees.


1534 
1535         mutex_enter(&msp->ms_lock);
1536         VERIFY(msp->ms_group == NULL);
1537         vdev_space_update(mg->mg_vd, -space_map_allocated(msp->ms_sm),
1538             0, -msp->ms_size);
1539         space_map_close(msp->ms_sm);
1540 
1541         metaslab_unload(msp);
1542         range_tree_destroy(msp->ms_tree);
1543         range_tree_destroy(msp->ms_freeingtree);
1544         range_tree_destroy(msp->ms_freedtree);
1545 
1546         for (int t = 0; t < TXG_SIZE; t++) {
1547                 range_tree_destroy(msp->ms_alloctree[t]);
1548         }
1549 
1550         for (int t = 0; t < TXG_DEFER_SIZE; t++) {
1551                 range_tree_destroy(msp->ms_defertree[t]);
1552         }
1553 
1554         metaslab_free_trimset(msp->ms_cur_ts);
1555         if (msp->ms_prev_ts)
1556                 metaslab_free_trimset(msp->ms_prev_ts);
1557         ASSERT3P(msp->ms_trimming_ts, ==, NULL);
1558 
1559         ASSERT0(msp->ms_deferspace);
1560 
1561         mutex_exit(&msp->ms_lock);
1562         cv_destroy(&msp->ms_load_cv);
1563         cv_destroy(&msp->ms_trim_cv);
1564         mutex_destroy(&msp->ms_lock);

1565 
1566         kmem_free(msp, sizeof (metaslab_t));
1567 }
1568 
1569 #define FRAGMENTATION_TABLE_SIZE        17
1570 
1571 /*
1572  * This table defines a segment size based fragmentation metric that will
1573  * allow each metaslab to derive its own fragmentation value. This is done
1574  * by calculating the space in each bucket of the spacemap histogram and
1575  * multiplying that by the fragmentation metric in this table. Doing
1576  * this for all buckets and dividing it by the total amount of free
1577  * space in this metaslab (i.e. the total free space in all buckets) gives
1578  * us the fragmentation metric. This means that a high fragmentation metric
1579  * equates to most of the free space being comprised of small segments.
1580  * Conversely, if the metric is low, then most of the free space is in
1581  * large segments. A 10% change in fragmentation equates to approximately
1582  * double the number of segments.
1583  *
1584  * This table defines 0% fragmented space using 16MB segments. Testing has


1910                  */
1911                 should_allocate = (asize <
1912                     1ULL << (WEIGHT_GET_INDEX(msp->ms_weight) + 1));
1913         } else {
1914                 should_allocate = (asize <=
1915                     (msp->ms_weight & ~METASLAB_WEIGHT_TYPE));
1916         }
1917         return (should_allocate);
1918 }
1919 
1920 static uint64_t
1921 metaslab_weight(metaslab_t *msp)
1922 {
1923         vdev_t *vd = msp->ms_group->mg_vd;
1924         spa_t *spa = vd->vdev_spa;
1925         uint64_t weight;
1926 
1927         ASSERT(MUTEX_HELD(&msp->ms_lock));
1928 
1929         /*
1930          * This vdev is in the process of being removed so there is nothing
1931          * for us to do here.
1932          */
1933         if (vd->vdev_removing) {
1934                 ASSERT0(space_map_allocated(msp->ms_sm));
1935                 ASSERT0(vd->vdev_ms_shift);
1936                 return (0);
1937         }
1938 
1939         metaslab_set_fragmentation(msp);
1940 
1941         /*
1942          * Update the maximum size if the metaslab is loaded. This will
1943          * ensure that we get an accurate maximum size if newly freed space
1944          * has been added back into the free tree.
1945          */
1946         if (msp->ms_loaded)
1947                 msp->ms_max_size = metaslab_block_maxsize(msp);
1948 
1949         /*
1950          * Segment-based weighting requires space map histogram support.
1951          */
1952         if (zfs_metaslab_segment_weight_enabled &&
1953             spa_feature_is_enabled(spa, SPA_FEATURE_SPACEMAP_HISTOGRAM) &&
1954             (msp->ms_sm == NULL || msp->ms_sm->sm_dbuf->db_size ==
1955             sizeof (space_map_phys_t))) {
1956                 weight = metaslab_segment_weight(msp);
1957         } else {


2049         if (!msp->ms_loaded)
2050                 (void) metaslab_load(msp);
2051         msp->ms_selected_txg = spa_syncing_txg(spa);
2052         mutex_exit(&msp->ms_lock);
2053 }
2054 
2055 static void
2056 metaslab_group_preload(metaslab_group_t *mg)
2057 {
2058         spa_t *spa = mg->mg_vd->vdev_spa;
2059         metaslab_t *msp;
2060         avl_tree_t *t = &mg->mg_metaslab_tree;
2061         int m = 0;
2062 
2063         if (spa_shutting_down(spa) || !metaslab_preload_enabled) {
2064                 taskq_wait(mg->mg_taskq);
2065                 return;
2066         }
2067 
2068         mutex_enter(&mg->mg_lock);

2069         /*
2070          * Load the next potential metaslabs
2071          */
2072         for (msp = avl_first(t); msp != NULL; msp = AVL_NEXT(t, msp)) {


2073                 /*
2074                  * We preload only the maximum number of metaslabs specified
2075                  * by metaslab_preload_limit. If a metaslab is being forced
2076                  * to condense then we preload it too. This will ensure
2077                  * that force condensing happens in the next txg.
2078                  */
2079                 if (++m > metaslab_preload_limit && !msp->ms_condense_wanted) {
2080                         continue;
2081                 }
2082 
2083                 VERIFY(taskq_dispatch(mg->mg_taskq, metaslab_preload,
2084                     msp, TQ_SLEEP) != NULL);
2085         }
2086         mutex_exit(&mg->mg_lock);
2087 }
2088 
2089 /*
2090  * Determine if the space map's on-disk footprint is past our tolerance
2091  * for inefficiency. We would like to use the following criteria to make
2092  * our decision:
2093  *
2094  * 1. The size of the space map object should not dramatically increase as a
2095  * result of writing out the free space range tree.
2096  *
2097  * 2. The minimal on-disk space map representation is zfs_condense_pct/100
2098  * times the size of the free space range tree representation
2099  * (i.e. zfs_condense_pct = 110 and in-core = 1MB, minimal = 1.1MB).
2100  *
2101  * 3. The on-disk size of the space map should actually decrease.
2102  *
2103  * Checking the first condition is tricky since we don't want to walk
2104  * the entire AVL tree calculating the estimated on-disk size. Instead we
2105  * use the size-ordered range tree in the metaslab and calculate the
2106  * size required to write out the largest segment in our free tree. If the
2107  * size required to represent that segment on disk is larger than the space
2108  * map object then we avoid condensing this map.
2109  *
2110  * To determine the second criterion we use a best-case estimate and assume
2111  * each segment can be represented on-disk as a single 64-bit entry. We refer
2112  * to this best-case estimate as the space map's minimal form.
2113  *
2114  * Unfortunately, we cannot compute the on-disk size of the space map in this
2115  * context because we cannot accurately compute the effects of compression, etc.
2116  * Instead, we apply the heuristic described in the block comment for
2117  * zfs_metaslab_condense_block_threshold - we only condense if the space used
2118  * is greater than a threshold number of blocks.
2119  */
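
A hedged sketch of criteria 2 and 3 from the comment above (the largest-segment check from criterion 1 is omitted), treating each free segment as a single 64-bit space map entry for the best-case "minimal form" estimate; the tunables are passed as parameters since only the comment, not the implementation, appears in this excerpt:

#include <stdint.h>

/* Best-case on-disk size: every free segment as one 64-bit space map entry. */
static uint64_t
sm_minimal_size(uint64_t nsegments)
{
	return (nsegments * sizeof (uint64_t));
}

/*
 * Condense only if the current space map is more than condense_pct/100
 * times its minimal form and is also large enough (in bytes) to be worth
 * rewriting at all.
 */
int
should_condense_sketch(uint64_t sm_length, uint64_t nsegments,
    uint64_t condense_pct, uint64_t min_bytes_threshold)
{
	uint64_t minimal = sm_minimal_size(nsegments);

	return (sm_length > (minimal * condense_pct) / 100 &&
	    sm_length > min_bytes_threshold);
}
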


2176         ASSERT3U(spa_sync_pass(spa), ==, 1);
2177         ASSERT(msp->ms_loaded);
2178 
2179 
2180         spa_dbgmsg(spa, "condensing: txg %llu, msp[%llu] %p, vdev id %llu, "
2181             "spa %s, smp size %llu, segments %lu, forcing condense=%s", txg,
2182             msp->ms_id, msp, msp->ms_group->mg_vd->vdev_id,
2183             msp->ms_group->mg_vd->vdev_spa->spa_name,
2184             space_map_length(msp->ms_sm), avl_numnodes(&msp->ms_tree->rt_root),
2185             msp->ms_condense_wanted ? "TRUE" : "FALSE");
2186 
2187         msp->ms_condense_wanted = B_FALSE;
2188 
2189         /*
2190          * Create a range tree that is 100% allocated. We remove segments
2191          * that have been freed in this txg, any deferred frees that exist,
2192          * and any allocation in the future. Removing segments should be
2193          * a relatively inexpensive operation since we expect these trees to
2194          * have a small number of nodes.
2195          */
2196         condense_tree = range_tree_create(NULL, NULL, &msp->ms_lock);
2197         range_tree_add(condense_tree, msp->ms_start, msp->ms_size);
2198 
2199         /*
2200          * Remove what's been freed in this txg from the condense_tree.
2201          * Since we're in sync_pass 1, we know that all the frees from
2202          * this txg are in the freeingtree.
2203          */
2204         range_tree_walk(msp->ms_freeingtree, range_tree_remove, condense_tree);
2205 
2206         for (int t = 0; t < TXG_DEFER_SIZE; t++) {
2207                 range_tree_walk(msp->ms_defertree[t],
2208                     range_tree_remove, condense_tree);
2209         }
2210 
2211         for (int t = 1; t < TXG_CONCURRENT_STATES; t++) {
2212                 range_tree_walk(msp->ms_alloctree[(txg + t) & TXG_MASK],
2213                     range_tree_remove, condense_tree);
2214         }
2215 
2216         /*
2217          * We're about to drop the metaslab's lock, thus allowing
2218          * other consumers to change its content. Set the
2219          * metaslab's ms_condensing flag to ensure that
2220          * allocations on this metaslab do not occur while we're
2221          * in the middle of committing it to disk. This is only critical
2222          * for the ms_tree as all other range trees use per txg
2223          * views of their content.
2224          */
2225         msp->ms_condensing = B_TRUE;
2226 
2227         mutex_exit(&msp->ms_lock);
2228         space_map_truncate(sm, tx);
2229         mutex_enter(&msp->ms_lock);
2230 
2231         /*
2232          * While we would ideally like to create a space map representation
2233          * that consists only of allocation records, doing so can be
2234          * prohibitively expensive because the in-core free tree can be
2235          * large, and therefore computationally expensive to subtract
2236          * from the condense_tree. Instead we sync out two trees, a cheap
2237          * allocation only tree followed by the in-core free tree. While not
2238          * optimal, this is typically close to optimal, and much cheaper to
2239          * compute.
2240          */
2241         space_map_write(sm, condense_tree, SM_ALLOC, tx);
2242         range_tree_vacate(condense_tree, NULL, NULL);
2243         range_tree_destroy(condense_tree);
2244 
2245         space_map_write(sm, msp->ms_tree, SM_FREE, tx);

2246         msp->ms_condensing = B_FALSE;
2247 }
2248 
2249 /*
2250  * Write a metaslab to disk in the context of the specified transaction group.
2251  */
2252 void
2253 metaslab_sync(metaslab_t *msp, uint64_t txg)
2254 {
2255         metaslab_group_t *mg = msp->ms_group;
2256         vdev_t *vd = mg->mg_vd;
2257         spa_t *spa = vd->vdev_spa;
2258         objset_t *mos = spa_meta_objset(spa);
2259         range_tree_t *alloctree = msp->ms_alloctree[txg & TXG_MASK];
2260         dmu_tx_t *tx;
2261         uint64_t object = space_map_object(msp->ms_sm);
2262 
2263         ASSERT(!vd->vdev_ishole);
2264 
2265         mutex_enter(&msp->ms_lock);
2266 
2267         /*
2268          * This metaslab has just been added so there's no work to do now.
2269          */
2270         if (msp->ms_freeingtree == NULL) {
2271                 ASSERT3P(alloctree, ==, NULL);
2272                 mutex_exit(&msp->ms_lock);
2273                 return;
2274         }
2275 
2276         ASSERT3P(alloctree, !=, NULL);
2277         ASSERT3P(msp->ms_freeingtree, !=, NULL);
2278         ASSERT3P(msp->ms_freedtree, !=, NULL);
2279 
2280         /*
2281          * Normally, we don't want to process a metaslab if there
2282          * are no allocations or frees to perform. However, if the metaslab
2283          * is being forced to condense and it's loaded, we need to let it
2284          * through.
2285          */
2286         if (range_tree_space(alloctree) == 0 &&
2287             range_tree_space(msp->ms_freeingtree) == 0 &&
2288             !(msp->ms_loaded && msp->ms_condense_wanted)) {
2289                 mutex_exit(&msp->ms_lock);
2290                 return;
2291         }
2292 
2293 
2294         VERIFY(txg <= spa_final_dirty_txg(spa));
2295 
2296         /*
2297          * The only state that can actually be changing concurrently with
2298          * metaslab_sync() is the metaslab's ms_tree.  No other thread can
2299          * be modifying this txg's alloctree, freeingtree, freedtree, or
2300          * space_map_phys_t. Therefore, we only hold ms_lock to satisfy
2301          * space map ASSERTs. We drop it whenever we call into the DMU,
2302          * because the DMU can call down to us (e.g. via zio_free()) at
2303          * any time.
2304          */
2305 
2306         tx = dmu_tx_create_assigned(spa_get_dsl(spa), txg);
2307 
2308         if (msp->ms_sm == NULL) {
2309                 uint64_t new_object;
2310 
2311                 new_object = space_map_alloc(mos, tx);
2312                 VERIFY3U(new_object, !=, 0);
2313 
2314                 VERIFY0(space_map_open(&msp->ms_sm, mos, new_object,
2315                     msp->ms_start, msp->ms_size, vd->vdev_ashift,
2316                     &msp->ms_lock));
2317                 ASSERT(msp->ms_sm != NULL);
2318         }
2319 



2320         /*
2321          * Note: metaslab_condense() clears the space map's histogram.
2322          * Therefore we must verify and remove this histogram before
2323          * condensing.
2324          */
2325         metaslab_group_histogram_verify(mg);
2326         metaslab_class_histogram_verify(mg->mg_class);
2327         metaslab_group_histogram_remove(mg, msp);
2328 
2329         if (msp->ms_loaded && spa_sync_pass(spa) == 1 &&
2330             metaslab_should_condense(msp)) {
2331                 metaslab_condense(msp, txg, tx);
2332         } else {

2333                 space_map_write(msp->ms_sm, alloctree, SM_ALLOC, tx);
2334                 space_map_write(msp->ms_sm, msp->ms_freeingtree, SM_FREE, tx);

2335         }
2336 
2337         if (msp->ms_loaded) {
2338                 /*
2339                  * When the space map is loaded, we have an accurate
2340                  * histogram in the range tree. This gives us an opportunity
2341                  * to bring the space map's histogram up-to-date so we clear
2342                  * it first before updating it.
2343                  */
2344                 space_map_histogram_clear(msp->ms_sm);
2345                 space_map_histogram_add(msp->ms_sm, msp->ms_tree, tx);
2346 
2347                 /*
2348                  * Since we've cleared the histogram we need to add back
2349                  * any free space that has already been processed, plus
2350                  * any deferred space. This allows the on-disk histogram
2351                  * to accurately reflect all free space even if some space
2352                  * is not yet available for allocation (i.e. deferred).
2353                  */
2354                 space_map_histogram_add(msp->ms_sm, msp->ms_freedtree, tx);
2355 
2356                 /*
2357                  * Add back any deferred free space that has not been
2358                  * added back into the in-core free tree yet. This will
2359                  * ensure that we don't end up with a space map histogram


2387          */
2388         if (spa_sync_pass(spa) == 1) {
2389                 range_tree_swap(&msp->ms_freeingtree, &msp->ms_freedtree);
2390         } else {
2391                 range_tree_vacate(msp->ms_freeingtree,
2392                     range_tree_add, msp->ms_freedtree);
2393         }
2394         range_tree_vacate(alloctree, NULL, NULL);
2395 
2396         ASSERT0(range_tree_space(msp->ms_alloctree[txg & TXG_MASK]));
2397         ASSERT0(range_tree_space(msp->ms_alloctree[TXG_CLEAN(txg) & TXG_MASK]));
2398         ASSERT0(range_tree_space(msp->ms_freeingtree));
2399 
2400         mutex_exit(&msp->ms_lock);
2401 
2402         if (object != space_map_object(msp->ms_sm)) {
2403                 object = space_map_object(msp->ms_sm);
2404                 dmu_write(mos, vd->vdev_ms_array, sizeof (uint64_t) *
2405                     msp->ms_id, sizeof (uint64_t), &object, tx);
2406         }

2407         dmu_tx_commit(tx);
2408 }
2409 
2410 /*
2411  * Called after a transaction group has completely synced to mark
2412  * all of the metaslab's free space as usable.
2413  */
2414 void
2415 metaslab_sync_done(metaslab_t *msp, uint64_t txg)
2416 {
2417         metaslab_group_t *mg = msp->ms_group;
2418         vdev_t *vd = mg->mg_vd;
2419         spa_t *spa = vd->vdev_spa;
2420         range_tree_t **defer_tree;
2421         int64_t alloc_delta, defer_delta;
2422         boolean_t defer_allowed = B_TRUE;
2423 
2424         ASSERT(!vd->vdev_ishole);
2425 
2426         mutex_enter(&msp->ms_lock);
2427 
2428         /*
2429          * If this metaslab is just becoming available, initialize its
2430          * range trees and add its capacity to the vdev.
2431          */
2432         if (msp->ms_freedtree == NULL) {
2433                 for (int t = 0; t < TXG_SIZE; t++) {
2434                         ASSERT(msp->ms_alloctree[t] == NULL);
2435 
2436                         msp->ms_alloctree[t] = range_tree_create(NULL, msp,
2437                             &msp->ms_lock);
2438                 }
2439 
2440                 ASSERT3P(msp->ms_freeingtree, ==, NULL);
2441                 msp->ms_freeingtree = range_tree_create(NULL, msp,
2442                     &msp->ms_lock);
2443 
2444                 ASSERT3P(msp->ms_freedtree, ==, NULL);
2445                 msp->ms_freedtree = range_tree_create(NULL, msp,
2446                     &msp->ms_lock);
2447 
2448                 for (int t = 0; t < TXG_DEFER_SIZE; t++) {
2449                         ASSERT(msp->ms_defertree[t] == NULL);
2450 
2451                         msp->ms_defertree[t] = range_tree_create(NULL, msp,
2452                             &msp->ms_lock);
2453                 }
2454 
2455                 vdev_space_update(vd, 0, 0, msp->ms_size);
2456         }
2457 
2458         defer_tree = &msp->ms_defertree[txg % TXG_DEFER_SIZE];
2459 
2460         uint64_t free_space = metaslab_class_get_space(spa_normal_class(spa)) -
2461             metaslab_class_get_alloc(spa_normal_class(spa));
2462         if (free_space <= spa_get_slop_space(spa)) {
2463                 defer_allowed = B_FALSE;
2464         }
2465 
2466         defer_delta = 0;
2467         alloc_delta = space_map_alloc_delta(msp->ms_sm);
2468         if (defer_allowed) {
2469                 defer_delta = range_tree_space(msp->ms_freedtree) -
2470                     range_tree_space(*defer_tree);
2471         } else {
2472                 defer_delta -= range_tree_space(*defer_tree);
2473         }
2474 
2475         vdev_space_update(vd, alloc_delta + defer_delta, defer_delta, 0);
2476 
2477         /*
2478          * If there's a metaslab_load() in progress, wait for it to complete
2479          * so that we have a consistent view of the in-core space map.
2480          */
2481         metaslab_load_wait(msp);
2482 
2483         /*
2484          * Move the frees from the defer_tree back to the free
2485          * range tree (if it's loaded). Swap the freed_tree and the
2486          * defer_tree -- this is safe to do because we've just emptied out
2487          * the defer_tree.
2488          */
2489         if (spa_get_auto_trim(spa) == SPA_AUTO_TRIM_ON &&
2490             !vd->vdev_man_trimming) {
2491                 range_tree_walk(*defer_tree, metaslab_trim_add, msp);
2492                 if (!defer_allowed) {
2493                         range_tree_walk(msp->ms_freedtree, metaslab_trim_add,
2494                             msp);
2495                 }
2496         }
2497         range_tree_vacate(*defer_tree,
2498             msp->ms_loaded ? range_tree_add : NULL, msp->ms_tree);
2499         if (defer_allowed) {
2500                 range_tree_swap(&msp->ms_freedtree, defer_tree);
2501         } else {
2502                 range_tree_vacate(msp->ms_freedtree,
2503                     msp->ms_loaded ? range_tree_add : NULL, msp->ms_tree);
2504         }
2505 
2506         space_map_update(msp->ms_sm);
2507 
2508         msp->ms_deferspace += defer_delta;
2509         ASSERT3S(msp->ms_deferspace, >=, 0);
2510         ASSERT3S(msp->ms_deferspace, <=, msp->ms_size);
2511         if (msp->ms_deferspace != 0) {
2512                 /*
2513                  * Keep syncing this metaslab until all deferred frees
2514                  * are back in circulation.
2515                  */
2516                 vdev_dirty(vd, VDD_METASLAB, msp, txg + 1);


2520          * Calculate the new weights before unloading any metaslabs.
2521          * This will give us the most accurate weighting.
2522          */
2523         metaslab_group_sort(mg, msp, metaslab_weight(msp));
2524 
2525         /*
2526          * If the metaslab is loaded and we've not tried to load or allocate
2527          * from it in 'metaslab_unload_delay' txgs, then unload it.
2528          */
2529         if (msp->ms_loaded &&
2530             msp->ms_selected_txg + metaslab_unload_delay < txg) {
2531                 for (int t = 1; t < TXG_CONCURRENT_STATES; t++) {
2532                         VERIFY0(range_tree_space(
2533                             msp->ms_alloctree[(txg + t) & TXG_MASK]));
2534                 }
2535 
2536                 if (!metaslab_debug_unload)
2537                         metaslab_unload(msp);
2538         }
2539 
2540         mutex_exit(&msp->ms_lock);
2541 }
2542 
2543 void
2544 metaslab_sync_reassess(metaslab_group_t *mg)
2545 {



2546         metaslab_group_alloc_update(mg);
2547         mg->mg_fragmentation = metaslab_group_fragmentation(mg);
2548 
2549         /*
2550          * Preload the next potential metaslabs
2551          */

2552         metaslab_group_preload(mg);


2553 }
2554 
2555 static uint64_t
2556 metaslab_distance(metaslab_t *msp, dva_t *dva)
2557 {
2558         uint64_t ms_shift = msp->ms_group->mg_vd->vdev_ms_shift;
2559         uint64_t offset = DVA_GET_OFFSET(dva) >> ms_shift;
2560         uint64_t start = msp->ms_id;
2561 
2562         if (msp->ms_group->mg_vd->vdev_id != DVA_GET_VDEV(dva))
2563                 return (1ULL << 63);
2564 
2565         if (offset < start)
2566                 return ((start - offset) << ms_shift);
2567         if (offset > start)
2568                 return ((offset - start) << ms_shift);
2569         return (0);
2570 }
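
/*
 * Worked example for metaslab_distance() above: with vdev_ms_shift = 30
 * (1 GiB metaslabs, illustrative), a candidate metaslab with ms_id = 14
 * and an existing DVA whose offset falls in metaslab 10 of the same vdev
 * are (14 - 10) << 30 = 4 GiB apart; a DVA on any other vdev reports
 * 1ULL << 63, i.e. effectively infinite distance, so it never constrains
 * placement on this vdev.
 */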
2571 
2572 /*


2726 }
2727 
2728 static uint64_t
2729 metaslab_block_alloc(metaslab_t *msp, uint64_t size, uint64_t txg)
2730 {
2731         uint64_t start;
2732         range_tree_t *rt = msp->ms_tree;
2733         metaslab_class_t *mc = msp->ms_group->mg_class;
2734 
2735         VERIFY(!msp->ms_condensing);
2736 
2737         start = mc->mc_ops->msop_alloc(msp, size);
2738         if (start != -1ULL) {
2739                 metaslab_group_t *mg = msp->ms_group;
2740                 vdev_t *vd = mg->mg_vd;
2741 
2742                 VERIFY0(P2PHASE(start, 1ULL << vd->vdev_ashift));
2743                 VERIFY0(P2PHASE(size, 1ULL << vd->vdev_ashift));
2744                 VERIFY3U(range_tree_space(rt) - size, <=, msp->ms_size);
2745                 range_tree_remove(rt, start, size);
2746                 metaslab_trim_remove(msp, start, size);
2747 
2748                 if (range_tree_space(msp->ms_alloctree[txg & TXG_MASK]) == 0)
2749                         vdev_dirty(mg->mg_vd, VDD_METASLAB, msp, txg);
2750 
2751                 range_tree_add(msp->ms_alloctree[txg & TXG_MASK], start, size);
2752 
2753                 /* Track the last successful allocation */
2754                 msp->ms_alloc_txg = txg;
2755                 metaslab_verify_space(msp, txg);
2756         }
2757 
2758         /*
2759          * Now that we've attempted the allocation we need to update the
2760          * metaslab's maximum block size since it may have changed.
2761          */
2762         msp->ms_max_size = metaslab_block_maxsize(msp);
2763         return (start);
2764 }
2765 
2766 static uint64_t
2767 metaslab_group_alloc_normal(metaslab_group_t *mg, zio_alloc_list_t *zal,
2768     uint64_t asize, uint64_t txg, uint64_t min_distance, dva_t *dva, int d,
2769     int flags)
2770 {
2771         metaslab_t *msp = NULL;
2772         uint64_t offset = -1ULL;
2773         uint64_t activation_weight;
2774         uint64_t target_distance;
2775         int i;
2776 
2777         activation_weight = METASLAB_WEIGHT_PRIMARY;
2778         for (i = 0; i < d; i++) {
2779                 if (DVA_GET_VDEV(&dva[i]) == mg->mg_vd->vdev_id) {
2780                         activation_weight = METASLAB_WEIGHT_SECONDARY;
2781                         break;
2782                 }
2783         }
2784 
2785         metaslab_t *search = kmem_alloc(sizeof (*search), KM_SLEEP);
2786         search->ms_weight = UINT64_MAX;
2787         search->ms_start = 0;
2788         for (;;) {
2789                 boolean_t was_active;
2790                 boolean_t pass_primary = B_TRUE;
2791                 avl_tree_t *t = &mg->mg_metaslab_tree;
2792                 avl_index_t idx;
2793 
2794                 mutex_enter(&mg->mg_lock);
2795 
2796                 /*
2797                  * Find the metaslab with the highest weight that is less
2798                  * than what we've already tried.  In the common case, this
2799                  * means that we will examine each metaslab at most once.
2800                  * Note that concurrent callers could reorder metaslabs
2801                  * by activation/passivation once we have dropped the mg_lock.
2802                  * If a metaslab is activated by another thread, and we fail
2803                  * to allocate from the metaslab we have selected, we may
2804                  * not try the newly-activated metaslab, and instead activate
2805                  * another metaslab.  This is not optimal, but generally
2806                  * does not cause any problems (a possible exception being
2807                  * if every metaslab is completely full except for the
2808                  * newly-activated metaslab, which we fail to examine).
2809                  */
2810                 msp = avl_find(t, search, &idx);
2811                 if (msp == NULL)
2812                         msp = avl_nearest(t, idx, AVL_AFTER);
2813                 for (; msp != NULL; msp = AVL_NEXT(t, msp)) {
2814 
2815                         if (!metaslab_should_allocate(msp, asize)) {
2816                                 metaslab_trace_add(zal, mg, msp, asize, d,
2817                                     TRACE_TOO_SMALL);
2818                                 continue;
2819                         }
2820 
2821                         /*
2822                          * If the selected metaslab is condensing, skip it.
2823                          */
2824                         if (msp->ms_condensing)
2825                                 continue;
2826 
2827                         was_active = msp->ms_weight & METASLAB_ACTIVE_MASK;
2828                         if (flags & METASLAB_USE_WEIGHT_SECONDARY) {
2829                                 if (!pass_primary) {
2830                                         DTRACE_PROBE(metaslab_use_secondary);
2831                                         activation_weight =
2832                                             METASLAB_WEIGHT_SECONDARY;
2833                                         break;
2834                                 }
2835 
2836                                 pass_primary = B_FALSE;
2837                         } else {
2838                                 if (activation_weight ==
2839                                     METASLAB_WEIGHT_PRIMARY)
2840                                         break;
2841 
2842                                 target_distance = min_distance +
2843                                     (space_map_allocated(msp->ms_sm) != 0 ? 0 :
2844                                     min_distance >> 1);
2845 
2846                                 for (i = 0; i < d; i++)
2847                                         if (metaslab_distance(msp, &dva[i]) <
2848                                             target_distance)
2849                                                 break;

2850                                 if (i == d)
2851                                         break;
2852                         }
2853                 }
2854                 mutex_exit(&mg->mg_lock);
2855                 if (msp == NULL) {
2856                         kmem_free(search, sizeof (*search));
2857                         return (-1ULL);
2858                 }
2859                 search->ms_weight = msp->ms_weight;
2860                 search->ms_start = msp->ms_start + 1;
2861 
2862                 mutex_enter(&msp->ms_lock);
2863 
2864                 /*
2865                  * Ensure that the metaslab we have selected is still
2866                  * capable of handling our request. It's possible that
2867                  * another thread may have changed the weight while we
2868                  * were blocked on the metaslab lock. We check the
2869                  * active status first to see if we need to reselect
2870                  * a new metaslab.
2871                  */
2872                 if (was_active && !(msp->ms_weight & METASLAB_ACTIVE_MASK)) {
2873                         mutex_exit(&msp->ms_lock);


2954                         metaslab_passivate(msp,
2955                             metaslab_weight_from_range_tree(msp));
2956                 }
2957 
2958                 /*
2959                  * We have just failed an allocation attempt, check
2960                  * that metaslab_should_allocate() agrees. Otherwise,
2961                  * we may end up in an infinite loop retrying the same
2962                  * metaslab.
2963                  */
2964                 ASSERT(!metaslab_should_allocate(msp, asize));
2965                 mutex_exit(&msp->ms_lock);
2966         }
2967         mutex_exit(&msp->ms_lock);
2968         kmem_free(search, sizeof (*search));
2969         return (offset);
2970 }
2971 
2972 static uint64_t
2973 metaslab_group_alloc(metaslab_group_t *mg, zio_alloc_list_t *zal,
2974     uint64_t asize, uint64_t txg, uint64_t min_distance, dva_t *dva,
2975     int d, int flags)
2976 {
2977         uint64_t offset;
2978         ASSERT(mg->mg_initialized);
2979 
2980         offset = metaslab_group_alloc_normal(mg, zal, asize, txg,
2981             min_distance, dva, d, flags);
2982 
2983         mutex_enter(&mg->mg_lock);
2984         if (offset == -1ULL) {
2985                 mg->mg_failed_allocations++;
2986                 metaslab_trace_add(zal, mg, NULL, asize, d,
2987                     TRACE_GROUP_FAILURE);
2988                 if (asize == SPA_GANGBLOCKSIZE) {
2989                         /*
2990                          * This metaslab group was unable to allocate
2991                          * the minimum gang block size so it must be out of
2992                          * space. We must notify the allocation throttle
2993                          * to start skipping allocation attempts to this
2994                          * metaslab group until more space becomes available.
2995                          * Note: this failure cannot be caused by the
2996                          * allocation throttle since the allocation throttle
2997                          * is only responsible for skipping devices and
2998                          * not failing block allocations.
2999                          */
3000                         mg->mg_no_free_space = B_TRUE;
3001                 }
3002         }
3003         mg->mg_allocations++;
3004         mutex_exit(&mg->mg_lock);
3005         return (offset);
3006 }
3007 
3008 /*
3009  * If we have to write a ditto block (i.e. more than one DVA for a given BP)
3010  * on the same vdev as an existing DVA of this BP, then try to allocate it
3011  * at least (vdev_asize / (2 ^ ditto_same_vdev_distance_shift)) away from the
3012  * existing DVAs.
3013  */
3014 int ditto_same_vdev_distance_shift = 3;
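
/*
 * Example with the default shift of 3: on a 1 TiB top-level vdev the
 * requested separation is 1 TiB >> 3 = 128 GiB between DVAs of the same
 * BP. As the allocation loop below notes, if that distance would be no
 * larger than a single metaslab it is dropped to 0.
 */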
3015 
3016 /*
3017  * Allocate a block for the specified i/o.
3018  */
3019 static int
3020 metaslab_alloc_dva(spa_t *spa, metaslab_class_t *mc, uint64_t psize,
3021     dva_t *dva, int d, dva_t *hintdva, uint64_t txg, int flags,
3022     zio_alloc_list_t *zal)
3023 {
3024         metaslab_group_t *mg, *rotor;
3025         vdev_t *vd;
3026         boolean_t try_hard = B_FALSE;
3027 
3028         ASSERT(!DVA_IS_VALID(&dva[d]));
3029 
3030         /*
3031          * For testing, make some blocks above a certain size be gang blocks.
3032          */
3033         if (psize >= metaslab_gang_bang && (ddi_get_lbolt() & 3) == 0) {
3034                 metaslab_trace_add(zal, NULL, NULL, psize, d, TRACE_FORCE_GANG);
3035                 return (SET_ERROR(ENOSPC));
3036         }
3037 
3038         /*
3039          * Start at the rotor and loop through all mgs until we find something.


3045          * consecutive vdevs.  If we're forced to reuse a vdev before we've
3046          * allocated all of our ditto blocks, then try and spread them out on
3047          * that vdev as much as possible.  If it turns out to not be possible,
3048          * gradually lower our standards until anything becomes acceptable.
3049          * Also, allocating on consecutive vdevs (as opposed to random vdevs)
3050          * gives us hope of containing our fault domains to something we're
3051          * able to reason about.  Otherwise, any two top-level vdev failures
3052          * will guarantee the loss of data.  With consecutive allocation,
3053          * only two adjacent top-level vdev failures will result in data loss.
3054          *
3055          * If we are doing gang blocks (hintdva is non-NULL), try to keep
3056          * ourselves on the same vdev as our gang block header.  That
3057          * way, we can hope for locality in vdev_cache, plus it makes our
3058          * fault domains something tractable.
3059          */
3060         if (hintdva) {
3061                 vd = vdev_lookup_top(spa, DVA_GET_VDEV(&hintdva[d]));
3062 
3063                 /*
3064                  * It's possible the vdev we're using as the hint no
3065                  * longer exists (i.e. removed). Consult the rotor when

3066                  * all else fails.
3067                  */
3068                 if (vd != NULL) {
3069                         mg = vd->vdev_mg;
3070 
3071                         if (flags & METASLAB_HINTBP_AVOID &&
3072                             mg->mg_next != NULL)
3073                                 mg = mg->mg_next;
3074                 } else {
3075                         mg = mc->mc_rotor;
3076                 }
3077         } else if (d != 0) {
3078                 vd = vdev_lookup_top(spa, DVA_GET_VDEV(&dva[d - 1]));
3079                 mg = vd->vdev_mg->mg_next;
3080         } else {
3081                 mg = mc->mc_rotor;
3082         }
3083 
3084         /*
3085          * If the hint put us into the wrong metaslab class, or into a
3086          * metaslab group that has been passivated, just follow the rotor.
3087          */
3088         if (mg->mg_class != mc || mg->mg_activation_count <= 0)


3143                 ASSERT(mg->mg_class == mc);
3144 
3145                 /*
3146                  * If we don't need to try hard, then require that the
3147                  * block be 1/8th of the device away from any other DVAs
3148                  * in this BP.  If we are trying hard, allow any offset
3149                  * to be used (distance=0).
3150                  */
3151                 uint64_t distance = 0;
3152                 if (!try_hard) {
3153                         distance = vd->vdev_asize >>
3154                             ditto_same_vdev_distance_shift;
3155                         if (distance <= (1ULL << vd->vdev_ms_shift))
3156                                 distance = 0;
3157                 }
3158 
3159                 uint64_t asize = vdev_psize_to_asize(vd, psize);
3160                 ASSERT(P2PHASE(asize, 1ULL << vd->vdev_ashift) == 0);
3161 
3162                 uint64_t offset = metaslab_group_alloc(mg, zal, asize, txg,
3163                     distance, dva, d, flags);
3164 
3165                 if (offset != -1ULL) {
3166                         /*
3167                          * If we've just selected this metaslab group,
3168                          * figure out whether the corresponding vdev is
3169                          * over- or under-used relative to the pool,
3170                          * and set an allocation bias to even it out.
3171                          */
3172                         if (mc->mc_aliquot == 0 && metaslab_bias_enabled) {
3173                                 vdev_stat_t *vs = &vd->vdev_stat;
3174                                 vdev_stat_t *pvs = &vd->vdev_parent->vdev_stat;
3175                                 int64_t vu, cu, vu_io;
3176 
3177                                 vu = (vs->vs_alloc * 100) / (vs->vs_space + 1);
3178                                 cu = (mc->mc_alloc * 100) / (mc->mc_space + 1);
3179                                 vu_io =
3180                                     (((vs->vs_iotime[ZIO_TYPE_WRITE] * 100) /
3181                                     (pvs->vs_iotime[ZIO_TYPE_WRITE] + 1)) *
3182                                     (vd->vdev_parent->vdev_children)) - 100;
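                                /*
                                 * vu_io compares this vdev's share of the
                                 * parent's write service time against an
                                 * even split. Example: with 4 children, a
                                 * vdev accounting for 35% of the parent's
                                 * write iotime yields (35 * 4) - 100 = 40,
                                 * while an even 25% share yields 0.
                                 */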
3183 
3184                                 /*
3185                                  * Calculate how much more or less we should
3186                                  * try to allocate from this device during
3187                                  * this iteration around the rotor.
3188                                  * For example, if a device is 80% full
3189                                  * and the pool is 20% full then we should
3190                                  * reduce allocations by 60% on this device.
3191                                  *
3192                                  * mg_bias = (20 - 80) * 512K / 100 = -307K
3193                                  *
3194                                  * This reduces allocations by 307K for this
3195                                  * iteration.
3196                                  */
3197                                 mg->mg_bias = ((cu - vu) *
3198                                     (int64_t)mg->mg_aliquot) / 100;
3199 
3200                                 /*
3201                                  * Experiment: space-based DVA allocator 0,
3202                                  * latency-based 1 or hybrid 2.
3203                                  */
3204                                 switch (metaslab_alloc_dva_algorithm) {
3205                                 case 1:
3206                                         mg->mg_bias =
3207                                             (vu_io * (int64_t)mg->mg_aliquot) /
3208                                             100;
3209                                         break;
3210                                 case 2:
3211                                         mg->mg_bias =
3212                                             ((((cu - vu) + vu_io) / 2) *
3213                                             (int64_t)mg->mg_aliquot) / 100;
3214                                         break;
3215                                 default:
3216                                         break;
3217                                 }
3218                         } else if (!metaslab_bias_enabled) {
3219                                 mg->mg_bias = 0;
3220                         }
3221 
3222                         if (atomic_add_64_nv(&mc->mc_aliquot, asize) >=
3223                             mg->mg_aliquot + mg->mg_bias) {
3224                                 mc->mc_rotor = mg->mg_next;
3225                                 mc->mc_aliquot = 0;
3226                         }
3227 
3228                         DVA_SET_VDEV(&dva[d], vd->vdev_id);
3229                         DVA_SET_OFFSET(&dva[d], offset);
3230                         DVA_SET_GANG(&dva[d], !!(flags & METASLAB_GANG_HEADER));
3231                         DVA_SET_ASIZE(&dva[d], asize);
3232                         DTRACE_PROBE3(alloc_dva_probe, uint64_t, vd->vdev_id,
3233                             uint64_t, offset, uint64_t, psize);
3234 
3235                         return (0);
3236                 }
3237 next:
3238                 mc->mc_rotor = mg->mg_next;
3239                 mc->mc_aliquot = 0;
3240         } while ((mg = mg->mg_next) != rotor);
3241 
3242         /*
3243          * If we haven't tried hard, do so now.
3244          */
3245         if (!try_hard) {
3246                 try_hard = B_TRUE;
3247                 goto top;
3248         }
3249 
3250         bzero(&dva[d], sizeof (dva_t));
3251 
3252         metaslab_trace_add(zal, rotor, NULL, psize, d, TRACE_ENOSPC);
3253         return (SET_ERROR(ENOSPC));
3254 }
3255 
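/*
 * Illustrative worked arithmetic (not from the original code) for the
 * metaslab_alloc_dva_algorithm experiment above, reusing the inputs from the
 * bias comment.  Assume vu = 80 (vdev 80% full), cu = 20 (pool 20% full),
 * mg_aliquot = 512K, and vu_io = 20 (e.g. this vdev accounts for 60% of the
 * parent's write iotime across two children: 60 * 2 - 100 = 20).
 *
 *   algorithm 0 (space):   mg_bias = (20 - 80) * 512K / 100         ~= -307K
 *   algorithm 1 (latency): mg_bias = 20 * 512K / 100                ~= +102K
 *   algorithm 2 (hybrid):  mg_bias = ((-60 + 20) / 2) * 512K / 100  ~= -102K
 *
 * A negative bias shrinks this group's share of the rotor (the rotor advances
 * once mc_aliquot reaches mg_aliquot + mg_bias); a positive bias enlarges it.
 */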
3256 /*
3257  * Free the block represented by DVA in the context of the specified
3258  * transaction group.
3259  */
3260 void
3261 metaslab_free_dva(spa_t *spa, const dva_t *dva, uint64_t txg, boolean_t now)
3262 {
3263         uint64_t vdev = DVA_GET_VDEV(dva);
3264         uint64_t offset = DVA_GET_OFFSET(dva);
3265         uint64_t size = DVA_GET_ASIZE(dva);
3266         vdev_t *vd;
3267         metaslab_t *msp;
3268 
3269         DTRACE_PROBE3(free_dva_probe, uint64_t, vdev,
3270             uint64_t, offset, uint64_t, size);
3271 
3272         ASSERT(DVA_IS_VALID(dva));
3273 
3274         if (txg > spa_freeze_txg(spa))
3275                 return;
3276 
3277         if ((vd = vdev_lookup_top(spa, vdev)) == NULL ||
3278             (offset >> vd->vdev_ms_shift) >= vd->vdev_ms_count) {
3279                 cmn_err(CE_WARN, "metaslab_free_dva(): bad DVA %llu:%llu",
3280                     (u_longlong_t)vdev, (u_longlong_t)offset);
3281                 ASSERT(0);
3282                 return;
3283         }
3284 
3285         msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3286 
3287         if (DVA_GET_GANG(dva))
3288                 size = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
3289 
3290         mutex_enter(&msp->ms_lock);
3291 
3292         if (now) {
3293                 range_tree_remove(msp->ms_alloctree[txg & TXG_MASK],
3294                     offset, size);
3295 
3296                 VERIFY(!msp->ms_condensing);
3297                 VERIFY3U(offset, >=, msp->ms_start);
3298                 VERIFY3U(offset + size, <=, msp->ms_start + msp->ms_size);
3299                 VERIFY3U(range_tree_space(msp->ms_tree) + size, <=,
3300                     msp->ms_size);
3301                 VERIFY0(P2PHASE(offset, 1ULL << vd->vdev_ashift));
3302                 VERIFY0(P2PHASE(size, 1ULL << vd->vdev_ashift));
3303                 range_tree_add(msp->ms_tree, offset, size);
3304                 if (spa_get_auto_trim(spa) == SPA_AUTO_TRIM_ON &&
3305                     !vd->vdev_man_trimming)
3306                         metaslab_trim_add(msp, offset, size);
3307                 msp->ms_max_size = metaslab_block_maxsize(msp);
3308         } else {
3309                 VERIFY3U(txg, ==, spa->spa_syncing_txg);
3310                 if (range_tree_space(msp->ms_freeingtree) == 0)
3311                         vdev_dirty(vd, VDD_METASLAB, msp, txg);
3312                 range_tree_add(msp->ms_freeingtree, offset, size);
3313         }
3314 
3315         mutex_exit(&msp->ms_lock);
3316 }
3317 
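/*
 * Added note (not from the original code): the two branches of
 * metaslab_free_dva() above serve different callers.  With now == B_TRUE the
 * call undoes an allocation made in the still-open txg: the extent is pulled
 * out of ms_alloctree[txg & TXG_MASK] and put straight back into ms_tree
 * (and, with autotrim enabled, into the current trimset).  This is the unwind
 * path used by metaslab_alloc() further below:
 *
 *	metaslab_free_dva(spa, &dva[d], txg, B_TRUE);
 *
 * With now == B_FALSE the call is a normal free in syncing context: the
 * extent is only queued in ms_freeingtree and the metaslab is dirtied so that
 * metaslab_sync() processes it later.
 */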
3318 /*
3319  * Intent log support: upon opening the pool after a crash, notify the SPA
3320  * of blocks that the intent log has allocated for immediate write, but
3321  * which are still considered free by the SPA because the last transaction
3322  * group didn't commit yet.
3323  */
3324 static int
3325 metaslab_claim_dva(spa_t *spa, const dva_t *dva, uint64_t txg)
3326 {
3327         uint64_t vdev = DVA_GET_VDEV(dva);
3328         uint64_t offset = DVA_GET_OFFSET(dva);
3329         uint64_t size = DVA_GET_ASIZE(dva);
3330         vdev_t *vd;
3331         metaslab_t *msp;
3332         int error = 0;
3333 
3334         ASSERT(DVA_IS_VALID(dva));
3335 
3336         if ((vd = vdev_lookup_top(spa, vdev)) == NULL ||
3337             (offset >> vd->vdev_ms_shift) >= vd->vdev_ms_count)
3338                 return (SET_ERROR(ENXIO));
3339 
3340         msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3341 
3342         if (DVA_GET_GANG(dva))
3343                 size = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
3344 
3345         mutex_enter(&msp->ms_lock);
3346 
3347         if ((txg != 0 && spa_writeable(spa)) || !msp->ms_loaded)
3348                 error = metaslab_activate(msp, METASLAB_WEIGHT_SECONDARY);
3349 
3350         if (error == 0 && !range_tree_contains(msp->ms_tree, offset, size))
3351                 error = SET_ERROR(ENOENT);
3352 
3353         if (error || txg == 0) {        /* txg == 0 indicates dry run */
3354                 mutex_exit(&msp->ms_lock);
3355                 return (error);
3356         }
3357 
3358         VERIFY(!msp->ms_condensing);
3359         VERIFY0(P2PHASE(offset, 1ULL << vd->vdev_ashift));
3360         VERIFY0(P2PHASE(size, 1ULL << vd->vdev_ashift));
3361         VERIFY3U(range_tree_space(msp->ms_tree) - size, <=, msp->ms_size);
3362         range_tree_remove(msp->ms_tree, offset, size);
3363         metaslab_trim_remove(msp, offset, size);
3364 
3365         if (spa_writeable(spa)) {       /* don't dirty if we're zdb(1M) */
3366                 if (range_tree_space(msp->ms_alloctree[txg & TXG_MASK]) == 0)
3367                         vdev_dirty(vd, VDD_METASLAB, msp, txg);
3368                 range_tree_add(msp->ms_alloctree[txg & TXG_MASK], offset, size);
3369         }
3370 
3371         mutex_exit(&msp->ms_lock);
3372 
3373         return (0);
3374 }
3375 
3376 /*
3377  * Reserve some allocation slots. The reservation system must be called
3378  * before we call into the allocator. If there aren't any available slots
3379  * then the I/O will be throttled until an I/O completes and its slots are
3380  * freed up. The function returns true if it was successful in placing
3381  * the reservation.
3382  */
3383 boolean_t
3384 metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, zio_t *zio,
3385     int flags)
3386 {
3387         uint64_t available_slots = 0;
3388         boolean_t slot_reserved = B_FALSE;
3389 
3390         ASSERT(mc->mc_alloc_throttle_enabled);
3391         mutex_enter(&mc->mc_lock);
3392 
3393         uint64_t reserved_slots = refcount_count(&mc->mc_alloc_slots);
3394         if (reserved_slots < mc->mc_alloc_max_slots)
3395                 available_slots = mc->mc_alloc_max_slots - reserved_slots;
3396 
3397         if (slots <= available_slots || GANG_ALLOCATION(flags)) {
3398                 /*
3399                  * We reserve the slots individually so that we can unreserve
3400                  * them individually when an I/O completes.
3401                  */
3402                 for (int d = 0; d < slots; d++) {
3403                         reserved_slots = refcount_add(&mc->mc_alloc_slots, zio);
3404                 }
3405                 zio->io_flags |= ZIO_FLAG_IO_ALLOCATING;
3406                 slot_reserved = B_TRUE;
3407         }
3408 
3409         mutex_exit(&mc->mc_lock);
3410         return (slot_reserved);
3411 }
3412 
3413 void
3414 metaslab_class_throttle_unreserve(metaslab_class_t *mc, int slots, zio_t *zio)
3415 {
3416         ASSERT(mc->mc_alloc_throttle_enabled);
3417         mutex_enter(&mc->mc_lock);
3418         for (int d = 0; d < slots; d++) {
3419                 (void) refcount_remove(&mc->mc_alloc_slots, zio);
3420         }
3421         mutex_exit(&mc->mc_lock);
3422 }
3423 
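/*
 * Illustrative sketch (not from the original code): one way a caller could
 * pair the two throttle functions above around an allocation, assuming the
 * class's allocation throttle is enabled.  The function name and the
 * busy-wait are hypothetical simplifications; in the real pipeline the zio
 * layer re-queues throttled I/Os instead of sleeping.
 */
static int
example_throttled_alloc(spa_t *spa, metaslab_class_t *mc, uint64_t psize,
    blkptr_t *bp, int ndvas, uint64_t txg, int flags, zio_alloc_list_t *zal,
    zio_t *zio)
{
	int error;

	/* Wait until ndvas slots can be reserved for this zio. */
	while (!metaslab_class_throttle_reserve(mc, ndvas, zio, flags))
		delay(1);

	error = metaslab_alloc(spa, mc, psize, bp, ndvas, txg, NULL, flags,
	    zal, zio);
	if (error != 0) {
		/* Nothing was allocated; give the slots back immediately. */
		metaslab_class_throttle_unreserve(mc, ndvas, zio);
	}

	/* On success the slots are released when the zio completes. */
	return (error);
}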
3424 int
3425 metaslab_alloc(spa_t *spa, metaslab_class_t *mc, uint64_t psize, blkptr_t *bp,
3426     int ndvas, uint64_t txg, blkptr_t *hintbp, int flags,
3427     zio_alloc_list_t *zal, zio_t *zio)
3428 {
3429         dva_t *dva = bp->blk_dva;
3430         dva_t *hintdva = hintbp->blk_dva;
3431         int error = 0;
3432 
3433         ASSERT(bp->blk_birth == 0);
3434         ASSERT(BP_PHYSICAL_BIRTH(bp) == 0);
3435 
3436         spa_config_enter(spa, SCL_ALLOC, FTAG, RW_READER);
3437 
3438         if (mc->mc_rotor == NULL) {  /* no vdevs in this class */
3439                 spa_config_exit(spa, SCL_ALLOC, FTAG);
3440                 return (SET_ERROR(ENOSPC));
3441         }
3442 
3443         ASSERT(ndvas > 0 && ndvas <= spa_max_replication(spa));
3444         ASSERT(BP_GET_NDVAS(bp) == 0);
3445         ASSERT(hintbp == NULL || ndvas <= BP_GET_NDVAS(hintbp));
3446         ASSERT3P(zal, !=, NULL);
3447 
3448         if (mc == spa_special_class(spa) && !BP_IS_METADATA(bp) &&
3449             !(flags & (METASLAB_GANG_HEADER)) &&
3450             !(spa->spa_meta_policy.spa_small_data_to_special &&
3451             psize <= spa->spa_meta_policy.spa_small_data_to_special)) {
3452                 error = metaslab_alloc_dva(spa, spa_normal_class(spa),
3453                     psize, &dva[WBC_NORMAL_DVA], 0, NULL, txg,
3454                     flags | METASLAB_USE_WEIGHT_SECONDARY, zal);
3455                 if (error == 0) {
3456                         error = metaslab_alloc_dva(spa, mc, psize,
3457                             &dva[WBC_SPECIAL_DVA], 0, NULL, txg, flags, zal);
3458                         if (error != 0) {
3459                                 error = 0;
3460                                 /*
3461                                  * Move the NORMAL DVA into the first slot
3462                                  * and clear the second one. After that this
3463                                  * BP is just a regular BP with one DVA.
3464                                  *
3465                                  * This operation is valid only if:
3466                                  * WBC_SPECIAL_DVA is dva[0]
3467                                  * WBC_NORMAL_DVA is dva[1]
3468                                  *
3469                                  * see wbc.h
3470                                  */
3471                                 bcopy(&dva[WBC_NORMAL_DVA],
3472                                     &dva[WBC_SPECIAL_DVA], sizeof (dva_t));
3473                                 bzero(&dva[WBC_NORMAL_DVA], sizeof (dva_t));
3474 
3475                                 /*
3476                                  * Allocation of the special DVA has failed,
3477                                  * so this BP will be a regular BP, and we
3478                                  * need to update the metaslab group's queue
3479                                  * depth based on the newly allocated DVA.
3480                                  */
3481                                 metaslab_group_alloc_increment(spa,
3482                                     DVA_GET_VDEV(&dva[0]), zio, flags);
3483                         } else {
3484                                 BP_SET_SPECIAL(bp, 1);
3485                         }
3486                 } else {
3487                         spa_config_exit(spa, SCL_ALLOC, FTAG);
3488                         return (error);
3489                 }
3490         } else {
3491                 for (int d = 0; d < ndvas; d++) {
3492                         error = metaslab_alloc_dva(spa, mc, psize, dva, d,
3493                             hintdva, txg, flags, zal);
3494                         if (error != 0) {
3495                                 for (d--; d >= 0; d--) {
3496                                         metaslab_free_dva(spa, &dva[d],
3497                                             txg, B_TRUE);
3498                                         metaslab_group_alloc_decrement(spa,
3499                                             DVA_GET_VDEV(&dva[d]), zio, flags);
3500                                         bzero(&dva[d], sizeof (dva_t));
3501                                 }
3502                                 spa_config_exit(spa, SCL_ALLOC, FTAG);
3503                                 return (error);
3504                         } else {
3505                                 /*
3506                                  * Update the metaslab group's queue depth
3507                                  * based on the newly allocated dva.
3508                                  */
3509                                 metaslab_group_alloc_increment(spa,
3510                                     DVA_GET_VDEV(&dva[d]), zio, flags);
3511                         }
3512                 }
3513                 ASSERT(BP_GET_NDVAS(bp) == ndvas);
3514         }
3515         ASSERT(error == 0);
3516 
3517         spa_config_exit(spa, SCL_ALLOC, FTAG);
3518 
3519         BP_SET_BIRTH(bp, txg, txg);
3520 
3521         return (0);
3522 }
3523 
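/*
 * Illustrative layout summary (not from the original code) of what the
 * special-class branch of metaslab_alloc() above produces, using the
 * dva-index convention quoted from wbc.h (WBC_SPECIAL_DVA is dva[0],
 * WBC_NORMAL_DVA is dva[1]):
 *
 *   both allocations succeed:  dva[0] = special-class vdev,
 *                              dva[1] = normal-class vdev,
 *                              BP_SET_SPECIAL(bp, 1)
 *
 *   special allocation fails:  dva[0] = normal-class vdev (copied over),
 *                              dva[1] = zeroed,
 *                              bp stays a regular single-DVA block pointer
 */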
3524 void
3525 metaslab_free(spa_t *spa, const blkptr_t *bp, uint64_t txg, boolean_t now)
3526 {
3527         const dva_t *dva = bp->blk_dva;
3528         int ndvas = BP_GET_NDVAS(bp);
3529 
3530         ASSERT(!BP_IS_HOLE(bp));
3531         ASSERT(!now || bp->blk_birth >= spa_syncing_txg(spa));
3532 
3533         spa_config_enter(spa, SCL_FREE, FTAG, RW_READER);
3534 
3535         if (BP_IS_SPECIAL(bp)) {
3536                 int start_dva;
3537                 wbc_data_t *wbc_data = spa_get_wbc_data(spa);
3538 
3539                 mutex_enter(&wbc_data->wbc_lock);
3540                 start_dva = wbc_first_valid_dva(bp, wbc_data, B_TRUE);
3541                 mutex_exit(&wbc_data->wbc_lock);
3542 
3543                 /*
3544                  * The actual freeing does not need to be done under the
3545                  * lock: the block has already been excluded from the WBC
3546                  * trees and thus will not be moved.
3547                  */
3548                 metaslab_free_dva(spa, &dva[WBC_NORMAL_DVA], txg, now);
3549                 if (start_dva == 0) {
3550                         metaslab_free_dva(spa, &dva[WBC_SPECIAL_DVA],
3551                             txg, now);
3552                 }
3553         } else {
3554                 for (int d = 0; d < ndvas; d++)
3555                         metaslab_free_dva(spa, &dva[d], txg, now);
3556         }
3557 
3558         spa_config_exit(spa, SCL_FREE, FTAG);
3559 }
3560 
3561 int
3562 metaslab_claim(spa_t *spa, const blkptr_t *bp, uint64_t txg)
3563 {
3564         const dva_t *dva = bp->blk_dva;
3565         int ndvas = BP_GET_NDVAS(bp);
3566         int error = 0;
3567 
3568         ASSERT(!BP_IS_HOLE(bp));
3569 
3570         if (txg != 0) {
3571                 /*
3572                  * First do a dry run to make sure all DVAs are claimable,
3573                  * so we don't have to unwind from partial failures below.
3574                  */
3575                 if ((error = metaslab_claim(spa, bp, 0)) != 0)
3576                         return (error);
3577         }
3578 
3579         spa_config_enter(spa, SCL_ALLOC, FTAG, RW_READER);
3580 
3581         if (BP_IS_SPECIAL(bp)) {
3582                 int start_dva;
3583                 wbc_data_t *wbc_data = spa_get_wbc_data(spa);
3584 
3585                 mutex_enter(&wbc_data->wbc_lock);
3586                 start_dva = wbc_first_valid_dva(bp, wbc_data, B_FALSE);
3587 
3588                 /*
3589                  * The actual claiming of WBC blocks must be done under the
3590                  * lock to ensure that zdb does not fail. The only other
3591                  * user of claiming is the ZIL, whose blocks cannot be WBC
3592                  * blocks, so the lock is not held for them.
3593                  */
3594                 error = metaslab_claim_dva(spa,
3595                     &dva[WBC_NORMAL_DVA], txg);
3596                 if (error == 0 && start_dva == 0) {
3597                         error = metaslab_claim_dva(spa,
3598                             &dva[WBC_SPECIAL_DVA], txg);
3599                 }
3600 
3601                 mutex_exit(&wbc_data->wbc_lock);
3602         } else {
3603                 for (int d = 0; d < ndvas; d++)
3604                         if ((error = metaslab_claim_dva(spa,
3605                             &dva[d], txg)) != 0)
3606                                 break;
3607         }
3608 
3609         spa_config_exit(spa, SCL_ALLOC, FTAG);
3610 
3611         ASSERT(error == 0 || txg == 0);
3612 
3613         return (error);
3614 }
3615 
3616 void
3617 metaslab_check_free(spa_t *spa, const blkptr_t *bp)
3618 {
3619         if ((zfs_flags & ZFS_DEBUG_ZIO_FREE) == 0)
3620                 return;
3621 
3622         if (BP_IS_SPECIAL(bp)) {
3623                 /* Do not check frees for WBC blocks */
3624                 return;
3625         }
3626 
3627         spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
3628         for (int i = 0; i < BP_GET_NDVAS(bp); i++) {
3629                 uint64_t vdev = DVA_GET_VDEV(&bp->blk_dva[i]);
3630                 vdev_t *vd = vdev_lookup_top(spa, vdev);
3631                 uint64_t offset = DVA_GET_OFFSET(&bp->blk_dva[i]);
3632                 uint64_t size = DVA_GET_ASIZE(&bp->blk_dva[i]);
3633                 metaslab_t *msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
3634 
3635                 if (msp->ms_loaded) {
3636                         range_tree_verify(msp->ms_tree, offset, size);
3637                         range_tree_verify(msp->ms_cur_ts->ts_tree,
3638                             offset, size);
3639                         if (msp->ms_prev_ts != NULL) {
3640                                 range_tree_verify(msp->ms_prev_ts->ts_tree,
3641                                     offset, size);
3642                         }
3643                 }
3644 
3645                 range_tree_verify(msp->ms_freeingtree, offset, size);
3646                 range_tree_verify(msp->ms_freedtree, offset, size);
3647                 for (int j = 0; j < TXG_DEFER_SIZE; j++)
3648                         range_tree_verify(msp->ms_defertree[j], offset, size);
3649         }
3650         spa_config_exit(spa, SCL_VDEV, FTAG);
3651 }
3652 
3653 /*
3654  * Trims all free space in the metaslab. Returns the root TRIM zio (that the
3655  * caller should zio_wait() for) and the amount of space in the metaslab that
3656  * has been scheduled for trimming in the `delta' return argument.
3657  */
3658 zio_t *
3659 metaslab_trim_all(metaslab_t *msp, uint64_t *delta)
3660 {
3661         boolean_t was_loaded;
3662         uint64_t trimmed_space;
3663         zio_t *trim_io;
3664 
3665         ASSERT(!MUTEX_HELD(&msp->ms_group->mg_lock));
3666 
3667         mutex_enter(&msp->ms_lock);
3668 
3669         while (msp->ms_loading)
3670                 metaslab_load_wait(msp);
3671         /* If we loaded the metaslab, unload it when we're done. */
3672         was_loaded = msp->ms_loaded;
3673         if (!was_loaded) {
3674                 if (metaslab_load(msp) != 0) {
3675                         mutex_exit(&msp->ms_lock);
3676                         return (NULL);
3677                 }
3678         }
3679         /* Flush out any scheduled extents and add everything in ms_tree. */
3680         range_tree_vacate(msp->ms_cur_ts->ts_tree, NULL, NULL);
3681         range_tree_walk(msp->ms_tree, metaslab_trim_add, msp);
3682 
3683         /* Force this trim to take place ASAP. */
3684         if (msp->ms_prev_ts != NULL)
3685                 metaslab_free_trimset(msp->ms_prev_ts);
3686         msp->ms_prev_ts = msp->ms_cur_ts;
3687         msp->ms_cur_ts = metaslab_new_trimset(0, &msp->ms_lock);
3688         trimmed_space = range_tree_space(msp->ms_tree);
3689         if (!was_loaded)
3690                 metaslab_unload(msp);
3691 
3692         trim_io = metaslab_exec_trim(msp);
3693         mutex_exit(&msp->ms_lock);
3694         *delta = trimmed_space;
3695 
3696         return (trim_io);
3697 }
3698 
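/*
 * Illustrative sketch (not from the original code): how a caller of
 * metaslab_trim_all() above would consume its return values.  The function
 * name is hypothetical; the real consumer is the manual (on-demand) TRIM code.
 */
static void
example_trim_one_metaslab(metaslab_t *msp, uint64_t *trimmed_total)
{
	uint64_t delta = 0;
	zio_t *zio = metaslab_trim_all(msp, &delta);

	/* A NULL zio means the metaslab could not be loaded; nothing queued. */
	if (zio != NULL)
		(void) zio_wait(zio);
	*trimmed_total += delta;
}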
3699 /*
3700  * Notifies the trimsets in a metaslab that an extent has been allocated.
3701  * This removes the segment from the queues of extents waiting to be trimmed.
3702  */
3703 static void
3704 metaslab_trim_remove(void *arg, uint64_t offset, uint64_t size)
3705 {
3706         metaslab_t *msp = arg;
3707 
3708         range_tree_remove_overlap(msp->ms_cur_ts->ts_tree, offset, size);
3709         if (msp->ms_prev_ts != NULL) {
3710                 range_tree_remove_overlap(msp->ms_prev_ts->ts_tree, offset,
3711                     size);
3712         }
3713 }
3714 
3715 /*
3716  * Notifies the trimsets in a metaslab that an extent has been freed.
3717  * This adds the segment to the currently open queue of extents waiting
3718  * to be trimmed.
3719  */
3720 static void
3721 metaslab_trim_add(void *arg, uint64_t offset, uint64_t size)
3722 {
3723         metaslab_t *msp = arg;
3724         ASSERT(msp->ms_cur_ts != NULL);
3725         range_tree_add(msp->ms_cur_ts->ts_tree, offset, size);
3726 }
3727 
3728 /*
3729  * Performs a metaslab's automatic trim processing. This must be called
3730  * from metaslab_sync with the number of the currently syncing txg. Trims
3731  * are issued at intervals dictated by the zfs_txgs_per_trim tunable.
3732  */
3733 void
3734 metaslab_auto_trim(metaslab_t *msp, uint64_t txg)
3735 {
3736         /* for atomicity */
3737         uint64_t txgs_per_trim = zfs_txgs_per_trim;
3738 
3739         ASSERT(!MUTEX_HELD(&msp->ms_lock));
3740         mutex_enter(&msp->ms_lock);
3741 
3742         /*
3743          * Since we typically have hundreds of metaslabs per vdev, but we only
3744          * trim them once every zfs_txgs_per_trim txgs, it'd be best if we
3745          * could sequence the TRIM commands from all metaslabs so that they
3746          * don't all always pound the device in the same txg. We do so by
3747          * artificially inflating the birth txg of the first trim set by a
3748          * sequence number derived from the metaslab's id
3749          * (modulo zfs_txgs_per_trim). Thus, for the default 200 metaslabs and
3750          * 32 txgs per trim, we'll only be trimming ~6.25 metaslabs per txg.
3751          *
3752          * If we detect that the txg has advanced too far ahead of ts_birth,
3753          * it means our birth txg is out of lockstep. Recompute it by
3754          * rounding down to the nearest zfs_txgs_per_trim multiple and adding
3755          * our metaslab id modulo zfs_txgs_per_trim.
3756          */
3757         if (txg > msp->ms_cur_ts->ts_birth + txgs_per_trim) {
3758                 msp->ms_cur_ts->ts_birth = (txg / txgs_per_trim) *
3759                     txgs_per_trim + (msp->ms_id % txgs_per_trim);
3760         }
3761 
3762         /* Time to swap out the current and previous trimsets */
3763         if (txg == msp->ms_cur_ts->ts_birth + txgs_per_trim) {
3764                 if (msp->ms_prev_ts != NULL) {
3765                         if (msp->ms_trimming_ts != NULL) {
3766                                 spa_t *spa = msp->ms_group->mg_class->mc_spa;
3767                                 /*
3768                                  * The previous trim run is still ongoing, so
3769                                  * the device is reacting slowly to our trim
3770                                  * requests. Drop this trimset, so as not to
3771                                  * back the device up with trim requests.
3772                                  */
3773                                 spa_trimstats_auto_slow_incr(spa);
3774                                 metaslab_free_trimset(msp->ms_prev_ts);
3775                         } else if (msp->ms_group->mg_vd->vdev_man_trimming) {
3776                                 /*
3777                                  * If a manual trim is ongoing, we want to
3778                                  * inhibit autotrim temporarily so it doesn't
3779                                  * slow down the manual trim.
3780                                  */
3781                                 metaslab_free_trimset(msp->ms_prev_ts);
3782                         } else {
3783                                 /*
3784                                  * Trim out aged extents on the vdevs - these
3785                                  * are safe to be destroyed now. We'll keep
3786                                  * the trimset around to deny allocations from
3787                                  * these regions while the trims are ongoing.
3788                                  */
3789                                 zio_nowait(metaslab_exec_trim(msp));
3790                         }
3791                 }
3792                 msp->ms_prev_ts = msp->ms_cur_ts;
3793                 msp->ms_cur_ts = metaslab_new_trimset(txg, &msp->ms_lock);
3794         }
3795         mutex_exit(&msp->ms_lock);
3796 }
3797 
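/*
 * Illustrative worked example (not from the original code) for the trimset
 * staggering in metaslab_auto_trim() above.  Assume the default
 * zfs_txgs_per_trim = 32 and a metaslab with ms_id = 70 whose current trimset
 * has gone stale (ts_birth = 900) by the time txg 1000 syncs:
 *
 *   1000 > 900 + 32, so ts_birth is recomputed as
 *       (1000 / 32) * 32 + (70 % 32) = 992 + 6 = 998
 *   the trimsets are then swapped (and a TRIM possibly issued) when the
 *   syncing txg reaches 998 + 32 = 1030, and every 32 txgs after that.
 *
 * Because the offset term is ms_id % 32, the roughly 200 metaslabs of a vdev
 * are spread over 32 residues, i.e. only ~6.25 of them trim in any one txg.
 */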
3798 static void
3799 metaslab_trim_done(zio_t *zio)
3800 {
3801         metaslab_t *msp = zio->io_private;
3802         boolean_t held;
3803 
3804         ASSERT(msp != NULL);
3805         ASSERT(msp->ms_trimming_ts != NULL);
3806         held = MUTEX_HELD(&msp->ms_lock);
3807         if (!held)
3808                 mutex_enter(&msp->ms_lock);
3809         metaslab_free_trimset(msp->ms_trimming_ts);
3810         msp->ms_trimming_ts = NULL;
3811         cv_signal(&msp->ms_trim_cv);
3812         if (!held)
3813                 mutex_exit(&msp->ms_lock);
3814 }
3815 
3816 /*
3817  * Executes a zio_trim on a range tree holding freed extents in the metaslab.
3818  */
3819 static zio_t *
3820 metaslab_exec_trim(metaslab_t *msp)
3821 {
3822         metaslab_group_t *mg = msp->ms_group;
3823         spa_t *spa = mg->mg_class->mc_spa;
3824         vdev_t *vd = mg->mg_vd;
3825         range_tree_t *trim_tree;
3826         zio_t *zio;
3827 
3828         ASSERT(MUTEX_HELD(&msp->ms_lock));
3829 
3830         /* wait for a preceding trim to finish */
3831         while (msp->ms_trimming_ts != NULL)
3832                 cv_wait(&msp->ms_trim_cv, &msp->ms_lock);
3833         msp->ms_trimming_ts = msp->ms_prev_ts;
3834         msp->ms_prev_ts = NULL;
3835         trim_tree = msp->ms_trimming_ts->ts_tree;
3836 #ifdef  DEBUG
3837         if (msp->ms_loaded) {
3838                 for (range_seg_t *rs = avl_first(&trim_tree->rt_root);
3839                     rs != NULL; rs = AVL_NEXT(&trim_tree->rt_root, rs)) {
3840                         if (!range_tree_contains(msp->ms_tree,
3841                             rs->rs_start, rs->rs_end - rs->rs_start)) {
3842                                 panic("trimming allocated region; rs=%p",
3843                                     (void *)rs);
3844                         }
3845                 }
3846         }
3847 #endif
3848 
3849         /* Nothing to trim */
3850         if (range_tree_space(trim_tree) == 0) {
3851                 metaslab_free_trimset(msp->ms_trimming_ts);
3852                 msp->ms_trimming_ts = NULL;
3853                 return (zio_root(spa, NULL, NULL, 0));
3854         }
3855         zio = zio_trim(spa, vd, trim_tree, metaslab_trim_done, msp, 0,
3856             ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY |
3857             ZIO_FLAG_CONFIG_WRITER, msp);
3858 
3859         return (zio);
3860 }
3861 
3862 /*
3863  * Allocates and initializes a new trimset structure. The `txg' argument
3864  * indicates when this trimset was born and `lock' indicates the lock to
3865  * link to the range tree.
3866  */
3867 static metaslab_trimset_t *
3868 metaslab_new_trimset(uint64_t txg, kmutex_t *lock)
3869 {
3870         metaslab_trimset_t *ts;
3871 
3872         ts = kmem_zalloc(sizeof (*ts), KM_SLEEP);
3873         ts->ts_birth = txg;
3874         ts->ts_tree = range_tree_create(NULL, NULL, lock);
3875 
3876         return (ts);
3877 }
3878 
3879 /*
3880  * Destroys and frees a trim set previously allocated by metaslab_new_trimset.
3881  */
3882 static void
3883 metaslab_free_trimset(metaslab_trimset_t *ts)
3884 {
3885         range_tree_vacate(ts->ts_tree, NULL, NULL);
3886         range_tree_destroy(ts->ts_tree);
3887         kmem_free(ts, sizeof (*ts));
3888 }
3889 
3890 /*
3891  * Checks whether an allocation conflicts with an ongoing trim operation in
3892  * the given metaslab. This function takes a segment starting at `*offset'
3893  * of `size' and checks whether it hits any region in the metaslab currently
3894  * being trimmed. If yes, it tries to adjust the allocation to the end of
3895  * the region being trimmed (P2ROUNDUP aligned by `align'), but only up to
3896  * `limit' (no part of the allocation is allowed to go past this point).
3897  *
3898  * Returns B_FALSE if either the original allocation wasn't in conflict, or
3899  * the conflict could be resolved by adjusting the value stored in `offset'
3900  * such that the whole allocation still fits below `limit'. Returns B_TRUE
3901  * if the allocation conflict couldn't be resolved.
3902  */
3903 static boolean_t
3904 metaslab_check_trim_conflict(metaslab_t *msp, uint64_t *offset,
    uint64_t size, uint64_t align, uint64_t limit)
3905 {
3906         uint64_t new_offset;
3907 
3908         if (msp->ms_trimming_ts == NULL)
3909                 /* no trim conflict, original offset is OK */
3910                 return (B_FALSE);
3911 
3912         new_offset = P2ROUNDUP(range_tree_find_gap(msp->ms_trimming_ts->ts_tree,
3913             *offset, size), align);
3914         if (new_offset != *offset && new_offset + size > limit)
3915                 /* trim conflict and adjustment not possible */
3916                 return (B_TRUE);
3917 
3918         /* trim conflict, but adjusted offset still within limit */
3919         *offset = new_offset;
3920         return (B_FALSE);
3921 }