NEX-9752 backport illumos 6950 ARC should cache compressed data
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
6950 ARC should cache compressed data
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-8521 zdb -h <pool> raises core dump
Reviewed by: Alex Deiter <alex.deiter@nexenta.com>
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
SUP-918: zdb -h infinite loop when buffering records larger than static limit
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-3650 KRRP needs to clean up cstyle, hdrchk, and mapfile issues
Reviewed by: Jean McCormack <jean.mccormack@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-3214 remove cos object type from dmu.h
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
6391 Override default SPA config location via environment
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed by: Richard Yao <ryao@gentoo.org>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
6268 zfs diff confused by moving a file to another directory
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Justin Gibbs <gibbs@scsiguy.com>
Approved by: Dan McDonald <danmcd@omniti.com>
6290 zdb -h overflows stack
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Brian Donohue <brian.donohue@delphix.com>
Reviewed by: Xin Li <delphij@freebsd.org>
Reviewed by: Don Brady <dev.fs.zfs@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
6047 SPARC boot should support feature@embedded_data
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
5959 clean up per-dataset feature count code
Reviewed by: Toomas Soome <tsoome@me.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-4582 update wrc test cases to allow using write back cache per tree of datasets
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
5960 zfs recv should prefetch indirect blocks
5925 zfs receive -o origin=
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
5812 assertion failed in zrl_tryenter(): zr_owner==NULL
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Approved by: Gordon Ross <gwr@nexenta.com>
5810 zdb should print details of bpobj
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Reviewed by: Simon Klinkert <simon.klinkert@gmail.com>
Approved by: Gordon Ross <gwr@nexenta.com>
NEX-3558 KRRP Integration
NEX-3212 remove vdev prop object type from dmu.h
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
4370 avoid transmitting holes during zfs send
4371 DMU code clean up
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Approved by: Garrett D'Amore <garrett@damore.org>
Make special vdev subtree topology the same as regular vdev subtree to simplify testcase setup
Fixup merge issues
Issue #40: ZDB shouldn't crash with new code
re #12611 rb4105 zpool import panic in ddt_zap_count()
re #8279 rb3915 need a mechanism to notify NMS about ZFS config changes (fix lint - courtesy of Yuri Pankov)
re #12584 rb4049 zfsxx latest code merge (fix lint - courtesy of Yuri Pankov)
re #12585 rb4049 ZFS++ work port - refactoring to improve separation of open/closed code, bug fixes, performance improvements - open code
Bug 11205: add missing libzfs_closed_stubs.c to fix opensource-only build.
ZFS plus work: special vdevs, cos, cos/vdev properties

*** 19,29 ****
   * CDDL HEADER END
   */
  
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
! * Copyright (c) 2011, 2017 by Delphix. All rights reserved.
   * Copyright (c) 2014 Integros [integros.com]
   * Copyright 2017 Nexenta Systems, Inc.
   * Copyright 2017 RackTop Systems.
   */
--- 19,29 ----
   * CDDL HEADER END
   */
  
  /*
   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
! * Copyright (c) 2011, 2016 by Delphix. All rights reserved.
   * Copyright (c) 2014 Integros [integros.com]
   * Copyright 2017 Nexenta Systems, Inc.
   * Copyright 2017 RackTop Systems.
   */
*** 30,39 ****
--- 30,41 ----
  #include <stdio.h>
  #include <unistd.h>
  #include <stdio_ext.h>
  #include <stdlib.h>
  #include <ctype.h>
+ #include <string.h>
+ #include <errno.h>
  #include <sys/zfs_context.h>
  #include <sys/spa.h>
  #include <sys/spa_impl.h>
  #include <sys/dmu.h>
  #include <sys/zap.h>
*** 76,104 ****
  	zio_checksum_table[(idx)].ci_name : "UNKNOWN")
  #define	ZDB_OT_NAME(idx) ((idx) < DMU_OT_NUMTYPES ? \
  	dmu_ot[(idx)].ot_name : DMU_OT_IS_VALID(idx) ? \
  	dmu_ot_byteswap[DMU_OT_BYTESWAP(idx)].ob_name : "UNKNOWN")
  #define	ZDB_OT_TYPE(idx) ((idx) < DMU_OT_NUMTYPES ? (idx) : \
! 	(idx) == DMU_OTN_ZAP_DATA || (idx) == DMU_OTN_ZAP_METADATA ? \
! 	DMU_OT_ZAP_OTHER : \
! 	(idx) == DMU_OTN_UINT64_DATA || (idx) == DMU_OTN_UINT64_METADATA ? \
! 	DMU_OT_UINT64_OTHER : DMU_OT_NUMTYPES)
  
  #ifndef lint
  extern int reference_tracking_enable;
  extern boolean_t zfs_recover;
  extern uint64_t zfs_arc_max, zfs_arc_meta_limit;
  extern int zfs_vdev_async_read_max_active;
  extern int aok;
- extern boolean_t spa_load_verify_dryrun;
  #else
  int reference_tracking_enable;
  boolean_t zfs_recover;
  uint64_t zfs_arc_max, zfs_arc_meta_limit;
  int zfs_vdev_async_read_max_active;
  int aok;
- boolean_t spa_load_verify_dryrun;
  #endif
  
  static const char cmdname[] = "zdb";
  uint8_t dump_opt[256];
--- 78,102 ----
  	zio_checksum_table[(idx)].ci_name : "UNKNOWN")
  #define	ZDB_OT_NAME(idx) ((idx) < DMU_OT_NUMTYPES ? \
  	dmu_ot[(idx)].ot_name : DMU_OT_IS_VALID(idx) ? \
  	dmu_ot_byteswap[DMU_OT_BYTESWAP(idx)].ob_name : "UNKNOWN")
  #define	ZDB_OT_TYPE(idx) ((idx) < DMU_OT_NUMTYPES ? (idx) : \
! 	(((idx) == DMU_OTN_ZAP_DATA || (idx) == DMU_OTN_ZAP_METADATA) ? \
! 	DMU_OT_ZAP_OTHER : DMU_OT_NUMTYPES))
  
  #ifndef lint
  extern int reference_tracking_enable;
  extern boolean_t zfs_recover;
  extern uint64_t zfs_arc_max, zfs_arc_meta_limit;
  extern int zfs_vdev_async_read_max_active;
  extern int aok;
  #else
  int reference_tracking_enable;
  boolean_t zfs_recover;
  uint64_t zfs_arc_max, zfs_arc_meta_limit;
  int zfs_vdev_async_read_max_active;
  int aok;
  #endif
  
  static const char cmdname[] = "zdb";
  uint8_t dump_opt[256];
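The reworked ZDB_OT_TYPE ternary is hard to read in flattened diff form. Below is a standalone sketch of the same collapse logic, using stand-in enum values rather than the real sys/dmu.h definitions: indices below DMU_OT_NUMTYPES map to themselves, the two "new" ZAP object types collapse to DMU_OT_ZAP_OTHER, and anything else saturates to DMU_OT_NUMTYPES.

```c
#include <assert.h>

/*
 * Stand-in values only -- the real constants live in sys/dmu.h.
 * Only the shape of the macro mirrors the patched zdb.c.
 */
enum {
	DMU_OT_ZAP_OTHER = 7,
	DMU_OT_NUMTYPES = 10,
	DMU_OTN_ZAP_DATA = 32,
	DMU_OTN_ZAP_METADATA = 33
};

/* Collapse any object type index to a bucket zdb can tabulate. */
#define	ZDB_OT_TYPE(idx)	((idx) < DMU_OT_NUMTYPES ? (idx) : \
	(((idx) == DMU_OTN_ZAP_DATA || (idx) == DMU_OTN_ZAP_METADATA) ? \
	DMU_OT_ZAP_OTHER : DMU_OT_NUMTYPES))
```

The one-bucket fallthrough is why the patch could also drop the DMU_OTN_UINT64_* arms: unknown types all land in the DMU_OT_NUMTYPES catch-all row of the stats table.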
*** 672,683 ****
  static int
  get_metaslab_refcount(vdev_t *vd)
  {
  	int refcount = 0;
  
! 	if (vd->vdev_top == vd) {
! 		for (uint64_t m = 0; m < vd->vdev_ms_count; m++) {
  			space_map_t *sm = vd->vdev_ms[m]->ms_sm;
  
  			if (sm != NULL &&
  			    sm->sm_dbuf->db_size == sizeof (space_map_phys_t))
  				refcount++;
--- 670,681 ----
  static int
  get_metaslab_refcount(vdev_t *vd)
  {
  	int refcount = 0;
  
! 	if (vd->vdev_top == vd && !vd->vdev_removing) {
! 		for (unsigned m = 0; m < vd->vdev_ms_count; m++) {
  			space_map_t *sm = vd->vdev_ms[m]->ms_sm;
  
  			if (sm != NULL &&
  			    sm->sm_dbuf->db_size == sizeof (space_map_phys_t))
  				refcount++;
*** 688,736 ****
  	return (refcount);
  }
  
  static int
- get_obsolete_refcount(vdev_t *vd)
- {
- 	int refcount = 0;
- 
- 	uint64_t obsolete_sm_obj = vdev_obsolete_sm_object(vd);
- 	if (vd->vdev_top == vd && obsolete_sm_obj != 0) {
- 		dmu_object_info_t doi;
- 		VERIFY0(dmu_object_info(vd->vdev_spa->spa_meta_objset,
- 		    obsolete_sm_obj, &doi));
- 		if (doi.doi_bonus_size == sizeof (space_map_phys_t)) {
- 			refcount++;
- 		}
- 	} else {
- 		ASSERT3P(vd->vdev_obsolete_sm, ==, NULL);
- 		ASSERT3U(obsolete_sm_obj, ==, 0);
- 	}
- 	for (unsigned c = 0; c < vd->vdev_children; c++) {
- 		refcount += get_obsolete_refcount(vd->vdev_child[c]);
- 	}
- 
- 	return (refcount);
- }
- 
- static int
- get_prev_obsolete_spacemap_refcount(spa_t *spa)
- {
- 	uint64_t prev_obj =
- 	    spa->spa_condensing_indirect_phys.scip_prev_obsolete_sm_object;
- 	if (prev_obj != 0) {
- 		dmu_object_info_t doi;
- 		VERIFY0(dmu_object_info(spa->spa_meta_objset, prev_obj, &doi));
- 		if (doi.doi_bonus_size == sizeof (space_map_phys_t)) {
- 			return (1);
- 		}
- 	}
- 	return (0);
- }
- 
- static int
  verify_spacemap_refcounts(spa_t *spa)
  {
  	uint64_t expected_refcount = 0;
  	uint64_t actual_refcount;
--- 686,695 ----
*** 737,748 ****
  	(void) feature_get_refcount(spa,
  	    &spa_feature_table[SPA_FEATURE_SPACEMAP_HISTOGRAM],
  	    &expected_refcount);
  	actual_refcount = get_dtl_refcount(spa->spa_root_vdev);
  	actual_refcount += get_metaslab_refcount(spa->spa_root_vdev);
- 	actual_refcount += get_obsolete_refcount(spa->spa_root_vdev);
- 	actual_refcount += get_prev_obsolete_spacemap_refcount(spa);
  
  	if (expected_refcount != actual_refcount) {
  		(void) printf("space map refcount mismatch: expected %lld != "
  		    "actual %lld\n",
  		    (longlong_t)expected_refcount,
--- 696,705 ----
*** 754,776 ****
  static void
  dump_spacemap(objset_t *os, space_map_t *sm)
  {
  	uint64_t alloc, offset, entry;
! 	char *ddata[] = { "ALLOC", "FREE", "CONDENSE", "INVALID",
  	    "INVALID", "INVALID", "INVALID", "INVALID" };
  
  	if (sm == NULL)
  		return;
  
- 	(void) printf("space map object %llu:\n",
- 	    (longlong_t)sm->sm_phys->smp_object);
- 	(void) printf("  smp_objsize = 0x%llx\n",
- 	    (longlong_t)sm->sm_phys->smp_objsize);
- 	(void) printf("  smp_alloc = 0x%llx\n",
- 	    (longlong_t)sm->sm_phys->smp_alloc);
- 
  	/*
  	 * Print out the freelist entries in both encoded and decoded form.
  	 */
  	alloc = 0;
  	for (offset = 0; offset < space_map_length(sm);
--- 711,726 ----
  static void
  dump_spacemap(objset_t *os, space_map_t *sm)
  {
  	uint64_t alloc, offset, entry;
! 	const char *ddata[] = { "ALLOC", "FREE", "CONDENSE", "INVALID",
  	    "INVALID", "INVALID", "INVALID", "INVALID" };
  
  	if (sm == NULL)
  		return;
  
  	/*
  	 * Print out the freelist entries in both encoded and decoded form.
  	 */
  	alloc = 0;
  	for (offset = 0; offset < space_map_length(sm);
*** 871,881 ****
--- 821,833 ----
  	}
  
  	if (dump_opt['d'] > 5 || dump_opt['m'] > 3) {
  		ASSERT(msp->ms_size == (1ULL << vd->vdev_ms_shift));
+ 		mutex_enter(&msp->ms_lock);
  		dump_spacemap(spa->spa_meta_objset, msp->ms_sm);
+ 		mutex_exit(&msp->ms_lock);
  	}
  }
  
  static void
  print_vdev_metaslab_header(vdev_t *vd)
*** 929,1010 ****
  	(void) printf("\t%3llu%%\n", (u_longlong_t)fragmentation);
  	dump_histogram(mc->mc_histogram, RANGE_TREE_HISTOGRAM_SIZE, 0);
  }
  
  static void
- print_vdev_indirect(vdev_t *vd)
- {
- 	vdev_indirect_config_t *vic = &vd->vdev_indirect_config;
- 	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
- 	vdev_indirect_births_t *vib = vd->vdev_indirect_births;
- 
- 	if (vim == NULL) {
- 		ASSERT3P(vib, ==, NULL);
- 		return;
- 	}
- 
- 	ASSERT3U(vdev_indirect_mapping_object(vim), ==,
- 	    vic->vic_mapping_object);
- 	ASSERT3U(vdev_indirect_births_object(vib), ==,
- 	    vic->vic_births_object);
- 
- 	(void) printf("indirect births obj %llu:\n",
- 	    (longlong_t)vic->vic_births_object);
- 	(void) printf("    vib_count = %llu\n",
- 	    (longlong_t)vdev_indirect_births_count(vib));
- 	for (uint64_t i = 0; i < vdev_indirect_births_count(vib); i++) {
- 		vdev_indirect_birth_entry_phys_t *cur_vibe =
- 		    &vib->vib_entries[i];
- 		(void) printf("\toffset %llx -> txg %llu\n",
- 		    (longlong_t)cur_vibe->vibe_offset,
- 		    (longlong_t)cur_vibe->vibe_phys_birth_txg);
- 	}
- 	(void) printf("\n");
- 
- 	(void) printf("indirect mapping obj %llu:\n",
- 	    (longlong_t)vic->vic_mapping_object);
- 	(void) printf("    vim_max_offset = 0x%llx\n",
- 	    (longlong_t)vdev_indirect_mapping_max_offset(vim));
- 	(void) printf("    vim_bytes_mapped = 0x%llx\n",
- 	    (longlong_t)vdev_indirect_mapping_bytes_mapped(vim));
- 	(void) printf("    vim_count = %llu\n",
- 	    (longlong_t)vdev_indirect_mapping_num_entries(vim));
- 
- 	if (dump_opt['d'] <= 5 && dump_opt['m'] <= 3)
- 		return;
- 
- 	uint32_t *counts = vdev_indirect_mapping_load_obsolete_counts(vim);
- 
- 	for (uint64_t i = 0; i < vdev_indirect_mapping_num_entries(vim); i++) {
- 		vdev_indirect_mapping_entry_phys_t *vimep =
- 		    &vim->vim_entries[i];
- 		(void) printf("\t<%llx:%llx:%llx> -> "
- 		    "<%llx:%llx:%llx> (%x obsolete)\n",
- 		    (longlong_t)vd->vdev_id,
- 		    (longlong_t)DVA_MAPPING_GET_SRC_OFFSET(vimep),
- 		    (longlong_t)DVA_GET_ASIZE(&vimep->vimep_dst),
- 		    (longlong_t)DVA_GET_VDEV(&vimep->vimep_dst),
- 		    (longlong_t)DVA_GET_OFFSET(&vimep->vimep_dst),
- 		    (longlong_t)DVA_GET_ASIZE(&vimep->vimep_dst),
- 		    counts[i]);
- 	}
- 	(void) printf("\n");
- 
- 	uint64_t obsolete_sm_object = vdev_obsolete_sm_object(vd);
- 	if (obsolete_sm_object != 0) {
- 		objset_t *mos = vd->vdev_spa->spa_meta_objset;
- 		(void) printf("obsolete space map object %llu:\n",
- 		    (u_longlong_t)obsolete_sm_object);
- 		ASSERT(vd->vdev_obsolete_sm != NULL);
- 		ASSERT3U(space_map_object(vd->vdev_obsolete_sm), ==,
- 		    obsolete_sm_object);
- 		dump_spacemap(mos, vd->vdev_obsolete_sm);
- 		(void) printf("\n");
- 	}
- }
- 
- static void
  dump_metaslabs(spa_t *spa)
  {
  	vdev_t *vd, *rvd = spa->spa_root_vdev;
  	uint64_t m, c = 0, children = rvd->vdev_children;
--- 881,890 ----
*** 1036,1047 ****
  	}
  
  	for (; c < children; c++) {
  		vd = rvd->vdev_child[c];
  		print_vdev_metaslab_header(vd);
- 		print_vdev_indirect(vd);
- 
  		for (m = 0; m < vd->vdev_ms_count; m++)
  			dump_metaslab(vd->vdev_ms[m]);
  		(void) printf("\n");
  	}
  }
--- 916,925 ----
*** 1102,1112 ****
  	if (error == ENOENT)
  		return;
  	ASSERT(error == 0);
  
! 	if ((count = ddt_object_count(ddt, type, class)) == 0)
  		return;
  
  	dspace = doi.doi_physical_blocks_512 << 9;
  	mspace = doi.doi_fill_count * doi.doi_data_block_size;
--- 980,991 ----
  	if (error == ENOENT)
  		return;
  	ASSERT(error == 0);
  
! 	(void) ddt_object_count(ddt, type, class, &count);
! 	if (count == 0)
  		return;
  
  	dspace = doi.doi_physical_blocks_512 << 9;
  	mspace = doi.doi_fill_count * doi.doi_data_block_size;
*** 1213,1223 ****
--- 1092,1104 ----
  		range_tree_t *rt = vd->vdev_dtl[t];
  		if (range_tree_space(rt) == 0)
  			continue;
  		(void) snprintf(prefix, sizeof (prefix), "\t%*s%s",
  		    indent + 2, "", name[t]);
+ 		mutex_enter(rt->rt_lock);
  		range_tree_walk(rt, dump_dtl_seg, prefix);
+ 		mutex_exit(rt->rt_lock);
  		if (dump_opt['d'] > 5 && vd->vdev_children == 0)
  			dump_spacemap(spa->spa_meta_objset, vd->vdev_dtl_sm);
  	}
  	for (unsigned c = 0; c < vd->vdev_children; c++)
*** 1227,1261 ****
  static void
  dump_history(spa_t *spa)
  {
  	nvlist_t **events = NULL;
  	uint64_t resid, len, off = 0;
  	uint_t num = 0;
  	int error;
  	time_t tsec;
  	struct tm t;
  	char tbuf[30];
  	char internalstr[MAXPATHLEN];
  
! 	char *buf = umem_alloc(SPA_MAXBLOCKSIZE, UMEM_NOFAIL);
  	do {
! 		len = SPA_MAXBLOCKSIZE;
  
  		if ((error = spa_history_get(spa, &off, &len, buf)) != 0) {
! 			(void) fprintf(stderr, "Unable to read history: "
! 			    "error %d\n", error);
! 			umem_free(buf, SPA_MAXBLOCKSIZE);
! 			return;
  		}
  
! 		if (zpool_history_unpack(buf, len, &resid, &events, &num) != 0)
  			break;
  
  		off -= resid;
  	} while (len != 0);
  
! 	umem_free(buf, SPA_MAXBLOCKSIZE);
  
  	(void) printf("\nHistory:\n");
  	for (unsigned i = 0; i < num; i++) {
  		uint64_t time, txg, ievent;
  		char *cmd, *intstr;
  		boolean_t printed = B_FALSE;
--- 1108,1159 ----
  static void
  dump_history(spa_t *spa)
  {
  	nvlist_t **events = NULL;
  	uint64_t resid, len, off = 0;
+ 	uint64_t buflen;
  	uint_t num = 0;
  	int error;
  	time_t tsec;
  	struct tm t;
  	char tbuf[30];
  	char internalstr[MAXPATHLEN];
  
! 	buflen = SPA_MAXBLOCKSIZE;
! 	char *buf = umem_alloc(buflen, UMEM_NOFAIL);
  	do {
! 		len = buflen;
  
  		if ((error = spa_history_get(spa, &off, &len, buf)) != 0) {
! 			break;
  		}
  
! 		error = zpool_history_unpack(buf, len, &resid, &events, &num);
! 		if (error != 0) {
  			break;
+ 		}
  
  		off -= resid;
+ 
+ 		if (resid == len) {
+ 			umem_free(buf, buflen);
+ 			buflen *= 2;
+ 			buf = umem_alloc(buflen, UMEM_NOFAIL);
+ 			if (buf == NULL) {
+ 				(void) fprintf(stderr,
+ 				    "Unable to read history: %s\n",
+ 				    strerror(error));
+ 				goto err;
+ 			}
+ 		}
  	} while (len != 0);
  
! 	umem_free(buf, buflen);
  
+ 	if (error != 0) {
+ 		(void) fprintf(stderr, "Unable to read history: %s\n",
+ 		    strerror(error));
+ 		goto err;
+ 	}
+ 
  	(void) printf("\nHistory:\n");
  	for (unsigned i = 0; i < num; i++) {
  		uint64_t time, txg, ievent;
  		char *cmd, *intstr;
  		boolean_t printed = B_FALSE;
*** 1293,1302 ****
--- 1191,1205 ----
  		if (!printed)
  			(void) printf("unrecognized record:\n");
  		dump_nvlist(events[i], 2);
  		}
  	}
+ err:
+ 	for (unsigned i = 0; i < num; i++) {
+ 		nvlist_free(events[i]);
+ 	}
+ 	free(events);
  }
  
  /*ARGSUSED*/
  static void
  dump_dnode(objset_t *os, uint64_t object, void *data, size_t size)
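The dump_history change above replaces a fixed SPA_MAXBLOCKSIZE buffer with one that doubles whenever a read comes back completely full, which is how records larger than the old static limit stop causing the SUP-918 infinite loop. A minimal user-level sketch of that grow-and-retry loop, with a hypothetical read_chunk() standing in for spa_history_get() and the unpack/resid bookkeeping omitted:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical byte stream in place of the on-disk history object.
 */
static const char history[] = "0123456789abcdefghij";

/*
 * Stand-in for spa_history_get(): copy up to *len bytes starting at
 * *off into buf, then advance *off and shrink *len to what was read.
 */
static int
read_chunk(size_t *off, size_t *len, char *buf)
{
	size_t avail = sizeof (history) - 1 - *off;

	if (*len > avail)
		*len = avail;
	memcpy(buf, history + *off, *len);
	*off += *len;
	return (0);
}

/*
 * Read the whole stream, doubling the buffer whenever a read filled it
 * entirely (the analogue of "resid == len": the next record may not
 * fit, so retry with more space).  Returns total bytes copied to out.
 */
size_t
read_history(size_t buflen, char *out)
{
	char *buf = malloc(buflen);
	size_t off = 0, total = 0;

	for (;;) {
		size_t len = buflen;

		if (read_chunk(&off, &len, buf) != 0 || len == 0)
			break;
		memcpy(out + total, buf, len);
		total += len;
		if (len == buflen) {	/* buffer ran full: grow it */
			free(buf);
			buflen *= 2;
			buf = malloc(buflen);
		}
	}
	free(buf);
	return (total);
}
```

The doubling keeps the number of reallocations logarithmic in the largest record size, so the loop terminates even when a single record exceeds the initial buffer.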
*** 2207,2226 ****
  	}
  
  	if (dump_opt['i'] != 0 || verbosity >= 2)
  		dump_intent_log(dmu_objset_zil(os));
  
! 	if (dmu_objset_ds(os) != NULL) {
! 		dsl_dataset_t *ds = dmu_objset_ds(os);
! 		dump_deadlist(&ds->ds_deadlist);
  
- 		if (dsl_dataset_remap_deadlist_exists(ds)) {
- 			(void) printf("ds_remap_deadlist:\n");
- 			dump_deadlist(&ds->ds_remap_deadlist);
- 		}
- 	}
- 
  	if (verbosity < 2)
  		return;
  
  	if (BP_IS_HOLE(os->os_rootbp))
  		return;
--- 2110,2122 ----
  	}
  
  	if (dump_opt['i'] != 0 || verbosity >= 2)
  		dump_intent_log(dmu_objset_zil(os));
  
! 	if (dmu_objset_ds(os) != NULL)
! 		dump_deadlist(&dmu_objset_ds(os)->ds_deadlist);
  
  	if (verbosity < 2)
  		return;
  
  	if (BP_IS_HOLE(os->os_rootbp))
  		return;
*** 2559,2569 ****
  	return (label_found ? 0 : 2);
  }
  
  static uint64_t dataset_feature_count[SPA_FEATURES];
- static uint64_t remap_deadlist_count = 0;
  
  /*ARGSUSED*/
  static int
  dump_one_dir(const char *dsname, void *arg)
  {
--- 2455,2464 ----
*** 2580,2593 ****
  		ASSERT(spa_feature_table[f].fi_flags &
  		    ZFEATURE_FLAG_PER_DATASET);
  		dataset_feature_count[f]++;
  	}
  
- 	if (dsl_dataset_remap_deadlist_exists(dmu_objset_ds(os))) {
- 		remap_deadlist_count++;
- 	}
- 
  	dump_dir(os);
  	close_objset(os, FTAG);
  	fuid_table_destroy();
  	return (0);
  }
--- 2475,2484 ----
*** 2623,2633 ****
  #define	ZB_TOTAL	DN_MAX_LEVELS
  
  typedef struct zdb_cb {
  	zdb_blkstats_t	zcb_type[ZB_TOTAL + 1][ZDB_OT_TOTAL + 1];
- 	uint64_t	zcb_removing_size;
  	uint64_t	zcb_dedup_asize;
  	uint64_t	zcb_dedup_blocks;
  	uint64_t	zcb_embedded_blocks[NUM_BP_EMBEDDED_TYPES];
  	uint64_t	zcb_embedded_histogram[NUM_BP_EMBEDDED_TYPES]
  	    [BPE_PAYLOAD_SIZE];
--- 2514,2523 ----
*** 2636,2646 ****
  	uint64_t	zcb_totalasize;
  	uint64_t	zcb_errors[256];
  	int		zcb_readfails;
  	int		zcb_haderrors;
  	spa_t		*zcb_spa;
- 	uint32_t	**zcb_vd_obsolete_counts;
  } zdb_cb_t;
  
  static void
  zdb_count_block(zdb_cb_t *zcb, zilog_t *zilog, const blkptr_t *bp,
      dmu_object_type_t type)
--- 2526,2535 ----
*** 2707,2729 ****
  	if (BP_GET_DEDUP(bp)) {
  		ddt_t *ddt;
  		ddt_entry_t *dde;
  
  		ddt = ddt_select(zcb->zcb_spa, bp);
- 		ddt_enter(ddt);
  		dde = ddt_lookup(ddt, bp, B_FALSE);
  
  		if (dde == NULL) {
  			refcnt = 0;
  		} else {
  			ddt_phys_t *ddp = ddt_phys_select(dde, bp);
  			ddt_phys_decref(ddp);
  			refcnt = ddp->ddp_refcnt;
  			if (ddt_phys_total_refcnt(dde) == 0)
  				ddt_remove(ddt, dde);
  		}
- 		ddt_exit(ddt);
  	}
  
  	VERIFY3U(zio_wait(zio_claim(NULL, zcb->zcb_spa,
  	    refcnt ? 0 : spa_first_txg(zcb->zcb_spa),
  	    bp, NULL, NULL, ZIO_FLAG_CANFAIL)), ==, 0);
--- 2596,2620 ----
  	if (BP_GET_DEDUP(bp)) {
  		ddt_t *ddt;
  		ddt_entry_t *dde;
  
  		ddt = ddt_select(zcb->zcb_spa, bp);
  		dde = ddt_lookup(ddt, bp, B_FALSE);
  
  		if (dde == NULL) {
  			refcnt = 0;
  		} else {
  			ddt_phys_t *ddp = ddt_phys_select(dde, bp);
  
+ 			/* no other competitors for dde */
+ 			dde_exit(dde);
+ 
  			ddt_phys_decref(ddp);
  			refcnt = ddp->ddp_refcnt;
  			if (ddt_phys_total_refcnt(dde) == 0)
  				ddt_remove(ddt, dde);
  		}
  	}
  
  	VERIFY3U(zio_wait(zio_claim(NULL, zcb->zcb_spa,
  	    refcnt ? 0 : spa_first_txg(zcb->zcb_spa),
  	    bp, NULL, NULL, ZIO_FLAG_CANFAIL)), ==, 0);
*** 2797,2807 ****
  	type = BP_GET_TYPE(bp);
  
  	zdb_count_block(zcb, zilog, bp,
  	    (type & DMU_OT_NEWTYPE) ? ZDB_OT_OTHER : type);
  
! 	is_metadata = (BP_GET_LEVEL(bp) != 0 || DMU_OT_IS_METADATA(type));
  
  	if (!BP_IS_EMBEDDED(bp) &&
  	    (dump_opt['c'] > 1 || (dump_opt['c'] && is_metadata))) {
  		size_t size = BP_GET_PSIZE(bp);
  		abd_t *abd = abd_alloc(size, B_FALSE);
--- 2688,2698 ----
  	type = BP_GET_TYPE(bp);
  
  	zdb_count_block(zcb, zilog, bp,
  	    (type & DMU_OT_NEWTYPE) ? ZDB_OT_OTHER : type);
  
! 	is_metadata = BP_IS_METADATA(bp);
  
  	if (!BP_IS_EMBEDDED(bp) &&
  	    (dump_opt['c'] > 1 || (dump_opt['c'] && is_metadata))) {
  		size_t size = BP_GET_PSIZE(bp);
  		abd_t *abd = abd_alloc(size, B_FALSE);
*** 2900,3120 **** zcb->zcb_dedup_blocks++; } } if (!dump_opt['L']) { ddt_t *ddt = spa->spa_ddt[ddb.ddb_checksum]; ! ddt_enter(ddt); ! VERIFY(ddt_lookup(ddt, &blk, B_TRUE) != NULL); ! ddt_exit(ddt); } } ASSERT(error == ENOENT); } - /* ARGSUSED */ static void - claim_segment_impl_cb(uint64_t inner_offset, vdev_t *vd, uint64_t offset, - uint64_t size, void *arg) - { - /* - * This callback was called through a remap from - * a device being removed. Therefore, the vdev that - * this callback is applied to is a concrete - * vdev. - */ - ASSERT(vdev_is_concrete(vd)); - - VERIFY0(metaslab_claim_impl(vd, offset, size, - spa_first_txg(vd->vdev_spa))); - } - - static void - claim_segment_cb(void *arg, uint64_t offset, uint64_t size) - { - vdev_t *vd = arg; - - vdev_indirect_ops.vdev_op_remap(vd, offset, size, - claim_segment_impl_cb, NULL); - } - - /* - * After accounting for all allocated blocks that are directly referenced, - * we might have missed a reference to a block from a partially complete - * (and thus unused) indirect mapping object. We perform a secondary pass - * through the metaslabs we have already mapped and claim the destination - * blocks. - */ - static void - zdb_claim_removing(spa_t *spa, zdb_cb_t *zcb) - { - if (spa->spa_vdev_removal == NULL) - return; - - spa_config_enter(spa, SCL_CONFIG, FTAG, RW_READER); - - spa_vdev_removal_t *svr = spa->spa_vdev_removal; - vdev_t *vd = svr->svr_vdev; - vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping; - - for (uint64_t msi = 0; msi < vd->vdev_ms_count; msi++) { - metaslab_t *msp = vd->vdev_ms[msi]; - - if (msp->ms_start >= vdev_indirect_mapping_max_offset(vim)) - break; - - ASSERT0(range_tree_space(svr->svr_allocd_segs)); - - if (msp->ms_sm != NULL) { - VERIFY0(space_map_load(msp->ms_sm, - svr->svr_allocd_segs, SM_ALLOC)); - - /* - * Clear everything past what has been synced, - * because we have not allocated mappings for it yet. 
- */ - range_tree_clear(svr->svr_allocd_segs, - vdev_indirect_mapping_max_offset(vim), - msp->ms_sm->sm_start + msp->ms_sm->sm_size - - vdev_indirect_mapping_max_offset(vim)); - } - - zcb->zcb_removing_size += - range_tree_space(svr->svr_allocd_segs); - range_tree_vacate(svr->svr_allocd_segs, claim_segment_cb, vd); - } - - spa_config_exit(spa, SCL_CONFIG, FTAG); - } - - /* - * vm_idxp is an in-out parameter which (for indirect vdevs) is the - * index in vim_entries that has the first entry in this metaslab. On - * return, it will be set to the first entry after this metaslab. - */ - static void - zdb_leak_init_ms(metaslab_t *msp, uint64_t *vim_idxp) - { - metaslab_group_t *mg = msp->ms_group; - vdev_t *vd = mg->mg_vd; - vdev_t *rvd = vd->vdev_spa->spa_root_vdev; - - mutex_enter(&msp->ms_lock); - metaslab_unload(msp); - - /* - * We don't want to spend the CPU manipulating the size-ordered - * tree, so clear the range_tree ops. - */ - msp->ms_tree->rt_ops = NULL; - - (void) fprintf(stderr, - "\rloading vdev %llu of %llu, metaslab %llu of %llu ...", - (longlong_t)vd->vdev_id, - (longlong_t)rvd->vdev_children, - (longlong_t)msp->ms_id, - (longlong_t)vd->vdev_ms_count); - - /* - * For leak detection, we overload the metaslab ms_tree to - * contain allocated segments instead of free segments. As a - * result, we can't use the normal metaslab_load/unload - * interfaces. - */ - if (vd->vdev_ops == &vdev_indirect_ops) { - vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping; - for (; *vim_idxp < vdev_indirect_mapping_num_entries(vim); - (*vim_idxp)++) { - vdev_indirect_mapping_entry_phys_t *vimep = - &vim->vim_entries[*vim_idxp]; - uint64_t ent_offset = DVA_MAPPING_GET_SRC_OFFSET(vimep); - uint64_t ent_len = DVA_GET_ASIZE(&vimep->vimep_dst); - ASSERT3U(ent_offset, >=, msp->ms_start); - if (ent_offset >= msp->ms_start + msp->ms_size) - break; - - /* - * Mappings do not cross metaslab boundaries, - * because we create them by walking the metaslabs. 
- */ - ASSERT3U(ent_offset + ent_len, <=, - msp->ms_start + msp->ms_size); - range_tree_add(msp->ms_tree, ent_offset, ent_len); - } - } else if (msp->ms_sm != NULL) { - VERIFY0(space_map_load(msp->ms_sm, msp->ms_tree, SM_ALLOC)); - } - - if (!msp->ms_loaded) { - msp->ms_loaded = B_TRUE; - } - mutex_exit(&msp->ms_lock); - } - - /* ARGSUSED */ - static int - increment_indirect_mapping_cb(void *arg, const blkptr_t *bp, dmu_tx_t *tx) - { - zdb_cb_t *zcb = arg; - spa_t *spa = zcb->zcb_spa; - vdev_t *vd; - const dva_t *dva = &bp->blk_dva[0]; - - ASSERT(!dump_opt['L']); - ASSERT3U(BP_GET_NDVAS(bp), ==, 1); - - spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER); - vd = vdev_lookup_top(zcb->zcb_spa, DVA_GET_VDEV(dva)); - ASSERT3P(vd, !=, NULL); - spa_config_exit(spa, SCL_VDEV, FTAG); - - ASSERT(vd->vdev_indirect_config.vic_mapping_object != 0); - ASSERT3P(zcb->zcb_vd_obsolete_counts[vd->vdev_id], !=, NULL); - - vdev_indirect_mapping_increment_obsolete_count( - vd->vdev_indirect_mapping, - DVA_GET_OFFSET(dva), DVA_GET_ASIZE(dva), - zcb->zcb_vd_obsolete_counts[vd->vdev_id]); - - return (0); - } - - static uint32_t * - zdb_load_obsolete_counts(vdev_t *vd) - { - vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping; - spa_t *spa = vd->vdev_spa; - spa_condensing_indirect_phys_t *scip = - &spa->spa_condensing_indirect_phys; - uint32_t *counts; - - EQUIV(vdev_obsolete_sm_object(vd) != 0, vd->vdev_obsolete_sm != NULL); - counts = vdev_indirect_mapping_load_obsolete_counts(vim); - if (vd->vdev_obsolete_sm != NULL) { - vdev_indirect_mapping_load_obsolete_spacemap(vim, counts, - vd->vdev_obsolete_sm); - } - if (scip->scip_vdev == vd->vdev_id && - scip->scip_prev_obsolete_sm_object != 0) { - space_map_t *prev_obsolete_sm = NULL; - VERIFY0(space_map_open(&prev_obsolete_sm, spa->spa_meta_objset, - scip->scip_prev_obsolete_sm_object, 0, vd->vdev_asize, 0)); - space_map_update(prev_obsolete_sm); - vdev_indirect_mapping_load_obsolete_spacemap(vim, counts, - prev_obsolete_sm); - 
space_map_close(prev_obsolete_sm); - } - return (counts); - } - - static void zdb_leak_init(spa_t *spa, zdb_cb_t *zcb) { zcb->zcb_spa = spa; if (!dump_opt['L']) { - dsl_pool_t *dp = spa->spa_dsl_pool; vdev_t *rvd = spa->spa_root_vdev; /* * We are going to be changing the meaning of the metaslab's * ms_tree. Ensure that the allocator doesn't try to --- 2791,2815 ---- zcb->zcb_dedup_blocks++; } } if (!dump_opt['L']) { ddt_t *ddt = spa->spa_ddt[ddb.ddb_checksum]; ! ddt_entry_t *dde; ! VERIFY((dde = ddt_lookup(ddt, &blk, B_TRUE)) != NULL); ! dde_exit(dde); } } ASSERT(error == ENOENT); } static void zdb_leak_init(spa_t *spa, zdb_cb_t *zcb) { zcb->zcb_spa = spa; if (!dump_opt['L']) { vdev_t *rvd = spa->spa_root_vdev; /* * We are going to be changing the meaning of the metaslab's * ms_tree. Ensure that the allocator doesn't try to
*** 3121,3304 **** * use the tree. */ spa->spa_normal_class->mc_ops = &zdb_metaslab_ops; spa->spa_log_class->mc_ops = &zdb_metaslab_ops; - zcb->zcb_vd_obsolete_counts = - umem_zalloc(rvd->vdev_children * sizeof (uint32_t *), - UMEM_NOFAIL); - - for (uint64_t c = 0; c < rvd->vdev_children; c++) { vdev_t *vd = rvd->vdev_child[c]; ! uint64_t vim_idx = 0; - ASSERT3U(c, ==, vd->vdev_id); - /* ! * Note: we don't check for mapping leaks on ! * removing vdevs because their ms_tree's are ! * used to look for leaks in allocated space. */ ! if (vd->vdev_ops == &vdev_indirect_ops) { ! zcb->zcb_vd_obsolete_counts[c] = ! zdb_load_obsolete_counts(vd); /* ! * Normally, indirect vdevs don't have any ! * metaslabs. We want to set them up for ! * zio_claim(). */ ! VERIFY0(vdev_metaslab_init(vd, 0)); ! } ! for (uint64_t m = 0; m < vd->vdev_ms_count; m++) { ! zdb_leak_init_ms(vd->vdev_ms[m], &vim_idx); } - if (vd->vdev_ops == &vdev_indirect_ops) { - ASSERT3U(vim_idx, ==, - vdev_indirect_mapping_num_entries( - vd->vdev_indirect_mapping)); } } (void) fprintf(stderr, "\n"); - - if (bpobj_is_open(&dp->dp_obsolete_bpobj)) { - ASSERT(spa_feature_is_enabled(spa, - SPA_FEATURE_DEVICE_REMOVAL)); - (void) bpobj_iterate_nofree(&dp->dp_obsolete_bpobj, - increment_indirect_mapping_cb, zcb, NULL); } - } spa_config_enter(spa, SCL_CONFIG, FTAG, RW_READER); zdb_ddt_leak_init(spa, zcb); spa_config_exit(spa, SCL_CONFIG, FTAG); } ! static boolean_t ! 
zdb_check_for_obsolete_leaks(vdev_t *vd, zdb_cb_t *zcb)
- {
- 	boolean_t leaks = B_FALSE;
- 	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
- 	uint64_t total_leaked = 0;
-
- 	ASSERT(vim != NULL);
-
- 	for (uint64_t i = 0; i < vdev_indirect_mapping_num_entries(vim); i++) {
- 		vdev_indirect_mapping_entry_phys_t *vimep =
- 		    &vim->vim_entries[i];
- 		uint64_t obsolete_bytes = 0;
- 		uint64_t offset = DVA_MAPPING_GET_SRC_OFFSET(vimep);
- 		metaslab_t *msp = vd->vdev_ms[offset >> vd->vdev_ms_shift];
-
- 		/*
- 		 * This is not very efficient but it's easy to
- 		 * verify correctness.
- 		 */
- 		for (uint64_t inner_offset = 0;
- 		    inner_offset < DVA_GET_ASIZE(&vimep->vimep_dst);
- 		    inner_offset += 1 << vd->vdev_ashift) {
- 			if (range_tree_contains(msp->ms_tree,
- 			    offset + inner_offset, 1 << vd->vdev_ashift)) {
- 				obsolete_bytes += 1 << vd->vdev_ashift;
- 			}
- 		}
-
- 		int64_t bytes_leaked = obsolete_bytes -
- 		    zcb->zcb_vd_obsolete_counts[vd->vdev_id][i];
- 		ASSERT3U(DVA_GET_ASIZE(&vimep->vimep_dst), >=,
- 		    zcb->zcb_vd_obsolete_counts[vd->vdev_id][i]);
- 		if (bytes_leaked != 0 &&
- 		    (vdev_obsolete_counts_are_precise(vd) ||
- 		    dump_opt['d'] >= 5)) {
- 			(void) printf("obsolete indirect mapping count "
- 			    "mismatch on %llu:%llx:%llx : %llx bytes leaked\n",
- 			    (u_longlong_t)vd->vdev_id,
- 			    (u_longlong_t)DVA_MAPPING_GET_SRC_OFFSET(vimep),
- 			    (u_longlong_t)DVA_GET_ASIZE(&vimep->vimep_dst),
- 			    (u_longlong_t)bytes_leaked);
- 		}
- 		total_leaked += ABS(bytes_leaked);
- 	}
-
- 	if (!vdev_obsolete_counts_are_precise(vd) && total_leaked > 0) {
- 		int pct_leaked = total_leaked * 100 /
- 		    vdev_indirect_mapping_bytes_mapped(vim);
- 		(void) printf("cannot verify obsolete indirect mapping "
- 		    "counts of vdev %llu because precise feature was not "
- 		    "enabled when it was removed: %d%% (%llx bytes) of mapping"
- 		    "unreferenced\n",
- 		    (u_longlong_t)vd->vdev_id, pct_leaked,
- 		    (u_longlong_t)total_leaked);
- 	} else if (total_leaked > 0) {
- 		(void) printf("obsolete indirect mapping count mismatch "
- 		    "for vdev %llu -- %llx total bytes mismatched\n",
- 		    (u_longlong_t)vd->vdev_id,
- 		    (u_longlong_t)total_leaked);
- 		leaks |= B_TRUE;
- 	}
-
- 	vdev_indirect_mapping_free_obsolete_counts(vim,
- 	    zcb->zcb_vd_obsolete_counts[vd->vdev_id]);
- 	zcb->zcb_vd_obsolete_counts[vd->vdev_id] = NULL;
-
- 	return (leaks);
- }
-
- static boolean_t
- zdb_leak_fini(spa_t *spa, zdb_cb_t *zcb)
- {
- 	boolean_t leaks = B_FALSE;
  	if (!dump_opt['L']) {
  		vdev_t *rvd = spa->spa_root_vdev;
  		for (unsigned c = 0; c < rvd->vdev_children; c++) {
  			vdev_t *vd = rvd->vdev_child[c];
  			metaslab_group_t *mg = vd->vdev_mg;
!
! 			if (zcb->zcb_vd_obsolete_counts[c] != NULL) {
! 				leaks |= zdb_check_for_obsolete_leaks(vd, zcb);
! 			}
!
! 			for (uint64_t m = 0; m < vd->vdev_ms_count; m++) {
  				metaslab_t *msp = vd->vdev_ms[m];
  				ASSERT3P(mg, ==, msp->ms_group);

  				/*
  				 * The ms_tree has been overloaded to
  				 * contain allocated segments. Now that we
  				 * finished traversing all blocks, any
  				 * block that remains in the ms_tree
  				 * represents an allocated block that we
  				 * did not claim during the traversal.
  				 * Claimed blocks would have been removed
! 				 * from the ms_tree. For indirect vdevs,
! 				 * space remaining in the tree represents
! 				 * parts of the mapping that are not
! 				 * referenced, which is not a bug.
  				 */
! 				if (vd->vdev_ops == &vdev_indirect_ops) {
! 					range_tree_vacate(msp->ms_tree,
! 					    NULL, NULL);
! 				} else {
! 					range_tree_vacate(msp->ms_tree,
! 					    zdb_leak, vd);
! 				}

  				if (msp->ms_loaded) {
  					msp->ms_loaded = B_FALSE;
  				}
  			}
  		}
-
- 		umem_free(zcb->zcb_vd_obsolete_counts,
- 		    rvd->vdev_children * sizeof (uint32_t *));
- 		zcb->zcb_vd_obsolete_counts = NULL;
  	}
-
- 	return (leaks);
  }

  /* ARGSUSED */
  static int
  count_block_cb(void *arg, const blkptr_t *bp, dmu_tx_t *tx)
--- 2816,2911 ----
  	 * use the tree.
  	 */
  	spa->spa_normal_class->mc_ops = &zdb_metaslab_ops;
  	spa->spa_log_class->mc_ops = &zdb_metaslab_ops;

  	for (uint64_t c = 0; c < rvd->vdev_children; c++) {
  		vdev_t *vd = rvd->vdev_child[c];
! 		metaslab_group_t *mg = vd->vdev_mg;
! 		for (uint64_t m = 0; m < vd->vdev_ms_count; m++) {
! 			metaslab_t *msp = vd->vdev_ms[m];
! 			ASSERT3P(msp->ms_group, ==, mg);
! 			mutex_enter(&msp->ms_lock);
! 			metaslab_unload(msp);

  			/*
! 			 * For leak detection, we overload the metaslab
! 			 * ms_tree to contain allocated segments
! 			 * instead of free segments. As a result,
! 			 * we can't use the normal metaslab_load/unload
! 			 * interfaces.
  			 */
! 			if (msp->ms_sm != NULL) {
! 				(void) fprintf(stderr,
! 				    "\rloading space map for "
! 				    "vdev %llu of %llu, "
! 				    "metaslab %llu of %llu ...",
! 				    (longlong_t)c,
! 				    (longlong_t)rvd->vdev_children,
! 				    (longlong_t)m,
! 				    (longlong_t)vd->vdev_ms_count);

  				/*
! 				 * We don't want to spend the CPU
! 				 * manipulating the size-ordered
! 				 * tree, so clear the range_tree
! 				 * ops.
  				 */
! 				msp->ms_tree->rt_ops = NULL;
! 				VERIFY0(space_map_load(msp->ms_sm,
! 				    msp->ms_tree, SM_ALLOC));

! 				if (!msp->ms_loaded) {
! 					msp->ms_loaded = B_TRUE;
  				}
  			}
+ 			mutex_exit(&msp->ms_lock);
  		}
+ 	}

  	(void) fprintf(stderr, "\n");
  	}

  	spa_config_enter(spa, SCL_CONFIG, FTAG, RW_READER);
  	zdb_ddt_leak_init(spa, zcb);
  	spa_config_exit(spa, SCL_CONFIG, FTAG);
  }

! static void
! zdb_leak_fini(spa_t *spa)
  {
  	if (!dump_opt['L']) {
  		vdev_t *rvd = spa->spa_root_vdev;
  		for (unsigned c = 0; c < rvd->vdev_children; c++) {
  			vdev_t *vd = rvd->vdev_child[c];
  			metaslab_group_t *mg = vd->vdev_mg;
! 			for (unsigned m = 0; m < vd->vdev_ms_count; m++) {
  				metaslab_t *msp = vd->vdev_ms[m];
  				ASSERT3P(mg, ==, msp->ms_group);
+ 				mutex_enter(&msp->ms_lock);

  				/*
  				 * The ms_tree has been overloaded to
  				 * contain allocated segments. Now that we
  				 * finished traversing all blocks, any
  				 * block that remains in the ms_tree
  				 * represents an allocated block that we
  				 * did not claim during the traversal.
  				 * Claimed blocks would have been removed
! 				 * from the ms_tree.
  				 */
! 				range_tree_vacate(msp->ms_tree, zdb_leak, vd);

  				if (msp->ms_loaded) {
  					msp->ms_loaded = B_FALSE;
  				}
+
+ 				mutex_exit(&msp->ms_lock);
  			}
  		}
  	}
  }

  /* ARGSUSED */
  static int
  count_block_cb(void *arg, const blkptr_t *bp, dmu_tx_t *tx)
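The hunk above turns zdb's leak detection around an inverted range tree: the metaslab `ms_tree` is loaded with *allocated* segments (`SM_ALLOC`), each block claimed during traversal is removed, and whatever survives `range_tree_vacate()` is reported by `zdb_leak`. The idea can be sketched outside the diff with a toy segment set; all names below (`seg_set_t`, `seg_set_claim`, etc.) are invented for illustration and are not the real `range_tree` API:

```c
/*
 * Toy model of zdb's leak check: load "allocated" segments into a set,
 * remove each segment the traversal claims, and count what remains as
 * leaked bytes. Hypothetical names; not the illumos range_tree interface.
 */
#include <assert.h>
#include <stddef.h>

#define	MAX_SEGS	8

typedef struct {
	unsigned long start;
	unsigned long size;
	int live;
} seg_t;

typedef struct {
	seg_t segs[MAX_SEGS];
	size_t nsegs;
} seg_set_t;

/* Analogous to space_map_load(..., SM_ALLOC): record an allocated segment. */
static void
seg_set_add(seg_set_t *s, unsigned long start, unsigned long size)
{
	assert(s->nsegs < MAX_SEGS);
	s->segs[s->nsegs].start = start;
	s->segs[s->nsegs].size = size;
	s->segs[s->nsegs].live = 1;
	s->nsegs++;
}

/* Analogous to removing a claimed block from the overloaded ms_tree. */
static void
seg_set_claim(seg_set_t *s, unsigned long start)
{
	for (size_t i = 0; i < s->nsegs; i++) {
		if (s->segs[i].live && s->segs[i].start == start)
			s->segs[i].live = 0;
	}
}

/* Analogous to range_tree_vacate(..., zdb_leak, vd): tally leftovers. */
static unsigned long
seg_set_leaked_bytes(const seg_set_t *s)
{
	unsigned long leaked = 0;
	for (size_t i = 0; i < s->nsegs; i++) {
		if (s->segs[i].live)
			leaked += s->segs[i].size;
	}
	return (leaked);
}
```

This also motivates the `mutex_enter(&msp->ms_lock)` additions in the hunk: the set is mutated both at load time and at claim time, so the real code serializes access per metaslab.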
*** 3318,3328 ****
  static int
  dump_block_stats(spa_t *spa)
  {
  	zdb_cb_t zcb;
  	zdb_blkstats_t *zb, *tzb;
! 	uint64_t norm_alloc, norm_space, total_alloc, total_found;
  	int flags = TRAVERSE_PRE | TRAVERSE_PREFETCH_METADATA | TRAVERSE_HARD;
  	boolean_t leaks = B_FALSE;

  	bzero(&zcb, sizeof (zcb));
  	(void) printf("\nTraversing all blocks %s%s%s%s%s...\n\n",
--- 2925,2935 ----
  static int
  dump_block_stats(spa_t *spa)
  {
  	zdb_cb_t zcb;
  	zdb_blkstats_t *zb, *tzb;
! 	uint64_t norm_alloc, spec_alloc, norm_space, total_alloc, total_found;
  	int flags = TRAVERSE_PRE | TRAVERSE_PREFETCH_METADATA | TRAVERSE_HARD;
  	boolean_t leaks = B_FALSE;

  	bzero(&zcb, sizeof (zcb));
  	(void) printf("\nTraversing all blocks %s%s%s%s%s...\n\n",
*** 3345,3362 ****
  	/*
  	 * If there's a deferred-free bplist, process that first.
  	 */
  	(void) bpobj_iterate_nofree(&spa->spa_deferred_bpobj,
  	    count_block_cb, &zcb, NULL);
-
  	if (spa_version(spa) >= SPA_VERSION_DEADLISTS) {
  		(void) bpobj_iterate_nofree(&spa->spa_dsl_pool->dp_free_bpobj,
  		    count_block_cb, &zcb, NULL);
  	}
-
- 	zdb_claim_removing(spa, &zcb);
-
  	if (spa_feature_is_active(spa, SPA_FEATURE_ASYNC_DESTROY)) {
  		VERIFY3U(0, ==, bptree_iterate(spa->spa_meta_objset,
  		    spa->spa_dsl_pool->dp_bptree_obj, B_FALSE, count_block_cb,
  		    &zcb, NULL));
  	}
--- 2952,2965 ----
*** 3364,3374 ****
  	if (dump_opt['c'] > 1)
  		flags |= TRAVERSE_PREFETCH_DATA;

  	zcb.zcb_totalasize = metaslab_class_get_alloc(spa_normal_class(spa));
  	zcb.zcb_start = zcb.zcb_lastprint = gethrtime();
! 	zcb.zcb_haderrors |= traverse_pool(spa, 0, flags, zdb_blkptr_cb, &zcb);

  	/*
  	 * If we've traversed the data blocks then we need to wait for those
  	 * I/Os to complete. We leverage "The Godfather" zio to wait on
  	 * all async I/Os to complete.
--- 2967,2978 ----
  	if (dump_opt['c'] > 1)
  		flags |= TRAVERSE_PREFETCH_DATA;

  	zcb.zcb_totalasize = metaslab_class_get_alloc(spa_normal_class(spa));
  	zcb.zcb_start = zcb.zcb_lastprint = gethrtime();
! 	zcb.zcb_haderrors |= traverse_pool(spa, 0, UINT64_MAX,
! 	    flags, zdb_blkptr_cb, &zcb, NULL);

  	/*
  	 * If we've traversed the data blocks then we need to wait for those
  	 * I/Os to complete. We leverage "The Godfather" zio to wait on
  	 * all async I/Os to complete.
*** 3394,3413 ****
  	}

  	/*
  	 * Report any leaked segments.
  	 */
! 	leaks |= zdb_leak_fini(spa, &zcb);

  	tzb = &zcb.zcb_type[ZB_TOTAL][ZDB_OT_TOTAL];

  	norm_alloc = metaslab_class_get_alloc(spa_normal_class(spa));
  	norm_space = metaslab_class_get_space(spa_normal_class(spa));

  	total_alloc = norm_alloc + metaslab_class_get_alloc(spa_log_class(spa));
! 	total_found = tzb->zb_asize - zcb.zcb_dedup_asize +
! 	    zcb.zcb_removing_size;

  	if (total_found == total_alloc) {
  		if (!dump_opt['L'])
  			(void) printf("\n\tNo leaks (block sum matches space"
  			    " maps exactly)\n");
--- 2998,3018 ----
  	}

  	/*
  	 * Report any leaked segments.
  	 */
! 	zdb_leak_fini(spa);

  	tzb = &zcb.zcb_type[ZB_TOTAL][ZDB_OT_TOTAL];

  	norm_alloc = metaslab_class_get_alloc(spa_normal_class(spa));
+ 	spec_alloc = metaslab_class_get_alloc(spa_special_class(spa));
  	norm_space = metaslab_class_get_space(spa_normal_class(spa));
+ 	norm_alloc += spec_alloc;

  	total_alloc = norm_alloc + metaslab_class_get_alloc(spa_log_class(spa));
! 	total_found = tzb->zb_asize - zcb.zcb_dedup_asize;

  	if (total_found == total_alloc) {
  		if (!dump_opt['L'])
  			(void) printf("\n\tNo leaks (block sum matches space"
  			    " maps exactly)\n");
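The hunk above changes the leak-check arithmetic in two ways: special-class allocations are folded into `norm_alloc` before the comparison, and the `zcb_removing_size` term (which belonged to the removed device-removal support) drops out of `total_found`. The resulting bookkeeping can be sketched with invented numbers; the struct and function names below are illustrative, not zdb's:

```c
/*
 * Sketch of dump_block_stats()'s leak-check arithmetic after this
 * change. All values and names are made up for illustration.
 */
#include <assert.h>
#include <stdint.h>

typedef struct {
	uint64_t norm_alloc;	/* normal-class bytes allocated */
	uint64_t spec_alloc;	/* special-class bytes allocated */
	uint64_t log_alloc;	/* log-class bytes allocated */
} alloc_totals_t;

/* What the space maps say is allocated, across all classes. */
static uint64_t
total_alloc(const alloc_totals_t *t)
{
	return (t->norm_alloc + t->spec_alloc + t->log_alloc);
}

/* What the traversal found: block-sum asize minus dedup'd references. */
static uint64_t
total_found(uint64_t zb_asize, uint64_t dedup_asize)
{
	return (zb_asize - dedup_asize);
}

/* A pool "has no leaks" when the two totals agree exactly. */
static int
pool_leaks(const alloc_totals_t *t, uint64_t zb_asize, uint64_t dedup_asize)
{
	return (total_found(zb_asize, dedup_asize) != total_alloc(t));
}
```

The design point the diff reflects: once `spec_alloc` exists as a separate metaslab class, it must appear on both sides of the comparison, or every pool with a special vdev would falsely report a leak.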
*** 3445,3454 ****
--- 3050,3063 ----
  		(void) printf("\tbp deduped: %10llu    ref>1:"
  		    " %6llu   deduplication: %6.2f\n",
  		    (u_longlong_t)zcb.zcb_dedup_asize,
  		    (u_longlong_t)zcb.zcb_dedup_blocks,
  		    (double)zcb.zcb_dedup_asize / tzb->zb_asize + 1.0);
+ 		if (spec_alloc != 0) {
+ 			(void) printf("\tspecial allocated: %10llu\n",
+ 			    (u_longlong_t)spec_alloc);
+ 		}
  		(void) printf("\tSPA allocated: %10llu     used: %5.2f%%\n",
  		    (u_longlong_t)norm_alloc, 100.0 * norm_alloc / norm_space);

  	for (bp_embedded_type_t i = 0; i < NUM_BP_EMBEDDED_TYPES; i++) {
  		if (zcb.zcb_embedded_blocks[i] == 0)
*** 3470,3497 ****
  	if (tzb->zb_ditto_samevdev != 0) {
  		(void) printf("\tDittoed blocks on same vdev: %llu\n",
  		    (longlong_t)tzb->zb_ditto_samevdev);
  	}

- 	for (uint64_t v = 0; v < spa->spa_root_vdev->vdev_children; v++) {
- 		vdev_t *vd = spa->spa_root_vdev->vdev_child[v];
- 		vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
-
- 		if (vim == NULL) {
- 			continue;
- 		}
-
- 		char mem[32];
- 		zdb_nicenum(vdev_indirect_mapping_num_entries(vim),
- 		    mem, vdev_indirect_mapping_size(vim));
-
- 		(void) printf("\tindirect vdev id %llu has %llu segments "
- 		    "(%s in memory)\n",
- 		    (longlong_t)vd->vdev_id,
- 		    (longlong_t)vdev_indirect_mapping_num_entries(vim), mem);
- 	}
-
  	if (dump_opt['b'] >= 2) {
  		int l, t, level;
  		(void) printf("\nBlocks\tLSIZE\tPSIZE\tASIZE"
  		    "\t  avg\t comp\t%%Total\tType\n");
--- 3079,3088 ----
*** 3620,3630 ****
  		    (u_longlong_t)BP_GET_FILL(bp), avl_numnodes(t));
  	}

  	if (BP_IS_HOLE(bp) || BP_GET_CHECKSUM(bp) == ZIO_CHECKSUM_OFF ||
! 	    BP_GET_LEVEL(bp) > 0 || DMU_OT_IS_METADATA(BP_GET_TYPE(bp)))
  		return (0);

  	ddt_key_fill(&zdde_search.zdde_key, bp);

  	zdde = avl_find(t, &zdde_search, &where);
--- 3211,3221 ----
  		    (u_longlong_t)BP_GET_FILL(bp), avl_numnodes(t));
  	}

  	if (BP_IS_HOLE(bp) || BP_GET_CHECKSUM(bp) == ZIO_CHECKSUM_OFF ||
! 	    BP_IS_METADATA(bp))
  		return (0);

  	ddt_key_fill(&zdde_search.zdde_key, bp);

  	zdde = avl_find(t, &zdde_search, &where);
*** 3657,3668 ****
  	avl_create(&t, ddt_entry_compare,
  	    sizeof (zdb_ddt_entry_t), offsetof(zdb_ddt_entry_t, zdde_node));

  	spa_config_enter(spa, SCL_CONFIG, FTAG, RW_READER);

! 	(void) traverse_pool(spa, 0, TRAVERSE_PRE | TRAVERSE_PREFETCH_METADATA,
! 	    zdb_ddt_add_cb, &t);

  	spa_config_exit(spa, SCL_CONFIG, FTAG);

  	while ((zdde = avl_destroy_nodes(&t, &cookie)) != NULL) {
  		ddt_stat_t dds;
--- 3248,3260 ----
  	avl_create(&t, ddt_entry_compare,
  	    sizeof (zdb_ddt_entry_t), offsetof(zdb_ddt_entry_t, zdde_node));

  	spa_config_enter(spa, SCL_CONFIG, FTAG, RW_READER);

! 	(void) traverse_pool(spa, 0, UINT64_MAX,
! 	    TRAVERSE_PRE | TRAVERSE_PREFETCH_METADATA,
! 	    zdb_ddt_add_cb, &t, NULL);

  	spa_config_exit(spa, SCL_CONFIG, FTAG);

  	while ((zdde = avl_destroy_nodes(&t, &cookie)) != NULL) {
  		ddt_stat_t dds;
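Both `traverse_pool()` call sites in this diff gain an end-txg bound (`UINT64_MAX`, i.e. unbounded) and a trailing resume argument (`NULL`). The callback-with-txg-window shape can be modeled with a toy traversal; `toy_traverse` and everything around it are invented for illustration and do not match the real `traverse_pool()` prototype:

```c
/*
 * Toy model of a txg-bounded pool traversal: visit every "block"
 * whose birth txg falls in [txg_start, txg_end) and hand it to a
 * callback, stopping on the first callback error. Invented names.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef int (*blk_cb_t)(uint64_t birth_txg, void *arg);

static int
toy_traverse(const uint64_t *births, size_t n,
    uint64_t txg_start, uint64_t txg_end, blk_cb_t cb, void *arg)
{
	int err = 0;

	for (size_t i = 0; i < n && err == 0; i++) {
		/* Only blocks born inside the requested txg window. */
		if (births[i] >= txg_start && births[i] < txg_end)
			err = cb(births[i], arg);
	}
	return (err);
}
```

Passing `0` and `UINT64_MAX`, as the updated zdb calls do, simply means "the whole pool"; the extra arguments exist so other callers can bound or resume a traversal.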
*** 3694,3821 ****
  	zpool_dump_ddt(&dds_total, &ddh_total);

  	dump_dedup_ratio(&dds_total);
  }

- static int
- verify_device_removal_feature_counts(spa_t *spa)
- {
- 	uint64_t dr_feature_refcount = 0;
- 	uint64_t oc_feature_refcount = 0;
- 	uint64_t indirect_vdev_count = 0;
- 	uint64_t precise_vdev_count = 0;
- 	uint64_t obsolete_counts_object_count = 0;
- 	uint64_t obsolete_sm_count = 0;
- 	uint64_t obsolete_counts_count = 0;
- 	uint64_t scip_count = 0;
- 	uint64_t obsolete_bpobj_count = 0;
- 	int ret = 0;
-
- 	spa_condensing_indirect_phys_t *scip =
- 	    &spa->spa_condensing_indirect_phys;
- 	if (scip->scip_next_mapping_object != 0) {
- 		vdev_t *vd = spa->spa_root_vdev->vdev_child[scip->scip_vdev];
- 		ASSERT(scip->scip_prev_obsolete_sm_object != 0);
- 		ASSERT3P(vd->vdev_ops, ==, &vdev_indirect_ops);
-
- 		(void) printf("Condensing indirect vdev %llu: new mapping "
- 		    "object %llu, prev obsolete sm %llu\n",
- 		    (u_longlong_t)scip->scip_vdev,
- 		    (u_longlong_t)scip->scip_next_mapping_object,
- 		    (u_longlong_t)scip->scip_prev_obsolete_sm_object);
- 		if (scip->scip_prev_obsolete_sm_object != 0) {
- 			space_map_t *prev_obsolete_sm = NULL;
- 			VERIFY0(space_map_open(&prev_obsolete_sm,
- 			    spa->spa_meta_objset,
- 			    scip->scip_prev_obsolete_sm_object,
- 			    0, vd->vdev_asize, 0));
- 			space_map_update(prev_obsolete_sm);
- 			dump_spacemap(spa->spa_meta_objset, prev_obsolete_sm);
- 			(void) printf("\n");
- 			space_map_close(prev_obsolete_sm);
- 		}
-
- 		scip_count += 2;
- 	}
-
- 	for (uint64_t i = 0; i < spa->spa_root_vdev->vdev_children; i++) {
- 		vdev_t *vd = spa->spa_root_vdev->vdev_child[i];
- 		vdev_indirect_config_t *vic = &vd->vdev_indirect_config;
-
- 		if (vic->vic_mapping_object != 0) {
- 			ASSERT(vd->vdev_ops == &vdev_indirect_ops ||
- 			    vd->vdev_removing);
- 			indirect_vdev_count++;
-
- 			if (vd->vdev_indirect_mapping->vim_havecounts) {
- 				obsolete_counts_count++;
- 			}
- 		}
- 		if (vdev_obsolete_counts_are_precise(vd)) {
- 			ASSERT(vic->vic_mapping_object != 0);
- 			precise_vdev_count++;
- 		}
- 		if (vdev_obsolete_sm_object(vd) != 0) {
- 			ASSERT(vic->vic_mapping_object != 0);
- 			obsolete_sm_count++;
- 		}
- 	}
-
- 	(void) feature_get_refcount(spa,
- 	    &spa_feature_table[SPA_FEATURE_DEVICE_REMOVAL],
- 	    &dr_feature_refcount);
- 	(void) feature_get_refcount(spa,
- 	    &spa_feature_table[SPA_FEATURE_OBSOLETE_COUNTS],
- 	    &oc_feature_refcount);
-
- 	if (dr_feature_refcount != indirect_vdev_count) {
- 		ret = 1;
- 		(void) printf("Number of indirect vdevs (%llu) " \
- 		    "does not match feature count (%llu)\n",
- 		    (u_longlong_t)indirect_vdev_count,
- 		    (u_longlong_t)dr_feature_refcount);
- 	} else {
- 		(void) printf("Verified device_removal feature refcount " \
- 		    "of %llu is correct\n",
- 		    (u_longlong_t)dr_feature_refcount);
- 	}
-
- 	if (zap_contains(spa_meta_objset(spa), DMU_POOL_DIRECTORY_OBJECT,
- 	    DMU_POOL_OBSOLETE_BPOBJ) == 0) {
- 		obsolete_bpobj_count++;
- 	}
-
- 	obsolete_counts_object_count = precise_vdev_count;
- 	obsolete_counts_object_count += obsolete_sm_count;
- 	obsolete_counts_object_count += obsolete_counts_count;
- 	obsolete_counts_object_count += scip_count;
- 	obsolete_counts_object_count += obsolete_bpobj_count;
- 	obsolete_counts_object_count += remap_deadlist_count;
-
- 	if (oc_feature_refcount != obsolete_counts_object_count) {
- 		ret = 1;
- 		(void) printf("Number of obsolete counts objects (%llu) " \
- 		    "does not match feature count (%llu)\n",
- 		    (u_longlong_t)obsolete_counts_object_count,
- 		    (u_longlong_t)oc_feature_refcount);
- 		(void) printf("pv:%llu os:%llu oc:%llu sc:%llu "
- 		    "ob:%llu rd:%llu\n",
- 		    (u_longlong_t)precise_vdev_count,
- 		    (u_longlong_t)obsolete_sm_count,
- 		    (u_longlong_t)obsolete_counts_count,
- 		    (u_longlong_t)scip_count,
- 		    (u_longlong_t)obsolete_bpobj_count,
- 		    (u_longlong_t)remap_deadlist_count);
- 	} else {
- 		(void) printf("Verified indirect_refcount feature refcount " \
- 		    "of %llu is correct\n",
- 		    (u_longlong_t)oc_feature_refcount);
- 	}
- 	return (ret);
- }
-
  static void
  dump_zpool(spa_t *spa)
  {
  	dsl_pool_t *dp = spa_get_dsl(spa);
  	int rc = 0;
--- 3286,3295 ----
*** 3845,3872 ****
  		dump_metaslab_groups(spa);

  	if (dump_opt['d'] || dump_opt['i']) {
  		dump_dir(dp->dp_meta_objset);
  		if (dump_opt['d'] >= 3) {
- 			dsl_pool_t *dp = spa->spa_dsl_pool;
  			dump_full_bpobj(&spa->spa_deferred_bpobj,
  			    "Deferred frees", 0);
  			if (spa_version(spa) >= SPA_VERSION_DEADLISTS) {
! 				dump_full_bpobj(&dp->dp_free_bpobj,
  				    "Pool snapshot frees", 0);
  			}
- 			if (bpobj_is_open(&dp->dp_obsolete_bpobj)) {
- 				ASSERT(spa_feature_is_enabled(spa,
- 				    SPA_FEATURE_DEVICE_REMOVAL));
- 				dump_full_bpobj(&dp->dp_obsolete_bpobj,
- 				    "Pool obsolete blocks", 0);
- 			}

  			if (spa_feature_is_active(spa,
  			    SPA_FEATURE_ASYNC_DESTROY)) {
  				dump_bptree(spa->spa_meta_objset,
! 				    dp->dp_bptree_obj,
  				    "Pool dataset frees");
  			}
  			dump_dtl(spa->spa_root_vdev, 0);
  		}
  		(void) dmu_objset_find(spa_name(spa), dump_one_dir,
--- 3319,3340 ----
  		dump_metaslab_groups(spa);

  	if (dump_opt['d'] || dump_opt['i']) {
  		dump_dir(dp->dp_meta_objset);
  		if (dump_opt['d'] >= 3) {
  			dump_full_bpobj(&spa->spa_deferred_bpobj,
  			    "Deferred frees", 0);
  			if (spa_version(spa) >= SPA_VERSION_DEADLISTS) {
! 				dump_full_bpobj(
! 				    &spa->spa_dsl_pool->dp_free_bpobj,
  				    "Pool snapshot frees", 0);
  			}

  			if (spa_feature_is_active(spa,
  			    SPA_FEATURE_ASYNC_DESTROY)) {
  				dump_bptree(spa->spa_meta_objset,
! 				    spa->spa_dsl_pool->dp_bptree_obj,
  				    "Pool dataset frees");
  			}
  			dump_dtl(spa->spa_root_vdev, 0);
  		}
  		(void) dmu_objset_find(spa_name(spa), dump_one_dir,
*** 3895,3909 ****
  			    "of %llu is correct\n",
  			    spa_feature_table[f].fi_uname,
  			    (longlong_t)refcount);
  		}
-
- 		if (rc == 0) {
- 			rc = verify_device_removal_feature_counts(spa);
- 		}
  	}

  	if (rc == 0 && (dump_opt['b'] || dump_opt['c']))
  		rc = dump_block_stats(spa);

  	if (rc == 0)
  		rc = verify_spacemap_refcounts(spa);
--- 3363,3373 ----
*** 4206,4217 ****
  		 */
  		zio_nowait(zio_vdev_child_io(zio, bp, vd, offset, pabd,
  		    psize, ZIO_TYPE_READ, ZIO_PRIORITY_SYNC_READ,
  		    ZIO_FLAG_DONT_CACHE | ZIO_FLAG_DONT_QUEUE |
  		    ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY |
! 		    ZIO_FLAG_CANFAIL | ZIO_FLAG_RAW | ZIO_FLAG_OPTIONAL,
! 		    NULL, NULL));
  	}

  	error = zio_wait(zio);
  	spa_config_exit(spa, SCL_STATE, FTAG);
--- 3670,3680 ----
  		 */
  		zio_nowait(zio_vdev_child_io(zio, bp, vd, offset, pabd,
  		    psize, ZIO_TYPE_READ, ZIO_PRIORITY_SYNC_READ,
  		    ZIO_FLAG_DONT_CACHE | ZIO_FLAG_DONT_QUEUE |
  		    ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY |
! 		    ZIO_FLAG_CANFAIL | ZIO_FLAG_RAW, NULL, NULL));
  	}

  	error = zio_wait(zio);
  	spa_config_exit(spa, SCL_STATE, FTAG);
*** 4546,4561 ****
  	/*
  	 * Disable reference tracking for better performance.
  	 */
  	reference_tracking_enable = B_FALSE;

- 	/*
- 	 * Do not fail spa_load when spa_load_verify fails. This is needed
- 	 * to load non-idle pools.
- 	 */
- 	spa_load_verify_dryrun = B_TRUE;
-
  	kernel_init(FREAD);
  	g_zfs = libzfs_init();
  	ASSERT(g_zfs != NULL);

  	if (dump_all)
--- 4009,4018 ----