Revert "8958 Update Intel ucode to 20180108 release"
This reverts commit 1adc3ffcd976ec0a34010cc7db08037a14c3ea4c.
NEX-15280 New default metadata block size is too large
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-9752 backport illumos 6950 ARC should cache compressed data
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
6950 ARC should cache compressed data
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-5366 Race between unique_insert() and unique_remove() causes ZFS fsid change
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Dan Vatca <dan.vatca@gmail.com>
NEX-5058 WBC: Race between the purging of window and opening new one
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-2830 ZFS smart compression
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
5987 zfs prefetch code needs work
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Gordon Ross <gordon.ross@nexenta.com>
NEX-4582 update wrc test cases to allow use of write back cache per tree of datasets
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
5960 zfs recv should prefetch indirect blocks
5925 zfs receive -o origin=
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
5911 ZFS "hangs" while deleting file
Reviewed by: Bayard Bell <buffer.g.overflow@gmail.com>
Reviewed by: Alek Pinchuk <alek@nexenta.com>
Reviewed by: Simon Klinkert <simon.klinkert@gmail.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-1823 Slow performance doing of a large dataset
5911 ZFS "hangs" while deleting file
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Bayard Bell <bayard.bell@nexenta.com>
NEX-3266 5630 stale bonus buffer in recycled dnode_t leads to data corruption
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Approved by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Dan Fields <dan.fields@nexenta.com>
SUP-507 Delete or truncate of large files delayed on datasets with small recordsize
Reviewed by: Albert Lee <trisk@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Ilya Usvyatsky <ilya.usvyatsky@nexenta.com>
Reviewed by: Tony Nguyen <tony.nguyen@nexenta.com>
4370 avoid transmitting holes during zfs send
4371 DMU code clean up
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Approved by: Garrett D'Amore <garrett@damore.org>
Moved closed ZFS files to open repo, changed Makefiles accordingly
Removed unneeded weak symbols
re #12585 rb4049 ZFS++ work port - refactoring to improve separation of open/closed code, bug fixes, performance improvements - open code
Bug 11205: add missing libzfs_closed_stubs.c to fix opensource-only build.
ZFS plus work: special vdevs, cos, cos/vdev properties

          --- old/usr/src/uts/common/fs/zfs/dnode.c
          +++ new/usr/src/uts/common/fs/zfs/dnode.c
↓ open down ↓ 12 lines elided ↑ open up ↑
  13   13   * When distributing Covered Code, include this CDDL HEADER in each
  14   14   * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15   15   * If applicable, add the following below this CDDL HEADER, with the
  16   16   * fields enclosed by brackets "[]" replaced with your own identifying
  17   17   * information: Portions Copyright [yyyy] [name of copyright owner]
  18   18   *
  19   19   * CDDL HEADER END
  20   20   */
  21   21  /*
  22   22   * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
       23 + * Copyright 2015 Nexenta Systems, Inc.  All rights reserved.
  23   24   * Copyright (c) 2012, 2017 by Delphix. All rights reserved.
  24   25   * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
  25   26   * Copyright (c) 2014 Integros [integros.com]
  26   27   * Copyright 2017 RackTop Systems.
  27   28   */
  28   29  
  29   30  #include <sys/zfs_context.h>
  30   31  #include <sys/dbuf.h>
  31   32  #include <sys/dnode.h>
  32   33  #include <sys/dmu.h>
  33   34  #include <sys/dmu_impl.h>
  34   35  #include <sys/dmu_tx.h>
  35   36  #include <sys/dmu_objset.h>
  36   37  #include <sys/dsl_dir.h>
  37   38  #include <sys/dsl_dataset.h>
  38   39  #include <sys/spa.h>
  39   40  #include <sys/zio.h>
  40   41  #include <sys/dmu_zfetch.h>
  41   42  #include <sys/range_tree.h>
  42   43  
       44 +static void smartcomp_check_comp(dnode_smartcomp_t *sc);
       45 +
  43   46  static kmem_cache_t *dnode_cache;
  44   47  /*
  45   48   * Define DNODE_STATS to turn on statistic gathering. By default, it is only
  46   49   * turned on when DEBUG is also defined.
  47   50   */
  48   51  #ifdef  DEBUG
  49   52  #define DNODE_STATS
  50   53  #endif  /* DEBUG */
  51   54  
  52   55  #ifdef  DNODE_STATS
  53   56  #define DNODE_STAT_ADD(stat)                    ((stat)++)
  54   57  #else
  55   58  #define DNODE_STAT_ADD(stat)                    /* nothing */
  56   59  #endif  /* DNODE_STATS */
  57   60  
  58   61  static dnode_phys_t dnode_phys_zero;
  59   62  
  60   63  int zfs_default_bs = SPA_MINBLOCKSHIFT;
  61      -int zfs_default_ibs = DN_MAX_INDBLKSHIFT;
       64 +int zfs_default_ibs = DN_DFL_INDBLKSHIFT;
  62   65  
  63   66  #ifdef  _KERNEL
  64   67  static kmem_cbrc_t dnode_move(void *, void *, size_t, void *);
  65   68  #endif  /* _KERNEL */
  66   69  
  67   70  static int
  68   71  dbuf_compare(const void *x1, const void *x2)
  69   72  {
  70   73          const dmu_buf_impl_t *d1 = x1;
  71   74          const dmu_buf_impl_t *d2 = x2;
↓ open down ↓ 79 lines elided ↑ open up ↑
 151  154          dn->dn_oldgid = 0;
 152  155          dn->dn_newuid = 0;
 153  156          dn->dn_newgid = 0;
 154  157          dn->dn_id_flags = 0;
 155  158  
 156  159          dn->dn_dbufs_count = 0;
 157  160          avl_create(&dn->dn_dbufs, dbuf_compare, sizeof (dmu_buf_impl_t),
 158  161              offsetof(dmu_buf_impl_t, db_link));
 159  162  
 160  163          dn->dn_moved = 0;
      164 +
      165 +        bzero(&dn->dn_smartcomp, sizeof (dn->dn_smartcomp));
      166 +        mutex_init(&dn->dn_smartcomp.sc_lock, NULL, MUTEX_DEFAULT, NULL);
      167 +
 161  168          return (0);
 162  169  }
 163  170  
 164  171  /* ARGSUSED */
 165  172  static void
 166  173  dnode_dest(void *arg, void *unused)
 167  174  {
 168  175          int i;
 169  176          dnode_t *dn = arg;
 170  177  
      178 +        mutex_destroy(&dn->dn_smartcomp.sc_lock);
      179 +
 171  180          rw_destroy(&dn->dn_struct_rwlock);
 172  181          mutex_destroy(&dn->dn_mtx);
 173  182          mutex_destroy(&dn->dn_dbufs_mtx);
 174  183          cv_destroy(&dn->dn_notxholds);
 175  184          refcount_destroy(&dn->dn_holds);
 176  185          refcount_destroy(&dn->dn_tx_holds);
 177  186          ASSERT(!list_link_active(&dn->dn_link));
 178  187  
 179  188          for (i = 0; i < TXG_SIZE; i++) {
 180  189                  ASSERT(!list_link_active(&dn->dn_dirty_link[i]));
↓ open down ↓ 450 lines elided ↑ open up ↑
 631  640          ASSERT0(blocksize % SPA_MINBLOCKSIZE);
 632  641          ASSERT(dn->dn_object != DMU_META_DNODE_OBJECT || dmu_tx_private_ok(tx));
 633  642          ASSERT(tx->tx_txg != 0);
 634  643          ASSERT((bonustype == DMU_OT_NONE && bonuslen == 0) ||
 635  644              (bonustype != DMU_OT_NONE && bonuslen != 0) ||
 636  645              (bonustype == DMU_OT_SA && bonuslen == 0));
 637  646          ASSERT(DMU_OT_IS_VALID(bonustype));
 638  647          ASSERT3U(bonuslen, <=, DN_MAX_BONUSLEN);
 639  648  
 640  649          /* clean up any unreferenced dbufs */
 641      -        dnode_evict_dbufs(dn);
      650 +        dnode_evict_dbufs(dn, DBUF_EVICT_ALL);
 642  651  
 643  652          dn->dn_id_flags = 0;
 644  653  
 645  654          rw_enter(&dn->dn_struct_rwlock, RW_WRITER);
 646  655          dnode_setdirty(dn, tx);
 647  656          if (dn->dn_datablksz != blocksize) {
 648  657                  /* change blocksize */
 649  658                  ASSERT(dn->dn_maxblkid == 0 &&
 650  659                      (BP_IS_HOLE(&dn->dn_phys->dn_blkptr[0]) ||
 651  660                      dnode_block_freed(dn, 0)));
↓ open down ↓ 608 lines elided ↑ open up ↑
1260 1269                   * that the handle has zero references, but that will be
1261 1270                   * asserted anyway when the handle gets destroyed.
1262 1271                   */
1263 1272                  dbuf_rele(db, dnh);
1264 1273          }
1265 1274  }
1266 1275  
1267 1276  void
1268 1277  dnode_setdirty(dnode_t *dn, dmu_tx_t *tx)
1269 1278  {
     1279 +        dnode_setdirty_sc(dn, tx, B_TRUE);
     1280 +}
     1281 +
     1282 +void
     1283 +dnode_setdirty_sc(dnode_t *dn, dmu_tx_t *tx, boolean_t usesc)
     1284 +{
1270 1285          objset_t *os = dn->dn_objset;
1271 1286          uint64_t txg = tx->tx_txg;
1272 1287  
1273 1288          if (DMU_OBJECT_IS_SPECIAL(dn->dn_object)) {
1274 1289                  dsl_dataset_dirty(os->os_dsl_dataset, tx);
1275 1290                  return;
1276 1291          }
1277 1292  
1278 1293          DNODE_VERIFY(dn);
1279 1294  
↓ open down ↓ 38 lines elided ↑ open up ↑
1318 1333           * The dnode maintains a hold on its containing dbuf as
1319 1334           * long as there are holds on it.  Each instantiated child
1320 1335           * dbuf maintains a hold on the dnode.  When the last child
1321 1336           * drops its hold, the dnode will drop its hold on the
1322 1337           * containing dbuf. We add a "dirty hold" here so that the
1323 1338           * dnode will hang around after we finish processing its
1324 1339           * children.
1325 1340           */
1326 1341          VERIFY(dnode_add_ref(dn, (void *)(uintptr_t)tx->tx_txg));
1327 1342  
1328      -        (void) dbuf_dirty(dn->dn_dbuf, tx);
1329      -
     1343 +        (void) dbuf_dirty_sc(dn->dn_dbuf, tx, usesc);
1330 1344          dsl_dataset_dirty(os->os_dsl_dataset, tx);
1331 1345  }
1332 1346  
1333 1347  void
1334 1348  dnode_free(dnode_t *dn, dmu_tx_t *tx)
1335 1349  {
1336 1350          mutex_enter(&dn->dn_mtx);
1337 1351          if (dn->dn_type == DMU_OT_NONE || dn->dn_free_txg) {
1338 1352                  mutex_exit(&dn->dn_mtx);
1339 1353                  return;
↓ open down ↓ 67 lines elided ↑ open up ↑
1407 1421          rw_exit(&dn->dn_struct_rwlock);
1408 1422          return (0);
1409 1423  
1410 1424  fail:
1411 1425          rw_exit(&dn->dn_struct_rwlock);
1412 1426          return (SET_ERROR(ENOTSUP));
1413 1427  }
1414 1428  
1415 1429  /* read-holding callers must not rely on the lock being continuously held */
1416 1430  void
1417      -dnode_new_blkid(dnode_t *dn, uint64_t blkid, dmu_tx_t *tx, boolean_t have_read)
     1431 +dnode_new_blkid(dnode_t *dn, uint64_t blkid, dmu_tx_t *tx,
     1432 +    boolean_t usesc, boolean_t have_read)
1418 1433  {
1419 1434          uint64_t txgoff = tx->tx_txg & TXG_MASK;
1420 1435          int epbs, new_nlevels;
1421 1436          uint64_t sz;
1422 1437  
1423 1438          ASSERT(blkid != DMU_BONUS_BLKID);
1424 1439  
1425 1440          ASSERT(have_read ?
1426 1441              RW_READ_HELD(&dn->dn_struct_rwlock) :
1427 1442              RW_WRITE_HELD(&dn->dn_struct_rwlock));
↓ open down ↓ 33 lines elided ↑ open up ↑
1461 1476                  dbuf_dirty_record_t *new, *dr, *dr_next;
1462 1477  
1463 1478                  dn->dn_nlevels = new_nlevels;
1464 1479  
1465 1480                  ASSERT3U(new_nlevels, >, dn->dn_next_nlevels[txgoff]);
1466 1481                  dn->dn_next_nlevels[txgoff] = new_nlevels;
1467 1482  
1468 1483                  /* dirty the left indirects */
1469 1484                  db = dbuf_hold_level(dn, old_nlevels, 0, FTAG);
1470 1485                  ASSERT(db != NULL);
1471      -                new = dbuf_dirty(db, tx);
     1486 +                new = dbuf_dirty_sc(db, tx, usesc);
1472 1487                  dbuf_rele(db, FTAG);
1473 1488  
1474 1489                  /* transfer the dirty records to the new indirect */
1475 1490                  mutex_enter(&dn->dn_mtx);
1476 1491                  mutex_enter(&new->dt.di.dr_mtx);
1477 1492                  list = &dn->dn_dirty_records[txgoff];
1478 1493                  for (dr = list_head(list); dr; dr = dr_next) {
1479 1494                          dr_next = list_next(&dn->dn_dirty_records[txgoff], dr);
1480 1495                          if (dr->dr_dbuf->db_level != new_nlevels-1 &&
1481 1496                              dr->dr_dbuf->db_blkid != DMU_BONUS_BLKID &&
↓ open down ↓ 208 lines elided ↑ open up ↑
1690 1705          }
1691 1706  
1692 1707  done:
1693 1708          /*
1694 1709           * Add this range to the dnode range list.
1695 1710           * We will finish up this free operation in the syncing phase.
1696 1711           */
1697 1712          mutex_enter(&dn->dn_mtx);
1698 1713          int txgoff = tx->tx_txg & TXG_MASK;
1699 1714          if (dn->dn_free_ranges[txgoff] == NULL) {
1700      -                dn->dn_free_ranges[txgoff] = range_tree_create(NULL, NULL);
     1715 +                dn->dn_free_ranges[txgoff] =
     1716 +                    range_tree_create(NULL, NULL, &dn->dn_mtx);
1701 1717          }
1702 1718          range_tree_clear(dn->dn_free_ranges[txgoff], blkid, nblks);
1703 1719          range_tree_add(dn->dn_free_ranges[txgoff], blkid, nblks);
1704 1720          dprintf_dnode(dn, "blkid=%llu nblks=%llu txg=%llu\n",
1705 1721              blkid, nblks, tx->tx_txg);
1706 1722          mutex_exit(&dn->dn_mtx);
1707 1723  
1708 1724          dbuf_free_range(dn, blkid, blkid + nblks - 1, tx);
1709 1725          dnode_setdirty(dn, tx);
1710 1726  out:
↓ open down ↓ 278 lines elided ↑ open up ↑
1989 2005          }
1990 2006  
1991 2007          if (error == 0 && (flags & DNODE_FIND_BACKWARDS ?
1992 2008              initial_offset < *offset : initial_offset > *offset))
1993 2009                  error = SET_ERROR(ESRCH);
1994 2010  out:
1995 2011          if (!(flags & DNODE_FIND_HAVELOCK))
1996 2012                  rw_exit(&dn->dn_struct_rwlock);
1997 2013  
1998 2014          return (error);
     2015 +}
     2016 +
     2017 +/*
     2018 + * When in the compressing phase, we check our results every 1 MiB. If
     2019 + * compression ratio drops below the threshold factor, we give up trying
     2020 + * to compress the file for a while. The length of the interval is
     2021 + * calculated from this interval value according to the algorithm in
     2022 + * smartcomp_check_comp.
     2023 + */
     2024 +uint64_t zfs_smartcomp_interval = 1 * 1024 * 1024;
     2025 +
     2026 +/*
     2027 + * Minimum compression factor is 12.5% (100% / factor) - below that we
     2028 + * consider compression to have failed.
     2029 + */
     2030 +uint64_t zfs_smartcomp_threshold_factor = 8;
     2031 +
     2032 +/*
     2033 + * Maximum power-of-2 exponent on the deny interval and consequently
     2034 + * the maximum number of compression successes and failures we track.
     2035 + * Successive compression failures extend the deny interval, whereas
      2037 + * repeated successes make the algorithm more hesitant to start denying.
     2037 + */
     2038 +int64_t zfs_smartcomp_interval_exp = 5;
     2039 +
     2040 +/*
     2041 + * Callback invoked by the zio machinery when it wants to compress a data
     2042 + * block. If we are in the denying compression phase, we add the amount of
     2043 + * data written to our stats and check if we've denied enough data to
      2044 + * transition back into the compression phase.
     2045 + */
     2046 +boolean_t
     2047 +dnode_smartcomp_ask_cb(void *userinfo, const zio_t *zio)
     2048 +{
     2049 +        dnode_t *dn = userinfo;
     2050 +        dnode_smartcomp_t *sc;
     2051 +        dnode_smartcomp_state_t old_state;
     2052 +
     2053 +        ASSERT(dn != NULL);
     2054 +
     2055 +        sc = &dn->dn_smartcomp;
     2056 +        mutex_enter(&sc->sc_lock);
     2057 +        old_state = sc->sc_state;
     2058 +        if (sc->sc_state == DNODE_SMARTCOMP_DENYING) {
     2059 +                sc->sc_orig_size += zio->io_orig_size;
     2060 +                if (sc->sc_orig_size >= sc->sc_deny_interval) {
     2061 +                        /* time to retry compression on next call */
     2062 +                        sc->sc_state = DNODE_SMARTCOMP_COMPRESSING;
     2063 +                        sc->sc_size = 0;
     2064 +                        sc->sc_orig_size = 0;
     2065 +                }
     2066 +        }
     2067 +        mutex_exit(&sc->sc_lock);
     2068 +
     2069 +        return (old_state != DNODE_SMARTCOMP_DENYING);
     2070 +}
     2071 +
     2072 +/*
     2073 + * Callback invoked after compression has been performed to allow us to
     2074 + * monitor compression performance. If we're in a compressing phase, we
     2075 + * add the uncompressed and compressed data volumes to our state counters
     2076 + * and see if we need to recheck compression performance in
     2077 + * smartcomp_check_comp.
     2078 + */
     2079 +void
     2080 +dnode_smartcomp_result_cb(void *userinfo, const zio_t *zio)
     2081 +{
     2082 +        dnode_t *dn = userinfo;
     2083 +        dnode_smartcomp_t *sc;
     2084 +        uint64_t io_size = zio->io_size, io_orig_size = zio->io_orig_size;
     2085 +
     2086 +        ASSERT(dn != NULL);
     2087 +        sc = &dn->dn_smartcomp;
     2088 +
     2089 +        if (io_orig_size == 0)
     2090 +                /* XXX: is this valid anyway? */
     2091 +                return;
     2092 +
     2093 +        mutex_enter(&sc->sc_lock);
     2094 +        if (sc->sc_state == DNODE_SMARTCOMP_COMPRESSING) {
     2095 +                /* add last block's compression performance to our stats */
     2096 +                sc->sc_size += io_size;
     2097 +                sc->sc_orig_size += io_orig_size;
     2098 +                /* time to recheck compression performance? */
     2099 +                if (sc->sc_orig_size >= zfs_smartcomp_interval)
     2100 +                        smartcomp_check_comp(sc);
     2101 +        }
     2102 +        mutex_exit(&sc->sc_lock);
     2103 +}
     2104 +
     2105 +/*
     2106 + * This function checks whether the compression we've been getting is above
     2107 + * the threshold value. If it is, we decrement the sc_comp_failures counter
     2108 + * to indicate compression success. If it isn't we increment the same
     2109 + * counter and potentially start a compression deny phase.
     2110 + */
     2111 +static void
     2112 +smartcomp_check_comp(dnode_smartcomp_t *sc)
     2113 +{
     2114 +        uint64_t threshold = sc->sc_orig_size -
     2115 +            sc->sc_orig_size / zfs_smartcomp_threshold_factor;
     2116 +
     2117 +        ASSERT(MUTEX_HELD(&sc->sc_lock));
     2118 +        if (sc->sc_size > threshold) {
     2119 +                sc->sc_comp_failures =
     2120 +                    MIN(sc->sc_comp_failures + 1, zfs_smartcomp_interval_exp);
     2121 +                if (sc->sc_comp_failures > 0) {
     2122 +                        /* consistently getting too little compression, stop */
     2123 +                        sc->sc_state = DNODE_SMARTCOMP_DENYING;
     2124 +                        sc->sc_deny_interval =
     2125 +                            zfs_smartcomp_interval << sc->sc_comp_failures;
     2126 +                        /* randomize the interval by +-10% to avoid patterns */
     2127 +                        sc->sc_deny_interval = (sc->sc_deny_interval -
     2128 +                            (sc->sc_deny_interval / 10)) +
     2129 +                            spa_get_random(sc->sc_deny_interval / 5 + 1);
     2130 +                }
     2131 +        } else {
     2132 +                if (sc->sc_comp_failures > 0) {
     2133 +                        /*
     2134 +                         * We're biased for compression, so any success makes
     2135 +                         * us forget the file's past incompressibility.
     2136 +                         */
     2137 +                        sc->sc_comp_failures = 0;
     2138 +                } else {
     2139 +                        sc->sc_comp_failures = MAX(sc->sc_comp_failures - 1,
     2140 +                            -zfs_smartcomp_interval_exp);
     2141 +                }
     2142 +        }
     2143 +        /* reset state counters */
     2144 +        sc->sc_size = 0;
     2145 +        sc->sc_orig_size = 0;
     2146 +}
     2147 +
     2148 +/*
     2149 + * Prepares a zio_smartcomp_info_t structure for passing to zio_write or
     2150 + * arc_write depending on whether smart compression should be applied to
     2151 + * the specified objset, dnode and buffer.
     2152 + */
     2153 +extern void
     2154 +dnode_setup_zio_smartcomp(dmu_buf_impl_t *db, zio_smartcomp_info_t *sc)
     2155 +{
     2156 +        dnode_t *dn = DB_DNODE(db);
     2157 +        objset_t *os = dn->dn_objset;
     2158 +
     2159 +        /* Only do smart compression on user data of plain files. */
     2160 +        if (dn->dn_type == DMU_OT_PLAIN_FILE_CONTENTS && db->db_level == 0 &&
     2161 +            os->os_smartcomp_enabled && os->os_compress != ZIO_COMPRESS_OFF) {
     2162 +                sc->sc_ask = dnode_smartcomp_ask_cb;
     2163 +                sc->sc_result = dnode_smartcomp_result_cb;
     2164 +                sc->sc_userinfo = dn;
     2165 +        } else {
     2166 +                /*
     2167 +                 * Zeroing out the structure passed to zio_write will turn
     2168 +                 * smart compression off.
     2169 +                 */
     2170 +                bzero(sc, sizeof (*sc));
     2171 +        }
1999 2172  }
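
Note on the smart-compression tuning added above: with the defaults in this patch (zfs_smartcomp_interval = 1 MiB, zfs_smartcomp_threshold_factor = 8, zfs_smartcomp_interval_exp = 5), a 1 MiB sample must compress away at least 1/8 (12.5%) of its original size to count as a success; each consecutive failure doubles the deny window, up to 1 MiB << 5 = 32 MiB, and the window is then jittered by +-10% to avoid lining up with workload patterns. The standalone C sketch below models only that arithmetic. It is illustrative: model_sc_t, model_check_comp, and the use of rand() in place of spa_get_random() are assumptions for the sketch, not part of the patch.

/*
 * Userland model of the smart-compression deny-window arithmetic in
 * smartcomp_check_comp above. Illustrative sketch only.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define	MODEL_INTERVAL		(1ULL * 1024 * 1024)	/* zfs_smartcomp_interval */
#define	MODEL_THRESH_FACTOR	8			/* zfs_smartcomp_threshold_factor */
#define	MODEL_INTERVAL_EXP	5			/* zfs_smartcomp_interval_exp */

typedef struct {
	int64_t		failures;	/* mirrors sc_comp_failures */
	uint64_t	deny_interval;	/* mirrors sc_deny_interval */
} model_sc_t;

/* Record one 1 MiB compression sample: sizes after and before compression. */
static void
model_check_comp(model_sc_t *sc, uint64_t size, uint64_t orig_size)
{
	/* Compression must save at least 1/8 (12.5%) of the original. */
	uint64_t threshold = orig_size - orig_size / MODEL_THRESH_FACTOR;

	if (size > threshold) {
		if (sc->failures < MODEL_INTERVAL_EXP)
			sc->failures++;
		if (sc->failures > 0) {
			/* Deny window doubles with each consecutive failure. */
			sc->deny_interval = MODEL_INTERVAL << sc->failures;
			/* Jitter by +-10%; rand() stands in for spa_get_random(). */
			sc->deny_interval = sc->deny_interval -
			    sc->deny_interval / 10 +
			    (uint64_t)rand() % (sc->deny_interval / 5 + 1);
		}
	} else if (sc->failures > 0) {
		sc->failures = 0;		/* one success forgives the past */
	} else if (sc->failures > -MODEL_INTERVAL_EXP) {
		sc->failures--;			/* build up a success bias */
	}
}

int
main(void)
{
	model_sc_t sc = { 0, 0 };

	/* Five consecutive poor results: ~97% of the data survives compression. */
	for (int i = 1; i <= 5; i++) {
		model_check_comp(&sc, 995 * 1024, MODEL_INTERVAL);
		printf("failure %d: deny window ~%llu KiB\n",
		    i, (unsigned long long)(sc.deny_interval >> 10));
	}
	return (0);
}

Run with five poor samples as above, the sketch prints deny windows of roughly 2, 4, 8, 16 and 32 MiB (each +-10%), mirroring how smartcomp_check_comp backs off ever longer on a file that keeps refusing to compress.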
    