NEX-5736 implement autoreplace matching based on FRU slot number
NEX-6200 hot spares are not reactivated after reinserting into enclosure
NEX-9403 need to update FRU for spare and l2cache devices
NEX-9404 remove lofi autoreplace support from syseventd
NEX-9409 hotsparing doesn't work for vdevs without FRU
NEX-9424 zfs`vdev_online() needs better notification about state changes
Portions contributed by: Alek Pinchuk <alek@nexenta.com>
Portions contributed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-8986 assertion triggered in syseventd zfs_mod.so
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Marcel Telka <marcel@telka.sk>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Igor Kozhukhov <igor@dilos.org>
6175 sdev can create bogus zvol directories
Reviewed by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Jason King <jason.brian.king@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
6174 /dev/zvol does not show pool directories
Reviewed by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Jason King <jason.brian.king@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
5997 FRU field not set during pool creation and never updated
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Richard Elling <richard.elling@gmail.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Robert Mustacchi <rm@joyent.com>
6046 SPARC boot should support com.delphix:hole_birth
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
6041 SPARC boot should support LZ4
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
6044 SPARC zfs reader is using wrong size for objset_phys
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
backout 5997: breaks "zpool add"
5997 FRU field not set during pool creation and never updated
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
Reviewed by: Richard Elling <richard.elling@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
NEX-3474 CLONE - Port NEX-2591 FRU field not set during pool creation and never updated
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Josef Sipek <josef.sipek@nexenta.com>
NEX-860 Offlined vdevs are online after reboot
re #12684 rb4206 importing pool with autoreplace=on and "hole" vdevs crashes syseventd

          --- old/usr/src/cmd/syseventd/modules/zfs_mod/zfs_mod.c
          +++ new/usr/src/cmd/syseventd/modules/zfs_mod/zfs_mod.c
(10 lines elided)
  11   11   * and limitations under the License.
  12   12   *
  13   13   * When distributing Covered Code, include this CDDL HEADER in each
  14   14   * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15   15   * If applicable, add the following below this CDDL HEADER, with the
  16   16   * fields enclosed by brackets "[]" replaced with your own identifying
  17   17   * information: Portions Copyright [yyyy] [name of copyright owner]
  18   18   *
  19   19   * CDDL HEADER END
  20   20   */
       21 +
  21   22  /*
  22   23   * Copyright (c) 2007, 2010, Oracle and/or its affiliates. All rights reserved.
  23   24   * Copyright (c) 2012 by Delphix. All rights reserved.
  24      - * Copyright 2016 Nexenta Systems, Inc. All rights reserved.
       25 + * Copyright 2017 Nexenta Systems, Inc.
  25   26   */
  26   27  
  27   28  /*
  28   29   * ZFS syseventd module.
  29   30   *
  30      - * The purpose of this module is to identify when devices are added to the
  31      - * system, and appropriately online or replace the affected vdevs.
       31 + * The purpose of this module is to process ZFS related events.
  32   32   *
  33      - * When a device is added to the system:
       33 + * EC_DEV_ADD
       34 + *  ESC_DISK            Search for associated vdevs matching devid, physpath,
       35 + *                      or FRU, and appropriately online or replace the device.
  34   36   *
  35      - *      1. Search for any vdevs whose devid matches that of the newly added
  36      - *         device.
       37 + * EC_DEV_STATUS
       38 + *  ESC_DEV_DLE         Device capacity dynamically changed.  Process the change
       39 + *                      according to 'autoexpand' property.
  37   40   *
  38      - *      2. If no vdevs are found, then search for any vdevs whose devfs path
  39      - *         matches that of the new device.
  40      - *
  41      - *      3. If no vdevs match by either method, then ignore the event.
  42      - *
  43      - *      4. Attempt to online the device with a flag to indicate that it should
  44      - *         be unspared when resilvering completes.  If this succeeds, then the
  45      - *         same device was inserted and we should continue normally.
  46      - *
  47      - *      5. If the pool does not have the 'autoreplace' property set, attempt to
  48      - *         online the device again without the unspare flag, which will
  49      - *         generate a FMA fault.
  50      - *
  51      - *      6. If the pool has the 'autoreplace' property set, and the matching vdev
  52      - *         is a whole disk, then label the new disk and attempt a 'zpool
  53      - *         replace'.
  54      - *
  55      - * The module responds to EC_DEV_ADD events for both disks and lofi devices,
  56      - * with the latter used for testing.  The special ESC_ZFS_VDEV_CHECK event
  57      - * indicates that a device failed to open during pool load, but the autoreplace
  58      - * property was set.  In this case, we deferred the associated FMA fault until
  59      - * our module had a chance to process the autoreplace logic.  If the device
  60      - * could not be replaced, then the second online attempt will trigger the FMA
  61      - * fault that we skipped earlier.
       41 + * EC_ZFS
       42 + *  ESC_ZFS_VDEV_CHECK  This event indicates that a device failed to open during
       43 + *                      pool load, but the autoreplace property was set.  In
       44 + *                      this case the associated FMA fault was deferred until
       45 + *                      the module had a chance to process the autoreplace
       46 + *                      logic.  If the device could not be replaced, then the
       47 + *                      second online attempt will trigger the FMA fault that
       48 + *                      was skipped earlier.
       49 + *  ESC_ZFS_VDEV_ADD
       50 + *  ESC_ZFS_VDEV_ATTACH
       51 + *  ESC_ZFS_VDEV_CLEAR
       52 + *  ESC_ZFS_VDEV_ONLINE
       53 + *  ESC_ZFS_POOL_CREATE
       54 + *  ESC_ZFS_POOL_IMPORT All of the above events will trigger the update of
       55 + *                      FRU for all associated devices.
  62   56   */
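/*
 * Illustrative sketch only (not part of this change): the class/subclass
 * handling described above can be summarized as a table.  The handler names
 * refer to the zfs_deliver_*() functions defined later in this file (forward
 * declarations would be needed here); the table itself is hypothetical,
 * since zfs_deliver_event() below dispatches with strcmp() chains rather
 * than a lookup table.
 */
typedef struct zfs_event_map {
        const char      *zem_class;     /* sysevent class, e.g. EC_ZFS */
        const char      *zem_subclass;  /* sysevent subclass */
        int             (*zem_handler)(nvlist_t *);
} zfs_event_map_t;

static const zfs_event_map_t zfs_event_map[] = {
        { EC_DEV_ADD,           ESC_DISK,               zfs_deliver_add },
        { EC_DEV_STATUS,        ESC_DEV_DLE,            zfs_deliver_dle },
        { EC_ZFS,               ESC_ZFS_VDEV_CHECK,     zfs_deliver_check },
        { EC_ZFS,               ESC_ZFS_VDEV_ADD,       zfs_deliver_update },
        { EC_ZFS,               ESC_ZFS_VDEV_ATTACH,    zfs_deliver_update },
        { EC_ZFS,               ESC_ZFS_VDEV_CLEAR,     zfs_deliver_update },
        { EC_ZFS,               ESC_ZFS_VDEV_ONLINE,    zfs_deliver_update },
        { EC_ZFS,               ESC_ZFS_POOL_CREATE,    zfs_deliver_update },
        { EC_ZFS,               ESC_ZFS_POOL_IMPORT,    zfs_deliver_update },
};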
  63   57  
  64   58  #include <alloca.h>
  65   59  #include <devid.h>
  66   60  #include <fcntl.h>
  67   61  #include <libnvpair.h>
  68   62  #include <libsysevent.h>
  69   63  #include <libzfs.h>
  70   64  #include <limits.h>
  71   65  #include <stdlib.h>
  72   66  #include <string.h>
  73      -#include <syslog.h>
  74   67  #include <sys/list.h>
  75   68  #include <sys/sunddi.h>
       69 +#include <sys/fs/zfs.h>
  76   70  #include <sys/sysevent/eventdefs.h>
  77   71  #include <sys/sysevent/dev.h>
  78   72  #include <thread_pool.h>
  79   73  #include <unistd.h>
  80   74  #include "syseventd.h"
  81   75  
  82   76  #if defined(__i386) || defined(__amd64)
  83      -#define PHYS_PATH       ":q"
  84      -#define RAW_SLICE       "p0"
       77 +#define WD_MINOR        ":q"
  85   78  #elif defined(__sparc)
  86      -#define PHYS_PATH       ":c"
  87      -#define RAW_SLICE       "s2"
       79 +#define WD_MINOR        ":c"
  88   80  #else
  89   81  #error Unknown architecture
  90   82  #endif
  91   83  
  92      -typedef void (*zfs_process_func_t)(zpool_handle_t *, nvlist_t *, boolean_t);
       84 +#define DEVICE_PREFIX   "/devices"
  93   85  
       86 +typedef void (*zfs_process_func_t)(zpool_handle_t *, nvlist_t *, const char *);
       87 +
  94   88  libzfs_handle_t *g_zfshdl;
  95   89  list_t g_pool_list;
  96   90  tpool_t *g_tpool;
  97   91  boolean_t g_enumeration_done;
  98   92  thread_t g_zfs_tid;
  99   93  
 100   94  typedef struct unavailpool {
 101   95          zpool_handle_t  *uap_zhp;
 102   96          list_node_t     uap_node;
 103   97  } unavailpool_t;
(20 lines elided)
 124  118                  uap = malloc(sizeof (unavailpool_t));
 125  119                  uap->uap_zhp = zhp;
 126  120                  list_insert_tail((list_t *)data, uap);
 127  121          } else {
 128  122                  zpool_close(zhp);
 129  123          }
 130  124          return (0);
 131  125  }
 132  126  
 133  127  /*
 134      - * The device associated with the given vdev (either by devid or physical path)
 135      - * has been added to the system.  If 'isdisk' is set, then we only attempt a
 136      - * replacement if it's a whole disk.  This also implies that we should label the
 137      - * disk first.
 138      - *
 139      - * First, we attempt to online the device (making sure to undo any spare
 140      - * operation when finished).  If this succeeds, then we're done.  If it fails,
 141      - * and the new state is VDEV_CANT_OPEN, it indicates that the device was opened,
 142      - * but that the label was not what we expected.  If the 'autoreplace' property
 143      - * is not set, then we relabel the disk (if specified), and attempt a 'zpool
 144      - * replace'.  If the online is successful, but the new state is something else
 145      - * (REMOVED or FAULTED), it indicates that we're out of sync or in some sort of
 146      - * race, and we should avoid attempting to relabel the disk.
      128 + * The device associated with the given vdev (matched by devid, physical path,
      129 + * or FRU) has been added to the system.
 147  130   */
 148  131  static void
 149      -zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t isdisk)
      132 +zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, const char *newrawpath)
 150  133  {
 151      -        char *path;
 152  134          vdev_state_t newstate;
 153      -        nvlist_t *nvroot, *newvd;
      135 +        nvlist_t *nvroot = NULL, *newvd = NULL;
 154  136          uint64_t wholedisk = 0ULL;
 155  137          uint64_t offline = 0ULL;
 156      -        char *physpath = NULL;
 157      -        char rawpath[PATH_MAX], fullpath[PATH_MAX];
      138 +        boolean_t avail_spare, l2cache;
      139 +        const char *zc_type = ZPOOL_CONFIG_CHILDREN;
      140 +        char *devpath;                  /* current /dev path */
      141 +        char *physpath;                 /* current /devices node */
      142 +        char fullpath[PATH_MAX];        /* current /dev path without slice */
      143 +        char fullphyspath[PATH_MAX];    /* full /devices phys path */
      144 +        char newdevpath[PATH_MAX];      /* new /dev path */
      145 +        char newphyspath[PATH_MAX];     /* new /devices node */
      146 +        char diskname[PATH_MAX];        /* disk device without /dev and slice */
      147 +        const char *adevid = NULL;      /* devid to attach */
      148 +        const char *adevpath;           /* /dev path to attach */
      149 +        const char *aphyspath = NULL;   /* /devices node to attach */
 158  150          zpool_boot_label_t boot_type;
 159  151          uint64_t boot_size;
 160      -        size_t len;
 161  152  
 162      -        if (nvlist_lookup_string(vdev, ZPOOL_CONFIG_PATH, &path) != 0)
      153 +        if (nvlist_lookup_string(vdev, ZPOOL_CONFIG_PATH, &devpath) != 0)
 163  154                  return;
 164      -
 165  155          (void) nvlist_lookup_string(vdev, ZPOOL_CONFIG_PHYS_PATH, &physpath);
 166  156          (void) nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_WHOLE_DISK, &wholedisk);
 167  157          (void) nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_OFFLINE, &offline);
 168  158  
 169      -        /*
 170      -         * We should have a way to online a device by guid.  With the current
 171      -         * interface, we are forced to chop off the 's0' for whole disks.
 172      -         */
 173      -        (void) strlcpy(fullpath, path, sizeof (fullpath));
      159 +        /* Do nothing if vdev is explicitly marked offline */
      160 +        if (offline)
      161 +                return;
      162 +
      163 +        (void) strlcpy(fullpath, devpath, sizeof (fullpath));
      164 +        /* Chop off slice for whole disks */
 174  165          if (wholedisk)
 175  166                  fullpath[strlen(fullpath) - 2] = '\0';
 176  167  
 177  168          /*
 178      -         * Attempt to online the device.  It would be nice to online this by
 179      -         * GUID, but the current interface only supports lookup by path.
      169 +         * Device could still have valid label, so first attempt to online the
      170 +         * device undoing any spare operation. If online succeeds and new state
      171 +         * is either HEALTHY or DEGRADED, we are done.
 180  172           */
 181      -        if (offline ||
 182      -            (zpool_vdev_online(zhp, fullpath,
      173 +        if (zpool_vdev_online(zhp, fullpath,
 183  174              ZFS_ONLINE_CHECKREMOVE | ZFS_ONLINE_UNSPARE, &newstate) == 0 &&
 184      -            (newstate == VDEV_STATE_HEALTHY ||
 185      -            newstate == VDEV_STATE_DEGRADED)))
      175 +            (newstate == VDEV_STATE_HEALTHY || newstate == VDEV_STATE_DEGRADED))
 186  176                  return;
 187  177  
 188  178          /*
 189      -         * If the pool doesn't have the autoreplace property set, then attempt a
 190      -         * true online (without the unspare flag), which will trigger a FMA
 191      -         * fault.
      179 +         * If the pool doesn't have the autoreplace property set or this is a
      180 +         * non-whole disk vdev, there's nothing else we can do so attempt a true
      181 +         * online (without the unspare flag), which will trigger a FMA fault.
 192  182           */
 193      -        if (!zpool_get_prop_int(zhp, ZPOOL_PROP_AUTOREPLACE, NULL) ||
 194      -            (isdisk && !wholedisk)) {
      183 +        if (zpool_get_prop_int(zhp, ZPOOL_PROP_AUTOREPLACE, NULL) == 0 ||
      184 +            !wholedisk) {
 195  185                  (void) zpool_vdev_online(zhp, fullpath, ZFS_ONLINE_FORCEFAULT,
 196  186                      &newstate);
 197  187                  return;
 198  188          }
 199  189  
 200      -        if (isdisk) {
 201      -                /*
 202      -                 * If this is a request to label a whole disk, then attempt to
 203      -                 * write out the label.  Before we can label the disk, we need
 204      -                 * access to a raw node.  Ideally, we'd like to walk the devinfo
 205      -                 * tree and find a raw node from the corresponding parent node.
 206      -                 * This is overly complicated, and since we know how we labeled
 207      -                 * this device in the first place, we know it's save to switch
 208      -                 * from /dev/dsk to /dev/rdsk and append the backup slice.
 209      -                 *
 210      -                 * If any part of this process fails, then do a force online to
 211      -                 * trigger a ZFS fault for the device (and any hot spare
 212      -                 * replacement).
 213      -                 */
 214      -                if (strncmp(path, ZFS_DISK_ROOTD,
 215      -                    strlen(ZFS_DISK_ROOTD)) != 0) {
 216      -                        (void) zpool_vdev_online(zhp, fullpath,
 217      -                            ZFS_ONLINE_FORCEFAULT, &newstate);
 218      -                        return;
 219      -                }
      190 +        /*
      191 +         * Attempt to replace the device.
      192 +         *
      193 +         * If newrawpath is set (not NULL), then we matched by FRU and need to
      194 +         * use new /dev and /devices paths for attach.
      195 +         *
      196 +         * First, construct the short disk name to label, chopping off any
      197 +         * leading /dev path and slice (which newrawpath doesn't include).
      198 +         */
      199 +        if (newrawpath != NULL) {
      200 +                (void) strlcpy(diskname, newrawpath +
      201 +                    strlen(ZFS_RDISK_ROOTD), sizeof (diskname));
      202 +        } else {
      203 +                (void) strlcpy(diskname, fullpath +
      204 +                    strlen(ZFS_DISK_ROOTD), sizeof (diskname));
      205 +        }
 220  206  
 221      -                (void) strlcpy(rawpath, path + 9, sizeof (rawpath));
 222      -                len = strlen(rawpath);
 223      -                rawpath[len - 2] = '\0';
      207 +        /* Write out the label */
      208 +        if (zpool_is_bootable(zhp))
      209 +                boot_type = ZPOOL_COPY_BOOT_LABEL;
      210 +        else
      211 +                boot_type = ZPOOL_NO_BOOT_LABEL;
 224  212  
 225      -                if (zpool_is_bootable(zhp))
 226      -                        boot_type = ZPOOL_COPY_BOOT_LABEL;
 227      -                else
 228      -                        boot_type = ZPOOL_NO_BOOT_LABEL;
      213 +        boot_size = zpool_get_prop_int(zhp, ZPOOL_PROP_BOOTSIZE, NULL);
      214 +        if (zpool_label_disk(g_zfshdl, zhp, diskname, boot_type, boot_size,
      215 +            NULL) != 0) {
      216 +                syseventd_print(9, "%s: failed to write the label\n", __func__);
      217 +                return;
      218 +        }
 229  219  
 230      -                boot_size = zpool_get_prop_int(zhp, ZPOOL_PROP_BOOTSIZE, NULL);
 231      -                if (zpool_label_disk(g_zfshdl, zhp, rawpath,
 232      -                    boot_type, boot_size, NULL) != 0) {
 233      -                        (void) zpool_vdev_online(zhp, fullpath,
 234      -                            ZFS_ONLINE_FORCEFAULT, &newstate);
 235      -                        return;
 236      -                }
      220 +        /* Define "path" and "physpath" to be used for attach */
      221 +        if (newrawpath != NULL) {
      222 +                /* Construct newdevpath from newrawpath */
      223 +                (void) snprintf(newdevpath, sizeof (newdevpath), "%s%s%s",
      224 +                    ZFS_DISK_ROOTD, newrawpath + strlen(ZFS_RDISK_ROOTD),
      225 +                    (boot_size > 0) ? "s1" : "s0");
      226 +                /* Use replacing vdev's "path" and "physpath" */
      227 +                adevpath = newdevpath;
      228 +                /* Resolve /dev path to /devices node */
      229 +                aphyspath = realpath(newdevpath, newphyspath) +
      230 +                    strlen(DEVICE_PREFIX);
      231 +        } else {
      232 +                /* Use original vdev's "path" and "physpath" */
      233 +                adevpath = devpath;
      234 +                aphyspath = physpath;
 237  235          }
 238  236  
      237 +        /* Construct new devid */
      238 +        (void) snprintf(fullphyspath, sizeof (fullphyspath), "%s%s",
      239 +            DEVICE_PREFIX, aphyspath);
      240 +        adevid = devid_str_from_path(fullphyspath);
      241 +
 239  242          /*
 240      -         * Cosntruct the root vdev to pass to zpool_vdev_attach().  While adding
 241      -         * the entire vdev structure is harmless, we construct a reduced set of
 242      -         * path/physpath/wholedisk to keep it simple.
      243 +         * Check if replaced vdev is "available" (not swapped in) spare
      244 +         * or l2cache device.
 243  245           */
 244      -        if (nvlist_alloc(&nvroot, NV_UNIQUE_NAME, 0) != 0)
 245      -                return;
      246 +        (void) zpool_find_vdev(zhp, fullpath, &avail_spare, &l2cache, NULL,
      247 +            NULL);
      248 +        if (avail_spare)
      249 +                zc_type = ZPOOL_CONFIG_SPARES;
      250 +        else if (l2cache)
      251 +                zc_type = ZPOOL_CONFIG_L2CACHE;
 246  252  
 247      -        if (nvlist_alloc(&newvd, NV_UNIQUE_NAME, 0) != 0) {
 248      -                nvlist_free(nvroot);
 249      -                return;
 250      -        }
      253 +        /* Construct the root vdev */
      254 +        if (nvlist_alloc(&nvroot, NV_UNIQUE_NAME, 0) != 0 ||
      255 +            nvlist_alloc(&newvd, NV_UNIQUE_NAME, 0) != 0)
      256 +                goto fail;
 251  257  
 252  258          if (nvlist_add_string(newvd, ZPOOL_CONFIG_TYPE, VDEV_TYPE_DISK) != 0 ||
 253      -            nvlist_add_string(newvd, ZPOOL_CONFIG_PATH, path) != 0 ||
 254      -            (physpath != NULL && nvlist_add_string(newvd,
 255      -            ZPOOL_CONFIG_PHYS_PATH, physpath) != 0) ||
      259 +            (adevid != NULL &&
      260 +            nvlist_add_string(newvd, ZPOOL_CONFIG_DEVID, adevid) != 0) ||
      261 +            nvlist_add_string(newvd, ZPOOL_CONFIG_PATH, adevpath) != 0 ||
      262 +            (aphyspath != NULL &&
      263 +            nvlist_add_string(newvd, ZPOOL_CONFIG_PHYS_PATH, aphyspath) != 0) ||
 256  264              nvlist_add_uint64(newvd, ZPOOL_CONFIG_WHOLE_DISK, wholedisk) != 0 ||
 257  265              nvlist_add_string(nvroot, ZPOOL_CONFIG_TYPE, VDEV_TYPE_ROOT) != 0 ||
 258      -            nvlist_add_nvlist_array(nvroot, ZPOOL_CONFIG_CHILDREN, &newvd,
 259      -            1) != 0) {
 260      -                nvlist_free(newvd);
 261      -                nvlist_free(nvroot);
 262      -                return;
      266 +            nvlist_add_nvlist_array(nvroot, zc_type, &newvd, 1) != 0)
      267 +                goto fail;
      268 +
      269 +        if (avail_spare || l2cache) {
      270 +                /*
      271 +                 * For spares/l2cache, we need to explicitly remove the device
      272 +                 * and add the new one.
      273 +                 */
      274 +                (void) zpool_vdev_remove(zhp, fullpath);
      275 +                (void) zpool_add(zhp, nvroot);
      276 +        } else {
      277 +                /* Do the replace for regular vdevs */
      278 +                (void) zpool_vdev_attach(zhp, fullpath, adevpath, nvroot,
      279 +                    B_TRUE);
 263  280          }
 264  281  
      282 +fail:
      283 +        if (adevid != NULL)
      284 +                devid_str_free((char *)adevid);
 265  285          nvlist_free(newvd);
 266      -
 267      -        (void) zpool_vdev_attach(zhp, fullpath, path, nvroot, B_TRUE);
 268      -
 269  286          nvlist_free(nvroot);
 270      -
 271  287  }
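/*
 * Illustrative sketch only (not part of this change): the nvlist handed to
 * zpool_vdev_attach()/zpool_add() by zfs_process_add() above is a one-child
 * "root" vdev of the following shape:
 *
 *      nvroot: { type="root", <children|spares|l2cache> = [ newvd ] }
 *      newvd:  { type="disk", devid, path, phys_path, whole_disk }
 *
 * The hypothetical helper below simply rebuilds that layout for a single
 * replacement disk and returns NULL on any libnvpair failure.
 */
static nvlist_t *
make_replacement_root(const char *zc_type, const char *devid,
    const char *path, const char *physpath, uint64_t wholedisk)
{
        nvlist_t *nvroot = NULL, *newvd = NULL;

        if (nvlist_alloc(&nvroot, NV_UNIQUE_NAME, 0) != 0 ||
            nvlist_alloc(&newvd, NV_UNIQUE_NAME, 0) != 0)
                goto fail;

        if (nvlist_add_string(newvd, ZPOOL_CONFIG_TYPE, VDEV_TYPE_DISK) != 0 ||
            (devid != NULL &&
            nvlist_add_string(newvd, ZPOOL_CONFIG_DEVID, devid) != 0) ||
            nvlist_add_string(newvd, ZPOOL_CONFIG_PATH, path) != 0 ||
            (physpath != NULL &&
            nvlist_add_string(newvd, ZPOOL_CONFIG_PHYS_PATH, physpath) != 0) ||
            nvlist_add_uint64(newvd, ZPOOL_CONFIG_WHOLE_DISK, wholedisk) != 0 ||
            nvlist_add_string(nvroot, ZPOOL_CONFIG_TYPE, VDEV_TYPE_ROOT) != 0 ||
            nvlist_add_nvlist_array(nvroot, zc_type, &newvd, 1) != 0)
                goto fail;

        /* nvlist_add_nvlist_array() copies newvd, so the original can go */
        nvlist_free(newvd);
        return (nvroot);

fail:
        nvlist_free(newvd);
        nvlist_free(nvroot);
        return (NULL);
}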
 272  288  
 273  289  /*
 274  290   * Utility functions to find a vdev matching given criteria.
 275  291   */
 276  292  typedef struct dev_data {
 277  293          const char              *dd_compare;
 278  294          const char              *dd_prop;
      295 +        const char              *dd_devpath;
 279  296          zfs_process_func_t      dd_func;
      297 +        int                     (*dd_cmp_func)(libzfs_handle_t *, const char *,
      298 +                                    const char *, size_t);
 280  299          boolean_t               dd_found;
 281      -        boolean_t               dd_isdisk;
 282  300          uint64_t                dd_pool_guid;
 283  301          uint64_t                dd_vdev_guid;
 284  302  } dev_data_t;
 285  303  
 286  304  static void
 287  305  zfs_iter_vdev(zpool_handle_t *zhp, nvlist_t *nvl, void *data)
 288  306  {
 289  307          dev_data_t *dp = data;
 290      -        char *path;
 291      -        uint_t c, children;
 292      -        nvlist_t **child;
 293      -        size_t len;
      308 +        boolean_t nested = B_FALSE;
      309 +        char *cmp_str;
      310 +        nvlist_t **cnvl, **snvl, **lnvl;
      311 +        uint_t i, nc, ns, nl;
 294  312          uint64_t guid;
 295  313  
 296      -        /*
 297      -         * First iterate over any children.
 298      -         */
      314 +        /* Iterate over child vdevs */
 299  315          if (nvlist_lookup_nvlist_array(nvl, ZPOOL_CONFIG_CHILDREN,
 300      -            &child, &children) == 0) {
 301      -                for (c = 0; c < children; c++)
 302      -                        zfs_iter_vdev(zhp, child[c], data);
 303      -                return;
      316 +            &cnvl, &nc) == 0) {
      317 +                for (i = 0; i < nc; i++)
      318 +                        zfs_iter_vdev(zhp, cnvl[i], data);
      319 +                nested = B_TRUE;
 304  320          }
      321 +        /* Iterate over spare vdevs */
      322 +        if (nvlist_lookup_nvlist_array(nvl, ZPOOL_CONFIG_SPARES,
      323 +            &snvl, &ns) == 0) {
      324 +                for (i = 0; i < ns; i++)
      325 +                        zfs_iter_vdev(zhp, snvl[i], data);
      326 +                nested = B_TRUE;
      327 +        }
      328 +        /* Iterate over l2cache vdevs */
      329 +        if (nvlist_lookup_nvlist_array(nvl, ZPOOL_CONFIG_L2CACHE,
      330 +            &lnvl, &nl) == 0) {
      331 +                for (i = 0; i < nl; i++)
      332 +                        zfs_iter_vdev(zhp, lnvl[i], data);
      333 +                nested = B_TRUE;
      334 +        }
 305  335  
 306      -        if (dp->dd_vdev_guid != 0) {
 307      -                if (nvlist_lookup_uint64(nvl, ZPOOL_CONFIG_GUID,
 308      -                    &guid) != 0 || guid != dp->dd_vdev_guid)
 309      -                        return;
 310      -        } else if (dp->dd_compare != NULL) {
 311      -                len = strlen(dp->dd_compare);
      336 +        if (nested)
      337 +                return;
 312  338  
 313      -                if (nvlist_lookup_string(nvl, dp->dd_prop, &path) != 0 ||
 314      -                    strncmp(dp->dd_compare, path, len) != 0)
      339 +        if (dp->dd_vdev_guid != 0 && (nvlist_lookup_uint64(nvl,
      340 +            ZPOOL_CONFIG_GUID, &guid) != 0 || guid != dp->dd_vdev_guid))
 315  341                          return;
 316  342  
 317      -                /*
 318      -                 * Normally, we want to have an exact match for the comparison
 319      -                 * string.  However, we allow substring matches in the following
 320      -                 * cases:
 321      -                 *
 322      -                 *      <path>:         This is a devpath, and the target is one
 323      -                 *                      of its children.
 324      -                 *
 325      -                 *      <path/>         This is a devid for a whole disk, and
 326      -                 *                      the target is one of its children.
 327      -                 */
 328      -                if (path[len] != '\0' && path[len] != ':' &&
 329      -                    path[len - 1] != '/')
      343 +        if (dp->dd_compare != NULL && (nvlist_lookup_string(nvl, dp->dd_prop,
      344 +            &cmp_str) != 0 || dp->dd_cmp_func(g_zfshdl, dp->dd_compare, cmp_str,
      345 +            strlen(dp->dd_compare)) != 0))
 330  346                          return;
 331      -        }
 332  347  
 333      -        (dp->dd_func)(zhp, nvl, dp->dd_isdisk);
      348 +        dp->dd_found = B_TRUE;
      349 +        (dp->dd_func)(zhp, nvl, dp->dd_devpath);
 334  350  }
 335  351  
 336  352  void
 337  353  zfs_enable_ds(void *arg)
 338  354  {
 339  355          unavailpool_t *pool = (unavailpool_t *)arg;
 340  356  
 341  357          (void) zpool_enable_datasets(pool->uap_zhp, NULL, 0);
 342  358          zpool_close(pool->uap_zhp);
 343  359          free(pool);
(30 lines elided)
 374  390                                  break;
 375  391                          }
 376  392                  }
 377  393          }
 378  394  
 379  395          zpool_close(zhp);
 380  396          return (0);
 381  397  }
 382  398  
 383  399  /*
      400 + * Wrap strncmp() to be used as comparison function for devid_iter() and
      401 + * physpath_iter().
      402 + */
      403 +/* ARGSUSED */
      404 +static int
      405 +strncmp_wrap(libzfs_handle_t *hdl, const char *a, const char *b, size_t len)
      406 +{
      407 +        return (strncmp(a, b, len));
      408 +}
      409 +
      410 +/*
 384  411   * Given a physical device path, iterate over all (pool, vdev) pairs which
      412 + * correspond to the given path's FRU.
      413 + */
      414 +static boolean_t
      415 +devfru_iter(const char *devpath, const char *physpath, zfs_process_func_t func)
      416 +{
      417 +        dev_data_t data = { 0 };
      418 +        const char *fru;
      419 +
      420 +        /*
      421 +         * Need to refresh the fru cache otherwise we won't find the newly
      422 +         * inserted disk.
      423 +         */
      424 +        libzfs_fru_refresh(g_zfshdl);
      425 +
      426 +        fru = libzfs_fru_lookup(g_zfshdl, physpath);
      427 +        if (fru == NULL)
      428 +                return (B_FALSE);
      429 +
      430 +        data.dd_compare = fru;
      431 +        data.dd_func = func;
      432 +        data.dd_cmp_func = libzfs_fru_cmp_slot;
      433 +        data.dd_prop = ZPOOL_CONFIG_FRU;
      434 +        data.dd_found = B_FALSE;
      435 +        data.dd_devpath = devpath;
      436 +
      437 +        (void) zpool_iter(g_zfshdl, zfs_iter_pool, &data);
      438 +
      439 +        return (data.dd_found);
      440 +}
      441 +
      442 +/*
      443 + * Given a physical device path, iterate over all (pool, vdev) pairs which
 385  444   * correspond to the given path.
 386  445   */
      446 +/*ARGSUSED*/
 387  447  static boolean_t
 388      -devpath_iter(const char *devpath, zfs_process_func_t func, boolean_t wholedisk)
      448 +physpath_iter(const char *devpath, const char *physpath,
      449 +    zfs_process_func_t func)
 389  450  {
 390  451          dev_data_t data = { 0 };
 391  452  
 392      -        data.dd_compare = devpath;
      453 +        data.dd_compare = physpath;
 393  454          data.dd_func = func;
      455 +        data.dd_cmp_func = strncmp_wrap;
 394  456          data.dd_prop = ZPOOL_CONFIG_PHYS_PATH;
 395  457          data.dd_found = B_FALSE;
 396      -        data.dd_isdisk = wholedisk;
      458 +        data.dd_devpath = NULL;
 397  459  
 398  460          (void) zpool_iter(g_zfshdl, zfs_iter_pool, &data);
 399  461  
 400  462          return (data.dd_found);
 401  463  }
 402  464  
 403  465  /*
 404      - * Given a /devices path, lookup the corresponding devid for each minor node,
 405      - * and find any vdevs with matching devids.  Doing this straight up would be
 406      - * rather inefficient, O(minor nodes * vdevs in system), so we take advantage of
 407      - * the fact that each devid ends with "/<minornode>".  Once we find any valid
 408      - * minor node, we chop off the portion after the last slash, and then search for
 409      - * matching vdevs, which is O(vdevs in system).
      466 + * Given a devid, iterate over all (pool, vdev) pairs which correspond to the
      467 + * given vdev.
 410  468   */
      469 +/*ARGSUSED*/
 411  470  static boolean_t
 412      -devid_iter(const char *devpath, zfs_process_func_t func, boolean_t wholedisk)
      471 +devid_iter(const char *devpath, const char *physpath, zfs_process_func_t func)
 413  472  {
 414      -        size_t len = strlen(devpath) + sizeof ("/devices") +
 415      -            sizeof (PHYS_PATH) - 1;
 416      -        char *fullpath;
 417      -        int fd;
 418      -        ddi_devid_t devid;
 419      -        char *devidstr, *fulldevid;
      473 +        char fullphyspath[PATH_MAX];
      474 +        char *devidstr;
      475 +        char *s;
 420  476          dev_data_t data = { 0 };
 421  477  
 422      -        /*
 423      -         * Try to open a known minor node.
 424      -         */
 425      -        fullpath = alloca(len);
 426      -        (void) snprintf(fullpath, len, "/devices%s%s", devpath, PHYS_PATH);
 427      -        if ((fd = open(fullpath, O_RDONLY)) < 0)
 428      -                return (B_FALSE);
      478 +        /* Try to open a known minor node */
      479 +        (void) snprintf(fullphyspath, sizeof (fullphyspath), "%s%s%s",
      480 +            DEVICE_PREFIX, physpath, WD_MINOR);
 429  481  
 430      -        /*
 431      -         * Determine the devid as a string, with no trailing slash for the minor
 432      -         * node.
 433      -         */
 434      -        if (devid_get(fd, &devid) != 0) {
 435      -                (void) close(fd);
      482 +        devidstr = devid_str_from_path(fullphyspath);
      483 +        if (devidstr == NULL)
 436  484                  return (B_FALSE);
 437      -        }
 438      -        (void) close(fd);
      485 +        /* Chop off the minor node */
      486 +        if ((s = strrchr(devidstr, '/')) != NULL)
      487 +                *(s + 1) = '\0';
 439  488  
 440      -        if ((devidstr = devid_str_encode(devid, NULL)) == NULL) {
 441      -                devid_free(devid);
 442      -                return (B_FALSE);
 443      -        }
 444      -
 445      -        len = strlen(devidstr) + 2;
 446      -        fulldevid = alloca(len);
 447      -        (void) snprintf(fulldevid, len, "%s/", devidstr);
 448      -
 449      -        data.dd_compare = fulldevid;
      489 +        data.dd_compare = devidstr;
 450  490          data.dd_func = func;
      491 +        data.dd_cmp_func = strncmp_wrap;
 451  492          data.dd_prop = ZPOOL_CONFIG_DEVID;
 452  493          data.dd_found = B_FALSE;
 453      -        data.dd_isdisk = wholedisk;
      494 +        data.dd_devpath = NULL;
 454  495  
 455  496          (void) zpool_iter(g_zfshdl, zfs_iter_pool, &data);
 456  497  
 457  498          devid_str_free(devidstr);
 458      -        devid_free(devid);
 459  499  
 460  500          return (data.dd_found);
 461  501  }
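/*
 * Note (editorial, as the pre-existing devid matching relied on): devid
 * strings stored in the vdev config end with "/<minor name>".  devid_iter()
 * above therefore trims the freshly generated devid back to just past the
 * last '/', e.g. a hypothetical "id1,sd@SATA_____DISK_____SERIAL/a" becomes
 * "id1,sd@SATA_____DISK_____SERIAL/", so that the prefix strncmp() done by
 * strncmp_wrap() matches every minor node (slice) of the same disk.
 */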
 462  502  
 463  503  /*
 464      - * This function is called when we receive a devfs add event.  This can be
 465      - * either a disk event or a lofi event, and the behavior is slightly different
 466      - * depending on which it is.
      504 + * This function is called when we receive a devfs add event.
 467  505   */
 468  506  static int
 469      -zfs_deliver_add(nvlist_t *nvl, boolean_t is_lofi)
      507 +zfs_deliver_add(nvlist_t *nvl)
 470  508  {
 471      -        char *devpath, *devname;
 472      -        char path[PATH_MAX], realpath[PATH_MAX];
 473      -        char *colon, *raw;
 474      -        int ret;
      509 +        char *devpath, *physpath;
 475  510  
 476      -        /*
 477      -         * The main unit of operation is the physical device path.  For disks,
 478      -         * this is the device node, as all minor nodes are affected.  For lofi
 479      -         * devices, this includes the minor path.  Unfortunately, this isn't
 480      -         * represented in the DEV_PHYS_PATH for various reasons.
 481      -         */
 482      -        if (nvlist_lookup_string(nvl, DEV_PHYS_PATH, &devpath) != 0)
      511 +        if (nvlist_lookup_string(nvl, DEV_NAME, &devpath) != 0 ||
      512 +            nvlist_lookup_string(nvl, DEV_PHYS_PATH, &physpath) != 0)
 483  513                  return (-1);
 484  514  
 485  515          /*
 486      -         * If this is a lofi device, then also get the minor instance name.
 487      -         * Unfortunately, the current payload doesn't include an easy way to get
 488      -         * this information.  So we cheat by resolving the 'dev_name' (which
 489      -         * refers to the raw device) and taking the portion between ':(*),raw'.
      516 +         * Iterate over all vdevs with a matching devid, then those with a
      517 +         * matching /devices path, and finally those with a matching FRU slot
      518 +         * number, only paying attention to vdevs marked as whole disks.
 490  519           */
 491      -        (void) strlcpy(realpath, devpath, sizeof (realpath));
 492      -        if (is_lofi) {
 493      -                if (nvlist_lookup_string(nvl, DEV_NAME,
 494      -                    &devname) == 0 &&
 495      -                    (ret = resolvepath(devname, path,
 496      -                    sizeof (path))) > 0) {
 497      -                        path[ret] = '\0';
 498      -                        colon = strchr(path, ':');
 499      -                        if (colon != NULL)
 500      -                                raw = strstr(colon + 1, ",raw");
 501      -                        if (colon != NULL && raw != NULL) {
 502      -                                *raw = '\0';
 503      -                                (void) snprintf(realpath,
 504      -                                    sizeof (realpath), "%s%s",
 505      -                                    devpath, colon);
 506      -                                *raw = ',';
 507      -                        }
 508      -                }
      520 +        if (!devid_iter(devpath, physpath, zfs_process_add) &&
      521 +            !physpath_iter(devpath, physpath, zfs_process_add) &&
      522 +            !devfru_iter(devpath, physpath, zfs_process_add)) {
      523 +                syseventd_print(9, "%s: match failed devpath=%s physpath=%s\n",
      524 +                    __func__, devpath, physpath);
 509  525          }
 510  526  
 511      -        /*
 512      -         * Iterate over all vdevs with a matching devid, and then those with a
 513      -         * matching /devices path.  For disks, we only want to pay attention to
 514      -         * vdevs marked as whole disks.  For lofi, we don't care (because we're
 515      -         * matching an exact minor name).
 516      -         */
 517      -        if (!devid_iter(realpath, zfs_process_add, !is_lofi))
 518      -                (void) devpath_iter(realpath, zfs_process_add, !is_lofi);
 519      -
 520  527          return (0);
 521  528  }
 522  529  
 523  530  /*
 524  531   * Called when we receive a VDEV_CHECK event, which indicates a device could not
 525  532   * be opened during initial pool open, but the autoreplace property was set on
 526  533   * the pool.  In this case, we treat it as if it were an add event.
 527  534   */
 528  535  static int
 529  536  zfs_deliver_check(nvlist_t *nvl)
 530  537  {
 531  538          dev_data_t data = { 0 };
 532  539  
 533  540          if (nvlist_lookup_uint64(nvl, ZFS_EV_POOL_GUID,
 534  541              &data.dd_pool_guid) != 0 ||
 535  542              nvlist_lookup_uint64(nvl, ZFS_EV_VDEV_GUID,
 536  543              &data.dd_vdev_guid) != 0 ||
 537  544              data.dd_vdev_guid == 0)
 538  545                  return (0);
 539  546  
 540      -        data.dd_isdisk = B_TRUE;
 541  547          data.dd_func = zfs_process_add;
 542  548  
 543  549          (void) zpool_iter(g_zfshdl, zfs_iter_pool, &data);
 544  550  
 545  551          return (0);
 546  552  }
 547  553  
 548      -#define DEVICE_PREFIX   "/devices"
 549      -
 550  554  static int
 551  555  zfsdle_vdev_online(zpool_handle_t *zhp, void *data)
 552  556  {
 553  557          char *devname = data;
 554  558          boolean_t avail_spare, l2cache;
 555  559          vdev_state_t newstate;
 556  560          nvlist_t *tgt;
 557  561  
 558      -        syseventd_print(9, "zfsdle_vdev_online: searching for %s in pool %s\n",
      562 +        syseventd_print(9, "%s: searching for %s in pool %s\n", __func__,
 559  563              devname, zpool_get_name(zhp));
 560  564  
 561  565          if ((tgt = zpool_find_vdev_by_physpath(zhp, devname,
 562  566              &avail_spare, &l2cache, NULL)) != NULL) {
 563  567                  char *path, fullpath[MAXPATHLEN];
 564  568                  uint64_t wholedisk = 0ULL;
 565  569  
 566  570                  verify(nvlist_lookup_string(tgt, ZPOOL_CONFIG_PATH,
 567  571                      &path) == 0);
 568  572                  verify(nvlist_lookup_uint64(tgt, ZPOOL_CONFIG_WHOLE_DISK,
(5 lines elided)
 574  578  
 575  579                          /*
 576  580                           * We need to reopen the pool associated with this
 577  581                           * device so that the kernel can update the size
 578  582                           * of the expanded device.
 579  583                           */
 580  584                          (void) zpool_reopen(zhp);
 581  585                  }
 582  586  
 583  587                  if (zpool_get_prop_int(zhp, ZPOOL_PROP_AUTOEXPAND, NULL)) {
 584      -                        syseventd_print(9, "zfsdle_vdev_online: setting device"
 585      -                            " device %s to ONLINE state in pool %s.\n",
 586      -                            fullpath, zpool_get_name(zhp));
      588 +                        syseventd_print(9, "%s: setting device '%s' to ONLINE "
      589 +                            "state in pool %s\n", __func__, fullpath,
      590 +                            zpool_get_name(zhp));
 587  591                          if (zpool_get_state(zhp) != POOL_STATE_UNAVAIL)
 588  592                                  (void) zpool_vdev_online(zhp, fullpath, 0,
 589  593                                      &newstate);
 590  594                  }
 591  595                  zpool_close(zhp);
 592  596                  return (1);
 593  597          }
 594  598          zpool_close(zhp);
 595  599          return (0);
 596  600  }
 597  601  
 598  602  /*
 599  603   * This function is called for each vdev of a pool for which any of the
 600      - * following events was recieved:
      604 + * following events was received:
 601  605   *  - ESC_ZFS_vdev_add
 602  606   *  - ESC_ZFS_vdev_attach
 603  607   *  - ESC_ZFS_vdev_clear
 604  608   *  - ESC_ZFS_vdev_online
 605  609   *  - ESC_ZFS_pool_create
 606  610   *  - ESC_ZFS_pool_import
 607  611   * It will update the vdevs FRU property if it is out of date.
 608  612   */
 609      -/*ARGSUSED2*/
      613 +/*ARGSUSED*/
 610  614  static void
 611      -zfs_update_vdev_fru(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t isdisk)
      615 +zfs_update_vdev_fru(zpool_handle_t *zhp, nvlist_t *vdev, const char *devpath)
 612  616  {
 613      -        char *devpath, *cptr, *oldfru = NULL;
      617 +        char *physpath, *cptr, *oldfru = NULL;
 614  618          const char *newfru;
 615  619          uint64_t vdev_guid;
 616  620  
 617  621          (void) nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_GUID, &vdev_guid);
 618      -        (void) nvlist_lookup_string(vdev, ZPOOL_CONFIG_PHYS_PATH, &devpath);
      622 +        (void) nvlist_lookup_string(vdev, ZPOOL_CONFIG_PHYS_PATH, &physpath);
 619  623          (void) nvlist_lookup_string(vdev, ZPOOL_CONFIG_FRU, &oldfru);
 620  624  
 621      -        /* remove :<slice> from devpath */
 622      -        cptr = strrchr(devpath, ':');
      625 +        /* Remove :<slice> from physpath */
      626 +        cptr = strrchr(physpath, ':');
 623  627          if (cptr != NULL)
 624  628                  *cptr = '\0';
 625  629  
 626      -        newfru = libzfs_fru_lookup(g_zfshdl, devpath);
      630 +        newfru = libzfs_fru_lookup(g_zfshdl, physpath);
 627  631          if (newfru == NULL) {
 628      -                syseventd_print(9, "zfs_update_vdev_fru: no FRU for %s\n",
 629      -                    devpath);
      632 +                syseventd_print(9, "%s: physpath=%s newFRU=<none>\n", __func__,
      633 +                    physpath);
 630  634                  return;
 631  635          }
 632  636  
 633      -        /* do nothing if the FRU hasn't changed */
      637 +        /* Do nothing if the FRU hasn't changed */
 634  638          if (oldfru != NULL && libzfs_fru_compare(g_zfshdl, oldfru, newfru)) {
 635      -                syseventd_print(9, "zfs_update_vdev_fru: FRU unchanged\n");
      639 +                syseventd_print(9, "%s: physpath=%s newFRU=<unchanged>\n",
      640 +                    __func__, physpath);
 636  641                  return;
 637  642          }
 638  643  
 639      -        syseventd_print(9, "zfs_update_vdev_fru: devpath = %s\n", devpath);
 640      -        syseventd_print(9, "zfs_update_vdev_fru: FRU = %s\n", newfru);
      644 +        syseventd_print(9, "%s: physpath=%s newFRU=%s\n", __func__, physpath,
      645 +            newfru);
 641  646  
 642  647          (void) zpool_fru_set(zhp, vdev_guid, newfru);
 643  648  }
 644  649  
 645  650  /*
 646  651   * This function handles the following events:
 647  652   *  - ESC_ZFS_vdev_add
 648  653   *  - ESC_ZFS_vdev_attach
 649  654   *  - ESC_ZFS_vdev_clear
 650  655   *  - ESC_ZFS_vdev_online
↓ open down ↓ 3 lines elided ↑ open up ↑
 654  659   */
 655  660  int
 656  661  zfs_deliver_update(nvlist_t *nvl)
 657  662  {
 658  663          dev_data_t dd = { 0 };
 659  664          char *pname;
 660  665          zpool_handle_t *zhp;
 661  666          nvlist_t *config, *vdev;
 662  667  
 663  668          if (nvlist_lookup_string(nvl, "pool_name", &pname) != 0) {
 664      -                syseventd_print(9, "zfs_deliver_update: no pool name\n");
      669 +                syseventd_print(9, "%s: no pool name\n", __func__);
 665  670                  return (-1);
 666  671          }
 667  672  
 668  673          /*
 669  674           * If this event was triggered by a pool export or destroy we cannot
 670  675           * open the pool. This is not an error, just return 0 as we don't care
 671  676           * about these events.
 672  677           */
 673  678          zhp = zpool_open_canfail(g_zfshdl, pname);
 674  679          if (zhp == NULL)
 675  680                  return (0);
 676  681  
 677  682          config = zpool_get_config(zhp, NULL);
 678  683          if (config == NULL) {
 679      -                syseventd_print(9, "zfs_deliver_update: "
 680      -                    "failed to get pool config for %s\n", pname);
      684 +                syseventd_print(9, "%s: failed to get pool config for %s\n",
      685 +                    __func__, pname);
 681  686                  zpool_close(zhp);
 682  687                  return (-1);
 683  688          }
 684  689  
 685  690          if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_VDEV_TREE, &vdev) != 0) {
 686      -                syseventd_print(0, "zfs_deliver_update: "
 687      -                    "failed to get vdev tree for %s\n", pname);
      691 +                syseventd_print(0, "%s: failed to get vdev tree for %s\n",
      692 +                    __func__, pname);
 688  693                  zpool_close(zhp);
 689  694                  return (-1);
 690  695          }
 691  696  
 692  697          libzfs_fru_refresh(g_zfshdl);
 693  698  
 694  699          dd.dd_func = zfs_update_vdev_fru;
 695  700          zfs_iter_vdev(zhp, vdev, &dd);
 696  701  
 697  702          zpool_close(zhp);
 698  703          return (0);
 699  704  }
 700  705  
 701  706  int
 702  707  zfs_deliver_dle(nvlist_t *nvl)
 703  708  {
 704      -        char *devname;
 705      -        if (nvlist_lookup_string(nvl, DEV_PHYS_PATH, &devname) != 0) {
 706      -                syseventd_print(9, "zfs_deliver_event: no physpath\n");
      709 +        char *physpath;
      710 +
      711 +        if (nvlist_lookup_string(nvl, DEV_PHYS_PATH, &physpath) != 0) {
      712 +                syseventd_print(9, "%s: no physpath\n", __func__);
 707  713                  return (-1);
 708  714          }
 709      -        if (strncmp(devname, DEVICE_PREFIX, strlen(DEVICE_PREFIX)) != 0) {
 710      -                syseventd_print(9, "zfs_deliver_event: invalid "
 711      -                    "device '%s'", devname);
      715 +        if (strncmp(physpath, DEVICE_PREFIX, strlen(DEVICE_PREFIX)) != 0) {
      716 +                syseventd_print(9, "%s: invalid device '%s'", __func__,
      717 +                    physpath);
 712  718                  return (-1);
 713  719          }
 714  720  
 715  721          /*
 716  722           * We try to find the device using the physical
 717  723           * path that has been supplied. We need to strip off
 718  724           * the /devices prefix before starting our search.
 719  725           */
 720      -        devname += strlen(DEVICE_PREFIX);
 721      -        if (zpool_iter(g_zfshdl, zfsdle_vdev_online, devname) != 1) {
 722      -                syseventd_print(9, "zfs_deliver_event: device '%s' not"
 723      -                    " found\n", devname);
      726 +        physpath += strlen(DEVICE_PREFIX);
      727 +        if (zpool_iter(g_zfshdl, zfsdle_vdev_online, physpath) != 1) {
       728 +                syseventd_print(9, "%s: device '%s' not found\n",
      729 +                    __func__, physpath);
 724  730                  return (1);
 725  731          }
 726  732          return (0);
 727  733  }
 728  734  
 729  735  
 730  736  /*ARGSUSED*/
 731  737  static int
 732  738  zfs_deliver_event(sysevent_t *ev, int unused)
 733  739  {
 734  740          const char *class = sysevent_get_class_name(ev);
 735  741          const char *subclass = sysevent_get_subclass_name(ev);
 736  742          nvlist_t *nvl;
 737  743          int ret;
 738      -        boolean_t is_lofi = B_FALSE, is_check = B_FALSE;
 739      -        boolean_t is_dle = B_FALSE, is_update = B_FALSE;
      744 +        boolean_t is_check = B_FALSE;
      745 +        boolean_t is_dle = B_FALSE;
      746 +        boolean_t is_update = B_FALSE;
 740  747  
 741  748          if (strcmp(class, EC_DEV_ADD) == 0) {
 742      -                /*
 743      -                 * We're mainly interested in disk additions, but we also listen
 744      -                 * for new lofi devices, to allow for simplified testing.
 745      -                 */
 746      -                if (strcmp(subclass, ESC_DISK) == 0)
 747      -                        is_lofi = B_FALSE;
 748      -                else if (strcmp(subclass, ESC_LOFI) == 0)
 749      -                        is_lofi = B_TRUE;
 750      -                else
      749 +                /* We're only interested in disk additions */
      750 +                if (strcmp(subclass, ESC_DISK) != 0)
 751  751                          return (0);
 752      -
 753      -                is_check = B_FALSE;
 754  752          } else if (strcmp(class, EC_ZFS) == 0) {
 755  753                  if (strcmp(subclass, ESC_ZFS_VDEV_CHECK) == 0) {
 756  754                          /*
 757  755                           * This event signifies that a device failed to open
 758  756                           * during pool load, but the 'autoreplace' property was
 759  757                           * set, so we should pretend it's just been added.
 760  758                           */
 761  759                          is_check = B_TRUE;
 762  760                  } else if ((strcmp(subclass, ESC_ZFS_VDEV_ADD) == 0) ||
 763  761                      (strcmp(subclass, ESC_ZFS_VDEV_ATTACH) == 0) ||
(17 lines elided)
 781  779          if (sysevent_get_attr_list(ev, &nvl) != 0)
 782  780                  return (-1);
 783  781  
 784  782          if (is_dle)
 785  783                  ret = zfs_deliver_dle(nvl);
 786  784          else if (is_update)
 787  785                  ret = zfs_deliver_update(nvl);
 788  786          else if (is_check)
 789  787                  ret = zfs_deliver_check(nvl);
 790  788          else
 791      -                ret = zfs_deliver_add(nvl, is_lofi);
      789 +                ret = zfs_deliver_add(nvl);
 792  790  
 793  791          nvlist_free(nvl);
 794  792          return (ret);
 795  793  }
 796  794  
 797  795  /*ARGSUSED*/
 798  796  void *
 799  797  zfs_enum_pools(void *arg)
 800  798  {
 801  799          (void) zpool_iter(g_zfshdl, zfs_unavail_pool, (void *)&g_pool_list);
(45 lines elided)