NEX-19178 Changing the NFS export path makes the SMB share offline
Reviewed by: Evan Layton <evan.layton@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Matt Barden <matt.barden@nexenta.com>
Revert "NEX-19178 Changing the ZFS mountpoint property of a dataset takes the SMB share offline"
This reverts commit 35bb44b3cdee0719ce685304ca801335d5cc234e.
NEX-19178 Changing the ZFS mountpoint property of a dataset takes the SMB share offline
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Matt Barden <matt.barden@nexenta.com>
NEX-15279 support NFS server in zone
NEX-15520 online NFS shares cause zoneadm halt to hang in nfs_export_zone_fini
Portions contributed by: Dan Kruchinin <dan.kruchinin@nexenta.com>
Portions contributed by: Stepan Zastupov <stepan.zastupov@gmail.com>
Reviewed by: Joyce McIntosh <joyce.mcintosh@nexenta.com>
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Gordon Ross <gordon.ross@nexenta.com>
NEX-16219 pool import performance regression due to repeated libshare initialization
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Evan Layton <evan.layton@nexenta.com>
NEX-15937 zpool import performance degradation in filesystem sharing
Reviewed by: Evan Layton <evan.layton@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-6586 cleanup gcc warnings in libzfs_mount.c
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
2605 want to resume interrupted zfs send
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed by: Xin Li <delphij@freebsd.org>
Reviewed by: Arne Jansen <sensille@gmx.net>
Approved by: Dan McDonald <danmcd@omniti.com>
6280 libzfs: unshare_one() could fail with EZFS_SHARENFSFAILED
Reviewed by: Toomas Soome <tsoome@me.com>
Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gwr@nexenta.com>
NEX-1557 Parallel mount during HA Failover sometimes doesn't share the dataset, causes shares to go offline
SUP-647 Long failover times dominated by zpool import times trigger client-side errors
re #13594 rb4488 Lint complaints fix
re #10054 #13409 rb4387 added parallel unmount for zpool export
--- old/usr/src/lib/libzfs/common/libzfs_mount.c
+++ new/usr/src/lib/libzfs/common/libzfs_mount.c
1 1 /*
2 2 * CDDL HEADER START
3 3 *
4 4 * The contents of this file are subject to the terms of the
5 5 * Common Development and Distribution License (the "License").
6 6 * You may not use this file except in compliance with the License.
7 7 *
8 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 * or http://www.opensolaris.org/os/licensing.
10 10 * See the License for the specific language governing permissions
11 11 * and limitations under the License.
12 12 *
13 13 * When distributing Covered Code, include this CDDL HEADER in each
14 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 * If applicable, add the following below this CDDL HEADER, with the
16 16 * fields enclosed by brackets "[]" replaced with your own identifying
17 17 * information: Portions Copyright [yyyy] [name of copyright owner]
18 18 *
19 19 * CDDL HEADER END
20 20 */
21 21
22 22 /*
23 - * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
23 + * Copyright 2018 Nexenta Systems, Inc. All rights reserved.
24 24 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
25 + */
26 +
27 +/*
28 + * Copyright 2019 Nexenta Systems, Inc.
25 29 * Copyright (c) 2014, 2016 by Delphix. All rights reserved.
26 30 * Copyright 2016 Igor Kozhukhov <ikozhukhov@gmail.com>
27 31 * Copyright 2017 Joyent, Inc.
28 32 * Copyright 2017 RackTop Systems.
29 33 */
30 34
31 35 /*
32 36 * Routines to manage ZFS mounts. We separate all the nasty routines that have
33 37 * to deal with the OS. The following functions are the main entry points --
34 38 * they are used by mount and unmount and when changing a filesystem's
35 39 * mountpoint.
36 40 *
37 41 * zfs_is_mounted()
38 42 * zfs_mount()
39 43 * zfs_unmount()
40 44 * zfs_unmountall()
41 45 *
42 46 * This file also contains the functions used to manage sharing filesystems via
43 47 * NFS and SMB:
44 48 *
45 49 * zfs_is_shared()
46 50 * zfs_share()
47 51 * zfs_unshare()
48 52 *
49 53 * zfs_is_shared_nfs()
50 54 * zfs_is_shared_smb()
51 55 * zfs_share_proto()
52 56 * zfs_shareall()
53 57 * zfs_unshare_nfs()
54 58 * zfs_unshare_smb()
55 59 * zfs_unshareall_nfs()
56 60 * zfs_unshareall_smb()
57 61 * zfs_unshareall()
58 62 * zfs_unshareall_bypath()
59 63 *
60 64 * The following functions are available for pool consumers, and will
61 65 * mount/unmount and share/unshare all datasets within pool:
62 66 *
63 67 * zpool_enable_datasets()
68 + * zpool_enable_datasets_ex()
64 69 * zpool_disable_datasets()
70 + * zpool_disable_datasets_ex()
65 71 */
66 72
67 73 #include <dirent.h>
68 74 #include <dlfcn.h>
69 75 #include <errno.h>
70 76 #include <fcntl.h>
71 77 #include <libgen.h>
72 78 #include <libintl.h>
73 79 #include <stdio.h>
74 80 #include <stdlib.h>
75 81 #include <strings.h>
76 82 #include <unistd.h>
77 83 #include <zone.h>
78 84 #include <sys/mntent.h>
79 85 #include <sys/mount.h>
80 86 #include <sys/stat.h>
87 +#include <thread_pool.h>
81 88 #include <sys/statvfs.h>
82 89
83 90 #include <libzfs.h>
84 91
85 92 #include "libzfs_impl.h"
86 93
87 94 #include <libshare.h>
88 95 #include <sys/systeminfo.h>
89 96 #define MAXISALEN 257 /* based on sysinfo(2) man page */
90 97
91 98 static int zfs_share_proto(zfs_handle_t *, zfs_share_proto_t *);
92 99 zfs_share_type_t zfs_is_shared_proto(zfs_handle_t *, char **,
93 100 zfs_share_proto_t);
94 101
95 102 /*
96 103 * The share protocols table must be in the same order as the zfs_share_proto_t
97 104 * enum in libzfs_impl.h
98 105 */
99 106 typedef struct {
100 107 zfs_prop_t p_prop;
101 108 char *p_name;
102 109 int p_share_err;
103 110 int p_unshare_err;
104 111 } proto_table_t;
105 112
106 113 proto_table_t proto_table[PROTO_END] = {
107 114 {ZFS_PROP_SHARENFS, "nfs", EZFS_SHARENFSFAILED, EZFS_UNSHARENFSFAILED},
108 115 {ZFS_PROP_SHARESMB, "smb", EZFS_SHARESMBFAILED, EZFS_UNSHARESMBFAILED},
109 116 };
110 117
111 118 zfs_share_proto_t nfs_only[] = {
112 119 PROTO_NFS,
113 120 PROTO_END
114 121 };
115 122
116 123 zfs_share_proto_t smb_only[] = {
117 124 PROTO_SMB,
118 125 PROTO_END
119 126 };
120 127 zfs_share_proto_t share_all_proto[] = {
121 128 PROTO_NFS,
122 129 PROTO_SMB,
123 130 PROTO_END
124 131 };
125 132
126 133 /*
127 134 * Search the sharetab for the given mountpoint and protocol, returning
128 135 * a zfs_share_type_t value.
129 136 */
130 137 static zfs_share_type_t
131 138 is_shared(libzfs_handle_t *hdl, const char *mountpoint, zfs_share_proto_t proto)
132 139 {
133 140 char buf[MAXPATHLEN], *tab;
134 141 char *ptr;
135 142
136 143 if (hdl->libzfs_sharetab == NULL)
137 144 return (SHARED_NOT_SHARED);
138 145
139 146 (void) fseek(hdl->libzfs_sharetab, 0, SEEK_SET);
140 147
141 148 while (fgets(buf, sizeof (buf), hdl->libzfs_sharetab) != NULL) {
142 149
143 150 /* the mountpoint is the first entry on each line */
144 151 if ((tab = strchr(buf, '\t')) == NULL)
145 152 continue;
146 153
147 154 *tab = '\0';
148 155 if (strcmp(buf, mountpoint) == 0) {
149 156 /*
150 157 * the protocol field is the third field
151 158 * skip over second field
152 159 */
153 160 ptr = ++tab;
154 161 if ((tab = strchr(ptr, '\t')) == NULL)
155 162 continue;
156 163 ptr = ++tab;
157 164 if ((tab = strchr(ptr, '\t')) == NULL)
158 165 continue;
159 166 *tab = '\0';
160 167 if (strcmp(ptr,
161 168 proto_table[proto].p_name) == 0) {
162 169 switch (proto) {
163 170 case PROTO_NFS:
164 171 return (SHARED_NFS);
165 172 case PROTO_SMB:
166 173 return (SHARED_SMB);
167 174 default:
168 175 return (0);
169 176 }
170 177 }
171 178 }
172 179 }
173 180
174 181 return (SHARED_NOT_SHARED);
175 182 }
176 183
177 184 static boolean_t
178 185 dir_is_empty_stat(const char *dirname)
179 186 {
180 187 struct stat st;
181 188
182 189 /*
183 190 * We only want to return false if the given path is a non-empty
184 191 * directory; all other errors are handled elsewhere.
185 192 */
186 193 if (stat(dirname, &st) < 0 || !S_ISDIR(st.st_mode)) {
187 194 return (B_TRUE);
188 195 }
189 196
190 197 /*
191 198 * An empty directory will still have two entries in it, one
192 199 * entry for each of "." and "..".
193 200 */
194 201 if (st.st_size > 2) {
195 202 return (B_FALSE);
196 203 }
197 204
198 205 return (B_TRUE);
199 206 }
200 207
201 208 static boolean_t
202 209 dir_is_empty_readdir(const char *dirname)
203 210 {
204 211 DIR *dirp;
205 212 struct dirent64 *dp;
206 213 int dirfd;
207 214
208 215 if ((dirfd = openat(AT_FDCWD, dirname,
209 216 O_RDONLY | O_NDELAY | O_LARGEFILE | O_CLOEXEC, 0)) < 0) {
210 217 return (B_TRUE);
211 218 }
212 219
213 220 if ((dirp = fdopendir(dirfd)) == NULL) {
214 221 (void) close(dirfd);
215 222 return (B_TRUE);
216 223 }
217 224
218 225 while ((dp = readdir64(dirp)) != NULL) {
219 226
220 227 if (strcmp(dp->d_name, ".") == 0 ||
221 228 strcmp(dp->d_name, "..") == 0)
222 229 continue;
223 230
224 231 (void) closedir(dirp);
225 232 return (B_FALSE);
226 233 }
227 234
228 235 (void) closedir(dirp);
229 236 return (B_TRUE);
230 237 }
231 238
232 239 /*
233 240 * Returns true if the specified directory is empty. If we can't open the
234 241 * directory at all, return true so that the mount can fail with a more
235 242 * informative error message.
236 243 */
237 244 static boolean_t
238 245 dir_is_empty(const char *dirname)
239 246 {
240 247 struct statvfs64 st;
241 248
242 249 /*
243 250 * If the statvfs call fails or the filesystem is not a ZFS
244 251 * filesystem, fall back to the slow path which uses readdir.
245 252 */
246 253 if ((statvfs64(dirname, &st) != 0) ||
247 254 (strcmp(st.f_basetype, "zfs") != 0)) {
248 255 return (dir_is_empty_readdir(dirname));
249 256 }
250 257
251 258 /*
252 259 * At this point, we know the provided path is on a ZFS
253 260 * filesystem, so we can use stat instead of readdir to
254 261 * determine if the directory is empty or not. We try to avoid
255 262 * using readdir because that requires opening "dirname"; this
256 263 * open file descriptor can potentially end up in a child
257 264 * process if there's a concurrent fork, thus preventing the
258 265 * zfs_mount() from otherwise succeeding (the open file
259 266 * descriptor inherited by the child process will cause the
260 267 * parent's mount to fail with EBUSY). The performance
261 268 * implications of replacing the open, read, and close with a
262 269 * single stat is nice; but is not the main motivation for the
263 270 * added complexity.
264 271 */
265 272 return (dir_is_empty_stat(dirname));
266 273 }
267 274
268 275 /*
269 276 * Checks to see if the mount is active. If the filesystem is mounted, we fill
270 277 * in 'where' with the current mountpoint, and return 1. Otherwise, we return
271 278 * 0.
272 279 */
273 280 boolean_t
274 281 is_mounted(libzfs_handle_t *zfs_hdl, const char *special, char **where)
275 282 {
276 283 struct mnttab entry;
277 284
278 285 if (libzfs_mnttab_find(zfs_hdl, special, &entry) != 0)
279 286 return (B_FALSE);
280 287
281 288 if (where != NULL)
282 289 *where = zfs_strdup(zfs_hdl, entry.mnt_mountp);
283 290
284 291 return (B_TRUE);
285 292 }
286 293
287 294 boolean_t
288 295 zfs_is_mounted(zfs_handle_t *zhp, char **where)
289 296 {
290 297 return (is_mounted(zhp->zfs_hdl, zfs_get_name(zhp), where));
291 298 }
292 299
293 300 /*
294 301 * Returns true if the given dataset is mountable, false otherwise. Returns the
295 302 * mountpoint in 'buf'.
296 303 */
297 304 static boolean_t
298 305 zfs_is_mountable(zfs_handle_t *zhp, char *buf, size_t buflen,
299 306 zprop_source_t *source)
300 307 {
301 308 char sourceloc[MAXNAMELEN];
302 309 zprop_source_t sourcetype;
303 310
304 311 if (!zfs_prop_valid_for_type(ZFS_PROP_MOUNTPOINT, zhp->zfs_type))
305 312 return (B_FALSE);
306 313
307 314 verify(zfs_prop_get(zhp, ZFS_PROP_MOUNTPOINT, buf, buflen,
308 315 &sourcetype, sourceloc, sizeof (sourceloc), B_FALSE) == 0);
309 316
310 317 if (strcmp(buf, ZFS_MOUNTPOINT_NONE) == 0 ||
311 318 strcmp(buf, ZFS_MOUNTPOINT_LEGACY) == 0)
312 319 return (B_FALSE);
313 320
314 321 if (zfs_prop_get_int(zhp, ZFS_PROP_CANMOUNT) == ZFS_CANMOUNT_OFF)
315 322 return (B_FALSE);
316 323
317 324 if (zfs_prop_get_int(zhp, ZFS_PROP_ZONED) &&
318 325 getzoneid() == GLOBAL_ZONEID)
319 326 return (B_FALSE);
320 327
321 328 if (source)
322 329 *source = sourcetype;
323 330
324 331 return (B_TRUE);
325 332 }
326 333
327 334 /*
328 335 * Mount the given filesystem.
329 336 */
330 337 int
331 338 zfs_mount(zfs_handle_t *zhp, const char *options, int flags)
332 339 {
333 340 struct stat buf;
334 341 char mountpoint[ZFS_MAXPROPLEN];
335 342 char mntopts[MNT_LINE_MAX];
336 343 libzfs_handle_t *hdl = zhp->zfs_hdl;
337 344
338 345 if (options == NULL)
339 346 mntopts[0] = '\0';
340 347 else
341 348 (void) strlcpy(mntopts, options, sizeof (mntopts));
342 349
343 350 /*
344 351 * If the pool is imported read-only then all mounts must be read-only
345 352 */
346 353 if (zpool_get_prop_int(zhp->zpool_hdl, ZPOOL_PROP_READONLY, NULL))
347 354 flags |= MS_RDONLY;
348 355
349 356 if (!zfs_is_mountable(zhp, mountpoint, sizeof (mountpoint), NULL))
350 357 return (0);
351 358
352 359 /* Create the directory if it doesn't already exist */
353 360 if (lstat(mountpoint, &buf) != 0) {
354 361 if (mkdirp(mountpoint, 0755) != 0) {
355 362 zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
356 363 "failed to create mountpoint"));
357 364 return (zfs_error_fmt(hdl, EZFS_MOUNTFAILED,
358 365 dgettext(TEXT_DOMAIN, "cannot mount '%s'"),
359 366 mountpoint));
360 367 }
361 368 }
362 369
363 370 /*
364 371 * Determine if the mountpoint is empty. If so, refuse to perform the
365 372 * mount. We don't perform this check if MS_OVERLAY is specified, which
366 373 * would defeat the point. We also avoid this check if 'remount' is
367 374 * specified.
368 375 */
369 376 if ((flags & MS_OVERLAY) == 0 &&
370 377 strstr(mntopts, MNTOPT_REMOUNT) == NULL &&
371 378 !dir_is_empty(mountpoint)) {
372 379 zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
373 380 "directory is not empty"));
374 381 return (zfs_error_fmt(hdl, EZFS_MOUNTFAILED,
375 382 dgettext(TEXT_DOMAIN, "cannot mount '%s'"), mountpoint));
376 383 }
377 384
378 385 /* perform the mount */
379 386 if (mount(zfs_get_name(zhp), mountpoint, MS_OPTIONSTR | flags,
380 387 MNTTYPE_ZFS, NULL, 0, mntopts, sizeof (mntopts)) != 0) {
381 388 /*
382 389 * Generic errors are nasty, but there are just way too many
383 390 * from mount(), and they're well-understood. We pick a few
384 391 * common ones to improve upon.
385 392 */
386 393 if (errno == EBUSY) {
387 394 zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
388 395 "mountpoint or dataset is busy"));
389 396 } else if (errno == EPERM) {
390 397 zfs_error_aux(hdl, dgettext(TEXT_DOMAIN,
391 398 "Insufficient privileges"));
392 399 } else if (errno == ENOTSUP) {
393 400 char buf[256];
394 401 int spa_version;
395 402
396 403 VERIFY(zfs_spa_version(zhp, &spa_version) == 0);
397 404 (void) snprintf(buf, sizeof (buf),
398 405 dgettext(TEXT_DOMAIN, "Can't mount a version %lld "
399 406 "file system on a version %d pool. Pool must be"
400 407 " upgraded to mount this file system."),
401 408 (u_longlong_t)zfs_prop_get_int(zhp,
402 409 ZFS_PROP_VERSION), spa_version);
403 410 zfs_error_aux(hdl, dgettext(TEXT_DOMAIN, buf));
404 411 } else {
405 412 zfs_error_aux(hdl, strerror(errno));
406 413 }
407 414 return (zfs_error_fmt(hdl, EZFS_MOUNTFAILED,
408 415 dgettext(TEXT_DOMAIN, "cannot mount '%s'"),
409 416 zhp->zfs_name));
410 417 }
411 418
412 419 /* add the mounted entry into our cache */
413 420 libzfs_mnttab_add(hdl, zfs_get_name(zhp), mountpoint,
414 421 mntopts);
415 422 return (0);
416 423 }
417 424
418 425 /*
419 426 * Unmount a single filesystem.
420 427 */
421 428 static int
422 429 unmount_one(libzfs_handle_t *hdl, const char *mountpoint, int flags)
423 430 {
424 - if (umount2(mountpoint, flags) != 0) {
431 + int ret = umount2(mountpoint, flags);
432 + if (ret != 0) {
425 433 zfs_error_aux(hdl, strerror(errno));
426 434 return (zfs_error_fmt(hdl, EZFS_UMOUNTFAILED,
427 435 dgettext(TEXT_DOMAIN, "cannot unmount '%s'"),
428 436 mountpoint));
429 437 }
430 438
431 439 return (0);
432 440 }
433 441
434 442 /*
435 443 * Unmount the given filesystem.
436 444 */
437 445 int
438 446 zfs_unmount(zfs_handle_t *zhp, const char *mountpoint, int flags)
439 447 {
440 448 libzfs_handle_t *hdl = zhp->zfs_hdl;
441 449 struct mnttab entry;
442 450 char *mntpt = NULL;
443 451
444 452 /* check to see if we need to unmount the filesystem */
445 453 if (mountpoint != NULL || ((zfs_get_type(zhp) == ZFS_TYPE_FILESYSTEM) &&
446 454 libzfs_mnttab_find(hdl, zhp->zfs_name, &entry) == 0)) {
447 455 /*
448 456 * mountpoint may have come from a call to
449 457 * getmnt/getmntany if it isn't NULL. If it is NULL,
450 458 * we know it comes from libzfs_mnttab_find which can
451 459 * then get freed later. We strdup it to play it safe.
452 460 */
453 461 if (mountpoint == NULL)
454 462 mntpt = zfs_strdup(hdl, entry.mnt_mountp);
455 463 else
456 464 mntpt = zfs_strdup(hdl, mountpoint);
457 465
458 466 /*
459 467 * Unshare and unmount the filesystem
460 468 */
461 469 if (zfs_unshare_proto(zhp, mntpt, share_all_proto) != 0)
462 470 return (-1);
463 471
464 472 if (unmount_one(hdl, mntpt, flags) != 0) {
465 473 free(mntpt);
466 474 (void) zfs_shareall(zhp);
467 475 return (-1);
468 476 }
469 477 libzfs_mnttab_remove(hdl, zhp->zfs_name);
470 478 free(mntpt);
471 479 }
472 480
473 481 return (0);
474 482 }
475 483
476 484 /*
477 485 * Unmount this filesystem and any children inheriting the mountpoint property.
478 486 * To do this, just act like we're changing the mountpoint property, but don't
479 487 * remount the filesystems afterwards.
480 488 */
481 489 int
482 490 zfs_unmountall(zfs_handle_t *zhp, int flags)
483 491 {
484 492 prop_changelist_t *clp;
485 493 int ret;
486 494
487 495 clp = changelist_gather(zhp, ZFS_PROP_MOUNTPOINT, 0, flags);
488 496 if (clp == NULL)
489 497 return (-1);
490 498
491 499 ret = changelist_prefix(clp);
492 500 changelist_free(clp);
493 501
494 502 return (ret);
495 503 }
496 504
497 505 boolean_t
498 506 zfs_is_shared(zfs_handle_t *zhp)
499 507 {
500 508 zfs_share_type_t rc = 0;
501 509 zfs_share_proto_t *curr_proto;
502 510
503 511 if (ZFS_IS_VOLUME(zhp))
504 512 return (B_FALSE);
505 513
506 514 for (curr_proto = share_all_proto; *curr_proto != PROTO_END;
507 515 curr_proto++)
508 516 rc |= zfs_is_shared_proto(zhp, NULL, *curr_proto);
509 517
510 518 return (rc ? B_TRUE : B_FALSE);
511 519 }
512 520
513 521 int
514 522 zfs_share(zfs_handle_t *zhp)
515 523 {
516 524 assert(!ZFS_IS_VOLUME(zhp));
517 525 return (zfs_share_proto(zhp, share_all_proto));
518 526 }
519 527
520 528 int
521 529 zfs_unshare(zfs_handle_t *zhp)
522 530 {
523 531 assert(!ZFS_IS_VOLUME(zhp));
524 532 return (zfs_unshareall(zhp));
525 533 }
526 534
527 535 /*
528 536 * Check to see if the filesystem is currently shared.
529 537 */
530 538 zfs_share_type_t
531 539 zfs_is_shared_proto(zfs_handle_t *zhp, char **where, zfs_share_proto_t proto)
532 540 {
533 541 char *mountpoint;
534 542 zfs_share_type_t rc;
535 543
536 544 if (!zfs_is_mounted(zhp, &mountpoint))
537 545 return (SHARED_NOT_SHARED);
538 546
539 547 if ((rc = is_shared(zhp->zfs_hdl, mountpoint, proto))
540 548 != SHARED_NOT_SHARED) {
541 549 if (where != NULL)
542 550 *where = mountpoint;
543 551 else
544 552 free(mountpoint);
545 553 return (rc);
546 554 } else {
547 555 free(mountpoint);
548 556 return (SHARED_NOT_SHARED);
549 557 }
550 558 }
551 559
552 560 boolean_t
553 561 zfs_is_shared_nfs(zfs_handle_t *zhp, char **where)
554 562 {
555 563 return (zfs_is_shared_proto(zhp, where,
556 564 PROTO_NFS) != SHARED_NOT_SHARED);
557 565 }
558 566
559 567 boolean_t
560 568 zfs_is_shared_smb(zfs_handle_t *zhp, char **where)
561 569 {
562 570 return (zfs_is_shared_proto(zhp, where,
563 571 PROTO_SMB) != SHARED_NOT_SHARED);
564 572 }
565 573
566 574 /*
567 575 * Make sure things will work if libshare isn't installed by using
568 576 * wrapper functions that check to see that the pointers to functions
569 577 * initialized in _zfs_init_libshare() are actually present.
570 578 */
571 579
572 580 static sa_handle_t (*_sa_init)(int);
573 581 static sa_handle_t (*_sa_init_arg)(int, void *);
582 +static int (*_sa_service)(sa_handle_t);
574 583 static void (*_sa_fini)(sa_handle_t);
575 584 static sa_share_t (*_sa_find_share)(sa_handle_t, char *);
576 585 static int (*_sa_enable_share)(sa_share_t, char *);
577 586 static int (*_sa_disable_share)(sa_share_t, char *);
578 587 static char *(*_sa_errorstr)(int);
579 588 static int (*_sa_parse_legacy_options)(sa_group_t, char *, char *);
580 589 static boolean_t (*_sa_needs_refresh)(sa_handle_t *);
581 590 static libzfs_handle_t *(*_sa_get_zfs_handle)(sa_handle_t);
582 -static int (*_sa_zfs_process_share)(sa_handle_t, sa_group_t, sa_share_t,
583 - char *, char *, zprop_source_t, char *, char *, char *);
591 +static int (* _sa_get_zfs_share)(sa_handle_t, char *, zfs_handle_t *);
584 592 static void (*_sa_update_sharetab_ts)(sa_handle_t);
585 593
586 594 /*
587 595 * _zfs_init_libshare()
588 596 *
589 597 * Find the libshare.so.1 entry points that we use here and save the
590 598 * values to be used later. This is triggered by the runtime loader.
591 599 * Make sure the correct ISA version is loaded.
592 600 */
593 601
594 602 #pragma init(_zfs_init_libshare)
595 603 static void
596 604 _zfs_init_libshare(void)
597 605 {
598 606 void *libshare;
599 607 char path[MAXPATHLEN];
600 608 char isa[MAXISALEN];
601 609
602 610 #if defined(_LP64)
603 611 if (sysinfo(SI_ARCHITECTURE_64, isa, MAXISALEN) == -1)
604 612 isa[0] = '\0';
605 613 #else
606 614 isa[0] = '\0';
607 615 #endif
608 616 (void) snprintf(path, MAXPATHLEN,
609 617 "/usr/lib/%s/libshare.so.1", isa);
610 618
611 619 if ((libshare = dlopen(path, RTLD_LAZY | RTLD_GLOBAL)) != NULL) {
612 620 _sa_init = (sa_handle_t (*)(int))dlsym(libshare, "sa_init");
613 621 _sa_init_arg = (sa_handle_t (*)(int, void *))dlsym(libshare,
614 622 "sa_init_arg");
615 623 _sa_fini = (void (*)(sa_handle_t))dlsym(libshare, "sa_fini");
624 + _sa_service = (int (*)(sa_handle_t))dlsym(libshare,
625 + "sa_service");
616 626 _sa_find_share = (sa_share_t (*)(sa_handle_t, char *))
617 627 dlsym(libshare, "sa_find_share");
618 628 _sa_enable_share = (int (*)(sa_share_t, char *))dlsym(libshare,
619 629 "sa_enable_share");
620 630 _sa_disable_share = (int (*)(sa_share_t, char *))dlsym(libshare,
621 631 "sa_disable_share");
622 632 _sa_errorstr = (char *(*)(int))dlsym(libshare, "sa_errorstr");
623 633 _sa_parse_legacy_options = (int (*)(sa_group_t, char *, char *))
624 634 dlsym(libshare, "sa_parse_legacy_options");
625 635 _sa_needs_refresh = (boolean_t (*)(sa_handle_t *))
626 636 dlsym(libshare, "sa_needs_refresh");
627 637 _sa_get_zfs_handle = (libzfs_handle_t *(*)(sa_handle_t))
628 638 dlsym(libshare, "sa_get_zfs_handle");
629 - _sa_zfs_process_share = (int (*)(sa_handle_t, sa_group_t,
630 - sa_share_t, char *, char *, zprop_source_t, char *,
631 - char *, char *))dlsym(libshare, "sa_zfs_process_share");
639 + _sa_get_zfs_share = (int (*)(sa_handle_t, char *,
640 + zfs_handle_t *)) dlsym(libshare, "sa_get_zfs_share");
632 641 _sa_update_sharetab_ts = (void (*)(sa_handle_t))
633 642 dlsym(libshare, "sa_update_sharetab_ts");
634 643 if (_sa_init == NULL || _sa_init_arg == NULL ||
635 644 _sa_fini == NULL || _sa_find_share == NULL ||
636 645 _sa_enable_share == NULL || _sa_disable_share == NULL ||
637 646 _sa_errorstr == NULL || _sa_parse_legacy_options == NULL ||
638 647 _sa_needs_refresh == NULL || _sa_get_zfs_handle == NULL ||
639 - _sa_zfs_process_share == NULL ||
648 + _sa_get_zfs_share == NULL || _sa_service == NULL ||
640 649 _sa_update_sharetab_ts == NULL) {
641 650 _sa_init = NULL;
642 651 _sa_init_arg = NULL;
652 + _sa_service = NULL;
643 653 _sa_fini = NULL;
644 654 _sa_disable_share = NULL;
645 655 _sa_enable_share = NULL;
646 656 _sa_errorstr = NULL;
647 657 _sa_parse_legacy_options = NULL;
648 658 (void) dlclose(libshare);
649 659 _sa_needs_refresh = NULL;
650 660 _sa_get_zfs_handle = NULL;
651 - _sa_zfs_process_share = NULL;
661 + _sa_get_zfs_share = NULL;
652 662 _sa_update_sharetab_ts = NULL;
653 663 }
654 664 }
655 665 }
656 666
657 667 /*
658 668 * zfs_init_libshare(zhandle, service)
659 669 *
660 670 * Initialize the libshare API if it hasn't already been initialized.
661 671 * In all cases it returns 0 if it succeeded and an error if not. The
662 672 * service value is which part(s) of the API to initialize and is a
663 673 * direct map to the libshare sa_init(service) interface.
664 674 */
665 675 static int
666 676 zfs_init_libshare_impl(libzfs_handle_t *zhandle, int service, void *arg)
667 677 {
668 678 /*
669 679 * libshare is either not installed or we're in a branded zone. The
670 680 * rest of the wrapper functions around the libshare calls already
671 681 * handle NULL function pointers, but we don't want the callers of
672 682 * zfs_init_libshare() to fail prematurely if libshare is not available.
673 683 */
674 684 if (_sa_init == NULL)
675 685 return (SA_OK);
676 686
677 687 /*
678 688 * Attempt to refresh libshare. This is necessary if there was a cache
679 689 * miss for a new ZFS dataset that was just created, or if state of the
680 690 * sharetab file has changed since libshare was last initialized. We
681 691 * want to make sure so check timestamps to see if a different process
682 692 * has updated any of the configuration. If there was some non-ZFS
683 693 * change, we need to re-initialize the internal cache.
684 694 */
685 695 if (_sa_needs_refresh != NULL &&
686 696 _sa_needs_refresh(zhandle->libzfs_sharehdl)) {
687 697 zfs_uninit_libshare(zhandle);
688 698 zhandle->libzfs_sharehdl = _sa_init_arg(service, arg);
689 699 }
690 700
691 701 if (zhandle && zhandle->libzfs_sharehdl == NULL)
692 702 zhandle->libzfs_sharehdl = _sa_init_arg(service, arg);
693 703
694 704 if (zhandle->libzfs_sharehdl == NULL)
695 705 return (SA_NO_MEMORY);
696 706
697 707 return (SA_OK);
698 708 }
699 709 int
700 710 zfs_init_libshare(libzfs_handle_t *zhandle, int service)
701 711 {
702 712 return (zfs_init_libshare_impl(zhandle, service, NULL));
703 713 }
704 714
705 715 int
706 716 zfs_init_libshare_arg(libzfs_handle_t *zhandle, int service, void *arg)
707 717 {
708 718 return (zfs_init_libshare_impl(zhandle, service, arg));
709 719 }
710 720
711 721
712 722 /*
713 723 * zfs_uninit_libshare(zhandle)
714 724 *
715 725 * Uninitialize the libshare API if it hasn't already been
716 726 * uninitialized. It is OK to call multiple times.
717 727 */
718 728 void
719 729 zfs_uninit_libshare(libzfs_handle_t *zhandle)
720 730 {
721 731 if (zhandle != NULL && zhandle->libzfs_sharehdl != NULL) {
722 732 if (_sa_fini != NULL)
723 733 _sa_fini(zhandle->libzfs_sharehdl);
724 734 zhandle->libzfs_sharehdl = NULL;
725 735 }
726 736 }
727 737
728 738 /*
729 739 * zfs_parse_options(options, proto)
730 740 *
731 741 * Call the legacy parse interface to get the protocol specific
732 742 * options using the NULL arg to indicate that this is a "parse" only.
733 743 */
734 744 int
735 745 zfs_parse_options(char *options, zfs_share_proto_t proto)
736 746 {
737 747 if (_sa_parse_legacy_options != NULL) {
738 748 return (_sa_parse_legacy_options(NULL, options,
739 749 proto_table[proto].p_name));
740 750 }
741 751 return (SA_CONFIG_ERR);
742 752 }
743 753
744 754 /*
745 755 * zfs_sa_find_share(handle, path)
746 756 *
747 757 * wrapper around sa_find_share to find a share path in the
748 758 * configuration.
749 759 */
750 760 static sa_share_t
751 761 zfs_sa_find_share(sa_handle_t handle, char *path)
752 762 {
753 763 if (_sa_find_share != NULL)
754 764 return (_sa_find_share(handle, path));
755 765 return (NULL);
756 766 }
757 767
758 768 /*
759 769 * zfs_sa_enable_share(share, proto)
760 770 *
761 771 * Wrapper for sa_enable_share which enables a share for a specified
762 772 * protocol.
763 773 */
764 774 static int
765 775 zfs_sa_enable_share(sa_share_t share, char *proto)
766 776 {
767 777 if (_sa_enable_share != NULL)
768 778 return (_sa_enable_share(share, proto));
769 779 return (SA_CONFIG_ERR);
770 780 }
771 781
772 782 /*
773 783 * zfs_sa_disable_share(share, proto)
774 784 *
775 785 * Wrapper for sa_disable_share which disables a share for a specified
776 786 * protocol.
777 787 */
778 788 static int
779 789 zfs_sa_disable_share(sa_share_t share, char *proto)
780 790 {
781 791 if (_sa_disable_share != NULL)
782 792 return (_sa_disable_share(share, proto));
783 793 return (SA_CONFIG_ERR);
784 794 }
785 795
786 796 /*
787 797 * Share the given filesystem according to the options in the specified
788 798 * protocol specific properties (sharenfs, sharesmb). We rely
789 799 * on "libshare" to do the dirty work for us.
790 800 */
791 801 static int
792 802 zfs_share_proto(zfs_handle_t *zhp, zfs_share_proto_t *proto)
793 803 {
794 804 char mountpoint[ZFS_MAXPROPLEN];
795 805 char shareopts[ZFS_MAXPROPLEN];
796 806 char sourcestr[ZFS_MAXPROPLEN];
797 807 libzfs_handle_t *hdl = zhp->zfs_hdl;
798 808 sa_share_t share;
799 809 zfs_share_proto_t *curr_proto;
800 810 zprop_source_t sourcetype;
811 + int service = SA_INIT_ONE_SHARE_FROM_HANDLE;
801 812 int ret;
802 813
803 814 if (!zfs_is_mountable(zhp, mountpoint, sizeof (mountpoint), NULL))
804 815 return (0);
805 816
817 + /*
818 + * Function may be called in a loop from higher up stack, with libshare
819 + * initialized for multiple shares (SA_INIT_SHARE_API_SELECTIVE).
820 + * zfs_init_libshare_arg will refresh the handle's cache if necessary.
821 + * In this case we do not want to switch to per share initialization.
822 + * Specify SA_INIT_SHARE_API to do full refresh, if refresh required.
823 + */
824 + if ((hdl->libzfs_sharehdl != NULL) && (_sa_service != NULL) &&
825 + (_sa_service(hdl->libzfs_sharehdl) ==
826 + SA_INIT_SHARE_API_SELECTIVE)) {
827 + service = SA_INIT_SHARE_API;
828 + }
829 +
806 830 for (curr_proto = proto; *curr_proto != PROTO_END; curr_proto++) {
807 831 /*
808 832 * Return success if there are no share options.
809 833 */
810 834 if (zfs_prop_get(zhp, proto_table[*curr_proto].p_prop,
811 835 shareopts, sizeof (shareopts), &sourcetype, sourcestr,
812 836 ZFS_MAXPROPLEN, B_FALSE) != 0 ||
813 837 strcmp(shareopts, "off") == 0)
814 838 continue;
815 - ret = zfs_init_libshare_arg(hdl, SA_INIT_ONE_SHARE_FROM_HANDLE,
816 - zhp);
839 + ret = zfs_init_libshare_arg(hdl, service, zhp);
817 840 if (ret != SA_OK) {
818 841 (void) zfs_error_fmt(hdl, EZFS_SHARENFSFAILED,
819 842 dgettext(TEXT_DOMAIN, "cannot share '%s': %s"),
820 843 zfs_get_name(zhp), _sa_errorstr != NULL ?
821 844 _sa_errorstr(ret) : "");
822 845 return (-1);
823 846 }
824 847
825 - /*
826 - * If the 'zoned' property is set, then zfs_is_mountable()
827 - * will have already bailed out if we are in the global zone.
828 - * But local zones cannot be NFS servers, so we ignore it for
829 - * local zones as well.
830 - */
831 - if (zfs_prop_get_int(zhp, ZFS_PROP_ZONED))
832 - continue;
833 -
834 848 share = zfs_sa_find_share(hdl->libzfs_sharehdl, mountpoint);
835 849 if (share == NULL) {
836 850 /*
837 851 * This may be a new file system that was just
838 - * created so isn't in the internal cache
839 - * (second time through). Rather than
840 - * reloading the entire configuration, we can
841 - * assume ZFS has done the checking and it is
842 - * safe to add this to the internal
843 - * configuration.
852 + * created so isn't in the internal cache.
853 + * Rather than reloading the entire configuration,
854 + * we can add just this one share to the cache.
844 855 */
845 - if (_sa_zfs_process_share(hdl->libzfs_sharehdl,
846 - NULL, NULL, mountpoint,
847 - proto_table[*curr_proto].p_name, sourcetype,
848 - shareopts, sourcestr, zhp->zfs_name) != SA_OK) {
856 + if ((_sa_get_zfs_share == NULL) ||
857 + (_sa_get_zfs_share(hdl->libzfs_sharehdl, "zfs", zhp)
858 + != SA_OK)) {
849 859 (void) zfs_error_fmt(hdl,
850 860 proto_table[*curr_proto].p_share_err,
851 861 dgettext(TEXT_DOMAIN, "cannot share '%s'"),
852 862 zfs_get_name(zhp));
853 863 return (-1);
854 864 }
855 865 share = zfs_sa_find_share(hdl->libzfs_sharehdl,
856 866 mountpoint);
857 867 }
858 868 if (share != NULL) {
859 869 int err;
860 870 err = zfs_sa_enable_share(share,
861 871 proto_table[*curr_proto].p_name);
862 872 if (err != SA_OK) {
863 873 (void) zfs_error_fmt(hdl,
864 874 proto_table[*curr_proto].p_share_err,
865 875 dgettext(TEXT_DOMAIN, "cannot share '%s'"),
866 876 zfs_get_name(zhp));
867 877 return (-1);
868 878 }
869 879 } else {
870 880 (void) zfs_error_fmt(hdl,
871 881 proto_table[*curr_proto].p_share_err,
872 882 dgettext(TEXT_DOMAIN, "cannot share '%s'"),
873 883 zfs_get_name(zhp));
874 884 return (-1);
875 885 }
876 886
877 887 }
878 888 return (0);
879 889 }
880 890
881 891
882 892 int
883 893 zfs_share_nfs(zfs_handle_t *zhp)
884 894 {
885 895 return (zfs_share_proto(zhp, nfs_only));
886 896 }
887 897
888 898 int
889 899 zfs_share_smb(zfs_handle_t *zhp)
890 900 {
891 901 return (zfs_share_proto(zhp, smb_only));
892 902 }
893 903
894 904 int
895 905 zfs_shareall(zfs_handle_t *zhp)
896 906 {
897 907 return (zfs_share_proto(zhp, share_all_proto));
898 908 }
899 909
900 910 /*
901 911 * Unshare a filesystem by mountpoint.
902 912 */
903 913 static int
904 914 unshare_one(libzfs_handle_t *hdl, const char *name, const char *mountpoint,
905 915 zfs_share_proto_t proto)
906 916 {
907 917 sa_share_t share;
908 918 int err;
909 919 char *mntpt;
920 + int service = SA_INIT_ONE_SHARE_FROM_NAME;
910 921
911 922 /*
912 923 * Mountpoint could get trashed if libshare calls getmntany
913 924 * which it does during API initialization, so strdup the
914 925 * value.
915 926 */
916 927 mntpt = zfs_strdup(hdl, mountpoint);
917 928
918 929 /*
919 - * make sure libshare initialized, initialize everything because we
920 - * don't know what other unsharing may happen later. Functions up the
921 - * stack are allowed to initialize instead a subset of shares at the
922 - * time the set is known.
 930  +	 * This function may be called in a loop from higher up the stack, with
 931  +	 * libshare initialized for multiple shares (SA_INIT_SHARE_API_SELECTIVE);
 932  +	 * zfs_init_libshare_arg will refresh the handle's cache if necessary.
 933  +	 * In that case we do not want to switch to per-share initialization,
 934  +	 * so specify SA_INIT_SHARE_API to do a full refresh if one is required.
923 935 */
924 - if ((err = zfs_init_libshare_arg(hdl, SA_INIT_ONE_SHARE_FROM_NAME,
925 - (void *)name)) != SA_OK) {
936 + if ((hdl->libzfs_sharehdl != NULL) && (_sa_service != NULL) &&
937 + (_sa_service(hdl->libzfs_sharehdl) ==
938 + SA_INIT_SHARE_API_SELECTIVE)) {
939 + service = SA_INIT_SHARE_API;
940 + }
941 +
942 + err = zfs_init_libshare_arg(hdl, service, (void *)name);
943 + if (err != SA_OK) {
926 944 free(mntpt); /* don't need the copy anymore */
927 945 return (zfs_error_fmt(hdl, proto_table[proto].p_unshare_err,
928 946 dgettext(TEXT_DOMAIN, "cannot unshare '%s': %s"),
929 947 name, _sa_errorstr(err)));
930 948 }
931 949
932 950 share = zfs_sa_find_share(hdl->libzfs_sharehdl, mntpt);
933 951 free(mntpt); /* don't need the copy anymore */
934 952
935 953 if (share != NULL) {
936 954 err = zfs_sa_disable_share(share, proto_table[proto].p_name);
937 955 if (err != SA_OK) {
938 956 return (zfs_error_fmt(hdl,
939 957 proto_table[proto].p_unshare_err,
940 958 dgettext(TEXT_DOMAIN, "cannot unshare '%s': %s"),
941 959 name, _sa_errorstr(err)));
942 960 }
943 961 } else {
944 962 return (zfs_error_fmt(hdl, proto_table[proto].p_unshare_err,
945 963 dgettext(TEXT_DOMAIN, "cannot unshare '%s': not found"),
946 964 name));
947 965 }
948 966 return (0);
949 967 }
950 968
951 969 /*
952 970 * Unshare the given filesystem.
953 971 */
954 972 int
955 973 zfs_unshare_proto(zfs_handle_t *zhp, const char *mountpoint,
956 974 zfs_share_proto_t *proto)
957 975 {
958 976 libzfs_handle_t *hdl = zhp->zfs_hdl;
959 977 struct mnttab entry;
960 978 char *mntpt = NULL;
961 979
962 980 /* check to see if need to unmount the filesystem */
963 981 rewind(zhp->zfs_hdl->libzfs_mnttab);
964 982 if (mountpoint != NULL)
965 983 mountpoint = mntpt = zfs_strdup(hdl, mountpoint);
966 984
967 985 if (mountpoint != NULL || ((zfs_get_type(zhp) == ZFS_TYPE_FILESYSTEM) &&
968 986 libzfs_mnttab_find(hdl, zfs_get_name(zhp), &entry) == 0)) {
969 987 zfs_share_proto_t *curr_proto;
970 988
971 989 if (mountpoint == NULL)
972 990 mntpt = zfs_strdup(zhp->zfs_hdl, entry.mnt_mountp);
973 991
974 992 for (curr_proto = proto; *curr_proto != PROTO_END;
975 993 curr_proto++) {
976 994
977 995 if (is_shared(hdl, mntpt, *curr_proto) &&
978 996 unshare_one(hdl, zhp->zfs_name,
979 997 mntpt, *curr_proto) != 0) {
980 998 if (mntpt != NULL)
981 999 free(mntpt);
982 1000 return (-1);
983 1001 }
984 1002 }
985 1003 }
986 1004 if (mntpt != NULL)
987 1005 free(mntpt);
988 1006
989 1007 return (0);
990 1008 }
991 1009
992 1010 int
993 1011 zfs_unshare_nfs(zfs_handle_t *zhp, const char *mountpoint)
994 1012 {
995 1013 return (zfs_unshare_proto(zhp, mountpoint, nfs_only));
996 1014 }
997 1015
998 1016 int
999 1017 zfs_unshare_smb(zfs_handle_t *zhp, const char *mountpoint)
1000 1018 {
1001 1019 return (zfs_unshare_proto(zhp, mountpoint, smb_only));
1002 1020 }
1003 1021
1004 1022 /*
1005 1023 * Same as zfs_unmountall(), but for NFS and SMB unshares.
1006 1024 */
1007 1025 int
1008 1026 zfs_unshareall_proto(zfs_handle_t *zhp, zfs_share_proto_t *proto)
1009 1027 {
1010 1028 prop_changelist_t *clp;
1011 1029 int ret;
1012 1030
1013 1031 clp = changelist_gather(zhp, ZFS_PROP_SHARENFS, 0, 0);
1014 1032 if (clp == NULL)
1015 1033 return (-1);
1016 1034
1017 1035 ret = changelist_unshare(clp, proto);
1018 1036 changelist_free(clp);
1019 1037
1020 1038 return (ret);
1021 1039 }
1022 1040
1023 1041 int
1024 1042 zfs_unshareall_nfs(zfs_handle_t *zhp)
1025 1043 {
1026 1044 return (zfs_unshareall_proto(zhp, nfs_only));
1027 1045 }
1028 1046
1029 1047 int
1030 1048 zfs_unshareall_smb(zfs_handle_t *zhp)
1031 1049 {
1032 1050 return (zfs_unshareall_proto(zhp, smb_only));
1033 1051 }
1034 1052
1035 1053 int
1036 1054 zfs_unshareall(zfs_handle_t *zhp)
1037 1055 {
1038 1056 return (zfs_unshareall_proto(zhp, share_all_proto));
1039 1057 }
1040 1058
1041 1059 int
1042 1060 zfs_unshareall_bypath(zfs_handle_t *zhp, const char *mountpoint)
1043 1061 {
1044 1062 return (zfs_unshare_proto(zhp, mountpoint, share_all_proto));
1045 1063 }
1046 1064
1047 1065 /*
1048 1066 * Remove the mountpoint associated with the current dataset, if necessary.
1049 1067 * We only remove the underlying directory if:
1050 1068 *
1051 1069 * - The mountpoint is not 'none' or 'legacy'
1052 1070 * - The mountpoint is non-empty
1053 1071 * - The mountpoint is the default or inherited
1054 1072 * - The 'zoned' property is set, or we're in a local zone
1055 1073 *
1056 1074 * Any other directories we leave alone.
1057 1075 */
1058 1076 void
1059 1077 remove_mountpoint(zfs_handle_t *zhp)
1060 1078 {
1061 1079 char mountpoint[ZFS_MAXPROPLEN];
1062 1080 zprop_source_t source;
1063 1081
1064 1082 if (!zfs_is_mountable(zhp, mountpoint, sizeof (mountpoint),
1065 1083 &source))
1066 1084 return;
1067 1085
1068 1086 if (source == ZPROP_SRC_DEFAULT ||
1069 1087 source == ZPROP_SRC_INHERITED) {
1070 1088 /*
1071 1089 * Try to remove the directory, silently ignoring any errors.
1072 1090 * The filesystem may have since been removed or moved around,
1073 1091 * and this error isn't really useful to the administrator in
1074 1092 * any way.
1075 1093 */
1076 1094 (void) rmdir(mountpoint);
1077 1095 }
1078 1096 }
1079 1097
1080 1098 void
1081 1099 libzfs_add_handle(get_all_cb_t *cbp, zfs_handle_t *zhp)
1082 1100 {
1083 1101 if (cbp->cb_alloc == cbp->cb_used) {
1084 1102 size_t newsz;
1085 1103 void *ptr;
1086 1104
1087 1105 newsz = cbp->cb_alloc ? cbp->cb_alloc * 2 : 64;
1088 1106 ptr = zfs_realloc(zhp->zfs_hdl,
1089 1107 cbp->cb_handles, cbp->cb_alloc * sizeof (void *),
1090 1108 newsz * sizeof (void *));
1091 1109 cbp->cb_handles = ptr;
1092 1110 cbp->cb_alloc = newsz;
1093 1111 }
1094 1112 cbp->cb_handles[cbp->cb_used++] = zhp;
1095 1113 }
1096 1114
1097 1115 static int
1098 1116 mount_cb(zfs_handle_t *zhp, void *data)
1099 1117 {
1100 1118 get_all_cb_t *cbp = data;
1101 1119
1102 1120 if (!(zfs_get_type(zhp) & ZFS_TYPE_FILESYSTEM)) {
1103 1121 zfs_close(zhp);
1104 1122 return (0);
1105 1123 }
1106 1124
1107 1125 if (zfs_prop_get_int(zhp, ZFS_PROP_CANMOUNT) == ZFS_CANMOUNT_NOAUTO) {
1108 1126 zfs_close(zhp);
1109 1127 return (0);
1110 1128 }
1111 1129
1112 1130 /*
1113 1131 * If this filesystem is inconsistent and has a receive resume
1114 1132 * token, we can not mount it.
1115 1133 */
1116 1134 if (zfs_prop_get_int(zhp, ZFS_PROP_INCONSISTENT) &&
1117 1135 zfs_prop_get(zhp, ZFS_PROP_RECEIVE_RESUME_TOKEN,
1118 1136 NULL, 0, NULL, NULL, 0, B_TRUE) == 0) {
1119 1137 zfs_close(zhp);
1120 1138 return (0);
1121 1139 }
1122 1140
1123 1141 libzfs_add_handle(cbp, zhp);
1124 1142 if (zfs_iter_filesystems(zhp, mount_cb, cbp) != 0) {
1125 1143 zfs_close(zhp);
1126 1144 return (-1);
1127 1145 }
1128 1146 return (0);
1129 1147 }
1130 1148
1131 1149 int
1132 1150 libzfs_dataset_cmp(const void *a, const void *b)
1133 1151 {
1134 1152 zfs_handle_t **za = (zfs_handle_t **)a;
1135 1153 zfs_handle_t **zb = (zfs_handle_t **)b;
1136 1154 char mounta[MAXPATHLEN];
1137 1155 char mountb[MAXPATHLEN];
1138 1156 boolean_t gota, gotb;
1139 1157
1140 1158 if ((gota = (zfs_get_type(*za) == ZFS_TYPE_FILESYSTEM)) != 0)
1141 1159 verify(zfs_prop_get(*za, ZFS_PROP_MOUNTPOINT, mounta,
1142 1160 sizeof (mounta), NULL, NULL, 0, B_FALSE) == 0);
1143 1161 if ((gotb = (zfs_get_type(*zb) == ZFS_TYPE_FILESYSTEM)) != 0)
1144 1162 verify(zfs_prop_get(*zb, ZFS_PROP_MOUNTPOINT, mountb,
1145 1163 sizeof (mountb), NULL, NULL, 0, B_FALSE) == 0);
1146 1164
1147 1165 if (gota && gotb)
1148 1166 return (strcmp(mounta, mountb));
1149 1167
1150 1168 if (gota)
1151 1169 return (-1);
1152 1170 if (gotb)
1153 1171 return (1);
1154 1172
1155 1173 	return (strcmp(zfs_get_name(*za), zfs_get_name(*zb)));
1156 1174 }
1157 1175
1158 -/*
1159 - * Mount and share all datasets within the given pool. This assumes that no
1160 - * datasets within the pool are currently mounted. Because users can create
1161 - * complicated nested hierarchies of mountpoints, we first gather all the
1162 - * datasets and mountpoints within the pool, and sort them by mountpoint. Once
1163 - * we have the list of all filesystems, we iterate over them in order and mount
1164 - * and/or share each one.
1165 - */
1166 -#pragma weak zpool_mount_datasets = zpool_enable_datasets
1176 +static int
1177 +mountpoint_compare(const void *a, const void *b)
1178 +{
1179 + const char *mounta = *((char **)a);
1180 + const char *mountb = *((char **)b);
1181 +
1182 + return (strcmp(mountb, mounta));
1183 +}
1184 +
1185 +typedef enum {
1186 + TASK_TO_PROCESS,
1187 + TASK_IN_PROCESSING,
1188 + TASK_DONE,
1189 + TASK_MAX
1190 +} task_state_t;
1191 +
1192 +typedef struct mount_task {
1193 + const char *mp;
1194 + zfs_handle_t *zh;
1195 + task_state_t state;
1196 + int error;
1197 +} mount_task_t;
1198 +
1199 +typedef struct mount_task_q {
1200 + pthread_mutex_t q_lock;
1201 + libzfs_handle_t *hdl;
1202 + const char *mntopts;
1203 + const char *error_mp;
1204 + zfs_handle_t *error_zh;
1205 + int error;
1206 + int q_length;
1207 + int n_tasks;
1208 + int flags;
1209 + mount_task_t task[1];
1210 +} mount_task_q_t;
1211 +
1212 +static int
1213 +mount_task_q_init(int argc, zfs_handle_t **handles, const char *mntopts,
1214 + int flags, mount_task_q_t **task)
1215 +{
1216 + mount_task_q_t *task_q;
1217 + int i, error;
1218 + size_t task_q_size;
1219 +
1220 + *task = NULL;
1221  +	/* nothing to do? the caller should not get here */
1222 + if (argc <= 0)
1223 + return (EINVAL);
1224 +
1225 + /* allocate and init task_q */
1226 + task_q_size = sizeof (mount_task_q_t) +
1227 + (argc - 1) * sizeof (mount_task_t);
1228  +	task_q = calloc(1, task_q_size);
1229 + if (task_q == NULL)
1230 + return (ENOMEM);
1231 +
1232 + if ((error = pthread_mutex_init(&task_q->q_lock, NULL)) != 0) {
1233 + free(task_q);
1234 + return (error);
1235 + }
1236 + task_q->q_length = argc;
1237 + task_q->n_tasks = argc;
1238 + task_q->flags = flags;
1239 + task_q->mntopts = mntopts;
1240 +
1241 + /* we are not going to change the strings, so no need to strdup */
1242 + for (i = 0; i < argc; ++i) {
1243 + task_q->task[i].zh = handles[i];
1244 + task_q->task[i].state = TASK_TO_PROCESS;
1245  +		task_q->task[i].error = 0;
1246 + }
1247 +
1248 + *task = task_q;
1249 + return (0);
1250 +}
1251 +
1252 +static int
1253 +umount_task_q_init(int argc, const char **argv, int flags,
1254 + libzfs_handle_t *hdl, mount_task_q_t **task)
1255 +{
1256 + mount_task_q_t *task_q;
1257 + int i, error;
1258 + size_t task_q_size;
1259 +
1260 + *task = NULL;
1261  +	/* nothing to do? the caller should not get here */
1262 + if (argc <= 0)
1263 + return (EINVAL);
1264 +
1265 + /* allocate and init task_q */
1266 + task_q_size = sizeof (mount_task_q_t) +
1267 + (argc - 1) * sizeof (mount_task_t);
1268  +	task_q = calloc(1, task_q_size);
1269 + if (task_q == NULL)
1270 + return (ENOMEM);
1271 +
1272 + if ((error = pthread_mutex_init(&task_q->q_lock, NULL)) != 0) {
1273 + free(task_q);
1274 + return (error);
1275 + }
1276 + task_q->hdl = hdl;
1277 + task_q->q_length = argc;
1278 + task_q->n_tasks = argc;
1279 + task_q->flags = flags;
1280 +
1281 + /* we are not going to change the strings, so no need to strdup */
1282 + for (i = 0; i < argc; ++i) {
1283 + task_q->task[i].mp = argv[i];
1284 + task_q->task[i].state = TASK_TO_PROCESS;
1285  +		task_q->task[i].error = 0;
1286 + }
1287 +
1288 + *task = task_q;
1289 + return (0);
1290 +}
1291 +
1292 +static void
1293 +mount_task_q_fini(mount_task_q_t *task_q)
1294 +{
1295 + assert(task_q != NULL);
1296 + (void) pthread_mutex_destroy(&task_q->q_lock);
1297 + free(task_q);
1298 +}
1299 +
1300 +static int
1301 +is_child_of(const char *s1, const char *s2)
1302 +{
1303 + for (; *s1 && *s2 && (*s1 == *s2); ++s1, ++s2)
1304 + ;
1305 + return (!*s2 && (*s1 == '/'));
1306 +}
1307 +
1308 +static boolean_t
1309 +task_completed(int ind, mount_task_q_t *task_q)
1310 +{
1311 + return (task_q->task[ind].state == TASK_DONE);
1312 +}
1313 +
1314 +static boolean_t
1315 +task_to_process(int ind, mount_task_q_t *task_q)
1316 +{
1317 + return (task_q->task[ind].state == TASK_TO_PROCESS);
1318 +}
1319 +
1320 +static boolean_t
1321 +task_in_processing(int ind, mount_task_q_t *task_q)
1322 +{
1323 + return (task_q->task[ind].state == TASK_IN_PROCESSING);
1324 +}
1325 +
1326 +static void
1327 +task_next_stage(int ind, mount_task_q_t *task_q)
1328 +{
1329 + /* our state machine is a pipeline */
1330 + task_q->task[ind].state++;
1331 + assert(task_q->task[ind].state < TASK_MAX);
1332 +}
1333 +
1334 +static boolean_t
1335 +task_state_valid(int ind, mount_task_q_t *task_q)
1336 +{
1337 + /* our state machine is a pipeline */
1338 + return (task_q->task[ind].state < TASK_MAX);
1339 +}
1340 +
1341 +static boolean_t
1342 +child_umount_pending(int ind, mount_task_q_t *task_q)
1343 +{
1344 + int i;
1345 + for (i = ind-1; i >= 0; --i) {
1346 + assert(task_state_valid(i, task_q));
1347 + if ((task_q->task[i].state != TASK_DONE) &&
1348 + is_child_of(task_q->task[i].mp, task_q->task[ind].mp))
1349 + return (B_TRUE);
1350 + }
1351 +
1352 + return (B_FALSE);
1353 +}
1354 +
1355 +static boolean_t
1356 +parent_mount_pending(int ind, mount_task_q_t *task_q)
1357 +{
1358 + int i;
1359 + for (i = ind-1; i >= 0; --i) {
1360 + assert(task_state_valid(i, task_q));
1361 + if ((task_q->task[i].state != TASK_DONE) &&
1362 + is_child_of(task_q->task[ind].zh->zfs_name,
1363 + task_q->task[i].zh->zfs_name))
1364 + return (B_TRUE);
1365 + }
1366 +
1367 + return (B_FALSE);
1368 +}
1369 +
1370 +static void
1371 +unmounter(void *arg)
1372 +{
1373 + mount_task_q_t *task_q = (mount_task_q_t *)arg;
1374 + int error = 0, done = 0;
1375 +
1376 + assert(task_q != NULL);
1377 + if (task_q == NULL)
1378 + return;
1379 +
1380 + while (!error && !done) {
1381 + mount_task_t *task;
1382 + int i, t, umount_err, flags, q_error;
1383 +
1384 + if ((error = pthread_mutex_lock(&task_q->q_lock)) != 0)
1385 + break; /* Out of while() loop */
1386 +
1387 + if (task_q->error || task_q->n_tasks == 0) {
1388 + (void) pthread_mutex_unlock(&task_q->q_lock);
1389 + break; /* Out of while() loop */
1390 + }
1391 +
1392 + /* Find task ready for processing */
1393 + for (i = 0, task = NULL, t = -1; i < task_q->q_length; ++i) {
1394 + if (task_q->error) {
1395 + /* Fatal error, stop processing */
1396 + done = 1;
1397 + break; /* Out of for() loop */
1398 + }
1399 +
1400 + if (task_completed(i, task_q))
1401 + continue; /* for() loop */
1402 +
1403 + if (task_to_process(i, task_q)) {
1404 + /*
1405 + * Cannot umount if some children are still
1406 + * mounted; come back later
1407 + */
1408 + if ((child_umount_pending(i, task_q)))
1409 + continue; /* for() loop */
1410 + /* Should be OK to unmount now */
1411 + task_next_stage(i, task_q);
1412 + task = &task_q->task[i];
1413 + t = i;
1414 + break; /* Out of for() loop */
1415 + }
1416 +
1417 + /* Otherwise, the task is already in processing */
1418 + assert(task_in_processing(i, task_q));
1419 + }
1420 +
1421 + flags = task_q->flags;
1422 +
1423 + error = pthread_mutex_unlock(&task_q->q_lock);
1424 +
1425 + if (done || (task == NULL) || error || task_q->error)
1426 + break; /* Out of while() loop */
1427 +
1428 + umount_err = umount2(task->mp, flags);
1429 + q_error = errno;
1430 +
1431 + if ((error = pthread_mutex_lock(&task_q->q_lock)) != 0)
1432 + break; /* Out of while() loop */
1433 +
1434 + /* done processing */
1435 + assert(t >= 0 && t < task_q->q_length);
1436 + task_next_stage(t, task_q);
1437 + assert(task_completed(t, task_q));
1438 + task_q->n_tasks--;
1439 +
1440 + if (umount_err) {
1441 + /*
1442 + * umount2() failed, cannot be busy because of mounted
1443 + * children - we have checked above, so it is fatal
1444 + */
1445 + assert(child_umount_pending(t, task_q) == B_FALSE);
1446 + task->error = q_error;
1447 + if (!task_q->error) {
1448 + task_q->error = task->error;
1449 + task_q->error_mp = task->mp;
1450 + }
1451 + done = 1;
1452 + }
1453 +
1454 + if ((error = pthread_mutex_unlock(&task_q->q_lock)) != 0)
1455 + break; /* Out of while() loop */
1456 + }
1457 +}
1458 +
1459 +static void
1460 +mounter(void *arg)
1461 +{
1462 + mount_task_q_t *task_q = (mount_task_q_t *)arg;
1463 + int error = 0, done = 0;
1464 +
1465 + assert(task_q != NULL);
1466 + if (task_q == NULL)
1467 + return;
1468 +
1469 + while (!error && !done) {
1470 + mount_task_t *task;
1471 + int i, t, mount_err, flags, q_error;
1472 + const char *mntopts;
1473 +
1474 + if ((error = pthread_mutex_lock(&task_q->q_lock)) != 0)
1475 + break; /* Out of while() loop */
1476 +
1477 + if (task_q->error || task_q->n_tasks == 0) {
1478 + (void) pthread_mutex_unlock(&task_q->q_lock);
1479 + break; /* Out of while() loop */
1480 + }
1481 +
1482 + /* Find task ready for processing */
1483 + for (i = 0, task = NULL, t = -1; i < task_q->q_length; ++i) {
1484 + if (task_q->error) {
1485 + /* Fatal error, stop processing */
1486 + done = 1;
1487 + break; /* Out of for() loop */
1488 + }
1489 +
1490 + if (task_completed(i, task_q))
1491 + continue; /* for() loop */
1492 +
1493 + if (task_to_process(i, task_q)) {
1494 + /*
1495 + * Cannot mount if some parents are not
1496 + * mounted yet; come back later
1497 + */
1498 + if ((parent_mount_pending(i, task_q)))
1499 + continue; /* for() loop */
1500 + /* Should be OK to mount now */
1501 + task_next_stage(i, task_q);
1502 + task = &task_q->task[i];
1503 + t = i;
1504 + break; /* Out of for() loop */
1505 + }
1506 +
1507 + /* Otherwise, the task is already in processing */
1508 + assert(task_in_processing(i, task_q));
1509 + }
1510 +
1511 + flags = task_q->flags;
1512 + mntopts = task_q->mntopts;
1513 +
1514 + error = pthread_mutex_unlock(&task_q->q_lock);
1515 +
1516 + if (done || (task == NULL) || error || task_q->error)
1517 + break; /* Out of while() loop */
1518 +
1519 + mount_err = zfs_mount(task->zh, mntopts, flags);
1520 + q_error = errno;
1521 +
1522 + if ((error = pthread_mutex_lock(&task_q->q_lock)) != 0)
1523 + break; /* Out of while() loop */
1524 +
1525 + /* done processing */
1526 + assert(t >= 0 && t < task_q->q_length);
1527 + task_next_stage(t, task_q);
1528 + assert(task_completed(t, task_q));
1529 + task_q->n_tasks--;
1530 +
1531 + if (mount_err) {
1532 + task->error = q_error;
1533 + if (!task_q->error) {
1534 + task_q->error = task->error;
1535 + task_q->error_zh = task->zh;
1536 + }
1537 + done = 1;
1538 + }
1539 +
1540 + if ((error = pthread_mutex_unlock(&task_q->q_lock)) != 0)
1541 + break; /* Out of while() loop */
1542 + }
1543 +}
1544 +
1545 +#define THREADS_HARD_LIMIT 128
1546 +int parallel_unmount(libzfs_handle_t *hdl, int argc, const char **argv,
1547 + int flags, int n_threads)
1548 +{
1549 + mount_task_q_t *task_queue = NULL;
1550 + int i, error;
1551 + tpool_t *t;
1552 +
1553 + if (argc == 0)
1554 + return (0);
1555 +
1556 + if ((error = umount_task_q_init(argc, argv, flags, hdl, &task_queue))
1557 + != 0) {
1558 + assert(task_queue == NULL);
1559 + return (error);
1560 + }
1561 +
1562 + if (n_threads > argc)
1563 + n_threads = argc;
1564 +
1565 + if (n_threads > THREADS_HARD_LIMIT)
1566 + n_threads = THREADS_HARD_LIMIT;
1567 +
1568 + t = tpool_create(1, n_threads, 0, NULL);
1569 +
1570 + for (i = 0; i < n_threads; ++i)
1571 + (void) tpool_dispatch(t, unmounter, task_queue);
1572 +
1573 + tpool_wait(t);
1574 + tpool_destroy(t);
1575 +
1576 + if (task_queue->error) {
1577 + /*
1578  +		 * Report the first failure via the libzfs error facility.
1579 + */
1580 + zfs_error_aux(hdl,
1581 + strerror(error ? error : task_queue->error));
1582 + error = zfs_error_fmt(hdl, EZFS_UMOUNTFAILED,
1583 + dgettext(TEXT_DOMAIN, "cannot unmount '%s'"),
1584 + error ? "datasets" : task_queue->error_mp);
1585 + }
1586 + if (task_queue)
1587 + mount_task_q_fini(task_queue);
1588 +
1589 + return (error);
1590 +}
1591 +
1592 +int parallel_mount(get_all_cb_t *cb, int *good, const char *mntopts,
1593 + int flags, int n_threads)
1594 +{
1595 + int i, error = 0;
1596 + mount_task_q_t *task_queue = NULL;
1597 + tpool_t *t;
1598 +
1599 + if (cb->cb_used == 0)
1600 + return (0);
1601 +
1602 + if (n_threads > cb->cb_used)
1603 + n_threads = cb->cb_used;
1604 +
1605 + if ((error = mount_task_q_init(cb->cb_used, cb->cb_handles,
1606 + mntopts, flags, &task_queue)) != 0) {
1607 + assert(task_queue == NULL);
1608 + return (error);
1609 + }
1610 +
1611 + t = tpool_create(1, n_threads, 0, NULL);
1612 +
1613 + for (i = 0; i < n_threads; ++i)
1614 + (void) tpool_dispatch(t, mounter, task_queue);
1615 +
1616 + tpool_wait(t);
1617 + for (i = 0; i < cb->cb_used; ++i) {
1618 + good[i] = !task_queue->task[i].error;
1619 + if (!good[i]) {
1620 + zfs_handle_t *hdl = task_queue->error_zh;
1621 + zfs_error_aux(hdl->zfs_hdl,
1622 + strerror(task_queue->task[i].error));
1623 + (void) zfs_error_fmt(hdl->zfs_hdl, EZFS_MOUNTFAILED,
1624 + dgettext(TEXT_DOMAIN, "cannot mount '%s'"),
1625 + task_queue->task[i].zh->zfs_name);
1626 + }
1627 + }
1628 + tpool_destroy(t);
1629 +
1630 + if (task_queue->error) {
1631 + zfs_handle_t *hdl = task_queue->error_zh;
1632 + /*
1633  +		 * Report the first failure via the libzfs error facility.
1634 + */
1635 + zfs_error_aux(hdl->zfs_hdl,
1636 + strerror(error ? error : task_queue->error));
1637 + error = zfs_error_fmt(hdl->zfs_hdl, EZFS_MOUNTFAILED,
1638 + dgettext(TEXT_DOMAIN, "cannot mount '%s'"),
1639 + error ? "datasets" : hdl->zfs_name);
1640 + }
1641 + if (task_queue)
1642 + mount_task_q_fini(task_queue);
1643 +
1644 + return (error);
1645 +}
1646 +
1167 1647 int
1168 -zpool_enable_datasets(zpool_handle_t *zhp, const char *mntopts, int flags)
1648 +zpool_enable_datasets_ex(zpool_handle_t *zhp, const char *mntopts, int flags,
1649 + int n_threads)
1169 1650 {
1170 1651 get_all_cb_t cb = { 0 };
1171 1652 libzfs_handle_t *hdl = zhp->zpool_hdl;
1172 1653 zfs_handle_t *zfsp;
1173 1654 int i, ret = -1;
1174 1655 int *good;
1656 + sa_init_selective_arg_t sharearg;
1175 1657
1176 1658 /*
1177 1659 * Gather all non-snap datasets within the pool.
1178 1660 */
1179 1661 if ((zfsp = zfs_open(hdl, zhp->zpool_name, ZFS_TYPE_DATASET)) == NULL)
1180 1662 goto out;
1181 1663
1182 1664 libzfs_add_handle(&cb, zfsp);
1183 1665 if (zfs_iter_filesystems(zfsp, mount_cb, &cb) != 0)
1184 1666 goto out;
1185 1667 /*
1186 1668 * Sort the datasets by mountpoint.
1187 1669 */
1188 1670 qsort(cb.cb_handles, cb.cb_used, sizeof (void *),
1189 1671 libzfs_dataset_cmp);
1190 1672
1191 1673 /*
1192 1674 * And mount all the datasets, keeping track of which ones
1193 1675 * succeeded or failed.
1194 1676 */
1195 1677 if ((good = zfs_alloc(zhp->zpool_hdl,
1196 1678 cb.cb_used * sizeof (int))) == NULL)
1197 1679 goto out;
1198 1680
1199 1681 ret = 0;
1200 - for (i = 0; i < cb.cb_used; i++) {
1201 - if (zfs_mount(cb.cb_handles[i], mntopts, flags) != 0)
1202 - ret = -1;
1203 - else
1204 - good[i] = 1;
1682 + if (n_threads < 2) {
1683 + for (i = 0; i < cb.cb_used; i++) {
1684 + if (zfs_mount(cb.cb_handles[i], mntopts, flags) != 0)
1685 + ret = -1;
1686 + else
1687 + good[i] = 1;
1688 + }
1689 + } else {
1690 + ret = parallel_mount(&cb, good, mntopts, flags, n_threads);
1205 1691 }
1206 1692
1207 1693 /*
1694  +	 * Initialize libshare with SA_INIT_SHARE_API_SELECTIVE here
1695  +	 * to avoid unnecessary load/unload of the libshare API
1696 + * per shared dataset downstream.
1697 + */
1698 + sharearg.zhandle_arr = cb.cb_handles;
1699 + sharearg.zhandle_len = cb.cb_used;
1700  +	if (zfs_init_libshare_arg(hdl, SA_INIT_SHARE_API_SELECTIVE,
1701  +	    &sharearg) != 0) {
1702  +		ret = -1;	/* don't clobber a mount failure already in ret */
1703  +		free(good);
1704  +		goto out;
1705  +	}
1706 +
1707 + /*
1208 1708 * Then share all the ones that need to be shared. This needs
1209 1709 * to be a separate pass in order to avoid excessive reloading
1210 1710 * of the configuration. Good should never be NULL since
1211 1711 * zfs_alloc is supposed to exit if memory isn't available.
1212 1712 */
1213 1713 for (i = 0; i < cb.cb_used; i++) {
1214 1714 if (good[i] && zfs_share(cb.cb_handles[i]) != 0)
1215 1715 ret = -1;
1216 1716 }
1217 1717
1218 1718 free(good);
1219 1719
1220 1720 out:
1221 1721 for (i = 0; i < cb.cb_used; i++)
1222 1722 zfs_close(cb.cb_handles[i]);
1223 1723 free(cb.cb_handles);
1224 1724
1225 1725 return (ret);
1226 1726 }
1227 1727
1228 -static int
1229 -mountpoint_compare(const void *a, const void *b)
1230 -{
1231 - const char *mounta = *((char **)a);
1232 - const char *mountb = *((char **)b);
1233 -
1234 - return (strcmp(mountb, mounta));
1235 -}
1236 -
1237 -/* alias for 2002/240 */
1238 -#pragma weak zpool_unmount_datasets = zpool_disable_datasets
1239 -/*
1240 - * Unshare and unmount all datasets within the given pool. We don't want to
1241 - * rely on traversing the DSL to discover the filesystems within the pool,
1242 - * because this may be expensive (if not all of them are mounted), and can fail
1243 - * arbitrarily (on I/O error, for example). Instead, we walk /etc/mnttab and
1244 - * gather all the filesystems that are currently mounted.
1245 - */
1246 1728 int
1247 -zpool_disable_datasets(zpool_handle_t *zhp, boolean_t force)
1729 +zpool_disable_datasets_ex(zpool_handle_t *zhp, boolean_t force, int n_threads)
1248 1730 {
1249 1731 int used, alloc;
1250 1732 struct mnttab entry;
1251 1733 size_t namelen;
1252 1734 char **mountpoints = NULL;
1253 1735 zfs_handle_t **datasets = NULL;
1254 1736 libzfs_handle_t *hdl = zhp->zpool_hdl;
1255 1737 int i;
1256 1738 int ret = -1;
1257 1739 int flags = (force ? MS_FORCE : 0);
1258 1740 sa_init_selective_arg_t sharearg;
1259 1741
1260 1742 namelen = strlen(zhp->zpool_name);
1261 1743
1262 1744 rewind(hdl->libzfs_mnttab);
1263 1745 used = alloc = 0;
1264 1746 while (getmntent(hdl->libzfs_mnttab, &entry) == 0) {
1265 1747 /*
1266 1748 * Ignore non-ZFS entries.
1267 1749 */
1268 1750 if (entry.mnt_fstype == NULL ||
1269 1751 strcmp(entry.mnt_fstype, MNTTYPE_ZFS) != 0)
1270 1752 continue;
1271 1753
1272 1754 /*
1273 1755 * Ignore filesystems not within this pool.
1274 1756 */
1275 1757 if (entry.mnt_mountp == NULL ||
1276 1758 strncmp(entry.mnt_special, zhp->zpool_name, namelen) != 0 ||
1277 1759 (entry.mnt_special[namelen] != '/' &&
1278 1760 entry.mnt_special[namelen] != '\0'))
1279 1761 continue;
1280 1762
1281 1763 /*
1282 1764 * At this point we've found a filesystem within our pool. Add
1283 1765 * it to our growing list.
1284 1766 */
1285 1767 if (used == alloc) {
1286 1768 if (alloc == 0) {
1287 1769 if ((mountpoints = zfs_alloc(hdl,
1288 1770 8 * sizeof (void *))) == NULL)
1289 1771 goto out;
1290 1772
1291 1773 if ((datasets = zfs_alloc(hdl,
1292 1774 8 * sizeof (void *))) == NULL)
1293 1775 goto out;
1294 1776
1295 1777 alloc = 8;
1296 1778 } else {
1297 1779 void *ptr;
1298 1780
1299 1781 if ((ptr = zfs_realloc(hdl, mountpoints,
1300 1782 alloc * sizeof (void *),
1301 1783 alloc * 2 * sizeof (void *))) == NULL)
1302 1784 goto out;
1303 1785 mountpoints = ptr;
1304 1786
1305 1787 if ((ptr = zfs_realloc(hdl, datasets,
1306 1788 alloc * sizeof (void *),
1307 1789 alloc * 2 * sizeof (void *))) == NULL)
1308 1790 goto out;
1309 1791 datasets = ptr;
1310 1792
1311 1793 alloc *= 2;
1312 1794 }
1313 1795 }
1314 1796
1315 1797 if ((mountpoints[used] = zfs_strdup(hdl,
1316 1798 entry.mnt_mountp)) == NULL)
1317 1799 goto out;
1318 1800
1319 1801 /*
1320 1802 * This is allowed to fail, in case there is some I/O error. It
1321 1803 * is only used to determine if we need to remove the underlying
1322 1804 * mountpoint, so failure is not fatal.
1323 1805 */
1324 1806 datasets[used] = make_dataset_handle(hdl, entry.mnt_special);
1325 1807
1326 1808 used++;
1327 1809 }
1328 1810
1329 1811 /*
1330 1812 * At this point, we have the entire list of filesystems, so sort it by
1331 1813 * mountpoint.
1332 1814 */
1333 1815 sharearg.zhandle_arr = datasets;
1334 1816 sharearg.zhandle_len = used;
1335 1817 ret = zfs_init_libshare_arg(hdl, SA_INIT_SHARE_API_SELECTIVE,
1336 1818 &sharearg);
1337 1819 if (ret != 0)
1338 1820 goto out;
1339 1821 qsort(mountpoints, used, sizeof (char *), mountpoint_compare);
1340 1822
1341 1823 /*
1342 1824 * Walk through and first unshare everything.
1343 1825 */
1344 1826 for (i = 0; i < used; i++) {
1345 1827 zfs_share_proto_t *curr_proto;
1346 1828 for (curr_proto = share_all_proto; *curr_proto != PROTO_END;
1347 1829 curr_proto++) {
1348 1830 if (is_shared(hdl, mountpoints[i], *curr_proto) &&
1349 - unshare_one(hdl, mountpoints[i],
1350 - mountpoints[i], *curr_proto) != 0)
1831 + unshare_one(hdl, mountpoints[i], mountpoints[i],
1832 + *curr_proto) != 0)
1351 1833 goto out;
1352 1834 }
1353 1835 }
1354 1836
1355 1837 /*
1356 1838 * Now unmount everything, removing the underlying directories as
1357 1839 * appropriate.
1358 1840 */
1359 - for (i = 0; i < used; i++) {
1360 - if (unmount_one(hdl, mountpoints[i], flags) != 0)
1841 + if (n_threads < 2) {
1842 + for (i = 0; i < used; i++) {
1843 + if (unmount_one(hdl, mountpoints[i], flags) != 0)
1844 + goto out;
1845 + }
1846 + } else {
1847 + if (parallel_unmount(hdl, used, (const char **)mountpoints,
1848 + flags, n_threads) != 0)
1361 1849 goto out;
1362 1850 }
1363 -
1364 1851 for (i = 0; i < used; i++) {
1365 1852 if (datasets[i])
1366 1853 remove_mountpoint(datasets[i]);
1367 1854 }
1368 -
1369 1855 ret = 0;
1370 1856 out:
1371 1857 for (i = 0; i < used; i++) {
1372 1858 if (datasets[i])
1373 1859 zfs_close(datasets[i]);
1374 1860 free(mountpoints[i]);
1375 1861 }
1376 1862 free(datasets);
1377 1863 free(mountpoints);
1378 1864
1379 1865 return (ret);
1866 +}
1867 +
1868 +/*
1869 + * Mount and share all datasets within the given pool. This assumes that no
1870 + * datasets within the pool are currently mounted. Because users can create
1871 + * complicated nested hierarchies of mountpoints, we first gather all the
1872 + * datasets and mountpoints within the pool, and sort them by mountpoint. Once
1873 + * we have the list of all filesystems, we iterate over them in order and mount
1874 + * and/or share each one.
1875 + */
1876 +#pragma weak zpool_mount_datasets = zpool_enable_datasets
1877 +int
1878 +zpool_enable_datasets(zpool_handle_t *zhp, const char *mntopts, int flags)
1879 +{
1880 + return (zpool_enable_datasets_ex(zhp, mntopts, flags, 1));
1881 +}
1882 +
1883 +/* alias for 2002/240 */
1884 +#pragma weak zpool_unmount_datasets = zpool_disable_datasets
1885 +/*
1886 + * Unshare and unmount all datasets within the given pool. We don't want to
1887 + * rely on traversing the DSL to discover the filesystems within the pool,
1888 + * because this may be expensive (if not all of them are mounted), and can fail
1889 + * arbitrarily (on I/O error, for example). Instead, we walk /etc/mnttab and
1890 + * gather all the filesystems that are currently mounted.
1891 + */
1892 +int
1893 +zpool_disable_datasets(zpool_handle_t *zhp, boolean_t force)
1894 +{
1895 + return (zpool_disable_datasets_ex(zhp, force, 1));
1380 1896 }