NEX-15281 zfs_panic_recover() during hpr disable/enable
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-13629 zfs send -s: assertion failed: err != 0 || (dsp->dsa_sent_begin && dsp->dsa_sent_end), file: ../../common/fs/zfs/dmu_send.c, line: 1010
Reviewed by: Alex Deiter <alex.deiter@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-9752 backport illumos 6950 ARC should cache compressed data
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
6950 ARC should cache compressed data
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Don Brady <don.brady@intel.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-9575 zfs send -s panics
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Revert "NEX-7251 Resume_token is not cleared right after finishing receive"
This reverts commit 9e97a45e8cf6ca59307a39e2d3c11c6e845e4187.
NEX-7251 Resume_token is not cleared right after finishing receive
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alexey Komarov <alexey.komarov@nexenta.com>
NEX-5928 KRRP: Integrate illumos/openzfs resume-token, to resume replication from a given synced offset
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alexey Komarov <alexey.komarov@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-5795 Rename 'wrc' as 'wbc' in the source and in the tech docs
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
NEX-5272 KRRP: replicate snapshot properties
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Alexey Komarov <alexey.komarov@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-5270 WBC: Incorrect error message when trying to 'zfs recv' into wrcached dataset
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-5132 WBC: Do not allow recv to datasets with enabled writecache
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
6358 A faulted pool with only unavailable vdevs triggers assertion failure in libzfs
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Reviewed by: Serban Maduta <serban.maduta@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
6393 zfs receive a full send as a clone
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Approved by: Dan McDonald <danmcd@omniti.com>
2605 want to resume interrupted zfs send
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed by: Xin Li <delphij@freebsd.org>
Reviewed by: Arne Jansen <sensille@gmx.net>
Approved by: Dan McDonald <danmcd@omniti.com>
4185 add new cryptographic checksums to ZFS: SHA-512, Skein, Edon-R (fix studio build)
4185 add new cryptographic checksums to ZFS: SHA-512, Skein, Edon-R
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
Reviewed by: Richard Lowe <richlowe@richlowe.net>
Approved by: Garrett D'Amore <garrett@damore.org>
6047 SPARC boot should support feature@embedded_data
Reviewed by: Igor Kozhukhov <ikozhukhov@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
5959 clean up per-dataset feature count code
Reviewed by: Toomas Soome <tsoome@me.com>
Reviewed by: George Wilson <george@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-4582 update wrc test cases for allow to use write back cache per tree of datasets
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
5960 zfs recv should prefetch indirect blocks
5925 zfs receive -o origin=
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
5946 zfs_ioc_space_snaps must check that firstsnap and lastsnap refer to snapshots
5945 zfs_ioc_send_space must ensure that fromsnap refers to a snapshot
Reviewed by: Steven Hartland <killing@multiplay.co.uk>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Approved by: Gordon Ross <gordon.ross@nexenta.com>
5870 dmu_recv_end_check() leaks origin_head hold if error happens in drc_force branch
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Dan McDonald <danmcd@omniti.com>
5912 full stream can not be force-received into a dataset if it has a snapshot
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Approved by: Dan McDonald <danmcd@omniti.com>
5809 Blowaway full receive in v1 pool causes kernel panic
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Alex Reece <alex@delphix.com>
Reviewed by: Will Andrews <will@freebsd.org>
Approved by: Gordon Ross <gwr@nexenta.com>
5746 more checksumming in zfs send
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Bayard Bell <buffer.g.overflow@gmail.com>
Approved by: Albert Lee <trisk@omniti.com>
5765 add support for estimating send stream size with lzc_send_space when source is a bookmark
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Steven Hartland <killing@multiplay.co.uk>
Reviewed by: Bayard Bell <buffer.g.overflow@gmail.com>
Approved by: Albert Lee <trisk@nexenta.com>
5769 Cast 'zfs bad bloc' to ULL for x86
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <paul.dagnelie@delphix.com>
Reviewed by: Richard PALO <richard@NetBSD.org>
Approved by: Dan McDonald <danmcd@omniti.com>
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
Revert "NEX-4476 WRC: Allow to use write back cache per tree of datasets"
This reverts commit fe97b74444278a6f36fec93179133641296312da.
NEX-4476 WRC: Allow to use write back cache per tree of datasets
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Alex Aizman <alex.aizman@nexenta.com>
NEX-3588 krrp panics in zfs:dmu_recv_end_check+13b () when running zfs tests.
Reviewed by: Alek Pinchuk <alek.pinchuk@nexenta.com>
Reviewed by: Kevin Crowe <kevin.crowe@nexenta.com>
NEX-3558 KRRP Integration
4370 avoid transmitting holes during zfs send
4371 DMU code clean up
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net>
Approved by: Garrett D'Amore <garrett@damore.org>
Fixup merge results
re #12619 rb4429 More dp->dp_config_rwlock holds
Bug 10481 - Dry run option in 'zfs send' isn't the same as in NexentaStor 3.1
--- old/usr/src/uts/common/fs/zfs/dmu_send.c
+++ new/usr/src/uts/common/fs/zfs/dmu_send.c
1 1 /*
2 2 * CDDL HEADER START
3 3 *
4 4 * The contents of this file are subject to the terms of the
5 5 * Common Development and Distribution License (the "License").
6 6 * You may not use this file except in compliance with the License.
7 7 *
8 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 * or http://www.opensolaris.org/os/licensing.
10 10 * See the License for the specific language governing permissions
11 11 * and limitations under the License.
12 12 *
13 13 * When distributing Covered Code, include this CDDL HEADER in each
14 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 * If applicable, add the following below this CDDL HEADER, with the
16 16 * fields enclosed by brackets "[]" replaced with your own identifying
17 17 * information: Portions Copyright [yyyy] [name of copyright owner]
18 18 *
19 19 * CDDL HEADER END
20 20 */
21 21 /*
22 22 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
23 - * Copyright 2011 Nexenta Systems, Inc. All rights reserved.
24 23 * Copyright (c) 2011, 2015 by Delphix. All rights reserved.
25 24 * Copyright (c) 2014, Joyent, Inc. All rights reserved.
26 25 * Copyright 2014 HybridCluster. All rights reserved.
26 + * Copyright 2017 Nexenta Systems, Inc. All rights reserved.
27 27 * Copyright 2016 RackTop Systems.
28 28 * Copyright (c) 2014 Integros [integros.com]
29 29 */
30 30
31 31 #include <sys/dmu.h>
32 32 #include <sys/dmu_impl.h>
33 33 #include <sys/dmu_tx.h>
34 34 #include <sys/dbuf.h>
35 35 #include <sys/dnode.h>
36 36 #include <sys/zfs_context.h>
37 37 #include <sys/dmu_objset.h>
38 38 #include <sys/dmu_traverse.h>
39 39 #include <sys/dsl_dataset.h>
40 40 #include <sys/dsl_dir.h>
41 41 #include <sys/dsl_prop.h>
42 42 #include <sys/dsl_pool.h>
43 43 #include <sys/dsl_synctask.h>
44 44 #include <sys/zfs_ioctl.h>
45 45 #include <sys/zap.h>
46 46 #include <sys/zio_checksum.h>
47 47 #include <sys/zfs_znode.h>
48 48 #include <zfs_fletcher.h>
49 49 #include <sys/avl.h>
50 50 #include <sys/ddt.h>
51 51 #include <sys/zfs_onexit.h>
52 52 #include <sys/dmu_send.h>
53 53 #include <sys/dsl_destroy.h>
54 54 #include <sys/blkptr.h>
55 55 #include <sys/dsl_bookmark.h>
56 56 #include <sys/zfeature.h>
57 +#include <sys/autosnap.h>
57 58 #include <sys/bqueue.h>
58 59
60 +#include "zfs_errno.h"
61 +
59 62 /* Set this tunable to TRUE to replace corrupt data with 0x2f5baddb10c */
60 63 int zfs_send_corrupt_data = B_FALSE;
61 64 int zfs_send_queue_length = 16 * 1024 * 1024;
62 65 int zfs_recv_queue_length = 16 * 1024 * 1024;
63 66 /* Set this tunable to FALSE to disable setting of DRR_FLAG_FREERECORDS */
64 67 int zfs_send_set_freerecords_bit = B_TRUE;
65 68
66 69 static char *dmu_recv_tag = "dmu_recv_tag";
67 70 const char *recv_clone_name = "%recv";
68 71
69 72 #define BP_SPAN(datablkszsec, indblkshift, level) \
70 73 (((uint64_t)datablkszsec) << (SPA_MINBLOCKSHIFT + \
71 74 (level) * (indblkshift - SPA_BLKPTRSHIFT)))
72 75
73 76 static void byteswap_record(dmu_replay_record_t *drr);
74 77
75 78 struct send_thread_arg {
76 79 bqueue_t q;
77 80 dsl_dataset_t *ds; /* Dataset to traverse */
78 81 uint64_t fromtxg; /* Traverse from this txg */
79 82 int flags; /* flags to pass to traverse_dataset */
80 83 int error_code;
81 84 boolean_t cancel;
82 85 zbookmark_phys_t resume;
83 86 };
84 87
85 88 struct send_block_record {
86 89 boolean_t eos_marker; /* Marks the end of the stream */
87 90 blkptr_t bp;
88 91 zbookmark_phys_t zb;
89 92 uint8_t indblkshift;
90 93 uint16_t datablkszsec;
91 94 bqueue_node_t ln;
92 95 };
93 96
94 97 static int
95 98 dump_bytes(dmu_sendarg_t *dsp, void *buf, int len)
96 99 {
97 100 dsl_dataset_t *ds = dmu_objset_ds(dsp->dsa_os);
98 101 ssize_t resid; /* have to get resid to get detailed errno */
99 102
100 103 /*
101 104 * The code does not rely on this (len being a multiple of 8). We keep
102 105 * this assertion because of the corresponding assertion in
103 106 * receive_read(). Keeping this assertion ensures that we do not
104 107 * inadvertently break backwards compatibility (causing the assertion
105 108 * in receive_read() to trigger on old software).
106 109 *
107 110 * Removing the assertions could be rolled into a new feature that uses
108 111 * data that isn't 8-byte aligned; if the assertions were removed, a
109 112 * feature flag would have to be added.
110 113 */
111 114
112 115 ASSERT0(len % 8);
116 + ASSERT(buf != NULL);
113 117
114 - dsp->dsa_err = vn_rdwr(UIO_WRITE, dsp->dsa_vp,
115 - (caddr_t)buf, len,
116 - 0, UIO_SYSSPACE, FAPPEND, RLIM64_INFINITY, CRED(), &resid);
117 -
118 + dsp->dsa_err = 0;
119 + if (!dsp->sendsize) {
120 + /* if vp is NULL, then the send is from krrp */
121 + if (dsp->dsa_vp != NULL) {
122 + dsp->dsa_err = vn_rdwr(UIO_WRITE, dsp->dsa_vp,
123 + (caddr_t)buf, len,
124 + 0, UIO_SYSSPACE, FAPPEND, RLIM64_INFINITY,
125 + CRED(), &resid);
126 + } else {
127 + ASSERT(dsp->dsa_krrp_task != NULL);
128 + dsp->dsa_err = dmu_krrp_buffer_write(buf, len,
129 + dsp->dsa_krrp_task);
130 + }
131 + }
118 132 mutex_enter(&ds->ds_sendstream_lock);
119 133 *dsp->dsa_off += len;
120 134 mutex_exit(&ds->ds_sendstream_lock);
121 135
122 136 return (dsp->dsa_err);
123 137 }
124 138
139 +static int
140 +dump_bytes_with_checksum(dmu_sendarg_t *dsp, void *buf, int len)
141 +{
142 + if (!dsp->sendsize && (dsp->dsa_krrp_task == NULL ||
143 + dsp->dsa_krrp_task->buffer_args.force_cksum)) {
144 + (void) fletcher_4_incremental_native(buf, len, &dsp->dsa_zc);
145 + }
146 +
147 + return (dump_bytes(dsp, buf, len));
148 +}
149 +
125 150 /*
126 151 * For all record types except BEGIN, fill in the checksum (overlaid in
127 152 * drr_u.drr_checksum.drr_checksum). The checksum verifies everything
128 153 * up to the start of the checksum itself.
129 154 */
130 155 static int
131 156 dump_record(dmu_sendarg_t *dsp, void *payload, int payload_len)
132 157 {
158 + boolean_t do_checksum = (dsp->dsa_krrp_task == NULL ||
159 + dsp->dsa_krrp_task->buffer_args.force_cksum);
160 +
133 161 ASSERT3U(offsetof(dmu_replay_record_t, drr_u.drr_checksum.drr_checksum),
134 162 ==, sizeof (dmu_replay_record_t) - sizeof (zio_cksum_t));
135 - (void) fletcher_4_incremental_native(dsp->dsa_drr,
136 - offsetof(dmu_replay_record_t, drr_u.drr_checksum.drr_checksum),
137 - &dsp->dsa_zc);
163 +
138 164 if (dsp->dsa_drr->drr_type == DRR_BEGIN) {
139 165 dsp->dsa_sent_begin = B_TRUE;
140 - } else {
141 - ASSERT(ZIO_CHECKSUM_IS_ZERO(&dsp->dsa_drr->drr_u.
142 - drr_checksum.drr_checksum));
143 - dsp->dsa_drr->drr_u.drr_checksum.drr_checksum = dsp->dsa_zc;
144 166 }
167 +
145 168 if (dsp->dsa_drr->drr_type == DRR_END) {
146 169 dsp->dsa_sent_end = B_TRUE;
147 170 }
148 - (void) fletcher_4_incremental_native(&dsp->dsa_drr->
149 - drr_u.drr_checksum.drr_checksum,
150 - sizeof (zio_cksum_t), &dsp->dsa_zc);
171 +
172 + if (!dsp->sendsize && do_checksum) {
173 + (void) fletcher_4_incremental_native(dsp->dsa_drr,
174 + offsetof(dmu_replay_record_t,
175 + drr_u.drr_checksum.drr_checksum),
176 + &dsp->dsa_zc);
177 + if (dsp->dsa_drr->drr_type != DRR_BEGIN) {
178 + ASSERT(ZIO_CHECKSUM_IS_ZERO(&dsp->dsa_drr->drr_u.
179 + drr_checksum.drr_checksum));
180 + dsp->dsa_drr->drr_u.drr_checksum.drr_checksum =
181 + dsp->dsa_zc;
182 + }
183 +
184 + (void) fletcher_4_incremental_native(&dsp->dsa_drr->
185 + drr_u.drr_checksum.drr_checksum,
186 + sizeof (zio_cksum_t), &dsp->dsa_zc);
187 + }
188 +
151 189 if (dump_bytes(dsp, dsp->dsa_drr, sizeof (dmu_replay_record_t)) != 0)
152 190 return (SET_ERROR(EINTR));
153 191 if (payload_len != 0) {
154 - (void) fletcher_4_incremental_native(payload, payload_len,
155 - &dsp->dsa_zc);
156 - if (dump_bytes(dsp, payload, payload_len) != 0)
192 + if (dump_bytes_with_checksum(dsp, payload, payload_len) != 0)
157 193 return (SET_ERROR(EINTR));
158 194 }
159 195 return (0);
160 196 }
161 197
162 198 /*
163 199 * Fill in the drr_free struct, or perform aggregation if the previous record is
164 200 * also a free record, and the two are adjacent.
165 201 *
166 202 * Note that we send free records even for a full send, because we want to be
167 203 * able to receive a full send as a clone, which requires a list of all the free
168 204 * and freeobject records that were generated on the source.
169 205 */
170 206 static int
171 207 dump_free(dmu_sendarg_t *dsp, uint64_t object, uint64_t offset,
172 208 uint64_t length)
173 209 {
174 210 struct drr_free *drrf = &(dsp->dsa_drr->drr_u.drr_free);
175 211
176 212 /*
177 213 * When we receive a free record, dbuf_free_range() assumes
178 214 * that the receiving system doesn't have any dbufs in the range
179 215 * being freed. This is always true because there is a one-record
180 216 * constraint: we only send one WRITE record for any given
181 217 * object,offset. We know that the one-record constraint is
182 218 * true because we always send data in increasing order by
183 219 * object,offset.
184 220 *
185 221 * If the increasing-order constraint ever changes, we should find
186 222 * another way to assert that the one-record constraint is still
187 223 * satisfied.
188 224 */
189 225 ASSERT(object > dsp->dsa_last_data_object ||
190 226 (object == dsp->dsa_last_data_object &&
191 227 offset > dsp->dsa_last_data_offset));
192 228
193 229 if (length != -1ULL && offset + length < offset)
194 230 length = -1ULL;
195 231
196 232 /*
197 233 * If there is a pending op, but it's not PENDING_FREE, push it out,
198 234 * since free block aggregation can only be done for blocks of the
199 235 * same type (i.e., DRR_FREE records can only be aggregated with
200 236 * other DRR_FREE records. DRR_FREEOBJECTS records can only be
201 237 * aggregated with other DRR_FREEOBJECTS records).
202 238 */
203 239 if (dsp->dsa_pending_op != PENDING_NONE &&
204 240 dsp->dsa_pending_op != PENDING_FREE) {
205 241 if (dump_record(dsp, NULL, 0) != 0)
206 242 return (SET_ERROR(EINTR));
207 243 dsp->dsa_pending_op = PENDING_NONE;
208 244 }
209 245
210 246 if (dsp->dsa_pending_op == PENDING_FREE) {
211 247 /*
212 248 * There should never be a PENDING_FREE if length is -1
213 249 * (because dump_dnode is the only place where this
214 250 * function is called with a -1, and only after flushing
215 251 * any pending record).
216 252 */
217 253 ASSERT(length != -1ULL);
218 254 /*
219 255 * Check to see whether this free block can be aggregated
220 256 * with the pending one.
221 257 */
222 258 if (drrf->drr_object == object && drrf->drr_offset +
223 259 drrf->drr_length == offset) {
224 260 drrf->drr_length += length;
225 261 return (0);
226 262 } else {
227 263 /* not a continuation. Push out pending record */
228 264 if (dump_record(dsp, NULL, 0) != 0)
229 265 return (SET_ERROR(EINTR));
230 266 dsp->dsa_pending_op = PENDING_NONE;
231 267 }
232 268 }
233 269 /* create a FREE record and make it pending */
234 270 bzero(dsp->dsa_drr, sizeof (dmu_replay_record_t));
235 271 dsp->dsa_drr->drr_type = DRR_FREE;
236 272 drrf->drr_object = object;
237 273 drrf->drr_offset = offset;
238 274 drrf->drr_length = length;
239 275 drrf->drr_toguid = dsp->dsa_toguid;
240 276 if (length == -1ULL) {
241 277 if (dump_record(dsp, NULL, 0) != 0)
242 278 return (SET_ERROR(EINTR));
243 279 } else {
244 280 dsp->dsa_pending_op = PENDING_FREE;
245 281 }
246 282
247 283 return (0);
248 284 }
249 285
250 286 static int
251 287 dump_write(dmu_sendarg_t *dsp, dmu_object_type_t type,
252 288 uint64_t object, uint64_t offset, int lsize, int psize, const blkptr_t *bp,
253 289 void *data)
254 290 {
255 291 uint64_t payload_size;
256 292 struct drr_write *drrw = &(dsp->dsa_drr->drr_u.drr_write);
257 293
258 294 /*
259 295 * We send data in increasing object, offset order.
260 296 * See comment in dump_free() for details.
261 297 */
262 298 ASSERT(object > dsp->dsa_last_data_object ||
263 299 (object == dsp->dsa_last_data_object &&
264 300 offset > dsp->dsa_last_data_offset));
265 301 dsp->dsa_last_data_object = object;
266 302 dsp->dsa_last_data_offset = offset + lsize - 1;
267 303
268 304 /*
269 305 * If there is any kind of pending aggregation (currently either
270 306 * a grouping of free objects or free blocks), push it out to
271 307 * the stream, since aggregation can't be done across operations
272 308 * of different types.
273 309 */
274 310 if (dsp->dsa_pending_op != PENDING_NONE) {
275 311 if (dump_record(dsp, NULL, 0) != 0)
276 312 return (SET_ERROR(EINTR));
277 313 dsp->dsa_pending_op = PENDING_NONE;
278 314 }
279 315 /* write a WRITE record */
280 316 bzero(dsp->dsa_drr, sizeof (dmu_replay_record_t));
281 317 dsp->dsa_drr->drr_type = DRR_WRITE;
282 318 drrw->drr_object = object;
283 319 drrw->drr_type = type;
284 320 drrw->drr_offset = offset;
285 321 drrw->drr_toguid = dsp->dsa_toguid;
286 322 drrw->drr_logical_size = lsize;
287 323
288 324 /* only set the compression fields if the buf is compressed */
289 325 if (lsize != psize) {
290 326 ASSERT(dsp->dsa_featureflags & DMU_BACKUP_FEATURE_COMPRESSED);
291 327 ASSERT(!BP_IS_EMBEDDED(bp));
292 328 ASSERT(!BP_SHOULD_BYTESWAP(bp));
293 329 ASSERT(!DMU_OT_IS_METADATA(BP_GET_TYPE(bp)));
294 330 ASSERT3U(BP_GET_COMPRESS(bp), !=, ZIO_COMPRESS_OFF);
295 331 ASSERT3S(psize, >, 0);
296 332 ASSERT3S(lsize, >=, psize);
297 333
298 334 drrw->drr_compressiontype = BP_GET_COMPRESS(bp);
299 335 drrw->drr_compressed_size = psize;
300 336 payload_size = drrw->drr_compressed_size;
301 337 } else {
302 338 payload_size = drrw->drr_logical_size;
303 339 }
304 340
305 341 if (bp == NULL || BP_IS_EMBEDDED(bp)) {
306 342 /*
307 343 * There's no pre-computed checksum for partial-block
308 344 * writes or embedded BP's, so (like
309 345 * fletcher4-checksummed blocks) userland will have to
310 346 * compute a dedup-capable checksum itself.
311 347 */
312 348 drrw->drr_checksumtype = ZIO_CHECKSUM_OFF;
313 349 } else {
314 350 drrw->drr_checksumtype = BP_GET_CHECKSUM(bp);
315 351 if (zio_checksum_table[drrw->drr_checksumtype].ci_flags &
316 352 ZCHECKSUM_FLAG_DEDUP)
317 353 drrw->drr_checksumflags |= DRR_CHECKSUM_DEDUP;
318 354 DDK_SET_LSIZE(&drrw->drr_key, BP_GET_LSIZE(bp));
319 355 DDK_SET_PSIZE(&drrw->drr_key, BP_GET_PSIZE(bp));
320 356 DDK_SET_COMPRESS(&drrw->drr_key, BP_GET_COMPRESS(bp));
321 357 drrw->drr_key.ddk_cksum = bp->blk_cksum;
322 358 }
323 359
324 360 if (dump_record(dsp, data, payload_size) != 0)
325 361 return (SET_ERROR(EINTR));
326 362 return (0);
327 363 }
328 364
329 365 static int
330 366 dump_write_embedded(dmu_sendarg_t *dsp, uint64_t object, uint64_t offset,
331 367 int blksz, const blkptr_t *bp)
332 368 {
333 369 char buf[BPE_PAYLOAD_SIZE];
334 370 struct drr_write_embedded *drrw =
335 371 &(dsp->dsa_drr->drr_u.drr_write_embedded);
336 372
337 373 if (dsp->dsa_pending_op != PENDING_NONE) {
338 374 if (dump_record(dsp, NULL, 0) != 0)
339 375 return (EINTR);
340 376 dsp->dsa_pending_op = PENDING_NONE;
341 377 }
342 378
343 379 ASSERT(BP_IS_EMBEDDED(bp));
344 380
345 381 bzero(dsp->dsa_drr, sizeof (dmu_replay_record_t));
346 382 dsp->dsa_drr->drr_type = DRR_WRITE_EMBEDDED;
347 383 drrw->drr_object = object;
348 384 drrw->drr_offset = offset;
349 385 drrw->drr_length = blksz;
350 386 drrw->drr_toguid = dsp->dsa_toguid;
351 387 drrw->drr_compression = BP_GET_COMPRESS(bp);
352 388 drrw->drr_etype = BPE_GET_ETYPE(bp);
353 389 drrw->drr_lsize = BPE_GET_LSIZE(bp);
354 390 drrw->drr_psize = BPE_GET_PSIZE(bp);
355 391
356 392 decode_embedded_bp_compressed(bp, buf);
357 393
358 394 if (dump_record(dsp, buf, P2ROUNDUP(drrw->drr_psize, 8)) != 0)
359 395 return (EINTR);
360 396 return (0);
361 397 }
362 398
363 399 static int
364 -dump_spill(dmu_sendarg_t *dsp, uint64_t object, int blksz, void *data)
400 +dump_spill(dmu_sendarg_t *dsp, uint64_t object,
401 + const blkptr_t *bp, const zbookmark_phys_t *zb)
365 402 {
403 + int rc = 0;
366 404 struct drr_spill *drrs = &(dsp->dsa_drr->drr_u.drr_spill);
405 + enum arc_flags aflags = ARC_FLAG_WAIT;
406 + int blksz = BP_GET_LSIZE(bp);
407 + arc_buf_t *abuf;
367 408
368 409 if (dsp->dsa_pending_op != PENDING_NONE) {
369 410 if (dump_record(dsp, NULL, 0) != 0)
370 411 return (SET_ERROR(EINTR));
371 412 dsp->dsa_pending_op = PENDING_NONE;
372 413 }
373 414
374 415 /* write a SPILL record */
375 416 bzero(dsp->dsa_drr, sizeof (dmu_replay_record_t));
376 417 dsp->dsa_drr->drr_type = DRR_SPILL;
377 418 drrs->drr_object = object;
378 419 drrs->drr_length = blksz;
379 420 drrs->drr_toguid = dsp->dsa_toguid;
380 421
381 - if (dump_record(dsp, data, blksz) != 0)
422 + if (dump_record(dsp, NULL, 0))
382 423 return (SET_ERROR(EINTR));
424 +
425 + /*
426 + * if dsa_krrp_task is not NULL, then the send is from krrp and we can
427 + * try to bypass copying data to an intermediate buffer.
428 + */
429 + if (!dsp->sendsize && dsp->dsa_krrp_task != NULL) {
430 + rc = dmu_krrp_direct_arc_read(dsp->dsa_os->os_spa,
431 + dsp->dsa_krrp_task, &dsp->dsa_zc, bp);
432 + /*
433 + * rc == 0 means that we successfully copied
434 + * the data directly from the ARC to the krrp buffer;
435 + * rc != 0 && rc != EINTR means that we cannot
436 + * zero-copy the data and need to use the slow path
437 + */
438 + if (rc == 0 || rc == EINTR)
439 + return (rc);
440 +
441 + ASSERT3U(rc, ==, ENODATA);
442 + }
443 +
444 + if (arc_read(NULL, dsp->dsa_os->os_spa, bp, arc_getbuf_func, &abuf,
445 + ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL,
446 + &aflags, zb) != 0)
447 + return (SET_ERROR(EIO));
448 +
449 + rc = dump_bytes_with_checksum(dsp, abuf->b_data, blksz);
450 + arc_buf_destroy(abuf, &abuf);
451 + if (rc != 0)
452 + return (SET_ERROR(EINTR));
453 +
383 454 return (0);
384 455 }
385 456
386 457 static int
387 458 dump_freeobjects(dmu_sendarg_t *dsp, uint64_t firstobj, uint64_t numobjs)
388 459 {
389 460 struct drr_freeobjects *drrfo = &(dsp->dsa_drr->drr_u.drr_freeobjects);
390 461
391 462 /*
392 463 * If there is a pending op, but it's not PENDING_FREEOBJECTS,
393 464 * push it out, since free block aggregation can only be done for
394 465 * blocks of the same type (i.e., DRR_FREE records can only be
395 466 * aggregated with other DRR_FREE records. DRR_FREEOBJECTS records
396 467 * can only be aggregated with other DRR_FREEOBJECTS records).
397 468 */
398 469 if (dsp->dsa_pending_op != PENDING_NONE &&
399 470 dsp->dsa_pending_op != PENDING_FREEOBJECTS) {
400 471 if (dump_record(dsp, NULL, 0) != 0)
401 472 return (SET_ERROR(EINTR));
402 473 dsp->dsa_pending_op = PENDING_NONE;
403 474 }
404 475 if (dsp->dsa_pending_op == PENDING_FREEOBJECTS) {
405 476 /*
406 477 * See whether this free object array can be aggregated
407 478 * with the pending one
408 479 */
409 480 if (drrfo->drr_firstobj + drrfo->drr_numobjs == firstobj) {
410 481 drrfo->drr_numobjs += numobjs;
411 482 return (0);
412 483 } else {
413 484 /* can't be aggregated. Push out pending record */
414 485 if (dump_record(dsp, NULL, 0) != 0)
415 486 return (SET_ERROR(EINTR));
416 487 dsp->dsa_pending_op = PENDING_NONE;
417 488 }
418 489 }
419 490
420 491 /* write a FREEOBJECTS record */
421 492 bzero(dsp->dsa_drr, sizeof (dmu_replay_record_t));
422 493 dsp->dsa_drr->drr_type = DRR_FREEOBJECTS;
423 494 drrfo->drr_firstobj = firstobj;
424 495 drrfo->drr_numobjs = numobjs;
425 496 drrfo->drr_toguid = dsp->dsa_toguid;
426 497
427 498 dsp->dsa_pending_op = PENDING_FREEOBJECTS;
428 499
429 500 return (0);
430 501 }
431 502
432 503 static int
433 504 dump_dnode(dmu_sendarg_t *dsp, uint64_t object, dnode_phys_t *dnp)
434 505 {
435 506 struct drr_object *drro = &(dsp->dsa_drr->drr_u.drr_object);
436 507
437 508 if (object < dsp->dsa_resume_object) {
438 509 /*
439 510 * Note: when resuming, we will visit all the dnodes in
440 511 * the block of dnodes that we are resuming from. In
441 512 * this case it's unnecessary to send the dnodes prior to
442 513 * the one we are resuming from. We should be at most one
443 514 * block's worth of dnodes behind the resume point.
444 515 */
445 516 ASSERT3U(dsp->dsa_resume_object - object, <,
446 517 1 << (DNODE_BLOCK_SHIFT - DNODE_SHIFT));
447 518 return (0);
448 519 }
449 520
450 521 if (dnp == NULL || dnp->dn_type == DMU_OT_NONE)
451 522 return (dump_freeobjects(dsp, object, 1));
452 523
453 524 if (dsp->dsa_pending_op != PENDING_NONE) {
454 525 if (dump_record(dsp, NULL, 0) != 0)
455 526 return (SET_ERROR(EINTR));
456 527 dsp->dsa_pending_op = PENDING_NONE;
457 528 }
458 529
459 530 /* write an OBJECT record */
460 531 bzero(dsp->dsa_drr, sizeof (dmu_replay_record_t));
461 532 dsp->dsa_drr->drr_type = DRR_OBJECT;
462 533 drro->drr_object = object;
463 534 drro->drr_type = dnp->dn_type;
464 535 drro->drr_bonustype = dnp->dn_bonustype;
465 536 drro->drr_blksz = dnp->dn_datablkszsec << SPA_MINBLOCKSHIFT;
466 537 drro->drr_bonuslen = dnp->dn_bonuslen;
467 538 drro->drr_checksumtype = dnp->dn_checksum;
468 539 drro->drr_compress = dnp->dn_compress;
469 540 drro->drr_toguid = dsp->dsa_toguid;
470 541
471 542 if (!(dsp->dsa_featureflags & DMU_BACKUP_FEATURE_LARGE_BLOCKS) &&
472 543 drro->drr_blksz > SPA_OLD_MAXBLOCKSIZE)
473 544 drro->drr_blksz = SPA_OLD_MAXBLOCKSIZE;
474 545
475 546 if (dump_record(dsp, DN_BONUS(dnp),
476 547 P2ROUNDUP(dnp->dn_bonuslen, 8)) != 0) {
477 548 return (SET_ERROR(EINTR));
478 549 }
479 550
480 551 /* Free anything past the end of the file. */
481 552 if (dump_free(dsp, object, (dnp->dn_maxblkid + 1) *
482 553 (dnp->dn_datablkszsec << SPA_MINBLOCKSHIFT), -1ULL) != 0)
483 554 return (SET_ERROR(EINTR));
484 555 if (dsp->dsa_err != 0)
485 556 return (SET_ERROR(EINTR));
486 557 return (0);
487 558 }
488 559
489 560 static boolean_t
490 561 backup_do_embed(dmu_sendarg_t *dsp, const blkptr_t *bp)
491 562 {
492 563 if (!BP_IS_EMBEDDED(bp))
493 564 return (B_FALSE);
494 565
495 566 /*
496 567 * Compression function must be legacy, or explicitly enabled.
497 568 */
498 569 if ((BP_GET_COMPRESS(bp) >= ZIO_COMPRESS_LEGACY_FUNCTIONS &&
499 570 !(dsp->dsa_featureflags & DMU_BACKUP_FEATURE_LZ4)))
500 571 return (B_FALSE);
501 572
502 573 /*
503 574 * Embed type must be explicitly enabled.
504 575 */
505 576 switch (BPE_GET_ETYPE(bp)) {
506 577 case BP_EMBEDDED_TYPE_DATA:
507 578 if (dsp->dsa_featureflags & DMU_BACKUP_FEATURE_EMBED_DATA)
508 579 return (B_TRUE);
509 580 break;
510 581 default:
511 582 return (B_FALSE);
512 583 }
513 584 return (B_FALSE);
514 585 }
515 586
516 587 /*
517 588 * This is the callback function to traverse_dataset that acts as the worker
518 589 * thread for dmu_send_impl.
519 590 */
520 591 /*ARGSUSED*/
521 592 static int
522 593 send_cb(spa_t *spa, zilog_t *zilog, const blkptr_t *bp,
523 594 const zbookmark_phys_t *zb, const struct dnode_phys *dnp, void *arg)
524 595 {
525 596 struct send_thread_arg *sta = arg;
526 597 struct send_block_record *record;
527 598 uint64_t record_size;
528 599 int err = 0;
529 600
530 601 ASSERT(zb->zb_object == DMU_META_DNODE_OBJECT ||
531 602 zb->zb_object >= sta->resume.zb_object);
532 603
533 604 if (sta->cancel)
534 605 return (SET_ERROR(EINTR));
535 606
536 607 if (bp == NULL) {
537 608 ASSERT3U(zb->zb_level, ==, ZB_DNODE_LEVEL);
538 609 return (0);
539 610 } else if (zb->zb_level < 0) {
540 611 return (0);
541 612 }
542 613
543 614 record = kmem_zalloc(sizeof (struct send_block_record), KM_SLEEP);
544 615 record->eos_marker = B_FALSE;
545 616 record->bp = *bp;
546 617 record->zb = *zb;
547 618 record->indblkshift = dnp->dn_indblkshift;
548 619 record->datablkszsec = dnp->dn_datablkszsec;
549 620 record_size = dnp->dn_datablkszsec << SPA_MINBLOCKSHIFT;
550 621 bqueue_enqueue(&sta->q, record, record_size);
551 622
552 623 return (err);
553 624 }
554 625
555 626 /*
556 627 * This function kicks off the traverse_dataset. It also handles setting the
557 628 * error code of the thread in case something goes wrong, and pushes the End of
558 629 * Stream record when the traverse_dataset call has finished. If there is no
559 630 * dataset to traverse, the thread immediately pushes End of Stream marker.
560 631 */
561 632 static void
562 633 send_traverse_thread(void *arg)
563 634 {
564 635 struct send_thread_arg *st_arg = arg;
565 636 int err;
566 637 struct send_block_record *data;
567 638
568 639 if (st_arg->ds != NULL) {
569 640 err = traverse_dataset_resume(st_arg->ds,
570 641 st_arg->fromtxg, &st_arg->resume,
571 642 st_arg->flags, send_cb, st_arg);
572 643
573 644 if (err != EINTR)
574 645 st_arg->error_code = err;
575 646 }
576 647 data = kmem_zalloc(sizeof (*data), KM_SLEEP);
577 648 data->eos_marker = B_TRUE;
578 649 bqueue_enqueue(&st_arg->q, data, 1);
579 650 thread_exit();
580 651 }
581 652
582 653 /*
583 654 * This function actually handles figuring out what kind of record needs to be
584 655 * dumped, reading the data (which has hopefully been prefetched), and calling
585 656 * the appropriate helper function.
586 657 */
587 658 static int
588 659 do_dump(dmu_sendarg_t *dsa, struct send_block_record *data)
589 660 {
590 661 dsl_dataset_t *ds = dmu_objset_ds(dsa->dsa_os);
591 662 const blkptr_t *bp = &data->bp;
592 663 const zbookmark_phys_t *zb = &data->zb;
593 664 uint8_t indblkshift = data->indblkshift;
594 665 uint16_t dblkszsec = data->datablkszsec;
595 666 spa_t *spa = ds->ds_dir->dd_pool->dp_spa;
596 667 dmu_object_type_t type = bp ? BP_GET_TYPE(bp) : DMU_OT_NONE;
597 668 int err = 0;
598 669
599 670 ASSERT3U(zb->zb_level, >=, 0);
600 671
601 672 ASSERT(zb->zb_object == DMU_META_DNODE_OBJECT ||
602 673 zb->zb_object >= dsa->dsa_resume_object);
603 674
604 675 if (zb->zb_object != DMU_META_DNODE_OBJECT &&
605 676 DMU_OBJECT_IS_SPECIAL(zb->zb_object)) {
606 677 return (0);
607 678 } else if (BP_IS_HOLE(bp) &&
608 679 zb->zb_object == DMU_META_DNODE_OBJECT) {
609 680 uint64_t span = BP_SPAN(dblkszsec, indblkshift, zb->zb_level);
610 681 uint64_t dnobj = (zb->zb_blkid * span) >> DNODE_SHIFT;
611 682 err = dump_freeobjects(dsa, dnobj, span >> DNODE_SHIFT);
612 683 } else if (BP_IS_HOLE(bp)) {
613 684 uint64_t span = BP_SPAN(dblkszsec, indblkshift, zb->zb_level);
614 685 uint64_t offset = zb->zb_blkid * span;
615 686 err = dump_free(dsa, zb->zb_object, offset, span);
616 687 } else if (zb->zb_level > 0 || type == DMU_OT_OBJSET) {
617 688 return (0);
618 689 } else if (type == DMU_OT_DNODE) {
619 690 int blksz = BP_GET_LSIZE(bp);
620 691 arc_flags_t aflags = ARC_FLAG_WAIT;
621 692 arc_buf_t *abuf;
622 693
623 694 ASSERT0(zb->zb_level);
624 695
625 696 if (arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
626 697 ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL,
627 698 &aflags, zb) != 0)
628 699 return (SET_ERROR(EIO));
629 700
630 701 dnode_phys_t *blk = abuf->b_data;
631 702 uint64_t dnobj = zb->zb_blkid * (blksz >> DNODE_SHIFT);
632 703 for (int i = 0; i < blksz >> DNODE_SHIFT; i++) {
633 704 err = dump_dnode(dsa, dnobj + i, blk + i);
634 705 if (err != 0)
635 706 break;
636 707 }
637 708 arc_buf_destroy(abuf, &abuf);
638 709 } else if (type == DMU_OT_SA) {
639 - arc_flags_t aflags = ARC_FLAG_WAIT;
640 - arc_buf_t *abuf;
641 - int blksz = BP_GET_LSIZE(bp);
642 -
643 - if (arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
644 - ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL,
645 - &aflags, zb) != 0)
646 - return (SET_ERROR(EIO));
647 -
648 - err = dump_spill(dsa, zb->zb_object, blksz, abuf->b_data);
649 - arc_buf_destroy(abuf, &abuf);
710 + /*
711 + * The upstream code has an arc_read() call here, but we moved
712 + * it to dump_spill() since we want to take advantage of
713 + * zero-copy of the buffer when possible
714 + */
715 + err = dump_spill(dsa, zb->zb_object, bp, zb);
650 716 } else if (backup_do_embed(dsa, bp)) {
651 717 /* it's an embedded level-0 block of a regular object */
652 718 int blksz = dblkszsec << SPA_MINBLOCKSHIFT;
653 719 ASSERT0(zb->zb_level);
654 720 err = dump_write_embedded(dsa, zb->zb_object,
655 721 zb->zb_blkid * blksz, blksz, bp);
656 722 } else {
657 723 /* it's a level-0 block of a regular object */
658 724 arc_flags_t aflags = ARC_FLAG_WAIT;
659 725 arc_buf_t *abuf;
660 726 int blksz = dblkszsec << SPA_MINBLOCKSHIFT;
661 727 uint64_t offset;
662 728
663 729 /*
664 730 * If we have large blocks stored on disk but the send flags
665 731 * don't allow us to send large blocks, we split the data from
666 732 * the arc buf into chunks.
667 733 */
668 734 boolean_t split_large_blocks = blksz > SPA_OLD_MAXBLOCKSIZE &&
669 735 !(dsa->dsa_featureflags & DMU_BACKUP_FEATURE_LARGE_BLOCKS);
670 736 /*
671 737 * We should only request compressed data from the ARC if all
672 738 * the following are true:
673 739 * - stream compression was requested
674 740 * - we aren't splitting large blocks into smaller chunks
675 741 * - the data won't need to be byteswapped before sending
676 742 * - this isn't an embedded block
677 743 * - this isn't metadata (if receiving on a different endian
678 744 * system it can be byteswapped more easily)
679 745 */
680 746 boolean_t request_compressed =
681 747 (dsa->dsa_featureflags & DMU_BACKUP_FEATURE_COMPRESSED) &&
682 748 !split_large_blocks && !BP_SHOULD_BYTESWAP(bp) &&
683 749 !BP_IS_EMBEDDED(bp) && !DMU_OT_IS_METADATA(BP_GET_TYPE(bp));
684 750
685 751 ASSERT0(zb->zb_level);
686 752 ASSERT(zb->zb_object > dsa->dsa_resume_object ||
687 753 (zb->zb_object == dsa->dsa_resume_object &&
688 754 zb->zb_blkid * blksz >= dsa->dsa_resume_offset));
689 755
690 - ASSERT0(zb->zb_level);
691 - ASSERT(zb->zb_object > dsa->dsa_resume_object ||
692 - (zb->zb_object == dsa->dsa_resume_object &&
693 - zb->zb_blkid * blksz >= dsa->dsa_resume_offset));
694 -
695 756 ASSERT3U(blksz, ==, BP_GET_LSIZE(bp));
696 757
697 758 enum zio_flag zioflags = ZIO_FLAG_CANFAIL;
698 759 if (request_compressed)
699 760 zioflags |= ZIO_FLAG_RAW;
700 761 if (arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
701 762 ZIO_PRIORITY_ASYNC_READ, zioflags, &aflags, zb) != 0) {
702 763 if (zfs_send_corrupt_data) {
703 764 /* Send a block filled with 0x"zfs badd bloc" */
704 765 abuf = arc_alloc_buf(spa, &abuf, ARC_BUFC_DATA,
705 766 blksz);
706 767 uint64_t *ptr;
707 768 for (ptr = abuf->b_data;
708 769 (char *)ptr < (char *)abuf->b_data + blksz;
709 770 ptr++)
710 771 *ptr = 0x2f5baddb10cULL;
711 772 } else {
712 773 return (SET_ERROR(EIO));
713 774 }
714 775 }
715 776
716 777 offset = zb->zb_blkid * blksz;
717 778
718 779 if (split_large_blocks) {
719 780 ASSERT3U(arc_get_compression(abuf), ==,
720 781 ZIO_COMPRESS_OFF);
721 782 char *buf = abuf->b_data;
722 783 while (blksz > 0 && err == 0) {
723 784 int n = MIN(blksz, SPA_OLD_MAXBLOCKSIZE);
724 785 err = dump_write(dsa, type, zb->zb_object,
725 786 offset, n, n, NULL, buf);
726 787 offset += n;
727 - buf += n;
728 788 blksz -= n;
729 789 }
730 790 } else {
731 791 err = dump_write(dsa, type, zb->zb_object, offset,
732 792 blksz, arc_buf_size(abuf), bp, abuf->b_data);
733 793 }
734 794 arc_buf_destroy(abuf, &abuf);
735 795 }
736 796
737 797 ASSERT(err == 0 || err == EINTR);
738 798 return (err);
739 799 }
740 800
741 801 /*
742 802 * Pop the new data off the queue, and free the old data.
743 803 */
744 804 static struct send_block_record *
745 805 get_next_record(bqueue_t *bq, struct send_block_record *data)
746 806 {
747 807 struct send_block_record *tmp = bqueue_dequeue(bq);
748 808 kmem_free(data, sizeof (*data));
749 809 return (tmp);
750 810 }
751 811
752 812 /*
753 813 * Actually do the bulk of the work in a zfs send.
754 814 *
755 815 * Note: Releases dp using the specified tag.
756 816 */
757 817 static int
758 -dmu_send_impl(void *tag, dsl_pool_t *dp, dsl_dataset_t *to_ds,
818 +dmu_send_impl_ss(void *tag, dsl_pool_t *dp, dsl_dataset_t *to_ds,
759 819 zfs_bookmark_phys_t *ancestor_zb, boolean_t is_clone,
760 820 boolean_t embedok, boolean_t large_block_ok, boolean_t compressok,
761 - int outfd, uint64_t resumeobj, uint64_t resumeoff,
762 - vnode_t *vp, offset_t *off)
821 + int outfd, uint64_t resumeobj, uint64_t resumeoff, vnode_t *vp,
822 + offset_t *off, boolean_t sendsize, dmu_krrp_task_t *krrp_task)
763 823 {
764 824 objset_t *os;
765 825 dmu_replay_record_t *drr;
766 826 dmu_sendarg_t *dsp;
767 827 int err;
768 828 uint64_t fromtxg = 0;
769 829 uint64_t featureflags = 0;
770 830 struct send_thread_arg to_arg = { 0 };
771 831
772 832 err = dmu_objset_from_ds(to_ds, &os);
773 833 if (err != 0) {
774 834 dsl_pool_rele(dp, tag);
775 835 return (err);
776 836 }
777 837
778 838 drr = kmem_zalloc(sizeof (dmu_replay_record_t), KM_SLEEP);
779 839 drr->drr_type = DRR_BEGIN;
780 840 drr->drr_u.drr_begin.drr_magic = DMU_BACKUP_MAGIC;
781 841 DMU_SET_STREAM_HDRTYPE(drr->drr_u.drr_begin.drr_versioninfo,
782 842 DMU_SUBSTREAM);
783 843
784 844 #ifdef _KERNEL
785 845 if (dmu_objset_type(os) == DMU_OST_ZFS) {
786 846 uint64_t version;
787 847 if (zfs_get_zplprop(os, ZFS_PROP_VERSION, &version) != 0) {
788 848 kmem_free(drr, sizeof (dmu_replay_record_t));
789 849 dsl_pool_rele(dp, tag);
790 850 return (SET_ERROR(EINVAL));
791 851 }
792 852 if (version >= ZPL_VERSION_SA) {
793 853 featureflags |= DMU_BACKUP_FEATURE_SA_SPILL;
794 854 }
795 855 }
796 856 #endif
797 857
798 858 if (large_block_ok && to_ds->ds_feature_inuse[SPA_FEATURE_LARGE_BLOCKS])
799 859 featureflags |= DMU_BACKUP_FEATURE_LARGE_BLOCKS;
800 860 if (embedok &&
801 861 spa_feature_is_active(dp->dp_spa, SPA_FEATURE_EMBEDDED_DATA)) {
802 862 featureflags |= DMU_BACKUP_FEATURE_EMBED_DATA;
803 863 if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_LZ4_COMPRESS))
804 864 featureflags |= DMU_BACKUP_FEATURE_LZ4;
805 865 }
806 866 if (compressok) {
807 867 featureflags |= DMU_BACKUP_FEATURE_COMPRESSED;
808 868 }
809 869 if ((featureflags &
810 870 (DMU_BACKUP_FEATURE_EMBED_DATA | DMU_BACKUP_FEATURE_COMPRESSED)) !=
811 871 0 && spa_feature_is_active(dp->dp_spa, SPA_FEATURE_LZ4_COMPRESS)) {
812 872 featureflags |= DMU_BACKUP_FEATURE_LZ4;
813 873 }
814 874
815 875 if (resumeobj != 0 || resumeoff != 0) {
816 876 featureflags |= DMU_BACKUP_FEATURE_RESUMING;
817 877 }
818 878
819 879 DMU_SET_FEATUREFLAGS(drr->drr_u.drr_begin.drr_versioninfo,
820 880 featureflags);
821 881
822 882 drr->drr_u.drr_begin.drr_creation_time =
823 883 dsl_dataset_phys(to_ds)->ds_creation_time;
824 884 drr->drr_u.drr_begin.drr_type = dmu_objset_type(os);
825 885 if (is_clone)
826 886 drr->drr_u.drr_begin.drr_flags |= DRR_FLAG_CLONE;
827 887 drr->drr_u.drr_begin.drr_toguid = dsl_dataset_phys(to_ds)->ds_guid;
828 888 if (dsl_dataset_phys(to_ds)->ds_flags & DS_FLAG_CI_DATASET)
829 889 drr->drr_u.drr_begin.drr_flags |= DRR_FLAG_CI_DATA;
830 890 if (zfs_send_set_freerecords_bit)
831 891 drr->drr_u.drr_begin.drr_flags |= DRR_FLAG_FREERECORDS;
832 892
833 893 if (ancestor_zb != NULL) {
834 894 drr->drr_u.drr_begin.drr_fromguid =
835 895 ancestor_zb->zbm_guid;
836 896 fromtxg = ancestor_zb->zbm_creation_txg;
837 897 }
838 898 dsl_dataset_name(to_ds, drr->drr_u.drr_begin.drr_toname);
839 899 if (!to_ds->ds_is_snapshot) {
840 900 (void) strlcat(drr->drr_u.drr_begin.drr_toname, "@--head--",
841 901 sizeof (drr->drr_u.drr_begin.drr_toname));
842 902 }
843 903
844 904 dsp = kmem_zalloc(sizeof (dmu_sendarg_t), KM_SLEEP);
845 905
846 906 dsp->dsa_drr = drr;
847 907 dsp->dsa_vp = vp;
848 908 dsp->dsa_outfd = outfd;
849 909 dsp->dsa_proc = curproc;
850 910 dsp->dsa_os = os;
851 911 dsp->dsa_off = off;
852 912 dsp->dsa_toguid = dsl_dataset_phys(to_ds)->ds_guid;
913 + dsp->dsa_krrp_task = krrp_task;
853 914 dsp->dsa_pending_op = PENDING_NONE;
854 915 dsp->dsa_featureflags = featureflags;
916 + dsp->sendsize = sendsize;
855 917 dsp->dsa_resume_object = resumeobj;
856 918 dsp->dsa_resume_offset = resumeoff;
857 919
858 920 mutex_enter(&to_ds->ds_sendstream_lock);
859 921 list_insert_head(&to_ds->ds_sendstreams, dsp);
860 922 mutex_exit(&to_ds->ds_sendstream_lock);
861 923
862 924 dsl_dataset_long_hold(to_ds, FTAG);
863 925 dsl_pool_rele(dp, tag);
864 926
865 927 void *payload = NULL;
866 928 size_t payload_len = 0;
867 929 if (resumeobj != 0 || resumeoff != 0) {
868 930 dmu_object_info_t to_doi;
869 931 err = dmu_object_info(os, resumeobj, &to_doi);
870 932 if (err != 0)
871 933 goto out;
872 934 SET_BOOKMARK(&to_arg.resume, to_ds->ds_object, resumeobj, 0,
873 935 resumeoff / to_doi.doi_data_block_size);
874 936
875 937 nvlist_t *nvl = fnvlist_alloc();
876 938 fnvlist_add_uint64(nvl, "resume_object", resumeobj);
877 939 fnvlist_add_uint64(nvl, "resume_offset", resumeoff);
878 940 payload = fnvlist_pack(nvl, &payload_len);
879 941 drr->drr_payloadlen = payload_len;
880 942 fnvlist_free(nvl);
881 943 }
882 944
883 945 err = dump_record(dsp, payload, payload_len);
884 946 fnvlist_pack_free(payload, payload_len);
885 947 if (err != 0) {
886 948 err = dsp->dsa_err;
887 949 goto out;
888 950 }
889 951
890 952 err = bqueue_init(&to_arg.q, zfs_send_queue_length,
891 953 offsetof(struct send_block_record, ln));
892 954 to_arg.error_code = 0;
893 955 to_arg.cancel = B_FALSE;
894 956 to_arg.ds = to_ds;
895 957 to_arg.fromtxg = fromtxg;
896 958 to_arg.flags = TRAVERSE_PRE | TRAVERSE_PREFETCH;
897 959 (void) thread_create(NULL, 0, send_traverse_thread, &to_arg, 0, curproc,
898 960 TS_RUN, minclsyspri);
899 961
900 962 struct send_block_record *to_data;
901 963 to_data = bqueue_dequeue(&to_arg.q);
902 964
903 965 while (!to_data->eos_marker && err == 0) {
904 966 err = do_dump(dsp, to_data);
905 967 to_data = get_next_record(&to_arg.q, to_data);
906 - if (issig(JUSTLOOKING) && issig(FORREAL))
968 + if (vp != NULL && issig(JUSTLOOKING) && issig(FORREAL))
907 969 err = EINTR;
908 970 }
909 971
910 972 if (err != 0) {
911 973 to_arg.cancel = B_TRUE;
912 974 while (!to_data->eos_marker) {
913 975 to_data = get_next_record(&to_arg.q, to_data);
914 976 }
915 977 }
916 978 kmem_free(to_data, sizeof (*to_data));
917 979
918 980 bqueue_destroy(&to_arg.q);
919 981
920 982 if (err == 0 && to_arg.error_code != 0)
921 983 err = to_arg.error_code;
922 984
923 985 if (err != 0)
924 986 goto out;
925 987
926 988 if (dsp->dsa_pending_op != PENDING_NONE)
927 989 if (dump_record(dsp, NULL, 0) != 0)
928 990 err = SET_ERROR(EINTR);
929 991
930 992 if (err != 0) {
931 993 if (err == EINTR && dsp->dsa_err != 0)
932 994 err = dsp->dsa_err;
933 995 goto out;
934 996 }
935 997
936 998 bzero(drr, sizeof (dmu_replay_record_t));
937 999 drr->drr_type = DRR_END;
938 1000 drr->drr_u.drr_end.drr_checksum = dsp->dsa_zc;
939 1001 drr->drr_u.drr_end.drr_toguid = dsp->dsa_toguid;
940 1002
941 1003 if (dump_record(dsp, NULL, 0) != 0)
942 1004 err = dsp->dsa_err;
943 1005
944 1006 out:
945 1007 mutex_enter(&to_ds->ds_sendstream_lock);
946 1008 list_remove(&to_ds->ds_sendstreams, dsp);
947 1009 mutex_exit(&to_ds->ds_sendstream_lock);
948 1010
949 1011 VERIFY(err != 0 || (dsp->dsa_sent_begin && dsp->dsa_sent_end));
950 1012
951 1013 kmem_free(drr, sizeof (dmu_replay_record_t));
952 1014 kmem_free(dsp, sizeof (dmu_sendarg_t));
953 1015
954 1016 dsl_dataset_long_rele(to_ds, FTAG);
955 1017
956 1018 return (err);
957 1019 }
958 1020
959 1021 int
1022 +dmu_send_impl(void *tag, dsl_pool_t *dp, dsl_dataset_t *to_ds,
1023 + zfs_bookmark_phys_t *ancestor_zb, boolean_t is_clone, boolean_t embedok,
1024 + boolean_t large_block_ok, boolean_t compressok, int outfd,
1025 + uint64_t resumeobj, uint64_t resumeoff, vnode_t *vp, offset_t *off,
1026 + dmu_krrp_task_t *krrp_task)
1027 +{
1028 + return (dmu_send_impl_ss(tag, dp, to_ds, ancestor_zb, is_clone,
1029 + embedok, large_block_ok, compressok, outfd, resumeobj, resumeoff,
1030 + vp, off, B_FALSE, krrp_task));
1031 +}
1032 +
1033 +int
960 1034 dmu_send_obj(const char *pool, uint64_t tosnap, uint64_t fromsnap,
961 1035 boolean_t embedok, boolean_t large_block_ok, boolean_t compressok,
962 - int outfd, vnode_t *vp, offset_t *off)
1036 + int outfd, vnode_t *vp, offset_t *off, boolean_t sendsize)
963 1037 {
964 1038 dsl_pool_t *dp;
965 1039 dsl_dataset_t *ds;
966 1040 dsl_dataset_t *fromds = NULL;
967 1041 int err;
968 1042
969 1043 err = dsl_pool_hold(pool, FTAG, &dp);
970 1044 if (err != 0)
971 1045 return (err);
972 1046
973 1047 err = dsl_dataset_hold_obj(dp, tosnap, FTAG, &ds);
974 1048 if (err != 0) {
975 1049 dsl_pool_rele(dp, FTAG);
976 1050 return (err);
977 1051 }
978 1052
979 1053 if (fromsnap != 0) {
980 1054 zfs_bookmark_phys_t zb;
981 1055 boolean_t is_clone;
982 1056
983 1057 err = dsl_dataset_hold_obj(dp, fromsnap, FTAG, &fromds);
984 1058 if (err != 0) {
985 1059 dsl_dataset_rele(ds, FTAG);
986 1060 dsl_pool_rele(dp, FTAG);
987 1061 return (err);
988 1062 }
989 1063 if (!dsl_dataset_is_before(ds, fromds, 0))
990 1064 err = SET_ERROR(EXDEV);
991 1065 zb.zbm_creation_time =
992 1066 dsl_dataset_phys(fromds)->ds_creation_time;
993 1067 zb.zbm_creation_txg = dsl_dataset_phys(fromds)->ds_creation_txg;
994 1068 zb.zbm_guid = dsl_dataset_phys(fromds)->ds_guid;
995 1069 is_clone = (fromds->ds_dir != ds->ds_dir);
996 1070 dsl_dataset_rele(fromds, FTAG);
997 - err = dmu_send_impl(FTAG, dp, ds, &zb, is_clone,
998 - embedok, large_block_ok, compressok, outfd, 0, 0, vp, off);
1071 + err = dmu_send_impl_ss(FTAG, dp, ds, &zb, is_clone,
1072 + embedok, large_block_ok, compressok, outfd, 0, 0, vp, off,
1073 + sendsize, NULL);
999 1074 } else {
1000 - err = dmu_send_impl(FTAG, dp, ds, NULL, B_FALSE,
1001 - embedok, large_block_ok, compressok, outfd, 0, 0, vp, off);
1075 + err = dmu_send_impl_ss(FTAG, dp, ds, NULL, B_FALSE,
1076 + embedok, large_block_ok, compressok, outfd, 0, 0, vp, off,
1077 + sendsize, NULL);
1002 1078 }
1003 1079 dsl_dataset_rele(ds, FTAG);
1004 1080 return (err);
1005 1081 }
1006 1082
1007 1083 int
1008 1084 dmu_send(const char *tosnap, const char *fromsnap, boolean_t embedok,
1009 1085 boolean_t large_block_ok, boolean_t compressok, int outfd,
1010 1086 uint64_t resumeobj, uint64_t resumeoff,
1011 1087 vnode_t *vp, offset_t *off)
1012 1088 {
1013 1089 dsl_pool_t *dp;
1014 1090 dsl_dataset_t *ds;
1015 1091 int err;
1016 1092 boolean_t owned = B_FALSE;
1017 1093
1018 1094 if (fromsnap != NULL && strpbrk(fromsnap, "@#") == NULL)
1019 1095 return (SET_ERROR(EINVAL));
1020 1096
1021 1097 err = dsl_pool_hold(tosnap, FTAG, &dp);
1022 1098 if (err != 0)
1023 1099 return (err);
1024 1100
1025 1101 if (strchr(tosnap, '@') == NULL && spa_writeable(dp->dp_spa)) {
1026 1102 /*
1027 1103 * We are sending a filesystem or volume. Ensure
1028 1104 * that it doesn't change by owning the dataset.
1029 1105 */
1030 1106 err = dsl_dataset_own(dp, tosnap, FTAG, &ds);
1031 1107 owned = B_TRUE;
1032 1108 } else {
1033 1109 err = dsl_dataset_hold(dp, tosnap, FTAG, &ds);
1034 1110 }
1035 1111 if (err != 0) {
1036 1112 dsl_pool_rele(dp, FTAG);
1037 1113 return (err);
1038 1114 }
1039 1115
1040 1116 if (fromsnap != NULL) {
1041 1117 zfs_bookmark_phys_t zb;
1042 1118 boolean_t is_clone = B_FALSE;
1043 1119 int fsnamelen = strchr(tosnap, '@') - tosnap;
1044 1120
1045 1121 /*
1046 1122 * If the fromsnap is in a different filesystem, then
1047 1123 * mark the send stream as a clone.
1048 1124 */
1049 1125 if (strncmp(tosnap, fromsnap, fsnamelen) != 0 ||
1050 1126 (fromsnap[fsnamelen] != '@' &&
1051 1127 fromsnap[fsnamelen] != '#')) {
1052 1128 is_clone = B_TRUE;
1053 1129 }
1054 1130
1055 1131 if (strchr(fromsnap, '@')) {
1056 1132 dsl_dataset_t *fromds;
1057 1133 err = dsl_dataset_hold(dp, fromsnap, FTAG, &fromds);
1058 1134 if (err == 0) {
1059 1135 if (!dsl_dataset_is_before(ds, fromds, 0))
1060 1136 err = SET_ERROR(EXDEV);
1061 1137 zb.zbm_creation_time =
1062 1138 dsl_dataset_phys(fromds)->ds_creation_time;
1063 1139 zb.zbm_creation_txg =
1064 1140 dsl_dataset_phys(fromds)->ds_creation_txg;
1065 1141 zb.zbm_guid = dsl_dataset_phys(fromds)->ds_guid;
1066 1142 is_clone = (ds->ds_dir != fromds->ds_dir);
1067 1143 dsl_dataset_rele(fromds, FTAG);
1068 1144 }
1069 1145 } else {
1070 1146 err = dsl_bookmark_lookup(dp, fromsnap, ds, &zb);
1071 1147 }
1072 1148 if (err != 0) {
1073 1149 dsl_dataset_rele(ds, FTAG);
1074 1150 dsl_pool_rele(dp, FTAG);
1075 1151 return (err);
1076 1152 }
1077 1153 err = dmu_send_impl(FTAG, dp, ds, &zb, is_clone,
1078 - embedok, large_block_ok, compressok,
1079 - outfd, resumeobj, resumeoff, vp, off);
1154 + embedok, large_block_ok, compressok, outfd,
1155 + resumeobj, resumeoff, vp, off, NULL);
1080 1156 } else {
1081 1157 err = dmu_send_impl(FTAG, dp, ds, NULL, B_FALSE,
1082 - embedok, large_block_ok, compressok,
1083 - outfd, resumeobj, resumeoff, vp, off);
1158 + embedok, large_block_ok, compressok, outfd,
1159 + resumeobj, resumeoff, vp, off, NULL);
1084 1160 }
1085 1161 if (owned)
1086 1162 dsl_dataset_disown(ds, FTAG);
1087 1163 else
1088 1164 dsl_dataset_rele(ds, FTAG);
1089 1165 return (err);
1090 1166 }
1091 1167
1092 1168 static int
1093 1169 dmu_adjust_send_estimate_for_indirects(dsl_dataset_t *ds, uint64_t uncompressed,
1094 1170 uint64_t compressed, boolean_t stream_compressed, uint64_t *sizep)
1095 1171 {
1096 1172 int err;
1097 1173 uint64_t size;
1098 1174 /*
1099 1175 * Assume that space (both on-disk and in-stream) is dominated by
1100 1176 * data. We will adjust for indirect blocks and the copies property,
1101 1177 * but ignore per-object space used (e.g., dnodes and DRR_OBJECT records).
1102 1178 */
1103 1179 uint64_t recordsize;
1104 1180 uint64_t record_count;
1105 1181 objset_t *os;
1106 1182 VERIFY0(dmu_objset_from_ds(ds, &os));
1107 1183
1108 1184 /* Assume all (uncompressed) blocks are recordsize. */
1109 1185 if (os->os_phys->os_type == DMU_OST_ZVOL) {
1110 1186 err = dsl_prop_get_int_ds(ds,
1111 1187 zfs_prop_to_name(ZFS_PROP_VOLBLOCKSIZE), &recordsize);
1112 1188 } else {
1113 1189 err = dsl_prop_get_int_ds(ds,
1114 1190 zfs_prop_to_name(ZFS_PROP_RECORDSIZE), &recordsize);
1115 1191 }
1116 1192 if (err != 0)
1117 1193 return (err);
1118 1194 record_count = uncompressed / recordsize;
1119 1195
1120 1196 /*
1121 1197 * If we're estimating a send size for a compressed stream, use the
1122 1198 * compressed data size to estimate the stream size. Otherwise, use the
1123 1199 * uncompressed data size.
1124 1200 */
1125 1201 size = stream_compressed ? compressed : uncompressed;
1126 1202
1127 1203 /*
1128 1204 * Subtract out approximate space used by indirect blocks.
1129 1205 * Assume most space is used by data blocks (non-indirect, non-dnode).
1130 1206 * Assume no ditto blocks or internal fragmentation.
1131 1207 *
1132 1208 * Therefore, space used by indirect blocks is sizeof(blkptr_t) per
1133 1209 * block.
1134 1210 */
1135 1211 size -= record_count * sizeof (blkptr_t);
1136 1212
1137 1213 /* Add in the space for the record associated with each block. */
1138 1214 size += record_count * sizeof (dmu_replay_record_t);
1139 1215
1140 1216 *sizep = size;
1141 1217
1142 1218 return (0);
1143 1219 }
1144 1220
1145 1221 int
1146 1222 dmu_send_estimate(dsl_dataset_t *ds, dsl_dataset_t *fromds,
1147 1223 boolean_t stream_compressed, uint64_t *sizep)
1148 1224 {
1149 1225 dsl_pool_t *dp = ds->ds_dir->dd_pool;
1150 1226 int err;
1151 1227 uint64_t uncomp, comp;
1152 1228
1153 1229 ASSERT(dsl_pool_config_held(dp));
1154 1230
1155 1231 /* tosnap must be a snapshot */
1156 1232 if (!ds->ds_is_snapshot)
1157 1233 return (SET_ERROR(EINVAL));
1158 1234
1159 1235 /* fromsnap, if provided, must be a snapshot */
1160 1236 if (fromds != NULL && !fromds->ds_is_snapshot)
1161 1237 return (SET_ERROR(EINVAL));
1162 1238
1163 1239 /*
1164 1240 * fromsnap must be an earlier snapshot from the same fs as tosnap,
1165 1241 * or the origin's fs.
1166 1242 */
1167 1243 if (fromds != NULL && !dsl_dataset_is_before(ds, fromds, 0))
1168 1244 return (SET_ERROR(EXDEV));
1169 1245
1170 1246 /* Get compressed and uncompressed size estimates of changed data. */
1171 1247 if (fromds == NULL) {
1172 1248 uncomp = dsl_dataset_phys(ds)->ds_uncompressed_bytes;
1173 1249 comp = dsl_dataset_phys(ds)->ds_compressed_bytes;
1174 1250 } else {
1175 1251 uint64_t used;
1176 1252 err = dsl_dataset_space_written(fromds, ds,
1177 1253 &used, &comp, &uncomp);
1178 1254 if (err != 0)
1179 1255 return (err);
1180 1256 }
1181 1257
1182 1258 err = dmu_adjust_send_estimate_for_indirects(ds, uncomp, comp,
1183 1259 stream_compressed, sizep);
1184 1260 /*
1185 1261 * Add the size of the BEGIN and END records to the estimate.
1186 1262 */
1187 1263 *sizep += 2 * sizeof (dmu_replay_record_t);
1188 1264 return (err);
1189 1265 }
1190 1266
1191 1267 struct calculate_send_arg {
1192 1268 uint64_t uncompressed;
1193 1269 uint64_t compressed;
1194 1270 };
1195 1271
1196 1272 /*
1197 1273 * Simple callback used to traverse the blocks of a snapshot and sum their
1198 1274 * uncompressed and compressed sizes.
1199 1275 */
1200 1276 /* ARGSUSED */
1201 1277 static int
1202 1278 dmu_calculate_send_traversal(spa_t *spa, zilog_t *zilog, const blkptr_t *bp,
1203 1279 const zbookmark_phys_t *zb, const dnode_phys_t *dnp, void *arg)
1204 1280 {
1205 1281 struct calculate_send_arg *space = arg;
1206 1282 if (bp != NULL && !BP_IS_HOLE(bp)) {
1207 1283 space->uncompressed += BP_GET_UCSIZE(bp);
1208 1284 space->compressed += BP_GET_PSIZE(bp);
1209 1285 }
1210 1286 return (0);
1211 1287 }
1212 1288
1213 1289 /*
1213 1289 * Given a destination snapshot and a TXG, calculate the approximate size of a
1215 1291 * send stream sent from that TXG. from_txg may be zero, indicating that the
1216 1292 * whole snapshot will be sent.
1217 1293 */
1218 1294 int
1219 1295 dmu_send_estimate_from_txg(dsl_dataset_t *ds, uint64_t from_txg,
1220 1296 boolean_t stream_compressed, uint64_t *sizep)
1221 1297 {
1222 1298 dsl_pool_t *dp = ds->ds_dir->dd_pool;
1223 1299 int err;
1224 1300 struct calculate_send_arg size = { 0 };
1225 1301
1226 1302 ASSERT(dsl_pool_config_held(dp));
1227 1303
1228 1304 /* tosnap must be a snapshot */
1229 1305 if (!ds->ds_is_snapshot)
1230 1306 return (SET_ERROR(EINVAL));
1231 1307
1232 1308 /* verify that from_txg is before the provided snapshot was taken */
1233 1309 if (from_txg >= dsl_dataset_phys(ds)->ds_creation_txg) {
1234 1310 return (SET_ERROR(EXDEV));
1235 1311 }
1236 1312
1237 1313 /*
1238 1314 * traverse the blocks of the snapshot with birth times after
1239 1315 * from_txg, summing their uncompressed and compressed sizes
1240 1316 */
1241 1317 err = traverse_dataset(ds, from_txg, TRAVERSE_POST,
1242 1318 dmu_calculate_send_traversal, &size);
1243 1319 if (err)
1244 1320 return (err);
1245 1321
1246 1322 err = dmu_adjust_send_estimate_for_indirects(ds, size.uncompressed,
1247 1323 size.compressed, stream_compressed, sizep);
1248 1324 return (err);
1249 1325 }
1250 1326
1251 1327 typedef struct dmu_recv_begin_arg {
1252 1328 const char *drba_origin;
1253 1329 dmu_recv_cookie_t *drba_cookie;
1254 1330 cred_t *drba_cred;
1255 1331 uint64_t drba_snapobj;
1256 1332 } dmu_recv_begin_arg_t;
1257 1333
1258 1334 static int
1259 1335 recv_begin_check_existing_impl(dmu_recv_begin_arg_t *drba, dsl_dataset_t *ds,
1260 - uint64_t fromguid)
1336 + uint64_t fromguid, dmu_tx_t *tx)
1261 1337 {
1262 1338 uint64_t val;
1263 1339 int error;
1264 1340 dsl_pool_t *dp = ds->ds_dir->dd_pool;
1265 1341
1266 - /* temporary clone name must not exist */
1267 - error = zap_lookup(dp->dp_meta_objset,
1268 - dsl_dir_phys(ds->ds_dir)->dd_child_dir_zapobj, recv_clone_name,
1269 - 8, 1, &val);
1270 - if (error != ENOENT)
1271 - return (error == 0 ? EBUSY : error);
1342 + if (dmu_tx_is_syncing(tx)) {
1343 + /* temporary clone name must not exist */
1344 + error = zap_lookup(dp->dp_meta_objset,
1345 + dsl_dir_phys(ds->ds_dir)->dd_child_dir_zapobj,
1346 + recv_clone_name, 8, 1, &val);
1347 + if (error == 0) {
1348 + dsl_dataset_t *tds;
1272 1349
1350 + /* check whether it is currently in use */
1351 + error = dsl_dataset_own_obj(dp, val, FTAG, &tds);
1352 + if (!error) {
1353 + char name[ZFS_MAX_DATASET_NAME_LEN];
1354 +
1355 + dsl_dataset_name(tds, name);
1356 + dsl_dataset_disown(tds, FTAG);
1357 +
1358 + error = dsl_dataset_hold(dp, name, FTAG, &tds);
1359 + if (!error) {
1360 + dsl_destroy_head_sync_impl(tds, tx);
1361 + dsl_dataset_rele(tds, FTAG);
1362 + error = ENOENT;
1363 + }
1364 + } else {
1365 + error = 0;
1366 + }
1367 + }
1368 + if (error != ENOENT) {
1369 + return (error == 0 ?
1370 + SET_ERROR(EBUSY) : SET_ERROR(error));
1371 + }
1372 + }
1373 +
1273 1374 /* new snapshot name must not exist */
1274 1375 error = zap_lookup(dp->dp_meta_objset,
1275 1376 dsl_dataset_phys(ds)->ds_snapnames_zapobj,
1276 1377 drba->drba_cookie->drc_tosnap, 8, 1, &val);
1277 1378 if (error != ENOENT)
1278 - return (error == 0 ? EEXIST : error);
1379 + return (error == 0 ? SET_ERROR(EEXIST) : SET_ERROR(error));
1279 1380
1280 1381 /*
1281 1382 * Check snapshot limit before receiving. We'll recheck again at the
1282 1383 * end, but might as well abort before receiving if we're already over
1283 1384 * the limit.
1284 1385 *
1285 1386 * Note that we do not check the file system limit with
1286 1387 * dsl_dir_fscount_check because the temporary %clones don't count
1287 1388 * against that limit.
1288 1389 */
1289 1390 error = dsl_fs_ss_limit_check(ds->ds_dir, 1, ZFS_PROP_SNAPSHOT_LIMIT,
1290 1391 NULL, drba->drba_cred);
1291 1392 if (error != 0)
1292 1393 return (error);
1293 1394
1294 1395 if (fromguid != 0) {
1295 1396 dsl_dataset_t *snap;
1296 1397 uint64_t obj = dsl_dataset_phys(ds)->ds_prev_snap_obj;
1297 1398
1298 1399 /* Find snapshot in this dir that matches fromguid. */
1299 1400 while (obj != 0) {
1300 1401 error = dsl_dataset_hold_obj(dp, obj, FTAG,
1301 1402 &snap);
1302 1403 if (error != 0)
1303 1404 return (SET_ERROR(ENODEV));
1304 1405 if (snap->ds_dir != ds->ds_dir) {
1305 1406 dsl_dataset_rele(snap, FTAG);
1306 1407 return (SET_ERROR(ENODEV));
1307 1408 }
1308 1409 if (dsl_dataset_phys(snap)->ds_guid == fromguid)
1309 1410 break;
1310 1411 obj = dsl_dataset_phys(snap)->ds_prev_snap_obj;
1311 1412 dsl_dataset_rele(snap, FTAG);
1312 1413 }
1313 1414 if (obj == 0)
1314 1415 return (SET_ERROR(ENODEV));
1315 1416
1316 1417 if (drba->drba_cookie->drc_force) {
1317 1418 drba->drba_snapobj = obj;
1318 1419 } else {
1319 1420 /*
1320 1421 * If we are not forcing, there must be no
1321 1422 * changes since fromsnap.
1322 1423 */
1323 1424 if (dsl_dataset_modified_since_snap(ds, snap)) {
1324 1425 dsl_dataset_rele(snap, FTAG);
1325 1426 return (SET_ERROR(ETXTBSY));
1326 1427 }
1327 1428 drba->drba_snapobj = ds->ds_prev->ds_object;
1328 1429 }
1329 1430
1330 1431 dsl_dataset_rele(snap, FTAG);
1331 1432 } else {
1332 1433 /* if full, then must be forced */
1333 1434 if (!drba->drba_cookie->drc_force)
1334 1435 return (SET_ERROR(EEXIST));
1335 1436 /* start from $ORIGIN@$ORIGIN, if supported */
1336 1437 drba->drba_snapobj = dp->dp_origin_snap != NULL ?
1337 1438 dp->dp_origin_snap->ds_object : 0;
1338 1439 }
1339 1440
1340 1441 return (0);
1341 1442
1342 1443 }
1343 1444
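The syncing-context branch added above is effectively a three-way probe: if the %recv clone does not exist, proceed; if it exists but nothing owns it, it is a stale leftover, so destroy it in this tx and report the name free; if it exists and is owned, an active receive is using it, so fail with EBUSY. A hedged user-space sketch of that decision flow, where clone_exists and clone_owned are hypothetical stand-ins for the zap_lookup() and dsl_dataset_own_obj() probes:

#include <errno.h>
#include <stdio.h>

static int clone_exists = 1;	/* zap_lookup() found %recv */
static int clone_owned = 0;	/* another receive holds it */

static int
cleanup_stale_recv_clone(void)
{
	int error = clone_exists ? 0 : ENOENT;

	if (error == 0) {
		if (!clone_owned) {
			/* Idle leftover: destroy it and report "gone". */
			printf("destroying stale %%recv clone\n");
			error = ENOENT;
		} else {
			/* A receive is in flight; leave error == 0. */
			error = 0;
		}
	}
	if (error != ENOENT)
		return (error == 0 ? EBUSY : error);
	return (0);	/* temporary clone name is free */
}

int
main(void)
{
	printf("check returned %d\n", cleanup_stale_recv_clone());
	return (0);
}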
1344 1445 static int
1345 1446 dmu_recv_begin_check(void *arg, dmu_tx_t *tx)
1346 1447 {
1347 1448 dmu_recv_begin_arg_t *drba = arg;
1348 1449 dsl_pool_t *dp = dmu_tx_pool(tx);
1349 1450 struct drr_begin *drrb = drba->drba_cookie->drc_drrb;
1350 1451 uint64_t fromguid = drrb->drr_fromguid;
1351 1452 int flags = drrb->drr_flags;
1352 1453 int error;
1353 1454 uint64_t featureflags = DMU_GET_FEATUREFLAGS(drrb->drr_versioninfo);
1354 1455 dsl_dataset_t *ds;
1355 1456 const char *tofs = drba->drba_cookie->drc_tofs;
1356 1457
1357 1458 /* already checked */
1358 1459 ASSERT3U(drrb->drr_magic, ==, DMU_BACKUP_MAGIC);
1359 1460 ASSERT(!(featureflags & DMU_BACKUP_FEATURE_RESUMING));
1360 1461
1361 1462 if (DMU_GET_STREAM_HDRTYPE(drrb->drr_versioninfo) ==
1362 1463 DMU_COMPOUNDSTREAM ||
1363 1464 drrb->drr_type >= DMU_OST_NUMTYPES ||
1364 1465 ((flags & DRR_FLAG_CLONE) && drba->drba_origin == NULL))
1365 1466 return (SET_ERROR(EINVAL));
1366 1467
1367 1468 /* Verify pool version supports SA if SA_SPILL feature set */
1368 1469 if ((featureflags & DMU_BACKUP_FEATURE_SA_SPILL) &&
1369 1470 spa_version(dp->dp_spa) < SPA_VERSION_SA)
1370 1471 return (SET_ERROR(ENOTSUP));
1371 1472
1372 1473 if (drba->drba_cookie->drc_resumable &&
1373 1474 !spa_feature_is_enabled(dp->dp_spa, SPA_FEATURE_EXTENSIBLE_DATASET))
1374 1475 return (SET_ERROR(ENOTSUP));
1375 1476
1376 1477 /*
1377 1478 * The receiving code doesn't know how to translate a WRITE_EMBEDDED
1378 1479 * record to a plain WRITE record, so the pool must have the
1379 1480 * EMBEDDED_DATA feature enabled if the stream has WRITE_EMBEDDED
1380 1481 * records. Same with WRITE_EMBEDDED records that use LZ4 compression.
1381 1482 */
1382 1483 if ((featureflags & DMU_BACKUP_FEATURE_EMBED_DATA) &&
1383 1484 !spa_feature_is_enabled(dp->dp_spa, SPA_FEATURE_EMBEDDED_DATA))
1384 1485 return (SET_ERROR(ENOTSUP));
1385 1486 if ((featureflags & DMU_BACKUP_FEATURE_LZ4) &&
1386 1487 !spa_feature_is_enabled(dp->dp_spa, SPA_FEATURE_LZ4_COMPRESS))
1387 1488 return (SET_ERROR(ENOTSUP));
1388 1489
1389 1490 /*
1390 1491 * The receiving code doesn't know how to translate large blocks
1391 1492 * to smaller ones, so the pool must have the LARGE_BLOCKS
1392 1493 * feature enabled if the stream has LARGE_BLOCKS.
1393 1494 */
1394 1495 if ((featureflags & DMU_BACKUP_FEATURE_LARGE_BLOCKS) &&
1395 1496 !spa_feature_is_enabled(dp->dp_spa, SPA_FEATURE_LARGE_BLOCKS))
1396 1497 return (SET_ERROR(ENOTSUP));
1397 1498
1398 1499 error = dsl_dataset_hold(dp, tofs, FTAG, &ds);
1399 1500 if (error == 0) {
1400 1501 /* target fs already exists; recv into temp clone */
1401 1502
1503 + if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_WBC)) {
1504 + objset_t *os = NULL;
1505 +
1506 + error = dmu_objset_from_ds(ds, &os);
1507 + if (error) {
1508 + dsl_dataset_rele(ds, FTAG);
1509 + return (error);
1510 + }
1511 +
1512 + /* Recv into a dataset that uses WBC is not supported */
1513 + if (os->os_wbc_mode != ZFS_WBC_MODE_OFF) {
1514 + dsl_dataset_rele(ds, FTAG);
1515 + return (SET_ERROR(EKZFS_WBCNOTSUP));
1516 + }
1517 + }
1518 +
1402 1519 /* Can't recv a clone into an existing fs */
1403 1520 if (flags & DRR_FLAG_CLONE || drba->drba_origin) {
1404 1521 dsl_dataset_rele(ds, FTAG);
1405 1522 return (SET_ERROR(EINVAL));
1406 1523 }
1407 1524
1408 - error = recv_begin_check_existing_impl(drba, ds, fromguid);
1525 + error = recv_begin_check_existing_impl(drba, ds, fromguid, tx);
1409 1526 dsl_dataset_rele(ds, FTAG);
1410 1527 } else if (error == ENOENT) {
1411 1528 /* target fs does not exist; must be a full backup or clone */
1412 1529 char buf[ZFS_MAX_DATASET_NAME_LEN];
1413 1530
1414 1531 /*
1415 1532 * If it's a non-clone incremental, we are missing the
1416 1533 * target fs, so fail the recv.
1417 1534 */
1418 1535 if (fromguid != 0 && !(flags & DRR_FLAG_CLONE ||
1419 1536 drba->drba_origin))
1420 1537 return (SET_ERROR(ENOENT));
1421 1538
1422 1539 /*
1423 1540 * If we're receiving a full send as a clone, and it doesn't
1424 1541 * contain all the necessary free records and freeobject
1425 1542 * records, reject it.
1426 1543 */
1427 1544 if (fromguid == 0 && drba->drba_origin &&
1428 1545 !(flags & DRR_FLAG_FREERECORDS))
1429 1546 return (SET_ERROR(EINVAL));
1430 1547
1431 1548 /* Open the parent of tofs */
1432 1549 ASSERT3U(strlen(tofs), <, sizeof (buf));
1433 1550 (void) strlcpy(buf, tofs, strrchr(tofs, '/') - tofs + 1);
1434 1551 error = dsl_dataset_hold(dp, buf, FTAG, &ds);
1435 1552 if (error != 0)
1436 1553 return (error);
1437 1554
1555 + if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_WBC)) {
1556 + objset_t *os = NULL;
1557 +
1558 + error = dmu_objset_from_ds(ds, &os);
1559 + if (error) {
1560 + dsl_dataset_rele(ds, FTAG);
1561 + return (error);
1562 + }
1563 +
1564 + /* Recv into a dataset that uses WBC is not supported */
1565 + if (os->os_wbc_mode != ZFS_WBC_MODE_OFF) {
1566 + dsl_dataset_rele(ds, FTAG);
1567 + return (SET_ERROR(EKZFS_WBCNOTSUP));
1568 + }
1569 + }
1570 +
1438 1571 /*
1439 1572 * Check filesystem and snapshot limits before receiving. We'll
1440 1573 * recheck snapshot limits again at the end (we create the
1441 1574 * filesystems and increment those counts during begin_sync).
1442 1575 */
1443 1576 error = dsl_fs_ss_limit_check(ds->ds_dir, 1,
1444 1577 ZFS_PROP_FILESYSTEM_LIMIT, NULL, drba->drba_cred);
1445 1578 if (error != 0) {
1446 1579 dsl_dataset_rele(ds, FTAG);
1447 1580 return (error);
1448 1581 }
1449 1582
1450 1583 error = dsl_fs_ss_limit_check(ds->ds_dir, 1,
1451 1584 ZFS_PROP_SNAPSHOT_LIMIT, NULL, drba->drba_cred);
1452 1585 if (error != 0) {
1453 1586 dsl_dataset_rele(ds, FTAG);
1454 1587 return (error);
1455 1588 }
1456 1589
1457 1590 if (drba->drba_origin != NULL) {
1458 1591 dsl_dataset_t *origin;
1459 1592 error = dsl_dataset_hold(dp, drba->drba_origin,
1460 1593 FTAG, &origin);
1461 1594 if (error != 0) {
1462 1595 dsl_dataset_rele(ds, FTAG);
1463 1596 return (error);
1464 1597 }
1465 1598 if (!origin->ds_is_snapshot) {
1466 1599 dsl_dataset_rele(origin, FTAG);
1467 1600 dsl_dataset_rele(ds, FTAG);
1468 1601 return (SET_ERROR(EINVAL));
1469 1602 }
1470 1603 if (dsl_dataset_phys(origin)->ds_guid != fromguid &&
1471 1604 fromguid != 0) {
1472 1605 dsl_dataset_rele(origin, FTAG);
1473 1606 dsl_dataset_rele(ds, FTAG);
1474 1607 return (SET_ERROR(ENODEV));
1475 1608 }
1476 1609 dsl_dataset_rele(origin, FTAG);
1477 1610 }
1478 1611 dsl_dataset_rele(ds, FTAG);
1479 1612 error = 0;
1480 1613 }
1481 1614 return (error);
1482 1615 }
1483 1616
1484 1617 static void
1485 1618 dmu_recv_begin_sync(void *arg, dmu_tx_t *tx)
1486 1619 {
1487 1620 dmu_recv_begin_arg_t *drba = arg;
1488 1621 dsl_pool_t *dp = dmu_tx_pool(tx);
1489 1622 objset_t *mos = dp->dp_meta_objset;
1490 1623 struct drr_begin *drrb = drba->drba_cookie->drc_drrb;
1491 1624 const char *tofs = drba->drba_cookie->drc_tofs;
1492 1625 dsl_dataset_t *ds, *newds;
1493 1626 uint64_t dsobj;
1494 1627 int error;
1495 1628 uint64_t crflags = 0;
1496 1629
1497 1630 if (drrb->drr_flags & DRR_FLAG_CI_DATA)
1498 1631 crflags |= DS_FLAG_CI_DATASET;
1499 1632
1500 1633 error = dsl_dataset_hold(dp, tofs, FTAG, &ds);
1501 1634 if (error == 0) {
1502 1635 /* create temporary clone */
1503 1636 dsl_dataset_t *snap = NULL;
1504 1637 if (drba->drba_snapobj != 0) {
1505 1638 VERIFY0(dsl_dataset_hold_obj(dp,
1506 1639 drba->drba_snapobj, FTAG, &snap));
1507 1640 }
1508 1641 dsobj = dsl_dataset_create_sync(ds->ds_dir, recv_clone_name,
1509 1642 snap, crflags, drba->drba_cred, tx);
1510 1643 if (drba->drba_snapobj != 0)
1511 1644 dsl_dataset_rele(snap, FTAG);
1512 1645 dsl_dataset_rele(ds, FTAG);
1513 1646 } else {
1514 1647 dsl_dir_t *dd;
1515 1648 const char *tail;
1516 1649 dsl_dataset_t *origin = NULL;
1517 1650
1518 1651 VERIFY0(dsl_dir_hold(dp, tofs, FTAG, &dd, &tail));
1519 1652
1520 1653 if (drba->drba_origin != NULL) {
1521 1654 VERIFY0(dsl_dataset_hold(dp, drba->drba_origin,
1522 1655 FTAG, &origin));
1523 1656 }
1524 1657
1525 1658 /* Create new dataset. */
1526 1659 dsobj = dsl_dataset_create_sync(dd,
1527 1660 strrchr(tofs, '/') + 1,
1528 1661 origin, crflags, drba->drba_cred, tx);
1529 1662 if (origin != NULL)
1530 1663 dsl_dataset_rele(origin, FTAG);
1531 1664 dsl_dir_rele(dd, FTAG);
1532 1665 drba->drba_cookie->drc_newfs = B_TRUE;
1533 1666 }
1534 1667 VERIFY0(dsl_dataset_own_obj(dp, dsobj, dmu_recv_tag, &newds));
1535 1668
1536 1669 if (drba->drba_cookie->drc_resumable) {
1537 1670 dsl_dataset_zapify(newds, tx);
1538 1671 if (drrb->drr_fromguid != 0) {
1539 1672 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_FROMGUID,
1540 1673 8, 1, &drrb->drr_fromguid, tx));
1541 1674 }
1542 1675 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_TOGUID,
1543 1676 8, 1, &drrb->drr_toguid, tx));
1544 1677 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_TONAME,
1545 1678 1, strlen(drrb->drr_toname) + 1, drrb->drr_toname, tx));
1546 1679 uint64_t one = 1;
1547 1680 uint64_t zero = 0;
1548 1681 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_OBJECT,
1549 1682 8, 1, &one, tx));
1550 1683 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_OFFSET,
1551 1684 8, 1, &zero, tx));
1552 1685 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_BYTES,
1553 1686 8, 1, &zero, tx));
1554 1687 if (DMU_GET_FEATUREFLAGS(drrb->drr_versioninfo) &
1555 1688 DMU_BACKUP_FEATURE_LARGE_BLOCKS) {
1556 1689 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_LARGEBLOCK,
1557 1690 8, 1, &one, tx));
1558 1691 }
1559 1692 if (DMU_GET_FEATUREFLAGS(drrb->drr_versioninfo) &
1560 1693 DMU_BACKUP_FEATURE_EMBED_DATA) {
1561 1694 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_EMBEDOK,
1562 1695 8, 1, &one, tx));
1563 1696 }
1564 1697 if (DMU_GET_FEATUREFLAGS(drrb->drr_versioninfo) &
1565 1698 DMU_BACKUP_FEATURE_COMPRESSED) {
1566 1699 VERIFY0(zap_add(mos, dsobj, DS_FIELD_RESUME_COMPRESSOK,
1567 1700 8, 1, &one, tx));
1568 1701 }
1569 1702 }
1570 1703
1571 1704 dmu_buf_will_dirty(newds->ds_dbuf, tx);
1572 1705 dsl_dataset_phys(newds)->ds_flags |= DS_FLAG_INCONSISTENT;
1573 1706
1574 1707 /*
1575 1708 * If we actually created a non-clone, we need to create the
1576 1709 * objset in our new dataset.
1577 1710 */
1578 1711 rrw_enter(&newds->ds_bp_rwlock, RW_READER, FTAG);
1579 1712 if (BP_IS_HOLE(dsl_dataset_get_blkptr(newds))) {
1580 1713 (void) dmu_objset_create_impl(dp->dp_spa,
1581 1714 newds, dsl_dataset_get_blkptr(newds), drrb->drr_type, tx);
1582 1715 }
1583 1716 rrw_exit(&newds->ds_bp_rwlock, FTAG);
1584 1717
1585 1718 drba->drba_cookie->drc_ds = newds;
1586 1719
1587 1720 spa_history_log_internal_ds(newds, "receive", tx, "");
1588 1721 }
1589 1722
1590 1723 static int
1591 1724 dmu_recv_resume_begin_check(void *arg, dmu_tx_t *tx)
1592 1725 {
1593 1726 dmu_recv_begin_arg_t *drba = arg;
1594 1727 dsl_pool_t *dp = dmu_tx_pool(tx);
1595 1728 struct drr_begin *drrb = drba->drba_cookie->drc_drrb;
1596 1729 int error;
1597 1730 uint64_t featureflags = DMU_GET_FEATUREFLAGS(drrb->drr_versioninfo);
1598 1731 dsl_dataset_t *ds;
1599 1732 const char *tofs = drba->drba_cookie->drc_tofs;
1600 1733
1601 1734 /* already checked */
1602 1735 ASSERT3U(drrb->drr_magic, ==, DMU_BACKUP_MAGIC);
1603 1736 ASSERT(featureflags & DMU_BACKUP_FEATURE_RESUMING);
1604 1737
1605 1738 if (DMU_GET_STREAM_HDRTYPE(drrb->drr_versioninfo) ==
1606 1739 DMU_COMPOUNDSTREAM ||
1607 1740 drrb->drr_type >= DMU_OST_NUMTYPES)
1608 1741 return (SET_ERROR(EINVAL));
1609 1742
1610 1743 /* Verify pool version supports SA if SA_SPILL feature set */
1611 1744 if ((featureflags & DMU_BACKUP_FEATURE_SA_SPILL) &&
1612 1745 spa_version(dp->dp_spa) < SPA_VERSION_SA)
1613 1746 return (SET_ERROR(ENOTSUP));
1614 1747
1615 1748 /*
1616 1749 * The receiving code doesn't know how to translate a WRITE_EMBEDDED
1617 1750 * record to a plain WRITE record, so the pool must have the
1618 1751 * EMBEDDED_DATA feature enabled if the stream has WRITE_EMBEDDED
1619 1752 * records. Same with WRITE_EMBEDDED records that use LZ4 compression.
1620 1753 */
1621 1754 if ((featureflags & DMU_BACKUP_FEATURE_EMBED_DATA) &&
1622 1755 !spa_feature_is_enabled(dp->dp_spa, SPA_FEATURE_EMBEDDED_DATA))
1623 1756 return (SET_ERROR(ENOTSUP));
1624 1757 if ((featureflags & DMU_BACKUP_FEATURE_LZ4) &&
1625 1758 !spa_feature_is_enabled(dp->dp_spa, SPA_FEATURE_LZ4_COMPRESS))
1626 1759 return (SET_ERROR(ENOTSUP));
1627 1760
1628 1761 /* 6 extra bytes for /%recv */
1629 1762 char recvname[ZFS_MAX_DATASET_NAME_LEN + 6];
1630 1763
1631 1764 (void) snprintf(recvname, sizeof (recvname), "%s/%s",
1632 1765 tofs, recv_clone_name);
1633 1766
1634 1767 if (dsl_dataset_hold(dp, recvname, FTAG, &ds) != 0) {
1635 1768 /* %recv does not exist; continue in tofs */
1636 1769 error = dsl_dataset_hold(dp, tofs, FTAG, &ds);
1637 1770 if (error != 0)
1638 1771 return (error);
1639 1772 }
1640 1773
1641 1774 /* check that ds is marked inconsistent */
1642 1775 if (!DS_IS_INCONSISTENT(ds)) {
1643 1776 dsl_dataset_rele(ds, FTAG);
1644 1777 return (SET_ERROR(EINVAL));
1645 1778 }
1646 1779
1647 1780 /* check that there is resuming data, and that the toguid matches */
1648 1781 if (!dsl_dataset_is_zapified(ds)) {
1649 1782 dsl_dataset_rele(ds, FTAG);
1650 1783 return (SET_ERROR(EINVAL));
1651 1784 }
1652 - uint64_t val;
1785 + uint64_t val = 0;
1653 1786 error = zap_lookup(dp->dp_meta_objset, ds->ds_object,
1654 1787 DS_FIELD_RESUME_TOGUID, sizeof (val), 1, &val);
1655 1788 if (error != 0 || drrb->drr_toguid != val) {
1656 1789 dsl_dataset_rele(ds, FTAG);
1657 1790 return (SET_ERROR(EINVAL));
1658 1791 }
1659 1792
1660 1793 /*
1661 1794 * Check if the receive is still running. If so, it will be owned.
1662 1795 * Note that nothing else can own the dataset (e.g. after the receive
1663 1796 * fails) because it will be marked inconsistent.
1664 1797 */
1665 1798 if (dsl_dataset_has_owner(ds)) {
1666 1799 dsl_dataset_rele(ds, FTAG);
1667 1800 return (SET_ERROR(EBUSY));
1668 1801 }
1669 1802
1670 1803 /* There should not be any snapshots of this fs yet. */
1671 1804 if (ds->ds_prev != NULL && ds->ds_prev->ds_dir == ds->ds_dir) {
1672 1805 dsl_dataset_rele(ds, FTAG);
1673 1806 return (SET_ERROR(EINVAL));
1674 1807 }
1675 1808
1676 1809 /*
1677 1810 * Note: resume point will be checked when we process the first WRITE
1678 1811 * record.
1679 1812 */
1680 1813
1681 1814 /* check that the origin matches */
1682 1815 val = 0;
1683 1816 (void) zap_lookup(dp->dp_meta_objset, ds->ds_object,
1684 1817 DS_FIELD_RESUME_FROMGUID, sizeof (val), 1, &val);
1685 1818 if (drrb->drr_fromguid != val) {
1686 1819 dsl_dataset_rele(ds, FTAG);
1687 1820 return (SET_ERROR(EINVAL));
1688 1821 }
1689 1822
1690 1823 dsl_dataset_rele(ds, FTAG);
1691 1824 return (0);
1692 1825 }
1693 1826
1694 1827 static void
1695 1828 dmu_recv_resume_begin_sync(void *arg, dmu_tx_t *tx)
1696 1829 {
1697 1830 dmu_recv_begin_arg_t *drba = arg;
1698 1831 dsl_pool_t *dp = dmu_tx_pool(tx);
1699 1832 const char *tofs = drba->drba_cookie->drc_tofs;
1700 1833 dsl_dataset_t *ds;
1701 1834 uint64_t dsobj;
1702 1835 /* 6 extra bytes for /%recv */
1703 1836 char recvname[ZFS_MAX_DATASET_NAME_LEN + 6];
1704 1837
1705 1838 (void) snprintf(recvname, sizeof (recvname), "%s/%s",
1706 1839 tofs, recv_clone_name);
1707 1840
1708 1841 if (dsl_dataset_hold(dp, recvname, FTAG, &ds) != 0) {
1709 1842 /* %recv does not exist; continue in tofs */
1710 1843 VERIFY0(dsl_dataset_hold(dp, tofs, FTAG, &ds));
1711 1844 drba->drba_cookie->drc_newfs = B_TRUE;
1712 1845 }
1713 1846
1714 1847 /* clear the inconsistent flag so that we can own it */
1715 1848 ASSERT(DS_IS_INCONSISTENT(ds));
1716 1849 dmu_buf_will_dirty(ds->ds_dbuf, tx);
1717 1850 dsl_dataset_phys(ds)->ds_flags &= ~DS_FLAG_INCONSISTENT;
1718 1851 dsobj = ds->ds_object;
1719 1852 dsl_dataset_rele(ds, FTAG);
1720 1853
1721 1854 VERIFY0(dsl_dataset_own_obj(dp, dsobj, dmu_recv_tag, &ds));
1722 1855
1723 1856 dmu_buf_will_dirty(ds->ds_dbuf, tx);
1724 1857 dsl_dataset_phys(ds)->ds_flags |= DS_FLAG_INCONSISTENT;
1725 1858
1726 1859 rrw_enter(&ds->ds_bp_rwlock, RW_READER, FTAG);
1727 1860 ASSERT(!BP_IS_HOLE(dsl_dataset_get_blkptr(ds)));
1728 1861 rrw_exit(&ds->ds_bp_rwlock, FTAG);
1729 1862
1730 1863 drba->drba_cookie->drc_ds = ds;
1731 1864
1732 1865 spa_history_log_internal_ds(ds, "resume receive", tx, "");
1733 1866 }
1734 1867
1735 1868 /*
1736 1869 * NB: callers *MUST* call dmu_recv_stream() if dmu_recv_begin()
1737 1870 * succeeds; otherwise we will leak the holds on the datasets.
1738 1871 */
1739 1872 int
1740 1873 dmu_recv_begin(char *tofs, char *tosnap, dmu_replay_record_t *drr_begin,
1741 - boolean_t force, boolean_t resumable, char *origin, dmu_recv_cookie_t *drc)
1874 + boolean_t force, boolean_t resumable, boolean_t force_cksum,
1875 + char *origin, dmu_recv_cookie_t *drc)
1742 1876 {
1743 1877 dmu_recv_begin_arg_t drba = { 0 };
1744 1878
1745 1879 bzero(drc, sizeof (dmu_recv_cookie_t));
1746 1880 drc->drc_drr_begin = drr_begin;
1747 1881 drc->drc_drrb = &drr_begin->drr_u.drr_begin;
1748 1882 drc->drc_tosnap = tosnap;
1749 1883 drc->drc_tofs = tofs;
1750 1884 drc->drc_force = force;
1751 1885 drc->drc_resumable = resumable;
1752 1886 drc->drc_cred = CRED();
1753 1887
1754 1888 if (drc->drc_drrb->drr_magic == BSWAP_64(DMU_BACKUP_MAGIC)) {
1755 1889 drc->drc_byteswap = B_TRUE;
1756 - (void) fletcher_4_incremental_byteswap(drr_begin,
1757 - sizeof (dmu_replay_record_t), &drc->drc_cksum);
1758 - byteswap_record(drr_begin);
1890 +
1891 + /* on-wire checksum can be disabled for krrp */
1892 + if (force_cksum) {
1893 + (void) fletcher_4_incremental_byteswap(drr_begin,
1894 + sizeof (dmu_replay_record_t), &drc->drc_cksum);
1895 + byteswap_record(drr_begin);
1896 + }
1759 1897 } else if (drc->drc_drrb->drr_magic == DMU_BACKUP_MAGIC) {
1760 - (void) fletcher_4_incremental_native(drr_begin,
1761 - sizeof (dmu_replay_record_t), &drc->drc_cksum);
1898 + /* on-wire checksum can be disabled for krrp */
1899 + if (force_cksum) {
1900 + (void) fletcher_4_incremental_native(drr_begin,
1901 + sizeof (dmu_replay_record_t), &drc->drc_cksum);
1902 + }
1762 1903 } else {
1763 1904 return (SET_ERROR(EINVAL));
1764 1905 }
1765 1906
1766 1907 drba.drba_origin = origin;
1767 1908 drba.drba_cookie = drc;
1768 1909 drba.drba_cred = CRED();
1769 1910
1770 1911 if (DMU_GET_FEATUREFLAGS(drc->drc_drrb->drr_versioninfo) &
1771 1912 DMU_BACKUP_FEATURE_RESUMING) {
1772 1913 return (dsl_sync_task(tofs,
1773 1914 dmu_recv_resume_begin_check, dmu_recv_resume_begin_sync,
1774 1915 &drba, 5, ZFS_SPACE_CHECK_NORMAL));
1775 1916 } else {
1776 1917 return (dsl_sync_task(tofs,
1777 1918 dmu_recv_begin_check, dmu_recv_begin_sync,
1778 1919 &drba, 5, ZFS_SPACE_CHECK_NORMAL));
1779 1920 }
1780 1921 }
1781 1922
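dmu_recv_begin() deduces the stream's byte order from the BEGIN record's magic: a native stream carries DMU_BACKUP_MAGIC as-is, a stream produced on an opposite-endian host carries its byte-swapped image, and anything else is EINVAL. A minimal sketch of that detection; MAGIC is an illustrative placeholder for the real DMU_BACKUP_MAGIC constant from the ZFS headers, and bswap64() wraps the GCC/Clang builtin:

#include <stdint.h>
#include <stdio.h>

#define	MAGIC	0x0123456789abcdefULL	/* illustrative placeholder */

static uint64_t
bswap64(uint64_t v)
{
	return (__builtin_bswap64(v));
}

/* Returns 1 for byteswapped, 0 for native, -1 for a bad stream. */
static int
detect_byteswap(uint64_t magic)
{
	if (magic == MAGIC)
		return (0);
	if (magic == bswap64(MAGIC))
		return (1);
	return (-1);	/* EINVAL in the real code */
}

int
main(void)
{
	printf("%d %d %d\n", detect_byteswap(MAGIC),
	    detect_byteswap(bswap64(MAGIC)), detect_byteswap(0));
	return (0);
}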
1782 1923 struct receive_record_arg {
1783 1924 dmu_replay_record_t header;
1784 1925 void *payload; /* Pointer to a buffer containing the payload */
1785 1926 /*
1786 1927 * If the record is a write, pointer to the arc_buf_t containing the
1787 1928 * payload.
1788 1929 */
1789 1930 arc_buf_t *write_buf;
1790 1931 int payload_size;
1791 1932 uint64_t bytes_read; /* bytes read from stream when record created */
1792 1933 boolean_t eos_marker; /* Marks the end of the stream */
1793 1934 bqueue_node_t node;
1794 1935 };
1795 1936
1796 1937 struct receive_writer_arg {
1797 1938 objset_t *os;
1798 1939 boolean_t byteswap;
1799 1940 bqueue_t q;
1800 1941
1801 1942 /*
1802 1943 * These three args are used to signal to the main thread that we're
1803 1944 * done.
1804 1945 */
1805 1946 kmutex_t mutex;
1806 1947 kcondvar_t cv;
1807 1948 boolean_t done;
1808 1949
1809 1950 int err;
1810 1951 /* A map from guid to dataset to help handle dedup'd streams. */
1811 1952 avl_tree_t *guid_to_ds_map;
1812 1953 boolean_t resumable;
1813 1954 uint64_t last_object, last_offset;
1814 1955 uint64_t bytes_read; /* bytes read when current record created */
1815 1956 };
1816 1957
1817 1958 struct objlist {
1818 1959 list_t list; /* List of struct receive_objnode. */
1819 1960 /*
1820 1961 * Last object looked up. Used to assert that objects are being looked
1821 1962 * up in ascending order.
1822 1963 */
1823 1964 uint64_t last_lookup;
1824 1965 };
1825 1966
1826 1967 struct receive_objnode {
1827 1968 list_node_t node;
1828 1969 uint64_t object;
1829 1970 };
1830 1971
1831 1972 struct receive_arg {
1832 1973 objset_t *os;
1833 1974 vnode_t *vp; /* The vnode to read the stream from */
1834 1975 uint64_t voff; /* The current offset in the stream */
1835 1976 uint64_t bytes_read;
1836 1977 /*
1837 1978 * A record that has had its payload read in, but hasn't yet been handed
1838 1979 * off to the worker thread.
1839 1980 */
1840 1981 struct receive_record_arg *rrd;
1841 1982 /* A record that has had its header read in, but not its payload. */
1842 1983 struct receive_record_arg *next_rrd;
1843 1984 zio_cksum_t cksum;
1844 1985 zio_cksum_t prev_cksum;
1986 + dmu_krrp_task_t *krrp_task;
1845 1987 int err;
1846 1988 boolean_t byteswap;
1847 1989 /* Sorted list of objects not to issue prefetches for. */
1848 1990 struct objlist ignore_objlist;
1849 1991 };
1850 1992
1851 1993 typedef struct guid_map_entry {
1852 1994 uint64_t guid;
1853 1995 dsl_dataset_t *gme_ds;
1854 1996 avl_node_t avlnode;
1855 1997 } guid_map_entry_t;
1856 1998
1857 1999 static int
1858 2000 guid_compare(const void *arg1, const void *arg2)
1859 2001 {
1860 2002 const guid_map_entry_t *gmep1 = arg1;
1861 2003 const guid_map_entry_t *gmep2 = arg2;
1862 2004
1863 2005 if (gmep1->guid < gmep2->guid)
1864 2006 return (-1);
1865 2007 else if (gmep1->guid > gmep2->guid)
1866 2008 return (1);
1867 2009 return (0);
1868 2010 }
1869 2011
1870 2012 static void
1871 2013 free_guid_map_onexit(void *arg)
1872 2014 {
1873 2015 avl_tree_t *ca = arg;
1874 2016 void *cookie = NULL;
1875 2017 guid_map_entry_t *gmep;
1876 2018
1877 2019 while ((gmep = avl_destroy_nodes(ca, &cookie)) != NULL) {
1878 2020 dsl_dataset_long_rele(gmep->gme_ds, gmep);
1879 2021 dsl_dataset_rele(gmep->gme_ds, gmep);
1880 2022 kmem_free(gmep, sizeof (guid_map_entry_t));
1881 2023 }
1882 2024 avl_destroy(ca);
1883 2025 kmem_free(ca, sizeof (avl_tree_t));
1884 2026 }
1885 2027
1886 2028 static int
1887 2029 receive_read(struct receive_arg *ra, int len, void *buf)
1888 2030 {
1889 2031 int done = 0;
1890 2032
1891 2033 /*
1892 2034 * The code doesn't rely on this (lengths being multiples of 8). See
1893 2035 * comment in dump_bytes.
1894 2036 */
1895 2037 ASSERT0(len % 8);
1896 2038
1897 - while (done < len) {
1898 - ssize_t resid;
2039 + /*
2040 + * if vp is NULL, then the send is from krrp and we can try to bypass
2041 + * copying data to an intermediate buffer.
2042 + */
2043 + if (ra->vp != NULL) {
2044 + while (done < len) {
2045 + ssize_t resid = 0;
1899 2046
1900 - ra->err = vn_rdwr(UIO_READ, ra->vp,
1901 - (char *)buf + done, len - done,
1902 - ra->voff, UIO_SYSSPACE, FAPPEND,
1903 - RLIM64_INFINITY, CRED(), &resid);
1904 -
1905 - if (resid == len - done) {
1906 - /*
1907 - * Note: ECKSUM indicates that the receive
1908 - * was interrupted and can potentially be resumed.
1909 - */
1910 - ra->err = SET_ERROR(ECKSUM);
2047 + ra->err = vn_rdwr(UIO_READ, ra->vp,
2048 + (char *)buf + done, len - done,
2049 + ra->voff, UIO_SYSSPACE, FAPPEND,
2050 + RLIM64_INFINITY, CRED(), &resid);
2051 + if (resid == len - done) {
2052 + /*
2053 + * Note: ECKSUM indicates that the receive was
2054 + * interrupted and can potentially be resumed.
2055 + */
2056 + ra->err = SET_ERROR(ECKSUM);
2057 + }
2058 + ra->voff += len - done - resid;
2059 + done = len - resid;
2060 + if (ra->err != 0)
2061 + return (ra->err);
1911 2062 }
1912 - ra->voff += len - done - resid;
1913 - done = len - resid;
2063 + } else {
2064 + ASSERT(ra->krrp_task != NULL);
2065 + ra->err = dmu_krrp_buffer_read(buf, len, ra->krrp_task);
1914 2066 if (ra->err != 0)
1915 2067 return (ra->err);
2068 +
2069 + done = len;
1916 2070 }
1917 2071
1918 2072 ra->bytes_read += len;
1919 2073
1920 2074 ASSERT3U(done, ==, len);
1921 2075 return (0);
1922 2076 }
1923 2077
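vn_rdwr() may return short, so the loop above keeps issuing reads until len bytes have arrived, and treats a pass that makes no progress (resid == len - done) as a truncated stream; the kernel code maps that to ECKSUM, which marks the receive as potentially resumable. A user-space analog of the loop using read(2), with EIO standing in for ECKSUM:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static int
read_full(int fd, void *buf, size_t len)
{
	size_t done = 0;

	while (done < len) {
		ssize_t n = read(fd, (char *)buf + done, len - done);

		if (n < 0)
			return (errno);	/* real I/O error */
		if (n == 0) {
			/* No progress: the stream was interrupted. */
			return (EIO);	/* kernel code uses ECKSUM here */
		}
		done += n;
	}
	return (0);
}

int
main(void)
{
	char buf[16];

	printf("read_full: %d\n", read_full(0, buf, sizeof (buf)));
	return (0);
}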
1924 2078 static void
1925 2079 byteswap_record(dmu_replay_record_t *drr)
1926 2080 {
1927 2081 #define DO64(X) (drr->drr_u.X = BSWAP_64(drr->drr_u.X))
1928 2082 #define DO32(X) (drr->drr_u.X = BSWAP_32(drr->drr_u.X))
1929 2083 drr->drr_type = BSWAP_32(drr->drr_type);
1930 2084 drr->drr_payloadlen = BSWAP_32(drr->drr_payloadlen);
1931 2085
1932 2086 switch (drr->drr_type) {
1933 2087 case DRR_BEGIN:
1934 2088 DO64(drr_begin.drr_magic);
1935 2089 DO64(drr_begin.drr_versioninfo);
1936 2090 DO64(drr_begin.drr_creation_time);
1937 2091 DO32(drr_begin.drr_type);
1938 2092 DO32(drr_begin.drr_flags);
1939 2093 DO64(drr_begin.drr_toguid);
1940 2094 DO64(drr_begin.drr_fromguid);
1941 2095 break;
1942 2096 case DRR_OBJECT:
1943 2097 DO64(drr_object.drr_object);
1944 2098 DO32(drr_object.drr_type);
1945 2099 DO32(drr_object.drr_bonustype);
1946 2100 DO32(drr_object.drr_blksz);
1947 2101 DO32(drr_object.drr_bonuslen);
1948 2102 DO64(drr_object.drr_toguid);
1949 2103 break;
1950 2104 case DRR_FREEOBJECTS:
1951 2105 DO64(drr_freeobjects.drr_firstobj);
1952 2106 DO64(drr_freeobjects.drr_numobjs);
1953 2107 DO64(drr_freeobjects.drr_toguid);
1954 2108 break;
1955 2109 case DRR_WRITE:
1956 2110 DO64(drr_write.drr_object);
1957 2111 DO32(drr_write.drr_type);
1958 2112 DO64(drr_write.drr_offset);
1959 2113 DO64(drr_write.drr_logical_size);
1960 2114 DO64(drr_write.drr_toguid);
1961 2115 ZIO_CHECKSUM_BSWAP(&drr->drr_u.drr_write.drr_key.ddk_cksum);
1962 2116 DO64(drr_write.drr_key.ddk_prop);
1963 2117 DO64(drr_write.drr_compressed_size);
1964 2118 break;
1965 2119 case DRR_WRITE_BYREF:
1966 2120 DO64(drr_write_byref.drr_object);
1967 2121 DO64(drr_write_byref.drr_offset);
1968 2122 DO64(drr_write_byref.drr_length);
1969 2123 DO64(drr_write_byref.drr_toguid);
1970 2124 DO64(drr_write_byref.drr_refguid);
1971 2125 DO64(drr_write_byref.drr_refobject);
1972 2126 DO64(drr_write_byref.drr_refoffset);
1973 2127 ZIO_CHECKSUM_BSWAP(&drr->drr_u.drr_write_byref.
1974 2128 drr_key.ddk_cksum);
1975 2129 DO64(drr_write_byref.drr_key.ddk_prop);
1976 2130 break;
1977 2131 case DRR_WRITE_EMBEDDED:
1978 2132 DO64(drr_write_embedded.drr_object);
1979 2133 DO64(drr_write_embedded.drr_offset);
1980 2134 DO64(drr_write_embedded.drr_length);
1981 2135 DO64(drr_write_embedded.drr_toguid);
1982 2136 DO32(drr_write_embedded.drr_lsize);
1983 2137 DO32(drr_write_embedded.drr_psize);
1984 2138 break;
1985 2139 case DRR_FREE:
1986 2140 DO64(drr_free.drr_object);
1987 2141 DO64(drr_free.drr_offset);
1988 2142 DO64(drr_free.drr_length);
1989 2143 DO64(drr_free.drr_toguid);
1990 2144 break;
1991 2145 case DRR_SPILL:
1992 2146 DO64(drr_spill.drr_object);
1993 2147 DO64(drr_spill.drr_length);
1994 2148 DO64(drr_spill.drr_toguid);
1995 2149 break;
1996 2150 case DRR_END:
1997 2151 DO64(drr_end.drr_toguid);
1998 2152 ZIO_CHECKSUM_BSWAP(&drr->drr_u.drr_end.drr_checksum);
1999 2153 break;
2000 2154 }
2001 2155
2002 2156 if (drr->drr_type != DRR_BEGIN) {
2003 2157 ZIO_CHECKSUM_BSWAP(&drr->drr_u.drr_checksum.drr_checksum);
2004 2158 }
2005 2159
2006 2160 #undef DO64
2007 2161 #undef DO32
2008 2162 }
2009 2163
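The DO64/DO32 macros above are shorthand for swapping one field of the record union in place. A condensed, self-contained version of the same pattern, with the record trimmed down to two fields for illustration and the GCC/Clang bswap builtins standing in for BSWAP_64/BSWAP_32:

#include <stdint.h>
#include <stdio.h>

/* Trimmed-down replay record, for illustration only. */
typedef struct {
	uint32_t drr_type;
	struct {
		uint64_t drr_magic;
		uint32_t drr_flags;
	} drr_u;
} mini_record_t;

static void
byteswap_mini_record(mini_record_t *drr)
{
#define	DO64(X)	(drr->drr_u.X = __builtin_bswap64(drr->drr_u.X))
#define	DO32(X)	(drr->drr_u.X = __builtin_bswap32(drr->drr_u.X))
	drr->drr_type = __builtin_bswap32(drr->drr_type);
	DO64(drr_magic);
	DO32(drr_flags);
#undef	DO64
#undef	DO32
}

int
main(void)
{
	mini_record_t r = { 1, { 0x0102030405060708ULL, 0xa0b0c0d0 } };

	byteswap_mini_record(&r);
	printf("%x %llx %x\n", r.drr_type,
	    (unsigned long long)r.drr_u.drr_magic, r.drr_u.drr_flags);
	return (0);
}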
2010 2164 static inline uint8_t
2011 2165 deduce_nblkptr(dmu_object_type_t bonus_type, uint64_t bonus_size)
2012 2166 {
2013 2167 if (bonus_type == DMU_OT_SA) {
2014 2168 return (1);
2015 2169 } else {
2016 2170 return (1 +
2017 2171 ((DN_MAX_BONUSLEN - bonus_size) >> SPA_BLKPTRSHIFT));
2018 2172 }
2019 2173 }
2020 2174
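deduce_nblkptr() captures how the dnode's bonus buffer and its block pointers share the same tail space: every 128-byte (1 << SPA_BLKPTRSHIFT) chunk left free by the bonus can hold one more blkptr beyond the first, while an SA-type bonus always implies exactly one. A sketch with the classic DN_MAX_BONUSLEN of 320 bytes (constants illustrative; the authoritative values live in the dnode headers):

#include <stdio.h>
#include <stdint.h>

#define	DN_MAX_BONUSLEN	320	/* classic dnode layout */
#define	SPA_BLKPTRSHIFT	7	/* 128-byte block pointers */

static uint8_t
nblkptr(int bonus_is_sa, uint64_t bonus_size)
{
	if (bonus_is_sa)
		return (1);
	return (1 + ((DN_MAX_BONUSLEN - bonus_size) >> SPA_BLKPTRSHIFT));
}

int
main(void)
{
	/* SA bonus -> 1; full 320-byte bonus -> 1; 64-byte bonus -> 3. */
	printf("%u %u %u\n", nblkptr(1, 0), nblkptr(0, 320),
	    nblkptr(0, 64));
	return (0);
}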
2021 2175 static void
2022 2176 save_resume_state(struct receive_writer_arg *rwa,
2023 2177 uint64_t object, uint64_t offset, dmu_tx_t *tx)
2024 2178 {
2025 2179 int txgoff = dmu_tx_get_txg(tx) & TXG_MASK;
2026 2180
2027 2181 if (!rwa->resumable)
2028 2182 return;
2029 2183
2030 2184 /*
2031 2185 * We use ds_resume_bytes[] != 0 to indicate that we need to
2032 2186 * update this on disk, so it must not be 0.
2033 2187 */
2034 2188 ASSERT(rwa->bytes_read != 0);
2035 2189
2036 2190 /*
2037 2191 * We only resume from write records, which have a valid
2038 2192 * (non-meta-dnode) object number.
2039 2193 */
2040 2194 ASSERT(object != 0);
2041 2195
2042 2196 /*
2043 2197 * For resuming to work correctly, we must receive records in order,
2044 2198 * sorted by object,offset. This is checked by the callers, but
2045 2199 * assert it here for good measure.
2046 2200 */
2047 2201 ASSERT3U(object, >=, rwa->os->os_dsl_dataset->ds_resume_object[txgoff]);
2048 2202 ASSERT(object != rwa->os->os_dsl_dataset->ds_resume_object[txgoff] ||
2049 2203 offset >= rwa->os->os_dsl_dataset->ds_resume_offset[txgoff]);
2050 2204 ASSERT3U(rwa->bytes_read, >=,
2051 2205 rwa->os->os_dsl_dataset->ds_resume_bytes[txgoff]);
2052 2206
2053 2207 rwa->os->os_dsl_dataset->ds_resume_object[txgoff] = object;
2054 2208 rwa->os->os_dsl_dataset->ds_resume_offset[txgoff] = offset;
2055 2209 rwa->os->os_dsl_dataset->ds_resume_bytes[txgoff] = rwa->bytes_read;
2056 2210 }
2057 2211
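save_resume_state() keeps one resume triple (object, offset, bytes read) per in-flight txg; masking the txg number with TXG_MASK (TXG_SIZE - 1, with TXG_SIZE = 4 in illumos) picks the slot whose contents will be written out with that txg. A sketch of the ring indexing:

#include <stdio.h>
#include <stdint.h>

#define	TXG_SIZE	4	/* three concurrent states plus one */
#define	TXG_MASK	(TXG_SIZE - 1)

int
main(void)
{
	uint64_t resume_bytes[TXG_SIZE] = { 0 };
	uint64_t txg = 12345;

	/* Record progress in the slot belonging to this txg. */
	resume_bytes[txg & TXG_MASK] = 1ULL << 20;
	printf("txg %llu uses slot %llu (bytes=%llu)\n",
	    (unsigned long long)txg,
	    (unsigned long long)(txg & TXG_MASK),
	    (unsigned long long)resume_bytes[txg & TXG_MASK]);
	return (0);
}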
2058 2212 static int
2059 2213 receive_object(struct receive_writer_arg *rwa, struct drr_object *drro,
2060 2214 void *data)
2061 2215 {
2062 2216 dmu_object_info_t doi;
2063 2217 dmu_tx_t *tx;
2064 2218 uint64_t object;
2065 2219 int err;
2066 2220
2067 2221 if (drro->drr_type == DMU_OT_NONE ||
2068 2222 !DMU_OT_IS_VALID(drro->drr_type) ||
2069 2223 !DMU_OT_IS_VALID(drro->drr_bonustype) ||
2070 2224 drro->drr_checksumtype >= ZIO_CHECKSUM_FUNCTIONS ||
2071 2225 drro->drr_compress >= ZIO_COMPRESS_FUNCTIONS ||
2072 2226 P2PHASE(drro->drr_blksz, SPA_MINBLOCKSIZE) ||
2073 2227 drro->drr_blksz < SPA_MINBLOCKSIZE ||
2074 2228 drro->drr_blksz > spa_maxblocksize(dmu_objset_spa(rwa->os)) ||
2075 2229 drro->drr_bonuslen > DN_MAX_BONUSLEN) {
2076 2230 return (SET_ERROR(EINVAL));
2077 2231 }
2078 2232
2079 2233 err = dmu_object_info(rwa->os, drro->drr_object, &doi);
2080 2234
2081 2235 if (err != 0 && err != ENOENT)
2082 2236 return (SET_ERROR(EINVAL));
2083 2237 object = err == 0 ? drro->drr_object : DMU_NEW_OBJECT;
2084 2238
2085 2239 /*
2086 2240 * If we are losing blkptrs or changing the block size, this must
2087 2241 * be a new file instance. We must clear out the previous file
2088 2242 * contents before we can change this type of metadata in the dnode.
2089 2243 */
2090 2244 if (err == 0) {
2091 2245 int nblkptr;
2092 2246
2093 2247 nblkptr = deduce_nblkptr(drro->drr_bonustype,
2094 2248 drro->drr_bonuslen);
2095 2249
2096 2250 if (drro->drr_blksz != doi.doi_data_block_size ||
2097 2251 nblkptr < doi.doi_nblkptr) {
2098 2252 err = dmu_free_long_range(rwa->os, drro->drr_object,
2099 2253 0, DMU_OBJECT_END);
2100 2254 if (err != 0)
2101 2255 return (SET_ERROR(EINVAL));
2102 2256 }
2103 2257 }
2104 2258
2105 2259 tx = dmu_tx_create(rwa->os);
2106 2260 dmu_tx_hold_bonus(tx, object);
2107 2261 err = dmu_tx_assign(tx, TXG_WAIT);
2108 2262 if (err != 0) {
2109 2263 dmu_tx_abort(tx);
2110 2264 return (err);
2111 2265 }
2112 2266
2113 2267 if (object == DMU_NEW_OBJECT) {
2114 2268 /* currently free, want to be allocated */
2115 2269 err = dmu_object_claim(rwa->os, drro->drr_object,
2116 2270 drro->drr_type, drro->drr_blksz,
2117 2271 drro->drr_bonustype, drro->drr_bonuslen, tx);
2118 2272 } else if (drro->drr_type != doi.doi_type ||
2119 2273 drro->drr_blksz != doi.doi_data_block_size ||
2120 2274 drro->drr_bonustype != doi.doi_bonus_type ||
2121 2275 drro->drr_bonuslen != doi.doi_bonus_size) {
2122 2276 /* currently allocated, but with different properties */
2123 2277 err = dmu_object_reclaim(rwa->os, drro->drr_object,
2124 2278 drro->drr_type, drro->drr_blksz,
2125 2279 drro->drr_bonustype, drro->drr_bonuslen, tx);
2126 2280 }
2127 2281 if (err != 0) {
2128 2282 dmu_tx_commit(tx);
2129 2283 return (SET_ERROR(EINVAL));
2130 2284 }
2131 2285
2132 2286 dmu_object_set_checksum(rwa->os, drro->drr_object,
2133 2287 drro->drr_checksumtype, tx);
2134 2288 dmu_object_set_compress(rwa->os, drro->drr_object,
2135 2289 drro->drr_compress, tx);
2136 2290
2137 2291 if (data != NULL) {
2138 2292 dmu_buf_t *db;
2139 2293
2140 2294 VERIFY0(dmu_bonus_hold(rwa->os, drro->drr_object, FTAG, &db));
2141 2295 dmu_buf_will_dirty(db, tx);
2142 2296
2143 2297 ASSERT3U(db->db_size, >=, drro->drr_bonuslen);
2144 2298 bcopy(data, db->db_data, drro->drr_bonuslen);
2145 2299 if (rwa->byteswap) {
2146 2300 dmu_object_byteswap_t byteswap =
2147 2301 DMU_OT_BYTESWAP(drro->drr_bonustype);
2148 2302 dmu_ot_byteswap[byteswap].ob_func(db->db_data,
2149 2303 drro->drr_bonuslen);
2150 2304 }
2151 2305 dmu_buf_rele(db, FTAG);
2152 2306 }
2153 2307 dmu_tx_commit(tx);
2154 2308
2155 2309 return (0);
2156 2310 }
2157 2311
2158 2312 /* ARGSUSED */
2159 2313 static int
2160 2314 receive_freeobjects(struct receive_writer_arg *rwa,
2161 2315 struct drr_freeobjects *drrfo)
2162 2316 {
2163 2317 uint64_t obj;
2164 2318 int next_err = 0;
2165 2319
2166 2320 if (drrfo->drr_firstobj + drrfo->drr_numobjs < drrfo->drr_firstobj)
2167 2321 return (SET_ERROR(EINVAL));
2168 2322
2169 2323 for (obj = drrfo->drr_firstobj;
2170 2324 obj < drrfo->drr_firstobj + drrfo->drr_numobjs && next_err == 0;
2171 2325 next_err = dmu_object_next(rwa->os, &obj, FALSE, 0)) {
2172 2326 int err;
2173 2327
2174 2328 if (dmu_object_info(rwa->os, obj, NULL) != 0)
2175 2329 continue;
2176 2330
2177 2331 err = dmu_free_long_object(rwa->os, obj);
2178 2332 if (err != 0)
2179 2333 return (err);
2180 2334 }
2181 2335 if (next_err != ESRCH)
2182 2336 return (next_err);
2183 2337 return (0);
2184 2338 }
2185 2339
2186 2340 static int
2187 2341 receive_write(struct receive_writer_arg *rwa, struct drr_write *drrw,
2188 2342 arc_buf_t *abuf)
2189 2343 {
2190 2344 dmu_tx_t *tx;
2191 2345 int err;
2192 2346
2193 2347 if (drrw->drr_offset + drrw->drr_logical_size < drrw->drr_offset ||
2194 2348 !DMU_OT_IS_VALID(drrw->drr_type))
2195 2349 return (SET_ERROR(EINVAL));
2196 2350
2197 2351 /*
2198 2352 * For resuming to work, records must be in increasing order
2199 2353 * by (object, offset).
2200 2354 */
2201 2355 if (drrw->drr_object < rwa->last_object ||
2202 2356 (drrw->drr_object == rwa->last_object &&
2203 2357 drrw->drr_offset < rwa->last_offset)) {
2204 2358 return (SET_ERROR(EINVAL));
2205 2359 }
2206 2360 rwa->last_object = drrw->drr_object;
2207 2361 rwa->last_offset = drrw->drr_offset;
2208 2362
2209 2363 if (dmu_object_info(rwa->os, drrw->drr_object, NULL) != 0)
2210 2364 return (SET_ERROR(EINVAL));
2211 2365
2212 2366 tx = dmu_tx_create(rwa->os);
2213 2367
2214 2368 dmu_tx_hold_write(tx, drrw->drr_object,
2215 2369 drrw->drr_offset, drrw->drr_logical_size);
2216 2370 err = dmu_tx_assign(tx, TXG_WAIT);
2217 2371 if (err != 0) {
2218 2372 dmu_tx_abort(tx);
2219 2373 return (err);
2220 2374 }
2375 +
2221 2376 if (rwa->byteswap) {
2222 2377 dmu_object_byteswap_t byteswap =
2223 2378 DMU_OT_BYTESWAP(drrw->drr_type);
2224 2379 dmu_ot_byteswap[byteswap].ob_func(abuf->b_data,
2225 2380 DRR_WRITE_PAYLOAD_SIZE(drrw));
2226 2381 }
2227 2382
2228 2383 /* use the bonus buf to look up the dnode in dmu_assign_arcbuf */
2229 2384 dmu_buf_t *bonus;
2230 2385 if (dmu_bonus_hold(rwa->os, drrw->drr_object, FTAG, &bonus) != 0)
2231 2386 return (SET_ERROR(EINVAL));
2232 2387 dmu_assign_arcbuf(bonus, drrw->drr_offset, abuf, tx);
2233 2388
2234 2389 /*
2235 2390 * Note: If the receive fails, we want the resume stream to start
2236 2391 * with the same record that we last successfully received (as opposed
2237 2392 * to the next record), so that we can verify that we are
2238 2393 * resuming from the correct location.
2239 2394 */
2240 2395 save_resume_state(rwa, drrw->drr_object, drrw->drr_offset, tx);
2241 2396 dmu_tx_commit(tx);
2242 2397 dmu_buf_rele(bonus, FTAG);
2243 2398
2244 2399 return (0);
2245 2400 }
2246 2401
2247 2402 /*
2248 2403 * Handle a DRR_WRITE_BYREF record. This record is used in dedup'd
2249 2404 * streams to refer to a copy of the data that is already on the
2250 2405 * system because it came in earlier in the stream. This function
2251 2406 * finds the earlier copy of the data, and uses that copy instead of
2252 2407 * data from the stream to fulfill this write.
2253 2408 */
2254 2409 static int
2255 2410 receive_write_byref(struct receive_writer_arg *rwa,
2256 2411 struct drr_write_byref *drrwbr)
2257 2412 {
2258 2413 dmu_tx_t *tx;
2259 2414 int err;
2260 2415 guid_map_entry_t gmesrch;
2261 2416 guid_map_entry_t *gmep;
2262 2417 avl_index_t where;
2263 2418 objset_t *ref_os = NULL;
2264 2419 dmu_buf_t *dbp;
2265 2420
2266 2421 if (drrwbr->drr_offset + drrwbr->drr_length < drrwbr->drr_offset)
2267 2422 return (SET_ERROR(EINVAL));
2268 2423
2269 2424 /*
2270 2425 * If the GUID of the referenced dataset is different from the
2271 2426 * GUID of the target dataset, find the referenced dataset.
2272 2427 */
2273 2428 if (drrwbr->drr_toguid != drrwbr->drr_refguid) {
2274 2429 gmesrch.guid = drrwbr->drr_refguid;
2275 2430 if ((gmep = avl_find(rwa->guid_to_ds_map, &gmesrch,
2276 2431 &where)) == NULL) {
2277 2432 return (SET_ERROR(EINVAL));
2278 2433 }
2279 2434 if (dmu_objset_from_ds(gmep->gme_ds, &ref_os))
2280 2435 return (SET_ERROR(EINVAL));
2281 2436 } else {
2282 2437 ref_os = rwa->os;
2283 2438 }
2284 2439
2285 2440 err = dmu_buf_hold(ref_os, drrwbr->drr_refobject,
2286 2441 drrwbr->drr_refoffset, FTAG, &dbp, DMU_READ_PREFETCH);
2287 2442 if (err != 0)
2288 2443 return (err);
2289 2444
2290 2445 tx = dmu_tx_create(rwa->os);
2291 2446
2292 2447 dmu_tx_hold_write(tx, drrwbr->drr_object,
2293 2448 drrwbr->drr_offset, drrwbr->drr_length);
2294 2449 err = dmu_tx_assign(tx, TXG_WAIT);
2295 2450 if (err != 0) {
2296 2451 dmu_tx_abort(tx);
2297 2452 return (err);
2298 2453 }
2299 2454 dmu_write(rwa->os, drrwbr->drr_object,
2300 2455 drrwbr->drr_offset, drrwbr->drr_length, dbp->db_data, tx);
2301 2456 dmu_buf_rele(dbp, FTAG);
2302 2457
2303 2458 /* See comment in receive_write. */
2304 2459 save_resume_state(rwa, drrwbr->drr_object, drrwbr->drr_offset, tx);
2305 2460 dmu_tx_commit(tx);
2306 2461 return (0);
2307 2462 }
2308 2463
2309 2464 static int
2310 2465 receive_write_embedded(struct receive_writer_arg *rwa,
2311 2466 struct drr_write_embedded *drrwe, void *data)
2312 2467 {
2313 2468 dmu_tx_t *tx;
2314 2469 int err;
2315 2470
2316 2471 if (drrwe->drr_offset + drrwe->drr_length < drrwe->drr_offset)
2317 2472 return (EINVAL);
2318 2473
2319 2474 if (drrwe->drr_psize > BPE_PAYLOAD_SIZE)
2320 2475 return (EINVAL);
2321 2476
2322 2477 if (drrwe->drr_etype >= NUM_BP_EMBEDDED_TYPES)
2323 2478 return (EINVAL);
2324 2479 if (drrwe->drr_compression >= ZIO_COMPRESS_FUNCTIONS)
2325 2480 return (EINVAL);
2326 2481
2327 2482 tx = dmu_tx_create(rwa->os);
2328 2483
2329 2484 dmu_tx_hold_write(tx, drrwe->drr_object,
2330 2485 drrwe->drr_offset, drrwe->drr_length);
2331 2486 err = dmu_tx_assign(tx, TXG_WAIT);
2332 2487 if (err != 0) {
2333 2488 dmu_tx_abort(tx);
2334 2489 return (err);
2335 2490 }
2336 2491
2337 2492 dmu_write_embedded(rwa->os, drrwe->drr_object,
2338 2493 drrwe->drr_offset, data, drrwe->drr_etype,
2339 2494 drrwe->drr_compression, drrwe->drr_lsize, drrwe->drr_psize,
2340 2495 rwa->byteswap ^ ZFS_HOST_BYTEORDER, tx);
2341 2496
2342 2497 /* See comment in receive_write. */
2343 2498 save_resume_state(rwa, drrwe->drr_object, drrwe->drr_offset, tx);
2344 2499 dmu_tx_commit(tx);
2345 2500 return (0);
2346 2501 }
2347 2502
2348 2503 static int
2349 2504 receive_spill(struct receive_writer_arg *rwa, struct drr_spill *drrs,
2350 2505 void *data)
2351 2506 {
2352 2507 dmu_tx_t *tx;
2353 2508 dmu_buf_t *db, *db_spill;
2354 2509 int err;
2355 2510
2356 2511 if (drrs->drr_length < SPA_MINBLOCKSIZE ||
2357 2512 drrs->drr_length > spa_maxblocksize(dmu_objset_spa(rwa->os)))
2358 2513 return (SET_ERROR(EINVAL));
2359 2514
2360 2515 if (dmu_object_info(rwa->os, drrs->drr_object, NULL) != 0)
2361 2516 return (SET_ERROR(EINVAL));
2362 2517
2363 2518 VERIFY0(dmu_bonus_hold(rwa->os, drrs->drr_object, FTAG, &db));
2364 2519 if ((err = dmu_spill_hold_by_bonus(db, FTAG, &db_spill)) != 0) {
2365 2520 dmu_buf_rele(db, FTAG);
2366 2521 return (err);
2367 2522 }
2368 2523
2369 2524 tx = dmu_tx_create(rwa->os);
2370 2525
2371 2526 dmu_tx_hold_spill(tx, db->db_object);
2372 2527
2373 2528 err = dmu_tx_assign(tx, TXG_WAIT);
2374 2529 if (err != 0) {
2375 2530 dmu_buf_rele(db, FTAG);
2376 2531 dmu_buf_rele(db_spill, FTAG);
2377 2532 dmu_tx_abort(tx);
2378 2533 return (err);
2379 2534 }
2380 2535 dmu_buf_will_dirty(db_spill, tx);
2381 2536
2382 2537 if (db_spill->db_size < drrs->drr_length)
2383 2538 VERIFY(0 == dbuf_spill_set_blksz(db_spill,
2384 2539 drrs->drr_length, tx));
2385 2540 bcopy(data, db_spill->db_data, drrs->drr_length);
2386 2541
2387 2542 dmu_buf_rele(db, FTAG);
2388 2543 dmu_buf_rele(db_spill, FTAG);
2389 2544
2390 2545 dmu_tx_commit(tx);
2391 2546 return (0);
2392 2547 }
2393 2548
2394 2549 /* ARGSUSED */
2395 2550 static int
2396 2551 receive_free(struct receive_writer_arg *rwa, struct drr_free *drrf)
2397 2552 {
2398 2553 int err;
2399 2554
2400 2555 if (drrf->drr_length != -1ULL &&
2401 2556 drrf->drr_offset + drrf->drr_length < drrf->drr_offset)
2402 2557 return (SET_ERROR(EINVAL));
2403 2558
2404 2559 if (dmu_object_info(rwa->os, drrf->drr_object, NULL) != 0)
2405 2560 return (SET_ERROR(EINVAL));
2406 2561
2407 2562 err = dmu_free_long_range(rwa->os, drrf->drr_object,
2408 2563 drrf->drr_offset, drrf->drr_length);
2409 2564
2410 2565 return (err);
2411 2566 }
2412 2567
2413 2568 /* used to destroy the drc_ds on error */
2414 2569 static void
2415 2570 dmu_recv_cleanup_ds(dmu_recv_cookie_t *drc)
2416 2571 {
2417 2572 if (drc->drc_resumable) {
2418 2573 /* wait for our resume state to be written to disk */
2419 2574 txg_wait_synced(drc->drc_ds->ds_dir->dd_pool, 0);
2420 2575 dsl_dataset_disown(drc->drc_ds, dmu_recv_tag);
2421 2576 } else {
2422 2577 char name[ZFS_MAX_DATASET_NAME_LEN];
2423 2578 dsl_dataset_name(drc->drc_ds, name);
2424 2579 dsl_dataset_disown(drc->drc_ds, dmu_recv_tag);
2425 2580 (void) dsl_destroy_head(name);
2426 2581 }
2427 2582 }
2428 2583
2429 2584 static void
2430 2585 receive_cksum(struct receive_arg *ra, int len, void *buf)
2431 2586 {
2432 2587 if (ra->byteswap) {
2433 2588 (void) fletcher_4_incremental_byteswap(buf, len, &ra->cksum);
2434 2589 } else {
2435 2590 (void) fletcher_4_incremental_native(buf, len, &ra->cksum);
2436 2591 }
2437 2592 }
2438 2593
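receive_cksum() runs each chunk through ZFS's incremental Fletcher-4, which folds the stream's 32-bit words into four 64-bit accumulators and carries the running zio_cksum_t between calls. A minimal native-order sketch of the algorithm (the real implementation is fletcher_4_incremental_native() in zfs_fletcher.c):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
	uint64_t zc_word[4];
} cksum_t;

/* Incremental Fletcher-4 over native-order 32-bit words. */
static void
fletcher_4_incremental(const void *buf, size_t size, cksum_t *zcp)
{
	const uint32_t *ip = buf;
	const uint32_t *ipend = ip + (size / sizeof (uint32_t));
	uint64_t a = zcp->zc_word[0], b = zcp->zc_word[1];
	uint64_t c = zcp->zc_word[2], d = zcp->zc_word[3];

	for (; ip < ipend; ip++) {
		a += *ip;
		b += a;
		c += b;
		d += c;
	}
	zcp->zc_word[0] = a; zcp->zc_word[1] = b;
	zcp->zc_word[2] = c; zcp->zc_word[3] = d;
}

int
main(void)
{
	uint32_t data[4] = { 1, 2, 3, 4 };
	cksum_t zc = { { 0 } };

	fletcher_4_incremental(data, sizeof (data), &zc);
	printf("%llx %llx %llx %llx\n",
	    (unsigned long long)zc.zc_word[0],
	    (unsigned long long)zc.zc_word[1],
	    (unsigned long long)zc.zc_word[2],
	    (unsigned long long)zc.zc_word[3]);
	return (0);
}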
2439 2594 /*
2440 2595 * Read the payload into a buffer of size len, and update the current record's
2441 2596 * payload field.
2442 2597 * Allocate ra->next_rrd and read the next record's header into
2443 2598 * ra->next_rrd->header.
2444 2599 * Verify checksum of payload and next record.
2445 2600 */
2446 2601 static int
2447 2602 receive_read_payload_and_next_header(struct receive_arg *ra, int len, void *buf)
2448 2603 {
2449 2604 int err;
2605 + boolean_t checksum_enable = (ra->krrp_task == NULL ||
2606 + ra->krrp_task->buffer_args.force_cksum);
2450 2607
2451 2608 if (len != 0) {
2452 2609 ASSERT3U(len, <=, SPA_MAXBLOCKSIZE);
2453 2610 err = receive_read(ra, len, buf);
2454 2611 if (err != 0)
2455 2612 return (err);
2456 2613 receive_cksum(ra, len, buf);
2457 2614
2458 2615 /* note: rrd is NULL when reading the begin record's payload */
2459 2616 if (ra->rrd != NULL) {
2460 2617 ra->rrd->payload = buf;
2461 2618 ra->rrd->payload_size = len;
2462 2619 ra->rrd->bytes_read = ra->bytes_read;
2463 2620 }
2464 2621 }
2465 2622
2466 2623 ra->prev_cksum = ra->cksum;
2467 2624
2468 2625 ra->next_rrd = kmem_zalloc(sizeof (*ra->next_rrd), KM_SLEEP);
2469 2626 err = receive_read(ra, sizeof (ra->next_rrd->header),
2470 2627 &ra->next_rrd->header);
2471 2628 ra->next_rrd->bytes_read = ra->bytes_read;
2472 2629 if (err != 0) {
2473 2630 kmem_free(ra->next_rrd, sizeof (*ra->next_rrd));
2474 2631 ra->next_rrd = NULL;
2475 2632 return (err);
2476 2633 }
2477 2634 if (ra->next_rrd->header.drr_type == DRR_BEGIN) {
2478 2635 kmem_free(ra->next_rrd, sizeof (*ra->next_rrd));
2479 2636 ra->next_rrd = NULL;
2480 2637 return (SET_ERROR(EINVAL));
2481 2638 }
2482 2639
2483 - /*
2484 - * Note: checksum is of everything up to but not including the
2485 - * checksum itself.
2486 - */
2487 - ASSERT3U(offsetof(dmu_replay_record_t, drr_u.drr_checksum.drr_checksum),
2488 - ==, sizeof (dmu_replay_record_t) - sizeof (zio_cksum_t));
2489 - receive_cksum(ra,
2490 - offsetof(dmu_replay_record_t, drr_u.drr_checksum.drr_checksum),
2491 - &ra->next_rrd->header);
2640 + if (checksum_enable) {
2641 + /*
2642 + * Note: checksum is of everything up to but not including the
2643 + * checksum itself.
2644 + */
2645 + ASSERT3U(offsetof(dmu_replay_record_t,
2646 + drr_u.drr_checksum.drr_checksum),
2647 + ==, sizeof (dmu_replay_record_t) - sizeof (zio_cksum_t));
2648 + receive_cksum(ra,
2649 + offsetof(dmu_replay_record_t,
2650 + drr_u.drr_checksum.drr_checksum),
2651 + &ra->next_rrd->header);
2492 2652
2493 - zio_cksum_t cksum_orig =
2494 - ra->next_rrd->header.drr_u.drr_checksum.drr_checksum;
2495 - zio_cksum_t *cksump =
2496 - &ra->next_rrd->header.drr_u.drr_checksum.drr_checksum;
2653 + zio_cksum_t cksum_orig =
2654 + ra->next_rrd->header.drr_u.drr_checksum.drr_checksum;
2655 + zio_cksum_t *cksump =
2656 + &ra->next_rrd->header.drr_u.drr_checksum.drr_checksum;
2497 2657
2498 - if (ra->byteswap)
2499 - byteswap_record(&ra->next_rrd->header);
2658 + if (ra->byteswap)
2659 + byteswap_record(&ra->next_rrd->header);
2500 2660
2501 - if ((!ZIO_CHECKSUM_IS_ZERO(cksump)) &&
2502 - !ZIO_CHECKSUM_EQUAL(ra->cksum, *cksump)) {
2503 - kmem_free(ra->next_rrd, sizeof (*ra->next_rrd));
2504 - ra->next_rrd = NULL;
2505 - return (SET_ERROR(ECKSUM));
2661 + if ((!ZIO_CHECKSUM_IS_ZERO(cksump)) &&
2662 + !ZIO_CHECKSUM_EQUAL(ra->cksum, *cksump)) {
2663 + kmem_free(ra->next_rrd, sizeof (*ra->next_rrd));
2664 + ra->next_rrd = NULL;
2665 + return (SET_ERROR(ECKSUM));
2666 + }
2667 +
2668 + receive_cksum(ra, sizeof (cksum_orig), &cksum_orig);
2506 2669 }
2507 2670
2508 - receive_cksum(ra, sizeof (cksum_orig), &cksum_orig);
2509 -
2510 2671 return (0);
2511 2672 }
2512 2673
2513 2674 static void
2514 2675 objlist_create(struct objlist *list)
2515 2676 {
2516 2677 list_create(&list->list, sizeof (struct receive_objnode),
2517 2678 offsetof(struct receive_objnode, node));
2518 2679 list->last_lookup = 0;
2519 2680 }
2520 2681
2521 2682 static void
2522 2683 objlist_destroy(struct objlist *list)
2523 2684 {
2524 2685 for (struct receive_objnode *n = list_remove_head(&list->list);
2525 2686 n != NULL; n = list_remove_head(&list->list)) {
2526 2687 kmem_free(n, sizeof (*n));
2527 2688 }
2528 2689 list_destroy(&list->list);
2529 2690 }
2530 2691
2531 2692 /*
2532 2693 * This function looks through the objlist to see if the specified object number
2533 2694 * is contained in the objlist. In the process, it will remove all object
2534 2695 * numbers in the list that are smaller than the specified object number. Thus,
2535 2696 * any lookup of an object number smaller than a previously looked up object
2536 2697 * number will always return false; therefore, all lookups should be done in
2537 2698 * ascending order.
2538 2699 */
2539 2700 static boolean_t
2540 2701 objlist_exists(struct objlist *list, uint64_t object)
2541 2702 {
2542 2703 struct receive_objnode *node = list_head(&list->list);
2543 2704 ASSERT3U(object, >=, list->last_lookup);
2544 2705 list->last_lookup = object;
2545 2706 while (node != NULL && node->object < object) {
2546 2707 VERIFY3P(node, ==, list_remove_head(&list->list));
2547 2708 kmem_free(node, sizeof (*node));
2548 2709 node = list_head(&list->list);
2549 2710 }
2550 2711 return (node != NULL && node->object == object);
2551 2712 }
2552 2713
2553 2714 /*
2554 2715 * The objlist is a list of object numbers stored in ascending order. However,
2555 2716 * the insertion of new object numbers does not seek out the correct location to
2556 2717 * store a new object number; instead, it appends it to the list for simplicity.
2557 2718 * Thus, callers must take care to insert new object numbers only in ascending
2558 2719 * order.
2559 2720 */
2560 2721 static void
2561 2722 objlist_insert(struct objlist *list, uint64_t object)
2562 2723 {
2563 2724 struct receive_objnode *node = kmem_zalloc(sizeof (*node), KM_SLEEP);
2564 2725 node->object = object;
2565 2726 #ifdef ZFS_DEBUG
2566 2727 struct receive_objnode *last_object = list_tail(&list->list);
2567 2728 uint64_t last_objnum = (last_object != NULL ? last_object->object : 0);
2568 2729 ASSERT3U(node->object, >, last_objnum);
2569 2730 #endif
2570 2731 list_insert_tail(&list->list, node);
2571 2732 }
2572 2733
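Taken together, objlist_insert() and objlist_exists() form a sliding window over ascending object numbers: each lookup discards everything below the probed number, so earlier numbers can never match again. A compact array-backed analog of that trim-as-you-look-up behavior:

#include <stdint.h>
#include <stdio.h>

#define	MAXOBJS	8

static uint64_t objs[MAXOBJS];
static int head, tail;	/* [head, tail) holds ascending objnums */

static void
objlist_insert(uint64_t object)
{
	objs[tail++] = object;	/* caller must insert in ascending order */
}

static int
objlist_exists(uint64_t object)
{
	/* Drop everything smaller than the probed object number. */
	while (head < tail && objs[head] < object)
		head++;
	return (head < tail && objs[head] == object);
}

int
main(void)
{
	objlist_insert(3);
	objlist_insert(5);
	objlist_insert(9);
	printf("%d %d %d\n", objlist_exists(4),	/* 0, trims 3 */
	    objlist_exists(5),			/* 1 */
	    objlist_exists(3));			/* 0, already trimmed */
	return (0);
}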
2573 2734 /*
2574 2735 * Issue the prefetch reads for any necessary indirect blocks.
2575 2736 *
2576 2737 * We use the object ignore list to tell us whether or not to issue prefetches
2577 2738 * for a given object. We do this for both correctness (in case the blocksize
2578 2739 * of an object has changed) and performance (if the object doesn't exist, don't
2579 2740 * needlessly try to issue prefetches). We also trim the list as we go through
2580 2741 * the stream to prevent it from growing to an unbounded size.
2581 2742 *
2582 2743 * The object numbers within will always be in sorted order, and any write
2583 2744 * records we see will also be in sorted order, but they're not sorted with
2584 2745 * respect to each other (i.e. we can get several object records before
2585 2746 * receiving each object's write records). As a result, once we've reached a
2586 2747 * given object number, we can safely remove any reference to lower object
2587 2748 * numbers in the ignore list. In practice, we receive up to 32 object records
2588 2749 * before receiving write records, so the list can have up to 32 nodes in it.
2589 2750 */
2590 2751 /* ARGSUSED */
2591 2752 static void
2592 2753 receive_read_prefetch(struct receive_arg *ra,
2593 2754 uint64_t object, uint64_t offset, uint64_t length)
2594 2755 {
2595 2756 if (!objlist_exists(&ra->ignore_objlist, object)) {
2596 2757 dmu_prefetch(ra->os, object, 1, offset, length,
2597 2758 ZIO_PRIORITY_SYNC_READ);
2598 2759 }
2599 2760 }
2600 2761
2601 2762 /*
2602 2763 * Read records off the stream, issuing any necessary prefetches.
2603 2764 */
2604 2765 static int
2605 2766 receive_read_record(struct receive_arg *ra)
2606 2767 {
2607 2768 int err;
2608 2769
2609 2770 switch (ra->rrd->header.drr_type) {
2610 2771 case DRR_OBJECT:
2611 2772 {
2612 2773 struct drr_object *drro = &ra->rrd->header.drr_u.drr_object;
2613 2774 uint32_t size = P2ROUNDUP(drro->drr_bonuslen, 8);
2614 2775 void *buf = kmem_zalloc(size, KM_SLEEP);
2615 2776 dmu_object_info_t doi;
2616 2777 err = receive_read_payload_and_next_header(ra, size, buf);
2617 2778 if (err != 0) {
2618 2779 kmem_free(buf, size);
2619 2780 return (err);
2620 2781 }
2621 2782 err = dmu_object_info(ra->os, drro->drr_object, &doi);
2622 2783 /*
2623 2784 * See receive_read_prefetch for an explanation of why we're
2624 2785 * storing this object in the ignore_objlist.
2625 2786 */
2626 2787 if (err == ENOENT ||
2627 2788 (err == 0 && doi.doi_data_block_size != drro->drr_blksz)) {
2628 2789 objlist_insert(&ra->ignore_objlist, drro->drr_object);
2629 2790 err = 0;
2630 2791 }
2631 2792 return (err);
2632 2793 }
2633 2794 case DRR_FREEOBJECTS:
2634 2795 {
2635 2796 err = receive_read_payload_and_next_header(ra, 0, NULL);
2636 2797 return (err);
2637 2798 }
2638 2799 case DRR_WRITE:
2639 2800 {
2640 2801 struct drr_write *drrw = &ra->rrd->header.drr_u.drr_write;
2641 2802 arc_buf_t *abuf;
2642 2803 boolean_t is_meta = DMU_OT_IS_METADATA(drrw->drr_type);
2643 2804 if (DRR_WRITE_COMPRESSED(drrw)) {
2644 2805 ASSERT3U(drrw->drr_compressed_size, >, 0);
2645 2806 ASSERT3U(drrw->drr_logical_size, >=,
2646 2807 drrw->drr_compressed_size);
2647 2808 ASSERT(!is_meta);
2648 2809 abuf = arc_loan_compressed_buf(
2649 2810 dmu_objset_spa(ra->os),
2650 2811 drrw->drr_compressed_size, drrw->drr_logical_size,
2651 2812 drrw->drr_compressiontype);
2652 2813 } else {
2653 2814 abuf = arc_loan_buf(dmu_objset_spa(ra->os),
2654 2815 is_meta, drrw->drr_logical_size);
2655 2816 }
2656 2817
2657 2818 err = receive_read_payload_and_next_header(ra,
2658 2819 DRR_WRITE_PAYLOAD_SIZE(drrw), abuf->b_data);
2659 2820 if (err != 0) {
2660 2821 dmu_return_arcbuf(abuf);
2661 2822 return (err);
2662 2823 }
2663 2824 ra->rrd->write_buf = abuf;
2664 2825 receive_read_prefetch(ra, drrw->drr_object, drrw->drr_offset,
2665 2826 drrw->drr_logical_size);
2666 2827 return (err);
2667 2828 }
2668 2829 case DRR_WRITE_BYREF:
2669 2830 {
2670 2831 struct drr_write_byref *drrwb =
2671 2832 &ra->rrd->header.drr_u.drr_write_byref;
2672 2833 err = receive_read_payload_and_next_header(ra, 0, NULL);
2673 2834 receive_read_prefetch(ra, drrwb->drr_object, drrwb->drr_offset,
2674 2835 drrwb->drr_length);
2675 2836 return (err);
2676 2837 }
2677 2838 case DRR_WRITE_EMBEDDED:
2678 2839 {
2679 2840 struct drr_write_embedded *drrwe =
2680 2841 &ra->rrd->header.drr_u.drr_write_embedded;
2681 2842 uint32_t size = P2ROUNDUP(drrwe->drr_psize, 8);
2682 2843 void *buf = kmem_zalloc(size, KM_SLEEP);
2683 2844
2684 2845 err = receive_read_payload_and_next_header(ra, size, buf);
2685 2846 if (err != 0) {
2686 2847 kmem_free(buf, size);
2687 2848 return (err);
2688 2849 }
2689 2850
2690 2851 receive_read_prefetch(ra, drrwe->drr_object, drrwe->drr_offset,
2691 2852 drrwe->drr_length);
2692 2853 return (err);
2693 2854 }
2694 2855 case DRR_FREE:
2695 2856 {
2696 2857 /*
2697 2858 * It might be beneficial to prefetch indirect blocks here, but
2698 2859 * we don't really have the data to decide for sure.
2699 2860 */
2700 2861 err = receive_read_payload_and_next_header(ra, 0, NULL);
2701 2862 return (err);
2702 2863 }
2703 2864 case DRR_END:
2704 2865 {
2705 - struct drr_end *drre = &ra->rrd->header.drr_u.drr_end;
2706 - if (!ZIO_CHECKSUM_EQUAL(ra->prev_cksum, drre->drr_checksum))
2707 - return (SET_ERROR(ECKSUM));
2866 + if (ra->krrp_task == NULL ||
2867 + ra->krrp_task->buffer_args.force_cksum) {
2868 + struct drr_end *drre = &ra->rrd->header.drr_u.drr_end;
2869 + if (!ZIO_CHECKSUM_EQUAL(ra->prev_cksum,
2870 + drre->drr_checksum))
2871 + return (SET_ERROR(ECKSUM));
2872 + }
2708 2873 return (0);
2709 2874 }
2710 2875 case DRR_SPILL:
2711 2876 {
2712 2877 struct drr_spill *drrs = &ra->rrd->header.drr_u.drr_spill;
2713 2878 void *buf = kmem_zalloc(drrs->drr_length, KM_SLEEP);
2714 2879 err = receive_read_payload_and_next_header(ra, drrs->drr_length,
2715 2880 buf);
2716 2881 if (err != 0)
2717 2882 kmem_free(buf, drrs->drr_length);
2718 2883 return (err);
2719 2884 }
2720 2885 default:
2721 2886 return (SET_ERROR(EINVAL));
2722 2887 }
2723 2888 }
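/*
 * Editor's note (illustrative, not part of the original source): the bonus
 * and embedded-write payloads above are padded to an 8-byte boundary before
 * being read off the stream. P2ROUNDUP() is from <sys/sysmacros.h>; for
 * example:
 */
#if 0
	ASSERT3U(P2ROUNDUP(10, 8), ==, 16);	/* 10-byte bonuslen reads 16 */
	ASSERT3U(P2ROUNDUP(16, 8), ==, 16);	/* already aligned: unchanged */
#endif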
2724 2889
2725 2890 /*
2726 2891 * Commit the records to the pool.
2727 2892 */
2728 2893 static int
2729 2894 receive_process_record(struct receive_writer_arg *rwa,
2730 2895 struct receive_record_arg *rrd)
2731 2896 {
2732 2897 int err;
2733 2898
2734 2899 /* Processing in order, therefore bytes_read should be increasing. */
2735 2900 ASSERT3U(rrd->bytes_read, >=, rwa->bytes_read);
2736 2901 rwa->bytes_read = rrd->bytes_read;
2737 2902
2738 2903 switch (rrd->header.drr_type) {
2739 2904 case DRR_OBJECT:
2740 2905 {
2741 2906 struct drr_object *drro = &rrd->header.drr_u.drr_object;
2742 2907 err = receive_object(rwa, drro, rrd->payload);
2743 2908 kmem_free(rrd->payload, rrd->payload_size);
2744 2909 rrd->payload = NULL;
2745 2910 return (err);
2746 2911 }
2747 2912 case DRR_FREEOBJECTS:
2748 2913 {
2749 2914 struct drr_freeobjects *drrfo =
2750 2915 &rrd->header.drr_u.drr_freeobjects;
2751 2916 return (receive_freeobjects(rwa, drrfo));
2752 2917 }
2753 2918 case DRR_WRITE:
2754 2919 {
2755 2920 struct drr_write *drrw = &rrd->header.drr_u.drr_write;
2756 2921 err = receive_write(rwa, drrw, rrd->write_buf);
2757 2922 /* if receive_write() is successful, it consumes the arc_buf */
2758 2923 if (err != 0)
2759 2924 dmu_return_arcbuf(rrd->write_buf);
2760 2925 rrd->write_buf = NULL;
2761 2926 rrd->payload = NULL;
2762 2927 return (err);
2763 2928 }
2764 2929 case DRR_WRITE_BYREF:
2765 2930 {
2766 2931 struct drr_write_byref *drrwbr =
2767 2932 &rrd->header.drr_u.drr_write_byref;
2768 2933 return (receive_write_byref(rwa, drrwbr));
2769 2934 }
2770 2935 case DRR_WRITE_EMBEDDED:
2771 2936 {
2772 2937 struct drr_write_embedded *drrwe =
2773 2938 &rrd->header.drr_u.drr_write_embedded;
2774 2939 err = receive_write_embedded(rwa, drrwe, rrd->payload);
2775 2940 kmem_free(rrd->payload, rrd->payload_size);
2776 2941 rrd->payload = NULL;
2777 2942 return (err);
2778 2943 }
2779 2944 case DRR_FREE:
2780 2945 {
2781 2946 struct drr_free *drrf = &rrd->header.drr_u.drr_free;
2782 2947 return (receive_free(rwa, drrf));
2783 2948 }
2784 2949 case DRR_SPILL:
2785 2950 {
2786 2951 struct drr_spill *drrs = &rrd->header.drr_u.drr_spill;
2787 2952 err = receive_spill(rwa, drrs, rrd->payload);
2788 2953 kmem_free(rrd->payload, rrd->payload_size);
2789 2954 rrd->payload = NULL;
2790 2955 return (err);
2791 2956 }
2792 2957 default:
2793 2958 return (SET_ERROR(EINVAL));
2794 2959 }
2795 2960 }
2796 2961
2797 2962 /*
2798 2963 * dmu_recv_stream's worker thread; pull records off the queue, and then call
2798 2963 * receive_process_record. When we're done, signal the main thread and exit.
2800 2965 */
2801 2966 static void
2802 2967 receive_writer_thread(void *arg)
2803 2968 {
2804 2969 struct receive_writer_arg *rwa = arg;
2805 2970 struct receive_record_arg *rrd;
2806 2971 for (rrd = bqueue_dequeue(&rwa->q); !rrd->eos_marker;
2807 2972 rrd = bqueue_dequeue(&rwa->q)) {
2808 2973 /*
2809 2974 * If there's an error, the main thread will stop putting things
2810 2975 * on the queue, but we need to clear everything in it before we
2811 2976 * can exit.
2812 2977 */
2813 2978 if (rwa->err == 0) {
2814 2979 rwa->err = receive_process_record(rwa, rrd);
2815 2980 } else if (rrd->write_buf != NULL) {
2816 2981 dmu_return_arcbuf(rrd->write_buf);
2817 2982 rrd->write_buf = NULL;
2818 2983 rrd->payload = NULL;
2819 2984 } else if (rrd->payload != NULL) {
2820 2985 kmem_free(rrd->payload, rrd->payload_size);
2821 2986 rrd->payload = NULL;
2822 2987 }
2823 2988 kmem_free(rrd, sizeof (*rrd));
2824 2989 }
2825 2990 kmem_free(rrd, sizeof (*rrd));
2826 2991 mutex_enter(&rwa->mutex);
2827 2992 rwa->done = B_TRUE;
2828 2993 cv_signal(&rwa->cv);
2829 2994 mutex_exit(&rwa->mutex);
2830 2995 thread_exit();
2831 2996 }
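/*
 * Illustrative sketch (editor's addition): the shutdown handshake the writer
 * thread above relies on. The producer enqueues a final record with
 * eos_marker set and then waits for rwa->done; dmu_recv_stream() below does
 * exactly this. The function name signal_eos_example is hypothetical.
 */
#if 0
static void
signal_eos_example(struct receive_writer_arg *rwa)
{
	struct receive_record_arg *eos;

	eos = kmem_zalloc(sizeof (*eos), KM_SLEEP);
	eos->eos_marker = B_TRUE;
	bqueue_enqueue(&rwa->q, eos, 1);

	mutex_enter(&rwa->mutex);
	while (!rwa->done)
		cv_wait(&rwa->cv, &rwa->mutex);
	mutex_exit(&rwa->mutex);
}
#endif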
2832 2997
2833 2998 static int
2834 2999 resume_check(struct receive_arg *ra, nvlist_t *begin_nvl)
2835 3000 {
2836 3001 uint64_t val;
2837 3002 objset_t *mos = dmu_objset_pool(ra->os)->dp_meta_objset;
2838 3003 uint64_t dsobj = dmu_objset_id(ra->os);
2839 3004 uint64_t resume_obj, resume_off;
2840 3005
2841 3006 if (nvlist_lookup_uint64(begin_nvl,
2842 3007 "resume_object", &resume_obj) != 0 ||
2843 3008 nvlist_lookup_uint64(begin_nvl,
2844 3009 "resume_offset", &resume_off) != 0) {
2845 3010 return (SET_ERROR(EINVAL));
2846 3011 }
2847 3012 VERIFY0(zap_lookup(mos, dsobj,
2848 3013 DS_FIELD_RESUME_OBJECT, sizeof (val), 1, &val));
2849 3014 if (resume_obj != val)
2850 3015 return (SET_ERROR(EINVAL));
2851 3016 VERIFY0(zap_lookup(mos, dsobj,
2852 3017 DS_FIELD_RESUME_OFFSET, sizeof (val), 1, &val));
2853 3018 if (resume_off != val)
2854 3019 return (SET_ERROR(EINVAL));
2855 3020
2856 3021 return (0);
2857 3022 }
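/*
 * Illustrative sketch (editor's addition): the shape of the begin_nvl
 * payload that resume_check() expects. A resuming sender embeds the object
 * and offset it restarts from; both must match the DS_FIELD_RESUME_* ZAP
 * entries left behind by the interrupted receive. The values below are
 * hypothetical.
 */
#if 0
	uint64_t resume_obj = 37;
	uint64_t resume_off = 0x400000;
	nvlist_t *begin_nvl = fnvlist_alloc();

	fnvlist_add_uint64(begin_nvl, "resume_object", resume_obj);
	fnvlist_add_uint64(begin_nvl, "resume_offset", resume_off);
	/* ... packed by the sender into the BEGIN record's payload ... */
	fnvlist_free(begin_nvl);
#endif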
2858 3023
2859 3024 /*
2860 3025 * Read in the stream's records, one by one, and apply them to the pool. There
2861 3026 * are two threads involved; the thread that calls this function will spin up a
2862 3027 * worker thread, read the records off the stream one by one, and issue
2863 3028 * prefetches for any necessary indirect blocks. It will then push the records
2864 3029 * onto an internal blocking queue. The worker thread will pull the records off
2865 3030 * the queue, and actually write the data into the DMU. This way, the worker
2866 3031 * thread doesn't have to wait for reads to complete, since everything it needs
2867 3032 * (the indirect blocks) will be prefetched.
2868 3033 *
2869 3034 * NB: callers *must* call dmu_recv_end() if this succeeds.
2870 3035 */
2871 3036 int
2872 3037 dmu_recv_stream(dmu_recv_cookie_t *drc, vnode_t *vp, offset_t *voffp,
2873 - int cleanup_fd, uint64_t *action_handlep)
3038 + int cleanup_fd, uint64_t *action_handlep, dmu_krrp_task_t *krrp_task)
2874 3039 {
2875 3040 int err = 0;
2876 3041 struct receive_arg ra = { 0 };
2877 3042 struct receive_writer_arg rwa = { 0 };
2878 3043 int featureflags;
2879 3044 nvlist_t *begin_nvl = NULL;
2880 3045
2881 3046 ra.byteswap = drc->drc_byteswap;
2882 3047 ra.cksum = drc->drc_cksum;
2883 3048 ra.vp = vp;
2884 3049 ra.voff = *voffp;
3050 + ra.krrp_task = krrp_task;
2885 3051
2886 3052 if (dsl_dataset_is_zapified(drc->drc_ds)) {
2887 3053 (void) zap_lookup(drc->drc_ds->ds_dir->dd_pool->dp_meta_objset,
2888 3054 drc->drc_ds->ds_object, DS_FIELD_RESUME_BYTES,
2889 3055 sizeof (ra.bytes_read), 1, &ra.bytes_read);
2890 3056 }
2891 3057
2892 3058 objlist_create(&ra.ignore_objlist);
2893 3059
2894 3060 /* these were verified in dmu_recv_begin */
2895 3061 ASSERT3U(DMU_GET_STREAM_HDRTYPE(drc->drc_drrb->drr_versioninfo), ==,
2896 3062 DMU_SUBSTREAM);
2897 3063 ASSERT3U(drc->drc_drrb->drr_type, <, DMU_OST_NUMTYPES);
2898 3064
2899 3065 /*
2900 3066 * Open the objset we are modifying.
2901 3067 */
2902 3068 VERIFY0(dmu_objset_from_ds(drc->drc_ds, &ra.os));
2903 3069
2904 3070 ASSERT(dsl_dataset_phys(drc->drc_ds)->ds_flags & DS_FLAG_INCONSISTENT);
2905 3071
2906 3072 featureflags = DMU_GET_FEATUREFLAGS(drc->drc_drrb->drr_versioninfo);
2907 3073
2908 3074 /* if this stream is dedup'ed, set up the avl tree for guid mapping */
2909 3075 if (featureflags & DMU_BACKUP_FEATURE_DEDUP) {
2910 3076 minor_t minor;
2911 3077
2912 3078 if (cleanup_fd == -1) {
2913 3079 err = SET_ERROR(EBADF);
2914 3080 goto out;
2915 3081 }
2916 3082 err = zfs_onexit_fd_hold(cleanup_fd, &minor);
2917 3083 if (err != 0) {
2918 3084 cleanup_fd = -1;
2919 3085 goto out;
2920 3086 }
2921 3087
2922 3088 if (*action_handlep == 0) {
2923 3089 rwa.guid_to_ds_map =
2924 3090 kmem_alloc(sizeof (avl_tree_t), KM_SLEEP);
2925 3091 avl_create(rwa.guid_to_ds_map, guid_compare,
2926 3092 sizeof (guid_map_entry_t),
2927 3093 offsetof(guid_map_entry_t, avlnode));
2928 3094 err = zfs_onexit_add_cb(minor,
2929 3095 free_guid_map_onexit, rwa.guid_to_ds_map,
2930 3096 action_handlep);
2931 3097 if (err != 0)
2932 3098 goto out;
2933 3099 } else {
2934 3100 err = zfs_onexit_cb_data(minor, *action_handlep,
2935 3101 (void **)&rwa.guid_to_ds_map);
2936 3102 if (err != 0)
2937 3103 goto out;
2938 3104 }
2939 3105
2940 3106 drc->drc_guid_to_ds_map = rwa.guid_to_ds_map;
2941 3107 }
2942 3108
2943 3109 uint32_t payloadlen = drc->drc_drr_begin->drr_payloadlen;
2944 3110 void *payload = NULL;
2945 3111 if (payloadlen != 0)
2946 3112 payload = kmem_alloc(payloadlen, KM_SLEEP);
2947 3113
2948 3114 err = receive_read_payload_and_next_header(&ra, payloadlen, payload);
2949 3115 if (err != 0) {
2950 3116 if (payloadlen != 0)
2951 3117 kmem_free(payload, payloadlen);
2952 3118 goto out;
2953 3119 }
2954 3120 if (payloadlen != 0) {
2955 3121 err = nvlist_unpack(payload, payloadlen, &begin_nvl, KM_SLEEP);
2956 3122 kmem_free(payload, payloadlen);
2957 3123 if (err != 0)
2958 3124 goto out;
2959 3125 }
2960 3126
2961 3127 if (featureflags & DMU_BACKUP_FEATURE_RESUMING) {
2962 3128 err = resume_check(&ra, begin_nvl);
2963 3129 if (err != 0)
2964 3130 goto out;
2965 3131 }
2966 3132
2967 3133 (void) bqueue_init(&rwa.q, zfs_recv_queue_length,
2968 3134 offsetof(struct receive_record_arg, node));
2969 3135 cv_init(&rwa.cv, NULL, CV_DEFAULT, NULL);
2970 3136 mutex_init(&rwa.mutex, NULL, MUTEX_DEFAULT, NULL);
2971 3137 rwa.os = ra.os;
2972 3138 rwa.byteswap = drc->drc_byteswap;
2973 3139 rwa.resumable = drc->drc_resumable;
2974 3140
2975 3141 (void) thread_create(NULL, 0, receive_writer_thread, &rwa, 0, curproc,
2976 3142 TS_RUN, minclsyspri);
2977 3143 /*
2978 3144 * We're reading rwa.err without locks, which is safe since we are the
2979 3145 * only reader, and the worker thread is the only writer. It's ok if we
2980 3146 * miss a write for an iteration or two of the loop, since the writer
2981 3147 * thread will keep freeing records we send it until we send it an eos
2982 3148 * marker.
2983 3149 *
2984 3150 * We can leave this loop in 3 ways: First, if rwa.err is
2985 3151 * non-zero. In that case, the writer thread will free the rrd we just
2986 3152 * pushed. Second, if we're interrupted; in that case, either it's the
2987 3153 * first loop and ra.rrd was never allocated, or it's later, and ra.rrd
2988 3154 * has been handed off to the writer thread who will free it. Finally,
2989 3155 * if receive_read_record fails or we're at the end of the stream, then
2990 3156 * we free ra.rrd and exit.
2991 3157 */
2992 3158 while (rwa.err == 0) {
2993 - if (issig(JUSTLOOKING) && issig(FORREAL)) {
3159 + if (vp && issig(JUSTLOOKING) && issig(FORREAL)) {
2994 3160 err = SET_ERROR(EINTR);
2995 3161 break;
2996 3162 }
2997 3163
2998 3164 ASSERT3P(ra.rrd, ==, NULL);
2999 3165 ra.rrd = ra.next_rrd;
3000 3166 ra.next_rrd = NULL;
3001 3167 /* Allocates and loads header into ra.next_rrd */
3002 3168 err = receive_read_record(&ra);
3003 3169
3004 3170 if (ra.rrd->header.drr_type == DRR_END || err != 0) {
3005 3171 kmem_free(ra.rrd, sizeof (*ra.rrd));
3006 3172 ra.rrd = NULL;
3007 3173 break;
3008 3174 }
3009 3175
3010 3176 bqueue_enqueue(&rwa.q, ra.rrd,
3011 3177 sizeof (struct receive_record_arg) + ra.rrd->payload_size);
3012 3178 ra.rrd = NULL;
3013 3179 }
3014 3180 if (ra.next_rrd == NULL)
3015 3181 ra.next_rrd = kmem_zalloc(sizeof (*ra.next_rrd), KM_SLEEP);
3016 3182 ra.next_rrd->eos_marker = B_TRUE;
3017 3183 bqueue_enqueue(&rwa.q, ra.next_rrd, 1);
3018 3184
3019 3185 mutex_enter(&rwa.mutex);
3020 3186 while (!rwa.done) {
3021 3187 cv_wait(&rwa.cv, &rwa.mutex);
3022 3188 }
3023 3189 mutex_exit(&rwa.mutex);
3024 3190
3025 3191 cv_destroy(&rwa.cv);
3026 3192 mutex_destroy(&rwa.mutex);
3027 3193 bqueue_destroy(&rwa.q);
3028 3194 if (err == 0)
3029 3195 err = rwa.err;
3030 3196
3031 3197 out:
3032 3198 nvlist_free(begin_nvl);
3033 3199 if ((featureflags & DMU_BACKUP_FEATURE_DEDUP) && (cleanup_fd != -1))
3034 3200 zfs_onexit_fd_rele(cleanup_fd);
3035 3201
3036 3202 if (err != 0) {
3037 3203 /*
3038 3204 * Clean up references. If receive is not resumable,
3039 3205 * destroy what we created, so we don't leave it in
3040 3206 * an inconsistent state.
3041 3207 */
3042 3208 dmu_recv_cleanup_ds(drc);
3043 3209 }
3044 3210
3045 3211 *voffp = ra.voff;
3046 3212 objlist_destroy(&ra.ignore_objlist);
3047 3213 return (err);
3048 3214 }
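/*
 * Illustrative sketch (editor's addition): the call sequence a consumer of
 * this interface follows. Error handling is elided, and the dmu_recv_begin()
 * argument list is abbreviated and assumed, not verified against this tree;
 * see zfs_ioc_recv() for the real usage. tofs, tosnap, drr_begin, origin,
 * vp, and cleanup_fd are assumed to come from the caller.
 */
#if 0
	dmu_recv_cookie_t drc;
	offset_t voff = 0;
	uint64_t action_handle = 0;
	int err;

	err = dmu_recv_begin(tofs, tosnap, drr_begin, B_FALSE /* force */,
	    B_FALSE /* resumable */, origin, &drc);
	if (err == 0)
		err = dmu_recv_stream(&drc, vp, &voff, cleanup_fd,
		    &action_handle, NULL /* no krrp task */);
	if (err == 0)
		err = dmu_recv_end(&drc, NULL);	/* required on success */
#endif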
3049 3215
3050 3216 static int
3051 3217 dmu_recv_end_check(void *arg, dmu_tx_t *tx)
3052 3218 {
3053 3219 dmu_recv_cookie_t *drc = arg;
3054 3220 dsl_pool_t *dp = dmu_tx_pool(tx);
3055 3221 int error;
3056 3222
3057 3223 ASSERT3P(drc->drc_ds->ds_owner, ==, dmu_recv_tag);
3058 3224
3225 + if (spa_feature_is_active(dp->dp_spa, SPA_FEATURE_WBC)) {
3226 + objset_t *os = NULL;
3227 +
3228 + error = dmu_objset_from_ds(drc->drc_ds, &os);
3229 + if (error)
3230 + return (error);
3231 +
3232 + /* Receiving into a dataset that uses WBC is not allowed */
3233 + if (os->os_wbc_mode != ZFS_WBC_MODE_OFF)
3234 + return (SET_ERROR(EKZFS_WBCNOTSUP));
3235 + }
3236 +
3059 3237 if (!drc->drc_newfs) {
3060 3238 dsl_dataset_t *origin_head;
3061 3239
3062 3240 error = dsl_dataset_hold(dp, drc->drc_tofs, FTAG, &origin_head);
3063 3241 if (error != 0)
3064 3242 return (error);
3065 3243 if (drc->drc_force) {
3066 3244 /*
3067 3245 * We will destroy any snapshots in tofs (i.e. before
3068 3246 * origin_head) that are after the origin (which is
3069 3247 * the snap before drc_ds, because drc_ds can not
3070 3248 * have any snaps of its own).
3071 3249 */
3072 3250 uint64_t obj;
3073 3251
3074 3252 obj = dsl_dataset_phys(origin_head)->ds_prev_snap_obj;
3075 3253 while (obj !=
3076 3254 dsl_dataset_phys(drc->drc_ds)->ds_prev_snap_obj) {
3077 3255 dsl_dataset_t *snap;
3078 3256 error = dsl_dataset_hold_obj(dp, obj, FTAG,
3079 3257 &snap);
3080 3258 if (error != 0)
3081 3259 break;
3082 3260 if (snap->ds_dir != origin_head->ds_dir)
3083 3261 error = SET_ERROR(EINVAL);
3084 3262 if (error == 0) {
3085 3263 error = dsl_destroy_snapshot_check_impl(
3086 3264 snap, B_FALSE);
3087 3265 }
3088 3266 obj = dsl_dataset_phys(snap)->ds_prev_snap_obj;
3089 3267 dsl_dataset_rele(snap, FTAG);
3090 3268 if (error != 0)
3091 3269 break;
3092 3270 }
3093 3271 if (error != 0) {
3094 3272 dsl_dataset_rele(origin_head, FTAG);
3095 3273 return (error);
3096 3274 }
3097 3275 }
3098 3276 error = dsl_dataset_clone_swap_check_impl(drc->drc_ds,
3099 3277 origin_head, drc->drc_force, drc->drc_owner, tx);
3100 3278 if (error != 0) {
3101 3279 dsl_dataset_rele(origin_head, FTAG);
3102 3280 return (error);
3103 3281 }
3104 3282 error = dsl_dataset_snapshot_check_impl(origin_head,
3105 3283 drc->drc_tosnap, tx, B_TRUE, 1, drc->drc_cred);
3106 3284 dsl_dataset_rele(origin_head, FTAG);
3107 3285 if (error != 0)
3108 3286 return (error);
3109 3287
3110 3288 error = dsl_destroy_head_check_impl(drc->drc_ds, 1);
3111 3289 } else {
3112 3290 error = dsl_dataset_snapshot_check_impl(drc->drc_ds,
3113 3291 drc->drc_tosnap, tx, B_TRUE, 1, drc->drc_cred);
3114 3292 }
3293 +
3294 + if (error == 0 && dmu_tx_is_syncing(tx) && drc->drc_krrp_task != NULL) {
3295 + const char *token =
3296 + drc->drc_krrp_task->buffer_args.to_ds;
3297 + const char *cookie = drc->drc_krrp_task->cookie;
3299 +
3300 + if (*token != '\0') {
3301 + error = zap_update(dp->dp_meta_objset,
3302 + DMU_POOL_DIRECTORY_OBJECT, token, 1,
3303 + strlen(cookie) + 1, cookie, tx);
3304 + }
3305 + }
3115 3306 return (error);
3116 3307 }
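/*
 * Illustrative sketch (editor's addition): the KRRP cookie stored above
 * under DMU_POOL_DIRECTORY_OBJECT can later be read back by its token name.
 * The buffer size here is an assumption.
 */
#if 0
	char cookie[MAXNAMELEN];

	error = zap_lookup(dp->dp_meta_objset, DMU_POOL_DIRECTORY_OBJECT,
	    token, 1, sizeof (cookie), cookie);
#endif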
3117 3308
3118 3309 static void
3119 3310 dmu_recv_end_sync(void *arg, dmu_tx_t *tx)
3120 3311 {
3121 3312 dmu_recv_cookie_t *drc = arg;
3122 3313 dsl_pool_t *dp = dmu_tx_pool(tx);
3123 3314
3124 3315 spa_history_log_internal_ds(drc->drc_ds, "finish receiving",
3125 3316 tx, "snap=%s", drc->drc_tosnap);
3126 3317
3127 3318 if (!drc->drc_newfs) {
3128 3319 dsl_dataset_t *origin_head;
3129 3320
3130 3321 VERIFY0(dsl_dataset_hold(dp, drc->drc_tofs, FTAG,
3131 3322 &origin_head));
3132 3323
3133 3324 if (drc->drc_force) {
3134 3325 /*
3135 3326 * Destroy any snapshots of drc_tofs (origin_head)
3136 3327 * after the origin (the snap before drc_ds).
3137 3328 */
3138 3329 uint64_t obj;
3139 3330
3140 3331 obj = dsl_dataset_phys(origin_head)->ds_prev_snap_obj;
3141 3332 while (obj !=
3142 3333 dsl_dataset_phys(drc->drc_ds)->ds_prev_snap_obj) {
3143 3334 dsl_dataset_t *snap;
3144 3335 VERIFY0(dsl_dataset_hold_obj(dp, obj, FTAG,
3145 3336 &snap));
3146 3337 ASSERT3P(snap->ds_dir, ==, origin_head->ds_dir);
3147 3338 obj = dsl_dataset_phys(snap)->ds_prev_snap_obj;
3148 3339 dsl_destroy_snapshot_sync_impl(snap,
3149 3340 B_FALSE, tx);
3150 3341 dsl_dataset_rele(snap, FTAG);
3151 3342 }
3152 3343 }
3153 3344 VERIFY3P(drc->drc_ds->ds_prev, ==,
3154 3345 origin_head->ds_prev);
3155 3346
3156 3347 dsl_dataset_clone_swap_sync_impl(drc->drc_ds,
3157 3348 origin_head, tx);
3158 3349 dsl_dataset_snapshot_sync_impl(origin_head,
3159 3350 drc->drc_tosnap, tx);
3160 3351
3161 3352 /* set snapshot's creation time and guid */
3162 3353 dmu_buf_will_dirty(origin_head->ds_prev->ds_dbuf, tx);
3163 3354 dsl_dataset_phys(origin_head->ds_prev)->ds_creation_time =
3164 3355 drc->drc_drrb->drr_creation_time;
3165 3356 dsl_dataset_phys(origin_head->ds_prev)->ds_guid =
3166 3357 drc->drc_drrb->drr_toguid;
3167 3358 dsl_dataset_phys(origin_head->ds_prev)->ds_flags &=
3168 3359 ~DS_FLAG_INCONSISTENT;
3169 3360
3170 3361 dmu_buf_will_dirty(origin_head->ds_dbuf, tx);
3171 3362 dsl_dataset_phys(origin_head)->ds_flags &=
3172 3363 ~DS_FLAG_INCONSISTENT;
3173 3364
3174 3365 drc->drc_newsnapobj =
3175 3366 dsl_dataset_phys(origin_head)->ds_prev_snap_obj;
3176 3367
3177 3368 dsl_dataset_rele(origin_head, FTAG);
3178 3369 dsl_destroy_head_sync_impl(drc->drc_ds, tx);
3179 3370
3180 3371 if (drc->drc_owner != NULL)
3181 3372 VERIFY3P(origin_head->ds_owner, ==, drc->drc_owner);
3182 3373 } else {
3183 3374 dsl_dataset_t *ds = drc->drc_ds;
3184 3375
3185 3376 dsl_dataset_snapshot_sync_impl(ds, drc->drc_tosnap, tx);
3186 3377
3187 3378 /* set snapshot's creation time and guid */
3188 3379 dmu_buf_will_dirty(ds->ds_prev->ds_dbuf, tx);
3189 3380 dsl_dataset_phys(ds->ds_prev)->ds_creation_time =
3190 3381 drc->drc_drrb->drr_creation_time;
3191 3382 dsl_dataset_phys(ds->ds_prev)->ds_guid =
3192 3383 drc->drc_drrb->drr_toguid;
3193 3384 dsl_dataset_phys(ds->ds_prev)->ds_flags &=
3194 3385 ~DS_FLAG_INCONSISTENT;
3195 3386
3196 3387 dmu_buf_will_dirty(ds->ds_dbuf, tx);
3197 3388 dsl_dataset_phys(ds)->ds_flags &= ~DS_FLAG_INCONSISTENT;
3198 3389 if (dsl_dataset_has_resume_receive_state(ds)) {
3199 3390 (void) zap_remove(dp->dp_meta_objset, ds->ds_object,
3200 3391 DS_FIELD_RESUME_FROMGUID, tx);
3201 3392 (void) zap_remove(dp->dp_meta_objset, ds->ds_object,
3202 3393 DS_FIELD_RESUME_OBJECT, tx);
3203 3394 (void) zap_remove(dp->dp_meta_objset, ds->ds_object,
3204 3395 DS_FIELD_RESUME_OFFSET, tx);
3205 3396 (void) zap_remove(dp->dp_meta_objset, ds->ds_object,
3206 3397 DS_FIELD_RESUME_BYTES, tx);
3207 3398 (void) zap_remove(dp->dp_meta_objset, ds->ds_object,
3208 3399 DS_FIELD_RESUME_TOGUID, tx);
3209 3400 (void) zap_remove(dp->dp_meta_objset, ds->ds_object,
3210 3401 DS_FIELD_RESUME_TONAME, tx);
3211 3402 }
3212 3403 drc->drc_newsnapobj =
3213 3404 dsl_dataset_phys(drc->drc_ds)->ds_prev_snap_obj;
3214 3405 }
3215 3406 /*
3216 3407 * Release the hold from dmu_recv_begin. This must be done before
3217 3408 * we return to open context, so that when we free the dataset's dnode,
3218 3409 * we can evict its bonus buffer.
3219 3410 */
3220 3411 dsl_dataset_disown(drc->drc_ds, dmu_recv_tag);
3221 3412 drc->drc_ds = NULL;
3222 3413 }
3223 3414
3224 3415 static int
3225 3416 add_ds_to_guidmap(const char *name, avl_tree_t *guid_map, uint64_t snapobj)
3226 3417 {
3227 3418 dsl_pool_t *dp;
3228 3419 dsl_dataset_t *snapds;
3229 3420 guid_map_entry_t *gmep;
3230 3421 int err;
3231 3422
3232 3423 ASSERT(guid_map != NULL);
3233 3424
3234 3425 err = dsl_pool_hold(name, FTAG, &dp);
3235 3426 if (err != 0)
3236 3427 return (err);
3237 3428 gmep = kmem_alloc(sizeof (*gmep), KM_SLEEP);
3238 3429 err = dsl_dataset_hold_obj(dp, snapobj, gmep, &snapds);
3239 3430 if (err == 0) {
3240 3431 gmep->guid = dsl_dataset_phys(snapds)->ds_guid;
3241 3432 gmep->gme_ds = snapds;
3242 3433 avl_add(guid_map, gmep);
3243 3434 dsl_dataset_long_hold(snapds, gmep);
3244 3435 } else {
3245 3436 kmem_free(gmep, sizeof (*gmep));
3246 3437 }
3247 3438
3248 3439 dsl_pool_rele(dp, FTAG);
3249 3440 return (err);
3250 3441 }
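/*
 * Illustrative sketch (editor's addition): how a dedup'ed WRITE_BYREF record
 * resolves its reference through the guid map built above;
 * receive_write_byref() performs a lookup of this shape.
 */
#if 0
	guid_map_entry_t gmesrch, *gmep;
	avl_index_t where;

	gmesrch.guid = drrwbr->drr_refguid;
	gmep = avl_find(guid_map, &gmesrch, &where);
	if (gmep == NULL)
		return (SET_ERROR(EINVAL));	/* unknown refguid */
#endif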
3251 3442
3252 3443 static int dmu_recv_end_modified_blocks = 3;
3253 3444
3254 3445 static int
3255 3446 dmu_recv_existing_end(dmu_recv_cookie_t *drc)
3256 3447 {
3257 3448 #ifdef _KERNEL
3258 3449 /*
3259 3450 * We will be destroying the ds; make sure its origin is unmounted if
3260 3451 * necessary.
3261 3452 */
3262 3453 char name[ZFS_MAX_DATASET_NAME_LEN];
3263 3454 dsl_dataset_name(drc->drc_ds, name);
3264 3455 zfs_destroy_unmount_origin(name);
3265 3456 #endif
3266 3457
3267 3458 return (dsl_sync_task(drc->drc_tofs,
3268 3459 dmu_recv_end_check, dmu_recv_end_sync, drc,
3269 3460 dmu_recv_end_modified_blocks, ZFS_SPACE_CHECK_NORMAL));
3270 3461 }
3271 3462
3272 3463 static int
3273 3464 dmu_recv_new_end(dmu_recv_cookie_t *drc)
3274 3465 {
3275 3466 return (dsl_sync_task(drc->drc_tofs,
3276 3467 dmu_recv_end_check, dmu_recv_end_sync, drc,
3277 3468 dmu_recv_end_modified_blocks, ZFS_SPACE_CHECK_NORMAL));
3278 3469 }
3279 3470
3280 3471 int
3281 3472 dmu_recv_end(dmu_recv_cookie_t *drc, void *owner)
3282 3473 {
3283 3474 int error;
3284 3475
3285 3476 drc->drc_owner = owner;
3286 3477
3287 3478 if (drc->drc_newfs)
3288 3479 error = dmu_recv_new_end(drc);
3289 3480 else
3290 3481 error = dmu_recv_existing_end(drc);
3291 3482
3292 3483 if (error != 0) {
3293 3484 dmu_recv_cleanup_ds(drc);
3294 3485 } else if (drc->drc_guid_to_ds_map != NULL) {
3295 3486 (void) add_ds_to_guidmap(drc->drc_tofs,
3296 3487 drc->drc_guid_to_ds_map,
3297 3488 drc->drc_newsnapobj);
3298 3489 }
3299 3490 return (error);
3300 3491 }
3301 3492
3302 3493 /*
3303 3494 * Return TRUE if this objset is currently being received into.
3304 3495 */
3305 3496 boolean_t
3306 3497 dmu_objset_is_receiving(objset_t *os)
3307 3498 {
3308 3499 return (os->os_dsl_dataset != NULL &&
3309 3500 os->os_dsl_dataset->ds_owner == dmu_recv_tag);
3310 3501 }
3311 3502 }
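/*
 * Illustrative sketch (editor's addition): a typical guard built on this
 * predicate, refusing an operation while a receive is in flight. The EBUSY
 * choice is hypothetical.
 */
#if 0
	if (dmu_objset_is_receiving(os))
		return (SET_ERROR(EBUSY));
#endif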