NEX-19691 Unsuccessful mpt_sas IOC reset leads to the panic in no I/O to the pool - days later
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-20282 Add disk target queue depth tunable to mpt_sas
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Rob Gittins <rob.gittins@nexenta.com>
Reviewed by: Rick McNeal <rick.mcneal@nexenta.com>
NEX-19821 All SAS paths down sometimes does not cause panic and trigger automatic HA failover
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
9048 mpt_sas should not require targets to send SEP messages
Reviewed by: Dan McDonald <danmcd@joyent.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@joyent.com>
Reviewed by: Patrick Mooney <patrick.mooney@joyent.com>
Approved by: Gordon Ross <gwr@nexenta.com>
NEX-17446 cleanup of hot unplugged disks fails intermittently
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Evan Layton <evan.layton@nexenta.com>
Reviewed by: Rick McNeal <rick.mcneal@nexenta.com>
NEX-17944 HBA drivers don't need the redundant devfs_clean step
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Rick McNeal <rick.mcneal@nexenta.com>
NEX-17006 backport mpt_sas tri-mode parts support change
9044 Need support for mpt_sas tri-mode parts
9045 Clean up mpt_sas compiler warnings
9046 mptsas_handle_topo_change can return without its locks held
9047 workaround SAS3408 firmware issue
Reviewed by: Jerry Jelinek <jerry.jelinek@joyent.com>
Reviewed by: Hans Rosenfeld <hans.rosenfeld@joyent.com>
Reviewed by: Albert Lee <trisk@forkgnu.org>
Reviewed by: Yuri Pankov <yuripv@yuripv.net>
Approved by: Richard Lowe <richlowe@richlowe.net>
NEX-16174 scsi error messages should go to system log only
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-2100 vmem_hash_delete(ffffff5b5dee0000, 0, 1): bad free
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Marcel Telka <marcel@telka.sk>
NEX-6064 Son of single bad device causes outage a.k.a one disk fault
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-4418 SATA inquiry property generation doesn't work as advertised
Reviewed by: Dan McDonald <danmcd@omniti.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
NEX-3988 Single bad device causes outage a.k.a one disk fault
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Reviewed by: Kevin Crowe <kevin.crowe@nexenta.com>
NEX-3717 mptsas doesn't handle timeouts in mptsas_get_sata_guid()
Reviewed by: Gordon Ross <gordon.ross@nexenta.com>
Reviewed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Dan Fields <dan.fields@nexenta.com>
Reviewed by: Yuri Pankov <yuri.pankov@nexenta.com>
NEX-2103 12G mpt_sas needs additional minor enhancements
Revert OS-73 do not do IO complettions in the ISR
NEX-1889 mpt_sas should support 12G HBAs
4500 mptsas_hash_traverse() is unsafe, leads to missing devices
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Approved by: Albert Lee <trisk@nexenta.com>
backout 4500 mptsas_hash_traverse() is unsafe, leads to missing devices
4403 mpt_sas panic when pulling a drive
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Reviewed by: Albert Lee <trisk@nexenta.com>
Reviewed by: Andy Giles <illumos@ang.homedns.org>
Approved by: Robert Mustacchi <rm@joyent.com>
4500 mptsas_hash_traverse() is unsafe, leads to missing devices
Reviewed by: Hans Rosenfeld <hans.rosenfeld@nexenta.com>
Approved by: Albert Lee <trisk@nexenta.com>
NEX-1052 mptsas_do_passthru() does triggers assertion
OS-126 Creating a LUN for retired device results in sysevent loop
OS-91 mptsas does inquiry without setting pkt_time
OS-73 do not do IO complettions in the ISR
OS-61 Need ability for fault injection in mptsas
OS-87 pkt_reason not set accordingly when mpt_sas times out commands
OS-84 slow-io changes cause assertion failure
OS-62 slow io error detector is needed.
OS-59 remove automated target removal mechanism from mpt_sas.
Fix up some merges where we wanted the upstream version.
re #12927 rb4203 LSI 2008 mpt_sas
re #9517 rb4120 After single disk fault patch installed single disk fault still causes process hangs (fix gcc build)
re #9517 rb4120 After single disk fault patch installed single disk fault still causes process hangs
re #8346 rb2639 KT disk failures (fix lint/cstyle)
re #10443 rb3479 3.1.3 crash: BAD TRAP: type=e (#pf Page fault)
re #8346 rb2639 KT disk failures
re #7364 rb2201 "hddisco" hangs after unplugging both cables from JBOD (and NMS too)
re #8346 rb2639 KT disk failures
re #9636 rb2836 - mpt_sas should attempt an MUR reset at attach time.
--HG--
branch : stable-4.0
re #9636 rb2836 - mpt_sas should attempt an MUR reset at attach time.
re #7550 rb2134 lint-clean nza-kernel
re #6530 mpt_sas crash when more than 1 Initiator involved - ie HA
re #6834 rb1773 less verbosity in nfs4_rnode
re #6833 rb1771 less verbosity in mptsas
--- old/usr/src/uts/common/io/scsi/adapters/mpt_sas/mptsas.c
+++ new/usr/src/uts/common/io/scsi/adapters/mpt_sas/mptsas.c
1 1 /*
2 2 * CDDL HEADER START
3 3 *
4 4 * The contents of this file are subject to the terms of the
5 5 * Common Development and Distribution License (the "License").
6 6 * You may not use this file except in compliance with the License.
7 7 *
8 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 * or http://www.opensolaris.org/os/licensing.
10 10 * See the License for the specific language governing permissions
11 11 * and limitations under the License.
12 12 *
13 13 * When distributing Covered Code, include this CDDL HEADER in each
14 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 * If applicable, add the following below this CDDL HEADER, with the
16 16 * fields enclosed by brackets "[]" replaced with your own identifying
17 17 * information: Portions Copyright [yyyy] [name of copyright owner]
18 18 *
19 19 * CDDL HEADER END
20 20 */
21 21
22 22 /*
23 23 * Copyright (c) 2009, 2010, Oracle and/or its affiliates. All rights reserved.
24 - * Copyright 2016 Nexenta Systems, Inc. All rights reserved.
24 + * Copyright 2019 Nexenta Systems, Inc.
25 25 * Copyright (c) 2017, Joyent, Inc.
26 26 * Copyright 2014 OmniTI Computer Consulting, Inc. All rights reserved.
27 27 * Copyright (c) 2014, Tegile Systems Inc. All rights reserved.
28 28 */
29 29
30 30 /*
31 31 * Copyright (c) 2000 to 2010, LSI Corporation.
32 32 * All rights reserved.
33 33 *
34 34 * Redistribution and use in source and binary forms of all code within
35 35 * this file that is exclusively owned by LSI, with or without
36 36 * modification, is permitted provided that, in addition to the CDDL 1.0
37 37 * License requirements, the following conditions are met:
38 38 *
39 39 * Neither the name of the author nor the names of its contributors may be
40 40 * used to endorse or promote products derived from this software without
41 41 * specific prior written permission.
42 42 *
43 43 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
44 44 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
45 45 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
46 46 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
47 47 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
48 48 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
49 49 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
50 50 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
51 51 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
52 52 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
53 53 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
54 54 * DAMAGE.
55 55 */
56 56
57 57 /*
58 58 * mptsas - This is a driver based on LSI Logic's MPT2.0/2.5 interface.
59 59 *
60 60 */
61 61
62 62 #if defined(lint) || defined(DEBUG)
63 63 #define MPTSAS_DEBUG
64 64 #endif
65 65
66 66 /*
67 67 * standard header files.
68 68 */
69 69 #include <sys/note.h>
70 70 #include <sys/scsi/scsi.h>
71 71 #include <sys/pci.h>
72 72 #include <sys/file.h>
73 73 #include <sys/policy.h>
74 74 #include <sys/model.h>
75 75 #include <sys/sysevent.h>
76 76 #include <sys/sysevent/eventdefs.h>
77 77 #include <sys/sysevent/dr.h>
78 78 #include <sys/sata/sata_defs.h>
79 79 #include <sys/sata/sata_hba.h>
80 80 #include <sys/scsi/generic/sas.h>
81 81 #include <sys/scsi/impl/scsi_sas.h>
82 +#include <sys/sdt.h>
83 +#include <sys/mdi_impldefs.h>
82 84
83 85 #pragma pack(1)
84 86 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_type.h>
85 87 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2.h>
86 88 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_cnfg.h>
87 89 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_init.h>
88 90 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_ioc.h>
89 91 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_sas.h>
90 92 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_tool.h>
91 93 #include <sys/scsi/adapters/mpt_sas/mpi/mpi2_raid.h>
92 94 #pragma pack()
93 95
94 96 /*
95 97 * private header files.
96 98 *
97 99 */
98 100 #include <sys/scsi/impl/scsi_reset_notify.h>
99 101 #include <sys/scsi/adapters/mpt_sas/mptsas_var.h>
100 102 #include <sys/scsi/adapters/mpt_sas/mptsas_ioctl.h>
101 103 #include <sys/scsi/adapters/mpt_sas/mptsas_smhba.h>
102 104 #include <sys/scsi/adapters/mpt_sas/mptsas_hash.h>
103 105 #include <sys/raidioctl.h>
104 106
105 -#include <sys/fs/dv_node.h> /* devfs_clean */
106 -
107 107 /*
108 108 * FMA header files
109 109 */
110 110 #include <sys/ddifm.h>
111 111 #include <sys/fm/protocol.h>
112 112 #include <sys/fm/util.h>
113 113 #include <sys/fm/io/ddi.h>
114 114
115 115 /*
116 116 * autoconfiguration data and routines.
117 117 */
118 118 static int mptsas_attach(dev_info_t *dip, ddi_attach_cmd_t cmd);
119 119 static int mptsas_detach(dev_info_t *devi, ddi_detach_cmd_t cmd);
120 120 static int mptsas_power(dev_info_t *dip, int component, int level);
121 121
122 122 /*
123 123 * cb_ops function
124 124 */
125 125 static int mptsas_ioctl(dev_t dev, int cmd, intptr_t data, int mode,
126 126 cred_t *credp, int *rval);
127 127 #ifdef __sparc
128 128 static int mptsas_reset(dev_info_t *devi, ddi_reset_cmd_t cmd);
129 129 #else /* __sparc */
130 130 static int mptsas_quiesce(dev_info_t *devi);
131 131 #endif /* __sparc */
132 132
133 133 /*
134 134 * Resource initialization for hardware
135 135 */
136 136 static void mptsas_setup_cmd_reg(mptsas_t *mpt);
137 137 static void mptsas_disable_bus_master(mptsas_t *mpt);
138 138 static void mptsas_hba_fini(mptsas_t *mpt);
139 139 static void mptsas_cfg_fini(mptsas_t *mptsas_blkp);
140 140 static int mptsas_hba_setup(mptsas_t *mpt);
141 141 static void mptsas_hba_teardown(mptsas_t *mpt);
142 142 static int mptsas_config_space_init(mptsas_t *mpt);
143 143 static void mptsas_config_space_fini(mptsas_t *mpt);
144 144 static void mptsas_iport_register(mptsas_t *mpt);
145 145 static int mptsas_smp_setup(mptsas_t *mpt);
146 146 static void mptsas_smp_teardown(mptsas_t *mpt);
147 147 static int mptsas_enc_setup(mptsas_t *mpt);
148 148 static void mptsas_enc_teardown(mptsas_t *mpt);
149 149 static int mptsas_cache_create(mptsas_t *mpt);
150 150 static void mptsas_cache_destroy(mptsas_t *mpt);
151 151 static int mptsas_alloc_request_frames(mptsas_t *mpt);
152 152 static int mptsas_alloc_sense_bufs(mptsas_t *mpt);
153 153 static int mptsas_alloc_reply_frames(mptsas_t *mpt);
154 154 static int mptsas_alloc_free_queue(mptsas_t *mpt);
155 155 static int mptsas_alloc_post_queue(mptsas_t *mpt);
156 156 static void mptsas_alloc_reply_args(mptsas_t *mpt);
157 157 static int mptsas_alloc_extra_sgl_frame(mptsas_t *mpt, mptsas_cmd_t *cmd);
158 158 static void mptsas_free_extra_sgl_frame(mptsas_t *mpt, mptsas_cmd_t *cmd);
159 159 static int mptsas_init_chip(mptsas_t *mpt, int first_time);
160 160 static void mptsas_update_hashtab(mptsas_t *mpt);
161 161
162 162 /*
163 163 * SCSA function prototypes
164 164 */
165 165 static int mptsas_scsi_start(struct scsi_address *ap, struct scsi_pkt *pkt);
166 166 static int mptsas_scsi_reset(struct scsi_address *ap, int level);
167 167 static int mptsas_scsi_abort(struct scsi_address *ap, struct scsi_pkt *pkt);
168 168 static int mptsas_scsi_getcap(struct scsi_address *ap, char *cap, int tgtonly);
169 169 static int mptsas_scsi_setcap(struct scsi_address *ap, char *cap, int value,
170 170 int tgtonly);
171 171 static void mptsas_scsi_dmafree(struct scsi_address *ap, struct scsi_pkt *pkt);
172 172 static struct scsi_pkt *mptsas_scsi_init_pkt(struct scsi_address *ap,
173 173 struct scsi_pkt *pkt, struct buf *bp, int cmdlen, int statuslen,
174 174 int tgtlen, int flags, int (*callback)(), caddr_t arg);
175 175 static void mptsas_scsi_sync_pkt(struct scsi_address *ap, struct scsi_pkt *pkt);
176 176 static void mptsas_scsi_destroy_pkt(struct scsi_address *ap,
177 177 struct scsi_pkt *pkt);
178 178 static int mptsas_scsi_tgt_init(dev_info_t *hba_dip, dev_info_t *tgt_dip,
179 179 scsi_hba_tran_t *hba_tran, struct scsi_device *sd);
180 180 static void mptsas_scsi_tgt_free(dev_info_t *hba_dip, dev_info_t *tgt_dip,
181 181 scsi_hba_tran_t *hba_tran, struct scsi_device *sd);
182 182 static int mptsas_scsi_reset_notify(struct scsi_address *ap, int flag,
183 183 void (*callback)(caddr_t), caddr_t arg);
184 184 static int mptsas_get_name(struct scsi_device *sd, char *name, int len);
185 185 static int mptsas_get_bus_addr(struct scsi_device *sd, char *name, int len);
186 186 static int mptsas_scsi_quiesce(dev_info_t *dip);
187 187 static int mptsas_scsi_unquiesce(dev_info_t *dip);
188 188 static int mptsas_bus_config(dev_info_t *pdip, uint_t flags,
189 189 ddi_bus_config_op_t op, void *arg, dev_info_t **childp);
190 190
191 191 /*
192 192 * SMP functions
193 193 */
194 194 static int mptsas_smp_start(struct smp_pkt *smp_pkt);
195 195
196 196 /*
197 197 * internal function prototypes.
198 198 */
199 199 static void mptsas_list_add(mptsas_t *mpt);
200 200 static void mptsas_list_del(mptsas_t *mpt);
201 201
202 202 static int mptsas_quiesce_bus(mptsas_t *mpt);
203 203 static int mptsas_unquiesce_bus(mptsas_t *mpt);
204 204
205 205 static int mptsas_alloc_handshake_msg(mptsas_t *mpt, size_t alloc_size);
206 206 static void mptsas_free_handshake_msg(mptsas_t *mpt);
207 207
208 208 static void mptsas_ncmds_checkdrain(void *arg);
209 209
210 210 static int mptsas_prepare_pkt(mptsas_cmd_t *cmd);
211 211 static int mptsas_accept_pkt(mptsas_t *mpt, mptsas_cmd_t *sp);
212 212 static int mptsas_accept_txwq_and_pkt(mptsas_t *mpt, mptsas_cmd_t *sp);
213 213 static void mptsas_accept_tx_waitq(mptsas_t *mpt);
214 214
215 215 static int mptsas_do_detach(dev_info_t *dev);
216 216 static int mptsas_do_scsi_reset(mptsas_t *mpt, uint16_t devhdl);
217 217 static int mptsas_do_scsi_abort(mptsas_t *mpt, int target, int lun,
218 218 struct scsi_pkt *pkt);
219 219 static int mptsas_scsi_capchk(char *cap, int tgtonly, int *cidxp);
220 220
221 221 static void mptsas_handle_qfull(mptsas_t *mpt, mptsas_cmd_t *cmd);
222 222 static void mptsas_handle_event(void *args);
223 223 static int mptsas_handle_event_sync(void *args);
224 224 static void mptsas_handle_dr(void *args);
225 225 static void mptsas_handle_topo_change(mptsas_topo_change_list_t *topo_node,
226 226 dev_info_t *pdip);
227 227
228 228 static void mptsas_restart_cmd(void *);
229 229
230 230 static void mptsas_flush_hba(mptsas_t *mpt);
231 231 static void mptsas_flush_target(mptsas_t *mpt, ushort_t target, int lun,
232 232 uint8_t tasktype);
233 233 static void mptsas_set_pkt_reason(mptsas_t *mpt, mptsas_cmd_t *cmd,
234 234 uchar_t reason, uint_t stat);
235 235
236 236 static uint_t mptsas_intr(caddr_t arg1, caddr_t arg2);
237 237 static void mptsas_process_intr(mptsas_t *mpt,
238 238 pMpi2ReplyDescriptorsUnion_t reply_desc_union);
239 239 static void mptsas_handle_scsi_io_success(mptsas_t *mpt,
240 240 pMpi2ReplyDescriptorsUnion_t reply_desc);
241 241 static void mptsas_handle_address_reply(mptsas_t *mpt,
242 242 pMpi2ReplyDescriptorsUnion_t reply_desc);
243 243 static int mptsas_wait_intr(mptsas_t *mpt, int polltime);
244 244 static void mptsas_sge_setup(mptsas_t *mpt, mptsas_cmd_t *cmd,
245 245 uint32_t *control, pMpi2SCSIIORequest_t frame, ddi_acc_handle_t acc_hdl);
246 246
247 247 static void mptsas_watch(void *arg);
248 248 static void mptsas_watchsubr(mptsas_t *mpt);
249 249 static void mptsas_cmd_timeout(mptsas_t *mpt, mptsas_target_t *ptgt);
250 250
251 251 static void mptsas_start_passthru(mptsas_t *mpt, mptsas_cmd_t *cmd);
252 252 static int mptsas_do_passthru(mptsas_t *mpt, uint8_t *request, uint8_t *reply,
253 253 uint8_t *data, uint32_t request_size, uint32_t reply_size,
254 254 uint32_t data_size, uint32_t direction, uint8_t *dataout,
255 255 uint32_t dataout_size, short timeout, int mode);
256 256 static int mptsas_free_devhdl(mptsas_t *mpt, uint16_t devhdl);
257 257
258 258 static uint8_t mptsas_get_fw_diag_buffer_number(mptsas_t *mpt,
259 259 uint32_t unique_id);
260 260 static void mptsas_start_diag(mptsas_t *mpt, mptsas_cmd_t *cmd);
261 261 static int mptsas_post_fw_diag_buffer(mptsas_t *mpt,
262 262 mptsas_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code);
263 263 static int mptsas_release_fw_diag_buffer(mptsas_t *mpt,
264 264 mptsas_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code,
265 265 uint32_t diag_type);
266 266 static int mptsas_diag_register(mptsas_t *mpt,
267 267 mptsas_fw_diag_register_t *diag_register, uint32_t *return_code);
268 268 static int mptsas_diag_unregister(mptsas_t *mpt,
269 269 mptsas_fw_diag_unregister_t *diag_unregister, uint32_t *return_code);
270 270 static int mptsas_diag_query(mptsas_t *mpt, mptsas_fw_diag_query_t *diag_query,
271 271 uint32_t *return_code);
272 272 static int mptsas_diag_read_buffer(mptsas_t *mpt,
273 273 mptsas_diag_read_buffer_t *diag_read_buffer, uint8_t *ioctl_buf,
274 274 uint32_t *return_code, int ioctl_mode);
275 275 static int mptsas_diag_release(mptsas_t *mpt,
276 276 mptsas_fw_diag_release_t *diag_release, uint32_t *return_code);
277 277 static int mptsas_do_diag_action(mptsas_t *mpt, uint32_t action,
278 278 uint8_t *diag_action, uint32_t length, uint32_t *return_code,
279 279 int ioctl_mode);
280 280 static int mptsas_diag_action(mptsas_t *mpt, mptsas_diag_action_t *data,
281 281 int mode);
282 282
283 283 static int mptsas_pkt_alloc_extern(mptsas_t *mpt, mptsas_cmd_t *cmd,
284 284 int cmdlen, int tgtlen, int statuslen, int kf);
285 285 static void mptsas_pkt_destroy_extern(mptsas_t *mpt, mptsas_cmd_t *cmd);
286 286
287 287 static int mptsas_kmem_cache_constructor(void *buf, void *cdrarg, int kmflags);
288 288 static void mptsas_kmem_cache_destructor(void *buf, void *cdrarg);
289 289
290 290 static int mptsas_cache_frames_constructor(void *buf, void *cdrarg,
291 291 int kmflags);
292 292 static void mptsas_cache_frames_destructor(void *buf, void *cdrarg);
293 293
294 294 static void mptsas_check_scsi_io_error(mptsas_t *mpt, pMpi2SCSIIOReply_t reply,
295 295 mptsas_cmd_t *cmd);
296 296 static void mptsas_check_task_mgt(mptsas_t *mpt,
297 297 pMpi2SCSIManagementReply_t reply, mptsas_cmd_t *cmd);
298 298 static int mptsas_send_scsi_cmd(mptsas_t *mpt, struct scsi_address *ap,
299 299 mptsas_target_t *ptgt, uchar_t *cdb, int cdblen, struct buf *data_bp,
300 300 int *resid);
301 301
302 302 static int mptsas_alloc_active_slots(mptsas_t *mpt, int flag);
303 303 static void mptsas_free_active_slots(mptsas_t *mpt);
304 304 static int mptsas_start_cmd(mptsas_t *mpt, mptsas_cmd_t *cmd);
305 305
306 306 static void mptsas_restart_hba(mptsas_t *mpt);
307 307 static void mptsas_restart_waitq(mptsas_t *mpt);
308 308
309 309 static void mptsas_deliver_doneq_thread(mptsas_t *mpt);
310 310 static void mptsas_doneq_add(mptsas_t *mpt, mptsas_cmd_t *cmd);
311 311 static void mptsas_doneq_mv(mptsas_t *mpt, uint64_t t);
312 312
313 313 static mptsas_cmd_t *mptsas_doneq_thread_rm(mptsas_t *mpt, uint64_t t);
314 314 static void mptsas_doneq_empty(mptsas_t *mpt);
315 315 static void mptsas_doneq_thread(mptsas_doneq_thread_arg_t *arg);
316 316
317 317 static mptsas_cmd_t *mptsas_waitq_rm(mptsas_t *mpt);
318 318 static void mptsas_waitq_delete(mptsas_t *mpt, mptsas_cmd_t *cmd);
319 319 static mptsas_cmd_t *mptsas_tx_waitq_rm(mptsas_t *mpt);
320 320 static void mptsas_tx_waitq_delete(mptsas_t *mpt, mptsas_cmd_t *cmd);
321 321
322 322
323 323 static void mptsas_start_watch_reset_delay();
324 324 static void mptsas_setup_bus_reset_delay(mptsas_t *mpt);
325 325 static void mptsas_watch_reset_delay(void *arg);
326 326 static int mptsas_watch_reset_delay_subr(mptsas_t *mpt);
327 327
328 328 /*
329 329 * helper functions
330 330 */
331 331 static void mptsas_dump_cmd(mptsas_t *mpt, mptsas_cmd_t *cmd);
332 332
333 333 static dev_info_t *mptsas_find_child(dev_info_t *pdip, char *name);
334 334 static dev_info_t *mptsas_find_child_phy(dev_info_t *pdip, uint8_t phy);
335 335 static dev_info_t *mptsas_find_child_addr(dev_info_t *pdip, uint64_t sasaddr,
336 336 int lun);
337 337 static mdi_pathinfo_t *mptsas_find_path_addr(dev_info_t *pdip, uint64_t sasaddr,
338 338 int lun);
339 339 static mdi_pathinfo_t *mptsas_find_path_phy(dev_info_t *pdip, uint8_t phy);
340 340 static dev_info_t *mptsas_find_smp_child(dev_info_t *pdip, char *str_wwn);
341 341
342 342 static int mptsas_parse_address(char *name, uint64_t *wwid, uint8_t *phy,
343 343 int *lun);
344 344 static int mptsas_parse_smp_name(char *name, uint64_t *wwn);
345 345
346 346 static mptsas_target_t *mptsas_phy_to_tgt(mptsas_t *mpt,
347 347 mptsas_phymask_t phymask, uint8_t phy);
348 348 static mptsas_target_t *mptsas_wwid_to_ptgt(mptsas_t *mpt,
349 349 mptsas_phymask_t phymask, uint64_t wwid);
350 350 static mptsas_smp_t *mptsas_wwid_to_psmp(mptsas_t *mpt,
351 351 mptsas_phymask_t phymask, uint64_t wwid);
352 352
353 353 static int mptsas_inquiry(mptsas_t *mpt, mptsas_target_t *ptgt, int lun,
354 354 uchar_t page, unsigned char *buf, int len, int *rlen, uchar_t evpd);
355 355
356 356 static int mptsas_get_target_device_info(mptsas_t *mpt, uint32_t page_address,
357 357 uint16_t *handle, mptsas_target_t **pptgt);
358 358 static void mptsas_update_phymask(mptsas_t *mpt);
359 359
360 -static int mptsas_send_sep(mptsas_t *mpt, mptsas_target_t *ptgt,
360 +static int mptsas_flush_led_status(mptsas_t *mpt, mptsas_enclosure_t *mep,
361 + uint16_t idx);
362 +static int mptsas_send_sep(mptsas_t *mpt, mptsas_enclosure_t *mep, uint16_t idx,
361 363 uint32_t *status, uint8_t cmd);
362 364 static dev_info_t *mptsas_get_dip_from_dev(dev_t dev,
363 365 mptsas_phymask_t *phymask);
364 366 static mptsas_target_t *mptsas_addr_to_ptgt(mptsas_t *mpt, char *addr,
365 367 mptsas_phymask_t phymask);
366 -static int mptsas_flush_led_status(mptsas_t *mpt, mptsas_target_t *ptgt);
367 368
368 369
369 370 /*
370 371 * Enumeration / DR functions
371 372 */
372 373 static void mptsas_config_all(dev_info_t *pdip);
373 374 static int mptsas_config_one_addr(dev_info_t *pdip, uint64_t sasaddr, int lun,
374 375 dev_info_t **lundip);
375 376 static int mptsas_config_one_phy(dev_info_t *pdip, uint8_t phy, int lun,
376 377 dev_info_t **lundip);
377 378
378 379 static int mptsas_config_target(dev_info_t *pdip, mptsas_target_t *ptgt);
379 380 static int mptsas_offline_target(dev_info_t *pdip, char *name);
380 381
381 382 static int mptsas_config_raid(dev_info_t *pdip, uint16_t target,
382 383 dev_info_t **dip);
383 384
384 385 static int mptsas_config_luns(dev_info_t *pdip, mptsas_target_t *ptgt);
385 386 static int mptsas_probe_lun(dev_info_t *pdip, int lun,
386 387 dev_info_t **dip, mptsas_target_t *ptgt);
387 388
388 389 static int mptsas_create_lun(dev_info_t *pdip, struct scsi_inquiry *sd_inq,
389 390 dev_info_t **dip, mptsas_target_t *ptgt, int lun);
390 391
391 392 static int mptsas_create_phys_lun(dev_info_t *pdip, struct scsi_inquiry *sd,
392 393 char *guid, dev_info_t **dip, mptsas_target_t *ptgt, int lun);
393 394 static int mptsas_create_virt_lun(dev_info_t *pdip, struct scsi_inquiry *sd,
394 395 char *guid, dev_info_t **dip, mdi_pathinfo_t **pip, mptsas_target_t *ptgt,
395 396 int lun);
396 397
397 398 static void mptsas_offline_missed_luns(dev_info_t *pdip,
398 399 uint16_t *repluns, int lun_cnt, mptsas_target_t *ptgt);
399 -static int mptsas_offline_lun(dev_info_t *pdip, dev_info_t *rdip,
400 - mdi_pathinfo_t *rpip, uint_t flags);
400 +static int mptsas_offline_lun(dev_info_t *rdip, mdi_pathinfo_t *rpip);
401 401
402 402 static int mptsas_config_smp(dev_info_t *pdip, uint64_t sas_wwn,
403 403 dev_info_t **smp_dip);
404 -static int mptsas_offline_smp(dev_info_t *pdip, mptsas_smp_t *smp_node,
405 - uint_t flags);
404 +static int mptsas_offline_smp(dev_info_t *pdip, mptsas_smp_t *smp_node);
406 405
407 406 static int mptsas_event_query(mptsas_t *mpt, mptsas_event_query_t *data,
408 407 int mode, int *rval);
409 408 static int mptsas_event_enable(mptsas_t *mpt, mptsas_event_enable_t *data,
410 409 int mode, int *rval);
411 410 static int mptsas_event_report(mptsas_t *mpt, mptsas_event_report_t *data,
412 411 int mode, int *rval);
413 412 static void mptsas_record_event(void *args);
414 413 static int mptsas_reg_access(mptsas_t *mpt, mptsas_reg_access_t *data,
415 414 int mode);
416 415
417 416 mptsas_target_t *mptsas_tgt_alloc(refhash_t *, uint16_t, uint64_t,
418 417 uint32_t, mptsas_phymask_t, uint8_t);
419 418 static mptsas_smp_t *mptsas_smp_alloc(mptsas_t *, mptsas_smp_t *);
420 419 static int mptsas_online_smp(dev_info_t *pdip, mptsas_smp_t *smp_node,
421 420 dev_info_t **smp_dip);
422 421
423 422 /*
424 423 * Power management functions
425 424 */
426 425 static int mptsas_get_pci_cap(mptsas_t *mpt);
427 426 static int mptsas_init_pm(mptsas_t *mpt);
428 427
429 428 /*
430 429 * MPT MSI tunable:
431 430 *
432 431 * By default MSI is enabled on all supported platforms.
433 432 */
434 433 boolean_t mptsas_enable_msi = B_TRUE;
435 434 boolean_t mptsas_physical_bind_failed_page_83 = B_FALSE;
436 435
437 436 /*
438 437 * Global switch for use of MPI2.5 FAST PATH.
439 438 * We don't really know what FAST PATH actually does, so if it is suspected
440 439 * to cause problems it can be turned off by setting this variable to B_FALSE.
441 440 */
442 441 boolean_t mptsas_use_fastpath = B_TRUE;
443 442
444 443 static int mptsas_register_intrs(mptsas_t *);
445 444 static void mptsas_unregister_intrs(mptsas_t *);
446 445 static int mptsas_add_intrs(mptsas_t *, int);
447 446 static void mptsas_rem_intrs(mptsas_t *);
448 447
449 448 /*
450 449 * FMA Prototypes
451 450 */
452 451 static void mptsas_fm_init(mptsas_t *mpt);
453 452 static void mptsas_fm_fini(mptsas_t *mpt);
454 453 static int mptsas_fm_error_cb(dev_info_t *, ddi_fm_error_t *, const void *);
455 454
456 455 extern pri_t minclsyspri, maxclsyspri;
457 456
458 457 /*
459 458 * This device is created by the SCSI pseudo nexus driver (SCSI vHCI). It is
460 459 * under this device that the paths to a physical device are created when
461 460 * MPxIO is used.
462 461 */
463 462 extern dev_info_t *scsi_vhci_dip;
464 463
465 464 /*
466 465 * Tunable timeout value for Inquiry VPD page 0x83
467 466 * By default the value is 30 seconds.
468 467 */
469 468 int mptsas_inq83_retry_timeout = 30;
470 469
471 470 /*
471 + * Tunable for default SCSI pkt timeout. Defaults to 5 seconds, which should
472 + * be plenty for INQUIRY and REPORT_LUNS, which are the only commands currently
473 + * issued by mptsas directly.
474 + */
475 +int mptsas_scsi_pkt_time = 5;
476 +
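/*
 * Illustrative sketch, not part of the change under review: globals such as
 * mptsas_inq83_retry_timeout and mptsas_scsi_pkt_time are ordinary kernel
 * variables, so they can be overridden at boot from /etc/system (the module
 * name mpt_sas and the values below are assumptions/examples only):
 *
 *	set mpt_sas:mptsas_inq83_retry_timeout = 60
 *	set mpt_sas:mptsas_scsi_pkt_time = 10
 *
 * A reboot is required for /etc/system changes to take effect; at runtime
 * the same variables could instead be patched with mdb -kw.
 */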
477 +/*
472 478 * This is used to allocate memory for message frame storage, not for
473 479 * data I/O DMA. All message frames must be stored in the first 4G of
474 480 * physical memory.
475 481 */
476 482 ddi_dma_attr_t mptsas_dma_attrs = {
477 483 DMA_ATTR_V0, /* attribute layout version */
478 484 0x0ull, /* address low - should be 0 (longlong) */
479 485 0xffffffffull, /* address high - 32-bit max range */
480 486 0x00ffffffull, /* count max - max DMA object size */
481 487 4, /* allocation alignment requirements */
482 488 0x78, /* burstsizes - binary encoded values */
483 489 1, /* minxfer - gran. of DMA engine */
484 490 0x00ffffffull, /* maxxfer - gran. of DMA engine */
485 491 0xffffffffull, /* max segment size (DMA boundary) */
486 492 MPTSAS_MAX_DMA_SEGS, /* scatter/gather list length */
487 493 512, /* granularity - device transfer size */
488 494 0 /* flags, set to 0 */
489 495 };
490 496
491 497 /*
492 498 * This is used for data I/O DMA memory allocation. (full 64-bit DMA
493 499 * physical addresses are supported.)
494 500 */
495 501 ddi_dma_attr_t mptsas_dma_attrs64 = {
496 502 DMA_ATTR_V0, /* attribute layout version */
497 503 0x0ull, /* address low - should be 0 (longlong) */
498 504 0xffffffffffffffffull, /* address high - 64-bit max */
499 505 0x00ffffffull, /* count max - max DMA object size */
500 506 4, /* allocation alignment requirements */
501 507 0x78, /* burstsizes - binary encoded values */
502 508 1, /* minxfer - gran. of DMA engine */
503 509 0x00ffffffull, /* maxxfer - gran. of DMA engine */
504 510 0xffffffffull, /* max segment size (DMA boundary) */
505 511 MPTSAS_MAX_DMA_SEGS, /* scatter/gather list length */
506 512 512, /* granularity - device transfer size */
507 513 0 /* flags, set to 0 */
508 514 };
509 515
510 516 ddi_device_acc_attr_t mptsas_dev_attr = {
511 517 DDI_DEVICE_ATTR_V1,
512 518 DDI_STRUCTURE_LE_ACC,
513 519 DDI_STRICTORDER_ACC,
514 520 DDI_DEFAULT_ACC
515 521 };
516 522
517 523 static struct cb_ops mptsas_cb_ops = {
518 524 scsi_hba_open, /* open */
519 525 scsi_hba_close, /* close */
520 526 nodev, /* strategy */
521 527 nodev, /* print */
522 528 nodev, /* dump */
523 529 nodev, /* read */
524 530 nodev, /* write */
525 531 mptsas_ioctl, /* ioctl */
526 532 nodev, /* devmap */
527 533 nodev, /* mmap */
528 534 nodev, /* segmap */
529 535 nochpoll, /* chpoll */
530 536 ddi_prop_op, /* cb_prop_op */
531 537 NULL, /* streamtab */
532 538 D_MP, /* cb_flag */
533 539 CB_REV, /* rev */
534 540 nodev, /* aread */
535 541 nodev /* awrite */
536 542 };
537 543
538 544 static struct dev_ops mptsas_ops = {
539 545 DEVO_REV, /* devo_rev, */
540 546 0, /* refcnt */
541 547 ddi_no_info, /* info */
542 548 nulldev, /* identify */
543 549 nulldev, /* probe */
544 550 mptsas_attach, /* attach */
545 551 mptsas_detach, /* detach */
546 552 #ifdef __sparc
547 553 mptsas_reset,
548 554 #else
549 555 nodev, /* reset */
550 556 #endif /* __sparc */
551 557 &mptsas_cb_ops, /* driver operations */
552 558 NULL, /* bus operations */
553 559 mptsas_power, /* power management */
554 560 #ifdef __sparc
555 561 ddi_quiesce_not_needed
556 562 #else
557 563 mptsas_quiesce /* quiesce */
558 564 #endif /* __sparc */
559 565 };
560 566
561 567
562 568 #define MPTSAS_MOD_STRING "MPTSAS HBA Driver 00.00.00.24"
563 569
564 570 static struct modldrv modldrv = {
565 571 &mod_driverops, /* Type of module. This one is a driver */
566 572 MPTSAS_MOD_STRING, /* Name of the module. */
567 573 &mptsas_ops, /* driver ops */
568 574 };
569 575
570 576 static struct modlinkage modlinkage = {
571 577 MODREV_1, &modldrv, NULL
572 578 };
573 579 #define TARGET_PROP "target"
574 580 #define LUN_PROP "lun"
575 581 #define LUN64_PROP "lun64"
576 582 #define SAS_PROP "sas-mpt"
577 583 #define MDI_GUID "wwn"
578 584 #define NDI_GUID "guid"
579 585 #define MPTSAS_DEV_GONE "mptsas_dev_gone"
580 586
581 587 /*
582 588 * Local static data
583 589 */
584 590 #if defined(MPTSAS_DEBUG)
585 591 /*
586 592 * Flags to indicate which debug messages are to be printed and which go to the
587 593 * debug log ring buffer. Default is to not print anything, and to log
588 594 * everything except the watchsubr() output which normally happens every second.
589 595 */
590 596 uint32_t mptsas_debugprt_flags = 0x0;
591 597 uint32_t mptsas_debuglog_flags = ~(1U << 30);
592 598 #endif /* defined(MPTSAS_DEBUG) */
593 599 uint32_t mptsas_debug_resets = 0;
594 600
595 601 static kmutex_t mptsas_global_mutex;
596 602 static void *mptsas_state; /* soft state ptr */
597 603 static krwlock_t mptsas_global_rwlock;
598 604
599 605 static kmutex_t mptsas_log_mutex;
600 606 static char mptsas_log_buf[256];
601 607 _NOTE(MUTEX_PROTECTS_DATA(mptsas_log_mutex, mptsas_log_buf))
602 608
603 609 static mptsas_t *mptsas_head, *mptsas_tail;
604 610 static clock_t mptsas_scsi_watchdog_tick;
605 611 static clock_t mptsas_tick;
606 612 static timeout_id_t mptsas_reset_watch;
607 613 static timeout_id_t mptsas_timeout_id;
608 614 static int mptsas_timeouts_enabled = 0;
609 615
610 616 /*
611 617 * Default length for extended auto request sense buffers.
612 618 * All sense buffers need to be under the same alloc because there
613 619 * is only one common top 32bits (of 64bits) address register.
614 620 * Most requests only require 32 bytes, but some require more than 256.
615 621 * We use rmalloc()/rmfree() on this additional memory to manage the
616 622 * "extended" requests.
617 623 */
618 624 int mptsas_extreq_sense_bufsize = 256*64;
619 625
620 626 /*
621 627 * We believe that all software restrictions of having to run with DMA
622 628 * attributes to limit allocation to the first 4G are removed.
623 629 * However, this flag remains to enable quick switchback should suspicious
624 630 * problems emerge.
625 631 * Note that scsi_alloc_consistent_buf() does still adhere to allocating
626 632 * 32 bit addressable memory, but we can cope if that is changed now.
627 633 */
628 634 int mptsas_use_64bit_msgaddr = 1;
629 635
630 636 /*
631 637 * warlock directives
632 638 */
633 639 _NOTE(SCHEME_PROTECTS_DATA("unique per pkt", scsi_pkt \
634 640 mptsas_cmd NcrTableIndirect buf scsi_cdb scsi_status))
635 641 _NOTE(SCHEME_PROTECTS_DATA("unique per pkt", smp_pkt))
636 642 _NOTE(SCHEME_PROTECTS_DATA("stable data", scsi_device scsi_address))
637 643 _NOTE(SCHEME_PROTECTS_DATA("No Mutex Needed", mptsas_tgt_private))
638 644 _NOTE(SCHEME_PROTECTS_DATA("No Mutex Needed", scsi_hba_tran::tran_tgt_private))
639 645
640 646 /*
641 647 * SM - HBA statics
642 648 */
643 649 char *mptsas_driver_rev = MPTSAS_MOD_STRING;
644 650
645 651 #ifdef MPTSAS_DEBUG
646 652 void debug_enter(char *);
647 653 #endif
648 654
649 655 /*
650 656 * Notes:
651 657 * - scsi_hba_init(9F) initializes SCSI HBA modules
652 658 * - must call scsi_hba_fini(9F) if modload() fails
653 659 */
654 660 int
655 661 _init(void)
656 662 {
657 663 int status;
658 664 /* CONSTCOND */
659 665 ASSERT(NO_COMPETING_THREADS);
660 666
661 667 NDBG0(("_init"));
662 668
663 669 status = ddi_soft_state_init(&mptsas_state, MPTSAS_SIZE,
664 670 MPTSAS_INITIAL_SOFT_SPACE);
665 671 if (status != 0) {
666 672 return (status);
667 673 }
668 674
669 675 if ((status = scsi_hba_init(&modlinkage)) != 0) {
670 676 ddi_soft_state_fini(&mptsas_state);
671 677 return (status);
672 678 }
673 679
674 680 mutex_init(&mptsas_global_mutex, NULL, MUTEX_DRIVER, NULL);
675 681 rw_init(&mptsas_global_rwlock, NULL, RW_DRIVER, NULL);
676 682 mutex_init(&mptsas_log_mutex, NULL, MUTEX_DRIVER, NULL);
677 683
678 684 if ((status = mod_install(&modlinkage)) != 0) {
679 685 mutex_destroy(&mptsas_log_mutex);
680 686 rw_destroy(&mptsas_global_rwlock);
681 687 mutex_destroy(&mptsas_global_mutex);
682 688 ddi_soft_state_fini(&mptsas_state);
683 689 scsi_hba_fini(&modlinkage);
684 690 }
685 691
686 692 return (status);
687 693 }
688 694
689 695 /*
690 696 * Notes:
691 697 * - scsi_hba_fini(9F) uninitializes SCSI HBA modules
692 698 */
693 699 int
694 700 _fini(void)
695 701 {
696 702 int status;
697 703 /* CONSTCOND */
698 704 ASSERT(NO_COMPETING_THREADS);
699 705
700 706 NDBG0(("_fini"));
701 707
702 708 if ((status = mod_remove(&modlinkage)) == 0) {
703 709 ddi_soft_state_fini(&mptsas_state);
704 710 scsi_hba_fini(&modlinkage);
705 711 mutex_destroy(&mptsas_global_mutex);
706 712 rw_destroy(&mptsas_global_rwlock);
707 713 mutex_destroy(&mptsas_log_mutex);
708 714 }
709 715 return (status);
710 716 }
711 717
712 718 /*
713 719 * The loadable-module _info(9E) entry point
714 720 */
715 721 int
716 722 _info(struct modinfo *modinfop)
717 723 {
718 724 /* CONSTCOND */
719 725 ASSERT(NO_COMPETING_THREADS);
720 726 NDBG0(("mptsas _info"));
721 727
722 728 return (mod_info(&modlinkage, modinfop));
723 729 }
724 730
725 731 static int
726 732 mptsas_target_eval_devhdl(const void *op, void *arg)
727 733 {
728 734 uint16_t dh = *(uint16_t *)arg;
729 735 const mptsas_target_t *tp = op;
730 736
731 737 return ((int)tp->m_devhdl - (int)dh);
732 738 }
733 739
734 740 static int
735 -mptsas_target_eval_slot(const void *op, void *arg)
736 -{
737 - mptsas_led_control_t *lcp = arg;
738 - const mptsas_target_t *tp = op;
739 -
740 - if (tp->m_enclosure != lcp->Enclosure)
741 - return ((int)tp->m_enclosure - (int)lcp->Enclosure);
742 -
743 - return ((int)tp->m_slot_num - (int)lcp->Slot);
744 -}
745 -
746 -static int
747 741 mptsas_target_eval_nowwn(const void *op, void *arg)
748 742 {
749 743 uint8_t phy = *(uint8_t *)arg;
750 744 const mptsas_target_t *tp = op;
751 745
752 746 if (tp->m_addr.mta_wwn != 0)
753 747 return (-1);
754 748
755 749 return ((int)tp->m_phynum - (int)phy);
756 750 }
757 751
758 752 static int
759 753 mptsas_smp_eval_devhdl(const void *op, void *arg)
760 754 {
761 755 uint16_t dh = *(uint16_t *)arg;
762 756 const mptsas_smp_t *sp = op;
763 757
764 758 return ((int)sp->m_devhdl - (int)dh);
765 759 }
766 760
767 761 static uint64_t
768 762 mptsas_target_addr_hash(const void *tp)
769 763 {
770 764 const mptsas_target_addr_t *tap = tp;
771 765
772 766 return ((tap->mta_wwn & 0xffffffffffffULL) |
773 767 ((uint64_t)tap->mta_phymask << 48));
774 768 }
775 769
776 770 static int
777 771 mptsas_target_addr_cmp(const void *a, const void *b)
778 772 {
779 773 const mptsas_target_addr_t *aap = a;
780 774 const mptsas_target_addr_t *bap = b;
781 775
782 776 if (aap->mta_wwn < bap->mta_wwn)
783 777 return (-1);
784 778 if (aap->mta_wwn > bap->mta_wwn)
785 779 return (1);
786 780 return ((int)bap->mta_phymask - (int)aap->mta_phymask);
787 781 }
788 782
789 783 static uint64_t
790 784 mptsas_tmp_target_hash(const void *tp)
791 785 {
792 786 return ((uint64_t)(uintptr_t)tp);
793 787 }
794 788
795 789 static int
796 790 mptsas_tmp_target_cmp(const void *a, const void *b)
797 791 {
798 792 if (a > b)
799 793 return (1);
800 794 if (b < a)
801 795 return (-1);
802 796
803 797 return (0);
804 798 }
805 799
806 800 static void
807 801 mptsas_target_free(void *op)
808 802 {
809 803 kmem_free(op, sizeof (mptsas_target_t));
810 804 }
811 805
812 806 static void
813 807 mptsas_smp_free(void *op)
814 808 {
815 809 kmem_free(op, sizeof (mptsas_smp_t));
816 810 }
817 811
818 812 static void
819 813 mptsas_destroy_hashes(mptsas_t *mpt)
820 814 {
821 815 mptsas_target_t *tp;
822 816 mptsas_smp_t *sp;
823 817
824 818 for (tp = refhash_first(mpt->m_targets); tp != NULL;
825 819 tp = refhash_next(mpt->m_targets, tp)) {
826 820 refhash_remove(mpt->m_targets, tp);
827 821 }
828 822 for (sp = refhash_first(mpt->m_smp_targets); sp != NULL;
829 823 sp = refhash_next(mpt->m_smp_targets, sp)) {
830 824 refhash_remove(mpt->m_smp_targets, sp);
831 825 }
832 826 refhash_destroy(mpt->m_tmp_targets);
833 827 refhash_destroy(mpt->m_targets);
834 828 refhash_destroy(mpt->m_smp_targets);
835 829 mpt->m_targets = NULL;
836 830 mpt->m_smp_targets = NULL;
837 831 }
838 832
839 833 static int
840 834 mptsas_iport_attach(dev_info_t *dip, ddi_attach_cmd_t cmd)
841 835 {
842 836 dev_info_t *pdip;
843 837 mptsas_t *mpt;
844 838 scsi_hba_tran_t *hba_tran;
845 839 char *iport = NULL;
846 840 char phymask[MPTSAS_MAX_PHYS];
847 841 mptsas_phymask_t phy_mask = 0;
848 842 int dynamic_port = 0;
849 843 uint32_t page_address;
850 844 char initiator_wwnstr[MPTSAS_WWN_STRLEN];
851 845 int rval = DDI_FAILURE;
852 846 int i = 0;
853 847 uint8_t numphys = 0;
854 848 uint8_t phy_id;
855 849 uint8_t phy_port = 0;
856 850 uint16_t attached_devhdl = 0;
857 851 uint32_t dev_info;
858 852 uint64_t attached_sas_wwn;
859 853 uint16_t dev_hdl;
860 854 uint16_t pdev_hdl;
861 855 uint16_t bay_num, enclosure, io_flags;
862 856 char attached_wwnstr[MPTSAS_WWN_STRLEN];
863 857
864 858 /* CONSTCOND */
865 859 ASSERT(NO_COMPETING_THREADS);
866 860
867 861 switch (cmd) {
868 862 case DDI_ATTACH:
869 863 break;
870 864
871 865 case DDI_RESUME:
872 866 /*
873 867 * If this a scsi-iport node, nothing to do here.
874 868 */
875 869 return (DDI_SUCCESS);
876 870
877 871 default:
878 872 return (DDI_FAILURE);
879 873 }
880 874
881 875 pdip = ddi_get_parent(dip);
882 876
883 877 if ((hba_tran = ndi_flavorv_get(pdip, SCSA_FLAVOR_SCSI_DEVICE)) ==
884 878 NULL) {
885 879 cmn_err(CE_WARN, "Failed attach iport because fail to "
886 880 "get tran vector for the HBA node");
887 881 return (DDI_FAILURE);
888 882 }
889 883
890 884 mpt = TRAN2MPT(hba_tran);
891 885 ASSERT(mpt != NULL);
892 886 if (mpt == NULL)
893 887 return (DDI_FAILURE);
894 888
895 889 if ((hba_tran = ndi_flavorv_get(dip, SCSA_FLAVOR_SCSI_DEVICE)) ==
896 890 NULL) {
897 891 mptsas_log(mpt, CE_WARN, "Failed attach iport because fail to "
898 892 "get tran vector for the iport node");
899 893 return (DDI_FAILURE);
900 894 }
901 895
902 896 /*
903 897 * Overwrite parent's tran_hba_private to iport's tran vector
904 898 */
905 899 hba_tran->tran_hba_private = mpt;
906 900
907 901 ddi_report_dev(dip);
908 902
909 903 /*
910 904 * Get SAS address for initiator port according dev_handle
911 905 */
912 906 iport = ddi_get_name_addr(dip);
913 907 if (iport && strncmp(iport, "v0", 2) == 0) {
914 908 if (ddi_prop_update_int(DDI_DEV_T_NONE, dip,
915 909 MPTSAS_VIRTUAL_PORT, 1) !=
916 910 DDI_PROP_SUCCESS) {
917 911 (void) ddi_prop_remove(DDI_DEV_T_NONE, dip,
918 912 MPTSAS_VIRTUAL_PORT);
919 913 mptsas_log(mpt, CE_WARN, "mptsas virtual port "
920 914 "prop update failed");
921 915 return (DDI_FAILURE);
922 916 }
923 917 return (DDI_SUCCESS);
924 918 }
925 919
926 920 mutex_enter(&mpt->m_mutex);
927 921 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
928 922 bzero(phymask, sizeof (phymask));
929 923 (void) sprintf(phymask,
930 924 "%x", mpt->m_phy_info[i].phy_mask);
931 925 if (strcmp(phymask, iport) == 0) {
932 926 break;
933 927 }
934 928 }
935 929
936 930 if (i == MPTSAS_MAX_PHYS) {
937 931 mptsas_log(mpt, CE_WARN, "Failed attach port %s because port"
938 932 "seems not exist", iport);
939 933 mutex_exit(&mpt->m_mutex);
940 934 return (DDI_FAILURE);
941 935 }
942 936
943 937 phy_mask = mpt->m_phy_info[i].phy_mask;
944 938
945 939 if (mpt->m_phy_info[i].port_flags & AUTO_PORT_CONFIGURATION)
946 940 dynamic_port = 1;
947 941 else
948 942 dynamic_port = 0;
949 943
950 944 /*
951 945 * Update PHY info for smhba
952 946 */
953 947 if (mptsas_smhba_phy_init(mpt)) {
954 948 mutex_exit(&mpt->m_mutex);
955 949 mptsas_log(mpt, CE_WARN, "mptsas phy update "
956 950 "failed");
957 951 return (DDI_FAILURE);
958 952 }
959 953
960 954 mutex_exit(&mpt->m_mutex);
961 955
962 956 numphys = 0;
963 957 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
964 958 if ((phy_mask >> i) & 0x01) {
965 959 numphys++;
966 960 }
967 961 }
968 962
969 963 bzero(initiator_wwnstr, sizeof (initiator_wwnstr));
970 964 (void) sprintf(initiator_wwnstr, "w%016"PRIx64,
971 965 mpt->un.m_base_wwid);
972 966
973 967 if (ddi_prop_update_string(DDI_DEV_T_NONE, dip,
974 968 SCSI_ADDR_PROP_INITIATOR_PORT, initiator_wwnstr) !=
975 969 DDI_PROP_SUCCESS) {
976 970 (void) ddi_prop_remove(DDI_DEV_T_NONE,
977 971 dip, SCSI_ADDR_PROP_INITIATOR_PORT);
978 972 mptsas_log(mpt, CE_WARN, "mptsas Initiator port "
979 973 "prop update failed");
980 974 return (DDI_FAILURE);
981 975 }
982 976 if (ddi_prop_update_int(DDI_DEV_T_NONE, dip,
983 977 MPTSAS_NUM_PHYS, numphys) !=
984 978 DDI_PROP_SUCCESS) {
985 979 (void) ddi_prop_remove(DDI_DEV_T_NONE, dip, MPTSAS_NUM_PHYS);
986 980 return (DDI_FAILURE);
987 981 }
988 982
989 983 if (ddi_prop_update_int(DDI_DEV_T_NONE, dip,
990 984 "phymask", phy_mask) !=
991 985 DDI_PROP_SUCCESS) {
992 986 (void) ddi_prop_remove(DDI_DEV_T_NONE, dip, "phymask");
993 987 mptsas_log(mpt, CE_WARN, "mptsas phy mask "
994 988 "prop update failed");
995 989 return (DDI_FAILURE);
996 990 }
997 991
998 992 if (ddi_prop_update_int(DDI_DEV_T_NONE, dip,
999 993 "dynamic-port", dynamic_port) !=
1000 994 DDI_PROP_SUCCESS) {
1001 995 (void) ddi_prop_remove(DDI_DEV_T_NONE, dip, "dynamic-port");
1002 996 mptsas_log(mpt, CE_WARN, "mptsas dynamic port "
1003 997 "prop update failed");
1004 998 return (DDI_FAILURE);
1005 999 }
1006 1000 if (ddi_prop_update_int(DDI_DEV_T_NONE, dip,
1007 1001 MPTSAS_VIRTUAL_PORT, 0) !=
1008 1002 DDI_PROP_SUCCESS) {
1009 1003 (void) ddi_prop_remove(DDI_DEV_T_NONE, dip,
1010 1004 MPTSAS_VIRTUAL_PORT);
1011 1005 mptsas_log(mpt, CE_WARN, "mptsas virtual port "
1012 1006 "prop update failed");
1013 1007 return (DDI_FAILURE);
1014 1008 }
1015 1009 mptsas_smhba_set_all_phy_props(mpt, dip, numphys, phy_mask,
1016 1010 &attached_devhdl);
1017 1011
1018 1012 mutex_enter(&mpt->m_mutex);
1019 1013 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
1020 1014 MPI2_SAS_DEVICE_PGAD_FORM_MASK) | (uint32_t)attached_devhdl;
1021 1015 rval = mptsas_get_sas_device_page0(mpt, page_address, &dev_hdl,
1022 1016 &attached_sas_wwn, &dev_info, &phy_port, &phy_id,
1023 1017 &pdev_hdl, &bay_num, &enclosure, &io_flags);
1024 1018 if (rval != DDI_SUCCESS) {
1025 1019 mptsas_log(mpt, CE_WARN,
1026 1020 "Failed to get device page0 for handle:%d",
1027 1021 attached_devhdl);
1028 1022 mutex_exit(&mpt->m_mutex);
1029 1023 return (DDI_FAILURE);
1030 1024 }
1031 1025
1032 1026 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
1033 1027 bzero(phymask, sizeof (phymask));
1034 1028 (void) sprintf(phymask, "%x", mpt->m_phy_info[i].phy_mask);
1035 1029 if (strcmp(phymask, iport) == 0) {
1036 1030 (void) sprintf(&mpt->m_phy_info[i].smhba_info.path[0],
1037 1031 "%x",
1038 1032 mpt->m_phy_info[i].phy_mask);
1039 1033 }
1040 1034 }
1041 1035 mutex_exit(&mpt->m_mutex);
1042 1036
1043 1037 bzero(attached_wwnstr, sizeof (attached_wwnstr));
1044 1038 (void) sprintf(attached_wwnstr, "w%016"PRIx64,
1045 1039 attached_sas_wwn);
1046 1040 if (ddi_prop_update_string(DDI_DEV_T_NONE, dip,
1047 1041 SCSI_ADDR_PROP_ATTACHED_PORT, attached_wwnstr) !=
1048 1042 DDI_PROP_SUCCESS) {
1049 1043 (void) ddi_prop_remove(DDI_DEV_T_NONE,
1050 1044 dip, SCSI_ADDR_PROP_ATTACHED_PORT);
1051 1045 return (DDI_FAILURE);
1052 1046 }
1053 1047
1054 1048 /* Create kstats for each phy on this iport */
1055 1049
1056 1050 mptsas_create_phy_stats(mpt, iport, dip);
1057 1051
1058 1052 /*
1059 1053 * register sas hba iport with mdi (MPxIO/vhci)
1060 1054 */
1061 1055 if (mdi_phci_register(MDI_HCI_CLASS_SCSI,
1062 1056 dip, 0) == MDI_SUCCESS) {
1063 1057 mpt->m_mpxio_enable = TRUE;
1064 1058 }
1065 1059 return (DDI_SUCCESS);
1066 1060 }
1067 1061
1068 1062 /*
1069 1063 * Notes:
1070 1064 * Set up all device state and allocate data structures,
1071 1065 * mutexes, condition variables, etc. for device operation.
1072 1066 * Add interrupts needed.
1073 1067 * Return DDI_SUCCESS if device is ready, else return DDI_FAILURE.
1074 1068 */
1075 1069 static int
1076 1070 mptsas_attach(dev_info_t *dip, ddi_attach_cmd_t cmd)
1077 1071 {
1078 1072 mptsas_t *mpt = NULL;
1079 1073 int instance, i, j;
1080 1074 int doneq_thread_num;
1081 1075 char intr_added = 0;
1082 1076 char map_setup = 0;
1083 1077 char config_setup = 0;
1084 1078 char hba_attach_setup = 0;
1085 1079 char smp_attach_setup = 0;
1086 1080 char enc_attach_setup = 0;
1087 1081 char mutex_init_done = 0;
1088 1082 char event_taskq_create = 0;
1089 1083 char dr_taskq_create = 0;
1090 1084 char doneq_thread_create = 0;
1091 1085 char added_watchdog = 0;
1092 1086 scsi_hba_tran_t *hba_tran;
1093 1087 uint_t mem_bar = MEM_SPACE;
1094 1088 int rval = DDI_FAILURE;
1095 1089
1096 1090 /* CONSTCOND */
1097 1091 ASSERT(NO_COMPETING_THREADS);
1098 1092
1099 1093 if (scsi_hba_iport_unit_address(dip)) {
1100 1094 return (mptsas_iport_attach(dip, cmd));
1101 1095 }
1102 1096
1103 1097 switch (cmd) {
1104 1098 case DDI_ATTACH:
1105 1099 break;
1106 1100
1107 1101 case DDI_RESUME:
1108 1102 if ((hba_tran = ddi_get_driver_private(dip)) == NULL)
1109 1103 return (DDI_FAILURE);
1110 1104
1111 1105 mpt = TRAN2MPT(hba_tran);
1112 1106
1113 1107 if (!mpt) {
1114 1108 return (DDI_FAILURE);
1115 1109 }
1116 1110
1117 1111 /*
1118 1112 * Reset hardware and softc to "no outstanding commands"
1119 1113 * Note that a check condition can result on first command
1120 1114 * to a target.
1121 1115 */
1122 1116 mutex_enter(&mpt->m_mutex);
1123 1117
1124 1118 /*
1125 1119 * raise power.
1126 1120 */
1127 1121 if (mpt->m_options & MPTSAS_OPT_PM) {
1128 1122 mutex_exit(&mpt->m_mutex);
1129 1123 (void) pm_busy_component(dip, 0);
1130 1124 rval = pm_power_has_changed(dip, 0, PM_LEVEL_D0);
1131 1125 if (rval == DDI_SUCCESS) {
1132 1126 mutex_enter(&mpt->m_mutex);
1133 1127 } else {
1134 1128 /*
1135 1129 * The pm_raise_power() call above failed,
1136 1130 * and that can only occur if we were unable
1137 1131 * to reset the hardware. This is probably
1138 1132 * due to unhealthy hardware, and because
1139 1133 * important filesystems (such as the root
1140 1134 * filesystem) could be on the attached disks,
1141 1135 * it would not be a good idea to continue,
1142 1136 * as we won't be entirely certain we are
1143 1137 * writing correct data. So we panic() here
1144 1138 * to not only prevent possible data corruption,
1145 1139 * but to give developers or end users a hope
1146 1140 * of identifying and correcting any problems.
1147 1141 */
1148 1142 fm_panic("mptsas could not reset hardware "
1149 1143 "during resume");
1150 1144 }
1151 1145 }
1152 1146
1153 1147 mpt->m_suspended = 0;
1154 1148
1155 1149 /*
1156 1150 * Reinitialize ioc
1157 1151 */
1158 1152 mpt->m_softstate |= MPTSAS_SS_MSG_UNIT_RESET;
1159 1153 if (mptsas_init_chip(mpt, FALSE) == DDI_FAILURE) {
1160 1154 mutex_exit(&mpt->m_mutex);
1161 1155 if (mpt->m_options & MPTSAS_OPT_PM) {
1162 1156 (void) pm_idle_component(dip, 0);
1163 1157 }
1164 1158 fm_panic("mptsas init chip fail during resume");
1165 1159 }
1166 1160 /*
1167 1161 * mptsas_update_driver_data needs interrupts so enable them
1168 1162 * first.
1169 1163 */
1170 1164 MPTSAS_ENABLE_INTR(mpt);
1171 1165 mptsas_update_driver_data(mpt);
1172 1166
1173 1167 /* start requests, if possible */
1174 1168 mptsas_restart_hba(mpt);
1175 1169
1176 1170 mutex_exit(&mpt->m_mutex);
1177 1171
1178 1172 /*
1179 1173 * Restart watch thread
1180 1174 */
1181 1175 mutex_enter(&mptsas_global_mutex);
1182 1176 if (mptsas_timeout_id == 0) {
1183 1177 mptsas_timeout_id = timeout(mptsas_watch, NULL,
1184 1178 mptsas_tick);
1185 1179 mptsas_timeouts_enabled = 1;
1186 1180 }
1187 1181 mutex_exit(&mptsas_global_mutex);
1188 1182
1189 1183 /* report idle status to pm framework */
1190 1184 if (mpt->m_options & MPTSAS_OPT_PM) {
1191 1185 (void) pm_idle_component(dip, 0);
1192 1186 }
1193 1187
1194 1188 return (DDI_SUCCESS);
1195 1189
1196 1190 default:
1197 1191 return (DDI_FAILURE);
1198 1192
1199 1193 }
1200 1194
1201 1195 instance = ddi_get_instance(dip);
1202 1196
1203 1197 /*
1204 1198 * Allocate softc information.
1205 1199 */
1206 1200 if (ddi_soft_state_zalloc(mptsas_state, instance) != DDI_SUCCESS) {
1207 - mptsas_log(NULL, CE_WARN,
1208 - "mptsas%d: cannot allocate soft state", instance);
1201 + mptsas_log(NULL, CE_WARN, "cannot allocate soft state");
1209 1202 goto fail;
1210 1203 }
1211 1204
1212 1205 mpt = ddi_get_soft_state(mptsas_state, instance);
1213 1206
1214 1207 if (mpt == NULL) {
1215 - mptsas_log(NULL, CE_WARN,
1216 - "mptsas%d: cannot get soft state", instance);
1208 + mptsas_log(NULL, CE_WARN, "cannot get soft state");
1217 1209 goto fail;
1218 1210 }
1219 1211
1220 1212 /* Indicate that we are 'sizeof (scsi_*(9S))' clean. */
1221 1213 scsi_size_clean(dip);
1222 1214
1223 1215 mpt->m_dip = dip;
1224 1216 mpt->m_instance = instance;
1225 1217
1226 1218 /* Make a per-instance copy of the structures */
1227 1219 mpt->m_io_dma_attr = mptsas_dma_attrs64;
1228 1220 if (mptsas_use_64bit_msgaddr) {
1229 1221 mpt->m_msg_dma_attr = mptsas_dma_attrs64;
1230 1222 } else {
1231 1223 mpt->m_msg_dma_attr = mptsas_dma_attrs;
1232 1224 }
1233 1225 mpt->m_reg_acc_attr = mptsas_dev_attr;
1234 1226 mpt->m_dev_acc_attr = mptsas_dev_attr;
1235 1227
1236 1228 /*
1237 1229 * Size of individual request sense buffer
1238 1230 */
1239 1231 mpt->m_req_sense_size = EXTCMDS_STATUS_SIZE;
1240 1232
1241 1233 /*
1242 1234 * Initialize FMA
1243 1235 */
1244 1236 mpt->m_fm_capabilities = ddi_getprop(DDI_DEV_T_ANY, mpt->m_dip,
1245 1237 DDI_PROP_CANSLEEP | DDI_PROP_DONTPASS, "fm-capable",
1246 1238 DDI_FM_EREPORT_CAPABLE | DDI_FM_ACCCHK_CAPABLE |
1247 1239 DDI_FM_DMACHK_CAPABLE | DDI_FM_ERRCB_CAPABLE);
1248 1240
1249 1241 mptsas_fm_init(mpt);
1250 1242
1251 1243 if (mptsas_alloc_handshake_msg(mpt,
1252 1244 sizeof (Mpi2SCSITaskManagementRequest_t)) == DDI_FAILURE) {
1253 1245 mptsas_log(mpt, CE_WARN, "cannot initialize handshake msg.");
1254 1246 goto fail;
1255 1247 }
1256 1248
1257 1249 /*
1258 1250 * Setup configuration space
1259 1251 */
1260 1252 if (mptsas_config_space_init(mpt) == FALSE) {
1261 1253 mptsas_log(mpt, CE_WARN, "mptsas_config_space_init failed");
1262 1254 goto fail;
1263 1255 }
1264 1256 config_setup++;
1265 1257
1266 1258 if (ddi_regs_map_setup(dip, mem_bar, (caddr_t *)&mpt->m_reg,
1267 1259 0, 0, &mpt->m_reg_acc_attr, &mpt->m_datap) != DDI_SUCCESS) {
1268 1260 mptsas_log(mpt, CE_WARN, "map setup failed");
1269 1261 goto fail;
1270 1262 }
1271 1263 map_setup++;
1272 1264
1273 1265 /*
1274 1266 * A taskq is created for dealing with the event handler
1275 1267 */
1276 1268 if ((mpt->m_event_taskq = ddi_taskq_create(dip, "mptsas_event_taskq",
1277 1269 1, TASKQ_DEFAULTPRI, 0)) == NULL) {
1278 1270 mptsas_log(mpt, CE_NOTE, "ddi_taskq_create failed");
1279 1271 goto fail;
1280 1272 }
1281 1273 event_taskq_create++;
1282 1274
1283 1275 /*
1284 1276 * A taskq is created for dealing with dr events
1285 1277 */
1286 1278 if ((mpt->m_dr_taskq = ddi_taskq_create(dip,
1287 1279 "mptsas_dr_taskq",
1288 1280 1, TASKQ_DEFAULTPRI, 0)) == NULL) {
1289 1281 mptsas_log(mpt, CE_NOTE, "ddi_taskq_create for discovery "
1290 1282 "failed");
1291 1283 goto fail;
1292 1284 }
1293 1285 dr_taskq_create++;
1294 1286
1295 1287 mpt->m_doneq_thread_threshold = ddi_prop_get_int(DDI_DEV_T_ANY, dip,
1296 1288 0, "mptsas_doneq_thread_threshold_prop", 10);
1297 1289 mpt->m_doneq_length_threshold = ddi_prop_get_int(DDI_DEV_T_ANY, dip,
1298 1290 0, "mptsas_doneq_length_threshold_prop", 8);
1299 1291 mpt->m_doneq_thread_n = ddi_prop_get_int(DDI_DEV_T_ANY, dip,
1300 1292 0, "mptsas_doneq_thread_n_prop", 8);
1293 + mpt->m_max_tune_throttle = ddi_prop_get_int(DDI_DEV_T_ANY, dip,
1294 + 0, "mptsas_max_throttle", MAX_THROTTLE);
1301 1295
1296 + /*
1297 + * Error check to make sure the value is within range. If nothing
1298 + * is set, default to the original design value.
1299 + */
1300 + if (mpt->m_max_tune_throttle < THROTTLE_LO) {
1301 + mpt->m_max_tune_throttle = MAX_THROTTLE;
1302 + } else if (mpt->m_max_tune_throttle > THROTTLE_HI) {
1303 + mpt->m_max_tune_throttle = THROTTLE_HI;
1304 + }
1305 +
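/*
 * Illustrative sketch, not part of the change under review: because the new
 * limit is read with ddi_prop_get_int(9F) against this instance's dev_info
 * node, it can also be supplied as a driver property, e.g. a line such as
 * the following in the driver's .conf file (file name mpt_sas.conf assumed):
 *
 *	mptsas_max_throttle=32;
 *
 * Per the range check above, values below THROTTLE_LO fall back to
 * MAX_THROTTLE and values above THROTTLE_HI are clamped to THROTTLE_HI.
 */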
1302 1306 if (mpt->m_doneq_thread_n) {
1303 1307 cv_init(&mpt->m_doneq_thread_cv, NULL, CV_DRIVER, NULL);
1304 1308 mutex_init(&mpt->m_doneq_mutex, NULL, MUTEX_DRIVER, NULL);
1305 1309
1306 1310 mutex_enter(&mpt->m_doneq_mutex);
1307 1311 mpt->m_doneq_thread_id =
1308 1312 kmem_zalloc(sizeof (mptsas_doneq_thread_list_t)
1309 1313 * mpt->m_doneq_thread_n, KM_SLEEP);
1310 1314
1311 1315 for (j = 0; j < mpt->m_doneq_thread_n; j++) {
1312 1316 cv_init(&mpt->m_doneq_thread_id[j].cv, NULL,
1313 1317 CV_DRIVER, NULL);
1314 1318 mutex_init(&mpt->m_doneq_thread_id[j].mutex, NULL,
1315 1319 MUTEX_DRIVER, NULL);
1316 1320 mutex_enter(&mpt->m_doneq_thread_id[j].mutex);
1317 1321 mpt->m_doneq_thread_id[j].flag |=
1318 1322 MPTSAS_DONEQ_THREAD_ACTIVE;
1319 1323 mpt->m_doneq_thread_id[j].arg.mpt = mpt;
1320 1324 mpt->m_doneq_thread_id[j].arg.t = j;
1321 1325 mpt->m_doneq_thread_id[j].threadp =
1322 1326 thread_create(NULL, 0, mptsas_doneq_thread,
1323 1327 &mpt->m_doneq_thread_id[j].arg,
1324 1328 0, &p0, TS_RUN, minclsyspri);
1325 1329 mpt->m_doneq_thread_id[j].donetail =
1326 1330 &mpt->m_doneq_thread_id[j].doneq;
1327 1331 mutex_exit(&mpt->m_doneq_thread_id[j].mutex);
1328 1332 }
1329 1333 mutex_exit(&mpt->m_doneq_mutex);
1330 1334 doneq_thread_create++;
1331 1335 }
1332 1336
1333 1337 /*
1334 1338 * Disable hardware interrupt since we're not ready to
1335 1339 * handle it yet.
1336 1340 */
1337 1341 MPTSAS_DISABLE_INTR(mpt);
1338 1342 if (mptsas_register_intrs(mpt) == FALSE)
1339 1343 goto fail;
1340 1344 intr_added++;
1341 1345
1342 1346 /* Initialize mutex used in interrupt handler */
1343 1347 mutex_init(&mpt->m_mutex, NULL, MUTEX_DRIVER,
1344 1348 DDI_INTR_PRI(mpt->m_intr_pri));
1345 1349 mutex_init(&mpt->m_passthru_mutex, NULL, MUTEX_DRIVER, NULL);
1346 1350 mutex_init(&mpt->m_tx_waitq_mutex, NULL, MUTEX_DRIVER,
1347 1351 DDI_INTR_PRI(mpt->m_intr_pri));
1348 1352 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
1349 1353 mutex_init(&mpt->m_phy_info[i].smhba_info.phy_mutex,
1350 1354 NULL, MUTEX_DRIVER,
1351 1355 DDI_INTR_PRI(mpt->m_intr_pri));
1352 1356 }
1353 1357
1354 1358 cv_init(&mpt->m_cv, NULL, CV_DRIVER, NULL);
1355 1359 cv_init(&mpt->m_passthru_cv, NULL, CV_DRIVER, NULL);
1356 1360 cv_init(&mpt->m_fw_cv, NULL, CV_DRIVER, NULL);
1357 1361 cv_init(&mpt->m_config_cv, NULL, CV_DRIVER, NULL);
1358 1362 cv_init(&mpt->m_fw_diag_cv, NULL, CV_DRIVER, NULL);
1359 - cv_init(&mpt->m_extreq_sense_refcount_cv, NULL, CV_DRIVER, NULL);
1360 1363 mutex_init_done++;
1361 1364
1365 +#ifdef MPTSAS_FAULTINJECTION
1366 + TAILQ_INIT(&mpt->m_fminj_cmdq);
1367 +#endif
1368 +
1362 1369 mutex_enter(&mpt->m_mutex);
1363 1370 /*
1364 1371 * Initialize power management component
1365 1372 */
1366 1373 if (mpt->m_options & MPTSAS_OPT_PM) {
1367 1374 if (mptsas_init_pm(mpt)) {
1368 1375 mutex_exit(&mpt->m_mutex);
1369 1376 mptsas_log(mpt, CE_WARN, "mptsas pm initialization "
1370 1377 "failed");
1371 1378 goto fail;
1372 1379 }
1373 1380 }
1374 1381
1375 1382 /*
1376 1383 * Initialize chip using Message Unit Reset, if allowed
1377 1384 */
1378 1385 mpt->m_softstate |= MPTSAS_SS_MSG_UNIT_RESET;
1379 1386 if (mptsas_init_chip(mpt, TRUE) == DDI_FAILURE) {
1380 1387 mutex_exit(&mpt->m_mutex);
1381 1388 mptsas_log(mpt, CE_WARN, "mptsas chip initialization failed");
1382 1389 goto fail;
1383 1390 }
1384 1391
1385 1392 mpt->m_targets = refhash_create(MPTSAS_TARGET_BUCKET_COUNT,
1386 1393 mptsas_target_addr_hash, mptsas_target_addr_cmp,
1387 1394 mptsas_target_free, sizeof (mptsas_target_t),
1388 1395 offsetof(mptsas_target_t, m_link),
1389 1396 offsetof(mptsas_target_t, m_addr), KM_SLEEP);
1390 1397
1391 1398 /*
1392 1399 * The refhash for temporary targets uses the address of the target
1393 1400 * struct itself as tag, so the tag offset is 0. See the implementation
1394 1401 * of mptsas_tmp_target_hash() and mptsas_tmp_target_cmp().
1395 1402 */
1396 1403 mpt->m_tmp_targets = refhash_create(MPTSAS_TMP_TARGET_BUCKET_COUNT,
1397 1404 mptsas_tmp_target_hash, mptsas_tmp_target_cmp,
1398 1405 mptsas_target_free, sizeof (mptsas_target_t),
1399 1406 offsetof(mptsas_target_t, m_link), 0, KM_SLEEP);
1400 1407
1401 1408 /*
1402 1409 * Fill in the phy_info structure and get the base WWID
1403 1410 */
1404 1411 if (mptsas_get_manufacture_page5(mpt) == DDI_FAILURE) {
1405 1412 mptsas_log(mpt, CE_WARN,
1406 1413 "mptsas_get_manufacture_page5 failed!");
1407 1414 goto fail;
1408 1415 }
1409 1416
1410 1417 if (mptsas_get_sas_io_unit_page_hndshk(mpt)) {
1411 1418 mptsas_log(mpt, CE_WARN,
1412 1419 "mptsas_get_sas_io_unit_page_hndshk failed!");
1413 1420 goto fail;
1414 1421 }
1415 1422
1416 1423 if (mptsas_get_manufacture_page0(mpt) == DDI_FAILURE) {
1417 1424 mptsas_log(mpt, CE_WARN,
1418 1425 "mptsas_get_manufacture_page0 failed!");
1419 1426 goto fail;
1420 1427 }
1421 1428
1422 1429 mutex_exit(&mpt->m_mutex);
1423 1430
1424 1431 /*
1425 1432 * Register the iport for multiple port HBA
1426 1433 */
1427 1434 mptsas_iport_register(mpt);
1428 1435
1429 1436 /*
1430 1437 * initialize SCSI HBA transport structure
1431 1438 */
1432 1439 if (mptsas_hba_setup(mpt) == FALSE)
1433 1440 goto fail;
1434 1441 hba_attach_setup++;
1435 1442
1436 1443 if (mptsas_smp_setup(mpt) == FALSE)
1437 1444 goto fail;
1438 1445 smp_attach_setup++;
1439 1446
1440 1447 if (mptsas_enc_setup(mpt) == FALSE)
1441 1448 goto fail;
1442 1449 enc_attach_setup++;
1443 1450
1444 1451 if (mptsas_cache_create(mpt) == FALSE)
1445 1452 goto fail;
1446 1453
1447 1454 mpt->m_scsi_reset_delay = ddi_prop_get_int(DDI_DEV_T_ANY,
1448 1455 dip, 0, "scsi-reset-delay", SCSI_DEFAULT_RESET_DELAY);
1449 1456 if (mpt->m_scsi_reset_delay == 0) {
1450 1457 mptsas_log(mpt, CE_NOTE,
1451 1458 "scsi_reset_delay of 0 is not recommended,"
1452 - " resetting to SCSI_DEFAULT_RESET_DELAY\n");
1459 + " resetting to SCSI_DEFAULT_RESET_DELAY");
1453 1460 mpt->m_scsi_reset_delay = SCSI_DEFAULT_RESET_DELAY;
1454 1461 }
1455 1462
1456 1463 /*
1457 1464 * Initialize the wait and done FIFO queue
1458 1465 */
1459 1466 mpt->m_donetail = &mpt->m_doneq;
1460 1467 mpt->m_waitqtail = &mpt->m_waitq;
1461 1468 mpt->m_tx_waitqtail = &mpt->m_tx_waitq;
1462 1469 mpt->m_tx_draining = 0;
1463 1470
1464 1471 /*
1465 1472 * ioc cmd queue initialize
1466 1473 */
1467 1474 mpt->m_ioc_event_cmdtail = &mpt->m_ioc_event_cmdq;
1468 1475 mpt->m_dev_handle = 0xFFFF;
1469 1476
1470 1477 MPTSAS_ENABLE_INTR(mpt);
1471 1478
1472 1479 /*
1473 1480 * enable event notification
1474 1481 */
1475 1482 mutex_enter(&mpt->m_mutex);
1476 1483 if (mptsas_ioc_enable_event_notification(mpt)) {
1477 1484 mutex_exit(&mpt->m_mutex);
1478 1485 goto fail;
1479 1486 }
1480 1487 mutex_exit(&mpt->m_mutex);
1481 1488
1482 1489 /*
1483 1490 * used for mptsas_watch
1484 1491 */
1485 1492 mptsas_list_add(mpt);
1486 1493
1487 1494 mutex_enter(&mptsas_global_mutex);
1488 1495 if (mptsas_timeouts_enabled == 0) {
1489 1496 mptsas_scsi_watchdog_tick = ddi_prop_get_int(DDI_DEV_T_ANY,
1490 1497 dip, 0, "scsi-watchdog-tick", DEFAULT_WD_TICK);
1491 1498
1492 1499 mptsas_tick = mptsas_scsi_watchdog_tick *
1493 1500 drv_usectohz((clock_t)1000000);
1494 1501
1495 1502 mptsas_timeout_id = timeout(mptsas_watch, NULL, mptsas_tick);
1496 1503 mptsas_timeouts_enabled = 1;
1497 1504 }
1498 1505 mutex_exit(&mptsas_global_mutex);
1499 1506 added_watchdog++;
1500 1507
1501 1508 /*
1502 1509 * Initialize PHY info for smhba.
 1503 1510 	 * This requires the watchdog to be enabled; otherwise, if
 1504 1511 	 * interrupts don't work, the system will hang.
1505 1512 */
1506 1513 if (mptsas_smhba_setup(mpt)) {
1507 1514 mptsas_log(mpt, CE_WARN, "mptsas phy initialization "
1508 1515 "failed");
1509 1516 goto fail;
1510 1517 }
1511 1518
1512 1519 /* Check all dma handles allocated in attach */
1513 1520 if ((mptsas_check_dma_handle(mpt->m_dma_req_frame_hdl)
1514 1521 != DDI_SUCCESS) ||
1515 1522 (mptsas_check_dma_handle(mpt->m_dma_req_sense_hdl)
1516 1523 != DDI_SUCCESS) ||
1517 1524 (mptsas_check_dma_handle(mpt->m_dma_reply_frame_hdl)
1518 1525 != DDI_SUCCESS) ||
1519 1526 (mptsas_check_dma_handle(mpt->m_dma_free_queue_hdl)
1520 1527 != DDI_SUCCESS) ||
1521 1528 (mptsas_check_dma_handle(mpt->m_dma_post_queue_hdl)
1522 1529 != DDI_SUCCESS) ||
1523 1530 (mptsas_check_dma_handle(mpt->m_hshk_dma_hdl)
1524 1531 != DDI_SUCCESS)) {
1525 1532 goto fail;
1526 1533 }
1527 1534
1528 1535 /* Check all acc handles allocated in attach */
1529 1536 if ((mptsas_check_acc_handle(mpt->m_datap) != DDI_SUCCESS) ||
1530 1537 (mptsas_check_acc_handle(mpt->m_acc_req_frame_hdl)
1531 1538 != DDI_SUCCESS) ||
1532 1539 (mptsas_check_acc_handle(mpt->m_acc_req_sense_hdl)
1533 1540 != DDI_SUCCESS) ||
1534 1541 (mptsas_check_acc_handle(mpt->m_acc_reply_frame_hdl)
1535 1542 != DDI_SUCCESS) ||
1536 1543 (mptsas_check_acc_handle(mpt->m_acc_free_queue_hdl)
1537 1544 != DDI_SUCCESS) ||
1538 1545 (mptsas_check_acc_handle(mpt->m_acc_post_queue_hdl)
1539 1546 != DDI_SUCCESS) ||
1540 1547 (mptsas_check_acc_handle(mpt->m_hshk_acc_hdl)
1541 1548 != DDI_SUCCESS) ||
1542 1549 (mptsas_check_acc_handle(mpt->m_config_handle)
1543 1550 != DDI_SUCCESS)) {
1544 1551 goto fail;
1545 1552 }
1546 1553
1547 1554 /*
1548 1555 * After this point, we are not going to fail the attach.
1549 1556 */
1550 1557
1551 1558 /* Print message of HBA present */
1552 1559 ddi_report_dev(dip);
1553 1560
1554 1561 /* report idle status to pm framework */
1555 1562 if (mpt->m_options & MPTSAS_OPT_PM) {
1556 1563 (void) pm_idle_component(dip, 0);
1557 1564 }
1558 1565
1559 1566 return (DDI_SUCCESS);
1560 1567
1561 1568 fail:
1562 1569 mptsas_log(mpt, CE_WARN, "attach failed");
1563 1570 mptsas_fm_ereport(mpt, DDI_FM_DEVICE_NO_RESPONSE);
1564 1571 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_LOST);
1565 1572 if (mpt) {
1566 1573 /* deallocate in reverse order */
1567 1574 if (added_watchdog) {
1568 1575 mptsas_list_del(mpt);
1569 1576 mutex_enter(&mptsas_global_mutex);
1570 1577
1571 1578 if (mptsas_timeout_id && (mptsas_head == NULL)) {
1572 1579 timeout_id_t tid = mptsas_timeout_id;
1573 1580 mptsas_timeouts_enabled = 0;
1574 1581 mptsas_timeout_id = 0;
1575 1582 mutex_exit(&mptsas_global_mutex);
1576 1583 (void) untimeout(tid);
1577 1584 mutex_enter(&mptsas_global_mutex);
1578 1585 }
1579 1586 mutex_exit(&mptsas_global_mutex);
1580 1587 }
1581 1588
1582 1589 mptsas_cache_destroy(mpt);
1583 1590
1584 1591 if (smp_attach_setup) {
1585 1592 mptsas_smp_teardown(mpt);
1586 1593 }
1587 1594 if (enc_attach_setup) {
1588 1595 mptsas_enc_teardown(mpt);
1589 1596 }
1590 1597 if (hba_attach_setup) {
1591 1598 mptsas_hba_teardown(mpt);
1592 1599 }
1593 1600
1594 1601 if (mpt->m_tmp_targets)
1595 1602 refhash_destroy(mpt->m_tmp_targets);
1596 1603 if (mpt->m_targets)
1597 1604 refhash_destroy(mpt->m_targets);
1598 1605 if (mpt->m_smp_targets)
1599 1606 refhash_destroy(mpt->m_smp_targets);
1600 1607
1601 1608 if (mpt->m_active) {
1602 1609 mptsas_free_active_slots(mpt);
1603 1610 }
1604 1611 if (intr_added) {
1605 1612 mptsas_unregister_intrs(mpt);
1606 1613 }
1607 1614
1608 1615 if (doneq_thread_create) {
1609 1616 mutex_enter(&mpt->m_doneq_mutex);
1610 1617 doneq_thread_num = mpt->m_doneq_thread_n;
1611 1618 for (j = 0; j < mpt->m_doneq_thread_n; j++) {
1612 1619 mutex_enter(&mpt->m_doneq_thread_id[j].mutex);
1613 1620 mpt->m_doneq_thread_id[j].flag &=
1614 1621 (~MPTSAS_DONEQ_THREAD_ACTIVE);
1615 1622 cv_signal(&mpt->m_doneq_thread_id[j].cv);
1616 1623 mutex_exit(&mpt->m_doneq_thread_id[j].mutex);
1617 1624 }
1618 1625 while (mpt->m_doneq_thread_n) {
1619 1626 cv_wait(&mpt->m_doneq_thread_cv,
1620 1627 &mpt->m_doneq_mutex);
1621 1628 }
1622 1629 for (j = 0; j < doneq_thread_num; j++) {
1623 1630 cv_destroy(&mpt->m_doneq_thread_id[j].cv);
1624 1631 mutex_destroy(&mpt->m_doneq_thread_id[j].mutex);
1625 1632 }
1626 1633 kmem_free(mpt->m_doneq_thread_id,
1627 1634 sizeof (mptsas_doneq_thread_list_t)
1628 1635 * doneq_thread_num);
1629 1636 mutex_exit(&mpt->m_doneq_mutex);
1630 1637 cv_destroy(&mpt->m_doneq_thread_cv);
1631 1638 mutex_destroy(&mpt->m_doneq_mutex);
1632 1639 }
1633 1640 if (event_taskq_create) {
1634 1641 ddi_taskq_destroy(mpt->m_event_taskq);
1635 1642 }
1636 1643 if (dr_taskq_create) {
1637 1644 ddi_taskq_destroy(mpt->m_dr_taskq);
1638 1645 }
1639 1646 if (mutex_init_done) {
1640 1647 mutex_destroy(&mpt->m_tx_waitq_mutex);
1641 1648 mutex_destroy(&mpt->m_passthru_mutex);
1642 1649 mutex_destroy(&mpt->m_mutex);
1643 1650 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
1644 1651 mutex_destroy(
1645 1652 &mpt->m_phy_info[i].smhba_info.phy_mutex);
1646 1653 }
1647 1654 cv_destroy(&mpt->m_cv);
1648 1655 cv_destroy(&mpt->m_passthru_cv);
1649 1656 cv_destroy(&mpt->m_fw_cv);
1650 1657 cv_destroy(&mpt->m_config_cv);
1651 1658 cv_destroy(&mpt->m_fw_diag_cv);
1652 - cv_destroy(&mpt->m_extreq_sense_refcount_cv);
1653 1659 }
1654 1660
1655 1661 if (map_setup) {
1656 1662 mptsas_cfg_fini(mpt);
1657 1663 }
1658 1664 if (config_setup) {
1659 1665 mptsas_config_space_fini(mpt);
1660 1666 }
1661 1667 mptsas_free_handshake_msg(mpt);
1662 1668 mptsas_hba_fini(mpt);
1663 1669
1664 1670 mptsas_fm_fini(mpt);
1665 1671 ddi_soft_state_free(mptsas_state, instance);
1666 1672 ddi_prop_remove_all(dip);
1667 1673 }
1668 1674 return (DDI_FAILURE);
1669 1675 }
1670 1676
1671 1677 static int
1672 1678 mptsas_suspend(dev_info_t *devi)
1673 1679 {
1674 1680 mptsas_t *mpt, *g;
1675 1681 scsi_hba_tran_t *tran;
1676 1682
1677 1683 if (scsi_hba_iport_unit_address(devi)) {
1678 1684 return (DDI_SUCCESS);
1679 1685 }
1680 1686
1681 1687 if ((tran = ddi_get_driver_private(devi)) == NULL)
1682 1688 return (DDI_SUCCESS);
1683 1689
1684 1690 mpt = TRAN2MPT(tran);
1685 1691 if (!mpt) {
1686 1692 return (DDI_SUCCESS);
1687 1693 }
1688 1694
1689 1695 mutex_enter(&mpt->m_mutex);
1690 1696
1691 1697 if (mpt->m_suspended++) {
1692 1698 mutex_exit(&mpt->m_mutex);
1693 1699 return (DDI_SUCCESS);
1694 1700 }
1695 1701
1696 1702 /*
1697 1703 * Cancel timeout threads for this mpt
1698 1704 */
1699 1705 if (mpt->m_quiesce_timeid) {
1700 1706 timeout_id_t tid = mpt->m_quiesce_timeid;
1701 1707 mpt->m_quiesce_timeid = 0;
1702 1708 mutex_exit(&mpt->m_mutex);
1703 1709 (void) untimeout(tid);
1704 1710 mutex_enter(&mpt->m_mutex);
1705 1711 }
1706 1712
1707 1713 if (mpt->m_restart_cmd_timeid) {
1708 1714 timeout_id_t tid = mpt->m_restart_cmd_timeid;
1709 1715 mpt->m_restart_cmd_timeid = 0;
1710 1716 mutex_exit(&mpt->m_mutex);
1711 1717 (void) untimeout(tid);
1712 1718 mutex_enter(&mpt->m_mutex);
1713 1719 }
1714 1720
1715 1721 mutex_exit(&mpt->m_mutex);
1716 1722
1717 1723 (void) pm_idle_component(mpt->m_dip, 0);
1718 1724
1719 1725 /*
1720 1726 * Cancel watch threads if all mpts suspended
1721 1727 */
1722 1728 rw_enter(&mptsas_global_rwlock, RW_WRITER);
1723 1729 for (g = mptsas_head; g != NULL; g = g->m_next) {
1724 1730 if (!g->m_suspended)
1725 1731 break;
1726 1732 }
1727 1733 rw_exit(&mptsas_global_rwlock);
1728 1734
1729 1735 mutex_enter(&mptsas_global_mutex);
1730 1736 if (g == NULL) {
1731 1737 timeout_id_t tid;
1732 1738
1733 1739 mptsas_timeouts_enabled = 0;
1734 1740 if (mptsas_timeout_id) {
1735 1741 tid = mptsas_timeout_id;
1736 1742 mptsas_timeout_id = 0;
1737 1743 mutex_exit(&mptsas_global_mutex);
1738 1744 (void) untimeout(tid);
1739 1745 mutex_enter(&mptsas_global_mutex);
1740 1746 }
1741 1747 if (mptsas_reset_watch) {
1742 1748 tid = mptsas_reset_watch;
1743 1749 mptsas_reset_watch = 0;
1744 1750 mutex_exit(&mptsas_global_mutex);
1745 1751 (void) untimeout(tid);
1746 1752 mutex_enter(&mptsas_global_mutex);
1747 1753 }
1748 1754 }
1749 1755 mutex_exit(&mptsas_global_mutex);
1750 1756
1751 1757 mutex_enter(&mpt->m_mutex);
1752 1758
1753 1759 /*
1754 1760 * If this mpt is not in full power(PM_LEVEL_D0), just return.
1755 1761 */
1756 1762 if ((mpt->m_options & MPTSAS_OPT_PM) &&
1757 1763 (mpt->m_power_level != PM_LEVEL_D0)) {
1758 1764 mutex_exit(&mpt->m_mutex);
1759 1765 return (DDI_SUCCESS);
1760 1766 }
1761 1767
1762 1768 /* Disable HBA interrupts in hardware */
1763 1769 MPTSAS_DISABLE_INTR(mpt);
1764 1770 /*
1765 1771 * Send RAID action system shutdown to sync IR
1766 1772 */
1767 1773 mptsas_raid_action_system_shutdown(mpt);
1768 1774
1769 1775 mutex_exit(&mpt->m_mutex);
1770 1776
1771 1777 /* drain the taskq */
1772 1778 ddi_taskq_wait(mpt->m_event_taskq);
1773 1779 ddi_taskq_wait(mpt->m_dr_taskq);
1774 1780
1775 1781 return (DDI_SUCCESS);
1776 1782 }
1777 1783
1778 1784 #ifdef __sparc
1779 1785 /*ARGSUSED*/
1780 1786 static int
1781 1787 mptsas_reset(dev_info_t *devi, ddi_reset_cmd_t cmd)
1782 1788 {
1783 1789 mptsas_t *mpt;
1784 1790 scsi_hba_tran_t *tran;
1785 1791
1786 1792 /*
1787 1793 * If this call is for iport, just return.
1788 1794 */
1789 1795 if (scsi_hba_iport_unit_address(devi))
1790 1796 return (DDI_SUCCESS);
1791 1797
1792 1798 if ((tran = ddi_get_driver_private(devi)) == NULL)
1793 1799 return (DDI_SUCCESS);
1794 1800
1795 1801 if ((mpt = TRAN2MPT(tran)) == NULL)
1796 1802 return (DDI_SUCCESS);
1797 1803
1798 1804 /*
1799 1805 * Send RAID action system shutdown to sync IR. Disable HBA
1800 1806 * interrupts in hardware first.
1801 1807 */
1802 1808 MPTSAS_DISABLE_INTR(mpt);
1803 1809 mptsas_raid_action_system_shutdown(mpt);
1804 1810
1805 1811 return (DDI_SUCCESS);
1806 1812 }
1807 1813 #else /* __sparc */
1808 1814 /*
1809 1815 * quiesce(9E) entry point.
1810 1816 *
1811 1817 * This function is called when the system is single-threaded at high
1812 1818 * PIL with preemption disabled. Therefore, this function must not be
1813 1819 * blocked.
1814 1820 *
1815 1821 * This function returns DDI_SUCCESS on success, or DDI_FAILURE on failure.
1816 1822 * DDI_FAILURE indicates an error condition and should almost never happen.
1817 1823 */
1818 1824 static int
1819 1825 mptsas_quiesce(dev_info_t *devi)
1820 1826 {
1821 1827 mptsas_t *mpt;
1822 1828 scsi_hba_tran_t *tran;
1823 1829
1824 1830 /*
1825 1831 * If this call is for iport, just return.
1826 1832 */
1827 1833 if (scsi_hba_iport_unit_address(devi))
1828 1834 return (DDI_SUCCESS);
1829 1835
1830 1836 if ((tran = ddi_get_driver_private(devi)) == NULL)
1831 1837 return (DDI_SUCCESS);
1832 1838
1833 1839 if ((mpt = TRAN2MPT(tran)) == NULL)
1834 1840 return (DDI_SUCCESS);
1835 1841
1836 1842 /* Disable HBA interrupts in hardware */
1837 1843 MPTSAS_DISABLE_INTR(mpt);
 1838 1844 	/* Send RAID action system shutdown to sync IR */
1839 1845 mptsas_raid_action_system_shutdown(mpt);
1840 1846
1841 1847 return (DDI_SUCCESS);
1842 1848 }
1843 1849 #endif /* __sparc */
1844 1850
1845 1851 /*
1846 1852 * detach(9E). Remove all device allocations and system resources;
1847 1853 * disable device interrupts.
1848 1854 * Return DDI_SUCCESS if done; DDI_FAILURE if there's a problem.
1849 1855 */
1850 1856 static int
1851 1857 mptsas_detach(dev_info_t *devi, ddi_detach_cmd_t cmd)
1852 1858 {
1853 1859 /* CONSTCOND */
1854 1860 ASSERT(NO_COMPETING_THREADS);
1855 1861 NDBG0(("mptsas_detach: dip=0x%p cmd=0x%p", (void *)devi, (void *)cmd));
1856 1862
1857 1863 switch (cmd) {
1858 1864 case DDI_DETACH:
1859 1865 return (mptsas_do_detach(devi));
1860 1866
1861 1867 case DDI_SUSPEND:
1862 1868 return (mptsas_suspend(devi));
1863 1869
1864 1870 default:
1865 1871 return (DDI_FAILURE);
1866 1872 }
1867 1873 /* NOTREACHED */
1868 1874 }
1869 1875
1870 1876 static int
1871 1877 mptsas_do_detach(dev_info_t *dip)
1872 1878 {
1873 1879 mptsas_t *mpt;
1874 1880 scsi_hba_tran_t *tran;
1875 1881 int circ = 0;
1876 1882 int circ1 = 0;
1877 1883 mdi_pathinfo_t *pip = NULL;
1878 1884 int i;
1879 1885 int doneq_thread_num = 0;
1880 1886
1881 1887 NDBG0(("mptsas_do_detach: dip=0x%p", (void *)dip));
1882 1888
1883 1889 if ((tran = ndi_flavorv_get(dip, SCSA_FLAVOR_SCSI_DEVICE)) == NULL)
1884 1890 return (DDI_FAILURE);
1885 1891
1886 1892 mpt = TRAN2MPT(tran);
1887 1893 if (!mpt) {
1888 1894 return (DDI_FAILURE);
1889 1895 }
1890 1896 /*
1891 1897 * Still have pathinfo child, should not detach mpt driver
1892 1898 */
1893 1899 if (scsi_hba_iport_unit_address(dip)) {
1894 1900 if (mpt->m_mpxio_enable) {
1895 1901 /*
1896 1902 * MPxIO enabled for the iport
1897 1903 */
1898 1904 ndi_devi_enter(scsi_vhci_dip, &circ1);
1899 1905 ndi_devi_enter(dip, &circ);
1900 - while (pip = mdi_get_next_client_path(dip, NULL)) {
1906 + while ((pip = mdi_get_next_client_path(dip, NULL)) !=
1907 + NULL) {
1901 1908 if (mdi_pi_free(pip, 0) == MDI_SUCCESS) {
1902 1909 continue;
1903 1910 }
1904 1911 ndi_devi_exit(dip, circ);
1905 1912 ndi_devi_exit(scsi_vhci_dip, circ1);
1906 1913 NDBG12(("detach failed because of "
1907 1914 "outstanding path info"));
1908 1915 return (DDI_FAILURE);
1909 1916 }
1910 1917 ndi_devi_exit(dip, circ);
1911 1918 ndi_devi_exit(scsi_vhci_dip, circ1);
1912 1919 (void) mdi_phci_unregister(dip, 0);
1913 1920 }
1914 1921
1915 1922 ddi_prop_remove_all(dip);
1916 1923
1917 1924 return (DDI_SUCCESS);
1918 1925 }
1919 1926
1920 1927 /* Make sure power level is D0 before accessing registers */
1921 1928 if (mpt->m_options & MPTSAS_OPT_PM) {
1922 1929 (void) pm_busy_component(dip, 0);
1923 1930 if (mpt->m_power_level != PM_LEVEL_D0) {
1924 1931 if (pm_raise_power(dip, 0, PM_LEVEL_D0) !=
1925 1932 DDI_SUCCESS) {
1926 1933 mptsas_log(mpt, CE_WARN,
1927 - "mptsas%d: Raise power request failed.",
1928 - mpt->m_instance);
1934 + "raise power request failed");
1929 1935 (void) pm_idle_component(dip, 0);
1930 1936 return (DDI_FAILURE);
1931 1937 }
1932 1938 }
1933 1939 }
1934 1940
1935 1941 /*
1936 1942 * Send RAID action system shutdown to sync IR. After action, send a
 1937 1943 	 * Message Unit Reset. Since the DMA resources are freed after that,
 1938 1944 	 * setting the IOC to the READY state avoids HBA-initiated DMA.
1939 1945 */
1940 1946 mutex_enter(&mpt->m_mutex);
1941 1947 MPTSAS_DISABLE_INTR(mpt);
1942 1948 mptsas_raid_action_system_shutdown(mpt);
1943 1949 mpt->m_softstate |= MPTSAS_SS_MSG_UNIT_RESET;
1944 1950 (void) mptsas_ioc_reset(mpt, FALSE);
1945 1951 mutex_exit(&mpt->m_mutex);
1946 1952 mptsas_rem_intrs(mpt);
1947 1953 ddi_taskq_destroy(mpt->m_event_taskq);
1948 1954 ddi_taskq_destroy(mpt->m_dr_taskq);
1949 1955
1950 1956 if (mpt->m_doneq_thread_n) {
1951 1957 mutex_enter(&mpt->m_doneq_mutex);
1952 1958 doneq_thread_num = mpt->m_doneq_thread_n;
1953 1959 for (i = 0; i < mpt->m_doneq_thread_n; i++) {
1954 1960 mutex_enter(&mpt->m_doneq_thread_id[i].mutex);
1955 1961 mpt->m_doneq_thread_id[i].flag &=
1956 1962 (~MPTSAS_DONEQ_THREAD_ACTIVE);
1957 1963 cv_signal(&mpt->m_doneq_thread_id[i].cv);
1958 1964 mutex_exit(&mpt->m_doneq_thread_id[i].mutex);
1959 1965 }
1960 1966 while (mpt->m_doneq_thread_n) {
1961 1967 cv_wait(&mpt->m_doneq_thread_cv,
1962 1968 &mpt->m_doneq_mutex);
1963 1969 }
1964 1970 for (i = 0; i < doneq_thread_num; i++) {
1965 1971 cv_destroy(&mpt->m_doneq_thread_id[i].cv);
1966 1972 mutex_destroy(&mpt->m_doneq_thread_id[i].mutex);
1967 1973 }
1968 1974 kmem_free(mpt->m_doneq_thread_id,
1969 1975 sizeof (mptsas_doneq_thread_list_t)
1970 1976 * doneq_thread_num);
1971 1977 mutex_exit(&mpt->m_doneq_mutex);
1972 1978 cv_destroy(&mpt->m_doneq_thread_cv);
1973 1979 mutex_destroy(&mpt->m_doneq_mutex);
1974 1980 }
1975 1981
1976 1982 scsi_hba_reset_notify_tear_down(mpt->m_reset_notify_listf);
1977 1983
1978 1984 mptsas_list_del(mpt);
1979 1985
1980 1986 /*
1981 1987 * Cancel timeout threads for this mpt
1982 1988 */
1983 1989 mutex_enter(&mpt->m_mutex);
1984 1990 if (mpt->m_quiesce_timeid) {
1985 1991 timeout_id_t tid = mpt->m_quiesce_timeid;
1986 1992 mpt->m_quiesce_timeid = 0;
1987 1993 mutex_exit(&mpt->m_mutex);
1988 1994 (void) untimeout(tid);
1989 1995 mutex_enter(&mpt->m_mutex);
1990 1996 }
1991 1997
1992 1998 if (mpt->m_restart_cmd_timeid) {
1993 1999 timeout_id_t tid = mpt->m_restart_cmd_timeid;
1994 2000 mpt->m_restart_cmd_timeid = 0;
1995 2001 mutex_exit(&mpt->m_mutex);
1996 2002 (void) untimeout(tid);
1997 2003 mutex_enter(&mpt->m_mutex);
1998 2004 }
1999 2005
2000 2006 mutex_exit(&mpt->m_mutex);
2001 2007
2002 2008 /*
2003 2009 * last mpt? ... if active, CANCEL watch threads.
2004 2010 */
2005 2011 mutex_enter(&mptsas_global_mutex);
2006 2012 if (mptsas_head == NULL) {
2007 2013 timeout_id_t tid;
2008 2014 /*
 2009 2015 		 * Clear mptsas_timeouts_enabled so that the watch thread
2010 2016 * gets restarted on DDI_ATTACH
2011 2017 */
2012 2018 mptsas_timeouts_enabled = 0;
2013 2019 if (mptsas_timeout_id) {
2014 2020 tid = mptsas_timeout_id;
2015 2021 mptsas_timeout_id = 0;
2016 2022 mutex_exit(&mptsas_global_mutex);
2017 2023 (void) untimeout(tid);
2018 2024 mutex_enter(&mptsas_global_mutex);
2019 2025 }
2020 2026 if (mptsas_reset_watch) {
2021 2027 tid = mptsas_reset_watch;
2022 2028 mptsas_reset_watch = 0;
2023 2029 mutex_exit(&mptsas_global_mutex);
2024 2030 (void) untimeout(tid);
2025 2031 mutex_enter(&mptsas_global_mutex);
2026 2032 }
2027 2033 }
2028 2034 mutex_exit(&mptsas_global_mutex);
2029 2035
2030 2036 /*
2031 2037 * Delete Phy stats
2032 2038 */
2033 2039 mptsas_destroy_phy_stats(mpt);
2034 2040
2035 2041 mptsas_destroy_hashes(mpt);
2036 2042
2037 2043 /*
2038 2044 * Delete nt_active.
2039 2045 */
2040 2046 mutex_enter(&mpt->m_mutex);
2041 2047 mptsas_free_active_slots(mpt);
2042 2048 mutex_exit(&mpt->m_mutex);
2043 2049
2044 2050 /* deallocate everything that was allocated in mptsas_attach */
2045 2051 mptsas_cache_destroy(mpt);
2046 2052
2047 2053 mptsas_hba_fini(mpt);
2048 2054 mptsas_cfg_fini(mpt);
2049 2055
2050 2056 /* Lower the power informing PM Framework */
2051 2057 if (mpt->m_options & MPTSAS_OPT_PM) {
2052 2058 if (pm_lower_power(dip, 0, PM_LEVEL_D3) != DDI_SUCCESS)
2053 2059 mptsas_log(mpt, CE_WARN,
2054 - "!mptsas%d: Lower power request failed "
2055 - "during detach, ignoring.",
2056 - mpt->m_instance);
2060 + "lower power request failed during detach, "
2061 + "ignoring");
2057 2062 }
2058 2063
2059 2064 mutex_destroy(&mpt->m_tx_waitq_mutex);
2060 2065 mutex_destroy(&mpt->m_passthru_mutex);
2061 2066 mutex_destroy(&mpt->m_mutex);
2062 2067 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
2063 2068 mutex_destroy(&mpt->m_phy_info[i].smhba_info.phy_mutex);
2064 2069 }
2065 2070 cv_destroy(&mpt->m_cv);
2066 2071 cv_destroy(&mpt->m_passthru_cv);
2067 2072 cv_destroy(&mpt->m_fw_cv);
2068 2073 cv_destroy(&mpt->m_config_cv);
2069 2074 cv_destroy(&mpt->m_fw_diag_cv);
2070 - cv_destroy(&mpt->m_extreq_sense_refcount_cv);
2071 2075
2076 +#ifdef MPTSAS_FAULTINJECTION
2077 + ASSERT(TAILQ_EMPTY(&mpt->m_fminj_cmdq));
2078 +#endif
2079 +
2072 2080 mptsas_smp_teardown(mpt);
2073 2081 mptsas_enc_teardown(mpt);
2074 2082 mptsas_hba_teardown(mpt);
2075 2083
2076 2084 mptsas_config_space_fini(mpt);
2077 2085
2078 2086 mptsas_free_handshake_msg(mpt);
2079 2087
2080 2088 mptsas_fm_fini(mpt);
2081 2089 ddi_soft_state_free(mptsas_state, ddi_get_instance(dip));
2082 2090 ddi_prop_remove_all(dip);
2083 2091
2084 2092 return (DDI_SUCCESS);
2085 2093 }
2086 2094
2087 2095 static void
2088 2096 mptsas_list_add(mptsas_t *mpt)
2089 2097 {
2090 2098 rw_enter(&mptsas_global_rwlock, RW_WRITER);
2091 2099
2092 2100 if (mptsas_head == NULL) {
2093 2101 mptsas_head = mpt;
2094 2102 } else {
2095 2103 mptsas_tail->m_next = mpt;
2096 2104 }
2097 2105 mptsas_tail = mpt;
2098 2106 rw_exit(&mptsas_global_rwlock);
2099 2107 }
2100 2108
2101 2109 static void
2102 2110 mptsas_list_del(mptsas_t *mpt)
2103 2111 {
2104 2112 mptsas_t *m;
2105 2113 /*
2106 2114 * Remove device instance from the global linked list
2107 2115 */
2108 2116 rw_enter(&mptsas_global_rwlock, RW_WRITER);
2109 2117 if (mptsas_head == mpt) {
2110 2118 m = mptsas_head = mpt->m_next;
2111 2119 } else {
2112 2120 for (m = mptsas_head; m != NULL; m = m->m_next) {
2113 2121 if (m->m_next == mpt) {
2114 2122 m->m_next = mpt->m_next;
2115 2123 break;
2116 2124 }
2117 2125 }
2118 2126 if (m == NULL) {
2119 2127 mptsas_log(mpt, CE_PANIC, "Not in softc list!");
2120 2128 }
2121 2129 }
2122 2130
2123 2131 if (mptsas_tail == mpt) {
2124 2132 mptsas_tail = m;
2125 2133 }
2126 2134 rw_exit(&mptsas_global_rwlock);
2127 2135 }
2128 2136
2129 2137 static int
2130 2138 mptsas_alloc_handshake_msg(mptsas_t *mpt, size_t alloc_size)
2131 2139 {
2132 2140 ddi_dma_attr_t task_dma_attrs;
2133 2141
2134 2142 mpt->m_hshk_dma_size = 0;
2135 2143 task_dma_attrs = mpt->m_msg_dma_attr;
2136 2144 task_dma_attrs.dma_attr_sgllen = 1;
2137 2145 task_dma_attrs.dma_attr_granular = (uint32_t)(alloc_size);
2138 2146
2139 2147 /* allocate Task Management ddi_dma resources */
2140 2148 if (mptsas_dma_addr_create(mpt, task_dma_attrs,
2141 2149 &mpt->m_hshk_dma_hdl, &mpt->m_hshk_acc_hdl, &mpt->m_hshk_memp,
2142 2150 alloc_size, NULL) == FALSE) {
2143 2151 return (DDI_FAILURE);
2144 2152 }
2145 2153 mpt->m_hshk_dma_size = alloc_size;
2146 2154
2147 2155 return (DDI_SUCCESS);
2148 2156 }
2149 2157
2150 2158 static void
2151 2159 mptsas_free_handshake_msg(mptsas_t *mpt)
2152 2160 {
2153 2161 if (mpt->m_hshk_dma_size == 0)
2154 2162 return;
2155 2163 mptsas_dma_addr_destroy(&mpt->m_hshk_dma_hdl, &mpt->m_hshk_acc_hdl);
2156 2164 mpt->m_hshk_dma_size = 0;
2157 2165 }
2158 2166
2159 2167 static int
2160 2168 mptsas_hba_setup(mptsas_t *mpt)
2161 2169 {
2162 2170 scsi_hba_tran_t *hba_tran;
2163 2171 int tran_flags;
2164 2172
2165 2173 /* Allocate a transport structure */
2166 2174 hba_tran = mpt->m_tran = scsi_hba_tran_alloc(mpt->m_dip,
2167 2175 SCSI_HBA_CANSLEEP);
2168 2176 ASSERT(mpt->m_tran != NULL);
2169 2177
2170 2178 hba_tran->tran_hba_private = mpt;
2171 2179 hba_tran->tran_tgt_private = NULL;
2172 2180
2173 2181 hba_tran->tran_tgt_init = mptsas_scsi_tgt_init;
2174 2182 hba_tran->tran_tgt_free = mptsas_scsi_tgt_free;
2175 2183
2176 2184 hba_tran->tran_start = mptsas_scsi_start;
2177 2185 hba_tran->tran_reset = mptsas_scsi_reset;
2178 2186 hba_tran->tran_abort = mptsas_scsi_abort;
2179 2187 hba_tran->tran_getcap = mptsas_scsi_getcap;
2180 2188 hba_tran->tran_setcap = mptsas_scsi_setcap;
2181 2189 hba_tran->tran_init_pkt = mptsas_scsi_init_pkt;
2182 2190 hba_tran->tran_destroy_pkt = mptsas_scsi_destroy_pkt;
2183 2191
2184 2192 hba_tran->tran_dmafree = mptsas_scsi_dmafree;
2185 2193 hba_tran->tran_sync_pkt = mptsas_scsi_sync_pkt;
2186 2194 hba_tran->tran_reset_notify = mptsas_scsi_reset_notify;
2187 2195
2188 2196 hba_tran->tran_get_bus_addr = mptsas_get_bus_addr;
2189 2197 hba_tran->tran_get_name = mptsas_get_name;
2190 2198
2191 2199 hba_tran->tran_quiesce = mptsas_scsi_quiesce;
2192 2200 hba_tran->tran_unquiesce = mptsas_scsi_unquiesce;
2193 2201 hba_tran->tran_bus_reset = NULL;
2194 2202
2195 2203 hba_tran->tran_add_eventcall = NULL;
2196 2204 hba_tran->tran_get_eventcookie = NULL;
2197 2205 hba_tran->tran_post_event = NULL;
2198 2206 hba_tran->tran_remove_eventcall = NULL;
2199 2207
2200 2208 hba_tran->tran_bus_config = mptsas_bus_config;
2201 2209
2202 2210 hba_tran->tran_interconnect_type = INTERCONNECT_SAS;
2203 2211
2204 2212 /*
 2205 2213 	 * All children of the HBA are iports, so the tran must be cloned.
 2206 2214 	 * We pass the flags to SCSA; SCSI_HBA_TRAN_CLONE will be
 2207 2215 	 * inherited by each iport's tran vector.
2208 2216 */
2209 2217 tran_flags = (SCSI_HBA_HBA | SCSI_HBA_TRAN_CLONE);
2210 2218
2211 2219 if (scsi_hba_attach_setup(mpt->m_dip, &mpt->m_msg_dma_attr,
2212 2220 hba_tran, tran_flags) != DDI_SUCCESS) {
2213 2221 mptsas_log(mpt, CE_WARN, "hba attach setup failed");
2214 2222 scsi_hba_tran_free(hba_tran);
2215 2223 mpt->m_tran = NULL;
2216 2224 return (FALSE);
2217 2225 }
2218 2226 return (TRUE);
2219 2227 }
2220 2228
2221 2229 static void
2222 2230 mptsas_hba_teardown(mptsas_t *mpt)
2223 2231 {
2224 2232 (void) scsi_hba_detach(mpt->m_dip);
2225 2233 if (mpt->m_tran != NULL) {
2226 2234 scsi_hba_tran_free(mpt->m_tran);
2227 2235 mpt->m_tran = NULL;
2228 2236 }
2229 2237 }
2230 2238
2231 2239 static void
2232 2240 mptsas_iport_register(mptsas_t *mpt)
2233 2241 {
2234 2242 int i, j;
2235 2243 mptsas_phymask_t mask = 0x0;
2236 2244 /*
2237 2245 * initial value of mask is 0
2238 2246 */
2239 2247 mutex_enter(&mpt->m_mutex);
2240 2248 for (i = 0; i < mpt->m_num_phys; i++) {
2241 2249 mptsas_phymask_t phy_mask = 0x0;
2242 2250 char phy_mask_name[MPTSAS_MAX_PHYS];
2243 2251 uint8_t current_port;
2244 2252
2245 2253 if (mpt->m_phy_info[i].attached_devhdl == 0)
2246 2254 continue;
2247 2255
2248 2256 bzero(phy_mask_name, sizeof (phy_mask_name));
2249 2257
2250 2258 current_port = mpt->m_phy_info[i].port_num;
2251 2259
2252 2260 if ((mask & (1 << i)) != 0)
2253 2261 continue;
2254 2262
2255 2263 for (j = 0; j < mpt->m_num_phys; j++) {
2256 2264 if (mpt->m_phy_info[j].attached_devhdl &&
2257 2265 (mpt->m_phy_info[j].port_num == current_port)) {
2258 2266 phy_mask |= (1 << j);
2259 2267 }
2260 2268 }
2261 2269 mask = mask | phy_mask;
2262 2270
2263 2271 for (j = 0; j < mpt->m_num_phys; j++) {
2264 2272 if ((phy_mask >> j) & 0x01) {
2265 2273 mpt->m_phy_info[j].phy_mask = phy_mask;
2266 2274 }
2267 2275 }
2268 2276
2269 2277 (void) sprintf(phy_mask_name, "%x", phy_mask);
2270 2278
2271 2279 mutex_exit(&mpt->m_mutex);
2272 2280 /*
 2273 2281 		 * register an iport
2274 2282 */
2275 2283 (void) scsi_hba_iport_register(mpt->m_dip, phy_mask_name);
2276 2284 mutex_enter(&mpt->m_mutex);
2277 2285 }
2278 2286 mutex_exit(&mpt->m_mutex);
2279 2287 /*
 2280 2288 	 * always register a virtual port for RAID volumes
2281 2289 */
2282 2290 (void) scsi_hba_iport_register(mpt->m_dip, "v0");
2283 2291
2284 2292 }
2285 2293
2286 2294 static int
2287 2295 mptsas_smp_setup(mptsas_t *mpt)
2288 2296 {
2289 2297 mpt->m_smptran = smp_hba_tran_alloc(mpt->m_dip);
2290 2298 ASSERT(mpt->m_smptran != NULL);
2291 2299 mpt->m_smptran->smp_tran_hba_private = mpt;
2292 2300 mpt->m_smptran->smp_tran_start = mptsas_smp_start;
2293 2301 if (smp_hba_attach_setup(mpt->m_dip, mpt->m_smptran) != DDI_SUCCESS) {
2294 2302 mptsas_log(mpt, CE_WARN, "smp attach setup failed");
2295 2303 smp_hba_tran_free(mpt->m_smptran);
2296 2304 mpt->m_smptran = NULL;
2297 2305 return (FALSE);
2298 2306 }
2299 2307 /*
2300 2308 * Initialize smp hash table
2301 2309 */
2302 2310 mpt->m_smp_targets = refhash_create(MPTSAS_SMP_BUCKET_COUNT,
2303 2311 mptsas_target_addr_hash, mptsas_target_addr_cmp,
2304 2312 mptsas_smp_free, sizeof (mptsas_smp_t),
2305 2313 offsetof(mptsas_smp_t, m_link), offsetof(mptsas_smp_t, m_addr),
2306 2314 KM_SLEEP);
2307 2315 mpt->m_smp_devhdl = 0xFFFF;
2308 2316
2309 2317 return (TRUE);
2310 2318 }
2311 2319
2312 2320 static void
2313 2321 mptsas_smp_teardown(mptsas_t *mpt)
2314 2322 {
2315 2323 (void) smp_hba_detach(mpt->m_dip);
2316 2324 if (mpt->m_smptran != NULL) {
2317 2325 smp_hba_tran_free(mpt->m_smptran);
2318 2326 mpt->m_smptran = NULL;
2319 2327 }
2320 2328 mpt->m_smp_devhdl = 0;
2321 2329 }
2322 2330
2323 2331 static int
2324 2332 mptsas_enc_setup(mptsas_t *mpt)
2325 2333 {
2326 2334 list_create(&mpt->m_enclosures, sizeof (mptsas_enclosure_t),
2327 2335 offsetof(mptsas_enclosure_t, me_link));
2328 2336 return (TRUE);
2329 2337 }
2330 2338
2331 2339 static void
2340 +mptsas_enc_free(mptsas_enclosure_t *mep)
2341 +{
2342 + if (mep == NULL)
2343 + return;
2344 + if (mep->me_slotleds != NULL) {
2345 + VERIFY3U(mep->me_nslots, >, 0);
2346 + kmem_free(mep->me_slotleds, sizeof (uint8_t) * mep->me_nslots);
2347 + }
2348 + kmem_free(mep, sizeof (mptsas_enclosure_t));
2349 +}
2350 +
2351 +static void
2332 2352 mptsas_enc_teardown(mptsas_t *mpt)
2333 2353 {
2334 2354 mptsas_enclosure_t *mep;
2335 2355
2336 2356 while ((mep = list_remove_head(&mpt->m_enclosures)) != NULL) {
2337 - kmem_free(mep, sizeof (mptsas_enclosure_t));
2357 + mptsas_enc_free(mep);
2338 2358 }
2339 2359 list_destroy(&mpt->m_enclosures);
2340 2360 }
2341 2361
2342 2362 static mptsas_enclosure_t *
2343 2363 mptsas_enc_lookup(mptsas_t *mpt, uint16_t hdl)
2344 2364 {
2345 2365 mptsas_enclosure_t *mep;
2346 2366
2347 2367 ASSERT(MUTEX_HELD(&mpt->m_mutex));
2348 2368
2349 2369 for (mep = list_head(&mpt->m_enclosures); mep != NULL;
2350 2370 mep = list_next(&mpt->m_enclosures, mep)) {
2351 2371 if (hdl == mep->me_enchdl) {
2352 2372 return (mep);
2353 2373 }
2354 2374 }
2355 2375
2356 2376 return (NULL);
2357 2377 }
2358 2378
2359 2379 static int
2360 2380 mptsas_cache_create(mptsas_t *mpt)
2361 2381 {
2362 2382 int instance = mpt->m_instance;
2363 2383 char buf[64];
2364 2384
2365 2385 /*
2366 2386 * create kmem cache for packets
2367 2387 */
2368 2388 (void) sprintf(buf, "mptsas%d_cache", instance);
2369 2389 mpt->m_kmem_cache = kmem_cache_create(buf,
2370 2390 sizeof (struct mptsas_cmd) + scsi_pkt_size(), 8,
2371 2391 mptsas_kmem_cache_constructor, mptsas_kmem_cache_destructor,
2372 2392 NULL, (void *)mpt, NULL, 0);
2373 2393
2374 2394 if (mpt->m_kmem_cache == NULL) {
2375 2395 mptsas_log(mpt, CE_WARN, "creating kmem cache failed");
2376 2396 return (FALSE);
2377 2397 }
2378 2398
2379 2399 /*
2380 2400 * create kmem cache for extra SGL frames if SGL cannot
 2381 2401 	 * be accommodated in the main request frame.
2382 2402 */
2383 2403 (void) sprintf(buf, "mptsas%d_cache_frames", instance);
2384 2404 mpt->m_cache_frames = kmem_cache_create(buf,
2385 2405 sizeof (mptsas_cache_frames_t), 8,
2386 2406 mptsas_cache_frames_constructor, mptsas_cache_frames_destructor,
2387 2407 NULL, (void *)mpt, NULL, 0);
2388 2408
2389 2409 if (mpt->m_cache_frames == NULL) {
2390 2410 mptsas_log(mpt, CE_WARN, "creating cache for frames failed");
2391 2411 return (FALSE);
2392 2412 }
2393 2413
2394 2414 return (TRUE);
2395 2415 }
2396 2416
2397 2417 static void
2398 2418 mptsas_cache_destroy(mptsas_t *mpt)
2399 2419 {
2400 2420 /* deallocate in reverse order */
2401 2421 if (mpt->m_cache_frames) {
2402 2422 kmem_cache_destroy(mpt->m_cache_frames);
2403 2423 mpt->m_cache_frames = NULL;
2404 2424 }
2405 2425 if (mpt->m_kmem_cache) {
2406 2426 kmem_cache_destroy(mpt->m_kmem_cache);
2407 2427 mpt->m_kmem_cache = NULL;
2408 2428 }
2409 2429 }
2410 2430
2411 2431 static int
2412 2432 mptsas_power(dev_info_t *dip, int component, int level)
2413 2433 {
2414 2434 #ifndef __lock_lint
2415 2435 _NOTE(ARGUNUSED(component))
2416 2436 #endif
2417 2437 mptsas_t *mpt;
2418 2438 int rval = DDI_SUCCESS;
2419 2439 int polls = 0;
2420 2440 uint32_t ioc_status;
2421 2441
2422 2442 if (scsi_hba_iport_unit_address(dip) != 0)
2423 2443 return (DDI_SUCCESS);
2424 2444
2425 2445 mpt = ddi_get_soft_state(mptsas_state, ddi_get_instance(dip));
2426 2446 if (mpt == NULL) {
2427 2447 return (DDI_FAILURE);
2428 2448 }
2429 2449
2430 2450 mutex_enter(&mpt->m_mutex);
2431 2451
2432 2452 /*
2433 2453 * If the device is busy, don't lower its power level
2434 2454 */
2435 2455 if (mpt->m_busy && (mpt->m_power_level > level)) {
2436 2456 mutex_exit(&mpt->m_mutex);
2437 2457 return (DDI_FAILURE);
2438 2458 }
2439 2459 switch (level) {
2440 2460 case PM_LEVEL_D0:
2441 2461 NDBG11(("mptsas%d: turning power ON.", mpt->m_instance));
2442 2462 MPTSAS_POWER_ON(mpt);
2443 2463 /*
2444 2464 * Wait up to 30 seconds for IOC to come out of reset.
2445 2465 */
2446 2466 while (((ioc_status = ddi_get32(mpt->m_datap,
2447 2467 &mpt->m_reg->Doorbell)) &
2448 2468 MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_RESET) {
2449 2469 if (polls++ > 3000) {
2450 2470 break;
2451 2471 }
2452 2472 delay(drv_usectohz(10000));
2453 2473 }
2454 2474 /*
2455 2475 * If IOC is not in operational state, try to hard reset it.
2456 2476 */
2457 2477 if ((ioc_status & MPI2_IOC_STATE_MASK) !=
2458 2478 MPI2_IOC_STATE_OPERATIONAL) {
2459 2479 mpt->m_softstate &= ~MPTSAS_SS_MSG_UNIT_RESET;
2460 2480 if (mptsas_restart_ioc(mpt) == DDI_FAILURE) {
2461 2481 mptsas_log(mpt, CE_WARN,
2462 2482 "mptsas_power: hard reset failed");
2463 2483 mutex_exit(&mpt->m_mutex);
2464 2484 return (DDI_FAILURE);
2465 2485 }
2466 2486 }
2467 2487 mpt->m_power_level = PM_LEVEL_D0;
2468 2488 break;
2469 2489 case PM_LEVEL_D3:
2470 2490 NDBG11(("mptsas%d: turning power OFF.", mpt->m_instance));
2471 2491 MPTSAS_POWER_OFF(mpt);
2472 2492 break;
2473 2493 default:
2474 - mptsas_log(mpt, CE_WARN, "mptsas%d: unknown power level <%x>.",
2475 - mpt->m_instance, level);
2494 + mptsas_log(mpt, CE_WARN, "unknown power level <%x>", level);
2476 2495 rval = DDI_FAILURE;
2477 2496 break;
2478 2497 }
2479 2498 mutex_exit(&mpt->m_mutex);
2480 2499 return (rval);
2481 2500 }
2482 2501
2483 2502 /*
2484 2503 * Initialize configuration space and figure out which
 2485 2504  * chip and revision of the chip the mpt driver is using.
2486 2505 */
2487 2506 static int
2488 2507 mptsas_config_space_init(mptsas_t *mpt)
2489 2508 {
2490 2509 NDBG0(("mptsas_config_space_init"));
2491 2510
2492 2511 if (mpt->m_config_handle != NULL)
2493 2512 return (TRUE);
2494 2513
2495 2514 if (pci_config_setup(mpt->m_dip,
2496 2515 &mpt->m_config_handle) != DDI_SUCCESS) {
2497 2516 mptsas_log(mpt, CE_WARN, "cannot map configuration space.");
2498 2517 return (FALSE);
2499 2518 }
2500 2519
2501 2520 /*
2502 2521 * This is a workaround for a XMITS ASIC bug which does not
2503 2522 * drive the CBE upper bits.
2504 2523 */
2505 2524 if (pci_config_get16(mpt->m_config_handle, PCI_CONF_STAT) &
2506 2525 PCI_STAT_PERROR) {
2507 2526 pci_config_put16(mpt->m_config_handle, PCI_CONF_STAT,
2508 2527 PCI_STAT_PERROR);
2509 2528 }
2510 2529
2511 2530 mptsas_setup_cmd_reg(mpt);
2512 2531
2513 2532 /*
2514 2533 * Get the chip device id:
2515 2534 */
2516 2535 mpt->m_devid = pci_config_get16(mpt->m_config_handle, PCI_CONF_DEVID);
2517 2536
2518 2537 /*
2519 2538 * Save the revision.
2520 2539 */
2521 2540 mpt->m_revid = pci_config_get8(mpt->m_config_handle, PCI_CONF_REVID);
2522 2541
2523 2542 /*
2524 2543 * Save the SubSystem Vendor and Device IDs
2525 2544 */
2526 2545 mpt->m_svid = pci_config_get16(mpt->m_config_handle, PCI_CONF_SUBVENID);
2527 2546 mpt->m_ssid = pci_config_get16(mpt->m_config_handle, PCI_CONF_SUBSYSID);
2528 2547
2529 2548 /*
2530 2549 * Set the latency timer to 0x40 as specified by the upa -> pci
2531 2550 * bridge chip design team. This may be done by the sparc pci
2532 2551 * bus nexus driver, but the driver should make sure the latency
2533 2552 * timer is correct for performance reasons.
2534 2553 */
2535 2554 pci_config_put8(mpt->m_config_handle, PCI_CONF_LATENCY_TIMER,
2536 2555 MPTSAS_LATENCY_TIMER);
2537 2556
2538 2557 (void) mptsas_get_pci_cap(mpt);
2539 2558 return (TRUE);
2540 2559 }
2541 2560
2542 2561 static void
2543 2562 mptsas_config_space_fini(mptsas_t *mpt)
2544 2563 {
2545 2564 if (mpt->m_config_handle != NULL) {
2546 2565 mptsas_disable_bus_master(mpt);
2547 2566 pci_config_teardown(&mpt->m_config_handle);
2548 2567 mpt->m_config_handle = NULL;
2549 2568 }
2550 2569 }
2551 2570
2552 2571 static void
2553 2572 mptsas_setup_cmd_reg(mptsas_t *mpt)
2554 2573 {
2555 2574 ushort_t cmdreg;
2556 2575
2557 2576 /*
2558 2577 * Set the command register to the needed values.
2559 2578 */
2560 2579 cmdreg = pci_config_get16(mpt->m_config_handle, PCI_CONF_COMM);
2561 2580 cmdreg |= (PCI_COMM_ME | PCI_COMM_SERR_ENABLE |
2562 2581 PCI_COMM_PARITY_DETECT | PCI_COMM_MAE);
2563 2582 cmdreg &= ~PCI_COMM_IO;
2564 2583 pci_config_put16(mpt->m_config_handle, PCI_CONF_COMM, cmdreg);
2565 2584 }
2566 2585
2567 2586 static void
2568 2587 mptsas_disable_bus_master(mptsas_t *mpt)
2569 2588 {
2570 2589 ushort_t cmdreg;
2571 2590
2572 2591 /*
2573 2592 * Clear the master enable bit in the PCI command register.
2574 2593 * This prevents any bus mastering activity like DMA.
2575 2594 */
2576 2595 cmdreg = pci_config_get16(mpt->m_config_handle, PCI_CONF_COMM);
2577 2596 cmdreg &= ~PCI_COMM_ME;
2578 2597 pci_config_put16(mpt->m_config_handle, PCI_CONF_COMM, cmdreg);
2579 2598 }
2580 2599
2581 2600 int
2582 2601 mptsas_dma_alloc(mptsas_t *mpt, mptsas_dma_alloc_state_t *dma_statep)
2583 2602 {
2584 2603 ddi_dma_attr_t attrs;
2585 2604
2586 2605 attrs = mpt->m_io_dma_attr;
2587 2606 attrs.dma_attr_sgllen = 1;
2588 2607
2589 2608 ASSERT(dma_statep != NULL);
2590 2609
2591 2610 if (mptsas_dma_addr_create(mpt, attrs, &dma_statep->handle,
2592 2611 &dma_statep->accessp, &dma_statep->memp, dma_statep->size,
2593 2612 &dma_statep->cookie) == FALSE) {
2594 2613 return (DDI_FAILURE);
2595 2614 }
2596 2615
2597 2616 return (DDI_SUCCESS);
2598 2617 }
2599 2618
2600 2619 void
2601 2620 mptsas_dma_free(mptsas_dma_alloc_state_t *dma_statep)
2602 2621 {
2603 2622 ASSERT(dma_statep != NULL);
2604 2623 mptsas_dma_addr_destroy(&dma_statep->handle, &dma_statep->accessp);
2605 2624 dma_statep->size = 0;
2606 2625 }
2607 2626
2608 2627 int
2609 2628 mptsas_do_dma(mptsas_t *mpt, uint32_t size, int var, int (*callback)())
2610 2629 {
2611 2630 ddi_dma_attr_t attrs;
2612 2631 ddi_dma_handle_t dma_handle;
2613 2632 caddr_t memp;
2614 2633 ddi_acc_handle_t accessp;
2615 2634 int rval;
2616 2635
2617 2636 ASSERT(mutex_owned(&mpt->m_mutex));
2618 2637
2619 2638 attrs = mpt->m_msg_dma_attr;
2620 2639 attrs.dma_attr_sgllen = 1;
2621 2640 attrs.dma_attr_granular = size;
2622 2641
2623 2642 if (mptsas_dma_addr_create(mpt, attrs, &dma_handle,
2624 2643 &accessp, &memp, size, NULL) == FALSE) {
2625 2644 return (DDI_FAILURE);
2626 2645 }
2627 2646
2628 2647 rval = (*callback) (mpt, memp, var, accessp);
2629 2648
2630 2649 if ((mptsas_check_dma_handle(dma_handle) != DDI_SUCCESS) ||
2631 2650 (mptsas_check_acc_handle(accessp) != DDI_SUCCESS)) {
2632 2651 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
2633 2652 rval = DDI_FAILURE;
2634 2653 }
2635 2654
2636 2655 mptsas_dma_addr_destroy(&dma_handle, &accessp);
2637 2656 return (rval);
2638 2657
2639 2658 }
2640 2659
2641 2660 static int
2642 2661 mptsas_alloc_request_frames(mptsas_t *mpt)
2643 2662 {
2644 2663 ddi_dma_attr_t frame_dma_attrs;
2645 2664 caddr_t memp;
2646 2665 ddi_dma_cookie_t cookie;
2647 2666 size_t mem_size;
2648 2667
2649 2668 /*
 2650 2669 	 * re-allocate if it has already been allocated
2651 2670 */
2652 2671 if (mpt->m_dma_req_frame_hdl)
2653 2672 mptsas_dma_addr_destroy(&mpt->m_dma_req_frame_hdl,
2654 2673 &mpt->m_acc_req_frame_hdl);
2655 2674
2656 2675 /*
2657 2676 * The size of the request frame pool is:
2658 2677 * Number of Request Frames * Request Frame Size
2659 2678 */
2660 2679 mem_size = mpt->m_max_requests * mpt->m_req_frame_size;
2661 2680
2662 2681 /*
2663 2682 * set the DMA attributes. System Request Message Frames must be
 2664 2683 	 * aligned on a 16-byte boundary.
2665 2684 */
2666 2685 frame_dma_attrs = mpt->m_msg_dma_attr;
2667 2686 frame_dma_attrs.dma_attr_align = 16;
2668 2687 frame_dma_attrs.dma_attr_sgllen = 1;
2669 2688
2670 2689 /*
2671 2690 * allocate the request frame pool.
2672 2691 */
2673 2692 if (mptsas_dma_addr_create(mpt, frame_dma_attrs,
2674 2693 &mpt->m_dma_req_frame_hdl, &mpt->m_acc_req_frame_hdl, &memp,
2675 2694 mem_size, &cookie) == FALSE) {
2676 2695 return (DDI_FAILURE);
2677 2696 }
2678 2697
2679 2698 /*
2680 2699 * Store the request frame memory address. This chip uses this
2681 2700 * address to dma to and from the driver's frame. The second
2682 2701 * address is the address mpt uses to fill in the frame.
2683 2702 */
2684 2703 mpt->m_req_frame_dma_addr = cookie.dmac_laddress;
2685 2704 mpt->m_req_frame = memp;
2686 2705
2687 2706 /*
2688 2707 * Clear the request frame pool.
2689 2708 */
2690 2709 bzero(mpt->m_req_frame, mem_size);
2691 2710
2692 2711 return (DDI_SUCCESS);
2693 2712 }
2694 2713
2695 2714 static int
2696 2715 mptsas_alloc_sense_bufs(mptsas_t *mpt)
2697 2716 {
2698 2717 ddi_dma_attr_t sense_dma_attrs;
2699 2718 caddr_t memp;
2700 2719 ddi_dma_cookie_t cookie;
2701 2720 size_t mem_size;
2702 2721 int num_extrqsense_bufs;
2703 2722
2704 - ASSERT(mpt->m_extreq_sense_refcount == 0);
2705 -
2706 2723 /*
 2707 2724 	 * re-allocate if it has already been allocated
2708 2725 */
2709 2726 if (mpt->m_dma_req_sense_hdl) {
2710 2727 rmfreemap(mpt->m_erqsense_map);
2711 2728 mptsas_dma_addr_destroy(&mpt->m_dma_req_sense_hdl,
2712 2729 &mpt->m_acc_req_sense_hdl);
2713 2730 }
2714 2731
2715 2732 /*
2716 2733 * The size of the request sense pool is:
2717 2734 * (Number of Request Frames - 2 ) * Request Sense Size +
2718 2735 * extra memory for extended sense requests.
2719 2736 */
2720 2737 mem_size = ((mpt->m_max_requests - 2) * mpt->m_req_sense_size) +
2721 2738 mptsas_extreq_sense_bufsize;
2722 2739
2723 2740 /*
 2724 2741 	 * set the DMA attributes. ARQ buffers must be
 2725 2742 	 * aligned on a 16-byte boundary.
2726 2743 */
2727 2744 sense_dma_attrs = mpt->m_msg_dma_attr;
2728 2745 sense_dma_attrs.dma_attr_align = 16;
2729 2746 sense_dma_attrs.dma_attr_sgllen = 1;
2730 2747
2731 2748 /*
2732 2749 * allocate the request sense buffer pool.
2733 2750 */
2734 2751 if (mptsas_dma_addr_create(mpt, sense_dma_attrs,
2735 2752 &mpt->m_dma_req_sense_hdl, &mpt->m_acc_req_sense_hdl, &memp,
2736 2753 mem_size, &cookie) == FALSE) {
2737 2754 return (DDI_FAILURE);
2738 2755 }
2739 2756
2740 2757 /*
2741 2758 * Store the request sense base memory address. This chip uses this
2742 2759 * address to dma the request sense data. The second
2743 2760 * address is the address mpt uses to access the data.
2744 2761 * The third is the base for the extended rqsense buffers.
2745 2762 */
2746 2763 mpt->m_req_sense_dma_addr = cookie.dmac_laddress;
2747 2764 mpt->m_req_sense = memp;
2748 2765 memp += (mpt->m_max_requests - 2) * mpt->m_req_sense_size;
2749 2766 mpt->m_extreq_sense = memp;
2750 2767
2751 2768 /*
2752 2769 * The extra memory is divided up into multiples of the base
2753 2770 * buffer size in order to allocate via rmalloc().
2754 2771 * Note that the rmallocmap cannot start at zero!
2755 2772 */
2756 2773 num_extrqsense_bufs = mptsas_extreq_sense_bufsize /
2757 2774 mpt->m_req_sense_size;
2758 2775 mpt->m_erqsense_map = rmallocmap_wait(num_extrqsense_bufs);
2759 2776 rmfree(mpt->m_erqsense_map, num_extrqsense_bufs, 1);
2760 2777
2761 2778 /*
2762 2779 * Clear the pool.
2763 2780 */
2764 2781 bzero(mpt->m_req_sense, mem_size);
2765 2782
2766 2783 return (DDI_SUCCESS);
2767 2784 }
2768 2785
2769 2786 static int
2770 2787 mptsas_alloc_reply_frames(mptsas_t *mpt)
2771 2788 {
2772 2789 ddi_dma_attr_t frame_dma_attrs;
2773 2790 caddr_t memp;
2774 2791 ddi_dma_cookie_t cookie;
2775 2792 size_t mem_size;
2776 2793
2777 2794 /*
 2778 2795 	 * re-allocate if it has already been allocated
2779 2796 */
2780 2797 if (mpt->m_dma_reply_frame_hdl) {
2781 2798 mptsas_dma_addr_destroy(&mpt->m_dma_reply_frame_hdl,
2782 2799 &mpt->m_acc_reply_frame_hdl);
2783 2800 }
2784 2801
2785 2802 /*
2786 2803 * The size of the reply frame pool is:
2787 2804 * Number of Reply Frames * Reply Frame Size
2788 2805 */
2789 2806 mem_size = mpt->m_max_replies * mpt->m_reply_frame_size;
2790 2807
2791 2808 /*
2792 2809 * set the DMA attributes. System Reply Message Frames must be
 2793 2810 	 * aligned on a 4-byte boundary. This is the default.
2794 2811 */
2795 2812 frame_dma_attrs = mpt->m_msg_dma_attr;
2796 2813 frame_dma_attrs.dma_attr_sgllen = 1;
2797 2814
2798 2815 /*
2799 2816 * allocate the reply frame pool
2800 2817 */
2801 2818 if (mptsas_dma_addr_create(mpt, frame_dma_attrs,
2802 2819 &mpt->m_dma_reply_frame_hdl, &mpt->m_acc_reply_frame_hdl, &memp,
2803 2820 mem_size, &cookie) == FALSE) {
2804 2821 return (DDI_FAILURE);
2805 2822 }
2806 2823
2807 2824 /*
2808 2825 * Store the reply frame memory address. This chip uses this
2809 2826 * address to dma to and from the driver's frame. The second
2810 2827 * address is the address mpt uses to process the frame.
2811 2828 */
2812 2829 mpt->m_reply_frame_dma_addr = cookie.dmac_laddress;
2813 2830 mpt->m_reply_frame = memp;
2814 2831
2815 2832 /*
2816 2833 * Clear the reply frame pool.
2817 2834 */
2818 2835 bzero(mpt->m_reply_frame, mem_size);
2819 2836
2820 2837 return (DDI_SUCCESS);
2821 2838 }
2822 2839
2823 2840 static int
2824 2841 mptsas_alloc_free_queue(mptsas_t *mpt)
2825 2842 {
2826 2843 ddi_dma_attr_t frame_dma_attrs;
2827 2844 caddr_t memp;
2828 2845 ddi_dma_cookie_t cookie;
2829 2846 size_t mem_size;
2830 2847
2831 2848 /*
 2832 2849 	 * re-allocate if it has already been allocated
2833 2850 */
2834 2851 if (mpt->m_dma_free_queue_hdl) {
2835 2852 mptsas_dma_addr_destroy(&mpt->m_dma_free_queue_hdl,
2836 2853 &mpt->m_acc_free_queue_hdl);
2837 2854 }
2838 2855
2839 2856 /*
2840 2857 * The reply free queue size is:
2841 2858 * Reply Free Queue Depth * 4
2842 2859 * The "4" is the size of one 32 bit address (low part of 64-bit
2843 2860 * address)
2844 2861 */
2845 2862 mem_size = mpt->m_free_queue_depth * 4;
2846 2863
2847 2864 /*
 2848 2865 	 * set the DMA attributes. The Reply Free Queue must be aligned on a
 2849 2866 	 * 16-byte boundary.
2850 2867 */
2851 2868 frame_dma_attrs = mpt->m_msg_dma_attr;
2852 2869 frame_dma_attrs.dma_attr_align = 16;
2853 2870 frame_dma_attrs.dma_attr_sgllen = 1;
2854 2871
2855 2872 /*
2856 2873 * allocate the reply free queue
2857 2874 */
2858 2875 if (mptsas_dma_addr_create(mpt, frame_dma_attrs,
2859 2876 &mpt->m_dma_free_queue_hdl, &mpt->m_acc_free_queue_hdl, &memp,
2860 2877 mem_size, &cookie) == FALSE) {
2861 2878 return (DDI_FAILURE);
2862 2879 }
2863 2880
2864 2881 /*
2865 2882 * Store the reply free queue memory address. This chip uses this
2866 2883 * address to read from the reply free queue. The second address
2867 2884 * is the address mpt uses to manage the queue.
2868 2885 */
2869 2886 mpt->m_free_queue_dma_addr = cookie.dmac_laddress;
2870 2887 mpt->m_free_queue = memp;
2871 2888
2872 2889 /*
2873 2890 * Clear the reply free queue memory.
2874 2891 */
2875 2892 bzero(mpt->m_free_queue, mem_size);
2876 2893
2877 2894 return (DDI_SUCCESS);
2878 2895 }
2879 2896
2880 2897 static int
2881 2898 mptsas_alloc_post_queue(mptsas_t *mpt)
2882 2899 {
2883 2900 ddi_dma_attr_t frame_dma_attrs;
2884 2901 caddr_t memp;
2885 2902 ddi_dma_cookie_t cookie;
2886 2903 size_t mem_size;
2887 2904
2888 2905 /*
 2889 2906 	 * re-allocate if it has already been allocated
2890 2907 */
2891 2908 if (mpt->m_dma_post_queue_hdl) {
2892 2909 mptsas_dma_addr_destroy(&mpt->m_dma_post_queue_hdl,
2893 2910 &mpt->m_acc_post_queue_hdl);
2894 2911 }
2895 2912
2896 2913 /*
2897 2914 * The reply descriptor post queue size is:
2898 2915 * Reply Descriptor Post Queue Depth * 8
2899 2916 * The "8" is the size of each descriptor (8 bytes or 64 bits).
2900 2917 */
2901 2918 mem_size = mpt->m_post_queue_depth * 8;
2902 2919
2903 2920 /*
2904 2921 * set the DMA attributes. The Reply Descriptor Post Queue must be
 2905 2922 	 * aligned on a 16-byte boundary.
2906 2923 */
2907 2924 frame_dma_attrs = mpt->m_msg_dma_attr;
2908 2925 frame_dma_attrs.dma_attr_align = 16;
2909 2926 frame_dma_attrs.dma_attr_sgllen = 1;
2910 2927
2911 2928 /*
2912 2929 * allocate the reply post queue
2913 2930 */
2914 2931 if (mptsas_dma_addr_create(mpt, frame_dma_attrs,
2915 2932 &mpt->m_dma_post_queue_hdl, &mpt->m_acc_post_queue_hdl, &memp,
2916 2933 mem_size, &cookie) == FALSE) {
2917 2934 return (DDI_FAILURE);
2918 2935 }
2919 2936
2920 2937 /*
2921 2938 * Store the reply descriptor post queue memory address. This chip
2922 2939 * uses this address to write to the reply descriptor post queue. The
2923 2940 * second address is the address mpt uses to manage the queue.
2924 2941 */
2925 2942 mpt->m_post_queue_dma_addr = cookie.dmac_laddress;
2926 2943 mpt->m_post_queue = memp;
2927 2944
2928 2945 /*
2929 2946 * Clear the reply post queue memory.
2930 2947 */
2931 2948 bzero(mpt->m_post_queue, mem_size);
2932 2949
2933 2950 return (DDI_SUCCESS);
2934 2951 }
2935 2952
2936 2953 static void
2937 2954 mptsas_alloc_reply_args(mptsas_t *mpt)
2938 2955 {
2939 2956 if (mpt->m_replyh_args == NULL) {
2940 2957 mpt->m_replyh_args = kmem_zalloc(sizeof (m_replyh_arg_t) *
2941 2958 mpt->m_max_replies, KM_SLEEP);
2942 2959 }
2943 2960 }
2944 2961
2945 2962 static int
2946 2963 mptsas_alloc_extra_sgl_frame(mptsas_t *mpt, mptsas_cmd_t *cmd)
2947 2964 {
2948 2965 mptsas_cache_frames_t *frames = NULL;
2949 2966 if (cmd->cmd_extra_frames == NULL) {
2950 2967 frames = kmem_cache_alloc(mpt->m_cache_frames, KM_NOSLEEP);
2951 2968 if (frames == NULL) {
2952 2969 return (DDI_FAILURE);
2953 2970 }
2954 2971 cmd->cmd_extra_frames = frames;
2955 2972 }
2956 2973 return (DDI_SUCCESS);
2957 2974 }
2958 2975
2959 2976 static void
2960 2977 mptsas_free_extra_sgl_frame(mptsas_t *mpt, mptsas_cmd_t *cmd)
2961 2978 {
2962 2979 if (cmd->cmd_extra_frames) {
2963 2980 kmem_cache_free(mpt->m_cache_frames,
2964 2981 (void *)cmd->cmd_extra_frames);
2965 2982 cmd->cmd_extra_frames = NULL;
2966 2983 }
2967 2984 }
2968 2985
2969 2986 static void
2970 2987 mptsas_cfg_fini(mptsas_t *mpt)
2971 2988 {
2972 2989 NDBG0(("mptsas_cfg_fini"));
2973 2990 ddi_regs_map_free(&mpt->m_datap);
2974 2991 }
2975 2992
2976 2993 static void
2977 2994 mptsas_hba_fini(mptsas_t *mpt)
2978 2995 {
2979 2996 NDBG0(("mptsas_hba_fini"));
2980 2997
2981 2998 /*
2982 2999 * Free up any allocated memory
2983 3000 */
2984 3001 if (mpt->m_dma_req_frame_hdl) {
2985 3002 mptsas_dma_addr_destroy(&mpt->m_dma_req_frame_hdl,
2986 3003 &mpt->m_acc_req_frame_hdl);
2987 3004 }
2988 3005
2989 3006 if (mpt->m_dma_req_sense_hdl) {
2990 3007 rmfreemap(mpt->m_erqsense_map);
2991 3008 mptsas_dma_addr_destroy(&mpt->m_dma_req_sense_hdl,
2992 3009 &mpt->m_acc_req_sense_hdl);
2993 3010 }
2994 3011
2995 3012 if (mpt->m_dma_reply_frame_hdl) {
2996 3013 mptsas_dma_addr_destroy(&mpt->m_dma_reply_frame_hdl,
2997 3014 &mpt->m_acc_reply_frame_hdl);
2998 3015 }
2999 3016
3000 3017 if (mpt->m_dma_free_queue_hdl) {
3001 3018 mptsas_dma_addr_destroy(&mpt->m_dma_free_queue_hdl,
3002 3019 &mpt->m_acc_free_queue_hdl);
3003 3020 }
3004 3021
3005 3022 if (mpt->m_dma_post_queue_hdl) {
3006 3023 mptsas_dma_addr_destroy(&mpt->m_dma_post_queue_hdl,
3007 3024 &mpt->m_acc_post_queue_hdl);
3008 3025 }
3009 3026
3010 3027 if (mpt->m_replyh_args != NULL) {
3011 3028 kmem_free(mpt->m_replyh_args, sizeof (m_replyh_arg_t)
3012 3029 * mpt->m_max_replies);
3013 3030 }
3014 3031 }
3015 3032
3016 3033 static int
3017 3034 mptsas_name_child(dev_info_t *lun_dip, char *name, int len)
3018 3035 {
3019 3036 int lun = 0;
3020 3037 char *sas_wwn = NULL;
3021 3038 int phynum = -1;
3022 3039 int reallen = 0;
3023 3040
3024 3041 /* Get the target num */
3025 3042 lun = ddi_prop_get_int(DDI_DEV_T_ANY, lun_dip, DDI_PROP_DONTPASS,
3026 3043 LUN_PROP, 0);
3027 3044
3028 3045 if ((phynum = ddi_prop_get_int(DDI_DEV_T_ANY, lun_dip,
3029 3046 DDI_PROP_DONTPASS, "sata-phy", -1)) != -1) {
3030 3047 /*
3031 3048 * Stick in the address of the form "pPHY,LUN"
3032 3049 */
3033 3050 reallen = snprintf(name, len, "p%x,%x", phynum, lun);
3034 3051 } else if (ddi_prop_lookup_string(DDI_DEV_T_ANY, lun_dip,
3035 3052 DDI_PROP_DONTPASS, SCSI_ADDR_PROP_TARGET_PORT, &sas_wwn)
3036 3053 == DDI_PROP_SUCCESS) {
3037 3054 /*
3038 3055 * Stick in the address of the form "wWWN,LUN"
3039 3056 */
3040 3057 reallen = snprintf(name, len, "%s,%x", sas_wwn, lun);
3041 3058 ddi_prop_free(sas_wwn);
3042 3059 } else {
3043 3060 return (DDI_FAILURE);
3044 3061 }
3045 3062
3046 3063 ASSERT(reallen < len);
3047 3064 if (reallen >= len) {
3048 - mptsas_log(0, CE_WARN, "!mptsas_get_name: name parameter "
3065 + mptsas_log(0, CE_WARN, "mptsas_get_name: name parameter "
3049 3066 "length too small, it needs to be %d bytes", reallen + 1);
3050 3067 }
3051 3068 return (DDI_SUCCESS);
3052 3069 }
3053 3070
3054 3071 /*
3055 3072 * tran_tgt_init(9E) - target device instance initialization
3056 3073 */
3057 3074 static int
3058 3075 mptsas_scsi_tgt_init(dev_info_t *hba_dip, dev_info_t *tgt_dip,
3059 3076 scsi_hba_tran_t *hba_tran, struct scsi_device *sd)
3060 3077 {
3061 3078 #ifndef __lock_lint
3062 3079 _NOTE(ARGUNUSED(hba_tran))
3063 3080 #endif
3064 3081
3065 3082 /*
3066 3083 * At this point, the scsi_device structure already exists
3067 3084 * and has been initialized.
3068 3085 *
3069 3086 * Use this function to allocate target-private data structures,
3070 3087 * if needed by this HBA. Add revised flow-control and queue
3071 3088 * properties for child here, if desired and if you can tell they
3072 3089 * support tagged queueing by now.
3073 3090 */
3074 3091 mptsas_t *mpt;
3075 3092 int lun = sd->sd_address.a_lun;
3076 3093 mdi_pathinfo_t *pip = NULL;
3077 3094 mptsas_tgt_private_t *tgt_private = NULL;
3078 3095 mptsas_target_t *ptgt = NULL;
3079 3096 char *psas_wwn = NULL;
3080 3097 mptsas_phymask_t phymask = 0;
3081 3098 uint64_t sas_wwn = 0;
3082 3099 mptsas_target_addr_t addr;
3083 3100 mpt = SDEV2MPT(sd);
3084 3101
3085 3102 ASSERT(scsi_hba_iport_unit_address(hba_dip) != 0);
3086 3103
3087 3104 NDBG0(("mptsas_scsi_tgt_init: hbadip=0x%p tgtdip=0x%p lun=%d",
3088 3105 (void *)hba_dip, (void *)tgt_dip, lun));
3089 3106
3090 3107 if (ndi_dev_is_persistent_node(tgt_dip) == 0) {
3091 3108 (void) ndi_merge_node(tgt_dip, mptsas_name_child);
3092 3109 ddi_set_name_addr(tgt_dip, NULL);
3093 3110 return (DDI_FAILURE);
3094 3111 }
3112 +
3095 3113 /*
3096 - * phymask is 0 means the virtual port for RAID
3114 + * The phymask exists if the port is active, otherwise
3115 + * nothing to do.
3097 3116 */
3117 + if (ddi_prop_exists(DDI_DEV_T_ANY, hba_dip,
3118 + DDI_PROP_DONTPASS | DDI_PROP_NOTPROM, "phymask") == 0)
3119 + return (DDI_FAILURE);
3120 +
3098 3121 phymask = (mptsas_phymask_t)ddi_prop_get_int(DDI_DEV_T_ANY, hba_dip, 0,
3099 3122 "phymask", 0);
3123 +
3100 3124 if (mdi_component_is_client(tgt_dip, NULL) == MDI_SUCCESS) {
3101 3125 if ((pip = (void *)(sd->sd_private)) == NULL) {
3102 3126 /*
3103 3127 * Very bad news if this occurs. Somehow scsi_vhci has
3104 3128 * lost the pathinfo node for this target.
3105 3129 */
3106 3130 return (DDI_NOT_WELL_FORMED);
3107 3131 }
3108 3132
3109 3133 if (mdi_prop_lookup_int(pip, LUN_PROP, &lun) !=
3110 3134 DDI_PROP_SUCCESS) {
3111 - mptsas_log(mpt, CE_WARN, "Get lun property failed\n");
3135 + mptsas_log(mpt, CE_WARN, "Get lun property failed");
3112 3136 return (DDI_FAILURE);
3113 3137 }
3114 3138
3115 3139 if (mdi_prop_lookup_string(pip, SCSI_ADDR_PROP_TARGET_PORT,
3116 3140 &psas_wwn) == MDI_SUCCESS) {
3117 3141 if (scsi_wwnstr_to_wwn(psas_wwn, &sas_wwn)) {
3118 3142 sas_wwn = 0;
3119 3143 }
3120 3144 (void) mdi_prop_free(psas_wwn);
3121 3145 }
3122 3146 } else {
3123 3147 lun = ddi_prop_get_int(DDI_DEV_T_ANY, tgt_dip,
3124 3148 DDI_PROP_DONTPASS, LUN_PROP, 0);
3125 3149 if (ddi_prop_lookup_string(DDI_DEV_T_ANY, tgt_dip,
3126 3150 DDI_PROP_DONTPASS, SCSI_ADDR_PROP_TARGET_PORT, &psas_wwn) ==
3127 3151 DDI_PROP_SUCCESS) {
3128 3152 if (scsi_wwnstr_to_wwn(psas_wwn, &sas_wwn)) {
3129 3153 sas_wwn = 0;
3130 3154 }
3131 3155 ddi_prop_free(psas_wwn);
3132 3156 } else {
3133 3157 sas_wwn = 0;
3134 3158 }
3135 3159 }
3136 3160
3137 3161 ASSERT((sas_wwn != 0) || (phymask != 0));
3138 3162 addr.mta_wwn = sas_wwn;
3139 3163 addr.mta_phymask = phymask;
3140 3164 mutex_enter(&mpt->m_mutex);
3141 3165 ptgt = refhash_lookup(mpt->m_targets, &addr);
3142 3166 mutex_exit(&mpt->m_mutex);
3143 3167 if (ptgt == NULL) {
3144 - mptsas_log(mpt, CE_WARN, "!tgt_init: target doesn't exist or "
3168 + mptsas_log(mpt, CE_WARN, "tgt_init: target doesn't exist or "
3145 3169 "gone already! phymask:%x, saswwn %"PRIx64, phymask,
3146 3170 sas_wwn);
3147 3171 return (DDI_FAILURE);
3148 3172 }
3149 3173 if (hba_tran->tran_tgt_private == NULL) {
3150 3174 tgt_private = kmem_zalloc(sizeof (mptsas_tgt_private_t),
3151 3175 KM_SLEEP);
3152 3176 tgt_private->t_lun = lun;
3153 3177 tgt_private->t_private = ptgt;
3154 3178 hba_tran->tran_tgt_private = tgt_private;
3155 3179 }
3156 3180
3157 3181 if (mdi_component_is_client(tgt_dip, NULL) == MDI_SUCCESS) {
3158 3182 return (DDI_SUCCESS);
3159 3183 }
3160 3184 mutex_enter(&mpt->m_mutex);
3161 3185
3162 3186 if (ptgt->m_deviceinfo &
3163 3187 (MPI2_SAS_DEVICE_INFO_SATA_DEVICE |
3164 3188 MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE)) {
3165 3189 uchar_t *inq89 = NULL;
3166 3190 int inq89_len = 0x238;
3167 3191 int reallen = 0;
3168 3192 int rval = 0;
3169 3193 struct sata_id *sid = NULL;
3170 3194 char model[SATA_ID_MODEL_LEN + 1];
3171 3195 char fw[SATA_ID_FW_LEN + 1];
3172 3196 char *vid, *pid;
3173 3197
3174 3198 mutex_exit(&mpt->m_mutex);
3175 3199 /*
3176 3200 * According to SCSI/ATA Translation-2 (SAT-2) revision 01a,
3177 3201 * chapter 12.4.2, VPD page 89h includes 512 bytes of ATA IDENTIFY
3178 3202 * DEVICE data or ATA IDENTIFY PACKET DEVICE data.
3179 3203 */
3180 3204 inq89 = kmem_zalloc(inq89_len, KM_SLEEP);
3181 3205 rval = mptsas_inquiry(mpt, ptgt, 0, 0x89,
3182 3206 inq89, inq89_len, &reallen, 1);
3183 3207
3184 3208 if (rval != 0) {
3185 3209 if (inq89 != NULL) {
3186 3210 kmem_free(inq89, inq89_len);
3187 3211 }
3188 3212
3189 - mptsas_log(mpt, CE_WARN, "!mptsas request inquiry page "
3213 + mptsas_log(mpt, CE_WARN, "mptsas request inquiry page "
3190 3214 "0x89 for SATA target:%x failed!", ptgt->m_devhdl);
3191 3215 return (DDI_SUCCESS);
3192 3216 }
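		/*
		 * Per SAT-2, the ATA IDENTIFY (PACKET) DEVICE data begins
		 * at byte offset 60 of the 89h VPD page, hence the fixed
		 * offset into inq89 below.
		 */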
3193 3217 sid = (void *)(&inq89[60]);
3194 3218
3195 3219 swab(sid->ai_model, model, SATA_ID_MODEL_LEN);
3196 3220 swab(sid->ai_fw, fw, SATA_ID_FW_LEN);
3197 3221
3198 3222 model[SATA_ID_MODEL_LEN] = 0;
3199 3223 fw[SATA_ID_FW_LEN] = 0;
3200 3224
3201 3225 sata_split_model(model, &vid, &pid);
3202 3226
3203 3227 /*
3204 3228 * override SCSA "inquiry-*" properties
3205 3229 */
3206 3230 if (vid)
3207 3231 (void) scsi_device_prop_update_inqstring(sd,
3208 3232 INQUIRY_VENDOR_ID, vid, strlen(vid));
3209 3233 if (pid)
3210 3234 (void) scsi_device_prop_update_inqstring(sd,
3211 3235 INQUIRY_PRODUCT_ID, pid, strlen(pid));
3212 3236 (void) scsi_device_prop_update_inqstring(sd,
3213 3237 INQUIRY_REVISION_ID, fw, strlen(fw));
3214 3238
3215 3239 if (inq89 != NULL) {
3216 3240 kmem_free(inq89, inq89_len);
3217 3241 }
3218 3242 } else {
3219 3243 mutex_exit(&mpt->m_mutex);
3220 3244 }
3221 3245
3222 3246 return (DDI_SUCCESS);
3223 3247 }
3224 3248 /*
3225 3249 * tran_tgt_free(9E) - target device instance deallocation
3226 3250 */
3227 3251 static void
3228 3252 mptsas_scsi_tgt_free(dev_info_t *hba_dip, dev_info_t *tgt_dip,
3229 3253 scsi_hba_tran_t *hba_tran, struct scsi_device *sd)
3230 3254 {
3231 3255 #ifndef __lock_lint
3232 3256 _NOTE(ARGUNUSED(hba_dip, tgt_dip, hba_tran, sd))
3233 3257 #endif
3234 3258
3235 3259 mptsas_tgt_private_t *tgt_private = hba_tran->tran_tgt_private;
3236 3260
3237 3261 if (tgt_private != NULL) {
3238 3262 kmem_free(tgt_private, sizeof (mptsas_tgt_private_t));
3239 3263 hba_tran->tran_tgt_private = NULL;
3240 3264 }
3241 3265 }
3242 3266
3243 3267 /*
3244 3268 * scsi_pkt handling
3245 3269 *
3246 3270 * Visible to the external world via the transport structure.
3247 3271 */
3248 3272
3249 3273 /*
3250 3274 * Notes:
3251 3275 * - transport the command to the addressed SCSI target/lun device
3252 3276 * - normal operation is to schedule the command to be transported,
3253 3277 * and return TRAN_ACCEPT if this is successful.
3254 3278 * - if NO_INTR, tran_start must poll device for command completion
3255 3279 */
3256 3280 static int
3257 3281 mptsas_scsi_start(struct scsi_address *ap, struct scsi_pkt *pkt)
3258 3282 {
3259 3283 #ifndef __lock_lint
3260 3284 _NOTE(ARGUNUSED(ap))
3261 3285 #endif
3262 3286 mptsas_t *mpt = PKT2MPT(pkt);
3263 3287 mptsas_cmd_t *cmd = PKT2CMD(pkt);
3264 3288 int rval;
3265 3289 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
3266 3290
3267 3291 NDBG1(("mptsas_scsi_start: pkt=0x%p", (void *)pkt));
3268 3292 ASSERT(ptgt);
3269 3293 if (ptgt == NULL)
3270 3294 return (TRAN_FATAL_ERROR);
3271 3295
3272 3296 /*
3273 3297 * prepare the pkt before taking mutex.
3274 3298 */
3275 3299 rval = mptsas_prepare_pkt(cmd);
3276 3300 if (rval != TRAN_ACCEPT) {
3277 3301 return (rval);
3278 3302 }
3279 3303
3280 3304 /*
3281 3305 * Send the command to target/lun, however your HBA requires it.
3282 3306 * If busy, return TRAN_BUSY; if there's some other formatting error
3283 3307 * in the packet, return TRAN_BADPKT; otherwise, fall through to the
3284 3308 * return of TRAN_ACCEPT.
3285 3309 *
3286 3310 * Remember that access to shared resources, including the mptsas_t
3287 3311 * data structure and the HBA hardware registers, must be protected
3288 3312 * with mutexes, here and everywhere.
3289 3313 *
3290 3314 * Also remember that at interrupt time, you'll get an argument
3291 3315 * to the interrupt handler which is a pointer to your mptsas_t
3292 3316 * structure; you'll have to remember which commands are outstanding
3293 3317 * and which scsi_pkt is the currently-running command so the
3294 3318 * interrupt handler can refer to the pkt to set completion
3295 3319 * status, call the target driver back through pkt_comp, etc.
3296 3320 *
3297 3321 * If the instance lock is held by another thread, don't spin to wait
3298 3322 * for it. Instead, queue the cmd and, the next time the instance lock
3299 3323 * is not held, accept all the queued cmds. An extra tx_waitq is
3300 3324 * introduced to protect the queue.
3301 3325 *
3302 3326 * A polled cmd will not be queued; it is accepted as usual.
3303 3327 *
3304 3328 * Under the tx_waitq mutex, record whether a thread is draining
3305 3329 * the tx_waitq. An I/O requesting thread that finds the instance
3306 3330 * mutex contended appends to the tx_waitq and, while holding the
3307 3331 * tx_waitq mutex, if the draining flag is not set, sets it and then
3308 3332 * proceeds to spin for the instance mutex. This scheme ensures that
3309 3333 * the last cmd in a burst is processed.
3310 3334 *
3311 3335 * We enable this feature only when the helper threads are enabled,
3312 3336 * at which point we expect the load to be heavy.
3313 3337 *
3314 3338 * A per-instance mutex, m_tx_waitq_mutex, is introduced to protect
3315 3339 * m_tx_waitqtail, m_tx_waitq, and m_tx_draining.
3316 3340 */
3317 3341
3318 3342 if (mpt->m_doneq_thread_n) {
3319 3343 if (mutex_tryenter(&mpt->m_mutex) != 0) {
3320 3344 rval = mptsas_accept_txwq_and_pkt(mpt, cmd);
3321 3345 mutex_exit(&mpt->m_mutex);
3322 3346 } else if (cmd->cmd_pkt_flags & FLAG_NOINTR) {
3323 3347 mutex_enter(&mpt->m_mutex);
3324 3348 rval = mptsas_accept_txwq_and_pkt(mpt, cmd);
3325 3349 mutex_exit(&mpt->m_mutex);
3326 3350 } else {
3327 3351 mutex_enter(&mpt->m_tx_waitq_mutex);
3328 3352 /*
3329 3353 * ptgt->m_dr_flag is protected by m_mutex or
3330 3354 * m_tx_waitq_mutex. In this case, m_tx_waitq_mutex
3331 3355 * is acquired.
3332 3356 */
3333 3357 if (ptgt->m_dr_flag == MPTSAS_DR_INTRANSITION) {
3334 3358 if (cmd->cmd_pkt_flags & FLAG_NOQUEUE) {
3335 3359 /*
3336 3360 * The command should be allowed to
3337 3361 * retry by returning TRAN_BUSY to
3338 3362 * stall the I/Os which come from
3339 3363 * scsi_vhci since the device/path is
3340 3364 * in an unstable state now.
3341 3365 */
3342 3366 mutex_exit(&mpt->m_tx_waitq_mutex);
3343 3367 return (TRAN_BUSY);
3344 3368 } else {
3345 3369 /*
3346 3370 * The device is offline, just fail the
3347 3371 * command by returning
3348 3372 * TRAN_FATAL_ERROR.
3349 3373 */
3350 3374 mutex_exit(&mpt->m_tx_waitq_mutex);
3351 3375 return (TRAN_FATAL_ERROR);
3352 3376 }
3353 3377 }
3354 3378 if (mpt->m_tx_draining) {
3355 3379 cmd->cmd_flags |= CFLAG_TXQ;
3356 3380 *mpt->m_tx_waitqtail = cmd;
3357 3381 mpt->m_tx_waitqtail = &cmd->cmd_linkp;
3358 3382 mutex_exit(&mpt->m_tx_waitq_mutex);
3359 3383 } else { /* drain the queue */
3360 3384 mpt->m_tx_draining = 1;
3361 3385 mutex_exit(&mpt->m_tx_waitq_mutex);
3362 3386 mutex_enter(&mpt->m_mutex);
3363 3387 rval = mptsas_accept_txwq_and_pkt(mpt, cmd);
3364 3388 mutex_exit(&mpt->m_mutex);
3365 3389 }
3366 3390 }
3367 3391 } else {
3368 3392 mutex_enter(&mpt->m_mutex);
3369 3393 /*
3370 3394 * ptgt->m_dr_flag is protected by m_mutex or m_tx_waitq_mutex
3371 3395 * in this case, m_mutex is acquired.
3372 3396 */
3373 3397 if (ptgt->m_dr_flag == MPTSAS_DR_INTRANSITION) {
3374 3398 if (cmd->cmd_pkt_flags & FLAG_NOQUEUE) {
3375 3399 /*
3376 3400 * Commands should be allowed to retry by
3377 3401 * returning TRAN_BUSY to stall the I/Os
3378 3402 * which come from scsi_vhci since the device/
3379 3403 * path is in an unstable state now.
3380 3404 */
3381 3405 mutex_exit(&mpt->m_mutex);
3382 3406 return (TRAN_BUSY);
3383 3407 } else {
3384 3408 /*
3385 3409 * The device is offline, just fail the
3386 3410 * command by returning TRAN_FATAL_ERROR.
3387 3411 */
3388 3412 mutex_exit(&mpt->m_mutex);
3389 3413 return (TRAN_FATAL_ERROR);
3390 3414 }
3391 3415 }
3392 3416 rval = mptsas_accept_pkt(mpt, cmd);
3393 3417 mutex_exit(&mpt->m_mutex);
3394 3418 }
3395 3419
3396 3420 return (rval);
3397 3421 }
3398 3422
3399 3423 /*
3400 3424 * Accept all the queued cmds (if any) before accepting the current one.
3401 3425 */
3402 3426 static int
3403 3427 mptsas_accept_txwq_and_pkt(mptsas_t *mpt, mptsas_cmd_t *cmd)
3404 3428 {
3405 3429 int rval;
3406 3430 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
3407 3431
3408 3432 ASSERT(mutex_owned(&mpt->m_mutex));
3409 3433 /*
3410 3434 * The call to mptsas_accept_tx_waitq() must always be performed
3411 3435 * because that is where mpt->m_tx_draining is cleared.
3412 3436 */
3413 3437 mutex_enter(&mpt->m_tx_waitq_mutex);
3414 3438 mptsas_accept_tx_waitq(mpt);
3415 3439 mutex_exit(&mpt->m_tx_waitq_mutex);
3416 3440 /*
3417 3441 * ptgt->m_dr_flag is protected by m_mutex or m_tx_waitq_mutex
3418 3442 * in this case, m_mutex is acquired.
3419 3443 */
3420 3444 if (ptgt->m_dr_flag == MPTSAS_DR_INTRANSITION) {
3421 3445 if (cmd->cmd_pkt_flags & FLAG_NOQUEUE) {
3422 3446 /*
3423 3447 * The command should be allowed to retry by returning
3424 3448 * TRAN_BUSY to stall the I/Os which come from
3425 3449 * scsi_vhci since the device/path is in an unstable state
3426 3450 * now.
3427 3451 */
3428 3452 return (TRAN_BUSY);
3429 3453 } else {
3430 3454 /*
3431 3455 * The device is offline, just fail the command by
3432 3456 * returning TRAN_FATAL_ERROR.
3433 3457 */
3434 3458 return (TRAN_FATAL_ERROR);
3435 3459 }
3436 3460 }
3437 3461 rval = mptsas_accept_pkt(mpt, cmd);
3438 3462
3439 3463 return (rval);
3440 3464 }
3441 3465
3466 +#ifdef MPTSAS_FAULTINJECTION
3467 +static void
3468 +mptsas_fminj_move_cmd_to_doneq(mptsas_t *mpt, mptsas_cmd_t *cmd,
3469 + uchar_t reason, uint_t stat)
3470 +{
3471 + struct scsi_pkt *pkt = cmd->cmd_pkt;
3472 +
3473 + TAILQ_REMOVE(&mpt->m_fminj_cmdq, cmd, cmd_active_link);
3474 +
3475 + /* Setup reason/statistics. */
3476 + pkt->pkt_reason = reason;
3477 + pkt->pkt_statistics = stat;
3478 +
3479 + cmd->cmd_active_expiration = 0;
3480 +
3481 + /* Move the command to the doneq. */
3482 + cmd->cmd_linkp = NULL;
3483 + cmd->cmd_flags |= CFLAG_FINISHED;
3484 + cmd->cmd_flags &= ~CFLAG_IN_TRANSPORT;
3485 +
3486 + *mpt->m_donetail = cmd;
3487 + mpt->m_donetail = &cmd->cmd_linkp;
3488 + mpt->m_doneq_len++;
3489 +}
3490 +
3491 +static void
3492 +mptsas_fminj_move_tgt_to_doneq(mptsas_t *mpt, ushort_t target,
3493 + uchar_t reason, uint_t stat)
3494 +{
3495 + mptsas_cmd_t *cmd;
3496 +
3497 + ASSERT(mutex_owned(&mpt->m_mutex));
3498 +
3499 + if (!TAILQ_EMPTY(&mpt->m_fminj_cmdq)) {
3500 + cmd = TAILQ_FIRST(&mpt->m_fminj_cmdq);
3501 + ASSERT(cmd != NULL);
3502 +
3503 + while (cmd != NULL) {
3504 + mptsas_cmd_t *next = TAILQ_NEXT(cmd, cmd_active_link);
3505 +
3506 + if (Tgt(cmd) == target) {
3507 + mptsas_fminj_move_cmd_to_doneq(mpt, cmd,
3508 + reason, stat);
3509 + }
3510 + cmd = next;
3511 + }
3512 + }
3513 +}
3514 +
3515 +static void
3516 +mptsas_fminj_watchsubr(mptsas_t *mpt,
3517 + struct mptsas_active_cmdq *expired)
3518 +{
3519 + mptsas_cmd_t *cmd;
3520 +
3521 + ASSERT(mutex_owned(&mpt->m_mutex));
3522 +
3523 + if (!TAILQ_EMPTY(&mpt->m_fminj_cmdq)) {
3524 + hrtime_t timestamp = gethrtime();
3525 +
3526 + cmd = TAILQ_FIRST(&mpt->m_fminj_cmdq);
3527 + ASSERT(cmd != NULL);
3528 +
3529 + while (cmd != NULL) {
3530 + mptsas_cmd_t *next = TAILQ_NEXT(cmd, cmd_active_link);
3531 +
3532 + if (cmd->cmd_active_expiration <= timestamp) {
3533 + struct scsi_pkt *pkt = cmd->cmd_pkt;
3534 +
3535 + DTRACE_PROBE1(mptsas__command__timeout,
3536 + struct scsi_pkt *, pkt);
3537 +
3538 + /* Setup proper flags. */
3539 + pkt->pkt_reason = CMD_TIMEOUT;
3540 + pkt->pkt_statistics = (STAT_TIMEOUT |
3541 + STAT_DEV_RESET);
3542 + cmd->cmd_active_expiration = 0;
3543 +
3544 + TAILQ_REMOVE(&mpt->m_fminj_cmdq, cmd,
3545 + cmd_active_link);
3546 + TAILQ_INSERT_TAIL(expired, cmd,
3547 + cmd_active_link);
3548 + }
3549 + cmd = next;
3550 + }
3551 + }
3552 +}
3553 +
3442 3554 static int
3555 +mptsas_fminject(mptsas_t *mpt, mptsas_cmd_t *cmd)
3556 +{
3557 + struct scsi_pkt *pkt = cmd->cmd_pkt;
3558 +
3559 + ASSERT(mutex_owned(&mpt->m_mutex));
3560 +
3561 + if (pkt->pkt_flags & FLAG_PKT_TIMEOUT) {
3562 + if (((pkt->pkt_flags & FLAG_NOINTR) == 0) &&
3563 + (pkt->pkt_comp != NULL)) {
3564 + pkt->pkt_state = (STATE_GOT_BUS|STATE_GOT_TARGET|
3565 + STATE_SENT_CMD);
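			/*
			 * pkt_time is in seconds; convert it to an absolute
			 * hrtime deadline checked by mptsas_fminj_watchsubr().
			 */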
3566 + cmd->cmd_active_expiration =
3567 + gethrtime() + (hrtime_t)pkt->pkt_time * NANOSEC;
3568 + TAILQ_INSERT_TAIL(&mpt->m_fminj_cmdq,
3569 + cmd, cmd_active_link);
3570 + return (0);
3571 + }
3572 + }
3573 + return (-1);
3574 +}
3575 +#endif /* MPTSAS_FAULTINJECTION */
3576 +
3577 +static int
3443 3578 mptsas_accept_pkt(mptsas_t *mpt, mptsas_cmd_t *cmd)
3444 3579 {
3445 3580 int rval = TRAN_ACCEPT;
3446 3581 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
3447 3582
3448 3583 NDBG1(("mptsas_accept_pkt: cmd=0x%p", (void *)cmd));
3449 3584
3450 3585 ASSERT(mutex_owned(&mpt->m_mutex));
3451 3586
3452 3587 if ((cmd->cmd_flags & CFLAG_PREPARED) == 0) {
3453 3588 rval = mptsas_prepare_pkt(cmd);
3454 3589 if (rval != TRAN_ACCEPT) {
3455 3590 cmd->cmd_flags &= ~CFLAG_TRANFLAG;
3456 3591 return (rval);
3457 3592 }
3458 3593 }
3459 3594
3460 3595 /*
3461 3596 * reset the throttle if we were draining
3462 3597 */
3463 3598 if ((ptgt->m_t_ncmds == 0) &&
3464 3599 (ptgt->m_t_throttle == DRAIN_THROTTLE)) {
3465 3600 NDBG23(("reset throttle"));
3466 3601 ASSERT(ptgt->m_reset_delay == 0);
3467 3602 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
3468 3603 }
3469 3604
3470 3605 /*
3471 - * If HBA is being reset, the DevHandles are being re-initialized,
3472 - * which means that they could be invalid even if the target is still
3473 - * attached. Check if being reset and if DevHandle is being
3474 - * re-initialized. If this is the case, return BUSY so the I/O can be
3475 - * retried later.
3606 + * If the HBA is being reset, the device handles will be invalidated.
3607 + * This is temporary and, if the target is still attached, the device
3608 + * handles will be re-assigned when the firmware reset completes.
3609 + * Then, if the command was already queued, complete the command;
3610 + * otherwise return BUSY and expect the transport to retry.
3476 3611 */
3477 3612 if ((ptgt->m_devhdl == MPTSAS_INVALID_DEVHDL) && mpt->m_in_reset) {
3613 + NDBG20(("retry command, invalid devhdl, during FW reset."));
3478 3614 mptsas_set_pkt_reason(mpt, cmd, CMD_RESET, STAT_BUS_RESET);
3479 3615 if (cmd->cmd_flags & CFLAG_TXQ) {
3480 3616 mptsas_doneq_add(mpt, cmd);
3481 3617 mptsas_doneq_empty(mpt);
3482 3618 return (rval);
3483 3619 } else {
3484 3620 return (TRAN_BUSY);
3485 3621 }
3486 3622 }
3487 3623
3488 3624 /*
3489 - * If device handle has already been invalidated, just
3490 - * fail the command. In theory, command from scsi_vhci
3491 - * client is impossible send down command with invalid
3492 - * devhdl since devhdl is set after path offline, target
3493 - * driver is not suppose to select a offlined path.
3625 + * If the device handle has been invalidated, set the response
3626 + * reason to indicate the device is gone. Then add the
3627 + * command to the done queue and run the completion routine
3628 + * so the initiator of the command can clean up.
3494 3629 */
3495 3630 if (ptgt->m_devhdl == MPTSAS_INVALID_DEVHDL) {
3496 - NDBG3(("rejecting command, it might because invalid devhdl "
3497 - "request."));
3631 + NDBG20(("rejecting command, invalid devhdl because "
3632 + "device gone."));
3498 3633 mptsas_set_pkt_reason(mpt, cmd, CMD_DEV_GONE, STAT_TERMINATED);
3499 3634 if (cmd->cmd_flags & CFLAG_TXQ) {
3500 3635 mptsas_doneq_add(mpt, cmd);
3501 3636 mptsas_doneq_empty(mpt);
3502 3637 return (rval);
3503 3638 } else {
3504 3639 return (TRAN_FATAL_ERROR);
3505 3640 }
3506 3641 }
3642 +
3507 3643 /*
3644 + * Do fault injection before transmitting the command.
3645 + * FLAG_NOINTR commands are skipped.
3646 + */
3647 +#ifdef MPTSAS_FAULTINJECTION
3648 + if (!mptsas_fminject(mpt, cmd)) {
3649 + return (TRAN_ACCEPT);
3650 + }
3651 +#endif
3652 +
3653 + /*
3508 3654 * The first case is the normal case. mpt gets a command from the
3509 3655 * target driver and starts it.
3510 3656 * Since SMID 0 is reserved and the TM slot is reserved, the actual max
3511 3657 * commands is m_max_requests - 2.
3512 3658 */
3513 3659 if ((mpt->m_ncmds <= (mpt->m_max_requests - 2)) &&
3514 3660 (ptgt->m_t_throttle > HOLD_THROTTLE) &&
3515 3661 (ptgt->m_t_ncmds < ptgt->m_t_throttle) &&
3516 3662 (ptgt->m_reset_delay == 0) &&
3517 3663 (ptgt->m_t_nwait == 0) &&
3518 3664 ((cmd->cmd_pkt_flags & FLAG_NOINTR) == 0)) {
3519 3665 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
3520 3666 (void) mptsas_start_cmd(mpt, cmd);
3521 3667 } else {
3522 3668 mptsas_waitq_add(mpt, cmd);
3523 3669 }
3524 3670 } else {
3525 3671 /*
3526 3672 * Add this pkt to the work queue
3527 3673 */
3528 3674 mptsas_waitq_add(mpt, cmd);
3529 3675
3530 3676 if (cmd->cmd_pkt_flags & FLAG_NOINTR) {
3531 3677 (void) mptsas_poll(mpt, cmd, MPTSAS_POLL_TIME);
3532 3678
3533 3679 /*
3534 3680 * Only flush the doneq if this is not a TM
3535 3681 * cmd. For TM cmds the flushing of the
3536 3682 * doneq will be done in those routines.
3537 3683 */
3538 3684 if ((cmd->cmd_flags & CFLAG_TM_CMD) == 0) {
3539 3685 mptsas_doneq_empty(mpt);
3540 3686 }
3541 3687 }
3542 3688 }
3543 3689 return (rval);
3544 3690 }
3545 3691
3546 3692 int
3547 3693 mptsas_save_cmd(mptsas_t *mpt, mptsas_cmd_t *cmd)
3548 3694 {
3549 3695 mptsas_slots_t *slots = mpt->m_active;
3550 3696 uint_t slot, start_rotor;
3551 3697 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
3552 3698
3553 3699 ASSERT(MUTEX_HELD(&mpt->m_mutex));
3554 3700
3555 3701 /*
3556 3702 * Account for reserved TM request slot and reserved SMID of 0.
3557 3703 */
3558 3704 ASSERT(slots->m_n_normal == (mpt->m_max_requests - 2));
3559 3705
3560 3706 /*
3561 3707 * Find the next available slot, beginning at m_rotor. If no slot is
3562 3708 * available, we'll return FALSE to indicate that. This mechanism
3563 3709 * considers only the normal slots, not the reserved slot 0 nor the
3564 3710 * task management slot m_n_normal + 1. The rotor is left to point to
3565 3711 * the normal slot after the one we select, unless we select the last
3566 3712 * normal slot in which case it returns to slot 1.
3567 3713 */
3568 3714 start_rotor = slots->m_rotor;
3569 3715 do {
3570 3716 slot = slots->m_rotor++;
3571 3717 if (slots->m_rotor > slots->m_n_normal)
3572 3718 slots->m_rotor = 1;
3573 3719
3574 3720 if (slots->m_rotor == start_rotor)
3575 3721 break;
3576 3722 } while (slots->m_slot[slot] != NULL);
3577 3723
3578 3724 if (slots->m_slot[slot] != NULL)
3579 3725 return (FALSE);
3580 3726
3581 3727 ASSERT(slot != 0 && slot <= slots->m_n_normal);
3582 3728
3583 3729 cmd->cmd_slot = slot;
3584 3730 slots->m_slot[slot] = cmd;
3585 3731 mpt->m_ncmds++;
3586 3732
3587 3733 /*
3588 3734 * Only increment the per-target ncmds if this is not a
3589 3735 * command that has no target associated with it (i.e. an
3590 3736 * event acknowledgment)
3591 3737 */
3592 3738 if ((cmd->cmd_flags & CFLAG_CMDIOC) == 0) {
3593 3739 /*
3594 3740 * Expiration time is set in mptsas_start_cmd
3595 3741 */
3596 3742 ptgt->m_t_ncmds++;
3597 3743 cmd->cmd_active_expiration = 0;
3598 3744 } else {
3599 3745 /*
3600 3746 * Initialize expiration time for passthrough commands,
3601 3747 * Initialize expiration time for passthrough commands.
3602 3748 cmd->cmd_active_expiration = gethrtime() +
3603 3749 (hrtime_t)cmd->cmd_pkt->pkt_time * NANOSEC;
3604 3750 }
3605 3751 return (TRUE);
3606 3752 }
3607 3753
3608 3754 /*
3609 3755 * prepare the pkt:
3610 3756 * the pkt may have been resubmitted or just reused so
3611 3757 * initialize some fields and do some checks.
3612 3758 */
3613 3759 static int
3614 3760 mptsas_prepare_pkt(mptsas_cmd_t *cmd)
3615 3761 {
3616 3762 struct scsi_pkt *pkt = CMD2PKT(cmd);
3617 3763
3618 3764 NDBG1(("mptsas_prepare_pkt: cmd=0x%p", (void *)cmd));
3619 3765
3766 +#ifdef MPTSAS_FAULTINJECTION
3767 + /* Check for fault flags before performing the actual initialization. */
3768 + if (pkt->pkt_flags & FLAG_PKT_BUSY) {
3769 + return (TRAN_BUSY);
3770 + }
3771 +#endif
3772 +
3620 3773 /*
3621 3774 * Reinitialize some fields that need it; the packet may
3622 3775 * have been resubmitted
3623 3776 */
3624 3777 pkt->pkt_reason = CMD_CMPLT;
3625 3778 pkt->pkt_state = 0;
3626 3779 pkt->pkt_statistics = 0;
3627 3780 pkt->pkt_resid = 0;
3628 3781 cmd->cmd_age = 0;
3629 3782 cmd->cmd_pkt_flags = pkt->pkt_flags;
3630 3783
3631 3784 /*
3632 3785 * zero status byte.
3633 3786 */
3634 3787 *(pkt->pkt_scbp) = 0;
3635 3788
3636 3789 if (cmd->cmd_flags & CFLAG_DMAVALID) {
3637 3790 pkt->pkt_resid = cmd->cmd_dmacount;
3638 3791
3639 3792 /*
3640 3793 * consistent packets need to be sync'ed first
3641 3794 * (only for data going out)
3642 3795 */
3643 3796 if ((cmd->cmd_flags & CFLAG_CMDIOPB) &&
3644 3797 (cmd->cmd_flags & CFLAG_DMASEND)) {
3645 3798 (void) ddi_dma_sync(cmd->cmd_dmahandle, 0, 0,
3646 3799 DDI_DMA_SYNC_FORDEV);
3647 3800 }
3648 3801 }
3649 3802
3650 3803 cmd->cmd_flags =
3651 3804 (cmd->cmd_flags & ~(CFLAG_TRANFLAG)) |
3652 3805 CFLAG_PREPARED | CFLAG_IN_TRANSPORT;
3653 3806
3654 3807 return (TRAN_ACCEPT);
3655 3808 }
3656 3809
3657 3810 /*
3658 3811 * tran_init_pkt(9E) - allocate scsi_pkt(9S) for command
3659 3812 *
3660 3813 * One of three possibilities:
3661 3814 * - allocate scsi_pkt
3662 3815 * - allocate scsi_pkt and DMA resources
3663 3816 * - allocate DMA resources to an already-allocated pkt
3664 3817 */
3665 3818 static struct scsi_pkt *
3666 3819 mptsas_scsi_init_pkt(struct scsi_address *ap, struct scsi_pkt *pkt,
3667 3820 struct buf *bp, int cmdlen, int statuslen, int tgtlen, int flags,
3668 3821 int (*callback)(), caddr_t arg)
3669 3822 {
3670 3823 mptsas_cmd_t *cmd, *new_cmd;
3671 3824 mptsas_t *mpt = ADDR2MPT(ap);
3672 3825 uint_t oldcookiec;
3673 3826 mptsas_target_t *ptgt = NULL;
3674 3827 int rval;
3675 3828 mptsas_tgt_private_t *tgt_private;
3676 3829 int kf;
3677 3830
3678 3831 kf = (callback == SLEEP_FUNC)? KM_SLEEP: KM_NOSLEEP;
3679 3832
3680 3833 tgt_private = (mptsas_tgt_private_t *)ap->a_hba_tran->
3681 3834 tran_tgt_private;
3682 3835 ASSERT(tgt_private != NULL);
3683 3836 if (tgt_private == NULL) {
3684 3837 return (NULL);
3685 3838 }
3686 3839 ptgt = tgt_private->t_private;
3687 3840 ASSERT(ptgt != NULL);
3688 3841 if (ptgt == NULL)
3689 3842 return (NULL);
3690 3843 ap->a_target = ptgt->m_devhdl;
3691 3844 ap->a_lun = tgt_private->t_lun;
3692 3845
3693 3846 ASSERT(callback == NULL_FUNC || callback == SLEEP_FUNC);
3694 3847 #ifdef MPTSAS_TEST_EXTRN_ALLOC
3695 3848 statuslen *= 100; tgtlen *= 4;
3696 3849 #endif
3697 3850 NDBG3(("mptsas_scsi_init_pkt:\n"
3698 3851 "\ttgt=%d in=0x%p bp=0x%p clen=%d slen=%d tlen=%d flags=%x",
3699 3852 ap->a_target, (void *)pkt, (void *)bp,
3700 3853 cmdlen, statuslen, tgtlen, flags));
3701 3854
3702 3855 /*
3703 3856 * Allocate the new packet.
3704 3857 */
3705 3858 if (pkt == NULL) {
3706 3859 ddi_dma_handle_t save_dma_handle;
3707 3860
3708 3861 cmd = kmem_cache_alloc(mpt->m_kmem_cache, kf);
3709 3862 if (cmd == NULL)
3710 3863 return (NULL);
3711 3864
3712 3865 save_dma_handle = cmd->cmd_dmahandle;
3713 3866 bzero(cmd, sizeof (*cmd) + scsi_pkt_size());
3714 3867 cmd->cmd_dmahandle = save_dma_handle;
3715 3868
3716 3869 pkt = (void *)((uchar_t *)cmd +
3717 3870 sizeof (struct mptsas_cmd));
3718 3871 pkt->pkt_ha_private = (opaque_t)cmd;
3719 3872 pkt->pkt_address = *ap;
3720 3873 pkt->pkt_private = (opaque_t)cmd->cmd_pkt_private;
3721 3874 pkt->pkt_scbp = (opaque_t)&cmd->cmd_scb;
3722 3875 pkt->pkt_cdbp = (opaque_t)&cmd->cmd_cdb;
3723 3876 cmd->cmd_pkt = (struct scsi_pkt *)pkt;
3724 3877 cmd->cmd_cdblen = (uchar_t)cmdlen;
3725 3878 cmd->cmd_scblen = statuslen;
3726 3879 cmd->cmd_rqslen = SENSE_LENGTH;
3727 3880 cmd->cmd_tgt_addr = ptgt;
3728 3881
3729 3882 if ((cmdlen > sizeof (cmd->cmd_cdb)) ||
3730 3883 (tgtlen > PKT_PRIV_LEN) ||
3731 3884 (statuslen > EXTCMDS_STATUS_SIZE)) {
3732 - int failure;
3733 -
3734 - /*
3735 - * We are going to allocate external packet space which
3736 - * might include the sense data buffer for DMA so we
3737 - * need to increase the reference counter here. In a
3738 - * case the HBA is in reset we just simply free the
3739 - * allocated packet and bail out.
3740 - */
3741 - mutex_enter(&mpt->m_mutex);
3742 - if (mpt->m_in_reset) {
3743 - mutex_exit(&mpt->m_mutex);
3744 -
3745 - cmd->cmd_flags = CFLAG_FREE;
3746 - kmem_cache_free(mpt->m_kmem_cache, cmd);
3747 - return (NULL);
3748 - }
3749 - mpt->m_extreq_sense_refcount++;
3750 - ASSERT(mpt->m_extreq_sense_refcount > 0);
3751 - mutex_exit(&mpt->m_mutex);
3752 -
3753 - /*
3754 - * if extern alloc fails, all will be
3755 - * deallocated, including cmd
3756 - */
3757 - failure = mptsas_pkt_alloc_extern(mpt, cmd,
3758 - cmdlen, tgtlen, statuslen, kf);
3759 -
3760 - if (failure != 0 || cmd->cmd_extrqslen == 0) {
3885 + if (mptsas_pkt_alloc_extern(mpt, cmd,
3886 + cmdlen, tgtlen, statuslen, kf)) {
3761 3887 /*
3762 - * If the external packet space allocation
3763 - * failed, or we didn't allocate the sense
3764 - * data buffer for DMA we need to decrease the
3765 - * reference counter.
3888 + * if extern allocation fails, it will
3889 + * deallocate the new pkt as well
3766 3890 */
3767 - mutex_enter(&mpt->m_mutex);
3768 - ASSERT(mpt->m_extreq_sense_refcount > 0);
3769 - mpt->m_extreq_sense_refcount--;
3770 - if (mpt->m_extreq_sense_refcount == 0)
3771 - cv_broadcast(
3772 - &mpt->m_extreq_sense_refcount_cv);
3773 - mutex_exit(&mpt->m_mutex);
3774 -
3775 - if (failure != 0) {
3776 - /*
3777 - * if extern allocation fails, it will
3778 - * deallocate the new pkt as well
3779 - */
3780 - return (NULL);
3781 - }
3891 + return (NULL);
3782 3892 }
3783 3893 }
3784 3894 new_cmd = cmd;
3785 3895
3786 3896 } else {
3787 3897 cmd = PKT2CMD(pkt);
3898 + pkt->pkt_start = 0;
3899 + pkt->pkt_stop = 0;
3788 3900 new_cmd = NULL;
3789 3901 }
3790 3902
3791 3903
3792 3904 /* grab cmd->cmd_cookiec here as oldcookiec */
3793 3905
3794 3906 oldcookiec = cmd->cmd_cookiec;
3795 3907
3796 3908 /*
3797 3909 * If the dma was broken up into PARTIAL transfers cmd_nwin will be
3798 3910 * greater than 0 and we'll need to grab the next dma window
3799 3911 */
3800 3912 /*
3801 3913 * SLM-not doing extra command frame right now; may add later
3802 3914 */
3803 3915
3804 3916 if (cmd->cmd_nwin > 0) {
3805 3917
3806 3918 /*
3807 3919 * Make sure we haven't gone past the total number
3808 3920 * of windows
3809 3921 */
3810 3922 if (++cmd->cmd_winindex >= cmd->cmd_nwin) {
3811 3923 return (NULL);
3812 3924 }
3813 3925 if (ddi_dma_getwin(cmd->cmd_dmahandle, cmd->cmd_winindex,
3814 3926 &cmd->cmd_dma_offset, &cmd->cmd_dma_len,
3815 3927 &cmd->cmd_cookie, &cmd->cmd_cookiec) == DDI_FAILURE) {
3816 3928 return (NULL);
3817 3929 }
3818 3930 goto get_dma_cookies;
3819 3931 }
3820 3932
3821 3933
3822 3934 if (flags & PKT_XARQ) {
3823 3935 cmd->cmd_flags |= CFLAG_XARQ;
3824 3936 }
3825 3937
3826 3938 /*
3827 3939 * DMA resource allocation. This version assumes your
3828 3940 * HBA has some sort of bus-mastering or onboard DMA capability, with a
3829 3941 * scatter-gather list of length MPTSAS_MAX_DMA_SEGS, as given in the
3830 3942 * ddi_dma_attr_t structure and passed to scsi_impl_dmaget.
3831 3943 */
3832 3944 if (bp && (bp->b_bcount != 0) &&
3833 3945 (cmd->cmd_flags & CFLAG_DMAVALID) == 0) {
3834 3946
3835 3947 int cnt, dma_flags;
3836 3948 mptti_t *dmap; /* ptr to the S/G list */
3837 3949
3838 3950 /*
3839 3951 * Set up DMA memory and position to the next DMA segment.
3840 3952 */
3841 3953 ASSERT(cmd->cmd_dmahandle != NULL);
3842 3954
3843 3955 if (bp->b_flags & B_READ) {
3844 3956 dma_flags = DDI_DMA_READ;
3845 3957 cmd->cmd_flags &= ~CFLAG_DMASEND;
3846 3958 } else {
3847 3959 dma_flags = DDI_DMA_WRITE;
3848 3960 cmd->cmd_flags |= CFLAG_DMASEND;
3849 3961 }
3850 3962 if (flags & PKT_CONSISTENT) {
3851 3963 cmd->cmd_flags |= CFLAG_CMDIOPB;
3852 3964 dma_flags |= DDI_DMA_CONSISTENT;
3853 3965 }
3854 3966
3855 3967 if (flags & PKT_DMA_PARTIAL) {
3856 3968 dma_flags |= DDI_DMA_PARTIAL;
3857 3969 }
3858 3970
3859 3971 /*
3860 3972 * workaround for byte hole issue on psycho and
3861 3973 * schizo pre 2.1
3862 3974 */
3863 3975 if ((bp->b_flags & B_READ) && ((bp->b_flags &
3864 3976 (B_PAGEIO|B_REMAPPED)) != B_PAGEIO) &&
3865 3977 ((uintptr_t)bp->b_un.b_addr & 0x7)) {
3866 3978 dma_flags |= DDI_DMA_CONSISTENT;
3867 3979 }
3868 3980
3869 3981 rval = ddi_dma_buf_bind_handle(cmd->cmd_dmahandle, bp,
3870 3982 dma_flags, callback, arg,
3871 3983 &cmd->cmd_cookie, &cmd->cmd_cookiec);
3872 3984 if (rval == DDI_DMA_PARTIAL_MAP) {
3873 3985 (void) ddi_dma_numwin(cmd->cmd_dmahandle,
3874 3986 &cmd->cmd_nwin);
3875 3987 cmd->cmd_winindex = 0;
3876 3988 (void) ddi_dma_getwin(cmd->cmd_dmahandle,
3877 3989 cmd->cmd_winindex, &cmd->cmd_dma_offset,
3878 3990 &cmd->cmd_dma_len, &cmd->cmd_cookie,
3879 3991 &cmd->cmd_cookiec);
3880 3992 } else if (rval && (rval != DDI_DMA_MAPPED)) {
3881 3993 switch (rval) {
3882 3994 case DDI_DMA_NORESOURCES:
3883 3995 bioerror(bp, 0);
3884 3996 break;
3885 3997 case DDI_DMA_BADATTR:
3886 3998 case DDI_DMA_NOMAPPING:
3887 3999 bioerror(bp, EFAULT);
3888 4000 break;
3889 4001 case DDI_DMA_TOOBIG:
3890 4002 default:
3891 4003 bioerror(bp, EINVAL);
3892 4004 break;
3893 4005 }
3894 4006 cmd->cmd_flags &= ~CFLAG_DMAVALID;
3895 4007 if (new_cmd) {
3896 4008 mptsas_scsi_destroy_pkt(ap, pkt);
3897 4009 }
3898 4010 return ((struct scsi_pkt *)NULL);
3899 4011 }
3900 4012
3901 4013 get_dma_cookies:
3902 4014 cmd->cmd_flags |= CFLAG_DMAVALID;
3903 4015 ASSERT(cmd->cmd_cookiec > 0);
3904 4016
3905 4017 if (cmd->cmd_cookiec > MPTSAS_MAX_CMD_SEGS) {
3906 - mptsas_log(mpt, CE_NOTE, "large cookiec received %d\n",
4018 + mptsas_log(mpt, CE_NOTE, "large cookiec received %d",
3907 4019 cmd->cmd_cookiec);
3908 4020 bioerror(bp, EINVAL);
3909 4021 if (new_cmd) {
3910 4022 mptsas_scsi_destroy_pkt(ap, pkt);
3911 4023 }
3912 4024 return ((struct scsi_pkt *)NULL);
3913 4025 }
3914 4026
3915 4027 /*
3916 4028 * Allocate extra SGL buffer if needed.
3917 4029 */
3918 4030 if ((cmd->cmd_cookiec > MPTSAS_MAX_FRAME_SGES64(mpt)) &&
3919 4031 (cmd->cmd_extra_frames == NULL)) {
3920 4032 if (mptsas_alloc_extra_sgl_frame(mpt, cmd) ==
3921 4033 DDI_FAILURE) {
3922 4034 mptsas_log(mpt, CE_WARN, "MPT SGL mem alloc "
3923 4035 "failed");
3924 4036 bioerror(bp, ENOMEM);
3925 4037 if (new_cmd) {
3926 4038 mptsas_scsi_destroy_pkt(ap, pkt);
3927 4039 }
3928 4040 return ((struct scsi_pkt *)NULL);
3929 4041 }
3930 4042 }
3931 4043
3932 4044 /*
3933 4045 * Always use scatter-gather transfer
3934 4046 * Use the loop below to store physical addresses of
3935 4047 * DMA segments, from the DMA cookies, into your HBA's
3936 4048 * scatter-gather list.
3937 4049 * We need to ensure we have enough kmem alloc'd
3938 4050 * for the sg entries since we are no longer using an
3939 4051 * array inside mptsas_cmd_t.
3940 4052 *
3941 4053 * We check cmd->cmd_cookiec against oldcookiec so
3942 4054 * the scatter-gather list is correctly allocated
3943 4055 */
3944 4056
3945 4057 if (oldcookiec != cmd->cmd_cookiec) {
3946 4058 if (cmd->cmd_sg != (mptti_t *)NULL) {
3947 4059 kmem_free(cmd->cmd_sg, sizeof (mptti_t) *
3948 4060 oldcookiec);
3949 4061 cmd->cmd_sg = NULL;
3950 4062 }
3951 4063 }
3952 4064
3953 4065 if (cmd->cmd_sg == (mptti_t *)NULL) {
3954 4066 cmd->cmd_sg = kmem_alloc((size_t)(sizeof (mptti_t)*
3955 4067 cmd->cmd_cookiec), kf);
3956 4068
3957 4069 if (cmd->cmd_sg == (mptti_t *)NULL) {
3958 4070 mptsas_log(mpt, CE_WARN,
3959 4071 "unable to kmem_alloc enough memory "
3960 4072 "for scatter/gather list");
3961 4073 /*
3962 4074 * if we have an ENOMEM condition we need to behave
3963 4075 * the same way as the rest of this routine
3964 4076 */
3965 4077
3966 4078 bioerror(bp, ENOMEM);
3967 4079 if (new_cmd) {
3968 4080 mptsas_scsi_destroy_pkt(ap, pkt);
3969 4081 }
3970 4082 return ((struct scsi_pkt *)NULL);
3971 4083 }
3972 4084 }
3973 4085
3974 4086 dmap = cmd->cmd_sg;
3975 4087
3976 4088 ASSERT(cmd->cmd_cookie.dmac_size != 0);
3977 4089
3978 4090 /*
3979 4091 * store the first segment into the S/G list
3980 4092 */
3981 4093 dmap->count = cmd->cmd_cookie.dmac_size;
3982 4094 dmap->addr.address64.Low = (uint32_t)
3983 4095 (cmd->cmd_cookie.dmac_laddress & 0xffffffffull);
3984 4096 dmap->addr.address64.High = (uint32_t)
3985 4097 (cmd->cmd_cookie.dmac_laddress >> 32);
3986 4098
3987 4099 /*
3988 4100 * dmacount counts the size of the dma for this window
3989 4101 * (if partial dma is being used). totaldmacount
3990 4102 * keeps track of the total amount of dma we have
3991 4103 * transferred for all the windows (needed to calculate
3992 4104 * the resid value below).
3993 4105 */
3994 4106 cmd->cmd_dmacount = cmd->cmd_cookie.dmac_size;
3995 4107 cmd->cmd_totaldmacount += cmd->cmd_cookie.dmac_size;
3996 4108
3997 4109 /*
3998 4110 * We already stored the first DMA scatter gather segment,
3999 4111 * start at 1 if we need to store more.
4000 4112 */
4001 4113 for (cnt = 1; cnt < cmd->cmd_cookiec; cnt++) {
4002 4114 /*
4003 4115 * Get next DMA cookie
4004 4116 */
4005 4117 ddi_dma_nextcookie(cmd->cmd_dmahandle,
4006 4118 &cmd->cmd_cookie);
4007 4119 dmap++;
4008 4120
4009 4121 cmd->cmd_dmacount += cmd->cmd_cookie.dmac_size;
4010 4122 cmd->cmd_totaldmacount += cmd->cmd_cookie.dmac_size;
4011 4123
4012 4124 /*
4013 4125 * store the segment parms into the S/G list
4014 4126 */
4015 4127 dmap->count = cmd->cmd_cookie.dmac_size;
4016 4128 dmap->addr.address64.Low = (uint32_t)
4017 4129 (cmd->cmd_cookie.dmac_laddress & 0xffffffffull);
4018 4130 dmap->addr.address64.High = (uint32_t)
4019 4131 (cmd->cmd_cookie.dmac_laddress >> 32);
4020 4132 }
4021 4133
4022 4134 /*
4023 4135 * If this was partially allocated we set the resid to
4024 4136 * the amount of data NOT transferred in this window.
4025 4137 * If there is only one window, the resid will be 0.
4026 4138 */
4027 4139 pkt->pkt_resid = (bp->b_bcount - cmd->cmd_totaldmacount);
4028 4140 NDBG3(("mptsas_scsi_init_pkt: cmd_dmacount=%d.",
4029 4141 cmd->cmd_dmacount));
4030 4142 }
4031 4143 return (pkt);
4032 4144 }
4033 4145
4034 4146 /*
4035 4147 * tran_destroy_pkt(9E) - scsi_pkt(9s) deallocation
4036 4148 *
4037 4149 * Notes:
4038 4150 * - also frees DMA resources if allocated
4039 4151 * - implicit DMA synchronization
4040 4152 */
4041 4153 static void
4042 4154 mptsas_scsi_destroy_pkt(struct scsi_address *ap, struct scsi_pkt *pkt)
4043 4155 {
4044 4156 mptsas_cmd_t *cmd = PKT2CMD(pkt);
4045 4157 mptsas_t *mpt = ADDR2MPT(ap);
4046 4158
4047 4159 NDBG3(("mptsas_scsi_destroy_pkt: target=%d pkt=0x%p",
4048 4160 ap->a_target, (void *)pkt));
4049 4161
4050 4162 if (cmd->cmd_flags & CFLAG_DMAVALID) {
4051 4163 (void) ddi_dma_unbind_handle(cmd->cmd_dmahandle);
4052 4164 cmd->cmd_flags &= ~CFLAG_DMAVALID;
4053 4165 }
4054 4166
4055 4167 if (cmd->cmd_sg) {
4056 4168 kmem_free(cmd->cmd_sg, sizeof (mptti_t) * cmd->cmd_cookiec);
4057 4169 cmd->cmd_sg = NULL;
4058 4170 }
4059 4171
4060 4172 mptsas_free_extra_sgl_frame(mpt, cmd);
4061 4173
4062 4174 if ((cmd->cmd_flags &
4063 4175 (CFLAG_FREE | CFLAG_CDBEXTERN | CFLAG_PRIVEXTERN |
4064 4176 CFLAG_SCBEXTERN)) == 0) {
4065 4177 cmd->cmd_flags = CFLAG_FREE;
4066 4178 kmem_cache_free(mpt->m_kmem_cache, (void *)cmd);
4067 4179 } else {
4068 - boolean_t extrqslen = cmd->cmd_extrqslen != 0;
4069 -
4070 4180 mptsas_pkt_destroy_extern(mpt, cmd);
4071 -
4072 - /*
4073 - * If the packet had the sense data buffer for DMA allocated we
4074 - * need to decrease the reference counter.
4075 - */
4076 - if (extrqslen) {
4077 - mutex_enter(&mpt->m_mutex);
4078 - ASSERT(mpt->m_extreq_sense_refcount > 0);
4079 - mpt->m_extreq_sense_refcount--;
4080 - if (mpt->m_extreq_sense_refcount == 0)
4081 - cv_broadcast(&mpt->m_extreq_sense_refcount_cv);
4082 - mutex_exit(&mpt->m_mutex);
4083 - }
4084 4181 }
4085 4182 }
4086 4183
4087 4184 /*
4088 4185 * kmem cache constructor and destructor:
4089 4186 * When constructing, we bzero the cmd and allocate the dma handle
4090 4187 * When destructing, just free the dma handle
4091 4188 */
4092 4189 static int
4093 4190 mptsas_kmem_cache_constructor(void *buf, void *cdrarg, int kmflags)
4094 4191 {
4095 4192 mptsas_cmd_t *cmd = buf;
4096 4193 mptsas_t *mpt = cdrarg;
4097 4194 int (*callback)(caddr_t);
4098 4195
4099 4196 callback = (kmflags == KM_SLEEP)? DDI_DMA_SLEEP: DDI_DMA_DONTWAIT;
4100 4197
4101 4198 NDBG4(("mptsas_kmem_cache_constructor"));
4102 4199
4103 4200 /*
4104 4201 * allocate a dma handle
4105 4202 */
4106 4203 if ((ddi_dma_alloc_handle(mpt->m_dip, &mpt->m_io_dma_attr, callback,
4107 4204 NULL, &cmd->cmd_dmahandle)) != DDI_SUCCESS) {
4108 4205 cmd->cmd_dmahandle = NULL;
4109 4206 return (-1);
4110 4207 }
4111 4208 return (0);
4112 4209 }
4113 4210
4114 4211 static void
4115 4212 mptsas_kmem_cache_destructor(void *buf, void *cdrarg)
4116 4213 {
4117 4214 #ifndef __lock_lint
4118 4215 _NOTE(ARGUNUSED(cdrarg))
4119 4216 #endif
4120 4217 mptsas_cmd_t *cmd = buf;
4121 4218
4122 4219 NDBG4(("mptsas_kmem_cache_destructor"));
4123 4220
4124 4221 if (cmd->cmd_dmahandle) {
4125 4222 ddi_dma_free_handle(&cmd->cmd_dmahandle);
4126 4223 cmd->cmd_dmahandle = NULL;
4127 4224 }
4128 4225 }
4129 4226
4130 4227 static int
4131 4228 mptsas_cache_frames_constructor(void *buf, void *cdrarg, int kmflags)
4132 4229 {
4133 4230 mptsas_cache_frames_t *p = buf;
4134 4231 mptsas_t *mpt = cdrarg;
4135 4232 ddi_dma_attr_t frame_dma_attr;
4136 4233 size_t mem_size, alloc_len;
4137 4234 ddi_dma_cookie_t cookie;
4138 4235 uint_t ncookie;
4139 4236 int (*callback)(caddr_t) = (kmflags == KM_SLEEP)
4140 4237 ? DDI_DMA_SLEEP: DDI_DMA_DONTWAIT;
4141 4238
4142 4239 frame_dma_attr = mpt->m_msg_dma_attr;
4143 4240 frame_dma_attr.dma_attr_align = 0x10;
4144 4241 frame_dma_attr.dma_attr_sgllen = 1;
4145 4242
4146 4243 if (ddi_dma_alloc_handle(mpt->m_dip, &frame_dma_attr, callback, NULL,
4147 4244 &p->m_dma_hdl) != DDI_SUCCESS) {
4148 4245 mptsas_log(mpt, CE_WARN, "Unable to allocate dma handle for"
4149 4246 " extra SGL.");
4150 4247 return (DDI_FAILURE);
4151 4248 }
4152 4249
4153 4250 mem_size = (mpt->m_max_request_frames - 1) * mpt->m_req_frame_size;
4154 4251
4155 4252 if (ddi_dma_mem_alloc(p->m_dma_hdl, mem_size, &mpt->m_dev_acc_attr,
4156 4253 DDI_DMA_CONSISTENT, callback, NULL, (caddr_t *)&p->m_frames_addr,
4157 4254 &alloc_len, &p->m_acc_hdl) != DDI_SUCCESS) {
4158 4255 ddi_dma_free_handle(&p->m_dma_hdl);
4159 4256 p->m_dma_hdl = NULL;
4160 4257 mptsas_log(mpt, CE_WARN, "Unable to allocate dma memory for"
4161 4258 " extra SGL.");
4162 4259 return (DDI_FAILURE);
4163 4260 }
4164 4261
4165 4262 if (ddi_dma_addr_bind_handle(p->m_dma_hdl, NULL, p->m_frames_addr,
4166 4263 alloc_len, DDI_DMA_RDWR | DDI_DMA_CONSISTENT, callback, NULL,
4167 4264 &cookie, &ncookie) != DDI_DMA_MAPPED) {
4168 4265 (void) ddi_dma_mem_free(&p->m_acc_hdl);
4169 4266 ddi_dma_free_handle(&p->m_dma_hdl);
4170 4267 p->m_dma_hdl = NULL;
4171 4268 mptsas_log(mpt, CE_WARN, "Unable to bind DMA resources for"
4172 4269 " extra SGL");
4173 4270 return (DDI_FAILURE);
4174 4271 }
4175 4272
4176 4273 /*
4177 4274 * Store the SGL memory address. This chip uses this
4178 4275 * address to dma to and from the driver. The second
4179 4276 * address is the address mpt uses to fill in the SGL.
4180 4277 */
4181 4278 p->m_phys_addr = cookie.dmac_laddress;
4182 4279
4183 4280 return (DDI_SUCCESS);
4184 4281 }
4185 4282
4186 4283 static void
4187 4284 mptsas_cache_frames_destructor(void *buf, void *cdrarg)
4188 4285 {
4189 4286 #ifndef __lock_lint
4190 4287 _NOTE(ARGUNUSED(cdrarg))
4191 4288 #endif
4192 4289 mptsas_cache_frames_t *p = buf;
4193 4290 if (p->m_dma_hdl != NULL) {
4194 4291 (void) ddi_dma_unbind_handle(p->m_dma_hdl);
4195 4292 (void) ddi_dma_mem_free(&p->m_acc_hdl);
4196 4293 ddi_dma_free_handle(&p->m_dma_hdl);
4197 4294 p->m_phys_addr = NULL;
4198 4295 p->m_frames_addr = NULL;
4199 4296 p->m_dma_hdl = NULL;
4200 4297 p->m_acc_hdl = NULL;
4201 4298 }
4202 4299
4203 4300 }
4204 4301
4205 4302 /*
4206 4303 * Figure out if we need to use a different method for the request
4207 4304 * sense buffer and allocate from the map if necessary.
4208 4305 */
4209 4306 static boolean_t
4210 4307 mptsas_cmdarqsize(mptsas_t *mpt, mptsas_cmd_t *cmd, size_t senselength, int kf)
4211 4308 {
4212 4309 if (senselength > mpt->m_req_sense_size) {
4213 4310 unsigned long i;
4214 4311
4215 4312 /* Sense length is limited to an 8 bit value in MPI Spec. */
4216 4313 if (senselength > 255)
4217 4314 senselength = 255;
4218 4315 cmd->cmd_extrqschunks = (senselength +
4219 4316 (mpt->m_req_sense_size - 1))/mpt->m_req_sense_size;
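		/*
		 * Ceiling division: round senselength up to whole
		 * m_req_sense_size chunks.  For example, with a
		 * (hypothetical) 64-byte m_req_sense_size, a 255-byte
		 * sense request needs (255 + 63) / 64 = 4 chunks from
		 * the map.
		 */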
4220 4317 i = (kf == KM_SLEEP ? rmalloc_wait : rmalloc)
4221 4318 (mpt->m_erqsense_map, cmd->cmd_extrqschunks);
4222 4319
4223 4320 if (i == 0)
4224 4321 return (B_FALSE);
4225 4322
4226 4323 cmd->cmd_extrqslen = (uint16_t)senselength;
4227 4324 cmd->cmd_extrqsidx = i - 1;
4228 4325 cmd->cmd_arq_buf = mpt->m_extreq_sense +
4229 4326 (cmd->cmd_extrqsidx * mpt->m_req_sense_size);
4230 4327 } else {
4231 4328 cmd->cmd_rqslen = (uchar_t)senselength;
4232 4329 }
4233 4330
4234 4331 return (B_TRUE);
4235 4332 }
4236 4333
4237 4334 /*
4238 4335 * allocate and deallocate external pkt space (ie. not part of mptsas_cmd)
4239 4336 * for non-standard length cdb, pkt_private, status areas
4240 4337 * if allocation fails, then deallocate all external space and the pkt
4241 4338 */
4242 4339 /* ARGSUSED */
4243 4340 static int
4244 4341 mptsas_pkt_alloc_extern(mptsas_t *mpt, mptsas_cmd_t *cmd,
4245 4342 int cmdlen, int tgtlen, int statuslen, int kf)
4246 4343 {
4247 4344 caddr_t cdbp, scbp, tgt;
4248 4345
4249 4346 NDBG3(("mptsas_pkt_alloc_extern: "
4250 4347 "cmd=0x%p cmdlen=%d tgtlen=%d statuslen=%d kf=%x",
4251 4348 (void *)cmd, cmdlen, tgtlen, statuslen, kf));
4252 4349
4253 4350 tgt = cdbp = scbp = NULL;
4254 4351 cmd->cmd_scblen = statuslen;
4255 4352 cmd->cmd_privlen = (uchar_t)tgtlen;
4256 4353
4257 4354 if (cmdlen > sizeof (cmd->cmd_cdb)) {
4258 4355 if ((cdbp = kmem_zalloc((size_t)cmdlen, kf)) == NULL) {
4259 4356 goto fail;
4260 4357 }
4261 4358 cmd->cmd_pkt->pkt_cdbp = (opaque_t)cdbp;
4262 4359 cmd->cmd_flags |= CFLAG_CDBEXTERN;
4263 4360 }
4264 4361 if (tgtlen > PKT_PRIV_LEN) {
4265 4362 if ((tgt = kmem_zalloc((size_t)tgtlen, kf)) == NULL) {
4266 4363 goto fail;
4267 4364 }
4268 4365 cmd->cmd_flags |= CFLAG_PRIVEXTERN;
4269 4366 cmd->cmd_pkt->pkt_private = tgt;
4270 4367 }
4271 4368 if (statuslen > EXTCMDS_STATUS_SIZE) {
4272 4369 if ((scbp = kmem_zalloc((size_t)statuslen, kf)) == NULL) {
4273 4370 goto fail;
4274 4371 }
4275 4372 cmd->cmd_flags |= CFLAG_SCBEXTERN;
4276 4373 cmd->cmd_pkt->pkt_scbp = (opaque_t)scbp;
4277 4374
4278 4375 /* allocate sense data buf for DMA */
4279 4376 if (mptsas_cmdarqsize(mpt, cmd, statuslen -
4280 4377 MPTSAS_GET_ITEM_OFF(struct scsi_arq_status, sts_sensedata),
4281 4378 kf) == B_FALSE)
4282 4379 goto fail;
4283 4380 }
4284 4381 return (0);
4285 4382 fail:
4286 4383 mptsas_pkt_destroy_extern(mpt, cmd);
4287 4384 return (1);
4288 4385 }
4289 4386
4290 4387 /*
4291 4388 * deallocate external pkt space and deallocate the pkt
4292 4389 */
4293 4390 static void
4294 4391 mptsas_pkt_destroy_extern(mptsas_t *mpt, mptsas_cmd_t *cmd)
4295 4392 {
4296 4393 NDBG3(("mptsas_pkt_destroy_extern: cmd=0x%p", (void *)cmd));
4297 4394
4298 4395 if (cmd->cmd_flags & CFLAG_FREE) {
4299 4396 mptsas_log(mpt, CE_PANIC,
4300 4397 "mptsas_pkt_destroy_extern: freeing free packet");
4301 4398 _NOTE(NOT_REACHED)
4302 4399 /* NOTREACHED */
4303 4400 }
4304 4401 if (cmd->cmd_extrqslen != 0) {
4305 4402 rmfree(mpt->m_erqsense_map, cmd->cmd_extrqschunks,
4306 4403 cmd->cmd_extrqsidx + 1);
4307 4404 }
4308 4405 if (cmd->cmd_flags & CFLAG_CDBEXTERN) {
4309 4406 kmem_free(cmd->cmd_pkt->pkt_cdbp, (size_t)cmd->cmd_cdblen);
4310 4407 }
4311 4408 if (cmd->cmd_flags & CFLAG_SCBEXTERN) {
4312 4409 kmem_free(cmd->cmd_pkt->pkt_scbp, (size_t)cmd->cmd_scblen);
4313 4410 }
4314 4411 if (cmd->cmd_flags & CFLAG_PRIVEXTERN) {
4315 4412 kmem_free(cmd->cmd_pkt->pkt_private, (size_t)cmd->cmd_privlen);
4316 4413 }
4317 4414 cmd->cmd_flags = CFLAG_FREE;
4318 4415 kmem_cache_free(mpt->m_kmem_cache, (void *)cmd);
4319 4416 }
4320 4417
4321 4418 /*
4322 4419 * tran_sync_pkt(9E) - explicit DMA synchronization
4323 4420 */
4324 4421 /*ARGSUSED*/
4325 4422 static void
4326 4423 mptsas_scsi_sync_pkt(struct scsi_address *ap, struct scsi_pkt *pkt)
4327 4424 {
4328 4425 mptsas_cmd_t *cmd = PKT2CMD(pkt);
4329 4426
4330 4427 NDBG3(("mptsas_scsi_sync_pkt: target=%d, pkt=0x%p",
4331 4428 ap->a_target, (void *)pkt));
4332 4429
4333 4430 if (cmd->cmd_dmahandle) {
4334 4431 (void) ddi_dma_sync(cmd->cmd_dmahandle, 0, 0,
4335 4432 (cmd->cmd_flags & CFLAG_DMASEND) ?
4336 4433 DDI_DMA_SYNC_FORDEV : DDI_DMA_SYNC_FORCPU);
4337 4434 }
4338 4435 }
4339 4436
4340 4437 /*
4341 4438 * tran_dmafree(9E) - deallocate DMA resources allocated for command
4342 4439 */
4343 4440 /*ARGSUSED*/
4344 4441 static void
4345 4442 mptsas_scsi_dmafree(struct scsi_address *ap, struct scsi_pkt *pkt)
4346 4443 {
4347 4444 mptsas_cmd_t *cmd = PKT2CMD(pkt);
4348 4445 mptsas_t *mpt = ADDR2MPT(ap);
4349 4446
4350 4447 NDBG3(("mptsas_scsi_dmafree: target=%d pkt=0x%p",
4351 4448 ap->a_target, (void *)pkt));
4352 4449
4353 4450 if (cmd->cmd_flags & CFLAG_DMAVALID) {
4354 4451 (void) ddi_dma_unbind_handle(cmd->cmd_dmahandle);
4355 4452 cmd->cmd_flags &= ~CFLAG_DMAVALID;
4356 4453 }
4357 4454
4358 4455 mptsas_free_extra_sgl_frame(mpt, cmd);
4359 4456 }
4360 4457
4361 4458 static void
4362 4459 mptsas_pkt_comp(struct scsi_pkt *pkt, mptsas_cmd_t *cmd)
4363 4460 {
4364 4461 if ((cmd->cmd_flags & CFLAG_CMDIOPB) &&
4365 4462 (!(cmd->cmd_flags & CFLAG_DMASEND))) {
4366 4463 (void) ddi_dma_sync(cmd->cmd_dmahandle, 0, 0,
4367 4464 DDI_DMA_SYNC_FORCPU);
4368 4465 }
4369 4466 (*pkt->pkt_comp)(pkt);
4370 4467 }
4371 4468
4372 4469 static void
4373 4470 mptsas_sge_mainframe(mptsas_cmd_t *cmd, pMpi2SCSIIORequest_t frame,
4374 4471 ddi_acc_handle_t acc_hdl, uint_t cookiec, uint32_t end_flags)
4375 4472 {
4376 4473 pMpi2SGESimple64_t sge;
4377 4474 mptti_t *dmap;
4378 4475 uint32_t flags;
4379 4476
4380 4477 dmap = cmd->cmd_sg;
4381 4478
4382 4479 sge = (pMpi2SGESimple64_t)(&frame->SGL);
4383 4480 while (cookiec--) {
4384 4481 ddi_put32(acc_hdl,
4385 4482 &sge->Address.Low, dmap->addr.address64.Low);
4386 4483 ddi_put32(acc_hdl,
4387 4484 &sge->Address.High, dmap->addr.address64.High);
4388 4485 ddi_put32(acc_hdl, &sge->FlagsLength,
4389 4486 dmap->count);
4390 4487 flags = ddi_get32(acc_hdl, &sge->FlagsLength);
4391 4488 flags |= ((uint32_t)
4392 4489 (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
4393 4490 MPI2_SGE_FLAGS_SYSTEM_ADDRESS |
4394 4491 MPI2_SGE_FLAGS_64_BIT_ADDRESSING) <<
4395 4492 MPI2_SGE_FLAGS_SHIFT);
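		/*
		 * FlagsLength packs the transfer length in its low-order
		 * bits and the SGE flags in the high-order byte, which is
		 * why the flag bits are shifted by MPI2_SGE_FLAGS_SHIFT.
		 */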
4396 4493
4397 4494 /*
4398 4495 * If this is the last cookie, we set the flags
4399 4496 * to indicate so
4400 4497 */
4401 4498 if (cookiec == 0) {
4402 4499 flags |= end_flags;
4403 4500 }
4404 4501 if (cmd->cmd_flags & CFLAG_DMASEND) {
4405 4502 flags |= (MPI2_SGE_FLAGS_HOST_TO_IOC <<
4406 4503 MPI2_SGE_FLAGS_SHIFT);
4407 4504 } else {
4408 4505 flags |= (MPI2_SGE_FLAGS_IOC_TO_HOST <<
4409 4506 MPI2_SGE_FLAGS_SHIFT);
4410 4507 }
4411 4508 ddi_put32(acc_hdl, &sge->FlagsLength, flags);
4412 4509 dmap++;
4413 4510 sge++;
4414 4511 }
4415 4512 }
4416 4513
4417 4514 static void
4418 4515 mptsas_sge_chain(mptsas_t *mpt, mptsas_cmd_t *cmd,
4419 4516 pMpi2SCSIIORequest_t frame, ddi_acc_handle_t acc_hdl)
4420 4517 {
4421 4518 pMpi2SGESimple64_t sge;
4422 4519 pMpi2SGEChain64_t sgechain;
4423 4520 uint64_t nframe_phys_addr;
4424 4521 uint_t cookiec;
4425 4522 mptti_t *dmap;
4426 4523 uint32_t flags;
4427 4524
4428 4525 /*
4429 4526 * Save the number of entries in the DMA
4430 4527 * Scatter/Gather list
4431 4528 */
4432 4529 cookiec = cmd->cmd_cookiec;
4433 4530
4434 4531 /*
4435 4532 * Hereby we start to deal with multiple frames.
4436 4533 * The process is as follows:
4437 4534 * 1. Determine how many frames are needed for SGL element
4438 4535 * storage; Note that all frames are stored in contiguous
4439 4536 * memory space and in 64-bit DMA mode each element is
4440 4537 * 3 double-words (12 bytes) long.
4441 4538 * 2. Fill up the main frame. We need to do this separately
4442 4539 * since it contains the SCSI IO request header and needs
4443 4540 * dedicated processing. Note that the last 4 double-words
4444 4541 * of the SCSI IO header is for SGL element storage
4445 4542 * (MPI2_SGE_IO_UNION).
4446 4543 * 3. Fill the chain element in the main frame, so the DMA
4447 4544 * engine can use the following frames.
4448 4545 * 4. Enter a loop to fill the remaining frames. Note that the
4449 4546 * last frame contains no chain element. The remaining
4450 4547 * frames go into the mpt SGL buffer allocated on the fly,
4451 4548 * not immediately following the main message frame, as in
4452 4549 * Gen1.
4453 4550 * Some restrictions:
4454 4551 * 1. For 64-bit DMA, the simple element and chain element
4455 4552 * are both of 3 double-words (12 bytes) in size, even
4456 4553 * though all frames are stored in the first 4G of mem
4457 4554 * range and the higher 32-bits of the address are always 0.
4458 4555 * 2. On some controllers (like the 1064/1068), a frame can
4459 4556 * hold SGL elements with the last 1 or 2 double-words
4460 4557 * (4 or 8 bytes) un-used. On these controllers, we should
4461 4558 * recognize that there's not enough room for another SGL
4462 4559 * element and move the sge pointer to the next frame.
4463 4560 */
4464 4561 int i, j, k, l, frames, sgemax;
4465 4562 int temp;
4466 4563 uint8_t chainflags;
4467 4564 uint16_t chainlength;
4468 4565 mptsas_cache_frames_t *p;
4469 4566
4470 4567 /*
4471 4568 * Sgemax is the number of SGEs that will fit in
4472 4569 * each extra frame, and frames is the total
4473 4570 * number of frames we'll need. One SGE entry per
4474 4571 * frame is reserved for the chain element, thus the -1 below.
4475 4572 */
4476 4573 sgemax = ((mpt->m_req_frame_size / sizeof (MPI2_SGE_SIMPLE64))
4477 4574 - 1);
4478 4575 temp = (cookiec - (MPTSAS_MAX_FRAME_SGES64(mpt) - 1)) / sgemax;
4479 4576
4480 4577 /*
4481 4578 * A little check to see if we need to round up the number
4482 4579 * of frames we need
4483 4580 */
4484 4581 if ((cookiec - (MPTSAS_MAX_FRAME_SGES64(mpt) - 1)) - (temp *
4485 4582 sgemax) > 1) {
4486 4583 frames = (temp + 1);
4487 4584 } else {
4488 4585 frames = temp;
4489 4586 }
4490 4587 dmap = cmd->cmd_sg;
4491 4588 sge = (pMpi2SGESimple64_t)(&frame->SGL);
4492 4589
4493 4590 /*
4494 4591 * First fill in the main frame
4495 4592 */
4496 4593 j = MPTSAS_MAX_FRAME_SGES64(mpt) - 1;
4497 4594 mptsas_sge_mainframe(cmd, frame, acc_hdl, j,
4498 4595 ((uint32_t)(MPI2_SGE_FLAGS_LAST_ELEMENT) <<
4499 4596 MPI2_SGE_FLAGS_SHIFT));
4500 4597 dmap += j;
4501 4598 sge += j;
4502 4599 j++;
4503 4600
4504 4601 /*
4505 4602 * Fill in the chain element in the main frame.
4506 4603 * About calculation on ChainOffset:
4507 4604 * 1. Struct msg_scsi_io_request has 4 double-words (16 bytes)
4508 4605 * in the end reserved for SGL element storage
4509 4606 * (MPI2_SGE_IO_UNION); we should count it in our
4510 4607 * calculation. See its definition in the header file.
4511 4608 	 * 2. Variable j is the counter of the current SGL element
4512 4609 * that will be processed, and (j - 1) is the number of
4513 4610 * SGL elements that have been processed (stored in the
4514 4611 * main frame).
4515 4612 * 3. ChainOffset value should be in units of double-words (4
4516 4613 * bytes) so the last value should be divided by 4.
4517 4614 */
4518 4615 ddi_put8(acc_hdl, &frame->ChainOffset,
4519 4616 (sizeof (MPI2_SCSI_IO_REQUEST) -
4520 4617 sizeof (MPI2_SGE_IO_UNION) +
4521 4618 (j - 1) * sizeof (MPI2_SGE_SIMPLE64)) >> 2);
4522 4619 sgechain = (pMpi2SGEChain64_t)sge;
4523 4620 chainflags = (MPI2_SGE_FLAGS_CHAIN_ELEMENT |
4524 4621 MPI2_SGE_FLAGS_SYSTEM_ADDRESS |
4525 4622 MPI2_SGE_FLAGS_64_BIT_ADDRESSING);
4526 4623 ddi_put8(acc_hdl, &sgechain->Flags, chainflags);
4527 4624
4528 4625 /*
4529 4626 	 * The size of the next frame is the exact amount of space
4530 4627 * (in bytes) used to store the SGL elements. j is the counter
4531 4628 * of SGL elements. (j - 1) is the number of SGL elements that
4532 4629 * have been processed (stored in frames).
4533 4630 */
4534 4631 if (frames >= 2) {
4535 4632 ASSERT(mpt->m_req_frame_size >= sizeof (MPI2_SGE_SIMPLE64));
4536 4633 chainlength = mpt->m_req_frame_size /
4537 4634 sizeof (MPI2_SGE_SIMPLE64) *
4538 4635 sizeof (MPI2_SGE_SIMPLE64);
4539 4636 } else {
4540 4637 chainlength = ((cookiec - (j - 1)) *
4541 4638 sizeof (MPI2_SGE_SIMPLE64));
4542 4639 }
4543 4640
4544 4641 p = cmd->cmd_extra_frames;
4545 4642
4546 4643 ddi_put16(acc_hdl, &sgechain->Length, chainlength);
4547 4644 ddi_put32(acc_hdl, &sgechain->Address.Low, p->m_phys_addr);
4548 4645 ddi_put32(acc_hdl, &sgechain->Address.High, p->m_phys_addr >> 32);
4549 4646
4550 4647 /*
4551 4648 * If there are more than 2 frames left we have to
4552 4649 * fill in the next chain offset to the location of
4553 4650 * the chain element in the next frame.
4554 4651 * sgemax is the number of simple elements in an extra
4555 4652 * frame. Note that the value NextChainOffset should be
4556 4653 * in double-words (4 bytes).
4557 4654 */
4558 4655 if (frames >= 2) {
4559 4656 ddi_put8(acc_hdl, &sgechain->NextChainOffset,
4560 4657 (sgemax * sizeof (MPI2_SGE_SIMPLE64)) >> 2);
4561 4658 } else {
4562 4659 ddi_put8(acc_hdl, &sgechain->NextChainOffset, 0);
4563 4660 }
4564 4661
4565 4662 /*
4566 4663 * Jump to next frame;
4567 4664 * Starting here, chain buffers go into the per command SGL.
4568 4665 * This buffer is allocated when chain buffers are needed.
4569 4666 */
4570 4667 sge = (pMpi2SGESimple64_t)p->m_frames_addr;
4571 4668 i = cookiec;
4572 4669
4573 4670 /*
4574 4671 * Start filling in frames with SGE's. If we
4575 4672 * reach the end of frame and still have SGE's
4576 4673 * to fill we need to add a chain element and
4577 4674 * use another frame. j will be our counter
4578 4675 * for what cookie we are at and i will be
4579 4676 * the total cookiec. k is the current frame
4580 4677 */
4581 4678 for (k = 1; k <= frames; k++) {
4582 4679 for (l = 1; (l <= (sgemax + 1)) && (j <= i); j++, l++) {
4583 4680
4584 4681 /*
4585 4682 * If we have reached the end of frame
4586 4683 * and we have more SGE's to fill in
4587 4684 * we have to fill the final entry
4588 4685 * with a chain element and then
4589 4686 * continue to the next frame
4590 4687 */
4591 4688 if ((l == (sgemax + 1)) && (k != frames)) {
4592 4689 sgechain = (pMpi2SGEChain64_t)sge;
4593 4690 j--;
4594 4691 chainflags = (
4595 4692 MPI2_SGE_FLAGS_CHAIN_ELEMENT |
4596 4693 MPI2_SGE_FLAGS_SYSTEM_ADDRESS |
4597 4694 MPI2_SGE_FLAGS_64_BIT_ADDRESSING);
4598 4695 ddi_put8(p->m_acc_hdl,
4599 4696 &sgechain->Flags, chainflags);
4600 4697 /*
4601 4698 * k is the frame counter and (k + 1)
4602 4699 * is the number of the next frame.
4603 4700 * Note that frames are in contiguous
4604 4701 * memory space.
4605 4702 */
4606 4703 nframe_phys_addr = p->m_phys_addr +
4607 4704 (mpt->m_req_frame_size * k);
4608 4705 ddi_put32(p->m_acc_hdl,
4609 4706 &sgechain->Address.Low,
4610 4707 nframe_phys_addr);
4611 4708 ddi_put32(p->m_acc_hdl,
4612 4709 &sgechain->Address.High,
4613 4710 nframe_phys_addr >> 32);
4614 4711
4615 4712 /*
4616 4713 * If there are more than 2 frames left
4617 4714 				 * we have to set the next chain offset to
4618 4715 * the location of the chain element
4619 4716 * in the next frame and fill in the
4620 4717 * length of the next chain
4621 4718 */
4622 4719 if ((frames - k) >= 2) {
4623 4720 ddi_put8(p->m_acc_hdl,
4624 4721 &sgechain->NextChainOffset,
4625 4722 (sgemax *
4626 4723 sizeof (MPI2_SGE_SIMPLE64))
4627 4724 >> 2);
4628 4725 ddi_put16(p->m_acc_hdl,
4629 4726 &sgechain->Length,
4630 4727 mpt->m_req_frame_size /
4631 4728 sizeof (MPI2_SGE_SIMPLE64) *
4632 4729 sizeof (MPI2_SGE_SIMPLE64));
4633 4730 } else {
4634 4731 /*
4635 4732 * This is the last frame. Set
4636 4733 * the NextChainOffset to 0 and
4637 4734 * Length is the total size of
4638 4735 * all remaining simple elements
4639 4736 */
4640 4737 ddi_put8(p->m_acc_hdl,
4641 4738 &sgechain->NextChainOffset,
4642 4739 0);
4643 4740 ddi_put16(p->m_acc_hdl,
4644 4741 &sgechain->Length,
4645 4742 (cookiec - j) *
4646 4743 sizeof (MPI2_SGE_SIMPLE64));
4647 4744 }
4648 4745
4649 4746 /* Jump to the next frame */
4650 4747 sge = (pMpi2SGESimple64_t)
4651 4748 ((char *)p->m_frames_addr +
4652 4749 (int)mpt->m_req_frame_size * k);
4653 4750
4654 4751 continue;
4655 4752 }
4656 4753
4657 4754 ddi_put32(p->m_acc_hdl,
4658 4755 &sge->Address.Low,
4659 4756 dmap->addr.address64.Low);
4660 4757 ddi_put32(p->m_acc_hdl,
4661 4758 &sge->Address.High,
4662 4759 dmap->addr.address64.High);
4663 4760 ddi_put32(p->m_acc_hdl,
4664 4761 &sge->FlagsLength, dmap->count);
4665 4762 flags = ddi_get32(p->m_acc_hdl,
4666 4763 &sge->FlagsLength);
4667 4764 flags |= ((uint32_t)(
4668 4765 MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
4669 4766 MPI2_SGE_FLAGS_SYSTEM_ADDRESS |
4670 4767 MPI2_SGE_FLAGS_64_BIT_ADDRESSING) <<
4671 4768 MPI2_SGE_FLAGS_SHIFT);
4672 4769
4673 4770 /*
4674 4771 * If we are at the end of the frame and
4675 4772 * there is another frame to fill in
4676 4773 * we set the last simple element as last
4677 4774 * element
4678 4775 */
4679 4776 if ((l == sgemax) && (k != frames)) {
4680 4777 flags |= ((uint32_t)
4681 4778 (MPI2_SGE_FLAGS_LAST_ELEMENT) <<
4682 4779 MPI2_SGE_FLAGS_SHIFT);
4683 4780 }
4684 4781
4685 4782 /*
4686 4783 * If this is the final cookie we
4687 4784 * indicate it by setting the flags
4688 4785 */
4689 4786 if (j == i) {
4690 4787 flags |= ((uint32_t)
4691 4788 (MPI2_SGE_FLAGS_LAST_ELEMENT |
4692 4789 MPI2_SGE_FLAGS_END_OF_BUFFER |
4693 4790 MPI2_SGE_FLAGS_END_OF_LIST) <<
4694 4791 MPI2_SGE_FLAGS_SHIFT);
4695 4792 }
4696 4793 if (cmd->cmd_flags & CFLAG_DMASEND) {
4697 4794 flags |=
4698 4795 (MPI2_SGE_FLAGS_HOST_TO_IOC <<
4699 4796 MPI2_SGE_FLAGS_SHIFT);
4700 4797 } else {
4701 4798 flags |=
4702 4799 (MPI2_SGE_FLAGS_IOC_TO_HOST <<
4703 4800 MPI2_SGE_FLAGS_SHIFT);
4704 4801 }
4705 4802 ddi_put32(p->m_acc_hdl,
4706 4803 &sge->FlagsLength, flags);
4707 4804 dmap++;
4708 4805 sge++;
4709 4806 }
4710 4807 }
4711 4808
4712 4809 /*
4713 4810 * Sync DMA with the chain buffers that were just created
4714 4811 */
4715 4812 (void) ddi_dma_sync(p->m_dma_hdl, 0, 0, DDI_DMA_SYNC_FORDEV);
4716 4813 }
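A worked example of the sgemax/frames arithmetic above may help. The sketch below is a minimal standalone illustration, assuming a 128-byte request frame, 12-byte simple SGEs, and room for two SGEs in the main frame; these values are assumptions for the example, not ones taken from this driver.

#include <stdio.h>

/*
 * Illustrative sketch of the extra-frame calculation used above.
 * With the assumed values, an extra frame holds 128 / 12 = 10
 * elements, one of which is reserved for the chain element,
 * giving sgemax = 9.
 */
static int
calc_extra_frames(int cookiec, int main_sges, int req_frame_size,
    int sge_size)
{
	int sgemax = req_frame_size / sge_size - 1;
	int rest = cookiec - (main_sges - 1);	/* cookies left after main frame */
	int temp = rest / sgemax;
	int frames;

	/* Round up when more than one cookie would be left over. */
	if (rest - temp * sgemax > 1)
		frames = temp + 1;
	else
		frames = temp;
	return (frames);
}

int
main(void)
{
	/* e.g. 30 cookies, 2 main-frame SGEs, 128-byte frames, 12-byte SGEs */
	printf("frames = %d\n", calc_extra_frames(30, 2, 128, 12));
	return (0);
}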
4717 4814
4718 4815 static void
4719 4816 mptsas_ieee_sge_mainframe(mptsas_cmd_t *cmd, pMpi2SCSIIORequest_t frame,
4720 4817 ddi_acc_handle_t acc_hdl, uint_t cookiec, uint8_t end_flag)
4721 4818 {
4722 4819 pMpi2IeeeSgeSimple64_t ieeesge;
4723 4820 mptti_t *dmap;
4724 4821 uint8_t flags;
4725 4822
4726 4823 dmap = cmd->cmd_sg;
4727 4824
4728 4825 NDBG1(("mptsas_ieee_sge_mainframe: cookiec=%d, %s", cookiec,
4729 4826 cmd->cmd_flags & CFLAG_DMASEND?"Out":"In"));
4730 4827
4731 4828 ieeesge = (pMpi2IeeeSgeSimple64_t)(&frame->SGL);
4732 4829 while (cookiec--) {
4733 4830 ddi_put32(acc_hdl,
4734 4831 &ieeesge->Address.Low, dmap->addr.address64.Low);
4735 4832 ddi_put32(acc_hdl,
4736 4833 &ieeesge->Address.High, dmap->addr.address64.High);
4737 4834 ddi_put32(acc_hdl, &ieeesge->Length,
4738 4835 dmap->count);
4739 4836 NDBG1(("mptsas_ieee_sge_mainframe: len=%d", dmap->count));
4740 4837 flags = (MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
4741 4838 MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR);
4742 4839
4743 4840 /*
4744 4841 * If this is the last cookie, we set the flags
4745 4842 * to indicate so
4746 4843 */
4747 4844 if (cookiec == 0) {
4748 4845 flags |= end_flag;
4749 4846 }
4750 4847
4751 4848 ddi_put8(acc_hdl, &ieeesge->Flags, flags);
4752 4849 dmap++;
4753 4850 ieeesge++;
4754 4851 }
4755 4852 }
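For comparison with the Gen2 builder earlier, the IEEE variant differs mainly in the element layout: the Gen2 simple SGE packs flags and length into one 32-bit FlagsLength word next to the 64-bit address (3 double-words, as the comments above note), while the MPI2.5 IEEE simple SGE carries a separate 32-bit Length and an 8-bit Flags byte (4 double-words). The structs below are a simplified sketch of those layouts, not the MPI header definitions this driver actually includes.

#include <stdint.h>

/*
 * Sketch of the two simple-SGE layouts handled above.  Field names
 * mirror the driver's usage; ordering and reserved fields are
 * simplified for illustration -- the real definitions live in the
 * MPI headers this file includes.
 */
typedef struct {			/* Gen2 simple SGE: 12 bytes */
	uint32_t	FlagsLength;	/* flags and byte count in one word */
	struct {
		uint32_t	Low;
		uint32_t	High;
	} Address;
} sge_simple64_sketch_t;

typedef struct {			/* MPI2.5 IEEE simple SGE: 16 bytes */
	struct {
		uint32_t	Low;
		uint32_t	High;
	} Address;
	uint32_t	Length;		/* byte count, no flags packed in */
	uint8_t		Reserved[3];
	uint8_t		Flags;
} ieee_sge_simple64_sketch_t;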
4756 4853
4757 4854 static void
4758 4855 mptsas_ieee_sge_chain(mptsas_t *mpt, mptsas_cmd_t *cmd,
4759 4856 pMpi2SCSIIORequest_t frame, ddi_acc_handle_t acc_hdl)
4760 4857 {
4761 4858 pMpi2IeeeSgeSimple64_t ieeesge;
4762 4859 pMpi25IeeeSgeChain64_t ieeesgechain;
4763 4860 uint64_t nframe_phys_addr;
4764 4861 uint_t cookiec;
4765 4862 mptti_t *dmap;
4766 4863 uint8_t flags;
4767 4864
4768 4865 /*
4769 4866 * Save the number of entries in the DMA
4770 4867 * Scatter/Gather list
4771 4868 */
4772 4869 cookiec = cmd->cmd_cookiec;
4773 4870
4774 4871 NDBG1(("mptsas_ieee_sge_chain: cookiec=%d", cookiec));
4775 4872
4776 4873 /*
4777 4874 	 * Here we start to deal with multiple frames.
4778 4875 * The process is as follows:
4779 4876 * 1. Determine how many frames are needed for SGL element
4780 4877 * storage; Note that all frames are stored in contiguous
4781 4878 * memory space and in 64-bit DMA mode each element is
4782 4879 * 4 double-words (16 bytes) long.
4783 4880 * 2. Fill up the main frame. We need to do this separately
4784 4881 * since it contains the SCSI IO request header and needs
4785 4882 * dedicated processing. Note that the last 4 double-words
4786 4883 	 *    of the SCSI IO header are for SGL element storage
4787 4884 * (MPI2_SGE_IO_UNION).
4788 4885 * 3. Fill the chain element in the main frame, so the DMA
4789 4886 * engine can use the following frames.
4790 4887 * 4. Enter a loop to fill the remaining frames. Note that the
4791 4888 * last frame contains no chain element. The remaining
4792 4889 * frames go into the mpt SGL buffer allocated on the fly,
4793 4890 * not immediately following the main message frame, as in
4794 4891 * Gen1.
4795 4892 * Restrictions:
4796 4893 * For 64-bit DMA, the simple element and chain element
4797 4894 * are both of 4 double-words (16 bytes) in size, even
4798 4895 * though all frames are stored in the first 4G of mem
4799 4896 * range and the higher 32-bits of the address are always 0.
4800 4897 */
4801 4898 int i, j, k, l, frames, sgemax;
4802 4899 int temp;
4803 4900 uint8_t chainflags;
4804 4901 uint32_t chainlength;
4805 4902 mptsas_cache_frames_t *p;
4806 4903
4807 4904 /*
4808 4905 	 * Sgemax is the number of SGE's that will fit in
4809 4906 	 * each extra frame and frames is the total
4810 4907 	 * number of frames we'll need. 1 sge entry per
4811 4908 	 * frame is reserved for the chain element, thus the -1 below.
4812 4909 */
4813 4910 sgemax = ((mpt->m_req_frame_size / sizeof (MPI2_IEEE_SGE_SIMPLE64))
4814 4911 - 1);
4815 4912 temp = (cookiec - (MPTSAS_MAX_FRAME_SGES64(mpt) - 1)) / sgemax;
4816 4913
4817 4914 /*
4818 4915 * A little check to see if we need to round up the number
4819 4916 * of frames we need
4820 4917 */
4821 4918 if ((cookiec - (MPTSAS_MAX_FRAME_SGES64(mpt) - 1)) - (temp *
4822 4919 sgemax) > 1) {
4823 4920 frames = (temp + 1);
4824 4921 } else {
4825 4922 frames = temp;
4826 4923 }
4827 4924 NDBG1(("mptsas_ieee_sge_chain: temp=%d, frames=%d", temp, frames));
4828 4925 dmap = cmd->cmd_sg;
4829 4926 ieeesge = (pMpi2IeeeSgeSimple64_t)(&frame->SGL);
4830 4927
4831 4928 /*
4832 4929 * First fill in the main frame
4833 4930 */
4834 4931 j = MPTSAS_MAX_FRAME_SGES64(mpt) - 1;
4835 4932 mptsas_ieee_sge_mainframe(cmd, frame, acc_hdl, j, 0);
4836 4933 dmap += j;
4837 4934 ieeesge += j;
4838 4935 j++;
4839 4936
4840 4937 /*
4841 4938 * Fill in the chain element in the main frame.
4842 4939 * About calculation on ChainOffset:
4843 4940 * 1. Struct msg_scsi_io_request has 4 double-words (16 bytes)
4844 4941 * in the end reserved for SGL element storage
4845 4942 * (MPI2_SGE_IO_UNION); we should count it in our
4846 4943 * calculation. See its definition in the header file.
4847 4944 	 * 2. Variable j is the counter of the current SGL element
4848 4945 * that will be processed, and (j - 1) is the number of
4849 4946 * SGL elements that have been processed (stored in the
4850 4947 * main frame).
4851 4948 * 3. ChainOffset value should be in units of quad-words (16
4852 4949 * bytes) so the last value should be divided by 16.
4853 4950 */
4854 4951 ddi_put8(acc_hdl, &frame->ChainOffset,
4855 4952 (sizeof (MPI2_SCSI_IO_REQUEST) -
4856 4953 sizeof (MPI2_SGE_IO_UNION) +
4857 4954 (j - 1) * sizeof (MPI2_IEEE_SGE_SIMPLE64)) >> 4);
4858 4955 ieeesgechain = (pMpi25IeeeSgeChain64_t)ieeesge;
4859 4956 chainflags = (MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT |
4860 4957 MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR);
4861 4958 ddi_put8(acc_hdl, &ieeesgechain->Flags, chainflags);
4862 4959
4863 4960 /*
4864 4961 	 * The size of the next frame is the exact amount of space
4865 4962 * (in bytes) used to store the SGL elements. j is the counter
4866 4963 * of SGL elements. (j - 1) is the number of SGL elements that
4867 4964 * have been processed (stored in frames).
4868 4965 */
4869 4966 if (frames >= 2) {
4870 4967 ASSERT(mpt->m_req_frame_size >=
4871 4968 sizeof (MPI2_IEEE_SGE_SIMPLE64));
4872 4969 chainlength = mpt->m_req_frame_size /
4873 4970 sizeof (MPI2_IEEE_SGE_SIMPLE64) *
4874 4971 sizeof (MPI2_IEEE_SGE_SIMPLE64);
4875 4972 } else {
4876 4973 chainlength = ((cookiec - (j - 1)) *
4877 4974 sizeof (MPI2_IEEE_SGE_SIMPLE64));
4878 4975 }
4879 4976
4880 4977 p = cmd->cmd_extra_frames;
4881 4978
4882 4979 ddi_put32(acc_hdl, &ieeesgechain->Length, chainlength);
4883 4980 ddi_put32(acc_hdl, &ieeesgechain->Address.Low, p->m_phys_addr);
4884 4981 ddi_put32(acc_hdl, &ieeesgechain->Address.High, p->m_phys_addr >> 32);
4885 4982
4886 4983 /*
4887 4984 * If there are more than 2 frames left we have to
4888 4985 * fill in the next chain offset to the location of
4889 4986 * the chain element in the next frame.
4890 4987 * sgemax is the number of simple elements in an extra
4891 4988 * frame. Note that the value NextChainOffset should be
4892 4989 * in double-words (4 bytes).
4893 4990 */
4894 4991 if (frames >= 2) {
4895 4992 ddi_put8(acc_hdl, &ieeesgechain->NextChainOffset,
4896 4993 (sgemax * sizeof (MPI2_IEEE_SGE_SIMPLE64)) >> 4);
4897 4994 } else {
4898 4995 ddi_put8(acc_hdl, &ieeesgechain->NextChainOffset, 0);
4899 4996 }
4900 4997
4901 4998 /*
4902 4999 * Jump to next frame;
4903 5000 * Starting here, chain buffers go into the per command SGL.
4904 5001 * This buffer is allocated when chain buffers are needed.
4905 5002 */
4906 5003 ieeesge = (pMpi2IeeeSgeSimple64_t)p->m_frames_addr;
4907 5004 i = cookiec;
4908 5005
4909 5006 /*
4910 5007 * Start filling in frames with SGE's. If we
4911 5008 * reach the end of frame and still have SGE's
4912 5009 * to fill we need to add a chain element and
4913 5010 * use another frame. j will be our counter
4914 5011 * for what cookie we are at and i will be
4915 5012 * the total cookiec. k is the current frame
4916 5013 */
4917 5014 for (k = 1; k <= frames; k++) {
4918 5015 for (l = 1; (l <= (sgemax + 1)) && (j <= i); j++, l++) {
4919 5016
4920 5017 /*
4921 5018 * If we have reached the end of frame
4922 5019 * and we have more SGE's to fill in
4923 5020 * we have to fill the final entry
4924 5021 * with a chain element and then
4925 5022 * continue to the next frame
4926 5023 */
4927 5024 if ((l == (sgemax + 1)) && (k != frames)) {
4928 5025 ieeesgechain = (pMpi25IeeeSgeChain64_t)ieeesge;
4929 5026 j--;
4930 5027 chainflags =
4931 5028 MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT |
4932 5029 MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR;
4933 5030 ddi_put8(p->m_acc_hdl,
4934 5031 &ieeesgechain->Flags, chainflags);
4935 5032 /*
4936 5033 * k is the frame counter and (k + 1)
4937 5034 * is the number of the next frame.
4938 5035 * Note that frames are in contiguous
4939 5036 * memory space.
4940 5037 */
4941 5038 nframe_phys_addr = p->m_phys_addr +
4942 5039 (mpt->m_req_frame_size * k);
4943 5040 ddi_put32(p->m_acc_hdl,
4944 5041 &ieeesgechain->Address.Low,
4945 5042 nframe_phys_addr);
4946 5043 ddi_put32(p->m_acc_hdl,
4947 5044 &ieeesgechain->Address.High,
4948 5045 nframe_phys_addr >> 32);
4949 5046
4950 5047 /*
4951 5048 * If there are more than 2 frames left
4952 5049 				 * we have to set the next chain offset to
4953 5050 * the location of the chain element
4954 5051 * in the next frame and fill in the
4955 5052 * length of the next chain
4956 5053 */
4957 5054 if ((frames - k) >= 2) {
4958 5055 ddi_put8(p->m_acc_hdl,
4959 5056 &ieeesgechain->NextChainOffset,
4960 5057 (sgemax *
4961 5058 sizeof (MPI2_IEEE_SGE_SIMPLE64))
4962 5059 >> 4);
4963 5060 ASSERT(mpt->m_req_frame_size >=
4964 5061 sizeof (MPI2_IEEE_SGE_SIMPLE64));
4965 5062 ddi_put32(p->m_acc_hdl,
4966 5063 &ieeesgechain->Length,
4967 5064 mpt->m_req_frame_size /
4968 5065 sizeof (MPI2_IEEE_SGE_SIMPLE64) *
4969 5066 sizeof (MPI2_IEEE_SGE_SIMPLE64));
4970 5067 } else {
4971 5068 /*
4972 5069 * This is the last frame. Set
4973 5070 * the NextChainOffset to 0 and
4974 5071 * Length is the total size of
4975 5072 * all remaining simple elements
4976 5073 */
4977 5074 ddi_put8(p->m_acc_hdl,
4978 5075 &ieeesgechain->NextChainOffset,
4979 5076 0);
4980 5077 ddi_put32(p->m_acc_hdl,
4981 5078 &ieeesgechain->Length,
4982 5079 (cookiec - j) *
4983 5080 sizeof (MPI2_IEEE_SGE_SIMPLE64));
4984 5081 }
4985 5082
4986 5083 /* Jump to the next frame */
4987 5084 ieeesge = (pMpi2IeeeSgeSimple64_t)
4988 5085 ((char *)p->m_frames_addr +
4989 5086 (int)mpt->m_req_frame_size * k);
4990 5087
4991 5088 continue;
4992 5089 }
4993 5090
4994 5091 ddi_put32(p->m_acc_hdl,
4995 5092 &ieeesge->Address.Low,
4996 5093 dmap->addr.address64.Low);
4997 5094 ddi_put32(p->m_acc_hdl,
4998 5095 &ieeesge->Address.High,
4999 5096 dmap->addr.address64.High);
5000 5097 ddi_put32(p->m_acc_hdl,
5001 5098 &ieeesge->Length, dmap->count);
5002 5099 flags = (MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
5003 5100 MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR);
5004 5101
5005 5102 /*
5006 5103 * If we are at the end of the frame and
5007 5104 * there is another frame to fill in
5008 5105 * do we need to do anything?
5009 5106 * if ((l == sgemax) && (k != frames)) {
5010 5107 * }
5011 5108 */
5012 5109
5013 5110 /*
5014 5111 * If this is the final cookie set end of list.
5015 5112 */
5016 5113 if (j == i) {
5017 5114 flags |= MPI25_IEEE_SGE_FLAGS_END_OF_LIST;
5018 5115 }
5019 5116
5020 5117 ddi_put8(p->m_acc_hdl, &ieeesge->Flags, flags);
5021 5118 dmap++;
5022 5119 ieeesge++;
5023 5120 }
5024 5121 }
5025 5122
5026 5123 /*
5027 5124 * Sync DMA with the chain buffers that were just created
5028 5125 */
5029 5126 (void) ddi_dma_sync(p->m_dma_hdl, 0, 0, DDI_DMA_SYNC_FORDEV);
5030 5127 }
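The main subtlety relative to the Gen2 chain builder is the unit of ChainOffset/NextChainOffset: 4-byte double-words for Gen2 (hence the >> 2 shifts) versus 16-byte units for the MPI2.5 IEEE format (hence the >> 4 shifts). A minimal worked sketch, using assumed structure sizes rather than the real header values:

#include <stdio.h>

/*
 * Illustrative only: assumed sizes, chosen to mirror the comments
 * above (a Gen2 simple SGE is 12 bytes, an IEEE simple SGE is 16
 * bytes, and the SCSI IO request ends with a 16-byte SGL union).
 */
#define REQ_HDR_SIZE		64	/* assumed sizeof (MPI2_SCSI_IO_REQUEST) */
#define SGE_IO_UNION_SIZE	16	/* assumed sizeof (MPI2_SGE_IO_UNION) */

int
main(void)
{
	int j = 3;			/* next SGL element to process */

	/* Gen2: offset expressed in 4-byte double-words */
	int off_gen2 = (REQ_HDR_SIZE - SGE_IO_UNION_SIZE +
	    (j - 1) * 12) >> 2;

	/* MPI2.5 IEEE: offset expressed in 16-byte units */
	int off_ieee = (REQ_HDR_SIZE - SGE_IO_UNION_SIZE +
	    (j - 1) * 16) >> 4;

	printf("ChainOffset: gen2=%d dwords, ieee=%d 16-byte units\n",
	    off_gen2, off_ieee);
	return (0);
}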
5031 5128
5032 5129 static void
5033 5130 mptsas_sge_setup(mptsas_t *mpt, mptsas_cmd_t *cmd, uint32_t *control,
5034 5131 pMpi2SCSIIORequest_t frame, ddi_acc_handle_t acc_hdl)
5035 5132 {
5036 5133 ASSERT(cmd->cmd_flags & CFLAG_DMAVALID);
5037 5134
5038 5135 NDBG1(("mptsas_sge_setup: cookiec=%d", cmd->cmd_cookiec));
5039 5136
5040 5137 /*
5041 5138 * Set read/write bit in control.
5042 5139 */
5043 5140 if (cmd->cmd_flags & CFLAG_DMASEND) {
5044 5141 *control |= MPI2_SCSIIO_CONTROL_WRITE;
5045 5142 } else {
5046 5143 *control |= MPI2_SCSIIO_CONTROL_READ;
5047 5144 }
5048 5145
5049 5146 ddi_put32(acc_hdl, &frame->DataLength, cmd->cmd_dmacount);
5050 5147
5051 5148 /*
5052 5149 	 * We have 4 cases here: either all the SG elements fit
5053 5150 	 * into the main frame or they do not, and the SG element
5054 5151 	 * format differs depending on whether the MPI2.5 (IEEE)
5055 5152 	 * interface is in use.
5056 5153 	 * If we have more cookies than we can attach to a frame
5057 5154 	 * we will need to use a chain element to point to the
5058 5155 	 * location of memory where the rest of the S/G
5059 5156 	 * elements reside.
5060 5157 */
5061 5158 if (cmd->cmd_cookiec <= MPTSAS_MAX_FRAME_SGES64(mpt)) {
5062 5159 if (mpt->m_MPI25) {
5063 5160 mptsas_ieee_sge_mainframe(cmd, frame, acc_hdl,
5064 5161 cmd->cmd_cookiec,
5065 5162 MPI25_IEEE_SGE_FLAGS_END_OF_LIST);
5066 5163 } else {
5067 5164 mptsas_sge_mainframe(cmd, frame, acc_hdl,
5068 5165 cmd->cmd_cookiec,
5069 5166 ((uint32_t)(MPI2_SGE_FLAGS_LAST_ELEMENT
5070 5167 | MPI2_SGE_FLAGS_END_OF_BUFFER
5071 5168 | MPI2_SGE_FLAGS_END_OF_LIST) <<
5072 5169 MPI2_SGE_FLAGS_SHIFT));
5073 5170 }
5074 5171 } else {
5075 5172 if (mpt->m_MPI25) {
5076 5173 mptsas_ieee_sge_chain(mpt, cmd, frame, acc_hdl);
5077 5174 } else {
5078 5175 mptsas_sge_chain(mpt, cmd, frame, acc_hdl);
5079 5176 }
5080 5177 }
5081 5178 }
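To restate the four paths chosen above, the helper below reproduces the same selection as a standalone sketch; the cookie count, the main-frame SGE capacity and the MPI2.5 flag are passed in as plain parameters and merely stand in for the driver's real state.

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the SGL-builder selection made in mptsas_sge_setup():
 * main-frame-only when every cookie fits, otherwise a chained SGL,
 * with the element format picked by whether the HBA speaks MPI2.5.
 * Purely illustrative; not part of the driver.
 */
static const char *
sge_builder(bool is_mpi25, unsigned int cookiec, unsigned int max_frame_sges)
{
	if (cookiec <= max_frame_sges)
		return (is_mpi25 ? "ieee_sge_mainframe" : "sge_mainframe");
	return (is_mpi25 ? "ieee_sge_chain" : "sge_chain");
}

int
main(void)
{
	printf("%s\n", sge_builder(true, 40, 2));	/* -> ieee_sge_chain */
	return (0);
}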
5082 5179
5083 5180 /*
5084 5181 * Interrupt handling
5085 5182 * Utility routine. Poll for status of a command sent to HBA
5086 5183 * without interrupts (a FLAG_NOINTR command).
5087 5184 */
5088 5185 int
5089 5186 mptsas_poll(mptsas_t *mpt, mptsas_cmd_t *poll_cmd, int polltime)
5090 5187 {
5091 5188 int rval = TRUE;
5092 5189
5093 5190 NDBG5(("mptsas_poll: cmd=0x%p", (void *)poll_cmd));
5094 5191
5095 5192 if ((poll_cmd->cmd_flags & CFLAG_TM_CMD) == 0) {
5096 5193 mptsas_restart_hba(mpt);
5097 5194 }
5098 5195
5099 5196 /*
5100 5197 * Wait, using drv_usecwait(), long enough for the command to
5101 5198 * reasonably return from the target if the target isn't
5102 5199 * "dead". A polled command may well be sent from scsi_poll, and
5103 5200 * there are retries built in to scsi_poll if the transport
5104 5201 * accepted the packet (TRAN_ACCEPT). scsi_poll waits 1 second
5105 5202 * and retries the transport up to scsi_poll_busycnt times
5106 5203 * (currently 60) if
5107 5204 * 1. pkt_reason is CMD_INCOMPLETE and pkt_state is 0, or
5108 5205 * 2. pkt_reason is CMD_CMPLT and *pkt_scbp has STATUS_BUSY
5109 5206 *
5110 5207 * limit the waiting to avoid a hang in the event that the
5111 5208 * cmd never gets started but we are still receiving interrupts
5112 5209 */
5113 5210 while (!(poll_cmd->cmd_flags & CFLAG_FINISHED)) {
5114 5211 if (mptsas_wait_intr(mpt, polltime) == FALSE) {
5115 5212 NDBG5(("mptsas_poll: command incomplete"));
5116 5213 rval = FALSE;
5117 5214 break;
5118 5215 }
5119 5216 }
5120 5217
5121 5218 if (rval == FALSE) {
5122 5219
5123 5220 /*
5124 5221 * this isn't supposed to happen, the hba must be wedged
5125 5222 * Mark this cmd as a timeout.
5126 5223 */
5127 5224 mptsas_set_pkt_reason(mpt, poll_cmd, CMD_TIMEOUT,
5128 5225 (STAT_TIMEOUT|STAT_ABORTED));
5129 5226
5130 5227 if (poll_cmd->cmd_queued == FALSE) {
5131 5228
5132 5229 NDBG5(("mptsas_poll: not on waitq"));
5133 5230
5134 5231 poll_cmd->cmd_pkt->pkt_state |=
5135 5232 (STATE_GOT_BUS|STATE_GOT_TARGET|STATE_SENT_CMD);
5136 5233 } else {
5137 5234
5138 5235 /* find and remove it from the waitq */
5139 5236 NDBG5(("mptsas_poll: delete from waitq"));
5140 5237 mptsas_waitq_delete(mpt, poll_cmd);
5141 5238 }
5142 5239
5143 5240 }
5144 5241 mptsas_fma_check(mpt, poll_cmd);
5145 5242 NDBG5(("mptsas_poll: done"));
5146 5243 return (rval);
5147 5244 }
5148 5245
5149 5246 /*
5150 5247 * Used for polling cmds and TM function
5151 5248 */
5152 5249 static int
5153 5250 mptsas_wait_intr(mptsas_t *mpt, int polltime)
5154 5251 {
5155 5252 int cnt;
5156 5253 pMpi2ReplyDescriptorsUnion_t reply_desc_union;
5157 5254 uint32_t int_mask;
5158 5255
5159 5256 NDBG5(("mptsas_wait_intr"));
5160 5257
5161 5258 mpt->m_polled_intr = 1;
5162 5259
5163 5260 /*
5164 5261 * Get the current interrupt mask and disable interrupts. When
5165 5262 * re-enabling ints, set mask to saved value.
5166 5263 */
5167 5264 int_mask = ddi_get32(mpt->m_datap, &mpt->m_reg->HostInterruptMask);
5168 5265 MPTSAS_DISABLE_INTR(mpt);
5169 5266
5170 5267 /*
5171 5268 	 * Keep polling for roughly polltime milliseconds (1 ms per pass)
5172 5269 */
5173 5270 for (cnt = 0; cnt < polltime; cnt++) {
5174 5271 (void) ddi_dma_sync(mpt->m_dma_post_queue_hdl, 0, 0,
5175 5272 DDI_DMA_SYNC_FORCPU);
5176 5273
5177 5274 reply_desc_union = (pMpi2ReplyDescriptorsUnion_t)
5178 5275 MPTSAS_GET_NEXT_REPLY(mpt, mpt->m_post_index);
5179 5276
5180 5277 if (ddi_get32(mpt->m_acc_post_queue_hdl,
5181 5278 &reply_desc_union->Words.Low) == 0xFFFFFFFF ||
5182 5279 ddi_get32(mpt->m_acc_post_queue_hdl,
5183 5280 &reply_desc_union->Words.High) == 0xFFFFFFFF) {
5184 5281 drv_usecwait(1000);
5185 5282 continue;
5186 5283 }
5187 5284
5188 5285 /*
5189 5286 * The reply is valid, process it according to its
5190 5287 * type.
5191 5288 */
5192 5289 mptsas_process_intr(mpt, reply_desc_union);
5193 5290
5194 5291 if (++mpt->m_post_index == mpt->m_post_queue_depth) {
5195 5292 mpt->m_post_index = 0;
5196 5293 }
5197 5294
5198 5295 /*
5199 5296 * Update the global reply index
5200 5297 */
5201 5298 ddi_put32(mpt->m_datap,
5202 5299 &mpt->m_reg->ReplyPostHostIndex, mpt->m_post_index);
5203 5300 mpt->m_polled_intr = 0;
5204 5301
5205 5302 /*
5206 5303 * Re-enable interrupts and quit.
5207 5304 */
5208 5305 ddi_put32(mpt->m_datap, &mpt->m_reg->HostInterruptMask,
5209 5306 int_mask);
5210 5307 return (TRUE);
5211 5308
5212 5309 }
5213 5310
5214 5311 /*
5215 5312 * Clear polling flag, re-enable interrupts and quit.
5216 5313 */
5217 5314 mpt->m_polled_intr = 0;
5218 5315 ddi_put32(mpt->m_datap, &mpt->m_reg->HostInterruptMask, int_mask);
5219 5316 return (FALSE);
5220 5317 }
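Each pass of the loop above waits about 1 ms (drv_usecwait(1000)), so polltime is effectively a budget in milliseconds. A minimal, driver-independent sketch of that bounded-poll pattern, with check_hw() and wait_1ms() as placeholders for the reply-queue read and the microsecond wait:

#include <stdbool.h>
#include <stdio.h>

/* Placeholders for the hardware poll and the 1 ms delay. */
static bool check_hw(void) { return (false); }
static void wait_1ms(void) { }

/*
 * Up to `polltime` passes of ~1 ms each, so the total wait budget is
 * roughly polltime milliseconds (e.g. 60000 passes is about 60 s).
 */
static bool
bounded_poll(int polltime)
{
	int cnt;

	for (cnt = 0; cnt < polltime; cnt++) {
		if (check_hw())
			return (true);		/* reply arrived */
		wait_1ms();
	}
	return (false);				/* budget exhausted */
}

int
main(void)
{
	printf("%s\n", bounded_poll(10) ? "reply" : "timeout");
	return (0);
}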
5221 5318
5222 5319 static void
5223 5320 mptsas_handle_scsi_io_success(mptsas_t *mpt,
5224 5321 pMpi2ReplyDescriptorsUnion_t reply_desc)
5225 5322 {
5226 5323 pMpi2SCSIIOSuccessReplyDescriptor_t scsi_io_success;
5227 5324 uint16_t SMID;
5228 5325 mptsas_slots_t *slots = mpt->m_active;
5229 5326 mptsas_cmd_t *cmd = NULL;
5230 5327 struct scsi_pkt *pkt;
5231 5328
5232 5329 ASSERT(mutex_owned(&mpt->m_mutex));
5233 5330
5234 5331 scsi_io_success = (pMpi2SCSIIOSuccessReplyDescriptor_t)reply_desc;
5235 5332 SMID = ddi_get16(mpt->m_acc_post_queue_hdl, &scsi_io_success->SMID);
5236 5333
5237 5334 /*
5238 5335 * This is a success reply so just complete the IO. First, do a sanity
5239 5336 * check on the SMID. The final slot is used for TM requests, which
5240 5337 * would not come into this reply handler.
5241 5338 */
5242 5339 if ((SMID == 0) || (SMID > slots->m_n_normal)) {
5243 - mptsas_log(mpt, CE_WARN, "?Received invalid SMID of %d\n",
5244 - SMID);
5340 + mptsas_log(mpt, CE_WARN, "received invalid SMID of %d", SMID);
5245 5341 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
5246 5342 return;
5247 5343 }
5248 5344
5249 5345 cmd = slots->m_slot[SMID];
5250 5346
5251 5347 /*
5252 5348 * print warning and return if the slot is empty
5253 5349 */
5254 5350 if (cmd == NULL) {
5255 - mptsas_log(mpt, CE_WARN, "?NULL command for successful SCSI IO "
5351 + mptsas_log(mpt, CE_WARN, "NULL command for successful SCSI IO "
5256 5352 "in slot %d", SMID);
5257 5353 return;
5258 5354 }
5259 5355
5260 5356 pkt = CMD2PKT(cmd);
5357 + ASSERT(pkt->pkt_start != 0);
5358 + pkt->pkt_stop = gethrtime();
5261 5359 pkt->pkt_state |= (STATE_GOT_BUS | STATE_GOT_TARGET | STATE_SENT_CMD |
5262 5360 STATE_GOT_STATUS);
5263 5361 if (cmd->cmd_flags & CFLAG_DMAVALID) {
5264 5362 pkt->pkt_state |= STATE_XFERRED_DATA;
5265 5363 }
5266 5364 pkt->pkt_resid = 0;
5267 5365
5268 5366 if (cmd->cmd_flags & CFLAG_PASSTHRU) {
5269 5367 cmd->cmd_flags |= CFLAG_FINISHED;
5270 5368 cv_broadcast(&mpt->m_passthru_cv);
5271 5369 return;
5272 5370 } else {
5273 5371 mptsas_remove_cmd(mpt, cmd);
5274 5372 }
5275 5373
5276 5374 if (cmd->cmd_flags & CFLAG_RETRY) {
5277 5375 /*
5278 5376 		 * The target returned QFULL or busy, do not add this
5279 5377 * pkt to the doneq since the hba will retry
5280 5378 * this cmd.
5281 5379 *
5282 5380 * The pkt has already been resubmitted in
5283 5381 * mptsas_handle_qfull() or in mptsas_check_scsi_io_error().
5284 5382 * Remove this cmd_flag here.
5285 5383 */
5286 5384 cmd->cmd_flags &= ~CFLAG_RETRY;
5287 5385 } else {
5288 5386 mptsas_doneq_add(mpt, cmd);
5289 5387 }
5290 5388 }
5291 5389
5292 5390 static void
5293 5391 mptsas_handle_address_reply(mptsas_t *mpt,
5294 5392 pMpi2ReplyDescriptorsUnion_t reply_desc)
5295 5393 {
5296 5394 pMpi2AddressReplyDescriptor_t address_reply;
5297 5395 pMPI2DefaultReply_t reply;
5298 5396 mptsas_fw_diagnostic_buffer_t *pBuffer;
5299 5397 uint32_t reply_addr, reply_frame_dma_baseaddr;
5300 5398 uint16_t SMID, iocstatus;
5301 5399 mptsas_slots_t *slots = mpt->m_active;
5302 5400 mptsas_cmd_t *cmd = NULL;
5303 5401 uint8_t function, buffer_type;
5304 5402 m_replyh_arg_t *args;
5305 5403 int reply_frame_no;
5306 5404
5307 5405 ASSERT(mutex_owned(&mpt->m_mutex));
5308 5406
5309 5407 address_reply = (pMpi2AddressReplyDescriptor_t)reply_desc;
5310 5408 reply_addr = ddi_get32(mpt->m_acc_post_queue_hdl,
5311 5409 &address_reply->ReplyFrameAddress);
5312 5410 SMID = ddi_get16(mpt->m_acc_post_queue_hdl, &address_reply->SMID);
5313 5411
5314 5412 /*
5315 5413 * If reply frame is not in the proper range we should ignore this
5316 5414 * message and exit the interrupt handler.
5317 5415 */
5318 5416 reply_frame_dma_baseaddr = mpt->m_reply_frame_dma_addr & 0xffffffffu;
5319 5417 if ((reply_addr < reply_frame_dma_baseaddr) ||
5320 5418 (reply_addr >= (reply_frame_dma_baseaddr +
5321 5419 (mpt->m_reply_frame_size * mpt->m_max_replies))) ||
5322 5420 ((reply_addr - reply_frame_dma_baseaddr) %
5323 5421 mpt->m_reply_frame_size != 0)) {
5324 - mptsas_log(mpt, CE_WARN, "?Received invalid reply frame "
5325 - "address 0x%x\n", reply_addr);
5422 + mptsas_log(mpt, CE_WARN, "received invalid reply frame "
5423 + "address 0x%x", reply_addr);
5326 5424 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
5327 5425 return;
5328 5426 }
5329 5427
5330 5428 (void) ddi_dma_sync(mpt->m_dma_reply_frame_hdl, 0, 0,
5331 5429 DDI_DMA_SYNC_FORCPU);
5332 5430 reply = (pMPI2DefaultReply_t)(mpt->m_reply_frame + (reply_addr -
5333 5431 reply_frame_dma_baseaddr));
5334 5432 function = ddi_get8(mpt->m_acc_reply_frame_hdl, &reply->Function);
5335 5433
5336 5434 NDBG31(("mptsas_handle_address_reply: function 0x%x, reply_addr=0x%x",
5337 5435 function, reply_addr));
5338 5436
5339 5437 /*
5340 5438 * don't get slot information and command for events since these values
5341 5439 * don't exist
5342 5440 */
5343 5441 if ((function != MPI2_FUNCTION_EVENT_NOTIFICATION) &&
5344 5442 (function != MPI2_FUNCTION_DIAG_BUFFER_POST)) {
5345 5443 /*
5346 5444 		 * This could be a TM reply, which uses the last allocated SMID,
5347 5445 * so allow for that.
5348 5446 */
5349 5447 if ((SMID == 0) || (SMID > (slots->m_n_normal + 1))) {
5350 - mptsas_log(mpt, CE_WARN, "?Received invalid SMID of "
5351 - "%d\n", SMID);
5448 + mptsas_log(mpt, CE_WARN, "received invalid SMID of "
5449 + "%d", SMID);
5352 5450 ddi_fm_service_impact(mpt->m_dip,
5353 5451 DDI_SERVICE_UNAFFECTED);
5354 5452 return;
5355 5453 }
5356 5454
5357 5455 cmd = slots->m_slot[SMID];
5358 5456
5359 5457 /*
5360 5458 * print warning and return if the slot is empty
5361 5459 */
5362 5460 if (cmd == NULL) {
5363 - mptsas_log(mpt, CE_WARN, "?NULL command for address "
5461 + mptsas_log(mpt, CE_WARN, "NULL command for address "
5364 5462 "reply in slot %d", SMID);
5365 5463 return;
5366 5464 }
5367 5465 if ((cmd->cmd_flags &
5368 5466 (CFLAG_PASSTHRU | CFLAG_CONFIG | CFLAG_FW_DIAG))) {
5369 5467 cmd->cmd_rfm = reply_addr;
5370 5468 cmd->cmd_flags |= CFLAG_FINISHED;
5371 5469 cv_broadcast(&mpt->m_passthru_cv);
5372 5470 cv_broadcast(&mpt->m_config_cv);
5373 5471 cv_broadcast(&mpt->m_fw_diag_cv);
5374 5472 return;
5375 5473 } else if (!(cmd->cmd_flags & CFLAG_FW_CMD)) {
5376 5474 mptsas_remove_cmd(mpt, cmd);
5377 5475 }
5378 5476 NDBG31(("\t\tmptsas_process_intr: slot=%d", SMID));
5379 5477 }
5380 5478 /*
5381 5479 * Depending on the function, we need to handle
5382 5480 * the reply frame (and cmd) differently.
5383 5481 */
5384 5482 switch (function) {
5385 5483 case MPI2_FUNCTION_SCSI_IO_REQUEST:
5386 5484 mptsas_check_scsi_io_error(mpt, (pMpi2SCSIIOReply_t)reply, cmd);
5387 5485 break;
5388 5486 case MPI2_FUNCTION_SCSI_TASK_MGMT:
5389 5487 cmd->cmd_rfm = reply_addr;
5390 5488 mptsas_check_task_mgt(mpt, (pMpi2SCSIManagementReply_t)reply,
5391 5489 cmd);
5392 5490 break;
5393 5491 case MPI2_FUNCTION_FW_DOWNLOAD:
5394 5492 cmd->cmd_flags |= CFLAG_FINISHED;
5395 5493 cv_signal(&mpt->m_fw_cv);
5396 5494 break;
5397 5495 case MPI2_FUNCTION_EVENT_NOTIFICATION:
5398 5496 reply_frame_no = (reply_addr - reply_frame_dma_baseaddr) /
5399 5497 mpt->m_reply_frame_size;
5400 5498 args = &mpt->m_replyh_args[reply_frame_no];
5401 5499 args->mpt = (void *)mpt;
5402 5500 args->rfm = reply_addr;
5403 5501
5404 5502 /*
5405 5503 * Record the event if its type is enabled in
5406 5504 * this mpt instance by ioctl.
5407 5505 */
5408 5506 mptsas_record_event(args);
5409 5507
5410 5508 /*
5411 5509 * Handle time critical events
5412 5510 * NOT_RESPONDING/ADDED only now
5413 5511 */
5414 5512 if (mptsas_handle_event_sync(args) == DDI_SUCCESS) {
5415 5513 /*
5416 5514 			 * Do not block the main process;
5417 5515 			 * let the taskq resolve the ack action
5418 5516 			 * and send the ack from the taskq thread.
5419 5517 */
5420 5518 NDBG20(("send mptsas_handle_event_sync success"));
5421 5519 }
5422 5520
5423 5521 if (mpt->m_in_reset) {
5424 5522 NDBG20(("dropping event received during reset"));
5425 5523 return;
5426 5524 }
5427 5525
5428 5526 if ((ddi_taskq_dispatch(mpt->m_event_taskq, mptsas_handle_event,
5429 5527 (void *)args, DDI_NOSLEEP)) != DDI_SUCCESS) {
5430 5528 			mptsas_log(mpt, CE_WARN, "No memory available "
5431 5529 "for dispatch taskq");
5432 5530 /*
5433 5531 * Return the reply frame to the free queue.
5434 5532 */
5435 5533 ddi_put32(mpt->m_acc_free_queue_hdl,
5436 5534 &((uint32_t *)(void *)
5437 5535 mpt->m_free_queue)[mpt->m_free_index], reply_addr);
5438 5536 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
5439 5537 DDI_DMA_SYNC_FORDEV);
5440 5538 if (++mpt->m_free_index == mpt->m_free_queue_depth) {
5441 5539 mpt->m_free_index = 0;
5442 5540 }
5443 5541
5444 5542 ddi_put32(mpt->m_datap,
5445 5543 &mpt->m_reg->ReplyFreeHostIndex, mpt->m_free_index);
5446 5544 }
5447 5545 return;
5448 5546 case MPI2_FUNCTION_DIAG_BUFFER_POST:
5449 5547 /*
5450 5548 * If SMID is 0, this implies that the reply is due to a
5451 5549 * release function with a status that the buffer has been
5452 5550 * released. Set the buffer flags accordingly.
5453 5551 */
5454 5552 if (SMID == 0) {
5455 5553 iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
5456 5554 &reply->IOCStatus);
5457 5555 buffer_type = ddi_get8(mpt->m_acc_reply_frame_hdl,
5458 5556 &(((pMpi2DiagBufferPostReply_t)reply)->BufferType));
5459 5557 if (iocstatus == MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED) {
5460 5558 pBuffer =
5461 5559 &mpt->m_fw_diag_buffer_list[buffer_type];
5462 5560 pBuffer->valid_data = TRUE;
5463 5561 pBuffer->owned_by_firmware = FALSE;
5464 5562 pBuffer->immediate = FALSE;
5465 5563 }
5466 5564 } else {
5467 5565 /*
5468 5566 * Normal handling of diag post reply with SMID.
5469 5567 */
5470 5568 cmd = slots->m_slot[SMID];
5471 5569
5472 5570 /*
5473 5571 * print warning and return if the slot is empty
5474 5572 */
5475 5573 if (cmd == NULL) {
5476 - mptsas_log(mpt, CE_WARN, "?NULL command for "
5574 + mptsas_log(mpt, CE_WARN, "NULL command for "
5477 5575 "address reply in slot %d", SMID);
5478 5576 return;
5479 5577 }
5480 5578 cmd->cmd_rfm = reply_addr;
5481 5579 cmd->cmd_flags |= CFLAG_FINISHED;
5482 5580 cv_broadcast(&mpt->m_fw_diag_cv);
5483 5581 }
5484 5582 return;
5485 5583 default:
5486 5584 mptsas_log(mpt, CE_WARN, "Unknown function 0x%x ", function);
5487 5585 break;
5488 5586 }
5489 5587
5490 5588 /*
5491 5589 * Return the reply frame to the free queue.
5492 5590 */
5493 5591 ddi_put32(mpt->m_acc_free_queue_hdl,
5494 5592 &((uint32_t *)(void *)mpt->m_free_queue)[mpt->m_free_index],
5495 5593 reply_addr);
5496 5594 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
5497 5595 DDI_DMA_SYNC_FORDEV);
5498 5596 if (++mpt->m_free_index == mpt->m_free_queue_depth) {
5499 5597 mpt->m_free_index = 0;
5500 5598 }
5501 5599 ddi_put32(mpt->m_datap, &mpt->m_reg->ReplyFreeHostIndex,
5502 5600 mpt->m_free_index);
5503 5601
5504 5602 if (cmd->cmd_flags & CFLAG_FW_CMD)
5505 5603 return;
5506 5604
5507 5605 if (cmd->cmd_flags & CFLAG_RETRY) {
5508 5606 /*
5509 5607 * The target returned QFULL or busy, do not add this
5510 5608 * pkt to the doneq since the hba will retry
5511 5609 * this cmd.
5512 5610 *
5513 5611 * The pkt has already been resubmitted in
5514 5612 * mptsas_handle_qfull() or in mptsas_check_scsi_io_error().
5515 5613 * Remove this cmd_flag here.
5516 5614 */
5517 5615 cmd->cmd_flags &= ~CFLAG_RETRY;
5518 5616 } else {
5519 5617 mptsas_doneq_add(mpt, cmd);
5520 5618 }
5521 5619 }
5522 5620
5523 5621 #ifdef MPTSAS_DEBUG
5524 5622 static uint8_t mptsas_last_sense[256];
5525 5623 #endif
5526 5624
5527 5625 static void
5528 5626 mptsas_check_scsi_io_error(mptsas_t *mpt, pMpi2SCSIIOReply_t reply,
5529 5627 mptsas_cmd_t *cmd)
5530 5628 {
5531 5629 uint8_t scsi_status, scsi_state;
5532 5630 uint16_t ioc_status, cmd_rqs_len;
5533 5631 uint32_t xferred, sensecount, responsedata, loginfo = 0;
5534 5632 struct scsi_pkt *pkt;
5535 5633 struct scsi_arq_status *arqstat;
5536 5634 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
5537 5635 uint8_t *sensedata = NULL;
5538 5636 uint64_t sas_wwn;
5539 5637 uint8_t phy;
5540 5638 char wwn_str[MPTSAS_WWN_STRLEN];
5541 5639
5542 5640 scsi_status = ddi_get8(mpt->m_acc_reply_frame_hdl, &reply->SCSIStatus);
5543 5641 ioc_status = ddi_get16(mpt->m_acc_reply_frame_hdl, &reply->IOCStatus);
5544 5642 scsi_state = ddi_get8(mpt->m_acc_reply_frame_hdl, &reply->SCSIState);
5545 5643 xferred = ddi_get32(mpt->m_acc_reply_frame_hdl, &reply->TransferCount);
5546 5644 sensecount = ddi_get32(mpt->m_acc_reply_frame_hdl, &reply->SenseCount);
5547 5645 responsedata = ddi_get32(mpt->m_acc_reply_frame_hdl,
5548 5646 &reply->ResponseInfo);
5549 5647
5550 5648 if (ioc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) {
5551 5649 sas_wwn = ptgt->m_addr.mta_wwn;
5552 5650 phy = ptgt->m_phynum;
5553 5651 if (sas_wwn == 0) {
5554 5652 (void) sprintf(wwn_str, "p%x", phy);
5555 5653 } else {
5556 5654 (void) sprintf(wwn_str, "w%016"PRIx64, sas_wwn);
5557 5655 }
5558 5656 loginfo = ddi_get32(mpt->m_acc_reply_frame_hdl,
5559 5657 &reply->IOCLogInfo);
5560 5658 mptsas_log(mpt, CE_NOTE,
5561 - "?Log info 0x%x received for target %d %s.\n"
5562 - "\tscsi_status=0x%x, ioc_status=0x%x, scsi_state=0x%x",
5659 + "log info 0x%x received for target %d %s, "
5660 + "scsi_status=0x%x, ioc_status=0x%x, scsi_state=0x%x",
5563 5661 loginfo, Tgt(cmd), wwn_str, scsi_status, ioc_status,
5564 5662 scsi_state);
5565 5663 }
5566 5664
5567 5665 NDBG31(("\t\tscsi_status=0x%x, ioc_status=0x%x, scsi_state=0x%x",
5568 5666 scsi_status, ioc_status, scsi_state));
5569 5667
5570 5668 pkt = CMD2PKT(cmd);
5669 + ASSERT(pkt->pkt_start != 0);
5670 + pkt->pkt_stop = gethrtime();
5571 5671 *(pkt->pkt_scbp) = scsi_status;
5572 5672
5573 5673 if (loginfo == 0x31170000) {
5574 5674 /*
5575 5675 * if loginfo PL_LOGINFO_CODE_IO_DEVICE_MISSING_DELAY_RETRY
5576 5676 * 0x31170000 comes, that means the device missing delay
5577 5677 		 * is in progress; the command needs to be retried later.
5578 5678 */
5579 5679 *(pkt->pkt_scbp) = STATUS_BUSY;
5580 5680 return;
5581 5681 }
5582 5682
5583 5683 if ((scsi_state & MPI2_SCSI_STATE_NO_SCSI_STATUS) &&
5584 5684 ((ioc_status & MPI2_IOCSTATUS_MASK) ==
5585 5685 MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE)) {
5586 5686 pkt->pkt_reason = CMD_INCOMPLETE;
5587 5687 pkt->pkt_state |= STATE_GOT_BUS;
5588 5688 if (ptgt->m_reset_delay == 0) {
5589 5689 mptsas_set_throttle(mpt, ptgt,
5590 5690 DRAIN_THROTTLE);
5591 5691 }
5592 5692 return;
5593 5693 }
5594 5694
5595 5695 if (scsi_state & MPI2_SCSI_STATE_RESPONSE_INFO_VALID) {
5596 5696 responsedata &= 0x000000FF;
5597 5697 if (responsedata & MPTSAS_SCSI_RESPONSE_CODE_TLR_OFF) {
5598 - mptsas_log(mpt, CE_NOTE, "Do not support the TLR\n");
5698 + mptsas_log(mpt, CE_NOTE, "TLR not supported");
5599 5699 pkt->pkt_reason = CMD_TLR_OFF;
5600 5700 return;
5601 5701 }
5602 5702 }
5603 5703
5604 5704
5605 5705 switch (scsi_status) {
5606 5706 case MPI2_SCSI_STATUS_CHECK_CONDITION:
5607 5707 pkt->pkt_resid = (cmd->cmd_dmacount - xferred);
5608 5708 arqstat = (void*)(pkt->pkt_scbp);
5609 5709 arqstat->sts_rqpkt_status = *((struct scsi_status *)
5610 5710 (pkt->pkt_scbp));
5611 5711 pkt->pkt_state |= (STATE_GOT_BUS | STATE_GOT_TARGET |
5612 5712 STATE_SENT_CMD | STATE_GOT_STATUS | STATE_ARQ_DONE);
5613 5713 if (cmd->cmd_flags & CFLAG_XARQ) {
5614 5714 pkt->pkt_state |= STATE_XARQ_DONE;
5615 5715 }
5616 5716 if (pkt->pkt_resid != cmd->cmd_dmacount) {
5617 5717 pkt->pkt_state |= STATE_XFERRED_DATA;
5618 5718 }
5619 5719 arqstat->sts_rqpkt_reason = pkt->pkt_reason;
5620 5720 arqstat->sts_rqpkt_state = pkt->pkt_state;
5621 5721 arqstat->sts_rqpkt_state |= STATE_XFERRED_DATA;
5622 5722 arqstat->sts_rqpkt_statistics = pkt->pkt_statistics;
5623 5723 sensedata = (uint8_t *)&arqstat->sts_sensedata;
5624 5724 cmd_rqs_len = cmd->cmd_extrqslen ?
5625 5725 cmd->cmd_extrqslen : cmd->cmd_rqslen;
5626 5726 (void) ddi_dma_sync(mpt->m_dma_req_sense_hdl, 0, 0,
5627 5727 DDI_DMA_SYNC_FORKERNEL);
5628 5728 #ifdef MPTSAS_DEBUG
5629 5729 bcopy(cmd->cmd_arq_buf, mptsas_last_sense,
5630 5730 ((cmd_rqs_len >= sizeof (mptsas_last_sense)) ?
5631 5731 sizeof (mptsas_last_sense):cmd_rqs_len));
5632 5732 #endif
5633 5733 bcopy((uchar_t *)cmd->cmd_arq_buf, sensedata,
5634 5734 ((cmd_rqs_len >= sensecount) ? sensecount :
5635 5735 cmd_rqs_len));
5636 5736 arqstat->sts_rqpkt_resid = (cmd_rqs_len - sensecount);
5637 5737 cmd->cmd_flags |= CFLAG_CMDARQ;
5638 5738 /*
5639 5739 * Set proper status for pkt if autosense was valid
5640 5740 */
5641 5741 if (scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID) {
5642 5742 struct scsi_status zero_status = { 0 };
5643 5743 arqstat->sts_rqpkt_status = zero_status;
5644 5744 }
5645 5745
5646 5746 /*
5647 5747 * ASC=0x47 is parity error
5648 5748 * ASC=0x48 is initiator detected error received
5649 5749 */
5650 5750 if ((scsi_sense_key(sensedata) == KEY_ABORTED_COMMAND) &&
5651 5751 ((scsi_sense_asc(sensedata) == 0x47) ||
5652 5752 (scsi_sense_asc(sensedata) == 0x48))) {
5653 5753 mptsas_log(mpt, CE_NOTE, "Aborted_command!");
5654 5754 }
5655 5755
5656 5756 /*
5657 5757 * ASC/ASCQ=0x3F/0x0E means report_luns data changed
5658 5758 * ASC/ASCQ=0x25/0x00 means invalid lun
5659 5759 */
5660 5760 if (((scsi_sense_key(sensedata) == KEY_UNIT_ATTENTION) &&
5661 5761 (scsi_sense_asc(sensedata) == 0x3F) &&
5662 5762 (scsi_sense_ascq(sensedata) == 0x0E)) ||
5663 5763 ((scsi_sense_key(sensedata) == KEY_ILLEGAL_REQUEST) &&
5664 5764 (scsi_sense_asc(sensedata) == 0x25) &&
5665 5765 (scsi_sense_ascq(sensedata) == 0x00))) {
5666 5766 mptsas_topo_change_list_t *topo_node = NULL;
5667 5767
5668 5768 topo_node = kmem_zalloc(
5669 5769 sizeof (mptsas_topo_change_list_t),
5670 5770 KM_NOSLEEP);
5671 5771 if (topo_node == NULL) {
5672 5772 				mptsas_log(mpt, CE_NOTE, "No memory "
5673 5773 				    "resource to handle SAS dynamic "
5674 - "reconfigure.\n");
5774 + "reconfigure");
5675 5775 break;
5676 5776 }
5677 5777 topo_node->mpt = mpt;
5678 5778 topo_node->event = MPTSAS_DR_EVENT_RECONFIG_TARGET;
5679 5779 topo_node->un.phymask = ptgt->m_addr.mta_phymask;
5680 5780 topo_node->devhdl = ptgt->m_devhdl;
5681 5781 topo_node->object = (void *)ptgt;
5682 5782 topo_node->flags = MPTSAS_TOPO_FLAG_LUN_ASSOCIATED;
5683 5783
5684 5784 if ((ddi_taskq_dispatch(mpt->m_dr_taskq,
5685 5785 mptsas_handle_dr,
5686 5786 (void *)topo_node,
5687 5787 DDI_NOSLEEP)) != DDI_SUCCESS) {
5688 5788 kmem_free(topo_node,
5689 5789 sizeof (mptsas_topo_change_list_t));
5690 5790 				mptsas_log(mpt, CE_NOTE, "mptsas start taskq "
5691 5791 				    "to handle SAS dynamic reconfigure "
5692 - "failed. \n");
5792 + "failed");
5693 5793 }
5694 5794 }
5695 5795 break;
5696 5796 case MPI2_SCSI_STATUS_GOOD:
5697 5797 switch (ioc_status & MPI2_IOCSTATUS_MASK) {
5698 5798 case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
5699 5799 pkt->pkt_reason = CMD_DEV_GONE;
5700 5800 pkt->pkt_state |= STATE_GOT_BUS;
5701 5801 if (ptgt->m_reset_delay == 0) {
5702 5802 mptsas_set_throttle(mpt, ptgt, DRAIN_THROTTLE);
5703 5803 }
5704 5804 NDBG31(("lost disk for target%d, command:%x",
5705 5805 Tgt(cmd), pkt->pkt_cdbp[0]));
5706 5806 break;
5707 5807 case MPI2_IOCSTATUS_SCSI_DATA_OVERRUN:
5708 5808 NDBG31(("data overrun: xferred=%d", xferred));
5709 5809 NDBG31(("dmacount=%d", cmd->cmd_dmacount));
5710 5810 pkt->pkt_reason = CMD_DATA_OVR;
5711 5811 pkt->pkt_state |= (STATE_GOT_BUS | STATE_GOT_TARGET
5712 5812 | STATE_SENT_CMD | STATE_GOT_STATUS
5713 5813 | STATE_XFERRED_DATA);
5714 5814 pkt->pkt_resid = 0;
5715 5815 break;
5716 5816 case MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
5717 5817 case MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN:
5718 5818 NDBG31(("data underrun: xferred=%d", xferred));
5719 5819 NDBG31(("dmacount=%d", cmd->cmd_dmacount));
5720 5820 pkt->pkt_state |= (STATE_GOT_BUS | STATE_GOT_TARGET
5721 5821 | STATE_SENT_CMD | STATE_GOT_STATUS);
5722 5822 pkt->pkt_resid = (cmd->cmd_dmacount - xferred);
5723 5823 if (pkt->pkt_resid != cmd->cmd_dmacount) {
5724 5824 pkt->pkt_state |= STATE_XFERRED_DATA;
5725 5825 }
5726 5826 break;
5727 5827 case MPI2_IOCSTATUS_SCSI_TASK_TERMINATED:
5728 5828 if (cmd->cmd_active_expiration <= gethrtime()) {
5729 5829 /*
5730 5830 * When timeout requested, propagate
5731 5831 * proper reason and statistics to
5732 5832 * target drivers.
5733 5833 */
5734 5834 mptsas_set_pkt_reason(mpt, cmd, CMD_TIMEOUT,
5735 5835 STAT_BUS_RESET | STAT_TIMEOUT);
5736 5836 } else {
5737 5837 mptsas_set_pkt_reason(mpt, cmd, CMD_RESET,
5738 5838 STAT_BUS_RESET);
5739 5839 }
5740 5840 break;
5741 5841 case MPI2_IOCSTATUS_SCSI_IOC_TERMINATED:
5742 5842 case MPI2_IOCSTATUS_SCSI_EXT_TERMINATED:
5743 5843 mptsas_set_pkt_reason(mpt,
5744 5844 cmd, CMD_RESET, STAT_DEV_RESET);
5745 5845 break;
5746 5846 case MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR:
5747 5847 case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR:
5748 5848 pkt->pkt_state |= (STATE_GOT_BUS | STATE_GOT_TARGET);
5749 5849 mptsas_set_pkt_reason(mpt,
5750 5850 cmd, CMD_TERMINATED, STAT_TERMINATED);
5751 5851 break;
5752 5852 case MPI2_IOCSTATUS_INSUFFICIENT_RESOURCES:
5753 5853 case MPI2_IOCSTATUS_BUSY:
5754 5854 /*
5755 5855 * set throttles to drain
5756 5856 */
5757 5857 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
5758 5858 ptgt = refhash_next(mpt->m_targets, ptgt)) {
5759 5859 mptsas_set_throttle(mpt, ptgt, DRAIN_THROTTLE);
5760 5860 }
5761 5861
5762 5862 /*
5763 5863 * retry command
5764 5864 */
5765 5865 cmd->cmd_flags |= CFLAG_RETRY;
5766 5866 cmd->cmd_pkt_flags |= FLAG_HEAD;
5767 5867
5768 5868 (void) mptsas_accept_pkt(mpt, cmd);
5769 5869 break;
5770 5870 default:
5771 5871 mptsas_log(mpt, CE_WARN,
5772 - "unknown ioc_status = %x\n", ioc_status);
5872 + "unknown ioc_status = %x", ioc_status);
5773 5873 mptsas_log(mpt, CE_CONT, "scsi_state = %x, transfer "
5774 5874 "count = %x, scsi_status = %x", scsi_state,
5775 5875 xferred, scsi_status);
5776 5876 break;
5777 5877 }
5778 5878 break;
5779 5879 case MPI2_SCSI_STATUS_TASK_SET_FULL:
5780 5880 mptsas_handle_qfull(mpt, cmd);
5781 5881 break;
5782 5882 case MPI2_SCSI_STATUS_BUSY:
5783 5883 NDBG31(("scsi_status busy received"));
5784 5884 break;
5785 5885 case MPI2_SCSI_STATUS_RESERVATION_CONFLICT:
5786 5886 NDBG31(("scsi_status reservation conflict received"));
5787 5887 break;
5788 5888 default:
5789 - mptsas_log(mpt, CE_WARN, "scsi_status=%x, ioc_status=%x\n",
5889 + mptsas_log(mpt, CE_WARN, "scsi_status=%x, ioc_status=%x",
5790 5890 scsi_status, ioc_status);
5791 5891 mptsas_log(mpt, CE_WARN,
5792 - "mptsas_process_intr: invalid scsi status\n");
5892 + "mptsas_process_intr: invalid scsi status");
5793 5893 break;
5794 5894 }
5795 5895 }
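The CHECK CONDITION branch above keys its dynamic-reconfiguration handling off the decoded sense data. The sketch below performs the same key/ASC/ASCQ tests on a raw fixed-format sense buffer; the driver itself uses the SCSA helpers scsi_sense_key()/scsi_sense_asc()/scsi_sense_ascq(), and the sample buffer here is made up for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative decode of fixed-format SCSI sense data, mirroring the
 * checks made above.  For fixed-format sense: the key is the low
 * nibble of byte 2, ASC is byte 12, ASCQ is byte 13.
 */
#define KEY_UNIT_ATTENTION	0x6
#define KEY_ILLEGAL_REQUEST	0x5

int
main(void)
{
	/*
	 * Sample sense buffer: UNIT ATTENTION, ASC/ASCQ 0x3F/0x0E
	 * ("reported LUNs data has changed").
	 */
	uint8_t sense[18] = { 0x70, 0, 0x06, 0, 0, 0, 0, 10,
	    0, 0, 0, 0, 0x3F, 0x0E };

	uint8_t key = sense[2] & 0x0F;
	uint8_t asc = sense[12];
	uint8_t ascq = sense[13];

	if ((key == KEY_UNIT_ATTENTION && asc == 0x3F && ascq == 0x0E) ||
	    (key == KEY_ILLEGAL_REQUEST && asc == 0x25 && ascq == 0x00))
		printf("LUN configuration changed: trigger reconfigure\n");
	else
		printf("no reconfigure needed\n");
	return (0);
}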
5796 5896
5797 5897 static void
5798 5898 mptsas_check_task_mgt(mptsas_t *mpt, pMpi2SCSIManagementReply_t reply,
5799 5899 mptsas_cmd_t *cmd)
5800 5900 {
5801 5901 uint8_t task_type;
5802 5902 uint16_t ioc_status;
5803 5903 uint32_t log_info;
5804 5904 uint16_t dev_handle;
5805 5905 struct scsi_pkt *pkt = CMD2PKT(cmd);
5806 5906
5807 5907 task_type = ddi_get8(mpt->m_acc_reply_frame_hdl, &reply->TaskType);
5808 5908 ioc_status = ddi_get16(mpt->m_acc_reply_frame_hdl, &reply->IOCStatus);
5809 5909 log_info = ddi_get32(mpt->m_acc_reply_frame_hdl, &reply->IOCLogInfo);
5810 5910 dev_handle = ddi_get16(mpt->m_acc_reply_frame_hdl, &reply->DevHandle);
5811 5911
5812 5912 if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
5813 5913 mptsas_log(mpt, CE_WARN, "mptsas_check_task_mgt: Task 0x%x "
5814 - "failed. IOCStatus=0x%x IOCLogInfo=0x%x target=%d\n",
5914 + "failed. IOCStatus=0x%x IOCLogInfo=0x%x target=%d",
5815 5915 task_type, ioc_status, log_info, dev_handle);
5816 5916 pkt->pkt_reason = CMD_INCOMPLETE;
5817 5917 return;
5818 5918 }
5819 5919
5820 5920 switch (task_type) {
5821 5921 case MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK:
5822 5922 case MPI2_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET:
5823 5923 case MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK:
5824 5924 case MPI2_SCSITASKMGMT_TASKTYPE_CLR_ACA:
5825 5925 case MPI2_SCSITASKMGMT_TASKTYPE_QRY_TASK_SET:
5826 5926 case MPI2_SCSITASKMGMT_TASKTYPE_QRY_UNIT_ATTENTION:
5827 5927 break;
5828 5928 case MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
5829 5929 case MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET:
5830 5930 case MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET:
5831 5931 /*
5832 5932 * Check for invalid DevHandle of 0 in case application
5833 5933 * sends bad command. DevHandle of 0 could cause problems.
5834 5934 */
5835 5935 if (dev_handle == 0) {
5836 - mptsas_log(mpt, CE_WARN, "!Can't flush target with"
5936 + mptsas_log(mpt, CE_WARN, "Can't flush target with"
5837 5937 " DevHandle of 0.");
5838 5938 } else {
5839 5939 mptsas_flush_target(mpt, dev_handle, Lun(cmd),
5840 5940 task_type);
5841 5941 }
5842 5942 break;
5843 5943 default:
5844 5944 mptsas_log(mpt, CE_WARN, "Unknown task management type %d.",
5845 5945 task_type);
5846 5946 mptsas_log(mpt, CE_WARN, "ioc status = %x", ioc_status);
5847 5947 break;
5848 5948 }
5849 5949 }
5850 5950
5851 5951 static void
5852 5952 mptsas_doneq_thread(mptsas_doneq_thread_arg_t *arg)
5853 5953 {
5854 5954 mptsas_t *mpt = arg->mpt;
5855 5955 uint64_t t = arg->t;
5856 5956 mptsas_cmd_t *cmd;
5857 5957 struct scsi_pkt *pkt;
5858 5958 mptsas_doneq_thread_list_t *item = &mpt->m_doneq_thread_id[t];
5859 5959
5860 5960 mutex_enter(&item->mutex);
5861 5961 while (item->flag & MPTSAS_DONEQ_THREAD_ACTIVE) {
5862 5962 if (!item->doneq) {
5863 5963 cv_wait(&item->cv, &item->mutex);
5864 5964 }
5865 5965 pkt = NULL;
5866 5966 if ((cmd = mptsas_doneq_thread_rm(mpt, t)) != NULL) {
5867 5967 cmd->cmd_flags |= CFLAG_COMPLETED;
5868 5968 pkt = CMD2PKT(cmd);
5869 5969 }
5870 5970 mutex_exit(&item->mutex);
5871 5971 if (pkt) {
5872 5972 mptsas_pkt_comp(pkt, cmd);
5873 5973 }
5874 5974 mutex_enter(&item->mutex);
5875 5975 }
5876 5976 mutex_exit(&item->mutex);
5877 5977 mutex_enter(&mpt->m_doneq_mutex);
5878 5978 mpt->m_doneq_thread_n--;
5879 5979 cv_broadcast(&mpt->m_doneq_thread_cv);
5880 5980 mutex_exit(&mpt->m_doneq_mutex);
5881 5981 }
5882 5982
5883 5983
5884 5984 /*
5885 5985 * mpt interrupt handler.
5886 5986 */
5887 5987 static uint_t
5888 5988 mptsas_intr(caddr_t arg1, caddr_t arg2)
5889 5989 {
5890 5990 mptsas_t *mpt = (void *)arg1;
5891 5991 pMpi2ReplyDescriptorsUnion_t reply_desc_union;
5892 5992 uchar_t did_reply = FALSE;
5893 5993
5894 5994 NDBG1(("mptsas_intr: arg1 0x%p arg2 0x%p", (void *)arg1, (void *)arg2));
5895 5995
5896 5996 mutex_enter(&mpt->m_mutex);
5897 5997
5898 5998 /*
5899 5999 * If interrupts are shared by two channels then check whether this
5900 6000 	 * interrupt is genuinely for this channel by first making sure the
5901 6001 * chip is in high power state.
5902 6002 */
5903 6003 if ((mpt->m_options & MPTSAS_OPT_PM) &&
5904 6004 (mpt->m_power_level != PM_LEVEL_D0)) {
5905 6005 mutex_exit(&mpt->m_mutex);
5906 6006 return (DDI_INTR_UNCLAIMED);
5907 6007 }
5908 6008
5909 6009 /*
5910 6010 	 * If polling, this interrupt was triggered by a shared interrupt, because
5911 6011 	 * IOC interrupts are disabled during polling, so the polling routine will
5912 6012 * handle any replies. Considering this, if polling is happening,
5913 6013 * return with interrupt unclaimed.
5914 6014 */
5915 6015 if (mpt->m_polled_intr) {
5916 6016 mutex_exit(&mpt->m_mutex);
5917 6017 mptsas_log(mpt, CE_WARN, "mpt_sas: Unclaimed interrupt");
5918 6018 return (DDI_INTR_UNCLAIMED);
5919 6019 }
5920 6020
5921 6021 /*
5922 6022 * Read the istat register.
5923 6023 */
5924 6024 if ((INTPENDING(mpt)) != 0) {
5925 6025 /*
5926 6026 * read fifo until empty.
5927 6027 */
5928 6028 #ifndef __lock_lint
5929 6029 _NOTE(CONSTCOND)
5930 6030 #endif
5931 6031 while (TRUE) {
5932 6032 (void) ddi_dma_sync(mpt->m_dma_post_queue_hdl, 0, 0,
5933 6033 DDI_DMA_SYNC_FORCPU);
5934 6034 reply_desc_union = (pMpi2ReplyDescriptorsUnion_t)
5935 6035 MPTSAS_GET_NEXT_REPLY(mpt, mpt->m_post_index);
5936 6036
5937 6037 if (ddi_get32(mpt->m_acc_post_queue_hdl,
5938 6038 &reply_desc_union->Words.Low) == 0xFFFFFFFF ||
5939 6039 ddi_get32(mpt->m_acc_post_queue_hdl,
5940 6040 &reply_desc_union->Words.High) == 0xFFFFFFFF) {
5941 6041 break;
5942 6042 }
5943 6043
5944 6044 /*
5945 6045 * The reply is valid, process it according to its
5946 6046 * type. Also, set a flag for updating the reply index
5947 6047 * after they've all been processed.
5948 6048 */
5949 6049 did_reply = TRUE;
5950 6050
5951 6051 mptsas_process_intr(mpt, reply_desc_union);
5952 6052
5953 6053 /*
5954 6054 * Increment post index and roll over if needed.
5955 6055 */
5956 6056 if (++mpt->m_post_index == mpt->m_post_queue_depth) {
5957 6057 mpt->m_post_index = 0;
5958 6058 }
5959 6059 }
5960 6060
5961 6061 /*
5962 6062 * Update the global reply index if at least one reply was
5963 6063 * processed.
5964 6064 */
5965 6065 if (did_reply) {
5966 6066 ddi_put32(mpt->m_datap,
5967 6067 &mpt->m_reg->ReplyPostHostIndex, mpt->m_post_index);
5968 6068 }
5969 6069 } else {
5970 6070 mutex_exit(&mpt->m_mutex);
5971 6071 return (DDI_INTR_UNCLAIMED);
5972 6072 }
5973 6073 NDBG1(("mptsas_intr complete"));
5974 6074
5975 6075 /*
5976 6076 * If no helper threads are created, process the doneq in ISR. If
5977 6077 * helpers are created, use the doneq length as a metric to measure the
5978 6078 * load on the interrupt CPU. If it is long enough, which indicates the
5979 6079 * load is heavy, then we deliver the IO completions to the helpers.
5980 6080 * This measurement has some limitations, although it is simple and
5981 6081 * straightforward and works well for most of the cases at present.
5982 6082 */
5983 6083 if (!mpt->m_doneq_thread_n ||
5984 6084 (mpt->m_doneq_len <= mpt->m_doneq_length_threshold)) {
5985 6085 mptsas_doneq_empty(mpt);
5986 6086 } else {
5987 6087 mptsas_deliver_doneq_thread(mpt);
5988 6088 }
5989 6089
5990 6090 /*
5991 6091 	 * If there are queued cmds, start them now.
5992 6092 */
5993 6093 if (mpt->m_waitq != NULL) {
5994 6094 mptsas_restart_waitq(mpt);
5995 6095 }
5996 6096
5997 6097 mutex_exit(&mpt->m_mutex);
5998 6098 return (DDI_INTR_CLAIMED);
5999 6099 }
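
/*
 * To summarize the unclaimed-interrupt cases above: the ISR returns
 * DDI_INTR_UNCLAIMED when the chip is not at full power (PM enabled and
 * power level not D0), when polled mode is active (IOC interrupts are
 * disabled and the polling routine consumes the replies), or when no reply
 * descriptor is pending, so that other devices sharing the interrupt line
 * get a chance to claim it.
 */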
6000 6100
6001 6101 static void
6002 6102 mptsas_process_intr(mptsas_t *mpt,
6003 6103 pMpi2ReplyDescriptorsUnion_t reply_desc_union)
6004 6104 {
6005 6105 uint8_t reply_type;
6006 6106
6007 6107 ASSERT(mutex_owned(&mpt->m_mutex));
6008 6108
6009 6109 /*
6010 6110 * The reply is valid, process it according to its
6011 6111 * type. Also, set a flag for updating the reply index
6012 6112 * after they've all been processed.
6013 6113 */
6014 6114 reply_type = ddi_get8(mpt->m_acc_post_queue_hdl,
6015 6115 &reply_desc_union->Default.ReplyFlags);
6016 6116 reply_type &= MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK;
6017 6117 if (reply_type == MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS ||
6018 6118 reply_type == MPI25_RPY_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO_SUCCESS) {
6019 6119 mptsas_handle_scsi_io_success(mpt, reply_desc_union);
6020 6120 } else if (reply_type == MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY) {
6021 6121 mptsas_handle_address_reply(mpt, reply_desc_union);
6022 6122 } else {
6023 - mptsas_log(mpt, CE_WARN, "?Bad reply type %x", reply_type);
6123 + mptsas_log(mpt, CE_WARN, "bad reply type %x", reply_type);
6024 6124 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
6025 6125 }
6026 6126
6027 6127 /*
6028 6128 * Clear the reply descriptor for re-use and increment
6029 6129 * index.
6030 6130 */
6031 6131 ddi_put64(mpt->m_acc_post_queue_hdl,
6032 6132 &((uint64_t *)(void *)mpt->m_post_queue)[mpt->m_post_index],
6033 6133 0xFFFFFFFFFFFFFFFF);
6034 6134 (void) ddi_dma_sync(mpt->m_dma_post_queue_hdl, 0, 0,
6035 6135 DDI_DMA_SYNC_FORDEV);
6036 6136 }
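
/*
 * For illustration: the all-ones value written back above is the "empty
 * slot" sentinel for the reply post queue.  The poll loop in mptsas_intr()
 * stops as soon as it reads a descriptor whose Words.Low or Words.High
 * equals 0xFFFFFFFF, so clearing each consumed descriptor to
 * 0xFFFFFFFFFFFFFFFF here is what terminates the next pass over the
 * circular queue once every new reply has been handled.
 */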
6037 6137
6038 6138 /*
6039 6139 * handle qfull condition
6040 6140 */
6041 6141 static void
6042 6142 mptsas_handle_qfull(mptsas_t *mpt, mptsas_cmd_t *cmd)
6043 6143 {
6044 6144 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
6045 6145
6046 6146 if ((++cmd->cmd_qfull_retries > ptgt->m_qfull_retries) ||
6047 6147 (ptgt->m_qfull_retries == 0)) {
6048 6148 /*
6049 6149 * We have exhausted the retries on QFULL, or,
6050 6150 * the target driver has indicated that it
6051 6151 * wants to handle QFULL itself by setting
6052 6152 * qfull-retries capability to 0. In either case
6053 6153 * we want the target driver's QFULL handling
6054 6154 * to kick in. We do this by having pkt_reason
6055 6155 * as CMD_CMPLT and pkt_scbp as STATUS_QFULL.
6056 6156 */
6057 6157 mptsas_set_throttle(mpt, ptgt, DRAIN_THROTTLE);
6058 6158 } else {
6059 6159 if (ptgt->m_reset_delay == 0) {
6060 6160 ptgt->m_t_throttle =
6061 6161 max((ptgt->m_t_ncmds - 2), 0);
6062 6162 }
6063 6163
6064 6164 cmd->cmd_pkt_flags |= FLAG_HEAD;
6065 6165 cmd->cmd_flags &= ~(CFLAG_TRANFLAG);
6066 6166 cmd->cmd_flags |= CFLAG_RETRY;
6067 6167
6068 6168 (void) mptsas_accept_pkt(mpt, cmd);
6069 6169
6070 6170 /*
6071 6171 * when target gives queue full status with no commands
6072 6172 * outstanding (m_t_ncmds == 0), throttle is set to 0
6073 6173 * (HOLD_THROTTLE), and the queue full handling starts
6074 6174 * (see psarc/1994/313); if there are commands outstanding,
6075 6175 * throttle is set to (m_t_ncmds - 2)
6076 6176 */
6077 6177 if (ptgt->m_t_throttle == HOLD_THROTTLE) {
6078 6178 /*
6079 6179 * By setting the throttle to QFULL_THROTTLE, we
6080 6180 * avoid submitting new commands, and let
6081 6181 * mptsas_restart_cmd find the slots which need
6082 6182 * their throttles to be cleared.
6083 6183 */
6084 6184 mptsas_set_throttle(mpt, ptgt, QFULL_THROTTLE);
6085 6185 if (mpt->m_restart_cmd_timeid == 0) {
6086 6186 mpt->m_restart_cmd_timeid =
6087 6187 timeout(mptsas_restart_cmd, mpt,
6088 6188 ptgt->m_qfull_retry_interval);
6089 6189 }
6090 6190 }
6091 6191 }
6092 6192 }
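
/*
 * A worked example of the retry-path throttle arithmetic above:
 *
 *   m_t_ncmds == 10 when the QFULL retry is queued  ->
 *       m_t_throttle = max(10 - 2, 0) = 8
 *   m_t_ncmds <= 2 (including the m_t_ncmds == 0 case described in the
 *       PSARC/1994/313 comment)  ->  m_t_throttle = 0 (HOLD_THROTTLE),
 *       which is then promoted to QFULL_THROTTLE and mptsas_restart_cmd()
 *       is scheduled to fire after m_qfull_retry_interval.
 */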
6093 6193
6094 6194 mptsas_phymask_t
6095 6195 mptsas_physport_to_phymask(mptsas_t *mpt, uint8_t physport)
6096 6196 {
6097 6197 mptsas_phymask_t phy_mask = 0;
6098 6198 uint8_t i = 0;
6099 6199
6100 6200 NDBG20(("mptsas%d physport_to_phymask enter", mpt->m_instance));
6101 6201
6102 6202 ASSERT(mutex_owned(&mpt->m_mutex));
6103 6203
6104 6204 /*
6105 6205 * If physport is 0xFF, this is a RAID volume. Use phymask of 0.
6106 6206 */
6107 6207 if (physport == 0xFF) {
6108 6208 return (0);
6109 6209 }
6110 6210
6111 6211 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
6112 6212 if (mpt->m_phy_info[i].attached_devhdl &&
6113 6213 (mpt->m_phy_info[i].phy_mask != 0) &&
6114 6214 (mpt->m_phy_info[i].port_num == physport)) {
6115 6215 phy_mask = mpt->m_phy_info[i].phy_mask;
6116 6216 break;
6117 6217 }
6118 6218 }
6119 6219 NDBG20(("mptsas%d physport_to_phymask:physport :%x phymask :%x, ",
6120 6220 mpt->m_instance, physport, phy_mask));
6121 6221 return (phy_mask);
6122 6222 }
6123 6223
6124 6224 /*
6125 6225 * Free the device handle after the device is gone, using a passthrough request.
6126 6226 */
6127 6227 static int
6128 6228 mptsas_free_devhdl(mptsas_t *mpt, uint16_t devhdl)
6129 6229 {
6130 6230 Mpi2SasIoUnitControlRequest_t req;
6131 6231 Mpi2SasIoUnitControlReply_t rep;
6132 6232 int ret;
6133 6233
6134 6234 ASSERT(mutex_owned(&mpt->m_mutex));
6135 6235
6136 6236 /*
6137 6237 * Need to compose a SAS IO Unit Control request message
6138 6238 * and call mptsas_do_passthru() function
6139 6239 */
6140 6240 bzero(&req, sizeof (req));
6141 6241 bzero(&rep, sizeof (rep));
6142 6242
6143 6243 req.Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
6144 6244 req.Operation = MPI2_SAS_OP_REMOVE_DEVICE;
6145 6245 req.DevHandle = LE_16(devhdl);
6146 6246
6147 6247 ret = mptsas_do_passthru(mpt, (uint8_t *)&req, (uint8_t *)&rep, NULL,
6148 6248 sizeof (req), sizeof (rep), NULL, 0, NULL, 0, 60, FKIOCTL);
6149 6249 if (ret != 0) {
6150 6250 cmn_err(CE_WARN, "mptsas_free_devhdl: passthru SAS IO Unit "
6151 6251 "Control error %d", ret);
6152 6252 return (DDI_FAILURE);
6153 6253 }
6154 6254
6155 6255 /* passthrough succeeded, check the IOC status */
6156 6256 if (LE_16(rep.IOCStatus) != MPI2_IOCSTATUS_SUCCESS) {
6157 6257 cmn_err(CE_WARN, "mptsas_free_devhdl: passthru SAS IO Unit "
6158 6258 "Control IOCStatus %d", LE_16(rep.IOCStatus));
6159 6259 return (DDI_FAILURE);
6160 6260 }
6161 6261
6162 6262 return (DDI_SUCCESS);
6163 6263 }
6164 6264
6165 6265 /*
6166 6266 * We have a SATA target that has changed, which means the "bridge-port"
6167 6267 * property must be updated to reflect the SAS WWN of the new attachment point.
6168 6268 * This may change if a SATA device changes which bay, and therefore phy, it is
6169 6269 * plugged into. This SATA device may be a multipath virtual device or may be a
6170 6270 * physical device. We have to handle both cases.
6171 6271 */
6172 6272 static boolean_t
6173 6273 mptsas_update_sata_bridge(mptsas_t *mpt, dev_info_t *parent,
6174 6274 mptsas_target_t *ptgt)
6175 6275 {
6176 6276 int rval;
6177 6277 uint16_t dev_hdl;
6178 6278 uint16_t pdev_hdl;
6179 6279 uint64_t dev_sas_wwn;
6180 6280 uint8_t physport;
6181 6281 uint8_t phy_id;
6182 6282 uint32_t page_address;
6183 6283 uint16_t bay_num, enclosure, io_flags;
6184 6284 uint32_t dev_info;
6185 - char uabuf[SCSI_WWN_BUFLEN];
6285 + char uabuf[SCSI_WWN_BUFLEN];
6186 6286 dev_info_t *dip;
6187 6287 mdi_pathinfo_t *pip;
6188 6288
6189 6289 mutex_enter(&mpt->m_mutex);
6190 6290 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
6191 6291 MPI2_SAS_DEVICE_PGAD_FORM_MASK) | (uint32_t)ptgt->m_devhdl;
6192 6292 rval = mptsas_get_sas_device_page0(mpt, page_address, &dev_hdl,
6193 6293 &dev_sas_wwn, &dev_info, &physport, &phy_id, &pdev_hdl, &bay_num,
6194 6294 &enclosure, &io_flags);
6195 6295 mutex_exit(&mpt->m_mutex);
6196 6296 if (rval != DDI_SUCCESS) {
6197 6297 mptsas_log(mpt, CE_WARN, "unable to get SAS page 0 for "
6198 6298 "handle %d", page_address);
6199 6299 return (B_FALSE);
6200 6300 }
6201 6301
6202 6302 if (scsi_wwn_to_wwnstr(dev_sas_wwn, 1, uabuf) == NULL) {
6203 6303 mptsas_log(mpt, CE_WARN,
6204 6304 "mptsas unable to format SATA bridge WWN");
6205 6305 return (B_FALSE);
6206 6306 }
6207 6307
6208 6308 if (mpt->m_mpxio_enable == TRUE && (pip = mptsas_find_path_addr(parent,
6209 6309 ptgt->m_addr.mta_wwn, 0)) != NULL) {
6210 6310 if (mdi_prop_update_string(pip, SCSI_ADDR_PROP_BRIDGE_PORT,
6211 6311 uabuf) != DDI_SUCCESS) {
6212 6312 mptsas_log(mpt, CE_WARN,
6213 6313 "mptsas unable to create SCSI bridge port "
6214 6314 "property for SATA device");
6215 6315 return (B_FALSE);
6216 6316 }
6217 6317 return (B_TRUE);
6218 6318 }
6219 6319
6220 6320 if ((dip = mptsas_find_child_addr(parent, ptgt->m_addr.mta_wwn,
6221 6321 0)) != NULL) {
6222 6322 if (ndi_prop_update_string(DDI_DEV_T_NONE, dip,
6223 6323 SCSI_ADDR_PROP_BRIDGE_PORT, uabuf) != DDI_PROP_SUCCESS) {
6224 6324 mptsas_log(mpt, CE_WARN,
6225 6325 "mptsas unable to create SCSI bridge port "
6226 6326 "property for SATA device");
6227 6327 return (B_FALSE);
6228 6328 }
6229 6329 return (B_TRUE);
6230 6330 }
6231 6331
6232 6332 mptsas_log(mpt, CE_WARN, "mptsas failed to find dev_info_t or "
6233 6333 "mdi_pathinfo_t for target with WWN %016" PRIx64,
6234 6334 ptgt->m_addr.mta_wwn);
6235 6335
6236 6336 return (B_FALSE);
6237 6337 }
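
/*
 * Illustration (the value below is hypothetical, and assumes
 * scsi_wwn_to_wwnstr() returns the same "w" + 16-hex-digit unit-address
 * form used elsewhere in this file): a SATA disk that is re-seated behind
 * a different phy ends up with an updated property such as
 *
 *   bridge-port = "w500304801234abcd"
 *
 * on either its mdi_pathinfo node (mpxio enabled) or its child dev_info
 * node (mpxio disabled), whichever of the two lookups above succeeds.
 */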
6238 6338
6239 6339 static void
6240 6340 mptsas_update_phymask(mptsas_t *mpt)
6241 6341 {
6242 6342 mptsas_phymask_t mask = 0, phy_mask;
6243 6343 char *phy_mask_name;
6244 6344 uint8_t current_port;
6245 6345 int i, j;
6246 6346
6247 6347 NDBG20(("mptsas%d update phymask ", mpt->m_instance));
6248 6348
6249 6349 ASSERT(mutex_owned(&mpt->m_mutex));
6250 6350
6251 6351 (void) mptsas_get_sas_io_unit_page(mpt);
6252 6352
6253 6353 phy_mask_name = kmem_zalloc(MPTSAS_MAX_PHYS, KM_SLEEP);
6254 6354
6255 6355 for (i = 0; i < mpt->m_num_phys; i++) {
6256 6356 phy_mask = 0x00;
6257 6357
6258 6358 if (mpt->m_phy_info[i].attached_devhdl == 0)
6259 6359 continue;
6260 6360
6261 6361 bzero(phy_mask_name, MPTSAS_MAX_PHYS);
6262 6362
6263 6363 current_port = mpt->m_phy_info[i].port_num;
6264 6364
6265 6365 if ((mask & (1 << i)) != 0)
6266 6366 continue;
6267 6367
6268 6368 for (j = 0; j < mpt->m_num_phys; j++) {
6269 6369 if (mpt->m_phy_info[j].attached_devhdl &&
6270 6370 (mpt->m_phy_info[j].port_num == current_port)) {
6271 6371 phy_mask |= (1 << j);
6272 6372 }
6273 6373 }
6274 6374 mask = mask | phy_mask;
6275 6375
6276 6376 for (j = 0; j < mpt->m_num_phys; j++) {
6277 6377 if ((phy_mask >> j) & 0x01) {
6278 6378 mpt->m_phy_info[j].phy_mask = phy_mask;
6279 6379 }
6280 6380 }
6281 6381
6282 6382 (void) sprintf(phy_mask_name, "%x", phy_mask);
6283 6383
6284 6384 mutex_exit(&mpt->m_mutex);
6285 6385 /*
6286 6386 * Register an iport; if the port already exists,
6287 6387 * SCSA will do nothing and just return.
6288 6388 */
6289 6389 (void) scsi_hba_iport_register(mpt->m_dip, phy_mask_name);
6290 6390 mutex_enter(&mpt->m_mutex);
6291 6391 }
6292 6392 kmem_free(phy_mask_name, MPTSAS_MAX_PHYS);
6293 6393 NDBG20(("mptsas%d update phymask return", mpt->m_instance));
6294 6394 }
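
/*
 * A worked example of the phymask/iport mapping built above: if phys 4-7
 * form one wide port with a device attached, each of those phys gets
 *
 *   phy_mask = (1 << 4) | (1 << 5) | (1 << 6) | (1 << 7) = 0xf0
 *
 * and the iport is registered under the name "f0".  RAID volumes have no
 * physical port and use the fixed iport name "v0" instead (see the
 * MPTSAS_TOPO_FLAG_RAID_ASSOCIATED case in mptsas_handle_dr() below).
 */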
6295 6395
6296 6396 /*
6297 6397 * mptsas_handle_dr is a task handler for DR; the DR actions include:
6298 6398 * 1. Directly attached Device Added/Removed.
6299 6399 * 2. Expander Device Added/Removed.
6300 6400 * 3. Indirectly Attached Device Added/Expander.
6301 6401 * 4. LUNs of an existing device status change.
6302 6402 * 5. RAID volume created/deleted.
6303 6403 * 6. Member of RAID volume is released because of RAID deletion.
6304 6404 * 7. Physical disks are removed because of RAID creation.
6305 6405 */
6306 6406 static void
6307 6407 mptsas_handle_dr(void *args)
6308 6408 {
6309 6409 mptsas_topo_change_list_t *topo_node = NULL;
6310 6410 mptsas_topo_change_list_t *save_node = NULL;
6311 6411 mptsas_t *mpt;
6312 6412 dev_info_t *parent = NULL;
6313 6413 mptsas_phymask_t phymask = 0;
6314 6414 char *phy_mask_name;
6315 6415 uint8_t flags = 0, physport = 0xff;
6316 6416 uint8_t port_update = 0;
6317 6417 uint_t event;
6318 6418
6319 6419 topo_node = (mptsas_topo_change_list_t *)args;
6320 6420
6321 6421 mpt = topo_node->mpt;
6322 6422 event = topo_node->event;
6323 6423 flags = topo_node->flags;
6324 6424
6325 6425 phy_mask_name = kmem_zalloc(MPTSAS_MAX_PHYS, KM_SLEEP);
6326 6426
6327 6427 NDBG20(("mptsas%d handle_dr enter", mpt->m_instance));
6328 6428
6329 6429 switch (event) {
6330 6430 case MPTSAS_DR_EVENT_RECONFIG_TARGET:
6331 6431 if ((flags == MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE) ||
6332 6432 (flags == MPTSAS_TOPO_FLAG_EXPANDER_ATTACHED_DEVICE) ||
6333 6433 (flags == MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED)) {
6334 6434 /*
6335 6435 * Direct attached or expander attached device added
6336 6436 * into system or a Phys Disk that is being unhidden.
6337 6437 */
6338 6438 port_update = 1;
6339 6439 }
6340 6440 break;
6341 6441 case MPTSAS_DR_EVENT_RECONFIG_SMP:
6342 6442 /*
6343 6443 * A new expander was added into the system; it must be the
6344 6444 * head of the topo_change_list_t.
6345 6445 */
6346 6446 port_update = 1;
6347 6447 break;
6348 6448 default:
6349 6449 port_update = 0;
6350 6450 break;
6351 6451 }
6352 6452 /*
6353 6453 * Any case where port_update == 1 may cause the initiator port configuration to change
6354 6454 */
6355 6455 mutex_enter(&mpt->m_mutex);
6356 6456 if (mpt->m_port_chng && port_update) {
6357 6457 /*
6358 6458 * The mpt->m_port_chng flag indicates some PHYs of the initiator
6359 6459 * port have come online. So when an expander-added or
6360 6460 * directly-attached-device-online event comes in, force an
6361 6461 * update of the port information by issuing a SAS IO Unit Page
6362 6462 * request and updating the PHYMASKs.
6363 6463 */
6364 6464 (void) mptsas_update_phymask(mpt);
6365 6465 mpt->m_port_chng = 0;
6366 6466
6367 6467 }
6368 6468 mutex_exit(&mpt->m_mutex);
6369 6469 while (topo_node) {
6370 6470 phymask = 0;
6371 6471 if (parent == NULL) {
6372 6472 physport = topo_node->un.physport;
6373 6473 event = topo_node->event;
6374 6474 flags = topo_node->flags;
6375 6475 if (event & (MPTSAS_DR_EVENT_OFFLINE_TARGET |
6376 6476 MPTSAS_DR_EVENT_OFFLINE_SMP)) {
6377 6477 /*
6378 6478 * For all offline events, phymask is known
6379 6479 */
6380 6480 phymask = topo_node->un.phymask;
6381 6481 goto find_parent;
6382 6482 }
6383 6483 if (event & MPTSAS_TOPO_FLAG_REMOVE_HANDLE) {
6384 6484 goto handle_topo_change;
6385 6485 }
6386 6486 if (flags & MPTSAS_TOPO_FLAG_LUN_ASSOCIATED) {
6387 6487 phymask = topo_node->un.phymask;
6388 6488 goto find_parent;
6389 6489 }
6390 6490
6391 6491 if ((flags ==
6392 6492 MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED) &&
6393 6493 (event == MPTSAS_DR_EVENT_RECONFIG_TARGET)) {
6394 6494 /*
6395 6495 * There is no field in the IR_CONFIG_CHANGE
6396 6496 * event that indicates physport/phynum, so get
6397 6497 * the parent after the SAS Device Page0 request.
6398 6498 */
6399 6499 goto handle_topo_change;
6400 6500 }
6401 6501
6402 6502 mutex_enter(&mpt->m_mutex);
6403 6503 if (flags == MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE) {
6404 6504 /*
6405 6505 * If a direct attached device was added or a
6406 6506 * phys disk is being unhidden, the physport
6407 6507 * argument actually is the PHY#, so we have to
6408 6508 * get the phymask according to the PHY#.
6409 6509 */
6410 6510 physport = mpt->m_phy_info[physport].port_num;
6411 6511 }
6412 6512
6413 6513 /*
6414 6514 * Translate physport to phymask so that we can search
6415 6515 * parent dip.
6416 6516 */
6417 6517 phymask = mptsas_physport_to_phymask(mpt,
6418 6518 physport);
6419 6519 mutex_exit(&mpt->m_mutex);
6420 6520
6421 6521 find_parent:
6422 6522 bzero(phy_mask_name, MPTSAS_MAX_PHYS);
6423 6523 /*
6424 6524 * For RAID topology change node, write the iport name
6425 6525 * as v0.
6426 6526 */
6427 6527 if (flags & MPTSAS_TOPO_FLAG_RAID_ASSOCIATED) {
6428 6528 (void) sprintf(phy_mask_name, "v0");
6429 6529 } else {
6430 6530 /*
6431 6531 * phymask can be 0 if the drive has been
6432 6532 * pulled by the time an add event is
6433 6533 * processed. If phymask is 0, just skip this
6434 6534 * event and continue.
6435 6535 */
6436 6536 if (phymask == 0) {
6437 6537 mutex_enter(&mpt->m_mutex);
6438 6538 save_node = topo_node;
6439 6539 topo_node = topo_node->next;
6440 6540 ASSERT(save_node);
6441 6541 kmem_free(save_node,
6442 6542 sizeof (mptsas_topo_change_list_t));
6443 6543 mutex_exit(&mpt->m_mutex);
6444 6544
6445 6545 parent = NULL;
6446 6546 continue;
6447 6547 }
6448 6548 (void) sprintf(phy_mask_name, "%x", phymask);
6449 6549 }
6450 6550 parent = scsi_hba_iport_find(mpt->m_dip,
6451 6551 phy_mask_name);
6452 6552 if (parent == NULL) {
6453 6553 mptsas_log(mpt, CE_WARN, "Failed to find an "
6454 6554 "iport, should not happen!");
6455 6555 goto out;
6456 6556 }
6457 6557
6458 6558 }
6459 6559 ASSERT(parent);
6460 6560 handle_topo_change:
6461 6561
6462 6562 mutex_enter(&mpt->m_mutex);
6463 6563 /*
6464 6564 * If HBA is being reset, don't perform operations depending
6465 6565 * on the IOC. We must free the topo list, however.
6466 6566 */
6467 - if (!mpt->m_in_reset)
6567 + if (!mpt->m_in_reset) {
6468 6568 mptsas_handle_topo_change(topo_node, parent);
6469 - else
6569 + } else {
6470 6570 NDBG20(("skipping topo change received during reset"));
6571 + }
6471 6572 save_node = topo_node;
6472 6573 topo_node = topo_node->next;
6473 6574 ASSERT(save_node);
6474 6575 kmem_free(save_node, sizeof (mptsas_topo_change_list_t));
6475 6576 mutex_exit(&mpt->m_mutex);
6476 6577
6477 6578 if ((flags == MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE) ||
6478 6579 (flags == MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED) ||
6479 6580 (flags == MPTSAS_TOPO_FLAG_RAID_ASSOCIATED)) {
6480 6581 /*
6481 6582 * If a direct attached device is associated, make sure
6482 6583 * to reset the parent before starting the next one; all
6483 6584 * devices associated with an expander share the same
6484 6585 * parent. Also, reset the parent if this is for RAID.
6485 6586 */
6486 6587 parent = NULL;
6487 6588 }
6488 6589 }
6489 6590 out:
6490 6591 kmem_free(phy_mask_name, MPTSAS_MAX_PHYS);
6491 6592 }
6492 6593
6493 6594 static void
6494 6595 mptsas_handle_topo_change(mptsas_topo_change_list_t *topo_node,
6495 6596 dev_info_t *parent)
6496 6597 {
6497 6598 mptsas_target_t *ptgt = NULL;
6498 6599 mptsas_smp_t *psmp = NULL;
6499 6600 mptsas_t *mpt = (void *)topo_node->mpt;
6500 6601 uint16_t devhdl;
6501 6602 uint16_t attached_devhdl;
6502 6603 uint64_t sas_wwn = 0;
6503 6604 int rval = 0;
6504 6605 uint32_t page_address;
6505 6606 uint8_t phy, flags;
6506 6607 char *addr = NULL;
6507 6608 dev_info_t *lundip;
6508 6609 int circ = 0, circ1 = 0;
6509 6610 char attached_wwnstr[MPTSAS_WWN_STRLEN];
6510 6611
6511 6612 NDBG20(("mptsas%d handle_topo_change enter, devhdl 0x%x,"
6512 6613 "event 0x%x, flags 0x%x", mpt->m_instance, topo_node->devhdl,
6513 6614 topo_node->event, topo_node->flags));
6514 6615
6515 6616 ASSERT(mutex_owned(&mpt->m_mutex));
6516 6617
6517 6618 switch (topo_node->event) {
6518 6619 case MPTSAS_DR_EVENT_RECONFIG_TARGET:
6519 6620 {
6520 6621 char *phy_mask_name;
6521 6622 mptsas_phymask_t phymask = 0;
6522 6623
6523 6624 if (topo_node->flags == MPTSAS_TOPO_FLAG_RAID_ASSOCIATED) {
6524 6625 /*
6525 6626 * Get latest RAID info.
6526 6627 */
6527 6628 (void) mptsas_get_raid_info(mpt);
6528 6629 ptgt = refhash_linear_search(mpt->m_targets,
6529 6630 mptsas_target_eval_devhdl, &topo_node->devhdl);
6530 6631 if (ptgt == NULL)
6531 6632 break;
6532 6633 } else {
6533 6634 ptgt = (void *)topo_node->object;
6534 6635 }
6535 6636
6536 6637 if (ptgt == NULL) {
6537 6638 /*
6538 6639 * If a Phys Disk was deleted, RAID info needs to be
6539 6640 * updated to reflect the new topology.
6540 6641 */
6541 6642 (void) mptsas_get_raid_info(mpt);
6542 6643
6543 6644 /*
6544 6645 * Get SAS device page 0 by DevHandle to check whether an
6545 6646 * SSP/SATA end device exists.
6546 6647 */
6547 6648 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
6548 6649 MPI2_SAS_DEVICE_PGAD_FORM_MASK) |
6549 6650 topo_node->devhdl;
6550 6651
6551 6652 rval = mptsas_get_target_device_info(mpt, page_address,
6552 6653 &devhdl, &ptgt);
6553 6654 if (rval == DEV_INFO_WRONG_DEVICE_TYPE) {
6554 6655 mptsas_log(mpt, CE_NOTE,
6555 6656 "mptsas_handle_topo_change: target %d is "
6556 - "not a SAS/SATA device. \n",
6657 + "not a SAS/SATA device",
6557 6658 topo_node->devhdl);
6558 6659 } else if (rval == DEV_INFO_FAIL_ALLOC) {
6559 6660 mptsas_log(mpt, CE_NOTE,
6560 6661 "mptsas_handle_topo_change: could not "
6561 - "allocate memory. \n");
6662 + "allocate memory");
6562 6663 } else if (rval == DEV_INFO_FAIL_GUID) {
6563 6664 mptsas_log(mpt, CE_NOTE,
6564 6665 "mptsas_handle_topo_change: could not "
6565 - "get SATA GUID for target %d. \n",
6666 + "get SATA GUID for target %d",
6566 6667 topo_node->devhdl);
6567 6668 }
6568 6669 /*
6569 6670 * If rval is DEV_INFO_PHYS_DISK or indicates failure
6570 6671 * then there is nothing else to do, just leave.
6571 6672 */
6572 6673 if (rval != DEV_INFO_SUCCESS) {
6573 6674 return;
6574 6675 }
6575 6676 }
6576 6677
6577 6678 ASSERT(ptgt->m_devhdl == topo_node->devhdl);
6578 6679
6579 6680 mutex_exit(&mpt->m_mutex);
6580 6681 flags = topo_node->flags;
6581 6682
6582 6683 if (flags == MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED) {
6583 6684 phymask = ptgt->m_addr.mta_phymask;
6584 6685 phy_mask_name = kmem_zalloc(MPTSAS_MAX_PHYS, KM_SLEEP);
6585 6686 (void) sprintf(phy_mask_name, "%x", phymask);
6586 6687 parent = scsi_hba_iport_find(mpt->m_dip,
6587 6688 phy_mask_name);
6588 6689 kmem_free(phy_mask_name, MPTSAS_MAX_PHYS);
6589 6690 if (parent == NULL) {
6590 6691 mptsas_log(mpt, CE_WARN, "Failed to find a "
6591 6692 "iport for PD, should not happen!");
6592 6693 mutex_enter(&mpt->m_mutex);
6593 6694 break;
6594 6695 }
6595 6696 }
6596 6697
6597 6698 if (flags == MPTSAS_TOPO_FLAG_RAID_ASSOCIATED) {
6598 6699 ndi_devi_enter(parent, &circ1);
6599 6700 (void) mptsas_config_raid(parent, topo_node->devhdl,
6600 6701 &lundip);
6601 6702 ndi_devi_exit(parent, circ1);
6602 6703 } else {
6603 6704 /*
6604 6705 * hold nexus for bus configure
6605 6706 */
6606 6707 ndi_devi_enter(scsi_vhci_dip, &circ);
6607 6708 ndi_devi_enter(parent, &circ1);
6608 6709 rval = mptsas_config_target(parent, ptgt);
6609 6710 /*
6610 6711 * release nexus for bus configure
6611 6712 */
6612 6713 ndi_devi_exit(parent, circ1);
6613 6714 ndi_devi_exit(scsi_vhci_dip, circ);
6614 6715
6615 6716 /*
6616 6717 * If this is a SATA device, make sure that the
6617 6718 * bridge-port (the SAS WWN that the SATA device is
6618 6719 * plugged into) is updated. This may change if a SATA
6619 6720 * device changes which bay, and therefore phy, it is
6620 6721 * plugged into.
6621 6722 */
6622 6723 if (IS_SATA_DEVICE(ptgt->m_deviceinfo)) {
6623 6724 if (!mptsas_update_sata_bridge(mpt, parent,
6624 6725 ptgt)) {
6625 6726 mutex_enter(&mpt->m_mutex);
6626 6727 return;
6627 6728 }
6628 6729 }
6629 6730
6630 6731 /*
6631 6732 * Add parent's props for SMHBA support
6632 6733 */
6633 6734 if (flags == MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE) {
6634 6735 bzero(attached_wwnstr,
6635 6736 sizeof (attached_wwnstr));
6636 6737 (void) sprintf(attached_wwnstr, "w%016"PRIx64,
6637 6738 ptgt->m_addr.mta_wwn);
6638 6739 if (ddi_prop_update_string(DDI_DEV_T_NONE,
6639 6740 parent,
6640 6741 SCSI_ADDR_PROP_ATTACHED_PORT,
6641 6742 attached_wwnstr)
6642 6743 != DDI_PROP_SUCCESS) {
6643 6744 (void) ddi_prop_remove(DDI_DEV_T_NONE,
6644 6745 parent,
6645 6746 SCSI_ADDR_PROP_ATTACHED_PORT);
6646 6747 mptsas_log(mpt, CE_WARN, "Failed to"
6647 6748 "attached-port props");
6648 6749 mutex_enter(&mpt->m_mutex);
6649 6750 return;
6650 6751 }
6651 6752 if (ddi_prop_update_int(DDI_DEV_T_NONE, parent,
6652 6753 MPTSAS_NUM_PHYS, 1) !=
6653 6754 DDI_PROP_SUCCESS) {
6654 6755 (void) ddi_prop_remove(DDI_DEV_T_NONE,
6655 6756 parent, MPTSAS_NUM_PHYS);
6656 6757 mptsas_log(mpt, CE_WARN, "Failed to"
6657 6758 " create num-phys props");
6658 6759 mutex_enter(&mpt->m_mutex);
6659 6760 return;
6660 6761 }
6661 6762
6662 6763 /*
6663 6764 * Update PHY info for smhba
6664 6765 */
6665 6766 mutex_enter(&mpt->m_mutex);
6666 6767 if (mptsas_smhba_phy_init(mpt)) {
6667 6768 mptsas_log(mpt, CE_WARN, "mptsas phy"
6668 6769 " update failed");
6669 6770 return;
6670 6771 }
6671 6772 mutex_exit(&mpt->m_mutex);
6672 6773
6673 6774 /*
6674 6775 * topo_node->un.physport is really the PHY#
6675 6776 * for direct attached devices
6676 6777 */
6677 6778 mptsas_smhba_set_one_phy_props(mpt, parent,
6678 6779 topo_node->un.physport, &attached_devhdl);
6679 6780
6680 6781 if (ddi_prop_update_int(DDI_DEV_T_NONE, parent,
6681 6782 MPTSAS_VIRTUAL_PORT, 0) !=
6682 6783 DDI_PROP_SUCCESS) {
6683 6784 (void) ddi_prop_remove(DDI_DEV_T_NONE,
6684 6785 parent, MPTSAS_VIRTUAL_PORT);
6685 6786 mptsas_log(mpt, CE_WARN,
6686 6787 "mptsas virtual-port"
6687 6788 "port prop update failed");
6688 6789 mutex_enter(&mpt->m_mutex);
6689 6790 return;
6690 6791 }
6691 6792 }
6692 6793 }
6693 6794 mutex_enter(&mpt->m_mutex);
6694 6795
6695 6796 NDBG20(("mptsas%d handle_topo_change to online devhdl:%x, "
6696 6797 "phymask:%x.", mpt->m_instance, ptgt->m_devhdl,
6697 6798 ptgt->m_addr.mta_phymask));
6698 6799 break;
6699 6800 }
6700 6801 case MPTSAS_DR_EVENT_OFFLINE_TARGET:
6701 6802 {
6702 6803 devhdl = topo_node->devhdl;
6703 6804 ptgt = refhash_linear_search(mpt->m_targets,
6704 6805 mptsas_target_eval_devhdl, &devhdl);
6705 6806 if (ptgt == NULL)
6706 6807 break;
6707 6808
6708 6809 sas_wwn = ptgt->m_addr.mta_wwn;
6709 6810 phy = ptgt->m_phynum;
6710 6811
6711 6812 addr = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
6712 6813
6713 6814 if (sas_wwn) {
6714 6815 (void) sprintf(addr, "w%016"PRIx64, sas_wwn);
6715 6816 } else {
6716 6817 (void) sprintf(addr, "p%x", phy);
6717 6818 }
6718 6819 ASSERT(ptgt->m_devhdl == devhdl);
6719 6820
6720 6821 if ((topo_node->flags == MPTSAS_TOPO_FLAG_RAID_ASSOCIATED) ||
6721 6822 (topo_node->flags ==
6722 6823 MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED)) {
6723 6824 /*
6724 6825 * Get latest RAID info if RAID volume status changes
6725 6826 * or Phys Disk status changes
6726 6827 */
6727 6828 (void) mptsas_get_raid_info(mpt);
6728 6829 }
6729 6830 /*
6730 6831 * Abort all outstanding commands on the device
6731 6832 */
6732 6833 rval = mptsas_do_scsi_reset(mpt, devhdl);
6733 6834 if (rval) {
6734 6835 NDBG20(("mptsas%d handle_topo_change to reset target "
6735 6836 "before offline devhdl:%x, phymask:%x, rval:%x",
6736 6837 mpt->m_instance, ptgt->m_devhdl,
6737 6838 ptgt->m_addr.mta_phymask, rval));
6738 6839 }
6739 6840
6740 6841 mutex_exit(&mpt->m_mutex);
6741 6842
6742 6843 ndi_devi_enter(scsi_vhci_dip, &circ);
6743 6844 ndi_devi_enter(parent, &circ1);
6744 6845 rval = mptsas_offline_target(parent, addr);
6745 6846 ndi_devi_exit(parent, circ1);
6746 6847 ndi_devi_exit(scsi_vhci_dip, circ);
6747 6848 NDBG20(("mptsas%d handle_topo_change to offline devhdl:%x, "
6748 6849 "phymask:%x, rval:%x", mpt->m_instance,
6749 6850 ptgt->m_devhdl, ptgt->m_addr.mta_phymask, rval));
6750 6851
6751 6852 kmem_free(addr, SCSI_MAXNAMELEN);
6752 6853
6753 6854 /*
6754 6855 * Clear parent's props for SMHBA support
6755 6856 */
6756 6857 flags = topo_node->flags;
6757 6858 if (flags == MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE) {
6758 6859 bzero(attached_wwnstr, sizeof (attached_wwnstr));
6759 6860 if (ddi_prop_update_string(DDI_DEV_T_NONE, parent,
6760 6861 SCSI_ADDR_PROP_ATTACHED_PORT, attached_wwnstr) !=
6761 6862 DDI_PROP_SUCCESS) {
6762 6863 (void) ddi_prop_remove(DDI_DEV_T_NONE, parent,
6763 6864 SCSI_ADDR_PROP_ATTACHED_PORT);
6764 6865 mptsas_log(mpt, CE_WARN, "mptsas attached port "
6765 6866 "prop update failed");
6766 6867 mutex_enter(&mpt->m_mutex);
6767 6868 break;
6768 6869 }
6769 6870 if (ddi_prop_update_int(DDI_DEV_T_NONE, parent,
6770 6871 MPTSAS_NUM_PHYS, 0) !=
6771 6872 DDI_PROP_SUCCESS) {
6772 6873 (void) ddi_prop_remove(DDI_DEV_T_NONE, parent,
6773 6874 MPTSAS_NUM_PHYS);
6774 6875 mptsas_log(mpt, CE_WARN, "mptsas num phys "
6775 6876 "prop update failed");
6776 6877 mutex_enter(&mpt->m_mutex);
6777 6878 break;
6778 6879 }
6779 6880 if (ddi_prop_update_int(DDI_DEV_T_NONE, parent,
6780 6881 MPTSAS_VIRTUAL_PORT, 1) !=
6781 6882 DDI_PROP_SUCCESS) {
|
↓ open down ↓ |
206 lines elided |
↑ open up ↑ |
6782 6883 (void) ddi_prop_remove(DDI_DEV_T_NONE, parent,
6783 6884 MPTSAS_VIRTUAL_PORT);
6784 6885 mptsas_log(mpt, CE_WARN, "mptsas virtual port "
6785 6886 "prop update failed");
6786 6887 mutex_enter(&mpt->m_mutex);
6787 6888 break;
6788 6889 }
6789 6890 }
6790 6891
6791 6892 mutex_enter(&mpt->m_mutex);
6792 - ptgt->m_led_status = 0;
6793 - (void) mptsas_flush_led_status(mpt, ptgt);
6794 6893 if (rval == DDI_SUCCESS) {
6795 6894 refhash_remove(mpt->m_targets, ptgt);
6796 6895 ptgt = NULL;
6797 6896 } else {
6798 6897 /*
6799 6898 * Clear the DR_INTRANSITION flag to allow I/O down to
6800 6899 * the PHCI driver since failover has finished.
6801 6900 * Invalidate the devhdl.
6802 6901 */
6803 6902 ptgt->m_devhdl = MPTSAS_INVALID_DEVHDL;
6804 6903 ptgt->m_tgt_unconfigured = 0;
6805 6904 mutex_enter(&mpt->m_tx_waitq_mutex);
6806 6905 ptgt->m_dr_flag = MPTSAS_DR_INACTIVE;
6807 6906 mutex_exit(&mpt->m_tx_waitq_mutex);
6808 6907 }
6809 6908
6810 6909 /*
6811 6910 * Send SAS IO Unit Control to free the dev handle
6812 6911 */
6813 6912 if ((flags == MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE) ||
6814 6913 (flags == MPTSAS_TOPO_FLAG_EXPANDER_ATTACHED_DEVICE)) {
6815 6914 rval = mptsas_free_devhdl(mpt, devhdl);
6816 6915
6817 6916 NDBG20(("mptsas%d handle_topo_change to remove "
6818 6917 "devhdl:%x, rval:%x", mpt->m_instance, devhdl,
6819 6918 rval));
6820 6919 }
6821 6920
6822 6921 break;
6823 6922 }
6824 6923 case MPTSAS_TOPO_FLAG_REMOVE_HANDLE:
6825 6924 {
6826 6925 devhdl = topo_node->devhdl;
6827 6926 /*
6828 6927 * If this is the remove handle event, do a reset first.
6829 6928 */
6830 6929 if (topo_node->event == MPTSAS_TOPO_FLAG_REMOVE_HANDLE) {
6831 6930 rval = mptsas_do_scsi_reset(mpt, devhdl);
6832 6931 if (rval) {
6833 6932 NDBG20(("mpt%d reset target before remove "
6834 6933 "devhdl:%x, rval:%x", mpt->m_instance,
6835 6934 devhdl, rval));
6836 6935 }
6837 6936 }
6838 6937
6839 6938 /*
6840 6939 * Send SAS IO Unit Control to free the dev handle
6841 6940 */
6842 6941 rval = mptsas_free_devhdl(mpt, devhdl);
6843 6942 NDBG20(("mptsas%d handle_topo_change to remove "
6844 6943 "devhdl:%x, rval:%x", mpt->m_instance, devhdl,
6845 6944 rval));
6846 6945 break;
6847 6946 }
6848 6947 case MPTSAS_DR_EVENT_RECONFIG_SMP:
6849 6948 {
6850 6949 mptsas_smp_t smp;
6851 6950 dev_info_t *smpdip;
6852 6951
6853 6952 devhdl = topo_node->devhdl;
6854 6953
6855 6954 page_address = (MPI2_SAS_EXPAND_PGAD_FORM_HNDL &
6856 6955 MPI2_SAS_EXPAND_PGAD_FORM_MASK) | (uint32_t)devhdl;
6857 6956 rval = mptsas_get_sas_expander_page0(mpt, page_address, &smp);
6858 6957 if (rval != DDI_SUCCESS) {
6859 6958 mptsas_log(mpt, CE_WARN, "failed to online smp, "
6860 6959 "handle %x", devhdl);
6861 6960 return;
6862 6961 }
6863 6962
6864 6963 psmp = mptsas_smp_alloc(mpt, &smp);
6865 6964 if (psmp == NULL) {
6866 6965 return;
6867 6966 }
6868 6967
6869 6968 mutex_exit(&mpt->m_mutex);
6870 6969 ndi_devi_enter(parent, &circ1);
6871 6970 (void) mptsas_online_smp(parent, psmp, &smpdip);
6872 6971 ndi_devi_exit(parent, circ1);
6873 6972
6874 6973 mutex_enter(&mpt->m_mutex);
6875 6974 break;
6876 6975 }
6877 6976 case MPTSAS_DR_EVENT_OFFLINE_SMP:
6878 6977 {
6879 6978 devhdl = topo_node->devhdl;
6880 6979 uint32_t dev_info;
6881 6980
6882 6981 psmp = refhash_linear_search(mpt->m_smp_targets,
6883 6982 mptsas_smp_eval_devhdl, &devhdl);
6884 6983 if (psmp == NULL)
6885 6984 break;
6886 6985 /*
6887 6986 * The mptsas_smp_t data is released only if the dip is offlined
6888 6987 * successfully.
6889 6988 */
6890 6989 mutex_exit(&mpt->m_mutex);
6891 6990
6892 6991 ndi_devi_enter(parent, &circ1);
6893 - rval = mptsas_offline_smp(parent, psmp, NDI_DEVI_REMOVE);
6992 + rval = mptsas_offline_smp(parent, psmp);
6894 6993 ndi_devi_exit(parent, circ1);
6895 6994
6896 6995 dev_info = psmp->m_deviceinfo;
6897 6996 if ((dev_info & DEVINFO_DIRECT_ATTACHED) ==
6898 6997 DEVINFO_DIRECT_ATTACHED) {
6899 6998 if (ddi_prop_update_int(DDI_DEV_T_NONE, parent,
6900 6999 MPTSAS_VIRTUAL_PORT, 1) !=
6901 7000 DDI_PROP_SUCCESS) {
6902 7001 (void) ddi_prop_remove(DDI_DEV_T_NONE, parent,
6903 7002 MPTSAS_VIRTUAL_PORT);
6904 7003 mptsas_log(mpt, CE_WARN, "mptsas virtual port "
6905 7004 "prop update failed");
7005 + mutex_enter(&mpt->m_mutex);
6906 7006 return;
6907 7007 }
6908 7008 /*
6909 7009 * Check whether the SMP is connected to the iport.
6910 7010 */
6911 7011 if (ddi_prop_update_int(DDI_DEV_T_NONE, parent,
6912 7012 MPTSAS_NUM_PHYS, 0) !=
6913 7013 DDI_PROP_SUCCESS) {
6914 7014 (void) ddi_prop_remove(DDI_DEV_T_NONE, parent,
6915 7015 MPTSAS_NUM_PHYS);
6916 7016 mptsas_log(mpt, CE_WARN, "mptsas num phys"
6917 7017 "prop update failed");
7018 + mutex_enter(&mpt->m_mutex);
6918 7019 return;
6919 7020 }
6920 7021 /*
6921 7022 * Clear parent's attached-port props
6922 7023 */
6923 7024 bzero(attached_wwnstr, sizeof (attached_wwnstr));
6924 7025 if (ddi_prop_update_string(DDI_DEV_T_NONE, parent,
6925 7026 SCSI_ADDR_PROP_ATTACHED_PORT, attached_wwnstr) !=
6926 7027 DDI_PROP_SUCCESS) {
6927 7028 (void) ddi_prop_remove(DDI_DEV_T_NONE, parent,
6928 7029 SCSI_ADDR_PROP_ATTACHED_PORT);
6929 7030 mptsas_log(mpt, CE_WARN, "mptsas attached port "
6930 7031 "prop update failed");
7032 + mutex_enter(&mpt->m_mutex);
6931 7033 return;
6932 7034 }
6933 7035 }
6934 7036
6935 7037 mutex_enter(&mpt->m_mutex);
6936 7038 NDBG20(("mptsas%d handle_topo_change to remove devhdl:%x, "
6937 7039 "rval:%x", mpt->m_instance, psmp->m_devhdl, rval));
6938 7040 if (rval == DDI_SUCCESS) {
6939 7041 refhash_remove(mpt->m_smp_targets, psmp);
6940 7042 } else {
6941 7043 psmp->m_devhdl = MPTSAS_INVALID_DEVHDL;
6942 7044 }
6943 7045
6944 7046 bzero(attached_wwnstr, sizeof (attached_wwnstr));
6945 7047
6946 7048 break;
6947 7049 }
6948 7050 default:
6949 7051 return;
6950 7052 }
6951 7053 }
6952 7054
6953 7055 /*
6954 7056 * Record the event if its type is enabled in mpt instance by ioctl.
6955 7057 */
6956 7058 static void
6957 7059 mptsas_record_event(void *args)
6958 7060 {
6959 7061 m_replyh_arg_t *replyh_arg;
6960 7062 pMpi2EventNotificationReply_t eventreply;
6961 7063 uint32_t event, rfm;
6962 7064 mptsas_t *mpt;
6963 7065 int i, j;
6964 7066 uint16_t event_data_len;
6965 7067 boolean_t sendAEN = FALSE;
6966 7068
6967 7069 replyh_arg = (m_replyh_arg_t *)args;
6968 7070 rfm = replyh_arg->rfm;
6969 7071 mpt = replyh_arg->mpt;
6970 7072
6971 7073 eventreply = (pMpi2EventNotificationReply_t)
6972 7074 (mpt->m_reply_frame + (rfm -
6973 7075 (mpt->m_reply_frame_dma_addr & 0xffffffffu)));
6974 7076 event = ddi_get16(mpt->m_acc_reply_frame_hdl, &eventreply->Event);
6975 7077
6976 7078
6977 7079 /*
6978 7080 * Generate a system event to let anyone who cares know that a
6979 7081 * LOG_ENTRY_ADDED event has occurred. This is sent no matter what the
6980 7082 * event mask is set to.
6981 7083 */
6982 7084 if (event == MPI2_EVENT_LOG_ENTRY_ADDED) {
6983 7085 sendAEN = TRUE;
6984 7086 }
6985 7087
6986 7088 /*
6987 7089 * Record the event only if it is not masked. Determine which dword
6988 7090 * and bit of event mask to test.
6989 7091 */
6990 7092 i = (uint8_t)(event / 32);
6991 7093 j = (uint8_t)(event % 32);
6992 7094 if ((i < 4) && ((1 << j) & mpt->m_event_mask[i])) {
6993 7095 i = mpt->m_event_index;
6994 7096 mpt->m_events[i].Type = event;
6995 7097 mpt->m_events[i].Number = ++mpt->m_event_number;
6996 7098 bzero(mpt->m_events[i].Data, MPTSAS_MAX_EVENT_DATA_LENGTH * 4);
6997 7099 event_data_len = ddi_get16(mpt->m_acc_reply_frame_hdl,
6998 7100 &eventreply->EventDataLength);
6999 7101
7000 7102 if (event_data_len > 0) {
7001 7103 /*
7002 7104 * Limit data to size in m_event entry
7003 7105 */
7004 7106 if (event_data_len > MPTSAS_MAX_EVENT_DATA_LENGTH) {
7005 7107 event_data_len = MPTSAS_MAX_EVENT_DATA_LENGTH;
7006 7108 }
7007 7109 for (j = 0; j < event_data_len; j++) {
7008 7110 mpt->m_events[i].Data[j] =
7009 7111 ddi_get32(mpt->m_acc_reply_frame_hdl,
7010 7112 &(eventreply->EventData[j]));
7011 7113 }
7012 7114
7013 7115 /*
7014 7116 * check for index wrap-around
7015 7117 */
7016 7118 if (++i == MPTSAS_EVENT_QUEUE_SIZE) {
7017 7119 i = 0;
7018 7120 }
7019 7121 mpt->m_event_index = (uint8_t)i;
7020 7122
7021 7123 /*
7022 7124 * Set flag to send the event.
7023 7125 */
7024 7126 sendAEN = TRUE;
7025 7127 }
7026 7128 }
7027 7129
7028 7130 /*
7029 7131 * Generate a system event if flag is set to let anyone who cares know
7030 7132 * that an event has occurred.
7031 7133 */
7032 7134 if (sendAEN) {
7033 7135 (void) ddi_log_sysevent(mpt->m_dip, DDI_VENDOR_LSI, "MPT_SAS",
7034 7136 "SAS", NULL, NULL, DDI_NOSLEEP);
7035 7137 }
7036 7138 }
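
/*
 * A worked example of the event mask test above: m_event_mask is four
 * 32-bit words covering event codes 0-127.  For event 0x25,
 *
 *   i = 0x25 / 32 = 1,  j = 0x25 % 32 = 5
 *
 * so the event is recorded only if bit 5 of m_event_mask[1] is set.
 */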
7037 7139
7038 7140 #define SMP_RESET_IN_PROGRESS MPI2_EVENT_SAS_TOPO_LR_SMP_RESET_IN_PROGRESS
7039 7141 /*
7040 7142 * handle sync events from ioc in interrupt
7041 7143 * return value:
7042 7144 * DDI_SUCCESS: The event is handled by this func
7043 7145 * DDI_FAILURE: Event is not handled
7044 7146 */
7045 7147 static int
7046 7148 mptsas_handle_event_sync(void *args)
7047 7149 {
7048 7150 m_replyh_arg_t *replyh_arg;
7049 7151 pMpi2EventNotificationReply_t eventreply;
7050 7152 uint32_t event, rfm;
7051 7153 mptsas_t *mpt;
7052 7154 uint_t iocstatus;
7053 7155
7054 7156 replyh_arg = (m_replyh_arg_t *)args;
7055 7157 rfm = replyh_arg->rfm;
7056 7158 mpt = replyh_arg->mpt;
7057 7159
7058 7160 ASSERT(mutex_owned(&mpt->m_mutex));
7059 7161
7060 7162 eventreply = (pMpi2EventNotificationReply_t)
7061 7163 (mpt->m_reply_frame + (rfm -
7062 7164 (mpt->m_reply_frame_dma_addr & 0xffffffffu)));
7063 7165 event = ddi_get16(mpt->m_acc_reply_frame_hdl, &eventreply->Event);
7064 7166
7065 - if (iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
7066 - &eventreply->IOCStatus)) {
7167 + if ((iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
7168 + &eventreply->IOCStatus)) != 0) {
7067 7169 if (iocstatus == MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) {
7068 7170 mptsas_log(mpt, CE_WARN,
7069 - "!mptsas_handle_event_sync: event 0x%x, "
7171 + "mptsas_handle_event_sync: event 0x%x, "
7070 7172 "IOCStatus=0x%x, "
7071 7173 "IOCLogInfo=0x%x", event, iocstatus,
7072 7174 ddi_get32(mpt->m_acc_reply_frame_hdl,
7073 7175 &eventreply->IOCLogInfo));
7074 7176 } else {
7075 7177 mptsas_log(mpt, CE_WARN,
7076 7178 "mptsas_handle_event_sync: event 0x%x, "
7077 7179 "IOCStatus=0x%x, "
7078 7180 "(IOCLogInfo=0x%x)", event, iocstatus,
7079 7181 ddi_get32(mpt->m_acc_reply_frame_hdl,
7080 7182 &eventreply->IOCLogInfo));
7081 7183 }
7082 7184 }
7083 7185
7084 7186 /*
7085 7187 * figure out what kind of event we got and handle accordingly
7086 7188 */
7087 7189 switch (event) {
7088 7190 case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
7089 7191 {
7090 7192 pMpi2EventDataSasTopologyChangeList_t sas_topo_change_list;
7091 7193 uint8_t num_entries, expstatus, phy;
7092 7194 uint8_t phystatus, physport, state, i;
7093 7195 uint8_t start_phy_num, link_rate;
7094 7196 uint16_t dev_handle, reason_code;
7095 7197 uint16_t enc_handle, expd_handle;
7096 7198 char string[80], curr[80], prev[80];
7097 7199 mptsas_topo_change_list_t *topo_head = NULL;
7098 7200 mptsas_topo_change_list_t *topo_tail = NULL;
7099 7201 mptsas_topo_change_list_t *topo_node = NULL;
7100 7202 mptsas_target_t *ptgt;
7101 7203 mptsas_smp_t *psmp;
7102 7204 uint8_t flags = 0, exp_flag;
7103 7205 smhba_info_t *pSmhba = NULL;
7104 7206
7105 7207 NDBG20(("mptsas_handle_event_sync: SAS topology change"));
7106 7208
7107 7209 sas_topo_change_list = (pMpi2EventDataSasTopologyChangeList_t)
7108 7210 eventreply->EventData;
7109 7211
7110 7212 enc_handle = ddi_get16(mpt->m_acc_reply_frame_hdl,
7111 7213 &sas_topo_change_list->EnclosureHandle);
7112 7214 expd_handle = ddi_get16(mpt->m_acc_reply_frame_hdl,
7113 7215 &sas_topo_change_list->ExpanderDevHandle);
7114 7216 num_entries = ddi_get8(mpt->m_acc_reply_frame_hdl,
7115 7217 &sas_topo_change_list->NumEntries);
7116 7218 start_phy_num = ddi_get8(mpt->m_acc_reply_frame_hdl,
7117 7219 &sas_topo_change_list->StartPhyNum);
7118 7220 expstatus = ddi_get8(mpt->m_acc_reply_frame_hdl,
7119 7221 &sas_topo_change_list->ExpStatus);
7120 7222 physport = ddi_get8(mpt->m_acc_reply_frame_hdl,
7121 7223 &sas_topo_change_list->PhysicalPort);
7122 7224
7123 7225 string[0] = 0;
7124 7226 if (expd_handle) {
7125 7227 flags = MPTSAS_TOPO_FLAG_EXPANDER_ASSOCIATED;
7126 7228 switch (expstatus) {
7127 7229 case MPI2_EVENT_SAS_TOPO_ES_ADDED:
7128 7230 (void) sprintf(string, " added");
7129 7231 /*
7130 7232 * New expander device added
7131 7233 */
7132 7234 mpt->m_port_chng = 1;
7133 7235 topo_node = kmem_zalloc(
7134 7236 sizeof (mptsas_topo_change_list_t),
7135 7237 KM_SLEEP);
7136 7238 topo_node->mpt = mpt;
7137 7239 topo_node->event = MPTSAS_DR_EVENT_RECONFIG_SMP;
7138 7240 topo_node->un.physport = physport;
7139 7241 topo_node->devhdl = expd_handle;
7140 7242 topo_node->flags = flags;
7141 7243 topo_node->object = NULL;
7142 7244 if (topo_head == NULL) {
7143 7245 topo_head = topo_tail = topo_node;
7144 7246 } else {
7145 7247 topo_tail->next = topo_node;
7146 7248 topo_tail = topo_node;
7147 7249 }
7148 7250 break;
7149 7251 case MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING:
7150 7252 (void) sprintf(string, " not responding, "
7151 7253 "removed");
7152 7254 psmp = refhash_linear_search(mpt->m_smp_targets,
7153 7255 mptsas_smp_eval_devhdl, &expd_handle);
7154 7256 if (psmp == NULL)
7155 7257 break;
7156 7258
7157 7259 topo_node = kmem_zalloc(
7158 7260 sizeof (mptsas_topo_change_list_t),
7159 7261 KM_SLEEP);
7160 7262 topo_node->mpt = mpt;
7161 7263 topo_node->un.phymask =
7162 7264 psmp->m_addr.mta_phymask;
7163 7265 topo_node->event = MPTSAS_DR_EVENT_OFFLINE_SMP;
7164 7266 topo_node->devhdl = expd_handle;
7165 7267 topo_node->flags = flags;
7166 7268 topo_node->object = NULL;
7167 7269 if (topo_head == NULL) {
7168 7270 topo_head = topo_tail = topo_node;
7169 7271 } else {
7170 7272 topo_tail->next = topo_node;
7171 7273 topo_tail = topo_node;
7172 7274 }
7173 7275 break;
7174 7276 case MPI2_EVENT_SAS_TOPO_ES_RESPONDING:
7175 7277 break;
7176 7278 case MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING:
7177 7279 (void) sprintf(string, " not responding, "
7178 7280 "delaying removal");
7179 7281 break;
7180 7282 default:
7181 7283 break;
7182 7284 }
7183 7285 } else {
7184 7286 flags = MPTSAS_TOPO_FLAG_DIRECT_ATTACHED_DEVICE;
7185 7287 }
7186 7288
7187 7289 NDBG20(("SAS TOPOLOGY CHANGE for enclosure %x expander %x%s\n",
7188 7290 enc_handle, expd_handle, string));
7189 7291 for (i = 0; i < num_entries; i++) {
7190 7292 phy = i + start_phy_num;
7191 7293 phystatus = ddi_get8(mpt->m_acc_reply_frame_hdl,
7192 7294 &sas_topo_change_list->PHY[i].PhyStatus);
7193 7295 dev_handle = ddi_get16(mpt->m_acc_reply_frame_hdl,
7194 7296 &sas_topo_change_list->PHY[i].AttachedDevHandle);
7195 7297 reason_code = phystatus & MPI2_EVENT_SAS_TOPO_RC_MASK;
7196 7298 /*
7197 7299 * Filter out processing of Phy Vacant Status unless
7198 7300 * the reason code is "Not Responding". Process all
7199 7301 * other combinations of Phy Status and Reason Codes.
7200 7302 */
7201 7303 if ((phystatus &
7202 7304 MPI2_EVENT_SAS_TOPO_PHYSTATUS_VACANT) &&
7203 7305 (reason_code !=
7204 7306 MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING)) {
7205 7307 continue;
7206 7308 }
7207 7309 curr[0] = 0;
7208 7310 prev[0] = 0;
7209 7311 string[0] = 0;
7210 7312 switch (reason_code) {
7211 7313 case MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED:
7212 7314 {
7213 7315 NDBG20(("mptsas%d phy %d physical_port %d "
7214 7316 "dev_handle %d added", mpt->m_instance, phy,
7215 7317 physport, dev_handle));
7216 7318 link_rate = ddi_get8(mpt->m_acc_reply_frame_hdl,
7217 7319 &sas_topo_change_list->PHY[i].LinkRate);
7218 7320 state = (link_rate &
7219 7321 MPI2_EVENT_SAS_TOPO_LR_CURRENT_MASK) >>
7220 7322 MPI2_EVENT_SAS_TOPO_LR_CURRENT_SHIFT;
7221 7323 switch (state) {
7222 7324 case MPI2_EVENT_SAS_TOPO_LR_PHY_DISABLED:
7223 7325 (void) sprintf(curr, "is disabled");
7224 7326 break;
7225 7327 case MPI2_EVENT_SAS_TOPO_LR_NEGOTIATION_FAILED:
7226 7328 (void) sprintf(curr, "is offline, "
7227 7329 "failed speed negotiation");
7228 7330 break;
7229 7331 case MPI2_EVENT_SAS_TOPO_LR_SATA_OOB_COMPLETE:
7230 7332 (void) sprintf(curr, "SATA OOB "
7231 7333 "complete");
7232 7334 break;
7233 7335 case SMP_RESET_IN_PROGRESS:
7234 7336 (void) sprintf(curr, "SMP reset in "
7235 7337 "progress");
7236 7338 break;
7237 7339 case MPI2_EVENT_SAS_TOPO_LR_RATE_1_5:
7238 7340 (void) sprintf(curr, "is online at "
7239 7341 "1.5 Gbps");
7240 7342 break;
7241 7343 case MPI2_EVENT_SAS_TOPO_LR_RATE_3_0:
7242 7344 (void) sprintf(curr, "is online at 3.0 "
7243 7345 "Gbps");
7244 7346 break;
7245 7347 case MPI2_EVENT_SAS_TOPO_LR_RATE_6_0:
7246 7348 (void) sprintf(curr, "is online at 6.0 "
7247 7349 "Gbps");
7248 7350 break;
7249 7351 case MPI25_EVENT_SAS_TOPO_LR_RATE_12_0:
7250 7352 (void) sprintf(curr,
7251 7353 "is online at 12.0 Gbps");
7252 7354 break;
7253 7355 default:
7254 7356 (void) sprintf(curr, "state is "
7255 7357 "unknown");
7256 7358 break;
7257 7359 }
7258 7360 /*
7259 7361 * New target device added into the system.
7260 7362 * Set the association flag according to whether
7261 7363 * an expander is used or not.
7262 7364 */
7263 7365 exp_flag =
7264 7366 MPTSAS_TOPO_FLAG_EXPANDER_ATTACHED_DEVICE;
7265 7367 if (flags ==
7266 7368 MPTSAS_TOPO_FLAG_EXPANDER_ASSOCIATED) {
7267 7369 flags = exp_flag;
7268 7370 }
7269 7371 topo_node = kmem_zalloc(
7270 7372 sizeof (mptsas_topo_change_list_t),
7271 7373 KM_SLEEP);
7272 7374 topo_node->mpt = mpt;
7273 7375 topo_node->event =
7274 7376 MPTSAS_DR_EVENT_RECONFIG_TARGET;
7275 7377 if (expd_handle == 0) {
7276 7378 /*
7277 7379 * Per MPI 2, if expander dev handle
7278 7380 * is 0, it's a directly attached
7279 7381 * device. So the driver uses the PHY to decide
7280 7382 * which iport is associated.
7281 7383 */
7282 7384 physport = phy;
7283 7385 mpt->m_port_chng = 1;
7284 7386 }
7285 7387 topo_node->un.physport = physport;
7286 7388 topo_node->devhdl = dev_handle;
7287 7389 topo_node->flags = flags;
7288 7390 topo_node->object = NULL;
7289 7391 if (topo_head == NULL) {
7290 7392 topo_head = topo_tail = topo_node;
7291 7393 } else {
7292 7394 topo_tail->next = topo_node;
7293 7395 topo_tail = topo_node;
7294 7396 }
7295 7397 break;
7296 7398 }
7297 7399 case MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
7298 7400 {
7299 7401 NDBG20(("mptsas%d phy %d physical_port %d "
7300 7402 "dev_handle %d removed", mpt->m_instance,
7301 7403 phy, physport, dev_handle));
7302 7404 /*
7303 7405 * Set the association flag according to whether
7304 7406 * an expander is used or not.
7305 7407 */
7306 7408 exp_flag =
7307 7409 MPTSAS_TOPO_FLAG_EXPANDER_ATTACHED_DEVICE;
7308 7410 if (flags ==
7309 7411 MPTSAS_TOPO_FLAG_EXPANDER_ASSOCIATED) {
7310 7412 flags = exp_flag;
7311 7413 }
7312 7414 /*
7313 7415 * The target device is removed from the system
7314 7416 * before the device is really offline from
7315 7417 * the system.
7316 7418 */
7317 7419 ptgt = refhash_linear_search(mpt->m_targets,
7318 7420 mptsas_target_eval_devhdl, &dev_handle);
7319 7421 /*
7320 7422 * If ptgt is NULL here, it means that the
7321 7423 * DevHandle is not in the hash table. This is
7322 7424 * reasonable sometimes. For example, if a
7323 7425 * disk was pulled, then added, then pulled
7324 7426 * again, the disk will not have been put into
7325 7427 * the hash table because the add event will
7326 7428 * have an invalid phymask. BUT, this does not
7327 7429 * mean that the DevHandle is invalid. The
7328 7430 * controller will still have a valid DevHandle
7329 7431 * that must be removed. To do this, use the
7330 7432 * MPTSAS_TOPO_FLAG_REMOVE_HANDLE event.
7331 7433 */
7332 7434 if (ptgt == NULL) {
7333 7435 topo_node = kmem_zalloc(
7334 7436 sizeof (mptsas_topo_change_list_t),
7335 7437 KM_SLEEP);
7336 7438 topo_node->mpt = mpt;
7337 7439 topo_node->un.phymask = 0;
7338 7440 topo_node->event =
7339 7441 MPTSAS_TOPO_FLAG_REMOVE_HANDLE;
7340 7442 topo_node->devhdl = dev_handle;
7341 7443 topo_node->flags = flags;
7342 7444 topo_node->object = NULL;
7343 7445 if (topo_head == NULL) {
7344 7446 topo_head = topo_tail =
7345 7447 topo_node;
7346 7448 } else {
7347 7449 topo_tail->next = topo_node;
7348 7450 topo_tail = topo_node;
7349 7451 }
7350 7452 break;
7351 7453 }
7352 7454
7353 7455 /*
7354 7456 * Update the DR flag immediately to avoid I/O failure
7355 7457 * before failover finishes. Pay attention to the
7356 7458 * mutex protection: we need to grab m_tx_waitq_mutex
7357 7459 * while setting m_dr_flag because we won't add
7358 7460 * the following command into the waitq; instead,
7359 7461 * we need to return TRAN_BUSY in the tran_start
7360 7462 * context.
7361 7463 */
7362 7464 mutex_enter(&mpt->m_tx_waitq_mutex);
7363 7465 ptgt->m_dr_flag = MPTSAS_DR_INTRANSITION;
7364 7466 mutex_exit(&mpt->m_tx_waitq_mutex);
7365 7467
7366 7468 topo_node = kmem_zalloc(
7367 7469 sizeof (mptsas_topo_change_list_t),
7368 7470 KM_SLEEP);
7369 7471 topo_node->mpt = mpt;
7370 7472 topo_node->un.phymask =
7371 7473 ptgt->m_addr.mta_phymask;
7372 7474 topo_node->event =
7373 7475 MPTSAS_DR_EVENT_OFFLINE_TARGET;
7374 7476 topo_node->devhdl = dev_handle;
7375 7477 topo_node->flags = flags;
7376 7478 topo_node->object = NULL;
7377 7479 if (topo_head == NULL) {
7378 7480 topo_head = topo_tail = topo_node;
7379 7481 } else {
7380 7482 topo_tail->next = topo_node;
7381 7483 topo_tail = topo_node;
7382 7484 }
7383 7485 break;
7384 7486 }
7385 7487 case MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED:
7386 7488 link_rate = ddi_get8(mpt->m_acc_reply_frame_hdl,
7387 7489 &sas_topo_change_list->PHY[i].LinkRate);
7388 7490 state = (link_rate &
7389 7491 MPI2_EVENT_SAS_TOPO_LR_CURRENT_MASK) >>
7390 7492 MPI2_EVENT_SAS_TOPO_LR_CURRENT_SHIFT;
7391 7493 pSmhba = &mpt->m_phy_info[i].smhba_info;
7392 7494 pSmhba->negotiated_link_rate = state;
7393 7495 switch (state) {
7394 7496 case MPI2_EVENT_SAS_TOPO_LR_PHY_DISABLED:
7395 7497 (void) sprintf(curr, "is disabled");
7396 7498 mptsas_smhba_log_sysevent(mpt,
7397 7499 ESC_SAS_PHY_EVENT,
7398 7500 SAS_PHY_REMOVE,
7399 7501 &mpt->m_phy_info[i].smhba_info);
7400 7502 mpt->m_phy_info[i].smhba_info.
7401 7503 negotiated_link_rate
7402 7504 = 0x1;
7403 7505 break;
7404 7506 case MPI2_EVENT_SAS_TOPO_LR_NEGOTIATION_FAILED:
7405 7507 (void) sprintf(curr, "is offline, "
7406 7508 "failed speed negotiation");
7407 7509 mptsas_smhba_log_sysevent(mpt,
7408 7510 ESC_SAS_PHY_EVENT,
7409 7511 SAS_PHY_OFFLINE,
7410 7512 &mpt->m_phy_info[i].smhba_info);
7411 7513 break;
7412 7514 case MPI2_EVENT_SAS_TOPO_LR_SATA_OOB_COMPLETE:
7413 7515 (void) sprintf(curr, "SATA OOB "
7414 7516 "complete");
7415 7517 break;
7416 7518 case SMP_RESET_IN_PROGRESS:
7417 7519 (void) sprintf(curr, "SMP reset in "
7418 7520 "progress");
7419 7521 break;
7420 7522 case MPI2_EVENT_SAS_TOPO_LR_RATE_1_5:
7421 7523 (void) sprintf(curr, "is online at "
7422 7524 "1.5 Gbps");
7423 7525 if ((expd_handle == 0) &&
7424 7526 (enc_handle == 1)) {
7425 7527 mpt->m_port_chng = 1;
7426 7528 }
7427 7529 mptsas_smhba_log_sysevent(mpt,
7428 7530 ESC_SAS_PHY_EVENT,
7429 7531 SAS_PHY_ONLINE,
7430 7532 &mpt->m_phy_info[i].smhba_info);
7431 7533 break;
7432 7534 case MPI2_EVENT_SAS_TOPO_LR_RATE_3_0:
7433 7535 (void) sprintf(curr, "is online at 3.0 "
7434 7536 "Gbps");
7435 7537 if ((expd_handle == 0) &&
7436 7538 (enc_handle == 1)) {
7437 7539 mpt->m_port_chng = 1;
7438 7540 }
7439 7541 mptsas_smhba_log_sysevent(mpt,
7440 7542 ESC_SAS_PHY_EVENT,
7441 7543 SAS_PHY_ONLINE,
7442 7544 &mpt->m_phy_info[i].smhba_info);
7443 7545 break;
7444 7546 case MPI2_EVENT_SAS_TOPO_LR_RATE_6_0:
7445 7547 (void) sprintf(curr, "is online at "
7446 7548 "6.0 Gbps");
7447 7549 if ((expd_handle == 0) &&
7448 7550 (enc_handle == 1)) {
7449 7551 mpt->m_port_chng = 1;
7450 7552 }
7451 7553 mptsas_smhba_log_sysevent(mpt,
7452 7554 ESC_SAS_PHY_EVENT,
7453 7555 SAS_PHY_ONLINE,
7454 7556 &mpt->m_phy_info[i].smhba_info);
7455 7557 break;
7456 7558 case MPI25_EVENT_SAS_TOPO_LR_RATE_12_0:
7457 7559 (void) sprintf(curr, "is online at "
7458 7560 "12.0 Gbps");
7459 7561 if ((expd_handle == 0) &&
7460 7562 (enc_handle == 1)) {
7461 7563 mpt->m_port_chng = 1;
7462 7564 }
7463 7565 mptsas_smhba_log_sysevent(mpt,
7464 7566 ESC_SAS_PHY_EVENT,
7465 7567 SAS_PHY_ONLINE,
7466 7568 &mpt->m_phy_info[i].smhba_info);
7467 7569 break;
7468 7570 default:
7469 7571 (void) sprintf(curr, "state is "
7470 7572 "unknown");
7471 7573 break;
7472 7574 }
7473 7575
7474 7576 state = (link_rate &
7475 7577 MPI2_EVENT_SAS_TOPO_LR_PREV_MASK) >>
7476 7578 MPI2_EVENT_SAS_TOPO_LR_PREV_SHIFT;
7477 7579 switch (state) {
7478 7580 case MPI2_EVENT_SAS_TOPO_LR_PHY_DISABLED:
7479 7581 (void) sprintf(prev, ", was disabled");
7480 7582 break;
7481 7583 case MPI2_EVENT_SAS_TOPO_LR_NEGOTIATION_FAILED:
7482 7584 (void) sprintf(prev, ", was offline, "
7483 7585 "failed speed negotiation");
7484 7586 break;
7485 7587 case MPI2_EVENT_SAS_TOPO_LR_SATA_OOB_COMPLETE:
7486 7588 (void) sprintf(prev, ", was SATA OOB "
7487 7589 "complete");
7488 7590 break;
7489 7591 case SMP_RESET_IN_PROGRESS:
7490 7592 (void) sprintf(prev, ", was SMP reset "
7491 7593 "in progress");
7492 7594 break;
7493 7595 case MPI2_EVENT_SAS_TOPO_LR_RATE_1_5:
7494 7596 (void) sprintf(prev, ", was online at "
7495 7597 "1.5 Gbps");
7496 7598 break;
7497 7599 case MPI2_EVENT_SAS_TOPO_LR_RATE_3_0:
7498 7600 (void) sprintf(prev, ", was online at "
7499 7601 "3.0 Gbps");
7500 7602 break;
7501 7603 case MPI2_EVENT_SAS_TOPO_LR_RATE_6_0:
7502 7604 (void) sprintf(prev, ", was online at "
7503 7605 "6.0 Gbps");
7504 7606 break;
7505 7607 case MPI25_EVENT_SAS_TOPO_LR_RATE_12_0:
7506 7608 (void) sprintf(prev, ", was online at "
7507 7609 "12.0 Gbps");
7508 7610 break;
7509 7611 default:
7510 7612 break;
7511 7613 }
7512 7614 (void) sprintf(&string[strlen(string)], "link "
7513 7615 "changed, ");
7514 7616 break;
7515 7617 case MPI2_EVENT_SAS_TOPO_RC_NO_CHANGE:
7516 7618 continue;
7517 7619 case MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING:
7518 7620 (void) sprintf(&string[strlen(string)],
7519 7621 "target not responding, delaying "
7520 7622 "removal");
7521 7623 break;
7522 7624 }
7523 7625 NDBG20(("mptsas%d phy %d DevHandle %x, %s%s%s\n",
7524 7626 mpt->m_instance, phy, dev_handle, string, curr,
7525 7627 prev));
7526 7628 }
7527 7629 if (topo_head != NULL) {
7528 7630 /*
7529 7631 * Launch DR taskq to handle topology change
7530 7632 */
7531 7633 if ((ddi_taskq_dispatch(mpt->m_dr_taskq,
7532 7634 mptsas_handle_dr, (void *)topo_head,
7533 7635 DDI_NOSLEEP)) != DDI_SUCCESS) {
7534 7636 while (topo_head != NULL) {
7535 7637 topo_node = topo_head;
7536 7638 topo_head = topo_head->next;
7537 7639 kmem_free(topo_node,
7538 7640 sizeof (mptsas_topo_change_list_t));
7539 7641 }
7540 7642 mptsas_log(mpt, CE_NOTE, "mptsas start taskq "
7541 - "for handle SAS DR event failed. \n");
7643 + "for handle SAS DR event failed");
7542 7644 }
7543 7645 }
7544 7646 break;
7545 7647 }
7546 7648 case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
7547 7649 {
7548 7650 Mpi2EventDataIrConfigChangeList_t *irChangeList;
7549 7651 mptsas_topo_change_list_t *topo_head = NULL;
7550 7652 mptsas_topo_change_list_t *topo_tail = NULL;
7551 7653 mptsas_topo_change_list_t *topo_node = NULL;
7552 7654 mptsas_target_t *ptgt;
7553 7655 uint8_t num_entries, i, reason;
7554 7656 uint16_t volhandle, diskhandle;
7555 7657
7556 7658 irChangeList = (pMpi2EventDataIrConfigChangeList_t)
7557 7659 eventreply->EventData;
7558 7660 num_entries = ddi_get8(mpt->m_acc_reply_frame_hdl,
7559 7661 &irChangeList->NumElements);
7560 7662
7561 7663 NDBG20(("mptsas%d IR_CONFIGURATION_CHANGE_LIST event received",
7562 7664 mpt->m_instance));
7563 7665
7564 7666 for (i = 0; i < num_entries; i++) {
7565 7667 reason = ddi_get8(mpt->m_acc_reply_frame_hdl,
7566 7668 &irChangeList->ConfigElement[i].ReasonCode);
7567 7669 volhandle = ddi_get16(mpt->m_acc_reply_frame_hdl,
7568 7670 &irChangeList->ConfigElement[i].VolDevHandle);
7569 7671 diskhandle = ddi_get16(mpt->m_acc_reply_frame_hdl,
7570 7672 &irChangeList->ConfigElement[i].PhysDiskDevHandle);
7571 7673
7572 7674 switch (reason) {
7573 7675 case MPI2_EVENT_IR_CHANGE_RC_ADDED:
7574 7676 case MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED:
7575 7677 {
7576 7678 NDBG20(("mptsas %d volume added\n",
7577 7679 mpt->m_instance));
7578 7680
7579 7681 topo_node = kmem_zalloc(
7580 7682 sizeof (mptsas_topo_change_list_t),
7581 7683 KM_SLEEP);
7582 7684
7583 7685 topo_node->mpt = mpt;
7584 7686 topo_node->event =
7585 7687 MPTSAS_DR_EVENT_RECONFIG_TARGET;
7586 7688 topo_node->un.physport = 0xff;
7587 7689 topo_node->devhdl = volhandle;
7588 7690 topo_node->flags =
7589 7691 MPTSAS_TOPO_FLAG_RAID_ASSOCIATED;
7590 7692 topo_node->object = NULL;
7591 7693 if (topo_head == NULL) {
7592 7694 topo_head = topo_tail = topo_node;
7593 7695 } else {
7594 7696 topo_tail->next = topo_node;
7595 7697 topo_tail = topo_node;
7596 7698 }
7597 7699 break;
7598 7700 }
7599 7701 case MPI2_EVENT_IR_CHANGE_RC_REMOVED:
7600 7702 case MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED:
7601 7703 {
7602 7704 NDBG20(("mptsas %d volume deleted\n",
7603 7705 mpt->m_instance));
7604 7706 ptgt = refhash_linear_search(mpt->m_targets,
7605 7707 mptsas_target_eval_devhdl, &volhandle);
7606 7708 if (ptgt == NULL)
7607 7709 break;
7608 7710
7609 7711 /*
7610 7712 * Clear any flags related to volume
7611 7713 */
7612 7714 (void) mptsas_delete_volume(mpt, volhandle);
7613 7715
7614 7716 /*
7615 7717 * Update the DR flag immediately to avoid I/O failure
7616 7718 */
7617 7719 mutex_enter(&mpt->m_tx_waitq_mutex);
7618 7720 ptgt->m_dr_flag = MPTSAS_DR_INTRANSITION;
7619 7721 mutex_exit(&mpt->m_tx_waitq_mutex);
7620 7722
7621 7723 topo_node = kmem_zalloc(
7622 7724 sizeof (mptsas_topo_change_list_t),
7623 7725 KM_SLEEP);
7624 7726 topo_node->mpt = mpt;
7625 7727 topo_node->un.phymask =
7626 7728 ptgt->m_addr.mta_phymask;
7627 7729 topo_node->event =
7628 7730 MPTSAS_DR_EVENT_OFFLINE_TARGET;
7629 7731 topo_node->devhdl = volhandle;
7630 7732 topo_node->flags =
7631 7733 MPTSAS_TOPO_FLAG_RAID_ASSOCIATED;
7632 7734 topo_node->object = (void *)ptgt;
7633 7735 if (topo_head == NULL) {
7634 7736 topo_head = topo_tail = topo_node;
7635 7737 } else {
7636 7738 topo_tail->next = topo_node;
7637 7739 topo_tail = topo_node;
7638 7740 }
7639 7741 break;
7640 7742 }
7641 7743 case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED:
7642 7744 case MPI2_EVENT_IR_CHANGE_RC_HIDE:
7643 7745 {
7644 7746 ptgt = refhash_linear_search(mpt->m_targets,
7645 7747 mptsas_target_eval_devhdl, &diskhandle);
7646 7748 if (ptgt == NULL)
7647 7749 break;
7648 7750
7649 7751 /*
7650 7752 * Update the DR flag immediately to avoid I/O failure
7651 7753 */
7652 7754 mutex_enter(&mpt->m_tx_waitq_mutex);
7653 7755 ptgt->m_dr_flag = MPTSAS_DR_INTRANSITION;
7654 7756 mutex_exit(&mpt->m_tx_waitq_mutex);
7655 7757
7656 7758 topo_node = kmem_zalloc(
7657 7759 sizeof (mptsas_topo_change_list_t),
7658 7760 KM_SLEEP);
7659 7761 topo_node->mpt = mpt;
7660 7762 topo_node->un.phymask =
7661 7763 ptgt->m_addr.mta_phymask;
7662 7764 topo_node->event =
7663 7765 MPTSAS_DR_EVENT_OFFLINE_TARGET;
7664 7766 topo_node->devhdl = diskhandle;
7665 7767 topo_node->flags =
7666 7768 MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED;
7667 7769 topo_node->object = (void *)ptgt;
7668 7770 if (topo_head == NULL) {
7669 7771 topo_head = topo_tail = topo_node;
7670 7772 } else {
7671 7773 topo_tail->next = topo_node;
7672 7774 topo_tail = topo_node;
7673 7775 }
7674 7776 break;
7675 7777 }
7676 7778 case MPI2_EVENT_IR_CHANGE_RC_UNHIDE:
7677 7779 case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED:
7678 7780 {
7679 7781 /*
7680 7782 * The physical drive is released by an IR
7681 7783 * volume, but we cannot get the physport
7682 7784 * or phynum from the event data, so we can
7683 7785 * only get the physport/phynum after a SAS
7684 7786 * Device Page0 request for the devhdl.
7685 7787 */
7686 7788 topo_node = kmem_zalloc(
7687 7789 sizeof (mptsas_topo_change_list_t),
7688 7790 KM_SLEEP);
7689 7791 topo_node->mpt = mpt;
7690 7792 topo_node->un.phymask = 0;
7691 7793 topo_node->event =
7692 7794 MPTSAS_DR_EVENT_RECONFIG_TARGET;
7693 7795 topo_node->devhdl = diskhandle;
7694 7796 topo_node->flags =
7695 7797 MPTSAS_TOPO_FLAG_RAID_PHYSDRV_ASSOCIATED;
7696 7798 topo_node->object = NULL;
7697 7799 mpt->m_port_chng = 1;
7698 7800 if (topo_head == NULL) {
7699 7801 topo_head = topo_tail = topo_node;
7700 7802 } else {
7701 7803 topo_tail->next = topo_node;
7702 7804 topo_tail = topo_node;
7703 7805 }
7704 7806 break;
7705 7807 }
7706 7808 default:
7707 7809 break;
7708 7810 }
7709 7811 }
7710 7812
7711 7813 if (topo_head != NULL) {
7712 7814 /*
7713 7815 * Launch DR taskq to handle topology change
7714 7816 */
(163 lines elided)
7715 7817 if ((ddi_taskq_dispatch(mpt->m_dr_taskq,
7716 7818 mptsas_handle_dr, (void *)topo_head,
7717 7819 DDI_NOSLEEP)) != DDI_SUCCESS) {
7718 7820 while (topo_head != NULL) {
7719 7821 topo_node = topo_head;
7720 7822 topo_head = topo_head->next;
7721 7823 kmem_free(topo_node,
7722 7824 sizeof (mptsas_topo_change_list_t));
7723 7825 }
7724 7826 mptsas_log(mpt, CE_NOTE, "mptsas start taskq "
7725 - "for handle SAS DR event failed. \n");
7827 + "for handle SAS DR event failed");
7726 7828 }
7727 7829 }
7728 7830 break;
7729 7831 }
7730 7832 default:
7731 7833 return (DDI_FAILURE);
7732 7834 }
7733 7835
7734 7836 return (DDI_SUCCESS);
7735 7837 }
7736 7838
7737 7839 /*
7738 7840 * handle events from ioc
7739 7841 */
7740 7842 static void
7741 7843 mptsas_handle_event(void *args)
7742 7844 {
7743 7845 m_replyh_arg_t *replyh_arg;
7744 7846 pMpi2EventNotificationReply_t eventreply;
7745 7847 uint32_t event, iocloginfo, rfm;
7746 7848 uint32_t status;
7747 7849 uint8_t port;
7748 7850 mptsas_t *mpt;
7749 7851 uint_t iocstatus;
7750 7852
7751 7853 replyh_arg = (m_replyh_arg_t *)args;
7752 7854 rfm = replyh_arg->rfm;
7753 7855 mpt = replyh_arg->mpt;
7754 7856
7755 7857 mutex_enter(&mpt->m_mutex);
7756 7858 /*
7757 7859 * If HBA is being reset, drop incoming event.
7758 7860 */
7759 7861 if (mpt->m_in_reset) {
(24 lines elided)
7760 7862 NDBG20(("dropping event received prior to reset"));
7761 7863 mutex_exit(&mpt->m_mutex);
7762 7864 return;
7763 7865 }
7764 7866
7765 7867 eventreply = (pMpi2EventNotificationReply_t)
7766 7868 (mpt->m_reply_frame + (rfm -
7767 7869 (mpt->m_reply_frame_dma_addr & 0xffffffffu)));
7768 7870 event = ddi_get16(mpt->m_acc_reply_frame_hdl, &eventreply->Event);
7769 7871
7770 - if (iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
7771 - &eventreply->IOCStatus)) {
7872 + if ((iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
7873 + &eventreply->IOCStatus)) != 0) {
7772 7874 if (iocstatus == MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) {
7773 7875 mptsas_log(mpt, CE_WARN,
7774 - "!mptsas_handle_event: IOCStatus=0x%x, "
7876 + "mptsas_handle_event: IOCStatus=0x%x, "
7775 7877 "IOCLogInfo=0x%x", iocstatus,
7776 7878 ddi_get32(mpt->m_acc_reply_frame_hdl,
7777 7879 &eventreply->IOCLogInfo));
7778 7880 } else {
7779 7881 mptsas_log(mpt, CE_WARN,
7780 7882 "mptsas_handle_event: IOCStatus=0x%x, "
7781 7883 "IOCLogInfo=0x%x", iocstatus,
7782 7884 ddi_get32(mpt->m_acc_reply_frame_hdl,
7783 7885 &eventreply->IOCLogInfo));
7784 7886 }
7785 7887 }
7786 7888
7787 7889 /*
7788 7890 * figure out what kind of event we got and handle accordingly
7789 7891 */
7790 7892 switch (event) {
7791 7893 case MPI2_EVENT_LOG_ENTRY_ADDED:
7792 7894 break;
7793 7895 case MPI2_EVENT_LOG_DATA:
7794 7896 iocloginfo = ddi_get32(mpt->m_acc_reply_frame_hdl,
7795 7897 &eventreply->IOCLogInfo);
7796 7898 NDBG20(("mptsas %d log info %x received.\n", mpt->m_instance,
7797 7899 iocloginfo));
7798 7900 break;
7799 7901 case MPI2_EVENT_STATE_CHANGE:
7800 7902 NDBG20(("mptsas%d state change.", mpt->m_instance));
7801 7903 break;
7802 7904 case MPI2_EVENT_HARD_RESET_RECEIVED:
7803 7905 NDBG20(("mptsas%d event change.", mpt->m_instance));
7804 7906 break;
7805 7907 case MPI2_EVENT_SAS_DISCOVERY:
7806 7908 {
7807 7909 MPI2_EVENT_DATA_SAS_DISCOVERY *sasdiscovery;
7808 7910 char string[80];
7809 7911 uint8_t rc;
7810 7912
7811 7913 sasdiscovery =
7812 7914 (pMpi2EventDataSasDiscovery_t)eventreply->EventData;
7813 7915
7814 7916 rc = ddi_get8(mpt->m_acc_reply_frame_hdl,
7815 7917 &sasdiscovery->ReasonCode);
7816 7918 port = ddi_get8(mpt->m_acc_reply_frame_hdl,
7817 7919 &sasdiscovery->PhysicalPort);
7818 7920 status = ddi_get32(mpt->m_acc_reply_frame_hdl,
7819 7921 &sasdiscovery->DiscoveryStatus);
7820 7922
7821 7923 string[0] = 0;
7822 7924 switch (rc) {
7823 7925 case MPI2_EVENT_SAS_DISC_RC_STARTED:
7824 7926 (void) sprintf(string, "STARTING");
7825 7927 break;
7826 7928 case MPI2_EVENT_SAS_DISC_RC_COMPLETED:
7827 7929 (void) sprintf(string, "COMPLETED");
7828 7930 break;
7829 7931 default:
7830 7932 (void) sprintf(string, "UNKNOWN");
7831 7933 break;
7832 7934 }
7833 7935
7834 7936 NDBG20(("SAS DISCOVERY is %s for port %d, status %x", string,
7835 7937 port, status));
7836 7938
7837 7939 break;
7838 7940 }
7839 7941 case MPI2_EVENT_EVENT_CHANGE:
7840 7942 NDBG20(("mptsas%d event change.", mpt->m_instance));
7841 7943 break;
7842 7944 case MPI2_EVENT_TASK_SET_FULL:
7843 7945 {
7844 7946 pMpi2EventDataTaskSetFull_t taskfull;
7845 7947
7846 7948 taskfull = (pMpi2EventDataTaskSetFull_t)eventreply->EventData;
7847 7949
7848 7950 NDBG20(("TASK_SET_FULL received for mptsas%d, depth %d\n",
7849 7951 mpt->m_instance, ddi_get16(mpt->m_acc_reply_frame_hdl,
7850 7952 &taskfull->CurrentDepth)));
7851 7953 break;
7852 7954 }
7853 7955 case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
7854 7956 {
7855 7957 /*
7856 7958 * SAS TOPOLOGY CHANGE LIST Event has already been handled
7857 7959 * in mptsas_handle_event_sync(), in interrupt context
7858 7960 */
7859 7961 break;
7860 7962 }
7861 7963 case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
7862 7964 {
7863 7965 pMpi2EventDataSasEnclDevStatusChange_t encstatus;
7864 7966 uint8_t rc;
7865 7967 uint16_t enchdl;
7866 7968 char string[80];
7867 7969 mptsas_enclosure_t *mep;
7868 7970
7869 7971 encstatus = (pMpi2EventDataSasEnclDevStatusChange_t)
7870 7972 eventreply->EventData;
7871 7973
7872 7974 rc = ddi_get8(mpt->m_acc_reply_frame_hdl,
7873 7975 &encstatus->ReasonCode);
7874 7976 enchdl = ddi_get16(mpt->m_acc_reply_frame_hdl,
(90 lines elided)
7875 7977 &encstatus->EnclosureHandle);
7876 7978
7877 7979 switch (rc) {
7878 7980 case MPI2_EVENT_SAS_ENCL_RC_ADDED:
7879 7981 (void) sprintf(string, "added");
7880 7982 break;
7881 7983 case MPI2_EVENT_SAS_ENCL_RC_NOT_RESPONDING:
7882 7984 mep = mptsas_enc_lookup(mpt, enchdl);
7883 7985 if (mep != NULL) {
7884 7986 list_remove(&mpt->m_enclosures, mep);
7885 - kmem_free(mep, sizeof (*mep));
7987 + mptsas_enc_free(mep);
7988 + mep = NULL;
7886 7989 }
7887 7990 (void) sprintf(string, ", not responding");
7888 7991 break;
7889 7992 default:
7890 7993 break;
7891 7994 }
7892 7995 NDBG20(("mptsas%d ENCLOSURE STATUS CHANGE for enclosure "
7893 7996 "%x%s\n", mpt->m_instance,
7894 7997 ddi_get16(mpt->m_acc_reply_frame_hdl,
7895 7998 &encstatus->EnclosureHandle), string));
7896 7999
7897 8000 /*
7898 8001 * No matter what has happened, update all of our device state
7899 8002 * for enclosures, by retriggering an evaluation.
7900 8003 */
7901 8004 mpt->m_done_traverse_enc = 0;
7902 8005 mptsas_update_hashtab(mpt);
7903 8006 break;
7904 8007 }
7905 8008
7906 8009 /*
7907 8010 * MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE is handled by
7908 8011 * mptsas_handle_event_sync(); here we just send an ack message.
7909 8012 */
7910 8013 case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE:
7911 8014 {
7912 8015 pMpi2EventDataSasDeviceStatusChange_t statuschange;
7913 8016 uint8_t rc;
7914 8017 uint16_t devhdl;
7915 8018 uint64_t wwn = 0;
7916 8019 uint32_t wwn_lo, wwn_hi;
7917 8020
7918 8021 statuschange = (pMpi2EventDataSasDeviceStatusChange_t)
7919 8022 eventreply->EventData;
7920 8023 rc = ddi_get8(mpt->m_acc_reply_frame_hdl,
7921 8024 &statuschange->ReasonCode);
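		/*
		 * The 64-bit SASAddress is read as two 32-bit halves
		 * (low word first) and reassembled into wwn below.
		 */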
7922 8025 wwn_lo = ddi_get32(mpt->m_acc_reply_frame_hdl,
7923 8026 (uint32_t *)(void *)&statuschange->SASAddress);
7924 8027 wwn_hi = ddi_get32(mpt->m_acc_reply_frame_hdl,
7925 8028 (uint32_t *)(void *)&statuschange->SASAddress + 1);
7926 8029 wwn = ((uint64_t)wwn_hi << 32) | wwn_lo;
7927 8030 devhdl = ddi_get16(mpt->m_acc_reply_frame_hdl,
7928 8031 &statuschange->DevHandle);
7929 8032
7930 8033 NDBG13(("MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE wwn is %"PRIx64,
7931 8034 wwn));
7932 8035
7933 8036 switch (rc) {
7934 8037 case MPI2_EVENT_SAS_DEV_STAT_RC_SMART_DATA:
7935 8038 NDBG20(("SMART data received, ASC/ASCQ = %02x/%02x",
7936 8039 ddi_get8(mpt->m_acc_reply_frame_hdl,
7937 8040 &statuschange->ASC),
7938 8041 ddi_get8(mpt->m_acc_reply_frame_hdl,
7939 8042 &statuschange->ASCQ)));
7940 8043 break;
7941 8044
7942 8045 case MPI2_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED:
7943 8046 NDBG20(("Device not supported"));
7944 8047 break;
7945 8048
7946 8049 case MPI2_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET:
7947 8050 NDBG20(("IOC internally generated the Target Reset "
7948 8051 "for devhdl:%x", devhdl));
7949 8052 break;
7950 8053
7951 8054 case MPI2_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
7952 8055 NDBG20(("IOC's internally generated Target Reset "
7953 8056 "completed for devhdl:%x", devhdl));
7954 8057 break;
7955 8058
7956 8059 case MPI2_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL:
7957 8060 NDBG20(("IOC internally generated Abort Task"));
7958 8061 break;
7959 8062
7960 8063 case MPI2_EVENT_SAS_DEV_STAT_RC_CMP_TASK_ABORT_INTERNAL:
7961 8064 NDBG20(("IOC's internally generated Abort Task "
7962 8065 "completed"));
7963 8066 break;
7964 8067
7965 8068 case MPI2_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL:
7966 8069 NDBG20(("IOC internally generated Abort Task Set"));
7967 8070 break;
7968 8071
7969 8072 case MPI2_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL:
7970 8073 NDBG20(("IOC internally generated Clear Task Set"));
7971 8074 break;
7972 8075
7973 8076 case MPI2_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL:
7974 8077 NDBG20(("IOC internally generated Query Task"));
7975 8078 break;
7976 8079
7977 8080 case MPI2_EVENT_SAS_DEV_STAT_RC_ASYNC_NOTIFICATION:
7978 8081 NDBG20(("Device sent an Asynchronous Notification"));
7979 8082 break;
7980 8083
7981 8084 default:
7982 8085 break;
7983 8086 }
7984 8087 break;
7985 8088 }
7986 8089 case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
7987 8090 {
7988 8091 /*
7989 8092 * IR TOPOLOGY CHANGE LIST Event has already been handled
7990 8093 * in mptsas_handle_event_sync(), in interrupt context
7991 8094 */
7992 8095 break;
7993 8096 }
7994 8097 case MPI2_EVENT_IR_OPERATION_STATUS:
7995 8098 {
7996 8099 Mpi2EventDataIrOperationStatus_t *irOpStatus;
7997 8100 char reason_str[80];
7998 8101 uint8_t rc, percent;
7999 8102 uint16_t handle;
8000 8103
8001 8104 irOpStatus = (pMpi2EventDataIrOperationStatus_t)
8002 8105 eventreply->EventData;
8003 8106 rc = ddi_get8(mpt->m_acc_reply_frame_hdl,
8004 8107 &irOpStatus->RAIDOperation);
8005 8108 percent = ddi_get8(mpt->m_acc_reply_frame_hdl,
8006 8109 &irOpStatus->PercentComplete);
8007 8110 handle = ddi_get16(mpt->m_acc_reply_frame_hdl,
8008 8111 &irOpStatus->VolDevHandle);
8009 8112
8010 8113 switch (rc) {
8011 8114 case MPI2_EVENT_IR_RAIDOP_RESYNC:
8012 8115 (void) sprintf(reason_str, "resync");
8013 8116 break;
8014 8117 case MPI2_EVENT_IR_RAIDOP_ONLINE_CAP_EXPANSION:
8015 8118 (void) sprintf(reason_str, "online capacity "
8016 8119 "expansion");
8017 8120 break;
8018 8121 case MPI2_EVENT_IR_RAIDOP_CONSISTENCY_CHECK:
8019 8122 (void) sprintf(reason_str, "consistency check");
8020 8123 break;
8021 8124 default:
8022 8125 (void) sprintf(reason_str, "unknown reason %x",
8023 8126 rc);
8024 8127 }
8025 8128
8026 8129 NDBG20(("mptsas%d raid operational status: (%s)"
8027 8130 "\thandle(0x%04x), percent complete(%d)\n",
8028 8131 mpt->m_instance, reason_str, handle, percent));
8029 8132 break;
8030 8133 }
8031 8134 case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE:
8032 8135 {
8033 8136 pMpi2EventDataSasBroadcastPrimitive_t sas_broadcast;
8034 8137 uint8_t phy_num;
8035 8138 uint8_t primitive;
8036 8139
8037 8140 sas_broadcast = (pMpi2EventDataSasBroadcastPrimitive_t)
8038 8141 eventreply->EventData;
8039 8142
8040 8143 phy_num = ddi_get8(mpt->m_acc_reply_frame_hdl,
8041 8144 &sas_broadcast->PhyNum);
8042 8145 primitive = ddi_get8(mpt->m_acc_reply_frame_hdl,
8043 8146 &sas_broadcast->Primitive);
8044 8147
8045 8148 switch (primitive) {
8046 8149 case MPI2_EVENT_PRIMITIVE_CHANGE:
8047 8150 mptsas_smhba_log_sysevent(mpt,
8048 8151 ESC_SAS_HBA_PORT_BROADCAST,
8049 8152 SAS_PORT_BROADCAST_CHANGE,
8050 8153 &mpt->m_phy_info[phy_num].smhba_info);
8051 8154 break;
8052 8155 case MPI2_EVENT_PRIMITIVE_SES:
8053 8156 mptsas_smhba_log_sysevent(mpt,
8054 8157 ESC_SAS_HBA_PORT_BROADCAST,
8055 8158 SAS_PORT_BROADCAST_SES,
8056 8159 &mpt->m_phy_info[phy_num].smhba_info);
8057 8160 break;
8058 8161 case MPI2_EVENT_PRIMITIVE_EXPANDER:
8059 8162 mptsas_smhba_log_sysevent(mpt,
8060 8163 ESC_SAS_HBA_PORT_BROADCAST,
8061 8164 SAS_PORT_BROADCAST_D01_4,
8062 8165 &mpt->m_phy_info[phy_num].smhba_info);
8063 8166 break;
8064 8167 case MPI2_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT:
8065 8168 mptsas_smhba_log_sysevent(mpt,
8066 8169 ESC_SAS_HBA_PORT_BROADCAST,
8067 8170 SAS_PORT_BROADCAST_D04_7,
8068 8171 &mpt->m_phy_info[phy_num].smhba_info);
8069 8172 break;
8070 8173 case MPI2_EVENT_PRIMITIVE_RESERVED3:
8071 8174 mptsas_smhba_log_sysevent(mpt,
8072 8175 ESC_SAS_HBA_PORT_BROADCAST,
8073 8176 SAS_PORT_BROADCAST_D16_7,
8074 8177 &mpt->m_phy_info[phy_num].smhba_info);
8075 8178 break;
8076 8179 case MPI2_EVENT_PRIMITIVE_RESERVED4:
8077 8180 mptsas_smhba_log_sysevent(mpt,
8078 8181 ESC_SAS_HBA_PORT_BROADCAST,
8079 8182 SAS_PORT_BROADCAST_D29_7,
8080 8183 &mpt->m_phy_info[phy_num].smhba_info);
8081 8184 break;
8082 8185 case MPI2_EVENT_PRIMITIVE_CHANGE0_RESERVED:
8083 8186 mptsas_smhba_log_sysevent(mpt,
8084 8187 ESC_SAS_HBA_PORT_BROADCAST,
8085 8188 SAS_PORT_BROADCAST_D24_0,
8086 8189 &mpt->m_phy_info[phy_num].smhba_info);
8087 8190 break;
8088 8191 case MPI2_EVENT_PRIMITIVE_CHANGE1_RESERVED:
8089 8192 mptsas_smhba_log_sysevent(mpt,
8090 8193 ESC_SAS_HBA_PORT_BROADCAST,
8091 8194 SAS_PORT_BROADCAST_D27_4,
8092 8195 &mpt->m_phy_info[phy_num].smhba_info);
8093 8196 break;
8094 8197 default:
8095 8198 NDBG16(("mptsas%d: unknown BROADCAST PRIMITIVE"
8096 8199 " %x received",
8097 8200 mpt->m_instance, primitive));
8098 8201 break;
8099 8202 }
8100 8203 NDBG16(("mptsas%d sas broadcast primitive: "
8101 8204 "\tprimitive(0x%04x), phy(%d) complete\n",
8102 8205 mpt->m_instance, primitive, phy_num));
8103 8206 break;
8104 8207 }
8105 8208 case MPI2_EVENT_IR_VOLUME:
8106 8209 {
8107 8210 Mpi2EventDataIrVolume_t *irVolume;
8108 8211 uint16_t devhandle;
8109 8212 uint32_t state;
8110 8213 int config, vol;
8111 8214 uint8_t found = FALSE;
8112 8215
8113 8216 irVolume = (pMpi2EventDataIrVolume_t)eventreply->EventData;
8114 8217 state = ddi_get32(mpt->m_acc_reply_frame_hdl,
8115 8218 &irVolume->NewValue);
8116 8219 devhandle = ddi_get16(mpt->m_acc_reply_frame_hdl,
8117 8220 &irVolume->VolDevHandle);
8118 8221
8119 8222 NDBG20(("EVENT_IR_VOLUME event is received"));
8120 8223
8121 8224 /*
8122 8225 * Get latest RAID info and then find the DevHandle for this
8123 8226 * event in the configuration. If the DevHandle is not found
8124 8227 * just exit the event.
8125 8228 */
8126 8229 (void) mptsas_get_raid_info(mpt);
8127 8230 for (config = 0; (config < mpt->m_num_raid_configs) &&
8128 8231 (!found); config++) {
8129 8232 for (vol = 0; vol < MPTSAS_MAX_RAIDVOLS; vol++) {
8130 8233 if (mpt->m_raidconfig[config].m_raidvol[vol].
8131 8234 m_raidhandle == devhandle) {
8132 8235 found = TRUE;
8133 8236 break;
8134 8237 }
8135 8238 }
8136 8239 }
8137 8240 if (!found) {
8138 8241 break;
8139 8242 }
8140 8243
8141 8244 switch (irVolume->ReasonCode) {
(246 lines elided)
8142 8245 case MPI2_EVENT_IR_VOLUME_RC_SETTINGS_CHANGED:
8143 8246 {
8144 8247 uint32_t i;
8145 8248 mpt->m_raidconfig[config].m_raidvol[vol].m_settings =
8146 8249 state;
8147 8250
8148 8251 i = state & MPI2_RAIDVOL0_SETTING_MASK_WRITE_CACHING;
8149 8252 mptsas_log(mpt, CE_NOTE, " Volume %d settings changed"
8150 8253 ", auto-config of hot-swap drives is %s"
8151 8254 ", write caching is %s"
8152 - ", hot-spare pool mask is %02x\n",
8255 + ", hot-spare pool mask is %02x",
8153 8256 vol, state &
8154 8257 MPI2_RAIDVOL0_SETTING_AUTO_CONFIG_HSWAP_DISABLE
8155 8258 ? "disabled" : "enabled",
8156 8259 i == MPI2_RAIDVOL0_SETTING_UNCHANGED
8157 8260 ? "controlled by member disks" :
8158 8261 i == MPI2_RAIDVOL0_SETTING_DISABLE_WRITE_CACHING
8159 8262 ? "disabled" :
8160 8263 i == MPI2_RAIDVOL0_SETTING_ENABLE_WRITE_CACHING
8161 8264 ? "enabled" :
8162 8265 "incorrectly set",
8163 8266 (state >> 16) & 0xff);
8164 8267 break;
8165 8268 }
8166 8269 case MPI2_EVENT_IR_VOLUME_RC_STATE_CHANGED:
8167 8270 {
8168 8271 mpt->m_raidconfig[config].m_raidvol[vol].m_state =
8169 8272 (uint8_t)state;
8170 8273
8171 8274 mptsas_log(mpt, CE_NOTE,
8172 - "Volume %d is now %s\n", vol,
8275 + "Volume %d is now %s", vol,
8173 8276 state == MPI2_RAID_VOL_STATE_OPTIMAL
8174 8277 ? "optimal" :
8175 8278 state == MPI2_RAID_VOL_STATE_DEGRADED
8176 8279 ? "degraded" :
8177 8280 state == MPI2_RAID_VOL_STATE_ONLINE
8178 8281 ? "online" :
8179 8282 state == MPI2_RAID_VOL_STATE_INITIALIZING
8180 8283 ? "initializing" :
8181 8284 state == MPI2_RAID_VOL_STATE_FAILED
8182 8285 ? "failed" :
8183 8286 state == MPI2_RAID_VOL_STATE_MISSING
(1 line elided)
8184 8287 ? "missing" :
8185 8288 "state unknown");
8186 8289 break;
8187 8290 }
8188 8291 case MPI2_EVENT_IR_VOLUME_RC_STATUS_FLAGS_CHANGED:
8189 8292 {
8190 8293 mpt->m_raidconfig[config].m_raidvol[vol].
8191 8294 m_statusflags = state;
8192 8295
8193 8296 mptsas_log(mpt, CE_NOTE,
8194 - " Volume %d is now %s%s%s%s%s%s%s%s%s\n",
8297 + " Volume %d is now %s%s%s%s%s%s%s%s%s",
8195 8298 vol,
8196 8299 state & MPI2_RAIDVOL0_STATUS_FLAG_ENABLED
8197 8300 ? ", enabled" : ", disabled",
8198 8301 state & MPI2_RAIDVOL0_STATUS_FLAG_QUIESCED
8199 8302 ? ", quiesced" : "",
8200 8303 state & MPI2_RAIDVOL0_STATUS_FLAG_VOLUME_INACTIVE
8201 8304 ? ", inactive" : ", active",
8202 8305 state &
8203 8306 MPI2_RAIDVOL0_STATUS_FLAG_BAD_BLOCK_TABLE_FULL
8204 8307 ? ", bad block table is full" : "",
8205 8308 state &
8206 8309 MPI2_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS
8207 8310 ? ", resync in progress" : "",
8208 8311 state & MPI2_RAIDVOL0_STATUS_FLAG_BACKGROUND_INIT
8209 8312 ? ", background initialization in progress" : "",
8210 8313 state &
8211 8314 MPI2_RAIDVOL0_STATUS_FLAG_CAPACITY_EXPANSION
8212 8315 ? ", capacity expansion in progress" : "",
8213 8316 state &
8214 8317 MPI2_RAIDVOL0_STATUS_FLAG_CONSISTENCY_CHECK
8215 8318 ? ", consistency check in progress" : "",
8216 8319 state & MPI2_RAIDVOL0_STATUS_FLAG_DATA_SCRUB
8217 8320 ? ", data scrub in progress" : "");
8218 8321 break;
8219 8322 }
8220 8323 default:
8221 8324 break;
8222 8325 }
8223 8326 break;
8224 8327 }
8225 8328 case MPI2_EVENT_IR_PHYSICAL_DISK:
8226 8329 {
8227 8330 Mpi2EventDataIrPhysicalDisk_t *irPhysDisk;
8228 8331 uint16_t devhandle, enchandle, slot;
8229 8332 uint32_t status, state;
8230 8333 uint8_t physdisknum, reason;
8231 8334
8232 8335 irPhysDisk = (Mpi2EventDataIrPhysicalDisk_t *)
8233 8336 eventreply->EventData;
8234 8337 physdisknum = ddi_get8(mpt->m_acc_reply_frame_hdl,
8235 8338 &irPhysDisk->PhysDiskNum);
8236 8339 devhandle = ddi_get16(mpt->m_acc_reply_frame_hdl,
8237 8340 &irPhysDisk->PhysDiskDevHandle);
8238 8341 enchandle = ddi_get16(mpt->m_acc_reply_frame_hdl,
8239 8342 &irPhysDisk->EnclosureHandle);
8240 8343 slot = ddi_get16(mpt->m_acc_reply_frame_hdl,
8241 8344 &irPhysDisk->Slot);
8242 8345 state = ddi_get32(mpt->m_acc_reply_frame_hdl,
8243 8346 &irPhysDisk->NewValue);
8244 8347 reason = ddi_get8(mpt->m_acc_reply_frame_hdl,
8245 8348 &irPhysDisk->ReasonCode);
8246 8349
8247 8350 NDBG20(("EVENT_IR_PHYSICAL_DISK event is received"));
8248 8351
8249 8352 switch (reason) {
8250 8353 case MPI2_EVENT_IR_PHYSDISK_RC_SETTINGS_CHANGED:
8251 8354 mptsas_log(mpt, CE_NOTE,
8252 8355 " PhysDiskNum %d with DevHandle 0x%x in slot %d "
8253 8356 "for enclosure with handle 0x%x is now in hot "
(49 lines elided)
8254 8357 "spare pool %d",
8255 8358 physdisknum, devhandle, slot, enchandle,
8256 8359 (state >> 16) & 0xff);
8257 8360 break;
8258 8361
8259 8362 case MPI2_EVENT_IR_PHYSDISK_RC_STATUS_FLAGS_CHANGED:
8260 8363 status = state;
8261 8364 mptsas_log(mpt, CE_NOTE,
8262 8365 " PhysDiskNum %d with DevHandle 0x%x in slot %d "
8263 8366 "for enclosure with handle 0x%x is now "
8264 - "%s%s%s%s%s\n", physdisknum, devhandle, slot,
8367 + "%s%s%s%s%s", physdisknum, devhandle, slot,
8265 8368 enchandle,
8266 8369 status & MPI2_PHYSDISK0_STATUS_FLAG_INACTIVE_VOLUME
8267 8370 ? ", inactive" : ", active",
8268 8371 status & MPI2_PHYSDISK0_STATUS_FLAG_OUT_OF_SYNC
8269 8372 ? ", out of sync" : "",
8270 8373 status & MPI2_PHYSDISK0_STATUS_FLAG_QUIESCED
8271 8374 ? ", quiesced" : "",
8272 8375 status &
8273 8376 MPI2_PHYSDISK0_STATUS_FLAG_WRITE_CACHE_ENABLED
8274 8377 ? ", write cache enabled" : "",
8275 8378 status & MPI2_PHYSDISK0_STATUS_FLAG_OCE_TARGET
8276 8379 ? ", capacity expansion target" : "");
8277 8380 break;
8278 8381
8279 8382 case MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED:
8280 8383 mptsas_log(mpt, CE_NOTE,
8281 8384 " PhysDiskNum %d with DevHandle 0x%x in slot %d "
8282 - "for enclosure with handle 0x%x is now %s\n",
8385 + "for enclosure with handle 0x%x is now %s",
8283 8386 physdisknum, devhandle, slot, enchandle,
8284 8387 state == MPI2_RAID_PD_STATE_OPTIMAL
8285 8388 ? "optimal" :
8286 8389 state == MPI2_RAID_PD_STATE_REBUILDING
8287 8390 ? "rebuilding" :
8288 8391 state == MPI2_RAID_PD_STATE_DEGRADED
8289 8392 ? "degraded" :
8290 8393 state == MPI2_RAID_PD_STATE_HOT_SPARE
8291 8394 ? "a hot spare" :
8292 8395 state == MPI2_RAID_PD_STATE_ONLINE
8293 8396 ? "online" :
8294 8397 state == MPI2_RAID_PD_STATE_OFFLINE
(2 lines elided)
8295 8398 ? "offline" :
8296 8399 state == MPI2_RAID_PD_STATE_NOT_COMPATIBLE
8297 8400 ? "not compatible" :
8298 8401 state == MPI2_RAID_PD_STATE_NOT_CONFIGURED
8299 8402 ? "not configured" :
8300 8403 "state unknown");
8301 8404 break;
8302 8405 }
8303 8406 break;
8304 8407 }
8408 + case MPI2_EVENT_ACTIVE_CABLE_EXCEPTION:
8409 + {
8410 + pMpi26EventDataActiveCableExcept_t actcable;
8411 + uint32_t power;
8412 + uint8_t reason, id;
8413 +
8414 + actcable = (pMpi26EventDataActiveCableExcept_t)
8415 + eventreply->EventData;
8416 + power = ddi_get32(mpt->m_acc_reply_frame_hdl,
8417 + &actcable->ActiveCablePowerRequirement);
8418 + reason = ddi_get8(mpt->m_acc_reply_frame_hdl,
8419 + &actcable->ReasonCode);
8420 + id = ddi_get8(mpt->m_acc_reply_frame_hdl,
8421 + &actcable->ReceptacleID);
8422 +
8423 + /*
8424 + * It'd be nice if this weren't just logging to the system but
8425 + * were telling FMA about the active cable problem and FMA was
8426 + * aware of the cable topology and state.
8427 + */
8428 + switch (reason) {
8429 + case MPI26_EVENT_ACTIVE_CABLE_PRESENT:
8430 + /* Don't log anything if it's fine */
8431 + break;
8432 + case MPI26_EVENT_ACTIVE_CABLE_INSUFFICIENT_POWER:
8433 + mptsas_log(mpt, CE_WARN, "An active cable (id %u) does "
8434 + "not have sufficient power to be enabled. "
8435 + "Devices connected to this cable will not be "
8436 + "visible to the system.", id);
8437 + if (power == UINT32_MAX) {
8438 + mptsas_log(mpt, CE_CONT, "The cable's power "
8439 + "requirements are unknown.\n");
8440 + } else {
8441 + mptsas_log(mpt, CE_CONT, "The cable requires "
8442 + "%u mW of power to function.\n", power);
8443 + }
8444 + break;
8445 + case MPI26_EVENT_ACTIVE_CABLE_DEGRADED:
8446 + mptsas_log(mpt, CE_WARN, "An active cable (id %u) is "
8447 + "degraded and not running at its full speed. "
8448 + "Some devices might not appear.", id);
8449 + break;
8450 + default:
8451 + break;
8452 + }
8453 + break;
8454 + }
8455 + case MPI2_EVENT_PCIE_DEVICE_STATUS_CHANGE:
8456 + case MPI2_EVENT_PCIE_ENUMERATION:
8457 + case MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST:
8458 + case MPI2_EVENT_PCIE_LINK_COUNTER:
8459 + mptsas_log(mpt, CE_NOTE, "Unhandled mpt_sas PCIe device "
8460 + "event received (0x%x)", event);
8461 + break;
8305 8462 default:
8306 8463 NDBG20(("mptsas%d: unknown event %x received",
8307 8464 mpt->m_instance, event));
8308 8465 break;
8309 8466 }
8310 8467
8311 8468 /*
8312 8469 * Return the reply frame to the free queue.
8313 8470 */
8314 8471 ddi_put32(mpt->m_acc_free_queue_hdl,
8315 8472 &((uint32_t *)(void *)mpt->m_free_queue)[mpt->m_free_index], rfm);
8316 8473 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
8317 8474 DDI_DMA_SYNC_FORDEV);
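	/*
	 * The free queue is circular: wrap the index when it reaches the
	 * queue depth, then let the IOC know the new free index.
	 */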
8318 8475 if (++mpt->m_free_index == mpt->m_free_queue_depth) {
8319 8476 mpt->m_free_index = 0;
8320 8477 }
8321 8478 ddi_put32(mpt->m_datap, &mpt->m_reg->ReplyFreeHostIndex,
8322 8479 mpt->m_free_index);
8323 8480 mutex_exit(&mpt->m_mutex);
8324 8481 }
8325 8482
8326 8483 /*
8327 8484 * invoked from timeout() to restart qfull cmds with throttle == 0
8328 8485 */
8329 8486 static void
8330 8487 mptsas_restart_cmd(void *arg)
8331 8488 {
8332 8489 mptsas_t *mpt = arg;
8333 8490 mptsas_target_t *ptgt = NULL;
8334 8491
8335 8492 mutex_enter(&mpt->m_mutex);
8336 8493
8337 8494 mpt->m_restart_cmd_timeid = 0;
8338 8495
8339 8496 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
8340 8497 ptgt = refhash_next(mpt->m_targets, ptgt)) {
8341 8498 if (ptgt->m_reset_delay == 0) {
8342 8499 if (ptgt->m_t_throttle == QFULL_THROTTLE) {
8343 8500 mptsas_set_throttle(mpt, ptgt,
8344 8501 MAX_THROTTLE);
8345 8502 }
8346 8503 }
8347 8504 }
8348 8505 mptsas_restart_hba(mpt);
8349 8506 mutex_exit(&mpt->m_mutex);
8350 8507 }
8351 8508
8352 8509 void
8353 8510 mptsas_remove_cmd(mptsas_t *mpt, mptsas_cmd_t *cmd)
8354 8511 {
8355 8512 int slot;
8356 8513 mptsas_slots_t *slots = mpt->m_active;
8357 8514 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
8358 8515
8359 8516 ASSERT(cmd != NULL);
8360 8517 ASSERT(cmd->cmd_queued == FALSE);
8361 8518
8362 8519 /*
8363 8520 * Task Management cmds are removed in their own routines. Also,
8364 8521 * we don't want to modify timeout based on TM cmds.
8365 8522 */
8366 8523 if (cmd->cmd_flags & CFLAG_TM_CMD) {
8367 8524 return;
8368 8525 }
8369 8526
8370 8527 slot = cmd->cmd_slot;
8371 8528
8372 8529 /*
8373 8530 * remove the cmd.
8374 8531 */
8375 8532 if (cmd == slots->m_slot[slot]) {
8376 8533 NDBG31(("mptsas_remove_cmd: removing cmd=0x%p, flags "
8377 8534 "0x%x", (void *)cmd, cmd->cmd_flags));
8378 8535 slots->m_slot[slot] = NULL;
8379 8536 mpt->m_ncmds--;
8380 8537
8381 8538 /*
8382 8539 * only decrement per target ncmds if command
8383 8540 * has a target associated with it.
8384 8541 */
8385 8542 if ((cmd->cmd_flags & CFLAG_CMDIOC) == 0) {
8386 8543 ptgt->m_t_ncmds--;
8387 8544 /*
8388 8545 * reset throttle if we just ran an untagged command
8389 8546 * to a tagged target
8390 8547 */
8391 8548 if ((ptgt->m_t_ncmds == 0) &&
8392 8549 ((cmd->cmd_pkt_flags & FLAG_TAGMASK) == 0)) {
8393 8550 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
8394 8551 }
8395 8552
8396 8553 /*
8397 8554 * Remove this command from the active queue.
8398 8555 */
8399 8556 if (cmd->cmd_active_expiration != 0) {
8400 8557 TAILQ_REMOVE(&ptgt->m_active_cmdq, cmd,
8401 8558 cmd_active_link);
8402 8559 cmd->cmd_active_expiration = 0;
8403 8560 }
8404 8561 }
8405 8562 }
8406 8563
8407 8564 /*
8408 8565 * This is all we need to do for ioc commands.
8409 8566 */
8410 8567 if (cmd->cmd_flags & CFLAG_CMDIOC) {
8411 8568 mptsas_return_to_pool(mpt, cmd);
8412 8569 return;
8413 8570 }
8414 8571
8415 8572 ASSERT(cmd != slots->m_slot[cmd->cmd_slot]);
8416 8573 }
8417 8574
8418 8575 /*
8419 8576 * Accept all cmds on the tx_waitq, if any, and then
8420 8577 * start a fresh request from the top of the device queue.
8421 8578 *
8422 8579 * Since cmds are frequently queued on the tx_waitq but only rarely on
8423 8580 * the instance waitq, this function should not be invoked in the ISR;
8424 8581 * mptsas_restart_waitq() is invoked there instead. Otherwise the burden
8425 8582 * that belongs to the I/O dispatch CPUs is moved onto the interrupt CPU.
8426 8583 */
8427 8584 static void
8428 8585 mptsas_restart_hba(mptsas_t *mpt)
8429 8586 {
8430 8587 ASSERT(mutex_owned(&mpt->m_mutex));
8431 8588
8432 8589 mutex_enter(&mpt->m_tx_waitq_mutex);
8433 8590 if (mpt->m_tx_waitq) {
8434 8591 mptsas_accept_tx_waitq(mpt);
8435 8592 }
8436 8593 mutex_exit(&mpt->m_tx_waitq_mutex);
8437 8594 mptsas_restart_waitq(mpt);
8438 8595 }
8439 8596
8440 8597 /*
8441 8598 * start a fresh request from the top of the device queue
8442 8599 */
8443 8600 static void
8444 8601 mptsas_restart_waitq(mptsas_t *mpt)
8445 8602 {
8446 8603 mptsas_cmd_t *cmd, *next_cmd;
8447 8604 mptsas_target_t *ptgt = NULL;
8448 8605
8449 8606 NDBG1(("mptsas_restart_waitq: mpt=0x%p", (void *)mpt));
8450 8607
8451 8608 ASSERT(mutex_owned(&mpt->m_mutex));
8452 8609
8453 8610 /*
8454 8611 * If there is a reset delay, don't start any cmds. Otherwise, start
8455 8612 * as many cmds as possible.
8456 8613 * Since SMID 0 is reserved and the TM slot is reserved, the actual max
8457 8614 * commands is m_max_requests - 2.
8458 8615 */
8459 8616 cmd = mpt->m_waitq;
8460 8617
8461 8618 while (cmd != NULL) {
8462 8619 next_cmd = cmd->cmd_linkp;
8463 8620 if (cmd->cmd_flags & CFLAG_PASSTHRU) {
8464 8621 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
8465 8622 /*
8466 8623 * A passthru command that gets a slot
8467 8624 * needs CFLAG_PREPARED set.
8468 8625 */
8469 8626 cmd->cmd_flags |= CFLAG_PREPARED;
8470 8627 mptsas_waitq_delete(mpt, cmd);
8471 8628 mptsas_start_passthru(mpt, cmd);
8472 8629 }
8473 8630 cmd = next_cmd;
8474 8631 continue;
8475 8632 }
8476 8633 if (cmd->cmd_flags & CFLAG_CONFIG) {
8477 8634 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
8478 8635 /*
8479 8636 * Send the config page request and delete it
8480 8637 * from the waitq.
8481 8638 */
8482 8639 cmd->cmd_flags |= CFLAG_PREPARED;
8483 8640 mptsas_waitq_delete(mpt, cmd);
8484 8641 mptsas_start_config_page_access(mpt, cmd);
8485 8642 }
8486 8643 cmd = next_cmd;
8487 8644 continue;
8488 8645 }
8489 8646 if (cmd->cmd_flags & CFLAG_FW_DIAG) {
8490 8647 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
8491 8648 /*
8492 8649 * Send the FW Diag request and delete if from
8493 8650 * the waitq.
8494 8651 */
8495 8652 cmd->cmd_flags |= CFLAG_PREPARED;
8496 8653 mptsas_waitq_delete(mpt, cmd);
8497 8654 mptsas_start_diag(mpt, cmd);
8498 8655 }
8499 8656 cmd = next_cmd;
8500 8657 continue;
8501 8658 }
8502 8659
8503 8660 ptgt = cmd->cmd_tgt_addr;
8504 8661 if (ptgt && (ptgt->m_t_throttle == DRAIN_THROTTLE) &&
8505 8662 (ptgt->m_t_ncmds == 0)) {
8506 8663 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
8507 8664 }
8508 8665 if ((mpt->m_ncmds <= (mpt->m_max_requests - 2)) &&
8509 8666 (ptgt && (ptgt->m_reset_delay == 0)) &&
8510 8667 (ptgt && (ptgt->m_t_ncmds <
8511 8668 ptgt->m_t_throttle))) {
8512 8669 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
8513 8670 mptsas_waitq_delete(mpt, cmd);
8514 8671 (void) mptsas_start_cmd(mpt, cmd);
8515 8672 }
8516 8673 }
8517 8674 cmd = next_cmd;
8518 8675 }
8519 8676 }
8520 8677 /*
8521 8678 * Cmds are queued if tran_start() doesn't get the m_mutex lock (no wait).
8522 8679 * Accept all those queued cmds before a new cmd is accepted so that the
8523 8680 * cmds are sent in order.
8524 8681 */
8525 8682 static void
8526 8683 mptsas_accept_tx_waitq(mptsas_t *mpt)
8527 8684 {
8528 8685 mptsas_cmd_t *cmd;
8529 8686
8530 8687 ASSERT(mutex_owned(&mpt->m_mutex));
8531 8688 ASSERT(mutex_owned(&mpt->m_tx_waitq_mutex));
8532 8689
8533 8690 /*
8534 8691 * A Bus Reset could occur at any time and flush the tx_waitq,
8535 8692 * so we cannot count on the tx_waitq to contain even one cmd.
8536 8693 * And when the m_tx_waitq_mutex is released and run
8537 8694 * mptsas_accept_pkt(), the tx_waitq may be flushed.
8538 8695 */
8539 8696 cmd = mpt->m_tx_waitq;
8540 8697 for (;;) {
8541 8698 if ((cmd = mpt->m_tx_waitq) == NULL) {
(227 lines elided)
8542 8699 mpt->m_tx_draining = 0;
8543 8700 break;
8544 8701 }
8545 8702 if ((mpt->m_tx_waitq = cmd->cmd_linkp) == NULL) {
8546 8703 mpt->m_tx_waitqtail = &mpt->m_tx_waitq;
8547 8704 }
8548 8705 cmd->cmd_linkp = NULL;
8549 8706 mutex_exit(&mpt->m_tx_waitq_mutex);
8550 8707 if (mptsas_accept_pkt(mpt, cmd) != TRAN_ACCEPT)
8551 8708 cmn_err(CE_WARN, "mpt: mptsas_accept_tx_waitq: failed "
8552 - "to accept cmd on queue\n");
8709 + "to accept cmd on queue");
8553 8710 mutex_enter(&mpt->m_tx_waitq_mutex);
8554 8711 }
8555 8712 }
8556 8713
8557 8714
8558 8715 /*
8559 8716 * mpt tag type lookup
8560 8717 */
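/*
 * Indexed by (cmd_pkt_flags & FLAG_TAGMASK) >> 12 in mptsas_start_cmd()
 * to map a packet's tag flag onto the corresponding SCSI tag message type.
 */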
8561 8718 static char mptsas_tag_lookup[] =
8562 8719 {0, MSG_HEAD_QTAG, MSG_ORDERED_QTAG, 0, MSG_SIMPLE_QTAG};
8563 8720
8564 8721 static int
8565 8722 mptsas_start_cmd(mptsas_t *mpt, mptsas_cmd_t *cmd)
8566 8723 {
8567 8724 struct scsi_pkt *pkt = CMD2PKT(cmd);
8568 8725 uint32_t control = 0;
8569 8726 caddr_t mem, arsbuf;
8570 8727 pMpi2SCSIIORequest_t io_request;
8571 8728 ddi_dma_handle_t dma_hdl = mpt->m_dma_req_frame_hdl;
8572 8729 ddi_acc_handle_t acc_hdl = mpt->m_acc_req_frame_hdl;
8573 8730 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
8574 8731 uint16_t SMID, io_flags = 0;
8575 8732 uint8_t ars_size;
8576 8733 uint64_t request_desc;
8577 8734 uint32_t ars_dmaaddrlow;
8578 8735 mptsas_cmd_t *c;
8579 8736
8580 8737 NDBG1(("mptsas_start_cmd: cmd=0x%p, flags 0x%x", (void *)cmd,
8581 8738 cmd->cmd_flags));
8582 8739
8583 8740 /*
8584 8741 * Set SMID and increment index. Rollover to 1 instead of 0 if index
8585 8742 * is at the max. 0 is an invalid SMID, so we call the first index 1.
8586 8743 */
8587 8744 SMID = cmd->cmd_slot;
8588 8745
8589 8746 /*
8590 8747 * It is possible for back to back device reset to
8591 8748 * happen before the reset delay has expired. That's
8592 8749 * ok, just let the device reset go out on the bus.
8593 8750 */
8594 8751 if ((cmd->cmd_pkt_flags & FLAG_NOINTR) == 0) {
8595 8752 ASSERT(ptgt->m_reset_delay == 0);
8596 8753 }
8597 8754
8598 8755 /*
8599 8756 * if a non-tagged cmd is submitted to an active tagged target
8600 8757 * then drain before submitting this cmd; SCSI-2 allows RQSENSE
8601 8758 * to be untagged
8602 8759 */
8603 8760 if (((cmd->cmd_pkt_flags & FLAG_TAGMASK) == 0) &&
8604 8761 (ptgt->m_t_ncmds > 1) &&
8605 8762 ((cmd->cmd_flags & CFLAG_TM_CMD) == 0) &&
8606 8763 (*(cmd->cmd_pkt->pkt_cdbp) != SCMD_REQUEST_SENSE)) {
8607 8764 if ((cmd->cmd_pkt_flags & FLAG_NOINTR) == 0) {
8608 8765 NDBG23(("target=%d, untagged cmd, start draining\n",
8609 8766 ptgt->m_devhdl));
8610 8767
8611 8768 if (ptgt->m_reset_delay == 0) {
8612 8769 mptsas_set_throttle(mpt, ptgt, DRAIN_THROTTLE);
8613 8770 }
8614 8771
8615 8772 mptsas_remove_cmd(mpt, cmd);
8616 8773 cmd->cmd_pkt_flags |= FLAG_HEAD;
8617 8774 mptsas_waitq_add(mpt, cmd);
8618 8775 }
8619 8776 return (DDI_FAILURE);
8620 8777 }
8621 8778
8622 8779 /*
8623 8780 * Set correct tag bits.
8624 8781 */
8625 8782 if (cmd->cmd_pkt_flags & FLAG_TAGMASK) {
8626 8783 switch (mptsas_tag_lookup[((cmd->cmd_pkt_flags &
8627 8784 FLAG_TAGMASK) >> 12)]) {
(65 lines elided)
8628 8785 case MSG_SIMPLE_QTAG:
8629 8786 control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
8630 8787 break;
8631 8788 case MSG_HEAD_QTAG:
8632 8789 control |= MPI2_SCSIIO_CONTROL_HEADOFQ;
8633 8790 break;
8634 8791 case MSG_ORDERED_QTAG:
8635 8792 control |= MPI2_SCSIIO_CONTROL_ORDEREDQ;
8636 8793 break;
8637 8794 default:
8638 - mptsas_log(mpt, CE_WARN, "mpt: Invalid tag type\n");
8795 + mptsas_log(mpt, CE_WARN, "invalid tag type");
8639 8796 break;
8640 8797 }
8641 8798 } else {
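		/*
		 * Untagged command other than REQUEST SENSE: drop the
		 * target throttle to one so only a single command is
		 * outstanding at a time.
		 */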
8642 8799 if (*(cmd->cmd_pkt->pkt_cdbp) != SCMD_REQUEST_SENSE) {
8643 8800 ptgt->m_t_throttle = 1;
8644 8801 }
8645 8802 control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
8646 8803 }
8647 8804
8648 8805 if (cmd->cmd_pkt_flags & FLAG_TLR) {
8649 8806 control |= MPI2_SCSIIO_CONTROL_TLR_ON;
8650 8807 }
8651 8808
8652 8809 mem = mpt->m_req_frame + (mpt->m_req_frame_size * SMID);
8653 8810 io_request = (pMpi2SCSIIORequest_t)mem;
8654 8811 if (cmd->cmd_extrqslen != 0) {
8655 8812 /*
8656 8813 * Mapping of the buffer was done in mptsas_pkt_alloc_extern().
8657 8814 * Calculate the DMA address with the same offset.
8658 8815 */
8659 8816 arsbuf = cmd->cmd_arq_buf;
8660 8817 ars_size = cmd->cmd_extrqslen;
8661 8818 ars_dmaaddrlow = (mpt->m_req_sense_dma_addr +
8662 8819 ((uintptr_t)arsbuf - (uintptr_t)mpt->m_req_sense)) &
8663 8820 0xffffffffu;
8664 8821 } else {
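		/*
		 * No extended request sense buffer for this command; use
		 * the pre-allocated per-SMID slice of the driver's request
		 * sense area.
		 */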
8665 8822 arsbuf = mpt->m_req_sense + (mpt->m_req_sense_size * (SMID-1));
8666 8823 cmd->cmd_arq_buf = arsbuf;
8667 8824 ars_size = mpt->m_req_sense_size;
8668 8825 ars_dmaaddrlow = (mpt->m_req_sense_dma_addr +
8669 8826 (mpt->m_req_sense_size * (SMID-1))) &
8670 8827 0xffffffffu;
8671 8828 }
8672 8829 bzero(io_request, sizeof (Mpi2SCSIIORequest_t));
8673 8830 bzero(arsbuf, ars_size);
8674 8831
8675 8832 ddi_put8(acc_hdl, &io_request->SGLOffset0, offsetof
8676 8833 (MPI2_SCSI_IO_REQUEST, SGL) / 4);
8677 8834 mptsas_init_std_hdr(acc_hdl, io_request, ptgt->m_devhdl, Lun(cmd), 0,
8678 8835 MPI2_FUNCTION_SCSI_IO_REQUEST);
8679 8836
8680 8837 (void) ddi_rep_put8(acc_hdl, (uint8_t *)pkt->pkt_cdbp,
8681 8838 io_request->CDB.CDB32, cmd->cmd_cdblen, DDI_DEV_AUTOINCR);
8682 8839
8683 8840 io_flags = cmd->cmd_cdblen;
8684 8841 if (mptsas_use_fastpath &&
8685 8842 ptgt->m_io_flags & MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH) {
8686 8843 io_flags |= MPI25_SCSIIO_IOFLAGS_FAST_PATH;
8687 8844 request_desc = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
8688 8845 } else {
8689 8846 request_desc = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
8690 8847 }
8691 8848 ddi_put16(acc_hdl, &io_request->IoFlags, io_flags);
8692 8849 /*
8693 8850 * setup the Scatter/Gather DMA list for this request
8694 8851 */
8695 8852 if (cmd->cmd_cookiec > 0) {
8696 8853 mptsas_sge_setup(mpt, cmd, &control, io_request, acc_hdl);
8697 8854 } else {
8698 8855 ddi_put32(acc_hdl, &io_request->SGL.MpiSimple.FlagsLength,
8699 8856 ((uint32_t)MPI2_SGE_FLAGS_LAST_ELEMENT |
8700 8857 MPI2_SGE_FLAGS_END_OF_BUFFER |
8701 8858 MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
8702 8859 MPI2_SGE_FLAGS_END_OF_LIST) << MPI2_SGE_FLAGS_SHIFT);
8703 8860 }
8704 8861
8705 8862 /*
8706 8863 * save ARQ information
8707 8864 */
8708 8865 ddi_put8(acc_hdl, &io_request->SenseBufferLength, ars_size);
(60 lines elided)
8709 8866 ddi_put32(acc_hdl, &io_request->SenseBufferLowAddress, ars_dmaaddrlow);
8710 8867
8711 8868 ddi_put32(acc_hdl, &io_request->Control, control);
8712 8869
8713 8870 NDBG31(("starting message=%d(0x%p), with cmd=0x%p",
8714 8871 SMID, (void *)io_request, (void *)cmd));
8715 8872
8716 8873 (void) ddi_dma_sync(dma_hdl, 0, 0, DDI_DMA_SYNC_FORDEV);
8717 8874 (void) ddi_dma_sync(mpt->m_dma_req_sense_hdl, 0, 0,
8718 8875 DDI_DMA_SYNC_FORDEV);
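	/*
	 * Record the command start time; the timeout expiration below is
	 * computed relative to it.
	 */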
8876 + pkt->pkt_start = gethrtime();
8719 8877
8720 8878 /*
8721 8879 * Build request descriptor and write it to the request desc post reg.
8722 8880 */
8723 8881 request_desc |= (SMID << 16);
8724 8882 request_desc |= (uint64_t)ptgt->m_devhdl << 48;
8725 8883 MPTSAS_START_CMD(mpt, request_desc);
8726 8884
8727 8885 /*
8728 8886 * Start timeout.
8729 8887 */
8730 - cmd->cmd_active_expiration =
8731 - gethrtime() + (hrtime_t)pkt->pkt_time * NANOSEC;
8888 + cmd->cmd_active_expiration = pkt->pkt_start +
8889 + (hrtime_t)pkt->pkt_time * (hrtime_t)NANOSEC;
8890 +
8732 8891 #ifdef MPTSAS_TEST
8733 8892 /*
8734 8893 * Force timeouts to happen immediately.
8735 8894 */
8736 8895 if (mptsas_test_timeouts)
8737 8896 cmd->cmd_active_expiration = gethrtime();
8738 8897 #endif
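	/*
	 * Insert the command into the per-target active queue, which is
	 * kept sorted by decreasing expiration time so the command that
	 * expires soonest is at the tail.
	 */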
8739 8898 c = TAILQ_FIRST(&ptgt->m_active_cmdq);
8740 8899 if (c == NULL ||
8741 8900 c->cmd_active_expiration < cmd->cmd_active_expiration) {
8742 8901 /*
8743 8902 * Common case is that this is the last pending expiration
8744 8903 * (or queue is empty). Insert at head of the queue.
8745 8904 */
8746 8905 TAILQ_INSERT_HEAD(&ptgt->m_active_cmdq, cmd, cmd_active_link);
8747 8906 } else {
8748 8907 /*
8749 8908 * Queue is not empty and first element expires later than
8750 8909 * this command. Search for element expiring sooner.
8751 8910 */
8752 8911 while ((c = TAILQ_NEXT(c, cmd_active_link)) != NULL) {
8753 8912 if (c->cmd_active_expiration <
8754 8913 cmd->cmd_active_expiration) {
8755 8914 TAILQ_INSERT_BEFORE(c, cmd, cmd_active_link);
8756 8915 break;
8757 8916 }
8758 8917 }
8759 8918 if (c == NULL) {
8760 8919 /*
8761 8920 * No element found expiring sooner, append to
8762 8921 * non-empty queue.
8763 8922 */
8764 8923 TAILQ_INSERT_TAIL(&ptgt->m_active_cmdq, cmd,
8765 8924 cmd_active_link);
8766 8925 }
8767 8926 }
8768 8927
8769 8928 if ((mptsas_check_dma_handle(dma_hdl) != DDI_SUCCESS) ||
8770 8929 (mptsas_check_acc_handle(acc_hdl) != DDI_SUCCESS)) {
8771 8930 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
8772 8931 return (DDI_FAILURE);
8773 8932 }
8774 8933 return (DDI_SUCCESS);
8775 8934 }
8776 8935
8777 8936 /*
8778 8937 * Select a helper thread to handle current doneq
8779 8938 */
8780 8939 static void
8781 8940 mptsas_deliver_doneq_thread(mptsas_t *mpt)
8782 8941 {
8783 8942 uint64_t t, i;
8784 8943 uint32_t min = 0xffffffff;
8785 8944 mptsas_doneq_thread_list_t *item;
8786 8945
8787 8946 for (i = 0; i < mpt->m_doneq_thread_n; i++) {
8788 8947 item = &mpt->m_doneq_thread_id[i];
8789 8948 /*
8790 8949 * If the number of completed commands on helper thread[i] is less
8791 8950 * than doneq_thread_threshold, pick thread[i]. Otherwise pick the
8792 8951 * thread with the fewest completed commands.
8793 8952 */
8794 8953
8795 8954 mutex_enter(&item->mutex);
8796 8955 if (item->len < mpt->m_doneq_thread_threshold) {
8797 8956 t = i;
8798 8957 mutex_exit(&item->mutex);
8799 8958 break;
8800 8959 }
8801 8960 if (item->len < min) {
8802 8961 min = item->len;
8803 8962 t = i;
8804 8963 }
8805 8964 mutex_exit(&item->mutex);
8806 8965 }
8807 8966 mutex_enter(&mpt->m_doneq_thread_id[t].mutex);
8808 8967 mptsas_doneq_mv(mpt, t);
8809 8968 cv_signal(&mpt->m_doneq_thread_id[t].cv);
8810 8969 mutex_exit(&mpt->m_doneq_thread_id[t].mutex);
8811 8970 }
8812 8971
8813 8972 /*
8814 8973 * move the current global doneq to the doneq of thread[t]
8815 8974 */
8816 8975 static void
8817 8976 mptsas_doneq_mv(mptsas_t *mpt, uint64_t t)
8818 8977 {
8819 8978 mptsas_cmd_t *cmd;
8820 8979 mptsas_doneq_thread_list_t *item = &mpt->m_doneq_thread_id[t];
8821 8980
8822 8981 ASSERT(mutex_owned(&item->mutex));
8823 8982 while ((cmd = mpt->m_doneq) != NULL) {
8824 8983 if ((mpt->m_doneq = cmd->cmd_linkp) == NULL) {
8825 8984 mpt->m_donetail = &mpt->m_doneq;
8826 8985 }
8827 8986 cmd->cmd_linkp = NULL;
8828 8987 *item->donetail = cmd;
8829 8988 item->donetail = &cmd->cmd_linkp;
8830 8989 mpt->m_doneq_len--;
8831 8990 item->len++;
8832 8991 }
8833 8992 }
8834 8993
8835 8994 void
8836 8995 mptsas_fma_check(mptsas_t *mpt, mptsas_cmd_t *cmd)
8837 8996 {
8838 8997 struct scsi_pkt *pkt = CMD2PKT(cmd);
8839 8998
8840 8999 /* Check all acc and dma handles */
8841 9000 if ((mptsas_check_acc_handle(mpt->m_datap) !=
8842 9001 DDI_SUCCESS) ||
8843 9002 (mptsas_check_acc_handle(mpt->m_acc_req_frame_hdl) !=
8844 9003 DDI_SUCCESS) ||
8845 9004 (mptsas_check_acc_handle(mpt->m_acc_req_sense_hdl) !=
8846 9005 DDI_SUCCESS) ||
8847 9006 (mptsas_check_acc_handle(mpt->m_acc_reply_frame_hdl) !=
8848 9007 DDI_SUCCESS) ||
8849 9008 (mptsas_check_acc_handle(mpt->m_acc_free_queue_hdl) !=
8850 9009 DDI_SUCCESS) ||
8851 9010 (mptsas_check_acc_handle(mpt->m_acc_post_queue_hdl) !=
8852 9011 DDI_SUCCESS) ||
8853 9012 (mptsas_check_acc_handle(mpt->m_hshk_acc_hdl) !=
8854 9013 DDI_SUCCESS) ||
8855 9014 (mptsas_check_acc_handle(mpt->m_config_handle) !=
8856 9015 DDI_SUCCESS)) {
8857 9016 ddi_fm_service_impact(mpt->m_dip,
8858 9017 DDI_SERVICE_UNAFFECTED);
8859 9018 ddi_fm_acc_err_clear(mpt->m_config_handle,
8860 9019 DDI_FME_VER0);
8861 9020 pkt->pkt_reason = CMD_TRAN_ERR;
8862 9021 pkt->pkt_statistics = 0;
8863 9022 }
8864 9023 if ((mptsas_check_dma_handle(mpt->m_dma_req_frame_hdl) !=
8865 9024 DDI_SUCCESS) ||
8866 9025 (mptsas_check_dma_handle(mpt->m_dma_req_sense_hdl) !=
8867 9026 DDI_SUCCESS) ||
8868 9027 (mptsas_check_dma_handle(mpt->m_dma_reply_frame_hdl) !=
8869 9028 DDI_SUCCESS) ||
8870 9029 (mptsas_check_dma_handle(mpt->m_dma_free_queue_hdl) !=
8871 9030 DDI_SUCCESS) ||
8872 9031 (mptsas_check_dma_handle(mpt->m_dma_post_queue_hdl) !=
8873 9032 DDI_SUCCESS) ||
8874 9033 (mptsas_check_dma_handle(mpt->m_hshk_dma_hdl) !=
8875 9034 DDI_SUCCESS)) {
8876 9035 ddi_fm_service_impact(mpt->m_dip,
8877 9036 DDI_SERVICE_UNAFFECTED);
8878 9037 pkt->pkt_reason = CMD_TRAN_ERR;
8879 9038 pkt->pkt_statistics = 0;
8880 9039 }
8881 9040 if (cmd->cmd_dmahandle &&
8882 9041 (mptsas_check_dma_handle(cmd->cmd_dmahandle) != DDI_SUCCESS)) {
8883 9042 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
8884 9043 pkt->pkt_reason = CMD_TRAN_ERR;
8885 9044 pkt->pkt_statistics = 0;
8886 9045 }
8887 9046 if ((cmd->cmd_extra_frames &&
8888 9047 ((mptsas_check_dma_handle(cmd->cmd_extra_frames->m_dma_hdl) !=
8889 9048 DDI_SUCCESS) ||
8890 9049 (mptsas_check_acc_handle(cmd->cmd_extra_frames->m_acc_hdl) !=
8891 9050 DDI_SUCCESS)))) {
8892 9051 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
8893 9052 pkt->pkt_reason = CMD_TRAN_ERR;
8894 9053 pkt->pkt_statistics = 0;
8895 9054 }
8896 9055 }
8897 9056
8898 9057 /*
8899 9058 * These routines manipulate the queue of commands that
8900 9059 * are waiting for their completion routines to be called.
8901 9060 * The queue is usually in FIFO order but on an MP system
8902 9061 * it's possible for the completion routines to get out
8903 9062 * of order. If that's a problem you need to add a global
8904 9063 * mutex around the code that calls the completion routine
8905 9064 * in the interrupt handler.
8906 9065 */
8907 9066 static void
8908 9067 mptsas_doneq_add(mptsas_t *mpt, mptsas_cmd_t *cmd)
8909 9068 {
8910 9069 struct scsi_pkt *pkt = CMD2PKT(cmd);
8911 9070
8912 9071 NDBG31(("mptsas_doneq_add: cmd=0x%p", (void *)cmd));
8913 9072
8914 9073 ASSERT((cmd->cmd_flags & CFLAG_COMPLETED) == 0);
8915 9074 cmd->cmd_linkp = NULL;
8916 9075 cmd->cmd_flags |= CFLAG_FINISHED;
8917 9076 cmd->cmd_flags &= ~CFLAG_IN_TRANSPORT;
8918 9077
8919 9078 mptsas_fma_check(mpt, cmd);
8920 9079
8921 9080 /*
8922 9081 * Only add scsi pkts that have completion routines to
8923 9082 * the doneq; NOINTR cmds do not have callbacks.
8924 9083 */
8925 9084 if (pkt && (pkt->pkt_comp)) {
8926 9085 *mpt->m_donetail = cmd;
8927 9086 mpt->m_donetail = &cmd->cmd_linkp;
8928 9087 mpt->m_doneq_len++;
8929 9088 }
8930 9089 }
8931 9090
8932 9091 static mptsas_cmd_t *
8933 9092 mptsas_doneq_thread_rm(mptsas_t *mpt, uint64_t t)
8934 9093 {
8935 9094 mptsas_cmd_t *cmd;
8936 9095 mptsas_doneq_thread_list_t *item = &mpt->m_doneq_thread_id[t];
8937 9096
8938 9097 /* pop one off the done queue */
8939 9098 if ((cmd = item->doneq) != NULL) {
8940 9099 /* if the queue is now empty fix the tail pointer */
8941 9100 NDBG31(("mptsas_doneq_thread_rm: cmd=0x%p", (void *)cmd));
8942 9101 if ((item->doneq = cmd->cmd_linkp) == NULL) {
8943 9102 item->donetail = &item->doneq;
8944 9103 }
8945 9104 cmd->cmd_linkp = NULL;
8946 9105 item->len--;
8947 9106 }
8948 9107 return (cmd);
8949 9108 }
8950 9109
8951 9110 static void
8952 9111 mptsas_doneq_empty(mptsas_t *mpt)
8953 9112 {
8954 9113 if (mpt->m_doneq && !mpt->m_in_callback) {
8955 9114 mptsas_cmd_t *cmd, *next;
8956 9115 struct scsi_pkt *pkt;
8957 9116
8958 9117 mpt->m_in_callback = 1;
8959 9118 cmd = mpt->m_doneq;
8960 9119 mpt->m_doneq = NULL;
8961 9120 mpt->m_donetail = &mpt->m_doneq;
8962 9121 mpt->m_doneq_len = 0;
8963 9122
8964 9123 mutex_exit(&mpt->m_mutex);
8965 9124 /*
8966 9125 * run the completion routines of all the
8967 9126 * completed commands
8968 9127 */
8969 9128 while (cmd != NULL) {
8970 9129 next = cmd->cmd_linkp;
8971 9130 cmd->cmd_linkp = NULL;
8972 9131 /* run this command's completion routine */
8973 9132 cmd->cmd_flags |= CFLAG_COMPLETED;
8974 9133 pkt = CMD2PKT(cmd);
8975 9134 mptsas_pkt_comp(pkt, cmd);
8976 9135 cmd = next;
8977 9136 }
8978 9137 mutex_enter(&mpt->m_mutex);
8979 9138 mpt->m_in_callback = 0;
8980 9139 }
8981 9140 }
8982 9141
8983 9142 /*
8984 9143 * These routines manipulate the target's queue of pending requests
8985 9144 */
8986 9145 void
8987 9146 mptsas_waitq_add(mptsas_t *mpt, mptsas_cmd_t *cmd)
8988 9147 {
8989 9148 NDBG7(("mptsas_waitq_add: cmd=0x%p", (void *)cmd));
8990 9149 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
8991 9150 cmd->cmd_queued = TRUE;
8992 9151 if (ptgt)
8993 9152 ptgt->m_t_nwait++;
8994 9153 if (cmd->cmd_pkt_flags & FLAG_HEAD) {
8995 9154 if ((cmd->cmd_linkp = mpt->m_waitq) == NULL) {
8996 9155 mpt->m_waitqtail = &cmd->cmd_linkp;
8997 9156 }
8998 9157 mpt->m_waitq = cmd;
8999 9158 } else {
9000 9159 cmd->cmd_linkp = NULL;
9001 9160 *(mpt->m_waitqtail) = cmd;
9002 9161 mpt->m_waitqtail = &cmd->cmd_linkp;
9003 9162 }
9004 9163 }
9005 9164
9006 9165 static mptsas_cmd_t *
9007 9166 mptsas_waitq_rm(mptsas_t *mpt)
9008 9167 {
9009 9168 mptsas_cmd_t *cmd;
9010 9169 mptsas_target_t *ptgt;
9011 9170 NDBG7(("mptsas_waitq_rm"));
9012 9171
9013 9172 MPTSAS_WAITQ_RM(mpt, cmd);
9014 9173
9015 9174 NDBG7(("mptsas_waitq_rm: cmd=0x%p", (void *)cmd));
9016 9175 if (cmd) {
9017 9176 ptgt = cmd->cmd_tgt_addr;
9018 9177 if (ptgt) {
9019 9178 ptgt->m_t_nwait--;
9020 9179 ASSERT(ptgt->m_t_nwait >= 0);
9021 9180 }
9022 9181 }
9023 9182 return (cmd);
9024 9183 }
9025 9184
9026 9185 /*
9027 9186 * remove specified cmd from the middle of the wait queue.
9028 9187 */
9029 9188 static void
9030 9189 mptsas_waitq_delete(mptsas_t *mpt, mptsas_cmd_t *cmd)
9031 9190 {
9032 9191 mptsas_cmd_t *prevp = mpt->m_waitq;
9033 9192 mptsas_target_t *ptgt = cmd->cmd_tgt_addr;
9034 9193
9035 9194 NDBG7(("mptsas_waitq_delete: mpt=0x%p cmd=0x%p",
9036 9195 (void *)mpt, (void *)cmd));
9037 9196 if (ptgt) {
9038 9197 ptgt->m_t_nwait--;
9039 9198 ASSERT(ptgt->m_t_nwait >= 0);
9040 9199 }
9041 9200
9042 9201 if (prevp == cmd) {
9043 9202 if ((mpt->m_waitq = cmd->cmd_linkp) == NULL)
9044 9203 mpt->m_waitqtail = &mpt->m_waitq;
9045 9204
9046 9205 cmd->cmd_linkp = NULL;
9047 9206 cmd->cmd_queued = FALSE;
9048 9207 NDBG7(("mptsas_waitq_delete: mpt=0x%p cmd=0x%p",
9049 9208 (void *)mpt, (void *)cmd));
9050 9209 return;
9051 9210 }
9052 9211
9053 9212 while (prevp != NULL) {
9054 9213 if (prevp->cmd_linkp == cmd) {
9055 9214 if ((prevp->cmd_linkp = cmd->cmd_linkp) == NULL)
9056 9215 mpt->m_waitqtail = &prevp->cmd_linkp;
9057 9216
9058 9217 cmd->cmd_linkp = NULL;
9059 9218 cmd->cmd_queued = FALSE;
9060 9219 NDBG7(("mptsas_waitq_delete: mpt=0x%p cmd=0x%p",
9061 9220 (void *)mpt, (void *)cmd));
9062 9221 return;
9063 9222 }
9064 9223 prevp = prevp->cmd_linkp;
9065 9224 }
9066 9225 cmn_err(CE_PANIC, "mpt: mptsas_waitq_delete: queue botch");
9067 9226 }
9068 9227
9069 9228 static mptsas_cmd_t *
9070 9229 mptsas_tx_waitq_rm(mptsas_t *mpt)
9071 9230 {
9072 9231 mptsas_cmd_t *cmd;
9073 9232 NDBG7(("mptsas_tx_waitq_rm"));
9074 9233
9075 9234 MPTSAS_TX_WAITQ_RM(mpt, cmd);
9076 9235
9077 9236 NDBG7(("mptsas_tx_waitq_rm: cmd=0x%p", (void *)cmd));
9078 9237
9079 9238 return (cmd);
9080 9239 }
9081 9240
9082 9241 /*
9083 9242 * remove specified cmd from the middle of the tx_waitq.
9084 9243 */
9085 9244 static void
9086 9245 mptsas_tx_waitq_delete(mptsas_t *mpt, mptsas_cmd_t *cmd)
9087 9246 {
9088 9247 mptsas_cmd_t *prevp = mpt->m_tx_waitq;
9089 9248
9090 9249 NDBG7(("mptsas_tx_waitq_delete: mpt=0x%p cmd=0x%p",
9091 9250 (void *)mpt, (void *)cmd));
9092 9251
9093 9252 if (prevp == cmd) {
9094 9253 if ((mpt->m_tx_waitq = cmd->cmd_linkp) == NULL)
9095 9254 mpt->m_tx_waitqtail = &mpt->m_tx_waitq;
9096 9255
9097 9256 cmd->cmd_linkp = NULL;
9098 9257 cmd->cmd_queued = FALSE;
9099 9258 NDBG7(("mptsas_tx_waitq_delete: mpt=0x%p cmd=0x%p",
9100 9259 (void *)mpt, (void *)cmd));
9101 9260 return;
9102 9261 }
9103 9262
9104 9263 while (prevp != NULL) {
9105 9264 if (prevp->cmd_linkp == cmd) {
9106 9265 if ((prevp->cmd_linkp = cmd->cmd_linkp) == NULL)
9107 9266 mpt->m_tx_waitqtail = &prevp->cmd_linkp;
9108 9267
9109 9268 cmd->cmd_linkp = NULL;
9110 9269 cmd->cmd_queued = FALSE;
9111 9270 NDBG7(("mptsas_tx_waitq_delete: mpt=0x%p cmd=0x%p",
9112 9271 (void *)mpt, (void *)cmd));
9113 9272 return;
9114 9273 }
9115 9274 prevp = prevp->cmd_linkp;
9116 9275 }
9117 9276 cmn_err(CE_PANIC, "mpt: mptsas_tx_waitq_delete: queue botch");
9118 9277 }
9119 9278
9120 9279 /*
9121 9280 * device and bus reset handling
9122 9281 *
9123 9282 * Notes:
9124 9283 * - RESET_ALL: reset the controller
9125 9284 * - RESET_TARGET: reset the target specified in scsi_address
9126 9285 */
9127 9286 static int
9128 9287 mptsas_scsi_reset(struct scsi_address *ap, int level)
9129 9288 {
9130 9289 mptsas_t *mpt = ADDR2MPT(ap);
9131 9290 int rval;
9132 9291 mptsas_tgt_private_t *tgt_private;
9133 9292 mptsas_target_t *ptgt = NULL;
9134 9293
9135 9294 tgt_private = (mptsas_tgt_private_t *)ap->a_hba_tran->tran_tgt_private;
9136 9295 ptgt = tgt_private->t_private;
9137 9296 if (ptgt == NULL) {
9138 9297 return (FALSE);
9139 9298 }
9140 9299 NDBG22(("mptsas_scsi_reset: target=%d level=%d", ptgt->m_devhdl,
9141 9300 level));
9142 9301
9143 9302 mutex_enter(&mpt->m_mutex);
9144 9303 /*
9145 9304 * if we are not in panic, set up a reset delay for this target
9146 9305 */
9147 9306 if (!ddi_in_panic()) {
9148 9307 mptsas_setup_bus_reset_delay(mpt);
9149 9308 } else {
9150 9309 drv_usecwait(mpt->m_scsi_reset_delay * 1000);
9151 9310 }
9152 9311 rval = mptsas_do_scsi_reset(mpt, ptgt->m_devhdl);
9153 9312 mutex_exit(&mpt->m_mutex);
9154 9313
9155 9314 /*
 9156 9315 	 * The transport layer expects to only see TRUE and
9157 9316 * FALSE. Therefore, we will adjust the return value
9158 9317 * if mptsas_do_scsi_reset returns FAILED.
9159 9318 */
9160 9319 if (rval == FAILED)
9161 9320 rval = FALSE;
9162 9321 return (rval);
9163 9322 }
9164 9323
9165 9324 static int
9166 9325 mptsas_do_scsi_reset(mptsas_t *mpt, uint16_t devhdl)
9167 9326 {
9168 9327 int rval = FALSE;
9169 9328 uint8_t config, disk;
9170 9329
9171 9330 ASSERT(mutex_owned(&mpt->m_mutex));
9172 9331
9173 9332 if (mptsas_debug_resets) {
9174 9333 mptsas_log(mpt, CE_WARN, "mptsas_do_scsi_reset: target=%d",
9175 9334 devhdl);
9176 9335 }
9177 9336
9178 9337 /*
9179 9338 * Issue a Target Reset message to the target specified but not to a
9180 9339 * disk making up a raid volume. Just look through the RAID config
9181 9340 * Phys Disk list of DevHandles. If the target's DevHandle is in this
9182 9341 * list, then don't reset this target.
9183 9342 */
9184 9343 for (config = 0; config < mpt->m_num_raid_configs; config++) {
9185 9344 for (disk = 0; disk < MPTSAS_MAX_DISKS_IN_CONFIG; disk++) {
9186 9345 if (devhdl == mpt->m_raidconfig[config].
9187 9346 m_physdisk_devhdl[disk]) {
9188 9347 return (TRUE);
9189 9348 }
9190 9349 }
9191 9350 }
9192 9351
9193 9352 rval = mptsas_ioc_task_management(mpt,
9194 9353 MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, devhdl, 0, NULL, 0, 0);
9195 9354
9196 9355 mptsas_doneq_empty(mpt);
9197 9356 return (rval);
9198 9357 }
9199 9358
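The loop above that decides whether to skip the reset is a plain membership scan over the per-configuration physical-disk handle tables: if the device handle belongs to any RAID volume, no Target Reset is sent and TRUE is returned. A stand-alone sketch of that check; the table contents and sizes below are made up and only loosely echo the MPTSAS_MAX_DISKS_IN_CONFIG idea.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define	NUM_RAID_CONFIGS	2	/* illustrative */
    #define	MAX_DISKS_IN_CONFIG	4	/* illustrative */

    /* Hypothetical table of phys-disk device handles per RAID config. */
    static const uint16_t physdisk_devhdl[NUM_RAID_CONFIGS][MAX_DISKS_IN_CONFIG] = {
    	{ 0x0009, 0x000a, 0x000b, 0x0000 },
    	{ 0x0011, 0x0012, 0x0000, 0x0000 },
    };

    static bool
    devhdl_is_raid_member(uint16_t devhdl)
    {
    	for (int config = 0; config < NUM_RAID_CONFIGS; config++) {
    		for (int disk = 0; disk < MAX_DISKS_IN_CONFIG; disk++) {
    			if (devhdl == physdisk_devhdl[config][disk])
    				return (true);
    		}
    	}
    	return (false);
    }

    int
    main(void)
    {
    	/* 0x0012 is part of a volume, so its target reset would be skipped. */
    	printf("%d %d\n", devhdl_is_raid_member(0x0012),
    	    devhdl_is_raid_member(0x0021));	/* prints 1 0 */
    	return (0);
    }
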
9200 9359 static int
9201 9360 mptsas_scsi_reset_notify(struct scsi_address *ap, int flag,
9202 9361 void (*callback)(caddr_t), caddr_t arg)
9203 9362 {
9204 9363 mptsas_t *mpt = ADDR2MPT(ap);
9205 9364
9206 9365 NDBG22(("mptsas_scsi_reset_notify: tgt=%d", ap->a_target));
9207 9366
9208 9367 return (scsi_hba_reset_notify_setup(ap, flag, callback, arg,
9209 9368 &mpt->m_mutex, &mpt->m_reset_notify_listf));
9210 9369 }
9211 9370
9212 9371 static int
9213 9372 mptsas_get_name(struct scsi_device *sd, char *name, int len)
9214 9373 {
9215 9374 dev_info_t *lun_dip = NULL;
9216 9375
9217 9376 ASSERT(sd != NULL);
9218 9377 ASSERT(name != NULL);
9219 9378 lun_dip = sd->sd_dev;
9220 9379 ASSERT(lun_dip != NULL);
9221 9380
9222 9381 if (mptsas_name_child(lun_dip, name, len) == DDI_SUCCESS) {
9223 9382 return (1);
9224 9383 } else {
9225 9384 return (0);
9226 9385 }
9227 9386 }
9228 9387
9229 9388 static int
9230 9389 mptsas_get_bus_addr(struct scsi_device *sd, char *name, int len)
9231 9390 {
9232 9391 return (mptsas_get_name(sd, name, len));
9233 9392 }
9234 9393
9235 9394 void
9236 9395 mptsas_set_throttle(mptsas_t *mpt, mptsas_target_t *ptgt, int what)
9237 9396 {
9238 9397
9239 9398 NDBG25(("mptsas_set_throttle: throttle=%x", what));
9240 9399
9241 9400 /*
9242 9401 * if the bus is draining/quiesced, no changes to the throttles
9243 9402 * are allowed. Not allowing change of throttles during draining
9244 9403 * limits error recovery but will reduce draining time
9245 9404 *
9246 9405 * all throttles should have been set to HOLD_THROTTLE
9247 9406 */
9248 9407 if (mpt->m_softstate & (MPTSAS_SS_QUIESCED | MPTSAS_SS_DRAINING)) {
9249 9408 return;
9250 9409 }
9251 9410
9252 9411 if (what == HOLD_THROTTLE) {
9253 - ptgt->m_t_throttle = HOLD_THROTTLE;
9254 - } else if (ptgt->m_reset_delay == 0) {
9255 9412 ptgt->m_t_throttle = what;
9413 + } else if (ptgt->m_reset_delay == 0) {
9414 + if (what == MAX_THROTTLE)
9415 + ptgt->m_t_throttle = mpt->m_max_tune_throttle;
9416 + else
9417 + ptgt->m_t_throttle = what;
9256 9418 }
9257 9419 }
9258 9420
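With the change above, a request for MAX_THROTTLE is clamped to the per-HBA tunable m_max_tune_throttle, HOLD_THROTTLE is always honored, and nothing else changes while a reset delay is pending. A self-contained sketch of that decision, with illustrative constants and simplified structures in place of the driver's:

    #include <stdio.h>

    #define	HOLD_THROTTLE	0
    #define	DRAIN_THROTTLE	2
    #define	MAX_THROTTLE	32	/* illustrative value */

    struct tgt {
    	int	throttle;
    	int	reset_delay;	/* time remaining, 0 == no delay pending */
    };

    struct hba {
    	int	max_tune_throttle;	/* admin-tunable target queue depth */
    };

    static void
    set_throttle(struct hba *h, struct tgt *t, int what)
    {
    	if (what == HOLD_THROTTLE) {
    		t->throttle = what;		/* always honored */
    	} else if (t->reset_delay == 0) {	/* ignored during reset delay */
    		if (what == MAX_THROTTLE)
    			t->throttle = h->max_tune_throttle;
    		else
    			t->throttle = what;
    	}
    }

    int
    main(void)
    {
    	struct hba h = { .max_tune_throttle = 16 };
    	struct tgt t = { .throttle = 0, .reset_delay = 0 };

    	set_throttle(&h, &t, MAX_THROTTLE);
    	printf("%d\n", t.throttle);	/* 16: clamped to the tunable */
    	t.reset_delay = 3;
    	set_throttle(&h, &t, DRAIN_THROTTLE);
    	printf("%d\n", t.throttle);	/* still 16: reset delay pending */
    	set_throttle(&h, &t, HOLD_THROTTLE);
    	printf("%d\n", t.throttle);	/* 0: HOLD always applies */
    	return (0);
    }
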
9259 9421 /*
9260 9422 * Clean up from a device reset.
9261 9423 * For the case of target reset, this function clears the waitq of all
9262 9424 * commands for a particular target. For the case of abort task set, this
 9263 9425 	 * function clears the waitq of all commands for a particular target/lun.
9264 9426 */
9265 9427 static void
9266 9428 mptsas_flush_target(mptsas_t *mpt, ushort_t target, int lun, uint8_t tasktype)
9267 9429 {
9268 9430 mptsas_slots_t *slots = mpt->m_active;
9269 9431 mptsas_cmd_t *cmd, *next_cmd;
9270 9432 int slot;
9271 9433 uchar_t reason;
9272 9434 uint_t stat;
9273 9435 hrtime_t timestamp;
9274 9436
9275 9437 NDBG25(("mptsas_flush_target: target=%d lun=%d", target, lun));
9276 9438
9277 9439 timestamp = gethrtime();
9278 9440
9279 9441 /*
9280 9442 * Make sure the I/O Controller has flushed all cmds
9281 9443 * that are associated with this target for a target reset
9282 9444 * and target/lun for abort task set.
9283 9445 * Account for TM requests, which use the last SMID.
9284 9446 */
9285 9447 for (slot = 0; slot <= mpt->m_active->m_n_normal; slot++) {
9286 9448 if ((cmd = slots->m_slot[slot]) == NULL)
9287 9449 continue;
9288 9450 reason = CMD_RESET;
9289 9451 stat = STAT_DEV_RESET;
9290 9452 switch (tasktype) {
9291 9453 case MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET:
9292 9454 if (Tgt(cmd) == target) {
9293 9455 if (cmd->cmd_active_expiration <= timestamp) {
9294 9456 /*
9295 9457 * When timeout requested, propagate
9296 9458 * proper reason and statistics to
9297 9459 * target drivers.
9298 9460 */
9299 9461 reason = CMD_TIMEOUT;
9300 9462 stat |= STAT_TIMEOUT;
9301 9463 }
9302 9464 NDBG25(("mptsas_flush_target discovered non-"
9303 9465 "NULL cmd in slot %d, tasktype 0x%x", slot,
9304 9466 tasktype));
9305 9467 mptsas_dump_cmd(mpt, cmd);
9306 9468 mptsas_remove_cmd(mpt, cmd);
9307 9469 mptsas_set_pkt_reason(mpt, cmd, reason, stat);
9308 9470 mptsas_doneq_add(mpt, cmd);
9309 9471 }
9310 9472 break;
9311 9473 case MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
9312 9474 reason = CMD_ABORTED;
9313 9475 stat = STAT_ABORTED;
9314 9476 /*FALLTHROUGH*/
9315 9477 case MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET:
9316 9478 if ((Tgt(cmd) == target) && (Lun(cmd) == lun)) {
9317 9479
9318 9480 NDBG25(("mptsas_flush_target discovered non-"
9319 9481 "NULL cmd in slot %d, tasktype 0x%x", slot,
9320 9482 tasktype));
9321 9483 mptsas_dump_cmd(mpt, cmd);
9322 9484 mptsas_remove_cmd(mpt, cmd);
9323 9485 mptsas_set_pkt_reason(mpt, cmd, reason,
9324 9486 stat);
9325 9487 mptsas_doneq_add(mpt, cmd);
9326 9488 }
9327 9489 break;
9328 9490 default:
9329 9491 break;
9330 9492 }
9331 9493 }
9332 9494
9333 9495 /*
9334 9496 * Flush the waitq and tx_waitq of this target's cmds
9335 9497 */
9336 9498 cmd = mpt->m_waitq;
9337 9499
9338 9500 reason = CMD_RESET;
9339 9501 stat = STAT_DEV_RESET;
9340 9502
9341 9503 switch (tasktype) {
9342 9504 case MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET:
9343 9505 while (cmd != NULL) {
9344 9506 next_cmd = cmd->cmd_linkp;
9345 9507 if (Tgt(cmd) == target) {
9346 9508 mptsas_waitq_delete(mpt, cmd);
9347 9509 mptsas_set_pkt_reason(mpt, cmd,
9348 9510 reason, stat);
9349 9511 mptsas_doneq_add(mpt, cmd);
9350 9512 }
9351 9513 cmd = next_cmd;
9352 9514 }
9353 9515 mutex_enter(&mpt->m_tx_waitq_mutex);
9354 9516 cmd = mpt->m_tx_waitq;
9355 9517 while (cmd != NULL) {
9356 9518 next_cmd = cmd->cmd_linkp;
9357 9519 if (Tgt(cmd) == target) {
9358 9520 mptsas_tx_waitq_delete(mpt, cmd);
9359 9521 mutex_exit(&mpt->m_tx_waitq_mutex);
9360 9522 mptsas_set_pkt_reason(mpt, cmd,
9361 9523 reason, stat);
9362 9524 mptsas_doneq_add(mpt, cmd);
9363 9525 mutex_enter(&mpt->m_tx_waitq_mutex);
9364 9526 }
9365 9527 cmd = next_cmd;
9366 9528 }
9367 9529 mutex_exit(&mpt->m_tx_waitq_mutex);
9368 9530 break;
9369 9531 case MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
9370 9532 reason = CMD_ABORTED;
9371 9533 stat = STAT_ABORTED;
9372 9534 /*FALLTHROUGH*/
9373 9535 case MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET:
9374 9536 while (cmd != NULL) {
9375 9537 next_cmd = cmd->cmd_linkp;
9376 9538 if ((Tgt(cmd) == target) && (Lun(cmd) == lun)) {
9377 9539 mptsas_waitq_delete(mpt, cmd);
9378 9540 mptsas_set_pkt_reason(mpt, cmd,
9379 9541 reason, stat);
9380 9542 mptsas_doneq_add(mpt, cmd);
9381 9543 }
9382 9544 cmd = next_cmd;
9383 9545 }
9384 9546 mutex_enter(&mpt->m_tx_waitq_mutex);
9385 9547 cmd = mpt->m_tx_waitq;
9386 9548 while (cmd != NULL) {
9387 9549 next_cmd = cmd->cmd_linkp;
9388 9550 if ((Tgt(cmd) == target) && (Lun(cmd) == lun)) {
9389 9551 mptsas_tx_waitq_delete(mpt, cmd);
9390 9552 mutex_exit(&mpt->m_tx_waitq_mutex);
9391 9553 mptsas_set_pkt_reason(mpt, cmd,
9392 9554 reason, stat);
9393 9555 mptsas_doneq_add(mpt, cmd);
9394 9556 mutex_enter(&mpt->m_tx_waitq_mutex);
9395 9557 }
9396 9558 cmd = next_cmd;
9397 9559 }
9398 9560 mutex_exit(&mpt->m_tx_waitq_mutex);
9399 9561 break;
9400 9562 default:
9401 9563 mptsas_log(mpt, CE_WARN, "Unknown task management type %d.",
9402 9564 tasktype);
9403 9565 break;
9404 9566 }
9567 +
9568 +#ifdef MPTSAS_FAULTINJECTION
9569 + mptsas_fminj_move_tgt_to_doneq(mpt, target, reason, stat);
9570 +#endif
9405 9571 }
9406 9572
9407 9573 /*
9408 9574 * Clean up hba state, abort all outstanding command and commands in waitq
9409 9575 * reset timeout of all targets.
9410 9576 */
9411 9577 static void
9412 9578 mptsas_flush_hba(mptsas_t *mpt)
9413 9579 {
9414 9580 mptsas_slots_t *slots = mpt->m_active;
9415 9581 mptsas_cmd_t *cmd;
9416 9582 int slot;
9417 9583
9418 9584 NDBG25(("mptsas_flush_hba"));
9419 9585
9420 9586 /*
9421 9587 * The I/O Controller should have already sent back
9422 9588 * all commands via the scsi I/O reply frame. Make
9423 9589 * sure all commands have been flushed.
 9424 9590 	 * Account for TM requests, which use the last SMID.
9425 9591 */
9426 9592 for (slot = 0; slot <= mpt->m_active->m_n_normal; slot++) {
9427 9593 if ((cmd = slots->m_slot[slot]) == NULL)
9428 9594 continue;
9429 9595
9430 9596 if (cmd->cmd_flags & CFLAG_CMDIOC) {
9431 9597 /*
9432 9598 * Need to make sure to tell everyone that might be
9433 9599 * waiting on this command that it's going to fail. If
9434 9600 * we get here, this command will never timeout because
9435 9601 * the active command table is going to be re-allocated,
9436 9602 * so there will be nothing to check against a time out.
9437 9603 * Instead, mark the command as failed due to reset.
9438 9604 */
9439 9605 mptsas_set_pkt_reason(mpt, cmd, CMD_RESET,
9440 9606 STAT_BUS_RESET);
9441 9607 if ((cmd->cmd_flags &
9442 9608 (CFLAG_PASSTHRU | CFLAG_CONFIG | CFLAG_FW_DIAG))) {
9443 9609 cmd->cmd_flags |= CFLAG_FINISHED;
9444 9610 cv_broadcast(&mpt->m_passthru_cv);
9445 9611 cv_broadcast(&mpt->m_config_cv);
9446 9612 cv_broadcast(&mpt->m_fw_diag_cv);
9447 9613 }
9448 9614 continue;
9449 9615 }
9450 9616
9451 9617 NDBG25(("mptsas_flush_hba discovered non-NULL cmd in slot %d",
9452 9618 slot));
9453 9619 mptsas_dump_cmd(mpt, cmd);
9454 9620
9455 9621 mptsas_remove_cmd(mpt, cmd);
9456 9622 mptsas_set_pkt_reason(mpt, cmd, CMD_RESET, STAT_BUS_RESET);
9457 9623 mptsas_doneq_add(mpt, cmd);
9458 9624 }
9459 9625
9460 9626 /*
9461 9627 * Flush the waitq.
9462 9628 */
9463 9629 while ((cmd = mptsas_waitq_rm(mpt)) != NULL) {
9464 9630 mptsas_set_pkt_reason(mpt, cmd, CMD_RESET, STAT_BUS_RESET);
9465 9631 if ((cmd->cmd_flags & CFLAG_PASSTHRU) ||
9466 9632 (cmd->cmd_flags & CFLAG_CONFIG) ||
9467 9633 (cmd->cmd_flags & CFLAG_FW_DIAG)) {
9468 9634 cmd->cmd_flags |= CFLAG_FINISHED;
9469 9635 cv_broadcast(&mpt->m_passthru_cv);
9470 9636 cv_broadcast(&mpt->m_config_cv);
9471 9637 cv_broadcast(&mpt->m_fw_diag_cv);
9472 9638 } else {
9473 9639 mptsas_doneq_add(mpt, cmd);
9474 9640 }
9475 9641 }
9476 9642
9477 9643 /*
9478 9644 * Flush the tx_waitq
9479 9645 */
9480 9646 mutex_enter(&mpt->m_tx_waitq_mutex);
9481 9647 while ((cmd = mptsas_tx_waitq_rm(mpt)) != NULL) {
9482 9648 mutex_exit(&mpt->m_tx_waitq_mutex);
9483 9649 mptsas_set_pkt_reason(mpt, cmd, CMD_RESET, STAT_BUS_RESET);
9484 9650 mptsas_doneq_add(mpt, cmd);
9485 9651 mutex_enter(&mpt->m_tx_waitq_mutex);
9486 9652 }
9487 9653 mutex_exit(&mpt->m_tx_waitq_mutex);
9488 9654
9489 9655 /*
9490 9656 * Drain the taskqs prior to reallocating resources. The thread
9491 9657 * passing through here could be launched from either (dr)
9492 9658 * or (event) taskqs so only wait on the 'other' queue since
9493 9659 * waiting on 'this' queue is a deadlock condition.
9494 9660 */
9495 9661 mutex_exit(&mpt->m_mutex);
9496 9662 if (!taskq_member((taskq_t *)mpt->m_event_taskq, curthread))
9497 9663 ddi_taskq_wait(mpt->m_event_taskq);
9498 9664 if (!taskq_member((taskq_t *)mpt->m_dr_taskq, curthread))
9499 9665 ddi_taskq_wait(mpt->m_dr_taskq);
9500 9666
9501 9667 mutex_enter(&mpt->m_mutex);
9502 9668 }
9503 9669
9504 9670 /*
9505 9671 * set pkt_reason and OR in pkt_statistics flag
9506 9672 */
9507 9673 static void
9508 9674 mptsas_set_pkt_reason(mptsas_t *mpt, mptsas_cmd_t *cmd, uchar_t reason,
9509 9675 uint_t stat)
9510 9676 {
9511 9677 #ifndef __lock_lint
9512 9678 _NOTE(ARGUNUSED(mpt))
9513 9679 #endif
9514 9680
9515 9681 NDBG25(("mptsas_set_pkt_reason: cmd=0x%p reason=%x stat=%x",
9516 9682 (void *)cmd, reason, stat));
9517 9683
9518 9684 if (cmd) {
9519 9685 if (cmd->cmd_pkt->pkt_reason == CMD_CMPLT) {
9520 9686 cmd->cmd_pkt->pkt_reason = reason;
9521 9687 }
9522 9688 cmd->cmd_pkt->pkt_statistics |= stat;
9523 9689 }
9524 9690 }
9525 9691
9526 9692 static void
9527 9693 mptsas_start_watch_reset_delay()
9528 9694 {
9529 9695 NDBG22(("mptsas_start_watch_reset_delay"));
9530 9696
9531 9697 mutex_enter(&mptsas_global_mutex);
9532 9698 if (mptsas_reset_watch == NULL && mptsas_timeouts_enabled) {
9533 9699 mptsas_reset_watch = timeout(mptsas_watch_reset_delay, NULL,
9534 9700 drv_usectohz((clock_t)
9535 9701 MPTSAS_WATCH_RESET_DELAY_TICK * 1000));
9536 9702 ASSERT(mptsas_reset_watch != NULL);
9537 9703 }
9538 9704 mutex_exit(&mptsas_global_mutex);
9539 9705 }
9540 9706
9541 9707 static void
9542 9708 mptsas_setup_bus_reset_delay(mptsas_t *mpt)
9543 9709 {
9544 9710 mptsas_target_t *ptgt = NULL;
9545 9711
9546 9712 ASSERT(MUTEX_HELD(&mpt->m_mutex));
9547 9713
9548 9714 NDBG22(("mptsas_setup_bus_reset_delay"));
9549 9715 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
9550 9716 ptgt = refhash_next(mpt->m_targets, ptgt)) {
9551 9717 mptsas_set_throttle(mpt, ptgt, HOLD_THROTTLE);
9552 9718 ptgt->m_reset_delay = mpt->m_scsi_reset_delay;
9553 9719 }
9554 9720
9555 9721 mptsas_start_watch_reset_delay();
9556 9722 }
9557 9723
9558 9724 /*
9559 9725 * mptsas_watch_reset_delay(_subr) is invoked by timeout() and checks every
9560 9726 * mpt instance for active reset delays
9561 9727 */
9562 9728 static void
9563 9729 mptsas_watch_reset_delay(void *arg)
9564 9730 {
9565 9731 #ifndef __lock_lint
9566 9732 _NOTE(ARGUNUSED(arg))
9567 9733 #endif
9568 9734
9569 9735 mptsas_t *mpt;
9570 9736 int not_done = 0;
9571 9737
9572 9738 NDBG22(("mptsas_watch_reset_delay"));
9573 9739
9574 9740 mutex_enter(&mptsas_global_mutex);
9575 9741 mptsas_reset_watch = 0;
9576 9742 mutex_exit(&mptsas_global_mutex);
9577 9743 rw_enter(&mptsas_global_rwlock, RW_READER);
9578 9744 for (mpt = mptsas_head; mpt != NULL; mpt = mpt->m_next) {
9579 9745 if (mpt->m_tran == 0) {
9580 9746 continue;
9581 9747 }
9582 9748 mutex_enter(&mpt->m_mutex);
9583 9749 not_done += mptsas_watch_reset_delay_subr(mpt);
9584 9750 mutex_exit(&mpt->m_mutex);
9585 9751 }
9586 9752 rw_exit(&mptsas_global_rwlock);
9587 9753
9588 9754 if (not_done) {
9589 9755 mptsas_start_watch_reset_delay();
9590 9756 }
9591 9757 }
9592 9758
9593 9759 static int
9594 9760 mptsas_watch_reset_delay_subr(mptsas_t *mpt)
9595 9761 {
9596 9762 int done = 0;
9597 9763 int restart = 0;
9598 9764 mptsas_target_t *ptgt = NULL;
9599 9765
9600 9766 NDBG22(("mptsas_watch_reset_delay_subr: mpt=0x%p", (void *)mpt));
9601 9767
9602 9768 ASSERT(mutex_owned(&mpt->m_mutex));
9603 9769
9604 9770 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
9605 9771 ptgt = refhash_next(mpt->m_targets, ptgt)) {
9606 9772 if (ptgt->m_reset_delay != 0) {
9607 9773 ptgt->m_reset_delay -=
9608 9774 MPTSAS_WATCH_RESET_DELAY_TICK;
9609 9775 if (ptgt->m_reset_delay <= 0) {
9610 9776 ptgt->m_reset_delay = 0;
9611 9777 mptsas_set_throttle(mpt, ptgt,
9612 9778 MAX_THROTTLE);
9613 9779 restart++;
9614 9780 } else {
9615 9781 done = -1;
9616 9782 }
9617 9783 }
9618 9784 }
9619 9785
9620 9786 if (restart > 0) {
9621 9787 mptsas_restart_hba(mpt);
9622 9788 }
9623 9789 return (done);
9624 9790 }
9625 9791
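The reset-delay watch amounts to a per-target countdown that is re-armed for as long as any target is still counting, and that restores full throttle (and restarts the HBA) once a target's delay reaches zero. Modeled in isolation below, with an illustrative tick size, a plain return value instead of the driver's done/-1 convention, and no restart call:

    #include <stdio.h>

    #define	RESET_DELAY_TICK	125	/* ms per watch pass, illustrative */

    struct tgt {
    	int	reset_delay;	/* ms remaining */
    	int	throttle;
    };

    /* Returns nonzero when the watch must be re-armed (a delay still runs). */
    static int
    watch_reset_delay_subr(struct tgt *tgts, int ntgts, int max_throttle)
    {
    	int not_done = 0;

    	for (int i = 0; i < ntgts; i++) {
    		if (tgts[i].reset_delay == 0)
    			continue;
    		tgts[i].reset_delay -= RESET_DELAY_TICK;
    		if (tgts[i].reset_delay <= 0) {
    			tgts[i].reset_delay = 0;
    			tgts[i].throttle = max_throttle;	/* resume I/O */
    		} else {
    			not_done = 1;			/* keep watching */
    		}
    	}
    	return (not_done);
    }

    int
    main(void)
    {
    	struct tgt tgts[2] = { { 250, 0 }, { 125, 0 } };
    	int passes = 0;

    	do {
    		passes++;
    	} while (watch_reset_delay_subr(tgts, 2, 32));
    	printf("done after %d passes, throttles %d %d\n",
    	    passes, tgts[0].throttle, tgts[1].throttle);	/* 2, 32, 32 */
    	return (0);
    }
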
9626 9792 #ifdef MPTSAS_TEST
9627 9793 static void
9628 9794 mptsas_test_reset(mptsas_t *mpt, int target)
9629 9795 {
9630 9796 mptsas_target_t *ptgt = NULL;
9631 9797
9632 9798 if (mptsas_rtest == target) {
9633 9799 if (mptsas_do_scsi_reset(mpt, target) == TRUE) {
9634 9800 mptsas_rtest = -1;
9635 9801 }
9636 9802 if (mptsas_rtest == -1) {
9637 9803 NDBG22(("mptsas_test_reset success"));
9638 9804 }
9639 9805 }
9640 9806 }
9641 9807 #endif
9642 9808
9643 9809 /*
9644 9810 * abort handling:
9645 9811 *
9646 9812 * Notes:
9647 9813 * - if pkt is not NULL, abort just that command
9648 9814 * - if pkt is NULL, abort all outstanding commands for target
9649 9815 */
9650 9816 static int
9651 9817 mptsas_scsi_abort(struct scsi_address *ap, struct scsi_pkt *pkt)
9652 9818 {
9653 9819 mptsas_t *mpt = ADDR2MPT(ap);
9654 9820 int rval;
9655 9821 mptsas_tgt_private_t *tgt_private;
9656 9822 int target, lun;
9657 9823
9658 9824 tgt_private = (mptsas_tgt_private_t *)ap->a_hba_tran->
9659 9825 tran_tgt_private;
9660 9826 ASSERT(tgt_private != NULL);
9661 9827 target = tgt_private->t_private->m_devhdl;
9662 9828 lun = tgt_private->t_lun;
9663 9829
9664 9830 NDBG23(("mptsas_scsi_abort: target=%d.%d", target, lun));
9665 9831
9666 9832 mutex_enter(&mpt->m_mutex);
9667 9833 rval = mptsas_do_scsi_abort(mpt, target, lun, pkt);
9668 9834 mutex_exit(&mpt->m_mutex);
9669 9835 return (rval);
9670 9836 }
9671 9837
9672 9838 static int
9673 9839 mptsas_do_scsi_abort(mptsas_t *mpt, int target, int lun, struct scsi_pkt *pkt)
9674 9840 {
9675 9841 mptsas_cmd_t *sp = NULL;
9676 9842 mptsas_slots_t *slots = mpt->m_active;
9677 9843 int rval = FALSE;
9678 9844
9679 9845 ASSERT(mutex_owned(&mpt->m_mutex));
9680 9846
9681 9847 /*
9682 9848 * Abort the command pkt on the target/lun in ap. If pkt is
9683 9849 * NULL, abort all outstanding commands on that target/lun.
9684 9850 * If you can abort them, return 1, else return 0.
9685 9851 * Each packet that's aborted should be sent back to the target
9686 9852 * driver through the callback routine, with pkt_reason set to
9687 9853 * CMD_ABORTED.
9688 9854 *
9689 9855 * abort cmd pkt on HBA hardware; clean out of outstanding
9690 9856 * command lists, etc.
9691 9857 */
9692 9858 if (pkt != NULL) {
9693 9859 /* abort the specified packet */
9694 9860 sp = PKT2CMD(pkt);
9695 9861
9862 +#ifdef MPTSAS_FAULTINJECTION
9863 + /* Command already on the list. */
9864 + if (((pkt->pkt_flags & FLAG_PKT_TIMEOUT) != 0) &&
9865 + (sp->cmd_active_expiration != 0)) {
9866 + mptsas_fminj_move_cmd_to_doneq(mpt, sp, CMD_ABORTED,
9867 + STAT_ABORTED);
9868 + rval = TRUE;
9869 + goto done;
9870 + }
9871 +#endif
9872 +
9696 9873 if (sp->cmd_queued) {
9697 9874 NDBG23(("mptsas_do_scsi_abort: queued sp=0x%p aborted",
9698 9875 (void *)sp));
9699 9876 mptsas_waitq_delete(mpt, sp);
9700 9877 mptsas_set_pkt_reason(mpt, sp, CMD_ABORTED,
9701 9878 STAT_ABORTED);
9702 9879 mptsas_doneq_add(mpt, sp);
9703 9880 rval = TRUE;
9704 9881 goto done;
9705 9882 }
9706 9883
9707 9884 /*
9708 9885 * Have mpt firmware abort this command
9709 9886 */
9710 9887
9711 9888 if (slots->m_slot[sp->cmd_slot] != NULL) {
9712 9889 rval = mptsas_ioc_task_management(mpt,
9713 9890 MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, target,
9714 9891 lun, NULL, 0, 0);
9715 9892
9716 9893 /*
9717 9894 * The transport layer expects only TRUE and FALSE.
9718 9895 * Therefore, if mptsas_ioc_task_management returns
9719 9896 * FAILED we will return FALSE.
9720 9897 */
9721 9898 if (rval == FAILED)
9722 9899 rval = FALSE;
9723 9900 goto done;
9724 9901 }
9725 9902 }
9726 9903
9727 9904 /*
9728 9905 * If pkt is NULL then abort task set
9729 9906 */
9730 9907 rval = mptsas_ioc_task_management(mpt,
9731 9908 MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET, target, lun, NULL, 0, 0);
9732 9909
9733 9910 /*
9734 9911 * The transport layer expects only TRUE and FALSE.
9735 9912 * Therefore, if mptsas_ioc_task_management returns
9736 9913 * FAILED we will return FALSE.
9737 9914 */
9738 9915 if (rval == FAILED)
9739 9916 rval = FALSE;
9740 9917
9741 9918 #ifdef MPTSAS_TEST
9742 9919 if (rval && mptsas_test_stop) {
9743 9920 debug_enter("mptsas_do_scsi_abort");
9744 9921 }
9745 9922 #endif
9746 9923
9747 9924 done:
9748 9925 mptsas_doneq_empty(mpt);
9749 9926 return (rval);
9750 9927 }
9751 9928
9752 9929 /*
9753 9930 * capability handling:
9754 9931 * (*tran_getcap). Get the capability named, and return its value.
9755 9932 */
9756 9933 static int
9757 9934 mptsas_scsi_getcap(struct scsi_address *ap, char *cap, int tgtonly)
9758 9935 {
9759 9936 mptsas_t *mpt = ADDR2MPT(ap);
9760 9937 int ckey;
9761 9938 int rval = FALSE;
9762 9939
9763 9940 NDBG24(("mptsas_scsi_getcap: target=%d, cap=%s tgtonly=%x",
9764 9941 ap->a_target, cap, tgtonly));
9765 9942
9766 9943 mutex_enter(&mpt->m_mutex);
9767 9944
9768 9945 if ((mptsas_scsi_capchk(cap, tgtonly, &ckey)) != TRUE) {
9769 9946 mutex_exit(&mpt->m_mutex);
9770 9947 return (UNDEFINED);
9771 9948 }
9772 9949
9773 9950 switch (ckey) {
9774 9951 case SCSI_CAP_DMA_MAX:
9775 9952 rval = (int)mpt->m_msg_dma_attr.dma_attr_maxxfer;
9776 9953 break;
9777 9954 case SCSI_CAP_ARQ:
9778 9955 rval = TRUE;
9779 9956 break;
9780 9957 case SCSI_CAP_MSG_OUT:
9781 9958 case SCSI_CAP_PARITY:
9782 9959 case SCSI_CAP_UNTAGGED_QING:
9783 9960 rval = TRUE;
9784 9961 break;
9785 9962 case SCSI_CAP_TAGGED_QING:
9786 9963 rval = TRUE;
9787 9964 break;
9788 9965 case SCSI_CAP_RESET_NOTIFICATION:
9789 9966 rval = TRUE;
9790 9967 break;
9791 9968 case SCSI_CAP_LINKED_CMDS:
9792 9969 rval = FALSE;
9793 9970 break;
9794 9971 case SCSI_CAP_QFULL_RETRIES:
9795 9972 rval = ((mptsas_tgt_private_t *)(ap->a_hba_tran->
9796 9973 tran_tgt_private))->t_private->m_qfull_retries;
9797 9974 break;
9798 9975 case SCSI_CAP_QFULL_RETRY_INTERVAL:
9799 9976 rval = drv_hztousec(((mptsas_tgt_private_t *)
9800 9977 (ap->a_hba_tran->tran_tgt_private))->
9801 9978 t_private->m_qfull_retry_interval) / 1000;
9802 9979 break;
9803 9980 case SCSI_CAP_CDB_LEN:
9804 9981 rval = CDB_GROUP4;
9805 9982 break;
9806 9983 case SCSI_CAP_INTERCONNECT_TYPE:
9807 9984 rval = INTERCONNECT_SAS;
9808 9985 break;
9809 9986 case SCSI_CAP_TRAN_LAYER_RETRIES:
9810 9987 if (mpt->m_ioc_capabilities &
9811 9988 MPI2_IOCFACTS_CAPABILITY_TLR)
9812 9989 rval = TRUE;
9813 9990 else
9814 9991 rval = FALSE;
9815 9992 break;
9816 9993 default:
9817 9994 rval = UNDEFINED;
9818 9995 break;
9819 9996 }
9820 9997
9821 9998 NDBG24(("mptsas_scsi_getcap: %s, rval=%x", cap, rval));
9822 9999
9823 10000 mutex_exit(&mpt->m_mutex);
9824 10001 return (rval);
9825 10002 }
9826 10003
9827 10004 /*
9828 10005 * (*tran_setcap). Set the capability named to the value given.
9829 10006 */
9830 10007 static int
9831 10008 mptsas_scsi_setcap(struct scsi_address *ap, char *cap, int value, int tgtonly)
9832 10009 {
9833 10010 mptsas_t *mpt = ADDR2MPT(ap);
9834 10011 int ckey;
9835 10012 int rval = FALSE;
9836 10013
9837 10014 NDBG24(("mptsas_scsi_setcap: target=%d, cap=%s value=%x tgtonly=%x",
9838 10015 ap->a_target, cap, value, tgtonly));
9839 10016
9840 10017 if (!tgtonly) {
9841 10018 return (rval);
9842 10019 }
9843 10020
9844 10021 mutex_enter(&mpt->m_mutex);
9845 10022
9846 10023 if ((mptsas_scsi_capchk(cap, tgtonly, &ckey)) != TRUE) {
9847 10024 mutex_exit(&mpt->m_mutex);
9848 10025 return (UNDEFINED);
9849 10026 }
9850 10027
9851 10028 switch (ckey) {
9852 10029 case SCSI_CAP_DMA_MAX:
9853 10030 case SCSI_CAP_MSG_OUT:
9854 10031 case SCSI_CAP_PARITY:
9855 10032 case SCSI_CAP_INITIATOR_ID:
9856 10033 case SCSI_CAP_LINKED_CMDS:
9857 10034 case SCSI_CAP_UNTAGGED_QING:
9858 10035 case SCSI_CAP_RESET_NOTIFICATION:
9859 10036 /*
9860 10037 * None of these are settable via
9861 10038 * the capability interface.
9862 10039 */
9863 10040 break;
9864 10041 case SCSI_CAP_ARQ:
9865 10042 /*
9866 10043 * We cannot turn off arq so return false if asked to
9867 10044 */
9868 10045 if (value) {
9869 10046 rval = TRUE;
9870 10047 } else {
9871 10048 rval = FALSE;
9872 10049 }
9873 10050 break;
9874 10051 case SCSI_CAP_TAGGED_QING:
9875 10052 mptsas_set_throttle(mpt, ((mptsas_tgt_private_t *)
9876 10053 (ap->a_hba_tran->tran_tgt_private))->t_private,
9877 10054 MAX_THROTTLE);
9878 10055 rval = TRUE;
9879 10056 break;
9880 10057 case SCSI_CAP_QFULL_RETRIES:
9881 10058 ((mptsas_tgt_private_t *)(ap->a_hba_tran->tran_tgt_private))->
9882 10059 t_private->m_qfull_retries = (uchar_t)value;
9883 10060 rval = TRUE;
9884 10061 break;
9885 10062 case SCSI_CAP_QFULL_RETRY_INTERVAL:
9886 10063 ((mptsas_tgt_private_t *)(ap->a_hba_tran->tran_tgt_private))->
9887 10064 t_private->m_qfull_retry_interval =
9888 10065 drv_usectohz(value * 1000);
9889 10066 rval = TRUE;
9890 10067 break;
9891 10068 default:
9892 10069 rval = UNDEFINED;
9893 10070 break;
9894 10071 }
9895 10072 mutex_exit(&mpt->m_mutex);
9896 10073 return (rval);
9897 10074 }
9898 10075
9899 10076 /*
9900 10077 * Utility routine for mptsas_ifsetcap/ifgetcap
9901 10078 */
9902 10079 /*ARGSUSED*/
9903 10080 static int
9904 10081 mptsas_scsi_capchk(char *cap, int tgtonly, int *cidxp)
9905 10082 {
9906 10083 NDBG24(("mptsas_scsi_capchk: cap=%s", cap));
9907 10084
9908 10085 if (!cap)
9909 10086 return (FALSE);
9910 10087
9911 10088 *cidxp = scsi_hba_lookup_capstr(cap);
9912 10089 return (TRUE);
9913 10090 }
9914 10091
9915 10092 static int
9916 10093 mptsas_alloc_active_slots(mptsas_t *mpt, int flag)
9917 10094 {
9918 10095 mptsas_slots_t *old_active = mpt->m_active;
9919 10096 mptsas_slots_t *new_active;
9920 10097 size_t size;
9921 10098
9922 10099 /*
9923 10100 * if there are active commands, then we cannot
9924 10101 * change size of active slots array.
9925 10102 */
9926 10103 ASSERT(mpt->m_ncmds == 0);
9927 10104
9928 10105 size = MPTSAS_SLOTS_SIZE(mpt);
9929 10106 new_active = kmem_zalloc(size, flag);
9930 10107 if (new_active == NULL) {
9931 10108 NDBG1(("new active alloc failed"));
9932 10109 return (-1);
9933 10110 }
9934 10111 /*
9935 10112 * Since SMID 0 is reserved and the TM slot is reserved, the
9936 10113 * number of slots that can be used at any one time is
9937 10114 * m_max_requests - 2.
9938 10115 */
9939 10116 new_active->m_n_normal = (mpt->m_max_requests - 2);
9940 10117 new_active->m_size = size;
9941 10118 new_active->m_rotor = 1;
9942 10119 if (old_active)
9943 10120 mptsas_free_active_slots(mpt);
9944 10121 mpt->m_active = new_active;
9945 10122
9946 10123 return (0);
9947 10124 }
9948 10125
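The arithmetic above is worth spelling out: SMID 0 is never handed to the IOC and one more slot is held back for task-management requests, so the usable depth is m_max_requests - 2. A toy allocator along those lines, with plain calloc() in place of kmem_zalloc() and an invented struct layout:

    #include <stdio.h>
    #include <stdlib.h>

    struct slots {
    	size_t	n_normal;	/* usable SMIDs for normal I/O */
    	void	**slot;		/* indexed by SMID; slot[0] is never used */
    };

    static struct slots *
    alloc_active_slots(size_t max_requests)
    {
    	struct slots *s = calloc(1, sizeof (*s));

    	if (s == NULL)
    		return (NULL);
    	/* SMID 0 and the task-management slot are reserved. */
    	s->n_normal = max_requests - 2;
    	s->slot = calloc(max_requests, sizeof (void *));
    	if (s->slot == NULL) {
    		free(s);
    		return (NULL);
    	}
    	return (s);
    }

    int
    main(void)
    {
    	struct slots *s = alloc_active_slots(1024);

    	if (s != NULL) {
    		printf("usable slots: %zu\n", s->n_normal);	/* 1022 */
    		free(s->slot);
    		free(s);
    	}
    	return (0);
    }
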
9949 10126 static void
9950 10127 mptsas_free_active_slots(mptsas_t *mpt)
9951 10128 {
9952 10129 mptsas_slots_t *active = mpt->m_active;
9953 10130 size_t size;
9954 10131
9955 10132 if (active == NULL)
9956 10133 return;
9957 10134 size = active->m_size;
9958 10135 kmem_free(active, size);
9959 10136 mpt->m_active = NULL;
9960 10137 }
9961 10138
9962 10139 /*
9963 10140 * Error logging, printing, and debug print routines.
9964 10141 */
9965 10142 static char *mptsas_label = "mpt_sas";
9966 10143
9967 10144 /*PRINTFLIKE3*/
9968 10145 void
9969 10146 mptsas_log(mptsas_t *mpt, int level, char *fmt, ...)
9970 10147 {
9971 10148 dev_info_t *dev;
9972 10149 va_list ap;
9973 10150
9974 10151 if (mpt) {
9975 10152 dev = mpt->m_dip;
9976 10153 } else {
9977 10154 dev = 0;
9978 10155 }
9979 10156
9980 10157 mutex_enter(&mptsas_log_mutex);
9981 10158
9982 10159 va_start(ap, fmt);
9983 10160 (void) vsprintf(mptsas_log_buf, fmt, ap);
9984 10161 va_end(ap);
9985 10162
9986 - if (level == CE_CONT) {
9987 - scsi_log(dev, mptsas_label, level, "%s\n", mptsas_log_buf);
10163 + if (level == CE_CONT || level == CE_NOTE) {
10164 + scsi_log(dev, mptsas_label, level, "!%s\n", mptsas_log_buf);
9988 10165 } else {
9989 - scsi_log(dev, mptsas_label, level, "%s", mptsas_log_buf);
10166 + scsi_log(dev, mptsas_label, level, "!%s", mptsas_log_buf);
9990 10167 }
9991 10168
9992 10169 mutex_exit(&mptsas_log_mutex);
9993 10170 }
9994 10171
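The change above prepends '!' to every format handed to scsi_log(); on illumos that leading character conventionally routes the message to the system log only and keeps it off the console, which appears to be the point of the change, and CE_CONT/CE_NOTE additionally get an explicit newline. A small user-space stand-in for that wrapper, with printf() playing the role of scsi_log() and an invented buffer size:

    #include <stdarg.h>
    #include <stdio.h>

    /*
     * Model of the log wrapper: format into a private buffer, then hand the
     * result to the logger with a leading '!' so it stays off the console.
     */
    static void
    demo_log(int continuation, const char *fmt, ...)
    {
    	char buf[256];
    	va_list ap;

    	va_start(ap, fmt);
    	(void) vsnprintf(buf, sizeof (buf), fmt, ap);
    	va_end(ap);

    	if (continuation)
    		printf("!%s\n", buf);	/* CE_CONT/CE_NOTE get a newline */
    	else
    		printf("!%s", buf);
    }

    int
    main(void)
    {
    	demo_log(1, "config header request timeout");
    	demo_log(0, "Disconnected command timeout for target %d", 12);
    	printf("\n");
    	return (0);
    }
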
9995 10172 #ifdef MPTSAS_DEBUG
9996 10173 /*
9997 10174 * Use a circular buffer to log messages to private memory.
 9998 10175  * Increment idx atomically to minimize the risk of missing lines.
9999 10176 * It's fast and does not hold up the proceedings too much.
10000 10177 */
10001 10178 static const size_t mptsas_dbglog_linecnt = MPTSAS_DBGLOG_LINECNT;
10002 10179 static const size_t mptsas_dbglog_linelen = MPTSAS_DBGLOG_LINELEN;
10003 10180 static char mptsas_dbglog_bufs[MPTSAS_DBGLOG_LINECNT][MPTSAS_DBGLOG_LINELEN];
10004 10181 static uint32_t mptsas_dbglog_idx = 0;
10005 10182
10006 10183 /*PRINTFLIKE1*/
10007 10184 void
10008 10185 mptsas_debug_log(char *fmt, ...)
10009 10186 {
10010 10187 va_list ap;
10011 10188 uint32_t idx;
10012 10189
10013 10190 idx = atomic_inc_32_nv(&mptsas_dbglog_idx) &
10014 10191 (mptsas_dbglog_linecnt - 1);
10015 10192
10016 10193 va_start(ap, fmt);
10017 10194 (void) vsnprintf(mptsas_dbglog_bufs[idx],
10018 10195 mptsas_dbglog_linelen, fmt, ap);
10019 10196 va_end(ap);
10020 10197 }
10021 10198
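The private debug log above is a lock-free ring: one atomic increment claims a line index, and masking with linecnt - 1 wraps it, which only works because the line count is a power of two. The sketch below reproduces the idea with C11 atomics; it masks the pre-increment value where the driver masks the post-increment one (the ring behavior is the same), and the sizes are illustrative.

    #include <stdarg.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define	DBGLOG_LINECNT	8	/* must be a power of two for the mask */
    #define	DBGLOG_LINELEN	64

    static char dbglog_bufs[DBGLOG_LINECNT][DBGLOG_LINELEN];
    static atomic_uint dbglog_idx;

    /* Each caller claims a distinct line with one atomic add; no lock needed. */
    static void
    debug_log(const char *fmt, ...)
    {
    	unsigned idx = atomic_fetch_add(&dbglog_idx, 1) & (DBGLOG_LINECNT - 1);
    	va_list ap;

    	va_start(ap, fmt);
    	(void) vsnprintf(dbglog_bufs[idx], DBGLOG_LINELEN, fmt, ap);
    	va_end(ap);
    }

    int
    main(void)
    {
    	for (int i = 0; i < 20; i++)
    		debug_log("event %d", i);
    	/* Only the last DBGLOG_LINECNT entries survive in the ring. */
    	for (int i = 0; i < DBGLOG_LINECNT; i++)
    		printf("%s\n", dbglog_bufs[i]);
    	return (0);
    }
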
10022 10199 /*PRINTFLIKE1*/
10023 10200 void
10024 10201 mptsas_printf(char *fmt, ...)
10025 10202 {
10026 10203 dev_info_t *dev = 0;
10027 10204 va_list ap;
10028 10205
10029 10206 mutex_enter(&mptsas_log_mutex);
10030 10207
10031 10208 va_start(ap, fmt);
10032 10209 (void) vsprintf(mptsas_log_buf, fmt, ap);
10033 10210 va_end(ap);
10034 10211
10035 10212 #ifdef PROM_PRINTF
10036 10213 prom_printf("%s:\t%s\n", mptsas_label, mptsas_log_buf);
10037 10214 #else
10038 10215 scsi_log(dev, mptsas_label, CE_CONT, "!%s\n", mptsas_log_buf);
10039 10216 #endif
10040 10217 mutex_exit(&mptsas_log_mutex);
10041 10218 }
10042 10219 #endif
10043 10220
10044 10221 /*
10045 10222 * timeout handling
10046 10223 */
10047 10224 static void
10048 10225 mptsas_watch(void *arg)
10049 10226 {
10050 10227 #ifndef __lock_lint
10051 10228 _NOTE(ARGUNUSED(arg))
10052 10229 #endif
10053 10230
10054 10231 mptsas_t *mpt;
10055 10232 uint32_t doorbell;
10056 10233
10234 +#ifdef MPTSAS_FAULTINJECTION
10235 + struct mptsas_active_cmdq finj_cmds;
10236 +
10237 + TAILQ_INIT(&finj_cmds);
10238 +#endif
10239 +
10057 10240 NDBG30(("mptsas_watch"));
10058 10241
10059 10242 rw_enter(&mptsas_global_rwlock, RW_READER);
10060 10243 for (mpt = mptsas_head; mpt != (mptsas_t *)NULL; mpt = mpt->m_next) {
10061 10244
10062 10245 mutex_enter(&mpt->m_mutex);
10063 10246
10064 10247 /* Skip device if not powered on */
10065 10248 if (mpt->m_options & MPTSAS_OPT_PM) {
10066 10249 if (mpt->m_power_level == PM_LEVEL_D0) {
10067 10250 (void) pm_busy_component(mpt->m_dip, 0);
10068 10251 mpt->m_busy = 1;
10069 10252 } else {
10070 10253 mutex_exit(&mpt->m_mutex);
10071 10254 continue;
10072 10255 }
10073 10256 }
10074 10257
10075 10258 /*
10076 10259 * Check if controller is in a FAULT state. If so, reset it.
10077 10260 */
10078 10261 doorbell = ddi_get32(mpt->m_datap, &mpt->m_reg->Doorbell);
10079 10262 if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
10080 10263 doorbell &= MPI2_DOORBELL_DATA_MASK;
10081 10264 mptsas_log(mpt, CE_WARN, "MPT Firmware Fault, "
10082 10265 "code: %04x", doorbell);
10083 10266 mpt->m_softstate &= ~MPTSAS_SS_MSG_UNIT_RESET;
10084 10267 if ((mptsas_restart_ioc(mpt)) == DDI_FAILURE) {
10085 10268 				mptsas_log(mpt, CE_WARN, "Reset failed "
10086 10269 "after fault was detected");
10087 10270 }
10088 10271 }
10089 10272
10090 10273 /*
10091 10274 * For now, always call mptsas_watchsubr.
10092 10275 */
10093 10276 mptsas_watchsubr(mpt);
10094 10277
10095 10278 if (mpt->m_options & MPTSAS_OPT_PM) {
10096 10279 mpt->m_busy = 0;
10097 10280 (void) pm_idle_component(mpt->m_dip, 0);
10098 10281 }
10099 10282
10283 +#ifdef MPTSAS_FAULTINJECTION
10284 + mptsas_fminj_watchsubr(mpt, &finj_cmds);
10285 +#endif
10286 +
10100 10287 mutex_exit(&mpt->m_mutex);
10101 10288 }
10102 10289 rw_exit(&mptsas_global_rwlock);
10103 10290
10104 10291 mutex_enter(&mptsas_global_mutex);
10105 10292 if (mptsas_timeouts_enabled)
10106 10293 mptsas_timeout_id = timeout(mptsas_watch, NULL, mptsas_tick);
10107 10294 mutex_exit(&mptsas_global_mutex);
10295 +
10296 +#ifdef MPTSAS_FAULTINJECTION
10297 + /* Complete all completed commands. */
10298 + if (!TAILQ_EMPTY(&finj_cmds)) {
10299 + mptsas_cmd_t *cmd;
10300 +
10301 + while ((cmd = TAILQ_FIRST(&finj_cmds)) != NULL) {
10302 + TAILQ_REMOVE(&finj_cmds, cmd, cmd_active_link);
10303 + struct scsi_pkt *pkt = cmd->cmd_pkt;
10304 +
10305 + if (pkt->pkt_comp != NULL) {
10306 + (*pkt->pkt_comp)(pkt);
10307 + }
10308 + }
10309 + }
10310 +#endif
10108 10311 }
10109 10312
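The fault-injection path above gathers finished commands on a stack-local list while the per-instance mutex is held and only invokes their pkt_comp callbacks after every lock has been dropped, presumably so a completion callback can call back into the driver without lock-ordering trouble. A stand-alone model of that hand-off, with a pthread mutex standing in for the kernel mutex and invented command and callback types:

    #include <pthread.h>
    #include <stdio.h>

    struct cmd {
    	struct cmd	*next;
    	void		(*comp)(struct cmd *);
    	int		id;
    };

    static pthread_mutex_t hba_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct cmd *active_list;

    static void
    comp_fn(struct cmd *c)
    {
    	printf("completed cmd %d\n", c->id);
    }

    static void
    watch_pass(void)
    {
    	struct cmd *done = NULL, *c;

    	/* Collect completions on a local list while the lock is held. */
    	pthread_mutex_lock(&hba_lock);
    	while ((c = active_list) != NULL) {	/* pretend all finished */
    		active_list = c->next;
    		c->next = done;
    		done = c;
    	}
    	pthread_mutex_unlock(&hba_lock);

    	/* Callbacks run lock-free, mirroring the pkt_comp loop above. */
    	while ((c = done) != NULL) {
    		done = c->next;
    		if (c->comp != NULL)
    			c->comp(c);
    	}
    }

    int
    main(void)
    {
    	struct cmd a = { NULL, comp_fn, 1 }, b = { &a, comp_fn, 2 };

    	active_list = &b;
    	watch_pass();	/* prints "completed cmd 1" then "completed cmd 2" */
    	return (0);
    }
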
10110 10313 static void
10111 10314 mptsas_watchsubr_tgt(mptsas_t *mpt, mptsas_target_t *ptgt, hrtime_t timestamp)
10112 10315 {
10113 10316 mptsas_cmd_t *cmd;
10114 10317
10115 10318 /*
10116 10319 * If we were draining due to a qfull condition,
10117 10320 * go back to full throttle.
10118 10321 */
10119 10322 if ((ptgt->m_t_throttle < MAX_THROTTLE) &&
10120 10323 (ptgt->m_t_throttle > HOLD_THROTTLE) &&
10121 10324 (ptgt->m_t_ncmds < ptgt->m_t_throttle)) {
10122 10325 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
10123 10326 mptsas_restart_hba(mpt);
10124 10327 }
10125 10328
10126 10329 cmd = TAILQ_LAST(&ptgt->m_active_cmdq, mptsas_active_cmdq);
10127 10330 if (cmd == NULL)
10128 10331 return;
10129 10332
10130 10333 if (cmd->cmd_active_expiration <= timestamp) {
10131 10334 /*
10132 10335 * Earliest command timeout expired. Drain throttle.
10133 10336 */
10134 10337 mptsas_set_throttle(mpt, ptgt, DRAIN_THROTTLE);
10135 10338
10136 10339 /*
10137 10340 * Check for remaining commands.
10138 10341 */
10139 10342 cmd = TAILQ_FIRST(&ptgt->m_active_cmdq);
10140 10343 if (cmd->cmd_active_expiration > timestamp) {
10141 10344 /*
10142 10345 * Wait for remaining commands to complete or
10143 10346 * time out.
10144 10347 */
10145 10348 NDBG23(("command timed out, pending drain"));
10146 10349 return;
10147 10350 }
10148 10351
10149 10352 /*
10150 10353 * All command timeouts expired.
10151 10354 */
10152 10355 mptsas_log(mpt, CE_NOTE, "Timeout of %d seconds "
10153 10356 "expired with %d commands on target %d lun %d.",
10154 10357 cmd->cmd_pkt->pkt_time, ptgt->m_t_ncmds,
10155 10358 ptgt->m_devhdl, Lun(cmd));
10156 10359
10157 10360 mptsas_cmd_timeout(mpt, ptgt);
10158 10361 } else if (cmd->cmd_active_expiration <=
10159 10362 timestamp + (hrtime_t)mptsas_scsi_watchdog_tick * NANOSEC) {
10160 10363 NDBG23(("pending timeout"));
10161 10364 mptsas_set_throttle(mpt, ptgt, DRAIN_THROTTLE);
10162 10365 }
10163 10366 }
10164 10367
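The per-target check above relies on m_active_cmdq being kept ordered by expiration, with the command that expires last at the head and the one that expires first at the tail, so looking at the two ends is enough to choose between doing nothing, draining while some commands still have time, and declaring a full timeout. A compact model of that decision; a sorted array stands in for the tail queue and all names are illustrative:

    #include <stdio.h>

    typedef long long hrtime_model_t;	/* stands in for hrtime_t */

    enum action { NOTHING, DRAIN_AND_WAIT, TIMEOUT_RECOVERY };

    /* exp[] holds absolute expirations, latest-expiring first. */
    static enum action
    check_target(const hrtime_model_t *exp, int ncmds, hrtime_model_t now)
    {
    	if (ncmds == 0)
    		return (NOTHING);
    	/* exp[ncmds - 1] is the earliest expiration (the list tail). */
    	if (exp[ncmds - 1] > now)
    		return (NOTHING);
    	/* exp[0] is the latest expiration (the list head). */
    	if (exp[0] > now)
    		return (DRAIN_AND_WAIT);	/* some commands still have time */
    	return (TIMEOUT_RECOVERY);		/* every command has expired */
    }

    int
    main(void)
    {
    	hrtime_model_t exps[3] = { 900, 500, 100 };	/* latest first */

    	printf("%d\n", check_target(exps, 3, 50));	/* 0: nothing due */
    	printf("%d\n", check_target(exps, 3, 200));	/* 1: drain and wait */
    	printf("%d\n", check_target(exps, 3, 1000));	/* 2: reset the target */
    	return (0);
    }
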
10165 10368 static void
10166 10369 mptsas_watchsubr(mptsas_t *mpt)
10167 10370 {
10168 10371 int i;
10169 10372 mptsas_cmd_t *cmd;
10170 10373 mptsas_target_t *ptgt = NULL;
10171 10374 hrtime_t timestamp = gethrtime();
10172 10375
10173 10376 ASSERT(MUTEX_HELD(&mpt->m_mutex));
10174 10377
10175 10378 NDBG30(("mptsas_watchsubr: mpt=0x%p", (void *)mpt));
10176 10379
10177 10380 #ifdef MPTSAS_TEST
10178 10381 if (mptsas_enable_untagged) {
10179 10382 mptsas_test_untagged++;
10180 10383 }
10181 10384 #endif
10182 10385
10183 10386 /*
10184 10387 * Check for commands stuck in active slot
10185 10388 * Account for TM requests, which use the last SMID.
10186 10389 */
10187 10390 for (i = 0; i <= mpt->m_active->m_n_normal; i++) {
10188 10391 if ((cmd = mpt->m_active->m_slot[i]) != NULL) {
10189 10392 if (cmd->cmd_active_expiration <= timestamp) {
10190 10393 if ((cmd->cmd_flags & CFLAG_CMDIOC) == 0) {
10191 10394 /*
10192 10395 * There seems to be a command stuck
10193 10396 * in the active slot. Drain throttle.
10194 10397 */
10195 10398 mptsas_set_throttle(mpt,
10196 10399 cmd->cmd_tgt_addr,
10197 10400 DRAIN_THROTTLE);
10198 10401 } else if (cmd->cmd_flags &
10199 10402 (CFLAG_PASSTHRU | CFLAG_CONFIG |
10200 10403 CFLAG_FW_DIAG)) {
10201 10404 /*
10202 10405 * passthrough command timeout
10203 10406 */
10204 10407 cmd->cmd_flags |= (CFLAG_FINISHED |
10205 10408 CFLAG_TIMEOUT);
10206 10409 cv_broadcast(&mpt->m_passthru_cv);
10207 10410 cv_broadcast(&mpt->m_config_cv);
10208 10411 cv_broadcast(&mpt->m_fw_diag_cv);
10209 10412 }
10210 10413 }
10211 10414 }
10212 10415 }
10213 10416
10214 10417 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
10215 10418 ptgt = refhash_next(mpt->m_targets, ptgt)) {
10216 10419 mptsas_watchsubr_tgt(mpt, ptgt, timestamp);
10217 10420 }
10218 10421
10219 10422 for (ptgt = refhash_first(mpt->m_tmp_targets); ptgt != NULL;
10220 10423 ptgt = refhash_next(mpt->m_tmp_targets, ptgt)) {
10221 10424 mptsas_watchsubr_tgt(mpt, ptgt, timestamp);
10222 10425 }
10223 10426 }
10224 10427
10225 10428 /*
10226 10429 * timeout recovery
10227 10430 */
10228 10431 static void
10229 10432 mptsas_cmd_timeout(mptsas_t *mpt, mptsas_target_t *ptgt)
10230 10433 {
10231 10434 uint16_t devhdl;
10232 10435 uint64_t sas_wwn;
10233 10436 uint8_t phy;
10234 10437 char wwn_str[MPTSAS_WWN_STRLEN];
10235 10438
10236 10439 devhdl = ptgt->m_devhdl;
10237 10440 sas_wwn = ptgt->m_addr.mta_wwn;
10238 10441 phy = ptgt->m_phynum;
10239 10442 if (sas_wwn == 0) {
10240 10443 (void) sprintf(wwn_str, "p%x", phy);
10241 10444 } else {
10242 10445 (void) sprintf(wwn_str, "w%016"PRIx64, sas_wwn);
10243 10446 }
10244 10447
10245 10448 NDBG29(("mptsas_cmd_timeout: target=%d", devhdl));
10246 10449 mptsas_log(mpt, CE_WARN, "Disconnected command timeout for "
10247 10450 "target %d %s, enclosure %u", devhdl, wwn_str,
10248 10451 ptgt->m_enclosure);
10249 10452
10250 10453 /*
10251 10454 * Abort all outstanding commands on the device.
10252 10455 */
10253 10456 NDBG29(("mptsas_cmd_timeout: device reset"));
10254 10457 if (mptsas_do_scsi_reset(mpt, devhdl) != TRUE) {
10255 10458 mptsas_log(mpt, CE_WARN, "Target %d reset for command timeout "
10256 10459 "recovery failed!", devhdl);
10257 10460 }
10258 10461 }
10259 10462
10260 10463 /*
10261 10464 * Device / Hotplug control
10262 10465 */
10263 10466 static int
10264 10467 mptsas_scsi_quiesce(dev_info_t *dip)
10265 10468 {
10266 10469 mptsas_t *mpt;
10267 10470 scsi_hba_tran_t *tran;
10268 10471
10269 10472 tran = ddi_get_driver_private(dip);
10270 10473 if (tran == NULL || (mpt = TRAN2MPT(tran)) == NULL)
10271 10474 return (-1);
10272 10475
10273 10476 return (mptsas_quiesce_bus(mpt));
10274 10477 }
10275 10478
10276 10479 static int
10277 10480 mptsas_scsi_unquiesce(dev_info_t *dip)
10278 10481 {
10279 10482 mptsas_t *mpt;
10280 10483 scsi_hba_tran_t *tran;
10281 10484
10282 10485 tran = ddi_get_driver_private(dip);
10283 10486 if (tran == NULL || (mpt = TRAN2MPT(tran)) == NULL)
10284 10487 return (-1);
10285 10488
10286 10489 return (mptsas_unquiesce_bus(mpt));
10287 10490 }
10288 10491
10289 10492 static int
10290 10493 mptsas_quiesce_bus(mptsas_t *mpt)
10291 10494 {
10292 10495 mptsas_target_t *ptgt = NULL;
10293 10496
10294 10497 NDBG28(("mptsas_quiesce_bus"));
10295 10498 mutex_enter(&mpt->m_mutex);
10296 10499
10297 10500 /* Set all the throttles to zero */
10298 10501 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
10299 10502 ptgt = refhash_next(mpt->m_targets, ptgt)) {
10300 10503 mptsas_set_throttle(mpt, ptgt, HOLD_THROTTLE);
10301 10504 }
10302 10505
10303 10506 /* If there are any outstanding commands in the queue */
10304 10507 if (mpt->m_ncmds) {
10305 10508 mpt->m_softstate |= MPTSAS_SS_DRAINING;
10306 10509 mpt->m_quiesce_timeid = timeout(mptsas_ncmds_checkdrain,
10307 10510 mpt, (MPTSAS_QUIESCE_TIMEOUT * drv_usectohz(1000000)));
10308 10511 if (cv_wait_sig(&mpt->m_cv, &mpt->m_mutex) == 0) {
10309 10512 /*
10310 10513 * Quiesce has been interrupted
10311 10514 */
10312 10515 mpt->m_softstate &= ~MPTSAS_SS_DRAINING;
10313 10516 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
10314 10517 ptgt = refhash_next(mpt->m_targets, ptgt)) {
10315 10518 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
10316 10519 }
10317 10520 mptsas_restart_hba(mpt);
10318 10521 if (mpt->m_quiesce_timeid != 0) {
10319 10522 timeout_id_t tid = mpt->m_quiesce_timeid;
10320 10523 mpt->m_quiesce_timeid = 0;
10321 10524 mutex_exit(&mpt->m_mutex);
10322 10525 (void) untimeout(tid);
10323 10526 return (-1);
10324 10527 }
10325 10528 mutex_exit(&mpt->m_mutex);
10326 10529 return (-1);
10327 10530 } else {
10328 10531 /* Bus has been quiesced */
10329 10532 ASSERT(mpt->m_quiesce_timeid == 0);
10330 10533 mpt->m_softstate &= ~MPTSAS_SS_DRAINING;
10331 10534 mpt->m_softstate |= MPTSAS_SS_QUIESCED;
10332 10535 mutex_exit(&mpt->m_mutex);
10333 10536 return (0);
10334 10537 }
10335 10538 }
10336 10539 /* Bus was not busy - QUIESCED */
10337 10540 mutex_exit(&mpt->m_mutex);
10338 10541
10339 10542 return (0);
10340 10543 }
10341 10544
10342 10545 static int
10343 10546 mptsas_unquiesce_bus(mptsas_t *mpt)
10344 10547 {
10345 10548 mptsas_target_t *ptgt = NULL;
10346 10549
10347 10550 NDBG28(("mptsas_unquiesce_bus"));
10348 10551 mutex_enter(&mpt->m_mutex);
10349 10552 mpt->m_softstate &= ~MPTSAS_SS_QUIESCED;
10350 10553 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
10351 10554 ptgt = refhash_next(mpt->m_targets, ptgt)) {
10352 10555 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
10353 10556 }
10354 10557 mptsas_restart_hba(mpt);
10355 10558 mutex_exit(&mpt->m_mutex);
10356 10559 return (0);
10357 10560 }
10358 10561
10359 10562 static void
10360 10563 mptsas_ncmds_checkdrain(void *arg)
10361 10564 {
10362 10565 mptsas_t *mpt = arg;
10363 10566 mptsas_target_t *ptgt = NULL;
10364 10567
10365 10568 mutex_enter(&mpt->m_mutex);
10366 10569 if (mpt->m_softstate & MPTSAS_SS_DRAINING) {
10367 10570 mpt->m_quiesce_timeid = 0;
10368 10571 if (mpt->m_ncmds == 0) {
10369 10572 /* Command queue has been drained */
10370 10573 cv_signal(&mpt->m_cv);
10371 10574 } else {
10372 10575 /*
10373 10576 * The throttle may have been reset because
10374 10577 * of a SCSI bus reset
10375 10578 */
10376 10579 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
10377 10580 ptgt = refhash_next(mpt->m_targets, ptgt)) {
10378 10581 mptsas_set_throttle(mpt, ptgt, HOLD_THROTTLE);
10379 10582 }
10380 10583
10381 10584 mpt->m_quiesce_timeid = timeout(mptsas_ncmds_checkdrain,
10382 10585 mpt, (MPTSAS_QUIESCE_TIMEOUT *
10383 10586 drv_usectohz(1000000)));
10384 10587 }
10385 10588 }
10386 10589 mutex_exit(&mpt->m_mutex);
10387 10590 }
10388 10591
10389 10592 /*ARGSUSED*/
10390 10593 static void
10391 10594 mptsas_dump_cmd(mptsas_t *mpt, mptsas_cmd_t *cmd)
10392 10595 {
10393 10596 int i;
10394 10597 uint8_t *cp = (uchar_t *)cmd->cmd_pkt->pkt_cdbp;
10395 10598 char buf[128];
10396 10599
10397 10600 buf[0] = '\0';
10398 10601 NDBG25(("?Cmd (0x%p) dump for Target %d Lun %d:\n", (void *)cmd,
10399 10602 Tgt(cmd), Lun(cmd)));
10400 10603 (void) sprintf(&buf[0], "\tcdb=[");
10401 10604 for (i = 0; i < (int)cmd->cmd_cdblen; i++) {
10402 10605 (void) sprintf(&buf[strlen(buf)], " 0x%x", *cp++);
10403 10606 }
10404 10607 (void) sprintf(&buf[strlen(buf)], " ]");
10405 10608 NDBG25(("?%s\n", buf));
10406 10609 NDBG25(("?pkt_flags=0x%x pkt_statistics=0x%x pkt_state=0x%x\n",
10407 10610 cmd->cmd_pkt->pkt_flags, cmd->cmd_pkt->pkt_statistics,
10408 10611 cmd->cmd_pkt->pkt_state));
10409 10612 NDBG25(("?pkt_scbp=0x%x cmd_flags=0x%x\n", cmd->cmd_pkt->pkt_scbp ?
10410 10613 *(cmd->cmd_pkt->pkt_scbp) : 0, cmd->cmd_flags));
10411 10614 }
10412 10615
10413 10616 static void
10414 10617 mptsas_passthru_sge(ddi_acc_handle_t acc_hdl, mptsas_pt_request_t *pt,
10415 10618 pMpi2SGESimple64_t sgep)
10416 10619 {
10417 10620 uint32_t sge_flags;
10418 10621 uint32_t data_size, dataout_size;
10419 10622 ddi_dma_cookie_t data_cookie;
10420 10623 ddi_dma_cookie_t dataout_cookie;
10421 10624
10422 10625 data_size = pt->data_size;
10423 10626 dataout_size = pt->dataout_size;
10424 10627 data_cookie = pt->data_cookie;
10425 10628 dataout_cookie = pt->dataout_cookie;
10426 10629
10427 10630 if (dataout_size) {
10428 10631 sge_flags = dataout_size |
10429 10632 ((uint32_t)(MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
10430 10633 MPI2_SGE_FLAGS_END_OF_BUFFER |
10431 10634 MPI2_SGE_FLAGS_HOST_TO_IOC |
10432 10635 MPI2_SGE_FLAGS_64_BIT_ADDRESSING) <<
10433 10636 MPI2_SGE_FLAGS_SHIFT);
10434 10637 ddi_put32(acc_hdl, &sgep->FlagsLength, sge_flags);
10435 10638 ddi_put32(acc_hdl, &sgep->Address.Low,
10436 10639 (uint32_t)(dataout_cookie.dmac_laddress &
10437 10640 0xffffffffull));
10438 10641 ddi_put32(acc_hdl, &sgep->Address.High,
10439 10642 (uint32_t)(dataout_cookie.dmac_laddress
10440 10643 >> 32));
10441 10644 sgep++;
10442 10645 }
10443 10646 sge_flags = data_size;
10444 10647 sge_flags |= ((uint32_t)(MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
10445 10648 MPI2_SGE_FLAGS_LAST_ELEMENT |
10446 10649 MPI2_SGE_FLAGS_END_OF_BUFFER |
10447 10650 MPI2_SGE_FLAGS_END_OF_LIST |
10448 10651 MPI2_SGE_FLAGS_64_BIT_ADDRESSING) <<
10449 10652 MPI2_SGE_FLAGS_SHIFT);
10450 10653 if (pt->direction == MPTSAS_PASS_THRU_DIRECTION_WRITE) {
10451 10654 sge_flags |= ((uint32_t)(MPI2_SGE_FLAGS_HOST_TO_IOC) <<
10452 10655 MPI2_SGE_FLAGS_SHIFT);
10453 10656 } else {
10454 10657 sge_flags |= ((uint32_t)(MPI2_SGE_FLAGS_IOC_TO_HOST) <<
10455 10658 MPI2_SGE_FLAGS_SHIFT);
10456 10659 }
10457 10660 ddi_put32(acc_hdl, &sgep->FlagsLength,
10458 10661 sge_flags);
10459 10662 ddi_put32(acc_hdl, &sgep->Address.Low,
10460 10663 (uint32_t)(data_cookie.dmac_laddress &
10461 10664 0xffffffffull));
10462 10665 ddi_put32(acc_hdl, &sgep->Address.High,
10463 10666 (uint32_t)(data_cookie.dmac_laddress >> 32));
10464 10667 }
10465 10668
10466 10669 static void
10467 10670 mptsas_passthru_ieee_sge(ddi_acc_handle_t acc_hdl, mptsas_pt_request_t *pt,
10468 10671 pMpi2IeeeSgeSimple64_t ieeesgep)
10469 10672 {
10470 10673 uint8_t sge_flags;
10471 10674 uint32_t data_size, dataout_size;
10472 10675 ddi_dma_cookie_t data_cookie;
10473 10676 ddi_dma_cookie_t dataout_cookie;
10474 10677
10475 10678 data_size = pt->data_size;
10476 10679 dataout_size = pt->dataout_size;
10477 10680 data_cookie = pt->data_cookie;
10478 10681 dataout_cookie = pt->dataout_cookie;
10479 10682
10480 10683 sge_flags = (MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
10481 10684 MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR);
10482 10685 if (dataout_size) {
10483 10686 ddi_put32(acc_hdl, &ieeesgep->Length, dataout_size);
10484 10687 ddi_put32(acc_hdl, &ieeesgep->Address.Low,
10485 10688 (uint32_t)(dataout_cookie.dmac_laddress &
10486 10689 0xffffffffull));
10487 10690 ddi_put32(acc_hdl, &ieeesgep->Address.High,
10488 10691 (uint32_t)(dataout_cookie.dmac_laddress >> 32));
10489 10692 ddi_put8(acc_hdl, &ieeesgep->Flags, sge_flags);
10490 10693 ieeesgep++;
10491 10694 }
10492 10695 sge_flags |= MPI25_IEEE_SGE_FLAGS_END_OF_LIST;
10493 10696 ddi_put32(acc_hdl, &ieeesgep->Length, data_size);
10494 10697 ddi_put32(acc_hdl, &ieeesgep->Address.Low,
10495 10698 (uint32_t)(data_cookie.dmac_laddress & 0xffffffffull));
10496 10699 ddi_put32(acc_hdl, &ieeesgep->Address.High,
10497 10700 (uint32_t)(data_cookie.dmac_laddress >> 32));
10498 10701 ddi_put8(acc_hdl, &ieeesgep->Flags, sge_flags);
10499 10702 }
10500 10703
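Both SGE builders above do the same two mechanical things: split the 64-bit DMA cookie address into Low/High 32-bit halves, and, for the MPI2-style element, pack the flags byte above the length in a single FlagsLength word. A small sketch of that packing; the shift and flag values here are placeholders, the real definitions come from the MPI2 headers:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define	SGE_FLAGS_SHIFT			24	/* placeholder values */
    #define	SGE_FLAGS_SIMPLE_ELEMENT	0x10
    #define	SGE_FLAGS_END_OF_BUFFER		0x40

    struct sge64 {
    	uint32_t	flags_length;
    	uint32_t	addr_low;
    	uint32_t	addr_high;
    };

    static void
    fill_sge(struct sge64 *sgep, uint64_t dma_addr, uint32_t len, uint8_t flags)
    {
    	/* Length occupies the low bits; the flags byte sits above it. */
    	sgep->flags_length = len | ((uint32_t)flags << SGE_FLAGS_SHIFT);
    	sgep->addr_low = (uint32_t)(dma_addr & 0xffffffffull);
    	sgep->addr_high = (uint32_t)(dma_addr >> 32);
    }

    int
    main(void)
    {
    	struct sge64 sge;

    	fill_sge(&sge, 0x0000001234567890ull, 4096,
    	    SGE_FLAGS_SIMPLE_ELEMENT | SGE_FLAGS_END_OF_BUFFER);
    	printf("FlagsLength=0x%08" PRIx32 " Low=0x%08" PRIx32
    	    " High=0x%08" PRIx32 "\n",
    	    sge.flags_length, sge.addr_low, sge.addr_high);
    	return (0);
    }
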
10501 10704 static void
10502 10705 mptsas_start_passthru(mptsas_t *mpt, mptsas_cmd_t *cmd)
10503 10706 {
10504 10707 caddr_t memp;
10505 10708 pMPI2RequestHeader_t request_hdrp;
10506 10709 struct scsi_pkt *pkt = cmd->cmd_pkt;
10507 10710 mptsas_pt_request_t *pt = pkt->pkt_ha_private;
10508 10711 uint32_t request_size;
10509 10712 uint32_t i;
10510 10713 uint64_t request_desc = 0;
10511 10714 uint8_t desc_type;
10512 10715 uint16_t SMID;
10513 10716 uint8_t *request, function;
10514 10717 ddi_dma_handle_t dma_hdl = mpt->m_dma_req_frame_hdl;
10515 10718 ddi_acc_handle_t acc_hdl = mpt->m_acc_req_frame_hdl;
10516 10719
10517 10720 desc_type = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
10518 10721
10519 10722 request = pt->request;
10520 10723 request_size = pt->request_size;
10521 10724
10522 10725 SMID = cmd->cmd_slot;
10523 10726
10524 10727 /*
10525 10728 * Store the passthrough message in memory location
10526 10729 * corresponding to our slot number
10527 10730 */
10528 10731 memp = mpt->m_req_frame + (mpt->m_req_frame_size * SMID);
10529 10732 request_hdrp = (pMPI2RequestHeader_t)memp;
10530 10733 bzero(memp, mpt->m_req_frame_size);
10531 10734
10532 10735 for (i = 0; i < request_size; i++) {
10533 10736 bcopy(request + i, memp + i, 1);
10534 10737 }
10535 10738
10536 10739 NDBG15(("mptsas_start_passthru: Func 0x%x, MsgFlags 0x%x, "
10537 10740 "size=%d, in %d, out %d, SMID %d", request_hdrp->Function,
10538 10741 request_hdrp->MsgFlags, request_size,
10539 10742 pt->data_size, pt->dataout_size, SMID));
10540 10743
10541 10744 /*
10542 10745 * Add an SGE, even if the length is zero.
10543 10746 */
10544 10747 if (mpt->m_MPI25 && pt->simple == 0) {
10545 10748 mptsas_passthru_ieee_sge(acc_hdl, pt,
10546 10749 (pMpi2IeeeSgeSimple64_t)
10547 10750 ((uint8_t *)request_hdrp + pt->sgl_offset));
10548 10751 } else {
10549 10752 mptsas_passthru_sge(acc_hdl, pt,
10550 10753 (pMpi2SGESimple64_t)
10551 10754 ((uint8_t *)request_hdrp + pt->sgl_offset));
10552 10755 }
10553 10756
10554 10757 function = request_hdrp->Function;
10555 10758 if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) ||
10556 10759 (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) {
10557 10760 pMpi2SCSIIORequest_t scsi_io_req;
10558 10761 caddr_t arsbuf;
10559 10762 uint8_t ars_size;
10560 10763 uint32_t ars_dmaaddrlow;
10561 10764
10562 10765 NDBG15(("mptsas_start_passthru: Is SCSI IO Req"));
10563 10766 scsi_io_req = (pMpi2SCSIIORequest_t)request_hdrp;
10564 10767
10565 10768 if (cmd->cmd_extrqslen != 0) {
10566 10769 /*
10567 10770 * Mapping of the buffer was done in
10568 10771 * mptsas_do_passthru().
10569 10772 * Calculate the DMA address with the same offset.
10570 10773 */
10571 10774 arsbuf = cmd->cmd_arq_buf;
10572 10775 ars_size = cmd->cmd_extrqslen;
10573 10776 ars_dmaaddrlow = (mpt->m_req_sense_dma_addr +
10574 10777 ((uintptr_t)arsbuf - (uintptr_t)mpt->m_req_sense)) &
10575 10778 0xffffffffu;
10576 10779 } else {
10577 10780 arsbuf = mpt->m_req_sense +
10578 10781 (mpt->m_req_sense_size * (SMID-1));
10579 10782 cmd->cmd_arq_buf = arsbuf;
10580 10783 ars_size = mpt->m_req_sense_size;
10581 10784 ars_dmaaddrlow = (mpt->m_req_sense_dma_addr +
10582 10785 (mpt->m_req_sense_size * (SMID-1))) &
10583 10786 0xffffffffu;
10584 10787 }
10585 10788 bzero(arsbuf, ars_size);
10586 10789
10587 10790 ddi_put8(acc_hdl, &scsi_io_req->SenseBufferLength, ars_size);
10588 10791 ddi_put32(acc_hdl, &scsi_io_req->SenseBufferLowAddress,
10589 10792 ars_dmaaddrlow);
10590 10793
10591 10794 /*
10592 10795 * Put SGE for data and data_out buffer at the end of
10593 10796 * scsi_io_request message header.(64 bytes in total)
10594 10797 		 * scsi_io_request message header (64 bytes in total).
10595 10798 */
10596 10799 ddi_put8(acc_hdl, &scsi_io_req->SGLOffset0,
10597 10800 offsetof(MPI2_SCSI_IO_REQUEST, SGL) / 4);
10598 10801
10599 10802 /*
10600 10803 * Setup descriptor info. RAID passthrough must use the
10601 10804 * default request descriptor which is already set, so if this
10602 10805 * is a SCSI IO request, change the descriptor to SCSI IO.
10603 10806 */
10604 10807 if (function == MPI2_FUNCTION_SCSI_IO_REQUEST) {
10605 10808 desc_type = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
10606 10809 request_desc = ((uint64_t)ddi_get16(acc_hdl,
10607 10810 &scsi_io_req->DevHandle) << 48);
10608 10811 }
10609 10812 (void) ddi_dma_sync(mpt->m_dma_req_sense_hdl, 0, 0,
10610 10813 DDI_DMA_SYNC_FORDEV);
10611 10814 }
10612 10815
10613 10816 /*
10614 10817 	 * We must wait until this message has completed before
10615 10818 	 * beginning the next message, so we wait here for this one
10616 10819 	 * to finish.
10617 10820 */
10618 10821 (void) ddi_dma_sync(dma_hdl, 0, 0, DDI_DMA_SYNC_FORDEV);
10619 10822 request_desc |= (SMID << 16) + desc_type;
10620 10823 cmd->cmd_rfm = NULL;
10621 10824 MPTSAS_START_CMD(mpt, request_desc);
10622 10825 if ((mptsas_check_dma_handle(dma_hdl) != DDI_SUCCESS) ||
10623 10826 (mptsas_check_acc_handle(acc_hdl) != DDI_SUCCESS)) {
10624 10827 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
10625 10828 }
10626 10829 }
10627 10830
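The request descriptor assembled at the end of the routine above is itself just bit packing: the descriptor type sits in the low bits, the SMID in bits 16-31, and for SCSI IO descriptors the DevHandle is added in the top 16 bits. A sketch of that layout; the type constants below are placeholders for the real MPI2 definitions:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define	DESC_FLAGS_SCSI_IO	0x00	/* placeholder type values */
    #define	DESC_FLAGS_DEFAULT	0x08

    static uint64_t
    build_request_desc(uint16_t smid, uint8_t desc_type, uint16_t devhdl)
    {
    	uint64_t desc = ((uint64_t)smid << 16) | desc_type;

    	/* SCSI IO descriptors also carry the device handle up top. */
    	if (desc_type == DESC_FLAGS_SCSI_IO)
    		desc |= (uint64_t)devhdl << 48;
    	return (desc);
    }

    int
    main(void)
    {
    	/* Prints 0x00110000002a0000: DevHandle | SMID | type. */
    	printf("0x%016" PRIx64 "\n",
    	    build_request_desc(0x002a, DESC_FLAGS_SCSI_IO, 0x0011));
    	return (0);
    }
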
10628 10831 typedef void (mptsas_pre_f)(mptsas_t *, mptsas_pt_request_t *);
10629 10832 static mptsas_pre_f mpi_pre_ioc_facts;
10630 10833 static mptsas_pre_f mpi_pre_port_facts;
10631 10834 static mptsas_pre_f mpi_pre_fw_download;
10632 10835 static mptsas_pre_f mpi_pre_fw_25_download;
10633 10836 static mptsas_pre_f mpi_pre_fw_upload;
10634 10837 static mptsas_pre_f mpi_pre_fw_25_upload;
10635 10838 static mptsas_pre_f mpi_pre_sata_passthrough;
10636 10839 static mptsas_pre_f mpi_pre_smp_passthrough;
10637 10840 static mptsas_pre_f mpi_pre_config;
10638 10841 static mptsas_pre_f mpi_pre_sas_io_unit_control;
10639 10842 static mptsas_pre_f mpi_pre_scsi_io_req;
10640 10843
10641 10844 /*
10642 10845 * Prepare the pt for a SAS2 FW_DOWNLOAD request.
10643 10846 */
10644 10847 static void
10645 10848 mpi_pre_fw_download(mptsas_t *mpt, mptsas_pt_request_t *pt)
10646 10849 {
10647 10850 pMpi2FWDownloadTCSGE_t tcsge;
10648 10851 pMpi2FWDownloadRequest req;
10649 10852
10650 10853 /*
10651 10854 * If SAS3, call separate function.
10652 10855 */
10653 10856 if (mpt->m_MPI25) {
10654 10857 mpi_pre_fw_25_download(mpt, pt);
10655 10858 return;
10656 10859 }
10657 10860
10658 10861 /*
10659 10862 * User requests should come in with the Transaction
10660 10863 * context element where the SGL will go. Putting the
10661 10864 	 * SGL after that seems to work, but we don't really know
10662 10865 * why. Other drivers tend to create an extra SGL and
10663 10866 * refer to the TCE through that.
10664 10867 */
10665 10868 req = (pMpi2FWDownloadRequest)pt->request;
10666 10869 tcsge = (pMpi2FWDownloadTCSGE_t)&req->SGL;
10667 10870 if (tcsge->ContextSize != 0 || tcsge->DetailsLength != 12 ||
10668 10871 tcsge->Flags != MPI2_SGE_FLAGS_TRANSACTION_ELEMENT) {
10669 10872 mptsas_log(mpt, CE_WARN, "FW Download tce invalid!");
10670 10873 }
10671 10874
10672 10875 pt->sgl_offset = offsetof(MPI2_FW_DOWNLOAD_REQUEST, SGL) +
10673 10876 sizeof (*tcsge);
10674 - if (pt->request_size != pt->sgl_offset)
10877 + if (pt->request_size != pt->sgl_offset) {
10675 10878 NDBG15(("mpi_pre_fw_download(): Incorrect req size, "
10676 10879 "0x%x, should be 0x%x, dataoutsz 0x%x",
10677 10880 (int)pt->request_size, (int)pt->sgl_offset,
10678 10881 (int)pt->dataout_size));
10679 - if (pt->data_size < sizeof (MPI2_FW_DOWNLOAD_REPLY))
10882 + }
10883 + if (pt->data_size < sizeof (MPI2_FW_DOWNLOAD_REPLY)) {
10680 10884 NDBG15(("mpi_pre_fw_download(): Incorrect rep size, "
10681 10885 "0x%x, should be 0x%x", pt->data_size,
10682 10886 (int)sizeof (MPI2_FW_DOWNLOAD_REPLY)));
10887 + }
10683 10888 }
10684 10889
10685 10890 /*
10686 10891 * Prepare the pt for a SAS3 FW_DOWNLOAD request.
10687 10892 */
10688 10893 static void
10689 10894 mpi_pre_fw_25_download(mptsas_t *mpt, mptsas_pt_request_t *pt)
10690 10895 {
10691 10896 pMpi2FWDownloadTCSGE_t tcsge;
10692 10897 pMpi2FWDownloadRequest req2;
10693 10898 pMpi25FWDownloadRequest req25;
10694 10899
10695 10900 /*
10696 10901 * User requests should come in with the Transaction
10697 10902 * context element where the SGL will go. The new firmware
10698 10903 * Doesn't use TCE and has space in the main request for
10699 10904 * this information. So move to the right place.
10700 10905 */
10701 10906 req2 = (pMpi2FWDownloadRequest)pt->request;
10702 10907 req25 = (pMpi25FWDownloadRequest)pt->request;
10703 10908 tcsge = (pMpi2FWDownloadTCSGE_t)&req2->SGL;
10704 10909 if (tcsge->ContextSize != 0 || tcsge->DetailsLength != 12 ||
10705 10910 tcsge->Flags != MPI2_SGE_FLAGS_TRANSACTION_ELEMENT) {
10706 10911 mptsas_log(mpt, CE_WARN, "FW Download tce invalid!");
10707 10912 }
10708 10913 req25->ImageOffset = tcsge->ImageOffset;
10709 10914 req25->ImageSize = tcsge->ImageSize;
10710 10915
10711 10916 pt->sgl_offset = offsetof(MPI25_FW_DOWNLOAD_REQUEST, SGL);
10712 - if (pt->request_size != pt->sgl_offset)
10917 + if (pt->request_size != pt->sgl_offset) {
10713 10918 NDBG15(("mpi_pre_fw_25_download(): Incorrect req size, "
10714 10919 "0x%x, should be 0x%x, dataoutsz 0x%x",
10715 10920 pt->request_size, pt->sgl_offset,
10716 10921 pt->dataout_size));
10717 - if (pt->data_size < sizeof (MPI2_FW_DOWNLOAD_REPLY))
10922 + }
10923 + if (pt->data_size < sizeof (MPI2_FW_DOWNLOAD_REPLY)) {
10718 10924 NDBG15(("mpi_pre_fw_25_download(): Incorrect rep size, "
10719 10925 "0x%x, should be 0x%x", pt->data_size,
10720 10926 (int)sizeof (MPI2_FW_UPLOAD_REPLY)));
10927 + }
10721 10928 }
10722 10929
10723 10930 /*
10724 10931 * Prepare the pt for a SAS2 FW_UPLOAD request.
10725 10932 */
10726 10933 static void
10727 10934 mpi_pre_fw_upload(mptsas_t *mpt, mptsas_pt_request_t *pt)
10728 10935 {
10729 10936 pMpi2FWUploadTCSGE_t tcsge;
10730 10937 pMpi2FWUploadRequest_t req;
10731 10938
10732 10939 /*
10733 10940 * If SAS3, call separate function.
10734 10941 */
10735 10942 if (mpt->m_MPI25) {
10736 10943 mpi_pre_fw_25_upload(mpt, pt);
10737 10944 return;
10738 10945 }
10739 10946
10740 10947 /*
10741 10948 * User requests should come in with the Transaction
10742 10949 * context element where the SGL will go. Putting the
10743 10950 	 * SGL after that seems to work, though it's not clear
10744 10951 * why. Other drivers tend to create an extra SGL and
10745 10952 * refer to the TCE through that.
10746 10953 */
10747 10954 req = (pMpi2FWUploadRequest_t)pt->request;
10748 10955 tcsge = (pMpi2FWUploadTCSGE_t)&req->SGL;
10749 10956 if (tcsge->ContextSize != 0 || tcsge->DetailsLength != 12 ||
10750 10957 tcsge->Flags != MPI2_SGE_FLAGS_TRANSACTION_ELEMENT) {
10751 10958 mptsas_log(mpt, CE_WARN, "FW Upload tce invalid!");
10752 10959 }
10753 10960
10754 10961 pt->sgl_offset = offsetof(MPI2_FW_UPLOAD_REQUEST, SGL) +
10755 10962 sizeof (*tcsge);
10756 - if (pt->request_size != pt->sgl_offset)
10963 + if (pt->request_size != pt->sgl_offset) {
10757 10964 NDBG15(("mpi_pre_fw_upload(): Incorrect req size, "
10758 10965 "0x%x, should be 0x%x, dataoutsz 0x%x",
10759 10966 pt->request_size, pt->sgl_offset,
10760 10967 pt->dataout_size));
10761 - if (pt->data_size < sizeof (MPI2_FW_UPLOAD_REPLY))
10968 + }
10969 + if (pt->data_size < sizeof (MPI2_FW_UPLOAD_REPLY)) {
10762 10970 NDBG15(("mpi_pre_fw_upload(): Incorrect rep size, "
10763 10971 "0x%x, should be 0x%x", pt->data_size,
10764 10972 (int)sizeof (MPI2_FW_UPLOAD_REPLY)));
10973 + }
10765 10974 }
10766 10975
10767 10976 /*
10768 10977  * Prepare the pt for a SAS3 FW_UPLOAD request.
10769 10978 */
10770 10979 static void
10771 10980 mpi_pre_fw_25_upload(mptsas_t *mpt, mptsas_pt_request_t *pt)
10772 10981 {
10773 10982 pMpi2FWUploadTCSGE_t tcsge;
10774 10983 pMpi2FWUploadRequest_t req2;
10775 10984 pMpi25FWUploadRequest_t req25;
10776 10985
10777 10986 /*
10778 10987 * User requests should come in with the Transaction
10779 10988 * context element where the SGL will go. The new firmware
10780 10989 	 * doesn't use the TCE and has space in the main request for
10781 10990 	 * this information, so move it to the right place.
10782 10991 */
10783 10992 req2 = (pMpi2FWUploadRequest_t)pt->request;
10784 10993 req25 = (pMpi25FWUploadRequest_t)pt->request;
10785 10994 tcsge = (pMpi2FWUploadTCSGE_t)&req2->SGL;
10786 10995 if (tcsge->ContextSize != 0 || tcsge->DetailsLength != 12 ||
10787 10996 tcsge->Flags != MPI2_SGE_FLAGS_TRANSACTION_ELEMENT) {
10788 10997 mptsas_log(mpt, CE_WARN, "FW Upload tce invalid!");
10789 10998 }
10790 10999 req25->ImageOffset = tcsge->ImageOffset;
10791 11000 req25->ImageSize = tcsge->ImageSize;
10792 11001
10793 11002 pt->sgl_offset = offsetof(MPI25_FW_UPLOAD_REQUEST, SGL);
10794 - if (pt->request_size != pt->sgl_offset)
11003 + if (pt->request_size != pt->sgl_offset) {
10795 11004 NDBG15(("mpi_pre_fw_25_upload(): Incorrect req size, "
10796 11005 "0x%x, should be 0x%x, dataoutsz 0x%x",
10797 11006 pt->request_size, pt->sgl_offset,
10798 11007 pt->dataout_size));
10799 - if (pt->data_size < sizeof (MPI2_FW_UPLOAD_REPLY))
11008 + }
11009 + if (pt->data_size < sizeof (MPI2_FW_UPLOAD_REPLY)) {
10800 11010 NDBG15(("mpi_pre_fw_25_upload(): Incorrect rep size, "
10801 11011 "0x%x, should be 0x%x", pt->data_size,
10802 11012 (int)sizeof (MPI2_FW_UPLOAD_REPLY)));
11013 + }
10803 11014 }
10804 11015
10805 11016 /*
10806 11017 * Prepare the pt for an IOC_FACTS request.
10807 11018 */
10808 11019 static void
10809 11020 mpi_pre_ioc_facts(mptsas_t *mpt, mptsas_pt_request_t *pt)
10810 11021 {
10811 11022 #ifndef __lock_lint
10812 11023 _NOTE(ARGUNUSED(mpt))
10813 11024 #endif
10814 - if (pt->request_size != sizeof (MPI2_IOC_FACTS_REQUEST))
11025 + if (pt->request_size != sizeof (MPI2_IOC_FACTS_REQUEST)) {
10815 11026 NDBG15(("mpi_pre_ioc_facts(): Incorrect req size, "
10816 11027 "0x%x, should be 0x%x, dataoutsz 0x%x",
10817 11028 pt->request_size,
10818 11029 (int)sizeof (MPI2_IOC_FACTS_REQUEST),
10819 11030 pt->dataout_size));
10820 - if (pt->data_size != sizeof (MPI2_IOC_FACTS_REPLY))
11031 + }
11032 + if (pt->data_size != sizeof (MPI2_IOC_FACTS_REPLY)) {
10821 11033 NDBG15(("mpi_pre_ioc_facts(): Incorrect rep size, "
10822 11034 "0x%x, should be 0x%x", pt->data_size,
10823 11035 (int)sizeof (MPI2_IOC_FACTS_REPLY)));
11036 + }
10824 11037 pt->sgl_offset = (uint16_t)pt->request_size;
10825 11038 }
10826 11039
10827 11040 /*
10828 11041 * Prepare the pt for a PORT_FACTS request.
10829 11042 */
10830 11043 static void
10831 11044 mpi_pre_port_facts(mptsas_t *mpt, mptsas_pt_request_t *pt)
10832 11045 {
10833 11046 #ifndef __lock_lint
10834 11047 _NOTE(ARGUNUSED(mpt))
10835 11048 #endif
10836 - if (pt->request_size != sizeof (MPI2_PORT_FACTS_REQUEST))
11049 + if (pt->request_size != sizeof (MPI2_PORT_FACTS_REQUEST)) {
10837 11050 NDBG15(("mpi_pre_port_facts(): Incorrect req size, "
10838 11051 "0x%x, should be 0x%x, dataoutsz 0x%x",
10839 11052 pt->request_size,
10840 11053 (int)sizeof (MPI2_PORT_FACTS_REQUEST),
10841 11054 pt->dataout_size));
10842 - if (pt->data_size != sizeof (MPI2_PORT_FACTS_REPLY))
11055 + }
11056 + if (pt->data_size != sizeof (MPI2_PORT_FACTS_REPLY)) {
10843 11057 NDBG15(("mpi_pre_port_facts(): Incorrect rep size, "
10844 11058 "0x%x, should be 0x%x", pt->data_size,
10845 11059 (int)sizeof (MPI2_PORT_FACTS_REPLY)));
11060 + }
10846 11061 pt->sgl_offset = (uint16_t)pt->request_size;
10847 11062 }
10848 11063
10849 11064 /*
10850 11065 * Prepare pt for a SATA_PASSTHROUGH request.
10851 11066 */
10852 11067 static void
10853 11068 mpi_pre_sata_passthrough(mptsas_t *mpt, mptsas_pt_request_t *pt)
10854 11069 {
10855 11070 #ifndef __lock_lint
10856 11071 _NOTE(ARGUNUSED(mpt))
10857 11072 #endif
10858 11073 pt->sgl_offset = offsetof(MPI2_SATA_PASSTHROUGH_REQUEST, SGL);
10859 - if (pt->request_size != pt->sgl_offset)
11074 + if (pt->request_size != pt->sgl_offset) {
10860 11075 NDBG15(("mpi_pre_sata_passthrough(): Incorrect req size, "
10861 11076 "0x%x, should be 0x%x, dataoutsz 0x%x",
10862 11077 pt->request_size, pt->sgl_offset,
10863 11078 pt->dataout_size));
10864 - if (pt->data_size != sizeof (MPI2_SATA_PASSTHROUGH_REPLY))
11079 + }
11080 + if (pt->data_size != sizeof (MPI2_SATA_PASSTHROUGH_REPLY)) {
10865 11081 NDBG15(("mpi_pre_sata_passthrough(): Incorrect rep size, "
10866 11082 "0x%x, should be 0x%x", pt->data_size,
10867 11083 (int)sizeof (MPI2_SATA_PASSTHROUGH_REPLY)));
11084 + }
10868 11085 }
10869 11086
10870 11087 static void
10871 11088 mpi_pre_smp_passthrough(mptsas_t *mpt, mptsas_pt_request_t *pt)
10872 11089 {
10873 11090 #ifndef __lock_lint
10874 11091 _NOTE(ARGUNUSED(mpt))
10875 11092 #endif
10876 11093 pt->sgl_offset = offsetof(MPI2_SMP_PASSTHROUGH_REQUEST, SGL);
10877 - if (pt->request_size != pt->sgl_offset)
11094 + if (pt->request_size != pt->sgl_offset) {
10878 11095 NDBG15(("mpi_pre_smp_passthrough(): Incorrect req size, "
10879 11096 "0x%x, should be 0x%x, dataoutsz 0x%x",
10880 11097 pt->request_size, pt->sgl_offset,
10881 11098 pt->dataout_size));
10882 - if (pt->data_size != sizeof (MPI2_SMP_PASSTHROUGH_REPLY))
11099 + }
11100 + if (pt->data_size != sizeof (MPI2_SMP_PASSTHROUGH_REPLY)) {
10883 11101 NDBG15(("mpi_pre_smp_passthrough(): Incorrect rep size, "
10884 11102 "0x%x, should be 0x%x", pt->data_size,
10885 11103 (int)sizeof (MPI2_SMP_PASSTHROUGH_REPLY)));
11104 + }
10886 11105 }
10887 11106
10888 11107 /*
10889 11108 * Prepare pt for a CONFIG request.
10890 11109 */
10891 11110 static void
10892 11111 mpi_pre_config(mptsas_t *mpt, mptsas_pt_request_t *pt)
10893 11112 {
10894 11113 #ifndef __lock_lint
10895 11114 _NOTE(ARGUNUSED(mpt))
10896 11115 #endif
10897 11116 pt->sgl_offset = offsetof(MPI2_CONFIG_REQUEST, PageBufferSGE);
10898 - if (pt->request_size != pt->sgl_offset)
11117 + if (pt->request_size != pt->sgl_offset) {
10899 11118 NDBG15(("mpi_pre_config(): Incorrect req size, 0x%x, "
10900 11119 "should be 0x%x, dataoutsz 0x%x", pt->request_size,
10901 11120 pt->sgl_offset, pt->dataout_size));
10902 - if (pt->data_size != sizeof (MPI2_CONFIG_REPLY))
11121 + }
11122 + if (pt->data_size != sizeof (MPI2_CONFIG_REPLY)) {
10903 11123 NDBG15(("mpi_pre_config(): Incorrect rep size, 0x%x, "
10904 11124 "should be 0x%x", pt->data_size,
10905 11125 (int)sizeof (MPI2_CONFIG_REPLY)));
11126 + }
10906 11127 pt->simple = 1;
10907 11128 }
10908 11129
10909 11130 /*
10910 11131 * Prepare pt for a SCSI_IO_REQ request.
10911 11132 */
10912 11133 static void
10913 11134 mpi_pre_scsi_io_req(mptsas_t *mpt, mptsas_pt_request_t *pt)
10914 11135 {
10915 11136 #ifndef __lock_lint
10916 11137 _NOTE(ARGUNUSED(mpt))
10917 11138 #endif
10918 11139 pt->sgl_offset = offsetof(MPI2_SCSI_IO_REQUEST, SGL);
10919 - if (pt->request_size != pt->sgl_offset)
11140 + if (pt->request_size != pt->sgl_offset) {
10920 11141 		NDBG15(("mpi_pre_scsi_io_req(): Incorrect req size, 0x%x, "
10921 11142 "should be 0x%x, dataoutsz 0x%x", pt->request_size,
10922 11143 pt->sgl_offset,
10923 11144 pt->dataout_size));
10924 - if (pt->data_size != sizeof (MPI2_SCSI_IO_REPLY))
11145 + }
11146 + if (pt->data_size != sizeof (MPI2_SCSI_IO_REPLY)) {
10925 11147 		NDBG15(("mpi_pre_scsi_io_req(): Incorrect rep size, 0x%x, "
10926 11148 "should be 0x%x", pt->data_size,
10927 11149 (int)sizeof (MPI2_SCSI_IO_REPLY)));
11150 + }
10928 11151 }
10929 11152
10930 11153 /*
10931 11154 * Prepare the mptsas_cmd for a SAS_IO_UNIT_CONTROL request.
10932 11155 */
10933 11156 static void
10934 11157 mpi_pre_sas_io_unit_control(mptsas_t *mpt, mptsas_pt_request_t *pt)
10935 11158 {
10936 11159 #ifndef __lock_lint
10937 11160 _NOTE(ARGUNUSED(mpt))
10938 11161 #endif
10939 11162 pt->sgl_offset = (uint16_t)pt->request_size;
10940 11163 }
10941 11164
10942 11165 /*
10943 11166 * A set of functions to prepare an mptsas_cmd for the various
10944 11167 * supported requests.
10945 11168 */
10946 11169 static struct mptsas_func {
10947 11170 U8 Function;
10948 11171 char *Name;
10949 11172 mptsas_pre_f *f_pre;
10950 11173 } mptsas_func_list[] = {
10951 11174 { MPI2_FUNCTION_IOC_FACTS, "IOC_FACTS", mpi_pre_ioc_facts },
10952 11175 { MPI2_FUNCTION_PORT_FACTS, "PORT_FACTS", mpi_pre_port_facts },
10953 11176 { MPI2_FUNCTION_FW_DOWNLOAD, "FW_DOWNLOAD", mpi_pre_fw_download },
10954 11177 { MPI2_FUNCTION_FW_UPLOAD, "FW_UPLOAD", mpi_pre_fw_upload },
10955 11178 { MPI2_FUNCTION_SATA_PASSTHROUGH, "SATA_PASSTHROUGH",
10956 11179 mpi_pre_sata_passthrough },
10957 11180 { MPI2_FUNCTION_SMP_PASSTHROUGH, "SMP_PASSTHROUGH",
10958 11181 mpi_pre_smp_passthrough},
10959 11182 { MPI2_FUNCTION_SCSI_IO_REQUEST, "SCSI_IO_REQUEST",
10960 11183 mpi_pre_scsi_io_req},
10961 11184 { MPI2_FUNCTION_CONFIG, "CONFIG", mpi_pre_config},
10962 11185 { MPI2_FUNCTION_SAS_IO_UNIT_CONTROL, "SAS_IO_UNIT_CONTROL",
10963 11186 mpi_pre_sas_io_unit_control },
10964 11187 { 0xFF, NULL, NULL } /* list end */
10965 11188 };
10966 11189
10967 11190 static void
10968 11191 mptsas_prep_sgl_offset(mptsas_t *mpt, mptsas_pt_request_t *pt)
10969 11192 {
10970 11193 pMPI2RequestHeader_t hdr;
10971 11194 struct mptsas_func *f;
10972 11195
10973 11196 hdr = (pMPI2RequestHeader_t)pt->request;
10974 11197
10975 11198 for (f = mptsas_func_list; f->f_pre != NULL; f++) {
10976 11199 if (hdr->Function == f->Function) {
10977 11200 f->f_pre(mpt, pt);
10978 11201 NDBG15(("mptsas_prep_sgl_offset: Function %s,"
10979 11202 " sgl_offset 0x%x", f->Name,
10980 11203 pt->sgl_offset));
10981 11204 return;
10982 11205 }
10983 11206 }
10984 11207 NDBG15(("mptsas_prep_sgl_offset: Unknown Function 0x%02x,"
10985 11208 " returning req_size 0x%x for sgl_offset",
10986 11209 hdr->Function, pt->request_size));
10987 11210 pt->sgl_offset = (uint16_t)pt->request_size;
10988 11211 }
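
mptsas_prep_sgl_offset() is table-driven: it scans mptsas_func_list for the request's Function code, lets the matching prep callback compute sgl_offset, and otherwise falls back to treating the whole request as header (sgl_offset = request_size). Below is a minimal, self-contained sketch of the same dispatch pattern; the fake_* types, names, and function-code values are hypothetical stand-ins, not the driver's.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver types. */
typedef struct fake_pt {
	uint16_t	request_size;
	uint16_t	sgl_offset;
} fake_pt_t;

typedef void (fake_pre_f)(fake_pt_t *);

static void
pre_config(fake_pt_t *pt)
{
	pt->sgl_offset = 0x2c;		/* pretend offsetof(..., PageBufferSGE) */
}

static void
pre_ioc_facts(fake_pt_t *pt)
{
	pt->sgl_offset = pt->request_size;	/* this request carries no SGL */
}

static struct fake_func {
	uint8_t		code;
	const char	*name;
	fake_pre_f	*pre;
} fake_func_list[] = {
	{ 0x03, "IOC_FACTS",	pre_ioc_facts },
	{ 0x04, "CONFIG",	pre_config },
	{ 0xFF, NULL,		NULL }		/* list end */
};

static void
prep_sgl_offset(uint8_t function, fake_pt_t *pt)
{
	struct fake_func *f;

	for (f = fake_func_list; f->pre != NULL; f++) {
		if (f->code == function) {
			f->pre(pt);
			(void) printf("%s: sgl_offset 0x%x\n", f->name,
			    (unsigned int)pt->sgl_offset);
			return;
		}
	}
	/* Unknown function: assume the request carries no SGL. */
	pt->sgl_offset = pt->request_size;
	(void) printf("unknown 0x%02x: sgl_offset 0x%x\n",
	    (unsigned int)function, (unsigned int)pt->sgl_offset);
}

int
main(void)
{
	fake_pt_t pt = { .request_size = 0x30, .sgl_offset = 0 };

	prep_sgl_offset(0x04, &pt);	/* matches the CONFIG entry */
	prep_sgl_offset(0x7e, &pt);	/* no match, falls back */
	return (0);
}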
10989 11212
10990 11213
10991 11214 static int
10992 11215 mptsas_do_passthru(mptsas_t *mpt, uint8_t *request, uint8_t *reply,
10993 11216 uint8_t *data, uint32_t request_size, uint32_t reply_size,
10994 11217 uint32_t data_size, uint32_t direction, uint8_t *dataout,
10995 11218 uint32_t dataout_size, short timeout, int mode)
10996 11219 {
10997 11220 mptsas_pt_request_t pt;
10998 11221 mptsas_dma_alloc_state_t data_dma_state;
10999 11222 mptsas_dma_alloc_state_t dataout_dma_state;
11000 11223 caddr_t memp;
11001 11224 mptsas_cmd_t *cmd = NULL;
11002 11225 struct scsi_pkt *pkt;
11003 11226 uint32_t reply_len = 0, sense_len = 0;
11004 11227 pMPI2RequestHeader_t request_hdrp;
11005 11228 pMPI2RequestHeader_t request_msg;
11006 11229 pMPI2DefaultReply_t reply_msg;
11007 11230 Mpi2SCSIIOReply_t rep_msg;
11008 11231 int rvalue;
11009 11232 int i, status = 0, pt_flags = 0, rv = 0;
11010 11233 uint8_t function;
11011 11234
11012 11235 ASSERT(mutex_owned(&mpt->m_mutex));
11013 11236
11014 11237 reply_msg = (pMPI2DefaultReply_t)(&rep_msg);
11015 11238 bzero(reply_msg, sizeof (MPI2_DEFAULT_REPLY));
11016 11239 request_msg = kmem_zalloc(request_size, KM_SLEEP);
11017 11240
11018 11241 mutex_exit(&mpt->m_mutex);
11019 11242 /*
11020 11243 	 * Copy in the request buffer since it could be used by
11021 11244 	 * another thread while the pt request sits on the waitq.
11022 11245 */
11023 11246 if (ddi_copyin(request, request_msg, request_size, mode)) {
11024 11247 mutex_enter(&mpt->m_mutex);
11025 11248 status = EFAULT;
11026 11249 mptsas_log(mpt, CE_WARN, "failed to copy request data");
11027 11250 goto out;
11028 11251 }
11029 11252 NDBG27(("mptsas_do_passthru: mode 0x%x, size 0x%x, Func 0x%x",
11030 11253 mode, request_size, request_msg->Function));
11031 11254 mutex_enter(&mpt->m_mutex);
11032 11255
11033 11256 function = request_msg->Function;
11034 11257 if (function == MPI2_FUNCTION_SCSI_TASK_MGMT) {
11035 11258 pMpi2SCSITaskManagementRequest_t task;
11036 11259 task = (pMpi2SCSITaskManagementRequest_t)request_msg;
11037 11260 mptsas_setup_bus_reset_delay(mpt);
11038 11261 rv = mptsas_ioc_task_management(mpt, task->TaskType,
11039 11262 task->DevHandle, (int)task->LUN[1], reply, reply_size,
11040 11263 mode);
11041 11264
11042 11265 if (rv != TRUE) {
11043 11266 status = EIO;
11044 11267 mptsas_log(mpt, CE_WARN, "task management failed");
11045 11268 }
11046 11269 goto out;
11047 11270 }
11048 11271
11049 11272 if (data_size != 0) {
11050 11273 data_dma_state.size = data_size;
11051 11274 if (mptsas_dma_alloc(mpt, &data_dma_state) != DDI_SUCCESS) {
11052 11275 status = ENOMEM;
11053 11276 mptsas_log(mpt, CE_WARN, "failed to alloc DMA "
11054 11277 "resource");
11055 11278 goto out;
11056 11279 }
11057 11280 pt_flags |= MPTSAS_DATA_ALLOCATED;
11058 11281 if (direction == MPTSAS_PASS_THRU_DIRECTION_WRITE) {
11059 11282 mutex_exit(&mpt->m_mutex);
11060 11283 for (i = 0; i < data_size; i++) {
11061 11284 if (ddi_copyin(data + i, (uint8_t *)
11062 11285 data_dma_state.memp + i, 1, mode)) {
11063 11286 mutex_enter(&mpt->m_mutex);
11064 11287 status = EFAULT;
11065 11288 mptsas_log(mpt, CE_WARN, "failed to "
11066 11289 "copy read data");
11067 11290 goto out;
11068 11291 }
11069 11292 }
11070 11293 mutex_enter(&mpt->m_mutex);
11071 11294 }
11072 11295 } else {
11073 11296 bzero(&data_dma_state, sizeof (data_dma_state));
11074 11297 }
11075 11298
11076 11299 if (dataout_size != 0) {
11077 11300 dataout_dma_state.size = dataout_size;
11078 11301 if (mptsas_dma_alloc(mpt, &dataout_dma_state) != DDI_SUCCESS) {
11079 11302 status = ENOMEM;
11080 11303 mptsas_log(mpt, CE_WARN, "failed to alloc DMA "
11081 11304 "resource");
11082 11305 goto out;
11083 11306 }
11084 11307 pt_flags |= MPTSAS_DATAOUT_ALLOCATED;
11085 11308 mutex_exit(&mpt->m_mutex);
11086 11309 for (i = 0; i < dataout_size; i++) {
11087 11310 if (ddi_copyin(dataout + i, (uint8_t *)
11088 11311 dataout_dma_state.memp + i, 1, mode)) {
11089 11312 mutex_enter(&mpt->m_mutex);
11090 11313 mptsas_log(mpt, CE_WARN, "failed to copy out"
11091 11314 " data");
11092 11315 status = EFAULT;
11093 11316 goto out;
11094 11317 }
11095 11318 }
11096 11319 mutex_enter(&mpt->m_mutex);
11097 11320 } else {
11098 11321 bzero(&dataout_dma_state, sizeof (dataout_dma_state));
11099 11322 }
11100 11323
11101 11324 if ((rvalue = (mptsas_request_from_pool(mpt, &cmd, &pkt))) == -1) {
11102 11325 status = EAGAIN;
11103 11326 		mptsas_log(mpt, CE_NOTE, "command pool is full: passthru");
11104 11327 goto out;
11105 11328 }
11106 11329 pt_flags |= MPTSAS_REQUEST_POOL_CMD;
11107 11330
11108 11331 bzero((caddr_t)cmd, sizeof (*cmd));
11109 11332 bzero((caddr_t)pkt, scsi_pkt_size());
11110 11333 bzero((caddr_t)&pt, sizeof (pt));
11111 11334
11112 11335 cmd->ioc_cmd_slot = (uint32_t)(rvalue);
11113 11336
11114 11337 pt.request = (uint8_t *)request_msg;
11115 11338 pt.direction = direction;
11116 11339 pt.simple = 0;
11117 11340 pt.request_size = request_size;
11118 11341 pt.data_size = data_size;
11119 11342 pt.dataout_size = dataout_size;
11120 11343 pt.data_cookie = data_dma_state.cookie;
11121 11344 pt.dataout_cookie = dataout_dma_state.cookie;
11122 11345 mptsas_prep_sgl_offset(mpt, &pt);
11123 11346
11124 11347 /*
11125 11348 * Form a blank cmd/pkt to store the acknowledgement message
11126 11349 */
11127 11350 pkt->pkt_cdbp = (opaque_t)&cmd->cmd_cdb[0];
11128 11351 pkt->pkt_scbp = (opaque_t)&cmd->cmd_scb;
11129 11352 pkt->pkt_ha_private = (opaque_t)&pt;
11130 11353 pkt->pkt_flags = FLAG_HEAD;
11131 11354 pkt->pkt_time = timeout;
11355 + pkt->pkt_start = gethrtime();
11132 11356 cmd->cmd_pkt = pkt;
11133 11357 cmd->cmd_flags = CFLAG_CMDIOC | CFLAG_PASSTHRU;
11134 11358
11135 11359 if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) ||
11136 11360 (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) {
11137 11361 uint8_t com, cdb_group_id;
11138 11362 boolean_t ret;
11139 11363
11140 11364 pkt->pkt_cdbp = ((pMpi2SCSIIORequest_t)request_msg)->CDB.CDB32;
11141 11365 com = pkt->pkt_cdbp[0];
11142 11366 cdb_group_id = CDB_GROUPID(com);
11143 11367 switch (cdb_group_id) {
11144 11368 case CDB_GROUPID_0: cmd->cmd_cdblen = CDB_GROUP0; break;
11145 11369 case CDB_GROUPID_1: cmd->cmd_cdblen = CDB_GROUP1; break;
11146 11370 case CDB_GROUPID_2: cmd->cmd_cdblen = CDB_GROUP2; break;
11147 11371 case CDB_GROUPID_4: cmd->cmd_cdblen = CDB_GROUP4; break;
11148 11372 case CDB_GROUPID_5: cmd->cmd_cdblen = CDB_GROUP5; break;
11149 11373 default:
11150 11374 NDBG27(("mptsas_do_passthru: SCSI_IO, reserved "
11151 11375 "CDBGROUP 0x%x requested!", cdb_group_id));
11152 11376 break;
11153 11377 }
11154 11378
11155 11379 reply_len = sizeof (MPI2_SCSI_IO_REPLY);
11156 11380 sense_len = reply_size - reply_len;
11157 11381 ret = mptsas_cmdarqsize(mpt, cmd, sense_len, KM_SLEEP);
11158 11382 VERIFY(ret == B_TRUE);
11159 11383 } else {
11160 11384 reply_len = reply_size;
11161 11385 sense_len = 0;
11162 11386 }
11163 11387
11164 11388 NDBG27(("mptsas_do_passthru: %s, dsz 0x%x, dosz 0x%x, replen 0x%x, "
11165 11389 "snslen 0x%x",
11166 11390 (direction == MPTSAS_PASS_THRU_DIRECTION_WRITE)?"Write":"Read",
11167 11391 data_size, dataout_size, reply_len, sense_len));
11168 11392
11169 11393 /*
11170 11394 * Save the command in a slot
11171 11395 */
11172 11396 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
11173 11397 /*
11174 11398 		 * Once the passthru command gets a slot, set the
11175 11399 		 * CFLAG_PREPARED flag in cmd_flags.
11176 11400 */
11177 11401 cmd->cmd_flags |= CFLAG_PREPARED;
11178 11402 mptsas_start_passthru(mpt, cmd);
11179 11403 } else {
11180 11404 mptsas_waitq_add(mpt, cmd);
11181 11405 }
11182 11406
11183 11407 while ((cmd->cmd_flags & CFLAG_FINISHED) == 0) {
11184 11408 cv_wait(&mpt->m_passthru_cv, &mpt->m_mutex);
11185 11409 }
11186 11410
11187 11411 NDBG27(("mptsas_do_passthru: Cmd complete, flags 0x%x, rfm 0x%x "
11188 11412 "pktreason 0x%x", cmd->cmd_flags, cmd->cmd_rfm,
11189 11413 pkt->pkt_reason));
11190 11414
11191 11415 if (cmd->cmd_flags & CFLAG_PREPARED) {
11192 11416 memp = mpt->m_req_frame + (mpt->m_req_frame_size *
11193 11417 cmd->cmd_slot);
11194 11418 request_hdrp = (pMPI2RequestHeader_t)memp;
11195 11419 }
11196 11420
11197 11421 if (cmd->cmd_flags & CFLAG_TIMEOUT) {
11198 11422 status = ETIMEDOUT;
11199 11423 mptsas_log(mpt, CE_WARN, "passthrough command timeout");
11200 11424 pt_flags |= MPTSAS_CMD_TIMEOUT;
11201 11425 goto out;
11202 11426 }
11203 11427
11204 11428 if (cmd->cmd_rfm) {
11205 11429 /*
11206 11430 		 * A zero cmd_rfm means the command reply is a CONTEXT
11207 11431 		 * reply; no PCI write is needed to post the free reply SMFA
11208 11432 		 * because no reply message frame is used.
11209 11433 		 * A non-zero cmd_rfm means the reply is an ADDRESS
11210 11434 		 * reply and a reply message frame is used.
11211 11435 */
11212 11436 pt_flags |= MPTSAS_ADDRESS_REPLY;
11213 11437 (void) ddi_dma_sync(mpt->m_dma_reply_frame_hdl, 0, 0,
11214 11438 DDI_DMA_SYNC_FORCPU);
11215 11439 reply_msg = (pMPI2DefaultReply_t)
11216 11440 (mpt->m_reply_frame + (cmd->cmd_rfm -
11217 11441 (mpt->m_reply_frame_dma_addr & 0xffffffffu)));
11218 11442 }
11219 11443
11220 11444 mptsas_fma_check(mpt, cmd);
11221 11445 if (pkt->pkt_reason == CMD_TRAN_ERR) {
11222 11446 status = EAGAIN;
11223 11447 mptsas_log(mpt, CE_WARN, "passthru fma error");
11224 11448 goto out;
11225 11449 }
11226 11450 if (pkt->pkt_reason == CMD_RESET) {
11227 11451 status = EAGAIN;
11228 11452 mptsas_log(mpt, CE_WARN, "ioc reset abort passthru");
11229 11453 goto out;
11230 11454 }
11231 11455
11232 11456 if (pkt->pkt_reason == CMD_INCOMPLETE) {
11233 11457 status = EIO;
11234 11458 mptsas_log(mpt, CE_WARN, "passthrough command incomplete");
11235 11459 goto out;
11236 11460 }
11237 11461
11238 11462 mutex_exit(&mpt->m_mutex);
11239 11463 if (cmd->cmd_flags & CFLAG_PREPARED) {
11240 11464 function = request_hdrp->Function;
11241 11465 if ((function == MPI2_FUNCTION_SCSI_IO_REQUEST) ||
11242 11466 (function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) {
11243 11467 reply_len = sizeof (MPI2_SCSI_IO_REPLY);
11244 11468 sense_len = cmd->cmd_extrqslen ?
11245 11469 min(sense_len, cmd->cmd_extrqslen) :
11246 11470 min(sense_len, cmd->cmd_rqslen);
11247 11471 } else {
11248 11472 reply_len = reply_size;
11249 11473 sense_len = 0;
11250 11474 }
11251 11475
11252 11476 for (i = 0; i < reply_len; i++) {
11253 11477 if (ddi_copyout((uint8_t *)reply_msg + i, reply + i, 1,
11254 11478 mode)) {
11255 11479 mutex_enter(&mpt->m_mutex);
11256 11480 status = EFAULT;
11257 11481 mptsas_log(mpt, CE_WARN, "failed to copy out "
11258 11482 "reply data");
11259 11483 goto out;
11260 11484 }
11261 11485 }
11262 11486 for (i = 0; i < sense_len; i++) {
11263 11487 if (ddi_copyout((uint8_t *)request_hdrp + 64 + i,
11264 11488 reply + reply_len + i, 1, mode)) {
11265 11489 mutex_enter(&mpt->m_mutex);
11266 11490 status = EFAULT;
11267 11491 mptsas_log(mpt, CE_WARN, "failed to copy out "
11268 11492 "sense data");
11269 11493 goto out;
11270 11494 }
11271 11495 }
11272 11496 }
11273 11497
11274 11498 if (data_size) {
11275 11499 if (direction != MPTSAS_PASS_THRU_DIRECTION_WRITE) {
11276 11500 (void) ddi_dma_sync(data_dma_state.handle, 0, 0,
11277 11501 DDI_DMA_SYNC_FORCPU);
11278 11502 for (i = 0; i < data_size; i++) {
11279 11503 if (ddi_copyout((uint8_t *)(
11280 11504 data_dma_state.memp + i), data + i, 1,
11281 11505 mode)) {
11282 11506 mutex_enter(&mpt->m_mutex);
11283 11507 status = EFAULT;
11284 11508 mptsas_log(mpt, CE_WARN, "failed to "
11285 11509 "copy out the reply data");
11286 11510 goto out;
11287 11511 }
11288 11512 }
11289 11513 }
11290 11514 }
11291 11515 mutex_enter(&mpt->m_mutex);
11292 11516 out:
11293 11517 /*
11294 11518 * Put the reply frame back on the free queue, increment the free
11295 11519 * index, and write the new index to the free index register. But only
11296 11520 * if this reply is an ADDRESS reply.
11297 11521 */
11298 11522 if (pt_flags & MPTSAS_ADDRESS_REPLY) {
11299 11523 ddi_put32(mpt->m_acc_free_queue_hdl,
11300 11524 &((uint32_t *)(void *)mpt->m_free_queue)[mpt->m_free_index],
11301 11525 cmd->cmd_rfm);
11302 11526 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
11303 11527 DDI_DMA_SYNC_FORDEV);
11304 11528 if (++mpt->m_free_index == mpt->m_free_queue_depth) {
11305 11529 mpt->m_free_index = 0;
11306 11530 }
11307 11531 ddi_put32(mpt->m_datap, &mpt->m_reg->ReplyFreeHostIndex,
11308 11532 mpt->m_free_index);
11309 11533 }
11310 11534 if (cmd) {
11311 11535 if (cmd->cmd_extrqslen != 0) {
11312 11536 rmfree(mpt->m_erqsense_map, cmd->cmd_extrqschunks,
11313 11537 cmd->cmd_extrqsidx + 1);
11314 11538 }
11315 11539 if (cmd->cmd_flags & CFLAG_PREPARED) {
11316 11540 mptsas_remove_cmd(mpt, cmd);
11317 11541 pt_flags &= (~MPTSAS_REQUEST_POOL_CMD);
11318 11542 }
11319 11543 }
11320 11544 if (pt_flags & MPTSAS_REQUEST_POOL_CMD)
11321 11545 mptsas_return_to_pool(mpt, cmd);
11322 11546 if (pt_flags & MPTSAS_DATA_ALLOCATED) {
11323 11547 if (mptsas_check_dma_handle(data_dma_state.handle) !=
11324 11548 DDI_SUCCESS) {
11325 11549 ddi_fm_service_impact(mpt->m_dip,
11326 11550 DDI_SERVICE_UNAFFECTED);
11327 11551 status = EFAULT;
11328 11552 }
11329 11553 mptsas_dma_free(&data_dma_state);
11330 11554 }
11331 11555 if (pt_flags & MPTSAS_DATAOUT_ALLOCATED) {
11332 11556 if (mptsas_check_dma_handle(dataout_dma_state.handle) !=
11333 11557 DDI_SUCCESS) {
11334 11558 ddi_fm_service_impact(mpt->m_dip,
11335 11559 DDI_SERVICE_UNAFFECTED);
11336 11560 status = EFAULT;
11337 11561 }
11338 11562 mptsas_dma_free(&dataout_dma_state);
11339 11563 }
11340 11564 if (pt_flags & MPTSAS_CMD_TIMEOUT) {
11341 11565 if ((mptsas_restart_ioc(mpt)) == DDI_FAILURE) {
11342 11566 mptsas_log(mpt, CE_WARN, "mptsas_restart_ioc failed");
11343 11567 }
11344 11568 }
11345 11569 if (request_msg)
11346 11570 kmem_free(request_msg, request_size);
11347 11571 NDBG27(("mptsas_do_passthru: Done status 0x%x", status));
11348 11572
11349 11573 return (status);
11350 11574 }
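
The cleanup path above returns an ADDRESS reply frame to the reply free queue: it writes cmd_rfm at m_free_index, advances the index, wraps it at m_free_queue_depth, and then publishes the new index to the ReplyFreeHostIndex register. A minimal sketch of that circular free-index update follows; the fake_* names are stand-ins and the register write is replaced by a printf, so it omits the DMA sync and access-handle work.

#include <stdint.h>
#include <stdio.h>

#define	FAKE_FREE_QUEUE_DEPTH	8	/* hypothetical queue depth */

static uint32_t fake_free_queue[FAKE_FREE_QUEUE_DEPTH];
static uint32_t fake_free_index;

/*
 * Return a reply frame address to the free queue and publish the new
 * index, wrapping at the queue depth (mirrors the logic above, minus
 * the ddi_dma_sync() and the real register write).
 */
static void
return_reply_frame(uint32_t rfm)
{
	fake_free_queue[fake_free_index] = rfm;
	if (++fake_free_index == FAKE_FREE_QUEUE_DEPTH)
		fake_free_index = 0;
	/* Stand-in for writing ReplyFreeHostIndex. */
	(void) printf("ReplyFreeHostIndex <- %u (freed rfm 0x%x)\n",
	    fake_free_index, rfm);
}

int
main(void)
{
	uint32_t i;

	/* Freeing more frames than the depth shows the index wrapping. */
	for (i = 0; i < 10; i++)
		return_reply_frame(0x1000 + (i * 0x80));
	return (0);
}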
11351 11575
11352 11576 static int
11353 11577 mptsas_pass_thru(mptsas_t *mpt, mptsas_pass_thru_t *data, int mode)
11354 11578 {
11355 11579 /*
11356 11580 * If timeout is 0, set timeout to default of 60 seconds.
11357 11581 */
11358 11582 if (data->Timeout == 0) {
11359 11583 data->Timeout = MPTSAS_PASS_THRU_TIME_DEFAULT;
11360 11584 }
11361 11585
11362 11586 if (((data->DataSize == 0) &&
11363 11587 (data->DataDirection == MPTSAS_PASS_THRU_DIRECTION_NONE)) ||
11364 11588 ((data->DataSize != 0) &&
11365 11589 ((data->DataDirection == MPTSAS_PASS_THRU_DIRECTION_READ) ||
11366 11590 (data->DataDirection == MPTSAS_PASS_THRU_DIRECTION_WRITE) ||
11367 11591 ((data->DataDirection == MPTSAS_PASS_THRU_DIRECTION_BOTH) &&
11368 11592 (data->DataOutSize != 0))))) {
11369 11593 if (data->DataDirection == MPTSAS_PASS_THRU_DIRECTION_BOTH) {
11370 11594 data->DataDirection = MPTSAS_PASS_THRU_DIRECTION_READ;
11371 11595 } else {
11372 11596 data->DataOutSize = 0;
11373 11597 }
11374 11598 /*
11375 11599 * Send passthru request messages
11376 11600 */
11377 11601 return (mptsas_do_passthru(mpt,
11378 11602 (uint8_t *)((uintptr_t)data->PtrRequest),
11379 11603 (uint8_t *)((uintptr_t)data->PtrReply),
11380 11604 (uint8_t *)((uintptr_t)data->PtrData),
11381 11605 data->RequestSize, data->ReplySize,
11382 11606 data->DataSize, data->DataDirection,
11383 11607 (uint8_t *)((uintptr_t)data->PtrDataOut),
11384 11608 data->DataOutSize, data->Timeout, mode));
11385 11609 } else {
11386 11610 return (EINVAL);
11387 11611 }
11388 11612 }
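
mptsas_pass_thru() accepts a request only when the sizes and direction are consistent: no data with direction NONE, data with READ or WRITE, or data with BOTH provided DataOutSize is non-zero (BOTH is then treated as READ, with the separate data-out buffer carrying the write half). The sketch below restates that check as a standalone predicate; the DIR_* constants and names are stand-ins, not the driver's MPTSAS_PASS_THRU_DIRECTION_* values.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the pass-through direction values. */
enum fake_dir { DIR_NONE, DIR_READ, DIR_WRITE, DIR_BOTH };

static int
passthru_args_valid(uint32_t data_size, uint32_t dataout_size,
    enum fake_dir dir)
{
	if (data_size == 0)
		return (dir == DIR_NONE);
	if (dir == DIR_READ || dir == DIR_WRITE)
		return (1);
	/* BOTH needs a separate, non-empty data-out buffer. */
	return (dir == DIR_BOTH && dataout_size != 0);
}

int
main(void)
{
	(void) printf("%d\n", passthru_args_valid(0, 0, DIR_NONE));	/* 1 */
	(void) printf("%d\n", passthru_args_valid(512, 0, DIR_READ));	/* 1 */
	(void) printf("%d\n", passthru_args_valid(512, 0, DIR_BOTH));	/* 0 */
	(void) printf("%d\n", passthru_args_valid(512, 64, DIR_BOTH));	/* 1 */
	return (0);
}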
11389 11613
11390 11614 static uint8_t
11391 11615 mptsas_get_fw_diag_buffer_number(mptsas_t *mpt, uint32_t unique_id)
11392 11616 {
11393 11617 uint8_t index;
11394 11618
11395 11619 for (index = 0; index < MPI2_DIAG_BUF_TYPE_COUNT; index++) {
11396 11620 if (mpt->m_fw_diag_buffer_list[index].unique_id == unique_id) {
11397 11621 return (index);
11398 11622 }
11399 11623 }
11400 11624
11401 11625 return (MPTSAS_FW_DIAGNOSTIC_UID_NOT_FOUND);
11402 11626 }
11403 11627
11404 11628 static void
11405 11629 mptsas_start_diag(mptsas_t *mpt, mptsas_cmd_t *cmd)
11406 11630 {
11407 11631 pMpi2DiagBufferPostRequest_t pDiag_post_msg;
11408 11632 pMpi2DiagReleaseRequest_t pDiag_release_msg;
11409 11633 struct scsi_pkt *pkt = cmd->cmd_pkt;
11410 11634 mptsas_diag_request_t *diag = pkt->pkt_ha_private;
11411 11635 uint32_t i;
11412 11636 uint64_t request_desc;
11413 11637
11414 11638 ASSERT(mutex_owned(&mpt->m_mutex));
11415 11639
11416 11640 /*
11417 11641 * Form the diag message depending on the post or release function.
11418 11642 */
11419 11643 if (diag->function == MPI2_FUNCTION_DIAG_BUFFER_POST) {
11420 11644 pDiag_post_msg = (pMpi2DiagBufferPostRequest_t)
11421 11645 (mpt->m_req_frame + (mpt->m_req_frame_size *
11422 11646 cmd->cmd_slot));
11423 11647 bzero(pDiag_post_msg, mpt->m_req_frame_size);
11424 11648 ddi_put8(mpt->m_acc_req_frame_hdl, &pDiag_post_msg->Function,
11425 11649 diag->function);
11426 11650 ddi_put8(mpt->m_acc_req_frame_hdl, &pDiag_post_msg->BufferType,
11427 11651 diag->pBuffer->buffer_type);
11428 11652 ddi_put8(mpt->m_acc_req_frame_hdl,
11429 11653 &pDiag_post_msg->ExtendedType,
11430 11654 diag->pBuffer->extended_type);
11431 11655 ddi_put32(mpt->m_acc_req_frame_hdl,
11432 11656 &pDiag_post_msg->BufferLength,
11433 11657 diag->pBuffer->buffer_data.size);
11434 11658 for (i = 0; i < (sizeof (pDiag_post_msg->ProductSpecific) / 4);
11435 11659 i++) {
11436 11660 ddi_put32(mpt->m_acc_req_frame_hdl,
11437 11661 &pDiag_post_msg->ProductSpecific[i],
11438 11662 diag->pBuffer->product_specific[i]);
11439 11663 }
11440 11664 ddi_put32(mpt->m_acc_req_frame_hdl,
11441 11665 &pDiag_post_msg->BufferAddress.Low,
11442 11666 (uint32_t)(diag->pBuffer->buffer_data.cookie.dmac_laddress
11443 11667 & 0xffffffffull));
11444 11668 ddi_put32(mpt->m_acc_req_frame_hdl,
11445 11669 &pDiag_post_msg->BufferAddress.High,
11446 11670 (uint32_t)(diag->pBuffer->buffer_data.cookie.dmac_laddress
11447 11671 >> 32));
11448 11672 } else {
11449 11673 pDiag_release_msg = (pMpi2DiagReleaseRequest_t)
11450 11674 (mpt->m_req_frame + (mpt->m_req_frame_size *
11451 11675 cmd->cmd_slot));
11452 11676 bzero(pDiag_release_msg, mpt->m_req_frame_size);
11453 11677 ddi_put8(mpt->m_acc_req_frame_hdl,
11454 11678 &pDiag_release_msg->Function, diag->function);
11455 11679 ddi_put8(mpt->m_acc_req_frame_hdl,
11456 11680 &pDiag_release_msg->BufferType,
11457 11681 diag->pBuffer->buffer_type);
11458 11682 }
11459 11683
11460 11684 /*
11461 11685 * Send the message
11462 11686 */
11463 11687 (void) ddi_dma_sync(mpt->m_dma_req_frame_hdl, 0, 0,
11464 11688 DDI_DMA_SYNC_FORDEV);
11465 11689 request_desc = (cmd->cmd_slot << 16) +
11466 11690 MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
11467 11691 cmd->cmd_rfm = NULL;
11468 11692 MPTSAS_START_CMD(mpt, request_desc);
11469 11693 if ((mptsas_check_dma_handle(mpt->m_dma_req_frame_hdl) !=
11470 11694 DDI_SUCCESS) ||
11471 11695 (mptsas_check_acc_handle(mpt->m_acc_req_frame_hdl) !=
11472 11696 DDI_SUCCESS)) {
11473 11697 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
11474 11698 }
11475 11699 }
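
mptsas_start_diag() programs the 64-bit DMA address of the diag buffer into the two 32-bit BufferAddress fields by masking for the low word and shifting for the high word, and builds the request descriptor by packing the command slot above the descriptor type bits. The snippet below shows just that integer manipulation; FAKE_DESC_DEFAULT_TYPE is a stand-in value, not the real MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE.

#include <stdint.h>
#include <stdio.h>

#define	FAKE_DESC_DEFAULT_TYPE	0x08	/* stand-in descriptor flag value */

int
main(void)
{
	uint64_t dma_addr = 0x0000001234567890ULL;
	uint16_t slot = 0x2a;

	/* Split the 64-bit DMA address into the two 32-bit fields. */
	uint32_t addr_low  = (uint32_t)(dma_addr & 0xffffffffull);
	uint32_t addr_high = (uint32_t)(dma_addr >> 32);

	/* Pack the command slot into bits 16 and up of the descriptor. */
	uint64_t request_desc = ((uint64_t)slot << 16) +
	    FAKE_DESC_DEFAULT_TYPE;

	(void) printf("low 0x%08x high 0x%08x desc 0x%llx\n",
	    addr_low, addr_high, (unsigned long long)request_desc);
	return (0);
}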
11476 11700
11477 11701 static int
11478 11702 mptsas_post_fw_diag_buffer(mptsas_t *mpt,
11479 11703 mptsas_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code)
11480 11704 {
11481 11705 mptsas_diag_request_t diag;
11482 11706 int status, slot_num, post_flags = 0;
11483 11707 mptsas_cmd_t *cmd = NULL;
11484 11708 struct scsi_pkt *pkt;
11485 11709 pMpi2DiagBufferPostReply_t reply;
11486 11710 uint16_t iocstatus;
11487 11711 uint32_t iocloginfo, transfer_length;
11488 11712
11489 11713 /*
11490 11714 * If buffer is not enabled, just leave.
11491 11715 */
11492 11716 *return_code = MPTSAS_FW_DIAG_ERROR_POST_FAILED;
11493 11717 if (!pBuffer->enabled) {
11494 11718 status = DDI_FAILURE;
11495 11719 goto out;
11496 11720 }
11497 11721
11498 11722 /*
11499 11723 * Clear some flags initially.
11500 11724 */
11501 11725 pBuffer->force_release = FALSE;
11502 11726 pBuffer->valid_data = FALSE;
11503 11727 pBuffer->owned_by_firmware = FALSE;
11504 11728
11505 11729 /*
11506 11730 * Get a cmd buffer from the cmd buffer pool
11507 11731 */
11508 11732 if ((slot_num = (mptsas_request_from_pool(mpt, &cmd, &pkt))) == -1) {
11509 11733 status = DDI_FAILURE;
11510 11734 mptsas_log(mpt, CE_NOTE, "command pool is full: Post FW Diag");
11511 11735 goto out;
11512 11736 }
11513 11737 post_flags |= MPTSAS_REQUEST_POOL_CMD;
11514 11738
11515 11739 bzero((caddr_t)cmd, sizeof (*cmd));
11516 11740 bzero((caddr_t)pkt, scsi_pkt_size());
11517 11741
11518 11742 cmd->ioc_cmd_slot = (uint32_t)(slot_num);
11519 11743
11520 11744 diag.pBuffer = pBuffer;
11521 11745 diag.function = MPI2_FUNCTION_DIAG_BUFFER_POST;
11522 11746
11523 11747 /*
11524 11748 * Form a blank cmd/pkt to store the acknowledgement message
11525 11749 */
11526 11750 pkt->pkt_ha_private = (opaque_t)&diag;
11527 11751 pkt->pkt_flags = FLAG_HEAD;
11528 11752 pkt->pkt_time = 60;
11529 11753 cmd->cmd_pkt = pkt;
11530 11754 cmd->cmd_flags = CFLAG_CMDIOC | CFLAG_FW_DIAG;
11531 11755
11532 11756 /*
11533 11757 * Save the command in a slot
11534 11758 */
11535 11759 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
11536 11760 /*
11537 11761 		 * Once the diag command gets a slot, set the
11538 11762 		 * CFLAG_PREPARED flag in cmd_flags.
11539 11763 */
11540 11764 cmd->cmd_flags |= CFLAG_PREPARED;
11541 11765 mptsas_start_diag(mpt, cmd);
11542 11766 } else {
11543 11767 mptsas_waitq_add(mpt, cmd);
11544 11768 }
11545 11769
11546 11770 while ((cmd->cmd_flags & CFLAG_FINISHED) == 0) {
11547 11771 cv_wait(&mpt->m_fw_diag_cv, &mpt->m_mutex);
11548 11772 }
11549 11773
11550 11774 if (cmd->cmd_flags & CFLAG_TIMEOUT) {
11551 11775 status = DDI_FAILURE;
11552 11776 mptsas_log(mpt, CE_WARN, "Post FW Diag command timeout");
11553 11777 goto out;
11554 11778 }
11555 11779
11556 11780 /*
11557 11781 * cmd_rfm points to the reply message if a reply was given. Check the
11558 11782 * IOCStatus to make sure everything went OK with the FW diag request
11559 11783 * and set buffer flags.
11560 11784 */
11561 11785 if (cmd->cmd_rfm) {
11562 11786 post_flags |= MPTSAS_ADDRESS_REPLY;
11563 11787 (void) ddi_dma_sync(mpt->m_dma_reply_frame_hdl, 0, 0,
11564 11788 DDI_DMA_SYNC_FORCPU);
11565 11789 reply = (pMpi2DiagBufferPostReply_t)(mpt->m_reply_frame +
11566 11790 (cmd->cmd_rfm -
11567 11791 (mpt->m_reply_frame_dma_addr & 0xffffffffu)));
11568 11792
11569 11793 /*
11570 11794 * Get the reply message data
11571 11795 */
11572 11796 iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
11573 11797 &reply->IOCStatus);
11574 11798 iocloginfo = ddi_get32(mpt->m_acc_reply_frame_hdl,
11575 11799 &reply->IOCLogInfo);
11576 11800 transfer_length = ddi_get32(mpt->m_acc_reply_frame_hdl,
11577 11801 &reply->TransferLength);
11578 11802
11579 11803 /*
11580 11804 		 * If the post failed, quit.
11581 11805 */
11582 11806 if (iocstatus != MPI2_IOCSTATUS_SUCCESS) {
11583 11807 status = DDI_FAILURE;
11584 11808 NDBG13(("post FW Diag Buffer failed: IOCStatus=0x%x, "
11585 11809 "IOCLogInfo=0x%x, TransferLength=0x%x", iocstatus,
11586 11810 iocloginfo, transfer_length));
11587 11811 goto out;
11588 11812 }
11589 11813
11590 11814 /*
11591 11815 * Post was successful.
11592 11816 */
11593 11817 pBuffer->valid_data = TRUE;
11594 11818 pBuffer->owned_by_firmware = TRUE;
11595 11819 *return_code = MPTSAS_FW_DIAG_ERROR_SUCCESS;
11596 11820 status = DDI_SUCCESS;
11597 11821 }
11598 11822
11599 11823 out:
11600 11824 /*
11601 11825 * Put the reply frame back on the free queue, increment the free
11602 11826 * index, and write the new index to the free index register. But only
11603 11827 * if this reply is an ADDRESS reply.
11604 11828 */
11605 11829 if (post_flags & MPTSAS_ADDRESS_REPLY) {
11606 11830 ddi_put32(mpt->m_acc_free_queue_hdl,
11607 11831 &((uint32_t *)(void *)mpt->m_free_queue)[mpt->m_free_index],
11608 11832 cmd->cmd_rfm);
11609 11833 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
11610 11834 DDI_DMA_SYNC_FORDEV);
11611 11835 if (++mpt->m_free_index == mpt->m_free_queue_depth) {
11612 11836 mpt->m_free_index = 0;
11613 11837 }
11614 11838 ddi_put32(mpt->m_datap, &mpt->m_reg->ReplyFreeHostIndex,
11615 11839 mpt->m_free_index);
11616 11840 }
11617 11841 if (cmd && (cmd->cmd_flags & CFLAG_PREPARED)) {
11618 11842 mptsas_remove_cmd(mpt, cmd);
11619 11843 post_flags &= (~MPTSAS_REQUEST_POOL_CMD);
11620 11844 }
11621 11845 if (post_flags & MPTSAS_REQUEST_POOL_CMD) {
11622 11846 mptsas_return_to_pool(mpt, cmd);
11623 11847 }
11624 11848
11625 11849 return (status);
11626 11850 }
11627 11851
11628 11852 static int
11629 11853 mptsas_release_fw_diag_buffer(mptsas_t *mpt,
11630 11854 mptsas_fw_diagnostic_buffer_t *pBuffer, uint32_t *return_code,
11631 11855 uint32_t diag_type)
11632 11856 {
11633 11857 mptsas_diag_request_t diag;
11634 11858 int status, slot_num, rel_flags = 0;
11635 11859 mptsas_cmd_t *cmd = NULL;
11636 11860 struct scsi_pkt *pkt;
11637 11861 pMpi2DiagReleaseReply_t reply;
11638 11862 uint16_t iocstatus;
11639 11863 uint32_t iocloginfo;
11640 11864
11641 11865 /*
11642 11866 * If buffer is not enabled, just leave.
11643 11867 */
11644 11868 *return_code = MPTSAS_FW_DIAG_ERROR_RELEASE_FAILED;
11645 11869 if (!pBuffer->enabled) {
11646 11870 mptsas_log(mpt, CE_NOTE, "This buffer type is not supported "
11647 11871 "by the IOC");
11648 11872 status = DDI_FAILURE;
11649 11873 goto out;
11650 11874 }
11651 11875
11652 11876 /*
11653 11877 * Clear some flags initially.
11654 11878 */
11655 11879 pBuffer->force_release = FALSE;
11656 11880 pBuffer->valid_data = FALSE;
11657 11881 pBuffer->owned_by_firmware = FALSE;
11658 11882
11659 11883 /*
11660 11884 * Get a cmd buffer from the cmd buffer pool
11661 11885 */
11662 11886 if ((slot_num = (mptsas_request_from_pool(mpt, &cmd, &pkt))) == -1) {
11663 11887 status = DDI_FAILURE;
11664 11888 mptsas_log(mpt, CE_NOTE, "command pool is full: Release FW "
11665 11889 "Diag");
11666 11890 goto out;
11667 11891 }
11668 11892 rel_flags |= MPTSAS_REQUEST_POOL_CMD;
11669 11893
11670 11894 bzero((caddr_t)cmd, sizeof (*cmd));
11671 11895 bzero((caddr_t)pkt, scsi_pkt_size());
11672 11896
11673 11897 cmd->ioc_cmd_slot = (uint32_t)(slot_num);
11674 11898
11675 11899 diag.pBuffer = pBuffer;
11676 11900 diag.function = MPI2_FUNCTION_DIAG_RELEASE;
11677 11901
11678 11902 /*
11679 11903 * Form a blank cmd/pkt to store the acknowledgement message
11680 11904 */
11681 11905 pkt->pkt_ha_private = (opaque_t)&diag;
11682 11906 pkt->pkt_flags = FLAG_HEAD;
11683 11907 pkt->pkt_time = 60;
11684 11908 cmd->cmd_pkt = pkt;
11685 11909 cmd->cmd_flags = CFLAG_CMDIOC | CFLAG_FW_DIAG;
11686 11910
11687 11911 /*
11688 11912 * Save the command in a slot
11689 11913 */
11690 11914 if (mptsas_save_cmd(mpt, cmd) == TRUE) {
11691 11915 /*
11692 11916 		 * Once the diag command gets a slot, set the
11693 11917 		 * CFLAG_PREPARED flag in cmd_flags.
11694 11918 */
11695 11919 cmd->cmd_flags |= CFLAG_PREPARED;
11696 11920 mptsas_start_diag(mpt, cmd);
11697 11921 } else {
11698 11922 mptsas_waitq_add(mpt, cmd);
11699 11923 }
11700 11924
11701 11925 while ((cmd->cmd_flags & CFLAG_FINISHED) == 0) {
11702 11926 cv_wait(&mpt->m_fw_diag_cv, &mpt->m_mutex);
11703 11927 }
11704 11928
11705 11929 if (cmd->cmd_flags & CFLAG_TIMEOUT) {
11706 11930 status = DDI_FAILURE;
11707 11931 mptsas_log(mpt, CE_WARN, "Release FW Diag command timeout");
11708 11932 goto out;
11709 11933 }
11710 11934
11711 11935 /*
11712 11936 * cmd_rfm points to the reply message if a reply was given. Check the
11713 11937 * IOCStatus to make sure everything went OK with the FW diag request
11714 11938 * and set buffer flags.
11715 11939 */
11716 11940 if (cmd->cmd_rfm) {
11717 11941 rel_flags |= MPTSAS_ADDRESS_REPLY;
11718 11942 (void) ddi_dma_sync(mpt->m_dma_reply_frame_hdl, 0, 0,
11719 11943 DDI_DMA_SYNC_FORCPU);
11720 11944 reply = (pMpi2DiagReleaseReply_t)(mpt->m_reply_frame +
11721 11945 (cmd->cmd_rfm -
11722 11946 (mpt->m_reply_frame_dma_addr & 0xffffffffu)));
11723 11947
11724 11948 /*
11725 11949 * Get the reply message data
11726 11950 */
11727 11951 iocstatus = ddi_get16(mpt->m_acc_reply_frame_hdl,
11728 11952 &reply->IOCStatus);
11729 11953 iocloginfo = ddi_get32(mpt->m_acc_reply_frame_hdl,
11730 11954 &reply->IOCLogInfo);
11731 11955
11732 11956 /*
11733 11957 		 * If the release failed, quit.
11734 11958 */
11735 11959 if ((iocstatus != MPI2_IOCSTATUS_SUCCESS) ||
11736 11960 pBuffer->owned_by_firmware) {
11737 11961 status = DDI_FAILURE;
11738 11962 NDBG13(("release FW Diag Buffer failed: "
11739 11963 "IOCStatus=0x%x, IOCLogInfo=0x%x", iocstatus,
11740 11964 iocloginfo));
11741 11965 goto out;
11742 11966 }
11743 11967
11744 11968 /*
11745 11969 * Release was successful.
11746 11970 */
11747 11971 *return_code = MPTSAS_FW_DIAG_ERROR_SUCCESS;
11748 11972 status = DDI_SUCCESS;
11749 11973
11750 11974 /*
11751 11975 * If this was for an UNREGISTER diag type command, clear the
11752 11976 * unique ID.
11753 11977 */
11754 11978 if (diag_type == MPTSAS_FW_DIAG_TYPE_UNREGISTER) {
11755 11979 pBuffer->unique_id = MPTSAS_FW_DIAG_INVALID_UID;
11756 11980 }
11757 11981 }
11758 11982
11759 11983 out:
11760 11984 /*
11761 11985 * Put the reply frame back on the free queue, increment the free
11762 11986 * index, and write the new index to the free index register. But only
11763 11987 * if this reply is an ADDRESS reply.
11764 11988 */
11765 11989 if (rel_flags & MPTSAS_ADDRESS_REPLY) {
11766 11990 ddi_put32(mpt->m_acc_free_queue_hdl,
11767 11991 &((uint32_t *)(void *)mpt->m_free_queue)[mpt->m_free_index],
11768 11992 cmd->cmd_rfm);
11769 11993 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
11770 11994 DDI_DMA_SYNC_FORDEV);
11771 11995 if (++mpt->m_free_index == mpt->m_free_queue_depth) {
11772 11996 mpt->m_free_index = 0;
11773 11997 }
11774 11998 ddi_put32(mpt->m_datap, &mpt->m_reg->ReplyFreeHostIndex,
11775 11999 mpt->m_free_index);
11776 12000 }
11777 12001 if (cmd && (cmd->cmd_flags & CFLAG_PREPARED)) {
11778 12002 mptsas_remove_cmd(mpt, cmd);
11779 12003 rel_flags &= (~MPTSAS_REQUEST_POOL_CMD);
11780 12004 }
11781 12005 if (rel_flags & MPTSAS_REQUEST_POOL_CMD) {
11782 12006 mptsas_return_to_pool(mpt, cmd);
11783 12007 }
11784 12008
11785 12009 return (status);
11786 12010 }
11787 12011
11788 12012 static int
11789 12013 mptsas_diag_register(mptsas_t *mpt, mptsas_fw_diag_register_t *diag_register,
11790 12014 uint32_t *return_code)
11791 12015 {
11792 12016 mptsas_fw_diagnostic_buffer_t *pBuffer;
11793 12017 uint8_t extended_type, buffer_type, i;
11794 12018 uint32_t buffer_size;
11795 12019 uint32_t unique_id;
11796 12020 int status;
11797 12021
11798 12022 ASSERT(mutex_owned(&mpt->m_mutex));
11799 12023
11800 12024 extended_type = diag_register->ExtendedType;
11801 12025 buffer_type = diag_register->BufferType;
11802 12026 buffer_size = diag_register->RequestedBufferSize;
11803 12027 unique_id = diag_register->UniqueId;
11804 12028
11805 12029 /*
11806 12030 * Check for valid buffer type
11807 12031 */
11808 12032 if (buffer_type >= MPI2_DIAG_BUF_TYPE_COUNT) {
11809 12033 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
11810 12034 return (DDI_FAILURE);
11811 12035 }
11812 12036
11813 12037 /*
11814 12038 * Get the current buffer and look up the unique ID. The unique ID
11815 12039 * should not be found. If it is, the ID is already in use.
11816 12040 */
11817 12041 i = mptsas_get_fw_diag_buffer_number(mpt, unique_id);
11818 12042 pBuffer = &mpt->m_fw_diag_buffer_list[buffer_type];
11819 12043 if (i != MPTSAS_FW_DIAGNOSTIC_UID_NOT_FOUND) {
11820 12044 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
11821 12045 return (DDI_FAILURE);
11822 12046 }
11823 12047
11824 12048 /*
11825 12049 * The buffer's unique ID should not be registered yet, and the given
11826 12050 * unique ID cannot be 0.
11827 12051 */
11828 12052 if ((pBuffer->unique_id != MPTSAS_FW_DIAG_INVALID_UID) ||
11829 12053 (unique_id == MPTSAS_FW_DIAG_INVALID_UID)) {
11830 12054 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
11831 12055 return (DDI_FAILURE);
11832 12056 }
11833 12057
11834 12058 /*
11835 12059 * If this buffer is already posted as immediate, just change owner.
11836 12060 */
11837 12061 if (pBuffer->immediate && pBuffer->owned_by_firmware &&
11838 12062 (pBuffer->unique_id == MPTSAS_FW_DIAG_INVALID_UID)) {
11839 12063 pBuffer->immediate = FALSE;
11840 12064 pBuffer->unique_id = unique_id;
11841 12065 return (DDI_SUCCESS);
11842 12066 }
11843 12067
11844 12068 /*
11845 12069 * Post a new buffer after checking if it's enabled. The DMA buffer
11846 12070 * that is allocated will be contiguous (sgl_len = 1).
11847 12071 */
11848 12072 if (!pBuffer->enabled) {
11849 12073 *return_code = MPTSAS_FW_DIAG_ERROR_NO_BUFFER;
11850 12074 return (DDI_FAILURE);
11851 12075 }
11852 12076 bzero(&pBuffer->buffer_data, sizeof (mptsas_dma_alloc_state_t));
11853 12077 pBuffer->buffer_data.size = buffer_size;
11854 12078 if (mptsas_dma_alloc(mpt, &pBuffer->buffer_data) != DDI_SUCCESS) {
11855 12079 mptsas_log(mpt, CE_WARN, "failed to alloc DMA resource for "
11856 12080 "diag buffer: size = %d bytes", buffer_size);
11857 12081 *return_code = MPTSAS_FW_DIAG_ERROR_NO_BUFFER;
11858 12082 return (DDI_FAILURE);
11859 12083 }
11860 12084
11861 12085 /*
11862 12086 * Copy the given info to the diag buffer and post the buffer.
11863 12087 */
11864 12088 pBuffer->buffer_type = buffer_type;
11865 12089 pBuffer->immediate = FALSE;
11866 12090 if (buffer_type == MPI2_DIAG_BUF_TYPE_TRACE) {
11867 12091 for (i = 0; i < (sizeof (pBuffer->product_specific) / 4);
11868 12092 i++) {
11869 12093 pBuffer->product_specific[i] =
11870 12094 diag_register->ProductSpecific[i];
11871 12095 }
11872 12096 }
11873 12097 pBuffer->extended_type = extended_type;
11874 12098 pBuffer->unique_id = unique_id;
11875 12099 status = mptsas_post_fw_diag_buffer(mpt, pBuffer, return_code);
11876 12100
11877 12101 if (mptsas_check_dma_handle(pBuffer->buffer_data.handle) !=
11878 12102 DDI_SUCCESS) {
11879 12103 mptsas_log(mpt, CE_WARN, "Check of DMA handle failed in "
11880 12104 "mptsas_diag_register.");
11881 12105 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
11882 12106 status = DDI_FAILURE;
11883 12107 }
11884 12108
11885 12109 /*
11886 12110 * In case there was a failure, free the DMA buffer.
11887 12111 */
11888 12112 if (status == DDI_FAILURE) {
11889 12113 mptsas_dma_free(&pBuffer->buffer_data);
11890 12114 }
11891 12115
11892 12116 return (status);
11893 12117 }
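
Registration keys the per-type buffer array on BufferType but identifies buffers to callers by UniqueId: the type must be below MPI2_DIAG_BUF_TYPE_COUNT, the UniqueId must not already be in use, must not be the "invalid" marker, and the slot for that type must not already hold a registered ID. The sketch below models that bookkeeping with a small array; the FAKE_* constants (including using 0 as the invalid UID) are assumptions for illustration, not values taken from the headers.

#include <stdint.h>
#include <stdio.h>

#define	FAKE_BUF_TYPE_COUNT	3	/* hypothetical number of buffer types */
#define	FAKE_INVALID_UID	0	/* assume 0 doubles as "not registered" */
#define	FAKE_UID_NOT_FOUND	0xff

/* One slot per buffer type, looked up by unique ID. */
static uint32_t fake_uid[FAKE_BUF_TYPE_COUNT];

static uint8_t
lookup_buffer(uint32_t unique_id)
{
	uint8_t i;

	for (i = 0; i < FAKE_BUF_TYPE_COUNT; i++) {
		if (fake_uid[i] == unique_id)
			return (i);
	}
	return (FAKE_UID_NOT_FOUND);
}

static int
register_buffer(uint8_t buffer_type, uint32_t unique_id)
{
	if (buffer_type >= FAKE_BUF_TYPE_COUNT)
		return (-1);			/* invalid buffer type */
	if (unique_id == FAKE_INVALID_UID)
		return (-1);			/* the invalid UID is reserved */
	if (lookup_buffer(unique_id) != FAKE_UID_NOT_FOUND)
		return (-1);			/* ID already in use */
	if (fake_uid[buffer_type] != FAKE_INVALID_UID)
		return (-1);			/* type already registered */
	fake_uid[buffer_type] = unique_id;
	return (0);
}

int
main(void)
{
	(void) printf("%d\n", register_buffer(0, 0x1111));	/*  0 */
	(void) printf("%d\n", register_buffer(1, 0x1111));	/* -1: dup ID */
	(void) printf("%d\n", register_buffer(0, 0x2222));	/* -1: type busy */
	(void) printf("%d\n", register_buffer(5, 0x3333));	/* -1: bad type */
	return (0);
}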
11894 12118
11895 12119 static int
11896 12120 mptsas_diag_unregister(mptsas_t *mpt,
11897 12121 mptsas_fw_diag_unregister_t *diag_unregister, uint32_t *return_code)
11898 12122 {
11899 12123 mptsas_fw_diagnostic_buffer_t *pBuffer;
11900 12124 uint8_t i;
11901 12125 uint32_t unique_id;
11902 12126 int status;
11903 12127
11904 12128 ASSERT(mutex_owned(&mpt->m_mutex));
11905 12129
11906 12130 unique_id = diag_unregister->UniqueId;
11907 12131
11908 12132 /*
11909 12133 * Get the current buffer and look up the unique ID. The unique ID
11910 12134 * should be there.
11911 12135 */
11912 12136 i = mptsas_get_fw_diag_buffer_number(mpt, unique_id);
11913 12137 if (i == MPTSAS_FW_DIAGNOSTIC_UID_NOT_FOUND) {
11914 12138 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
11915 12139 return (DDI_FAILURE);
11916 12140 }
11917 12141
11918 12142 pBuffer = &mpt->m_fw_diag_buffer_list[i];
11919 12143
11920 12144 /*
11921 12145 * Try to release the buffer from FW before freeing it. If release
11922 12146 * fails, don't free the DMA buffer in case FW tries to access it
11923 12147 	 * later. If the buffer is not owned by firmware, it can't be released.
11924 12148 */
11925 12149 if (!pBuffer->owned_by_firmware) {
11926 12150 status = DDI_SUCCESS;
11927 12151 } else {
11928 12152 status = mptsas_release_fw_diag_buffer(mpt, pBuffer,
11929 12153 return_code, MPTSAS_FW_DIAG_TYPE_UNREGISTER);
11930 12154 }
11931 12155
11932 12156 /*
11933 12157 * At this point, return the current status no matter what happens with
11934 12158 * the DMA buffer.
11935 12159 */
11936 12160 pBuffer->unique_id = MPTSAS_FW_DIAG_INVALID_UID;
11937 12161 if (status == DDI_SUCCESS) {
11938 12162 if (mptsas_check_dma_handle(pBuffer->buffer_data.handle) !=
11939 12163 DDI_SUCCESS) {
11940 12164 mptsas_log(mpt, CE_WARN, "Check of DMA handle failed "
11941 12165 "in mptsas_diag_unregister.");
11942 12166 ddi_fm_service_impact(mpt->m_dip,
11943 12167 DDI_SERVICE_UNAFFECTED);
11944 12168 }
11945 12169 mptsas_dma_free(&pBuffer->buffer_data);
11946 12170 }
11947 12171
11948 12172 return (status);
11949 12173 }
11950 12174
11951 12175 static int
11952 12176 mptsas_diag_query(mptsas_t *mpt, mptsas_fw_diag_query_t *diag_query,
11953 12177 uint32_t *return_code)
11954 12178 {
11955 12179 mptsas_fw_diagnostic_buffer_t *pBuffer;
11956 12180 uint8_t i;
11957 12181 uint32_t unique_id;
11958 12182
11959 12183 ASSERT(mutex_owned(&mpt->m_mutex));
11960 12184
11961 12185 unique_id = diag_query->UniqueId;
11962 12186
11963 12187 /*
11964 12188 * If ID is valid, query on ID.
11965 12189 * If ID is invalid, query on buffer type.
11966 12190 */
11967 12191 if (unique_id == MPTSAS_FW_DIAG_INVALID_UID) {
11968 12192 i = diag_query->BufferType;
11969 12193 if (i >= MPI2_DIAG_BUF_TYPE_COUNT) {
11970 12194 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
11971 12195 return (DDI_FAILURE);
11972 12196 }
11973 12197 } else {
11974 12198 i = mptsas_get_fw_diag_buffer_number(mpt, unique_id);
11975 12199 if (i == MPTSAS_FW_DIAGNOSTIC_UID_NOT_FOUND) {
11976 12200 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
11977 12201 return (DDI_FAILURE);
11978 12202 }
11979 12203 }
11980 12204
11981 12205 /*
11982 12206 * Fill query structure with the diag buffer info.
11983 12207 */
11984 12208 pBuffer = &mpt->m_fw_diag_buffer_list[i];
11985 12209 diag_query->BufferType = pBuffer->buffer_type;
11986 12210 diag_query->ExtendedType = pBuffer->extended_type;
11987 12211 if (diag_query->BufferType == MPI2_DIAG_BUF_TYPE_TRACE) {
11988 12212 for (i = 0; i < (sizeof (diag_query->ProductSpecific) / 4);
11989 12213 i++) {
11990 12214 diag_query->ProductSpecific[i] =
11991 12215 pBuffer->product_specific[i];
11992 12216 }
11993 12217 }
11994 12218 diag_query->TotalBufferSize = pBuffer->buffer_data.size;
11995 12219 diag_query->DriverAddedBufferSize = 0;
11996 12220 diag_query->UniqueId = pBuffer->unique_id;
11997 12221 diag_query->ApplicationFlags = 0;
11998 12222 diag_query->DiagnosticFlags = 0;
11999 12223
12000 12224 /*
12001 12225 * Set/Clear application flags
12002 12226 */
12003 12227 if (pBuffer->immediate) {
12004 12228 diag_query->ApplicationFlags &= ~MPTSAS_FW_DIAG_FLAG_APP_OWNED;
12005 12229 } else {
12006 12230 diag_query->ApplicationFlags |= MPTSAS_FW_DIAG_FLAG_APP_OWNED;
12007 12231 }
12008 12232 if (pBuffer->valid_data || pBuffer->owned_by_firmware) {
12009 12233 diag_query->ApplicationFlags |=
12010 12234 MPTSAS_FW_DIAG_FLAG_BUFFER_VALID;
12011 12235 } else {
12012 12236 diag_query->ApplicationFlags &=
12013 12237 ~MPTSAS_FW_DIAG_FLAG_BUFFER_VALID;
12014 12238 }
12015 12239 if (pBuffer->owned_by_firmware) {
12016 12240 diag_query->ApplicationFlags |=
12017 12241 MPTSAS_FW_DIAG_FLAG_FW_BUFFER_ACCESS;
12018 12242 } else {
12019 12243 diag_query->ApplicationFlags &=
12020 12244 ~MPTSAS_FW_DIAG_FLAG_FW_BUFFER_ACCESS;
12021 12245 }
12022 12246
12023 12247 return (DDI_SUCCESS);
12024 12248 }
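
The query path reports buffer state through ApplicationFlags bits: APP_OWNED is set unless the buffer was posted as an immediate buffer, BUFFER_VALID when the buffer has valid data or is still owned by firmware, and FW_BUFFER_ACCESS when firmware currently owns it. A compact restatement is below; the FAKE_FLAG_* bit values are stand-ins and may not match the real MPTSAS_FW_DIAG_FLAG_* definitions.

#include <stdint.h>
#include <stdio.h>

/* Stand-in flag values; the real MPTSAS_FW_DIAG_FLAG_* values may differ. */
#define	FAKE_FLAG_APP_OWNED		0x0001
#define	FAKE_FLAG_BUFFER_VALID		0x0002
#define	FAKE_FLAG_FW_BUFFER_ACCESS	0x0004

static uint32_t
build_app_flags(int immediate, int valid_data, int owned_by_firmware)
{
	uint32_t flags = 0;

	if (!immediate)
		flags |= FAKE_FLAG_APP_OWNED;
	if (valid_data || owned_by_firmware)
		flags |= FAKE_FLAG_BUFFER_VALID;
	if (owned_by_firmware)
		flags |= FAKE_FLAG_FW_BUFFER_ACCESS;
	return (flags);
}

int
main(void)
{
	/* A registered, posted buffer still owned by firmware. */
	(void) printf("0x%x\n", build_app_flags(0, 1, 1));	/* 0x7 */
	/* A released buffer whose data is now valid for the application. */
	(void) printf("0x%x\n", build_app_flags(0, 1, 0));	/* 0x3 */
	return (0);
}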
12025 12249
12026 12250 static int
12027 12251 mptsas_diag_read_buffer(mptsas_t *mpt,
12028 12252 mptsas_diag_read_buffer_t *diag_read_buffer, uint8_t *ioctl_buf,
12029 12253 uint32_t *return_code, int ioctl_mode)
12030 12254 {
12031 12255 mptsas_fw_diagnostic_buffer_t *pBuffer;
12032 12256 uint8_t i, *pData;
12033 12257 uint32_t unique_id, byte;
12034 12258 int status;
12035 12259
12036 12260 ASSERT(mutex_owned(&mpt->m_mutex));
12037 12261
12038 12262 unique_id = diag_read_buffer->UniqueId;
12039 12263
12040 12264 /*
12041 12265 * Get the current buffer and look up the unique ID. The unique ID
12042 12266 * should be there.
12043 12267 */
12044 12268 i = mptsas_get_fw_diag_buffer_number(mpt, unique_id);
12045 12269 if (i == MPTSAS_FW_DIAGNOSTIC_UID_NOT_FOUND) {
12046 12270 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
12047 12271 return (DDI_FAILURE);
12048 12272 }
12049 12273
12050 12274 pBuffer = &mpt->m_fw_diag_buffer_list[i];
12051 12275
12052 12276 /*
12053 12277 * Make sure requested read is within limits
12054 12278 */
12055 12279 if (diag_read_buffer->StartingOffset + diag_read_buffer->BytesToRead >
12056 12280 pBuffer->buffer_data.size) {
12057 12281 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12058 12282 return (DDI_FAILURE);
12059 12283 }
12060 12284
12061 12285 /*
12062 12286 * Copy the requested data from DMA to the diag_read_buffer. The DMA
12063 12287 * buffer that was allocated is one contiguous buffer.
12064 12288 */
12065 12289 pData = (uint8_t *)(pBuffer->buffer_data.memp +
12066 12290 diag_read_buffer->StartingOffset);
12067 12291 (void) ddi_dma_sync(pBuffer->buffer_data.handle, 0, 0,
12068 12292 DDI_DMA_SYNC_FORCPU);
12069 12293 for (byte = 0; byte < diag_read_buffer->BytesToRead; byte++) {
12070 12294 if (ddi_copyout(pData + byte, ioctl_buf + byte, 1, ioctl_mode)
12071 12295 != 0) {
12072 12296 return (DDI_FAILURE);
12073 12297 }
12074 12298 }
12075 12299 diag_read_buffer->Status = 0;
12076 12300
12077 12301 /*
12078 12302 * Set or clear the Force Release flag.
12079 12303 */
12080 12304 if (pBuffer->force_release) {
12081 12305 diag_read_buffer->Flags |= MPTSAS_FW_DIAG_FLAG_FORCE_RELEASE;
12082 12306 } else {
12083 12307 diag_read_buffer->Flags &= ~MPTSAS_FW_DIAG_FLAG_FORCE_RELEASE;
12084 12308 }
12085 12309
12086 12310 /*
12087 12311 * If buffer is to be reregistered, make sure it's not already owned by
12088 12312 * firmware first.
12089 12313 */
12090 12314 status = DDI_SUCCESS;
12091 12315 if (!pBuffer->owned_by_firmware) {
12092 12316 if (diag_read_buffer->Flags & MPTSAS_FW_DIAG_FLAG_REREGISTER) {
12093 12317 status = mptsas_post_fw_diag_buffer(mpt, pBuffer,
12094 12318 return_code);
12095 12319 }
12096 12320 }
12097 12321
12098 12322 return (status);
12099 12323 }
12100 12324
12101 12325 static int
12102 12326 mptsas_diag_release(mptsas_t *mpt, mptsas_fw_diag_release_t *diag_release,
12103 12327 uint32_t *return_code)
12104 12328 {
12105 12329 mptsas_fw_diagnostic_buffer_t *pBuffer;
12106 12330 uint8_t i;
12107 12331 uint32_t unique_id;
12108 12332 int status;
12109 12333
12110 12334 ASSERT(mutex_owned(&mpt->m_mutex));
12111 12335
12112 12336 unique_id = diag_release->UniqueId;
12113 12337
12114 12338 /*
12115 12339 * Get the current buffer and look up the unique ID. The unique ID
12116 12340 * should be there.
12117 12341 */
12118 12342 i = mptsas_get_fw_diag_buffer_number(mpt, unique_id);
12119 12343 if (i == MPTSAS_FW_DIAGNOSTIC_UID_NOT_FOUND) {
12120 12344 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_UID;
12121 12345 return (DDI_FAILURE);
12122 12346 }
12123 12347
12124 12348 pBuffer = &mpt->m_fw_diag_buffer_list[i];
12125 12349
12126 12350 /*
12127 12351 * If buffer is not owned by firmware, it's already been released.
12128 12352 */
12129 12353 if (!pBuffer->owned_by_firmware) {
12130 12354 *return_code = MPTSAS_FW_DIAG_ERROR_ALREADY_RELEASED;
12131 12355 return (DDI_FAILURE);
12132 12356 }
12133 12357
12134 12358 /*
12135 12359 * Release the buffer.
12136 12360 */
12137 12361 status = mptsas_release_fw_diag_buffer(mpt, pBuffer, return_code,
12138 12362 MPTSAS_FW_DIAG_TYPE_RELEASE);
12139 12363 return (status);
12140 12364 }
12141 12365
12142 12366 static int
12143 12367 mptsas_do_diag_action(mptsas_t *mpt, uint32_t action, uint8_t *diag_action,
12144 12368 uint32_t length, uint32_t *return_code, int ioctl_mode)
12145 12369 {
12146 12370 mptsas_fw_diag_register_t diag_register;
12147 12371 mptsas_fw_diag_unregister_t diag_unregister;
12148 12372 mptsas_fw_diag_query_t diag_query;
12149 12373 mptsas_diag_read_buffer_t diag_read_buffer;
12150 12374 mptsas_fw_diag_release_t diag_release;
12151 12375 int status = DDI_SUCCESS;
12152 12376 uint32_t original_return_code, read_buf_len;
12153 12377
12154 12378 ASSERT(mutex_owned(&mpt->m_mutex));
12155 12379
12156 12380 original_return_code = *return_code;
12157 12381 *return_code = MPTSAS_FW_DIAG_ERROR_SUCCESS;
12158 12382
12159 12383 switch (action) {
12160 12384 case MPTSAS_FW_DIAG_TYPE_REGISTER:
12161 12385 if (!length) {
12162 12386 *return_code =
12163 12387 MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12164 12388 status = DDI_FAILURE;
12165 12389 break;
12166 12390 }
12167 12391 if (ddi_copyin(diag_action, &diag_register,
12168 12392 sizeof (diag_register), ioctl_mode) != 0) {
12169 12393 return (DDI_FAILURE);
12170 12394 }
12171 12395 status = mptsas_diag_register(mpt, &diag_register,
12172 12396 return_code);
12173 12397 break;
12174 12398
12175 12399 case MPTSAS_FW_DIAG_TYPE_UNREGISTER:
12176 12400 if (length < sizeof (diag_unregister)) {
12177 12401 *return_code =
12178 12402 MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12179 12403 status = DDI_FAILURE;
12180 12404 break;
12181 12405 }
12182 12406 if (ddi_copyin(diag_action, &diag_unregister,
12183 12407 sizeof (diag_unregister), ioctl_mode) != 0) {
12184 12408 return (DDI_FAILURE);
12185 12409 }
12186 12410 status = mptsas_diag_unregister(mpt, &diag_unregister,
12187 12411 return_code);
12188 12412 break;
12189 12413
12190 12414 case MPTSAS_FW_DIAG_TYPE_QUERY:
12191 12415 if (length < sizeof (diag_query)) {
12192 12416 *return_code =
12193 12417 MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12194 12418 status = DDI_FAILURE;
12195 12419 break;
12196 12420 }
12197 12421 if (ddi_copyin(diag_action, &diag_query,
12198 12422 sizeof (diag_query), ioctl_mode) != 0) {
12199 12423 return (DDI_FAILURE);
12200 12424 }
12201 12425 status = mptsas_diag_query(mpt, &diag_query,
12202 12426 return_code);
12203 12427 if (status == DDI_SUCCESS) {
12204 12428 if (ddi_copyout(&diag_query, diag_action,
12205 12429 sizeof (diag_query), ioctl_mode) != 0) {
12206 12430 return (DDI_FAILURE);
12207 12431 }
12208 12432 }
12209 12433 break;
12210 12434
12211 12435 case MPTSAS_FW_DIAG_TYPE_READ_BUFFER:
12212 12436 if (ddi_copyin(diag_action, &diag_read_buffer,
12213 12437 sizeof (diag_read_buffer) - 4, ioctl_mode) != 0) {
12214 12438 return (DDI_FAILURE);
12215 12439 }
12216 12440 read_buf_len = sizeof (diag_read_buffer) -
12217 12441 sizeof (diag_read_buffer.DataBuffer) +
12218 12442 diag_read_buffer.BytesToRead;
12219 12443 if (length < read_buf_len) {
12220 12444 *return_code =
12221 12445 MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12222 12446 status = DDI_FAILURE;
12223 12447 break;
12224 12448 }
12225 12449 status = mptsas_diag_read_buffer(mpt,
12226 12450 &diag_read_buffer, diag_action +
12227 12451 sizeof (diag_read_buffer) - 4, return_code,
12228 12452 ioctl_mode);
12229 12453 if (status == DDI_SUCCESS) {
12230 12454 if (ddi_copyout(&diag_read_buffer, diag_action,
12231 12455 sizeof (diag_read_buffer) - 4, ioctl_mode)
12232 12456 != 0) {
12233 12457 return (DDI_FAILURE);
12234 12458 }
12235 12459 }
12236 12460 break;
12237 12461
12238 12462 case MPTSAS_FW_DIAG_TYPE_RELEASE:
12239 12463 if (length < sizeof (diag_release)) {
12240 12464 *return_code =
12241 12465 MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12242 12466 status = DDI_FAILURE;
12243 12467 break;
12244 12468 }
12245 12469 if (ddi_copyin(diag_action, &diag_release,
12246 12470 sizeof (diag_release), ioctl_mode) != 0) {
12247 12471 return (DDI_FAILURE);
12248 12472 }
12249 12473 status = mptsas_diag_release(mpt, &diag_release,
12250 12474 return_code);
12251 12475 break;
12252 12476
12253 12477 default:
12254 12478 *return_code = MPTSAS_FW_DIAG_ERROR_INVALID_PARAMETER;
12255 12479 status = DDI_FAILURE;
12256 12480 break;
12257 12481 }
12258 12482
12259 12483 if ((status == DDI_FAILURE) &&
12260 12484 (original_return_code == MPTSAS_FW_DIAG_NEW) &&
12261 12485 (*return_code != MPTSAS_FW_DIAG_ERROR_SUCCESS)) {
12262 12486 status = DDI_SUCCESS;
12263 12487 }
12264 12488
12265 12489 return (status);
12266 12490 }
12267 12491
12268 12492 static int
12269 12493 mptsas_diag_action(mptsas_t *mpt, mptsas_diag_action_t *user_data, int mode)
12270 12494 {
12271 12495 int status;
12272 12496 mptsas_diag_action_t driver_data;
12273 12497
12274 12498 ASSERT(mutex_owned(&mpt->m_mutex));
12275 12499
12276 12500 /*
12277 12501 * Copy the user data to a driver data buffer.
12278 12502 */
12279 12503 if (ddi_copyin(user_data, &driver_data, sizeof (mptsas_diag_action_t),
12280 12504 mode) == 0) {
12281 12505 /*
12282 12506 * Send diag action request if Action is valid
12283 12507 */
12284 12508 if (driver_data.Action == MPTSAS_FW_DIAG_TYPE_REGISTER ||
12285 12509 driver_data.Action == MPTSAS_FW_DIAG_TYPE_UNREGISTER ||
12286 12510 driver_data.Action == MPTSAS_FW_DIAG_TYPE_QUERY ||
12287 12511 driver_data.Action == MPTSAS_FW_DIAG_TYPE_READ_BUFFER ||
12288 12512 driver_data.Action == MPTSAS_FW_DIAG_TYPE_RELEASE) {
12289 12513 status = mptsas_do_diag_action(mpt, driver_data.Action,
12290 12514 (void *)(uintptr_t)driver_data.PtrDiagAction,
12291 12515 driver_data.Length, &driver_data.ReturnCode,
12292 12516 mode);
12293 12517 if (status == DDI_SUCCESS) {
12294 12518 if (ddi_copyout(&driver_data.ReturnCode,
12295 12519 &user_data->ReturnCode,
12296 12520 sizeof (user_data->ReturnCode), mode)
12297 12521 != 0) {
12298 12522 status = EFAULT;
12299 12523 } else {
12300 12524 status = 0;
12301 12525 }
12302 12526 } else {
12303 12527 status = EIO;
12304 12528 }
12305 12529 } else {
12306 12530 status = EINVAL;
12307 12531 }
12308 12532 } else {
12309 12533 status = EFAULT;
12310 12534 }
12311 12535
12312 12536 return (status);
12313 12537 }
12314 12538
12315 12539 /*
12316 12540 * This routine handles the "event query" ioctl.
12317 12541 */
12318 12542 static int
12319 12543 mptsas_event_query(mptsas_t *mpt, mptsas_event_query_t *data, int mode,
12320 12544 int *rval)
12321 12545 {
12322 12546 int status;
12323 12547 mptsas_event_query_t driverdata;
12324 12548 uint8_t i;
12325 12549
12326 12550 driverdata.Entries = MPTSAS_EVENT_QUEUE_SIZE;
12327 12551
12328 12552 mutex_enter(&mpt->m_mutex);
12329 12553 for (i = 0; i < 4; i++) {
12330 12554 driverdata.Types[i] = mpt->m_event_mask[i];
12331 12555 }
12332 12556 mutex_exit(&mpt->m_mutex);
12333 12557
12334 12558 if (ddi_copyout(&driverdata, data, sizeof (driverdata), mode) != 0) {
12335 12559 status = EFAULT;
12336 12560 } else {
12337 12561 *rval = MPTIOCTL_STATUS_GOOD;
12338 12562 status = 0;
12339 12563 }
12340 12564
12341 12565 return (status);
12342 12566 }
12343 12567
12344 12568 /*
12345 12569 * This routine handles the "event enable" ioctl.
12346 12570 */
12347 12571 static int
12348 12572 mptsas_event_enable(mptsas_t *mpt, mptsas_event_enable_t *data, int mode,
12349 12573 int *rval)
12350 12574 {
12351 12575 int status;
12352 12576 mptsas_event_enable_t driverdata;
12353 12577 uint8_t i;
12354 12578
12355 12579 if (ddi_copyin(data, &driverdata, sizeof (driverdata), mode) == 0) {
12356 12580 mutex_enter(&mpt->m_mutex);
12357 12581 for (i = 0; i < 4; i++) {
12358 12582 mpt->m_event_mask[i] = driverdata.Types[i];
12359 12583 }
12360 12584 mutex_exit(&mpt->m_mutex);
12361 12585
12362 12586 *rval = MPTIOCTL_STATUS_GOOD;
12363 12587 status = 0;
12364 12588 } else {
12365 12589 status = EFAULT;
12366 12590 }
12367 12591 return (status);
12368 12592 }
12369 12593
12370 12594 /*
12371 12595 * This routine handles the "event report" ioctl.
12372 12596 */
12373 12597 static int
12374 12598 mptsas_event_report(mptsas_t *mpt, mptsas_event_report_t *data, int mode,
12375 12599 int *rval)
12376 12600 {
12377 12601 int status;
12378 12602 mptsas_event_report_t driverdata;
12379 12603
12380 12604 mutex_enter(&mpt->m_mutex);
12381 12605
12382 12606 if (ddi_copyin(&data->Size, &driverdata.Size, sizeof (driverdata.Size),
12383 12607 mode) == 0) {
12384 12608 if (driverdata.Size >= sizeof (mpt->m_events)) {
12385 12609 if (ddi_copyout(mpt->m_events, data->Events,
12386 12610 sizeof (mpt->m_events), mode) != 0) {
12387 12611 status = EFAULT;
12388 12612 } else {
12389 12613 if (driverdata.Size > sizeof (mpt->m_events)) {
12390 12614 driverdata.Size =
12391 12615 sizeof (mpt->m_events);
12392 12616 if (ddi_copyout(&driverdata.Size,
12393 12617 &data->Size,
12394 12618 sizeof (driverdata.Size),
12395 12619 mode) != 0) {
12396 12620 status = EFAULT;
12397 12621 } else {
12398 12622 *rval = MPTIOCTL_STATUS_GOOD;
12399 12623 status = 0;
12400 12624 }
12401 12625 } else {
12402 12626 *rval = MPTIOCTL_STATUS_GOOD;
12403 12627 status = 0;
12404 12628 }
12405 12629 }
12406 12630 } else {
12407 12631 *rval = MPTIOCTL_STATUS_LEN_TOO_SHORT;
12408 12632 status = 0;
12409 12633 }
12410 12634 } else {
12411 12635 status = EFAULT;
12412 12636 }
12413 12637
12414 12638 mutex_exit(&mpt->m_mutex);
12415 12639 return (status);
12416 12640 }
12417 12641
12418 12642 static void
12419 12643 mptsas_lookup_pci_data(mptsas_t *mpt, mptsas_adapter_data_t *adapter_data)
12420 12644 {
12421 12645 int *reg_data;
12422 12646 uint_t reglen;
12423 12647
12424 12648 /*
12425 12649 * Lookup the 'reg' property and extract the other data
12426 12650 */
12427 12651 if (ddi_prop_lookup_int_array(DDI_DEV_T_ANY, mpt->m_dip,
12428 12652 	    DDI_PROP_DONTPASS, "reg", &reg_data, &reglen) ==
12429 12653 DDI_PROP_SUCCESS) {
12430 12654 /*
12431 12655 * Extract the PCI data from the 'reg' property first DWORD.
12432 12656 * The entry looks like the following:
12433 12657 * First DWORD:
12434 12658 * Bits 0 - 7 8-bit Register number
12435 12659 * Bits 8 - 10 3-bit Function number
12436 12660 * Bits 11 - 15 5-bit Device number
12437 12661 * Bits 16 - 23 8-bit Bus number
12438 12662 * Bits 24 - 25 2-bit Address Space type identifier
12439 12663 *
12440 12664 */
12441 12665 adapter_data->PciInformation.u.bits.BusNumber =
12442 12666 (reg_data[0] & 0x00FF0000) >> 16;
12443 12667 adapter_data->PciInformation.u.bits.DeviceNumber =
12444 12668 (reg_data[0] & 0x0000F800) >> 11;
12445 12669 adapter_data->PciInformation.u.bits.FunctionNumber =
12446 12670 (reg_data[0] & 0x00000700) >> 8;
12447 12671 ddi_prop_free((void *)reg_data);
12448 12672 } else {
12449 12673 /*
12450 12674 * If we can't determine the PCI data then we fill in FF's for
12451 12675 * the data to indicate this.
12452 12676 */
12453 12677 adapter_data->PCIDeviceHwId = 0xFFFFFFFF;
12454 12678 adapter_data->MpiPortNumber = 0xFFFFFFFF;
12455 12679 adapter_data->PciInformation.u.AsDWORD = 0xFFFFFFFF;
12456 12680 }
12457 12681
12458 12682 /*
12459 12683 	 * The firmware version is cached in mpt->m_fwversion.
12460 12684 */
12461 12685 adapter_data->MpiFirmwareVersion = mpt->m_fwversion;
12462 12686 }
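The first-DWORD decoding described in the comment above can be exercised on its own. The following is a minimal user-level sketch, not part of the driver; the reg value 0x00021100 is a made-up example.

	#include <stdio.h>
	#include <stdint.h>

	int
	main(void)
	{
		uint32_t reg0 = 0x00021100;	/* hypothetical first 'reg' DWORD */
		uint32_t bus  = (reg0 & 0x00FF0000) >> 16;	/* bits 16 - 23 */
		uint32_t dev  = (reg0 & 0x0000F800) >> 11;	/* bits 11 - 15 */
		uint32_t func = (reg0 & 0x00000700) >> 8;	/* bits  8 - 10 */

		/* Prints "bus 2 dev 2 func 1" for the value above. */
		(void) printf("bus %u dev %u func %u\n", bus, dev, func);
		return (0);
	}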
12463 12687
12464 12688 static void
12465 12689 mptsas_read_adapter_data(mptsas_t *mpt, mptsas_adapter_data_t *adapter_data)
12466 12690 {
12467 12691 char *driver_verstr = MPTSAS_MOD_STRING;
12468 12692
12469 12693 mptsas_lookup_pci_data(mpt, adapter_data);
12470 12694 adapter_data->AdapterType = mpt->m_MPI25 ?
12471 12695 MPTIOCTL_ADAPTER_TYPE_SAS3 :
12472 12696 MPTIOCTL_ADAPTER_TYPE_SAS2;
12473 12697 adapter_data->PCIDeviceHwId = (uint32_t)mpt->m_devid;
12474 12698 adapter_data->PCIDeviceHwRev = (uint32_t)mpt->m_revid;
12475 12699 adapter_data->SubSystemId = (uint32_t)mpt->m_ssid;
12476 12700 adapter_data->SubsystemVendorId = (uint32_t)mpt->m_svid;
12477 12701 (void) strcpy((char *)&adapter_data->DriverVersion[0], driver_verstr);
12478 12702 adapter_data->BiosVersion = 0;
12479 12703 (void) mptsas_get_bios_page3(mpt, &adapter_data->BiosVersion);
12480 12704 }
12481 12705
12482 12706 static void
12483 12707 mptsas_read_pci_info(mptsas_t *mpt, mptsas_pci_info_t *pci_info)
12484 12708 {
12485 12709 int *reg_data, i;
12486 12710 uint_t reglen;
12487 12711
12488 12712 /*
12489 12713 * Lookup the 'reg' property and extract the other data
12490 12714 */
12491 12715 if (ddi_prop_lookup_int_array(DDI_DEV_T_ANY, mpt->m_dip,
12492 12716 	    DDI_PROP_DONTPASS, "reg", &reg_data, &reglen) ==
12493 12717 DDI_PROP_SUCCESS) {
12494 12718 /*
12495 12719 * Extract the PCI data from the 'reg' property first DWORD.
12496 12720 * The entry looks like the following:
12497 12721 * First DWORD:
12498 12722 * Bits 8 - 10 3-bit Function number
12499 12723 * Bits 11 - 15 5-bit Device number
12500 12724 * Bits 16 - 23 8-bit Bus number
12501 12725 */
12502 12726 pci_info->BusNumber = (reg_data[0] & 0x00FF0000) >> 16;
12503 12727 pci_info->DeviceNumber = (reg_data[0] & 0x0000F800) >> 11;
12504 12728 pci_info->FunctionNumber = (reg_data[0] & 0x00000700) >> 8;
12505 12729 ddi_prop_free((void *)reg_data);
12506 12730 } else {
12507 12731 /*
12508 12732 * If we can't determine the PCI info then we fill in FF's for
12509 12733 * the data to indicate this.
12510 12734 */
12511 12735 pci_info->BusNumber = 0xFFFFFFFF;
12512 12736 pci_info->DeviceNumber = 0xFF;
12513 12737 pci_info->FunctionNumber = 0xFF;
12514 12738 }
12515 12739
12516 12740 /*
12517 12741 * Now get the interrupt vector and the pci header. The vector can
12518 12742 * only be 0 right now. The header is the first 256 bytes of config
12519 12743 * space.
12520 12744 */
12521 12745 pci_info->InterruptVector = 0;
12522 12746 for (i = 0; i < sizeof (pci_info->PciHeader); i++) {
12523 12747 pci_info->PciHeader[i] = pci_config_get8(mpt->m_config_handle,
12524 12748 i);
12525 12749 }
12526 12750 }
12527 12751
12528 12752 static int
12529 12753 mptsas_reg_access(mptsas_t *mpt, mptsas_reg_access_t *data, int mode)
12530 12754 {
12531 12755 int status = 0;
12532 12756 mptsas_reg_access_t driverdata;
12533 12757
12534 12758 mutex_enter(&mpt->m_mutex);
12535 12759 if (ddi_copyin(data, &driverdata, sizeof (driverdata), mode) == 0) {
12536 12760 switch (driverdata.Command) {
12537 12761 /*
12538 12762 * IO access is not supported.
12539 12763 */
12540 12764 case REG_IO_READ:
12541 12765 case REG_IO_WRITE:
12542 12766 mptsas_log(mpt, CE_WARN, "IO access is not "
12543 12767 "supported. Use memory access.");
12544 12768 status = EINVAL;
12545 12769 break;
12546 12770
12547 12771 case REG_MEM_READ:
12548 12772 driverdata.RegData = ddi_get32(mpt->m_datap,
12549 12773 (uint32_t *)(void *)mpt->m_reg +
12550 12774 driverdata.RegOffset);
12551 12775 if (ddi_copyout(&driverdata.RegData,
12552 12776 &data->RegData,
12553 12777 sizeof (driverdata.RegData), mode) != 0) {
12554 12778 mptsas_log(mpt, CE_WARN, "Register "
12555 12779 "Read Failed");
12556 12780 status = EFAULT;
12557 12781 }
12558 12782 break;
12559 12783
12560 12784 case REG_MEM_WRITE:
12561 12785 ddi_put32(mpt->m_datap,
12562 12786 (uint32_t *)(void *)mpt->m_reg +
12563 12787 driverdata.RegOffset,
12564 12788 driverdata.RegData);
12565 12789 break;
12566 12790
12567 12791 default:
12568 12792 status = EINVAL;
12569 12793 break;
12570 12794 }
12571 12795 } else {
12572 12796 status = EFAULT;
12573 12797 }
1432 lines elided
12574 12798
12575 12799 mutex_exit(&mpt->m_mutex);
12576 12800 return (status);
12577 12801 }
12578 12802
12579 12803 static int
12580 12804 led_control(mptsas_t *mpt, intptr_t data, int mode)
12581 12805 {
12582 12806 int ret = 0;
12583 12807 mptsas_led_control_t lc;
12584 - mptsas_target_t *ptgt;
12808 + mptsas_enclosure_t *mep;
12809 + uint16_t slotidx;
12585 12810
12586 12811 if (ddi_copyin((void *)data, &lc, sizeof (lc), mode) != 0) {
12587 12812 return (EFAULT);
12588 12813 }
12589 12814
12590 12815 if ((lc.Command != MPTSAS_LEDCTL_FLAG_SET &&
12591 12816 lc.Command != MPTSAS_LEDCTL_FLAG_GET) ||
12592 12817 lc.Led < MPTSAS_LEDCTL_LED_MIN ||
12593 12818 lc.Led > MPTSAS_LEDCTL_LED_MAX ||
12594 12819 (lc.Command == MPTSAS_LEDCTL_FLAG_SET && lc.LedStatus != 0 &&
12595 12820 lc.LedStatus != 1)) {
12596 12821 return (EINVAL);
12597 12822 }
12598 12823
12599 12824 if ((lc.Command == MPTSAS_LEDCTL_FLAG_SET && (mode & FWRITE) == 0) ||
12600 12825 (lc.Command == MPTSAS_LEDCTL_FLAG_GET && (mode & FREAD) == 0))
12601 12826 return (EACCES);
12602 12827
12603 - /* Locate the target we're interrogating... */
12828 + /* Locate the required enclosure */
12604 12829 mutex_enter(&mpt->m_mutex);
12605 - ptgt = refhash_linear_search(mpt->m_targets,
12606 - mptsas_target_eval_slot, &lc);
12607 - if (ptgt == NULL) {
12608 - /* We could not find a target for that enclosure/slot. */
12830 + mep = mptsas_enc_lookup(mpt, lc.Enclosure);
12831 + if (mep == NULL) {
12609 12832 mutex_exit(&mpt->m_mutex);
12610 12833 return (ENOENT);
12611 12834 }
12612 12835
12836 + if (lc.Slot < mep->me_fslot) {
12837 + mutex_exit(&mpt->m_mutex);
12838 + return (ENOENT);
12839 + }
12840 +
12841 + /*
12842 + * Slots on the enclosure are maintained in an array where me_fslot
12843 + * is entry zero. We normalize the requested slot.
12844 + */
12845 + slotidx = lc.Slot - mep->me_fslot;
12846 + if (slotidx >= mep->me_nslots) {
12847 + mutex_exit(&mpt->m_mutex);
12848 + return (ENOENT);
12849 + }
12850 +
12613 12851 if (lc.Command == MPTSAS_LEDCTL_FLAG_SET) {
12614 12852 /* Update our internal LED state. */
12615 - ptgt->m_led_status &= ~(1 << (lc.Led - 1));
12616 - ptgt->m_led_status |= lc.LedStatus << (lc.Led - 1);
12853 + mep->me_slotleds[slotidx] &= ~(1 << (lc.Led - 1));
12854 + mep->me_slotleds[slotidx] |= lc.LedStatus << (lc.Led - 1);
12617 12855
12618 12856 /* Flush it to the controller. */
12619 - ret = mptsas_flush_led_status(mpt, ptgt);
12857 + ret = mptsas_flush_led_status(mpt, mep, slotidx);
12620 12858 mutex_exit(&mpt->m_mutex);
12621 12859 return (ret);
12622 12860 }
12623 12861
12624 12862 /* Return our internal LED state. */
12625 - lc.LedStatus = (ptgt->m_led_status >> (lc.Led - 1)) & 1;
12863 + lc.LedStatus = (mep->me_slotleds[slotidx] >> (lc.Led - 1)) & 1;
12626 12864 mutex_exit(&mpt->m_mutex);
12627 12865
12628 12866 if (ddi_copyout(&lc, (void *)data, sizeof (lc), mode) != 0) {
12629 12867 return (EFAULT);
12630 12868 }
12631 12869
12632 12870 return (0);
12633 12871 }
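The per-slot LED bookkeeping above packs one bit per LED into a small integer, using (Led - 1) as the bit position. A minimal user-level sketch of the same set/get arithmetic follows; the LED numbering here is assumed to mirror the MPTSAS_LEDCTL_LED_* values and is not taken from the driver headers.

	#include <stdio.h>
	#include <stdint.h>

	#define	LED_IDENT	1	/* assumed MPTSAS_LEDCTL_LED_IDENT */
	#define	LED_FAIL	2	/* assumed MPTSAS_LEDCTL_LED_FAIL */
	#define	LED_OK2RM	3	/* assumed MPTSAS_LEDCTL_LED_OK2RM */

	int
	main(void)
	{
		uint8_t slotled = 0;
		uint8_t status = 1;	/* 1 = on, 0 = off */

		/* SET: clear the LED's bit, then OR in the requested status. */
		slotled &= ~(1 << (LED_OK2RM - 1));
		slotled |= status << (LED_OK2RM - 1);

		/* GET: shift the LED's bit down and mask it off. */
		(void) printf("ok2rm=%d fail=%d\n",
		    (slotled >> (LED_OK2RM - 1)) & 1,
		    (slotled >> (LED_FAIL - 1)) & 1);	/* prints ok2rm=1 fail=0 */
		return (0);
	}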
12634 12872
12635 12873 static int
12636 12874 get_disk_info(mptsas_t *mpt, intptr_t data, int mode)
12637 12875 {
12638 12876 uint16_t i = 0;
12639 12877 uint16_t count = 0;
12640 12878 int ret = 0;
12641 12879 mptsas_target_t *ptgt;
12642 12880 mptsas_disk_info_t *di;
12643 12881 STRUCT_DECL(mptsas_get_disk_info, gdi);
12644 12882
12645 12883 if ((mode & FREAD) == 0)
12646 12884 return (EACCES);
12647 12885
12648 12886 STRUCT_INIT(gdi, get_udatamodel());
12649 12887
12650 12888 if (ddi_copyin((void *)data, STRUCT_BUF(gdi), STRUCT_SIZE(gdi),
12651 12889 mode) != 0) {
12652 12890 return (EFAULT);
12653 12891 }
12654 12892
12655 12893 /* Find out how many targets there are. */
12656 12894 mutex_enter(&mpt->m_mutex);
12657 12895 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
12658 12896 ptgt = refhash_next(mpt->m_targets, ptgt)) {
12659 12897 count++;
12660 12898 }
12661 12899 mutex_exit(&mpt->m_mutex);
12662 12900
12663 12901 /*
12664 12902 * If we haven't been asked to copy out information on each target,
12665 12903 * then just return the count.
12666 12904 */
12667 12905 STRUCT_FSET(gdi, DiskCount, count);
12668 12906 if (STRUCT_FGETP(gdi, PtrDiskInfoArray) == NULL)
12669 12907 goto copy_out;
12670 12908
12671 12909 /*
12672 12910 * If we haven't been given a large enough buffer to copy out into,
12673 12911 * let the caller know.
12674 12912 */
12675 12913 if (STRUCT_FGET(gdi, DiskInfoArraySize) <
12676 12914 count * sizeof (mptsas_disk_info_t)) {
12677 12915 ret = ENOSPC;
12678 12916 goto copy_out;
12679 12917 }
12680 12918
12681 12919 di = kmem_zalloc(count * sizeof (mptsas_disk_info_t), KM_SLEEP);
12682 12920
12683 12921 mutex_enter(&mpt->m_mutex);
12684 12922 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
12685 12923 ptgt = refhash_next(mpt->m_targets, ptgt)) {
12686 12924 if (i >= count) {
12687 12925 /*
12688 12926 * The number of targets changed while we weren't
12689 12927 * looking, so give up.
12690 12928 */
12691 12929 refhash_rele(mpt->m_targets, ptgt);
12692 12930 mutex_exit(&mpt->m_mutex);
12693 12931 kmem_free(di, count * sizeof (mptsas_disk_info_t));
12694 12932 return (EAGAIN);
12695 12933 }
12696 12934 di[i].Instance = mpt->m_instance;
12697 12935 di[i].Enclosure = ptgt->m_enclosure;
12698 12936 di[i].Slot = ptgt->m_slot_num;
12699 12937 di[i].SasAddress = ptgt->m_addr.mta_wwn;
12700 12938 i++;
12701 12939 }
12702 12940 mutex_exit(&mpt->m_mutex);
12703 12941 STRUCT_FSET(gdi, DiskCount, i);
12704 12942
12705 12943 /* Copy out the disk information to the caller. */
12706 12944 if (ddi_copyout((void *)di, STRUCT_FGETP(gdi, PtrDiskInfoArray),
12707 12945 i * sizeof (mptsas_disk_info_t), mode) != 0) {
12708 12946 ret = EFAULT;
12709 12947 }
12710 12948
12711 12949 kmem_free(di, count * sizeof (mptsas_disk_info_t));
12712 12950
12713 12951 copy_out:
12714 12952 if (ddi_copyout(STRUCT_BUF(gdi), (void *)data, STRUCT_SIZE(gdi),
12715 12953 mode) != 0) {
12716 12954 ret = EFAULT;
12717 12955 }
12718 12956
12719 12957 return (ret);
12720 12958 }
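get_disk_info() above uses a common two-pass pattern: count the targets under the lock, drop the lock to allocate, then re-walk the set and bail out with EAGAIN if it grew in the meantime. Below is a minimal user-level sketch of that pattern with a plain counter standing in for the target hash; all names are hypothetical and the locking is only implied by comments.

	#include <stdio.h>
	#include <stdlib.h>
	#include <errno.h>

	/* Pretend this is the target list protected by the instance mutex. */
	static int ntargets = 3;

	static int
	snapshot_targets(int **listp, int *countp)
	{
		int count, i, *buf;

		count = ntargets;			/* pass 1: count under the lock */
		buf = calloc(count, sizeof (int));	/* allocate with the lock dropped */
		if (buf == NULL)
			return (ENOMEM);

		ntargets = 4;		/* simulate a hotplug racing with us */

		for (i = 0; i < ntargets; i++) {	/* pass 2: fill under the lock */
			if (i >= count) {		/* the set grew: give up */
				free(buf);
				return (EAGAIN);
			}
			buf[i] = i;
		}
		*listp = buf;
		*countp = i;
		return (0);
	}

	int
	main(void)
	{
		int *list, n;

		/* With the simulated hotplug above, this returns EAGAIN. */
		(void) printf("snapshot_targets() = %d\n",
		    snapshot_targets(&list, &n));
		return (0);
	}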
12721 12959
12722 12960 static int
12723 12961 mptsas_ioctl(dev_t dev, int cmd, intptr_t data, int mode, cred_t *credp,
12724 12962 int *rval)
12725 12963 {
12726 12964 int status = 0;
12727 12965 mptsas_t *mpt;
12728 12966 mptsas_update_flash_t flashdata;
12729 12967 mptsas_pass_thru_t passthru_data;
12730 12968 mptsas_adapter_data_t adapter_data;
12731 12969 mptsas_pci_info_t pci_info;
12732 12970 int copylen;
12733 12971
12734 12972 int iport_flag = 0;
12735 12973 dev_info_t *dip = NULL;
12736 12974 mptsas_phymask_t phymask = 0;
12737 12975 struct devctl_iocdata *dcp = NULL;
12738 12976 char *addr = NULL;
12739 12977 mptsas_target_t *ptgt = NULL;
12740 12978
12741 12979 *rval = MPTIOCTL_STATUS_GOOD;
12742 12980 if (secpolicy_sys_config(credp, B_FALSE) != 0) {
12743 12981 return (EPERM);
12744 12982 }
12745 12983
12746 12984 mpt = ddi_get_soft_state(mptsas_state, MINOR2INST(getminor(dev)));
12747 12985 if (mpt == NULL) {
12748 12986 /*
12749 12987 		 * Called through an iport node; look up its dip and driver state.
12750 12988 */
12751 12989 iport_flag = 1;
12752 12990 dip = mptsas_get_dip_from_dev(dev, &phymask);
12753 12991 if (dip == NULL) {
12754 12992 return (ENXIO);
12755 12993 }
12756 12994 mpt = DIP2MPT(dip);
121 lines elided
12757 12995 }
12758 12996 /* Make sure power level is D0 before accessing registers */
12759 12997 mutex_enter(&mpt->m_mutex);
12760 12998 if (mpt->m_options & MPTSAS_OPT_PM) {
12761 12999 (void) pm_busy_component(mpt->m_dip, 0);
12762 13000 if (mpt->m_power_level != PM_LEVEL_D0) {
12763 13001 mutex_exit(&mpt->m_mutex);
12764 13002 if (pm_raise_power(mpt->m_dip, 0, PM_LEVEL_D0) !=
12765 13003 DDI_SUCCESS) {
12766 13004 mptsas_log(mpt, CE_WARN,
12767 - "mptsas%d: mptsas_ioctl: Raise power "
12768 - "request failed.", mpt->m_instance);
13005 + "raise power request failed");
12769 13006 (void) pm_idle_component(mpt->m_dip, 0);
12770 13007 return (ENXIO);
12771 13008 }
12772 13009 } else {
12773 13010 mutex_exit(&mpt->m_mutex);
12774 13011 }
12775 13012 } else {
12776 13013 mutex_exit(&mpt->m_mutex);
12777 13014 }
12778 13015
12779 13016 if (iport_flag) {
12780 13017 status = scsi_hba_ioctl(dev, cmd, data, mode, credp, rval);
12781 13018 if (status != 0) {
12782 13019 goto out;
12783 13020 }
12784 13021 /*
12785 13022 		 * The following code controls the OK2RM LED; it doesn't affect
12786 13023 * the ioctl return status.
12787 13024 */
12788 13025 if ((cmd == DEVCTL_DEVICE_ONLINE) ||
12789 13026 (cmd == DEVCTL_DEVICE_OFFLINE)) {
12790 13027 if (ndi_dc_allochdl((void *)data, &dcp) !=
12791 13028 NDI_SUCCESS) {
13 lines elided
12792 13029 goto out;
12793 13030 }
12794 13031 addr = ndi_dc_getaddr(dcp);
12795 13032 ptgt = mptsas_addr_to_ptgt(mpt, addr, phymask);
12796 13033 if (ptgt == NULL) {
12797 13034 NDBG14(("mptsas_ioctl led control: tgt %s not "
12798 13035 "found", addr));
12799 13036 ndi_dc_freehdl(dcp);
12800 13037 goto out;
12801 13038 }
12802 - mutex_enter(&mpt->m_mutex);
12803 - if (cmd == DEVCTL_DEVICE_ONLINE) {
12804 - ptgt->m_tgt_unconfigured = 0;
12805 - } else if (cmd == DEVCTL_DEVICE_OFFLINE) {
12806 - ptgt->m_tgt_unconfigured = 1;
12807 - }
12808 - if (cmd == DEVCTL_DEVICE_OFFLINE) {
12809 - ptgt->m_led_status |=
12810 - (1 << (MPTSAS_LEDCTL_LED_OK2RM - 1));
12811 - } else {
12812 - ptgt->m_led_status &=
12813 - ~(1 << (MPTSAS_LEDCTL_LED_OK2RM - 1));
12814 - }
12815 - (void) mptsas_flush_led_status(mpt, ptgt);
12816 - mutex_exit(&mpt->m_mutex);
12817 13039 ndi_dc_freehdl(dcp);
12818 13040 }
12819 13041 goto out;
12820 13042 }
12821 13043 switch (cmd) {
12822 13044 case MPTIOCTL_GET_DISK_INFO:
12823 13045 status = get_disk_info(mpt, data, mode);
12824 13046 break;
12825 13047 case MPTIOCTL_LED_CONTROL:
12826 13048 status = led_control(mpt, data, mode);
12827 13049 break;
12828 13050 case MPTIOCTL_UPDATE_FLASH:
12829 13051 if (ddi_copyin((void *)data, &flashdata,
12830 13052 sizeof (struct mptsas_update_flash), mode)) {
12831 13053 status = EFAULT;
12832 13054 break;
12833 13055 }
12834 13056
12835 13057 mutex_enter(&mpt->m_mutex);
12836 13058 if (mptsas_update_flash(mpt,
12837 13059 (caddr_t)(long)flashdata.PtrBuffer,
12838 13060 flashdata.ImageSize, flashdata.ImageType, mode)) {
12839 13061 status = EFAULT;
12840 13062 }
12841 13063
12842 13064 /*
12843 13065 * Reset the chip to start using the new
12844 13066 		 * firmware. Reset even if the update failed.
12845 13067 */
12846 13068 mpt->m_softstate &= ~MPTSAS_SS_MSG_UNIT_RESET;
12847 13069 if (mptsas_restart_ioc(mpt) == DDI_FAILURE) {
12848 13070 status = EFAULT;
12849 13071 }
12850 13072 mutex_exit(&mpt->m_mutex);
12851 13073 break;
12852 13074 case MPTIOCTL_PASS_THRU:
12853 13075 /*
12854 13076 * The user has requested to pass through a command to
12855 13077 * be executed by the MPT firmware. Call our routine
12856 13078 * which does this. Only allow one passthru IOCTL at
12857 13079 		 * a time. Other threads will block on
12858 13080 		 * m_passthru_mutex, which is an adaptive mutex.
12859 13081 */
12860 13082 if (ddi_copyin((void *)data, &passthru_data,
12861 13083 sizeof (mptsas_pass_thru_t), mode)) {
12862 13084 status = EFAULT;
12863 13085 break;
12864 13086 }
12865 13087 mutex_enter(&mpt->m_passthru_mutex);
12866 13088 mutex_enter(&mpt->m_mutex);
12867 13089 status = mptsas_pass_thru(mpt, &passthru_data, mode);
12868 13090 mutex_exit(&mpt->m_mutex);
12869 13091 mutex_exit(&mpt->m_passthru_mutex);
12870 13092
12871 13093 break;
12872 13094 case MPTIOCTL_GET_ADAPTER_DATA:
12873 13095 /*
12874 13096 * The user has requested to read adapter data. Call
12875 13097 * our routine which does this.
12876 13098 */
12877 13099 bzero(&adapter_data, sizeof (mptsas_adapter_data_t));
12878 13100 if (ddi_copyin((void *)data, (void *)&adapter_data,
12879 13101 sizeof (mptsas_adapter_data_t), mode)) {
12880 13102 status = EFAULT;
12881 13103 break;
12882 13104 }
12883 13105 if (adapter_data.StructureLength >=
12884 13106 sizeof (mptsas_adapter_data_t)) {
12885 13107 adapter_data.StructureLength = (uint32_t)
12886 13108 sizeof (mptsas_adapter_data_t);
12887 13109 copylen = sizeof (mptsas_adapter_data_t);
12888 13110 mutex_enter(&mpt->m_mutex);
12889 13111 mptsas_read_adapter_data(mpt, &adapter_data);
12890 13112 mutex_exit(&mpt->m_mutex);
12891 13113 } else {
12892 13114 adapter_data.StructureLength = (uint32_t)
12893 13115 sizeof (mptsas_adapter_data_t);
12894 13116 copylen = sizeof (adapter_data.StructureLength);
12895 13117 *rval = MPTIOCTL_STATUS_LEN_TOO_SHORT;
12896 13118 }
12897 13119 if (ddi_copyout((void *)(&adapter_data), (void *)data,
12898 13120 copylen, mode) != 0) {
12899 13121 status = EFAULT;
12900 13122 }
12901 13123 break;
12902 13124 case MPTIOCTL_GET_PCI_INFO:
12903 13125 /*
12904 13126 * The user has requested to read pci info. Call
12905 13127 * our routine which does this.
12906 13128 */
12907 13129 bzero(&pci_info, sizeof (mptsas_pci_info_t));
12908 13130 mutex_enter(&mpt->m_mutex);
12909 13131 mptsas_read_pci_info(mpt, &pci_info);
12910 13132 mutex_exit(&mpt->m_mutex);
12911 13133 if (ddi_copyout((void *)(&pci_info), (void *)data,
12912 13134 sizeof (mptsas_pci_info_t), mode) != 0) {
12913 13135 status = EFAULT;
12914 13136 }
12915 13137 break;
12916 13138 case MPTIOCTL_RESET_ADAPTER:
12917 13139 mutex_enter(&mpt->m_mutex);
12918 13140 mpt->m_softstate &= ~MPTSAS_SS_MSG_UNIT_RESET;
12919 13141 if ((mptsas_restart_ioc(mpt)) == DDI_FAILURE) {
12920 13142 mptsas_log(mpt, CE_WARN, "reset adapter IOCTL "
12921 13143 "failed");
12922 13144 status = EFAULT;
12923 13145 }
12924 13146 mutex_exit(&mpt->m_mutex);
12925 13147 break;
12926 13148 case MPTIOCTL_DIAG_ACTION:
12927 13149 /*
12928 13150 * The user has done a diag buffer action. Call our
12929 13151 * routine which does this. Only allow one diag action
12930 13152 		 * at a time.
12931 13153 */
12932 13154 mutex_enter(&mpt->m_mutex);
12933 13155 if (mpt->m_diag_action_in_progress) {
12934 13156 mutex_exit(&mpt->m_mutex);
12935 13157 return (EBUSY);
12936 13158 }
12937 13159 mpt->m_diag_action_in_progress = 1;
12938 13160 status = mptsas_diag_action(mpt,
12939 13161 (mptsas_diag_action_t *)data, mode);
12940 13162 mpt->m_diag_action_in_progress = 0;
12941 13163 mutex_exit(&mpt->m_mutex);
12942 13164 break;
12943 13165 case MPTIOCTL_EVENT_QUERY:
12944 13166 /*
12945 13167 * The user has done an event query. Call our routine
12946 13168 * which does this.
12947 13169 */
12948 13170 status = mptsas_event_query(mpt,
12949 13171 (mptsas_event_query_t *)data, mode, rval);
12950 13172 break;
12951 13173 case MPTIOCTL_EVENT_ENABLE:
12952 13174 /*
12953 13175 * The user has done an event enable. Call our routine
12954 13176 * which does this.
12955 13177 */
12956 13178 status = mptsas_event_enable(mpt,
12957 13179 (mptsas_event_enable_t *)data, mode, rval);
12958 13180 break;
12959 13181 case MPTIOCTL_EVENT_REPORT:
12960 13182 /*
12961 13183 * The user has done an event report. Call our routine
12962 13184 * which does this.
12963 13185 */
12964 13186 status = mptsas_event_report(mpt,
12965 13187 (mptsas_event_report_t *)data, mode, rval);
12966 13188 break;
12967 13189 case MPTIOCTL_REG_ACCESS:
12968 13190 /*
12969 13191 * The user has requested register access. Call our
12970 13192 * routine which does this.
12971 13193 */
12972 13194 status = mptsas_reg_access(mpt,
12973 13195 (mptsas_reg_access_t *)data, mode);
12974 13196 break;
12975 13197 default:
12976 13198 status = scsi_hba_ioctl(dev, cmd, data, mode, credp,
12977 13199 rval);
12978 13200 break;
12979 13201 }
12980 13202
12981 13203 out:
12982 13204 return (status);
12983 13205 }
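Several of the ioctls above, MPTIOCTL_GET_ADAPTER_DATA in particular, negotiate the reply size through a StructureLength field: a caller whose buffer is too small gets back only the required length together with MPTIOCTL_STATUS_LEN_TOO_SHORT. A minimal user-level sketch of that negotiation, using a simplified stand-in structure (fields and sizes are hypothetical, not the real mptsas_adapter_data_t):

	#include <stdio.h>
	#include <string.h>
	#include <stdint.h>

	/* Simplified stand-in for mptsas_adapter_data_t. */
	typedef struct fake_adapter_data {
		uint32_t StructureLength;
		uint32_t PCIDeviceHwId;
		uint32_t MpiFirmwareVersion;
	} fake_adapter_data_t;

	int
	main(void)
	{
		fake_adapter_data_t req;
		size_t copylen;
		int len_too_short = 0;

		(void) memset(&req, 0, sizeof (req));
		req.StructureLength = 4;	/* caller offered only 4 bytes */

		if (req.StructureLength >= sizeof (fake_adapter_data_t)) {
			/* Big enough: fill everything and copy it all out. */
			req.StructureLength = (uint32_t)sizeof (fake_adapter_data_t);
			copylen = sizeof (fake_adapter_data_t);
		} else {
			/* Too small: report the required size, copy only that field. */
			req.StructureLength = (uint32_t)sizeof (fake_adapter_data_t);
			copylen = sizeof (req.StructureLength);
			len_too_short = 1;
		}
		(void) printf("copy %zu bytes, need %u, too_short=%d\n",
		    copylen, req.StructureLength, len_too_short);
		return (0);
	}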
12984 13206
12985 13207 int
12986 13208 mptsas_restart_ioc(mptsas_t *mpt)
12987 13209 {
12988 13210 int rval = DDI_SUCCESS;
12989 13211 mptsas_target_t *ptgt = NULL;
12990 13212
12991 13213 ASSERT(mutex_owned(&mpt->m_mutex));
12992 13214
166 lines elided
12993 13215 /*
12994 13216 * Set a flag telling I/O path that we're processing a reset. This is
12995 13217 * needed because after the reset is complete, the hash table still
12996 13218 * needs to be rebuilt. If I/Os are started before the hash table is
12997 13219 * rebuilt, I/O errors will occur. This flag allows I/Os to be marked
12998 13220 * so that they can be retried.
12999 13221 */
13000 13222 mpt->m_in_reset = TRUE;
13001 13223
13002 13224 /*
13003 - * Wait until all the allocated sense data buffers for DMA are freed.
13004 - */
13005 - while (mpt->m_extreq_sense_refcount > 0)
13006 - cv_wait(&mpt->m_extreq_sense_refcount_cv, &mpt->m_mutex);
13007 -
13008 - /*
13009 13225 * Set all throttles to HOLD
13010 13226 */
13011 13227 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
13012 13228 ptgt = refhash_next(mpt->m_targets, ptgt)) {
13013 13229 mptsas_set_throttle(mpt, ptgt, HOLD_THROTTLE);
13014 13230 }
13015 13231
13016 13232 /*
13017 13233 * Disable interrupts
13018 13234 */
13019 13235 MPTSAS_DISABLE_INTR(mpt);
13020 13236
13021 13237 /*
13022 13238 * Abort all commands: outstanding commands, commands in waitq and
13023 13239 * tx_waitq.
13024 13240 */
13025 13241 mptsas_flush_hba(mpt);
13026 13242
13027 13243 /*
13028 13244 * Reinitialize the chip.
13029 13245 */
13030 13246 if (mptsas_init_chip(mpt, FALSE) == DDI_FAILURE) {
13031 13247 rval = DDI_FAILURE;
13032 13248 }
13033 13249
13034 13250 /*
13035 13251 * Enable interrupts again
13036 13252 */
13037 13253 MPTSAS_ENABLE_INTR(mpt);
13038 13254
13039 13255 /*
13040 13256 * If mptsas_init_chip was successful, update the driver data.
13041 13257 */
13042 13258 if (rval == DDI_SUCCESS) {
13043 13259 mptsas_update_driver_data(mpt);
13044 13260 }
13045 13261
13046 13262 /*
13047 13263 * Reset the throttles
13048 13264 */
13049 13265 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
13050 13266 ptgt = refhash_next(mpt->m_targets, ptgt)) {
13051 13267 mptsas_set_throttle(mpt, ptgt, MAX_THROTTLE);
13052 13268 }
13053 13269
13054 13270 mptsas_doneq_empty(mpt);
13055 13271 mptsas_restart_hba(mpt);
13056 13272
13057 13273 if (rval != DDI_SUCCESS) {
13058 13274 mptsas_fm_ereport(mpt, DDI_FM_DEVICE_NO_RESPONSE);
13059 13275 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_LOST);
13060 13276 }
13061 13277
13062 13278 /*
13063 13279 * Clear the reset flag so that I/Os can continue.
13064 13280 */
13065 13281 mpt->m_in_reset = FALSE;
13066 13282
13067 13283 return (rval);
13068 13284 }
13069 13285
13070 13286 static int
13071 13287 mptsas_init_chip(mptsas_t *mpt, int first_time)
13072 13288 {
13073 13289 ddi_dma_cookie_t cookie;
13074 13290 uint32_t i;
13075 13291 int rval;
13076 13292
13077 13293 /*
13078 13294 * Check to see if the firmware image is valid
13079 13295 */
13080 13296 if (ddi_get32(mpt->m_datap, &mpt->m_reg->HostDiagnostic) &
13081 13297 MPI2_DIAG_FLASH_BAD_SIG) {
13082 13298 mptsas_log(mpt, CE_WARN, "mptsas bad flash signature!");
13083 13299 goto fail;
13084 13300 }
13085 13301
13086 13302 /*
13087 13303 * Reset the chip
13088 13304 */
13089 13305 rval = mptsas_ioc_reset(mpt, first_time);
13090 13306 if (rval == MPTSAS_RESET_FAIL) {
13091 13307 mptsas_log(mpt, CE_WARN, "hard reset failed!");
13092 13308 goto fail;
13093 13309 }
13094 13310
13095 13311 if ((rval == MPTSAS_SUCCESS_MUR) && (!first_time)) {
13096 13312 goto mur;
13097 13313 }
13098 13314 /*
13099 13315 * Setup configuration space
13100 13316 */
13101 13317 if (mptsas_config_space_init(mpt) == FALSE) {
13102 13318 mptsas_log(mpt, CE_WARN, "mptsas_config_space_init "
13103 13319 "failed!");
13104 13320 goto fail;
13105 13321 }
13106 13322
88 lines elided
13107 13323 /*
13108 13324 * IOC facts can change after a diag reset so all buffers that are
13109 13325 * based on these numbers must be de-allocated and re-allocated. Get
13110 13326 * new IOC facts each time chip is initialized.
13111 13327 */
13112 13328 if (mptsas_ioc_get_facts(mpt) == DDI_FAILURE) {
13113 13329 mptsas_log(mpt, CE_WARN, "mptsas_ioc_get_facts failed");
13114 13330 goto fail;
13115 13331 }
13116 13332
13117 - if (mptsas_alloc_active_slots(mpt, KM_SLEEP)) {
13118 - goto fail;
13333 + if (first_time) {
13334 + if (mptsas_alloc_active_slots(mpt, KM_SLEEP)) {
13335 + goto fail;
13336 + }
13337 + /*
13338 + * Allocate request message frames, reply free queue, reply
13339 + * descriptor post queue, and reply message frames using
13340 + * latest IOC facts.
13341 + */
13342 + if (mptsas_alloc_request_frames(mpt) == DDI_FAILURE) {
13343 + mptsas_log(mpt, CE_WARN,
13344 + "mptsas_alloc_request_frames failed");
13345 + goto fail;
13346 + }
13347 + if (mptsas_alloc_sense_bufs(mpt) == DDI_FAILURE) {
13348 + mptsas_log(mpt, CE_WARN,
13349 + "mptsas_alloc_sense_bufs failed");
13350 + goto fail;
13351 + }
13352 + if (mptsas_alloc_free_queue(mpt) == DDI_FAILURE) {
13353 + mptsas_log(mpt, CE_WARN,
13354 + "mptsas_alloc_free_queue failed!");
13355 + goto fail;
13356 + }
13357 + if (mptsas_alloc_post_queue(mpt) == DDI_FAILURE) {
13358 + mptsas_log(mpt, CE_WARN,
13359 + "mptsas_alloc_post_queue failed!");
13360 + goto fail;
13361 + }
13362 + if (mptsas_alloc_reply_frames(mpt) == DDI_FAILURE) {
13363 + mptsas_log(mpt, CE_WARN,
13364 + "mptsas_alloc_reply_frames failed!");
13365 + goto fail;
13366 + }
13119 13367 }
13120 - /*
13121 - * Allocate request message frames, reply free queue, reply descriptor
13122 - * post queue, and reply message frames using latest IOC facts.
13123 - */
13124 - if (mptsas_alloc_request_frames(mpt) == DDI_FAILURE) {
13125 - mptsas_log(mpt, CE_WARN, "mptsas_alloc_request_frames failed");
13126 - goto fail;
13127 - }
13128 - if (mptsas_alloc_sense_bufs(mpt) == DDI_FAILURE) {
13129 - mptsas_log(mpt, CE_WARN, "mptsas_alloc_sense_bufs failed");
13130 - goto fail;
13131 - }
13132 - if (mptsas_alloc_free_queue(mpt) == DDI_FAILURE) {
13133 - mptsas_log(mpt, CE_WARN, "mptsas_alloc_free_queue failed!");
13134 - goto fail;
13135 - }
13136 - if (mptsas_alloc_post_queue(mpt) == DDI_FAILURE) {
13137 - mptsas_log(mpt, CE_WARN, "mptsas_alloc_post_queue failed!");
13138 - goto fail;
13139 - }
13140 - if (mptsas_alloc_reply_frames(mpt) == DDI_FAILURE) {
13141 - mptsas_log(mpt, CE_WARN, "mptsas_alloc_reply_frames failed!");
13142 - goto fail;
13143 - }
13144 -
13145 13368 mur:
13146 13369 /*
13147 13370 * Re-Initialize ioc to operational state
13148 13371 */
13149 13372 if (mptsas_ioc_init(mpt) == DDI_FAILURE) {
13150 13373 mptsas_log(mpt, CE_WARN, "mptsas_ioc_init failed");
13151 13374 goto fail;
13152 13375 }
13153 13376
13154 13377 mptsas_alloc_reply_args(mpt);
13155 13378
13156 13379 /*
13157 13380 * Initialize reply post index. Reply free index is initialized after
13158 13381 * the next loop.
13159 13382 */
13160 13383 mpt->m_post_index = 0;
13161 13384
13162 13385 /*
13163 13386 * Initialize the Reply Free Queue with the physical addresses of our
13164 13387 * reply frames.
13165 13388 */
13166 13389 cookie.dmac_address = mpt->m_reply_frame_dma_addr & 0xffffffffu;
13167 13390 for (i = 0; i < mpt->m_max_replies; i++) {
13168 13391 ddi_put32(mpt->m_acc_free_queue_hdl,
13169 13392 &((uint32_t *)(void *)mpt->m_free_queue)[i],
13170 13393 cookie.dmac_address);
13171 13394 cookie.dmac_address += mpt->m_reply_frame_size;
13172 13395 }
13173 13396 (void) ddi_dma_sync(mpt->m_dma_free_queue_hdl, 0, 0,
13174 13397 DDI_DMA_SYNC_FORDEV);
13175 13398
13176 13399 /*
13177 13400 * Initialize the reply free index to one past the last frame on the
13178 13401 * queue. This will signify that the queue is empty to start with.
13179 13402 */
13180 13403 mpt->m_free_index = i;
13181 13404 ddi_put32(mpt->m_datap, &mpt->m_reg->ReplyFreeHostIndex, i);
13182 13405
13183 13406 /*
13184 13407 * Initialize the reply post queue to 0xFFFFFFFF,0xFFFFFFFF's.
13185 13408 */
13186 13409 for (i = 0; i < mpt->m_post_queue_depth; i++) {
13187 13410 ddi_put64(mpt->m_acc_post_queue_hdl,
13188 13411 &((uint64_t *)(void *)mpt->m_post_queue)[i],
13189 13412 0xFFFFFFFFFFFFFFFF);
13190 13413 }
13191 13414 (void) ddi_dma_sync(mpt->m_dma_post_queue_hdl, 0, 0,
13192 13415 DDI_DMA_SYNC_FORDEV);
13193 13416
13194 13417 /*
13195 13418 * Enable ports
13196 13419 */
13197 13420 if (mptsas_ioc_enable_port(mpt) == DDI_FAILURE) {
13198 13421 mptsas_log(mpt, CE_WARN, "mptsas_ioc_enable_port failed");
13199 13422 goto fail;
13200 13423 }
13201 13424
13202 13425 /*
13203 13426 * enable events
13204 13427 */
13205 13428 if (mptsas_ioc_enable_event_notification(mpt)) {
13206 13429 mptsas_log(mpt, CE_WARN,
13207 13430 "mptsas_ioc_enable_event_notification failed");
13208 13431 goto fail;
13209 13432 }
13210 13433
13211 13434 /*
13212 13435 * We need checks in attach and these.
13213 13436 * chip_init is called in mult. places
13214 13437 */
13215 13438
13216 13439 if ((mptsas_check_dma_handle(mpt->m_dma_req_frame_hdl) !=
13217 13440 DDI_SUCCESS) ||
13218 13441 (mptsas_check_dma_handle(mpt->m_dma_req_sense_hdl) !=
13219 13442 DDI_SUCCESS) ||
13220 13443 (mptsas_check_dma_handle(mpt->m_dma_reply_frame_hdl) !=
13221 13444 DDI_SUCCESS) ||
13222 13445 (mptsas_check_dma_handle(mpt->m_dma_free_queue_hdl) !=
13223 13446 DDI_SUCCESS) ||
13224 13447 (mptsas_check_dma_handle(mpt->m_dma_post_queue_hdl) !=
13225 13448 DDI_SUCCESS) ||
13226 13449 (mptsas_check_dma_handle(mpt->m_hshk_dma_hdl) !=
13227 13450 DDI_SUCCESS)) {
13228 13451 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
13229 13452 goto fail;
13230 13453 }
13231 13454
13232 13455 /* Check all acc handles */
13233 13456 if ((mptsas_check_acc_handle(mpt->m_datap) != DDI_SUCCESS) ||
13234 13457 (mptsas_check_acc_handle(mpt->m_acc_req_frame_hdl) !=
13235 13458 DDI_SUCCESS) ||
13236 13459 (mptsas_check_acc_handle(mpt->m_acc_req_sense_hdl) !=
13237 13460 DDI_SUCCESS) ||
13238 13461 (mptsas_check_acc_handle(mpt->m_acc_reply_frame_hdl) !=
13239 13462 DDI_SUCCESS) ||
13240 13463 (mptsas_check_acc_handle(mpt->m_acc_free_queue_hdl) !=
13241 13464 DDI_SUCCESS) ||
13242 13465 (mptsas_check_acc_handle(mpt->m_acc_post_queue_hdl) !=
13243 13466 DDI_SUCCESS) ||
13244 13467 (mptsas_check_acc_handle(mpt->m_hshk_acc_hdl) !=
13245 13468 DDI_SUCCESS) ||
13246 13469 (mptsas_check_acc_handle(mpt->m_config_handle) !=
13247 13470 DDI_SUCCESS)) {
13248 13471 ddi_fm_service_impact(mpt->m_dip, DDI_SERVICE_UNAFFECTED);
13249 13472 goto fail;
13250 13473 }
13251 13474
13252 13475 return (DDI_SUCCESS);
13253 13476
13254 13477 fail:
13255 13478 return (DDI_FAILURE);
13256 13479 }
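The free-queue loop above seeds entry i with the DMA address of reply frame i (frames spaced m_reply_frame_size apart) and leaves the free index one past the last entry so the queue starts out fully stocked. A user-level sketch of the same arithmetic; the frame count, frame size, and base address below are made up, not the driver's real m_max_replies or m_reply_frame_size.

	#include <stdio.h>
	#include <stdint.h>

	#define	NREPLIES	8	/* hypothetical m_max_replies */
	#define	FRAME_SIZE	128	/* hypothetical m_reply_frame_size */

	int
	main(void)
	{
		uint32_t free_queue[NREPLIES];
		uint32_t dma = 0x10000000u;	/* pretend reply-frame DMA base */
		uint32_t i;

		for (i = 0; i < NREPLIES; i++) {
			free_queue[i] = dma;	/* one reply frame per entry */
			dma += FRAME_SIZE;
		}

		/* Free index one past the last frame means "all frames available". */
		(void) printf("initial free index %u, last entry 0x%x\n",
		    i, free_queue[NREPLIES - 1]);
		return (0);
	}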
13257 13480
13258 13481 static int
13259 13482 mptsas_get_pci_cap(mptsas_t *mpt)
13260 13483 {
13261 13484 ushort_t caps_ptr, cap, cap_count;
13262 13485
13263 13486 if (mpt->m_config_handle == NULL)
13264 13487 return (FALSE);
13265 13488 /*
13266 13489 * Check if capabilities list is supported and if so,
13267 13490 * get initial capabilities pointer and clear bits 0,1.
13268 13491 */
13269 13492 if (pci_config_get16(mpt->m_config_handle, PCI_CONF_STAT)
13270 13493 & PCI_STAT_CAP) {
13271 13494 caps_ptr = P2ALIGN(pci_config_get8(mpt->m_config_handle,
13272 13495 PCI_CONF_CAP_PTR), 4);
13273 13496 } else {
13274 13497 caps_ptr = PCI_CAP_NEXT_PTR_NULL;
13275 13498 }
13276 13499
13277 13500 /*
123 lines elided
13278 13501 * Walk capabilities if supported.
13279 13502 */
13280 13503 for (cap_count = 0; caps_ptr != PCI_CAP_NEXT_PTR_NULL; ) {
13281 13504
13282 13505 /*
13283 13506 * Check that we haven't exceeded the maximum number of
13284 13507 * capabilities and that the pointer is in a valid range.
13285 13508 */
13286 13509 if (++cap_count > 48) {
13287 13510 mptsas_log(mpt, CE_WARN,
13288 - "too many device capabilities.\n");
13511 + "too many device capabilities");
13289 13512 break;
13290 13513 }
13291 13514 if (caps_ptr < 64) {
13292 13515 mptsas_log(mpt, CE_WARN,
13293 - "capabilities pointer 0x%x out of range.\n",
13516 + "capabilities pointer 0x%x out of range",
13294 13517 caps_ptr);
13295 13518 break;
13296 13519 }
13297 13520
13298 13521 /*
13299 13522 * Get next capability and check that it is valid.
13300 13523 * For now, we only support power management.
13301 13524 */
13302 13525 cap = pci_config_get8(mpt->m_config_handle, caps_ptr);
13303 13526 switch (cap) {
13304 13527 case PCI_CAP_ID_PM:
13305 13528 mptsas_log(mpt, CE_NOTE,
13306 - "?mptsas%d supports power management.\n",
13307 - mpt->m_instance);
13529 + "power management supported");
13308 13530 mpt->m_options |= MPTSAS_OPT_PM;
13309 13531
13310 13532 /* Save PMCSR offset */
13311 13533 mpt->m_pmcsr_offset = caps_ptr + PCI_PMCSR;
13312 13534 break;
13313 13535 /*
13314 13536 * The following capabilities are valid. Any others
13315 13537 * will cause a message to be logged.
13316 13538 */
13317 13539 case PCI_CAP_ID_VPD:
13318 13540 case PCI_CAP_ID_MSI:
13319 13541 case PCI_CAP_ID_PCIX:
13320 13542 case PCI_CAP_ID_PCI_E:
13321 13543 case PCI_CAP_ID_MSI_X:
13322 13544 break;
13323 13545 default:
13324 13546 mptsas_log(mpt, CE_NOTE,
13325 - "?mptsas%d unrecognized capability "
13326 - "0x%x.\n", mpt->m_instance, cap);
13547 + "unrecognized capability 0x%x", cap);
13327 13548 break;
13328 13549 }
13329 13550
13330 13551 /*
13331 13552 * Get next capabilities pointer and clear bits 0,1.
13332 13553 */
13333 13554 caps_ptr = P2ALIGN(pci_config_get8(mpt->m_config_handle,
13334 13555 (caps_ptr + PCI_CAP_NEXT_PTR)), 4);
13335 13556 }
13336 13557 return (TRUE);
13337 13558 }
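mptsas_get_pci_cap() above walks the standard PCI capability list: start at the byte the capability pointer (config offset 0x34) points to, mask off the low two bits, read the capability ID at that offset and the next pointer at offset + 1, and repeat until the next pointer is zero. A minimal user-level sketch against a fake 256-byte config-space image; the capability offsets used here are made up.

	#include <stdio.h>
	#include <stdint.h>

	#define	PCI_CONF_CAP_PTR	0x34	/* standard capability pointer */
	#define	PCI_CAP_NEXT_PTR_NULL	0x00

	int
	main(void)
	{
		uint8_t cfg[256] = { 0 };
		uint8_t ptr;

		/* Hypothetical chain: PM capability at 0x40, MSI at 0x50. */
		cfg[PCI_CONF_CAP_PTR] = 0x40;
		cfg[0x40] = 0x01;		/* PCI_CAP_ID_PM */
		cfg[0x41] = 0x50;		/* next capability */
		cfg[0x50] = 0x05;		/* PCI_CAP_ID_MSI */
		cfg[0x51] = PCI_CAP_NEXT_PTR_NULL;

		for (ptr = cfg[PCI_CONF_CAP_PTR] & ~0x3;
		    ptr != PCI_CAP_NEXT_PTR_NULL;
		    ptr = cfg[ptr + 1] & ~0x3) {
			(void) printf("capability 0x%02x at offset 0x%02x\n",
			    cfg[ptr], ptr);
		}
		return (0);
	}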
13338 13559
13339 13560 static int
13340 13561 mptsas_init_pm(mptsas_t *mpt)
13341 13562 {
13342 13563 char pmc_name[16];
13343 13564 char *pmc[] = {
13344 13565 NULL,
13345 13566 "0=Off (PCI D3 State)",
13346 13567 "3=On (PCI D0 State)",
13347 13568 NULL
13348 13569 };
13349 13570 uint16_t pmcsr_stat;
13350 13571
13351 13572 if (mptsas_get_pci_cap(mpt) == FALSE) {
13352 13573 return (DDI_FAILURE);
13353 13574 }
17 lines elided
13354 13575 /*
13355 13576 	 * If the PCI capability list does not include power management,
13356 13577 	 * there is no need to register the pm-components property.
13357 13578 */
13358 13579 if (!(mpt->m_options & MPTSAS_OPT_PM))
13359 13580 return (DDI_SUCCESS);
13360 13581 /*
13361 13582 * If power management is supported by this chip, create
13362 13583 * pm-components property for the power management framework
13363 13584 */
13364 - (void) sprintf(pmc_name, "NAME=mptsas%d", mpt->m_instance);
13585 + (void) sprintf(pmc_name, "NAME=mpt_sas%d", mpt->m_instance);
13365 13586 pmc[0] = pmc_name;
13366 13587 if (ddi_prop_update_string_array(DDI_DEV_T_NONE, mpt->m_dip,
13367 13588 "pm-components", pmc, 3) != DDI_PROP_SUCCESS) {
13368 13589 mpt->m_options &= ~MPTSAS_OPT_PM;
13369 13590 mptsas_log(mpt, CE_WARN,
13370 - "mptsas%d: pm-component property creation failed.",
13371 - mpt->m_instance);
13591 + "pm-component property creation failed");
13372 13592 return (DDI_FAILURE);
13373 13593 }
13374 13594
13375 13595 /*
13376 13596 * Power on device.
13377 13597 */
13378 13598 (void) pm_busy_component(mpt->m_dip, 0);
13379 13599 pmcsr_stat = pci_config_get16(mpt->m_config_handle,
13380 13600 mpt->m_pmcsr_offset);
13381 13601 if ((pmcsr_stat & PCI_PMCSR_STATE_MASK) != PCI_PMCSR_D0) {
13382 - mptsas_log(mpt, CE_WARN, "mptsas%d: Power up the device",
13383 - mpt->m_instance);
13602 + mptsas_log(mpt, CE_WARN, "power up the device");
13384 13603 pci_config_put16(mpt->m_config_handle, mpt->m_pmcsr_offset,
13385 13604 PCI_PMCSR_D0);
13386 13605 }
13387 13606 if (pm_power_has_changed(mpt->m_dip, 0, PM_LEVEL_D0) != DDI_SUCCESS) {
13388 13607 mptsas_log(mpt, CE_WARN, "pm_power_has_changed failed");
13389 13608 return (DDI_FAILURE);
13390 13609 }
13391 13610 mpt->m_power_level = PM_LEVEL_D0;
13392 13611 /*
13393 13612 * Set pm idle delay.
13394 13613 */
13395 13614 mpt->m_pm_idle_delay = ddi_prop_get_int(DDI_DEV_T_ANY,
13396 13615 mpt->m_dip, 0, "mptsas-pm-idle-delay", MPTSAS_PM_IDLE_TIMEOUT);
13397 13616
13398 13617 return (DDI_SUCCESS);
13399 13618 }
13400 13619
13401 13620 static int
8 lines elided
13402 13621 mptsas_register_intrs(mptsas_t *mpt)
13403 13622 {
13404 13623 dev_info_t *dip;
13405 13624 int intr_types;
13406 13625
13407 13626 dip = mpt->m_dip;
13408 13627
13409 13628 /* Get supported interrupt types */
13410 13629 if (ddi_intr_get_supported_types(dip, &intr_types) != DDI_SUCCESS) {
13411 13630 mptsas_log(mpt, CE_WARN, "ddi_intr_get_supported_types "
13412 - "failed\n");
13631 + "failed");
13413 13632 return (FALSE);
13414 13633 }
13415 13634
13416 13635 NDBG6(("ddi_intr_get_supported_types() returned: 0x%x", intr_types));
13417 13636
13418 13637 /*
13419 13638 * Try MSI, but fall back to FIXED
13420 13639 */
13421 13640 if (mptsas_enable_msi && (intr_types & DDI_INTR_TYPE_MSI)) {
13422 13641 if (mptsas_add_intrs(mpt, DDI_INTR_TYPE_MSI) == DDI_SUCCESS) {
13423 13642 NDBG0(("Using MSI interrupt type"));
13424 13643 mpt->m_intr_type = DDI_INTR_TYPE_MSI;
13425 13644 return (TRUE);
13426 13645 }
13427 13646 }
13428 13647 if (intr_types & DDI_INTR_TYPE_FIXED) {
13429 13648 if (mptsas_add_intrs(mpt, DDI_INTR_TYPE_FIXED) == DDI_SUCCESS) {
13430 13649 NDBG0(("Using FIXED interrupt type"));
13431 13650 mpt->m_intr_type = DDI_INTR_TYPE_FIXED;
13432 13651 return (TRUE);
13433 13652 } else {
13434 13653 NDBG0(("FIXED interrupt registration failed"));
13435 13654 return (FALSE);
13436 13655 }
13437 13656 }
13438 13657
13439 13658 return (FALSE);
13440 13659 }
13441 13660
13442 13661 static void
13443 13662 mptsas_unregister_intrs(mptsas_t *mpt)
13444 13663 {
13445 13664 mptsas_rem_intrs(mpt);
13446 13665 }
13447 13666
13448 13667 /*
13449 13668 * mptsas_add_intrs:
13450 13669 *
13451 13670 * Register FIXED or MSI interrupts.
13452 13671 */
13453 13672 static int
13454 13673 mptsas_add_intrs(mptsas_t *mpt, int intr_type)
13455 13674 {
33 lines elided
13456 13675 dev_info_t *dip = mpt->m_dip;
13457 13676 int avail, actual, count = 0;
13458 13677 int i, flag, ret;
13459 13678
13460 13679 NDBG6(("mptsas_add_intrs:interrupt type 0x%x", intr_type));
13461 13680
13462 13681 /* Get number of interrupts */
13463 13682 ret = ddi_intr_get_nintrs(dip, intr_type, &count);
13464 13683 if ((ret != DDI_SUCCESS) || (count <= 0)) {
13465 13684 mptsas_log(mpt, CE_WARN, "ddi_intr_get_nintrs() failed, "
13466 - "ret %d count %d\n", ret, count);
13685 + "ret %d count %d", ret, count);
13467 13686
13468 13687 return (DDI_FAILURE);
13469 13688 }
13470 13689
13471 13690 /* Get number of available interrupts */
13472 13691 ret = ddi_intr_get_navail(dip, intr_type, &avail);
13473 13692 if ((ret != DDI_SUCCESS) || (avail == 0)) {
13474 13693 mptsas_log(mpt, CE_WARN, "ddi_intr_get_navail() failed, "
13475 - "ret %d avail %d\n", ret, avail);
13694 + "ret %d avail %d", ret, avail);
13476 13695
13477 13696 return (DDI_FAILURE);
13478 13697 }
13479 13698
13480 13699 if (avail < count) {
13481 13700 mptsas_log(mpt, CE_NOTE, "ddi_intr_get_nvail returned %d, "
13482 13701 "navail() returned %d", count, avail);
13483 13702 }
13484 13703
13485 13704 	/* The mpt driver only has one interrupt handler routine. */
13486 13705 if ((intr_type == DDI_INTR_TYPE_MSI) && (count > 1)) {
13487 13706 count = 1;
13488 13707 }
13489 13708
13490 13709 /* Allocate an array of interrupt handles */
5 lines elided
13491 13710 mpt->m_intr_size = count * sizeof (ddi_intr_handle_t);
13492 13711 mpt->m_htable = kmem_alloc(mpt->m_intr_size, KM_SLEEP);
13493 13712
13494 13713 flag = DDI_INTR_ALLOC_NORMAL;
13495 13714
13496 13715 /* call ddi_intr_alloc() */
13497 13716 ret = ddi_intr_alloc(dip, mpt->m_htable, intr_type, 0,
13498 13717 count, &actual, flag);
13499 13718
13500 13719 if ((ret != DDI_SUCCESS) || (actual == 0)) {
13501 - mptsas_log(mpt, CE_WARN, "ddi_intr_alloc() failed, ret %d\n",
13720 + mptsas_log(mpt, CE_WARN, "ddi_intr_alloc() failed, ret %d",
13502 13721 ret);
13503 13722 kmem_free(mpt->m_htable, mpt->m_intr_size);
13504 13723 return (DDI_FAILURE);
13505 13724 }
13506 13725
13507 13726 /* use interrupt count returned or abort? */
13508 13727 if (actual < count) {
13509 - mptsas_log(mpt, CE_NOTE, "Requested: %d, Received: %d\n",
13728 + mptsas_log(mpt, CE_NOTE, "Requested: %d, Received: %d",
13510 13729 count, actual);
13511 13730 }
13512 13731
13513 13732 mpt->m_intr_cnt = actual;
13514 13733
13515 13734 /*
13516 13735 * Get priority for first msi, assume remaining are all the same
13517 13736 */
13518 13737 if ((ret = ddi_intr_get_pri(mpt->m_htable[0],
13519 13738 &mpt->m_intr_pri)) != DDI_SUCCESS) {
13520 - mptsas_log(mpt, CE_WARN, "ddi_intr_get_pri() failed %d\n", ret);
13739 + mptsas_log(mpt, CE_WARN, "ddi_intr_get_pri() failed %d", ret);
13521 13740
13522 13741 /* Free already allocated intr */
13523 13742 for (i = 0; i < actual; i++) {
13524 13743 (void) ddi_intr_free(mpt->m_htable[i]);
13525 13744 }
13526 13745
13527 13746 kmem_free(mpt->m_htable, mpt->m_intr_size);
13528 13747 return (DDI_FAILURE);
13529 13748 }
13530 13749
13531 13750 /* Test for high level mutex */
13532 13751 if (mpt->m_intr_pri >= ddi_intr_get_hilevel_pri()) {
13533 13752 mptsas_log(mpt, CE_WARN, "mptsas_add_intrs: "
13534 - "Hi level interrupt not supported\n");
13753 + "Hi level interrupt not supported");
13535 13754
13536 13755 /* Free already allocated intr */
13537 13756 for (i = 0; i < actual; i++) {
13538 13757 (void) ddi_intr_free(mpt->m_htable[i]);
13539 13758 }
13540 13759
13541 13760 kmem_free(mpt->m_htable, mpt->m_intr_size);
13542 13761 return (DDI_FAILURE);
13543 13762 }
13544 13763
13545 13764 /* Call ddi_intr_add_handler() */
13546 13765 for (i = 0; i < actual; i++) {
13547 13766 if ((ret = ddi_intr_add_handler(mpt->m_htable[i], mptsas_intr,
13548 13767 (caddr_t)mpt, (caddr_t)(uintptr_t)i)) != DDI_SUCCESS) {
13549 13768 mptsas_log(mpt, CE_WARN, "ddi_intr_add_handler() "
13550 - "failed %d\n", ret);
13769 + "failed %d", ret);
13551 13770
13552 13771 /* Free already allocated intr */
13553 13772 for (i = 0; i < actual; i++) {
13554 13773 (void) ddi_intr_free(mpt->m_htable[i]);
13555 13774 }
13556 13775
13557 13776 kmem_free(mpt->m_htable, mpt->m_intr_size);
13558 13777 return (DDI_FAILURE);
13559 13778 }
13560 13779 }
13561 13780
13562 13781 if ((ret = ddi_intr_get_cap(mpt->m_htable[0], &mpt->m_intr_cap))
13563 13782 != DDI_SUCCESS) {
13564 - mptsas_log(mpt, CE_WARN, "ddi_intr_get_cap() failed %d\n", ret);
13783 + mptsas_log(mpt, CE_WARN, "ddi_intr_get_cap() failed %d", ret);
13565 13784
13566 13785 /* Free already allocated intr */
13567 13786 for (i = 0; i < actual; i++) {
13568 13787 (void) ddi_intr_free(mpt->m_htable[i]);
13569 13788 }
13570 13789
13571 13790 kmem_free(mpt->m_htable, mpt->m_intr_size);
13572 13791 return (DDI_FAILURE);
13573 13792 }
13574 13793
13575 13794 /*
13576 13795 * Enable interrupts
13577 13796 */
13578 13797 if (mpt->m_intr_cap & DDI_INTR_FLAG_BLOCK) {
13579 13798 /* Call ddi_intr_block_enable() for MSI interrupts */
13580 13799 (void) ddi_intr_block_enable(mpt->m_htable, mpt->m_intr_cnt);
13581 13800 } else {
13582 13801 /* Call ddi_intr_enable for MSI or FIXED interrupts */
13583 13802 for (i = 0; i < mpt->m_intr_cnt; i++) {
13584 13803 (void) ddi_intr_enable(mpt->m_htable[i]);
13585 13804 }
13586 13805 }
13587 13806 return (DDI_SUCCESS);
13588 13807 }
13589 13808
13590 13809 /*
13591 13810 * mptsas_rem_intrs:
13592 13811 *
13593 13812 * Unregister FIXED or MSI interrupts
13594 13813 */
13595 13814 static void
13596 13815 mptsas_rem_intrs(mptsas_t *mpt)
13597 13816 {
13598 13817 int i;
13599 13818
13600 13819 NDBG6(("mptsas_rem_intrs"));
13601 13820
13602 13821 /* Disable all interrupts */
13603 13822 if (mpt->m_intr_cap & DDI_INTR_FLAG_BLOCK) {
13604 13823 /* Call ddi_intr_block_disable() */
13605 13824 (void) ddi_intr_block_disable(mpt->m_htable, mpt->m_intr_cnt);
13606 13825 } else {
13607 13826 for (i = 0; i < mpt->m_intr_cnt; i++) {
13608 13827 (void) ddi_intr_disable(mpt->m_htable[i]);
13609 13828 }
13610 13829 }
13611 13830
13612 13831 /* Call ddi_intr_remove_handler() */
13613 13832 for (i = 0; i < mpt->m_intr_cnt; i++) {
13614 13833 (void) ddi_intr_remove_handler(mpt->m_htable[i]);
13615 13834 (void) ddi_intr_free(mpt->m_htable[i]);
13616 13835 }
13617 13836
13618 13837 kmem_free(mpt->m_htable, mpt->m_intr_size);
13619 13838 }
13620 13839
13621 13840 /*
13622 13841 * The IO fault service error handling callback function
13623 13842 */
13624 13843 /*ARGSUSED*/
13625 13844 static int
13626 13845 mptsas_fm_error_cb(dev_info_t *dip, ddi_fm_error_t *err, const void *impl_data)
13627 13846 {
13628 13847 /*
13629 13848 * as the driver can always deal with an error in any dma or
13630 13849 * access handle, we can just return the fme_status value.
13631 13850 */
13632 13851 pci_ereport_post(dip, err, NULL);
13633 13852 return (err->fme_status);
13634 13853 }
13635 13854
13636 13855 /*
13637 13856 * mptsas_fm_init - initialize fma capabilities and register with IO
13638 13857 * fault services.
13639 13858 */
13640 13859 static void
13641 13860 mptsas_fm_init(mptsas_t *mpt)
13642 13861 {
13643 13862 /*
13644 13863 * Need to change iblock to priority for new MSI intr
13645 13864 */
13646 13865 ddi_iblock_cookie_t fm_ibc;
13647 13866
13648 13867 /* Only register with IO Fault Services if we have some capability */
13649 13868 if (mpt->m_fm_capabilities) {
13650 13869 /* Adjust access and dma attributes for FMA */
13651 13870 mpt->m_reg_acc_attr.devacc_attr_access = DDI_FLAGERR_ACC;
13652 13871 mpt->m_msg_dma_attr.dma_attr_flags |= DDI_DMA_FLAGERR;
13653 13872 mpt->m_io_dma_attr.dma_attr_flags |= DDI_DMA_FLAGERR;
13654 13873
13655 13874 /*
13656 13875 * Register capabilities with IO Fault Services.
13657 13876 * mpt->m_fm_capabilities will be updated to indicate
13658 13877 * capabilities actually supported (not requested.)
13659 13878 */
13660 13879 ddi_fm_init(mpt->m_dip, &mpt->m_fm_capabilities, &fm_ibc);
13661 13880
13662 13881 /*
13663 13882 * Initialize pci ereport capabilities if ereport
13664 13883 * capable (should always be.)
13665 13884 */
13666 13885 if (DDI_FM_EREPORT_CAP(mpt->m_fm_capabilities) ||
13667 13886 DDI_FM_ERRCB_CAP(mpt->m_fm_capabilities)) {
13668 13887 pci_ereport_setup(mpt->m_dip);
13669 13888 }
13670 13889
13671 13890 /*
13672 13891 * Register error callback if error callback capable.
13673 13892 */
13674 13893 if (DDI_FM_ERRCB_CAP(mpt->m_fm_capabilities)) {
13675 13894 ddi_fm_handler_register(mpt->m_dip,
13676 13895 mptsas_fm_error_cb, (void *) mpt);
13677 13896 }
13678 13897 }
13679 13898 }
13680 13899
13681 13900 /*
13682 13901 * mptsas_fm_fini - Releases fma capabilities and un-registers with IO
13683 13902 * fault services.
13684 13903 *
13685 13904 */
13686 13905 static void
13687 13906 mptsas_fm_fini(mptsas_t *mpt)
13688 13907 {
13689 13908 /* Only unregister FMA capabilities if registered */
13690 13909 if (mpt->m_fm_capabilities) {
13691 13910
13692 13911 /*
13693 13912 * Un-register error callback if error callback capable.
13694 13913 */
13695 13914
13696 13915 if (DDI_FM_ERRCB_CAP(mpt->m_fm_capabilities)) {
13697 13916 ddi_fm_handler_unregister(mpt->m_dip);
13698 13917 }
13699 13918
13700 13919 /*
13701 13920 * Release any resources allocated by pci_ereport_setup()
13702 13921 */
13703 13922
13704 13923 if (DDI_FM_EREPORT_CAP(mpt->m_fm_capabilities) ||
13705 13924 DDI_FM_ERRCB_CAP(mpt->m_fm_capabilities)) {
13706 13925 pci_ereport_teardown(mpt->m_dip);
13707 13926 }
13708 13927
13709 13928 /* Unregister from IO Fault Services */
13710 13929 ddi_fm_fini(mpt->m_dip);
13711 13930
13712 13931 /* Adjust access and dma attributes for FMA */
13713 13932 mpt->m_reg_acc_attr.devacc_attr_access = DDI_DEFAULT_ACC;
13714 13933 mpt->m_msg_dma_attr.dma_attr_flags &= ~DDI_DMA_FLAGERR;
13715 13934 mpt->m_io_dma_attr.dma_attr_flags &= ~DDI_DMA_FLAGERR;
13716 13935
13717 13936 }
13718 13937 }
13719 13938
13720 13939 int
13721 13940 mptsas_check_acc_handle(ddi_acc_handle_t handle)
13722 13941 {
13723 13942 ddi_fm_error_t de;
13724 13943
13725 13944 if (handle == NULL)
13726 13945 return (DDI_FAILURE);
13727 13946 ddi_fm_acc_err_get(handle, &de, DDI_FME_VER0);
13728 13947 return (de.fme_status);
13729 13948 }
13730 13949
13731 13950 int
13732 13951 mptsas_check_dma_handle(ddi_dma_handle_t handle)
13733 13952 {
13734 13953 ddi_fm_error_t de;
13735 13954
13736 13955 if (handle == NULL)
13737 13956 return (DDI_FAILURE);
13738 13957 ddi_fm_dma_err_get(handle, &de, DDI_FME_VER0);
13739 13958 return (de.fme_status);
13740 13959 }
13741 13960
13742 13961 void
13743 13962 mptsas_fm_ereport(mptsas_t *mpt, char *detail)
13744 13963 {
13745 13964 uint64_t ena;
13746 13965 char buf[FM_MAX_CLASS];
13747 13966
13748 13967 (void) snprintf(buf, FM_MAX_CLASS, "%s.%s", DDI_FM_DEVICE, detail);
13749 13968 ena = fm_ena_generate(0, FM_ENA_FMT1);
13750 13969 if (DDI_FM_EREPORT_CAP(mpt->m_fm_capabilities)) {
13751 13970 ddi_fm_ereport_post(mpt->m_dip, buf, ena, DDI_NOSLEEP,
13752 13971 FM_VERSION, DATA_TYPE_UINT8, FM_EREPORT_VERS0, NULL);
13753 13972 }
13754 13973 }
13755 13974
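/*
 * Read SAS device page 0 for the device selected by page_address, skip
 * anything that is not an SSP/SATA/ATAPI target or that is a RAID
 * physical disk, and allocate an mptsas_target_t for the rest.
 */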
13756 13975 static int
13757 13976 mptsas_get_target_device_info(mptsas_t *mpt, uint32_t page_address,
13758 13977 uint16_t *dev_handle, mptsas_target_t **pptgt)
13759 13978 {
13760 13979 int rval;
13761 13980 uint32_t dev_info;
13762 13981 uint64_t sas_wwn;
13763 13982 mptsas_phymask_t phymask;
13764 13983 uint8_t physport, phynum, config, disk;
13765 13984 uint64_t devicename;
13766 13985 uint16_t pdev_hdl;
13767 13986 mptsas_target_t *tmp_tgt = NULL;
13768 13987 uint16_t bay_num, enclosure, io_flags;
13769 13988
13770 13989 ASSERT(*pptgt == NULL);
13771 13990
13772 13991 rval = mptsas_get_sas_device_page0(mpt, page_address, dev_handle,
13773 13992 &sas_wwn, &dev_info, &physport, &phynum, &pdev_hdl,
13774 13993 &bay_num, &enclosure, &io_flags);
13775 13994 if (rval != DDI_SUCCESS) {
13776 13995 rval = DEV_INFO_FAIL_PAGE0;
13777 13996 return (rval);
13778 13997 }
13779 13998
13780 13999 if ((dev_info & (MPI2_SAS_DEVICE_INFO_SSP_TARGET |
13781 14000 MPI2_SAS_DEVICE_INFO_SATA_DEVICE |
13782 14001 	    MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE)) == 0) {
13783 14002 rval = DEV_INFO_WRONG_DEVICE_TYPE;
13784 14003 return (rval);
13785 14004 }
13786 14005
13787 14006 /*
13788 14007 * Check if the dev handle is for a Phys Disk. If so, set return value
13789 14008 * and exit. Don't add Phys Disks to hash.
13790 14009 */
13791 14010 for (config = 0; config < mpt->m_num_raid_configs; config++) {
13792 14011 for (disk = 0; disk < MPTSAS_MAX_DISKS_IN_CONFIG; disk++) {
13793 14012 if (*dev_handle == mpt->m_raidconfig[config].
13794 14013 m_physdisk_devhdl[disk]) {
13795 14014 rval = DEV_INFO_PHYS_DISK;
13796 14015 return (rval);
13797 14016 }
13798 14017 }
13799 14018 }
13800 14019
13801 14020 /*
13802 14021 	 * Get the SATA Device Name from SAS device page0 for a SATA
13803 14022 	 * device; if the device name doesn't exist, set mta_wwn to
13804 14023 	 * 0 for direct attached SATA. For a device behind an expander
13805 14024 	 * we can still use the STP address assigned by the expander.
13806 14025 */
13807 14026 if (dev_info & (MPI2_SAS_DEVICE_INFO_SATA_DEVICE |
13808 14027 MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE)) {
13809 14028 /* alloc a temporary target to send the cmd to */
13810 14029 tmp_tgt = mptsas_tgt_alloc(mpt->m_tmp_targets, *dev_handle,
13811 14030 0, dev_info, 0, 0);
13812 14031 mutex_exit(&mpt->m_mutex);
13813 14032
13814 14033 devicename = mptsas_get_sata_guid(mpt, tmp_tgt, 0);
13815 14034
13816 14035 if (devicename == -1) {
13817 14036 mutex_enter(&mpt->m_mutex);
13818 14037 refhash_remove(mpt->m_tmp_targets, tmp_tgt);
13819 14038 rval = DEV_INFO_FAIL_GUID;
13820 14039 return (rval);
13821 14040 }
13822 14041
13823 14042 if (devicename != 0 && (((devicename >> 56) & 0xf0) == 0x50)) {
13824 14043 sas_wwn = devicename;
13825 14044 } else if (dev_info & MPI2_SAS_DEVICE_INFO_DIRECT_ATTACH) {
13826 14045 sas_wwn = 0;
13827 14046 }
13828 14047
13829 14048 mutex_enter(&mpt->m_mutex);
13830 14049 refhash_remove(mpt->m_tmp_targets, tmp_tgt);
13831 14050 }
13832 14051
13833 14052 phymask = mptsas_physport_to_phymask(mpt, physport);
13834 14053 *pptgt = mptsas_tgt_alloc(mpt->m_targets, *dev_handle, sas_wwn,
13835 14054 dev_info, phymask, phynum);
13836 14055 if (*pptgt == NULL) {
13837 14056 		mptsas_log(mpt, CE_WARN, "Failed to allocate target "
13838 14057 		    "structure!");
13839 14058 rval = DEV_INFO_FAIL_ALLOC;
13840 14059 return (rval);
13841 14060 }
13842 14061 (*pptgt)->m_io_flags = io_flags;
13843 14062 (*pptgt)->m_enclosure = enclosure;
13844 14063 (*pptgt)->m_slot_num = bay_num;
13845 14064 return (DEV_INFO_SUCCESS);
13846 14065 }
13847 14066
13848 14067 uint64_t
13849 14068 mptsas_get_sata_guid(mptsas_t *mpt, mptsas_target_t *ptgt, int lun)
13850 14069 {
13851 14070 uint64_t sata_guid = 0, *pwwn = NULL;
13852 14071 int target = ptgt->m_devhdl;
13853 14072 uchar_t *inq83 = NULL;
13854 14073 int inq83_len = 0xFF;
[ 280 lines elided ]
13855 14074 uchar_t *dblk = NULL;
13856 14075 int inq83_retry = 3;
13857 14076 int rval = DDI_FAILURE;
13858 14077
13859 14078 inq83 = kmem_zalloc(inq83_len, KM_SLEEP);
13860 14079
13861 14080 inq83_retry:
13862 14081 rval = mptsas_inquiry(mpt, ptgt, lun, 0x83, inq83,
13863 14082 inq83_len, NULL, 1);
13864 14083 if (rval != DDI_SUCCESS) {
13865 - mptsas_log(mpt, CE_WARN, "!mptsas request inquiry page "
14084 + mptsas_log(mpt, CE_WARN, "mptsas request inquiry page "
13866 14085 "0x83 for target:%x, lun:%x failed!", target, lun);
13867 14086 sata_guid = -1;
13868 14087 goto out;
13869 14088 }
13870 14089 	/* According to SAT2, the first descriptor is the logical unit name */
13871 14090 dblk = &inq83[4];
13872 14091 if ((dblk[1] & 0x30) != 0) {
13873 - mptsas_log(mpt, CE_WARN, "!Descriptor is not lun associated.");
14092 + mptsas_log(mpt, CE_WARN, "Descriptor is not lun associated.");
13874 14093 goto out;
13875 14094 }
13876 14095 pwwn = (uint64_t *)(void *)(&dblk[4]);
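	/*
	 * An upper nibble of 5 in the first designator byte is an NAA
	 * IEEE Registered identifier; use it as the GUID.
	 */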
13877 14096 if ((dblk[4] & 0xf0) == 0x50) {
13878 14097 sata_guid = BE_64(*pwwn);
13879 14098 goto out;
13880 14099 } else if (dblk[4] == 'A') {
13881 14100 NDBG20(("SATA drive has no NAA format GUID."));
13882 14101 goto out;
13883 14102 } else {
13884 14103 /* The data is not ready, wait and retry */
13885 14104 inq83_retry--;
13886 14105 if (inq83_retry <= 0) {
13887 14106 goto out;
13888 14107 }
13889 14108 NDBG20(("The GUID is not ready, retry..."));
13890 14109 delay(1 * drv_usectohz(1000000));
13891 14110 goto inq83_retry;
13892 14111 }
13893 14112 out:
13894 14113 kmem_free(inq83, inq83_len);
13895 14114 return (sata_guid);
13896 14115 }
13897 14116
13898 14117 static int
13899 14118 mptsas_inquiry(mptsas_t *mpt, mptsas_target_t *ptgt, int lun, uchar_t page,
13900 14119 unsigned char *buf, int len, int *reallen, uchar_t evpd)
13901 14120 {
13902 14121 uchar_t cdb[CDB_GROUP0];
13903 14122 struct scsi_address ap;
13904 14123 struct buf *data_bp = NULL;
13905 14124 int resid = 0;
13906 14125 int ret = DDI_FAILURE;
13907 14126
13908 14127 ASSERT(len <= 0xffff);
13909 14128
13910 14129 ap.a_target = MPTSAS_INVALID_DEVHDL;
13911 14130 ap.a_lun = (uchar_t)(lun);
13912 14131 ap.a_hba_tran = mpt->m_tran;
13913 14132
13914 14133 data_bp = scsi_alloc_consistent_buf(&ap,
13915 14134 (struct buf *)NULL, len, B_READ, NULL_FUNC, NULL);
13916 14135 if (data_bp == NULL) {
13917 14136 return (ret);
13918 14137 }
13919 14138 bzero(cdb, CDB_GROUP0);
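	/* Build a 6-byte INQUIRY CDB: EVPD flag, page code and allocation length */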
13920 14139 cdb[0] = SCMD_INQUIRY;
13921 14140 cdb[1] = evpd;
13922 14141 cdb[2] = page;
13923 14142 cdb[3] = (len & 0xff00) >> 8;
13924 14143 cdb[4] = (len & 0x00ff);
13925 14144 cdb[5] = 0;
13926 14145
13927 14146 ret = mptsas_send_scsi_cmd(mpt, &ap, ptgt, &cdb[0], CDB_GROUP0, data_bp,
13928 14147 &resid);
13929 14148 if (ret == DDI_SUCCESS) {
13930 14149 if (reallen) {
13931 14150 *reallen = len - resid;
13932 14151 }
13933 14152 bcopy((caddr_t)data_bp->b_un.b_addr, buf, len);
13934 14153 }
13935 14154 if (data_bp) {
13936 14155 scsi_free_consistent_buf(data_bp);
13937 14156 }
13938 14157 return (ret);
13939 14158 }
13940 14159
13941 14160 static int
13942 14161 mptsas_send_scsi_cmd(mptsas_t *mpt, struct scsi_address *ap,
13943 14162 mptsas_target_t *ptgt, uchar_t *cdb, int cdblen, struct buf *data_bp,
13944 14163 int *resid)
13945 14164 {
13946 14165 struct scsi_pkt *pktp = NULL;
13947 14166 scsi_hba_tran_t *tran_clone = NULL;
13948 14167 mptsas_tgt_private_t *tgt_private = NULL;
13949 14168 int ret = DDI_FAILURE;
13950 14169
13951 14170 /*
13952 14171 * scsi_hba_tran_t->tran_tgt_private is used to pass the address
13953 14172 * information to scsi_init_pkt, allocate a scsi_hba_tran structure
13954 14173 * to simulate the cmds from sd
13955 14174 */
13956 14175 tran_clone = kmem_alloc(
13957 14176 sizeof (scsi_hba_tran_t), KM_SLEEP);
13958 14177 if (tran_clone == NULL) {
13959 14178 goto out;
13960 14179 }
13961 14180 bcopy((caddr_t)mpt->m_tran,
13962 14181 (caddr_t)tran_clone, sizeof (scsi_hba_tran_t));
13963 14182 tgt_private = kmem_alloc(
13964 14183 sizeof (mptsas_tgt_private_t), KM_SLEEP);
13965 14184 if (tgt_private == NULL) {
13966 14185 goto out;
13967 14186 }
13968 14187 tgt_private->t_lun = ap->a_lun;
13969 14188 tgt_private->t_private = ptgt;
13970 14189 tran_clone->tran_tgt_private = tgt_private;
[ 87 lines elided ]
13971 14190 ap->a_hba_tran = tran_clone;
13972 14191
13973 14192 pktp = scsi_init_pkt(ap, (struct scsi_pkt *)NULL,
13974 14193 data_bp, cdblen, sizeof (struct scsi_arq_status),
13975 14194 0, PKT_CONSISTENT, NULL, NULL);
13976 14195 if (pktp == NULL) {
13977 14196 goto out;
13978 14197 }
13979 14198 bcopy(cdb, pktp->pkt_cdbp, cdblen);
13980 14199 pktp->pkt_flags = FLAG_NOPARITY;
14200 + pktp->pkt_time = mptsas_scsi_pkt_time;
13981 14201 if (scsi_poll(pktp) < 0) {
13982 14202 goto out;
13983 14203 }
13984 14204 if (((struct scsi_status *)pktp->pkt_scbp)->sts_chk) {
13985 14205 goto out;
13986 14206 }
13987 14207 if (resid != NULL) {
13988 14208 *resid = pktp->pkt_resid;
13989 14209 }
13990 14210
13991 14211 ret = DDI_SUCCESS;
13992 14212 out:
13993 14213 if (pktp) {
13994 14214 scsi_destroy_pkt(pktp);
13995 14215 }
13996 14216 if (tran_clone) {
13997 14217 kmem_free(tran_clone, sizeof (scsi_hba_tran_t));
13998 14218 }
13999 14219 if (tgt_private) {
14000 14220 kmem_free(tgt_private, sizeof (mptsas_tgt_private_t));
14001 14221 }
14002 14222 return (ret);
14003 14223 }
14004 14224 static int
14005 14225 mptsas_parse_address(char *name, uint64_t *wwid, uint8_t *phy, int *lun)
14006 14226 {
14007 14227 char *cp = NULL;
14008 14228 char *ptr = NULL;
14009 14229 size_t s = 0;
14010 14230 char *wwid_str = NULL;
14011 14231 char *lun_str = NULL;
14012 14232 long lunnum;
14013 14233 long phyid = -1;
14014 14234 int rc = DDI_FAILURE;
14015 14235
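	/*
	 * The unit address is either "w<WWN>,<LUN>" or "p<PHY>,<LUN>", with
	 * the WWN, PHY and LUN in hex, e.g. "w5000cca012345678,0" or "p3,0".
	 */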
14016 14236 ptr = name;
14017 14237 ASSERT(ptr[0] == 'w' || ptr[0] == 'p');
14018 14238 ptr++;
14019 14239 if ((cp = strchr(ptr, ',')) == NULL) {
14020 14240 return (DDI_FAILURE);
14021 14241 }
14022 14242
14023 14243 wwid_str = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
14024 14244 s = (uintptr_t)cp - (uintptr_t)ptr;
14025 14245
14026 14246 bcopy(ptr, wwid_str, s);
14027 14247 wwid_str[s] = '\0';
14028 14248
14029 14249 ptr = ++cp;
14030 14250
14031 14251 if ((cp = strchr(ptr, '\0')) == NULL) {
14032 14252 goto out;
14033 14253 }
14034 14254 lun_str = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
14035 14255 s = (uintptr_t)cp - (uintptr_t)ptr;
14036 14256
14037 14257 bcopy(ptr, lun_str, s);
14038 14258 lun_str[s] = '\0';
14039 14259
14040 14260 if (name[0] == 'p') {
14041 14261 rc = ddi_strtol(wwid_str, NULL, 0x10, &phyid);
14042 14262 } else {
14043 14263 rc = scsi_wwnstr_to_wwn(wwid_str, wwid);
14044 14264 }
14045 14265 if (rc != DDI_SUCCESS)
14046 14266 goto out;
14047 14267
14048 14268 if (phyid != -1) {
14049 14269 ASSERT(phyid < MPTSAS_MAX_PHYS);
14050 14270 *phy = (uint8_t)phyid;
14051 14271 }
14052 14272 rc = ddi_strtol(lun_str, NULL, 0x10, &lunnum);
14053 14273 if (rc != 0)
14054 14274 goto out;
14055 14275
14056 14276 *lun = (int)lunnum;
14057 14277 rc = DDI_SUCCESS;
14058 14278 out:
14059 14279 if (wwid_str)
14060 14280 kmem_free(wwid_str, SCSI_MAXNAMELEN);
14061 14281 if (lun_str)
14062 14282 kmem_free(lun_str, SCSI_MAXNAMELEN);
14063 14283
14064 14284 return (rc);
14065 14285 }
14066 14286
14067 14287 /*
14068 14288  * mptsas_parse_smp_name() parses a SAS WWN string
14069 14289  * in the form "wWWN"
14070 14290 */
14071 14291 static int
14072 14292 mptsas_parse_smp_name(char *name, uint64_t *wwn)
14073 14293 {
14074 14294 char *ptr = name;
14075 14295
14076 14296 if (*ptr != 'w') {
14077 14297 return (DDI_FAILURE);
14078 14298 }
14079 14299
14080 14300 ptr++;
14081 14301 if (scsi_wwnstr_to_wwn(ptr, wwn)) {
14082 14302 return (DDI_FAILURE);
14083 14303 }
14084 14304 return (DDI_SUCCESS);
14085 14305 }
14086 14306
14087 14307 static int
14088 14308 mptsas_bus_config(dev_info_t *pdip, uint_t flag,
14089 14309 ddi_bus_config_op_t op, void *arg, dev_info_t **childp)
14090 14310 {
14091 14311 int ret = NDI_FAILURE;
14092 14312 int circ = 0;
14093 14313 int circ1 = 0;
14094 14314 mptsas_t *mpt;
14095 14315 char *ptr = NULL;
14096 14316 char *devnm = NULL;
14097 14317 uint64_t wwid = 0;
14098 14318 uint8_t phy = 0xFF;
14099 14319 int lun = 0;
14100 14320 uint_t mflags = flag;
14101 14321 int bconfig = TRUE;
14102 14322
14103 14323 if (scsi_hba_iport_unit_address(pdip) == 0) {
14104 14324 return (DDI_FAILURE);
14105 14325 }
14106 14326
14107 14327 mpt = DIP2MPT(pdip);
14108 14328 if (!mpt) {
14109 14329 return (DDI_FAILURE);
14110 14330 }
14111 14331 /*
14112 14332 * Hold the nexus across the bus_config
14113 14333 */
14114 14334 ndi_devi_enter(scsi_vhci_dip, &circ);
14115 14335 ndi_devi_enter(pdip, &circ1);
14116 14336 switch (op) {
14117 14337 case BUS_CONFIG_ONE:
14118 14338 /* parse wwid/target name out of name given */
14119 14339 if ((ptr = strchr((char *)arg, '@')) == NULL) {
14120 14340 ret = NDI_FAILURE;
14121 14341 break;
14122 14342 }
14123 14343 ptr++;
14124 14344 if (strncmp((char *)arg, "smp", 3) == 0) {
14125 14345 /*
14126 14346 * This is a SMP target device
14127 14347 */
14128 14348 ret = mptsas_parse_smp_name(ptr, &wwid);
14129 14349 if (ret != DDI_SUCCESS) {
14130 14350 ret = NDI_FAILURE;
14131 14351 break;
14132 14352 }
14133 14353 ret = mptsas_config_smp(pdip, wwid, childp);
14134 14354 } else if ((ptr[0] == 'w') || (ptr[0] == 'p')) {
14135 14355 /*
14136 14356 * OBP could pass down a non-canonical form
14137 14357 * bootpath without LUN part when LUN is 0.
14138 14358 			 * So the driver needs to adjust the string.
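			 * For example, "disk@w5000cca012345678" becomes
			 * "disk@w5000cca012345678,0".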
14139 14359 */
14140 14360 if (strchr(ptr, ',') == NULL) {
14141 14361 devnm = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
14142 14362 (void) sprintf(devnm, "%s,0", (char *)arg);
14143 14363 ptr = strchr(devnm, '@');
14144 14364 ptr++;
14145 14365 }
14146 14366
14147 14367 /*
14148 14368 * The device path is wWWID format and the device
14149 14369 * is not SMP target device.
14150 14370 */
14151 14371 ret = mptsas_parse_address(ptr, &wwid, &phy, &lun);
14152 14372 if (ret != DDI_SUCCESS) {
14153 14373 ret = NDI_FAILURE;
14154 14374 break;
14155 14375 }
14156 14376 *childp = NULL;
14157 14377 if (ptr[0] == 'w') {
14158 14378 ret = mptsas_config_one_addr(pdip, wwid,
14159 14379 lun, childp);
14160 14380 } else if (ptr[0] == 'p') {
14161 14381 ret = mptsas_config_one_phy(pdip, phy, lun,
14162 14382 childp);
14163 14383 }
14164 14384
14165 14385 /*
14166 14386 			 * If this is a CD/DVD device in the OBP path, the
14167 14387 * ndi_busop_bus_config can be skipped as config one
14168 14388 * operation is done above.
14169 14389 */
14170 14390 if ((ret == NDI_SUCCESS) && (*childp != NULL) &&
14171 14391 (strcmp(ddi_node_name(*childp), "cdrom") == 0) &&
14172 14392 (strncmp((char *)arg, "disk", 4) == 0)) {
14173 14393 bconfig = FALSE;
14174 14394 ndi_hold_devi(*childp);
14175 14395 }
14176 14396 } else {
14177 14397 ret = NDI_FAILURE;
14178 14398 break;
14179 14399 }
14180 14400
[ 190 lines elided ]
14181 14401 /*
14182 14402 * DDI group instructed us to use this flag.
14183 14403 */
14184 14404 mflags |= NDI_MDI_FALLBACK;
14185 14405 break;
14186 14406 case BUS_CONFIG_DRIVER:
14187 14407 case BUS_CONFIG_ALL:
14188 14408 mptsas_config_all(pdip);
14189 14409 ret = NDI_SUCCESS;
14190 14410 break;
14411 + default:
14412 + ret = NDI_FAILURE;
14413 + break;
14191 14414 }
14192 14415
14193 14416 if ((ret == NDI_SUCCESS) && bconfig) {
14194 14417 ret = ndi_busop_bus_config(pdip, mflags, op,
14195 14418 (devnm == NULL) ? arg : devnm, childp, 0);
14196 14419 }
14197 14420
14198 14421 ndi_devi_exit(pdip, circ1);
14199 14422 ndi_devi_exit(scsi_vhci_dip, circ);
14200 14423 if (devnm != NULL)
14201 14424 kmem_free(devnm, SCSI_MAXNAMELEN);
14202 14425 return (ret);
14203 14426 }
14204 14427
14205 14428 static int
14206 14429 mptsas_probe_lun(dev_info_t *pdip, int lun, dev_info_t **dip,
14207 14430 mptsas_target_t *ptgt)
14208 14431 {
14209 14432 int rval = DDI_FAILURE;
14210 14433 struct scsi_inquiry *sd_inq = NULL;
14211 14434 mptsas_t *mpt = DIP2MPT(pdip);
14212 14435
14213 14436 sd_inq = (struct scsi_inquiry *)kmem_alloc(SUN_INQSIZE, KM_SLEEP);
14214 14437
14215 14438 rval = mptsas_inquiry(mpt, ptgt, lun, 0, (uchar_t *)sd_inq,
14216 14439 SUN_INQSIZE, 0, (uchar_t)0);
14217 14440
14218 14441 if ((rval == DDI_SUCCESS) && MPTSAS_VALID_LUN(sd_inq)) {
14219 14442 rval = mptsas_create_lun(pdip, sd_inq, dip, ptgt, lun);
14220 14443 } else {
14221 14444 rval = DDI_FAILURE;
14222 14445 }
14223 14446
14224 14447 kmem_free(sd_inq, SUN_INQSIZE);
14225 14448 return (rval);
14226 14449 }
14227 14450
[ 27 lines elided ]
14228 14451 static int
14229 14452 mptsas_config_one_addr(dev_info_t *pdip, uint64_t sasaddr, int lun,
14230 14453 dev_info_t **lundip)
14231 14454 {
14232 14455 int rval;
14233 14456 mptsas_t *mpt = DIP2MPT(pdip);
14234 14457 int phymask;
14235 14458 mptsas_target_t *ptgt = NULL;
14236 14459
14237 14460 /*
14461 + * The phymask exists if the port is active, otherwise
14462 + * nothing to do.
14463 + */
14464 + if (ddi_prop_exists(DDI_DEV_T_ANY, pdip,
14465 + DDI_PROP_DONTPASS | DDI_PROP_NOTPROM, "phymask") == 0)
14466 + return (DDI_FAILURE);
14467 +
14468 + /*
14238 14469 * Get the physical port associated to the iport
14239 14470 */
14240 14471 phymask = ddi_prop_get_int(DDI_DEV_T_ANY, pdip, 0,
14241 14472 "phymask", 0);
14242 14473
14243 14474 ptgt = mptsas_wwid_to_ptgt(mpt, phymask, sasaddr);
14244 14475 if (ptgt == NULL) {
14245 14476 /*
14246 14477 * didn't match any device by searching
14247 14478 */
14248 14479 return (DDI_FAILURE);
14249 14480 }
14250 14481 /*
14251 14482 * If the LUN already exists and the status is online,
14252 14483 * we just return the pointer to dev_info_t directly.
14253 14484 * For the mdi_pathinfo node, we'll handle it in
14254 14485 * mptsas_create_virt_lun()
14255 14486 * TODO should be also in mptsas_handle_dr
14256 14487 */
14257 14488
14258 14489 *lundip = mptsas_find_child_addr(pdip, sasaddr, lun);
14259 14490 if (*lundip != NULL) {
14260 14491 /*
14261 14492 		 * TODO Another scenario is, we hotplug the same disk
14262 14493 * on the same slot, the devhdl changed, is this
14263 14494 * possible?
14264 14495 * tgt_private->t_private != ptgt
14265 14496 */
14266 14497 if (sasaddr != ptgt->m_addr.mta_wwn) {
[ 19 lines elided ]
14267 14498 /*
14268 14499 * The device has changed although the devhdl is the
14269 14500 * same (Enclosure mapping mode, change drive on the
14270 14501 * same slot)
14271 14502 */
14272 14503 return (DDI_FAILURE);
14273 14504 }
14274 14505 return (DDI_SUCCESS);
14275 14506 }
14276 14507
14277 - if (phymask == 0) {
14508 + /*
14509 + * If this is a RAID, configure the volumes
14510 + */
14511 + if (mpt->m_num_raid_configs > 0) {
14278 14512 /*
14279 14513 * Configure IR volume
14280 14514 */
14281 14515 rval = mptsas_config_raid(pdip, ptgt->m_devhdl, lundip);
14282 14516 return (rval);
14283 14517 }
14284 14518 rval = mptsas_probe_lun(pdip, lun, lundip, ptgt);
14285 14519
14286 14520 return (rval);
14287 14521 }
14288 14522
[ 1 line elided ]
14289 14523 static int
14290 14524 mptsas_config_one_phy(dev_info_t *pdip, uint8_t phy, int lun,
14291 14525 dev_info_t **lundip)
14292 14526 {
14293 14527 int rval;
14294 14528 mptsas_t *mpt = DIP2MPT(pdip);
14295 14529 mptsas_phymask_t phymask;
14296 14530 mptsas_target_t *ptgt = NULL;
14297 14531
14298 14532 /*
14533 + * The phymask exists if the port is active, otherwise
14534 + * nothing to do.
14535 + */
14536 + if (ddi_prop_exists(DDI_DEV_T_ANY, pdip,
14537 + DDI_PROP_DONTPASS | DDI_PROP_NOTPROM, "phymask") == 0)
14538 + return (DDI_FAILURE);
14539 + /*
14299 14540 * Get the physical port associated to the iport
14300 14541 */
14301 14542 phymask = (mptsas_phymask_t)ddi_prop_get_int(DDI_DEV_T_ANY, pdip, 0,
14302 14543 "phymask", 0);
14303 14544
14304 14545 ptgt = mptsas_phy_to_tgt(mpt, phymask, phy);
14305 14546 if (ptgt == NULL) {
14306 14547 /*
14307 14548 * didn't match any device by searching
14308 14549 */
14309 14550 return (DDI_FAILURE);
14310 14551 }
14311 14552
14312 14553 /*
14313 14554 * If the LUN already exists and the status is online,
14314 14555 * we just return the pointer to dev_info_t directly.
14315 14556 * For the mdi_pathinfo node, we'll handle it in
14316 14557 * mptsas_create_virt_lun().
14317 14558 */
14318 14559
14319 14560 *lundip = mptsas_find_child_phy(pdip, phy);
14320 14561 if (*lundip != NULL) {
14321 14562 return (DDI_SUCCESS);
14322 14563 }
14323 14564
14324 14565 rval = mptsas_probe_lun(pdip, lun, lundip, ptgt);
14325 14566
14326 14567 return (rval);
14327 14568 }
14328 14569
14329 14570 static int
14330 14571 mptsas_retrieve_lundata(int lun_cnt, uint8_t *buf, uint16_t *lun_num,
14331 14572 uint8_t *lun_addr_type)
14332 14573 {
14333 14574 uint32_t lun_idx = 0;
14334 14575
14335 14576 ASSERT(lun_num != NULL);
14336 14577 ASSERT(lun_addr_type != NULL);
14337 14578
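	/*
	 * The LUN list returned by REPORT LUNS follows an 8-byte header,
	 * and each entry is MPTSAS_SCSI_REPORTLUNS_ADDRESS_SIZE (8) bytes,
	 * hence the (lun_cnt + 1) scaling below.
	 */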
14338 14579 lun_idx = (lun_cnt + 1) * MPTSAS_SCSI_REPORTLUNS_ADDRESS_SIZE;
14339 14580 /* determine report luns addressing type */
14340 14581 switch (buf[lun_idx] & MPTSAS_SCSI_REPORTLUNS_ADDRESS_MASK) {
14341 14582 /*
14342 14583 * Vendors in the field have been found to be concatenating
14343 14584 * bus/target/lun to equal the complete lun value instead
14344 14585 * of switching to flat space addressing
14345 14586 */
14346 14587 /* 00b - peripheral device addressing method */
14347 14588 case MPTSAS_SCSI_REPORTLUNS_ADDRESS_PERIPHERAL:
14348 14589 /* FALLTHRU */
14349 14590 /* 10b - logical unit addressing method */
14350 14591 case MPTSAS_SCSI_REPORTLUNS_ADDRESS_LOGICAL_UNIT:
14351 14592 /* FALLTHRU */
14352 14593 /* 01b - flat space addressing method */
14353 14594 case MPTSAS_SCSI_REPORTLUNS_ADDRESS_FLAT_SPACE:
14354 14595 /* byte0 bit0-5=msb lun byte1 bit0-7=lsb lun */
14355 14596 *lun_addr_type = (buf[lun_idx] &
14356 14597 MPTSAS_SCSI_REPORTLUNS_ADDRESS_MASK) >> 6;
14357 14598 *lun_num = (buf[lun_idx] & 0x3F) << 8;
14358 14599 *lun_num |= buf[lun_idx + 1];
14359 14600 return (DDI_SUCCESS);
14360 14601 default:
14361 14602 return (DDI_FAILURE);
14362 14603 }
14363 14604 }
14364 14605
14365 14606 static int
14366 14607 mptsas_config_luns(dev_info_t *pdip, mptsas_target_t *ptgt)
14367 14608 {
14368 14609 struct buf *repluns_bp = NULL;
14369 14610 struct scsi_address ap;
14370 14611 uchar_t cdb[CDB_GROUP5];
14371 14612 int ret = DDI_FAILURE;
14372 14613 int retry = 0;
14373 14614 int lun_list_len = 0;
14374 14615 uint16_t lun_num = 0;
14375 14616 uint8_t lun_addr_type = 0;
14376 14617 uint32_t lun_cnt = 0;
14377 14618 uint32_t lun_total = 0;
14378 14619 dev_info_t *cdip = NULL;
14379 14620 uint16_t *saved_repluns = NULL;
14380 14621 char *buffer = NULL;
14381 14622 int buf_len = 128;
14382 14623 mptsas_t *mpt = DIP2MPT(pdip);
14383 14624 uint64_t sas_wwn = 0;
14384 14625 uint8_t phy = 0xFF;
14385 14626 uint32_t dev_info = 0;
14386 14627
14387 14628 mutex_enter(&mpt->m_mutex);
14388 14629 sas_wwn = ptgt->m_addr.mta_wwn;
14389 14630 phy = ptgt->m_phynum;
14390 14631 dev_info = ptgt->m_deviceinfo;
14391 14632 mutex_exit(&mpt->m_mutex);
14392 14633
14393 14634 if (sas_wwn == 0) {
14394 14635 /*
14395 14636 * It's a SATA without Device Name
14396 14637 * So don't try multi-LUNs
14397 14638 */
14398 14639 if (mptsas_find_child_phy(pdip, phy)) {
14399 14640 return (DDI_SUCCESS);
14400 14641 } else {
14401 14642 /*
14402 14643 			 * need to configure and create the node
14403 14644 */
14404 14645 return (DDI_FAILURE);
14405 14646 }
14406 14647 }
14407 14648
14408 14649 /*
14409 14650 * WWN (SAS address or Device Name exist)
14410 14651 */
14411 14652 if (dev_info & (MPI2_SAS_DEVICE_INFO_SATA_DEVICE |
14412 14653 MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE)) {
14413 14654 /*
14414 14655 * SATA device with Device Name
14415 14656 * So don't try multi-LUNs
14416 14657 */
14417 14658 if (mptsas_find_child_addr(pdip, sas_wwn, 0)) {
14418 14659 return (DDI_SUCCESS);
14419 14660 } else {
14420 14661 return (DDI_FAILURE);
14421 14662 }
14422 14663 }
14423 14664
14424 14665 do {
14425 14666 ap.a_target = MPTSAS_INVALID_DEVHDL;
14426 14667 ap.a_lun = 0;
14427 14668 ap.a_hba_tran = mpt->m_tran;
14428 14669 repluns_bp = scsi_alloc_consistent_buf(&ap,
14429 14670 (struct buf *)NULL, buf_len, B_READ, NULL_FUNC, NULL);
14430 14671 if (repluns_bp == NULL) {
14431 14672 retry++;
14432 14673 continue;
14433 14674 }
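		/*
		 * Build a 12-byte REPORT LUNS CDB; the allocation length
		 * goes in bytes 6-9, big-endian.
		 */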
14434 14675 bzero(cdb, CDB_GROUP5);
14435 14676 cdb[0] = SCMD_REPORT_LUNS;
14436 14677 cdb[6] = (buf_len & 0xff000000) >> 24;
14437 14678 cdb[7] = (buf_len & 0x00ff0000) >> 16;
14438 14679 cdb[8] = (buf_len & 0x0000ff00) >> 8;
14439 14680 cdb[9] = (buf_len & 0x000000ff);
14440 14681
14441 14682 ret = mptsas_send_scsi_cmd(mpt, &ap, ptgt, &cdb[0], CDB_GROUP5,
14442 14683 repluns_bp, NULL);
14443 14684 if (ret != DDI_SUCCESS) {
14444 14685 scsi_free_consistent_buf(repluns_bp);
14445 14686 retry++;
14446 14687 continue;
14447 14688 }
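		/*
		 * The first 4 bytes of the response hold the LUN list length;
		 * if the data didn't fit, grow the buffer and retry.
		 */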
14448 14689 lun_list_len = BE_32(*(int *)((void *)(
14449 14690 repluns_bp->b_un.b_addr)));
14450 14691 if (buf_len >= lun_list_len + 8) {
14451 14692 ret = DDI_SUCCESS;
14452 14693 break;
14453 14694 }
14454 14695 scsi_free_consistent_buf(repluns_bp);
14455 14696 buf_len = lun_list_len + 8;
14456 14697
14457 14698 } while (retry < 3);
14458 14699
14459 14700 if (ret != DDI_SUCCESS)
14460 14701 return (ret);
14461 14702 buffer = (char *)repluns_bp->b_un.b_addr;
14462 14703 /*
14463 14704 * find out the number of luns returned by the SCSI ReportLun call
14464 14705 * and allocate buffer space
14465 14706 */
14466 14707 lun_total = lun_list_len / MPTSAS_SCSI_REPORTLUNS_ADDRESS_SIZE;
14467 14708 saved_repluns = kmem_zalloc(sizeof (uint16_t) * lun_total, KM_SLEEP);
[ 159 lines elided ]
14468 14709 if (saved_repluns == NULL) {
14469 14710 scsi_free_consistent_buf(repluns_bp);
14470 14711 return (DDI_FAILURE);
14471 14712 }
14472 14713 for (lun_cnt = 0; lun_cnt < lun_total; lun_cnt++) {
14473 14714 if (mptsas_retrieve_lundata(lun_cnt, (uint8_t *)(buffer),
14474 14715 &lun_num, &lun_addr_type) != DDI_SUCCESS) {
14475 14716 continue;
14476 14717 }
14477 14718 saved_repluns[lun_cnt] = lun_num;
14478 - if (cdip = mptsas_find_child_addr(pdip, sas_wwn, lun_num))
14719 + if ((cdip = mptsas_find_child_addr(pdip, sas_wwn, lun_num)) !=
14720 + NULL) {
14479 14721 ret = DDI_SUCCESS;
14480 - else
14722 + } else {
14481 14723 ret = mptsas_probe_lun(pdip, lun_num, &cdip,
14482 14724 ptgt);
14725 + }
14483 14726 if ((ret == DDI_SUCCESS) && (cdip != NULL)) {
14484 14727 (void) ndi_prop_remove(DDI_DEV_T_NONE, cdip,
14485 14728 MPTSAS_DEV_GONE);
14486 14729 }
14487 14730 }
14488 14731 mptsas_offline_missed_luns(pdip, saved_repluns, lun_total, ptgt);
14489 14732 kmem_free(saved_repluns, sizeof (uint16_t) * lun_total);
14490 14733 scsi_free_consistent_buf(repluns_bp);
14491 14734 return (DDI_SUCCESS);
14492 14735 }
14493 14736
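/*
 * Configure a single RAID volume identified by its VolDevHandle: look up
 * the target, issue a standard INQUIRY and create the physical LUN node.
 */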
14494 14737 static int
14495 14738 mptsas_config_raid(dev_info_t *pdip, uint16_t target, dev_info_t **dip)
14496 14739 {
14497 14740 int rval = DDI_FAILURE;
14498 14741 struct scsi_inquiry *sd_inq = NULL;
14499 14742 mptsas_t *mpt = DIP2MPT(pdip);
14500 14743 mptsas_target_t *ptgt = NULL;
14501 14744
14502 14745 mutex_enter(&mpt->m_mutex);
14503 14746 ptgt = refhash_linear_search(mpt->m_targets,
14504 14747 mptsas_target_eval_devhdl, &target);
14505 14748 mutex_exit(&mpt->m_mutex);
14506 14749 if (ptgt == NULL) {
14507 14750 mptsas_log(mpt, CE_WARN, "Volume with VolDevHandle of 0x%x "
14508 14751 "not found.", target);
14509 14752 return (rval);
14510 14753 }
14511 14754
14512 14755 sd_inq = (struct scsi_inquiry *)kmem_alloc(SUN_INQSIZE, KM_SLEEP);
14513 14756 rval = mptsas_inquiry(mpt, ptgt, 0, 0, (uchar_t *)sd_inq,
14514 14757 SUN_INQSIZE, 0, (uchar_t)0);
14515 14758
14516 14759 if ((rval == DDI_SUCCESS) && MPTSAS_VALID_LUN(sd_inq)) {
14517 14760 rval = mptsas_create_phys_lun(pdip, sd_inq, NULL, dip, ptgt,
14518 14761 0);
14519 14762 } else {
14520 14763 rval = DDI_FAILURE;
14521 14764 }
14522 14765
14523 14766 kmem_free(sd_inq, SUN_INQSIZE);
14524 14767 return (rval);
14525 14768 }
14526 14769
14527 14770 /*
14528 14771 * configure all RAID volumes for virtual iport
14529 14772 */
14530 14773 static void
14531 14774 mptsas_config_all_viport(dev_info_t *pdip)
14532 14775 {
14533 14776 mptsas_t *mpt = DIP2MPT(pdip);
14534 14777 int config, vol;
14535 14778 int target;
14536 14779 dev_info_t *lundip = NULL;
14537 14780
14538 14781 /*
14539 14782 * Get latest RAID info and search for any Volume DevHandles. If any
14540 14783 * are found, configure the volume.
14541 14784 */
14542 14785 mutex_enter(&mpt->m_mutex);
14543 14786 for (config = 0; config < mpt->m_num_raid_configs; config++) {
14544 14787 for (vol = 0; vol < MPTSAS_MAX_RAIDVOLS; vol++) {
14545 14788 if (mpt->m_raidconfig[config].m_raidvol[vol].m_israid
14546 14789 == 1) {
14547 14790 target = mpt->m_raidconfig[config].
14548 14791 m_raidvol[vol].m_raidhandle;
14549 14792 mutex_exit(&mpt->m_mutex);
14550 14793 (void) mptsas_config_raid(pdip, target,
14551 14794 &lundip);
14552 14795 mutex_enter(&mpt->m_mutex);
14553 14796 }
14554 14797 }
14555 14798 }
14556 14799 mutex_exit(&mpt->m_mutex);
14557 14800 }
14558 14801
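/*
 * Offline any child devinfo node or pathinfo node of this target whose LUN
 * no longer appears in the repluns[] data returned by REPORT LUNS.
 */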
14559 14802 static void
14560 14803 mptsas_offline_missed_luns(dev_info_t *pdip, uint16_t *repluns,
14561 14804 int lun_cnt, mptsas_target_t *ptgt)
14562 14805 {
14563 14806 dev_info_t *child = NULL, *savechild = NULL;
14564 14807 mdi_pathinfo_t *pip = NULL, *savepip = NULL;
14565 14808 uint64_t sas_wwn, wwid;
14566 14809 uint8_t phy;
14567 14810 int lun;
14568 14811 int i;
14569 14812 int find;
14570 14813 char *addr;
14571 14814 char *nodename;
14572 14815 mptsas_t *mpt = DIP2MPT(pdip);
14573 14816
14574 14817 mutex_enter(&mpt->m_mutex);
14575 14818 wwid = ptgt->m_addr.mta_wwn;
14576 14819 mutex_exit(&mpt->m_mutex);
14577 14820
14578 14821 child = ddi_get_child(pdip);
14579 14822 while (child) {
14580 14823 find = 0;
14581 14824 savechild = child;
14582 14825 child = ddi_get_next_sibling(child);
14583 14826
14584 14827 nodename = ddi_node_name(savechild);
14585 14828 if (strcmp(nodename, "smp") == 0) {
14586 14829 continue;
14587 14830 }
14588 14831
14589 14832 addr = ddi_get_name_addr(savechild);
14590 14833 if (addr == NULL) {
14591 14834 continue;
14592 14835 }
14593 14836
14594 14837 if (mptsas_parse_address(addr, &sas_wwn, &phy, &lun) !=
14595 14838 DDI_SUCCESS) {
14596 14839 continue;
14597 14840 }
14598 14841
14599 14842 if (wwid == sas_wwn) {
14600 14843 for (i = 0; i < lun_cnt; i++) {
14601 14844 if (repluns[i] == lun) {
14602 14845 find = 1;
[ 110 lines elided ]
14603 14846 break;
14604 14847 }
14605 14848 }
14606 14849 } else {
14607 14850 continue;
14608 14851 }
14609 14852 if (find == 0) {
14610 14853 /*
14611 14854 * The lun has not been there already
14612 14855 */
14613 - (void) mptsas_offline_lun(pdip, savechild, NULL,
14614 - NDI_DEVI_REMOVE);
14856 + (void) mptsas_offline_lun(savechild, NULL);
14615 14857 }
14616 14858 }
14617 14859
14618 14860 pip = mdi_get_next_client_path(pdip, NULL);
14619 14861 while (pip) {
14620 14862 find = 0;
14621 14863 savepip = pip;
14622 14864 addr = MDI_PI(pip)->pi_addr;
14623 14865
14624 14866 pip = mdi_get_next_client_path(pdip, pip);
14625 14867
14626 14868 if (addr == NULL) {
14627 14869 continue;
14628 14870 }
14629 14871
14630 14872 if (mptsas_parse_address(addr, &sas_wwn, &phy,
14631 14873 &lun) != DDI_SUCCESS) {
14632 14874 continue;
14633 14875 }
14634 14876
14635 14877 if (sas_wwn == wwid) {
14636 14878 for (i = 0; i < lun_cnt; i++) {
14637 14879 if (repluns[i] == lun) {
14638 14880 find = 1;
14639 14881 break;
[ 15 lines elided ]
14640 14882 }
14641 14883 }
14642 14884 } else {
14643 14885 continue;
14644 14886 }
14645 14887
14646 14888 if (find == 0) {
14647 14889 /*
14648 14890 * The lun has not been there already
14649 14891 */
14650 - (void) mptsas_offline_lun(pdip, NULL, savepip,
14651 - NDI_DEVI_REMOVE);
14892 + (void) mptsas_offline_lun(NULL, savepip);
14652 14893 }
14653 14894 }
14654 14895 }
14655 14896
14656 14897 /*
14657 14898 * If this enclosure doesn't exist in the enclosure list, add it. If it does,
14658 14899 * update it.
14659 14900 */
14660 14901 static void
14661 14902 mptsas_enclosure_update(mptsas_t *mpt, mptsas_enclosure_t *mep)
14662 14903 {
14663 14904 mptsas_enclosure_t *m;
14664 14905
14665 14906 ASSERT(MUTEX_HELD(&mpt->m_mutex));
14666 14907 m = mptsas_enc_lookup(mpt, mep->me_enchdl);
14667 14908 if (m != NULL) {
14909 + uint8_t *ledp;
14668 14910 m->me_flags = mep->me_flags;
14911 +
14912 +
14913 + /*
14914 + * If the number of slots and the first slot entry in the
14915 + * enclosure has not changed, then we don't need to do anything
14916 + * here. Otherwise, we need to allocate a new array for the LED
14917 + * status of the slot.
14918 + */
14919 + if (m->me_fslot == mep->me_fslot &&
14920 + m->me_nslots == mep->me_nslots)
14921 + return;
14922 +
14923 + /*
14924 + * If the number of slots or the first slot has changed, it's
14925 + * not clear that we're really in a place that we can continue
14926 + * to honor the existing flags.
14927 + */
14928 + if (mep->me_nslots > 0) {
14929 + ledp = kmem_zalloc(sizeof (uint8_t) * mep->me_nslots,
14930 + KM_SLEEP);
14931 + } else {
14932 + ledp = NULL;
14933 + }
14934 +
14935 + if (m->me_slotleds != NULL) {
14936 + kmem_free(m->me_slotleds, sizeof (uint8_t) *
14937 + m->me_nslots);
14938 + }
14939 + m->me_slotleds = ledp;
14940 + m->me_fslot = mep->me_fslot;
14941 + m->me_nslots = mep->me_nslots;
14669 14942 return;
14670 14943 }
14671 14944
14672 14945 m = kmem_zalloc(sizeof (*m), KM_SLEEP);
14673 14946 m->me_enchdl = mep->me_enchdl;
14674 14947 m->me_flags = mep->me_flags;
14948 + m->me_nslots = mep->me_nslots;
14949 + m->me_fslot = mep->me_fslot;
14950 + if (m->me_nslots > 0) {
14951 + m->me_slotleds = kmem_zalloc(sizeof (uint8_t) * mep->me_nslots,
14952 + KM_SLEEP);
14953 + /*
14954 + * It may make sense to optionally flush all of the slots and/or
14955 + * read the slot status flag here to synchronize between
14956 + * ourselves and the card. So far, that hasn't been needed
14957 + 		 * anecdotally when enumerating something new. If we do, we
14958 + * should kick that off in a taskq potentially.
14959 + */
14960 + }
14675 14961 list_insert_tail(&mpt->m_enclosures, m);
14676 14962 }
14677 14963
14678 14964 static void
14679 14965 mptsas_update_hashtab(struct mptsas *mpt)
14680 14966 {
14681 14967 uint32_t page_address;
14682 14968 int rval = 0;
14683 14969 uint16_t dev_handle;
14684 14970 mptsas_target_t *ptgt = NULL;
14685 14971 mptsas_smp_t smp_node;
14686 14972
14687 14973 /*
14688 14974 * Get latest RAID info.
14689 14975 */
14690 14976 (void) mptsas_get_raid_info(mpt);
14691 14977
14692 14978 dev_handle = mpt->m_smp_devhdl;
14693 14979 while (mpt->m_done_traverse_smp == 0) {
14694 14980 page_address = (MPI2_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL &
14695 14981 MPI2_SAS_EXPAND_PGAD_FORM_MASK) | (uint32_t)dev_handle;
14696 14982 if (mptsas_get_sas_expander_page0(mpt, page_address, &smp_node)
14697 14983 != DDI_SUCCESS) {
14698 14984 break;
14699 14985 }
14700 14986 mpt->m_smp_devhdl = dev_handle = smp_node.m_devhdl;
14701 14987 (void) mptsas_smp_alloc(mpt, &smp_node);
14702 14988 }
14703 14989
14704 14990 /*
14705 14991 * Loop over enclosures so we can understand what's there.
14706 14992 */
14707 14993 dev_handle = MPTSAS_INVALID_DEVHDL;
14708 14994 while (mpt->m_done_traverse_enc == 0) {
14709 14995 mptsas_enclosure_t me;
14710 14996
14711 14997 page_address = (MPI2_SAS_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE &
14712 14998 MPI2_SAS_ENCLOS_PGAD_FORM_MASK) | (uint32_t)dev_handle;
14713 14999
14714 15000 if (mptsas_get_enclosure_page0(mpt, page_address, &me) !=
14715 15001 DDI_SUCCESS) {
14716 15002 break;
14717 15003 }
14718 15004 dev_handle = me.me_enchdl;
14719 15005 mptsas_enclosure_update(mpt, &me);
14720 15006 }
14721 15007
14722 15008 /*
14723 15009 * Config target devices
14724 15010 */
14725 15011 dev_handle = mpt->m_dev_handle;
14726 15012
14727 15013 /*
14728 15014 	 * Loop to get sas device page 0 by GetNextHandle till
14729 15015 	 * the last handle. If the sas device is a SATA/SSP target,
14730 15016 * we try to config it.
14731 15017 */
14732 15018 while (mpt->m_done_traverse_dev == 0) {
14733 15019 ptgt = NULL;
14734 15020 page_address =
14735 15021 (MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE &
14736 15022 MPI2_SAS_DEVICE_PGAD_FORM_MASK) |
14737 15023 (uint32_t)dev_handle;
14738 15024 rval = mptsas_get_target_device_info(mpt, page_address,
14739 15025 &dev_handle, &ptgt);
14740 15026 if ((rval == DEV_INFO_FAIL_PAGE0) ||
14741 15027 (rval == DEV_INFO_FAIL_ALLOC) ||
14742 15028 (rval == DEV_INFO_FAIL_GUID)) {
14743 15029 break;
14744 15030 }
14745 15031
14746 15032 mpt->m_dev_handle = dev_handle;
14747 15033 }
14748 15034
14749 15035 }
14750 15036
14751 15037 void
14752 15038 mptsas_update_driver_data(struct mptsas *mpt)
14753 15039 {
14754 15040 mptsas_target_t *tp;
14755 15041 mptsas_smp_t *sp;
14756 15042
14757 15043 ASSERT(MUTEX_HELD(&mpt->m_mutex));
14758 15044
14759 15045 /*
14760 15046 * TODO after hard reset, update the driver data structures
14761 15047 * 1. update port/phymask mapping table mpt->m_phy_info
14762 15048 * 2. invalid all the entries in hash table
14763 15049 * m_devhdl = 0xffff and m_deviceinfo = 0
14764 15050 * 3. call sas_device_page/expander_page to update hash table
14765 15051 */
14766 15052 mptsas_update_phymask(mpt);
14767 15053
14768 15054 /*
14769 15055 * Remove all the devhdls for existing entries but leave their
14770 15056 * addresses alone. In update_hashtab() below, we'll find all
14771 15057 * targets that are still present and reassociate them with
14772 15058 * their potentially new devhdls. Leaving the targets around in
14773 15059 * this fashion allows them to be used on the tx waitq even
14774 15060 * while IOC reset is occurring.
14775 15061 */
14776 15062 for (tp = refhash_first(mpt->m_targets); tp != NULL;
14777 15063 tp = refhash_next(mpt->m_targets, tp)) {
14778 15064 tp->m_devhdl = MPTSAS_INVALID_DEVHDL;
14779 15065 tp->m_deviceinfo = 0;
14780 15066 tp->m_dr_flag = MPTSAS_DR_INACTIVE;
14781 15067 }
14782 15068 for (sp = refhash_first(mpt->m_smp_targets); sp != NULL;
14783 15069 sp = refhash_next(mpt->m_smp_targets, sp)) {
14784 15070 sp->m_devhdl = MPTSAS_INVALID_DEVHDL;
14785 15071 sp->m_deviceinfo = 0;
14786 15072 }
14787 15073 mpt->m_done_traverse_dev = 0;
14788 15074 mpt->m_done_traverse_smp = 0;
14789 15075 mpt->m_done_traverse_enc = 0;
14790 15076 mpt->m_dev_handle = mpt->m_smp_devhdl = MPTSAS_INVALID_DEVHDL;
14791 15077 mptsas_update_hashtab(mpt);
14792 15078 }
14793 15079
14794 15080 static void
[ 110 lines elided ]
14795 15081 mptsas_config_all(dev_info_t *pdip)
14796 15082 {
14797 15083 dev_info_t *smpdip = NULL;
14798 15084 mptsas_t *mpt = DIP2MPT(pdip);
14799 15085 int phymask = 0;
14800 15086 mptsas_phymask_t phy_mask;
14801 15087 mptsas_target_t *ptgt = NULL;
14802 15088 mptsas_smp_t *psmp;
14803 15089
14804 15090 /*
14805 - * Get the phymask associated to the iport
15091 + * The phymask exists if the port is active, otherwise
15092 + * nothing to do.
14806 15093 */
15094 + if (ddi_prop_exists(DDI_DEV_T_ANY, pdip,
15095 + DDI_PROP_DONTPASS | DDI_PROP_NOTPROM, "phymask") == 0)
15096 + return;
15097 +
14807 15098 phymask = ddi_prop_get_int(DDI_DEV_T_ANY, pdip, 0,
14808 15099 "phymask", 0);
14809 15100
14810 15101 /*
14811 - * Enumerate RAID volumes here (phymask == 0).
15102 + * If this is a RAID, enumerate the volumes
14812 15103 */
14813 - if (phymask == 0) {
15104 + if (mpt->m_num_raid_configs > 0) {
14814 15105 mptsas_config_all_viport(pdip);
14815 15106 return;
14816 15107 }
14817 15108
14818 15109 mutex_enter(&mpt->m_mutex);
14819 15110
14820 15111 if (!mpt->m_done_traverse_dev || !mpt->m_done_traverse_smp ||
14821 15112 !mpt->m_done_traverse_enc) {
14822 15113 mptsas_update_hashtab(mpt);
14823 15114 }
14824 15115
14825 15116 for (psmp = refhash_first(mpt->m_smp_targets); psmp != NULL;
14826 15117 psmp = refhash_next(mpt->m_smp_targets, psmp)) {
14827 15118 phy_mask = psmp->m_addr.mta_phymask;
14828 15119 if (phy_mask == phymask) {
14829 15120 smpdip = NULL;
14830 15121 mutex_exit(&mpt->m_mutex);
14831 15122 (void) mptsas_online_smp(pdip, psmp, &smpdip);
14832 15123 mutex_enter(&mpt->m_mutex);
14833 15124 }
14834 15125 }
14835 15126
14836 15127 for (ptgt = refhash_first(mpt->m_targets); ptgt != NULL;
14837 15128 ptgt = refhash_next(mpt->m_targets, ptgt)) {
14838 15129 phy_mask = ptgt->m_addr.mta_phymask;
14839 15130 if (phy_mask == phymask) {
14840 15131 mutex_exit(&mpt->m_mutex);
14841 15132 (void) mptsas_config_target(pdip, ptgt);
14842 15133 mutex_enter(&mpt->m_mutex);
14843 15134 }
14844 15135 }
14845 15136 mutex_exit(&mpt->m_mutex);
14846 15137 }
14847 15138
14848 15139 static int
14849 15140 mptsas_config_target(dev_info_t *pdip, mptsas_target_t *ptgt)
14850 15141 {
14851 15142 int rval = DDI_FAILURE;
14852 15143 dev_info_t *tdip;
14853 15144
14854 15145 rval = mptsas_config_luns(pdip, ptgt);
14855 15146 if (rval != DDI_SUCCESS) {
14856 15147 /*
14857 15148 * The return value means the SCMD_REPORT_LUNS
14858 15149 		 * did not execute successfully. The target may not
14859 15150 		 * support such a command.
14860 15151 */
14861 15152 rval = mptsas_probe_lun(pdip, 0, &tdip, ptgt);
14862 15153 }
14863 15154 return (rval);
14864 15155 }
14865 15156
14866 15157 /*
14867 15158  * Return failure if not all the children/paths are freed.
14868 15159  * If there is any path under the HBA, the return value will always be
14869 15160  * failure because we didn't call mdi_pi_free for the path.
14870 15161 */
14871 15162 static int
14872 15163 mptsas_offline_target(dev_info_t *pdip, char *name)
14873 15164 {
14874 15165 dev_info_t *child = NULL, *prechild = NULL;
14875 15166 mdi_pathinfo_t *pip = NULL, *savepip = NULL;
14876 15167 int tmp_rval, rval = DDI_SUCCESS;
14877 15168 char *addr, *cp;
14878 15169 size_t s;
14879 15170 mptsas_t *mpt = DIP2MPT(pdip);
14880 15171
14881 15172 child = ddi_get_child(pdip);
14882 15173 while (child) {
14883 15174 addr = ddi_get_name_addr(child);
14884 15175 prechild = child;
14885 15176 child = ddi_get_next_sibling(child);
14886 15177
14887 15178 if (addr == NULL) {
14888 15179 continue;
14889 15180 }
[ 66 lines elided ]
14890 15181 if ((cp = strchr(addr, ',')) == NULL) {
14891 15182 continue;
14892 15183 }
14893 15184
14894 15185 s = (uintptr_t)cp - (uintptr_t)addr;
14895 15186
14896 15187 if (strncmp(addr, name, s) != 0) {
14897 15188 continue;
14898 15189 }
14899 15190
14900 - tmp_rval = mptsas_offline_lun(pdip, prechild, NULL,
14901 - NDI_DEVI_REMOVE);
15191 + tmp_rval = mptsas_offline_lun(prechild, NULL);
14902 15192 if (tmp_rval != DDI_SUCCESS) {
14903 15193 rval = DDI_FAILURE;
14904 15194 if (ndi_prop_create_boolean(DDI_DEV_T_NONE,
14905 15195 prechild, MPTSAS_DEV_GONE) !=
14906 15196 DDI_PROP_SUCCESS) {
14907 15197 mptsas_log(mpt, CE_WARN, "mptsas driver "
14908 15198 "unable to create property for "
14909 15199 "SAS %s (MPTSAS_DEV_GONE)", addr);
14910 15200 }
14911 15201 }
14912 15202 }
14913 15203
14914 15204 pip = mdi_get_next_client_path(pdip, NULL);
14915 15205 while (pip) {
14916 15206 addr = MDI_PI(pip)->pi_addr;
14917 15207 savepip = pip;
14918 15208 pip = mdi_get_next_client_path(pdip, pip);
14919 15209 if (addr == NULL) {
14920 15210 continue;
14921 15211 }
14922 15212
[ 11 lines elided ]
14923 15213 if ((cp = strchr(addr, ',')) == NULL) {
14924 15214 continue;
14925 15215 }
14926 15216
14927 15217 s = (uintptr_t)cp - (uintptr_t)addr;
14928 15218
14929 15219 if (strncmp(addr, name, s) != 0) {
14930 15220 continue;
14931 15221 }
14932 15222
14933 - (void) mptsas_offline_lun(pdip, NULL, savepip,
14934 - NDI_DEVI_REMOVE);
15223 + (void) mptsas_offline_lun(NULL, savepip);
14935 15224 /*
14936 15225 * driver will not invoke mdi_pi_free, so path will not
14937 15226 * be freed forever, return DDI_FAILURE.
14938 15227 */
14939 15228 rval = DDI_FAILURE;
14940 15229 }
14941 15230 return (rval);
14942 15231 }
14943 15232
14944 15233 static int
14945 -mptsas_offline_lun(dev_info_t *pdip, dev_info_t *rdip,
14946 - mdi_pathinfo_t *rpip, uint_t flags)
15234 +mptsas_offline_lun(dev_info_t *rdip, mdi_pathinfo_t *rpip)
14947 15235 {
14948 15236 int rval = DDI_FAILURE;
14949 - char *devname;
14950 - dev_info_t *cdip, *parent;
14951 15237
14952 15238 if (rpip != NULL) {
14953 - parent = scsi_vhci_dip;
14954 - cdip = mdi_pi_get_client(rpip);
14955 - } else if (rdip != NULL) {
14956 - parent = pdip;
14957 - cdip = rdip;
14958 - } else {
14959 - return (DDI_FAILURE);
14960 - }
14961 -
14962 - /*
14963 - * Make sure node is attached otherwise
14964 - * it won't have related cache nodes to
14965 - * clean up. i_ddi_devi_attached is
14966 - * similiar to i_ddi_node_state(cdip) >=
14967 - * DS_ATTACHED.
14968 - */
14969 - if (i_ddi_devi_attached(cdip)) {
14970 -
14971 - /* Get full devname */
14972 - devname = kmem_alloc(MAXNAMELEN + 1, KM_SLEEP);
14973 - (void) ddi_deviname(cdip, devname);
14974 - /* Clean cache */
14975 - (void) devfs_clean(parent, devname + 1,
14976 - DV_CLEAN_FORCE);
14977 - kmem_free(devname, MAXNAMELEN + 1);
14978 - }
14979 - if (rpip != NULL) {
14980 15239 if (MDI_PI_IS_OFFLINE(rpip)) {
14981 15240 rval = DDI_SUCCESS;
14982 15241 } else {
14983 15242 rval = mdi_pi_offline(rpip, 0);
14984 15243 }
14985 - } else {
14986 - rval = ndi_devi_offline(cdip, flags);
15244 + } else if (rdip != NULL) {
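		/*
		 * NDI_DEVFS_CLEAN has the framework clean up the devfs
		 * cache entries for this node, so no explicit devfs_clean()
		 * call is needed here.
		 */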
15245 + rval = ndi_devi_offline(rdip,
15246 + NDI_DEVFS_CLEAN | NDI_DEVI_REMOVE | NDI_DEVI_GONE);
14987 15247 }
14988 15248
14989 15249 return (rval);
14990 15250 }
14991 15251
14992 15252 static dev_info_t *
14993 15253 mptsas_find_smp_child(dev_info_t *parent, char *str_wwn)
14994 15254 {
14995 15255 dev_info_t *child = NULL;
14996 15256 char *smp_wwn = NULL;
14997 15257
14998 15258 child = ddi_get_child(parent);
14999 15259 while (child) {
15000 15260 if (ddi_prop_lookup_string(DDI_DEV_T_ANY, child,
15001 15261 DDI_PROP_DONTPASS, SMP_WWN, &smp_wwn)
15002 15262 != DDI_SUCCESS) {
15003 15263 child = ddi_get_next_sibling(child);
15004 15264 continue;
15005 15265 }
15006 15266
15007 15267 if (strcmp(smp_wwn, str_wwn) == 0) {
[ 11 lines elided ]
15008 15268 ddi_prop_free(smp_wwn);
15009 15269 break;
15010 15270 }
15011 15271 child = ddi_get_next_sibling(child);
15012 15272 ddi_prop_free(smp_wwn);
15013 15273 }
15014 15274 return (child);
15015 15275 }
15016 15276
15017 15277 static int
15018 -mptsas_offline_smp(dev_info_t *pdip, mptsas_smp_t *smp_node, uint_t flags)
15278 +mptsas_offline_smp(dev_info_t *pdip, mptsas_smp_t *smp_node)
15019 15279 {
15020 15280 int rval = DDI_FAILURE;
15021 - char *devname;
15022 15281 char wwn_str[MPTSAS_WWN_STRLEN];
15023 15282 dev_info_t *cdip;
15024 15283
15025 15284 (void) sprintf(wwn_str, "%"PRIx64, smp_node->m_addr.mta_wwn);
15026 15285
15027 15286 cdip = mptsas_find_smp_child(pdip, wwn_str);
15028 -
15029 15287 if (cdip == NULL)
15030 15288 return (DDI_SUCCESS);
15031 15289
15032 - /*
15033 - * Make sure node is attached otherwise
15034 - * it won't have related cache nodes to
15035 - * clean up. i_ddi_devi_attached is
15036 - * similiar to i_ddi_node_state(cdip) >=
15037 - * DS_ATTACHED.
15038 - */
15039 - if (i_ddi_devi_attached(cdip)) {
15290 + rval = ndi_devi_offline(cdip, NDI_DEVFS_CLEAN | NDI_DEVI_REMOVE);
15040 15291
15041 - /* Get full devname */
15042 - devname = kmem_alloc(MAXNAMELEN + 1, KM_SLEEP);
15043 - (void) ddi_deviname(cdip, devname);
15044 - /* Clean cache */
15045 - (void) devfs_clean(pdip, devname + 1,
15046 - DV_CLEAN_FORCE);
15047 - kmem_free(devname, MAXNAMELEN + 1);
15048 - }
15049 -
15050 - rval = ndi_devi_offline(cdip, flags);
15051 -
15052 15292 return (rval);
15053 15293 }
15054 15294
15055 15295 static dev_info_t *
15056 15296 mptsas_find_child(dev_info_t *pdip, char *name)
15057 15297 {
15058 15298 dev_info_t *child = NULL;
15059 15299 char *rname = NULL;
15060 15300 int rval = DDI_FAILURE;
15061 15301
15062 15302 rname = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15063 15303
15064 15304 child = ddi_get_child(pdip);
15065 15305 while (child) {
15066 15306 rval = mptsas_name_child(child, rname, SCSI_MAXNAMELEN);
15067 15307 if (rval != DDI_SUCCESS) {
15068 15308 child = ddi_get_next_sibling(child);
15069 15309 bzero(rname, SCSI_MAXNAMELEN);
15070 15310 continue;
15071 15311 }
15072 15312
15073 15313 if (strcmp(rname, name) == 0) {
15074 15314 break;
15075 15315 }
15076 15316 child = ddi_get_next_sibling(child);
15077 15317 bzero(rname, SCSI_MAXNAMELEN);
15078 15318 }
15079 15319
15080 15320 kmem_free(rname, SCSI_MAXNAMELEN);
15081 15321
15082 15322 return (child);
15083 15323 }
15084 15324
15085 15325
15086 15326 static dev_info_t *
15087 15327 mptsas_find_child_addr(dev_info_t *pdip, uint64_t sasaddr, int lun)
15088 15328 {
15089 15329 dev_info_t *child = NULL;
15090 15330 char *name = NULL;
15091 15331 char *addr = NULL;
15092 15332
15093 15333 name = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15094 15334 addr = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15095 15335 (void) sprintf(name, "%016"PRIx64, sasaddr);
15096 15336 (void) sprintf(addr, "w%s,%x", name, lun);
15097 15337 child = mptsas_find_child(pdip, addr);
15098 15338 kmem_free(name, SCSI_MAXNAMELEN);
15099 15339 kmem_free(addr, SCSI_MAXNAMELEN);
15100 15340 return (child);
15101 15341 }
15102 15342
15103 15343 static dev_info_t *
15104 15344 mptsas_find_child_phy(dev_info_t *pdip, uint8_t phy)
15105 15345 {
15106 15346 dev_info_t *child;
15107 15347 char *addr;
15108 15348
15109 15349 addr = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15110 15350 (void) sprintf(addr, "p%x,0", phy);
15111 15351 child = mptsas_find_child(pdip, addr);
15112 15352 kmem_free(addr, SCSI_MAXNAMELEN);
15113 15353 return (child);
15114 15354 }
15115 15355
15116 15356 static mdi_pathinfo_t *
15117 15357 mptsas_find_path_phy(dev_info_t *pdip, uint8_t phy)
15118 15358 {
15119 15359 mdi_pathinfo_t *path;
15120 15360 char *addr = NULL;
15121 15361
15122 15362 addr = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15123 15363 (void) sprintf(addr, "p%x,0", phy);
15124 15364 path = mdi_pi_find(pdip, NULL, addr);
15125 15365 kmem_free(addr, SCSI_MAXNAMELEN);
15126 15366 return (path);
15127 15367 }
15128 15368
15129 15369 static mdi_pathinfo_t *
15130 15370 mptsas_find_path_addr(dev_info_t *parent, uint64_t sasaddr, int lun)
15131 15371 {
15132 15372 mdi_pathinfo_t *path;
15133 15373 char *name = NULL;
15134 15374 char *addr = NULL;
15135 15375
15136 15376 name = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15137 15377 addr = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15138 15378 (void) sprintf(name, "%016"PRIx64, sasaddr);
15139 15379 (void) sprintf(addr, "w%s,%x", name, lun);
15140 15380 path = mdi_pi_find(parent, NULL, addr);
15141 15381 kmem_free(name, SCSI_MAXNAMELEN);
15142 15382 kmem_free(addr, SCSI_MAXNAMELEN);
15143 15383
15144 15384 return (path);
15145 15385 }
15146 15386
15147 15387 static int
15148 15388 mptsas_create_lun(dev_info_t *pdip, struct scsi_inquiry *sd_inq,
15149 15389 dev_info_t **lun_dip, mptsas_target_t *ptgt, int lun)
15150 15390 {
15151 15391 int i = 0;
15152 15392 uchar_t *inq83 = NULL;
15153 15393 int inq83_len1 = 0xFF;
15154 15394 int inq83_len = 0;
15155 15395 int rval = DDI_FAILURE;
15156 15396 ddi_devid_t devid;
15157 15397 char *guid = NULL;
15158 15398 int target = ptgt->m_devhdl;
15159 15399 mdi_pathinfo_t *pip = NULL;
15160 15400 mptsas_t *mpt = DIP2MPT(pdip);
15161 15401
15162 15402 /*
15163 15403 * For DVD/CD ROM and tape devices and optical
15164 15404 * devices, we won't try to enumerate them under
15165 15405 * scsi_vhci, so no need to try page83
15166 15406 */
15167 15407 if (sd_inq && (sd_inq->inq_dtype == DTYPE_RODIRECT ||
15168 15408 sd_inq->inq_dtype == DTYPE_OPTICAL ||
15169 15409 sd_inq->inq_dtype == DTYPE_ESI))
15170 15410 goto create_lun;
15171 15411
15172 15412 /*
15173 15413 * The LCA returns good SCSI status, but corrupt page 83 data the first
15174 15414 * time it is queried. The solution is to keep trying to request page83
15175 15415 * and verify the GUID is not (DDI_NOT_WELL_FORMED) in
[ 114 lines elided ]
15176 15416 	 * mptsas_inq83_retry_timeout seconds. If the timeout expires, the
15177 15417 	 * driver gives up on the VPD page at this stage and fails the enumeration.
15178 15418 */
15179 15419
15180 15420 inq83 = kmem_zalloc(inq83_len1, KM_SLEEP);
15181 15421
15182 15422 for (i = 0; i < mptsas_inq83_retry_timeout; i++) {
15183 15423 rval = mptsas_inquiry(mpt, ptgt, lun, 0x83, inq83,
15184 15424 inq83_len1, &inq83_len, 1);
15185 15425 if (rval != 0) {
15186 - mptsas_log(mpt, CE_WARN, "!mptsas request inquiry page "
15426 + mptsas_log(mpt, CE_WARN, "mptsas request inquiry page "
15187 15427 "0x83 for target:%x, lun:%x failed!", target, lun);
15188 15428 if (mptsas_physical_bind_failed_page_83 != B_FALSE)
15189 15429 goto create_lun;
15190 15430 goto out;
15191 15431 }
15192 15432 /*
15193 15433 * create DEVID from inquiry data
15194 15434 */
15195 15435 if ((rval = ddi_devid_scsi_encode(
15196 15436 DEVID_SCSI_ENCODE_VERSION_LATEST, NULL, (uchar_t *)sd_inq,
15197 15437 sizeof (struct scsi_inquiry), NULL, 0, inq83,
15198 15438 (size_t)inq83_len, &devid)) == DDI_SUCCESS) {
15199 15439 /*
15200 15440 * extract GUID from DEVID
15201 15441 */
15202 15442 guid = ddi_devid_to_guid(devid);
[ 6 lines elided ]
15203 15443
15204 15444 /*
15205 15445 * Do not enable MPXIO if the strlen(guid) is greater
15206 15446 			 * than MPTSAS_MAX_GUID_LEN, this constraint will be
15207 15447 			 * handled by the framework later.
15208 15448 */
15209 15449 if (guid && (strlen(guid) > MPTSAS_MAX_GUID_LEN)) {
15210 15450 ddi_devid_free_guid(guid);
15211 15451 guid = NULL;
15212 15452 if (mpt->m_mpxio_enable == TRUE) {
15213 - mptsas_log(mpt, CE_NOTE, "!Target:%x, "
15453 + mptsas_log(mpt, CE_NOTE, "Target:%x, "
15214 15454 "lun:%x doesn't have a valid GUID, "
15215 15455 "multipathing for this drive is "
15216 15456 "not enabled", target, lun);
15217 15457 }
15218 15458 }
15219 15459
15220 15460 /*
15221 15461 * devid no longer needed
15222 15462 */
15223 15463 ddi_devid_free(devid);
15224 15464 break;
[ 1 line elided ]
15225 15465 } else if (rval == DDI_NOT_WELL_FORMED) {
15226 15466 /*
15227 15467 			 * A return value of ddi_devid_scsi_encode equal to
15228 15468 			 * DDI_NOT_WELL_FORMED means DEVID_RETRY, so it is worth
15229 15469 			 * retrying inquiry page 0x83 to get the GUID.
15230 15470 */
15231 15471 NDBG20(("Not well formed devid, retry..."));
15232 15472 delay(1 * drv_usectohz(1000000));
15233 15473 continue;
15234 15474 } else {
15235 - mptsas_log(mpt, CE_WARN, "!Encode devid failed for "
15475 + mptsas_log(mpt, CE_WARN, "Encode devid failed for "
15236 15476 "path target:%x, lun:%x", target, lun);
15237 15477 rval = DDI_FAILURE;
15238 15478 goto create_lun;
15239 15479 }
15240 15480 }
15241 15481
15242 15482 if (i == mptsas_inq83_retry_timeout) {
15243 - mptsas_log(mpt, CE_WARN, "!Repeated page83 requests timeout "
15483 + mptsas_log(mpt, CE_WARN, "Repeated page83 requests timeout "
15244 15484 "for path target:%x, lun:%x", target, lun);
15245 15485 }
15246 15486
15247 15487 rval = DDI_FAILURE;
15248 15488
15249 15489 create_lun:
15250 15490 if ((guid != NULL) && (mpt->m_mpxio_enable == TRUE)) {
15251 15491 rval = mptsas_create_virt_lun(pdip, sd_inq, guid, lun_dip, &pip,
15252 15492 ptgt, lun);
15253 15493 }
15254 15494 if (rval != DDI_SUCCESS) {
15255 15495 rval = mptsas_create_phys_lun(pdip, sd_inq, guid, lun_dip,
15256 15496 ptgt, lun);
15257 15497
15258 15498 }
15259 15499 out:
15260 15500 if (guid != NULL) {
15261 15501 /*
15262 15502 * guid no longer needed
15263 15503 */
15264 15504 ddi_devid_free_guid(guid);
15265 15505 }
15266 15506 if (inq83 != NULL)
15267 15507 kmem_free(inq83, inq83_len1);
15268 15508 return (rval);
15269 15509 }
15270 15510
15271 15511 static int
15272 15512 mptsas_create_virt_lun(dev_info_t *pdip, struct scsi_inquiry *inq, char *guid,
15273 15513 dev_info_t **lun_dip, mdi_pathinfo_t **pip, mptsas_target_t *ptgt, int lun)
15274 15514 {
15275 15515 int target;
15276 15516 char *nodename = NULL;
15277 15517 char **compatible = NULL;
15278 15518 int ncompatible = 0;
15279 15519 int mdi_rtn = MDI_FAILURE;
15280 15520 int rval = DDI_FAILURE;
15281 15521 char *old_guid = NULL;
15282 15522 mptsas_t *mpt = DIP2MPT(pdip);
15283 15523 char *lun_addr = NULL;
15284 15524 char *wwn_str = NULL;
15285 15525 char *attached_wwn_str = NULL;
15286 15526 char *component = NULL;
15287 15527 uint8_t phy = 0xFF;
15288 15528 uint64_t sas_wwn;
15289 15529 int64_t lun64 = 0;
15290 15530 uint32_t devinfo;
15291 15531 uint16_t dev_hdl;
15292 15532 uint16_t pdev_hdl;
15293 15533 uint64_t dev_sas_wwn;
15294 15534 uint64_t pdev_sas_wwn;
15295 15535 uint32_t pdev_info;
15296 15536 uint8_t physport;
15297 15537 uint8_t phy_id;
15298 15538 uint32_t page_address;
15299 15539 uint16_t bay_num, enclosure, io_flags;
15300 15540 char pdev_wwn_str[MPTSAS_WWN_STRLEN];
15301 15541 uint32_t dev_info;
15302 15542
15303 15543 mutex_enter(&mpt->m_mutex);
15304 15544 target = ptgt->m_devhdl;
15305 15545 sas_wwn = ptgt->m_addr.mta_wwn;
15306 15546 devinfo = ptgt->m_deviceinfo;
15307 15547 phy = ptgt->m_phynum;
15308 15548 mutex_exit(&mpt->m_mutex);
15309 15549
15310 15550 if (sas_wwn) {
15311 15551 *pip = mptsas_find_path_addr(pdip, sas_wwn, lun);
15312 15552 } else {
15313 15553 *pip = mptsas_find_path_phy(pdip, phy);
15314 15554 }
15315 15555
15316 15556 if (*pip != NULL) {
15317 15557 *lun_dip = MDI_PI(*pip)->pi_client->ct_dip;
15318 15558 ASSERT(*lun_dip != NULL);
15319 15559 if (ddi_prop_lookup_string(DDI_DEV_T_ANY, *lun_dip,
15320 15560 (DDI_PROP_DONTPASS | DDI_PROP_NOTPROM),
15321 15561 MDI_CLIENT_GUID_PROP, &old_guid) == DDI_SUCCESS) {
15322 15562 if (strncmp(guid, old_guid, strlen(guid)) == 0) {
15323 15563 /*
15324 15564 * Same path back online again.
15325 15565 */
15326 15566 (void) ddi_prop_free(old_guid);
15327 15567 if ((!MDI_PI_IS_ONLINE(*pip)) &&
15328 15568 (!MDI_PI_IS_STANDBY(*pip)) &&
15329 15569 (ptgt->m_tgt_unconfigured == 0)) {
15330 15570 rval = mdi_pi_online(*pip, 0);
15331 - mutex_enter(&mpt->m_mutex);
15332 - ptgt->m_led_status = 0;
15333 - (void) mptsas_flush_led_status(mpt,
15334 - ptgt);
15335 - mutex_exit(&mpt->m_mutex);
15336 15571 } else {
15337 15572 rval = DDI_SUCCESS;
15338 15573 }
15339 15574 if (rval != DDI_SUCCESS) {
15340 15575 mptsas_log(mpt, CE_WARN, "path:target: "
15341 15576 "%x, lun:%x online failed!", target,
15342 15577 lun);
15343 15578 *pip = NULL;
15344 15579 *lun_dip = NULL;
15345 15580 }
15346 15581 return (rval);
15347 15582 } else {
15348 15583 /*
15349 15584 * The GUID of the LUN has changed, possibly because
15350 15585 * the customer mapped another volume to the
15351 15586 * same LUN.
15352 15587 */
15353 15588 mptsas_log(mpt, CE_WARN, "The GUID of the "
15354 15589 "target:%x, lun:%x was changed, maybe "
15355 15590 "because someone mapped another volume "
15356 15591 "to the same LUN", target, lun);
15357 15592 (void) ddi_prop_free(old_guid);
15358 15593 if (!MDI_PI_IS_OFFLINE(*pip)) {
15359 15594 rval = mdi_pi_offline(*pip, 0);
15360 15595 if (rval != MDI_SUCCESS) {
15361 15596 mptsas_log(mpt, CE_WARN, "path:"
15362 15597 "target:%x, lun:%x offline "
15363 15598 "failed!", target, lun);
15364 15599 *pip = NULL;
15365 15600 *lun_dip = NULL;
15366 15601 return (DDI_FAILURE);
15367 15602 }
15368 15603 }
15369 - if (mdi_pi_free(*pip, 0) != MDI_SUCCESS) {
15604 + if (mdi_pi_free(*pip,
15605 + MDI_CLIENT_FLAGS_NO_EVENT) != MDI_SUCCESS) {
15370 15606 mptsas_log(mpt, CE_WARN, "path:target:"
15371 15607 "%x, lun:%x free failed!", target,
15372 15608 lun);
15373 15609 *pip = NULL;
15374 15610 *lun_dip = NULL;
15375 15611 return (DDI_FAILURE);
15376 15612 }
15377 15613 }
15378 15614 } else {
15379 15615 mptsas_log(mpt, CE_WARN, "Can't get client-guid "
15380 15616 "property for path:target:%x, lun:%x", target, lun);
15381 15617 *pip = NULL;
15382 15618 *lun_dip = NULL;
15383 15619 return (DDI_FAILURE);
15384 15620 }
15385 15621 }
15386 15622 scsi_hba_nodename_compatible_get(inq, NULL,
15387 15623 inq->inq_dtype, NULL, &nodename, &compatible, &ncompatible);
15388 15624
15389 15625 /*
15390 15626 * if nodename can't be determined then print a message and skip it
15391 15627 */
15392 15628 if (nodename == NULL) {
15393 15629 mptsas_log(mpt, CE_WARN, "mptsas driver found no compatible "
15394 15630 "driver for target%d lun %d dtype:0x%02x", target, lun,
15395 15631 inq->inq_dtype);
15396 15632 return (DDI_FAILURE);
15397 15633 }
15398 15634
15399 15635 wwn_str = kmem_zalloc(MPTSAS_WWN_STRLEN, KM_SLEEP);
15400 15636 /* The property is needed by MPAPI */
15401 15637 (void) sprintf(wwn_str, "%016"PRIx64, sas_wwn);
15402 15638
15403 15639 lun_addr = kmem_zalloc(SCSI_MAXNAMELEN, KM_SLEEP);
15404 15640 if (guid) {
15405 15641 (void) sprintf(lun_addr, "w%s,%x", wwn_str, lun);
15406 15642 (void) sprintf(wwn_str, "w%016"PRIx64, sas_wwn);
15407 15643 } else {
15408 15644 (void) sprintf(lun_addr, "p%x,%x", phy, lun);
15409 15645 (void) sprintf(wwn_str, "p%x", phy);
15410 15646 }
15411 15647
15412 15648 mdi_rtn = mdi_pi_alloc_compatible(pdip, nodename,
15413 15649 guid, lun_addr, compatible, ncompatible,
15414 15650 0, pip);
15415 15651 if (mdi_rtn == MDI_SUCCESS) {
15416 15652
15417 15653 if (mdi_prop_update_string(*pip, MDI_GUID,
15418 15654 guid) != DDI_SUCCESS) {
15419 15655 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15420 15656 "create prop for target %d lun %d (MDI_GUID)",
15421 15657 target, lun);
15422 15658 mdi_rtn = MDI_FAILURE;
15423 15659 goto virt_create_done;
15424 15660 }
15425 15661
15426 15662 if (mdi_prop_update_int(*pip, LUN_PROP,
15427 15663 lun) != DDI_SUCCESS) {
15428 15664 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15429 15665 "create prop for target %d lun %d (LUN_PROP)",
15430 15666 target, lun);
15431 15667 mdi_rtn = MDI_FAILURE;
15432 15668 goto virt_create_done;
15433 15669 }
15434 15670 lun64 = (int64_t)lun;
15435 15671 if (mdi_prop_update_int64(*pip, LUN64_PROP,
15436 15672 lun64) != DDI_SUCCESS) {
15437 15673 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15438 15674 "create prop for target %d (LUN64_PROP)",
15439 15675 target);
15440 15676 mdi_rtn = MDI_FAILURE;
15441 15677 goto virt_create_done;
15442 15678 }
15443 15679 if (mdi_prop_update_string_array(*pip, "compatible",
15444 15680 compatible, ncompatible) !=
15445 15681 DDI_PROP_SUCCESS) {
15446 15682 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15447 15683 "create prop for target %d lun %d (COMPATIBLE)",
15448 15684 target, lun);
15449 15685 mdi_rtn = MDI_FAILURE;
15450 15686 goto virt_create_done;
15451 15687 }
15452 15688 if (sas_wwn && (mdi_prop_update_string(*pip,
15453 15689 SCSI_ADDR_PROP_TARGET_PORT, wwn_str) != DDI_PROP_SUCCESS)) {
15454 15690 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15455 15691 "create prop for target %d lun %d "
15456 15692 "(target-port)", target, lun);
15457 15693 mdi_rtn = MDI_FAILURE;
15458 15694 goto virt_create_done;
15459 15695 } else if ((sas_wwn == 0) && (mdi_prop_update_int(*pip,
15460 15696 "sata-phy", phy) != DDI_PROP_SUCCESS)) {
15461 15697 /*
15462 15698 * Direct attached SATA device without DeviceName
15463 15699 */
15464 15700 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15465 15701 "create prop for SAS target %d lun %d "
15466 15702 "(sata-phy)", target, lun);
15467 15703 mdi_rtn = MDI_FAILURE;
15468 15704 goto virt_create_done;
15469 15705 }
15470 15706 mutex_enter(&mpt->m_mutex);
15471 15707
15472 15708 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
15473 15709 MPI2_SAS_DEVICE_PGAD_FORM_MASK) |
15474 15710 (uint32_t)ptgt->m_devhdl;
15475 15711 rval = mptsas_get_sas_device_page0(mpt, page_address,
15476 15712 &dev_hdl, &dev_sas_wwn, &dev_info, &physport,
15477 15713 &phy_id, &pdev_hdl, &bay_num, &enclosure, &io_flags);
15478 15714 if (rval != DDI_SUCCESS) {
15479 15715 mutex_exit(&mpt->m_mutex);
15480 15716 mptsas_log(mpt, CE_WARN, "mptsas unable to get "
15481 15717 "parent device for handle %d", page_address);
15482 15718 mdi_rtn = MDI_FAILURE;
15483 15719 goto virt_create_done;
15484 15720 }
15485 15721
15486 15722 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
15487 15723 MPI2_SAS_DEVICE_PGAD_FORM_MASK) | (uint32_t)pdev_hdl;
15488 15724 rval = mptsas_get_sas_device_page0(mpt, page_address,
15489 15725 &dev_hdl, &pdev_sas_wwn, &pdev_info, &physport,
15490 15726 &phy_id, &pdev_hdl, &bay_num, &enclosure, &io_flags);
15491 15727 if (rval != DDI_SUCCESS) {
15492 15728 mutex_exit(&mpt->m_mutex);
15493 15729 mptsas_log(mpt, CE_WARN, "mptsas unable to get "
15494 15730 "device info for handle %d", page_address);
15495 15731 mdi_rtn = MDI_FAILURE;
15496 15732 goto virt_create_done;
15497 15733 }
15498 15734
15499 15735 mutex_exit(&mpt->m_mutex);
15500 15736
15501 15737 /*
15502 15738 * If this device is directly attached to the controller,
15503 15739 * set the attached-port to the base wwid.
15504 15740 */
15505 15741 if ((ptgt->m_deviceinfo & DEVINFO_DIRECT_ATTACHED)
15506 15742 != DEVINFO_DIRECT_ATTACHED) {
15507 15743 (void) sprintf(pdev_wwn_str, "w%016"PRIx64,
15508 15744 pdev_sas_wwn);
15509 15745 } else {
15510 15746 /*
15511 15747 * Update the iport's attached-port to guid
15512 15748 */
15513 15749 if (sas_wwn == 0) {
15514 15750 (void) sprintf(wwn_str, "p%x", phy);
15515 15751 } else {
15516 15752 (void) sprintf(wwn_str, "w%016"PRIx64, sas_wwn);
15517 15753 }
15518 15754 if (ddi_prop_update_string(DDI_DEV_T_NONE,
15519 15755 pdip, SCSI_ADDR_PROP_ATTACHED_PORT, wwn_str) !=
15520 15756 DDI_PROP_SUCCESS) {
15521 15757 mptsas_log(mpt, CE_WARN,
15522 15758 "mptsas unable to create "
15523 15759 "property for iport target-port"
15524 15760 " %s (sas_wwn)",
15525 15761 wwn_str);
15526 15762 mdi_rtn = MDI_FAILURE;
15527 15763 goto virt_create_done;
15528 15764 }
15529 15765
15530 15766 (void) sprintf(pdev_wwn_str, "w%016"PRIx64,
15531 15767 mpt->un.m_base_wwid);
15532 15768 }
15533 15769
15534 15770 if (IS_SATA_DEVICE(ptgt->m_deviceinfo)) {
15535 15771 char uabuf[SCSI_WWN_BUFLEN];
15536 15772
15537 15773 if (scsi_wwn_to_wwnstr(dev_sas_wwn, 1, uabuf) == NULL) {
15538 15774 mptsas_log(mpt, CE_WARN,
15539 15775 "mptsas unable to format SATA bridge WWN");
15540 15776 mdi_rtn = MDI_FAILURE;
15541 15777 goto virt_create_done;
15542 15778 }
15543 15779
15544 15780 if (mdi_prop_update_string(*pip,
15545 15781 SCSI_ADDR_PROP_BRIDGE_PORT, uabuf) !=
15546 15782 DDI_SUCCESS) {
15547 15783 mptsas_log(mpt, CE_WARN,
15548 15784 "mptsas unable to create SCSI bridge port "
15549 15785 "property for SATA device");
15550 15786 mdi_rtn = MDI_FAILURE;
15551 15787 goto virt_create_done;
15552 15788 }
15553 15789 }
15554 15790
15555 15791 if (mdi_prop_update_string(*pip,
15556 15792 SCSI_ADDR_PROP_ATTACHED_PORT, pdev_wwn_str) !=
15557 15793 DDI_PROP_SUCCESS) {
15558 15794 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
15559 15795 "property for iport attached-port %s (sas_wwn)",
15560 15796 attached_wwn_str);
15561 15797 mdi_rtn = MDI_FAILURE;
15562 15798 goto virt_create_done;
15563 15799 }
15564 15800
15565 15801
15566 15802 if (inq->inq_dtype == 0) {
15567 15803 component = kmem_zalloc(MAXPATHLEN, KM_SLEEP);
15568 15804 /*
15569 15805 * set obp path for pathinfo
15570 15806 */
15571 15807 (void) snprintf(component, MAXPATHLEN,
15572 15808 "disk@%s", lun_addr);
15573 15809
15574 15810 if (mdi_pi_pathname_obp_set(*pip, component) !=
15575 15811 DDI_SUCCESS) {
15576 15812 mptsas_log(mpt, CE_WARN, "mpt_sas driver "
15577 15813 "unable to set obp-path for object %s",
15578 15814 component);
15579 15815 mdi_rtn = MDI_FAILURE;
15580 15816 goto virt_create_done;
15581 15817 }
15582 15818 }
15583 15819
15584 15820 *lun_dip = MDI_PI(*pip)->pi_client->ct_dip;
15585 15821 if (devinfo & (MPI2_SAS_DEVICE_INFO_SATA_DEVICE |
15586 15822 MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE)) {
15587 15823 if ((ndi_prop_update_int(DDI_DEV_T_NONE, *lun_dip,
15588 15824 "pm-capable", 1)) !=
15589 15825 DDI_PROP_SUCCESS) {
15590 15826 mptsas_log(mpt, CE_WARN, "mptsas driver "
15591 15827 "failed to create pm-capable "
15592 15828 "property, target %d", target);
15593 15829 mdi_rtn = MDI_FAILURE;
15594 15830 goto virt_create_done;
15595 15831 }
15596 15832 }
15597 15833 /*
15598 15834 * Create the phy-num property
15599 15835 */
15600 15836 if (mdi_prop_update_int(*pip, "phy-num",
15601 15837 ptgt->m_phynum) != DDI_SUCCESS) {
15602 15838 mptsas_log(mpt, CE_WARN, "mptsas driver unable to "
15603 15839 "create phy-num property for target %d lun %d",
15604 15840 target, lun);
15605 15841 mdi_rtn = MDI_FAILURE;
15606 15842 goto virt_create_done;
15607 15843 }
15608 15844 NDBG20(("new path:%s onlining,", MDI_PI(*pip)->pi_addr));
15609 15845 mdi_rtn = mdi_pi_online(*pip, 0);
15610 - if (mdi_rtn == MDI_SUCCESS) {
15611 - mutex_enter(&mpt->m_mutex);
15612 - ptgt->m_led_status = 0;
15613 - (void) mptsas_flush_led_status(mpt, ptgt);
15614 - mutex_exit(&mpt->m_mutex);
15615 - }
15616 15846 if (mdi_rtn == MDI_NOT_SUPPORTED) {
15617 15847 mdi_rtn = MDI_FAILURE;
15618 15848 }
15619 15849 virt_create_done:
15620 15850 if (*pip && mdi_rtn != MDI_SUCCESS) {
15621 - (void) mdi_pi_free(*pip, 0);
15851 + (void) mdi_pi_free(*pip, MDI_CLIENT_FLAGS_NO_EVENT);
15622 15852 *pip = NULL;
15623 15853 *lun_dip = NULL;
15624 15854 }
15625 15855 }
15626 15856
15627 15857 scsi_hba_nodename_compatible_free(nodename, compatible);
15628 15858 if (lun_addr != NULL) {
15629 15859 kmem_free(lun_addr, SCSI_MAXNAMELEN);
15630 15860 }
15631 15861 if (wwn_str != NULL) {
15632 15862 kmem_free(wwn_str, MPTSAS_WWN_STRLEN);
15633 15863 }
15634 15864 if (component != NULL) {
15635 15865 kmem_free(component, MAXPATHLEN);
15636 15866 }
15637 15867
15638 15868 return ((mdi_rtn == MDI_SUCCESS) ? DDI_SUCCESS : DDI_FAILURE);
15639 15869 }
15640 15870
15641 15871 static int
15642 15872 mptsas_create_phys_lun(dev_info_t *pdip, struct scsi_inquiry *inq,
15643 15873 char *guid, dev_info_t **lun_dip, mptsas_target_t *ptgt, int lun)
15644 15874 {
15645 15875 int target;
15646 15876 int rval;
15647 15877 int ndi_rtn = NDI_FAILURE;
15648 15878 uint64_t be_sas_wwn;
15649 15879 char *nodename = NULL;
15650 15880 char **compatible = NULL;
15651 15881 int ncompatible = 0;
15652 15882 int instance = 0;
15653 15883 mptsas_t *mpt = DIP2MPT(pdip);
15654 15884 char *wwn_str = NULL;
15655 15885 char *component = NULL;
15656 15886 char *attached_wwn_str = NULL;
15657 15887 uint8_t phy = 0xFF;
15658 15888 uint64_t sas_wwn;
15659 15889 uint32_t devinfo;
15660 15890 uint16_t dev_hdl;
15661 15891 uint16_t pdev_hdl;
15662 15892 uint64_t pdev_sas_wwn;
15663 15893 uint64_t dev_sas_wwn;
15664 15894 uint32_t pdev_info;
15665 15895 uint8_t physport;
15666 15896 uint8_t phy_id;
15667 15897 uint32_t page_address;
15668 15898 uint16_t bay_num, enclosure, io_flags;
15669 15899 char pdev_wwn_str[MPTSAS_WWN_STRLEN];
15670 15900 uint32_t dev_info;
15671 15901 int64_t lun64 = 0;
15672 15902
15673 15903 mutex_enter(&mpt->m_mutex);
15674 15904 target = ptgt->m_devhdl;
15675 15905 sas_wwn = ptgt->m_addr.mta_wwn;
15676 15906 devinfo = ptgt->m_deviceinfo;
15677 15907 phy = ptgt->m_phynum;
15678 15908 mutex_exit(&mpt->m_mutex);
15679 15909
15680 15910 /*
15681 15911 * generate compatible property with binding-set "mpt"
15682 15912 */
15683 15913 scsi_hba_nodename_compatible_get(inq, NULL, inq->inq_dtype, NULL,
15684 15914 &nodename, &compatible, &ncompatible);
15685 15915
15686 15916 /*
15687 15917 * if nodename can't be determined then print a message and skip it
15688 15918 */
15689 15919 if (nodename == NULL) {
15690 15920 mptsas_log(mpt, CE_WARN, "mptsas found no compatible driver "
15691 15921 "for target %d lun %d", target, lun);
15692 15922 return (DDI_FAILURE);
15693 15923 }
15694 15924
15695 15925 ndi_rtn = ndi_devi_alloc(pdip, nodename,
15696 15926 DEVI_SID_NODEID, lun_dip);
15697 15927
15698 15928 /*
15699 15929 * if lun alloc success, set props
15700 15930 */
15701 15931 if (ndi_rtn == NDI_SUCCESS) {
15702 15932
15703 15933 if (ndi_prop_update_int(DDI_DEV_T_NONE,
15704 15934 *lun_dip, LUN_PROP, lun) !=
15705 15935 DDI_PROP_SUCCESS) {
15706 15936 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
15707 15937 "property for target %d lun %d (LUN_PROP)",
15708 15938 target, lun);
15709 15939 ndi_rtn = NDI_FAILURE;
15710 15940 goto phys_create_done;
15711 15941 }
15712 15942
15713 15943 lun64 = (int64_t)lun;
15714 15944 if (ndi_prop_update_int64(DDI_DEV_T_NONE,
15715 15945 *lun_dip, LUN64_PROP, lun64) !=
15716 15946 DDI_PROP_SUCCESS) {
15717 15947 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
15718 15948 "property for target %d lun64 %d (LUN64_PROP)",
15719 15949 target, lun);
15720 15950 ndi_rtn = NDI_FAILURE;
15721 15951 goto phys_create_done;
15722 15952 }
15723 15953 if (ndi_prop_update_string_array(DDI_DEV_T_NONE,
15724 15954 *lun_dip, "compatible", compatible, ncompatible)
15725 15955 != DDI_PROP_SUCCESS) {
15726 15956 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
15727 15957 "property for target %d lun %d (COMPATIBLE)",
15728 15958 target, lun);
15729 15959 ndi_rtn = NDI_FAILURE;
15730 15960 goto phys_create_done;
15731 15961 }
15732 15962
15733 15963 /*
15734 15964 * We need the SAS WWN for non-multipath devices, so
15735 15965 * we'll use the same property as that multipathing
15736 15966 * devices need to present for MPAPI. If we don't have
15737 15967 * a WWN (e.g. parallel SCSI), don't create the prop.
15738 15968 */
15739 15969 wwn_str = kmem_zalloc(MPTSAS_WWN_STRLEN, KM_SLEEP);
15740 15970 (void) sprintf(wwn_str, "w%016"PRIx64, sas_wwn);
15741 15971 if (sas_wwn && ndi_prop_update_string(DDI_DEV_T_NONE,
15742 15972 *lun_dip, SCSI_ADDR_PROP_TARGET_PORT, wwn_str)
15743 15973 != DDI_PROP_SUCCESS) {
15744 15974 mptsas_log(mpt, CE_WARN, "mptsas unable to "
15745 15975 "create property for SAS target %d lun %d "
15746 15976 "(target-port)", target, lun);
15747 15977 ndi_rtn = NDI_FAILURE;
15748 15978 goto phys_create_done;
15749 15979 }
15750 15980
15751 15981 be_sas_wwn = BE_64(sas_wwn);
15752 15982 if (sas_wwn && ndi_prop_update_byte_array(
15753 15983 DDI_DEV_T_NONE, *lun_dip, "port-wwn",
15754 15984 (uchar_t *)&be_sas_wwn, 8) != DDI_PROP_SUCCESS) {
15755 15985 mptsas_log(mpt, CE_WARN, "mptsas unable to "
15756 15986 "create property for SAS target %d lun %d "
15757 15987 "(port-wwn)", target, lun);
15758 15988 ndi_rtn = NDI_FAILURE;
15759 15989 goto phys_create_done;
15760 15990 } else if ((sas_wwn == 0) && (ndi_prop_update_int(
15761 15991 DDI_DEV_T_NONE, *lun_dip, "sata-phy", phy) !=
15762 15992 DDI_PROP_SUCCESS)) {
15763 15993 /*
15764 15994 * Direct attached SATA device without DeviceName
15765 15995 */
15766 15996 mptsas_log(mpt, CE_WARN, "mptsas unable to "
15767 15997 "create property for SAS target %d lun %d "
15768 15998 "(sata-phy)", target, lun);
15769 15999 ndi_rtn = NDI_FAILURE;
15770 16000 goto phys_create_done;
15771 16001 }
15772 16002
15773 16003 if (ndi_prop_create_boolean(DDI_DEV_T_NONE,
15774 16004 *lun_dip, SAS_PROP) != DDI_PROP_SUCCESS) {
15775 16005 mptsas_log(mpt, CE_WARN, "mptsas unable to "
15776 16006 "create property for SAS target %d lun %d"
15777 16007 " (SAS_PROP)", target, lun);
15778 16008 ndi_rtn = NDI_FAILURE;
15779 16009 goto phys_create_done;
15780 16010 }
15781 16011 if (guid && (ndi_prop_update_string(DDI_DEV_T_NONE,
15782 16012 *lun_dip, NDI_GUID, guid) != DDI_SUCCESS)) {
15783 16013 mptsas_log(mpt, CE_WARN, "mptsas unable "
15784 16014 "to create guid property for target %d "
15785 16015 "lun %d", target, lun);
15786 16016 ndi_rtn = NDI_FAILURE;
15787 16017 goto phys_create_done;
15788 16018 }
15789 16019
15790 16020 /*
15791 16021 * The following code is to set properties for SM-HBA support,
15792 16022 * it doesn't apply to RAID volumes
15793 16023 */
15794 16024 if (ptgt->m_addr.mta_phymask == 0)
15795 16025 goto phys_raid_lun;
15796 16026
15797 16027 mutex_enter(&mpt->m_mutex);
15798 16028
15799 16029 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
15800 16030 MPI2_SAS_DEVICE_PGAD_FORM_MASK) |
15801 16031 (uint32_t)ptgt->m_devhdl;
15802 16032 rval = mptsas_get_sas_device_page0(mpt, page_address,
15803 16033 &dev_hdl, &dev_sas_wwn, &dev_info,
15804 16034 &physport, &phy_id, &pdev_hdl,
15805 16035 &bay_num, &enclosure, &io_flags);
15806 16036 if (rval != DDI_SUCCESS) {
15807 16037 mutex_exit(&mpt->m_mutex);
15808 16038 mptsas_log(mpt, CE_WARN, "mptsas unable to get "
15809 16039 "parent device for handle %d.", page_address);
15810 16040 ndi_rtn = NDI_FAILURE;
15811 16041 goto phys_create_done;
15812 16042 }
15813 16043
15814 16044 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
15815 16045 MPI2_SAS_DEVICE_PGAD_FORM_MASK) | (uint32_t)pdev_hdl;
15816 16046 rval = mptsas_get_sas_device_page0(mpt, page_address,
15817 16047 &dev_hdl, &pdev_sas_wwn, &pdev_info, &physport,
15818 16048 &phy_id, &pdev_hdl, &bay_num, &enclosure, &io_flags);
15819 16049 if (rval != DDI_SUCCESS) {
15820 16050 mutex_exit(&mpt->m_mutex);
15821 16051 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
15822 16052 "device for handle %d.", page_address);
15823 16053 ndi_rtn = NDI_FAILURE;
15824 16054 goto phys_create_done;
15825 16055 }
15826 16056
15827 16057 mutex_exit(&mpt->m_mutex);
15828 16058
15829 16059 /*
15830 16060 * If this device is directly attached to the controller,
15831 16061 * set the attached-port to the base wwid.
15832 16062 */
15833 16063 if ((ptgt->m_deviceinfo & DEVINFO_DIRECT_ATTACHED)
15834 16064 != DEVINFO_DIRECT_ATTACHED) {
15835 16065 (void) sprintf(pdev_wwn_str, "w%016"PRIx64,
15836 16066 pdev_sas_wwn);
15837 16067 } else {
15838 16068 /*
15839 16069 * Update the iport's attached-port to guid
15840 16070 */
15841 16071 if (sas_wwn == 0) {
15842 16072 (void) sprintf(wwn_str, "p%x", phy);
15843 16073 } else {
15844 16074 (void) sprintf(wwn_str, "w%016"PRIx64, sas_wwn);
15845 16075 }
15846 16076 if (ddi_prop_update_string(DDI_DEV_T_NONE,
15847 16077 pdip, SCSI_ADDR_PROP_ATTACHED_PORT, wwn_str) !=
15848 16078 DDI_PROP_SUCCESS) {
15849 16079 mptsas_log(mpt, CE_WARN,
15850 16080 "mptsas unable to create "
15851 16081 "property for iport target-port"
15852 16082 " %s (sas_wwn)",
15853 16083 wwn_str);
15854 16084 ndi_rtn = NDI_FAILURE;
15855 16085 goto phys_create_done;
15856 16086 }
15857 16087
15858 16088 (void) sprintf(pdev_wwn_str, "w%016"PRIx64,
15859 16089 mpt->un.m_base_wwid);
15860 16090 }
15861 16091
15862 16092 if (ndi_prop_update_string(DDI_DEV_T_NONE,
15863 16093 *lun_dip, SCSI_ADDR_PROP_ATTACHED_PORT, pdev_wwn_str) !=
15864 16094 DDI_PROP_SUCCESS) {
15865 16095 mptsas_log(mpt, CE_WARN,
15866 16096 "mptsas unable to create "
15867 16097 "property for iport attached-port %s (sas_wwn)",
15868 16098 attached_wwn_str);
15869 16099 ndi_rtn = NDI_FAILURE;
15870 16100 goto phys_create_done;
15871 16101 }
15872 16102
15873 16103 if (IS_SATA_DEVICE(dev_info)) {
15874 16104 char uabuf[SCSI_WWN_BUFLEN];
15875 16105
15876 16106 if (ndi_prop_update_string(DDI_DEV_T_NONE,
15877 16107 *lun_dip, MPTSAS_VARIANT, "sata") !=
15878 16108 DDI_PROP_SUCCESS) {
15879 16109 mptsas_log(mpt, CE_WARN,
15880 16110 "mptsas unable to create "
15881 16111 "property for device variant ");
15882 16112 ndi_rtn = NDI_FAILURE;
15883 16113 goto phys_create_done;
15884 16114 }
15885 16115
15886 16116 if (scsi_wwn_to_wwnstr(dev_sas_wwn, 1, uabuf) == NULL) {
15887 16117 mptsas_log(mpt, CE_WARN,
15888 16118 "mptsas unable to format SATA bridge WWN");
15889 16119 ndi_rtn = NDI_FAILURE;
15890 16120 goto phys_create_done;
15891 16121 }
15892 16122
15893 16123 if (ndi_prop_update_string(DDI_DEV_T_NONE, *lun_dip,
15894 16124 SCSI_ADDR_PROP_BRIDGE_PORT, uabuf) !=
15895 16125 DDI_PROP_SUCCESS) {
15896 16126 mptsas_log(mpt, CE_WARN,
15897 16127 "mptsas unable to create SCSI bridge port "
15898 16128 "property for SATA device");
15899 16129 ndi_rtn = NDI_FAILURE;
15900 16130 goto phys_create_done;
15901 16131 }
15902 16132 }
15903 16133
15904 16134 if (IS_ATAPI_DEVICE(dev_info)) {
15905 16135 if (ndi_prop_update_string(DDI_DEV_T_NONE,
15906 16136 *lun_dip, MPTSAS_VARIANT, "atapi") !=
15907 16137 DDI_PROP_SUCCESS) {
15908 16138 mptsas_log(mpt, CE_WARN,
15909 16139 "mptsas unable to create "
15910 16140 "property for device variant ");
15911 16141 ndi_rtn = NDI_FAILURE;
15912 16142 goto phys_create_done;
15913 16143 }
15914 16144 }
15915 16145
15916 16146 phys_raid_lun:
15917 16147 /*
15918 16148 * if this is a SAS controller, and the target is a SATA
15919 16149 * drive, set the 'pm-capable' property for sd and if on
15920 16150 * an OPL platform, also check if this is an ATAPI
15921 16151 * device.
15922 16152 */
15923 16153 instance = ddi_get_instance(mpt->m_dip);
15924 16154 if (devinfo & (MPI2_SAS_DEVICE_INFO_SATA_DEVICE |
15925 16155 MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE)) {
15926 16156 NDBG2(("mptsas%d: creating pm-capable property, "
15927 16157 "target %d", instance, target));
15928 16158
15929 16159 if ((ndi_prop_update_int(DDI_DEV_T_NONE,
15930 16160 *lun_dip, "pm-capable", 1)) !=
15931 16161 DDI_PROP_SUCCESS) {
15932 16162 mptsas_log(mpt, CE_WARN, "mptsas "
15933 16163 "failed to create pm-capable "
15934 16164 "property, target %d", target);
15935 16165 ndi_rtn = NDI_FAILURE;
15936 16166 goto phys_create_done;
15937 16167 }
15938 16168
15939 16169 }
15940 16170
15941 16171 if ((inq->inq_dtype == 0) || (inq->inq_dtype == 5)) {
15942 16172 /*
15943 16173 * add 'obp-path' properties for devinfo
15944 16174 */
15945 16175 bzero(wwn_str, sizeof (wwn_str));
15946 16176 (void) sprintf(wwn_str, "%016"PRIx64, sas_wwn);
15947 16177 component = kmem_zalloc(MAXPATHLEN, KM_SLEEP);
15948 16178 if (guid) {
15949 16179 (void) snprintf(component, MAXPATHLEN,
15950 16180 "disk@w%s,%x", wwn_str, lun);
15951 16181 } else {
15952 16182 (void) snprintf(component, MAXPATHLEN,
15953 16183 "disk@p%x,%x", phy, lun);
15954 16184 }
15955 16185 if (ddi_pathname_obp_set(*lun_dip, component)
15956 16186 != DDI_SUCCESS) {
15957 16187 mptsas_log(mpt, CE_WARN, "mpt_sas driver "
15958 16188 "unable to set obp-path for SAS "
15959 16189 "object %s", component);
15960 16190 ndi_rtn = NDI_FAILURE;
15961 16191 goto phys_create_done;
15962 16192 }
15963 16193 }
15964 16194 /*
15965 16195 * Create the phy-num property for non-raid disk
15966 16196 */
15967 16197 if (ptgt->m_addr.mta_phymask != 0) {
15968 16198 if (ndi_prop_update_int(DDI_DEV_T_NONE,
15969 16199 *lun_dip, "phy-num", ptgt->m_phynum) !=
15970 16200 DDI_PROP_SUCCESS) {
15971 16201 mptsas_log(mpt, CE_WARN, "mptsas driver "
15972 16202 "failed to create phy-num property for "
15973 16203 "target %d", target);
15974 16204 ndi_rtn = NDI_FAILURE;
15975 16205 goto phys_create_done;
15976 16206 }
15977 16207 }
15978 16208 phys_create_done:
15979 16209 /*
15980 16210 * If props were setup ok, online the lun
15981 16211 */
15982 16212 if (ndi_rtn == NDI_SUCCESS) {
15983 16213 /*
15984 16214 * Try to online the new node
15985 16215 */
15986 16216 ndi_rtn = ndi_devi_online(*lun_dip, NDI_ONLINE_ATTACH);
15987 16217 }
15988 - if (ndi_rtn == NDI_SUCCESS) {
15989 - mutex_enter(&mpt->m_mutex);
15990 - ptgt->m_led_status = 0;
15991 - (void) mptsas_flush_led_status(mpt, ptgt);
15992 - mutex_exit(&mpt->m_mutex);
15993 - }
15994 16218
15995 16219 /*
15996 16220 * If success set rtn flag, else unwire alloc'd lun
15997 16221 */
15998 16222 if (ndi_rtn != NDI_SUCCESS) {
15999 16223 NDBG12(("mptsas driver unable to online "
16000 16224 "target %d lun %d", target, lun));
16001 16225 ndi_prop_remove_all(*lun_dip);
16002 16226 (void) ndi_devi_free(*lun_dip);
16003 16227 *lun_dip = NULL;
16004 16228 }
16005 16229 }
16006 16230
16007 16231 scsi_hba_nodename_compatible_free(nodename, compatible);
16008 16232
16009 16233 if (wwn_str != NULL) {
16010 16234 kmem_free(wwn_str, MPTSAS_WWN_STRLEN);
16011 16235 }
16012 16236 if (component != NULL) {
16013 16237 kmem_free(component, MAXPATHLEN);
16014 16238 }
16015 16239
16016 16240
16017 16241 return ((ndi_rtn == NDI_SUCCESS) ? DDI_SUCCESS : DDI_FAILURE);
16018 16242 }
16019 16243
16020 16244 static int
16021 16245 mptsas_probe_smp(dev_info_t *pdip, uint64_t wwn)
16022 16246 {
16023 16247 mptsas_t *mpt = DIP2MPT(pdip);
16024 16248 struct smp_device smp_sd;
16025 16249
16026 16250 /* XXX An HBA driver should not be allocating an smp_device. */
16027 16251 bzero(&smp_sd, sizeof (struct smp_device));
16028 16252 smp_sd.smp_sd_address.smp_a_hba_tran = mpt->m_smptran;
16029 16253 bcopy(&wwn, smp_sd.smp_sd_address.smp_a_wwn, SAS_WWN_BYTE_SIZE);
16030 16254
16031 16255 if (smp_probe(&smp_sd) != DDI_PROBE_SUCCESS)
16032 16256 return (NDI_FAILURE);
16033 16257 return (NDI_SUCCESS);
16034 16258 }
16035 16259
16036 16260 static int
16037 16261 mptsas_config_smp(dev_info_t *pdip, uint64_t sas_wwn, dev_info_t **smp_dip)
16038 16262 {
16039 16263 mptsas_t *mpt = DIP2MPT(pdip);
16040 16264 mptsas_smp_t *psmp = NULL;
16041 16265 int rval;
16042 16266 int phymask;
16043 16267
16044 16268 /*
16045 - * Get the physical port associated to the iport
16046 - * PHYMASK TODO
16269 + * The phymask exists if the port is active, otherwise
16270 + * nothing to do.
16047 16271 */
16272 + if (ddi_prop_exists(DDI_DEV_T_ANY, pdip,
16273 + DDI_PROP_DONTPASS | DDI_PROP_NOTPROM, "phymask") == 0)
16274 + return (DDI_FAILURE);
16275 +
16048 16276 phymask = ddi_prop_get_int(DDI_DEV_T_ANY, pdip, 0,
16049 16277 "phymask", 0);
16050 16278 /*
16051 16279 * Find the smp node in hash table with specified sas address and
16052 16280 * physical port
16053 16281 */
16054 16282 psmp = mptsas_wwid_to_psmp(mpt, phymask, sas_wwn);
16055 16283 if (psmp == NULL) {
16056 16284 return (DDI_FAILURE);
16057 16285 }
16058 16286
16059 16287 rval = mptsas_online_smp(pdip, psmp, smp_dip);
16060 16288
16061 16289 return (rval);
16062 16290 }
16063 16291
16064 16292 static int
16065 16293 mptsas_online_smp(dev_info_t *pdip, mptsas_smp_t *smp_node,
16066 16294 dev_info_t **smp_dip)
16067 16295 {
16068 16296 char wwn_str[MPTSAS_WWN_STRLEN];
16069 16297 char attached_wwn_str[MPTSAS_WWN_STRLEN];
16070 16298 int ndi_rtn = NDI_FAILURE;
16071 16299 int rval = 0;
16072 16300 mptsas_smp_t dev_info;
16073 16301 uint32_t page_address;
16074 16302 mptsas_t *mpt = DIP2MPT(pdip);
16075 16303 uint16_t dev_hdl;
16076 16304 uint64_t sas_wwn;
16077 16305 uint64_t smp_sas_wwn;
16078 16306 uint8_t physport;
16079 16307 uint8_t phy_id;
16080 16308 uint16_t pdev_hdl;
16081 16309 uint8_t numphys = 0;
16082 16310 uint16_t i = 0;
16083 16311 char phymask[MPTSAS_MAX_PHYS];
16084 16312 char *iport = NULL;
16085 16313 mptsas_phymask_t phy_mask = 0;
16086 16314 uint16_t attached_devhdl;
16087 16315 uint16_t bay_num, enclosure, io_flags;
16088 16316
16089 16317 (void) sprintf(wwn_str, "%"PRIx64, smp_node->m_addr.mta_wwn);
16090 16318
16091 16319 /*
16092 16320 * Probe the smp device to prevent the node of a removed device
16093 16321 * from being configured successfully.
16094 16322 */
16095 16323 if (mptsas_probe_smp(pdip, smp_node->m_addr.mta_wwn) != NDI_SUCCESS) {
16096 16324 return (DDI_FAILURE);
16097 16325 }
16098 16326
16099 16327 if ((*smp_dip = mptsas_find_smp_child(pdip, wwn_str)) != NULL) {
16100 16328 return (DDI_SUCCESS);
16101 16329 }
16102 16330
16103 16331 ndi_rtn = ndi_devi_alloc(pdip, "smp", DEVI_SID_NODEID, smp_dip);
16104 16332
16105 16333 /*
16106 16334 * if lun alloc success, set props
16107 16335 */
16108 16336 if (ndi_rtn == NDI_SUCCESS) {
16109 16337 /*
16110 16338 * Set the flavor of the child to be SMP flavored
16111 16339 */
16112 16340 ndi_flavor_set(*smp_dip, SCSA_FLAVOR_SMP);
16113 16341
16114 16342 if (ndi_prop_update_string(DDI_DEV_T_NONE,
16115 16343 *smp_dip, SMP_WWN, wwn_str) !=
16116 16344 DDI_PROP_SUCCESS) {
16117 16345 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
16118 16346 "property for smp device %s (sas_wwn)",
16119 16347 wwn_str);
16120 16348 ndi_rtn = NDI_FAILURE;
16121 16349 goto smp_create_done;
16122 16350 }
16123 16351 (void) sprintf(wwn_str, "w%"PRIx64, smp_node->m_addr.mta_wwn);
16124 16352 if (ndi_prop_update_string(DDI_DEV_T_NONE,
16125 16353 *smp_dip, SCSI_ADDR_PROP_TARGET_PORT, wwn_str) !=
16126 16354 DDI_PROP_SUCCESS) {
16127 16355 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
16128 16356 "property for iport target-port %s (sas_wwn)",
16129 16357 wwn_str);
16130 16358 ndi_rtn = NDI_FAILURE;
16131 16359 goto smp_create_done;
16132 16360 }
16133 16361
16134 16362 mutex_enter(&mpt->m_mutex);
16135 16363
16136 16364 page_address = (MPI2_SAS_EXPAND_PGAD_FORM_HNDL &
16137 16365 MPI2_SAS_EXPAND_PGAD_FORM_MASK) | smp_node->m_devhdl;
16138 16366 rval = mptsas_get_sas_expander_page0(mpt, page_address,
16139 16367 &dev_info);
16140 16368 if (rval != DDI_SUCCESS) {
16141 16369 mutex_exit(&mpt->m_mutex);
16142 16370 mptsas_log(mpt, CE_WARN,
16143 16371 "mptsas unable to get expander "
16144 16372 "parent device info for %x", page_address);
16145 16373 ndi_rtn = NDI_FAILURE;
16146 16374 goto smp_create_done;
16147 16375 }
16148 16376
16149 16377 smp_node->m_pdevhdl = dev_info.m_pdevhdl;
16150 16378 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
16151 16379 MPI2_SAS_DEVICE_PGAD_FORM_MASK) |
16152 16380 (uint32_t)dev_info.m_pdevhdl;
16153 16381 rval = mptsas_get_sas_device_page0(mpt, page_address,
16154 16382 &dev_hdl, &sas_wwn, &smp_node->m_pdevinfo, &physport,
16155 16383 &phy_id, &pdev_hdl, &bay_num, &enclosure, &io_flags);
16156 16384 if (rval != DDI_SUCCESS) {
16157 16385 mutex_exit(&mpt->m_mutex);
16158 16386 mptsas_log(mpt, CE_WARN, "mptsas unable to get "
16159 16387 "device info for %x", page_address);
16160 16388 ndi_rtn = NDI_FAILURE;
16161 16389 goto smp_create_done;
16162 16390 }
16163 16391
16164 16392 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_HANDLE &
16165 16393 MPI2_SAS_DEVICE_PGAD_FORM_MASK) |
16166 16394 (uint32_t)dev_info.m_devhdl;
16167 16395 rval = mptsas_get_sas_device_page0(mpt, page_address,
16168 16396 &dev_hdl, &smp_sas_wwn, &smp_node->m_deviceinfo,
16169 16397 &physport, &phy_id, &pdev_hdl, &bay_num, &enclosure,
16170 16398 &io_flags);
16171 16399 if (rval != DDI_SUCCESS) {
16172 16400 mutex_exit(&mpt->m_mutex);
16173 16401 mptsas_log(mpt, CE_WARN, "mptsas unable to get "
16174 16402 "device info for %x", page_address);
16175 16403 ndi_rtn = NDI_FAILURE;
16176 16404 goto smp_create_done;
16177 16405 }
16178 16406 mutex_exit(&mpt->m_mutex);
16179 16407
16180 16408 /*
16181 16409 * If this smp is directly attached to the controller,
16182 16410 * set the attached-port to the base wwid.
16183 16411 */
16184 16412 if ((smp_node->m_deviceinfo & DEVINFO_DIRECT_ATTACHED)
16185 16413 != DEVINFO_DIRECT_ATTACHED) {
16186 16414 (void) sprintf(attached_wwn_str, "w%016"PRIx64,
16187 16415 sas_wwn);
16188 16416 } else {
16189 16417 (void) sprintf(attached_wwn_str, "w%016"PRIx64,
16190 16418 mpt->un.m_base_wwid);
16191 16419 }
16192 16420
16193 16421 if (ndi_prop_update_string(DDI_DEV_T_NONE,
16194 16422 *smp_dip, SCSI_ADDR_PROP_ATTACHED_PORT, attached_wwn_str) !=
16195 16423 DDI_PROP_SUCCESS) {
16196 16424 mptsas_log(mpt, CE_WARN, "mptsas unable to create "
16197 16425 "property for smp attached-port %s (sas_wwn)",
16198 16426 attached_wwn_str);
16199 16427 ndi_rtn = NDI_FAILURE;
16200 16428 goto smp_create_done;
16201 16429 }
16202 16430
16203 16431 if (ndi_prop_create_boolean(DDI_DEV_T_NONE,
16204 16432 *smp_dip, SMP_PROP) != DDI_PROP_SUCCESS) {
16205 16433 mptsas_log(mpt, CE_WARN, "mptsas unable to "
16206 16434 "create property for SMP %s (SMP_PROP) ",
16207 16435 wwn_str);
16208 16436 ndi_rtn = NDI_FAILURE;
16209 16437 goto smp_create_done;
16210 16438 }
16211 16439
16212 16440 /*
16213 16441 * check the smp to see whether it is directly
16214 16442 * attached to the controller
16215 16443 */
16216 16444 if ((smp_node->m_deviceinfo & DEVINFO_DIRECT_ATTACHED)
16217 16445 != DEVINFO_DIRECT_ATTACHED) {
16218 16446 goto smp_create_done;
16219 16447 }
16220 16448 numphys = ddi_prop_get_int(DDI_DEV_T_ANY, pdip,
16221 16449 DDI_PROP_DONTPASS, MPTSAS_NUM_PHYS, -1);
16222 16450 if (numphys > 0) {
16223 16451 goto smp_create_done;
16224 16452 }
16225 16453 /*
16226 16454 * this iport is an old iport, we need to
16227 16455 * reconfig the props for it.
16228 16456 */
16229 16457 if (ddi_prop_update_int(DDI_DEV_T_NONE, pdip,
16230 16458 MPTSAS_VIRTUAL_PORT, 0) !=
16231 16459 DDI_PROP_SUCCESS) {
16232 16460 (void) ddi_prop_remove(DDI_DEV_T_NONE, pdip,
16233 16461 MPTSAS_VIRTUAL_PORT);
16234 16462 mptsas_log(mpt, CE_WARN, "mptsas virtual port "
16235 16463 "prop update failed");
16236 16464 goto smp_create_done;
16237 16465 }
16238 16466
16239 16467 mutex_enter(&mpt->m_mutex);
16240 16468 numphys = 0;
16241 16469 iport = ddi_get_name_addr(pdip);
16242 16470 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
16243 16471 bzero(phymask, sizeof (phymask));
16244 16472 (void) sprintf(phymask,
16245 16473 "%x", mpt->m_phy_info[i].phy_mask);
16246 16474 if (strcmp(phymask, iport) == 0) {
16247 16475 phy_mask = mpt->m_phy_info[i].phy_mask;
16248 16476 break;
16249 16477 }
16250 16478 }
16251 16479
16252 16480 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
16253 16481 if ((phy_mask >> i) & 0x01) {
16254 16482 numphys++;
16255 16483 }
16256 16484 }
16257 16485 /*
16258 16486 * Update PHY info for smhba
16259 16487 */
16260 16488 if (mptsas_smhba_phy_init(mpt)) {
16261 16489 mutex_exit(&mpt->m_mutex);
16262 16490 mptsas_log(mpt, CE_WARN, "mptsas phy update "
16263 16491 "failed");
16264 16492 goto smp_create_done;
16265 16493 }
16266 16494 mutex_exit(&mpt->m_mutex);
16267 16495
16268 16496 mptsas_smhba_set_all_phy_props(mpt, pdip, numphys, phy_mask,
16269 16497 &attached_devhdl);
16270 16498
16271 16499 if (ddi_prop_update_int(DDI_DEV_T_NONE, pdip,
16272 16500 MPTSAS_NUM_PHYS, numphys) !=
16273 16501 DDI_PROP_SUCCESS) {
16274 16502 (void) ddi_prop_remove(DDI_DEV_T_NONE, pdip,
16275 16503 MPTSAS_NUM_PHYS);
16276 16504 mptsas_log(mpt, CE_WARN, "mptsas update "
16277 16505 "num phys props failed");
16278 16506 goto smp_create_done;
16279 16507 }
16280 16508 /*
16281 16509 * Add parent's props for SMHBA support
16282 16510 */
16283 16511 if (ddi_prop_update_string(DDI_DEV_T_NONE, pdip,
16284 16512 SCSI_ADDR_PROP_ATTACHED_PORT, wwn_str) !=
16285 16513 DDI_PROP_SUCCESS) {
16286 16514 (void) ddi_prop_remove(DDI_DEV_T_NONE, pdip,
16287 16515 SCSI_ADDR_PROP_ATTACHED_PORT);
16288 16516 mptsas_log(mpt, CE_WARN, "mptsas update iport "
16289 16517 "attached-port failed");
16290 16518 goto smp_create_done;
16291 16519 }
16292 16520
16293 16521 smp_create_done:
16294 16522 /*
16295 16523 * If props were setup ok, online the lun
16296 16524 */
16297 16525 if (ndi_rtn == NDI_SUCCESS) {
16298 16526 /*
16299 16527 * Try to online the new node
16300 16528 */
16301 16529 ndi_rtn = ndi_devi_online(*smp_dip, NDI_ONLINE_ATTACH);
16302 16530 }
16303 16531
16304 16532 /*
16305 16533 * If success set rtn flag, else unwire alloc'd lun
16306 16534 */
16307 16535 if (ndi_rtn != NDI_SUCCESS) {
16308 16536 NDBG12(("mptsas unable to online "
16309 16537 "SMP target %s", wwn_str));
16310 16538 ndi_prop_remove_all(*smp_dip);
16311 16539 (void) ndi_devi_free(*smp_dip);
16312 16540 }
16313 16541 }
16314 16542
16315 16543 return ((ndi_rtn == NDI_SUCCESS) ? DDI_SUCCESS : DDI_FAILURE);
16316 16544 }
16317 16545
16318 16546 /* smp transport routine */
16319 16547 static int mptsas_smp_start(struct smp_pkt *smp_pkt)
16320 16548 {
16321 16549 uint64_t wwn;
16322 16550 Mpi2SmpPassthroughRequest_t req;
16323 16551 Mpi2SmpPassthroughReply_t rep;
16324 16552 uint32_t direction = 0;
16325 16553 mptsas_t *mpt;
16326 16554 int ret;
16327 16555 uint64_t tmp64;
16328 16556
16329 16557 mpt = (mptsas_t *)smp_pkt->smp_pkt_address->
16330 16558 smp_a_hba_tran->smp_tran_hba_private;
16331 16559
16332 16560 bcopy(smp_pkt->smp_pkt_address->smp_a_wwn, &wwn, SAS_WWN_BYTE_SIZE);
16333 16561 /*
16334 16562 * Need to compose a SMP request message
16335 16563 * and call mptsas_do_passthru() function
16336 16564 */
16337 16565 bzero(&req, sizeof (req));
16338 16566 bzero(&rep, sizeof (rep));
16339 16567 req.PassthroughFlags = 0;
16340 16568 req.PhysicalPort = 0xff;
16341 16569 req.ChainOffset = 0;
16342 16570 req.Function = MPI2_FUNCTION_SMP_PASSTHROUGH;
16343 16571
16344 16572 if ((smp_pkt->smp_pkt_reqsize & 0xffff0000ul) != 0) {
16345 16573 smp_pkt->smp_pkt_reason = ERANGE;
16346 16574 return (DDI_FAILURE);
16347 16575 }
16348 16576 req.RequestDataLength = LE_16((uint16_t)(smp_pkt->smp_pkt_reqsize - 4));
16349 16577
16350 16578 req.MsgFlags = 0;
16351 16579 tmp64 = LE_64(wwn);
16352 16580 bcopy(&tmp64, &req.SASAddress, SAS_WWN_BYTE_SIZE);
16353 16581 if (smp_pkt->smp_pkt_rspsize > 0) {
16354 16582 direction |= MPTSAS_PASS_THRU_DIRECTION_READ;
16355 16583 }
16356 16584 if (smp_pkt->smp_pkt_reqsize > 0) {
16357 16585 direction |= MPTSAS_PASS_THRU_DIRECTION_WRITE;
16358 16586 }
16359 16587
16360 16588 mutex_enter(&mpt->m_mutex);
16361 16589 ret = mptsas_do_passthru(mpt, (uint8_t *)&req, (uint8_t *)&rep,
16362 16590 (uint8_t *)smp_pkt->smp_pkt_rsp,
16363 16591 offsetof(Mpi2SmpPassthroughRequest_t, SGL), sizeof (rep),
16364 16592 smp_pkt->smp_pkt_rspsize - 4, direction,
16365 16593 (uint8_t *)smp_pkt->smp_pkt_req, smp_pkt->smp_pkt_reqsize - 4,
16366 16594 smp_pkt->smp_pkt_timeout, FKIOCTL);
16367 16595 mutex_exit(&mpt->m_mutex);
16368 16596 if (ret != 0) {
16369 16597 cmn_err(CE_WARN, "smp_start do passthru error %d", ret);
16370 16598 smp_pkt->smp_pkt_reason = (uchar_t)(ret);
16371 16599 return (DDI_FAILURE);
16372 16600 }
16373 16601 /* do passthrough success, check the smp status */
16374 16602 if (LE_16(rep.IOCStatus) != MPI2_IOCSTATUS_SUCCESS) {
16375 16603 switch (LE_16(rep.IOCStatus)) {
16376 16604 case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
16377 16605 smp_pkt->smp_pkt_reason = ENODEV;
16378 16606 break;
16379 16607 case MPI2_IOCSTATUS_SAS_SMP_DATA_OVERRUN:
16380 16608 smp_pkt->smp_pkt_reason = EOVERFLOW;
16381 16609 break;
16382 16610 case MPI2_IOCSTATUS_SAS_SMP_REQUEST_FAILED:
16383 16611 smp_pkt->smp_pkt_reason = EIO;
16384 16612 break;
16385 16613 default:
16386 16614 mptsas_log(mpt, CE_NOTE, "smp_start: get unknown ioc "
16387 16615 "status:%x", LE_16(rep.IOCStatus));
16388 16616 smp_pkt->smp_pkt_reason = EIO;
16389 16617 break;
16390 16618 }
16391 16619 return (DDI_FAILURE);
16392 16620 }
16393 16621 if (rep.SASStatus != MPI2_SASSTATUS_SUCCESS) {
16394 16622 mptsas_log(mpt, CE_NOTE, "smp_start: get error SAS status:%x",
16395 16623 rep.SASStatus);
16396 16624 smp_pkt->smp_pkt_reason = EIO;
16397 16625 return (DDI_FAILURE);
16398 16626 }
16399 16627
16400 16628 return (DDI_SUCCESS);
16401 16629 }
16402 16630
16403 16631 /*
16404 16632 * If we didn't get a match, we need to get sas page0 for each device
16405 16633 * until we get a match. If that fails, return NULL.
16406 16634 */
16407 16635 static mptsas_target_t *
16408 16636 mptsas_phy_to_tgt(mptsas_t *mpt, mptsas_phymask_t phymask, uint8_t phy)
16409 16637 {
16410 16638 int i, j = 0;
16411 16639 int rval = 0;
16412 16640 uint16_t cur_handle;
16413 16641 uint32_t page_address;
16414 16642 mptsas_target_t *ptgt = NULL;
16415 16643
16416 16644 /*
16417 16645 * A PHY-named device must be directly attached to a narrow
16418 16646 * port; if the iport is a wide port, it is not the parent of
16419 16647 * the device we are looking for.
16420 16648 */
16421 16649 for (i = 0; i < MPTSAS_MAX_PHYS; i++) {
16422 16650 if ((1 << i) & phymask)
16423 16651 j++;
16424 16652 }
16425 16653
16426 16654 if (j > 1)
16427 16655 return (NULL);
16428 16656
16429 16657 /*
16430 16658 * Must be a narrow port with a single device attached, so
16431 16659 * the device whose physical port num equals the iport's
16432 16660 * port num is the device we are looking for.
16433 16661 */
16434 16662
16435 16663 if (mpt->m_phy_info[phy].phy_mask != phymask)
16436 16664 return (NULL);
16437 16665
16438 16666 mutex_enter(&mpt->m_mutex);
16439 16667
16440 16668 ptgt = refhash_linear_search(mpt->m_targets, mptsas_target_eval_nowwn,
16441 16669 &phy);
16442 16670 if (ptgt != NULL) {
16443 16671 mutex_exit(&mpt->m_mutex);
16444 16672 return (ptgt);
16445 16673 }
16446 16674
16447 16675 if (mpt->m_done_traverse_dev) {
16448 16676 mutex_exit(&mpt->m_mutex);
16449 16677 return (NULL);
16450 16678 }
16451 16679
16452 16680 /* If we didn't get a match, come here */
16453 16681 cur_handle = mpt->m_dev_handle;
16454 16682 for (; ; ) {
16455 16683 ptgt = NULL;
16456 16684 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE &
16457 16685 MPI2_SAS_DEVICE_PGAD_FORM_MASK) | (uint32_t)cur_handle;
16458 16686 rval = mptsas_get_target_device_info(mpt, page_address,
16459 16687 &cur_handle, &ptgt);
16460 16688 if ((rval == DEV_INFO_FAIL_PAGE0) ||
16461 16689 (rval == DEV_INFO_FAIL_ALLOC) ||
16462 16690 (rval == DEV_INFO_FAIL_GUID)) {
16463 16691 break;
16464 16692 }
16465 16693 if ((rval == DEV_INFO_WRONG_DEVICE_TYPE) ||
16466 16694 (rval == DEV_INFO_PHYS_DISK)) {
16467 16695 continue;
16468 16696 }
16469 16697 mpt->m_dev_handle = cur_handle;
16470 16698
16471 16699 if ((ptgt->m_addr.mta_wwn == 0) && (ptgt->m_phynum == phy)) {
16472 16700 break;
16473 16701 }
16474 16702 }
16475 16703
16476 16704 mutex_exit(&mpt->m_mutex);
16477 16705 return (ptgt);
16478 16706 }
16479 16707
16480 16708 /*
16481 16709 * The ptgt->m_addr.mta_wwn contains the wwid for each disk.
16482 16710 * For Raid volumes, we need to check m_raidvol[x].m_raidwwid
16483 16711 * If we didn't get a match, we need to get sas page0 for each device
16484 16712 * until we get a match.
16485 16713 * If that fails, return NULL.
16486 16714 */
16487 16715 static mptsas_target_t *
16488 16716 mptsas_wwid_to_ptgt(mptsas_t *mpt, mptsas_phymask_t phymask, uint64_t wwid)
16489 16717 {
16490 16718 int rval = 0;
16491 16719 uint16_t cur_handle;
16492 16720 uint32_t page_address;
16493 16721 mptsas_target_t *tmp_tgt = NULL;
16494 16722 mptsas_target_addr_t addr;
16495 16723
16496 16724 addr.mta_wwn = wwid;
16497 16725 addr.mta_phymask = phymask;
16498 16726 mutex_enter(&mpt->m_mutex);
16499 16727 tmp_tgt = refhash_lookup(mpt->m_targets, &addr);
16500 16728 if (tmp_tgt != NULL) {
16501 16729 mutex_exit(&mpt->m_mutex);
16502 16730 return (tmp_tgt);
16503 16731 }
16504 16732
16505 16733 if (phymask == 0) {
16506 16734 /*
16507 16735 * It's IR volume
16508 16736 */
16509 16737 rval = mptsas_get_raid_info(mpt);
16510 16738 if (rval) {
16511 16739 tmp_tgt = refhash_lookup(mpt->m_targets, &addr);
16512 16740 }
16513 16741 mutex_exit(&mpt->m_mutex);
16514 16742 return (tmp_tgt);
16515 16743 }
16516 16744
16517 16745 if (mpt->m_done_traverse_dev) {
16518 16746 mutex_exit(&mpt->m_mutex);
16519 16747 return (NULL);
16520 16748 }
16521 16749
16522 16750 /* If we didn't get a match, come here */
16523 16751 cur_handle = mpt->m_dev_handle;
16524 16752 for (;;) {
16525 16753 tmp_tgt = NULL;
16526 16754 page_address = (MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE &
16527 16755 MPI2_SAS_DEVICE_PGAD_FORM_MASK) | cur_handle;
16528 16756 rval = mptsas_get_target_device_info(mpt, page_address,
16529 16757 &cur_handle, &tmp_tgt);
16530 16758 if ((rval == DEV_INFO_FAIL_PAGE0) ||
16531 16759 (rval == DEV_INFO_FAIL_ALLOC) ||
16532 16760 (rval == DEV_INFO_FAIL_GUID)) {
16533 16761 tmp_tgt = NULL;
16534 16762 break;
16535 16763 }
16536 16764 if ((rval == DEV_INFO_WRONG_DEVICE_TYPE) ||
16537 16765 (rval == DEV_INFO_PHYS_DISK)) {
16538 16766 continue;
16539 16767 }
16540 16768 mpt->m_dev_handle = cur_handle;
16541 16769 if ((tmp_tgt->m_addr.mta_wwn) &&
16542 16770 (tmp_tgt->m_addr.mta_wwn == wwid) &&
16543 16771 (tmp_tgt->m_addr.mta_phymask == phymask)) {
16544 16772 break;
16545 16773 }
16546 16774 }
16547 16775
16548 16776 mutex_exit(&mpt->m_mutex);
16549 16777 return (tmp_tgt);
16550 16778 }
16551 16779
16552 16780 static mptsas_smp_t *
16553 16781 mptsas_wwid_to_psmp(mptsas_t *mpt, mptsas_phymask_t phymask, uint64_t wwid)
16554 16782 {
16555 16783 int rval = 0;
16556 16784 uint16_t cur_handle;
16557 16785 uint32_t page_address;
16558 16786 mptsas_smp_t smp_node, *psmp = NULL;
16559 16787 mptsas_target_addr_t addr;
16560 16788
16561 16789 addr.mta_wwn = wwid;
16562 16790 addr.mta_phymask = phymask;
16563 16791 mutex_enter(&mpt->m_mutex);
16564 16792 psmp = refhash_lookup(mpt->m_smp_targets, &addr);
16565 16793 if (psmp != NULL) {
16566 16794 mutex_exit(&mpt->m_mutex);
16567 16795 return (psmp);
16568 16796 }
16569 16797
16570 16798 if (mpt->m_done_traverse_smp) {
16571 16799 mutex_exit(&mpt->m_mutex);
16572 16800 return (NULL);
16573 16801 }
16574 16802
16575 16803 /* If we didn't get a match, come here */
16576 16804 cur_handle = mpt->m_smp_devhdl;
16577 16805 for (;;) {
16578 16806 psmp = NULL;
16579 16807 page_address = (MPI2_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL &
16580 16808 MPI2_SAS_EXPAND_PGAD_FORM_MASK) | (uint32_t)cur_handle;
16581 16809 rval = mptsas_get_sas_expander_page0(mpt, page_address,
16582 16810 &smp_node);
16583 16811 if (rval != DDI_SUCCESS) {
16584 16812 break;
16585 16813 }
16586 16814 mpt->m_smp_devhdl = cur_handle = smp_node.m_devhdl;
16587 16815 psmp = mptsas_smp_alloc(mpt, &smp_node);
16588 16816 ASSERT(psmp);
16589 16817 if ((psmp->m_addr.mta_wwn) && (psmp->m_addr.mta_wwn == wwid) &&
16590 16818 (psmp->m_addr.mta_phymask == phymask)) {
16591 16819 break;
16592 16820 }
16593 16821 }
16594 16822
16595 16823 mutex_exit(&mpt->m_mutex);
16596 16824 return (psmp);
16597 16825 }
16598 16826
16599 16827 mptsas_target_t *
16600 16828 mptsas_tgt_alloc(refhash_t *refhash, uint16_t devhdl, uint64_t wwid,
16601 16829 uint32_t devinfo, mptsas_phymask_t phymask, uint8_t phynum)
16602 16830 {
16603 16831 mptsas_target_t *tmp_tgt = NULL;
16604 16832 mptsas_target_addr_t addr;
16605 16833
16606 16834 addr.mta_wwn = wwid;
16607 16835 addr.mta_phymask = phymask;
16608 16836 tmp_tgt = refhash_lookup(refhash, &addr);
16609 16837 if (tmp_tgt != NULL) {
16610 16838 NDBG20(("Hash item already exists"));
16611 16839 tmp_tgt->m_deviceinfo = devinfo;
16612 16840 tmp_tgt->m_devhdl = devhdl; /* XXX - duplicate? */
16613 16841 return (tmp_tgt);
16614 16842 }
16615 16843 tmp_tgt = kmem_zalloc(sizeof (struct mptsas_target), KM_SLEEP);
16616 16844 if (tmp_tgt == NULL) {
16617 16845 cmn_err(CE_WARN, "Fatal, allocated tgt failed");
16618 16846 return (NULL);
16619 16847 }
16620 16848 tmp_tgt->m_devhdl = devhdl;
16621 16849 tmp_tgt->m_addr.mta_wwn = wwid;
16622 16850 tmp_tgt->m_deviceinfo = devinfo;
16623 16851 tmp_tgt->m_addr.mta_phymask = phymask;
16624 16852 tmp_tgt->m_phynum = phynum;
16625 16853 /* Initialize the tgt structure */
16626 16854 tmp_tgt->m_qfull_retries = QFULL_RETRIES;
16627 16855 tmp_tgt->m_qfull_retry_interval =
16628 16856 drv_usectohz(QFULL_RETRY_INTERVAL * 1000);
16629 16857 tmp_tgt->m_t_throttle = MAX_THROTTLE;
16630 16858 TAILQ_INIT(&tmp_tgt->m_active_cmdq);
16631 16859
16632 16860 refhash_insert(refhash, tmp_tgt);
16633 16861
16634 16862 return (tmp_tgt);
16635 16863 }
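A minimal caller sketch of the find-or-create pattern above (illustrative only, not part of this webrev; the page0 locals dev_hdl, sas_wwn, dev_info, phymask and phy_id are assumed from the surrounding code, and mpt->m_mutex is assumed to be held):

	mptsas_target_t	*ptgt;

	/* Returns the existing {wwid, phymask} entry or inserts a new one. */
	ptgt = mptsas_tgt_alloc(mpt->m_targets, dev_hdl, sas_wwn,
	    dev_info, phymask, phy_id);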
16636 16864
16637 16865 static void
16638 16866 mptsas_smp_target_copy(mptsas_smp_t *src, mptsas_smp_t *dst)
16639 16867 {
16640 16868 dst->m_devhdl = src->m_devhdl;
16641 16869 dst->m_deviceinfo = src->m_deviceinfo;
16642 16870 dst->m_pdevhdl = src->m_pdevhdl;
16643 16871 dst->m_pdevinfo = src->m_pdevinfo;
16644 16872 }
16645 16873
16646 16874 static mptsas_smp_t *
16647 16875 mptsas_smp_alloc(mptsas_t *mpt, mptsas_smp_t *data)
16648 16876 {
16649 16877 mptsas_target_addr_t addr;
16650 16878 mptsas_smp_t *ret_data;
16651 16879
16652 16880 addr.mta_wwn = data->m_addr.mta_wwn;
16653 16881 addr.mta_phymask = data->m_addr.mta_phymask;
16654 16882 ret_data = refhash_lookup(mpt->m_smp_targets, &addr);
16655 16883 /*
16656 16884 * If there's already a matching SMP target, update its fields
16657 16885 * in place. Since the address is not changing, it's safe to do
16658 16886 * this. We cannot just bcopy() here because the structure we've
16659 16887 * been given has invalid hash links.
16660 16888 */
16661 16889 if (ret_data != NULL) {
16662 16890 mptsas_smp_target_copy(data, ret_data);
16663 16891 return (ret_data);
16664 16892 }
16665 16893
16666 16894 ret_data = kmem_alloc(sizeof (mptsas_smp_t), KM_SLEEP);
16667 16895 bcopy(data, ret_data, sizeof (mptsas_smp_t));
16668 16896 refhash_insert(mpt->m_smp_targets, ret_data);
16669 16897 return (ret_data);
16670 16898 }
16671 16899
16672 16900 /*
16673 16901 * Functions for SGPIO LED support
16674 16902 */
16675 16903 static dev_info_t *
16676 16904 mptsas_get_dip_from_dev(dev_t dev, mptsas_phymask_t *phymask)
16677 16905 {
16678 16906 dev_info_t *dip;
16679 16907 int prop;
16908 +
16680 16909 dip = e_ddi_hold_devi_by_dev(dev, 0);
16681 16910 if (dip == NULL)
16682 16911 return (dip);
16912 +
16913 + /*
16914 + * The phymask exists if the port is active, otherwise
16915 + * nothing to do.
16916 + */
16917 + if (ddi_prop_exists(DDI_DEV_T_ANY, dip,
16918 + DDI_PROP_DONTPASS | DDI_PROP_NOTPROM, "phymask") == 0) {
16919 + ddi_release_devi(dip);
16920 + return ((dev_info_t *)NULL);
16921 + }
16922 +
16683 16923 prop = ddi_prop_get_int(DDI_DEV_T_ANY, dip, 0,
16684 16924 "phymask", 0);
16925 +
16685 16926 *phymask = (mptsas_phymask_t)prop;
16686 16927 ddi_release_devi(dip);
16687 16928 return (dip);
16688 16929 }
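A minimal caller sketch (illustrative only, not from this webrev; the ENXIO choice is hypothetical) showing that, with the "phymask" guard added above, a NULL return now also covers an iport whose port is not active:

	mptsas_phymask_t	phymask;
	dev_info_t		*dip;

	dip = mptsas_get_dip_from_dev(dev, &phymask);
	if (dip == NULL)
		return (ENXIO);	/* no iport, or port not active */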
16689 16930 static mptsas_target_t *
16690 16931 mptsas_addr_to_ptgt(mptsas_t *mpt, char *addr, mptsas_phymask_t phymask)
16691 16932 {
16692 16933 uint8_t phynum;
16693 16934 uint64_t wwn;
16694 16935 int lun;
16695 16936 mptsas_target_t *ptgt = NULL;
16696 16937
16697 16938 if (mptsas_parse_address(addr, &wwn, &phynum, &lun) != DDI_SUCCESS) {
16698 16939 return (NULL);
16699 16940 }
16700 16941 if (addr[0] == 'w') {
16701 16942 ptgt = mptsas_wwid_to_ptgt(mpt, (int)phymask, wwn);
16702 16943 } else {
16703 16944 ptgt = mptsas_phy_to_tgt(mpt, (int)phymask, phynum);
16704 16945 }
16705 16946 return (ptgt);
16706 16947 }
16707 16948
16708 16949 static int
16709 -mptsas_flush_led_status(mptsas_t *mpt, mptsas_target_t *ptgt)
16950 +mptsas_flush_led_status(mptsas_t *mpt, mptsas_enclosure_t *mep, uint16_t idx)
16710 16951 {
16711 16952 uint32_t slotstatus = 0;
16712 16953
16954 + ASSERT3U(idx, <, mep->me_nslots);
16955 +
16713 16956 /* Build an MPI2 Slot Status based on our view of the world */
16714 - if (ptgt->m_led_status & (1 << (MPTSAS_LEDCTL_LED_IDENT - 1)))
16957 + if (mep->me_slotleds[idx] & (1 << (MPTSAS_LEDCTL_LED_IDENT - 1)))
16715 16958 slotstatus |= MPI2_SEP_REQ_SLOTSTATUS_IDENTIFY_REQUEST;
16716 - if (ptgt->m_led_status & (1 << (MPTSAS_LEDCTL_LED_FAIL - 1)))
16959 + if (mep->me_slotleds[idx] & (1 << (MPTSAS_LEDCTL_LED_FAIL - 1)))
16717 16960 slotstatus |= MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT;
16718 - if (ptgt->m_led_status & (1 << (MPTSAS_LEDCTL_LED_OK2RM - 1)))
16961 + if (mep->me_slotleds[idx] & (1 << (MPTSAS_LEDCTL_LED_OK2RM - 1)))
16719 16962 slotstatus |= MPI2_SEP_REQ_SLOTSTATUS_REQUEST_REMOVE;
16720 16963
16721 16964 /* Write it to the controller */
16722 16965 NDBG14(("mptsas_ioctl: set LED status %x for slot %x",
16723 - slotstatus, ptgt->m_slot_num));
16724 - return (mptsas_send_sep(mpt, ptgt, &slotstatus,
16966 + slotstatus, idx + mep->me_fslot));
16967 + return (mptsas_send_sep(mpt, mep, idx, &slotstatus,
16725 16968 MPI2_SEP_REQ_ACTION_WRITE_STATUS));
16726 16969 }
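A minimal sketch (illustrative only, not from this webrev) of how the reworked enclosure/slot-indexed interface might be driven for every slot of one enclosure; it assumes mpt->m_mutex is held, as mptsas_send_sep() asserts:

	uint16_t	idx;

	for (idx = 0; idx < mep->me_nslots; idx++) {
		/* push the cached LED state for slot mep->me_fslot + idx */
		(void) mptsas_flush_led_status(mpt, mep, idx);
	}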
16727 16970
16728 16971 /*
16729 16972 * send sep request, use enclosure/slot addressing
16730 16973 */
16731 16974 static int
16732 -mptsas_send_sep(mptsas_t *mpt, mptsas_target_t *ptgt,
16975 +mptsas_send_sep(mptsas_t *mpt, mptsas_enclosure_t *mep, uint16_t idx,
16733 16976 uint32_t *status, uint8_t act)
16734 16977 {
16735 16978 Mpi2SepRequest_t req;
16736 16979 Mpi2SepReply_t rep;
16737 16980 int ret;
16738 - mptsas_enclosure_t *mep;
16739 16981 uint16_t enctype;
16982 + uint16_t slot;
16740 16983
16741 16984 ASSERT(mutex_owned(&mpt->m_mutex));
16742 16985
16743 16986 /*
16744 - * We only support SEP control of directly-attached targets, in which
16745 - * case the "SEP" we're talking to is a virtual one contained within
16746 - * the HBA itself. This is necessary because DA targets typically have
16747 - * no other mechanism for LED control. Targets for which a separate
16748 - * enclosure service processor exists should be controlled via ses(7d)
16749 - * or sgen(7d). Furthermore, since such requests can time out, they
16750 - * should be made in user context rather than in response to
16751 - * asynchronous fabric changes.
16752 - *
16753 - * In addition, we do not support this operation for RAID volumes,
16754 - * since there is no slot associated with them.
16755 - */
16756 - if (!(ptgt->m_deviceinfo & DEVINFO_DIRECT_ATTACHED) ||
16757 - ptgt->m_addr.mta_phymask == 0) {
16758 - return (ENOTTY);
16759 - }
16760 -
16761 - /*
16762 16987 	 * Look through the enclosures and make sure that this enclosure
16763 16988 	 * corresponds to a directly-attached device. If we didn't find an
16764 16989 	 * enclosure for this device, don't send the ioctl.
16765 16990 */
16766 - mep = mptsas_enc_lookup(mpt, ptgt->m_enclosure);
16767 - if (mep == NULL)
16768 - return (ENOTTY);
16769 16991 enctype = mep->me_flags & MPI2_SAS_ENCLS0_FLAGS_MNG_MASK;
16770 16992 if (enctype != MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_SES &&
16771 16993 enctype != MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_SGPIO &&
16772 16994 enctype != MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_GPIO) {
16773 16995 return (ENOTTY);
16774 16996 }
16997 + slot = idx + mep->me_fslot;
16775 16998
16776 16999 bzero(&req, sizeof (req));
16777 17000 bzero(&rep, sizeof (rep));
16778 17001
16779 17002 req.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
16780 17003 req.Action = act;
16781 17004 req.Flags = MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS;
16782 - req.EnclosureHandle = LE_16(ptgt->m_enclosure);
16783 - req.Slot = LE_16(ptgt->m_slot_num);
17005 + req.EnclosureHandle = LE_16(mep->me_enchdl);
17006 + req.Slot = LE_16(slot);
16784 17007 if (act == MPI2_SEP_REQ_ACTION_WRITE_STATUS) {
16785 17008 req.SlotStatus = LE_32(*status);
16786 17009 }
16787 17010 ret = mptsas_do_passthru(mpt, (uint8_t *)&req, (uint8_t *)&rep, NULL,
16788 17011 sizeof (req), sizeof (rep), NULL, 0, NULL, 0, 60, FKIOCTL);
16789 17012 if (ret != 0) {
16790 17013 mptsas_log(mpt, CE_NOTE, "mptsas_send_sep: passthru SEP "
16791 17014 "Processor Request message error %d", ret);
16792 17015 return (ret);
16793 17016 }
16794 17017 	/* passthrough succeeded; check the IOC status */
16795 17018 if (LE_16(rep.IOCStatus) != MPI2_IOCSTATUS_SUCCESS) {
16796 17019 mptsas_log(mpt, CE_NOTE, "send_sep act %x: ioc "
16797 17020 "status:%x loginfo %x", act, LE_16(rep.IOCStatus),
16798 17021 LE_32(rep.IOCLogInfo));
16799 17022 switch (LE_16(rep.IOCStatus) & MPI2_IOCSTATUS_MASK) {
16800 17023 case MPI2_IOCSTATUS_INVALID_FUNCTION:
16801 17024 case MPI2_IOCSTATUS_INVALID_VPID:
16802 17025 case MPI2_IOCSTATUS_INVALID_FIELD:
16803 17026 case MPI2_IOCSTATUS_INVALID_STATE:
16804 17027 case MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED:
16805 17028 case MPI2_IOCSTATUS_CONFIG_INVALID_ACTION:
16806 17029 case MPI2_IOCSTATUS_CONFIG_INVALID_TYPE:
16807 17030 case MPI2_IOCSTATUS_CONFIG_INVALID_PAGE:
16808 17031 case MPI2_IOCSTATUS_CONFIG_INVALID_DATA:
16809 17032 case MPI2_IOCSTATUS_CONFIG_NO_DEFAULTS:
16810 17033 return (EINVAL);
16811 17034 case MPI2_IOCSTATUS_BUSY:
16812 17035 return (EBUSY);
16813 17036 case MPI2_IOCSTATUS_INSUFFICIENT_RESOURCES:
16814 17037 return (EAGAIN);
16815 17038 case MPI2_IOCSTATUS_INVALID_SGL:
16816 17039 case MPI2_IOCSTATUS_INTERNAL_ERROR:
16817 17040 case MPI2_IOCSTATUS_CONFIG_CANT_COMMIT:
16818 17041 default:
16819 17042 return (EIO);
16820 17043 }
16821 17044 }
16822 17045 if (act != MPI2_SEP_REQ_ACTION_WRITE_STATUS) {
16823 17046 *status = LE_32(rep.SlotStatus);
16824 17047 }
16825 17048
16826 17049 return (0);
16827 17050 }
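The flush path above only writes status; as a sketch of the read direction, the same routine can in principle be called with MPI2_SEP_REQ_ACTION_READ_STATUS to fetch the IOC's view of a slot. The wrapper below is hypothetical and not part of this change.

/*
 * Hypothetical example: read back the SEP slot status for slot index
 * idx of enclosure mep.  m_mutex must be held, matching the ASSERT in
 * mptsas_send_sep().
 */
static int
example_read_slot_status(mptsas_t *mpt, mptsas_enclosure_t *mep,
    uint16_t idx, uint32_t *statusp)
{
	*statusp = 0;
	return (mptsas_send_sep(mpt, mep, idx, statusp,
	    MPI2_SEP_REQ_ACTION_READ_STATUS));
}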
16828 17051
16829 17052 int
16830 17053 mptsas_dma_addr_create(mptsas_t *mpt, ddi_dma_attr_t dma_attr,
16831 17054 ddi_dma_handle_t *dma_hdp, ddi_acc_handle_t *acc_hdp, caddr_t *dma_memp,
16832 17055 uint32_t alloc_size, ddi_dma_cookie_t *cookiep)
16833 17056 {
16834 17057 ddi_dma_cookie_t new_cookie;
16835 17058 size_t alloc_len;
16836 17059 uint_t ncookie;
16837 17060
16838 17061 if (cookiep == NULL)
16839 17062 cookiep = &new_cookie;
16840 17063
16841 17064 if (ddi_dma_alloc_handle(mpt->m_dip, &dma_attr, DDI_DMA_SLEEP,
16842 17065 NULL, dma_hdp) != DDI_SUCCESS) {
16843 17066 return (FALSE);
16844 17067 }
16845 17068
16846 17069 if (ddi_dma_mem_alloc(*dma_hdp, alloc_size, &mpt->m_dev_acc_attr,
16847 17070 DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, NULL, dma_memp, &alloc_len,
16848 17071 acc_hdp) != DDI_SUCCESS) {
16849 17072 ddi_dma_free_handle(dma_hdp);
16850 17073 *dma_hdp = NULL;
16851 17074 return (FALSE);
16852 17075 }
16853 17076
16854 17077 if (ddi_dma_addr_bind_handle(*dma_hdp, NULL, *dma_memp, alloc_len,
16855 17078 (DDI_DMA_RDWR | DDI_DMA_CONSISTENT), DDI_DMA_SLEEP, NULL,
16856 17079 cookiep, &ncookie) != DDI_DMA_MAPPED) {
16857 17080 (void) ddi_dma_mem_free(acc_hdp);
16858 17081 ddi_dma_free_handle(dma_hdp);
16859 17082 *dma_hdp = NULL;
16860 17083 return (FALSE);
16861 17084 }
16862 17085
16863 17086 return (TRUE);
16864 17087 }
16865 17088
16866 17089 void
16867 17090 mptsas_dma_addr_destroy(ddi_dma_handle_t *dma_hdp, ddi_acc_handle_t *acc_hdp)
16868 17091 {
16869 17092 if (*dma_hdp == NULL)
16870 17093 return;
16871 17094
16872 17095 (void) ddi_dma_unbind_handle(*dma_hdp);
16873 17096 (void) ddi_dma_mem_free(acc_hdp);
16874 17097 ddi_dma_free_handle(dma_hdp);
16875 17098 *dma_hdp = NULL;
16876 17099 }
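A short usage sketch of the create/destroy pair above; the DMA attribute member, buffer size, and function name are illustrative assumptions rather than code from this change.

/*
 * Hypothetical example: allocate a small DMA-able, consistent buffer,
 * use it, and tear it down.  The attribute passed (m_msg_dma_attr) and
 * the 4 KiB size are placeholders for whatever a real caller needs.
 */
static int
example_dma_buffer(mptsas_t *mpt)
{
	ddi_dma_handle_t dma_hdl = NULL;
	ddi_acc_handle_t acc_hdl = NULL;
	ddi_dma_cookie_t cookie;
	caddr_t memp;

	if (mptsas_dma_addr_create(mpt, mpt->m_msg_dma_attr, &dma_hdl,
	    &acc_hdl, &memp, 4096, &cookie) == FALSE)
		return (DDI_FAILURE);

	/* ... DMA to/from memp; the device sees cookie.dmac_laddress ... */

	mptsas_dma_addr_destroy(&dma_hdl, &acc_hdl);
	return (DDI_SUCCESS);
}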