Merge git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (110 commits)
  [SCSI] qla2xxx: Refactor call to qla2xxx_read_sfp for thermal temperature.
  [SCSI] qla2xxx: Unify the read/write sfp mailbox command routines.
  [SCSI] qla2xxx: Clear complete initialization control block.
  [SCSI] qla2xxx: Allow an override of the registered maximum LUN.
  [SCSI] qla2xxx: Add host number in reset and quiescent message logs.
  [SCSI] qla2xxx: Correctly read sfp single byte mailbox register.
  [SCSI] qla2xxx: Add qla82xx_rom_unlock() function.
  [SCSI] qla2xxx: Log if qla82xx firmware fails to load from flash.
  [SCSI] qla2xxx: Use passed in host to initialize local scsi_qla_host in queuecommand function
  [SCSI] qla2xxx: Correct buffer start in edc sysfs debug print.
  [SCSI] qla2xxx: Update firmware version after flash update for ISP82xx.
  [SCSI] qla2xxx: Fix hang during driver unload when vport is active.
  [SCSI] qla2xxx: Properly set the dsd_list_len for dsd_chaining in cmd type 6.
  [SCSI] qla2xxx: Fix virtual port failing to login after chip reset.
  [SCSI] qla2xxx: Fix vport delete hang when logins are outstanding.
  [SCSI] hpsa: Change memset using sizeof(ptr) to sizeof(*ptr)
  [SCSI] ipr: Rate limit DMA mapping errors
  [SCSI] hpsa: add P2000 to list of shared SAS devices
  [SCSI] hpsa: do not attempt PCI power management reset method if we know it won't work.
  [SCSI] hpsa: remove superfluous sleeps around reset code
  ...
Author: Linus Torvalds
Date:   2011-05-20 13:29:52 -07:00
Commit: ad9471752e

119 changed files with 6398 additions and 1690 deletions

View File

@@ -1,11 +1,11 @@
-Copyright (c) 2003-2005 QLogic Corporation
-QLogic Linux Fibre Channel HBA Driver
-This program includes a device driver for Linux 2.6 that may be
+Copyright (c) 2003-2011 QLogic Corporation
+QLogic Linux/ESX Fibre Channel HBA Driver
+This program includes a device driver for Linux 2.6/ESX that may be
 distributed with QLogic hardware specific firmware binary file.
 You may modify and redistribute the device driver code under the
-GNU General Public License as published by the Free Software
-Foundation (version 2 or a later version).
+GNU General Public License (a copy of which is attached hereto as
+Exhibit A) published by the Free Software Foundation (version 2).
 You may redistribute the hardware specific firmware binary file
 under the following terms:
@@ -43,3 +43,285 @@ OTHERWISE IN ANY INTELLECTUAL PROPERTY RIGHTS (PATENT, COPYRIGHT,
 TRADE SECRET, MASK WORK, OR OTHER PROPRIETARY RIGHT) EMBODIED IN
 ANY OTHER QLOGIC HARDWARE OR SOFTWARE EITHER SOLELY OR IN
 COMBINATION WITH THIS PROGRAM.
EXHIBIT A
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

View File

@@ -5611,9 +5611,9 @@ F:	include/linux/ata.h
 F:	include/linux/libata.h
 
 SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER
-M:	Jayamohan Kallickal <jayamohank@serverengines.com>
+M:	Jayamohan Kallickal <jayamohan.kallickal@emulex.com>
 L:	linux-scsi@vger.kernel.org
-W:	http://www.serverengines.com
+W:	http://www.emulex.com
 S:	Supported
 F:	drivers/scsi/be2iscsi/

View File

@@ -76,8 +76,8 @@
 #define COPYRIGHT	"Copyright (c) 1999-2008 " MODULEAUTHOR
 #endif
 
-#define MPT_LINUX_VERSION_COMMON	"3.04.18"
-#define MPT_LINUX_PACKAGE_NAME		"@(#)mptlinux-3.04.18"
+#define MPT_LINUX_VERSION_COMMON	"3.04.19"
+#define MPT_LINUX_PACKAGE_NAME		"@(#)mptlinux-3.04.19"
 #define WHAT_MAGIC_STRING	"@" "(" "#" ")"
 
 #define show_mptmod_ver(s,ver)  \

View File

@@ -5012,7 +5012,6 @@ mptsas_event_process(MPT_ADAPTER *ioc, EventNotificationReply_t *reply)
 	    (ioc_stat & MPI_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE)) {
 		VirtTarget *vtarget = NULL;
 		u8 id, channel;
-		u32 log_info = le32_to_cpu(reply->IOCLogInfo);
 
 		id = sas_event_data->TargetID;
 		channel = sas_event_data->Bus;
@@ -5023,7 +5022,8 @@ mptsas_event_process(MPT_ADAPTER *ioc, EventNotificationReply_t *reply)
 			    "LogInfo (0x%x) available for "
 			    "INTERNAL_DEVICE_RESET"
 			    "fw_id %d fw_channel %d\n", ioc->name,
-			    log_info, id, channel));
+			    le32_to_cpu(reply->IOCLogInfo),
+			    id, channel));
 		if (vtarget->raidVolume) {
 			devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT
 			    "Skipping Raid Volume for inDMD\n",

View File

@@ -1415,11 +1415,8 @@ mptscsih_qcmd(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
 	dmfprintk(ioc, printk(MYIOC_s_DEBUG_FMT "qcmd: SCpnt=%p, done()=%p\n",
 		ioc->name, SCpnt, done));
 
-	if (ioc->taskmgmt_quiesce_io) {
-		dtmprintk(ioc, printk(MYIOC_s_WARN_FMT "qcmd: SCpnt=%p timeout + 60HZ\n",
-			ioc->name, SCpnt));
+	if (ioc->taskmgmt_quiesce_io)
 		return SCSI_MLQUEUE_HOST_BUSY;
-	}
 
 	/*
 	 *  Put together a MPT SCSI request...
@@ -1773,7 +1770,6 @@ mptscsih_abort(struct scsi_cmnd * SCpnt)
 	int	 scpnt_idx;
 	int	 retval;
 	VirtDevice	 *vdevice;
-	ulong	 sn = SCpnt->serial_number;
 	MPT_ADAPTER	*ioc;
 
 	/* If we can't locate our host adapter structure, return FAILED status.
@@ -1859,8 +1855,7 @@
 	       vdevice->vtarget->id, vdevice->lun,
 	       ctx2abort, mptscsih_get_tm_timeout(ioc));
 
-	if (SCPNT_TO_LOOKUP_IDX(ioc, SCpnt) == scpnt_idx &&
-	    SCpnt->serial_number == sn) {
+	if (SCPNT_TO_LOOKUP_IDX(ioc, SCpnt) == scpnt_idx) {
 		dtmprintk(ioc, printk(MYIOC_s_DEBUG_FMT
 		    "task abort: command still in active list! (sc=%p)\n",
 		    ioc->name, SCpnt));
@@ -1873,9 +1868,9 @@
 	}
 
  out:
-	printk(MYIOC_s_INFO_FMT "task abort: %s (rv=%04x) (sc=%p) (sn=%ld)\n",
+	printk(MYIOC_s_INFO_FMT "task abort: %s (rv=%04x) (sc=%p)\n",
 	    ioc->name, ((retval == SUCCESS) ? "SUCCESS" : "FAILED"), retval,
-	    SCpnt, SCpnt->serial_number);
+	    SCpnt);
 
 	return retval;
 }

View File

@@ -867,6 +867,10 @@ static int mptspi_write_spi_device_pg1(struct scsi_target *starget,
 	struct _x_config_parms cfg;
 	struct _CONFIG_PAGE_HEADER hdr;
 	int err = -EBUSY;
+	u32 nego_parms;
+	u32 period;
+	struct scsi_device *sdev;
+	int i;
 
 	/* don't allow updating nego parameters on RAID devices */
 	if (starget->channel == 0 &&
@@ -904,6 +908,24 @@
 	pg1->Header.PageNumber = hdr.PageNumber;
 	pg1->Header.PageType = hdr.PageType;
 
+	nego_parms = le32_to_cpu(pg1->RequestedParameters);
+	period = (nego_parms & MPI_SCSIDEVPAGE1_RP_MIN_SYNC_PERIOD_MASK) >>
+		MPI_SCSIDEVPAGE1_RP_SHIFT_MIN_SYNC_PERIOD;
+	if (period == 8) {
+		/* Turn on inline data padding for TAPE when running U320 */
+		for (i = 0 ; i < 16; i++) {
+			sdev = scsi_device_lookup_by_target(starget, i);
+			if (sdev && sdev->type == TYPE_TAPE) {
+				sdev_printk(KERN_DEBUG, sdev, MYIOC_s_FMT
+					    "IDP:ON\n", ioc->name);
+				nego_parms |= MPI_SCSIDEVPAGE1_RP_IDP;
+				pg1->RequestedParameters =
+				    cpu_to_le32(nego_parms);
+				break;
+			}
+		}
+	}
+
 	mptspi_print_write_nego(hd, starget, le32_to_cpu(pg1->RequestedParameters));
 
 	if (mpt_config(ioc, &cfg)) {

View File

@@ -361,7 +361,7 @@ static int i2o_scsi_reply(struct i2o_controller *c, u32 m,
 	 */
 	error = le32_to_cpu(msg->body[0]);
 
-	osm_debug("Completed %ld\n", cmd->serial_number);
+	osm_debug("Completed %0x%p\n", cmd);
 
 	cmd->result = error & 0xff;
 	/*
@@ -678,7 +678,7 @@ static int i2o_scsi_queuecommand_lck(struct scsi_cmnd *SCpnt,
 	/* Queue the message */
 	i2o_msg_post(c, msg);
 
-	osm_debug("Issued %ld\n", SCpnt->serial_number);
+	osm_debug("Issued %0x%p\n", SCpnt);
 
 	return 0;

View File

@@ -75,8 +75,10 @@ MODULE_AUTHOR("Nick Cheng <support@areca.com.tw>");
 MODULE_DESCRIPTION("ARECA (ARC11xx/12xx/16xx/1880) SATA/SAS RAID Host Bus Adapter");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_VERSION(ARCMSR_DRIVER_VERSION);
-static int sleeptime = 10;
-static int retrycount = 12;
+
+#define ARCMSR_SLEEPTIME	10
+#define ARCMSR_RETRYCOUNT	12
+
 wait_queue_head_t wait_q;
 static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
 	struct scsi_cmnd *cmd);
@@ -171,24 +173,6 @@ static struct pci_driver arcmsr_pci_driver = {
 ****************************************************************************
 ****************************************************************************
 */
-int arcmsr_sleep_for_bus_reset(struct scsi_cmnd *cmd)
-{
-	struct Scsi_Host *shost = NULL;
-	int i, isleep;
-	shost = cmd->device->host;
-	isleep = sleeptime / 10;
-	if (isleep > 0) {
-		for (i = 0; i < isleep; i++) {
-			msleep(10000);
-		}
-	}
-	isleep = sleeptime % 10;
-	if (isleep > 0) {
-		msleep(isleep*1000);
-	}
-	return 0;
-}
 
 static void arcmsr_free_hbb_mu(struct AdapterControlBlock *acb)
 {
@@ -323,66 +307,64 @@ static void arcmsr_define_adapter_type(struct AdapterControlBlock *acb)
 	default: acb->adapter_type = ACB_ADAPTER_TYPE_A;
 	}
 }
 
 static uint8_t arcmsr_hba_wait_msgint_ready(struct AdapterControlBlock *acb)
 {
 	struct MessageUnit_A __iomem *reg = acb->pmuA;
-	uint32_t Index;
-	uint8_t Retries = 0x00;
-	do {
-		for (Index = 0; Index < 100; Index++) {
-			if (readl(&reg->outbound_intstatus) &
-				ARCMSR_MU_OUTBOUND_MESSAGE0_INT) {
-				writel(ARCMSR_MU_OUTBOUND_MESSAGE0_INT,
-					&reg->outbound_intstatus);
-				return true;
-			}
-			msleep(10);
-		}/*max 1 seconds*/
-	} while (Retries++ < 20);/*max 20 sec*/
+	int i;
+
+	for (i = 0; i < 2000; i++) {
+		if (readl(&reg->outbound_intstatus) &
+			ARCMSR_MU_OUTBOUND_MESSAGE0_INT) {
+			writel(ARCMSR_MU_OUTBOUND_MESSAGE0_INT,
+				&reg->outbound_intstatus);
+			return true;
+		}
+		msleep(10);
+	} /* max 20 seconds */
 	return false;
 }
 
 static uint8_t arcmsr_hbb_wait_msgint_ready(struct AdapterControlBlock *acb)
 {
 	struct MessageUnit_B *reg = acb->pmuB;
-	uint32_t Index;
-	uint8_t Retries = 0x00;
-	do {
-		for (Index = 0; Index < 100; Index++) {
-			if (readl(reg->iop2drv_doorbell)
-				& ARCMSR_IOP2DRV_MESSAGE_CMD_DONE) {
-				writel(ARCMSR_MESSAGE_INT_CLEAR_PATTERN
-					, reg->iop2drv_doorbell);
-				writel(ARCMSR_DRV2IOP_END_OF_INTERRUPT, reg->drv2iop_doorbell);
-				return true;
-			}
-			msleep(10);
-		}/*max 1 seconds*/
-	} while (Retries++ < 20);/*max 20 sec*/
+	int i;
+
+	for (i = 0; i < 2000; i++) {
+		if (readl(reg->iop2drv_doorbell)
+			& ARCMSR_IOP2DRV_MESSAGE_CMD_DONE) {
+			writel(ARCMSR_MESSAGE_INT_CLEAR_PATTERN,
+				reg->iop2drv_doorbell);
+			writel(ARCMSR_DRV2IOP_END_OF_INTERRUPT,
+				reg->drv2iop_doorbell);
+			return true;
+		}
+		msleep(10);
+	} /* max 20 seconds */
 	return false;
 }
 
 static uint8_t arcmsr_hbc_wait_msgint_ready(struct AdapterControlBlock *pACB)
 {
 	struct MessageUnit_C *phbcmu = (struct MessageUnit_C *)pACB->pmuC;
-	unsigned char Retries = 0x00;
-	uint32_t Index;
-	do {
-		for (Index = 0; Index < 100; Index++) {
-			if (readl(&phbcmu->outbound_doorbell) & ARCMSR_HBCMU_IOP2DRV_MESSAGE_CMD_DONE) {
-				writel(ARCMSR_HBCMU_IOP2DRV_MESSAGE_CMD_DONE_DOORBELL_CLEAR, &phbcmu->outbound_doorbell_clear);/*clear interrupt*/
-				return true;
-			}
-			/* one us delay */
-			msleep(10);
-		} /*max 1 seconds*/
-	} while (Retries++ < 20); /*max 20 sec*/
+	int i;
+
+	for (i = 0; i < 2000; i++) {
+		if (readl(&phbcmu->outbound_doorbell)
+			& ARCMSR_HBCMU_IOP2DRV_MESSAGE_CMD_DONE) {
+			writel(ARCMSR_HBCMU_IOP2DRV_MESSAGE_CMD_DONE_DOORBELL_CLEAR,
+				&phbcmu->outbound_doorbell_clear); /*clear interrupt*/
+			return true;
+		}
+		msleep(10);
+	} /* max 20 seconds */
 	return false;
 }
 
 static void arcmsr_flush_hba_cache(struct AdapterControlBlock *acb)
 {
 	struct MessageUnit_A __iomem *reg = acb->pmuA;
@@ -459,10 +441,11 @@ static int arcmsr_alloc_ccb_pool(struct AdapterControlBlock *acb)
 	struct CommandControlBlock *ccb_tmp;
 	int i = 0, j = 0;
 	dma_addr_t cdb_phyaddr;
-	unsigned long roundup_ccbsize = 0, offset;
+	unsigned long roundup_ccbsize;
 	unsigned long max_xfer_len;
 	unsigned long max_sg_entrys;
 	uint32_t  firm_config_version;
+
 	for (i = 0; i < ARCMSR_MAX_TARGETID; i++)
 		for (j = 0; j < ARCMSR_MAX_TARGETLUN; j++)
 			acb->devstate[i][j] = ARECA_RAID_GONE;
@@ -472,23 +455,20 @@ static int arcmsr_alloc_ccb_pool(struct AdapterControlBlock *acb)
 	firm_config_version = acb->firm_cfg_version;
 	if((firm_config_version & 0xFF) >= 3){
 		max_xfer_len = (ARCMSR_CDB_SG_PAGE_LENGTH << ((firm_config_version >> 8) & 0xFF)) * 1024;/* max 4M byte */
 		max_sg_entrys = (max_xfer_len/4096);
 	}
 	acb->host->max_sectors = max_xfer_len/512;
 	acb->host->sg_tablesize = max_sg_entrys;
 	roundup_ccbsize = roundup(sizeof(struct CommandControlBlock) + (max_sg_entrys - 1) * sizeof(struct SG64ENTRY), 32);
-	acb->uncache_size = roundup_ccbsize * ARCMSR_MAX_FREECCB_NUM + 32;
+	acb->uncache_size = roundup_ccbsize * ARCMSR_MAX_FREECCB_NUM;
 	dma_coherent = dma_alloc_coherent(&pdev->dev, acb->uncache_size, &dma_coherent_handle, GFP_KERNEL);
 	if(!dma_coherent){
-		printk(KERN_NOTICE "arcmsr%d: dma_alloc_coherent got error \n", acb->host->host_no);
+		printk(KERN_NOTICE "arcmsr%d: dma_alloc_coherent got error\n", acb->host->host_no);
 		return -ENOMEM;
 	}
 	acb->dma_coherent = dma_coherent;
 	acb->dma_coherent_handle = dma_coherent_handle;
 	memset(dma_coherent, 0, acb->uncache_size);
-	offset = roundup((unsigned long)dma_coherent, 32) - (unsigned long)dma_coherent;
-	dma_coherent_handle = dma_coherent_handle + offset;
-	dma_coherent = (struct CommandControlBlock *)dma_coherent + offset;
 	ccb_tmp = dma_coherent;
 	acb->vir2phy_offset = (unsigned long)dma_coherent - (unsigned long)dma_coherent_handle;
 	for(i = 0; i < ARCMSR_MAX_FREECCB_NUM; i++){
@@ -2602,12 +2582,8 @@ static int arcmsr_iop_confirm(struct AdapterControlBlock *acb)
 		if (cdb_phyaddr_hi32 != 0) {
 			struct MessageUnit_C *reg = (struct MessageUnit_C *)acb->pmuC;
 
-			if (cdb_phyaddr_hi32 != 0) {
-				unsigned char Retries = 0x00;
-				do {
-					printk(KERN_NOTICE "arcmsr%d: cdb_phyaddr_hi32=0x%x \n", acb->adapter_index, cdb_phyaddr_hi32);
-				} while (Retries++ < 100);
-			}
+			printk(KERN_NOTICE "arcmsr%d: cdb_phyaddr_hi32=0x%x\n",
+					acb->adapter_index, cdb_phyaddr_hi32);
 			writel(ARCMSR_SIGNATURE_SET_CONFIG, &reg->msgcode_rwbuffer[0]);
 			writel(cdb_phyaddr_hi32, &reg->msgcode_rwbuffer[1]);
 			writel(ARCMSR_INBOUND_MESG0_SET_CONFIG, &reg->inbound_msgaddr0);
@@ -2955,12 +2931,12 @@ static int arcmsr_bus_reset(struct scsi_cmnd *cmd)
 			arcmsr_hardware_reset(acb);
 			acb->acb_flags &= ~ACB_F_IOP_INITED;
 sleep_again:
-			arcmsr_sleep_for_bus_reset(cmd);
+			ssleep(ARCMSR_SLEEPTIME);
 			if ((readl(&reg->outbound_msgaddr1) & ARCMSR_OUTBOUND_MESG1_FIRMWARE_OK) == 0) {
-				printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, retry=%d \n", acb->host->host_no, retry_count);
-				if (retry_count > retrycount) {
+				printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, retry=%d\n", acb->host->host_no, retry_count);
+				if (retry_count > ARCMSR_RETRYCOUNT) {
 					acb->fw_flag = FW_DEADLOCK;
-					printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, RETRY TERMINATED!! \n", acb->host->host_no);
+					printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, RETRY TERMINATED!!\n", acb->host->host_no);
 					return FAILED;
 				}
 				retry_count++;
@@ -3025,12 +3001,12 @@ sleep_again:
 			arcmsr_hardware_reset(acb);
 			acb->acb_flags &= ~ACB_F_IOP_INITED;
sleep:
-			arcmsr_sleep_for_bus_reset(cmd);
+			ssleep(ARCMSR_SLEEPTIME);
 			if ((readl(&reg->host_diagnostic) & 0x04) != 0) {
-				printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, retry=%d \n", acb->host->host_no, retry_count);
-				if (retry_count > retrycount) {
+				printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, retry=%d\n", acb->host->host_no, retry_count);
+				if (retry_count > ARCMSR_RETRYCOUNT) {
 					acb->fw_flag = FW_DEADLOCK;
-					printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, RETRY TERMINATED!! \n", acb->host->host_no);
+					printk(KERN_ERR "arcmsr%d: waiting for hw bus reset return, RETRY TERMINATED!!\n", acb->host->host_no);
 					return FAILED;
 				}
 				retry_count++;

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -8,11 +8,11 @@
  * Public License is included in this distribution in the file called COPYING.
  *
  * Contact Information:
- * linux-drivers@serverengines.com
+ * linux-drivers@emulex.com
  *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #ifndef BEISCSI_H

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -8,11 +8,11 @@
  * Public License is included in this distribution in the file called COPYING.
  *
  * Contact Information:
- * linux-drivers@serverengines.com
+ * linux-drivers@emulex.com
  *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #include "be.h"
@@ -458,6 +458,7 @@ void be_cmd_hdr_prepare(struct be_cmd_req_hdr *req_hdr,
 	req_hdr->opcode = opcode;
 	req_hdr->subsystem = subsystem;
 	req_hdr->request_length = cpu_to_le32(cmd_len - sizeof(*req_hdr));
+	req_hdr->timeout = 120;
 }
 
 static void be_cmd_page_addrs_prepare(struct phys_addr *pages, u32 max_pages,

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -8,11 +8,11 @@
  * Public License is included in this distribution in the file called COPYING.
  *
  * Contact Information:
- * linux-drivers@serverengines.com
+ * linux-drivers@emulex.com
  *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
 */
 
 #ifndef BEISCSI_CMDS_H

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -7,15 +7,14 @@
  * as published by the Free Software Foundation. The full GNU General
  * Public License is included in this distribution in the file called COPYING.
  *
- * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
  *
  * Contact Information:
- * linux-drivers@serverengines.com
- *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * linux-drivers@emulex.com
  *
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #include <scsi/libiscsi.h>

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -7,15 +7,14 @@
  * as published by the Free Software Foundation. The full GNU General
  * Public License is included in this distribution in the file called COPYING.
  *
- * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
  *
  * Contact Information:
- * linux-drivers@serverengines.com
- *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * linux-drivers@emulex.com
  *
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #ifndef _BE_ISCSI_

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -7,16 +7,16 @@
  * as published by the Free Software Foundation. The full GNU General
  * Public License is included in this distribution in the file called COPYING.
  *
- * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
  *
  * Contact Information:
- * linux-drivers@serverengines.com
- *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * linux-drivers@emulex.com
  *
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #include <linux/reboot.h>
 #include <linux/delay.h>
 #include <linux/slab.h>
@@ -420,7 +420,8 @@ static int beiscsi_setup_boot_info(struct beiscsi_hba *phba)
 	return 0;
 
 free_kset:
-	iscsi_boot_destroy_kset(phba->boot_kset);
+	if (phba->boot_kset)
+		iscsi_boot_destroy_kset(phba->boot_kset);
 	return -ENOMEM;
 }
@@ -3464,23 +3465,23 @@ static void hwi_enable_intr(struct beiscsi_hba *phba)
 	addr = (u8 __iomem *) ((u8 __iomem *) ctrl->pcicfg +
 			PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET);
 	reg = ioread32(addr);
-	SE_DEBUG(DBG_LVL_8, "reg =x%08x\n", reg);
 
 	enabled = reg & MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK;
 	if (!enabled) {
 		reg |= MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK;
 		SE_DEBUG(DBG_LVL_8, "reg =x%08x addr=%p\n", reg, addr);
 		iowrite32(reg, addr);
-		if (!phba->msix_enabled) {
-			eq = &phwi_context->be_eq[0].q;
+	}
+
+	if (!phba->msix_enabled) {
+		eq = &phwi_context->be_eq[0].q;
+		SE_DEBUG(DBG_LVL_8, "eq->id=%d\n", eq->id);
+		hwi_ring_eq_db(phba, eq->id, 0, 0, 1, 1);
+	} else {
+		for (i = 0; i <= phba->num_cpus; i++) {
+			eq = &phwi_context->be_eq[i].q;
 			SE_DEBUG(DBG_LVL_8, "eq->id=%d\n", eq->id);
 			hwi_ring_eq_db(phba, eq->id, 0, 0, 1, 1);
-		} else {
-			for (i = 0; i <= phba->num_cpus; i++) {
-				eq = &phwi_context->be_eq[i].q;
-				SE_DEBUG(DBG_LVL_8, "eq->id=%d\n", eq->id);
-				hwi_ring_eq_db(phba, eq->id, 0, 0, 1, 1);
-			}
 		}
 	}
 }
@@ -4019,12 +4020,17 @@ static int beiscsi_mtask(struct iscsi_task *task)
 		hwi_write_buffer(pwrb, task);
 		break;
 	case ISCSI_OP_NOOP_OUT:
-		AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb,
-			      INI_RD_CMD);
-		if (task->hdr->ttt == ISCSI_RESERVED_TAG)
+		if (task->hdr->ttt != ISCSI_RESERVED_TAG) {
+			AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb,
+				      TGT_DM_CMD);
+			AMAP_SET_BITS(struct amap_iscsi_wrb, cmdsn_itt,
+				      pwrb, 0);
 			AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 0);
-		else
+		} else {
+			AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb,
+				      INI_RD_CMD);
 			AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 1);
+		}
 		hwi_write_buffer(pwrb, task);
 		break;
 	case ISCSI_OP_TEXT:
@@ -4144,10 +4150,11 @@ static void beiscsi_remove(struct pci_dev *pcidev)
 			    phba->ctrl.mbox_mem_alloced.size,
 			    phba->ctrl.mbox_mem_alloced.va,
 			    phba->ctrl.mbox_mem_alloced.dma);
+	if (phba->boot_kset)
+		iscsi_boot_destroy_kset(phba->boot_kset);
 	iscsi_host_remove(phba->shost);
 	pci_dev_put(phba->pcidev);
 	iscsi_host_free(phba->shost);
-	iscsi_boot_destroy_kset(phba->boot_kset);
 }
 
 static void beiscsi_msix_enable(struct beiscsi_hba *phba)

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -7,15 +7,14 @@
  * as published by the Free Software Foundation. The full GNU General
  * Public License is included in this distribution in the file called COPYING.
  *
- * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
  *
  * Contact Information:
- * linux-drivers@serverengines.com
- *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * linux-drivers@emulex.com
  *
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #ifndef _BEISCSI_MAIN_
@@ -35,7 +34,7 @@
 #include "be.h"
 
 #define DRV_NAME		"be2iscsi"
-#define BUILD_STR		"2.0.549.0"
+#define BUILD_STR		"2.103.298.0"
 #define BE_NAME			"ServerEngines BladeEngine2" \
 				"Linux iSCSI Driver version" BUILD_STR
 #define DRV_DESC		BE_NAME " " "Driver"

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -7,15 +7,14 @@
  * as published by the Free Software Foundation. The full GNU General
  * Public License is included in this distribution in the file called COPYING.
  *
- * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
  *
 * Contact Information:
- * linux-drivers@serverengines.com
- *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * linux-drivers@emulex.com
  *
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #include "be_mgmt.h"
@@ -203,8 +202,8 @@ int mgmt_epfw_cleanup(struct beiscsi_hba *phba, unsigned short chute)
 			   OPCODE_COMMON_ISCSI_CLEANUP, sizeof(*req));
 
 	req->chute = chute;
-	req->hdr_ring_id = 0;
-	req->data_ring_id = 0;
+	req->hdr_ring_id = cpu_to_le16(HWI_GET_DEF_HDRQ_ID(phba));
+	req->data_ring_id = cpu_to_le16(HWI_GET_DEF_BUFQ_ID(phba));
 
 	status = be_mcc_notify_wait(phba);
 	if (status)

View File

@@ -1,5 +1,5 @@
 /**
- * Copyright (C) 2005 - 2010 ServerEngines
+ * Copyright (C) 2005 - 2011 Emulex
  * All rights reserved.
  *
  * This program is free software; you can redistribute it and/or
@@ -7,15 +7,14 @@
  * as published by the Free Software Foundation. The full GNU General
  * Public License is included in this distribution in the file called COPYING.
  *
- * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
  *
 * Contact Information:
- * linux-drivers@serverengines.com
- *
- * ServerEngines
- * 209 N. Fair Oaks Ave
- * Sunnyvale, CA 94085
+ * linux-drivers@emulex.com
  *
+ * Emulex
+ * 3333 Susan Street
+ * Costa Mesa, CA 92626
  */
 
 #ifndef _BEISCSI_MGMT_

View File

@@ -57,9 +57,19 @@ int pcie_max_read_reqsz;
 int		bfa_debugfs_enable = 1;
 int		msix_disable_cb = 0, msix_disable_ct = 0;
 
+/* Firmware releated */
 u32	bfi_image_ct_fc_size, bfi_image_ct_cna_size, bfi_image_cb_fc_size;
 u32	*bfi_image_ct_fc, *bfi_image_ct_cna, *bfi_image_cb_fc;
 
+#define BFAD_FW_FILE_CT_FC	"ctfw_fc.bin"
+#define BFAD_FW_FILE_CT_CNA	"ctfw_cna.bin"
+#define BFAD_FW_FILE_CB_FC	"cbfw_fc.bin"
+
+static u32 *bfad_load_fwimg(struct pci_dev *pdev);
+static void bfad_free_fwimg(void);
+static void bfad_read_firmware(struct pci_dev *pdev, u32 **bfi_image,
+		u32 *bfi_image_size, char *fw_name);
+
 static const char *msix_name_ct[] = {
 	"cpe0", "cpe1", "cpe2", "cpe3",
 	"rme0", "rme1", "rme2", "rme3",
@@ -222,6 +232,9 @@ bfad_sm_created(struct bfad_s *bfad, enum bfad_sm_event event)
 		if ((bfad->bfad_flags & BFAD_HAL_INIT_DONE)) {
 			bfa_sm_send_event(bfad, BFAD_E_INIT_SUCCESS);
 		} else {
+			printk(KERN_WARNING
+				"bfa %s: bfa init failed\n",
+				bfad->pci_name);
 			bfad->bfad_flags |= BFAD_HAL_INIT_FAIL;
 			bfa_sm_send_event(bfad, BFAD_E_INIT_FAILED);
 		}
@@ -991,10 +1004,6 @@ bfad_cfg_pport(struct bfad_s *bfad, enum bfa_lport_role role)
 		bfad->pport.roles |= BFA_LPORT_ROLE_FCP_IM;
 	}
 
-	/* Setup the debugfs node for this scsi_host */
-	if (bfa_debugfs_enable)
-		bfad_debugfs_init(&bfad->pport);
-
 	bfad->bfad_flags |= BFAD_CFG_PPORT_DONE;
 
 out:
@@ -1004,10 +1013,6 @@
 void
 bfad_uncfg_pport(struct bfad_s *bfad)
 {
-	/* Remove the debugfs node for this scsi_host */
-	kfree(bfad->regdata);
-	bfad_debugfs_exit(&bfad->pport);
-
 	if ((supported_fc4s & BFA_LPORT_ROLE_FCP_IM) &&
 	    (bfad->pport.roles & BFA_LPORT_ROLE_FCP_IM)) {
 		bfad_im_scsi_host_free(bfad, bfad->pport.im_port);
@@ -1389,6 +1394,10 @@ bfad_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
 	bfad->pport.bfad = bfad;
 	INIT_LIST_HEAD(&bfad->pbc_vport_list);
 
+	/* Setup the debugfs node for this bfad */
+	if (bfa_debugfs_enable)
+		bfad_debugfs_init(&bfad->pport);
+
 	retval = bfad_drv_init(bfad);
 	if (retval != BFA_STATUS_OK)
 		goto out_drv_init_failure;
@@ -1404,6 +1413,9 @@ out_bfad_sm_failure:
 	bfa_detach(&bfad->bfa);
 	bfad_hal_mem_release(bfad);
 out_drv_init_failure:
+	/* Remove the debugfs node for this bfad */
+	kfree(bfad->regdata);
+	bfad_debugfs_exit(&bfad->pport);
 	mutex_lock(&bfad_mutex);
 	bfad_inst--;
 	list_del(&bfad->list_entry);
@@ -1445,6 +1457,10 @@ bfad_pci_remove(struct pci_dev *pdev)
 	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
 	bfad_hal_mem_release(bfad);
 
+	/* Remove the debugfs node for this bfad */
+	kfree(bfad->regdata);
+	bfad_debugfs_exit(&bfad->pport);
+
 	/* Cleaning the BFAD instance */
 	mutex_lock(&bfad_mutex);
 	bfad_inst--;
@@ -1550,7 +1566,7 @@ bfad_exit(void)
 }
 
 /* Firmware handling */
-u32 *
+static void
 bfad_read_firmware(struct pci_dev *pdev, u32 **bfi_image,
 		u32 *bfi_image_size, char *fw_name)
 {
@@ -1558,27 +1574,25 @@ bfad_read_firmware(struct pci_dev *pdev, u32 **bfi_image,
 
 	if (request_firmware(&fw, fw_name, &pdev->dev)) {
 		printk(KERN_ALERT "Can't locate firmware %s\n", fw_name);
-		goto error;
+		*bfi_image = NULL;
+		goto out;
 	}
 
 	*bfi_image = vmalloc(fw->size);
 	if (NULL == *bfi_image) {
 		printk(KERN_ALERT "Fail to allocate buffer for fw image "
 			"size=%x!\n", (u32) fw->size);
-		goto error;
+		goto out;
 	}
 
 	memcpy(*bfi_image, fw->data, fw->size);
 	*bfi_image_size = fw->size/sizeof(u32);
-
-	return *bfi_image;
-
-error:
-	return NULL;
+out:
+	release_firmware(fw);
 }
 
-u32 *
-bfad_get_firmware_buf(struct pci_dev *pdev)
+static u32 *
+bfad_load_fwimg(struct pci_dev *pdev)
 {
 	if (pdev->device == BFA_PCI_DEVICE_ID_CT_FC) {
 		if (bfi_image_ct_fc_size == 0)
@@ -1598,6 +1612,17 @@ bfad_get_firmware_buf(struct pci_dev *pdev)
 	}
 }
 
+static void
+bfad_free_fwimg(void)
+{
+	if (bfi_image_ct_fc_size && bfi_image_ct_fc)
+		vfree(bfi_image_ct_fc);
+	if (bfi_image_ct_cna_size && bfi_image_ct_cna)
+		vfree(bfi_image_ct_cna);
+	if (bfi_image_cb_fc_size && bfi_image_cb_fc)
+		vfree(bfi_image_cb_fc);
+}
+
 module_init(bfad_init);
 module_exit(bfad_exit);
 MODULE_LICENSE("GPL");

View File

@@ -28,10 +28,10 @@
  * mount -t debugfs none /sys/kernel/debug
  *
  * BFA Hierarchy:
- *	- bfa/host#
- * where the host number corresponds to the one under /sys/class/scsi_host/host#
+ *	- bfa/pci_dev:<pci_name>
+ * where the pci_name corresponds to the one under /sys/bus/pci/drivers/bfa
  *
- * Debugging service available per host:
+ * Debugging service available per pci_dev:
  * fwtrc:	To collect current firmware trace.
  * drvtrc:	To collect current driver trace
  * fwsave:	To collect last saved fw trace as a result of firmware crash.
@@ -489,11 +489,9 @@ static atomic_t bfa_debugfs_port_count;
 inline void
 bfad_debugfs_init(struct bfad_port_s *port)
 {
-	struct bfad_im_port_s *im_port = port->im_port;
-	struct bfad_s *bfad = im_port->bfad;
-	struct Scsi_Host *shost = im_port->shost;
+	struct bfad_s *bfad = port->bfad;
 	const struct bfad_debugfs_entry *file;
-	char name[16];
+	char name[64];
 	int i;
 
 	if (!bfa_debugfs_enable)
@@ -510,17 +508,15 @@ bfad_debugfs_init(struct bfad_port_s *port)
 		}
 	}
 
-	/*
-	 * Setup the host# directory for the port,
-	 * corresponds to the scsi_host num of this port.
-	 */
-	snprintf(name, sizeof(name), "host%d", shost->host_no);
+	/* Setup the pci_dev debugfs directory for the port */
+	snprintf(name, sizeof(name), "pci_dev:%s", bfad->pci_name);
 	if (!port->port_debugfs_root) {
 		port->port_debugfs_root =
 			debugfs_create_dir(name, bfa_debugfs_root);
 		if (!port->port_debugfs_root) {
 			printk(KERN_WARNING
-				"BFA host root dir creation failed\n");
+				"bfa %s: debugfs root creation failed\n",
+				bfad->pci_name);
 			goto err;
 		}
@@ -536,8 +532,8 @@ bfad_debugfs_init(struct bfad_port_s *port)
 					file->fops);
 		if (!bfad->bfad_dentry_files[i]) {
 			printk(KERN_WARNING
-				"BFA host%d: create %s entry failed\n",
-				shost->host_no, file->name);
+				"bfa %s: debugfs %s creation failed\n",
+				bfad->pci_name, file->name);
 			goto err;
 		}
 	}
@@ -550,8 +546,7 @@ err:
 inline void
 bfad_debugfs_exit(struct bfad_port_s *port)
 {
-	struct bfad_im_port_s *im_port = port->im_port;
-	struct bfad_s *bfad = im_port->bfad;
+	struct bfad_s *bfad = port->bfad;
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(bfad_debugfs_files); i++) {
@@ -562,9 +557,7 @@ bfad_debugfs_exit(struct bfad_port_s *port)
 	}
 
 	/*
-	 * Remove the host# directory for the port,
-	 * corresponds to the scsi_host num of this port.
-	 */
+	 * Remove the pci_dev debugfs directory for the port */
 	if (port->port_debugfs_root) {
 		debugfs_remove(port->port_debugfs_root);
 		port->port_debugfs_root = NULL;

View File

@@ -141,29 +141,4 @@ extern struct device_attribute *bfad_im_vport_attrs[];
 
 irqreturn_t bfad_intx(int irq, void *dev_id);
 
-/* Firmware releated */
-#define BFAD_FW_FILE_CT_FC      "ctfw_fc.bin"
-#define BFAD_FW_FILE_CT_CNA     "ctfw_cna.bin"
-#define BFAD_FW_FILE_CB_FC      "cbfw_fc.bin"
-
-u32 *bfad_get_firmware_buf(struct pci_dev *pdev);
-u32 *bfad_read_firmware(struct pci_dev *pdev, u32 **bfi_image,
-		u32 *bfi_image_size, char *fw_name);
-
-static inline u32 *
-bfad_load_fwimg(struct pci_dev *pdev)
-{
-	return bfad_get_firmware_buf(pdev);
-}
-
-static inline void
-bfad_free_fwimg(void)
-{
-	if (bfi_image_ct_fc_size && bfi_image_ct_fc)
-		vfree(bfi_image_ct_fc);
-	if (bfi_image_ct_cna_size && bfi_image_ct_cna)
-		vfree(bfi_image_ct_cna);
-	if (bfi_image_cb_fc_size && bfi_image_cb_fc)
-		vfree(bfi_image_cb_fc);
-}
-
 #endif


@ -130,7 +130,7 @@
#define BNX2FC_TM_TIMEOUT 60 /* secs */ #define BNX2FC_TM_TIMEOUT 60 /* secs */
#define BNX2FC_IO_TIMEOUT 20000UL /* msecs */ #define BNX2FC_IO_TIMEOUT 20000UL /* msecs */
#define BNX2FC_WAIT_CNT 120 #define BNX2FC_WAIT_CNT 1200
#define BNX2FC_FW_TIMEOUT (3 * HZ) #define BNX2FC_FW_TIMEOUT (3 * HZ)
#define PORT_MAX 2 #define PORT_MAX 2


@ -1130,7 +1130,7 @@ static void bnx2fc_interface_release(struct kref *kref)
struct net_device *phys_dev; struct net_device *phys_dev;
hba = container_of(kref, struct bnx2fc_hba, kref); hba = container_of(kref, struct bnx2fc_hba, kref);
BNX2FC_HBA_DBG(hba->ctlr.lp, "Interface is being released\n"); BNX2FC_MISC_DBG("Interface is being released\n");
netdev = hba->netdev; netdev = hba->netdev;
phys_dev = hba->phys_dev; phys_dev = hba->phys_dev;
@ -1254,20 +1254,17 @@ setup_err:
static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba, static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
struct device *parent, int npiv) struct device *parent, int npiv)
{ {
struct fc_lport *lport = NULL; struct fc_lport *lport, *n_port;
struct fcoe_port *port; struct fcoe_port *port;
struct Scsi_Host *shost; struct Scsi_Host *shost;
struct fc_vport *vport = dev_to_vport(parent); struct fc_vport *vport = dev_to_vport(parent);
int rc = 0; int rc = 0;
/* Allocate Scsi_Host structure */ /* Allocate Scsi_Host structure */
if (!npiv) { if (!npiv)
lport = libfc_host_alloc(&bnx2fc_shost_template, lport = libfc_host_alloc(&bnx2fc_shost_template, sizeof(*port));
sizeof(struct fcoe_port)); else
} else { lport = libfc_vport_create(vport, sizeof(*port));
lport = libfc_vport_create(vport,
sizeof(struct fcoe_port));
}
if (!lport) { if (!lport) {
printk(KERN_ERR PFX "could not allocate scsi host structure\n"); printk(KERN_ERR PFX "could not allocate scsi host structure\n");
@ -1285,7 +1282,6 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
goto lp_config_err; goto lp_config_err;
if (npiv) { if (npiv) {
vport = dev_to_vport(parent);
printk(KERN_ERR PFX "Setting vport names, 0x%llX 0x%llX\n", printk(KERN_ERR PFX "Setting vport names, 0x%llX 0x%llX\n",
vport->node_name, vport->port_name); vport->node_name, vport->port_name);
fc_set_wwnn(lport, vport->node_name); fc_set_wwnn(lport, vport->node_name);
@ -1314,12 +1310,17 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
fc_host_port_type(lport->host) = FC_PORTTYPE_UNKNOWN; fc_host_port_type(lport->host) = FC_PORTTYPE_UNKNOWN;
/* Allocate exchange manager */ /* Allocate exchange manager */
if (!npiv) { if (!npiv)
rc = bnx2fc_em_config(lport); rc = bnx2fc_em_config(lport);
if (rc) { else {
printk(KERN_ERR PFX "Error on bnx2fc_em_config\n"); shost = vport_to_shost(vport);
goto shost_err; n_port = shost_priv(shost);
} rc = fc_exch_mgr_list_clone(n_port, lport);
}
if (rc) {
printk(KERN_ERR PFX "Error on bnx2fc_em_config\n");
goto shost_err;
} }
bnx2fc_interface_get(hba); bnx2fc_interface_get(hba);
@ -1352,8 +1353,6 @@ static void bnx2fc_if_destroy(struct fc_lport *lport)
/* Free existing transmit skbs */ /* Free existing transmit skbs */
fcoe_clean_pending_queue(lport); fcoe_clean_pending_queue(lport);
bnx2fc_interface_put(hba);
/* Free queued packets for the receive thread */ /* Free queued packets for the receive thread */
bnx2fc_clean_rx_queue(lport); bnx2fc_clean_rx_queue(lport);
@ -1372,6 +1371,8 @@ static void bnx2fc_if_destroy(struct fc_lport *lport)
/* Release Scsi_Host */ /* Release Scsi_Host */
scsi_host_put(lport->host); scsi_host_put(lport->host);
bnx2fc_interface_put(hba);
} }
/** /**


@ -522,6 +522,7 @@ void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt,
fp = fc_frame_alloc(lport, payload_len); fp = fc_frame_alloc(lport, payload_len);
if (!fp) { if (!fp) {
printk(KERN_ERR PFX "fc_frame_alloc failure\n"); printk(KERN_ERR PFX "fc_frame_alloc failure\n");
kfree(unsol_els);
return; return;
} }
@ -547,6 +548,7 @@ void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt,
*/ */
printk(KERN_ERR PFX "dropping ELS 0x%x\n", op); printk(KERN_ERR PFX "dropping ELS 0x%x\n", op);
kfree_skb(skb); kfree_skb(skb);
kfree(unsol_els);
return; return;
} }
} }
@ -563,6 +565,7 @@ void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt,
} else { } else {
BNX2FC_HBA_DBG(lport, "fh_r_ctl = 0x%x\n", fh->fh_r_ctl); BNX2FC_HBA_DBG(lport, "fh_r_ctl = 0x%x\n", fh->fh_r_ctl);
kfree_skb(skb); kfree_skb(skb);
kfree(unsol_els);
} }
} }


@ -1663,6 +1663,12 @@ int bnx2fc_queuecommand(struct Scsi_Host *host,
tgt = (struct bnx2fc_rport *)&rp[1]; tgt = (struct bnx2fc_rport *)&rp[1];
if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) { if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) {
if (test_bit(BNX2FC_FLAG_UPLD_REQ_COMPL, &tgt->flags)) {
sc_cmd->result = DID_NO_CONNECT << 16;
sc_cmd->scsi_done(sc_cmd);
return 0;
}
/* /*
* Session is not offloaded yet. Let SCSI-ml retry * Session is not offloaded yet. Let SCSI-ml retry
* the command. * the command.


@ -772,6 +772,7 @@ static const struct error_info additional[] =
{0x3802, "Esn - power management class event"}, {0x3802, "Esn - power management class event"},
{0x3804, "Esn - media class event"}, {0x3804, "Esn - media class event"},
{0x3806, "Esn - device busy class event"}, {0x3806, "Esn - device busy class event"},
{0x3807, "Thin Provisioning soft threshold reached"},
{0x3900, "Saving parameters not supported"}, {0x3900, "Saving parameters not supported"},


@ -778,8 +778,8 @@ static void srb_free_insert(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb)
static void srb_waiting_insert(struct DeviceCtlBlk *dcb, static void srb_waiting_insert(struct DeviceCtlBlk *dcb,
struct ScsiReqBlk *srb) struct ScsiReqBlk *srb)
{ {
dprintkdbg(DBG_0, "srb_waiting_insert: (pid#%li) <%02i-%i> srb=%p\n", dprintkdbg(DBG_0, "srb_waiting_insert: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_add(&srb->list, &dcb->srb_waiting_list); list_add(&srb->list, &dcb->srb_waiting_list);
} }
@ -787,16 +787,16 @@ static void srb_waiting_insert(struct DeviceCtlBlk *dcb,
static void srb_waiting_append(struct DeviceCtlBlk *dcb, static void srb_waiting_append(struct DeviceCtlBlk *dcb,
struct ScsiReqBlk *srb) struct ScsiReqBlk *srb)
{ {
dprintkdbg(DBG_0, "srb_waiting_append: (pid#%li) <%02i-%i> srb=%p\n", dprintkdbg(DBG_0, "srb_waiting_append: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_add_tail(&srb->list, &dcb->srb_waiting_list); list_add_tail(&srb->list, &dcb->srb_waiting_list);
} }
static void srb_going_append(struct DeviceCtlBlk *dcb, struct ScsiReqBlk *srb) static void srb_going_append(struct DeviceCtlBlk *dcb, struct ScsiReqBlk *srb)
{ {
dprintkdbg(DBG_0, "srb_going_append: (pid#%li) <%02i-%i> srb=%p\n", dprintkdbg(DBG_0, "srb_going_append: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_add_tail(&srb->list, &dcb->srb_going_list); list_add_tail(&srb->list, &dcb->srb_going_list);
} }
@ -805,8 +805,8 @@ static void srb_going_remove(struct DeviceCtlBlk *dcb, struct ScsiReqBlk *srb)
{ {
struct ScsiReqBlk *i; struct ScsiReqBlk *i;
struct ScsiReqBlk *tmp; struct ScsiReqBlk *tmp;
dprintkdbg(DBG_0, "srb_going_remove: (pid#%li) <%02i-%i> srb=%p\n", dprintkdbg(DBG_0, "srb_going_remove: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_for_each_entry_safe(i, tmp, &dcb->srb_going_list, list) list_for_each_entry_safe(i, tmp, &dcb->srb_going_list, list)
if (i == srb) { if (i == srb) {
@ -821,8 +821,8 @@ static void srb_waiting_remove(struct DeviceCtlBlk *dcb,
{ {
struct ScsiReqBlk *i; struct ScsiReqBlk *i;
struct ScsiReqBlk *tmp; struct ScsiReqBlk *tmp;
dprintkdbg(DBG_0, "srb_waiting_remove: (pid#%li) <%02i-%i> srb=%p\n", dprintkdbg(DBG_0, "srb_waiting_remove: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_for_each_entry_safe(i, tmp, &dcb->srb_waiting_list, list) list_for_each_entry_safe(i, tmp, &dcb->srb_waiting_list, list)
if (i == srb) { if (i == srb) {
@ -836,8 +836,8 @@ static void srb_going_to_waiting_move(struct DeviceCtlBlk *dcb,
struct ScsiReqBlk *srb) struct ScsiReqBlk *srb)
{ {
dprintkdbg(DBG_0, dprintkdbg(DBG_0,
"srb_going_to_waiting_move: (pid#%li) <%02i-%i> srb=%p\n", "srb_going_to_waiting_move: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_move(&srb->list, &dcb->srb_waiting_list); list_move(&srb->list, &dcb->srb_waiting_list);
} }
@ -846,8 +846,8 @@ static void srb_waiting_to_going_move(struct DeviceCtlBlk *dcb,
struct ScsiReqBlk *srb) struct ScsiReqBlk *srb)
{ {
dprintkdbg(DBG_0, dprintkdbg(DBG_0,
"srb_waiting_to_going_move: (pid#%li) <%02i-%i> srb=%p\n", "srb_waiting_to_going_move: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); srb->cmd, dcb->target_id, dcb->target_lun, srb);
list_move(&srb->list, &dcb->srb_going_list); list_move(&srb->list, &dcb->srb_going_list);
} }
@ -982,8 +982,8 @@ static void build_srb(struct scsi_cmnd *cmd, struct DeviceCtlBlk *dcb,
{ {
int nseg; int nseg;
enum dma_data_direction dir = cmd->sc_data_direction; enum dma_data_direction dir = cmd->sc_data_direction;
dprintkdbg(DBG_0, "build_srb: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "build_srb: (0x%p) <%02i-%i>\n",
cmd->serial_number, dcb->target_id, dcb->target_lun); cmd, dcb->target_id, dcb->target_lun);
srb->dcb = dcb; srb->dcb = dcb;
srb->cmd = cmd; srb->cmd = cmd;
@ -1086,8 +1086,8 @@ static int dc395x_queue_command_lck(struct scsi_cmnd *cmd, void (*done)(struct s
struct ScsiReqBlk *srb; struct ScsiReqBlk *srb;
struct AdapterCtlBlk *acb = struct AdapterCtlBlk *acb =
(struct AdapterCtlBlk *)cmd->device->host->hostdata; (struct AdapterCtlBlk *)cmd->device->host->hostdata;
dprintkdbg(DBG_0, "queue_command: (pid#%li) <%02i-%i> cmnd=0x%02x\n", dprintkdbg(DBG_0, "queue_command: (0x%p) <%02i-%i> cmnd=0x%02x\n",
cmd->serial_number, cmd->device->id, cmd->device->lun, cmd->cmnd[0]); cmd, cmd->device->id, cmd->device->lun, cmd->cmnd[0]);
/* Assume BAD_TARGET; will be cleared later */ /* Assume BAD_TARGET; will be cleared later */
cmd->result = DID_BAD_TARGET << 16; cmd->result = DID_BAD_TARGET << 16;
@ -1140,7 +1140,7 @@ static int dc395x_queue_command_lck(struct scsi_cmnd *cmd, void (*done)(struct s
/* process immediately */ /* process immediately */
send_srb(acb, srb); send_srb(acb, srb);
} }
dprintkdbg(DBG_1, "queue_command: (pid#%li) done\n", cmd->serial_number); dprintkdbg(DBG_1, "queue_command: (0x%p) done\n", cmd);
return 0; return 0;
complete: complete:
@ -1203,9 +1203,9 @@ static void dump_register_info(struct AdapterCtlBlk *acb,
dprintkl(KERN_INFO, "dump: srb=%p cmd=%p OOOPS!\n", dprintkl(KERN_INFO, "dump: srb=%p cmd=%p OOOPS!\n",
srb, srb->cmd); srb, srb->cmd);
else else
dprintkl(KERN_INFO, "dump: srb=%p cmd=%p (pid#%li) " dprintkl(KERN_INFO, "dump: srb=%p cmd=%p "
"cmnd=0x%02x <%02i-%i>\n", "cmnd=0x%02x <%02i-%i>\n",
srb, srb->cmd, srb->cmd->serial_number, srb, srb->cmd,
srb->cmd->cmnd[0], srb->cmd->device->id, srb->cmd->cmnd[0], srb->cmd->device->id,
srb->cmd->device->lun); srb->cmd->device->lun);
printk(" sglist=%p cnt=%i idx=%i len=%zu\n", printk(" sglist=%p cnt=%i idx=%i len=%zu\n",
@ -1301,8 +1301,8 @@ static int __dc395x_eh_bus_reset(struct scsi_cmnd *cmd)
struct AdapterCtlBlk *acb = struct AdapterCtlBlk *acb =
(struct AdapterCtlBlk *)cmd->device->host->hostdata; (struct AdapterCtlBlk *)cmd->device->host->hostdata;
dprintkl(KERN_INFO, dprintkl(KERN_INFO,
"eh_bus_reset: (pid#%li) target=<%02i-%i> cmd=%p\n", "eh_bus_reset: (0%p) target=<%02i-%i> cmd=%p\n",
cmd->serial_number, cmd->device->id, cmd->device->lun, cmd); cmd, cmd->device->id, cmd->device->lun, cmd);
if (timer_pending(&acb->waiting_timer)) if (timer_pending(&acb->waiting_timer))
del_timer(&acb->waiting_timer); del_timer(&acb->waiting_timer);
@ -1368,8 +1368,8 @@ static int dc395x_eh_abort(struct scsi_cmnd *cmd)
(struct AdapterCtlBlk *)cmd->device->host->hostdata; (struct AdapterCtlBlk *)cmd->device->host->hostdata;
struct DeviceCtlBlk *dcb; struct DeviceCtlBlk *dcb;
struct ScsiReqBlk *srb; struct ScsiReqBlk *srb;
dprintkl(KERN_INFO, "eh_abort: (pid#%li) target=<%02i-%i> cmd=%p\n", dprintkl(KERN_INFO, "eh_abort: (0x%p) target=<%02i-%i> cmd=%p\n",
cmd->serial_number, cmd->device->id, cmd->device->lun, cmd); cmd, cmd->device->id, cmd->device->lun, cmd);
dcb = find_dcb(acb, cmd->device->id, cmd->device->lun); dcb = find_dcb(acb, cmd->device->id, cmd->device->lun);
if (!dcb) { if (!dcb) {
@ -1495,8 +1495,8 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
u16 s_stat2, return_code; u16 s_stat2, return_code;
u8 s_stat, scsicommand, i, identify_message; u8 s_stat, scsicommand, i, identify_message;
u8 *ptr; u8 *ptr;
dprintkdbg(DBG_0, "start_scsi: (pid#%li) <%02i-%i> srb=%p\n", dprintkdbg(DBG_0, "start_scsi: (0x%p) <%02i-%i> srb=%p\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun, srb); dcb->target_id, dcb->target_lun, srb);
srb->tag_number = TAG_NONE; /* acb->tag_max_num: had error read in eeprom */ srb->tag_number = TAG_NONE; /* acb->tag_max_num: had error read in eeprom */
@ -1505,8 +1505,8 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
s_stat2 = DC395x_read16(acb, TRM_S1040_SCSI_STATUS); s_stat2 = DC395x_read16(acb, TRM_S1040_SCSI_STATUS);
#if 1 #if 1
if (s_stat & 0x20 /* s_stat2 & 0x02000 */ ) { if (s_stat & 0x20 /* s_stat2 & 0x02000 */ ) {
dprintkdbg(DBG_KG, "start_scsi: (pid#%li) BUSY %02x %04x\n", dprintkdbg(DBG_KG, "start_scsi: (0x%p) BUSY %02x %04x\n",
srb->cmd->serial_number, s_stat, s_stat2); s_stat, s_stat2);
/* /*
* Try anyway? * Try anyway?
* *
@ -1522,16 +1522,15 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
} }
#endif #endif
if (acb->active_dcb) { if (acb->active_dcb) {
dprintkl(KERN_DEBUG, "start_scsi: (pid#%li) Attempt to start a" dprintkl(KERN_DEBUG, "start_scsi: (0x%p) Attempt to start a"
"command while another command (pid#%li) is active.", "command while another command (0x%p) is active.",
srb->cmd->serial_number, srb->cmd,
acb->active_dcb->active_srb ? acb->active_dcb->active_srb ?
acb->active_dcb->active_srb->cmd->serial_number : 0); acb->active_dcb->active_srb->cmd : 0);
return 1; return 1;
} }
if (DC395x_read16(acb, TRM_S1040_SCSI_STATUS) & SCSIINTERRUPT) { if (DC395x_read16(acb, TRM_S1040_SCSI_STATUS) & SCSIINTERRUPT) {
dprintkdbg(DBG_KG, "start_scsi: (pid#%li) Failed (busy)\n", dprintkdbg(DBG_KG, "start_scsi: (0x%p) Failed (busy)\n", srb->cmd);
srb->cmd->serial_number);
return 1; return 1;
} }
/* Allow starting of SCSI commands half a second before we allow the mid-level /* Allow starting of SCSI commands half a second before we allow the mid-level
@ -1603,9 +1602,9 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
tag_number++; tag_number++;
} }
if (tag_number >= dcb->max_command) { if (tag_number >= dcb->max_command) {
dprintkl(KERN_WARNING, "start_scsi: (pid#%li) " dprintkl(KERN_WARNING, "start_scsi: (0x%p) "
"Out of tags target=<%02i-%i>)\n", "Out of tags target=<%02i-%i>)\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd, srb->cmd->device->id,
srb->cmd->device->lun); srb->cmd->device->lun);
srb->state = SRB_READY; srb->state = SRB_READY;
DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DC395x_write16(acb, TRM_S1040_SCSI_CONTROL,
@ -1623,8 +1622,8 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
#endif #endif
/*polling:*/ /*polling:*/
/* Send CDB ..command block ......... */ /* Send CDB ..command block ......... */
dprintkdbg(DBG_KG, "start_scsi: (pid#%li) <%02i-%i> cmnd=0x%02x tag=%i\n", dprintkdbg(DBG_KG, "start_scsi: (0x%p) <%02i-%i> cmnd=0x%02x tag=%i\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun, srb->cmd, srb->cmd->device->id, srb->cmd->device->lun,
srb->cmd->cmnd[0], srb->tag_number); srb->cmd->cmnd[0], srb->tag_number);
if (srb->flag & AUTO_REQSENSE) { if (srb->flag & AUTO_REQSENSE) {
DC395x_write8(acb, TRM_S1040_SCSI_FIFO, REQUEST_SENSE); DC395x_write8(acb, TRM_S1040_SCSI_FIFO, REQUEST_SENSE);
@ -1647,8 +1646,8 @@ static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
* we caught an interrupt (must be reset or reselection ... ) * we caught an interrupt (must be reset or reselection ... )
* : Let's process it first! * : Let's process it first!
*/ */
dprintkdbg(DBG_0, "start_scsi: (pid#%li) <%02i-%i> Failed - busy\n", dprintkdbg(DBG_0, "start_scsi: (0x%p) <%02i-%i> Failed - busy\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun); srb->cmd, dcb->target_id, dcb->target_lun);
srb->state = SRB_READY; srb->state = SRB_READY;
free_tag(dcb, srb); free_tag(dcb, srb);
srb->msg_count = 0; srb->msg_count = 0;
@ -1843,7 +1842,7 @@ static irqreturn_t dc395x_interrupt(int irq, void *dev_id)
static void msgout_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void msgout_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "msgout_phase0: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "msgout_phase0: (0x%p)\n", srb->cmd);
if (srb->state & (SRB_UNEXPECT_RESEL + SRB_ABORT_SENT)) if (srb->state & (SRB_UNEXPECT_RESEL + SRB_ABORT_SENT))
*pscsi_status = PH_BUS_FREE; /*.. initial phase */ *pscsi_status = PH_BUS_FREE; /*.. initial phase */
@ -1857,18 +1856,18 @@ static void msgout_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
{ {
u16 i; u16 i;
u8 *ptr; u8 *ptr;
dprintkdbg(DBG_0, "msgout_phase1: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "msgout_phase1: (0x%p)\n", srb->cmd);
clear_fifo(acb, "msgout_phase1"); clear_fifo(acb, "msgout_phase1");
if (!(srb->state & SRB_MSGOUT)) { if (!(srb->state & SRB_MSGOUT)) {
srb->state |= SRB_MSGOUT; srb->state |= SRB_MSGOUT;
dprintkl(KERN_DEBUG, dprintkl(KERN_DEBUG,
"msgout_phase1: (pid#%li) Phase unexpected\n", "msgout_phase1: (0x%p) Phase unexpected\n",
srb->cmd->serial_number); /* So what ? */ srb->cmd); /* So what ? */
} }
if (!srb->msg_count) { if (!srb->msg_count) {
dprintkdbg(DBG_0, "msgout_phase1: (pid#%li) NOP msg\n", dprintkdbg(DBG_0, "msgout_phase1: (0x%p) NOP msg\n",
srb->cmd->serial_number); srb->cmd);
DC395x_write8(acb, TRM_S1040_SCSI_FIFO, MSG_NOP); DC395x_write8(acb, TRM_S1040_SCSI_FIFO, MSG_NOP);
DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */ DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */
DC395x_write8(acb, TRM_S1040_SCSI_COMMAND, SCMD_FIFO_OUT); DC395x_write8(acb, TRM_S1040_SCSI_COMMAND, SCMD_FIFO_OUT);
@ -1888,7 +1887,7 @@ static void msgout_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
static void command_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void command_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "command_phase0: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "command_phase0: (0x%p)\n", srb->cmd);
DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH);
} }
@ -1899,7 +1898,7 @@ static void command_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
struct DeviceCtlBlk *dcb; struct DeviceCtlBlk *dcb;
u8 *ptr; u8 *ptr;
u16 i; u16 i;
dprintkdbg(DBG_0, "command_phase1: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "command_phase1: (0x%p)\n", srb->cmd);
clear_fifo(acb, "command_phase1"); clear_fifo(acb, "command_phase1");
DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_CLRATN); DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_CLRATN);
@ -2041,8 +2040,8 @@ static void data_out_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
struct DeviceCtlBlk *dcb = srb->dcb; struct DeviceCtlBlk *dcb = srb->dcb;
u16 scsi_status = *pscsi_status; u16 scsi_status = *pscsi_status;
u32 d_left_counter = 0; u32 d_left_counter = 0;
dprintkdbg(DBG_0, "data_out_phase0: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "data_out_phase0: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun); srb->cmd, srb->cmd->device->id, srb->cmd->device->lun);
/* /*
* KG: We need to drain the buffers before we draw any conclusions! * KG: We need to drain the buffers before we draw any conclusions!
@ -2171,8 +2170,8 @@ static void data_out_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
static void data_out_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void data_out_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "data_out_phase1: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "data_out_phase1: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun); srb->cmd, srb->cmd->device->id, srb->cmd->device->lun);
clear_fifo(acb, "data_out_phase1"); clear_fifo(acb, "data_out_phase1");
/* do prepare before transfer when data out phase */ /* do prepare before transfer when data out phase */
data_io_transfer(acb, srb, XFERDATAOUT); data_io_transfer(acb, srb, XFERDATAOUT);
@ -2183,8 +2182,8 @@ static void data_in_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
{ {
u16 scsi_status = *pscsi_status; u16 scsi_status = *pscsi_status;
dprintkdbg(DBG_0, "data_in_phase0: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "data_in_phase0: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun); srb->cmd, srb->cmd->device->id, srb->cmd->device->lun);
/* /*
* KG: DataIn is much more tricky than DataOut. When the device is finished * KG: DataIn is much more tricky than DataOut. When the device is finished
@ -2204,8 +2203,8 @@ static void data_in_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
unsigned int sc, fc; unsigned int sc, fc;
if (scsi_status & PARITYERROR) { if (scsi_status & PARITYERROR) {
dprintkl(KERN_INFO, "data_in_phase0: (pid#%li) " dprintkl(KERN_INFO, "data_in_phase0: (0x%p) "
"Parity Error\n", srb->cmd->serial_number); "Parity Error\n", srb->cmd);
srb->status |= PARITY_ERROR; srb->status |= PARITY_ERROR;
} }
/* /*
@ -2394,8 +2393,8 @@ static void data_in_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
static void data_in_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void data_in_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "data_in_phase1: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "data_in_phase1: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun); srb->cmd, srb->cmd->device->id, srb->cmd->device->lun);
data_io_transfer(acb, srb, XFERDATAIN); data_io_transfer(acb, srb, XFERDATAIN);
} }
@ -2406,8 +2405,8 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
struct DeviceCtlBlk *dcb = srb->dcb; struct DeviceCtlBlk *dcb = srb->dcb;
u8 bval; u8 bval;
dprintkdbg(DBG_0, dprintkdbg(DBG_0,
"data_io_transfer: (pid#%li) <%02i-%i> %c len=%i, sg=(%i/%i)\n", "data_io_transfer: (0x%p) <%02i-%i> %c len=%i, sg=(%i/%i)\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun, srb->cmd, srb->cmd->device->id, srb->cmd->device->lun,
((io_dir & DMACMD_DIR) ? 'r' : 'w'), ((io_dir & DMACMD_DIR) ? 'r' : 'w'),
srb->total_xfer_length, srb->sg_index, srb->sg_count); srb->total_xfer_length, srb->sg_index, srb->sg_count);
if (srb == acb->tmp_srb) if (srb == acb->tmp_srb)
@ -2579,8 +2578,8 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
static void status_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void status_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "status_phase0: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "status_phase0: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun); srb->cmd, srb->cmd->device->id, srb->cmd->device->lun);
srb->target_status = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); srb->target_status = DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
srb->end_message = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); /* get message */ srb->end_message = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); /* get message */
srb->state = SRB_COMPLETED; srb->state = SRB_COMPLETED;
@ -2593,8 +2592,8 @@ static void status_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
static void status_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void status_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "status_phase1: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "status_phase1: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->cmd->device->id, srb->cmd->device->lun); srb->cmd, srb->cmd->device->id, srb->cmd->device->lun);
srb->state = SRB_STATUS; srb->state = SRB_STATUS;
DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */ DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */
DC395x_write8(acb, TRM_S1040_SCSI_COMMAND, SCMD_COMP); DC395x_write8(acb, TRM_S1040_SCSI_COMMAND, SCMD_COMP);
@ -2635,8 +2634,8 @@ static struct ScsiReqBlk *msgin_qtag(struct AdapterCtlBlk *acb,
{ {
struct ScsiReqBlk *srb = NULL; struct ScsiReqBlk *srb = NULL;
struct ScsiReqBlk *i; struct ScsiReqBlk *i;
dprintkdbg(DBG_0, "msgin_qtag: (pid#%li) tag=%i srb=%p\n", dprintkdbg(DBG_0, "msgin_qtag: (0x%p) tag=%i srb=%p\n",
srb->cmd->serial_number, tag, srb); srb->cmd, tag, srb);
if (!(dcb->tag_mask & (1 << tag))) if (!(dcb->tag_mask & (1 << tag)))
dprintkl(KERN_DEBUG, dprintkl(KERN_DEBUG,
@ -2654,8 +2653,8 @@ static struct ScsiReqBlk *msgin_qtag(struct AdapterCtlBlk *acb,
if (!srb) if (!srb)
goto mingx0; goto mingx0;
dprintkdbg(DBG_0, "msgin_qtag: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_0, "msgin_qtag: (0x%p) <%02i-%i>\n",
srb->cmd->serial_number, srb->dcb->target_id, srb->dcb->target_lun); srb->cmd, srb->dcb->target_id, srb->dcb->target_lun);
if (dcb->flag & ABORT_DEV_) { if (dcb->flag & ABORT_DEV_) {
/*srb->state = SRB_ABORT_SENT; */ /*srb->state = SRB_ABORT_SENT; */
enable_msgout_abort(acb, srb); enable_msgout_abort(acb, srb);
@ -2865,7 +2864,7 @@ static void msgin_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
struct DeviceCtlBlk *dcb = acb->active_dcb; struct DeviceCtlBlk *dcb = acb->active_dcb;
dprintkdbg(DBG_0, "msgin_phase0: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "msgin_phase0: (0x%p)\n", srb->cmd);
srb->msgin_buf[acb->msg_len++] = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); srb->msgin_buf[acb->msg_len++] = DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
if (msgin_completed(srb->msgin_buf, acb->msg_len)) { if (msgin_completed(srb->msgin_buf, acb->msg_len)) {
@ -2931,9 +2930,9 @@ static void msgin_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
* SAVE POINTER may be ignored as we have the struct * SAVE POINTER may be ignored as we have the struct
* ScsiReqBlk* associated with the scsi command. * ScsiReqBlk* associated with the scsi command.
*/ */
dprintkdbg(DBG_0, "msgin_phase0: (pid#%li) " dprintkdbg(DBG_0, "msgin_phase0: (0x%p) "
"SAVE POINTER rem=%i Ignore\n", "SAVE POINTER rem=%i Ignore\n",
srb->cmd->serial_number, srb->total_xfer_length); srb->cmd, srb->total_xfer_length);
break; break;
case RESTORE_POINTERS: case RESTORE_POINTERS:
@ -2941,9 +2940,9 @@ static void msgin_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
break; break;
case ABORT: case ABORT:
dprintkdbg(DBG_0, "msgin_phase0: (pid#%li) " dprintkdbg(DBG_0, "msgin_phase0: (0x%p) "
"<%02i-%i> ABORT msg\n", "<%02i-%i> ABORT msg\n",
srb->cmd->serial_number, dcb->target_id, srb->cmd, dcb->target_id,
dcb->target_lun); dcb->target_lun);
dcb->flag |= ABORT_DEV_; dcb->flag |= ABORT_DEV_;
enable_msgout_abort(acb, srb); enable_msgout_abort(acb, srb);
@ -2975,7 +2974,7 @@ static void msgin_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
static void msgin_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, static void msgin_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
u16 *pscsi_status) u16 *pscsi_status)
{ {
dprintkdbg(DBG_0, "msgin_phase1: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "msgin_phase1: (0x%p)\n", srb->cmd);
clear_fifo(acb, "msgin_phase1"); clear_fifo(acb, "msgin_phase1");
DC395x_write32(acb, TRM_S1040_SCSI_COUNTER, 1); DC395x_write32(acb, TRM_S1040_SCSI_COUNTER, 1);
if (!(srb->state & SRB_MSGIN)) { if (!(srb->state & SRB_MSGIN)) {
@ -3041,7 +3040,7 @@ static void disconnect(struct AdapterCtlBlk *acb)
} }
srb = dcb->active_srb; srb = dcb->active_srb;
acb->active_dcb = NULL; acb->active_dcb = NULL;
dprintkdbg(DBG_0, "disconnect: (pid#%li)\n", srb->cmd->serial_number); dprintkdbg(DBG_0, "disconnect: (0x%p)\n", srb->cmd);
srb->scsi_phase = PH_BUS_FREE; /* initial phase */ srb->scsi_phase = PH_BUS_FREE; /* initial phase */
clear_fifo(acb, "disconnect"); clear_fifo(acb, "disconnect");
@ -3071,14 +3070,14 @@ static void disconnect(struct AdapterCtlBlk *acb)
&& srb->state != SRB_MSGOUT) { && srb->state != SRB_MSGOUT) {
srb->state = SRB_READY; srb->state = SRB_READY;
dprintkl(KERN_DEBUG, dprintkl(KERN_DEBUG,
"disconnect: (pid#%li) Unexpected\n", "disconnect: (0x%p) Unexpected\n",
srb->cmd->serial_number); srb->cmd);
srb->target_status = SCSI_STAT_SEL_TIMEOUT; srb->target_status = SCSI_STAT_SEL_TIMEOUT;
goto disc1; goto disc1;
} else { } else {
/* Normal selection timeout */ /* Normal selection timeout */
dprintkdbg(DBG_KG, "disconnect: (pid#%li) " dprintkdbg(DBG_KG, "disconnect: (0x%p) "
"<%02i-%i> SelTO\n", srb->cmd->serial_number, "<%02i-%i> SelTO\n", srb->cmd,
dcb->target_id, dcb->target_lun); dcb->target_id, dcb->target_lun);
if (srb->retry_count++ > DC395x_MAX_RETRIES if (srb->retry_count++ > DC395x_MAX_RETRIES
|| acb->scan_devices) { || acb->scan_devices) {
@ -3089,8 +3088,8 @@ static void disconnect(struct AdapterCtlBlk *acb)
free_tag(dcb, srb); free_tag(dcb, srb);
srb_going_to_waiting_move(dcb, srb); srb_going_to_waiting_move(dcb, srb);
dprintkdbg(DBG_KG, dprintkdbg(DBG_KG,
"disconnect: (pid#%li) Retry\n", "disconnect: (0x%p) Retry\n",
srb->cmd->serial_number); srb->cmd);
waiting_set_timer(acb, HZ / 20); waiting_set_timer(acb, HZ / 20);
} }
} else if (srb->state & SRB_DISCONNECT) { } else if (srb->state & SRB_DISCONNECT) {
@ -3142,9 +3141,9 @@ static void reselect(struct AdapterCtlBlk *acb)
} }
/* Why the if ? */ /* Why the if ? */
if (!acb->scan_devices) { if (!acb->scan_devices) {
dprintkdbg(DBG_KG, "reselect: (pid#%li) <%02i-%i> " dprintkdbg(DBG_KG, "reselect: (0x%p) <%02i-%i> "
"Arb lost but Resel win rsel=%i stat=0x%04x\n", "Arb lost but Resel win rsel=%i stat=0x%04x\n",
srb->cmd->serial_number, dcb->target_id, srb->cmd, dcb->target_id,
dcb->target_lun, rsel_tar_lun_id, dcb->target_lun, rsel_tar_lun_id,
DC395x_read16(acb, TRM_S1040_SCSI_STATUS)); DC395x_read16(acb, TRM_S1040_SCSI_STATUS));
arblostflag = 1; arblostflag = 1;
@ -3318,7 +3317,7 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
enum dma_data_direction dir = cmd->sc_data_direction; enum dma_data_direction dir = cmd->sc_data_direction;
int ckc_only = 1; int ckc_only = 1;
dprintkdbg(DBG_1, "srb_done: (pid#%li) <%02i-%i>\n", srb->cmd->serial_number, dprintkdbg(DBG_1, "srb_done: (0x%p) <%02i-%i>\n", srb->cmd,
srb->cmd->device->id, srb->cmd->device->lun); srb->cmd->device->id, srb->cmd->device->lun);
dprintkdbg(DBG_SG, "srb_done: srb=%p sg=%i(%i/%i) buf=%p\n", dprintkdbg(DBG_SG, "srb_done: srb=%p sg=%i(%i/%i) buf=%p\n",
srb, scsi_sg_count(cmd), srb->sg_index, srb->sg_count, srb, scsi_sg_count(cmd), srb->sg_index, srb->sg_count,
@ -3497,9 +3496,9 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
cmd->SCp.buffers_residual = 0; cmd->SCp.buffers_residual = 0;
if (debug_enabled(DBG_KG)) { if (debug_enabled(DBG_KG)) {
if (srb->total_xfer_length) if (srb->total_xfer_length)
dprintkdbg(DBG_KG, "srb_done: (pid#%li) <%02i-%i> " dprintkdbg(DBG_KG, "srb_done: (0x%p) <%02i-%i> "
"cmnd=0x%02x Missed %i bytes\n", "cmnd=0x%02x Missed %i bytes\n",
cmd->serial_number, cmd->device->id, cmd->device->lun, cmd, cmd->device->id, cmd->device->lun,
cmd->cmnd[0], srb->total_xfer_length); cmd->cmnd[0], srb->total_xfer_length);
} }
@ -3508,8 +3507,8 @@ static void srb_done(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
if (srb == acb->tmp_srb) if (srb == acb->tmp_srb)
dprintkl(KERN_ERR, "srb_done: ERROR! Completed cmd with tmp_srb\n"); dprintkl(KERN_ERR, "srb_done: ERROR! Completed cmd with tmp_srb\n");
else { else {
dprintkdbg(DBG_0, "srb_done: (pid#%li) done result=0x%08x\n", dprintkdbg(DBG_0, "srb_done: (0x%p) done result=0x%08x\n",
cmd->serial_number, cmd->result); cmd, cmd->result);
srb_free_insert(acb, srb); srb_free_insert(acb, srb);
} }
pci_unmap_srb(acb, srb); pci_unmap_srb(acb, srb);
@ -3538,7 +3537,7 @@ static void doing_srb_done(struct AdapterCtlBlk *acb, u8 did_flag,
p = srb->cmd; p = srb->cmd;
dir = p->sc_data_direction; dir = p->sc_data_direction;
result = MK_RES(0, did_flag, 0, 0); result = MK_RES(0, did_flag, 0, 0);
printk("G:%li(%02i-%i) ", p->serial_number, printk("G:%p(%02i-%i) ", p,
p->device->id, p->device->lun); p->device->id, p->device->lun);
srb_going_remove(dcb, srb); srb_going_remove(dcb, srb);
free_tag(dcb, srb); free_tag(dcb, srb);
@ -3568,7 +3567,7 @@ static void doing_srb_done(struct AdapterCtlBlk *acb, u8 did_flag,
p = srb->cmd; p = srb->cmd;
result = MK_RES(0, did_flag, 0, 0); result = MK_RES(0, did_flag, 0, 0);
printk("W:%li<%02i-%i>", p->serial_number, p->device->id, printk("W:%p<%02i-%i>", p, p->device->id,
p->device->lun); p->device->lun);
srb_waiting_remove(dcb, srb); srb_waiting_remove(dcb, srb);
srb_free_insert(acb, srb); srb_free_insert(acb, srb);
@ -3677,8 +3676,8 @@ static void request_sense(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
struct ScsiReqBlk *srb) struct ScsiReqBlk *srb)
{ {
struct scsi_cmnd *cmd = srb->cmd; struct scsi_cmnd *cmd = srb->cmd;
dprintkdbg(DBG_1, "request_sense: (pid#%li) <%02i-%i>\n", dprintkdbg(DBG_1, "request_sense: (0x%p) <%02i-%i>\n",
cmd->serial_number, cmd->device->id, cmd->device->lun); cmd, cmd->device->id, cmd->device->lun);
srb->flag |= AUTO_REQSENSE; srb->flag |= AUTO_REQSENSE;
srb->adapter_status = 0; srb->adapter_status = 0;
@ -3708,8 +3707,8 @@ static void request_sense(struct AdapterCtlBlk *acb, struct DeviceCtlBlk *dcb,
if (start_scsi(acb, dcb, srb)) { /* Should only happen, if sb. else grabs the bus */ if (start_scsi(acb, dcb, srb)) { /* Should only happen, if sb. else grabs the bus */
dprintkl(KERN_DEBUG, dprintkl(KERN_DEBUG,
"request_sense: (pid#%li) failed <%02i-%i>\n", "request_sense: (0x%p) failed <%02i-%i>\n",
srb->cmd->serial_number, dcb->target_id, dcb->target_lun); srb->cmd, dcb->target_id, dcb->target_lun);
srb_going_to_waiting_move(dcb, srb); srb_going_to_waiting_move(dcb, srb);
waiting_set_timer(acb, HZ / 100); waiting_set_timer(acb, HZ / 100);
} }
@ -4717,13 +4716,13 @@ static int dc395x_proc_info(struct Scsi_Host *host, char *buffer,
dcb->target_id, dcb->target_lun, dcb->target_id, dcb->target_lun,
list_size(&dcb->srb_waiting_list)); list_size(&dcb->srb_waiting_list));
list_for_each_entry(srb, &dcb->srb_waiting_list, list) list_for_each_entry(srb, &dcb->srb_waiting_list, list)
SPRINTF(" %li", srb->cmd->serial_number); SPRINTF(" %p", srb->cmd);
if (!list_empty(&dcb->srb_going_list)) if (!list_empty(&dcb->srb_going_list))
SPRINTF("\nDCB (%02i-%i): Going : %i:", SPRINTF("\nDCB (%02i-%i): Going : %i:",
dcb->target_id, dcb->target_lun, dcb->target_id, dcb->target_lun,
list_size(&dcb->srb_going_list)); list_size(&dcb->srb_going_list));
list_for_each_entry(srb, &dcb->srb_going_list, list) list_for_each_entry(srb, &dcb->srb_going_list, list)
SPRINTF(" %li", srb->cmd->serial_number); SPRINTF(" %p", srb->cmd);
if (!list_empty(&dcb->srb_waiting_list) || !list_empty(&dcb->srb_going_list)) if (!list_empty(&dcb->srb_waiting_list) || !list_empty(&dcb->srb_going_list))
SPRINTF("\n"); SPRINTF("\n");
} }


@ -782,7 +782,7 @@ static int alua_bus_attach(struct scsi_device *sdev)
h->sdev = sdev; h->sdev = sdev;
err = alua_initialize(sdev, h); err = alua_initialize(sdev, h);
if (err != SCSI_DH_OK) if ((err != SCSI_DH_OK) && (err != SCSI_DH_DEV_OFFLINED))
goto failed; goto failed;
if (!try_module_get(THIS_MODULE)) if (!try_module_get(THIS_MODULE))


@ -182,14 +182,24 @@ struct rdac_dh_data {
struct rdac_controller *ctlr; struct rdac_controller *ctlr;
#define UNINITIALIZED_LUN (1 << 8) #define UNINITIALIZED_LUN (1 << 8)
unsigned lun; unsigned lun;
#define RDAC_MODE 0
#define RDAC_MODE_AVT 1
#define RDAC_MODE_IOSHIP 2
unsigned char mode;
#define RDAC_STATE_ACTIVE 0 #define RDAC_STATE_ACTIVE 0
#define RDAC_STATE_PASSIVE 1 #define RDAC_STATE_PASSIVE 1
unsigned char state; unsigned char state;
#define RDAC_LUN_UNOWNED 0 #define RDAC_LUN_UNOWNED 0
#define RDAC_LUN_OWNED 1 #define RDAC_LUN_OWNED 1
#define RDAC_LUN_AVT 2
char lun_state; char lun_state;
#define RDAC_PREFERRED 0
#define RDAC_NON_PREFERRED 1
char preferred;
unsigned char sense[SCSI_SENSE_BUFFERSIZE]; unsigned char sense[SCSI_SENSE_BUFFERSIZE];
union { union {
struct c2_inquiry c2; struct c2_inquiry c2;
@ -199,11 +209,15 @@ struct rdac_dh_data {
} inq; } inq;
}; };
static const char *mode[] = {
"RDAC",
"AVT",
"IOSHIP",
};
static const char *lun_state[] = static const char *lun_state[] =
{ {
"unowned", "unowned",
"owned", "owned",
"owned (AVT mode)",
}; };
struct rdac_queue_data { struct rdac_queue_data {
@ -458,25 +472,33 @@ static int check_ownership(struct scsi_device *sdev, struct rdac_dh_data *h)
int err; int err;
struct c9_inquiry *inqp; struct c9_inquiry *inqp;
h->lun_state = RDAC_LUN_UNOWNED;
h->state = RDAC_STATE_ACTIVE; h->state = RDAC_STATE_ACTIVE;
err = submit_inquiry(sdev, 0xC9, sizeof(struct c9_inquiry), h); err = submit_inquiry(sdev, 0xC9, sizeof(struct c9_inquiry), h);
if (err == SCSI_DH_OK) { if (err == SCSI_DH_OK) {
inqp = &h->inq.c9; inqp = &h->inq.c9;
if ((inqp->avte_cvp >> 7) == 0x1) { /* detect the operating mode */
/* LUN in AVT mode */ if ((inqp->avte_cvp >> 5) & 0x1)
sdev_printk(KERN_NOTICE, sdev, h->mode = RDAC_MODE_IOSHIP; /* LUN in IOSHIP mode */
"%s: AVT mode detected\n", else if (inqp->avte_cvp >> 7)
RDAC_NAME); h->mode = RDAC_MODE_AVT; /* LUN in AVT mode */
h->lun_state = RDAC_LUN_AVT; else
} else if ((inqp->avte_cvp & 0x1) != 0) { h->mode = RDAC_MODE; /* LUN in RDAC mode */
/* LUN was owned by the controller */
h->lun_state = RDAC_LUN_OWNED;
}
}
if (h->lun_state == RDAC_LUN_UNOWNED) /* Update ownership */
h->state = RDAC_STATE_PASSIVE; if (inqp->avte_cvp & 0x1)
h->lun_state = RDAC_LUN_OWNED;
else {
h->lun_state = RDAC_LUN_UNOWNED;
if (h->mode == RDAC_MODE)
h->state = RDAC_STATE_PASSIVE;
}
/* Update path prio*/
if (inqp->path_prio & 0x1)
h->preferred = RDAC_PREFERRED;
else
h->preferred = RDAC_NON_PREFERRED;
}
return err; return err;
} }
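The mode detection above folds three pieces of state into the C9 inquiry data: bit 5 of avte_cvp selects IOSHIP mode, bit 7 AVT mode, otherwise the LUN is in plain RDAC mode, and bit 0 reports current ownership. A standalone sketch (editor's illustration, not part of this merge) of that decode, with values mirroring the constants the header hunk above introduces:

/*
 * Editor's sketch, not part of this merge.  Values mirror the
 * RDAC_MODE* additions to struct rdac_dh_data above.
 */
#define RDAC_MODE		0
#define RDAC_MODE_AVT		1
#define RDAC_MODE_IOSHIP	2

static unsigned char rdac_mode_example(unsigned char avte_cvp)
{
	if ((avte_cvp >> 5) & 0x1)
		return RDAC_MODE_IOSHIP;	/* LUN in IOSHIP mode */
	if (avte_cvp >> 7)
		return RDAC_MODE_AVT;		/* LUN in AVT mode */
	return RDAC_MODE;			/* LUN in plain RDAC mode */
}

static int rdac_lun_owned_example(unsigned char avte_cvp)
{
	return avte_cvp & 0x1;			/* this controller owns the LUN */
}

The rdac_activate() hunk below then acts on this state: a mode select is queued only when the LUN is unowned, and in IOSHIP mode only on the preferred path.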
@ -648,12 +670,27 @@ static int rdac_activate(struct scsi_device *sdev,
{ {
struct rdac_dh_data *h = get_rdac_data(sdev); struct rdac_dh_data *h = get_rdac_data(sdev);
int err = SCSI_DH_OK; int err = SCSI_DH_OK;
int act = 0;
err = check_ownership(sdev, h); err = check_ownership(sdev, h);
if (err != SCSI_DH_OK) if (err != SCSI_DH_OK)
goto done; goto done;
if (h->lun_state == RDAC_LUN_UNOWNED) { switch (h->mode) {
case RDAC_MODE:
if (h->lun_state == RDAC_LUN_UNOWNED)
act = 1;
break;
case RDAC_MODE_IOSHIP:
if ((h->lun_state == RDAC_LUN_UNOWNED) &&
(h->preferred == RDAC_PREFERRED))
act = 1;
break;
default:
break;
}
if (act) {
err = queue_mode_select(sdev, fn, data); err = queue_mode_select(sdev, fn, data);
if (err == SCSI_DH_OK) if (err == SCSI_DH_OK)
return 0; return 0;
@ -836,8 +873,9 @@ static int rdac_bus_attach(struct scsi_device *sdev)
spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags); spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
sdev_printk(KERN_NOTICE, sdev, sdev_printk(KERN_NOTICE, sdev,
"%s: LUN %d (%s)\n", "%s: LUN %d (%s) (%s)\n",
RDAC_NAME, h->lun, lun_state[(int)h->lun_state]); RDAC_NAME, h->lun, mode[(int)h->mode],
lun_state[(int)h->lun_state]);
return 0; return 0;


@ -780,7 +780,7 @@ static int adpt_abort(struct scsi_cmnd * cmd)
return FAILED; return FAILED;
} }
pHba = (adpt_hba*) cmd->device->host->hostdata[0]; pHba = (adpt_hba*) cmd->device->host->hostdata[0];
printk(KERN_INFO"%s: Trying to Abort cmd=%ld\n",pHba->name, cmd->serial_number); printk(KERN_INFO"%s: Trying to Abort\n",pHba->name);
if ((dptdevice = (void*) (cmd->device->hostdata)) == NULL) { if ((dptdevice = (void*) (cmd->device->hostdata)) == NULL) {
printk(KERN_ERR "%s: Unable to abort: No device in cmnd\n",pHba->name); printk(KERN_ERR "%s: Unable to abort: No device in cmnd\n",pHba->name);
return FAILED; return FAILED;
@ -802,10 +802,10 @@ static int adpt_abort(struct scsi_cmnd * cmd)
printk(KERN_INFO"%s: Abort cmd not supported\n",pHba->name); printk(KERN_INFO"%s: Abort cmd not supported\n",pHba->name);
return FAILED; return FAILED;
} }
printk(KERN_INFO"%s: Abort cmd=%ld failed.\n",pHba->name, cmd->serial_number); printk(KERN_INFO"%s: Abort failed.\n",pHba->name);
return FAILED; return FAILED;
} }
printk(KERN_INFO"%s: Abort cmd=%ld complete.\n",pHba->name, cmd->serial_number); printk(KERN_INFO"%s: Abort complete.\n",pHba->name);
return SUCCESS; return SUCCESS;
} }


@ -1766,8 +1766,8 @@ static int eata2x_queuecommand_lck(struct scsi_cmnd *SCpnt,
struct mscp *cpp; struct mscp *cpp;
if (SCpnt->host_scribble) if (SCpnt->host_scribble)
panic("%s: qcomm, pid %ld, SCpnt %p already active.\n", panic("%s: qcomm, SCpnt %p already active.\n",
ha->board_name, SCpnt->serial_number, SCpnt); ha->board_name, SCpnt);
/* i is the mailbox number, look for the first free mailbox /* i is the mailbox number, look for the first free mailbox
starting from last_cp_used */ starting from last_cp_used */
@ -1801,7 +1801,7 @@ static int eata2x_queuecommand_lck(struct scsi_cmnd *SCpnt,
if (do_trace) if (do_trace)
scmd_printk(KERN_INFO, SCpnt, scmd_printk(KERN_INFO, SCpnt,
"qcomm, mbox %d, pid %ld.\n", i, SCpnt->serial_number); "qcomm, mbox %d.\n", i);
cpp->reqsen = 1; cpp->reqsen = 1;
cpp->dispri = 1; cpp->dispri = 1;
@ -1833,8 +1833,7 @@ static int eata2x_queuecommand_lck(struct scsi_cmnd *SCpnt,
if (do_dma(shost->io_port, cpp->cp_dma_addr, SEND_CP_DMA)) { if (do_dma(shost->io_port, cpp->cp_dma_addr, SEND_CP_DMA)) {
unmap_dma(i, ha); unmap_dma(i, ha);
SCpnt->host_scribble = NULL; SCpnt->host_scribble = NULL;
scmd_printk(KERN_INFO, SCpnt, scmd_printk(KERN_INFO, SCpnt, "qcomm, adapter busy.\n");
"qcomm, pid %ld, adapter busy.\n", SCpnt->serial_number);
return 1; return 1;
} }
@ -1851,14 +1850,12 @@ static int eata2x_eh_abort(struct scsi_cmnd *SCarg)
unsigned int i; unsigned int i;
if (SCarg->host_scribble == NULL) { if (SCarg->host_scribble == NULL) {
scmd_printk(KERN_INFO, SCarg, scmd_printk(KERN_INFO, SCarg, "abort, cmd inactive.\n");
"abort, pid %ld inactive.\n", SCarg->serial_number);
return SUCCESS; return SUCCESS;
} }
i = *(unsigned int *)SCarg->host_scribble; i = *(unsigned int *)SCarg->host_scribble;
scmd_printk(KERN_WARNING, SCarg, scmd_printk(KERN_WARNING, SCarg, "abort, mbox %d.\n", i);
"abort, mbox %d, pid %ld.\n", i, SCarg->serial_number);
if (i >= shost->can_queue) if (i >= shost->can_queue)
panic("%s: abort, invalid SCarg->host_scribble.\n", ha->board_name); panic("%s: abort, invalid SCarg->host_scribble.\n", ha->board_name);
@ -1902,8 +1899,8 @@ static int eata2x_eh_abort(struct scsi_cmnd *SCarg)
SCarg->result = DID_ABORT << 16; SCarg->result = DID_ABORT << 16;
SCarg->host_scribble = NULL; SCarg->host_scribble = NULL;
ha->cp_stat[i] = FREE; ha->cp_stat[i] = FREE;
printk("%s, abort, mbox %d ready, DID_ABORT, pid %ld done.\n", printk("%s, abort, mbox %d ready, DID_ABORT, done.\n",
ha->board_name, i, SCarg->serial_number); ha->board_name, i);
SCarg->scsi_done(SCarg); SCarg->scsi_done(SCarg);
return SUCCESS; return SUCCESS;
} }
@ -1919,13 +1916,12 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
struct Scsi_Host *shost = SCarg->device->host; struct Scsi_Host *shost = SCarg->device->host;
struct hostdata *ha = (struct hostdata *)shost->hostdata; struct hostdata *ha = (struct hostdata *)shost->hostdata;
scmd_printk(KERN_INFO, SCarg, scmd_printk(KERN_INFO, SCarg, "reset, enter.\n");
"reset, enter, pid %ld.\n", SCarg->serial_number);
spin_lock_irq(shost->host_lock); spin_lock_irq(shost->host_lock);
if (SCarg->host_scribble == NULL) if (SCarg->host_scribble == NULL)
printk("%s: reset, pid %ld inactive.\n", ha->board_name, SCarg->serial_number); printk("%s: reset, inactive.\n", ha->board_name);
if (ha->in_reset) { if (ha->in_reset) {
printk("%s: reset, exit, already in reset.\n", ha->board_name); printk("%s: reset, exit, already in reset.\n", ha->board_name);
@ -1964,14 +1960,14 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
if (ha->cp_stat[i] == READY || ha->cp_stat[i] == ABORTING) { if (ha->cp_stat[i] == READY || ha->cp_stat[i] == ABORTING) {
ha->cp_stat[i] = ABORTING; ha->cp_stat[i] = ABORTING;
printk("%s: reset, mbox %d aborting, pid %ld.\n", printk("%s: reset, mbox %d aborting.\n",
ha->board_name, i, SCpnt->serial_number); ha->board_name, i);
} }
else { else {
ha->cp_stat[i] = IN_RESET; ha->cp_stat[i] = IN_RESET;
printk("%s: reset, mbox %d in reset, pid %ld.\n", printk("%s: reset, mbox %d in reset.\n",
ha->board_name, i, SCpnt->serial_number); ha->board_name, i);
} }
if (SCpnt->host_scribble == NULL) if (SCpnt->host_scribble == NULL)
@ -2025,8 +2021,8 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
ha->cp_stat[i] = LOCKED; ha->cp_stat[i] = LOCKED;
printk printk
("%s, reset, mbox %d locked, DID_RESET, pid %ld done.\n", ("%s, reset, mbox %d locked, DID_RESET, done.\n",
ha->board_name, i, SCpnt->serial_number); ha->board_name, i);
} }
else if (ha->cp_stat[i] == ABORTING) { else if (ha->cp_stat[i] == ABORTING) {
@ -2039,8 +2035,8 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
ha->cp_stat[i] = FREE; ha->cp_stat[i] = FREE;
printk printk
("%s, reset, mbox %d aborting, DID_RESET, pid %ld done.\n", ("%s, reset, mbox %d aborting, DID_RESET, done.\n",
ha->board_name, i, SCpnt->serial_number); ha->board_name, i);
} }
else else
@ -2054,7 +2050,7 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
do_trace = 0; do_trace = 0;
if (arg_done) if (arg_done)
printk("%s: reset, exit, pid %ld done.\n", ha->board_name, SCarg->serial_number); printk("%s: reset, exit, done.\n", ha->board_name);
else else
printk("%s: reset, exit.\n", ha->board_name); printk("%s: reset, exit.\n", ha->board_name);
@ -2238,10 +2234,10 @@ static int reorder(struct hostdata *ha, unsigned long cursec,
cpp = &ha->cp[k]; cpp = &ha->cp[k];
SCpnt = cpp->SCpnt; SCpnt = cpp->SCpnt;
scmd_printk(KERN_INFO, SCpnt, scmd_printk(KERN_INFO, SCpnt,
"%s pid %ld mb %d fc %d nr %d sec %ld ns %u" "%s mb %d fc %d nr %d sec %ld ns %u"
" cur %ld s:%c r:%c rev:%c in:%c ov:%c xd %d.\n", " cur %ld s:%c r:%c rev:%c in:%c ov:%c xd %d.\n",
(ihdlr ? "ihdlr" : "qcomm"), (ihdlr ? "ihdlr" : "qcomm"),
SCpnt->serial_number, k, flushcount, k, flushcount,
n_ready, blk_rq_pos(SCpnt->request), n_ready, blk_rq_pos(SCpnt->request),
blk_rq_sectors(SCpnt->request), cursec, YESNO(s), blk_rq_sectors(SCpnt->request), cursec, YESNO(s),
YESNO(r), YESNO(rev), YESNO(input_only), YESNO(r), YESNO(rev), YESNO(input_only),
@ -2285,10 +2281,10 @@ static void flush_dev(struct scsi_device *dev, unsigned long cursec,
if (do_dma(dev->host->io_port, cpp->cp_dma_addr, SEND_CP_DMA)) { if (do_dma(dev->host->io_port, cpp->cp_dma_addr, SEND_CP_DMA)) {
scmd_printk(KERN_INFO, SCpnt, scmd_printk(KERN_INFO, SCpnt,
"%s, pid %ld, mbox %d, adapter" "%s, mbox %d, adapter"
" busy, will abort.\n", " busy, will abort.\n",
(ihdlr ? "ihdlr" : "qcomm"), (ihdlr ? "ihdlr" : "qcomm"),
SCpnt->serial_number, k); k);
ha->cp_stat[k] = ABORTING; ha->cp_stat[k] = ABORTING;
continue; continue;
} }
@ -2398,12 +2394,12 @@ static irqreturn_t ihdlr(struct Scsi_Host *shost)
panic("%s: ihdlr, mbox %d, SCpnt == NULL.\n", ha->board_name, i); panic("%s: ihdlr, mbox %d, SCpnt == NULL.\n", ha->board_name, i);
if (SCpnt->host_scribble == NULL) if (SCpnt->host_scribble == NULL)
panic("%s: ihdlr, mbox %d, pid %ld, SCpnt %p garbled.\n", ha->board_name, panic("%s: ihdlr, mbox %d, SCpnt %p garbled.\n", ha->board_name,
i, SCpnt->serial_number, SCpnt); i, SCpnt);
if (*(unsigned int *)SCpnt->host_scribble != i) if (*(unsigned int *)SCpnt->host_scribble != i)
panic("%s: ihdlr, mbox %d, pid %ld, index mismatch %d.\n", panic("%s: ihdlr, mbox %d, index mismatch %d.\n",
ha->board_name, i, SCpnt->serial_number, ha->board_name, i,
*(unsigned int *)SCpnt->host_scribble); *(unsigned int *)SCpnt->host_scribble);
sync_dma(i, ha); sync_dma(i, ha);
@ -2449,11 +2445,11 @@ static irqreturn_t ihdlr(struct Scsi_Host *shost)
if (spp->target_status && SCpnt->device->type == TYPE_DISK && if (spp->target_status && SCpnt->device->type == TYPE_DISK &&
(!(tstatus == CHECK_CONDITION && ha->iocount <= 1000 && (!(tstatus == CHECK_CONDITION && ha->iocount <= 1000 &&
(SCpnt->sense_buffer[2] & 0xf) == NOT_READY))) (SCpnt->sense_buffer[2] & 0xf) == NOT_READY)))
printk("%s: ihdlr, target %d.%d:%d, pid %ld, " printk("%s: ihdlr, target %d.%d:%d, "
"target_status 0x%x, sense key 0x%x.\n", "target_status 0x%x, sense key 0x%x.\n",
ha->board_name, ha->board_name,
SCpnt->device->channel, SCpnt->device->id, SCpnt->device->channel, SCpnt->device->id,
SCpnt->device->lun, SCpnt->serial_number, SCpnt->device->lun,
spp->target_status, SCpnt->sense_buffer[2]); spp->target_status, SCpnt->sense_buffer[2]);
ha->target_to[SCpnt->device->id][SCpnt->device->channel] = 0; ha->target_to[SCpnt->device->id][SCpnt->device->channel] = 0;
@ -2522,9 +2518,9 @@ static irqreturn_t ihdlr(struct Scsi_Host *shost)
do_trace || msg_byte(spp->target_status)) do_trace || msg_byte(spp->target_status))
#endif #endif
scmd_printk(KERN_INFO, SCpnt, "ihdlr, mbox %2d, err 0x%x:%x," scmd_printk(KERN_INFO, SCpnt, "ihdlr, mbox %2d, err 0x%x:%x,"
" pid %ld, reg 0x%x, count %d.\n", " reg 0x%x, count %d.\n",
i, spp->adapter_status, spp->target_status, i, spp->adapter_status, spp->target_status,
SCpnt->serial_number, reg, ha->iocount); reg, ha->iocount);
unmap_dma(i, ha); unmap_dma(i, ha);


@ -372,8 +372,7 @@ static int eata_pio_queue_lck(struct scsi_cmnd *cmd,
cp->status = USED; /* claim free slot */ cp->status = USED; /* claim free slot */
DBG(DBG_QUEUE, scmd_printk(KERN_DEBUG, cmd, DBG(DBG_QUEUE, scmd_printk(KERN_DEBUG, cmd,
"eata_pio_queue pid %ld, y %d\n", "eata_pio_queue 0x%p, y %d\n", cmd, y));
cmd->serial_number, y));
cmd->scsi_done = (void *) done; cmd->scsi_done = (void *) done;
@ -417,8 +416,8 @@ static int eata_pio_queue_lck(struct scsi_cmnd *cmd,
if (eata_pio_send_command(base, EATA_CMD_PIO_SEND_CP)) { if (eata_pio_send_command(base, EATA_CMD_PIO_SEND_CP)) {
cmd->result = DID_BUS_BUSY << 16; cmd->result = DID_BUS_BUSY << 16;
scmd_printk(KERN_NOTICE, cmd, scmd_printk(KERN_NOTICE, cmd,
"eata_pio_queue pid %ld, HBA busy, " "eata_pio_queue pid 0x%p, HBA busy, "
"returning DID_BUS_BUSY, done.\n", cmd->serial_number); "returning DID_BUS_BUSY, done.\n", cmd);
done(cmd); done(cmd);
cp->status = FREE; cp->status = FREE;
return 0; return 0;
@ -432,8 +431,8 @@ static int eata_pio_queue_lck(struct scsi_cmnd *cmd,
outw(0, base + HA_RDATA); outw(0, base + HA_RDATA);
DBG(DBG_QUEUE, scmd_printk(KERN_DEBUG, cmd, DBG(DBG_QUEUE, scmd_printk(KERN_DEBUG, cmd,
"Queued base %#.4lx pid: %ld " "Queued base %#.4lx cmd: 0x%p "
"slot %d irq %d\n", sh->base, cmd->serial_number, y, sh->irq)); "slot %d irq %d\n", sh->base, cmd, y, sh->irq));
return 0; return 0;
} }
@ -445,8 +444,7 @@ static int eata_pio_abort(struct scsi_cmnd *cmd)
unsigned int loop = 100; unsigned int loop = 100;
DBG(DBG_ABNORM, scmd_printk(KERN_WARNING, cmd, DBG(DBG_ABNORM, scmd_printk(KERN_WARNING, cmd,
"eata_pio_abort called pid: %ld\n", "eata_pio_abort called pid: 0x%p\n", cmd));
cmd->serial_number));
while (inb(cmd->device->host->base + HA_RAUXSTAT) & HA_ABUSY) while (inb(cmd->device->host->base + HA_RAUXSTAT) & HA_ABUSY)
if (--loop == 0) { if (--loop == 0) {
@ -481,8 +479,7 @@ static int eata_pio_host_reset(struct scsi_cmnd *cmd)
struct Scsi_Host *host = cmd->device->host; struct Scsi_Host *host = cmd->device->host;
DBG(DBG_ABNORM, scmd_printk(KERN_WARNING, cmd, DBG(DBG_ABNORM, scmd_printk(KERN_WARNING, cmd,
"eata_pio_reset called pid:%ld\n", "eata_pio_reset called\n"));
cmd->serial_number));
spin_lock_irq(host->host_lock); spin_lock_irq(host->host_lock);
@ -501,7 +498,7 @@ static int eata_pio_host_reset(struct scsi_cmnd *cmd)
sp = HD(cmd)->ccb[x].cmd; sp = HD(cmd)->ccb[x].cmd;
HD(cmd)->ccb[x].status = RESET; HD(cmd)->ccb[x].status = RESET;
printk(KERN_WARNING "eata_pio_reset: slot %d in reset, pid %ld.\n", x, sp->serial_number); printk(KERN_WARNING "eata_pio_reset: slot %d in reset.\n", x);
if (sp == NULL) if (sp == NULL)
panic("eata_pio_reset: slot %d, sp==NULL.\n", x); panic("eata_pio_reset: slot %d, sp==NULL.\n", x);


@ -708,8 +708,7 @@ static void esp_maybe_execute_command(struct esp *esp)
tp = &esp->target[tgt]; tp = &esp->target[tgt];
lp = dev->hostdata; lp = dev->hostdata;
list_del(&ent->list); list_move(&ent->list, &esp->active_cmds);
list_add(&ent->list, &esp->active_cmds);
esp->active_cmd = ent; esp->active_cmd = ent;
@ -1244,8 +1243,7 @@ static int esp_finish_select(struct esp *esp)
/* Now that the state is unwound properly, put back onto /* Now that the state is unwound properly, put back onto
* the issue queue. This command is no longer active. * the issue queue. This command is no longer active.
*/ */
list_del(&ent->list); list_move(&ent->list, &esp->queued_cmds);
list_add(&ent->list, &esp->queued_cmds);
esp->active_cmd = NULL; esp->active_cmd = NULL;
/* Return value ignored by caller, it directly invokes /* Return value ignored by caller, it directly invokes


@ -380,49 +380,6 @@ out:
return fcoe; return fcoe;
} }
/**
* fcoe_interface_cleanup() - Clean up a FCoE interface
* @fcoe: The FCoE interface to be cleaned up
*
* Caller must be holding the RTNL mutex
*/
void fcoe_interface_cleanup(struct fcoe_interface *fcoe)
{
struct net_device *netdev = fcoe->netdev;
struct fcoe_ctlr *fip = &fcoe->ctlr;
u8 flogi_maddr[ETH_ALEN];
const struct net_device_ops *ops;
/*
* Don't listen for Ethernet packets anymore.
* synchronize_net() ensures that the packet handlers are not running
* on another CPU. dev_remove_pack() would do that, this calls the
* unsyncronized version __dev_remove_pack() to avoid multiple delays.
*/
__dev_remove_pack(&fcoe->fcoe_packet_type);
__dev_remove_pack(&fcoe->fip_packet_type);
synchronize_net();
/* Delete secondary MAC addresses */
memcpy(flogi_maddr, (u8[6]) FC_FCOE_FLOGI_MAC, ETH_ALEN);
dev_uc_del(netdev, flogi_maddr);
if (fip->spma)
dev_uc_del(netdev, fip->ctl_src_addr);
if (fip->mode == FIP_MODE_VN2VN) {
dev_mc_del(netdev, FIP_ALL_VN2VN_MACS);
dev_mc_del(netdev, FIP_ALL_P2P_MACS);
} else
dev_mc_del(netdev, FIP_ALL_ENODE_MACS);
/* Tell the LLD we are done w/ FCoE */
ops = netdev->netdev_ops;
if (ops->ndo_fcoe_disable) {
if (ops->ndo_fcoe_disable(netdev))
FCOE_NETDEV_DBG(netdev, "Failed to disable FCoE"
" specific feature for LLD.\n");
}
}
/** /**
* fcoe_interface_release() - fcoe_port kref release function * fcoe_interface_release() - fcoe_port kref release function
* @kref: Embedded reference count in an fcoe_interface struct * @kref: Embedded reference count in an fcoe_interface struct
@ -459,6 +416,68 @@ static inline void fcoe_interface_put(struct fcoe_interface *fcoe)
kref_put(&fcoe->kref, fcoe_interface_release); kref_put(&fcoe->kref, fcoe_interface_release);
} }
/**
* fcoe_interface_cleanup() - Clean up a FCoE interface
* @fcoe: The FCoE interface to be cleaned up
*
* Caller must be holding the RTNL mutex
*/
void fcoe_interface_cleanup(struct fcoe_interface *fcoe)
{
struct net_device *netdev = fcoe->netdev;
struct fcoe_ctlr *fip = &fcoe->ctlr;
u8 flogi_maddr[ETH_ALEN];
const struct net_device_ops *ops;
struct fcoe_port *port = lport_priv(fcoe->ctlr.lp);
FCOE_NETDEV_DBG(netdev, "Destroying interface\n");
/* Logout of the fabric */
fc_fabric_logoff(fcoe->ctlr.lp);
/* Cleanup the fc_lport */
fc_lport_destroy(fcoe->ctlr.lp);
/* Stop the transmit retry timer */
del_timer_sync(&port->timer);
/* Free existing transmit skbs */
fcoe_clean_pending_queue(fcoe->ctlr.lp);
/*
* Don't listen for Ethernet packets anymore.
* synchronize_net() ensures that the packet handlers are not running
* on another CPU. dev_remove_pack() would do that, this calls the
* unsynchronized version __dev_remove_pack() to avoid multiple delays.
*/
__dev_remove_pack(&fcoe->fcoe_packet_type);
__dev_remove_pack(&fcoe->fip_packet_type);
synchronize_net();
/* Delete secondary MAC addresses */
memcpy(flogi_maddr, (u8[6]) FC_FCOE_FLOGI_MAC, ETH_ALEN);
dev_uc_del(netdev, flogi_maddr);
if (fip->spma)
dev_uc_del(netdev, fip->ctl_src_addr);
if (fip->mode == FIP_MODE_VN2VN) {
dev_mc_del(netdev, FIP_ALL_VN2VN_MACS);
dev_mc_del(netdev, FIP_ALL_P2P_MACS);
} else
dev_mc_del(netdev, FIP_ALL_ENODE_MACS);
if (!is_zero_ether_addr(port->data_src_addr))
dev_uc_del(netdev, port->data_src_addr);
/* Tell the LLD we are done w/ FCoE */
ops = netdev->netdev_ops;
if (ops->ndo_fcoe_disable) {
if (ops->ndo_fcoe_disable(netdev))
FCOE_NETDEV_DBG(netdev, "Failed to disable FCoE"
" specific feature for LLD.\n");
}
fcoe_interface_put(fcoe);
}
/** /**
* fcoe_fip_recv() - Handler for received FIP frames * fcoe_fip_recv() - Handler for received FIP frames
* @skb: The receive skb * @skb: The receive skb
@ -821,39 +840,9 @@ skip_oem:
* fcoe_if_destroy() - Tear down a SW FCoE instance * fcoe_if_destroy() - Tear down a SW FCoE instance
* @lport: The local port to be destroyed * @lport: The local port to be destroyed
* *
* Locking: must be called with the RTNL mutex held and RTNL mutex
* needed to be dropped by this function since not dropping RTNL
* would cause circular locking warning on synchronous fip worker
* cancelling thru fcoe_interface_put invoked by this function.
*
*/ */
static void fcoe_if_destroy(struct fc_lport *lport) static void fcoe_if_destroy(struct fc_lport *lport)
{ {
struct fcoe_port *port = lport_priv(lport);
struct fcoe_interface *fcoe = port->priv;
struct net_device *netdev = fcoe->netdev;
FCOE_NETDEV_DBG(netdev, "Destroying interface\n");
/* Logout of the fabric */
fc_fabric_logoff(lport);
/* Cleanup the fc_lport */
fc_lport_destroy(lport);
/* Stop the transmit retry timer */
del_timer_sync(&port->timer);
/* Free existing transmit skbs */
fcoe_clean_pending_queue(lport);
if (!is_zero_ether_addr(port->data_src_addr))
dev_uc_del(netdev, port->data_src_addr);
rtnl_unlock();
/* receives may not be stopped until after this */
fcoe_interface_put(fcoe);
/* Free queued packets for the per-CPU receive threads */ /* Free queued packets for the per-CPU receive threads */
fcoe_percpu_clean(lport); fcoe_percpu_clean(lport);
@ -1783,23 +1772,8 @@ static int fcoe_disable(struct net_device *netdev)
int rc = 0; int rc = 0;
mutex_lock(&fcoe_config_mutex); mutex_lock(&fcoe_config_mutex);
#ifdef CONFIG_FCOE_MODULE
/*
* Make sure the module has been initialized, and is not about to be
* removed. Module parameter sysfs files are writable before the
* module_init function is called and after module_exit.
*/
if (THIS_MODULE->state != MODULE_STATE_LIVE) {
rc = -ENODEV;
goto out_nodev;
}
#endif
if (!rtnl_trylock()) {
mutex_unlock(&fcoe_config_mutex);
return -ERESTARTSYS;
}
rtnl_lock();
fcoe = fcoe_hostlist_lookup_port(netdev); fcoe = fcoe_hostlist_lookup_port(netdev);
rtnl_unlock(); rtnl_unlock();
@ -1809,7 +1783,6 @@ static int fcoe_disable(struct net_device *netdev)
} else } else
rc = -ENODEV; rc = -ENODEV;
out_nodev:
mutex_unlock(&fcoe_config_mutex); mutex_unlock(&fcoe_config_mutex);
return rc; return rc;
} }
@ -1828,22 +1801,7 @@ static int fcoe_enable(struct net_device *netdev)
int rc = 0; int rc = 0;
mutex_lock(&fcoe_config_mutex); mutex_lock(&fcoe_config_mutex);
#ifdef CONFIG_FCOE_MODULE rtnl_lock();
/*
* Make sure the module has been initialized, and is not about to be
* removed. Module parameter sysfs files are writable before the
* module_init function is called and after module_exit.
*/
if (THIS_MODULE->state != MODULE_STATE_LIVE) {
rc = -ENODEV;
goto out_nodev;
}
#endif
if (!rtnl_trylock()) {
mutex_unlock(&fcoe_config_mutex);
return -ERESTARTSYS;
}
fcoe = fcoe_hostlist_lookup_port(netdev); fcoe = fcoe_hostlist_lookup_port(netdev);
rtnl_unlock(); rtnl_unlock();
@ -1852,7 +1810,6 @@ static int fcoe_enable(struct net_device *netdev)
else if (!fcoe_link_ok(fcoe->ctlr.lp)) else if (!fcoe_link_ok(fcoe->ctlr.lp))
fcoe_ctlr_link_up(&fcoe->ctlr); fcoe_ctlr_link_up(&fcoe->ctlr);
out_nodev:
mutex_unlock(&fcoe_config_mutex); mutex_unlock(&fcoe_config_mutex);
return rc; return rc;
} }
@ -1868,35 +1825,22 @@ out_nodev:
static int fcoe_destroy(struct net_device *netdev) static int fcoe_destroy(struct net_device *netdev)
{ {
struct fcoe_interface *fcoe; struct fcoe_interface *fcoe;
struct fc_lport *lport;
int rc = 0; int rc = 0;
mutex_lock(&fcoe_config_mutex); mutex_lock(&fcoe_config_mutex);
#ifdef CONFIG_FCOE_MODULE rtnl_lock();
/*
* Make sure the module has been initialized, and is not about to be
* removed. Module parameter sysfs files are writable before the
* module_init function is called and after module_exit.
*/
if (THIS_MODULE->state != MODULE_STATE_LIVE) {
rc = -ENODEV;
goto out_nodev;
}
#endif
if (!rtnl_trylock()) {
mutex_unlock(&fcoe_config_mutex);
return -ERESTARTSYS;
}
fcoe = fcoe_hostlist_lookup_port(netdev); fcoe = fcoe_hostlist_lookup_port(netdev);
if (!fcoe) { if (!fcoe) {
rtnl_unlock(); rtnl_unlock();
rc = -ENODEV; rc = -ENODEV;
goto out_nodev; goto out_nodev;
} }
fcoe_interface_cleanup(fcoe); lport = fcoe->ctlr.lp;
list_del(&fcoe->list); list_del(&fcoe->list);
/* RTNL mutex is dropped by fcoe_if_destroy */ fcoe_interface_cleanup(fcoe);
fcoe_if_destroy(fcoe->ctlr.lp); rtnl_unlock();
fcoe_if_destroy(lport);
out_nodev: out_nodev:
mutex_unlock(&fcoe_config_mutex); mutex_unlock(&fcoe_config_mutex);
return rc; return rc;
@ -1912,8 +1856,6 @@ static void fcoe_destroy_work(struct work_struct *work)
port = container_of(work, struct fcoe_port, destroy_work); port = container_of(work, struct fcoe_port, destroy_work);
mutex_lock(&fcoe_config_mutex); mutex_lock(&fcoe_config_mutex);
rtnl_lock();
/* RTNL mutex is dropped by fcoe_if_destroy */
fcoe_if_destroy(port->lport); fcoe_if_destroy(port->lport);
mutex_unlock(&fcoe_config_mutex); mutex_unlock(&fcoe_config_mutex);
} }
@ -1948,23 +1890,7 @@ static int fcoe_create(struct net_device *netdev, enum fip_state fip_mode)
struct fc_lport *lport; struct fc_lport *lport;
mutex_lock(&fcoe_config_mutex); mutex_lock(&fcoe_config_mutex);
rtnl_lock();
if (!rtnl_trylock()) {
mutex_unlock(&fcoe_config_mutex);
return -ERESTARTSYS;
}
#ifdef CONFIG_FCOE_MODULE
/*
* Make sure the module has been initialized, and is not about to be
* removed. Module parameter sysfs files are writable before the
* module_init function is called and after module_exit.
*/
if (THIS_MODULE->state != MODULE_STATE_LIVE) {
rc = -ENODEV;
goto out_nodev;
}
#endif
/* look for existing lport */ /* look for existing lport */
if (fcoe_hostlist_lookup(netdev)) { if (fcoe_hostlist_lookup(netdev)) {

View File

@ -978,10 +978,8 @@ static void fcoe_ctlr_recv_adv(struct fcoe_ctlr *fip, struct sk_buff *skb)
* the FCF that answers multicast solicitations, not the others that * the FCF that answers multicast solicitations, not the others that
* are sending periodic multicast advertisements. * are sending periodic multicast advertisements.
*/ */
if (mtu_valid) { if (mtu_valid)
list_del(&fcf->list); list_move(&fcf->list, &fip->fcfs);
list_add(&fcf->list, &fip->fcfs);
}
/* /*
* If this is the first validated FCF, note the time and * If this is the first validated FCF, note the time and

View File

@ -335,7 +335,7 @@ out_attach:
EXPORT_SYMBOL(fcoe_transport_attach); EXPORT_SYMBOL(fcoe_transport_attach);
/** /**
* fcoe_transport_attach - Detaches an FCoE transport * fcoe_transport_detach - Detaches an FCoE transport
* @ft: The fcoe transport to be attached * @ft: The fcoe transport to be attached
* *
* Returns : 0 for success * Returns : 0 for success
@ -343,6 +343,7 @@ EXPORT_SYMBOL(fcoe_transport_attach);
int fcoe_transport_detach(struct fcoe_transport *ft) int fcoe_transport_detach(struct fcoe_transport *ft)
{ {
int rc = 0; int rc = 0;
struct fcoe_netdev_mapping *nm = NULL, *tmp;
mutex_lock(&ft_mutex); mutex_lock(&ft_mutex);
if (!ft->attached) { if (!ft->attached) {
@ -352,6 +353,19 @@ int fcoe_transport_detach(struct fcoe_transport *ft)
goto out_attach; goto out_attach;
} }
/* remove netdev mapping for this transport as it is going away */
mutex_lock(&fn_mutex);
list_for_each_entry_safe(nm, tmp, &fcoe_netdevs, list) {
if (nm->ft == ft) {
LIBFCOE_TRANSPORT_DBG("transport %s going away, "
"remove its netdev mapping for %s\n",
ft->name, nm->netdev->name);
list_del(&nm->list);
kfree(nm);
}
}
mutex_unlock(&fn_mutex);
list_del(&ft->list); list_del(&ft->list);
ft->attached = false; ft->attached = false;
LIBFCOE_TRANSPORT_DBG("detaching transport %s\n", ft->name); LIBFCOE_TRANSPORT_DBG("detaching transport %s\n", ft->name);
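The new loop above walks fcoe_netdevs with list_for_each_entry_safe() because it kfree()s the mappings owned by the departing transport while iterating; the _safe variant caches the next pointer before the current entry is freed. A minimal user-space sketch of the same delete-while-walking idea, on a hand-rolled singly linked list (names here are illustrative, not libfcoe's):

    #include <stdio.h>
    #include <stdlib.h>

    struct mapping {
        struct mapping *next;
        int owner;                 /* stands in for nm->ft in the hunk above */
    };

    /* Remove every mapping belonging to @owner. The successor is saved
     * before free() so the walk survives deleting the current node, which
     * is what the _safe iterator guarantees in the kernel. */
    static void remove_mappings(struct mapping **head, int owner)
    {
        struct mapping **link = head;
        struct mapping *cur = *head, *tmp;

        while (cur) {
            tmp = cur->next;       /* remember the successor first */
            if (cur->owner == owner) {
                *link = tmp;       /* unlink */
                free(cur);
            } else {
                link = &cur->next;
            }
            cur = tmp;
        }
    }

    int main(void)
    {
        struct mapping *head = NULL;

        for (int i = 0; i < 4; i++) {
            struct mapping *m = malloc(sizeof(*m));
            m->owner = i % 2;
            m->next = head;
            head = m;
        }
        remove_mappings(&head, 1); /* drop the mappings of transport "1" */
        for (struct mapping *m = head; m; m = m->next)
            printf("kept mapping owned by %d\n", m->owner);
        return 0;
    }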
@ -371,9 +385,9 @@ static int fcoe_transport_show(char *buffer, const struct kernel_param *kp)
i = j = sprintf(buffer, "Attached FCoE transports:"); i = j = sprintf(buffer, "Attached FCoE transports:");
mutex_lock(&ft_mutex); mutex_lock(&ft_mutex);
list_for_each_entry(ft, &fcoe_transports, list) { list_for_each_entry(ft, &fcoe_transports, list) {
i += snprintf(&buffer[i], IFNAMSIZ, "%s ", ft->name); if (i >= PAGE_SIZE - IFNAMSIZ)
if (i >= PAGE_SIZE)
break; break;
i += snprintf(&buffer[i], IFNAMSIZ, "%s ", ft->name);
} }
mutex_unlock(&ft_mutex); mutex_unlock(&ft_mutex);
if (i == j) if (i == j)
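The fcoe_transport_show() hunk above moves the space check ahead of the snprintf(), so a transport name is only appended while at least IFNAMSIZ bytes remain; previously the bound was tested after the write, letting the last name run past the end of the page-sized buffer. A small user-space sketch of the check-before-append pattern (buffer sizes and names below are made up):

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 64   /* stands in for PAGE_SIZE */
    #define NAMESZ   16   /* stands in for IFNAMSIZ  */

    /* Append names only while at least NAMESZ bytes remain, mirroring the
     * "check before write" ordering introduced by the hunk above. */
    static int append_names(char *buf, const char *const *names, int n)
    {
        int i = sprintf(buf, "Attached FCoE transports:");

        for (int k = 0; k < n; k++) {
            if (i >= BUF_SIZE - NAMESZ)   /* bound the write up front */
                break;
            i += snprintf(&buf[i], NAMESZ, "%s ", names[k]);
        }
        return i;
    }

    int main(void)
    {
        const char *names[] = { "fcoe_sw", "bnx2fc", "fnic", "qedf" };
        char buf[BUF_SIZE];

        printf("%d bytes: %s\n", append_names(buf, names, 4), buf);
        return 0;
    }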
@ -530,9 +544,6 @@ static int fcoe_transport_create(const char *buffer, struct kernel_param *kp)
struct fcoe_transport *ft = NULL; struct fcoe_transport *ft = NULL;
enum fip_state fip_mode = (enum fip_state)(long)kp->arg; enum fip_state fip_mode = (enum fip_state)(long)kp->arg;
if (!mutex_trylock(&ft_mutex))
return restart_syscall();
#ifdef CONFIG_LIBFCOE_MODULE #ifdef CONFIG_LIBFCOE_MODULE
/* /*
* Make sure the module has been initialized, and is not about to be * Make sure the module has been initialized, and is not about to be
@ -543,6 +554,8 @@ static int fcoe_transport_create(const char *buffer, struct kernel_param *kp)
goto out_nodev; goto out_nodev;
#endif #endif
mutex_lock(&ft_mutex);
netdev = fcoe_if_to_netdev(buffer); netdev = fcoe_if_to_netdev(buffer);
if (!netdev) { if (!netdev) {
LIBFCOE_TRANSPORT_DBG("Invalid device %s.\n", buffer); LIBFCOE_TRANSPORT_DBG("Invalid device %s.\n", buffer);
@ -586,10 +599,7 @@ out_putdev:
dev_put(netdev); dev_put(netdev);
out_nodev: out_nodev:
mutex_unlock(&ft_mutex); mutex_unlock(&ft_mutex);
if (rc == -ERESTARTSYS) return rc;
return restart_syscall();
else
return rc;
} }
/** /**
@ -608,9 +618,6 @@ static int fcoe_transport_destroy(const char *buffer, struct kernel_param *kp)
struct net_device *netdev = NULL; struct net_device *netdev = NULL;
struct fcoe_transport *ft = NULL; struct fcoe_transport *ft = NULL;
if (!mutex_trylock(&ft_mutex))
return restart_syscall();
#ifdef CONFIG_LIBFCOE_MODULE #ifdef CONFIG_LIBFCOE_MODULE
/* /*
* Make sure the module has been initialized, and is not about to be * Make sure the module has been initialized, and is not about to be
@ -621,6 +628,8 @@ static int fcoe_transport_destroy(const char *buffer, struct kernel_param *kp)
goto out_nodev; goto out_nodev;
#endif #endif
mutex_lock(&ft_mutex);
netdev = fcoe_if_to_netdev(buffer); netdev = fcoe_if_to_netdev(buffer);
if (!netdev) { if (!netdev) {
LIBFCOE_TRANSPORT_DBG("invalid device %s.\n", buffer); LIBFCOE_TRANSPORT_DBG("invalid device %s.\n", buffer);
@ -645,11 +654,7 @@ out_putdev:
dev_put(netdev); dev_put(netdev);
out_nodev: out_nodev:
mutex_unlock(&ft_mutex); mutex_unlock(&ft_mutex);
return rc;
if (rc == -ERESTARTSYS)
return restart_syscall();
else
return rc;
} }
/** /**
@ -667,9 +672,6 @@ static int fcoe_transport_disable(const char *buffer, struct kernel_param *kp)
struct net_device *netdev = NULL; struct net_device *netdev = NULL;
struct fcoe_transport *ft = NULL; struct fcoe_transport *ft = NULL;
if (!mutex_trylock(&ft_mutex))
return restart_syscall();
#ifdef CONFIG_LIBFCOE_MODULE #ifdef CONFIG_LIBFCOE_MODULE
/* /*
* Make sure the module has been initialized, and is not about to be * Make sure the module has been initialized, and is not about to be
@ -680,6 +682,8 @@ static int fcoe_transport_disable(const char *buffer, struct kernel_param *kp)
goto out_nodev; goto out_nodev;
#endif #endif
mutex_lock(&ft_mutex);
netdev = fcoe_if_to_netdev(buffer); netdev = fcoe_if_to_netdev(buffer);
if (!netdev) if (!netdev)
goto out_nodev; goto out_nodev;
@ -716,9 +720,6 @@ static int fcoe_transport_enable(const char *buffer, struct kernel_param *kp)
struct net_device *netdev = NULL; struct net_device *netdev = NULL;
struct fcoe_transport *ft = NULL; struct fcoe_transport *ft = NULL;
if (!mutex_trylock(&ft_mutex))
return restart_syscall();
#ifdef CONFIG_LIBFCOE_MODULE #ifdef CONFIG_LIBFCOE_MODULE
/* /*
* Make sure the module has been initialized, and is not about to be * Make sure the module has been initialized, and is not about to be
@ -729,6 +730,8 @@ static int fcoe_transport_enable(const char *buffer, struct kernel_param *kp)
goto out_nodev; goto out_nodev;
#endif #endif
mutex_lock(&ft_mutex);
netdev = fcoe_if_to_netdev(buffer); netdev = fcoe_if_to_netdev(buffer);
if (!netdev) if (!netdev)
goto out_nodev; goto out_nodev;
@ -743,10 +746,7 @@ out_putdev:
dev_put(netdev); dev_put(netdev);
out_nodev: out_nodev:
mutex_unlock(&ft_mutex); mutex_unlock(&ft_mutex);
if (rc == -ERESTARTSYS) return rc;
return restart_syscall();
else
return rc;
} }
/** /**

View File

@ -273,7 +273,7 @@ static ssize_t host_show_transport_mode(struct device *dev,
"performant" : "simple"); "performant" : "simple");
} }
/* List of controllers which cannot be reset on kexec with reset_devices */ /* List of controllers which cannot be hard reset on kexec with reset_devices */
static u32 unresettable_controller[] = { static u32 unresettable_controller[] = {
0x324a103C, /* Smart Array P712m */ 0x324a103C, /* Smart Array P712m */
0x324b103C, /* SmartArray P711m */ 0x324b103C, /* SmartArray P711m */
@ -291,16 +291,45 @@ static u32 unresettable_controller[] = {
0x409D0E11, /* Smart Array 6400 EM */ 0x409D0E11, /* Smart Array 6400 EM */
}; };
static int ctlr_is_resettable(struct ctlr_info *h) /* List of controllers which cannot even be soft reset */
static u32 soft_unresettable_controller[] = {
/* Exclude 640x boards. These are two pci devices in one slot
* which share a battery backed cache module. One controls the
* cache, the other accesses the cache through the one that controls
* it. If we reset the one controlling the cache, the other will
* likely not be happy. Just forbid resetting this conjoined mess.
* The 640x isn't really supported by hpsa anyway.
*/
0x409C0E11, /* Smart Array 6400 */
0x409D0E11, /* Smart Array 6400 EM */
};
static int ctlr_is_hard_resettable(u32 board_id)
{ {
int i; int i;
for (i = 0; i < ARRAY_SIZE(unresettable_controller); i++) for (i = 0; i < ARRAY_SIZE(unresettable_controller); i++)
if (unresettable_controller[i] == h->board_id) if (unresettable_controller[i] == board_id)
return 0; return 0;
return 1; return 1;
} }
static int ctlr_is_soft_resettable(u32 board_id)
{
int i;
for (i = 0; i < ARRAY_SIZE(soft_unresettable_controller); i++)
if (soft_unresettable_controller[i] == board_id)
return 0;
return 1;
}
static int ctlr_is_resettable(u32 board_id)
{
return ctlr_is_hard_resettable(board_id) ||
ctlr_is_soft_resettable(board_id);
}
static ssize_t host_show_resettable(struct device *dev, static ssize_t host_show_resettable(struct device *dev,
struct device_attribute *attr, char *buf) struct device_attribute *attr, char *buf)
{ {
@ -308,7 +337,7 @@ static ssize_t host_show_resettable(struct device *dev,
struct Scsi_Host *shost = class_to_shost(dev); struct Scsi_Host *shost = class_to_shost(dev);
h = shost_to_hba(shost); h = shost_to_hba(shost);
return snprintf(buf, 20, "%d\n", ctlr_is_resettable(h)); return snprintf(buf, 20, "%d\n", ctlr_is_resettable(h->board_id));
} }
static inline int is_logical_dev_addr_mode(unsigned char scsi3addr[]) static inline int is_logical_dev_addr_mode(unsigned char scsi3addr[])
@ -929,13 +958,6 @@ static void hpsa_slave_destroy(struct scsi_device *sdev)
/* nothing to do. */ /* nothing to do. */
} }
static void hpsa_scsi_setup(struct ctlr_info *h)
{
h->ndevices = 0;
h->scsi_host = NULL;
spin_lock_init(&h->devlock);
}
static void hpsa_free_sg_chain_blocks(struct ctlr_info *h) static void hpsa_free_sg_chain_blocks(struct ctlr_info *h)
{ {
int i; int i;
@ -1006,8 +1028,7 @@ static void hpsa_unmap_sg_chain_block(struct ctlr_info *h,
pci_unmap_single(h->pdev, temp64.val, chain_sg->Len, PCI_DMA_TODEVICE); pci_unmap_single(h->pdev, temp64.val, chain_sg->Len, PCI_DMA_TODEVICE);
} }
static void complete_scsi_command(struct CommandList *cp, static void complete_scsi_command(struct CommandList *cp)
int timeout, u32 tag)
{ {
struct scsi_cmnd *cmd; struct scsi_cmnd *cmd;
struct ctlr_info *h; struct ctlr_info *h;
@ -1308,7 +1329,7 @@ static void hpsa_scsi_do_simple_cmd_with_retry(struct ctlr_info *h,
int retry_count = 0; int retry_count = 0;
do { do {
memset(c->err_info, 0, sizeof(c->err_info)); memset(c->err_info, 0, sizeof(*c->err_info));
hpsa_scsi_do_simple_cmd_core(h, c); hpsa_scsi_do_simple_cmd_core(h, c);
retry_count++; retry_count++;
} while (check_for_unit_attention(h, c) && retry_count <= 3); } while (check_for_unit_attention(h, c) && retry_count <= 3);
@ -1570,6 +1591,7 @@ static unsigned char *msa2xxx_model[] = {
"MSA2024", "MSA2024",
"MSA2312", "MSA2312",
"MSA2324", "MSA2324",
"P2000 G3 SAS",
NULL, NULL,
}; };
@ -2751,6 +2773,26 @@ static int hpsa_ioctl(struct scsi_device *dev, int cmd, void *arg)
} }
} }
static int __devinit hpsa_send_host_reset(struct ctlr_info *h,
unsigned char *scsi3addr, u8 reset_type)
{
struct CommandList *c;
c = cmd_alloc(h);
if (!c)
return -ENOMEM;
fill_cmd(c, HPSA_DEVICE_RESET_MSG, h, NULL, 0, 0,
RAID_CTLR_LUNID, TYPE_MSG);
c->Request.CDB[1] = reset_type; /* fill_cmd defaults to target reset */
c->waiting = NULL;
enqueue_cmd_and_start_io(h, c);
/* Don't wait for completion, the reset won't complete. Don't free
* the command either. This is the last command we will send before
* re-initializing everything, so it doesn't matter and won't leak.
*/
return 0;
}
static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h, static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
void *buff, size_t size, u8 page_code, unsigned char *scsi3addr, void *buff, size_t size, u8 page_code, unsigned char *scsi3addr,
int cmd_type) int cmd_type)
@ -2828,7 +2870,8 @@ static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
c->Request.Type.Attribute = ATTR_SIMPLE; c->Request.Type.Attribute = ATTR_SIMPLE;
c->Request.Type.Direction = XFER_NONE; c->Request.Type.Direction = XFER_NONE;
c->Request.Timeout = 0; /* Don't time out */ c->Request.Timeout = 0; /* Don't time out */
c->Request.CDB[0] = 0x01; /* RESET_MSG is 0x01 */ memset(&c->Request.CDB[0], 0, sizeof(c->Request.CDB));
c->Request.CDB[0] = cmd;
c->Request.CDB[1] = 0x03; /* Reset target above */ c->Request.CDB[1] = 0x03; /* Reset target above */
/* If bytes 4-7 are zero, it means reset the */ /* If bytes 4-7 are zero, it means reset the */
/* LunID device */ /* LunID device */
@ -2936,7 +2979,7 @@ static inline void finish_cmd(struct CommandList *c, u32 raw_tag)
{ {
removeQ(c); removeQ(c);
if (likely(c->cmd_type == CMD_SCSI)) if (likely(c->cmd_type == CMD_SCSI))
complete_scsi_command(c, 0, raw_tag); complete_scsi_command(c);
else if (c->cmd_type == CMD_IOCTL_PEND) else if (c->cmd_type == CMD_IOCTL_PEND)
complete(c->waiting); complete(c->waiting);
} }
@ -2994,6 +3037,63 @@ static inline u32 process_nonindexed_cmd(struct ctlr_info *h,
return next_command(h); return next_command(h);
} }
/* Some controllers, like p400, will give us one interrupt
* after a soft reset, even if we turned interrupts off.
* Only need to check for this in the hpsa_xxx_discard_completions
* functions.
*/
static int ignore_bogus_interrupt(struct ctlr_info *h)
{
if (likely(!reset_devices))
return 0;
if (likely(h->interrupts_enabled))
return 0;
dev_info(&h->pdev->dev, "Received interrupt while interrupts disabled "
"(known firmware bug.) Ignoring.\n");
return 1;
}
static irqreturn_t hpsa_intx_discard_completions(int irq, void *dev_id)
{
struct ctlr_info *h = dev_id;
unsigned long flags;
u32 raw_tag;
if (ignore_bogus_interrupt(h))
return IRQ_NONE;
if (interrupt_not_for_us(h))
return IRQ_NONE;
spin_lock_irqsave(&h->lock, flags);
while (interrupt_pending(h)) {
raw_tag = get_next_completion(h);
while (raw_tag != FIFO_EMPTY)
raw_tag = next_command(h);
}
spin_unlock_irqrestore(&h->lock, flags);
return IRQ_HANDLED;
}
static irqreturn_t hpsa_msix_discard_completions(int irq, void *dev_id)
{
struct ctlr_info *h = dev_id;
unsigned long flags;
u32 raw_tag;
if (ignore_bogus_interrupt(h))
return IRQ_NONE;
spin_lock_irqsave(&h->lock, flags);
raw_tag = get_next_completion(h);
while (raw_tag != FIFO_EMPTY)
raw_tag = next_command(h);
spin_unlock_irqrestore(&h->lock, flags);
return IRQ_HANDLED;
}
static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id) static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id)
{ {
struct ctlr_info *h = dev_id; struct ctlr_info *h = dev_id;
@ -3132,11 +3232,10 @@ static __devinit int hpsa_message(struct pci_dev *pdev, unsigned char opcode,
return 0; return 0;
} }
#define hpsa_soft_reset_controller(p) hpsa_message(p, 1, 0)
#define hpsa_noop(p) hpsa_message(p, 3, 0) #define hpsa_noop(p) hpsa_message(p, 3, 0)
static int hpsa_controller_hard_reset(struct pci_dev *pdev, static int hpsa_controller_hard_reset(struct pci_dev *pdev,
void * __iomem vaddr, bool use_doorbell) void * __iomem vaddr, u32 use_doorbell)
{ {
u16 pmcsr; u16 pmcsr;
int pos; int pos;
@ -3147,8 +3246,7 @@ static int hpsa_controller_hard_reset(struct pci_dev *pdev,
* other way using the doorbell register. * other way using the doorbell register.
*/ */
dev_info(&pdev->dev, "using doorbell to reset controller\n"); dev_info(&pdev->dev, "using doorbell to reset controller\n");
writel(DOORBELL_CTLR_RESET, vaddr + SA5_DOORBELL); writel(use_doorbell, vaddr + SA5_DOORBELL);
msleep(1000);
} else { /* Try to do it the PCI power state way */ } else { /* Try to do it the PCI power state way */
/* Quoting from the Open CISS Specification: "The Power /* Quoting from the Open CISS Specification: "The Power
@ -3179,12 +3277,63 @@ static int hpsa_controller_hard_reset(struct pci_dev *pdev,
pmcsr &= ~PCI_PM_CTRL_STATE_MASK; pmcsr &= ~PCI_PM_CTRL_STATE_MASK;
pmcsr |= PCI_D0; pmcsr |= PCI_D0;
pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr); pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr);
msleep(500);
} }
return 0; return 0;
} }
static __devinit void init_driver_version(char *driver_version, int len)
{
memset(driver_version, 0, len);
strncpy(driver_version, "hpsa " HPSA_DRIVER_VERSION, len - 1);
}
static __devinit int write_driver_ver_to_cfgtable(
struct CfgTable __iomem *cfgtable)
{
char *driver_version;
int i, size = sizeof(cfgtable->driver_version);
driver_version = kmalloc(size, GFP_KERNEL);
if (!driver_version)
return -ENOMEM;
init_driver_version(driver_version, size);
for (i = 0; i < size; i++)
writeb(driver_version[i], &cfgtable->driver_version[i]);
kfree(driver_version);
return 0;
}
static __devinit void read_driver_ver_from_cfgtable(
struct CfgTable __iomem *cfgtable, unsigned char *driver_ver)
{
int i;
for (i = 0; i < sizeof(cfgtable->driver_version); i++)
driver_ver[i] = readb(&cfgtable->driver_version[i]);
}
static __devinit int controller_reset_failed(
struct CfgTable __iomem *cfgtable)
{
char *driver_ver, *old_driver_ver;
int rc, size = sizeof(cfgtable->driver_version);
old_driver_ver = kmalloc(2 * size, GFP_KERNEL);
if (!old_driver_ver)
return -ENOMEM;
driver_ver = old_driver_ver + size;
/* After a reset, the 32 bytes of "driver version" in the cfgtable
* should have been changed, otherwise we know the reset failed.
*/
init_driver_version(old_driver_ver, size);
read_driver_ver_from_cfgtable(cfgtable, driver_ver);
rc = !memcmp(driver_ver, old_driver_ver, size);
kfree(old_driver_ver);
return rc;
}
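write_driver_ver_to_cfgtable() stores a driver/version string in the config table before the reset is attempted; controller_reset_failed() then regenerates the same string and compares it with what is read back. If the bytes still match, the firmware never cleared them and the reset is treated as failed. A small user-space model of that round trip (the version text is a placeholder, not the driver's HPSA_DRIVER_VERSION):

    #include <stdio.h>
    #include <string.h>

    #define VER_LEN 32

    /* Models cfgtable->driver_version in controller memory. */
    static char cfgtable_driver_version[VER_LEN];

    static void init_driver_version(char *buf, int len)
    {
        memset(buf, 0, len);
        strncpy(buf, "hpsa <driver version>", len - 1);
    }

    static void write_driver_ver_to_cfgtable(void)
    {
        init_driver_version(cfgtable_driver_version, VER_LEN);
    }

    /* Nonzero when the "config table" bytes still equal a freshly generated
     * version string, i.e. the reset never wiped them. */
    static int controller_reset_failed(void)
    {
        char expected[VER_LEN];

        init_driver_version(expected, VER_LEN);
        return !memcmp(cfgtable_driver_version, expected, VER_LEN);
    }

    int main(void)
    {
        write_driver_ver_to_cfgtable();

        /* A successful reset clears the table; comment this out to see the
         * "reset failed" path. */
        memset(cfgtable_driver_version, 0, VER_LEN);

        printf("reset %s\n", controller_reset_failed() ? "failed" : "succeeded");
        return 0;
    }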
/* This does a hard reset of the controller using PCI power management /* This does a hard reset of the controller using PCI power management
* states or the using the doorbell register. * states or the using the doorbell register.
*/ */
@ -3195,10 +3344,10 @@ static __devinit int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
u64 cfg_base_addr_index; u64 cfg_base_addr_index;
void __iomem *vaddr; void __iomem *vaddr;
unsigned long paddr; unsigned long paddr;
u32 misc_fw_support, active_transport; u32 misc_fw_support;
int rc; int rc;
struct CfgTable __iomem *cfgtable; struct CfgTable __iomem *cfgtable;
bool use_doorbell; u32 use_doorbell;
u32 board_id; u32 board_id;
u16 command_register; u16 command_register;
@ -3215,20 +3364,15 @@ static __devinit int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
* using the doorbell register. * using the doorbell register.
*/ */
/* Exclude 640x boards. These are two pci devices in one slot
* which share a battery backed cache module. One controls the
* cache, the other accesses the cache through the one that controls
* it. If we reset the one controlling the cache, the other will
* likely not be happy. Just forbid resetting this conjoined mess.
* The 640x isn't really supported by hpsa anyway.
*/
rc = hpsa_lookup_board_id(pdev, &board_id); rc = hpsa_lookup_board_id(pdev, &board_id);
if (rc < 0) { if (rc < 0 || !ctlr_is_resettable(board_id)) {
dev_warn(&pdev->dev, "Not resetting device.\n"); dev_warn(&pdev->dev, "Not resetting device.\n");
return -ENODEV; return -ENODEV;
} }
if (board_id == 0x409C0E11 || board_id == 0x409D0E11)
return -ENOTSUPP; /* if controller is soft- but not hard resettable... */
if (!ctlr_is_hard_resettable(board_id))
return -ENOTSUPP; /* try soft reset later. */
/* Save the PCI command register */ /* Save the PCI command register */
pci_read_config_word(pdev, 4, &command_register); pci_read_config_word(pdev, 4, &command_register);
@ -3257,10 +3401,28 @@ static __devinit int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
rc = -ENOMEM; rc = -ENOMEM;
goto unmap_vaddr; goto unmap_vaddr;
} }
rc = write_driver_ver_to_cfgtable(cfgtable);
if (rc)
goto unmap_vaddr;
/* If reset via doorbell register is supported, use that. */ /* If reset via doorbell register is supported, use that.
* There are two such methods. Favor the newest method.
*/
misc_fw_support = readl(&cfgtable->misc_fw_support); misc_fw_support = readl(&cfgtable->misc_fw_support);
use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET; use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET2;
if (use_doorbell) {
use_doorbell = DOORBELL_CTLR_RESET2;
} else {
use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET;
if (use_doorbell) {
dev_warn(&pdev->dev, "Controller claims that "
"'Bit 2 doorbell reset' is "
"supported, but not 'bit 5 doorbell reset'. "
"Firmware update is recommended.\n");
rc = -ENOTSUPP; /* try soft reset */
goto unmap_cfgtable;
}
}
rc = hpsa_controller_hard_reset(pdev, vaddr, use_doorbell); rc = hpsa_controller_hard_reset(pdev, vaddr, use_doorbell);
if (rc) if (rc)
@ -3279,30 +3441,32 @@ static __devinit int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
msleep(HPSA_POST_RESET_PAUSE_MSECS); msleep(HPSA_POST_RESET_PAUSE_MSECS);
/* Wait for board to become not ready, then ready. */ /* Wait for board to become not ready, then ready. */
dev_info(&pdev->dev, "Waiting for board to become ready.\n"); dev_info(&pdev->dev, "Waiting for board to reset.\n");
rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_NOT_READY); rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_NOT_READY);
if (rc) if (rc) {
dev_warn(&pdev->dev, dev_warn(&pdev->dev,
"failed waiting for board to become not ready\n"); "failed waiting for board to reset."
" Will try soft reset.\n");
rc = -ENOTSUPP; /* Not expected, but try soft reset later */
goto unmap_cfgtable;
}
rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_READY); rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_READY);
if (rc) { if (rc) {
dev_warn(&pdev->dev, dev_warn(&pdev->dev,
"failed waiting for board to become ready\n"); "failed waiting for board to become ready "
"after hard reset\n");
goto unmap_cfgtable; goto unmap_cfgtable;
} }
dev_info(&pdev->dev, "board ready.\n");
/* Controller should be in simple mode at this point. If it's not, rc = controller_reset_failed(vaddr);
* It means we're on one of those controllers which doesn't support if (rc < 0)
* the doorbell reset method and on which the PCI power management reset goto unmap_cfgtable;
* method doesn't work (P800, for example.) if (rc) {
* In those cases, don't try to proceed, as it generally doesn't work. dev_warn(&pdev->dev, "Unable to successfully reset "
*/ "controller. Will try soft reset.\n");
active_transport = readl(&cfgtable->TransportActive); rc = -ENOTSUPP;
if (active_transport & PERFORMANT_MODE) { } else {
dev_warn(&pdev->dev, "Unable to successfully reset controller," dev_info(&pdev->dev, "board ready after hard reset.\n");
" Ignoring controller.\n");
rc = -ENODEV;
} }
unmap_cfgtable: unmap_cfgtable:
@ -3543,6 +3707,9 @@ static int __devinit hpsa_find_cfgtables(struct ctlr_info *h)
cfg_base_addr_index) + cfg_offset, sizeof(*h->cfgtable)); cfg_base_addr_index) + cfg_offset, sizeof(*h->cfgtable));
if (!h->cfgtable) if (!h->cfgtable)
return -ENOMEM; return -ENOMEM;
rc = write_driver_ver_to_cfgtable(h->cfgtable);
if (rc)
return rc;
/* Find performant mode table. */ /* Find performant mode table. */
trans_offset = readl(&h->cfgtable->TransMethodOffset); trans_offset = readl(&h->cfgtable->TransMethodOffset);
h->transtable = remap_pci_mem(pci_resource_start(h->pdev, h->transtable = remap_pci_mem(pci_resource_start(h->pdev,
@ -3777,11 +3944,12 @@ static __devinit int hpsa_init_reset_devices(struct pci_dev *pdev)
* due to concerns about shared bbwc between 6402/6404 pair. * due to concerns about shared bbwc between 6402/6404 pair.
*/ */
if (rc == -ENOTSUPP) if (rc == -ENOTSUPP)
return 0; /* just try to do the kdump anyhow. */ return rc; /* just try to do the kdump anyhow. */
if (rc) if (rc)
return -ENODEV; return -ENODEV;
/* Now try to get the controller to respond to a no-op */ /* Now try to get the controller to respond to a no-op */
dev_warn(&pdev->dev, "Waiting for controller to respond to no-op\n");
for (i = 0; i < HPSA_POST_RESET_NOOP_RETRIES; i++) { for (i = 0; i < HPSA_POST_RESET_NOOP_RETRIES; i++) {
if (hpsa_noop(pdev) == 0) if (hpsa_noop(pdev) == 0)
break; break;
@ -3792,18 +3960,133 @@ static __devinit int hpsa_init_reset_devices(struct pci_dev *pdev)
return 0; return 0;
} }
static __devinit int hpsa_allocate_cmd_pool(struct ctlr_info *h)
{
h->cmd_pool_bits = kzalloc(
DIV_ROUND_UP(h->nr_cmds, BITS_PER_LONG) *
sizeof(unsigned long), GFP_KERNEL);
h->cmd_pool = pci_alloc_consistent(h->pdev,
h->nr_cmds * sizeof(*h->cmd_pool),
&(h->cmd_pool_dhandle));
h->errinfo_pool = pci_alloc_consistent(h->pdev,
h->nr_cmds * sizeof(*h->errinfo_pool),
&(h->errinfo_pool_dhandle));
if ((h->cmd_pool_bits == NULL)
|| (h->cmd_pool == NULL)
|| (h->errinfo_pool == NULL)) {
dev_err(&h->pdev->dev, "out of memory in %s", __func__);
return -ENOMEM;
}
return 0;
}
static void hpsa_free_cmd_pool(struct ctlr_info *h)
{
kfree(h->cmd_pool_bits);
if (h->cmd_pool)
pci_free_consistent(h->pdev,
h->nr_cmds * sizeof(struct CommandList),
h->cmd_pool, h->cmd_pool_dhandle);
if (h->errinfo_pool)
pci_free_consistent(h->pdev,
h->nr_cmds * sizeof(struct ErrorInfo),
h->errinfo_pool,
h->errinfo_pool_dhandle);
}
static int hpsa_request_irq(struct ctlr_info *h,
irqreturn_t (*msixhandler)(int, void *),
irqreturn_t (*intxhandler)(int, void *))
{
int rc;
if (h->msix_vector || h->msi_vector)
rc = request_irq(h->intr[h->intr_mode], msixhandler,
IRQF_DISABLED, h->devname, h);
else
rc = request_irq(h->intr[h->intr_mode], intxhandler,
IRQF_DISABLED, h->devname, h);
if (rc) {
dev_err(&h->pdev->dev, "unable to get irq %d for %s\n",
h->intr[h->intr_mode], h->devname);
return -ENODEV;
}
return 0;
}
static int __devinit hpsa_kdump_soft_reset(struct ctlr_info *h)
{
if (hpsa_send_host_reset(h, RAID_CTLR_LUNID,
HPSA_RESET_TYPE_CONTROLLER)) {
dev_warn(&h->pdev->dev, "Resetting array controller failed.\n");
return -EIO;
}
dev_info(&h->pdev->dev, "Waiting for board to soft reset.\n");
if (hpsa_wait_for_board_state(h->pdev, h->vaddr, BOARD_NOT_READY)) {
dev_warn(&h->pdev->dev, "Soft reset had no effect.\n");
return -1;
}
dev_info(&h->pdev->dev, "Board reset, awaiting READY status.\n");
if (hpsa_wait_for_board_state(h->pdev, h->vaddr, BOARD_READY)) {
dev_warn(&h->pdev->dev, "Board failed to become ready "
"after soft reset.\n");
return -1;
}
return 0;
}
static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
{
free_irq(h->intr[h->intr_mode], h);
#ifdef CONFIG_PCI_MSI
if (h->msix_vector)
pci_disable_msix(h->pdev);
else if (h->msi_vector)
pci_disable_msi(h->pdev);
#endif /* CONFIG_PCI_MSI */
hpsa_free_sg_chain_blocks(h);
hpsa_free_cmd_pool(h);
kfree(h->blockFetchTable);
pci_free_consistent(h->pdev, h->reply_pool_size,
h->reply_pool, h->reply_pool_dhandle);
if (h->vaddr)
iounmap(h->vaddr);
if (h->transtable)
iounmap(h->transtable);
if (h->cfgtable)
iounmap(h->cfgtable);
pci_release_regions(h->pdev);
kfree(h);
}
static int __devinit hpsa_init_one(struct pci_dev *pdev, static int __devinit hpsa_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent) const struct pci_device_id *ent)
{ {
int dac, rc; int dac, rc;
struct ctlr_info *h; struct ctlr_info *h;
int try_soft_reset = 0;
unsigned long flags;
if (number_of_controllers == 0) if (number_of_controllers == 0)
printk(KERN_INFO DRIVER_NAME "\n"); printk(KERN_INFO DRIVER_NAME "\n");
rc = hpsa_init_reset_devices(pdev); rc = hpsa_init_reset_devices(pdev);
if (rc) if (rc) {
return rc; if (rc != -ENOTSUPP)
return rc;
/* If the reset fails in a particular way (it has no way to do
* a proper hard reset, so returns -ENOTSUPP) we can try to do
* a soft reset once we get the controller configured up to the
* point that it can accept a command.
*/
try_soft_reset = 1;
rc = 0;
}
reinit_after_soft_reset:
/* Command structures must be aligned on a 32-byte boundary because /* Command structures must be aligned on a 32-byte boundary because
* the 5 lower bits of the address are used by the hardware. and by * the 5 lower bits of the address are used by the hardware. and by
@ -3847,54 +4130,82 @@ static int __devinit hpsa_init_one(struct pci_dev *pdev,
/* make sure the board interrupts are off */ /* make sure the board interrupts are off */
h->access.set_intr_mask(h, HPSA_INTR_OFF); h->access.set_intr_mask(h, HPSA_INTR_OFF);
if (h->msix_vector || h->msi_vector) if (hpsa_request_irq(h, do_hpsa_intr_msi, do_hpsa_intr_intx))
rc = request_irq(h->intr[h->intr_mode], do_hpsa_intr_msi,
IRQF_DISABLED, h->devname, h);
else
rc = request_irq(h->intr[h->intr_mode], do_hpsa_intr_intx,
IRQF_DISABLED, h->devname, h);
if (rc) {
dev_err(&pdev->dev, "unable to get irq %d for %s\n",
h->intr[h->intr_mode], h->devname);
goto clean2; goto clean2;
}
dev_info(&pdev->dev, "%s: <0x%x> at IRQ %d%s using DAC\n", dev_info(&pdev->dev, "%s: <0x%x> at IRQ %d%s using DAC\n",
h->devname, pdev->device, h->devname, pdev->device,
h->intr[h->intr_mode], dac ? "" : " not"); h->intr[h->intr_mode], dac ? "" : " not");
if (hpsa_allocate_cmd_pool(h))
h->cmd_pool_bits =
kmalloc(((h->nr_cmds + BITS_PER_LONG -
1) / BITS_PER_LONG) * sizeof(unsigned long), GFP_KERNEL);
h->cmd_pool = pci_alloc_consistent(h->pdev,
h->nr_cmds * sizeof(*h->cmd_pool),
&(h->cmd_pool_dhandle));
h->errinfo_pool = pci_alloc_consistent(h->pdev,
h->nr_cmds * sizeof(*h->errinfo_pool),
&(h->errinfo_pool_dhandle));
if ((h->cmd_pool_bits == NULL)
|| (h->cmd_pool == NULL)
|| (h->errinfo_pool == NULL)) {
dev_err(&pdev->dev, "out of memory");
rc = -ENOMEM;
goto clean4; goto clean4;
}
if (hpsa_allocate_sg_chain_blocks(h)) if (hpsa_allocate_sg_chain_blocks(h))
goto clean4; goto clean4;
init_waitqueue_head(&h->scan_wait_queue); init_waitqueue_head(&h->scan_wait_queue);
h->scan_finished = 1; /* no scan currently in progress */ h->scan_finished = 1; /* no scan currently in progress */
pci_set_drvdata(pdev, h); pci_set_drvdata(pdev, h);
memset(h->cmd_pool_bits, 0, h->ndevices = 0;
((h->nr_cmds + BITS_PER_LONG - h->scsi_host = NULL;
1) / BITS_PER_LONG) * sizeof(unsigned long)); spin_lock_init(&h->devlock);
hpsa_put_ctlr_into_performant_mode(h);
hpsa_scsi_setup(h); /* At this point, the controller is ready to take commands.
* Now, if reset_devices and the hard reset didn't work, try
* the soft reset and see if that works.
*/
if (try_soft_reset) {
/* This is kind of gross. We may or may not get a completion
* from the soft reset command, and if we do, then the value
* from the fifo may or may not be valid. So, we wait 10 secs
* after the reset throwing away any completions we get during
* that time. Unregister the interrupt handler and register
* fake ones to scoop up any residual completions.
*/
spin_lock_irqsave(&h->lock, flags);
h->access.set_intr_mask(h, HPSA_INTR_OFF);
spin_unlock_irqrestore(&h->lock, flags);
free_irq(h->intr[h->intr_mode], h);
rc = hpsa_request_irq(h, hpsa_msix_discard_completions,
hpsa_intx_discard_completions);
if (rc) {
dev_warn(&h->pdev->dev, "Failed to request_irq after "
"soft reset.\n");
goto clean4;
}
rc = hpsa_kdump_soft_reset(h);
if (rc)
/* Neither hard nor soft reset worked, we're hosed. */
goto clean4;
dev_info(&h->pdev->dev, "Board READY.\n");
dev_info(&h->pdev->dev,
"Waiting for stale completions to drain.\n");
h->access.set_intr_mask(h, HPSA_INTR_ON);
msleep(10000);
h->access.set_intr_mask(h, HPSA_INTR_OFF);
rc = controller_reset_failed(h->cfgtable);
if (rc)
dev_info(&h->pdev->dev,
"Soft reset appears to have failed.\n");
/* since the controller's reset, we have to go back and re-init
* everything. Easiest to just forget what we've done and do it
* all over again.
*/
hpsa_undo_allocations_after_kdump_soft_reset(h);
try_soft_reset = 0;
if (rc)
/* don't go to clean4, we already unallocated */
return -ENODEV;
goto reinit_after_soft_reset;
}
/* Turn the interrupts on so we can service requests */ /* Turn the interrupts on so we can service requests */
h->access.set_intr_mask(h, HPSA_INTR_ON); h->access.set_intr_mask(h, HPSA_INTR_ON);
hpsa_put_ctlr_into_performant_mode(h);
hpsa_hba_inquiry(h); hpsa_hba_inquiry(h);
hpsa_register_scsi(h); /* hook ourselves into SCSI subsystem */ hpsa_register_scsi(h); /* hook ourselves into SCSI subsystem */
h->busy_initializing = 0; h->busy_initializing = 0;
@ -3902,16 +4213,7 @@ static int __devinit hpsa_init_one(struct pci_dev *pdev,
clean4: clean4:
hpsa_free_sg_chain_blocks(h); hpsa_free_sg_chain_blocks(h);
kfree(h->cmd_pool_bits); hpsa_free_cmd_pool(h);
if (h->cmd_pool)
pci_free_consistent(h->pdev,
h->nr_cmds * sizeof(struct CommandList),
h->cmd_pool, h->cmd_pool_dhandle);
if (h->errinfo_pool)
pci_free_consistent(h->pdev,
h->nr_cmds * sizeof(struct ErrorInfo),
h->errinfo_pool,
h->errinfo_pool_dhandle);
free_irq(h->intr[h->intr_mode], h); free_irq(h->intr[h->intr_mode], h);
clean2: clean2:
clean1: clean1:
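Taken together, the hpsa_init_one() changes above turn -ENOTSUPP from the kdump hard reset into "bring the controller up just far enough to accept a command, soft reset it, drain stale completions, check the driver-version marker, throw everything away and start over". A heavily condensed user-space model of that control flow, with every hardware step stubbed out (all function names and return values below are illustrative only):

    #include <stdio.h>

    #define ENOTSUPP 524

    /* Stubs standing in for the real hardware paths. */
    static int hard_reset(void)   { return -ENOTSUPP; } /* doorbell reset unsupported */
    static int bring_up(void)     { return 0; }
    static int soft_reset(void)   { return 0; }
    static int reset_failed(void) { return 0; }
    static void tear_down(void)   { }

    static int init_one(void)
    {
        int rc = hard_reset();
        int try_soft_reset = 0;

        if (rc) {
            if (rc != -ENOTSUPP)
                return rc;
            try_soft_reset = 1;    /* fall back once commands can be accepted */
        }

    reinit_after_soft_reset:
        if (bring_up())
            return -1;

        if (try_soft_reset) {
            if (soft_reset() || reset_failed()) {
                tear_down();
                return -1;         /* neither hard nor soft reset worked */
            }
            tear_down();           /* controller was reset: redo everything */
            try_soft_reset = 0;
            goto reinit_after_soft_reset;
        }

        printf("controller initialized\n");
        return 0;
    }

    int main(void)
    {
        return init_one() ? 1 : 0;
    }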

View File

@ -127,10 +127,12 @@ struct ctlr_info {
}; };
#define HPSA_ABORT_MSG 0 #define HPSA_ABORT_MSG 0
#define HPSA_DEVICE_RESET_MSG 1 #define HPSA_DEVICE_RESET_MSG 1
#define HPSA_BUS_RESET_MSG 2 #define HPSA_RESET_TYPE_CONTROLLER 0x00
#define HPSA_HOST_RESET_MSG 3 #define HPSA_RESET_TYPE_BUS 0x01
#define HPSA_RESET_TYPE_TARGET 0x03
#define HPSA_RESET_TYPE_LUN 0x04
#define HPSA_MSG_SEND_RETRY_LIMIT 10 #define HPSA_MSG_SEND_RETRY_LIMIT 10
#define HPSA_MSG_SEND_RETRY_INTERVAL_MSECS 1000 #define HPSA_MSG_SEND_RETRY_INTERVAL_MSECS (10000)
/* Maximum time in seconds driver will wait for command completions /* Maximum time in seconds driver will wait for command completions
* when polling before giving up. * when polling before giving up.
@ -155,7 +157,7 @@ struct ctlr_info {
* HPSA_BOARD_READY_ITERATIONS are derived from those. * HPSA_BOARD_READY_ITERATIONS are derived from those.
*/ */
#define HPSA_BOARD_READY_WAIT_SECS (120) #define HPSA_BOARD_READY_WAIT_SECS (120)
#define HPSA_BOARD_NOT_READY_WAIT_SECS (10) #define HPSA_BOARD_NOT_READY_WAIT_SECS (100)
#define HPSA_BOARD_READY_POLL_INTERVAL_MSECS (100) #define HPSA_BOARD_READY_POLL_INTERVAL_MSECS (100)
#define HPSA_BOARD_READY_POLL_INTERVAL \ #define HPSA_BOARD_READY_POLL_INTERVAL \
((HPSA_BOARD_READY_POLL_INTERVAL_MSECS * HZ) / 1000) ((HPSA_BOARD_READY_POLL_INTERVAL_MSECS * HZ) / 1000)
@ -212,6 +214,7 @@ static void SA5_submit_command(struct ctlr_info *h,
dev_dbg(&h->pdev->dev, "Sending %x, tag = %x\n", c->busaddr, dev_dbg(&h->pdev->dev, "Sending %x, tag = %x\n", c->busaddr,
c->Header.Tag.lower); c->Header.Tag.lower);
writel(c->busaddr, h->vaddr + SA5_REQUEST_PORT_OFFSET); writel(c->busaddr, h->vaddr + SA5_REQUEST_PORT_OFFSET);
(void) readl(h->vaddr + SA5_REQUEST_PORT_OFFSET);
h->commands_outstanding++; h->commands_outstanding++;
if (h->commands_outstanding > h->max_outstanding) if (h->commands_outstanding > h->max_outstanding)
h->max_outstanding = h->commands_outstanding; h->max_outstanding = h->commands_outstanding;
@ -227,10 +230,12 @@ static void SA5_intr_mask(struct ctlr_info *h, unsigned long val)
if (val) { /* Turn interrupts on */ if (val) { /* Turn interrupts on */
h->interrupts_enabled = 1; h->interrupts_enabled = 1;
writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET); writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
} else { /* Turn them off */ } else { /* Turn them off */
h->interrupts_enabled = 0; h->interrupts_enabled = 0;
writel(SA5_INTR_OFF, writel(SA5_INTR_OFF,
h->vaddr + SA5_REPLY_INTR_MASK_OFFSET); h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
} }
} }
@ -239,10 +244,12 @@ static void SA5_performant_intr_mask(struct ctlr_info *h, unsigned long val)
if (val) { /* turn on interrupts */ if (val) { /* turn on interrupts */
h->interrupts_enabled = 1; h->interrupts_enabled = 1;
writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET); writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
} else { } else {
h->interrupts_enabled = 0; h->interrupts_enabled = 0;
writel(SA5_PERF_INTR_OFF, writel(SA5_PERF_INTR_OFF,
h->vaddr + SA5_REPLY_INTR_MASK_OFFSET); h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
} }
} }
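The three hpsa.h hunks above add a throw-away readl() right after each writel() to the submit and interrupt-mask registers. On PCI, MMIO writes can be posted (buffered), and a read from the same device forces them to complete before the driver moves on. A compile-and-run stand-in for the pattern, with an ordinary variable where the real code has a mapped register (so it only illustrates the ordering idiom, not actual posting behaviour):

    #include <stdio.h>
    #include <stdint.h>

    /* In the driver, the pointer targets a memory-mapped controller register;
     * here a plain variable stands in so the snippet merely compiles and runs. */
    static volatile uint32_t fake_doorbell;

    static void reg_write(volatile uint32_t *reg, uint32_t val)
    {
        *reg = val;      /* writel(): may sit in a PCI posting buffer        */
        (void)*reg;      /* readl(): forces the posted write out to the device */
    }

    int main(void)
    {
        reg_write(&fake_doorbell, 0x1);
        printf("doorbell register now 0x%x\n", (unsigned)fake_doorbell);
        return 0;
    }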

View File

@ -101,6 +101,7 @@
#define CFGTBL_ChangeReq 0x00000001l #define CFGTBL_ChangeReq 0x00000001l
#define CFGTBL_AccCmds 0x00000001l #define CFGTBL_AccCmds 0x00000001l
#define DOORBELL_CTLR_RESET 0x00000004l #define DOORBELL_CTLR_RESET 0x00000004l
#define DOORBELL_CTLR_RESET2 0x00000020l
#define CFGTBL_Trans_Simple 0x00000002l #define CFGTBL_Trans_Simple 0x00000002l
#define CFGTBL_Trans_Performant 0x00000004l #define CFGTBL_Trans_Performant 0x00000004l
@ -256,14 +257,6 @@ struct ErrorInfo {
#define CMD_IOCTL_PEND 0x01 #define CMD_IOCTL_PEND 0x01
#define CMD_SCSI 0x03 #define CMD_SCSI 0x03
/* This structure needs to be divisible by 32 for new
* indexing method and performant mode.
*/
#define PAD32 32
#define PAD64DIFF 0
#define USEEXTRA ((sizeof(void *) - 4)/4)
#define PADSIZE (PAD32 + PAD64DIFF * USEEXTRA)
#define DIRECT_LOOKUP_SHIFT 5 #define DIRECT_LOOKUP_SHIFT 5
#define DIRECT_LOOKUP_BIT 0x10 #define DIRECT_LOOKUP_BIT 0x10
#define DIRECT_LOOKUP_MASK (~((1 << DIRECT_LOOKUP_SHIFT) - 1)) #define DIRECT_LOOKUP_MASK (~((1 << DIRECT_LOOKUP_SHIFT) - 1))
@ -345,6 +338,8 @@ struct CfgTable {
u8 reserved[0x78 - 0x58]; u8 reserved[0x78 - 0x58];
u32 misc_fw_support; /* offset 0x78 */ u32 misc_fw_support; /* offset 0x78 */
#define MISC_FW_DOORBELL_RESET (0x02) #define MISC_FW_DOORBELL_RESET (0x02)
#define MISC_FW_DOORBELL_RESET2 (0x010)
u8 driver_version[32];
}; };
#define NUM_BLOCKFETCH_ENTRIES 8 #define NUM_BLOCKFETCH_ENTRIES 8

View File

@ -1849,8 +1849,7 @@ static void ibmvscsi_do_work(struct ibmvscsi_host_data *hostdata)
rc = ibmvscsi_ops->reset_crq_queue(&hostdata->queue, hostdata); rc = ibmvscsi_ops->reset_crq_queue(&hostdata->queue, hostdata);
if (!rc) if (!rc)
rc = ibmvscsi_ops->send_crq(hostdata, 0xC001000000000000LL, 0); rc = ibmvscsi_ops->send_crq(hostdata, 0xC001000000000000LL, 0);
if (!rc) vio_enable_interrupts(to_vio_dev(hostdata->dev));
rc = vio_enable_interrupts(to_vio_dev(hostdata->dev));
} else if (hostdata->reenable_crq) { } else if (hostdata->reenable_crq) {
smp_rmb(); smp_rmb();
action = "enable"; action = "enable";

View File

@ -343,7 +343,7 @@ static int in2000_queuecommand_lck(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
instance = cmd->device->host; instance = cmd->device->host;
hostdata = (struct IN2000_hostdata *) instance->hostdata; hostdata = (struct IN2000_hostdata *) instance->hostdata;
DB(DB_QUEUE_COMMAND, scmd_printk(KERN_DEBUG, cmd, "Q-%02x-%ld(", cmd->cmnd[0], cmd->serial_number)) DB(DB_QUEUE_COMMAND, scmd_printk(KERN_DEBUG, cmd, "Q-%02x(", cmd->cmnd[0]))
/* Set up a few fields in the Scsi_Cmnd structure for our own use: /* Set up a few fields in the Scsi_Cmnd structure for our own use:
* - host_scribble is the pointer to the next cmd in the input queue * - host_scribble is the pointer to the next cmd in the input queue
@ -427,7 +427,7 @@ static int in2000_queuecommand_lck(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *))
in2000_execute(cmd->device->host); in2000_execute(cmd->device->host);
DB(DB_QUEUE_COMMAND, printk(")Q-%ld ", cmd->serial_number)) DB(DB_QUEUE_COMMAND, printk(")Q "))
return 0; return 0;
} }
@ -705,7 +705,7 @@ static void in2000_execute(struct Scsi_Host *instance)
* to search the input_Q again... * to search the input_Q again...
*/ */
DB(DB_EXECUTE, printk("%s%ld)EX-2 ", (cmd->SCp.phase) ? "d:" : "", cmd->serial_number)) DB(DB_EXECUTE, printk("%s)EX-2 ", (cmd->SCp.phase) ? "d:" : ""))
} }
@ -1149,7 +1149,7 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
case CSR_XFER_DONE | PHS_COMMAND: case CSR_XFER_DONE | PHS_COMMAND:
case CSR_UNEXP | PHS_COMMAND: case CSR_UNEXP | PHS_COMMAND:
case CSR_SRV_REQ | PHS_COMMAND: case CSR_SRV_REQ | PHS_COMMAND:
DB(DB_INTR, printk("CMND-%02x,%ld", cmd->cmnd[0], cmd->serial_number)) DB(DB_INTR, printk("CMND-%02x", cmd->cmnd[0]))
transfer_pio(cmd->cmnd, cmd->cmd_len, DATA_OUT_DIR, hostdata); transfer_pio(cmd->cmnd, cmd->cmd_len, DATA_OUT_DIR, hostdata);
hostdata->state = S_CONNECTED; hostdata->state = S_CONNECTED;
break; break;
@ -1191,7 +1191,7 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
switch (msg) { switch (msg) {
case COMMAND_COMPLETE: case COMMAND_COMPLETE:
DB(DB_INTR, printk("CCMP-%ld", cmd->serial_number)) DB(DB_INTR, printk("CCMP"))
write_3393_cmd(hostdata, WD_CMD_NEGATE_ACK); write_3393_cmd(hostdata, WD_CMD_NEGATE_ACK);
hostdata->state = S_PRE_CMP_DISC; hostdata->state = S_PRE_CMP_DISC;
break; break;
@ -1329,7 +1329,7 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
write_3393(hostdata, WD_SOURCE_ID, SRCID_ER); write_3393(hostdata, WD_SOURCE_ID, SRCID_ER);
if (phs == 0x60) { if (phs == 0x60) {
DB(DB_INTR, printk("SX-DONE-%ld", cmd->serial_number)) DB(DB_INTR, printk("SX-DONE"))
cmd->SCp.Message = COMMAND_COMPLETE; cmd->SCp.Message = COMMAND_COMPLETE;
lun = read_3393(hostdata, WD_TARGET_LUN); lun = read_3393(hostdata, WD_TARGET_LUN);
DB(DB_INTR, printk(":%d.%d", cmd->SCp.Status, lun)) DB(DB_INTR, printk(":%d.%d", cmd->SCp.Status, lun))
@ -1350,7 +1350,7 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
in2000_execute(instance); in2000_execute(instance);
} else { } else {
printk("%02x:%02x:%02x-%ld: Unknown SEL_XFER_DONE phase!!---", asr, sr, phs, cmd->serial_number); printk("%02x:%02x:%02x: Unknown SEL_XFER_DONE phase!!---", asr, sr, phs);
} }
break; break;
@ -1417,7 +1417,7 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
spin_unlock_irqrestore(instance->host_lock, flags); spin_unlock_irqrestore(instance->host_lock, flags);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
DB(DB_INTR, printk("UNEXP_DISC-%ld", cmd->serial_number)) DB(DB_INTR, printk("UNEXP_DISC"))
hostdata->connected = NULL; hostdata->connected = NULL;
hostdata->busy[cmd->device->id] &= ~(1 << cmd->device->lun); hostdata->busy[cmd->device->id] &= ~(1 << cmd->device->lun);
hostdata->state = S_UNCONNECTED; hostdata->state = S_UNCONNECTED;
@ -1442,7 +1442,7 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
*/ */
write_3393(hostdata, WD_SOURCE_ID, SRCID_ER); write_3393(hostdata, WD_SOURCE_ID, SRCID_ER);
DB(DB_INTR, printk("DISC-%ld", cmd->serial_number)) DB(DB_INTR, printk("DISC"))
if (cmd == NULL) { if (cmd == NULL) {
printk(" - Already disconnected! "); printk(" - Already disconnected! ");
hostdata->state = S_UNCONNECTED; hostdata->state = S_UNCONNECTED;
@ -1575,7 +1575,6 @@ static irqreturn_t in2000_intr(int irqnum, void *dev_id)
} else } else
hostdata->state = S_CONNECTED; hostdata->state = S_CONNECTED;
DB(DB_INTR, printk("-%ld", cmd->serial_number))
break; break;
default: default:
@ -1704,7 +1703,7 @@ static int __in2000_abort(Scsi_Cmnd * cmd)
prev->host_scribble = cmd->host_scribble; prev->host_scribble = cmd->host_scribble;
cmd->host_scribble = NULL; cmd->host_scribble = NULL;
cmd->result = DID_ABORT << 16; cmd->result = DID_ABORT << 16;
printk(KERN_WARNING "scsi%d: Abort - removing command %ld from input_Q. ", instance->host_no, cmd->serial_number); printk(KERN_WARNING "scsi%d: Abort - removing command from input_Q. ", instance->host_no);
cmd->scsi_done(cmd); cmd->scsi_done(cmd);
return SUCCESS; return SUCCESS;
} }
@ -1725,7 +1724,7 @@ static int __in2000_abort(Scsi_Cmnd * cmd)
if (hostdata->connected == cmd) { if (hostdata->connected == cmd) {
printk(KERN_WARNING "scsi%d: Aborting connected command %ld - ", instance->host_no, cmd->serial_number); printk(KERN_WARNING "scsi%d: Aborting connected command - ", instance->host_no);
printk("sending wd33c93 ABORT command - "); printk("sending wd33c93 ABORT command - ");
write_3393(hostdata, WD_CONTROL, CTRL_IDI | CTRL_EDI | CTRL_POLLED); write_3393(hostdata, WD_CONTROL, CTRL_IDI | CTRL_EDI | CTRL_POLLED);
@ -2270,7 +2269,7 @@ static int in2000_proc_info(struct Scsi_Host *instance, char *buf, char **start,
strcat(bp, "\nconnected: "); strcat(bp, "\nconnected: ");
if (hd->connected) { if (hd->connected) {
cmd = (Scsi_Cmnd *) hd->connected; cmd = (Scsi_Cmnd *) hd->connected;
sprintf(tbuf, " %ld-%d:%d(%02x)", cmd->serial_number, cmd->device->id, cmd->device->lun, cmd->cmnd[0]); sprintf(tbuf, " %d:%d(%02x)", cmd->device->id, cmd->device->lun, cmd->cmnd[0]);
strcat(bp, tbuf); strcat(bp, tbuf);
} }
} }
@ -2278,7 +2277,7 @@ static int in2000_proc_info(struct Scsi_Host *instance, char *buf, char **start,
strcat(bp, "\ninput_Q: "); strcat(bp, "\ninput_Q: ");
cmd = (Scsi_Cmnd *) hd->input_Q; cmd = (Scsi_Cmnd *) hd->input_Q;
while (cmd) { while (cmd) {
sprintf(tbuf, " %ld-%d:%d(%02x)", cmd->serial_number, cmd->device->id, cmd->device->lun, cmd->cmnd[0]); sprintf(tbuf, " %d:%d(%02x)", cmd->device->id, cmd->device->lun, cmd->cmnd[0]);
strcat(bp, tbuf); strcat(bp, tbuf);
cmd = (Scsi_Cmnd *) cmd->host_scribble; cmd = (Scsi_Cmnd *) cmd->host_scribble;
} }
@ -2287,7 +2286,7 @@ static int in2000_proc_info(struct Scsi_Host *instance, char *buf, char **start,
strcat(bp, "\ndisconnected_Q:"); strcat(bp, "\ndisconnected_Q:");
cmd = (Scsi_Cmnd *) hd->disconnected_Q; cmd = (Scsi_Cmnd *) hd->disconnected_Q;
while (cmd) { while (cmd) {
sprintf(tbuf, " %ld-%d:%d(%02x)", cmd->serial_number, cmd->device->id, cmd->device->lun, cmd->cmnd[0]); sprintf(tbuf, " %d:%d(%02x)", cmd->device->id, cmd->device->lun, cmd->cmnd[0]);
strcat(bp, tbuf); strcat(bp, tbuf);
cmd = (Scsi_Cmnd *) cmd->host_scribble; cmd = (Scsi_Cmnd *) cmd->host_scribble;
} }

View File

@ -60,6 +60,7 @@
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/ioport.h> #include <linux/ioport.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/pci.h> #include <linux/pci.h>
@ -2717,13 +2718,18 @@ static int ipr_sdt_copy(struct ipr_ioa_cfg *ioa_cfg,
unsigned long pci_address, u32 length) unsigned long pci_address, u32 length)
{ {
int bytes_copied = 0; int bytes_copied = 0;
int cur_len, rc, rem_len, rem_page_len; int cur_len, rc, rem_len, rem_page_len, max_dump_size;
__be32 *page; __be32 *page;
unsigned long lock_flags = 0; unsigned long lock_flags = 0;
struct ipr_ioa_dump *ioa_dump = &ioa_cfg->dump->ioa_dump; struct ipr_ioa_dump *ioa_dump = &ioa_cfg->dump->ioa_dump;
if (ioa_cfg->sis64)
max_dump_size = IPR_FMT3_MAX_IOA_DUMP_SIZE;
else
max_dump_size = IPR_FMT2_MAX_IOA_DUMP_SIZE;
while (bytes_copied < length && while (bytes_copied < length &&
(ioa_dump->hdr.len + bytes_copied) < IPR_MAX_IOA_DUMP_SIZE) { (ioa_dump->hdr.len + bytes_copied) < max_dump_size) {
if (ioa_dump->page_offset >= PAGE_SIZE || if (ioa_dump->page_offset >= PAGE_SIZE ||
ioa_dump->page_offset == 0) { ioa_dump->page_offset == 0) {
page = (__be32 *)__get_free_page(GFP_ATOMIC); page = (__be32 *)__get_free_page(GFP_ATOMIC);
@ -2885,8 +2891,8 @@ static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
unsigned long lock_flags = 0; unsigned long lock_flags = 0;
struct ipr_driver_dump *driver_dump = &dump->driver_dump; struct ipr_driver_dump *driver_dump = &dump->driver_dump;
struct ipr_ioa_dump *ioa_dump = &dump->ioa_dump; struct ipr_ioa_dump *ioa_dump = &dump->ioa_dump;
u32 num_entries, start_off, end_off; u32 num_entries, max_num_entries, start_off, end_off;
u32 bytes_to_copy, bytes_copied, rc; u32 max_dump_size, bytes_to_copy, bytes_copied, rc;
struct ipr_sdt *sdt; struct ipr_sdt *sdt;
int valid = 1; int valid = 1;
int i; int i;
@ -2947,8 +2953,18 @@ static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
on entries in this table */ on entries in this table */
sdt = &ioa_dump->sdt; sdt = &ioa_dump->sdt;
if (ioa_cfg->sis64) {
max_num_entries = IPR_FMT3_NUM_SDT_ENTRIES;
max_dump_size = IPR_FMT3_MAX_IOA_DUMP_SIZE;
} else {
max_num_entries = IPR_FMT2_NUM_SDT_ENTRIES;
max_dump_size = IPR_FMT2_MAX_IOA_DUMP_SIZE;
}
bytes_to_copy = offsetof(struct ipr_sdt, entry) +
(max_num_entries * sizeof(struct ipr_sdt_entry));
rc = ipr_get_ldump_data_section(ioa_cfg, start_addr, (__be32 *)sdt, rc = ipr_get_ldump_data_section(ioa_cfg, start_addr, (__be32 *)sdt,
sizeof(struct ipr_sdt) / sizeof(__be32)); bytes_to_copy / sizeof(__be32));
/* Smart Dump table is ready to use and the first entry is valid */ /* Smart Dump table is ready to use and the first entry is valid */
if (rc || ((be32_to_cpu(sdt->hdr.state) != IPR_FMT3_SDT_READY_TO_USE) && if (rc || ((be32_to_cpu(sdt->hdr.state) != IPR_FMT3_SDT_READY_TO_USE) &&
@ -2964,13 +2980,20 @@ static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
num_entries = be32_to_cpu(sdt->hdr.num_entries_used); num_entries = be32_to_cpu(sdt->hdr.num_entries_used);
if (num_entries > IPR_NUM_SDT_ENTRIES) if (num_entries > max_num_entries)
num_entries = IPR_NUM_SDT_ENTRIES; num_entries = max_num_entries;
/* Update dump length to the actual data to be copied */
dump->driver_dump.hdr.len += sizeof(struct ipr_sdt_header);
if (ioa_cfg->sis64)
dump->driver_dump.hdr.len += num_entries * sizeof(struct ipr_sdt_entry);
else
dump->driver_dump.hdr.len += max_num_entries * sizeof(struct ipr_sdt_entry);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
for (i = 0; i < num_entries; i++) { for (i = 0; i < num_entries; i++) {
if (ioa_dump->hdr.len > IPR_MAX_IOA_DUMP_SIZE) { if (ioa_dump->hdr.len > max_dump_size) {
driver_dump->hdr.status = IPR_DUMP_STATUS_QUAL_SUCCESS; driver_dump->hdr.status = IPR_DUMP_STATUS_QUAL_SUCCESS;
break; break;
} }
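Rather than always reading a full struct ipr_sdt, the hunks above size the copy as the table header plus only the per-format number of entries, computed with offsetof(), and grow the reported dump length by the same amount. A stand-alone illustration of sizing a header-plus-N-entries copy this way (the struct layout and entry counts are simplified stand-ins, not the driver's real ipr_sdt or IPR_FMT2/FMT3 limits):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    struct sdt_entry {
        uint32_t start_token;
        uint32_t end_token;
    };

    struct sdt {
        uint32_t state;
        uint32_t num_entries_used;
        struct sdt_entry entry[64];   /* worst-case table; per-format max is smaller */
    };

    /* Bytes needed to fetch the header plus @n entries, without dragging in
     * the unused tail of the fixed-size array. */
    static size_t sdt_copy_len(size_t n)
    {
        return offsetof(struct sdt, entry) + n * sizeof(struct sdt_entry);
    }

    int main(void)
    {
        printf("header + 16 entries -> %zu bytes\n", sdt_copy_len(16));
        printf("header + 64 entries -> %zu bytes\n", sdt_copy_len(64));
        printf("full struct         -> %zu bytes\n", sizeof(struct sdt));
        return 0;
    }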
@ -2989,7 +3012,7 @@ static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
valid = 0; valid = 0;
} }
if (valid) { if (valid) {
if (bytes_to_copy > IPR_MAX_IOA_DUMP_SIZE) { if (bytes_to_copy > max_dump_size) {
sdt->entry[i].flags &= ~IPR_SDT_VALID_ENTRY; sdt->entry[i].flags &= ~IPR_SDT_VALID_ENTRY;
continue; continue;
} }
@ -3044,6 +3067,7 @@ static void ipr_release_dump(struct kref *kref)
for (i = 0; i < dump->ioa_dump.next_page_index; i++) for (i = 0; i < dump->ioa_dump.next_page_index; i++)
free_page((unsigned long) dump->ioa_dump.ioa_data[i]); free_page((unsigned long) dump->ioa_dump.ioa_data[i]);
vfree(dump->ioa_dump.ioa_data);
kfree(dump); kfree(dump);
LEAVE; LEAVE;
} }
@ -3835,7 +3859,7 @@ static ssize_t ipr_read_dump(struct file *filp, struct kobject *kobj,
struct ipr_dump *dump; struct ipr_dump *dump;
unsigned long lock_flags = 0; unsigned long lock_flags = 0;
char *src; char *src;
int len; int len, sdt_end;
size_t rc = count; size_t rc = count;
if (!capable(CAP_SYS_ADMIN)) if (!capable(CAP_SYS_ADMIN))
@ -3875,9 +3899,17 @@ static ssize_t ipr_read_dump(struct file *filp, struct kobject *kobj,
off -= sizeof(dump->driver_dump); off -= sizeof(dump->driver_dump);
if (count && off < offsetof(struct ipr_ioa_dump, ioa_data)) { if (ioa_cfg->sis64)
if (off + count > offsetof(struct ipr_ioa_dump, ioa_data)) sdt_end = offsetof(struct ipr_ioa_dump, sdt.entry) +
len = offsetof(struct ipr_ioa_dump, ioa_data) - off; (be32_to_cpu(dump->ioa_dump.sdt.hdr.num_entries_used) *
sizeof(struct ipr_sdt_entry));
else
sdt_end = offsetof(struct ipr_ioa_dump, sdt.entry) +
(IPR_FMT2_NUM_SDT_ENTRIES * sizeof(struct ipr_sdt_entry));
if (count && off < sdt_end) {
if (off + count > sdt_end)
len = sdt_end - off;
else else
len = count; len = count;
src = (u8 *)&dump->ioa_dump + off; src = (u8 *)&dump->ioa_dump + off;
@ -3887,7 +3919,7 @@ static ssize_t ipr_read_dump(struct file *filp, struct kobject *kobj,
count -= len; count -= len;
} }
off -= offsetof(struct ipr_ioa_dump, ioa_data); off -= sdt_end;
while (count) { while (count) {
if ((off & PAGE_MASK) != ((off + count) & PAGE_MASK)) if ((off & PAGE_MASK) != ((off + count) & PAGE_MASK))
@ -3916,6 +3948,7 @@ static ssize_t ipr_read_dump(struct file *filp, struct kobject *kobj,
static int ipr_alloc_dump(struct ipr_ioa_cfg *ioa_cfg) static int ipr_alloc_dump(struct ipr_ioa_cfg *ioa_cfg)
{ {
struct ipr_dump *dump; struct ipr_dump *dump;
__be32 **ioa_data;
unsigned long lock_flags = 0; unsigned long lock_flags = 0;
dump = kzalloc(sizeof(struct ipr_dump), GFP_KERNEL); dump = kzalloc(sizeof(struct ipr_dump), GFP_KERNEL);
@ -3925,6 +3958,19 @@ static int ipr_alloc_dump(struct ipr_ioa_cfg *ioa_cfg)
return -ENOMEM; return -ENOMEM;
} }
if (ioa_cfg->sis64)
ioa_data = vmalloc(IPR_FMT3_MAX_NUM_DUMP_PAGES * sizeof(__be32 *));
else
ioa_data = vmalloc(IPR_FMT2_MAX_NUM_DUMP_PAGES * sizeof(__be32 *));
if (!ioa_data) {
ipr_err("Dump memory allocation failed\n");
kfree(dump);
return -ENOMEM;
}
dump->ioa_dump.ioa_data = ioa_data;
kref_init(&dump->kref); kref_init(&dump->kref);
dump->ioa_cfg = ioa_cfg; dump->ioa_cfg = ioa_cfg;
@ -3932,6 +3978,7 @@ static int ipr_alloc_dump(struct ipr_ioa_cfg *ioa_cfg)
if (INACTIVE != ioa_cfg->sdt_state) { if (INACTIVE != ioa_cfg->sdt_state) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
vfree(dump->ioa_dump.ioa_data);
kfree(dump); kfree(dump);
return 0; return 0;
} }
@ -4953,9 +5000,35 @@ static int ipr_eh_abort(struct scsi_cmnd * scsi_cmd)
* IRQ_NONE / IRQ_HANDLED * IRQ_NONE / IRQ_HANDLED
**/ **/
static irqreturn_t ipr_handle_other_interrupt(struct ipr_ioa_cfg *ioa_cfg, static irqreturn_t ipr_handle_other_interrupt(struct ipr_ioa_cfg *ioa_cfg,
volatile u32 int_reg) u32 int_reg)
{ {
irqreturn_t rc = IRQ_HANDLED; irqreturn_t rc = IRQ_HANDLED;
u32 int_mask_reg;
int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg32);
int_reg &= ~int_mask_reg;
/* If an interrupt on the adapter did not occur, ignore it.
* Or in the case of SIS 64, check for a stage change interrupt.
*/
if ((int_reg & IPR_PCII_OPER_INTERRUPTS) == 0) {
if (ioa_cfg->sis64) {
int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
if (int_reg & IPR_PCII_IPL_STAGE_CHANGE) {
/* clear stage change */
writel(IPR_PCII_IPL_STAGE_CHANGE, ioa_cfg->regs.clr_interrupt_reg);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
list_del(&ioa_cfg->reset_cmd->queue);
del_timer(&ioa_cfg->reset_cmd->timer);
ipr_reset_ioa_job(ioa_cfg->reset_cmd);
return IRQ_HANDLED;
}
}
return IRQ_NONE;
}
if (int_reg & IPR_PCII_IOA_TRANS_TO_OPER) { if (int_reg & IPR_PCII_IOA_TRANS_TO_OPER) {
/* Mask the interrupt */ /* Mask the interrupt */
@ -4968,6 +5041,13 @@ static irqreturn_t ipr_handle_other_interrupt(struct ipr_ioa_cfg *ioa_cfg,
list_del(&ioa_cfg->reset_cmd->queue); list_del(&ioa_cfg->reset_cmd->queue);
del_timer(&ioa_cfg->reset_cmd->timer); del_timer(&ioa_cfg->reset_cmd->timer);
ipr_reset_ioa_job(ioa_cfg->reset_cmd); ipr_reset_ioa_job(ioa_cfg->reset_cmd);
} else if ((int_reg & IPR_PCII_HRRQ_UPDATED) == int_reg) {
if (ipr_debug && printk_ratelimit())
dev_err(&ioa_cfg->pdev->dev,
"Spurious interrupt detected. 0x%08X\n", int_reg);
writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32);
return IRQ_NONE;
} else { } else {
if (int_reg & IPR_PCII_IOA_UNIT_CHECKED) if (int_reg & IPR_PCII_IOA_UNIT_CHECKED)
ioa_cfg->ioa_unit_checked = 1; ioa_cfg->ioa_unit_checked = 1;
@ -5016,10 +5096,11 @@ static irqreturn_t ipr_isr(int irq, void *devp)
{ {
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)devp; struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)devp;
unsigned long lock_flags = 0; unsigned long lock_flags = 0;
volatile u32 int_reg, int_mask_reg; u32 int_reg = 0;
u32 ioasc; u32 ioasc;
u16 cmd_index; u16 cmd_index;
int num_hrrq = 0; int num_hrrq = 0;
int irq_none = 0;
struct ipr_cmnd *ipr_cmd; struct ipr_cmnd *ipr_cmd;
irqreturn_t rc = IRQ_NONE; irqreturn_t rc = IRQ_NONE;
@ -5031,33 +5112,6 @@ static irqreturn_t ipr_isr(int irq, void *devp)
return IRQ_NONE; return IRQ_NONE;
} }
int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32) & ~int_mask_reg;
/* If an interrupt on the adapter did not occur, ignore it.
* Or in the case of SIS 64, check for a stage change interrupt.
*/
if (unlikely((int_reg & IPR_PCII_OPER_INTERRUPTS) == 0)) {
if (ioa_cfg->sis64) {
int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
if (int_reg & IPR_PCII_IPL_STAGE_CHANGE) {
/* clear stage change */
writel(IPR_PCII_IPL_STAGE_CHANGE, ioa_cfg->regs.clr_interrupt_reg);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
list_del(&ioa_cfg->reset_cmd->queue);
del_timer(&ioa_cfg->reset_cmd->timer);
ipr_reset_ioa_job(ioa_cfg->reset_cmd);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return IRQ_HANDLED;
}
}
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return IRQ_NONE;
}
while (1) { while (1) {
ipr_cmd = NULL; ipr_cmd = NULL;
@ -5097,7 +5151,7 @@ static irqreturn_t ipr_isr(int irq, void *devp)
/* Clear the PCI interrupt */ /* Clear the PCI interrupt */
do { do {
writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg32); writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg32);
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32) & ~int_mask_reg; int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32);
} while (int_reg & IPR_PCII_HRRQ_UPDATED && } while (int_reg & IPR_PCII_HRRQ_UPDATED &&
num_hrrq++ < IPR_MAX_HRRQ_RETRIES); num_hrrq++ < IPR_MAX_HRRQ_RETRIES);
@ -5107,6 +5161,9 @@ static irqreturn_t ipr_isr(int irq, void *devp)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
} else if (rc == IRQ_NONE && irq_none == 0) {
int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32);
irq_none++;
} else } else
break; break;
} }
@ -5143,7 +5200,8 @@ static int ipr_build_ioadl64(struct ipr_ioa_cfg *ioa_cfg,
nseg = scsi_dma_map(scsi_cmd); nseg = scsi_dma_map(scsi_cmd);
if (nseg < 0) { if (nseg < 0) {
dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n"); if (printk_ratelimit())
dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
return -1; return -1;
} }
@ -5773,7 +5831,8 @@ static int ipr_queuecommand_lck(struct scsi_cmnd *scsi_cmd,
} }
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC; ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC;
ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_DELAY_AFTER_RST; if (ipr_is_gscsi(res))
ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_DELAY_AFTER_RST;
ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_ALIGNED_BFR; ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_ALIGNED_BFR;
ioarcb->cmd_pkt.flags_lo |= ipr_get_task_attributes(scsi_cmd); ioarcb->cmd_pkt.flags_lo |= ipr_get_task_attributes(scsi_cmd);
} }
@ -7516,7 +7575,7 @@ static int ipr_reset_get_unit_check_job(struct ipr_cmnd *ipr_cmd)
static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd) static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
{ {
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg; struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
volatile u32 int_reg; u32 int_reg;
ENTER; ENTER;
ioa_cfg->pdev->state_saved = true; ioa_cfg->pdev->state_saved = true;
@ -7555,7 +7614,10 @@ static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
ipr_cmd->job_step = ipr_reset_enable_ioa; ipr_cmd->job_step = ipr_reset_enable_ioa;
if (GET_DUMP == ioa_cfg->sdt_state) { if (GET_DUMP == ioa_cfg->sdt_state) {
ipr_reset_start_timer(ipr_cmd, IPR_DUMP_TIMEOUT); if (ioa_cfg->sis64)
ipr_reset_start_timer(ipr_cmd, IPR_SIS64_DUMP_TIMEOUT);
else
ipr_reset_start_timer(ipr_cmd, IPR_SIS32_DUMP_TIMEOUT);
ipr_cmd->job_step = ipr_reset_wait_for_dump; ipr_cmd->job_step = ipr_reset_wait_for_dump;
schedule_work(&ioa_cfg->work_q); schedule_work(&ioa_cfg->work_q);
return IPR_RC_JOB_RETURN; return IPR_RC_JOB_RETURN;

View File

@ -38,8 +38,8 @@
/* /*
* Literals * Literals
*/ */
#define IPR_DRIVER_VERSION "2.5.1" #define IPR_DRIVER_VERSION "2.5.2"
#define IPR_DRIVER_DATE "(August 10, 2010)" #define IPR_DRIVER_DATE "(April 27, 2011)"
/* /*
* IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
@ -217,7 +217,8 @@
#define IPR_CHECK_FOR_RESET_TIMEOUT (HZ / 10) #define IPR_CHECK_FOR_RESET_TIMEOUT (HZ / 10)
#define IPR_WAIT_FOR_BIST_TIMEOUT (2 * HZ) #define IPR_WAIT_FOR_BIST_TIMEOUT (2 * HZ)
#define IPR_PCI_RESET_TIMEOUT (HZ / 2) #define IPR_PCI_RESET_TIMEOUT (HZ / 2)
#define IPR_DUMP_TIMEOUT (15 * HZ) #define IPR_SIS32_DUMP_TIMEOUT (15 * HZ)
#define IPR_SIS64_DUMP_TIMEOUT (40 * HZ)
#define IPR_DUMP_DELAY_SECONDS 4 #define IPR_DUMP_DELAY_SECONDS 4
#define IPR_DUMP_DELAY_TIMEOUT (IPR_DUMP_DELAY_SECONDS * HZ) #define IPR_DUMP_DELAY_TIMEOUT (IPR_DUMP_DELAY_SECONDS * HZ)
@ -285,9 +286,12 @@ IPR_PCII_NO_HOST_RRQ | IPR_PCII_IOARRIN_LOST | IPR_PCII_MMIO_ERROR)
/* /*
* Dump literals * Dump literals
*/ */
#define IPR_MAX_IOA_DUMP_SIZE (4 * 1024 * 1024) #define IPR_FMT2_MAX_IOA_DUMP_SIZE (4 * 1024 * 1024)
#define IPR_NUM_SDT_ENTRIES 511 #define IPR_FMT3_MAX_IOA_DUMP_SIZE (32 * 1024 * 1024)
#define IPR_MAX_NUM_DUMP_PAGES ((IPR_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1) #define IPR_FMT2_NUM_SDT_ENTRIES 511
#define IPR_FMT3_NUM_SDT_ENTRIES 0xFFF
#define IPR_FMT2_MAX_NUM_DUMP_PAGES ((IPR_FMT2_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)
#define IPR_FMT3_MAX_NUM_DUMP_PAGES ((IPR_FMT3_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)
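For context (not part of the patch), a quick sketch of what the new format 2 vs. format 3 dump-page counts work out to, assuming 4 KiB pages and 64-bit pointers; the much larger format 3 pointer array is why ipr_alloc_dump() above now vmalloc()s ioa_data instead of relying on a fixed-size array embedded in struct ipr_ioa_dump.

/* Illustration only -- not part of the patch.  PAGE_SIZE and the
 * pointer width are assumptions (4 KiB pages, 64-bit pointers). */
#include <stdio.h>

#define PAGE_SIZE			4096UL
#define IPR_FMT2_MAX_IOA_DUMP_SIZE	(4UL * 1024 * 1024)
#define IPR_FMT3_MAX_IOA_DUMP_SIZE	(32UL * 1024 * 1024)
#define IPR_FMT2_MAX_NUM_DUMP_PAGES	((IPR_FMT2_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)
#define IPR_FMT3_MAX_NUM_DUMP_PAGES	((IPR_FMT3_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)

int main(void)
{
	/* fmt2: 1025 pages -> ~8 KB of pointers; fmt3: 8193 pages -> ~64 KB */
	printf("fmt2: %lu pages, %lu bytes of page pointers\n",
	       IPR_FMT2_MAX_NUM_DUMP_PAGES,
	       (unsigned long)(IPR_FMT2_MAX_NUM_DUMP_PAGES * sizeof(void *)));
	printf("fmt3: %lu pages, %lu bytes of page pointers\n",
	       IPR_FMT3_MAX_NUM_DUMP_PAGES,
	       (unsigned long)(IPR_FMT3_MAX_NUM_DUMP_PAGES * sizeof(void *)));
	return 0;
}
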
/* /*
* Misc literals * Misc literals
@ -474,7 +478,7 @@ struct ipr_cmd_pkt {
u8 flags_lo; u8 flags_lo;
#define IPR_FLAGS_LO_ALIGNED_BFR 0x20 #define IPR_FLAGS_LO_ALIGNED_BFR 0x20
#define IPR_FLAGS_LO_DELAY_AFTER_RST 0x10 #define IPR_FLAGS_LO_DELAY_AFTER_RST 0x10
#define IPR_FLAGS_LO_UNTAGGED_TASK 0x00 #define IPR_FLAGS_LO_UNTAGGED_TASK 0x00
#define IPR_FLAGS_LO_SIMPLE_TASK 0x02 #define IPR_FLAGS_LO_SIMPLE_TASK 0x02
#define IPR_FLAGS_LO_ORDERED_TASK 0x04 #define IPR_FLAGS_LO_ORDERED_TASK 0x04
@ -1164,7 +1168,7 @@ struct ipr_sdt_header {
struct ipr_sdt { struct ipr_sdt {
struct ipr_sdt_header hdr; struct ipr_sdt_header hdr;
struct ipr_sdt_entry entry[IPR_NUM_SDT_ENTRIES]; struct ipr_sdt_entry entry[IPR_FMT3_NUM_SDT_ENTRIES];
}__attribute__((packed, aligned (4))); }__attribute__((packed, aligned (4)));
struct ipr_uc_sdt { struct ipr_uc_sdt {
@ -1608,7 +1612,7 @@ struct ipr_driver_dump {
struct ipr_ioa_dump { struct ipr_ioa_dump {
struct ipr_dump_entry_header hdr; struct ipr_dump_entry_header hdr;
struct ipr_sdt sdt; struct ipr_sdt sdt;
__be32 *ioa_data[IPR_MAX_NUM_DUMP_PAGES]; __be32 **ioa_data;
u32 reserved; u32 reserved;
u32 next_page_index; u32 next_page_index;
u32 page_offset; u32 page_offset;

View File

@ -57,9 +57,6 @@ static struct kmem_cache *scsi_pkt_cachep;
#define FC_SRB_READ (1 << 1) #define FC_SRB_READ (1 << 1)
#define FC_SRB_WRITE (1 << 0) #define FC_SRB_WRITE (1 << 0)
/* constant added to e_d_tov timeout to get rec_tov value */
#define REC_TOV_CONST 1
/* /*
* The SCp.ptr should be tested and set under the scsi_pkt_queue lock * The SCp.ptr should be tested and set under the scsi_pkt_queue lock
*/ */
@ -248,7 +245,7 @@ static inline void fc_fcp_unlock_pkt(struct fc_fcp_pkt *fsp)
/** /**
* fc_fcp_timer_set() - Start a timer for a fcp_pkt * fc_fcp_timer_set() - Start a timer for a fcp_pkt
* @fsp: The FCP packet to start a timer for * @fsp: The FCP packet to start a timer for
* @delay: The timeout period for the timer * @delay: The timeout period in jiffies
*/ */
static void fc_fcp_timer_set(struct fc_fcp_pkt *fsp, unsigned long delay) static void fc_fcp_timer_set(struct fc_fcp_pkt *fsp, unsigned long delay)
{ {
@ -335,22 +332,23 @@ static void fc_fcp_ddp_done(struct fc_fcp_pkt *fsp)
/** /**
* fc_fcp_can_queue_ramp_up() - increases can_queue * fc_fcp_can_queue_ramp_up() - increases can_queue
* @lport: lport to ramp up can_queue * @lport: lport to ramp up can_queue
*
* Locking notes: Called with Scsi_Host lock held
*/ */
static void fc_fcp_can_queue_ramp_up(struct fc_lport *lport) static void fc_fcp_can_queue_ramp_up(struct fc_lport *lport)
{ {
struct fc_fcp_internal *si = fc_get_scsi_internal(lport); struct fc_fcp_internal *si = fc_get_scsi_internal(lport);
unsigned long flags;
int can_queue; int can_queue;
spin_lock_irqsave(lport->host->host_lock, flags);
if (si->last_can_queue_ramp_up_time && if (si->last_can_queue_ramp_up_time &&
(time_before(jiffies, si->last_can_queue_ramp_up_time + (time_before(jiffies, si->last_can_queue_ramp_up_time +
FC_CAN_QUEUE_PERIOD))) FC_CAN_QUEUE_PERIOD)))
return; goto unlock;
if (time_before(jiffies, si->last_can_queue_ramp_down_time + if (time_before(jiffies, si->last_can_queue_ramp_down_time +
FC_CAN_QUEUE_PERIOD)) FC_CAN_QUEUE_PERIOD))
return; goto unlock;
si->last_can_queue_ramp_up_time = jiffies; si->last_can_queue_ramp_up_time = jiffies;
@ -362,6 +360,9 @@ static void fc_fcp_can_queue_ramp_up(struct fc_lport *lport)
lport->host->can_queue = can_queue; lport->host->can_queue = can_queue;
shost_printk(KERN_ERR, lport->host, "libfc: increased " shost_printk(KERN_ERR, lport->host, "libfc: increased "
"can_queue to %d.\n", can_queue); "can_queue to %d.\n", can_queue);
unlock:
spin_unlock_irqrestore(lport->host->host_lock, flags);
} }
/** /**
@ -373,18 +374,19 @@ static void fc_fcp_can_queue_ramp_up(struct fc_lport *lport)
* commands complete or timeout, then try again with a reduced * commands complete or timeout, then try again with a reduced
* can_queue. Eventually we will hit the point where we run * can_queue. Eventually we will hit the point where we run
* on all reserved structs. * on all reserved structs.
*
* Locking notes: Called with Scsi_Host lock held
*/ */
static void fc_fcp_can_queue_ramp_down(struct fc_lport *lport) static void fc_fcp_can_queue_ramp_down(struct fc_lport *lport)
{ {
struct fc_fcp_internal *si = fc_get_scsi_internal(lport); struct fc_fcp_internal *si = fc_get_scsi_internal(lport);
unsigned long flags;
int can_queue; int can_queue;
spin_lock_irqsave(lport->host->host_lock, flags);
if (si->last_can_queue_ramp_down_time && if (si->last_can_queue_ramp_down_time &&
(time_before(jiffies, si->last_can_queue_ramp_down_time + (time_before(jiffies, si->last_can_queue_ramp_down_time +
FC_CAN_QUEUE_PERIOD))) FC_CAN_QUEUE_PERIOD)))
return; goto unlock;
si->last_can_queue_ramp_down_time = jiffies; si->last_can_queue_ramp_down_time = jiffies;
@ -395,6 +397,9 @@ static void fc_fcp_can_queue_ramp_down(struct fc_lport *lport)
lport->host->can_queue = can_queue; lport->host->can_queue = can_queue;
shost_printk(KERN_ERR, lport->host, "libfc: Could not allocate frame.\n" shost_printk(KERN_ERR, lport->host, "libfc: Could not allocate frame.\n"
"Reducing can_queue to %d.\n", can_queue); "Reducing can_queue to %d.\n", can_queue);
unlock:
spin_unlock_irqrestore(lport->host->host_lock, flags);
} }
/* /*
@ -409,16 +414,13 @@ static inline struct fc_frame *fc_fcp_frame_alloc(struct fc_lport *lport,
size_t len) size_t len)
{ {
struct fc_frame *fp; struct fc_frame *fp;
unsigned long flags;
fp = fc_frame_alloc(lport, len); fp = fc_frame_alloc(lport, len);
if (likely(fp)) if (likely(fp))
return fp; return fp;
/* error case */ /* error case */
spin_lock_irqsave(lport->host->host_lock, flags);
fc_fcp_can_queue_ramp_down(lport); fc_fcp_can_queue_ramp_down(lport);
spin_unlock_irqrestore(lport->host->host_lock, flags);
return NULL; return NULL;
} }
@ -1093,16 +1095,14 @@ static int fc_fcp_pkt_send(struct fc_lport *lport, struct fc_fcp_pkt *fsp)
/** /**
* get_fsp_rec_tov() - Helper function to get REC_TOV * get_fsp_rec_tov() - Helper function to get REC_TOV
* @fsp: the FCP packet * @fsp: the FCP packet
*
* Returns rec tov in jiffies as rpriv->e_d_tov + 1 second
*/ */
static inline unsigned int get_fsp_rec_tov(struct fc_fcp_pkt *fsp) static inline unsigned int get_fsp_rec_tov(struct fc_fcp_pkt *fsp)
{ {
struct fc_rport *rport; struct fc_rport_libfc_priv *rpriv = fsp->rport->dd_data;
struct fc_rport_libfc_priv *rpriv;
rport = fsp->rport; return msecs_to_jiffies(rpriv->e_d_tov) + HZ;
rpriv = rport->dd_data;
return rpriv->e_d_tov + REC_TOV_CONST;
} }
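A hedged illustration of the get_fsp_rec_tov() change above (not from the patch): rpriv->e_d_tov is a millisecond value, so the old return of e_d_tov + REC_TOV_CONST ended up being handed to fc_fcp_timer_set() as if it were already jiffies, while the new code converts it with msecs_to_jiffies() and adds one second. The HZ value, the example E_D_TOV, and the simplified msecs_to_jiffies() below are assumptions for the sketch.

/* Illustration only -- not part of the patch.  HZ and the simplified
 * msecs_to_jiffies() below are assumptions for the sketch. */
#include <stdio.h>

#define HZ 250

static unsigned long msecs_to_jiffies(unsigned int ms)
{
	return (unsigned long)ms * HZ / 1000;	/* simplified model */
}

int main(void)
{
	unsigned int e_d_tov = 2000;	/* a typical 2000 ms E_D_TOV */

	/* old: millisecond value + 1 used directly as jiffies -> ~8 s at HZ=250 */
	printf("old timer: %u jiffies (~%u ms)\n",
	       e_d_tov + 1, (e_d_tov + 1) * 1000 / HZ);
	/* new: convert ms to jiffies, then add one second */
	printf("new timer: %lu jiffies (%u ms + 1 s)\n",
	       msecs_to_jiffies(e_d_tov) + HZ, e_d_tov);
	return 0;
}
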
/** /**
@ -1122,7 +1122,6 @@ static int fc_fcp_cmd_send(struct fc_lport *lport, struct fc_fcp_pkt *fsp,
struct fc_rport_libfc_priv *rpriv; struct fc_rport_libfc_priv *rpriv;
const size_t len = sizeof(fsp->cdb_cmd); const size_t len = sizeof(fsp->cdb_cmd);
int rc = 0; int rc = 0;
unsigned int rec_tov;
if (fc_fcp_lock_pkt(fsp)) if (fc_fcp_lock_pkt(fsp))
return 0; return 0;
@ -1153,12 +1152,9 @@ static int fc_fcp_cmd_send(struct fc_lport *lport, struct fc_fcp_pkt *fsp,
fsp->seq_ptr = seq; fsp->seq_ptr = seq;
fc_fcp_pkt_hold(fsp); /* hold for fc_fcp_pkt_destroy */ fc_fcp_pkt_hold(fsp); /* hold for fc_fcp_pkt_destroy */
rec_tov = get_fsp_rec_tov(fsp);
setup_timer(&fsp->timer, fc_fcp_timeout, (unsigned long)fsp); setup_timer(&fsp->timer, fc_fcp_timeout, (unsigned long)fsp);
if (rpriv->flags & FC_RP_FLAGS_REC_SUPPORTED) if (rpriv->flags & FC_RP_FLAGS_REC_SUPPORTED)
fc_fcp_timer_set(fsp, rec_tov); fc_fcp_timer_set(fsp, get_fsp_rec_tov(fsp));
unlock: unlock:
fc_fcp_unlock_pkt(fsp); fc_fcp_unlock_pkt(fsp);
@ -1235,16 +1231,14 @@ static void fc_lun_reset_send(unsigned long data)
{ {
struct fc_fcp_pkt *fsp = (struct fc_fcp_pkt *)data; struct fc_fcp_pkt *fsp = (struct fc_fcp_pkt *)data;
struct fc_lport *lport = fsp->lp; struct fc_lport *lport = fsp->lp;
unsigned int rec_tov;
if (lport->tt.fcp_cmd_send(lport, fsp, fc_tm_done)) { if (lport->tt.fcp_cmd_send(lport, fsp, fc_tm_done)) {
if (fsp->recov_retry++ >= FC_MAX_RECOV_RETRY) if (fsp->recov_retry++ >= FC_MAX_RECOV_RETRY)
return; return;
if (fc_fcp_lock_pkt(fsp)) if (fc_fcp_lock_pkt(fsp))
return; return;
rec_tov = get_fsp_rec_tov(fsp);
setup_timer(&fsp->timer, fc_lun_reset_send, (unsigned long)fsp); setup_timer(&fsp->timer, fc_lun_reset_send, (unsigned long)fsp);
fc_fcp_timer_set(fsp, rec_tov); fc_fcp_timer_set(fsp, get_fsp_rec_tov(fsp));
fc_fcp_unlock_pkt(fsp); fc_fcp_unlock_pkt(fsp);
} }
} }
@ -1536,12 +1530,11 @@ static void fc_fcp_rec_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
} }
fc_fcp_srr(fsp, r_ctl, offset); fc_fcp_srr(fsp, r_ctl, offset);
} else if (e_stat & ESB_ST_SEQ_INIT) { } else if (e_stat & ESB_ST_SEQ_INIT) {
unsigned int rec_tov = get_fsp_rec_tov(fsp);
/* /*
* The remote port has the initiative, so just * The remote port has the initiative, so just
* keep waiting for it to complete. * keep waiting for it to complete.
*/ */
fc_fcp_timer_set(fsp, rec_tov); fc_fcp_timer_set(fsp, get_fsp_rec_tov(fsp));
} else { } else {
/* /*
@ -1705,7 +1698,6 @@ static void fc_fcp_srr_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
{ {
struct fc_fcp_pkt *fsp = arg; struct fc_fcp_pkt *fsp = arg;
struct fc_frame_header *fh; struct fc_frame_header *fh;
unsigned int rec_tov;
if (IS_ERR(fp)) { if (IS_ERR(fp)) {
fc_fcp_srr_error(fsp, fp); fc_fcp_srr_error(fsp, fp);
@ -1732,8 +1724,7 @@ static void fc_fcp_srr_resp(struct fc_seq *seq, struct fc_frame *fp, void *arg)
switch (fc_frame_payload_op(fp)) { switch (fc_frame_payload_op(fp)) {
case ELS_LS_ACC: case ELS_LS_ACC:
fsp->recov_retry = 0; fsp->recov_retry = 0;
rec_tov = get_fsp_rec_tov(fsp); fc_fcp_timer_set(fsp, get_fsp_rec_tov(fsp));
fc_fcp_timer_set(fsp, rec_tov);
break; break;
case ELS_LS_RJT: case ELS_LS_RJT:
default: default:

View File

@ -1590,7 +1590,6 @@ void fc_lport_enter_flogi(struct fc_lport *lport)
*/ */
int fc_lport_config(struct fc_lport *lport) int fc_lport_config(struct fc_lport *lport)
{ {
INIT_LIST_HEAD(&lport->ema_list);
INIT_DELAYED_WORK(&lport->retry_work, fc_lport_timeout); INIT_DELAYED_WORK(&lport->retry_work, fc_lport_timeout);
mutex_init(&lport->lp_mutex); mutex_init(&lport->lp_mutex);

View File

@ -805,6 +805,8 @@ struct lpfc_hba {
struct dentry *idiag_root; struct dentry *idiag_root;
struct dentry *idiag_pci_cfg; struct dentry *idiag_pci_cfg;
struct dentry *idiag_que_info; struct dentry *idiag_que_info;
struct dentry *idiag_que_acc;
struct dentry *idiag_drb_acc;
#endif #endif
/* Used for deferred freeing of ELS data buffers */ /* Used for deferred freeing of ELS data buffers */

View File

@ -2426,6 +2426,7 @@ lpfc_bsg_wake_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq)
{ {
struct bsg_job_data *dd_data; struct bsg_job_data *dd_data;
struct fc_bsg_job *job; struct fc_bsg_job *job;
struct lpfc_mbx_nembed_cmd *nembed_sge;
uint32_t size; uint32_t size;
unsigned long flags; unsigned long flags;
uint8_t *to; uint8_t *to;
@ -2469,9 +2470,8 @@ lpfc_bsg_wake_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq)
memcpy(to, from, size); memcpy(to, from, size);
} else if ((phba->sli_rev == LPFC_SLI_REV4) && } else if ((phba->sli_rev == LPFC_SLI_REV4) &&
(pmboxq->u.mb.mbxCommand == MBX_SLI4_CONFIG)) { (pmboxq->u.mb.mbxCommand == MBX_SLI4_CONFIG)) {
struct lpfc_mbx_nembed_cmd *nembed_sge = nembed_sge = (struct lpfc_mbx_nembed_cmd *)
(struct lpfc_mbx_nembed_cmd *) &pmboxq->u.mb.un.varWords[0];
&pmboxq->u.mb.un.varWords[0];
from = (uint8_t *)dd_data->context_un.mbox.dmp->dma. from = (uint8_t *)dd_data->context_un.mbox.dmp->dma.
virt; virt;
@ -2496,16 +2496,18 @@ lpfc_bsg_wake_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq)
job->reply_payload.sg_cnt, job->reply_payload.sg_cnt,
from, size); from, size);
job->reply->result = 0; job->reply->result = 0;
/* need to hold the lock until we set job->dd_data to NULL
* to hold off the timeout handler returning to the mid-layer
* while we are still processing the job.
*/
job->dd_data = NULL; job->dd_data = NULL;
dd_data->context_un.mbox.set_job = NULL;
spin_unlock_irqrestore(&phba->ct_ev_lock, flags);
job->job_done(job); job->job_done(job);
} else {
dd_data->context_un.mbox.set_job = NULL;
spin_unlock_irqrestore(&phba->ct_ev_lock, flags);
} }
dd_data->context_un.mbox.set_job = NULL;
/* need to hold the lock until we call job done to hold off
* the timeout handler returning to the midlayer while
* we are stillprocessing the job
*/
spin_unlock_irqrestore(&phba->ct_ev_lock, flags);
kfree(dd_data->context_un.mbox.mb); kfree(dd_data->context_un.mbox.mb);
mempool_free(dd_data->context_un.mbox.pmboxq, phba->mbox_mem_pool); mempool_free(dd_data->context_un.mbox.pmboxq, phba->mbox_mem_pool);
@ -2644,6 +2646,11 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
struct ulp_bde64 *rxbpl = NULL; struct ulp_bde64 *rxbpl = NULL;
struct dfc_mbox_req *mbox_req = (struct dfc_mbox_req *) struct dfc_mbox_req *mbox_req = (struct dfc_mbox_req *)
job->request->rqst_data.h_vendor.vendor_cmd; job->request->rqst_data.h_vendor.vendor_cmd;
struct READ_EVENT_LOG_VAR *rdEventLog;
uint32_t transmit_length, receive_length, mode;
struct lpfc_mbx_nembed_cmd *nembed_sge;
struct mbox_header *header;
struct ulp_bde64 *bde;
uint8_t *ext = NULL; uint8_t *ext = NULL;
int rc = 0; int rc = 0;
uint8_t *from; uint8_t *from;
@ -2651,9 +2658,16 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
/* in case no data is transferred */ /* in case no data is transferred */
job->reply->reply_payload_rcv_len = 0; job->reply->reply_payload_rcv_len = 0;
/* sanity check to protect driver */
if (job->reply_payload.payload_len > BSG_MBOX_SIZE ||
job->request_payload.payload_len > BSG_MBOX_SIZE) {
rc = -ERANGE;
goto job_done;
}
/* check if requested extended data lengths are valid */ /* check if requested extended data lengths are valid */
if ((mbox_req->inExtWLen > MAILBOX_EXT_SIZE) || if ((mbox_req->inExtWLen > BSG_MBOX_SIZE/sizeof(uint32_t)) ||
(mbox_req->outExtWLen > MAILBOX_EXT_SIZE)) { (mbox_req->outExtWLen > BSG_MBOX_SIZE/sizeof(uint32_t))) {
rc = -ERANGE; rc = -ERANGE;
goto job_done; goto job_done;
} }
@ -2744,8 +2758,8 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
* use ours * use ours
*/ */
if (pmb->mbxCommand == MBX_RUN_BIU_DIAG64) { if (pmb->mbxCommand == MBX_RUN_BIU_DIAG64) {
uint32_t transmit_length = pmb->un.varWords[1]; transmit_length = pmb->un.varWords[1];
uint32_t receive_length = pmb->un.varWords[4]; receive_length = pmb->un.varWords[4];
/* transmit length cannot be greater than receive length or /* transmit length cannot be greater than receive length or
* mailbox extension size * mailbox extension size
*/ */
@ -2795,10 +2809,9 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
from += sizeof(MAILBOX_t); from += sizeof(MAILBOX_t);
memcpy((uint8_t *)dmp->dma.virt, from, transmit_length); memcpy((uint8_t *)dmp->dma.virt, from, transmit_length);
} else if (pmb->mbxCommand == MBX_READ_EVENT_LOG) { } else if (pmb->mbxCommand == MBX_READ_EVENT_LOG) {
struct READ_EVENT_LOG_VAR *rdEventLog = rdEventLog = &pmb->un.varRdEventLog;
&pmb->un.varRdEventLog ; receive_length = rdEventLog->rcv_bde64.tus.f.bdeSize;
uint32_t receive_length = rdEventLog->rcv_bde64.tus.f.bdeSize; mode = bf_get(lpfc_event_log, rdEventLog);
uint32_t mode = bf_get(lpfc_event_log, rdEventLog);
/* receive length cannot be greater than mailbox /* receive length cannot be greater than mailbox
* extension size * extension size
@ -2843,7 +2856,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
/* rebuild the command for sli4 using our own buffers /* rebuild the command for sli4 using our own buffers
* like we do for biu diags * like we do for biu diags
*/ */
uint32_t receive_length = pmb->un.varWords[2]; receive_length = pmb->un.varWords[2];
/* receive length cannot be greater than mailbox /* receive length cannot be greater than mailbox
* extension size * extension size
*/ */
@ -2879,8 +2892,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
pmb->un.varWords[4] = putPaddrHigh(dmp->dma.phys); pmb->un.varWords[4] = putPaddrHigh(dmp->dma.phys);
} else if ((pmb->mbxCommand == MBX_UPDATE_CFG) && } else if ((pmb->mbxCommand == MBX_UPDATE_CFG) &&
pmb->un.varUpdateCfg.co) { pmb->un.varUpdateCfg.co) {
struct ulp_bde64 *bde = bde = (struct ulp_bde64 *)&pmb->un.varWords[4];
(struct ulp_bde64 *)&pmb->un.varWords[4];
/* bde size cannot be greater than mailbox ext size */ /* bde size cannot be greater than mailbox ext size */
if (bde->tus.f.bdeSize > MAILBOX_EXT_SIZE) { if (bde->tus.f.bdeSize > MAILBOX_EXT_SIZE) {
@ -2921,10 +2933,6 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
memcpy((uint8_t *)dmp->dma.virt, from, memcpy((uint8_t *)dmp->dma.virt, from,
bde->tus.f.bdeSize); bde->tus.f.bdeSize);
} else if (pmb->mbxCommand == MBX_SLI4_CONFIG) { } else if (pmb->mbxCommand == MBX_SLI4_CONFIG) {
struct lpfc_mbx_nembed_cmd *nembed_sge;
struct mbox_header *header;
uint32_t receive_length;
/* rebuild the command for sli4 using our own buffers /* rebuild the command for sli4 using our own buffers
* like we do for biu diags * like we do for biu diags
*/ */
@ -3386,6 +3394,7 @@ no_dd_data:
job->dd_data = NULL; job->dd_data = NULL;
return rc; return rc;
} }
/** /**
* lpfc_bsg_hst_vendor - process a vendor-specific fc_bsg_job * lpfc_bsg_hst_vendor - process a vendor-specific fc_bsg_job
* @job: fc_bsg_job to handle * @job: fc_bsg_job to handle

View File

@ -109,3 +109,133 @@ struct menlo_response {
uint32_t xri; /* return the xri of the iocb exchange */ uint32_t xri; /* return the xri of the iocb exchange */
}; };
/*
* macros and data structures for handling sli-config mailbox command
* pass-through support, this header file is shared between user and
* kernel spaces, note the set of macros are duplicates from lpfc_hw4.h,
* with macro names prefixed with bsg_, as the macros defined in
* lpfc_hw4.h are not accessible from user space.
*/
/* Macros to deal with bit fields. Each bit field must have 3 #defines
* associated with it (_SHIFT, _MASK, and _WORD).
* EG. For a bit field that is in the 7th bit of the "field4" field of a
* structure and is 2 bits in size the following #defines must exist:
* struct temp {
* uint32_t field1;
* uint32_t field2;
* uint32_t field3;
* uint32_t field4;
* #define example_bit_field_SHIFT 7
* #define example_bit_field_MASK 0x03
* #define example_bit_field_WORD field4
* uint32_t field5;
* };
* Then the macros below may be used to get or set the value of that field.
* EG. To get the value of the bit field from the above example:
* struct temp t1;
* value = bsg_bf_get(example_bit_field, &t1);
* And then to set that bit field:
* bsg_bf_set(example_bit_field, &t1, 2);
* Or clear that bit field:
* bsg_bf_set(example_bit_field, &t1, 0);
*/
#define bsg_bf_get_le32(name, ptr) \
((le32_to_cpu((ptr)->name##_WORD) >> name##_SHIFT) & name##_MASK)
#define bsg_bf_get(name, ptr) \
(((ptr)->name##_WORD >> name##_SHIFT) & name##_MASK)
#define bsg_bf_set_le32(name, ptr, value) \
((ptr)->name##_WORD = cpu_to_le32(((((value) & \
name##_MASK) << name##_SHIFT) | (le32_to_cpu((ptr)->name##_WORD) & \
~(name##_MASK << name##_SHIFT)))))
#define bsg_bf_set(name, ptr, value) \
((ptr)->name##_WORD = ((((value) & name##_MASK) << name##_SHIFT) | \
((ptr)->name##_WORD & ~(name##_MASK << name##_SHIFT))))
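As a standalone sketch (not part of the patch), the example from the comment block above can be fleshed out into a small user-space program that exercises bsg_bf_get()/bsg_bf_set(); struct temp and the example_bit_field names come straight from that comment, everything else is illustrative.

/* Illustration only -- not part of the patch.  The bsg_bf_get()/
 * bsg_bf_set() definitions are copied from above; struct temp and
 * example_bit_field follow the example in the comment block. */
#include <stdio.h>
#include <stdint.h>

#define bsg_bf_get(name, ptr) \
	(((ptr)->name##_WORD >> name##_SHIFT) & name##_MASK)
#define bsg_bf_set(name, ptr, value) \
	((ptr)->name##_WORD = ((((value) & name##_MASK) << name##_SHIFT) | \
	((ptr)->name##_WORD & ~(name##_MASK << name##_SHIFT))))

struct temp {
	uint32_t field1;
	uint32_t field2;
	uint32_t field3;
	uint32_t field4;
/* 2-bit field starting at bit 7 of field4 */
#define example_bit_field_SHIFT	7
#define example_bit_field_MASK	0x03
#define example_bit_field_WORD	field4
	uint32_t field5;
};

int main(void)
{
	struct temp t1 = { 0 };

	bsg_bf_set(example_bit_field, &t1, 2);	/* field4 becomes 0x00000100 */
	printf("field4 = 0x%08x, bit field = %u\n",
	       (unsigned int)t1.field4,
	       (unsigned int)bsg_bf_get(example_bit_field, &t1));
	bsg_bf_set(example_bit_field, &t1, 0);	/* clear the field again */
	return 0;
}
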
/*
* The sli_config structure specified here is based on the following
* restriction:
*
* -- SLI_CONFIG EMB=0, carrying MSEs, will carry subcommands without
* carrying HBD.
* -- SLI_CONFIG EMB=1, not carrying MSE, will carry subcommands with or
* without carrying HBDs.
*/
struct lpfc_sli_config_mse {
uint32_t pa_lo;
uint32_t pa_hi;
uint32_t buf_len;
#define lpfc_mbox_sli_config_mse_len_SHIFT 0
#define lpfc_mbox_sli_config_mse_len_MASK 0xffffff
#define lpfc_mbox_sli_config_mse_len_WORD buf_len
};
struct lpfc_sli_config_subcmd_hbd {
uint32_t buf_len;
#define lpfc_mbox_sli_config_ecmn_hbd_len_SHIFT 0
#define lpfc_mbox_sli_config_ecmn_hbd_len_MASK 0xffffff
#define lpfc_mbox_sli_config_ecmn_hbd_len_WORD buf_len
uint32_t pa_lo;
uint32_t pa_hi;
};
struct lpfc_sli_config_hdr {
uint32_t word1;
#define lpfc_mbox_hdr_emb_SHIFT 0
#define lpfc_mbox_hdr_emb_MASK 0x00000001
#define lpfc_mbox_hdr_emb_WORD word1
#define lpfc_mbox_hdr_mse_cnt_SHIFT 3
#define lpfc_mbox_hdr_mse_cnt_MASK 0x0000001f
#define lpfc_mbox_hdr_mse_cnt_WORD word1
uint32_t payload_length;
uint32_t tag_lo;
uint32_t tag_hi;
uint32_t reserved5;
};
struct lpfc_sli_config_generic {
struct lpfc_sli_config_hdr sli_config_hdr;
#define LPFC_MBX_SLI_CONFIG_MAX_MSE 19
struct lpfc_sli_config_mse mse[LPFC_MBX_SLI_CONFIG_MAX_MSE];
};
struct lpfc_sli_config_subcmnd {
struct lpfc_sli_config_hdr sli_config_hdr;
uint32_t word6;
#define lpfc_subcmnd_opcode_SHIFT 0
#define lpfc_subcmnd_opcode_MASK 0xff
#define lpfc_subcmnd_opcode_WORD word6
#define lpfc_subcmnd_subsys_SHIFT 8
#define lpfc_subcmnd_subsys_MASK 0xff
#define lpfc_subcmnd_subsys_WORD word6
uint32_t timeout;
uint32_t request_length;
uint32_t word9;
#define lpfc_subcmnd_version_SHIFT 0
#define lpfc_subcmnd_version_MASK 0xff
#define lpfc_subcmnd_version_WORD word9
uint32_t word10;
#define lpfc_subcmnd_ask_rd_len_SHIFT 0
#define lpfc_subcmnd_ask_rd_len_MASK 0xffffff
#define lpfc_subcmnd_ask_rd_len_WORD word10
uint32_t rd_offset;
uint32_t obj_name[26];
uint32_t hbd_count;
#define LPFC_MBX_SLI_CONFIG_MAX_HBD 10
struct lpfc_sli_config_subcmd_hbd hbd[LPFC_MBX_SLI_CONFIG_MAX_HBD];
};
struct lpfc_sli_config_mbox {
uint32_t word0;
#define lpfc_mqe_status_SHIFT 16
#define lpfc_mqe_status_MASK 0x0000FFFF
#define lpfc_mqe_status_WORD word0
#define lpfc_mqe_command_SHIFT 8
#define lpfc_mqe_command_MASK 0x000000FF
#define lpfc_mqe_command_WORD word0
union {
struct lpfc_sli_config_generic sli_config_generic;
struct lpfc_sli_config_subcmnd sli_config_subcmnd;
} un;
};

File diff suppressed because it is too large

View File

@ -39,13 +39,42 @@
/* hbqinfo output buffer size */ /* hbqinfo output buffer size */
#define LPFC_HBQINFO_SIZE 8192 #define LPFC_HBQINFO_SIZE 8192
/* rdPciConf output buffer size */ /* pciConf */
#define LPFC_PCI_CFG_BROWSE 0xffff
#define LPFC_PCI_CFG_RD_CMD_ARG 2
#define LPFC_PCI_CFG_WR_CMD_ARG 3
#define LPFC_PCI_CFG_SIZE 4096 #define LPFC_PCI_CFG_SIZE 4096
#define LPFC_PCI_CFG_RD_BUF_SIZE (LPFC_PCI_CFG_SIZE/2) #define LPFC_PCI_CFG_RD_BUF_SIZE (LPFC_PCI_CFG_SIZE/2)
#define LPFC_PCI_CFG_RD_SIZE (LPFC_PCI_CFG_SIZE/4) #define LPFC_PCI_CFG_RD_SIZE (LPFC_PCI_CFG_SIZE/4)
/* queue info output buffer size */ /* queue info */
#define LPFC_QUE_INFO_GET_BUF_SIZE 2048 #define LPFC_QUE_INFO_GET_BUF_SIZE 4096
/* queue acc */
#define LPFC_QUE_ACC_BROWSE 0xffff
#define LPFC_QUE_ACC_RD_CMD_ARG 4
#define LPFC_QUE_ACC_WR_CMD_ARG 6
#define LPFC_QUE_ACC_BUF_SIZE 4096
#define LPFC_QUE_ACC_SIZE (LPFC_QUE_ACC_BUF_SIZE/2)
#define LPFC_IDIAG_EQ 1
#define LPFC_IDIAG_CQ 2
#define LPFC_IDIAG_MQ 3
#define LPFC_IDIAG_WQ 4
#define LPFC_IDIAG_RQ 5
/* doorbell acc */
#define LPFC_DRB_ACC_ALL 0xffff
#define LPFC_DRB_ACC_RD_CMD_ARG 1
#define LPFC_DRB_ACC_WR_CMD_ARG 2
#define LPFC_DRB_ACC_BUF_SIZE 256
#define LPFC_DRB_EQCQ 1
#define LPFC_DRB_MQ 2
#define LPFC_DRB_WQ 3
#define LPFC_DRB_RQ 4
#define LPFC_DRB_MAX 4
#define SIZE_U8 sizeof(uint8_t) #define SIZE_U8 sizeof(uint8_t)
#define SIZE_U16 sizeof(uint16_t) #define SIZE_U16 sizeof(uint16_t)
@ -73,13 +102,23 @@ struct lpfc_idiag_offset {
uint32_t last_rd; uint32_t last_rd;
}; };
#define LPFC_IDIAG_CMD_DATA_SIZE 4 #define LPFC_IDIAG_CMD_DATA_SIZE 8
struct lpfc_idiag_cmd { struct lpfc_idiag_cmd {
uint32_t opcode; uint32_t opcode;
#define LPFC_IDIAG_CMD_PCICFG_RD 0x00000001 #define LPFC_IDIAG_CMD_PCICFG_RD 0x00000001
#define LPFC_IDIAG_CMD_PCICFG_WR 0x00000002 #define LPFC_IDIAG_CMD_PCICFG_WR 0x00000002
#define LPFC_IDIAG_CMD_PCICFG_ST 0x00000003 #define LPFC_IDIAG_CMD_PCICFG_ST 0x00000003
#define LPFC_IDIAG_CMD_PCICFG_CL 0x00000004 #define LPFC_IDIAG_CMD_PCICFG_CL 0x00000004
#define LPFC_IDIAG_CMD_QUEACC_RD 0x00000011
#define LPFC_IDIAG_CMD_QUEACC_WR 0x00000012
#define LPFC_IDIAG_CMD_QUEACC_ST 0x00000013
#define LPFC_IDIAG_CMD_QUEACC_CL 0x00000014
#define LPFC_IDIAG_CMD_DRBACC_RD 0x00000021
#define LPFC_IDIAG_CMD_DRBACC_WR 0x00000022
#define LPFC_IDIAG_CMD_DRBACC_ST 0x00000023
#define LPFC_IDIAG_CMD_DRBACC_CL 0x00000024
uint32_t data[LPFC_IDIAG_CMD_DATA_SIZE]; uint32_t data[LPFC_IDIAG_CMD_DATA_SIZE];
}; };
@ -87,6 +126,7 @@ struct lpfc_idiag {
uint32_t active; uint32_t active;
struct lpfc_idiag_cmd cmd; struct lpfc_idiag_cmd cmd;
struct lpfc_idiag_offset offset; struct lpfc_idiag_offset offset;
void *ptr_private;
}; };
#endif #endif

View File

@ -670,6 +670,7 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
* Driver needs to re-reg VPI in order for f/w * Driver needs to re-reg VPI in order for f/w
* to update the MAC address. * to update the MAC address.
*/ */
lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
lpfc_register_new_vport(phba, vport, ndlp); lpfc_register_new_vport(phba, vport, ndlp);
return 0; return 0;
} }
@ -869,8 +870,8 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
*/ */
if ((phba->hba_flag & HBA_FIP_SUPPORT) && if ((phba->hba_flag & HBA_FIP_SUPPORT) &&
(phba->fcf.fcf_flag & FCF_DISCOVERY) && (phba->fcf.fcf_flag & FCF_DISCOVERY) &&
(irsp->ulpStatus != IOSTAT_LOCAL_REJECT) && !((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
(irsp->un.ulpWord[4] != IOERR_SLI_ABORTED)) { (irsp->un.ulpWord[4] == IOERR_SLI_ABORTED))) {
lpfc_printf_log(phba, KERN_WARNING, LOG_FIP | LOG_ELS, lpfc_printf_log(phba, KERN_WARNING, LOG_FIP | LOG_ELS,
"2611 FLOGI failed on FCF (x%x), " "2611 FLOGI failed on FCF (x%x), "
"status:x%x/x%x, tmo:x%x, perform " "status:x%x/x%x, tmo:x%x, perform "
@ -1085,14 +1086,15 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
if (sp->cmn.fcphHigh < FC_PH3) if (sp->cmn.fcphHigh < FC_PH3)
sp->cmn.fcphHigh = FC_PH3; sp->cmn.fcphHigh = FC_PH3;
if ((phba->sli_rev == LPFC_SLI_REV4) && if (phba->sli_rev == LPFC_SLI_REV4) {
(bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) == if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) ==
LPFC_SLI_INTF_IF_TYPE_0)) { LPFC_SLI_INTF_IF_TYPE_0) {
elsiocb->iocb.ulpCt_h = ((SLI4_CT_FCFI >> 1) & 1); elsiocb->iocb.ulpCt_h = ((SLI4_CT_FCFI >> 1) & 1);
elsiocb->iocb.ulpCt_l = (SLI4_CT_FCFI & 1); elsiocb->iocb.ulpCt_l = (SLI4_CT_FCFI & 1);
/* FLOGI needs to be 3 for WQE FCFI */ /* FLOGI needs to be 3 for WQE FCFI */
/* Set the fcfi to the fcfi we registered with */ /* Set the fcfi to the fcfi we registered with */
elsiocb->iocb.ulpContext = phba->fcf.fcfi; elsiocb->iocb.ulpContext = phba->fcf.fcfi;
}
} else if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) { } else if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
sp->cmn.request_multiple_Nport = 1; sp->cmn.request_multiple_Nport = 1;
/* For FLOGI, Let FLOGI rsp set the NPortID for VPI 0 */ /* For FLOGI, Let FLOGI rsp set the NPortID for VPI 0 */
@ -4107,13 +4109,13 @@ lpfc_els_clear_rrq(struct lpfc_vport *vport,
pcmd += sizeof(uint32_t); pcmd += sizeof(uint32_t);
rrq = (struct RRQ *)pcmd; rrq = (struct RRQ *)pcmd;
rrq->rrq_exchg = be32_to_cpu(rrq->rrq_exchg); rrq->rrq_exchg = be32_to_cpu(rrq->rrq_exchg);
rxid = be16_to_cpu(bf_get(rrq_rxid, rrq)); rxid = bf_get(rrq_rxid, rrq);
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"2883 Clear RRQ for SID:x%x OXID:x%x RXID:x%x" "2883 Clear RRQ for SID:x%x OXID:x%x RXID:x%x"
" x%x x%x\n", " x%x x%x\n",
be32_to_cpu(bf_get(rrq_did, rrq)), be32_to_cpu(bf_get(rrq_did, rrq)),
be16_to_cpu(bf_get(rrq_oxid, rrq)), bf_get(rrq_oxid, rrq),
rxid, rxid,
iocb->iotag, iocb->iocb.ulpContext); iocb->iotag, iocb->iocb.ulpContext);
@ -4121,7 +4123,7 @@ lpfc_els_clear_rrq(struct lpfc_vport *vport,
"Clear RRQ: did:x%x flg:x%x exchg:x%.08x", "Clear RRQ: did:x%x flg:x%x exchg:x%.08x",
ndlp->nlp_DID, ndlp->nlp_flag, rrq->rrq_exchg); ndlp->nlp_DID, ndlp->nlp_flag, rrq->rrq_exchg);
if (vport->fc_myDID == be32_to_cpu(bf_get(rrq_did, rrq))) if (vport->fc_myDID == be32_to_cpu(bf_get(rrq_did, rrq)))
xri = be16_to_cpu(bf_get(rrq_oxid, rrq)); xri = bf_get(rrq_oxid, rrq);
else else
xri = rxid; xri = rxid;
prrq = lpfc_get_active_rrq(vport, xri, ndlp->nlp_DID); prrq = lpfc_get_active_rrq(vport, xri, ndlp->nlp_DID);
@ -7290,8 +7292,9 @@ lpfc_cmpl_els_npiv_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
struct lpfc_vport *vport = cmdiocb->vport; struct lpfc_vport *vport = cmdiocb->vport;
IOCB_t *irsp; IOCB_t *irsp;
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp;
ndlp = (struct lpfc_nodelist *)cmdiocb->context1; struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
ndlp = (struct lpfc_nodelist *)cmdiocb->context1;
irsp = &rspiocb->iocb; irsp = &rspiocb->iocb;
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD, lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
"LOGO npiv cmpl: status:x%x/x%x did:x%x", "LOGO npiv cmpl: status:x%x/x%x did:x%x",
@ -7302,6 +7305,19 @@ lpfc_cmpl_els_npiv_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* Trigger the release of the ndlp after logo */ /* Trigger the release of the ndlp after logo */
lpfc_nlp_put(ndlp); lpfc_nlp_put(ndlp);
/* NPIV LOGO completes to NPort <nlp_DID> */
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"2928 NPIV LOGO completes to NPort x%x "
"Data: x%x x%x x%x x%x\n",
ndlp->nlp_DID, irsp->ulpStatus, irsp->un.ulpWord[4],
irsp->ulpTimeout, vport->num_disc_nodes);
if (irsp->ulpStatus == IOSTAT_SUCCESS) {
spin_lock_irq(shost->host_lock);
vport->fc_flag &= ~FC_FABRIC;
spin_unlock_irq(shost->host_lock);
}
} }
/** /**

View File

@ -1,7 +1,7 @@
/******************************************************************* /*******************************************************************
* This file is part of the Emulex Linux Device Driver for * * This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. * * Fibre Channel Host Bus Adapters. *
* Copyright (C) 2004-2009 Emulex. All rights reserved. * * Copyright (C) 2004-2011 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. * * EMULEX and SLI are trademarks of Emulex. *
* www.emulex.com * * www.emulex.com *
* Portions Copyright (C) 2004-2005 Christoph Hellwig * * Portions Copyright (C) 2004-2005 Christoph Hellwig *
@ -3569,6 +3569,10 @@ lpfc_register_remote_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
"rport add: did:x%x flg:x%x type x%x", "rport add: did:x%x flg:x%x type x%x",
ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type); ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type);
/* Don't add the remote port if unloading. */
if (vport->load_flag & FC_UNLOADING)
return;
ndlp->rport = rport = fc_remote_port_add(shost, 0, &rport_ids); ndlp->rport = rport = fc_remote_port_add(shost, 0, &rport_ids);
if (!rport || !get_device(&rport->dev)) { if (!rport || !get_device(&rport->dev)) {
dev_printk(KERN_WARNING, &phba->pcidev->dev, dev_printk(KERN_WARNING, &phba->pcidev->dev,

View File

@ -1059,6 +1059,11 @@ struct rq_context {
#define lpfc_rq_context_rqe_size_SHIFT 8 /* Version 1 Only */ #define lpfc_rq_context_rqe_size_SHIFT 8 /* Version 1 Only */
#define lpfc_rq_context_rqe_size_MASK 0x0000000F #define lpfc_rq_context_rqe_size_MASK 0x0000000F
#define lpfc_rq_context_rqe_size_WORD word0 #define lpfc_rq_context_rqe_size_WORD word0
#define LPFC_RQE_SIZE_8 2
#define LPFC_RQE_SIZE_16 3
#define LPFC_RQE_SIZE_32 4
#define LPFC_RQE_SIZE_64 5
#define LPFC_RQE_SIZE_128 6
#define lpfc_rq_context_page_size_SHIFT 0 /* Version 1 Only */ #define lpfc_rq_context_page_size_SHIFT 0 /* Version 1 Only */
#define lpfc_rq_context_page_size_MASK 0x000000FF #define lpfc_rq_context_page_size_MASK 0x000000FF
#define lpfc_rq_context_page_size_WORD word0 #define lpfc_rq_context_page_size_WORD word0
@ -2108,6 +2113,8 @@ struct lpfc_mbx_pc_sli4_params {
#define sgl_pp_align_WORD word12 #define sgl_pp_align_WORD word12
uint32_t rsvd_13_63[51]; uint32_t rsvd_13_63[51];
}; };
#define SLI4_PAGE_ALIGN(addr) (((addr)+((SLI4_PAGE_SIZE)-1)) \
&(~((SLI4_PAGE_SIZE)-1)))
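A minimal sketch of what SLI4_PAGE_ALIGN() computes (illustration only, assuming SLI4_PAGE_SIZE is 4096): it rounds a byte count up to the next SLI4 page boundary, which is what the lpfc_sli4_config() hunk further down relies on when it switches from PAGE_ALIGN() to this macro for the non-embedded mailbox page count.

/* Illustration only -- not part of the patch; SLI4_PAGE_SIZE is assumed
 * to be 4096 here. */
#include <stdio.h>

#define SLI4_PAGE_SIZE	4096
#define SLI4_PAGE_ALIGN(addr)	(((addr)+((SLI4_PAGE_SIZE)-1)) \
				&(~((SLI4_PAGE_SIZE)-1)))

int main(void)
{
	/* 4096 is already aligned; 4097 rounds up to the next page */
	printf("%d -> %d\n", 4096, SLI4_PAGE_ALIGN(4096));
	printf("%d -> %d\n", 4097, SLI4_PAGE_ALIGN(4097));
	/* pcount as used in lpfc_sli4_config(): pages needed for a length */
	printf("pages for 9000 bytes: %d\n",
	       SLI4_PAGE_ALIGN(9000) / SLI4_PAGE_SIZE);
	return 0;
}
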
struct lpfc_sli4_parameters { struct lpfc_sli4_parameters {
uint32_t word0; uint32_t word0;
@ -2491,6 +2498,9 @@ struct wqe_common {
#define wqe_reqtag_SHIFT 0 #define wqe_reqtag_SHIFT 0
#define wqe_reqtag_MASK 0x0000FFFF #define wqe_reqtag_MASK 0x0000FFFF
#define wqe_reqtag_WORD word9 #define wqe_reqtag_WORD word9
#define wqe_temp_rpi_SHIFT 16
#define wqe_temp_rpi_MASK 0x0000FFFF
#define wqe_temp_rpi_WORD word9
#define wqe_rcvoxid_SHIFT 16 #define wqe_rcvoxid_SHIFT 16
#define wqe_rcvoxid_MASK 0x0000FFFF #define wqe_rcvoxid_MASK 0x0000FFFF
#define wqe_rcvoxid_WORD word9 #define wqe_rcvoxid_WORD word9
@ -2524,7 +2534,7 @@ struct wqe_common {
#define wqe_wqes_WORD word10 #define wqe_wqes_WORD word10
/* Note that this field overlaps above fields */ /* Note that this field overlaps above fields */
#define wqe_wqid_SHIFT 1 #define wqe_wqid_SHIFT 1
#define wqe_wqid_MASK 0x0000007f #define wqe_wqid_MASK 0x00007fff
#define wqe_wqid_WORD word10 #define wqe_wqid_WORD word10
#define wqe_pri_SHIFT 16 #define wqe_pri_SHIFT 16
#define wqe_pri_MASK 0x00000007 #define wqe_pri_MASK 0x00000007
@ -2621,7 +2631,11 @@ struct xmit_els_rsp64_wqe {
uint32_t rsvd4; uint32_t rsvd4;
struct wqe_did wqe_dest; struct wqe_did wqe_dest;
struct wqe_common wqe_com; /* words 6-11 */ struct wqe_common wqe_com; /* words 6-11 */
uint32_t rsvd_12_15[4]; uint32_t word12;
#define wqe_rsp_temp_rpi_SHIFT 0
#define wqe_rsp_temp_rpi_MASK 0x0000FFFF
#define wqe_rsp_temp_rpi_WORD word12
uint32_t rsvd_13_15[3];
}; };
struct xmit_bls_rsp64_wqe { struct xmit_bls_rsp64_wqe {

View File

@ -3209,9 +3209,9 @@ lpfc_sli4_async_link_evt(struct lpfc_hba *phba,
phba->sli4_hba.link_state.logical_speed = phba->sli4_hba.link_state.logical_speed =
bf_get(lpfc_acqe_logical_link_speed, acqe_link); bf_get(lpfc_acqe_logical_link_speed, acqe_link);
lpfc_printf_log(phba, KERN_INFO, LOG_SLI, lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"2900 Async FCoE Link event - Speed:%dGBit duplex:x%x " "2900 Async FC/FCoE Link event - Speed:%dGBit "
"LA Type:x%x Port Type:%d Port Number:%d Logical " "duplex:x%x LA Type:x%x Port Type:%d Port Number:%d "
"speed:%dMbps Fault:%d\n", "Logical speed:%dMbps Fault:%d\n",
phba->sli4_hba.link_state.speed, phba->sli4_hba.link_state.speed,
phba->sli4_hba.link_state.topology, phba->sli4_hba.link_state.topology,
phba->sli4_hba.link_state.status, phba->sli4_hba.link_state.status,
@ -4906,6 +4906,7 @@ lpfc_sli4_create_rpi_hdr(struct lpfc_hba *phba)
uint16_t rpi_limit, curr_rpi_range; uint16_t rpi_limit, curr_rpi_range;
struct lpfc_dmabuf *dmabuf; struct lpfc_dmabuf *dmabuf;
struct lpfc_rpi_hdr *rpi_hdr; struct lpfc_rpi_hdr *rpi_hdr;
uint32_t rpi_count;
rpi_limit = phba->sli4_hba.max_cfg_param.rpi_base + rpi_limit = phba->sli4_hba.max_cfg_param.rpi_base +
phba->sli4_hba.max_cfg_param.max_rpi - 1; phba->sli4_hba.max_cfg_param.max_rpi - 1;
@ -4920,7 +4921,9 @@ lpfc_sli4_create_rpi_hdr(struct lpfc_hba *phba)
* and to allow the full max_rpi range per port. * and to allow the full max_rpi range per port.
*/ */
if ((curr_rpi_range + (LPFC_RPI_HDR_COUNT - 1)) > rpi_limit) if ((curr_rpi_range + (LPFC_RPI_HDR_COUNT - 1)) > rpi_limit)
return NULL; rpi_count = rpi_limit - curr_rpi_range;
else
rpi_count = LPFC_RPI_HDR_COUNT;
/* /*
* First allocate the protocol header region for the port. The * First allocate the protocol header region for the port. The
@ -4961,7 +4964,7 @@ lpfc_sli4_create_rpi_hdr(struct lpfc_hba *phba)
* The next_rpi stores the next module-64 rpi value to post * The next_rpi stores the next module-64 rpi value to post
* in any subsequent rpi memory region postings. * in any subsequent rpi memory region postings.
*/ */
phba->sli4_hba.next_rpi += LPFC_RPI_HDR_COUNT; phba->sli4_hba.next_rpi += rpi_count;
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
return rpi_hdr; return rpi_hdr;
@ -7004,7 +7007,8 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
lpfc_sli4_bar0_register_memmap(phba, if_type); lpfc_sli4_bar0_register_memmap(phba, if_type);
} }
if (pci_resource_start(pdev, 2)) { if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
(pci_resource_start(pdev, 2))) {
/* /*
* Map SLI4 if type 0 HBA Control Register base to a kernel * Map SLI4 if type 0 HBA Control Register base to a kernel
* virtual address and setup the registers. * virtual address and setup the registers.
@ -7021,7 +7025,8 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
lpfc_sli4_bar1_register_memmap(phba); lpfc_sli4_bar1_register_memmap(phba);
} }
if (pci_resource_start(pdev, 4)) { if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
(pci_resource_start(pdev, 4))) {
/* /*
* Map SLI4 if type 0 HBA Doorbell Register base to a kernel * Map SLI4 if type 0 HBA Doorbell Register base to a kernel
* virtual address and setup the registers. * virtual address and setup the registers.

View File

@ -1736,7 +1736,7 @@ lpfc_sli4_config(struct lpfc_hba *phba, struct lpfcMboxq *mbox,
} }
/* Setup for the none-embedded mbox command */ /* Setup for the none-embedded mbox command */
pcount = (PAGE_ALIGN(length))/SLI4_PAGE_SIZE; pcount = (SLI4_PAGE_ALIGN(length))/SLI4_PAGE_SIZE;
pcount = (pcount > LPFC_SLI4_MBX_SGE_MAX_PAGES) ? pcount = (pcount > LPFC_SLI4_MBX_SGE_MAX_PAGES) ?
LPFC_SLI4_MBX_SGE_MAX_PAGES : pcount; LPFC_SLI4_MBX_SGE_MAX_PAGES : pcount;
/* Allocate record for keeping SGE virtual addresses */ /* Allocate record for keeping SGE virtual addresses */

View File

@ -3238,9 +3238,8 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
if (!lpfc_cmd) { if (!lpfc_cmd) {
lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP, lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
"2873 SCSI Layer I/O Abort Request IO CMPL Status " "2873 SCSI Layer I/O Abort Request IO CMPL Status "
"x%x ID %d " "x%x ID %d LUN %d\n",
"LUN %d snum %#lx\n", ret, cmnd->device->id, ret, cmnd->device->id, cmnd->device->lun);
cmnd->device->lun, cmnd->serial_number);
return SUCCESS; return SUCCESS;
} }
@ -3318,16 +3317,15 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP, lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
"0748 abort handler timed out waiting " "0748 abort handler timed out waiting "
"for abort to complete: ret %#x, ID %d, " "for abort to complete: ret %#x, ID %d, "
"LUN %d, snum %#lx\n", "LUN %d\n",
ret, cmnd->device->id, cmnd->device->lun, ret, cmnd->device->id, cmnd->device->lun);
cmnd->serial_number);
} }
out: out:
lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP, lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
"0749 SCSI Layer I/O Abort Request Status x%x ID %d " "0749 SCSI Layer I/O Abort Request Status x%x ID %d "
"LUN %d snum %#lx\n", ret, cmnd->device->id, "LUN %d\n", ret, cmnd->device->id,
cmnd->device->lun, cmnd->serial_number); cmnd->device->lun);
return ret; return ret;
} }

View File

@ -4769,8 +4769,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
else else
phba->hba_flag &= ~HBA_FIP_SUPPORT; phba->hba_flag &= ~HBA_FIP_SUPPORT;
if (phba->sli_rev != LPFC_SLI_REV4 || if (phba->sli_rev != LPFC_SLI_REV4) {
!(phba->hba_flag & HBA_FCOE_MODE)) {
lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI, lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
"0376 READ_REV Error. SLI Level %d " "0376 READ_REV Error. SLI Level %d "
"FCoE enabled %d\n", "FCoE enabled %d\n",
@ -5018,10 +5017,11 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
lpfc_reg_fcfi(phba, mboxq); lpfc_reg_fcfi(phba, mboxq);
mboxq->vport = phba->pport; mboxq->vport = phba->pport;
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
if (rc == MBX_SUCCESS) if (rc != MBX_SUCCESS)
rc = 0;
else
goto out_unset_queue; goto out_unset_queue;
rc = 0;
phba->fcf.fcfi = bf_get(lpfc_reg_fcfi_fcfi,
&mboxq->u.mqe.un.reg_fcfi);
} }
/* /*
* The port is ready, set the host's link state to LINK_DOWN * The port is ready, set the host's link state to LINK_DOWN
@ -6402,6 +6402,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
uint32_t els_id = LPFC_ELS_ID_DEFAULT; uint32_t els_id = LPFC_ELS_ID_DEFAULT;
int numBdes, i; int numBdes, i;
struct ulp_bde64 bde; struct ulp_bde64 bde;
struct lpfc_nodelist *ndlp;
fip = phba->hba_flag & HBA_FIP_SUPPORT; fip = phba->hba_flag & HBA_FIP_SUPPORT;
/* The fcp commands will set command type */ /* The fcp commands will set command type */
@ -6447,6 +6448,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
switch (iocbq->iocb.ulpCommand) { switch (iocbq->iocb.ulpCommand) {
case CMD_ELS_REQUEST64_CR: case CMD_ELS_REQUEST64_CR:
ndlp = (struct lpfc_nodelist *)iocbq->context1;
if (!iocbq->iocb.ulpLe) { if (!iocbq->iocb.ulpLe) {
lpfc_printf_log(phba, KERN_ERR, LOG_SLI, lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
"2007 Only Limited Edition cmd Format" "2007 Only Limited Edition cmd Format"
@ -6472,6 +6474,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
els_id = ((iocbq->iocb_flag & LPFC_FIP_ELS_ID_MASK) els_id = ((iocbq->iocb_flag & LPFC_FIP_ELS_ID_MASK)
>> LPFC_FIP_ELS_ID_SHIFT); >> LPFC_FIP_ELS_ID_SHIFT);
} }
bf_set(wqe_temp_rpi, &wqe->els_req.wqe_com, ndlp->nlp_rpi);
bf_set(wqe_els_id, &wqe->els_req.wqe_com, els_id); bf_set(wqe_els_id, &wqe->els_req.wqe_com, els_id);
bf_set(wqe_dbde, &wqe->els_req.wqe_com, 1); bf_set(wqe_dbde, &wqe->els_req.wqe_com, 1);
bf_set(wqe_iod, &wqe->els_req.wqe_com, LPFC_WQE_IOD_READ); bf_set(wqe_iod, &wqe->els_req.wqe_com, LPFC_WQE_IOD_READ);
@ -6604,6 +6607,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
command_type = OTHER_COMMAND; command_type = OTHER_COMMAND;
break; break;
case CMD_XMIT_ELS_RSP64_CX: case CMD_XMIT_ELS_RSP64_CX:
ndlp = (struct lpfc_nodelist *)iocbq->context1;
/* words0-2 BDE memcpy */ /* words0-2 BDE memcpy */
/* word3 iocb=iotag32 wqe=response_payload_len */ /* word3 iocb=iotag32 wqe=response_payload_len */
wqe->xmit_els_rsp.response_payload_len = xmit_len; wqe->xmit_els_rsp.response_payload_len = xmit_len;
@ -6626,6 +6630,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
bf_set(wqe_lenloc, &wqe->xmit_els_rsp.wqe_com, bf_set(wqe_lenloc, &wqe->xmit_els_rsp.wqe_com,
LPFC_WQE_LENLOC_WORD3); LPFC_WQE_LENLOC_WORD3);
bf_set(wqe_ebde_cnt, &wqe->xmit_els_rsp.wqe_com, 0); bf_set(wqe_ebde_cnt, &wqe->xmit_els_rsp.wqe_com, 0);
bf_set(wqe_rsp_temp_rpi, &wqe->xmit_els_rsp, ndlp->nlp_rpi);
command_type = OTHER_COMMAND; command_type = OTHER_COMMAND;
break; break;
case CMD_CLOSE_XRI_CN: case CMD_CLOSE_XRI_CN:
@ -10522,8 +10527,8 @@ lpfc_cq_create(struct lpfc_hba *phba, struct lpfc_queue *cq,
bf_set(lpfc_mbox_hdr_version, &shdr->request, bf_set(lpfc_mbox_hdr_version, &shdr->request,
phba->sli4_hba.pc_sli4_params.cqv); phba->sli4_hba.pc_sli4_params.cqv);
if (phba->sli4_hba.pc_sli4_params.cqv == LPFC_Q_CREATE_VERSION_2) { if (phba->sli4_hba.pc_sli4_params.cqv == LPFC_Q_CREATE_VERSION_2) {
bf_set(lpfc_mbx_cq_create_page_size, &cq_create->u.request, /* FW only supports 1. Should be PAGE_SIZE/SLI4_PAGE_SIZE */
(PAGE_SIZE/SLI4_PAGE_SIZE)); bf_set(lpfc_mbx_cq_create_page_size, &cq_create->u.request, 1);
bf_set(lpfc_cq_eq_id_2, &cq_create->u.request.context, bf_set(lpfc_cq_eq_id_2, &cq_create->u.request.context,
eq->queue_id); eq->queue_id);
} else { } else {
@ -10967,6 +10972,12 @@ lpfc_rq_create(struct lpfc_hba *phba, struct lpfc_queue *hrq,
&rq_create->u.request.context, &rq_create->u.request.context,
hrq->entry_count); hrq->entry_count);
rq_create->u.request.context.buffer_size = LPFC_HDR_BUF_SIZE; rq_create->u.request.context.buffer_size = LPFC_HDR_BUF_SIZE;
bf_set(lpfc_rq_context_rqe_size,
&rq_create->u.request.context,
LPFC_RQE_SIZE_8);
bf_set(lpfc_rq_context_page_size,
&rq_create->u.request.context,
(PAGE_SIZE/SLI4_PAGE_SIZE));
} else { } else {
switch (hrq->entry_count) { switch (hrq->entry_count) {
default: default:
@ -11042,9 +11053,12 @@ lpfc_rq_create(struct lpfc_hba *phba, struct lpfc_queue *hrq,
phba->sli4_hba.pc_sli4_params.rqv); phba->sli4_hba.pc_sli4_params.rqv);
if (phba->sli4_hba.pc_sli4_params.rqv == LPFC_Q_CREATE_VERSION_1) { if (phba->sli4_hba.pc_sli4_params.rqv == LPFC_Q_CREATE_VERSION_1) {
bf_set(lpfc_rq_context_rqe_count_1, bf_set(lpfc_rq_context_rqe_count_1,
&rq_create->u.request.context, &rq_create->u.request.context, hrq->entry_count);
hrq->entry_count);
rq_create->u.request.context.buffer_size = LPFC_DATA_BUF_SIZE; rq_create->u.request.context.buffer_size = LPFC_DATA_BUF_SIZE;
bf_set(lpfc_rq_context_rqe_size, &rq_create->u.request.context,
LPFC_RQE_SIZE_8);
bf_set(lpfc_rq_context_page_size, &rq_create->u.request.context,
(PAGE_SIZE/SLI4_PAGE_SIZE));
} else { } else {
switch (drq->entry_count) { switch (drq->entry_count) {
default: default:

View File

@ -18,7 +18,7 @@
* included with this package. * * included with this package. *
*******************************************************************/ *******************************************************************/
#define LPFC_DRIVER_VERSION "8.3.22" #define LPFC_DRIVER_VERSION "8.3.23"
#define LPFC_DRIVER_NAME "lpfc" #define LPFC_DRIVER_NAME "lpfc"
#define LPFC_SP_DRIVER_HANDLER_NAME "lpfc:sp" #define LPFC_SP_DRIVER_HANDLER_NAME "lpfc:sp"
#define LPFC_FP_DRIVER_HANDLER_NAME "lpfc:fp" #define LPFC_FP_DRIVER_HANDLER_NAME "lpfc:fp"

View File

@ -1469,8 +1469,8 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status)
if( scb->state & SCB_ABORT ) { if( scb->state & SCB_ABORT ) {
printk(KERN_WARNING printk(KERN_WARNING
"megaraid: aborted cmd %lx[%x] complete.\n", "megaraid: aborted cmd [%x] complete.\n",
scb->cmd->serial_number, scb->idx); scb->idx);
scb->cmd->result = (DID_ABORT << 16); scb->cmd->result = (DID_ABORT << 16);
@ -1488,8 +1488,8 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status)
if( scb->state & SCB_RESET ) { if( scb->state & SCB_RESET ) {
printk(KERN_WARNING printk(KERN_WARNING
"megaraid: reset cmd %lx[%x] complete.\n", "megaraid: reset cmd [%x] complete.\n",
scb->cmd->serial_number, scb->idx); scb->idx);
scb->cmd->result = (DID_RESET << 16); scb->cmd->result = (DID_RESET << 16);
@ -1958,8 +1958,8 @@ megaraid_abort_and_reset(adapter_t *adapter, Scsi_Cmnd *cmd, int aor)
struct list_head *pos, *next; struct list_head *pos, *next;
scb_t *scb; scb_t *scb;
printk(KERN_WARNING "megaraid: %s-%lx cmd=%x <c=%d t=%d l=%d>\n", printk(KERN_WARNING "megaraid: %s cmd=%x <c=%d t=%d l=%d>\n",
(aor == SCB_ABORT)? "ABORTING":"RESET", cmd->serial_number, (aor == SCB_ABORT)? "ABORTING":"RESET",
cmd->cmnd[0], cmd->device->channel, cmd->cmnd[0], cmd->device->channel,
cmd->device->id, cmd->device->lun); cmd->device->id, cmd->device->lun);
@ -1983,9 +1983,9 @@ megaraid_abort_and_reset(adapter_t *adapter, Scsi_Cmnd *cmd, int aor)
if( scb->state & SCB_ISSUED ) { if( scb->state & SCB_ISSUED ) {
printk(KERN_WARNING printk(KERN_WARNING
"megaraid: %s-%lx[%x], fw owner.\n", "megaraid: %s[%x], fw owner.\n",
(aor==SCB_ABORT) ? "ABORTING":"RESET", (aor==SCB_ABORT) ? "ABORTING":"RESET",
cmd->serial_number, scb->idx); scb->idx);
return FALSE; return FALSE;
} }
@ -1996,9 +1996,9 @@ megaraid_abort_and_reset(adapter_t *adapter, Scsi_Cmnd *cmd, int aor)
* list * list
*/ */
printk(KERN_WARNING printk(KERN_WARNING
"megaraid: %s-%lx[%x], driver owner.\n", "megaraid: %s-[%x], driver owner.\n",
(aor==SCB_ABORT) ? "ABORTING":"RESET", (aor==SCB_ABORT) ? "ABORTING":"RESET",
cmd->serial_number, scb->idx); scb->idx);
mega_free_scb(adapter, scb); mega_free_scb(adapter, scb);


@ -2315,8 +2315,8 @@ megaraid_mbox_dpc(unsigned long devp)
// Was an abort issued for this command earlier // Was an abort issued for this command earlier
if (scb->state & SCB_ABORT) { if (scb->state & SCB_ABORT) {
con_log(CL_ANN, (KERN_NOTICE con_log(CL_ANN, (KERN_NOTICE
"megaraid: aborted cmd %lx[%x] completed\n", "megaraid: aborted cmd [%x] completed\n",
scp->serial_number, scb->sno)); scb->sno));
} }
/* /*
@ -2472,8 +2472,8 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
raid_dev = ADAP2RAIDDEV(adapter); raid_dev = ADAP2RAIDDEV(adapter);
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid: aborting-%ld cmd=%x <c=%d t=%d l=%d>\n", "megaraid: aborting cmd=%x <c=%d t=%d l=%d>\n",
scp->serial_number, scp->cmnd[0], SCP2CHANNEL(scp), scp->cmnd[0], SCP2CHANNEL(scp),
SCP2TARGET(scp), SCP2LUN(scp))); SCP2TARGET(scp), SCP2LUN(scp)));
// If FW has stopped responding, simply return failure // If FW has stopped responding, simply return failure
@ -2496,9 +2496,8 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
list_del_init(&scb->list); // from completed list list_del_init(&scb->list); // from completed list
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid: %ld:%d[%d:%d], abort from completed list\n", "megaraid: %d[%d:%d], abort from completed list\n",
scp->serial_number, scb->sno, scb->sno, scb->dev_channel, scb->dev_target));
scb->dev_channel, scb->dev_target));
scp->result = (DID_ABORT << 16); scp->result = (DID_ABORT << 16);
scp->scsi_done(scp); scp->scsi_done(scp);
@ -2527,9 +2526,8 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
ASSERT(!(scb->state & SCB_ISSUED)); ASSERT(!(scb->state & SCB_ISSUED));
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid abort: %ld[%d:%d], driver owner\n", "megaraid abort: [%d:%d], driver owner\n",
scp->serial_number, scb->dev_channel, scb->dev_channel, scb->dev_target));
scb->dev_target));
scp->result = (DID_ABORT << 16); scp->result = (DID_ABORT << 16);
scp->scsi_done(scp); scp->scsi_done(scp);
@ -2560,25 +2558,21 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
if (!(scb->state & SCB_ISSUED)) { if (!(scb->state & SCB_ISSUED)) {
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid abort: %ld%d[%d:%d], invalid state\n", "megaraid abort: %d[%d:%d], invalid state\n",
scp->serial_number, scb->sno, scb->dev_channel, scb->sno, scb->dev_channel, scb->dev_target));
scb->dev_target));
BUG(); BUG();
} }
else { else {
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid abort: %ld:%d[%d:%d], fw owner\n", "megaraid abort: %d[%d:%d], fw owner\n",
scp->serial_number, scb->sno, scb->dev_channel, scb->sno, scb->dev_channel, scb->dev_target));
scb->dev_target));
} }
} }
} }
spin_unlock_irq(&adapter->lock); spin_unlock_irq(&adapter->lock);
if (!found) { if (!found) {
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING "megaraid abort: do not own\n"));
"megaraid abort: scsi cmd:%ld, do now own\n",
scp->serial_number));
// FIXME: Should there be a callback for this command? // FIXME: Should there be a callback for this command?
return SUCCESS; return SUCCESS;
@ -2649,9 +2643,8 @@ megaraid_reset_handler(struct scsi_cmnd *scp)
} else { } else {
if (scb->scp == scp) { // Found command if (scb->scp == scp) { // Found command
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid: %ld:%d[%d:%d], reset from pending list\n", "megaraid: %d[%d:%d], reset from pending list\n",
scp->serial_number, scb->sno, scb->sno, scb->dev_channel, scb->dev_target));
scb->dev_channel, scb->dev_target));
} else { } else {
con_log(CL_ANN, (KERN_WARNING con_log(CL_ANN, (KERN_WARNING
"megaraid: IO packet with %d[%d:%d] being reset\n", "megaraid: IO packet with %d[%d:%d] being reset\n",


@ -1751,10 +1751,9 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance)
list_del_init(&reset_cmd->list); list_del_init(&reset_cmd->list);
if (reset_cmd->scmd) { if (reset_cmd->scmd) {
reset_cmd->scmd->result = DID_RESET << 16; reset_cmd->scmd->result = DID_RESET << 16;
printk(KERN_NOTICE "%d:%p reset [%02x], %#lx\n", printk(KERN_NOTICE "%d:%p reset [%02x]\n",
reset_index, reset_cmd, reset_index, reset_cmd,
reset_cmd->scmd->cmnd[0], reset_cmd->scmd->cmnd[0]);
reset_cmd->scmd->serial_number);
reset_cmd->scmd->scsi_done(reset_cmd->scmd); reset_cmd->scmd->scsi_done(reset_cmd->scmd);
megasas_return_cmd(instance, reset_cmd); megasas_return_cmd(instance, reset_cmd);
@ -1879,8 +1878,8 @@ static int megasas_generic_reset(struct scsi_cmnd *scmd)
instance = (struct megasas_instance *)scmd->device->host->hostdata; instance = (struct megasas_instance *)scmd->device->host->hostdata;
scmd_printk(KERN_NOTICE, scmd, "megasas: RESET -%ld cmd=%x retries=%x\n", scmd_printk(KERN_NOTICE, scmd, "megasas: RESET cmd=%x retries=%x\n",
scmd->serial_number, scmd->cmnd[0], scmd->retries); scmd->cmnd[0], scmd->retries);
if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) { if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) {
printk(KERN_ERR "megasas: cannot recover from previous reset " printk(KERN_ERR "megasas: cannot recover from previous reset "
@ -2349,9 +2348,9 @@ megasas_issue_pending_cmds_again(struct megasas_instance *instance)
cmd->frame_phys_addr , cmd->frame_phys_addr ,
0, instance->reg_set); 0, instance->reg_set);
} else if (cmd->scmd) { } else if (cmd->scmd) {
printk(KERN_NOTICE "megasas: %p scsi cmd [%02x],%#lx" printk(KERN_NOTICE "megasas: %p scsi cmd [%02x]"
"detected on the internal queue, issue again.\n", "detected on the internal queue, issue again.\n",
cmd, cmd->scmd->cmnd[0], cmd->scmd->serial_number); cmd, cmd->scmd->cmnd[0]);
atomic_inc(&instance->fw_outstanding); atomic_inc(&instance->fw_outstanding);
instance->instancet->fire_cmd(instance, instance->instancet->fire_cmd(instance,


@ -415,8 +415,7 @@ static void mesh_start_cmd(struct mesh_state *ms, struct scsi_cmnd *cmd)
#if 1 #if 1
if (DEBUG_TARGET(cmd)) { if (DEBUG_TARGET(cmd)) {
int i; int i;
printk(KERN_DEBUG "mesh_start: %p ser=%lu tgt=%d cmd=", printk(KERN_DEBUG "mesh_start: %p tgt=%d cmd=", cmd, id);
cmd, cmd->serial_number, id);
for (i = 0; i < cmd->cmd_len; ++i) for (i = 0; i < cmd->cmd_len; ++i)
printk(" %x", cmd->cmnd[i]); printk(" %x", cmd->cmnd[i]);
printk(" use_sg=%d buffer=%p bufflen=%u\n", printk(" use_sg=%d buffer=%p bufflen=%u\n",


@ -522,7 +522,8 @@ _base_display_event_data(struct MPT2SAS_ADAPTER *ioc,
desc = "Device Status Change"; desc = "Device Status Change";
break; break;
case MPI2_EVENT_IR_OPERATION_STATUS: case MPI2_EVENT_IR_OPERATION_STATUS:
desc = "IR Operation Status"; if (!ioc->hide_ir_msg)
desc = "IR Operation Status";
break; break;
case MPI2_EVENT_SAS_DISCOVERY: case MPI2_EVENT_SAS_DISCOVERY:
{ {
@ -553,16 +554,20 @@ _base_display_event_data(struct MPT2SAS_ADAPTER *ioc,
desc = "SAS Enclosure Device Status Change"; desc = "SAS Enclosure Device Status Change";
break; break;
case MPI2_EVENT_IR_VOLUME: case MPI2_EVENT_IR_VOLUME:
desc = "IR Volume"; if (!ioc->hide_ir_msg)
desc = "IR Volume";
break; break;
case MPI2_EVENT_IR_PHYSICAL_DISK: case MPI2_EVENT_IR_PHYSICAL_DISK:
desc = "IR Physical Disk"; if (!ioc->hide_ir_msg)
desc = "IR Physical Disk";
break; break;
case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST: case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
desc = "IR Configuration Change List"; if (!ioc->hide_ir_msg)
desc = "IR Configuration Change List";
break; break;
case MPI2_EVENT_LOG_ENTRY_ADDED: case MPI2_EVENT_LOG_ENTRY_ADDED:
desc = "Log Entry Added"; if (!ioc->hide_ir_msg)
desc = "Log Entry Added";
break; break;
} }
@ -616,7 +621,10 @@ _base_sas_log_info(struct MPT2SAS_ADAPTER *ioc , u32 log_info)
originator_str = "PL"; originator_str = "PL";
break; break;
case 2: case 2:
originator_str = "IR"; if (!ioc->hide_ir_msg)
originator_str = "IR";
else
originator_str = "WarpDrive";
break; break;
} }
@ -1508,6 +1516,7 @@ mpt2sas_base_free_smid(struct MPT2SAS_ADAPTER *ioc, u16 smid)
} }
ioc->scsi_lookup[i].cb_idx = 0xFF; ioc->scsi_lookup[i].cb_idx = 0xFF;
ioc->scsi_lookup[i].scmd = NULL; ioc->scsi_lookup[i].scmd = NULL;
ioc->scsi_lookup[i].direct_io = 0;
list_add_tail(&ioc->scsi_lookup[i].tracker_list, list_add_tail(&ioc->scsi_lookup[i].tracker_list,
&ioc->free_list); &ioc->free_list);
spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags); spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
@ -1844,10 +1853,12 @@ _base_display_ioc_capabilities(struct MPT2SAS_ADAPTER *ioc)
printk("), "); printk("), ");
printk("Capabilities=("); printk("Capabilities=(");
if (ioc->facts.IOCCapabilities & if (!ioc->hide_ir_msg) {
MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID) { if (ioc->facts.IOCCapabilities &
printk("Raid"); MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID) {
i++; printk("Raid");
i++;
}
} }
if (ioc->facts.IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_TLR) { if (ioc->facts.IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_TLR) {
@ -3680,6 +3691,7 @@ _base_make_ioc_operational(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
u32 reply_address; u32 reply_address;
u16 smid; u16 smid;
struct _tr_list *delayed_tr, *delayed_tr_next; struct _tr_list *delayed_tr, *delayed_tr_next;
u8 hide_flag;
dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
__func__)); __func__));
@ -3706,6 +3718,7 @@ _base_make_ioc_operational(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
ioc->scsi_lookup[i].cb_idx = 0xFF; ioc->scsi_lookup[i].cb_idx = 0xFF;
ioc->scsi_lookup[i].smid = smid; ioc->scsi_lookup[i].smid = smid;
ioc->scsi_lookup[i].scmd = NULL; ioc->scsi_lookup[i].scmd = NULL;
ioc->scsi_lookup[i].direct_io = 0;
list_add_tail(&ioc->scsi_lookup[i].tracker_list, list_add_tail(&ioc->scsi_lookup[i].tracker_list,
&ioc->free_list); &ioc->free_list);
} }
@ -3766,6 +3779,15 @@ _base_make_ioc_operational(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
if (sleep_flag == CAN_SLEEP) if (sleep_flag == CAN_SLEEP)
_base_static_config_pages(ioc); _base_static_config_pages(ioc);
if (ioc->wait_for_port_enable_to_complete && ioc->is_warpdrive) {
if (ioc->manu_pg10.OEMIdentifier == 0x80) {
hide_flag = (u8) (ioc->manu_pg10.OEMSpecificFlags0 &
MFG_PAGE10_HIDE_SSDS_MASK);
if (hide_flag != MFG_PAGE10_HIDE_SSDS_MASK)
ioc->mfg_pg10_hide_flag = hide_flag;
}
}
if (ioc->wait_for_port_enable_to_complete) { if (ioc->wait_for_port_enable_to_complete) {
if (diag_buffer_enable != 0) if (diag_buffer_enable != 0)
mpt2sas_enable_diag_buffer(ioc, diag_buffer_enable); mpt2sas_enable_diag_buffer(ioc, diag_buffer_enable);


@ -69,11 +69,11 @@
#define MPT2SAS_DRIVER_NAME "mpt2sas" #define MPT2SAS_DRIVER_NAME "mpt2sas"
#define MPT2SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>" #define MPT2SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>"
#define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver" #define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver"
#define MPT2SAS_DRIVER_VERSION "08.100.00.00" #define MPT2SAS_DRIVER_VERSION "08.100.00.01"
#define MPT2SAS_MAJOR_VERSION 08 #define MPT2SAS_MAJOR_VERSION 08
#define MPT2SAS_MINOR_VERSION 100 #define MPT2SAS_MINOR_VERSION 100
#define MPT2SAS_BUILD_VERSION 00 #define MPT2SAS_BUILD_VERSION 00
#define MPT2SAS_RELEASE_VERSION 00 #define MPT2SAS_RELEASE_VERSION 01
/* /*
* Set MPT2SAS_SG_DEPTH value based on user input. * Set MPT2SAS_SG_DEPTH value based on user input.
@ -188,6 +188,16 @@
#define MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_SSDID 0x0044 #define MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_SSDID 0x0044
#define MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_SSDID 0x0046 #define MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_SSDID 0x0046
/*
* WarpDrive Specific Log codes
*/
#define MPT2_WARPDRIVE_LOGENTRY (0x8002)
#define MPT2_WARPDRIVE_LC_SSDT (0x41)
#define MPT2_WARPDRIVE_LC_SSDLW (0x43)
#define MPT2_WARPDRIVE_LC_SSDLF (0x44)
#define MPT2_WARPDRIVE_LC_BRMF (0x4D)
/* /*
* per target private data * per target private data
*/ */
@ -199,6 +209,7 @@
* struct MPT2SAS_TARGET - starget private hostdata * struct MPT2SAS_TARGET - starget private hostdata
* @starget: starget object * @starget: starget object
* @sas_address: target sas address * @sas_address: target sas address
* @raid_device: raid_device pointer to access volume data
* @handle: device handle * @handle: device handle
* @num_luns: number luns * @num_luns: number luns
* @flags: MPT_TARGET_FLAGS_XXX flags * @flags: MPT_TARGET_FLAGS_XXX flags
@ -208,6 +219,7 @@
struct MPT2SAS_TARGET { struct MPT2SAS_TARGET {
struct scsi_target *starget; struct scsi_target *starget;
u64 sas_address; u64 sas_address;
struct _raid_device *raid_device;
u16 handle; u16 handle;
int num_luns; int num_luns;
u32 flags; u32 flags;
@ -215,6 +227,7 @@ struct MPT2SAS_TARGET {
u8 tm_busy; u8 tm_busy;
}; };
/* /*
* per device private data * per device private data
*/ */
@ -262,6 +275,12 @@ typedef struct _MPI2_CONFIG_PAGE_MAN_10 {
MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_10, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_10,
Mpi2ManufacturingPage10_t, MPI2_POINTER pMpi2ManufacturingPage10_t; Mpi2ManufacturingPage10_t, MPI2_POINTER pMpi2ManufacturingPage10_t;
#define MFG_PAGE10_HIDE_SSDS_MASK (0x00000003)
#define MFG_PAGE10_HIDE_ALL_DISKS (0x00)
#define MFG_PAGE10_EXPOSE_ALL_DISKS (0x01)
#define MFG_PAGE10_HIDE_IF_VOL_PRESENT (0x02)
struct MPT2SAS_DEVICE { struct MPT2SAS_DEVICE {
struct MPT2SAS_TARGET *sas_target; struct MPT2SAS_TARGET *sas_target;
unsigned int lun; unsigned int lun;
@ -341,6 +360,7 @@ struct _sas_device {
* @sdev: scsi device struct (volumes are single lun) * @sdev: scsi device struct (volumes are single lun)
* @wwid: unique identifier for the volume * @wwid: unique identifier for the volume
* @handle: device handle * @handle: device handle
* @block_size: Block size of the volume
* @id: target id * @id: target id
* @channel: target channel * @channel: target channel
* @volume_type: the raid level * @volume_type: the raid level
@ -348,20 +368,33 @@ struct _sas_device {
* @num_pds: number of hidden raid components * @num_pds: number of hidden raid components
* @responding: used in _scsih_raid_device_mark_responding * @responding: used in _scsih_raid_device_mark_responding
* @percent_complete: resync percent complete * @percent_complete: resync percent complete
 * @direct_io_enabled: Whether direct I/O to PDs is allowed or not
* @stripe_exponent: X where 2powX is the stripe sz in blocks
* @max_lba: Maximum number of LBA in the volume
* @stripe_sz: Stripe Size of the volume
* @device_info: Device info of the volume member disk
* @pd_handle: Array of handles of the physical drives for direct I/O in le16
*/ */
#define MPT_MAX_WARPDRIVE_PDS 8
struct _raid_device { struct _raid_device {
struct list_head list; struct list_head list;
struct scsi_target *starget; struct scsi_target *starget;
struct scsi_device *sdev; struct scsi_device *sdev;
u64 wwid; u64 wwid;
u16 handle; u16 handle;
u16 block_sz;
int id; int id;
int channel; int channel;
u8 volume_type; u8 volume_type;
u32 device_info;
u8 num_pds; u8 num_pds;
u8 responding; u8 responding;
u8 percent_complete; u8 percent_complete;
u8 direct_io_enabled;
u8 stripe_exponent;
u64 max_lba;
u32 stripe_sz;
u32 device_info;
u16 pd_handle[MPT_MAX_WARPDRIVE_PDS];
}; };
/** /**
@ -470,6 +503,7 @@ struct chain_tracker {
* @smid: system message id * @smid: system message id
* @scmd: scsi request pointer * @scmd: scsi request pointer
* @cb_idx: callback index * @cb_idx: callback index
* @direct_io: To indicate whether I/O is direct (WARPDRIVE)
* @chain_list: list of chains associated to this IO * @chain_list: list of chains associated to this IO
* @tracker_list: list of free request (ioc->free_list) * @tracker_list: list of free request (ioc->free_list)
*/ */
@ -477,14 +511,14 @@ struct scsiio_tracker {
u16 smid; u16 smid;
struct scsi_cmnd *scmd; struct scsi_cmnd *scmd;
u8 cb_idx; u8 cb_idx;
u8 direct_io;
struct list_head chain_list; struct list_head chain_list;
struct list_head tracker_list; struct list_head tracker_list;
}; };
/** /**
* struct request_tracker - misc mf request tracker * struct request_tracker - firmware request tracker
* @smid: system message id * @smid: system message id
* @scmd: scsi request pointer
* @cb_idx: callback index * @cb_idx: callback index
* @tracker_list: list of free request (ioc->free_list) * @tracker_list: list of free request (ioc->free_list)
*/ */
@ -832,6 +866,11 @@ struct MPT2SAS_ADAPTER {
u32 diagnostic_flags[MPI2_DIAG_BUF_TYPE_COUNT]; u32 diagnostic_flags[MPI2_DIAG_BUF_TYPE_COUNT];
u32 ring_buffer_offset; u32 ring_buffer_offset;
u32 ring_buffer_sz; u32 ring_buffer_sz;
u8 is_warpdrive;
u8 hide_ir_msg;
u8 mfg_pg10_hide_flag;
u8 hide_drives;
}; };
typedef u8 (*MPT_CALLBACK)(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 msix_index, typedef u8 (*MPT_CALLBACK)(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 msix_index,


@ -1041,7 +1041,10 @@ _ctl_getiocinfo(void __user *arg)
__func__)); __func__));
memset(&karg, 0 , sizeof(karg)); memset(&karg, 0 , sizeof(karg));
karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2; if (ioc->is_warpdrive)
karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2_SSS6200;
else
karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2;
if (ioc->pfacts) if (ioc->pfacts)
karg.port_number = ioc->pfacts[0].PortNumber; karg.port_number = ioc->pfacts[0].PortNumber;
pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision); pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision);


@ -133,6 +133,7 @@ struct mpt2_ioctl_pci_info {
#define MPT2_IOCTL_INTERFACE_FC_IP (0x02) #define MPT2_IOCTL_INTERFACE_FC_IP (0x02)
#define MPT2_IOCTL_INTERFACE_SAS (0x03) #define MPT2_IOCTL_INTERFACE_SAS (0x03)
#define MPT2_IOCTL_INTERFACE_SAS2 (0x04) #define MPT2_IOCTL_INTERFACE_SAS2 (0x04)
#define MPT2_IOCTL_INTERFACE_SAS2_SSS6200 (0x05)
#define MPT2_IOCTL_VERSION_LENGTH (32) #define MPT2_IOCTL_VERSION_LENGTH (32)
/** /**


@ -233,6 +233,9 @@ static struct pci_device_id scsih_pci_table[] = {
PCI_ANY_ID, PCI_ANY_ID }, PCI_ANY_ID, PCI_ANY_ID },
{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2308_3, { MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2308_3,
PCI_ANY_ID, PCI_ANY_ID }, PCI_ANY_ID, PCI_ANY_ID },
/* SSS6200 */
{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SSS6200,
PCI_ANY_ID, PCI_ANY_ID },
{0} /* Terminating entry */ {0} /* Terminating entry */
}; };
MODULE_DEVICE_TABLE(pci, scsih_pci_table); MODULE_DEVICE_TABLE(pci, scsih_pci_table);
@ -1256,6 +1259,7 @@ _scsih_target_alloc(struct scsi_target *starget)
sas_target_priv_data->handle = raid_device->handle; sas_target_priv_data->handle = raid_device->handle;
sas_target_priv_data->sas_address = raid_device->wwid; sas_target_priv_data->sas_address = raid_device->wwid;
sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME; sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME;
sas_target_priv_data->raid_device = raid_device;
raid_device->starget = starget; raid_device->starget = starget;
} }
spin_unlock_irqrestore(&ioc->raid_device_lock, flags); spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
@ -1455,7 +1459,10 @@ static int
_scsih_is_raid(struct device *dev) _scsih_is_raid(struct device *dev)
{ {
struct scsi_device *sdev = to_scsi_device(dev); struct scsi_device *sdev = to_scsi_device(dev);
struct MPT2SAS_ADAPTER *ioc = shost_priv(sdev->host);
if (ioc->is_warpdrive)
return 0;
return (sdev->channel == RAID_CHANNEL) ? 1 : 0; return (sdev->channel == RAID_CHANNEL) ? 1 : 0;
} }
@ -1480,7 +1487,7 @@ _scsih_get_resync(struct device *dev)
sdev->channel); sdev->channel);
spin_unlock_irqrestore(&ioc->raid_device_lock, flags); spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
if (!raid_device) if (!raid_device || ioc->is_warpdrive)
goto out; goto out;
if (mpt2sas_config_get_raid_volume_pg0(ioc, &mpi_reply, &vol_pg0, if (mpt2sas_config_get_raid_volume_pg0(ioc, &mpi_reply, &vol_pg0,
@ -1640,6 +1647,212 @@ _scsih_get_volume_capabilities(struct MPT2SAS_ADAPTER *ioc,
kfree(vol_pg0); kfree(vol_pg0);
} }
/**
* _scsih_disable_ddio - Disable direct I/O for all the volumes
* @ioc: per adapter object
*/
static void
_scsih_disable_ddio(struct MPT2SAS_ADAPTER *ioc)
{
Mpi2RaidVolPage1_t vol_pg1;
Mpi2ConfigReply_t mpi_reply;
struct _raid_device *raid_device;
u16 handle;
u16 ioc_status;
handle = 0xFFFF;
while (!(mpt2sas_config_get_raid_volume_pg1(ioc, &mpi_reply,
&vol_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
MPI2_IOCSTATUS_MASK;
if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
break;
handle = le16_to_cpu(vol_pg1.DevHandle);
raid_device = _scsih_raid_device_find_by_handle(ioc, handle);
if (raid_device)
raid_device->direct_io_enabled = 0;
}
return;
}
/**
* _scsih_get_num_volumes - Get number of volumes in the ioc
* @ioc: per adapter object
*/
static u8
_scsih_get_num_volumes(struct MPT2SAS_ADAPTER *ioc)
{
Mpi2RaidVolPage1_t vol_pg1;
Mpi2ConfigReply_t mpi_reply;
u16 handle;
u8 vol_cnt = 0;
u16 ioc_status;
handle = 0xFFFF;
while (!(mpt2sas_config_get_raid_volume_pg1(ioc, &mpi_reply,
&vol_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
MPI2_IOCSTATUS_MASK;
if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
break;
vol_cnt++;
handle = le16_to_cpu(vol_pg1.DevHandle);
}
return vol_cnt;
}
/**
* _scsih_init_warpdrive_properties - Set properties for warpdrive direct I/O.
* @ioc: per adapter object
* @raid_device: the raid_device object
*/
static void
_scsih_init_warpdrive_properties(struct MPT2SAS_ADAPTER *ioc,
struct _raid_device *raid_device)
{
Mpi2RaidVolPage0_t *vol_pg0;
Mpi2RaidPhysDiskPage0_t pd_pg0;
Mpi2ConfigReply_t mpi_reply;
u16 sz;
u8 num_pds, count;
u64 mb = 1024 * 1024;
u64 tb_2 = 2 * mb * mb;
u64 capacity;
u32 stripe_sz;
u8 i, stripe_exp;
if (!ioc->is_warpdrive)
return;
if (ioc->mfg_pg10_hide_flag == MFG_PAGE10_EXPOSE_ALL_DISKS) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"globally as drives are exposed\n", ioc->name);
return;
}
if (_scsih_get_num_volumes(ioc) > 1) {
_scsih_disable_ddio(ioc);
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"globally as number of drives > 1\n", ioc->name);
return;
}
if ((mpt2sas_config_get_number_pds(ioc, raid_device->handle,
&num_pds)) || !num_pds) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"Failure in computing number of drives\n", ioc->name);
return;
}
sz = offsetof(Mpi2RaidVolPage0_t, PhysDisk) + (num_pds *
sizeof(Mpi2RaidVol0PhysDisk_t));
vol_pg0 = kzalloc(sz, GFP_KERNEL);
if (!vol_pg0) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"Memory allocation failure for RVPG0\n", ioc->name);
return;
}
if ((mpt2sas_config_get_raid_volume_pg0(ioc, &mpi_reply, vol_pg0,
MPI2_RAID_VOLUME_PGAD_FORM_HANDLE, raid_device->handle, sz))) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"Failure in retrieving RVPG0\n", ioc->name);
kfree(vol_pg0);
return;
}
/*
 * WARPDRIVE: If number of physical disks in a volume exceeds the max pds
* assumed for WARPDRIVE, disable direct I/O
*/
if (num_pds > MPT_MAX_WARPDRIVE_PDS) {
printk(MPT2SAS_WARN_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x): num_mem=%d, "
"max_mem_allowed=%d\n", ioc->name, raid_device->handle,
num_pds, MPT_MAX_WARPDRIVE_PDS);
kfree(vol_pg0);
return;
}
for (count = 0; count < num_pds; count++) {
if (mpt2sas_config_get_phys_disk_pg0(ioc, &mpi_reply,
&pd_pg0, MPI2_PHYSDISK_PGAD_FORM_PHYSDISKNUM,
vol_pg0->PhysDisk[count].PhysDiskNum) ||
pd_pg0.DevHandle == MPT2SAS_INVALID_DEVICE_HANDLE) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is "
"disabled for the drive with handle(0x%04x) member"
"handle retrieval failed for member number=%d\n",
ioc->name, raid_device->handle,
vol_pg0->PhysDisk[count].PhysDiskNum);
goto out_error;
}
raid_device->pd_handle[count] = le16_to_cpu(pd_pg0.DevHandle);
}
/*
* Assumption for WD: Direct I/O is not supported if the volume is
* not RAID0, if the stripe size is not 64KB, if the block size is
* not 512 and if the volume size is >2TB
*/
if (raid_device->volume_type != MPI2_RAID_VOL_TYPE_RAID0 ||
le16_to_cpu(vol_pg0->BlockSize) != 512) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x): type=%d, "
"s_sz=%uK, blk_size=%u\n", ioc->name,
raid_device->handle, raid_device->volume_type,
le32_to_cpu(vol_pg0->StripeSize)/2,
le16_to_cpu(vol_pg0->BlockSize));
goto out_error;
}
capacity = (u64) le16_to_cpu(vol_pg0->BlockSize) *
(le64_to_cpu(vol_pg0->MaxLBA) + 1);
if (capacity > tb_2) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x) since drive sz > 2TB\n",
ioc->name, raid_device->handle);
goto out_error;
}
stripe_sz = le32_to_cpu(vol_pg0->StripeSize);
stripe_exp = 0;
for (i = 0; i < 32; i++) {
if (stripe_sz & 1)
break;
stripe_exp++;
stripe_sz >>= 1;
}
if (i == 32) {
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x) invalid stripe sz %uK\n",
ioc->name, raid_device->handle,
le32_to_cpu(vol_pg0->StripeSize)/2);
goto out_error;
}
raid_device->stripe_exponent = stripe_exp;
raid_device->direct_io_enabled = 1;
printk(MPT2SAS_INFO_FMT "WarpDrive : Direct IO is Enabled for the drive"
" with handle(0x%04x)\n", ioc->name, raid_device->handle);
/*
* WARPDRIVE: Though the following fields are not used for direct IO,
 * they are stored for future use:
*/
raid_device->max_lba = le64_to_cpu(vol_pg0->MaxLBA);
raid_device->stripe_sz = le32_to_cpu(vol_pg0->StripeSize);
raid_device->block_sz = le16_to_cpu(vol_pg0->BlockSize);
kfree(vol_pg0);
return;
out_error:
raid_device->direct_io_enabled = 0;
for (count = 0; count < num_pds; count++)
raid_device->pd_handle[count] = 0;
kfree(vol_pg0);
return;
}
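
The stripe_exp loop above is effectively a log2 of the stripe size expressed in blocks (the 2TB ceiling likewise follows from tb_2 = 2 * 2^20 * 2^20 = 2^41 bytes). A minimal standalone sketch of that derivation, using a hypothetical helper name that is not part of the driver:

/* Position of the lowest set bit: for the power-of-two stripe sizes
 * WarpDrive expects, this equals log2 of the stripe size in blocks.
 * Example: a 64KB stripe of 512-byte blocks is 128 blocks, so the
 * returned exponent is 7 (128 == 1 << 7). */
static unsigned int stripe_size_to_exponent(unsigned int stripe_blocks)
{
	unsigned int exp = 0;

	while (stripe_blocks && !(stripe_blocks & 1)) {
		exp++;
		stripe_blocks >>= 1;
	}
	return exp;
}
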
/** /**
* _scsih_enable_tlr - setting TLR flags * _scsih_enable_tlr - setting TLR flags
@ -1710,6 +1923,11 @@ _scsih_slave_configure(struct scsi_device *sdev)
_scsih_get_volume_capabilities(ioc, raid_device); _scsih_get_volume_capabilities(ioc, raid_device);
/*
* WARPDRIVE: Initialize the required data for Direct IO
*/
_scsih_init_warpdrive_properties(ioc, raid_device);
/* RAID Queue Depth Support /* RAID Queue Depth Support
* IS volume = underlying qdepth of drive type, either * IS volume = underlying qdepth of drive type, either
* MPT2SAS_SAS_QUEUE_DEPTH or MPT2SAS_SATA_QUEUE_DEPTH * MPT2SAS_SAS_QUEUE_DEPTH or MPT2SAS_SATA_QUEUE_DEPTH
@ -1757,14 +1975,16 @@ _scsih_slave_configure(struct scsi_device *sdev)
break; break;
} }
sdev_printk(KERN_INFO, sdev, "%s: " if (!ioc->hide_ir_msg)
"handle(0x%04x), wwid(0x%016llx), pd_count(%d), type(%s)\n", sdev_printk(KERN_INFO, sdev, "%s: handle(0x%04x), "
r_level, raid_device->handle, "wwid(0x%016llx), pd_count(%d), type(%s)\n",
(unsigned long long)raid_device->wwid, r_level, raid_device->handle,
raid_device->num_pds, ds); (unsigned long long)raid_device->wwid,
raid_device->num_pds, ds);
_scsih_change_queue_depth(sdev, qdepth, SCSI_QDEPTH_DEFAULT); _scsih_change_queue_depth(sdev, qdepth, SCSI_QDEPTH_DEFAULT);
/* raid transport support */ /* raid transport support */
_scsih_set_level(sdev, raid_device); if (!ioc->is_warpdrive)
_scsih_set_level(sdev, raid_device);
return 0; return 0;
} }
@ -2133,8 +2353,7 @@ mpt2sas_scsih_issue_tm(struct MPT2SAS_ADAPTER *ioc, u16 handle, uint channel,
switch (type) { switch (type) {
case MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK: case MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK:
scmd_lookup = _scsih_scsi_lookup_get(ioc, smid_task); scmd_lookup = _scsih_scsi_lookup_get(ioc, smid_task);
if (scmd_lookup && (scmd_lookup->serial_number == if (scmd_lookup)
scmd->serial_number))
rc = FAILED; rc = FAILED;
else else
rc = SUCCESS; rc = SUCCESS;
@ -2182,16 +2401,20 @@ _scsih_tm_display_info(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd)
struct MPT2SAS_TARGET *priv_target = starget->hostdata; struct MPT2SAS_TARGET *priv_target = starget->hostdata;
struct _sas_device *sas_device = NULL; struct _sas_device *sas_device = NULL;
unsigned long flags; unsigned long flags;
char *device_str = NULL;
if (!priv_target) if (!priv_target)
return; return;
if (ioc->hide_ir_msg)
device_str = "WarpDrive";
else
device_str = "volume";
scsi_print_command(scmd); scsi_print_command(scmd);
if (priv_target->flags & MPT_TARGET_FLAGS_VOLUME) { if (priv_target->flags & MPT_TARGET_FLAGS_VOLUME) {
starget_printk(KERN_INFO, starget, "volume handle(0x%04x), " starget_printk(KERN_INFO, starget, "%s handle(0x%04x), "
"volume wwid(0x%016llx)\n", "%s wwid(0x%016llx)\n", device_str, priv_target->handle,
priv_target->handle, device_str, (unsigned long long)priv_target->sas_address);
(unsigned long long)priv_target->sas_address);
} else { } else {
spin_lock_irqsave(&ioc->sas_device_lock, flags); spin_lock_irqsave(&ioc->sas_device_lock, flags);
sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc, sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
@ -3130,6 +3353,9 @@ _scsih_check_ir_config_unhide_events(struct MPT2SAS_ADAPTER *ioc,
a = 0; a = 0;
b = 0; b = 0;
if (ioc->is_warpdrive)
return;
/* Volume Resets for Deleted or Removed */ /* Volume Resets for Deleted or Removed */
element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0]; element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0];
for (i = 0; i < event_data->NumElements; i++, element++) { for (i = 0; i < event_data->NumElements; i++, element++) {
@ -3346,6 +3572,105 @@ _scsih_eedp_error_handling(struct scsi_cmnd *scmd, u16 ioc_status)
SAM_STAT_CHECK_CONDITION; SAM_STAT_CHECK_CONDITION;
} }
/**
* _scsih_scsi_direct_io_get - returns direct io flag
* @ioc: per adapter object
* @smid: system request message index
*
 * Returns the direct_io flag stored for the smid.
*/
static inline u8
_scsih_scsi_direct_io_get(struct MPT2SAS_ADAPTER *ioc, u16 smid)
{
return ioc->scsi_lookup[smid - 1].direct_io;
}
/**
* _scsih_scsi_direct_io_set - sets direct io flag
* @ioc: per adapter object
* @smid: system request message index
* @direct_io: Zero or non-zero value to set in the direct_io flag
*
* Returns Nothing.
*/
static inline void
_scsih_scsi_direct_io_set(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 direct_io)
{
ioc->scsi_lookup[smid - 1].direct_io = direct_io;
}
/**
* _scsih_setup_direct_io - setup MPI request for WARPDRIVE Direct I/O
* @ioc: per adapter object
* @scmd: pointer to scsi command object
* @raid_device: pointer to raid device data structure
 * @mpi_request: pointer to the SCSI_IO request message frame
* @smid: system request message index
*
* Returns nothing
*/
static void
_scsih_setup_direct_io(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
struct _raid_device *raid_device, Mpi2SCSIIORequest_t *mpi_request,
u16 smid)
{
u32 v_lba, p_lba, stripe_off, stripe_unit, column, io_size;
u32 stripe_sz, stripe_exp;
u8 num_pds, *cdb_ptr, *tmp_ptr, *lba_ptr1, *lba_ptr2;
u8 cdb0 = scmd->cmnd[0];
/*
 * Try Direct I/O to RAID member disks
*/
if (cdb0 == READ_16 || cdb0 == READ_10 ||
cdb0 == WRITE_16 || cdb0 == WRITE_10) {
cdb_ptr = mpi_request->CDB.CDB32;
if ((cdb0 < READ_16) || !(cdb_ptr[2] | cdb_ptr[3] | cdb_ptr[4]
| cdb_ptr[5])) {
io_size = scsi_bufflen(scmd) >> 9;
/* get virtual lba */
lba_ptr1 = lba_ptr2 = (cdb0 < READ_16) ? &cdb_ptr[2] :
&cdb_ptr[6];
tmp_ptr = (u8 *)&v_lba + 3;
*tmp_ptr-- = *lba_ptr1++;
*tmp_ptr-- = *lba_ptr1++;
*tmp_ptr-- = *lba_ptr1++;
*tmp_ptr = *lba_ptr1;
if (((u64)v_lba + (u64)io_size - 1) <=
(u32)raid_device->max_lba) {
stripe_sz = raid_device->stripe_sz;
stripe_exp = raid_device->stripe_exponent;
stripe_off = v_lba & (stripe_sz - 1);
/* Check whether IO falls within a stripe */
if ((stripe_off + io_size) <= stripe_sz) {
num_pds = raid_device->num_pds;
p_lba = v_lba >> stripe_exp;
stripe_unit = p_lba / num_pds;
column = p_lba % num_pds;
p_lba = (stripe_unit << stripe_exp) +
stripe_off;
mpi_request->DevHandle =
cpu_to_le16(raid_device->
pd_handle[column]);
tmp_ptr = (u8 *)&p_lba + 3;
*lba_ptr2++ = *tmp_ptr--;
*lba_ptr2++ = *tmp_ptr--;
*lba_ptr2++ = *tmp_ptr--;
*lba_ptr2 = *tmp_ptr;
/*
 * WD: To indicate this I/O is direct I/O
*/
_scsih_scsi_direct_io_set(ioc, smid, 1);
}
}
}
}
}
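
Under the assumptions spelled out in the code above (single RAID0 volume, power-of-two stripe, the I/O contained within one stripe, and an LBA that fits in 32 bits), the translation performed by _scsih_setup_direct_io can be sketched as the hypothetical helper below; the names are illustrative only, the real code patches the CDB and DevHandle in place. Note that if such a redirected I/O later fails, _scsih_io_done restores the original CDB and volume handle and reissues the command through the volume (see the corresponding hunk further down).

struct wd_map {
	unsigned int column;	/* index into raid_device->pd_handle[] */
	unsigned int p_lba;	/* LBA on that member disk */
};

/* Illustrative only. Example with stripe_exp = 7 (64KB stripe, 128
 * blocks) and num_pds = 4: virtual LBA 1000 is stripe 7 at offset 104,
 * so column = 7 % 4 = 3 and p_lba = (7 / 4) * 128 + 104 = 232. */
static struct wd_map wd_map_raid0(unsigned int v_lba,
	unsigned int stripe_exp, unsigned int num_pds)
{
	unsigned int stripe_sz = 1u << stripe_exp;
	unsigned int stripe_off = v_lba & (stripe_sz - 1);
	unsigned int stripe_num = v_lba >> stripe_exp;
	struct wd_map m;

	m.column = stripe_num % num_pds;
	m.p_lba = ((stripe_num / num_pds) << stripe_exp) + stripe_off;
	return m;
}
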
/** /**
* _scsih_qcmd - main scsi request entry point * _scsih_qcmd - main scsi request entry point
* @scmd: pointer to scsi command object * @scmd: pointer to scsi command object
@ -3363,6 +3688,7 @@ _scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host); struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
struct MPT2SAS_DEVICE *sas_device_priv_data; struct MPT2SAS_DEVICE *sas_device_priv_data;
struct MPT2SAS_TARGET *sas_target_priv_data; struct MPT2SAS_TARGET *sas_target_priv_data;
struct _raid_device *raid_device;
Mpi2SCSIIORequest_t *mpi_request; Mpi2SCSIIORequest_t *mpi_request;
u32 mpi_control; u32 mpi_control;
u16 smid; u16 smid;
@ -3424,8 +3750,10 @@ _scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
} else } else
mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ; mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
/* Make sure Device is not raid volume */ /* Make sure Device is not raid volume.
if (!_scsih_is_raid(&scmd->device->sdev_gendev) && * We do not expose raid functionality to upper layer for warpdrive.
*/
if (!ioc->is_warpdrive && !_scsih_is_raid(&scmd->device->sdev_gendev) &&
sas_is_tlr_enabled(scmd->device) && scmd->cmd_len != 32) sas_is_tlr_enabled(scmd->device) && scmd->cmd_len != 32)
mpi_control |= MPI2_SCSIIO_CONTROL_TLR_ON; mpi_control |= MPI2_SCSIIO_CONTROL_TLR_ON;
@ -3473,9 +3801,14 @@ _scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
} }
} }
raid_device = sas_target_priv_data->raid_device;
if (raid_device && raid_device->direct_io_enabled)
_scsih_setup_direct_io(ioc, scmd, raid_device, mpi_request,
smid);
if (likely(mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST)) if (likely(mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST))
mpt2sas_base_put_smid_scsi_io(ioc, smid, mpt2sas_base_put_smid_scsi_io(ioc, smid,
sas_device_priv_data->sas_target->handle); le16_to_cpu(mpi_request->DevHandle));
else else
mpt2sas_base_put_smid_default(ioc, smid); mpt2sas_base_put_smid_default(ioc, smid);
return 0; return 0;
@ -3540,10 +3873,16 @@ _scsih_scsi_ioc_info(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
unsigned long flags; unsigned long flags;
struct scsi_target *starget = scmd->device->sdev_target; struct scsi_target *starget = scmd->device->sdev_target;
struct MPT2SAS_TARGET *priv_target = starget->hostdata; struct MPT2SAS_TARGET *priv_target = starget->hostdata;
char *device_str = NULL;
if (!priv_target) if (!priv_target)
return; return;
if (ioc->hide_ir_msg)
device_str = "WarpDrive";
else
device_str = "volume";
if (log_info == 0x31170000) if (log_info == 0x31170000)
return; return;
@ -3660,8 +3999,8 @@ _scsih_scsi_ioc_info(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
scsi_print_command(scmd); scsi_print_command(scmd);
if (priv_target->flags & MPT_TARGET_FLAGS_VOLUME) { if (priv_target->flags & MPT_TARGET_FLAGS_VOLUME) {
printk(MPT2SAS_WARN_FMT "\tvolume wwid(0x%016llx)\n", ioc->name, printk(MPT2SAS_WARN_FMT "\t%s wwid(0x%016llx)\n", ioc->name,
(unsigned long long)priv_target->sas_address); device_str, (unsigned long long)priv_target->sas_address);
} else { } else {
spin_lock_irqsave(&ioc->sas_device_lock, flags); spin_lock_irqsave(&ioc->sas_device_lock, flags);
sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc, sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
@ -3840,6 +4179,20 @@ _scsih_io_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
scmd->result = DID_NO_CONNECT << 16; scmd->result = DID_NO_CONNECT << 16;
goto out; goto out;
} }
/*
* WARPDRIVE: If direct_io is set then it is directIO,
* the failed direct I/O should be redirected to volume
*/
if (_scsih_scsi_direct_io_get(ioc, smid)) {
_scsih_scsi_direct_io_set(ioc, smid, 0);
memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
mpi_request->DevHandle =
cpu_to_le16(sas_device_priv_data->sas_target->handle);
mpt2sas_base_put_smid_scsi_io(ioc, smid,
sas_device_priv_data->sas_target->handle);
return 0;
}
/* turning off TLR */ /* turning off TLR */
scsi_state = mpi_reply->SCSIState; scsi_state = mpi_reply->SCSIState;
@ -3848,7 +4201,10 @@ _scsih_io_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
le32_to_cpu(mpi_reply->ResponseInfo) & 0xFF; le32_to_cpu(mpi_reply->ResponseInfo) & 0xFF;
if (!sas_device_priv_data->tlr_snoop_check) { if (!sas_device_priv_data->tlr_snoop_check) {
sas_device_priv_data->tlr_snoop_check++; sas_device_priv_data->tlr_snoop_check++;
if (!_scsih_is_raid(&scmd->device->sdev_gendev) && /* Make sure Device is not raid volume.
* We do not expose raid functionality to upper layer for warpdrive.
*/
if (!ioc->is_warpdrive && !_scsih_is_raid(&scmd->device->sdev_gendev) &&
sas_is_tlr_enabled(scmd->device) && sas_is_tlr_enabled(scmd->device) &&
response_code == MPI2_SCSITASKMGMT_RSP_INVALID_FRAME) { response_code == MPI2_SCSITASKMGMT_RSP_INVALID_FRAME) {
sas_disable_tlr(scmd->device); sas_disable_tlr(scmd->device);
@ -4681,8 +5037,10 @@ _scsih_remove_device(struct MPT2SAS_ADAPTER *ioc,
_scsih_ublock_io_device(ioc, sas_device_backup.handle); _scsih_ublock_io_device(ioc, sas_device_backup.handle);
mpt2sas_transport_port_remove(ioc, sas_device_backup.sas_address, if (!ioc->hide_drives)
sas_device_backup.sas_address_parent); mpt2sas_transport_port_remove(ioc,
sas_device_backup.sas_address,
sas_device_backup.sas_address_parent);
printk(MPT2SAS_INFO_FMT "removing handle(0x%04x), sas_addr" printk(MPT2SAS_INFO_FMT "removing handle(0x%04x), sas_addr"
"(0x%016llx)\n", ioc->name, sas_device_backup.handle, "(0x%016llx)\n", ioc->name, sas_device_backup.handle,
@ -5413,6 +5771,7 @@ _scsih_sas_pd_hide(struct MPT2SAS_ADAPTER *ioc,
&sas_device->volume_wwid); &sas_device->volume_wwid);
set_bit(handle, ioc->pd_handles); set_bit(handle, ioc->pd_handles);
_scsih_reprobe_target(sas_device->starget, 1); _scsih_reprobe_target(sas_device->starget, 1);
} }
/** /**
@ -5591,7 +5950,8 @@ _scsih_sas_ir_config_change_event(struct MPT2SAS_ADAPTER *ioc,
Mpi2EventDataIrConfigChangeList_t *event_data = fw_event->event_data; Mpi2EventDataIrConfigChangeList_t *event_data = fw_event->event_data;
#ifdef CONFIG_SCSI_MPT2SAS_LOGGING #ifdef CONFIG_SCSI_MPT2SAS_LOGGING
if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK) if ((ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
&& !ioc->hide_ir_msg)
_scsih_sas_ir_config_change_event_debug(ioc, event_data); _scsih_sas_ir_config_change_event_debug(ioc, event_data);
#endif #endif
@ -5614,16 +5974,20 @@ _scsih_sas_ir_config_change_event(struct MPT2SAS_ADAPTER *ioc,
le16_to_cpu(element->VolDevHandle)); le16_to_cpu(element->VolDevHandle));
break; break;
case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED: case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED:
_scsih_sas_pd_hide(ioc, element); if (!ioc->is_warpdrive)
_scsih_sas_pd_hide(ioc, element);
break; break;
case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED: case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED:
_scsih_sas_pd_expose(ioc, element); if (!ioc->is_warpdrive)
_scsih_sas_pd_expose(ioc, element);
break; break;
case MPI2_EVENT_IR_CHANGE_RC_HIDE: case MPI2_EVENT_IR_CHANGE_RC_HIDE:
_scsih_sas_pd_add(ioc, element); if (!ioc->is_warpdrive)
_scsih_sas_pd_add(ioc, element);
break; break;
case MPI2_EVENT_IR_CHANGE_RC_UNHIDE: case MPI2_EVENT_IR_CHANGE_RC_UNHIDE:
_scsih_sas_pd_delete(ioc, element); if (!ioc->is_warpdrive)
_scsih_sas_pd_delete(ioc, element);
break; break;
} }
} }
@ -5654,9 +6018,10 @@ _scsih_sas_ir_volume_event(struct MPT2SAS_ADAPTER *ioc,
handle = le16_to_cpu(event_data->VolDevHandle); handle = le16_to_cpu(event_data->VolDevHandle);
state = le32_to_cpu(event_data->NewValue); state = le32_to_cpu(event_data->NewValue);
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: handle(0x%04x), " if (!ioc->hide_ir_msg)
"old(0x%08x), new(0x%08x)\n", ioc->name, __func__, handle, dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: handle(0x%04x), "
le32_to_cpu(event_data->PreviousValue), state)); "old(0x%08x), new(0x%08x)\n", ioc->name, __func__, handle,
le32_to_cpu(event_data->PreviousValue), state));
switch (state) { switch (state) {
case MPI2_RAID_VOL_STATE_MISSING: case MPI2_RAID_VOL_STATE_MISSING:
@ -5736,9 +6101,10 @@ _scsih_sas_ir_physical_disk_event(struct MPT2SAS_ADAPTER *ioc,
handle = le16_to_cpu(event_data->PhysDiskDevHandle); handle = le16_to_cpu(event_data->PhysDiskDevHandle);
state = le32_to_cpu(event_data->NewValue); state = le32_to_cpu(event_data->NewValue);
dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: handle(0x%04x), " if (!ioc->hide_ir_msg)
"old(0x%08x), new(0x%08x)\n", ioc->name, __func__, handle, dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: handle(0x%04x), "
le32_to_cpu(event_data->PreviousValue), state)); "old(0x%08x), new(0x%08x)\n", ioc->name, __func__, handle,
le32_to_cpu(event_data->PreviousValue), state));
switch (state) { switch (state) {
case MPI2_RAID_PD_STATE_ONLINE: case MPI2_RAID_PD_STATE_ONLINE:
@ -5747,7 +6113,8 @@ _scsih_sas_ir_physical_disk_event(struct MPT2SAS_ADAPTER *ioc,
case MPI2_RAID_PD_STATE_OPTIMAL: case MPI2_RAID_PD_STATE_OPTIMAL:
case MPI2_RAID_PD_STATE_HOT_SPARE: case MPI2_RAID_PD_STATE_HOT_SPARE:
set_bit(handle, ioc->pd_handles); if (!ioc->is_warpdrive)
set_bit(handle, ioc->pd_handles);
spin_lock_irqsave(&ioc->sas_device_lock, flags); spin_lock_irqsave(&ioc->sas_device_lock, flags);
sas_device = _scsih_sas_device_find_by_handle(ioc, handle); sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
@ -5851,7 +6218,8 @@ _scsih_sas_ir_operation_status_event(struct MPT2SAS_ADAPTER *ioc,
u16 handle; u16 handle;
#ifdef CONFIG_SCSI_MPT2SAS_LOGGING #ifdef CONFIG_SCSI_MPT2SAS_LOGGING
if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK) if ((ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
&& !ioc->hide_ir_msg)
_scsih_sas_ir_operation_status_event_debug(ioc, _scsih_sas_ir_operation_status_event_debug(ioc,
event_data); event_data);
#endif #endif
@ -5910,7 +6278,7 @@ static void
_scsih_mark_responding_sas_device(struct MPT2SAS_ADAPTER *ioc, u64 sas_address, _scsih_mark_responding_sas_device(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
u16 slot, u16 handle) u16 slot, u16 handle)
{ {
struct MPT2SAS_TARGET *sas_target_priv_data; struct MPT2SAS_TARGET *sas_target_priv_data = NULL;
struct scsi_target *starget; struct scsi_target *starget;
struct _sas_device *sas_device; struct _sas_device *sas_device;
unsigned long flags; unsigned long flags;
@ -5918,7 +6286,7 @@ _scsih_mark_responding_sas_device(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
spin_lock_irqsave(&ioc->sas_device_lock, flags); spin_lock_irqsave(&ioc->sas_device_lock, flags);
list_for_each_entry(sas_device, &ioc->sas_device_list, list) { list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
if (sas_device->sas_address == sas_address && if (sas_device->sas_address == sas_address &&
sas_device->slot == slot && sas_device->starget) { sas_device->slot == slot) {
sas_device->responding = 1; sas_device->responding = 1;
starget = sas_device->starget; starget = sas_device->starget;
if (starget && starget->hostdata) { if (starget && starget->hostdata) {
@ -5927,13 +6295,15 @@ _scsih_mark_responding_sas_device(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
sas_target_priv_data->deleted = 0; sas_target_priv_data->deleted = 0;
} else } else
sas_target_priv_data = NULL; sas_target_priv_data = NULL;
starget_printk(KERN_INFO, sas_device->starget, if (starget)
"handle(0x%04x), sas_addr(0x%016llx), enclosure " starget_printk(KERN_INFO, starget,
"logical id(0x%016llx), slot(%d)\n", handle, "handle(0x%04x), sas_addr(0x%016llx), "
(unsigned long long)sas_device->sas_address, "enclosure logical id(0x%016llx), "
(unsigned long long) "slot(%d)\n", handle,
sas_device->enclosure_logical_id, (unsigned long long)sas_device->sas_address,
sas_device->slot); (unsigned long long)
sas_device->enclosure_logical_id,
sas_device->slot);
if (sas_device->handle == handle) if (sas_device->handle == handle)
goto out; goto out;
printk(KERN_INFO "\thandle changed from(0x%04x)!!!\n", printk(KERN_INFO "\thandle changed from(0x%04x)!!!\n",
@ -6025,6 +6395,12 @@ _scsih_mark_responding_raid_device(struct MPT2SAS_ADAPTER *ioc, u64 wwid,
starget_printk(KERN_INFO, raid_device->starget, starget_printk(KERN_INFO, raid_device->starget,
"handle(0x%04x), wwid(0x%016llx)\n", handle, "handle(0x%04x), wwid(0x%016llx)\n", handle,
(unsigned long long)raid_device->wwid); (unsigned long long)raid_device->wwid);
/*
* WARPDRIVE: The handles of the PDs might have changed
* across the host reset so re-initialize the
* required data for Direct IO
*/
_scsih_init_warpdrive_properties(ioc, raid_device);
if (raid_device->handle == handle) if (raid_device->handle == handle)
goto out; goto out;
printk(KERN_INFO "\thandle changed from(0x%04x)!!!\n", printk(KERN_INFO "\thandle changed from(0x%04x)!!!\n",
@ -6086,18 +6462,20 @@ _scsih_search_responding_raid_devices(struct MPT2SAS_ADAPTER *ioc)
} }
/* refresh the pd_handles */ /* refresh the pd_handles */
phys_disk_num = 0xFF; if (!ioc->is_warpdrive) {
memset(ioc->pd_handles, 0, ioc->pd_handles_sz); phys_disk_num = 0xFF;
while (!(mpt2sas_config_get_phys_disk_pg0(ioc, &mpi_reply, memset(ioc->pd_handles, 0, ioc->pd_handles_sz);
&pd_pg0, MPI2_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM, while (!(mpt2sas_config_get_phys_disk_pg0(ioc, &mpi_reply,
phys_disk_num))) { &pd_pg0, MPI2_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM,
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & phys_disk_num))) {
MPI2_IOCSTATUS_MASK; ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE) MPI2_IOCSTATUS_MASK;
break; if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
phys_disk_num = pd_pg0.PhysDiskNum; break;
handle = le16_to_cpu(pd_pg0.DevHandle); phys_disk_num = pd_pg0.PhysDiskNum;
set_bit(handle, ioc->pd_handles); handle = le16_to_cpu(pd_pg0.DevHandle);
set_bit(handle, ioc->pd_handles);
}
} }
} }
@ -6242,6 +6620,50 @@ _scsih_remove_unresponding_sas_devices(struct MPT2SAS_ADAPTER *ioc)
} }
} }
/**
* _scsih_hide_unhide_sas_devices - add/remove device to/from OS
* @ioc: per adapter object
*
* Return nothing.
*/
static void
_scsih_hide_unhide_sas_devices(struct MPT2SAS_ADAPTER *ioc)
{
struct _sas_device *sas_device, *sas_device_next;
if (!ioc->is_warpdrive || ioc->mfg_pg10_hide_flag !=
MFG_PAGE10_HIDE_IF_VOL_PRESENT)
return;
if (ioc->hide_drives) {
if (_scsih_get_num_volumes(ioc))
return;
ioc->hide_drives = 0;
list_for_each_entry_safe(sas_device, sas_device_next,
&ioc->sas_device_list, list) {
if (!mpt2sas_transport_port_add(ioc, sas_device->handle,
sas_device->sas_address_parent)) {
_scsih_sas_device_remove(ioc, sas_device);
} else if (!sas_device->starget) {
mpt2sas_transport_port_remove(ioc,
sas_device->sas_address,
sas_device->sas_address_parent);
_scsih_sas_device_remove(ioc, sas_device);
}
}
} else {
if (!_scsih_get_num_volumes(ioc))
return;
ioc->hide_drives = 1;
list_for_each_entry_safe(sas_device, sas_device_next,
&ioc->sas_device_list, list) {
mpt2sas_transport_port_remove(ioc,
sas_device->sas_address,
sas_device->sas_address_parent);
}
}
}
/** /**
* mpt2sas_scsih_reset_handler - reset callback handler (for scsih) * mpt2sas_scsih_reset_handler - reset callback handler (for scsih)
* @ioc: per adapter object * @ioc: per adapter object
@ -6326,6 +6748,7 @@ _firmware_event_work(struct work_struct *work)
spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock,
flags); flags);
_scsih_remove_unresponding_sas_devices(ioc); _scsih_remove_unresponding_sas_devices(ioc);
_scsih_hide_unhide_sas_devices(ioc);
return; return;
} }
@ -6425,6 +6848,53 @@ mpt2sas_scsih_event_callback(struct MPT2SAS_ADAPTER *ioc, u8 msix_index,
(Mpi2EventDataIrVolume_t *) (Mpi2EventDataIrVolume_t *)
mpi_reply->EventData); mpi_reply->EventData);
break; break;
case MPI2_EVENT_LOG_ENTRY_ADDED:
{
Mpi2EventDataLogEntryAdded_t *log_entry;
u32 *log_code;
if (!ioc->is_warpdrive)
break;
log_entry = (Mpi2EventDataLogEntryAdded_t *)
mpi_reply->EventData;
log_code = (u32 *)log_entry->LogData;
if (le16_to_cpu(log_entry->LogEntryQualifier)
!= MPT2_WARPDRIVE_LOGENTRY)
break;
switch (le32_to_cpu(*log_code)) {
case MPT2_WARPDRIVE_LC_SSDT:
printk(MPT2SAS_WARN_FMT "WarpDrive Warning: "
"IO Throttling has occurred in the WarpDrive "
"subsystem. Check WarpDrive documentation for "
"additional details.\n", ioc->name);
break;
case MPT2_WARPDRIVE_LC_SSDLW:
printk(MPT2SAS_WARN_FMT "WarpDrive Warning: "
"Program/Erase Cycles for the WarpDrive subsystem "
"in degraded range. Check WarpDrive documentation "
"for additional details.\n", ioc->name);
break;
case MPT2_WARPDRIVE_LC_SSDLF:
printk(MPT2SAS_ERR_FMT "WarpDrive Fatal Error: "
"There are no Program/Erase Cycles for the "
"WarpDrive subsystem. The storage device will be "
"in read-only mode. Check WarpDrive documentation "
"for additional details.\n", ioc->name);
break;
case MPT2_WARPDRIVE_LC_BRMF:
printk(MPT2SAS_ERR_FMT "WarpDrive Fatal Error: "
"The Backup Rail Monitor has failed on the "
"WarpDrive subsystem. Check WarpDrive "
"documentation for additional details.\n",
ioc->name);
break;
}
break;
}
case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE: case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE:
case MPI2_EVENT_IR_OPERATION_STATUS: case MPI2_EVENT_IR_OPERATION_STATUS:
case MPI2_EVENT_SAS_DISCOVERY: case MPI2_EVENT_SAS_DISCOVERY:
@ -6583,7 +7053,8 @@ _scsih_ir_shutdown(struct MPT2SAS_ADAPTER *ioc)
mpi_request->Function = MPI2_FUNCTION_RAID_ACTION; mpi_request->Function = MPI2_FUNCTION_RAID_ACTION;
mpi_request->Action = MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED; mpi_request->Action = MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED;
printk(MPT2SAS_INFO_FMT "IR shutdown (sending)\n", ioc->name); if (!ioc->hide_ir_msg)
printk(MPT2SAS_INFO_FMT "IR shutdown (sending)\n", ioc->name);
init_completion(&ioc->scsih_cmds.done); init_completion(&ioc->scsih_cmds.done);
mpt2sas_base_put_smid_default(ioc, smid); mpt2sas_base_put_smid_default(ioc, smid);
wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ); wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
@ -6597,10 +7068,11 @@ _scsih_ir_shutdown(struct MPT2SAS_ADAPTER *ioc)
if (ioc->scsih_cmds.status & MPT2_CMD_REPLY_VALID) { if (ioc->scsih_cmds.status & MPT2_CMD_REPLY_VALID) {
mpi_reply = ioc->scsih_cmds.reply; mpi_reply = ioc->scsih_cmds.reply;
printk(MPT2SAS_INFO_FMT "IR shutdown (complete): " if (!ioc->hide_ir_msg)
"ioc_status(0x%04x), loginfo(0x%08x)\n", printk(MPT2SAS_INFO_FMT "IR shutdown (complete): "
ioc->name, le16_to_cpu(mpi_reply->IOCStatus), "ioc_status(0x%04x), loginfo(0x%08x)\n",
le32_to_cpu(mpi_reply->IOCLogInfo)); ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
le32_to_cpu(mpi_reply->IOCLogInfo));
} }
out: out:
@ -6759,6 +7231,9 @@ _scsih_probe_boot_devices(struct MPT2SAS_ADAPTER *ioc)
spin_lock_irqsave(&ioc->sas_device_lock, flags); spin_lock_irqsave(&ioc->sas_device_lock, flags);
list_move_tail(&sas_device->list, &ioc->sas_device_list); list_move_tail(&sas_device->list, &ioc->sas_device_list);
spin_unlock_irqrestore(&ioc->sas_device_lock, flags); spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
if (ioc->hide_drives)
return;
if (!mpt2sas_transport_port_add(ioc, sas_device->handle, if (!mpt2sas_transport_port_add(ioc, sas_device->handle,
sas_device->sas_address_parent)) { sas_device->sas_address_parent)) {
_scsih_sas_device_remove(ioc, sas_device); _scsih_sas_device_remove(ioc, sas_device);
@ -6812,6 +7287,9 @@ _scsih_probe_sas(struct MPT2SAS_ADAPTER *ioc)
list_move_tail(&sas_device->list, &ioc->sas_device_list); list_move_tail(&sas_device->list, &ioc->sas_device_list);
spin_unlock_irqrestore(&ioc->sas_device_lock, flags); spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
if (ioc->hide_drives)
continue;
if (!mpt2sas_transport_port_add(ioc, sas_device->handle, if (!mpt2sas_transport_port_add(ioc, sas_device->handle,
sas_device->sas_address_parent)) { sas_device->sas_address_parent)) {
_scsih_sas_device_remove(ioc, sas_device); _scsih_sas_device_remove(ioc, sas_device);
@ -6882,6 +7360,11 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ioc->id = mpt_ids++; ioc->id = mpt_ids++;
sprintf(ioc->name, "%s%d", MPT2SAS_DRIVER_NAME, ioc->id); sprintf(ioc->name, "%s%d", MPT2SAS_DRIVER_NAME, ioc->id);
ioc->pdev = pdev; ioc->pdev = pdev;
if (id->device == MPI2_MFGPAGE_DEVID_SSS6200) {
ioc->is_warpdrive = 1;
ioc->hide_ir_msg = 1;
} else
ioc->mfg_pg10_hide_flag = MFG_PAGE10_EXPOSE_ALL_DISKS;
ioc->scsi_io_cb_idx = scsi_io_cb_idx; ioc->scsi_io_cb_idx = scsi_io_cb_idx;
ioc->tm_cb_idx = tm_cb_idx; ioc->tm_cb_idx = tm_cb_idx;
ioc->ctl_cb_idx = ctl_cb_idx; ioc->ctl_cb_idx = ctl_cb_idx;
@ -6947,6 +7430,20 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
} }
ioc->wait_for_port_enable_to_complete = 0; ioc->wait_for_port_enable_to_complete = 0;
if (ioc->is_warpdrive) {
if (ioc->mfg_pg10_hide_flag == MFG_PAGE10_EXPOSE_ALL_DISKS)
ioc->hide_drives = 0;
else if (ioc->mfg_pg10_hide_flag == MFG_PAGE10_HIDE_ALL_DISKS)
ioc->hide_drives = 1;
else {
if (_scsih_get_num_volumes(ioc))
ioc->hide_drives = 1;
else
ioc->hide_drives = 0;
}
} else
ioc->hide_drives = 0;
_scsih_probe_devices(ioc); _scsih_probe_devices(ioc);
return 0; return 0;
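
The hide_drives policy chosen above can be summarised as a small helper; this is an illustrative sketch only (the helper name is hypothetical, the MFG_PAGE10_* constants are the ones added to mpt2sas_base.h earlier in this diff):

/* EXPOSE_ALL_DISKS: member disks always visible; HIDE_ALL_DISKS: member
 * disks never exposed; HIDE_IF_VOL_PRESENT: members hidden only while at
 * least one volume exists (re-evaluated by _scsih_hide_unhide_sas_devices
 * after rescans). */
static unsigned int wd_hide_drives(unsigned int mfg_pg10_hide_flag,
	unsigned int num_volumes)
{
	if (mfg_pg10_hide_flag == MFG_PAGE10_EXPOSE_ALL_DISKS)
		return 0;
	if (mfg_pg10_hide_flag == MFG_PAGE10_HIDE_ALL_DISKS)
		return 1;
	return num_volumes ? 1 : 0;	/* MFG_PAGE10_HIDE_IF_VOL_PRESENT */
}
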


@ -3,6 +3,7 @@
# #
# Copyright 2007 Red Hat, Inc. # Copyright 2007 Red Hat, Inc.
# Copyright 2008 Marvell. <kewei@marvell.com> # Copyright 2008 Marvell. <kewei@marvell.com>
# Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
# #
# This file is licensed under GPLv2. # This file is licensed under GPLv2.
# #


@ -3,6 +3,7 @@
# #
# Copyright 2007 Red Hat, Inc. # Copyright 2007 Red Hat, Inc.
# Copyright 2008 Marvell. <kewei@marvell.com> # Copyright 2008 Marvell. <kewei@marvell.com>
# Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
# #
# This file is licensed under GPLv2. # This file is licensed under GPLv2.
# #


@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *


@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *


@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *


@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *


@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *


@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *
@ -34,6 +35,8 @@ enum chip_flavors {
chip_6485, chip_6485,
chip_9480, chip_9480,
chip_9180, chip_9180,
chip_9445,
chip_9485,
chip_1300, chip_1300,
chip_1320 chip_1320
}; };

View File

@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *
@ -25,13 +26,24 @@
#include "mv_sas.h" #include "mv_sas.h"
static int lldd_max_execute_num = 1;
module_param_named(collector, lldd_max_execute_num, int, S_IRUGO);
MODULE_PARM_DESC(collector, "\n"
"\tIf greater than one, tells the SAS Layer to run in Task Collector\n"
"\tMode. If 1 or 0, tells the SAS Layer to run in Direct Mode.\n"
"\tThe mvsas SAS LLDD supports both modes.\n"
"\tDefault: 1 (Direct Mode).\n");
static struct scsi_transport_template *mvs_stt; static struct scsi_transport_template *mvs_stt;
struct kmem_cache *mvs_task_list_cache;
static const struct mvs_chip_info mvs_chips[] = { static const struct mvs_chip_info mvs_chips[] = {
[chip_6320] = { 1, 2, 0x400, 17, 16, 9, &mvs_64xx_dispatch, }, [chip_6320] = { 1, 2, 0x400, 17, 16, 9, &mvs_64xx_dispatch, },
[chip_6440] = { 1, 4, 0x400, 17, 16, 9, &mvs_64xx_dispatch, }, [chip_6440] = { 1, 4, 0x400, 17, 16, 9, &mvs_64xx_dispatch, },
[chip_6485] = { 1, 8, 0x800, 33, 32, 10, &mvs_64xx_dispatch, }, [chip_6485] = { 1, 8, 0x800, 33, 32, 10, &mvs_64xx_dispatch, },
[chip_9180] = { 2, 4, 0x800, 17, 64, 9, &mvs_94xx_dispatch, }, [chip_9180] = { 2, 4, 0x800, 17, 64, 9, &mvs_94xx_dispatch, },
[chip_9480] = { 2, 4, 0x800, 17, 64, 9, &mvs_94xx_dispatch, }, [chip_9480] = { 2, 4, 0x800, 17, 64, 9, &mvs_94xx_dispatch, },
[chip_9445] = { 1, 4, 0x800, 17, 64, 11, &mvs_94xx_dispatch, },
[chip_9485] = { 2, 4, 0x800, 17, 64, 11, &mvs_94xx_dispatch, },
[chip_1300] = { 1, 4, 0x400, 17, 16, 9, &mvs_64xx_dispatch, }, [chip_1300] = { 1, 4, 0x400, 17, 16, 9, &mvs_64xx_dispatch, },
[chip_1320] = { 2, 4, 0x800, 17, 64, 9, &mvs_94xx_dispatch, }, [chip_1320] = { 2, 4, 0x800, 17, 64, 9, &mvs_94xx_dispatch, },
}; };
@ -107,7 +119,6 @@ static void __devinit mvs_phy_init(struct mvs_info *mvi, int phy_id)
static void mvs_free(struct mvs_info *mvi) static void mvs_free(struct mvs_info *mvi)
{ {
int i;
struct mvs_wq *mwq; struct mvs_wq *mwq;
int slot_nr; int slot_nr;
@ -119,12 +130,8 @@ static void mvs_free(struct mvs_info *mvi)
else else
slot_nr = MVS_SLOTS; slot_nr = MVS_SLOTS;
for (i = 0; i < mvi->tags_num; i++) { if (mvi->dma_pool)
struct mvs_slot_info *slot = &mvi->slot_info[i]; pci_pool_destroy(mvi->dma_pool);
if (slot->buf)
dma_free_coherent(mvi->dev, MVS_SLOT_BUF_SZ,
slot->buf, slot->buf_dma);
}
if (mvi->tx) if (mvi->tx)
dma_free_coherent(mvi->dev, dma_free_coherent(mvi->dev,
@ -213,6 +220,7 @@ static irqreturn_t mvs_interrupt(int irq, void *opaque)
static int __devinit mvs_alloc(struct mvs_info *mvi, struct Scsi_Host *shost) static int __devinit mvs_alloc(struct mvs_info *mvi, struct Scsi_Host *shost)
{ {
int i = 0, slot_nr; int i = 0, slot_nr;
char pool_name[32];
if (mvi->flags & MVF_FLAG_SOC) if (mvi->flags & MVF_FLAG_SOC)
slot_nr = MVS_SOC_SLOTS; slot_nr = MVS_SOC_SLOTS;
@ -272,18 +280,14 @@ static int __devinit mvs_alloc(struct mvs_info *mvi, struct Scsi_Host *shost)
if (!mvi->bulk_buffer) if (!mvi->bulk_buffer)
goto err_out; goto err_out;
#endif #endif
for (i = 0; i < slot_nr; i++) { sprintf(pool_name, "%s%d", "mvs_dma_pool", mvi->id);
struct mvs_slot_info *slot = &mvi->slot_info[i]; mvi->dma_pool = pci_pool_create(pool_name, mvi->pdev, MVS_SLOT_BUF_SZ, 16, 0);
if (!mvi->dma_pool) {
slot->buf = dma_alloc_coherent(mvi->dev, MVS_SLOT_BUF_SZ, printk(KERN_DEBUG "failed to create dma pool %s.\n", pool_name);
&slot->buf_dma, GFP_KERNEL);
if (!slot->buf) {
printk(KERN_DEBUG"failed to allocate slot->buf.\n");
goto err_out; goto err_out;
}
memset(slot->buf, 0, MVS_SLOT_BUF_SZ);
++mvi->tags_num;
} }
mvi->tags_num = slot_nr;
/* Initialize tags */ /* Initialize tags */
mvs_tag_init(mvi); mvs_tag_init(mvi);
return 0; return 0;
@ -484,7 +488,7 @@ static void __devinit mvs_post_sas_ha_init(struct Scsi_Host *shost,
sha->num_phys = nr_core * chip_info->n_phy; sha->num_phys = nr_core * chip_info->n_phy;
sha->lldd_max_execute_num = 1; sha->lldd_max_execute_num = lldd_max_execute_num;
if (mvi->flags & MVF_FLAG_SOC) if (mvi->flags & MVF_FLAG_SOC)
can_queue = MVS_SOC_CAN_QUEUE; can_queue = MVS_SOC_CAN_QUEUE;
@ -670,6 +674,24 @@ static struct pci_device_id __devinitdata mvs_pci_table[] = {
{ PCI_VDEVICE(TTI, 0x2740), chip_9480 }, { PCI_VDEVICE(TTI, 0x2740), chip_9480 },
{ PCI_VDEVICE(TTI, 0x2744), chip_9480 }, { PCI_VDEVICE(TTI, 0x2744), chip_9480 },
{ PCI_VDEVICE(TTI, 0x2760), chip_9480 }, { PCI_VDEVICE(TTI, 0x2760), chip_9480 },
{
.vendor = 0x1b4b,
.device = 0x9445,
.subvendor = PCI_ANY_ID,
.subdevice = 0x9480,
.class = 0,
.class_mask = 0,
.driver_data = chip_9445,
},
{
.vendor = 0x1b4b,
.device = 0x9485,
.subvendor = PCI_ANY_ID,
.subdevice = 0x9480,
.class = 0,
.class_mask = 0,
.driver_data = chip_9485,
},
{ } /* terminate list */ { } /* terminate list */
}; };
@ -690,6 +712,14 @@ static int __init mvs_init(void)
if (!mvs_stt) if (!mvs_stt)
return -ENOMEM; return -ENOMEM;
mvs_task_list_cache = kmem_cache_create("mvs_task_list", sizeof(struct mvs_task_list),
0, SLAB_HWCACHE_ALIGN, NULL);
if (!mvs_task_list_cache) {
rc = -ENOMEM;
mv_printk("%s: mvs_task_list_cache alloc failed! \n", __func__);
goto err_out;
}
rc = pci_register_driver(&mvs_pci_driver); rc = pci_register_driver(&mvs_pci_driver);
if (rc) if (rc)
@ -706,6 +736,7 @@ static void __exit mvs_exit(void)
{ {
pci_unregister_driver(&mvs_pci_driver); pci_unregister_driver(&mvs_pci_driver);
sas_release_transport(mvs_stt); sas_release_transport(mvs_stt);
kmem_cache_destroy(mvs_task_list_cache);
} }
module_init(mvs_init); module_init(mvs_init);

View File

@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *
@ -862,178 +863,286 @@ static int mvs_task_prep_ssp(struct mvs_info *mvi,
} }
#define DEV_IS_GONE(mvi_dev) ((!mvi_dev || (mvi_dev->dev_type == NO_DEVICE))) #define DEV_IS_GONE(mvi_dev) ((!mvi_dev || (mvi_dev->dev_type == NO_DEVICE)))
static int mvs_task_exec(struct sas_task *task, const int num, gfp_t gfp_flags, static int mvs_task_prep(struct sas_task *task, struct mvs_info *mvi, int is_tmf,
struct completion *completion,int is_tmf, struct mvs_tmf_task *tmf, int *pass)
struct mvs_tmf_task *tmf)
{ {
struct domain_device *dev = task->dev; struct domain_device *dev = task->dev;
struct mvs_device *mvi_dev = (struct mvs_device *)dev->lldd_dev; struct mvs_device *mvi_dev = dev->lldd_dev;
struct mvs_info *mvi = mvi_dev->mvi_info;
struct mvs_task_exec_info tei; struct mvs_task_exec_info tei;
struct sas_task *t = task;
struct mvs_slot_info *slot; struct mvs_slot_info *slot;
u32 tag = 0xdeadbeef, rc, n_elem = 0; u32 tag = 0xdeadbeef, n_elem = 0;
u32 n = num, pass = 0; int rc = 0;
unsigned long flags = 0, flags_libsas = 0;
if (!dev->port) { if (!dev->port) {
struct task_status_struct *tsm = &t->task_status; struct task_status_struct *tsm = &task->task_status;
tsm->resp = SAS_TASK_UNDELIVERED; tsm->resp = SAS_TASK_UNDELIVERED;
tsm->stat = SAS_PHY_DOWN; tsm->stat = SAS_PHY_DOWN;
/*
* libsas will use dev->port, should
* not call task_done for sata
*/
if (dev->dev_type != SATA_DEV) if (dev->dev_type != SATA_DEV)
t->task_done(t); task->task_done(task);
return 0; return rc;
} }
spin_lock_irqsave(&mvi->lock, flags); if (DEV_IS_GONE(mvi_dev)) {
do { if (mvi_dev)
dev = t->dev; mv_dprintk("device %d not ready.\n",
mvi_dev = dev->lldd_dev; mvi_dev->device_id);
if (DEV_IS_GONE(mvi_dev)) { else
if (mvi_dev) mv_dprintk("device %016llx not ready.\n",
mv_dprintk("device %d not ready.\n", SAS_ADDR(dev->sas_addr));
mvi_dev->device_id);
else
mv_dprintk("device %016llx not ready.\n",
SAS_ADDR(dev->sas_addr));
rc = SAS_PHY_DOWN; rc = SAS_PHY_DOWN;
goto out_done; return rc;
} }
tei.port = dev->port->lldd_port;
if (tei.port && !tei.port->port_attached && !tmf) {
if (sas_protocol_ata(task->task_proto)) {
struct task_status_struct *ts = &task->task_status;
mv_dprintk("SATA/STP port %d does not attach"
"device.\n", dev->port->id);
ts->resp = SAS_TASK_COMPLETE;
ts->stat = SAS_PHY_DOWN;
if (dev->port->id >= mvi->chip->n_phy) task->task_done(task);
tei.port = &mvi->port[dev->port->id - mvi->chip->n_phy];
else
tei.port = &mvi->port[dev->port->id];
if (tei.port && !tei.port->port_attached) {
if (sas_protocol_ata(t->task_proto)) {
struct task_status_struct *ts = &t->task_status;
mv_dprintk("port %d does not"
"attached device.\n", dev->port->id);
ts->stat = SAS_PROTO_RESPONSE;
ts->stat = SAS_PHY_DOWN;
spin_unlock_irqrestore(dev->sata_dev.ap->lock,
flags_libsas);
spin_unlock_irqrestore(&mvi->lock, flags);
t->task_done(t);
spin_lock_irqsave(&mvi->lock, flags);
spin_lock_irqsave(dev->sata_dev.ap->lock,
flags_libsas);
if (n > 1)
t = list_entry(t->list.next,
struct sas_task, list);
continue;
} else {
struct task_status_struct *ts = &t->task_status;
ts->resp = SAS_TASK_UNDELIVERED;
ts->stat = SAS_PHY_DOWN;
t->task_done(t);
if (n > 1)
t = list_entry(t->list.next,
struct sas_task, list);
continue;
}
}
if (!sas_protocol_ata(t->task_proto)) {
if (t->num_scatter) {
n_elem = dma_map_sg(mvi->dev,
t->scatter,
t->num_scatter,
t->data_dir);
if (!n_elem) {
rc = -ENOMEM;
goto err_out;
}
}
} else { } else {
n_elem = t->num_scatter; struct task_status_struct *ts = &task->task_status;
mv_dprintk("SAS port %d does not attach"
"device.\n", dev->port->id);
ts->resp = SAS_TASK_UNDELIVERED;
ts->stat = SAS_PHY_DOWN;
task->task_done(task);
} }
return rc;
}
rc = mvs_tag_alloc(mvi, &tag); if (!sas_protocol_ata(task->task_proto)) {
if (rc) if (task->num_scatter) {
goto err_out; n_elem = dma_map_sg(mvi->dev,
task->scatter,
slot = &mvi->slot_info[tag]; task->num_scatter,
task->data_dir);
if (!n_elem) {
t->lldd_task = NULL; rc = -ENOMEM;
slot->n_elem = n_elem; goto prep_out;
slot->slot_tag = tag; }
memset(slot->buf, 0, MVS_SLOT_BUF_SZ);
tei.task = t;
tei.hdr = &mvi->slot[tag];
tei.tag = tag;
tei.n_elem = n_elem;
switch (t->task_proto) {
case SAS_PROTOCOL_SMP:
rc = mvs_task_prep_smp(mvi, &tei);
break;
case SAS_PROTOCOL_SSP:
rc = mvs_task_prep_ssp(mvi, &tei, is_tmf, tmf);
break;
case SAS_PROTOCOL_SATA:
case SAS_PROTOCOL_STP:
case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
rc = mvs_task_prep_ata(mvi, &tei);
break;
default:
dev_printk(KERN_ERR, mvi->dev,
"unknown sas_task proto: 0x%x\n",
t->task_proto);
rc = -EINVAL;
break;
} }
} else {
n_elem = task->num_scatter;
}
if (rc) { rc = mvs_tag_alloc(mvi, &tag);
mv_dprintk("rc is %x\n", rc); if (rc)
goto err_out_tag; goto err_out;
}
slot->task = t;
slot->port = tei.port;
t->lldd_task = slot;
list_add_tail(&slot->entry, &tei.port->list);
/* TODO: select normal or high priority */
spin_lock(&t->task_state_lock);
t->task_state_flags |= SAS_TASK_AT_INITIATOR;
spin_unlock(&t->task_state_lock);
mvs_hba_memory_dump(mvi, tag, t->task_proto); slot = &mvi->slot_info[tag];
mvi_dev->running_req++;
++pass;
mvi->tx_prod = (mvi->tx_prod + 1) & (MVS_CHIP_SLOT_SZ - 1);
if (n > 1)
t = list_entry(t->list.next, struct sas_task, list);
if (likely(pass))
MVS_CHIP_DISP->start_delivery(mvi, (mvi->tx_prod - 1) &
(MVS_CHIP_SLOT_SZ - 1));
} while (--n); task->lldd_task = NULL;
rc = 0; slot->n_elem = n_elem;
goto out_done; slot->slot_tag = tag;
slot->buf = pci_pool_alloc(mvi->dma_pool, GFP_ATOMIC, &slot->buf_dma);
if (!slot->buf)
goto err_out_tag;
memset(slot->buf, 0, MVS_SLOT_BUF_SZ);
tei.task = task;
tei.hdr = &mvi->slot[tag];
tei.tag = tag;
tei.n_elem = n_elem;
switch (task->task_proto) {
case SAS_PROTOCOL_SMP:
rc = mvs_task_prep_smp(mvi, &tei);
break;
case SAS_PROTOCOL_SSP:
rc = mvs_task_prep_ssp(mvi, &tei, is_tmf, tmf);
break;
case SAS_PROTOCOL_SATA:
case SAS_PROTOCOL_STP:
case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
rc = mvs_task_prep_ata(mvi, &tei);
break;
default:
dev_printk(KERN_ERR, mvi->dev,
"unknown sas_task proto: 0x%x\n",
task->task_proto);
rc = -EINVAL;
break;
}
if (rc) {
mv_dprintk("rc is %x\n", rc);
goto err_out_slot_buf;
}
slot->task = task;
slot->port = tei.port;
task->lldd_task = slot;
list_add_tail(&slot->entry, &tei.port->list);
spin_lock(&task->task_state_lock);
task->task_state_flags |= SAS_TASK_AT_INITIATOR;
spin_unlock(&task->task_state_lock);
mvs_hba_memory_dump(mvi, tag, task->task_proto);
mvi_dev->running_req++;
++(*pass);
mvi->tx_prod = (mvi->tx_prod + 1) & (MVS_CHIP_SLOT_SZ - 1);
return rc;
err_out_slot_buf:
pci_pool_free(mvi->dma_pool, slot->buf, slot->buf_dma);
err_out_tag: err_out_tag:
mvs_tag_free(mvi, tag); mvs_tag_free(mvi, tag);
err_out: err_out:
dev_printk(KERN_ERR, mvi->dev, "mvsas exec failed[%d]!\n", rc); dev_printk(KERN_ERR, mvi->dev, "mvsas prep failed[%d]!\n", rc);
if (!sas_protocol_ata(t->task_proto)) if (!sas_protocol_ata(task->task_proto))
if (n_elem) if (n_elem)
dma_unmap_sg(mvi->dev, t->scatter, n_elem, dma_unmap_sg(mvi->dev, task->scatter, n_elem,
t->data_dir); task->data_dir);
out_done: prep_out:
return rc;
}
static struct mvs_task_list *mvs_task_alloc_list(int *num, gfp_t gfp_flags)
{
struct mvs_task_list *first = NULL;
for (; *num > 0; --*num) {
struct mvs_task_list *mvs_list = kmem_cache_zalloc(mvs_task_list_cache, gfp_flags);
if (!mvs_list)
break;
INIT_LIST_HEAD(&mvs_list->list);
if (!first)
first = mvs_list;
else
list_add_tail(&mvs_list->list, &first->list);
}
return first;
}
static inline void mvs_task_free_list(struct mvs_task_list *mvs_list)
{
LIST_HEAD(list);
struct list_head *pos, *a;
struct mvs_task_list *mlist = NULL;
__list_add(&list, mvs_list->list.prev, &mvs_list->list);
list_for_each_safe(pos, a, &list) {
list_del_init(pos);
mlist = list_entry(pos, struct mvs_task_list, list);
kmem_cache_free(mvs_task_list_cache, mlist);
}
}
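The two helpers above wrap sas_task pointers in list nodes drawn from mvs_task_list_cache; mvs_collector_task_exec() below consumes them. A minimal usage sketch (not part of the commit), assuming those definitions:

/*
 * Illustrative sketch only: allocate 'num' nodes, bail out and free the
 * partial list if the cache allocation came up short, then release.
 */
static int example_wrap_tasks(int num, gfp_t gfp_flags)
{
	int n = num;
	struct mvs_task_list *head = mvs_task_alloc_list(&n, gfp_flags);

	if (n) {			/* n entries could not be allocated */
		if (head)
			mvs_task_free_list(head);
		return -ENOMEM;
	}

	/* ... assign ->task and walk head->list, as the collector path below does ... */

	mvs_task_free_list(head);
	return 0;
}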
static int mvs_task_exec(struct sas_task *task, const int num, gfp_t gfp_flags,
struct completion *completion, int is_tmf,
struct mvs_tmf_task *tmf)
{
struct domain_device *dev = task->dev;
struct mvs_info *mvi = NULL;
u32 rc = 0;
u32 pass = 0;
unsigned long flags = 0;
mvi = ((struct mvs_device *)task->dev->lldd_dev)->mvi_info;
if ((dev->dev_type == SATA_DEV) && (dev->sata_dev.ap != NULL))
spin_unlock_irq(dev->sata_dev.ap->lock);
spin_lock_irqsave(&mvi->lock, flags);
rc = mvs_task_prep(task, mvi, is_tmf, tmf, &pass);
if (rc)
dev_printk(KERN_ERR, mvi->dev, "mvsas exec failed[%d]!\n", rc);
if (likely(pass))
MVS_CHIP_DISP->start_delivery(mvi, (mvi->tx_prod - 1) &
(MVS_CHIP_SLOT_SZ - 1));
spin_unlock_irqrestore(&mvi->lock, flags); spin_unlock_irqrestore(&mvi->lock, flags);
if ((dev->dev_type == SATA_DEV) && (dev->sata_dev.ap != NULL))
spin_lock_irq(dev->sata_dev.ap->lock);
return rc;
}
static int mvs_collector_task_exec(struct sas_task *task, const int num, gfp_t gfp_flags,
struct completion *completion, int is_tmf,
struct mvs_tmf_task *tmf)
{
struct domain_device *dev = task->dev;
struct mvs_prv_info *mpi = dev->port->ha->lldd_ha;
struct mvs_info *mvi = NULL;
struct sas_task *t = task;
struct mvs_task_list *mvs_list = NULL, *a;
LIST_HEAD(q);
int pass[2] = {0};
u32 rc = 0;
u32 n = num;
unsigned long flags = 0;
mvs_list = mvs_task_alloc_list(&n, gfp_flags);
if (n) {
printk(KERN_ERR "%s: mvs alloc list failed.\n", __func__);
rc = -ENOMEM;
goto free_list;
}
__list_add(&q, mvs_list->list.prev, &mvs_list->list);
list_for_each_entry(a, &q, list) {
a->task = t;
t = list_entry(t->list.next, struct sas_task, list);
}
list_for_each_entry(a, &q , list) {
t = a->task;
mvi = ((struct mvs_device *)t->dev->lldd_dev)->mvi_info;
spin_lock_irqsave(&mvi->lock, flags);
rc = mvs_task_prep(t, mvi, is_tmf, tmf, &pass[mvi->id]);
if (rc)
dev_printk(KERN_ERR, mvi->dev, "mvsas exec failed[%d]!\n", rc);
spin_unlock_irqrestore(&mvi->lock, flags);
}
if (likely(pass[0]))
MVS_CHIP_DISP->start_delivery(mpi->mvi[0],
(mpi->mvi[0]->tx_prod - 1) & (MVS_CHIP_SLOT_SZ - 1));
if (likely(pass[1]))
MVS_CHIP_DISP->start_delivery(mpi->mvi[1],
(mpi->mvi[1]->tx_prod - 1) & (MVS_CHIP_SLOT_SZ - 1));
list_del_init(&q);
free_list:
if (mvs_list)
mvs_task_free_list(mvs_list);
return rc; return rc;
} }
int mvs_queue_command(struct sas_task *task, const int num, int mvs_queue_command(struct sas_task *task, const int num,
gfp_t gfp_flags) gfp_t gfp_flags)
{ {
return mvs_task_exec(task, num, gfp_flags, NULL, 0, NULL); struct mvs_device *mvi_dev = task->dev->lldd_dev;
struct sas_ha_struct *sas = mvi_dev->mvi_info->sas;
if (sas->lldd_max_execute_num < 2)
return mvs_task_exec(task, num, gfp_flags, NULL, 0, NULL);
else
return mvs_collector_task_exec(task, num, gfp_flags, NULL, 0, NULL);
} }
static void mvs_slot_free(struct mvs_info *mvi, u32 rx_desc) static void mvs_slot_free(struct mvs_info *mvi, u32 rx_desc)
@ -1067,6 +1176,11 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
/* do nothing */ /* do nothing */
break; break;
} }
if (slot->buf) {
pci_pool_free(mvi->dma_pool, slot->buf, slot->buf_dma);
slot->buf = NULL;
}
list_del_init(&slot->entry); list_del_init(&slot->entry);
task->lldd_task = NULL; task->lldd_task = NULL;
slot->task = NULL; slot->task = NULL;
@ -1255,6 +1369,7 @@ static void mvs_port_notify_formed(struct asd_sas_phy *sas_phy, int lock)
spin_lock_irqsave(&mvi->lock, flags); spin_lock_irqsave(&mvi->lock, flags);
port->port_attached = 1; port->port_attached = 1;
phy->port = port; phy->port = port;
sas_port->lldd_port = port;
if (phy->phy_type & PORT_TYPE_SAS) { if (phy->phy_type & PORT_TYPE_SAS) {
port->wide_port_phymap = sas_port->phy_mask; port->wide_port_phymap = sas_port->phy_mask;
mv_printk("set wide port phy map %x\n", sas_port->phy_mask); mv_printk("set wide port phy map %x\n", sas_port->phy_mask);

View File

@ -3,6 +3,7 @@
* *
* Copyright 2007 Red Hat, Inc. * Copyright 2007 Red Hat, Inc.
* Copyright 2008 Marvell. <kewei@marvell.com> * Copyright 2008 Marvell. <kewei@marvell.com>
* Copyright 2009-2011 Marvell. <yuxiangl@marvell.com>
* *
* This file is licensed under GPLv2. * This file is licensed under GPLv2.
* *
@ -67,6 +68,7 @@ extern struct mvs_tgt_initiator mvs_tgt;
extern struct mvs_info *tgt_mvi; extern struct mvs_info *tgt_mvi;
extern const struct mvs_dispatch mvs_64xx_dispatch; extern const struct mvs_dispatch mvs_64xx_dispatch;
extern const struct mvs_dispatch mvs_94xx_dispatch; extern const struct mvs_dispatch mvs_94xx_dispatch;
extern struct kmem_cache *mvs_task_list_cache;
#define DEV_IS_EXPANDER(type) \ #define DEV_IS_EXPANDER(type) \
((type == EDGE_DEV) || (type == FANOUT_DEV)) ((type == EDGE_DEV) || (type == FANOUT_DEV))
@ -341,6 +343,7 @@ struct mvs_info {
dma_addr_t bulk_buffer_dma; dma_addr_t bulk_buffer_dma;
#define TRASH_BUCKET_SIZE 0x20000 #define TRASH_BUCKET_SIZE 0x20000
#endif #endif
void *dma_pool;
struct mvs_slot_info slot_info[0]; struct mvs_slot_info slot_info[0];
}; };
@ -367,6 +370,11 @@ struct mvs_task_exec_info {
int n_elem; int n_elem;
}; };
struct mvs_task_list {
struct sas_task *task;
struct list_head list;
};
/******************** function prototype *********************/ /******************** function prototype *********************/
void mvs_get_sas_addr(void *buf, u32 buflen); void mvs_get_sas_addr(void *buf, u32 buflen);

View File

@ -8147,7 +8147,7 @@ static int ncr53c8xx_abort(struct scsi_cmnd *cmd)
unsigned long flags; unsigned long flags;
struct scsi_cmnd *done_list; struct scsi_cmnd *done_list;
printk("ncr53c8xx_abort: command pid %lu\n", cmd->serial_number); printk("ncr53c8xx_abort\n");
NCR_LOCK_NCB(np, flags); NCR_LOCK_NCB(np, flags);

View File

@ -4066,7 +4066,7 @@ __qla1280_print_scsi_cmd(struct scsi_cmnd *cmd)
} */ } */
printk(" tag=%d, transfersize=0x%x \n", printk(" tag=%d, transfersize=0x%x \n",
cmd->tag, cmd->transfersize); cmd->tag, cmd->transfersize);
printk(" Pid=%li, SP=0x%p\n", cmd->serial_number, CMD_SP(cmd)); printk(" SP=0x%p\n", CMD_SP(cmd));
printk(" underflow size = 0x%x, direction=0x%x\n", printk(" underflow size = 0x%x, direction=0x%x\n",
cmd->underflow, cmd->sc_data_direction); cmd->underflow, cmd->sc_data_direction);
} }

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -496,8 +496,8 @@ do_read:
offset = 0; offset = 0;
} }
rval = qla2x00_read_sfp(vha, ha->sfp_data_dma, addr, offset, rval = qla2x00_read_sfp(vha, ha->sfp_data_dma, ha->sfp_data,
SFP_BLOCK_SIZE); addr, offset, SFP_BLOCK_SIZE, 0);
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
qla_printk(KERN_WARNING, ha, qla_printk(KERN_WARNING, ha,
"Unable to read SFP data (%x/%x/%x).\n", rval, "Unable to read SFP data (%x/%x/%x).\n", rval,
@ -628,12 +628,12 @@ qla2x00_sysfs_write_edc(struct file *filp, struct kobject *kobj,
memcpy(ha->edc_data, &buf[8], len); memcpy(ha->edc_data, &buf[8], len);
rval = qla2x00_write_edc(vha, dev, adr, ha->edc_data_dma, rval = qla2x00_write_sfp(vha, ha->edc_data_dma, ha->edc_data,
ha->edc_data, len, opt); dev, adr, len, opt);
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
DEBUG2(qla_printk(KERN_INFO, ha, DEBUG2(qla_printk(KERN_INFO, ha,
"Unable to write EDC (%x) %02x:%02x:%04x:%02x:%02x.\n", "Unable to write EDC (%x) %02x:%02x:%04x:%02x:%02x.\n",
rval, dev, adr, opt, len, *buf)); rval, dev, adr, opt, len, buf[8]));
return 0; return 0;
} }
@ -685,8 +685,8 @@ qla2x00_sysfs_write_edc_status(struct file *filp, struct kobject *kobj,
return -EINVAL; return -EINVAL;
memset(ha->edc_data, 0, len); memset(ha->edc_data, 0, len);
rval = qla2x00_read_edc(vha, dev, adr, ha->edc_data_dma, rval = qla2x00_read_sfp(vha, ha->edc_data_dma, ha->edc_data,
ha->edc_data, len, opt); dev, adr, len, opt);
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
DEBUG2(qla_printk(KERN_INFO, ha, DEBUG2(qla_printk(KERN_INFO, ha,
"Unable to write EDC status (%x) %02x:%02x:%04x:%02x.\n", "Unable to write EDC status (%x) %02x:%02x:%04x:%02x.\n",
@ -1568,7 +1568,7 @@ qla2x00_dev_loss_tmo_callbk(struct fc_rport *rport)
/* Now that the rport has been deleted, set the fcport state to /* Now that the rport has been deleted, set the fcport state to
FCS_DEVICE_DEAD */ FCS_DEVICE_DEAD */
atomic_set(&fcport->state, FCS_DEVICE_DEAD); qla2x00_set_fcport_state(fcport, FCS_DEVICE_DEAD);
/* /*
* Transport has effectively 'deleted' the rport, clear * Transport has effectively 'deleted' the rport, clear
@ -1877,14 +1877,15 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
scsi_remove_host(vha->host); scsi_remove_host(vha->host);
/* Allow timer to run to drain queued items, when removing vp */
qla24xx_deallocate_vp_id(vha);
if (vha->timer_active) { if (vha->timer_active) {
qla2x00_vp_stop_timer(vha); qla2x00_vp_stop_timer(vha);
DEBUG15(printk(KERN_INFO "scsi(%ld): timer for the vport[%d]" DEBUG15(printk(KERN_INFO "scsi(%ld): timer for the vport[%d]"
" = %p has stopped\n", vha->host_no, vha->vp_idx, vha)); " = %p has stopped\n", vha->host_no, vha->vp_idx, vha));
} }
qla24xx_deallocate_vp_id(vha);
/* No pending activities shall be there on the vha now */ /* No pending activities shall be there on the vha now */
DEBUG(msleep(random32()%10)); /* Just to see if something falls on DEBUG(msleep(random32()%10)); /* Just to see if something falls on
* the net we have placed below */ * the net we have placed below */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -1717,6 +1717,14 @@ typedef struct fc_port {
#define FCS_DEVICE_LOST 3 #define FCS_DEVICE_LOST 3
#define FCS_ONLINE 4 #define FCS_ONLINE 4
static const char * const port_state_str[] = {
"Unknown",
"UNCONFIGURED",
"DEAD",
"LOST",
"ONLINE"
};
/* /*
* FC port flags. * FC port flags.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -416,8 +416,7 @@ struct cmd_type_6 {
uint8_t vp_index; uint8_t vp_index;
uint32_t fcp_data_dseg_address[2]; /* Data segment address. */ uint32_t fcp_data_dseg_address[2]; /* Data segment address. */
uint16_t fcp_data_dseg_len; /* Data segment length. */ uint32_t fcp_data_dseg_len; /* Data segment length. */
uint16_t reserved_1; /* MUST be set to 0. */
}; };
#define COMMAND_TYPE_7 0x18 /* Command Type 7 entry */ #define COMMAND_TYPE_7 0x18 /* Command Type 7 entry */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -39,6 +39,8 @@ extern int qla81xx_load_risc(scsi_qla_host_t *, uint32_t *);
extern int qla2x00_perform_loop_resync(scsi_qla_host_t *); extern int qla2x00_perform_loop_resync(scsi_qla_host_t *);
extern int qla2x00_loop_resync(scsi_qla_host_t *); extern int qla2x00_loop_resync(scsi_qla_host_t *);
extern int qla2x00_find_new_loop_id(scsi_qla_host_t *, fc_port_t *);
extern int qla2x00_fabric_login(scsi_qla_host_t *, fc_port_t *, uint16_t *); extern int qla2x00_fabric_login(scsi_qla_host_t *, fc_port_t *, uint16_t *);
extern int qla2x00_local_device_login(scsi_qla_host_t *, fc_port_t *); extern int qla2x00_local_device_login(scsi_qla_host_t *, fc_port_t *);
@ -100,6 +102,8 @@ extern int ql2xgffidenable;
extern int ql2xenabledif; extern int ql2xenabledif;
extern int ql2xenablehba_err_chk; extern int ql2xenablehba_err_chk;
extern int ql2xtargetreset; extern int ql2xtargetreset;
extern int ql2xdontresethba;
extern unsigned int ql2xmaxlun;
extern int qla2x00_loop_reset(scsi_qla_host_t *); extern int qla2x00_loop_reset(scsi_qla_host_t *);
extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int); extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int);
@ -319,15 +323,12 @@ extern int
qla2x00_disable_fce_trace(scsi_qla_host_t *, uint64_t *, uint64_t *); qla2x00_disable_fce_trace(scsi_qla_host_t *, uint64_t *, uint64_t *);
extern int extern int
qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t, uint16_t); qla2x00_read_sfp(scsi_qla_host_t *, dma_addr_t, uint8_t *,
uint16_t, uint16_t, uint16_t, uint16_t);
extern int extern int
qla2x00_read_edc(scsi_qla_host_t *, uint16_t, uint16_t, dma_addr_t, qla2x00_write_sfp(scsi_qla_host_t *, dma_addr_t, uint8_t *,
uint8_t *, uint16_t, uint16_t); uint16_t, uint16_t, uint16_t, uint16_t);
extern int
qla2x00_write_edc(scsi_qla_host_t *, uint16_t, uint16_t, dma_addr_t,
uint8_t *, uint16_t, uint16_t);
extern int extern int
qla2x00_set_idma_speed(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t *); qla2x00_set_idma_speed(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t *);
@ -549,7 +550,6 @@ extern int qla82xx_wr_32(struct qla_hw_data *, ulong, u32);
extern int qla82xx_rd_32(struct qla_hw_data *, ulong); extern int qla82xx_rd_32(struct qla_hw_data *, ulong);
extern int qla82xx_rdmem(struct qla_hw_data *, u64, void *, int); extern int qla82xx_rdmem(struct qla_hw_data *, u64, void *, int);
extern int qla82xx_wrmem(struct qla_hw_data *, u64, void *, int); extern int qla82xx_wrmem(struct qla_hw_data *, u64, void *, int);
extern void qla82xx_rom_unlock(struct qla_hw_data *);
/* ISP 8021 IDC */ /* ISP 8021 IDC */
extern void qla82xx_clear_drv_active(struct qla_hw_data *); extern void qla82xx_clear_drv_active(struct qla_hw_data *);

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -35,8 +35,6 @@ static int qla2x00_fabric_dev_login(scsi_qla_host_t *, fc_port_t *,
static int qla2x00_restart_isp(scsi_qla_host_t *); static int qla2x00_restart_isp(scsi_qla_host_t *);
static int qla2x00_find_new_loop_id(scsi_qla_host_t *, fc_port_t *);
static struct qla_chip_state_84xx *qla84xx_get_chip(struct scsi_qla_host *); static struct qla_chip_state_84xx *qla84xx_get_chip(struct scsi_qla_host *);
static int qla84xx_init_chip(scsi_qla_host_t *); static int qla84xx_init_chip(scsi_qla_host_t *);
static int qla25xx_init_queues(struct qla_hw_data *); static int qla25xx_init_queues(struct qla_hw_data *);
@ -385,8 +383,18 @@ qla2x00_async_login_done(struct scsi_qla_host *vha, fc_port_t *fcport,
switch (data[0]) { switch (data[0]) {
case MBS_COMMAND_COMPLETE: case MBS_COMMAND_COMPLETE:
/*
* Driver must validate login state - If PRLI not complete,
* force a relogin attempt via implicit LOGO, PLOGI, and PRLI
* requests.
*/
rval = qla2x00_get_port_database(vha, fcport, 0);
if (rval != QLA_SUCCESS) {
qla2x00_post_async_logout_work(vha, fcport, NULL);
qla2x00_post_async_login_work(vha, fcport, NULL);
break;
}
if (fcport->flags & FCF_FCP2_DEVICE) { if (fcport->flags & FCF_FCP2_DEVICE) {
fcport->flags |= FCF_ASYNC_SENT;
qla2x00_post_async_adisc_work(vha, fcport, data); qla2x00_post_async_adisc_work(vha, fcport, data);
break; break;
} }
@ -397,7 +405,7 @@ qla2x00_async_login_done(struct scsi_qla_host *vha, fc_port_t *fcport,
if (data[1] & QLA_LOGIO_LOGIN_RETRIED) if (data[1] & QLA_LOGIO_LOGIN_RETRIED)
set_bit(RELOGIN_NEEDED, &vha->dpc_flags); set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
else else
qla2x00_mark_device_lost(vha, fcport, 1, 1); qla2x00_mark_device_lost(vha, fcport, 1, 0);
break; break;
case MBS_PORT_ID_USED: case MBS_PORT_ID_USED:
fcport->loop_id = data[1]; fcport->loop_id = data[1];
@ -409,7 +417,7 @@ qla2x00_async_login_done(struct scsi_qla_host *vha, fc_port_t *fcport,
rval = qla2x00_find_new_loop_id(vha, fcport); rval = qla2x00_find_new_loop_id(vha, fcport);
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
fcport->flags &= ~FCF_ASYNC_SENT; fcport->flags &= ~FCF_ASYNC_SENT;
qla2x00_mark_device_lost(vha, fcport, 1, 1); qla2x00_mark_device_lost(vha, fcport, 1, 0);
break; break;
} }
qla2x00_post_async_login_work(vha, fcport, NULL); qla2x00_post_async_login_work(vha, fcport, NULL);
@ -441,7 +449,7 @@ qla2x00_async_adisc_done(struct scsi_qla_host *vha, fc_port_t *fcport,
if (data[1] & QLA_LOGIO_LOGIN_RETRIED) if (data[1] & QLA_LOGIO_LOGIN_RETRIED)
set_bit(RELOGIN_NEEDED, &vha->dpc_flags); set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
else else
qla2x00_mark_device_lost(vha, fcport, 1, 1); qla2x00_mark_device_lost(vha, fcport, 1, 0);
return; return;
} }
@ -2536,7 +2544,7 @@ qla2x00_alloc_fcport(scsi_qla_host_t *vha, gfp_t flags)
fcport->vp_idx = vha->vp_idx; fcport->vp_idx = vha->vp_idx;
fcport->port_type = FCT_UNKNOWN; fcport->port_type = FCT_UNKNOWN;
fcport->loop_id = FC_NO_LOOP_ID; fcport->loop_id = FC_NO_LOOP_ID;
atomic_set(&fcport->state, FCS_UNCONFIGURED); qla2x00_set_fcport_state(fcport, FCS_UNCONFIGURED);
fcport->supported_classes = FC_COS_UNSPECIFIED; fcport->supported_classes = FC_COS_UNSPECIFIED;
return fcport; return fcport;
@ -2722,7 +2730,7 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
"loop_id=0x%04x\n", "loop_id=0x%04x\n",
vha->host_no, fcport->loop_id)); vha->host_no, fcport->loop_id));
atomic_set(&fcport->state, FCS_DEVICE_LOST); qla2x00_set_fcport_state(fcport, FCS_DEVICE_LOST);
} }
} }
@ -2934,7 +2942,7 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
qla2x00_iidma_fcport(vha, fcport); qla2x00_iidma_fcport(vha, fcport);
qla24xx_update_fcport_fcp_prio(vha, fcport); qla24xx_update_fcport_fcp_prio(vha, fcport);
qla2x00_reg_remote_port(vha, fcport); qla2x00_reg_remote_port(vha, fcport);
atomic_set(&fcport->state, FCS_ONLINE); qla2x00_set_fcport_state(fcport, FCS_ONLINE);
} }
/* /*
@ -3391,7 +3399,7 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *vha,
* Context: * Context:
* Kernel context. * Kernel context.
*/ */
static int int
qla2x00_find_new_loop_id(scsi_qla_host_t *vha, fc_port_t *dev) qla2x00_find_new_loop_id(scsi_qla_host_t *vha, fc_port_t *dev)
{ {
int rval; int rval;
@ -5202,7 +5210,7 @@ qla81xx_nvram_config(scsi_qla_host_t *vha)
} }
/* Reset Initialization control block */ /* Reset Initialization control block */
memset(icb, 0, sizeof(struct init_cb_81xx)); memset(icb, 0, ha->init_cb_size);
/* Copy 1st segment. */ /* Copy 1st segment. */
dptr1 = (uint8_t *)icb; dptr1 = (uint8_t *)icb;
@ -5427,6 +5435,13 @@ qla82xx_restart_isp(scsi_qla_host_t *vha)
ha->isp_abort_cnt = 0; ha->isp_abort_cnt = 0;
clear_bit(ISP_ABORT_RETRY, &vha->dpc_flags); clear_bit(ISP_ABORT_RETRY, &vha->dpc_flags);
/* Update the firmware version */
qla2x00_get_fw_version(vha, &ha->fw_major_version,
&ha->fw_minor_version, &ha->fw_subminor_version,
&ha->fw_attributes, &ha->fw_memory_size,
ha->mpi_version, &ha->mpi_capabilities,
ha->phy_version);
if (ha->fce) { if (ha->fce) {
ha->flags.fce_enabled = 1; ha->flags.fce_enabled = 1;
memset(ha->fce, 0, memset(ha->fce, 0,
@ -5508,26 +5523,26 @@ qla81xx_update_fw_options(scsi_qla_host_t *vha)
* *
* Return: * Return:
* non-zero (if found) * non-zero (if found)
* 0 (if not found) * -1 (if not found)
* *
* Context: * Context:
* Kernel context * Kernel context
*/ */
uint8_t static int
qla24xx_get_fcp_prio(scsi_qla_host_t *vha, fc_port_t *fcport) qla24xx_get_fcp_prio(scsi_qla_host_t *vha, fc_port_t *fcport)
{ {
int i, entries; int i, entries;
uint8_t pid_match, wwn_match; uint8_t pid_match, wwn_match;
uint8_t priority; int priority;
uint32_t pid1, pid2; uint32_t pid1, pid2;
uint64_t wwn1, wwn2; uint64_t wwn1, wwn2;
struct qla_fcp_prio_entry *pri_entry; struct qla_fcp_prio_entry *pri_entry;
struct qla_hw_data *ha = vha->hw; struct qla_hw_data *ha = vha->hw;
if (!ha->fcp_prio_cfg || !ha->flags.fcp_prio_enabled) if (!ha->fcp_prio_cfg || !ha->flags.fcp_prio_enabled)
return 0; return -1;
priority = 0; priority = -1;
entries = ha->fcp_prio_cfg->num_entries; entries = ha->fcp_prio_cfg->num_entries;
pri_entry = &ha->fcp_prio_cfg->entry[0]; pri_entry = &ha->fcp_prio_cfg->entry[0];
@ -5610,7 +5625,7 @@ int
qla24xx_update_fcport_fcp_prio(scsi_qla_host_t *vha, fc_port_t *fcport) qla24xx_update_fcport_fcp_prio(scsi_qla_host_t *vha, fc_port_t *fcport)
{ {
int ret; int ret;
uint8_t priority; int priority;
uint16_t mb[5]; uint16_t mb[5];
if (fcport->port_type != FCT_TARGET || if (fcport->port_type != FCT_TARGET ||
@ -5618,6 +5633,9 @@ qla24xx_update_fcport_fcp_prio(scsi_qla_host_t *vha, fc_port_t *fcport)
return QLA_FUNCTION_FAILED; return QLA_FUNCTION_FAILED;
priority = qla24xx_get_fcp_prio(vha, fcport); priority = qla24xx_get_fcp_prio(vha, fcport);
if (priority < 0)
return QLA_FUNCTION_FAILED;
ret = qla24xx_set_fcp_prio(vha, fcport->loop_id, priority, mb); ret = qla24xx_set_fcp_prio(vha, fcport->loop_id, priority, mb);
if (ret == QLA_SUCCESS) if (ret == QLA_SUCCESS)
fcport->fcp_prio = priority; fcport->fcp_prio = priority;

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -83,3 +83,22 @@ qla2x00_clean_dsd_pool(struct qla_hw_data *ha, srb_t *sp)
} }
INIT_LIST_HEAD(&((struct crc_context *)sp->ctx)->dsd_list); INIT_LIST_HEAD(&((struct crc_context *)sp->ctx)->dsd_list);
} }
static inline void
qla2x00_set_fcport_state(fc_port_t *fcport, int state)
{
int old_state;
old_state = atomic_read(&fcport->state);
atomic_set(&fcport->state, state);
/* Don't print state transitions during initial allocation of fcport */
if (old_state && old_state != state) {
DEBUG(qla_printk(KERN_WARNING, fcport->vha->hw,
"scsi(%ld): FCPort state transitioned from %s to %s - "
"portid=%02x%02x%02x.\n", fcport->vha->host_no,
port_state_str[old_state], port_state_str[state],
fcport->d_id.b.domain, fcport->d_id.b.area,
fcport->d_id.b.al_pa));
}
}
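The inline helper above pairs with the port_state_str[] table added to qla_def.h so that every fcport state change is logged with the port id. A short sketch (not part of the commit) of how call sites in this series switch over to it:

/*
 * Illustrative sketch only: replacing a raw atomic_set() on
 * fcport->state with the logging helper above.
 */
static void example_mark_port_online(fc_port_t *fcport)
{
	/* was: atomic_set(&fcport->state, FCS_ONLINE); */
	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
}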

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -843,7 +843,10 @@ qla2x00_process_completed_request(struct scsi_qla_host *vha,
qla_printk(KERN_WARNING, ha, qla_printk(KERN_WARNING, ha,
"Invalid SCSI completion handle %d.\n", index); "Invalid SCSI completion handle %d.\n", index);
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); if (IS_QLA82XX(ha))
set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags);
else
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
return; return;
} }
@ -861,7 +864,10 @@ qla2x00_process_completed_request(struct scsi_qla_host *vha,
qla_printk(KERN_WARNING, ha, qla_printk(KERN_WARNING, ha,
"Invalid ISP SCSI completion handle\n"); "Invalid ISP SCSI completion handle\n");
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); if (IS_QLA82XX(ha))
set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags);
else
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
} }
} }
@ -878,7 +884,10 @@ qla2x00_get_sp_from_handle(scsi_qla_host_t *vha, const char *func,
if (index >= MAX_OUTSTANDING_COMMANDS) { if (index >= MAX_OUTSTANDING_COMMANDS) {
qla_printk(KERN_WARNING, ha, qla_printk(KERN_WARNING, ha,
"%s: Invalid completion handle (%x).\n", func, index); "%s: Invalid completion handle (%x).\n", func, index);
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); if (IS_QLA82XX(ha))
set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags);
else
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
goto done; goto done;
} }
sp = req->outstanding_cmds[index]; sp = req->outstanding_cmds[index];
@ -1564,7 +1573,10 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
"scsi(%ld): Invalid status handle (0x%x).\n", vha->host_no, "scsi(%ld): Invalid status handle (0x%x).\n", vha->host_no,
sts->handle); sts->handle);
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); if (IS_QLA82XX(ha))
set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags);
else
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
qla2xxx_wake_dpc(vha); qla2xxx_wake_dpc(vha);
return; return;
} }
@ -1794,12 +1806,13 @@ out:
if (logit) if (logit)
DEBUG2(qla_printk(KERN_INFO, ha, DEBUG2(qla_printk(KERN_INFO, ha,
"scsi(%ld:%d:%d) FCP command status: 0x%x-0x%x (0x%x) " "scsi(%ld:%d:%d) FCP command status: 0x%x-0x%x (0x%x) "
"oxid=0x%x cdb=%02x%02x%02x len=0x%x " "portid=%02x%02x%02x oxid=0x%x cdb=%02x%02x%02x len=0x%x "
"rsp_info=0x%x resid=0x%x fw_resid=0x%x\n", vha->host_no, "rsp_info=0x%x resid=0x%x fw_resid=0x%x\n", vha->host_no,
cp->device->id, cp->device->lun, comp_status, scsi_status, cp->device->id, cp->device->lun, comp_status, scsi_status,
cp->result, ox_id, cp->cmnd[0], cp->result, fcport->d_id.b.domain, fcport->d_id.b.area,
cp->cmnd[1], cp->cmnd[2], scsi_bufflen(cp), rsp_info_len, fcport->d_id.b.al_pa, ox_id, cp->cmnd[0], cp->cmnd[1],
resid_len, fw_resid_len)); cp->cmnd[2], scsi_bufflen(cp), rsp_info_len, resid_len,
fw_resid_len));
if (rsp->status_srb == NULL) if (rsp->status_srb == NULL)
qla2x00_sp_compl(ha, sp); qla2x00_sp_compl(ha, sp);
@ -1908,13 +1921,17 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt)
qla2x00_sp_compl(ha, sp); qla2x00_sp_compl(ha, sp);
} else if (pkt->entry_type == COMMAND_A64_TYPE || pkt->entry_type == } else if (pkt->entry_type == COMMAND_A64_TYPE || pkt->entry_type ==
COMMAND_TYPE || pkt->entry_type == COMMAND_TYPE_7) { COMMAND_TYPE || pkt->entry_type == COMMAND_TYPE_7
|| pkt->entry_type == COMMAND_TYPE_6) {
DEBUG2(printk("scsi(%ld): Error entry - invalid handle\n", DEBUG2(printk("scsi(%ld): Error entry - invalid handle\n",
vha->host_no)); vha->host_no));
qla_printk(KERN_WARNING, ha, qla_printk(KERN_WARNING, ha,
"Error entry - invalid handle\n"); "Error entry - invalid handle\n");
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); if (IS_QLA82XX(ha))
set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags);
else
set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
qla2xxx_wake_dpc(vha); qla2xxx_wake_dpc(vha);
} }
} }
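The invalid-handle paths above now distinguish ISP82xx parts, which recover through an FCoE context reset rather than a full ISP abort. The same pattern repeats at each site in this file; a minimal sketch (not part of the commit) of one helper capturing it, assuming the surrounding qla2xxx definitions:

/* Illustrative sketch only: request the appropriate recovery action. */
static inline void example_schedule_isp_recovery(scsi_qla_host_t *vha)
{
	if (IS_QLA82XX(vha->hw))
		set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags);
	else
		set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
}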

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -1261,11 +1261,12 @@ qla2x00_get_port_database(scsi_qla_host_t *vha, fc_port_t *fcport, uint8_t opt)
/* Check for logged in state. */ /* Check for logged in state. */
if (pd24->current_login_state != PDS_PRLI_COMPLETE && if (pd24->current_login_state != PDS_PRLI_COMPLETE &&
pd24->last_login_state != PDS_PRLI_COMPLETE) { pd24->last_login_state != PDS_PRLI_COMPLETE) {
DEBUG2(printk("%s(%ld): Unable to verify " DEBUG2(qla_printk(KERN_WARNING, ha,
"login-state (%x/%x) for loop_id %x\n", "scsi(%ld): Unable to verify login-state (%x/%x) "
__func__, vha->host_no, " - portid=%02x%02x%02x.\n", vha->host_no,
pd24->current_login_state, pd24->current_login_state, pd24->last_login_state,
pd24->last_login_state, fcport->loop_id)); fcport->d_id.b.domain, fcport->d_id.b.area,
fcport->d_id.b.al_pa));
rval = QLA_FUNCTION_FAILED; rval = QLA_FUNCTION_FAILED;
goto gpd_error_out; goto gpd_error_out;
} }
@ -1289,6 +1290,12 @@ qla2x00_get_port_database(scsi_qla_host_t *vha, fc_port_t *fcport, uint8_t opt)
/* Check for logged in state. */ /* Check for logged in state. */
if (pd->master_state != PD_STATE_PORT_LOGGED_IN && if (pd->master_state != PD_STATE_PORT_LOGGED_IN &&
pd->slave_state != PD_STATE_PORT_LOGGED_IN) { pd->slave_state != PD_STATE_PORT_LOGGED_IN) {
DEBUG2(qla_printk(KERN_WARNING, ha,
"scsi(%ld): Unable to verify login-state (%x/%x) "
" - portid=%02x%02x%02x.\n", vha->host_no,
pd->master_state, pd->slave_state,
fcport->d_id.b.domain, fcport->d_id.b.area,
fcport->d_id.b.al_pa));
rval = QLA_FUNCTION_FAILED; rval = QLA_FUNCTION_FAILED;
goto gpd_error_out; goto gpd_error_out;
} }
@ -1883,7 +1890,8 @@ qla24xx_fabric_logout(scsi_qla_host_t *vha, uint16_t loop_id, uint8_t domain,
lg->handle = MAKE_HANDLE(req->id, lg->handle); lg->handle = MAKE_HANDLE(req->id, lg->handle);
lg->nport_handle = cpu_to_le16(loop_id); lg->nport_handle = cpu_to_le16(loop_id);
lg->control_flags = lg->control_flags =
__constant_cpu_to_le16(LCF_COMMAND_LOGO|LCF_IMPL_LOGO); __constant_cpu_to_le16(LCF_COMMAND_LOGO|LCF_IMPL_LOGO|
LCF_FREE_NPORT);
lg->port_id[0] = al_pa; lg->port_id[0] = al_pa;
lg->port_id[1] = area; lg->port_id[1] = area;
lg->port_id[2] = domain; lg->port_id[2] = domain;
@ -2362,7 +2370,7 @@ qla24xx_abort_command(srb_t *sp)
abt->entry_count = 1; abt->entry_count = 1;
abt->handle = MAKE_HANDLE(req->id, abt->handle); abt->handle = MAKE_HANDLE(req->id, abt->handle);
abt->nport_handle = cpu_to_le16(fcport->loop_id); abt->nport_handle = cpu_to_le16(fcport->loop_id);
abt->handle_to_abort = handle; abt->handle_to_abort = MAKE_HANDLE(req->id, handle);
abt->port_id[0] = fcport->d_id.b.al_pa; abt->port_id[0] = fcport->d_id.b.al_pa;
abt->port_id[1] = fcport->d_id.b.area; abt->port_id[1] = fcport->d_id.b.area;
abt->port_id[2] = fcport->d_id.b.domain; abt->port_id[2] = fcport->d_id.b.domain;
@ -2778,44 +2786,6 @@ qla2x00_disable_fce_trace(scsi_qla_host_t *vha, uint64_t *wr, uint64_t *rd)
return rval; return rval;
} }
int
qla2x00_read_sfp(scsi_qla_host_t *vha, dma_addr_t sfp_dma, uint16_t addr,
uint16_t off, uint16_t count)
{
int rval;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
if (!IS_FWI2_CAPABLE(vha->hw))
return QLA_FUNCTION_FAILED;
DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no));
mcp->mb[0] = MBC_READ_SFP;
mcp->mb[1] = addr;
mcp->mb[2] = MSW(sfp_dma);
mcp->mb[3] = LSW(sfp_dma);
mcp->mb[6] = MSW(MSD(sfp_dma));
mcp->mb[7] = LSW(MSD(sfp_dma));
mcp->mb[8] = count;
mcp->mb[9] = off;
mcp->mb[10] = 0;
mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_0;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);
if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
vha->host_no, rval, mcp->mb[0]));
} else {
DEBUG11(printk("%s(%ld): done.\n", __func__, vha->host_no));
}
return rval;
}
int int
qla2x00_get_idma_speed(scsi_qla_host_t *vha, uint16_t loop_id, qla2x00_get_idma_speed(scsi_qla_host_t *vha, uint16_t loop_id,
uint16_t *port_speed, uint16_t *mb) uint16_t *port_speed, uint16_t *mb)
@ -3581,15 +3551,22 @@ qla81xx_restart_mpi_firmware(scsi_qla_host_t *vha)
} }
int int
qla2x00_read_edc(scsi_qla_host_t *vha, uint16_t dev, uint16_t adr, qla2x00_read_sfp(scsi_qla_host_t *vha, dma_addr_t sfp_dma, uint8_t *sfp,
dma_addr_t sfp_dma, uint8_t *sfp, uint16_t len, uint16_t opt) uint16_t dev, uint16_t off, uint16_t len, uint16_t opt)
{ {
int rval; int rval;
mbx_cmd_t mc; mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc; mbx_cmd_t *mcp = &mc;
struct qla_hw_data *ha = vha->hw;
if (!IS_FWI2_CAPABLE(ha))
return QLA_FUNCTION_FAILED;
DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no)); DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no));
if (len == 1)
opt |= BIT_0;
mcp->mb[0] = MBC_READ_SFP; mcp->mb[0] = MBC_READ_SFP;
mcp->mb[1] = dev; mcp->mb[1] = dev;
mcp->mb[2] = MSW(sfp_dma); mcp->mb[2] = MSW(sfp_dma);
@ -3597,17 +3574,16 @@ qla2x00_read_edc(scsi_qla_host_t *vha, uint16_t dev, uint16_t adr,
mcp->mb[6] = MSW(MSD(sfp_dma)); mcp->mb[6] = MSW(MSD(sfp_dma));
mcp->mb[7] = LSW(MSD(sfp_dma)); mcp->mb[7] = LSW(MSD(sfp_dma));
mcp->mb[8] = len; mcp->mb[8] = len;
mcp->mb[9] = adr; mcp->mb[9] = off;
mcp->mb[10] = opt; mcp->mb[10] = opt;
mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0; mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_0; mcp->in_mb = MBX_1|MBX_0;
mcp->tov = MBX_TOV_SECONDS; mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0; mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp); rval = qla2x00_mailbox_command(vha, mcp);
if (opt & BIT_0) if (opt & BIT_0)
if (sfp) *sfp = mcp->mb[1];
*sfp = mcp->mb[8];
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__, DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__,
@ -3620,18 +3596,24 @@ qla2x00_read_edc(scsi_qla_host_t *vha, uint16_t dev, uint16_t adr,
} }
int int
qla2x00_write_edc(scsi_qla_host_t *vha, uint16_t dev, uint16_t adr, qla2x00_write_sfp(scsi_qla_host_t *vha, dma_addr_t sfp_dma, uint8_t *sfp,
dma_addr_t sfp_dma, uint8_t *sfp, uint16_t len, uint16_t opt) uint16_t dev, uint16_t off, uint16_t len, uint16_t opt)
{ {
int rval; int rval;
mbx_cmd_t mc; mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc; mbx_cmd_t *mcp = &mc;
struct qla_hw_data *ha = vha->hw;
if (!IS_FWI2_CAPABLE(ha))
return QLA_FUNCTION_FAILED;
DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no)); DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no));
if (len == 1)
opt |= BIT_0;
if (opt & BIT_0) if (opt & BIT_0)
if (sfp) len = *sfp;
len = *sfp;
mcp->mb[0] = MBC_WRITE_SFP; mcp->mb[0] = MBC_WRITE_SFP;
mcp->mb[1] = dev; mcp->mb[1] = dev;
@ -3640,10 +3622,10 @@ qla2x00_write_edc(scsi_qla_host_t *vha, uint16_t dev, uint16_t adr,
mcp->mb[6] = MSW(MSD(sfp_dma)); mcp->mb[6] = MSW(MSD(sfp_dma));
mcp->mb[7] = LSW(MSD(sfp_dma)); mcp->mb[7] = LSW(MSD(sfp_dma));
mcp->mb[8] = len; mcp->mb[8] = len;
mcp->mb[9] = adr; mcp->mb[9] = off;
mcp->mb[10] = opt; mcp->mb[10] = opt;
mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0; mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_0; mcp->in_mb = MBX_1|MBX_0;
mcp->tov = MBX_TOV_SECONDS; mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0; mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp); rval = qla2x00_mailbox_command(vha, mcp);
@ -4160,63 +4142,32 @@ int
qla2x00_get_thermal_temp(scsi_qla_host_t *vha, uint16_t *temp, uint16_t *frac) qla2x00_get_thermal_temp(scsi_qla_host_t *vha, uint16_t *temp, uint16_t *frac)
{ {
int rval; int rval;
mbx_cmd_t mc; uint8_t byte;
mbx_cmd_t *mcp = &mc;
struct qla_hw_data *ha = vha->hw; struct qla_hw_data *ha = vha->hw;
DEBUG11(printk(KERN_INFO "%s(%ld): entered.\n", __func__, ha->host_no)); DEBUG11(printk(KERN_INFO "%s(%ld): entered.\n", __func__, vha->host_no));
/* High bits. */ /* Integer part */
mcp->mb[0] = MBC_READ_SFP; rval = qla2x00_read_sfp(vha, 0, &byte, 0x98, 0x01, 1, BIT_13|BIT_0);
mcp->mb[1] = 0x98;
mcp->mb[2] = 0;
mcp->mb[3] = 0;
mcp->mb[6] = 0;
mcp->mb[7] = 0;
mcp->mb[8] = 1;
mcp->mb[9] = 0x01;
mcp->mb[10] = BIT_13|BIT_0;
mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_1|MBX_0;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk(KERN_WARNING DEBUG2_3_11(printk(KERN_WARNING
"%s(%ld): failed=%x (%x).\n", __func__, "%s(%ld): failed=%x.\n", __func__, vha->host_no, rval));
vha->host_no, rval, mcp->mb[0]));
ha->flags.thermal_supported = 0; ha->flags.thermal_supported = 0;
goto fail; goto fail;
} }
*temp = mcp->mb[1] & 0xFF; *temp = byte;
/* Low bits. */ /* Fraction part */
mcp->mb[0] = MBC_READ_SFP; rval = qla2x00_read_sfp(vha, 0, &byte, 0x98, 0x10, 1, BIT_13|BIT_0);
mcp->mb[1] = 0x98;
mcp->mb[2] = 0;
mcp->mb[3] = 0;
mcp->mb[6] = 0;
mcp->mb[7] = 0;
mcp->mb[8] = 1;
mcp->mb[9] = 0x10;
mcp->mb[10] = BIT_13|BIT_0;
mcp->out_mb = MBX_10|MBX_9|MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_1|MBX_0;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);
if (rval != QLA_SUCCESS) { if (rval != QLA_SUCCESS) {
DEBUG2_3_11(printk(KERN_WARNING DEBUG2_3_11(printk(KERN_WARNING
"%s(%ld): failed=%x (%x).\n", __func__, "%s(%ld): failed=%x.\n", __func__, vha->host_no, rval));
vha->host_no, rval, mcp->mb[0]));
ha->flags.thermal_supported = 0; ha->flags.thermal_supported = 0;
goto fail; goto fail;
} }
*frac = ((mcp->mb[1] & 0xFF) >> 6) * 25; *frac = (byte >> 6) * 25;
if (rval == QLA_SUCCESS) DEBUG11(printk(KERN_INFO "%s(%ld): done.\n", __func__, vha->host_no));
DEBUG11(printk(KERN_INFO
"%s(%ld): done.\n", __func__, ha->host_no));
fail: fail:
return rval; return rval;
} }
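The rewritten routine above fetches the integer and fractional temperature bytes through the unified qla2x00_read_sfp() helper instead of hand-built mailbox commands. A small sketch (not part of the commit) of how a caller might report the combined reading, assuming the qla2xxx types visible in this diff:

/*
 * Illustrative sketch only: frac is in hundredths of a degree, since
 * (byte >> 6) * 25 yields 0, 25, 50 or 75.
 */
static void example_log_thermal(scsi_qla_host_t *vha)
{
	uint16_t temp = 0, frac = 0;

	if (qla2x00_get_thermal_temp(vha, &temp, &frac) == QLA_SUCCESS)
		printk(KERN_INFO "qla2xxx: thermal temperature %u.%02u C\n",
		    temp, frac);
}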

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -136,7 +136,7 @@ qla2x00_mark_vp_devices_dead(scsi_qla_host_t *vha)
vha->host_no, fcport->loop_id, fcport->vp_idx)); vha->host_no, fcport->loop_id, fcport->vp_idx));
qla2x00_mark_device_lost(vha, fcport, 0, 0); qla2x00_mark_device_lost(vha, fcport, 0, 0);
atomic_set(&fcport->state, FCS_UNCONFIGURED); qla2x00_set_fcport_state(fcport, FCS_UNCONFIGURED);
} }
} }
@ -456,7 +456,7 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
else else
host->max_cmd_len = MAX_CMDSZ; host->max_cmd_len = MAX_CMDSZ;
host->max_channel = MAX_BUSES - 1; host->max_channel = MAX_BUSES - 1;
host->max_lun = MAX_LUNS; host->max_lun = ql2xmaxlun;
host->unique_id = host->host_no; host->unique_id = host->host_no;
host->max_id = MAX_TARGETS_2200; host->max_id = MAX_TARGETS_2200;
host->transportt = qla2xxx_transport_vport_template; host->transportt = qla2xxx_transport_vport_template;

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */
@ -844,6 +844,12 @@ qla82xx_rom_lock(struct qla_hw_data *ha)
return 0; return 0;
} }
static void
qla82xx_rom_unlock(struct qla_hw_data *ha)
{
qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK));
}
static int static int
qla82xx_wait_rom_busy(struct qla_hw_data *ha) qla82xx_wait_rom_busy(struct qla_hw_data *ha)
{ {
@ -924,7 +930,7 @@ qla82xx_rom_fast_read(struct qla_hw_data *ha, int addr, int *valp)
return -1; return -1;
} }
ret = qla82xx_do_rom_fast_read(ha, addr, valp); ret = qla82xx_do_rom_fast_read(ha, addr, valp);
qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK)); qla82xx_rom_unlock(ha);
return ret; return ret;
} }
@ -1056,7 +1062,7 @@ qla82xx_write_flash_dword(struct qla_hw_data *ha, uint32_t flashaddr,
ret = qla82xx_flash_wait_write_finish(ha); ret = qla82xx_flash_wait_write_finish(ha);
done_write: done_write:
qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK)); qla82xx_rom_unlock(ha);
return ret; return ret;
} }
@ -1081,12 +1087,26 @@ qla82xx_pinit_from_rom(scsi_qla_host_t *vha)
/* Halt all the indiviual PEGs and other blocks of the ISP */ /* Halt all the indiviual PEGs and other blocks of the ISP */
qla82xx_rom_lock(ha); qla82xx_rom_lock(ha);
/* mask all niu interrupts */ /* disable all I2Q */
qla82xx_wr_32(ha, QLA82XX_CRB_I2Q + 0x10, 0x0);
qla82xx_wr_32(ha, QLA82XX_CRB_I2Q + 0x14, 0x0);
qla82xx_wr_32(ha, QLA82XX_CRB_I2Q + 0x18, 0x0);
qla82xx_wr_32(ha, QLA82XX_CRB_I2Q + 0x1c, 0x0);
qla82xx_wr_32(ha, QLA82XX_CRB_I2Q + 0x20, 0x0);
qla82xx_wr_32(ha, QLA82XX_CRB_I2Q + 0x24, 0x0);
/* disable all niu interrupts */
qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x40, 0xff); qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x40, 0xff);
/* disable xge rx/tx */ /* disable xge rx/tx */
qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x70000, 0x00); qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x70000, 0x00);
/* disable xg1 rx/tx */ /* disable xg1 rx/tx */
qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x80000, 0x00); qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x80000, 0x00);
/* disable sideband mac */
qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x90000, 0x00);
/* disable ap0 mac */
qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0xa0000, 0x00);
/* disable ap1 mac */
qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0xb0000, 0x00);
/* halt sre */ /* halt sre */
val = qla82xx_rd_32(ha, QLA82XX_CRB_SRE + 0x1000); val = qla82xx_rd_32(ha, QLA82XX_CRB_SRE + 0x1000);
@@ -1101,6 +1121,7 @@ qla82xx_pinit_from_rom(scsi_qla_host_t *vha)
 	qla82xx_wr_32(ha, QLA82XX_CRB_TIMER + 0x10, 0x0);
 	qla82xx_wr_32(ha, QLA82XX_CRB_TIMER + 0x18, 0x0);
 	qla82xx_wr_32(ha, QLA82XX_CRB_TIMER + 0x100, 0x0);
+	qla82xx_wr_32(ha, QLA82XX_CRB_TIMER + 0x200, 0x0);
 
 	/* halt pegs */
 	qla82xx_wr_32(ha, QLA82XX_CRB_PEG_NET_0 + 0x3c, 1);
@@ -1108,9 +1129,9 @@ qla82xx_pinit_from_rom(scsi_qla_host_t *vha)
 	qla82xx_wr_32(ha, QLA82XX_CRB_PEG_NET_2 + 0x3c, 1);
 	qla82xx_wr_32(ha, QLA82XX_CRB_PEG_NET_3 + 0x3c, 1);
 	qla82xx_wr_32(ha, QLA82XX_CRB_PEG_NET_4 + 0x3c, 1);
+	msleep(20);
 
 	/* big hammer */
-	msleep(1000);
 	if (test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags))
 		/* don't reset CAM block on reset */
 		qla82xx_wr_32(ha, QLA82XX_ROMUSB_GLB_SW_RESET, 0xfeffffff);
@@ -1129,7 +1150,7 @@ qla82xx_pinit_from_rom(scsi_qla_host_t *vha)
 	qla82xx_wr_32(ha, QLA82XX_CRB_QDR_NET + 0xe4, val);
 	msleep(20);
 
-	qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK));
+	qla82xx_rom_unlock(ha);
 
 	/* Read the signature value from the flash.
 	 * Offset 0: Contain signature (0xcafecafe)
@@ -2395,9 +2416,13 @@ qla82xx_load_fw(scsi_qla_host_t *vha)
 		if (qla82xx_fw_load_from_flash(ha) == QLA_SUCCESS) {
 			qla_printk(KERN_ERR, ha,
 			    "Firmware loaded successfully from flash\n");
 			return QLA_SUCCESS;
+		} else {
+			qla_printk(KERN_ERR, ha,
+			    "Firmware load from flash failed\n");
 		}
 
 try_blob_fw:
 	qla_printk(KERN_INFO, ha,
 	    "Attempting to load firmware from blob\n");
@@ -2548,11 +2573,11 @@ qla2xx_build_scsi_type_6_iocbs(srb_t *sp, struct cmd_type_6 *cmd_pkt,
 			dsd_seg = (uint32_t *)&cmd_pkt->fcp_data_dseg_address;
 			*dsd_seg++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
 			*dsd_seg++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-			cmd_pkt->fcp_data_dseg_len = dsd_list_len;
+			*dsd_seg++ = cpu_to_le32(dsd_list_len);
 		} else {
 			*cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
 			*cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-			*cur_dsd++ = dsd_list_len;
+			*cur_dsd++ = cpu_to_le32(dsd_list_len);
 		}
 		cur_dsd = (uint32_t *)next_dsd;
 		while (avail_dsds) {
@@ -2991,7 +3016,7 @@ qla82xx_unprotect_flash(struct qla_hw_data *ha)
 		qla_printk(KERN_WARNING, ha, "Write disable failed\n");
 
 done_unprotect:
-	qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK));
+	qla82xx_rom_unlock(ha);
 	return ret;
 }
@@ -3020,7 +3045,7 @@ qla82xx_protect_flash(struct qla_hw_data *ha)
 	if (qla82xx_write_disable_flash(ha) != 0)
 		qla_printk(KERN_WARNING, ha, "Write disable failed\n");
 
 done_protect:
-	qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK));
+	qla82xx_rom_unlock(ha);
 	return ret;
 }
@@ -3048,7 +3073,7 @@ qla82xx_erase_sector(struct qla_hw_data *ha, int addr)
 	}
 	ret = qla82xx_flash_wait_write_finish(ha);
 
 done:
-	qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK));
+	qla82xx_rom_unlock(ha);
 	return ret;
 }
@@ -3228,7 +3253,7 @@ void qla82xx_rom_lock_recovery(struct qla_hw_data *ha)
 	 * else died while holding it.
 	 * In either case, unlock.
 	 */
-	qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM2_UNLOCK));
+	qla82xx_rom_unlock(ha);
 }
 
 /*
@@ -3528,15 +3553,18 @@ int
 qla82xx_device_state_handler(scsi_qla_host_t *vha)
 {
 	uint32_t dev_state;
+	uint32_t old_dev_state;
 	int rval = QLA_SUCCESS;
 	unsigned long dev_init_timeout;
 	struct qla_hw_data *ha = vha->hw;
+	int loopcount = 0;
 
 	qla82xx_idc_lock(ha);
 	if (!vha->flags.init_done)
 		qla82xx_set_drv_active(vha);
 
 	dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE);
+	old_dev_state = dev_state;
 	qla_printk(KERN_INFO, ha, "1:Device state is 0x%x = %s\n", dev_state,
 	    dev_state < MAX_STATES ? qdev_state[dev_state] : "Unknown");
@@ -3553,10 +3581,16 @@ qla82xx_device_state_handler(scsi_qla_host_t *vha)
 			break;
 		}
 		dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE);
-		qla_printk(KERN_INFO, ha,
-		    "2:Device state is 0x%x = %s\n", dev_state,
-		    dev_state < MAX_STATES ?
-		    qdev_state[dev_state] : "Unknown");
+		if (old_dev_state != dev_state) {
+			loopcount = 0;
+			old_dev_state = dev_state;
+		}
+		if (loopcount < 5) {
+			qla_printk(KERN_INFO, ha,
+			    "2:Device state is 0x%x = %s\n", dev_state,
+			    dev_state < MAX_STATES ?
+			    qdev_state[dev_state] : "Unknown");
+		}
 
 		switch (dev_state) {
 		case QLA82XX_DEV_READY:
@@ -3570,6 +3604,7 @@ qla82xx_device_state_handler(scsi_qla_host_t *vha)
 			qla82xx_idc_lock(ha);
 			break;
 		case QLA82XX_DEV_NEED_RESET:
+			if (!ql2xdontresethba)
 			qla82xx_need_reset_handler(vha);
 			dev_init_timeout = jiffies +
 			    (ha->nx_dev_init_timeout * HZ);
@@ -3604,6 +3639,7 @@ qla82xx_device_state_handler(scsi_qla_host_t *vha)
 			msleep(1000);
 			qla82xx_idc_lock(ha);
 		}
+		loopcount++;
 	}
 exit:
 	qla82xx_idc_unlock(ha);
@@ -3621,7 +3657,8 @@ void qla82xx_watchdog(scsi_qla_host_t *vha)
 	if (dev_state == QLA82XX_DEV_NEED_RESET &&
 	    !test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags)) {
 		qla_printk(KERN_WARNING, ha,
-		    "%s(): Adapter reset needed!\n", __func__);
+		    "scsi(%ld) %s: Adapter reset needed!\n",
+		    vha->host_no, __func__);
 		set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
 		qla2xxx_wake_dpc(vha);
 	} else if (dev_state == QLA82XX_DEV_NEED_QUIESCENT &&
@@ -3632,10 +3669,27 @@ void qla82xx_watchdog(scsi_qla_host_t *vha)
 		set_bit(ISP_QUIESCE_NEEDED, &vha->dpc_flags);
 		qla2xxx_wake_dpc(vha);
 	} else {
-		qla82xx_check_fw_alive(vha);
 		if (qla82xx_check_fw_alive(vha)) {
 			halt_status = qla82xx_rd_32(ha,
 			    QLA82XX_PEG_HALT_STATUS1);
+			qla_printk(KERN_INFO, ha,
+			    "scsi(%ld): %s, Dumping hw/fw registers:\n "
+			    " PEG_HALT_STATUS1: 0x%x, PEG_HALT_STATUS2: 0x%x,\n "
+			    " PEG_NET_0_PC: 0x%x, PEG_NET_1_PC: 0x%x,\n "
+			    " PEG_NET_2_PC: 0x%x, PEG_NET_3_PC: 0x%x,\n "
+			    " PEG_NET_4_PC: 0x%x\n",
+			    vha->host_no, __func__, halt_status,
+			    qla82xx_rd_32(ha, QLA82XX_PEG_HALT_STATUS2),
+			    qla82xx_rd_32(ha,
+				QLA82XX_CRB_PEG_NET_0 + 0x3c),
+			    qla82xx_rd_32(ha,
+				QLA82XX_CRB_PEG_NET_1 + 0x3c),
+			    qla82xx_rd_32(ha,
+				QLA82XX_CRB_PEG_NET_2 + 0x3c),
+			    qla82xx_rd_32(ha,
+				QLA82XX_CRB_PEG_NET_3 + 0x3c),
+			    qla82xx_rd_32(ha,
+				QLA82XX_CRB_PEG_NET_4 + 0x3c));
 			if (halt_status & HALT_STATUS_UNRECOVERABLE) {
 				set_bit(ISP_UNRECOVERABLE,
 				    &vha->dpc_flags);
@@ -3651,8 +3705,9 @@ void qla82xx_watchdog(scsi_qla_host_t *vha)
 				if (ha->flags.mbox_busy) {
 					ha->flags.mbox_int = 1;
 					DEBUG2(qla_printk(KERN_ERR, ha,
-					    "Due to fw hung, doing premature "
-					    "completion of mbx command\n"));
+					    "scsi(%ld) Due to fw hung, doing "
+					    "premature completion of mbx "
+					    "command\n", vha->host_no));
 					if (test_bit(MBX_INTR_WAIT,
 					    &ha->mbx_cmd_flags))
 						complete(&ha->mbx_intr_comp);

View File

@@ -1,6 +1,6 @@
 /*
  * QLogic Fibre Channel HBA Driver
- * Copyright (c) 2003-2010 QLogic Corporation
+ * Copyright (c) 2003-2011 QLogic Corporation
  *
  * See LICENSE.qla2xxx for copyright and licensing details.
 */

View File

@@ -1,6 +1,6 @@
 /*
  * QLogic Fibre Channel HBA Driver
- * Copyright (c) 2003-2010 QLogic Corporation
+ * Copyright (c) 2003-2011 QLogic Corporation
  *
  * See LICENSE.qla2xxx for copyright and licensing details.
 */
@@ -164,6 +164,20 @@ module_param(ql2xasynctmfenable, int, S_IRUGO);
 MODULE_PARM_DESC(ql2xasynctmfenable,
 		"Enables issue of TM IOCBs asynchronously via IOCB mechanism"
 		"Default is 0 - Issue TM IOCBs via mailbox mechanism.");
+
+int ql2xdontresethba;
+module_param(ql2xdontresethba, int, S_IRUGO);
+MODULE_PARM_DESC(ql2xdontresethba,
+	"Option to specify reset behaviour\n"
+	" 0 (Default) -- Reset on failure.\n"
+	" 1 -- Do not reset on failure.\n");
+
+uint ql2xmaxlun = MAX_LUNS;
+module_param(ql2xmaxlun, uint, S_IRUGO);
+MODULE_PARM_DESC(ql2xmaxlun,
+	"Defines the maximum LU number to register with the SCSI "
+	"midlayer. Default is 65535.");
+
 /*
  * SCSI host template entry points
  */
@@ -528,7 +542,7 @@ qla2x00_get_new_sp(scsi_qla_host_t *vha, fc_port_t *fcport,
 static int
 qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 {
-	scsi_qla_host_t *vha = shost_priv(cmd->device->host);
+	scsi_qla_host_t *vha = shost_priv(host);
 	fc_port_t *fcport = (struct fc_port *) cmd->device->hostdata;
 	struct fc_rport *rport = starget_to_rport(scsi_target(cmd->device));
 	struct qla_hw_data *ha = vha->hw;
@@ -2128,7 +2142,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	else
 		host->max_cmd_len = MAX_CMDSZ;
 	host->max_channel = MAX_BUSES - 1;
-	host->max_lun = MAX_LUNS;
+	host->max_lun = ql2xmaxlun;
 	host->transportt = qla2xxx_transport_template;
 	sht->vendor_id = (SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_QLOGIC);
@@ -2360,21 +2374,26 @@ qla2x00_remove_one(struct pci_dev *pdev)
 	base_vha = pci_get_drvdata(pdev);
 	ha = base_vha->hw;
 
-	spin_lock_irqsave(&ha->vport_slock, flags);
-	list_for_each_entry(vha, &ha->vp_list, list) {
-		atomic_inc(&vha->vref_count);
-		if (vha->fc_vport) {
-			spin_unlock_irqrestore(&ha->vport_slock, flags);
-			fc_vport_terminate(vha->fc_vport);
-			spin_lock_irqsave(&ha->vport_slock, flags);
-		}
-		atomic_dec(&vha->vref_count);
+	mutex_lock(&ha->vport_lock);
+	while (ha->cur_vport_count) {
+		struct Scsi_Host *scsi_host;
+
+		spin_lock_irqsave(&ha->vport_slock, flags);
+
+		BUG_ON(base_vha->list.next == &ha->vp_list);
+		/* This assumes first entry in ha->vp_list is always base vha */
+		vha = list_first_entry(&base_vha->list, scsi_qla_host_t, list);
+		scsi_host = scsi_host_get(vha->host);
+
+		spin_unlock_irqrestore(&ha->vport_slock, flags);
+		mutex_unlock(&ha->vport_lock);
+
+		fc_vport_terminate(vha->fc_vport);
+		scsi_host_put(vha->host);
+
+		mutex_lock(&ha->vport_lock);
 	}
-	spin_unlock_irqrestore(&ha->vport_slock, flags);
+	mutex_unlock(&ha->vport_lock);
 
 	set_bit(UNLOADING, &base_vha->dpc_flags);
@@ -2544,7 +2563,7 @@ void qla2x00_mark_device_lost(scsi_qla_host_t *vha, fc_port_t *fcport,
 {
 	if (atomic_read(&fcport->state) == FCS_ONLINE &&
 	    vha->vp_idx == fcport->vp_idx) {
-		atomic_set(&fcport->state, FCS_DEVICE_LOST);
+		qla2x00_set_fcport_state(fcport, FCS_DEVICE_LOST);
 		qla2x00_schedule_rport_del(vha, fcport, defer);
 	}
 	/*
@@ -2552,7 +2571,7 @@ void qla2x00_mark_device_lost(scsi_qla_host_t *vha, fc_port_t *fcport,
 	 * port but do the retries.
 	 */
 	if (atomic_read(&fcport->state) != FCS_DEVICE_DEAD)
-		atomic_set(&fcport->state, FCS_DEVICE_LOST);
+		qla2x00_set_fcport_state(fcport, FCS_DEVICE_LOST);
 
 	if (!do_login)
 		return;
@@ -2607,7 +2626,7 @@ qla2x00_mark_all_devices_lost(scsi_qla_host_t *vha, int defer)
 		if (atomic_read(&fcport->state) == FCS_DEVICE_DEAD)
 			continue;
 		if (atomic_read(&fcport->state) == FCS_ONLINE) {
-			atomic_set(&fcport->state, FCS_DEVICE_LOST);
+			qla2x00_set_fcport_state(fcport, FCS_DEVICE_LOST);
 			if (defer)
 				qla2x00_schedule_rport_del(vha, fcport, defer);
 			else if (vha->vp_idx == fcport->vp_idx)
@@ -3214,6 +3233,17 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
 			    fcport->d_id.b.area,
 			    fcport->d_id.b.al_pa);
 
+			if (fcport->loop_id == FC_NO_LOOP_ID) {
+				fcport->loop_id = next_loopid =
+				    ha->min_external_loopid;
+				status = qla2x00_find_new_loop_id(
+				    vha, fcport);
+				if (status != QLA_SUCCESS) {
+					/* Ran out of IDs to use */
+					break;
+				}
+			}
+
 			if (IS_ALOGIO_CAPABLE(ha)) {
 				fcport->flags |= FCF_ASYNC_SENT;
 				data[0] = 0;
@@ -3604,7 +3634,8 @@ qla2x00_timer(scsi_qla_host_t *vha)
 	if (!pci_channel_offline(ha->pdev))
 		pci_read_config_word(ha->pdev, PCI_VENDOR_ID, &w);
 
-	if (IS_QLA82XX(ha)) {
+	/* Make sure qla82xx_watchdog is run only for physical port */
+	if (!vha->vp_idx && IS_QLA82XX(ha)) {
 		if (test_bit(ISP_QUIESCE_NEEDED, &vha->dpc_flags))
 			start_dpc++;
 		qla82xx_watchdog(vha);
@@ -3612,7 +3643,8 @@ qla2x00_timer(scsi_qla_host_t *vha)
 	/* Loop down handler. */
 	if (atomic_read(&vha->loop_down_timer) > 0 &&
-	    !(test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags))
+	    !(test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags)) &&
+	    !(test_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags))
 	    && vha->flags.online) {
 
 		if (atomic_read(&vha->loop_down_timer) ==
@@ -3648,7 +3680,11 @@ qla2x00_timer(scsi_qla_host_t *vha)
 				if (!(sfcp->flags & FCF_FCP2_DEVICE))
 					continue;
 
-				set_bit(ISP_ABORT_NEEDED,
+				if (IS_QLA82XX(ha))
+					set_bit(FCOE_CTX_RESET_NEEDED,
+					    &vha->dpc_flags);
+				else
+					set_bit(ISP_ABORT_NEEDED,
 					&vha->dpc_flags);
 				break;
 			}
@@ -3667,7 +3703,12 @@ qla2x00_timer(scsi_qla_host_t *vha)
 			qla_printk(KERN_WARNING, ha,
 			    "Loop down - aborting ISP.\n");
 
-			set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+			if (IS_QLA82XX(ha))
+				set_bit(FCOE_CTX_RESET_NEEDED,
+				    &vha->dpc_flags);
+			else
+				set_bit(ISP_ABORT_NEEDED,
+				    &vha->dpc_flags);
 		}
 	}
 	DEBUG3(printk("scsi(%ld): Loop Down - seconds remaining %d\n",
@@ -3675,8 +3716,8 @@ qla2x00_timer(scsi_qla_host_t *vha)
 	    atomic_read(&vha->loop_down_timer)));
 	}
 
-	/* Check if beacon LED needs to be blinked */
-	if (ha->beacon_blink_led == 1) {
+	/* Check if beacon LED needs to be blinked for physical host only */
+	if (!vha->vp_idx && (ha->beacon_blink_led == 1)) {
 		set_bit(BEACON_BLINK_NEEDED, &vha->dpc_flags);
 		start_dpc++;
 	}

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

View File

@ -1,6 +1,6 @@
/* /*
* QLogic Fibre Channel HBA Driver * QLogic Fibre Channel HBA Driver
* Copyright (c) 2003-2010 QLogic Corporation * Copyright (c) 2003-2011 QLogic Corporation
* *
* See LICENSE.qla2xxx for copyright and licensing details. * See LICENSE.qla2xxx for copyright and licensing details.
*/ */

Some files were not shown because too many files have changed in this diff.