SCSI misc on 20170907

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:

 "This is mostly updates of the usual suspects: lpfc, qla2xxx, hisi_sas,
  megaraid_sas, zfcp and a host of minor updates.

  The major driver change here is the elimination of the block-based
  cciss driver in favour of the SCSI-based hpsa driver (which now drives
  all the legacy cases cciss used to be required for), plus a reset
  handler clean-up and the redo of the SAS SMP handler to use bsg-lib."

Signed-off-by: James E.J. Bottomley <jejb@linux.vnet.ibm.com>

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAABAgAGBQJZscDNAAoJEAVr7HOZEZN4DWIQAK/UkkrvKpV/jLATM/yi7CoL
QidY86Hmwwl7A9HQ+2fjLfAsye0xcCzRwkucKK90IP5b4pefHhiJJfiMKAAe3TUW
xstnY5z5jaOhDG4nyJFoSm5fH5qXkMnJ8NZRK8f6Qg5yBN5dStEKqoBboNsz4KBI
md7idw0mbp5i2GXlJwSpc5eDS97GiPL6WkwgGaGKfXF1NDau0GbEdjijfz55haCD
pMhY7WJh/71RfOq/1ThXT1Z3khOlVcKXrkdO+602n7zh/klRBRtBC8m2a6xCfZPj
n7Pb/s0jhCQPd+e/Xtv7WEbY8uNOCrGoVgZ6U5EGrT5IeTfep24ackYqerjMhE63
esi4BJY8lUP9SGleLMgjYWyCHdmxBJRa7UI614DWN/H0QoGP6j/2EzGoi5Fw04vC
H8/+aqPPWZc9KUBioRYo8xWO8YgMqL2eyXY+Tc9cwxqAe2T6k/NC1zJVgDFKXfzb
QoWW4v9NNmYwf5vL/7tNgkeTMFQV66yUR7dR3SGTSk8UIrJ40ok0JyUAsDg86ZAH
BfMkWwhWQ6Byoel0Y7Ti88T49Cox/64r/I0ux06Qgg99+KpRLT7z20+GLIEHgXxg
116C39rgvYKqzc7W8RCyj8qSROuMVzg6QFbB6n+1PEsYIX2O8A2Re3jdS34q2LbX
aBDm/Lfdl4kkJrV9xY6P
=nQUG
-----END PGP SIGNATURE-----

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (279 commits)
  scsi: scsi-mq: Always unprepare before requeuing a request
  scsi: Show .retries and .jiffies_at_alloc in debugfs
  scsi: Improve requeuing behavior
  scsi: Call scsi_initialize_rq() for filesystem requests
  scsi: qla2xxx: Reset the logo flag, after target re-login.
  scsi: qla2xxx: Fix slow mem alloc behind lock
  scsi: qla2xxx: Clear fc4f_nvme flag
  scsi: qla2xxx: add missing includes for qla_isr
  scsi: qla2xxx: Fix an integer overflow in sysfs code
  scsi: aacraid: report -ENOMEM to upper layer from aac_convert_sgraw2()
  scsi: aacraid: get rid of one level of indentation
  scsi: aacraid: fix indentation errors
  scsi: storvsc: fix memory leak on ring buffer busy
  scsi: scsi_transport_sas: switch to bsg-lib for SMP passthrough
  scsi: smartpqi: remove the smp_handler stub
  scsi: hpsa: remove the smp_handler stub
  scsi: bsg-lib: pass the release callback through bsg_setup_queue
  scsi: Rework handling of scsi_device.vpd_pg8[03]
  scsi: Rework the code for caching Vital Product Data (VPD)
  scsi: rcu: Introduce rcu_swap_protected()
  ...
This commit is contained in commit 572c01ba19.
Documentation/blockdev/cciss.txt (deleted):
@@ -1,194 +0,0 @@

This driver is for Compaq's SMART Array Controllers.

Supported Cards:
----------------

This driver is known to work with the following cards:

	* SA 5300
	* SA 5i
	* SA 532
	* SA 5312
	* SA 641
	* SA 642
	* SA 6400
	* SA 6400 U320 Expansion Module
	* SA 6i
	* SA P600
	* SA P800
	* SA E400
	* SA P400i
	* SA E200
	* SA E200i
	* SA E500
	* SA P700m
	* SA P212
	* SA P410
	* SA P410i
	* SA P411
	* SA P812
	* SA P712m
	* SA P711m
Detecting drive failures:
-------------------------

To get the status of logical volumes and to detect physical drive
failures, you can use the cciss_vol_status program found here:
http://cciss.sourceforge.net/#cciss_utils
Device Naming:
--------------

If nodes are not already created in the /dev/cciss directory, run as root:

	# cd /dev
	# ./MAKEDEV cciss

You need some entries in /dev for the cciss device. The MAKEDEV script
can make device nodes for you automatically. Currently the device setup
is as follows:

Major numbers:
	104	cciss0
	105	cciss1
	106	cciss2
	107	cciss3
	108	cciss4
	109	cciss5
	110	cciss6
	111	cciss7

Minor numbers:
	 b7 b6 b5 b4 b3 b2 b1 b0
	|----+----| |----+----|
	     |           |
	     |           +-------- Partition ID (0=wholedev, 1-15 partition)
	     |
	     +-------------------- Logical Volume number

The device naming scheme is:
	/dev/cciss/c0d0		Controller 0, disk 0, whole device
	/dev/cciss/c0d0p1	Controller 0, disk 0, partition 1
	/dev/cciss/c0d0p2	Controller 0, disk 0, partition 2
	/dev/cciss/c0d0p3	Controller 0, disk 0, partition 3

	/dev/cciss/c1d1		Controller 1, disk 1, whole device
	/dev/cciss/c1d1p1	Controller 1, disk 1, partition 1
	/dev/cciss/c1d1p2	Controller 1, disk 1, partition 2
	/dev/cciss/c1d1p3	Controller 1, disk 1, partition 3
CCISS simple mode support
-------------------------

The "cciss_simple_mode=1" boot parameter may be used to prevent the driver
from putting the controller into "performant" mode. The difference is that
with simple mode, each command completion requires an interrupt, while with
"performant" mode (the default, and ordinarily the better-performing one) it
is possible to have multiple command completions indicated by a single
interrupt.
SCSI tape drive and medium changer support
------------------------------------------

SCSI sequential access devices and medium changer devices are supported and
appropriate device nodes are automatically created (e.g.
/dev/st0, /dev/st1, etc.; see the "st" man page for more details).
You must enable "SCSI tape drive support for Smart Array 5xxx" and
"SCSI support" in your kernel configuration to be able to use SCSI
tape drives with your Smart Array 5xxx controller.

Additionally, note that the driver will engage the SCSI core at init
time if any tape drives or medium changers are detected. The driver may
also be directed to dynamically engage the SCSI core via the /proc filesystem
entry which the "block" side of the driver creates as
/proc/driver/cciss/cciss* at runtime. This is best done via a script.

For example:

	for x in /proc/driver/cciss/cciss[0-9]*
	do
		echo "engage scsi" > $x
	done

Once the SCSI core is engaged by the driver, it cannot be disengaged
(except by unloading the driver, if it happens to be linked as a module).

Note also that if no sequential access devices or medium changers are
detected, the SCSI core will not be engaged by the action of the above
script.
Hot plug support for SCSI tape drives
-------------------------------------

Hot plugging of SCSI tape drives is supported, with some caveats.
The cciss driver must be informed that changes to the SCSI bus
have been made. This may be done via the /proc filesystem.
For example:

	echo "rescan" > /proc/scsi/cciss0/1

This causes the driver to query the adapter about changes to the
physical SCSI buses and/or fibre channel arbitrated loop, and the
driver to make note of any new or removed sequential access devices
or medium changers. The driver will output messages indicating which
devices have been added or removed and the controller, bus, target and
lun used to address each device. It then notifies the SCSI mid layer
of these changes.

Note that the naming convention of the /proc filesystem entries
contains a number in addition to the driver name (e.g. "cciss0"
instead of just "cciss", which you might expect).

Note: ONLY sequential access devices and medium changers are presented
as SCSI devices to the SCSI mid layer by the cciss driver. Specifically,
physical SCSI disk drives are NOT presented to the SCSI mid layer. The
physical SCSI disk drives are controlled directly by the array controller
hardware, and it is important to prevent the kernel from attempting to
access these devices directly too, as if the array controller were merely a
SCSI controller, in the same way that it is allowed to access SCSI tape
drives.
SCSI error handling for tape drives and medium changers
-------------------------------------------------------

The Linux SCSI mid layer provides an error-handling protocol which
kicks into gear whenever a SCSI command fails to complete within a
certain amount of time (which can vary depending on the command).
The cciss driver participates in this protocol to some extent. The
normal protocol is a four-step process: first the device is told
to abort the command; if that doesn't work, the device is reset;
if that doesn't work, the SCSI bus is reset; and if that doesn't work,
the host bus adapter is reset. Because the cciss driver is a block
driver as well as a SCSI driver, only the tape drives and medium
changers are presented to the SCSI mid layer, and, unlike with more
straightforward SCSI drivers, disk i/o continues through the block
side during the SCSI error-recovery process. The cciss driver therefore
only implements the first two of these actions: aborting the command and
resetting the device. Additionally, most tape drives will not oblige
in aborting commands, and sometimes it appears they will not even
obey a reset command, though in most circumstances they will. If
the command cannot be aborted and the device cannot be reset, the
device will be set offline.

In the event the error-handling code is triggered and a tape drive is
successfully reset or the tardy command is successfully aborted, the
tape drive may still not allow i/o to continue until some command
is issued which positions the tape to a known position. Typically you
must rewind the tape (by issuing "mt -f /dev/st0 rewind", for example)
before i/o can proceed again to a tape drive which was reset.
There is a cciss_tape_cmds module parameter which can be used to make cciss
allocate more commands for use by tape drives. Ordinarily only a few commands
(6) are allocated for tape drives, because tape drives are slow and
infrequently used, and the primary purpose of Smart Array controllers is to
act as a RAID controller for disk drives, so the vast majority of commands
are allocated for disk devices. However, if you have more than a few tape
drives attached to a Smart Array, the default number of commands may not be
enough (for example, if you have 8 tape drives, you could only rewind 6
at one time with the default number of commands). The cciss_tape_cmds module
parameter allows more commands (up to 16 more) to be allocated for use by
tape drives. For example:

	insmod cciss.ko cciss_tape_cmds=16

Or, as a kernel boot parameter passed in via grub:

	cciss.cciss_tape_cmds=8
MAINTAINERS (14 lines changed)

@@ -6093,16 +6093,6 @@ F:	drivers/scsi/hpsa*.[ch]
 F:	include/linux/cciss*.h
 F:	include/uapi/linux/cciss*.h
 
-HEWLETT-PACKARD SMART CISS RAID DRIVER (cciss)
-M:	Don Brace <don.brace@microsemi.com>
-L:	esc.storagedev@microsemi.com
-L:	linux-scsi@vger.kernel.org
-S:	Supported
-F:	Documentation/blockdev/cciss.txt
-F:	drivers/block/cciss*
-F:	include/linux/cciss_ioctl.h
-F:	include/uapi/linux/cciss_ioctl.h
-
 HFI1 DRIVER
 M:	Mike Marciniszyn <mike.marciniszyn@intel.com>
 M:	Dennis Dalessandro <dennis.dalessandro@intel.com>
@@ -11609,6 +11599,7 @@ F: drivers/s390/crypto/
 
 S390 ZFCP DRIVER
 M:	Steffen Maier <maier@linux.vnet.ibm.com>
+M:	Benjamin Block <bblock@linux.vnet.ibm.com>
 L:	linux-s390@vger.kernel.org
 W:	http://www.ibm.com/developerworks/linux/linux390/
 S:	Supported
@@ -13695,8 +13686,7 @@ F: Documentation/scsi/ufs.txt
 F:	drivers/scsi/ufs/
 
 UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER DWC HOOKS
 M:	Manjunath M Bettegowda <manjumb@synopsys.com>
 M:	Prabu Thangamuthu <prabut@synopsys.com>
 M:	Joao Pinto <jpinto@synopsys.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/ufs/*dwc*
block/bsg-lib.c

@@ -239,8 +239,9 @@ static void bsg_exit_rq(struct request_queue *q, struct request *req)
  * @job_fn: bsg job handler
  * @dd_job_size: size of LLD data needed for each job
  */
-struct request_queue *bsg_setup_queue(struct device *dev, char *name,
-		bsg_job_fn *job_fn, int dd_job_size)
+struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
+		bsg_job_fn *job_fn, int dd_job_size,
+		void (*release)(struct device *))
 {
 	struct request_queue *q;
 	int ret;

@@ -264,7 +265,7 @@ struct request_queue *bsg_setup_queue(struct device *dev, char *name,
 	blk_queue_softirq_done(q, bsg_softirq_done);
 	blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT);
 
-	ret = bsg_register_queue(q, dev, name, NULL);
+	ret = bsg_register_queue(q, dev, name, release);
 	if (ret) {
 		printk(KERN_ERR "%s: bsg interface failed to "
 			"initialize - register queue\n", dev->kobj.name);
drivers/block/Kconfig

@@ -112,33 +112,6 @@ source "drivers/block/mtip32xx/Kconfig"
 
 source "drivers/block/zram/Kconfig"
 
-config BLK_CPQ_CISS_DA
-	tristate "Compaq Smart Array 5xxx support"
-	depends on PCI
-	select CHECK_SIGNATURE
-	select BLK_SCSI_REQUEST
-	help
-	  This is the driver for Compaq Smart Array 5xxx controllers.
-	  Everyone using these boards should say Y here.
-	  See <file:Documentation/blockdev/cciss.txt> for the current list of
-	  boards supported by this driver, and for further information
-	  on the use of this driver.
-
-config CISS_SCSI_TAPE
-	bool "SCSI tape drive support for Smart Array 5xxx"
-	depends on BLK_CPQ_CISS_DA && PROC_FS
-	depends on SCSI=y || SCSI=BLK_CPQ_CISS_DA
-	help
-	  When enabled (Y), this option allows SCSI tape drives and SCSI medium
-	  changers (tape robots) to be accessed via a Compaq 5xxx array
-	  controller.  (See <file:Documentation/blockdev/cciss.txt> for more details.)
-
-	  "SCSI support" and "SCSI tape support" must also be enabled for this
-	  option to work.
-
-	  When this option is disabled (N), the SCSI portion of the driver
-	  is not compiled.
-
 config BLK_DEV_DAC960
 	tristate "Mylex DAC960/DAC1100 PCI RAID Controller support"
 	depends on PCI
drivers/block/Makefile

@@ -15,7 +15,6 @@ obj-$(CONFIG_ATARI_FLOPPY)	+= ataflop.o
 obj-$(CONFIG_AMIGA_Z2RAM)	+= z2ram.o
 obj-$(CONFIG_BLK_DEV_RAM)	+= brd.o
 obj-$(CONFIG_BLK_DEV_LOOP)	+= loop.o
-obj-$(CONFIG_BLK_CPQ_CISS_DA)	+= cciss.o
 obj-$(CONFIG_BLK_DEV_DAC960)	+= DAC960.o
 obj-$(CONFIG_XILINX_SYSACE)	+= xsysace.o
 obj-$(CONFIG_CDROM_PKTCDVD)	+= pktcdvd.o
drivers/block/cciss.c (deleted; file diff suppressed because it is too large)
drivers/block/cciss.h (deleted):
@@ -1,433 +0,0 @@

#ifndef CCISS_H
#define CCISS_H

#include <linux/genhd.h>
#include <linux/mutex.h>

#include "cciss_cmd.h"


#define NWD_SHIFT	4
#define MAX_PART	(1 << NWD_SHIFT)

#define IO_OK		0
#define IO_ERROR	1
#define IO_NEEDS_RETRY	3

#define VENDOR_LEN	8
#define MODEL_LEN	16
#define REV_LEN		4

struct ctlr_info;
typedef struct ctlr_info ctlr_info_t;

struct access_method {
	void (*submit_command)(ctlr_info_t *h, CommandList_struct *c);
	void (*set_intr_mask)(ctlr_info_t *h, unsigned long val);
	unsigned long (*fifo_full)(ctlr_info_t *h);
	bool (*intr_pending)(ctlr_info_t *h);
	unsigned long (*command_completed)(ctlr_info_t *h);
};
typedef struct _drive_info_struct
{
	unsigned char LunID[8];
	int	usage_count;
	struct request_queue *queue;
	sector_t nr_blocks;
	int	block_size;
	int	heads;
	int	sectors;
	int	cylinders;
	int	raid_level;		/* set to -1 to indicate that
					 * the drive is not in use/configured
					 */
	int	busy_configuring;	/* This is set when a drive is being
					 * removed, to prevent it from being
					 * opened or its queue from being
					 * started.
					 */
	struct device dev;
	__u8 serial_no[16];		/* from inquiry page 0x83,
					 * not necessarily null-terminated.
					 */
	char vendor[VENDOR_LEN + 1];	/* SCSI vendor string */
	char model[MODEL_LEN + 1];	/* SCSI model string */
	char rev[REV_LEN + 1];		/* SCSI revision string */
	char device_initialized;	/* indicates whether dev is initialized */
} drive_info_struct;
struct ctlr_info
{
	int	ctlr;
	char	devname[8];
	char	*product_name;
	char	firm_ver[4];	/* Firmware version */
	struct pci_dev *pdev;
	__u32	board_id;
	void __iomem *vaddr;
	unsigned long paddr;
	int	nr_cmds;	/* Number of commands allowed on this controller */
	CfgTable_struct __iomem *cfgtable;
	int	interrupts_enabled;
	int	major;
	int	max_commands;
	int	commands_outstanding;
	int	max_outstanding;	/* Debug */
	int	num_luns;
	int	highest_lun;
	int	usage_count;	/* number of opens on all minor devices */
	/* Need space for temp sg list:
	 * number of scatter/gathers supported,
	 * number of scatter/gathers in chained block
	 */
	struct scatterlist **scatter_list;
	int	maxsgentries;
	int	chainsize;
	int	max_cmd_sgentries;
	SGDescriptor_struct **cmd_sg_list;

#	define PERF_MODE_INT	0
#	define DOORBELL_INT	1
#	define SIMPLE_MODE_INT	2
#	define MEMQ_MODE_INT	3
	unsigned int intr[4];
	int	intr_mode;
	int	cciss_max_sectors;
	BYTE	cciss_read;
	BYTE	cciss_write;
	BYTE	cciss_read_capacity;

	/* information about each logical volume */
	drive_info_struct *drv[CISS_MAX_LUN];

	struct access_method access;

	/* queue and queue Info */
	struct list_head reqQ;
	struct list_head cmpQ;
	unsigned int Qdepth;
	unsigned int maxQsinceinit;
	unsigned int maxSG;
	spinlock_t lock;

	/* pointers to command and error info pool */
	CommandList_struct	*cmd_pool;
	dma_addr_t		cmd_pool_dhandle;
	ErrorInfo_struct	*errinfo_pool;
	dma_addr_t		errinfo_pool_dhandle;
	unsigned long		*cmd_pool_bits;
	int			nr_allocs;
	int			nr_frees;
	int			busy_configuring;
	int			busy_initializing;
	int			busy_scanning;
	struct mutex		busy_shutting_down;

	/* This element holds the zero-based queue number of the last
	 * queue to be started.  It is used for fairness.
	 */
	int			next_to_run;

	/* Disk structures we need to pass back */
	struct gendisk *gendisk[CISS_MAX_LUN];
#ifdef CONFIG_CISS_SCSI_TAPE
	struct cciss_scsi_adapter_data_t *scsi_ctlr;
#endif
	unsigned char alive;
	struct list_head scan_list;
	struct completion scan_wait;
	struct device dev;
	/*
	 * Performant mode tables.
	 */
	u32 trans_support;
	u32 trans_offset;
	struct TransTable_struct *transtable;
	unsigned long transMethod;

	/*
	 * Performant mode completion buffer
	 */
	u64 *reply_pool;
	dma_addr_t reply_pool_dhandle;
	u64 *reply_pool_head;
	size_t reply_pool_size;
	unsigned char reply_pool_wraparound;
	u32 *blockFetchTable;
};
/* Defining the different access_methods
 *
 * Memory mapped FIFO interface (SMART 53xx cards)
 */
#define SA5_DOORBELL		0x20
#define SA5_REQUEST_PORT_OFFSET	0x40
#define SA5_REPLY_INTR_MASK_OFFSET	0x34
#define SA5_REPLY_PORT_OFFSET		0x44
#define SA5_INTR_STATUS		0x30
#define SA5_SCRATCHPAD_OFFSET	0xB0

#define SA5_CTCFG_OFFSET	0xB4
#define SA5_CTMEM_OFFSET	0xB8

#define SA5_INTR_OFF		0x08
#define SA5B_INTR_OFF		0x04
#define SA5_INTR_PENDING	0x08
#define SA5B_INTR_PENDING	0x04
#define FIFO_EMPTY		0xffffffff
#define CCISS_FIRMWARE_READY	0xffff0000 /* value in scratchpad register */
/* Perf. mode flags */
#define SA5_PERF_INTR_PENDING	0x04
#define SA5_PERF_INTR_OFF	0x05
#define SA5_OUTDB_STATUS_PERF_BIT	0x01
#define SA5_OUTDB_CLEAR_PERF_BIT	0x01
#define SA5_OUTDB_CLEAR		0xA0
#define SA5_OUTDB_STATUS	0x9C


#define CISS_ERROR_BIT		0x02

#define CCISS_INTR_ON	1
#define CCISS_INTR_OFF	0


/* CCISS_BOARD_READY_WAIT_SECS is how long to wait for a board
 * to become ready, in seconds, before giving up on it.
 * CCISS_BOARD_READY_POLL_INTERVAL_MSECS is how long to wait
 * between polls of the board to see if it is ready, in
 * milliseconds.  CCISS_BOARD_READY_ITERATIONS is derived from
 * the above.
 */
#define CCISS_BOARD_READY_WAIT_SECS (120)
#define CCISS_BOARD_NOT_READY_WAIT_SECS (100)
#define CCISS_BOARD_READY_POLL_INTERVAL_MSECS (100)
#define CCISS_BOARD_READY_ITERATIONS \
	((CCISS_BOARD_READY_WAIT_SECS * 1000) / \
		CCISS_BOARD_READY_POLL_INTERVAL_MSECS)
#define CCISS_BOARD_NOT_READY_ITERATIONS \
	((CCISS_BOARD_NOT_READY_WAIT_SECS * 1000) / \
		CCISS_BOARD_READY_POLL_INTERVAL_MSECS)
#define CCISS_POST_RESET_PAUSE_MSECS (3000)
#define CCISS_POST_RESET_NOOP_INTERVAL_MSECS (4000)
#define CCISS_POST_RESET_NOOP_RETRIES (12)
#define CCISS_POST_RESET_NOOP_TIMEOUT_MSECS (10000)
/*
 * Send the command to the hardware
 */
static void SA5_submit_command(ctlr_info_t *h, CommandList_struct *c)
{
#ifdef CCISS_DEBUG
	printk(KERN_WARNING "cciss%d: Sending %08x - down to controller\n",
			h->ctlr, c->busaddr);
#endif /* CCISS_DEBUG */
	writel(c->busaddr, h->vaddr + SA5_REQUEST_PORT_OFFSET);
	readl(h->vaddr + SA5_SCRATCHPAD_OFFSET);
	h->commands_outstanding++;
	if (h->commands_outstanding > h->max_outstanding)
		h->max_outstanding = h->commands_outstanding;
}
/*
 * This card is the opposite of the other cards.
 *  0 turns interrupts on...
 *  0x08 turns them off...
 */
static void SA5_intr_mask(ctlr_info_t *h, unsigned long val)
{
	if (val) { /* Turn interrupts on */
		h->interrupts_enabled = 1;
		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	} else { /* Turn them off */
		h->interrupts_enabled = 0;
		writel(SA5_INTR_OFF,
			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	}
}

/*
 * This card is the opposite of the other cards.
 *  0 turns interrupts on...
 *  0x04 turns them off...
 */
static void SA5B_intr_mask(ctlr_info_t *h, unsigned long val)
{
	if (val) { /* Turn interrupts on */
		h->interrupts_enabled = 1;
		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	} else { /* Turn them off */
		h->interrupts_enabled = 0;
		writel(SA5B_INTR_OFF,
			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	}
}
/* Performant mode intr_mask */
static void SA5_performant_intr_mask(ctlr_info_t *h, unsigned long val)
{
	if (val) { /* turn on interrupts */
		h->interrupts_enabled = 1;
		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	} else {
		h->interrupts_enabled = 0;
		writel(SA5_PERF_INTR_OFF,
			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	}
}
/*
 * Returns true if fifo is full.
 */
static unsigned long SA5_fifo_full(ctlr_info_t *h)
{
	if (h->commands_outstanding >= h->max_commands)
		return 1;
	else
		return 0;
}
/*
 * Returns value read from hardware.
 * Returns FIFO_EMPTY if there is nothing to read.
 */
static unsigned long SA5_completed(ctlr_info_t *h)
{
	unsigned long register_value
		= readl(h->vaddr + SA5_REPLY_PORT_OFFSET);
	if (register_value != FIFO_EMPTY) {
		h->commands_outstanding--;
#ifdef CCISS_DEBUG
		printk("cciss: Read %lx back from board\n", register_value);
#endif /* CCISS_DEBUG */
	}
#ifdef CCISS_DEBUG
	else {
		printk("cciss: FIFO Empty read\n");
	}
#endif
	return register_value;
}
/* Performant mode command completed */
static unsigned long SA5_performant_completed(ctlr_info_t *h)
{
	unsigned long register_value = FIFO_EMPTY;

	/* flush the controller write of the reply queue by reading
	 * outbound doorbell status register.
	 */
	register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
	/* msi auto clears the interrupt pending bit. */
	if (!(h->pdev->msi_enabled || h->pdev->msix_enabled)) {
		writel(SA5_OUTDB_CLEAR_PERF_BIT, h->vaddr + SA5_OUTDB_CLEAR);
		/* Do a read in order to flush the write to the controller
		 * (as per spec.)
		 */
		register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
	}

	if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) {
		register_value = *(h->reply_pool_head);
		(h->reply_pool_head)++;
		h->commands_outstanding--;
	} else {
		register_value = FIFO_EMPTY;
	}
	/* Check for wraparound */
	if (h->reply_pool_head == (h->reply_pool + h->max_commands)) {
		h->reply_pool_head = h->reply_pool;
		h->reply_pool_wraparound ^= 1;
	}

	return register_value;
}
/*
 * Returns true if an interrupt is pending..
 */
static bool SA5_intr_pending(ctlr_info_t *h)
{
	unsigned long register_value =
		readl(h->vaddr + SA5_INTR_STATUS);
#ifdef CCISS_DEBUG
	printk("cciss: intr_pending %lx\n", register_value);
#endif /* CCISS_DEBUG */
	if (register_value & SA5_INTR_PENDING)
		return 1;
	return 0;
}

/*
 * Returns true if an interrupt is pending..
 */
static bool SA5B_intr_pending(ctlr_info_t *h)
{
	unsigned long register_value =
		readl(h->vaddr + SA5_INTR_STATUS);
#ifdef CCISS_DEBUG
	printk("cciss: intr_pending %lx\n", register_value);
#endif /* CCISS_DEBUG */
	if (register_value & SA5B_INTR_PENDING)
		return 1;
	return 0;
}

static bool SA5_performant_intr_pending(ctlr_info_t *h)
{
	unsigned long register_value = readl(h->vaddr + SA5_INTR_STATUS);

	if (!register_value)
		return false;

	if (h->pdev->msi_enabled || h->pdev->msix_enabled)
		return true;

	/* Read outbound doorbell to flush */
	register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
	return register_value & SA5_OUTDB_STATUS_PERF_BIT;
}
static struct access_method SA5_access = {
	.submit_command		= SA5_submit_command,
	.set_intr_mask		= SA5_intr_mask,
	.fifo_full		= SA5_fifo_full,
	.intr_pending		= SA5_intr_pending,
	.command_completed	= SA5_completed,
};

static struct access_method SA5B_access = {
	.submit_command		= SA5_submit_command,
	.set_intr_mask		= SA5B_intr_mask,
	.fifo_full		= SA5_fifo_full,
	.intr_pending		= SA5B_intr_pending,
	.command_completed	= SA5_completed,
};

static struct access_method SA5_performant_access = {
	.submit_command		= SA5_submit_command,
	.set_intr_mask		= SA5_performant_intr_mask,
	.fifo_full		= SA5_fifo_full,
	.intr_pending		= SA5_performant_intr_pending,
	.command_completed	= SA5_performant_completed,
};

struct board_type {
	__u32	board_id;
	char	*product_name;
	struct access_method *access;
	int	nr_cmds; /* Max cmds this kind of ctlr can handle. */
};

#endif /* CCISS_H */
@ -1,269 +0,0 @@
|
|||
#ifndef CCISS_CMD_H
|
||||
#define CCISS_CMD_H
|
||||
|
||||
#include <linux/cciss_defs.h>
|
||||
|
||||
/* DEFINES */
|
||||
#define CISS_VERSION "1.00"
|
||||
|
||||
/* general boundary definitions */
|
||||
#define MAXSGENTRIES 32
|
||||
#define CCISS_SG_CHAIN 0x80000000
|
||||
#define MAXREPLYQS 256

/* Unit Attentions ASC's as defined for the MSA2012sa */
#define POWER_OR_RESET 0x29
#define STATE_CHANGED 0x2a
#define UNIT_ATTENTION_CLEARED 0x2f
#define LUN_FAILED 0x3e
#define REPORT_LUNS_CHANGED 0x3f

/* Unit Attentions ASCQ's as defined for the MSA2012sa */

/* These ASCQ's defined for ASC = POWER_OR_RESET */
#define POWER_ON_RESET 0x00
#define POWER_ON_REBOOT 0x01
#define SCSI_BUS_RESET 0x02
#define MSA_TARGET_RESET 0x03
#define CONTROLLER_FAILOVER 0x04
#define TRANSCEIVER_SE 0x05
#define TRANSCEIVER_LVD 0x06

/* These ASCQ's defined for ASC = STATE_CHANGED */
#define RESERVATION_PREEMPTED 0x03
#define ASYM_ACCESS_CHANGED 0x06
#define LUN_CAPACITY_CHANGED 0x09

/* config space register offsets */
#define CFG_VENDORID 0x00
#define CFG_DEVICEID 0x02
#define CFG_I2OBAR 0x10
#define CFG_MEM1BAR 0x14

/* i2o space register offsets */
#define I2O_IBDB_SET 0x20
#define I2O_IBDB_CLEAR 0x70
#define I2O_INT_STATUS 0x30
#define I2O_INT_MASK 0x34
#define I2O_IBPOST_Q 0x40
#define I2O_OBPOST_Q 0x44
#define I2O_DMA1_CFG 0x214

/* Configuration Table */
#define CFGTBL_ChangeReq 0x00000001l
#define CFGTBL_AccCmds 0x00000001l
#define DOORBELL_CTLR_RESET 0x00000004l
#define DOORBELL_CTLR_RESET2 0x00000020l

#define CFGTBL_Trans_Simple 0x00000002l
#define CFGTBL_Trans_Performant 0x00000004l
#define CFGTBL_Trans_use_short_tags 0x20000000l

#define CFGTBL_BusType_Ultra2 0x00000001l
#define CFGTBL_BusType_Ultra3 0x00000002l
#define CFGTBL_BusType_Fibre1G 0x00000100l
#define CFGTBL_BusType_Fibre2G 0x00000200l

typedef struct _vals32
{
	__u32 lower;
	__u32 upper;
} vals32;

typedef union _u64bit
{
	vals32 val32;
	__u64 val;
} u64bit;

/* Type defs used in the following structs */
#define QWORD vals32

/* STRUCTURES */
#define CISS_MAX_PHYS_LUN 1024
/* SCSI-3 Commands */

#pragma pack(1)

#define CISS_INQUIRY 0x12
/* Data returned */
typedef struct _InquiryData_struct
{
	BYTE data_byte[36];
} InquiryData_struct;

#define CISS_REPORT_LOG 0xc2 /* Report Logical LUNs */
#define CISS_REPORT_PHYS 0xc3 /* Report Physical LUNs */
/* Data returned */
typedef struct _ReportLUNdata_struct
{
	BYTE LUNListLength[4];
	DWORD reserved;
	BYTE LUN[CISS_MAX_LUN][8];
} ReportLunData_struct;

#define CCISS_READ_CAPACITY 0x25 /* Read Capacity */
typedef struct _ReadCapdata_struct
{
	BYTE total_size[4]; /* Total size in blocks */
	BYTE block_size[4]; /* Size of blocks in bytes */
} ReadCapdata_struct;

#define CCISS_READ_CAPACITY_16 0x9e /* Read Capacity 16 */

/* service action to differentiate a 16 byte read capacity from
   other commands that use the 0x9e SCSI op code */

#define CCISS_READ_CAPACITY_16_SERVICE_ACT 0x10

typedef struct _ReadCapdata_struct_16
{
	BYTE total_size[8]; /* Total size in blocks */
	BYTE block_size[4]; /* Size of blocks in bytes */
	BYTE prot_en:1;     /* protection enable bit */
	BYTE rto_en:1;      /* reference tag own enable bit */
	BYTE reserved:6;    /* reserved bits */
	BYTE reserved2[18]; /* reserved bytes per spec */
} ReadCapdata_struct_16;

/* Define the supported read/write commands for cciss based controllers */

#define CCISS_READ_10 0x28  /* Read(10) */
#define CCISS_WRITE_10 0x2a /* Write(10) */
#define CCISS_READ_16 0x88  /* Read(16) */
#define CCISS_WRITE_16 0x8a /* Write(16) */

/* Define the CDB lengths supported by cciss based controllers */

#define CDB_LEN10 10
#define CDB_LEN16 16

/* BMIC commands */
#define BMIC_READ 0x26
#define BMIC_WRITE 0x27
#define BMIC_CACHE_FLUSH 0xc2
#define CCISS_CACHE_FLUSH 0x01 /* C2 was already being used by CCISS */

#define CCISS_ABORT_MSG 0x00
#define CCISS_RESET_MSG 0x01
#define CCISS_RESET_TYPE_CONTROLLER 0x00
#define CCISS_RESET_TYPE_BUS 0x01
#define CCISS_RESET_TYPE_TARGET 0x03
#define CCISS_RESET_TYPE_LUN 0x04
#define CCISS_NOOP_MSG 0x03

/* Command List Structure */
#define CTLR_LUNID "\0\0\0\0\0\0\0\0"

typedef struct _CommandListHeader_struct {
	BYTE ReplyQueue;
	BYTE SGList;
	HWORD SGTotal;
	QWORD Tag;
	LUNAddr_struct LUN;
} CommandListHeader_struct;
typedef struct _ErrDescriptor_struct {
	QWORD Addr;
	DWORD Len;
} ErrDescriptor_struct;
typedef struct _SGDescriptor_struct {
	QWORD Addr;
	DWORD Len;
	DWORD Ext;
} SGDescriptor_struct;

/* Command types */
#define CMD_RWREQ 0x00
#define CMD_IOCTL_PEND 0x01
#define CMD_SCSI 0x03
#define CMD_MSG_DONE 0x04
#define CMD_MSG_TIMEOUT 0x05
#define CMD_MSG_STALE 0xff

/* This structure needs to be divisible by COMMANDLIST_ALIGNMENT
 * because low bits of the address are used to indicate whether
 * the tag contains an index or an address. PAD_32 and PAD_64
 * can be adjusted independently as needed for 32-bit and
 * 64-bit systems.
 */
#define COMMANDLIST_ALIGNMENT (32)
#define IS_64_BIT ((sizeof(long) - 4)/4)
#define IS_32_BIT (!IS_64_BIT)
#define PAD_32 (0)
#define PAD_64 (4)
#define PADSIZE (IS_32_BIT * PAD_32 + IS_64_BIT * PAD_64)
#define DIRECT_LOOKUP_BIT 0x10
#define DIRECT_LOOKUP_SHIFT 5

typedef struct _CommandList_struct {
	CommandListHeader_struct Header;
	RequestBlock_struct Request;
	ErrDescriptor_struct ErrDesc;
	SGDescriptor_struct SG[MAXSGENTRIES];
	/* information associated with the command */
	__u32 busaddr;                /* physical address of this record */
	ErrorInfo_struct *err_info;   /* pointer to the allocated mem */
	int ctlr;
	int cmd_type;
	long cmdindex;
	struct list_head list;
	struct request *rq;
	struct completion *waiting;
	int retry_count;
	void *scsi_cmd;
	char pad[PADSIZE];
} CommandList_struct;

/* Configuration Table Structure */
typedef struct _HostWrite_struct {
	DWORD TransportRequest;
	DWORD Reserved;
	DWORD CoalIntDelay;
	DWORD CoalIntCount;
} HostWrite_struct;

typedef struct _CfgTable_struct {
	BYTE Signature[4];
	DWORD SpecValence;
#define SIMPLE_MODE	0x02
#define PERFORMANT_MODE	0x04
#define MEMQ_MODE	0x08
	DWORD TransportSupport;
	DWORD TransportActive;
	HostWrite_struct HostWrite;
	DWORD CmdsOutMax;
	DWORD BusTypes;
	DWORD TransMethodOffset;
	BYTE ServerName[16];
	DWORD HeartBeat;
	DWORD SCSI_Prefetch;
	DWORD MaxSGElements;
	DWORD MaxLogicalUnits;
	DWORD MaxPhysicalDrives;
	DWORD MaxPhysicalDrivesPerLogicalUnit;
	DWORD MaxPerformantModeCommands;
	u8 reserved[0x78 - 0x58];
	u32 misc_fw_support; /* offset 0x78 */
#define MISC_FW_DOORBELL_RESET (0x02)
#define MISC_FW_DOORBELL_RESET2 (0x10)
	u8 driver_version[32];
} CfgTable_struct;

struct TransTable_struct {
	u32 BlockFetch0;
	u32 BlockFetch1;
	u32 BlockFetch2;
	u32 BlockFetch3;
	u32 BlockFetch4;
	u32 BlockFetch5;
	u32 BlockFetch6;
	u32 BlockFetch7;
	u32 RepQSize;
	u32 RepQCount;
	u32 RepQCtrAddrLow32;
	u32 RepQCtrAddrHigh32;
	u32 RepQAddr0Low32;
	u32 RepQAddr0High32;
};

#pragma pack()
#endif /* CCISS_CMD_H */
File diff suppressed because it is too large.

@@ -1,79 +0,0 @@
/*
 * Disk Array driver for HP Smart Array controllers, SCSI Tape module.
 * (C) Copyright 2001, 2007 Hewlett-Packard Development Company, L.P.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 300, Boston, MA
 * 02111-1307, USA.
 *
 * Questions/Comments/Bugfixes to iss_storagedev@hp.com
 *
 */
#ifdef CONFIG_CISS_SCSI_TAPE
#ifndef _CCISS_SCSI_H_
#define _CCISS_SCSI_H_

#include <scsi/scsicam.h> /* possibly irrelevant, since we don't show disks */

/* the scsi id of the adapter... */
#define SELF_SCSI_ID 15
/* 15 is somewhat arbitrary, since the scsi-2 bus
   that's presented by the driver to the OS is
   fabricated.  The "real" scsi-3 bus the
   hardware presents is fabricated too.
   The actual, honest-to-goodness physical
   bus that the devices are attached to is not
   addressable natively, and may in fact turn
   out to be not scsi at all. */

/*

   If the upper scsi layer tries to track how many commands we have
   outstanding, it will be operating under the misapprehension that it is
   the only one sending us requests.  We also have the block interface,
   which is where most requests must surely come from, so the upper layer's
   notion of how many requests we have outstanding will be wrong most or
   all of the time.

   Note, the normal SCSI mid-layer error handling doesn't work well
   for this driver because 1) it takes the io_request_lock before
   calling error handlers and uses a local variable to store flags,
   so the io_request_lock cannot be released and interrupts enabled
   inside the error handlers, and 2) the error handlers cannot poll
   for command completion because they might get commands from the
   block half of the driver completing, and not know what to do
   with them.  That's what we get for making a hybrid scsi/block
   driver, I suppose.

*/

struct cciss_scsi_dev_t {
	int devtype;
	int bus, target, lun;        /* as presented to the OS */
	unsigned char scsi3addr[8];  /* as presented to the HW */
	unsigned char device_id[16]; /* from inquiry pg. 0x83 */
	unsigned char vendor[8];     /* bytes 8-15 of inquiry data */
	unsigned char model[16];     /* bytes 16-31 of inquiry data */
	unsigned char revision[4];   /* bytes 32-35 of inquiry data */
};

struct cciss_scsi_hba_t {
	char *name;
	int ndevices;
#define CCISS_MAX_SCSI_DEVS_PER_HBA 16
	struct cciss_scsi_dev_t dev[CCISS_MAX_SCSI_DEVS_PER_HBA];
};

#endif /* _CCISS_SCSI_H_ */
#endif /* CONFIG_CISS_SCSI_TAPE */
@@ -2079,7 +2079,7 @@ void
 mpt_detach(struct pci_dev *pdev)
 {
 	MPT_ADAPTER *ioc = pci_get_drvdata(pdev);
-	char pname[32];
+	char pname[64];
 	u8 cb_idx;
 	unsigned long flags;
 	struct workqueue_struct *wq;

@@ -2100,11 +2100,11 @@ mpt_detach(struct pci_dev *pdev)
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 	destroy_workqueue(wq);

-	sprintf(pname, MPT_PROCFS_MPTBASEDIR "/%s/summary", ioc->name);
+	snprintf(pname, sizeof(pname), MPT_PROCFS_MPTBASEDIR "/%s/summary", ioc->name);
 	remove_proc_entry(pname, NULL);
-	sprintf(pname, MPT_PROCFS_MPTBASEDIR "/%s/info", ioc->name);
+	snprintf(pname, sizeof(pname), MPT_PROCFS_MPTBASEDIR "/%s/info", ioc->name);
 	remove_proc_entry(pname, NULL);
-	sprintf(pname, MPT_PROCFS_MPTBASEDIR "/%s", ioc->name);
+	snprintf(pname, sizeof(pname), MPT_PROCFS_MPTBASEDIR "/%s", ioc->name);
 	remove_proc_entry(pname, NULL);

 	/* call per device driver remove entry point */
@@ -104,7 +104,6 @@ static void mptfc_remove(struct pci_dev *pdev);
 static int mptfc_abort(struct scsi_cmnd *SCpnt);
 static int mptfc_dev_reset(struct scsi_cmnd *SCpnt);
 static int mptfc_bus_reset(struct scsi_cmnd *SCpnt);
-static int mptfc_host_reset(struct scsi_cmnd *SCpnt);

 static struct scsi_host_template mptfc_driver_template = {
 	.module				= THIS_MODULE,

@@ -123,7 +122,7 @@ static struct scsi_host_template mptfc_driver_template = {
 	.eh_abort_handler		= mptfc_abort,
 	.eh_device_reset_handler	= mptfc_dev_reset,
 	.eh_bus_reset_handler		= mptfc_bus_reset,
-	.eh_host_reset_handler		= mptfc_host_reset,
+	.eh_host_reset_handler		= mptscsih_host_reset,
 	.bios_param			= mptscsih_bios_param,
 	.can_queue			= MPT_FC_CAN_QUEUE,
 	.this_id			= -1,

@@ -254,13 +253,6 @@ mptfc_bus_reset(struct scsi_cmnd *SCpnt)
 	mptfc_block_error_handler(SCpnt, mptscsih_bus_reset, __func__);
 }

-static int
-mptfc_host_reset(struct scsi_cmnd *SCpnt)
-{
-	return
-	    mptfc_block_error_handler(SCpnt, mptscsih_host_reset, __func__);
-}
-
 static void
 mptfc_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout)
 {
@@ -2210,33 +2210,26 @@ mptsas_get_bay_identifier(struct sas_rphy *rphy)
 	return rc;
 }

-static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
-		struct request *req)
+static void mptsas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+		struct sas_rphy *rphy)
 {
 	MPT_ADAPTER *ioc = ((MPT_SCSI_HOST *) shost->hostdata)->ioc;
 	MPT_FRAME_HDR *mf;
 	SmpPassthroughRequest_t *smpreq;
-	struct request *rsp = req->next_rq;
-	int ret;
 	int flagsLength;
 	unsigned long timeleft;
 	char *psge;
-	dma_addr_t dma_addr_in = 0;
-	dma_addr_t dma_addr_out = 0;
 	u64 sas_address = 0;
-
-	if (!rsp) {
-		printk(MYIOC_s_ERR_FMT "%s: the smp response space is missing\n",
-		    ioc->name, __func__);
-		return -EINVAL;
-	}
+	unsigned int reslen = 0;
+	int ret = -EINVAL;

 	/* do we need to support multiple segments? */
-	if (bio_multiple_segments(req->bio) ||
-	    bio_multiple_segments(rsp->bio)) {
+	if (job->request_payload.sg_cnt > 1 ||
+	    job->reply_payload.sg_cnt > 1) {
 		printk(MYIOC_s_ERR_FMT "%s: multiple segments req %u, rsp %u\n",
-		    ioc->name, __func__, blk_rq_bytes(req), blk_rq_bytes(rsp));
-		return -EINVAL;
+		    ioc->name, __func__, job->request_payload.payload_len,
+		    job->reply_payload.payload_len);
+		goto out;
 	}

 	ret = mutex_lock_interruptible(&ioc->sas_mgmt.mutex);

@@ -2252,7 +2245,8 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
 	smpreq = (SmpPassthroughRequest_t *)mf;
 	memset(smpreq, 0, sizeof(*smpreq));

-	smpreq->RequestDataLength = cpu_to_le16(blk_rq_bytes(req) - 4);
+	smpreq->RequestDataLength =
+		cpu_to_le16(job->request_payload.payload_len - 4);
 	smpreq->Function = MPI_FUNCTION_SMP_PASSTHROUGH;

 	if (rphy)

@@ -2278,13 +2272,14 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
 	    MPI_SGE_FLAGS_END_OF_BUFFER |
 	    MPI_SGE_FLAGS_DIRECTION)
 	    << MPI_SGE_FLAGS_SHIFT;
-	flagsLength |= (blk_rq_bytes(req) - 4);

-	dma_addr_out = pci_map_single(ioc->pcidev, bio_data(req->bio),
-	    blk_rq_bytes(req), PCI_DMA_BIDIRECTIONAL);
-	if (pci_dma_mapping_error(ioc->pcidev, dma_addr_out))
+	if (!dma_map_sg(&ioc->pcidev->dev, job->request_payload.sg_list,
+	    1, PCI_DMA_BIDIRECTIONAL))
 		goto put_mf;
-	ioc->add_sge(psge, flagsLength, dma_addr_out);

+	flagsLength |= (sg_dma_len(job->request_payload.sg_list) - 4);
+	ioc->add_sge(psge, flagsLength,
+	    sg_dma_address(job->request_payload.sg_list));
 	psge += ioc->SGE_size;

 	/* response */

@@ -2294,12 +2289,13 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
 	    MPI_SGE_FLAGS_END_OF_BUFFER;

 	flagsLength = flagsLength << MPI_SGE_FLAGS_SHIFT;
-	flagsLength |= blk_rq_bytes(rsp) + 4;
-	dma_addr_in = pci_map_single(ioc->pcidev, bio_data(rsp->bio),
-	    blk_rq_bytes(rsp), PCI_DMA_BIDIRECTIONAL);
-	if (pci_dma_mapping_error(ioc->pcidev, dma_addr_in))
-		goto unmap;
-	ioc->add_sge(psge, flagsLength, dma_addr_in);

+	if (!dma_map_sg(&ioc->pcidev->dev, job->reply_payload.sg_list,
+	    1, PCI_DMA_BIDIRECTIONAL))
+		goto unmap_out;
+	flagsLength |= sg_dma_len(job->reply_payload.sg_list) + 4;
+	ioc->add_sge(psge, flagsLength,
+	    sg_dma_address(job->reply_payload.sg_list));

 	INITIALIZE_MGMT_STATUS(ioc->sas_mgmt.status)
 	mpt_put_msg_frame(mptsasMgmtCtx, ioc, mf);

@@ -2310,10 +2306,10 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
 		mpt_free_msg_frame(ioc, mf);
 		mf = NULL;
 		if (ioc->sas_mgmt.status & MPT_MGMT_STATUS_DID_IOCRESET)
-			goto unmap;
+			goto unmap_in;
 		if (!timeleft)
 			mpt_Soft_Hard_ResetHandler(ioc, CAN_SLEEP);
-		goto unmap;
+		goto unmap_in;
 	}
 	mf = NULL;

@@ -2321,23 +2317,22 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
 		SmpPassthroughReply_t *smprep;

 		smprep = (SmpPassthroughReply_t *)ioc->sas_mgmt.reply;
-		memcpy(scsi_req(req)->sense, smprep, sizeof(*smprep));
-		scsi_req(req)->sense_len = sizeof(*smprep);
-		scsi_req(req)->resid_len = 0;
-		scsi_req(rsp)->resid_len -= smprep->ResponseDataLength;
+		memcpy(job->reply, smprep, sizeof(*smprep));
+		job->reply_len = sizeof(*smprep);
+		reslen = smprep->ResponseDataLength;
 	} else {
 		printk(MYIOC_s_ERR_FMT
 		    "%s: smp passthru reply failed to be returned\n",
 		    ioc->name, __func__);
 		ret = -ENXIO;
 	}
-unmap:
-	if (dma_addr_out)
-		pci_unmap_single(ioc->pcidev, dma_addr_out, blk_rq_bytes(req),
-		    PCI_DMA_BIDIRECTIONAL);
-	if (dma_addr_in)
-		pci_unmap_single(ioc->pcidev, dma_addr_in, blk_rq_bytes(rsp),
-		    PCI_DMA_BIDIRECTIONAL);

+unmap_in:
+	dma_unmap_sg(&ioc->pcidev->dev, job->reply_payload.sg_list, 1,
+			PCI_DMA_BIDIRECTIONAL);
+unmap_out:
+	dma_unmap_sg(&ioc->pcidev->dev, job->request_payload.sg_list, 1,
+			PCI_DMA_BIDIRECTIONAL);
 put_mf:
 	if (mf)
 		mpt_free_msg_frame(ioc, mf);

@@ -2345,7 +2340,7 @@ out_unlock:
 	CLEAR_MGMT_STATUS(ioc->sas_mgmt.status)
 	mutex_unlock(&ioc->sas_mgmt.mutex);
 out:
-	return ret;
+	bsg_job_done(job, ret, reslen);
 }

 static struct sas_function_template mptsas_transport_functions = {

@@ -4352,11 +4347,10 @@ mptsas_hotplug_work(MPT_ADAPTER *ioc, struct fw_event_work *fw_event,
 		return;

 	phy_info = mptsas_refreshing_device_handles(ioc, &sas_device);
-	/* Only For SATA Device ADD */
-	if (!phy_info && (sas_device.device_info &
-		    MPI_SAS_DEVICE_INFO_SATA_DEVICE)) {
+	/* Device hot plug */
+	if (!phy_info) {
 		devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT
-		    "%s %d SATA HOT PLUG: "
+		    "%s %d HOT PLUG: "
 		    "parent handle of device %x\n", ioc->name,
 		    __func__, __LINE__, sas_device.handle_parent));
 		port_info = mptsas_find_portinfo_by_handle(ioc,
@@ -29,7 +29,6 @@
 #define KMSG_COMPONENT "zfcp"
 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt

-#include <linux/miscdevice.h>
 #include <linux/seq_file.h>
 #include <linux/slab.h>
 #include <linux/module.h>
@ -3,7 +3,7 @@
|
|||
*
|
||||
* Debug traces for zfcp.
|
||||
*
|
||||
* Copyright IBM Corp. 2002, 2016
|
||||
* Copyright IBM Corp. 2002, 2017
|
||||
*/
|
||||
|
||||
#define KMSG_COMPONENT "zfcp"
|
||||
|
@ -113,8 +113,12 @@ void zfcp_dbf_hba_fsf_uss(char *tag, struct zfcp_fsf_req *req)
|
|||
struct zfcp_dbf *dbf = req->adapter->dbf;
|
||||
struct fsf_status_read_buffer *srb = req->data;
|
||||
struct zfcp_dbf_hba *rec = &dbf->hba_buf;
|
||||
static int const level = 2;
|
||||
unsigned long flags;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->hba, level)))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&dbf->hba_lock, flags);
|
||||
memset(rec, 0, sizeof(*rec));
|
||||
|
||||
|
@ -142,7 +146,7 @@ void zfcp_dbf_hba_fsf_uss(char *tag, struct zfcp_fsf_req *req)
|
|||
zfcp_dbf_pl_write(dbf, srb->payload.data, rec->pl_len,
|
||||
"fsf_uss", req->req_id);
|
||||
log:
|
||||
debug_event(dbf->hba, 2, rec, sizeof(*rec));
|
||||
debug_event(dbf->hba, level, rec, sizeof(*rec));
|
||||
spin_unlock_irqrestore(&dbf->hba_lock, flags);
|
||||
}
|
||||
|
||||
|
@ -156,8 +160,12 @@ void zfcp_dbf_hba_bit_err(char *tag, struct zfcp_fsf_req *req)
|
|||
struct zfcp_dbf *dbf = req->adapter->dbf;
|
||||
struct zfcp_dbf_hba *rec = &dbf->hba_buf;
|
||||
struct fsf_status_read_buffer *sr_buf = req->data;
|
||||
static int const level = 1;
|
||||
unsigned long flags;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->hba, level)))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&dbf->hba_lock, flags);
|
||||
memset(rec, 0, sizeof(*rec));
|
||||
|
||||
|
@ -169,7 +177,7 @@ void zfcp_dbf_hba_bit_err(char *tag, struct zfcp_fsf_req *req)
|
|||
memcpy(&rec->u.be, &sr_buf->payload.bit_error,
|
||||
sizeof(struct fsf_bit_error_payload));
|
||||
|
||||
debug_event(dbf->hba, 1, rec, sizeof(*rec));
|
||||
debug_event(dbf->hba, level, rec, sizeof(*rec));
|
||||
spin_unlock_irqrestore(&dbf->hba_lock, flags);
|
||||
}
|
||||
|
||||
|
@ -186,8 +194,12 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
|
|||
struct zfcp_dbf *dbf = adapter->dbf;
|
||||
struct zfcp_dbf_pay *payload = &dbf->pay_buf;
|
||||
unsigned long flags;
|
||||
static int const level = 1;
|
||||
u16 length;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->pay, level)))
|
||||
return;
|
||||
|
||||
if (!pl)
|
||||
return;
|
||||
|
||||
|
@ -202,7 +214,7 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
|
|||
|
||||
while (payload->counter < scount && (char *)pl[payload->counter]) {
|
||||
memcpy(payload->data, (char *)pl[payload->counter], length);
|
||||
debug_event(dbf->pay, 1, payload, zfcp_dbf_plen(length));
|
||||
debug_event(dbf->pay, level, payload, zfcp_dbf_plen(length));
|
||||
payload->counter++;
|
||||
}
|
||||
|
||||
|
@ -217,15 +229,19 @@ void zfcp_dbf_hba_basic(char *tag, struct zfcp_adapter *adapter)
|
|||
{
|
||||
struct zfcp_dbf *dbf = adapter->dbf;
|
||||
struct zfcp_dbf_hba *rec = &dbf->hba_buf;
|
||||
static int const level = 1;
|
||||
unsigned long flags;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->hba, level)))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&dbf->hba_lock, flags);
|
||||
memset(rec, 0, sizeof(*rec));
|
||||
|
||||
memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
|
||||
rec->id = ZFCP_DBF_HBA_BASIC;
|
||||
|
||||
debug_event(dbf->hba, 1, rec, sizeof(*rec));
|
||||
debug_event(dbf->hba, level, rec, sizeof(*rec));
|
||||
spin_unlock_irqrestore(&dbf->hba_lock, flags);
|
||||
}
|
||||
|
||||
|
@ -264,9 +280,13 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
|
|||
{
|
||||
struct zfcp_dbf *dbf = adapter->dbf;
|
||||
struct zfcp_dbf_rec *rec = &dbf->rec_buf;
|
||||
static int const level = 1;
|
||||
struct list_head *entry;
|
||||
unsigned long flags;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->rec, level)))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&dbf->rec_lock, flags);
|
||||
memset(rec, 0, sizeof(*rec));
|
||||
|
||||
|
@ -283,7 +303,7 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
|
|||
rec->u.trig.want = want;
|
||||
rec->u.trig.need = need;
|
||||
|
||||
debug_event(dbf->rec, 1, rec, sizeof(*rec));
|
||||
debug_event(dbf->rec, level, rec, sizeof(*rec));
|
||||
spin_unlock_irqrestore(&dbf->rec_lock, flags);
|
||||
}
|
||||
|
||||
|
@ -300,6 +320,9 @@ void zfcp_dbf_rec_run_lvl(int level, char *tag, struct zfcp_erp_action *erp)
|
|||
struct zfcp_dbf_rec *rec = &dbf->rec_buf;
|
||||
unsigned long flags;
|
||||
|
||||
if (!debug_level_enabled(dbf->rec, level))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&dbf->rec_lock, flags);
|
||||
memset(rec, 0, sizeof(*rec));
|
||||
|
||||
|
@ -345,8 +368,12 @@ void zfcp_dbf_rec_run_wka(char *tag, struct zfcp_fc_wka_port *wka_port,
|
|||
{
|
||||
struct zfcp_dbf *dbf = wka_port->adapter->dbf;
|
||||
struct zfcp_dbf_rec *rec = &dbf->rec_buf;
|
||||
static int const level = 1;
|
||||
unsigned long flags;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->rec, level)))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&dbf->rec_lock, flags);
|
||||
memset(rec, 0, sizeof(*rec));
|
||||
|
||||
|
@ -362,10 +389,12 @@ void zfcp_dbf_rec_run_wka(char *tag, struct zfcp_fc_wka_port *wka_port,
|
|||
rec->u.run.rec_action = ~0;
|
||||
rec->u.run.rec_count = ~0;
|
||||
|
||||
debug_event(dbf->rec, 1, rec, sizeof(*rec));
|
||||
debug_event(dbf->rec, level, rec, sizeof(*rec));
|
||||
spin_unlock_irqrestore(&dbf->rec_lock, flags);
|
||||
}
|
||||
|
||||
#define ZFCP_DBF_SAN_LEVEL 1
|
||||
|
||||
static inline
|
||||
void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf,
|
||||
char *paytag, struct scatterlist *sg, u8 id, u16 len,
|
||||
|
@ -408,7 +437,7 @@ void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf,
|
|||
(u16)(sg->length - offset));
|
||||
/* cap_len <= pay_sum < cap_len+ZFCP_DBF_PAY_MAX_REC */
|
||||
memcpy(payload->data, sg_virt(sg) + offset, pay_len);
|
||||
debug_event(dbf->pay, 1, payload,
|
||||
debug_event(dbf->pay, ZFCP_DBF_SAN_LEVEL, payload,
|
||||
zfcp_dbf_plen(pay_len));
|
||||
payload->counter++;
|
||||
offset += pay_len;
|
||||
|
@ -418,7 +447,7 @@ void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf,
|
|||
spin_unlock(&dbf->pay_lock);
|
||||
|
||||
out:
|
||||
debug_event(dbf->san, 1, rec, sizeof(*rec));
|
||||
debug_event(dbf->san, ZFCP_DBF_SAN_LEVEL, rec, sizeof(*rec));
|
||||
spin_unlock_irqrestore(&dbf->san_lock, flags);
|
||||
}
|
||||
|
||||
|
@ -434,6 +463,9 @@ void zfcp_dbf_san_req(char *tag, struct zfcp_fsf_req *fsf, u32 d_id)
|
|||
struct zfcp_fsf_ct_els *ct_els = fsf->data;
|
||||
u16 length;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->san, ZFCP_DBF_SAN_LEVEL)))
|
||||
return;
|
||||
|
||||
length = (u16)zfcp_qdio_real_bytes(ct_els->req);
|
||||
zfcp_dbf_san(tag, dbf, "san_req", ct_els->req, ZFCP_DBF_SAN_REQ,
|
||||
length, fsf->req_id, d_id, length);
|
||||
|
@ -447,6 +479,7 @@ static u16 zfcp_dbf_san_res_cap_len_if_gpn_ft(char *tag,
|
|||
struct fc_ct_hdr *reqh = sg_virt(ct_els->req);
|
||||
struct fc_ns_gid_ft *reqn = (struct fc_ns_gid_ft *)(reqh + 1);
|
||||
struct scatterlist *resp_entry = ct_els->resp;
|
||||
struct fc_ct_hdr *resph;
|
||||
struct fc_gpn_ft_resp *acc;
|
||||
int max_entries, x, last = 0;
|
||||
|
||||
|
@ -460,7 +493,7 @@ static u16 zfcp_dbf_san_res_cap_len_if_gpn_ft(char *tag,
|
|||
&& reqh->ct_fs_subtype == FC_NS_SUBTYPE
|
||||
&& reqh->ct_options == 0
|
||||
&& reqh->_ct_resvd1 == 0
|
||||
&& reqh->ct_cmd == FC_NS_GPN_FT
|
||||
&& reqh->ct_cmd == cpu_to_be16(FC_NS_GPN_FT)
|
||||
/* reqh->ct_mr_size can vary so do not match but read below */
|
||||
&& reqh->_ct_resvd2 == 0
|
||||
&& reqh->ct_reason == 0
|
||||
|
@ -473,7 +506,15 @@ static u16 zfcp_dbf_san_res_cap_len_if_gpn_ft(char *tag,
|
|||
return len; /* not GPN_FT response so do not cap */
|
||||
|
||||
acc = sg_virt(resp_entry);
|
||||
max_entries = (reqh->ct_mr_size * 4 / sizeof(struct fc_gpn_ft_resp))
|
||||
|
||||
/* cap all but accept CT responses to at least the CT header */
|
||||
resph = (struct fc_ct_hdr *)acc;
|
||||
if ((ct_els->status) ||
|
||||
(resph->ct_cmd != cpu_to_be16(FC_FS_ACC)))
|
||||
return max(FC_CT_HDR_LEN, ZFCP_DBF_SAN_MAX_PAYLOAD);
|
||||
|
||||
max_entries = (be16_to_cpu(reqh->ct_mr_size) * 4 /
|
||||
sizeof(struct fc_gpn_ft_resp))
|
||||
+ 1 /* zfcp_fc_scan_ports: bytes correct, entries off-by-one
|
||||
* to account for header as 1st pseudo "entry" */;
|
||||
|
||||
|
@ -503,6 +544,9 @@ void zfcp_dbf_san_res(char *tag, struct zfcp_fsf_req *fsf)
|
|||
struct zfcp_fsf_ct_els *ct_els = fsf->data;
|
||||
u16 length;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->san, ZFCP_DBF_SAN_LEVEL)))
|
||||
return;
|
||||
|
||||
length = (u16)zfcp_qdio_real_bytes(ct_els->resp);
|
||||
zfcp_dbf_san(tag, dbf, "san_res", ct_els->resp, ZFCP_DBF_SAN_RES,
|
||||
length, fsf->req_id, ct_els->d_id,
|
||||
|
@ -522,6 +566,9 @@ void zfcp_dbf_san_in_els(char *tag, struct zfcp_fsf_req *fsf)
|
|||
u16 length;
|
||||
struct scatterlist sg;
|
||||
|
||||
if (unlikely(!debug_level_enabled(dbf->san, ZFCP_DBF_SAN_LEVEL)))
|
||||
return;
|
||||
|
||||
length = (u16)(srb->length -
|
||||
offsetof(struct fsf_status_read_buffer, payload));
|
||||
sg_init_one(&sg, srb->payload.data, length);
|
||||
|
@ -555,8 +602,8 @@ void zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *sc,
|
|||
rec->scsi_retries = sc->retries;
|
||||
rec->scsi_allowed = sc->allowed;
|
||||
rec->scsi_id = sc->device->id;
|
||||
/* struct zfcp_dbf_scsi needs to be updated to handle 64bit LUNs */
|
||||
rec->scsi_lun = (u32)sc->device->lun;
|
||||
rec->scsi_lun_64_hi = (u32)(sc->device->lun >> 32);
|
||||
rec->host_scribble = (unsigned long)sc->host_scribble;
|
||||
|
||||
memcpy(rec->scsi_opcode, sc->cmnd,
|
||||
|
@ -564,19 +611,31 @@ void zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *sc,
|
|||
|
||||
if (fsf) {
|
||||
rec->fsf_req_id = fsf->req_id;
|
||||
fcp_rsp = (struct fcp_resp_with_ext *)
|
||||
&(fsf->qtcb->bottom.io.fcp_rsp);
|
||||
rec->pl_len = FCP_RESP_WITH_EXT;
|
||||
fcp_rsp = &(fsf->qtcb->bottom.io.fcp_rsp.iu);
|
||||
/* mandatory parts of FCP_RSP IU in this SCSI record */
|
||||
memcpy(&rec->fcp_rsp, fcp_rsp, FCP_RESP_WITH_EXT);
|
||||
if (fcp_rsp->resp.fr_flags & FCP_RSP_LEN_VAL) {
|
||||
fcp_rsp_info = (struct fcp_resp_rsp_info *) &fcp_rsp[1];
|
||||
rec->fcp_rsp_info = fcp_rsp_info->rsp_code;
|
||||
rec->pl_len += be32_to_cpu(fcp_rsp->ext.fr_rsp_len);
|
||||
}
|
||||
if (fcp_rsp->resp.fr_flags & FCP_SNS_LEN_VAL) {
|
||||
rec->pl_len = min((u16)SCSI_SENSE_BUFFERSIZE,
|
||||
(u16)ZFCP_DBF_PAY_MAX_REC);
|
||||
zfcp_dbf_pl_write(dbf, sc->sense_buffer, rec->pl_len,
|
||||
"fcp_sns", fsf->req_id);
|
||||
rec->pl_len += be32_to_cpu(fcp_rsp->ext.fr_sns_len);
|
||||
}
|
||||
/* complete FCP_RSP IU in associated PAYload record
|
||||
* but only if there are optional parts
|
||||
*/
|
||||
if (fcp_rsp->resp.fr_flags != 0)
|
||||
zfcp_dbf_pl_write(
|
||||
dbf, fcp_rsp,
|
||||
/* at least one full PAY record
|
||||
* but not beyond hardware response field
|
||||
*/
|
||||
min_t(u16, max_t(u16, rec->pl_len,
|
||||
ZFCP_DBF_PAY_MAX_REC),
|
||||
FSF_FCP_RSP_SIZE),
|
||||
"fcp_riu", fsf->req_id);
|
||||
}
|
||||
|
||||
debug_event(dbf->scsi, level, rec, sizeof(*rec));
|
||||
--- a/drivers/s390/scsi/zfcp_dbf.h
+++ b/drivers/s390/scsi/zfcp_dbf.h
@@ -2,7 +2,7 @@
 * zfcp device driver
 * debug feature declarations
 *
- * Copyright IBM Corp. 2008, 2016
+ * Copyright IBM Corp. 2008, 2017
 */
 
 #ifndef ZFCP_DBF_H
@@ -204,16 +204,17 @@ enum zfcp_dbf_scsi_id {
 * @id: unique number of recovery record type
 * @tag: identifier string specifying the location of initiation
 * @scsi_id: scsi device id
- * @scsi_lun: scsi device logical unit number
+ * @scsi_lun: scsi device logical unit number, low part of 64 bit, old 32 bit
 * @scsi_result: scsi result
 * @scsi_retries: current retry number of scsi request
 * @scsi_allowed: allowed retries
- * @fcp_rsp_info: FCP response info
+ * @fcp_rsp_info: FCP response info code
 * @scsi_opcode: scsi opcode
 * @fsf_req_id: request id of fsf request
 * @host_scribble: LLD specific data attached to SCSI request
- * @pl_len: length of paload stored as zfcp_dbf_pay
- * @fsf_rsp: response for fsf request
+ * @pl_len: length of payload stored as zfcp_dbf_pay
+ * @fcp_rsp: response for FCP request
+ * @scsi_lun_64_hi: scsi device logical unit number, high part of 64 bit
 */
 struct zfcp_dbf_scsi {
 	u8 id;
@@ -230,6 +231,7 @@ struct zfcp_dbf_scsi {
 	u64 host_scribble;
 	u16 pl_len;
 	struct fcp_resp_with_ext fcp_rsp;
+	u32 scsi_lun_64_hi;
 } __packed;
 
 /**
@@ -299,7 +301,7 @@ bool zfcp_dbf_hba_fsf_resp_suppress(struct zfcp_fsf_req *req)
 
 	if (qtcb->prefix.qtcb_type != FSF_IO_COMMAND)
 		return false; /* not an FCP response */
-	fcp_rsp = (struct fcp_resp *)&qtcb->bottom.io.fcp_rsp;
+	fcp_rsp = &qtcb->bottom.io.fcp_rsp.iu.resp;
 	rsp_flags = fcp_rsp->fr_flags;
 	fr_status = fcp_rsp->fr_status;
 	return (fsf_stat == FSF_FCP_RSP_AVAILABLE) &&
@@ -323,7 +325,11 @@ void zfcp_dbf_hba_fsf_response(struct zfcp_fsf_req *req)
 {
 	struct fsf_qtcb *qtcb = req->qtcb;
 
-	if ((qtcb->prefix.prot_status != FSF_PROT_GOOD) &&
+	if (unlikely(req->status & (ZFCP_STATUS_FSFREQ_DISMISSED |
+				    ZFCP_STATUS_FSFREQ_ERROR))) {
+		zfcp_dbf_hba_fsf_resp("fs_rerr", 3, req);
+
+	} else if ((qtcb->prefix.prot_status != FSF_PROT_GOOD) &&
 	    (qtcb->prefix.prot_status != FSF_PROT_FSF_STATUS_PRESENTED)) {
 		zfcp_dbf_hba_fsf_resp("fs_perr", 1, req);
 
@@ -401,7 +407,8 @@ void zfcp_dbf_scsi_abort(char *tag, struct scsi_cmnd *scmd,
 * @flag: indicates type of reset (Target Reset, Logical Unit Reset)
 */
 static inline
-void zfcp_dbf_scsi_devreset(char *tag, struct scsi_cmnd *scmnd, u8 flag)
+void zfcp_dbf_scsi_devreset(char *tag, struct scsi_cmnd *scmnd, u8 flag,
+			    struct zfcp_fsf_req *fsf_req)
 {
 	char tmp_tag[ZFCP_DBF_TAG_LEN];
 
@@ -411,7 +418,7 @@ void zfcp_dbf_scsi_devreset(char *tag, struct scsi_cmnd *scmnd, u8 flag)
 		memcpy(tmp_tag, "lr_", 3);
 
 	memcpy(&tmp_tag[3], tag, 4);
-	_zfcp_dbf_scsi(tmp_tag, 1, scmnd, NULL);
+	_zfcp_dbf_scsi(tmp_tag, 1, scmnd, fsf_req);
 }
 
 /**
--- a/drivers/s390/scsi/zfcp_erp.c
+++ b/drivers/s390/scsi/zfcp_erp.c
@@ -572,9 +572,8 @@ static void zfcp_erp_memwait_handler(unsigned long data)
 
 static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action)
 {
-	init_timer(&erp_action->timer);
-	erp_action->timer.function = zfcp_erp_memwait_handler;
-	erp_action->timer.data = (unsigned long) erp_action;
+	setup_timer(&erp_action->timer, zfcp_erp_memwait_handler,
+		    (unsigned long) erp_action);
 	erp_action->timer.expires = jiffies + HZ;
 	add_timer(&erp_action->timer);
 }
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -41,7 +41,6 @@ extern void zfcp_dbf_rec_run_wka(char *, struct zfcp_fc_wka_port *, u64);
 extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
 extern void zfcp_dbf_hba_fsf_res(char *, int, struct zfcp_fsf_req *);
 extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *);
-extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *);
 extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **);
 extern void zfcp_dbf_hba_basic(char *, struct zfcp_adapter *);
 extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32);
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -3,7 +3,7 @@
 *
 * Fibre Channel related functions for the zfcp device driver.
 *
- * Copyright IBM Corp. 2008, 2010
+ * Copyright IBM Corp. 2008, 2017
 */
 
 #define KMSG_COMPONENT "zfcp"
@@ -29,7 +29,7 @@ static u32 zfcp_fc_rscn_range_mask[] = {
 };
 
 static bool no_auto_port_rescan;
-module_param_named(no_auto_port_rescan, no_auto_port_rescan, bool, 0600);
+module_param(no_auto_port_rescan, bool, 0600);
 MODULE_PARM_DESC(no_auto_port_rescan,
 		 "no automatic port_rescan (default off)");
 
@@ -260,7 +260,8 @@ static void zfcp_fc_incoming_rscn(struct zfcp_fsf_req *fsf_req)
 	page = (struct fc_els_rscn_page *) head;
 
 	/* see FC-FS */
-	no_entries = head->rscn_plen / sizeof(struct fc_els_rscn_page);
+	no_entries = be16_to_cpu(head->rscn_plen) /
+		sizeof(struct fc_els_rscn_page);
 
 	for (i = 1; i < no_entries; i++) {
 		/* skip head and start with 1st element */
@@ -296,7 +297,7 @@ static void zfcp_fc_incoming_plogi(struct zfcp_fsf_req *req)
 
 	status_buffer = (struct fsf_status_read_buffer *) req->data;
 	plogi = (struct fc_els_flogi *) status_buffer->payload.data;
-	zfcp_fc_incoming_wwpn(req, plogi->fl_wwpn);
+	zfcp_fc_incoming_wwpn(req, be64_to_cpu(plogi->fl_wwpn));
 }
 
 static void zfcp_fc_incoming_logo(struct zfcp_fsf_req *req)
@@ -306,7 +307,7 @@ static void zfcp_fc_incoming_logo(struct zfcp_fsf_req *req)
 	struct fc_els_logo *logo =
 		(struct fc_els_logo *) status_buffer->payload.data;
 
-	zfcp_fc_incoming_wwpn(req, logo->fl_n_port_wwn);
+	zfcp_fc_incoming_wwpn(req, be64_to_cpu(logo->fl_n_port_wwn));
 }
 
 /**
@@ -335,7 +336,7 @@ static void zfcp_fc_ns_gid_pn_eval(struct zfcp_fc_req *fc_req)
 
 	if (ct_els->status)
 		return;
-	if (gid_pn_rsp->ct_hdr.ct_cmd != FC_FS_ACC)
+	if (gid_pn_rsp->ct_hdr.ct_cmd != cpu_to_be16(FC_FS_ACC))
 		return;
 
 	/* looks like a valid d_id */
@@ -352,8 +353,8 @@ static void zfcp_fc_ct_ns_init(struct fc_ct_hdr *ct_hdr, u16 cmd, u16 mr_size)
 	ct_hdr->ct_rev = FC_CT_REV;
 	ct_hdr->ct_fs_type = FC_FST_DIR;
 	ct_hdr->ct_fs_subtype = FC_NS_SUBTYPE;
-	ct_hdr->ct_cmd = cmd;
-	ct_hdr->ct_mr_size = mr_size / 4;
+	ct_hdr->ct_cmd = cpu_to_be16(cmd);
+	ct_hdr->ct_mr_size = cpu_to_be16(mr_size / 4);
 }
 
 static int zfcp_fc_ns_gid_pn_request(struct zfcp_port *port,
@@ -376,7 +377,7 @@ static int zfcp_fc_ns_gid_pn_request(struct zfcp_port *port,
 
 	zfcp_fc_ct_ns_init(&gid_pn_req->ct_hdr,
			   FC_NS_GID_PN, ZFCP_FC_CT_SIZE_PAGE);
-	gid_pn_req->gid_pn.fn_wwpn = port->wwpn;
+	gid_pn_req->gid_pn.fn_wwpn = cpu_to_be64(port->wwpn);
 
 	ret = zfcp_fsf_send_ct(&adapter->gs->ds, &fc_req->ct_els,
			       adapter->pool.gid_pn_req,
@@ -460,26 +461,26 @@ void zfcp_fc_trigger_did_lookup(struct zfcp_port *port)
 */
 void zfcp_fc_plogi_evaluate(struct zfcp_port *port, struct fc_els_flogi *plogi)
 {
-	if (plogi->fl_wwpn != port->wwpn) {
+	if (be64_to_cpu(plogi->fl_wwpn) != port->wwpn) {
 		port->d_id = 0;
 		dev_warn(&port->adapter->ccw_device->dev,
			 "A port opened with WWPN 0x%016Lx returned data that "
			 "identifies it as WWPN 0x%016Lx\n",
			 (unsigned long long) port->wwpn,
-			 (unsigned long long) plogi->fl_wwpn);
+			 (unsigned long long) be64_to_cpu(plogi->fl_wwpn));
 		return;
 	}
 
-	port->wwnn = plogi->fl_wwnn;
-	port->maxframe_size = plogi->fl_csp.sp_bb_data;
+	port->wwnn = be64_to_cpu(plogi->fl_wwnn);
+	port->maxframe_size = be16_to_cpu(plogi->fl_csp.sp_bb_data);
 
-	if (plogi->fl_cssp[0].cp_class & FC_CPC_VALID)
+	if (plogi->fl_cssp[0].cp_class & cpu_to_be16(FC_CPC_VALID))
 		port->supported_classes |= FC_COS_CLASS1;
-	if (plogi->fl_cssp[1].cp_class & FC_CPC_VALID)
+	if (plogi->fl_cssp[1].cp_class & cpu_to_be16(FC_CPC_VALID))
 		port->supported_classes |= FC_COS_CLASS2;
-	if (plogi->fl_cssp[2].cp_class & FC_CPC_VALID)
+	if (plogi->fl_cssp[2].cp_class & cpu_to_be16(FC_CPC_VALID))
 		port->supported_classes |= FC_COS_CLASS3;
-	if (plogi->fl_cssp[3].cp_class & FC_CPC_VALID)
+	if (plogi->fl_cssp[3].cp_class & cpu_to_be16(FC_CPC_VALID))
 		port->supported_classes |= FC_COS_CLASS4;
 }
 
@@ -497,9 +498,9 @@ static void zfcp_fc_adisc_handler(void *data)
 	}
 
 	if (!port->wwnn)
-		port->wwnn = adisc_resp->adisc_wwnn;
+		port->wwnn = be64_to_cpu(adisc_resp->adisc_wwnn);
 
-	if ((port->wwpn != adisc_resp->adisc_wwpn) ||
+	if ((port->wwpn != be64_to_cpu(adisc_resp->adisc_wwpn)) ||
	    !(atomic_read(&port->status) & ZFCP_STATUS_COMMON_OPEN)) {
 		zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED,
				     "fcadh_2");
@@ -538,8 +539,8 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
 
 	/* acc. to FC-FS, hard_nport_id in ADISC should not be set for ports
	   without FC-AL-2 capability, so we don't set it */
-	fc_req->u.adisc.req.adisc_wwpn = fc_host_port_name(shost);
-	fc_req->u.adisc.req.adisc_wwnn = fc_host_node_name(shost);
+	fc_req->u.adisc.req.adisc_wwpn = cpu_to_be64(fc_host_port_name(shost));
+	fc_req->u.adisc.req.adisc_wwnn = cpu_to_be64(fc_host_node_name(shost));
 	fc_req->u.adisc.req.adisc_cmd = ELS_ADISC;
 	hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost));
 
@@ -666,8 +667,8 @@ static int zfcp_fc_eval_gpn_ft(struct zfcp_fc_req *fc_req,
 	if (ct_els->status)
 		return -EIO;
 
-	if (hdr->ct_cmd != FC_FS_ACC) {
-		if (hdr->ct_reason == FC_BA_RJT_UNABLE)
+	if (hdr->ct_cmd != cpu_to_be16(FC_FS_ACC)) {
+		if (hdr->ct_reason == FC_FS_RJT_UNABL)
 			return -EAGAIN; /* might be a temporary condition */
 		return -EIO;
 	}
@@ -693,10 +694,11 @@ static int zfcp_fc_eval_gpn_ft(struct zfcp_fc_req *fc_req,
 		if (d_id >= FC_FID_WELL_KNOWN_BASE)
 			continue;
 		/* skip the adapter's port and known remote ports */
-		if (acc->fp_wwpn == fc_host_port_name(adapter->scsi_host))
+		if (be64_to_cpu(acc->fp_wwpn) ==
+		    fc_host_port_name(adapter->scsi_host))
 			continue;
 
-		port = zfcp_port_enqueue(adapter, acc->fp_wwpn,
+		port = zfcp_port_enqueue(adapter, be64_to_cpu(acc->fp_wwpn),
					 ZFCP_STATUS_COMMON_NOESC, d_id);
 		if (!IS_ERR(port))
 			zfcp_erp_port_reopen(port, 0, "fcegpf1");
--- a/drivers/s390/scsi/zfcp_fc.h
+++ b/drivers/s390/scsi/zfcp_fc.h
@@ -4,7 +4,7 @@
 * Fibre Channel related definitions and inline functions for the zfcp
 * device driver
 *
- * Copyright IBM Corp. 2009
+ * Copyright IBM Corp. 2009, 2017
 */
 
 #ifndef ZFCP_FC_H
@@ -212,6 +212,8 @@ static inline
 void zfcp_fc_scsi_to_fcp(struct fcp_cmnd *fcp, struct scsi_cmnd *scsi,
			 u8 tm_flags)
 {
+	u32 datalen;
+
 	int_to_scsilun(scsi->device->lun, (struct scsi_lun *) &fcp->fc_lun);
 
 	if (unlikely(tm_flags)) {
@@ -228,10 +230,13 @@ void zfcp_fc_scsi_to_fcp(struct fcp_cmnd *fcp, struct scsi_cmnd *scsi,
 
 	memcpy(fcp->fc_cdb, scsi->cmnd, scsi->cmd_len);
 
-	fcp->fc_dl = scsi_bufflen(scsi);
+	datalen = scsi_bufflen(scsi);
+	fcp->fc_dl = cpu_to_be32(datalen);
 
-	if (scsi_get_prot_type(scsi) == SCSI_PROT_DIF_TYPE1)
-		fcp->fc_dl += fcp->fc_dl / scsi->device->sector_size * 8;
+	if (scsi_get_prot_type(scsi) == SCSI_PROT_DIF_TYPE1) {
+		datalen += datalen / scsi->device->sector_size * 8;
+		fcp->fc_dl = cpu_to_be32(datalen);
+	}
 }
 
 /**
@@ -266,19 +271,23 @@ void zfcp_fc_eval_fcp_rsp(struct fcp_resp_with_ext *fcp_rsp,
 	if (unlikely(rsp_flags & FCP_SNS_LEN_VAL)) {
 		sense = (char *) &fcp_rsp[1];
 		if (rsp_flags & FCP_RSP_LEN_VAL)
-			sense += fcp_rsp->ext.fr_rsp_len;
-		sense_len = min(fcp_rsp->ext.fr_sns_len,
-				(u32) SCSI_SENSE_BUFFERSIZE);
+			sense += be32_to_cpu(fcp_rsp->ext.fr_rsp_len);
+		sense_len = min_t(u32, be32_to_cpu(fcp_rsp->ext.fr_sns_len),
+				  SCSI_SENSE_BUFFERSIZE);
 		memcpy(scsi->sense_buffer, sense, sense_len);
 	}
 
 	if (unlikely(rsp_flags & FCP_RESID_UNDER)) {
-		resid = fcp_rsp->ext.fr_resid;
+		resid = be32_to_cpu(fcp_rsp->ext.fr_resid);
 		scsi_set_resid(scsi, resid);
 		if (scsi_bufflen(scsi) - resid < scsi->underflow &&
		    !(rsp_flags & FCP_SNS_LEN_VAL) &&
		    fcp_rsp->resp.fr_status == SAM_STAT_GOOD)
 			set_host_byte(scsi, DID_ERROR);
+	} else if (unlikely(rsp_flags & FCP_RESID_OVER)) {
+		/* FCP_DL was not sufficient for SCSI data length */
+		if (fcp_rsp->resp.fr_status == SAM_STAT_GOOD)
+			set_host_byte(scsi, DID_ERROR);
 	}
 }
 
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -3,7 +3,7 @@
 *
 * Implementation of FSF commands.
 *
- * Copyright IBM Corp. 2002, 2015
+ * Copyright IBM Corp. 2002, 2017
 */
 
 #define KMSG_COMPONENT "zfcp"
@@ -197,8 +197,6 @@ static void zfcp_fsf_status_read_link_down(struct zfcp_fsf_req *req)
 
 	switch (sr_buf->status_subtype) {
 	case FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK:
-		zfcp_fsf_link_down_info_eval(req, ldi);
-		break;
 	case FSF_STATUS_READ_SUB_FDISC_FAILED:
 		zfcp_fsf_link_down_info_eval(req, ldi);
 		break;
@@ -476,8 +474,8 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
 	if (req->data)
 		memcpy(req->data, bottom, sizeof(*bottom));
 
-	fc_host_port_name(shost) = nsp->fl_wwpn;
-	fc_host_node_name(shost) = nsp->fl_wwnn;
+	fc_host_port_name(shost) = be64_to_cpu(nsp->fl_wwpn);
+	fc_host_node_name(shost) = be64_to_cpu(nsp->fl_wwnn);
 	fc_host_supported_classes(shost) = FC_COS_CLASS2 | FC_COS_CLASS3;
 
 	adapter->timer_ticks = bottom->timer_interval & ZFCP_FSF_TIMER_INT_MASK;
@@ -503,8 +501,8 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
 	switch (bottom->fc_topology) {
 	case FSF_TOPO_P2P:
 		adapter->peer_d_id = ntoh24(bottom->peer_d_id);
-		adapter->peer_wwpn = plogi->fl_wwpn;
-		adapter->peer_wwnn = plogi->fl_wwnn;
+		adapter->peer_wwpn = be64_to_cpu(plogi->fl_wwpn);
+		adapter->peer_wwnn = be64_to_cpu(plogi->fl_wwnn);
 		fc_host_port_type(shost) = FC_PORTTYPE_PTP;
 		break;
 	case FSF_TOPO_FABRIC:
@@ -928,8 +926,8 @@ static void zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *req)
 
 	switch (header->fsf_status) {
 	case FSF_GOOD:
-		zfcp_dbf_san_res("fsscth2", req);
 		ct->status = 0;
+		zfcp_dbf_san_res("fsscth2", req);
 		break;
 	case FSF_SERVICE_CLASS_NOT_SUPPORTED:
 		zfcp_fsf_class_not_supp(req);
@@ -991,8 +989,7 @@ static int zfcp_fsf_setup_ct_els_sbals(struct zfcp_fsf_req *req,
 	qtcb->bottom.support.resp_buf_length =
 		zfcp_qdio_real_bytes(sg_resp);
 
-	zfcp_qdio_set_data_div(qdio, &req->qdio_req,
-			       zfcp_qdio_sbale_count(sg_req));
+	zfcp_qdio_set_data_div(qdio, &req->qdio_req, sg_nents(sg_req));
 	zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
 	zfcp_qdio_set_scount(qdio, &req->qdio_req);
 	return 0;
@@ -1109,8 +1106,8 @@ static void zfcp_fsf_send_els_handler(struct zfcp_fsf_req *req)
 
 	switch (header->fsf_status) {
 	case FSF_GOOD:
-		zfcp_dbf_san_res("fsselh1", req);
 		send_els->status = 0;
+		zfcp_dbf_san_res("fsselh1", req);
 		break;
 	case FSF_SERVICE_CLASS_NOT_SUPPORTED:
 		zfcp_fsf_class_not_supp(req);
@@ -1394,6 +1391,8 @@ static void zfcp_fsf_open_port_handler(struct zfcp_fsf_req *req)
 	case FSF_ADAPTER_STATUS_AVAILABLE:
 		switch (header->fsf_status_qual.word[0]) {
 		case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE:
+			/* no zfcp_fc_test_link() with failed open port */
+			/* fall through */
 		case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED:
 		case FSF_SQ_NO_RETRY_POSSIBLE:
 			req->status |= ZFCP_STATUS_FSFREQ_ERROR;
@@ -2142,7 +2141,8 @@ static void zfcp_fsf_fcp_cmnd_handler(struct zfcp_fsf_req *req)
 		zfcp_scsi_dif_sense_error(scpnt, 0x3);
 		goto skip_fsfstatus;
 	}
-	fcp_rsp = (struct fcp_resp_with_ext *) &req->qtcb->bottom.io.fcp_rsp;
+	BUILD_BUG_ON(sizeof(struct fcp_resp_with_ext) > FSF_FCP_RSP_SIZE);
+	fcp_rsp = &req->qtcb->bottom.io.fcp_rsp.iu;
 	zfcp_fc_eval_fcp_rsp(fcp_rsp, scpnt);
 
 skip_fsfstatus:
@@ -2255,10 +2255,12 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *scsi_cmnd)
 	if (zfcp_fsf_set_data_dir(scsi_cmnd, &io->data_direction))
 		goto failed_scsi_cmnd;
 
-	fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd;
+	BUILD_BUG_ON(sizeof(struct fcp_cmnd) > FSF_FCP_CMND_SIZE);
+	fcp_cmnd = &req->qtcb->bottom.io.fcp_cmnd.iu;
 	zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd, 0);
 
-	if (scsi_prot_sg_count(scsi_cmnd)) {
+	if ((scsi_get_prot_op(scsi_cmnd) != SCSI_PROT_NORMAL) &&
+	    scsi_prot_sg_count(scsi_cmnd)) {
 		zfcp_qdio_set_data_div(qdio, &req->qdio_req,
				       scsi_prot_sg_count(scsi_cmnd));
 		retval = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req,
@@ -2299,7 +2301,7 @@ static void zfcp_fsf_fcp_task_mgmt_handler(struct zfcp_fsf_req *req)
 
 	zfcp_fsf_fcp_handler_common(req);
 
-	fcp_rsp = (struct fcp_resp_with_ext *) &req->qtcb->bottom.io.fcp_rsp;
+	fcp_rsp = &req->qtcb->bottom.io.fcp_rsp.iu;
 	rsp_info = (struct fcp_resp_rsp_info *) &fcp_rsp[1];
 
 	if ((rsp_info->rsp_code != FCP_TMF_CMPL) ||
@@ -2348,7 +2350,7 @@ struct zfcp_fsf_req *zfcp_fsf_fcp_task_mgmt(struct scsi_cmnd *scmnd,
 
 	zfcp_qdio_set_sbale_last(qdio, &req->qdio_req);
 
-	fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd;
+	fcp_cmnd = &req->qtcb->bottom.io.fcp_cmnd.iu;
 	zfcp_fc_scsi_to_fcp(fcp_cmnd, scmnd, tm_flags);
 
 	zfcp_fsf_start_timer(req, ZFCP_SCSI_ER_TIMEOUT);
@@ -2392,7 +2394,6 @@ void zfcp_fsf_reqid_check(struct zfcp_qdio *qdio, int sbal_idx)
 			req_id, dev_name(&adapter->ccw_device->dev));
 	}
 
-	fsf_req->qdio_req.sbal_response = sbal_idx;
 	zfcp_fsf_req_complete(fsf_req);
 
 	if (likely(sbale->eflags & SBAL_EFLAGS_LAST_ENTRY))
--- a/drivers/s390/scsi/zfcp_fsf.h
+++ b/drivers/s390/scsi/zfcp_fsf.h
@@ -3,7 +3,7 @@
 *
 * Interface to the FSF support functions.
 *
- * Copyright IBM Corp. 2002, 2016
+ * Copyright IBM Corp. 2002, 2017
 */
 
 #ifndef FSF_H
@@ -312,8 +312,14 @@ struct fsf_qtcb_bottom_io {
 	u32 data_block_length;
 	u32 prot_data_length;
 	u8 res2[4];
-	u8 fcp_cmnd[FSF_FCP_CMND_SIZE];
-	u8 fcp_rsp[FSF_FCP_RSP_SIZE];
+	union {
+		u8 byte[FSF_FCP_CMND_SIZE];
+		struct fcp_cmnd iu;
+	} fcp_cmnd;
+	union {
+		u8 byte[FSF_FCP_RSP_SIZE];
+		struct fcp_resp_with_ext iu;
+	} fcp_rsp;
 	u8 res3[64];
 } __attribute__ ((packed));
 
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -14,7 +14,7 @@
 #include "zfcp_ext.h"
 #include "zfcp_qdio.h"
 
-static bool enable_multibuffer = 1;
+static bool enable_multibuffer = true;
 module_param_named(datarouter, enable_multibuffer, bool, 0400);
 MODULE_PARM_DESC(datarouter, "Enable hardware data router support (default on)");
 
--- a/drivers/s390/scsi/zfcp_qdio.h
+++ b/drivers/s390/scsi/zfcp_qdio.h
@@ -54,7 +54,6 @@ struct zfcp_qdio {
 * @sbal_last: last sbal for this request
 * @sbal_limit: last possible sbal for this request
 * @sbale_curr: current sbale at creation of this request
- * @sbal_response: sbal used in interrupt
 * @qdio_outb_usage: usage of outbound queue
 */
 struct zfcp_qdio_req {
@@ -64,7 +63,6 @@ struct zfcp_qdio_req {
 	u8 sbal_last;
 	u8 sbal_limit;
 	u8 sbale_curr;
-	u8 sbal_response;
 	u16 qdio_outb_usage;
 };
 
@@ -224,21 +222,6 @@ void zfcp_qdio_set_data_div(struct zfcp_qdio *qdio,
 	sbale->length = count;
 }
 
-/**
- * zfcp_qdio_sbale_count - count sbale used
- * @sg: pointer to struct scatterlist
- */
-static inline
-unsigned int zfcp_qdio_sbale_count(struct scatterlist *sg)
-{
-	unsigned int count = 0;
-
-	for (; sg; sg = sg_next(sg))
-		count++;
-
-	return count;
-}
-
 /**
 * zfcp_qdio_real_bytes - count bytes used
 * @sg: pointer to struct scatterlist
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -3,7 +3,7 @@
 *
 * Interface to Linux SCSI midlayer.
 *
- * Copyright IBM Corp. 2002, 2016
+ * Copyright IBM Corp. 2002, 2017
 */
 
 #define KMSG_COMPONENT "zfcp"
@@ -28,7 +28,7 @@ static bool enable_dif;
 module_param_named(dif, enable_dif, bool, 0400);
 MODULE_PARM_DESC(dif, "Enable DIF/DIX data integrity support");
 
-static bool allow_lun_scan = 1;
+static bool allow_lun_scan = true;
 module_param(allow_lun_scan, bool, 0600);
 MODULE_PARM_DESC(allow_lun_scan, "For NPIV, scan and attach all storage LUNs");
 
@@ -273,25 +273,29 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
 
 		zfcp_erp_wait(adapter);
 		ret = fc_block_scsi_eh(scpnt);
-		if (ret)
+		if (ret) {
+			zfcp_dbf_scsi_devreset("fiof", scpnt, tm_flags, NULL);
 			return ret;
+		}
 
 		if (!(atomic_read(&adapter->status) &
		      ZFCP_STATUS_COMMON_RUNNING)) {
-			zfcp_dbf_scsi_devreset("nres", scpnt, tm_flags);
+			zfcp_dbf_scsi_devreset("nres", scpnt, tm_flags, NULL);
 			return SUCCESS;
 		}
 	}
-	if (!fsf_req)
+	if (!fsf_req) {
+		zfcp_dbf_scsi_devreset("reqf", scpnt, tm_flags, NULL);
 		return FAILED;
+	}
 
 	wait_for_completion(&fsf_req->completion);
 
 	if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) {
-		zfcp_dbf_scsi_devreset("fail", scpnt, tm_flags);
+		zfcp_dbf_scsi_devreset("fail", scpnt, tm_flags, fsf_req);
 		retval = FAILED;
 	} else {
-		zfcp_dbf_scsi_devreset("okay", scpnt, tm_flags);
+		zfcp_dbf_scsi_devreset("okay", scpnt, tm_flags, fsf_req);
 		zfcp_scsi_forget_cmnds(zfcp_sdev, tm_flags);
 	}
 
--- a/drivers/scsi/53c700.c
+++ b/drivers/scsi/53c700.c
@@ -168,7 +168,6 @@ MODULE_LICENSE("GPL");
 
 STATIC int NCR_700_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *);
 STATIC int NCR_700_abort(struct scsi_cmnd * SCpnt);
-STATIC int NCR_700_bus_reset(struct scsi_cmnd * SCpnt);
 STATIC int NCR_700_host_reset(struct scsi_cmnd * SCpnt);
 STATIC void NCR_700_chip_setup(struct Scsi_Host *host);
 STATIC void NCR_700_chip_reset(struct Scsi_Host *host);
@@ -315,7 +314,6 @@ NCR_700_detect(struct scsi_host_template *tpnt,
 	/* Fill in the missing routines from the host template */
 	tpnt->queuecommand = NCR_700_queuecommand;
 	tpnt->eh_abort_handler = NCR_700_abort;
-	tpnt->eh_bus_reset_handler = NCR_700_bus_reset;
 	tpnt->eh_host_reset_handler = NCR_700_host_reset;
 	tpnt->can_queue = NCR_700_COMMAND_SLOTS_PER_HOST;
 	tpnt->sg_tablesize = NCR_700_SG_SEGMENTS;
@@ -1938,14 +1936,14 @@ NCR_700_abort(struct scsi_cmnd * SCp)
 }
 
 STATIC int
-NCR_700_bus_reset(struct scsi_cmnd * SCp)
+NCR_700_host_reset(struct scsi_cmnd * SCp)
 {
 	DECLARE_COMPLETION_ONSTACK(complete);
 	struct NCR_700_Host_Parameters *hostdata =
		(struct NCR_700_Host_Parameters *)SCp->device->host->hostdata[0];
 
 	scmd_printk(KERN_INFO, SCp,
-		    "New error handler wants BUS reset, cmd %p\n\t", SCp);
+		    "New error handler wants HOST reset, cmd %p\n\t", SCp);
 	scsi_print_command(SCp);
 
 	/* In theory, eh_complete should always be null because the
@@ -1960,6 +1958,7 @@ NCR_700_bus_reset(struct scsi_cmnd * SCp)
 
 	hostdata->eh_complete = &complete;
 	NCR_700_internal_bus_reset(SCp->device->host);
+	NCR_700_chip_reset(SCp->device->host);
 
 	spin_unlock_irq(SCp->device->host->host_lock);
 	wait_for_completion(&complete);
@@ -1974,22 +1973,6 @@ NCR_700_bus_reset(struct scsi_cmnd * SCp)
 	return SUCCESS;
 }
 
-STATIC int
-NCR_700_host_reset(struct scsi_cmnd * SCp)
-{
-	scmd_printk(KERN_INFO, SCp, "New error handler wants HOST reset\n\t");
-	scsi_print_command(SCp);
-
-	spin_lock_irq(SCp->device->host->host_lock);
-
-	NCR_700_internal_bus_reset(SCp->device->host);
-	NCR_700_chip_reset(SCp->device->host);
-
-	spin_unlock_irq(SCp->device->host->host_lock);
-
-	return SUCCESS;
-}
-
 STATIC void
 NCR_700_set_period(struct scsi_target *STp, int period)
 {
|
|||
|
||||
|
||||
/**
|
||||
* NCR5380_bus_reset - reset the SCSI bus
|
||||
* NCR5380_host_reset - reset the SCSI host
|
||||
* @cmd: SCSI command undergoing EH
|
||||
*
|
||||
* Returns SUCCESS
|
||||
*/
|
||||
|
||||
static int NCR5380_bus_reset(struct scsi_cmnd *cmd)
|
||||
static int NCR5380_host_reset(struct scsi_cmnd *cmd)
|
||||
{
|
||||
struct Scsi_Host *instance = cmd->device->host;
|
||||
struct NCR5380_hostdata *hostdata = shost_priv(instance);
|
||||
|
|
|
@ -147,22 +147,6 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
|
|||
}
|
||||
}
|
||||
|
||||
static int a2091_bus_reset(struct scsi_cmnd *cmd)
|
||||
{
|
||||
struct Scsi_Host *instance = cmd->device->host;
|
||||
|
||||
/* FIXME perform bus-specific reset */
|
||||
|
||||
/* FIXME 2: kill this function, and let midlayer fall back
|
||||
to the same action, calling wd33c93_host_reset() */
|
||||
|
||||
spin_lock_irq(instance->host_lock);
|
||||
wd33c93_host_reset(cmd);
|
||||
spin_unlock_irq(instance->host_lock);
|
||||
|
||||
return SUCCESS;
|
||||
}
|
||||
|
||||
static struct scsi_host_template a2091_scsi_template = {
|
||||
.module = THIS_MODULE,
|
||||
.name = "Commodore A2091/A590 SCSI",
|
||||
|
@ -171,7 +155,6 @@ static struct scsi_host_template a2091_scsi_template = {
|
|||
.proc_name = "A2901",
|
||||
.queuecommand = wd33c93_queuecommand,
|
||||
.eh_abort_handler = wd33c93_abort,
|
||||
.eh_bus_reset_handler = a2091_bus_reset,
|
||||
.eh_host_reset_handler = wd33c93_host_reset,
|
||||
.can_queue = CAN_QUEUE,
|
||||
.this_id = 7,
|
||||
|
|
|
@ -162,22 +162,6 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
|
|||
}
|
||||
}
|
||||
|
||||
static int a3000_bus_reset(struct scsi_cmnd *cmd)
|
||||
{
|
||||
struct Scsi_Host *instance = cmd->device->host;
|
||||
|
||||
/* FIXME perform bus-specific reset */
|
||||
|
||||
/* FIXME 2: kill this entire function, which should
|
||||
cause mid-layer to call wd33c93_host_reset anyway? */
|
||||
|
||||
spin_lock_irq(instance->host_lock);
|
||||
wd33c93_host_reset(cmd);
|
||||
spin_unlock_irq(instance->host_lock);
|
||||
|
||||
return SUCCESS;
|
||||
}
|
||||
|
||||
static struct scsi_host_template amiga_a3000_scsi_template = {
|
||||
.module = THIS_MODULE,
|
||||
.name = "Amiga 3000 built-in SCSI",
|
||||
|
@ -186,7 +170,6 @@ static struct scsi_host_template amiga_a3000_scsi_template = {
|
|||
.proc_name = "A3000",
|
||||
.queuecommand = wd33c93_queuecommand,
|
||||
.eh_abort_handler = wd33c93_abort,
|
||||
.eh_bus_reset_handler = a3000_bus_reset,
|
||||
.eh_host_reset_handler = wd33c93_host_reset,
|
||||
.can_queue = CAN_QUEUE,
|
||||
.this_id = 7,
|
||||
|
|
|
@ -594,6 +594,7 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
|
|||
|
||||
aac_fib_init(cmd_fibcontext);
|
||||
dinfo = (struct aac_get_name *) fib_data(cmd_fibcontext);
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
|
||||
dinfo->command = cpu_to_le32(VM_ContainerConfig);
|
||||
dinfo->type = cpu_to_le32(CT_READ_NAME);
|
||||
|
@ -611,10 +612,8 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
|
|||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS) {
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
if (status == -EINPROGRESS)
|
||||
return 0;
|
||||
}
|
||||
|
||||
printk(KERN_WARNING "aac_get_container_name: aac_fib_send failed with status: %d.\n", status);
|
||||
aac_fib_complete(cmd_fibcontext);
|
||||
|
@ -725,6 +724,7 @@ static void _aac_probe_container1(void * context, struct fib * fibptr)
|
|||
|
||||
dinfo->count = cpu_to_le32(scmd_id(scsicmd));
|
||||
dinfo->type = cpu_to_le32(FT_FILESYS);
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
|
||||
status = aac_fib_send(ContainerCommand,
|
||||
fibptr,
|
||||
|
@ -736,9 +736,7 @@ static void _aac_probe_container1(void * context, struct fib * fibptr)
|
|||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS)
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
else if (status < 0) {
|
||||
if (status < 0 && status != -EINPROGRESS) {
|
||||
/* Inherit results from VM_NameServe, if any */
|
||||
dresp->status = cpu_to_le32(ST_OK);
|
||||
_aac_probe_container2(context, fibptr);
|
||||
|
@ -766,6 +764,7 @@ static int _aac_probe_container(struct scsi_cmnd * scsicmd, int (*callback)(stru
|
|||
dinfo->count = cpu_to_le32(scmd_id(scsicmd));
|
||||
dinfo->type = cpu_to_le32(FT_FILESYS);
|
||||
scsicmd->SCp.ptr = (char *)callback;
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
|
||||
status = aac_fib_send(ContainerCommand,
|
||||
fibptr,
|
||||
|
@ -777,10 +776,9 @@ static int _aac_probe_container(struct scsi_cmnd * scsicmd, int (*callback)(stru
|
|||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS) {
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
if (status == -EINPROGRESS)
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (status < 0) {
|
||||
scsicmd->SCp.ptr = NULL;
|
||||
aac_fib_complete(fibptr);
|
||||
|
@ -1126,6 +1124,7 @@ static int aac_get_container_serial(struct scsi_cmnd * scsicmd)
|
|||
dinfo->command = cpu_to_le32(VM_ContainerConfig);
|
||||
dinfo->type = cpu_to_le32(CT_CID_TO_32BITS_UID);
|
||||
dinfo->cid = cpu_to_le32(scmd_id(scsicmd));
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
|
||||
status = aac_fib_send(ContainerCommand,
|
||||
cmd_fibcontext,
|
||||
|
@ -1138,10 +1137,8 @@ static int aac_get_container_serial(struct scsi_cmnd * scsicmd)
|
|||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS) {
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
if (status == -EINPROGRESS)
|
||||
return 0;
|
||||
}
|
||||
|
||||
printk(KERN_WARNING "aac_get_container_serial: aac_fib_send failed with status: %d.\n", status);
|
||||
aac_fib_complete(cmd_fibcontext);
|
||||
|
@ -2335,16 +2332,14 @@ static int aac_read(struct scsi_cmnd * scsicmd)
|
|||
* Alocate and initialize a Fib
|
||||
*/
|
||||
cmd_fibcontext = aac_fib_alloc_tag(dev, scsicmd);
|
||||
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
status = aac_adapter_read(cmd_fibcontext, scsicmd, lba, count);
|
||||
|
||||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS) {
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
if (status == -EINPROGRESS)
|
||||
return 0;
|
||||
}
|
||||
|
||||
printk(KERN_WARNING "aac_read: aac_fib_send failed with status: %d.\n", status);
|
||||
/*
|
||||
|
@ -2429,16 +2424,14 @@ static int aac_write(struct scsi_cmnd * scsicmd)
|
|||
* Allocate and initialize a Fib then setup a BlockWrite command
|
||||
*/
|
||||
cmd_fibcontext = aac_fib_alloc_tag(dev, scsicmd);
|
||||
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
status = aac_adapter_write(cmd_fibcontext, scsicmd, lba, count, fua);
|
||||
|
||||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS) {
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
if (status == -EINPROGRESS)
|
||||
return 0;
|
||||
}
|
||||
|
||||
printk(KERN_WARNING "aac_write: aac_fib_send failed with status: %d\n", status);
|
||||
/*
|
||||
|
@ -2588,6 +2581,7 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
|
|||
synchronizecmd->cid = cpu_to_le32(scmd_id(scsicmd));
|
||||
synchronizecmd->count =
|
||||
cpu_to_le32(sizeof(((struct aac_synchronize_reply *)NULL)->data));
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
|
||||
/*
|
||||
* Now send the Fib to the adapter
|
||||
|
@ -2603,10 +2597,8 @@ static int aac_synchronize(struct scsi_cmnd *scsicmd)
|
|||
/*
|
||||
* Check that the command queued to the controller
|
||||
*/
|
||||
if (status == -EINPROGRESS) {
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
if (status == -EINPROGRESS)
|
||||
return 0;
|
||||
}
|
||||
|
||||
printk(KERN_WARNING
|
||||
"aac_synchronize: aac_fib_send failed with status: %d.\n", status);
|
||||
|
@ -2666,6 +2658,7 @@ static int aac_start_stop(struct scsi_cmnd *scsicmd)
|
|||
pmcmd->cid = cpu_to_le32(sdev_id(sdev));
|
||||
pmcmd->parm = (scsicmd->cmnd[1] & 1) ?
|
||||
cpu_to_le32(CT_PM_UNIT_IMMEDIATE) : 0;
|
||||
scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
|
||||
|
||||
/*
|
||||
* Now send the Fib to the adapter
|
||||
|
@@ -2681,10 +2674,8 @@ static int aac_start_stop(struct scsi_cmnd *scsicmd)
 	/*
 	 *	Check that the command queued to the controller
 	 */
-	if (status == -EINPROGRESS) {
-		scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
+	if (status == -EINPROGRESS)
 		return 0;
-	}
 
 	aac_fib_complete(cmd_fibcontext);
 	aac_fib_free(cmd_fibcontext);
@@ -3692,16 +3683,14 @@ static int aac_send_srb_fib(struct scsi_cmnd* scsicmd)
 	 *	Allocate and initialize a Fib then setup a BlockWrite command
 	 */
 	cmd_fibcontext = aac_fib_alloc_tag(dev, scsicmd);
-
+	scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
 	status = aac_adapter_scsi(cmd_fibcontext, scsicmd);
 
 	/*
 	 *	Check that the command queued to the controller
 	 */
-	if (status == -EINPROGRESS) {
-		scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
+	if (status == -EINPROGRESS)
 		return 0;
-	}
 
 	printk(KERN_WARNING "aac_srb: aac_fib_send failed with status: %d\n", status);
 	aac_fib_complete(cmd_fibcontext);
@@ -3739,15 +3728,14 @@ static int aac_send_hba_fib(struct scsi_cmnd *scsicmd)
 	if (!cmd_fibcontext)
 		return -1;
 
+	scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
 	status = aac_adapter_hba(cmd_fibcontext, scsicmd);
 
 	/*
 	 *	Check that the command queued to the controller
 	 */
-	if (status == -EINPROGRESS) {
-		scsicmd->SCp.phase = AAC_OWNER_FIRMWARE;
+	if (status == -EINPROGRESS)
 		return 0;
-	}
 
 	pr_warn("aac_hba_cmd_req: aac_fib_send failed with status: %d\n",
 		status);
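Every hunk above applies the same two-line change: the command is flagged as firmware-owned before it is submitted, and a bare -EINPROGRESS check then decides success. A standalone sketch of that control flow (hypothetical helper names, not the driver's real API) — the reordering matters because the completion callback can run before the submit call returns, so the ownership flag must already be set:

```c
#include <assert.h>

#define EINPROGRESS 115	/* matches the kernel's errno value */
#define EINVAL       22

/* Hypothetical stand-in for aac_adapter_write()/aac_adapter_scsi():
 * returns -EINPROGRESS when the firmware accepted the command and
 * will complete it asynchronously. */
static int submit(int accept)
{
	return accept ? -EINPROGRESS : -EINVAL;
}

/* The pattern the patch converges on: set ownership first, then a
 * bare -EINPROGRESS check; everything else is the error path. */
static int issue_command(int accept, int *owner)
{
	*owner = 1;			/* AAC_OWNER_FIRMWARE, before submit */
	if (submit(accept) == -EINPROGRESS)
		return 0;		/* completion arrives via callback */
	return -1;			/* caller completes the scsi_cmnd */
}
```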
@@ -3763,6 +3751,8 @@ static long aac_build_sg(struct scsi_cmnd *scsicmd, struct sgmap *psg)
 	struct aac_dev *dev;
 	unsigned long byte_count = 0;
 	int nseg;
+	struct scatterlist *sg;
+	int i;
 
 	dev = (struct aac_dev *)scsicmd->device->host->hostdata;
 	// Get rid of old data
@@ -3771,32 +3761,29 @@ static long aac_build_sg(struct scsi_cmnd *scsicmd, struct sgmap *psg)
 	psg->sg[0].count = 0;
 
 	nseg = scsi_dma_map(scsicmd);
-	if (nseg < 0)
+	if (nseg <= 0)
 		return nseg;
-	if (nseg) {
-		struct scatterlist *sg;
-		int i;
-
-		psg->count = cpu_to_le32(nseg);
-
-		scsi_for_each_sg(scsicmd, sg, nseg, i) {
-			psg->sg[i].addr = cpu_to_le32(sg_dma_address(sg));
-			psg->sg[i].count = cpu_to_le32(sg_dma_len(sg));
-			byte_count += sg_dma_len(sg);
-		}
-		/* hba wants the size to be exact */
-		if (byte_count > scsi_bufflen(scsicmd)) {
-			u32 temp = le32_to_cpu(psg->sg[i-1].count) -
-				(byte_count - scsi_bufflen(scsicmd));
-			psg->sg[i-1].count = cpu_to_le32(temp);
-			byte_count = scsi_bufflen(scsicmd);
-		}
-		/* Check for command underflow */
-		if(scsicmd->underflow && (byte_count < scsicmd->underflow)){
-			printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
-					byte_count, scsicmd->underflow);
-		}
+
+	psg->count = cpu_to_le32(nseg);
+
+	scsi_for_each_sg(scsicmd, sg, nseg, i) {
+		psg->sg[i].addr = cpu_to_le32(sg_dma_address(sg));
+		psg->sg[i].count = cpu_to_le32(sg_dma_len(sg));
+		byte_count += sg_dma_len(sg);
+	}
+	/* hba wants the size to be exact */
+	if (byte_count > scsi_bufflen(scsicmd)) {
+		u32 temp = le32_to_cpu(psg->sg[i-1].count) -
+			(byte_count - scsi_bufflen(scsicmd));
+		psg->sg[i-1].count = cpu_to_le32(temp);
+		byte_count = scsi_bufflen(scsicmd);
+	}
+	/* Check for command underflow */
+	if (scsicmd->underflow && (byte_count < scsicmd->underflow)) {
+		printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
+		       byte_count, scsicmd->underflow);
 	}
 
 	return byte_count;
 }
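The sg builders above all end with the same fix-up: if the DMA-mapped segment lengths overshoot the request's buffer length, the last element is trimmed so the controller sees an exact total. The clamp in isolation (plain arrays standing in for the driver's sgmap/scatterlist types):

```c
#include <assert.h>

/* Sketch of the "hba wants the size to be exact" clamp: sum the
 * mapped segment lengths and shave the overshoot off the last one.
 * Simplified model, no le32 conversions. */
static unsigned long clamp_sg(unsigned int *len, int nseg, unsigned long bufflen)
{
	unsigned long byte_count = 0;
	int i;

	for (i = 0; i < nseg; i++)
		byte_count += len[i];
	if (byte_count > bufflen) {
		len[nseg - 1] -= byte_count - bufflen;
		byte_count = bufflen;
	}
	return byte_count;
}
```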
@@ -3807,6 +3794,8 @@ static long aac_build_sg64(struct scsi_cmnd *scsicmd, struct sgmap64 *psg)
 	unsigned long byte_count = 0;
 	u64 addr;
 	int nseg;
+	struct scatterlist *sg;
+	int i;
 
 	dev = (struct aac_dev *)scsicmd->device->host->hostdata;
 	// Get rid of old data
@@ -3816,34 +3805,31 @@ static long aac_build_sg64(struct scsi_cmnd *scsicmd, struct sgmap64 *psg)
 	psg->sg[0].count = 0;
 
 	nseg = scsi_dma_map(scsicmd);
-	if (nseg < 0)
+	if (nseg <= 0)
 		return nseg;
-	if (nseg) {
-		struct scatterlist *sg;
-		int i;
-
-		scsi_for_each_sg(scsicmd, sg, nseg, i) {
-			int count = sg_dma_len(sg);
-			addr = sg_dma_address(sg);
-			psg->sg[i].addr[0] = cpu_to_le32(addr & 0xffffffff);
-			psg->sg[i].addr[1] = cpu_to_le32(addr>>32);
-			psg->sg[i].count = cpu_to_le32(count);
-			byte_count += count;
-		}
-		psg->count = cpu_to_le32(nseg);
-		/* hba wants the size to be exact */
-		if (byte_count > scsi_bufflen(scsicmd)) {
-			u32 temp = le32_to_cpu(psg->sg[i-1].count) -
-				(byte_count - scsi_bufflen(scsicmd));
-			psg->sg[i-1].count = cpu_to_le32(temp);
-			byte_count = scsi_bufflen(scsicmd);
-		}
-		/* Check for command underflow */
-		if(scsicmd->underflow && (byte_count < scsicmd->underflow)){
-			printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
-					byte_count, scsicmd->underflow);
-		}
+
+	scsi_for_each_sg(scsicmd, sg, nseg, i) {
+		int count = sg_dma_len(sg);
+		addr = sg_dma_address(sg);
+		psg->sg[i].addr[0] = cpu_to_le32(addr & 0xffffffff);
+		psg->sg[i].addr[1] = cpu_to_le32(addr>>32);
+		psg->sg[i].count = cpu_to_le32(count);
+		byte_count += count;
+	}
+	psg->count = cpu_to_le32(nseg);
+	/* hba wants the size to be exact */
+	if (byte_count > scsi_bufflen(scsicmd)) {
+		u32 temp = le32_to_cpu(psg->sg[i-1].count) -
+			(byte_count - scsi_bufflen(scsicmd));
+		psg->sg[i-1].count = cpu_to_le32(temp);
+		byte_count = scsi_bufflen(scsicmd);
+	}
+	/* Check for command underflow */
+	if (scsicmd->underflow && (byte_count < scsicmd->underflow)) {
+		printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
+		       byte_count, scsicmd->underflow);
 	}
 
 	return byte_count;
 }
@@ -3851,6 +3837,8 @@ static long aac_build_sgraw(struct scsi_cmnd *scsicmd, struct sgmapraw *psg)
 {
 	unsigned long byte_count = 0;
 	int nseg;
+	struct scatterlist *sg;
+	int i;
 
 	// Get rid of old data
 	psg->count = 0;
@@ -3862,37 +3850,34 @@ static long aac_build_sgraw(struct scsi_cmnd *scsicmd, struct sgmapraw *psg)
 	psg->sg[0].flags = 0;
 
 	nseg = scsi_dma_map(scsicmd);
-	if (nseg < 0)
+	if (nseg <= 0)
 		return nseg;
-	if (nseg) {
-		struct scatterlist *sg;
-		int i;
-
-		scsi_for_each_sg(scsicmd, sg, nseg, i) {
-			int count = sg_dma_len(sg);
-			u64 addr = sg_dma_address(sg);
-			psg->sg[i].next = 0;
-			psg->sg[i].prev = 0;
-			psg->sg[i].addr[1] = cpu_to_le32((u32)(addr>>32));
-			psg->sg[i].addr[0] = cpu_to_le32((u32)(addr & 0xffffffff));
-			psg->sg[i].count = cpu_to_le32(count);
-			psg->sg[i].flags = 0;
-			byte_count += count;
-		}
-		psg->count = cpu_to_le32(nseg);
-		/* hba wants the size to be exact */
-		if (byte_count > scsi_bufflen(scsicmd)) {
-			u32 temp = le32_to_cpu(psg->sg[i-1].count) -
-				(byte_count - scsi_bufflen(scsicmd));
-			psg->sg[i-1].count = cpu_to_le32(temp);
-			byte_count = scsi_bufflen(scsicmd);
-		}
-		/* Check for command underflow */
-		if(scsicmd->underflow && (byte_count < scsicmd->underflow)){
-			printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
-					byte_count, scsicmd->underflow);
-		}
+
+	scsi_for_each_sg(scsicmd, sg, nseg, i) {
+		int count = sg_dma_len(sg);
+		u64 addr = sg_dma_address(sg);
+		psg->sg[i].next = 0;
+		psg->sg[i].prev = 0;
+		psg->sg[i].addr[1] = cpu_to_le32((u32)(addr>>32));
+		psg->sg[i].addr[0] = cpu_to_le32((u32)(addr & 0xffffffff));
+		psg->sg[i].count = cpu_to_le32(count);
+		psg->sg[i].flags = 0;
+		byte_count += count;
+	}
+	psg->count = cpu_to_le32(nseg);
+	/* hba wants the size to be exact */
+	if (byte_count > scsi_bufflen(scsicmd)) {
+		u32 temp = le32_to_cpu(psg->sg[i-1].count) -
+			(byte_count - scsi_bufflen(scsicmd));
+		psg->sg[i-1].count = cpu_to_le32(temp);
+		byte_count = scsi_bufflen(scsicmd);
+	}
+	/* Check for command underflow */
+	if (scsicmd->underflow && (byte_count < scsicmd->underflow)) {
+		printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
+		       byte_count, scsicmd->underflow);
 	}
 
 	return byte_count;
 }
@@ -3901,75 +3886,77 @@ static long aac_build_sgraw2(struct scsi_cmnd *scsicmd,
 {
 	unsigned long byte_count = 0;
 	int nseg;
+	struct scatterlist *sg;
+	int i, conformable = 0;
+	u32 min_size = PAGE_SIZE, cur_size;
 
 	nseg = scsi_dma_map(scsicmd);
-	if (nseg < 0)
+	if (nseg <= 0)
 		return nseg;
-	if (nseg) {
-		struct scatterlist *sg;
-		int i, conformable = 0;
-		u32 min_size = PAGE_SIZE, cur_size;
-
-		scsi_for_each_sg(scsicmd, sg, nseg, i) {
-			int count = sg_dma_len(sg);
-			u64 addr = sg_dma_address(sg);
-
-			BUG_ON(i >= sg_max);
-			rio2->sge[i].addrHigh = cpu_to_le32((u32)(addr>>32));
-			rio2->sge[i].addrLow = cpu_to_le32((u32)(addr & 0xffffffff));
-			cur_size = cpu_to_le32(count);
-			rio2->sge[i].length = cur_size;
-			rio2->sge[i].flags = 0;
-			if (i == 0) {
-				conformable = 1;
-				rio2->sgeFirstSize = cur_size;
-			} else if (i == 1) {
-				rio2->sgeNominalSize = cur_size;
-				min_size = cur_size;
-			} else if ((i+1) < nseg && cur_size != rio2->sgeNominalSize) {
-				conformable = 0;
-				if (cur_size < min_size)
-					min_size = cur_size;
-			}
-			byte_count += count;
-		}
-
-		/* hba wants the size to be exact */
-		if (byte_count > scsi_bufflen(scsicmd)) {
-			u32 temp = le32_to_cpu(rio2->sge[i-1].length) -
-				(byte_count - scsi_bufflen(scsicmd));
-			rio2->sge[i-1].length = cpu_to_le32(temp);
-			byte_count = scsi_bufflen(scsicmd);
-		}
-
-		rio2->sgeCnt = cpu_to_le32(nseg);
-		rio2->flags |= cpu_to_le16(RIO2_SG_FORMAT_IEEE1212);
-		/* not conformable: evaluate required sg elements */
-		if (!conformable) {
-			int j, nseg_new = nseg, err_found;
-			for (i = min_size / PAGE_SIZE; i >= 1; --i) {
-				err_found = 0;
-				nseg_new = 2;
-				for (j = 1; j < nseg - 1; ++j) {
-					if (rio2->sge[j].length % (i*PAGE_SIZE)) {
-						err_found = 1;
-						break;
-					}
-					nseg_new += (rio2->sge[j].length / (i*PAGE_SIZE));
-				}
-				if (!err_found)
-					break;
-			}
-			if (i > 0 && nseg_new <= sg_max)
-				aac_convert_sgraw2(rio2, i, nseg, nseg_new);
-		} else
-			rio2->flags |= cpu_to_le16(RIO2_SGL_CONFORMANT);
-
-		/* Check for command underflow */
-		if (scsicmd->underflow && (byte_count < scsicmd->underflow)) {
-			printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
-					byte_count, scsicmd->underflow);
-		}
+
+	scsi_for_each_sg(scsicmd, sg, nseg, i) {
+		int count = sg_dma_len(sg);
+		u64 addr = sg_dma_address(sg);
+
+		BUG_ON(i >= sg_max);
+		rio2->sge[i].addrHigh = cpu_to_le32((u32)(addr>>32));
+		rio2->sge[i].addrLow = cpu_to_le32((u32)(addr & 0xffffffff));
+		cur_size = cpu_to_le32(count);
+		rio2->sge[i].length = cur_size;
+		rio2->sge[i].flags = 0;
+		if (i == 0) {
+			conformable = 1;
+			rio2->sgeFirstSize = cur_size;
+		} else if (i == 1) {
+			rio2->sgeNominalSize = cur_size;
+			min_size = cur_size;
+		} else if ((i+1) < nseg && cur_size != rio2->sgeNominalSize) {
+			conformable = 0;
+			if (cur_size < min_size)
+				min_size = cur_size;
+		}
+		byte_count += count;
+	}
+
+	/* hba wants the size to be exact */
+	if (byte_count > scsi_bufflen(scsicmd)) {
+		u32 temp = le32_to_cpu(rio2->sge[i-1].length) -
+			(byte_count - scsi_bufflen(scsicmd));
+		rio2->sge[i-1].length = cpu_to_le32(temp);
+		byte_count = scsi_bufflen(scsicmd);
+	}
+
+	rio2->sgeCnt = cpu_to_le32(nseg);
+	rio2->flags |= cpu_to_le16(RIO2_SG_FORMAT_IEEE1212);
+	/* not conformable: evaluate required sg elements */
+	if (!conformable) {
+		int j, nseg_new = nseg, err_found;
+
+		for (i = min_size / PAGE_SIZE; i >= 1; --i) {
+			err_found = 0;
+			nseg_new = 2;
+			for (j = 1; j < nseg - 1; ++j) {
+				if (rio2->sge[j].length % (i*PAGE_SIZE)) {
+					err_found = 1;
+					break;
+				}
+				nseg_new += (rio2->sge[j].length / (i*PAGE_SIZE));
+			}
+			if (!err_found)
+				break;
+		}
+		if (i > 0 && nseg_new <= sg_max) {
+			int ret = aac_convert_sgraw2(rio2, i, nseg, nseg_new);
+
+			if (ret < 0)
+				return ret;
+		}
+	} else
+		rio2->flags |= cpu_to_le16(RIO2_SGL_CONFORMANT);
+
+	/* Check for command underflow */
+	if (scsicmd->underflow && (byte_count < scsicmd->underflow)) {
+		printk(KERN_WARNING"aacraid: cmd len %08lX cmd underflow %08X\n",
+		       byte_count, scsicmd->underflow);
 	}
 
 	return byte_count;
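The non-conformable path in aac_build_sgraw2() above scans for the largest page multiple that evenly divides every middle element, counting how many elements the split list would need. A standalone sketch of that scan (simplified: plain lengths instead of `rio2->sge[]`, no le32 conversions, and a 0 return where the driver simply skips the conversion):

```c
#include <assert.h>

#define PAGE_SZ 4096UL

/* Mirror of the conformability scan: try element sizes of
 * i * PAGE_SZ from min_size downward; every middle element must be an
 * exact multiple. Returns the element count after splitting, with the
 * first and last elements always kept as-is, or 0 if nothing fits. */
static int count_split(const unsigned long *len, int nseg, unsigned long min_size)
{
	int i, j, nseg_new = 0;

	for (i = min_size / PAGE_SZ; i >= 1; --i) {
		int err_found = 0;

		nseg_new = 2;			/* first + last element */
		for (j = 1; j < nseg - 1; ++j) {
			if (len[j] % (i * PAGE_SZ)) {
				err_found = 1;
				break;
			}
			nseg_new += len[j] / (i * PAGE_SZ);
		}
		if (!err_found)
			return nseg_new;
	}
	return 0;
}
```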
@@ -3986,7 +3973,7 @@ static int aac_convert_sgraw2(struct aac_raw_io2 *rio2, int pages, int nseg, int nseg_new)
 
 	sge = kmalloc(nseg_new * sizeof(struct sge_ieee1212), GFP_ATOMIC);
 	if (sge == NULL)
-		return -1;
+		return -ENOMEM;
 
 	for (i = 1, pos = 1; i < nseg-1; ++i) {
 		for (j = 0; j < rio2->sge[i].length / (pages * PAGE_SIZE); ++j) {
@@ -1723,6 +1723,7 @@ struct aac_dev
 #define FIB_CONTEXT_FLAG_FASTRESP		(0x00000008)
 #define FIB_CONTEXT_FLAG_NATIVE_HBA		(0x00000010)
 #define FIB_CONTEXT_FLAG_NATIVE_HBA_TMF	(0x00000020)
+#define FIB_CONTEXT_FLAG_SCSI_CMD	(0x00000040)
 
 /*
  *	Define the command values
@@ -520,9 +520,9 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
 		dev->raw_io_64 = 1;
 	dev->sync_mode = aac_sync_mode;
 	if (dev->a_ops.adapter_comm &&
-			(status[1] & AAC_OPT_NEW_COMM)) {
-			dev->comm_interface = AAC_COMM_MESSAGE;
-			dev->raw_io_interface = 1;
+	    (status[1] & AAC_OPT_NEW_COMM)) {
+		dev->comm_interface = AAC_COMM_MESSAGE;
+		dev->raw_io_interface = 1;
 		if ((status[1] & AAC_OPT_NEW_COMM_TYPE1)) {
 			/* driver supports TYPE1 (Tupelo) */
 			dev->comm_interface = AAC_COMM_MESSAGE_TYPE1;
@@ -770,7 +770,8 @@ int aac_hba_send(u8 command, struct fib *fibptr, fib_callback callback,
 		/* bit1 of request_id must be 0 */
 		hbacmd->request_id =
 			cpu_to_le32((((u32)(fibptr - dev->fibs)) << 2) + 1);
-	} else
+		fibptr->flags |= FIB_CONTEXT_FLAG_SCSI_CMD;
+	} else if (command != HBA_IU_TYPE_SCSI_TM_REQ)
 		return -EINVAL;
 
 
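The request_id rule in the aac_hba_send() hunk above ("bit1 of request_id must be 0") holds by construction: shifting the FIB index left by two clears bits 0-1, and adding 1 sets only bit 0. A minimal check of that encoding:

```c
#include <assert.h>
#include <stdint.h>

/* The request_id encoding used in aac_hba_send():
 * (fib_index << 2) + 1 keeps bit 1 clear, as the firmware requires,
 * and leaves bit 0 set as a non-zero marker. */
static uint32_t encode_request_id(uint32_t fib_index)
{
	return (fib_index << 2) + 1;
}
```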
@@ -814,111 +814,225 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
 	return ret;
 }
 
+static u8 aac_eh_tmf_lun_reset_fib(struct aac_hba_map_info *info,
+				   struct fib *fib, u64 tmf_lun)
+{
+	struct aac_hba_tm_req *tmf;
+	u64 address;
+
+	/* start a HBA_TMF_LUN_RESET TMF request */
+	tmf = (struct aac_hba_tm_req *)fib->hw_fib_va;
+	memset(tmf, 0, sizeof(*tmf));
+	tmf->tmf = HBA_TMF_LUN_RESET;
+	tmf->it_nexus = info->rmw_nexus;
+	int_to_scsilun(tmf_lun, (struct scsi_lun *)tmf->lun);
+
+	address = (u64)fib->hw_error_pa;
+	tmf->error_ptr_hi = cpu_to_le32
+		((u32)(address >> 32));
+	tmf->error_ptr_lo = cpu_to_le32
+		((u32)(address & 0xffffffff));
+	tmf->error_length = cpu_to_le32(FW_ERROR_BUFFER_SIZE);
+	fib->hbacmd_size = sizeof(*tmf);
+
+	return HBA_IU_TYPE_SCSI_TM_REQ;
+}
+
+static u8 aac_eh_tmf_hard_reset_fib(struct aac_hba_map_info *info,
+				    struct fib *fib)
+{
+	struct aac_hba_reset_req *rst;
+	u64 address;
+
+	/* already tried, start a hard reset now */
+	rst = (struct aac_hba_reset_req *)fib->hw_fib_va;
+	memset(rst, 0, sizeof(*rst));
+	rst->it_nexus = info->rmw_nexus;
+
+	address = (u64)fib->hw_error_pa;
+	rst->error_ptr_hi = cpu_to_le32((u32)(address >> 32));
+	rst->error_ptr_lo = cpu_to_le32
+		((u32)(address & 0xffffffff));
+	rst->error_length = cpu_to_le32(FW_ERROR_BUFFER_SIZE);
+	fib->hbacmd_size = sizeof(*rst);
+
+	return HBA_IU_TYPE_SATA_REQ;
+}
+
+void aac_tmf_callback(void *context, struct fib *fibptr)
+{
+	struct aac_hba_resp *err =
+		&((struct aac_native_hba *)fibptr->hw_fib_va)->resp.err;
+	struct aac_hba_map_info *info = context;
+	int res;
+
+	switch (err->service_response) {
+	case HBA_RESP_SVCRES_TMF_REJECTED:
+		res = -1;
+		break;
+	case HBA_RESP_SVCRES_TMF_LUN_INVALID:
+		res = 0;
+		break;
+	case HBA_RESP_SVCRES_TMF_COMPLETE:
+	case HBA_RESP_SVCRES_TMF_SUCCEEDED:
+		res = 0;
+		break;
+	default:
+		res = -2;
+		break;
+	}
+	aac_fib_complete(fibptr);
+
+	info->reset_state = res;
+}
+
 /*
- *	aac_eh_reset	- Reset command handling
+ *	aac_eh_dev_reset	- Device reset command handling
  *	@scsi_cmd:	SCSI command block causing the reset
  *
  */
-static int aac_eh_reset(struct scsi_cmnd* cmd)
+static int aac_eh_dev_reset(struct scsi_cmnd *cmd)
 {
+	struct scsi_device * dev = cmd->device;
+	struct Scsi_Host * host = dev->host;
+	struct aac_dev * aac = (struct aac_dev *)host->hostdata;
+	struct aac_hba_map_info *info;
+	int count;
+	u32 bus, cid;
+	struct fib *fib;
+	int ret = FAILED;
+	int status;
+	u8 command;
+
+	bus = aac_logical_to_phys(scmd_channel(cmd));
+	cid = scmd_id(cmd);
+	info = &aac->hba_map[bus][cid];
+	if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
+	    info->devtype != AAC_DEVTYPE_NATIVE_RAW)
+		return FAILED;
+
+	if (info->reset_state > 0)
+		return FAILED;
+
+	pr_err("%s: Host adapter reset request. SCSI hang ?\n",
+	       AAC_DRIVERNAME);
+
+	fib = aac_fib_alloc(aac);
+	if (!fib)
+		return ret;
+
+	/* start a HBA_TMF_LUN_RESET TMF request */
+	command = aac_eh_tmf_lun_reset_fib(info, fib, dev->lun);
+
+	info->reset_state = 1;
+
+	status = aac_hba_send(command, fib,
+			      (fib_callback) aac_tmf_callback,
+			      (void *) info);
+
+	/* Wait up to 15 seconds for completion */
+	for (count = 0; count < 15; ++count) {
+		if (info->reset_state == 0) {
+			ret = info->reset_state == 0 ? SUCCESS : FAILED;
+			break;
+		}
+		msleep(1000);
+	}
+
+	return ret;
+}
+
+/*
+ *	aac_eh_target_reset	- Target reset command handling
+ *	@scsi_cmd:	SCSI command block causing the reset
+ *
+ */
+static int aac_eh_target_reset(struct scsi_cmnd *cmd)
+{
+	struct scsi_device * dev = cmd->device;
+	struct Scsi_Host * host = dev->host;
+	struct aac_dev * aac = (struct aac_dev *)host->hostdata;
+	struct aac_hba_map_info *info;
+	int count;
+	u32 bus, cid;
+	int ret = FAILED;
+	struct fib *fib;
+	int status;
+	u8 command;
+
+	bus = aac_logical_to_phys(scmd_channel(cmd));
+	cid = scmd_id(cmd);
+	info = &aac->hba_map[bus][cid];
+	if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
+	    info->devtype != AAC_DEVTYPE_NATIVE_RAW)
+		return FAILED;
+
+	if (info->reset_state > 0)
+		return FAILED;
+
+	pr_err("%s: Host adapter reset request. SCSI hang ?\n",
+	       AAC_DRIVERNAME);
+
+	fib = aac_fib_alloc(aac);
+	if (!fib)
+		return ret;
+
+
+	/* already tried, start a hard reset now */
+	command = aac_eh_tmf_hard_reset_fib(info, fib);
+
+	info->reset_state = 2;
+
+	status = aac_hba_send(command, fib,
+			      (fib_callback) aac_tmf_callback,
+			      (void *) info);
+
+	/* Wait up to 15 seconds for completion */
+	for (count = 0; count < 15; ++count) {
+		if (info->reset_state <= 0) {
+			ret = info->reset_state == 0 ? SUCCESS : FAILED;
+			break;
+		}
+		msleep(1000);
+	}
+
+	return ret;
+}
+
+/*
+ *	aac_eh_bus_reset	- Bus reset command handling
+ *	@scsi_cmd:	SCSI command block causing the reset
+ *
+ */
+static int aac_eh_bus_reset(struct scsi_cmnd* cmd)
+{
 	struct scsi_device * dev = cmd->device;
 	struct Scsi_Host * host = dev->host;
 	struct aac_dev * aac = (struct aac_dev *)host->hostdata;
 	int count;
-	u32 bus, cid;
-	int ret = FAILED;
+	u32 cmd_bus;
 	int status = 0;
-	__le32 supported_options2 = 0;
-	bool is_mu_reset;
-	bool is_ignore_reset;
-	bool is_doorbell_reset;
 
 
-	bus = aac_logical_to_phys(scmd_channel(cmd));
-	cid = scmd_id(cmd);
-	if (bus < AAC_MAX_BUSES && cid < AAC_MAX_TARGETS &&
-		aac->hba_map[bus][cid].devtype == AAC_DEVTYPE_NATIVE_RAW) {
-		struct fib *fib;
-		int status;
-		u64 address;
-		u8 command;
-
-		pr_err("%s: Host adapter reset request. SCSI hang ?\n",
-		 AAC_DRIVERNAME);
-
-		fib = aac_fib_alloc(aac);
-		if (!fib)
-			return ret;
-
-
-		if (aac->hba_map[bus][cid].reset_state == 0) {
-			struct aac_hba_tm_req *tmf;
-
-			/* start a HBA_TMF_LUN_RESET TMF request */
-			tmf = (struct aac_hba_tm_req *)fib->hw_fib_va;
-			memset(tmf, 0, sizeof(*tmf));
-			tmf->tmf = HBA_TMF_LUN_RESET;
-			tmf->it_nexus = aac->hba_map[bus][cid].rmw_nexus;
-			tmf->lun[1] = cmd->device->lun;
-
-			address = (u64)fib->hw_error_pa;
-			tmf->error_ptr_hi = cpu_to_le32
-				((u32)(address >> 32));
-			tmf->error_ptr_lo = cpu_to_le32
-				((u32)(address & 0xffffffff));
-			tmf->error_length = cpu_to_le32(FW_ERROR_BUFFER_SIZE);
-			fib->hbacmd_size = sizeof(*tmf);
-
-			command = HBA_IU_TYPE_SCSI_TM_REQ;
-			aac->hba_map[bus][cid].reset_state++;
-		} else if (aac->hba_map[bus][cid].reset_state >= 1) {
-			struct aac_hba_reset_req *rst;
-
-			/* already tried, start a hard reset now */
-			rst = (struct aac_hba_reset_req *)fib->hw_fib_va;
-			memset(rst, 0, sizeof(*rst));
-			/* reset_type is already zero... */
-			rst->it_nexus = aac->hba_map[bus][cid].rmw_nexus;
-
-			address = (u64)fib->hw_error_pa;
-			rst->error_ptr_hi = cpu_to_le32((u32)(address >> 32));
-			rst->error_ptr_lo = cpu_to_le32
-				((u32)(address & 0xffffffff));
-			rst->error_length = cpu_to_le32(FW_ERROR_BUFFER_SIZE);
-			fib->hbacmd_size = sizeof(*rst);
-
-			command = HBA_IU_TYPE_SATA_REQ;
-			aac->hba_map[bus][cid].reset_state = 0;
-		}
-		cmd->SCp.sent_command = 0;
-
-		status = aac_hba_send(command, fib,
-				(fib_callback) aac_hba_callback,
-				(void *) cmd);
-
-		/* Wait up to 15 seconds for completion */
-		for (count = 0; count < 15; ++count) {
-			if (cmd->SCp.sent_command) {
-				ret = SUCCESS;
-				break;
-			}
-			msleep(1000);
-		}
-
-		if (ret == SUCCESS)
-			goto out;
-
-	} else {
-
-		/* Mark the assoc. FIB to not complete, eh handler does this */
-		for (count = 0;
-			count < (host->can_queue + AAC_NUM_MGT_FIB);
-			++count) {
-			struct fib *fib = &aac->fibs[count];
-
-			if (fib->hw_fib_va->header.XferState &&
-				(fib->flags & FIB_CONTEXT_FLAG) &&
-				(fib->callback_data == cmd)) {
+	cmd_bus = aac_logical_to_phys(scmd_channel(cmd));
+	/* Mark the assoc. FIB to not complete, eh handler does this */
+	for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
+		struct fib *fib = &aac->fibs[count];
+
+		if (fib->hw_fib_va->header.XferState &&
+		    (fib->flags & FIB_CONTEXT_FLAG) &&
+		    (fib->flags & FIB_CONTEXT_FLAG_SCSI_CMD)) {
+			struct aac_hba_map_info *info;
+			u32 bus, cid;
+
+			cmd = (struct scsi_cmnd *)fib->callback_data;
+			bus = aac_logical_to_phys(scmd_channel(cmd));
+			if (bus != cmd_bus)
+				continue;
+			cid = scmd_id(cmd);
+			info = &aac->hba_map[bus][cid];
+			if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
+			    info->devtype != AAC_DEVTYPE_NATIVE_RAW) {
 				fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
 				cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
 			}
@@ -935,8 +1049,24 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
 	dev_err(&aac->pdev->dev, "Adapter health - %d\n", status);
 
 	count = get_num_of_incomplete_fibs(aac);
-	if (count == 0)
-		return SUCCESS;
+	return (count == 0) ? SUCCESS : FAILED;
+}
+
+/*
+ *	aac_eh_host_reset	- Host reset command handling
+ *	@scsi_cmd:	SCSI command block causing the reset
+ *
+ */
+int aac_eh_host_reset(struct scsi_cmnd *cmd)
+{
+	struct scsi_device * dev = cmd->device;
+	struct Scsi_Host * host = dev->host;
+	struct aac_dev * aac = (struct aac_dev *)host->hostdata;
+	int ret = FAILED;
+	__le32 supported_options2 = 0;
+	bool is_mu_reset;
+	bool is_ignore_reset;
+	bool is_doorbell_reset;
 
 	/*
 	 * Check if reset is supported by the firmware
@@ -954,11 +1084,24 @@ static int aac_eh_reset(struct scsi_cmnd* cmd)
 	    && aac_check_reset
 	    && (aac_check_reset != -1 || !is_ignore_reset)) {
 		/* Bypass wait for command quiesce */
-		aac_reset_adapter(aac, 2, IOP_HWSOFT_RESET);
+		if (aac_reset_adapter(aac, 2, IOP_HWSOFT_RESET) == 0)
+			ret = SUCCESS;
 	}
-	ret = SUCCESS;
-
-out:
+	/*
+	 * Reset EH state
+	 */
+	if (ret == SUCCESS) {
+		int bus, cid;
+		struct aac_hba_map_info *info;
+
+		for (bus = 0; bus < AAC_MAX_BUSES; bus++) {
+			for (cid = 0; cid < AAC_MAX_TARGETS; cid++) {
+				info = &aac->hba_map[bus][cid];
+				if (info->devtype == AAC_DEVTYPE_NATIVE_RAW)
+					info->reset_state = 0;
+			}
+		}
+	}
 	return ret;
 }
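The new device and target reset handlers above share one completion wait: poll info->reset_state, which aac_tmf_callback() drops to <= 0, for up to 15 seconds, then map the final state to a SCSI eh verdict. A sketch of that loop with the sleep stubbed out (SUCCESS/FAILED values as defined in scsi.h; the tick callback stands in for msleep(1000) plus the asynchronous TMF completion):

```c
#include <assert.h>

#define SUCCESS 0x2002	/* scsi.h eh return codes */
#define FAILED  0x2003

/* Poll reset_state up to 15 ticks; the TMF callback sets it to 0 on
 * success or a negative value on rejection/unknown response. */
static int wait_for_reset(int *reset_state, void (*tick)(int *))
{
	int count;

	for (count = 0; count < 15; ++count) {
		if (*reset_state <= 0)
			return *reset_state == 0 ? SUCCESS : FAILED;
		tick(reset_state);	/* stands in for msleep(1000) */
	}
	return FAILED;			/* timed out */
}

static void complete_ok(int *state)  { *state = 0; }
static void complete_err(int *state) { *state = -1; }
static void never(int *state)        { (void)state; }
```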
@@ -1382,7 +1525,10 @@ static struct scsi_host_template aac_driver_template = {
 	.change_queue_depth		= aac_change_queue_depth,
 	.sdev_attrs			= aac_dev_attrs,
 	.eh_abort_handler		= aac_eh_abort,
-	.eh_host_reset_handler		= aac_eh_reset,
+	.eh_device_reset_handler	= aac_eh_dev_reset,
+	.eh_target_reset_handler	= aac_eh_target_reset,
+	.eh_bus_reset_handler		= aac_eh_bus_reset,
+	.eh_host_reset_handler		= aac_eh_host_reset,
 	.can_queue			= AAC_NUM_IO_FIB,
 	.this_id			= MAXIMUM_NUM_CONTAINERS,
 	.sg_tablesize			= 16,
@@ -1457,7 +1603,7 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	/*
 	 * Only series 7 needs freset.
 	 */
-	 if (pdev->device == PMC_DEVICE_S7)
+	if (pdev->device == PMC_DEVICE_S7)
 		pdev->needs_freset = 1;
 
 	list_for_each_entry(aac, &aac_devices, entry) {
@@ -1140,6 +1140,9 @@ static void free_hard_reset_SCs(struct Scsi_Host *shpnt, Scsi_Cmnd **SCs)
 /*
  * Reset the bus
  *
+ * AIC-6260 has a hard reset (MRST signal), but apparently
+ * one cannot trigger it via software. So live with
+ * a soft reset; no-one seemed to have cared.
  */
 static int aha152x_bus_reset_host(struct Scsi_Host *shpnt)
 {
@@ -1222,15 +1225,6 @@ int aha152x_host_reset_host(struct Scsi_Host *shpnt)
 	return SUCCESS;
 }
 
-/*
- * Reset the host (bus and controller)
- *
- */
-static int aha152x_host_reset(Scsi_Cmnd *SCpnt)
-{
-	return aha152x_host_reset_host(SCpnt->device->host);
-}
-
 /*
  * Return the "logical geometry"
  *
@@ -2917,7 +2911,6 @@ static struct scsi_host_template aha152x_driver_template = {
 	.eh_abort_handler		= aha152x_abort,
 	.eh_device_reset_handler	= aha152x_device_reset,
 	.eh_bus_reset_handler		= aha152x_bus_reset,
-	.eh_host_reset_handler		= aha152x_host_reset,
 	.bios_param			= aha152x_biosparam,
 	.can_queue			= 1,
 	.this_id			= 7,
@@ -986,7 +986,7 @@ static struct isa_driver aha1542_isa_driver = {
 static int isa_registered;
 
 #ifdef CONFIG_PNP
-static struct pnp_device_id aha1542_pnp_ids[] = {
+static const struct pnp_device_id aha1542_pnp_ids[] = {
 	{ .id = "ADP1542" },
 	{ .id = "" }
 };
@@ -59,7 +59,8 @@ $(obj)/aic7xxx_seq.h: $(src)/aic7xxx.seq $(src)/aic7xxx.reg $(obj)/aicasm/aicasm
 			$(aicasm-7xxx-opts-y) -o $(obj)/aic7xxx_seq.h \
 			$(srctree)/$(src)/aic7xxx.seq
 
-$(aic7xxx-gen-y): $(obj)/aic7xxx_seq.h
+$(aic7xxx-gen-y): $(objtree)/$(obj)/aic7xxx_seq.h
 	@true
 else
 $(obj)/aic7xxx_reg_print.c: $(src)/aic7xxx_reg_print.c_shipped
 endif
@@ -76,7 +77,8 @@ $(obj)/aic79xx_seq.h: $(src)/aic79xx.seq $(src)/aic79xx.reg $(obj)/aicasm/aicasm
 			$(aicasm-79xx-opts-y) -o $(obj)/aic79xx_seq.h \
 			$(srctree)/$(src)/aic79xx.seq
 
-$(aic79xx-gen-y): $(obj)/aic79xx_seq.h
+$(aic79xx-gen-y): $(objtree)/$(obj)/aic79xx_seq.h
 	@true
 else
 $(obj)/aic79xx_reg_print.c: $(src)/aic79xx_reg_print.c_shipped
 endif
(one file's diff suppressed because it is too large)
@@ -234,6 +234,23 @@ ahd_selid_print(u_int regvalue, u_int *cur_col, u_int wrap)
 	    0x49, regvalue, cur_col, wrap));
 }
 
+static const ahd_reg_parse_entry_t SIMODE0_parse_table[] = {
+	{ "ENARBDO",		0x01, 0x01 },
+	{ "ENSPIORDY",		0x02, 0x02 },
+	{ "ENOVERRUN",		0x04, 0x04 },
+	{ "ENIOERR",		0x08, 0x08 },
+	{ "ENSELINGO",		0x10, 0x10 },
+	{ "ENSELDI",		0x20, 0x20 },
+	{ "ENSELDO",		0x40, 0x40 }
+};
+
+int
+ahd_simode0_print(u_int regvalue, u_int *cur_col, u_int wrap)
+{
+	return (ahd_print_register(SIMODE0_parse_table, 7, "SIMODE0",
+	    0x4b, regvalue, cur_col, wrap));
+}
+
 static const ahd_reg_parse_entry_t SSTAT0_parse_table[] = {
 	{ "ARBDO",		0x01, 0x01 },
 	{ "SPIORDY",		0x02, 0x02 },
@@ -252,23 +269,6 @@ ahd_sstat0_print(u_int regvalue, u_int *cur_col, u_int wrap)
 	    0x4b, regvalue, cur_col, wrap));
 }
 
-static const ahd_reg_parse_entry_t SIMODE0_parse_table[] = {
-	{ "ENARBDO",		0x01, 0x01 },
-	{ "ENSPIORDY",		0x02, 0x02 },
-	{ "ENOVERRUN",		0x04, 0x04 },
-	{ "ENIOERR",		0x08, 0x08 },
-	{ "ENSELINGO",		0x10, 0x10 },
-	{ "ENSELDI",		0x20, 0x20 },
-	{ "ENSELDO",		0x40, 0x40 }
-};
-
-int
-ahd_simode0_print(u_int regvalue, u_int *cur_col, u_int wrap)
-{
-	return (ahd_print_register(SIMODE0_parse_table, 7, "SIMODE0",
-	    0x4b, regvalue, cur_col, wrap));
-}
-
 static const ahd_reg_parse_entry_t SSTAT1_parse_table[] = {
 	{ "REQINIT",		0x01, 0x01 },
 	{ "STRB2FAST",		0x02, 0x02 },
@@ -7340,7 +7340,6 @@ ahc_dump_card_state(struct ahc_softc *ahc)
 		printk("\n");
 	}
 
-	ahc_platform_dump_card_state(ahc);
 	printk("\n<<<<<<<<<<<<<<<<< Dump Card State Ends >>>>>>>>>>>>>>>>>>\n");
 	ahc_outb(ahc, SCBPTR, saved_scbptr);
 	if (paused == 0)
@@ -2329,11 +2329,6 @@ done:
 	return (retval);
 }
 
-void
-ahc_platform_dump_card_state(struct ahc_softc *ahc)
-{
-}
-
 static void ahc_linux_set_width(struct scsi_target *starget, int width)
 {
 	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
@@ -688,7 +688,6 @@ void	ahc_done(struct ahc_softc*, struct scb*);
 void	ahc_send_async(struct ahc_softc *, char channel,
 		       u_int target, u_int lun, ac_code);
 void	ahc_print_path(struct ahc_softc *, struct scb *);
-void	ahc_platform_dump_card_state(struct ahc_softc *ahc);
 
 #ifdef CONFIG_PCI
 #define AHC_PCI_CONFIG 1
@@ -244,8 +244,6 @@ ahc_reg_print_t ahc_scb_tag_print;
 
 #define	SCSIDATH		0x07
 
-#define	STCNT			0x08
-
 #define	OPTIONMODE		0x08
 #define	OPTIONMODE_DEFAULTS	0x03
 #define	AUTORATEEN		0x80

@@ -257,6 +255,8 @@ ahc_reg_print_t ahc_scb_tag_print;
 #define	AUTO_MSGOUT_DE		0x02
 #define	DIS_MSGIN_DUALEDGE	0x01
 
+#define	STCNT			0x08
+
 #define	TARGCRCCNT		0x0a
 
 #define	CLRSINT0		0x0b

@@ -365,8 +365,6 @@ ahc_reg_print_t ahc_scb_tag_print;
 #define	ALTSTIM			0x20
 #define	DFLTTID			0x10
 
-#define	TARGID			0x1b
-
 #define	SPIOCAP			0x1b
 #define	SOFT1			0x80
 #define	SOFT0			0x40

@@ -377,12 +375,14 @@ ahc_reg_print_t ahc_scb_tag_print;
 #define	ROM			0x02
 #define	SSPIOCPS		0x01
 
+#define	TARGID			0x1b
+
 #define	BRDCTL			0x1d
 #define	BRDDAT7			0x80
 #define	BRDDAT6			0x40
 #define	BRDDAT5			0x20
-#define	BRDDAT4			0x10
 #define	BRDSTB			0x10
+#define	BRDDAT4			0x10
 #define	BRDDAT3			0x08
 #define	BRDCS			0x08
 #define	BRDDAT2			0x04

@@ -406,8 +406,8 @@ ahc_reg_print_t ahc_scb_tag_print;
 #define	DIAGLEDEN		0x80
 #define	DIAGLEDON		0x40
 #define	AUTOFLUSHDIS		0x20
-#define	ENAB40			0x08
 #define	SELBUSB			0x08
+#define	ENAB40			0x08
 #define	ENAB20			0x04
 #define	SELWIDE			0x02
 #define	XCVR			0x01

@@ -730,8 +730,8 @@ ahc_reg_print_t ahc_scb_tag_print;
 #define	SCB_BASE		0xa0
 
 #define	SCB_CDB_PTR		0xa0
-#define	SCB_RESIDUAL_DATACNT	0xa0
 #define	SCB_CDB_STORE		0xa0
+#define	SCB_RESIDUAL_DATACNT	0xa0
 
 #define	SCB_RESIDUAL_SGPTR	0xa4

@@ -756,8 +756,8 @@ ahc_reg_print_t ahc_scb_tag_print;
 
 #define	SCB_CONTROL		0xb8
 #define	SCB_TAG_TYPE		0x03
-#define	STATUS_RCVD		0x80
 #define	TARGET_SCB		0x80
+#define	STATUS_RCVD		0x80
 #define	DISCENB			0x40
 #define	TAG_ENB			0x20
 #define	MK_MESSAGE		0x10

@@ -872,40 +872,40 @@ ahc_reg_print_t ahc_scb_tag_print;
 #define	SG_CACHE_PRE		0xfc
 
 
-#define	TARGET_CMD_CMPLT	0xfe
 #define	MAX_OFFSET_ULTRA2	0x7f
 #define	MAX_OFFSET_16BIT	0x08
 #define	BUS_8_BIT		0x00
+#define	TARGET_CMD_CMPLT	0xfe
-#define	TID_SHIFT		0x04
 #define	STATUS_QUEUE_FULL	0x28
 #define	STATUS_BUSY		0x08
-#define	MAX_OFFSET_8BIT		0x0f
-#define	BUS_32_BIT		0x02
-#define	CCSGADDR_MAX		0x80
+#define	TID_SHIFT		0x04
 #define	SCB_DOWNLOAD_SIZE_64	0x30
+#define	MAX_OFFSET_8BIT		0x0f
 #define	HOST_MAILBOX_SHIFT	0x04
+#define	CCSGADDR_MAX		0x80
+#define	BUS_32_BIT		0x02
-#define	SG_SIZEOF		0x08
-#define	SEQ_MAILBOX_SHIFT	0x00
-#define	SCB_LIST_NULL		0xff
-#define	SCB_DOWNLOAD_SIZE	0x20
 #define	CMD_GROUP_CODE_SHIFT	0x05
 #define	CCSGRAM_MAXSEGS		0x10
+#define	SCB_LIST_NULL		0xff
+#define	SG_SIZEOF		0x08
+#define	SCB_DOWNLOAD_SIZE	0x20
+#define	SEQ_MAILBOX_SHIFT	0x00
 #define	TARGET_DATA_IN		0x01
-#define	HOST_MSG		0xff
-#define	MAX_OFFSET		0x7f
-#define	BUS_16_BIT		0x01
-#define	SCB_UPLOAD_SIZE		0x20
 #define	STACK_SIZE		0x04
+#define	SCB_UPLOAD_SIZE		0x20
+#define	MAX_OFFSET		0x7f
+#define	HOST_MSG		0xff
+#define	BUS_16_BIT		0x01
 
 
 /* Downloaded Constant Definitions */
 #define	INVERTED_CACHESIZE_MASK	0x03
-#define	SG_PREFETCH_ADDR_MASK	0x06
 #define	SG_PREFETCH_ALIGN_MASK	0x05
+#define	SG_PREFETCH_ADDR_MASK	0x06
 #define	QOUTFIFO_OFFSET		0x00
 #define	SG_PREFETCH_CNT		0x04
-#define	CACHESIZE_MASK		0x02
 #define	QINFIFO_OFFSET		0x01
+#define	CACHESIZE_MASK		0x02
 #define	DOWNLOAD_CONST_COUNT	0x07
@@ -70,7 +70,7 @@ static struct scsi_host_template aic94xx_sht = {
 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
 	.use_clustering		= ENABLE_CLUSTERING,
 	.eh_device_reset_handler	= sas_eh_device_reset_handler,
-	.eh_bus_reset_handler	= sas_eh_bus_reset_handler,
+	.eh_target_reset_handler	= sas_eh_target_reset_handler,
 	.target_destroy		= sas_target_destroy,
 	.ioctl			= sas_ioctl,
 	.track_queue_depth	= 1,

@@ -956,11 +956,11 @@ static int asd_scan_finished(struct Scsi_Host *shost, unsigned long time)
 	return 1;
 }
 
-static ssize_t asd_version_show(struct device_driver *driver, char *buf)
+static ssize_t version_show(struct device_driver *driver, char *buf)
 {
 	return snprintf(buf, PAGE_SIZE, "%s\n", ASD_DRIVER_VERSION);
 }
-static DRIVER_ATTR(version, S_IRUGO, asd_version_show, NULL);
+static DRIVER_ATTR_RO(version);
 
 static int asd_create_driver_attrs(struct device_driver *driver)
 {
@@ -190,7 +190,7 @@ static ssize_t arcmsr_sysfs_iop_message_clear(struct file *filp,
 	return 1;
 }
 
-static struct bin_attribute arcmsr_sysfs_message_read_attr = {
+static const struct bin_attribute arcmsr_sysfs_message_read_attr = {
 	.attr = {
 		.name = "mu_read",
 		.mode = S_IRUSR ,

@@ -199,7 +199,7 @@ static struct bin_attribute arcmsr_sysfs_message_read_attr = {
 	.read = arcmsr_sysfs_iop_message_read,
 };
 
-static struct bin_attribute arcmsr_sysfs_message_write_attr = {
+static const struct bin_attribute arcmsr_sysfs_message_write_attr = {
 	.attr = {
 		.name = "mu_write",
 		.mode = S_IWUSR,

@@ -208,7 +208,7 @@ static struct bin_attribute arcmsr_sysfs_message_write_attr = {
 	.write = arcmsr_sysfs_iop_message_write,
 };
 
-static struct bin_attribute arcmsr_sysfs_message_clear_attr = {
+static const struct bin_attribute arcmsr_sysfs_message_clear_attr = {
 	.attr = {
 		.name = "mu_clear",
 		.mode = S_IWUSR,
@@ -2725,23 +2725,24 @@ int acornscsi_abort(struct scsi_cmnd *SCpnt)
  * Params  : SCpnt - command causing reset
  * Returns : one of SCSI_RESET_ macros
  */
-int acornscsi_bus_reset(struct scsi_cmnd *SCpnt)
+int acornscsi_host_reset(struct Scsi_Host *shpnt)
 {
-    AS_Host *host = (AS_Host *)SCpnt->device->host->hostdata;
+    AS_Host *host = (AS_Host *)shpnt->hostdata;
     struct scsi_cmnd *SCptr;
 
     host->stats.resets += 1;
 
 #if (DEBUG & DEBUG_RESET)
     {
-	int asr, ssr;
+	int asr, ssr, devidx;
 
 	asr = sbic_arm_read(host, SBIC_ASR);
 	ssr = sbic_arm_read(host, SBIC_SSR);
 
 	printk(KERN_WARNING "acornscsi_reset: ");
 	print_sbic_status(asr, ssr, host->scsi.phase);
-	acornscsi_dumplog(host, SCpnt->device->id);
+	for (devidx = 0; devidx < 9; devidx ++) {
+	    acornscsi_dumplog(host, devidx);
+	}
     }
 #endif

@@ -2884,7 +2885,7 @@ static struct scsi_host_template acornscsi_template = {
 	.info			= acornscsi_info,
 	.queuecommand		= acornscsi_queuecmd,
 	.eh_abort_handler	= acornscsi_abort,
-	.eh_bus_reset_handler	= acornscsi_bus_reset,
+	.eh_host_reset_handler	= acornscsi_host_reset,
 	.can_queue		= 16,
 	.this_id		= 7,
 	.sg_tablesize		= SG_ALL,
@@ -216,7 +216,7 @@ static struct scsi_host_template cumanascsi_template = {
 	.info			= cumanascsi_info,
 	.queuecommand		= cumanascsi_queue_command,
 	.eh_abort_handler	= NCR5380_abort,
-	.eh_bus_reset_handler	= NCR5380_bus_reset,
+	.eh_host_reset_handler	= NCR5380_host_reset,
 	.can_queue		= 16,
 	.this_id		= 7,
 	.sg_tablesize		= SG_ALL,
@@ -105,7 +105,7 @@ static struct scsi_host_template oakscsi_template = {
 	.info			= oakscsi_info,
 	.queuecommand		= oakscsi_queue_command,
 	.eh_abort_handler	= NCR5380_abort,
-	.eh_bus_reset_handler	= NCR5380_bus_reset,
+	.eh_host_reset_handler	= NCR5380_host_reset,
 	.can_queue		= 16,
 	.this_id		= 7,
 	.sg_tablesize		= SG_ALL,
@@ -671,7 +671,7 @@ static void atari_scsi_falcon_reg_write(unsigned int reg, u8 value)
 
 #include "NCR5380.c"
 
-static int atari_scsi_bus_reset(struct scsi_cmnd *cmd)
+static int atari_scsi_host_reset(struct scsi_cmnd *cmd)
 {
 	int rv;
 	unsigned long flags;

@@ -688,7 +688,7 @@ static int atari_scsi_bus_reset(struct scsi_cmnd *cmd)
 		atari_dma_orig_addr = NULL;
 	}
 
-	rv = NCR5380_bus_reset(cmd);
+	rv = NCR5380_host_reset(cmd);
 
 	/* The 5380 raises its IRQ line while _RST is active but the ST DMA
 	 * "lock" has been released so this interrupt may end up handled by

@@ -711,7 +711,7 @@ static struct scsi_host_template atari_scsi_template = {
 	.info			= atari_scsi_info,
 	.queuecommand		= atari_scsi_queue_command,
 	.eh_abort_handler	= atari_scsi_abort,
-	.eh_bus_reset_handler	= atari_scsi_bus_reset,
+	.eh_host_reset_handler	= atari_scsi_host_reset,
 	.this_id		= 7,
 	.cmd_per_lun		= 2,
 	.use_clustering		= DISABLE_CLUSTERING,
@@ -82,8 +82,8 @@ struct iscsi_cls_session *beiscsi_session_create(struct iscsi_endpoint *ep,
 		return NULL;
 	sess = cls_session->dd_data;
 	beiscsi_sess = sess->dd_data;
-	beiscsi_sess->bhs_pool =  pci_pool_create("beiscsi_bhs_pool",
-						   phba->pcidev,
+	beiscsi_sess->bhs_pool =  dma_pool_create("beiscsi_bhs_pool",
+						   &phba->pcidev->dev,
 						   sizeof(struct be_cmd_bhs),
 						   64, 0);
 	if (!beiscsi_sess->bhs_pool)

@@ -108,7 +108,7 @@ void beiscsi_session_destroy(struct iscsi_cls_session *cls_session)
 	struct beiscsi_session *beiscsi_sess = sess->dd_data;
 
 	printk(KERN_INFO "In beiscsi_session_destroy\n");
-	pci_pool_destroy(beiscsi_sess->bhs_pool);
+	dma_pool_destroy(beiscsi_sess->bhs_pool);
 	iscsi_session_teardown(cls_session);
 }
 
@@ -4257,7 +4257,7 @@ static void beiscsi_cleanup_task(struct iscsi_task *task)
 	pwrb_context = &phwi_ctrlr->wrb_context[cri_index];
 
 	if (io_task->cmd_bhs) {
-		pci_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
+		dma_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
 			      io_task->bhs_pa.u.a64.address);
 		io_task->cmd_bhs = NULL;
 		task->hdr = NULL;

@@ -4374,7 +4374,7 @@ static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
 	struct beiscsi_session *beiscsi_sess = beiscsi_conn->beiscsi_sess;
 	dma_addr_t paddr;
 
-	io_task->cmd_bhs = pci_pool_alloc(beiscsi_sess->bhs_pool,
+	io_task->cmd_bhs = dma_pool_alloc(beiscsi_sess->bhs_pool,
 					  GFP_ATOMIC, &paddr);
 	if (!io_task->cmd_bhs)
 		return -ENOMEM;

@@ -4501,7 +4501,7 @@ free_hndls:
 	if (io_task->pwrb_handle)
 		free_wrb_handle(phba, pwrb_context, io_task->pwrb_handle);
 	io_task->pwrb_handle = NULL;
-	pci_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
+	dma_pool_free(beiscsi_sess->bhs_pool, io_task->cmd_bhs,
 		      io_task->bhs_pa.u.a64.address);
 	io_task->cmd_bhs = NULL;
 	return -ENOMEM;
@@ -438,7 +438,7 @@ struct beiscsi_hba {
 		      test_bit(BEISCSI_HBA_ONLINE, &phba->state))
 
 struct beiscsi_session {
-	struct pci_pool *bhs_pool;
+	struct dma_pool *bhs_pool;
 };
 
 /**
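The be2iscsi, csio, and related hunks above swap the deprecated `pci_pool_*` wrappers for the equivalent `dma_pool_*` API; the calls map one-to-one (create, alloc, free, destroy), with the only signature change being that `dma_pool_create()` takes a `struct device *` instead of a `struct pci_dev *`. A minimal userspace sketch of that shared lifecycle, with hypothetical `obj_pool_*` names, since real DMA-coherent allocations and `dma_addr_t` handles cannot be modeled with plain `malloc()`:

```c
/* Userspace sketch of the create/alloc/free/destroy lifecycle shared by
 * the old pci_pool_* and the new dma_pool_* kernel APIs.  The obj_pool_*
 * names are hypothetical stand-ins; a real dma_pool hands back
 * DMA-coherent memory plus a dma_addr_t for the device. */
#include <assert.h>
#include <stdlib.h>

struct obj_pool {
	size_t size;	/* fixed object size, as passed to dma_pool_create() */
};

static struct obj_pool *obj_pool_create(size_t size)
{
	struct obj_pool *p = malloc(sizeof(*p));

	if (p)
		p->size = size;
	return p;
}

static void *obj_pool_alloc(struct obj_pool *p)
{
	return calloc(1, p->size);	/* analog of dma_pool_alloc() */
}

static void obj_pool_free(struct obj_pool *p, void *obj)
{
	(void)p;
	free(obj);			/* analog of dma_pool_free() */
}

static void obj_pool_destroy(struct obj_pool *p)
{
	free(p);			/* analog of dma_pool_destroy() */
}
```

The conversion in the hunks above is mechanical precisely because every allocation site already paired each alloc with a free against the same pool handle.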
@@ -373,32 +373,28 @@ out:
 }
 
 /*
- * Scsi_Host template entry, resets the bus and abort all commands.
+ * Scsi_Host template entry, resets the target and abort all commands.
 */
 static int
-bfad_im_reset_bus_handler(struct scsi_cmnd *cmnd)
+bfad_im_reset_target_handler(struct scsi_cmnd *cmnd)
 {
 	struct Scsi_Host *shost = cmnd->device->host;
+	struct scsi_target *starget = scsi_target(cmnd->device);
 	struct bfad_im_port_s *im_port =
 				(struct bfad_im_port_s *) shost->hostdata[0];
 	struct bfad_s         *bfad = im_port->bfad;
 	struct bfad_itnim_s   *itnim;
 	unsigned long   flags;
-	u32        i, rc, err_cnt = 0;
+	u32        rc, rtn = FAILED;
 	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
 	enum bfi_tskim_status task_status;
 
 	spin_lock_irqsave(&bfad->bfad_lock, flags);
-	for (i = 0; i < MAX_FCP_TARGET; i++) {
-		itnim = bfad_get_itnim(im_port, i);
-		if (itnim) {
-			cmnd->SCp.ptr = (char *)&wq;
-			rc = bfad_im_target_reset_send(bfad, cmnd, itnim);
-			if (rc != BFA_STATUS_OK) {
-				err_cnt++;
-				continue;
-			}
-
+	itnim = bfad_get_itnim(im_port, starget->id);
+	if (itnim) {
+		cmnd->SCp.ptr = (char *)&wq;
+		rc = bfad_im_target_reset_send(bfad, cmnd, itnim);
+		if (rc == BFA_STATUS_OK) {
 			/* wait target reset to complete */
 			spin_unlock_irqrestore(&bfad->bfad_lock, flags);
 			wait_event(wq, test_bit(IO_DONE_BIT,

@@ -406,20 +402,17 @@ bfad_im_reset_bus_handler(struct scsi_cmnd *cmnd)
 			spin_lock_irqsave(&bfad->bfad_lock, flags);
 
 			task_status = cmnd->SCp.Status >> 1;
-			if (task_status != BFI_TSKIM_STS_OK) {
+			if (task_status != BFI_TSKIM_STS_OK)
 				BFA_LOG(KERN_ERR, bfad, bfa_log_level,
 					"target reset failure,"
 					" status: %d\n", task_status);
-				err_cnt++;
-			}
+			else
+				rtn = SUCCESS;
 		}
 	}
 	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
 
-	if (err_cnt)
-		return FAILED;
-
-	return SUCCESS;
+	return rtn;
 }
 
 /*

@@ -816,7 +809,7 @@ struct scsi_host_template bfad_im_scsi_host_template = {
 	.eh_timed_out = fc_eh_timed_out,
 	.eh_abort_handler = bfad_im_abort_handler,
 	.eh_device_reset_handler = bfad_im_reset_lun_handler,
-	.eh_bus_reset_handler = bfad_im_reset_bus_handler,
+	.eh_target_reset_handler = bfad_im_reset_target_handler,
 
 	.slave_alloc = bfad_im_slave_alloc,
 	.slave_configure = bfad_im_slave_configure,

@@ -839,7 +832,7 @@ struct scsi_host_template bfad_im_vport_template = {
 	.eh_timed_out = fc_eh_timed_out,
 	.eh_abort_handler = bfad_im_abort_handler,
 	.eh_device_reset_handler = bfad_im_reset_lun_handler,
-	.eh_bus_reset_handler = bfad_im_reset_bus_handler,
+	.eh_target_reset_handler = bfad_im_reset_target_handler,
 
 	.slave_alloc = bfad_im_slave_alloc,
 	.slave_configure = bfad_im_slave_configure,
@@ -539,7 +539,6 @@ void bnx2fc_init_task(struct bnx2fc_cmd *io_req,
 void bnx2fc_add_2_sq(struct bnx2fc_rport *tgt, u16 xid);
 void bnx2fc_ring_doorbell(struct bnx2fc_rport *tgt);
 int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd);
-int bnx2fc_eh_host_reset(struct scsi_cmnd *sc_cmd);
 int bnx2fc_eh_target_reset(struct scsi_cmnd *sc_cmd);
 int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd);
 void bnx2fc_rport_event_handler(struct fc_lport *lport,
@@ -105,6 +105,7 @@ do {							\
 static struct class * ch_sysfs_class;
 
 typedef struct {
+	struct kref         ref;
 	struct list_head    list;
 	int                 minor;
 	char                name[8];

@@ -563,13 +564,23 @@ static int ch_gstatus(scsi_changer *ch, int type, unsigned char __user *dest)
 
 /* ------------------------------------------------------------------------ */
 
+static void ch_destroy(struct kref *ref)
+{
+	scsi_changer *ch = container_of(ref, scsi_changer, ref);
+
+	kfree(ch->dt);
+	kfree(ch);
+}
+
 static int
 ch_release(struct inode *inode, struct file *file)
 {
 	scsi_changer *ch = file->private_data;
 
 	scsi_device_put(ch->device);
 	ch->device = NULL;
 	file->private_data = NULL;
+	kref_put(&ch->ref, ch_destroy);
 	return 0;
 }
 

@@ -588,6 +599,7 @@ ch_open(struct inode *inode, struct file *file)
 		mutex_unlock(&ch_mutex);
 		return -ENXIO;
 	}
+	kref_get(&ch->ref);
 	spin_unlock(&ch_index_lock);
 
 	file->private_data = ch;

@@ -935,8 +947,11 @@ static int ch_probe(struct device *dev)
 	}
 
 	mutex_init(&ch->lock);
+	kref_init(&ch->ref);
 	ch->device = sd;
-	ch_readconfig(ch);
+	ret = ch_readconfig(ch);
+	if (ret)
+		goto destroy_dev;
 	if (init)
 		ch_init_elem(ch);
 

@@ -944,6 +959,8 @@ static int ch_probe(struct device *dev)
 	sdev_printk(KERN_INFO, sd, "Attached scsi changer %s\n", ch->name);
 
 	return 0;
+destroy_dev:
+	device_destroy(ch_sysfs_class, MKDEV(SCSI_CHANGER_MAJOR, ch->minor));
 remove_idr:
 	idr_remove(&ch_index_idr, ch->minor);
 free_ch:

@@ -960,8 +977,7 @@ static int ch_remove(struct device *dev)
 	spin_unlock(&ch_index_lock);
 
 	device_destroy(ch_sysfs_class, MKDEV(SCSI_CHANGER_MAJOR,ch->minor));
-	kfree(ch->dt);
-	kfree(ch);
+	kref_put(&ch->ref, ch_destroy);
 	return 0;
 }
 
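The ch driver hunks above add a `struct kref` so the changer object is freed only when both the device teardown path (`ch_remove`) and the last open file handle (`ch_release`) have dropped their references. A userspace sketch of that refcount pattern follows; the names (`changer_*`) are hypothetical, and the kernel's real `struct kref` uses an atomic `refcount_t` rather than a plain `int`:

```c
/* Minimal userspace sketch of the kref pattern adopted by the ch driver
 * above: probe takes the initial reference, each open takes another, and
 * the release callback runs only when the last reference is put. */
#include <assert.h>
#include <stdlib.h>

struct kref { int refcount; };	/* kernel version is atomic */

static void kref_init(struct kref *k) { k->refcount = 1; }
static void kref_get(struct kref *k)  { k->refcount++; }

/* Returns 1 if this put released the final reference. */
static int kref_put(struct kref *k, void (*release)(struct kref *))
{
	if (--k->refcount == 0) {
		release(k);
		return 1;
	}
	return 0;
}

struct changer { struct kref ref; };	/* stand-in for scsi_changer */

static int destroyed;

static void changer_destroy(struct kref *k)
{
	/* the kernel uses container_of(); here ref is the first member */
	struct changer *ch = (struct changer *)k;

	free(ch);
	destroyed = 1;
}
```

This is exactly why the `kfree(ch->dt); kfree(ch);` pair moves out of `ch_remove()` and into the release callback: whichever of `ch_release()` or `ch_remove()` runs last performs the actual free.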
@@ -465,7 +465,7 @@ struct csio_hw {
 	struct csio_pport	pport[CSIO_MAX_PPORTS]; /* Ports (XGMACs) */
 	struct csio_hw_params	params;			/* Hw parameters */
 
-	struct pci_pool		*scsi_pci_pool;		/* PCI pool for SCSI */
+	struct dma_pool		*scsi_dma_pool;		/* DMA pool for SCSI */
 	mempool_t		*mb_mempool;		/* Mailbox memory pool*/
 	mempool_t		*rnode_mempool;		/* rnode memory pool */
 
@@ -485,9 +485,10 @@ csio_resource_alloc(struct csio_hw *hw)
 	if (!hw->rnode_mempool)
 		goto err_free_mb_mempool;
 
-	hw->scsi_pci_pool = pci_pool_create("csio_scsi_pci_pool", hw->pdev,
-					    CSIO_SCSI_RSP_LEN, 8, 0);
-	if (!hw->scsi_pci_pool)
+	hw->scsi_dma_pool = dma_pool_create("csio_scsi_dma_pool",
+					    &hw->pdev->dev, CSIO_SCSI_RSP_LEN,
+					    8, 0);
+	if (!hw->scsi_dma_pool)
 		goto err_free_rn_pool;
 
 	return 0;

@@ -505,8 +506,8 @@ err:
 static void
 csio_resource_free(struct csio_hw *hw)
 {
-	pci_pool_destroy(hw->scsi_pci_pool);
-	hw->scsi_pci_pool = NULL;
+	dma_pool_destroy(hw->scsi_dma_pool);
+	hw->scsi_dma_pool = NULL;
 	mempool_destroy(hw->rnode_mempool);
 	hw->rnode_mempool = NULL;
 	mempool_destroy(hw->mb_mempool);
@@ -2445,7 +2445,7 @@ csio_scsim_init(struct csio_scsim *scm, struct csio_hw *hw)
 
 		/* Allocate Dma buffers for Response Payload */
 		dma_buf = &ioreq->dma_buf;
-		dma_buf->vaddr = pci_pool_alloc(hw->scsi_pci_pool, GFP_KERNEL,
+		dma_buf->vaddr = dma_pool_alloc(hw->scsi_dma_pool, GFP_KERNEL,
 						&dma_buf->paddr);
 		if (!dma_buf->vaddr) {
 			csio_err(hw,

@@ -2485,7 +2485,7 @@ free_ioreq:
 		ioreq = (struct csio_ioreq *)tmp;
 
 		dma_buf = &ioreq->dma_buf;
-		pci_pool_free(hw->scsi_pci_pool, dma_buf->vaddr,
+		dma_pool_free(hw->scsi_dma_pool, dma_buf->vaddr,
 			      dma_buf->paddr);
 
 		kfree(ioreq);

@@ -2516,7 +2516,7 @@ csio_scsim_exit(struct csio_scsim *scm)
 		ioreq = (struct csio_ioreq *)tmp;
 
 		dma_buf = &ioreq->dma_buf;
-		pci_pool_free(scm->hw->scsi_pci_pool, dma_buf->vaddr,
+		dma_pool_free(scm->hw->scsi_dma_pool, dma_buf->vaddr,
 			      dma_buf->paddr);
 
 		kfree(ioreq);
@@ -585,19 +585,21 @@ static struct cxgbi_sock *cxgbi_sock_create(struct cxgbi_device *cdev)
 
 static struct rtable *find_route_ipv4(struct flowi4 *fl4,
 				      __be32 saddr, __be32 daddr,
-				      __be16 sport, __be16 dport, u8 tos)
+				      __be16 sport, __be16 dport, u8 tos,
+				      int ifindex)
 {
 	struct rtable *rt;
 
 	rt = ip_route_output_ports(&init_net, fl4, NULL, daddr, saddr,
-				   dport, sport, IPPROTO_TCP, tos, 0);
+				   dport, sport, IPPROTO_TCP, tos, ifindex);
 	if (IS_ERR(rt))
 		return NULL;
 
 	return rt;
 }
 
-static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr)
+static struct cxgbi_sock *
+cxgbi_check_route(struct sockaddr *dst_addr, int ifindex)
 {
 	struct sockaddr_in *daddr = (struct sockaddr_in *)dst_addr;
 	struct dst_entry *dst;

@@ -611,7 +613,8 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr)
 	int port = 0xFFFF;
 	int err = 0;
 
-	rt = find_route_ipv4(&fl4, 0, daddr->sin_addr.s_addr, 0, daddr->sin_port, 0);
+	rt = find_route_ipv4(&fl4, 0, daddr->sin_addr.s_addr, 0,
+			     daddr->sin_port, 0, ifindex);
 	if (!rt) {
 		pr_info("no route to ipv4 0x%x, port %u.\n",
 			be32_to_cpu(daddr->sin_addr.s_addr),

@@ -693,11 +696,13 @@ err_out:
 
 #if IS_ENABLED(CONFIG_IPV6)
 static struct rt6_info *find_route_ipv6(const struct in6_addr *saddr,
-					const struct in6_addr *daddr)
+					const struct in6_addr *daddr,
+					int ifindex)
 {
 	struct flowi6 fl;
 
 	memset(&fl, 0, sizeof(fl));
+	fl.flowi6_oif = ifindex;
 	if (saddr)
 		memcpy(&fl.saddr, saddr, sizeof(struct in6_addr));
 	if (daddr)

@@ -705,7 +710,8 @@ static struct rt6_info *find_route_ipv6(const struct in6_addr *saddr,
 	return (struct rt6_info *)ip6_route_output(&init_net, NULL, &fl);
 }
 
-static struct cxgbi_sock *cxgbi_check_route6(struct sockaddr *dst_addr)
+static struct cxgbi_sock *
+cxgbi_check_route6(struct sockaddr *dst_addr, int ifindex)
 {
 	struct sockaddr_in6 *daddr6 = (struct sockaddr_in6 *)dst_addr;
 	struct dst_entry *dst;

@@ -719,7 +725,7 @@ static struct cxgbi_sock *cxgbi_check_route6(struct sockaddr *dst_addr)
 	int port = 0xFFFF;
 	int err = 0;
 
-	rt = find_route_ipv6(NULL, &daddr6->sin6_addr);
+	rt = find_route_ipv6(NULL, &daddr6->sin6_addr, ifindex);
 
 	if (!rt) {
 		pr_info("no route to ipv6 %pI6 port %u\n",

@@ -2536,6 +2542,7 @@ struct iscsi_endpoint *cxgbi_ep_connect(struct Scsi_Host *shost,
 	struct cxgbi_endpoint *cep;
 	struct cxgbi_hba *hba = NULL;
 	struct cxgbi_sock *csk;
+	int ifindex = 0;
 	int err = -EINVAL;
 
 	log_debug(1 << CXGBI_DBG_ISCSI | 1 << CXGBI_DBG_SOCK,

@@ -2548,13 +2555,15 @@ struct iscsi_endpoint *cxgbi_ep_connect(struct Scsi_Host *shost,
 			pr_info("shost 0x%p, priv NULL.\n", shost);
 			goto err_out;
 		}
+
+		ifindex = hba->ndev->ifindex;
 	}
 
 	if (dst_addr->sa_family == AF_INET) {
-		csk = cxgbi_check_route(dst_addr);
+		csk = cxgbi_check_route(dst_addr, ifindex);
 #if IS_ENABLED(CONFIG_IPV6)
 	} else if (dst_addr->sa_family == AF_INET6) {
-		csk = cxgbi_check_route6(dst_addr);
+		csk = cxgbi_check_route6(dst_addr, ifindex);
 #endif
 	} else {
 		pr_info("address family 0x%x NOT supported.\n",
@@ -820,8 +820,7 @@ static void term_afu(struct cxlflash_cfg *cfg)
 	for (k = cfg->afu->num_hwqs - 1; k >= 0; k--)
 		term_intr(cfg, UNMAP_THREE, k);
 
-	if (cfg->afu)
-		stop_afu(cfg);
+	stop_afu(cfg);
 
 	for (k = cfg->afu->num_hwqs - 1; k >= 0; k--)
 		term_mc(cfg, k);
@@ -1390,6 +1390,7 @@ static int cxlflash_disk_attach(struct scsi_device *sdev,
 	if (unlikely(!ctxi)) {
 		dev_err(dev, "%s: Failed to create context ctxid=%d\n",
 			__func__, ctxid);
+		rc = -ENOMEM;
 		goto err;
 	}
 

@@ -1650,6 +1651,7 @@ static int cxlflash_afu_recover(struct scsi_device *sdev,
 	u64 ctxid = DECODE_CTXID(recover->context_id),
 	    rctxid = recover->context_id;
 	long reg;
+	bool locked = true;
 	int lretry = 20; /* up to 2 seconds */
 	int new_adap_fd = -1;
 	int rc = 0;

@@ -1658,8 +1660,11 @@ static int cxlflash_afu_recover(struct scsi_device *sdev,
 	up_read(&cfg->ioctl_rwsem);
 	rc = mutex_lock_interruptible(mutex);
 	down_read(&cfg->ioctl_rwsem);
-	if (rc)
+	if (rc) {
+		locked = false;
 		goto out;
+	}
 
 	rc = check_state(cfg);
 	if (rc) {
 		dev_err(dev, "%s: Failed state rc=%d\n", __func__, rc);

@@ -1693,8 +1698,10 @@ retry_recover:
 			mutex_unlock(mutex);
 			msleep(100);
 			rc = mutex_lock_interruptible(mutex);
-			if (rc)
+			if (rc) {
+				locked = false;
 				goto out;
+			}
 			goto retry_recover;
 		}
 

@@ -1738,7 +1745,8 @@ retry_recover:
 out:
 	if (likely(ctxi))
 		put_context(ctxi);
-	mutex_unlock(mutex);
+	if (locked)
+		mutex_unlock(mutex);
 	atomic_dec_if_positive(&cfg->recovery_threads);
 	return rc;
 }
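The cxlflash hunks above record whether `mutex_lock_interruptible()` actually succeeded (`bool locked`) so the shared `out:` cleanup path never unlocks a mutex the thread does not hold. A userspace sketch of the same pattern, with the interruptible-lock failure injected artificially (`try_lock`, `recover`, and the return codes are hypothetical stand-ins):

```c
/* Sketch of the cxlflash fix above: when a lock attempt can fail (the
 * kernel's mutex_lock_interruptible() can return -EINTR/-ERESTARTSYS),
 * track whether the lock is held so a shared goto-out cleanup path only
 * unlocks when appropriate. */
#include <assert.h>
#include <stdbool.h>

static int lock_depth;			/* stand-in for a mutex */

static int try_lock(bool fail)		/* mutex_lock_interruptible analog */
{
	if (fail)
		return -1;		/* like -EINTR */
	lock_depth++;
	return 0;
}

static void unlock(void)
{
	assert(lock_depth == 1);	/* catches an unbalanced unlock */
	lock_depth--;
}

static int recover(bool simulate_lock_failure)
{
	bool locked = true;
	int rc = try_lock(simulate_lock_failure);

	if (rc) {
		locked = false;
		goto out;
	}

	/* ... recovery work would run here ... */

out:
	if (locked)	/* the fix: only unlock a mutex we actually hold */
		unlock();
	return rc;
}
```

Without the flag, the failure path would fall through to an unconditional unlock of a mutex that was never acquired, which for a kernel mutex is undefined behavior.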
@@ -694,11 +694,7 @@ static int shrink_lxt(struct afu *afu,
 	/* Free LBAs allocated to freed chunks */
 	mutex_lock(&blka->mutex);
 	for (i = delta - 1; i >= 0; i--) {
-		/* Mask the higher 48 bits before shifting, even though
-		 * it is a noop
-		 */
-		aun = (lxt_old[my_new_size + i].rlba_base & SISL_ASTATUS_MASK);
-		aun = (aun >> MC_CHUNK_SHIFT);
+		aun = lxt_old[my_new_size + i].rlba_base >> MC_CHUNK_SHIFT;
 		if (needs_ws)
 			write_same16(sdev, aun, MC_CHUNK_SIZE);
 		ba_free(&blka->ba_lun, aun);
@@ -58,7 +58,7 @@ static struct scsi_host_template dmx3191d_driver_template = {
 	.info			= NCR5380_info,
 	.queuecommand		= NCR5380_queue_command,
 	.eh_abort_handler	= NCR5380_abort,
-	.eh_bus_reset_handler	= NCR5380_bus_reset,
+	.eh_host_reset_handler	= NCR5380_host_reset,
 	.can_queue		= 32,
 	.this_id		= 7,
 	.sg_tablesize		= SG_ALL,
@@ -1169,11 +1169,6 @@ static struct adpt_device* adpt_find_device(adpt_hba* pHba, u32 chan, u32 id, u6
 	if(chan < 0 || chan >= MAX_CHANNEL)
 		return NULL;
 	
-	if( pHba->channel[chan].device == NULL){
-		printk(KERN_DEBUG"Adaptec I2O RAID: Trying to find device before they are allocated\n");
-		return NULL;
-	}
-
 	d = pHba->channel[chan].device[id];
 	if(!d || d->tid == 0) {
 		return NULL;
@@ -1899,7 +1899,6 @@ static int eata2x_eh_abort(struct scsi_cmnd *SCarg)
 static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
 {
 	unsigned int i, time, k, c, limit = 0;
-	int arg_done = 0;
 	struct scsi_cmnd *SCpnt;
 	struct Scsi_Host *shost = SCarg->device->host;
 	struct hostdata *ha = (struct hostdata *)shost->hostdata;

@@ -1967,9 +1966,6 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
 		if (SCpnt->scsi_done == NULL)
 			panic("%s: reset, mbox %d, SCpnt->scsi_done == NULL.\n",
 			      ha->board_name, i);
-
-		if (SCpnt == SCarg)
-			arg_done = 1;
 	}
 
 	if (do_dma(shost->io_port, 0, RESET_PIO)) {

@@ -2037,10 +2033,7 @@ static int eata2x_eh_host_reset(struct scsi_cmnd *SCarg)
 	ha->in_reset = 0;
 	do_trace = 0;
 
-	if (arg_done)
-		printk("%s: reset, exit, done.\n", ha->board_name);
-	else
-		printk("%s: reset, exit.\n", ha->board_name);
+	printk("%s: reset, exit.\n", ha->board_name);
 
 	spin_unlock_irq(shost->host_lock);
 	return SUCCESS;
@@ -309,7 +309,7 @@ MODULE_PARM_DESC(interrupt_mode,
 		 "Defines the interrupt mode to use.  0 for legacy"
 		 ", 1 for MSI.  Default is MSI (1).");
 
-static struct pci_device_id
+static const struct pci_device_id
 	esas2r_pci_table[] = {
 	{ ATTO_VENDOR_ID, 0x0049,	  ATTO_VENDOR_ID, 0x0049,
 	  0,
@ -597,14 +597,12 @@ static int esp_alloc_lun_tag(struct esp_cmd_entry *ent,
|
|||
|
||||
lp->non_tagged_cmd = ent;
|
||||
return 0;
|
||||
} else {
|
||||
/* Tagged command, see if blocked by a
|
||||
* non-tagged one.
|
||||
*/
|
||||
if (lp->non_tagged_cmd || lp->hold)
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
/* Tagged command. Check that it isn't blocked by a non-tagged one. */
|
||||
if (lp->non_tagged_cmd || lp->hold)
|
||||
return -EBUSY;
|
||||
|
||||
BUG_ON(lp->tagged_cmds[ent->orig_tag[1]]);
|
||||
|
||||
lp->tagged_cmds[ent->orig_tag[1]] = ent;
|
||||
|
@ -1210,12 +1208,6 @@ static int esp_reconnect(struct esp *esp)
|
|||
|
||||
esp->active_cmd = ent;
|
||||
|
||||
if (ent->flags & ESP_CMD_FLAG_ABORT) {
|
||||
esp->msg_out[0] = ABORT_TASK_SET;
|
||||
esp->msg_out_len = 1;
|
||||
scsi_esp_cmd(esp, ESP_CMD_SATN);
|
||||
}
|
||||
|
||||
esp_event(esp, ESP_EVENT_CHECK_PHASE);
|
||||
esp_restore_pointers(esp, ent);
|
||||
esp->flags |= ESP_FLAG_QUICKIRQ_CHECK;
|
||||
|
@ -1230,9 +1222,6 @@ static int esp_finish_select(struct esp *esp)
|
|||
{
|
||||
struct esp_cmd_entry *ent;
|
||||
struct scsi_cmnd *cmd;
|
||||
u8 orig_select_state;
|
||||
|
||||
orig_select_state = esp->select_state;
|
||||
|
||||
/* No longer selecting. */
|
||||
esp->select_state = ESP_SELECT_NONE;
|
||||
|
@ -1496,9 +1485,8 @@ static void esp_msgin_reject(struct esp *esp)
|
|||
return;
|
||||
}
|
||||
|
||||
esp->msg_out[0] = ABORT_TASK_SET;
|
||||
esp->msg_out_len = 1;
|
||||
scsi_esp_cmd(esp, ESP_CMD_SATN);
|
||||
shost_printk(KERN_INFO, esp->host, "Unexpected MESSAGE REJECT\n");
esp_schedule_reset(esp);
}

static void esp_msgin_sdtr(struct esp *esp, struct esp_target_data *tp)

@@ -1621,7 +1609,7 @@ static void esp_msgin_extended(struct esp *esp)
shost_printk(KERN_INFO, esp->host,
"Unexpected extended msg type %x\n", esp->msg_in[2]);

esp->msg_out[0] = ABORT_TASK_SET;
esp->msg_out[0] = MESSAGE_REJECT;
esp->msg_out_len = 1;
scsi_esp_cmd(esp, ESP_CMD_SATN);
}

@@ -1745,7 +1733,6 @@ again:
return 0;
}
goto again;
break;

case ESP_EVENT_DATA_IN:
write = 1;

@@ -1956,12 +1943,16 @@ again:
} else {
if (esp->msg_out_len > 1)
esp->ops->dma_invalidate(esp);
}

if (!(esp->ireg & ESP_INTR_DC)) {
if (esp->rev != FASHME)
/* XXX if the chip went into disconnected mode,
 * we can't run the phase state machine anyway.
 */
if (!(esp->ireg & ESP_INTR_DC))
scsi_esp_cmd(esp, ESP_CMD_NULL);
}

esp->msg_out_len = 0;

esp_event(esp, ESP_EVENT_CHECK_PHASE);
goto again;
case ESP_EVENT_MSGIN:

@@ -1998,6 +1989,10 @@ again:

scsi_esp_cmd(esp, ESP_CMD_MOK);

/* Check whether a bus reset is to be done next */
if (esp->event == ESP_EVENT_RESET)
return 0;

if (esp->event != ESP_EVENT_FREE_BUS)
esp_event(esp, ESP_EVENT_CHECK_PHASE);
} else {

@@ -2022,7 +2017,6 @@ again:
}
esp_schedule_reset(esp);
return 0;
break;

case ESP_EVENT_RESET:
scsi_esp_cmd(esp, ESP_CMD_RS);

@@ -2033,7 +2027,6 @@ again:
"Unexpected event %x, resetting\n", esp->event);
esp_schedule_reset(esp);
return 0;
break;
}
return 1;
}

@@ -2170,14 +2163,14 @@ static void __esp_interrupt(struct esp *esp)

esp_schedule_reset(esp);
} else {
if (!(esp->ireg & ESP_INTR_RSEL)) {
/* Some combination of FDONE, BSERV, DC. */
if (esp->select_state != ESP_SELECT_NONE)
intr_done = esp_finish_select(esp);
} else if (esp->ireg & ESP_INTR_RSEL) {
if (esp->ireg & ESP_INTR_RSEL) {
if (esp->active_cmd)
(void) esp_finish_select(esp);
intr_done = esp_reconnect(esp);
} else {
/* Some combination of FDONE, BSERV, DC. */
if (esp->select_state != ESP_SELECT_NONE)
intr_done = esp_finish_select(esp);
}
}
while (!intr_done)
@@ -281,7 +281,6 @@ struct esp_cmd_entry {

u8 flags;
#define ESP_CMD_FLAG_WRITE 0x01 /* DMA is a write */
#define ESP_CMD_FLAG_ABORT 0x02 /* being aborted */
#define ESP_CMD_FLAG_AUTOSENSE 0x04 /* Doing automatic REQUEST_SENSE */
#define ESP_CMD_FLAG_RESIDUAL 0x08 /* AM53c974 BLAST residual */
@@ -659,13 +659,13 @@ static void fcoe_fcf_device_release(struct device *dev)
kfree(fcf);
}

static struct device_type fcoe_ctlr_device_type = {
static const struct device_type fcoe_ctlr_device_type = {
.name = "fcoe_ctlr",
.groups = fcoe_ctlr_attr_groups,
.release = fcoe_ctlr_device_release,
};

static struct device_type fcoe_fcf_device_type = {
static const struct device_type fcoe_fcf_device_type = {
.name = "fcoe_fcf",
.groups = fcoe_fcf_attr_groups,
.release = fcoe_fcf_device_release,
@@ -933,7 +933,7 @@ struct Scsi_Host *__fdomain_16x0_detect(struct scsi_host_template *tpnt )
}
}

fdomain_16x0_bus_reset(NULL);
fdomain_16x0_host_reset(NULL);

if (fdomain_test_loopback()) {
printk(KERN_ERR "scsi: <fdomain> Detection failed (loopback test failed at port base 0x%x)\n", port_base);

@@ -1568,7 +1568,7 @@ static int fdomain_16x0_abort(struct scsi_cmnd *SCpnt)
return SUCCESS;
}

int fdomain_16x0_bus_reset(struct scsi_cmnd *SCpnt)
int fdomain_16x0_host_reset(struct scsi_cmnd *SCpnt)
{
unsigned long flags;

@@ -1758,7 +1758,7 @@ struct scsi_host_template fdomain_driver_template = {
.info = fdomain_16x0_info,
.queuecommand = fdomain_16x0_queue,
.eh_abort_handler = fdomain_16x0_abort,
.eh_bus_reset_handler = fdomain_16x0_bus_reset,
.eh_host_reset_handler = fdomain_16x0_host_reset,
.bios_param = fdomain_16x0_biosparam,
.release = fdomain_16x0_release,
.can_queue = 1,

@@ -21,4 +21,4 @@
extern struct scsi_host_template fdomain_driver_template;
extern int fdomain_setup(char *str);
extern struct Scsi_Host *__fdomain_16x0_detect(struct scsi_host_template *tpnt );
extern int fdomain_16x0_bus_reset(struct scsi_cmnd *SCpnt);
extern int fdomain_16x0_host_reset(struct scsi_cmnd *SCpnt);
@@ -180,7 +180,7 @@ enum fnic_msix_intr_index {

struct fnic_msix_entry {
int requested;
char devname[IFNAMSIZ];
char devname[IFNAMSIZ + 11];
irqreturn_t (*isr)(int, void *);
void *devid;
};
@@ -1990,10 +1986,6 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"Issuing Host reset due to out of order IO\n");

if (fnic_host_reset(sc) == FAILED) {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"fnic_host_reset failed.\n");
}
ret = FAILED;
goto fnic_abort_cmd_end;
}
@@ -1,17 +1,17 @@
/*
 * Generic Generic NCR5380 driver
 *
 *
 * Copyright 1993, Drew Eckhardt
 * Visionary Computing
 * (Unix and Linux consulting and custom programming)
 * drew@colorado.edu
 * +1 (303) 440-4894
 * Visionary Computing
 * (Unix and Linux consulting and custom programming)
 * drew@colorado.edu
 * +1 (303) 440-4894
 *
 * NCR53C400 extensions (c) 1994,1995,1996, Kevin Lentin
 * K.Lentin@cs.monash.edu.au
 * K.Lentin@cs.monash.edu.au
 *
 * NCR53C400A extensions (c) 1996, Ingmar Baumgart
 * ingmar@gonzo.schwaben.de
 * ingmar@gonzo.schwaben.de
 *
 * DTC3181E extensions (c) 1997, Ronald van Cuijlenborg
 * ronald.van.cuijlenborg@tip.nl or nutty@dds.nl

@@ -44,17 +44,19 @@
int c400_ctl_status; \
int c400_blk_cnt; \
int c400_host_buf; \
int io_width
int io_width; \
int pdma_residual; \
int board

#define NCR5380_dma_xfer_len generic_NCR5380_dma_xfer_len
#define NCR5380_dma_recv_setup generic_NCR5380_pread
#define NCR5380_dma_send_setup generic_NCR5380_pwrite
#define NCR5380_dma_residual NCR5380_dma_residual_none
#define NCR5380_dma_recv_setup generic_NCR5380_precv
#define NCR5380_dma_send_setup generic_NCR5380_psend
#define NCR5380_dma_residual generic_NCR5380_dma_residual

#define NCR5380_intr generic_NCR5380_intr
#define NCR5380_queue_command generic_NCR5380_queue_command
#define NCR5380_abort generic_NCR5380_abort
#define NCR5380_bus_reset generic_NCR5380_bus_reset
#define NCR5380_host_reset generic_NCR5380_host_reset
#define NCR5380_info generic_NCR5380_info

#define NCR5380_io_delay(x) udelay(x)

@@ -76,6 +78,7 @@
#define IRQ_AUTO 254

#define MAX_CARDS 8
#define DMA_MAX_SIZE 32768

/* old-style parameters for compatibility */
static int ncr_irq = -1;

@@ -314,6 +317,7 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
}
hostdata = shost_priv(instance);

hostdata->board = board;
hostdata->io = iomem;
hostdata->region_size = region_size;

@@ -478,180 +482,210 @@ static void generic_NCR5380_release_resources(struct Scsi_Host *instance)
release_mem_region(base, region_size);
}

/**
 * generic_NCR5380_pread - pseudo DMA read
 * @hostdata: scsi host private data
 * @dst: buffer to read into
 * @len: buffer length
/* wait_for_53c80_access - wait for 53C80 registers to become accessible
 * @hostdata: scsi host private data
 *
 * Perform a pseudo DMA mode read from an NCR53C400 or equivalent
 * controller
 * The registers within the 53C80 logic block are inaccessible until
 * bit 7 in the 53C400 control status register gets asserted.
 */

static inline int generic_NCR5380_pread(struct NCR5380_hostdata *hostdata,

static void wait_for_53c80_access(struct NCR5380_hostdata *hostdata)
{
int count = 10000;

do {
if (hostdata->board == BOARD_DTC3181E)
udelay(4); /* DTC436 chip hangs without this */
if (NCR5380_read(hostdata->c400_ctl_status) & CSR_53C80_REG)
return;
} while (--count > 0);

scmd_printk(KERN_ERR, hostdata->connected,
"53c80 registers not accessible, device will be reset\n");
NCR5380_write(hostdata->c400_ctl_status, CSR_RESET);
NCR5380_write(hostdata->c400_ctl_status, CSR_BASE);
}
/**
 * generic_NCR5380_precv - pseudo DMA receive
 * @hostdata: scsi host private data
 * @dst: buffer to write into
 * @len: transfer size
 *
 * Perform a pseudo DMA mode receive from a 53C400 or equivalent device.
 */

static inline int generic_NCR5380_precv(struct NCR5380_hostdata *hostdata,
unsigned char *dst, int len)
{
int blocks = len / 128;
int residual;
int start = 0;

NCR5380_write(hostdata->c400_ctl_status, CSR_BASE | CSR_TRANS_DIR);
NCR5380_write(hostdata->c400_blk_cnt, blocks);
while (1) {
if (NCR5380_read(hostdata->c400_blk_cnt) == 0)
break;
if (NCR5380_read(hostdata->c400_ctl_status) & CSR_GATED_53C80_IRQ) {
printk(KERN_ERR "53C400r: Got 53C80_IRQ start=%d, blocks=%d\n", start, blocks);
return -1;
NCR5380_write(hostdata->c400_blk_cnt, len / 128);

do {
if (start == len - 128) {
/* Ignore End of DMA interrupt for the final buffer */
if (NCR5380_poll_politely(hostdata, hostdata->c400_ctl_status,
CSR_HOST_BUF_NOT_RDY, 0, HZ / 64) < 0)
break;
} else {
if (NCR5380_poll_politely2(hostdata, hostdata->c400_ctl_status,
CSR_HOST_BUF_NOT_RDY, 0,
hostdata->c400_ctl_status,
CSR_GATED_53C80_IRQ,
CSR_GATED_53C80_IRQ, HZ / 64) < 0 ||
NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
break;
}
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; /* FIXME - no timeout */

if (hostdata->io_port && hostdata->io_width == 2)
insw(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 64);
dst + start, 64);
else if (hostdata->io_port)
insb(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 128);
dst + start, 128);
else
memcpy_fromio(dst + start,
hostdata->io + NCR53C400_host_buffer, 128);

start += 128;
blocks--;
} while (start < len);

residual = len - start;

if (residual != 0) {
/* 53c80 interrupt or transfer timeout. Reset 53c400 logic. */
NCR5380_write(hostdata->c400_ctl_status, CSR_RESET);
NCR5380_write(hostdata->c400_ctl_status, CSR_BASE);
}
wait_for_53c80_access(hostdata);

if (blocks) {
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; /* FIXME - no timeout */
if (residual == 0 && NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
BASR_END_DMA_TRANSFER,
BASR_END_DMA_TRANSFER,
HZ / 64) < 0)
scmd_printk(KERN_ERR, hostdata->connected, "%s: End of DMA timeout\n",
__func__);

if (hostdata->io_port && hostdata->io_width == 2)
insw(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 64);
else if (hostdata->io_port)
insb(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 128);
else
memcpy_fromio(dst + start,
hostdata->io + NCR53C400_host_buffer, 128);
hostdata->pdma_residual = residual;

start += 128;
blocks--;
}

if (!(NCR5380_read(hostdata->c400_ctl_status) & CSR_GATED_53C80_IRQ))
printk("53C400r: no 53C80 gated irq after transfer");

/* wait for 53C80 registers to be available */
while (!(NCR5380_read(hostdata->c400_ctl_status) & CSR_53C80_REG))
;

if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_END_DMA_TRANSFER))
printk(KERN_ERR "53C400r: no end dma signal\n");

return 0;
}

/**
 * generic_NCR5380_pwrite - pseudo DMA write
 * @hostdata: scsi host private data
 * @dst: buffer to read into
 * @len: buffer length
 * generic_NCR5380_psend - pseudo DMA send
 * @hostdata: scsi host private data
 * @src: buffer to read from
 * @len: transfer size
 *
 * Perform a pseudo DMA mode read from an NCR53C400 or equivalent
 * controller
 * Perform a pseudo DMA mode send to a 53C400 or equivalent device.
 */

static inline int generic_NCR5380_pwrite(struct NCR5380_hostdata *hostdata,
unsigned char *src, int len)
static inline int generic_NCR5380_psend(struct NCR5380_hostdata *hostdata,
unsigned char *src, int len)
{
int blocks = len / 128;
int residual;
int start = 0;

NCR5380_write(hostdata->c400_ctl_status, CSR_BASE);
NCR5380_write(hostdata->c400_blk_cnt, blocks);
while (1) {
if (NCR5380_read(hostdata->c400_ctl_status) & CSR_GATED_53C80_IRQ) {
printk(KERN_ERR "53C400w: Got 53C80_IRQ start=%d, blocks=%d\n", start, blocks);
return -1;
NCR5380_write(hostdata->c400_blk_cnt, len / 128);

do {
if (NCR5380_poll_politely2(hostdata, hostdata->c400_ctl_status,
CSR_HOST_BUF_NOT_RDY, 0,
hostdata->c400_ctl_status,
CSR_GATED_53C80_IRQ,
CSR_GATED_53C80_IRQ, HZ / 64) < 0 ||
NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY) {
/* Both 128 B buffers are in use */
if (start >= 128)
start -= 128;
if (start >= 128)
start -= 128;
break;
}

if (NCR5380_read(hostdata->c400_blk_cnt) == 0)
if (start >= len && NCR5380_read(hostdata->c400_blk_cnt) == 0)
break;
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; // FIXME - timeout

if (NCR5380_read(hostdata->c400_ctl_status) & CSR_GATED_53C80_IRQ) {
/* Host buffer is empty, other one is in use */
if (start >= 128)
start -= 128;
break;
}

if (start >= len)
continue;

if (hostdata->io_port && hostdata->io_width == 2)
outsw(hostdata->io_port + hostdata->c400_host_buf,
src + start, 64);
src + start, 64);
else if (hostdata->io_port)
outsb(hostdata->io_port + hostdata->c400_host_buf,
src + start, 128);
src + start, 128);
else
memcpy_toio(hostdata->io + NCR53C400_host_buffer,
src + start, 128);

start += 128;
blocks--;
} while (1);

residual = len - start;

if (residual != 0) {
/* 53c80 interrupt or transfer timeout. Reset 53c400 logic. */
NCR5380_write(hostdata->c400_ctl_status, CSR_RESET);
NCR5380_write(hostdata->c400_ctl_status, CSR_BASE);
}
if (blocks) {
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; // FIXME - no timeout
wait_for_53c80_access(hostdata);

if (hostdata->io_port && hostdata->io_width == 2)
outsw(hostdata->io_port + hostdata->c400_host_buf,
src + start, 64);
else if (hostdata->io_port)
outsb(hostdata->io_port + hostdata->c400_host_buf,
src + start, 128);
else
memcpy_toio(hostdata->io + NCR53C400_host_buffer,
src + start, 128);
if (residual == 0) {
if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
TCR_LAST_BYTE_SENT, TCR_LAST_BYTE_SENT,
HZ / 64) < 0)
scmd_printk(KERN_ERR, hostdata->connected,
"%s: Last Byte Sent timeout\n", __func__);

start += 128;
blocks--;
if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
BASR_END_DMA_TRANSFER, BASR_END_DMA_TRANSFER,
HZ / 64) < 0)
scmd_printk(KERN_ERR, hostdata->connected, "%s: End of DMA timeout\n",
__func__);
}

/* wait for 53C80 registers to be available */
while (!(NCR5380_read(hostdata->c400_ctl_status) & CSR_53C80_REG)) {
udelay(4); /* DTC436 chip hangs without this */
/* FIXME - no timeout */
}
hostdata->pdma_residual = residual;

if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_END_DMA_TRANSFER)) {
printk(KERN_ERR "53C400w: no end dma signal\n");
}

while (!(NCR5380_read(TARGET_COMMAND_REG) & TCR_LAST_BYTE_SENT))
; // TIMEOUT
return 0;
}

static int generic_NCR5380_dma_xfer_len(struct NCR5380_hostdata *hostdata,
struct scsi_cmnd *cmd)
{
int transfersize = cmd->transfersize;
int transfersize = cmd->SCp.this_residual;

if (hostdata->flags & FLAG_NO_PSEUDO_DMA)
return 0;

/* Limit transfers to 32K, for xx400 & xx406
 * pseudoDMA that transfers in 128 bytes blocks.
 */
if (transfersize > 32 * 1024 && cmd->SCp.this_residual &&
!(cmd->SCp.this_residual % transfersize))
transfersize = 32 * 1024;

/* 53C400 datasheet: non-modulo-128-byte transfers should use PIO */
if (transfersize % 128)
transfersize = 0;
return 0;

return transfersize;
/* Limit PDMA send to 512 B to avoid random corruption on DTC3181E */
if (hostdata->board == BOARD_DTC3181E &&
cmd->sc_data_direction == DMA_TO_DEVICE)
transfersize = min(cmd->SCp.this_residual, 512);

return min(transfersize, DMA_MAX_SIZE);
}

/*
 * Include the NCR5380 core code that we build our driver around
 */

static int generic_NCR5380_dma_residual(struct NCR5380_hostdata *hostdata)
{
return hostdata->pdma_residual;
}

/* Include the core driver code. */

#include "NCR5380.c"

static struct scsi_host_template driver_template = {

@@ -661,7 +695,7 @@ static struct scsi_host_template driver_template = {
.info = generic_NCR5380_info,
.queuecommand = generic_NCR5380_queue_command,
.eh_abort_handler = generic_NCR5380_abort,
.eh_bus_reset_handler = generic_NCR5380_bus_reset,
.eh_host_reset_handler = generic_NCR5380_host_reset,
.can_queue = 16,
.this_id = 7,
.sg_tablesize = SG_ALL,

@@ -671,11 +705,10 @@ static struct scsi_host_template driver_template = {
.max_sectors = 128,
};


static int generic_NCR5380_isa_match(struct device *pdev, unsigned int ndev)
{
int ret = generic_NCR5380_init_one(&driver_template, pdev, base[ndev],
irq[ndev], card[ndev]);
irq[ndev], card[ndev]);
if (ret) {
if (base[ndev])
printk(KERN_WARNING "Card not found at address 0x%03x\n",

@@ -687,7 +720,7 @@ static int generic_NCR5380_isa_match(struct device *pdev, unsigned int ndev)
}

static int generic_NCR5380_isa_remove(struct device *pdev,
unsigned int ndev)
unsigned int ndev)
{
generic_NCR5380_release_resources(dev_get_drvdata(pdev));
dev_set_drvdata(pdev, NULL);

@@ -703,14 +736,14 @@ static struct isa_driver generic_NCR5380_isa_driver = {
};

#ifdef CONFIG_PNP
static struct pnp_device_id generic_NCR5380_pnp_ids[] = {
static const struct pnp_device_id generic_NCR5380_pnp_ids[] = {
{ .id = "DTC436e", .driver_data = BOARD_DTC3181E },
{ .id = "" }
};
MODULE_DEVICE_TABLE(pnp, generic_NCR5380_pnp_ids);

static int generic_NCR5380_pnp_probe(struct pnp_dev *pdev,
const struct pnp_device_id *id)
const struct pnp_device_id *id)
{
int base, irq;

@@ -721,7 +754,7 @@ static int generic_NCR5380_pnp_probe(struct pnp_dev *pdev,
irq = pnp_irq(pdev, 0);

return generic_NCR5380_init_one(&driver_template, &pdev->dev, base, irq,
id->driver_data);
id->driver_data);
}

static void generic_NCR5380_pnp_remove(struct pnp_dev *pdev)
@@ -2354,7 +2354,7 @@ static int gdth_internal_cache_cmd(gdth_ha_str *ha, Scsi_Cmnd *scp)
inq.resp_aenc = 2;
inq.add_length= 32;
strcpy(inq.vendor,ha->oem_name);
sprintf(inq.product,"Host Drive #%02d",t);
snprintf(inq.product, sizeof(inq.product), "Host Drive #%02d",t);
strcpy(inq.revision," ");
gdth_copy_internal_data(ha, scp, (char*)&inq, sizeof(gdth_inq_data));
break;
@@ -147,7 +147,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)

gdth_cmd_str *gdtcmd;
gdth_evt_str *estr;
char hrec[161];
char hrec[277];

char *buf;
gdth_dskstat_str *pds;

@@ -171,23 +171,6 @@ static void dma_stop(struct Scsi_Host *instance, struct scsi_cmnd *SCpnt,
}
}

static int gvp11_bus_reset(struct scsi_cmnd *cmd)
{
struct Scsi_Host *instance = cmd->device->host;

/* FIXME perform bus-specific reset */

/* FIXME 2: shouldn't we no-op this function (return
   FAILED), and fall back to host reset function,
   wd33c93_host_reset ? */

spin_lock_irq(instance->host_lock);
wd33c93_host_reset(cmd);
spin_unlock_irq(instance->host_lock);

return SUCCESS;
}

static struct scsi_host_template gvp11_scsi_template = {
.module = THIS_MODULE,
.name = "GVP Series II SCSI",

@@ -196,7 +179,6 @@ static struct scsi_host_template gvp11_scsi_template = {
.proc_name = "GVP11",
.queuecommand = wd33c93_queuecommand,
.eh_abort_handler = wd33c93_abort,
.eh_bus_reset_handler = gvp11_bus_reset,
.eh_host_reset_handler = wd33c93_host_reset,
.can_queue = CAN_QUEUE,
.this_id = 7,
@@ -15,6 +15,7 @@
#include <linux/acpi.h>
#include <linux/clk.h>
#include <linux/dmapool.h>
#include <linux/iopoll.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_address.h>

@@ -25,14 +26,13 @@
#include <scsi/sas_ata.h>
#include <scsi/libsas.h>

#define DRV_VERSION "v1.6"

#define HISI_SAS_MAX_PHYS 9
#define HISI_SAS_MAX_QUEUES 32
#define HISI_SAS_QUEUE_SLOTS 512
#define HISI_SAS_MAX_ITCT_ENTRIES 2048
#define HISI_SAS_MAX_DEVICES HISI_SAS_MAX_ITCT_ENTRIES
#define HISI_SAS_RESET_BIT 0
#define HISI_SAS_REJECT_CMD_BIT 1

#define HISI_SAS_STATUS_BUF_SZ (sizeof(struct hisi_sas_status_buffer))
#define HISI_SAS_COMMAND_TABLE_SZ (sizeof(union hisi_sas_command_table))

@@ -90,6 +90,14 @@ enum hisi_sas_dev_type {
HISI_SAS_DEV_TYPE_SATA,
};

struct hisi_sas_hw_error {
u32 irq_msk;
u32 msk;
int shift;
const char *msg;
int reg;
};

struct hisi_sas_phy {
struct hisi_hba *hisi_hba;
struct hisi_sas_port *port;

@@ -132,6 +140,7 @@ struct hisi_sas_dq {
struct hisi_sas_device {
struct hisi_hba *hisi_hba;
struct domain_device *sas_device;
struct completion *completion;
struct hisi_sas_dq *dq;
struct list_head list;
u64 attached_phy;

@@ -192,6 +201,7 @@ struct hisi_sas_hw {
void (*phy_enable)(struct hisi_hba *hisi_hba, int phy_no);
void (*phy_disable)(struct hisi_hba *hisi_hba, int phy_no);
void (*phy_hard_reset)(struct hisi_hba *hisi_hba, int phy_no);
void (*get_events)(struct hisi_hba *hisi_hba, int phy_no);
void (*phy_set_linkrate)(struct hisi_hba *hisi_hba, int phy_no,
struct sas_phy_linkrates *linkrates);
enum sas_linkrate (*phy_get_max_linkrate)(void);

@@ -201,6 +211,7 @@ struct hisi_sas_hw {
void (*dereg_device)(struct hisi_hba *hisi_hba,
struct domain_device *device);
int (*soft_reset)(struct hisi_hba *hisi_hba);
u32 (*get_phys_state)(struct hisi_hba *hisi_hba);
int max_command_entries;
int complete_hdr_size;
};

@@ -390,6 +401,7 @@ struct hisi_sas_slot_buf_table {
extern struct scsi_transport_template *hisi_sas_stt;
extern struct scsi_host_template *hisi_sas_sht;

extern void hisi_sas_stop_phys(struct hisi_hba *hisi_hba);
extern void hisi_sas_init_add(struct hisi_hba *hisi_hba);
extern int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost);
extern void hisi_sas_free(struct hisi_hba *hisi_hba);

@@ -408,6 +420,4 @@ extern void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba,
struct sas_task *task,
struct hisi_sas_slot *slot);
extern void hisi_sas_init_mem(struct hisi_hba *hisi_hba);
extern void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state,
u32 state);
#endif
@@ -61,6 +61,7 @@ u8 hisi_sas_get_ata_protocol(u8 cmd, int direction)
case ATA_CMD_WRITE_QUEUED:
case ATA_CMD_WRITE_LOG_DMA_EXT:
case ATA_CMD_WRITE_STREAM_DMA_EXT:
case ATA_CMD_ZAC_MGMT_IN:
return HISI_SAS_SATA_PROTOCOL_DMA;

case ATA_CMD_CHK_POWER:

@@ -73,6 +74,7 @@ u8 hisi_sas_get_ata_protocol(u8 cmd, int direction)
case ATA_CMD_SET_FEATURES:
case ATA_CMD_STANDBY:
case ATA_CMD_STANDBYNOW1:
case ATA_CMD_ZAC_MGMT_OUT:
return HISI_SAS_SATA_PROTOCOL_NONDATA;
default:
if (direction == DMA_NONE)

@@ -125,6 +127,15 @@ struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port)
}
EXPORT_SYMBOL_GPL(to_hisi_sas_port);

void hisi_sas_stop_phys(struct hisi_hba *hisi_hba)
{
int phy_no;

for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++)
hisi_hba->hw->phy_disable(hisi_hba, phy_no);
}
EXPORT_SYMBOL_GPL(hisi_sas_stop_phys);

static void hisi_sas_slot_index_clear(struct hisi_hba *hisi_hba, int slot_idx)
{
void *bitmap = hisi_hba->slot_index_tags;

@@ -433,7 +444,7 @@ static int hisi_sas_task_exec(struct sas_task *task, gfp_t gfp_flags,
struct hisi_sas_device *sas_dev = device->lldd_dev;
struct hisi_sas_dq *dq = sas_dev->dq;

if (unlikely(test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags)))
if (unlikely(test_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags)))
return -EINVAL;

/* protect task_prep and start_delivery sequence */

@@ -716,7 +727,6 @@ static void hisi_sas_dev_gone(struct domain_device *device)
struct hisi_sas_device *sas_dev = device->lldd_dev;
struct hisi_hba *hisi_hba = dev_to_hisi_hba(device);
struct device *dev = hisi_hba->dev;
int dev_id = sas_dev->device_id;

dev_info(dev, "found dev[%d:%x] is gone\n",
sas_dev->device_id, sas_dev->dev_type);

@@ -729,9 +739,7 @@ static void hisi_sas_dev_gone(struct domain_device *device)
hisi_hba->hw->free_device(hisi_hba, sas_dev);
device->lldd_dev = NULL;
memset(sas_dev, 0, sizeof(*sas_dev));
sas_dev->device_id = dev_id;
sas_dev->dev_type = SAS_PHY_UNUSED;
sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
}

static int hisi_sas_queue_command(struct sas_task *task, gfp_t gfp_flags)

@@ -764,7 +772,12 @@ static int hisi_sas_control_phy(struct asd_sas_phy *sas_phy, enum phy_func func,
case PHY_FUNC_SET_LINK_RATE:
hisi_hba->hw->phy_set_linkrate(hisi_hba, phy_no, funcdata);
break;

case PHY_FUNC_GET_EVENTS:
if (hisi_hba->hw->get_events) {
hisi_hba->hw->get_events(hisi_hba, phy_no);
break;
}
/* fallthru */
case PHY_FUNC_RELEASE_SPINUP_HOLD:
default:
return -EOPNOTSUPP;

@@ -967,37 +980,117 @@ static int hisi_sas_debug_issue_ssp_tmf(struct domain_device *device,
sizeof(ssp_task), tmf);
}

static void hisi_sas_refresh_port_id(struct hisi_hba *hisi_hba,
struct asd_sas_port *sas_port, enum sas_linkrate linkrate)
{
struct hisi_sas_device *sas_dev;
struct domain_device *device;
int i;

for (i = 0; i < HISI_SAS_MAX_DEVICES; i++) {
sas_dev = &hisi_hba->devices[i];
device = sas_dev->sas_device;
if ((sas_dev->dev_type == SAS_PHY_UNUSED)
|| !device || (device->port != sas_port))
continue;

hisi_hba->hw->free_device(hisi_hba, sas_dev);

/* Update linkrate of directly attached device. */
if (!device->parent)
device->linkrate = linkrate;

hisi_hba->hw->setup_itct(hisi_hba, sas_dev);
}
}

static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state,
u32 state)
{
struct sas_ha_struct *sas_ha = &hisi_hba->sha;
struct asd_sas_port *_sas_port = NULL;
int phy_no;

for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
struct asd_sas_phy *sas_phy = &phy->sas_phy;
struct asd_sas_port *sas_port = sas_phy->port;
struct hisi_sas_port *port = to_hisi_sas_port(sas_port);
bool do_port_check = !!(_sas_port != sas_port);

if (!sas_phy->phy->enabled)
continue;

/* Report PHY state change to libsas */
if (state & (1 << phy_no)) {
if (do_port_check && sas_port) {
struct domain_device *dev = sas_port->port_dev;

_sas_port = sas_port;
port->id = phy->port_id;
hisi_sas_refresh_port_id(hisi_hba,
sas_port, sas_phy->linkrate);

if (DEV_IS_EXPANDER(dev->dev_type))
sas_ha->notify_port_event(sas_phy,
PORTE_BROADCAST_RCVD);
}
} else if (old_state & (1 << phy_no))
/* PHY down but was up before */
hisi_sas_phy_down(hisi_hba, phy_no, 0);

}

drain_workqueue(hisi_hba->shost->work_q);
}

static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
{
struct sas_ha_struct *sas_ha = &hisi_hba->sha;
struct device *dev = hisi_hba->dev;
struct Scsi_Host *shost = hisi_hba->shost;
u32 old_state, state;
unsigned long flags;
int rc;

if (!hisi_hba->hw->soft_reset)
return -1;

if (!test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags)) {
struct device *dev = hisi_hba->dev;
struct sas_ha_struct *sas_ha = &hisi_hba->sha;
unsigned long flags;

dev_dbg(dev, "controller reset begins!\n");
scsi_block_requests(hisi_hba->shost);
rc = hisi_hba->hw->soft_reset(hisi_hba);
if (rc) {
dev_warn(dev, "controller reset failed (%d)\n", rc);
goto out;
}
spin_lock_irqsave(&hisi_hba->lock, flags);
hisi_sas_release_tasks(hisi_hba);
spin_unlock_irqrestore(&hisi_hba->lock, flags);

sas_ha->notify_ha_event(sas_ha, HAE_RESET);
dev_dbg(dev, "controller reset successful!\n");
} else
if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
return -1;

dev_dbg(dev, "controller resetting...\n");
old_state = hisi_hba->hw->get_phys_state(hisi_hba);

scsi_block_requests(shost);
set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
rc = hisi_hba->hw->soft_reset(hisi_hba);
if (rc) {
dev_warn(dev, "controller reset failed (%d)\n", rc);
clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
goto out;
}
spin_lock_irqsave(&hisi_hba->lock, flags);
hisi_sas_release_tasks(hisi_hba);
spin_unlock_irqrestore(&hisi_hba->lock, flags);

sas_ha->notify_ha_event(sas_ha, HAE_RESET);
clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);

/* Init and wait for PHYs to come up and all libsas event finished. */
hisi_hba->hw->phys_init(hisi_hba);
msleep(1000);
drain_workqueue(hisi_hba->wq);
drain_workqueue(shost->work_q);

state = hisi_hba->hw->get_phys_state(hisi_hba);
hisi_sas_rescan_topology(hisi_hba, old_state, state);
dev_dbg(dev, "controller reset complete\n");

out:
scsi_unblock_requests(hisi_hba->shost);
scsi_unblock_requests(shost);
clear_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags);

return rc;
}
|
||||
|
@@ -1241,7 +1334,7 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, int device_id,
	int dlvry_queue_slot, dlvry_queue, n_elem = 0, rc, slot_idx;
	unsigned long flags, flags_dq;

	if (unlikely(test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags)))
	if (unlikely(test_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags)))
		return -EINVAL;

	if (!device->port)
@@ -1279,12 +1372,21 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, int device_id,
	slot->port = port;
	task->lldd_task = slot;

	slot->buf = dma_pool_alloc(hisi_hba->buffer_pool,
				   GFP_ATOMIC, &slot->buf_dma);
	if (!slot->buf) {
		rc = -ENOMEM;
		goto err_out_tag;
	}

	memset(slot->cmd_hdr, 0, sizeof(struct hisi_sas_cmd_hdr));
	memset(hisi_sas_cmd_hdr_addr_mem(slot), 0, HISI_SAS_COMMAND_TABLE_SZ);
	memset(hisi_sas_status_buf_addr_mem(slot), 0, HISI_SAS_STATUS_BUF_SZ);

	rc = hisi_sas_task_prep_abort(hisi_hba, slot, device_id,
				      abort_flag, task_tag);
	if (rc)
		goto err_out_tag;
		goto err_out_buf;

	list_add_tail(&slot->entry, &sas_dev->list);
@@ -1302,6 +1404,9 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, int device_id,

	return 0;

err_out_buf:
	dma_pool_free(hisi_hba->buffer_pool, slot->buf,
		      slot->buf_dma);
err_out_tag:
	spin_lock_irqsave(&hisi_hba->lock, flags);
	hisi_sas_slot_index_free(hisi_hba, slot_idx);
@@ -1437,36 +1542,6 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
}
EXPORT_SYMBOL_GPL(hisi_sas_phy_down);

void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state,
			      u32 state)
{
	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
	int phy_no;

	for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
		struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
		struct asd_sas_phy *sas_phy = &phy->sas_phy;
		struct asd_sas_port *sas_port = sas_phy->port;
		struct domain_device *dev;

		if (sas_phy->enabled) {
			/* Report PHY state change to libsas */
			if (state & (1 << phy_no))
				continue;

			if (old_state & (1 << phy_no))
				/* PHY down but was up before */
				hisi_sas_phy_down(hisi_hba, phy_no, 0);
		}
		if (!sas_port)
			continue;
		dev = sas_port->port_dev;

		if (DEV_IS_EXPANDER(dev->dev_type))
			sas_ha->notify_phy_event(sas_phy, PORTE_BROADCAST_RCVD);
	}
}
EXPORT_SYMBOL_GPL(hisi_sas_rescan_topology);

struct scsi_transport_template *hisi_sas_stt;
EXPORT_SYMBOL_GPL(hisi_sas_stt);
@@ -1487,7 +1562,7 @@ static struct scsi_host_template _hisi_sas_sht = {
	.max_sectors = SCSI_DEFAULT_MAX_SECTORS,
	.use_clustering = ENABLE_CLUSTERING,
	.eh_device_reset_handler = sas_eh_device_reset_handler,
	.eh_bus_reset_handler = sas_eh_bus_reset_handler,
	.eh_target_reset_handler = sas_eh_target_reset_handler,
	.target_destroy = sas_target_destroy,
	.ioctl = sas_ioctl,
};
@@ -1825,7 +1900,7 @@ static struct Scsi_Host *hisi_sas_shost_alloc(struct platform_device *pdev,

	return shost;
err_out:
	kfree(shost);
	scsi_host_put(shost);
	dev_err(dev, "shost alloc failed\n");
	return NULL;
}
@@ -1916,7 +1991,7 @@ err_out_register_ha:
	scsi_remove_host(shost);
err_out_ha:
	hisi_sas_free(hisi_hba);
	kfree(shost);
	scsi_host_put(shost);
	return rc;
}
EXPORT_SYMBOL_GPL(hisi_sas_probe);
@@ -1931,15 +2006,13 @@ int hisi_sas_remove(struct platform_device *pdev)
	sas_remove_host(sha->core.shost);

	hisi_sas_free(hisi_hba);
	kfree(shost);
	scsi_host_put(shost);
	return 0;
}
EXPORT_SYMBOL_GPL(hisi_sas_remove);

static __init int hisi_sas_init(void)
{
	pr_info("hisi_sas: driver version %s\n", DRV_VERSION);

	hisi_sas_stt = sas_domain_attach_transport(&hisi_sas_transport_ops);
	if (!hisi_sas_stt)
		return -ENOMEM;
@@ -1955,7 +2028,6 @@ static __exit void hisi_sas_exit(void)
module_init(hisi_sas_init);
module_exit(hisi_sas_exit);

MODULE_VERSION(DRV_VERSION);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("John Garry <john.garry@huawei.com>");
MODULE_DESCRIPTION("HISILICON SAS controller driver");

@@ -256,6 +256,8 @@
#define LINK_DFX2_RCVR_HOLD_STS_MSK (0x1 << LINK_DFX2_RCVR_HOLD_STS_OFF)
#define LINK_DFX2_SEND_HOLD_STS_OFF 10
#define LINK_DFX2_SEND_HOLD_STS_MSK (0x1 << LINK_DFX2_SEND_HOLD_STS_OFF)
#define SAS_ERR_CNT4_REG (PORT_BASE + 0x290)
#define SAS_ERR_CNT6_REG (PORT_BASE + 0x298)
#define PHY_CTRL_RDY_MSK (PORT_BASE + 0x2b0)
#define PHYCTRL_NOT_RDY_MSK (PORT_BASE + 0x2b4)
#define PHYCTRL_DWS_RESET_MSK (PORT_BASE + 0x2b8)
@@ -399,6 +401,172 @@ struct hisi_sas_err_record_v2 {
	__le32 dma_rx_err_type;
};

static const struct hisi_sas_hw_error one_bit_ecc_errors[] = {
	{
		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_1B_OFF),
		.msk = HGC_DQE_ECC_1B_ADDR_MSK,
		.shift = HGC_DQE_ECC_1B_ADDR_OFF,
		.msg = "hgc_dqe_acc1b_intr found: \
			Ram address is 0x%08X\n",
		.reg = HGC_DQE_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_1B_OFF),
		.msk = HGC_IOST_ECC_1B_ADDR_MSK,
		.shift = HGC_IOST_ECC_1B_ADDR_OFF,
		.msg = "hgc_iost_acc1b_intr found: \
			Ram address is 0x%08X\n",
		.reg = HGC_IOST_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_1B_OFF),
		.msk = HGC_ITCT_ECC_1B_ADDR_MSK,
		.shift = HGC_ITCT_ECC_1B_ADDR_OFF,
		.msg = "hgc_itct_acc1b_intr found: \
			Ram address is 0x%08X\n",
		.reg = HGC_ITCT_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF),
		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
		.msg = "hgc_iostl_acc1b_intr found: \
			memory address is 0x%08X\n",
		.reg = HGC_LM_DFX_STATUS2,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF),
		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
		.msg = "hgc_itctl_acc1b_intr found: \
			memory address is 0x%08X\n",
		.reg = HGC_LM_DFX_STATUS2,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_1B_OFF),
		.msk = HGC_CQE_ECC_1B_ADDR_MSK,
		.shift = HGC_CQE_ECC_1B_ADDR_OFF,
		.msg = "hgc_cqe_acc1b_intr found: \
			Ram address is 0x%08X\n",
		.reg = HGC_CQE_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF),
		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
		.msg = "rxm_mem0_acc1b_intr found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS14,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF),
		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
		.msg = "rxm_mem1_acc1b_intr found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS14,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF),
		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
		.msg = "rxm_mem2_acc1b_intr found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS14,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF),
		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
		.msg = "rxm_mem3_acc1b_intr found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS15,
	},
};

static const struct hisi_sas_hw_error multi_bit_ecc_errors[] = {
	{
		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
		.msk = HGC_DQE_ECC_MB_ADDR_MSK,
		.shift = HGC_DQE_ECC_MB_ADDR_OFF,
		.msg = "hgc_dqe_accbad_intr (0x%x) found: \
			Ram address is 0x%08X\n",
		.reg = HGC_DQE_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
		.msk = HGC_IOST_ECC_MB_ADDR_MSK,
		.shift = HGC_IOST_ECC_MB_ADDR_OFF,
		.msg = "hgc_iost_accbad_intr (0x%x) found: \
			Ram address is 0x%08X\n",
		.reg = HGC_IOST_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
		.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
		.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
		.msg = "hgc_itct_accbad_intr (0x%x) found: \
			Ram address is 0x%08X\n",
		.reg = HGC_ITCT_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
		.msg = "hgc_iostl_accbad_intr (0x%x) found: \
			memory address is 0x%08X\n",
		.reg = HGC_LM_DFX_STATUS2,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
		.msg = "hgc_itctl_accbad_intr (0x%x) found: \
			memory address is 0x%08X\n",
		.reg = HGC_LM_DFX_STATUS2,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
		.msk = HGC_CQE_ECC_MB_ADDR_MSK,
		.shift = HGC_CQE_ECC_MB_ADDR_OFF,
		.msg = "hgc_cqe_accbad_intr (0x%x) found: \
			Ram address is 0x%08X\n",
		.reg = HGC_CQE_ECC_ADDR,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
		.msg = "rxm_mem0_accbad_intr (0x%x) found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS14,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
		.msg = "rxm_mem1_accbad_intr (0x%x) found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS14,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
		.msg = "rxm_mem2_accbad_intr (0x%x) found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS14,
	},
	{
		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
		.msg = "rxm_mem3_accbad_intr (0x%x) found: \
			memory address is 0x%08X\n",
		.reg = HGC_RXM_DFX_STATUS15,
	},
};

enum {
	HISI_SAS_PHY_PHY_UPDOWN,
	HISI_SAS_PHY_CHNL_INT,
@@ -806,12 +974,14 @@ static void setup_itct_v2_hw(struct hisi_hba *hisi_hba,
static void free_device_v2_hw(struct hisi_hba *hisi_hba,
			      struct hisi_sas_device *sas_dev)
{
	DECLARE_COMPLETION_ONSTACK(completion);
	u64 dev_id = sas_dev->device_id;
	struct device *dev = hisi_hba->dev;
	struct hisi_sas_itct *itct = &hisi_hba->itct[dev_id];
	u32 reg_val = hisi_sas_read32(hisi_hba, ENT_INT_SRC3);
	int i;

	sas_dev->completion = &completion;

	/* SoC bug workaround */
	if (dev_is_sata(sas_dev->sas_device))
		clear_bit(sas_dev->sata_idx, hisi_hba->sata_dev_bitmap);
@@ -821,28 +991,12 @@ static void free_device_v2_hw(struct hisi_hba *hisi_hba,
		hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
				 ENT_INT_SRC3_ITC_INT_MSK);

	/* clear the itct int*/
	for (i = 0; i < 2; i++) {
		/* clear the itct table*/
		reg_val = hisi_sas_read32(hisi_hba, ITCT_CLR);
		reg_val |= ITCT_CLR_EN_MSK | (dev_id & ITCT_DEV_MSK);
		reg_val = ITCT_CLR_EN_MSK | (dev_id & ITCT_DEV_MSK);
		hisi_sas_write32(hisi_hba, ITCT_CLR, reg_val);
		wait_for_completion(sas_dev->completion);

		udelay(10);
		reg_val = hisi_sas_read32(hisi_hba, ENT_INT_SRC3);
		if (ENT_INT_SRC3_ITC_INT_MSK & reg_val) {
			dev_dbg(dev, "got clear ITCT done interrupt\n");

			/* invalid the itct state*/
			memset(itct, 0, sizeof(struct hisi_sas_itct));
			hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
					 ENT_INT_SRC3_ITC_INT_MSK);

			/* clear the itct */
			hisi_sas_write32(hisi_hba, ITCT_CLR, 0);
			dev_dbg(dev, "clear ITCT ok\n");
			break;
		}
		memset(itct, 0, sizeof(struct hisi_sas_itct));
	}
}

@@ -1023,7 +1177,7 @@ static void init_reg_v2_hw(struct hisi_hba *hisi_hba)
	hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 0xffffffff);
	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0x7efefefe);
	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0x7efefefe);
	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0x7ffffffe);
	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0x7ffe20fe);
	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0xfff00c30);
	for (i = 0; i < hisi_hba->queue_count; i++)
		hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK+0x4*i, 0);
@@ -1332,25 +1486,12 @@ static void start_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
	enable_phy_v2_hw(hisi_hba, phy_no);
}

static void stop_phy_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
{
	disable_phy_v2_hw(hisi_hba, phy_no);
}

static void stop_phys_v2_hw(struct hisi_hba *hisi_hba)
{
	int i;

	for (i = 0; i < hisi_hba->n_phy; i++)
		stop_phy_v2_hw(hisi_hba, i);
}

static void phy_hard_reset_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
{
	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
	u32 txid_auto;

	stop_phy_v2_hw(hisi_hba, phy_no);
	disable_phy_v2_hw(hisi_hba, phy_no);
	if (phy->identify.device_type == SAS_END_DEVICE) {
		txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, TXID_AUTO);
		hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO,
@@ -1360,17 +1501,38 @@ static void phy_hard_reset_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
	start_phy_v2_hw(hisi_hba, phy_no);
}

static void start_phys_v2_hw(struct hisi_hba *hisi_hba)
static void phy_get_events_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
{
	int i;
	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
	struct asd_sas_phy *sas_phy = &phy->sas_phy;
	struct sas_phy *sphy = sas_phy->phy;
	u32 err4_reg_val, err6_reg_val;

	for (i = 0; i < hisi_hba->n_phy; i++)
		start_phy_v2_hw(hisi_hba, i);
	/* loss dword syn, phy reset problem */
	err4_reg_val = hisi_sas_phy_read32(hisi_hba, phy_no, SAS_ERR_CNT4_REG);

	/* disparity err, invalid dword */
	err6_reg_val = hisi_sas_phy_read32(hisi_hba, phy_no, SAS_ERR_CNT6_REG);

	sphy->loss_of_dword_sync_count += (err4_reg_val >> 16) & 0xFFFF;
	sphy->phy_reset_problem_count += err4_reg_val & 0xFFFF;
	sphy->invalid_dword_count += (err6_reg_val & 0xFF0000) >> 16;
	sphy->running_disparity_error_count += err6_reg_val & 0xFF;
}

static void phys_init_v2_hw(struct hisi_hba *hisi_hba)
{
	start_phys_v2_hw(hisi_hba);
	int i;

	for (i = 0; i < hisi_hba->n_phy; i++) {
		struct hisi_sas_phy *phy = &hisi_hba->phy[i];
		struct asd_sas_phy *sas_phy = &phy->sas_phy;

		if (!sas_phy->phy->enabled)
			continue;

		start_phy_v2_hw(hisi_hba, i);
	}
}

static void sl_notify_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
@@ -1965,7 +2127,7 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
	}
	case DMA_RX_DATA_LEN_UNDERFLOW:
	{
		ts->residual = dma_rx_err_type;
		ts->residual = trans_tx_fail_type;
		ts->stat = SAS_DATA_UNDERRUN;
		break;
	}
@@ -2091,7 +2253,7 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
	}
	case DMA_RX_DATA_LEN_UNDERFLOW:
	{
		ts->residual = dma_rx_err_type;
		ts->residual = trans_tx_fail_type;
		ts->stat = SAS_DATA_UNDERRUN;
		break;
	}
@@ -2599,6 +2761,7 @@ static irqreturn_t int_phy_updown_v2_hw(int irq_no, void *p)
	struct hisi_hba *hisi_hba = p;
	u32 irq_msk;
	int phy_no = 0;
	irqreturn_t res = IRQ_NONE;

	irq_msk = (hisi_sas_read32(hisi_hba, HGC_INVLD_DQE_INFO)
		   >> HGC_INVLD_DQE_INFO_FB_CH0_OFF) & 0x1ff;
@@ -2613,15 +2776,15 @@ static irqreturn_t int_phy_updown_v2_hw(int irq_no, void *p)
			case CHL_INT0_SL_PHY_ENABLE_MSK:
				/* phy up */
				if (phy_up_v2_hw(phy_no, hisi_hba) ==
				    IRQ_NONE)
					return IRQ_NONE;
				    IRQ_HANDLED)
					res = IRQ_HANDLED;
				break;

			case CHL_INT0_NOT_RDY_MSK:
				/* phy down */
				if (phy_down_v2_hw(phy_no, hisi_hba) ==
				    IRQ_NONE)
					return IRQ_NONE;
				    IRQ_HANDLED)
					res = IRQ_HANDLED;
				break;

			case (CHL_INT0_NOT_RDY_MSK |
@@ -2631,13 +2794,13 @@ static irqreturn_t int_phy_updown_v2_hw(int irq_no, void *p)
				if (reg_value & BIT(phy_no)) {
					/* phy up */
					if (phy_up_v2_hw(phy_no, hisi_hba) ==
					    IRQ_NONE)
						return IRQ_NONE;
					    IRQ_HANDLED)
						res = IRQ_HANDLED;
				} else {
					/* phy down */
					if (phy_down_v2_hw(phy_no, hisi_hba) ==
					    IRQ_NONE)
						return IRQ_NONE;
					    IRQ_HANDLED)
						res = IRQ_HANDLED;
				}
				break;

@@ -2650,7 +2813,7 @@ static irqreturn_t int_phy_updown_v2_hw(int irq_no, void *p)
		phy_no++;
	}

	return IRQ_HANDLED;
	return res;
}

static void phy_bcast_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
@@ -2733,194 +2896,38 @@ static void
one_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba, u32 irq_value)
{
	struct device *dev = hisi_hba->dev;
	u32 reg_val;
	const struct hisi_sas_hw_error *ecc_error;
	u32 val;
	int i;

	if (irq_value & BIT(SAS_ECC_INTR_DQE_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_DQE_ECC_ADDR);
		dev_warn(dev, "hgc_dqe_acc1b_intr found: \
			 Ram address is 0x%08X\n",
			 (reg_val & HGC_DQE_ECC_1B_ADDR_MSK) >>
			 HGC_DQE_ECC_1B_ADDR_OFF);
	for (i = 0; i < ARRAY_SIZE(one_bit_ecc_errors); i++) {
		ecc_error = &one_bit_ecc_errors[i];
		if (irq_value & ecc_error->irq_msk) {
			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
			val &= ecc_error->msk;
			val >>= ecc_error->shift;
			dev_warn(dev, ecc_error->msg, val);
		}
	}

	if (irq_value & BIT(SAS_ECC_INTR_IOST_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_IOST_ECC_ADDR);
		dev_warn(dev, "hgc_iost_acc1b_intr found: \
			 Ram address is 0x%08X\n",
			 (reg_val & HGC_IOST_ECC_1B_ADDR_MSK) >>
			 HGC_IOST_ECC_1B_ADDR_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_ITCT_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_ITCT_ECC_ADDR);
		dev_warn(dev, "hgc_itct_acc1b_intr found: \
			 Ram address is 0x%08X\n",
			 (reg_val & HGC_ITCT_ECC_1B_ADDR_MSK) >>
			 HGC_ITCT_ECC_1B_ADDR_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
		dev_warn(dev, "hgc_iostl_acc1b_intr found: \
			 memory address is 0x%08X\n",
			 (reg_val & HGC_LM_DFX_STATUS2_IOSTLIST_MSK) >>
			 HGC_LM_DFX_STATUS2_IOSTLIST_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
		dev_warn(dev, "hgc_itctl_acc1b_intr found: \
			 memory address is 0x%08X\n",
			 (reg_val & HGC_LM_DFX_STATUS2_ITCTLIST_MSK) >>
			 HGC_LM_DFX_STATUS2_ITCTLIST_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_CQE_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_CQE_ECC_ADDR);
		dev_warn(dev, "hgc_cqe_acc1b_intr found: \
			 Ram address is 0x%08X\n",
			 (reg_val & HGC_CQE_ECC_1B_ADDR_MSK) >>
			 HGC_CQE_ECC_1B_ADDR_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
		dev_warn(dev, "rxm_mem0_acc1b_intr found: \
			 memory address is 0x%08X\n",
			 (reg_val & HGC_RXM_DFX_STATUS14_MEM0_MSK) >>
			 HGC_RXM_DFX_STATUS14_MEM0_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
		dev_warn(dev, "rxm_mem1_acc1b_intr found: \
			 memory address is 0x%08X\n",
			 (reg_val & HGC_RXM_DFX_STATUS14_MEM1_MSK) >>
			 HGC_RXM_DFX_STATUS14_MEM1_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
		dev_warn(dev, "rxm_mem2_acc1b_intr found: \
			 memory address is 0x%08X\n",
			 (reg_val & HGC_RXM_DFX_STATUS14_MEM2_MSK) >>
			 HGC_RXM_DFX_STATUS14_MEM2_OFF);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS15);
		dev_warn(dev, "rxm_mem3_acc1b_intr found: \
			 memory address is 0x%08X\n",
			 (reg_val & HGC_RXM_DFX_STATUS15_MEM3_MSK) >>
			 HGC_RXM_DFX_STATUS15_MEM3_OFF);
	}

}

static void multi_bit_ecc_error_process_v2_hw(struct hisi_hba *hisi_hba,
					      u32 irq_value)
{
	u32 reg_val;
	struct device *dev = hisi_hba->dev;
	const struct hisi_sas_hw_error *ecc_error;
	u32 val;
	int i;

	if (irq_value & BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_DQE_ECC_ADDR);
		dev_warn(dev, "hgc_dqe_accbad_intr (0x%x) found: \
			 Ram address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_DQE_ECC_MB_ADDR_MSK) >>
			 HGC_DQE_ECC_MB_ADDR_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_IOST_ECC_ADDR);
		dev_warn(dev, "hgc_iost_accbad_intr (0x%x) found: \
			 Ram address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_IOST_ECC_MB_ADDR_MSK) >>
			 HGC_IOST_ECC_MB_ADDR_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_ITCT_ECC_ADDR);
		dev_warn(dev,"hgc_itct_accbad_intr (0x%x) found: \
			 Ram address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_ITCT_ECC_MB_ADDR_MSK) >>
			 HGC_ITCT_ECC_MB_ADDR_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
		dev_warn(dev, "hgc_iostl_accbad_intr (0x%x) found: \
			 memory address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_LM_DFX_STATUS2_IOSTLIST_MSK) >>
			 HGC_LM_DFX_STATUS2_IOSTLIST_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_LM_DFX_STATUS2);
		dev_warn(dev, "hgc_itctl_accbad_intr (0x%x) found: \
			 memory address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_LM_DFX_STATUS2_ITCTLIST_MSK) >>
			 HGC_LM_DFX_STATUS2_ITCTLIST_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_CQE_ECC_ADDR);
		dev_warn(dev, "hgc_cqe_accbad_intr (0x%x) found: \
			 Ram address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_CQE_ECC_MB_ADDR_MSK) >>
			 HGC_CQE_ECC_MB_ADDR_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
		dev_warn(dev, "rxm_mem0_accbad_intr (0x%x) found: \
			 memory address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_RXM_DFX_STATUS14_MEM0_MSK) >>
			 HGC_RXM_DFX_STATUS14_MEM0_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
		dev_warn(dev, "rxm_mem1_accbad_intr (0x%x) found: \
			 memory address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_RXM_DFX_STATUS14_MEM1_MSK) >>
			 HGC_RXM_DFX_STATUS14_MEM1_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS14);
		dev_warn(dev, "rxm_mem2_accbad_intr (0x%x) found: \
			 memory address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_RXM_DFX_STATUS14_MEM2_MSK) >>
			 HGC_RXM_DFX_STATUS14_MEM2_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	}

	if (irq_value & BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF)) {
		reg_val = hisi_sas_read32(hisi_hba, HGC_RXM_DFX_STATUS15);
		dev_warn(dev, "rxm_mem3_accbad_intr (0x%x) found: \
			 memory address is 0x%08X\n",
			 irq_value,
			 (reg_val & HGC_RXM_DFX_STATUS15_MEM3_MSK) >>
			 HGC_RXM_DFX_STATUS15_MEM3_OFF);
		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
	for (i = 0; i < ARRAY_SIZE(multi_bit_ecc_errors); i++) {
		ecc_error = &multi_bit_ecc_errors[i];
		if (irq_value & ecc_error->irq_msk) {
			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
			val &= ecc_error->msk;
			val >>= ecc_error->shift;
			dev_warn(dev, ecc_error->msg, irq_value, val);
			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
		}
	}

	return;
@@ -3053,8 +3060,20 @@ static irqreturn_t fatal_axi_int_v2_hw(int irq_no, void *p)
				irq_value);
			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
		}

		if (irq_value & BIT(ENT_INT_SRC3_ITC_INT_OFF)) {
			u32 reg_val = hisi_sas_read32(hisi_hba, ITCT_CLR);
			u32 dev_id = reg_val & ITCT_DEV_MSK;
			struct hisi_sas_device *sas_dev =
					&hisi_hba->devices[dev_id];

			hisi_sas_write32(hisi_hba, ITCT_CLR, 0);
			dev_dbg(dev, "clear ITCT ok\n");
			complete(sas_dev->completion);
		}
	}

	hisi_sas_write32(hisi_hba, ENT_INT_SRC3, irq_value);
	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, irq_msk);

	return IRQ_HANDLED;
@@ -3251,97 +3270,92 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
{
	struct platform_device *pdev = hisi_hba->platform_dev;
	struct device *dev = &pdev->dev;
	int i, irq, rc, irq_map[128];

	int irq, rc, irq_map[128];
	int i, phy_no, fatal_no, queue_no, k;

	for (i = 0; i < 128; i++)
		irq_map[i] = platform_get_irq(pdev, i);

	for (i = 0; i < HISI_SAS_PHY_INT_NR; i++) {
		int idx = i;

		irq = irq_map[idx + 1]; /* Phy up/down is irq1 */
		if (!irq) {
			dev_err(dev, "irq init: fail map phy interrupt %d\n",
				idx);
			return -ENOENT;
		}

		irq = irq_map[i + 1]; /* Phy up/down is irq1 */
		rc = devm_request_irq(dev, irq, phy_interrupts[i], 0,
				      DRV_NAME " phy", hisi_hba);
		if (rc) {
			dev_err(dev, "irq init: could not request "
				"phy interrupt %d, rc=%d\n",
				irq, rc);
			return -ENOENT;
			rc = -ENOENT;
			goto free_phy_int_irqs;
		}
	}

	for (i = 0; i < hisi_hba->n_phy; i++) {
		struct hisi_sas_phy *phy = &hisi_hba->phy[i];
		int idx = i + 72; /* First SATA interrupt is irq72 */

		irq = irq_map[idx];
		if (!irq) {
			dev_err(dev, "irq init: fail map phy interrupt %d\n",
				idx);
			return -ENOENT;
		}
	for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
		struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];

		irq = irq_map[phy_no + 72];
		rc = devm_request_irq(dev, irq, sata_int_v2_hw, 0,
				      DRV_NAME " sata", phy);
		if (rc) {
			dev_err(dev, "irq init: could not request "
				"sata interrupt %d, rc=%d\n",
				irq, rc);
			return -ENOENT;
			rc = -ENOENT;
			goto free_sata_int_irqs;
		}
	}

	for (i = 0; i < HISI_SAS_FATAL_INT_NR; i++) {
		int idx = i;

		irq = irq_map[idx + 81];
		if (!irq) {
			dev_err(dev, "irq init: fail map fatal interrupt %d\n",
				idx);
			return -ENOENT;
		}

		rc = devm_request_irq(dev, irq, fatal_interrupts[i], 0,
	for (fatal_no = 0; fatal_no < HISI_SAS_FATAL_INT_NR; fatal_no++) {
		irq = irq_map[fatal_no + 81];
		rc = devm_request_irq(dev, irq, fatal_interrupts[fatal_no], 0,
				      DRV_NAME " fatal", hisi_hba);
		if (rc) {
			dev_err(dev,
				"irq init: could not request fatal interrupt %d, rc=%d\n",
				irq, rc);
			return -ENOENT;
			rc = -ENOENT;
			goto free_fatal_int_irqs;
		}
	}

	for (i = 0; i < hisi_hba->queue_count; i++) {
		int idx = i + 96; /* First cq interrupt is irq96 */
		struct hisi_sas_cq *cq = &hisi_hba->cq[i];
	for (queue_no = 0; queue_no < hisi_hba->queue_count; queue_no++) {
		struct hisi_sas_cq *cq = &hisi_hba->cq[queue_no];
		struct tasklet_struct *t = &cq->tasklet;

		irq = irq_map[idx];
		if (!irq) {
			dev_err(dev,
				"irq init: could not map cq interrupt %d\n",
				idx);
			return -ENOENT;
		}
		irq = irq_map[queue_no + 96];
		rc = devm_request_irq(dev, irq, cq_interrupt_v2_hw, 0,
				      DRV_NAME " cq", &hisi_hba->cq[i]);
				      DRV_NAME " cq", cq);
		if (rc) {
			dev_err(dev,
				"irq init: could not request cq interrupt %d, rc=%d\n",
				irq, rc);
			return -ENOENT;
			rc = -ENOENT;
			goto free_cq_int_irqs;
		}
		tasklet_init(t, cq_tasklet_v2_hw, (unsigned long)cq);
	}

	return 0;

free_cq_int_irqs:
	for (k = 0; k < queue_no; k++) {
		struct hisi_sas_cq *cq = &hisi_hba->cq[k];

		free_irq(irq_map[k + 96], cq);
		tasklet_kill(&cq->tasklet);
	}
free_fatal_int_irqs:
	for (k = 0; k < fatal_no; k++)
		free_irq(irq_map[k + 81], hisi_hba);
free_sata_int_irqs:
	for (k = 0; k < phy_no; k++) {
		struct hisi_sas_phy *phy = &hisi_hba->phy[k];

		free_irq(irq_map[k + 72], phy);
	}
free_phy_int_irqs:
	for (k = 0; k < i; k++)
		free_irq(irq_map[k + 1], hisi_hba);
	return rc;
}

static int hisi_sas_v2_init(struct hisi_hba *hisi_hba)
@@ -3383,19 +3397,21 @@ static void interrupt_disable_v2_hw(struct hisi_hba *hisi_hba)
		synchronize_irq(platform_get_irq(pdev, i));
}

static u32 get_phys_state_v2_hw(struct hisi_hba *hisi_hba)
{
	return hisi_sas_read32(hisi_hba, PHY_STATE);
}

static int soft_reset_v2_hw(struct hisi_hba *hisi_hba)
{
	struct device *dev = hisi_hba->dev;
	u32 old_state, state;
	int rc, cnt;
	int phy_no;

	old_state = hisi_sas_read32(hisi_hba, PHY_STATE);

	interrupt_disable_v2_hw(hisi_hba);
	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0x0);

	stop_phys_v2_hw(hisi_hba);
	hisi_sas_stop_phys(hisi_hba);

	mdelay(10);

@ -3425,22 +3441,6 @@ static int soft_reset_v2_hw(struct hisi_hba *hisi_hba)
|
|||
|
||||
phys_reject_stp_links_v2_hw(hisi_hba);
|
||||
|
||||
/* Re-enable the PHYs */
|
||||
for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
|
||||
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
|
||||
struct asd_sas_phy *sas_phy = &phy->sas_phy;
|
||||
|
||||
if (sas_phy->enabled)
|
||||
start_phy_v2_hw(hisi_hba, phy_no);
|
||||
}
|
||||
|
||||
/* Wait for the PHYs to come up and read the PHY state */
|
||||
msleep(1000);
|
||||
|
||||
state = hisi_sas_read32(hisi_hba, PHY_STATE);
|
||||
|
||||
hisi_sas_rescan_topology(hisi_hba, old_state, state);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@@ -3463,11 +3463,13 @@ static const struct hisi_sas_hw hisi_sas_v2_hw = {
 	.phy_enable = enable_phy_v2_hw,
 	.phy_disable = disable_phy_v2_hw,
 	.phy_hard_reset = phy_hard_reset_v2_hw,
 	.get_events = phy_get_events_v2_hw,
 	.phy_set_linkrate = phy_set_linkrate_v2_hw,
 	.phy_get_max_linkrate = phy_get_max_linkrate_v2_hw,
 	.max_command_entries = HISI_SAS_COMMAND_ENTRIES_V2_HW,
 	.complete_hdr_size = sizeof(struct hisi_sas_complete_v2_hdr),
+	.soft_reset = soft_reset_v2_hw,
+	.get_phys_state = get_phys_state_v2_hw,
 };

 static int hisi_sas_v2_probe(struct platform_device *pdev)

@@ -3491,10 +3493,17 @@ static int hisi_sas_v2_remove(struct platform_device *pdev)
 {
 	struct sas_ha_struct *sha = platform_get_drvdata(pdev);
 	struct hisi_hba *hisi_hba = sha->lldd_ha;
+	int i;

 	if (timer_pending(&hisi_hba->timer))
 		del_timer(&hisi_hba->timer);

+	for (i = 0; i < hisi_hba->queue_count; i++) {
+		struct hisi_sas_cq *cq = &hisi_hba->cq[i];
+
+		tasklet_kill(&cq->tasklet);
+	}
+
 	return hisi_sas_remove(pdev);
 }

@@ -23,14 +23,11 @@
 #define PHY_STATE 0x24
 #define PHY_PORT_NUM_MA 0x28
 #define PHY_CONN_RATE 0x30
 #define AXI_AHB_CLK_CFG 0x3c
 #define ITCT_CLR 0x44
 #define ITCT_CLR_EN_OFF 16
 #define ITCT_CLR_EN_MSK (0x1 << ITCT_CLR_EN_OFF)
 #define ITCT_DEV_OFF 0
 #define ITCT_DEV_MSK (0x7ff << ITCT_DEV_OFF)
 #define AXI_USER1 0x48
 #define AXI_USER2 0x4c
 #define IO_SATA_BROKEN_MSG_ADDR_LO 0x58
 #define IO_SATA_BROKEN_MSG_ADDR_HI 0x5c
 #define SATA_INITI_D2H_STORE_ADDR_LO 0x60

@@ -137,6 +134,7 @@
 #define TX_HARDRST_MSK (0x1 << TX_HARDRST_OFF)
 #define RX_IDAF_DWORD0 (PORT_BASE + 0xc4)
 #define RXOP_CHECK_CFG_H (PORT_BASE + 0xfc)
+#define STP_LINK_TIMER (PORT_BASE + 0x120)
 #define SAS_SSP_CON_TIMER_CFG (PORT_BASE + 0x134)
 #define SAS_SMP_CON_TIMER_CFG (PORT_BASE + 0x138)
 #define SAS_STP_CON_TIMER_CFG (PORT_BASE + 0x13c)

@@ -167,6 +165,31 @@
 #define PHYCTRL_PHY_ENA_MSK (PORT_BASE + 0x2bc)
 #define SL_RX_BCAST_CHK_MSK (PORT_BASE + 0x2c0)
 #define PHYCTRL_OOB_RESTART_MSK (PORT_BASE + 0x2c4)
+#define DMA_TX_STATUS (PORT_BASE + 0x2d0)
+#define DMA_TX_STATUS_BUSY_OFF 0
+#define DMA_TX_STATUS_BUSY_MSK (0x1 << DMA_TX_STATUS_BUSY_OFF)
+#define DMA_RX_STATUS (PORT_BASE + 0x2e8)
+#define DMA_RX_STATUS_BUSY_OFF 0
+#define DMA_RX_STATUS_BUSY_MSK (0x1 << DMA_RX_STATUS_BUSY_OFF)
+
+#define MAX_ITCT_HW 4096 /* max the hw can support */
+#define DEFAULT_ITCT_HW 2048 /* reset value, not reprogrammed */
+#if (HISI_SAS_MAX_DEVICES > DEFAULT_ITCT_HW)
+#error Max ITCT exceeded
+#endif
+
+#define AXI_MASTER_CFG_BASE (0x5000)
+#define AM_CTRL_GLOBAL (0x0)
+#define AM_CURR_TRANS_RETURN (0x150)
+
+#define AM_CFG_MAX_TRANS (0x5010)
+#define AM_CFG_SINGLE_PORT_MAX_TRANS (0x5014)
+#define AXI_CFG (0x5100)
+#define AM_ROB_ECC_ERR_ADDR (0x510c)
+#define AM_ROB_ECC_ONEBIT_ERR_ADDR_OFF 0
+#define AM_ROB_ECC_ONEBIT_ERR_ADDR_MSK (0xff << AM_ROB_ECC_ONEBIT_ERR_ADDR_OFF)
+#define AM_ROB_ECC_MULBIT_ERR_ADDR_OFF 8
+#define AM_ROB_ECC_MULBIT_ERR_ADDR_MSK (0xff << AM_ROB_ECC_MULBIT_ERR_ADDR_OFF)

 /* HW dma structures */
 /* Delivery queue header */
@@ -354,8 +377,6 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 	/* Global registers init */
 	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE,
			 (u32)((1ULL << hisi_hba->queue_count) - 1));
 	hisi_sas_write32(hisi_hba, AXI_USER1, 0x0);
 	hisi_sas_write32(hisi_hba, AXI_USER2, 0x40000060);
 	hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x108);
 	hisi_sas_write32(hisi_hba, CFG_1US_TIMER_TRSH, 0xd);
 	hisi_sas_write32(hisi_hba, INT_COAL_EN, 0x1);

@@ -371,15 +392,14 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 	hisi_sas_write32(hisi_hba, CHNL_PHYUPDOWN_INT_MSK, 0x0);
 	hisi_sas_write32(hisi_hba, CHNL_ENT_INT_MSK, 0x0);
 	hisi_sas_write32(hisi_hba, HGC_COM_INT_MSK, 0x0);
-	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0xfff00c30);
+	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0x0);
 	hisi_sas_write32(hisi_hba, AWQOS_AWCACHE_CFG, 0xf0f0);
 	hisi_sas_write32(hisi_hba, ARQOS_ARCACHE_CFG, 0xf0f0);
 	for (i = 0; i < hisi_hba->queue_count; i++)
 		hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK+0x4*i, 0);

-	hisi_sas_write32(hisi_hba, AXI_AHB_CLK_CFG, 1);
 	hisi_sas_write32(hisi_hba, HYPER_STREAM_ID_EN_CFG, 1);
 	hisi_sas_write32(hisi_hba, CFG_MAX_TAG, 0xfff07fff);
 	hisi_sas_write32(hisi_hba, AXI_MASTER_CFG_BASE, 0x30000);

 	for (i = 0; i < hisi_hba->n_phy; i++) {
 		hisi_sas_phy_write32(hisi_hba, i, PROG_PHY_LINK_RATE, 0x801);

@@ -389,7 +409,6 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 		hisi_sas_phy_write32(hisi_hba, i, RXOP_CHECK_CFG_H, 0x1000);
 		hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK, 0xffffffff);
 		hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0x8ffffbff);
 		hisi_sas_phy_write32(hisi_hba, i, SL_CFG, 0x83f801fc);
 		hisi_sas_phy_write32(hisi_hba, i, PHY_CTRL_RDY_MSK, 0x0);
 		hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_NOT_RDY_MSK, 0x0);
 		hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_DWS_RESET_MSK, 0x0);

@@ -398,9 +417,11 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 		hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_OOB_RESTART_MSK, 0x0);
 		hisi_sas_phy_write32(hisi_hba, i, PHY_CTRL, 0x199b4fa);
 		hisi_sas_phy_write32(hisi_hba, i, SAS_SSP_CON_TIMER_CFG,
-				     0xa0064);
+				     0xa03e8);
 		hisi_sas_phy_write32(hisi_hba, i, SAS_STP_CON_TIMER_CFG,
-				     0xa0064);
+				     0xa03e8);
+		hisi_sas_phy_write32(hisi_hba, i, STP_LINK_TIMER,
+				     0x7f7a120);
 	}
 	for (i = 0; i < hisi_hba->queue_count; i++) {
 		/* Delivery queue */

@@ -578,8 +599,6 @@ static void free_device_v3_hw(struct hisi_hba *hisi_hba,
 	memset(itct, 0, sizeof(struct hisi_sas_itct));
 	hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
			 ENT_INT_SRC3_ITC_INT_MSK);
-	hisi_hba->devices[dev_id].dev_type = SAS_PHY_UNUSED;
-	hisi_hba->devices[dev_id].dev_status = HISI_SAS_DEV_NORMAL;

 	/* clear the itct */
 	hisi_sas_write32(hisi_hba, ITCT_CLR, 0);
@@ -610,8 +629,52 @@ static void dereg_device_v3_hw(struct hisi_hba *hisi_hba,
			 1 << CFG_ABT_SET_IPTT_DONE_OFF);
 }

+static int reset_hw_v3_hw(struct hisi_hba *hisi_hba)
+{
+	struct device *dev = hisi_hba->dev;
+	int ret;
+	u32 val;
+
+	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0);
+
+	/* Disable all of the PHYs */
+	hisi_sas_stop_phys(hisi_hba);
+	udelay(50);
+
+	/* Ensure axi bus idle */
+	ret = readl_poll_timeout(hisi_hba->regs + AXI_CFG, val, !val,
+				 20000, 1000000);
+	if (ret) {
+		dev_err(dev, "axi bus is not idle, ret = %d!\n", ret);
+		return -EIO;
+	}
+
+	if (ACPI_HANDLE(dev)) {
+		acpi_status s;
+
+		s = acpi_evaluate_object(ACPI_HANDLE(dev), "_RST", NULL, NULL);
+		if (ACPI_FAILURE(s)) {
+			dev_err(dev, "Reset failed\n");
+			return -EIO;
+		}
+	} else
+		dev_err(dev, "no reset method!\n");
+
+	return 0;
+}
+
 static int hw_init_v3_hw(struct hisi_hba *hisi_hba)
 {
+	struct device *dev = hisi_hba->dev;
+	int rc;
+
+	rc = reset_hw_v3_hw(hisi_hba);
+	if (rc) {
+		dev_err(dev, "hisi_sas_reset_hw failed, rc=%d", rc);
+		return rc;
+	}
+
+	msleep(100);
 	init_reg_v3_hw(hisi_hba);

 	return 0;
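reset_hw_v3_hw() above waits for the AXI bus to go idle with readl_poll_timeout() before invoking the ACPI `_RST` method. A minimal sketch of that poll-until-condition-or-timeout idiom (`read_status()` and the attempt budget are stand-ins for illustration, not the kernel helper's signature, which also takes sleep and timeout intervals in microseconds):

```c
#include <assert.h>

static int reads_left = 3;	/* fake status register: non-zero for 3 reads */

static unsigned int read_status(void)
{
	return reads_left > 0 ? (unsigned int)reads_left-- : 0;
}

/* returns 0 once the condition (status == 0) holds, -1 on timeout */
int poll_until_idle(int max_attempts)
{
	int attempt;

	for (attempt = 0; attempt < max_attempts; attempt++) {
		if (read_status() == 0)
			return 0;
		/* the real helper sleeps between reads (udelay/usleep_range) */
	}
	return -1;	/* the kernel helper returns -ETIMEDOUT here */
}
```

The same shape appears again in soft_reset_v3_hw(), polling `AM_CURR_TRANS_RETURN` until the bus reports idle.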
@@ -640,25 +703,12 @@ static void start_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 	enable_phy_v3_hw(hisi_hba, phy_no);
 }

-static void stop_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
-{
-	disable_phy_v3_hw(hisi_hba, phy_no);
-}
-
-static void start_phys_v3_hw(struct hisi_hba *hisi_hba)
-{
-	int i;
-
-	for (i = 0; i < hisi_hba->n_phy; i++)
-		start_phy_v3_hw(hisi_hba, i);
-}
-
 static void phy_hard_reset_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 {
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	u32 txid_auto;

-	stop_phy_v3_hw(hisi_hba, phy_no);
+	disable_phy_v3_hw(hisi_hba, phy_no);
 	if (phy->identify.device_type == SAS_END_DEVICE) {
 		txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, TXID_AUTO);
 		hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO,

@@ -675,7 +725,17 @@ enum sas_linkrate phy_get_max_linkrate_v3_hw(void)

 static void phys_init_v3_hw(struct hisi_hba *hisi_hba)
 {
-	start_phys_v3_hw(hisi_hba);
+	int i;
+
+	for (i = 0; i < hisi_hba->n_phy; i++) {
+		struct hisi_sas_phy *phy = &hisi_hba->phy[i];
+		struct asd_sas_phy *sas_phy = &phy->sas_phy;
+
+		if (!sas_phy->phy->enabled)
+			continue;
+
+		start_phy_v3_hw(hisi_hba, i);
+	}
 }

 static void sl_notify_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
@@ -1140,7 +1200,6 @@ end:

 static int phy_down_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
 {
-	int res = 0;
 	u32 phy_state, sl_ctrl, txid_auto;
 	struct device *dev = hisi_hba->dev;

@@ -1161,7 +1220,7 @@ static int phy_down_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0, CHL_INT0_NOT_RDY_MSK);
 	hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 0);

-	return res;
+	return 0;
 }

 static void phy_bcast_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
@@ -1259,7 +1318,7 @@ static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
 	if (irq_msk & (2 << (phy_no * 4)) && irq_value0) {
 		hisi_sas_phy_write32(hisi_hba, phy_no,
				     CHL_INT0, irq_value0
				     & (~CHL_INT0_HOTPLUG_TOUT_MSK)
				     & (~CHL_INT0_SL_RX_BCST_ACK_MSK)
				     & (~CHL_INT0_SL_PHY_ENABLE_MSK)
				     & (~CHL_INT0_NOT_RDY_MSK));
 	}
@@ -1620,6 +1679,104 @@ static int hisi_sas_v3_init(struct hisi_hba *hisi_hba)
 	return 0;
 }

+static void phy_set_linkrate_v3_hw(struct hisi_hba *hisi_hba, int phy_no,
+		struct sas_phy_linkrates *r)
+{
+	u32 prog_phy_link_rate =
+		hisi_sas_phy_read32(hisi_hba, phy_no, PROG_PHY_LINK_RATE);
+	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+	int i;
+	enum sas_linkrate min, max;
+	u32 rate_mask = 0;
+
+	if (r->maximum_linkrate == SAS_LINK_RATE_UNKNOWN) {
+		max = sas_phy->phy->maximum_linkrate;
+		min = r->minimum_linkrate;
+	} else if (r->minimum_linkrate == SAS_LINK_RATE_UNKNOWN) {
+		max = r->maximum_linkrate;
+		min = sas_phy->phy->minimum_linkrate;
+	} else
+		return;
+
+	sas_phy->phy->maximum_linkrate = max;
+	sas_phy->phy->minimum_linkrate = min;
+
+	min -= SAS_LINK_RATE_1_5_GBPS;
+	max -= SAS_LINK_RATE_1_5_GBPS;
+
+	for (i = 0; i <= max; i++)
+		rate_mask |= 1 << (i * 2);
+
+	prog_phy_link_rate &= ~0xff;
+	prog_phy_link_rate |= rate_mask;
+
+	hisi_sas_phy_write32(hisi_hba, phy_no, PROG_PHY_LINK_RATE,
+			     prog_phy_link_rate);
+
+	phy_hard_reset_v3_hw(hisi_hba, phy_no);
+}
+
+static void interrupt_disable_v3_hw(struct hisi_hba *hisi_hba)
+{
+	struct pci_dev *pdev = hisi_hba->pci_dev;
+	int i;
+
+	synchronize_irq(pci_irq_vector(pdev, 1));
+	synchronize_irq(pci_irq_vector(pdev, 2));
+	synchronize_irq(pci_irq_vector(pdev, 11));
+	for (i = 0; i < hisi_hba->queue_count; i++) {
+		hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK + 0x4 * i, 0x1);
+		synchronize_irq(pci_irq_vector(pdev, i + 16));
+	}
+
+	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0xffffffff);
+	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0xffffffff);
+	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0xffffffff);
+	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0xffffffff);
+
+	for (i = 0; i < hisi_hba->n_phy; i++) {
+		hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK, 0xffffffff);
+		hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0xffffffff);
+		hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_NOT_RDY_MSK, 0x1);
+		hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_PHY_ENA_MSK, 0x1);
+		hisi_sas_phy_write32(hisi_hba, i, SL_RX_BCAST_CHK_MSK, 0x1);
+	}
+}
+
+static u32 get_phys_state_v3_hw(struct hisi_hba *hisi_hba)
+{
+	return hisi_sas_read32(hisi_hba, PHY_STATE);
+}
+
+static int soft_reset_v3_hw(struct hisi_hba *hisi_hba)
+{
+	struct device *dev = hisi_hba->dev;
+	int rc;
+	u32 status;
+
+	interrupt_disable_v3_hw(hisi_hba);
+	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0x0);
+
+	hisi_sas_stop_phys(hisi_hba);
+
+	mdelay(10);
+
+	hisi_sas_write32(hisi_hba, AXI_MASTER_CFG_BASE + AM_CTRL_GLOBAL, 0x1);
+
+	/* wait until bus idle */
+	rc = readl_poll_timeout(hisi_hba->regs + AXI_MASTER_CFG_BASE +
+		AM_CURR_TRANS_RETURN, status, status == 0x3, 10, 100);
+	if (rc) {
+		dev_err(dev, "axi bus is not idle, rc = %d\n", rc);
+		return rc;
+	}
+
+	hisi_sas_init_mem(hisi_hba);
+
+	return hw_init_v3_hw(hisi_hba);
+}
+
 static const struct hisi_sas_hw hisi_sas_v3_hw = {
 	.hw_init = hisi_sas_v3_init,
 	.setup_itct = setup_itct_v3_hw,
@@ -1640,7 +1797,10 @@ static const struct hisi_sas_hw hisi_sas_v3_hw = {
 	.phy_disable = disable_phy_v3_hw,
 	.phy_hard_reset = phy_hard_reset_v3_hw,
 	.phy_get_max_linkrate = phy_get_max_linkrate_v3_hw,
+	.phy_set_linkrate = phy_set_linkrate_v3_hw,
 	.dereg_device = dereg_device_v3_hw,
+	.soft_reset = soft_reset_v3_hw,
+	.get_phys_state = get_phys_state_v3_hw,
 };

 static struct Scsi_Host *
@@ -1651,8 +1811,10 @@ hisi_sas_shost_alloc_pci(struct pci_dev *pdev)
 	struct device *dev = &pdev->dev;

 	shost = scsi_host_alloc(hisi_sas_sht, sizeof(*hisi_hba));
-	if (!shost)
-		goto err_out;
+	if (!shost) {
+		dev_err(dev, "shost alloc failed\n");
+		return NULL;
+	}
 	hisi_hba = shost_priv(shost);

 	hisi_hba->hw = &hisi_sas_v3_hw;

@@ -1673,6 +1835,7 @@ hisi_sas_shost_alloc_pci(struct pci_dev *pdev)

 	return shost;
 err_out:
+	scsi_host_put(shost);
 	dev_err(dev, "shost alloc failed\n");
 	return NULL;
 }
@@ -1781,7 +1944,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 err_out_register_ha:
 	scsi_remove_host(shost);
 err_out_ha:
-	kfree(shost);
+	scsi_host_put(shost);
 err_out_regions:
 	pci_release_regions(pdev);
 err_out_disable_device:

@@ -1801,6 +1964,7 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
 		struct hisi_sas_cq *cq = &hisi_hba->cq[i];

 		free_irq(pci_irq_vector(pdev, i+16), cq);
+		tasklet_kill(&cq->tasklet);
 	}
 	pci_free_irq_vectors(pdev);
 }

@@ -1810,14 +1974,16 @@ static void hisi_sas_v3_remove(struct pci_dev *pdev)
 	struct device *dev = &pdev->dev;
 	struct sas_ha_struct *sha = dev_get_drvdata(dev);
 	struct hisi_hba *hisi_hba = sha->lldd_ha;
+	struct Scsi_Host *shost = sha->core.shost;

 	sas_unregister_ha(sha);
 	sas_remove_host(sha->core.shost);

-	hisi_sas_free(hisi_hba);
 	hisi_sas_v3_destroy_irqs(pdev, hisi_hba);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
+	hisi_sas_free(hisi_hba);
+	scsi_host_put(shost);
 }

 enum {

@@ -1839,7 +2005,6 @@ static struct pci_driver sas_v3_pci_driver = {

 module_pci_driver(sas_v3_pci_driver);

 MODULE_VERSION(DRV_VERSION);
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("John Garry <john.garry@huawei.com>");
 MODULE_DESCRIPTION("HISILICON SAS controller v3 hw driver based on pci device");

@@ -81,11 +81,8 @@ MODULE_DESCRIPTION("Driver for HP Smart Array Controller version " \
 MODULE_SUPPORTED_DEVICE("HP Smart Array Controllers");
 MODULE_VERSION(HPSA_DRIVER_VERSION);
 MODULE_LICENSE("GPL");
+MODULE_ALIAS("cciss");

-static int hpsa_allow_any;
-module_param(hpsa_allow_any, int, S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(hpsa_allow_any,
-	"Allow hpsa driver to access unknown HP Smart Array hardware");
 static int hpsa_simple_mode;
 module_param(hpsa_simple_mode, int, S_IRUGO|S_IWUSR);
 MODULE_PARM_DESC(hpsa_simple_mode,

@@ -148,6 +145,8 @@ static const struct pci_device_id hpsa_pci_device_id[] = {
 	{PCI_VENDOR_ID_HP, 0x333f, 0x103c, 0x333f},
 	{PCI_VENDOR_ID_HP, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
		PCI_CLASS_STORAGE_RAID << 8, 0xffff << 8, 0},
+	{PCI_VENDOR_ID_COMPAQ, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+		PCI_CLASS_STORAGE_RAID << 8, 0xffff << 8, 0},
 	{0,}
 };

@@ -158,6 +157,26 @@ MODULE_DEVICE_TABLE(pci, hpsa_pci_device_id);
  * access = Address of the struct of function pointers
  */
 static struct board_type products[] = {
+	{0x40700E11, "Smart Array 5300", &SA5A_access},
+	{0x40800E11, "Smart Array 5i", &SA5B_access},
+	{0x40820E11, "Smart Array 532", &SA5B_access},
+	{0x40830E11, "Smart Array 5312", &SA5B_access},
+	{0x409A0E11, "Smart Array 641", &SA5A_access},
+	{0x409B0E11, "Smart Array 642", &SA5A_access},
+	{0x409C0E11, "Smart Array 6400", &SA5A_access},
+	{0x409D0E11, "Smart Array 6400 EM", &SA5A_access},
+	{0x40910E11, "Smart Array 6i", &SA5A_access},
+	{0x3225103C, "Smart Array P600", &SA5A_access},
+	{0x3223103C, "Smart Array P800", &SA5A_access},
+	{0x3234103C, "Smart Array P400", &SA5A_access},
+	{0x3235103C, "Smart Array P400i", &SA5A_access},
+	{0x3211103C, "Smart Array E200i", &SA5A_access},
+	{0x3212103C, "Smart Array E200", &SA5A_access},
+	{0x3213103C, "Smart Array E200i", &SA5A_access},
+	{0x3214103C, "Smart Array E200i", &SA5A_access},
+	{0x3215103C, "Smart Array E200i", &SA5A_access},
+	{0x3237103C, "Smart Array E500", &SA5A_access},
+	{0x323D103C, "Smart Array P700m", &SA5A_access},
 	{0x3241103C, "Smart Array P212", &SA5_access},
 	{0x3243103C, "Smart Array P410", &SA5_access},
 	{0x3245103C, "Smart Array P410i", &SA5_access},
@@ -278,7 +297,8 @@ static int hpsa_find_cfg_addrs(struct pci_dev *pdev, void __iomem *vaddr,
	u64 *cfg_offset);
 static int hpsa_pci_find_memory_BAR(struct pci_dev *pdev,
	unsigned long *memory_bar);
-static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id);
+static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id,
+	bool *legacy_board);
 static int wait_for_device_to_become_ready(struct ctlr_info *h,
	unsigned char lunaddr[],
	int reply_queue);

@@ -866,6 +886,16 @@ static ssize_t host_show_ctlr_num(struct device *dev,
 	return snprintf(buf, 20, "%d\n", h->ctlr);
 }

+static ssize_t host_show_legacy_board(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	struct ctlr_info *h;
+	struct Scsi_Host *shost = class_to_shost(dev);
+
+	h = shost_to_hba(shost);
+	return snprintf(buf, 20, "%d\n", h->legacy_board ? 1 : 0);
+}
+
 static DEVICE_ATTR(raid_level, S_IRUGO, raid_level_show, NULL);
 static DEVICE_ATTR(lunid, S_IRUGO, lunid_show, NULL);
 static DEVICE_ATTR(unique_id, S_IRUGO, unique_id_show, NULL);
@@ -891,6 +921,8 @@ static DEVICE_ATTR(lockup_detected, S_IRUGO,
	host_show_lockup_detected, NULL);
 static DEVICE_ATTR(ctlr_num, S_IRUGO,
	host_show_ctlr_num, NULL);
+static DEVICE_ATTR(legacy_board, S_IRUGO,
+	host_show_legacy_board, NULL);

 static struct device_attribute *hpsa_sdev_attrs[] = {
 	&dev_attr_raid_level,

@@ -912,6 +944,7 @@ static struct device_attribute *hpsa_shost_attrs[] = {
 	&dev_attr_raid_offload_debug,
 	&dev_attr_lockup_detected,
 	&dev_attr_ctlr_num,
+	&dev_attr_legacy_board,
 	NULL,
 };

@@ -3565,7 +3598,7 @@ static int hpsa_scsi_do_report_luns(struct ctlr_info *h, int logical,
 	memset(scsi3addr, 0, sizeof(scsi3addr));
 	if (fill_cmd(c, logical ? HPSA_REPORT_LOG : HPSA_REPORT_PHYS, h,
			buf, bufsize, 0, scsi3addr, TYPE_CMD)) {
-		rc = -1;
+		rc = -EAGAIN;
 		goto out;
 	}
 	if (extended_response)

@@ -3578,16 +3611,19 @@ static int hpsa_scsi_do_report_luns(struct ctlr_info *h, int logical,
 	if (ei->CommandStatus != 0 &&
	    ei->CommandStatus != CMD_DATA_UNDERRUN) {
 		hpsa_scsi_interpret_error(h, c);
-		rc = -1;
+		rc = -EIO;
 	} else {
 		struct ReportLUNdata *rld = buf;

 		if (rld->extended_response_flag != extended_response) {
-			dev_err(&h->pdev->dev,
-				"report luns requested format %u, got %u\n",
-				extended_response,
-				rld->extended_response_flag);
-			rc = -1;
+			if (!h->legacy_board) {
+				dev_err(&h->pdev->dev,
+					"report luns requested format %u, got %u\n",
+					extended_response,
+					rld->extended_response_flag);
+				rc = -EINVAL;
+			} else
+				rc = -EOPNOTSUPP;
 		}
 	}
 out:
@@ -3603,7 +3639,7 @@ static inline int hpsa_scsi_do_report_phys_luns(struct ctlr_info *h,

 	rc = hpsa_scsi_do_report_luns(h, 0, buf, bufsize,
			HPSA_REPORT_PHYS_EXTENDED);
-	if (!rc || !hpsa_allow_any)
+	if (!rc || rc != -EOPNOTSUPP)
 		return rc;

 	/* REPORT PHYS EXTENDED is not supported */

@@ -3791,7 +3827,7 @@ static int hpsa_update_device_info(struct ctlr_info *h,
 	memset(this_device->device_id, 0,
		sizeof(this_device->device_id));
 	if (hpsa_get_device_id(h, scsi3addr, this_device->device_id, 8,
-		sizeof(this_device->device_id)))
+		sizeof(this_device->device_id)) < 0)
 		dev_err(&h->pdev->dev,
			"hpsa%d: %s: can't get device id for host %d:C0:T%d:L%d\t%s\t%.16s\n",
			h->ctlr, __func__,

@@ -3809,6 +3845,16 @@ static int hpsa_update_device_info(struct ctlr_info *h,
 	if (h->fw_support & MISC_FW_RAID_OFFLOAD_BASIC)
 		hpsa_get_ioaccel_status(h, scsi3addr, this_device);
 	volume_offline = hpsa_volume_offline(h, scsi3addr);
+	if (volume_offline == HPSA_VPD_LV_STATUS_UNSUPPORTED &&
+	    h->legacy_board) {
+		/*
+		 * Legacy boards might not support volume status
+		 */
+		dev_info(&h->pdev->dev,
+			 "C0:T%d:L%d Volume status not available, assuming online.\n",
+			 this_device->target, this_device->lun);
+		volume_offline = 0;
+	}
 	this_device->volume_offline = volume_offline;
 	if (volume_offline == HPSA_LV_FAILED) {
 		rc = HPSA_LV_FAILED;
@@ -6571,7 +6617,6 @@ static int fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
 		default:
			dev_warn(&h->pdev->dev, "unknown command 0x%c\n", cmd);
			BUG();
-			return -1;
 		}
 	} else if (cmd_type == TYPE_MSG) {
 		switch (cmd) {

@@ -7232,7 +7277,8 @@ static int hpsa_interrupt_mode(struct ctlr_info *h)
 	return 0;
 }

-static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id)
+static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id,
+				bool *legacy_board)
 {
 	int i;
 	u32 subsystem_vendor_id, subsystem_device_id;

@@ -7242,17 +7288,24 @@ static int hpsa_lookup_board_id(struct pci_dev *pdev, u32 *board_id)
 	*board_id = ((subsystem_device_id << 16) & 0xffff0000) |
			subsystem_vendor_id;

+	if (legacy_board)
+		*legacy_board = false;
 	for (i = 0; i < ARRAY_SIZE(products); i++)
-		if (*board_id == products[i].board_id)
+		if (*board_id == products[i].board_id) {
+			if (products[i].access != &SA5A_access &&
+			    products[i].access != &SA5B_access)
+				return i;
+			dev_warn(&pdev->dev,
+				 "legacy board ID: 0x%08x\n",
+				 *board_id);
+			if (legacy_board)
+				*legacy_board = true;
 			return i;
+		}

-	if ((subsystem_vendor_id != PCI_VENDOR_ID_HP &&
-	     subsystem_vendor_id != PCI_VENDOR_ID_COMPAQ) ||
-	    !hpsa_allow_any) {
-		dev_warn(&pdev->dev, "unrecognized board ID: "
-			"0x%08x, ignoring.\n", *board_id);
-		return -ENODEV;
-	}
+	dev_warn(&pdev->dev, "unrecognized board ID: 0x%08x\n", *board_id);
+	if (legacy_board)
+		*legacy_board = true;
 	return ARRAY_SIZE(products) - 1; /* generic unknown smart array */
 }

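The reworked hpsa_lookup_board_id() above scans the product table, flags entries with a legacy access method, and falls back to the last generic entry instead of rejecting unknown boards. A standalone sketch of that lookup-with-fallback shape (the table contents and helper names here are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct board { unsigned int id; bool legacy; };

static const struct board products[] = {
	{ 0x3241103C, false },	/* modern entry, e.g. a P212-style board */
	{ 0x40800E11, true },	/* legacy entry, e.g. a 5i-style board */
	{ 0x00000000, false },	/* generic "unknown smart array" fallback */
};

#define NPROD (sizeof(products) / sizeof(products[0]))

/* returns the table index; *legacy is set for legacy or unknown boards */
size_t lookup_board(unsigned int id, bool *legacy)
{
	size_t i;

	*legacy = false;
	for (i = 0; i < NPROD - 1; i++)
		if (products[i].id == id) {
			*legacy = products[i].legacy;
			return i;
		}
	*legacy = true;		/* unrecognized: treated as legacy */
	return NPROD - 1;	/* fall back to the generic entry */
}
```

Keeping the fallback entry last is what lets the driver drop the old `hpsa_allow_any` gate: every board resolves to some entry, and callers consult the legacy flag instead.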
@@ -7555,13 +7608,14 @@ static void hpsa_free_pci_init(struct ctlr_info *h)
 static int hpsa_pci_init(struct ctlr_info *h)
 {
 	int prod_index, err;
+	bool legacy_board;

-	prod_index = hpsa_lookup_board_id(h->pdev, &h->board_id);
+	prod_index = hpsa_lookup_board_id(h->pdev, &h->board_id, &legacy_board);
 	if (prod_index < 0)
 		return prod_index;
 	h->product_name = products[prod_index].product_name;
 	h->access = *(products[prod_index].access);

+	h->legacy_board = legacy_board;
 	pci_disable_link_state(h->pdev, PCIE_LINK_STATE_L0S |
				PCIE_LINK_STATE_L1 | PCIE_LINK_STATE_CLKPM);

@@ -8241,7 +8295,7 @@ static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (number_of_controllers == 0)
 		printk(KERN_INFO DRIVER_NAME "\n");

-	rc = hpsa_lookup_board_id(pdev, &board_id);
+	rc = hpsa_lookup_board_id(pdev, &board_id, NULL);
 	if (rc < 0) {
 		dev_warn(&pdev->dev, "Board ID not found\n");
 		return rc;
@@ -9443,14 +9497,6 @@ hpsa_sas_phy_speed(struct sas_phy *phy, struct sas_phy_linkrates *rates)
 	return -EINVAL;
 }

-/* SMP = Serial Management Protocol */
-static int
-hpsa_sas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
-		struct request *req)
-{
-	return -EINVAL;
-}
-
 static struct sas_function_template hpsa_sas_transport_functions = {
 	.get_linkerrors = hpsa_sas_get_linkerrors,
 	.get_enclosure_identifier = hpsa_sas_get_enclosure_identifier,

@@ -9460,7 +9506,6 @@ static struct sas_function_template hpsa_sas_transport_functions = {
 	.phy_setup = hpsa_sas_phy_setup,
 	.phy_release = hpsa_sas_phy_release,
 	.set_phy_speed = hpsa_sas_phy_speed,
-	.smp_handler = hpsa_sas_smp_handler,
 };

 /*

@@ -293,6 +293,7 @@ struct ctlr_info {
 	int drv_req_rescan;
 	int raid_offload_debug;
 	int discovery_polling;
+	int legacy_board;
 	struct ReportLUNdata *lastlogicals;
 	int needs_abort_tags_swizzled;
 	struct workqueue_struct *resubmit_wq;

@@ -447,6 +448,23 @@ static void SA5_intr_mask(struct ctlr_info *h, unsigned long val)
 	}
 }

+/*
+ * Variant of the above; 0x04 turns interrupts off...
+ */
+static void SA5B_intr_mask(struct ctlr_info *h, unsigned long val)
+{
+	if (val) { /* Turn interrupts on */
+		h->interrupts_enabled = 1;
+		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
+		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
+	} else { /* Turn them off */
+		h->interrupts_enabled = 0;
+		writel(SA5B_INTR_OFF,
+			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
+		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
+	}
+}
+
 static void SA5_performant_intr_mask(struct ctlr_info *h, unsigned long val)
 {
 	if (val) { /* turn on interrupts */

@@ -549,6 +567,14 @@ static bool SA5_ioaccel_mode1_intr_pending(struct ctlr_info *h)
		true : false;
 }

+/*
+ * Returns true if an interrupt is pending..
+ */
+static bool SA5B_intr_pending(struct ctlr_info *h)
+{
+	return readl(h->vaddr + SA5_INTR_STATUS) & SA5B_INTR_PENDING;
+}
+
 #define IOACCEL_MODE1_REPLY_QUEUE_INDEX 0x1A0
 #define IOACCEL_MODE1_PRODUCER_INDEX 0x1B8
 #define IOACCEL_MODE1_CONSUMER_INDEX 0x1BC
@@ -581,38 +607,53 @@ static unsigned long SA5_ioaccel_mode1_completed(struct ctlr_info *h, u8 q)
 }

 static struct access_method SA5_access = {
-	.submit_command = SA5_submit_command,
-	.set_intr_mask = SA5_intr_mask,
-	.intr_pending = SA5_intr_pending,
-	.command_completed = SA5_completed,
+	.submit_command =	SA5_submit_command,
+	.set_intr_mask =	SA5_intr_mask,
+	.intr_pending =		SA5_intr_pending,
+	.command_completed =	SA5_completed,
 };

+/* Duplicate entry of the above to mark unsupported boards */
+static struct access_method SA5A_access = {
+	.submit_command =	SA5_submit_command,
+	.set_intr_mask =	SA5_intr_mask,
+	.intr_pending =		SA5_intr_pending,
+	.command_completed =	SA5_completed,
+};
+
+static struct access_method SA5B_access = {
+	.submit_command =	SA5_submit_command,
+	.set_intr_mask =	SA5B_intr_mask,
+	.intr_pending =		SA5B_intr_pending,
+	.command_completed =	SA5_completed,
+};
+
 static struct access_method SA5_ioaccel_mode1_access = {
-	.submit_command = SA5_submit_command,
-	.set_intr_mask = SA5_performant_intr_mask,
-	.intr_pending = SA5_ioaccel_mode1_intr_pending,
-	.command_completed = SA5_ioaccel_mode1_completed,
+	.submit_command =	SA5_submit_command,
+	.set_intr_mask =	SA5_performant_intr_mask,
+	.intr_pending =		SA5_ioaccel_mode1_intr_pending,
+	.command_completed =	SA5_ioaccel_mode1_completed,
 };

 static struct access_method SA5_ioaccel_mode2_access = {
-	.submit_command = SA5_submit_command_ioaccel2,
-	.set_intr_mask = SA5_performant_intr_mask,
-	.intr_pending = SA5_performant_intr_pending,
-	.command_completed = SA5_performant_completed,
+	.submit_command =	SA5_submit_command_ioaccel2,
+	.set_intr_mask =	SA5_performant_intr_mask,
+	.intr_pending =		SA5_performant_intr_pending,
+	.command_completed =	SA5_performant_completed,
 };

 static struct access_method SA5_performant_access = {
-	.submit_command = SA5_submit_command,
-	.set_intr_mask = SA5_performant_intr_mask,
-	.intr_pending = SA5_performant_intr_pending,
-	.command_completed = SA5_performant_completed,
+	.submit_command =	SA5_submit_command,
+	.set_intr_mask =	SA5_performant_intr_mask,
+	.intr_pending =		SA5_performant_intr_pending,
+	.command_completed =	SA5_performant_completed,
 };

 static struct access_method SA5_performant_access_no_read = {
-	.submit_command = SA5_submit_command_no_read,
-	.set_intr_mask = SA5_performant_intr_mask,
-	.intr_pending = SA5_performant_intr_pending,
-	.command_completed = SA5_performant_completed,
+	.submit_command =	SA5_submit_command_no_read,
+	.set_intr_mask =	SA5_performant_intr_mask,
+	.intr_pending =		SA5_performant_intr_pending,
+	.command_completed =	SA5_performant_completed,
 };

 struct board_type {

@@ -1106,12 +1106,10 @@ static int hptiop_reset_hba(struct hptiop_hba *hba)

 static int hptiop_reset(struct scsi_cmnd *scp)
 {
-	struct Scsi_Host * host = scp->device->host;
-	struct hptiop_hba * hba = (struct hptiop_hba *)host->hostdata;
+	struct hptiop_hba * hba = (struct hptiop_hba *)scp->device->host->hostdata;

-	printk(KERN_WARNING "hptiop_reset(%d/%d/%d) scp=%p\n",
-	       scp->device->host->host_no, scp->device->channel,
-	       scp->device->id, scp);
+	printk(KERN_WARNING "hptiop_reset(%d/%d/%d)\n",
+	       scp->device->host->host_no, -1, -1);

 	return hptiop_reset_hba(hba)? FAILED : SUCCESS;
 }
@@ -1179,8 +1177,7 @@ static struct scsi_host_template driver_template = {
 	.module                     = THIS_MODULE,
 	.name                       = driver_name,
 	.queuecommand               = hptiop_queuecommand,
-	.eh_device_reset_handler    = hptiop_reset,
-	.eh_bus_reset_handler       = hptiop_reset,
+	.eh_host_reset_handler      = hptiop_reset,
 	.info                       = hptiop_info,
 	.emulated                   = 0,
 	.use_clustering             = ENABLE_CLUSTERING,
@@ -2528,16 +2528,12 @@ static int ibmvfc_eh_target_reset_handler(struct scsi_cmnd *cmd)
 **/
 static int ibmvfc_eh_host_reset_handler(struct scsi_cmnd *cmd)
 {
-	int rc, block_rc;
+	int rc;
 	struct ibmvfc_host *vhost = shost_priv(cmd->device->host);

-	block_rc = fc_block_scsi_eh(cmd);
 	dev_err(vhost->dev, "Resetting connection due to error recovery\n");
 	rc = ibmvfc_issue_fc_host_lip(vhost->host);

-	if (block_rc == FAST_IO_FAIL)
-		return FAST_IO_FAIL;
-
 	return rc ? FAILED : SUCCESS;
 }
@@ -4929,7 +4925,7 @@ static unsigned long ibmvfc_get_desired_dma(struct vio_dev *vdev)
 	return pool_dma + ((512 * 1024) * driver_template.cmd_per_lun);
 }

-static struct vio_device_id ibmvfc_device_table[] = {
+static const struct vio_device_id ibmvfc_device_table[] = {
 	{"fcp", "IBM,vfc-client"},
 	{ "", "" }
 };
@@ -2330,7 +2330,7 @@ static int ibmvscsi_resume(struct device *dev)
  * ibmvscsi_device_table: Used by vio.c to match devices in the device tree we
  * support.
  */
-static struct vio_device_id ibmvscsi_device_table[] = {
+static const struct vio_device_id ibmvscsi_device_table[] = {
 	{"vscsi", "IBM,v-scsi"},
 	{ "", "" }
 };
@@ -4086,7 +4086,7 @@ static struct class ibmvscsis_class = {
 	.dev_groups = ibmvscsis_dev_groups,
 };

-static struct vio_device_id ibmvscsis_device_table[] = {
+static const struct vio_device_id ibmvscsis_device_table[] = {
 	{ "v-scsi-host", "IBM,v-scsi-host" },
 	{ "", "" }
 };
@@ -1106,7 +1106,6 @@ static struct scsi_host_template imm_template = {
 	.name = "Iomega VPI2 (imm) interface",
 	.queuecommand = imm_queuecommand,
 	.eh_abort_handler = imm_abort,
-	.eh_bus_reset_handler = imm_reset,
 	.eh_host_reset_handler = imm_reset,
 	.bios_param = imm_biosparam,
 	.this_id = 7,
@@ -166,7 +166,7 @@ static struct scsi_host_template isci_sht = {
 	.use_clustering = ENABLE_CLUSTERING,
 	.eh_abort_handler = sas_eh_abort_handler,
 	.eh_device_reset_handler = sas_eh_device_reset_handler,
-	.eh_bus_reset_handler = sas_eh_bus_reset_handler,
+	.eh_target_reset_handler = sas_eh_target_reset_handler,
 	.target_destroy = sas_target_destroy,
 	.ioctl = sas_ioctl,
 	.shost_attrs = isci_host_attrs,
@@ -163,7 +163,6 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
 	struct iscsi_tcp_conn *tcp_conn;
 	struct iscsi_sw_tcp_conn *tcp_sw_conn;
 	struct iscsi_conn *conn;
-	struct iscsi_session *session;
 	void (*old_state_change)(struct sock *);

 	read_lock_bh(&sk->sk_callback_lock);
@@ -172,7 +171,6 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
 		read_unlock_bh(&sk->sk_callback_lock);
 		return;
 	}
-	session = conn->session;

 	iscsi_sw_sk_state_check(sk);

@@ -2222,8 +2222,6 @@ int fc_eh_host_reset(struct scsi_cmnd *sc_cmd)

 	FC_SCSI_DBG(lport, "Resetting host\n");

-	fc_block_scsi_eh(sc_cmd);
-
 	fc_lport_reset(lport);
 	wait_tmo = jiffies + FC_HOST_RESET_TIMEOUT;
 	while (!fc_fcp_lport_queue_ready(lport) && time_before(jiffies,
@@ -1078,7 +1078,7 @@ static int iscsi_handle_reject(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
 	if (opcode != ISCSI_OP_NOOP_OUT)
 		return 0;

-	 if (rejected_pdu.itt == cpu_to_be32(ISCSI_RESERVED_TAG)) {
+	if (rejected_pdu.itt == cpu_to_be32(ISCSI_RESERVED_TAG)) {
 		/*
 		 * nop-out in response to target's nop-out rejected.
 		 * Just resend.
@@ -26,6 +26,7 @@ config SCSI_SAS_LIBSAS
 	tristate "SAS Domain Transport Attributes"
 	depends on SCSI
 	select SCSI_SAS_ATTRS
+	select BLK_DEV_BSGLIB
 	help
 	  This provides transport specific helpers for SAS drivers which
 	  use the domain device construct (like the aic94xxx).
@@ -343,6 +343,7 @@ static int smp_ata_check_ready(struct ata_link *link)
 	case SAS_END_DEVICE:
 		if (ex_phy->attached_sata_dev)
 			return sas_ata_clear_pending(dev, ex_phy);
+		/* fall through */
 	default:
 		return -ENODEV;
 	}
@@ -64,8 +64,8 @@ static void smp_task_done(struct sas_task *task)
 /* Give it some long enough timeout. In seconds. */
 #define SMP_TIMEOUT 10

-static int smp_execute_task(struct domain_device *dev, void *req, int req_size,
-			    void *resp, int resp_size)
+static int smp_execute_task_sg(struct domain_device *dev,
+		struct scatterlist *req, struct scatterlist *resp)
 {
 	int res, retry;
 	struct sas_task *task = NULL;
@@ -86,8 +86,8 @@ static int smp_execute_task_sg(struct domain_device *dev,
 		}
 		task->dev = dev;
 		task->task_proto = dev->tproto;
-		sg_init_one(&task->smp_task.smp_req, req, req_size);
-		sg_init_one(&task->smp_task.smp_resp, resp, resp_size);
+		task->smp_task.smp_req = *req;
+		task->smp_task.smp_resp = *resp;

 		task->task_done = smp_task_done;

@@ -151,6 +151,17 @@ static int smp_execute_task_sg(struct domain_device *dev,
 	return res;
 }

+static int smp_execute_task(struct domain_device *dev, void *req, int req_size,
+			    void *resp, int resp_size)
+{
+	struct scatterlist req_sg;
+	struct scatterlist resp_sg;
+
+	sg_init_one(&req_sg, req, req_size);
+	sg_init_one(&resp_sg, resp, resp_size);
+	return smp_execute_task_sg(dev, &req_sg, &resp_sg);
+}
+
 /* ---------- Allocations ---------- */

 static inline void *alloc_smp_req(int size)
@@ -2130,57 +2141,50 @@ int sas_ex_revalidate_domain(struct domain_device *port_dev)
 	return res;
 }

-int sas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
-		struct request *req)
+void sas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+		struct sas_rphy *rphy)
 {
 	struct domain_device *dev;
-	int ret, type;
-	struct request *rsp = req->next_rq;
-
-	if (!rsp) {
-		printk("%s: space for a smp response is missing\n",
-		       __func__);
-		return -EINVAL;
-	}
+	unsigned int reslen = 0;
+	int ret = -EINVAL;

 	/* no rphy means no smp target support (ie aic94xx host) */
 	if (!rphy)
-		return sas_smp_host_handler(shost, req, rsp);
-
-	type = rphy->identify.device_type;
+		return sas_smp_host_handler(job, shost);

-	if (type != SAS_EDGE_EXPANDER_DEVICE &&
-	    type != SAS_FANOUT_EXPANDER_DEVICE) {
+	switch (rphy->identify.device_type) {
+	case SAS_EDGE_EXPANDER_DEVICE:
+	case SAS_FANOUT_EXPANDER_DEVICE:
+		break;
+	default:
 		printk("%s: can we send a smp request to a device?\n",
 		       __func__);
-		return -EINVAL;
+		goto out;
 	}

 	dev = sas_find_dev_by_rphy(rphy);
 	if (!dev) {
 		printk("%s: fail to find a domain_device?\n", __func__);
-		return -EINVAL;
+		goto out;
 	}

 	/* do we need to support multiple segments? */
-	if (bio_multiple_segments(req->bio) ||
-	    bio_multiple_segments(rsp->bio)) {
+	if (job->request_payload.sg_cnt > 1 ||
+	    job->reply_payload.sg_cnt > 1) {
 		printk("%s: multiple segments req %u, rsp %u\n",
-		       __func__, blk_rq_bytes(req), blk_rq_bytes(rsp));
-		return -EINVAL;
+		       __func__, job->request_payload.payload_len,
+		       job->reply_payload.payload_len);
+		goto out;
 	}

-	ret = smp_execute_task(dev, bio_data(req->bio), blk_rq_bytes(req),
-			       bio_data(rsp->bio), blk_rq_bytes(rsp));
+	ret = smp_execute_task_sg(dev, job->request_payload.sg_list,
+			job->reply_payload.sg_list);
 	if (ret > 0) {
 		/* positive number is the untransferred residual */
-		scsi_req(rsp)->resid_len = ret;
-		scsi_req(req)->resid_len = 0;
+		reslen = ret;
 		ret = 0;
-	} else if (ret == 0) {
-		scsi_req(rsp)->resid_len = 0;
-		scsi_req(req)->resid_len = 0;
 	}

-	return ret;
+out:
+	bsg_job_done(job, ret, reslen);
 }
@@ -225,47 +225,36 @@ static void sas_phy_control(struct sas_ha_struct *sas_ha, u8 phy_id,
 	resp_data[2] = SMP_RESP_FUNC_ACC;
 }

-int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
-			 struct request *rsp)
+void sas_smp_host_handler(struct bsg_job *job, struct Scsi_Host *shost)
 {
-	u8 *req_data = NULL, *resp_data = NULL, *buf;
 	struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
+	u8 *req_data, *resp_data;
+	unsigned int reslen = 0;
 	int error = -EINVAL;

 	/* eight is the minimum size for request and response frames */
-	if (blk_rq_bytes(req) < 8 || blk_rq_bytes(rsp) < 8)
+	if (job->request_payload.payload_len < 8 ||
+	    job->reply_payload.payload_len < 8)
 		goto out;

-	if (bio_offset(req->bio) + blk_rq_bytes(req) > PAGE_SIZE ||
-	    bio_offset(rsp->bio) + blk_rq_bytes(rsp) > PAGE_SIZE) {
-		shost_printk(KERN_ERR, shost,
-			"SMP request/response frame crosses page boundary");
+	error = -ENOMEM;
+	req_data = kzalloc(job->request_payload.payload_len, GFP_KERNEL);
+	if (!req_data)
 		goto out;
-	}
-
-	req_data = kzalloc(blk_rq_bytes(req), GFP_KERNEL);
+	sg_copy_to_buffer(job->request_payload.sg_list,
+			job->request_payload.sg_cnt, req_data,
+			job->request_payload.payload_len);

 	/* make sure frame can always be built ... we copy
 	 * back only the requested length */
-	resp_data = kzalloc(max(blk_rq_bytes(rsp), 128U), GFP_KERNEL);
-
-	if (!req_data || !resp_data) {
-		error = -ENOMEM;
-		goto out;
-	}
-
-	local_irq_disable();
-	buf = kmap_atomic(bio_page(req->bio));
-	memcpy(req_data, buf, blk_rq_bytes(req));
-	kunmap_atomic(buf - bio_offset(req->bio));
-	local_irq_enable();
+	resp_data = kzalloc(max(job->reply_payload.payload_len, 128U),
+			GFP_KERNEL);
+	if (!resp_data)
+		goto out_free_req;

+	error = -EINVAL;
 	if (req_data[0] != SMP_REQUEST)
-		goto out;
-
-	/* always succeeds ... even if we can't process the request
-	 * the result is in the response frame */
-	error = 0;
+		goto out_free_resp;

 	/* set up default don't know response */
 	resp_data[0] = SMP_RESPONSE;
@@ -274,20 +263,18 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,

 	switch (req_data[1]) {
 	case SMP_REPORT_GENERAL:
-		scsi_req(req)->resid_len -= 8;
-		scsi_req(rsp)->resid_len -= 32;
 		resp_data[2] = SMP_RESP_FUNC_ACC;
 		resp_data[9] = sas_ha->num_phys;
+		reslen = 32;
 		break;

 	case SMP_REPORT_MANUF_INFO:
-		scsi_req(req)->resid_len -= 8;
-		scsi_req(rsp)->resid_len -= 64;
 		resp_data[2] = SMP_RESP_FUNC_ACC;
 		memcpy(resp_data + 12, shost->hostt->name,
 		       SAS_EXPANDER_VENDOR_ID_LEN);
 		memcpy(resp_data + 20, "libsas virt phy",
 		       SAS_EXPANDER_PRODUCT_ID_LEN);
+		reslen = 64;
 		break;

 	case SMP_READ_GPIO_REG:
@@ -295,14 +282,10 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
 		break;

 	case SMP_DISCOVER:
-		scsi_req(req)->resid_len -= 16;
-		if ((int)scsi_req(req)->resid_len < 0) {
-			scsi_req(req)->resid_len = 0;
-			error = -EINVAL;
-			goto out;
-		}
-		scsi_req(rsp)->resid_len -= 56;
+		if (job->request_payload.payload_len < 16)
+			goto out_free_resp;
 		sas_host_smp_discover(sas_ha, resp_data, req_data[9]);
+		reslen = 56;
 		break;

 	case SMP_REPORT_PHY_ERR_LOG:
@@ -311,14 +294,10 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
 		break;

 	case SMP_REPORT_PHY_SATA:
-		scsi_req(req)->resid_len -= 16;
-		if ((int)scsi_req(req)->resid_len < 0) {
-			scsi_req(req)->resid_len = 0;
-			error = -EINVAL;
-			goto out;
-		}
-		scsi_req(rsp)->resid_len -= 60;
+		if (job->request_payload.payload_len < 16)
+			goto out_free_resp;
 		sas_report_phy_sata(sas_ha, resp_data, req_data[9]);
+		reslen = 60;
 		break;

 	case SMP_REPORT_ROUTE_INFO:
@@ -330,16 +309,15 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
 		const int base_frame_size = 11;
 		int to_write = req_data[4];

-		if (blk_rq_bytes(req) < base_frame_size + to_write * 4 ||
-		    scsi_req(req)->resid_len < base_frame_size + to_write * 4) {
+		if (job->request_payload.payload_len <
+				base_frame_size + to_write * 4) {
 			resp_data[2] = SMP_RESP_INV_FRM_LEN;
 			break;
 		}

 		to_write = sas_host_smp_write_gpio(sas_ha, resp_data, req_data[2],
 						   req_data[3], to_write, &req_data[8]);
-		scsi_req(req)->resid_len -= base_frame_size + to_write * 4;
-		scsi_req(rsp)->resid_len -= 8;
+		reslen = 8;
 		break;
 	}

@@ -348,16 +326,12 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
 		break;

 	case SMP_PHY_CONTROL:
-		scsi_req(req)->resid_len -= 44;
-		if ((int)scsi_req(req)->resid_len < 0) {
-			scsi_req(req)->resid_len = 0;
-			error = -EINVAL;
-			goto out;
-		}
-		scsi_req(rsp)->resid_len -= 8;
+		if (job->request_payload.payload_len < 44)
+			goto out_free_resp;
 		sas_phy_control(sas_ha, req_data[9], req_data[10],
 				req_data[32] >> 4, req_data[33] >> 4,
 				resp_data);
+		reslen = 8;
 		break;

 	case SMP_PHY_TEST_FUNCTION:
@@ -369,15 +343,15 @@ int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
 		break;
 	}

-	local_irq_disable();
-	buf = kmap_atomic(bio_page(rsp->bio));
-	memcpy(buf, resp_data, blk_rq_bytes(rsp));
-	flush_kernel_dcache_page(bio_page(rsp->bio));
-	kunmap_atomic(buf - bio_offset(rsp->bio));
-	local_irq_enable();
+	sg_copy_from_buffer(job->reply_payload.sg_list,
+			job->reply_payload.sg_cnt, resp_data,
+			job->reply_payload.payload_len);

-out:
-	kfree(req_data);
+	error = 0;
+out_free_resp:
 	kfree(resp_data);
-	return error;
+out_free_req:
+	kfree(req_data);
+out:
+	bsg_job_done(job, error, reslen);
 }
@@ -81,6 +81,8 @@ int sas_queue_work(struct sas_ha_struct *ha, struct sas_work *sw);
 int sas_notify_lldd_dev_found(struct domain_device *);
 void sas_notify_lldd_dev_gone(struct domain_device *);

+void sas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+		struct sas_rphy *rphy);
 int sas_smp_phy_control(struct domain_device *dev, int phy_id,
			enum phy_func phy_func, struct sas_phy_linkrates *);
 int sas_smp_get_phy_events(struct sas_phy *phy);
@@ -98,16 +100,14 @@ void sas_hae_reset(struct work_struct *work);
 void sas_free_device(struct kref *kref);

 #ifdef CONFIG_SCSI_SAS_HOST_SMP
-extern int sas_smp_host_handler(struct Scsi_Host *shost, struct request *req,
-				struct request *rsp);
+extern void sas_smp_host_handler(struct bsg_job *job, struct Scsi_Host *shost);
 #else
-static inline int sas_smp_host_handler(struct Scsi_Host *shost,
-				       struct request *req,
-				       struct request *rsp)
+static inline void sas_smp_host_handler(struct bsg_job *job,
+					struct Scsi_Host *shost)
 {
 	shost_printk(KERN_ERR, shost,
		"Cannot send SMP to a sas host (not enabled in CONFIG)\n");
-	return -EINVAL;
+	bsg_job_done(job, -EINVAL, 0);
 }
 #endif
@@ -526,7 +526,7 @@ int sas_eh_device_reset_handler(struct scsi_cmnd *cmd)
 	return FAILED;
 }

-int sas_eh_bus_reset_handler(struct scsi_cmnd *cmd)
+int sas_eh_target_reset_handler(struct scsi_cmnd *cmd)
 {
 	int res;
 	struct Scsi_Host *host = cmd->device->host;
@@ -554,15 +554,15 @@ static int try_to_reset_cmd_device(struct scsi_cmnd *cmd)
 	struct Scsi_Host *shost = cmd->device->host;

 	if (!shost->hostt->eh_device_reset_handler)
-		goto try_bus_reset;
+		goto try_target_reset;

 	res = shost->hostt->eh_device_reset_handler(cmd);
 	if (res == SUCCESS)
 		return res;

-try_bus_reset:
-	if (shost->hostt->eh_bus_reset_handler)
-		return shost->hostt->eh_bus_reset_handler(cmd);
+try_target_reset:
+	if (shost->hostt->eh_target_reset_handler)
+		return shost->hostt->eh_target_reset_handler(cmd);

 	return FAILED;
 }
@@ -855,7 +855,6 @@ int sas_target_alloc(struct scsi_target *starget)
 int sas_slave_configure(struct scsi_device *scsi_dev)
 {
 	struct domain_device *dev = sdev_to_domain_dev(scsi_dev);
-	struct sas_ha_struct *sas_ha;

 	BUG_ON(dev->rphy->identify.device_type != SAS_END_DEVICE);

@@ -864,8 +863,6 @@ int sas_slave_configure(struct scsi_device *scsi_dev)
 		return 0;
 	}

-	sas_ha = dev->port->ha;
-
 	sas_read_port_mode_page(scsi_dev);

 	if (scsi_dev->tagged_supported) {
@@ -996,6 +993,6 @@ EXPORT_SYMBOL_GPL(sas_bios_param);
 EXPORT_SYMBOL_GPL(sas_task_abort);
 EXPORT_SYMBOL_GPL(sas_phy_reset);
 EXPORT_SYMBOL_GPL(sas_eh_device_reset_handler);
-EXPORT_SYMBOL_GPL(sas_eh_bus_reset_handler);
+EXPORT_SYMBOL_GPL(sas_eh_target_reset_handler);
 EXPORT_SYMBOL_GPL(sas_target_destroy);
 EXPORT_SYMBOL_GPL(sas_ioctl);
@@ -733,7 +733,6 @@ struct lpfc_hba {
 	uint32_t fc_rttov;	/* R_T_TOV timer value */
 	uint32_t fc_altov;	/* AL_TOV timer value */
 	uint32_t fc_crtov;	/* C_R_TOV timer value */
-	uint32_t fc_citov;	/* C_I_TOV timer value */

 	struct serv_parm fc_fabparam;	/* fabric service parameters buffer */
 	uint8_t alpa_map[128];	/* AL_PA map from READ_LA */
@@ -757,6 +756,7 @@ struct lpfc_hba {
 #define LPFC_NVMET_MAX_PORTS	32
 	uint8_t mds_diags_support;
 	uint32_t initial_imax;
+	uint8_t bbcredit_support;

 	/* HBA Config Parameters */
 	uint32_t cfg_ack0;
@@ -836,6 +836,7 @@ struct lpfc_hba {
 	uint32_t cfg_enable_SmartSAN;
 	uint32_t cfg_enable_mds_diags;
 	uint32_t cfg_enable_fc4_type;
+	uint32_t cfg_enable_bbcr;	/* Enable BB Credit Recovery */
 	uint32_t cfg_xri_split;
 #define LPFC_ENABLE_FCP  1
 #define LPFC_ENABLE_NVME 2
@@ -946,14 +947,14 @@ struct lpfc_hba {
 	struct list_head active_rrq_list;
 	spinlock_t hbalock;

-	/* pci_mem_pools */
-	struct pci_pool *lpfc_sg_dma_buf_pool;
-	struct pci_pool *lpfc_mbuf_pool;
-	struct pci_pool *lpfc_hrb_pool;	/* header receive buffer pool */
-	struct pci_pool *lpfc_drb_pool;	/* data receive buffer pool */
-	struct pci_pool *lpfc_nvmet_drb_pool; /* data receive buffer pool */
-	struct pci_pool *lpfc_hbq_pool;	/* SLI3 hbq buffer pool */
-	struct pci_pool *txrdy_payload_pool;
+	/* dma_mem_pools */
+	struct dma_pool *lpfc_sg_dma_buf_pool;
+	struct dma_pool *lpfc_mbuf_pool;
+	struct dma_pool *lpfc_hrb_pool;	/* header receive buffer pool */
+	struct dma_pool *lpfc_drb_pool;	/* data receive buffer pool */
+	struct dma_pool *lpfc_nvmet_drb_pool; /* data receive buffer pool */
+	struct dma_pool *lpfc_hbq_pool;	/* SLI3 hbq buffer pool */
+	struct dma_pool *txrdy_payload_pool;
 	struct lpfc_dma_pool lpfc_mbuf_safety_pool;

 	mempool_t *mbox_mem_pool;
@@ -247,13 +247,10 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 				atomic_read(&tgtp->xmt_abort_rsp),
 				atomic_read(&tgtp->xmt_abort_rsp_error));

-		spin_lock(&phba->sli4_hba.nvmet_ctx_get_lock);
-		spin_lock(&phba->sli4_hba.nvmet_ctx_put_lock);
-		tot = phba->sli4_hba.nvmet_xri_cnt -
-			(phba->sli4_hba.nvmet_ctx_get_cnt +
-			phba->sli4_hba.nvmet_ctx_put_cnt);
-		spin_unlock(&phba->sli4_hba.nvmet_ctx_put_lock);
-		spin_unlock(&phba->sli4_hba.nvmet_ctx_get_lock);
+		/* Calculate outstanding IOs */
+		tot = atomic_read(&tgtp->rcv_fcp_cmd_drop);
+		tot += atomic_read(&tgtp->xmt_fcp_release);
+		tot = atomic_read(&tgtp->rcv_fcp_cmd_in) - tot;

 		len += snprintf(buf + len, PAGE_SIZE - len,
 				"IO_CTX: %08x WAIT: cur %08x tot %08x\n"
@@ -1893,6 +1890,36 @@ static inline bool lpfc_rangecheck(uint val, uint min, uint max)
 	return val >= min && val <= max;
 }

+/**
+ * lpfc_enable_bbcr_set: Sets an attribute value.
+ * @phba: pointer the the adapter structure.
+ * @val: integer attribute value.
+ *
+ * Description:
+ * Validates the min and max values then sets the
+ * adapter config field if in the valid range. prints error message
+ * and does not set the parameter if invalid.
+ *
+ * Returns:
+ * zero on success
+ * -EINVAL if val is invalid
+ */
+static ssize_t
+lpfc_enable_bbcr_set(struct lpfc_hba *phba, uint val)
+{
+	if (lpfc_rangecheck(val, 0, 1) && phba->sli_rev == LPFC_SLI_REV4) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"3068 %s_enable_bbcr changed from %d to %d\n",
+				LPFC_DRIVER_NAME, phba->cfg_enable_bbcr, val);
+		phba->cfg_enable_bbcr = val;
+		return 0;
+	}
+	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+			"0451 %s_enable_bbcr cannot set to %d, range is 0, 1\n",
+			LPFC_DRIVER_NAME, val);
+	return -EINVAL;
+}
+
 /**
  * lpfc_param_show - Return a cfg attribute value in decimal
  *
@@ -5116,6 +5143,14 @@ LPFC_ATTR_R(sg_seg_cnt, LPFC_DEFAULT_SG_SEG_CNT, LPFC_DEFAULT_SG_SEG_CNT,
 */
 LPFC_ATTR_R(enable_mds_diags, 0, 0, 1, "Enable MDS Diagnostics");

+/*
+ * lpfc_enable_bbcr: Enable BB Credit Recovery
+ *       0  = BB Credit Recovery disabled
+ *       1  = BB Credit Recovery enabled (default)
+ * Value range is [0,1]. Default value is 1.
+ */
+LPFC_BBCR_ATTR_RW(enable_bbcr, 1, 0, 1, "Enable BBC Recovery");
+
 struct device_attribute *lpfc_hba_attrs[] = {
 	&dev_attr_nvme_info,
 	&dev_attr_bg_info,
@@ -5223,6 +5258,7 @@ struct device_attribute *lpfc_hba_attrs[] = {
 	&dev_attr_protocol,
 	&dev_attr_lpfc_xlane_supported,
 	&dev_attr_lpfc_enable_mds_diags,
+	&dev_attr_lpfc_enable_bbcr,
 	NULL,
 };
@@ -6234,11 +6270,13 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 	lpfc_nvmet_fb_size_init(phba, lpfc_nvmet_fb_size);
 	lpfc_fcp_io_channel_init(phba, lpfc_fcp_io_channel);
 	lpfc_nvme_io_channel_init(phba, lpfc_nvme_io_channel);
+	lpfc_enable_bbcr_init(phba, lpfc_enable_bbcr);

 	if (phba->sli_rev != LPFC_SLI_REV4) {
 		/* NVME only supported on SLI4 */
 		phba->nvmet_support = 0;
 		phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP;
+		phba->cfg_enable_bbcr = 0;
 	} else {
 		/* We MUST have FCP support */
 		if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP))