Memory may be allocated with GFP_KERNEL from an interrupt handler path;
replace it with GFP_ATOMIC.
_base_interrupt->_scsih_io_done->_scsih_smart_predicted_fault
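A minimal sketch of the pattern (the event structure and helper below are
hypothetical, not the driver's actual _scsih_smart_predicted_fault code):

  #include <linux/types.h>
  #include <linux/slab.h>

  /* Hypothetical work item queued from the SMART predicted-fault path. */
  struct fw_event {
  	u16 event;
  	char data[];
  };

  /* Runs under _base_interrupt -> _scsih_io_done, i.e. in interrupt context,
   * so a sleeping GFP_KERNEL allocation is not allowed here. */
  static struct fw_event *alloc_smart_fault_event(size_t data_len)
  {
  	return kzalloc(sizeof(struct fw_event) + data_len, GFP_ATOMIC);
  }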
Link: https://lore.kernel.org/r/20191024152835.6177-1-thenzl@redhat.com
Signed-off-by: Tomas Henzl <thenzl@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This line is indented too far so it's a bit confusing.
Link: https://lore.kernel.org/r/20191004100615.GA823@mwanda
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
When the driver is loaded with the module parameter "max_msix_vectors", the
value provided is not honored; the driver comes up with the controller's
maximum supported MSI-X count instead.
In _base_alloc_irq_vectors, use reply_queue_count, which is derived from the
user-provided MSI-X value, instead of ioc->msix_vector_count, which holds the
controller's maximum supported MSI-X count.
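A rough sketch of the intended logic (field and variable names simplified
from the driver):

  /* Request at most the user-capped reply queue count, not the controller's
   * maximum MSI-X count; ioc->reply_queue_count already reflects the
   * max_msix_vectors module parameter. */
  int nr_vecs = min_t(int, ioc->reply_queue_count, ioc->msix_vector_count);

  nr_vecs = pci_alloc_irq_vectors(ioc->pdev, 1, nr_vecs, PCI_IRQ_MSIX);
  if (nr_vecs < 0)
  	return nr_vecs;
  ioc->reply_queue_count = nr_vecs;	/* vectors actually granted */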
Link: https://lore.kernel.org/r/1568379890-18347-13-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
If a faulty application issues an NVMe Encapsulated command to an HBA which
doesn't support the NVMe protocol, the driver should return the command as
invalid with the following message:
"HBA doesn't support NVMe. Rejecting NVMe Encapsulated request."
Otherwise, the page fault kernel panic below is observed while building the
PRPs, as no PRP pools are allocated for an HBA which doesn't support NVMe
drives.
RIP: 0010:_base_build_nvme_prp+0x3b/0xf0 [mpt3sas]
Call Trace:
_ctl_do_mpt_command+0x931/0x1120 [mpt3sas]
_ctl_ioctl_main.isra.11+0xa28/0x11e0 [mpt3sas]
? prepare_to_wait+0xb0/0xb0
? tty_ldisc_deref+0x16/0x20
_ctl_ioctl+0x1a/0x20 [mpt3sas]
do_vfs_ioctl+0xaa/0x620
? vfs_read+0x117/0x140
ksys_ioctl+0x67/0x90
__x64_sys_ioctl+0x1a/0x20
do_syscall_64+0x60/0x190
entry_SYSCALL_64_after_hwframe+0x44/0xa9
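A hedged sketch of the check added in the IOCTL path (the field tested here
is illustrative; the error string follows the commit text):

  /* Reject NVMe Encapsulated pass-through requests when the HBA has no NVMe
   * support (and therefore no PRP pools were ever allocated). */
  if (!ioc->pcie_sg_lookup) {
  	pr_err("%s: HBA doesn't support NVMe. Rejecting NVMe Encapsulated request.\n",
  	       ioc->name);
  	return -EINVAL;
  }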
[mkp: tweaked error string]
Link: https://lore.kernel.org/r/1568379890-18347-12-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The firmware image layout has changed for Aero controllers. All compatible
HBAs have to get the Firmware Package version from the Component Image
Header layout.
The Signature field in the FW header is set to 0xEB000042 for products
compatible with the Component Image Header.
For compatible controllers, the driver fetches the firmware package version
from the ApplicationSpecific field of the Component Image Header.
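An illustrative sketch of the version selection (the signature value follows
the commit text; structure and field names are paraphrased, not necessarily
the exact MPI header definitions):

  #define COMPONENT_IMAGE_HEADER_SIGNATURE	0xEB000042

  /* Component Image Header layout: package version lives in the
   * ApplicationSpecific field; otherwise fall back to the legacy field. */
  if (le32_to_cpu(fw_img_hdr->Signature) == COMPONENT_IMAGE_HEADER_SIGNATURE)
  	package_version = le32_to_cpu(cmp_img_hdr->ApplicationSpecific);
  else
  	package_version = le32_to_cpu(fw_img_hdr->PackageVersion);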
Link: https://lore.kernel.org/r/1568379890-18347-11-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add a new status flag named MPT3_DIAG_BUFFER_IS_APP_OWNED. It is set whenever
an application registers the diag buffer and cleared when the application
unregisters the buffer.
When this flag is set and an application issues a diag buffer register
command without having released the buffer, the register command fails with
-EINVAL, indicating that this buffer is already registered by an application.
When the user issues a trace buffer register command through the sysfs
parameter and the trace buffer is in the released state but not yet
unregistered by the application that owned it, the driver unregisters the
buffer itself and freshly registers the 1 MB trace buffer with the HBA
firmware.
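A simplified sketch of the register-path ownership check (the
diag_buffer_status bookkeeping follows the commit text; the surrounding
handler code is elided):

  /* Fail a register request if an application already owns this buffer type
   * and has not released it yet. */
  if ((ioc->diag_buffer_status[buffer_type] & MPT3_DIAG_BUFFER_IS_APP_OWNED) &&
      !(ioc->diag_buffer_status[buffer_type] & MPT3_DIAG_BUFFER_IS_RELEASED)) {
  	pr_err("%s: buffer_type(0x%02x) is already registered by an application\n",
  	       ioc->name, buffer_type);
  	return -EINVAL;
  }

  /* Otherwise take (or retake) application ownership of this buffer type. */
  ioc->diag_buffer_status[buffer_type] |= MPT3_DIAG_BUFFER_IS_APP_OWNED;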
Link: https://lore.kernel.org/r/1568379890-18347-9-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
A diag buffer allocated at driver load time or through the sysfs parameter is
marked as a driver-allocated diag buffer, and the
MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED bit is set for it.
This buffer is not de-allocated even when an application issues an unregister
command; the driver just clears the registered status bit. The same buffer is
reused when any application re-registers the same diag buffer type.
When re-registering the same diag buffer type, the application has to
register with the same size that the buffer was allocated with at driver load
time. The application can read this buffer size by issuing a diag 'query'
command.
This guarantees that memory is always available for applications to collect
the firmware logs. The only limitation is that an application cannot
re-register the diag buffer with a different size, but the buffer size
allocated at driver load time is sufficient for most firmware log collection
cases.
Link: https://lore.kernel.org/r/1568379890-18347-8-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Clear the MPT3_DIAG_BUFFER_IS_RELEASED bit once the diag buffer is
re-registered after reading the buffer. Otherwise the driver won't release
the buffer and will return the 'diag release' command with -EINVAL, saying
that the buffer is already released.
Link: https://lore.kernel.org/r/1568379890-18347-7-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Application A has registered a diag buffer and is waiting for a particular
event before releasing and reading the trace buffer. Meanwhile, application B
has unregistered the diag buffer, so application A can no longer get the
required diag buffer. Proper diag buffer ownership is missing.
Each application has to maintain its own unique ID. The driver now saves the
application's unique ID for each diag buffer type when the diag buffer is
registered, and allows 'release', 'read' and 'unregister' diag commands only
if the application's unique ID matches the saved unique ID for the
corresponding diag buffer type.
When the diag buffer is registered by the driver, the unique ID saved by the
driver is "BRCM" (i.e. 0x4252434D) for SAS3 and newer generation HBA devices.
For SAS2 HBAs, the driver keeps the legacy unique ID 0x07075900 to maintain
compatibility with the legacy SAS2 application; this improvement is not
applicable to SAS2 HBA devices.
Any application can take ownership of the buffer registered by the driver by
sending a diag register request with the same buffer type and size (the
application can get the buffer size by sending a 'query' command). The driver
then changes ownership of the buffer by saving the application's unique ID
for that buffer type.
An application can also re-register a diag buffer of the same size without
unregistering it, but the diag buffer has to be released before
re-registering it. By allowing this, the driver does not need to deallocate
and allocate a new buffer for the re-register command; the same buffer can be
reused.
Link: https://lore.kernel.org/r/1568379890-18347-6-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
A memory leak can occur when the diag buffer is released but never
unregistered (the step where the buffer is deallocated) by the user. At
module unload time the driver does not deallocate the buffer if the buffer is
in the released state.
Deallocate the diag buffer at module unload time without any diag buffer
status checks.
Link: https://lore.kernel.org/r/1568379890-18347-5-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
When the user issues a diag register command from an application with the
required size and the driver is unable to allocate the memory, the driver
fails the register command. While failing it, the driver currently does not
clear the MPT3_CMD_PENDING bit in the ctl_cmds.status variable, which was set
before attempting the allocation. With this bit left set, subsequent register
commands fail with BUSY status, even when the user wants to register the
trace buffer with less memory.
Clear the MPT3_CMD_PENDING bit in ctl_cmds.status before returning the diag
register command with a no-memory status.
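A minimal sketch of the fix on the allocation-failure path (simplified; the
handler's actual variable names may differ):

  request_data = dma_alloc_coherent(&ioc->pdev->dev, request_data_sz,
  				  &request_data_dma, GFP_KERNEL);
  if (!request_data) {
  	pr_err("%s: failed allocating memory for diag buffer, size(%d)\n",
  	       ioc->name, request_data_sz);
  	ioc->ctl_cmds.status = MPT3_CMD_NOT_USED;	/* clear PENDING state */
  	return -ENOMEM;
  }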
Link: https://lore.kernel.org/r/1568379890-18347-4-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Display a message before releasing the diag buffer so that the user knows
which event caused the release.
Releasing the diag buffer means the HBA firmware stops posting firmware logs
to the registered diag buffer.
Link: https://lore.kernel.org/r/1568379890-18347-3-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Currently, if the user wishes to enable the host trace buffer at driver load
time, the driver has to be loaded with the module parameter
'diag_buffer_enable' set to one.
Alternatively, the user can now enable the host trace buffer by enabling the
following fields in manufacturing page11 in NVDATA (the nvdata xml used while
building the HBA firmware image):
* HostTraceBufferMaxSizeKB - maximum trace buffer size in KB that the host
  can allocate,
* HostTraceBufferMinSizeKB - minimum trace buffer size in KB that the host
  should at least allocate,
* HostTraceBufferDecrementSizeKB - size by which the host can reduce the
                                   buffer size and retry the allocation when
                                   the allocation failed with the previously
                                   calculated buffer size.
The driver registers the trace buffer automatically at boot time, without any
module parameter, when the above fields are enabled in manufacturing page11
in the HBA firmware.
The driver follows this algorithm for enabling the host trace buffer at load
time (see the sketch after this list):
* If the driver was loaded with the module parameter 'diag_buffer_enable'
  set to one, the driver allocates a 2MB buffer and registers it with the
  HBA firmware for capturing the firmware trace logs.
* Otherwise the driver reads the manufacturing page11 data and checks whether
  the HostTraceBufferMaxSizeKB field is zero:
  - If HostTraceBufferMaxSizeKB is non-zero, the driver tries to allocate
    HostTraceBufferMaxSizeKB of memory. If the allocation succeeds, it
    registers this buffer with the HBA firmware; otherwise, in a loop, the
    driver retries after reducing the current buffer size by
    HostTraceBufferDecrementSizeKB until the allocation succeeds or the
    buffer size falls below HostTraceBufferMinSizeKB. If an allocation
    succeeds, the buffer is registered with the firmware; if the buffer size
    falls below HostTraceBufferMinSizeKB, the driver does not register a
    trace buffer with the HBA firmware.
  - If HostTraceBufferMaxSizeKB is zero, the driver does not register a trace
    buffer with the HBA firmware.
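A condensed sketch of the size-reduction retry loop described above (the
manufacturing page 11 field names follow the commit text; the allocation call
and surrounding variables are simplified):

  u32 trace_sz = manu_pg11.HostTraceBufferMaxSizeKB << 10;	/* KB -> bytes */
  u32 min_sz   = manu_pg11.HostTraceBufferMinSizeKB << 10;
  u32 decr_sz  = manu_pg11.HostTraceBufferDecrementSizeKB << 10;
  dma_addr_t buf_dma;
  void *buf = NULL;

  while (trace_sz && trace_sz >= min_sz) {
  	buf = dma_alloc_coherent(&ioc->pdev->dev, trace_sz, &buf_dma,
  				 GFP_KERNEL);
  	if (buf)
  		break;				/* register this buffer with FW */
  	if (!decr_sz || trace_sz < decr_sz)
  		break;				/* cannot shrink any further */
  	trace_sz -= decr_sz;			/* shrink and retry */
  }
  /* buf == NULL here means no trace buffer is registered with the firmware. */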
Link: https://lore.kernel.org/r/1568379890-18347-2-git-send-email-sreekanth.reddy@broadcom.com
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This is mostly an update of the usual drivers: qla2xxx, ufs, smartpqi,
lpfc, hisi_sas, qedf, mpt3sas; plus a whole load of minor updates.
The only core change this time around is the addition of request
batching for virtio. Since batching requires an additional flag to
use, it should be invisible to the rest of the drivers.
Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI updates from James Bottomley:
"This is mostly update of the usual drivers: qla2xxx, ufs, smartpqi,
lpfc, hisi_sas, qedf, mpt3sas; plus a whole load of minor updates. The
only core change this time around is the addition of request batching
for virtio. Since batching requires an additional flag to use, it
should be invisible to the rest of the drivers"
* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (264 commits)
scsi: hisi_sas: Fix the conflict between device gone and host reset
scsi: hisi_sas: Add BIST support for phy loopback
scsi: hisi_sas: Add hisi_sas_debugfs_alloc() to centralise allocation
scsi: hisi_sas: Remove some unused function arguments
scsi: hisi_sas: Remove redundant work declaration
scsi: hisi_sas: Remove hisi_sas_hw.slot_complete
scsi: hisi_sas: Assign NCQ tag for all NCQ commands
scsi: hisi_sas: Update all the registers after suspend and resume
scsi: hisi_sas: Retry 3 times TMF IO for SAS disks when init device
scsi: hisi_sas: Remove sleep after issue phy reset if sas_smp_phy_control() fails
scsi: hisi_sas: Directly return when running I_T_nexus reset if phy disabled
scsi: hisi_sas: Use true/false as input parameter of sas_phy_reset()
scsi: hisi_sas: add debugfs auto-trigger for internal abort time out
scsi: virtio_scsi: unplug LUNs when events missed
scsi: scsi_dh_rdac: zero cdb in send_mode_select()
scsi: fcoe: fix null-ptr-deref Read in fc_release_transport
scsi: ufs-hisi: use devm_platform_ioremap_resource() to simplify code
scsi: ufshcd: use devm_platform_ioremap_resource() to simplify code
scsi: hisi_sas: use devm_platform_ioremap_resource() to simplify code
scsi: ufs: Use kmemdup in ufshcd_read_string_desc()
...
This patch provides a module parameter and sysfs interface to select
whether the queue depth for each device should be based on the
protocol-specific value set by the driver (the default) or the maximum
supported by the controller (can_queue).
Although we have a sysfs interface per sdev to change the queue depth
of individual scsi devices, this implementation provides a single
sysfs entry per shost to switch between the controller max and the
driver default.
[mkp: tweaked commit desc]
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Move ASPM definitions and function prototypes from include/linux/pci-aspm.h
to include/linux/pci.h so users only need to include <linux/pci.h>:
PCIE_LINK_STATE_L0S
PCIE_LINK_STATE_L1
PCIE_LINK_STATE_CLKPM
pci_disable_link_state()
pci_disable_link_state_locked()
pcie_no_aspm()
No functional changes intended.
Link: https://lore.kernel.org/r/20190827095620.11213-1-kw@linux.com
Signed-off-by: Krzysztof Wilczynski <kw@linux.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Updated driver version from 29.100.00.00 to 31.100.00.00, which is equivalent
to Phase 12 OOB.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
In some cases, such as during extensive expander resets or phy resets, the
user may observe that drives are not visible in the OS, and the driver's
firmware-worker thread is blocked for more than 120 seconds, resulting in a
call trace.
1. The driver received a target add event for device A and registered this
device with the SML by calling sas_rphy_add(). The SML half-added this
device, returned control to the driver by exiting sas_rphy_add(), and started
background scanning on device A.
2. While background scanning was in progress on device A, the driver received
a SAS DEVICE STATUS CHANGE EVENT with RC code "Internal device reset" and set
the tm_busy flag for device A from the FW worker thread context. When the
tm_busy flag is set, the driver returns SCSI commands with device busy
status, asking the kernel to retry the command after some time. So the
background scanning for device A waits for tm_busy to be cleared.
3. Meanwhile the driver received a target add event for device B and called
sas_rphy_add() to register this device with the SML. But since background
scanning for device A was still pending and the SML did not return from
sas_rphy_add(), the driver's firmware worker thread got blocked.
4. The driver then received a SAS DEVICE STATUS CHANGE EVENT with RC code
"Internal device reset complete". But with the firmware worker thread blocked
in step 3, it could not process this event and did not clear the tm_busy
flag, and a deadlock occurred (the SML was waiting for the tm_busy flag to be
cleared while the FW worker thread was waiting for the SML to return from
sas_rphy_add()).
The same deadlock is observed even if device B is being removed in step 3. To
limit this type of deadlock, the driver now processes the SAS DEVICE STATUS
CHANGE EVENT events from the ISR context instead of from the worker thread
context. This improvement avoids the deadlock above.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This patch reduces the performance drop observed on SATA HDDs when an ATA PT
command is outstanding.
The driver returns IO commands with SAM_STAT_BUSY status whenever an ATA PT
command is outstanding. The IO commands are then retried until the
outstanding ATA PT command completes, and hence a drop in performance is
observed.
Because the driver completes the subsequent IO commands with SAM_STAT_BUSY
status, these IOs have to go through the block layer again, which adds
latency and causes a large performance drop.
To reduce this performance drop, the driver now returns the subsequent IOs
with SCSI_MLQUEUE_DEVICE_BUSY status instead of completing them with
SAM_STAT_BUSY status while an ATA PT command is outstanding. Returning a
command with SCSI_MLQUEUE_DEVICE_BUSY does not go through the complete block
layer stack (scsi_done is not called); the SML immediately retries the
command. This avoids the block layer latency and reduces the performance
impact.
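A sketch of the change in the queuecommand path (not the driver's literal
code; the ata_command_pending flag name follows the commit description):

  /* While an ATA pass-through command is outstanding on the device, ask the
   * midlayer to requeue the IO instead of completing it with SAM_STAT_BUSY. */
  if (sas_device_priv_data->ata_command_pending)
  	return SCSI_MLQUEUE_DEVICE_BUSY;	/* SML retries immediately */

  /* old behaviour, which forced the IO back through the block layer:
   *	scmd->result = SAM_STAT_BUSY;
   *	scmd->scsi_done(scmd);
   *	return 0;
   */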
On a local setup, a 512k sequential read IO workload was run on an HGST SATA
drive with the existing driver and with the improved driver; the results are:
1. With the existing driver: IOs run at a bandwidth of ~230 rMB/s, and
whenever an ATA PT command is outstanding (e.g. issued by the systemd-udevd
daemon) this bandwidth drops to ~150 rMB/s.
2. With the improved driver: IOs run at a bandwidth of ~230 rMB/s, and
whenever an ATA PT command is outstanding this bandwidth drops to only
~190 rMB/s.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
During HBA initialization, if the handshake operation fails due to a firmware
fault, the driver currently terminates the HBA initialization. It is possible
that the HBA would come up properly if a diag reset were issued.
So the driver is improved such that, before terminating the HBA
initialization, it checks the IOC state; if the IOC is in the fault state it
issues a diag reset once. If the diag reset is successful, it continues with
the HBA initialization, otherwise it terminates the initialization.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
With the sysfs parameter "drv_support_bitmap" the driver exposes whether it
supports the toolbox memory move command. An application should issue the
toolbox memory move command only if the driver reports, through this sysfs
parameter, that the command is supported.
In the future this sysfs parameter can be reused whenever a new feature is
added and applications need to be notified about it.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The host uses the Memory Move Tool to copy data between any
source/destination combination of system memory and IOC memory.
The Memory Move Tool box request contains two SGE fields. The first SGE field
must contain the source buffer details described by an MPI Simple SGE. The
second SGE field must contain the destination buffer details described by an
MPI Simple SGE.
Source -> Destination
1. IOC -> IOC   (both SGEs are filled by the application)
2. HOST -> HOST (both SGEs are filled by the host; the application should
                 set sgl_offset to the first SGE offset)
3. IOC -> HOST  (the application fills the first SGE and sets sgl_offset to
                 the second SGE, so the driver fills the second SGE)
4. HOST -> IOC  (the application fills the IOC buffer information in the
                 first SGE and sets sgl_offset to the second SGE. The driver
                 then fills the second SGE with the host buffer information
                 and, just before posting the command to the firmware, swaps
                 the two SGEs so that the first SGE contains the host buffer
                 information and the second SGE contains the IOC information.)
The driver only has to handle the 4th case; the other three cases are
supported by the current driver design by default. A sketch of the case-4 SGE
swap follows.
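Illustrative sketch of case 4 (HOST -> IOC), assuming the two Simple SGEs sit
back-to-back at the SGL offset of the request frame; the structure and helper
names are made up for this example, not the driver's exact ones:

  #include <linux/types.h>

  struct mpi_simple_sge {
  	u32 flags_length;
  	u64 address;
  } __packed;

  static void swap_memmove_sges(struct mpi_simple_sge *sgl)
  {
  	struct mpi_simple_sge tmp = sgl[0];	/* IOC buffer filled by the app */

  	sgl[0] = sgl[1];	/* host buffer filled by the driver goes first */
  	sgl[1] = tmp;		/* IOC buffer goes second */
  }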
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
If the driver sees an NVMe drive with a "DEVICE_BLOCKED" AccessStatus in its
PCIe Device Page0, the driver removes the drive from its internal list, does
not allow any IOCTL commands to be sent to it, and returns the IOCTLs with
-ENODEV status.
The driver will now allow NVMe Encapsulated IOCTLs to be issued to an NVMe
device with an access status of DEVICE_BLOCKED. This change allows the user
to flash new drive firmware online and revive the drive.
Add the NVMe device to the driver's internal list even though the device is
in the blocked state, so that the device is visible to applications. This
way, applications can send NVMe Encapsulated IOCTLs to this drive and bring
it online. An NVMe drive with DEVICE_BLOCKED access status is not added to
the SML; it is added only to the driver's internal list.
[mkp: clarified desc]
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
A SES device behind a managed PCIe switch is enumerated the same way as NVMe
drives.
The device info type for this SES device is
MPI26_PCIE_DEVINFO_SCSI (0x4),
whereas the device info type for NVMe drives is
MPI26_PCIE_DEVINFO_NVME (0x3).
Based on this device info type the driver determines whether the device is an
NVMe drive or a SES device of a managed PCIe switch.
This SES device doesn't have the PCIe device page 2 information that NVMe
drives have, so the driver won't read PCIe device page 2 for a SES device.
This SES device uses only IEEE SGLs, so the driver builds IEEE SGLs whenever
it receives SCSI commands for this SES device.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Updated MPI to 2.6.8 specification and header files to 2.00.54.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Issue:
During online firmware upgrade operations it is possible that the
MaxDevHandles value reported in IOCFacts changes with the new FW. In that
case kernel panics may be observed when the driver accesses the pd_handles or
blocking_handles buffers at an offset greater than the old firmware's
MaxDevHandle value.
Fix:
_base_check_ioc_facts_changes() looks for increases/decreases in IOCFacts
attributes during online firmware upgrade and grows the pd_handles,
blocking_handles, etc. buffers to the new firmware's MaxDevHandle value if it
is greater than the old firmware's MaxDevHandle value.
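A minimal sketch of growing one handle bitmap, assuming the new MaxDevHandle
is larger than the old one (error handling trimmed; not the driver's exact
code):

  u16 pd_handles_sz = DIV_ROUND_UP(new_max_devhandle, 8);  /* one bit per handle */
  void *pd_handles;

  pd_handles = krealloc(ioc->pd_handles, pd_handles_sz, GFP_KERNEL);
  if (!pd_handles)
  	return -ENOMEM;
  /* zero only the newly added tail so existing handle state is preserved */
  memset(pd_handles + ioc->pd_handles_sz, 0,
         pd_handles_sz - ioc->pd_handles_sz);
  ioc->pd_handles = pd_handles;
  ioc->pd_handles_sz = pd_handles_sz;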
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The driver gets a request frame from the free pool of DMA-able request
frames, fills in the required information, and passes the address of the
frame to the IOC/FW, which pulls the complete request frame. In certain
places the driver used a request frame from the free pool without completely
clearing the previous data stored in it. The request contents were cleared
only up to the size of the new request being issued, which left stale data in
the unused part of the frame. Although the IOC/FW is not expected to access
the request beyond the specified size, it is good practice to clear the
complete request message frame.
So reinitialize the complete request message frame with zeros before using
it.
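Sketch of the idea: clear the whole frame taken from the free pool, not just
the size of the new request (request_sz is the full frame size reported in
IOC Facts; mpt3sas_base_get_msg_frame() is the existing frame accessor):

  void *mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);

  memset(mpi_request, 0, ioc->request_sz);
  /* ...now build the new request in the fully cleared frame... */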
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The PCIe Lane Margin tool box request requires IEEE SGLs, so the driver fills
the SGL field with IEEE SGLs while issuing the PCIe Lane Margin ioctl request
to the HBA firmware.
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
We can issue a task management IOCTL command (MPI2_FUNCTION_SCSI_TASK_MGMT)
to /dev/mpt3ctl. If the given task_type is either abort task or query task,
it may need the "Initiator Port Transfer Tag to Manage" field in the IU.
The current code does not support checking the target IPTT tag from the
tm_request. This patch introduces checking the TaskMID given from userspace
as a target tag. We have a rule of relationship between
(struct request *req->tag) and smid in mpt3sas_base.c:
3318 u16
3319 mpt3sas_base_get_smid_scsiio(struct MPT3SAS_ADAPTER *ioc, u8 cb_idx,
3320 struct scsi_cmnd *scmd)
3321 {
3322 struct scsiio_tracker *request = scsi_cmd_priv(scmd);
3323 unsigned int tag = scmd->request->tag;
3324 u16 smid;
3325
3326 smid = tag + 1;
So if we want to abort a request tagged #X, we can pass (X + 1) to this IOCTL
handler. Otherwise, userspace can just pass a TaskMID of 0 to abort the first
outstanding smid, which is the legacy behaviour.
Cc: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Cc: Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>
Cc: Sathya Prakash <sathya.prakash@broadcom.com>
Cc: James E.J. Bottomley <jejb@linux.ibm.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: MPT-FusionLinux.pdl@broadcom.com
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Acked-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
There is a copy and paste bug here. It uses the EVENT_TRIGGERS size instead
of the SCSI_TRIGGERS size, but fortunately both sizes are 84 bytes so it
doesn't affect runtime.
These days the preferred style is to just say sizeof(object) instead of
sizeof(type), so I have updated the function to the latest style as well.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Although SAS3 & SAS3.5 IT HBA controllers support 64-bit DMA addressing, per
the hardware design, if a DMA-able range contains an address with all 64 bits
set (0xFFFFFFFF-FFFFFFFF) then it results in a firmware fault.
E.g. if an SGE's start address is 0xFFFFFFFF-FFFFF000 and the data length is
0x1000 bytes, then when the HBA tries to DMA the data at the
0xFFFFFFFF-FFFFFFFF location the HBA will fault the firmware.
The driver will set a 63-bit DMA mask to ensure the above address will not be
used.
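Sketch of the mask selection (dma_set_mask_and_coherent() is the standard DMA
API; the 32-bit fallback mirrors common driver practice and is illustrative):

  #include <linux/dma-mapping.h>

  int rc;

  rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(63));
  if (rc)
  	/* fall back to 32-bit addressing if 63-bit is not usable */
  	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));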
Cc: <stable@vger.kernel.org> # 5.1.20+
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
When using a virt_boundary_mask, as done for NVMe devices attached to
mpt3sas controllers, we require an unlimited max_segment_size as the virt
boundary merging code assumes that. But we also need to propagate that to
the DMA mapping layer to make dma-debug happy. The SCSI layer takes care
of that when using the per-host virt_boundary setting, but given that
mpt3sas only wants to set the virt_boundary for actual NVMe devices, we
can't rely on that. The DMA layer maximum segment is global to the HBA
however, so we have to set it explicitly. This patch assumes that mpt3sas
does not have a segment size limitation, which seems true based on the SGL
format, but will need to be verified.
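A hedged sketch of the idea (not necessarily the exact call site used by the
driver): lift the per-device DMA-layer segment size limit so it matches the
unlimited segment size assumed by virt-boundary merging.

  #include <linux/dma-mapping.h>
  #include <linux/kernel.h>

  /* The DMA max segment size is per struct device, i.e. per HBA, so raise it
   * explicitly even though virt_boundary is only applied to the NVMe sdevs. */
  dma_set_max_seg_size(&pdev->dev, UINT_MAX);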
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Enable MSI-X load balance only when combined reply queue mode is disabled on
SAS3 and newer generation HBA devices.
Earlier, MSI-X load balance used to be enabled whenever the number of online
CPUs was greater than the number of MSI-X vectors enabled on the HBA.
Combined reply queue mode is disabled only on those HBAs which work in shared
resources mode, i.e. on SAS3 HBAs it will be <= 8 and on SAS3.5 HBA devices
it will be <= 16.
- Before this patch, if the system has 256 logical CPUs and the HBA exposes
  128 MSI-X vectors, the driver enables MSI-X load balance.
- After this patch, if the system has 256 logical CPUs and the HBA exposes
  128 MSI-X vectors, the driver disables MSI-X load balance.
- After this patch, if the system has 256 logical CPUs and the HBA exposes
  16 MSI-X vectors (due to combined reply queue mode being off in HW), the
  driver enables MSI-X load balance.
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Even though the 'smp_affinity_enable' module parameter is enabled, if the
number of online CPUs is bigger than the number of MSI-X vectors enabled on a
given HBA, then SMP affinity settings should be disabled only for that HBA.
Currently the SMP affinity setting is disabled globally, so SMP affinity is
disabled for subsequent HBAs even when the number of MSI-X vectors enabled on
such an HBA matches the number of online CPUs.
To fix this, define a per-HBA variable smp_affinity_enable. It is initialized
from the smp_affinity_enable module parameter value. Only if the HBA has
fewer MSI-X vectors configured than the number of online CPUs is that HBA's
smp_affinity_enable variable set to zero.
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
When enabling high iops queues, the driver should use the HBA's configured
PCIe link speed instead of looking for the maximum link speed.
I.e. enable high iops queues only if Aero/Sea HBA's configured PCIe link
speed is set to 16GT/s.
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Currently the default perf_mode is set to 'balanced' on Intel architecture
machines, and on other machines the default perf_mode is set to 'latency'
mode. This CPU architecture check is removed and the default perf_mode is set
to 'balanced' mode on all machines.
The user can choose the required performance mode using the perf_mode module
parameter.
Signed-off-by: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Use existing macros. No functional change.
[mkp: typo]
Signed-off-by: Tomas Henzl <thenzl@redhat.com>
Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Support is easier with all driver parameters visible in sysfs. Also I've
replaced a constant with an octal permission.
Signed-off-by: Tomas Henzl <thenzl@redhat.com>
Acked-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
In preparation for enabling -Wimplicit-fallthrough, mark switch cases where
we are expecting to fall through.
This patch fixes the following warning:
drivers/scsi/mpt3sas/mpt3sas_base.c: In function _base_update_ioc_page1_inlinewith_perf_mode :
drivers/scsi/mpt3sas/mpt3sas_base.c:4510:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (ioc->high_iops_queues) {
^
drivers/scsi/mpt3sas/mpt3sas_base.c:4530:2: note: here
case MPT_PERF_MODE_LATENCY:
^~~~
Warning level 3 was used: -Wimplicit-fallthrough=3
This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough.
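A minimal sketch of how such a case is typically annotated (the case labels
are abbreviated from the warning context above, not the full driver
function):

  switch (perf_mode) {
  case MPT_PERF_MODE_DEFAULT:
  case MPT_PERF_MODE_BALANCED:
  	if (ioc->high_iops_queues) {
  		/* ...program coalescing for the high iops queues... */
  	}
  	/* fall through */
  case MPT_PERF_MODE_LATENCY:
  	/* ...program the latency-mode coalescing timeout... */
  	break;
  }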
Fixes: 30cb97023f38 ("scsi: mpt3sas: Introduce perf_mode module parameter")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update driver version from 28.100.00.00 to 29.100.00.00. This is equivalent
to the Phase 10 OOB driver.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
1. Introduce the module parameter perf_mode, for Aero/Sea generation HBAs
   only.
2. Update IOC page1 fields according to the performance mode.
Below are the performance modes that can be enabled with the module parameter
perf_mode:
0: Balanced - A few high iops reply queues will be enabled. Interrupt
   coalescing will be enabled only for these high iops reply descriptor
   queues.
1: Iops - Interrupt coalescing will be enabled on all reply queues. The
   coalescing timeout is set to 0x20. This is the default value for Aero.
2: Latency - Interrupt coalescing will be enabled on all reply queues. The
   coalescing timeout is set to 0xA. This is a legacy behavior similar to the
   Ventura & Invader HBA series.
The default perf mode set by the driver will be balanced mode if the
following conditions are met:
- CPU vendor = Intel;
- Aero controller working at 16GT/s PCIe speed
The performance mode will be set to latency mode in all other cases.
4k random read IO performance numbers on 24 SAS SSD drives for the above
three performance modes. Performance data is from Intel Skylake and HGST
SS300 (drive model SDLL1DLR400GCCA1).
IOPs:
 -----------------------------------------------------------------------
 |perf_mode    | qd = 1 | qd = 64 | note                               |
 |-------------|--------|---------|------------------------------------|
 |balanced     | 259K   | 3061k   | Provides max performance numbers   |
 |             |        |         | both on lower QD workload &        |
 |             |        |         | also on higher QD workload         |
 |-------------|--------|---------|------------------------------------|
 |iops         | 220K   | 3100k   | Provides max performance numbers   |
 |             |        |         | only on higher QD workload.        |
 |-------------|--------|---------|------------------------------------|
 |latency      | 246k   | 2226k   | Provides good performance numbers  |
 |             |        |         | only on lower QD workload.         |
 -----------------------------------------------------------------------
Average Latency:
 -----------------------------------------------------
 |perf_mode    | qd = 1       | qd = 64              |
 |-------------|--------------|----------------------|
 |balanced     | 92.05 usec   | 501.12 usec          |
 |-------------|--------------|----------------------|
 |iops         | 108.40 usec  | 498.10 usec          |
 |-------------|--------------|----------------------|
 |latency      | 97.10 usec   | 689.26 usec          |
 -----------------------------------------------------
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Enable interrupt coalescing only on the high iops queues.
In IOC config page 1, offset 0x14 (the ProductSpecific field) is used to
determine whether interrupt coalescing is enabled/disabled on a per reply
descriptor post queue group (of 8) basis. If bit 31 is zero, interrupt
coalescing is enabled for all reply descriptor post queues. If bit 31 is set
to one, the user can enable/disable interrupt coalescing on a per reply
descriptor post queue group (of 8) basis. So, to enable interrupt coalescing
only on the first reply descriptor post queue group (i.e. on the high iops
queues), set bits 0 and 31.
This configuration should be reset to the default settings during driver
unload or shutdown. For this, the driver keeps a copy of the default IOC
page 1 and writes back the default, unmodified IOC page 1 during unload and
shutdown. This means that on the next driver load (e.g. if an older driver
version is loaded by the user), the currently modified IOC page 1 changes
won't take effect.
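A sketch of the ProductSpecific programming described above; the macro names
are invented for this example, and only the bit layout follows the commit
text:

  #define IOC_PG1_COALESCE_PER_GROUP_VALID  BIT(31)  /* per-group control on */
  #define IOC_PG1_COALESCE_GROUP0_ENABLE    BIT(0)   /* high iops queue group */

  ioc_pg1.ProductSpecific =
  	cpu_to_le32(IOC_PG1_COALESCE_PER_GROUP_VALID |
  		    IOC_PG1_COALESCE_GROUP0_ENABLE);
  /* On unload/shutdown, write back the unmodified copy of IOC page 1 saved
   * at load time so the default coalescing behaviour is restored. */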
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
High iops queues are mapped to non-managed IRQs, and the affinity of the
non-managed IRQs is set to the local NUMA node. Low latency queues are mapped
to managed IRQs.
The driver reserves some reply queues for max iops (through
pci_alloc_irq_vectors_affinity() and the .pre_vectors interface). The rest of
the queues are for low latency.
Based on the IO workload in the IO submission path, the driver decides which
group of reply queues (either high iops queues or low latency queues) to use.
High iops queues are mapped to the local NUMA node of the controller and low
latency queues are mapped to CPUs across NUMA nodes. In general, the high
iops and low latency queues should fit into 128 reply queues, which is the
maximum number of reply queues supported by Aero/Sea.
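A minimal sketch of reserving pre-vectors for the high iops queues; the
counts and names are illustrative, not the driver's exact values:

  #include <linux/pci.h>

  static int alloc_reply_queue_vectors(struct pci_dev *pdev,
  				     unsigned int high_iops_queues,
  				     unsigned int total_queues)
  {
  	struct irq_affinity desc = {
  		/* first vectors are non-managed; pinned to the local node */
  		.pre_vectors = high_iops_queues,
  	};

  	/* remaining vectors are managed and spread across CPUs/nodes */
  	return pci_alloc_irq_vectors_affinity(pdev, 1, total_queues,
  					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
  					      &desc);
  }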
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
In the IO submission path, _base_get_msix_index is called twice: first while
getting the smid, and subsequently while posting the request descriptor (RD).
Refactor the code to query the MSI-X index only while posting the request
descriptor, and save the determined MSI-X index in the msix_io field.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The driver will use a round-robin method for IO submission in batches within
the high iops queues when the number of in-flight IOs on the target device is
larger than 8. Otherwise the driver will use the low latency reply queues.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Code refactoring.
In the function _base_get_msix_index, add scmd as the second argument. This
change is made in preparation for the next patch, where a new function is
introduced to get the MSI-X index for the high iops queues.
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>