Sync code to match tk4 pub/lts/0017-kabi, except that rue and wujing are
deleted. Partners can submit pull requests to this branch, and we can pick
the commits to tk4 pub/lts/0017-kabi easily.
Signed-off-by: Jianping Liu <frankjpliu@tencent.com>
Gitee limits a repo's size to 3GB. To reduce the size of the code, sync
the code to ock 5.4.119-20.0009.21 in one commit.
Signed-off-by: Jianping Liu <frankjpliu@tencent.com>
Currently, each hardware queue, typically allocated per cpu, consists of a
WQ/CQ pair per protocol, meaning that if both SCSI and NVMe are supported,
two WQ/CQ pairs exist for the hardware queue. Separate queues are
unnecessary. The current implementation wastes memory backing the second
set of queues, and using double the SLI-4 WQ/CQs means fewer hardware
queues can be supported, so there may not always be enough to have a pair
per cpu. If there is only one pair per cpu, more cpus can get their own
WQ/CQ pair.
Rework the implementation so a single WQ/CQ pair is used by both protocols.
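A minimal sketch of the data-structure change, in C, with hypothetical
structure and member names (scsi_wq, io_wq, etc. are illustrative, not the
driver's definitions):
  /* Illustrative only; structure and member names are not the driver's. */
  struct wq { int qid; };
  struct cq { int qid; };

  /* Before: one WQ/CQ pair per protocol in each hardware queue. */
  struct hdwq_per_protocol {
          struct wq *scsi_wq;
          struct cq *scsi_cq;
          struct wq *nvme_wq;
          struct cq *nvme_cq;
  };

  /* After: a single WQ/CQ pair shared by SCSI and NVMe. */
  struct hdwq_shared {
          struct wq *io_wq;
          struct cq *io_cq;
  };
Halving the WQ/CQ usage per hardware queue is what frees enough SLI-4 queue
resources to keep a pair per cpu.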
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The lpfc_debug_dump_all_queues() function repeatedly calls into
lpfc_debug_dump_qe() which has a temporary 128 byte buffer. This was fine
before the introduction of CONFIG_GCC_PLUGIN_STRUCTLEAK_VERBOSE because
each instance could occupy the same stack slot. However, now they each get
their own copy, which leads to a huge increase in stack usage as seen from
the compiler warning:
drivers/scsi/lpfc/lpfc_debugfs.c: In function 'lpfc_debug_dump_all_queues':
drivers/scsi/lpfc/lpfc_debugfs.c:6474:1: error: the frame size of 1712 bytes is larger than 100 bytes [-Werror=frame-larger-than=]
Avoid this by not marking lpfc_debug_dump_qe() as inline so the compiler
can choose to emit a static version of this function when it's needed or
otherwise silently drop it. As an added benefit, not inlining multiple
copies of this function means we save several kilobytes of .text section,
reducing the file size from 47kb to 43kb.
It is somewhat unusual to have a function that is static but not inline in
a header file, but this does not cause problems here because it is only
used by other inline functions. It would however seem reasonable to move
all the lpfc_debug_dump_* functions into lpfc_debugfs.c and not mark them
inline as a later cleanup.
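A reduced, standalone sketch of the pattern, assuming a hypothetical
dump_qe()/dump_all() pair rather than the actual lpfc routines:
  #include <stdio.h>

  /* static but not inline: at most one emitted copy, one 128-byte frame */
  static void dump_qe(unsigned int entry)
  {
          char line_buf[128];

          snprintf(line_buf, sizeof(line_buf), "%08x\n", entry);
          fputs(line_buf, stdout);
  }

  void dump_all(const unsigned int *entries, int n)
  {
          /* With the helper inlined under STRUCTLEAK_VERBOSE, each inlined
           * instance could get its own zero-initialized line_buf in this
           * function's frame; calling the out-of-line copy keeps the
           * caller's frame small. */
          for (int i = 0; i < n; i++)
                  dump_qe(entries[i]);
  }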
Fixes: 81a56f6dcd ("gcc-plugins: structleak: Generalize to all variable types")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Change snprintf to scnprintf. There are generally two cases where using
snprintf causes problems.
1) Uses of size += snprintf(buf, SIZE - size, fmt, ...)
In this case, if snprintf would have written more characters than what the
buffer size (SIZE) is, then size will end up larger than SIZE. In later
uses of snprintf, SIZE - size will result in a negative number, leading
to problems. Note that size might already be too large by using
size = snprintf before the code reaches a case of size += snprintf.
2) If size is ultimately used as a length parameter for a copy back to user
space, then it will potentially allow for a buffer overflow and information
disclosure when size is greater than SIZE. When the size is used to index
the buffer directly, we can have memory corruption. This also means when
size = snprintf... is used, it may also cause problems since size may become
large. Copying to userspace is mitigated by the HARDENED_USERCOPY kernel
configuration.
The solution to these issues is to use scnprintf which returns the number of
characters actually written to the buffer, so the size variable will never
exceed SIZE.
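A userspace approximation of the pattern, assuming a stand-in
my_scnprintf() with the kernel scnprintf() return semantics:
  #include <stdarg.h>
  #include <stdio.h>

  #define BUF_SIZE 32

  /* Stand-in with scnprintf() semantics: return the number of characters
   * actually written, never more than size - 1. */
  static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
  {
          va_list args;
          int i;

          va_start(args, fmt);
          i = vsnprintf(buf, size, fmt, args);
          va_end(args);

          if (i < (int)size)
                  return i;
          return size ? (int)size - 1 : 0;
  }

  int main(void)
  {
          char buf[BUF_SIZE];
          int len = 0;

          /* With plain snprintf(), a truncated first write could leave
           * len > BUF_SIZE, making BUF_SIZE - len negative below.  With
           * scnprintf() semantics, len never exceeds BUF_SIZE - 1. */
          len += my_scnprintf(buf + len, BUF_SIZE - len, "%s", "a fairly long first chunk ");
          len += my_scnprintf(buf + len, BUF_SIZE - len, "%s", "and a second chunk");
          printf("len=%d buf=%s\n", len, buf);
          return 0;
  }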
Signed-off-by: Silvio Cesare <silvio.cesare@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: James Smart <james.smart@broadcom.com>
Cc: Dick Kennedy <dick.kennedy@broadcom.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Currently the driver maintains a sideband structure which has a pointer for
each queue element. However, at 8 bytes per pointer, and up to 4k elements
per queue, and 100s of queues, this can take up a lot of memory.
Convert the driver to use an access routine that calculates the element
address from its index rather than using the pointer table.
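A minimal sketch of the index-based access, assuming a hypothetical queue
layout (the real queue memory may span multiple DMA pages, so the actual
routine is more involved):
  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative only; member names are not the driver's. */
  struct queue {
          void     *base;          /* start of the queue memory  */
          uint32_t  entry_size;    /* bytes per queue element    */
          uint32_t  entry_count;   /* elements in the queue      */
  };

  /* Replaces a sideband table of entry_count pointers (8 bytes each,
   * up to 4k entries, hundreds of queues) with an offset calculation. */
  static inline void *queue_get_entry(struct queue *q, uint32_t idx)
  {
          if (idx >= q->entry_count)
                  return NULL;
          return (uint8_t *)q->base + (size_t)idx * q->entry_size;
  }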
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
For files modified as part of 12.2.0.0 patches, update copyright to 2019
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
So far, MSIX vector allocation assumed it would be 1:1 with hardware
queues. However, there are several reasons why fewer MSIX vectors may be
allocated than hardware queues, such as the platform being out of vectors
or the adapter limit being less than the cpu count.
This patch reworks the MSIX/EQ relationships with the per-cpu hardware
queues so they can function independently. MSIX vectors will be equitably
split between cpu sockets/cores, and then the per-cpu hardware queues will
be mapped to the vectors most efficient for them.
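A standalone sketch of the decoupling, assuming a plain modulo spread (the
actual mapping weighs sockets and cores as described above); the counts are
made up:
  #include <stdio.h>

  int main(void)
  {
          int num_hdw_queues = 16;  /* e.g. one per online cpu          */
          int num_vectors    = 6;   /* what the platform actually gave  */

          for (int hwq = 0; hwq < num_hdw_queues; hwq++)
                  printf("hdwq %2d -> EQ/MSI-X vector %d\n",
                         hwq, hwq % num_vectors);
          return 0;
  }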
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The XRI get/put lists were partitioned per hardware queue. However, the
adapter rarely had sufficient resources to give each queue a large share.
As such, it became common for a cpu to encounter a lack of XRI resources
and ask the upper io stack to retry after returning a BUSY condition. This
occurred even though other cpus were idle and not using their resources.
Create as efficient a scheme as possible to move resources to the cpus
that need them. Each cpu maintains a small private pool which it allocates
from for io, and there is a watermark the cpu attempts to keep in that
private pool. The private pool, when empty, pulls from a global pool owned
by the cpu. When the cpu's global pool is empty, it will pull from other
cpus' global pools. As there are many cpu global pools (one per cpu or per
hardware queue), and as each cpu selects which cpu to pull from at
different rates and at different times, this creates a randomizing effect
that minimizes the number of cpus contending with each other when they
steal XRIs from another cpu's global pool.
On io completion, a cpu pushes the XRI back onto its private pool. A
watermark level is maintained for the private pool such that, when it is
exceeded, XRIs are moved to the cpu's global pool so that other cpus may
allocate them.
On NVME, as heartbeat commands are critical to get placed on the wire, a
single expedite pool is maintained. When a heartbeat is to be sent, it will
allocate an XRI from the expedite pool rather than the normal cpu
private/global pools. On any io completion, if a reduction in the expedite
pool is seen, it is replenished before the XRI is placed on the cpu
private pool.
Statistics are added to aid in understanding the XRI levels on each cpu and
their behavior.
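A standalone sketch of the allocation order described above, with
hypothetical pool structures and without locking, watermark upkeep, or the
expedite pool:
  #include <stddef.h>

  struct xri_pool {
          int count;                       /* free XRIs in this pool */
  };

  struct cpu_xri_pools {
          struct xri_pool pvt;             /* per-cpu private pool   */
          struct xri_pool pbl;             /* per-cpu global pool    */
  };

  /* Returns the pool to take an XRI from, or NULL when everything is dry
   * and the upper layer should see a BUSY/retry. */
  static struct xri_pool *
  pick_xri_pool(struct cpu_xri_pools *pools, int ncpu, int cpu)
  {
          if (pools[cpu].pvt.count > 0)
                  return &pools[cpu].pvt;           /* 1) own private pool */
          if (pools[cpu].pbl.count > 0)
                  return &pools[cpu].pbl;           /* 2) own global pool  */
          for (int i = 1; i < ncpu; i++) {          /* 3) steal from peers */
                  int other = (cpu + i) % ncpu;
                  if (pools[other].pbl.count > 0)
                          return &pools[other].pbl;
          }
          return NULL;
  }
On completion the XRI goes back onto the private pool, spilling into the
cpu's global pool once the private watermark is exceeded.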
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Many io statistics were being sampled and saved using adapter-based data
structures. This was creating a lot of contention and cache thrashing in
the I/O path.
Move the statistics to the hardware queue data structures. Given the
per-queue data structures, use of atomic types is lessened.
Add new sysfs and debugfs stat routines to collate the per-hardware-queue
values and report them at an adapter level.
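A minimal sketch of the collation idea, with a hypothetical counter name:
  #include <stdint.h>

  struct hdwq_stat {
          uint64_t io_cmpls;       /* written only from this queue's cpu,
                                    * so no atomic type is required      */
  };

  /* Collate the per-hardware-queue values when an adapter-level number
   * is reported through sysfs/debugfs. */
  static uint64_t adapter_io_cmpls(const struct hdwq_stat *hdwq, int nqueues)
  {
          uint64_t total = 0;

          for (int i = 0; i < nqueues; i++)
                  total += hdwq[i].io_cmpls;
          return total;
  }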
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Once the IO buffer allocations were made shared, there was a single XRI
buffer list shared by all hardware queues. A single list isn't great for
performance when shared across the per-cpu hardware queues.
Create a separate XRI IO buffer get/put list for each hardware queue. As
SGLs and associated IO buffers get allocated/posted to the firmware, round
robin their assignment across all available hardware queues so that the
assignment is equitable.
Modify the SCSI and NVME IO submit code paths to use the hardware queue
logic for XRI allocation.
Add a debugfs interface to display hardware queue statistics.
Add a new empty_io_bufs counter to track when a cpu runs out of XRIs.
Replace common_ variables/names with io_ to make meanings clearer.
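A minimal sketch of the round-robin spread, using a count array as a
stand-in for the real per-queue get/put lists:
  /* Names are illustrative; the real code moves buffers onto per-queue
   * lists under locks. */
  static void distribute_io_bufs(int nbufs, int *per_hwq_count, int nhwq)
  {
          for (int i = 0; i < nbufs; i++) {
                  int hwq = i % nhwq;      /* round robin across queues */
                  per_hwq_count[hwq]++;    /* stand-in for adding the
                                            * buffer to hdwq[hwq]'s
                                            * get/put list              */
          }
  }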
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Currently, nvme and fcp each have their own concept of an io_channel,
which is a combination of a wq/cq pair and an associated msix vector.
Different cpus would share an io_channel.
The driver is now moving to per-cpu wq/cq pairs and msix vectors. The
driver will still use separate wq/cq pairs per protocol on each cpu, but
the protocols will share the msix vector.
Given the elimination of the nvme and fcp io channels, their module
parameters will be removed. A new parameter, lpfc_hdw_queue, is added which
allows the per-cpu wq/cq pair allocation to be overridden and set to a
lesser value. If lpfc_hdw_queue is zero, the number of pairs allocated will
be based on the number of cpus. If non-zero, the parameter specifies the
number of queues to allocate. At this time, the maximum non-zero value is
64.
To manage this new paradigm, a new hardware queue structure is created to
track queue activity and relationships.
As the MSIX vector allocation must be known before setting up the
relationships, msix allocation now occurs before the queue data structures
are allocated. If the number of vectors allocated is less than the desired
number of hardware queues, the hardware queue count will be reduced to the
number of vectors.
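A simplified sketch of the sizing rules stated above (the names and the 64
cap mirror the text, not the driver source):
  #define MAX_HDW_QUEUE 64

  static int size_hdw_queues(int lpfc_hdw_queue, int online_cpus,
                             int vectors_granted)
  {
          int cnt;

          if (lpfc_hdw_queue == 0) {
                  cnt = online_cpus;               /* default: one per cpu */
          } else {
                  cnt = lpfc_hdw_queue;            /* explicit override    */
                  if (cnt > MAX_HDW_QUEUE)
                          cnt = MAX_HDW_QUEUE;     /* non-zero max is 64   */
          }
          if (cnt > vectors_granted)               /* fewer MSI-X vectors  */
                  cnt = vectors_granted;           /* than desired queues  */
          return cnt;
  }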
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Change references from "Broadcom Limited" to "Broadcom Inc." in the
copyright message. Update copyright duration if not yet updated for 2018.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update copyright in files modified as part of 12.0.0.0.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Up until now, all SLI-4 devices had the same doorbells at the same
bar locations. With newer hardware, there are now independent EQ and
CQ doorbells and the bar locations differ.
Prepare the code for new hardware by splitting the eq/cq doorbell into
separate components. The components can be set based on if_type.
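A sketch of the if_type-driven selection; the if_type value and BAR
offsets below are placeholders, not the real SLI-4 register map:
  #include <stdint.h>

  struct sli4_doorbells {
          uint8_t           *drbl_bar;     /* mapped doorbell BAR */
          volatile uint32_t *eq_db;
          volatile uint32_t *cq_db;
  };

  static void set_doorbells(struct sli4_doorbells *d, int if_type)
  {
          if (if_type == 6) {
                  /* newer hardware: independent EQ and CQ doorbells */
                  d->eq_db = (volatile uint32_t *)(d->drbl_bar + 0x120);
                  d->cq_db = (volatile uint32_t *)(d->drbl_bar + 0x140);
          } else {
                  /* older if_types: one combined EQ/CQ doorbell */
                  d->eq_db = (volatile uint32_t *)(d->drbl_bar + 0x040);
                  d->cq_db = d->eq_db;
          }
  }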
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
This is an interesting regression with gcc-8, showing a harmless warning
for correct code:
In file included from include/linux/kernel.h:13:0,
...
from drivers/scsi/lpfc/lpfc_debugfs.c:23:
include/linux/printk.h:301:2: error: 'eq' may be used uninitialized in this function [-Werror=maybe-uninitialized]
printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
^~~~~~
In file included from drivers/scsi/lpfc/lpfc_debugfs.c:58:0:
drivers/scsi/lpfc/lpfc_debugfs.h:451:31: note: 'eq' was declared here
I managed to reduce the warning into a small test case for gcc-8 that I
reported in the gcc bugzilla[1].
As a workaround, this changes the logic to move the two assignments of
'eq' out of the conditions and instead make the index conditional. This
works for all configurations I tried and avoids adding a bogus
initialization.
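The shape of the workaround, reduced to a standalone sketch (only the
variable name 'eq' comes from the warning; everything else is
illustrative):
  struct lpfc_queue { int qid; };

  struct hba {
          struct lpfc_queue **hba_eq;
          int                 io_channel;
  };

  static struct lpfc_queue *pick_eq(struct hba *phba, int qidx)
  {
          int eqidx;

          /* Before: 'eq' was assigned in each branch of a condition, and
           * gcc-8 could not prove the branches were exhaustive.  Now only
           * the index is conditional and the assignment is unconditional,
           * so no bogus initializer is needed. */
          if (qidx < phba->io_channel)
                  eqidx = qidx;
          else
                  eqidx = phba->io_channel - 1;

          return phba->hba_eq[eqidx];
  }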
Acked-by: James Smart <james.smart@broadcom.com>
Link: [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81958
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
On a randconfig build without CONFIG_SCSI_LPFC_DEBUG_FS, I ran into
multiple compile failures:
drivers/scsi/lpfc/lpfc_debugfs.h: In function 'lpfc_debug_dump_wq':
drivers/scsi/lpfc/lpfc_debugfs.h:405:15: error: 'DUMP_FCP' undeclared (first use in this function); did you mean 'DUMP_VAR'?
drivers/scsi/lpfc/lpfc_debugfs.h:405:15: note: each undeclared identifier is reported only once for each function it appears in
drivers/scsi/lpfc/lpfc_debugfs.h:408:22: error: 'DUMP_NVME' undeclared (first use in this function); did you mean 'DUMP_NONE'?
drivers/scsi/lpfc/lpfc_nvmet.c: In function 'lpfc_nvmet_xmt_ls_rsp_cmp':
drivers/scsi/lpfc/lpfc_nvmet.c:109:2: error: implicit declaration of function 'lpfc_nvmeio_data'; did you mean 'lpfc_mem_free'? [-Werror=implicit-function-declaration]
drivers/scsi/lpfc/lpfc_nvmet.c: In function 'lpfc_nvmet_xmt_fcp_op':
drivers/scsi/lpfc/lpfc_nvmet.c:523:10: error: unused variable 'id' [-Werror=unused-variable]
They are all trivial to fix, so I'm doing it in a combined patch here.
Fixes: 1d9d5a9879 ("scsi: lpfc: refactor debugfs queue dump routines")
Fixes: bd2cdd5e40 ("scsi: lpfc: NVME Initiator: Add debugfs support")
Fixes: 2b65e18202 ("scsi: lpfc: NVME Target: Add debugfs support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Update copyrights to 2017 for all files touched in this patch set
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
NVME Initiator: Add debugfs support
Adds debugfs snippets to cover the new NVME initiator functionality
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
NVME Initiator: Base modifications
This patch adds base modifications for NVME initiator support.
The base modifications consist of:
- Formal split of SLI3 rings from SLI-4 WQs (sometimes referred to as
rings as well) as the implementation now varies widely between the two.
- Addition of configuration modes:
SCSI initiator only; NVME initiator only; NVME target only; and
SCSI and NVME initiator.
The configuration mode drives overall adapter configuration,
offloads enabled, and resource splits.
NVME support is only available on SLI-4 devices and newer fw.
- Implements the following based on configuration mode:
- Exchange resources are split by protocol; obviously, if only
one mode is enabled, no split occurs. The default is 50/50, and a
module attribute allows tuning (see the sketch after this list).
- Pools and config parameters are separated per protocol.
- Each protocol has its own set of queues, but the protocols share
interrupt vectors.
SCSI:
SLI3 devices have few queues and the original style of queue
allocation remains.
SLI4 devices piggyback on an "io-channel" concept that
eventually needs to merge with scsi-mq/blk-mq support (it is
underway). For now, the paradigm continues as it existed
before: the io channel allocates N msix vectors and N WQs (N=4 by
default) and either round-robins or uses cpu # modulo N for
scheduling. A bunch of module parameters allow the configuration
to be tuned.
NVME (initiator):
Allocates an msix per cpu (or whatever pci_alloc_irq_vectors
gets)
Allocates a WQ per cpu, and maps the WQs to msix on a WQ #
modulo msix vector count basis.
Module parameters exist to cap/control the config if desired.
- Each protocol has its own buffer and dma pools.
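As referenced in the exchange-resource item above, a minimal sketch of the
50/50 default split with a tunable percentage; the parameter name and
rounding are illustrative:
  static void split_xri_resources(int total_xri, int scsi_pct,
                                  int *scsi_xri, int *nvme_xri)
  {
          /* scsi_pct defaults to 50; a single-protocol configuration
           * simply keeps 100% on that protocol's side. */
          *scsi_xri = total_xri * scsi_pct / 100;
          *nvme_xri = total_xri - *scsi_xri;
  }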
I apologize for the size of the patch.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Create common wq, cq, eq, rq dump functions
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add fcp_io_channel module attribute to control the number of parallel I/O queues
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Fixed debug helper routine that failed to dump CQ and EQ entries in non-MSI-X mode
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Enhancements to Debug infrastructure
- debugfs additions for new hardware.
- Correct stack overflow in lpfc_debugfs_dumpHBASlim_data()
- Correct warning on uninitialized reg_val in lpfc_idiag_drbacc_write()
- Separated the iDiag command for capturing mailbox commands into the
generic issue-mailbox-command entry point and the BSG multi-buffer
handling path.
- Added the capability to capture and dump the mailbox command and external
buffer on completion of the mailbox command so that the outcome can be
examined.
- Changed all the iDiag command structure data array indexing introduced so
far to use properly defined macros.
- Added SLI4 device PCI BAR memory mapped register read/browse, write-by-
value, set-bit, and clear-bit methods for both interface type 0 and
interface type 2.
- Corrected warnings on mbxstatus being uninitialized in error paths in
lpfc_bsg.c
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Debugfs enhancements
- Added iDiag support for new adapters.
- Added queue entry access methods.
- Fixed host/port index in decimal.
- Added Doorbell register access methods.
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
- Add the driver debugfs framework supporting debugfs read and write
operations and the iDiag command structure.
- Add read and write access to SLI4 device PCI config space registers.
- Add driver support for debugfs PCI config space register bit set/clear
methods using a provided bitmask (sketched below).
- Add iDiag driver support for SLI4 device queue diagnostic.
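A generic read-modify-write sketch of the bitmask set/clear methods
referenced above; not the iDiag code itself:
  #include <stdint.h>

  static uint32_t reg_set_bits(uint32_t reg_val, uint32_t mask)
  {
          return reg_val | mask;           /* set every bit in the mask   */
  }

  static uint32_t reg_clr_bits(uint32_t reg_val, uint32_t mask)
  {
          return reg_val & ~mask;          /* clear every bit in the mask */
  }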
Signed-off-by: Alex Iannicelli <alex.iannicelli@emulex.com>
Signed-off-by: James Smart <james.smart@emulex.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Error messages and debugfs updates:
- Fix up GID_FT error messages
- Enhance debugfs with slow_ring_trace, dumpslim and nodelist information
- Add log type (and messages) for vport state changes
- Enhance log messages when ELS retries fail
Signed-off-by: James Smart <James.Smart@emulex.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Following the NPIV support, these changes have accumulated during the
testing and qualification of the driver:
- Fix affinity of ELS ring to slow/deferred event processing
- Fix Ring attention masks
- Defer dev_loss_tmo timeout handling to worker thread
- Consolidate link down error classification for better error checking
- Remove unused/deprecated nlp_initiator_tmr timer
- Fix for async scan - move adapter init code back into pci_probe_one
context. Fix async scan interfaces.
- Expand validation of ability to create vports
- Extract VPI resource cnt from firmware
- Tuning of Login/Reject policies to better deal with overwhelmed targets
- Misc ELS and discovery fixes
- Export the npiv_enable attribute to sysfs
- Mailbox handling fix
- Add debugfs support
- A few other small misc fixes:
- wrong return values, double-frees, bad locking
- Added adapter failure heartbeat
Signed-off-by: James Smart <James.Smart@emulex.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>