License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them (even
if <5 lines) were included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 22:07:57 +08:00
|
|
|
// SPDX-License-Identifier: GPL-2.0
|
2007-09-07 15:15:31 +08:00
|
|
|
/*
|
2008-06-11 00:20:58 +08:00
|
|
|
* zfcp device driver
|
2005-04-17 06:20:36 +08:00
|
|
|
*
|
2008-06-11 00:20:58 +08:00
|
|
|
* Error Recovery Procedures (ERP).
|
2007-09-07 15:15:31 +08:00
|
|
|
*
|
2020-05-09 01:23:32 +08:00
|
|
|
* Copyright IBM Corp. 2002, 2020
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
|
2008-12-25 20:39:53 +08:00
|
|
|
#define KMSG_COMPONENT "zfcp"
|
|
|
|
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
|
|
|
|
|
2009-08-18 21:43:25 +08:00
|
|
|
#include <linux/kthread.h>
|
scsi: zfcp: fix GCC compiler warning emitted with -Wmaybe-uninitialized
GCC v9 emits this warning:
CC drivers/s390/scsi/zfcp_erp.o
drivers/s390/scsi/zfcp_erp.c: In function 'zfcp_erp_action_enqueue':
drivers/s390/scsi/zfcp_erp.c:217:26: warning: 'erp_action' may be used uninitialized in this function [-Wmaybe-uninitialized]
217 | struct zfcp_erp_action *erp_action;
| ^~~~~~~~~~
This is a possible false positive case, as also documented in the GCC
documentation:
https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wmaybe-uninitialized
The actual code-sequence is like this:
Various callers can invoke the function below with the argument "want"
being one of:
ZFCP_ERP_ACTION_REOPEN_ADAPTER,
ZFCP_ERP_ACTION_REOPEN_PORT_FORCED,
ZFCP_ERP_ACTION_REOPEN_PORT, or
ZFCP_ERP_ACTION_REOPEN_LUN.
zfcp_erp_action_enqueue(want, ...)
...
need = zfcp_erp_required_act(want, ...)
need = want
...
maybe: need = ZFCP_ERP_ACTION_REOPEN_PORT
maybe: need = ZFCP_ERP_ACTION_REOPEN_ADAPTER
...
return need
...
zfcp_erp_setup_act(need, ...)
struct zfcp_erp_action *erp_action; // <== line 217
...
switch(need) {
case ZFCP_ERP_ACTION_REOPEN_LUN:
...
erp_action = &zfcp_sdev->erp_action;
WARN_ON_ONCE(erp_action->port != port); // <== access
...
break;
case ZFCP_ERP_ACTION_REOPEN_PORT:
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
...
erp_action = &port->erp_action;
WARN_ON_ONCE(erp_action->port != port); // <== access
...
break;
case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
...
erp_action = &adapter->erp_action;
WARN_ON_ONCE(erp_action->port != NULL); // <== access
...
break;
}
...
WARN_ON_ONCE(erp_action->adapter != adapter); // <== access
When zfcp_erp_setup_act() is called, 'need' will never be anything other
than one of the 4 possible enumeration-names that are used in the
switch-case, and 'erp_action' is initialized for every one of them before
it is used. Thus the warning is a false positive, as documented.
We introduce the extra if{} at the beginning to create an extra code-flow,
so the compiler can be convinced that the switch-case will never see any
other value.
BUG_ON()/BUG() is intentionally not used so as not to crash anything,
should this ever happen anyway - right now it's impossible, as argued
above; and no 'default:' switch-case is introduced, to retain warnings
should 'enum zfcp_erp_act_type' ever be extended without an explicit
case being added. See also v5.0 commit 399b6c8bc9f7 ("scsi: zfcp: drop
old default switch case which might paper over missing case").
Signed-off-by: Benjamin Block <bblock@linux.ibm.com>
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Reviewed-by: Steffen Maier <maier@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2019-07-03 05:02:02 +08:00
|
|
|
#include <linux/bug.h>
|
2005-04-17 06:20:36 +08:00
|
|
|
#include "zfcp_ext.h"
|
2010-02-17 18:18:50 +08:00
|
|
|
#include "zfcp_reqlist.h"
|
scsi: zfcp: Move allocation of the shost object to after xconf- and xport-data
At the moment we allocate and register the Scsi_Host object corresponding
to a zfcp adapter (FCP device) very early in the life cycle of the adapter
- even before we fully discover and initialize the underlying
firmware/hardware. This had the advantage that we could already use the
Scsi_Host object, and fill in all its information during said discover and
initialize.
Due to commit 737eb78e82d5 ("block: Delay default elevator initialization")
(first released in v5.4), we noticed a regression that would prevent us
from using any storage volume if zfcp is configured with support for DIF or
DIX (zfcp.dif=1 || zfcp.dix=1). Doing so would result in an illegal memory
access as soon as the first request is sent with such a configuration. As
an example of a crash resulting from this:
scsi host0: scsi_eh_0: sleeping
scsi host0: zfcp
qdio: 0.0.1900 ZFCP on SC 4bd using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA: W AP
scsi 0:0:0:0: scsi scan: INQUIRY pass 1 length 36
Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 0000000000000000 TEID: 0000000000000483
Fault in home space mode while using kernel ASCE.
AS:0000000035c7c007 R3:00000001effcc007 S:00000001effd1000 P:000000000000003d
Oops: 0004 ilc:3 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in: ...
CPU: 1 PID: 783 Comm: kworker/u760:5 Kdump: loaded Not tainted 5.6.0-rc2-bb-next+ #1
Hardware name: ...
Workqueue: scsi_wq_0 fc_scsi_scan_rport [scsi_transport_fc]
Krnl PSW : 0704e00180000000 000003ff801fcdae (scsi_queue_rq+0x436/0x740 [scsi_mod])
R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
Krnl GPRS: 0fffffffffffffff 0000000000000000 0000000187150120 0000000000000000
000003ff80223d20 000000000000018e 000000018adc6400 0000000187711000
000003e0062337e8 00000001ae719000 0000000187711000 0000000187150000
00000001ab808100 0000000187150120 000003ff801fcd74 000003e0062336a0
Krnl Code: 000003ff801fcd9e: e310a35c0012 lt %r1,860(%r10)
000003ff801fcda4: a7840010 brc 8,000003ff801fcdc4
#000003ff801fcda8: e310b2900004 lg %r1,656(%r11)
>000003ff801fcdae: d71710001000 xc 0(24,%r1),0(%r1)
000003ff801fcdb4: e310b2900004 lg %r1,656(%r11)
000003ff801fcdba: 41201018 la %r2,24(%r1)
000003ff801fcdbe: e32010000024 stg %r2,0(%r1)
000003ff801fcdc4: b904002b lgr %r2,%r11
Call Trace:
[<000003ff801fcdae>] scsi_queue_rq+0x436/0x740 [scsi_mod]
([<000003ff801fcd74>] scsi_queue_rq+0x3fc/0x740 [scsi_mod])
[<00000000349c9970>] blk_mq_dispatch_rq_list+0x390/0x680
[<00000000349d1596>] blk_mq_sched_dispatch_requests+0x196/0x1a8
[<00000000349c7a04>] __blk_mq_run_hw_queue+0x144/0x160
[<00000000349c7ab6>] __blk_mq_delay_run_hw_queue+0x96/0x228
[<00000000349c7d5a>] blk_mq_run_hw_queue+0xd2/0xe0
[<00000000349d194a>] blk_mq_sched_insert_request+0x192/0x1d8
[<00000000349c17b8>] blk_execute_rq_nowait+0x80/0x90
[<00000000349c1856>] blk_execute_rq+0x6e/0xb0
[<000003ff801f8ac2>] __scsi_execute+0xe2/0x1f0 [scsi_mod]
[<000003ff801fef98>] scsi_probe_and_add_lun+0x358/0x840 [scsi_mod]
[<000003ff8020001c>] __scsi_scan_target+0xc4/0x228 [scsi_mod]
[<000003ff80200254>] scsi_scan_target+0xd4/0x100 [scsi_mod]
[<000003ff802d8b96>] fc_scsi_scan_rport+0x96/0xc0 [scsi_transport_fc]
[<0000000034245ce8>] process_one_work+0x458/0x7d0
[<00000000342462a2>] worker_thread+0x242/0x448
[<0000000034250994>] kthread+0x15c/0x170
[<0000000034e1979c>] ret_from_fork+0x30/0x38
INFO: lockdep is turned off.
Last Breaking-Event-Address:
[<000003ff801fbc36>] scsi_add_cmd_to_list+0x9e/0xa8 [scsi_mod]
Kernel panic - not syncing: Fatal exception: panic_on_oops
While this issue is exposed by the commit named above, this is only by
accident. The real issue has existed for longer already - basically since it's
possible to use blk-mq via scsi-mq, and blk-mq pre-allocates all requests
for a tag-set during initialization of the same. For a given Scsi_Host
object this is done when adding the object to the midlayer
(`scsi_add_host()` and such). In `scsi_mq_setup_tags()` the midlayer
calculates how much memory is required for a single scsi_cmnd, and its
additional data, which also might include space for additional protection
data - depending on whether the Scsi_Host has any form of protection
capabilities (`scsi_host_get_prot()`).
The problem is thus: because zfcp does this step before we actually
know whether the firmware/hardware has these capabilities, we don't set any
protection capabilities in the Scsi_Host object. And so, no space is
allocated for additional protection data for requests in the Scsi_Host
tag-set.
Once we fully discover and initialize the FCP device firmware/hardware
(this is done via the firmware commands "Exchange Config Data" and
"Exchange Port Data") we find out whether it actually supports DIF and DIX,
and we set the corresponding capabilities in the Scsi_Host object (in
`zfcp_scsi_set_prot()`). Now the Scsi_Host potentially has protection
capabilities, but the already allocated requests in the tag-set don't have
any space allocated for that.
When we then trigger target scanning or add scsi_devices manually, the
midlayer will use requests from that tag-set, and before sending most
requests, it will also call `scsi_mq_prep_fn()`. To prepare the scsi_cmnd
this function will check again whether the used Scsi_Host has any
protection capabilities - and now it potentially has - and if so, it will
try to initialize the structures assumed to be preallocated, and thus it
causes the crash shown above.
Before delaying the default elevator initialization with the commit named
above, we would always also allocate an elevator for any scsi_device before
ever sending any requests - in contrast to now, where we do it after
device-probing. That elevator in turn would have its own tag-set, and that
is initialized after we went through discovery and initialization of the
underlying firmware/hardware. So requests from that tag-set can be
allocated properly, and if used - unless the user changes/disables the
default elevator - this would hide the underlying issue.
To fix this for any configuration - with or without an elevator - we move
the allocation and registration of the Scsi_Host object for a given FCP
device to after the first complete discovery and initialization of the
underlying firmware/hardware. By doing that we can make all basic
properties of the Scsi_Host known to the midlayer by the time we call
`scsi_add_host()`, including whether we have any protection capabilities.
To do that we have to delay all the accesses that we would have done in the
past during discovery and initialization, and do them instead once we are
finished with it. The previous patches ramp up to this by fencing and
factoring out all these accesses, and make it possible to re-do them later
on. In addition we also make use of the diagnostic buffers we recently
added with
commit 92953c6e0aa7 ("scsi: zfcp: signal incomplete or error for sync exchange config/port data")
commit 7e418833e689 ("scsi: zfcp: diagnostics buffer caching and use for exchange port data")
commit 088210233e6f ("scsi: zfcp: add diagnostics buffer for exchange config data")
(first released in v5.5), because these already cache all the information
we need for that "re-do operation" - the cached information is always
updated during xconf or xport data, so it won't be stale.
In addition to the move and re-do, this patch also updates the
function-documentation of `zfcp_scsi_adapter_register()` and changes how it
reports if a Scsi_Host object already exists. In that case future
recovery-operations can skip this step completely and behave much like they
would do in the past - zfcp does not release a once allocated Scsi_Host
object unless the corresponding FCP device is deconstructed completely.
Link: https://lore.kernel.org/r/030dd6da318bbb529f0b5268ec65cebcd20fc0a3.1588956679.git.bblock@linux.ibm.com
Reviewed-by: Steffen Maier <maier@linux.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2020-05-09 01:23:35 +08:00
|
|
|
#include "zfcp_diag.h"
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
#define ZFCP_MAX_ERPS 3
|
|
|
|
|
|
|
|
enum zfcp_erp_act_flags {
|
|
|
|
ZFCP_STATUS_ERP_TIMEDOUT = 0x10000000,
|
|
|
|
ZFCP_STATUS_ERP_CLOSE_ONLY = 0x01000000,
|
|
|
|
ZFCP_STATUS_ERP_DISMISSED = 0x00200000,
|
|
|
|
ZFCP_STATUS_ERP_LOWMEM = 0x00400000,
|
2010-09-08 20:39:54 +08:00
|
|
|
ZFCP_STATUS_ERP_NO_REF = 0x00800000,
|
2008-07-02 16:56:40 +08:00
|
|
|
};
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2018-11-08 22:44:50 +08:00
|
|
|
/*
|
|
|
|
* Eyecatcher pseudo flag to bitwise or-combine with enum zfcp_erp_act_type.
|
|
|
|
* Used to indicate that an ERP action could not be set up despite a detected
|
|
|
|
* need for some recovery.
|
scsi: zfcp: fix misleading REC trigger trace where erp_action setup failed
If a SCSI device is deleted during scsi_eh host reset, we cannot get a
reference to the SCSI device anymore since scsi_device_get returns !=0 by
design. Assuming the recovery of adapter and port(s) was successful,
zfcp_erp_strategy_followup_success() attempts to trigger a LUN reset for the
half-gone SCSI device. Unfortunately, it causes the following confusing
trace record which states that zfcp will do a LUN recovery as "ERP need" is
ZFCP_ERP_ACTION_REOPEN_LUN == 1 and equals "ERP want".
Old example trace record formatted with zfcpdbf from s390-tools:
Tag: : ersfs_3 ERP, trigger, unit reopen, port reopen succeeded
LUN : 0x<FCP_LUN>
WWPN : 0x<WWPN>
D_ID : 0x<N_Port-ID>
Adapter status : 0x5400050b
Port status : 0x54000001
LUN status : 0x40000000 ZFCP_STATUS_COMMON_RUNNING
but not ZFCP_STATUS_COMMON_UNBLOCKED as it
was closed on close part of adapter reopen
ERP want : 0x01
ERP need : 0x01 misleading
However, zfcp_erp_setup_act() returns NULL as it cannot get the reference.
Hence, zfcp_erp_action_enqueue() takes an early goto out and _NO_ recovery
actually happens.
We always do want the recovery trigger trace record even if no erp_action
could be enqueued, as in this case. For other cases where we did not enqueue
an erp_action, 'need' has always been zero to indicate this. In order to
indicate above goto out, introduce an eyecatcher "flag" to mark the "ERP
need" as 'not needed' but still keep the information which erp_action type,
that zfcp_erp_required_act() had decided upon, is needed. 0xc_ is chosen to
be visibly different from 0x0_ in "ERP want".
New example trace record formatted with zfcpdbf from s390-tools:
Tag: : ersfs_3 ERP, trigger, unit reopen, port reopen succeeded
LUN : 0x<FCP_LUN>
WWPN : 0x<WWPN>
D_ID : 0x<N_Port-ID>
Adapter status : 0x5400050b
Port status : 0x54000001
LUN status : 0x40000000
ERP want : 0x01
ERP need : 0xc1 would need LUN ERP, but no action set up
^
Before v2.6.38 commit ae0904f60fab ("[SCSI] zfcp: Redesign of the debug
tracing for recovery actions.") we could detect this case because the
"erp_action" field in the trace was NULL. The rework removed erp_action as
argument and field from the trace.
This patch here is for tracing. A fix to allow LUN recovery in the case at
hand is a topic for a separate patch.
See also commit fdbd1c5e27da ("[SCSI] zfcp: Allow running unit/LUN shutdown
without acquiring reference") for a similar case and background info.
Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Fixes: ae0904f60fab ("[SCSI] zfcp: Redesign of the debug tracing for recovery actions.")
Cc: <stable@vger.kernel.org> #2.6.38+
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-05-18 01:14:45 +08:00
|
|
|
*/
|
2018-11-08 22:44:50 +08:00
|
|
|
#define ZFCP_ERP_ACTION_NONE 0xc0
|
|
|
|
/*
|
|
|
|
* Eyecatcher pseudo flag to bitwise or-combine with enum zfcp_erp_act_type.
|
|
|
|
* Used to indicate that ERP not needed because the object has
|
|
|
|
* ZFCP_STATUS_COMMON_ERP_FAILED.
|
|
|
|
*/
|
|
|
|
#define ZFCP_ERP_ACTION_FAILED 0xe0
|
2008-07-02 16:56:40 +08:00
|
|
|
|
|
|
|
enum zfcp_erp_act_result {
|
|
|
|
ZFCP_ERP_SUCCEEDED = 0,
|
|
|
|
ZFCP_ERP_FAILED = 1,
|
|
|
|
ZFCP_ERP_CONTINUES = 2,
|
|
|
|
ZFCP_ERP_EXIT = 3,
|
|
|
|
ZFCP_ERP_DISMISSED = 4,
|
|
|
|
ZFCP_ERP_NOMEM = 5,
|
|
|
|
};
|
|
|
|
|
|
|
|
static void zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int mask)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-09-08 20:40:01 +08:00
|
|
|
zfcp_erp_clear_adapter_status(adapter,
|
|
|
|
ZFCP_STATUS_COMMON_UNBLOCKED | mask);
|
2006-09-19 04:29:56 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2018-05-18 01:15:00 +08:00
|
|
|
static bool zfcp_erp_action_is_running(struct zfcp_erp_action *act)
|
2006-09-19 04:29:56 +08:00
|
|
|
{
|
2008-07-02 16:56:40 +08:00
|
|
|
struct zfcp_erp_action *curr_act;
|
|
|
|
|
|
|
|
list_for_each_entry(curr_act, &act->adapter->erp_running_head, list)
|
|
|
|
if (act == curr_act)
|
2018-05-18 01:15:00 +08:00
|
|
|
return true;
|
|
|
|
return false;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
static void zfcp_erp_action_ready(struct zfcp_erp_action *act)
|
2006-09-19 04:29:56 +08:00
|
|
|
{
|
2008-07-02 16:56:40 +08:00
|
|
|
struct zfcp_adapter *adapter = act->adapter;
|
|
|
|
|
2020-07-03 21:20:01 +08:00
|
|
|
list_move(&act->list, &adapter->erp_ready_head);
|
2010-12-02 22:16:12 +08:00
|
|
|
zfcp_dbf_rec_run("erardy1", act);
|
2009-08-18 21:43:25 +08:00
|
|
|
wake_up(&adapter->erp_ready_wq);
|
2010-12-02 22:16:12 +08:00
|
|
|
zfcp_dbf_rec_run("erardy2", act);
|
2006-09-19 04:29:56 +08:00
|
|
|
}
|
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
static void zfcp_erp_action_dismiss(struct zfcp_erp_action *act)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-07-02 16:56:40 +08:00
|
|
|
act->status |= ZFCP_STATUS_ERP_DISMISSED;
|
2018-05-18 01:15:00 +08:00
|
|
|
if (zfcp_erp_action_is_running(act))
|
2008-07-02 16:56:40 +08:00
|
|
|
zfcp_erp_action_ready(act);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2010-09-08 20:39:55 +08:00
|
|
|
static void zfcp_erp_action_dismiss_lun(struct scsi_device *sdev)
|
2008-07-02 16:56:40 +08:00
|
|
|
{
|
2010-09-08 20:39:55 +08:00
|
|
|
struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
|
|
|
|
|
|
|
|
if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_ERP_INUSE)
|
|
|
|
zfcp_erp_action_dismiss(&zfcp_sdev->erp_action);
|
2008-07-02 16:56:40 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
static void zfcp_erp_action_dismiss_port(struct zfcp_port *port)
|
|
|
|
{
|
2010-09-08 20:39:55 +08:00
|
|
|
struct scsi_device *sdev;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_INUSE)
|
|
|
|
zfcp_erp_action_dismiss(&port->erp_action);
|
[SCSI] zfcp: fix schedule-inside-lock in scsi_device list loops
BUG: sleeping function called from invalid context at kernel/workqueue.c:2752
in_atomic(): 1, irqs_disabled(): 1, pid: 360, name: zfcperp0.0.1700
CPU: 1 Not tainted 3.9.3+ #69
Process zfcperp0.0.1700 (pid: 360, task: 0000000075b7e080, ksp: 000000007476bc30)
<snip>
Call Trace:
([<00000000001165de>] show_trace+0x106/0x154)
[<00000000001166a0>] show_stack+0x74/0xf4
[<00000000006ff646>] dump_stack+0xc6/0xd4
[<000000000017f3a0>] __might_sleep+0x128/0x148
[<000000000015ece8>] flush_work+0x54/0x1f8
[<00000000001630de>] __cancel_work_timer+0xc6/0x128
[<00000000005067ac>] scsi_device_dev_release_usercontext+0x164/0x23c
[<0000000000161816>] execute_in_process_context+0x96/0xa8
[<00000000004d33d8>] device_release+0x60/0xc0
[<000000000048af48>] kobject_release+0xa8/0x1c4
[<00000000004f4bf2>] __scsi_iterate_devices+0xfa/0x130
[<000003ff801b307a>] zfcp_erp_strategy+0x4da/0x1014 [zfcp]
[<000003ff801b3caa>] zfcp_erp_thread+0xf6/0x2b0 [zfcp]
[<000000000016b75a>] kthread+0xf2/0xfc
[<000000000070c9de>] kernel_thread_starter+0x6/0xc
[<000000000070c9d8>] kernel_thread_starter+0x0/0xc
Apparently, the ref_count for some scsi_device drops down to zero,
triggering device removal through execute_in_process_context(), while
the lldd error recovery thread iterates through a scsi device list.
Unfortunately, execute_in_process_context() decides to immediately
execute that device removal function, instead of scheduling asynchronous
execution, since it detects process context and thinks it is safe to do
so. But almost all calls to shost_for_each_device() in our lldd are
inside spin_lock_irq, even in thread context. Obviously, schedule()
inside spin_lock_irq sections is a bad idea.
Change the lldd to use the proper iterator function,
__shost_for_each_device(), in combination with required locking.
Occurrences that need to be changed include all calls in zfcp_erp.c,
since those might be executed in zfcp error recovery thread context
with a lock held.
Other occurrences of shost_for_each_device() in zfcp_fsf.c do not
need to be changed (no process context, no surrounding locking).
The problem was introduced in Linux 2.6.37 by commit
b62a8d9b45b971a67a0f8413338c230e3117dff5
"[SCSI] zfcp: Use SCSI device data zfcp_scsi_dev instead of zfcp_unit".
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Peschke <mpeschke@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org #2.6.37+
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
2013-08-22 23:45:37 +08:00
|
|
|
else {
|
|
|
|
spin_lock(port->adapter->scsi_host->host_lock);
|
|
|
|
__shost_for_each_device(sdev, port->adapter->scsi_host)
|
2010-09-08 20:39:55 +08:00
|
|
|
if (sdev_to_zfcp(sdev)->port == port)
|
|
|
|
zfcp_erp_action_dismiss_lun(sdev);
|
|
|
|
spin_unlock(port->adapter->scsi_host->host_lock);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
static void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-07-02 16:56:40 +08:00
|
|
|
struct zfcp_port *port;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_ERP_INUSE)
|
|
|
|
zfcp_erp_action_dismiss(&adapter->erp_action);
|
2009-11-24 23:53:58 +08:00
|
|
|
else {
|
|
|
|
read_lock(&adapter->port_list_lock);
|
|
|
|
list_for_each_entry(port, &adapter->port_list, list)
|
2008-07-02 16:56:40 +08:00
|
|
|
zfcp_erp_action_dismiss_port(port);
|
2009-11-24 23:53:58 +08:00
|
|
|
read_unlock(&adapter->port_list_lock);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2018-11-08 22:44:50 +08:00
|
|
|
static enum zfcp_erp_act_type zfcp_erp_handle_failed(
|
|
|
|
enum zfcp_erp_act_type want, struct zfcp_adapter *adapter,
|
|
|
|
struct zfcp_port *port, struct scsi_device *sdev)
|
2018-05-18 01:14:48 +08:00
|
|
|
{
|
2018-11-08 22:44:50 +08:00
|
|
|
enum zfcp_erp_act_type need = want;
|
2018-05-18 01:14:48 +08:00
|
|
|
struct zfcp_scsi_dev *zsdev;
|
|
|
|
|
|
|
|
switch (want) {
|
|
|
|
case ZFCP_ERP_ACTION_REOPEN_LUN:
|
|
|
|
zsdev = sdev_to_zfcp(sdev);
|
|
|
|
if (atomic_read(&zsdev->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
|
|
|
|
need = 0;
|
|
|
|
break;
|
|
|
|
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
|
|
|
|
if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
|
|
|
|
need = 0;
|
|
|
|
break;
|
|
|
|
case ZFCP_ERP_ACTION_REOPEN_PORT:
|
|
|
|
if (atomic_read(&port->status) &
|
|
|
|
ZFCP_STATUS_COMMON_ERP_FAILED) {
|
|
|
|
need = 0;
|
|
|
|
/* ensure propagation of failed status to new devices */
|
|
|
|
zfcp_erp_set_port_status(
|
|
|
|
port, ZFCP_STATUS_COMMON_ERP_FAILED);
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
|
|
|
|
if (atomic_read(&adapter->status) &
|
|
|
|
ZFCP_STATUS_COMMON_ERP_FAILED) {
|
|
|
|
need = 0;
|
|
|
|
/* ensure propagation of failed status to new devices */
|
|
|
|
zfcp_erp_set_adapter_status(
|
|
|
|
adapter, ZFCP_STATUS_COMMON_ERP_FAILED);
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return need;
|
|
|
|
}
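The "handle failed" logic above can be condensed into a small user-space sketch: if the target object already carries the ERP_FAILED flag, no new recovery is needed, and for port/adapter the flag is re-asserted so newly added child devices inherit it. All names and flag values below are illustrative stand-ins, not the kernel's actual constants.

```c
#include <assert.h>

#define ERP_FAILED 0x1u /* stands in for ZFCP_STATUS_COMMON_ERP_FAILED */

enum act_type { ACT_NONE = 0, ACT_REOPEN_PORT, ACT_REOPEN_ADAPTER };

/* If the target is already marked failed, suppress the recovery and
 * re-assert the failed flag so later-added devices inherit it. */
enum act_type handle_failed(enum act_type want,
			    unsigned int *adapter_status,
			    unsigned int *port_status)
{
	enum act_type need = want;

	switch (want) {
	case ACT_REOPEN_PORT:
		if (*port_status & ERP_FAILED) {
			need = ACT_NONE;
			*port_status |= ERP_FAILED; /* propagate */
		}
		break;
	case ACT_REOPEN_ADAPTER:
		if (*adapter_status & ERP_FAILED) {
			need = ACT_NONE;
			*adapter_status |= ERP_FAILED; /* propagate */
		}
		break;
	default:
		break;
	}
	return need;
}
```

A failed port suppresses a port reopen, while a healthy adapter still allows an adapter reopen.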
|
|
|
|
|
2018-11-08 22:44:50 +08:00
|
|
|
static enum zfcp_erp_act_type zfcp_erp_required_act(enum zfcp_erp_act_type want,
|
|
|
|
struct zfcp_adapter *adapter,
|
2008-07-02 16:56:40 +08:00
|
|
|
struct zfcp_port *port,
|
2010-09-08 20:39:55 +08:00
|
|
|
struct scsi_device *sdev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2018-11-08 22:44:50 +08:00
|
|
|
enum zfcp_erp_act_type need = want;
|
2010-09-08 20:39:55 +08:00
|
|
|
int l_status, p_status, a_status;
|
|
|
|
struct zfcp_scsi_dev *zfcp_sdev;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
switch (want) {
|
2010-09-08 20:39:55 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_LUN:
|
|
|
|
zfcp_sdev = sdev_to_zfcp(sdev);
|
|
|
|
l_status = atomic_read(&zfcp_sdev->status);
|
|
|
|
if (l_status & ZFCP_STATUS_COMMON_ERP_INUSE)
|
2008-07-02 16:56:40 +08:00
|
|
|
return 0;
|
|
|
|
p_status = atomic_read(&port->status);
|
|
|
|
if (!(p_status & ZFCP_STATUS_COMMON_RUNNING) ||
|
2019-10-26 00:12:52 +08:00
|
|
|
p_status & ZFCP_STATUS_COMMON_ERP_FAILED)
|
2008-07-02 16:56:40 +08:00
|
|
|
return 0;
|
|
|
|
if (!(p_status & ZFCP_STATUS_COMMON_UNBLOCKED))
|
|
|
|
need = ZFCP_ERP_ACTION_REOPEN_PORT;
|
2020-03-31 22:21:48 +08:00
|
|
|
fallthrough;
|
2008-07-02 16:56:40 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
|
2010-07-08 15:53:06 +08:00
|
|
|
p_status = atomic_read(&port->status);
|
|
|
|
if (!(p_status & ZFCP_STATUS_COMMON_OPEN))
|
|
|
|
need = ZFCP_ERP_ACTION_REOPEN_PORT;
|
2020-03-31 22:21:48 +08:00
|
|
|
fallthrough;
|
2010-07-08 15:53:06 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_PORT:
|
2008-07-02 16:56:40 +08:00
|
|
|
p_status = atomic_read(&port->status);
|
|
|
|
if (p_status & ZFCP_STATUS_COMMON_ERP_INUSE)
|
|
|
|
return 0;
|
|
|
|
a_status = atomic_read(&adapter->status);
|
|
|
|
if (!(a_status & ZFCP_STATUS_COMMON_RUNNING) ||
|
2019-10-26 00:12:52 +08:00
|
|
|
a_status & ZFCP_STATUS_COMMON_ERP_FAILED)
|
2008-07-02 16:56:40 +08:00
|
|
|
return 0;
|
2010-11-17 21:23:42 +08:00
|
|
|
if (p_status & ZFCP_STATUS_COMMON_NOESC)
|
|
|
|
return need;
|
2008-07-02 16:56:40 +08:00
|
|
|
if (!(a_status & ZFCP_STATUS_COMMON_UNBLOCKED))
|
|
|
|
need = ZFCP_ERP_ACTION_REOPEN_ADAPTER;
|
2020-03-31 22:21:48 +08:00
|
|
|
fallthrough;
|
2008-07-02 16:56:40 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
|
|
|
|
a_status = atomic_read(&adapter->status);
|
|
|
|
if (a_status & ZFCP_STATUS_COMMON_ERP_INUSE)
|
|
|
|
return 0;
|
2009-08-18 21:43:27 +08:00
|
|
|
if (!(a_status & ZFCP_STATUS_COMMON_RUNNING) &&
|
|
|
|
!(a_status & ZFCP_STATUS_COMMON_OPEN))
|
|
|
|
return 0; /* shutdown requested for closed adapter */
|
2008-07-02 16:56:40 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
return need;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
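The escalation pattern in zfcp_erp_required_act() above — a blocked or closed parent promotes the requested action one level up via deliberate switch fallthrough — can be sketched in user space. This is a simplification under stated assumptions: it drops the PORT_FORCED level and the ERP_FAILED/NOESC checks, and the flag values are illustrative, not the kernel's.

```c
#include <assert.h>

#define ST_RUNNING   0x1u
#define ST_UNBLOCKED 0x2u
#define ST_ERP_INUSE 0x4u

enum act { ACT_NONE = 0, ACT_LUN, ACT_PORT, ACT_ADAPTER };

enum act required_act(enum act want, unsigned int lun_st,
		      unsigned int port_st, unsigned int adapter_st)
{
	enum act need = want;

	switch (want) {
	case ACT_LUN:
		if (lun_st & ST_ERP_INUSE)
			return ACT_NONE;	/* recovery already running */
		if (!(port_st & ST_RUNNING))
			return ACT_NONE;
		if (!(port_st & ST_UNBLOCKED))
			need = ACT_PORT;	/* escalate: reopen port first */
		/* fall through */
	case ACT_PORT:
		if (port_st & ST_ERP_INUSE)
			return ACT_NONE;
		if (!(adapter_st & ST_RUNNING))
			return ACT_NONE;
		if (!(adapter_st & ST_UNBLOCKED))
			need = ACT_ADAPTER;	/* escalate: reopen adapter first */
		/* fall through */
	case ACT_ADAPTER:
		if (adapter_st & ST_ERP_INUSE)
			return ACT_NONE;
		break;
	default:
		break;
	}
	return need;
}
```

A LUN reopen under a blocked port becomes a port reopen; if the adapter is blocked as well, it becomes an adapter reopen.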
|
|
|
|
|
2018-11-08 22:44:50 +08:00
|
|
|
static struct zfcp_erp_action *zfcp_erp_setup_act(enum zfcp_erp_act_type need,
|
|
|
|
u32 act_status,
|
2008-07-02 16:56:40 +08:00
|
|
|
struct zfcp_adapter *adapter,
|
|
|
|
struct zfcp_port *port,
|
2010-09-08 20:39:55 +08:00
|
|
|
struct scsi_device *sdev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-07-02 16:56:40 +08:00
|
|
|
struct zfcp_erp_action *erp_action;
|
2010-09-08 20:39:55 +08:00
|
|
|
struct zfcp_scsi_dev *zfcp_sdev;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
scsi: zfcp: fix GCC compiler warning emitted with -Wmaybe-uninitialized
GCC v9 emits this warning:
CC drivers/s390/scsi/zfcp_erp.o
drivers/s390/scsi/zfcp_erp.c: In function 'zfcp_erp_action_enqueue':
drivers/s390/scsi/zfcp_erp.c:217:26: warning: 'erp_action' may be used uninitialized in this function [-Wmaybe-uninitialized]
217 | struct zfcp_erp_action *erp_action;
| ^~~~~~~~~~
This is a possible false positive, as documented in the GCC
documentation:
https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wmaybe-uninitialized
The actual code-sequence is like this:
Various callers can invoke the function below with the argument "want"
being one of:
ZFCP_ERP_ACTION_REOPEN_ADAPTER,
ZFCP_ERP_ACTION_REOPEN_PORT_FORCED,
ZFCP_ERP_ACTION_REOPEN_PORT, or
ZFCP_ERP_ACTION_REOPEN_LUN.
zfcp_erp_action_enqueue(want, ...)
...
need = zfcp_erp_required_act(want, ...)
need = want
...
maybe: need = ZFCP_ERP_ACTION_REOPEN_PORT
maybe: need = ZFCP_ERP_ACTION_REOPEN_ADAPTER
...
return need
...
zfcp_erp_setup_act(need, ...)
struct zfcp_erp_action *erp_action; // <== line 217
...
switch(need) {
case ZFCP_ERP_ACTION_REOPEN_LUN:
...
erp_action = &zfcp_sdev->erp_action;
WARN_ON_ONCE(erp_action->port != port); // <== access
...
break;
case ZFCP_ERP_ACTION_REOPEN_PORT:
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
...
erp_action = &port->erp_action;
WARN_ON_ONCE(erp_action->port != port); // <== access
...
break;
case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
...
erp_action = &adapter->erp_action;
WARN_ON_ONCE(erp_action->port != NULL); // <== access
...
break;
}
...
WARN_ON_ONCE(erp_action->adapter != adapter); // <== access
When zfcp_erp_setup_act() is called, 'need' will never be anything else
than one of the 4 possible enumeration-names that are used in the
switch-case, and 'erp_action' is initialized for every one of them, before
it is used. Thus the warning is a false positive, as documented.
We introduce the extra if{} at the beginning to create an extra code-flow,
so the compiler can be convinced that the switch-case will never see any
other value.
BUG_ON()/BUG() is intentionally not used so that nothing crashes should
this ever happen anyway - right now it's impossible, as argued above. Nor
is a 'default:' switch-case introduced, as its absence retains warnings
should 'enum zfcp_erp_act_type' ever be extended without an explicit case
being added. See also v5.0 commit 399b6c8bc9f7 ("scsi: zfcp: drop old
default switch case which might paper over missing case").
Signed-off-by: Benjamin Block <bblock@linux.ibm.com>
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Reviewed-by: Steffen Maier <maier@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2019-07-03 05:02:02 +08:00
|
|
|
if (WARN_ON_ONCE(need != ZFCP_ERP_ACTION_REOPEN_LUN &&
|
|
|
|
need != ZFCP_ERP_ACTION_REOPEN_PORT &&
|
|
|
|
need != ZFCP_ERP_ACTION_REOPEN_PORT_FORCED &&
|
|
|
|
need != ZFCP_ERP_ACTION_REOPEN_ADAPTER))
|
|
|
|
return NULL;
|
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
switch (need) {
|
2010-09-08 20:39:55 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_LUN:
|
|
|
|
zfcp_sdev = sdev_to_zfcp(sdev);
|
2010-09-08 20:39:54 +08:00
|
|
|
if (!(act_status & ZFCP_STATUS_ERP_NO_REF))
|
2010-09-08 20:39:55 +08:00
|
|
|
if (scsi_device_get(sdev))
|
2010-09-08 20:39:54 +08:00
|
|
|
return NULL;
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE,
|
2010-09-08 20:39:55 +08:00
|
|
|
&zfcp_sdev->status);
|
|
|
|
erp_action = &zfcp_sdev->erp_action;
|
scsi: zfcp: fix erp_action use-before-initialize in REC action trace
v4.10 commit 6f2ce1c6af37 ("scsi: zfcp: fix rport unblock race with LUN
recovery") extended accessing parent pointer fields of struct
zfcp_erp_action for tracing. If an erp_action has never been enqueued
before, these parent pointer fields are uninitialized and NULL. Examples
are zfcp objects freshly added to the parent object's children list,
before enqueueing their first recovery subsequently. In
zfcp_erp_try_rport_unblock(), we iterate such a list. Accessing erp_action
fields can cause a NULL pointer dereference. Since the kernel can read
from lowcore on s390, it does not immediately cause a kernel page
fault. Instead it can cause hangs on trying to acquire the wrong
erp_action->adapter->dbf->rec_lock in zfcp_dbf_rec_action_lvl()
^bogus^
while already holding other locks with IRQs disabled.
Real life example from attaching lots of LUNs in parallel on many CPUs:
crash> bt 17723
PID: 17723 TASK: ... CPU: 25 COMMAND: "zfcperp0.0.1800"
LOWCORE INFO:
-psw : 0x0404300180000000 0x000000000038e424
-function : _raw_spin_lock_wait_flags at 38e424
...
#0 [fdde8fc90] zfcp_dbf_rec_action_lvl at 3e0004e9862 [zfcp]
#1 [fdde8fce8] zfcp_erp_try_rport_unblock at 3e0004dfddc [zfcp]
#2 [fdde8fd38] zfcp_erp_strategy at 3e0004e0234 [zfcp]
#3 [fdde8fda8] zfcp_erp_thread at 3e0004e0a12 [zfcp]
#4 [fdde8fe60] kthread at 173550
#5 [fdde8feb8] kernel_thread_starter at 10add2
zfcp_adapter
zfcp_port
zfcp_unit <address>, 0x404040d600000000
scsi_device NULL, returning early!
zfcp_scsi_dev.status = 0x40000000
0x40000000 ZFCP_STATUS_COMMON_RUNNING
crash> zfcp_unit <address>
struct zfcp_unit {
erp_action = {
adapter = 0x0,
port = 0x0,
unit = 0x0,
},
}
zfcp_erp_action is always fully embedded into its container object. Such
container object is never moved in its object tree (only add or delete).
Hence, erp_action parent pointers can never change.
To fix the issue, initialize the erp_action parent pointers before
adding the erp_action container to any list and thus before it becomes
accessible from outside of its initializing function.
In order to also close the time window between zfcp_erp_setup_act()
memsetting the entire erp_action to zero and setting the parent pointers
again, drop the memset and instead explicitly initialize individually
all erp_action fields except for parent pointers. To be extra careful
not to introduce any other unintended side effect, even keep zeroing the
erp_action fields for list and timer. Also double-check with
WARN_ON_ONCE that erp_action parent pointers never change, so we get to
know when we would deviate from previous behavior.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 6f2ce1c6af37 ("scsi: zfcp: fix rport unblock race with LUN recovery")
Cc: <stable@vger.kernel.org> #2.6.32+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-10-13 21:40:07 +08:00
|
|
|
WARN_ON_ONCE(erp_action->port != port);
|
|
|
|
WARN_ON_ONCE(erp_action->sdev != sdev);
|
2010-09-08 20:39:55 +08:00
|
|
|
if (!(atomic_read(&zfcp_sdev->status) &
|
|
|
|
ZFCP_STATUS_COMMON_RUNNING))
|
2010-09-08 20:39:54 +08:00
|
|
|
act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY;
|
2008-07-02 16:56:40 +08:00
|
|
|
break;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_PORT:
|
|
|
|
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
|
2010-02-17 18:18:56 +08:00
|
|
|
if (!get_device(&port->dev))
|
2009-11-24 23:54:05 +08:00
|
|
|
return NULL;
|
2008-07-02 16:56:40 +08:00
|
|
|
zfcp_erp_action_dismiss_port(port);
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status);
|
2008-07-02 16:56:40 +08:00
|
|
|
erp_action = &port->erp_action;
|
2017-10-13 21:40:07 +08:00
|
|
|
WARN_ON_ONCE(erp_action->port != port);
|
|
|
|
WARN_ON_ONCE(erp_action->sdev != NULL);
|
2008-07-02 16:56:40 +08:00
|
|
|
if (!(atomic_read(&port->status) & ZFCP_STATUS_COMMON_RUNNING))
|
2010-09-08 20:39:54 +08:00
|
|
|
act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY;
|
2008-07-02 16:56:40 +08:00
|
|
|
break;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
|
2009-11-24 23:53:59 +08:00
|
|
|
kref_get(&adapter->ref);
|
2008-07-02 16:56:40 +08:00
|
|
|
zfcp_erp_action_dismiss_adapter(adapter);
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status);
|
2008-07-02 16:56:40 +08:00
|
|
|
erp_action = &adapter->erp_action;
|
2017-10-13 21:40:07 +08:00
|
|
|
WARN_ON_ONCE(erp_action->port != NULL);
|
|
|
|
WARN_ON_ONCE(erp_action->sdev != NULL);
|
2008-07-02 16:56:40 +08:00
|
|
|
if (!(atomic_read(&adapter->status) &
|
|
|
|
ZFCP_STATUS_COMMON_RUNNING))
|
2010-09-08 20:39:54 +08:00
|
|
|
act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY;
|
2008-07-02 16:56:40 +08:00
|
|
|
break;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2017-10-13 21:40:07 +08:00
|
|
|
WARN_ON_ONCE(erp_action->adapter != adapter);
|
|
|
|
memset(&erp_action->list, 0, sizeof(erp_action->list));
|
|
|
|
memset(&erp_action->timer, 0, sizeof(erp_action->timer));
|
|
|
|
erp_action->step = ZFCP_ERP_STEP_UNINITIALIZED;
|
|
|
|
erp_action->fsf_req_id = 0;
|
2018-11-08 22:44:50 +08:00
|
|
|
erp_action->type = need;
|
2010-09-08 20:39:54 +08:00
|
|
|
erp_action->status = act_status;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
return erp_action;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
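The guard pattern described in the -Wmaybe-uninitialized commit message above can be reproduced in a tiny user-space sketch: validating the enum before the switch lets the compiler prove that the pointer is assigned on every remaining path, without adding a 'default:' case that would mask future enum extensions. Types and names below are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

enum act { ACT_NONE = 0, ACT_PORT, ACT_ADAPTER };

struct erp_action { enum act type; };
struct obj { struct erp_action erp_action; };

struct erp_action *setup_act(enum act need, struct obj *adapter,
			     struct obj *port)
{
	struct erp_action *act;

	/* Guard first: every path through the switch below now provably
	 * assigns 'act', silencing -Wmaybe-uninitialized without a
	 * 'default:' case that could paper over missing cases. */
	if (need != ACT_PORT && need != ACT_ADAPTER)
		return NULL;

	switch (need) {
	case ACT_PORT:
		act = &port->erp_action;
		break;
	case ACT_ADAPTER:
		act = &adapter->erp_action;
		break;
	case ACT_NONE: /* unreachable after the guard */
		return NULL;
	}
	act->type = need;
	return act;
}
```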
|
|
|
|
|
2018-11-08 22:44:50 +08:00
|
|
|
static void zfcp_erp_action_enqueue(enum zfcp_erp_act_type want,
|
|
|
|
struct zfcp_adapter *adapter,
|
2018-05-18 01:15:01 +08:00
|
|
|
struct zfcp_port *port,
|
|
|
|
struct scsi_device *sdev,
|
2018-11-08 22:44:49 +08:00
|
|
|
char *dbftag, u32 act_status)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2018-11-08 22:44:50 +08:00
|
|
|
enum zfcp_erp_act_type need;
|
2010-12-02 22:16:12 +08:00
|
|
|
struct zfcp_erp_action *act;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2018-05-18 01:14:48 +08:00
|
|
|
need = zfcp_erp_handle_failed(want, adapter, port, sdev);
|
|
|
|
if (!need) {
|
|
|
|
need = ZFCP_ERP_ACTION_FAILED; /* marker for trace */
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2018-05-18 01:14:49 +08:00
|
|
|
if (!adapter->erp_thread) {
|
|
|
|
need = ZFCP_ERP_ACTION_NONE; /* marker for trace */
|
|
|
|
goto out;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2010-09-08 20:39:55 +08:00
|
|
|
need = zfcp_erp_required_act(want, adapter, port, sdev);
|
2008-07-02 16:56:40 +08:00
|
|
|
if (!need)
|
2005-04-17 06:20:36 +08:00
|
|
|
goto out;
|
|
|
|
|
2010-09-08 20:39:55 +08:00
|
|
|
act = zfcp_erp_setup_act(need, act_status, adapter, port, sdev);
|
scsi: zfcp: fix misleading REC trigger trace where erp_action setup failed
If a SCSI device is deleted during scsi_eh host reset, we cannot get a
reference to the SCSI device anymore since scsi_device_get returns !=0 by
design. Assuming the recovery of adapter and port(s) was successful,
zfcp_erp_strategy_followup_success() attempts to trigger a LUN reset for the
half-gone SCSI device. Unfortunately, it causes the following confusing
trace record which states that zfcp will do a LUN recovery as "ERP need" is
ZFCP_ERP_ACTION_REOPEN_LUN == 1 and equals "ERP want".
Old example trace record formatted with zfcpdbf from s390-tools:
Tag: : ersfs_3 ERP, trigger, unit reopen, port reopen succeeded
LUN : 0x<FCP_LUN>
WWPN : 0x<WWPN>
D_ID : 0x<N_Port-ID>
Adapter status : 0x5400050b
Port status : 0x54000001
LUN status : 0x40000000 ZFCP_STATUS_COMMON_RUNNING
but not ZFCP_STATUS_COMMON_UNBLOCKED as it
was closed on close part of adapter reopen
ERP want : 0x01
ERP need : 0x01 misleading
However, zfcp_erp_setup_act() returns NULL as it cannot get the reference.
Hence, zfcp_erp_action_enqueue() takes an early goto out and _NO_ recovery
actually happens.
We always do want the recovery trigger trace record even if no erp_action
could be enqueued as in this case. For other cases where we did not enqueue
an erp_action, 'need' has always been zero to indicate this. In order to
indicate above goto out, introduce an eyecatcher "flag" to mark the "ERP
need" as 'not needed' but still keep the information which erp_action type,
that zfcp_erp_required_act() had decided upon, is needed. 0xc_ is chosen to
be visibly different from 0x0_ in "ERP want".
New example trace record formatted with zfcpdbf from s390-tools:
Tag: : ersfs_3 ERP, trigger, unit reopen, port reopen succeeded
LUN : 0x<FCP_LUN>
WWPN : 0x<WWPN>
D_ID : 0x<N_Port-ID>
Adapter status : 0x5400050b
Port status : 0x54000001
LUN status : 0x40000000
ERP want : 0x01
ERP need : 0xc1 would need LUN ERP, but no action set up
^
Before v2.6.38 commit ae0904f60fab ("[SCSI] zfcp: Redesign of the debug
tracing for recovery actions.") we could detect this case because the
"erp_action" field in the trace was NULL. The rework removed erp_action as
argument and field from the trace.
This patch here is for tracing. A fix to allow LUN recovery in the case at
hand is a topic for a separate patch.
See also commit fdbd1c5e27da ("[SCSI] zfcp: Allow running unit/LUN shutdown
without acquiring reference") for a similar case and background info.
Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Fixes: ae0904f60fab ("[SCSI] zfcp: Redesign of the debug tracing for recovery actions.")
Cc: <stable@vger.kernel.org> #2.6.38+
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-05-18 01:14:45 +08:00
|
|
|
if (!act) {
|
|
|
|
need |= ZFCP_ERP_ACTION_NONE; /* marker for trace */
|
2008-07-02 16:56:40 +08:00
|
|
|
goto out;
|
2018-05-18 01:14:45 +08:00
|
|
|
}
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_or(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status);
|
2008-07-02 16:56:40 +08:00
|
|
|
++adapter->erp_total_count;
|
|
|
|
list_add_tail(&act->list, &adapter->erp_ready_head);
|
2009-08-18 21:43:25 +08:00
|
|
|
wake_up(&adapter->erp_ready_wq);
|
2005-04-17 06:20:36 +08:00
|
|
|
out:
|
2018-11-08 22:44:49 +08:00
|
|
|
zfcp_dbf_rec_trig(dbftag, adapter, port, sdev, want, need);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
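The trace-marker convention from the "misleading REC trigger trace" commit message above ("ERP need : 0xc1 would need LUN ERP, but no action set up") boils down to one OR: when no erp_action could be set up, the decided action type is kept in the low bits and the 0xc_ eyecatcher is OR'ed in. The helper name below is hypothetical; the flag values follow the trace example.

```c
#include <assert.h>

/* Values from the trace example: 'want' 0x01 (LUN reopen); the 0xc0
 * eyecatcher makes 0xc1 visibly different from a plain 0x01 'need'. */
#define ACT_REOPEN_LUN 0x01u
#define ACT_NONE_FLAG  0xc0u /* decided, but no action set up */

/* The trigger trace is always written; on setup failure the decided
 * type survives in the low bits with the eyecatcher OR'ed in. */
unsigned int trace_need(unsigned int decided, int setup_succeeded)
{
	return setup_succeeded ? decided : (decided | ACT_NONE_FLAG);
}
```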
|
|
|
|
|
2018-11-08 22:44:49 +08:00
|
|
|
void zfcp_erp_port_forced_no_port_dbf(char *dbftag,
|
|
|
|
struct zfcp_adapter *adapter,
|
2018-05-18 01:14:46 +08:00
|
|
|
u64 port_name, u32 port_id)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
static /* don't waste stack */ struct zfcp_port tmpport;
|
|
|
|
|
|
|
|
write_lock_irqsave(&adapter->erp_lock, flags);
|
|
|
|
/* Stand-in zfcp port with fields just good enough for
|
|
|
|
* zfcp_dbf_rec_trig() and zfcp_dbf_set_common().
|
|
|
|
* Under lock because tmpport is static.
|
|
|
|
*/
|
|
|
|
atomic_set(&tmpport.status, -1); /* unknown */
|
|
|
|
tmpport.wwpn = port_name;
|
|
|
|
tmpport.d_id = port_id;
|
2018-11-08 22:44:49 +08:00
|
|
|
zfcp_dbf_rec_trig(dbftag, adapter, &tmpport, NULL,
|
2018-05-18 01:14:46 +08:00
|
|
|
ZFCP_ERP_ACTION_REOPEN_PORT_FORCED,
|
|
|
|
ZFCP_ERP_ACTION_NONE);
|
|
|
|
write_unlock_irqrestore(&adapter->erp_lock, flags);
|
|
|
|
}
|
|
|
|
|
2018-05-18 01:15:01 +08:00
|
|
|
static void _zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter,
|
2018-11-08 22:44:49 +08:00
|
|
|
int clear_mask, char *dbftag)
|
2008-07-02 16:56:40 +08:00
|
|
|
{
|
|
|
|
zfcp_erp_adapter_block(adapter, clear_mask);
|
2009-03-02 20:09:08 +08:00
|
|
|
zfcp_scsi_schedule_rports_block(adapter);
|
2008-07-02 16:56:40 +08:00
|
|
|
|
2018-05-18 01:15:01 +08:00
|
|
|
zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER,
|
2018-11-08 22:44:49 +08:00
|
|
|
adapter, NULL, NULL, dbftag, 0);
|
2008-07-02 16:56:40 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* zfcp_erp_adapter_reopen - Reopen adapter.
|
|
|
|
* @adapter: Adapter to reopen.
|
|
|
|
* @clear: Status flags to clear.
|
2018-11-08 22:44:49 +08:00
|
|
|
* @dbftag: Tag for debug trace event.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2018-11-08 22:44:49 +08:00
|
|
|
void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear,
|
|
|
|
char *dbftag)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
|
2009-11-24 23:53:58 +08:00
|
|
|
zfcp_erp_adapter_block(adapter, clear);
|
|
|
|
zfcp_scsi_schedule_rports_block(adapter);
|
|
|
|
|
|
|
|
write_lock_irqsave(&adapter->erp_lock, flags);
|
2018-05-18 01:14:48 +08:00
|
|
|
zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER, adapter,
|
2018-11-08 22:44:49 +08:00
|
|
|
NULL, NULL, dbftag, 0);
|
2009-11-24 23:53:58 +08:00
|
|
|
write_unlock_irqrestore(&adapter->erp_lock, flags);
|
2008-07-02 16:56:40 +08:00
|
|
|
}

/**
 * zfcp_erp_adapter_shutdown - Shutdown adapter.
 * @adapter: Adapter to shut down.
 * @clear: Status flags to clear.
 * @dbftag: Tag for debug trace event.
 */
void zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear,
			       char *dbftag)
{
	int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;

	zfcp_erp_adapter_reopen(adapter, clear | flags, dbftag);
}

/**
 * zfcp_erp_port_shutdown - Shutdown port
 * @port: Port to shut down.
 * @clear: Status flags to clear.
 * @dbftag: Tag for debug trace event.
 */
void zfcp_erp_port_shutdown(struct zfcp_port *port, int clear, char *dbftag)
{
	int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;

	zfcp_erp_port_reopen(port, clear | flags, dbftag);
}

static void zfcp_erp_port_block(struct zfcp_port *port, int clear)
{
	zfcp_erp_clear_port_status(port,
				   ZFCP_STATUS_COMMON_UNBLOCKED | clear);
}

static void _zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear,
					 char *dbftag)
{
	zfcp_erp_port_block(port, clear);
	zfcp_scsi_schedule_rport_block(port);

	zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT_FORCED,
				port->adapter, port, NULL, dbftag, 0);
}

/**
 * zfcp_erp_port_forced_reopen - Forced close of port and open again
 * @port: Port to force close and to reopen.
 * @clear: Status flags to clear.
 * @dbftag: Tag for debug trace event.
 */
void zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear,
				 char *dbftag)
{
	unsigned long flags;
	struct zfcp_adapter *adapter = port->adapter;

	write_lock_irqsave(&adapter->erp_lock, flags);
	_zfcp_erp_port_forced_reopen(port, clear, dbftag);
	write_unlock_irqrestore(&adapter->erp_lock, flags);
}

static void _zfcp_erp_port_reopen(struct zfcp_port *port, int clear,
				  char *dbftag)
{
	zfcp_erp_port_block(port, clear);
	zfcp_scsi_schedule_rport_block(port);

	zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT,
				port->adapter, port, NULL, dbftag, 0);
}

/**
 * zfcp_erp_port_reopen - trigger remote port recovery
 * @port: port to recover
 * @clear: flags in port status to be cleared
 * @dbftag: Tag for debug trace event.
 */
void zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *dbftag)
{
	unsigned long flags;
	struct zfcp_adapter *adapter = port->adapter;

	write_lock_irqsave(&adapter->erp_lock, flags);
	_zfcp_erp_port_reopen(port, clear, dbftag);
	write_unlock_irqrestore(&adapter->erp_lock, flags);
}

static void zfcp_erp_lun_block(struct scsi_device *sdev, int clear_mask)
{
	zfcp_erp_clear_lun_status(sdev,
				  ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask);
}

static void _zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear,
				 char *dbftag, u32 act_status)
{
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
	struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;

	zfcp_erp_lun_block(sdev, clear);

	zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_LUN, adapter,
				zfcp_sdev->port, sdev, dbftag, act_status);
}

/**
 * zfcp_erp_lun_reopen - initiate reopen of a LUN
 * @sdev: SCSI device / LUN to be reopened
 * @clear: specifies flags in LUN status to be cleared
 * @dbftag: Tag for debug trace event.
 */
void zfcp_erp_lun_reopen(struct scsi_device *sdev, int clear, char *dbftag)
{
	unsigned long flags;
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
	struct zfcp_port *port = zfcp_sdev->port;
	struct zfcp_adapter *adapter = port->adapter;

	write_lock_irqsave(&adapter->erp_lock, flags);
	_zfcp_erp_lun_reopen(sdev, clear, dbftag, 0);
	write_unlock_irqrestore(&adapter->erp_lock, flags);
}

/**
 * zfcp_erp_lun_shutdown - Shutdown LUN
 * @sdev: SCSI device / LUN to shut down.
 * @clear: Status flags to clear.
 * @dbftag: Tag for debug trace event.
 */
void zfcp_erp_lun_shutdown(struct scsi_device *sdev, int clear, char *dbftag)
{
	int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;

	zfcp_erp_lun_reopen(sdev, clear | flags, dbftag);
}

/**
 * zfcp_erp_lun_shutdown_wait - Shutdown LUN and wait for erp completion
 * @sdev: SCSI device / LUN to shut down.
 * @dbftag: Tag for debug trace event.
 *
 * Do not acquire a reference for the LUN when creating the ERP
 * action. It is safe because this function waits for the ERP to
 * complete first. This allows shutting down the LUN even when the
 * SCSI device is in state SDEV_DEL, where scsi_device_get would fail.
 */
void zfcp_erp_lun_shutdown_wait(struct scsi_device *sdev, char *dbftag)
{
	unsigned long flags;
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
	struct zfcp_port *port = zfcp_sdev->port;
	struct zfcp_adapter *adapter = port->adapter;
	int clear = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;

	write_lock_irqsave(&adapter->erp_lock, flags);
	_zfcp_erp_lun_reopen(sdev, clear, dbftag, ZFCP_STATUS_ERP_NO_REF);
	write_unlock_irqrestore(&adapter->erp_lock, flags);

	zfcp_erp_wait(adapter);
}

static int zfcp_erp_status_change_set(unsigned long mask, atomic_t *status)
{
	return (atomic_read(status) ^ mask) & mask;
}

static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter)
{
	if (zfcp_erp_status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED,
				       &adapter->status))
		zfcp_dbf_rec_run("eraubl1", &adapter->erp_action);
	atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status);
}

static void zfcp_erp_port_unblock(struct zfcp_port *port)
{
	if (zfcp_erp_status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED,
				       &port->status))
		zfcp_dbf_rec_run("erpubl1", &port->erp_action);
	atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status);
}

static void zfcp_erp_lun_unblock(struct scsi_device *sdev)
{
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);

	if (zfcp_erp_status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED,
				       &zfcp_sdev->status))
		zfcp_dbf_rec_run("erlubl1", &sdev_to_zfcp(sdev)->erp_action);
	atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status);
}

static void zfcp_erp_action_to_running(struct zfcp_erp_action *erp_action)
{
	list_move(&erp_action->list, &erp_action->adapter->erp_running_head);
	zfcp_dbf_rec_run("erator1", erp_action);
}

static void zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *act)
{
	struct zfcp_adapter *adapter = act->adapter;
	struct zfcp_fsf_req *req;

	if (!act->fsf_req_id)
		return;

	spin_lock(&adapter->req_list->lock);
	req = _zfcp_reqlist_find(adapter->req_list, act->fsf_req_id);
	if (req && req->erp_action == act) {
		if (act->status & (ZFCP_STATUS_ERP_DISMISSED |
				   ZFCP_STATUS_ERP_TIMEDOUT)) {
			req->status |= ZFCP_STATUS_FSFREQ_DISMISSED;
			zfcp_dbf_rec_run("erscf_1", act);
			/* lock-free concurrent access with
			 * zfcp_erp_timeout_handler()
			 */
			WRITE_ONCE(req->erp_action, NULL);
		}
		if (act->status & ZFCP_STATUS_ERP_TIMEDOUT)
			zfcp_dbf_rec_run("erscf_2", act);
		if (req->status & ZFCP_STATUS_FSFREQ_DISMISSED)
			act->fsf_req_id = 0;
	} else
		act->fsf_req_id = 0;
	spin_unlock(&adapter->req_list->lock);
}

/**
 * zfcp_erp_notify - Trigger ERP action.
 * @erp_action: ERP action to continue.
 * @set_mask: ERP action status flags to set.
 */
void zfcp_erp_notify(struct zfcp_erp_action *erp_action, unsigned long set_mask)
{
	struct zfcp_adapter *adapter = erp_action->adapter;
	unsigned long flags;

	write_lock_irqsave(&adapter->erp_lock, flags);
	if (zfcp_erp_action_is_running(erp_action)) {
		erp_action->status |= set_mask;
		zfcp_erp_action_ready(erp_action);
	}
	write_unlock_irqrestore(&adapter->erp_lock, flags);
}

/**
 * zfcp_erp_timeout_handler - Trigger ERP action from timed out ERP request
 * @t: timer list entry embedded in zfcp FSF request
 */
void zfcp_erp_timeout_handler(struct timer_list *t)
{
	struct zfcp_fsf_req *fsf_req = from_timer(fsf_req, t, timer);
	struct zfcp_erp_action *act;

	if (fsf_req->status & ZFCP_STATUS_FSFREQ_DISMISSED)
		return;
	/* lock-free concurrent access with zfcp_erp_strategy_check_fsfreq() */
	act = READ_ONCE(fsf_req->erp_action);
	if (!act)
		return;
	zfcp_erp_notify(act, ZFCP_STATUS_ERP_TIMEDOUT);
}

static void zfcp_erp_memwait_handler(struct timer_list *t)
{
	struct zfcp_erp_action *act = from_timer(act, t, timer);

	zfcp_erp_notify(act, 0);
}

static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action)
{
	timer_setup(&erp_action->timer, zfcp_erp_memwait_handler, 0);
	erp_action->timer.expires = jiffies + HZ;
	add_timer(&erp_action->timer);
}

void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
				     int clear, char *dbftag)
{
	unsigned long flags;
	struct zfcp_port *port;

	write_lock_irqsave(&adapter->erp_lock, flags);
	read_lock(&adapter->port_list_lock);
	list_for_each_entry(port, &adapter->port_list, list)
		_zfcp_erp_port_forced_reopen(port, clear, dbftag);
	read_unlock(&adapter->port_list_lock);
	write_unlock_irqrestore(&adapter->erp_lock, flags);
}

static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter,
				      int clear, char *dbftag)
{
	struct zfcp_port *port;

	read_lock(&adapter->port_list_lock);
	list_for_each_entry(port, &adapter->port_list, list)
		_zfcp_erp_port_reopen(port, clear, dbftag);
	read_unlock(&adapter->port_list_lock);
}
|
|
|
|
|
2010-09-08 20:39:55 +08:00
|
|
|
static void _zfcp_erp_lun_reopen_all(struct zfcp_port *port, int clear,
|
2018-11-08 22:44:49 +08:00
|
|
|
char *dbftag)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-09-08 20:39:55 +08:00
|
|
|
struct scsi_device *sdev;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
[SCSI] zfcp: fix schedule-inside-lock in scsi_device list loops
BUG: sleeping function called from invalid context at kernel/workqueue.c:2752
in_atomic(): 1, irqs_disabled(): 1, pid: 360, name: zfcperp0.0.1700
CPU: 1 Not tainted 3.9.3+ #69
Process zfcperp0.0.1700 (pid: 360, task: 0000000075b7e080, ksp: 000000007476bc30)
<snip>
Call Trace:
([<00000000001165de>] show_trace+0x106/0x154)
[<00000000001166a0>] show_stack+0x74/0xf4
[<00000000006ff646>] dump_stack+0xc6/0xd4
[<000000000017f3a0>] __might_sleep+0x128/0x148
[<000000000015ece8>] flush_work+0x54/0x1f8
[<00000000001630de>] __cancel_work_timer+0xc6/0x128
[<00000000005067ac>] scsi_device_dev_release_usercontext+0x164/0x23c
[<0000000000161816>] execute_in_process_context+0x96/0xa8
[<00000000004d33d8>] device_release+0x60/0xc0
[<000000000048af48>] kobject_release+0xa8/0x1c4
[<00000000004f4bf2>] __scsi_iterate_devices+0xfa/0x130
[<000003ff801b307a>] zfcp_erp_strategy+0x4da/0x1014 [zfcp]
[<000003ff801b3caa>] zfcp_erp_thread+0xf6/0x2b0 [zfcp]
[<000000000016b75a>] kthread+0xf2/0xfc
[<000000000070c9de>] kernel_thread_starter+0x6/0xc
[<000000000070c9d8>] kernel_thread_starter+0x0/0xc
Apparently, the ref_count for some scsi_device drops down to zero,
triggering device removal through execute_in_process_context(), while
the lldd error recovery thread iterates through a scsi device list.
Unfortunately, execute_in_process_context() decides to immediately
execute that device removal function, instead of scheduling asynchronous
execution, since it detects process context and thinks it is safe to do
so. But almost all calls to shost_for_each_device() in our lldd are
inside spin_lock_irq, even in thread context. Obviously, schedule()
inside spin_lock_irq sections is a bad idea.
Change the lldd to use the proper iterator function,
__shost_for_each_device(), in combination with required locking.
Occurences that need to be changed include all calls in zfcp_erp.c,
since those might be executed in zfcp error recovery thread context
with a lock held.
Other occurences of shost_for_each_device() in zfcp_fsf.c do not
need to be changed (no process context, no surrounding locking).
The problem was introduced in Linux 2.6.37 by commit
b62a8d9b45b971a67a0f8413338c230e3117dff5
"[SCSI] zfcp: Use SCSI device data zfcp_scsi_dev instead of zfcp_unit".
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Peschke <mpeschke@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org #2.6.37+
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
2013-08-22 23:45:37 +08:00
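The failure mode described above can be modeled in plain userspace C. This is an illustrative sketch, not the kernel API: `struct dev`, `put_dev()` and the `lock_held` flag are invented stand-ins for the scsi_device refcount, its release path, and the host_lock.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model (not the kernel API): each device carries a refcount,
 * and dropping the last reference runs a release step that may sleep -
 * which is illegal inside a spinlock section. */
struct dev { int refs; struct dev *next; };

static int lock_held; /* stands in for spin_lock_irq(shost->host_lock) */

/* returns 1 if the last reference was dropped and release ran */
static int put_dev(struct dev *d)
{
	if (--d->refs > 0)
		return 0;
	/* release may sleep (e.g. flush_work in the scsi_device release) */
	assert(!lock_held && "schedule inside spinlock section");
	return 1;
}

/* shost_for_each_device()-style: takes and drops a reference per
 * element, so the loop itself can end up in the sleeping release. */
static int iterate_with_refs(struct dev *head)
{
	int released = 0;

	for (struct dev *d = head; d; d = d->next) {
		d->refs++;
		/* ... loop body ... */
		released += put_dev(d);
	}
	return released;
}

/* __shost_for_each_device()-style: the caller holds the lock and no
 * refcounting happens, so nothing in the loop can trigger a release. */
static int iterate_locked(struct dev *head)
{
	int visited = 0;

	lock_held = 1;
	for (struct dev *d = head; d; d = d->next)
		visited++; /* ... loop body, no get/put ... */
	lock_held = 0;
	return visited;
}
```

In this model, `iterate_with_refs()` is only safe because no lock is held around it; if another thread had dropped its reference meanwhile, the final `put_dev()` inside the loop would run the sleeping release, which is exactly the in_atomic() splat shown above when done under the host lock.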
	spin_lock(port->adapter->scsi_host->host_lock);
	__shost_for_each_device(sdev, port->adapter->scsi_host)
		if (sdev_to_zfcp(sdev)->port == port)
			_zfcp_erp_lun_reopen(sdev, clear, dbftag, 0);
	spin_unlock(port->adapter->scsi_host->host_lock);
}

static void zfcp_erp_strategy_followup_failed(struct zfcp_erp_action *act)
{
	switch (act->type) {
	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		_zfcp_erp_adapter_reopen(act->adapter, 0, "ersff_1");
		break;
	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
		_zfcp_erp_port_forced_reopen(act->port, 0, "ersff_2");
		break;
	case ZFCP_ERP_ACTION_REOPEN_PORT:
		_zfcp_erp_port_reopen(act->port, 0, "ersff_3");
		break;
	case ZFCP_ERP_ACTION_REOPEN_LUN:
		_zfcp_erp_lun_reopen(act->sdev, 0, "ersff_4", 0);
		break;
	}
}

static void zfcp_erp_strategy_followup_success(struct zfcp_erp_action *act)
{
	switch (act->type) {
	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		_zfcp_erp_port_reopen_all(act->adapter, 0, "ersfs_1");
		break;
	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
		_zfcp_erp_port_reopen(act->port, 0, "ersfs_2");
		break;
	case ZFCP_ERP_ACTION_REOPEN_PORT:
		_zfcp_erp_lun_reopen_all(act->port, 0, "ersfs_3");
		break;
	case ZFCP_ERP_ACTION_REOPEN_LUN:
		/* NOP */
		break;
	}
}

static void zfcp_erp_wakeup(struct zfcp_adapter *adapter)
{
	unsigned long flags;

	read_lock_irqsave(&adapter->erp_lock, flags);
	if (list_empty(&adapter->erp_ready_head) &&
	    list_empty(&adapter->erp_running_head)) {
		atomic_andnot(ZFCP_STATUS_ADAPTER_ERP_PENDING,
			      &adapter->status);
		wake_up(&adapter->erp_done_wqh);
	}
	read_unlock_irqrestore(&adapter->erp_lock, flags);
}

static void zfcp_erp_enqueue_ptp_port(struct zfcp_adapter *adapter)
{
	struct zfcp_port *port;

	port = zfcp_port_enqueue(adapter, adapter->peer_wwpn, 0,
				 adapter->peer_d_id);
	if (IS_ERR(port)) /* error or port already attached */
		return;
	zfcp_erp_port_reopen(port, 0, "ereptp1");
}

static enum zfcp_erp_act_result zfcp_erp_adapter_strat_fsf_xconf(
	struct zfcp_erp_action *erp_action)
{
	int retries;
	int sleep = 1;
	struct zfcp_adapter *adapter = erp_action->adapter;

	atomic_andnot(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status);

	for (retries = 7; retries; retries--) {
		atomic_andnot(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
			      &adapter->status);
		write_lock_irq(&adapter->erp_lock);
		zfcp_erp_action_to_running(erp_action);
		write_unlock_irq(&adapter->erp_lock);
		if (zfcp_fsf_exchange_config_data(erp_action)) {
			atomic_andnot(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
				      &adapter->status);
			return ZFCP_ERP_FAILED;
		}

		wait_event(adapter->erp_ready_wq,
			   !list_empty(&adapter->erp_ready_head));
		if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT)
			break;

		if (!(atomic_read(&adapter->status) &
		      ZFCP_STATUS_ADAPTER_HOST_CON_INIT))
			break;

		ssleep(sleep);
		sleep *= 2;
	}

	atomic_andnot(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
		      &adapter->status);

	if (!(atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_XCONFIG_OK))
		return ZFCP_ERP_FAILED;

	return ZFCP_ERP_SUCCEEDED;
}

static void
zfcp_erp_adapter_strategy_open_ptp_port(struct zfcp_adapter *const adapter)
{
	if (fc_host_port_type(adapter->scsi_host) == FC_PORTTYPE_PTP)
		zfcp_erp_enqueue_ptp_port(adapter);
}

static enum zfcp_erp_act_result zfcp_erp_adapter_strategy_open_fsf_xport(
	struct zfcp_erp_action *act)
{
	int ret;
	struct zfcp_adapter *adapter = act->adapter;

	write_lock_irq(&adapter->erp_lock);
	zfcp_erp_action_to_running(act);
	write_unlock_irq(&adapter->erp_lock);

	ret = zfcp_fsf_exchange_port_data(act);
	if (ret == -EOPNOTSUPP)
		return ZFCP_ERP_SUCCEEDED;
	if (ret)
		return ZFCP_ERP_FAILED;

	zfcp_dbf_rec_run("erasox1", act);
	wait_event(adapter->erp_ready_wq,
		   !list_empty(&adapter->erp_ready_head));
	zfcp_dbf_rec_run("erasox2", act);
	if (act->status & ZFCP_STATUS_ERP_TIMEDOUT)
		return ZFCP_ERP_FAILED;

	return ZFCP_ERP_SUCCEEDED;
}
scsi: zfcp: Move allocation of the shost object to after xconf- and xport-data
At the moment we allocate and register the Scsi_Host object corresponding
to a zfcp adapter (FCP device) very early in the life cycle of the adapter
- even before we fully discover and initialize the underlying
firmware/hardware. This had the advantage that we could already use the
Scsi_Host object, and fill in all its information during said discovery and
initialization.
Due to commit 737eb78e82d5 ("block: Delay default elevator initialization")
(first released in v5.4), we noticed a regression that would prevent us
from using any storage volume if zfcp is configured with support for DIF or
DIX (zfcp.dif=1 || zfcp.dix=1). Doing so would result in an illegal memory
access as soon as the first request is sent with such a configuration. An
example crash resulting from this:
scsi host0: scsi_eh_0: sleeping
scsi host0: zfcp
qdio: 0.0.1900 ZFCP on SC 4bd using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA: W AP
scsi 0:0:0:0: scsi scan: INQUIRY pass 1 length 36
Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 0000000000000000 TEID: 0000000000000483
Fault in home space mode while using kernel ASCE.
AS:0000000035c7c007 R3:00000001effcc007 S:00000001effd1000 P:000000000000003d
Oops: 0004 ilc:3 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in: ...
CPU: 1 PID: 783 Comm: kworker/u760:5 Kdump: loaded Not tainted 5.6.0-rc2-bb-next+ #1
Hardware name: ...
Workqueue: scsi_wq_0 fc_scsi_scan_rport [scsi_transport_fc]
Krnl PSW : 0704e00180000000 000003ff801fcdae (scsi_queue_rq+0x436/0x740 [scsi_mod])
R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
Krnl GPRS: 0fffffffffffffff 0000000000000000 0000000187150120 0000000000000000
000003ff80223d20 000000000000018e 000000018adc6400 0000000187711000
000003e0062337e8 00000001ae719000 0000000187711000 0000000187150000
00000001ab808100 0000000187150120 000003ff801fcd74 000003e0062336a0
Krnl Code: 000003ff801fcd9e: e310a35c0012 lt %r1,860(%r10)
000003ff801fcda4: a7840010 brc 8,000003ff801fcdc4
#000003ff801fcda8: e310b2900004 lg %r1,656(%r11)
>000003ff801fcdae: d71710001000 xc 0(24,%r1),0(%r1)
000003ff801fcdb4: e310b2900004 lg %r1,656(%r11)
000003ff801fcdba: 41201018 la %r2,24(%r1)
000003ff801fcdbe: e32010000024 stg %r2,0(%r1)
000003ff801fcdc4: b904002b lgr %r2,%r11
Call Trace:
[<000003ff801fcdae>] scsi_queue_rq+0x436/0x740 [scsi_mod]
([<000003ff801fcd74>] scsi_queue_rq+0x3fc/0x740 [scsi_mod])
[<00000000349c9970>] blk_mq_dispatch_rq_list+0x390/0x680
[<00000000349d1596>] blk_mq_sched_dispatch_requests+0x196/0x1a8
[<00000000349c7a04>] __blk_mq_run_hw_queue+0x144/0x160
[<00000000349c7ab6>] __blk_mq_delay_run_hw_queue+0x96/0x228
[<00000000349c7d5a>] blk_mq_run_hw_queue+0xd2/0xe0
[<00000000349d194a>] blk_mq_sched_insert_request+0x192/0x1d8
[<00000000349c17b8>] blk_execute_rq_nowait+0x80/0x90
[<00000000349c1856>] blk_execute_rq+0x6e/0xb0
[<000003ff801f8ac2>] __scsi_execute+0xe2/0x1f0 [scsi_mod]
[<000003ff801fef98>] scsi_probe_and_add_lun+0x358/0x840 [scsi_mod]
[<000003ff8020001c>] __scsi_scan_target+0xc4/0x228 [scsi_mod]
[<000003ff80200254>] scsi_scan_target+0xd4/0x100 [scsi_mod]
[<000003ff802d8b96>] fc_scsi_scan_rport+0x96/0xc0 [scsi_transport_fc]
[<0000000034245ce8>] process_one_work+0x458/0x7d0
[<00000000342462a2>] worker_thread+0x242/0x448
[<0000000034250994>] kthread+0x15c/0x170
[<0000000034e1979c>] ret_from_fork+0x30/0x38
INFO: lockdep is turned off.
Last Breaking-Event-Address:
[<000003ff801fbc36>] scsi_add_cmd_to_list+0x9e/0xa8 [scsi_mod]
Kernel panic - not syncing: Fatal exception: panic_on_oops
While this issue is exposed by the commit named above, this is only by
accident. The real issue has existed for longer - basically since it became
possible to use blk-mq via scsi-mq, because blk-mq pre-allocates all requests
for a tag-set during initialization of the same. For a given Scsi_Host
object this is done when adding the object to the midlayer
(`scsi_add_host()` and such). In `scsi_mq_setup_tags()` the midlayer
calculates how much memory is required for a single scsi_cmnd, and its
additional data, which also might include space for additional protection
data - depending on whether the Scsi_Host has any form of protection
capabilities (`scsi_host_get_prot()`).
The problem is thus: because zfcp does this step before we actually
know whether the firmware/hardware has these capabilities, we don't set any
protection capabilities in the Scsi_Host object. And so, no space is
allocated for additional protection data for requests in the Scsi_Host
tag-set.
Once we fully discover and initialize the FCP device firmware/hardware
(this is done via the firmware commands "Exchange Config Data" and
"Exchange Port Data") we find out whether it actually supports DIF and DIX,
and we set the corresponding capabilities in the Scsi_Host object (in
`zfcp_scsi_set_prot()`). Now the Scsi_Host potentially has protection
capabilities, but the already allocated requests in the tag-set don't have
any space allocated for that.
When we then trigger target scanning or add scsi_devices manually, the
midlayer will use requests from that tag-set, and before sending most
requests, it will also call `scsi_mq_prep_fn()`. To prepare the scsi_cmnd
this function will check again whether the used Scsi_Host has any
protection capabilities - and now it potentially has - and if so, it will
try to initialize the assumed to be preallocated structures and thus it
causes the crash, like shown above.
Before delaying the default elevator initialization with the commit named
above, we would always allocate an elevator for any scsi_device before
ever sending any requests - in contrast to now, where we do it after
device-probing. That elevator in turn would have its own tag-set, and that
is initialized after we went through discovery and initialization of the
underlying firmware/hardware. So requests from that tag-set can be
allocated properly, and if used - unless the user changes/disables the
default elevator - this would hide the underlying issue.
To fix this for any configuration - with or without an elevator - we move
the allocation and registration of the Scsi_Host object for a given FCP
device to after the first complete discovery and initialization of the
underlying firmware/hardware. By doing that we can make all basic
properties of the Scsi_Host known to the midlayer by the time we call
`scsi_add_host()`, including whether we have any protection capabilities.
To do that we have to delay all the accesses that we would have done in the
past during discovery and initialization, and do them instead once we are
finished with it. The previous patches ramp up to this by fencing and
factoring out all these accesses, and make it possible to re-do them later
on. In addition we make also use of the diagnostic buffers we recently
added with
commit 92953c6e0aa7 ("scsi: zfcp: signal incomplete or error for sync exchange config/port data")
commit 7e418833e689 ("scsi: zfcp: diagnostics buffer caching and use for exchange port data")
commit 088210233e6f ("scsi: zfcp: add diagnostics buffer for exchange config data")
(first released in v5.5), because these already cache all the information
we need for that "re-do operation" - the cached information is always
updated during xconf or xport data, so it won't be stale.
In addition to the move and re-do, this patch also updates the
function-documentation of `zfcp_scsi_adapter_register()` and changes how it
reports if a Scsi_Host object already exists. In that case future
recovery-operations can skip this step completely and behave much like they
would do in the past - zfcp does not release a once allocated Scsi_Host
object unless the corresponding FCP device is deconstructed completely.
Link: https://lore.kernel.org/r/030dd6da318bbb529f0b5268ec65cebcd20fc0a3.1588956679.git.bblock@linux.ibm.com
Reviewed-by: Steffen Maier <maier@linux.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2020-05-09 01:23:35 +08:00
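The ordering problem the message describes - per-request memory is sized once at registration, from whatever capabilities are known at that moment - can be sketched in plain userspace C. This is an illustrative model only; `struct host`, `register_host()` and `prep_request()` are invented stand-ins for the Scsi_Host, `scsi_add_host()`/`scsi_mq_setup_tags()`, and `scsi_mq_prep_fn()`.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the bug (not the scsi/blk-mq API): a request pool
 * is sized exactly once, when the "host" is registered, based on the
 * protection capabilities known at that moment - like the tag-set is
 * sized from scsi_host_get_prot() in scsi_mq_setup_tags(). */
struct host {
	int prot_capable;  /* set by xconf/xport in the real driver */
	size_t cmd_size;   /* per-request allocation, frozen at registration */
};

static size_t per_cmd_size(const struct host *h)
{
	return 64 + (h->prot_capable ? 32 : 0); /* base + protection data */
}

/* scsi_add_host()-analog: freezes the per-request size */
static void register_host(struct host *h)
{
	h->cmd_size = per_cmd_size(h);
}

/* scsi_mq_prep_fn()-analog: re-checks capabilities at I/O time;
 * returns 0 if the preallocated request is too small for them,
 * i.e. the modeled equivalent of the illegal memory access. */
static int prep_request(const struct host *h)
{
	return per_cmd_size(h) <= h->cmd_size;
}
```

Registering early and learning `prot_capable` afterwards makes `prep_request()` fail, which is the crash scenario; registering only after discovery, as this patch does for the Scsi_Host, keeps the frozen size consistent with the capabilities.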
static enum zfcp_erp_act_result
zfcp_erp_adapter_strategy_alloc_shost(struct zfcp_adapter *const adapter)
{
	struct zfcp_diag_adapter_config_data *const config_data =
		&adapter->diagnostics->config_data;
	struct zfcp_diag_adapter_port_data *const port_data =
		&adapter->diagnostics->port_data;
	unsigned long flags;
	int rc;

	rc = zfcp_scsi_adapter_register(adapter);
	if (rc == -EEXIST)
		return ZFCP_ERP_SUCCEEDED;
	else if (rc)
		return ZFCP_ERP_FAILED;

	/*
	 * We allocated the shost for the first time. Before it was NULL,
	 * and so we deferred all updates in the xconf- and xport-data
	 * handlers. We need to make up for that now, and make all the updates
	 * that would have been done before.
	 *
	 * We can be sure that xconf- and xport-data succeeded, because
	 * otherwise this function is not called. But they might have been
	 * incomplete.
	 */

	spin_lock_irqsave(&config_data->header.access_lock, flags);
	zfcp_scsi_shost_update_config_data(adapter, &config_data->data,
					   !!config_data->header.incomplete);
	spin_unlock_irqrestore(&config_data->header.access_lock, flags);

	if (adapter->adapter_features & FSF_FEATURE_HBAAPI_MANAGEMENT) {
		spin_lock_irqsave(&port_data->header.access_lock, flags);
		zfcp_scsi_shost_update_port_data(adapter, &port_data->data);
		spin_unlock_irqrestore(&port_data->header.access_lock, flags);
	}

	/*
	 * There is a remote possibility that the 'Exchange Port Data' request
	 * reports a different connectivity status than 'Exchange Config Data'.
	 * But any change to the connectivity status of the local optic that
	 * happens after the initial xconf request is expected to be reported
	 * to us, as soon as we post Status Read Buffers to the FCP channel
	 * firmware after this function. So any resulting inconsistency will
	 * only be momentary.
	 */
	if (config_data->header.incomplete)
		zfcp_fsf_fc_host_link_down(adapter);

	return ZFCP_ERP_SUCCEEDED;
}

static enum zfcp_erp_act_result zfcp_erp_adapter_strategy_open_fsf(
	struct zfcp_erp_action *act)
{
	if (zfcp_erp_adapter_strat_fsf_xconf(act) == ZFCP_ERP_FAILED)
		return ZFCP_ERP_FAILED;

	if (zfcp_erp_adapter_strategy_open_fsf_xport(act) == ZFCP_ERP_FAILED)
		return ZFCP_ERP_FAILED;
|
|
|
if (zfcp_erp_adapter_strategy_alloc_shost(act->adapter) ==
|
|
|
|
ZFCP_ERP_FAILED)
|
|
|
|
return ZFCP_ERP_FAILED;
|
|
|
|
|
2020-05-09 01:23:32 +08:00
|
|
|
zfcp_erp_adapter_strategy_open_ptp_port(act->adapter);
|
|
|
|
|
2011-02-23 02:54:40 +08:00
|
|
|
if (mempool_resize(act->adapter->pool.sr_data,
|
2015-04-15 06:48:21 +08:00
|
|
|
act->adapter->stat_read_buf_num))
|
2010-06-21 16:11:33 +08:00
|
|
|
return ZFCP_ERP_FAILED;
|
|
|
|
|
|
|
|
if (mempool_resize(act->adapter->pool.status_read_req,
|
2015-04-15 06:48:21 +08:00
|
|
|
act->adapter->stat_read_buf_num))
|
2010-06-21 16:11:33 +08:00
|
|
|
return ZFCP_ERP_FAILED;
|
|
|
|
|
2010-05-01 00:09:36 +08:00
|
|
|
atomic_set(&act->adapter->stat_miss, act->adapter->stat_read_buf_num);
|
2008-07-02 16:56:40 +08:00
|
|
|
if (zfcp_status_read_refill(act->adapter))
|
|
|
|
return ZFCP_ERP_FAILED;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
return ZFCP_ERP_SUCCEEDED;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-03-02 20:09:03 +08:00
|
|
|
static void zfcp_erp_adapter_strategy_close(struct zfcp_erp_action *act)
|
2008-07-02 16:56:40 +08:00
|
|
|
{
|
|
|
|
struct zfcp_adapter *adapter = act->adapter;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-07-02 16:56:40 +08:00
|
|
|
/* close queues to ensure that buffers are not accessed by adapter */
|
2009-08-18 21:43:19 +08:00
|
|
|
zfcp_qdio_close(adapter->qdio);
|
2008-07-02 16:56:40 +08:00
|
|
|
zfcp_fsf_req_dismiss_all(adapter);
|
|
|
|
adapter->fsf_req_seq_no = 0;
|
2009-08-18 21:43:12 +08:00
|
|
|
zfcp_fc_wka_ports_force_offline(adapter->gs);
|
2010-09-08 20:39:55 +08:00
|
|
|
/* all ports and LUNs are closed */
|
2010-09-08 20:40:01 +08:00
|
|
|
zfcp_erp_clear_adapter_status(adapter, ZFCP_STATUS_COMMON_OPEN);
|
2009-03-02 20:09:03 +08:00
|
|
|
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_andnot(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
|
2009-03-02 20:09:03 +08:00
|
|
|
ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2018-11-08 22:44:52 +08:00
|
|
|
static enum zfcp_erp_act_result zfcp_erp_adapter_strategy_open(
|
|
|
|
struct zfcp_erp_action *act)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2009-03-02 20:09:03 +08:00
|
|
|
struct zfcp_adapter *adapter = act->adapter;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2010-12-02 22:16:17 +08:00
|
|
|
if (zfcp_qdio_open(adapter->qdio)) {
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_andnot(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
|
2009-03-02 20:09:03 +08:00
|
|
|
ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED,
|
|
|
|
&adapter->status);
|
|
|
|
return ZFCP_ERP_FAILED;
|
|
|
|
}
|
2008-07-02 16:56:40 +08:00
|
|
|
|
2009-03-02 20:09:03 +08:00
|
|
|
if (zfcp_erp_adapter_strategy_open_fsf(act)) {
|
|
|
|
zfcp_erp_adapter_strategy_close(act);
|
|
|
|
return ZFCP_ERP_FAILED;
|
|
|
|
}
|
|
|
|
|
2015-04-24 07:12:32 +08:00
|
|
|
atomic_or(ZFCP_STATUS_COMMON_OPEN, &adapter->status);
|
2009-03-02 20:09:03 +08:00
|
|
|
|
|
|
|
return ZFCP_ERP_SUCCEEDED;
|
|
|
|
}

static enum zfcp_erp_act_result zfcp_erp_adapter_strategy(
	struct zfcp_erp_action *act)
{
	struct zfcp_adapter *adapter = act->adapter;

	if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_OPEN) {
		zfcp_erp_adapter_strategy_close(act);
		if (act->status & ZFCP_STATUS_ERP_CLOSE_ONLY)
			return ZFCP_ERP_EXIT;
	}

	if (zfcp_erp_adapter_strategy_open(act)) {
		ssleep(8);
		return ZFCP_ERP_FAILED;
	}

	return ZFCP_ERP_SUCCEEDED;
}

static enum zfcp_erp_act_result zfcp_erp_port_forced_strategy_close(
	struct zfcp_erp_action *act)
{
	int retval;

	retval = zfcp_fsf_close_physical_port(act);
	if (retval == -ENOMEM)
		return ZFCP_ERP_NOMEM;
	act->step = ZFCP_ERP_STEP_PHYS_PORT_CLOSING;
	if (retval)
		return ZFCP_ERP_FAILED;

	return ZFCP_ERP_CONTINUES;
}

static enum zfcp_erp_act_result zfcp_erp_port_forced_strategy(
	struct zfcp_erp_action *erp_action)
{
	struct zfcp_port *port = erp_action->port;
	int status = atomic_read(&port->status);

	switch (erp_action->step) {
	case ZFCP_ERP_STEP_UNINITIALIZED:
		if ((status & ZFCP_STATUS_PORT_PHYS_OPEN) &&
		    (status & ZFCP_STATUS_COMMON_OPEN))
			return zfcp_erp_port_forced_strategy_close(erp_action);
		else
			return ZFCP_ERP_FAILED;

	case ZFCP_ERP_STEP_PHYS_PORT_CLOSING:
		if (!(status & ZFCP_STATUS_PORT_PHYS_OPEN))
			return ZFCP_ERP_SUCCEEDED;
		break;
	case ZFCP_ERP_STEP_PORT_CLOSING:
	case ZFCP_ERP_STEP_PORT_OPENING:
	case ZFCP_ERP_STEP_LUN_CLOSING:
	case ZFCP_ERP_STEP_LUN_OPENING:
		/* NOP */
		break;
	}
	return ZFCP_ERP_FAILED;
}

static enum zfcp_erp_act_result zfcp_erp_port_strategy_close(
	struct zfcp_erp_action *erp_action)
{
	int retval;

	retval = zfcp_fsf_close_port(erp_action);
	if (retval == -ENOMEM)
		return ZFCP_ERP_NOMEM;
	erp_action->step = ZFCP_ERP_STEP_PORT_CLOSING;
	if (retval)
		return ZFCP_ERP_FAILED;
	return ZFCP_ERP_CONTINUES;
}

static enum zfcp_erp_act_result zfcp_erp_port_strategy_open_port(
	struct zfcp_erp_action *erp_action)
{
	int retval;

	retval = zfcp_fsf_open_port(erp_action);
	if (retval == -ENOMEM)
		return ZFCP_ERP_NOMEM;
	erp_action->step = ZFCP_ERP_STEP_PORT_OPENING;
	if (retval)
		return ZFCP_ERP_FAILED;
	return ZFCP_ERP_CONTINUES;
}

static int zfcp_erp_open_ptp_port(struct zfcp_erp_action *act)
{
	struct zfcp_adapter *adapter = act->adapter;
	struct zfcp_port *port = act->port;

	if (port->wwpn != adapter->peer_wwpn) {
		zfcp_erp_set_port_status(port, ZFCP_STATUS_COMMON_ERP_FAILED);
		return ZFCP_ERP_FAILED;
	}
	port->d_id = adapter->peer_d_id;
	return zfcp_erp_port_strategy_open_port(act);
}

static enum zfcp_erp_act_result zfcp_erp_port_strategy_open_common(
	struct zfcp_erp_action *act)
{
	struct zfcp_adapter *adapter = act->adapter;
	struct zfcp_port *port = act->port;
	int p_status = atomic_read(&port->status);

	switch (act->step) {
	case ZFCP_ERP_STEP_UNINITIALIZED:
	case ZFCP_ERP_STEP_PHYS_PORT_CLOSING:
	case ZFCP_ERP_STEP_PORT_CLOSING:
		if (fc_host_port_type(adapter->scsi_host) == FC_PORTTYPE_PTP)
			return zfcp_erp_open_ptp_port(act);
		if (!port->d_id) {
			zfcp_fc_trigger_did_lookup(port);
			return ZFCP_ERP_EXIT;
		}
		return zfcp_erp_port_strategy_open_port(act);

	case ZFCP_ERP_STEP_PORT_OPENING:
		/* D_ID might have changed during open */
		if (p_status & ZFCP_STATUS_COMMON_OPEN) {
			if (!port->d_id) {
				zfcp_fc_trigger_did_lookup(port);
				return ZFCP_ERP_EXIT;
			}
			return ZFCP_ERP_SUCCEEDED;
		}
		if (port->d_id && !(p_status & ZFCP_STATUS_COMMON_NOESC)) {
			port->d_id = 0;
			return ZFCP_ERP_FAILED;
		}
		/* no early return otherwise, continue after switch case */
		break;
	case ZFCP_ERP_STEP_LUN_CLOSING:
	case ZFCP_ERP_STEP_LUN_OPENING:
		/* NOP */
		break;
	}
	return ZFCP_ERP_FAILED;
}

static enum zfcp_erp_act_result zfcp_erp_port_strategy(
	struct zfcp_erp_action *erp_action)
{
	struct zfcp_port *port = erp_action->port;
	int p_status = atomic_read(&port->status);

	if ((p_status & ZFCP_STATUS_COMMON_NOESC) &&
	    !(p_status & ZFCP_STATUS_COMMON_OPEN))
		goto close_init_done;

	switch (erp_action->step) {
	case ZFCP_ERP_STEP_UNINITIALIZED:
		if (p_status & ZFCP_STATUS_COMMON_OPEN)
			return zfcp_erp_port_strategy_close(erp_action);
		break;

	case ZFCP_ERP_STEP_PORT_CLOSING:
		if (p_status & ZFCP_STATUS_COMMON_OPEN)
			return ZFCP_ERP_FAILED;
		break;
	case ZFCP_ERP_STEP_PHYS_PORT_CLOSING:
	case ZFCP_ERP_STEP_PORT_OPENING:
	case ZFCP_ERP_STEP_LUN_CLOSING:
	case ZFCP_ERP_STEP_LUN_OPENING:
		/* NOP */
		break;
	}

close_init_done:
	if (erp_action->status & ZFCP_STATUS_ERP_CLOSE_ONLY)
		return ZFCP_ERP_EXIT;

	return zfcp_erp_port_strategy_open_common(erp_action);
}

static void zfcp_erp_lun_strategy_clearstati(struct scsi_device *sdev)
{
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);

	atomic_andnot(ZFCP_STATUS_COMMON_ACCESS_DENIED,
		      &zfcp_sdev->status);
}

static enum zfcp_erp_act_result zfcp_erp_lun_strategy_close(
	struct zfcp_erp_action *erp_action)
{
	int retval = zfcp_fsf_close_lun(erp_action);

	if (retval == -ENOMEM)
		return ZFCP_ERP_NOMEM;
	erp_action->step = ZFCP_ERP_STEP_LUN_CLOSING;
	if (retval)
		return ZFCP_ERP_FAILED;
	return ZFCP_ERP_CONTINUES;
}

static enum zfcp_erp_act_result zfcp_erp_lun_strategy_open(
	struct zfcp_erp_action *erp_action)
{
	int retval = zfcp_fsf_open_lun(erp_action);

	if (retval == -ENOMEM)
		return ZFCP_ERP_NOMEM;
	erp_action->step = ZFCP_ERP_STEP_LUN_OPENING;
	if (retval)
		return ZFCP_ERP_FAILED;
	return ZFCP_ERP_CONTINUES;
}

static enum zfcp_erp_act_result zfcp_erp_lun_strategy(
	struct zfcp_erp_action *erp_action)
{
	struct scsi_device *sdev = erp_action->sdev;
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);

	switch (erp_action->step) {
	case ZFCP_ERP_STEP_UNINITIALIZED:
		zfcp_erp_lun_strategy_clearstati(sdev);
		if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_OPEN)
			return zfcp_erp_lun_strategy_close(erp_action);
		/* already closed */
		fallthrough;
	case ZFCP_ERP_STEP_LUN_CLOSING:
		if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_OPEN)
			return ZFCP_ERP_FAILED;
		if (erp_action->status & ZFCP_STATUS_ERP_CLOSE_ONLY)
			return ZFCP_ERP_EXIT;
		return zfcp_erp_lun_strategy_open(erp_action);

	case ZFCP_ERP_STEP_LUN_OPENING:
		if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_OPEN)
			return ZFCP_ERP_SUCCEEDED;
		break;
	case ZFCP_ERP_STEP_PHYS_PORT_CLOSING:
	case ZFCP_ERP_STEP_PORT_CLOSING:
	case ZFCP_ERP_STEP_PORT_OPENING:
		/* NOP */
		break;
	}
	return ZFCP_ERP_FAILED;
}

static enum zfcp_erp_act_result zfcp_erp_strategy_check_lun(
	struct scsi_device *sdev, enum zfcp_erp_act_result result)
{
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);

	switch (result) {
	case ZFCP_ERP_SUCCEEDED:
		atomic_set(&zfcp_sdev->erp_counter, 0);
		zfcp_erp_lun_unblock(sdev);
		break;
	case ZFCP_ERP_FAILED:
		atomic_inc(&zfcp_sdev->erp_counter);
		if (atomic_read(&zfcp_sdev->erp_counter) > ZFCP_MAX_ERPS) {
			dev_err(&zfcp_sdev->port->adapter->ccw_device->dev,
				"ERP failed for LUN 0x%016Lx on "
				"port 0x%016Lx\n",
				(unsigned long long)zfcp_scsi_dev_lun(sdev),
				(unsigned long long)zfcp_sdev->port->wwpn);
			zfcp_erp_set_lun_status(sdev,
						ZFCP_STATUS_COMMON_ERP_FAILED);
		}
		break;
	case ZFCP_ERP_CONTINUES:
	case ZFCP_ERP_EXIT:
	case ZFCP_ERP_DISMISSED:
	case ZFCP_ERP_NOMEM:
		/* NOP */
		break;
	}

	if (atomic_read(&zfcp_sdev->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
		zfcp_erp_lun_block(sdev, 0);
		result = ZFCP_ERP_EXIT;
	}
	return result;
}

static enum zfcp_erp_act_result zfcp_erp_strategy_check_port(
	struct zfcp_port *port, enum zfcp_erp_act_result result)
{
	switch (result) {
	case ZFCP_ERP_SUCCEEDED:
		atomic_set(&port->erp_counter, 0);
		zfcp_erp_port_unblock(port);
		break;

	case ZFCP_ERP_FAILED:
		if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_NOESC) {
			zfcp_erp_port_block(port, 0);
			result = ZFCP_ERP_EXIT;
		}
		atomic_inc(&port->erp_counter);
		if (atomic_read(&port->erp_counter) > ZFCP_MAX_ERPS) {
			dev_err(&port->adapter->ccw_device->dev,
				"ERP failed for remote port 0x%016Lx\n",
				(unsigned long long)port->wwpn);
			zfcp_erp_set_port_status(port,
						 ZFCP_STATUS_COMMON_ERP_FAILED);
		}
		break;
	case ZFCP_ERP_CONTINUES:
	case ZFCP_ERP_EXIT:
	case ZFCP_ERP_DISMISSED:
	case ZFCP_ERP_NOMEM:
		/* NOP */
		break;
	}

	if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
		zfcp_erp_port_block(port, 0);
		result = ZFCP_ERP_EXIT;
	}
	return result;
}

static enum zfcp_erp_act_result zfcp_erp_strategy_check_adapter(
	struct zfcp_adapter *adapter, enum zfcp_erp_act_result result)
{
	switch (result) {
	case ZFCP_ERP_SUCCEEDED:
		atomic_set(&adapter->erp_counter, 0);
		zfcp_erp_adapter_unblock(adapter);
		break;

	case ZFCP_ERP_FAILED:
		atomic_inc(&adapter->erp_counter);
		if (atomic_read(&adapter->erp_counter) > ZFCP_MAX_ERPS) {
			dev_err(&adapter->ccw_device->dev,
				"ERP cannot recover an error "
				"on the FCP device\n");
			zfcp_erp_set_adapter_status(adapter,
						    ZFCP_STATUS_COMMON_ERP_FAILED);
		}
		break;
	case ZFCP_ERP_CONTINUES:
	case ZFCP_ERP_EXIT:
	case ZFCP_ERP_DISMISSED:
	case ZFCP_ERP_NOMEM:
		/* NOP */
		break;
	}

	if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
		zfcp_erp_adapter_block(adapter, 0);
		result = ZFCP_ERP_EXIT;
	}
	return result;
}

static enum zfcp_erp_act_result zfcp_erp_strategy_check_target(
	struct zfcp_erp_action *erp_action, enum zfcp_erp_act_result result)
{
	struct zfcp_adapter *adapter = erp_action->adapter;
	struct zfcp_port *port = erp_action->port;
	struct scsi_device *sdev = erp_action->sdev;

	switch (erp_action->type) {
	case ZFCP_ERP_ACTION_REOPEN_LUN:
		result = zfcp_erp_strategy_check_lun(sdev, result);
		break;

	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
	case ZFCP_ERP_ACTION_REOPEN_PORT:
		result = zfcp_erp_strategy_check_port(port, result);
		break;

	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		result = zfcp_erp_strategy_check_adapter(adapter, result);
		break;
	}
	return result;
}

static int zfcp_erp_strat_change_det(atomic_t *target_status, u32 erp_status)
{
	int status = atomic_read(target_status);

	if ((status & ZFCP_STATUS_COMMON_RUNNING) &&
	    (erp_status & ZFCP_STATUS_ERP_CLOSE_ONLY))
		return 1; /* take it online */

	if (!(status & ZFCP_STATUS_COMMON_RUNNING) &&
	    !(erp_status & ZFCP_STATUS_ERP_CLOSE_ONLY))
		return 1; /* take it offline */

	return 0;
}

static enum zfcp_erp_act_result zfcp_erp_strategy_statechange(
	struct zfcp_erp_action *act, enum zfcp_erp_act_result result)
{
	enum zfcp_erp_act_type type = act->type;
	struct zfcp_adapter *adapter = act->adapter;
	struct zfcp_port *port = act->port;
	struct scsi_device *sdev = act->sdev;
	struct zfcp_scsi_dev *zfcp_sdev;
	u32 erp_status = act->status;

	switch (type) {
	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		if (zfcp_erp_strat_change_det(&adapter->status, erp_status)) {
			_zfcp_erp_adapter_reopen(adapter,
						 ZFCP_STATUS_COMMON_ERP_FAILED,
						 "ersscg1");
			return ZFCP_ERP_EXIT;
		}
		break;

	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
	case ZFCP_ERP_ACTION_REOPEN_PORT:
		if (zfcp_erp_strat_change_det(&port->status, erp_status)) {
			_zfcp_erp_port_reopen(port,
					      ZFCP_STATUS_COMMON_ERP_FAILED,
					      "ersscg2");
			return ZFCP_ERP_EXIT;
		}
		break;

	case ZFCP_ERP_ACTION_REOPEN_LUN:
		zfcp_sdev = sdev_to_zfcp(sdev);
		if (zfcp_erp_strat_change_det(&zfcp_sdev->status, erp_status)) {
			_zfcp_erp_lun_reopen(sdev,
					     ZFCP_STATUS_COMMON_ERP_FAILED,
					     "ersscg3", 0);
			return ZFCP_ERP_EXIT;
		}
		break;
	}
	return result;
}

static void zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action)
{
	struct zfcp_adapter *adapter = erp_action->adapter;
	struct zfcp_scsi_dev *zfcp_sdev;

	adapter->erp_total_count--;
	if (erp_action->status & ZFCP_STATUS_ERP_LOWMEM) {
		adapter->erp_low_mem_count--;
		erp_action->status &= ~ZFCP_STATUS_ERP_LOWMEM;
	}

	list_del(&erp_action->list);
	zfcp_dbf_rec_run("eractd1", erp_action);

	switch (erp_action->type) {
	case ZFCP_ERP_ACTION_REOPEN_LUN:
		zfcp_sdev = sdev_to_zfcp(erp_action->sdev);
		atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE,
			      &zfcp_sdev->status);
		break;

	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
	case ZFCP_ERP_ACTION_REOPEN_PORT:
		atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE,
			      &erp_action->port->status);
		break;

	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE,
			      &erp_action->adapter->status);
		break;
	}
}

scsi: zfcp: fix rport unblock race with LUN recovery
It is unavoidable that zfcp_scsi_queuecommand() has to finish requests
with DID_IMM_RETRY (like fc_remote_port_chkready()) during the time
window when zfcp detected an unavailable rport but
fc_remote_port_delete(), which is asynchronous via
zfcp_scsi_schedule_rport_block(), has not yet blocked the rport.
However, for the case when the rport becomes available again, we should
prevent unblocking the rport too early. In contrast to other FCP LLDDs,
zfcp has to open each LUN with the FCP channel hardware before it can
send I/O to a LUN. So if a port already has LUNs attached and we
unblock the rport just after port recovery, recoveries of LUNs behind
this port can still be pending which in turn force
zfcp_scsi_queuecommand() to unnecessarily finish requests with
DID_IMM_RETRY.
This also opens a time window with unblocked rport (until the followup
LUN reopen recovery has finished). If a scsi_cmnd timeout occurs during
this time window fc_timed_out() cannot work as desired and such command
would indeed time out and trigger scsi_eh. This prevents a clean and
timely path failover. This should not happen if the path issue can be
recovered on FC transport layer such as path issues involving RSCNs.
Fix this by only calling zfcp_scsi_schedule_rport_register(), to
asynchronously trigger fc_remote_port_add(), after all LUN recoveries as
children of the rport have finished and no new recoveries of equal or
higher order were triggered meanwhile. Finished intentionally includes
any recovery result no matter if successful or failed (still unblock
rport so other successful LUNs work). For simplicity, we check after
each finished LUN recovery if there is another LUN recovery pending on
the same port and then do nothing. We handle the special case of a
successful recovery of a port without LUN children the same way without
changing this case's semantics.
For debugging we introduce 2 new trace records written if the rport
unblock attempt was aborted due to still unfinished or freshly triggered
recovery. The records are only written above the default trace level.
Benjamin noticed the important special case of new recovery that can be
triggered between having given up the erp_lock and before calling
zfcp_erp_action_cleanup() within zfcp_erp_strategy(). We must avoid the
following sequence:
ERP thread rport_work other context
------------------------- -------------- --------------------------------
port is unblocked, rport still blocked,
due to pending/running ERP action,
so ((port->status & ...UNBLOCK) != 0)
and (port->rport == NULL)
unlock ERP
zfcp_erp_action_cleanup()
case ZFCP_ERP_ACTION_REOPEN_LUN:
zfcp_erp_try_rport_unblock()
((status & ...UNBLOCK) != 0) [OLD!]
zfcp_erp_port_reopen()
lock ERP
zfcp_erp_port_block()
port->status clear ...UNBLOCK
unlock ERP
zfcp_scsi_schedule_rport_block()
port->rport_task = RPORT_DEL
queue_work(rport_work)
zfcp_scsi_rport_work()
(port->rport_task != RPORT_ADD)
port->rport_task = RPORT_NONE
zfcp_scsi_rport_block()
if (!port->rport) return
zfcp_scsi_schedule_rport_register()
port->rport_task = RPORT_ADD
queue_work(rport_work)
zfcp_scsi_rport_work()
(port->rport_task == RPORT_ADD)
port->rport_task = RPORT_NONE
zfcp_scsi_rport_register()
(port->rport == NULL)
rport = fc_remote_port_add()
port->rport = rport;
Now the rport was erroneously unblocked while the zfcp_port is blocked.
This is another situation we want to avoid due to scsi_eh
potential. This state would at least remain until the new recovery from
the other context finished successfully, or potentially forever if it
failed. In order to close this race, we take the erp_lock inside
zfcp_erp_try_rport_unblock() when checking the status of zfcp_port or
LUN. With that, the possible corresponding rport state sequences would
be: (unblock[ERP thread],block[other context]) if the ERP thread gets
erp_lock first and still sees ((port->status & ...UNBLOCK) != 0),
(block[other context],NOP[ERP thread]) if the ERP thread gets erp_lock
after the other context has already cleared ...UNBLOCK from port->status.
Since checking fields of struct erp_action is unsafe because they could
have been overwritten (re-used for new recovery) meanwhile, we only
check status of zfcp_port and LUN since these are only changed under
erp_lock elsewhere. Regarding the check of the proper status flags (port
or port_forced are similar to the shown adapter recovery):
[zfcp_erp_adapter_shutdown()]
zfcp_erp_adapter_reopen()
zfcp_erp_adapter_block()
* clear UNBLOCK ---------------------------------------+
zfcp_scsi_schedule_rports_block() |
write_lock_irqsave(&adapter->erp_lock, flags);-------+ |
zfcp_erp_action_enqueue() | |
zfcp_erp_setup_act() | |
* set ERP_INUSE -----------------------------------|--|--+
write_unlock_irqrestore(&adapter->erp_lock, flags);--+ | |
.context-switch. | |
zfcp_erp_thread() | |
zfcp_erp_strategy() | |
write_lock_irqsave(&adapter->erp_lock, flags);------+ | |
... | | |
zfcp_erp_strategy_check_target() | | |
zfcp_erp_strategy_check_adapter() | | |
zfcp_erp_adapter_unblock() | | |
* set UNBLOCK -----------------------------------|--+ |
zfcp_erp_action_dequeue() | |
* clear ERP_INUSE ---------------------------------|-----+
... |
write_unlock_irqrestore(&adapter->erp_lock, flags);-+
Hence, we should check for both UNBLOCK and ERP_INUSE because they are
interleaved. Also we need to explicitly check ERP_FAILED for the link
down case which currently does not clear the UNBLOCK flag in
zfcp_fsf_link_down_info_eval().
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 8830271c4819 ("[SCSI] zfcp: Dont fail SCSI commands when transitioning to blocked fc_rport")
Fixes: a2fa0aede07c ("[SCSI] zfcp: Block FC transport rports early on errors")
Fixes: 5f852be9e11d ("[SCSI] zfcp: Fix deadlock between zfcp ERP and SCSI")
Fixes: 338151e06608 ("[SCSI] zfcp: make use of fc_remote_port_delete when target port is unavailable")
Fixes: 3859f6a248cb ("[PATCH] zfcp: add rports to enable scsi_add_device to work again")
Cc: <stable@vger.kernel.org> #2.6.32+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
/**
 * zfcp_erp_try_rport_unblock - unblock rport if no more/new recovery
 * @port: zfcp_port whose fc_rport we should try to unblock
 */
static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
{
	unsigned long flags;
	struct zfcp_adapter *adapter = port->adapter;
	int port_status;
	struct Scsi_Host *shost = adapter->scsi_host;
	struct scsi_device *sdev;

	write_lock_irqsave(&adapter->erp_lock, flags);
	port_status = atomic_read(&port->status);
	if ((port_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
	    (port_status & (ZFCP_STATUS_COMMON_ERP_INUSE |
			    ZFCP_STATUS_COMMON_ERP_FAILED)) != 0) {
		/* new ERP of severity >= port triggered elsewhere meanwhile or
		 * local link down (adapter erp_failed but not clear unblock)
		 */
		zfcp_dbf_rec_run_lvl(4, "ertru_p", &port->erp_action);
		write_unlock_irqrestore(&adapter->erp_lock, flags);
		return;
	}
	spin_lock(shost->host_lock);
	__shost_for_each_device(sdev, shost) {
		struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
		int lun_status;

		if (sdev->sdev_state == SDEV_DEL ||
		    sdev->sdev_state == SDEV_CANCEL)
			continue;
scsi: zfcp: fix rport unblock race with LUN recovery
It is unavoidable that zfcp_scsi_queuecommand() has to finish requests
with DID_IMM_RETRY (like fc_remote_port_chkready()) during the time
window when zfcp detected an unavailable rport but
fc_remote_port_delete(), which is asynchronous via
zfcp_scsi_schedule_rport_block(), has not yet blocked the rport.
However, for the case when the rport becomes available again, we should
prevent unblocking the rport too early. In contrast to other FCP LLDDs,
zfcp has to open each LUN with the FCP channel hardware before it can
send I/O to a LUN. So if a port already has LUNs attached and we
unblock the rport just after port recovery, recoveries of LUNs behind
this port can still be pending which in turn force
zfcp_scsi_queuecommand() to unnecessarily finish requests with
DID_IMM_RETRY.
This also opens a time window with unblocked rport (until the followup
LUN reopen recovery has finished). If a scsi_cmnd timeout occurs during
this time window fc_timed_out() cannot work as desired and such command
would indeed time out and trigger scsi_eh. This prevents a clean and
timely path failover. This should not happen if the path issue can be
recovered on FC transport layer such as path issues involving RSCNs.
Fix this by only calling zfcp_scsi_schedule_rport_register(), to
asynchronously trigger fc_remote_port_add(), after all LUN recoveries as
children of the rport have finished and no new recoveries of equal or
higher order were triggered meanwhile. Finished intentionally includes
any recovery result no matter if successful or failed (still unblock
rport so other successful LUNs work). For simplicity, we check after
each finished LUN recovery if there is another LUN recovery pending on
the same port and then do nothing. We handle the special case of a
successful recovery of a port without LUN children the same way without
changing this case's semantics.
For debugging we introduce 2 new trace records written if the rport
unblock attempt was aborted due to still unfinished or freshly triggered
recovery. The records are only written above the default trace level.
Benjamin noticed the important special case of new recovery that can be
triggered between having given up the erp_lock and before calling
zfcp_erp_action_cleanup() within zfcp_erp_strategy(). We must avoid the
following sequence:
ERP thread rport_work other context
------------------------- -------------- --------------------------------
port is unblocked, rport still blocked,
due to pending/running ERP action,
so ((port->status & ...UNBLOCK) != 0)
and (port->rport == NULL)
unlock ERP
zfcp_erp_action_cleanup()
case ZFCP_ERP_ACTION_REOPEN_LUN:
zfcp_erp_try_rport_unblock()
((status & ...UNBLOCK) != 0) [OLD!]
zfcp_erp_port_reopen()
lock ERP
zfcp_erp_port_block()
port->status clear ...UNBLOCK
unlock ERP
zfcp_scsi_schedule_rport_block()
port->rport_task = RPORT_DEL
queue_work(rport_work)
zfcp_scsi_rport_work()
(port->rport_task != RPORT_ADD)
port->rport_task = RPORT_NONE
zfcp_scsi_rport_block()
if (!port->rport) return
zfcp_scsi_schedule_rport_register()
port->rport_task = RPORT_ADD
queue_work(rport_work)
zfcp_scsi_rport_work()
(port->rport_task == RPORT_ADD)
port->rport_task = RPORT_NONE
zfcp_scsi_rport_register()
(port->rport == NULL)
rport = fc_remote_port_add()
port->rport = rport;
Now the rport was erroneously unblocked while the zfcp_port is blocked.
This is another situation we want to avoid due to scsi_eh
potential. This state would at least remain until the new recovery from
the other context finished successfully, or potentially forever if it
failed. In order to close this race, we take the erp_lock inside
zfcp_erp_try_rport_unblock() when checking the status of zfcp_port or
LUN. With that, the possible corresponding rport state sequences would
be: (unblock[ERP thread],block[other context]) if the ERP thread gets
erp_lock first and still sees ((port->status & ...UNBLOCK) != 0),
(block[other context],NOP[ERP thread]) if the ERP thread gets erp_lock
after the other context has already cleared ...UNBLOCK from port->status.
Since checking fields of struct erp_action is unsafe because they could
have been overwritten (re-used for new recovery) meanwhile, we only
check status of zfcp_port and LUN since these are only changed under
erp_lock elsewhere. Regarding the check of the proper status flags (port
or port_forced are similar to the shown adapter recovery):
[zfcp_erp_adapter_shutdown()]
zfcp_erp_adapter_reopen()
zfcp_erp_adapter_block()
* clear UNBLOCK ---------------------------------------+
zfcp_scsi_schedule_rports_block() |
write_lock_irqsave(&adapter->erp_lock, flags);-------+ |
zfcp_erp_action_enqueue() | |
zfcp_erp_setup_act() | |
* set ERP_INUSE -----------------------------------|--|--+
write_unlock_irqrestore(&adapter->erp_lock, flags);--+ | |
.context-switch. | |
zfcp_erp_thread() | |
zfcp_erp_strategy() | |
write_lock_irqsave(&adapter->erp_lock, flags);------+ | |
... | | |
zfcp_erp_strategy_check_target() | | |
zfcp_erp_strategy_check_adapter() | | |
zfcp_erp_adapter_unblock() | | |
* set UNBLOCK -----------------------------------|--+ |
zfcp_erp_action_dequeue() | |
* clear ERP_INUSE ---------------------------------|-----+
... |
write_unlock_irqrestore(&adapter->erp_lock, flags);-+
Hence, we should check for both UNBLOCK and ERP_INUSE because they are
interleaved. Also we need to explicitly check ERP_FAILED for the link
down case which currently does not clear the UNBLOCK flag in
zfcp_fsf_link_down_info_eval().
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 8830271c4819 ("[SCSI] zfcp: Dont fail SCSI commands when transitioning to blocked fc_rport")
Fixes: a2fa0aede07c ("[SCSI] zfcp: Block FC transport rports early on errors")
Fixes: 5f852be9e11d ("[SCSI] zfcp: Fix deadlock between zfcp ERP and SCSI")
Fixes: 338151e06608 ("[SCSI] zfcp: make use of fc_remote_port_delete when target port is unavailable")
Fixes: 3859f6a248cb ("[PATCH] zfcp: add rports to enable scsi_add_device to work again")
Cc: <stable@vger.kernel.org> #2.6.32+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2016-12-10 00:16:33 +08:00
		if (zsdev->port != port)
			continue;
		/* LUN under port of interest */
		lun_status = atomic_read(&zsdev->status);
		if ((lun_status & ZFCP_STATUS_COMMON_ERP_FAILED) != 0)
			continue; /* unblock rport despite failed LUNs */
		/* LUN recovery not given up yet [maybe follow-up pending] */
		if ((lun_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
		    (lun_status & ZFCP_STATUS_COMMON_ERP_INUSE) != 0) {
			/* LUN blocked:
			 * not yet unblocked [LUN recovery pending]
			 * or meanwhile blocked [new LUN recovery triggered]
			 */
			zfcp_dbf_rec_run_lvl(4, "ertru_l", &zsdev->erp_action);
			spin_unlock(shost->host_lock);
			write_unlock_irqrestore(&adapter->erp_lock, flags);
			return;
		}
	}
	/* now port has no child or all children have completed recovery,
	 * and no ERP of severity >= port was meanwhile triggered elsewhere
	 */
	zfcp_scsi_schedule_rport_register(port);
	spin_unlock(shost->host_lock);
	write_unlock_irqrestore(&adapter->erp_lock, flags);
}

static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act,
				    enum zfcp_erp_act_result result)
{
	struct zfcp_adapter *adapter = act->adapter;
	struct zfcp_port *port = act->port;
	struct scsi_device *sdev = act->sdev;

	switch (act->type) {
	case ZFCP_ERP_ACTION_REOPEN_LUN:
		if (!(act->status & ZFCP_STATUS_ERP_NO_REF))
			scsi_device_put(sdev);
		zfcp_erp_try_rport_unblock(port);
		break;

	case ZFCP_ERP_ACTION_REOPEN_PORT:
		/* This switch case might also happen after a forced reopen
		 * was successfully done and thus overwritten with a new
		 * non-forced reopen at `ersfs_2'. In this case, we must not
		 * do the clean-up of the non-forced version.
		 */
		if (act->step != ZFCP_ERP_STEP_UNINITIALIZED)
			if (result == ZFCP_ERP_SUCCEEDED)
				zfcp_erp_try_rport_unblock(port);
		fallthrough;
	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
		put_device(&port->dev);
		break;

	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		if (result == ZFCP_ERP_SUCCEEDED) {
			register_service_level(&adapter->service_level);
			zfcp_fc_conditional_port_scan(adapter);
			queue_work(adapter->work_queue, &adapter->ns_up_work);
		} else
			unregister_service_level(&adapter->service_level);

		kref_put(&adapter->ref, zfcp_adapter_release);
		break;
	}
}

static enum zfcp_erp_act_result zfcp_erp_strategy_do_action(
	struct zfcp_erp_action *erp_action)
{
	switch (erp_action->type) {
	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
		return zfcp_erp_adapter_strategy(erp_action);
	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
		return zfcp_erp_port_forced_strategy(erp_action);
	case ZFCP_ERP_ACTION_REOPEN_PORT:
		return zfcp_erp_port_strategy(erp_action);
	case ZFCP_ERP_ACTION_REOPEN_LUN:
		return zfcp_erp_lun_strategy(erp_action);
	}
	return ZFCP_ERP_FAILED;
}

static enum zfcp_erp_act_result zfcp_erp_strategy(
	struct zfcp_erp_action *erp_action)
{
	enum zfcp_erp_act_result result;
	unsigned long flags;
	struct zfcp_adapter *adapter = erp_action->adapter;

	kref_get(&adapter->ref);

	write_lock_irqsave(&adapter->erp_lock, flags);
	zfcp_erp_strategy_check_fsfreq(erp_action);

	if (erp_action->status & ZFCP_STATUS_ERP_DISMISSED) {
		zfcp_erp_action_dequeue(erp_action);
		result = ZFCP_ERP_DISMISSED;
		goto unlock;
	}

	if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT) {
		result = ZFCP_ERP_FAILED;
		goto check_target;
	}

	zfcp_erp_action_to_running(erp_action);

	/* no lock to allow for blocking operations */
	write_unlock_irqrestore(&adapter->erp_lock, flags);
	result = zfcp_erp_strategy_do_action(erp_action);
	write_lock_irqsave(&adapter->erp_lock, flags);

	if (erp_action->status & ZFCP_STATUS_ERP_DISMISSED)
		result = ZFCP_ERP_CONTINUES;

	switch (result) {
	case ZFCP_ERP_NOMEM:
		if (!(erp_action->status & ZFCP_STATUS_ERP_LOWMEM)) {
			++adapter->erp_low_mem_count;
			erp_action->status |= ZFCP_STATUS_ERP_LOWMEM;
		}
		if (adapter->erp_total_count == adapter->erp_low_mem_count)
			_zfcp_erp_adapter_reopen(adapter, 0, "erstgy1");
		else {
			zfcp_erp_strategy_memwait(erp_action);
			result = ZFCP_ERP_CONTINUES;
		}
		goto unlock;

	case ZFCP_ERP_CONTINUES:
		if (erp_action->status & ZFCP_STATUS_ERP_LOWMEM) {
			--adapter->erp_low_mem_count;
			erp_action->status &= ~ZFCP_STATUS_ERP_LOWMEM;
		}
		goto unlock;
	case ZFCP_ERP_SUCCEEDED:
	case ZFCP_ERP_FAILED:
	case ZFCP_ERP_EXIT:
	case ZFCP_ERP_DISMISSED:
		/* NOP */
		break;
	}

check_target:
	result = zfcp_erp_strategy_check_target(erp_action, result);
	zfcp_erp_action_dequeue(erp_action);
	result = zfcp_erp_strategy_statechange(erp_action, result);
	if (result == ZFCP_ERP_EXIT)
		goto unlock;
	if (result == ZFCP_ERP_SUCCEEDED)
		zfcp_erp_strategy_followup_success(erp_action);
	if (result == ZFCP_ERP_FAILED)
		zfcp_erp_strategy_followup_failed(erp_action);

unlock:
	write_unlock_irqrestore(&adapter->erp_lock, flags);

	if (result != ZFCP_ERP_CONTINUES)
		zfcp_erp_action_cleanup(erp_action, result);

	kref_put(&adapter->ref, zfcp_adapter_release);
	return result;
}

static int zfcp_erp_thread(void *data)
{
	struct zfcp_adapter *adapter = (struct zfcp_adapter *) data;
	struct zfcp_erp_action *act;
	unsigned long flags;

	for (;;) {
		wait_event_interruptible(adapter->erp_ready_wq,
					 !list_empty(&adapter->erp_ready_head) ||
					 kthread_should_stop());

		if (kthread_should_stop())
			break;

		write_lock_irqsave(&adapter->erp_lock, flags);
		act = list_first_entry_or_null(&adapter->erp_ready_head,
					       struct zfcp_erp_action, list);
		write_unlock_irqrestore(&adapter->erp_lock, flags);

		if (act) {
			/* there is more to come after dismission, no notify */
			if (zfcp_erp_strategy(act) != ZFCP_ERP_DISMISSED)
				zfcp_erp_wakeup(adapter);
		}
	}

	return 0;
}

/**
 * zfcp_erp_thread_setup - Start ERP thread for adapter
 * @adapter: Adapter to start the ERP thread for
 *
 * Return: 0 on success, or error code from kthread_run().
 */
int zfcp_erp_thread_setup(struct zfcp_adapter *adapter)
{
	struct task_struct *thread;

	thread = kthread_run(zfcp_erp_thread, adapter, "zfcperp%s",
			     dev_name(&adapter->ccw_device->dev));
	if (IS_ERR(thread)) {
		dev_err(&adapter->ccw_device->dev,
			"Creating an ERP thread for the FCP device failed.\n");
		return PTR_ERR(thread);
	}

	adapter->erp_thread = thread;
	return 0;
}

/**
 * zfcp_erp_thread_kill - Stop ERP thread.
 * @adapter: Adapter where the ERP thread should be stopped.
 *
 * The caller of this routine ensures that the specified adapter has
 * been shut down and that this operation has been completed. Thus,
 * there are no pending erp_actions which would need to be handled
 * here.
 */
void zfcp_erp_thread_kill(struct zfcp_adapter *adapter)
{
	kthread_stop(adapter->erp_thread);
	adapter->erp_thread = NULL;
	WARN_ON(!list_empty(&adapter->erp_ready_head));
	WARN_ON(!list_empty(&adapter->erp_running_head));
}

/**
 * zfcp_erp_wait - wait for completion of error recovery on an adapter
 * @adapter: adapter for which to wait for completion of its error recovery
 */
void zfcp_erp_wait(struct zfcp_adapter *adapter)
{
	wait_event(adapter->erp_done_wqh,
		   !(atomic_read(&adapter->status) &
		     ZFCP_STATUS_ADAPTER_ERP_PENDING));
}

/**
 * zfcp_erp_set_adapter_status - set adapter status bits
 * @adapter: adapter to change the status
 * @mask: status bits to change
 *
 * Changes in common status bits are propagated to attached ports and LUNs.
 */
void zfcp_erp_set_adapter_status(struct zfcp_adapter *adapter, u32 mask)
{
	struct zfcp_port *port;
	struct scsi_device *sdev;
	unsigned long flags;
	u32 common_mask = mask & ZFCP_COMMON_FLAGS;

	atomic_or(mask, &adapter->status);

	if (!common_mask)
		return;

	read_lock_irqsave(&adapter->port_list_lock, flags);
	list_for_each_entry(port, &adapter->port_list, list)
		atomic_or(common_mask, &port->status);
	read_unlock_irqrestore(&adapter->port_list_lock, flags);

	/*
	 * if `scsi_host` is missing, xconfig/xport data has never completed
	 * yet, so we can't access it, but there are also no SDEVs yet
	 */
	if (adapter->scsi_host == NULL)
		return;

[SCSI] zfcp: fix schedule-inside-lock in scsi_device list loops
BUG: sleeping function called from invalid context at kernel/workqueue.c:2752
in_atomic(): 1, irqs_disabled(): 1, pid: 360, name: zfcperp0.0.1700
CPU: 1 Not tainted 3.9.3+ #69
Process zfcperp0.0.1700 (pid: 360, task: 0000000075b7e080, ksp: 000000007476bc30)
<snip>
Call Trace:
([<00000000001165de>] show_trace+0x106/0x154)
[<00000000001166a0>] show_stack+0x74/0xf4
[<00000000006ff646>] dump_stack+0xc6/0xd4
[<000000000017f3a0>] __might_sleep+0x128/0x148
[<000000000015ece8>] flush_work+0x54/0x1f8
[<00000000001630de>] __cancel_work_timer+0xc6/0x128
[<00000000005067ac>] scsi_device_dev_release_usercontext+0x164/0x23c
[<0000000000161816>] execute_in_process_context+0x96/0xa8
[<00000000004d33d8>] device_release+0x60/0xc0
[<000000000048af48>] kobject_release+0xa8/0x1c4
[<00000000004f4bf2>] __scsi_iterate_devices+0xfa/0x130
[<000003ff801b307a>] zfcp_erp_strategy+0x4da/0x1014 [zfcp]
[<000003ff801b3caa>] zfcp_erp_thread+0xf6/0x2b0 [zfcp]
[<000000000016b75a>] kthread+0xf2/0xfc
[<000000000070c9de>] kernel_thread_starter+0x6/0xc
[<000000000070c9d8>] kernel_thread_starter+0x0/0xc
Apparently, the ref_count for some scsi_device drops down to zero,
triggering device removal through execute_in_process_context(), while
the lldd error recovery thread iterates through a scsi device list.
Unfortunately, execute_in_process_context() decides to immediately
execute that device removal function, instead of scheduling asynchronous
execution, since it detects process context and thinks it is safe to do
so. But almost all calls to shost_for_each_device() in our lldd are
inside spin_lock_irq, even in thread context. Obviously, schedule()
inside spin_lock_irq sections is a bad idea.
Change the lldd to use the proper iterator function,
__shost_for_each_device(), in combination with required locking.
Occurrences that need to be changed include all calls in zfcp_erp.c,
since those might be executed in zfcp error recovery thread context
with a lock held.
Other occurrences of shost_for_each_device() in zfcp_fsf.c do not
need to be changed (no process context, no surrounding locking).
The problem was introduced in Linux 2.6.37 by commit
b62a8d9b45b971a67a0f8413338c230e3117dff5
"[SCSI] zfcp: Use SCSI device data zfcp_scsi_dev instead of zfcp_unit".
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Peschke <mpeschke@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org #2.6.37+
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
2013-08-22 23:45:37 +08:00
	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
	__shost_for_each_device(sdev, adapter->scsi_host)
		atomic_or(common_mask, &sdev_to_zfcp(sdev)->status);
	spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags);
}

/**
 * zfcp_erp_clear_adapter_status - clear adapter status bits
 * @adapter: adapter to change the status
 * @mask: status bits to change
 *
 * Changes in common status bits are propagated to attached ports and LUNs.
 */
void zfcp_erp_clear_adapter_status(struct zfcp_adapter *adapter, u32 mask)
{
	struct zfcp_port *port;
	struct scsi_device *sdev;
	unsigned long flags;
	u32 common_mask = mask & ZFCP_COMMON_FLAGS;
	u32 clear_counter = mask & ZFCP_STATUS_COMMON_ERP_FAILED;

	atomic_andnot(mask, &adapter->status);

	if (!common_mask)
		return;

	if (clear_counter)
		atomic_set(&adapter->erp_counter, 0);

	read_lock_irqsave(&adapter->port_list_lock, flags);
	list_for_each_entry(port, &adapter->port_list, list) {
		atomic_andnot(common_mask, &port->status);
		if (clear_counter)
			atomic_set(&port->erp_counter, 0);
	}
	read_unlock_irqrestore(&adapter->port_list_lock, flags);

	/*
	 * if `scsi_host` is missing, xconfig/xport data has never completed
	 * yet, so we can't access it, but there are also no SDEVs yet
	 */
	if (adapter->scsi_host == NULL)
		return;
	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
	__shost_for_each_device(sdev, adapter->scsi_host) {
		atomic_andnot(common_mask, &sdev_to_zfcp(sdev)->status);
		if (clear_counter)
			atomic_set(&sdev_to_zfcp(sdev)->erp_counter, 0);
	}
	spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags);
}

/**
 * zfcp_erp_set_port_status - set port status bits
 * @port: port to change the status
 * @mask: status bits to change
 *
 * Changes in common status bits are propagated to attached LUNs.
 */
void zfcp_erp_set_port_status(struct zfcp_port *port, u32 mask)
{
	struct scsi_device *sdev;
	u32 common_mask = mask & ZFCP_COMMON_FLAGS;
	unsigned long flags;

	atomic_or(mask, &port->status);

	if (!common_mask)
		return;
	spin_lock_irqsave(port->adapter->scsi_host->host_lock, flags);
	__shost_for_each_device(sdev, port->adapter->scsi_host)
		if (sdev_to_zfcp(sdev)->port == port)
			atomic_or(common_mask,
				  &sdev_to_zfcp(sdev)->status);
	spin_unlock_irqrestore(port->adapter->scsi_host->host_lock, flags);
}

/**
 * zfcp_erp_clear_port_status - clear port status bits
 * @port: port to change the status
 * @mask: status bits to change
 *
 * Changes in common status bits are propagated to attached LUNs.
 */
void zfcp_erp_clear_port_status(struct zfcp_port *port, u32 mask)
{
	struct scsi_device *sdev;
	u32 common_mask = mask & ZFCP_COMMON_FLAGS;
	u32 clear_counter = mask & ZFCP_STATUS_COMMON_ERP_FAILED;
	unsigned long flags;

	atomic_andnot(mask, &port->status);

	if (!common_mask)
		return;

	if (clear_counter)
		atomic_set(&port->erp_counter, 0);
	spin_lock_irqsave(port->adapter->scsi_host->host_lock, flags);
	__shost_for_each_device(sdev, port->adapter->scsi_host)
		if (sdev_to_zfcp(sdev)->port == port) {
			atomic_andnot(common_mask,
				      &sdev_to_zfcp(sdev)->status);
			if (clear_counter)
				atomic_set(&sdev_to_zfcp(sdev)->erp_counter, 0);
		}
	spin_unlock_irqrestore(port->adapter->scsi_host->host_lock, flags);
}

/**
 * zfcp_erp_set_lun_status - set lun status bits
 * @sdev: SCSI device / lun to set the status bits
 * @mask: status bits to change
 */
void zfcp_erp_set_lun_status(struct scsi_device *sdev, u32 mask)
{
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);

	atomic_or(mask, &zfcp_sdev->status);
}

/**
 * zfcp_erp_clear_lun_status - clear lun status bits
 * @sdev: SCSI device / lun to clear the status bits
 * @mask: status bits to change
 */
void zfcp_erp_clear_lun_status(struct scsi_device *sdev, u32 mask)
{
	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);

	atomic_andnot(mask, &zfcp_sdev->status);

	if (mask & ZFCP_STATUS_COMMON_ERP_FAILED)
		atomic_set(&zfcp_sdev->erp_counter, 0);
}

/**
 * zfcp_erp_adapter_reset_sync() - Really reopen adapter and wait.
 * @adapter: Pointer to zfcp_adapter to reopen.
 * @dbftag: Trace tag string of length %ZFCP_DBF_TAG_LEN.
 */
void zfcp_erp_adapter_reset_sync(struct zfcp_adapter *adapter, char *dbftag)
{
	zfcp_erp_set_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING);
	zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, dbftag);
	zfcp_erp_wait(adapter);
}