This has apparently been in there forever and is fairly easy to hit when
hotplugging really fast. I can do that since my mst hub has a manual
button to flick the hpd line for reprobing. The resulting WARNING spam
isn't pretty.
Cc: Dave Airlie <airlied@gmail.com>
Cc: stable@vger.kernel.org
Reviewed-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
In
commit 8c7b5ccb72
Author: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Date: Tue Apr 21 17:13:19 2015 +0300
drm/i915: Use atomic helpers for computing changed flags
we've switched over to the atomic version to compute the
crtc->encoder->connector routing from the i915 variant. That one
relies upon the ->best_encoder callback, but the i915-private version
relied upon intel_find_encoder. Which didn't matter except for dp mst,
where the encoder depends upon the selected crtc.
Fix this functional bug by implementing a correct atomic-state-based
encoder selector for dp mst.
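Roughly, such a selector looks like this (a hedged sketch; the connector
bookkeeping, to_my_mst_connector() and encoder_for_crtc[], is illustrative
and not the exact i915 code):

static struct drm_encoder *
mst_atomic_best_encoder(struct drm_connector *connector,
			struct drm_connector_state *connector_state)
{
	/* made-up per-connector bookkeeping, not an i915 structure */
	struct my_mst_connector *mstc = to_my_mst_connector(connector);

	/* the right encoder depends on which CRTC the atomic state requests */
	return mstc->encoder_for_crtc[drm_crtc_index(connector_state->crtc)];
}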
Note that we can't get rid of the legacy best_encoder callback since
the fbdev emulation uses that still. That means it's incorrect there
still, but that's been the case ever since i915 dp mst support was
merged so not a regression. Best to fix that by converting fbdev over
to atomic too.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
With legacy helpers all the routing was already set up when calling
best_encoder and so could be inspected. But with atomic it's staged,
hence we need a new atomic compliant callback for drivers which need
to inspect the requested state and can't just decide the best encoder
statically.
This is needed to fix up i915 dp mst where we need to pick the right
encoder depending upon the requested CRTC for the connector.
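For illustration, a driver wires the new callback up next to the legacy one
roughly like this (the function names are placeholders):

static const struct drm_connector_helper_funcs mst_connector_helper_funcs = {
	.get_modes		= mst_connector_get_modes,	/* placeholder */
	.mode_valid		= mst_connector_mode_valid,	/* placeholder */
	.best_encoder		= mst_best_encoder,		/* legacy/fbdev path */
	.atomic_best_encoder	= mst_atomic_best_encoder,	/* new atomic path */
};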
v2: Don't forget to amend the kerneldoc
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Acked-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Vince reported that the fasync signal stuff doesn't work properly for
inherited events. So fix that.
Installing fasync allocates memory and sets filp->f_flags |= FASYNC,
which upon the demise of the file descriptor ensures the allocation is
freed and state is updated.
Now for perf, we can have the events stick around for a while after the
original FD is dead because of references from child events. So we
cannot copy the fasync pointer around. We can however consistently use
the parent's fasync, as that will be updated.
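A minimal sketch of that rule, assuming a small helper along these lines
(names are illustrative):

static struct fasync_struct **perf_event_fasync(struct perf_event *event)
{
	/* fasync state lives on the parent event; children resolve to it */
	if (event->parent)
		event = event->parent;

	return &event->fasync;
}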
Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1434011521.1495.71.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Determine if a fraglist is needed in the tx path, and allocate it if
necessary before setting up the copy and map operations.
Otherwise, undoing the copy and map operations is tricky.
This fixes a use-after-free: if allocating the fraglist failed, the copy
and map operations that had been set up were still executed, writing
over the data area of a freed skb.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Multicast dst are not cached. They carry DST_NOCACHE.
As mentioned in commit f886497212 ("ipv4: fix dst race in
sk_dst_get()"), these dst need special care before caching them
into a socket.
Caching them is allowed only if their refcnt was not 0, i.e. we
must use atomic_inc_not_zero().
Also, we must use READ_ONCE() to fetch sk->sk_rx_dst, as mentioned
in commit d0c294c53a ("tcp: prevent fetching dst twice in early demux
code").
Fixes: 421b3885bf ("udp: ipv4: Add udp early demux")
Tested-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz>
Reported-by: Alex Gartrell <agartrell@fb.com>
Cc: Michal Kubeček <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
The snd_hdac_chip_readl() return value can never be less than zero,
so there is no point in checking the return value.
This fixes the following static checker warnings in
snd_hdac_ext_bus_parse_capabilities():
sound/hda/ext/hdac_ext_controller.c:47
snd_hdac_ext_bus_parse_capabilities()
warn: unsigned 'offset' is never less than zero.
sound/hda/ext/hdac_ext_controller.c:54
snd_hdac_ext_bus_parse_capabilities()
warn: unsigned 'cur_cap' is never less than zero.
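For illustration, this is the shape being flagged (not the driver's actual
code; read_cap_reg() is a stand-in helper):

unsigned int offset = read_cap_reg();	/* hypothetical helper */

if (offset < 0)		/* always false: 'offset' is unsigned */
	return -EIO;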
Signed-off-by: Jeeja KP <jeeja.kp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
This fixes an issue in assigning the host stream in case of
decoupled mode. The check to verify whether the stream is already
in use was wrong, so fix that.
Signed-off-by: Jeeja KP <jeeja.kp@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
The previous commit for delayed retry of SCOND needs some fine tuning
for spin locks.
The backoff from delayed retry, in conjunction with the spin looping on the
lock itself, can potentially cause the delay counter to reach high values.
So, to provide fairness to any lock operation, after a lock "seems"
available (i.e. just before the first SCOND try), reset the delay counter
back to the starting value of 1.
Essentially, reset the delay to 1 for a new spin-wait-loop-acquire cycle.
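An illustrative C rendering of the policy (the real code is ARC inline asm;
llock_scond_acquire() and spin_for() are made-up helpers):

unsigned int delay = 1;

for (;;) {
	while (arch_spin_is_locked(lock))
		cpu_relax();			/* lock busy: plain spin */

	delay = 1;				/* lock seems free: reset backoff */

	if (llock_scond_acquire(lock))		/* single LLOCK/SCOND attempt */
		break;

	spin_for(delay);			/* back off on SCOND failure */
	delay *= 2;				/* exponential backoff */
}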
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This is to work around the LLOCK/SCOND livelock.
HS38x4 could get into an LLOCK/SCOND livelock in case of multiple overlapping
coherency transactions in the SCU. The exclusive line state keeps rotating
among contending cores, leading to a never ending cycle. So break the cycle
by deferring the retry of a failed exclusive access (SCOND). The actual delay
needed is a function of the number of contending cores as well as the unrelated
coherency traffic from other cores. To keep the code simple, start off with a
small delay of 1, which suffices in most cases, and in case of contention
double the delay. Eventually the delay is sufficient such that the coherency
pipeline is drained, and thus a subsequent exclusive access would succeed.
Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With LLOCK/SCOND, the rwlock counter can be atomically updated without the
need for a guarding spin lock.
This in turn elides the EXchange-instruction-based spinning, which forces the
cacheline into exclusive state, where concurrent spinning across cores would
cause the line to keep bouncing around.
An LLOCK/SCOND based implementation is superior as spinning on LLOCK keeps
the cacheline in shared state.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The current spin_lock uses the EXchange instruction to implement the atomic
test-and-set of the lock location (it reads the orig value and stores 1).
This however forces the cacheline into exclusive state (because of the ST),
and concurrent loops in multiple cores will bounce the line around between
cores.
Instead, use LLOCK/SCOND to implement the atomic test-and-set, which is
better as the line stays in shared state while the lock is spinning on LLOCK.
The real motivation of this change however is to make way for future
changes in atomics to implement delayed retry (with backoff).
An initial experiment with delayed retry in atomics combined with the orig
EX-based spinlock was a total disaster (it broke even LMBench), as
struct sock has a cache line sharing an atomic_t and a spinlock. The
tight spinning on the lock caused the atomic retry to keep backing off
such that it would never finish.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This reduces the diff in forthcoming patches and also helps to better
understand the incremental changes to the inline asm.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Extended testing of the quad core configuration revealed that this fix was
insufficient. Specifically, LTP open posix shm_op/23-1 would cause a
hardware livelock in the llock/scond loop in update_cpu_load_active().
So remove this and make way for a proper workaround.
This reverts commit a5c8b52abe.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
A DM regression on 32 bit systems was reported against v4.2-rc3 here:
https://lkml.org/lkml/2015/7/29/401
Fix this by reverting both commit 1c220c69 ("dm: fix casting bug in
dm_merge_bvec()") and 148e51ba ("dm: improve documentation and code
clarity in dm_merge_bvec"). This combined revert is done to eliminate
the possibility of a partial revert in stable@ kernels.
In hindsight the correct fix, at the time 1c220c69 was applied to fix
the regression that 148e51ba introduced, should've been to simply revert
148e51ba.
Reported-by: Josh Boyer <jwboyer@fedoraproject.org>
Tested-by: Adam Williamson <awilliam@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.19+
conf->beacon_rate can be NULL on association, so check conf->beacon_rate
before using it.
BSS_CHANGED_BEACON_INFO needs to be flagged in 'changed' as the beacon_rate
will appear later.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Cc: <stable@vger.kernel.org> # v3.19+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When vortex_up fails, the skb buffers allocated by __netdev_alloc_skb
in vortex_open are not released, which may cause resource leaks.
This bug has been submitted before.
This patch modifies the error handling code to fix it.
Signed-off-by: Jia-Ju Bai <baijiaju1990@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
"len" is a signed integer. We check that len is not negative, so it
goes from zero to INT_MAX. PAGE_SIZE is unsigned long so the comparison
is type promoted to unsigned long. ULONG_MAX - 4095 is higher than
INT_MAX so the condition can never be true.
I don't know if this is harmful but it seems safe to limit "len" to
INT_MAX - 4095.
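For illustration, the problematic shape (not the exact RDS code):

int len = optlen;

/* PAGE_SIZE is unsigned long, so 'len' is promoted for the second test;
 * it can only be true for len > ULONG_MAX - 4095, which a non-negative
 * int can never reach, so the branch is dead */
if (len < 0 || len + PAGE_SIZE - 1 < len)
	return -EINVAL;

/* the safer bound described above */
if (len < 0 || len > INT_MAX - 4095)
	return -EINVAL;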
Fixes: a8c879a7ee ('RDS: Info and stats')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull i2c fixes from Wolfram Sang:
"A refcounting bugfix for the i2c-core, bugfixes for the generic bus
recovery algorithm and for its omap-user, making binary file
attributes for EEPROMs behave POSIX compliant, and a small typo fix
while we are here"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: fix leaked device refcount on of_find_i2c_* error path
i2c: Fix typo in i2c-bfin-twi.c
i2c: omap: fix bus recovery setup
i2c: core: only use set_scl for bus recovery after calling prepare_recovery
misc: eeprom: at24: clean up at24_bin_write()
i2c: slave eeprom: clean up sysfs bin attribute read()/write()
We need to check that a TRB is part of the current segment
before calculating its DMA address.
Previously a ring segment didn't use a full memory page, and every
new ring segment got a new memory page, so the off by one
error in checking the upper bound was never seen.
Now that we use a full memory page, 256 TRBs (4096 bytes), the off by one
didn't catch the case when a TRB was the first element of the next segment.
This is triggered if the virtual memory pages for a ring segment are
next to each other in increasing order where the ring buffer wraps around and
causes errors like:
[ 106.398223] xhci_hcd 0000:00:14.0: ERROR Transfer event TRB DMA ptr not part of current TD ep_index 0 comp_code 1
[ 106.398230] xhci_hcd 0000:00:14.0: Looking for event-dma fffd3000 trb-start fffd4fd0 trb-end fffd5000 seg-start fffd4000 seg-end fffd4ff0
The trb-end address is one TRB outside the seg-end address.
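For illustration, the boundary rule being fixed (the helper is hypothetical;
TRBS_PER_SEGMENT and union xhci_trb are the existing xhci definitions):

static bool trb_dma_in_segment(dma_addr_t seg_dma, dma_addr_t trb_dma)
{
	/* the last TRB that still belongs to this segment starts one TRB
	 * before seg_dma + TRBS_PER_SEGMENT * sizeof(union xhci_trb) */
	return trb_dma >= seg_dma &&
	       trb_dma <= seg_dma + (TRBS_PER_SEGMENT - 1) * sizeof(union xhci_trb);
}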
Cc: <stable@vger.kernel.org>
Tested-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Just one major fix which has been pending since January.
Somehow it fell through the cracks, but here it is. Basically,
this fixes a bug in udc-core when gadget registration fails.
Signed-off-by: Felipe Balbi <balbi@ti.com>
Merge tag 'fixes-for-v4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-linus
Felipe writes:
usb: fixes for v4.2-rc6
Just one major fix which has been pending since January.
Somehow it fell through the cracks, but here it is. Basically,
this fixes a bug in udc-core when gadget registration fails.
Signed-off-by: Felipe Balbi <balbi@ti.com>
When we share an action within a filter, the bind refcnt
should increase, therefore we should not call tcf_hash_release().
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It turns out that only Dell laptops have the separate button bits for
v2 dualpoint sticks and that commit 92bac83dd7 ("Input: alps - non
interleaved V2 dualpoint has separate stick button bits") causes
regressions on Toshiba laptops.
This commit adds a check for Dell laptops to the code for handling these
extra button bits, fixing this regression.
This patch has been tested on a Dell Latitude D620 to make sure that it
does not reintroduce the original problem.
Reported-and-tested-by: Douglas Christman <douglaschristman@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Add a proper module alias so the driver can be autoloaded when the
parent axp20x mfd driver registers its cells.
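For reference, the alias boils down to a single line; the exact cell name is
an assumption here and must match what the parent mfd driver registers:

/* assumed cell name registered by the parent axp20x mfd driver */
MODULE_ALIAS("platform:axp20x-pek");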
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Carlo Caione <carlo@caione.org>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Patch 17dd3f0f7aa7: "[PATCH] drivers/input/joystick: convert to dynamic
input_dev allocation" from Sep 15, 2005, leads to the following static
checker warning:
drivers/input/joystick/turbografx.c:235 tgfx_probe()
error: buffer overflow 'tgfx_buttons' 5 <= 5
drivers/input/joystick/turbografx.c
195 for (i = 0; i < n_devs; i++) {
196 if (n_buttons[i] < 1)
197 continue;
198
199 if (n_buttons[i] > 6) {
^^^^^^^^^^^^^^^^
Possibly off by one. >= 6.
Let's change the upper value to ARRAY_SIZE(tgfx_buttons) to ensure we do
not reach past the end of the array.
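A hedged sketch of the new bound (the error-handling shape is assumed):

if (n_buttons[i] > ARRAY_SIZE(tgfx_buttons)) {
	printk(KERN_ERR "turbografx.c: Invalid number of buttons %d\n",
	       n_buttons[i]);
	err = -EINVAL;
	goto err_out;	/* assumed error path */
}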
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
openvswitch modifies the L4 checksum of a packet when modifying
the ip address. When an IP packet is fragmented only the first
fragment contains an L4 header and checksum. Prior to this change
openvswitch would modify all fragments, modifying application data
in non-first fragments, causing checksum failures in the
reassembled packet.
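A hedged sketch of the guard (the checksum helper name is made up):

struct iphdr *nh = ip_hdr(skb);

/* only the first fragment carries an L4 header and checksum */
if (!(nh->frag_off & htons(IP_OFFSET)))
	update_l4_csum_for_new_addr(skb, nh, new_addr);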
Signed-off-by: Glenn Griffin <ggriffin.kernel@gmail.com>
Acked-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull Ceph fixes from Sage Weil:
"There are two critical regression fixes for CephFS from Zheng, and an
RBD completion fix for layered images from Ilya"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: fix copyup completion race
ceph: always re-send cap flushes when MDS recovers
ceph: fix ceph_encode_locks_to_buffer()
Pull security layer fix from James Morris:
"Yama initialization fix"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
Adding YAMA hooks also when YAMA is not stacked.
Pull crypto fixes from Herbert Xu:
"This fixes the following issues:
- a bogus BUG_ON in ixp4xx that can be triggered by a dst buffer that
is an SG list.
- the error handling in hwrngd may cause a crash in case of an error.
- fix a race condition in qat registration when multiple devices are
present"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
hwrng: core - correct error check of kthread_run call
crypto: ixp4xx - Remove bogus BUG_ON on scattered dst buffer
crypto: qat - Fix invalid synchronization between register/unregister sym algs
The I2S2_DAC pin can be used for I2S or GPIO. We should set it as GPIO
if we use GPIO5 as the DMIC1 data pin.
Signed-off-by: Bard Liao <bardliao@realtek.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Without this patch YAMA will not work at all if it is chosen
as the primary LSM instead of being "stacked".
Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
platform_driver does not need to set an owner because
platform_driver_register() will set it.
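For illustration (a hypothetical driver, not this patch's code):

static struct platform_driver example_thermal_driver = {
	.driver = {
		.name	= "example-thermal",
		/* .owner = THIS_MODULE is redundant:
		 * platform_driver_register() fills it in */
	},
};
module_platform_driver(example_thermal_driver);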
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
This patch fixes a bug in the error pathway of
usb_add_gadget_udc_release() in udc-core.c. If the udc registration
fails, the gadget registration is not fully undone; there's a
put_device(&gadget->dev) call but no device_del().
CC: <stable@vger.kernel.org>
Acked-by: Peter Chen <peter.chen@freescale.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Felipe Balbi <balbi@ti.com>
With HS 2.1 release, the peripheral space register no longer contains
the uncached space specifics, causing the kernel to panic early on.
So read the newer NON VOLATILE AUX register to get that info.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
When EVA is enabled, flush the Return Prediction Stack (RPS) present on
some MIPS cores on entry to the kernel from user mode.
This is important specifically for interAptiv with EVA enabled,
otherwise kernel mode RPS mispredicts may trigger speculative fetches of
user return addresses, which may be sensitive in the kernel address
space due to EVA's overlapping user/kernel address spaces.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15.x-
Patchwork: https://patchwork.linux-mips.org/patch/10812/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The driver code allows for the disabling of MSI interrupts; however the
module_param line was missed and the option fails to show up in modinfo.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org> [3.15+]
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
This reverts commit 3cf2954341 ("MIPS:
BCM63xx: Provide a plat_post_dma_flush hook") since this commit was
found to prevent BCM6358 (early BMIPS4350 cores) and some BCM6368
(BMIPS4380 cores) from booting reliably.
Alvaro was able to track this down to an issue specifically located to
devices that use the second thread (TP1) when booting. Since BCM63xx did
not have a need for plat_post_dma_flush() hook before, let's just keep
things the way they were.
Reported-by: Álvaro Fernández Rojas <noltari@gmail.com>
Reported-by: Jonas Gorski <jogo@openwrt.org>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Nicolas Schichan <nschichan@freebox.fr>
Cc: linux-mips@linux-mips.org
Cc: blogic@openwrt.org
Cc: noltari@gmail.com
Cc: jogo@openwrt.org
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This fixes the following warning, which is seen with gcc 5.1:
warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses].
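The pattern being warned about looks like this (illustrative only, not the
driver's actual code):

if (!mode == 2)		/* parsed as (!mode) == 2, almost never the intent */
	do_something();

/* what is usually meant: */
if (mode != 2)
	do_something();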
Signed-off-by: Tomer Barletz <barletz@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
This was left over from an earlier iteration of the BMIPS irqchip changes.
It doesn't actually have an effect, so let's nuke it.
Reported-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Kevin Cernekee <cernekee@chromium.org>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9910/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The show_stack() function deals exclusively with kernel contexts, but if
it gets called in user context with EVA enabled, show_stacktrace() will
attempt to access the stack using EVA accesses, which will either read
other user mapped data, or more likely cause an exception which will be
handled by __get_user().
This is easily reproduced using SysRq t to show all task states, which
results in the following stack dump output:
Stack : (Bad stack address)
Fix by setting the current user access mode to kernel around the call to
show_stacktrace(). This causes __get_user() to use normal loads to read
the kernel stack.
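A minimal sketch of the fix described above, using the usual
get_fs()/set_fs() pattern:

mm_segment_t old_fs = get_fs();

set_fs(KERNEL_DS);		/* kernel stack: use normal kernel loads */
show_stacktrace(task, regs);
set_fs(old_fs);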
Now we get the correct output, like this:
Stack : 00000000 80168960 00000000 004a0000 00000000 00000000 8060016c 1f3abd0c
1f172cd8 8056f09c 7ff1e450 8014fc3c 00000001 806dd0b0 0000001d 00000002
1f17c6a0 1f17c804 1f17c6a0 8066f6e0 00000000 0000000a 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 0110e800 1f3abd6c 1f17c6a0
...
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10778/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If a machine check exception is raised in kernel mode, user context,
with EVA enabled, then the do_mcheck handler will attempt to read the
code around the EPC using EVA load instructions, i.e. as if the reads
were from user mode. This will either read random user data if the
process has anything mapped at the same address, or it will cause an
exception which is handled by __get_user, resulting in this output:
Code: (Bad address in epc)
Fix by setting the current user access mode to kernel if the saved
register context indicates the exception was taken in kernel mode. This
causes __get_user to use normal loads to read the kernel code.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10777/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The majority of SMP platforms handle their IPIs through do_IRQ(),
which calls irq_{enter,exit}(). When a call function IPI is received,
smp_call_function_interrupt() is called which also calls
irq_{enter,exit}(), meaning irq_count is raised twice.
When tick broadcasting is used (which is implemented via a call
function IPI), this incorrectly causes all CPU idle time on the core
receiving broadcast ticks to be accounted as time spent servicing
IRQs, as account_process_tick() will account as such if irq_count is
greater than 1. This results in 100% CPU usage being reported on a
core which receives its ticks via broadcast.
This patch removes the SMP smp_call_function_interrupt() wrapper which
calls irq_{enter,exit}(). Platforms which handle their IPIs through
do_IRQ() now call generic_smp_call_function_interrupt() directly to
avoid incrementing irq_count a second time. Platforms which don't
(loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt()
wrapped in irq_{enter,exit}().
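A hedged sketch of the two call patterns described above (the handler and
function names are placeholders):

/* IPI routed through do_IRQ(): irq_enter()/irq_exit() already ran */
static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
{
	generic_smp_call_function_interrupt();
	return IRQ_HANDLED;
}

/* IPI not routed through do_IRQ(): wrap the call manually */
static void handle_call_ipi(void)
{
	irq_enter();
	generic_smp_call_function_interrupt();
	irq_exit();
}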
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Execution of break instructions, trap instructions, emulation of unaligned
loads or floating point instructions - anything that tries to read the
instruction's opcode from userspace - needs read access to a page.
RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of
pages that are executable but not readable. On such a mapping the attempted
load of the opcode by the kernel is going to cause an endless loop of
page faults.
The quick workaround for this is to disable the combinations that the kernel
currently isn't able to handle, which are the executable-but-not-readable
mappings.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>