Merge 4.10-rc7 into usb-next

We want the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2017-02-06 09:32:06 +01:00
commit 7bddba7750
491 changed files with 4986 additions and 3285 deletions

View File

@ -15,6 +15,9 @@ Properties:
Second cell specifies the irq distribution mode to cores
0=Round Robin; 1=cpu0, 2=cpu1, 4=cpu2, 8=cpu3
The second cell in interrupts property is deprecated and may be ignored by
the kernel.
intc accessed via the special ARC AUX register interface, hence "reg" property
is not specified.
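
For illustration only (not part of this patch): a consumer node under this binding could then look roughly like the sketch below. The node name, unit address, label and interrupt number are made-up placeholders; only the two-cell layout, with the second cell now deprecated, comes from the text above.

	some-device@f0001000 {
		interrupt-parent = <&idu_intc>;	/* label is an assumption */
		/* cell 1: common interrupt number (placeholder value);
		   cell 2: distribution mode, deprecated and ignored by the kernel */
		interrupts = <24 0>;
	};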

View File

@ -7,7 +7,7 @@ have dual GMAC each represented by a child node..
* Ethernet controller node
Required properties:
- compatible: Should be "mediatek,mt7623-eth"
- compatible: Should be "mediatek,mt2701-eth"
- reg: Address and length of the register set for the device
- interrupts: Should contain the three frame engines interrupts in numeric
order. These are fe_int0, fe_int1 and fe_int2.
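
For illustration only (not part of this patch), a node using this binding might be sketched as below; the unit address, register size and interrupt specifiers are placeholders rather than values taken from the patch:

	ethernet@1b100000 {
		compatible = "mediatek,mt2701-eth";
		reg = <0 0x1b100000 0 0x20000>;	/* placeholder address and size */
		/* fe_int0, fe_int1, fe_int2 in numeric order; specifiers are placeholders */
		interrupts = <0 199 4>, <0 200 4>, <0 201 4>;
	};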

View File

@ -19,8 +19,9 @@ Optional Properties:
specifications. If neither of these are specified, the default is to
assume clause 22.
If the phy's identifier is known then the list may contain an entry
of the form: "ethernet-phy-idAAAA.BBBB" where
If the PHY reports an incorrect ID (or none at all) then the
"compatible" list may contain an entry with the correct PHY ID in the
form: "ethernet-phy-idAAAA.BBBB" where
AAAA - The value of the 16 bit Phy Identifier 1 register as
4 hex digits. This is the chip vendor OUI bits 3:18
BBBB - The value of the 16 bit Phy Identifier 2 register as
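
A hedged example (not from the patch) of such an entry in an Ethernet PHY node, with made-up ID digits:

	ethernet-phy@1 {
		reg = <1>;
		/* the hex digits below are placeholders, not a real PHY ID */
		compatible = "ethernet-phy-id0141.0cb0",
			     "ethernet-phy-ieee802.3-c22";
	};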

View File

@ -212,10 +212,11 @@ asynchronous manner and the value may not be very precise. To see a precise
snapshot of a moment, you can see /proc/<pid>/smaps file and scan page table.
It's slow but very precise.
Table 1-2: Contents of the status files (as of 4.1)
Table 1-2: Contents of the status files (as of 4.8)
..............................................................................
Field Content
Name filename of the executable
Umask file mode creation mask
State state (R is running, S is sleeping, D is sleeping
in an uninterruptible wait, Z is zombie,
T is traced or stopped)
@ -226,7 +227,6 @@ Table 1-2: Contents of the status files (as of 4.1)
TracerPid PID of process tracing this process (0 if not)
Uid Real, effective, saved set, and file system UIDs
Gid Real, effective, saved set, and file system GIDs
Umask file mode creation mask
FDSize number of file descriptor slots currently allocated
Groups supplementary group list
NStgid descendant namespace thread group ID hierarchy
@ -236,6 +236,7 @@ Table 1-2: Contents of the status files (as of 4.1)
VmPeak peak virtual memory size
VmSize total program size
VmLck locked memory size
VmPin pinned memory size
VmHWM peak resident set size ("high water mark")
VmRSS size of memory portions. It contains the three
following parts (VmRSS = RssAnon + RssFile + RssShmem)

View File

@ -35,9 +35,7 @@ only one way to cause the system to go into the Suspend-To-RAM state (write
The default suspend mode (ie. the one to be used without writing anything into
/sys/power/mem_sleep) is either "deep" (if Suspend-To-RAM is supported) or
"s2idle", but it can be overridden by the value of the "mem_sleep_default"
parameter in the kernel command line. On some ACPI-based systems, depending on
the information in the FADT, the default may be "s2idle" even if Suspend-To-RAM
is supported.
parameter in the kernel command line.
The properties of all of the sleep states are described below.

View File

@ -3567,7 +3567,7 @@ F: drivers/infiniband/hw/cxgb3/
F: include/uapi/rdma/cxgb3-abi.h
CXGB4 ETHERNET DRIVER (CXGB4)
M: Hariprasad S <hariprasad@chelsio.com>
M: Ganesh Goudar <ganeshgr@chelsio.com>
L: netdev@vger.kernel.org
W: http://www.chelsio.com
S: Supported
@ -4100,12 +4100,18 @@ F: drivers/gpu/drm/bridge/
DRM DRIVER FOR BOCHS VIRTUAL GPU
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
L: virtualization@lists.linux-foundation.org
T: git git://git.kraxel.org/linux drm-qemu
S: Maintained
F: drivers/gpu/drm/bochs/
DRM DRIVER FOR QEMU'S CIRRUS DEVICE
M: Dave Airlie <airlied@redhat.com>
S: Odd Fixes
M: Gerd Hoffmann <kraxel@redhat.com>
L: virtualization@lists.linux-foundation.org
T: git git://git.kraxel.org/linux drm-qemu
S: Obsolete
W: https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
F: drivers/gpu/drm/cirrus/
RADEON and AMDGPU DRM DRIVERS
@ -4147,7 +4153,7 @@ F: Documentation/gpu/i915.rst
INTEL GVT-g DRIVERS (Intel GPU Virtualization)
M: Zhenyu Wang <zhenyuw@linux.intel.com>
M: Zhi Wang <zhi.a.wang@intel.com>
L: igvt-g-dev@lists.01.org
L: intel-gvt-dev@lists.freedesktop.org
L: intel-gfx@lists.freedesktop.org
W: https://01.org/igvt-g
T: git https://github.com/01org/gvt-linux.git
@ -4298,7 +4304,10 @@ F: Documentation/devicetree/bindings/display/renesas,du.txt
DRM DRIVER FOR QXL VIRTUAL GPU
M: Dave Airlie <airlied@redhat.com>
S: Odd Fixes
M: Gerd Hoffmann <kraxel@redhat.com>
L: virtualization@lists.linux-foundation.org
T: git git://git.kraxel.org/linux drm-qemu
S: Maintained
F: drivers/gpu/drm/qxl/
F: include/uapi/drm/qxl_drm.h
@ -10186,7 +10195,6 @@ F: drivers/media/tuners/qt1010*
QUALCOMM ATHEROS ATH9K WIRELESS DRIVER
M: QCA ath9k Development <ath9k-devel@qca.qualcomm.com>
L: linux-wireless@vger.kernel.org
L: ath9k-devel@lists.ath9k.org
W: http://wireless.kernel.org/en/users/Drivers/ath9k
S: Supported
F: drivers/net/wireless/ath/ath9k/
@ -13057,7 +13065,7 @@ F: drivers/input/serio/userio.c
F: include/uapi/linux/userio.h
VIRTIO CONSOLE DRIVER
M: Amit Shah <amit.shah@redhat.com>
M: Amit Shah <amit@kernel.org>
L: virtualization@lists.linux-foundation.org
S: Maintained
F: drivers/char/virtio_console.c
@ -13092,6 +13100,7 @@ M: David Airlie <airlied@linux.ie>
M: Gerd Hoffmann <kraxel@redhat.com>
L: dri-devel@lists.freedesktop.org
L: virtualization@lists.linux-foundation.org
T: git git://git.kraxel.org/linux drm-qemu
S: Maintained
F: drivers/gpu/drm/virtio/
F: include/uapi/linux/virtio_gpu.h
@ -13443,6 +13452,7 @@ F: arch/x86/
X86 PLATFORM DRIVERS
M: Darren Hart <dvhart@infradead.org>
M: Andy Shevchenko <andy@infradead.org>
L: platform-driver-x86@vger.kernel.org
T: git git://git.infradead.org/users/dvhart/linux-platform-drivers-x86.git
S: Maintained
@ -13614,6 +13624,7 @@ F: drivers/net/hamradio/z8530.h
ZBUD COMPRESSED PAGE ALLOCATOR
M: Seth Jennings <sjenning@redhat.com>
M: Dan Streetman <ddstreet@ieee.org>
L: linux-mm@kvack.org
S: Maintained
F: mm/zbud.c
@ -13669,6 +13680,7 @@ F: Documentation/vm/zsmalloc.txt
ZSWAP COMPRESSED SWAP CACHING
M: Seth Jennings <sjenning@redhat.com>
M: Dan Streetman <ddstreet@ieee.org>
L: linux-mm@kvack.org
S: Maintained
F: mm/zswap.c

View File

@ -1,8 +1,8 @@
VERSION = 4
PATCHLEVEL = 10
SUBLEVEL = 0
EXTRAVERSION = -rc5
NAME = Anniversary Edition
EXTRAVERSION = -rc7
NAME = Fearless Coyote
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
@ -797,7 +797,7 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=incompatible-pointer-types)
KBUILD_ARFLAGS := $(call ar-option,D)
# check for 'asm goto'
ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLAGS)), y)
KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO
endif

View File

@ -26,7 +26,9 @@ static inline void __delay(unsigned long loops)
" lp 1f \n"
" nop \n"
"1: \n"
: : "r"(loops));
:
: "r"(loops)
: "lp_count");
}
extern void __bad_udelay(void);

View File

@ -71,14 +71,14 @@ ENTRY(stext)
GET_CPU_ID r5
cmp r5, 0
mov.nz r0, r5
#ifdef CONFIG_ARC_SMP_HALT_ON_RESET
; Non-Master can proceed as system would be booted sufficiently
jnz first_lines_of_secondary
#else
bz .Lmaster_proceed
; Non-Masters wait for Master to boot enough and bring them up
jnz arc_platform_smp_wait_to_boot
#endif
; Master falls thru
; when they resume, tail-call to entry point
mov blink, @first_lines_of_secondary
j arc_platform_smp_wait_to_boot
.Lmaster_proceed:
#endif
; Clear BSS before updating any globals

View File

@ -93,11 +93,10 @@ static void mcip_probe_n_setup(void)
READ_BCR(ARC_REG_MCIP_BCR, mp);
sprintf(smp_cpuinfo_buf,
"Extn [SMP]\t: ARConnect (v%d): %d cores with %s%s%s%s%s\n",
"Extn [SMP]\t: ARConnect (v%d): %d cores with %s%s%s%s\n",
mp.ver, mp.num_cores,
IS_AVAIL1(mp.ipi, "IPI "),
IS_AVAIL1(mp.idu, "IDU "),
IS_AVAIL1(mp.llm, "LLM "),
IS_AVAIL1(mp.dbg, "DEBUG "),
IS_AVAIL1(mp.gfrc, "GFRC"));
@ -175,7 +174,6 @@ static void idu_irq_unmask(struct irq_data *data)
raw_spin_unlock_irqrestore(&mcip_lock, flags);
}
#ifdef CONFIG_SMP
static int
idu_irq_set_affinity(struct irq_data *data, const struct cpumask *cpumask,
bool force)
@ -205,12 +203,27 @@ idu_irq_set_affinity(struct irq_data *data, const struct cpumask *cpumask,
return IRQ_SET_MASK_OK;
}
#endif
static void idu_irq_enable(struct irq_data *data)
{
/*
* By default send all common interrupts to all available online CPUs.
* The affinity of common interrupts in IDU must be set manually since
* in some cases the kernel will not call irq_set_affinity() by itself:
* 1. When the kernel is not configured with support of SMP.
* 2. When the kernel is configured with support of SMP but upper
* interrupt controllers does not support setting of the affinity
* and cannot propagate it to IDU.
*/
idu_irq_set_affinity(data, cpu_online_mask, false);
idu_irq_unmask(data);
}
static struct irq_chip idu_irq_chip = {
.name = "MCIP IDU Intc",
.irq_mask = idu_irq_mask,
.irq_unmask = idu_irq_unmask,
.irq_enable = idu_irq_enable,
#ifdef CONFIG_SMP
.irq_set_affinity = idu_irq_set_affinity,
#endif
@ -243,36 +256,14 @@ static int idu_irq_xlate(struct irq_domain *d, struct device_node *n,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_type)
{
irq_hw_number_t hwirq = *out_hwirq = intspec[0];
int distri = intspec[1];
unsigned long flags;
/*
* Ignore value of interrupt distribution mode for common interrupts in
* IDU which resides in intspec[1] since setting an affinity using value
* from Device Tree is deprecated in ARC.
*/
*out_hwirq = intspec[0];
*out_type = IRQ_TYPE_NONE;
/* XXX: validate distribution scheme again online cpu mask */
if (distri == 0) {
/* 0 - Round Robin to all cpus, otherwise 1 bit per core */
raw_spin_lock_irqsave(&mcip_lock, flags);
idu_set_dest(hwirq, BIT(num_online_cpus()) - 1);
idu_set_mode(hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_RR);
raw_spin_unlock_irqrestore(&mcip_lock, flags);
} else {
/*
* DEST based distribution for Level Triggered intr can only
* have 1 CPU, so generalize it to always contain 1 cpu
*/
int cpu = ffs(distri);
if (cpu != fls(distri))
pr_warn("IDU irq %lx distri mode set to cpu %x\n",
hwirq, cpu);
raw_spin_lock_irqsave(&mcip_lock, flags);
idu_set_dest(hwirq, cpu);
idu_set_mode(hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_DEST);
raw_spin_unlock_irqrestore(&mcip_lock, flags);
}
return 0;
}

View File

@ -90,22 +90,37 @@ void __init smp_cpus_done(unsigned int max_cpus)
*/
static volatile int wake_flag;
#ifdef CONFIG_ISA_ARCOMPACT
#define __boot_read(f) f
#define __boot_write(f, v) f = v
#else
#define __boot_read(f) arc_read_uncached_32(&f)
#define __boot_write(f, v) arc_write_uncached_32(&f, v)
#endif
static void arc_default_smp_cpu_kick(int cpu, unsigned long pc)
{
BUG_ON(cpu == 0);
wake_flag = cpu;
__boot_write(wake_flag, cpu);
}
void arc_platform_smp_wait_to_boot(int cpu)
{
while (wake_flag != cpu)
/* for halt-on-reset, we've waited already */
if (IS_ENABLED(CONFIG_ARC_SMP_HALT_ON_RESET))
return;
while (__boot_read(wake_flag) != cpu)
;
wake_flag = 0;
__asm__ __volatile__("j @first_lines_of_secondary \n");
__boot_write(wake_flag, 0);
}
const char *arc_platform_smp_cpuinfo(void)
{
return plat_smp_ops.info ? : "";

View File

@ -241,8 +241,9 @@ int misaligned_fixup(unsigned long address, struct pt_regs *regs,
if (state.fault)
goto fault;
/* clear any remanants of delay slot */
if (delay_mode(regs)) {
regs->ret = regs->bta;
regs->ret = regs->bta & ~1U;
regs->status32 &= ~STATUS_DE_MASK;
} else {
regs->ret += state.instr_len;

View File

@ -193,15 +193,16 @@ AES_ENTRY(aes_cbc_encrypt)
cbz w6, .Lcbcencloop
ld1 {v0.16b}, [x5] /* get iv */
enc_prepare w3, x2, x5
enc_prepare w3, x2, x6
.Lcbcencloop:
ld1 {v1.16b}, [x1], #16 /* get next pt block */
eor v0.16b, v0.16b, v1.16b /* ..and xor with iv */
encrypt_block v0, w3, x2, x5, w6
encrypt_block v0, w3, x2, x6, w7
st1 {v0.16b}, [x0], #16
subs w4, w4, #1
bne .Lcbcencloop
st1 {v0.16b}, [x5] /* return iv */
ret
AES_ENDPROC(aes_cbc_encrypt)
@ -211,7 +212,7 @@ AES_ENTRY(aes_cbc_decrypt)
cbz w6, .LcbcdecloopNx
ld1 {v7.16b}, [x5] /* get iv */
dec_prepare w3, x2, x5
dec_prepare w3, x2, x6
.LcbcdecloopNx:
#if INTERLEAVE >= 2
@ -248,7 +249,7 @@ AES_ENTRY(aes_cbc_decrypt)
.Lcbcdecloop:
ld1 {v1.16b}, [x1], #16 /* get next ct block */
mov v0.16b, v1.16b /* ...and copy to v0 */
decrypt_block v0, w3, x2, x5, w6
decrypt_block v0, w3, x2, x6, w7
eor v0.16b, v0.16b, v7.16b /* xor with iv => pt */
mov v7.16b, v1.16b /* ct is next iv */
st1 {v0.16b}, [x0], #16
@ -256,6 +257,7 @@ AES_ENTRY(aes_cbc_decrypt)
bne .Lcbcdecloop
.Lcbcdecout:
FRAME_POP
st1 {v7.16b}, [x5] /* return iv */
ret
AES_ENDPROC(aes_cbc_decrypt)
@ -267,24 +269,15 @@ AES_ENDPROC(aes_cbc_decrypt)
AES_ENTRY(aes_ctr_encrypt)
FRAME_PUSH
cbnz w6, .Lctrfirst /* 1st time around? */
umov x5, v4.d[1] /* keep swabbed ctr in reg */
rev x5, x5
#if INTERLEAVE >= 2
cmn w5, w4 /* 32 bit overflow? */
bcs .Lctrinc
add x5, x5, #1 /* increment BE ctr */
b .LctrincNx
#else
b .Lctrinc
#endif
.Lctrfirst:
cbz w6, .Lctrnotfirst /* 1st time around? */
enc_prepare w3, x2, x6
ld1 {v4.16b}, [x5]
umov x5, v4.d[1] /* keep swabbed ctr in reg */
rev x5, x5
.Lctrnotfirst:
umov x8, v4.d[1] /* keep swabbed ctr in reg */
rev x8, x8
#if INTERLEAVE >= 2
cmn w5, w4 /* 32 bit overflow? */
cmn w8, w4 /* 32 bit overflow? */
bcs .Lctrloop
.LctrloopNx:
subs w4, w4, #INTERLEAVE
@ -292,11 +285,11 @@ AES_ENTRY(aes_ctr_encrypt)
#if INTERLEAVE == 2
mov v0.8b, v4.8b
mov v1.8b, v4.8b
rev x7, x5
add x5, x5, #1
rev x7, x8
add x8, x8, #1
ins v0.d[1], x7
rev x7, x5
add x5, x5, #1
rev x7, x8
add x8, x8, #1
ins v1.d[1], x7
ld1 {v2.16b-v3.16b}, [x1], #32 /* get 2 input blocks */
do_encrypt_block2x
@ -305,7 +298,7 @@ AES_ENTRY(aes_ctr_encrypt)
st1 {v0.16b-v1.16b}, [x0], #32
#else
ldr q8, =0x30000000200000001 /* addends 1,2,3[,0] */
dup v7.4s, w5
dup v7.4s, w8
mov v0.16b, v4.16b
add v7.4s, v7.4s, v8.4s
mov v1.16b, v4.16b
@ -323,18 +316,12 @@ AES_ENTRY(aes_ctr_encrypt)
eor v2.16b, v7.16b, v2.16b
eor v3.16b, v5.16b, v3.16b
st1 {v0.16b-v3.16b}, [x0], #64
add x5, x5, #INTERLEAVE
add x8, x8, #INTERLEAVE
#endif
cbz w4, .LctroutNx
.LctrincNx:
rev x7, x5
rev x7, x8
ins v4.d[1], x7
cbz w4, .Lctrout
b .LctrloopNx
.LctroutNx:
sub x5, x5, #1
rev x7, x5
ins v4.d[1], x7
b .Lctrout
.Lctr1x:
adds w4, w4, #INTERLEAVE
beq .Lctrout
@ -342,30 +329,39 @@ AES_ENTRY(aes_ctr_encrypt)
.Lctrloop:
mov v0.16b, v4.16b
encrypt_block v0, w3, x2, x6, w7
adds x8, x8, #1 /* increment BE ctr */
rev x7, x8
ins v4.d[1], x7
bcs .Lctrcarry /* overflow? */
.Lctrcarrydone:
subs w4, w4, #1
bmi .Lctrhalfblock /* blocks < 0 means 1/2 block */
ld1 {v3.16b}, [x1], #16
eor v3.16b, v0.16b, v3.16b
st1 {v3.16b}, [x0], #16
beq .Lctrout
.Lctrinc:
adds x5, x5, #1 /* increment BE ctr */
rev x7, x5
ins v4.d[1], x7
bcc .Lctrloop /* no overflow? */
bne .Lctrloop
.Lctrout:
st1 {v4.16b}, [x5] /* return next CTR value */
FRAME_POP
ret
.Lctrhalfblock:
ld1 {v3.8b}, [x1]
eor v3.8b, v0.8b, v3.8b
st1 {v3.8b}, [x0]
FRAME_POP
ret
.Lctrcarry:
umov x7, v4.d[0] /* load upper word of ctr */
rev x7, x7 /* ... to handle the carry */
add x7, x7, #1
rev x7, x7
ins v4.d[0], x7
b .Lctrloop
.Lctrhalfblock:
ld1 {v3.8b}, [x1]
eor v3.8b, v0.8b, v3.8b
st1 {v3.8b}, [x0]
.Lctrout:
FRAME_POP
ret
b .Lctrcarrydone
AES_ENDPROC(aes_ctr_encrypt)
.ltorg

View File

@ -11,6 +11,7 @@
* for more details.
*/
#include <linux/acpi.h>
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/init.h>
@ -209,7 +210,12 @@ static struct notifier_block init_cpu_capacity_notifier = {
static int __init register_cpufreq_notifier(void)
{
if (cap_parsing_failed)
/*
* on ACPI-based systems we need to use the default cpu capacity
* until we have the necessary code to parse the cpu capacity, so
* skip registering cpufreq notifier.
*/
if (!acpi_disabled || cap_parsing_failed)
return -EINVAL;
if (!alloc_cpumask_var(&cpus_to_visit, GFP_KERNEL)) {

View File

@ -139,7 +139,7 @@ static inline void atomic64_dec(atomic64_t *v)
#define atomic64_sub_and_test(i,v) (atomic64_sub_return((i), (v)) == 0)
#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0)
#define atomic64_inc_and_test(v) (atomic64_inc_return((v)) == 0)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
#define atomic_cmpxchg(v, old, new) (cmpxchg(&(v)->counter, old, new))
#define atomic_xchg(v, new) (xchg(&(v)->counter, new))
@ -161,6 +161,39 @@ static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u)
return c;
}
static inline int atomic64_add_unless(atomic64_t *v, long long i, long long u)
{
long long c, old;
c = atomic64_read(v);
for (;;) {
if (unlikely(c == u))
break;
old = atomic64_cmpxchg(v, c, c + i);
if (likely(old == c))
break;
c = old;
}
return c != u;
}
static inline long long atomic64_dec_if_positive(atomic64_t *v)
{
long long c, old, dec;
c = atomic64_read(v);
for (;;) {
dec = c - 1;
if (unlikely(dec < 0))
break;
old = atomic64_cmpxchg((v), c, dec);
if (likely(old == c))
break;
c = old;
}
return dec;
}
#define ATOMIC_OP(op) \
static inline int atomic_fetch_##op(int i, atomic_t *v) \
{ \

View File

@ -16,7 +16,7 @@
struct task_struct;
struct thread_struct;
#if !defined(CONFIG_LAZY_SAVE_FPU)
#if defined(CONFIG_FPU) && !defined(CONFIG_LAZY_SAVE_FPU)
struct fpu_state_struct;
extern asmlinkage void fpu_save(struct fpu_state_struct *);
#define switch_fpu(prev, next) \

View File

@ -6,7 +6,7 @@
#endif
#include <linux/compiler.h>
#include <asm/types.h> /* for BITS_PER_LONG/SHIFT_PER_LONG */
#include <asm/types.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include <linux/atomic.h>
@ -17,6 +17,12 @@
* to include/asm-i386/bitops.h or kerneldoc
*/
#if __BITS_PER_LONG == 64
#define SHIFT_PER_LONG 6
#else
#define SHIFT_PER_LONG 5
#endif
#define CHOP_SHIFTCOUNT(x) (((unsigned long) (x)) & (BITS_PER_LONG - 1))

View File

@ -3,10 +3,8 @@
#if defined(__LP64__)
#define __BITS_PER_LONG 64
#define SHIFT_PER_LONG 6
#else
#define __BITS_PER_LONG 32
#define SHIFT_PER_LONG 5
#endif
#include <asm-generic/bitsperlong.h>

View File

@ -1,6 +1,7 @@
#ifndef _PARISC_SWAB_H
#define _PARISC_SWAB_H
#include <asm/bitsperlong.h>
#include <linux/types.h>
#include <linux/compiler.h>
@ -38,7 +39,7 @@ static inline __attribute_const__ __u32 __arch_swab32(__u32 x)
}
#define __arch_swab32 __arch_swab32
#if BITS_PER_LONG > 32
#if __BITS_PER_LONG > 32
/*
** From "PA-RISC 2.0 Architecture", HP Professional Books.
** See Appendix I page 8 , "Endian Byte Swapping".
@ -61,6 +62,6 @@ static inline __attribute_const__ __u64 __arch_swab64(__u64 x)
return x;
}
#define __arch_swab64 __arch_swab64
#endif /* BITS_PER_LONG > 32 */
#endif /* __BITS_PER_LONG > 32 */
#endif /* _PARISC_SWAB_H */

View File

@ -164,7 +164,6 @@ config PPC
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_KERNEL_GZIP
select HAVE_CC_STACKPROTECTOR
config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
@ -484,6 +483,7 @@ config RELOCATABLE
bool "Build a relocatable kernel"
depends on (PPC64 && !COMPILE_TEST) || (FLATMEM && (44x || FSL_BOOKE))
select NONSTATIC_KERNEL
select MODULE_REL_CRCS if MODVERSIONS
help
This builds a kernel image that is capable of running at the
location the kernel is loaded at. For ppc32, there is no any

View File

@ -23,7 +23,9 @@ static __always_inline bool cpu_has_feature(unsigned long feature)
{
int i;
#ifndef __clang__ /* clang can't cope with this */
BUILD_BUG_ON(!__builtin_constant_p(feature));
#endif
#ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
if (!static_key_initialized) {

View File

@ -160,7 +160,9 @@ static __always_inline bool mmu_has_feature(unsigned long feature)
{
int i;
#ifndef __clang__ /* clang can't cope with this */
BUILD_BUG_ON(!__builtin_constant_p(feature));
#endif
#ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
if (!static_key_initialized) {

View File

@ -90,9 +90,5 @@ static inline int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sec
}
#endif
#if defined(CONFIG_MODVERSIONS) && defined(CONFIG_PPC64)
#define ARCH_RELOCATES_KCRCTAB
#define reloc_start PHYSICAL_START
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_MODULE_H */

View File

@ -1,40 +0,0 @@
/*
* GCC stack protector support.
*
* Stack protector works by putting predefined pattern at the start of
* the stack frame and verifying that it hasn't been overwritten when
* returning from the function. The pattern is called stack canary
* and gcc expects it to be defined by a global variable called
* "__stack_chk_guard" on PPC. This unfortunately means that on SMP
* we cannot have a different canary value per task.
*/
#ifndef _ASM_STACKPROTECTOR_H
#define _ASM_STACKPROTECTOR_H
#include <linux/random.h>
#include <linux/version.h>
#include <asm/reg.h>
extern unsigned long __stack_chk_guard;
/*
* Initialize the stackprotector canary value.
*
* NOTE: this must only be called from functions that never return,
* and it must always be inlined.
*/
static __always_inline void boot_init_stack_canary(void)
{
unsigned long canary;
/* Try to get a semi random initial value. */
get_random_bytes(&canary, sizeof(canary));
canary ^= mftb();
canary ^= LINUX_VERSION_CODE;
current->stack_canary = canary;
__stack_chk_guard = current->stack_canary;
}
#endif /* _ASM_STACKPROTECTOR_H */

View File

@ -19,10 +19,6 @@ CFLAGS_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
# -fstack-protector triggers protection checks in this code,
# but it is being used too early to link to meaningful stack_chk logic.
CFLAGS_prom_init.o += $(call cc-option, -fno-stack-protector)
ifdef CONFIG_FUNCTION_TRACER
# Do not trace early boot code
CFLAGS_REMOVE_cputable.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)

View File

@ -91,9 +91,6 @@ int main(void)
DEFINE(TI_livepatch_sp, offsetof(struct thread_info, livepatch_sp));
#endif
#ifdef CONFIG_CC_STACKPROTECTOR
DEFINE(TSK_STACK_CANARY, offsetof(struct task_struct, stack_canary));
#endif
DEFINE(KSP, offsetof(struct thread_struct, ksp));
DEFINE(PT_REGS, offsetof(struct thread_struct, regs));
#ifdef CONFIG_BOOKE

View File

@ -545,7 +545,7 @@ static void *eeh_pe_detach_dev(void *data, void *userdata)
static void *__eeh_clear_pe_frozen_state(void *data, void *flag)
{
struct eeh_pe *pe = (struct eeh_pe *)data;
bool *clear_sw_state = flag;
bool clear_sw_state = *(bool *)flag;
int i, rc = 1;
for (i = 0; rc && i < 3; i++)

View File

@ -674,11 +674,7 @@ BEGIN_FTR_SECTION
mtspr SPRN_SPEFSCR,r0 /* restore SPEFSCR reg */
END_FTR_SECTION_IFSET(CPU_FTR_SPE)
#endif /* CONFIG_SPE */
#if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP)
lwz r0,TSK_STACK_CANARY(r2)
lis r4,__stack_chk_guard@ha
stw r0,__stack_chk_guard@l(r4)
#endif
lwz r0,_CCR(r1)
mtcrf 0xFF,r0
/* r3-r12 are destroyed -- Cort */

View File

@ -286,14 +286,6 @@ static void dedotify_versions(struct modversion_info *vers,
for (end = (void *)vers + size; vers < end; vers++)
if (vers->name[0] == '.') {
memmove(vers->name, vers->name+1, strlen(vers->name));
#ifdef ARCH_RELOCATES_KCRCTAB
/* The TOC symbol has no CRC computed. To avoid CRC
* check failing, we must force it to the expected
* value (see CRC check in module.c).
*/
if (!strcmp(vers->name, "TOC."))
vers->crc = -(unsigned long)reloc_start;
#endif
}
}

View File

@ -64,12 +64,6 @@
#include <linux/kprobes.h>
#include <linux/kdebug.h>
#ifdef CONFIG_CC_STACKPROTECTOR
#include <linux/stackprotector.h>
unsigned long __stack_chk_guard __read_mostly;
EXPORT_SYMBOL(__stack_chk_guard);
#endif
/* Transactional Memory debug */
#ifdef TM_DEBUG_SW
#define TM_DEBUG(x...) printk(KERN_INFO x)

View File

@ -2834,6 +2834,9 @@ static void __init prom_find_boot_cpu(void)
cpu_pkg = call_prom("instance-to-package", 1, 1, prom_cpu);
if (!PHANDLE_VALID(cpu_pkg))
return;
prom_getprop(cpu_pkg, "reg", &rval, sizeof(rval));
prom.cpu = be32_to_cpu(rval);

View File

@ -65,7 +65,7 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
if (!pmdp)
return -ENOMEM;
if (map_page_size == PMD_SIZE) {
ptep = (pte_t *)pudp;
ptep = pmdp_ptep(pmdp);
goto set_the_pte;
}
ptep = pte_alloc_kernel(pmdp, ea);
@ -90,7 +90,7 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
}
pmdp = pmd_offset(pudp, ea);
if (map_page_size == PMD_SIZE) {
ptep = (pte_t *)pudp;
ptep = pmdp_ptep(pmdp);
goto set_the_pte;
}
if (!pmd_present(*pmdp)) {

View File

@ -963,6 +963,11 @@ static int s390_fpregs_set(struct task_struct *target,
if (target == current)
save_fpu_regs();
if (MACHINE_HAS_VX)
convert_vx_to_fp(fprs, target->thread.fpu.vxrs);
else
memcpy(&fprs, target->thread.fpu.fprs, sizeof(fprs));
/* If setting FPC, must validate it first. */
if (count > 0 && pos < offsetof(s390_fp_regs, fprs)) {
u32 ufpc[2] = { target->thread.fpu.fpc, 0 };
@ -1067,6 +1072,9 @@ static int s390_vxrs_low_set(struct task_struct *target,
if (target == current)
save_fpu_regs();
for (i = 0; i < __NUM_VXRS_LOW; i++)
vxrs[i] = *((__u64 *)(target->thread.fpu.vxrs + i) + 1);
rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vxrs, 0, -1);
if (rc == 0)
for (i = 0; i < __NUM_VXRS_LOW; i++)

View File

@ -202,7 +202,7 @@ static inline pgste_t ptep_xchg_start(struct mm_struct *mm,
return pgste;
}
static inline void ptep_xchg_commit(struct mm_struct *mm,
static inline pte_t ptep_xchg_commit(struct mm_struct *mm,
unsigned long addr, pte_t *ptep,
pgste_t pgste, pte_t old, pte_t new)
{
@ -220,6 +220,7 @@ static inline void ptep_xchg_commit(struct mm_struct *mm,
} else {
*ptep = new;
}
return old;
}
pte_t ptep_xchg_direct(struct mm_struct *mm, unsigned long addr,
@ -231,7 +232,7 @@ pte_t ptep_xchg_direct(struct mm_struct *mm, unsigned long addr,
preempt_disable();
pgste = ptep_xchg_start(mm, addr, ptep);
old = ptep_flush_direct(mm, addr, ptep);
ptep_xchg_commit(mm, addr, ptep, pgste, old, new);
old = ptep_xchg_commit(mm, addr, ptep, pgste, old, new);
preempt_enable();
return old;
}
@ -246,7 +247,7 @@ pte_t ptep_xchg_lazy(struct mm_struct *mm, unsigned long addr,
preempt_disable();
pgste = ptep_xchg_start(mm, addr, ptep);
old = ptep_flush_lazy(mm, addr, ptep);
ptep_xchg_commit(mm, addr, ptep, pgste, old, new);
old = ptep_xchg_commit(mm, addr, ptep, pgste, old, new);
preempt_enable();
return old;
}

View File

@ -35,15 +35,15 @@ void __tsb_context_switch(unsigned long pgd_pa,
static inline void tsb_context_switch(struct mm_struct *mm)
{
__tsb_context_switch(__pa(mm->pgd),
&mm->context.tsb_block[0],
&mm->context.tsb_block[MM_TSB_BASE],
#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
(mm->context.tsb_block[1].tsb ?
&mm->context.tsb_block[1] :
(mm->context.tsb_block[MM_TSB_HUGE].tsb ?
&mm->context.tsb_block[MM_TSB_HUGE] :
NULL)
#else
NULL
#endif
, __pa(&mm->context.tsb_descr[0]));
, __pa(&mm->context.tsb_descr[MM_TSB_BASE]));
}
void tsb_grow(struct mm_struct *mm,

View File

@ -1021,7 +1021,7 @@ static void __init alloc_one_queue(unsigned long *pa_ptr, unsigned long qmask)
unsigned long order = get_order(size);
unsigned long p;
p = __get_free_pages(GFP_KERNEL, order);
p = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
if (!p) {
prom_printf("SUN4V: Error, cannot allocate queue.\n");
prom_halt();

View File

@ -43,8 +43,8 @@ static const char poweroff_msg[32] __attribute__((aligned(32))) =
"Linux powering off";
static const char rebooting_msg[32] __attribute__((aligned(32))) =
"Linux rebooting";
static const char panicing_msg[32] __attribute__((aligned(32))) =
"Linux panicing";
static const char panicking_msg[32] __attribute__((aligned(32))) =
"Linux panicking";
static int sstate_reboot_call(struct notifier_block *np, unsigned long type, void *_unused)
{
@ -76,7 +76,7 @@ static struct notifier_block sstate_reboot_notifier = {
static int sstate_panic_event(struct notifier_block *n, unsigned long event, void *ptr)
{
do_set_sstate(HV_SOFT_STATE_TRANSITION, panicing_msg);
do_set_sstate(HV_SOFT_STATE_TRANSITION, panicking_msg);
return NOTIFY_DONE;
}

View File

@ -2051,6 +2051,73 @@ void sun4v_resum_overflow(struct pt_regs *regs)
atomic_inc(&sun4v_resum_oflow_cnt);
}
/* Given a set of registers, get the virtual addressi that was being accessed
* by the faulting instructions at tpc.
*/
static unsigned long sun4v_get_vaddr(struct pt_regs *regs)
{
unsigned int insn;
if (!copy_from_user(&insn, (void __user *)regs->tpc, 4)) {
return compute_effective_address(regs, insn,
(insn >> 25) & 0x1f);
}
return 0;
}
/* Attempt to handle non-resumable errors generated from userspace.
* Returns true if the signal was handled, false otherwise.
*/
bool sun4v_nonresum_error_user_handled(struct pt_regs *regs,
struct sun4v_error_entry *ent) {
unsigned int attrs = ent->err_attrs;
if (attrs & SUN4V_ERR_ATTRS_MEMORY) {
unsigned long addr = ent->err_raddr;
siginfo_t info;
if (addr == ~(u64)0) {
/* This seems highly unlikely to ever occur */
pr_emerg("SUN4V NON-RECOVERABLE ERROR: Memory error detected in unknown location!\n");
} else {
unsigned long page_cnt = DIV_ROUND_UP(ent->err_size,
PAGE_SIZE);
/* Break the unfortunate news. */
pr_emerg("SUN4V NON-RECOVERABLE ERROR: Memory failed at %016lX\n",
addr);
pr_emerg("SUN4V NON-RECOVERABLE ERROR: Claiming %lu ages.\n",
page_cnt);
while (page_cnt-- > 0) {
if (pfn_valid(addr >> PAGE_SHIFT))
get_page(pfn_to_page(addr >> PAGE_SHIFT));
addr += PAGE_SIZE;
}
}
info.si_signo = SIGKILL;
info.si_errno = 0;
info.si_trapno = 0;
force_sig_info(info.si_signo, &info, current);
return true;
}
if (attrs & SUN4V_ERR_ATTRS_PIO) {
siginfo_t info;
info.si_signo = SIGBUS;
info.si_code = BUS_ADRERR;
info.si_addr = (void __user *)sun4v_get_vaddr(regs);
force_sig_info(info.si_signo, &info, current);
return true;
}
/* Default to doing nothing */
return false;
}
/* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate.
* Log the event, clear the first word of the entry, and die.
*/
@ -2075,6 +2142,12 @@ void sun4v_nonresum_error(struct pt_regs *regs, unsigned long offset)
put_cpu();
if (!(regs->tstate & TSTATE_PRIV) &&
sun4v_nonresum_error_user_handled(regs, &local_copy)) {
/* DON'T PANIC: This userspace error was handled. */
return;
}
#ifdef CONFIG_PCI
/* Check for the special PCI poke sequence. */
if (pci_poke_in_progress && pci_poke_cpu == cpu) {

View File

@ -111,7 +111,7 @@ static int tile_gpr_set(struct task_struct *target,
const void *kbuf, const void __user *ubuf)
{
int ret;
struct pt_regs regs;
struct pt_regs regs = *task_pt_regs(target);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0,
sizeof(regs));

View File

@ -161,7 +161,13 @@ static u64 rapl_timer_ms;
static inline struct rapl_pmu *cpu_to_rapl_pmu(unsigned int cpu)
{
return rapl_pmus->pmus[topology_logical_package_id(cpu)];
unsigned int pkgid = topology_logical_package_id(cpu);
/*
* The unsigned check also catches the '-1' return value for non
* existent mappings in the topology map.
*/
return pkgid < rapl_pmus->maxpkg ? rapl_pmus->pmus[pkgid] : NULL;
}
static inline u64 rapl_read_counter(struct perf_event *event)
@ -402,6 +408,8 @@ static int rapl_pmu_event_init(struct perf_event *event)
/* must be done before validate_group */
pmu = cpu_to_rapl_pmu(event->cpu);
if (!pmu)
return -EINVAL;
event->cpu = pmu->cpu;
event->pmu_private = pmu;
event->hw.event_base = msr;
@ -585,6 +593,20 @@ static int rapl_cpu_online(unsigned int cpu)
struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
int target;
if (!pmu) {
pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
if (!pmu)
return -ENOMEM;
raw_spin_lock_init(&pmu->lock);
INIT_LIST_HEAD(&pmu->active_list);
pmu->pmu = &rapl_pmus->pmu;
pmu->timer_interval = ms_to_ktime(rapl_timer_ms);
rapl_hrtimer_init(pmu);
rapl_pmus->pmus[topology_logical_package_id(cpu)] = pmu;
}
/*
* Check if there is an online cpu in the package which collects rapl
* events already.
@ -598,27 +620,6 @@ static int rapl_cpu_online(unsigned int cpu)
return 0;
}
static int rapl_cpu_prepare(unsigned int cpu)
{
struct rapl_pmu *pmu = cpu_to_rapl_pmu(cpu);
if (pmu)
return 0;
pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
if (!pmu)
return -ENOMEM;
raw_spin_lock_init(&pmu->lock);
INIT_LIST_HEAD(&pmu->active_list);
pmu->pmu = &rapl_pmus->pmu;
pmu->timer_interval = ms_to_ktime(rapl_timer_ms);
pmu->cpu = -1;
rapl_hrtimer_init(pmu);
rapl_pmus->pmus[topology_logical_package_id(cpu)] = pmu;
return 0;
}
static int rapl_check_hw_unit(bool apply_quirk)
{
u64 msr_rapl_power_unit_bits;
@ -803,29 +804,21 @@ static int __init rapl_pmu_init(void)
/*
* Install callbacks. Core will call them for each online cpu.
*/
ret = cpuhp_setup_state(CPUHP_PERF_X86_RAPL_PREP, "perf/x86/rapl:prepare",
rapl_cpu_prepare, NULL);
if (ret)
goto out;
ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_RAPL_ONLINE,
"perf/x86/rapl:online",
rapl_cpu_online, rapl_cpu_offline);
if (ret)
goto out1;
goto out;
ret = perf_pmu_register(&rapl_pmus->pmu, "power", -1);
if (ret)
goto out2;
goto out1;
rapl_advertise();
return 0;
out2:
cpuhp_remove_state(CPUHP_AP_PERF_X86_RAPL_ONLINE);
out1:
cpuhp_remove_state(CPUHP_PERF_X86_RAPL_PREP);
cpuhp_remove_state(CPUHP_AP_PERF_X86_RAPL_ONLINE);
out:
pr_warn("Initialization failed (%d), disabled\n", ret);
cleanup_rapl_pmus();
@ -836,7 +829,6 @@ module_init(rapl_pmu_init);
static void __exit intel_rapl_exit(void)
{
cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_RAPL_ONLINE);
cpuhp_remove_state_nocalls(CPUHP_PERF_X86_RAPL_PREP);
perf_pmu_unregister(&rapl_pmus->pmu);
cleanup_rapl_pmus();
}

View File

@ -100,7 +100,13 @@ ssize_t uncore_event_show(struct kobject *kobj,
struct intel_uncore_box *uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu)
{
return pmu->boxes[topology_logical_package_id(cpu)];
unsigned int pkgid = topology_logical_package_id(cpu);
/*
* The unsigned check also catches the '-1' return value for non
* existent mappings in the topology map.
*/
return pkgid < max_packages ? pmu->boxes[pkgid] : NULL;
}
u64 uncore_msr_read_counter(struct intel_uncore_box *box, struct perf_event *event)
@ -764,30 +770,6 @@ static void uncore_pmu_unregister(struct intel_uncore_pmu *pmu)
pmu->registered = false;
}
static void __uncore_exit_boxes(struct intel_uncore_type *type, int cpu)
{
struct intel_uncore_pmu *pmu = type->pmus;
struct intel_uncore_box *box;
int i, pkg;
if (pmu) {
pkg = topology_physical_package_id(cpu);
for (i = 0; i < type->num_boxes; i++, pmu++) {
box = pmu->boxes[pkg];
if (box)
uncore_box_exit(box);
}
}
}
static void uncore_exit_boxes(void *dummy)
{
struct intel_uncore_type **types;
for (types = uncore_msr_uncores; *types; types++)
__uncore_exit_boxes(*types++, smp_processor_id());
}
static void uncore_free_boxes(struct intel_uncore_pmu *pmu)
{
int pkg;
@ -1058,86 +1040,6 @@ static void uncore_pci_exit(void)
}
}
static int uncore_cpu_dying(unsigned int cpu)
{
struct intel_uncore_type *type, **types = uncore_msr_uncores;
struct intel_uncore_pmu *pmu;
struct intel_uncore_box *box;
int i, pkg;
pkg = topology_logical_package_id(cpu);
for (; *types; types++) {
type = *types;
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
box = pmu->boxes[pkg];
if (box && atomic_dec_return(&box->refcnt) == 0)
uncore_box_exit(box);
}
}
return 0;
}
static int first_init;
static int uncore_cpu_starting(unsigned int cpu)
{
struct intel_uncore_type *type, **types = uncore_msr_uncores;
struct intel_uncore_pmu *pmu;
struct intel_uncore_box *box;
int i, pkg, ncpus = 1;
if (first_init) {
/*
* On init we get the number of online cpus in the package
* and set refcount for all of them.
*/
ncpus = cpumask_weight(topology_core_cpumask(cpu));
}
pkg = topology_logical_package_id(cpu);
for (; *types; types++) {
type = *types;
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
box = pmu->boxes[pkg];
if (!box)
continue;
/* The first cpu on a package activates the box */
if (atomic_add_return(ncpus, &box->refcnt) == ncpus)
uncore_box_init(box);
}
}
return 0;
}
static int uncore_cpu_prepare(unsigned int cpu)
{
struct intel_uncore_type *type, **types = uncore_msr_uncores;
struct intel_uncore_pmu *pmu;
struct intel_uncore_box *box;
int i, pkg;
pkg = topology_logical_package_id(cpu);
for (; *types; types++) {
type = *types;
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
if (pmu->boxes[pkg])
continue;
/* First cpu of a package allocates the box */
box = uncore_alloc_box(type, cpu_to_node(cpu));
if (!box)
return -ENOMEM;
box->pmu = pmu;
box->pkgid = pkg;
pmu->boxes[pkg] = box;
}
}
return 0;
}
static void uncore_change_type_ctx(struct intel_uncore_type *type, int old_cpu,
int new_cpu)
{
@ -1177,12 +1079,14 @@ static void uncore_change_context(struct intel_uncore_type **uncores,
static int uncore_event_cpu_offline(unsigned int cpu)
{
int target;
struct intel_uncore_type *type, **types = uncore_msr_uncores;
struct intel_uncore_pmu *pmu;
struct intel_uncore_box *box;
int i, pkg, target;
/* Check if exiting cpu is used for collecting uncore events */
if (!cpumask_test_and_clear_cpu(cpu, &uncore_cpu_mask))
return 0;
goto unref;
/* Find a new cpu to collect uncore events */
target = cpumask_any_but(topology_core_cpumask(cpu), cpu);
@ -1194,12 +1098,82 @@ static int uncore_event_cpu_offline(unsigned int cpu)
uncore_change_context(uncore_msr_uncores, cpu, target);
uncore_change_context(uncore_pci_uncores, cpu, target);
unref:
/* Clear the references */
pkg = topology_logical_package_id(cpu);
for (; *types; types++) {
type = *types;
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
box = pmu->boxes[pkg];
if (box && atomic_dec_return(&box->refcnt) == 0)
uncore_box_exit(box);
}
}
return 0;
}
static int allocate_boxes(struct intel_uncore_type **types,
unsigned int pkg, unsigned int cpu)
{
struct intel_uncore_box *box, *tmp;
struct intel_uncore_type *type;
struct intel_uncore_pmu *pmu;
LIST_HEAD(allocated);
int i;
/* Try to allocate all required boxes */
for (; *types; types++) {
type = *types;
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
if (pmu->boxes[pkg])
continue;
box = uncore_alloc_box(type, cpu_to_node(cpu));
if (!box)
goto cleanup;
box->pmu = pmu;
box->pkgid = pkg;
list_add(&box->active_list, &allocated);
}
}
/* Install them in the pmus */
list_for_each_entry_safe(box, tmp, &allocated, active_list) {
list_del_init(&box->active_list);
box->pmu->boxes[pkg] = box;
}
return 0;
cleanup:
list_for_each_entry_safe(box, tmp, &allocated, active_list) {
list_del_init(&box->active_list);
kfree(box);
}
return -ENOMEM;
}
static int uncore_event_cpu_online(unsigned int cpu)
{
int target;
struct intel_uncore_type *type, **types = uncore_msr_uncores;
struct intel_uncore_pmu *pmu;
struct intel_uncore_box *box;
int i, ret, pkg, target;
pkg = topology_logical_package_id(cpu);
ret = allocate_boxes(types, pkg, cpu);
if (ret)
return ret;
for (; *types; types++) {
type = *types;
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
box = pmu->boxes[pkg];
if (!box && atomic_inc_return(&box->refcnt) == 1)
uncore_box_init(box);
}
}
/*
* Check if there is an online cpu in the package
@ -1389,38 +1363,16 @@ static int __init intel_uncore_init(void)
if (cret && pret)
return -ENODEV;
/*
* Install callbacks. Core will call them for each online cpu.
*
* The first online cpu of each package allocates and takes
* the refcounts for all other online cpus in that package.
* If msrs are not enabled no allocation is required and
* uncore_cpu_prepare() is not called for each online cpu.
*/
if (!cret) {
ret = cpuhp_setup_state(CPUHP_PERF_X86_UNCORE_PREP,
"perf/x86/intel/uncore:prepare",
uncore_cpu_prepare, NULL);
if (ret)
goto err;
} else {
cpuhp_setup_state_nocalls(CPUHP_PERF_X86_UNCORE_PREP,
"perf/x86/intel/uncore:prepare",
uncore_cpu_prepare, NULL);
}
first_init = 1;
cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_STARTING,
"perf/x86/uncore:starting",
uncore_cpu_starting, uncore_cpu_dying);
first_init = 0;
cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
"perf/x86/uncore:online",
uncore_event_cpu_online, uncore_event_cpu_offline);
/* Install hotplug callbacks to setup the targets for each package */
ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
"perf/x86/intel/uncore:online",
uncore_event_cpu_online,
uncore_event_cpu_offline);
if (ret)
goto err;
return 0;
err:
/* Undo box->init_box() */
on_each_cpu_mask(&uncore_cpu_mask, uncore_exit_boxes, NULL, 1);
uncore_types_exit(uncore_msr_uncores);
uncore_pci_exit();
return ret;
@ -1429,9 +1381,7 @@ module_init(intel_uncore_init);
static void __exit intel_uncore_exit(void)
{
cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_UNCORE_STARTING);
cpuhp_remove_state_nocalls(CPUHP_PERF_X86_UNCORE_PREP);
cpuhp_remove_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
uncore_types_exit(uncore_msr_uncores);
uncore_pci_exit();
}

View File

@ -140,6 +140,7 @@ extern void __init load_ucode_bsp(void);
extern void load_ucode_ap(void);
void reload_early_microcode(void);
extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
extern bool initrd_gone;
#else
static inline int __init microcode_init(void) { return 0; };
static inline void __init load_ucode_bsp(void) { }

View File

@ -2117,6 +2117,7 @@ static inline void __init check_timer(void)
if (idx != -1 && irq_trigger(idx))
unmask_ioapic_irq(irq_get_chip_data(0));
}
irq_domain_deactivate_irq(irq_data);
irq_domain_activate_irq(irq_data);
if (timer_irq_works()) {
if (disable_timer_pin_1 > 0)
@ -2138,6 +2139,7 @@ static inline void __init check_timer(void)
* legacy devices should be connected to IO APIC #0
*/
replace_pin_at_irq_node(data, node, apic1, pin1, apic2, pin2);
irq_domain_deactivate_irq(irq_data);
irq_domain_activate_irq(irq_data);
legacy_pic->unmask(0);
if (timer_irq_works()) {

View File

@ -1373,20 +1373,15 @@ static unsigned long mce_adjust_timer_default(unsigned long interval)
static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
static void __restart_timer(struct timer_list *t, unsigned long interval)
static void __start_timer(struct timer_list *t, unsigned long interval)
{
unsigned long when = jiffies + interval;
unsigned long flags;
local_irq_save(flags);
if (timer_pending(t)) {
if (time_before(when, t->expires))
mod_timer(t, when);
} else {
t->expires = round_jiffies(when);
add_timer_on(t, smp_processor_id());
}
if (!timer_pending(t) || time_before(when, t->expires))
mod_timer(t, round_jiffies(when));
local_irq_restore(flags);
}
@ -1421,7 +1416,7 @@ static void mce_timer_fn(unsigned long data)
done:
__this_cpu_write(mce_next_interval, iv);
__restart_timer(t, iv);
__start_timer(t, iv);
}
/*
@ -1432,7 +1427,7 @@ void mce_timer_kick(unsigned long interval)
struct timer_list *t = this_cpu_ptr(&mce_timer);
unsigned long iv = __this_cpu_read(mce_next_interval);
__restart_timer(t, interval);
__start_timer(t, interval);
if (interval < iv)
__this_cpu_write(mce_next_interval, interval);
@ -1779,17 +1774,15 @@ static void __mcheck_cpu_clear_vendor(struct cpuinfo_x86 *c)
}
}
static void mce_start_timer(unsigned int cpu, struct timer_list *t)
static void mce_start_timer(struct timer_list *t)
{
unsigned long iv = check_interval * HZ;
if (mca_cfg.ignore_ce || !iv)
return;
per_cpu(mce_next_interval, cpu) = iv;
t->expires = round_jiffies(jiffies + iv);
add_timer_on(t, cpu);
this_cpu_write(mce_next_interval, iv);
__start_timer(t, iv);
}
static void __mcheck_cpu_setup_timer(void)
@ -1806,7 +1799,7 @@ static void __mcheck_cpu_init_timer(void)
unsigned int cpu = smp_processor_id();
setup_pinned_timer(t, mce_timer_fn, cpu);
mce_start_timer(cpu, t);
mce_start_timer(t);
}
/* Handle unconfigured int18 (should never happen) */
@ -2566,7 +2559,7 @@ static int mce_cpu_dead(unsigned int cpu)
static int mce_cpu_online(unsigned int cpu)
{
struct timer_list *t = &per_cpu(mce_timer, cpu);
struct timer_list *t = this_cpu_ptr(&mce_timer);
int ret;
mce_device_create(cpu);
@ -2577,13 +2570,13 @@ static int mce_cpu_online(unsigned int cpu)
return ret;
}
mce_reenable_cpu();
mce_start_timer(cpu, t);
mce_start_timer(t);
return 0;
}
static int mce_cpu_pre_down(unsigned int cpu)
{
struct timer_list *t = &per_cpu(mce_timer, cpu);
struct timer_list *t = this_cpu_ptr(&mce_timer);
mce_disable_cpu();
del_timer_sync(t);

View File

@ -384,8 +384,9 @@ void load_ucode_amd_ap(unsigned int family)
reget:
if (!get_builtin_microcode(&cp, family)) {
#ifdef CONFIG_BLK_DEV_INITRD
cp = find_cpio_data(ucode_path, (void *)initrd_start,
initrd_end - initrd_start, NULL);
if (!initrd_gone)
cp = find_cpio_data(ucode_path, (void *)initrd_start,
initrd_end - initrd_start, NULL);
#endif
if (!(cp.data && cp.size)) {
/*

View File

@ -46,6 +46,8 @@
static struct microcode_ops *microcode_ops;
static bool dis_ucode_ldr = true;
bool initrd_gone;
LIST_HEAD(microcode_cache);
/*
@ -190,21 +192,24 @@ void load_ucode_ap(void)
static int __init save_microcode_in_initrd(void)
{
struct cpuinfo_x86 *c = &boot_cpu_data;
int ret = -EINVAL;
switch (c->x86_vendor) {
case X86_VENDOR_INTEL:
if (c->x86 >= 6)
return save_microcode_in_initrd_intel();
ret = save_microcode_in_initrd_intel();
break;
case X86_VENDOR_AMD:
if (c->x86 >= 0x10)
return save_microcode_in_initrd_amd(c->x86);
ret = save_microcode_in_initrd_amd(c->x86);
break;
default:
break;
}
return -EINVAL;
initrd_gone = true;
return ret;
}
struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa)
@ -247,9 +252,16 @@ struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa)
* has the virtual address of the beginning of the initrd. It also
* possibly relocates the ramdisk. In either case, initrd_start contains
* the updated address so use that instead.
*
* initrd_gone is for the hotplug case where we've thrown out initrd
* already.
*/
if (!use_pa && initrd_start)
start = initrd_start;
if (!use_pa) {
if (initrd_gone)
return (struct cpio_data){ NULL, 0, "" };
if (initrd_start)
start = initrd_start;
}
return find_cpio_data(path, (void *)start, size, NULL);
#else /* !CONFIG_BLK_DEV_INITRD */

View File

@ -41,7 +41,7 @@
static const char ucode_path[] = "kernel/x86/microcode/GenuineIntel.bin";
/* Current microcode patch used in early patching */
/* Current microcode patch used in early patching on the APs. */
struct microcode_intel *intel_ucode_patch;
static inline bool cpu_signatures_match(unsigned int s1, unsigned int p1,
@ -607,12 +607,6 @@ int __init save_microcode_in_initrd_intel(void)
struct ucode_cpu_info uci;
struct cpio_data cp;
/*
* AP loading didn't find any microcode patch, no need to save anything.
*/
if (!intel_ucode_patch || IS_ERR(intel_ucode_patch))
return 0;
if (!load_builtin_intel_microcode(&cp))
cp = find_microcode_in_initrd(ucode_path, false);
@ -628,7 +622,6 @@ int __init save_microcode_in_initrd_intel(void)
return 0;
}
/*
* @res_patch, output: a pointer to the patch we found.
*/

View File

@ -9,6 +9,7 @@
#include <asm/fpu/regset.h>
#include <asm/fpu/signal.h>
#include <asm/fpu/types.h>
#include <asm/fpu/xstate.h>
#include <asm/traps.h>
#include <linux/hardirq.h>
@ -183,7 +184,8 @@ void fpstate_init(union fpregs_state *state)
* it will #GP. Make sure it is replaced after the memset().
*/
if (static_cpu_has(X86_FEATURE_XSAVES))
state->xsave.header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT;
state->xsave.header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT |
xfeatures_mask;
if (static_cpu_has(X86_FEATURE_FXSR))
fpstate_init_fxstate(&state->fxsave);

View File

@ -352,6 +352,7 @@ static int hpet_resume(struct clock_event_device *evt, int timer)
} else {
struct hpet_dev *hdev = EVT_TO_HPET_DEV(evt);
irq_domain_deactivate_irq(irq_get_irq_data(hdev->irq));
irq_domain_activate_irq(irq_get_irq_data(hdev->irq));
disable_irq(hdev->irq);
irq_set_affinity(hdev->irq, cpumask_of(hdev->cpu));

View File

@ -3182,6 +3182,7 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
memcpy(dest, xsave, XSAVE_HDR_OFFSET);
/* Set XSTATE_BV */
xstate_bv &= vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FPSSE;
*(u64 *)(dest + XSAVE_HDR_OFFSET) = xstate_bv;
/*

View File

@ -268,6 +268,22 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
efi_scratch.use_pgd = true;
/*
* Certain firmware versions are way too sentimential and still believe
* they are exclusive and unquestionable owners of the first physical page,
* even though they explicitly mark it as EFI_CONVENTIONAL_MEMORY
* (but then write-access it later during SetVirtualAddressMap()).
*
* Create a 1:1 mapping for this page, to avoid triple faults during early
* boot with such firmware. We are free to hand this page to the BIOS,
* as trim_bios_range() will reserve the first page and isolate it away
* from memory allocators anyway.
*/
if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, _PAGE_RW)) {
pr_err("Failed to create 1:1 mapping for the first page!\n");
return 1;
}
/*
* When making calls to the firmware everything needs to be 1:1
* mapped and addressable with 32-bit pointers. Map the kernel

View File

@ -419,7 +419,7 @@ subsys_initcall(topology_init);
void cpu_reset(void)
{
#if XCHAL_HAVE_PTP_MMU
#if XCHAL_HAVE_PTP_MMU && IS_ENABLED(CONFIG_MMU)
local_irq_disable();
/*
* We have full MMU: all autoload ways, ways 7, 8 and 9 of DTLB must

View File

@ -356,6 +356,7 @@ int crypto_register_alg(struct crypto_alg *alg)
struct crypto_larval *larval;
int err;
alg->cra_flags &= ~CRYPTO_ALG_DEAD;
err = crypto_check_alg(alg);
if (err)
return err;

View File

@ -852,23 +852,18 @@ acpi_tb_install_and_load_table(acpi_physical_address address,
ACPI_FUNCTION_TRACE(tb_install_and_load_table);
(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
/* Install the table and load it into the namespace */
status = acpi_tb_install_standard_table(address, flags, TRUE,
override, &i);
if (ACPI_FAILURE(status)) {
goto unlock_and_exit;
goto exit;
}
(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
status = acpi_tb_load_table(i, acpi_gbl_root_node);
(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
unlock_and_exit:
exit:
*table_index = i;
(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
return_ACPI_STATUS(status);
}

View File

@ -217,6 +217,10 @@ acpi_tb_install_standard_table(acpi_physical_address address,
goto release_and_exit;
}
/* Acquire the table lock */
(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
if (reload) {
/*
* Validate the incoming table signature.
@ -244,7 +248,7 @@ acpi_tb_install_standard_table(acpi_physical_address address,
new_table_desc.signature.integer));
status = AE_BAD_SIGNATURE;
goto release_and_exit;
goto unlock_and_exit;
}
/* Check if table is already registered */
@ -279,7 +283,7 @@ acpi_tb_install_standard_table(acpi_physical_address address,
/* Table is still loaded, this is an error */
status = AE_ALREADY_EXISTS;
goto release_and_exit;
goto unlock_and_exit;
} else {
/*
* Table was unloaded, allow it to be reloaded.
@ -290,6 +294,7 @@ acpi_tb_install_standard_table(acpi_physical_address address,
* indicate the re-installation.
*/
acpi_tb_uninstall_table(&new_table_desc);
(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
*table_index = i;
return_ACPI_STATUS(AE_OK);
}
@ -303,11 +308,19 @@ acpi_tb_install_standard_table(acpi_physical_address address,
/* Invoke table handler if present */
(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
if (acpi_gbl_table_handler) {
(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_INSTALL,
new_table_desc.pointer,
acpi_gbl_table_handler_context);
}
(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
unlock_and_exit:
/* Release the table lock */
(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
release_and_exit:

View File

@ -674,14 +674,6 @@ static void acpi_sleep_suspend_setup(void)
if (acpi_sleep_state_supported(i))
sleep_states[i] = 1;
/*
* Use suspend-to-idle by default if ACPI_FADT_LOW_POWER_S0 is set and
* the default suspend mode was not selected from the command line.
*/
if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0 &&
mem_sleep_default > PM_SUSPEND_MEM)
mem_sleep_default = PM_SUSPEND_FREEZE;
suspend_set_ops(old_suspend_ordering ?
&acpi_suspend_ops_old : &acpi_suspend_ops);
freeze_set_ops(&acpi_freeze_ops);

View File

@ -305,17 +305,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "Dell System XPS L702X"),
},
},
{
/* https://bugzilla.redhat.com/show_bug.cgi?id=1204476 */
/* https://bugs.launchpad.net/ubuntu/+source/linux-lts-trusty/+bug/1416940 */
.callback = video_detect_force_native,
.ident = "HP Pavilion dv6",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv6 Notebook PC"),
},
},
{ },
};

View File

@ -1702,6 +1702,8 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
if (qc->err_mask & ~AC_ERR_OTHER)
qc->err_mask &= ~AC_ERR_OTHER;
} else if (qc->tf.command == ATA_CMD_REQ_SENSE_DATA) {
qc->result_tf.command |= ATA_SENSE;
}
/* finish up */
@ -4356,10 +4358,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
{ "ST380013AS", "3.20", ATA_HORKAGE_MAX_SEC_1024 },
/*
* Device times out with higher max sects.
* These devices time out with higher max sects.
* https://bugzilla.kernel.org/show_bug.cgi?id=121671
*/
{ "LITEON CX1-JB256-HP", NULL, ATA_HORKAGE_MAX_SEC_1024 },
{ "LITEON CX1-JB*-HP", NULL, ATA_HORKAGE_MAX_SEC_1024 },
/* Devices we expect to fail diagnostics */

View File

@ -4132,6 +4132,9 @@ static int mv_platform_probe(struct platform_device *pdev)
host->iomap = NULL;
hpriv->base = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
if (!hpriv->base)
return -ENOMEM;
hpriv->base -= SATAHC0_REG_BASE;
hpriv->clk = clk_get(&pdev->dev, NULL);

View File

@ -558,9 +558,6 @@ static void fw_load_abort(struct firmware_priv *fw_priv)
struct firmware_buf *buf = fw_priv->buf;
__fw_load_abort(buf);
/* avoid user action after loading abort */
fw_priv->buf = NULL;
}
static LIST_HEAD(pending_fw_head);
@ -713,7 +710,7 @@ static ssize_t firmware_loading_store(struct device *dev,
mutex_lock(&fw_lock);
fw_buf = fw_priv->buf;
if (!fw_buf)
if (fw_state_is_aborted(&fw_buf->fw_st))
goto out;
switch (loading) {

View File

@ -389,33 +389,33 @@ static ssize_t show_valid_zones(struct device *dev,
{
struct memory_block *mem = to_memory_block(dev);
unsigned long start_pfn, end_pfn;
unsigned long valid_start, valid_end, valid_pages;
unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
struct page *first_page;
struct zone *zone;
int zone_shift = 0;
start_pfn = section_nr_to_pfn(mem->start_section_nr);
end_pfn = start_pfn + nr_pages;
first_page = pfn_to_page(start_pfn);
/* The block contains more than one zone can not be offlined. */
if (!test_pages_in_a_zone(start_pfn, end_pfn))
if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start, &valid_end))
return sprintf(buf, "none\n");
zone = page_zone(first_page);
zone = page_zone(pfn_to_page(valid_start));
valid_pages = valid_end - valid_start;
/* MMOP_ONLINE_KEEP */
sprintf(buf, "%s", zone->name);
/* MMOP_ONLINE_KERNEL */
zone_shift = zone_can_shift(start_pfn, nr_pages, ZONE_NORMAL);
zone_can_shift(valid_start, valid_pages, ZONE_NORMAL, &zone_shift);
if (zone_shift) {
strcat(buf, " ");
strcat(buf, (zone + zone_shift)->name);
}
/* MMOP_ONLINE_MOVABLE */
zone_shift = zone_can_shift(start_pfn, nr_pages, ZONE_MOVABLE);
zone_can_shift(valid_start, valid_pages, ZONE_MOVABLE, &zone_shift);
if (zone_shift) {
strcat(buf, " ");
strcat(buf, (zone + zone_shift)->name);

View File

@ -45,6 +45,9 @@ int bcma_sprom_get(struct bcma_bus *bus);
void bcma_core_chipcommon_early_init(struct bcma_drv_cc *cc);
void bcma_core_chipcommon_init(struct bcma_drv_cc *cc);
void bcma_chipco_bcm4331_ext_pa_lines_ctl(struct bcma_drv_cc *cc, bool enable);
#ifdef CONFIG_BCMA_DRIVER_MIPS
void bcma_chipco_serial_init(struct bcma_drv_cc *cc);
#endif /* CONFIG_BCMA_DRIVER_MIPS */
/* driver_chipcommon_b.c */
int bcma_core_chipcommon_b_init(struct bcma_drv_cc_b *ccb);

View File

@ -15,8 +15,6 @@
#include <linux/platform_device.h>
#include <linux/bcma/bcma.h>
static void bcma_chipco_serial_init(struct bcma_drv_cc *cc);
static inline u32 bcma_cc_write32_masked(struct bcma_drv_cc *cc, u16 offset,
u32 mask, u32 value)
{
@ -186,9 +184,6 @@ void bcma_core_chipcommon_early_init(struct bcma_drv_cc *cc)
if (cc->capabilities & BCMA_CC_CAP_PMU)
bcma_pmu_early_init(cc);
if (IS_BUILTIN(CONFIG_BCM47XX) && bus->hosttype == BCMA_HOSTTYPE_SOC)
bcma_chipco_serial_init(cc);
if (bus->hosttype == BCMA_HOSTTYPE_SOC)
bcma_core_chipcommon_flash_detect(cc);
@ -378,9 +373,9 @@ u32 bcma_chipco_gpio_pulldown(struct bcma_drv_cc *cc, u32 mask, u32 value)
return res;
}
static void bcma_chipco_serial_init(struct bcma_drv_cc *cc)
#ifdef CONFIG_BCMA_DRIVER_MIPS
void bcma_chipco_serial_init(struct bcma_drv_cc *cc)
{
#if IS_BUILTIN(CONFIG_BCM47XX)
unsigned int irq;
u32 baud_base;
u32 i;
@ -422,5 +417,5 @@ static void bcma_chipco_serial_init(struct bcma_drv_cc *cc)
ports[i].baud_base = baud_base;
ports[i].reg_shift = 0;
}
#endif /* CONFIG_BCM47XX */
}
#endif /* CONFIG_BCMA_DRIVER_MIPS */

View File

@ -278,9 +278,12 @@ static void bcma_core_mips_nvram_init(struct bcma_drv_mips *mcore)
void bcma_core_mips_early_init(struct bcma_drv_mips *mcore)
{
struct bcma_bus *bus = mcore->core->bus;
if (mcore->early_setup_done)
return;
bcma_chipco_serial_init(&bus->drv_cc);
bcma_core_mips_nvram_init(mcore);
mcore->early_setup_done = true;

View File

@ -197,13 +197,13 @@ struct blkfront_info
/* Number of pages per ring buffer. */
unsigned int nr_ring_pages;
struct request_queue *rq;
unsigned int feature_flush;
unsigned int feature_fua;
unsigned int feature_flush:1;
unsigned int feature_fua:1;
unsigned int feature_discard:1;
unsigned int feature_secdiscard:1;
unsigned int feature_persistent:1;
unsigned int discard_granularity;
unsigned int discard_alignment;
unsigned int feature_persistent:1;
/* Number of 4KB segments handled */
unsigned int max_indirect_segments;
int is_ready;
@ -2223,7 +2223,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
}
else
grants = info->max_indirect_segments;
psegs = grants / GRANTS_PER_PSEG;
psegs = DIV_ROUND_UP(grants, GRANTS_PER_PSEG);
err = fill_grant_buffer(rinfo,
(grants + INDIRECT_GREFS(grants)) * BLK_RING_SIZE(info));
@ -2323,13 +2323,16 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
blkfront_setup_discard(info);
info->feature_persistent =
xenbus_read_unsigned(info->xbdev->otherend,
"feature-persistent", 0);
!!xenbus_read_unsigned(info->xbdev->otherend,
"feature-persistent", 0);
indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
"feature-max-indirect-segments", 0);
info->max_indirect_segments = min(indirect_segments,
xen_blkif_max_segments);
if (indirect_segments > xen_blkif_max_segments)
indirect_segments = xen_blkif_max_segments;
if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST)
indirect_segments = 0;
info->max_indirect_segments = indirect_segments;
}
/*
@ -2652,6 +2655,9 @@ static int __init xlblk_init(void)
if (!xen_domain())
return -ENODEV;
if (xen_blkif_max_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST)
xen_blkif_max_segments = BLKIF_MAX_SEGMENTS_PER_REQUEST;
if (xen_blkif_max_ring_order > XENBUS_MAX_RING_GRANT_ORDER) {
pr_info("Invalid max_ring_order (%d), will use default max: %d.\n",
xen_blkif_max_ring_order, XENBUS_MAX_RING_GRANT_ORDER);
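The blkfront hunks above lean on two small C details: feature_persistent becomes a 1-bit bitfield, so the value read from xenstore has to be collapsed to 0 or 1 with !! before the assignment, and the grants-to-segments conversion switches to DIV_ROUND_UP so a partial segment is not silently dropped. The following standalone userspace sketch illustrates both effects; the 1-bit flag and the 4-grants-per-segment ratio are invented for the example and this is not blkfront code.

/*
 * Illustration only: why the !! and DIV_ROUND_UP conversions matter.
 * The 1-bit flag and the grants-per-segment ratio are made up.
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define GRANTS_PER_PSEG 4	/* hypothetical ratio for the example */

struct features {
	unsigned int persistent:1;	/* mirrors feature_persistent:1 */
};

int main(void)
{
	struct features f;
	unsigned int xenstore_val = 2;	/* backend may report any nonzero value */

	f.persistent = xenstore_val;	/* 1-bit field keeps only bit 0 -> 0 */
	printf("plain assignment: %d\n", f.persistent);

	f.persistent = !!xenstore_val;	/* normalized to 0 or 1 -> 1 */
	printf("with !!:          %d\n", f.persistent);

	/* 11 grants need 3 segments, not the 2 that plain division gives */
	printf("11/%d = %u, DIV_ROUND_UP(11,%d) = %u\n", GRANTS_PER_PSEG,
	       11u / GRANTS_PER_PSEG, GRANTS_PER_PSEG,
	       DIV_ROUND_UP(11u, GRANTS_PER_PSEG));
	return 0;
}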

View File

@ -2005,7 +2005,8 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
limits = &performance_limits;
perf_limits = limits;
}
if (policy->max >= policy->cpuinfo.max_freq) {
if (policy->max >= policy->cpuinfo.max_freq &&
!limits->no_turbo) {
pr_debug("set performance\n");
intel_pstate_set_performance_limits(perf_limits);
goto out;
@ -2047,6 +2048,17 @@ static int intel_pstate_verify_policy(struct cpufreq_policy *policy)
policy->policy != CPUFREQ_POLICY_PERFORMANCE)
return -EINVAL;
/* When per-CPU limits are used, sysfs limits are not used */
if (!per_cpu_limits) {
unsigned int max_freq, min_freq;
max_freq = policy->cpuinfo.max_freq *
limits->max_sysfs_pct / 100;
min_freq = policy->cpuinfo.max_freq *
limits->min_sysfs_pct / 100;
cpufreq_verify_within_limits(policy, min_freq, max_freq);
}
return 0;
}
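The verify-policy hunk turns the global sysfs percentage limits into a frequency window and clamps the incoming policy against it whenever per-CPU limits are not in use. A quick arithmetic sketch with invented numbers (3.6 GHz cpuinfo.max_freq, 80%/20% sysfs limits); clamp_freq() below is only a stand-in for cpufreq_verify_within_limits().

#include <stdio.h>

/* stand-in for cpufreq_verify_within_limits(), illustration only */
static unsigned int clamp_freq(unsigned int val, unsigned int lo, unsigned int hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}

int main(void)
{
	unsigned int cpuinfo_max_freq = 3600000;	/* kHz, invented */
	unsigned int max_sysfs_pct = 80, min_sysfs_pct = 20;

	unsigned int max_freq = cpuinfo_max_freq * max_sysfs_pct / 100; /* 2880000 */
	unsigned int min_freq = cpuinfo_max_freq * min_sysfs_pct / 100; /*  720000 */

	/* a policy asking for the full 0..3600000 kHz range gets narrowed */
	printf("policy.min -> %u kHz\n", clamp_freq(0, min_freq, max_freq));
	printf("policy.max -> %u kHz\n", clamp_freq(3600000, min_freq, max_freq));
	return 0;
}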

View File

@ -153,6 +153,8 @@ struct cppi41_dd {
/* context for suspend/resume */
unsigned int dma_tdfdq;
bool is_suspended;
};
#define FIST_COMPLETION_QUEUE 93
@ -257,6 +259,10 @@ static struct cppi41_channel *desc_to_chan(struct cppi41_dd *cdd, u32 desc)
BUG_ON(desc_num >= ALLOC_DECS_NUM);
c = cdd->chan_busy[desc_num];
cdd->chan_busy[desc_num] = NULL;
/* Usecount for chan_busy[], paired with push_desc_queue() */
pm_runtime_put(cdd->ddev.dev);
return c;
}
@ -317,12 +323,12 @@ static irqreturn_t cppi41_irq(int irq, void *data)
while (val) {
u32 desc, len;
int error;
error = pm_runtime_get(cdd->ddev.dev);
if (error < 0)
dev_err(cdd->ddev.dev, "%s pm runtime get: %i\n",
__func__, error);
/*
* This should never trigger, see the comments in
* push_desc_queue()
*/
WARN_ON(cdd->is_suspended);
q_num = __fls(val);
val &= ~(1 << q_num);
@ -343,9 +349,6 @@ static irqreturn_t cppi41_irq(int irq, void *data)
c->residue = pd_trans_len(c->desc->pd6) - len;
dma_cookie_complete(&c->txd);
dmaengine_desc_get_callback_invoke(&c->txd, NULL);
pm_runtime_mark_last_busy(cdd->ddev.dev);
pm_runtime_put_autosuspend(cdd->ddev.dev);
}
}
return IRQ_HANDLED;
@ -447,6 +450,15 @@ static void push_desc_queue(struct cppi41_channel *c)
*/
__iowmb();
/*
* DMA transfers can take at least 200ms to complete with USB mass
* storage connected. To prevent autosuspend timeouts, we must use
* pm_runtime_get/put() when chan_busy[] is modified. This will get
* cleared in desc_to_chan() or cppi41_stop_chan() depending on the
* outcome of the transfer.
*/
pm_runtime_get(cdd->ddev.dev);
desc_phys = lower_32_bits(c->desc_phys);
desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc);
WARN_ON(cdd->chan_busy[desc_num]);
@ -457,20 +469,26 @@ static void push_desc_queue(struct cppi41_channel *c)
cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
}
static void pending_desc(struct cppi41_channel *c)
/*
* Caller must hold cdd->lock to prevent push_desc_queue()
* getting called out of order. We have both cppi41_dma_issue_pending()
* and cppi41_runtime_resume() call this function.
*/
static void cppi41_run_queue(struct cppi41_dd *cdd)
{
struct cppi41_dd *cdd = c->cdd;
unsigned long flags;
struct cppi41_channel *c, *_c;
spin_lock_irqsave(&cdd->lock, flags);
list_add_tail(&c->node, &cdd->pending);
spin_unlock_irqrestore(&cdd->lock, flags);
list_for_each_entry_safe(c, _c, &cdd->pending, node) {
push_desc_queue(c);
list_del(&c->node);
}
}
static void cppi41_dma_issue_pending(struct dma_chan *chan)
{
struct cppi41_channel *c = to_cpp41_chan(chan);
struct cppi41_dd *cdd = c->cdd;
unsigned long flags;
int error;
error = pm_runtime_get(cdd->ddev.dev);
@ -482,10 +500,11 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
return;
}
if (likely(pm_runtime_active(cdd->ddev.dev)))
push_desc_queue(c);
else
pending_desc(c);
spin_lock_irqsave(&cdd->lock, flags);
list_add_tail(&c->node, &cdd->pending);
if (!cdd->is_suspended)
cppi41_run_queue(cdd);
spin_unlock_irqrestore(&cdd->lock, flags);
pm_runtime_mark_last_busy(cdd->ddev.dev);
pm_runtime_put_autosuspend(cdd->ddev.dev);
@ -705,6 +724,9 @@ static int cppi41_stop_chan(struct dma_chan *chan)
WARN_ON(!cdd->chan_busy[desc_num]);
cdd->chan_busy[desc_num] = NULL;
/* Usecount for chan_busy[], paired with push_desc_queue() */
pm_runtime_put(cdd->ddev.dev);
return 0;
}
@ -1150,8 +1172,12 @@ static int __maybe_unused cppi41_resume(struct device *dev)
static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
{
struct cppi41_dd *cdd = dev_get_drvdata(dev);
unsigned long flags;
spin_lock_irqsave(&cdd->lock, flags);
cdd->is_suspended = true;
WARN_ON(!list_empty(&cdd->pending));
spin_unlock_irqrestore(&cdd->lock, flags);
return 0;
}
@ -1159,14 +1185,11 @@ static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
static int __maybe_unused cppi41_runtime_resume(struct device *dev)
{
struct cppi41_dd *cdd = dev_get_drvdata(dev);
struct cppi41_channel *c, *_c;
unsigned long flags;
spin_lock_irqsave(&cdd->lock, flags);
list_for_each_entry_safe(c, _c, &cdd->pending, node) {
push_desc_queue(c);
list_del(&c->node);
}
cdd->is_suspended = false;
cppi41_run_queue(cdd);
spin_unlock_irqrestore(&cdd->lock, flags);
return 0;
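Taken together, the cppi41 hunks replace the old pending/active split with a single rule: every descriptor is appended to cdd->pending under cdd->lock, and the queue is drained either immediately (device active) or from runtime resume (device was suspended). The sketch below is a userspace rendering of that pattern with invented names, a pthread mutex in place of the spinlock, and printf in place of the register write; it is not driver code.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_PENDING 8

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool is_suspended = true;
static int pending[MAX_PENDING];
static int npending;

static void run_queue(void)		/* caller holds the lock */
{
	for (int i = 0; i < npending; i++)
		printf("pushed descriptor %d\n", pending[i]);
	npending = 0;
}

static void issue_pending(int desc)
{
	pthread_mutex_lock(&lock);
	pending[npending++] = desc;	/* always queue first */
	if (!is_suspended)		/* device awake: push immediately */
		run_queue();
	pthread_mutex_unlock(&lock);
}

static void runtime_resume(void)
{
	pthread_mutex_lock(&lock);
	is_suspended = false;
	run_queue();			/* drain work queued while suspended */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	issue_pending(1);		/* queued, device still suspended */
	issue_pending(2);		/* queued as well */
	runtime_resume();		/* both descriptors pushed here */
	issue_pending(3);		/* pushed immediately */
	return 0;
}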

View File

@ -1699,7 +1699,6 @@ static bool _chan_ns(const struct pl330_dmac *pl330, int i)
static struct pl330_thread *pl330_request_channel(struct pl330_dmac *pl330)
{
struct pl330_thread *thrd = NULL;
unsigned long flags;
int chans, i;
if (pl330->state == DYING)
@ -1707,8 +1706,6 @@ static struct pl330_thread *pl330_request_channel(struct pl330_dmac *pl330)
chans = pl330->pcfg.num_chan;
spin_lock_irqsave(&pl330->lock, flags);
for (i = 0; i < chans; i++) {
thrd = &pl330->channels[i];
if ((thrd->free) && (!_manager_ns(thrd) ||
@ -1726,8 +1723,6 @@ static struct pl330_thread *pl330_request_channel(struct pl330_dmac *pl330)
thrd = NULL;
}
spin_unlock_irqrestore(&pl330->lock, flags);
return thrd;
}
@ -1745,7 +1740,6 @@ static inline void _free_event(struct pl330_thread *thrd, int ev)
static void pl330_release_channel(struct pl330_thread *thrd)
{
struct pl330_dmac *pl330;
unsigned long flags;
if (!thrd || thrd->free)
return;
@ -1757,10 +1751,8 @@ static void pl330_release_channel(struct pl330_thread *thrd)
pl330 = thrd->dmac;
spin_lock_irqsave(&pl330->lock, flags);
_free_event(thrd, thrd->ev);
thrd->free = true;
spin_unlock_irqrestore(&pl330->lock, flags);
}
/* Initialize the structure for PL330 configuration, that can be used
@ -2122,20 +2114,20 @@ static int pl330_alloc_chan_resources(struct dma_chan *chan)
struct pl330_dmac *pl330 = pch->dmac;
unsigned long flags;
spin_lock_irqsave(&pch->lock, flags);
spin_lock_irqsave(&pl330->lock, flags);
dma_cookie_init(chan);
pch->cyclic = false;
pch->thread = pl330_request_channel(pl330);
if (!pch->thread) {
spin_unlock_irqrestore(&pch->lock, flags);
spin_unlock_irqrestore(&pl330->lock, flags);
return -ENOMEM;
}
tasklet_init(&pch->task, pl330_tasklet, (unsigned long) pch);
spin_unlock_irqrestore(&pch->lock, flags);
spin_unlock_irqrestore(&pl330->lock, flags);
return 1;
}
@ -2238,12 +2230,13 @@ static int pl330_pause(struct dma_chan *chan)
static void pl330_free_chan_resources(struct dma_chan *chan)
{
struct dma_pl330_chan *pch = to_pchan(chan);
struct pl330_dmac *pl330 = pch->dmac;
unsigned long flags;
tasklet_kill(&pch->task);
pm_runtime_get_sync(pch->dmac->ddma.dev);
spin_lock_irqsave(&pch->lock, flags);
spin_lock_irqsave(&pl330->lock, flags);
pl330_release_channel(pch->thread);
pch->thread = NULL;
@ -2251,7 +2244,7 @@ static void pl330_free_chan_resources(struct dma_chan *chan)
if (pch->cyclic)
list_splice_tail_init(&pch->work_list, &pch->dmac->desc_pool);
spin_unlock_irqrestore(&pch->lock, flags);
spin_unlock_irqrestore(&pl330->lock, flags);
pm_runtime_mark_last_busy(pch->dmac->ddma.dev);
pm_runtime_put_autosuspend(pch->dmac->ddma.dev);
}

View File

@ -187,6 +187,7 @@ static efi_status_t update_fdt_memmap(void *fdt, struct efi_boot_memmap *map)
struct exit_boot_struct {
efi_memory_desc_t *runtime_map;
int *runtime_entry_count;
void *new_fdt_addr;
};
static efi_status_t exit_boot_func(efi_system_table_t *sys_table_arg,
@ -202,7 +203,7 @@ static efi_status_t exit_boot_func(efi_system_table_t *sys_table_arg,
efi_get_virtmap(*map->map, *map->map_size, *map->desc_size,
p->runtime_map, p->runtime_entry_count);
return EFI_SUCCESS;
return update_fdt_memmap(p->new_fdt_addr, map);
}
/*
@ -300,22 +301,13 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
priv.runtime_map = runtime_map;
priv.runtime_entry_count = &runtime_entry_count;
priv.new_fdt_addr = (void *)*new_fdt_addr;
status = efi_exit_boot_services(sys_table, handle, &map, &priv,
exit_boot_func);
if (status == EFI_SUCCESS) {
efi_set_virtual_address_map_t *svam;
status = update_fdt_memmap((void *)*new_fdt_addr, &map);
if (status != EFI_SUCCESS) {
/*
* The kernel won't get far without the memory map, but
* may still be able to print something meaningful so
* return success here.
*/
return EFI_SUCCESS;
}
/* Install the new virtual address map */
svam = sys_table->runtime->set_virtual_address_map;
status = svam(runtime_entry_count * desc_size, desc_size,

View File

@ -1723,7 +1723,7 @@ static void gpiochip_irqchip_remove(struct gpio_chip *gpiochip)
}
/**
* _gpiochip_irqchip_add() - adds an irqchip to a gpiochip
* gpiochip_irqchip_add_key() - adds an irqchip to a gpiochip
* @gpiochip: the gpiochip to add the irqchip to
* @irqchip: the irqchip to add to the gpiochip
* @first_irq: if not dynamically assigned, the base (first) IRQ to
@ -1749,13 +1749,13 @@ static void gpiochip_irqchip_remove(struct gpio_chip *gpiochip)
* the pins on the gpiochip can generate a unique IRQ. Everything else
* need to be open coded.
*/
int _gpiochip_irqchip_add(struct gpio_chip *gpiochip,
struct irq_chip *irqchip,
unsigned int first_irq,
irq_flow_handler_t handler,
unsigned int type,
bool nested,
struct lock_class_key *lock_key)
int gpiochip_irqchip_add_key(struct gpio_chip *gpiochip,
struct irq_chip *irqchip,
unsigned int first_irq,
irq_flow_handler_t handler,
unsigned int type,
bool nested,
struct lock_class_key *lock_key)
{
struct device_node *of_node;
bool irq_base_set = false;
@ -1840,7 +1840,7 @@ int _gpiochip_irqchip_add(struct gpio_chip *gpiochip,
return 0;
}
EXPORT_SYMBOL_GPL(_gpiochip_irqchip_add);
EXPORT_SYMBOL_GPL(gpiochip_irqchip_add_key);
#else /* CONFIG_GPIOLIB_IRQCHIP */

View File

@ -83,6 +83,13 @@ int amdgpu_cs_get_ring(struct amdgpu_device *adev, u32 ip_type,
}
break;
}
if (!(*out_ring && (*out_ring)->adev)) {
DRM_ERROR("Ring %d is not initialized on IP %d\n",
ring, ip_type);
return -EINVAL;
}
return 0;
}

View File

@ -2512,6 +2512,8 @@ static int dce_v10_0_cursor_move_locked(struct drm_crtc *crtc,
WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y);
WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
((amdgpu_crtc->cursor_width - 1) << 16) | (amdgpu_crtc->cursor_height - 1));
return 0;
}
@ -2537,7 +2539,6 @@ static int dce_v10_0_crtc_cursor_set2(struct drm_crtc *crtc,
int32_t hot_y)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
struct drm_gem_object *obj;
struct amdgpu_bo *aobj;
int ret;
@ -2578,7 +2579,9 @@ static int dce_v10_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v10_0_lock_cursor(crtc, true);
if (hot_x != amdgpu_crtc->cursor_hot_x ||
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height ||
hot_x != amdgpu_crtc->cursor_hot_x ||
hot_y != amdgpu_crtc->cursor_hot_y) {
int x, y;
@ -2587,16 +2590,10 @@ static int dce_v10_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v10_0_cursor_move_locked(crtc, x, y);
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height) {
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(width - 1) << 16 | (height - 1));
amdgpu_crtc->cursor_width = width;
amdgpu_crtc->cursor_height = height;
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
dce_v10_0_show_cursor(crtc);
@ -2620,7 +2617,6 @@ unpin:
static void dce_v10_0_cursor_reset(struct drm_crtc *crtc)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
if (amdgpu_crtc->cursor_bo) {
dce_v10_0_lock_cursor(crtc, true);
@ -2628,10 +2624,6 @@ static void dce_v10_0_cursor_reset(struct drm_crtc *crtc)
dce_v10_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x,
amdgpu_crtc->cursor_y);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->cursor_width - 1) << 16 |
(amdgpu_crtc->cursor_height - 1));
dce_v10_0_show_cursor(crtc);
dce_v10_0_lock_cursor(crtc, false);
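This cursor rework (repeated for the DCE v11/v6/v8 variants below) moves the CUR_SIZE write into the locked move path, so the register is reprogrammed whenever width, height or hotspot change. The register simply packs width-1 into the high 16 bits and height-1 into the low 16 bits; a tiny check of that encoding with an invented 64x64 cursor:

#include <stdio.h>

int main(void)
{
	unsigned int width = 64, height = 64;	/* invented cursor size */
	unsigned int cur_size = ((width - 1) << 16) | (height - 1);

	printf("CUR_SIZE = 0x%08x (decodes back to w=%u h=%u)\n", cur_size,
	       (cur_size >> 16) + 1, (cur_size & 0xffff) + 1);
	return 0;
}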

View File

@ -2532,6 +2532,8 @@ static int dce_v11_0_cursor_move_locked(struct drm_crtc *crtc,
WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y);
WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
((amdgpu_crtc->cursor_width - 1) << 16) | (amdgpu_crtc->cursor_height - 1));
return 0;
}
@ -2557,7 +2559,6 @@ static int dce_v11_0_crtc_cursor_set2(struct drm_crtc *crtc,
int32_t hot_y)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
struct drm_gem_object *obj;
struct amdgpu_bo *aobj;
int ret;
@ -2598,7 +2599,9 @@ static int dce_v11_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v11_0_lock_cursor(crtc, true);
if (hot_x != amdgpu_crtc->cursor_hot_x ||
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height ||
hot_x != amdgpu_crtc->cursor_hot_x ||
hot_y != amdgpu_crtc->cursor_hot_y) {
int x, y;
@ -2607,16 +2610,10 @@ static int dce_v11_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v11_0_cursor_move_locked(crtc, x, y);
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height) {
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(width - 1) << 16 | (height - 1));
amdgpu_crtc->cursor_width = width;
amdgpu_crtc->cursor_height = height;
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
dce_v11_0_show_cursor(crtc);
@ -2640,7 +2637,6 @@ unpin:
static void dce_v11_0_cursor_reset(struct drm_crtc *crtc)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
if (amdgpu_crtc->cursor_bo) {
dce_v11_0_lock_cursor(crtc, true);
@ -2648,10 +2644,6 @@ static void dce_v11_0_cursor_reset(struct drm_crtc *crtc)
dce_v11_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x,
amdgpu_crtc->cursor_y);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->cursor_width - 1) << 16 |
(amdgpu_crtc->cursor_height - 1));
dce_v11_0_show_cursor(crtc);
dce_v11_0_lock_cursor(crtc, false);

View File

@ -1859,6 +1859,8 @@ static int dce_v6_0_cursor_move_locked(struct drm_crtc *crtc,
struct amdgpu_device *adev = crtc->dev->dev_private;
int xorigin = 0, yorigin = 0;
int w = amdgpu_crtc->cursor_width;
amdgpu_crtc->cursor_x = x;
amdgpu_crtc->cursor_y = y;
@ -1878,6 +1880,8 @@ static int dce_v6_0_cursor_move_locked(struct drm_crtc *crtc,
WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y);
WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
((w - 1) << 16) | (amdgpu_crtc->cursor_height - 1));
return 0;
}
@ -1903,7 +1907,6 @@ static int dce_v6_0_crtc_cursor_set2(struct drm_crtc *crtc,
int32_t hot_y)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
struct drm_gem_object *obj;
struct amdgpu_bo *aobj;
int ret;
@ -1944,7 +1947,9 @@ static int dce_v6_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v6_0_lock_cursor(crtc, true);
if (hot_x != amdgpu_crtc->cursor_hot_x ||
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height ||
hot_x != amdgpu_crtc->cursor_hot_x ||
hot_y != amdgpu_crtc->cursor_hot_y) {
int x, y;
@ -1953,16 +1958,10 @@ static int dce_v6_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v6_0_cursor_move_locked(crtc, x, y);
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height) {
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(width - 1) << 16 | (height - 1));
amdgpu_crtc->cursor_width = width;
amdgpu_crtc->cursor_height = height;
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
dce_v6_0_show_cursor(crtc);
@ -1986,7 +1985,6 @@ unpin:
static void dce_v6_0_cursor_reset(struct drm_crtc *crtc)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
if (amdgpu_crtc->cursor_bo) {
dce_v6_0_lock_cursor(crtc, true);
@ -1994,10 +1992,6 @@ static void dce_v6_0_cursor_reset(struct drm_crtc *crtc)
dce_v6_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x,
amdgpu_crtc->cursor_y);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->cursor_width - 1) << 16 |
(amdgpu_crtc->cursor_height - 1));
dce_v6_0_show_cursor(crtc);
dce_v6_0_lock_cursor(crtc, false);
}

View File

@ -2363,6 +2363,8 @@ static int dce_v8_0_cursor_move_locked(struct drm_crtc *crtc,
WREG32(mmCUR_POSITION + amdgpu_crtc->crtc_offset, (x << 16) | y);
WREG32(mmCUR_HOT_SPOT + amdgpu_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
((amdgpu_crtc->cursor_width - 1) << 16) | (amdgpu_crtc->cursor_height - 1));
return 0;
}
@ -2388,7 +2390,6 @@ static int dce_v8_0_crtc_cursor_set2(struct drm_crtc *crtc,
int32_t hot_y)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
struct drm_gem_object *obj;
struct amdgpu_bo *aobj;
int ret;
@ -2429,7 +2430,9 @@ static int dce_v8_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v8_0_lock_cursor(crtc, true);
if (hot_x != amdgpu_crtc->cursor_hot_x ||
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height ||
hot_x != amdgpu_crtc->cursor_hot_x ||
hot_y != amdgpu_crtc->cursor_hot_y) {
int x, y;
@ -2438,16 +2441,10 @@ static int dce_v8_0_crtc_cursor_set2(struct drm_crtc *crtc,
dce_v8_0_cursor_move_locked(crtc, x, y);
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
if (width != amdgpu_crtc->cursor_width ||
height != amdgpu_crtc->cursor_height) {
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(width - 1) << 16 | (height - 1));
amdgpu_crtc->cursor_width = width;
amdgpu_crtc->cursor_height = height;
amdgpu_crtc->cursor_hot_x = hot_x;
amdgpu_crtc->cursor_hot_y = hot_y;
}
dce_v8_0_show_cursor(crtc);
@ -2471,7 +2468,6 @@ unpin:
static void dce_v8_0_cursor_reset(struct drm_crtc *crtc)
{
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = crtc->dev->dev_private;
if (amdgpu_crtc->cursor_bo) {
dce_v8_0_lock_cursor(crtc, true);
@ -2479,10 +2475,6 @@ static void dce_v8_0_cursor_reset(struct drm_crtc *crtc)
dce_v8_0_cursor_move_locked(crtc, amdgpu_crtc->cursor_x,
amdgpu_crtc->cursor_y);
WREG32(mmCUR_SIZE + amdgpu_crtc->crtc_offset,
(amdgpu_crtc->cursor_width - 1) << 16 |
(amdgpu_crtc->cursor_height - 1));
dce_v8_0_show_cursor(crtc);
dce_v8_0_lock_cursor(crtc, false);

View File

@ -627,11 +627,8 @@ static const struct drm_encoder_helper_funcs dce_virtual_encoder_helper_funcs =
static void dce_virtual_encoder_destroy(struct drm_encoder *encoder)
{
struct amdgpu_encoder *amdgpu_encoder = to_amdgpu_encoder(encoder);
kfree(amdgpu_encoder->enc_priv);
drm_encoder_cleanup(encoder);
kfree(amdgpu_encoder);
kfree(encoder);
}
static const struct drm_encoder_funcs dce_virtual_encoder_funcs = {

View File

@ -44,6 +44,7 @@ MODULE_FIRMWARE("radeon/tahiti_mc.bin");
MODULE_FIRMWARE("radeon/pitcairn_mc.bin");
MODULE_FIRMWARE("radeon/verde_mc.bin");
MODULE_FIRMWARE("radeon/oland_mc.bin");
MODULE_FIRMWARE("radeon/si58_mc.bin");
#define MC_SEQ_MISC0__MT__MASK 0xf0000000
#define MC_SEQ_MISC0__MT__GDDR1 0x10000000
@ -113,6 +114,7 @@ static int gmc_v6_0_init_microcode(struct amdgpu_device *adev)
const char *chip_name;
char fw_name[30];
int err;
bool is_58_fw = false;
DRM_DEBUG("\n");
@ -135,7 +137,14 @@ static int gmc_v6_0_init_microcode(struct amdgpu_device *adev)
default: BUG();
}
snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", chip_name);
/* this memory configuration requires special firmware */
if (((RREG32(mmMC_SEQ_MISC0) & 0xff000000) >> 24) == 0x58)
is_58_fw = true;
if (is_58_fw)
snprintf(fw_name, sizeof(fw_name), "radeon/si58_mc.bin");
else
snprintf(fw_name, sizeof(fw_name), "radeon/%s_mc.bin", chip_name);
err = request_firmware(&adev->mc.fw, fw_name, adev->dev);
if (err)
goto out;
@ -245,6 +254,9 @@ static void gmc_v6_0_mc_program(struct amdgpu_device *adev)
}
WREG32(mmHDP_REG_COHERENCY_FLUSH_CNTL, 0);
if (adev->mode_info.num_crtc)
amdgpu_display_set_vga_render_state(adev, false);
gmc_v6_0_mc_stop(adev, &save);
if (gmc_v6_0_wait_for_idle((void *)adev)) {
@ -274,7 +286,6 @@ static void gmc_v6_0_mc_program(struct amdgpu_device *adev)
dev_warn(adev->dev, "Wait for MC idle timedout !\n");
}
gmc_v6_0_mc_resume(adev, &save);
amdgpu_display_set_vga_render_state(adev, false);
}
static int gmc_v6_0_mc_init(struct amdgpu_device *adev)
@ -463,19 +474,11 @@ static int gmc_v6_0_gart_enable(struct amdgpu_device *adev)
WREG32(mmVM_CONTEXT1_CNTL,
VM_CONTEXT1_CNTL__ENABLE_CONTEXT_MASK |
(1UL << VM_CONTEXT1_CNTL__PAGE_TABLE_DEPTH__SHIFT) |
((amdgpu_vm_block_size - 9) << VM_CONTEXT1_CNTL__PAGE_TABLE_BLOCK_SIZE__SHIFT) |
VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_DEFAULT_MASK |
VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT_MASK |
VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_DEFAULT_MASK |
VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_DEFAULT_MASK |
VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_DEFAULT_MASK |
VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_DEFAULT_MASK);
((amdgpu_vm_block_size - 9) << VM_CONTEXT1_CNTL__PAGE_TABLE_BLOCK_SIZE__SHIFT));
if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
gmc_v6_0_set_fault_enable_default(adev, false);
else
gmc_v6_0_set_fault_enable_default(adev, true);
gmc_v6_0_gart_flush_gpu_tlb(adev, 0);
dev_info(adev->dev, "PCIE GART of %uM enabled (table at 0x%016llX).\n",
@ -754,7 +757,10 @@ static int gmc_v6_0_late_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS)
return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
else
return 0;
}
static int gmc_v6_0_sw_init(void *handle)

View File

@ -64,6 +64,7 @@ MODULE_FIRMWARE("radeon/oland_smc.bin");
MODULE_FIRMWARE("radeon/oland_k_smc.bin");
MODULE_FIRMWARE("radeon/hainan_smc.bin");
MODULE_FIRMWARE("radeon/hainan_k_smc.bin");
MODULE_FIRMWARE("radeon/banks_k_2_smc.bin");
union power_info {
struct _ATOM_POWERPLAY_INFO info;
@ -3487,17 +3488,6 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
(adev->pdev->device == 0x6817) ||
(adev->pdev->device == 0x6806))
max_mclk = 120000;
} else if (adev->asic_type == CHIP_OLAND) {
if ((adev->pdev->revision == 0xC7) ||
(adev->pdev->revision == 0x80) ||
(adev->pdev->revision == 0x81) ||
(adev->pdev->revision == 0x83) ||
(adev->pdev->revision == 0x87) ||
(adev->pdev->device == 0x6604) ||
(adev->pdev->device == 0x6605)) {
max_sclk = 75000;
max_mclk = 80000;
}
} else if (adev->asic_type == CHIP_HAINAN) {
if ((adev->pdev->revision == 0x81) ||
(adev->pdev->revision == 0x83) ||
@ -3506,7 +3496,6 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
(adev->pdev->device == 0x6665) ||
(adev->pdev->device == 0x6667)) {
max_sclk = 75000;
max_mclk = 80000;
}
}
/* Apply dpm quirks */
@ -7713,10 +7702,11 @@ static int si_dpm_init_microcode(struct amdgpu_device *adev)
((adev->pdev->device == 0x6660) ||
(adev->pdev->device == 0x6663) ||
(adev->pdev->device == 0x6665) ||
(adev->pdev->device == 0x6667))) ||
((adev->pdev->revision == 0xc3) &&
(adev->pdev->device == 0x6665)))
(adev->pdev->device == 0x6667))))
chip_name = "hainan_k";
else if ((adev->pdev->revision == 0xc3) &&
(adev->pdev->device == 0x6665))
chip_name = "banks_k_2";
else
chip_name = "hainan";
break;

View File

@ -40,13 +40,14 @@
#include "smu/smu_7_0_1_sh_mask.h"
static void uvd_v4_2_mc_resume(struct amdgpu_device *adev);
static void uvd_v4_2_init_cg(struct amdgpu_device *adev);
static void uvd_v4_2_set_ring_funcs(struct amdgpu_device *adev);
static void uvd_v4_2_set_irq_funcs(struct amdgpu_device *adev);
static int uvd_v4_2_start(struct amdgpu_device *adev);
static void uvd_v4_2_stop(struct amdgpu_device *adev);
static int uvd_v4_2_set_clockgating_state(void *handle,
enum amd_clockgating_state state);
static void uvd_v4_2_set_dcm(struct amdgpu_device *adev,
bool sw_mode);
/**
* uvd_v4_2_ring_get_rptr - get read pointer
*
@ -140,7 +141,8 @@ static int uvd_v4_2_sw_fini(void *handle)
return r;
}
static void uvd_v4_2_enable_mgcg(struct amdgpu_device *adev,
bool enable);
/**
* uvd_v4_2_hw_init - start and test UVD block
*
@ -155,8 +157,7 @@ static int uvd_v4_2_hw_init(void *handle)
uint32_t tmp;
int r;
uvd_v4_2_init_cg(adev);
uvd_v4_2_set_clockgating_state(adev, AMD_CG_STATE_GATE);
uvd_v4_2_enable_mgcg(adev, true);
amdgpu_asic_set_uvd_clocks(adev, 10000, 10000);
r = uvd_v4_2_start(adev);
if (r)
@ -266,11 +267,13 @@ static int uvd_v4_2_start(struct amdgpu_device *adev)
struct amdgpu_ring *ring = &adev->uvd.ring;
uint32_t rb_bufsz;
int i, j, r;
/* disable byte swapping */
u32 lmi_swap_cntl = 0;
u32 mp_swap_cntl = 0;
WREG32(mmUVD_CGC_GATE, 0);
uvd_v4_2_set_dcm(adev, true);
uvd_v4_2_mc_resume(adev);
/* disable interrupt */

@ -406,6 +409,8 @@ static void uvd_v4_2_stop(struct amdgpu_device *adev)
/* Unstall UMC and register bus */
WREG32_P(mmUVD_LMI_CTRL2, 0, ~(1 << 8));
uvd_v4_2_set_dcm(adev, false);
}
/**
@ -619,19 +624,6 @@ static void uvd_v4_2_set_dcm(struct amdgpu_device *adev,
WREG32_UVD_CTX(ixUVD_CGC_CTRL2, tmp2);
}
static void uvd_v4_2_init_cg(struct amdgpu_device *adev)
{
bool hw_mode = true;
if (hw_mode) {
uvd_v4_2_set_dcm(adev, false);
} else {
u32 tmp = RREG32(mmUVD_CGC_CTRL);
tmp &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
WREG32(mmUVD_CGC_CTRL, tmp);
}
}
static bool uvd_v4_2_is_idle(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@ -685,17 +677,6 @@ static int uvd_v4_2_process_interrupt(struct amdgpu_device *adev,
static int uvd_v4_2_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
bool gate = false;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
return 0;
if (state == AMD_CG_STATE_GATE)
gate = true;
uvd_v4_2_enable_mgcg(adev, gate);
return 0;
}
@ -711,9 +692,6 @@ static int uvd_v4_2_set_powergating_state(void *handle,
*/
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (!(adev->pg_flags & AMD_PG_SUPPORT_UVD))
return 0;
if (state == AMD_PG_STATE_GATE) {
uvd_v4_2_stop(adev);
return 0;

View File

@ -43,9 +43,13 @@
#define GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT 0x04
#define GRBM_GFX_INDEX__VCE_INSTANCE_MASK 0x10
#define GRBM_GFX_INDEX__VCE_ALL_PIPE 0x07
#define mmVCE_LMI_VCPU_CACHE_40BIT_BAR0 0x8616
#define mmVCE_LMI_VCPU_CACHE_40BIT_BAR1 0x8617
#define mmVCE_LMI_VCPU_CACHE_40BIT_BAR2 0x8618
#define mmGRBM_GFX_INDEX_DEFAULT 0xE0000000
#define VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK 0x02
#define VCE_V3_0_FW_SIZE (384 * 1024)
@ -54,6 +58,9 @@
#define FW_52_8_3 ((52 << 24) | (8 << 16) | (3 << 8))
#define GET_VCE_INSTANCE(i) ((i) << GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT \
| GRBM_GFX_INDEX__VCE_ALL_PIPE)
static void vce_v3_0_mc_resume(struct amdgpu_device *adev, int idx);
static void vce_v3_0_set_ring_funcs(struct amdgpu_device *adev);
static void vce_v3_0_set_irq_funcs(struct amdgpu_device *adev);
@ -175,7 +182,7 @@ static void vce_v3_0_set_vce_sw_clock_gating(struct amdgpu_device *adev,
WREG32(mmVCE_UENC_CLOCK_GATING_2, data);
data = RREG32(mmVCE_UENC_REG_CLOCK_GATING);
data &= ~0xffc00000;
data &= ~0x3ff;
WREG32(mmVCE_UENC_REG_CLOCK_GATING, data);
data = RREG32(mmVCE_UENC_DMA_DCLK_CTRL);
@ -249,7 +256,7 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
if (adev->vce.harvest_config & (1 << idx))
continue;
WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, idx);
WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(idx));
vce_v3_0_mc_resume(adev, idx);
WREG32_FIELD(VCE_STATUS, JOB_BUSY, 1);
@ -273,7 +280,7 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
}
}
WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0);
WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
mutex_unlock(&adev->grbm_idx_mutex);
return 0;
@ -288,7 +295,7 @@ static int vce_v3_0_stop(struct amdgpu_device *adev)
if (adev->vce.harvest_config & (1 << idx))
continue;
WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, idx);
WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(idx));
if (adev->asic_type >= CHIP_STONEY)
WREG32_P(mmVCE_VCPU_CNTL, 0, ~0x200001);
@ -306,7 +313,7 @@ static int vce_v3_0_stop(struct amdgpu_device *adev)
vce_v3_0_set_vce_sw_clock_gating(adev, false);
}
WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0);
WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
mutex_unlock(&adev->grbm_idx_mutex);
return 0;
@ -586,17 +593,17 @@ static bool vce_v3_0_check_soft_reset(void *handle)
* VCE team suggest use bit 3--bit 6 for busy status check
*/
mutex_lock(&adev->grbm_idx_mutex);
WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0);
WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0));
if (RREG32(mmVCE_STATUS) & AMDGPU_VCE_STATUS_BUSY_MASK) {
srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE0, 1);
srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE1, 1);
}
WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0x10);
WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1));
if (RREG32(mmVCE_STATUS) & AMDGPU_VCE_STATUS_BUSY_MASK) {
srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE0, 1);
srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE1, 1);
}
WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0);
WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0));
mutex_unlock(&adev->grbm_idx_mutex);
if (srbm_soft_reset) {
@ -734,7 +741,7 @@ static int vce_v3_0_set_clockgating_state(void *handle,
if (adev->vce.harvest_config & (1 << i))
continue;
WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, i);
WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(i));
if (enable) {
/* initialize VCE_CLOCK_GATING_A: Clock ON/OFF delay */
@ -753,7 +760,7 @@ static int vce_v3_0_set_clockgating_state(void *handle,
vce_v3_0_set_vce_sw_clock_gating(adev, enable);
}
WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0);
WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
mutex_unlock(&adev->grbm_idx_mutex);
return 0;
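The VCE v3 hunks stop using WREG32_FIELD() for GRBM_GFX_INDEX and instead write the register through the new GET_VCE_INSTANCE() macro, which combines the instance index with the "all pipes" bits. With the constants from the patch the resulting register values are easy to check (illustration only):

#include <stdio.h>

#define GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT	0x04
#define GRBM_GFX_INDEX__VCE_ALL_PIPE		0x07
#define GET_VCE_INSTANCE(i)	(((i) << GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT) \
				 | GRBM_GFX_INDEX__VCE_ALL_PIPE)

int main(void)
{
	printf("GET_VCE_INSTANCE(0) = 0x%02x\n", GET_VCE_INSTANCE(0)); /* 0x07 */
	printf("GET_VCE_INSTANCE(1) = 0x%02x\n", GET_VCE_INSTANCE(1)); /* 0x17 */
	return 0;
}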

View File

@ -200,7 +200,7 @@ int cz_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
cgs_set_clockgating_state(
hwmgr->device,
AMD_IP_BLOCK_TYPE_VCE,
AMD_CG_STATE_UNGATE);
AMD_CG_STATE_GATE);
cgs_set_powergating_state(
hwmgr->device,
AMD_IP_BLOCK_TYPE_VCE,
@ -218,7 +218,7 @@ int cz_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
cgs_set_clockgating_state(
hwmgr->device,
AMD_IP_BLOCK_TYPE_VCE,
AMD_PG_STATE_GATE);
AMD_PG_STATE_UNGATE);
cz_dpm_update_vce_dpm(hwmgr);
cz_enable_disable_vce_dpm(hwmgr, true);
return 0;

View File

@ -1402,14 +1402,22 @@ int cz_dpm_update_vce_dpm(struct pp_hwmgr *hwmgr)
cz_hwmgr->vce_dpm.hard_min_clk,
PPSMC_MSG_SetEclkHardMin));
} else {
/*EPR# 419220 -HW limitation to to */
cz_hwmgr->vce_dpm.hard_min_clk = hwmgr->vce_arbiter.ecclk;
smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
PPSMC_MSG_SetEclkHardMin,
cz_get_eclk_level(hwmgr,
cz_hwmgr->vce_dpm.hard_min_clk,
PPSMC_MSG_SetEclkHardMin));
/*Program HardMin based on the vce_arbiter.ecclk */
if (hwmgr->vce_arbiter.ecclk == 0) {
smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
PPSMC_MSG_SetEclkHardMin, 0);
/* disable ECLK DPM 0. Otherwise VCE could hang if
* switching SCLK from DPM 0 to 6/7 */
smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
PPSMC_MSG_SetEclkSoftMin, 1);
} else {
cz_hwmgr->vce_dpm.hard_min_clk = hwmgr->vce_arbiter.ecclk;
smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
PPSMC_MSG_SetEclkHardMin,
cz_get_eclk_level(hwmgr,
cz_hwmgr->vce_dpm.hard_min_clk,
PPSMC_MSG_SetEclkHardMin));
}
}
return 0;
}

View File

@ -113,6 +113,7 @@ struct ast_private {
struct ttm_bo_kmap_obj cache_kmap;
int next_cursor;
bool support_wide_screen;
bool DisableP2A;
enum ast_tx_chip tx_chip_type;
u8 dp501_maxclk;

View File

@ -124,6 +124,12 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
} else
*need_post = false;
/* Check P2A Access */
ast->DisableP2A = true;
data = ast_read32(ast, 0xf004);
if (data != 0xFFFFFFFF)
ast->DisableP2A = false;
/* Check if we support wide screen */
switch (ast->chip) {
case AST1180:
@ -140,15 +146,17 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
ast->support_wide_screen = true;
else {
ast->support_wide_screen = false;
/* Read SCU7c (silicon revision register) */
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
data = ast_read32(ast, 0x1207c);
data &= 0x300;
if (ast->chip == AST2300 && data == 0x0) /* ast1300 */
ast->support_wide_screen = true;
if (ast->chip == AST2400 && data == 0x100) /* ast1400 */
ast->support_wide_screen = true;
if (ast->DisableP2A == false) {
/* Read SCU7c (silicon revision register) */
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
data = ast_read32(ast, 0x1207c);
data &= 0x300;
if (ast->chip == AST2300 && data == 0x0) /* ast1300 */
ast->support_wide_screen = true;
if (ast->chip == AST2400 && data == 0x100) /* ast1400 */
ast->support_wide_screen = true;
}
}
break;
}
@ -216,80 +224,81 @@ static int ast_get_dram_info(struct drm_device *dev)
uint32_t data, data2;
uint32_t denum, num, div, ref_pll;
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
ast_write32(ast, 0x10000, 0xfc600309);
do {
if (pci_channel_offline(dev->pdev))
return -EIO;
} while (ast_read32(ast, 0x10000) != 0x01);
data = ast_read32(ast, 0x10004);
if (data & 0x40)
if (ast->DisableP2A)
{
ast->dram_bus_width = 16;
ast->dram_type = AST_DRAM_1Gx16;
ast->mclk = 396;
}
else
ast->dram_bus_width = 32;
{
ast_write32(ast, 0xf004, 0x1e6e0000);
ast_write32(ast, 0xf000, 0x1);
data = ast_read32(ast, 0x10004);
if (ast->chip == AST2300 || ast->chip == AST2400) {
switch (data & 0x03) {
case 0:
ast->dram_type = AST_DRAM_512Mx16;
break;
default:
case 1:
ast->dram_type = AST_DRAM_1Gx16;
if (data & 0x40)
ast->dram_bus_width = 16;
else
ast->dram_bus_width = 32;
if (ast->chip == AST2300 || ast->chip == AST2400) {
switch (data & 0x03) {
case 0:
ast->dram_type = AST_DRAM_512Mx16;
break;
default:
case 1:
ast->dram_type = AST_DRAM_1Gx16;
break;
case 2:
ast->dram_type = AST_DRAM_2Gx16;
break;
case 3:
ast->dram_type = AST_DRAM_4Gx16;
break;
}
} else {
switch (data & 0x0c) {
case 0:
case 4:
ast->dram_type = AST_DRAM_512Mx16;
break;
case 8:
if (data & 0x40)
ast->dram_type = AST_DRAM_1Gx16;
else
ast->dram_type = AST_DRAM_512Mx32;
break;
case 0xc:
ast->dram_type = AST_DRAM_1Gx32;
break;
}
}
data = ast_read32(ast, 0x10120);
data2 = ast_read32(ast, 0x10170);
if (data2 & 0x2000)
ref_pll = 14318;
else
ref_pll = 12000;
denum = data & 0x1f;
num = (data & 0x3fe0) >> 5;
data = (data & 0xc000) >> 14;
switch (data) {
case 3:
div = 0x4;
break;
case 2:
ast->dram_type = AST_DRAM_2Gx16;
case 1:
div = 0x2;
break;
case 3:
ast->dram_type = AST_DRAM_4Gx16;
break;
}
} else {
switch (data & 0x0c) {
case 0:
case 4:
ast->dram_type = AST_DRAM_512Mx16;
break;
case 8:
if (data & 0x40)
ast->dram_type = AST_DRAM_1Gx16;
else
ast->dram_type = AST_DRAM_512Mx32;
break;
case 0xc:
ast->dram_type = AST_DRAM_1Gx32;
default:
div = 0x1;
break;
}
ast->mclk = ref_pll * (num + 2) / (denum + 2) * (div * 1000);
}
data = ast_read32(ast, 0x10120);
data2 = ast_read32(ast, 0x10170);
if (data2 & 0x2000)
ref_pll = 14318;
else
ref_pll = 12000;
denum = data & 0x1f;
num = (data & 0x3fe0) >> 5;
data = (data & 0xc000) >> 14;
switch (data) {
case 3:
div = 0x4;
break;
case 2:
case 1:
div = 0x2;
break;
default:
div = 0x1;
break;
}
ast->mclk = ref_pll * (num + 2) / (denum + 2) * (div * 1000);
return 0;
}

View File

@ -379,12 +379,20 @@ void ast_post_gpu(struct drm_device *dev)
ast_open_key(ast);
ast_set_def_ext_reg(dev);
if (ast->chip == AST2300 || ast->chip == AST2400)
ast_init_dram_2300(dev);
else
ast_init_dram_reg(dev);
if (ast->DisableP2A == false)
{
if (ast->chip == AST2300 || ast->chip == AST2400)
ast_init_dram_2300(dev);
else
ast_init_dram_reg(dev);
ast_init_3rdtx(dev);
ast_init_3rdtx(dev);
}
else
{
if (ast->tx_chip_type != AST_TX_NONE)
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xcf, 0x80); /* Enable DVO */
}
}
/* AST 2300 DRAM settings */

View File

@ -1382,6 +1382,7 @@ int analogix_dp_bind(struct device *dev, struct drm_device *drm_dev,
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
phy_power_on(dp->phy);
analogix_dp_init_dp(dp);
@ -1414,9 +1415,15 @@ int analogix_dp_bind(struct device *dev, struct drm_device *drm_dev,
goto err_disable_pm_runtime;
}
phy_power_off(dp->phy);
pm_runtime_put(dev);
return 0;
err_disable_pm_runtime:
phy_power_off(dp->phy);
pm_runtime_put(dev);
pm_runtime_disable(dev);
return ret;

View File

@ -7,3 +7,12 @@ config DRM_CIRRUS_QEMU
This is a KMS driver for emulated cirrus device in qemu.
It is *NOT* intended for real cirrus devices. This requires
the modesetting userspace X.org driver.
Cirrus is obsolete; the hardware was designed in the 1990s
and can't keep up with today's needs. More background:
https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
Better alternatives are:
- stdvga (DRM_BOCHS, qemu -vga std, default in qemu 2.2+)
- qxl (DRM_QXL, qemu -vga qxl, works best with spice)
- virtio (DRM_VIRTIO_GPU, qemu -vga virtio)

View File

@ -291,15 +291,15 @@ drm_atomic_get_crtc_state(struct drm_atomic_state *state,
EXPORT_SYMBOL(drm_atomic_get_crtc_state);
static void set_out_fence_for_crtc(struct drm_atomic_state *state,
struct drm_crtc *crtc, s64 __user *fence_ptr)
struct drm_crtc *crtc, s32 __user *fence_ptr)
{
state->crtcs[drm_crtc_index(crtc)].out_fence_ptr = fence_ptr;
}
static s64 __user *get_out_fence_for_crtc(struct drm_atomic_state *state,
static s32 __user *get_out_fence_for_crtc(struct drm_atomic_state *state,
struct drm_crtc *crtc)
{
s64 __user *fence_ptr;
s32 __user *fence_ptr;
fence_ptr = state->crtcs[drm_crtc_index(crtc)].out_fence_ptr;
state->crtcs[drm_crtc_index(crtc)].out_fence_ptr = NULL;
@ -512,7 +512,7 @@ int drm_atomic_crtc_set_property(struct drm_crtc *crtc,
state->color_mgmt_changed |= replaced;
return ret;
} else if (property == config->prop_out_fence_ptr) {
s64 __user *fence_ptr = u64_to_user_ptr(val);
s32 __user *fence_ptr = u64_to_user_ptr(val);
if (!fence_ptr)
return 0;
@ -1915,7 +1915,7 @@ EXPORT_SYMBOL(drm_atomic_clean_old_fb);
*/
struct drm_out_fence_state {
s64 __user *out_fence_ptr;
s32 __user *out_fence_ptr;
struct sync_file *sync_file;
int fd;
};
@ -1952,7 +1952,7 @@ static int prepare_crtc_signaling(struct drm_device *dev,
return 0;
for_each_crtc_in_state(state, crtc, crtc_state, i) {
u64 __user *fence_ptr;
s32 __user *fence_ptr;
fence_ptr = get_out_fence_for_crtc(crtc_state->state, crtc);
@ -2032,13 +2032,16 @@ static void complete_crtc_signaling(struct drm_device *dev,
}
for_each_crtc_in_state(state, crtc, crtc_state, i) {
struct drm_pending_vblank_event *event = crtc_state->event;
/*
* TEST_ONLY and PAGE_FLIP_EVENT are mutually
* exclusive, if they weren't, this code should be
* called on success for TEST_ONLY too.
* Free the allocated event. drm_atomic_helper_setup_commit
* can allocate an event too, so only free it if it's ours
* to prevent a double free in drm_atomic_state_clear.
*/
if (crtc_state->event)
drm_event_cancel_free(dev, &crtc_state->event->base);
if (event && (event->base.fence || event->base.file_priv)) {
drm_event_cancel_free(dev, &event->base);
crtc_state->event = NULL;
}
}
if (!fence_state)
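The s64 -> s32 change matters because the value the kernel writes back through out_fence_ptr is a file descriptor, i.e. a 32-bit int in the caller's structure; writing 64 bits through that pointer would spill into whatever the caller placed next to the fence slot. A userspace-only sketch of the difference, with an invented layout and memcpy standing in for the copy to user space:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* layout a caller might have: the fence slot followed by other data */
	struct { int32_t fence_fd; int32_t other; } user = { -1, 42 };
	int32_t fd = 7;
	int64_t wide_fd = 7;

	memcpy(&user.fence_fd, &fd, sizeof(fd));	/* 32-bit store: only the slot */
	printf("s32 store: fence_fd=%d other=%d\n", user.fence_fd, user.other);

	user.other = 42;
	memcpy(&user, &wide_fd, sizeof(wide_fd));	/* 64-bit store clobbers "other" */
	printf("s64 store: fence_fd=%d other=%d\n", user.fence_fd, user.other);
	return 0;
}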

View File

@ -1666,9 +1666,6 @@ int drm_atomic_helper_prepare_planes(struct drm_device *dev,
funcs = plane->helper_private;
if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
continue;
if (funcs->prepare_fb) {
ret = funcs->prepare_fb(plane, plane_state);
if (ret)
@ -1685,9 +1682,6 @@ fail:
if (j >= i)
continue;
if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
continue;
funcs = plane->helper_private;
if (funcs->cleanup_fb)
@ -1954,9 +1948,6 @@ void drm_atomic_helper_cleanup_planes(struct drm_device *dev,
for_each_plane_in_state(old_state, plane, plane_state, i) {
const struct drm_plane_helper_funcs *funcs;
if (!drm_atomic_helper_framebuffer_changed(dev, old_state, plane_state->crtc))
continue;
funcs = plane->helper_private;
if (funcs->cleanup_fb)

View File

@ -225,6 +225,7 @@ int drm_connector_init(struct drm_device *dev,
INIT_LIST_HEAD(&connector->probed_modes);
INIT_LIST_HEAD(&connector->modes);
mutex_init(&connector->mutex);
connector->edid_blob_ptr = NULL;
connector->status = connector_status_unknown;
@ -359,6 +360,8 @@ void drm_connector_cleanup(struct drm_connector *connector)
connector->funcs->atomic_destroy_state(connector,
connector->state);
mutex_destroy(&connector->mutex);
memset(connector, 0, sizeof(*connector));
}
EXPORT_SYMBOL(drm_connector_cleanup);
@ -374,14 +377,18 @@ EXPORT_SYMBOL(drm_connector_cleanup);
*/
int drm_connector_register(struct drm_connector *connector)
{
int ret;
int ret = 0;
if (connector->registered)
if (!connector->dev->registered)
return 0;
mutex_lock(&connector->mutex);
if (connector->registered)
goto unlock;
ret = drm_sysfs_connector_add(connector);
if (ret)
return ret;
goto unlock;
ret = drm_debugfs_connector_add(connector);
if (ret) {
@ -397,12 +404,14 @@ int drm_connector_register(struct drm_connector *connector)
drm_mode_object_register(connector->dev, &connector->base);
connector->registered = true;
return 0;
goto unlock;
err_debugfs:
drm_debugfs_connector_remove(connector);
err_sysfs:
drm_sysfs_connector_remove(connector);
unlock:
mutex_unlock(&connector->mutex);
return ret;
}
EXPORT_SYMBOL(drm_connector_register);
@ -415,8 +424,11 @@ EXPORT_SYMBOL(drm_connector_register);
*/
void drm_connector_unregister(struct drm_connector *connector)
{
if (!connector->registered)
mutex_lock(&connector->mutex);
if (!connector->registered) {
mutex_unlock(&connector->mutex);
return;
}
if (connector->funcs->early_unregister)
connector->funcs->early_unregister(connector);
@ -425,6 +437,7 @@ void drm_connector_unregister(struct drm_connector *connector)
drm_debugfs_connector_remove(connector);
connector->registered = false;
mutex_unlock(&connector->mutex);
}
EXPORT_SYMBOL(drm_connector_unregister);
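The connector hunks make register/unregister idempotent: a per-connector mutex plus the existing "registered" flag turn repeated or racing calls into no-ops, and the error paths funnel through a single unlock label. The same pattern in isolation, with invented names and a pthread mutex instead of struct mutex:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	pthread_mutex_t lock;
	bool registered;
};

static int obj_register(struct obj *o)
{
	int ret = 0;

	pthread_mutex_lock(&o->lock);
	if (o->registered)
		goto unlock;		/* already done: succeed quietly */
	printf("registering\n");
	o->registered = true;
unlock:
	pthread_mutex_unlock(&o->lock);
	return ret;
}

static void obj_unregister(struct obj *o)
{
	pthread_mutex_lock(&o->lock);
	if (o->registered) {
		printf("unregistering\n");
		o->registered = false;
	}
	pthread_mutex_unlock(&o->lock);
}

int main(void)
{
	struct obj o = { PTHREAD_MUTEX_INITIALIZER, false };

	obj_register(&o);
	obj_register(&o);	/* second call is a no-op */
	obj_unregister(&o);
	obj_unregister(&o);	/* so is a second unregister */
	return 0;
}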

View File

@ -745,6 +745,8 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
if (ret)
goto err_minors;
dev->registered = true;
if (dev->driver->load) {
ret = dev->driver->load(dev, flags);
if (ret)
@ -785,6 +787,8 @@ void drm_dev_unregister(struct drm_device *dev)
drm_lastclose(dev);
dev->registered = false;
if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_modeset_unregister_all(dev);

View File

@ -1460,6 +1460,13 @@ drm_mode_create_from_cmdline_mode(struct drm_device *dev,
return NULL;
mode->type |= DRM_MODE_TYPE_USERDEF;
/* fix up 1368x768: GFT/CVT can't express 1366 width due to alignment */
if (cmd->xres == 1366 && mode->hdisplay == 1368) {
mode->hdisplay = 1366;
mode->hsync_start--;
mode->hsync_end--;
drm_mode_set_name(mode);
}
drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
return mode;
}
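The 1368 -> 1366 fixup exists because CVT/GTF generate horizontal timings in 8-pixel character cells, so a requested width of 1366 comes back as 1368 and has to be trimmed again (with hsync_start/hsync_end shifted by the same amount). A quick arithmetic check of the rounding, assuming the usual 8-pixel cell:

#include <stdio.h>

int main(void)
{
	int requested = 1366;
	int cell = 8;				/* CVT/GTF character cell width */
	int generated = (requested + cell - 1) / cell * cell;	/* round up to a cell */

	printf("requested %d -> timing generator produces %d -> fixed up to %d\n",
	       requested, generated, requested);
	return 0;
}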

View File

@ -143,8 +143,18 @@ void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
}
if (dev->mode_config.delayed_event) {
/*
* FIXME:
*
* Use short (1s) delay to handle the initial delayed event.
* This delay should not be needed, but Optimus/nouveau will
* fail in a mysterious way if the delayed event is handled as
* soon as possible like it is done in
* drm_helper_probe_single_connector_modes() in case the poll
* was enabled before.
*/
poll = true;
delay = 0;
delay = HZ;
}
if (poll)

View File

@ -116,9 +116,14 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
struct list_head list;
bool found;
/*
* XXX: The DRM_MM_SEARCH_BELOW is really a hack to trick
* drm_mm into giving out a low IOVA after address space
* rollover. This needs a proper fix.
*/
ret = drm_mm_insert_node_in_range(&mmu->mm, node,
size, 0, mmu->last_iova, ~0UL,
DRM_MM_SEARCH_DEFAULT);
mmu->last_iova ? DRM_MM_SEARCH_DEFAULT : DRM_MM_SEARCH_BELOW);
if (ret != -ENOSPC)
break;

View File

@ -46,7 +46,8 @@ enum decon_flag_bits {
BIT_CLKS_ENABLED,
BIT_IRQS_ENABLED,
BIT_WIN_UPDATED,
BIT_SUSPENDED
BIT_SUSPENDED,
BIT_REQUEST_UPDATE
};
struct decon_context {
@ -141,12 +142,6 @@ static void decon_commit(struct exynos_drm_crtc *crtc)
m->crtc_vsync_end = m->crtc_vsync_start + 1;
}
decon_set_bits(ctx, DECON_VIDCON0, VIDCON0_ENVID, 0);
/* enable clock gate */
val = CMU_CLKGAGE_MODE_SFR_F | CMU_CLKGAGE_MODE_MEM_F;
writel(val, ctx->addr + DECON_CMU);
if (ctx->out_type & (IFTYPE_I80 | I80_HW_TRG))
decon_setup_trigger(ctx);
@ -315,6 +310,7 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc,
/* window enable */
decon_set_bits(ctx, DECON_WINCONx(win), WINCONx_ENWIN_F, ~0);
set_bit(BIT_REQUEST_UPDATE, &ctx->flags);
}
static void decon_disable_plane(struct exynos_drm_crtc *crtc,
@ -327,6 +323,7 @@ static void decon_disable_plane(struct exynos_drm_crtc *crtc,
return;
decon_set_bits(ctx, DECON_WINCONx(win), WINCONx_ENWIN_F, 0);
set_bit(BIT_REQUEST_UPDATE, &ctx->flags);
}
static void decon_atomic_flush(struct exynos_drm_crtc *crtc)
@ -340,8 +337,8 @@ static void decon_atomic_flush(struct exynos_drm_crtc *crtc)
for (i = ctx->first_win; i < WINDOWS_NR; i++)
decon_shadow_protect_win(ctx, i, false);
/* standalone update */
decon_set_bits(ctx, DECON_UPDATE, STANDALONE_UPDATE_F, ~0);
if (test_and_clear_bit(BIT_REQUEST_UPDATE, &ctx->flags))
decon_set_bits(ctx, DECON_UPDATE, STANDALONE_UPDATE_F, ~0);
if (ctx->out_type & IFTYPE_I80)
set_bit(BIT_WIN_UPDATED, &ctx->flags);
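The new BIT_REQUEST_UPDATE flag turns the unconditional STANDALONE_UPDATE_F write into an "only if something changed" commit: plane update/disable set the bit, and atomic_flush issues the hardware update only when it can atomically claim it. The same pattern in plain C11 atomics, with invented names and printf in place of the register write:

#include <stdatomic.h>
#include <stdio.h>

static atomic_uint flags;
#define BIT_REQUEST_UPDATE 0x1u

static void update_plane(void)
{
	atomic_fetch_or(&flags, BIT_REQUEST_UPDATE);	/* mark work pending */
}

static void atomic_flush(void)
{
	/* test_and_clear_bit(): read the old value, clear the bit */
	if (atomic_fetch_and(&flags, ~BIT_REQUEST_UPDATE) & BIT_REQUEST_UPDATE)
		printf("STANDALONE_UPDATE_F written\n");
	else
		printf("nothing changed, hardware update skipped\n");
}

int main(void)
{
	atomic_flush();		/* no plane touched yet -> skipped */
	update_plane();
	atomic_flush();		/* -> update issued */
	atomic_flush();		/* flag already consumed -> skipped again */
	return 0;
}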

View File

@ -37,13 +37,6 @@
#include "i915_drv.h"
#include "gvt.h"
#define MB_TO_BYTES(mb) ((mb) << 20ULL)
#define BYTES_TO_MB(b) ((b) >> 20ULL)
#define HOST_LOW_GM_SIZE MB_TO_BYTES(128)
#define HOST_HIGH_GM_SIZE MB_TO_BYTES(384)
#define HOST_FENCE 4
static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
{
struct intel_gvt *gvt = vgpu->gvt;
@ -165,6 +158,14 @@ void intel_vgpu_write_fence(struct intel_vgpu *vgpu,
POSTING_READ(fence_reg_lo);
}
static void _clear_vgpu_fence(struct intel_vgpu *vgpu)
{
int i;
for (i = 0; i < vgpu_fence_sz(vgpu); i++)
intel_vgpu_write_fence(vgpu, i, 0);
}
static void free_vgpu_fence(struct intel_vgpu *vgpu)
{
struct intel_gvt *gvt = vgpu->gvt;
@ -178,9 +179,9 @@ static void free_vgpu_fence(struct intel_vgpu *vgpu)
intel_runtime_pm_get(dev_priv);
mutex_lock(&dev_priv->drm.struct_mutex);
_clear_vgpu_fence(vgpu);
for (i = 0; i < vgpu_fence_sz(vgpu); i++) {
reg = vgpu->fence.regs[i];
intel_vgpu_write_fence(vgpu, i, 0);
list_add_tail(&reg->link,
&dev_priv->mm.fence_list);
}
@ -208,13 +209,14 @@ static int alloc_vgpu_fence(struct intel_vgpu *vgpu)
continue;
list_del(pos);
vgpu->fence.regs[i] = reg;
intel_vgpu_write_fence(vgpu, i, 0);
if (++i == vgpu_fence_sz(vgpu))
break;
}
if (i != vgpu_fence_sz(vgpu))
goto out_free_fence;
_clear_vgpu_fence(vgpu);
mutex_unlock(&dev_priv->drm.struct_mutex);
intel_runtime_pm_put(dev_priv);
return 0;
@ -313,6 +315,22 @@ void intel_vgpu_free_resource(struct intel_vgpu *vgpu)
free_resource(vgpu);
}
/**
* intel_vgpu_reset_resource - reset resource state owned by a vGPU
* @vgpu: a vGPU
*
* This function is used to reset resource state owned by a vGPU.
*
*/
void intel_vgpu_reset_resource(struct intel_vgpu *vgpu)
{
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
intel_runtime_pm_get(dev_priv);
_clear_vgpu_fence(vgpu);
intel_runtime_pm_put(dev_priv);
}
/**
* intel_alloc_vgpu_resource - allocate HW resource for a vGPU
* @vgpu: vGPU

View File

@ -282,3 +282,77 @@ int intel_vgpu_emulate_cfg_write(struct intel_vgpu *vgpu, unsigned int offset,
}
return 0;
}
/**
* intel_vgpu_init_cfg_space - init vGPU configuration space when create vGPU
*
* @vgpu: a vGPU
* @primary: is the vGPU presented as primary
*
*/
void intel_vgpu_init_cfg_space(struct intel_vgpu *vgpu,
bool primary)
{
struct intel_gvt *gvt = vgpu->gvt;
const struct intel_gvt_device_info *info = &gvt->device_info;
u16 *gmch_ctl;
int i;
memcpy(vgpu_cfg_space(vgpu), gvt->firmware.cfg_space,
info->cfg_space_size);
if (!primary) {
vgpu_cfg_space(vgpu)[PCI_CLASS_DEVICE] =
INTEL_GVT_PCI_CLASS_VGA_OTHER;
vgpu_cfg_space(vgpu)[PCI_CLASS_PROG] =
INTEL_GVT_PCI_CLASS_VGA_OTHER;
}
/* Show guest that there isn't any stolen memory.*/
gmch_ctl = (u16 *)(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_GMCH_CONTROL);
*gmch_ctl &= ~(BDW_GMCH_GMS_MASK << BDW_GMCH_GMS_SHIFT);
intel_vgpu_write_pci_bar(vgpu, PCI_BASE_ADDRESS_2,
gvt_aperture_pa_base(gvt), true);
vgpu_cfg_space(vgpu)[PCI_COMMAND] &= ~(PCI_COMMAND_IO
| PCI_COMMAND_MEMORY
| PCI_COMMAND_MASTER);
/*
* Clear the bar upper 32bit and let guest to assign the new value
*/
memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_1, 0, 4);
memset(vgpu_cfg_space(vgpu) + PCI_BASE_ADDRESS_3, 0, 4);
memset(vgpu_cfg_space(vgpu) + INTEL_GVT_PCI_OPREGION, 0, 4);
for (i = 0; i < INTEL_GVT_MAX_BAR_NUM; i++) {
vgpu->cfg_space.bar[i].size = pci_resource_len(
gvt->dev_priv->drm.pdev, i * 2);
vgpu->cfg_space.bar[i].tracked = false;
}
}
/**
* intel_vgpu_reset_cfg_space - reset vGPU configuration space
*
* @vgpu: a vGPU
*
*/
void intel_vgpu_reset_cfg_space(struct intel_vgpu *vgpu)
{
u8 cmd = vgpu_cfg_space(vgpu)[PCI_COMMAND];
bool primary = vgpu_cfg_space(vgpu)[PCI_CLASS_DEVICE] !=
INTEL_GVT_PCI_CLASS_VGA_OTHER;
if (cmd & PCI_COMMAND_MEMORY) {
trap_gttmmio(vgpu, false);
map_aperture(vgpu, false);
}
/**
* Currently we only do such reset when vGPU is not
* owned by any VM, so we simply restore entire cfg
* space to default value.
*/
intel_vgpu_init_cfg_space(vgpu, primary);
}

View File

@ -481,7 +481,6 @@ struct parser_exec_state {
(s->vgpu->gvt->device_info.gmadr_bytes_in_cmd >> 2)
static unsigned long bypass_scan_mask = 0;
static bool bypass_batch_buffer_scan = true;
/* ring ALL, type = 0 */
static struct sub_op_bits sub_op_mi[] = {
@ -1525,9 +1524,6 @@ static int batch_buffer_needs_scan(struct parser_exec_state *s)
{
struct intel_gvt *gvt = s->vgpu->gvt;
if (bypass_batch_buffer_scan)
return 0;
if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)) {
/* BDW decides privilege based on address space */
if (cmd_val(s, 0) & (1 << 8))

View File

@ -364,58 +364,30 @@ static void free_workload(struct intel_vgpu_workload *workload)
#define get_desc_from_elsp_dwords(ed, i) \
((struct execlist_ctx_descriptor_format *)&((ed)->data[i * 2]))
#define BATCH_BUFFER_ADDR_MASK ((1UL << 32) - (1U << 2))
#define BATCH_BUFFER_ADDR_HIGH_MASK ((1UL << 16) - (1U))
static int set_gma_to_bb_cmd(struct intel_shadow_bb_entry *entry_obj,
unsigned long add, int gmadr_bytes)
{
if (WARN_ON(gmadr_bytes != 4 && gmadr_bytes != 8))
return -1;
*((u32 *)(entry_obj->bb_start_cmd_va + (1 << 2))) = add &
BATCH_BUFFER_ADDR_MASK;
if (gmadr_bytes == 8) {
*((u32 *)(entry_obj->bb_start_cmd_va + (2 << 2))) =
add & BATCH_BUFFER_ADDR_HIGH_MASK;
}
return 0;
}
static void prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
{
int gmadr_bytes = workload->vgpu->gvt->device_info.gmadr_bytes_in_cmd;
const int gmadr_bytes = workload->vgpu->gvt->device_info.gmadr_bytes_in_cmd;
struct intel_shadow_bb_entry *entry_obj;
/* pin the gem object to ggtt */
if (!list_empty(&workload->shadow_bb)) {
struct intel_shadow_bb_entry *entry_obj =
list_first_entry(&workload->shadow_bb,
struct intel_shadow_bb_entry,
list);
struct intel_shadow_bb_entry *temp;
list_for_each_entry(entry_obj, &workload->shadow_bb, list) {
struct i915_vma *vma;
list_for_each_entry_safe(entry_obj, temp, &workload->shadow_bb,
list) {
struct i915_vma *vma;
vma = i915_gem_object_ggtt_pin(entry_obj->obj, NULL, 0,
4, 0);
if (IS_ERR(vma)) {
gvt_err("Cannot pin\n");
return;
}
/* FIXME: we are not tracking our pinned VMA leaving it
* up to the core to fix up the stray pin_count upon
* free.
*/
/* update the relocate gma with shadow batch buffer*/
set_gma_to_bb_cmd(entry_obj,
i915_ggtt_offset(vma),
gmadr_bytes);
vma = i915_gem_object_ggtt_pin(entry_obj->obj, NULL, 0, 4, 0);
if (IS_ERR(vma)) {
gvt_err("Cannot pin\n");
return;
}
/* FIXME: we are not tracking our pinned VMA leaving it
* up to the core to fix up the stray pin_count upon
* free.
*/
/* update the relocate gma with shadow batch buffer*/
entry_obj->bb_start_cmd_va[1] = i915_ggtt_offset(vma);
if (gmadr_bytes == 8)
entry_obj->bb_start_cmd_va[2] = 0;
}
}
@ -826,7 +798,7 @@ int intel_vgpu_init_execlist(struct intel_vgpu *vgpu)
INIT_LIST_HEAD(&vgpu->workload_q_head[i]);
}
vgpu->workloads = kmem_cache_create("gvt-g vgpu workload",
vgpu->workloads = kmem_cache_create("gvt-g_vgpu_workload",
sizeof(struct intel_vgpu_workload), 0,
SLAB_HWCACHE_ALIGN,
NULL);

View File

@ -240,15 +240,8 @@ static inline int get_pse_type(int type)
static u64 read_pte64(struct drm_i915_private *dev_priv, unsigned long index)
{
void __iomem *addr = (gen8_pte_t __iomem *)dev_priv->ggtt.gsm + index;
u64 pte;
#ifdef readq
pte = readq(addr);
#else
pte = ioread32(addr);
pte |= (u64)ioread32(addr + 4) << 32;
#endif
return pte;
return readq(addr);
}
static void write_pte64(struct drm_i915_private *dev_priv,
@ -256,12 +249,8 @@ static void write_pte64(struct drm_i915_private *dev_priv,
{
void __iomem *addr = (gen8_pte_t __iomem *)dev_priv->ggtt.gsm + index;
#ifdef writeq
writeq(pte, addr);
#else
iowrite32((u32)pte, addr);
iowrite32(pte >> 32, addr + 4);
#endif
I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
POSTING_READ(GFX_FLSH_CNTL_GEN6);
}
@ -1380,8 +1369,7 @@ static int gen8_mm_alloc_page_table(struct intel_vgpu_mm *mm)
info->gtt_entry_size;
mem = kzalloc(mm->has_shadow_page_table ?
mm->page_table_entry_size * 2
: mm->page_table_entry_size,
GFP_ATOMIC);
: mm->page_table_entry_size, GFP_KERNEL);
if (!mem)
return -ENOMEM;
mm->virtual_page_table = mem;
@ -1532,7 +1520,7 @@ struct intel_vgpu_mm *intel_vgpu_create_mm(struct intel_vgpu *vgpu,
struct intel_vgpu_mm *mm;
int ret;
mm = kzalloc(sizeof(*mm), GFP_ATOMIC);
mm = kzalloc(sizeof(*mm), GFP_KERNEL);
if (!mm) {
ret = -ENOMEM;
goto fail;
@ -1886,30 +1874,27 @@ static int alloc_scratch_pages(struct intel_vgpu *vgpu,
struct intel_gvt_gtt_pte_ops *ops = vgpu->gvt->gtt.pte_ops;
int page_entry_num = GTT_PAGE_SIZE >>
vgpu->gvt->device_info.gtt_entry_size_shift;
struct page *scratch_pt;
void *scratch_pt;
unsigned long mfn;
int i;
void *p;
if (WARN_ON(type < GTT_TYPE_PPGTT_PTE_PT || type >= GTT_TYPE_MAX))
return -EINVAL;
scratch_pt = alloc_page(GFP_KERNEL | GFP_ATOMIC | __GFP_ZERO);
scratch_pt = (void *)get_zeroed_page(GFP_KERNEL);
if (!scratch_pt) {
gvt_err("fail to allocate scratch page\n");
return -ENOMEM;
}
p = kmap_atomic(scratch_pt);
mfn = intel_gvt_hypervisor_virt_to_mfn(p);
mfn = intel_gvt_hypervisor_virt_to_mfn(scratch_pt);
if (mfn == INTEL_GVT_INVALID_ADDR) {
gvt_err("fail to translate vaddr:0x%llx\n", (u64)p);
kunmap_atomic(p);
__free_page(scratch_pt);
gvt_err("fail to translate vaddr:0x%lx\n", (unsigned long)scratch_pt);
free_page((unsigned long)scratch_pt);
return -EFAULT;
}
gtt->scratch_pt[type].page_mfn = mfn;
gtt->scratch_pt[type].page = scratch_pt;
gtt->scratch_pt[type].page = virt_to_page(scratch_pt);
gvt_dbg_mm("vgpu%d create scratch_pt: type %d mfn=0x%lx\n",
vgpu->id, type, mfn);
@ -1918,7 +1903,7 @@ static int alloc_scratch_pages(struct intel_vgpu *vgpu,
* scratch_pt[type] indicate the scratch pt/scratch page used by the
* 'type' pt.
* e.g. scratch_pt[GTT_TYPE_PPGTT_PDE_PT] is used by
* GTT_TYPE_PPGTT_PDE_PT level pt, that means this scatch_pt it self
* GTT_TYPE_PPGTT_PDE_PT level pt, that means this scratch_pt it self
* is GTT_TYPE_PPGTT_PTE_PT, and full filled by scratch page mfn.
*/
if (type > GTT_TYPE_PPGTT_PTE_PT && type < GTT_TYPE_MAX) {
@ -1936,11 +1921,9 @@ static int alloc_scratch_pages(struct intel_vgpu *vgpu,
se.val64 |= PPAT_CACHED_INDEX;
for (i = 0; i < page_entry_num; i++)
ops->set_entry(p, &se, i, false, 0, vgpu);
ops->set_entry(scratch_pt, &se, i, false, 0, vgpu);
}
kunmap_atomic(p);
return 0;
}
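
The allocation change above leans on get_zeroed_page() handing back a zeroed, directly mapped kernel address, so no kmap_atomic()/kunmap_atomic() pair is needed and virt_to_page() can still recover the struct page for bookkeeping. A minimal sketch of that pattern, with illustrative names:

#include <linux/gfp.h>
#include <linux/mm.h>

static void *example_alloc_scratch_pt(struct page **page_out)
{
	void *va = (void *)get_zeroed_page(GFP_KERNEL);	/* sleeping allocation, as in the hunk above */

	if (!va)
		return NULL;
	*page_out = virt_to_page(va);	/* keep the struct page for later reset/free */
	return va;
}

static void example_free_scratch_pt(void *va)
{
	free_page((unsigned long)va);
}
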
@ -2208,7 +2191,7 @@ int intel_vgpu_g2v_destroy_ppgtt_mm(struct intel_vgpu *vgpu,
int intel_gvt_init_gtt(struct intel_gvt *gvt)
{
int ret;
void *page_addr;
void *page;
gvt_dbg_core("init gtt\n");
@ -2221,17 +2204,14 @@ int intel_gvt_init_gtt(struct intel_gvt *gvt)
return -ENODEV;
}
gvt->gtt.scratch_ggtt_page =
alloc_page(GFP_KERNEL | GFP_ATOMIC | __GFP_ZERO);
if (!gvt->gtt.scratch_ggtt_page) {
page = (void *)get_zeroed_page(GFP_KERNEL);
if (!page) {
gvt_err("fail to allocate scratch ggtt page\n");
return -ENOMEM;
}
gvt->gtt.scratch_ggtt_page = virt_to_page(page);
page_addr = page_address(gvt->gtt.scratch_ggtt_page);
gvt->gtt.scratch_ggtt_mfn =
intel_gvt_hypervisor_virt_to_mfn(page_addr);
gvt->gtt.scratch_ggtt_mfn = intel_gvt_hypervisor_virt_to_mfn(page);
if (gvt->gtt.scratch_ggtt_mfn == INTEL_GVT_INVALID_ADDR) {
gvt_err("fail to translate scratch ggtt page\n");
__free_page(gvt->gtt.scratch_ggtt_page);
@ -2297,3 +2277,30 @@ void intel_vgpu_reset_ggtt(struct intel_vgpu *vgpu)
for (offset = 0; offset < num_entries; offset++)
ops->set_entry(NULL, &e, index + offset, false, 0, vgpu);
}
/**
* intel_vgpu_reset_gtt - reset all GTT related status
* @vgpu: a vGPU
* @dmlr: true for vGPU Device Model Level Reset, false for GT Reset
*
* This function is called from vfio core to reset all
* GTT related status, including GGTT, PPGTT, scratch page.
*
*/
void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu, bool dmlr)
{
int i;
ppgtt_free_all_shadow_page(vgpu);
if (!dmlr)
return;
intel_vgpu_reset_ggtt(vgpu);
/* clear scratch page for security */
for (i = GTT_TYPE_PPGTT_PTE_PT; i < GTT_TYPE_MAX; i++) {
if (vgpu->gtt.scratch_pt[i].page != NULL)
memset(page_address(vgpu->gtt.scratch_pt[i].page),
0, PAGE_SIZE);
}
}
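
A hypothetical caller, purely illustrative and not part of this diff, showing how the dmlr flag selects how much state is dropped:

/* Device model level reset (dmlr == true) also clears the GGTT and the
 * scratch pages; a plain GT reset only drops the shadow PPGTT pages.
 */
static void example_handle_vgpu_reset(struct intel_vgpu *vgpu, bool dmlr)
{
	intel_vgpu_reset_gtt(vgpu, dmlr);
}
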

Some files were not shown because too many files have changed in this diff.