Linux 3.17-rc5

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJUFjfVAAoJEHm+PkMAQRiGANkIAIU3PNrAz9dIItq8a/rEAhnx
 l2shHoOyEmyNR2apholM3BPUNX50cbsc/HGdi7lZKLkA/ifAj6B9nFD2NzVsIChD
 1QWVcvdkKlVuxXCDd26qbijlfmbTOAWrLw9ntvM+J6ZtECM6zCAZF4MAV/FwogPq
 ETGKD76AxJtVIhBMS99troAiC1YxmQ7DKgEr8CraTOR1qwXEonnPCmN/IZA6x2/G
 EXiihOuQB5me1X7k4PI0V8CDscQOn+3B2CQHIrjRB+KiTF+iKIuI8n6ORC6bpFh+
 U8UZP9wLlIG1BrUHG83pIndglIHotqPcjmtfl1WGrRr2hn7abzVSfV+g5Syo3Vg=
 =Ep+s
 -----END PGP SIGNATURE-----

Merge tag 'v3.17-rc5' into next

Linux 3.17-rc5

Signed-off-by: Felipe Balbi <balbi@ti.com>

Conflicts:
	Documentation/devicetree/bindings/usb/mxs-phy.txt
	drivers/usb/phy/phy-mxs-usb.c
commit 4cd41ffd27
Author: Felipe Balbi <balbi@ti.com>
Date:   2014-09-16 09:53:59 -05:00

331 changed files with 3960 additions and 2086 deletions


@@ -16,9 +16,9 @@ Example:
 * DMA client
 Required properties:
-- dmas: a list of <[DMA multiplexer phandle] [SRS/DRS value]> pairs,
-  where SRS/DRS values are fixed handles, specified in the SoC
-  manual as the value that would be written into the PDMACHCR.
+- dmas: a list of <[DMA multiplexer phandle] [SRS << 8 | DRS]> pairs.
+  where SRS/DRS are specified in the SoC manual.
+  It will be written into PDMACHCR as high 16-bit parts.
 - dma-names: a list of DMA channel names, one per "dmas" entry
 Example:
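
For illustration only, a client "dmas" entry under the new encoding could look
like this (the audmapp phandle and the SRS/DRS values are invented for the
example, not taken from this commit):

	dmas = <&audmapp 0x2d00>;	/* SRS = 0x2d, DRS = 0x00 -> (0x2d << 8) | 0x00 */
	dma-names = "src0";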


@@ -39,6 +39,10 @@ Optional properties:
   further clocks may be specified in derived bindings.
 - clock-names: One name for each entry in the clocks property, the
   first one should be "stmmaceth".
+- clk_ptp_ref: this is the PTP reference clock; in case of the PTP is
+  available this clock is used for programming the Timestamp Addend Register.
+  If not passed then the system clock will be used and this is fine on some
+  platforms.
 Examples:
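
For illustration only, a sketch of how a node might name the PTP reference
clock (the clock phandles and the gmac node itself are invented for the
example, not taken from this commit):

	gmac0: ethernet@e0800000 {
		compatible = "st,spear600-gmac";
		...
		clocks = <&clk_ahb>, <&clk_ptp>;
		clock-names = "stmmaceth", "clk_ptp_ref";
	};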


@@ -6,6 +6,7 @@ Required properties:
 	* "fsl,imx6q-usbphy" for imx6dq and imx6dl
 	* "fsl,imx6sl-usbphy" for imx6sl
 	* "fsl,vf610-usbphy" for Vybrid vf610
+	* "fsl,imx6sx-usbphy" for imx6sx
 	"fsl,imx23-usbphy" is still a fallback for other strings
 - reg: Should contain registers location and length
 - interrupts: Should contain phy interrupt
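
For illustration only, a minimal imx6sx PHY node using the new compatible
string plus the documented fallback (the unit address, register range and
interrupt number are invented for the example):

	usbphy1: usbphy@020c9000 {
		compatible = "fsl,imx6sx-usbphy", "fsl,imx23-usbphy";
		reg = <0x020c9000 0x1000>;
		interrupts = <0 44 0x04>;
	};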


@@ -2,7 +2,7 @@ Analog TV Connector
 ===================
 Required properties:
-- compatible: "composite-connector" or "svideo-connector"
+- compatible: "composite-video-connector" or "svideo-connector"
 Optional properties:
 - label: a symbolic name for the connector
@@ -14,7 +14,7 @@ Example
 -------
 tv: connector {
-	compatible = "composite-connector";
+	compatible = "composite-video-connector";
 	label = "tv";
 	port {


@@ -3541,6 +3541,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			bogus residue values);
 			s = SINGLE_LUN (the device has only one
 			    Logical Unit);
+			u = IGNORE_UAS (don't bind to the uas driver);
 			w = NO_WP_DETECT (don't test whether the
 			    medium is write-protected).
 			Example: quirks=0419:aaf5:rl,0421:0433:rc
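
Following the same VID:PID:flags syntax as the example above, a hypothetical
device 1234:5678 could be kept away from the uas driver by booting with
quirks=1234:5678:u (the ID here is invented for illustration).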


@@ -6425,7 +6425,8 @@ F: Documentation/scsi/NinjaSCSI.txt
 F: drivers/scsi/nsp32*
 NTB DRIVER
-M: Jon Mason <jon.mason@intel.com>
+M: Jon Mason <jdmason@kudzu.us>
+M: Dave Jiang <dave.jiang@intel.com>
 S: Supported
 W: https://github.com/jonmason/ntb/wiki
 T: git git://github.com/jonmason/ntb.git
@@ -7054,7 +7055,7 @@ S: Maintained
 F: drivers/pinctrl/sh-pfc/
 PIN CONTROLLER - SAMSUNG
-M: Tomasz Figa <t.figa@samsung.com>
+M: Tomasz Figa <tomasz.figa@gmail.com>
 M: Thomas Abraham <thomas.abraham@linaro.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
@@ -7900,7 +7901,8 @@ S: Supported
 F: drivers/media/i2c/s5k5baf.c
 SAMSUNG SOC CLOCK DRIVERS
-M: Tomasz Figa <t.figa@samsung.com>
+M: Sylwester Nawrocki <s.nawrocki@samsung.com>
+M: Tomasz Figa <tomasz.figa@gmail.com>
 S: Supported
 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
 F: drivers/clk/samsung/
@@ -7913,6 +7915,19 @@ S: Supported
 L: netdev@vger.kernel.org
 F: drivers/net/ethernet/samsung/sxgbe/
+SAMSUNG USB2 PHY DRIVER
+M: Kamil Debski <k.debski@samsung.com>
+L: linux-kernel@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/phy/samsung-phy.txt
+F: Documentation/phy/samsung-usb2.txt
+F: drivers/phy/phy-exynos4210-usb2.c
+F: drivers/phy/phy-exynos4x12-usb2.c
+F: drivers/phy/phy-exynos5250-usb2.c
+F: drivers/phy/phy-s5pv210-usb2.c
+F: drivers/phy/phy-samsung-usb2.c
+F: drivers/phy/phy-samsung-usb2.h
 SERIAL DRIVERS
 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 L: linux-serial@vger.kernel.org


@@ -1,7 +1,7 @@
 VERSION = 3
 PATCHLEVEL = 17
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Shuffling Zombie Juror
 # *DOCUMENTATION*


@@ -93,7 +93,7 @@
 	};
 	tv: connector {
-		compatible = "composite-connector";
+		compatible = "composite-video-connector";
 		label = "tv";
 		port {


@@ -26,25 +26,14 @@ static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
 	__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
 }
-static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir,
-		struct dma_attrs *attrs)
-{
-	if (__generic_dma_ops(hwdev)->unmap_page)
-		__generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
-}
+void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+		size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs);
-static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
-	dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	if (__generic_dma_ops(hwdev)->sync_single_for_cpu)
-		__generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
-}
+void xen_dma_sync_single_for_cpu(struct device *hwdev,
+	dma_addr_t handle, size_t size, enum dma_data_direction dir);
-static inline void xen_dma_sync_single_for_device(struct device *hwdev,
-	dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	if (__generic_dma_ops(hwdev)->sync_single_for_device)
-		__generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
-}
+void xen_dma_sync_single_for_device(struct device *hwdev,
+	dma_addr_t handle, size_t size, enum dma_data_direction dir);
 #endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */


@@ -33,7 +33,6 @@ typedef struct xpaddr {
 #define INVALID_P2M_ENTRY	(~0UL)
 unsigned long __pfn_to_mfn(unsigned long pfn);
-unsigned long __mfn_to_pfn(unsigned long mfn);
 extern struct rb_root phys_to_mach;
 static inline unsigned long pfn_to_mfn(unsigned long pfn)
@@ -51,14 +50,6 @@ static inline unsigned long pfn_to_mfn(unsigned long pfn)
 static inline unsigned long mfn_to_pfn(unsigned long mfn)
 {
-	unsigned long pfn;
-	if (phys_to_mach.rb_node != NULL) {
-		pfn = __mfn_to_pfn(mfn);
-		if (pfn != INVALID_P2M_ENTRY)
-			return pfn;
-	}
 	return mfn;
 }


@@ -1 +1 @@
-obj-y		:= enlighten.o hypercall.o grant-table.o p2m.o mm.o
+obj-y		:= enlighten.o hypercall.o grant-table.o p2m.o mm.o mm32.o


@@ -260,6 +260,12 @@ static int __init xen_guest_init(void)
 	xen_domain_type = XEN_HVM_DOMAIN;
 	xen_setup_features();
+	if (!xen_feature(XENFEAT_grant_map_identity)) {
+		pr_warn("Please upgrade your Xen.\n"
+			"If your platform has any non-coherent DMA devices, they won't work properly.\n");
+	}
 	if (xen_feature(XENFEAT_dom0))
 		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
 	else

arch/arm/xen/mm32.c (new file, 202 lines)

@@ -0,0 +1,202 @@
#include <linux/cpu.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <xen/features.h>
static DEFINE_PER_CPU(unsigned long, xen_mm32_scratch_virt);
static DEFINE_PER_CPU(pte_t *, xen_mm32_scratch_ptep);
static int alloc_xen_mm32_scratch_page(int cpu)
{
struct page *page;
unsigned long virt;
pmd_t *pmdp;
pte_t *ptep;
if (per_cpu(xen_mm32_scratch_ptep, cpu) != NULL)
return 0;
page = alloc_page(GFP_KERNEL);
if (page == NULL) {
pr_warn("Failed to allocate xen_mm32_scratch_page for cpu %d\n", cpu);
return -ENOMEM;
}
virt = (unsigned long)__va(page_to_phys(page));
pmdp = pmd_offset(pud_offset(pgd_offset_k(virt), virt), virt);
ptep = pte_offset_kernel(pmdp, virt);
per_cpu(xen_mm32_scratch_virt, cpu) = virt;
per_cpu(xen_mm32_scratch_ptep, cpu) = ptep;
return 0;
}
static int xen_mm32_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{
int cpu = (long)hcpu;
switch (action) {
case CPU_UP_PREPARE:
if (alloc_xen_mm32_scratch_page(cpu))
return NOTIFY_BAD;
break;
default:
break;
}
return NOTIFY_OK;
}
static struct notifier_block xen_mm32_cpu_notifier = {
.notifier_call = xen_mm32_cpu_notify,
};
static void* xen_mm32_remap_page(dma_addr_t handle)
{
unsigned long virt = get_cpu_var(xen_mm32_scratch_virt);
pte_t *ptep = __get_cpu_var(xen_mm32_scratch_ptep);
*ptep = pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL);
local_flush_tlb_kernel_page(virt);
return (void*)virt;
}
static void xen_mm32_unmap(void *vaddr)
{
put_cpu_var(xen_mm32_scratch_virt);
}
/* functions called by SWIOTLB */
static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
size_t size, enum dma_data_direction dir,
void (*op)(const void *, size_t, int))
{
unsigned long pfn;
size_t left = size;
pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
offset %= PAGE_SIZE;
do {
size_t len = left;
void *vaddr;
if (!pfn_valid(pfn))
{
/* Cannot map the page, we don't know its physical address.
* Return and hope for the best */
if (!xen_feature(XENFEAT_grant_map_identity))
return;
vaddr = xen_mm32_remap_page(handle) + offset;
op(vaddr, len, dir);
xen_mm32_unmap(vaddr - offset);
} else {
struct page *page = pfn_to_page(pfn);
if (PageHighMem(page)) {
if (len + offset > PAGE_SIZE)
len = PAGE_SIZE - offset;
if (cache_is_vipt_nonaliasing()) {
vaddr = kmap_atomic(page);
op(vaddr + offset, len, dir);
kunmap_atomic(vaddr);
} else {
vaddr = kmap_high_get(page);
if (vaddr) {
op(vaddr + offset, len, dir);
kunmap_high(page);
}
}
} else {
vaddr = page_address(page) + offset;
op(vaddr, len, dir);
}
}
offset = 0;
pfn++;
left -= len;
} while (left);
}
static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
/* Cannot use __dma_page_dev_to_cpu because we don't have a
* struct page for handle */
if (dir != DMA_TO_DEVICE)
outer_inv_range(handle, handle + size);
dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area);
}
static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area);
if (dir == DMA_FROM_DEVICE) {
outer_inv_range(handle, handle + size);
} else {
outer_clean_range(handle, handle + size);
}
}
void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
if (!__generic_dma_ops(hwdev)->unmap_page)
return;
if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
return;
__xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
}
void xen_dma_sync_single_for_cpu(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
if (!__generic_dma_ops(hwdev)->sync_single_for_cpu)
return;
__xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
}
void xen_dma_sync_single_for_device(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
if (!__generic_dma_ops(hwdev)->sync_single_for_device)
return;
__xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
}
int __init xen_mm32_init(void)
{
int cpu;
if (!xen_initial_domain())
return 0;
register_cpu_notifier(&xen_mm32_cpu_notifier);
get_online_cpus();
for_each_online_cpu(cpu) {
if (alloc_xen_mm32_scratch_page(cpu)) {
put_online_cpus();
unregister_cpu_notifier(&xen_mm32_cpu_notifier);
return -ENOMEM;
}
}
put_online_cpus();
return 0;
}
arch_initcall(xen_mm32_init);


@@ -21,14 +21,12 @@ struct xen_p2m_entry {
 	unsigned long pfn;
 	unsigned long mfn;
 	unsigned long nr_pages;
-	struct rb_node rbnode_mach;
 	struct rb_node rbnode_phys;
 };
 static rwlock_t p2m_lock;
 struct rb_root phys_to_mach = RB_ROOT;
 EXPORT_SYMBOL_GPL(phys_to_mach);
-static struct rb_root mach_to_phys = RB_ROOT;
 static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
 {
@@ -41,8 +39,6 @@ static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new)
 		parent = *link;
 		entry = rb_entry(parent, struct xen_p2m_entry, rbnode_phys);
-		if (new->mfn == entry->mfn)
-			goto err_out;
 		if (new->pfn == entry->pfn)
 			goto err_out;
@@ -88,64 +84,6 @@ unsigned long __pfn_to_mfn(unsigned long pfn)
 }
 EXPORT_SYMBOL_GPL(__pfn_to_mfn);
-static int xen_add_mach_to_phys_entry(struct xen_p2m_entry *new)
-{
-	struct rb_node **link = &mach_to_phys.rb_node;
-	struct rb_node *parent = NULL;
-	struct xen_p2m_entry *entry;
-	int rc = 0;
-	while (*link) {
-		parent = *link;
-		entry = rb_entry(parent, struct xen_p2m_entry, rbnode_mach);
-		if (new->mfn == entry->mfn)
-			goto err_out;
-		if (new->pfn == entry->pfn)
-			goto err_out;
-		if (new->mfn < entry->mfn)
-			link = &(*link)->rb_left;
-		else
-			link = &(*link)->rb_right;
-	}
-	rb_link_node(&new->rbnode_mach, parent, link);
-	rb_insert_color(&new->rbnode_mach, &mach_to_phys);
-	goto out;
-err_out:
-	rc = -EINVAL;
-	pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n",
-			__func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn);
-out:
-	return rc;
-}
-unsigned long __mfn_to_pfn(unsigned long mfn)
-{
-	struct rb_node *n = mach_to_phys.rb_node;
-	struct xen_p2m_entry *entry;
-	unsigned long irqflags;
-	read_lock_irqsave(&p2m_lock, irqflags);
-	while (n) {
-		entry = rb_entry(n, struct xen_p2m_entry, rbnode_mach);
-		if (entry->mfn <= mfn &&
-				entry->mfn + entry->nr_pages > mfn) {
-			read_unlock_irqrestore(&p2m_lock, irqflags);
-			return entry->pfn + (mfn - entry->mfn);
-		}
-		if (mfn < entry->mfn)
-			n = n->rb_left;
-		else
-			n = n->rb_right;
-	}
-	read_unlock_irqrestore(&p2m_lock, irqflags);
-	return INVALID_P2M_ENTRY;
-}
-EXPORT_SYMBOL_GPL(__mfn_to_pfn);
 int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
 			    struct gnttab_map_grant_ref *kmap_ops,
 			    struct page **pages, unsigned int count)
@@ -192,7 +130,6 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
 			p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
 			if (p2m_entry->pfn <= pfn &&
 			    p2m_entry->pfn + p2m_entry->nr_pages > pfn) {
-				rb_erase(&p2m_entry->rbnode_mach, &mach_to_phys);
 				rb_erase(&p2m_entry->rbnode_phys, &phys_to_mach);
 				write_unlock_irqrestore(&p2m_lock, irqflags);
 				kfree(p2m_entry);
@@ -217,8 +154,7 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
 	p2m_entry->mfn = mfn;
 	write_lock_irqsave(&p2m_lock, irqflags);
-	if ((rc = xen_add_phys_to_mach_entry(p2m_entry) < 0) ||
-	    (rc = xen_add_mach_to_phys_entry(p2m_entry) < 0)) {
+	if ((rc = xen_add_phys_to_mach_entry(p2m_entry)) < 0) {
 		write_unlock_irqrestore(&p2m_lock, irqflags);
 		return false;
 	}


@@ -97,19 +97,15 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
 		return false;
-	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids)
+	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
+		affinity = cpu_online_mask;
 		ret = true;
+	}
-	/*
-	 * when using forced irq_set_affinity we must ensure that the cpu
-	 * being offlined is not present in the affinity mask, it may be
-	 * selected as the target CPU otherwise
-	 */
-	affinity = cpu_online_mask;
 	c = irq_data_get_irq_chip(d);
 	if (!c->irq_set_affinity)
 		pr_debug("IRQ%u: unable to set affinity\n", d->irq);
-	else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
+	else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
 		cpumask_copy(d->affinity, affinity);
 	return ret;


@@ -230,9 +230,27 @@ void exit_thread(void)
 {
 }
+static void tls_thread_flush(void)
+{
+	asm ("msr tpidr_el0, xzr");
+	if (is_compat_task()) {
+		current->thread.tp_value = 0;
+		/*
+		 * We need to ensure ordering between the shadow state and the
+		 * hardware state, so that we don't corrupt the hardware state
+		 * with a stale shadow state during context switch.
+		 */
+		barrier();
+		asm ("msr tpidrro_el0, xzr");
+	}
+}
 void flush_thread(void)
 {
 	fpsimd_flush_thread();
+	tls_thread_flush();
 	flush_ptrace_hw_breakpoint(current);
 }


@@ -79,6 +79,12 @@ long compat_arm_syscall(struct pt_regs *regs)
 	case __ARM_NR_compat_set_tls:
 		current->thread.tp_value = regs->regs[0];
+		/*
+		 * Protect against register corruption from context switch.
+		 * See comment in tls_thread_flush.
+		 */
+		barrier();
 		asm ("msr tpidrro_el0, %0" : : "r" (regs->regs[0]));
 		return 0;


@@ -127,7 +127,7 @@ config SECCOMP
 endmenu
-menu "Advanced setup"
+menu "Kernel features"
 config ADVANCED_OPTIONS
 	bool "Prompt for advanced kernel configuration options"
@@ -248,10 +248,10 @@ config MICROBLAZE_64K_PAGES
 endchoice
-endmenu
 source "mm/Kconfig"
+endmenu
 menu "Executable file formats"
 source "fs/Kconfig.binfmt"


@@ -15,6 +15,7 @@
 #include <asm/percpu.h>
 #include <asm/ptrace.h>
+#include <linux/linkage.h>
 /*
  * These are per-cpu variables required in entry.S, among other


@@ -98,13 +98,13 @@ static inline int access_ok(int type, const void __user *addr,
 	if ((get_fs().seg < ((unsigned long)addr)) ||
 			(get_fs().seg < ((unsigned long)addr + size - 1))) {
-		pr_debug("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
+		pr_devel("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
 			type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
 			(u32)get_fs().seg);
 		return 0;
 	}
 ok:
-	pr_debug("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
+	pr_devel("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n",
 		type ? "WRITE" : "READ ", (__force u32)addr, (u32)size,
 		(u32)get_fs().seg);
 	return 1;


@@ -38,6 +38,6 @@
 #endif /* __ASSEMBLY__ */
-#define __NR_syscalls 381
+#define __NR_syscalls 387
 #endif /* _ASM_MICROBLAZE_UNISTD_H */


@@ -321,6 +321,22 @@ source "fs/Kconfig"
 source "arch/parisc/Kconfig.debug"
+config SECCOMP
+	def_bool y
+	prompt "Enable seccomp to safely compute untrusted bytecode"
+	---help---
+	  This kernel feature is useful for number crunching applications
+	  that may need to compute untrusted bytecode during their
+	  execution. By using pipes or other transports made available to
+	  the process as file descriptors supporting the read/write
+	  syscalls, it's possible to isolate those applications in
+	  their own address space using seccomp. Once seccomp is
+	  enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
+	  and the task is only allowed to execute a few safe syscalls
+	  defined by each seccomp mode.
+
+	  If unsure, say Y. Only embedded should say N here.
+
 source "security/Kconfig"
 source "crypto/Kconfig"


@@ -456,7 +456,7 @@ int hpux_sysfs(int opcode, unsigned long arg1, unsigned long arg2)
 	}
 	/* String could be altered by userspace after strlen_user() */
-	fsname[len] = '\0';
+	fsname[len - 1] = '\0';
 	printk(KERN_DEBUG "that is '%s' as (char *)\n", fsname);
 	if ( !strcmp(fsname, "hfs") ) {


@@ -0,0 +1,16 @@
#ifndef _ASM_PARISC_SECCOMP_H
#define _ASM_PARISC_SECCOMP_H
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
#define __NR_seccomp_write __NR_write
#define __NR_seccomp_exit __NR_exit
#define __NR_seccomp_sigreturn __NR_rt_sigreturn
#define __NR_seccomp_read_32 __NR_read
#define __NR_seccomp_write_32 __NR_write
#define __NR_seccomp_exit_32 __NR_exit
#define __NR_seccomp_sigreturn_32 __NR_rt_sigreturn
#endif /* _ASM_PARISC_SECCOMP_H */


@@ -60,6 +60,7 @@ struct thread_info {
 #define TIF_NOTIFY_RESUME	8	/* callback before returning to user */
 #define TIF_SINGLESTEP		9	/* single stepping? */
 #define TIF_BLOCKSTEP		10	/* branch stepping? */
+#define TIF_SECCOMP		11	/* secure computing */
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
@@ -70,11 +71,13 @@ struct thread_info {
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_BLOCKSTEP		(1 << TIF_BLOCKSTEP)
+#define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_USER_WORK_MASK	(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
 				 _TIF_NEED_RESCHED)
-#define _TIF_SYSCALL_TRACE_MASK	(_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
-				 _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT)
+#define _TIF_SYSCALL_TRACE_MASK	(_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
+				 _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \
+				 _TIF_SECCOMP)
 #ifdef CONFIG_64BIT
 # ifdef CONFIG_COMPAT


@@ -830,8 +830,11 @@
 #define __NR_sched_getattr	(__NR_Linux + 335)
 #define __NR_utimes		(__NR_Linux + 336)
 #define __NR_renameat2		(__NR_Linux + 337)
+#define __NR_seccomp		(__NR_Linux + 338)
+#define __NR_getrandom		(__NR_Linux + 339)
+#define __NR_memfd_create	(__NR_Linux + 340)
-#define __NR_Linux_syscalls	(__NR_renameat2 + 1)
+#define __NR_Linux_syscalls	(__NR_memfd_create + 1)
 #define __IGNORE_select		/* newselect */


@@ -270,6 +270,12 @@ long do_syscall_trace_enter(struct pt_regs *regs)
 {
 	long ret = 0;
+	/* Do the secure computing check first. */
+	if (secure_computing(regs->gr[20])) {
+		/* seccomp failures shouldn't expose any additional code. */
+		return -1;
+	}
+
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))
 		ret = -1L;


@@ -74,7 +74,7 @@ ENTRY(linux_gateway_page)
 	/* ADDRESS 0xb0 to 0xb8, lws uses two insns for entry */
 	/* Light-weight-syscall entry must always be located at 0xb0 */
 	/* WARNING: Keep this number updated with table size changes */
-#define __NR_lws_entries (2)
+#define __NR_lws_entries (3)
 lws_entry:
 	gate	lws_start, %r0		/* increase privilege */
@@ -502,7 +502,7 @@ lws_exit:
 	/***************************************************
-		Implementing CAS as an atomic operation:
+		Implementing 32bit CAS as an atomic operation:
 		%r26 - Address to examine
 		%r25 - Old value to check (old)
@@ -659,6 +659,230 @@ cas_action:
 	ASM_EXCEPTIONTABLE_ENTRY(2b-linux_gateway_page, 3b-linux_gateway_page)
/***************************************************
New CAS implementation which uses pointers and variable size
information. The value pointed by old and new MUST NOT change
while performing CAS. The lock only protect the value at %r26.
%r26 - Address to examine
%r25 - Pointer to the value to check (old)
%r24 - Pointer to the value to set (new)
%r23 - Size of the variable (0/1/2/3 for 8/16/32/64 bit)
%r28 - Return non-zero on failure
%r21 - Kernel error code
%r21 has the following meanings:
EAGAIN - CAS is busy, ldcw failed, try again.
EFAULT - Read or write failed.
Scratch: r20, r22, r28, r29, r1, fr4 (32bit for 64bit CAS only)
****************************************************/
/* ELF32 Process entry path */
lws_compare_and_swap_2:
#ifdef CONFIG_64BIT
/* Clip the input registers */
depdi 0, 31, 32, %r26
depdi 0, 31, 32, %r25
depdi 0, 31, 32, %r24
depdi 0, 31, 32, %r23
#endif
/* Check the validity of the size pointer */
subi,>>= 4, %r23, %r0
b,n lws_exit_nosys
/* Jump to the functions which will load the old and new values into
registers depending on the their size */
shlw %r23, 2, %r29
blr %r29, %r0
nop
/* 8bit load */
4: ldb 0(%sr3,%r25), %r25
b cas2_lock_start
5: ldb 0(%sr3,%r24), %r24
nop
nop
nop
nop
nop
/* 16bit load */
6: ldh 0(%sr3,%r25), %r25
b cas2_lock_start
7: ldh 0(%sr3,%r24), %r24
nop
nop
nop
nop
nop
/* 32bit load */
8: ldw 0(%sr3,%r25), %r25
b cas2_lock_start
9: ldw 0(%sr3,%r24), %r24
nop
nop
nop
nop
nop
/* 64bit load */
#ifdef CONFIG_64BIT
10: ldd 0(%sr3,%r25), %r25
11: ldd 0(%sr3,%r24), %r24
#else
/* Load new value into r22/r23 - high/low */
10: ldw 0(%sr3,%r25), %r22
11: ldw 4(%sr3,%r25), %r23
/* Load new value into fr4 for atomic store later */
12: flddx 0(%sr3,%r24), %fr4
#endif
cas2_lock_start:
/* Load start of lock table */
ldil L%lws_lock_start, %r20
ldo R%lws_lock_start(%r20), %r28
/* Extract four bits from r26 and hash lock (Bits 4-7) */
extru %r26, 27, 4, %r20
/* Find lock to use, the hash is either one of 0 to
15, multiplied by 16 (keep it 16-byte aligned)
and add to the lock table offset. */
shlw %r20, 4, %r20
add %r20, %r28, %r20
rsm PSW_SM_I, %r0 /* Disable interrupts */
/* COW breaks can cause contention on UP systems */
LDCW 0(%sr2,%r20), %r28 /* Try to acquire the lock */
cmpb,<>,n %r0, %r28, cas2_action /* Did we get it? */
cas2_wouldblock:
ldo 2(%r0), %r28 /* 2nd case */
ssm PSW_SM_I, %r0
b lws_exit /* Contended... */
ldo -EAGAIN(%r0), %r21 /* Spin in userspace */
/*
prev = *addr;
if ( prev == old )
*addr = new;
return prev;
*/
	/* NOTES:
	   This all works because intr_do_signal
	   and schedule both check the return iasq
	   and see that we are on the kernel page
	   so this process is never scheduled off
	   or is ever sent any signal of any sort,
	   thus it is wholly atomic from userspace's
	   perspective
	*/
cas2_action:
/* Jump to the correct function */
blr %r29, %r0
/* Set %r28 as non-zero for now */
ldo 1(%r0),%r28
/* 8bit CAS */
13: ldb,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
14: stb,ma %r24, 0(%sr3,%r26)
b cas2_end
copy %r0, %r28
nop
nop
/* 16bit CAS */
15: ldh,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
16: sth,ma %r24, 0(%sr3,%r26)
b cas2_end
copy %r0, %r28
nop
nop
/* 32bit CAS */
17: ldw,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
18: stw,ma %r24, 0(%sr3,%r26)
b cas2_end
copy %r0, %r28
nop
nop
/* 64bit CAS */
#ifdef CONFIG_64BIT
19: ldd,ma 0(%sr3,%r26), %r29
sub,= %r29, %r25, %r0
b,n cas2_end
20: std,ma %r24, 0(%sr3,%r26)
copy %r0, %r28
#else
/* Compare first word */
19: ldw,ma 0(%sr3,%r26), %r29
sub,= %r29, %r22, %r0
b,n cas2_end
/* Compare second word */
20: ldw,ma 4(%sr3,%r26), %r29
sub,= %r29, %r23, %r0
b,n cas2_end
/* Perform the store */
21: fstdx %fr4, 0(%sr3,%r26)
copy %r0, %r28
#endif
cas2_end:
/* Free lock */
stw,ma %r20, 0(%sr2,%r20)
/* Enable interrupts */
ssm PSW_SM_I, %r0
/* Return to userspace, set no error */
b lws_exit
copy %r0, %r21
22:
/* Error occurred on load or store */
/* Free lock */
stw %r20, 0(%sr2,%r20)
ssm PSW_SM_I, %r0
ldo 1(%r0),%r28
b lws_exit
ldo -EFAULT(%r0),%r21 /* set errno */
nop
nop
nop
/* Exception table entries, for the load and store, return EFAULT.
Each of the entries must be relocated. */
ASM_EXCEPTIONTABLE_ENTRY(4b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(5b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(6b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(7b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(8b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(9b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(10b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(11b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(13b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(14b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(15b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(16b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(17b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(18b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(19b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(20b-linux_gateway_page, 22b-linux_gateway_page)
#ifndef CONFIG_64BIT
ASM_EXCEPTIONTABLE_ENTRY(12b-linux_gateway_page, 22b-linux_gateway_page)
ASM_EXCEPTIONTABLE_ENTRY(21b-linux_gateway_page, 22b-linux_gateway_page)
#endif
 	/* Make sure nothing else is placed on this page */
 	.align PAGE_SIZE
 END(linux_gateway_page)
@@ -675,8 +899,9 @@ ENTRY(end_linux_gateway_page)
 	/* Light-weight-syscall table */
 	/* Start of lws table. */
 ENTRY(lws_table)
-	LWS_ENTRY(compare_and_swap32)	/* 0 - ELF32 Atomic compare and swap */
-	LWS_ENTRY(compare_and_swap64)	/* 1 - ELF64 Atomic compare and swap */
+	LWS_ENTRY(compare_and_swap32)	/* 0 - ELF32 Atomic 32bit CAS */
+	LWS_ENTRY(compare_and_swap64)	/* 1 - ELF64 Atomic 32bit CAS */
+	LWS_ENTRY(compare_and_swap_2)	/* 2 - ELF32 Atomic 64bit CAS */
 END(lws_table)
 	/* End of lws table */


@@ -433,6 +433,9 @@
 	ENTRY_SAME(sched_getattr)	/* 335 */
 	ENTRY_COMP(utimes)
 	ENTRY_SAME(renameat2)
+	ENTRY_SAME(seccomp)
+	ENTRY_SAME(getrandom)
+	ENTRY_SAME(memfd_create)	/* 340 */
 	/* Nothing yet */


@@ -5,6 +5,7 @@ CONFIG_SMP=y
 CONFIG_NR_CPUS=4
 CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=15


@@ -5,6 +5,7 @@ CONFIG_SMP=y
 CONFIG_NR_CPUS=4
 CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=15


@@ -4,6 +4,7 @@ CONFIG_ALTIVEC=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=24
 CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
 CONFIG_IRQ_DOMAIN_DEBUG=y
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y


@@ -5,6 +5,7 @@ CONFIG_NR_CPUS=4
 CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_BLK_DEV_INITRD=y


@@ -4,6 +4,7 @@ CONFIG_NR_CPUS=4
 CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 # CONFIG_COMPAT_BRK is not set


@@ -3,6 +3,7 @@ CONFIG_ALTIVEC=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=2
 CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BLK_DEV_INITRD=y


@@ -4,6 +4,7 @@ CONFIG_VSX=y
 CONFIG_SMP=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_IRQ_DOMAIN_DEBUG=y
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y


@@ -3,6 +3,7 @@ CONFIG_PPC_BOOK3E_64=y
 CONFIG_SMP=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y
 CONFIG_TASKSTATS=y


@@ -5,6 +5,7 @@ CONFIG_SMP=y
 CONFIG_NR_CPUS=2
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_RD_LZMA=y


@@ -5,6 +5,7 @@ CONFIG_SMP=y
 CONFIG_NR_CPUS=2048
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_AUDIT=y
 CONFIG_AUDITSYSCALL=y
 CONFIG_IRQ_DOMAIN_DEBUG=y


@@ -6,6 +6,7 @@ CONFIG_NR_CPUS=2048
 CONFIG_CPU_LITTLE_ENDIAN=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
 CONFIG_AUDIT=y
 CONFIG_AUDITSYSCALL=y
 CONFIG_IRQ_DOMAIN_DEBUG=y


@@ -47,6 +47,12 @@
 				 STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE)
 #define STACK_FRAME_MARKER	12
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+#define STACK_FRAME_MIN_SIZE	32
+#else
+#define STACK_FRAME_MIN_SIZE	STACK_FRAME_OVERHEAD
+#endif
+
 /* Size of dummy stack frame allocated when calling signal handler. */
 #define __SIGNAL_FRAMESIZE	128
 #define __SIGNAL_FRAMESIZE32	64
@@ -60,6 +66,7 @@
 #define STACK_FRAME_REGS_MARKER	ASM_CONST(0x72656773)
 #define STACK_INT_FRAME_SIZE	(sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD)
 #define STACK_FRAME_MARKER	2
+#define STACK_FRAME_MIN_SIZE	STACK_FRAME_OVERHEAD
 /* Size of stack frame allocated when calling signal handler. */
 #define __SIGNAL_FRAMESIZE	64


@@ -362,3 +362,6 @@ SYSCALL(ni_syscall) /* sys_kcmp */
 SYSCALL_SPU(sched_setattr)
 SYSCALL_SPU(sched_getattr)
 SYSCALL_SPU(renameat2)
+SYSCALL_SPU(seccomp)
+SYSCALL_SPU(getrandom)
+SYSCALL_SPU(memfd_create)


@@ -12,7 +12,7 @@
 #include <uapi/asm/unistd.h>
-#define __NR_syscalls		358
+#define __NR_syscalls		361
 #define __NR__exit __NR_exit
 #define NR_syscalls	__NR_syscalls


@@ -380,5 +380,8 @@
 #define __NR_sched_setattr	355
 #define __NR_sched_getattr	356
 #define __NR_renameat2		357
+#define __NR_seccomp		358
+#define __NR_getrandom		359
+#define __NR_memfd_create	360
 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */


@@ -35,7 +35,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
 		return 0;		/* must be 16-byte aligned */
 	if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD))
 		return 0;
-	if (sp >= prev_sp + STACK_FRAME_OVERHEAD)
+	if (sp >= prev_sp + STACK_FRAME_MIN_SIZE)
 		return 1;
 	/*
 	 * sp could decrease when we jump off an interrupt stack


@@ -28,6 +28,7 @@
 #include <asm/opal.h>
 #include <asm/cputable.h>
+#include <asm/machdep.h>
 static int opal_hmi_handler_nb_init;
 struct OpalHmiEvtNode {
@@ -185,4 +186,4 @@ static int __init opal_hmi_handler_init(void)
 	}
 	return 0;
 }
-subsys_initcall(opal_hmi_handler_init);
+machine_subsys_initcall(powernv, opal_hmi_handler_init);


@@ -113,7 +113,7 @@ out:
 static int pseries_remove_mem_node(struct device_node *np)
 {
 	const char *type;
-	const unsigned int *regs;
+	const __be32 *regs;
 	unsigned long base;
 	unsigned int lmb_size;
 	int ret = -EINVAL;
@@ -132,8 +132,8 @@ static int pseries_remove_mem_node(struct device_node *np)
 	if (!regs)
 		return ret;
-	base = *(unsigned long *)regs;
-	lmb_size = regs[3];
+	base = be64_to_cpu(*(unsigned long *)regs);
+	lmb_size = be32_to_cpu(regs[3]);
 	pseries_remove_memblock(base, lmb_size);
 	return 0;
@@ -153,7 +153,7 @@ static inline int pseries_remove_mem_node(struct device_node *np)
 static int pseries_add_mem_node(struct device_node *np)
 {
 	const char *type;
-	const unsigned int *regs;
+	const __be32 *regs;
 	unsigned long base;
 	unsigned int lmb_size;
 	int ret = -EINVAL;
@@ -172,8 +172,8 @@ static int pseries_add_mem_node(struct device_node *np)
 	if (!regs)
 		return ret;
-	base = *(unsigned long *)regs;
-	lmb_size = regs[3];
+	base = be64_to_cpu(*(unsigned long *)regs);
+	lmb_size = be32_to_cpu(regs[3]);
 	/*
 	 * Update memory region to represent the memory add
@@ -187,14 +187,14 @@ static int pseries_update_drconf_memory(struct of_prop_reconfig *pr)
 	struct of_drconf_cell *new_drmem, *old_drmem;
 	unsigned long memblock_size;
 	u32 entries;
-	u32 *p;
+	__be32 *p;
 	int i, rc = -EINVAL;
 	memblock_size = pseries_memory_block_size();
 	if (!memblock_size)
 		return -EINVAL;
-	p = (u32 *) pr->old_prop->value;
+	p = (__be32 *) pr->old_prop->value;
 	if (!p)
 		return -EINVAL;
@@ -203,28 +203,30 @@ static int pseries_update_drconf_memory(struct of_prop_reconfig *pr)
 	 * entries. Get the number of entries and skip to the array of
 	 * of_drconf_cell's.
 	 */
-	entries = *p++;
+	entries = be32_to_cpu(*p++);
 	old_drmem = (struct of_drconf_cell *)p;
-	p = (u32 *)pr->prop->value;
+	p = (__be32 *)pr->prop->value;
 	p++;
 	new_drmem = (struct of_drconf_cell *)p;
 	for (i = 0; i < entries; i++) {
-		if ((old_drmem[i].flags & DRCONF_MEM_ASSIGNED) &&
-		    (!(new_drmem[i].flags & DRCONF_MEM_ASSIGNED))) {
-			rc = pseries_remove_memblock(old_drmem[i].base_addr,
+		if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) &&
+		    (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) {
+			rc = pseries_remove_memblock(
+				be64_to_cpu(old_drmem[i].base_addr),
 						     memblock_size);
 			break;
-		} else if ((!(old_drmem[i].flags & DRCONF_MEM_ASSIGNED)) &&
-			   (new_drmem[i].flags & DRCONF_MEM_ASSIGNED)) {
-			rc = memblock_add(old_drmem[i].base_addr,
+		} else if ((!(be32_to_cpu(old_drmem[i].flags) &
+			    DRCONF_MEM_ASSIGNED)) &&
+			   (be32_to_cpu(new_drmem[i].flags) &
+			    DRCONF_MEM_ASSIGNED)) {
+			rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr),
 					  memblock_size);
 			rc = (rc < 0) ? -EINVAL : 0;
 			break;
 		}
 	}
 	return rc;
 }


@@ -17,12 +17,12 @@
 #define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
 			      sizeof(struct ipl_block_fcp))
-#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 8)
+#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 16)
 #define IPL_PARM_BLK_CCW_LEN (sizeof(struct ipl_list_hdr) + \
 			      sizeof(struct ipl_block_ccw))
-#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 8)
+#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 16)
 #define IPL_MAX_SUPPORTED_VERSION (0)
@@ -38,10 +38,11 @@ struct ipl_list_hdr {
 	u8  pbt;
 	u8  flags;
 	u16 reserved2;
+	u8  loadparm[8];
 } __attribute__((packed));
 struct ipl_block_fcp {
-	u8  reserved1[313-1];
+	u8  reserved1[305-1];
 	u8  opt;
 	u8  reserved2[3];
 	u16 reserved3;
@@ -62,7 +63,6 @@ struct ipl_block_fcp {
 	 offsetof(struct ipl_block_fcp, scp_data)))
 struct ipl_block_ccw {
-	u8  load_parm[8];
 	u8  reserved1[84];
 	u8  reserved2[2];
 	u16 devno;


@@ -455,22 +455,6 @@ DEFINE_IPL_ATTR_RO(ipl_fcp, bootprog, "%lld\n", (unsigned long long)
 DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n", (unsigned long long)
 		   IPL_PARMBLOCK_START->ipl_info.fcp.br_lba);
-static struct attribute *ipl_fcp_attrs[] = {
-	&sys_ipl_type_attr.attr,
-	&sys_ipl_device_attr.attr,
-	&sys_ipl_fcp_wwpn_attr.attr,
-	&sys_ipl_fcp_lun_attr.attr,
-	&sys_ipl_fcp_bootprog_attr.attr,
-	&sys_ipl_fcp_br_lba_attr.attr,
-	NULL,
-};
-
-static struct attribute_group ipl_fcp_attr_group = {
-	.attrs = ipl_fcp_attrs,
-};
-
-/* CCW ipl device attributes */
 static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *page)
 {
@@ -487,6 +471,23 @@ static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
 static struct kobj_attribute sys_ipl_ccw_loadparm_attr =
 	__ATTR(loadparm, 0444, ipl_ccw_loadparm_show, NULL);
+static struct attribute *ipl_fcp_attrs[] = {
+	&sys_ipl_type_attr.attr,
+	&sys_ipl_device_attr.attr,
+	&sys_ipl_fcp_wwpn_attr.attr,
+	&sys_ipl_fcp_lun_attr.attr,
+	&sys_ipl_fcp_bootprog_attr.attr,
+	&sys_ipl_fcp_br_lba_attr.attr,
+	&sys_ipl_ccw_loadparm_attr.attr,
+	NULL,
+};
+
+static struct attribute_group ipl_fcp_attr_group = {
+	.attrs = ipl_fcp_attrs,
+};
+
+/* CCW ipl device attributes */
 static struct attribute *ipl_ccw_attrs_vm[] = {
 	&sys_ipl_type_attr.attr,
 	&sys_ipl_device_attr.attr,
@@ -765,28 +766,10 @@ DEFINE_IPL_ATTR_RW(reipl_fcp, br_lba, "%lld\n", "%lld\n",
 DEFINE_IPL_ATTR_RW(reipl_fcp, device, "0.0.%04llx\n", "0.0.%llx\n",
 		   reipl_block_fcp->ipl_info.fcp.devno);
-static struct attribute *reipl_fcp_attrs[] = {
-	&sys_reipl_fcp_device_attr.attr,
-	&sys_reipl_fcp_wwpn_attr.attr,
-	&sys_reipl_fcp_lun_attr.attr,
-	&sys_reipl_fcp_bootprog_attr.attr,
-	&sys_reipl_fcp_br_lba_attr.attr,
-	NULL,
-};
-
-static struct attribute_group reipl_fcp_attr_group = {
-	.attrs = reipl_fcp_attrs,
-};
-
-/* CCW reipl device attributes */
-
-DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
-	reipl_block_ccw->ipl_info.ccw.devno);
 static void reipl_get_ascii_loadparm(char *loadparm,
 				     struct ipl_parameter_block *ibp)
 {
-	memcpy(loadparm, ibp->ipl_info.ccw.load_parm, LOADPARM_LEN);
+	memcpy(loadparm, ibp->hdr.loadparm, LOADPARM_LEN);
 	EBCASC(loadparm, LOADPARM_LEN);
 	loadparm[LOADPARM_LEN] = 0;
 	strim(loadparm);
@@ -821,13 +804,50 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb,
 			return -EINVAL;
 	}
 	/* initialize loadparm with blanks */
-	memset(ipb->ipl_info.ccw.load_parm, ' ', LOADPARM_LEN);
+	memset(ipb->hdr.loadparm, ' ', LOADPARM_LEN);
 	/* copy and convert to ebcdic */
-	memcpy(ipb->ipl_info.ccw.load_parm, buf, lp_len);
-	ASCEBC(ipb->ipl_info.ccw.load_parm, LOADPARM_LEN);
+	memcpy(ipb->hdr.loadparm, buf, lp_len);
+	ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN);
 	return len;
 }
+
+/* FCP wrapper */
+static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj,
+				       struct kobj_attribute *attr, char *page)
+{
+	return reipl_generic_loadparm_show(reipl_block_fcp, page);
+}
+
+static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj,
+					struct kobj_attribute *attr,
+					const char *buf, size_t len)
+{
+	return reipl_generic_loadparm_store(reipl_block_fcp, buf, len);
+}
+
+static struct kobj_attribute sys_reipl_fcp_loadparm_attr =
+	__ATTR(loadparm, S_IRUGO | S_IWUSR, reipl_fcp_loadparm_show,
+	       reipl_fcp_loadparm_store);
+
+static struct attribute *reipl_fcp_attrs[] = {
+	&sys_reipl_fcp_device_attr.attr,
+	&sys_reipl_fcp_wwpn_attr.attr,
+	&sys_reipl_fcp_lun_attr.attr,
+	&sys_reipl_fcp_bootprog_attr.attr,
+	&sys_reipl_fcp_br_lba_attr.attr,
+	&sys_reipl_fcp_loadparm_attr.attr,
+	NULL,
+};
+
+static struct attribute_group reipl_fcp_attr_group = {
+	.attrs = reipl_fcp_attrs,
+};
+
+/* CCW reipl device attributes */
+
+DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n",
+	reipl_block_ccw->ipl_info.ccw.devno);
+
 /* NSS wrapper */
 static ssize_t reipl_nss_loadparm_show(struct kobject *kobj,
 				       struct kobj_attribute *attr, char *page)
@@ -1125,11 +1145,10 @@ static void reipl_block_ccw_fill_parms(struct ipl_parameter_block *ipb)
 	/* LOADPARM */
 	/* check if read scp info worked and set loadparm */
 	if (sclp_ipl_info.is_valid)
-		memcpy(ipb->ipl_info.ccw.load_parm,
-		       &sclp_ipl_info.loadparm, LOADPARM_LEN);
+		memcpy(ipb->hdr.loadparm, &sclp_ipl_info.loadparm, LOADPARM_LEN);
 	else
 		/* read scp info failed: set empty loadparm (EBCDIC blanks) */
-		memset(ipb->ipl_info.ccw.load_parm, 0x40, LOADPARM_LEN);
+		memset(ipb->hdr.loadparm, 0x40, LOADPARM_LEN);
 	ipb->hdr.flags = DIAG308_FLAGS_LP_VALID;
 	/* VM PARM */
@@ -1251,9 +1270,16 @@ static int __init reipl_fcp_init(void)
 		return rc;
 	}
-	if (ipl_info.type == IPL_TYPE_FCP)
+	if (ipl_info.type == IPL_TYPE_FCP) {
 		memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE);
-	else {
+		/*
+		 * Fix loadparm: There are systems where the (SCSI) LOADPARM
+		 * is invalid in the SCSI IPL parameter block, so take it
+		 * always from sclp_ipl_info.
+		 */
+		memcpy(reipl_block_fcp->hdr.loadparm, sclp_ipl_info.loadparm,
+		       LOADPARM_LEN);
+	} else {
 		reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
 		reipl_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION;
 		reipl_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
@@ -1864,7 +1890,23 @@ static void __init shutdown_actions_init(void)
 static int __init s390_ipl_init(void)
 {
+	char str[8] = {0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40};
+
 	sclp_get_ipl_info(&sclp_ipl_info);
+	/*
+	 * Fix loadparm: There are systems where the (SCSI) LOADPARM
+	 * returned by read SCP info is invalid (contains EBCDIC blanks)
+	 * when the system has been booted via diag308. In that case we use
+	 * the value from diag308, if available.
+	 *
+	 * There are also systems where diag308 store does not work in
+	 * case the system is booted from HMC. Fortunately in this case
+	 * READ SCP info provides the correct value.
+	 */
+	if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 &&
+	    diag308_set_works)
+		memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm,
+		       LOADPARM_LEN);
 	shutdown_actions_init();
 	shutdown_triggers_init();
 	return 0;


@@ -22,13 +22,11 @@ __kernel_clock_gettime:
 	basr	%r5,0
 0:	al	%r5,21f-0b(%r5)			/* get &_vdso_data */
 	chi	%r2,__CLOCK_REALTIME
-	je	10f
+	je	11f
 	chi	%r2,__CLOCK_MONOTONIC
 	jne	19f
 	/* CLOCK_MONOTONIC */
-	ltr	%r3,%r3
-	jz	9f				/* tp == NULL */
1:	l	%r4,__VDSO_UPD_COUNT+4(%r5)	/* load update counter */
 	tml	%r4,0x0001			/* pending update ? loop */
 	jnz	1b
@@ -67,12 +65,10 @@ __kernel_clock_gettime:
 	j	6b
8:	st	%r2,0(%r3)			/* store tp->tv_sec */
 	st	%r1,4(%r3)			/* store tp->tv_nsec */
-9:	lhi	%r2,0
+	lhi	%r2,0
 	br	%r14
 	/* CLOCK_REALTIME */
-10:	ltr	%r3,%r3				/* tp == NULL */
-	jz	18f
11:	l	%r4,__VDSO_UPD_COUNT+4(%r5)	/* load update counter */
 	tml	%r4,0x0001			/* pending update ? loop */
 	jnz	11b
@@ -111,7 +107,7 @@ __kernel_clock_gettime:
 	j	15b
17:	st	%r2,0(%r3)			/* store tp->tv_sec */
 	st	%r1,4(%r3)			/* store tp->tv_nsec */
-18:	lhi	%r2,0
+	lhi	%r2,0
 	br	%r14
 	/* Fallback to system call */


@@ -21,7 +21,7 @@ __kernel_clock_gettime:
 	.cfi_startproc
 	larl	%r5,_vdso_data
 	cghi	%r2,__CLOCK_REALTIME
-	je	4f
+	je	5f
 	cghi	%r2,__CLOCK_THREAD_CPUTIME_ID
 	je	9f
 	cghi	%r2,-2		/* Per-thread CPUCLOCK with PID=0, VIRT=1 */
@@ -30,8 +30,6 @@ __kernel_clock_gettime:
 	jne	12f
 	/* CLOCK_MONOTONIC */
-	ltgr	%r3,%r3
-	jz	3f				/* tp == NULL */
0:	lg	%r4,__VDSO_UPD_COUNT(%r5)	/* load update counter */
 	tmll	%r4,0x0001			/* pending update ? loop */
 	jnz	0b
@@ -53,12 +51,10 @@ __kernel_clock_gettime:
 	j	1b
2:	stg	%r0,0(%r3)			/* store tp->tv_sec */
 	stg	%r1,8(%r3)			/* store tp->tv_nsec */
-3:	lghi	%r2,0
+	lghi	%r2,0
 	br	%r14
 	/* CLOCK_REALTIME */
-4:	ltr	%r3,%r3				/* tp == NULL */
-	jz	8f
5:	lg	%r4,__VDSO_UPD_COUNT(%r5)	/* load update counter */
 	tmll	%r4,0x0001			/* pending update ? loop */
 	jnz	5b
@@ -80,7 +76,7 @@ __kernel_clock_gettime:
 	j	6b
7:	stg	%r0,0(%r3)			/* store tp->tv_sec */
 	stg	%r1,8(%r3)			/* store tp->tv_nsec */
-8:	lghi	%r2,0
+	lghi	%r2,0
 	br	%r14
 	/* CLOCK_THREAD_CPUTIME_ID for this thread */


@@ -105,6 +105,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 		get_page(page);
+		__flush_anon_page(page, addr);
+		flush_dcache_page(page);
 		pages[*nr] = page;
 		(*nr)++;


@@ -23,6 +23,7 @@ config X86
 	def_bool y
 	select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
 	select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
+	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select HAVE_AOUT if X86_32


@@ -497,8 +497,6 @@ static __always_inline int fls64(__u64 x)
 #include <asm-generic/bitops/sched.h>
-#define ARCH_HAS_FAST_MULTIPLIER 1
-
 #include <asm/arch_hweight.h>
 #include <asm-generic/bitops/const_hweight.h>


@@ -19,6 +19,7 @@ extern pud_t level3_ident_pgt[512];
 extern pmd_t level2_kernel_pgt[512];
 extern pmd_t level2_fixmap_pgt[512];
 extern pmd_t level2_ident_pgt[512];
+extern pte_t level1_fixmap_pgt[512];
 extern pgd_t init_level4_pgt[];
 #define swapper_pg_dir init_level4_pgt


@ -1866,12 +1866,11 @@ static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
* *
* We can construct this by grafting the Xen provided pagetable into * We can construct this by grafting the Xen provided pagetable into
* head_64.S's preconstructed pagetables. We copy the Xen L2's into * head_64.S's preconstructed pagetables. We copy the Xen L2's into
* level2_ident_pgt, level2_kernel_pgt and level2_fixmap_pgt. This * level2_ident_pgt, and level2_kernel_pgt. This means that only the
* means that only the kernel has a physical mapping to start with - * kernel has a physical mapping to start with - but that's enough to
* but that's enough to get __va working. We need to fill in the rest * get __va working. We need to fill in the rest of the physical
* of the physical mapping once some sort of allocator has been set * mapping once some sort of allocator has been set up. NOTE: for
* up. * PVH, the page tables are native.
* NOTE: for PVH, the page tables are native.
*/ */
void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn) void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
{ {
@ -1902,8 +1901,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
/* L3_i[0] -> level2_ident_pgt */ /* L3_i[0] -> level2_ident_pgt */
convert_pfn_mfn(level3_ident_pgt); convert_pfn_mfn(level3_ident_pgt);
/* L3_k[510] -> level2_kernel_pgt /* L3_k[510] -> level2_kernel_pgt
* L3_i[511] -> level2_fixmap_pgt */ * L3_k[511] -> level2_fixmap_pgt */
convert_pfn_mfn(level3_kernel_pgt); convert_pfn_mfn(level3_kernel_pgt);
/* L3_k[511][506] -> level1_fixmap_pgt */
convert_pfn_mfn(level2_fixmap_pgt);
} }
/* We get [511][511] and have Xen's version of level2_kernel_pgt */ /* We get [511][511] and have Xen's version of level2_kernel_pgt */
l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd); l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
@ -1913,21 +1915,15 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
addr[1] = (unsigned long)l3; addr[1] = (unsigned long)l3;
addr[2] = (unsigned long)l2; addr[2] = (unsigned long)l2;
/* Graft it onto L4[272][0]. Note that we are creating an aliasing problem: /* Graft it onto L4[272][0]. Note that we are creating an aliasing problem:
* Both L4[272][0] and L4[511][511] have entries that point to the same * Both L4[272][0] and L4[511][510] have entries that point to the same
* L2 (PMD) tables. Meaning that if you modify it in __va space * L2 (PMD) tables. Meaning that if you modify it in __va space
* it will also be modified in the __ka space! (But if you just * it will also be modified in the __ka space! (But if you just
* modify the PMD table to point to other PTE's or none, then you * modify the PMD table to point to other PTE's or none, then you
* are OK - which is what cleanup_highmap does) */ * are OK - which is what cleanup_highmap does) */
copy_page(level2_ident_pgt, l2); copy_page(level2_ident_pgt, l2);
/* Graft it onto L4[511][511] */ /* Graft it onto L4[511][510] */
copy_page(level2_kernel_pgt, l2); copy_page(level2_kernel_pgt, l2);
/* Get [511][510] and graft that in level2_fixmap_pgt */
l3 = m2v(pgd[pgd_index(__START_KERNEL_map + PMD_SIZE)].pgd);
l2 = m2v(l3[pud_index(__START_KERNEL_map + PMD_SIZE)].pud);
copy_page(level2_fixmap_pgt, l2);
/* Note that we don't do anything with level1_fixmap_pgt which
* we don't need. */
if (!xen_feature(XENFEAT_auto_translated_physmap)) { if (!xen_feature(XENFEAT_auto_translated_physmap)) {
/* Make pagetable pieces RO */ /* Make pagetable pieces RO */
set_page_prot(init_level4_pgt, PAGE_KERNEL_RO); set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
@ -1937,6 +1933,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO); set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO); set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO); set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
/* Pin down new L4 */ /* Pin down new L4 */
pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE, pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,


@ -10,10 +10,11 @@
#include "blk.h" #include "blk.h"
static unsigned int __blk_recalc_rq_segments(struct request_queue *q, static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
struct bio *bio) struct bio *bio,
bool no_sg_merge)
{ {
struct bio_vec bv, bvprv = { NULL }; struct bio_vec bv, bvprv = { NULL };
int cluster, high, highprv = 1, no_sg_merge; int cluster, high, highprv = 1;
unsigned int seg_size, nr_phys_segs; unsigned int seg_size, nr_phys_segs;
struct bio *fbio, *bbio; struct bio *fbio, *bbio;
struct bvec_iter iter; struct bvec_iter iter;
@ -35,7 +36,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
cluster = blk_queue_cluster(q); cluster = blk_queue_cluster(q);
seg_size = 0; seg_size = 0;
nr_phys_segs = 0; nr_phys_segs = 0;
no_sg_merge = test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
high = 0; high = 0;
for_each_bio(bio) { for_each_bio(bio) {
bio_for_each_segment(bv, bio, iter) { bio_for_each_segment(bv, bio, iter) {
@ -88,18 +88,23 @@ new_segment:
void blk_recalc_rq_segments(struct request *rq) void blk_recalc_rq_segments(struct request *rq)
{ {
rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio); bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE,
&rq->q->queue_flags);
rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio,
no_sg_merge);
} }
void blk_recount_segments(struct request_queue *q, struct bio *bio) void blk_recount_segments(struct request_queue *q, struct bio *bio)
{ {
if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags)) if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
bio->bi_vcnt < queue_max_segments(q))
bio->bi_phys_segments = bio->bi_vcnt; bio->bi_phys_segments = bio->bi_vcnt;
else { else {
struct bio *nxt = bio->bi_next; struct bio *nxt = bio->bi_next;
bio->bi_next = NULL; bio->bi_next = NULL;
bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio); bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false);
bio->bi_next = nxt; bio->bi_next = nxt;
} }


@ -1321,6 +1321,7 @@ static void blk_mq_free_rq_map(struct blk_mq_tag_set *set,
continue; continue;
set->ops->exit_request(set->driver_data, tags->rqs[i], set->ops->exit_request(set->driver_data, tags->rqs[i],
hctx_idx, i); hctx_idx, i);
tags->rqs[i] = NULL;
} }
} }
@ -1354,8 +1355,9 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set,
INIT_LIST_HEAD(&tags->page_list); INIT_LIST_HEAD(&tags->page_list);
tags->rqs = kmalloc_node(set->queue_depth * sizeof(struct request *), tags->rqs = kzalloc_node(set->queue_depth * sizeof(struct request *),
GFP_KERNEL, set->numa_node); GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
set->numa_node);
if (!tags->rqs) { if (!tags->rqs) {
blk_mq_free_tags(tags); blk_mq_free_tags(tags);
return NULL; return NULL;
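
For context, the switch from kmalloc_node() to kzalloc_node() with
__GFP_NOWARN | __GFP_NORETRY makes this an opportunistic allocation: it fails
fast and silently instead of retrying under memory pressure, because the
caller now has a fallback (halving the queue depth and trying again, see
blk_mq_alloc_rq_maps() below). A minimal sketch of the idiom (alloc_rq_array()
is a hypothetical helper):

#include <linux/slab.h>

static void **alloc_rq_array(unsigned int depth, int node)
{
	/*
	 * __GFP_NORETRY: give up instead of looping or invoking the
	 * OOM killer; __GFP_NOWARN: no allocation-failure splat, the
	 * failure is expected and handled by the caller.
	 */
	return kzalloc_node(depth * sizeof(void *),
			    GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
			    node);
}
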
@ -1379,8 +1381,9 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set,
this_order--; this_order--;
do { do {
page = alloc_pages_node(set->numa_node, GFP_KERNEL, page = alloc_pages_node(set->numa_node,
this_order); GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
this_order);
if (page) if (page)
break; break;
if (!this_order--) if (!this_order--)
@ -1404,8 +1407,10 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set,
if (set->ops->init_request) { if (set->ops->init_request) {
if (set->ops->init_request(set->driver_data, if (set->ops->init_request(set->driver_data,
tags->rqs[i], hctx_idx, i, tags->rqs[i], hctx_idx, i,
set->numa_node)) set->numa_node)) {
tags->rqs[i] = NULL;
goto fail; goto fail;
}
} }
p += rq_size; p += rq_size;
@ -1416,7 +1421,6 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set,
return tags; return tags;
fail: fail:
pr_warn("%s: failed to allocate requests\n", __func__);
blk_mq_free_rq_map(set, tags, hctx_idx); blk_mq_free_rq_map(set, tags, hctx_idx);
return NULL; return NULL;
} }
@ -1936,6 +1940,61 @@ static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
return NOTIFY_OK; return NOTIFY_OK;
} }
static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
{
int i;
for (i = 0; i < set->nr_hw_queues; i++) {
set->tags[i] = blk_mq_init_rq_map(set, i);
if (!set->tags[i])
goto out_unwind;
}
return 0;
out_unwind:
while (--i >= 0)
blk_mq_free_rq_map(set, set->tags[i], i);
set->tags = NULL;
return -ENOMEM;
}
/*
* Allocate the request maps associated with this tag_set. Note that this
* may reduce the depth asked for, if memory is tight. set->queue_depth
* will be updated to reflect the allocated depth.
*/
static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
{
unsigned int depth;
int err;
depth = set->queue_depth;
do {
err = __blk_mq_alloc_rq_maps(set);
if (!err)
break;
set->queue_depth >>= 1;
if (set->queue_depth < set->reserved_tags + BLK_MQ_TAG_MIN) {
err = -ENOMEM;
break;
}
} while (set->queue_depth);
if (!set->queue_depth || err) {
pr_err("blk-mq: failed to allocate request map\n");
return -ENOMEM;
}
if (depth != set->queue_depth)
pr_info("blk-mq: reduced tag depth (%u -> %u)\n",
depth, set->queue_depth);
return 0;
}
/* /*
* Alloc a tag set to be associated with one or more request queues. * Alloc a tag set to be associated with one or more request queues.
* May fail with EINVAL for various error conditions. May adjust the * May fail with EINVAL for various error conditions. May adjust the
@ -1944,8 +2003,6 @@ static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
*/ */
int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
{ {
int i;
if (!set->nr_hw_queues) if (!set->nr_hw_queues)
return -EINVAL; return -EINVAL;
if (!set->queue_depth) if (!set->queue_depth)
@ -1966,23 +2023,18 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
sizeof(struct blk_mq_tags *), sizeof(struct blk_mq_tags *),
GFP_KERNEL, set->numa_node); GFP_KERNEL, set->numa_node);
if (!set->tags) if (!set->tags)
goto out; return -ENOMEM;
for (i = 0; i < set->nr_hw_queues; i++) { if (blk_mq_alloc_rq_maps(set))
set->tags[i] = blk_mq_init_rq_map(set, i); goto enomem;
if (!set->tags[i])
goto out_unwind;
}
mutex_init(&set->tag_list_lock); mutex_init(&set->tag_list_lock);
INIT_LIST_HEAD(&set->tag_list); INIT_LIST_HEAD(&set->tag_list);
return 0; return 0;
enomem:
out_unwind: kfree(set->tags);
while (--i >= 0) set->tags = NULL;
blk_mq_free_rq_map(set, set->tags[i], i);
out:
return -ENOMEM; return -ENOMEM;
} }
EXPORT_SYMBOL(blk_mq_alloc_tag_set); EXPORT_SYMBOL(blk_mq_alloc_tag_set);
@ -1997,6 +2049,7 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
} }
kfree(set->tags); kfree(set->tags);
set->tags = NULL;
} }
EXPORT_SYMBOL(blk_mq_free_tag_set); EXPORT_SYMBOL(blk_mq_free_tag_set);


@ -554,8 +554,10 @@ int blk_register_queue(struct gendisk *disk)
* Initialization must be complete by now. Finish the initial * Initialization must be complete by now. Finish the initial
* bypass from queue allocation. * bypass from queue allocation.
*/ */
queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q); if (!blk_queue_init_done(q)) {
blk_queue_bypass_end(q); queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
blk_queue_bypass_end(q);
}
ret = blk_trace_init_sysfs(dev); ret = blk_trace_init_sysfs(dev);
if (ret) if (ret)


@ -28,10 +28,10 @@ struct kobject *block_depr;
/* for extended dynamic devt allocation, currently only one major is used */ /* for extended dynamic devt allocation, currently only one major is used */
#define NR_EXT_DEVT (1 << MINORBITS) #define NR_EXT_DEVT (1 << MINORBITS)
/* For extended devt allocation. ext_devt_mutex prevents lookup /* For extended devt allocation. ext_devt_lock prevents lookup
* results from going away underneath its user. * results from going away underneath its user.
*/ */
static DEFINE_MUTEX(ext_devt_mutex); static DEFINE_SPINLOCK(ext_devt_lock);
static DEFINE_IDR(ext_devt_idr); static DEFINE_IDR(ext_devt_idr);
static struct device_type disk_type; static struct device_type disk_type;
@ -420,9 +420,13 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
} }
/* allocate ext devt */ /* allocate ext devt */
mutex_lock(&ext_devt_mutex); idr_preload(GFP_KERNEL);
idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_KERNEL);
mutex_unlock(&ext_devt_mutex); spin_lock(&ext_devt_lock);
idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
spin_unlock(&ext_devt_lock);
idr_preload_end();
if (idx < 0) if (idx < 0)
return idx == -ENOSPC ? -EBUSY : idx; return idx == -ENOSPC ? -EBUSY : idx;
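
For context, idr_preload()/GFP_NOWAIT is the canonical recipe for allocating
an IDR entry under a spinlock: the sleeping allocation happens into a per-CPU
cache outside the lock, so the idr_alloc() inside the critical section never
needs to sleep. A minimal sketch (my_idr, my_lock and my_alloc_id() are
hypothetical, not part of this patch):

#include <linux/idr.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static DEFINE_IDR(my_idr);

static int my_alloc_id(void *ptr)
{
	int id;

	idr_preload(GFP_KERNEL);	/* may sleep: fill per-CPU cache */
	spin_lock(&my_lock);
	id = idr_alloc(&my_idr, ptr, 0, 0, GFP_NOWAIT);
	spin_unlock(&my_lock);
	idr_preload_end();		/* re-enables preemption */

	return id;	/* allocated id, or -ENOSPC/-ENOMEM on failure */
}
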
@ -447,9 +451,9 @@ void blk_free_devt(dev_t devt)
return; return;
if (MAJOR(devt) == BLOCK_EXT_MAJOR) { if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
mutex_lock(&ext_devt_mutex); spin_lock(&ext_devt_lock);
idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
mutex_unlock(&ext_devt_mutex); spin_unlock(&ext_devt_lock);
} }
} }
@ -665,7 +669,6 @@ void del_gendisk(struct gendisk *disk)
sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk))); sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
pm_runtime_set_memalloc_noio(disk_to_dev(disk), false); pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
device_del(disk_to_dev(disk)); device_del(disk_to_dev(disk));
blk_free_devt(disk_to_dev(disk)->devt);
} }
EXPORT_SYMBOL(del_gendisk); EXPORT_SYMBOL(del_gendisk);
@ -690,13 +693,13 @@ struct gendisk *get_gendisk(dev_t devt, int *partno)
} else { } else {
struct hd_struct *part; struct hd_struct *part;
mutex_lock(&ext_devt_mutex); spin_lock(&ext_devt_lock);
part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
if (part && get_disk(part_to_disk(part))) { if (part && get_disk(part_to_disk(part))) {
*partno = part->partno; *partno = part->partno;
disk = part_to_disk(part); disk = part_to_disk(part);
} }
mutex_unlock(&ext_devt_mutex); spin_unlock(&ext_devt_lock);
} }
return disk; return disk;
@ -1098,6 +1101,7 @@ static void disk_release(struct device *dev)
{ {
struct gendisk *disk = dev_to_disk(dev); struct gendisk *disk = dev_to_disk(dev);
blk_free_devt(dev->devt);
disk_release_events(disk); disk_release_events(disk);
kfree(disk->random); kfree(disk->random);
disk_replace_part_tbl(disk, NULL); disk_replace_part_tbl(disk, NULL);


@ -211,6 +211,7 @@ static const struct attribute_group *part_attr_groups[] = {
static void part_release(struct device *dev) static void part_release(struct device *dev)
{ {
struct hd_struct *p = dev_to_part(dev); struct hd_struct *p = dev_to_part(dev);
blk_free_devt(dev->devt);
free_part_stats(p); free_part_stats(p);
free_part_info(p); free_part_info(p);
kfree(p); kfree(p);
@ -253,7 +254,6 @@ void delete_partition(struct gendisk *disk, int partno)
rcu_assign_pointer(ptbl->last_lookup, NULL); rcu_assign_pointer(ptbl->last_lookup, NULL);
kobject_put(part->holder_dir); kobject_put(part->holder_dir);
device_del(part_to_dev(part)); device_del(part_to_dev(part));
blk_free_devt(part_devt(part));
hd_struct_put(part); hd_struct_put(part);
} }


@ -33,7 +33,7 @@ acpi_cmos_rtc_space_handler(u32 function, acpi_physical_address address,
void *handler_context, void *region_context) void *handler_context, void *region_context)
{ {
int i; int i;
u8 *value = (u8 *)&value64; u8 *value = (u8 *)value64;
if (address > 0xff || !value64) if (address > 0xff || !value64)
return AE_BAD_PARAMETER; return AE_BAD_PARAMETER;
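
For context, value64 here is already a pointer to the caller's data buffer,
so taking its address with &value64 made the handler read and write the bytes
of the pointer variable itself rather than the CMOS data. A hedged
illustration of the one-character difference (cast_example() is hypothetical):

#include <linux/types.h>

static void cast_example(u64 *value64)
{
	u8 *bug = (u8 *)&value64;	/* bytes of the pointer variable */
	u8 *fix = (u8 *)value64;	/* bytes of the buffer it points to */

	/*
	 * Writing through 'bug' corrupts the stack slot holding the
	 * pointer; writing through 'fix' fills the caller's buffer.
	 */
	fix[0] = 0x42;
	(void)bug;
}
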


@ -610,7 +610,7 @@ static int acpi_lpss_suspend_late(struct device *dev)
return acpi_dev_suspend_late(dev); return acpi_dev_suspend_late(dev);
} }
static int acpi_lpss_restore_early(struct device *dev) static int acpi_lpss_resume_early(struct device *dev)
{ {
int ret = acpi_dev_resume_early(dev); int ret = acpi_dev_resume_early(dev);
@ -650,15 +650,15 @@ static int acpi_lpss_runtime_resume(struct device *dev)
static struct dev_pm_domain acpi_lpss_pm_domain = { static struct dev_pm_domain acpi_lpss_pm_domain = {
.ops = { .ops = {
#ifdef CONFIG_PM_SLEEP #ifdef CONFIG_PM_SLEEP
.suspend_late = acpi_lpss_suspend_late,
.restore_early = acpi_lpss_restore_early,
.prepare = acpi_subsys_prepare, .prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete, .complete = acpi_subsys_complete,
.suspend = acpi_subsys_suspend, .suspend = acpi_subsys_suspend,
.resume_early = acpi_subsys_resume_early, .suspend_late = acpi_lpss_suspend_late,
.resume_early = acpi_lpss_resume_early,
.freeze = acpi_subsys_freeze, .freeze = acpi_subsys_freeze,
.poweroff = acpi_subsys_suspend, .poweroff = acpi_subsys_suspend,
.poweroff_late = acpi_subsys_suspend_late, .poweroff_late = acpi_lpss_suspend_late,
.restore_early = acpi_lpss_resume_early,
#endif #endif
#ifdef CONFIG_PM_RUNTIME #ifdef CONFIG_PM_RUNTIME
.runtime_suspend = acpi_lpss_runtime_suspend, .runtime_suspend = acpi_lpss_runtime_suspend,


@ -534,20 +534,6 @@ static int acpi_battery_get_state(struct acpi_battery *battery)
" invalid.\n"); " invalid.\n");
} }
/*
* When fully charged, some batteries wrongly report
* capacity_now = design_capacity instead of = full_charge_capacity
*/
if (battery->capacity_now > battery->full_charge_capacity
&& battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) {
if (battery->capacity_now != battery->design_capacity)
printk_once(KERN_WARNING FW_BUG
"battery: reported current charge level (%d) "
"is higher than reported maximum charge level (%d).\n",
battery->capacity_now, battery->full_charge_capacity);
battery->capacity_now = battery->full_charge_capacity;
}
if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags)
&& battery->capacity_now >= 0 && battery->capacity_now <= 100) && battery->capacity_now >= 0 && battery->capacity_now <= 100)
battery->capacity_now = (battery->capacity_now * battery->capacity_now = (battery->capacity_now *


@ -305,6 +305,14 @@ static const struct pci_device_id ahci_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, 0x9c85), board_ahci }, /* Wildcat Point-LP RAID */ { PCI_VDEVICE(INTEL, 0x9c85), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x9c87), board_ahci }, /* Wildcat Point-LP RAID */ { PCI_VDEVICE(INTEL, 0x9c87), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x9c8f), board_ahci }, /* Wildcat Point-LP RAID */ { PCI_VDEVICE(INTEL, 0x9c8f), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x8c82), board_ahci }, /* 9 Series AHCI */
{ PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series AHCI */
{ PCI_VDEVICE(INTEL, 0x8c84), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c86), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
{ PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
/* JMicron 360/1/3/5/6, match class to avoid IDE function */ /* JMicron 360/1/3/5/6, match class to avoid IDE function */
{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
@ -442,6 +450,8 @@ static const struct pci_device_id ahci_pci_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x917a), { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x917a),
.driver_data = board_ahci_yes_fbs }, /* 88se9172 */ .driver_data = board_ahci_yes_fbs }, /* 88se9172 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9172), { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9172),
.driver_data = board_ahci_yes_fbs }, /* 88se9182 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9182),
.driver_data = board_ahci_yes_fbs }, /* 88se9172 */ .driver_data = board_ahci_yes_fbs }, /* 88se9172 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9192), { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9192),
.driver_data = board_ahci_yes_fbs }, /* 88se9172 on some Gigabyte */ .driver_data = board_ahci_yes_fbs }, /* 88se9172 on some Gigabyte */
@ -1329,6 +1339,18 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
else if (pdev->vendor == 0x1c44 && pdev->device == 0x8000) else if (pdev->vendor == 0x1c44 && pdev->device == 0x8000)
ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS; ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS;
/*
* The JMicron chip 361/363 contains one SATA controller and one
* PATA controller. To power on both controllers, we must follow
* the power-on sequence one by one; otherwise one of them cannot be
* powered on successfully, so here we disable the async suspend
* method for these chips.
*/
if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
(pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
device_disable_async_suspend(&pdev->dev);
/* acquire resources */ /* acquire resources */
rc = pcim_enable_device(pdev); rc = pcim_enable_device(pdev);
if (rc) if (rc)


@ -18,14 +18,17 @@
*/ */
#include <linux/ahci_platform.h> #include <linux/ahci_platform.h>
#include <linux/reset.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of_device.h> #include <linux/of_device.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/regulator/consumer.h> #include <linux/regulator/consumer.h>
#include <linux/reset.h>
#include <soc/tegra/fuse.h>
#include <soc/tegra/pmc.h> #include <soc/tegra/pmc.h>
#include "ahci.h" #include "ahci.h"
#define SATA_CONFIGURATION_0 0x180 #define SATA_CONFIGURATION_0 0x180
@ -180,9 +183,12 @@ static int tegra_ahci_controller_init(struct ahci_host_priv *hpriv)
/* Pad calibration */ /* Pad calibration */
/* FIXME Always use calibration 0. Change this to read the calibration ret = tegra_fuse_readl(FUSE_SATA_CALIB, &val);
* fuse once the fuse driver has landed. */ if (ret) {
val = 0; dev_err(&tegra->pdev->dev,
"failed to read calibration fuse: %d\n", ret);
return ret;
}
calib = tegra124_pad_calibration[val & FUSE_SATA_CALIB_MASK]; calib = tegra124_pad_calibration[val & FUSE_SATA_CALIB_MASK];


@ -78,6 +78,9 @@
#define CFG_MEM_RAM_SHUTDOWN 0x00000070 #define CFG_MEM_RAM_SHUTDOWN 0x00000070
#define BLOCK_MEM_RDY 0x00000074 #define BLOCK_MEM_RDY 0x00000074
/* Max retry for link down */
#define MAX_LINK_DOWN_RETRY 3
struct xgene_ahci_context { struct xgene_ahci_context {
struct ahci_host_priv *hpriv; struct ahci_host_priv *hpriv;
struct device *dev; struct device *dev;
@ -145,6 +148,14 @@ static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc)
return rc; return rc;
} }
static bool xgene_ahci_is_memram_inited(struct xgene_ahci_context *ctx)
{
void __iomem *diagcsr = ctx->csr_diag;
return (readl(diagcsr + CFG_MEM_RAM_SHUTDOWN) == 0 &&
readl(diagcsr + BLOCK_MEM_RDY) == 0xFFFFFFFF);
}
/** /**
* xgene_ahci_read_id - Read ID data from the specified device * xgene_ahci_read_id - Read ID data from the specified device
* @dev: device * @dev: device
@ -229,8 +240,11 @@ static void xgene_ahci_set_phy_cfg(struct xgene_ahci_context *ctx, int channel)
* and Gen1 (1.5Gbps). Otherwise, during long IO stress tests, the PHY will * and Gen1 (1.5Gbps). Otherwise, during long IO stress tests, the PHY will
* report disparity errors, etc. In addition, during COMRESET, there can * report disparity errors, etc. In addition, during COMRESET, there can
* be errors reported in the register PORT_SCR_ERR. For SERR_DISPARITY and * be errors reported in the register PORT_SCR_ERR. For SERR_DISPARITY and
* SERR_10B_8B_ERR, the PHY receiver line must be reset. The following * SERR_10B_8B_ERR, the PHY receiver line must be reset. Also, during long
* algorithm is followed to properly configure the hardware PHY during COMRESET: * reboot cycle regressions, the PHY sometimes reports link down even though the
* device is present, because of speed negotiation failure, so we need to retry
* the COMRESET to get the link up. The following algorithm is followed to
* properly configure the hardware PHY during COMRESET:
* *
* Alg Part 1: * Alg Part 1:
* 1. Start the PHY at Gen3 speed (default setting) * 1. Start the PHY at Gen3 speed (default setting)
@ -246,9 +260,15 @@ static void xgene_ahci_set_phy_cfg(struct xgene_ahci_context *ctx, int channel)
* Alg Part 2: * Alg Part 2:
* 1. On link up, if there are any SERR_DISPARITY and SERR_10B_8B_ERR error * 1. On link up, if there are any SERR_DISPARITY and SERR_10B_8B_ERR error
* reported in the register PORT_SCR_ERR, then reset the PHY receiver line * reported in the register PORT_SCR_ERR, then reset the PHY receiver line
* 2. Go to Alg Part 3 * 2. Go to Alg Part 4
* *
* Alg Part 3: * Alg Part 3:
* 1. Check PORT_SCR_STAT to see whether device presence was detected but
* PHY communication establishment failed; if the number of link-down
* attempts is still below the maximum (3), go to Alg Part 1.
* 2. Go to Alg Part 4.
*
* Alg Part 4:
* 1. Clear any pending from register PORT_SCR_ERR. * 1. Clear any pending from register PORT_SCR_ERR.
* *
* NOTE: For the initial version, we will NOT support Gen1/Gen2. In addition * NOTE: For the initial version, we will NOT support Gen1/Gen2. In addition
@ -267,19 +287,27 @@ static int xgene_ahci_do_hardreset(struct ata_link *link,
u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG; u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
void __iomem *port_mmio = ahci_port_base(ap); void __iomem *port_mmio = ahci_port_base(ap);
struct ata_taskfile tf; struct ata_taskfile tf;
int link_down_retry = 0;
int rc; int rc;
u32 val; u32 val, sstatus;
/* clear D2H reception area to properly wait for D2H FIS */ do {
ata_tf_init(link->device, &tf); /* clear D2H reception area to properly wait for D2H FIS */
tf.command = ATA_BUSY; ata_tf_init(link->device, &tf);
ata_tf_to_fis(&tf, 0, 0, d2h_fis); tf.command = ATA_BUSY;
rc = sata_link_hardreset(link, timing, deadline, online, ata_tf_to_fis(&tf, 0, 0, d2h_fis);
rc = sata_link_hardreset(link, timing, deadline, online,
ahci_check_ready); ahci_check_ready);
if (*online) {
val = readl(port_mmio + PORT_SCR_ERR);
if (val & (SERR_DISPARITY | SERR_10B_8B_ERR))
dev_warn(ctx->dev, "link has error\n");
break;
}
val = readl(port_mmio + PORT_SCR_ERR); sata_scr_read(link, SCR_STATUS, &sstatus);
if (val & (SERR_DISPARITY | SERR_10B_8B_ERR)) } while (link_down_retry++ < MAX_LINK_DOWN_RETRY &&
dev_warn(ctx->dev, "link has error\n"); (sstatus & 0xff) == 0x1);
/* clear all errors if any pending */ /* clear all errors if any pending */
val = readl(port_mmio + PORT_SCR_ERR); val = readl(port_mmio + PORT_SCR_ERR);
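
For context, the new loop condition keys off the SStatus DET field: a value
of 0x1 means device presence was detected but PHY communication was not
established, which is exactly the "present but speed negotiation failed" case
described in the comment above. A minimal sketch of that check in isolation
(the helper and its name are hypothetical; MAX_LINK_DOWN_RETRY mirrors the
patch):

#include <linux/libata.h>

#define MAX_LINK_DOWN_RETRY	3

/* true if the hardreset should be retried: device detected (DET == 0x1)
 * without PHY communication, and retry budget remaining */
static bool should_retry_comreset(struct ata_link *link, int *tries)
{
	u32 sstatus;

	if (sata_scr_read(link, SCR_STATUS, &sstatus))
		return false;		/* SCR access failed: give up */

	return (sstatus & 0xff) == 0x1 && (*tries)++ < MAX_LINK_DOWN_RETRY;
}
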
@ -467,6 +495,11 @@ static int xgene_ahci_probe(struct platform_device *pdev)
return -ENODEV; return -ENODEV;
} }
if (xgene_ahci_is_memram_inited(ctx)) {
dev_info(dev, "skip clock and PHY initialization\n");
goto skip_clk_phy;
}
/* Due to errata, HW requires full toggle transition */ /* Due to errata, HW requires full toggle transition */
rc = ahci_platform_enable_clks(hpriv); rc = ahci_platform_enable_clks(hpriv);
if (rc) if (rc)
@ -479,7 +512,7 @@ static int xgene_ahci_probe(struct platform_device *pdev)
/* Configure the host controller */ /* Configure the host controller */
xgene_ahci_hw_init(hpriv); xgene_ahci_hw_init(hpriv);
skip_clk_phy:
hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_NCQ; hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_NCQ;
rc = ahci_platform_init_host(pdev, hpriv, &xgene_ahci_port_info); rc = ahci_platform_init_host(pdev, hpriv, &xgene_ahci_port_info);


@ -340,6 +340,14 @@ static const struct pci_device_id piix_pci_tbl[] = {
{ 0x8086, 0x0F21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_byt }, { 0x8086, 0x0F21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_byt },
/* SATA Controller IDE (Coleto Creek) */ /* SATA Controller IDE (Coleto Creek) */
{ 0x8086, 0x23a6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, { 0x8086, 0x23a6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
/* SATA Controller IDE (9 Series) */
{ 0x8086, 0x8c88, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
/* SATA Controller IDE (9 Series) */
{ 0x8086, 0x8c89, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
/* SATA Controller IDE (9 Series) */
{ 0x8086, 0x8c80, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
/* SATA Controller IDE (9 Series) */
{ 0x8086, 0x8c81, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
{ } /* terminate list */ { } /* terminate list */
}; };


@ -143,6 +143,18 @@ static int jmicron_init_one (struct pci_dev *pdev, const struct pci_device_id *i
}; };
const struct ata_port_info *ppi[] = { &info, NULL }; const struct ata_port_info *ppi[] = { &info, NULL };
/*
* The JMicron chip 361/363 contains one SATA controller and one
* PATA controller. To power on both controllers, we must follow
* the power-on sequence one by one; otherwise one of them cannot be
* powered on successfully, so here we disable the async suspend
* method for these chips.
*/
if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
(pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
device_disable_async_suspend(&pdev->dev);
return ata_pci_bmdma_init_one(pdev, ppi, &jmicron_sht, NULL, 0); return ata_pci_bmdma_init_one(pdev, ppi, &jmicron_sht, NULL, 0);
} }


@ -282,6 +282,7 @@ static const struct pci_device_id bcma_pci_bridge_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a9) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a9) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43aa) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43aa) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4727) }, { PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4727) },
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43227) }, /* 0xA8DB */
{ 0, }, { 0, },
}; };
MODULE_DEVICE_TABLE(pci, bcma_pci_bridge_tbl); MODULE_DEVICE_TABLE(pci, bcma_pci_bridge_tbl);


@ -3918,7 +3918,6 @@ skip_create_disk:
if (rv) { if (rv) {
dev_err(&dd->pdev->dev, dev_err(&dd->pdev->dev,
"Unable to allocate request queue\n"); "Unable to allocate request queue\n");
rv = -ENOMEM;
goto block_queue_alloc_init_error; goto block_queue_alloc_init_error;
} }


@ -462,17 +462,21 @@ static int null_add_dev(void)
struct gendisk *disk; struct gendisk *disk;
struct nullb *nullb; struct nullb *nullb;
sector_t size; sector_t size;
int rv;
nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, home_node); nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, home_node);
if (!nullb) if (!nullb) {
rv = -ENOMEM;
goto out; goto out;
}
spin_lock_init(&nullb->lock); spin_lock_init(&nullb->lock);
if (queue_mode == NULL_Q_MQ && use_per_node_hctx) if (queue_mode == NULL_Q_MQ && use_per_node_hctx)
submit_queues = nr_online_nodes; submit_queues = nr_online_nodes;
if (setup_queues(nullb)) rv = setup_queues(nullb);
if (rv)
goto out_free_nullb; goto out_free_nullb;
if (queue_mode == NULL_Q_MQ) { if (queue_mode == NULL_Q_MQ) {
@ -484,22 +488,29 @@ static int null_add_dev(void)
nullb->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; nullb->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
nullb->tag_set.driver_data = nullb; nullb->tag_set.driver_data = nullb;
if (blk_mq_alloc_tag_set(&nullb->tag_set)) rv = blk_mq_alloc_tag_set(&nullb->tag_set);
if (rv)
goto out_cleanup_queues; goto out_cleanup_queues;
nullb->q = blk_mq_init_queue(&nullb->tag_set); nullb->q = blk_mq_init_queue(&nullb->tag_set);
if (!nullb->q) if (!nullb->q) {
rv = -ENOMEM;
goto out_cleanup_tags; goto out_cleanup_tags;
}
} else if (queue_mode == NULL_Q_BIO) { } else if (queue_mode == NULL_Q_BIO) {
nullb->q = blk_alloc_queue_node(GFP_KERNEL, home_node); nullb->q = blk_alloc_queue_node(GFP_KERNEL, home_node);
if (!nullb->q) if (!nullb->q) {
rv = -ENOMEM;
goto out_cleanup_queues; goto out_cleanup_queues;
}
blk_queue_make_request(nullb->q, null_queue_bio); blk_queue_make_request(nullb->q, null_queue_bio);
init_driver_queues(nullb); init_driver_queues(nullb);
} else { } else {
nullb->q = blk_init_queue_node(null_request_fn, &nullb->lock, home_node); nullb->q = blk_init_queue_node(null_request_fn, &nullb->lock, home_node);
if (!nullb->q) if (!nullb->q) {
rv = -ENOMEM;
goto out_cleanup_queues; goto out_cleanup_queues;
}
blk_queue_prep_rq(nullb->q, null_rq_prep_fn); blk_queue_prep_rq(nullb->q, null_rq_prep_fn);
blk_queue_softirq_done(nullb->q, null_softirq_done_fn); blk_queue_softirq_done(nullb->q, null_softirq_done_fn);
init_driver_queues(nullb); init_driver_queues(nullb);
@ -509,8 +520,10 @@ static int null_add_dev(void)
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q); queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q);
disk = nullb->disk = alloc_disk_node(1, home_node); disk = nullb->disk = alloc_disk_node(1, home_node);
if (!disk) if (!disk) {
rv = -ENOMEM;
goto out_cleanup_blk_queue; goto out_cleanup_blk_queue;
}
mutex_lock(&lock); mutex_lock(&lock);
list_add_tail(&nullb->list, &nullb_list); list_add_tail(&nullb->list, &nullb_list);
@ -544,7 +557,7 @@ out_cleanup_queues:
out_free_nullb: out_free_nullb:
kfree(nullb); kfree(nullb);
out: out:
return -ENOMEM; return rv;
} }
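
For context, this refactor replaces a blanket "return -ENOMEM" with a
propagated rv, following the kernel's centralized-unwind idiom: each failure
site sets the precise error code (or keeps the one a helper returned) and
jumps to a label that frees exactly what has been set up so far. A generic
sketch of the shape (struct foo and the alloc/setup/free helpers are
hypothetical):

#include <linux/errno.h>

struct foo;
struct foo *alloc_foo(void);
int setup_bar(struct foo *foo);
void free_foo(struct foo *foo);

static int setup_device(void)
{
	struct foo *foo;
	int rv;

	foo = alloc_foo();
	if (!foo) {
		rv = -ENOMEM;		/* allocation: choose the code */
		goto out;
	}

	rv = setup_bar(foo);
	if (rv)				/* helper: propagate its code */
		goto out_free_foo;

	return 0;

out_free_foo:
	free_foo(foo);
out:
	return rv;
}
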
static int __init null_init(void) static int __init null_init(void)


@ -5087,9 +5087,11 @@ static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE); set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);
set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only); set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only);
rbd_dev->rq_wq = alloc_workqueue(rbd_dev->disk->disk_name, 0, 0); rbd_dev->rq_wq = alloc_workqueue("%s", 0, 0, rbd_dev->disk->disk_name);
if (!rbd_dev->rq_wq) if (!rbd_dev->rq_wq) {
ret = -ENOMEM;
goto err_out_mapping; goto err_out_mapping;
}
ret = rbd_bus_add_dev(rbd_dev); ret = rbd_bus_add_dev(rbd_dev);
if (ret) if (ret)
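
For context, the first argument of alloc_workqueue() is a printf-style format
string (followed by the WQ_* flags and max_active), so passing a variable
disk name directly would misinterpret any '%' it happened to contain; the fix
routes the name through a "%s" format instead. A minimal sketch (make_wq() is
hypothetical):

#include <linux/workqueue.h>

static struct workqueue_struct *make_wq(const char *name)
{
	/* never pass a variable string as the format itself */
	return alloc_workqueue("%s", 0 /* flags */, 0 /* max_active */, name);
}
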


@ -60,7 +60,7 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
goto out; goto out;
} }
freq_table = kcalloc(sizeof(*freq_table), (max_opps + 1), GFP_ATOMIC); freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
if (!freq_table) { if (!freq_table) {
ret = -ENOMEM; ret = -ENOMEM;
goto out; goto out;
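
For context, kcalloc(n, size, flags) takes the element count first and the
element size second; the swapped call computed the same byte count, but it
violates the documented argument order that static checkers enforce. A
minimal sketch of the corrected contract (alloc_table() is hypothetical;
GFP_ATOMIC mirrors the call site):

#include <linux/cpufreq.h>
#include <linux/slab.h>

static struct cpufreq_frequency_table *alloc_table(unsigned int max_opps)
{
	/* count first, element size second */
	return kcalloc(max_opps + 1, sizeof(struct cpufreq_frequency_table),
		       GFP_ATOMIC);
}
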


@ -362,8 +362,9 @@ static void jz4740_dma_chan_irq(struct jz4740_dmaengine_chan *chan)
vchan_cyclic_callback(&chan->desc->vdesc); vchan_cyclic_callback(&chan->desc->vdesc);
} else { } else {
if (chan->next_sg == chan->desc->num_sgs) { if (chan->next_sg == chan->desc->num_sgs) {
chan->desc = NULL; list_del(&chan->desc->vdesc.node);
vchan_cookie_complete(&chan->desc->vdesc); vchan_cookie_complete(&chan->desc->vdesc);
chan->desc = NULL;
} }
} }
} }


@ -67,6 +67,7 @@ static int ast_detect_chip(struct drm_device *dev)
{ {
struct ast_private *ast = dev->dev_private; struct ast_private *ast = dev->dev_private;
uint32_t data, jreg; uint32_t data, jreg;
ast_open_key(ast);
if (dev->pdev->device == PCI_CHIP_AST1180) { if (dev->pdev->device == PCI_CHIP_AST1180) {
ast->chip = AST1100; ast->chip = AST1100;
@ -104,7 +105,7 @@ static int ast_detect_chip(struct drm_device *dev)
} }
ast->vga2_clone = false; ast->vga2_clone = false;
} else { } else {
ast->chip = 2000; ast->chip = AST2000;
DRM_INFO("AST 2000 detected\n"); DRM_INFO("AST 2000 detected\n");
} }
} }

View File

@ -1336,12 +1336,17 @@ static int i915_load_modeset_init(struct drm_device *dev)
intel_power_domains_init_hw(dev_priv); intel_power_domains_init_hw(dev_priv);
/*
* We enable some interrupt sources in our postinstall hooks, so mark
* interrupts as enabled _before_ actually enabling them to avoid
* special cases in our ordering checks.
*/
dev_priv->pm._irqs_disabled = false;
ret = drm_irq_install(dev, dev->pdev->irq); ret = drm_irq_install(dev, dev->pdev->irq);
if (ret) if (ret)
goto cleanup_gem_stolen; goto cleanup_gem_stolen;
dev_priv->pm._irqs_disabled = false;
/* Important: The output setup functions called by modeset_init need /* Important: The output setup functions called by modeset_init need
* working irqs for e.g. gmbus and dp aux transfers. */ * working irqs for e.g. gmbus and dp aux transfers. */
intel_modeset_init(dev); intel_modeset_init(dev);


@ -184,6 +184,7 @@ enum hpd_pin {
if ((1 << (domain)) & (mask)) if ((1 << (domain)) & (mask))
struct drm_i915_private; struct drm_i915_private;
struct i915_mm_struct;
struct i915_mmu_object; struct i915_mmu_object;
enum intel_dpll_id { enum intel_dpll_id {
@ -1506,9 +1507,8 @@ struct drm_i915_private {
struct i915_gtt gtt; /* VM representing the global address space */ struct i915_gtt gtt; /* VM representing the global address space */
struct i915_gem_mm mm; struct i915_gem_mm mm;
#if defined(CONFIG_MMU_NOTIFIER) DECLARE_HASHTABLE(mm_structs, 7);
DECLARE_HASHTABLE(mmu_notifiers, 7); struct mutex mm_lock;
#endif
/* Kernel Modesetting */ /* Kernel Modesetting */
@ -1814,8 +1814,8 @@ struct drm_i915_gem_object {
unsigned workers :4; unsigned workers :4;
#define I915_GEM_USERPTR_MAX_WORKERS 15 #define I915_GEM_USERPTR_MAX_WORKERS 15
struct mm_struct *mm; struct i915_mm_struct *mm;
struct i915_mmu_object *mn; struct i915_mmu_object *mmu_object;
struct work_struct *work; struct work_struct *work;
} userptr; } userptr;
}; };


@ -1590,10 +1590,13 @@ unlock:
out: out:
switch (ret) { switch (ret) {
case -EIO: case -EIO:
/* If this -EIO is due to a gpu hang, give the reset code a /*
* chance to clean up the mess. Otherwise return the proper * We eat errors when the gpu is terminally wedged to avoid
* SIGBUS. */ * userspace unduly crashing (gl has no provisions for mmaps to
if (i915_terminally_wedged(&dev_priv->gpu_error)) { * fail). But any other -EIO isn't ours (e.g. swap in failure)
* and so needs to be reported.
*/
if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
ret = VM_FAULT_SIGBUS; ret = VM_FAULT_SIGBUS;
break; break;
} }


@ -32,6 +32,15 @@
#include <linux/mempolicy.h> #include <linux/mempolicy.h>
#include <linux/swap.h> #include <linux/swap.h>
struct i915_mm_struct {
struct mm_struct *mm;
struct drm_device *dev;
struct i915_mmu_notifier *mn;
struct hlist_node node;
struct kref kref;
struct work_struct work;
};
#if defined(CONFIG_MMU_NOTIFIER) #if defined(CONFIG_MMU_NOTIFIER)
#include <linux/interval_tree.h> #include <linux/interval_tree.h>
@ -41,16 +50,12 @@ struct i915_mmu_notifier {
struct mmu_notifier mn; struct mmu_notifier mn;
struct rb_root objects; struct rb_root objects;
struct list_head linear; struct list_head linear;
struct drm_device *dev;
struct mm_struct *mm;
struct work_struct work;
unsigned long count;
unsigned long serial; unsigned long serial;
bool has_linear; bool has_linear;
}; };
struct i915_mmu_object { struct i915_mmu_object {
struct i915_mmu_notifier *mmu; struct i915_mmu_notifier *mn;
struct interval_tree_node it; struct interval_tree_node it;
struct list_head link; struct list_head link;
struct drm_i915_gem_object *obj; struct drm_i915_gem_object *obj;
@ -96,18 +101,18 @@ static void *invalidate_range__linear(struct i915_mmu_notifier *mn,
unsigned long start, unsigned long start,
unsigned long end) unsigned long end)
{ {
struct i915_mmu_object *mmu; struct i915_mmu_object *mo;
unsigned long serial; unsigned long serial;
restart: restart:
serial = mn->serial; serial = mn->serial;
list_for_each_entry(mmu, &mn->linear, link) { list_for_each_entry(mo, &mn->linear, link) {
struct drm_i915_gem_object *obj; struct drm_i915_gem_object *obj;
if (mmu->it.last < start || mmu->it.start > end) if (mo->it.last < start || mo->it.start > end)
continue; continue;
obj = mmu->obj; obj = mo->obj;
drm_gem_object_reference(&obj->base); drm_gem_object_reference(&obj->base);
spin_unlock(&mn->lock); spin_unlock(&mn->lock);
@ -160,130 +165,47 @@ static const struct mmu_notifier_ops i915_gem_userptr_notifier = {
}; };
static struct i915_mmu_notifier * static struct i915_mmu_notifier *
__i915_mmu_notifier_lookup(struct drm_device *dev, struct mm_struct *mm) i915_mmu_notifier_create(struct mm_struct *mm)
{ {
struct drm_i915_private *dev_priv = to_i915(dev); struct i915_mmu_notifier *mn;
struct i915_mmu_notifier *mmu;
/* Protected by dev->struct_mutex */
hash_for_each_possible(dev_priv->mmu_notifiers, mmu, node, (unsigned long)mm)
if (mmu->mm == mm)
return mmu;
return NULL;
}
static struct i915_mmu_notifier *
i915_mmu_notifier_get(struct drm_device *dev, struct mm_struct *mm)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct i915_mmu_notifier *mmu;
int ret; int ret;
lockdep_assert_held(&dev->struct_mutex); mn = kmalloc(sizeof(*mn), GFP_KERNEL);
if (mn == NULL)
mmu = __i915_mmu_notifier_lookup(dev, mm);
if (mmu)
return mmu;
mmu = kmalloc(sizeof(*mmu), GFP_KERNEL);
if (mmu == NULL)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
spin_lock_init(&mmu->lock); spin_lock_init(&mn->lock);
mmu->dev = dev; mn->mn.ops = &i915_gem_userptr_notifier;
mmu->mn.ops = &i915_gem_userptr_notifier; mn->objects = RB_ROOT;
mmu->mm = mm; mn->serial = 1;
mmu->objects = RB_ROOT; INIT_LIST_HEAD(&mn->linear);
mmu->count = 0; mn->has_linear = false;
mmu->serial = 1;
INIT_LIST_HEAD(&mmu->linear);
mmu->has_linear = false;
/* Protected by mmap_sem (write-lock) */ /* Protected by mmap_sem (write-lock) */
ret = __mmu_notifier_register(&mmu->mn, mm); ret = __mmu_notifier_register(&mn->mn, mm);
if (ret) { if (ret) {
kfree(mmu); kfree(mn);
return ERR_PTR(ret); return ERR_PTR(ret);
} }
/* Protected by dev->struct_mutex */ return mn;
hash_add(dev_priv->mmu_notifiers, &mmu->node, (unsigned long)mm);
return mmu;
} }
static void static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mn)
__i915_mmu_notifier_destroy_worker(struct work_struct *work)
{ {
struct i915_mmu_notifier *mmu = container_of(work, typeof(*mmu), work); if (++mn->serial == 0)
mmu_notifier_unregister(&mmu->mn, mmu->mm); mn->serial = 1;
kfree(mmu);
}
static void
__i915_mmu_notifier_destroy(struct i915_mmu_notifier *mmu)
{
lockdep_assert_held(&mmu->dev->struct_mutex);
/* Protected by dev->struct_mutex */
hash_del(&mmu->node);
/* Our lock ordering is: mmap_sem, mmu_notifier_scru, struct_mutex.
* We enter the function holding struct_mutex, therefore we need
* to drop our mutex prior to calling mmu_notifier_unregister in
* order to prevent lock inversion (and system-wide deadlock)
* between the mmap_sem and struct-mutex. Hence we defer the
* unregistration to a workqueue where we hold no locks.
*/
INIT_WORK(&mmu->work, __i915_mmu_notifier_destroy_worker);
schedule_work(&mmu->work);
}
static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mmu)
{
if (++mmu->serial == 0)
mmu->serial = 1;
}
static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mmu)
{
struct i915_mmu_object *mn;
list_for_each_entry(mn, &mmu->linear, link)
if (mn->is_linear)
return true;
return false;
}
static void
i915_mmu_notifier_del(struct i915_mmu_notifier *mmu,
struct i915_mmu_object *mn)
{
lockdep_assert_held(&mmu->dev->struct_mutex);
spin_lock(&mmu->lock);
list_del(&mn->link);
if (mn->is_linear)
mmu->has_linear = i915_mmu_notifier_has_linear(mmu);
else
interval_tree_remove(&mn->it, &mmu->objects);
__i915_mmu_notifier_update_serial(mmu);
spin_unlock(&mmu->lock);
/* Protected against _add() by dev->struct_mutex */
if (--mmu->count == 0)
__i915_mmu_notifier_destroy(mmu);
} }
static int static int
i915_mmu_notifier_add(struct i915_mmu_notifier *mmu, i915_mmu_notifier_add(struct drm_device *dev,
struct i915_mmu_object *mn) struct i915_mmu_notifier *mn,
struct i915_mmu_object *mo)
{ {
struct interval_tree_node *it; struct interval_tree_node *it;
int ret; int ret;
ret = i915_mutex_lock_interruptible(mmu->dev); ret = i915_mutex_lock_interruptible(dev);
if (ret) if (ret)
return ret; return ret;
@ -291,11 +213,11 @@ i915_mmu_notifier_add(struct i915_mmu_notifier *mmu,
* remove the objects from the interval tree) before we do * remove the objects from the interval tree) before we do
* the check for overlapping objects. * the check for overlapping objects.
*/ */
i915_gem_retire_requests(mmu->dev); i915_gem_retire_requests(dev);
spin_lock(&mmu->lock); spin_lock(&mn->lock);
it = interval_tree_iter_first(&mmu->objects, it = interval_tree_iter_first(&mn->objects,
mn->it.start, mn->it.last); mo->it.start, mo->it.last);
if (it) { if (it) {
struct drm_i915_gem_object *obj; struct drm_i915_gem_object *obj;
@ -312,86 +234,122 @@ i915_mmu_notifier_add(struct i915_mmu_notifier *mmu,
obj = container_of(it, struct i915_mmu_object, it)->obj; obj = container_of(it, struct i915_mmu_object, it)->obj;
if (!obj->userptr.workers) if (!obj->userptr.workers)
mmu->has_linear = mn->is_linear = true; mn->has_linear = mo->is_linear = true;
else else
ret = -EAGAIN; ret = -EAGAIN;
} else } else
interval_tree_insert(&mn->it, &mmu->objects); interval_tree_insert(&mo->it, &mn->objects);
if (ret == 0) { if (ret == 0) {
list_add(&mn->link, &mmu->linear); list_add(&mo->link, &mn->linear);
__i915_mmu_notifier_update_serial(mmu); __i915_mmu_notifier_update_serial(mn);
} }
spin_unlock(&mmu->lock); spin_unlock(&mn->lock);
mutex_unlock(&mmu->dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
return ret; return ret;
} }
static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mn)
{
struct i915_mmu_object *mo;
list_for_each_entry(mo, &mn->linear, link)
if (mo->is_linear)
return true;
return false;
}
static void
i915_mmu_notifier_del(struct i915_mmu_notifier *mn,
struct i915_mmu_object *mo)
{
spin_lock(&mn->lock);
list_del(&mo->link);
if (mo->is_linear)
mn->has_linear = i915_mmu_notifier_has_linear(mn);
else
interval_tree_remove(&mo->it, &mn->objects);
__i915_mmu_notifier_update_serial(mn);
spin_unlock(&mn->lock);
}
static void static void
i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj) i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj)
{ {
struct i915_mmu_object *mn; struct i915_mmu_object *mo;
mn = obj->userptr.mn; mo = obj->userptr.mmu_object;
if (mn == NULL) if (mo == NULL)
return; return;
i915_mmu_notifier_del(mn->mmu, mn); i915_mmu_notifier_del(mo->mn, mo);
obj->userptr.mn = NULL; kfree(mo);
obj->userptr.mmu_object = NULL;
}
static struct i915_mmu_notifier *
i915_mmu_notifier_find(struct i915_mm_struct *mm)
{
if (mm->mn == NULL) {
down_write(&mm->mm->mmap_sem);
mutex_lock(&to_i915(mm->dev)->mm_lock);
if (mm->mn == NULL)
mm->mn = i915_mmu_notifier_create(mm->mm);
mutex_unlock(&to_i915(mm->dev)->mm_lock);
up_write(&mm->mm->mmap_sem);
}
return mm->mn;
} }
static int static int
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj, i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
unsigned flags) unsigned flags)
{ {
struct i915_mmu_notifier *mmu; struct i915_mmu_notifier *mn;
struct i915_mmu_object *mn; struct i915_mmu_object *mo;
int ret; int ret;
if (flags & I915_USERPTR_UNSYNCHRONIZED) if (flags & I915_USERPTR_UNSYNCHRONIZED)
return capable(CAP_SYS_ADMIN) ? 0 : -EPERM; return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
down_write(&obj->userptr.mm->mmap_sem); if (WARN_ON(obj->userptr.mm == NULL))
ret = i915_mutex_lock_interruptible(obj->base.dev); return -EINVAL;
if (ret == 0) {
mmu = i915_mmu_notifier_get(obj->base.dev, obj->userptr.mm); mn = i915_mmu_notifier_find(obj->userptr.mm);
if (!IS_ERR(mmu)) if (IS_ERR(mn))
mmu->count++; /* preemptive add to act as a refcount */ return PTR_ERR(mn);
else
ret = PTR_ERR(mmu); mo = kzalloc(sizeof(*mo), GFP_KERNEL);
mutex_unlock(&obj->base.dev->struct_mutex); if (mo == NULL)
} return -ENOMEM;
up_write(&obj->userptr.mm->mmap_sem);
if (ret) mo->mn = mn;
mo->it.start = obj->userptr.ptr;
mo->it.last = mo->it.start + obj->base.size - 1;
mo->obj = obj;
ret = i915_mmu_notifier_add(obj->base.dev, mn, mo);
if (ret) {
kfree(mo);
return ret; return ret;
mn = kzalloc(sizeof(*mn), GFP_KERNEL);
if (mn == NULL) {
ret = -ENOMEM;
goto destroy_mmu;
} }
mn->mmu = mmu; obj->userptr.mmu_object = mo;
mn->it.start = obj->userptr.ptr;
mn->it.last = mn->it.start + obj->base.size - 1;
mn->obj = obj;
ret = i915_mmu_notifier_add(mmu, mn);
if (ret)
goto free_mn;
obj->userptr.mn = mn;
return 0; return 0;
}
free_mn: static void
i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
struct mm_struct *mm)
{
if (mn == NULL)
return;
mmu_notifier_unregister(&mn->mn, mm);
kfree(mn); kfree(mn);
destroy_mmu:
mutex_lock(&obj->base.dev->struct_mutex);
if (--mmu->count == 0)
__i915_mmu_notifier_destroy(mmu);
mutex_unlock(&obj->base.dev->struct_mutex);
return ret;
} }
#else #else
@ -413,15 +371,114 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
return 0; return 0;
} }
static void
i915_mmu_notifier_free(struct i915_mmu_notifier *mn,
struct mm_struct *mm)
{
}
#endif #endif
static struct i915_mm_struct *
__i915_mm_struct_find(struct drm_i915_private *dev_priv, struct mm_struct *real)
{
struct i915_mm_struct *mm;
/* Protected by dev_priv->mm_lock */
hash_for_each_possible(dev_priv->mm_structs, mm, node, (unsigned long)real)
if (mm->mm == real)
return mm;
return NULL;
}
static int
i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct i915_mm_struct *mm;
int ret = 0;
/* During release of the GEM object we hold the struct_mutex. This
* precludes us from calling mmput() at that time as that may be
* the last reference and so call exit_mmap(). exit_mmap() will
* attempt to reap the vma, and if we were holding a GTT mmap
* would then call drm_gem_vm_close() and attempt to reacquire
* the struct mutex. So in order to avoid that recursion, we have
* to defer releasing the mm reference until after we drop the
* struct_mutex, i.e. we need to schedule a worker to do the clean
* up.
*/
mutex_lock(&dev_priv->mm_lock);
mm = __i915_mm_struct_find(dev_priv, current->mm);
if (mm == NULL) {
mm = kmalloc(sizeof(*mm), GFP_KERNEL);
if (mm == NULL) {
ret = -ENOMEM;
goto out;
}
kref_init(&mm->kref);
mm->dev = obj->base.dev;
mm->mm = current->mm;
atomic_inc(&current->mm->mm_count);
mm->mn = NULL;
/* Protected by dev_priv->mm_lock */
hash_add(dev_priv->mm_structs,
&mm->node, (unsigned long)mm->mm);
} else
kref_get(&mm->kref);
obj->userptr.mm = mm;
out:
mutex_unlock(&dev_priv->mm_lock);
return ret;
}
static void
__i915_mm_struct_free__worker(struct work_struct *work)
{
struct i915_mm_struct *mm = container_of(work, typeof(*mm), work);
i915_mmu_notifier_free(mm->mn, mm->mm);
mmdrop(mm->mm);
kfree(mm);
}
static void
__i915_mm_struct_free(struct kref *kref)
{
struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref);
/* Protected by dev_priv->mm_lock */
hash_del(&mm->node);
mutex_unlock(&to_i915(mm->dev)->mm_lock);
INIT_WORK(&mm->work, __i915_mm_struct_free__worker);
schedule_work(&mm->work);
}
static void
i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj)
{
if (obj->userptr.mm == NULL)
return;
kref_put_mutex(&obj->userptr.mm->kref,
__i915_mm_struct_free,
&to_i915(obj->base.dev)->mm_lock);
obj->userptr.mm = NULL;
}
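
For context, kref_put_mutex() takes the mutex only when the refcount is about
to drop to zero, and it calls the release function with that mutex held - the
release function is then responsible for unlocking it, which is why
__i915_mm_struct_free() above unhashes the entry and drops mm_lock itself
before scheduling the worker. A generic sketch of the contract (struct obj
and all names are hypothetical):

#include <linux/hashtable.h>
#include <linux/kref.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct obj {
	struct kref kref;
	struct hlist_node node;
};

static DEFINE_MUTEX(table_lock);

static void obj_release(struct kref *kref)
{
	struct obj *o = container_of(kref, struct obj, kref);

	hash_del(&o->node);		/* table_lock is held here */
	mutex_unlock(&table_lock);	/* release must drop the lock */
	kfree(o);
}

static void obj_put(struct obj *o)
{
	/* locks table_lock only if this put drops the last reference */
	kref_put_mutex(&o->kref, obj_release, &table_lock);
}
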
struct get_pages_work { struct get_pages_work {
struct work_struct work; struct work_struct work;
struct drm_i915_gem_object *obj; struct drm_i915_gem_object *obj;
struct task_struct *task; struct task_struct *task;
}; };
#if IS_ENABLED(CONFIG_SWIOTLB) #if IS_ENABLED(CONFIG_SWIOTLB)
#define swiotlb_active() swiotlb_nr_tbl() #define swiotlb_active() swiotlb_nr_tbl()
#else #else
@ -479,7 +536,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
if (pvec == NULL) if (pvec == NULL)
pvec = drm_malloc_ab(num_pages, sizeof(struct page *)); pvec = drm_malloc_ab(num_pages, sizeof(struct page *));
if (pvec != NULL) { if (pvec != NULL) {
struct mm_struct *mm = obj->userptr.mm; struct mm_struct *mm = obj->userptr.mm->mm;
down_read(&mm->mmap_sem); down_read(&mm->mmap_sem);
while (pinned < num_pages) { while (pinned < num_pages) {
@ -545,7 +602,7 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
pvec = NULL; pvec = NULL;
pinned = 0; pinned = 0;
if (obj->userptr.mm == current->mm) { if (obj->userptr.mm->mm == current->mm) {
pvec = kmalloc(num_pages*sizeof(struct page *), pvec = kmalloc(num_pages*sizeof(struct page *),
GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY); GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
if (pvec == NULL) { if (pvec == NULL) {
@ -651,17 +708,13 @@ static void
i915_gem_userptr_release(struct drm_i915_gem_object *obj) i915_gem_userptr_release(struct drm_i915_gem_object *obj)
{ {
i915_gem_userptr_release__mmu_notifier(obj); i915_gem_userptr_release__mmu_notifier(obj);
i915_gem_userptr_release__mm_struct(obj);
if (obj->userptr.mm) {
mmput(obj->userptr.mm);
obj->userptr.mm = NULL;
}
} }
static int static int
i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj) i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj)
{ {
if (obj->userptr.mn) if (obj->userptr.mmu_object)
return 0; return 0;
return i915_gem_userptr_init__mmu_notifier(obj, 0); return i915_gem_userptr_init__mmu_notifier(obj, 0);
@ -736,7 +789,6 @@ i915_gem_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file
return -ENODEV; return -ENODEV;
} }
/* Allocate the new object */
obj = i915_gem_object_alloc(dev); obj = i915_gem_object_alloc(dev);
if (obj == NULL) if (obj == NULL)
return -ENOMEM; return -ENOMEM;
@ -754,8 +806,8 @@ i915_gem_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file
* at binding. This means that we need to hook into the mmu_notifier * at binding. This means that we need to hook into the mmu_notifier
* in order to detect if the mmu is destroyed. * in order to detect if the mmu is destroyed.
*/ */
ret = -ENOMEM; ret = i915_gem_userptr_init__mm_struct(obj);
if ((obj->userptr.mm = get_task_mm(current))) if (ret == 0)
ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags); ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags);
if (ret == 0) if (ret == 0)
ret = drm_gem_handle_create(file, &obj->base, &handle); ret = drm_gem_handle_create(file, &obj->base, &handle);
@ -772,9 +824,8 @@ i915_gem_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file
int int
i915_gem_init_userptr(struct drm_device *dev) i915_gem_init_userptr(struct drm_device *dev)
{ {
#if defined(CONFIG_MMU_NOTIFIER)
struct drm_i915_private *dev_priv = to_i915(dev); struct drm_i915_private *dev_priv = to_i915(dev);
hash_init(dev_priv->mmu_notifiers); mutex_init(&dev_priv->mm_lock);
#endif hash_init(dev_priv->mm_structs);
return 0; return 0;
} }


@ -334,16 +334,20 @@
#define GFX_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1) #define GFX_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1)
#define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3)) #define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) #define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2)
#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4)
#define COLOR_BLT_CMD (2<<29 | 0x40<<22 | (5-2))
#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4)
#define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6) #define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6)
#define XY_MONO_SRC_COPY_IMM_BLT ((2<<29)|(0x71<<22)|5) #define XY_MONO_SRC_COPY_IMM_BLT ((2<<29)|(0x71<<22)|5)
#define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21) #define BLT_WRITE_A (2<<20)
#define XY_SRC_COPY_BLT_WRITE_RGB (1<<20) #define BLT_WRITE_RGB (1<<20)
#define BLT_WRITE_RGBA (BLT_WRITE_RGB | BLT_WRITE_A)
#define BLT_DEPTH_8 (0<<24) #define BLT_DEPTH_8 (0<<24)
#define BLT_DEPTH_16_565 (1<<24) #define BLT_DEPTH_16_565 (1<<24)
#define BLT_DEPTH_16_1555 (2<<24) #define BLT_DEPTH_16_1555 (2<<24)
#define BLT_DEPTH_32 (3<<24) #define BLT_DEPTH_32 (3<<24)
#define BLT_ROP_GXCOPY (0xcc<<16) #define BLT_ROP_SRC_COPY (0xcc<<16)
#define BLT_ROP_COLOR_COPY (0xf0<<16)
#define XY_SRC_COPY_BLT_SRC_TILED (1<<15) /* 965+ only */ #define XY_SRC_COPY_BLT_SRC_TILED (1<<15) /* 965+ only */
#define XY_SRC_COPY_BLT_DST_TILED (1<<11) /* 965+ only */ #define XY_SRC_COPY_BLT_DST_TILED (1<<11) /* 965+ only */
#define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2) #define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2)


@@ -1363,54 +1363,66 @@ i965_dispatch_execbuffer(struct intel_engine_cs *ring,
 /* Just userspace ABI convention to limit the wa batch bo to a resonable size */
 #define I830_BATCH_LIMIT (256*1024)
+#define I830_TLB_ENTRIES (2)
+#define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
 static int
 i830_dispatch_execbuffer(struct intel_engine_cs *ring,
 			 u64 offset, u32 len,
 			 unsigned flags)
 {
+	u32 cs_offset = ring->scratch.gtt_offset;
 	int ret;
 
-	if (flags & I915_DISPATCH_PINNED) {
-		ret = intel_ring_begin(ring, 4);
-		if (ret)
-			return ret;
+	ret = intel_ring_begin(ring, 6);
+	if (ret)
+		return ret;
 
-		intel_ring_emit(ring, MI_BATCH_BUFFER);
-		intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
-		intel_ring_emit(ring, offset + len - 8);
-		intel_ring_emit(ring, MI_NOOP);
-		intel_ring_advance(ring);
-	} else {
-		u32 cs_offset = ring->scratch.gtt_offset;
+	/* Evict the invalid PTE TLBs */
+	intel_ring_emit(ring, COLOR_BLT_CMD | BLT_WRITE_RGBA);
+	intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096);
+	intel_ring_emit(ring, I830_TLB_ENTRIES << 16 | 4); /* load each page */
+	intel_ring_emit(ring, cs_offset);
+	intel_ring_emit(ring, 0xdeadbeef);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
+	if ((flags & I915_DISPATCH_PINNED) == 0) {
 		if (len > I830_BATCH_LIMIT)
 			return -ENOSPC;
 
-		ret = intel_ring_begin(ring, 9+3);
+		ret = intel_ring_begin(ring, 6 + 2);
 		if (ret)
 			return ret;
-		/* Blit the batch (which has now all relocs applied) to the stable batch
-		 * scratch bo area (so that the CS never stumbles over its tlb
-		 * invalidation bug) ... */
-		intel_ring_emit(ring, XY_SRC_COPY_BLT_CMD |
-				XY_SRC_COPY_BLT_WRITE_ALPHA |
-				XY_SRC_COPY_BLT_WRITE_RGB);
-		intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_GXCOPY | 4096);
-		intel_ring_emit(ring, 0);
-		intel_ring_emit(ring, (DIV_ROUND_UP(len, 4096) << 16) | 1024);
+
+		/* Blit the batch (which has now all relocs applied) to the
+		 * stable batch scratch bo area (so that the CS never
+		 * stumbles over its tlb invalidation bug) ...
+		 */
+		intel_ring_emit(ring, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA);
+		intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096);
+		intel_ring_emit(ring, DIV_ROUND_UP(len, 4096) << 16 | 1024);
 		intel_ring_emit(ring, cs_offset);
-		intel_ring_emit(ring, 0);
 		intel_ring_emit(ring, 4096);
 		intel_ring_emit(ring, offset);
+
 		intel_ring_emit(ring, MI_FLUSH);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_advance(ring);
 
 		/* ... and execute it. */
-		intel_ring_emit(ring, MI_BATCH_BUFFER);
-		intel_ring_emit(ring, cs_offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
-		intel_ring_emit(ring, cs_offset + len - 8);
-		intel_ring_advance(ring);
+		offset = cs_offset;
 	}
 
+	ret = intel_ring_begin(ring, 4);
+	if (ret)
+		return ret;
+
+	intel_ring_emit(ring, MI_BATCH_BUFFER);
+	intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
+	intel_ring_emit(ring, offset + len - 8);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
+
 	return 0;
 }
@@ -2200,7 +2212,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	/* Workaround batchbuffer to combat CS tlb bug. */
 	if (HAS_BROKEN_CS_TLB(dev)) {
-		obj = i915_gem_alloc_object(dev, I830_BATCH_LIMIT);
+		obj = i915_gem_alloc_object(dev, I830_WA_SIZE);
 		if (obj == NULL) {
 			DRM_ERROR("Failed to allocate batch bo\n");
 			return -ENOMEM;
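The copy is sized in whole 4096-byte rows: with BLT_DEPTH_32 a width of 1024 pixels is 4096 bytes, so DIV_ROUND_UP(len, 4096) rows always cover the batch. A standalone check of the rounding, using the same macro definition the kernel uses:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
    	unsigned int len = 70000;			/* example batch length */
    	unsigned int rows = DIV_ROUND_UP(len, 4096);	/* -> 18 */

    	/* 18 rows * 4096 bytes = 73728 >= 70000, so the blit covers it */
    	printf("rows=%u, copied=%u\n", rows, rows * 4096);
    	return 0;
    }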


@@ -854,6 +854,10 @@ intel_enable_tv(struct intel_encoder *encoder)
 	struct drm_device *dev = encoder->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
+	/* Prevents vblank waits from timing out in intel_tv_detect_type() */
+	intel_wait_for_vblank(encoder->base.dev,
+			      to_intel_crtc(encoder->base.crtc)->pipe);
+
 	I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE);
 }


@@ -258,28 +258,30 @@ static void set_hdmi_pdev(struct drm_device *dev,
 	priv->hdmi_pdev = pdev;
 }
 
+#ifdef CONFIG_OF
+static int get_gpio(struct device *dev, struct device_node *of_node, const char *name)
+{
+	int gpio = of_get_named_gpio(of_node, name, 0);
+	if (gpio < 0) {
+		char name2[32];
+		snprintf(name2, sizeof(name2), "%s-gpio", name);
+		gpio = of_get_named_gpio(of_node, name2, 0);
+		if (gpio < 0) {
+			dev_err(dev, "failed to get gpio: %s (%d)\n",
+					name, gpio);
+			gpio = -1;
+		}
+	}
+	return gpio;
+}
+#endif
+
 static int hdmi_bind(struct device *dev, struct device *master, void *data)
 {
 	static struct hdmi_platform_config config = {};
 #ifdef CONFIG_OF
 	struct device_node *of_node = dev->of_node;
-	int get_gpio(const char *name)
-	{
-		int gpio = of_get_named_gpio(of_node, name, 0);
-		if (gpio < 0) {
-			char name2[32];
-			snprintf(name2, sizeof(name2), "%s-gpio", name);
-			gpio = of_get_named_gpio(of_node, name2, 0);
-			if (gpio < 0) {
-				dev_err(dev, "failed to get gpio: %s (%d)\n",
-						name, gpio);
-				gpio = -1;
-			}
-		}
-		return gpio;
-	}
 
 	if (of_device_is_compatible(of_node, "qcom,hdmi-tx-8074")) {
 		static const char *hpd_reg_names[] = {"hpd-gdsc", "hpd-5v"};
 		static const char *pwr_reg_names[] = {"core-vdda", "core-vcc"};
@@ -312,12 +314,12 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data)
 	}
 
 	config.mmio_name = "core_physical";
-	config.ddc_clk_gpio = get_gpio("qcom,hdmi-tx-ddc-clk");
-	config.ddc_data_gpio = get_gpio("qcom,hdmi-tx-ddc-data");
-	config.hpd_gpio = get_gpio("qcom,hdmi-tx-hpd");
-	config.mux_en_gpio = get_gpio("qcom,hdmi-tx-mux-en");
-	config.mux_sel_gpio = get_gpio("qcom,hdmi-tx-mux-sel");
-	config.mux_lpm_gpio = get_gpio("qcom,hdmi-tx-mux-lpm");
+	config.ddc_clk_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-clk");
+	config.ddc_data_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-data");
+	config.hpd_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-hpd");
+	config.mux_en_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-en");
+	config.mux_sel_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-sel");
+	config.mux_lpm_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-lpm");
 #else
 	static const char *hpd_clk_names[] = {
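What changed here: the old get_gpio() was a nested function, a GCC-only extension that captured of_node and dev from the enclosing scope. Hoisting the helper to file scope means passing that captured state explicitly. A minimal illustration of the same transformation, with hypothetical names:

    /* GCC extension: nested function capturing a local (not portable) */
    int outer_gcc(int base)
    {
    	int add(int x) { return base + x; }	/* captures "base" */
    	return add(1);
    }

    /* Portable form: hoist to file scope, pass the captured state in */
    static int add(int base, int x)
    {
    	return base + x;
    }

    int outer_c(int base)
    {
    	return add(base, 1);
    }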


@@ -15,19 +15,25 @@
  * this program. If not, see <http://www.gnu.org/licenses/>.
  */
 
+#ifdef CONFIG_COMMON_CLK
 #include <linux/clk.h>
 #include <linux/clk-provider.h>
+#endif
 
 #include "hdmi.h"
 
 struct hdmi_phy_8960 {
 	struct hdmi_phy base;
 	struct hdmi *hdmi;
+#ifdef CONFIG_COMMON_CLK
 	struct clk_hw pll_hw;
 	struct clk *pll;
 	unsigned long pixclk;
+#endif
 };
 #define to_hdmi_phy_8960(x) container_of(x, struct hdmi_phy_8960, base)
+
+#ifdef CONFIG_COMMON_CLK
 #define clk_to_phy(x) container_of(x, struct hdmi_phy_8960, pll_hw)
 
 /*
@@ -374,7 +380,7 @@ static struct clk_init_data pll_init = {
 	.parent_names = hdmi_pll_parents,
 	.num_parents = ARRAY_SIZE(hdmi_pll_parents),
 };
+#endif
 
 /*
  * HDMI Phy:
@@ -480,12 +486,15 @@ struct hdmi_phy *hdmi_phy_8960_init(struct hdmi *hdmi)
 {
 	struct hdmi_phy_8960 *phy_8960;
 	struct hdmi_phy *phy = NULL;
-	int ret, i;
+	int ret;
 
+#ifdef CONFIG_COMMON_CLK
+	int i;
 	/* sanity check: */
 	for (i = 0; i < (ARRAY_SIZE(freqtbl) - 1); i++)
 		if (WARN_ON(freqtbl[i].rate < freqtbl[i+1].rate))
 			return ERR_PTR(-EINVAL);
+#endif
 
 	phy_8960 = kzalloc(sizeof(*phy_8960), GFP_KERNEL);
 	if (!phy_8960) {
@@ -499,6 +508,7 @@ struct hdmi_phy *hdmi_phy_8960_init(struct hdmi *hdmi)
 
 	phy_8960->hdmi = hdmi;
 
+#ifdef CONFIG_COMMON_CLK
 	phy_8960->pll_hw.init = &pll_init;
 	phy_8960->pll = devm_clk_register(hdmi->dev->dev, &phy_8960->pll_hw);
 	if (IS_ERR(phy_8960->pll)) {
@@ -506,6 +516,7 @@ struct hdmi_phy *hdmi_phy_8960_init(struct hdmi *hdmi)
 		phy_8960->pll = NULL;
 		goto fail;
 	}
+#endif
 
 	return phy;
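The clk_to_phy() macro guarded above is the usual container_of() recovery: the clock framework calls back with a pointer to the embedded clk_hw, and the driver converts it back into its wrapper. A self-contained sketch of the mechanism, with simplified types rather than the driver's own:

    #include <stddef.h>

    #define container_of(ptr, type, member) \
    	((type *)((char *)(ptr) - offsetof(type, member)))

    struct clk_hw { int dummy; };

    struct my_phy {
    	int state;
    	struct clk_hw pll_hw;	/* member handed to the clock framework */
    };

    static struct my_phy *clk_to_my_phy(struct clk_hw *hw)
    {
    	/* recover the wrapper from a pointer to its embedded member */
    	return container_of(hw, struct my_phy, pll_hw);
    }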


@@ -52,7 +52,7 @@ module_param(reglog, bool, 0600);
 #define reglog 0
 #endif
 
-static char *vram;
+static char *vram = "16m";
 MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU");
 module_param(vram, charp, 0);
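vram stays a charp parameter, so the "16m" default is a size string rather than a number; in the kernel such strings are typically decoded with memparse(), which understands the k/m/g suffixes. A sketch under that assumption -- the hunk above does not show how msm_drv.c actually parses it:

    #include <linux/kernel.h>
    #include <linux/moduleparam.h>

    static char *vram = "16m";
    module_param(vram, charp, 0);

    static unsigned long long vram_size_bytes(void)
    {
    	/* memparse("16m", NULL) -> 16 * 1024 * 1024 */
    	return memparse(vram, NULL);
    }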


@@ -405,16 +405,13 @@ bool radeon_dp_getdpcd(struct radeon_connector *radeon_connector)
 	u8 msg[DP_DPCD_SIZE];
 	int ret;
 
-	char dpcd_hex_dump[DP_DPCD_SIZE * 3];
-
 	ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
 			       DP_DPCD_SIZE);
 	if (ret > 0) {
 		memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
 
-		hex_dump_to_buffer(dig_connector->dpcd, sizeof(dig_connector->dpcd),
-				   32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false);
-		DRM_DEBUG_KMS("DPCD: %s\n", dpcd_hex_dump);
+		DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
+			      dig_connector->dpcd);
 
 		radeon_dp_probe_oui(radeon_connector);
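The replacement relies on the printk extension %*ph, which hex-dumps a small buffer (up to 64 bytes) and takes the byte count as a preceding int argument, so the manual hex_dump_to_buffer() staging buffer can go away. Usage in isolation:

    #include <linux/printk.h>
    #include <linux/types.h>

    static void show_dpcd_example(void)
    {
    	u8 dpcd[4] = { 0x12, 0x10, 0x84, 0x01 };	/* sample values */

    	/* prints "DPCD: 12 10 84 01" */
    	pr_debug("DPCD: %*ph\n", (int)sizeof(dpcd), dpcd);
    }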


@@ -2769,8 +2769,8 @@ bool r600_semaphore_ring_emit(struct radeon_device *rdev,
 	radeon_ring_write(ring, lower_32_bits(addr));
 	radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
 
-	/* PFP_SYNC_ME packet only exists on 7xx+ */
-	if (emit_wait && (rdev->family >= CHIP_RV770)) {
+	/* PFP_SYNC_ME packet only exists on 7xx+, only enable it on eg+ */
+	if (emit_wait && (rdev->family >= CHIP_CEDAR)) {
 		/* Prevent the PFP from running ahead of the semaphore wait */
 		radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
 		radeon_ring_write(ring, 0x0);


@@ -447,6 +447,13 @@ static bool radeon_atom_apply_quirks(struct drm_device *dev,
 		}
 	}
 
+	/* Fujitsu D3003-S2 board lists DVI-I as DVI-I and VGA */
+	if ((dev->pdev->device == 0x9805) &&
+	    (dev->pdev->subsystem_vendor == 0x1734) &&
+	    (dev->pdev->subsystem_device == 0x11bd)) {
+		if (*connector_type == DRM_MODE_CONNECTOR_VGA)
+			return false;
+	}
+
 	return true;
 }
@@ -2281,19 +2288,31 @@ static void radeon_atombios_add_pplib_thermal_controller(struct radeon_device *r
 			 (controller->ucFanParameters &
 			  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
 			rdev->pm.int_thermal_type = THERMAL_TYPE_KV;
-		} else if ((controller->ucType ==
-			    ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) ||
-			   (controller->ucType ==
-			    ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) ||
-			   (controller->ucType ==
-			    ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL)) {
-			DRM_INFO("Special thermal controller config\n");
+		} else if (controller->ucType ==
+			   ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
+			DRM_INFO("External GPIO thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
+		} else if (controller->ucType ==
+			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
+			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			rdev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
+		} else if (controller->ucType ==
+			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
+			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			rdev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
 		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
 			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
 				 pp_lib_thermal_controller_names[controller->ucType],
 				 controller->ucI2cAddress >> 1,
 				 (controller->ucFanParameters &
 				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
 			i2c_bus = radeon_lookup_i2c_gpio(rdev, controller->ucI2cLine);
 			rdev->pm.i2c_bus = radeon_i2c_lookup(rdev, &i2c_bus);
 			if (rdev->pm.i2c_bus) {


@@ -34,7 +34,7 @@
 int radeon_semaphore_create(struct radeon_device *rdev,
 			    struct radeon_semaphore **semaphore)
 {
-	uint32_t *cpu_addr;
+	uint64_t *cpu_addr;
 	int i, r;
 
 	*semaphore = kmalloc(sizeof(struct radeon_semaphore), GFP_KERNEL);


@@ -1089,6 +1089,30 @@ static int __mlx4_ib_destroy_flow(struct mlx4_dev *dev, u64 reg_id)
 	return err;
 }
 
+static int mlx4_ib_tunnel_steer_add(struct ib_qp *qp, struct ib_flow_attr *flow_attr,
+				    u64 *reg_id)
+{
+	void *ib_flow;
+	union ib_flow_spec *ib_spec;
+	struct mlx4_dev *dev = to_mdev(qp->device)->dev;
+	int err = 0;
+
+	if (dev->caps.tunnel_offload_mode != MLX4_TUNNEL_OFFLOAD_MODE_VXLAN)
+		return 0; /* do nothing */
+
+	ib_flow = flow_attr + 1;
+	ib_spec = (union ib_flow_spec *)ib_flow;
+
+	if (ib_spec->type != IB_FLOW_SPEC_ETH || flow_attr->num_of_specs != 1)
+		return 0; /* do nothing */
+
+	err = mlx4_tunnel_steer_add(to_mdev(qp->device)->dev, ib_spec->eth.val.dst_mac,
+				    flow_attr->port, qp->qp_num,
+				    MLX4_DOMAIN_UVERBS | (flow_attr->priority & 0xff),
+				    reg_id);
+	return err;
+}
+
 static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
 				    struct ib_flow_attr *flow_attr,
 				    int domain)
@@ -1136,6 +1160,12 @@ static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
 		i++;
 	}
 
+	if (i < ARRAY_SIZE(type) && flow_attr->type == IB_FLOW_ATTR_NORMAL) {
+		err = mlx4_ib_tunnel_steer_add(qp, flow_attr, &mflow->reg_id[i]);
+		if (err)
+			goto err_free;
+	}
+
 	return &mflow->ibflow;
 
 err_free:
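The ib_flow = flow_attr + 1 line leans on the verbs layout in which the flow specs sit immediately after the ib_flow_attr header in one allocation; pointer arithmetic on a struct pointer advances by whole sizeof(*flow_attr) steps. A generic standalone illustration with toy types (all-int members, so no padding between header and specs here):

    #include <stdio.h>

    struct hdr  { int type; int num_of_specs; };
    struct spec { int type; int val; };

    int main(void)
    {
    	/* header immediately followed by its specs, one contiguous block */
    	struct {
    		struct hdr  h;
    		struct spec s[2];
    	} msg = { { 1, 2 }, { { 10, 0xaa }, { 11, 0xbb } } };

    	struct hdr  *h     = &msg.h;
    	struct spec *first = (struct spec *)(h + 1);	/* skips sizeof(*h) */

    	printf("first spec: type=%d val=0x%x\n", first->type, first->val);
    	return 0;
    }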


@@ -1677,9 +1677,15 @@ static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
 		}
 	}
 
-	if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET)
+	if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET) {
 		context->pri_path.ackto = (context->pri_path.ackto & 0xf8) |
 					MLX4_IB_LINK_TYPE_ETH;
+		if (dev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
+			/* set QP to receive both tunneled & non-tunneled packets */
+			if (!(context->flags & (1 << MLX4_RSS_QPC_FLAG_OFFSET)))
+				context->srqn = cpu_to_be32(7 << 28);
+		}
+	}
 
 	if (ibqp->qp_type == IB_QPT_UD && (new_state == IB_QPS_RTR)) {
 		int is_eth = rdma_port_get_link_layer(


@@ -33,8 +33,8 @@
 #define CAP1106_REG_SENSOR_CONFIG 0x22
 #define CAP1106_REG_SENSOR_CONFIG2 0x23
 #define CAP1106_REG_SAMPLING_CONFIG 0x24
-#define CAP1106_REG_CALIBRATION 0x25
-#define CAP1106_REG_INT_ENABLE 0x26
+#define CAP1106_REG_CALIBRATION 0x26
+#define CAP1106_REG_INT_ENABLE 0x27
 #define CAP1106_REG_REPEAT_RATE 0x28
 #define CAP1106_REG_MT_CONFIG 0x2a
 #define CAP1106_REG_MT_PATTERN_CONFIG 0x2b


@@ -332,23 +332,24 @@ static int matrix_keypad_init_gpio(struct platform_device *pdev,
 	}
 
 	if (pdata->clustered_irq > 0) {
-		err = request_irq(pdata->clustered_irq,
+		err = request_any_context_irq(pdata->clustered_irq,
 				matrix_keypad_interrupt,
 				pdata->clustered_irq_flags,
 				"matrix-keypad", keypad);
-		if (err) {
+		if (err < 0) {
 			dev_err(&pdev->dev,
 				"Unable to acquire clustered interrupt\n");
 			goto err_free_rows;
 		}
 	} else {
 		for (i = 0; i < pdata->num_row_gpios; i++) {
-			err = request_irq(gpio_to_irq(pdata->row_gpios[i]),
+			err = request_any_context_irq(
+					gpio_to_irq(pdata->row_gpios[i]),
 					matrix_keypad_interrupt,
 					IRQF_TRIGGER_RISING |
 					IRQF_TRIGGER_FALLING,
 					"matrix-keypad", keypad);
-			if (err) {
+			if (err < 0) {
 				dev_err(&pdev->dev,
 					"Unable to acquire interrupt for GPIO line %i\n",
 					pdata->row_gpios[i]);
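The switch to request_any_context_irq() forces the error checks to change too: it returns a negative errno on failure, but on success returns either IRQC_IS_HARDIRQ or IRQC_IS_NESTED, both >= 0, so a plain "if (err)" would misread a successful nested-IRQ request as an error. Minimal sketch:

    #include <linux/interrupt.h>

    static int example_hook_irq(unsigned int irq, irq_handler_t handler, void *dev_id)
    {
    	int err = request_any_context_irq(irq, handler,
    					  IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
    					  "example", dev_id);

    	if (err < 0)	/* not "if (err)" */
    		return err;

    	/* success: err was IRQC_IS_HARDIRQ or IRQC_IS_NESTED */
    	return 0;
    }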


@@ -2373,6 +2373,10 @@ int alps_init(struct psmouse *psmouse)
 	dev2->keybit[BIT_WORD(BTN_LEFT)] =
 		BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) | BIT_MASK(BTN_RIGHT);
 
+	__set_bit(INPUT_PROP_POINTER, dev2->propbit);
+	if (priv->flags & ALPS_DUALPOINT)
+		__set_bit(INPUT_PROP_POINTING_STICK, dev2->propbit);
+
 	if (input_register_device(priv->dev2))
 		goto init_fail;


@@ -1331,6 +1331,13 @@ static bool elantech_is_signature_valid(const unsigned char *param)
 	if (param[1] == 0)
 		return true;
 
+	/*
+	 * Some models have a revision higher than 20. Meaning param[2] may
+	 * be 10 or 20, skip the rates check for these.
+	 */
+	if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40)
+		return true;
+
 	for (i = 0; i < ARRAY_SIZE(rates); i++)
 		if (param[2] == rates[i])
 			return false;
@@ -1607,6 +1614,10 @@ int elantech_init(struct psmouse *psmouse)
 		tp_dev->keybit[BIT_WORD(BTN_LEFT)] =
 			BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) |
 			BIT_MASK(BTN_RIGHT);
+
+		__set_bit(INPUT_PROP_POINTER, tp_dev->propbit);
+		__set_bit(INPUT_PROP_POINTING_STICK, tp_dev->propbit);
+
 		error = input_register_device(etd->tp_dev);
 		if (error < 0)
 			goto init_fail_tp_reg;
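The early-accept test is a masked compare: (param[1] & 0xef) == 0x0f folds the firmware bytes 0x0f and 0x1f into a single comparison by ignoring bit 4. A quick brute-force check:

    #include <stdio.h>

    int main(void)
    {
    	unsigned int x;

    	/* masking out bit 4 makes 0x0f and 0x1f compare equal to 0x0f */
    	for (x = 0; x < 256; x++)
    		if ((x & 0xef) == 0x0f)
    			printf("accepted: 0x%02x\n", x);	/* 0x0f, 0x1f */
    	return 0;
    }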


@@ -670,6 +670,8 @@ static void psmouse_apply_defaults(struct psmouse *psmouse)
 	__set_bit(REL_X, input_dev->relbit);
 	__set_bit(REL_Y, input_dev->relbit);
 
+	__set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+
 	psmouse->set_rate = psmouse_set_rate;
 	psmouse->set_resolution = psmouse_set_resolution;
 	psmouse->poll = psmouse_poll;


@@ -629,10 +629,61 @@ static int synaptics_parse_hw_state(const unsigned char buf[],
 			 ((buf[0] & 0x04) >> 1) |
 			 ((buf[3] & 0x04) >> 2));
 
+		if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) ||
+			SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) &&
+		    hw->w == 2) {
+			synaptics_parse_agm(buf, priv, hw);
+			return 1;
+		}
+
+		hw->x = (((buf[3] & 0x10) << 8) |
+			 ((buf[1] & 0x0f) << 8) |
+			 buf[4]);
+		hw->y = (((buf[3] & 0x20) << 7) |
+			 ((buf[1] & 0xf0) << 4) |
+			 buf[5]);
+		hw->z = buf[2];
+
 		hw->left = (buf[0] & 0x01) ? 1 : 0;
 		hw->right = (buf[0] & 0x02) ? 1 : 0;
 
-		if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
+		if (SYN_CAP_FORCEPAD(priv->ext_cap_0c)) {
+			/*
+			 * ForcePads, like Clickpads, use middle button
+			 * bits to report primary button clicks.
+			 * Unfortunately they report primary button not
+			 * only when user presses on the pad above certain
+			 * threshold, but also when there are more than one
+			 * finger on the touchpad, which interferes with
+			 * our multi-finger gestures.
+			 */
+			if (hw->z == 0) {
+				/* No contacts */
+				priv->press = priv->report_press = false;
+			} else if (hw->w >= 4 && ((buf[0] ^ buf[3]) & 0x01)) {
+				/*
+				 * Single-finger touch with pressure above
+				 * the threshold. If pressure stays long
+				 * enough, we'll start reporting primary
+				 * button. We rely on the device continuing
+				 * sending data even if finger does not
+				 * move.
+				 */
+				if (!priv->press) {
+					priv->press_start = jiffies;
+					priv->press = true;
+				} else if (time_after(jiffies,
+						priv->press_start +
+							msecs_to_jiffies(50))) {
+					priv->report_press = true;
+				}
+			} else {
+				priv->press = false;
+			}
+
+			hw->left = priv->report_press;
+
+		} else if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
 			/*
 			 * Clickpad's button is transmitted as middle button,
 			 * however, since it is primary button, we will report
@@ -651,21 +702,6 @@ static int synaptics_parse_hw_state(const unsigned char buf[],
 			hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0;
 		}
 
-		if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) ||
-			SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) &&
-		    hw->w == 2) {
-			synaptics_parse_agm(buf, priv, hw);
-			return 1;
-		}
-
-		hw->x = (((buf[3] & 0x10) << 8) |
-			 ((buf[1] & 0x0f) << 8) |
-			 buf[4]);
-		hw->y = (((buf[3] & 0x20) << 7) |
-			 ((buf[1] & 0xf0) << 4) |
-			 buf[5]);
-		hw->z = buf[2];
-
 		if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) &&
 		    ((buf[0] ^ buf[3]) & 0x02)) {
 			switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) {
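The ForcePad branch is in effect a jiffies-based debounce: the press timestamp is latched when pressure first crosses the threshold, and the click is reported only once it has persisted for 50 ms. The same logic reduced to a standalone helper:

    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct press_state {
    	unsigned long start;	/* jiffies at first over-threshold sample */
    	bool pressed;
    };

    static bool debounced_click(struct press_state *s, bool over_threshold)
    {
    	if (!over_threshold) {
    		s->pressed = false;
    		return false;
    	}
    	if (!s->pressed) {
    		s->start = jiffies;	/* latch the press start */
    		s->pressed = true;
    		return false;
    	}
    	/* report only after pressure has persisted for 50 ms */
    	return time_after(jiffies, s->start + msecs_to_jiffies(50));
    }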


@@ -78,6 +78,11 @@
  * 2	0x08	image sensor		image sensor tracks 5 fingers, but only
  *					reports 2.
  * 2	0x20	report min		query 0x0f gives min coord reported
+ * 2	0x80	forcepad		forcepad is a variant of clickpad that
+ *					does not have physical buttons but rather
+ *					uses pressure above certain threshold to
+ *					report primary clicks. Forcepads also have
+ *					clickpad bit set.
  */
 #define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & 0x100000) /* 1-button ClickPad */
 #define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & 0x000100) /* 2-button ClickPad */
@@ -86,6 +91,7 @@
 #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000)
 #define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400)
 #define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800)
+#define SYN_CAP_FORCEPAD(ex0c) ((ex0c) & 0x008000)
 
 /* synaptics modes query bits */
 #define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7))
@@ -177,6 +183,11 @@ struct synaptics_data {
 	 */
 	struct synaptics_hw_state agm;
 	bool agm_pending;			/* new AGM packet received */
+
+	/* ForcePad handling */
+	unsigned long press_start;
+	bool press;
+	bool report_press;
 };
 
 void synaptics_module_init(void);


@@ -387,6 +387,7 @@ static int synusb_probe(struct usb_interface *intf,
 		__set_bit(EV_REL, input_dev->evbit);
 		__set_bit(REL_X, input_dev->relbit);
 		__set_bit(REL_Y, input_dev->relbit);
+		__set_bit(INPUT_PROP_POINTING_STICK, input_dev->propbit);
 		input_set_abs_params(input_dev, ABS_PRESSURE, 0, 127, 0, 0);
 	} else {
 		input_set_abs_params(input_dev, ABS_X,
@@ -401,6 +402,11 @@ static int synusb_probe(struct usb_interface *intf,
 		__set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);
 	}
 
+	if (synusb->flags & SYNUSB_TOUCHSCREEN)
+		__set_bit(INPUT_PROP_DIRECT, input_dev->propbit);
+	else
+		__set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+
 	__set_bit(BTN_LEFT, input_dev->keybit);
 	__set_bit(BTN_RIGHT, input_dev->keybit);
 	__set_bit(BTN_MIDDLE, input_dev->keybit);

Some files were not shown because too many files have changed in this diff.