Commit Graph

855685 Commits

Author SHA1 Message Date
Linus Torvalds c92f038067 media updates for v5.3-rc1
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE+QmuaPwR3wnBdVwACF8+vY7k4RUFAl01pAkACgkQCF8+vY7k
 4RXwTA/7BrsINB+6rRQxwBadSbTJvfhJ0WwNKYgbAJJfE3v91zEYNGkFgbwgYIcw
 r6pXQSr2jPxFv6mv1rXRwjU8+eOucrgl2VIjRka0CPVBMrHsF1ENGM0+DbfNqjNU
 8ygjyHZlGdThOarECkl76z0GXKprr79MKUMIEfeekDVz18QCQ2ER6gyKhyGz9LfI
 Q0uDdWbGnXyu27ubRwLm7Cl6QTZ9Lhmcx3MPbEwD6bgLiy7PAIWXun3RyzGZCj6q
 ps5DUBNtPM35k995JWOrGh4raW2VhJyTRWYJKErIfNe24bFg2dX2DMLcIfRaA4oF
 joQJ+JvJX+V01QKxz8VksXPhWbsJ+9SQAaVnJ8iTS10HXULgb+Eymst86pmhUMM3
 DFzouvcFdLXUwHJk4xXgCMabfMolIZwfhJRB220A+bP8AS+RmbsoP6zIorkTQQH3
 3+iGX/s1GM7ARQhSXkwogltmqZuy2GSxq7EZrQr1lmI4ZW/0YxaxMnecgD7+TWEP
 tU4e9R1WcMPkHbLYQnM7tzcsGwywkbGjX7vIEK52ERSy4g0TgHmO03tsL+kL6S5j
 AQVAAfaDEzEi4Ar4nL7wxm2hoLAfwUUyCv+t84AivTpdHLO7QoznYgvS0GAbRjTB
 t7YN5Uk2VVuCtfPsem704i5a9QIubB9pBZsYuvP1/miVxlhpIFw=
 =ZxJh
 -----END PGP SIGNATURE-----

Merge tag 'media/v5.3-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media

Pull media fixes from Mauro Carvalho Chehab:
 "For two regressions in media core:

   - v4l2-subdev: fix regression in check_pad()

   - videodev2.h: change V4L2_PIX_FMT_BGRA444 define: fourcc was already
     in use"

* tag 'media/v5.3-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
  media: videodev2.h: change V4L2_PIX_FMT_BGRA444 define: fourcc was already in use
  media: v4l2-subdev: fix regression in check_pad()
2019-07-22 09:01:47 -07:00
Sai Praneeth Prakhya 7f6cade5b6 iommu/vt-d: Print pasid table entries MSB to LSB in debugfs
Commit dd5142ca5d ("iommu/vt-d: Add debugfs support to show scalable mode
DMAR table internals") prints the contents of pasid table entries from LSB
to MSB, whereas other entries are printed MSB to LSB. To maintain
uniformity among all entries and avoid confusing the user, print the MSB
first.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Sohil Mehta <sohil.mehta@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Fixes: dd5142ca5d ("iommu/vt-d: Add debugfs support to show scalable mode DMAR table internals")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-22 17:52:57 +02:00
Chris Wilson 9eed17d37c iommu/iova: Remove stale cached32_node
Since the cached32_node is allowed to be advanced above dma_32bit_pfn
(to provide a shortcut into the limited range), we need to be careful to
remove the to-be-freed node if it is the cached32_node.
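
A minimal sketch of the kind of guard needed (a sketch, not necessarily
the exact patch; field names are from the iova allocator):

  /* Sketch: also advance the cache when the freed node IS the cached one. */
  static void
  __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
  {
      struct iova *cached = rb_entry(iovad->cached32_node, struct iova, node);

      if (free == cached ||
          (free->pfn_hi < iovad->dma_32bit_pfn &&
           free->pfn_lo >= cached->pfn_lo))
          iovad->cached32_node = rb_next(&free->node);
  }

The KASAN report that motivated the change: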

[   48.477773] BUG: KASAN: use-after-free in __cached_rbnode_delete_update+0x68/0x110
[   48.477812] Read of size 8 at addr ffff88870fc19020 by task kworker/u8:1/37
[   48.477843]
[   48.477879] CPU: 1 PID: 37 Comm: kworker/u8:1 Tainted: G     U            5.2.0+ #735
[   48.477915] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
[   48.478047] Workqueue: i915 __i915_gem_free_work [i915]
[   48.478075] Call Trace:
[   48.478111]  dump_stack+0x5b/0x90
[   48.478137]  print_address_description+0x67/0x237
[   48.478178]  ? __cached_rbnode_delete_update+0x68/0x110
[   48.478212]  __kasan_report.cold.3+0x1c/0x38
[   48.478240]  ? __cached_rbnode_delete_update+0x68/0x110
[   48.478280]  ? __cached_rbnode_delete_update+0x68/0x110
[   48.478308]  __cached_rbnode_delete_update+0x68/0x110
[   48.478344]  private_free_iova+0x2b/0x60
[   48.478378]  iova_magazine_free_pfns+0x46/0xa0
[   48.478403]  free_iova_fast+0x277/0x340
[   48.478443]  fq_ring_free+0x15a/0x1a0
[   48.478473]  queue_iova+0x19c/0x1f0
[   48.478597]  cleanup_page_dma.isra.64+0x62/0xb0 [i915]
[   48.478712]  __gen8_ppgtt_cleanup+0x63/0x80 [i915]
[   48.478826]  __gen8_ppgtt_cleanup+0x42/0x80 [i915]
[   48.478940]  __gen8_ppgtt_clear+0x433/0x4b0 [i915]
[   48.479053]  __gen8_ppgtt_clear+0x462/0x4b0 [i915]
[   48.479081]  ? __sg_free_table+0x9e/0xf0
[   48.479116]  ? kfree+0x7f/0x150
[   48.479234]  i915_vma_unbind+0x1e2/0x240 [i915]
[   48.479352]  i915_vma_destroy+0x3a/0x280 [i915]
[   48.479465]  __i915_gem_free_objects+0xf0/0x2d0 [i915]
[   48.479579]  __i915_gem_free_work+0x41/0xa0 [i915]
[   48.479607]  process_one_work+0x495/0x710
[   48.479642]  worker_thread+0x4c7/0x6f0
[   48.479687]  ? process_one_work+0x710/0x710
[   48.479724]  kthread+0x1b2/0x1d0
[   48.479774]  ? kthread_create_worker_on_cpu+0xa0/0xa0
[   48.479820]  ret_from_fork+0x1f/0x30
[   48.479864]
[   48.479907] Allocated by task 631:
[   48.479944]  save_stack+0x19/0x80
[   48.479994]  __kasan_kmalloc.constprop.6+0xc1/0xd0
[   48.480038]  kmem_cache_alloc+0x91/0xf0
[   48.480082]  alloc_iova+0x2b/0x1e0
[   48.480125]  alloc_iova_fast+0x58/0x376
[   48.480166]  intel_alloc_iova+0x90/0xc0
[   48.480214]  intel_map_sg+0xde/0x1f0
[   48.480343]  i915_gem_gtt_prepare_pages+0xb8/0x170 [i915]
[   48.480465]  huge_get_pages+0x232/0x2b0 [i915]
[   48.480590]  ____i915_gem_object_get_pages+0x40/0xb0 [i915]
[   48.480712]  __i915_gem_object_get_pages+0x90/0xa0 [i915]
[   48.480834]  i915_gem_object_prepare_write+0x2d6/0x330 [i915]
[   48.480955]  create_test_object.isra.54+0x1a9/0x3e0 [i915]
[   48.481075]  igt_shared_ctx_exec+0x365/0x3c0 [i915]
[   48.481210]  __i915_subtests.cold.4+0x30/0x92 [i915]
[   48.481341]  __run_selftests.cold.3+0xa9/0x119 [i915]
[   48.481466]  i915_live_selftests+0x3c/0x70 [i915]
[   48.481583]  i915_pci_probe+0xe7/0x220 [i915]
[   48.481620]  pci_device_probe+0xe0/0x180
[   48.481665]  really_probe+0x163/0x4e0
[   48.481710]  device_driver_attach+0x85/0x90
[   48.481750]  __driver_attach+0xa5/0x180
[   48.481796]  bus_for_each_dev+0xda/0x130
[   48.481831]  bus_add_driver+0x205/0x2e0
[   48.481882]  driver_register+0xca/0x140
[   48.481927]  do_one_initcall+0x6c/0x1af
[   48.481970]  do_init_module+0x106/0x350
[   48.482010]  load_module+0x3d2c/0x3ea0
[   48.482058]  __do_sys_finit_module+0x110/0x180
[   48.482102]  do_syscall_64+0x62/0x1f0
[   48.482147]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   48.482190]
[   48.482224] Freed by task 37:
[   48.482273]  save_stack+0x19/0x80
[   48.482318]  __kasan_slab_free+0x12e/0x180
[   48.482363]  kmem_cache_free+0x70/0x140
[   48.482406]  __free_iova+0x1d/0x30
[   48.482445]  fq_ring_free+0x15a/0x1a0
[   48.482490]  queue_iova+0x19c/0x1f0
[   48.482624]  cleanup_page_dma.isra.64+0x62/0xb0 [i915]
[   48.482749]  __gen8_ppgtt_cleanup+0x63/0x80 [i915]
[   48.482873]  __gen8_ppgtt_cleanup+0x42/0x80 [i915]
[   48.482999]  __gen8_ppgtt_clear+0x433/0x4b0 [i915]
[   48.483123]  __gen8_ppgtt_clear+0x462/0x4b0 [i915]
[   48.483250]  i915_vma_unbind+0x1e2/0x240 [i915]
[   48.483378]  i915_vma_destroy+0x3a/0x280 [i915]
[   48.483500]  __i915_gem_free_objects+0xf0/0x2d0 [i915]
[   48.483622]  __i915_gem_free_work+0x41/0xa0 [i915]
[   48.483659]  process_one_work+0x495/0x710
[   48.483704]  worker_thread+0x4c7/0x6f0
[   48.483748]  kthread+0x1b2/0x1d0
[   48.483787]  ret_from_fork+0x1f/0x30
[   48.483831]
[   48.483868] The buggy address belongs to the object at ffff88870fc19000
[   48.483868]  which belongs to the cache iommu_iova of size 40
[   48.483920] The buggy address is located 32 bytes inside of
[   48.483920]  40-byte region [ffff88870fc19000, ffff88870fc19028)
[   48.483964] The buggy address belongs to the page:
[   48.484006] page:ffffea001c3f0600 refcount:1 mapcount:0 mapping:ffff8888181a91c0 index:0x0 compound_mapcount: 0
[   48.484045] flags: 0x8000000000010200(slab|head)
[   48.484096] raw: 8000000000010200 ffffea001c421a08 ffffea001c447e88 ffff8888181a91c0
[   48.484141] raw: 0000000000000000 0000000000120012 00000001ffffffff 0000000000000000
[   48.484188] page dumped because: kasan: bad access detected
[   48.484230]
[   48.484265] Memory state around the buggy address:
[   48.484314]  ffff88870fc18f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   48.484361]  ffff88870fc18f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   48.484406] >ffff88870fc19000: fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc
[   48.484451]                                ^
[   48.484494]  ffff88870fc19080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   48.484530]  ffff88870fc19100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108602
Fixes: e60aa7b538 ("iommu/iova: Extend rbtree node caching")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: <stable@vger.kernel.org> # v4.15+
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-22 17:50:49 +02:00
Linus Torvalds 83768245a3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Several netfilter fixes including a nfnetlink deadlock fix from
    Florian Westphal and fix for dropping VRF packets from Miaohe Lin.

 2) Flow offload fixes from Pablo Neira Ayuso including a fix to restore
    proper block sharing.

 3) Fix r8169 PHY init from Thomas Voegtle.

 4) Fix memory leak in mac80211, from Lorenzo Bianconi.

 5) Missing NULL check on object allocation in cxgb4, from Navid
    Emamdoost.

 6) Fix scaling of RX power in sfp phy driver, from Andrew Lunn.

 7) Check that there is actually an ip header to access in skb->data in
    VRF, from Peter Kosyh.

 8) Remove spurious rcu unlock in hv_netvsc, from Haiyang Zhang.

 9) One more tweak to the TCP fragmentation memory limit changes, to be
    less harmful to applications setting small SO_SNDBUF values. From
    Eric Dumazet.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (40 commits)
  tcp: be more careful in tcp_fragment()
  hv_netvsc: Fix extra rcu_read_unlock in netvsc_recv_callback()
  vrf: make sure skb->data contains ip header to make routing
  connector: remove redundant input callback from cn_dev
  qed: Prefer pcie_capability_read_word()
  igc: Prefer pcie_capability_read_word()
  cxgb4: Prefer pcie_capability_read_word()
  be2net: Synchronize be_update_queues with dev_watchdog
  bnx2x: Prevent load reordering in tx completion processing
  net: phy: sfp: hwmon: Fix scaling of RX power
  net: sched: verify that q!=NULL before setting q->flags
  chelsio: Fix a typo in a function name
  allocate_flower_entry: should check for null deref
  net: hns3: typo in the name of a constant
  kbuild: add net/netfilter/nf_tables_offload.h to header-test blacklist.
  tipc: Fix a typo
  mac80211: don't warn about CW params when not using them
  mac80211: fix possible memory leak in ieee80211_assign_beacon
  nl80211: fix NL80211_HE_MAX_CAPABILITY_LEN
  nl80211: fix VENDOR_CMD_RAW_DATA
  ...
2019-07-22 08:49:22 -07:00
Dmitry Safonov 3ee9eca760 iommu/vt-d: Check if domain->pgd was allocated
There are a couple of places where domain_exit() is called on
domain_init() failure, although currently domain_init() can fail only if
alloc_pgtable_page() has failed.

Make domain_exit() check whether domain->pgd is present before calling
domain_unmap(), as it would otherwise crash on clearing pte entries in
dma_pte_clear_level().
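
A sketch of the intended guard (helper names as used elsewhere in the
intel-iommu code; not necessarily the exact patch):

  static void domain_exit(struct dmar_domain *domain)
  {
      /* ... remove the domain from global lists, free reserved iovas ... */

      /* Sketch: only walk and free page tables if they were allocated. */
      if (domain->pgd) {
          struct page *freelist;

          freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
          dma_free_pagelist(freelist);
      }

      free_domain_mem(domain);
  }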

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Signed-off-by: Dmitry Safonov <dima@arista.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-22 17:43:06 +02:00
Dmitry Safonov effa467870 iommu/vt-d: Don't queue_iova() if there is no flush queue
The Intel VT-d driver was reworked to use the common deferred flushing
implementation. Previously there was one global per-cpu flush queue;
afterwards, one per domain.

Before deferring a flush, the queue should be allocated and initialized.

Currently only domains with the IOMMU_DOMAIN_DMA type initialize their
flush queue. It's probably worth initializing it for static or unmanaged
domains too, but that may be arguable - I'm leaving it to the iommu folks.

Prevent queuing an iova flush if the domain doesn't have a queue.
The defensive check seems worth keeping even if the queue were
initialized for all kinds of domains, and it is easily backportable.
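
A sketch of the check (assuming the flush queue lives in iovad->fq, as
the deferred-flushing code uses):

  /* Sketch: fall back to a synchronous flush when no queue exists. */
  bool has_iova_flush_queue(struct iova_domain *iovad)
  {
      return iovad->fq != NULL;
  }

  /* In the unmap path: */
  if (!has_iova_flush_queue(&domain->iovad)) {
      iommu_flush_iotlb_psi(iommu, domain, start_pfn, nrpages,
                            !freelist, 0);
      dma_free_pagelist(freelist);
  } else {
      queue_iova(&domain->iovad, iova_pfn, nrpages,
                 (unsigned long)freelist);
  }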

On the 4.19.43 stable kernel this has a user-visible effect: previously
there were crashes for devices in the si domain, e.g. on sata devices:

 BUG: spinlock bad magic on CPU#6, swapper/0/1
  lock: 0xffff88844f582008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
 CPU: 6 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #1
 Call Trace:
  <IRQ>
  dump_stack+0x61/0x7e
  spin_bug+0x9d/0xa3
  do_raw_spin_lock+0x22/0x8e
  _raw_spin_lock_irqsave+0x32/0x3a
  queue_iova+0x45/0x115
  intel_unmap+0x107/0x113
  intel_unmap_sg+0x6b/0x76
  __ata_qc_complete+0x7f/0x103
  ata_qc_complete+0x9b/0x26a
  ata_qc_complete_multiple+0xd0/0xe3
  ahci_handle_port_interrupt+0x3ee/0x48a
  ahci_handle_port_intr+0x73/0xa9
  ahci_single_level_irq_intr+0x40/0x60
  __handle_irq_event_percpu+0x7f/0x19a
  handle_irq_event_percpu+0x32/0x72
  handle_irq_event+0x38/0x56
  handle_edge_irq+0x102/0x121
  handle_irq+0x147/0x15c
  do_IRQ+0x66/0xf2
  common_interrupt+0xf/0xf
 RIP: 0010:__do_softirq+0x8c/0x2df

The same happens for usb devices that use ehci-pci:
 BUG: spinlock bad magic on CPU#0, swapper/0/1
  lock: 0xffff88844f402008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
 CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #4
 Call Trace:
  <IRQ>
  dump_stack+0x61/0x7e
  spin_bug+0x9d/0xa3
  do_raw_spin_lock+0x22/0x8e
  _raw_spin_lock_irqsave+0x32/0x3a
  queue_iova+0x77/0x145
  intel_unmap+0x107/0x113
  intel_unmap_page+0xe/0x10
  usb_hcd_unmap_urb_setup_for_dma+0x53/0x9d
  usb_hcd_unmap_urb_for_dma+0x17/0x100
  unmap_urb_for_dma+0x22/0x24
  __usb_hcd_giveback_urb+0x51/0xc3
  usb_giveback_urb_bh+0x97/0xde
  tasklet_action_common.isra.4+0x5f/0xa1
  tasklet_action+0x2d/0x30
  __do_softirq+0x138/0x2df
  irq_exit+0x7d/0x8b
  smp_apic_timer_interrupt+0x10f/0x151
  apic_timer_interrupt+0xf/0x20
  </IRQ>
 RIP: 0010:_raw_spin_unlock_irqrestore+0x17/0x39

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: <stable@vger.kernel.org> # 4.14+
Fixes: 13cf017446 ("iommu/vt-d: Make use of iova deferred flushing")
Signed-off-by: Dmitry Safonov <dima@arista.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-22 17:43:06 +02:00
James Morse 40ca0ce56d arm64: entry: SP Alignment Fault doesn't write to FAR_EL1
Comparing the Arm ARM's pseudocode for AArch64.PCAlignmentFault() with
AArch64.SPAlignmentFault() shows that SP faults don't copy the faulting
SP to FAR_EL1, yet this is where we read from, and the address we provide
to user-space with the BUS_ADRALN signal.

For user-space this value will be UNKNOWN due to the previous ERET to
user-space. If the last value is preserved, on systems with KASLR or KPTI
this will be the user-space link-register left in FAR_EL1 by tramp_exit().
Fix this to retrieve the original sp_el0 value, and pass it to
do_sp_pc_fault().

SP alignment faults from EL1 will cause us to take the fault again when
trying to store the pt_regs. This eventually takes us to the overflow
stack. Remove the ESR_ELx_EC_SP_ALIGN check as we will never make it
this far.
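
The real change is in the entry assembly, but the address selection
amounts to the following sketch (ESR macro names are from the arm64
headers; the handler name follows the text above):

  /* Sketch: FAR_EL1 is UNKNOWN for SP alignment faults; use the stashed SP. */
  unsigned long addr;

  if (ESR_ELx_EC(esr) == ESR_ELx_EC_SP_ALIGN)
      addr = regs->sp;             /* original sp_el0 */
  else
      addr = read_sysreg(far_el1); /* PC alignment fault */

  do_sp_pc_fault(addr, esr, regs);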

Fixes: 60ffc30d56 ("arm64: Exception handling")
Signed-off-by: James Morse <james.morse@arm.com>
[will: change label name and fleshed out comment]
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 16:22:34 +01:00
Lu Baolu 557529494d iommu/vt-d: Avoid duplicated pci dma alias consideration
As we have abandoned the home-made lazy domain allocation
and delegated the DMA domain life cycle up to the default
domain mechanism defined in the generic iommu layer, we
needn't consider pci alias anymore when mapping/unmapping
the context entries. Without this fix, we see kernel NULL
pointer dereference during pci device hot-plug test.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Fixes: fa954e6831 ("iommu/vt-d: Delegate the dma domain to upper layer")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reported-and-tested-by: Xu Pengfei <pengfei.xu@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-22 17:16:24 +02:00
Marc Zyngier cbdf8a189a arm64: Force SSBS on context switch
On a CPU that doesn't support SSBS, PSTATE[12] is RES0. In a system
where only some of the CPUs implement SSBS, we end up losing track of
the SSBS bit across task migration.

To address this issue, let's force the SSBS bit on context switch.
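
A sketch of the approach (under the assumption that the mitigation
checks reduce to a per-task TIF_SSBD test):

  /* Sketch: set PSTATE.SSBS in the saved regs of the incoming task. */
  static void ssbs_thread_switch(struct task_struct *next)
  {
      struct pt_regs *regs = task_pt_regs(next);

      /* Kernel threads may carry junk regs; nothing to do for them. */
      if (next->flags & PF_KTHREAD)
          return;

      /* If the mitigation is requested for this task, leave SSBS clear. */
      if (test_tsk_thread_flag(next, TIF_SSBD))
          return;

      regs->pstate |= PSR_SSBS_BIT;
  }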

Fixes: 8f04e8e6e2 ("arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: inverted logic and added comments]
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 15:24:16 +01:00
Joerg Roedel 301e7ee1de Revert "iommu/vt-d: Consolidate domain_init() to avoid duplication"
This reverts commit 123b2ffc37.

This commit reportedly caused boot failures on some systems
and needs to be reverted for now.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-22 16:21:17 +02:00
Wei Yongjun 5d9e06d60e bcache: fix possible memory leak in bch_cached_dev_run()
Memory allocated in bch_cached_dev_run() should be freed before leaving
the error handling paths, otherwise it will cause a memory leak.
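
A sketch of the error-path cleanup (env[]/buf are the kasprintf()
allocations in bch_cached_dev_run(); the condition shown is illustrative):

  /* Sketch: free the kasprintf() buffers on every early return. */
  if (dc->io_disable) {
      pr_err("I/O disabled on cached dev %s", dc->backing_dev_name);
      kfree(env[1]);
      kfree(env[2]);
      kfree(buf);
      return -EIO;
  }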

Fixes: 0b13efecf5 ("bcache: add return value check to bch_cached_dev_run()")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-22 08:15:17 -06:00
Suren Baghdasaryan b191d6491b
pidfd: fix a poll race when setting exit_state
There is a race between reading task->exit_state in pidfd_poll and
writing it after do_notify_parent calls do_notify_pidfd. The expected
sequence of events is:

CPU 0                            CPU 1
------------------------------------------------
exit_notify
  do_notify_parent
    do_notify_pidfd
  tsk->exit_state = EXIT_DEAD
                                  pidfd_poll
                                     if (tsk->exit_state)

However, nothing prevents the following sequence:

CPU 0                            CPU 1
------------------------------------------------
exit_notify
  do_notify_parent
    do_notify_pidfd
                                   pidfd_poll
                                      if (tsk->exit_state)
  tsk->exit_state = EXIT_DEAD

This causes a polling task to wait forever, since poll blocks while
exit_state is 0 and the waiting task is never notified again. A stress
test continuously doing pidfd polls and process exits uncovered this bug.

To fix it, we make sure that the task's exit_state is always set before
calling do_notify_pidfd.
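
Schematically, exit_notify() is reordered along these lines (a sketch;
ptrace handling elided):

  /* Sketch: publish a non-zero exit_state before the notification that
   * wakes pidfd pollers, then downgrade to EXIT_DEAD if autoreaping. */
  tsk->exit_state = EXIT_ZOMBIE;

  autoreap = thread_group_empty(tsk) &&
             do_notify_parent(tsk, tsk->exit_signal); /* -> do_notify_pidfd() */

  if (autoreap)
      tsk->exit_state = EXIT_DEAD;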

Fixes: b53b0b9d9a ("pidfd: add polling support")
Cc: kernel-team@android.com
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Link: https://lore.kernel.org/r/20190717172100.261204-1-joel@joelfernandes.org
[christian@brauner.io: adapt commit message and drop unneeded changes from wait_task_zombie]
Signed-off-by: Christian Brauner <christian@brauner.io>
2019-07-22 16:02:03 +02:00
Jean-Philippe Brucker 4574b0b9ab MAINTAINERS: Update my email address
Update MAINTAINERS and .mailmap with my @linaro.org address, since I
don't have access to my @arm.com address anymore.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 14:57:50 +01:00
Sean Paul cfb7156672 - Rip out komeda internal properties and move the driver to -misc (Daniel)
- Handle a couple of edge cases with incomplete/incorrect cmdline modes (Dmitry)
 - Fix some silly warnings (Arnd)
 - Add orientation quirk for newer GPD MicroPCs (Hans)
 
 Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
 Cc: Liviu Dudau <Liviu.Dudau@arm.com>
 Cc: Dmitry Osipenko <digetx@gmail.com>
 Cc: Arnd Bergmann <arnd@arndb.de>
 Cc: Hans de Goede <hdegoede@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEfxcpfMSgdnQMs+QqlvcN/ahKBwoFAl0wiTYACgkQlvcN/ahK
 Bwqc+AgAtNgfn9ZoSL3+wUQDL2FdYB9hhV5MwuAvuwFKH/6jufpyn6eUFlAsecbr
 59/s9xBO4143dpGNkxqmz+bx7V00fzJKUqBRQ+IAGluyXmQFC4efzbKgKYGcZThK
 v+SvTCiKXxjpaLTSJNipD+ALj9WtMUcWXANuiBagfBwL1LJSQwq+hxTuI3yDkL1v
 SoqJZiwplpypFx1oBI5qYsbmGVfj3tEzfGd+FnR1MnLDUhcUQ4Rj+vrUkO2tETsH
 +Rrk1/pImXJ5PEIQgosZNSV0hEpRwMLLakZWesoaWxy083u/atFaecZ2oOqsnOmx
 Mr2VQf5M0dOzdXfgdaegLZFEALFEKw==
 =Y/Pa
 -----END PGP SIGNATURE-----

Merge drm-misc-next-fixes-2019-07-18 into drm-misc-fixes

Picking up 7aaddd96d5 ("drm/modes: Don't apply cmdline's rotation if it wasn't specified")
from drm-misc-next-fixes.  It missed the merge window.

Signed-off-by: Sean Paul <seanpaul@chromium.org>
2019-07-22 09:56:14 -04:00
Vaibhav Jain 3a855b7ac7 powerpc/papr_scm: Force a scm-unbind if initial scm-bind fails
In some cases the initial bind of scm memory for an lpar can fail if it
was not previously released using a scm-unbind hcall. This situation can
arise due to a panic of the previous kernel or a forced lpar fadump. In
such cases H_SCM_BIND_MEM returns an H_OVERLAP error.

To mitigate such cases the patch updates papr_scm_probe() to force a
call to drc_pmem_unbind() in case the initial bind of scm memory fails
with an EBUSY error. If the scm-bind operation fails again after the
forced scm-unbind, we follow the existing error path. We also update
drc_pmem_bind() to handle the H_OVERLAP error returned by phyp and
report it as an EBUSY error back to the caller.
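
A sketch of the probe-path retry (drc_pmem_bind()/drc_pmem_unbind() are
the helpers named above):

  /* Sketch: force an unbind and retry once if the initial bind overlaps. */
  rc = drc_pmem_bind(p);
  if (rc == -EBUSY) {        /* H_OVERLAP mapped to -EBUSY */
      drc_pmem_unbind(p);
      rc = drc_pmem_bind(p);
  }
  if (rc)
      goto err;              /* existing error path */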

Suggested-by: "Oliver O'Halloran" <oohall@gmail.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Reviewed-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190629160610.23402-4-vaibhav@linux.ibm.com
2019-07-22 23:31:00 +10:00
Vaibhav Jain 0d7fc080ba powerpc/papr_scm: Update drc_pmem_unbind() to use H_SCM_UNBIND_ALL
A new hcall named H_SCM_UNBIND_ALL has been introduced that can unbind
all or specific scm memory assigned to an lpar. This is more efficient
than using H_SCM_UNBIND_MEM, as we currently don't support partial
unbind of scm memory.

Hence this patch makes the following changes to drc_pmem_unbind() (see
the sketch after the list):

    * Update drc_pmem_unbind() to replace hcall H_SCM_UNBIND_MEM to
      H_SCM_UNBIND_ALL.

    * Update drc_pmem_unbind() to handle cases where PHYP asks the guest
      kernel to wait for a specific amount of time before retrying the
      hcall via the 'LONG_BUSY' return value.

    * Ensure appropriate error code is returned back from the function
      in case of an error.
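
A sketch of the resulting loop (the unbind-scope constant is an
assumption based on the description):

  /* Sketch: unbind everything for this DRC index, honouring LONG_BUSY. */
  static int drc_pmem_unbind(struct papr_scm_priv *p)
  {
      int64_t rc;

      do {
          rc = plpar_hcall_norets(H_SCM_UNBIND_ALL, H_UNBIND_SCOPE_DRC,
                                  p->drc_index, 0);
          if (H_IS_LONG_BUSY(rc)) {
              msleep(get_longbusy_msecs(rc));
              rc = H_BUSY;
          } else if (rc == H_BUSY) {
              cond_resched();
          }
      } while (rc == H_BUSY);

      return rc == H_SUCCESS ? 0 : -ENXIO;
  }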

Reviewed-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190629160610.23402-3-vaibhav@linux.ibm.com
2019-07-22 23:31:00 +10:00
Vaibhav Jain 6d140e7569 powerpc/pseries: Update SCM hcall op-codes in hvcall.h
Update hvcall.h to include op-codes for the new hcalls introduced to
manage SCM memory. Also update existing hcall definitions to reflect the
current papr specification for SCM.

The removed hcall op-codes H_SCM_MEM_QUERY and H_SCM_BLOCK_CLEAR were
transient proposals; their support was never implemented by Power-VM,
nor were they used anywhere in the Linux kernel. Hence we don't expect
anyone to be impacted by this change.

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190629160610.23402-2-vaibhav@linux.ibm.com
2019-07-22 23:31:00 +10:00
Julien Thierry 8d419adb05 MAINTAINERS: Update my email address
My @arm.com address will stop working in a couple of weeks. Update
MAINTAINERS and .mailmap files with an address I'll have access to.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 14:26:06 +01:00
Suzuki K Poulose 01233d4783 MAINTAINERS: Fix spelling mistake in my name
Fix a typo in my name for the KVM-ARM reviewers entry.

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 14:25:11 +01:00
Marc Zyngier 0aafd138b3 MAINTAINERS: Update my email address to @kernel.org
I will soon lose access to my @arm.com email address, so let's
update the MAINTAINERS file to point to my @kernel.org address,
as well as .mailmap for good measure.

Note that my @arm.com address will still work, but someone else
will be reading whatever is sent there. Don't say you didn't know!

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 14:25:10 +01:00
Greg Kroah-Hartman 85697853be Merge tag 'misc-habanalabs-fixes-2019-07-22' of git://people.freedesktop.org/~gabbayo/linux into char-misc-linux
Oded writes:

This tag contains the following fixes:

- Fix VRHOT event handling from the ASIC. There is no need to reset the
  ASIC; just report the event in dmesg.

- Fix printk specifier for printing dma address

* tag 'misc-habanalabs-fixes-2019-07-22' of git://people.freedesktop.org/~gabbayo/linux:
  habanalabs: don't reset device when getting VRHOT
  habanalabs: use %pad for printing a dma_addr_t
2019-07-22 14:08:06 +02:00
Jan Kiszka c6bf2ae931 KVM: nVMX: Set cached_vmcs12 and cached_shadow_vmcs12 NULL after free
This shall help find use-after-free bugs earlier.
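
A sketch of the change:

  /* Sketch: poison the pointers right after freeing them. */
  kfree(vmx->nested.cached_vmcs12);
  vmx->nested.cached_vmcs12 = NULL;
  kfree(vmx->nested.cached_shadow_vmcs12);
  vmx->nested.cached_shadow_vmcs12 = NULL;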

Suggested-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-22 13:55:49 +02:00
Wanpeng Li d9a710e5fc KVM: X86: Dynamically allocate user_fpu
After reverting commit 240c35a378 (kvm: x86: Use task structs fpu field
for user), struct kvm_vcpu is 19456 bytes on my server, while
PAGE_ALLOC_COSTLY_ORDER (3) is the order at which allocations are deemed
costly to service. In serverless scenarios, one host can service hundreds
or thousands of firecracker/kata-container instances; however, a new
instance will fail to launch once memory is too fragmented to allocate
the kvm_vcpu struct on the host. This was observed in some cloud
providers' production environments.

This patch dynamically allocates user_fpu; kvm_vcpu is now 15168 bytes
on my Skylake server.
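
A sketch of the allocation (x86_fpu_cache is the slab already used for
guest_fpu; the error label is illustrative):

  /* Sketch: allocate user_fpu from the FPU slab instead of embedding it. */
  vcpu->arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache, GFP_KERNEL);
  if (!vcpu->arch.user_fpu) {
      r = -ENOMEM;
      goto free_vcpu;    /* illustrative unwind label */
  }

  /* ... and on teardown: */
  kmem_cache_free(x86_fpu_cache, vcpu->arch.user_fpu);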

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-22 13:55:48 +02:00
Wanpeng Li e751732486 KVM: X86: Fix fpu state crash in kvm guest
The idea before commit 240c35a37 (which has just been reverted)
was that we have the following FPU states:

               userspace (QEMU)             guest
---------------------------------------------------------------------------
               processor                    vcpu->arch.guest_fpu
>>> KVM_RUN: kvm_load_guest_fpu
               vcpu->arch.user_fpu          processor
>>> preempt out
               vcpu->arch.user_fpu          current->thread.fpu
>>> preempt in
               vcpu->arch.user_fpu          processor
>>> back to userspace
>>> kvm_put_guest_fpu
               processor                    vcpu->arch.guest_fpu
---------------------------------------------------------------------------

With the new lazy model we want to get the state back to the processor,
when scheduled in, from current->thread.fpu.
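
A sketch of the reload on the entry path (TIF_NEED_FPU_LOAD and
switch_fpu_return() come from the FPU rework named in the Fixes tag):

  /* Sketch: if preemption deferred the FPU load, complete it before
   * entering the guest, so guest state isn't built on stale registers. */
  if (unlikely(test_thread_flag(TIF_NEED_FPU_LOAD)))
      switch_fpu_return();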

Reported-by: Thomas Lambertz <mail@thomaslambertz.de>
Reported-by: anthony <antdev66@gmail.com>
Tested-by: anthony <antdev66@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Thomas Lambertz <mail@thomaslambertz.de>
Cc: anthony <antdev66@gmail.com>
Cc: stable@vger.kernel.org
Fixes: 5f409e20b ("x86/fpu: Defer FPU state load until return to userspace")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Add a comment in front of the warning. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-22 13:55:47 +02:00
Paolo Bonzini ec269475cb Revert "kvm: x86: Use task structs fpu field for user"
This reverts commit 240c35a378
("kvm: x86: Use task structs fpu field for user", 2018-11-06).
The commit is broken and causes QEMU's FPU state to be destroyed
when KVM_RUN is preempted.

Fixes: 240c35a378 ("kvm: x86: Use task structs fpu field for user")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-22 13:55:47 +02:00
Jan Kiszka cf64527bb3 KVM: nVMX: Clear pending KVM_REQ_GET_VMCS12_PAGES when leaving nested
Letting this pend may cause nested_get_vmcs12_pages to run against an
invalid state, corrupting the effective vmcs of L1.

This was triggerable in QEMU after a guest corruption in L2, followed by
an L1 reset.
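
A sketch of the fix in the nested teardown path:

  /* Sketch: drop the stale request when leaving nested mode. */
  kvm_clear_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);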

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Cc: stable@vger.kernel.org
Fixes: 7f7f1ba33c ("KVM: x86: do not load vmcs12 pages while still in SMM")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-22 13:55:46 +02:00
Anshuman Khandual 5a9060e943 arm64: mm: Drop pte_huge()
This helper is required by the generic huge_pte_alloc(), which is
available when an arch subscribes to ARCH_WANT_GENERAL_HUGETLB. arm64
implements its own huge_pte_alloc() and does not depend on the generic
definition. Drop this helper, which is redundant on arm64.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Steve Capper <Steve.Capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 12:06:38 +01:00
Dave Martin ed2f3e9ff6 arm64/sve: Fix a couple of magic numbers for the Z-reg count
There are some hand-written instances of "32" to express the number
of SVE Z-registers.

Since this code was written, a #define has been added for this, so
convert trivial instances of this magic number as appropriate.

No functional change.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 11:49:03 +01:00
Dave Martin d16af870a7 arm64/sve: Factor out FPSIMD to SVE state conversion
Currently we convert from FPSIMD to SVE register state in memory in
two places.

To ease future maintenance, let's consolidate this in one place.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 11:48:11 +01:00
Mark Rutland 592700f094 arm64: stacktrace: Better handle corrupted stacks
The arm64 stacktrace code is careful to only dereference frame records
in valid stack ranges, ensuring that a corrupted frame record won't
result in a faulting access.

However, it's still possible for corrupt frame records to result in
infinite loops in the stacktrace code, which is also undesirable.

This patch ensures that we complete a stacktrace in finite time, by
keeping track of which stacks we have already completed unwinding, and
verifying that if the next frame record is on the same stack, it is at a
higher address.

As this has turned out to be particularly subtle, comments are added to
explain the procedure.
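
A sketch of the termination rule (stacks_done, prev_fp, prev_type and
on_same_stack() are illustrative names for the bookkeeping described
above):

  /* Sketch: a frame record must make strict progress on the current
   * stack, and may only transition to a stack not yet unwound. */
  if (on_same_stack(tsk, fp, frame->prev_fp)) {
      if (fp <= frame->prev_fp)
          return -EINVAL;    /* no progress: corrupt frame record */
  } else if (test_bit(frame->prev_type, frame->stacks_done)) {
      return -EINVAL;        /* would re-enter a completed stack */
  }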

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tengfei Fan <tengfeif@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 11:44:15 +01:00
Dave Martin f3dcbe67ed arm64: stacktrace: Factor out backtrace initialisation
Some common code is required by each stacktrace user to initialise
struct stackframe before the first call to unwind_frame().

In preparation for adding to the common code, this patch factors it
out into a separate function start_backtrace(), and modifies the
stacktrace callers appropriately.
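
A sketch of the factored-out initialiser:

  /* Sketch: common setup done by every stacktrace user. */
  static inline void start_backtrace(struct stackframe *frame,
                                     unsigned long fp, unsigned long pc)
  {
      frame->fp = fp;
      frame->pc = pc;
  }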

No functional change.

Signed-off-by: Dave Martin <dave.martin@arm.com>
[Mark: drop tsk argument, update more callsites]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 11:44:08 +01:00
Dave Martin 8caa6e2be7 arm64: stacktrace: Constify stacktrace.h functions
on_accessible_stack() and on_task_stack() shouldn't (and don't)
modify their task argument, so it can be const.

This patch adds the appropriate modifiers. Whitespace violations in the
parameter lists are fixed at the same time.

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
[Mark: fixup const location, whitespace]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 11:44:00 +01:00
Vincenzo Frascino a88754b231 arm64: vdso: Cleanup Makefiles
The recent changes to the vdso library for arm64 and the introduction of
the compat vdso library have generated some misalignment in the
Makefiles.

Clean up the Makefiles for the vdso and vdso32 libraries:
  * Removing unused rules.
  * Unifying the displayed compilation messages.
  * Simplifying the generic library inclusion path for
    arm64 vdso.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 10:37:33 +01:00
Naohiro Aota 2e2f3c9b86 arm64: vdso: fix flip/flop vdso build bug
Running "make" on an already compiled kernel tree will rebuild the kernel
even without any modifications:

$ make ARCH=arm64 CROSS_COMPILE=/usr/bin/aarch64-unknown-linux-gnu-
arch/arm64/Makefile:58: CROSS_COMPILE_COMPAT not defined or empty, the compat vDSO will not be built
  CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  VDSOCHK arch/arm64/kernel/vdso/vdso.so.dbg
  VDSOSYM include/generated/vdso-offsets.h
  CHK     include/generated/compile.h
  CC      arch/arm64/kernel/signal.o
  CC      arch/arm64/kernel/vdso.o
  CC      arch/arm64/kernel/signal32.o
  LD      arch/arm64/kernel/vdso/vdso.so.dbg
  OBJCOPY arch/arm64/kernel/vdso/vdso.so
  AS      arch/arm64/kernel/vdso/vdso.o
  AR      arch/arm64/kernel/vdso/built-in.a
  AR      arch/arm64/kernel/built-in.a
  GEN     .version
  CHK     include/generated/compile.h
  UPD     include/generated/compile.h
  CC      init/version.o
  AR      init/built-in.a
  LD      vmlinux.o

This is the same bug fixed in commit 92a4728608 ("x86/boot: Fix
if_changed build flip/flop bug"). We cannot use two "if_changed" in one
target. Fix this build bug by merging the two commands into one function.

Fixes: a7f71a2c89 ("arm64: compat: Add vDSO")
Fixes: 28b1a824a4 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
[will: merged in compat fix from Vincenzo and made rule names consistent]
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 10:33:42 +01:00
Vincenzo Frascino 85751e9e5b arm64: vdso: Fix population of AT_SYSINFO_EHDR for compat vdso
Prior to the introduction of Unified vDSO support and the compat layer
for the vDSO on arm64, AT_SYSINFO_EHDR was not defined for compat tasks.
In the current implementation, AT_SYSINFO_EHDR is defined even if the
compat vdso layer is not built, which has been shown to break Android
applications using bionic:

 | 01-01 01:22:14.097   755   755 F libc    : Fatal signal 11 (SIGSEGV),
 | code 1 (SEGV_MAPERR), fault addr 0x3cf2c96c in tid 755 (cameraserver),
 | pid 755 (cameraserver)
 | 01-01 01:22:14.112   759   759 F libc    : Fatal signal 11 (SIGSEGV),
 | code 1 (SEGV_MAPERR), fault addr 0x3cf2c96c in tid 759
 | (android.hardwar), pid 759 (android.hardwar)
 | 01-01 01:22:14.120   756   756 F libc    : Fatal signal 11 (SIGSEGV)
 | code 1 (SEGV_MAPERR), fault addr 0x3cf2c96c in tid 756 (drmserver),
 | pid 756 (drmserver)

Restore the old behaviour by making sure that AT_SYSINFO_EHDR for compat
tasks is defined only when CONFIG_COMPAT_VDSO is enabled.
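
A sketch of the conditional definition (in the compat ELF macros):

  /* Sketch: only advertise the compat vDSO when it is actually built. */
  #ifdef CONFIG_COMPAT_VDSO
  #define COMPAT_ARCH_DLINFO                                  \
  do {                                                        \
      NEW_AUX_ENT(AT_SYSINFO_EHDR,                            \
                  (elf_addr_t)current->mm->context.vdso);     \
  } while (0)
  #else
  #define COMPAT_ARCH_DLINFO
  #endif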

Reported-by: John Stultz <john.stultz@linaro.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-22 10:33:42 +01:00
Zhang Rui f76cb066ed powercap: Invoke powercap_init() and rapl_init() earlier
The MMIO RAPL interface driver depends on both the powercap subsystem
and the intel_rapl_common code.

But when all of them are built-in, the MMIO RAPL interface driver can
be loaded before the other two and this breaks the system during boot.

Fix this by adjusting the init order of the powercap subsystem and the
intel_rapl_common code, so that they are initialized first.

Fixes: 555c45fe0d ("int340X/processor_thermal_device: add support for MMIO RAPL")
Reported-by: Kenneth R. Crudup <kenny@panix.com>
Tested-by: Kenneth R. Crudup <kenny@panix.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-07-22 11:23:00 +02:00
Vasily Averin f9adc23ee9 futex: Cleanup generic SMP variant of arch_futex_atomic_op_inuser()
The generic SMP variant of arch_futex_atomic_op_inuser() always returns
-ENOSYS, so the switch statement and surrounding code are pointless.
Remove it and just return -ENOSYS.
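
The stub after the cleanup looks like this sketch:

  /* Sketch: the generic SMP variant simply reports "not implemented". */
  static inline int
  arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uaddr)
  {
      return -ENOSYS;
  }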

Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lkml.kernel.org/r/12bdaca8-99eb-e576-f842-5970ab1d6a92@virtuozzo.com
2019-07-22 11:20:10 +02:00
Cao jin 385065734c x86/irq/64: Update stale comment
Commit e6401c1309 ("x86/irq/64: Split the IRQ stack into its own pages")
missed updating one comment while updating its peer in the Xen code,
which will confuse people who still need to read the comment.

Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190719081635.26528-1-caoj.fnst@cn.fujitsu.com
2019-07-22 10:54:27 +02:00
Hans de Goede d02f1aa391 x86/sysfb_efi: Add quirks for some devices with swapped width and height
Some Lenovo 2-in-1s with a detachable keyboard have a portrait screen but
advertise a landscape resolution and pitch, resulting in a messed up
display if the kernel tries to show anything on the efifb (because of the
wrong pitch).

Fix this by adding a new DMI match table for devices which need to have
their width and height swapped.

At first an attempt was made to use the existing table for overriding
some of the efifb parameters, but some of the affected devices have
variants with different LCD resolutions which will not work with
hardcoded override values.
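
A sketch of such a table (the product string shown is illustrative, not
the actual quirk list):

  /* Sketch: devices whose reported width/height must be swapped. */
  static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = {
      {
          .matches = {
              /* Illustrative match entry, not the real quirk list. */
              DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
              DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
          },
      },
      {}
  };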

Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1730783
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190721152418.11644-1-hdegoede@redhat.com
2019-07-22 10:47:11 +02:00
Eiichi Tsukata 2af7c85714 x86/stacktrace: Prevent access_ok() warnings in arch_stack_walk_user()
When arch_stack_walk_user() is called from atomic contexts, access_ok() can
trigger the following warning if compiled with CONFIG_DEBUG_ATOMIC_SLEEP=y.

Reproducer:

  // CONFIG_DEBUG_ATOMIC_SLEEP=y
  # cd /sys/kernel/debug/tracing
  # echo 1 > options/userstacktrace
  # echo 1 > events/irq/irq_handler_entry/enable

  WARNING: CPU: 0 PID: 2649 at arch/x86/kernel/stacktrace.c:103 arch_stack_walk_user+0x6e/0xf6
  CPU: 0 PID: 2649 Comm: bash Not tainted 5.3.0-rc1+ #99
  RIP: 0010:arch_stack_walk_user+0x6e/0xf6
  Call Trace:
   <IRQ>
   stack_trace_save_user+0x10a/0x16d
   trace_buffer_unlock_commit_regs+0x185/0x240
   trace_event_buffer_commit+0xec/0x330
   trace_event_raw_event_irq_handler_entry+0x159/0x1e0
   __handle_irq_event_percpu+0x22d/0x440
   handle_irq_event_percpu+0x70/0x100
   handle_irq_event+0x5a/0x8b
   handle_edge_irq+0x12f/0x3f0
   handle_irq+0x34/0x40
   do_IRQ+0xa6/0x1f0
   common_interrupt+0xf/0xf
   </IRQ>

Fix it by calling __range_not_ok() directly instead of access_ok(), as
copy_from_user_nmi() does. This is fine here because the actual copy is
inside a pagefault-disabled region.
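
A sketch of the check (copying the approach of copy_from_user_nmi()):

  /* Sketch: open-coded range check that is safe in atomic context. */
  if (__range_not_ok(fp, sizeof(*frame), TASK_SIZE))
      break;    /* access_ok() may warn/sleep under DEBUG_ATOMIC_SLEEP */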

Reported-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190722083216.16192-2-devel@etsukata.com
2019-07-22 10:42:36 +02:00
Joerg Roedel 3f8fd02b1b mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()
On x86-32 with PTI enabled, parts of the kernel page-tables are not shared
between processes. This can cause mappings in the vmalloc/ioremap area to
persist in some page-tables after the region is unmapped and released.

When the region is re-used, processes with the old mappings do not fault
in the new mappings but still access the old ones.

This causes undefined behavior, in reality often data corruption, kernel
oopses and panics and even spontaneous reboots.

Fix this problem by actively syncing unmaps in the vmalloc/ioremap area
to all page-tables in the system before the regions can be re-used.
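
The shape of the fix is a single sync point in the purge path:

  /*
   * Sketch: in __purge_vmap_area_lazy(), before freeing any vmap areas,
   * make sure the unmappings are propagated to all page-tables.
   */
  vmalloc_sync_all();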

References: https://bugzilla.suse.com/show_bug.cgi?id=1118689
Fixes: 5d72b4fba4 ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20190719184652.11391-4-joro@8bytes.org
2019-07-22 10:18:30 +02:00
Joerg Roedel 8e998fc24d x86/mm: Sync also unmappings in vmalloc_sync_all()
With huge-page ioremap areas the unmappings also need to be synced between
all page-tables. Otherwise it can cause data corruption when a region is
unmapped and later re-used.

Make the vmalloc_sync_one() function ready to sync unmappings and make sure
vmalloc_sync_all() iterates over all page-tables even when an unmapped PMD
is found.
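
A sketch of the synced helper (pgd/p4d level checks elided):

  /* Sketch: propagate unmappings too, and report unmapped PMDs upward. */
  static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
  {
      pgd_t *pgd_k = pgd_offset_k(address);
      pud_t *pud   = pud_offset(p4d_offset(pgd, address), address);
      pud_t *pud_k = pud_offset(p4d_offset(pgd_k, address), address);
      pmd_t *pmd   = pmd_offset(pud, address);
      pmd_t *pmd_k = pmd_offset(pud_k, address);

      if (pmd_present(*pmd) != pmd_present(*pmd_k))
          set_pmd(pmd, *pmd_k);    /* syncs the clear as well */

      if (!pmd_present(*pmd_k))
          return NULL;             /* unmapped: caller keeps iterating */

      return pmd_k;
  }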

Fixes: 5d72b4fba4 ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20190719184652.11391-3-joro@8bytes.org
2019-07-22 10:18:30 +02:00
Joerg Roedel 51b75b5b56 x86/mm: Check for pfn instead of page in vmalloc_sync_one()
Do not require a struct page for the mapped memory location because it
might not exist. This can happen when an ioremapped region is mapped with
2MB pages.
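
A sketch of the changed check:

  /* Sketch: compare pfns; pmd_page() assumes a struct page that an
   * ioremapped 2MB mapping may not have. */
  BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));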

Fixes: 5d72b4fba4 ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20190719184652.11391-2-joro@8bytes.org
2019-07-22 10:18:30 +02:00
Sébastien Szymanski 2ca9939633 ARM: dts: imx6ul: fix clock frequency property name of I2C buses
A few boards set the clock frequency of their I2C buses with a
"clock_frequency" property. The correct property is "clock-frequency".

Signed-off-by: Sébastien Szymanski <sebastien.szymanski@armadeus.com>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
2019-07-22 15:42:22 +08:00
Christophe JAILLET e4091bdd2f ALSA: line6: Fix a typo
s/Vairax/Variax/

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-07-22 09:10:54 +02:00
Mauro Rossi c8917b8ff0 firmware: fix build errors in paged buffer handling code
fw_{grow,map}_paged_buf() need to be defined as static inline when
CONFIG_FW_LOADER_PAGED_BUF is not enabled; in fact, fw_free_paged_buf()
is already defined as static inline in that case.

This fixes the following multiple-definition build errors for the
Android kernel:

drivers/base/firmware_loader/fallback_efi.o: In function `fw_grow_paged_buf':
fallback_efi.c:(.text+0x0): multiple definition of `fw_grow_paged_buf'
drivers/base/firmware_loader/main.o:(.text+0x73b): first defined here
drivers/base/firmware_loader/fallback_efi.o: In function `fw_map_paged_buf':
fallback_efi.c:(.text+0xf): multiple definition of `fw_map_paged_buf'
drivers/base/firmware_loader/main.o:(.text+0x74a): first defined here
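
A sketch of the guarded declarations (the stub return value is an
assumption):

  /* Sketch: inline stubs when the paged-buffer code is compiled out. */
  #ifdef CONFIG_FW_LOADER_PAGED_BUF
  void fw_free_paged_buf(struct fw_priv *fw_priv);
  int fw_grow_paged_buf(struct fw_priv *fw_priv, int pages_needed);
  int fw_map_paged_buf(struct fw_priv *fw_priv);
  #else
  static inline void fw_free_paged_buf(struct fw_priv *fw_priv) {}
  static inline int fw_grow_paged_buf(struct fw_priv *fw_priv,
                                      int pages_needed) { return -ENXIO; }
  static inline int fw_map_paged_buf(struct fw_priv *fw_priv) { return -ENXIO; }
  #endif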

[ slightly corrected the patch description -- tiwai ]

Fixes: 5342e7093f ("firmware: Factor out the paged buffer handling code")
Fixes: 82fd7a8142 ("firmware: Add support for loading compressed files")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20190722055536.15342-1-tiwai@suse.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-22 08:44:40 +02:00
Zhengyuan Liu 9310a7ba6d io_uring: track io length in async_list based on bytes
We are using PAGE_SIZE as the unit to determine whether the total length
in the async_list has exceeded max_pages, which is unfair to smaller IO
sizes. For example, if we are doing streams of 1k-sized IOs, we will
never exceed max_pages, since len >>= PAGE_SHIFT always yields zero. So
track the original byte count to make the accounting accurate.
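
A sketch of byte-based accounting (the field name and the 8x readahead
limit are illustrative, mirroring the page-based logic being replaced;
filp is the target file):

  /* Sketch: account the stream length in bytes rather than pages. */
  max_bytes = filp->f_ra.ra_pages << (PAGE_SHIFT + 3);  /* 8x RA size */

  /* Before: len >>= PAGE_SHIFT truncated sub-page IOs to zero. */
  if (async_list->io_len + len <= max_bytes) {
      async_list->io_len += len;    /* io_len replaces io_pages */
      ret = true;                   /* keep steering IO to this list */
  }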

Signed-off-by: Zhengyuan Liu <liuzhengyuan@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-21 21:46:55 -06:00
Jens Axboe bd11b3a391 io_uring: don't use iov_iter_advance() for fixed buffers
Hrvoje reports that when a large fixed buffer is registered and IO is
being done to the latter pages of said buffer, the IO submission time
is much worse:

reading to the start of the buffer: 11238 ns
reading to the end of the buffer:   1039879 ns

In fact, it's worse by two orders of magnitude. The reason for that is
how io_uring figures out how to set up the iov_iter. We point the iter
at the first bvec, and then use iov_iter_advance() to fast-forward to
the offset within that buffer we need.

However, that is abysmally slow, as it entails iterating the bvecs
that we set up as part of buffer registration. There's really no need
to use this generic helper, as we know it's a BVEC type iterator, and
we also know that each bvec is PAGE_SIZE in size, apart from possibly
the first and last. Hence we can just use a shift on the offset to
find the right index, and then adjust the iov_iter appropriately.
After this fix, the timings are:

reading to the start of the buffer: 10135 ns
reading to the end of the buffer:   1377 ns

Or about a 755x improvement for the tail page.
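
A sketch of the shift-based fast-forward (imu->bvec is the registered
buffer's bvec array; this follows the description above):

  /* Sketch: skip whole PAGE_SIZE bvecs by index instead of walking them. */
  const struct bio_vec *bvec = imu->bvec;

  if (offset <= bvec->bv_len) {
      iov_iter_advance(iter, offset);         /* cheap in the first bvec */
  } else {
      unsigned long seg_skip;

      offset -= bvec->bv_len;                 /* skip the first bvec */
      seg_skip = 1 + (offset >> PAGE_SHIFT);  /* whole pages after it */

      iter->bvec = bvec + seg_skip;
      iter->nr_segs -= seg_skip;
      iter->count -= bvec->bv_len + offset;
      iter->iov_offset = offset & ~PAGE_MASK;
  }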

Reported-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Tested-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-21 21:46:36 -06:00
Jens Axboe 6a43074e2f block: properly handle IOCB_NOWAIT for async O_DIRECT IO
A caller is supposed to pass in REQ_NOWAIT if we can't block for any
given operation, but O_DIRECT for block devices just ignores this. Hence
we'll block for various resource shortages on the block layer side, like
having to wait for requests.

Use the new REQ_NOWAIT_INLINE to ask for this error to be returned
inline, so we can handle it appropriately and return -EAGAIN to the
caller.
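
A sketch of the O_DIRECT side (BLK_QC_T_EAGAIN is the inline-error
cookie added by the blk-mq patch below):

  /* Sketch: request an inline error and translate it for the caller. */
  if (iocb->ki_flags & IOCB_NOWAIT)
      bio.bi_opf |= REQ_NOWAIT | REQ_NOWAIT_INLINE;

  qc = submit_bio(&bio);
  if (qc == BLK_QC_T_EAGAIN)
      return -EAGAIN;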

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-21 21:46:29 -06:00
Jens Axboe 893a1c9720 blk-mq: allow REQ_NOWAIT to return an error inline
By default, if a caller sets REQ_NOWAIT and we need to block, we'll
return -EAGAIN through the bio->bi_end_io() callback. For some use
cases, this makes it hard to use.

Allow a caller to ask for inline return of errors related to
blocking by also setting REQ_NOWAIT_INLINE.
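
A sketch of the two reporting styles in the submission path:

  /* Sketch: fail inline when asked to, otherwise via the bio callback. */
  if (bio->bi_opf & REQ_NOWAIT_INLINE)
      return BLK_QC_T_EAGAIN;    /* error returned as the cookie */

  bio_wouldblock_error(bio);     /* default: -EAGAIN via ->bi_end_io() */
  return BLK_QC_T_NONE;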

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-21 21:46:23 -06:00