Compare commits

...

7 Commits

Author SHA1 Message Date
MendeZ b5a510cea9
Pre Merge pull request !9808 from MendeZ/kvm-5.10 2025-02-16 01:30:48 +00:00
openeuler-ci-bot e804018e39
!15081 usb: typec: fix potential out of bounds in ucsi_ccg_update_set_new_cam_cmd()
Merge Pull Request from: @ci-robot 
 
PR sync from: Xia Fukun <xiafukun@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/LXIH6HD7KB6NFVOGARUJ3M3PG45CHCQA/ 
 
https://gitee.com/src-openeuler/kernel/issues/IB5AV7 
 
Link: https://gitee.com/openeuler/kernel/pulls/15081

Reviewed-by: Zucheng Zheng <zhengzucheng@huawei.com> 
Reviewed-by: Li Nan <linan122@huawei.com> 
Signed-off-by: Li Nan <linan122@huawei.com>
2025-02-16 01:21:12 +00:00
openeuler-ci-bot 477dce52aa
!15079 drivers: media: dvb-frontends/rtl2830: fix an out-of-bounds write error
Merge Pull Request from: @ci-robot 
 
PR sync from: Xia Fukun <xiafukun@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/32ZT6XNWBZ66FRRKW45PAZRR7DKJM7MF/ 
 
https://gitee.com/src-openeuler/kernel/issues/IAYPJG 
 
Link: https://gitee.com/openeuler/kernel/pulls/15079

Reviewed-by: Li Nan <linan122@huawei.com> 
Reviewed-by: Zucheng Zheng <zhengzucheng@huawei.com> 
Signed-off-by: Li Nan <linan122@huawei.com>
2025-02-16 01:21:01 +00:00
Dan Carpenter 7c113a988f usb: typec: fix potential out of bounds in ucsi_ccg_update_set_new_cam_cmd()
mainline inclusion
from mainline-v6.12-rc7
commit 7dd08a0b4193087976db6b3ee7807de7e8316f96
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IB5AV7
CVE: CVE-2024-50268

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7dd08a0b4193087976db6b3ee7807de7e8316f96

--------------------------------

The "*cmd" variable can be controlled by the user via debugfs.  That means
"new_cam" can be as high as 255 while the size of the uc->updated[] array
is UCSI_MAX_ALTMODES (30).

The call tree is:
ucsi_cmd() // val comes from simple_attr_write_xsigned()
-> ucsi_send_command()
   -> ucsi_send_command_common()
      -> ucsi_run_command() // calls ucsi->ops->sync_control()
         -> ucsi_ccg_sync_control()
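
As a minimal standalone sketch of the bounds-check pattern the fix applies
(UCSI_MAX_ALTMODES and the updated[] name come from the commit text; the rest
is invented for illustration and is not the driver's real code):

#include <stdio.h>
#include <stdint.h>

#define UCSI_MAX_ALTMODES 30
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))   /* stand-in for the kernel macro */

struct altmode { int linked_idx; };

static struct altmode updated[UCSI_MAX_ALTMODES];

/* new_cam is decoded from a user-controlled command word, so it can be
 * anywhere in 0..255; reject anything past the end of updated[]. */
static struct altmode *lookup_new_cam(uint8_t new_cam)
{
    if (new_cam >= ARRAY_SIZE(updated))   /* the check the fix adds */
        return NULL;
    return &updated[new_cam];
}

int main(void)
{
    printf("%p\n", (void *)lookup_new_cam(255)); /* NULL: out of range, rejected */
    printf("%p\n", (void *)lookup_new_cam(7));   /* valid slot */
    return 0;
}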

Fixes: 170a6726d0 ("usb: typec: ucsi: add support for separate DP altmode devices")
Cc: stable <stable@kernel.org>
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Link: https://lore.kernel.org/r/325102b3-eaa8-4918-a947-22aca1146586@stanley.mountain
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Xia Fukun <xiafukun@huawei.com>
2025-02-14 17:15:16 +08:00
Junlin Li a1f7a27ec5 drivers: media: dvb-frontends/rtl2830: fix an out-of-bounds write error
mainline inclusion
from mainline-v6.12-rc5
commit 46d7ebfe6a75a454a5fa28604f0ef1491f9d8d14
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IAYPJG
CVE: CVE-2024-47697

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=46d7ebfe6a75a454a5fa28604f0ef1491f9d8d14

--------------------------------

Ensure index in rtl2830_pid_filter does not exceed 31 to prevent
out-of-bounds access.

dev->filters is a 32-bit value, so set_bit and clear_bit functions should
only operate on indices from 0 to 31. If index is 32, it will attempt to
access a non-existent 33rd bit, leading to out-of-bounds access.
Change the boundary check from index > 32 to index >= 32 to resolve this
issue.
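
A standalone toy model of the corrected check (the real driver operates on
dev->filters with set_bit()/clear_bit(); names and return values here are
illustrative only):

#include <stdio.h>
#include <stdint.h>

/* filters stands in for the 32-bit dev->filters bitmap, so only bit
 * indices 0..31 are valid. */
static uint32_t filters;

static int pid_filter(unsigned int index, unsigned int pid, int onoff)
{
    /* The old check (index > 32) still let index == 32 through, which
     * would touch a non-existent 33rd bit; >= 32 is the corrected bound. */
    if (pid > 0x1fff || index >= 32)
        return 0;

    if (onoff)
        filters |= 1u << index;
    else
        filters &= ~(1u << index);
    return 1;
}

int main(void)
{
    printf("%d\n", pid_filter(32, 0x100, 1)); /* 0: rejected by the fixed check */
    printf("%d\n", pid_filter(31, 0x100, 1)); /* 1: highest valid filter slot */
    return 0;
}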

Fixes: df70ddad81 ("[media] rtl2830: implement PID filter")
Signed-off-by: Junlin Li <make24@iscas.ac.cn>
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Xia Fukun <xiafukun@huawei.com>
2025-02-14 17:05:29 +08:00
Peng Mengguang bb836090e7 KVM: arm64: Move guest CMOs to the fault handlers
mainline inclusion
from mainline-v5.14-rc1
commit 25aa28691b
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IABBE3
CVE: NA

--------------------------------------------------

We currently uniformly perform CMOs of D-cache and I-cache in function
user_mem_abort before calling the fault handlers. If we get concurrent
guest faults (e.g. translation faults, permission faults) or some really
unnecessary guest faults caused by BBM, CMOs for the first vcpu are
necessary while those for the later ones are not.

By moving CMOs to the fault handlers, we can easily identify conditions
where they are really needed and avoid the unnecessary ones. Since
performing CMOs is a time-consuming process, especially when flushing a
block range, this solution reduces much of KVM's load and improves the
efficiency of the stage-2 page table code.
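
As a rough standalone model of the condition logic this introduces (compare
the stage-2 page-table hunks shown further down; flags, helpers and names
here are illustrative stand-ins, not the kernel code):

#include <stdio.h>
#include <stdint.h>

/* Toy model of the per-PTE decision the fault handlers make after this
 * change: clean the D-cache only for cacheable (Normal memory) mappings
 * and invalidate the I-cache only for executable ones, instead of doing
 * both unconditionally in user_mem_abort(). */
typedef uint64_t kvm_pte_t;

#define PTE_CACHEABLE  (1ull << 0)  /* stands in for MEMATTR == Normal */
#define PTE_EXECUTABLE (1ull << 1)  /* stands in for the XN bit being clear */

static void flush_dcache(void)      { puts("dcache clean"); }
static void invalidate_icache(void) { puts("icache invalidate"); }

static void install_leaf_pte(kvm_pte_t *ptep, kvm_pte_t new)
{
    if (new & PTE_CACHEABLE)
        flush_dcache();
    if (new & PTE_EXECUTABLE)
        invalidate_icache();
    *ptep = new;  /* the kernel publishes this with smp_store_release() */
}

int main(void)
{
    kvm_pte_t pte = 0;
    install_leaf_pte(&pte, PTE_CACHEABLE);                  /* data-only mapping */
    install_leaf_pte(&pte, PTE_CACHEABLE | PTE_EXECUTABLE); /* executable mapping */
    return 0;
}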

We can imagine two specific scenarios which will gain much benefit:
1) In a normal VM startup, this solution will improve the efficiency of
handling guest page faults incurred by vCPUs, when initially populating
stage-2 page tables.
2) After live migration, the heavy workload will be resumed on the
destination VM, but all the stage-2 page tables need to be rebuilt at
that moment. So this solution will ease the performance drop during the
resuming stage.

Signed-off-by: Peng Mengguang <pengmengguang@phytium.com.cn>
Signed-off-by: Jiakun Shuai <shuaijiakun1288@phytium.com.cn>
2024-08-02 17:04:29 +08:00
Peng Mengguang ca105db208 KVM: arm64: clean the first vcpu dcache
phytium inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IABBE3

----------------------------------------------------------

Phytium CPUs support cache consistency, so it is enough to flush only
the first vcpu's dcache.

Reducing the number of dcache flushes improves the efficiency of SMP
multi-core startup, especially for VMs with many cores and hugepages.
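
A standalone toy model of the policy added to kvm_toggle_cache() (see the
corresponding hunk further down in the diff; constants and names here are
illustrative stand-ins, not the kernel's):

#include <stdio.h>

/* On Phytium parts, which keep caches consistent across cores, only vcpu 0
 * flushes the whole stage-2 range; on other implementers every vcpu keeps
 * flushing as before. */
#define IMPLEMENTOR_PHYTIUM 0x70
#define IMPLEMENTOR_OTHER   0x41

static void stage2_flush_vm(void) { puts("stage2_flush_vm()"); }

static void flush_on_cache_toggle(int implementor, int vcpu_id)
{
    if (implementor == IMPLEMENTOR_PHYTIUM) {
        if (vcpu_id == 0)
            stage2_flush_vm();
    } else {
        stage2_flush_vm();
    }
}

int main(void)
{
    flush_on_cache_toggle(IMPLEMENTOR_PHYTIUM, 0); /* flushed once, by vcpu 0 */
    flush_on_cache_toggle(IMPLEMENTOR_PHYTIUM, 1); /* skipped on secondary vcpus */
    flush_on_cache_toggle(IMPLEMENTOR_OTHER, 2);   /* non-Phytium: always flushes */
    return 0;
}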

Signed-off-by: Peng Mengguang <pengmengguang@phytium.com.cn>
Signed-off-by: Jiakun Shuai <shuaijiakun1288@phytium.com.cn>
2024-08-02 17:02:54 +08:00
4 changed files with 63 additions and 30 deletions

View File

@@ -9,6 +9,7 @@
#include <linux/bitfield.h>
#include <asm/kvm_pgtable.h>
#include <asm/kvm_mmu.h>
#define KVM_PGTABLE_MAX_LEVELS 4U
@@ -174,6 +175,25 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
smp_store_release(ptep, pte);
}
static bool stage2_pte_cacheable(kvm_pte_t pte)
{
u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
return memattr == PAGE_S2_MEMATTR(NORMAL);
}
static bool stage2_pte_executable(kvm_pte_t pte)
{
return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
}
static void stage2_flush_dcache(void *addr, u64 size)
{
if (kvm_ncsnp_support || cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
return;
__flush_dcache_area(addr, size);
}
static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
{
kvm_pte_t pte = kvm_phys_to_pte(pa);
@@ -348,8 +368,16 @@ static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
/* Tolerate KVM recreating the exact same mapping */
new = kvm_init_valid_leaf_pte(phys, data->attr, level);
if (old != new && !WARN_ON(kvm_pte_valid(old)))
if (old != new && !WARN_ON(kvm_pte_valid(old))) {
/* Flush data cache before installation of the new PTE */
if (stage2_pte_cacheable(new))
stage2_flush_dcache(kvm_pte_follow(new), kvm_granule_size(level));
if (stage2_pte_executable(new))
__invalidate_icache_guest_page(__phys_to_pfn(phys),
kvm_granule_size(level));
smp_store_release(ptep, new);
}
data->phys += granule;
return true;
@@ -496,6 +524,13 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
put_page(page);
}
/* Flush data cache before installation of the new PTE */
if (stage2_pte_cacheable(new))
stage2_flush_dcache(kvm_pte_follow(new), kvm_granule_size(level));
if (stage2_pte_executable(new))
__invalidate_icache_guest_page(__phys_to_pfn(phys), kvm_granule_size(level));
smp_store_release(ptep, new);
get_page(page);
data->phys += granule;
@@ -652,20 +687,6 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
return ret;
}
static void stage2_flush_dcache(void *addr, u64 size)
{
if (kvm_ncsnp_support || cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
return;
__flush_dcache_area(addr, size);
}
static bool stage2_pte_cacheable(kvm_pte_t pte)
{
u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
return memattr == PAGE_S2_MEMATTR(NORMAL);
}
static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
enum kvm_pgtable_walk_flags flag,
void * const arg)
@@ -744,8 +765,16 @@ static int stage2_attr_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
* but worst-case the access flag update gets lost and will be
* set on the next access instead.
*/
if (data->pte != pte)
if (data->pte != pte) {
/*
* Invalidate instruction cache before updating the guest
* stage-2 PTE if we are going to add executable permission.
*/
if (stage2_pte_executable(pte) && !stage2_pte_executable(*ptep))
__invalidate_icache_guest_page(__phys_to_pfn(kvm_pte_to_phys(pte)),
kvm_granule_size(level));
WRITE_ONCE(*ptep, pte);
}
return 0;
}

View File

@@ -615,11 +615,6 @@ static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
__clean_dcache_guest_page(pfn, size);
}
static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
{
__invalidate_icache_guest_page(pfn, size);
}
static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
{
send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
@@ -931,13 +926,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (writable)
prot |= KVM_PGTABLE_PROT_W;
if (fault_status != FSC_PERM && !device)
clean_dcache_guest_page(pfn, vma_pagesize);
if (exec_fault) {
if (exec_fault)
prot |= KVM_PGTABLE_PROT_X;
invalidate_icache_guest_page(pfn, vma_pagesize);
}
if (device)
prot |= KVM_PGTABLE_PROT_DEVICE;
@@ -1422,8 +1412,20 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
* If switching it off, need to clean the caches.
* Clean + invalidate does the trick always.
*/
if (now_enabled != was_enabled)
stage2_flush_vm(vcpu->kvm);
if (now_enabled != was_enabled) {
/*
* Due to Phytium CPU's cache consistency support,
* just flush dcache on one vcpu not all vcpus in the VM.
* This can reduce the number of flush dcaches and
* improve the efficiency of SMP multi-core startup,
* especially for the large VM with hugepages.
*/
if (read_cpuid_implementor() == ARM_CPU_IMP_PHYTIUM) {
if (vcpu->vcpu_id == 0)
stage2_flush_vm(vcpu->kvm);
} else
stage2_flush_vm(vcpu->kvm);
}
/* Caches are now on, stop trapping VM ops (until a S/W op) */
if (now_enabled)

View File

@@ -609,7 +609,7 @@ static int rtl2830_pid_filter(struct dvb_frontend *fe, u8 index, u16 pid, int on
index, pid, onoff);
/* skip invalid PIDs (0x2000) */
if (pid > 0x1fff || index > 32)
if (pid > 0x1fff || index >= 32)
return 0;
if (onoff)

View File

@@ -436,6 +436,8 @@ static void ucsi_ccg_update_set_new_cam_cmd(struct ucsi_ccg *uc,
port = uc->orig;
new_cam = UCSI_SET_NEW_CAM_GET_AM(*cmd);
if (new_cam >= ARRAY_SIZE(uc->updated))
return;
new_port = &uc->updated[new_cam];
cam = new_port->linked_idx;
enter_new_mode = UCSI_SET_NEW_CAM_ENTER(*cmd);