system/xen: Updated for version 4.13.1.

Signed-off-by: Mario Preksavec <mario@slackware.hr>

Signed-off-by: Willy Sudiarto Raharjo <willysr@slackbuilds.org>
Mario Preksavec, 2020-06-02 01:23:58 +02:00; committed by Willy Sudiarto Raharjo
parent 61d34fa64d
commit 90a270d8f0
GPG Key ID: 3F617144D7238786
10 changed files with 6 additions and 447 deletions

@@ -46,7 +46,7 @@ Xen EFI binary.
 To make things a bit easier, a copy of Xen EFI binary can be found here:
-http://slackware.hr/~mario/xen/xen-4.13.0.efi.gz
+http://slackware.hr/~mario/xen/xen-4.13.1.efi.gz
 If an automatic boot to Xen kernel is desired, the binary should be renamed and
 copied to the following location: /boot/efi/EFI/BOOT/bootx64.efi

@@ -6,7 +6,7 @@
 # Modified by Mario Preksavec <mario@slackware.hr>
 KERNEL=${KERNEL:-4.4.217}
-XEN=${XEN:-4.13.0}
+XEN=${XEN:-4.13.1}
 BOOTLOADER=${BOOTLOADER:-lilo}
 ROOTMOD=${ROOTMOD:-ext4}

@@ -23,7 +23,7 @@
 # ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 PRGNAM=xen
-VERSION=${VERSION:-4.13.0}
+VERSION=${VERSION:-4.13.1}
 BUILD=${BUILD:-1}
 TAG=${TAG:-_SBo}

@@ -1,7 +1,7 @@
 PRGNAM="xen"
-VERSION="4.13.0"
+VERSION="4.13.1"
 HOMEPAGE="http://www.xenproject.org/"
-DOWNLOAD="http://mirror.slackware.hr/sources/xen/xen-4.13.0.tar.gz \
+DOWNLOAD="http://mirror.slackware.hr/sources/xen/xen-4.13.1.tar.gz \
 http://mirror.slackware.hr/sources/xen-extfiles/ipxe-git-1dd56dbd11082fb622c2ed21cfaced4f47d798a6.tar.gz \
 http://mirror.slackware.hr/sources/xen-extfiles/lwip-1.3.0.tar.gz \
 http://mirror.slackware.hr/sources/xen-extfiles/zlib-1.2.3.tar.gz \
@@ -13,7 +13,7 @@ DOWNLOAD="http://mirror.slackware.hr/sources/xen/xen-4.13.0.tar.gz \
 http://mirror.slackware.hr/sources/xen-extfiles/tpm_emulator-0.7.4.tar.gz \
 http://mirror.slackware.hr/sources/xen-seabios/seabios-1.12.1.tar.gz \
 http://mirror.slackware.hr/sources/xen-ovmf/xen-ovmf-20190606_20d2e5a125.tar.bz2"
-MD5SUM="d3b13c4c785601be2f104eaddd7c6a00 \
+MD5SUM="e26fe8f9ce39463734e6ede45c6e11b8 \
 b3ab0488a989a089207302111d12e1a0 \
 36cc57650cffda9a0269493be2a169bb \
 debc62758716a169df9f62e6ab2bc634 \

@@ -1,93 +0,0 @@
From 9f807cf84a9a7a011cf1df7895c54d6031a7596d Mon Sep 17 00:00:00 2001
From: Julien Grall <julien@xen.org>
Date: Thu, 19 Dec 2019 08:12:21 +0000
Subject: [PATCH] xen/arm: Place a speculation barrier sequence following an
eret instruction
Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.
Newer CPUs may implement a new SB barrier instruction which acts
as an architected speculation barrier. For current CPUs, the sequence
DSB; ISB is known to prevent speculation.
The latter sequence is heavier than SB but it would never be executed
(this is speculation after all!).
Introduce a new macro 'sb' that could be used when a speculation barrier
is required. For now it is using dsb; isb but this could easily be
updated to cater SB in the future.
This is XSA-312.
Signed-off-by: Julien Grall <julien@xen.org>
---
xen/arch/arm/arm32/entry.S | 1 +
xen/arch/arm/arm64/entry.S | 3 +++
xen/include/asm-arm/macros.h | 9 +++++++++
3 files changed, 13 insertions(+)
diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index 31ccfb2631..b228d44b19 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -426,6 +426,7 @@ return_to_hypervisor:
add sp, #(UREGS_SP_usr - UREGS_sp); /* SP, LR, SPSR, PC */
clrex
eret
+ sb
/*
* struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next)
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index d35855af96..175ea2981e 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -354,6 +354,7 @@ guest_sync:
*/
mov x1, xzr
eret
+ sb
check_wa2:
/* ARM_SMCCC_ARCH_WORKAROUND_2 handling */
@@ -393,6 +394,7 @@ wa2_end:
#endif /* !CONFIG_ARM_SSBD */
mov x0, xzr
eret
+ sb
guest_sync_slowpath:
/*
* x0/x1 may have been scratch by the fast path above, so avoid
@@ -457,6 +459,7 @@ return_from_trap:
ldr lr, [sp], #(UREGS_SPSR_el1 - UREGS_LR) /* CPSR, PC, SP, LR */
eret
+ sb
/*
* Consume pending SError generated by the guest if any.
diff --git a/xen/include/asm-arm/macros.h b/xen/include/asm-arm/macros.h
index 91ea3505e4..4833671f4c 100644
--- a/xen/include/asm-arm/macros.h
+++ b/xen/include/asm-arm/macros.h
@@ -20,4 +20,13 @@
.endr
.endm
+ /*
+ * Speculative barrier
+ * XXX: Add support for the 'sb' instruction
+ */
+ .macro sb
+ dsb nsh
+ isb
+ .endm
+
#endif /* __ASM_ARM_MACROS_H */
--
2.17.1
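
For readers unfamiliar with the sequence the "sb" macro above expands to, here is a minimal standalone C sketch (not part of this commit or the Xen tree) of the same DSB NSH; ISB barrier expressed as GCC inline assembly. The speculation_barrier() name and the non-Arm fallback are illustrative assumptions only.

/* Hypothetical illustration of the barrier sequence used by the patch's
 * "sb" macro: a data synchronization barrier (non-shareable) followed by
 * an instruction synchronization barrier prevents speculation past this
 * point on current Arm CPUs. */
static inline void speculation_barrier(void)
{
#if defined(__aarch64__) || defined(__arm__)
    __asm__ __volatile__("dsb nsh\n\tisb" ::: "memory");
#else
    /* On other architectures this sketch degrades to a compiler barrier. */
    __asm__ __volatile__("" ::: "memory");
#endif
}

int main(void)
{
    speculation_barrier();
    return 0;
}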

@@ -1,26 +0,0 @@
From: Jan Beulich <jbeulich@suse.com>
Subject: xenoprof: clear buffer intended to be shared with guests
alloc_xenheap_pages() making use of MEMF_no_scrub is fine for Xen
internally used allocations, but buffers allocated to be shared with
(unpriviliged) guests need to be zapped of their prior content.
This is part of XSA-313.
Reported-by: Ilja Van Sprundel <ivansprundel@ioactive.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wl@xen.org>
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -253,6 +253,9 @@ static int alloc_xenoprof_struct(
return -ENOMEM;
}
+ for ( i = 0; i < npages; ++i )
+ clear_page(d->xenoprof->rawbuf + i * PAGE_SIZE);
+
d->xenoprof->npages = npages;
d->xenoprof->nbuf = nvcpu;
d->xenoprof->bufsize = bufsize;
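
A minimal standalone sketch (not Xen code) of the rule the patch above enforces: memory that may carry stale contents must be zeroed page by page before being shared with an unprivileged consumer. PAGE_SIZE, alloc_shared_buffer() and the use of malloc() are illustrative assumptions.

/* Zero every page of a buffer before exposing it to an untrusted consumer,
 * mirroring the clear_page() loop added by the patch.  malloc() may return
 * recycled memory with stale data, much like MEMF_no_scrub allocations. */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u

static void *alloc_shared_buffer(size_t npages)
{
    unsigned char *buf = malloc(npages * PAGE_SIZE);

    if (buf == NULL)
        return NULL;

    /* Wipe each page before the buffer is handed out. */
    for (size_t i = 0; i < npages; i++)
        memset(buf + i * PAGE_SIZE, 0, PAGE_SIZE);

    return buf;
}

int main(void)
{
    void *shared = alloc_shared_buffer(4);

    free(shared);
    return shared != NULL ? 0 : 1;
}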

@@ -1,132 +0,0 @@
From: Jan Beulich <jbeulich@suse.com>
Subject: xenoprof: limit consumption of shared buffer data
Since a shared buffer can be written to by the guest, we may only read
the head and tail pointers from there (all other fields should only ever
be written to). Furthermore, for any particular operation the two values
must be read exactly once, with both checks and consumption happening
with the thus read values. (The backtrace related xenoprof_buf_space()
use in xenoprof_log_event() is an exception: The values used there get
re-checked by every subsequent xenoprof_add_sample().)
Since that code needed touching, also fix the double increment of the
lost samples count in case the backtrace related xenoprof_add_sample()
invocation in xenoprof_log_event() fails.
Where code is being touched anyway, add const as appropriate, but take
the opportunity to entirely drop the now unused domain parameter of
xenoprof_buf_space().
This is part of XSA-313.
Reported-by: Ilja Van Sprundel <ivansprundel@ioactive.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Wei Liu <wl@xen.org>
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -479,25 +479,22 @@ static int add_passive_list(XEN_GUEST_HA
/* Get space in the buffer */
-static int xenoprof_buf_space(struct domain *d, xenoprof_buf_t * buf, int size)
+static int xenoprof_buf_space(int head, int tail, int size)
{
- int head, tail;
-
- head = xenoprof_buf(d, buf, event_head);
- tail = xenoprof_buf(d, buf, event_tail);
-
return ((tail > head) ? 0 : size) + tail - head - 1;
}
/* Check for space and add a sample. Return 1 if successful, 0 otherwise. */
-static int xenoprof_add_sample(struct domain *d, xenoprof_buf_t *buf,
+static int xenoprof_add_sample(const struct domain *d,
+ const struct xenoprof_vcpu *v,
uint64_t eip, int mode, int event)
{
+ xenoprof_buf_t *buf = v->buffer;
int head, tail, size;
head = xenoprof_buf(d, buf, event_head);
tail = xenoprof_buf(d, buf, event_tail);
- size = xenoprof_buf(d, buf, event_size);
+ size = v->event_size;
/* make sure indexes in shared buffer are sane */
if ( (head < 0) || (head >= size) || (tail < 0) || (tail >= size) )
@@ -506,7 +503,7 @@ static int xenoprof_add_sample(struct do
return 0;
}
- if ( xenoprof_buf_space(d, buf, size) > 0 )
+ if ( xenoprof_buf_space(head, tail, size) > 0 )
{
xenoprof_buf(d, buf, event_log[head].eip) = eip;
xenoprof_buf(d, buf, event_log[head].mode) = mode;
@@ -530,7 +527,6 @@ static int xenoprof_add_sample(struct do
int xenoprof_add_trace(struct vcpu *vcpu, uint64_t pc, int mode)
{
struct domain *d = vcpu->domain;
- xenoprof_buf_t *buf = d->xenoprof->vcpu[vcpu->vcpu_id].buffer;
/* Do not accidentally write an escape code due to a broken frame. */
if ( pc == XENOPROF_ESCAPE_CODE )
@@ -539,7 +535,8 @@ int xenoprof_add_trace(struct vcpu *vcpu
return 0;
}
- return xenoprof_add_sample(d, buf, pc, mode, 0);
+ return xenoprof_add_sample(d, &d->xenoprof->vcpu[vcpu->vcpu_id],
+ pc, mode, 0);
}
void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
@@ -570,17 +567,22 @@ void xenoprof_log_event(struct vcpu *vcp
/* Provide backtrace if requested. */
if ( backtrace_depth > 0 )
{
- if ( (xenoprof_buf_space(d, buf, v->event_size) < 2) ||
- !xenoprof_add_sample(d, buf, XENOPROF_ESCAPE_CODE, mode,
- XENOPROF_TRACE_BEGIN) )
+ if ( xenoprof_buf_space(xenoprof_buf(d, buf, event_head),
+ xenoprof_buf(d, buf, event_tail),
+ v->event_size) < 2 )
{
xenoprof_buf(d, buf, lost_samples)++;
lost_samples++;
return;
}
+
+ /* xenoprof_add_sample() will increment lost_samples on failure */
+ if ( !xenoprof_add_sample(d, v, XENOPROF_ESCAPE_CODE, mode,
+ XENOPROF_TRACE_BEGIN) )
+ return;
}
- if ( xenoprof_add_sample(d, buf, pc, mode, event) )
+ if ( xenoprof_add_sample(d, v, pc, mode, event) )
{
if ( is_active(vcpu->domain) )
active_samples++;
--- a/xen/include/xen/xenoprof.h
+++ b/xen/include/xen/xenoprof.h
@@ -61,12 +61,12 @@ struct xenoprof {
#ifndef CONFIG_COMPAT
#define XENOPROF_COMPAT(x) 0
-#define xenoprof_buf(d, b, field) ((b)->field)
+#define xenoprof_buf(d, b, field) ACCESS_ONCE((b)->field)
#else
#define XENOPROF_COMPAT(x) ((x)->is_compat)
-#define xenoprof_buf(d, b, field) (*(!(d)->xenoprof->is_compat ? \
- &(b)->native.field : \
- &(b)->compat.field))
+#define xenoprof_buf(d, b, field) ACCESS_ONCE(*(!(d)->xenoprof->is_compat \
+ ? &(b)->native.field \
+ : &(b)->compat.field))
#endif
struct domain;
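
A standalone C sketch (not the Xen implementation) of the "read shared indices exactly once, then validate and reuse the snapshot" pattern the patch above adopts via ACCESS_ONCE(). The READ_ONCE macro, the struct ring layout and the function names below are assumptions made for the example; it relies on the GCC/Clang typeof extension.

/* Snapshot guest-writable ring indices once, bounds-check the snapshot,
 * and compute free space only from those local copies. */
#include <stdint.h>
#include <stdio.h>

#define READ_ONCE(x) (*(volatile typeof(x) *)&(x))

struct ring {
    uint32_t head;   /* written by the untrusted side */
    uint32_t tail;   /* written by the untrusted side */
    uint32_t size;   /* trusted, owner-held value */
};

/* Free-space computation over already-snapshotted indices, as in the
 * reworked xenoprof_buf_space(). */
static int ring_space(uint32_t head, uint32_t tail, uint32_t size)
{
    return ((tail > head) ? 0 : (int)size) + (int)tail - (int)head - 1;
}

static int ring_can_add(const struct ring *r)
{
    /* Read each shared field exactly once; re-reading would let the other
     * side change the value between the check and the use. */
    uint32_t head = READ_ONCE(r->head);
    uint32_t tail = READ_ONCE(r->tail);

    if (head >= r->size || tail >= r->size)
        return 0;                      /* indices out of range: reject */

    return ring_space(head, tail, r->size) > 0;
}

int main(void)
{
    struct ring r = { .head = 0, .tail = 0, .size = 8 };

    printf("can add: %d\n", ring_can_add(&r));
    return 0;
}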

@@ -1,121 +0,0 @@
From ab49f005f7d01d4004d76f2e295d31aca7d4f93a Mon Sep 17 00:00:00 2001
From: Julien Grall <jgrall@amazon.com>
Date: Thu, 20 Feb 2020 20:54:40 +0000
Subject: [PATCH] xen/rwlock: Add missing memory barrier in the unlock path of
rwlock
The rwlock unlock paths are using atomic_sub() to release the lock.
However the implementation of atomic_sub() rightfully doesn't contain a
memory barrier. On Arm, this means a processor is allowed to re-order
the memory access with the preceeding access.
In other words, the unlock may be seen by another processor before all
the memory accesses within the "critical" section.
The rwlock paths already contains barrier indirectly, but they are not
very useful without the counterpart in the unlock paths.
The memory barriers are not necessary on x86 because loads/stores are
not re-ordered with lock instructions.
So add arch_lock_release_barrier() in the unlock paths that will only
add memory barrier on Arm.
Take the opportunity to document each lock paths explaining why a
barrier is not necessary.
This is XSA-314.
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
xen/include/xen/rwlock.h | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 3dfea1ac2a..516486306f 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -48,6 +48,10 @@ static inline int _read_trylock(rwlock_t *lock)
if ( likely(!(cnts & _QW_WMASK)) )
{
cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
+ /*
+ * atomic_add_return() is a full barrier so no need for an
+ * arch_lock_acquire_barrier().
+ */
if ( likely(!(cnts & _QW_WMASK)) )
return 1;
atomic_sub(_QR_BIAS, &lock->cnts);
@@ -64,11 +68,19 @@ static inline void _read_lock(rwlock_t *lock)
u32 cnts;
cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
+ /*
+ * atomic_add_return() is a full barrier so no need for an
+ * arch_lock_acquire_barrier().
+ */
if ( likely(!(cnts & _QW_WMASK)) )
return;
/* The slowpath will decrement the reader count, if necessary. */
queue_read_lock_slowpath(lock);
+ /*
+ * queue_read_lock_slowpath() is using spinlock and therefore is a
+ * full barrier. So no need for an arch_lock_acquire_barrier().
+ */
}
static inline void _read_lock_irq(rwlock_t *lock)
@@ -92,6 +104,7 @@ static inline unsigned long _read_lock_irqsave(rwlock_t *lock)
*/
static inline void _read_unlock(rwlock_t *lock)
{
+ arch_lock_release_barrier();
/*
* Atomically decrement the reader count
*/
@@ -121,11 +134,20 @@ static inline int _rw_is_locked(rwlock_t *lock)
*/
static inline void _write_lock(rwlock_t *lock)
{
- /* Optimize for the unfair lock case where the fair flag is 0. */
+ /*
+ * Optimize for the unfair lock case where the fair flag is 0.
+ *
+ * atomic_cmpxchg() is a full barrier so no need for an
+ * arch_lock_acquire_barrier().
+ */
if ( atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0 )
return;
queue_write_lock_slowpath(lock);
+ /*
+ * queue_write_lock_slowpath() is using spinlock and therefore is a
+ * full barrier. So no need for an arch_lock_acquire_barrier().
+ */
}
static inline void _write_lock_irq(rwlock_t *lock)
@@ -157,11 +179,16 @@ static inline int _write_trylock(rwlock_t *lock)
if ( unlikely(cnts) )
return 0;
+ /*
+ * atomic_cmpxchg() is a full barrier so no need for an
+ * arch_lock_acquire_barrier().
+ */
return likely(atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0);
}
static inline void _write_unlock(rwlock_t *lock)
{
+ arch_lock_release_barrier();
/*
* If the writer field is atomic, it can be cleared directly.
* Otherwise, an atomic subtraction will be used to clear it.
--
2.17.1
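
A simplified C11 sketch (not the Xen rwlock) of the ordering requirement the patch above addresses: the unlock path needs release semantics so that accesses made inside the critical section cannot be reordered past the moment the lock is released. The toy_rwlock type and QR_BIAS constant are stand-ins, and writer handling is omitted.

/* Release ordering on unlock, playing the role of arch_lock_release_barrier()
 * in the patch; the acquire on the reader-count add pairs with it. */
#include <stdatomic.h>

#define QR_BIAS 1

struct toy_rwlock {
    atomic_int cnts;   /* low bits: reader count */
};

static void toy_read_lock(struct toy_rwlock *lock)
{
    /* Acquire: later accesses cannot move before taking the lock. */
    atomic_fetch_add_explicit(&lock->cnts, QR_BIAS, memory_order_acquire);
}

static void toy_read_unlock(struct toy_rwlock *lock)
{
    /* Make every access from the critical section visible before the
     * release of the lock becomes visible. */
    atomic_thread_fence(memory_order_release);
    atomic_fetch_sub_explicit(&lock->cnts, QR_BIAS, memory_order_relaxed);
}

int main(void)
{
    struct toy_rwlock lock;

    atomic_init(&lock.cnts, 0);
    toy_read_lock(&lock);
    /* ... critical section ... */
    toy_read_unlock(&lock);
    return 0;
}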

@@ -1,30 +0,0 @@
From: Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: xen/gnttab: Fix error path in map_grant_ref()
Part of XSA-295 (c/s 863e74eb2cffb) inadvertently re-positioned the brackets,
changing the logic. If the _set_status() call fails, the grant_map hypercall
would fail with a status of 1 (rc != GNTST_okay) instead of the expected
negative GNTST_* error.
This error path can be taken due to bad guest state, and causes net/blk-back
in Linux to crash.
This is XSA-316.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9fd6e60416..4b5344dc21 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1031,7 +1031,7 @@ map_grant_ref(
{
if ( (rc = _set_status(shah, status, rd, rgt->gt_version, act,
op->flags & GNTMAP_readonly, 1,
- ld->domain_id) != GNTST_okay) )
+ ld->domain_id)) != GNTST_okay )
goto act_release_out;
if ( !act->pin )
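
The bug fixed above is a plain C precedence trap: '!=' binds more tightly than '=', so the misplaced closing parenthesis made rc hold the result of the comparison (0 or 1) instead of the negative status code. A self-contained sketch, with a made-up set_status() and ST_* constants standing in for _set_status() and the GNTST_* values:

/* Demonstrates why the parenthesis placement matters. */
#include <stdio.h>

#define ST_OKAY        0
#define ST_BAD_HANDLE  (-9)   /* made-up negative status */

static int set_status(void)
{
    return ST_BAD_HANDLE;     /* simulate the failure path */
}

int main(void)
{
    int rc;

    /* Buggy form: the comparison happens first, so rc becomes 1. */
    if ( (rc = set_status() != ST_OKAY) )
        printf("buggy: rc = %d\n", rc);   /* prints 1 */

    /* Fixed form: assign first, then compare; rc keeps the real code. */
    if ( (rc = set_status()) != ST_OKAY )
        printf("fixed: rc = %d\n", rc);   /* prints -9 */

    return 0;
}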

@@ -1,39 +0,0 @@
From: Jan Beulich <jbeulich@suse.com>
Subject: gnttab: fix GNTTABOP_copy continuation handling
The XSA-226 fix was flawed - the backwards transformation on rc was done
too early, causing a continuation to not get invoked when the need for
preemption was determined at the very first iteration of the request.
This in particular means that all of the status fields of the individual
operations would be left untouched, i.e. set to whatever the caller may
or may not have initialized them to.
This is part of XSA-318.
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Tested-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3576,8 +3576,7 @@ do_grant_table_op(
rc = gnttab_copy(copy, count);
if ( rc > 0 )
{
- rc = count - rc;
- guest_handle_add_offset(copy, rc);
+ guest_handle_add_offset(copy, count - rc);
uop = guest_handle_cast(copy, void);
}
break;
@@ -3644,6 +3643,9 @@ do_grant_table_op(
out:
if ( rc > 0 || opaque_out != 0 )
{
+ /* Adjust rc, see gnttab_copy() for why this is needed. */
+ if ( cmd == GNTTABOP_copy )
+ rc = count - rc;
ASSERT(rc < count);
ASSERT((opaque_out & GNTTABOP_CMD_MASK) == 0);
rc = hypercall_create_continuation(__HYPERVISOR_grant_table_op, "ihi",