x86/cpu: Align cpu_caps_cleared and cpu_caps_set to unsigned long
cpu_caps_cleared[] and cpu_caps_set[] are arrays of type u32 and therefore naturally aligned to 4 bytes, which is also unsigned long aligned on 32-bit, but not on 64-bit.

The array pointer is handed into atomic bit operations. If the access is not aligned to unsigned long then the atomic bit operations can end up crossing a cache line boundary, which causes the CPU to do a full bus lock as it can't lock both cache lines at once. The bus lock operation is heavy weight and can cause severe performance degradation.

The upcoming #AC split lock detection mechanism will issue warnings for this kind of access.

Force the alignment of these arrays to unsigned long. This avoids the massive code changes which would be required when converting the array data type to unsigned long.

[ tglx: Rewrote changelog ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20190916223958.27048-2-tony.luck@intel.com
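As a rough illustration of the hazard, here is a minimal user-space sketch (not kernel code; atomic_set_bit(), caps_unaligned and caps_aligned are hypothetical names, and a 64-bit unsigned long is assumed). An atomic bit operation locks the whole unsigned long word containing the bit, so a u32 array with only 4-byte alignment can place that word across two cache lines:

/*
 * Standalone illustration (not kernel code) of the split-lock hazard,
 * assuming a 64-bit unsigned long. atomic_set_bit() is a hypothetical
 * stand-in for the kernel's set_bit(): it atomically ORs a bit into
 * the unsigned long word that contains it.
 */
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

static void atomic_set_bit(unsigned long nr, volatile unsigned long *addr)
{
	__atomic_fetch_or(&addr[nr / BITS_PER_LONG],
			  1UL << (nr % BITS_PER_LONG), __ATOMIC_SEQ_CST);
}

/*
 * A plain u32 array is only guaranteed 4-byte alignment. When it is
 * cast to unsigned long * for the atomic op, the 8-byte word being
 * locked can start 4 bytes before a cache line boundary and straddle
 * two lines, which x86 can only handle with a full bus lock.
 */
uint32_t caps_unaligned[20];

/*
 * Forcing unsigned long alignment keeps every 8-byte word inside a
 * single cache line, since line sizes are multiples of
 * sizeof(unsigned long).
 */
uint32_t caps_aligned[20] __attribute__((aligned(sizeof(unsigned long))));

int main(void)
{
	atomic_set_bit(3, (unsigned long *)caps_aligned);
	printf("first two words: %#x %#x\n", caps_aligned[0], caps_aligned[1]);
	return 0;
}

The __aligned(sizeof(unsigned long)) annotation in the diff below is the kernel spelling of the __attribute__((aligned(...))) used in this sketch.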
parent 9774a96f78
commit f6a892ddd5
@@ -565,8 +565,9 @@ static const char *table_lookup_model(struct cpuinfo_x86 *c)
 	return NULL;		/* Not found */
 }
 
-__u32 cpu_caps_cleared[NCAPINTS + NBUGINTS];
-__u32 cpu_caps_set[NCAPINTS + NBUGINTS];
+/* Aligned to unsigned long to avoid split lock in atomic bitmap ops */
+__u32 cpu_caps_cleared[NCAPINTS + NBUGINTS] __aligned(sizeof(unsigned long));
+__u32 cpu_caps_set[NCAPINTS + NBUGINTS] __aligned(sizeof(unsigned long));
 
 void load_percpu_segment(int cpu)
 {