x86: Fix a handful of typos
Fix a couple of typos in code comments.

 [ bp: While at it: s/IRQ's/IRQs/. ]

Signed-off-by: Martin Molnar <martin.molnar.programming@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lkml.kernel.org/r/0819a044-c360-44a4-f0b6-3f5bafe2d35c@gmail.com
This commit is contained in:
parent bb6d3fb354
commit 4d1d0977a2
@@ -84,7 +84,7 @@ void __init init_IRQ(void)
 	 * On cpu 0, Assign ISA_IRQ_VECTOR(irq) to IRQ 0..15.
 	 * If these IRQ's are handled by legacy interrupt-controllers like PIC,
 	 * then this configuration will likely be static after the boot. If
-	 * these IRQ's are handled by more mordern controllers like IO-APIC,
+	 * these IRQs are handled by more modern controllers like IO-APIC,
 	 * then this vector space can be freed and re-used dynamically as the
 	 * irq's migrate etc.
 	 */
@@ -403,9 +403,9 @@ static void default_do_nmi(struct pt_regs *regs)
 	 * a 'real' unknown NMI. For example, while processing
 	 * a perf NMI another perf NMI comes in along with a
 	 * 'real' unknown NMI. These two NMIs get combined into
-	 * one (as descibed above). When the next NMI gets
+	 * one (as described above). When the next NMI gets
 	 * processed, it will be flagged by perf as handled, but
-	 * noone will know that there was a 'real' unknown NMI sent
+	 * no one will know that there was a 'real' unknown NMI sent
 	 * also. As a result it gets swallowed. Or if the first
 	 * perf NMI returns two events handled then the second
 	 * NMI will get eaten by the logic below, again losing a
@@ -531,7 +531,7 @@ static void emergency_vmx_disable_all(void)
 
 	/*
 	 * We need to disable VMX on all CPUs before rebooting, otherwise
-	 * we risk hanging up the machine, because the CPU ignore INIT
+	 * we risk hanging up the machine, because the CPU ignores INIT
 	 * signals when VMX is enabled.
 	 *
 	 * We can't take any locks and we may be on an inconsistent
@@ -1434,7 +1434,7 @@ early_param("possible_cpus", _setup_possible_cpus);
 /*
  * cpu_possible_mask should be static, it cannot change as cpu's
  * are onlined, or offlined. The reason is per-cpu data-structures
- * are allocated by some modules at init time, and dont expect to
+ * are allocated by some modules at init time, and don't expect to
  * do this dynamically on cpu arrival/departure.
  * cpu_present_mask on the other hand can change dynamically.
  * In case when cpu_hotplug is not compiled, then we resort to current
@@ -477,7 +477,7 @@ static unsigned long pit_calibrate_tsc(u32 latch, unsigned long ms, int loopmin)
  * transition from one expected value to another with a fairly
  * high accuracy, and we didn't miss any events. We can thus
  * use the TSC value at the transitions to calculate a pretty
- * good value for the TSC frequencty.
+ * good value for the TSC frequency.
  */
 static inline int pit_verify_msb(unsigned char val)
 {
@@ -295,7 +295,7 @@ static cycles_t check_tsc_warp(unsigned int timeout)
  * But as the TSC is per-logical CPU and can potentially be modified wrongly
  * by the bios, TSC sync test for smaller duration should be able
  * to catch such errors. Also this will catch the condition where all the
- * cores in the socket doesn't get reset at the same time.
+ * cores in the socket don't get reset at the same time.
  */
 static inline unsigned int loop_timeout(int cpu)
 {