This patch is borrowed from the x86 hpet driver and is explained below:
Due to the overly intelligent design of HPETs, we need to work around
the problem that the compare value we write is already behind the
actual counter value by the time the value hits the real compare
register. This happens for two reasons:
1) We read out the counter, add the delta and write the result to the
compare register. When an NMI hits between the read-out and the write,
the counter can already be ahead of the event.
2) The write to the compare register is delayed by up to two HPET
cycles in AMD chipsets.
We can work around this by reading back the compare register to make
sure that the written value has hit the hardware. But that is bad for
performance in the normal case, where the event is far enough in the
future.
Since we already know that the write can be delayed by up to two
cycles, we can avoid the read-back of the compare register completely
if we decide whether the delta has already elapsed based on the
following calculation:
cmp = event - actual_count;
If cmp is less than 64 HPET clock cycles, we decide that the event has
already happened and return -ETIME. That covers the #1 and #2 problems
above, which would otherwise cause a wait for an HPET wraparound
(~306 seconds).
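A minimal sketch of the resulting set_next_event logic, in the usual
clockevents driver context, assuming illustrative hpet_read()/hpet_write()
register accessors and a HPET_MIN_CYCLES constant of 64 (the names are
not necessarily those used by the driver):

  static int hpet_next_event(unsigned long delta,
                             struct clock_event_device *evt)
  {
          u32 cnt;
          s32 res;

          cnt = hpet_read(HPET_COUNTER);  /* current main counter */
          cnt += (u32)delta;              /* desired event time */
          hpet_write(HPET_T0_CMP, cnt);   /* may land up to 2 cycles late */

          /*
           * Check how far the programmed event lies in the future by
           * re-reading the counter (not the compare register).  If it is
           * closer than HPET_MIN_CYCLES, assume case #1 or #2 above has
           * hit us and let the clockevents core retry instead of waiting
           * ~306 seconds for a wraparound.
           */
          res = (s32)(cnt - hpet_read(HPET_COUNTER));
          return res < HPET_MIN_CYCLES ? -ETIME : 0;
  }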
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12162/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When Core-0 handles the SMP_ASK_C0COUNT IPI, we should make the other
cores see the result as soon as possible (especially when the
Store-Fill-Buffer is enabled); otherwise, the C0_Count synchronization
makes no sense.
BTW, an array is more suitable than a per-cpu variable for this
synchronization, and there is a corner case that should be avoided:
the C0_Count of Core-0 can really be 0.
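A minimal sketch of the Core-0 side of the IPI handler, assuming an
illustrative core0_c0count[] array and an __wbflush()-style write-buffer
flush (the exact names in the driver may differ):

  /* Core-0 answers SMP_ASK_C0COUNT by publishing its C0_Count. */
  if (action & SMP_ASK_C0COUNT) {
          u32 c0count = read_c0_count();
          int i;

          /* Readers treat 0 as "no answer yet", so never publish 0. */
          c0count = c0count ? c0count : 1;
          for (i = 1; i < nr_cpu_ids; i++)
                  core0_c0count[i] = c0count;

          /* Drain the Store-Fill-Buffer so other cores see it ASAP. */
          __wbflush();
  }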
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/12160/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Migrate the loongson driver to the new 'set-state' interface provided
by the clockevents core; the earlier 'set-mode' interface is now marked
obsolete.
This also enables us to implement callbacks for the new states of
clockevent devices, for example ONESHOT_STOPPED.
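As an illustration, a clock_event_device now registers one callback per
state instead of a single set_mode hook; a minimal sketch with
illustrative loongson_timer_* handlers (not necessarily the driver's
actual function names):

  static struct clock_event_device loongson_clockevent = {
          .name                   = "loongson-timer",
          .features               = CLOCK_EVT_FEAT_PERIODIC |
                                    CLOCK_EVT_FEAT_ONESHOT,
          /* Replaces the obsolete .set_mode() callback. */
          .set_state_periodic     = loongson_timer_set_periodic,
          .set_state_oneshot      = loongson_timer_set_oneshot,
          .set_state_shutdown     = loongson_timer_shutdown,
          .tick_resume            = loongson_timer_shutdown,
          .set_next_event         = loongson_timer_next_event,
  };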
[ralf@linux-mips.org: Folded in Viresh's followon fix.]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Cc: Hongliang Tao <taohl@lemote.com>
Cc: Valentin Rothberg <valentinrothberg@gmail.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: linux-mips@linux-mips.org
Cc: linaro-kernel@lists.linaro.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Patchwork: https://patchwork.linux-mips.org/patch/10608/
Patchwork: https://patchwork.linux-mips.org/patch/10883/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The majority of SMP platforms handle their IPIs through do_IRQ(),
which calls irq_{enter,exit}(). When a call-function IPI is received,
smp_call_function_interrupt() is called, which also calls
irq_{enter,exit}(), meaning irq_count is raised twice.
When tick broadcasting is used (which is implemented via a
call-function IPI), this incorrectly causes all CPU idle time on the
core receiving broadcast ticks to be accounted as time spent servicing
IRQs, because account_process_tick() will account it as such if
irq_count is greater than 1. This results in 100% CPU usage being
reported on a core which receives its ticks via broadcast.
This patch removes the SMP smp_call_function_interrupt() wrapper which
calls irq_{enter,exit}(). Platforms which handle their IPIs through
do_IRQ() now call generic_smp_call_function_interrupt() directly to
avoid incrementing irq_count a second time. Platforms which don't
(loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt()
wrapped in irq_{enter,exit}().
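A minimal sketch of the two resulting call patterns, with illustrative
platform handler names (generic_smp_call_function_interrupt() and
irq_enter()/irq_exit() are the real kernel helpers):

  /*
   * IPIs delivered through do_IRQ(): irq_enter()/irq_exit() have
   * already run, so call the generic handler directly.
   */
  static irqreturn_t platform_ipi_interrupt(int irq, void *dev_id)
  {
          generic_smp_call_function_interrupt();
          return IRQ_HANDLED;
  }

  /*
   * IPIs that bypass do_IRQ() (loongson, sgi-ip27, sibyte): wrap the
   * generic handler in irq_enter()/irq_exit() exactly once.
   */
  static void platform_ipi_dispatch(void)
  {
          irq_enter();
          generic_smp_call_function_interrupt();
          irq_exit();
  }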
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Currently, the code for Loongson-2/3 is under the loongson directory
and the code for Loongson-1 is under the loongson1 directory. Besides,
there are Kconfig options such as MACH_LOONGSON and MACH_LOONGSON1.
This naming style is very ugly and confusing. Since Loongson-2/3 are
both 64-bit general-purpose CPUs while Loongson-1 is a 32-bit SoC, we
rename both the file names and the Kconfig symbols from
loongson/loongson1 to loongson64/loongson32.
[ralf@linux-mips.org: Resolve a number of simple conflicts.]
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Kelvin Cheung <keguang.zhang@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/9790/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>