Commit Graph

1524 Commits

Author SHA1 Message Date
Len Brown cb654695f6 [ACPI] acpi_register_gsi() fix needed for ACPICA 20051021
Use the #define for ACPI_LEVEL_SENSITIVE instead of assuming
non-zero, because ACPICA 20051021 changes its value to zero.

Also, use uniform variable names:
edge_level -> triggering
active_high_low -> polarity

Signed-off-by: Len Brown <len.brown@intel.com>
2005-12-28 02:50:44 -05:00
Christoph Lameter dc86e88c2b [IA64] Add __read_mostly support for IA64
sparc64, i386 and x86_64 have support for a special data section dedicated
to rarely updated data that is frequently read. The section was created to
avoid false sharing between such rarely written data and frequently written
kernel data.

This patch creates such a data section for ia64 and will group rarely written
data into this section.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-16 10:52:46 -08:00
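
For illustration, a minimal sketch of how such a section attribute is
typically declared and used (the variable below is hypothetical; the
linker-script side that collects .data.read_mostly is not shown):

    /* Mark rarely written, frequently read data so the linker groups it
     * into a dedicated read-mostly section, away from hot written data. */
    #define __read_mostly  __attribute__((__section__(".data.read_mostly")))

    /* Example: a table written once at boot and read on every interrupt. */
    static unsigned long example_lookup_table[64] __read_mostly;
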
Jes Sorensen 3bd7f01713 [IA64] uncached ref count leak
Use raw_smp_processor_id() instead of get_cpu() as we don't need the
extra features of get_cpu().

Signed-off-by: Jes Sorensen <jes@trained-monkey.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-16 10:29:52 -08:00
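
A minimal sketch of the difference between the two calls (the printk output
is just for illustration): get_cpu() disables preemption and must be paired
with put_cpu(), while raw_smp_processor_id() only reads the CPU number.

    #include <linux/smp.h>
    #include <linux/kernel.h>

    static void report_cpu(void)
    {
        /* get_cpu() disables preemption and must be balanced by put_cpu(). */
        int cpu = get_cpu();
        printk(KERN_INFO "running on cpu %d\n", cpu);
        put_cpu();

        /* raw_smp_processor_id() just reads the CPU number; appropriate when
         * the preempt-disable side effect of get_cpu() is not needed. */
        printk(KERN_INFO "running on cpu %d\n", raw_smp_processor_id());
    }
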
John Hawkes f5899b5d4f [IA64] disable preemption in udelay()
The udelay() inline for ia64 uses the ITC.  If CONFIG_PREEMPT is enabled
and the platform has unsynchronized ITCs and the calling task migrates
to another CPU while doing the udelay loop, then the effective delay may
be too short or very, very long.

This patch disables preemption around 100 usec chunks of the overall
desired udelay time.  This minimizes preemption-holdoffs.

udelay() is now too big to be inline, so move it out of line and export it.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-16 10:00:24 -08:00
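
A rough sketch of the chunking idea described above (not the actual
arch/ia64 code; ia64_itc_delay() is a hypothetical stand-in for the
ITC-based busy-wait): preemption is disabled only around each chunk of at
most 100 usec, so a migration between chunks cannot corrupt the timing.

    #include <linux/preempt.h>
    #include <linux/module.h>

    /* Hypothetical stand-in for the ITC-based busy-wait loop. */
    static inline void ia64_itc_delay(unsigned long usecs) { (void)usecs; }

    void udelay(unsigned long usecs)
    {
        while (usecs > 0) {
            unsigned long chunk = usecs > 100 ? 100 : usecs;

            preempt_disable();          /* stay on one CPU, one ITC */
            ia64_itc_delay(chunk);
            preempt_enable();

            usecs -= chunk;
        }
    }
    EXPORT_SYMBOL(udelay);
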
Robin Holt 27af4cfd11 [IA64] fix for SET_PERSONALITY when CONFIG_IA32_SUPPORT is not set.
Missed this when fixing the SET_PERSONALITY change.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-14 08:52:57 -08:00
Linus Torvalds 0e67050666 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 2005-12-12 16:48:29 -08:00
Keshavamurthy Anil S bf8d5c52c3 [PATCH] kprobes: increment kprobe missed count for multiprobes
When multiple probes are registered at the same address and a recursion
occurs (a probe is triggered from within a probe handler), we skip calling
the pre_handlers and just increment the nmissed field.

The patch below makes sure the whole list is walked in the multiple-probe
case.  Without it, the nmissed count is incorrect when several probes share
an address.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-12-12 08:57:45 -08:00
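
A sketch of the multiple-probe miss accounting, assuming the usual kprobes
layout where probes sharing an address hang off the first probe's ->list:

    #include <linux/kprobes.h>

    /* On a recursive hit, skip the handlers but record the miss for every
     * probe registered at this address, not just the first one. */
    static void mark_all_missed(struct kprobe *p)
    {
        struct kprobe *kp;

        p->nmissed++;
        list_for_each_entry(kp, &p->list, list)
            kp->nmissed++;
    }
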
Bob Moore 50eca3eb89 [ACPI] ACPICA 20050930
Completed a major overhaul of the Resource Manager code -
specifically, optimizations in the area of the AML/internal
resource conversion code. The code has been optimized to
simplify and eliminate duplicated code, CPU stack use has
been decreased by optimizing function parameters and local
variables, and naming conventions across the manager have
been standardized for clarity and ease of maintenance (this
includes function, parameter, variable, and struct/typedef
names.)

All Resource Manager dispatch and information tables have
been moved to a single location for clarity and ease of
maintenance. One new file was created, named "rsinfo.c".

The ACPI return macros (return_ACPI_STATUS, etc.) have
been modified to guarantee that the argument is
not evaluated twice, making them less prone to macro
side-effects. However, since there exists the possibility
of additional stack use if a particular compiler cannot
optimize them (such as in the debug generation case),
the original macros are optionally available.  Note that
some invocations of the return_VALUE macro may now cause
size mismatch warnings; the return_UINT8 and return_UINT32
macros are provided to eliminate these. (From Randy Dunlap)

Implemented a new mechanism to enable debug tracing for
individual control methods. A new external interface,
acpi_debug_trace(), is provided to enable this mechanism. The
intent is to allow the host OS to easily enable and disable
tracing for problematic control methods. This interface
can be easily exposed to a user or debugger interface if
desired. See the file psxface.c for details.

acpi_ut_callocate() will now return a valid pointer if a
length of zero is specified - a length of one is used
and a warning is issued. This matches the behavior of
acpi_ut_allocate().

Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-12-10 00:20:25 -05:00
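
The "argument evaluated only once" property mentioned above is the usual
temporary-variable macro trick; a generic sketch, not ACPICA's actual
definition (trace_exit() is a hypothetical stand-in for the debug-exit
helper):

    /* Hypothetical trace helper invoked on function exit. */
    void trace_exit(int status);

    /* The expression 's' expands exactly once, so a call such as
     * return_ACPI_STATUS(do_something_with_side_effects()) cannot trigger
     * the side effect twice. */
    #define return_ACPI_STATUS(s)            \
        do {                                 \
            int _status = (s);               \
            trace_exit(_status);             \
            return _status;                  \
        } while (0)
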
Venkatesh Pallipadi 95235ca2c2 [CPUFREQ] CPU frequency display in /proc/cpuinfo
What is the value shown in "cpu MHz" of /proc/cpuinfo when CPUs are capable of
changing frequency?

Today the answer is: It depends.
On i386:
SMP kernel - It is always the boot frequency
UP kernel - Scales with frequency changes and shows whatever was last set.

On x86_64:
There is a single variable, cpu_khz, that gets written by all the CPUs. So
the frequency set by the last CPU will be seen in /proc/cpuinfo of all the
CPUs in the system. What you see also depends on whether the CPU is
constant_tsc capable or not.

On ia64:
It is always the boot-time frequency of the particular CPU that gets displayed.

The patch below changes this to:
Show the last known frequency of the particular CPU when cpufreq is present.
If the CPU does not support frequency changes through cpufreq, the boot
frequency is shown. The patch affects the i386, x86_64 and ia64 architectures.

Signed-off-by: Venkatesh Pallipadi<venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
2005-12-06 19:35:11 -08:00
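
A sketch of the reporting logic described above, using the cpufreq helper
that returns the last known frequency of a CPU in kHz (0 when cpufreq does
not manage that CPU); boot_cpu_khz is a placeholder for whatever boot-time
value the architecture keeps:

    #include <linux/cpufreq.h>

    static unsigned int displayed_cpu_khz(unsigned int cpu,
                                          unsigned int boot_cpu_khz)
    {
        unsigned int khz = 0;

    #ifdef CONFIG_CPU_FREQ
        khz = cpufreq_quick_get(cpu);   /* last known frequency, in kHz */
    #endif
        if (!khz)
            khz = boot_cpu_khz;         /* CPU not managed by cpufreq */

        return khz;
    }
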
Len Brown 3d5271f988 Pull release into acpica branch 2005-12-06 17:31:30 -05:00
Robin Holt bd1d6e2451 [IA64] Change SET_PERSONALITY to comply with comment in binfmt_elf.c.
We have a customer application which trips a bug.  The problem arises
when a driver attempts to call do_munmap on an area which is mapped, but
because current->thread.task_size has been set to 0xC0000000, the call
to do_munmap fails thinking it is an unmap beyond the user's address
space.

The comment in fs/binfmt_elf.c in load_elf_library(), before the call
to SET_PERSONALITY(), indicates that task_size must not be changed for
the running application until flush_thread(), but it was being changed
for ia64 executing ia32 binaries.

This patch moves the setting of task_size from SET_PERSONALITY() to
flush_thread() as indicated.  The customer application no longer trips
the bug.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-06 09:12:34 -08:00
Venkatesh Pallipadi c82e6abfb3 [ACPI] IA64 ZX1 buildfix for _PDC patch
http://bugzilla.kernel.org/show_bug.cgi?id=5483

The ZX1 config doesn't include cpufreq, so move acpi-processor.c
up out of the ia64/cpufreq directory.

No functional changes.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-12-05 17:24:45 -05:00
Keith Owens 05f70395c6 [IA64] Allow salinfo_decode to detect signals on read
Return -EINTR instead of -ERESTARTSYS when signals are delivered during
a blocked read of /proc/sal/*/event.  This allows salinfo_decode to
detect signals when it is blocked on a read of those files.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-05 11:49:17 -08:00
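
The pattern being changed is a common one in blocking read handlers; a
minimal sketch with the salinfo-specific details elided:

    #include <linux/wait.h>
    #include <linux/errno.h>

    /* Returning -EINTR (instead of -ERESTARTSYS) lets the user-space reader
     * actually see that a signal interrupted the read, rather than having
     * the syscall silently restarted. */
    static int wait_for_event(wait_queue_head_t *wq, int *event_ready)
    {
        if (wait_event_interruptible(*wq, *event_ready))
            return -EINTR;              /* was: -ERESTARTSYS */
        return 0;
    }
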
Venkatesh Pallipadi 05131ecc99 [ACPI] Avoid BIOS inflicted crashes by evaluating _PDC only once
Linux invokes the AML _PDC method (Processor Driver Capabilities)
to tell the BIOS what features it can handle.  While the ACPI
spec says nothing about the OS invoking _PDC multiple times,
doing so with changing bits seems to hopelessly confuse the BIOS
on multiple platforms up to and including crashing the system.

Factor out the _PDC invocation so Linux invokes it only once.

http://bugzilla.kernel.org/show_bug.cgi?id=5483

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-12-01 01:30:35 -05:00
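
A minimal sketch of the "evaluate _PDC only once" guard; acpi_evaluate_pdc()
stands in for the factored-out call and is hypothetical:

    /* Hypothetical wrapper around the real _PDC evaluation. */
    static int acpi_evaluate_pdc(void *pr);

    static int acpi_run_pdc_once(void *pr)
    {
        static int pdc_done;            /* set after the first evaluation */

        if (pdc_done)
            return 0;
        pdc_done = 1;
        return acpi_evaluate_pdc(pr);
    }
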
MAEDA Naoaki c780f96490 [ACPI] ia64 build fix
arch/ia64/kernel/acpi-ext.c: In function `acpi_vendor_resource_match':
arch/ia64/kernel/acpi-ext.c:38: error: structure has no member named `id'

Signed-off-by: MAEDA Naoaki <maeda.naoaki@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-11-30 18:02:33 -05:00
Keshavamurthy Anil S 5a94bcfd2a [IA64] Remove getting break_num by decoding instruction
break.b always sets cr.iim to 0, and the current code tries to
get the break_num by decoding the instruction.  However, there
seems to be a race condition while reading regs->cr_iip: on
another CPU the break.b at regs->cr_iip might have been replaced
with the original instruction as a result of unregister_kprobe(),
so decoding the instruction to obtain break_num gives the wrong
value in that case.

Also includes changes to kprobes.c which now has to handle
break number zero.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-29 09:24:39 -08:00
Dean Roe b77dae5293 [IA64] - Make pfn_valid more precise for SGI Altix systems
A single SGI Altix system can be divided into multiple partitions,
each running their own instance of the Linux kernel.  pfn_valid()
is currently not optimal for any but the first partition, since it
does not compare the pfn with min_low_pfn before calling the more
costly ia64_pfn_valid().

Signed-off-by: Dean Roe <roe@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-29 09:24:10 -08:00
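
A sketch of the cheap-range-check-first idea, assuming ia64_pfn_valid() is
the costly per-pfn lookup mentioned above (not the actual macro):

    /* Reject pfns below this partition's first valid pfn with a cheap
     * comparison before falling back to the expensive lookup. */
    #define partition_pfn_valid(pfn) \
        (((pfn) >= min_low_pfn) && ia64_pfn_valid(pfn))
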
Jim Keniston 8bf1101bd5 [PATCH] kprobes: Fix return probes on sys_execve
Fix a bug in kprobes that can cause an Oops or even a crash when a return
probe is installed on one of the following functions: sys_execve,
do_execve, load_*_binary, flush_old_exec, or flush_thread.  The fix is to
remove the call to kprobe_flush_task() in flush_thread().  This fix has
been tested on all architectures for which the return-probes feature has
been implemented (i386, x86_64, ppc64, ia64).  Please apply.

BACKGROUND

Up to now, we have called kprobe_flush_task() under two situations: when a
task exits, and when it execs.  Flushing kretprobe_instances on exit is
correct because (a) do_exit() doesn't return, and (b) one or more
return-probed functions may be active when a task calls do_exit().  Neither
is the case for sys_execve() and its callees.

Initially, the mistaken call to kprobe_flush_task() on exec was harmless
because we put the "real" return address of each active probed function
back in the stack, just to be safe, when we recycled its
kretprobe_instance.  When support for ppc64 and ia64 was added, this safety
measure couldn't be employed, and was eventually dropped even for i386 and
x86_64.  sys_execve() and its callees were informally blacklisted for
return probes until this fix was developed.

Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-23 16:08:39 -08:00
Chen, Kenneth W e8aabc4716 [IA64] polish comments for tlb fault handler in ivt.S
Polish the comments specifically in vhpt_miss and nested_dtlb_miss
handlers.  I think it's better to refer to each page table level by
its name instead of numbering them, i.e., use pgd, pud, pmd, and pte
instead of referring to them as L1, L2, L3, etc.
Along the line, remove some magic number in the comments like:
"PTA + (((IFA(61,63) << 7) | IFA(33,39))*8)".  No code change at
all, pure comment update.  Feel free to shoot anything you have,
darts or tomahawk cruise missile.  I will duck behind a bunker ;-)

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-17 09:48:15 -08:00
Chen, Kenneth W fedb25fae7 [IA64] 4 level page table bug fix in vhpt_miss
From source code inspection, I think there is a bug in the 4-level
page table path of the vhpt_miss handler.  In the code path that
rechecks the page table entry against the previously read value after
the TLB insertion, the *pte value in register r18 was overwritten with
the value newly read from the pud pointer, rendering the check of the
new *pte against the previous *pte completely wrong.  The bug is
non-fatal and the penalty is just to purge the entry and retry, but
for functional correctness it should be fixed.  The fix is to use a
different register so the new *pud doesn't trash *pte.  (By the way,
the comment in the cmp statement is wrong as well; I will address
that in the next patch.)

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-17 09:47:18 -08:00
Chen, Kenneth W 1e185b97b4 [PATCH] ia64: cpu_idle performance bug fix
Our performance validation on 2.6.15-rc1 caught a disastrous performance
regression on ia64 with netperf (-98%) and volanomark (-58%) compared to
the previous kernel version 2.6.14-git7.  See the following chart (result
group 1 & 2).

  http://kernel-perf.sourceforge.net/results.machine_id=26.html

We have root caused it to commit 64c7c8f885

This changeset broke the ia64 task resched notification.  In
sched.c:resched_task(), a reschedule IPI is conditioned upon
TIF_POLLING_NRFLAG.  However, the above changeset unconditionally set
the polling thread flag for idle tasks regardless of whether pal_halt_light
is in use or not.  As a result, the resched IPI is not sent from
resched_task().  And since the default behavior on ia64 is to use
pal_halt_light, we end up delaying rescheduling of the task until the next
timer tick, which causes the performance regression.

This fixes the performance bug.  I'm glad our performance suite is
turning up bad performance bugs like this in time.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-15 15:50:51 -08:00
Robin Holt 837cd0bdf5 [IA64] 4-level page tables
This patch introduces 4-level page tables to ia64.  I have run
some benchmarks and found nothing interesting.  Performance has
consistently fallen within the noise range.

It also introduces a config option (setting the default to 3
levels).  The config option prevents having 4 level page
tables with 64k base page size.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-11 09:37:29 -08:00
Panagiotis Issaris baf47fb660 [IA64] Replace kcalloc(1, with kzalloc.
Convert kcalloc(1, ...) calls to kzalloc().

Signed-off-by: Panagiotis Issaris <takis@issaris.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-10 11:28:20 -08:00
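
The conversion is mechanical; a before/after sketch with a hypothetical
structure:

    #include <linux/slab.h>

    struct foo_state { int x; };        /* hypothetical structure */

    static struct foo_state *alloc_foo(void)
    {
        /* Before: one zeroed element via kcalloc(1, ...). */
        struct foo_state *p = kcalloc(1, sizeof(*p), GFP_KERNEL);

        /* After: kzalloc() expresses the same thing directly. */
        struct foo_state *q = kzalloc(sizeof(*q), GFP_KERNEL);

        kfree(p);
        return q;
    }
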
Tony Luck 7669a22592 Pull context-bitmap into release branch 2005-11-10 10:39:49 -08:00
Tony Luck cb8a55e4cd Pull extend-notify-die into release branch 2005-11-10 10:39:09 -08:00
Tony Luck cf1d469ec1 Pull mca-check-psp into release branch 2005-11-10 10:38:05 -08:00
Tony Luck 64de57ffd3 Pull align-sig-frame into release branch 2005-11-10 10:37:35 -08:00
Linus Torvalds 7df446e7e0 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 2005-11-09 08:33:27 -08:00
Nick Piggin 64c7c8f885 [PATCH] sched: resched and cpu_idle rework
Make some changes to the NEED_RESCHED and POLLING_NRFLAG to reduce
confusion, and make their semantics rigid.  Improves efficiency of
resched_task and some cpu_idle routines.

* In resched_task:
- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
  and as we hold it during resched_task, then there is no need for an
  atomic test and set there. The only other time this should be set is
  when the task's quantum expires, in the timer interrupt - this is
  protected against because the rq lock is irq-safe.

- If TIF_NEED_RESCHED is set, then we don't need to do anything. It
  won't get unset until the task gets schedule()d off.

- If we are running on the same CPU as the task we resched, then set
  TIF_NEED_RESCHED and no further action is required.

- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
  after TIF_NEED_RESCHED has been set, then we need to send an IPI.

Using these rules, we are able to remove the test and set operation in
resched_task, and make clear the previously vague semantics of
POLLING_NRFLAG.

* In idle routines:
- Enter cpu_idle with preempt disabled. When the need_resched() condition
  becomes true, explicitly call schedule(). This makes things a bit clearer
  (IMO), but I haven't updated all architectures yet.

- Many do a test and clear of TIF_NEED_RESCHED for some reason. According
  to the resched_task rules, this isn't needed (and actually breaks the
  assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
  held). So remove that. Generally one less locked memory op when switching
  to the idle thread.

- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the inner
  most polling idle loops. The above resched_task semantics allow it to be
  set until before the last time need_resched() is checked before going into
  a halt requiring interrupt wakeup.

  Many idle routines simply never enter such a halt, and so POLLING_NRFLAG
  can be always left set, completely eliminating resched IPIs when rescheduling
  the idle task.

  The width of the window in which POLLING_NRFLAG is set can thus be increased, reducing the chance of resched IPIs.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-09 07:56:33 -08:00
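
A condensed sketch of resched_task() following the rules listed above
(runqueue locking and the actual scheduler plumbing are elided; the caller
is assumed to hold the task's runqueue lock):

    #include <linux/sched.h>
    #include <linux/smp.h>

    static void resched_task_sketch(struct task_struct *p)
    {
        int cpu;

        /* Already pending: nothing to do, it stays set until the task is
         * schedule()d off. */
        if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
            return;

        set_tsk_thread_flag(p, TIF_NEED_RESCHED);

        cpu = task_cpu(p);
        if (cpu == smp_processor_id())
            return;                     /* local CPU will notice on its own */

        /* Remote CPU: only interrupt it if it is not polling the flag. */
        if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
            smp_send_reschedule(cpu);
    }
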
Nick Piggin 5bfb5d690f [PATCH] sched: disable preempt in idle tasks
Run idle threads with preempt disabled.

Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
How did it ever work before?

Might fix the CPU hotplugging hang which Nigel Cunningham noted.

We think the bug hits if the idle thread is preempted after checking
need_resched() and before going to sleep, and the CPU is then offlined.

After calling stop_machine_run, the CPU eventually returns from preemption
into the idle thread and goes to sleep.  The CPU continues executing the
previous idle loop and never gets the chance to call play_dead.

By disabling preemption until we are ready to explicitly schedule, this bug is
fixed and the idle threads generally become more robust.

From: alexs <ashepard@u.washington.edu>

  PPC build fix

From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>

  MIPS build fix

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-09 07:56:33 -08:00
Russ Anderson cbb9214434 [IA64] MCA recovery: Bump reference count on bad pages
When a page has a memory uncorrectable ECC error, the recovery
code wants to prevent the page from being reused.  This change
bumps the reference count to prevent the page from getting back
on the free list.

Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-08 10:04:16 -08:00
Russ Anderson 56f87b8217 [IA64] MCA recovery: pfn_valid() needs a pfn
paddr needs to be shifted by PAGE_SHIFT to be valid
input for pfn_valid().

Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-08 10:03:05 -08:00
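
In other words, the fix has this shape (sketch):

    #include <linux/mm.h>

    static struct page *page_for_paddr(unsigned long paddr)
    {
        /* pfn_valid() expects a page frame number, not a physical address. */
        unsigned long pfn = paddr >> PAGE_SHIFT;

        return pfn_valid(pfn) ? pfn_to_page(pfn) : NULL;
    }
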
Russ Anderson a14f25a076 [IA64] MCA recovery based on PSP bits
The determination of whether an MCA is recoverable or not must
be based on the bits set in the PSP (Processor State Parameter).
The specific bits are shown in the Intel IA-64 Architecture Software
Developer's Manual, Vol 2, Table 11-6 Software Recovery Bits in
Processor State Parameter.  Those bits should be consistent
across the entire IA-64 family of processors.

Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-08 10:00:56 -08:00
David Mosberger-Tang cf20d1eafb [IA64] align signal-frame even when not using alternate signal-stack
At the moment, attempting to invoke a signal-handler on the normal
stack is guaranteed to fail if the stack-pointer happens not to be
16-byte aligned.  This is because the signal-trampoline will attempt
to store fp-regs with stf.spill instructions, which will trap for
misaligned addresses.  This isn't terribly useful behavior.  It's
better to just always align the signal frame to the next lower 16-byte
boundary.

Signed-off-by: David Mosberger-Tang <David.Mosberger@acm.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-08 09:58:06 -08:00
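
The fix amounts to rounding the frame address down to a 16-byte boundary
before the fp-regs are spilled; a sketch:

    /* Round the signal frame down so every stf.spill target is 16-byte
     * aligned, whether or not an alternate signal stack is in use. */
    static unsigned long align_sigframe(unsigned long sp,
                                        unsigned long frame_size)
    {
        return (sp - frame_size) & ~15UL;
    }
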
Keith Owens 9138d581b0 [IA64] Extend notify_die() hooks for IA64
notify_die() added for MCA_{MONARCH,SLAVE,RENDEZVOUS}_{ENTER,PROCESS,LEAVE} and
INIT_{MONARCH,SLAVE}_{ENTER,PROCESS,LEAVE}.  We need multiple
notification points for these events because they can take many seconds
to run, which has nasty effects on the behaviour of the rest of the
system.

DIE_SS replaced by a generic DIE_FAULT which checks the vector number,
to allow interception of faults other than SS.

DIE_MACHINE_{HALT,RESTART} added to allow last minute close down
processing, especially when the halt/restart routines are called from
error handlers.

DIE_OOPS added.

The check for kprobe's break numbers has been moved from traps.c to
kprobes.c, allowing DIE_BREAK to be used for any additional break
numbers, i.e. it is no longer kprobes specific.

Hooks for kernel debuggers and kernel dumpers added, ENTER and LEAVE.
Both of these disable the system for long periods which impact on
watchdogs and heartbeat systems in general.  More patches to come that
use these events to reset watchdogs and heartbeats.

unregister_die_notifier() added and both routines exported.  Requested
by Dean Nelson.

Lock removed from {un,}register_die_notifier.  notifier_chain_register()
already takes a lock.  Also the generic notifier chain locking is being
reworked to distinguish between callbacks that can block and those that
cannot, the lock in {un,}register_die_notifier would interfere with
that change.  http://marc.theaimsgroup.com/?l=linux-kernel&m=113018709002036&w=2

Leading white space removed from arch/ia64/kernel/kprobes.c.

Typo in mca.c in original version of this patch found & fixed by Dean
Nelson.

Signed-off-by: Keith Owens <kaos@sgi.com>
Acked-by: Dean Nelson <dcn@sgi.com>
Acked-by: Anil Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-11-07 11:27:13 -08:00
Jesper Juhl b2325fe1b7 [PATCH] kfree cleanup: arch
This is the arch/ part of the big kfree cleanup patch.

Remove pointless checks for NULL prior to calling kfree() in arch/.

Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Acked-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:54:06 -08:00
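
kfree(NULL) is a no-op, so the cleanup is purely mechanical; a sketch:

    #include <linux/slab.h>

    static void release_buf(void *ptr)
    {
        /* Before: a redundant guard.
         *     if (ptr)
         *         kfree(ptr);
         * After: kfree() already tolerates a NULL pointer. */
        kfree(ptr);
    }
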
Ananth N Mavinakayanahalli d217d5450f [PATCH] Kprobes: preempt_disable/enable() simplification
Reorganize the preempt_disable/enable calls to eliminate the extra preempt
depth.  Changes based on Paul McKenney's review suggestions for the kprobes
RCU changeset.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:53:46 -08:00
Ananth N Mavinakayanahalli 991a51d83a [PATCH] Kprobes: Use RCU for (un)register synchronization - arch changes
Changes to the arch kprobes infrastructure to take advantage of the locking
changes introduced by usage of RCU for synchronization.  All handlers are now
run without any locks held, so they have to be re-entrant or provide their own
synchronization.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:53:46 -08:00
Ananth N Mavinakayanahalli 8a5c4dc5e5 [PATCH] Kprobes: Track kprobe on a per_cpu basis - ia64 changes
IA64 changes to track kprobe execution on a per-cpu basis.  We now track the
kprobe state machine independently on each cpu using an arch specific kprobe
control block.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:53:45 -08:00
Ananth N Mavinakayanahalli 66ff2d0691 [PATCH] Kprobes: rearrange preempt_disable/enable() calls
The following set of patches is aimed at improving kprobes scalability.  We
currently serialize kprobe registration, unregistration and handler execution
using a single spinlock - kprobe_lock.

With these changes, kprobe handlers can run without any locks held.  It also
allows for simultaneous kprobe handler executions on different processors as
we now track kprobe execution on a per processor basis.  It is now necessary
that the handlers be re-entrant since handlers can run concurrently on
multiple processors.

All changes have been tested on i386, ia64, ppc64 and x86_64, while sparc64
has been compile tested only.

The patches can be viewed as 3 logical chunks:

patch 1: 	Reorder preempt_(dis/en)able calls
patches 2-7: 	Introduce per_cpu data areas to track kprobe execution
patches 8-9: 	Use RCU to synchronize kprobe (un)registration and handler
		execution.

Thanks to Maneesh Soni, James Keniston and Anil Keshavamurthy for their
review and suggestions. Thanks again to Anil, Hien Nguyen and Kevin Stafford
for testing the patches.

This patch:

Reorder preempt_disable/enable() calls in arch kprobes files in preparation to
introduce locking changes.  No functional changes introduced by this patch.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:53:45 -08:00
John W. Linville e1531b4218 [PATCH] ia64: re-implement dma_get_cache_alignment to avoid EXPORT_SYMBOL
The current ia64 implementation of dma_get_cache_alignment does not work
for modules because it relies on a symbol which is not exported.  Direct
access to a global is a little ugly anyway, so this patch re-implements
dma_get_cache_alignment in a manner similar to what is currently used for
x86_64.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-07 07:53:23 -08:00
Peter Keilty dcc17d1bae [IA64] Use bitmaps for efficient context allocation/free
Corrects the very inefficient method of finding free context_ids in
get_mmu_context().  Instead of walking the task_list of all processes,
two bitmaps are used to efficiently store and look up state: inuse and
needs-flushing.  The entire rid address space is now used before calling
wrap_mmu_context and global tlb flushing.

Special thanks to Ken and Rohit for their review and modifications in
using a bit flushmap.

Signed-off-by: Peter Keilty <peter.keilty@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-31 14:36:05 -08:00
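
A sketch of the bitmap-based allocation described above (sizes and names
are illustrative, not the actual ia64 context code; the wrap/flush path is
elided):

    #include <linux/bitmap.h>
    #include <linux/spinlock.h>

    #define NR_CTX_IDS      (1UL << 18)         /* illustrative rid space */

    static DEFINE_SPINLOCK(ctx_lock);
    static unsigned long ctx_inuse[BITS_TO_LONGS(NR_CTX_IDS)];
    static unsigned long ctx_next = 1;          /* id 0 reserved */

    static unsigned long get_context_id(void)
    {
        unsigned long id;

        spin_lock(&ctx_lock);
        id = find_next_zero_bit(ctx_inuse, NR_CTX_IDS, ctx_next);
        if (id >= NR_CTX_IDS) {
            /* Exhausted: wrap, flush TLBs, recycle ids (elided), rescan. */
            id = find_next_zero_bit(ctx_inuse, NR_CTX_IDS, 1);
        }
        __set_bit(id, ctx_inuse);
        ctx_next = id + 1;
        spin_unlock(&ctx_lock);

        return id;
    }
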
Tim Schmielau 4e57b68178 [PATCH] fix missing includes
I recently picked up my older work to remove unnecessary #includes of
sched.h, starting from a patch by Dave Jones to not include sched.h
from module.h. This reduces the number of indirect includes of sched.h
by ~300. Another ~400 pointless direct includes can be removed after
this disentangling (patch to follow later).
However, quite a few indirect includes need to be fixed up for this.

In order to feed the patches through -mm with as little disturbance as
possible, I've split out the fixes I accumulated up to now (complete for
i386 and x86_64, more archs to follow later) and post them before the real
patch.  This way this large part of the patch is kept simple with only
adding #includes, and all hunks are independent of each other.  So if any
hunk rejects or gets in the way of other patches, just drop it.  My scripts
will pick it up again in the next round.

Signed-off-by: Tim Schmielau <tim@physik3.uni-rostock.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-30 17:37:32 -08:00
Thomas Gleixner ecea8d19c9 [PATCH] jiffies_64 cleanup
Define jiffies_64 in kernel/timer.c rather than having 24 duplicated
defines in each architecture.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-30 17:37:25 -08:00
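
The consolidation boils down to one shared definition instead of 24
per-architecture copies; a sketch of such a definition:

    #include <linux/jiffies.h>
    #include <linux/cache.h>

    /* A single definition in kernel/timer.c replaces the per-arch copies. */
    u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES;
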
Hugh Dickins ab50b8ed81 [PATCH] mm: vm_stat_account unshackled
The original vm_stat_account has fallen into disuse, with only one user, and
only one user of vm_stat_unaccount.  It's easier to keep track if we convert
them all to __vm_stat_account, then free it from its __shackles.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:37 -07:00
Tony Luck 0e1f606092 [IA64] fix warning unused variable `g'
4ac0068f44 forgot to delete
the declaration of this variable which is no longer used.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-28 15:52:13 -07:00
Tony Luck 9590204d31 Pull optimize-ptrace-threads into release branch 2005-10-28 15:27:48 -07:00
Tony Luck d73dee6ee4 Pull for-each-cpu into release branch 2005-10-28 15:26:43 -07:00
Tony Luck 9acd3fa2e1 Pull asm-slot-fix into release branch 2005-10-28 14:33:50 -07:00
Tony Luck 5a2b1722e1 Pull proc-cpuinfo-siblings into release branch 2005-10-28 14:33:35 -07:00
Tony Luck 5833f1420b Pull new-efi-memmap into release branch 2005-10-28 14:32:30 -07:00
Tony Luck dbcb25e621 Pull move-iosapic-to-acpi into release branch 2005-10-28 13:22:55 -07:00
Tony Luck 0ace57a96b Pull ar-k0-usage into release branch 2005-10-28 11:16:32 -07:00
Cliff Wickman 4ac0068f44 [IA64] ptrace - find memory sharers on children list
In arch/ia64/kernel/ptrace.c there is a test for a peek or poke of a
register image (in register backing storage).
The test can be unnecessarily long (and occurs while holding the tasklist_lock),
especially on a large system with thousands of active tasks.

The ptrace caller (presumably a debugger) specifies the pid of
its target and an address to peek or poke.  But the debugger could be
attached to several tasks.
The idea of find_thread_for_addr() is to find whether the target address
is in the RBS for any of those tasks.

Currently it searches the thread-list of the target pid.  If that search
does not find a match, and the shared mm-struct's user count indicates
that there are other tasks sharing this address space (a rare occurrence),
a search is made of all the tasks in the system.

Another approach can drastically shorten this procedure.
It depends upon the fact that in order to peek or poke from/to any task,
the debugger must first attach to that task.  And when it does, the
attached task is made a child of the debugger (is chained to its children list).

Therefore we can search just the debugger's children list.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-27 16:15:03 -07:00
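
A sketch of the shortened search: walk only the debugger's children list
instead of every task in the system (rbs_contains() is a hypothetical
helper for "does this address fall in the task's register backing store"):

    #include <linux/sched.h>
    #include <linux/list.h>

    /* Hypothetical: does addr fall inside this task's RBS? */
    static int rbs_contains(struct task_struct *t, unsigned long addr);

    static struct task_struct *thread_for_addr(struct task_struct *debugger,
                                               unsigned long addr)
    {
        struct task_struct *child;

        /* Every ptrace target was attached, so it sits on ->children. */
        list_for_each_entry(child, &debugger->children, sibling) {
            if (rbs_contains(child, addr))
                return child;
        }
        return NULL;
    }
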
hawkes@sgi.com ddf6d0a00c [IA64] another place to use for_each_cpu_mask() in arch/ia64
In arch/ia64 change the explicit use of a for-loop using NR_CPUS into the
general for_each_online_cpu() construct.  This widens the scope of potential
future optimizations of the general constructs, as well as taking advantage
of the existing optimizations of first_cpu() and next_cpu(), which is
advantageous when the true CPU count is much smaller than NR_CPUS.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-25 15:12:05 -07:00
hawkes@sgi.com dc565b525d [IA64] wider use of for_each_cpu_mask() in arch/ia64
In arch/ia64 change the explicit use of for-loops and NR_CPUS into the
general for_each_cpu() or for_each_online_cpu() constructs, as
appropriate.  This widens the scope of potential future optimizations
of the general constructs, as well as taking advantage of the existing
optimizations of first_cpu() and next_cpu().

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-25 15:10:08 -07:00
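
The conversion pattern is the same in both of the patches above; a
before/after sketch:

    #include <linux/cpumask.h>

    static void visit_cpus(void (*fn)(int cpu))
    {
        int cpu;

        /* Before: walks every slot up to NR_CPUS.
         *     for (cpu = 0; cpu < NR_CPUS; cpu++)
         *         if (cpu_online(cpu))
         *             fn(cpu);
         * After: visits only CPUs that are actually online and benefits
         * from the optimized first_cpu()/next_cpu() implementations. */
        for_each_online_cpu(cpu)
            fn(cpu);
    }
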
H. J. Lu 9c184a073b [IA64] Fix 2.6 kernel for the new ia64 assembler
The new ia64 assembler uses slot 1 for the offset of a long (2-slot)
instruction and the old assembler uses slot 2. The 2.6 kernel assumes
slot 2 and won't boot when the new assembler is used:

http://sources.redhat.com/bugzilla/show_bug.cgi?id=1433

This patch will work with either slot 1 or 2.

Patch provided by H.J. Lu

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-25 15:05:45 -07:00
Siddha, Suresh B ce6e71ad48 [IA64] fix siblings field value in /proc/cpuinfo
Fix the "siblings" field value in /proc/cpuinfo so that it now shows the
number of siblings as seen by OS, instead of what is available from
hardware perspective.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-25 15:00:36 -07:00
Bryan Sutula 76e677e25d [IA64] Avoid kernel hang during CMC interrupt storm
I've noticed a kernel hang during a storm of CMC interrupts, which was
tracked down to the continual execution of the interrupt handler.

There's code in the CMC handler that's supposed to disable CMC
interrupts and switch to polling mode when it sees a bunch of CMCs.
Because disabling CMCs across all CPUs isn't safe in interrupt context,
the disable is done with a schedule_work().  But with continual CMC
interrupts, the schedule_work() never gets executed.

The following patch immediately disables CMC interrupts for the current
CPU.  This then allows (at least) one CPU to ignore CMC interrupts,
execute the schedule_work() code, and disable CMC interrupts on the rest
of the CPUs.

Acked-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Bryan Sutula <Bryan.Sutula@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-06 15:04:11 -07:00
Tony Luck d719948e62 [IA64] end of kernel 'data' is at _end, not _edata
/proc/iomem describes a block of memory as "Kernel data",
but the end address is derived from "_edata".  The kernel
actually has many other sections beyond _edata.  Get the
real end address from _end.

Acked-by: Khalid Aziz <khalid_aziz@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-28 16:09:46 -07:00
Hidetoshi Seto 4881e2cd25 [IA64] MCA recovery verify pfn_valid
Verify that the pfn is valid before calling pfn_to_page(), and cut the
isolation message if nothing was done.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-22 13:27:59 -07:00
Keith Owens 20bb86852a [IA64] Wire in the MCA/INIT handler stacks
Wire the MCA/INIT handler stacks into DTR[2] and track them in
IA64_KR(CURRENT_STACK).  This gives the MCA/INIT handler stacks the
same TLB status as normal kernel stacks.  Reload the old CURRENT_STACK
data on return from OS to SAL.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-22 13:24:19 -07:00
Bjorn Helgaas 650316f122 [IA64] move ACPI IOSAPIC locality domain mapping from pci.c to acpi.c
Move acpi_map_iosapics() from pci.c to acpi.c, since it doesn't
have anything to do with PCI.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-19 15:57:48 -07:00
Bjorn Helgaas 44c451208d [IA64] ia64: add ar.k0 usage note
Update comment about how ar.k0 is used.  Make the initialization the
same as in start_secondary() (no functional change, just make it look
more similar).

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-19 15:55:48 -07:00
Khalid Aziz be379124c0 [IA64] include EFI memory information in /proc/iomem
User mode kexec tools expect to find information about physical
memory in /proc/iomem (as they do on x86) to validate the addresses
that the new kernel will use.

Signed-off-by: Khalid Aziz <khalid.aziz@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-19 15:42:36 -07:00
Dipankar Sarma 4fb3a53860 [PATCH] files: fix preemption issues
With the new fdtable locking rules, you have to protect fdtable with either
->file_lock or rcu_read_lock/unlock().  There are some places where we
aren't doing either.  This patch fixes those places.

Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-17 11:50:02 -07:00
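
A sketch of the required pattern when touching the fdtable (header names as
in current kernels; error handling elided):

    #include <linux/fdtable.h>
    #include <linux/rcupdate.h>
    #include <linux/sched.h>

    static unsigned int fd_table_size(struct files_struct *files)
    {
        struct fdtable *fdt;
        unsigned int max;

        /* fdtable accesses must be covered by ->file_lock or RCU. */
        rcu_read_lock();
        fdt = files_fdtable(files);
        max = fdt->max_fds;
        rcu_read_unlock();

        return max;
    }
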
Hidetoshi Seto 20305e5972 [IA64] mca_drv cleanup
There were some trailing white spaces, long lines, oddly styled brackets,
etc.  This patch cleans them up.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-16 10:39:40 -07:00
Peter Chubb 24b8e0cc09 [IA64] Remove warnings for gcc 4.0 IA64 compilation.
This patch removes some compilation warnings, mostly
trivially. acpi.c fix also noted by Kenji Kaneshige.

Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-16 09:45:27 -07:00
Tony Luck 82f1b07b9a [IA64] fix circular dependency on generation of asm-offsets.h
Fix?  One ugly hack is replaced by a different ugly hack.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-13 08:50:39 -07:00
Tony Luck c85b2a5fe2 Pull sim-fixes into release branch 2005-09-11 14:27:15 -07:00
Keith Owens 49a28cc8fd [IA64] MCA/INIT: remove obsolete unwind code
Delete the special case unwind code that was only used by the old
MCA/INIT handler.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-11 14:09:34 -07:00
Keith Owens 05f335ea04 [IA64] MCA/INIT: remove the physical mode path from minstate.h
Remove the physical mode path from minstate.h.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-11 14:09:12 -07:00
Keith Owens 7f613c7d22 [PATCH] MCA/INIT: use per cpu stacks
The bulk of the change.  Use per cpu MCA/INIT stacks.  Change the SAL
to OS state (sos) to be per process.  Do all the assembler work on the
MCA/INIT stacks, leaving the original stack alone.  Pass per cpu state
data to the C handlers for MCA and INIT, which also means changing the
mca_drv interfaces slightly.  Lots of verification on whether the
original stack is usable before converting it to a sleeping process.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-11 14:08:41 -07:00
Keith Owens 289d773ee8 [IA64] MCA/INIT: avoid reading INIT record during INIT event
Reading the INIT record from SAL during the INIT event has proved to be
unreliable, and a source of hangs during INIT processing.  The new
MCA/INIT handlers remove the need to get the INIT record from SAL.
Change salinfo.c so mca.c can just flag that a new record is available,
without having to read the record during INIT processing.  This patch
can be applied without the new MCA/INIT handlers.

Also clean up some usage of NR_CPUS which should have been using
cpu_online().

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-11 14:02:43 -07:00
Ingo Molnar fb1c8f93d8 [PATCH] spinlock consolidation
This patch (written by me and also containing many suggestions of Arjan van
de Ven) does a major cleanup of the spinlock code.  It does the following
things:

 - consolidates and enhances the spinlock/rwlock debugging code

 - simplifies the asm/spinlock.h files

 - encapsulates the raw spinlock type and moves generic spinlock
   features (such as ->break_lock) into the generic code.

 - cleans up the spinlock code hierarchy to get rid of the spaghetti.

Most notably there's now only a single variant of the debugging code,
located in lib/spinlock_debug.c.  (previously we had one SMP debugging
variant per architecture, plus a separate generic one for UP builds)

Also, I've enhanced the rwlock debugging facility; it will now track
write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
All locks have lockup detection now, which will work for both soft and hard
spin/rwlock lockups.

The arch-level include files now only contain the minimally necessary
subset of the spinlock code - all the rest that can be generalized now
lives in the generic headers:

 include/asm-i386/spinlock_types.h       |   16
 include/asm-x86_64/spinlock_types.h     |   16

I have also split up the various spinlock variants into separate files,
making it easier to see which does what. The new layout is:

   SMP                         |  UP
   ----------------------------|-----------------------------------
   asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
   linux/spinlock_types.h      |  linux/spinlock_types.h
   asm/spinlock_smp.h          |  linux/spinlock_up.h
   linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
   linux/spinlock.h            |  linux/spinlock.h

/*
 * here's the role of the various spinlock/rwlock related include files:
 *
 * on SMP builds:
 *
 *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
 *                        initializers
 *
 *  linux/spinlock_types.h:
 *                        defines the generic type and initializers
 *
 *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
 *                        implementations, mostly inline assembly code
 *
 *   (also included on UP-debug builds:)
 *
 *  linux/spinlock_api_smp.h:
 *                        contains the prototypes for the _spin_*() APIs.
 *
 *  linux/spinlock.h:     builds the final spin_*() APIs.
 *
 * on UP builds:
 *
 *  linux/spinlock_type_up.h:
 *                        contains the generic, simplified UP spinlock type.
 *                        (which is an empty structure on non-debug builds)
 *
 *  linux/spinlock_types.h:
 *                        defines the generic type and initializers
 *
 *  linux/spinlock_up.h:
 *                        contains the __raw_spin_*()/etc. version of UP
 *                        builds. (which are NOPs on non-debug, non-preempt
 *                        builds)
 *
 *   (included on UP-non-debug builds:)
 *
 *  linux/spinlock_api_up.h:
 *                        builds the _spin_*() APIs.
 *
 *  linux/spinlock.h:     builds the final spin_*() APIs.
 */

All SMP and UP architectures are converted by this patch.

arm, i386, ia64, ppc, ppc64, s390/s390x, and x64 were build-tested via
cross-compilers.  m32r, mips, sh, and sparc have not been tested yet, but should
be mostly fine.

From: Grant Grundler <grundler@parisc-linux.org>

  Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
  Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
  non-SMP kernels.  That should be trivial to fix up later if necessary.

  I converted bit ops atomic_hash lock to raw_spinlock_t.  Doing so avoids
  some ugly nesting of linux/*.h and asm/*.h files.  Those particular locks
  are well tested and contained entirely inside arch specific code.  I do NOT
  expect any new issues to arise with them.

  If someone does ever need to use debug/metrics with them, then they will
  need to unravel this hairball between spinlocks, atomic ops, and bit ops
  that exist only because parisc has exactly one atomic instruction: LDCW
  (load and clear word).

From: "Luck, Tony" <tony.luck@intel.com>

   ia64 fix

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Cc: Matthew Wilcox <willy@debian.org>
Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-10 10:06:21 -07:00
Linus Torvalds 486a153f0e Merge master.kernel.org:/pub/scm/linux/kernel/git/sam/kbuild 2005-09-09 15:46:49 -07:00
Ingo Molnar a9f6a0dd54 [PATCH] more SPIN_LOCK_UNLOCKED -> DEFINE_SPINLOCK conversions
This converts the final 20 SPIN_LOCK_UNLOCKED holdouts.  (another 580 places
are already using DEFINE_SPINLOCK).  Build tested on x86.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-09 14:03:48 -07:00
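
The conversion is one line per lock; a sketch with a hypothetical lock:

    #include <linux/spinlock.h>

    /* Before:
     *     static spinlock_t my_lock = SPIN_LOCK_UNLOCKED;
     * After: the initializer macro hides the lock's internal layout, which
     * matters once debug fields are added to the type. */
    static DEFINE_SPINLOCK(my_lock);
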
Dipankar Sarma badf16621c [PATCH] files: break up files struct
In order for the RCU to work, the file table array, sets and their sizes must
be updated atomically.  Instead of ensuring this through too many memory
barriers, we put the arrays and their sizes in a separate structure.  This
patch takes the first step of putting the file table elements in a separate
structure fdtable that is embedded withing files_struct.  It also changes all
the users to refer to the file table using files_fdtable() macro.  Subsequent
applciation of RCU becomes easier after this.

Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-09 13:57:55 -07:00
Chen, Kenneth W 383f2835eb [PATCH] Prefetch kernel stacks to speed up context switch
For an architecture like ia64, the switch stack structure is fairly large
(currently 528 bytes).  For context-switch-intensive applications, we found
that a significant number of cache misses occur in the switch_to() function.
The following patch adds a hook in the schedule() function to prefetch the
switch stack structure as soon as the 'next' task is determined.  This allows
the prefetch of that structure's cache lines to overlap with the rest of the
context-switch path as much as possible.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-09 13:57:31 -07:00
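
A sketch of the hook described above; switch_stack_of() is a hypothetical
stand-in for however the architecture locates a task's saved switch-stack
area, and the schedule() plumbing is elided:

    #include <linux/cache.h>
    #include <linux/prefetch.h>
    #include <linux/sched.h>

    /* Hypothetical: address of the task's saved switch-stack area. */
    static char *switch_stack_of(struct task_struct *t);

    static inline void prefetch_switch_stack(struct task_struct *next)
    {
        char *ss = switch_stack_of(next);
        int i;

        /* Warm the ~528 bytes before switch_to() touches them. */
        for (i = 0; i < 528; i += L1_CACHE_BYTES)
            prefetch(ss + i);
    }
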
Sam Ravnborg 39e01cb874 kbuild: ia64 use generic asm-offsets.h support
Delete obsolete stuff from arch Makefile
Rename file to asm-offsets.h
The trick used in the arch Makefile to circumvent the circular
dependency is kept.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2005-09-09 22:03:13 +02:00
Tony Luck 344a076110 [IA64] Manual merge fix for 3 files
arch/ia64/Kconfig
	arch/ia64/kernel/acpi.c
	include/asm-ia64/irq.h

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-08 14:27:13 -07:00
Tony Luck d8c97d5f3a [IA64] simplified efi memory map parsing
New version leaves the original memory map unmodified.
Also saves any granule trimmings for use by the uncached
memory allocator.

Inspired by Khalid Aziz (various traces of his patch still
remain).  Fixes to uncached_build_memmap() and sn2 testing
by Martin Hicks.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-08 12:39:59 -07:00
Len Brown 64e47488c9 Merge linux-2.6 with linux-acpi-2.6 2005-09-08 01:45:47 -04:00
Keshavamurthy Anil S deac66ae45 [PATCH] kprobes: fix bug when probed on task and isr functions
This patch fixes a race condition wherein the system used to hang or
sometimes crash within minutes when kprobes were inserted on an ISR
routine and a task routine.

The fix has been stress tested on i386, ia64, ppc64 and x86_64.  To
reproduce the problem insert kprobes on schedule() and do_IRQ() functions
and you should see hang or system crash.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:01 -07:00
Keshavamurthy Anil S 661e5a3d99 [PATCH] Kprobes/IA64: fix race when break hits and kprobe not found
This patch addresses a potential race condition for the case where a kprobe
has been removed right after another CPU has taken a break hit.

The way this is addressed here is: when the CPU that has taken the break hit
does not find its corresponding kprobe, we check whether the original
instruction has been replaced by something other than a break.  If so, we
continue executing from the replaced instruction; if it is still a break,
we let the kernel handle it, as the break instruction might have been
inserted by something other than a kprobe (maybe a kernel debugger).

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:00 -07:00
Prasanna S Panchamukhi 1f7ad57b75 [PATCH] Kprobes: prevent possible race conditions ia64 changes
This patch contains the ia64 architecture specific changes to prevent the
possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:00 -07:00
John Hawkes 9c1cfda20a [PATCH] cpusets: Move the ia64 domain setup code to the generic code
Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:57:40 -07:00
John Hawkes f68f447e83 [PATCH] ia64 cpuset + build_sched_domains() mangles structures
I've already sent this to the maintainers, and this is now being sent to a
larger community audience.  I have fixed a problem with the ia64 version of
build_sched_domains(), but a similar fix still needs to be made to the
generic build_sched_domains() in kernel/sched.c.

The "dynamic sched domains" functionality has recently been merged into
2.6.13-rcN that sees the dynamic declaration of a cpu-exclusive (a.k.a.
"isolated") cpuset and rebuilds the CPU Scheduler sched domains and sched
groups to separate away the CPUs in this cpu-exclusive cpuset from the
remainder of the non-isolated CPUs.  This allows the non-isolated CPUs to
completely ignore the isolated CPUs when doing load-balancing.

Unfortunately, build_sched_domains() expects that a sched domain will
include all the CPUs of each node in the domain, i.e., that no node will
belong in both an isolated cpuset and a non-isolated cpuset.  Declaring a
cpuset that violates this presumption will produce flawed data structures
and will oops the kernel.

To trigger the problem (on a NUMA system with >1 CPUs per node):
   cd /dev/cpuset
   mkdir newcpuset
   cd newcpuset
   echo 0 >cpus
   echo 0 >mems
   echo 1 >cpu_exclusive

I have fixed this shortcoming for ia64 NUMA (with multiple CPUs per node).
A similar shortcoming exists in the generic build_sched_domains() (in
kernel/sched.c) for NUMA, and that needs to be fixed also.  The fix
involves dynamically allocating sched_group_nodes[] and
sched_group_allnodes[] for each invocation of build_sched_domains(), rather
than using global arrays for these structures.  Care must be taken to
remember kmalloc() addresses so that arch_destroy_sched_domains() can
properly kfree() the new dynamic structures.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:57:39 -07:00
Ashok Raj 54d5d42404 [PATCH] x86/x86_64: deferred handling of writes to /proc/irqxx/smp_affinity
When handling writes to /proc/irq, the current code re-programs RTE
entries directly. This is not recommended and could potentially cause
chipsets to lock up, or cause missing interrupts.

CONFIG_IRQ_BALANCE does this correctly, where it re-programs only when the
interrupt is pending. The same needs to be done for /proc/irq handling as well.
Otherwise user space irq balancers are really not doing the right thing.

- Changed pending_irq_balance_cpumask to pending_irq_migrate_cpumask for
  lack of a generic name.
- added move_irq out of IRQ_BALANCE, and added this same to X86_64
- Added new proc handler for write, so we can do deferred write at irq
  handling time.
- Display of /proc/irq/XX/smp_affinity used to display CPU_MASKALL, instead
  it now shows only active cpu masks, or exactly what was set.
- Provided a common move_irq implementation, instead of duplicating
  when using generic irq framework.

Tested on i386/x86_64 and ia64 with CONFIG_PCI_MSI turned on and off.
Tested UP builds as well.

MSI testing: tbd: I have cards, need to look for an x-over cable, although I
did test an earlier version of this patch.  Will test in a couple days.

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Acked-by: Zwane Mwaikambo <zwane@holomorphy.com>
Grudgingly-acked-by: Andi Kleen <ak@muc.de>
Signed-off-by: Coywolf Qi Hunt <coywolf@lovecn.org>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:57:15 -07:00
Kenji Kaneshige 697eaad417 [IA64] Minor cleanups - remove CONFIG_ACPI_DEALLOCATE_IRQ
The config option 'CONFIG_ACPI_DEALLOCATE_IRQ' is no longer
needed. This patch removes it.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-07 14:00:08 -07:00
Chen, Kenneth W 0232622324 [IA64] minor performance tune-up in ia64_switch_to
The re-enabling of psr.ic really belongs in the dtr mapping code block.
It makes the fall-through code fast since it doesn't need to execute the
predicated-off instruction.  It logically makes more sense as well, since
psr.ic was turned off in the .map code block.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-09-07 13:56:23 -07:00
Len Brown 129521dcc9 Merge linux-2.6 into linux-acpi-2.6 test 2005-09-03 02:44:09 -04:00
Martin Hicks a994018a5f [IA64] uncached allocator: use generic (not sn2 specific) functions
Change sn2-specific calls into generic functions.  Without this change
the uncached allocator will not work on non-sn2 platforms.

Signed-off-by: Greg Edwards <edwardsg@sgi.com>
Signed-off-by: Martin Hicks <mort@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-31 14:18:04 -07:00
Peter Chubb 714d2dc149 [IA64] Allow /proc/pal/cpu0/vm_info under the simulator
Not all of the PAL VM calls are implemented for the SKI simulator.
Don't just give up if one fails, print information from the calls
that succeed.

Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-31 08:34:51 -07:00
Tony Luck 288ceb8f14 Auto-update from upstream 2005-08-30 09:30:09 -07:00
Tony Luck 3290580285 Pull rationalise-regions into release branch 2005-08-29 15:50:32 -07:00
Len Brown 27a639a92d Auto-update from upstream 2005-08-29 17:02:17 -04:00
Steven Rostedt 69be8f1896 [PATCH] convert signal handling of NODEFER to act like other Unix boxes.
It has been reported that the way Linux handles NODEFER for signals is
not consistent with the way other Unix boxes handle it.  I've written a
program to test the behavior of how this flag affects signals and had
several reports from people who ran this on various Unix boxes,
confirming that Linux seems to be unique in the way this is handled.

The way NODEFER affects signals on other Unix boxes is as follows:

1) If NODEFER is set, other signals in sa_mask are still blocked.

2) If NODEFER is set and the signal is in sa_mask, then the signal is
still blocked. (Note: this is the behavior of all tested systems except
Linux _and_ NetBSD 2.0 *).

The way NODEFER affects signals on Linux:

1) If NODEFER is set, other signals are _not_ blocked regardless of
sa_mask (Even NetBSD doesn't do this).

2) If NODEFER is set and the signal is in sa_mask, then the signal being
handled is not blocked.

The patch converts signal handling in all current Linux architectures to
the way most Unix boxes work.

Unix boxes that were tested:  DU4, AIX 5.2, Irix 6.5, NetBSD 2.0, SFU
3.5 on WinXP, AIX 5.3, Mac OSX, and of course Linux 2.6.13-rcX.

* NetBSD was the only other Unix to behave like Linux on point #2. The
main concern was point #1, where even NetBSD doesn't behave like
Linux.  So with this patch, we leave NetBSD as the lonely one that
behaves differently here on #2.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-29 10:03:11 -07:00
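For reference, a minimal userspace sketch (not part of the patch above) of the SA_NODEFER semantics being standardized: with SA_NODEFER set, the handled signal itself is not auto-blocked, but signals listed in sa_mask (SIGUSR2 here) should still be blocked inside the handler. All names below are purely illustrative.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig)
{
	sigset_t cur;

	(void)sig;
	sigprocmask(SIG_BLOCK, NULL, &cur);	/* query the mask in effect inside the handler */
	printf("SIGUSR2 blocked in handler: %d\n", sigismember(&cur, SIGUSR2));
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = handler;
	sa.sa_flags = SA_NODEFER;		/* do not auto-block SIGUSR1 in its own handler */
	sigemptyset(&sa.sa_mask);
	sigaddset(&sa.sa_mask, SIGUSR2);	/* but SIGUSR2 should stay blocked while it runs */
	sigaction(SIGUSR1, &sa, NULL);

	kill(getpid(), SIGUSR1);
	return 0;
}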
Venkatesh Pallipadi 4db8699bcf [IA64] Add ACPI based P-state support
Patch to support P-state transitions on ia64. This driver is based on ACPI,
and uses the ACPI processor driver interface to find out the P-state support
information for the processor. This driver plugs into generic cpufreq
infrastructure.

Once this driver is loaded successfully, the ondemand or userspace governor can be
used to change the CPU frequency dynamically, based on load or on request from a
userspace process.

Refer :
ACPI specification -
      http://www.acpi.info
P-state related PAL calls -
      http://developer.intel.com/design/itanium/downloads/24869909.pdf

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-26 15:09:24 -07:00
Peter Chubb 0a41e25011 [IA64] Rationalise Region Definitions
Currently, region numbers are defined in several files, with several 
names.  For example, we have REGION_KERNEL in asm/page.h and 
RGN_KERNEL in pgtable.h 
 
We also have address definitions that should depend on the 
RGN_XXX macros, but are currently just long constants. 
 
The following patch reorganises all the definitions so that they have 
the same form (RGN_XXX), are in one place, and that addresses that 
depend on RGN_XXX are derived from them. 

(This is a necessary but not sufficient patch to allow UML-like 
operation on IA64). 

Thanks to David Mosberger for catching the change I missed in mmu_context.h.
 
Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au> 
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-24 15:35:41 -07:00
Len Brown 888ba6c62b [ACPI] delete CONFIG_ACPI_BOOT
it has been a synonym for CONFIG_ACPI since 2.6.12

Signed-off-by: Len Brown <len.brown@intel.com>
2005-08-24 12:08:54 -04:00
Len Brown 84ffa74752 Merge from-linus to-akpm 2005-08-23 22:12:23 -04:00
Keith Owens 71841b8fe7 [IA64] Initialize some spinlocks
Some IA64 spinlocks are not being initialized, make it so.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-16 15:33:26 -07:00
Tony Luck f7001e8f1f Auto-update from upstream 2005-08-16 11:29:57 -07:00
John Hawkes 367ae3cd74 [PATCH] fix for ia64 sched-domains code
Fix for ia64 sched domain building triggered by cpuset code.

Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Dinakar Guniguntala <dino@in.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-16 08:54:00 -07:00
MAEDA Naoaki 702c7e7626 [ACPI] fix ia64 build issues resulting from Lindent and merge
Signed-off-by: MAEDA Naoaki <maeda.naoaki@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Brown, Len <len.brown@intel.com>
2005-08-15 22:26:06 -04:00
Len Brown 95f193aa4f Merge ../to-linus 2005-08-11 00:56:08 -04:00
stephane.eranian@hp.com 6bf11e8c70 [IA64] fix perfmon context load
The PFM_LOAD_CONTEXT may fail silently and cause a session
to remain reserved even though it should not. This can happen
when the commands succeeds in reserving the session but fails
when it actually tries to attach to the load_pid. In that case,
the command has failed but will return 0. More importantly,
the session will remain reserved. This patch fixes the problem.

Signed-off-by: <stephane.eranian@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-10 16:21:58 -07:00
Ken Chen fb573856b2 [IA64] fix nohalt boot option
this changeset broke the "nohalt" kernel boot option.
  8df5a500a3

default_idle() is looking at the new variable can_do_pal_halt.  However,
that variable did not get cleared when the "nohalt" boot option was given.  The result
is that the "nohalt" option is ignored until perfmon is exercised.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-08-08 15:39:47 -07:00
Len Brown 4be44fcd3b [ACPI] Lindent all ACPI files
Signed-off-by: Len Brown <len.brown@intel.com>
2005-08-05 00:45:14 -04:00
Len Brown 1d492eb413 [ACPI] Merge acpi-2.6.12 branch into 2.6.13-rc3
Signed-off-by: Len Brown <len.brown@intel.com>
2005-08-05 00:31:42 -04:00
Kenji Kaneshige 14454a1b3f [ACPI] iosapic_register_intr() now returns error instead of panic
error condition is passed along by acpi_register_gsi().

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-08-04 22:25:48 -04:00
Kenji Kaneshige 1f3a6a1577 [ACPI] acpi_register_gsi() can return error
The current acpi_register_gsi() function has no way to indicate errors to its
callers, even though acpi_register_gsi() can fail to register a GSI for several
reasons (out of memory, lack of interrupt vectors, incorrect BIOS, and so
on).  As a result, callers of acpi_register_gsi() cannot handle the case where
acpi_register_gsi() fails.  I think failure of acpi_register_gsi() should be
handled properly.

This series of patches changes acpi_register_gsi() to return a negative value on
error, and also changes callers of acpi_register_gsi() to handle failure of
acpi_register_gsi().

This patch changes the type of the return value of acpi_register_gsi() from
"unsigned int" to "int" to indicate an error.  If acpi_register_gsi() fails to
register a GSI, it returns a negative value.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-08-04 22:12:08 -04:00
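A sketch of the calling pattern this change enables; 'gsi' and 'dev' are placeholders, and the trigger/polarity arguments are shown only for illustration, not taken from this patch:

int irq = acpi_register_gsi(gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW);

if (irq < 0) {
	printk(KERN_WARNING "failed to register GSI %u (%d)\n", gsi, irq);
	return irq;		/* propagate the error instead of using a bogus value */
}
dev->irq = irq;			/* the returned value is valid only on success */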
Ingo Molnar 6cb54819d7 [PATCH] remove sys_set_zone_reclaim()
This removes sys_set_zone_reclaim() for now.  While i'm sure Martin is
trying to solve a real problem, we must not hard-code an incomplete and
insufficient approach into a syscall, because syscalls are pretty much
for eternity.  I am quite strongly convinced that this syscall must not
hit v2.6.13 in its current form.

Firstly, the syscall lacks basic syscall design: e.g. it allows the
global setting of VM policy for unprivileged users. (!) [ Imagine an
Oracle installation and a SAP installation on the same NUMA box fighting
over the 'optimal' setting for this flag. What will they do? Will they
try to set the flag to their own preferred value every second or so? ]

Secondly, it was added based on a single datapoint from Martin:

 http://marc.theaimsgroup.com/?l=linux-mm&m=111763597218177&w=2

where Martin characterizes the numbers the following way:

 ' Run-to-run variability for "make -j" is huge, so these numbers aren't
   terribly useful except to see that with reclaim the benchmark still
   finishes in a reasonable amount of time. '

in other words: the fundamental problem has likely not been solved, only
a tendency in the right direction has been observed, and a
handful of numbers were picked out of a set of hugely variable results,
without showing the variability data. How much variance is there
run-to-run?

I'd really suggest to first walk the walk and see what's needed to get
stable & predictable kernel compilation numbers on that NUMA box, before
adding random syscalls to tune a particular aspect of the VM ... which
approach might not even matter once the whole picture has been analyzed
and understood!

The third, most important point is that the syscall exposes VM tuning
internals in a completely unstructured way. What sense does it make to
have a _GLOBAL_ per-node setting for 'should we go to another node for
reclaim'? If then it might make sense to do this per-app, via numalib or
so.

The change is minimalistic in that it doesn't remove the syscall and the
underlying infrastructure changes, only the user-visible changes.  We
could perhaps add a CAP_SYS_ADMIN-only sysctl for this hack, a la
/proc/sys/vm/swappiness, but even that looks quite counterproductive
when the generic approach is that we are trying to reduce the number of
external factors in the VM balance picture.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-01 10:03:56 -07:00
Keith Owens b833961bd3 [IA64] unwind.c uses wrong unat from switch_stack
unwind.c can read the wrong unat bits from switch_stack.
sw->caller_unat is the value of ar.unat when the task was blocked.
sw->ar_unat is the value of ar.unat after doing st8.spill for r4-7.
IOW, ar_unat is caller_unat with 4 bits changed.

unw_access_gr() uses sw->ar_unat for r4-7 (correct), but it also uses
sw->ar_unat for other scratch registers (incorrect).  sw->ar_unat
should only be used for r4-7, everything else should use
sw->caller_unat, unless modified by unwind info.  Using sw->ar_unat
risks picking up the 4 bits that were overwritten when r4-7 were saved.

Also this line is wrong
	unw.sw_off[unw.preg_index[UNW_REG_PFS]] = SW(AR_UNAT);
and should be
	unw.sw_off[unw.preg_index[UNW_REG_PFS]] = SW(AR_PFS);

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-27 14:18:08 -07:00
Robert Love d108919b2b [IA64] inotify: ia64 syscalls.
Attached patch adds the inotify syscalls to ia64.

Signed-off-by: Robert Love <rml@novell.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-27 10:46:12 -07:00
Eric W. Biederman 59586e5a26 [PATCH] Don't export machine_restart, machine_halt, or machine_power_off.
machine_restart, machine_halt and machine_power_off are machine
specific hooks deep into the reboot logic, that modules
have no business messing with.  Usually code should be calling
kernel_restart, kernel_halt, kernel_power_off, or
emergency_restart. So don't export machine_restart,
machine_halt, and machine_power_off so we can catch buggy users.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-26 14:35:42 -07:00
Ian Wienand 46906c4415 [IA64] Fix undefined reference to can_cpei_retarget for simulator
The simulator build doesn't turn on ACPI, so doesn't have a definition
of can_cpei_retarget.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-14 09:21:47 -07:00
Tony Luck 99ad25a313 Auto merge with /home/aegl/GIT/linus 2005-07-13 12:15:43 -07:00
Zoltan Menyhart 08357f82d4 [IA64] improve flush_icache_range()
Check with PAL to see what the i-cache line size is for
each level of the cache, and so use the correct stride
when flushing the cache.

Acked-by: David Mosberger
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-12 15:33:18 -07:00
Len Brown 5028770a42 [ACPI] merge acpi-2.6.12 branch into latest Linux 2.6.13-rc...
Signed-off-by: Len Brown <len.brown@intel.com>
2005-07-12 17:21:56 -04:00
Venkatesh Pallipadi 6c4fa56033 [ACPI] fix C1 patch for IA64
http://bugzilla.kernel.org/show_bug.cgi?id=4233

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-07-12 00:10:20 -04:00
Ashok Raj 55e59c511c [ACPI] Evaluate CPEI Processor Override flag
ACPI 3.0 added a Correctable Platform Error Interrupt (CPEI)
Processor Override flag to MADT.Platform_Interrupt_Source.
Record the processor that was provided as a hint by ACPI.

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2005-07-12 00:01:41 -04:00
Kenji Kaneshige 3b5cc09033 [IA64] assign_irq_vector() should not panic
The current assign_irq_vector() will panic if interrupt vectors run
out. But I think the case of running out of interrupt vectors
should be handled by the caller of this function. For example, some
PCI devices can raise the interrupt signal via both MSI and I/O
APIC. So even if the driver for such a device fails to allocate a
vector for MSI, the driver still has a chance to use an I/O APIC based
interrupt. But currently there is no chance for these drivers to use
an I/O APIC based interrupt because the kernel will panic when
assign_irq_vector() fails to allocate an interrupt vector.

The following patch changes assign_irq_vector() for ia64 to return
-ENOSPC on error instead of panic (as i386 and x86_64 versions do).

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-11 10:30:07 -07:00
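The fallback the changelog has in mind looks roughly like the sketch below; AUTO_ASSIGN, 'dev' and use_ioapic_interrupt() are illustrative assumptions, not code from this patch:

int vector = assign_irq_vector(AUTO_ASSIGN);

if (vector < 0) {
	/* -ENOSPC: no free vectors left; try the I/O APIC based path instead */
	return use_ioapic_interrupt(dev);	/* hypothetical fallback */
}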
Olaf Hering d0feafbf14 [IA64] remove linux/version.h include from arch/ia64
changing CONFIG_LOCALVERSION rebuilds too much, for no apparent reason.

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-11 09:58:52 -07:00
H. J. Lu 763b3917e7 [IA64] Fix a typo in arch/ia64/kernel/entry.S
Both 2.4 and 2.6 kernels need this patch for the next binutils.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-08 13:23:49 -07:00
Tony Luck 8d7e35174d [IA64] fix generic/up builds
Jesse Barnes provided the original version of this patch months ago, but
other changes kept conflicting with it, so it got deferred.  Greg Edwards
dug it out of obscurity just over a week ago, and almost immediately
another conflicting patch appeared (Bob Picco's memory-less nodes).

I've resolved the conflicts and got it running again.  CONFIG_SGI_TIOCX
is set to "y" in defconfig, which causes a Tiger to not boot (oops in
tiocx_init).  But that can be resolved later ... get this in now before it
gets stale again.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-06 18:18:10 -07:00
af25e94d4d [IA64] Make ia64 die() preempt safe
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-06 15:44:55 -07:00
Tony Luck 67d340f440 Auto merge with /home/aegl/GIT/linus 2005-07-06 15:35:18 -07:00
Keith Owens 2ba3e3e65c [IA64] restore_sigcontext is not preempt safe
restore_sigcontext calls ia64_set_local_fpu_owner() which requires that
preempt be disabled.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-07-06 15:31:15 -07:00
Rusty Lynch 6772926bef [PATCH] kprobes: fix namespace problem and sparc64 build
The following renames arch_init, a kprobes function for performing any
architecture specific initialization, to arch_init_kprobes in order to
cleanup the namespace.

Also, this patch adds arch_init_kprobes to sparc64 to fix the sparc64 kprobes
build from the last return probe patch.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-05 19:19:00 -07:00
Tony Luck d18bfacff2 Auto merge with /home/aegl/GIT/linus 2005-06-29 15:21:41 -07:00
Peter Chubb a68db763af [IA64] Fix another IA64 preemption problem
There's another problem shown up by Ingo's recent patch to make
smp_processor_id() complain if it's called with preemption enabled.
local_finish_flush_tlb_mm() calls activate_context() in a situation
where it could be rescheduled to another processor.  This patch
disables preemption around the call.

Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-28 10:01:19 -07:00
David Mosberger-Tang 458f935527 [IA64] Speed up lfetch.fault [NULL]
This patch greatly speeds up the handling of lfetch.fault instructions
which result in NaT consumption. Due to the NaT-page mapped at address
0, this is guaranteed to happen when lfetch.fault'ing a NULL pointer.
With this patch in place, we can even define prefetch()/prefetchw() as
lfetch.fault without significant performance degradation.  More
importantly, it allows compilers to be more aggressive with using
lfetch.fault on pointers that might be NULL.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-28 09:28:16 -07:00
Mark Maule 66b7f8a304 [IA64-SGI] pcdp: add PCDP pci interface support
Resend 2 with changes per Bjorn Helgaas' comments.  Changes from original:

+ Change globals to vga_console_iobase/vga_console_membase and make them
  unconditional.
+ Address style-related comments.

Patch to extend the PCDP vga setup code to support PCI io/mem translations
for the legacy vga ioport and ram spaces on architectures (e.g. altix) which
need them.

Summary of the changes:

drivers/firmware/pcdp.c
drivers/firmware/pcdp.h
-----------------------
+ add declaration for the spec-defined PCI interface struct (pcdp_if_pci)
  as well as support macros.

+ extend setup_vga_console() to know about pcdp_if_pci and add a couple of
  globals to hold the io and mem translation offsets if present.

arch/ia64/kernel/setup.c
------------------------
+ tweak early_console_setup() to allow multiple early console setup routines
  to be called.

include/asm-ia64/vga.h
----------------------
+ make VGA_MAP_MEM vga_console_membase aware

Signed-off-by: Mark Maule <maule@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-28 09:09:06 -07:00
Tony Luck 54522b6613 Auto merge with /home/aegl/GIT/ia64-test 2005-06-28 08:24:49 -07:00
Greg KH 8644d2a42b Merge rsync://rsync.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 2005-06-27 22:07:56 -07:00
Kenji Kaneshige 0e888adc41 [PATCH] ACPI based I/O APIC hot-plug: ia64 support
This is an ia64 implementation of acpi_register_ioapic() and
acpi_unregister_ioapic() interfaces.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2005-06-27 21:52:44 -07:00
Kenji Kaneshige b1bb248a5d [PATCH] ACPI based I/O APIC hot-plug: add interfaces
This patch adds the following new interfaces for I/O xAPIC
hotplug. The implementation of these interfaces depends on each
architecture.

    o int acpi_register_ioapic(acpi_handle handle, u64 phys_addr,
			       u32 gsi_base);

        This new interface is to add a new I/O xAPIC specified by
        phys_addr and gsi_base pair. phys_addr is the physical address
        to which the I/O xAPIC is mapped and gsi_base is global system
        interrupt base of the I/O xAPIC. acpi_register_ioapic returns
        0 on success, or negative value on error.

    o int acpi_unregister_ioapic(acpi_handle handle, u32 gsi_base);

        This new interface is to remove an I/O xAPIC specified by
        gsi_base. acpi_unregister_ioapic returns 0 on success, or
        negative value on error.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2005-06-27 21:52:44 -07:00
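A usage sketch built from the interface declarations quoted above; the surrounding function names, error handling, and printk are illustrative only:

static int hotplug_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base)
{
	int err = acpi_register_ioapic(handle, phys_addr, gsi_base);

	if (err)
		return err;	/* negative value on error, 0 on success */

	/* ... bring up devices behind the newly added I/O xAPIC ... */
	return 0;
}

static void unplug_ioapic(acpi_handle handle, u32 gsi_base)
{
	if (acpi_unregister_ioapic(handle, gsi_base))
		printk(KERN_ERR "failed to unregister I/O xAPIC (gsi_base %u)\n", gsi_base);
}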
Keshavamurthy Anil S c7b645f934 [PATCH] kprobes/ia64: refuse kprobe on ivt code
Not safe to insert kprobes on IVT code.

This patch checks whether the address at which a kprobe is being inserted is
in ivt code, and if so it refuses to register the kprobe.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Acked-by: David Mosberger <davidm@napali.hpl.hp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 15:23:54 -07:00
Rusty Lynch a528e21c23 [PATCH] kprobes/ia64: refuse inserting kprobe on slot 1
Without the ability to atomically write 16 bytes, we can not update the
middle slot of a bundle, slot 1, unless we stop the machine first.  This
patch will ensure the ability to robustly insert and remove a kprobe by
refusing to insert a kprobe on slot 1 until a mechanism is in place to
safely handle this case.

Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 15:23:53 -07:00
Rusty Lynch 9508dbfe39 [PATCH] Return probe redesign: ia64 specific implementation
The following patch implements function return probes for ia64 using
the revised design.  With this new design we no longer need to do some
of the odd hacks previous required on the last ia64 return probe port
that I sent out for comments.

Note that this new implementation still does not resolve the problem noted
by Keith Owens where backtrace data is lost after a return probe is hit.

Changes include:
 * Addition of kretprobe_trampoline to act as a dummy function for instrumented
   functions to return to, and for the return probe infrastructure to place
   a kprobe on, gaining control so that the return probe handler
   can be called, and so that the instruction pointer can be moved back
   to the original return address.
 * Addition of arch_init(), allowing a kprobe to be registered on
   kretprobe_trampoline
 * Addition of trampoline_probe_handler() which is used as the pre_handler
   for the kprobe inserted on kretprobe_implementation.  This is the function
   that handles the details for calling the return probe handler function
   and returning control to the original return address
 * Addition of arch_prepare_kretprobe() which is set up as the pre_handler
   for a kprobe registered at the beginning of the target function by
   kernel/kprobes.c so that a return probe instance can be set up when
   a caller enters the target function.  (A return probe instance contains
   all the needed information for trampoline_probe_handler to do its job.)
 * Hooks added to the exit path of a task so that we can clean up any left-over
   return probe instances (i.e. if a task dies while inside a targeted function
   then the return probe instance was reserved at the beginning of the function
   but the function never returns so we need to mark the instance as unused.)

Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 15:23:53 -07:00
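For orientation, a sketch of how the arch-independent return-probe API is typically consumed; the field names follow the common kprobes interface and the probed-address setup is elided, so treat this as an assumption rather than part of this port:

static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	printk(KERN_INFO "probed function returned\n");
	return 0;
}

static struct kretprobe my_rp = {
	.handler   = ret_handler,
	.maxactive = 20,	/* return-probe instances to pre-allocate */
};

/* my_rp.kp.addr must point at the first instruction of the target function;  */
/* register_kretprobe(&my_rp) arms it, unregister_kretprobe(&my_rp) removes it */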
Jens Axboe 22e2c507c3 [PATCH] Update cfq io scheduler to time sliced design
This updates the CFQ io scheduler to the new time sliced design (cfq
v3).  It provides full process fairness, while giving excellent
aggregate system throughput even for many competing processes.  It
supports io priorities, either inherited from the cpu nice value or set
directly with the ioprio_get/set syscalls.  The latter closely mimic
set/getpriority.

This import is based on my latest from -mm.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27 14:33:29 -07:00
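A small userspace sketch of the ioprio_set() call mentioned above; since no libc wrapper existed at the time, the constants are spelled out by hand and mirror the kernel's ioprio encoding (treat the exact values as assumptions):

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT		13	/* scheduling class lives in the top bits */
#define IOPRIO_PRIO_VALUE(cl, d)	(((cl) << IOPRIO_CLASS_SHIFT) | (d))
#define IOPRIO_CLASS_BE			2	/* best-effort class */
#define IOPRIO_WHO_PROCESS		1

int main(void)
{
	/* best-effort class, priority level 4, for the calling process */
	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4)) < 0)
		perror("ioprio_set");
	return 0;
}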
Dinakar Guniguntala 7f1867a5b3 [PATCH] Dynamic sched domains: ia64 changes
ia64 changes similar to kernel/sched.c.

Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
Acked-by: Paul Jackson <pj@sgi.com>
Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:45 -07:00
Nick Piggin 687f1661d3 [PATCH] sched: sched tuning
Do some basic initial tuning.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:42 -07:00
Shaohua Li a9fa06c26f [PATCH] set cpu_state for CPU hotplug (ia64)
A dead CPU notifies an online CPU that it's dead using the cpu_state variable.
After switching to physical cpu hotplug, we forgot to set the variable.
This patch fixes it.  Currently only __cpu_die uses it.  We changed other
locations for consistency in case others use it.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Acked-by: Ashok Raj <ashok.raj@intel.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:31 -07:00
Zwane Mwaikambo f370513640 [PATCH] i386 CPU hotplug
(The i386 CPU hotplug patch provides infrastructure for some work which Pavel
is doing as well as for ACPI S3 (suspend-to-RAM) work which Li Shaohua
<shaohua.li@intel.com> is doing)

The following provides i386 architecture support for safely unregistering and
registering processors during runtime, updated for the current -mm tree.  In
order to avoid dumping cpu hotplug code into kernel/irq/* I dropped the
cpu_online check in do_IRQ() by modifying fixup_irqs().  The difference is
that on cpu offline, fixup_irqs() is called before we clear the cpu from
cpu_online_map, followed by a long delay, in order to ensure that we never have
any queued external interrupts on the APICs.  There are additional changes to s390
and ppc64 to account for this change.

1) Add CONFIG_HOTPLUG_CPU
2) disable local APIC timer on dead cpus.
3) Disable preempt around irq balancing to prevent CPUs going down.
4) Print irq stats for all possible cpus.
5) Debugging check for interrupts on offline cpus.
6) Hacky fixup_irqs() to redirect irqs when cpus go off/online.
7) play_dead() for offline cpus to spin inside.
8) Handle offline cpus set in flush_tlb_others().
9) Grab lock earlier in smp_call_function() to prevent CPUs going down.
10) Implement __cpu_disable() and __cpu_die().
11) Enable local interrupts in cpu_enable() after fixup_irqs()
12) Don't fiddle with NMI on dead cpu, but leave intact on other cpus.
13) Program IRQ affinity whilst cpu is still in cpu_online_map on offline.

Signed-off-by: Zwane Mwaikambo <zwane@linuxpower.ca>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:29 -07:00
Anil S Keshavamurthy 852caccc89 [PATCH] Kprobes/ia64: temporary disarming of reentrant probe
This patch includes IA64 architecture specific changes (ported from i386) to
support temporary disarming on reentrancy of probes.

In case of reentrancy we single step without calling user handler.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:25 -07:00
Keshavamurthy Anil S 89cb14c0dd [PATCH] Kprobes/IA64: check jprobe break before handling
Once the jprobe instrumented function returns, it executes a jprobe_break
which is a break instruction with the __IA64_JPROBE_BREAK value.  The current
patch checks for this break value before assuming that the jprobe instrumented
function just completed.

The previous code was not checking for this value and that was a bug.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:24 -07:00
Anil S Keshavamurthy 708de8f11c [PATCH] Kprobes IA64: safe register kprobe
The current kprobes code does not yet handle registering kprobes on the
following kinds of instructions, which need to be emulated in a special way.

1) mov r1=ip
2) chk -- Speculation check instruction

This patch attempts to fail register_kprobes() when a user tries to insert
a kprobe on the above kinds of instructions.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:24 -07:00
Anil S Keshavamurthy 1674eafcbd [PATCH] Kprobes IA64: cmp ctype unc support
The current Kprobes code, when patching the original instruction with the break
instruction, tries to retain the original qualifying predicate (qp).  However,
cmp.crel.ctype where ctype == unc is a special instruction that
always needs to be executed irrespective of the qp.  Hence, if the instruction
we are patching is of this type, then we should not copy the original qp to
the break instruction; this is because we always want the break fault to
happen so that we can emulate the instruction.

This patch is based on the feedback given by David Mosberger

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:23 -07:00
Anil S Keshavamurthy a5403183d8 [PATCH] Kprobes IA64: arch_prepare_kprobes() cleanup
arch_prepare_kprobes() was doing lots of work
in just one single function.  This patch
attempts to clean up arch_prepare_kprobes() by moving
specific sub-tasks to the following (new) functions:
1) valid_kprobe_addr() -->> validates the given kprobe address
2) get_kprobe_inst(slot..) -->> retrieves the instruction for a given slot from the bundle
3) prepare_break_inst() -->> prepares the break instruction within the bundle
	3a) update_kprobe_inst_flag() -->> updates the internal flags required
			for proper emulation of the instruction at a later
			point in time.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:23 -07:00
Rusty Lynch 13608d6433 [PATCH] Kprobes ia64 qp fix
Fix a bug where a kprobe still fires when the instruction is predicated
off.  So given p6=0 and an instruction like:

(p6) move loc1=0

we should not be triggering the kprobe.  This is handled by carrying over
the qp section of the original instruction into the break instruction.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:23 -07:00
Rusty Lynch 8bc76772ad [PATCH] Kprobes ia64 cleanup
A cleanup of the ia64 kprobes implementation such that all of the bundle
manipulation logic is concentrated in arch_prepare_kprobe().

With the current design for kprobes, the arch specific code only has a
chance to return failure inside the arch_prepare_kprobe() function.

This patch moves all of the work that was happening in arch_copy_kprobe()
and most of the work that was happening in arch_arm_kprobe() into
arch_prepare_kprobe().  By doing this we can add further robustness checks
in arch_arm_kprobe() and refuse to insert kprobes that will cause problems.

Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:23 -07:00
Anil S Keshavamurthy cd2675bf65 [PATCH] Kprobes/IA64: support kprobe on branch/call instructions
This patch is required to support kprobe on branch/call instructions.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:23 -07:00
Anil S Keshavamurthy b2761dc262 [PATCH] Kprobes/IA64: architecture specific JProbes support
This patch adds IA64 architecture specific JProbes support on top of Kprobes

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:22 -07:00
Anil S Keshavamurthy fd7b231ff9 [PATCH] Kprobes/IA64: arch specific handling
This is an IA64 arch specific handling of Kprobes

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:22 -07:00
Anil S Keshavamurthy 7213b25218 [PATCH] Kprobes/IA64: kdebug die notification mechanism
As many of you know, kprobes exist in the mainline kernel for various
architectures including i386, x86_64, ppc64 and sparc64.  The attached patches
following this mail are a port of Kprobes and Jprobes to IA64.

I have tested these patches for kprobes and Jprobes and they seem to work fine.
I have tested them by inserting kprobes on various slots and various
templates, including various types of branch instructions.

I have also tested this patch using the tool
http://marc.theaimsgroup.com/?l=linux-kernel&m=111657358022586&w=2 and the
kprobes for IA64 works great.

Here is a list of TODO items; patches for them will appear soon.

1) Support kprobes on "mov r1=ip" type of instruction
2) Support Kprobes and Jprobes to exist on the same address
3) Support Return probes
4) Architecture independent cleanup of kprobes

This patch adds the kdebug die notification mechanism needed by Kprobes.

For a break instruction on a Branch-type slot, imm21 is ignored and the value
zero is placed in the IIM register; hence we need to handle kprobes
in the switch case for zero.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>

From: Rusty Lynch <rusty.lynch@intel.com>

At the point in traps.c where we receive a break with a zero value, we cannot
say whether the break was a result of a kprobe or some other debug facility.

This simple patch changes the informational string to a more correct "break
0" value, and applies to the 2.6.12-rc2-mm2 tree with all the kprobes
patches that were just recently included for the next mm cut.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:22 -07:00
Linus Torvalds fb7a0e3653 Merge kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6.git
Do arch/ia64/defconfig by hand.
2005-06-22 12:22:12 -07:00
Jes Sorensen f14f75b811 [PATCH] ia64 uncached alloc
This patch contains the ia64 uncached page allocator and the generic
allocator (genalloc).  The uncached allocator was formerly part of the SN2
mspec driver but there are several other users of it so it has been split
off from the driver.

The generic allocator can be used by device driver to manage special memory
etc.  The generic allocator is based on the allocator from the sym53c8xx_2
driver.

Various users on ia64 need uncached memory.  The SGI SN architecture requires
it for inter-partition communication between partitions within a large NUMA
cluster.  The specific user for this is the XPC code.  Another application is
large MPI-style applications, which use it for synchronization; on SN this can
be done using special 'fetchop' operations, but it also benefits non-SN
hardware, which may use regular uncached memory for this purpose.  The
performance difference between doing this through uncached vs. cached memory is
pretty substantial.  This is handled by the mspec driver, which I will push out
in a separate patch.

Rather than creating a specific allocator for just uncached memory, I came up
with genalloc, which is a general-purpose allocator that can be used by device
drivers and other subsystems as they please, for instance to handle onboard
device memory.  It was derived from the sym53c8xx_2 driver's allocator, which
is also an example of a potential user (I am refraining from modifying sym2
right now as it seems to have been under fairly heavy development recently).

On ia64, memory has various properties within a granule, i.e. it isn't safe to
access memory as uncached within the same granule that currently has memory
accessed in cached mode.  The regular system therefore doesn't utilize memory
in the lower granules, which is mixed in with device PAL code etc.  The
uncached driver walks the EFI memmap, pulls out the spill uncached pages
and sticks them into the uncached pool.  Only after these chunks have been
utilized will it start converting regular cached memory into uncached memory.
Hence the reason for the EFI related code additions.

Signed-off-by: Jes Sorensen <jes@wildopensource.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:18 -07:00
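As a rough illustration of how a driver consumes such a pool, here is a sketch against the genalloc interface as it appears in later kernels; the signatures have evolved since this patch, so the calls below are assumptions rather than this patch's exact API:

#include <linux/genalloc.h>

static struct gen_pool *pool;

static int setup_pool(unsigned long base, size_t size)
{
	pool = gen_pool_create(PAGE_SHIFT, -1);		/* page-sized minimum allocations, any node */
	if (!pool)
		return -ENOMEM;
	return gen_pool_add(pool, base, size, -1);	/* hand the special region to the pool */
}

static unsigned long grab_chunk(void)
{
	return gen_pool_alloc(pool, PAGE_SIZE);		/* returns 0 when the pool is exhausted */
}

static void release_chunk(unsigned long addr)
{
	gen_pool_free(pool, addr, PAGE_SIZE);
}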
Martin Hicks 753ee72896 [PATCH] VM: early zone reclaim
This is the core of the (much simplified) early reclaim.  The goal of this
patch is to reclaim some easily-freed pages from a zone before falling back
onto another zone.

One of the major uses of this is NUMA machines.  With the default allocator
behavior the allocator would look for memory in another zone, which might be
off-node, before trying to reclaim from the current zone.

This adds a zone tuneable to enable early zone reclaim.  It is selected on a
per-zone basis and is turned on/off via syscall.

Adding some extra throttling on the reclaim was also required (patch
4/4).  Without the machine would grind to a crawl when doing a "make -j"
kernel build.  Even with this patch the System Time is higher on
average, but it seems tolerable.  Here are some numbers for kernbench
runs on a 2-node, 4cpu, 8Gig RAM Altix in the "make -j" run:

			wall  user   sys   %cpu  ctx sw.  sleeps
			----  ----   ---   ----   ------  ------
No patch		1009  1384   847   258   298170   504402
w/patch, no reclaim     880   1376   667   288   254064   396745
w/patch & reclaim       1079  1385   926   252   291625   548873

These numbers are the average of 2 runs of 3 "make -j" runs done right
after system boot.  Run-to-run variability for "make -j" is huge, so
these numbers aren't terribly useful except to see that with reclaim
the benchmark still finishes in a reasonable amount of time.

I also looked at the NUMA hit/miss stats for the "make -j" runs and the
reclaim doesn't make any difference when the machine is thrashing away.

Doing a "make -j8" on a single node that is filled with page cache pages
takes 700 seconds with reclaim turned on and 735 seconds without reclaim
(due to remote memory accesses).

The simple zone_reclaim syscall program is at
http://www.bork.org/~mort/sgi/zone_reclaim.c

Signed-off-by: Martin Hicks <mort@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:14 -07:00
Matthew Chapman 4ea78729b8 [IA64] ptrace and restore_sigcontext() allow ar.rsc.pl==0
This patch fixes handling of accesses to ar.rsc via ptrace & restore_sigcontext
[With Thanks to Chris Wright for noticing the restore_sigcontext path]

Signed-off-by: Matthew Chapman <matthewc@hp.com>
Acked-by: David Mosberger <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-21 16:19:20 -07:00
Ken Chen 0393eed5c3 [IA64] fix nested_dtlb_miss handler for hugetlb address
The nested_dtlb_miss handler currently does not handle faults from
hugetlb addresses correctly.  It walks the page table assuming PAGE_SIZE.
Thus when taking a fault triggered from a hugetlb address, it would not
calculate the pgd/pmd/pte address correctly, resulting in an incorrect
invocation of ia64_do_page_fault().  In there, the kernel will signal SIGBUS
and the application dies (the faulting address is perfectly legal and we
have a valid pte for the corresponding user hugetlb address as well).
This patch fixes the described kernel bug.  Since nested_dtlb_miss is a
rare event and a slow path anyway, I'm making the change without #ifdef
CONFIG_HUGETLB_PAGE for code readability.  Tony, please apply.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-21 14:40:31 -07:00
Christophe Lucas 52a0de2cd2 [IA64] printk needs KERN_INFO arch/ia64/kernel/smp.c
printk() calls should include an appropriate KERN_* constant.

Signed-off-by: Christophe Lucas <clucas@rotomalug.org>
Signed-off-by: Domen Puncer <domen@coderock.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-21 14:21:17 -07:00
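The pattern being enforced, for reference (message text is illustrative):

/* before: no log level, so the message gets the default priority */
printk("CPU %d is shutting down...\n", cpu);

/* after: explicit level, as this cleanup requests */
printk(KERN_INFO "CPU %d is shutting down...\n", cpu);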
David Mosberger-Tang 34b727c135 [IA64] Drop spurious paren in entry.h
The latest assembler catches this typo.  (reported by Jim Wilson).

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-20 09:34:02 -07:00
Tony Luck f2cbb4f019 Auto merge with /home/aegl/GIT/linus 2005-06-15 14:06:48 -07:00
Christoph Lameter a2a64769d0 [IA64] Fix race condition in the rt_sigprocmask fastcall
current->blocked will be set to the value of current->thread_info->flags if the
cmpxchg to update thread_info->flags fails. For performance reasons the store into
current->blocked was placed in the cmpxchg loop. However, the cmpxchg overwrites the
register holding the value to be stored. In the rare case of a retry the value of
thread_info->flags will be written into current->blocked.

The fix is to use another register so that the register containing the current->blocked
value is not overwritten.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-09 13:04:30 -07:00
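Conceptually the fix looks like the sketch below: the value destined for current->blocked lives in its own variable and is never recycled as cmpxchg scratch, so a retry cannot leak the flags word into current->blocked. The real fix is in the hand-written ia64 fastcall assembly; ti_flags and new_mask are placeholders.

unsigned long old, new;
unsigned long blocked = new_mask;		/* what we ultimately want in current->blocked */

do {
	old = ti_flags;				/* re-read the flags word on every retry */
	new = old & ~_TIF_SIGPENDING;		/* scratch value used only for the flags update */
} while (cmpxchg(&ti_flags, old, new) != old);

current->blocked.sig[0] = blocked;		/* not derived from the cmpxchg temporaries */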
Peter Chubb 05062d96a2 [PATCH] ia64: fix floating-point preemption problem
There've been reports of problems with CONFIG_PREEMPT=y and the high
floating point partition.  This is caused by the possibility of preemption
and rescheduling on a different processor while saving or restoring the
high partition.

The only places where the FPU state is touched are in ptrace, in
switch_to(), and when handling a floating-point exception.  In switch_to()
preemption is off.  So it's only in trap.c and ptrace.c that we need to
prevent preemption.

Here is a patch that adds commentary to make the conditions clear, and adds
appropriate preempt_{en,dis}able() calls to make it so.  In trap.c I use
preempt_enable_no_resched(), as we're about to return to user space where
the preemption flag will be checked anyway.

Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-08 16:21:14 -07:00
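The shape of the added calls, as a sketch; apart from preempt_disable()/preempt_enable_no_resched() and ia64_is_local_fpu_owner(), the names here are illustrative placeholders:

preempt_disable();
if (ia64_is_local_fpu_owner(task))
	save_high_fp_regs(task);	/* placeholder for touching f32-f127 */
preempt_enable_no_resched();		/* about to return to user space, which rechecks anyway */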
Keith Owens 70aa488cff [IA64] Extract correct break number for break.b
break.b does not store the break number in cr.iim, instead it stores 0,
which makes all break.b instructions look like BUG().  Extract the
break number from the instruction itself.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-08 12:25:24 -07:00
Tony Luck 86ebacd360 [IA64] Update comment to describe modes set in default control register.
Christian Hildner pointed out that the comment did not match what the
code does in cpu_init() when we set up the default control register.
Patch based on suggestions from Ken Chen.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-08 12:12:48 -07:00
Keith Owens 866ba633a8 [IA64] Module gp must point to valid memory
Some bits of the kernel assume that gp always points to valid memory,
in particular PHYSICAL_MODE_ENTER() assumes that both gp and sp are
valid virtual addresses with associated physical pages.  The IA64
module loader puts gp well past the end of the module, with no physical
backing.  Offsets on gp are still valid, but physical mode addressing
breaks for modules.  Ensure that gp always falls within the module
body.  Also ensure that gp is 8 byte aligned.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-08 11:41:31 -07:00
Peter Chubb b655913bf3 [IA64] Cleanup compile warnings for ski config
The attached patch cleans up a compilation warning when ACPI
is turned off (i.e., when compiling for the Ski simulator).

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-01 15:20:17 -07:00
Tony Luck fffcc150a2 [IA64] Use "PER_CPU" form of EXPORT macro
I was gently reminded that there are per-cpu forms of the EXPORT_SYMBOL macros.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-31 10:38:32 -07:00
Zhang Yanmin d11cf326bd [IA64] sys_mmap doesn't follow posix.1 when parameter len=0
In the IA64 kernel, sys_mmap calls do_mmap2, and do_mmap2 returns addr if
len=0, which means the mmap syscall succeeds.

POSIX.1 says:
The mmap() function shall fail if:
[EINVAL] The value of len is zero.

Here is a patch to fix it.

Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Acked-by: David Mosberger <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-26 10:19:07 -07:00
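The essence of the fix is a guard of this form early in the common mmap path (a sketch, not the literal hunk):

/* POSIX.1: mmap() shall fail with EINVAL when len is zero */
if (!len)
	return -EINVAL;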
Tony Luck fe12e25ebd [IA64] initialize spinlock pfm_alt_install_check
I applied the penultimate version of the perfmon patch, which didn't have
the initialization of the new spinlock that was added.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-18 17:09:06 -07:00
Tony Luck a1ecf7f6e6 [IA64] alternate perfmon handler
Patch from Charles Spirakis

Some linux customers want to optimize their applications on the latest
hardware but are not yet willing to upgrade to the latest kernel. This
patch provides a way to plug in an alternate, basic, and GPL'ed PMU
subsystem to help with their monitoring needs or for specialty work. It
can also be used in case of serious unexpected bugs in perfmon. Mutual
exclusion between the two subsystems is guaranteed, hence no conflict
can arise from both subsystem being present.

Acked-by: Stephane Eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-18 16:14:30 -07:00
Tony Luck 325a479c4c Merge with temp tree to get David's gdb inferior calls patch 2005-05-17 15:53:14 -07:00
David Mosberger-Tang 7f9eaedf89 [IA64] Fix convert_to_non_syscall() so gdb inferior calls work again
Fix convert_to_non_syscall() so it arranges for the kernel to be left
via ia64_leave_kernel() rather than ia64_leave_syscall().  The latter
no longer tolerates being called with pSys=0 and pNonSys=1.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-17 14:07:10 -07:00
Russ Anderson bb68c12b40 [IA64-SGI] cpe interrupts are not being enabled.
acpi_request_vector() is called in ia64_mca_init() to get the cpe_vector.
The problem is that acpi_request_vector() looks in platform_intr_list[] to 
get the vector, but platform_intr_list[] is not initialized with a valid
vector until later (in sn_setup()).  Without a valid vector the code
defaults to polling mode.

This patch moves the call to acpi_request_vector() from ia64_mca_init()
to ia64_mca_late_init(), which is after platform_intr_list[] is initialized.

Signed-off-by: Russ Anderson (rja@sgi.com)
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-17 12:52:43 -07:00
David Mosberger-Tang 02a017a9f3 [IA64] Correct convert_to_non_syscall()
convert_to_non_syscall() has the same problem that unwind_to_user()
used to have.  Fix it likewise.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-17 12:33:15 -07:00
David Mosberger-Tang bfd6859408 [IA64] Avoid .spillpsp directive in handcoded assembly
Some time ago, GAS was fixed to bring the .spillpsp directive in line
with the Intel assembler manual (there was some disagreement as to
whether or not there is a built-in 16-byte offset).  Unfortunately,
there are two places in the kernel where this directive is used in
handwritten assembly files and those of course relied on the "buggy"
behavior.  As a result, when using a "fixed" assembler, the kernel
picks up the UNaT bits from the wrong place (off by 16) and randomly
sets NaT bits on the scratch registers.  This can be noticed easily by
looking at a coredump and finding various scratch registers with
unexpected NaT values.  The patch below fixes this by using the
.spillsp directive instead, which works correctly no matter what
assembler is in use.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-10 13:52:00 -07:00
David Mosberger-Tang 66302f211a [IA64] fix "section mismatch" compile-time-error
I noticed this typo when trying to compile a kernel which had
CONFIG_HOTPLUG turned off.  In that case, __devinit is no longer a
no-op and the compiler then detects a section-conflict.  Fix by using
__devinitdata instead of __devinit.

Same patch also submitted by Darren Williams to fix compilation error
using sim_defconfig (which has CONFIG_HOTPLUG=n).

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by:  Darren Williams <dsw@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-09 10:16:17 -07:00
David Mosberger-Tang 966dc11fcc [IA64] Fix stack placement when INIT hits in kernel mode.
Without this patch, the stack is placed _below_ the current task
structure, which is risky at best.

Tony, I think this patch needs to go into 2.6.12, since it fixes a
real bug.  Without it, INIT may case secondary errors, which would be
most unpleasant.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-06 10:16:07 -07:00
David Mosberger-Tang ebcc80c1b6 [IA64] Merge audit fix for fsyscalls with syscall-optimizations
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-05 11:30:48 -07:00
David Woodhouse bfd4bda097 Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git 2005-05-05 13:59:37 +01:00
Tony Luck a71f62edc9 [IA64] Fix two warnings introduced by perfmon patches.
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 16:21:45 -07:00
stephane eranian a5a70b75d9 [IA64] another perfmon fix (take2)
- pfm_context_load(): change return value from EINVAL to EBUSY
  when context is already loaded.

- pfm_check_task_state(): pass test if context state is MASKED.
  It is safe to give access on PFM_CTX_MASKED because the PMU
  state (PMD) is stable and saved in software state.
  This helps multiplexing programs such as the example given
  in libpfm-3.1.

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 15:47:58 -07:00
Stephane Eranian 8df5a500a3 [IA64] perfmon & PAL_HALT again
The pmu_active test is based on the value of PSR.up. THIS IS THE PROBLEM, as
it does not take into account the lazy restore logic, which is as follows (simplified):

context switch out:
	save PMDs
	clear psr.up
	release ownership

context switch in:
	if (ctx->last_cpu == smp_processor_id() && ctx->cpu_activation == cpu_activation) {
		set psr.up
		return
	}
	restore PMD
	restore PMC
	ctx->last_cpu   = smp_processor_id();
	ctx->activation = ++cpu_activation;
	set psr.up

The key here is that on context switch out, we clear psr.up and on context switch in
we check if nobody else used the PMU on that processor since last time we came. In
that case, we assume the PMD/PMC are ours and we simply reactivate.

The Caliper problem is that between the moment we context switch out and the moment we
come back, nobody effectively used the PMU BUT the processor went idle. Normally this
would have no impact, but PAL_HALT does alter the PMU registers.  In default_idle(),
the test on psr.up is not strong enough to cover this case and we go into PAL, which
trashes the PMU registers. When we come back we falsely assume that this is our state,
yet it is corrupted. Very nasty indeed.

To avoid the problem it is necessary to forbid going to PAL_HALT as soon as perfmon
installs some valid state in the PMU registers. This happens when an application
attaches a context to a thread or CPU. It is not enough to check the psr/dcr bits.
Hence I propose the attached patch. It adds a callback in process.c to modify the
condition for entering PAL on idle. Basically, now it is conditional on pal_halt=1 AND
perfmon saying it is okay.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 15:44:48 -07:00
Russ Anderson b1b901c202 [IA64] MCA recovery improvements
Jack Steiner uncovered some opportunities for improvement in
the MCA recovery code.

  1) Set bsp to save registers on the kernel stack.
  2) Disable interrupts while in the MCA recovery code.
  3) Change the way the user process is killed, to avoid 
     a panic in schedule.

Testing shows that these changes make the recovery code much 
more reliable with the 2.6.12 kernel.

Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 13:47:42 -07:00
David Woodhouse 446b8831f5 [IA64] fix ia64 syscall auditing
Attached is a patch against David's audit.17 kernel that adds checks
for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and
signal handling code paths.  The patch enables auditing of system
calls set up via fsys_bubble_down, as well as ensuring that
audit_syscall_exit() is called on return from sigreturn.

Neglecting to check for TIF_SYSCALL_AUDIT at these points results in
incorrect information in audit_context, causing frequent system panics
when system call auditing is enabled on an ia64 system.

I have tested this patch and have seen no problems with it.

[Original patch from Amy Griffis ported to current kernel by David Woodhouse]

From: Amy Griffis <amy.griffis@hp.com>
From: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Chris Wright <chrisw@osdl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 13:45:39 -07:00
Zwane Mwaikambo 7d5f9c0f10 [IA64] reduce cacheline bouncing in cpu_idle_wait
Andi noted that during normal runtime cpu_idle_map is bounced around a lot,
and occasionally at a higher frequency than the timer interrupt wakeup
which we normally exit pm_idle from.  So switch to a percpu variable.

I didn't move things to the slow path because it would involve adding
scheduler code to wake up the idle thread on the cpus we're waiting for.

Signed-off-by: Zwane Mwaikambo <zwane@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 13:40:18 -07:00
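A hedged sketch of the layout change being described; the variable and function names below are illustrative, not the patch's own:

/* before (illustrative): one shared word bounced between all idle CPUs */
static volatile unsigned int cpu_idle_map;

/* after: each CPU polls and clears only its own copy */
static DEFINE_PER_CPU(unsigned int, cpu_idle_state);

static void cpu_idle_wait_sketch(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		per_cpu(cpu_idle_state, cpu) = 1;	/* ask each CPU to report in */

	/* ... wake the CPUs and poll until every per-CPU flag drops back to 0 ... */
}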
Alex Williamson bb0fc08545 [IA64] use common pxm function
This patch simplifies a couple of places where we search for _PXM
values in ACPI namespace.  Thanks,

Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 13:33:18 -07:00
David Mosberger-Tang 9df6f705c0 [IA64] fix typos caught by new assembler
Patch below fixes 3 trivial typos which are caught by the new
assembler (v2.169.90).  Please apply.

[Note: fix to memcpy that was also part of this patch was separately
 applied from patches by H.J. and Andreas ... so the delta here only
 has the other two fixes. -Tony]

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03 10:56:42 -07:00
David Woodhouse 27b030d58c Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git 2005-05-03 08:14:09 +01:00
Jesper Juhl 7ed20e1ad5 [PATCH] convert code that currently tests _NSIG directly to use valid_signal()
Convert most of the current code that uses _NSIG directly to instead use
valid_signal().  This avoids gcc -W warnings and off-by-one errors.

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:59:14 -07:00
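The conversion pattern, for reference (valid_signal() lives in include/linux/signal.h):

/* before: open-coded comparison, prone to off-by-one mistakes */
if (sig > _NSIG)
	return -EINVAL;

/* after: the shared helper */
if (!valid_signal(sig))
	return -EINVAL;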
Stephen Rothwell 7d87e14c23 [PATCH] consolidate sys_shmat
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:59:12 -07:00
Amy Griffis 3ac3ed555b [PATCH] fix ia64 syscall auditing
Attached is a patch against David's audit.17 kernel that adds checks
for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and
signal handling code paths.  The patch enables auditing of system
calls set up via fsys_bubble_down, as well as ensuring that
audit_syscall_exit() is called on return from sigreturn.

Neglecting to check for TIF_SYSCALL_AUDIT at these points results in
incorrect information in audit_context, causing frequent system panics
when system call auditing is enabled on an ia64 system.

Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2005-04-29 16:12:55 +01:00
2fd6f58ba6 [AUDIT] Don't allow ptrace to fool auditing, log arch of audited syscalls.
We were calling ptrace_notify() after auditing the syscall and arguments,
but the debugger could have _changed_ them before the syscall was actually
invoked. Reorder the calls to fix that.

While we're touching every call to audit_syscall_entry(), we also make it
take an extra argument: the architecture of the syscall which was made,
because some architectures allow more than one type of syscall.

Also add an explicit success/failure flag to audit_syscall_exit(), for
the benefit of architectures which return that in a condition register
rather than only returning a single register.

Change type of syscall return value to 'long' not 'int'.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2005-04-29 16:08:28 +01:00
David Mosberger-Tang 8e3e50168c [IA64] need r29=psr *after* rsm psr.i
Yanmin Zhang pointed out a sequence problem when saving the psr.  David
Mosberger provided this patch (which gave up a cycle).

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:22:40 -07:00
David Mosberger-Tang e7e965fa19 [IA64] use srlz.d instead of srlz.i in ia64_leave_kernel()
This patch switches the srlz.i in ia64_leave_kernel() to srlz.d.  As
per architecture manual, the former is needed only to ensure that the
clearing of PSR.IC is seen by the VHPT for subsequent instruction
fetches.  However, since the remainder of the code (up to and
including the RFI instruction) is mapped by a pinned TLB entry, there
is no chance of an iTLB miss and we don't care whether or not the VHPT
sees PSR.IC cleared.  Since srlz.d is substantially cheaper than
srlz.i, this should shave off a few cycles off the interrupt path
(unverified though; I'm not set up to measure this at the moment).

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:22:08 -07:00
David Mosberger-Tang fbf7192ba0 [IA64] Annotate fsys_bubble_down() with McKinley dispatch info.
This patch changes comments & formatting only.  There is no code
change.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:21:26 -07:00
David Mosberger-Tang 1ba7be7d69 [IA64] Reschedule fsys_bubble_down().
Improvements come from eliminating srlz.i, not scheduling AR/CR-reads
too early (while there are others still pending), scheduling the
backing-store switch as well as possible, and splitting the BBB bundle
into a MIB/MBB pair.

Why is it safe to eliminate the srlz.i?  Observe
that we used to clear bits ~PSR_PRESERVED_BITS in PSR.L.  Since
PSR_PRESERVED_BITS==PSR.{UP,MFL,MFH,PK,DT,PP,SP,RT,IC}, we
ended up clearing PSR.{BE,AC,I,DFL,DFH,DI,DB,SI,TB}.  However,

 PSR.BE : already is turned off in __kernel_syscall_via_epc()
 PSR.AC : don't care (kernel normally turns PSR.AC on)
 PSR.I  : already turned off by the time fsys_bubble_down gets invoked
 PSR.DFL: always 0 (kernel never turns it on)
 PSR.DFH: don't care --- kernel never touches f32-f127 on its own
	  initiative
 PSR.DI : always 0 (kernel never turns it on)
 PSR.SI : always 0 (kernel never turns it on)
 PSR.DB : don't care --- kernel never enables kernel-level breakpoints
 PSR.TB : must be 0 already; if it wasn't zero on entry to
	  __kernel_syscall_via_epc, the branch to fsys_bubble_down
	  will trigger a taken branch; the taken-trap-handler then
	  converts the syscall into a break-based system-call.

In other words: all the bits we're clearing are either 0 already or
are don't cares!  Thus, we don't have to write PSR.L at all and we
don't have to do a srlz.i either.

Good for another ~20 cycle improvement for EPC-based heavy-weight
syscalls.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:20:51 -07:00
David Mosberger-Tang 21bc4f9b34 [IA64] Annotate __kernel_syscall_via_epc() with McKinley dispatch info.
Two other very minor changes: use "mov.i" instead of "mov" for reading
ar.pfs (for clarity; doesn't affect the code at all).  Also, predicate
the load of r14 for consistency.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:20:11 -07:00
David Mosberger-Tang 70929a57cf [IA64] Reschedule __kernel_syscall_via_epc().
Avoid some stalls, which is good for about 2 cycles when invoking a
light-weight handler.  When invoking a heavy-weight handler, this
helps by about 7 cycles, with most of the improvement coming from the
improved branch-prediction achieved by splitting the BBB bundle into
two MIB bundles.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:19:37 -07:00
David Mosberger-Tang f8fa5448fc [IA64] Reschedule break_fault() for better performance.
This patch reorganizes break_fault() to optimistically assume that a
system-call is being performed from user-space (which is almost always
the case).  If it turns out that (a) we're not being called due to a
system call or (b) we're being called from within the kernel, we fix up
the no-longer-valid assumptions in non_syscall() and .break_fixup(),
respectively.

With this approach, there are 3 major phases:

 - Phase 1: Read various control & application registers, in
	    particular the current task pointer from AR.K6.
 - Phase 2: Do all memory loads (load system-call entry,
	    load current_thread_info()->flags, prefetch
	    kernel register-backing store) and switch
	    to kernel register-stack.
 - Phase 3: Call ia64_syscall_setup() and invoke
	    syscall-handler.

Good for 26-30 cycles of improvement on break-based syscall-path.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:19:04 -07:00
David Mosberger-Tang c03f058fbf [IA64] In ia64_leave_syscall(), fix comments and whitespace only.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:18:22 -07:00
David Mosberger-Tang 87e522a0f7 [IA64] Schedule ia64_leave_syscall() to read ar.bsp earlier
Reschedule code to read ar.bsp as early as possible.  To enable this,
don't bother clearing some of the registers when we're returning to
kernel stacks.  Also, instead of trying to support the pNonSys case
(which makes no sense), do a bugcheck (with break 0).  Finally,
remove a clear of r14 which is a left-over from the previous patch.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:17:44 -07:00
David Mosberger-Tang 060561ff79 [IA64] In syscall-entry, use st8 instead of stf8 to clear pt_regs.r8
Using stf8 seemed like a clever idea at the time, but stf8 forces
the cache-line to be invalidated in the L1D (if it happens to be
there already).  This patch eliminates a guaranteed L1D cache-miss
and, by itself, is good for a 1-2 cycle improvement for heavy-weight
syscalls.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:17:03 -07:00
David Mosberger-Tang 96e017495e [IA64] On return from syscall, hint b7 with __kernel_syscall_via_epc().
Why is this a good idea?  Clearing b7 to 0 is guaranteed to do us no
good and writing it with __kernel_syscall_via_epc() yields a 6 cycle
improvement _if_ the application performs another EPC-based system-
call without overwriting b7, which is not all that uncommon.  Well
worth the minimal cost of 1 bundle of code.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:16:07 -07:00
David Mosberger-Tang 3c79c8b1d9 [IA64] Schedule fp-clearing insns at least 6 cycles after reading ar.bsp.
Decreases syscall overhead by approximately 6 cycles.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:15:13 -07:00
David Mosberger-Tang 9ec1a7ad43 [IA64] Use dynamic prediction for RSE-clearing branches.
This by itself is good for a 1-2 cycle speed up.  Effect is bigger
when combined with the later patches.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:13:33 -07:00
David Mosberger-Tang 06ef660816 [IA64] __ia64_syscall() is no longer used anywhere in the kernel. Remove it.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27 21:10:45 -07:00
Kenji Kaneshige b9e41d7fb6 [IA64] iosapic.c: typo ... s/spin_unlock_irq/spin_unlock/
The vector sharing patch had a typo ... it mismatched spin_lock() with
a spin_unlock_irq().  Fix from Kenji Kaneshige.
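
The bug class in a nutshell (sketch; the lock name is assumed from iosapic.c):

	spin_lock(&iosapic_lock);
	/* ... update RTE bookkeeping ... */
	spin_unlock(&iosapic_lock);	/* must pair with spin_lock(); using
					 * spin_unlock_irq() here would also
					 * re-enable interrupts that this
					 * path never disabled */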

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:27:48 -07:00
Tony Luck e1ed81ab7a [IA64] print "siblings" before {physical,core,thread} id
Rohit and Suresh changed their mind about the order to print things
in /proc/cpuinfo, but didn't include the change in the version of
the patch they sent to me.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:27:12 -07:00
Kenji Kaneshige 24eeb568ae [IA64] vector sharing (Large I/O system support)
Current ia64 linux cannot handle more than 184 interrupt sources
because of the lack of vectors. The following patch enables ia64 linux
to handle more than 184 interrupt sources by allowing the same
vector number to be shared by multiple IOSAPIC RTEs. The design of
this patch is based on the "Intel(R) Itanium(R) Processor Family Interrupt
Architecture Guide".

Even if you don't have a large I/O system, you can see the behavior of
vector sharing by changing IOSAPIC_LAST_DEVICE_VECTOR to a smaller value.
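
Conceptually, the per-vector bookkeeping becomes a list of RTEs rather than a
single entry, roughly like this (field names approximate the patch, not
verbatim):

	#include <linux/list.h>

	/* one entry per IOSAPIC RTE programmed with a given vector */
	struct iosapic_rte_info {
		struct list_head rte_list;	/* node on the vector's RTE list */
		unsigned int	 rte_index;	/* RTE number within its IOSAPIC */
	};

	/* per-vector state: all RTEs currently sharing this vector */
	struct iosapic_intr_info {
		struct list_head rtes;
	};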

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:26:23 -07:00
Suresh Siddha e927ecb05e [IA64] multi-core/multi-thread identification
Version 3 - rediffed to apply on top of Ashok's hotplug cpu
patch.  /proc/cpuinfo output in step with x86.

This is an updated MC/MT identification patch based on the 
previous discussions on list. 

Add the Multi-core and Multi-threading detection for IPF.
  - Add new core and threading related fields in /proc/cpuinfo.
		Physical id
		Core id
		Thread id
		Siblings
  - Set up the cpu_core_map and cpu_sibling_map appropriately (see the
    sketch below)
  - Handle CPU hot-plug
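
A rough illustration of the map setup described above (field and map names
follow the usual topology conventions and are not taken verbatim from the
patch):

	/* CPUs i and j in the same physical package share a core map; if
	 * their core ids also match they are thread (SMT) siblings */
	if (cpu_data(i)->socket_id == cpu_data(j)->socket_id) {
		cpu_set(j, cpu_core_map[i]);
		if (cpu_data(i)->core_id == cpu_data(j)->core_id)
			cpu_set(j, cpu_sibling_map[i]);
	}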
 
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Gordon Jin <gordon.jin@intel.com>
Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:25:06 -07:00
David Mosberger-Tang a37d98f6a9 [IA64] fix syscall-optimization goof
Sadly, I goofed in this syscall-tuning patch:

ChangeSet 1.1966.1.40 2005/01/22 13:31:05 davidm@hpl.hp.com
  [IA64] Improve ia64_leave_syscall() for McKinley-type cores.

  Optimize ia64_leave_syscall() a bit better for McKinley-type cores.
  The patch looks big, but that's mostly due to renaming r16/r17 to r2/r3.
  Good for a 13 cycle improvement.

The problem is that the size of the physical stacked registers was
loaded into the wrong register (r3 instead of r17).  Since r17 by
coincidence always had the value 1, this had the effect of turning
rse_clear_invalid into a no-op.  That poses the risk of leaking kernel
state back to user-land and is hence not acceptable.

The fix below is simple, but unfortunately it costs us about 28 cycles
in syscall overhead. ;-(

Unfortunately, there isn't much we can do about that since those
registers have to be cleared one way or another.

	--david

Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:20:38 -07:00
Stephane Eranian 4944930ab7 [IA64] perfmon: make pfm_sysctl a global, and other cleanup
- make pfm_sysctl a global such that it is possible
  to enable/disable debug printk in sampling formats
  using PFM_DEBUG.

- remove unused pfm_debug_var variable

- fix a bug in pfm_handle_work where a BUG_ON() could
  be triggered. There is a path where pfm_handle_work()
  can be called with interrupts enabled, i.e., when
  TIF_NEED_RESCHED is set. The fix corrects the masking
  and unmasking of interrupts in pfm_handle_work() such
  that we restore the interrupt mask as it was upon entry.
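
The intended behaviour, in sketch form (not the exact perfmon code):

	unsigned long flags;

	/* remember whether interrupts were enabled on entry ... */
	local_irq_save(flags);

	/* ... do the deferred work ... */

	/* ... then restore exactly that state instead of unconditionally
	 * re-enabling interrupts */
	local_irq_restore(flags);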

Signed-off-by: Stephane Eranian <eranian@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:08:30 -07:00
David Mosberger-Tang 30325d1771 [IA64] speed up syscall path a bit more
Recently I noticed that clearing ar.ssd/ar.csd right before srlz.d is
causing significant stalling in the syscall path.  The patch below
fixes that by moving the register-writes after srlz.d.  On a Madison,
this drops break-based getpid() from 241 to 226 cycles (-15 cycles).

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 13:03:16 -07:00
Keith Owens e8d1cb2f28 [IA64] Tighten up unw_unwind_to_user check
Detect user space by the unwind frame with predicate PRED_USER_STACK
set, instead of a user space IP.  Tighten up the last-ditch check for
running off the top of the kernel stack.
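
The check boils down to something like this (sketch from memory of the
unwinder API, not verbatim):

	unsigned long pr;

	/* stop at an interrupt frame whose saved predicates say the
	 * interrupted context was user space, instead of trusting a
	 * user-looking IP */
	if (unw_get_pr(info, &pr) == 0 &&
	    unw_is_intr_frame(info) &&
	    (pr & (1UL << PRED_USER_STACK)))
		return 0;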

Based on a suggestion by David Mosberger, reworked to fit the current
tree.  This survives my stress test which used to break 2.6.9 kernels.
Unlike 2.6.11, the stress test now unwinds to the correct point, so
gdb can get the user space registers.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 11:45:26 -07:00
David Mosberger-Tang 8297511530 [IA64] add missing cpu_relax() in ITC syncing code
Call cpu_relax() in busy-waiting loops of the ITC-syncing code.

Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25 11:44:02 -07:00
Ashok Raj df6c6804ce [IA64] Fix build errors for !HOTPLUG case.
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-22 14:46:24 -07:00
Ashok Raj b8d8b883e6 [IA64] cpu hotplug: return offlined cpus to SAL
This patch is required to support cpu removal for IPF systems. Existing code
just fakes the real offline by keeping it running the idle thread, and polling
for the bit to re-appear in the cpu_state to get out of the idle loop.

For the cpu-offline to work correctly, we need to pass control of this CPU 
back to SAL so it can continue in the boot-rendez mode. This lets SAL avoid
picking this cpu as the monarch processor for global MCA events, and in
addition SAL does not wait for this cpu to check in for global MCA
events. The handoff is implemented as documented in
SAL specification section 3.2.5.1, "OS_BOOT_RENDEZ to SAL return State".

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-22 14:44:40 -07:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00