2005-09-26 14:04:21 +08:00
#
# Makefile for ppc-specific library files..
#

2009-06-10 04:48:51 +08:00
subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror

2010-09-23 04:51:09 +08:00
ccflags-$(CONFIG_PPC64) := -mno-minimal-toc

2008-11-27 04:54:46 +08:00
CFLAGS_REMOVE_code-patching.o = -pg
CFLAGS_REMOVE_feature-fixups.o = -pg

2007-09-21 08:16:20 +08:00
obj-y := string.o alloc.o \
2010-06-30 04:08:42 +08:00
	checksum_$(CONFIG_WORD_SIZE).o crtsavres.o
obj-$(CONFIG_PPC32) += div64.o copy_32.o
2008-05-12 20:57:51 +08:00
obj-$(CONFIG_HAS_IOMEM) += devres.o

2007-09-21 08:16:20 +08:00
obj-$(CONFIG_PPC64) += copypage_64.o copyuser_64.o \
2010-08-03 04:09:52 +08:00
	memcpy_64.o usercopy_64.o mem_64.o string.o \

powerpc: POWER7 optimised copy_to_user/copy_from_user using VMX

Implement a POWER7 optimised copy_to_user/copy_from_user using VMX.
For large aligned copies this new loop is over 10% faster, and for
large unaligned copies it is over 200% faster.

If we take a fault we fall back to the old version; this keeps
things relatively simple and easy to verify.

On POWER7 unaligned stores rarely slow down - they only flush when
a store crosses a 4KB page boundary, and that flush is handled
completely in hardware and should take around 20-30 cycles. Unaligned
loads, on the other hand, flush much more often - whenever a load
crosses a 128 byte cache line, or a 32 byte sector if either sector
is an L1 miss.

Given this, we really want to get the loads aligned and not worry
about the alignment of the stores. Microbenchmarks confirm that this
approach is much faster than the current unaligned copy loop, which
uses shifts and rotates to keep both loads and stores aligned.

We also want to try to do the stores in cacheline-aligned,
cacheline-sized chunks. If the store queue is unable to merge an
entire cacheline of stores then the L2 cache will have to do a
read/modify/write. Even worse, we will serialise this with the stores
in the next iteration of the copy loop, since both iterations hit the
same cacheline.

Based on this, the new loop does the following things:

1 - 127 bytes
Get the source 8 byte aligned and use 8 byte loads and stores. Pretty
boring and similar to how the current loop works.

128 - 4095 bytes
Get the source 8 byte aligned and use 8 byte loads and stores,
1 cacheline at a time. We aren't doing the stores in cacheline
aligned chunks so we will potentially serialise once per cacheline.
Even so it is much better than the loop we have today.

4096 bytes and up
If both source and destination have the same alignment get them both
16 byte aligned, then get the destination cacheline aligned. Do
cacheline sized loads and stores using VMX.

If source and destination do not have the same alignment, we get the
destination cacheline aligned, and use permute to do aligned loads.

In both cases the VMX loop should be optimal - we always do aligned
loads and stores and are always doing stores in cacheline aligned,
cacheline sized chunks.

To be able to use VMX we must be careful about interrupts and
sleeping. We don't use the VMX loop when in an interrupt (which
should be rare anyway), we wrap the VMX loop in
pagefault_disable()/pagefault_enable(), and we fall back to the
existing copy_tofrom_user loop if we do need to sleep.
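
To make that shape concrete, here is a minimal C sketch of the guard
around the VMX path (illustrative only, not the actual copyuser_power7
code: vmx_copy_loop() and plain_copy_tofrom_user() are placeholder
names, while in_interrupt(), preempt_disable()/preempt_enable(),
enable_kernel_altivec() and pagefault_disable()/pagefault_enable() are
the standard kernel interfaces referred to above; kernel context and
the usual uaccess/preempt headers are assumed):

	/*
	 * Illustrative sketch only.  Both helpers are assumed to return
	 * the number of bytes NOT copied, like copy_tofrom_user().
	 */
	static unsigned long copy_user_vmx_sketch(void __user *to,
						  const void *from,
						  unsigned long n)
	{
		unsigned long left;

		/* Small copies and interrupt context never touch VMX. */
		if (n < 4096 || in_interrupt())
			return plain_copy_tofrom_user(to, from, n);

		preempt_disable();
		enable_kernel_altivec();  /* save user VMX state, enable kernel VMX */

		pagefault_disable();      /* a fault that would sleep fails fast */
		left = vmx_copy_loop(to, from, n);
		pagefault_enable();

		preempt_enable();

		/* Took a fault: redo the copy with the old non-VMX loop. */
		if (left)
			left = plain_copy_tofrom_user(to, from, n);
		return left;
	}

The point of the pagefault_disable() window is that a fault inside the
VMX loop cannot sleep; it fails immediately, we drop out cleanly, and
the plain loop (which can sleep) redoes the copy.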

The VMX breakpoint of 4096 bytes was chosen using this microbenchmark:

http://ozlabs.org/~anton/junkcode/copy_to_user.c

Since we are using VMX, and there is a cost to saving and restoring
the user VMX state, there are two broad cases we need to benchmark:

- Best case - userspace never uses VMX
- Worst case - userspace always uses VMX

In reality a userspace process will sit somewhere between these two
extremes. Since we need to test both aligned and unaligned copies we
end up with 4 combinations. The point at which the VMX loop begins to
win is:

0% VMX
	aligned		2048 bytes
	unaligned	2048 bytes

100% VMX
	aligned		16384 bytes
	unaligned	8192 bytes

Considering that this is a microbenchmark, that the data is hot in
cache, and that the VMX loop has better store queue merging
properties, we set the breakpoint to 4096 bytes, a little below the
unaligned breakpoints.

Some future optimisations we can look at:

- Looking at the perf data, a significant part of the cost when a
task is always using VMX is the extra exception we take to restore
the VMX state. As such we should do something similar to the x86
optimisation that restores FPU state for heavy users, i.e.:

	/*
	 * If the task has used fpu the last 5 timeslices, just do a full
	 * restore of the math state immediately to avoid the trap; the
	 * chances of needing FPU soon are obviously high now
	 */
	preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;

  and

	/*
	 * fpu_counter contains the number of consecutive context switches
	 * that the FPU is used. If this is over a threshold, the lazy fpu
	 * saving becomes unlazy to save the trap. This is an unsigned char
	 * so that after 256 times the counter wraps and the behavior turns
	 * lazy again; this to deal with bursty apps that only use FPU for
	 * a short time
	 */
- We could create a paca bit to mirror the VMX enabled MSR bit and check
that first, avoiding multiple calls to enable_kernel_altivec.
That should help with iovec based system calls like readv.
- We could have two VMX breakpoints, one for when we know the user VMX
state is loaded into the registers and one when it isn't. This could
be a second bit in the paca so we can calculate the breakpoints quickly.
- One suggestion from Ben was to save and restore the VSX registers
we use inline instead of using enable_kernel_altivec.
[BenH: Fixed a problem with preempt and fixed build without CONFIG_ALTIVEC]
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

2011-12-08 04:11:45 +08:00
	checksum_wrappers_64.o hweight_64.o \
2012-05-31 04:22:09 +08:00
	copyuser_power7.o string_64.o copypage_power7.o \
	memcpy_power7.o

powerpc: Emulate most Book I instructions in emulate_step()

This extends the emulate_step() function to handle a large proportion
of the Book I instructions implemented on current 64-bit server
processors. The aim is to handle all the load and store instructions
used in the kernel, plus all of the instructions that appear between
l[wd]arx and st[wd]cx., so this handles the Altivec/VMX lvx and stvx
and the VSX lxvd2x and stxvd2x instructions (implemented in POWER7).

The new code can emulate user mode instructions, and checks the
effective address for a load or store if the saved state is for
user mode. It doesn't handle little-endian mode at present.

For floating-point, Altivec/VMX and VSX instructions, it checks
that the saved MSR has the enable bit for the relevant facility
set, and if so, assumes that the FP/VMX/VSX registers contain
valid state, and does loads or stores directly to/from the
FP/VMX/VSX registers, using assembly helpers in ldstfp.S.
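
As a rough illustration of that facility check (an assumed sketch, not
the exact sstep.c code), the VMX load/store case might gate itself like
this, where regs is the saved pt_regs for the instruction being
emulated:

	/* Only emulate lvx/stvx if the saved MSR shows the VMX facility
	 * was enabled, i.e. the VMX registers hold live user state. */
	if (!(regs->msr & MSR_VEC))
		return 0;	/* not emulated; caller takes the normal path */
	/* ... perform the access via the assembly helpers in ldstfp.S ... */
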
Instructions supported now include:
* Loads and stores, including some but not all VMX and VSX instructions,
and lmw/stmw
* Atomic loads and stores (l[dw]arx, st[dw]cx.)
* Arithmetic instructions (add, subtract, multiply, divide, etc.)
* Compare instructions
* Rotate and mask instructions
* Shift instructions
* Logical instructions (and, or, xor, etc.)
* Condition register logical instructions
* mtcrf, cntlz[wd], exts[bhw]
* isync, sync, lwsync, ptesync, eieio
* Cache operations (dcbf, dcbst, dcbt, dcbtst)

The overflow-checking arithmetic instructions are not included, but
they appear never to be used in C code.

This uses decimal values for the minor opcodes in the switch statements
because that is what the Power ISA specification uses, so it is easier
to check that they are correct when they are written in decimal.
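
For example (an illustrative skeleton, not sstep.c itself), the
extended opcode of an opcode-31 instruction sits in bits 21-30, and
writing the case labels in decimal lets them be checked directly
against the ISA tables:

	/* Illustrative decode skeleton.  The X-form extended opcodes
	 * below (and = 28, or = 444, xor = 316) come from the ISA tables. */
	enum { OP_UNKNOWN, OP_AND, OP_OR, OP_XOR };

	static int decode_op31(unsigned int instr)
	{
		if ((instr >> 26) != 31)		/* primary opcode */
			return OP_UNKNOWN;

		switch ((instr >> 1) & 0x3ff) {		/* extended opcode */
		case 28:				/* and */
			return OP_AND;
		case 444:				/* or  */
			return OP_OR;
		case 316:				/* xor */
			return OP_XOR;
		}
		return OP_UNKNOWN;
	}

In the real function this switch has to cover far more cases, which is
why matching the decimal numbering of the specification matters.
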
If this is used to single-step an instruction where a data breakpoint
interrupt occurred, then there is the possibility that the instruction
is a lwarx or ldarx. In that case we have to be careful not to lose the
reservation until we get to the matching st[wd]cx., or we'll never make
forward progress. One alternative is to try to arrange that we can
return from interrupts and handle data breakpoint interrupts without
losing the reservation, which means not using any spinlocks, mutexes,
or atomic ops (including bitops). That seems rather fragile. The
other alternative is to emulate the larx/stcx and all the instructions
in between. This is why this commit adds support for a wide range
of integer instructions.
Signed-off-by: Paul Mackerras <paulus@samba.org>

2010-06-15 12:48:58 +08:00
obj-$(CONFIG_XMON) += sstep.o ldstfp.o
obj-$(CONFIG_KPROBES) += sstep.o ldstfp.o
2010-06-15 14:05:19 +08:00
obj-$(CONFIG_HAVE_HW_BREAKPOINT) += sstep.o ldstfp.o

2005-10-27 14:45:38 +08:00
ifeq ($(CONFIG_PPC64),y)
obj-$(CONFIG_SMP) += locks.o
2012-05-30 03:31:24 +08:00
obj-$(CONFIG_ALTIVEC) += vmx-helper.o
2005-10-27 14:45:38 +08:00
endif

2007-09-16 18:53:25 +08:00
obj-$(CONFIG_PPC_LIB_RHEAP) += rheap.o

2008-06-24 09:32:21 +08:00
obj-y += code-patching.o
2008-06-24 09:32:36 +08:00
obj-y += feature-fixups.o
2008-06-24 09:33:03 +08:00
obj-$(CONFIG_FTR_FIXUP_SELFTEST) += feature-fixups-test.o