A single 'movl' is shorter than the 'xorl'-'orl' pair.
No change in behaviour.
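As an illustration only (the operands here are made up, not the actual
patched site), the pattern being replaced looks like:

        # before: 7 bytes to load a 32-bit constant
        xorl    %eax, %eax              # %eax = 0
        orl     $0x80000000, %eax       # %eax |= 0x80000000

        # after: 5 bytes, same result
        movl    $0x80000000, %eax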
Signed-off-by: Alexander Potashev <aspotashev@gmail.com>
LKML-Reference: <1256341043-4928-1-git-send-email-aspotashev@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This has the consequence of changing the section name used for head
code from ".text.head" to ".head.text".
Linus suggested that we merge the ".text.head" section with ".text"
(presumably while preserving the fact that the head code starts at 0).
When I tried this it caused the kernel to not boot.
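For reference, head code now selects its section via the __HEAD macro
from <linux/init.h>; a minimal sketch (the body shown is illustrative):

        #include <linux/init.h>         /* __HEAD */
        #include <linux/linkage.h>      /* ENTRY/ENDPROC */

                __HEAD                  /* .section ".head.text","ax" */
        ENTRY(startup_32)
                jmp     startup_32      /* placeholder body */
        ENDPROC(startup_32)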
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Make the kernel_alignment field adjustable; this allows us to set it
to a large value (intended to be 16 MB to avoid ZONE_DMA contention,
memory holes and other weirdness) while a smart bootloader can still
force loading at a lesser alignment if absolutely necessary.
Also export pref_address (preferred loading address, corresponding to
the link-time address) and init_size, the total amount of linear
memory the kernel will require during initialization.
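In the setup header (arch/x86/boot/header.S) the affected fields end up
looking roughly like this (comments are mine):

        kernel_alignment: .long CONFIG_PHYSICAL_ALIGN  # now adjustable
        pref_address:     .quad LOAD_PHYSICAL_ADDR     # preferred load address
        init_size:        .long INIT_SIZE              # linear memory needed to initialize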
[ Impact: allows better kernel placement, gives bootloader more info ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Remove a couple of lines of dead code from
arch/x86/boot/compressed/head_*.S; all of these update registers that
are dead in the current code.
[ Impact: cleanup ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Use LOAD_PHYSICAL_ADDR instead of CONFIG_PHYSICAL_START in the 64-bit
decompression code, for equivalence with the 32-bit code.
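For reference, LOAD_PHYSICAL_ADDR (from <asm/boot.h>) is just
CONFIG_PHYSICAL_START rounded up to the configured alignment:

        #define LOAD_PHYSICAL_ADDR ((CONFIG_PHYSICAL_START \
                                + (CONFIG_PHYSICAL_ALIGN - 1)) \
                                & ~(CONFIG_PHYSICAL_ALIGN - 1))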
[ Impact: cleanup, increases code similarity ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Determine the compressed code offset (from the kernel runtime address)
at compile time. This allows some minor optimizations in
arch/x86/boot/compressed/head_*.S, but more importantly it makes this
value available to the build process, which will enable a future patch
to export the necessary linear memory footprint into the bzImage
header.
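The value amounts to the worst-case in-place expansion; a sketch of the
idea (the symbol names and the slack term are illustrative, not the
actual build machinery):

        # minimum safe distance between the kernel's runtime address and
        # the compressed payload: the size growth during decompression
        # plus some compressor-dependent slack
        .set    extract_offset, (uncompressed_size - compressed_size) + slack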
[ Impact: cleanup, future patch enabling ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
In the pre-decompression code, use rep movs and rep stos with the
largest possible operand size to move code and clear the bss,
respectively. For the reverse copy, note that the initial pointers are
supposed to be the address of the first (highest) copy datum, not one
byte beyond the end of the buffer.
rep strings are not necessarily the fastest way to perform these
operations on all current processors, but are likely to be in the
future, and perhaps more importantly, we want to encourage the
architecturally right thing to do here.
This also fixes a couple of trivial inefficiencies on 64 bits.
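Concretely, the 64-bit reverse copy ends up looking roughly like this
(a sketch; the compressed image links at 0, so $_bss doubles as the
byte count):

                leaq    (_bss-8)(%rip), %rsi    # highest qword of the image
                leaq    (_bss-8)(%rbx), %rdi    # same offset at the safe address
                movq    $_bss, %rcx             # bytes to move...
                shrq    $3, %rcx                # ...as a qword count
                std                             # copy downwards
                rep     movsq
                cld                             # restore the direction flag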
[ Impact: trivial performance enhancement, increase code similarity ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Set up the decompression stack as soon as we know where it needs to
go. That way we have a full-service stack as soon as possible, rather
than relying on the BP_scratch field.
Note that the stack does need to be empty during bss zeroing (or else
the stack needs to be moved out of the bss segment, which is also an
option).
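A sketch of the early setup (32-bit flavour; the stack is reserved in
bss and is therefore empty while bss is being zeroed):

                leal    boot_stack_end(%ebx), %esp      # full-service stack

        .bss
                .balign 4
        boot_stack:
                .fill   BOOT_STACK_SIZE, 1, 0
        boot_stack_end: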
[ Impact: cleanup, minor paranoia ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
On both 32 and 64 bits, we copy all the way up to the end of bss,
except that on 64 bits there is a hack to avoid copying on top of the
page tables. There is no point in copying bss at all, especially
since we are just about to zero it all anyway.
To clean up and unify the handling, we now do:
- copy from startup_32 to _bss.
- zero from _bss to _ebss.
- the _ebss symbol is aligned to an 8-byte boundary.
- the page tables are moved to a separate section.
Use _bss as the copy endpoint since _edata may be misaligned.
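The resulting clear loop is then trivial (64-bit flavour shown; because
_ebss is 8-byte aligned, a bare rep stosq suffices):

                xorl    %eax, %eax              # zero value
                leaq    _bss(%rip), %rdi
                leaq    _ebss(%rip), %rcx
                subq    %rdi, %rcx              # byte count
                shrq    $3, %rcx                # qword count
                rep     stosq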
[ Impact: cleanup, trivial performance improvement ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Clean up style issues in arch/x86/boot/compressed/head_64.S. This
file had a lot fewer style issues than its 32-bit cousin, but the ones
it has are worth fixing, especially since it makes the two files more
similar.
[ Impact: cleanup, no object code change ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Impact: cleanup
The linker script places startup_32 at a predefined address, so using
ENTRY will not bloat the code size.
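For reference, ENTRY (from <linux/linkage.h>) is:

        #define ENTRY(name) \
                .globl name; \
                ALIGN; \
                name:

The ALIGN is what could otherwise pad the image; with startup_32 pinned
by the linker script, no padding is emitted.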
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
In general, the only definitions that assembly files can use
are in _types.h headers (where available), so convert them.
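For example, an assembly file can safely pull in constants like
PAGE_SIZE this way, without dragging in C-only declarations:

        #include <asm/page_types.h>     /* PAGE_SIZE, PAGE_SHIFT, ... */
        #include <asm/pgtable_types.h>  /* page-table bit definitions */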
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
We should use the already-defined flags from processor-flags.h instead
of defining our own.
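The kind of substitution meant, as a sketch (the private macro shown is
hypothetical):

        #include <asm/processor-flags.h>

        /* before: a locally defined constant
         * #define CR4_PAE 0x20
         */

        /* after: the shared definition (X86_CR4_PAE == 1 << 5) */
                movl    $(X86_CR4_PAE), %eax
                movl    %eax, %cr4              # enable PAE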
[>>> object code check >>>]
original
md5sum: 129f24be6df396fb7d8bf998c01fc716 arch/x86/boot/compressed/head_64.o
   text    data     bss     dec     hex filename
    705      48   45056   45809    b2f1 arch/x86/boot/compressed/head_64.o
patched
md5sum: 129f24be6df396fb7d8bf998c01fc716 arch/x86/boot/compressed/head_64-new.o
   text    data     bss     dec     hex filename
    705      48   45056   45809    b2f1 arch/x86/boot/compressed/head_64-new.o
[<<< object code check <<<]
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cleanup: change the handling of _end in the compressed vmlinux_64.lds;
also change _heap to _ebss, since a separate _heap symbol is not needed.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The kernel decompressor wrapper uses memory located beyond the
end of the image. This might lead to hard-to-debug problems,
but even if it can be proven to be safe, it is at the very
least unclean. I don't see any advantages either, unless you
count it not being zeroed out as an advantage. This patch
moves the boot-heap area to the bss segment.
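A sketch of the move (32-bit flavour; BOOT_HEAP_SIZE/BOOT_STACK_SIZE
give the sizes):

        .bss
        /* heap and stack for the decompressor, now zeroed along with
         * the rest of bss instead of living past the image's end */
                .balign 4
        boot_heap:
                .fill   BOOT_HEAP_SIZE, 1, 0
        boot_stack:
                .fill   BOOT_STACK_SIZE, 1, 0
        boot_stack_end: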
Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The kernel only ever supports one version of the boot protocol,
so there is no need to check the boot protocol revision to
see if a feature is supported.
Both x86 and x86_64 support the same boot protocol, so we need
to implement KEEP_SEGMENTS on x86_64 as well. It isn't
just paravirt bootloaders that could use this functionality.
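A sketch of the check at entry (KEEP_SEGMENTS is bit 6 of
boot_params.hdr.loadflags; BP_loadflags comes from asm-offsets):

                testb   $(1 << 6), BP_loadflags(%esi)   # KEEP_SEGMENTS set?
                jnz     1f                              # yes: trust the loader
                cli
                movl    $(__BOOT_DS), %eax              # no: reload flat data
                movl    %eax, %ds                       # segments ourselves
                movl    %eax, %es
                movl    %eax, %ss
        1: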
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@suse.de>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>