Merge branch 'akpm' (second patchbomb from Andrew Morton)

Merge more incoming from Andrew Morton:
 "Two new syscalls:

     memfd_create in "shm: add memfd_create() syscall"
     kexec_file_load in "kexec: implementation of new syscall kexec_file_load"

  And:

   - Most (all?) of the rest of MM

   - Lots of the usual misc bits

   - fs/autofs4

   - drivers/rtc

   - fs/nilfs

   - procfs

   - fork.c, exec.c

   - more in lib/

   - rapidio

   - Janitorial work in filesystems: fs/ufs, fs/reiserfs, fs/adfs,
     fs/cramfs, fs/romfs, fs/qnx6.

   - initrd/initramfs work

   - "file sealing" and the memfd_create() syscall, in tmpfs

   - add pci_zalloc_consistent, use it in lots of places

   - MAINTAINERS maintenance

   - kexec feature work"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (193 commits)
  MAINTAINERS: update nomadik patterns
  MAINTAINERS: update usb/gadget patterns
  MAINTAINERS: update DMA BUFFER SHARING patterns
  kexec: verify the signature of signed PE bzImage
  kexec: support kexec/kdump on EFI systems
  kexec: support for kexec on panic using new system call
  kexec-bzImage64: support for loading bzImage using 64bit entry
  kexec: load and relocate purgatory at kernel load time
  purgatory: core purgatory functionality
  purgatory/sha256: provide implementation of sha256 in purgatory context
  kexec: implementation of new syscall kexec_file_load
  kexec: new syscall kexec_file_load() declaration
  kexec: make kexec_segment user buffer pointer a union
  resource: provide new functions to walk through resources
  kexec: use common function for kimage_normal_alloc() and kimage_crash_alloc()
  kexec: move segment verification code in a separate function
  kexec: rename unusebale_pages to unusable_pages
  kernel: build bin2c based on config option CONFIG_BUILD_BIN2C
  bin2c: move bin2c in scripts/basic
  shm: wait for pins to be released when sealing
  ...
Linus Torvalds, 2014-08-08 15:57:47 -07:00
commit 8065be8d03
348 changed files with 10235 additions and 3582 deletions


@ -1381,6 +1381,9 @@ S: 17 rue Danton
S: F - 94270 Le Kremlin-Bicêtre
S: France
N: Jack Hammer
D: IBM ServeRAID RAID (ips) driver maintenance
N: Greg Hankins
E: gregh@cc.gatech.edu
D: fixed keyboard driver to separate LED and locking status
@ -1691,6 +1694,10 @@ S: Reading
S: RG6 2NU
S: United Kingdom
N: Dave Jeffery
E: dhjeffery@gmail.com
D: SCSI hacks and IBM ServeRAID RAID driver maintenance
N: Jakub Jelinek
E: jakub@redhat.com
W: http://sunsite.mff.cuni.cz/~jj


@ -0,0 +1,269 @@
What: /sys/fs/nilfs2/features/revision
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show current revision of NILFS file system driver.
This value informs about the file system revision that
the driver is ready to support.

What: /sys/fs/nilfs2/features/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/features group.

What: /sys/fs/nilfs2/<device>/revision
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show NILFS file system revision on volume.
This value informs about the metadata structures'
revision on the mounted volume.

What: /sys/fs/nilfs2/<device>/blocksize
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show volume's block size in bytes.

What: /sys/fs/nilfs2/<device>/device_size
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show volume size in bytes.

What: /sys/fs/nilfs2/<device>/free_blocks
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show count of free blocks on volume.

What: /sys/fs/nilfs2/<device>/uuid
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show volume's UUID (Universally Unique Identifier).

What: /sys/fs/nilfs2/<device>/volume_name
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show volume's label.

What: /sys/fs/nilfs2/<device>/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/<device> group.

What: /sys/fs/nilfs2/<device>/superblock/sb_write_time
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show last write time of super block in human-readable
format.

What: /sys/fs/nilfs2/<device>/superblock/sb_write_time_secs
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show last write time of super block in seconds.

What: /sys/fs/nilfs2/<device>/superblock/sb_write_count
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show current write count of super block.

What: /sys/fs/nilfs2/<device>/superblock/sb_update_frequency
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show/Set interval of periodical update of superblock
(in seconds).

What: /sys/fs/nilfs2/<device>/superblock/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/<device>/superblock
group.

What: /sys/fs/nilfs2/<device>/segctor/last_pseg_block
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show start block number of the latest segment.

What: /sys/fs/nilfs2/<device>/segctor/last_seg_sequence
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show sequence value of the latest segment.

What: /sys/fs/nilfs2/<device>/segctor/last_seg_checkpoint
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show checkpoint number of the latest segment.

What: /sys/fs/nilfs2/<device>/segctor/current_seg_sequence
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show segment sequence counter.

What: /sys/fs/nilfs2/<device>/segctor/current_last_full_seg
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show index number of the latest full segment.

What: /sys/fs/nilfs2/<device>/segctor/next_full_seg
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show index number of the full segment to be used next.

What: /sys/fs/nilfs2/<device>/segctor/next_pseg_offset
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show offset of next partial segment in the current
full segment.

What: /sys/fs/nilfs2/<device>/segctor/next_checkpoint
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show next checkpoint number.

What: /sys/fs/nilfs2/<device>/segctor/last_seg_write_time
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show write time of the last segment in
human-readable format.

What: /sys/fs/nilfs2/<device>/segctor/last_seg_write_time_secs
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show write time of the last segment in seconds.

What: /sys/fs/nilfs2/<device>/segctor/last_nongc_write_time
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show write time of the last segment not written by
cleaner operation, in human-readable format.

What: /sys/fs/nilfs2/<device>/segctor/last_nongc_write_time_secs
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show write time of the last segment not written by
cleaner operation, in seconds.

What: /sys/fs/nilfs2/<device>/segctor/dirty_data_blocks_count
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of dirty data blocks.

What: /sys/fs/nilfs2/<device>/segctor/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/<device>/segctor
group.

What: /sys/fs/nilfs2/<device>/segments/segments_number
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of segments on a volume.

What: /sys/fs/nilfs2/<device>/segments/blocks_per_segment
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of blocks in segment.

What: /sys/fs/nilfs2/<device>/segments/clean_segments
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show count of clean segments.

What: /sys/fs/nilfs2/<device>/segments/dirty_segments
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show count of dirty segments.

What: /sys/fs/nilfs2/<device>/segments/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/<device>/segments
group.

What: /sys/fs/nilfs2/<device>/checkpoints/checkpoints_number
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of checkpoints on volume.

What: /sys/fs/nilfs2/<device>/checkpoints/snapshots_number
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of snapshots on volume.

What: /sys/fs/nilfs2/<device>/checkpoints/last_seg_checkpoint
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show checkpoint number of the latest segment.

What: /sys/fs/nilfs2/<device>/checkpoints/next_checkpoint
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show next checkpoint number.

What: /sys/fs/nilfs2/<device>/checkpoints/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/<device>/checkpoints
group.

What: /sys/fs/nilfs2/<device>/mounted_snapshots/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe content of /sys/fs/nilfs2/<device>/mounted_snapshots
group.

What: /sys/fs/nilfs2/<device>/mounted_snapshots/<id>/inodes_count
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of inodes for snapshot.

What: /sys/fs/nilfs2/<device>/mounted_snapshots/<id>/blocks_count
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Show number of blocks for snapshot.

What: /sys/fs/nilfs2/<device>/mounted_snapshots/<id>/README
Date: April 2014
Contact: "Vyacheslav Dubeyko" <slava@dubeyko.com>
Description:
Describe attributes of /sys/fs/nilfs2/<device>/mounted_snapshots/<id>
group.


@ -24,64 +24,27 @@ Please note that implementation details can be changed.
a page/swp_entry may be charged (usage += PAGE_SIZE) at
mem_cgroup_charge_anon()
Called at new page fault and Copy-On-Write.
mem_cgroup_try_charge_swapin()
Called at do_swap_page() (page fault on swap entry) and swapoff.
Followed by charge-commit-cancel protocol. (With swap accounting)
At commit, a charge recorded in swap_cgroup is removed.
mem_cgroup_charge_file()
Called at add_to_page_cache()
mem_cgroup_cache_charge_swapin()
Called at shmem's swapin.
mem_cgroup_prepare_migration()
Called before migration. "extra" charge is done and followed by
charge-commit-cancel protocol.
At commit, charge against oldpage or newpage will be committed.
mem_cgroup_try_charge()
2. Uncharge
a page/swp_entry may be uncharged (usage -= PAGE_SIZE) by
mem_cgroup_uncharge_page()
Called when an anonymous page is fully unmapped. I.e., mapcount goes
to 0. If the page is SwapCache, uncharge is delayed until
mem_cgroup_uncharge_swapcache().
mem_cgroup_uncharge_cache_page()
Called when a page-cache is deleted from radix-tree. If the page is
SwapCache, uncharge is delayed until mem_cgroup_uncharge_swapcache().
mem_cgroup_uncharge_swapcache()
Called when SwapCache is removed from radix-tree. The charge itself
is moved to swap_cgroup. (If mem+swap controller is disabled, no
charge to swap occurs.)
mem_cgroup_uncharge()
Called when a page's refcount goes down to 0.
mem_cgroup_uncharge_swap()
Called when swp_entry's refcnt goes down to 0. A charge against swap
disappears.
mem_cgroup_end_migration(old, new)
At success of migration old is uncharged (if necessary), a charge
to new page is committed. At failure, charge to old page is committed.
3. charge-commit-cancel
In some cases, we can't know whether this "charge" is valid or not at
charging time (because of races).
To handle such cases, there are charge-commit-cancel functions.
mem_cgroup_try_charge_XXX
mem_cgroup_commit_charge_XXX
mem_cgroup_cancel_charge_XXX
these are used in swap-in and migration.
Memcg pages are charged in two steps:
mem_cgroup_try_charge()
mem_cgroup_commit_charge() or mem_cgroup_cancel_charge()
At try_charge(), there are no flags to say "this page is charged".
at this point, usage += PAGE_SIZE.
At commit(), the function checks the page should be charged or not
and set flags or avoid charging.(usage -= PAGE_SIZE)
At commit(), the page is associated with the memcg.
At cancel(), simply usage -= PAGE_SIZE.
@ -91,18 +54,6 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
Anonymous page is newly allocated at
- page fault into MAP_ANONYMOUS mapping.
- Copy-On-Write.
It is charged right after it's allocated before doing any page table
related operations. Of course, it's uncharged when another page is used
for the fault address.
At freeing anonymous page (by exit() or munmap()), zap_pte() is called
and pages for ptes are freed one by one.(see mm/memory.c). Uncharges
are done at page_remove_rmap() when page_mapcount() goes down to 0.
Another page freeing is by page-reclaim (vmscan.c) and anonymous
pages are swapped out. In this case, the page is marked as
PageSwapCache(). uncharge() routine doesn't uncharge the page marked
as SwapCache(). It's delayed until __delete_from_swap_cache().
4.1 Swap-in.
At swap-in, the page is taken from swap-cache. There are 2 cases.
@ -111,41 +62,6 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
(b) If the SwapCache has been mapped by processes, it has been
charged already.
This swap-in is one of the most complicated operations. In do_swap_page(),
the following events occur when the pte is unchanged.
(1) the page (SwapCache) is looked up.
(2) lock_page()
(3) try_charge_swapin()
(4) reuse_swap_page() (may call delete_swap_cache())
(5) commit_charge_swapin()
(6) swap_free().
Consider the following situations, for example.
(A) The page has not been charged before (2) and reuse_swap_page()
doesn't call delete_from_swap_cache().
(B) The page has not been charged before (2) and reuse_swap_page()
calls delete_from_swap_cache().
(C) The page has been charged before (2) and reuse_swap_page() doesn't
call delete_from_swap_cache().
(D) The page has been charged before (2) and reuse_swap_page() calls
delete_from_swap_cache().
memory.usage/memsw.usage changes to this page/swp_entry will be
Case (A) (B) (C) (D)
Event
Before (2) 0/ 1 0/ 1 1/ 1 1/ 1
===========================================
(3) +1/+1 +1/+1 +1/+1 +1/+1
(4) - 0/ 0 - -1/ 0
(5) 0/-1 0/ 0 -1/-1 0/ 0
(6) - 0/-1 - 0/-1
===========================================
Result 1/ 1 1/ 1 1/ 1 1/ 1
In all cases, charges to this page should be 1/ 1.
4.2 Swap-out.
At swap-out, typical state transition is below.
@ -158,28 +74,20 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
swp_entry's refcnt -= 1.
At (b), the page is marked as SwapCache and not uncharged.
At (d), the page is removed from SwapCache and a charge in page_cgroup
is moved to swap_cgroup.
Finally, at task exit,
(e) zap_pte() is called and swp_entry's refcnt -=1 -> 0.
Here, a charge in swap_cgroup disappears.
5. Page Cache
Page Cache is charged at
- add_to_page_cache_locked().
uncharged at
- __remove_from_page_cache().
The logic is very clear. (About migration, see below)
Note: __remove_from_page_cache() is called by remove_from_page_cache()
and __remove_mapping().
6. Shmem(tmpfs) Page Cache
Memcg's charge/uncharge have special handlers of shmem. The best way
to understand shmem's page state transition is to read mm/shmem.c.
The best way to understand shmem's page state transition is to read
mm/shmem.c.
But a brief explanation of the behavior of memcg around shmem will be
helpful for understanding the logic.
@ -192,56 +100,10 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
It's charged when...
- A new page is added to shmem's radix-tree.
- A swp page is read. (move a charge from swap_cgroup to page_cgroup)
It's uncharged when
- A page is removed from radix-tree and not SwapCache.
- When SwapCache is removed, a charge is moved to swap_cgroup.
- When swp_entry's refcnt goes down to 0, a charge in swap_cgroup
disappears.
7. Page Migration
One of the most complicated functions is the page-migration handler.
Memcg has 2 routines. Assume that we are migrating a page's contents
from OLDPAGE to NEWPAGE.
Usual migration logic is..
(a) remove the page from LRU.
(b) allocate NEWPAGE (migration target)
(c) lock by lock_page().
(d) unmap all mappings.
(e-1) If necessary, replace entry in radix-tree.
(e-2) move contents of a page.
(f) map all mappings again.
(g) pushback the page to LRU.
(-) OLDPAGE will be freed.
Before (g), memcg should complete all necessary charge/uncharge to
NEWPAGE/OLDPAGE.
The point is....
- If OLDPAGE is anonymous, all charges will be dropped at (d) because
try_to_unmap() drops all mapcount and the page will not be
SwapCache.
- If OLDPAGE is SwapCache, charges will be kept at (g) because
__delete_from_swap_cache() isn't called at (e-1)
- If OLDPAGE is page-cache, charges will be kept at (g) because
remove_from_swap_cache() isn't called at (e-1)
memcg provides following hooks.
- mem_cgroup_prepare_migration(OLDPAGE)
Called after (b) to account a charge (usage += PAGE_SIZE) against
memcg which OLDPAGE belongs to.
- mem_cgroup_end_migration(OLDPAGE, NEWPAGE)
Called after (f) before (g).
If OLDPAGE is used, commit OLDPAGE again. If OLDPAGE is already
charged, a charge by prepare_migration() is automatically canceled.
If NEWPAGE is used, commit NEWPAGE and uncharge OLDPAGE.
But zap_pte() (by exit or munmap) can be called during migration, so
we have to check whether OLDPAGE/NEWPAGE is a valid page after commit().
mem_cgroup_migrate()
8. LRU
Each memcg has its own private LRU. Now, its handling is under global


@ -70,6 +70,7 @@ nuvoton,npct501 i2c trusted platform module (TPM)
nxp,pca9556 Octal SMBus and I2C registered interface
nxp,pca9557 8-bit I2C-bus and SMBus I/O port with reset
nxp,pcf8563 Real-time clock/calendar
nxp,pcf85063 Tiny Real-Time Clock
ovti,ov5642 OV5642: Color CMOS QSXGA (5-megapixel) Image Sensor with OmniBSI and Embedded TrueFocus
pericom,pt7c4338 Real-time Clock Module
plx,pex8648 48-Lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch


@ -268,6 +268,8 @@ characters, each representing a particular tainted value.
14: 'E' if an unsigned module has been loaded in a kernel supporting
module signature.
15: 'L' if a soft lockup has previously occurred on the system.
The primary reason for the 'Tainted: ' string is to tell kernel
debuggers if this is a clean kernel or if anything unusual has
occurred. Tainting is permanent: even if an offending module is


@ -20,13 +20,26 @@ II. Known problems
None.
III. To do
III. DMA Engine Support
Add DMA data transfers (non-messaging).
Add inbound region (SRIO-to-PCIe) mapping.
The Tsi721 mport driver supports DMA data transfers between local system memory
and remote RapidIO devices. This functionality is implemented according to the
SLAVE mode API defined by the common Linux kernel DMA Engine framework.
Depending on system requirements, RapidIO DMA operations can be included/excluded
by setting the CONFIG_RAPIDIO_DMA_ENGINE option. The Tsi721 mport driver uses
seven out of eight available BDMA channels to support DMA data transfers.
One BDMA channel is reserved for generation of maintenance read/write requests.
If the Tsi721 mport driver has been built with RAPIDIO_DMA_ENGINE support included,
this driver will accept a DMA-specific module parameter:
"dma_desc_per_channel" - defines the number of hardware buffer descriptors used
by each BDMA channel of the Tsi721 (128 by default).
IV. Version History
1.1.0 - DMA operations re-worked to support data scatter/gather lists larger
than hardware buffer descriptors ring.
1.0.0 - Initial driver release.
V. License


@ -826,6 +826,7 @@ can be ORed together:
4096 - An out-of-tree module has been loaded.
8192 - An unsigned module has been loaded in a kernel supporting module
signature.
16384 - A soft lockup has previously occurred on the system.
==============================================================


@ -597,7 +597,7 @@ AMD GEODE CS5536 USB DEVICE CONTROLLER DRIVER
M: Thomas Dahlmann <dahlmann.thomas@arcor.de>
L: linux-geode@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: drivers/usb/gadget/amd5536udc.*
F: drivers/usb/gadget/udc/amd5536udc.*
AMD GEODE PROCESSOR/CHIPSET SUPPORT
P: Andres Salomon <dilinger@queued.net>
@ -621,7 +621,7 @@ AMD MICROCODE UPDATE SUPPORT
M: Andreas Herrmann <herrmann.der.user@googlemail.com>
L: amd64-microcode@amd64.org
S: Maintained
F: arch/x86/kernel/microcode_amd.c
F: arch/x86/kernel/cpu/microcode/amd*
AMD XGBE DRIVER
M: Tom Lendacky <thomas.lendacky@amd.com>
@ -911,7 +911,7 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/baohua/linux.git
S: Maintained
F: arch/arm/mach-prima2/
F: drivers/clk/clk-prima2.c
F: drivers/clk/sirf/
F: drivers/clocksource/timer-prima2.c
F: drivers/clocksource/timer-marco.c
N: [^a-z]sirf
@ -1164,6 +1164,7 @@ M: Linus Walleij <linus.walleij@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-nomadik/
F: drivers/pinctrl/nomadik/
F: drivers/i2c/busses/i2c-nomadik.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-nomadik.git
@ -1185,8 +1186,7 @@ F: drivers/mmc/host/msm_sdcc.h
F: drivers/tty/serial/msm_serial.h
F: drivers/tty/serial/msm_serial.c
F: drivers/*/pm8???-*
F: drivers/mfd/ssbi/
F: include/linux/mfd/pm8xxx/
F: drivers/mfd/ssbi.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davidb/linux-msm.git
S: Maintained
@ -1443,7 +1443,8 @@ F: drivers/mfd/abx500*
F: drivers/mfd/ab8500*
F: drivers/mfd/dbx500*
F: drivers/mfd/db8500*
F: drivers/pinctrl/pinctrl-nomadik*
F: drivers/pinctrl/nomadik/pinctrl-ab*
F: drivers/pinctrl/nomadik/pinctrl-nomadik*
F: drivers/rtc/rtc-ab8500.c
F: drivers/rtc/rtc-pl031.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson.git
@ -1699,7 +1700,7 @@ ATMEL USBA UDC DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: drivers/usb/gadget/atmel_usba_udc.*
F: drivers/usb/gadget/udc/atmel_usba_udc.*
ATMEL WIRELESS DRIVER
M: Simon Kelley <simon@thekelleys.org.uk>
@ -1991,7 +1992,7 @@ F: arch/arm/boot/dts/bcm113*
F: arch/arm/boot/dts/bcm216*
F: arch/arm/boot/dts/bcm281*
F: arch/arm/configs/bcm_defconfig
F: drivers/mmc/host/sdhci_bcm_kona.c
F: drivers/mmc/host/sdhci-bcm-kona.c
F: drivers/clocksource/bcm_kona_timer.c
BROADCOM BCM2835 ARM ARCHITECTURE
@ -2341,12 +2342,6 @@ L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/cirrus/ep93xx_eth.c
CIRRUS LOGIC EP93XX OHCI USB HOST DRIVER
M: Lennert Buytenhek <kernel@wantstofly.org>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/host/ohci-ep93xx.c
CIRRUS LOGIC AUDIO CODEC DRIVERS
M: Brian Austin <brian.austin@cirrus.com>
M: Paul Handrigan <Paul.Handrigan@cirrus.com>
@ -2431,7 +2426,7 @@ W: http://linux-cifs.samba.org/
Q: http://patchwork.ozlabs.org/project/linux-cifs-client/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6.git
S: Supported
F: Documentation/filesystems/cifs.txt
F: Documentation/filesystems/cifs/
F: fs/cifs/
COMPACTPCI HOTPLUG CORE
@ -2966,7 +2961,9 @@ L: linux-media@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: linaro-mm-sig@lists.linaro.org
F: drivers/dma-buf/
F: include/linux/dma-buf* include/linux/reservation.h include/linux/*fence.h
F: include/linux/dma-buf*
F: include/linux/reservation.h
F: include/linux/*fence.h
F: Documentation/dma-buf-sharing.txt
T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
@ -3061,7 +3058,6 @@ L: dri-devel@lists.freedesktop.org
T: git git://people.freedesktop.org/~agd5f/linux
S: Supported
F: drivers/gpu/drm/radeon/
F: include/drm/radeon*
F: include/uapi/drm/radeon*
DRM PANEL DRIVERS
@ -3255,26 +3251,12 @@ T: git git://linuxtv.org/anttip/media_tree.git
S: Maintained
F: drivers/media/tuners/e4000*
EATA-DMA SCSI DRIVER
M: Michael Neuffer <mike@i-Connect.Net>
L: linux-eata@i-connect.net
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/eata*
EATA ISA/EISA/PCI SCSI DRIVER
M: Dario Ballabio <ballabio_dario@emc.com>
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/eata.c
EATA-PIO SCSI DRIVER
M: Michael Neuffer <mike@i-Connect.Net>
L: linux-eata@i-connect.net
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/eata_pio.*
EC100 MEDIA DRIVER
M: Antti Palosaari <crope@iki.fi>
L: linux-media@vger.kernel.org
@ -3449,7 +3431,7 @@ M: Matt Fleming <matt.fleming@intel.com>
L: linux-efi@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi.git
S: Maintained
F: Documentation/x86/efi-stub.txt
F: Documentation/efi-stub.txt
F: arch/ia64/kernel/efi.c
F: arch/x86/boot/compressed/eboot.[ch]
F: arch/x86/include/asm/efi.h
@ -3836,7 +3818,7 @@ M: Li Yang <leoli@freescale.com>
L: linux-usb@vger.kernel.org
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: drivers/usb/gadget/fsl*
F: drivers/usb/gadget/udc/fsl*
FREESCALE QUICC ENGINE UCC ETHERNET DRIVER
M: Li Yang <leoli@freescale.com>
@ -4525,10 +4507,7 @@ S: Supported
F: drivers/scsi/ibmvscsi/ibmvfc*
IBM ServeRAID RAID DRIVER
P: Jack Hammer
M: Dave Jeffery <ipslinux@adaptec.com>
W: http://www.developer.ibm.com/welcome/netfinity/serveraid.html
S: Supported
S: Orphan
F: drivers/scsi/ips.*
ICH LPC AND GPIO DRIVER
@ -4725,8 +4704,8 @@ F: drivers/platform/x86/intel_menlow.c
INTEL IA32 MICROCODE UPDATE SUPPORT
M: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
S: Maintained
F: arch/x86/kernel/microcode_core.c
F: arch/x86/kernel/microcode_intel.c
F: arch/x86/kernel/cpu/microcode/core*
F: arch/x86/kernel/cpu/microcode/intel*
INTEL I/OAT DMA DRIVER
M: Dan Williams <dan.j.williams@intel.com>
@ -5185,7 +5164,6 @@ L: linux-nfs@vger.kernel.org
W: http://nfs.sourceforge.net/
S: Supported
F: fs/nfsd/
F: include/linux/nfsd/
F: include/uapi/linux/nfsd/
F: fs/lockd/
F: fs/nfs_common/
@ -5906,7 +5884,6 @@ F: drivers/clocksource/metag_generic.c
F: drivers/irqchip/irq-metag.c
F: drivers/irqchip/irq-metag-ext.c
F: drivers/tty/metag_da.c
F: fs/imgdafs/
MICROBLAZE ARCHITECTURE
M: Michal Simek <monstr@monstr.eu>
@ -6997,9 +6974,9 @@ M: Jamie Iles <jamie@jamieiles.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://github.com/jamieiles/linux-2.6-ji.git
S: Supported
F: arch/arm/boot/dts/picoxcell*
F: arch/arm/mach-picoxcell/
F: drivers/*/picoxcell*
F: drivers/*/*/picoxcell*
F: drivers/crypto/picoxcell*
PIN CONTROL SUBSYSTEM
M: Linus Walleij <linus.walleij@linaro.org>
@ -7224,7 +7201,7 @@ F: drivers/ptp/*
F: include/linux/ptp_cl*
PTRACE SUPPORT
M: Roland McGrath <roland@redhat.com>
M: Roland McGrath <roland@hack.frob.com>
M: Oleg Nesterov <oleg@redhat.com>
S: Maintained
F: include/asm-generic/syscall.h
@ -7274,7 +7251,7 @@ S: Maintained
F: arch/arm/mach-pxa/
F: drivers/pcmcia/pxa2xx*
F: drivers/spi/spi-pxa2xx*
F: drivers/usb/gadget/pxa2*
F: drivers/usb/gadget/udc/pxa2*
F: include/sound/pxa2xx-lib.h
F: sound/arm/pxa*
F: sound/soc/pxa/
@ -7283,7 +7260,7 @@ PXA3xx NAND FLASH DRIVER
M: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
L: linux-mtd@lists.infradead.org
S: Maintained
F: drivers/mtd/nand/pxa3xx-nand.c
F: drivers/mtd/nand/pxa3xx_nand.c
MMP SUPPORT
M: Eric Miao <eric.y.miao@gmail.com>
@ -9628,8 +9605,8 @@ USB WEBCAM GADGET
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/gadget/*uvc*.c
F: drivers/usb/gadget/webcam.c
F: drivers/usb/gadget/function/*uvc*.c
F: drivers/usb/gadget/legacy/webcam.c
USB WIRELESS RNDIS DRIVER (rndis_wlan)
M: Jussi Kivilinna <jussi.kivilinna@iki.fi>


@ -6,4 +6,5 @@ generic-y += exec.h
generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h


@ -1,6 +0,0 @@
#ifndef _ALPHA_SCATTERLIST_H
#define _ALPHA_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#endif /* !(_ALPHA_SCATTERLIST_H) */


@ -83,6 +83,7 @@ config ARM
<http://www.arm.linux.org.uk/>.
config ARM_HAS_SG_CHAIN
select ARCH_HAS_SG_CHAIN
bool
config NEED_SG_DMA_LENGTH
@ -1982,6 +1983,8 @@ config XIP_PHYS_ADDR
config KEXEC
bool "Kexec system call (EXPERIMENTAL)"
depends on (!SMP || PM_SLEEP_SMP)
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -22,6 +22,7 @@ generic-y += poll.h
generic-y += preempt.h
generic-y += resource.h
generic-y += rwsem.h
generic-y += scatterlist.h
generic-y += sections.h
generic-y += segment.h
generic-y += sembuf.h


@ -1,12 +0,0 @@
#ifndef _ASMARM_SCATTERLIST_H
#define _ASMARM_SCATTERLIST_H
#ifdef CONFIG_ARM_HAS_SG_CHAIN
#define ARCH_HAS_SG_CHAIN
#endif
#include <asm/memory.h>
#include <asm/types.h>
#include <asm-generic/scatterlist.h>
#endif /* _ASMARM_SCATTERLIST_H */


@ -336,7 +336,7 @@ static int __init early_touchbook_revision(char *p)
if (!p)
return 0;
return strict_strtoul(p, 10, &touchbook_revision);
return kstrtoul(p, 10, &touchbook_revision);
}
early_param("tbr", early_touchbook_revision);


@ -681,29 +681,19 @@ static ssize_t omap_mux_dbg_signal_write(struct file *file,
const char __user *user_buf,
size_t count, loff_t *ppos)
{
char buf[OMAP_MUX_MAX_ARG_CHAR];
struct seq_file *seqf;
struct omap_mux *m;
unsigned long val;
int buf_size, ret;
u16 val;
int ret;
struct omap_mux_partition *partition;
if (count > OMAP_MUX_MAX_ARG_CHAR)
return -EINVAL;
memset(buf, 0, sizeof(buf));
buf_size = min(count, sizeof(buf) - 1);
if (copy_from_user(buf, user_buf, buf_size))
return -EFAULT;
ret = strict_strtoul(buf, 0x10, &val);
ret = kstrtou16_from_user(user_buf, count, 0x10, &val);
if (ret < 0)
return ret;
if (val > 0xffff)
return -EINVAL;
seqf = file->private_data;
m = seqf->private;
@ -711,7 +701,7 @@ static ssize_t omap_mux_dbg_signal_write(struct file *file,
if (!partition)
return -ENODEV;
omap_mux_write(partition, (u16)val, m->reg_offset);
omap_mux_write(partition, val, m->reg_offset);
*ppos += count;
return count;
@ -917,14 +907,14 @@ static void __init omap_mux_set_cmdline_signals(void)
while ((token = strsep(&next_opt, ",")) != NULL) {
char *keyval, *name;
unsigned long val;
u16 val;
keyval = token;
name = strsep(&keyval, "=");
if (name) {
int res;
res = strict_strtoul(keyval, 0x10, &val);
res = kstrtou16(keyval, 0x10, &val);
if (res < 0)
continue;


@ -90,7 +90,7 @@ int __init parse_balloon3_features(char *arg)
if (!arg)
return 0;
return strict_strtoul(arg, 0, &balloon3_features_present);
return kstrtoul(arg, 0, &balloon3_features_present);
}
early_param("balloon3_features", parse_balloon3_features);


@ -769,7 +769,7 @@ static unsigned long viper_tpm;
static int __init viper_tpm_setup(char *str)
{
return strict_strtoul(str, 10, &viper_tpm) >= 0;
return kstrtoul(str, 10, &viper_tpm) >= 0;
}
__setup("tpm=", viper_tpm_setup);


@ -242,7 +242,7 @@ static int __init jive_mtdset(char *options)
if (options == NULL || options[0] == '\0')
return 0;
if (strict_strtoul(options, 10, &set)) {
if (kstrtoul(options, 10, &set)) {
printk(KERN_ERR "failed to parse mtdset=%s\n", options);
return 0;
}


@ -178,7 +178,8 @@ static int __init nuc900_set_cpufreq(char *str)
if (!*str)
return 0;
strict_strtoul(str, 0, &cpufreq);
if (kstrtoul(str, 0, &cpufreq))
return 0;
nuc900_clock_source(NULL, "ext");


@ -1,6 +1,7 @@
config ARM64
def_bool y
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_SUPPORTS_ATOMIC_RMW


@ -28,9 +28,6 @@
#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1))
/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
#define __HAVE_ARCH_GATE_AREA 1
/*
* The idmap and swapper page tables need some space reserved in the kernel
* image. Both require pgd, pud (4 levels only) and pmd tables to (section)


@ -194,25 +194,6 @@ up_fail:
return PTR_ERR(ret);
}
/*
* We define AT_SYSINFO_EHDR, so we need these function stubs to keep
* Linux happy.
*/
int in_gate_area_no_mm(unsigned long addr)
{
return 0;
}
int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
return 0;
}
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
return NULL;
}
/*
* Update the vDSO data page to keep in sync with kernel timekeeping.
*/


@ -13,6 +13,7 @@ generic-y += linkage.h
generic-y += mcs_spinlock.h
generic-y += module.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h
generic-y += vga.h
generic-y += xor.h


@ -1,6 +0,0 @@
#ifndef __ASM_CRIS_SCATTERLIST_H
#define __ASM_CRIS_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#endif /* !(__ASM_CRIS_SCATTERLIST_H) */


@ -5,4 +5,5 @@ generic-y += exec.h
generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h


@ -1,6 +0,0 @@
#ifndef _ASM_SCATTERLIST_H
#define _ASM_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#endif /* !_ASM_SCATTERLIST_H */


@ -28,6 +28,7 @@ config IA64
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_VIRT_CPU_ACCOUNTING
select ARCH_HAS_SG_CHAIN
select VIRT_TO_BUS
select ARCH_DISCARD_MEMBLOCK
select GENERIC_IRQ_PROBE
@ -548,6 +549,8 @@ source "drivers/sn/Kconfig"
config KEXEC
bool "kexec system call"
depends on !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -5,5 +5,6 @@ generic-y += hash.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h
generic-y += vtime.h


@ -231,4 +231,6 @@ get_order (unsigned long size)
#define PERCPU_ADDR (-PERCPU_PAGE_SIZE)
#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#define __HAVE_ARCH_GATE_AREA 1
#endif /* _ASM_IA64_PAGE_H */


@ -1,7 +0,0 @@
#ifndef _ASM_IA64_SCATTERLIST_H
#define _ASM_IA64_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#define ARCH_HAS_SG_CHAIN
#endif /* _ASM_IA64_SCATTERLIST_H */


@ -384,21 +384,6 @@ static struct irqaction timer_irqaction = {
.name = "timer"
};
static struct platform_device rtc_efi_dev = {
.name = "rtc-efi",
.id = -1,
};
static int __init rtc_init(void)
{
if (platform_device_register(&rtc_efi_dev) < 0)
printk(KERN_ERR "unable to register rtc device...\n");
/* not necessarily an error */
return 0;
}
module_init(rtc_init);
void read_persistent_clock(struct timespec *ts)
{
efi_gettimeofday(ts);


@ -278,6 +278,37 @@ setup_gate (void)
ia64_patch_gate();
}
static struct vm_area_struct gate_vma;
static int __init gate_vma_init(void)
{
gate_vma.vm_mm = NULL;
gate_vma.vm_start = FIXADDR_USER_START;
gate_vma.vm_end = FIXADDR_USER_END;
gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
gate_vma.vm_page_prot = __P101;
return 0;
}
__initcall(gate_vma_init);
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
return &gate_vma;
}
int in_gate_area_no_mm(unsigned long addr)
{
if ((addr >= FIXADDR_USER_START) && (addr < FIXADDR_USER_END))
return 1;
return 0;
}
int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
return in_gate_area_no_mm(addr);
}
void ia64_mmu_init(void *my_cpu_data)
{
unsigned long pta, impl_va_bits;


@ -6,4 +6,5 @@ generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += module.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h


@ -1,6 +0,0 @@
#ifndef _ASM_M32R_SCATTERLIST_H
#define _ASM_M32R_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#endif /* _ASM_M32R_SCATTERLIST_H */


@ -91,6 +91,8 @@ config MMU_SUN3
config KEXEC
bool "kexec system call"
depends on M68KCLASSIC
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -7,5 +7,6 @@ generic-y += exec.h
generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += syscalls.h
generic-y += trace_clock.h


@ -1 +0,0 @@
#include <asm-generic/scatterlist.h>


@ -2396,6 +2396,8 @@ source "kernel/Kconfig.preempt"
config KEXEC
bool "Kexec system call"
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -6,4 +6,5 @@ generic-y += exec.h
generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h


@ -1,16 +0,0 @@
/* MN10300 Scatterlist definitions
*
* Copyright (C) 2007 Matsushita Electric Industrial Co., Ltd.
* Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#ifndef _ASM_SCATTERLIST_H
#define _ASM_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#endif /* _ASM_SCATTERLIST_H */


@ -111,6 +111,7 @@ config PPC
select HAVE_DMA_API_DEBUG
select HAVE_OPROFILE
select HAVE_DEBUG_KMEMLEAK
select ARCH_HAS_SG_CHAIN
select GENERIC_ATOMIC64 if PPC32
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select HAVE_PERF_EVENTS
@ -398,6 +399,8 @@ config PPC64_SUPPORTS_MEMORY_FAILURE
config KEXEC
bool "kexec system call"
depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP))
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -4,5 +4,6 @@ generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += rwsem.h
generic-y += scatterlist.h
generic-y += trace_clock.h
generic-y += vtime.h


@ -48,9 +48,6 @@ extern unsigned int HPAGE_SHIFT;
#define HUGE_MAX_HSTATE (MMU_PAGE_COUNT-1)
#endif
/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
#define __HAVE_ARCH_GATE_AREA 1
/*
* Subtle: (1 << PAGE_SHIFT) is an int, not an unsigned long. So if we
* assign PAGE_MASK to a larger type it gets extended the way we want


@ -1,17 +0,0 @@
#ifndef _ASM_POWERPC_SCATTERLIST_H
#define _ASM_POWERPC_SCATTERLIST_H
/*
* Copyright (C) 2001 PPC64 Team, IBM Corp
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <asm/dma.h>
#include <asm-generic/scatterlist.h>
#define ARCH_HAS_SG_CHAIN
#endif /* _ASM_POWERPC_SCATTERLIST_H */


@ -149,13 +149,13 @@ static void check_smt_enabled(void)
else if (!strcmp(smt_enabled_cmdline, "off"))
smt_enabled_at_boot = 0;
else {
long smt;
int smt;
int rc;
rc = strict_strtol(smt_enabled_cmdline, 10, &smt);
rc = kstrtoint(smt_enabled_cmdline, 10, &smt);
if (!rc)
smt_enabled_at_boot =
min(threads_per_core, (int)smt);
min(threads_per_core, smt);
}
} else {
dn = of_find_node_by_path("/options");


@ -840,19 +840,3 @@ static int __init vdso_init(void)
return 0;
}
arch_initcall(vdso_init);
int in_gate_area_no_mm(unsigned long addr)
{
return 0;
}
int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
return 0;
}
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
return NULL;
}


@ -977,7 +977,7 @@ static ssize_t viodev_cmo_desired_set(struct device *dev,
size_t new_desired;
int ret;
ret = strict_strtoul(buf, 10, &new_desired);
ret = kstrtoul(buf, 10, &new_desired);
if (ret)
return ret;


@ -33,6 +33,7 @@
#include <linux/export.h>
#include <asm/tlbflush.h>
#include <asm/dma.h>
#include "mmu_decl.h"


@ -25,6 +25,7 @@
#include <asm/time.h>
#include <asm/uic.h>
#include <asm/ppc4xx.h>
#include <asm/dma.h>
static __initdata struct of_device_id warp_of_bus[] = {


@ -13,6 +13,7 @@
#include <generated/utsrelease.h>
#include <linux/pci.h>
#include <linux/of.h>
#include <asm/dma.h>
#include <asm/prom.h>
#include <asm/time.h>
#include <asm/machdep.h>


@ -24,6 +24,7 @@
#include <asm/i8259.h>
#include <asm/time.h>
#include <asm/udbg.h>
#include <asm/dma.h>
extern void __flush_disable_L1(void);


@ -400,10 +400,10 @@ out:
static ssize_t dlpar_cpu_probe(const char *buf, size_t count)
{
struct device_node *dn, *parent;
unsigned long drc_index;
u32 drc_index;
int rc;
rc = strict_strtoul(buf, 0, &drc_index);
rc = kstrtou32(buf, 0, &drc_index);
if (rc)
return -EINVAL;


@ -320,7 +320,7 @@ static ssize_t migrate_store(struct class *class, struct class_attribute *attr,
u64 streamid;
int rc;
rc = strict_strtoull(buf, 0, &streamid);
rc = kstrtou64(buf, 0, &streamid);
if (rc)
return rc;


@ -48,6 +48,8 @@ config ARCH_SUPPORTS_DEBUG_PAGEALLOC
config KEXEC
def_bool y
select CRYPTO
select CRYPTO_SHA256
config AUDIT_ARCH
def_bool y
@ -145,6 +147,7 @@ config S390
select TTY
select VIRT_CPU_ACCOUNTING
select VIRT_TO_BUS
select ARCH_HAS_SG_CHAIN
config SCHED_OMIT_FRAME_POINTER
def_bool y


@ -4,4 +4,5 @@ generic-y += clkdev.h
generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h


@ -162,6 +162,4 @@ static inline int devmem_is_allowed(unsigned long pfn)
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
#define __HAVE_ARCH_GATE_AREA 1
#endif /* _S390_PAGE_H */


@ -1,3 +0,0 @@
#include <asm-generic/scatterlist.h>
#define ARCH_HAS_SG_CHAIN


@ -316,18 +316,3 @@ static int __init vdso_init(void)
return 0;
}
early_initcall(vdso_init);
int in_gate_area_no_mm(unsigned long addr)
{
return 0;
}
int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
return 0;
}
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
return NULL;
}


@ -8,5 +8,6 @@ generic-y += cputime.h
generic-y += hash.h
generic-y += mcs_spinlock.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += trace_clock.h
generic-y += xor.h


@ -1,6 +0,0 @@
#ifndef _ASM_SCORE_SCATTERLIST_H
#define _ASM_SCORE_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#endif /* _ASM_SCORE_SCATTERLIST_H */


@ -595,6 +595,8 @@ source kernel/Kconfig.hz
config KEXEC
bool "kexec system call (EXPERIMENTAL)"
depends on SUPERH32 && MMU
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -186,11 +186,6 @@ typedef struct page *pgtable_t;
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
/* vDSO support */
#ifdef CONFIG_VSYSCALL
#define __HAVE_ARCH_GATE_AREA
#endif
/*
* Some drivers need to perform DMA into kmalloc'ed buffers
* and so we have to increase the kmalloc minalign for this.


@ -92,18 +92,3 @@ const char *arch_vma_name(struct vm_area_struct *vma)
return NULL;
}
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
return NULL;
}
int in_gate_area(struct mm_struct *mm, unsigned long address)
{
return 0;
}
int in_gate_area_no_mm(unsigned long address)
{
return 0;
}


@ -42,6 +42,7 @@ config SPARC
select MODULES_USE_ELF_RELA
select ODD_RT_SIGACTION
select OLD_SIGSUSPEND
select ARCH_HAS_SG_CHAIN
config SPARC32
def_bool !64BIT


@ -15,6 +15,7 @@ generic-y += mcs_spinlock.h
generic-y += module.h
generic-y += mutex.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += serial.h
generic-y += trace_clock.h
generic-y += types.h


@ -1,8 +0,0 @@
#ifndef _SPARC_SCATTERLIST_H
#define _SPARC_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#define ARCH_HAS_SG_CHAIN
#endif /* !(_SPARC_SCATTERLIST_H) */


@ -191,6 +191,8 @@ source "kernel/Kconfig.hz"
config KEXEC
bool "kexec system call"
select CRYPTO
select CRYPTO_SHA256
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -23,7 +23,7 @@
struct proc_dir_entry;
#ifdef CONFIG_HARDWALL
void proc_tile_hardwall_init(struct proc_dir_entry *root);
int proc_pid_hardwall(struct task_struct *task, char *buffer);
int proc_pid_hardwall(struct seq_file *m, struct pid_namespace *ns, struct pid *pid, struct task_struct *task);
#else
static inline void proc_tile_hardwall_init(struct proc_dir_entry *root) {}
#endif


@ -38,12 +38,6 @@
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define HPAGE_MASK (~(HPAGE_SIZE - 1))
/*
* We do define AT_SYSINFO_EHDR to support vDSO,
* but don't use the gate mechanism.
*/
#define __HAVE_ARCH_GATE_AREA 1
/*
* If the Kconfig doesn't specify, set a maximum zone order that
* is enough so that we can create huge pages from small pages given


@ -947,15 +947,15 @@ static void hardwall_remove_proc(struct hardwall_info *info)
remove_proc_entry(buf, info->type->proc_dir);
}
int proc_pid_hardwall(struct task_struct *task, char *buffer)
int proc_pid_hardwall(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task)
{
int i;
int n = 0;
for (i = 0; i < HARDWALL_TYPES; ++i) {
struct hardwall_info *info = task->thread.hardwall[i].info;
if (info)
n += sprintf(&buffer[n], "%s: %d\n",
info->type->name, info->id);
seq_printf(m, "%s: %d\n", info->type->name, info->id);
}
return n;
}


@ -121,21 +121,6 @@ const char *arch_vma_name(struct vm_area_struct *vma)
return NULL;
}
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
return NULL;
}
int in_gate_area(struct mm_struct *mm, unsigned long address)
{
return 0;
}
int in_gate_area_no_mm(unsigned long address)
{
return 0;
}
int setup_vdso_pages(void)
{
struct page **pagelist;


@ -21,6 +21,7 @@ generic-y += param.h
generic-y += pci.h
generic-y += percpu.h
generic-y += preempt.h
generic-y += scatterlist.h
generic-y += sections.h
generic-y += switch_to.h
generic-y += topology.h


@ -119,4 +119,9 @@ extern unsigned long uml_physmem;
#include <asm-generic/getorder.h>
#endif /* __ASSEMBLY__ */
#ifdef CONFIG_X86_32
#define __HAVE_ARCH_GATE_AREA 1
#endif
#endif /* __UM_PAGE_H */


@ -16,3 +16,7 @@ obj-$(CONFIG_IA32_EMULATION) += ia32/
obj-y += platform/
obj-y += net/
ifeq ($(CONFIG_X86_64),y)
obj-$(CONFIG_KEXEC) += purgatory/
endif


@ -96,6 +96,7 @@ config X86
select IRQ_FORCED_THREADING
select HAVE_BPF_JIT if X86_64
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select ARCH_HAS_SG_CHAIN
select CLKEVT_I8253
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select GENERIC_IOMAP
@ -1581,6 +1582,9 @@ source kernel/Kconfig.hz
config KEXEC
bool "kexec system call"
select BUILD_BIN2C
select CRYPTO
select CRYPTO_SHA256
---help---
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
@ -1595,6 +1599,28 @@ config KEXEC
interface is strongly in flux, so no good recommendation can be
made.
config KEXEC_VERIFY_SIG
bool "Verify kernel signature during kexec_file_load() syscall"
depends on KEXEC
---help---
This option makes kernel signature verification mandatory for
the kexec_file_load() syscall. If the kernel signature can not be
verified, kexec_file_load() will fail.
This option enforces signature verification at the generic level.
One needs to enable signature verification for the type of kernel
image being loaded to make sure it works. For example, enable the
bzImage signature verification option to be able to load and
verify signatures of bzImage. Otherwise kernel loading will fail.
config KEXEC_BZIMAGE_VERIFY_SIG
bool "Enable bzImage signature verification support"
depends on KEXEC_VERIFY_SIG
depends on SIGNED_PE_FILE_VERIFICATION
select SYSTEM_TRUSTED_KEYRING
---help---
Enable bzImage signature verification support.
config CRASH_DUMP
bool "kernel crash dumps"
depends on X86_64 || (X86_32 && HIGHMEM)


@ -183,6 +183,14 @@ archscripts: scripts_basic
archheaders:
$(Q)$(MAKE) $(build)=arch/x86/syscalls all
archprepare:
ifeq ($(CONFIG_KEXEC),y)
# Build only for 64bit. No loaders for 32bit yet.
ifeq ($(CONFIG_X86_64),y)
$(Q)$(MAKE) $(build)=arch/x86/purgatory arch/x86/purgatory/kexec-purgatory.c
endif
endif
###
# Kernel objects


@ -5,6 +5,7 @@ genhdr-y += unistd_64.h
genhdr-y += unistd_x32.h
generic-y += clkdev.h
generic-y += early_ioremap.h
generic-y += cputime.h
generic-y += mcs_spinlock.h
generic-y += scatterlist.h


@ -0,0 +1,9 @@
#ifndef _ASM_X86_CRASH_H
#define _ASM_X86_CRASH_H
int crash_load_segments(struct kimage *image);
int crash_copy_backup_region(struct kimage *image);
int crash_setup_memmap_entries(struct kimage *image,
struct boot_params *params);
#endif /* _ASM_X86_CRASH_H */


@ -0,0 +1,6 @@
#ifndef _ASM_KEXEC_BZIMAGE64_H
#define _ASM_KEXEC_BZIMAGE64_H
extern struct kexec_file_ops kexec_bzImage64_ops;
#endif /* _ASM_KEXEC_BZIMAGE64_H */


@ -23,6 +23,9 @@
#include <asm/page.h>
#include <asm/ptrace.h>
#include <asm/bootparam.h>
struct kimage;
/*
* KEXEC_SOURCE_MEMORY_LIMIT maximum page get_free_page can return.
@ -61,6 +64,10 @@
# define KEXEC_ARCH KEXEC_ARCH_X86_64
#endif
/* Memory to backup during crash kdump */
#define KEXEC_BACKUP_SRC_START (0UL)
#define KEXEC_BACKUP_SRC_END (640 * 1024UL) /* 640K */
/*
* CPU does not save ss and sp on stack if execution is already
* running in kernel mode at the time of NMI occurrence. This code
@ -160,6 +167,44 @@ struct kimage_arch {
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
/* Details of backup region */
unsigned long backup_src_start;
unsigned long backup_src_sz;
/* Physical address of backup segment */
unsigned long backup_load_addr;
/* Core ELF header buffer */
void *elf_headers;
unsigned long elf_headers_sz;
unsigned long elf_load_addr;
};
#endif /* CONFIG_X86_32 */
#ifdef CONFIG_X86_64
/*
* Number of elements and order of elements in this structure should match
* with the ones in arch/x86/purgatory/entry64.S. If you make a change here
* make an appropriate change in purgatory too.
*/
struct kexec_entry64_regs {
uint64_t rax;
uint64_t rcx;
uint64_t rdx;
uint64_t rbx;
uint64_t rsp;
uint64_t rbp;
uint64_t rsi;
uint64_t rdi;
uint64_t r8;
uint64_t r9;
uint64_t r10;
uint64_t r11;
uint64_t r12;
uint64_t r13;
uint64_t r14;
uint64_t r15;
uint64_t rip;
};
#endif


@ -70,7 +70,6 @@ extern bool __virt_addr_valid(unsigned long kaddr);
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
#define __HAVE_ARCH_GATE_AREA 1
#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
#endif /* __KERNEL__ */


@ -39,4 +39,6 @@ void copy_page(void *to, void *from);
#endif /* !__ASSEMBLY__ */
#define __HAVE_ARCH_GATE_AREA 1
#endif /* _ASM_X86_PAGE_64_H */


@ -1,8 +0,0 @@
#ifndef _ASM_X86_SCATTERLIST_H
#define _ASM_X86_SCATTERLIST_H
#include <asm-generic/scatterlist.h>
#define ARCH_HAS_SG_CHAIN
#endif /* _ASM_X86_SCATTERLIST_H */


@ -118,4 +118,5 @@ ifeq ($(CONFIG_X86_64),y)
obj-$(CONFIG_PCI_MMCONFIG) += mmconf-fam10h_64.o
obj-y += vsmp_64.o
obj-$(CONFIG_KEXEC) += kexec-bzimage64.o
endif


@ -461,7 +461,7 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
if (strict_strtoul(buf, 10, &val) < 0)
if (kstrtoul(buf, 10, &val) < 0)
return -EINVAL;
err = amd_set_l3_disable_slot(this_leaf->base.nb, cpu, slot, val);
@ -511,7 +511,7 @@ store_subcaches(struct _cpuid4_info *this_leaf, const char *buf, size_t count,
if (!this_leaf->base.nb || !amd_nb_has_feature(AMD_NB_L3_PARTITIONING))
return -EINVAL;
if (strict_strtoul(buf, 16, &val) < 0)
if (kstrtoul(buf, 16, &val) < 0)
return -EINVAL;
if (amd_set_subcaches(cpu, val))


@ -2136,7 +2136,7 @@ static ssize_t set_bank(struct device *s, struct device_attribute *attr,
{
u64 new;
if (strict_strtoull(buf, 0, &new) < 0)
if (kstrtou64(buf, 0, &new) < 0)
return -EINVAL;
attr_to_bank(attr)->ctl = new;
@ -2174,7 +2174,7 @@ static ssize_t set_ignore_ce(struct device *s,
{
u64 new;
if (strict_strtoull(buf, 0, &new) < 0)
if (kstrtou64(buf, 0, &new) < 0)
return -EINVAL;
if (mca_cfg.ignore_ce ^ !!new) {
@ -2198,7 +2198,7 @@ static ssize_t set_cmci_disabled(struct device *s,
{
u64 new;
if (strict_strtoull(buf, 0, &new) < 0)
if (kstrtou64(buf, 0, &new) < 0)
return -EINVAL;
if (mca_cfg.cmci_disabled ^ !!new) {


@ -353,7 +353,7 @@ store_interrupt_enable(struct threshold_block *b, const char *buf, size_t size)
if (!b->interrupt_capable)
return -EINVAL;
if (strict_strtoul(buf, 0, &new) < 0)
if (kstrtoul(buf, 0, &new) < 0)
return -EINVAL;
b->interrupt_enable = !!new;
@ -372,7 +372,7 @@ store_threshold_limit(struct threshold_block *b, const char *buf, size_t size)
struct thresh_restart tr;
unsigned long new;
if (strict_strtoul(buf, 0, &new) < 0)
if (kstrtoul(buf, 0, &new) < 0)
return -EINVAL;
if (new > THRESHOLD_MAX)


@ -4,9 +4,14 @@
* Created by: Hariprasad Nellitheertha (hari@in.ibm.com)
*
* Copyright (C) IBM Corporation, 2004. All rights reserved.
* Copyright (C) Red Hat Inc., 2014. All rights reserved.
* Authors:
* Vivek Goyal <vgoyal@redhat.com>
*
*/
#define pr_fmt(fmt) "kexec: " fmt
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/smp.h>
@ -16,6 +21,7 @@
#include <linux/elf.h>
#include <linux/elfcore.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <asm/processor.h>
#include <asm/hardirq.h>
@ -28,6 +34,45 @@
#include <asm/reboot.h>
#include <asm/virtext.h>
/* Alignment required for elf header segment */
#define ELF_CORE_HEADER_ALIGN 4096
/* This primarily represents number of split ranges due to exclusion */
#define CRASH_MAX_RANGES 16
struct crash_mem_range {
u64 start, end;
};
struct crash_mem {
unsigned int nr_ranges;
struct crash_mem_range ranges[CRASH_MAX_RANGES];
};
/* Misc data about ram ranges needed to prepare elf headers */
struct crash_elf_data {
struct kimage *image;
/*
* Total number of ram ranges we have after various adjustments for
* GART, crash reserved region etc.
*/
unsigned int max_nr_ranges;
unsigned long gart_start, gart_end;
/* Pointer to elf header */
void *ehdr;
/* Pointer to next phdr */
void *bufp;
struct crash_mem mem;
};
/* Used while preparing memory map entries for second kernel */
struct crash_memmap_data {
struct boot_params *params;
/* Type of memory */
unsigned int type;
};
int in_crash_kexec;
/*
@ -39,6 +84,7 @@ int in_crash_kexec;
*/
crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss = NULL;
EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);
unsigned long crash_zero_bytes;
static inline void cpu_crash_vmclear_loaded_vmcss(void)
{
@ -135,3 +181,520 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
#endif
crash_save_cpu(regs, safe_smp_processor_id());
}
#ifdef CONFIG_X86_64
static int get_nr_ram_ranges_callback(unsigned long start_pfn,
unsigned long nr_pfn, void *arg)
{
int *nr_ranges = arg;
(*nr_ranges)++;
return 0;
}
static int get_gart_ranges_callback(u64 start, u64 end, void *arg)
{
struct crash_elf_data *ced = arg;
ced->gart_start = start;
ced->gart_end = end;
/* Not expecting more than 1 gart aperture */
return 1;
}
/* Gather all the required information to prepare elf headers for ram regions */
static void fill_up_crash_elf_data(struct crash_elf_data *ced,
struct kimage *image)
{
unsigned int nr_ranges = 0;
ced->image = image;
walk_system_ram_range(0, -1, &nr_ranges,
get_nr_ram_ranges_callback);
ced->max_nr_ranges = nr_ranges;
/*
* We don't create ELF headers for GART aperture as an attempt
* to dump this memory in second kernel leads to hang/crash.
* If gart aperture is present, one needs to exclude that region
* and that could lead to need of extra phdr.
*/
walk_iomem_res("GART", IORESOURCE_MEM, 0, -1,
ced, get_gart_ranges_callback);
/*
* If we have gart region, excluding that could potentially split
* a memory range, resulting in extra header. Account for that.
*/
if (ced->gart_end)
ced->max_nr_ranges++;
/* Exclusion of crash region could split memory ranges */
ced->max_nr_ranges++;
/* If crashk_low_res is not 0, another range split possible */
if (crashk_low_res.end != 0)
ced->max_nr_ranges++;
}
static int exclude_mem_range(struct crash_mem *mem,
unsigned long long mstart, unsigned long long mend)
{
int i, j;
unsigned long long start, end;
struct crash_mem_range temp_range = {0, 0};
for (i = 0; i < mem->nr_ranges; i++) {
start = mem->ranges[i].start;
end = mem->ranges[i].end;
if (mstart > end || mend < start)
continue;
/* Truncate any area outside of range */
if (mstart < start)
mstart = start;
if (mend > end)
mend = end;
/* Found completely overlapping range */
if (mstart == start && mend == end) {
mem->ranges[i].start = 0;
mem->ranges[i].end = 0;
if (i < mem->nr_ranges - 1) {
/* Shift rest of the ranges to left */
for (j = i; j < mem->nr_ranges - 1; j++) {
mem->ranges[j].start =
mem->ranges[j+1].start;
mem->ranges[j].end =
mem->ranges[j+1].end;
}
}
mem->nr_ranges--;
return 0;
}
if (mstart > start && mend < end) {
/* Split original range */
mem->ranges[i].end = mstart - 1;
temp_range.start = mend + 1;
temp_range.end = end;
} else if (mstart != start)
mem->ranges[i].end = mstart - 1;
else
mem->ranges[i].start = mend + 1;
break;
}
/* If a split happened, add the split to the array */
if (!temp_range.end)
return 0;
/* Split happened */
if (i == CRASH_MAX_RANGES - 1) {
pr_err("Too many crash ranges after split\n");
return -ENOMEM;
}
/* Location where new range should go */
j = i + 1;
if (j < mem->nr_ranges) {
/* Move over all ranges one slot towards the end */
for (i = mem->nr_ranges - 1; i >= j; i--)
mem->ranges[i + 1] = mem->ranges[i];
}
mem->ranges[j].start = temp_range.start;
mem->ranges[j].end = temp_range.end;
mem->nr_ranges++;
return 0;
}
/*
* Look for any unwanted ranges between mstart, mend and remove them. This
* might lead to split and split ranges are put in ced->mem.ranges[] array
*/
static int elf_header_exclude_ranges(struct crash_elf_data *ced,
unsigned long long mstart, unsigned long long mend)
{
struct crash_mem *cmem = &ced->mem;
int ret = 0;
memset(cmem->ranges, 0, sizeof(cmem->ranges));
cmem->ranges[0].start = mstart;
cmem->ranges[0].end = mend;
cmem->nr_ranges = 1;
/* Exclude crashkernel region */
ret = exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
if (ret)
return ret;
ret = exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
if (ret)
return ret;
/* Exclude GART region */
if (ced->gart_end) {
ret = exclude_mem_range(cmem, ced->gart_start, ced->gart_end);
if (ret)
return ret;
}
return ret;
}
static int prepare_elf64_ram_headers_callback(u64 start, u64 end, void *arg)
{
struct crash_elf_data *ced = arg;
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
unsigned long mstart, mend;
struct kimage *image = ced->image;
struct crash_mem *cmem;
int ret, i;
ehdr = ced->ehdr;
/* Exclude unwanted mem ranges */
ret = elf_header_exclude_ranges(ced, start, end);
if (ret)
return ret;
/* Go through all the ranges in ced->mem.ranges[] and prepare phdr */
cmem = &ced->mem;
for (i = 0; i < cmem->nr_ranges; i++) {
mstart = cmem->ranges[i].start;
mend = cmem->ranges[i].end;
phdr = ced->bufp;
ced->bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_LOAD;
phdr->p_flags = PF_R|PF_W|PF_X;
phdr->p_offset = mstart;
/*
* If a range matches backup region, adjust offset to backup
* segment.
*/
if (mstart == image->arch.backup_src_start &&
(mend - mstart + 1) == image->arch.backup_src_sz)
phdr->p_offset = image->arch.backup_load_addr;
phdr->p_paddr = mstart;
phdr->p_vaddr = (unsigned long long) __va(mstart);
phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
phdr->p_align = 0;
ehdr->e_phnum++;
pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
ehdr->e_phnum, phdr->p_offset);
}
return ret;
}
static int prepare_elf64_headers(struct crash_elf_data *ced,
void **addr, unsigned long *sz)
{
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
unsigned char *buf, *bufp;
unsigned int cpu;
unsigned long long notes_addr;
int ret;
/* extra phdr for vmcoreinfo elf note */
nr_phdr = nr_cpus + 1;
nr_phdr += ced->max_nr_ranges;
/*
* kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
* area on x86_64 (ffffffff80000000 - ffffffffa0000000).
* I think this is required by tools like gdb. So same physical
* memory will be mapped in two elf headers. One will contain kernel
* text virtual addresses and other will have __va(physical) addresses.
*/
nr_phdr++;
elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);
buf = vzalloc(elf_sz);
if (!buf)
return -ENOMEM;
bufp = buf;
ehdr = (Elf64_Ehdr *)bufp;
bufp += sizeof(Elf64_Ehdr);
memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
ehdr->e_ident[EI_CLASS] = ELFCLASS64;
ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
ehdr->e_ident[EI_VERSION] = EV_CURRENT;
ehdr->e_ident[EI_OSABI] = ELF_OSABI;
memset(ehdr->e_ident + EI_PAD, 0, EI_NIDENT - EI_PAD);
ehdr->e_type = ET_CORE;
ehdr->e_machine = ELF_ARCH;
ehdr->e_version = EV_CURRENT;
ehdr->e_phoff = sizeof(Elf64_Ehdr);
ehdr->e_ehsize = sizeof(Elf64_Ehdr);
ehdr->e_phentsize = sizeof(Elf64_Phdr);
/* Prepare one phdr of type PT_NOTE for each present cpu */
for_each_present_cpu(cpu) {
phdr = (Elf64_Phdr *)bufp;
bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_NOTE;
notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
phdr->p_offset = phdr->p_paddr = notes_addr;
phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
(ehdr->e_phnum)++;
}
/* Prepare one PT_NOTE header for vmcoreinfo */
phdr = (Elf64_Phdr *)bufp;
bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_NOTE;
phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
phdr->p_filesz = phdr->p_memsz = sizeof(vmcoreinfo_note);
(ehdr->e_phnum)++;
#ifdef CONFIG_X86_64
/* Prepare PT_LOAD type program header for kernel text region */
phdr = (Elf64_Phdr *)bufp;
bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_LOAD;
phdr->p_flags = PF_R|PF_W|PF_X;
phdr->p_vaddr = (Elf64_Addr)_text;
phdr->p_filesz = phdr->p_memsz = _end - _text;
phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
(ehdr->e_phnum)++;
#endif
/* Prepare PT_LOAD headers for system ram chunks. */
ced->ehdr = ehdr;
ced->bufp = bufp;
ret = walk_system_ram_res(0, -1, ced,
prepare_elf64_ram_headers_callback);
if (ret < 0)
return ret;
*addr = buf;
*sz = elf_sz;
return 0;
}
/* Prepare elf headers. Return addr and size */
static int prepare_elf_headers(struct kimage *image, void **addr,
unsigned long *sz)
{
struct crash_elf_data *ced;
int ret;
ced = kzalloc(sizeof(*ced), GFP_KERNEL);
if (!ced)
return -ENOMEM;
fill_up_crash_elf_data(ced, image);
/* By default prepare 64bit headers */
ret = prepare_elf64_headers(ced, addr, sz);
kfree(ced);
return ret;
}
static int add_e820_entry(struct boot_params *params, struct e820entry *entry)
{
unsigned int nr_e820_entries;
nr_e820_entries = params->e820_entries;
if (nr_e820_entries >= E820MAX)
return 1;
memcpy(&params->e820_map[nr_e820_entries], entry,
sizeof(struct e820entry));
params->e820_entries++;
return 0;
}
static int memmap_entry_callback(u64 start, u64 end, void *arg)
{
struct crash_memmap_data *cmd = arg;
struct boot_params *params = cmd->params;
struct e820entry ei;
ei.addr = start;
ei.size = end - start + 1;
ei.type = cmd->type;
add_e820_entry(params, &ei);
return 0;
}
static int memmap_exclude_ranges(struct kimage *image, struct crash_mem *cmem,
unsigned long long mstart,
unsigned long long mend)
{
unsigned long start, end;
int ret = 0;
cmem->ranges[0].start = mstart;
cmem->ranges[0].end = mend;
cmem->nr_ranges = 1;
/* Exclude Backup region */
start = image->arch.backup_load_addr;
end = start + image->arch.backup_src_sz - 1;
ret = exclude_mem_range(cmem, start, end);
if (ret)
return ret;
/* Exclude elf header region */
start = image->arch.elf_load_addr;
end = start + image->arch.elf_headers_sz - 1;
return exclude_mem_range(cmem, start, end);
}
/* Prepare memory map for crash dump kernel */
int crash_setup_memmap_entries(struct kimage *image, struct boot_params *params)
{
int i, ret = 0;
unsigned long flags;
struct e820entry ei;
struct crash_memmap_data cmd;
struct crash_mem *cmem;
cmem = vzalloc(sizeof(struct crash_mem));
if (!cmem)
return -ENOMEM;
memset(&cmd, 0, sizeof(struct crash_memmap_data));
cmd.params = params;
/* Add first 640K segment */
ei.addr = image->arch.backup_src_start;
ei.size = image->arch.backup_src_sz;
ei.type = E820_RAM;
add_e820_entry(params, &ei);
/* Add ACPI tables */
cmd.type = E820_ACPI;
flags = IORESOURCE_MEM | IORESOURCE_BUSY;
walk_iomem_res("ACPI Tables", flags, 0, -1, &cmd,
memmap_entry_callback);
/* Add ACPI Non-volatile Storage */
cmd.type = E820_NVS;
walk_iomem_res("ACPI Non-volatile Storage", flags, 0, -1, &cmd,
memmap_entry_callback);
/* Add crashk_low_res region */
if (crashk_low_res.end) {
ei.addr = crashk_low_res.start;
ei.size = crashk_low_res.end - crashk_low_res.start + 1;
ei.type = E820_RAM;
add_e820_entry(params, &ei);
}
/* Exclude some ranges from crashk_res and add rest to memmap */
ret = memmap_exclude_ranges(image, cmem, crashk_res.start,
crashk_res.end);
if (ret)
goto out;
for (i = 0; i < cmem->nr_ranges; i++) {
ei.size = cmem->ranges[i].end - cmem->ranges[i].start + 1;
/* If entry is less than a page, skip it */
if (ei.size < PAGE_SIZE)
continue;
ei.addr = cmem->ranges[i].start;
ei.type = E820_RAM;
add_e820_entry(params, &ei);
}
out:
vfree(cmem);
return ret;
}
static int determine_backup_region(u64 start, u64 end, void *arg)
{
struct kimage *image = arg;
image->arch.backup_src_start = start;
image->arch.backup_src_sz = end - start + 1;
/* Expecting only one range for backup region */
return 1;
}
int crash_load_segments(struct kimage *image)
{
unsigned long src_start, src_sz, elf_sz;
void *elf_addr;
int ret;
/*
* Determine and load a segment for backup area. First 640K RAM
* region is backup source
*/
ret = walk_system_ram_res(KEXEC_BACKUP_SRC_START, KEXEC_BACKUP_SRC_END,
image, determine_backup_region);
/* Zero or positive return values are ok */
if (ret < 0)
return ret;
src_start = image->arch.backup_src_start;
src_sz = image->arch.backup_src_sz;
/* Add backup segment. */
if (src_sz) {
/*
* Ideally there is no source for backup segment. This is
* copied in purgatory after crash. Just add a zero filled
* segment for now to make sure checksum logic works fine.
*/
ret = kexec_add_buffer(image, (char *)&crash_zero_bytes,
sizeof(crash_zero_bytes), src_sz,
PAGE_SIZE, 0, -1, 0,
&image->arch.backup_load_addr);
if (ret)
return ret;
pr_debug("Loaded backup region at 0x%lx backup_start=0x%lx memsz=0x%lx\n",
image->arch.backup_load_addr, src_start, src_sz);
}
/* Prepare elf headers and add a segment */
ret = prepare_elf_headers(image, &elf_addr, &elf_sz);
if (ret)
return ret;
image->arch.elf_headers = elf_addr;
image->arch.elf_headers_sz = elf_sz;
ret = kexec_add_buffer(image, (char *)elf_addr, elf_sz, elf_sz,
ELF_CORE_HEADER_ALIGN, 0, -1, 0,
&image->arch.elf_load_addr);
if (ret) {
vfree((void *)image->arch.elf_headers);
return ret;
}
pr_debug("Loaded ELF headers at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
image->arch.elf_load_addr, elf_sz, elf_sz);
return ret;
}
#endif /* CONFIG_X86_64 */
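The crash memmap path above repeatedly carves reserved regions (the backup area, the ELF header segment) out of a range list via exclude_mem_range(), which is not shown in this hunk. A minimal self-contained sketch of that splitting logic, with hypothetical helper names rather than the kernel implementation:

```c
#include <assert.h>

struct range { unsigned long long start, end; };

/* Remove [mstart, mend] from a single inclusive range, possibly splitting
 * it in two. Returns the number of resulting ranges (0, 1, or 2). */
static int exclude_from_range(struct range in, unsigned long long mstart,
			      unsigned long long mend, struct range out[2])
{
	int n = 0;

	if (mend < in.start || mstart > in.end) {
		out[n++] = in;		/* no overlap: range unchanged */
		return n;
	}
	if (in.start < mstart)		/* keep the piece below the hole */
		out[n++] = (struct range){ in.start, mstart - 1 };
	if (mend < in.end)		/* keep the piece above the hole */
		out[n++] = (struct range){ mend + 1, in.end };
	return n;
}
```

Excluding [0x1000, 0x1fff] from [0, 0xffff] leaves two pieces, [0, 0xfff] and [0x2000, 0xffff], which is exactly the shape crash_setup_memmap_entries() then turns into E820_RAM entries.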

@@ -0,0 +1,553 @@
/*
* Kexec bzImage loader
*
* Copyright (C) 2014 Red Hat Inc.
* Authors:
* Vivek Goyal <vgoyal@redhat.com>
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
#define pr_fmt(fmt) "kexec-bzImage64: " fmt
#include <linux/string.h>
#include <linux/printk.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/kexec.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/efi.h>
#include <linux/verify_pefile.h>
#include <keys/system_keyring.h>
#include <asm/bootparam.h>
#include <asm/setup.h>
#include <asm/crash.h>
#include <asm/efi.h>
#define MAX_ELFCOREHDR_STR_LEN 30 /* elfcorehdr=0x<64bit-value> */
/*
* Defines the lowest physical address for various segments. It is not clear
* where exactly these limits came from. The current bzImage64 loader in
* kexec-tools uses them, so they are retained here. They can be changed over
* time as we gain more insight.
*/
#define MIN_PURGATORY_ADDR 0x3000
#define MIN_BOOTPARAM_ADDR 0x3000
#define MIN_KERNEL_LOAD_ADDR 0x100000
#define MIN_INITRD_LOAD_ADDR 0x1000000
/*
* This is a placeholder for all bootloader-specific data structures which
* get allocated in one call but freed much later, during cleanup. Right now
* there is only one field, but it can grow as needed.
*/
struct bzimage64_data {
/*
* Temporary buffer to hold bootparams buffer. This should be
* freed once the bootparam segment has been loaded.
*/
void *bootparams_buf;
};
static int setup_initrd(struct boot_params *params,
unsigned long initrd_load_addr, unsigned long initrd_len)
{
params->hdr.ramdisk_image = initrd_load_addr & 0xffffffffUL;
params->hdr.ramdisk_size = initrd_len & 0xffffffffUL;
params->ext_ramdisk_image = initrd_load_addr >> 32;
params->ext_ramdisk_size = initrd_len >> 32;
return 0;
}
static int setup_cmdline(struct kimage *image, struct boot_params *params,
unsigned long bootparams_load_addr,
unsigned long cmdline_offset, char *cmdline,
unsigned long cmdline_len)
{
char *cmdline_ptr = ((char *)params) + cmdline_offset;
unsigned long cmdline_ptr_phys, len;
uint32_t cmdline_low_32, cmdline_ext_32;
memcpy(cmdline_ptr, cmdline, cmdline_len);
if (image->type == KEXEC_TYPE_CRASH) {
len = sprintf(cmdline_ptr + cmdline_len - 1,
" elfcorehdr=0x%lx", image->arch.elf_load_addr);
cmdline_len += len;
}
cmdline_ptr[cmdline_len - 1] = '\0';
pr_debug("Final command line is: %s\n", cmdline_ptr);
cmdline_ptr_phys = bootparams_load_addr + cmdline_offset;
cmdline_low_32 = cmdline_ptr_phys & 0xffffffffUL;
cmdline_ext_32 = cmdline_ptr_phys >> 32;
params->hdr.cmd_line_ptr = cmdline_low_32;
if (cmdline_ext_32)
params->ext_cmd_line_ptr = cmdline_ext_32;
return 0;
}
static int setup_e820_entries(struct boot_params *params)
{
unsigned int nr_e820_entries;
nr_e820_entries = e820_saved.nr_map;
/* TODO: Pass entries more than E820MAX in bootparams setup data */
if (nr_e820_entries > E820MAX)
nr_e820_entries = E820MAX;
params->e820_entries = nr_e820_entries;
memcpy(&params->e820_map, &e820_saved.map,
nr_e820_entries * sizeof(struct e820entry));
return 0;
}
#ifdef CONFIG_EFI
static int setup_efi_info_memmap(struct boot_params *params,
unsigned long params_load_addr,
unsigned int efi_map_offset,
unsigned int efi_map_sz)
{
void *efi_map = (void *)params + efi_map_offset;
unsigned long efi_map_phys_addr = params_load_addr + efi_map_offset;
struct efi_info *ei = &params->efi_info;
if (!efi_map_sz)
return 0;
efi_runtime_map_copy(efi_map, efi_map_sz);
ei->efi_memmap = efi_map_phys_addr & 0xffffffff;
ei->efi_memmap_hi = efi_map_phys_addr >> 32;
ei->efi_memmap_size = efi_map_sz;
return 0;
}
static int
prepare_add_efi_setup_data(struct boot_params *params,
unsigned long params_load_addr,
unsigned int efi_setup_data_offset)
{
unsigned long setup_data_phys;
struct setup_data *sd = (void *)params + efi_setup_data_offset;
struct efi_setup_data *esd = (void *)sd + sizeof(struct setup_data);
esd->fw_vendor = efi.fw_vendor;
esd->runtime = efi.runtime;
esd->tables = efi.config_table;
esd->smbios = efi.smbios;
sd->type = SETUP_EFI;
sd->len = sizeof(struct efi_setup_data);
/* Add setup data */
setup_data_phys = params_load_addr + efi_setup_data_offset;
sd->next = params->hdr.setup_data;
params->hdr.setup_data = setup_data_phys;
return 0;
}
static int
setup_efi_state(struct boot_params *params, unsigned long params_load_addr,
unsigned int efi_map_offset, unsigned int efi_map_sz,
unsigned int efi_setup_data_offset)
{
struct efi_info *current_ei = &boot_params.efi_info;
struct efi_info *ei = &params->efi_info;
if (!current_ei->efi_memmap_size)
return 0;
/*
* If 1:1 mapping is not enabled, the second kernel cannot set up EFI
* and use EFI run time services. User space will have to pass
* acpi_rsdp=<addr> on the kernel command line to make the second kernel
* boot without EFI.
*/
if (efi_enabled(EFI_OLD_MEMMAP))
return 0;
ei->efi_loader_signature = current_ei->efi_loader_signature;
ei->efi_systab = current_ei->efi_systab;
ei->efi_systab_hi = current_ei->efi_systab_hi;
ei->efi_memdesc_version = current_ei->efi_memdesc_version;
ei->efi_memdesc_size = efi_get_runtime_map_desc_size();
setup_efi_info_memmap(params, params_load_addr, efi_map_offset,
efi_map_sz);
prepare_add_efi_setup_data(params, params_load_addr,
efi_setup_data_offset);
return 0;
}
#endif /* CONFIG_EFI */
static int
setup_boot_parameters(struct kimage *image, struct boot_params *params,
unsigned long params_load_addr,
unsigned int efi_map_offset, unsigned int efi_map_sz,
unsigned int efi_setup_data_offset)
{
unsigned int nr_e820_entries;
unsigned long long mem_k, start, end;
int i, ret = 0;
/* Get subarch from existing bootparams */
params->hdr.hardware_subarch = boot_params.hdr.hardware_subarch;
/* Copying screen_info will do? */
memcpy(&params->screen_info, &boot_params.screen_info,
sizeof(struct screen_info));
/* Fill in memsize later */
params->screen_info.ext_mem_k = 0;
params->alt_mem_k = 0;
/* Default APM info */
memset(&params->apm_bios_info, 0, sizeof(params->apm_bios_info));
/* Default drive info */
memset(&params->hd0_info, 0, sizeof(params->hd0_info));
memset(&params->hd1_info, 0, sizeof(params->hd1_info));
/* Default sysdesc table */
params->sys_desc_table.length = 0;
if (image->type == KEXEC_TYPE_CRASH) {
ret = crash_setup_memmap_entries(image, params);
if (ret)
return ret;
} else
setup_e820_entries(params);
nr_e820_entries = params->e820_entries;
for (i = 0; i < nr_e820_entries; i++) {
if (params->e820_map[i].type != E820_RAM)
continue;
start = params->e820_map[i].addr;
end = params->e820_map[i].addr + params->e820_map[i].size - 1;
if ((start <= 0x100000) && end > 0x100000) {
mem_k = (end >> 10) - (0x100000 >> 10);
params->screen_info.ext_mem_k = mem_k;
params->alt_mem_k = mem_k;
if (mem_k > 0xfc00)
params->screen_info.ext_mem_k = 0xfc00; /* 64M*/
if (mem_k > 0xffffffff)
params->alt_mem_k = 0xffffffff;
}
}
#ifdef CONFIG_EFI
/* Setup EFI state */
setup_efi_state(params, params_load_addr, efi_map_offset, efi_map_sz,
efi_setup_data_offset);
#endif
/* Setup EDD info */
memcpy(params->eddbuf, boot_params.eddbuf,
EDDMAXNR * sizeof(struct edd_info));
params->eddbuf_entries = boot_params.eddbuf_entries;
memcpy(params->edd_mbr_sig_buffer, boot_params.edd_mbr_sig_buffer,
EDD_MBR_SIG_MAX * sizeof(unsigned int));
return ret;
}
int bzImage64_probe(const char *buf, unsigned long len)
{
int ret = -ENOEXEC;
struct setup_header *header;
/* kernel should be at least two sectors long */
if (len < 2 * 512) {
pr_err("File is too short to be a bzImage\n");
return ret;
}
header = (struct setup_header *)(buf + offsetof(struct boot_params, hdr));
if (memcmp((char *)&header->header, "HdrS", 4) != 0) {
pr_err("Not a bzImage\n");
return ret;
}
if (header->boot_flag != 0xAA55) {
pr_err("No x86 boot sector present\n");
return ret;
}
if (header->version < 0x020C) {
pr_err("Must be at least protocol version 2.12\n");
return ret;
}
if (!(header->loadflags & LOADED_HIGH)) {
pr_err("zImage not a bzImage\n");
return ret;
}
if (!(header->xloadflags & XLF_KERNEL_64)) {
pr_err("Not a bzImage64. XLF_KERNEL_64 is not set.\n");
return ret;
}
if (!(header->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G)) {
pr_err("XLF_CAN_BE_LOADED_ABOVE_4G is not set.\n");
return ret;
}
/*
* Can't handle 32bit EFI as it does not allow loading the kernel
* above 4G. This should be handled by a 32bit bzImage loader.
*/
if (efi_enabled(EFI_RUNTIME_SERVICES) && !efi_enabled(EFI_64BIT)) {
pr_debug("EFI is 32 bit. Can't load kernel above 4G.\n");
return ret;
}
/* I've got a bzImage */
pr_debug("It's a relocatable bzImage64\n");
ret = 0;
return ret;
}
void *bzImage64_load(struct kimage *image, char *kernel,
unsigned long kernel_len, char *initrd,
unsigned long initrd_len, char *cmdline,
unsigned long cmdline_len)
{
struct setup_header *header;
int setup_sects, kern16_size, ret = 0;
unsigned long setup_header_size, params_cmdline_sz, params_misc_sz;
struct boot_params *params;
unsigned long bootparam_load_addr, kernel_load_addr, initrd_load_addr;
unsigned long purgatory_load_addr;
unsigned long kernel_bufsz, kernel_memsz, kernel_align;
char *kernel_buf;
struct bzimage64_data *ldata;
struct kexec_entry64_regs regs64;
void *stack;
unsigned int setup_hdr_offset = offsetof(struct boot_params, hdr);
unsigned int efi_map_offset, efi_map_sz, efi_setup_data_offset;
header = (struct setup_header *)(kernel + setup_hdr_offset);
setup_sects = header->setup_sects;
if (setup_sects == 0)
setup_sects = 4;
kern16_size = (setup_sects + 1) * 512;
if (kernel_len < kern16_size) {
pr_err("bzImage truncated\n");
return ERR_PTR(-ENOEXEC);
}
if (cmdline_len > header->cmdline_size) {
pr_err("Kernel command line too long\n");
return ERR_PTR(-EINVAL);
}
/*
* In case of crash dump, we will append elfcorehdr=<addr> to
* command line. Make sure it does not overflow
*/
if (cmdline_len + MAX_ELFCOREHDR_STR_LEN > header->cmdline_size) {
pr_debug("Appending elfcorehdr=<addr> to command line exceeds maximum allowed length\n");
return ERR_PTR(-EINVAL);
}
/* Allocate and load backup region */
if (image->type == KEXEC_TYPE_CRASH) {
ret = crash_load_segments(image);
if (ret)
return ERR_PTR(ret);
}
/*
* Load purgatory. For 64bit entry point, purgatory code can be
* anywhere.
*/
ret = kexec_load_purgatory(image, MIN_PURGATORY_ADDR, ULONG_MAX, 1,
&purgatory_load_addr);
if (ret) {
pr_err("Loading purgatory failed\n");
return ERR_PTR(ret);
}
pr_debug("Loaded purgatory at 0x%lx\n", purgatory_load_addr);
/*
* Load bootparams, the command line, and space for EFI data.
*
* Allocate memory for these data structures together so that they can
* all go in a single area/segment and we don't have to create a
* separate segment for each. Keeps things a little simpler.
*/
efi_map_sz = efi_get_runtime_map_size();
efi_map_sz = ALIGN(efi_map_sz, 16);
params_cmdline_sz = sizeof(struct boot_params) + cmdline_len +
MAX_ELFCOREHDR_STR_LEN;
params_cmdline_sz = ALIGN(params_cmdline_sz, 16);
params_misc_sz = params_cmdline_sz + efi_map_sz +
sizeof(struct setup_data) +
sizeof(struct efi_setup_data);
params = kzalloc(params_misc_sz, GFP_KERNEL);
if (!params)
return ERR_PTR(-ENOMEM);
efi_map_offset = params_cmdline_sz;
efi_setup_data_offset = efi_map_offset + efi_map_sz;
/* Copy setup header onto bootparams. Documentation/x86/boot.txt */
setup_header_size = 0x0202 + kernel[0x0201] - setup_hdr_offset;
/* Is there a limit on setup header size? */
memcpy(&params->hdr, (kernel + setup_hdr_offset), setup_header_size);
ret = kexec_add_buffer(image, (char *)params, params_misc_sz,
params_misc_sz, 16, MIN_BOOTPARAM_ADDR,
ULONG_MAX, 1, &bootparam_load_addr);
if (ret)
goto out_free_params;
pr_debug("Loaded boot_param, command line and misc at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
bootparam_load_addr, params_misc_sz, params_misc_sz);
/* Load kernel */
kernel_buf = kernel + kern16_size;
kernel_bufsz = kernel_len - kern16_size;
kernel_memsz = PAGE_ALIGN(header->init_size);
kernel_align = header->kernel_alignment;
ret = kexec_add_buffer(image, kernel_buf,
kernel_bufsz, kernel_memsz, kernel_align,
MIN_KERNEL_LOAD_ADDR, ULONG_MAX, 1,
&kernel_load_addr);
if (ret)
goto out_free_params;
pr_debug("Loaded 64bit kernel at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
kernel_load_addr, kernel_memsz, kernel_memsz);
/* Load initrd high */
if (initrd) {
ret = kexec_add_buffer(image, initrd, initrd_len, initrd_len,
PAGE_SIZE, MIN_INITRD_LOAD_ADDR,
ULONG_MAX, 1, &initrd_load_addr);
if (ret)
goto out_free_params;
pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
initrd_load_addr, initrd_len, initrd_len);
setup_initrd(params, initrd_load_addr, initrd_len);
}
setup_cmdline(image, params, bootparam_load_addr,
sizeof(struct boot_params), cmdline, cmdline_len);
/* bootloader info. Do we need a separate ID for kexec kernel loader? */
params->hdr.type_of_loader = 0x0D << 4;
params->hdr.loadflags = 0;
/* Setup purgatory regs for entry */
ret = kexec_purgatory_get_set_symbol(image, "entry64_regs", &regs64,
sizeof(regs64), 1);
if (ret)
goto out_free_params;
regs64.rbx = 0; /* Bootstrap Processor */
regs64.rsi = bootparam_load_addr;
regs64.rip = kernel_load_addr + 0x200;
stack = kexec_purgatory_get_symbol_addr(image, "stack_end");
if (IS_ERR(stack)) {
pr_err("Could not find address of symbol stack_end\n");
ret = -EINVAL;
goto out_free_params;
}
regs64.rsp = (unsigned long)stack;
ret = kexec_purgatory_get_set_symbol(image, "entry64_regs", &regs64,
sizeof(regs64), 0);
if (ret)
goto out_free_params;
ret = setup_boot_parameters(image, params, bootparam_load_addr,
efi_map_offset, efi_map_sz,
efi_setup_data_offset);
if (ret)
goto out_free_params;
/* Allocate loader specific data */
ldata = kzalloc(sizeof(struct bzimage64_data), GFP_KERNEL);
if (!ldata) {
ret = -ENOMEM;
goto out_free_params;
}
/*
* Store pointer to params so that it could be freed after loading
* params segment has been loaded and contents have been copied
* somewhere else.
*/
ldata->bootparams_buf = params;
return ldata;
out_free_params:
kfree(params);
return ERR_PTR(ret);
}
/* This cleanup function is called after various segments have been loaded */
int bzImage64_cleanup(void *loader_data)
{
struct bzimage64_data *ldata = loader_data;
if (!ldata)
return 0;
kfree(ldata->bootparams_buf);
ldata->bootparams_buf = NULL;
return 0;
}
#ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
{
bool trusted;
int ret;
ret = verify_pefile_signature(kernel, kernel_len,
system_trusted_keyring, &trusted);
if (ret < 0)
return ret;
if (!trusted)
return -EKEYREJECTED;
return 0;
}
#endif
struct kexec_file_ops kexec_bzImage64_ops = {
.probe = bzImage64_probe,
.load = bzImage64_load,
.cleanup = bzImage64_cleanup,
#ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
.verify_sig = bzImage64_verify_sig,
#endif
};
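bzImage64_probe() above walks the x86 boot-protocol header fields (see Documentation/x86/boot.txt): the 0xAA55 boot flag, the "HdrS" magic, and the minimum protocol version. A toy standalone version of those first checks over a raw byte buffer — the offsets come from the boot protocol, but the helper itself is illustrative, not the kernel code:

```c
#include <string.h>

/* Offsets within a bzImage file, per the x86 boot protocol */
#define OFS_BOOT_FLAG	0x1fe	/* u16, must be 0xAA55 (little endian) */
#define OFS_HEADER	0x202	/* u32, must be "HdrS" */
#define OFS_VERSION	0x206	/* u16, boot protocol version */

static int looks_like_bzimage64(const unsigned char *buf, unsigned long len)
{
	if (len < 2 * 512)	/* kernel should be at least two sectors */
		return 0;
	if (buf[OFS_BOOT_FLAG] != 0x55 || buf[OFS_BOOT_FLAG + 1] != 0xaa)
		return 0;	/* no x86 boot sector present */
	if (memcmp(buf + OFS_HEADER, "HdrS", 4) != 0)
		return 0;	/* not a bzImage */
	if ((buf[OFS_VERSION] | (buf[OFS_VERSION + 1] << 8)) < 0x020c)
		return 0;	/* need at least protocol version 2.12 */
	return 1;
}
```

The real probe additionally requires LOADED_HIGH, XLF_KERNEL_64, and XLF_CAN_BE_LOADED_ABOVE_4G, which have no meaning outside a genuine setup_header.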

@@ -6,6 +6,8 @@
* Version 2. See the file COPYING for more details.
*/
#define pr_fmt(fmt) "kexec: " fmt
#include <linux/mm.h>
#include <linux/kexec.h>
#include <linux/string.h>
@@ -21,6 +23,11 @@
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/debugreg.h>
#include <asm/kexec-bzimage64.h>
static struct kexec_file_ops *kexec_file_loaders[] = {
&kexec_bzImage64_ops,
};
static void free_transition_pgtable(struct kimage *image)
{
@@ -171,6 +178,38 @@ static void load_segments(void)
);
}
/* Update purgatory as needed after various image segments have been prepared */
static int arch_update_purgatory(struct kimage *image)
{
int ret = 0;
if (!image->file_mode)
return 0;
/* Setup copying of backup region */
if (image->type == KEXEC_TYPE_CRASH) {
ret = kexec_purgatory_get_set_symbol(image, "backup_dest",
&image->arch.backup_load_addr,
sizeof(image->arch.backup_load_addr), 0);
if (ret)
return ret;
ret = kexec_purgatory_get_set_symbol(image, "backup_src",
&image->arch.backup_src_start,
sizeof(image->arch.backup_src_start), 0);
if (ret)
return ret;
ret = kexec_purgatory_get_set_symbol(image, "backup_sz",
&image->arch.backup_src_sz,
sizeof(image->arch.backup_src_sz), 0);
if (ret)
return ret;
}
return ret;
}
int machine_kexec_prepare(struct kimage *image)
{
unsigned long start_pgtable;
@@ -184,6 +223,11 @@ int machine_kexec_prepare(struct kimage *image)
if (result)
return result;
/* update purgatory as needed */
result = arch_update_purgatory(image);
if (result)
return result;
return 0;
}
@@ -283,3 +327,198 @@ void arch_crash_save_vmcoreinfo(void)
(unsigned long)&_text - __START_KERNEL);
}
/* arch-dependent functionality related to kexec file-based syscall */
int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
int i, ret = -ENOEXEC;
struct kexec_file_ops *fops;
for (i = 0; i < ARRAY_SIZE(kexec_file_loaders); i++) {
fops = kexec_file_loaders[i];
if (!fops || !fops->probe)
continue;
ret = fops->probe(buf, buf_len);
if (!ret) {
image->fops = fops;
return ret;
}
}
return ret;
}
void *arch_kexec_kernel_image_load(struct kimage *image)
{
vfree(image->arch.elf_headers);
image->arch.elf_headers = NULL;
if (!image->fops || !image->fops->load)
return ERR_PTR(-ENOEXEC);
return image->fops->load(image, image->kernel_buf,
image->kernel_buf_len, image->initrd_buf,
image->initrd_buf_len, image->cmdline_buf,
image->cmdline_buf_len);
}
int arch_kimage_file_post_load_cleanup(struct kimage *image)
{
if (!image->fops || !image->fops->cleanup)
return 0;
return image->fops->cleanup(image->image_loader_data);
}
int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
unsigned long kernel_len)
{
if (!image->fops || !image->fops->verify_sig) {
pr_debug("kernel loader does not support signature verification.\n");
return -EKEYREJECTED;
}
return image->fops->verify_sig(kernel, kernel_len);
}
/*
* Apply purgatory relocations.
*
* ehdr: Pointer to elf headers
* sechdrs: Pointer to section headers.
* relsec: section index of SHT_RELA section.
*
* TODO: Some of the code belongs to generic code. Move that in kexec.c.
*/
int arch_kexec_apply_relocations_add(const Elf64_Ehdr *ehdr,
Elf64_Shdr *sechdrs, unsigned int relsec)
{
unsigned int i;
Elf64_Rela *rel;
Elf64_Sym *sym;
void *location;
Elf64_Shdr *section, *symtabsec;
unsigned long address, sec_base, value;
const char *strtab, *name, *shstrtab;
/*
* ->sh_offset has been modified to keep the pointer to section
* contents in memory
*/
rel = (void *)sechdrs[relsec].sh_offset;
/* Section to which relocations apply */
section = &sechdrs[sechdrs[relsec].sh_info];
pr_debug("Applying relocate section %u to %u\n", relsec,
sechdrs[relsec].sh_info);
/* Associated symbol table */
symtabsec = &sechdrs[sechdrs[relsec].sh_link];
/* String table */
if (symtabsec->sh_link >= ehdr->e_shnum) {
/* Invalid strtab section number */
pr_err("Invalid string table section index %d\n",
symtabsec->sh_link);
return -ENOEXEC;
}
strtab = (char *)sechdrs[symtabsec->sh_link].sh_offset;
/* section header string table */
shstrtab = (char *)sechdrs[ehdr->e_shstrndx].sh_offset;
for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) {
/*
* rel[i].r_offset contains the byte offset from the beginning
* of the section to the storage unit affected.
*
* This is the location to update (->sh_offset): the temporary
* buffer where the section is currently loaded. It will finally
* be loaded to a different address later, pointed to by
* ->sh_addr. kexec takes care of moving it
* (kexec_load_segment()).
*/
location = (void *)(section->sh_offset + rel[i].r_offset);
/* Final address of the location */
address = section->sh_addr + rel[i].r_offset;
/*
* rel[i].r_info contains the symbol table index with respect to
* which the relocation must be made, and the type of relocation
* to apply. ELF64_R_SYM() and ELF64_R_TYPE() extract these
* respectively.
*/
sym = (Elf64_Sym *)symtabsec->sh_offset +
ELF64_R_SYM(rel[i].r_info);
if (sym->st_name)
name = strtab + sym->st_name;
else
name = shstrtab + sechdrs[sym->st_shndx].sh_name;
pr_debug("Symbol: %s info: %02x shndx: %02x value=%llx size: %llx\n",
name, sym->st_info, sym->st_shndx, sym->st_value,
sym->st_size);
if (sym->st_shndx == SHN_UNDEF) {
pr_err("Undefined symbol: %s\n", name);
return -ENOEXEC;
}
if (sym->st_shndx == SHN_COMMON) {
pr_err("symbol '%s' in common section\n", name);
return -ENOEXEC;
}
if (sym->st_shndx == SHN_ABS)
sec_base = 0;
else if (sym->st_shndx >= ehdr->e_shnum) {
pr_err("Invalid section %d for symbol %s\n",
sym->st_shndx, name);
return -ENOEXEC;
} else
sec_base = sechdrs[sym->st_shndx].sh_addr;
value = sym->st_value;
value += sec_base;
value += rel[i].r_addend;
switch (ELF64_R_TYPE(rel[i].r_info)) {
case R_X86_64_NONE:
break;
case R_X86_64_64:
*(u64 *)location = value;
break;
case R_X86_64_32:
*(u32 *)location = value;
if (value != *(u32 *)location)
goto overflow;
break;
case R_X86_64_32S:
*(s32 *)location = value;
if ((s64)value != *(s32 *)location)
goto overflow;
break;
case R_X86_64_PC32:
value -= (u64)address;
*(u32 *)location = value;
break;
default:
pr_err("Unknown rela relocation: %llu\n",
ELF64_R_TYPE(rel[i].r_info));
return -ENOEXEC;
}
}
return 0;
overflow:
pr_err("Overflow in relocation type %d value 0x%lx\n",
(int)ELF64_R_TYPE(rel[i].r_info), value);
return -ENOEXEC;
}
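The R_X86_64_32 and R_X86_64_32S cases above use a write-then-read-back idiom: truncate the 64-bit value to 32 bits, then check that zero-extension (for _32) or sign-extension (for _32S) recovers the original value. The same checks in isolation, as a sketch of the idiom rather than the kernel code:

```c
#include <stdint.h>

/* R_X86_64_32: value must survive zero-extension back to 64 bits */
static int fits_u32(uint64_t value)
{
	uint32_t truncated = (uint32_t)value;

	return (uint64_t)truncated == value;
}

/* R_X86_64_32S: value must survive sign-extension back to 64 bits.
 * (The int32_t conversion of an out-of-range value is
 * implementation-defined in ISO C; like the kernel, this assumes the
 * usual two's-complement wraparound.) */
static int fits_s32(uint64_t value)
{
	int32_t truncated = (int32_t)value;

	return (int64_t)truncated == (int64_t)value;
}
```

A value such as 0xffffffff fits _32 but overflows _32S (as a signed 32-bit quantity it reads back as -1), which is why the two relocation types need distinct checks.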

@@ -273,7 +273,7 @@ static int mmu_audit_set(const char *val, const struct kernel_param *kp)
int ret;
unsigned long enable;
- ret = strict_strtoul(val, 10, &enable);
+ ret = kstrtoul(val, 10, &enable);
if (ret < 0)
return -EINVAL;

@@ -1479,7 +1479,7 @@ static ssize_t ptc_proc_write(struct file *file, const char __user *user,
return count;
}
- if (strict_strtol(optstr, 10, &input_arg) < 0) {
+ if (kstrtol(optstr, 10, &input_arg) < 0) {
printk(KERN_DEBUG "%s is invalid\n", optstr);
return -EINVAL;
}

@@ -0,0 +1,30 @@
purgatory-y := purgatory.o stack.o setup-x86_$(BITS).o sha256.o entry64.o string.o
targets += $(purgatory-y)
PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
targets += purgatory.ro
# Default KBUILD_CFLAGS can have the -pg option set when FTRACE is enabled.
# That in turn leaves some undefined symbols like __fentry__ in purgatory,
# and it is not clear how to relocate those. Like kexec-tools, use custom
# flags.
$(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
$(call if_changed,ld)
targets += kexec-purgatory.c
quiet_cmd_bin2c = BIN2C $@
cmd_bin2c = cat $(obj)/purgatory.ro | $(objtree)/scripts/basic/bin2c kexec_purgatory > $(obj)/kexec-purgatory.c
$(obj)/kexec-purgatory.c: $(obj)/purgatory.ro FORCE
$(call if_changed,bin2c)
# No loaders for 32bits yet.
ifeq ($(CONFIG_X86_64),y)
obj-$(CONFIG_KEXEC) += kexec-purgatory.o
endif

@@ -0,0 +1,101 @@
/*
* Copyright (C) 2003,2004 Eric Biederman (ebiederm@xmission.com)
* Copyright (C) 2014 Red Hat Inc.
* Author(s): Vivek Goyal <vgoyal@redhat.com>
*
* This code has been taken from kexec-tools.
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
.text
.balign 16
.code64
.globl entry64, entry64_regs
entry64:
/* Setup a gdt that should be preserved */
lgdt gdt(%rip)
/* load the data segments */
movl $0x18, %eax /* data segment */
movl %eax, %ds
movl %eax, %es
movl %eax, %ss
movl %eax, %fs
movl %eax, %gs
/* Setup new stack */
leaq stack_init(%rip), %rsp
pushq $0x10 /* CS */
leaq new_cs_exit(%rip), %rax
pushq %rax
lretq
new_cs_exit:
/* Load the registers */
movq rax(%rip), %rax
movq rbx(%rip), %rbx
movq rcx(%rip), %rcx
movq rdx(%rip), %rdx
movq rsi(%rip), %rsi
movq rdi(%rip), %rdi
movq rsp(%rip), %rsp
movq rbp(%rip), %rbp
movq r8(%rip), %r8
movq r9(%rip), %r9
movq r10(%rip), %r10
movq r11(%rip), %r11
movq r12(%rip), %r12
movq r13(%rip), %r13
movq r14(%rip), %r14
movq r15(%rip), %r15
/* Jump to the new code... */
jmpq *rip(%rip)
.section ".rodata"
.balign 4
entry64_regs:
rax: .quad 0x0
rcx: .quad 0x0
rdx: .quad 0x0
rbx: .quad 0x0
rsp: .quad 0x0
rbp: .quad 0x0
rsi: .quad 0x0
rdi: .quad 0x0
r8: .quad 0x0
r9: .quad 0x0
r10: .quad 0x0
r11: .quad 0x0
r12: .quad 0x0
r13: .quad 0x0
r14: .quad 0x0
r15: .quad 0x0
rip: .quad 0x0
.size entry64_regs, . - entry64_regs
/* GDT */
.section ".rodata"
.balign 16
gdt:
/* 0x00 unusable segment
* 0x08 unused
* so use them as gdt ptr
*/
.word gdt_end - gdt - 1
.quad gdt
.word 0, 0, 0
/* 0x10 4GB flat code segment */
.word 0xFFFF, 0x0000, 0x9A00, 0x00AF
/* 0x18 4GB flat data segment */
.word 0xFFFF, 0x0000, 0x9200, 0x00CF
gdt_end:
stack: .quad 0, 0
stack_init:

@@ -0,0 +1,72 @@
/*
* purgatory: Runs between two kernels
*
* Copyright (C) 2014 Red Hat Inc.
*
* Author:
* Vivek Goyal <vgoyal@redhat.com>
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
#include "sha256.h"
#include "../boot/string.h"
struct sha_region {
unsigned long start;
unsigned long len;
};
unsigned long backup_dest = 0;
unsigned long backup_src = 0;
unsigned long backup_sz = 0;
u8 sha256_digest[SHA256_DIGEST_SIZE] = { 0 };
struct sha_region sha_regions[16] = {};
/*
* On x86, the second kernel requires the first 640K of memory to boot. Copy
* the first 640K to a backup region in reserved memory so that the second
* kernel can use the first 640K.
*/
static int copy_backup_region(void)
{
if (backup_dest)
memcpy((void *)backup_dest, (void *)backup_src, backup_sz);
return 0;
}
int verify_sha256_digest(void)
{
struct sha_region *ptr, *end;
u8 digest[SHA256_DIGEST_SIZE];
struct sha256_state sctx;
sha256_init(&sctx);
end = &sha_regions[sizeof(sha_regions)/sizeof(sha_regions[0])];
for (ptr = sha_regions; ptr < end; ptr++)
sha256_update(&sctx, (uint8_t *)(ptr->start), ptr->len);
sha256_final(&sctx, digest);
if (memcmp(digest, sha256_digest, sizeof(digest)))
return 1;
return 0;
}
void purgatory(void)
{
int ret;
ret = verify_sha256_digest();
if (ret) {
/* loop forever */
for (;;)
;
}
copy_backup_region();
}
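verify_sha256_digest() above walks a fixed table of (start, len) regions, hashes each one, and compares the result against a digest that kexec stored into the purgatory blob at load time. The control flow can be sketched standalone with a trivial additive checksum standing in for SHA-256 — a toy hash with hypothetical names, kept only to show the region-walk-and-compare shape:

```c
struct region {
	const unsigned char *start;
	unsigned long len;
};

/* Toy stand-in for SHA-256: sum all bytes across every region. */
static unsigned int hash_regions(const struct region *regions, int nr)
{
	unsigned int sum = 0;
	int i;
	unsigned long j;

	for (i = 0; i < nr; i++)
		for (j = 0; j < regions[i].len; j++)
			sum += regions[i].start[j];
	return sum;
}

/* Returns 0 when the freshly computed value matches the stored one,
 * nonzero otherwise -- the same convention as verify_sha256_digest(). */
static int verify_regions(const struct region *regions, int nr,
			  unsigned int stored)
{
	return hash_regions(regions, nr) != stored;
}
```

In the real code a nonzero return makes purgatory() spin forever rather than jump into a corrupted crash kernel.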

@@ -0,0 +1,58 @@
/*
* purgatory: setup code
*
* Copyright (C) 2003,2004 Eric Biederman (ebiederm@xmission.com)
* Copyright (C) 2014 Red Hat Inc.
*
* This code has been taken from kexec-tools.
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
.text
.globl purgatory_start
.balign 16
purgatory_start:
.code64
/* Load a gdt so I know what the segment registers are */
lgdt gdt(%rip)
/* load the data segments */
movl $0x18, %eax /* data segment */
movl %eax, %ds
movl %eax, %es
movl %eax, %ss
movl %eax, %fs
movl %eax, %gs
/* Setup a stack */
leaq lstack_end(%rip), %rsp
/* Call the C code */
call purgatory
jmp entry64
.section ".rodata"
.balign 16
gdt: /* 0x00 unusable segment
* 0x08 unused
* so use them as the gdt ptr
*/
.word gdt_end - gdt - 1
.quad gdt
.word 0, 0, 0
/* 0x10 4GB flat code segment */
.word 0xFFFF, 0x0000, 0x9A00, 0x00AF
/* 0x18 4GB flat data segment */
.word 0xFFFF, 0x0000, 0x9200, 0x00CF
gdt_end:
.bss
.balign 4096
lstack:
.skip 4096
lstack_end:

arch/x86/purgatory/sha256.c
@@ -0,0 +1,283 @@
/*
* SHA-256, as specified in
* http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf
*
* SHA-256 code by Jean-Luc Cooke <jlcooke@certainkey.com>.
*
* Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
* Copyright (c) 2014 Red Hat Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*/
#include <linux/bitops.h>
#include <asm/byteorder.h>
#include "sha256.h"
#include "../boot/string.h"
static inline u32 Ch(u32 x, u32 y, u32 z)
{
return z ^ (x & (y ^ z));
}
static inline u32 Maj(u32 x, u32 y, u32 z)
{
return (x & y) | (z & (x | y));
}
#define e0(x) (ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22))
#define e1(x) (ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25))
#define s0(x) (ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3))
#define s1(x) (ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10))
static inline void LOAD_OP(int I, u32 *W, const u8 *input)
{
W[I] = __be32_to_cpu(((__be32 *)(input))[I]);
}
static inline void BLEND_OP(int I, u32 *W)
{
W[I] = s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16];
}
static void sha256_transform(u32 *state, const u8 *input)
{
u32 a, b, c, d, e, f, g, h, t1, t2;
u32 W[64];
int i;
/* load the input */
for (i = 0; i < 16; i++)
LOAD_OP(i, W, input);
/* now blend */
for (i = 16; i < 64; i++)
BLEND_OP(i, W);
/* load the state into our registers */
a = state[0]; b = state[1]; c = state[2]; d = state[3];
e = state[4]; f = state[5]; g = state[6]; h = state[7];
/* now iterate */
t1 = h + e1(e) + Ch(e, f, g) + 0x428a2f98 + W[0];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1 + t2;
t1 = g + e1(d) + Ch(d, e, f) + 0x71374491 + W[1];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1 + t2;
t1 = f + e1(c) + Ch(c, d, e) + 0xb5c0fbcf + W[2];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1 + t2;
t1 = e + e1(b) + Ch(b, c, d) + 0xe9b5dba5 + W[3];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1 + t2;
t1 = d + e1(a) + Ch(a, b, c) + 0x3956c25b + W[4];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1 + t2;
t1 = c + e1(h) + Ch(h, a, b) + 0x59f111f1 + W[5];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1 + t2;
t1 = b + e1(g) + Ch(g, h, a) + 0x923f82a4 + W[6];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1 + t2;
t1 = a + e1(f) + Ch(f, g, h) + 0xab1c5ed5 + W[7];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1 + t2;
t1 = h + e1(e) + Ch(e, f, g) + 0xd807aa98 + W[8];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1 + t2;
t1 = g + e1(d) + Ch(d, e, f) + 0x12835b01 + W[9];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1 + t2;
t1 = f + e1(c) + Ch(c, d, e) + 0x243185be + W[10];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1 + t2;
t1 = e + e1(b) + Ch(b, c, d) + 0x550c7dc3 + W[11];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1 + t2;
t1 = d + e1(a) + Ch(a, b, c) + 0x72be5d74 + W[12];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1 + t2;
t1 = c + e1(h) + Ch(h, a, b) + 0x80deb1fe + W[13];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1 + t2;
t1 = b + e1(g) + Ch(g, h, a) + 0x9bdc06a7 + W[14];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1 + t2;
t1 = a + e1(f) + Ch(f, g, h) + 0xc19bf174 + W[15];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
t1 = h + e1(e) + Ch(e, f, g) + 0xe49b69c1 + W[16];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1+t2;
t1 = g + e1(d) + Ch(d, e, f) + 0xefbe4786 + W[17];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1+t2;
t1 = f + e1(c) + Ch(c, d, e) + 0x0fc19dc6 + W[18];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1+t2;
t1 = e + e1(b) + Ch(b, c, d) + 0x240ca1cc + W[19];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1+t2;
t1 = d + e1(a) + Ch(a, b, c) + 0x2de92c6f + W[20];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1+t2;
t1 = c + e1(h) + Ch(h, a, b) + 0x4a7484aa + W[21];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1+t2;
t1 = b + e1(g) + Ch(g, h, a) + 0x5cb0a9dc + W[22];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1+t2;
t1 = a + e1(f) + Ch(f, g, h) + 0x76f988da + W[23];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
t1 = h + e1(e) + Ch(e, f, g) + 0x983e5152 + W[24];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1+t2;
t1 = g + e1(d) + Ch(d, e, f) + 0xa831c66d + W[25];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1+t2;
t1 = f + e1(c) + Ch(c, d, e) + 0xb00327c8 + W[26];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1+t2;
t1 = e + e1(b) + Ch(b, c, d) + 0xbf597fc7 + W[27];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1+t2;
t1 = d + e1(a) + Ch(a, b, c) + 0xc6e00bf3 + W[28];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1+t2;
t1 = c + e1(h) + Ch(h, a, b) + 0xd5a79147 + W[29];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1+t2;
t1 = b + e1(g) + Ch(g, h, a) + 0x06ca6351 + W[30];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1+t2;
t1 = a + e1(f) + Ch(f, g, h) + 0x14292967 + W[31];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
t1 = h + e1(e) + Ch(e, f, g) + 0x27b70a85 + W[32];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1+t2;
t1 = g + e1(d) + Ch(d, e, f) + 0x2e1b2138 + W[33];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1+t2;
t1 = f + e1(c) + Ch(c, d, e) + 0x4d2c6dfc + W[34];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1+t2;
t1 = e + e1(b) + Ch(b, c, d) + 0x53380d13 + W[35];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1+t2;
t1 = d + e1(a) + Ch(a, b, c) + 0x650a7354 + W[36];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1+t2;
t1 = c + e1(h) + Ch(h, a, b) + 0x766a0abb + W[37];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1+t2;
t1 = b + e1(g) + Ch(g, h, a) + 0x81c2c92e + W[38];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1+t2;
t1 = a + e1(f) + Ch(f, g, h) + 0x92722c85 + W[39];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
t1 = h + e1(e) + Ch(e, f, g) + 0xa2bfe8a1 + W[40];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1+t2;
t1 = g + e1(d) + Ch(d, e, f) + 0xa81a664b + W[41];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1+t2;
t1 = f + e1(c) + Ch(c, d, e) + 0xc24b8b70 + W[42];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1+t2;
t1 = e + e1(b) + Ch(b, c, d) + 0xc76c51a3 + W[43];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1+t2;
t1 = d + e1(a) + Ch(a, b, c) + 0xd192e819 + W[44];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1+t2;
t1 = c + e1(h) + Ch(h, a, b) + 0xd6990624 + W[45];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1+t2;
t1 = b + e1(g) + Ch(g, h, a) + 0xf40e3585 + W[46];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1+t2;
t1 = a + e1(f) + Ch(f, g, h) + 0x106aa070 + W[47];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
t1 = h + e1(e) + Ch(e, f, g) + 0x19a4c116 + W[48];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1+t2;
t1 = g + e1(d) + Ch(d, e, f) + 0x1e376c08 + W[49];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1+t2;
t1 = f + e1(c) + Ch(c, d, e) + 0x2748774c + W[50];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1+t2;
t1 = e + e1(b) + Ch(b, c, d) + 0x34b0bcb5 + W[51];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1+t2;
t1 = d + e1(a) + Ch(a, b, c) + 0x391c0cb3 + W[52];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1+t2;
t1 = c + e1(h) + Ch(h, a, b) + 0x4ed8aa4a + W[53];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1+t2;
t1 = b + e1(g) + Ch(g, h, a) + 0x5b9cca4f + W[54];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1+t2;
t1 = a + e1(f) + Ch(f, g, h) + 0x682e6ff3 + W[55];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
t1 = h + e1(e) + Ch(e, f, g) + 0x748f82ee + W[56];
t2 = e0(a) + Maj(a, b, c); d += t1; h = t1+t2;
t1 = g + e1(d) + Ch(d, e, f) + 0x78a5636f + W[57];
t2 = e0(h) + Maj(h, a, b); c += t1; g = t1+t2;
t1 = f + e1(c) + Ch(c, d, e) + 0x84c87814 + W[58];
t2 = e0(g) + Maj(g, h, a); b += t1; f = t1+t2;
t1 = e + e1(b) + Ch(b, c, d) + 0x8cc70208 + W[59];
t2 = e0(f) + Maj(f, g, h); a += t1; e = t1+t2;
t1 = d + e1(a) + Ch(a, b, c) + 0x90befffa + W[60];
t2 = e0(e) + Maj(e, f, g); h += t1; d = t1+t2;
t1 = c + e1(h) + Ch(h, a, b) + 0xa4506ceb + W[61];
t2 = e0(d) + Maj(d, e, f); g += t1; c = t1+t2;
t1 = b + e1(g) + Ch(g, h, a) + 0xbef9a3f7 + W[62];
t2 = e0(c) + Maj(c, d, e); f += t1; b = t1+t2;
t1 = a + e1(f) + Ch(f, g, h) + 0xc67178f2 + W[63];
t2 = e0(b) + Maj(b, c, d); e += t1; a = t1+t2;
state[0] += a; state[1] += b; state[2] += c; state[3] += d;
state[4] += e; state[5] += f; state[6] += g; state[7] += h;
/* clear any sensitive info... */
a = b = c = d = e = f = g = h = t1 = t2 = 0;
memset(W, 0, 64 * sizeof(u32));
}
int sha256_init(struct sha256_state *sctx)
{
sctx->state[0] = SHA256_H0;
sctx->state[1] = SHA256_H1;
sctx->state[2] = SHA256_H2;
sctx->state[3] = SHA256_H3;
sctx->state[4] = SHA256_H4;
sctx->state[5] = SHA256_H5;
sctx->state[6] = SHA256_H6;
sctx->state[7] = SHA256_H7;
sctx->count = 0;
return 0;
}
int sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len)
{
unsigned int partial, done;
const u8 *src;
partial = sctx->count & 0x3f;
sctx->count += len;
done = 0;
src = data;
if ((partial + len) > 63) {
if (partial) {
done = -partial;
memcpy(sctx->buf + partial, data, done + 64);
src = sctx->buf;
}
do {
sha256_transform(sctx->state, src);
done += 64;
src = data + done;
} while (done + 63 < len);
partial = 0;
}
memcpy(sctx->buf + partial, src, len - done);
return 0;
}
int sha256_final(struct sha256_state *sctx, u8 *out)
{
__be32 *dst = (__be32 *)out;
__be64 bits;
unsigned int index, pad_len;
int i;
static const u8 padding[64] = { 0x80, };
/* Save number of bits */
bits = cpu_to_be64(sctx->count << 3);
/* Pad out to 56 mod 64. */
index = sctx->count & 0x3f;
pad_len = (index < 56) ? (56 - index) : ((64+56) - index);
sha256_update(sctx, padding, pad_len);
/* Append length (before padding) */
sha256_update(sctx, (const u8 *)&bits, sizeof(bits));
/* Store state in digest */
for (i = 0; i < 8; i++)
dst[i] = cpu_to_be32(sctx->state[i]);
/* Zeroize sensitive information. */
memset(sctx, 0, sizeof(*sctx));
return 0;
}

arch/x86/purgatory/sha256.h Normal file

@@ -0,0 +1,22 @@
/*
* Copyright (C) 2014 Red Hat Inc.
*
* Author: Vivek Goyal <vgoyal@redhat.com>
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
#ifndef SHA256_H
#define SHA256_H
#include <linux/types.h>
#include <crypto/sha.h>
extern int sha256_init(struct sha256_state *sctx);
extern int sha256_update(struct sha256_state *sctx, const u8 *input,
unsigned int length);
extern int sha256_final(struct sha256_state *sctx, u8 *hash);
#endif /* SHA256_H */

arch/x86/purgatory/stack.S Normal file

@@ -0,0 +1,19 @@
/*
* purgatory: stack
*
* Copyright (C) 2014 Red Hat Inc.
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
/* A stack for the loaded kernel.
* Separate and in the data section so it can be prepopulated.
*/
.data
.balign 4096
.globl stack, stack_end
stack:
.skip 4096
stack_end:

arch/x86/purgatory/string.c Normal file

@@ -0,0 +1,13 @@
/*
* Simple string functions.
*
* Copyright (C) 2014 Red Hat Inc.
*
* Author:
* Vivek Goyal <vgoyal@redhat.com>
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
#include "../boot/string.c"

Some files were not shown because too many files have changed in this diff.