License cleanup: add SPDX GPL-2.0 license identifier to files with no license

Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default
license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.

The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.

The criteria used to select files for SPDX license identifier tagging were:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5
   lines of source.
 - File already had some variant of a license header in it (even if <5
   lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license
identifiers to apply.

 - when both scanners couldn't find any license traces, the file was
   considered to have no license information in it, and the top level
   COPYING file license applied.

   For non */uapi/* files that summary was:

   SPDX license identifier                             # files
   ---------------------------------------------------|-------
   GPL-2.0                                               11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH
   Linux-syscall-note", otherwise it was "GPL-2.0". The results were:

   SPDX license identifier                             # files
   ---------------------------------------------------|-------
   GPL-2.0 WITH Linux-syscall-note                         930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one
   of the */uapi/* ones, it was denoted with the Linux-syscall-note if
   any GPL family license was found in the file or it had no licensing
   in it (per the prior point). Results summary:

   SPDX license identifier                             # files
   ---------------------------------------------------|------
   GPL-2.0 WITH Linux-syscall-note                        270
   GPL-2.0+ WITH Linux-syscall-note                       169
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
   LGPL-2.1+ WITH Linux-syscall-note                       15
   GPL-1.0+ WITH Linux-syscall-note                        14
   ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
   LGPL-2.0+ WITH Linux-syscall-note                        4
   LGPL-2.1 WITH Linux-syscall-note                         3
   ((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
   ((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became
   the concluded license(s).

 - when there was disagreement between the two scanners (one detected a
   license but the other didn't, or they both detected different
   licenses) a manual inspection of the file occurred.

 - In most cases a manual inspection of the information in the file
   resulted in a clear resolution of the license that should apply (and
   which scanner probably needed to revisit its heuristics).

 - When it was not immediately clear, the license identifier was
   confirmed with lawyers working with the Linux Foundation.

 - If there was any question as to the appropriate license identifier,
   the file was flagged for further research and to be revisited later
   in time.

In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.

Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.

Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and have been fixed to
reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
 - a full scancode scan run, collecting the matched texts, detected
   license ids and scores
 - reviewing anything where there was a license detected (about 500+
   files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license
   was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
   SPDX license was correct

This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.

These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_IO_H
#define _ASM_X86_IO_H

/*
 * This file contains the definitions for the x86 IO instructions
 * inb/inw/inl/outb/outw/outl and the "string versions" of the same
 * (insb/insw/insl/outsb/outsw/outsl). You can also use "pausing"
 * versions of the single-IO instructions (inb_p/inw_p/..).
 *
 * This file is not meant to be obfuscating: it's just complicated
 * to (a) handle it all in a way that makes gcc able to optimize it
 * as well as possible and (b) trying to avoid writing the same thing
 * over and over again with slight variations and possibly making a
 * mistake somewhere.
 */

/*
 * Thanks to James van Artsdalen for a better timing-fix than
 * the two short jumps: using outb's to a nonexistent port seems
 * to guarantee better timings even on fast machines.
 *
 * On the other hand, I'd like to be sure of a non-existent port:
 * I feel a bit unsafe about using 0x80 (should be safe, though)
 *
 *		Linus
 */

/*
 *  Bit simplified and optimized by Jan Hubicka
 *  Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999.
 *
 *  isa_memset_io, isa_memcpy_fromio, isa_memcpy_toio added,
 *  isa_read[wl] and isa_write[wl] fixed
 *  - Arnaldo Carvalho de Melo <acme@conectiva.com.br>
 */

#define ARCH_HAS_IOREMAP_WC
#define ARCH_HAS_IOREMAP_WT

#include <linux/string.h>
#include <linux/compiler.h>
#include <asm/page.h>
#include <asm/early_ioremap.h>
#include <asm/pgtable_types.h>

#define build_mmio_read(name, size, type, reg, barrier) \
static inline type name(const volatile void __iomem *addr) \
{ type ret; asm volatile("mov" size " %1,%0":reg (ret) \
:"m" (*(volatile type __force *)addr) barrier); return ret; }

#define build_mmio_write(name, size, type, reg, barrier) \
static inline void name(type val, volatile void __iomem *addr) \
{ asm volatile("mov" size " %0,%1": :reg (val), \
"m" (*(volatile type __force *)addr) barrier); }

build_mmio_read(readb, "b", unsigned char, "=q", :"memory")
build_mmio_read(readw, "w", unsigned short, "=r", :"memory")
build_mmio_read(readl, "l", unsigned int, "=r", :"memory")

build_mmio_read(__readb, "b", unsigned char, "=q", )
build_mmio_read(__readw, "w", unsigned short, "=r", )
build_mmio_read(__readl, "l", unsigned int, "=r", )

build_mmio_write(writeb, "b", unsigned char, "q", :"memory")
build_mmio_write(writew, "w", unsigned short, "r", :"memory")
build_mmio_write(writel, "l", unsigned int, "r", :"memory")

build_mmio_write(__writeb, "b", unsigned char, "q", )
build_mmio_write(__writew, "w", unsigned short, "r", )
build_mmio_write(__writel, "l", unsigned int, "r", )

#define readb readb
#define readw readw
#define readl readl
#define readb_relaxed(a) __readb(a)
#define readw_relaxed(a) __readw(a)
#define readl_relaxed(a) __readl(a)
#define __raw_readb __readb
#define __raw_readw __readw
#define __raw_readl __readl

#define writeb writeb
#define writew writew
#define writel writel
#define writeb_relaxed(v, a) __writeb(v, a)
#define writew_relaxed(v, a) __writew(v, a)
#define writel_relaxed(v, a) __writel(v, a)
#define __raw_writeb __writeb
#define __raw_writew __writew
#define __raw_writel __writel

#ifdef CONFIG_X86_64

build_mmio_read(readq, "q", u64, "=r", :"memory")
build_mmio_read(__readq, "q", u64, "=r", )
build_mmio_write(writeq, "q", u64, "r", :"memory")
build_mmio_write(__writeq, "q", u64, "r", )

#define readq_relaxed(a)	__readq(a)
#define writeq_relaxed(v, a)	__writeq(v, a)

#define __raw_readq		__readq
#define __raw_writeq		__writeq

/* Let people know that we have them */
#define readq			readq
#define writeq			writeq

#endif

#define ARCH_HAS_VALID_PHYS_ADDR_RANGE
extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);

/**
 * virt_to_phys - map virtual addresses to physical
 * @address: address to remap
 *
 * The returned physical address is the physical (CPU) mapping for
 * the memory address given. It is only valid to use this function on
 * addresses directly mapped or allocated via kmalloc.
 *
 * This function does not give bus mappings for DMA transfers. In
 * almost all conceivable cases a device driver should not be using
 * this function
 */
static inline phys_addr_t virt_to_phys(volatile void *address)
{
	return __pa(address);
}
#define virt_to_phys virt_to_phys

/**
 * phys_to_virt - map physical address to virtual
 * @address: address to remap
 *
 * The returned virtual address is a current CPU mapping for
 * the memory address given. It is only valid to use this function on
 * addresses that have a kernel mapping
 *
 * This function does not handle bus mappings for DMA transfers. In
 * almost all conceivable cases a device driver should not be using
 * this function
 */
static inline void *phys_to_virt(phys_addr_t address)
{
	return __va(address);
}
#define phys_to_virt phys_to_virt

/*
 * Change "struct page" to physical address.
 */
#define page_to_phys(page)    ((dma_addr_t)page_to_pfn(page) << PAGE_SHIFT)

/*
 * ISA I/O bus memory addresses are 1:1 with the physical address.
 * However, we truncate the address to unsigned int to avoid undesirable
 * promotions in legacy drivers.
 */
static inline unsigned int isa_virt_to_bus(volatile void *address)
{
	return (unsigned int)virt_to_phys(address);
}
#define isa_bus_to_virt		phys_to_virt

/*
 * However PCI ones are not necessarily 1:1 and therefore these interfaces
 * are forbidden in portable PCI drivers.
 *
 * Allow them on x86 for legacy drivers, though.
 */
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt

/*
 * The default ioremap() behavior is non-cached; if you need something
 * else, you probably want one of the following.
 */
extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size);
#define ioremap_uc ioremap_uc

extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
#define ioremap_cache ioremap_cache

extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val);
#define ioremap_prot ioremap_prot
extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size);
#define ioremap_encrypted ioremap_encrypted

/**
 * ioremap - map bus memory into CPU space
 * @offset: bus address of the memory
 * @size: size of the resource to map
 *
 * ioremap performs a platform specific sequence of operations to
 * make bus memory CPU accessible via the readb/readw/readl/writeb/
 * writew/writel functions and the other mmio helpers. The returned
 * address is not guaranteed to be usable directly as a virtual
 * address.
 *
 * If the area you are trying to map is a PCI BAR you should have a
 * look at pci_iomap().
 */
void __iomem *ioremap(resource_size_t offset, unsigned long size);
#define ioremap ioremap

extern void iounmap(volatile void __iomem *addr);
#define iounmap iounmap

extern void set_iounmap_nonlazy(void);

#ifdef __KERNEL__

void memcpy_fromio(void *, const volatile void __iomem *, size_t);
void memcpy_toio(volatile void __iomem *, const void *, size_t);
void memset_io(volatile void __iomem *, int, size_t);

#define memcpy_fromio memcpy_fromio
#define memcpy_toio memcpy_toio
#define memset_io memset_io

#include <asm-generic/iomap.h>

/*
 * ISA space is 'always mapped' on a typical x86 system, no need to
 * explicitly ioremap() it. The fact that the ISA IO space is mapped
 * to PAGE_OFFSET is pure coincidence - it does not mean ISA values
 * are physical addresses. The following constant pointer can be
 * used as the IO-area pointer (it can be iounmapped as well, so the
 * analogy with PCI is quite large):
 */
#define __ISA_IO_base ((char __iomem *)(PAGE_OFFSET))

#endif /* __KERNEL__ */

extern void native_io_delay(void);

extern int io_delay_type;
extern void io_delay_init(void);

#if defined(CONFIG_PARAVIRT)
#include <asm/paravirt.h>
#else

static inline void slow_down_io(void)
{
	native_io_delay();
#ifdef REALLY_SLOW_IO
	native_io_delay();
	native_io_delay();
	native_io_delay();
#endif
}

#endif

#ifdef CONFIG_AMD_MEM_ENCRYPT
#include <linux/jump_label.h>

extern struct static_key_false sev_enable_key;
static inline bool sev_key_active(void)
{
	return static_branch_unlikely(&sev_enable_key);
}

#else /* !CONFIG_AMD_MEM_ENCRYPT */

static inline bool sev_key_active(void) { return false; }

#endif /* CONFIG_AMD_MEM_ENCRYPT */

#define BUILDIO(bwl, bw, type)						\
static inline void out##bwl(unsigned type value, int port)		\
{									\
	asm volatile("out" #bwl " %" #bw "0, %w1"			\
		     : : "a"(value), "Nd"(port));			\
}									\
									\
static inline unsigned type in##bwl(int port)				\
{									\
	unsigned type value;						\
	asm volatile("in" #bwl " %w1, %" #bw "0"			\
		     : "=a"(value) : "Nd"(port));			\
	return value;							\
}									\
									\
static inline void out##bwl##_p(unsigned type value, int port)		\
{									\
	out##bwl(value, port);						\
	slow_down_io();							\
}									\
									\
static inline unsigned type in##bwl##_p(int port)			\
{									\
	unsigned type value = in##bwl(port);				\
	slow_down_io();							\
	return value;							\
}									\
									\
static inline void outs##bwl(int port, const void *addr, unsigned long count) \
{									\
	if (sev_key_active()) {						\
		unsigned type *value = (unsigned type *)addr;		\
		while (count) {						\
			out##bwl(*value, port);				\
			value++;					\
			count--;					\
		}							\
	} else {							\
		asm volatile("rep; outs" #bwl				\
			     : "+S"(addr), "+c"(count)			\
			     : "d"(port) : "memory");			\
	}								\
}									\
									\
static inline void ins##bwl(int port, void *addr, unsigned long count)	\
{									\
	if (sev_key_active()) {						\
		unsigned type *value = (unsigned type *)addr;		\
		while (count) {						\
			*value = in##bwl(port);				\
			value++;					\
			count--;					\
		}							\
	} else {							\
		asm volatile("rep; ins" #bwl				\
			     : "+D"(addr), "+c"(count)			\
			     : "d"(port) : "memory");			\
	}								\
}

BUILDIO(b, b, char)
BUILDIO(w, w, short)
BUILDIO(l, , int)

#define inb inb
#define inw inw
#define inl inl
#define inb_p inb_p
#define inw_p inw_p
#define inl_p inl_p
#define insb insb
#define insw insw
#define insl insl

#define outb outb
#define outw outw
#define outl outl
#define outb_p outb_p
#define outw_p outw_p
#define outl_p outl_p
#define outsb outsb
#define outsw outsw
#define outsl outsl

extern void *xlate_dev_mem_ptr(phys_addr_t phys);
extern void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);

#define xlate_dev_mem_ptr xlate_dev_mem_ptr
#define unxlate_dev_mem_ptr unxlate_dev_mem_ptr

extern int ioremap_change_attr(unsigned long vaddr, unsigned long size,
				enum page_cache_mode pcm);
extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size);
#define ioremap_wc ioremap_wc
extern void __iomem *ioremap_wt(resource_size_t offset, unsigned long size);
#define ioremap_wt ioremap_wt

extern bool is_early_ioremap_ptep(pte_t *ptep);

#define IO_SPACE_LIMIT 0xffff

#include <asm-generic/io.h>
#undef PCI_IOBASE

#ifdef CONFIG_MTRR
extern int __must_check arch_phys_wc_index(int handle);
#define arch_phys_wc_index arch_phys_wc_index

extern int __must_check arch_phys_wc_add(unsigned long base,
					 unsigned long size);
extern void arch_phys_wc_del(int handle);
#define arch_phys_wc_add arch_phys_wc_add
#endif

#ifdef CONFIG_X86_PAT
extern int arch_io_reserve_memtype_wc(resource_size_t start, resource_size_t size);
extern void arch_io_free_memtype_wc(resource_size_t start, resource_size_t size);
#define arch_io_reserve_memtype_wc arch_io_reserve_memtype_wc
#endif

extern bool arch_memremap_can_ram_remap(resource_size_t offset,
					unsigned long size,
					unsigned long flags);
#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap

extern bool phys_mem_access_encrypted(unsigned long phys_addr,
				      unsigned long size);

/**
 * iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
 * @dst: destination, in MMIO space (must be 512-bit aligned)
 * @src: source
 * @count: number of 512-bit quantities to submit
 *
 * Submit data from kernel space to MMIO space, in units of 512 bits at a
 * time. Order of access is not guaranteed, nor is a memory barrier
 * performed afterwards.
 *
 * Warning: Do not use this helper unless your driver has checked that the
 * CPU instruction is supported on the platform.
 */
static inline void iosubmit_cmds512(void __iomem *dst, const void *src,
				    size_t count)
{
	const u8 *from = src;
	const u8 *end = from + count * 64;

	while (from < end) {
		movdir64b(dst, from);
		from += 64;
	}
}

#endif /* _ASM_X86_IO_H */