commit cd7d3a1bb4
Author: Sean Paul <seanpaul@chromium.org>
Date:   2019-03-06 09:22:18 -05:00

    Merge drm/drm-next into drm-misc-next

    Picking up v5.0 + missed misc-fixes from last release

    Signed-off-by: Sean Paul <seanpaul@chromium.org>

545 changed files with 8031 additions and 4063 deletions

CREDITS

@ -842,10 +842,9 @@ D: ax25-utils maintainer.
N: Helge Deller
E: deller@gmx.de
E: hdeller@redhat.de
D: PA-RISC Linux hacker, LASI-, ASP-, WAX-, LCD/LED-driver
S: Schimmelsrain 1
S: D-69231 Rauenberg
W: http://www.parisc-linux.org/
D: PA-RISC Linux architecture maintainer
D: LASI-, ASP-, WAX-, LCD/LED-driver
S: Germany
N: Jean Delvare
@ -1361,7 +1360,7 @@ S: Stellenbosch, Western Cape
S: South Africa
N: Grant Grundler
E: grundler@parisc-linux.org
E: grantgrundler@gmail.com
W: http://obmouse.sourceforge.net/
W: http://www.parisc-linux.org/
D: obmouse - rewrote Olivier Florent's Omnibook 600 "pop-up" mouse driver
@ -2492,7 +2491,7 @@ S: Syracuse, New York 13206
S: USA
N: Kyle McMartin
E: kyle@parisc-linux.org
E: kyle@mcmartin.ca
D: Linux/PARISC hacker
D: AD1889 sound driver
S: Ottawa, Canada
@ -3780,14 +3779,13 @@ S: 21513 Conradia Ct
S: Cupertino, CA 95014
S: USA
N: Thibaut Varene
E: T-Bone@parisc-linux.org
W: http://www.parisc-linux.org/~varenet/
P: 1024D/B7D2F063 E67C 0D43 A75E 12A5 BB1C FA2F 1E32 C3DA B7D2 F063
N: Thibaut Varène
E: hacks+kernel@slashdirt.org
W: http://hacks.slashdirt.org/
D: PA-RISC port minion, PDC and GSCPS2 drivers, debuglocks and other bits
D: Some ARM at91rm9200 bits, S1D13XXX FB driver, random patches here and there
D: AD1889 sound driver
S: Paris, France
S: France
N: Heikki Vatiainen
E: hessu@cs.tut.fi


@ -1,9 +1,9 @@
.. _readme:
Linux kernel release 4.x <http://kernel.org/>
Linux kernel release 5.x <http://kernel.org/>
=============================================
These are the release notes for Linux version 4. Read them carefully,
These are the release notes for Linux version 5. Read them carefully,
as they tell you what this is all about, explain how to install the
kernel, and what to do if something goes wrong.
@ -63,7 +63,7 @@ Installing the kernel source
directory where you have permissions (e.g. your home directory) and
unpack it::
xz -cd linux-4.X.tar.xz | tar xvf -
xz -cd linux-5.x.tar.xz | tar xvf -
Replace "X" with the version number of the latest kernel.
@ -72,26 +72,26 @@ Installing the kernel source
files. They should match the library, and not get messed up by
whatever the kernel-du-jour happens to be.
- You can also upgrade between 4.x releases by patching. Patches are
- You can also upgrade between 5.x releases by patching. Patches are
distributed in the xz format. To install by patching, get all the
newer patch files, enter the top level directory of the kernel source
(linux-4.X) and execute::
(linux-5.x) and execute::
xz -cd ../patch-4.x.xz | patch -p1
xz -cd ../patch-5.x.xz | patch -p1
Replace "x" for all versions bigger than the version "X" of your current
Replace "x" for all versions bigger than the version "x" of your current
source tree, **in order**, and you should be ok. You may want to remove
the backup files (some-file-name~ or some-file-name.orig), and make sure
that there are no failed patches (some-file-name# or some-file-name.rej).
If there are, either you or I have made a mistake.
Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
Unlike patches for the 5.x kernels, patches for the 5.x.y kernels
(also known as the -stable kernels) are not incremental but instead apply
directly to the base 4.x kernel. For example, if your base kernel is 4.0
and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
patch -R) **before** applying the 4.0.3 patch. You can read more on this in
directly to the base 5.x kernel. For example, if your base kernel is 5.0
and you want to apply the 5.0.3 patch, you must not first apply the 5.0.1
and 5.0.2 patches. Similarly, if you are running kernel version 5.0.2 and
want to jump to 5.0.3, you must first reverse the 5.0.2 patch (that is,
patch -R) **before** applying the 5.0.3 patch. You can read more on this in
:ref:`Documentation/process/applying-patches.rst <applying_patches>`.
Alternatively, the script patch-kernel can be used to automate this
@ -114,7 +114,7 @@ Installing the kernel source
Software requirements
---------------------
Compiling and running the 4.x kernels requires up-to-date
Compiling and running the 5.x kernels requires up-to-date
versions of various software packages. Consult
:ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
required and how to get updates for these packages. Beware that using
@ -132,12 +132,12 @@ Build directory for the kernel
place for the output files (including .config).
Example::
kernel source code: /usr/src/linux-4.X
kernel source code: /usr/src/linux-5.x
build directory: /home/name/build/kernel
To configure and build the kernel, use::
cd /usr/src/linux-4.X
cd /usr/src/linux-5.x
make O=/home/name/build/kernel menuconfig
make O=/home/name/build/kernel
sudo make O=/home/name/build/kernel modules_install install


@ -0,0 +1,59 @@
Qualcomm adreno/snapdragon GMU (Graphics management unit)
The GMU is a programmable power controller for the GPU. The CPU controls the
GMU, which in turn handles power controls for the GPU.
Required properties:
- compatible: "qcom,adreno-gmu-XYZ.W", "qcom,adreno-gmu"
for example: "qcom,adreno-gmu-630.2", "qcom,adreno-gmu"
Note that you need to list the less specific "qcom,adreno-gmu"
for generic matches and the more specific identifier to match the
exact device.
- reg: Physical base address and length of the GMU registers.
- reg-names: Matching names for the register regions
* "gmu"
* "gmu_pdc"
* "gmu_pdc_seg"
- interrupts: The interrupt signals from the GMU.
- interrupt-names: Matching names for the interrupts
* "hfi"
* "gmu"
- clocks: phandles to the device clocks
- clock-names: Matching names for the clocks
* "gmu"
* "cxo"
* "axi"
* "mnoc"
- power-domains: should be <&clock_gpucc GPU_CX_GDSC>
- iommus: phandle to the adreno iommu
- operating-points-v2: phandle to the OPP operating points
Example:
/ {
...
gmu: gmu@506a000 {
compatible="qcom,adreno-gmu-630.2", "qcom,adreno-gmu";
reg = <0x506a000 0x30000>,
<0xb280000 0x10000>,
<0xb480000 0x10000>;
reg-names = "gmu", "gmu_pdc", "gmu_pdc_seq";
interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "hfi", "gmu";
clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
<&gpucc GPU_CC_CXO_CLK>,
<&gcc GCC_DDRSS_GPU_AXI_CLK>,
<&gcc GCC_GPU_MEMNOC_GFX_CLK>;
clock-names = "gmu", "cxo", "axi", "memnoc";
power-domains = <&gpucc GPU_CX_GDSC>;
iommus = <&adreno_smmu 5>;
operating-points-v2 = <&gmu_opp_table>;
};
};


@ -10,14 +10,23 @@ Required properties:
If "amd,imageon" is used, there should be no top level msm device.
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt signal from the gpu.
- clocks: device clocks
- clocks: device clocks (if applicable)
See ../clocks/clock-bindings.txt for details.
- clock-names: the following clocks are required:
- clock-names: the following clocks are required by a3xx, a4xx and a5xx
cores:
* "core"
* "iface"
* "mem_iface"
For GMU attached devices the GPU clocks are not used and are not required. The
following devices should not list clocks:
- qcom,adreno-630.2
- iommus: optional phandle to an adreno iommu instance
- operating-points-v2: optional phandle to the OPP operating points
- qcom,gmu: For GMU attached devices a phandle to the GMU device that will
control the power for the GPU. Applicable targets:
- qcom,adreno-630.2
Example:
Example 3xx/4xx/a5xx:
/ {
...
@ -37,3 +46,30 @@ Example:
<&mmcc MMSS_IMEM_AHB_CLK>;
};
};
Example a6xx (with GMU):
/ {
...
gpu@5000000 {
compatible = "qcom,adreno-630.2", "qcom,adreno";
#stream-id-cells = <16>;
reg = <0x5000000 0x40000>, <0x509e000 0x10>;
reg-names = "kgsl_3d0_reg_memory", "cx_mem";
/*
* Look ma, no clocks! The GPU clocks and power are
* controlled entirely by the GMU
*/
interrupts = <GIC_SPI 300 IRQ_TYPE_LEVEL_HIGH>;
iommus = <&adreno_smmu 0>;
operating-points-v2 = <&gpu_opp_table>;
qcom,gmu = <&gmu>;
};
};


@ -533,16 +533,12 @@ Bridge VLAN filtering
function that the driver has to call for each VLAN the given port is a member
of. A switchdev object is used to carry the VID and bridge flags.
- port_fdb_prepare: bridge layer function invoked when the bridge prepares the
installation of a Forwarding Database entry. If the operation is not
supported, this function should return -EOPNOTSUPP to inform the bridge code
to fallback to a software implementation. No hardware setup must be done in
this function. See port_fdb_add for this and details.
- port_fdb_add: bridge layer function invoked when the bridge wants to install a
Forwarding Database entry; the switch hardware should be programmed with the
specified address in the specified VLAN ID in the forwarding database
associated with this VLAN ID
associated with this VLAN ID. If the operation is not supported, this
function should return -EOPNOTSUPP to inform the bridge code to fall back to
a software implementation (see the sketch below).
Note: VLAN ID 0 corresponds to the port private database, which, in the context
of DSA, would be its port-based VLAN, used by the associated bridge device.
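
To make the fallback contract concrete, here is a minimal sketch of a driver's
port_fdb_add hook. The driver name, private structure, and hardware helper are
hypothetical, and the exact dsa_switch_ops signature varies across kernel
versions::

#include <linux/errno.h>
#include <net/dsa.h>

struct foo_priv {
	bool hw_supports_fdb;		/* hypothetical capability flag */
};

int foo_hw_fdb_write(struct foo_priv *priv, int port,
		     const unsigned char *addr, u16 vid);

/* Hypothetical hook: program one FDB entry, or punt to software. */
static int foo_port_fdb_add(struct dsa_switch *ds, int port,
			    const unsigned char *addr, u16 vid)
{
	struct foo_priv *priv = ds->priv;

	/*
	 * -EOPNOTSUPP tells the bridge code to fall back to its
	 * software FDB implementation, as described above.
	 */
	if (!priv->hw_supports_fdb)
		return -EOPNOTSUPP;

	return foo_hw_fdb_write(priv, port, addr, vid);
}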


@ -7,7 +7,7 @@ Intro
=====
The MSG_ZEROCOPY flag enables copy avoidance for socket send calls.
The feature is currently implemented for TCP sockets.
The feature is currently implemented for TCP and UDP sockets.
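
For reference, a minimal sketch of the userspace side. The helper name is made
up, and reaping the completion notifications from the socket error queue
(MSG_ERRQUEUE) is omitted::

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60			/* from asm-generic/socket.h */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000		/* from linux/socket.h */
#endif

/* Opt a connected socket into zerocopy, then send with the flag. */
static int send_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
		return -errno;

	if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
		return -errno;

	return 0;
}
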
Opportunity and Caveats


@ -92,11 +92,11 @@ device.
Switch ID
^^^^^^^^^
The switchdev driver must implement the switchdev op switchdev_port_attr_get
for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
physical ID for each port of a switch. The ID must be unique between switches
on the same system. The ID does not need to be unique between switches on
different systems.
The switchdev driver must implement the net_device operation
ndo_get_port_parent_id for each port netdev, returning the same physical ID for
each port of a switch. The ID must be unique between switches on the same
system. The ID does not need to be unique between switches on different
systems.
The switch ID is used to locate ports on a switch and to know if aggregated
ports belong to the same switch.
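
A minimal sketch of that op in a driver is shown below; the driver names and
the 4-byte ID are hypothetical, while ndo_get_port_parent_id and struct
netdev_phys_item_id are the interfaces referred to above::

#include <linux/netdevice.h>
#include <linux/string.h>

struct foo_port {
	u8 switch_id[4];	/* hypothetical: shared by all ports of one switch */
};

/* Report the same physical ID for every port netdev of a switch. */
static int foo_get_port_parent_id(struct net_device *dev,
				  struct netdev_phys_item_id *ppid)
{
	struct foo_port *port = netdev_priv(dev);

	ppid->id_len = sizeof(port->switch_id);
	memcpy(ppid->id, port->switch_id, ppid->id_len);
	return 0;
}

static const struct net_device_ops foo_netdev_ops = {
	.ndo_get_port_parent_id = foo_get_port_parent_id,
};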


@ -216,14 +216,14 @@ You can use the ``interdiff`` program (http://cyberelk.net/tim/patchutils/) to
generate a patch representing the differences between two patches and then
apply the result.
This will let you move from something like 4.7.2 to 4.7.3 in a single
This will let you move from something like 5.7.2 to 5.7.3 in a single
step. The -z flag to interdiff will even let you feed it patches in gzip or
bzip2 compressed form directly without the use of zcat or bzcat or manual
decompression.
Here's how you'd go from 4.7.2 to 4.7.3 in a single step::
Here's how you'd go from 5.7.2 to 5.7.3 in a single step::
interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1
interdiff -z ../patch-5.7.2.gz ../patch-5.7.3.gz | patch -p1
Although interdiff may save you a step or two, you are generally advised to
do the additional steps since interdiff can get things wrong in some cases.
@ -245,62 +245,67 @@ The patches are available at http://kernel.org/
Most recent patches are linked from the front page, but they also have
specific homes.
The 4.x.y (-stable) and 4.x patches live at
The 5.x.y (-stable) and 5.x patches live at
https://www.kernel.org/pub/linux/kernel/v4.x/
https://www.kernel.org/pub/linux/kernel/v5.x/
The -rc patches live at
The -rc patches are not stored on the webserver but are generated on
demand from git tags such as
https://www.kernel.org/pub/linux/kernel/v4.x/testing/
https://git.kernel.org/torvalds/p/v5.1-rc1/v5.0
The stable -rc patches live at
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/
The 4.x kernels
The 5.x kernels
===============
These are the base stable releases released by Linus. The highest numbered
release is the most recent.
If regressions or other serious flaws are found, then a -stable fix patch
will be released (see below) on top of this base. Once a new 4.x base
will be released (see below) on top of this base. Once a new 5.x base
kernel is released, a patch is made available that is a delta between the
previous 4.x kernel and the new one.
previous 5.x kernel and the new one.
To apply a patch moving from 4.6 to 4.7, you'd do the following (note
that such patches do **NOT** apply on top of 4.x.y kernels but on top of the
base 4.x kernel -- if you need to move from 4.x.y to 4.x+1 you need to
first revert the 4.x.y patch).
To apply a patch moving from 5.6 to 5.7, you'd do the following (note
that such patches do **NOT** apply on top of 5.x.y kernels but on top of the
base 5.x kernel -- if you need to move from 5.x.y to 5.x+1 you need to
first revert the 5.x.y patch).
Here are some examples::
# moving from 4.6 to 4.7
# moving from 5.6 to 5.7
$ cd ~/linux-4.6 # change to kernel source dir
$ patch -p1 < ../patch-4.7 # apply the 4.7 patch
$ cd ~/linux-5.6 # change to kernel source dir
$ patch -p1 < ../patch-5.7 # apply the 5.7 patch
$ cd ..
$ mv linux-4.6 linux-4.7 # rename source dir
$ mv linux-5.6 linux-5.7 # rename source dir
# moving from 4.6.1 to 4.7
# moving from 5.6.1 to 5.7
$ cd ~/linux-4.6.1 # change to kernel source dir
$ patch -p1 -R < ../patch-4.6.1 # revert the 4.6.1 patch
# source dir is now 4.6
$ patch -p1 < ../patch-4.7 # apply new 4.7 patch
$ cd ~/linux-5.6.1 # change to kernel source dir
$ patch -p1 -R < ../patch-5.6.1 # revert the 5.6.1 patch
# source dir is now 5.6
$ patch -p1 < ../patch-5.7 # apply new 5.7 patch
$ cd ..
$ mv linux-4.6.1 linux-4.7 # rename source dir
$ mv linux-5.6.1 linux-5.7 # rename source dir
The 4.x.y kernels
The 5.x.y kernels
=================
Kernels with 3-part versions are -stable kernels. They contain small(ish)
critical fixes for security problems or significant regressions discovered
in a given 4.x kernel.
in a given 5.x kernel.
This is the recommended branch for users who want the most recent stable
kernel and are not interested in helping test development/experimental
versions.
If no 4.x.y kernel is available, then the highest numbered 4.x kernel is
If no 5.x.y kernel is available, then the highest numbered 5.x kernel is
the current stable kernel.
.. note::
@ -308,23 +313,23 @@ the current stable kernel.
The -stable team usually do make incremental patches available as well
as patches against the latest mainline release, but I only cover the
non-incremental ones below. The incremental ones can be found at
https://www.kernel.org/pub/linux/kernel/v4.x/incr/
https://www.kernel.org/pub/linux/kernel/v5.x/incr/
These patches are not incremental, meaning that for example the 4.7.3
patch does not apply on top of the 4.7.2 kernel source, but rather on top
of the base 4.7 kernel source.
These patches are not incremental, meaning that for example the 5.7.3
patch does not apply on top of the 5.7.2 kernel source, but rather on top
of the base 5.7 kernel source.
So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel
source you have to first back out the 4.7.2 patch (so you are left with a
base 4.7 kernel source) and then apply the new 4.7.3 patch.
So, in order to apply the 5.7.3 patch to your existing 5.7.2 kernel
source you have to first back out the 5.7.2 patch (so you are left with a
base 5.7 kernel source) and then apply the new 5.7.3 patch.
Here's a small example::
$ cd ~/linux-4.7.2 # change to the kernel source dir
$ patch -p1 -R < ../patch-4.7.2 # revert the 4.7.2 patch
$ patch -p1 < ../patch-4.7.3 # apply the new 4.7.3 patch
$ cd ~/linux-5.7.2 # change to the kernel source dir
$ patch -p1 -R < ../patch-5.7.2 # revert the 5.7.2 patch
$ patch -p1 < ../patch-5.7.3 # apply the new 5.7.3 patch
$ cd ..
$ mv linux-4.7.2 linux-4.7.3 # rename the kernel source dir
$ mv linux-5.7.2 linux-5.7.3 # rename the kernel source dir
The -rc kernels
===============
@ -343,38 +348,38 @@ This is a good branch to run for people who want to help out testing
development kernels but do not want to run some of the really experimental
stuff (such people should see the sections about -next and -mm kernels below).
The -rc patches are not incremental, they apply to a base 4.x kernel, just
like the 4.x.y patches described above. The kernel version before the -rcN
The -rc patches are not incremental, they apply to a base 5.x kernel, just
like the 5.x.y patches described above. The kernel version before the -rcN
suffix denotes the version of the kernel that this -rc kernel will eventually
turn into.
So, 4.8-rc5 means that this is the fifth release candidate for the 4.8
kernel and the patch should be applied on top of the 4.7 kernel source.
So, 5.8-rc5 means that this is the fifth release candidate for the 5.8
kernel and the patch should be applied on top of the 5.7 kernel source.
Here are 3 examples of how to apply these patches::
# first an example of moving from 4.7 to 4.8-rc3
# first an example of moving from 5.7 to 5.8-rc3
$ cd ~/linux-4.7 # change to the 4.7 source dir
$ patch -p1 < ../patch-4.8-rc3 # apply the 4.8-rc3 patch
$ cd ~/linux-5.7 # change to the 5.7 source dir
$ patch -p1 < ../patch-5.8-rc3 # apply the 5.8-rc3 patch
$ cd ..
$ mv linux-4.7 linux-4.8-rc3 # rename the source dir
$ mv linux-5.7 linux-5.8-rc3 # rename the source dir
# now let's move from 4.8-rc3 to 4.8-rc5
# now let's move from 5.8-rc3 to 5.8-rc5
$ cd ~/linux-4.8-rc3 # change to the 4.8-rc3 dir
$ patch -p1 -R < ../patch-4.8-rc3 # revert the 4.8-rc3 patch
$ patch -p1 < ../patch-4.8-rc5 # apply the new 4.8-rc5 patch
$ cd ~/linux-5.8-rc3 # change to the 5.8-rc3 dir
$ patch -p1 -R < ../patch-5.8-rc3 # revert the 5.8-rc3 patch
$ patch -p1 < ../patch-5.8-rc5 # apply the new 5.8-rc5 patch
$ cd ..
$ mv linux-4.8-rc3 linux-4.8-rc5 # rename the source dir
$ mv linux-5.8-rc3 linux-5.8-rc5 # rename the source dir
# finally let's try and move from 4.7.3 to 4.8-rc5
# finally let's try and move from 5.7.3 to 5.8-rc5
$ cd ~/linux-4.7.3 # change to the kernel source dir
$ patch -p1 -R < ../patch-4.7.3 # revert the 4.7.3 patch
$ patch -p1 < ../patch-4.8-rc5 # apply new 4.8-rc5 patch
$ cd ~/linux-5.7.3 # change to the kernel source dir
$ patch -p1 -R < ../patch-5.7.3 # revert the 5.7.3 patch
$ patch -p1 < ../patch-5.8-rc5 # apply new 5.8-rc5 patch
$ cd ..
$ mv linux-4.7.3 linux-4.8-rc5 # rename the kernel source dir
$ mv linux-5.7.3 linux-5.8-rc5 # rename the kernel source dir
The -mm patches and the linux-next tree


@ -4,7 +4,7 @@
.. _it_readme:
Rilascio del kernel Linux 4.x <http://kernel.org/>
Rilascio del kernel Linux 5.x <http://kernel.org/>
===================================================
.. warning::


@ -409,8 +409,7 @@ F: drivers/platform/x86/wmi.c
F: include/uapi/linux/wmi.h
AD1889 ALSA SOUND DRIVER
M: Thibaut Varene <T-Bone@parisc-linux.org>
W: http://wiki.parisc-linux.org/AD1889
W: https://parisc.wiki.kernel.org/index.php/AD1889
L: linux-parisc@vger.kernel.org
S: Maintained
F: sound/pci/ad1889.*
@ -2865,7 +2864,7 @@ R: Martin KaFai Lau <kafai@fb.com>
R: Song Liu <songliubraving@fb.com>
R: Yonghong Song <yhs@fb.com>
L: netdev@vger.kernel.org
L: linux-kernel@vger.kernel.org
L: bpf@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
@ -2895,6 +2894,7 @@ N: bpf
BPF JIT for ARM
M: Shubham Bansal <illusionist.neo@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/arm/net/
@ -2903,18 +2903,21 @@ M: Daniel Borkmann <daniel@iogearbox.net>
M: Alexei Starovoitov <ast@kernel.org>
M: Zi Shen Lim <zlim.lnx@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/arm64/net/
BPF JIT for MIPS (32-BIT AND 64-BIT)
M: Paul Burton <paul.burton@mips.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/mips/net/
BPF JIT for NFP NICs
M: Jakub Kicinski <jakub.kicinski@netronome.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: drivers/net/ethernet/netronome/nfp/bpf/
@ -2922,6 +2925,7 @@ BPF JIT for POWERPC (32-BIT AND 64-BIT)
M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
M: Sandipan Das <sandipan@linux.ibm.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/powerpc/net/
@ -2929,6 +2933,7 @@ BPF JIT for S390
M: Martin Schwidefsky <schwidefsky@de.ibm.com>
M: Heiko Carstens <heiko.carstens@de.ibm.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/s390/net/
X: arch/s390/net/pnet.c
@ -2936,12 +2941,14 @@ X: arch/s390/net/pnet.c
BPF JIT for SPARC (32-BIT AND 64-BIT)
M: David S. Miller <davem@davemloft.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/sparc/net/
BPF JIT for X86 32-BIT
M: Wang YanQing <udknight@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/x86/net/bpf_jit_comp32.c
@ -2949,6 +2956,7 @@ BPF JIT for X86 64-BIT
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/x86/net/
X: arch/x86/net/bpf_jit_comp32.c
@ -3403,9 +3411,8 @@ F: Documentation/media/v4l-drivers/cafe_ccic*
F: drivers/media/platform/marvell-ccic/
CAIF NETWORK LAYER
M: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
L: netdev@vger.kernel.org
S: Supported
S: Orphan
F: Documentation/networking/caif/
F: drivers/net/caif/
F: include/uapi/linux/caif/
@ -4851,10 +4858,11 @@ F: Documentation/devicetree/bindings/display/multi-inno,mi0283qt.txt
DRM DRIVER FOR MSM ADRENO GPU
M: Rob Clark <robdclark@gmail.com>
M: Sean Paul <sean@poorly.run>
L: linux-arm-msm@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: freedreno@lists.freedesktop.org
T: git git://people.freedesktop.org/~robclark/linux
T: git https://gitlab.freedesktop.org/drm/msm.git
S: Maintained
F: drivers/gpu/drm/msm/
F: include/uapi/drm/msm_drm.h
@ -8523,6 +8531,7 @@ L7 BPF FRAMEWORK
M: John Fastabend <john.fastabend@gmail.com>
M: Daniel Borkmann <daniel@iogearbox.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: include/linux/skmsg.h
F: net/core/skmsg.c
@ -11524,7 +11533,7 @@ F: Documentation/blockdev/paride.txt
F: drivers/block/paride/
PARISC ARCHITECTURE
M: "James E.J. Bottomley" <jejb@parisc-linux.org>
M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
M: Helge Deller <deller@gmx.de>
L: linux-parisc@vger.kernel.org
W: http://www.parisc-linux.org/
@ -16750,6 +16759,7 @@ M: Jesper Dangaard Brouer <hawk@kernel.org>
M: John Fastabend <john.fastabend@gmail.com>
L: netdev@vger.kernel.org
L: xdp-newbies@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: net/core/xdp.c
F: include/net/xdp.h
@ -16763,6 +16773,7 @@ XDP SOCKETS (AF_XDP)
M: Björn Töpel <bjorn.topel@intel.com>
M: Magnus Karlsson <magnus.karlsson@intel.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/xskmap.c
F: net/xdp/


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc7
EXTRAVERSION =
NAME = Shy Crocodile
# *DOCUMENTATION*


@ -191,7 +191,6 @@ config NR_CPUS
config ARC_SMP_HALT_ON_RESET
bool "Enable Halt-on-reset boot mode"
default y if ARC_UBOOT_SUPPORT
help
In SMP configuration cores can be configured as Halt-on-reset
or they could all start at the same time. For Halt-on-reset, non
@ -407,6 +406,14 @@ config ARC_HAS_ACCL_REGS
(also referred to as r58:r59). These can also be used by gcc as GPRs, so the
kernel needs to save/restore them per process
config ARC_IRQ_NO_AUTOSAVE
bool "Disable hardware autosave regfile on interrupts"
default n
help
On HS cores, a taken interrupt auto-saves the regfile on the stack.
This is programmable and can be optionally disabled, in which case
software INTERRUPT_PROLOGUE/EPILOGUE do the needed work
endif # ISA_ARCV2
endmenu # "ARC CPU Configuration"
@ -515,17 +522,6 @@ config ARC_DBG_TLB_PARANOIA
endif
config ARC_UBOOT_SUPPORT
bool "Support uboot arg Handling"
help
ARC Linux by default checks for uboot provided args as pointers to
external cmdline or DTB. This however breaks in absence of uboot,
when booting from Metaware debugger directly, as the registers are
not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus
registers look like uboot args to kernel which then chokes.
So only enable the uboot arg checking/processing if users are sure
of uboot being in play.
config ARC_BUILTIN_DTB_NAME
string "Built in DTB"
help


@ -31,7 +31,6 @@ CONFIG_ARC_CACHE_LINE_SHIFT=5
# CONFIG_ARC_HAS_LLSC is not set
CONFIG_ARC_KVADDR_SIZE=402
CONFIG_ARC_EMUL_UNALIGNED=y
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_PREEMPT=y
CONFIG_NET=y
CONFIG_UNIX=y


@ -13,7 +13,6 @@ CONFIG_PARTITION_ADVANCED=y
CONFIG_ARC_PLAT_AXS10X=y
CONFIG_AXS103=y
CONFIG_ISA_ARCV2=y
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38"
CONFIG_PREEMPT=y
CONFIG_NET=y


@ -15,8 +15,6 @@ CONFIG_AXS103=y
CONFIG_ISA_ARCV2=y
CONFIG_SMP=y
# CONFIG_ARC_TIMERS_64BIT is not set
# CONFIG_ARC_SMP_HALT_ON_RESET is not set
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
CONFIG_PREEMPT=y
CONFIG_NET=y


@ -151,6 +151,14 @@ struct bcr_isa_arcv2 {
#endif
};
struct bcr_uarch_build_arcv2 {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:8, prod:8, maj:8, min:8;
#else
unsigned int min:8, maj:8, prod:8, pad:8;
#endif
};
struct bcr_mpy {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:8, x1616:8, dsp:4, cycles:2, type:2, ver:8;


@ -52,6 +52,17 @@
#define cache_line_size() SMP_CACHE_BYTES
#define ARCH_DMA_MINALIGN SMP_CACHE_BYTES
/*
* Make sure slab-allocated buffers are 64-bit aligned when atomic64_t uses
* ARCv2 64-bit atomics (LLOCKD/SCONDD). This guarantees runtime 64-bit
* alignment for any atomic64_t embedded in buffer.
* Default ARCH_SLAB_MINALIGN is __alignof__(long long) which has a relaxed
* value of 4 (and not 8) in ARC ABI.
*/
#if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
#define ARCH_SLAB_MINALIGN 8
#endif
extern void arc_cache_init(void);
extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
extern void read_decode_cache_bcr(void);


@ -17,6 +17,33 @@
;
; Now manually save: r12, sp, fp, gp, r25
#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
.ifnc \called_from, exception
st.as r9, [sp, -10] ; save r9 in its final stack slot
sub sp, sp, 12 ; skip JLI, LDI, EI
PUSH lp_count
PUSHAX lp_start
PUSHAX lp_end
PUSH blink
PUSH r11
PUSH r10
sub sp, sp, 4 ; skip r9
PUSH r8
PUSH r7
PUSH r6
PUSH r5
PUSH r4
PUSH r3
PUSH r2
PUSH r1
PUSH r0
.endif
#endif
#ifdef CONFIG_ARC_HAS_ACCL_REGS
PUSH r59
PUSH r58
@ -86,6 +113,33 @@
POP r59
#endif
#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
.ifnc \called_from, exception
POP r0
POP r1
POP r2
POP r3
POP r4
POP r5
POP r6
POP r7
POP r8
POP r9
POP r10
POP r11
POP blink
POPAX lp_end
POPAX lp_start
POP r9
mov lp_count, r9
add sp, sp, 12 ; skip JLI, LDI, EI
ld.as r9, [sp, -10] ; reload r9 which got clobbered
.endif
#endif
.endm
/*------------------------------------------------------------------------*/


@ -207,7 +207,7 @@ raw_copy_from_user(void *to, const void __user *from, unsigned long n)
*/
"=&r" (tmp), "+r" (to), "+r" (from)
:
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return n;
}
@ -433,7 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
*/
"=&r" (tmp), "+r" (to), "+r" (from)
:
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return n;
}
@ -653,7 +653,7 @@ static inline unsigned long __arc_clear_user(void __user *to, unsigned long n)
" .previous \n"
: "+r"(d_char), "+r"(res)
: "i"(0)
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return res;
}
@ -686,7 +686,7 @@ __arc_strncpy_from_user(char *dst, const char __user *src, long count)
" .previous \n"
: "+r"(res), "+r"(dst), "+r"(src), "=r"(val)
: "g"(-EFAULT), "r"(count)
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return res;
}


@ -209,7 +209,9 @@ restore_regs:
;####### Return from Intr #######
debug_marker_l1:
bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
; bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
btst r0, STATUS_DE_BIT ; Z flag set if bit clear
bnz .Lintr_ret_to_delay_slot ; branch if STATUS_DE_BIT set
.Lisr_ret_fast_path:
; Handle special case #1: (Entry via Exception, Return via IRQ)


@ -17,6 +17,7 @@
#include <asm/entry.h>
#include <asm/arcregs.h>
#include <asm/cache.h>
#include <asm/irqflags.h>
.macro CPU_EARLY_SETUP
@ -47,6 +48,15 @@
sr r5, [ARC_REG_DC_CTRL]
1:
#ifdef CONFIG_ISA_ARCV2
; Unaligned access is disabled at reset, so re-enable early as
; gcc 7.3.1 (ARC GNU 2018.03) onwards generates unaligned access
; by default
lr r5, [status32]
bset r5, r5, STATUS_AD_BIT
kflag r5
#endif
.endm
.section .init.text, "ax",@progbits
@ -90,15 +100,13 @@ ENTRY(stext)
st.ab 0, [r5, 4]
1:
#ifdef CONFIG_ARC_UBOOT_SUPPORT
; Uboot - kernel ABI
; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
; r1 = magic number (board identity, unused as of now
; r1 = magic number (always zero as of now)
; r2 = pointer to uboot provided cmdline or external DTB in mem
; These are handled later in setup_arch()
; These are handled later in handle_uboot_args()
st r0, [@uboot_tag]
st r2, [@uboot_arg]
#endif
; setup "current" tsk and optionally cache it in dedicated r25
mov r9, @init_task


@ -49,11 +49,13 @@ void arc_init_IRQ(void)
*(unsigned int *)&ictrl = 0;
#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE
ictrl.save_nr_gpr_pairs = 6; /* r0 to r11 (r12 saved manually) */
ictrl.save_blink = 1;
ictrl.save_lp_regs = 1; /* LP_COUNT, LP_START, LP_END */
ictrl.save_u_to_u = 0; /* user ctxt saved on kernel stack */
ictrl.save_idx_regs = 1; /* JLI, LDI, EI */
#endif
WRITE_AUX(AUX_IRQ_CTRL, ictrl);


@ -199,20 +199,36 @@ static void read_arc_build_cfg_regs(void)
cpu->bpu.ret_stk = 4 << bpu.rse;
if (cpu->core.family >= 0x54) {
unsigned int exec_ctrl;
READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
cpu->extn.dual_enb = !(exec_ctrl & 1);
struct bcr_uarch_build_arcv2 uarch;
/* dual issue always present for this core */
cpu->extn.dual = 1;
/*
* The first 0x54 core (uarch maj:min 0:1 or 0:2) was
* dual issue only (HS4x). But next uarch rev (1:0)
* allows it to be configured for single issue (HS3x)
* Ensure we fiddle with dual issue only on HS4x
*/
READ_BCR(ARC_REG_MICRO_ARCH_BCR, uarch);
if (uarch.prod == 4) {
unsigned int exec_ctrl;
/* dual issue hardware always present */
cpu->extn.dual = 1;
READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
/* dual issue hardware enabled ? */
cpu->extn.dual_enb = !(exec_ctrl & 1);
}
}
}
READ_BCR(ARC_REG_AP_BCR, ap);
if (ap.ver) {
cpu->extn.ap_num = 2 << ap.num;
cpu->extn.ap_full = !!ap.min;
cpu->extn.ap_full = !ap.min;
}
READ_BCR(ARC_REG_SMART_BCR, bcr);
@ -462,43 +478,78 @@ void setup_processor(void)
arc_chk_core_config();
}
static inline int is_kernel(unsigned long addr)
static inline bool uboot_arg_invalid(unsigned long addr)
{
if (addr >= (unsigned long)_stext && addr <= (unsigned long)_end)
return 1;
return 0;
/*
* Check that it is an untranslated address (although MMU is not enabled
* yet, it being a high address ensures this is not by fluke)
*/
if (addr < PAGE_OFFSET)
return true;
/* Check that address doesn't clobber resident kernel image */
return addr >= (unsigned long)_stext && addr <= (unsigned long)_end;
}
#define IGNORE_ARGS "Ignore U-boot args: "
/* uboot_tag values for U-boot - kernel ABI revision 0; see head.S */
#define UBOOT_TAG_NONE 0
#define UBOOT_TAG_CMDLINE 1
#define UBOOT_TAG_DTB 2
void __init handle_uboot_args(void)
{
bool use_embedded_dtb = true;
bool append_cmdline = false;
/* check that we know this tag */
if (uboot_tag != UBOOT_TAG_NONE &&
uboot_tag != UBOOT_TAG_CMDLINE &&
uboot_tag != UBOOT_TAG_DTB) {
pr_warn(IGNORE_ARGS "invalid uboot tag: '%08x'\n", uboot_tag);
goto ignore_uboot_args;
}
if (uboot_tag != UBOOT_TAG_NONE &&
uboot_arg_invalid((unsigned long)uboot_arg)) {
pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
goto ignore_uboot_args;
}
/* see if U-boot passed an external Device Tree blob */
if (uboot_tag == UBOOT_TAG_DTB) {
machine_desc = setup_machine_fdt((void *)uboot_arg);
/* external Device Tree blob is invalid - use embedded one */
use_embedded_dtb = !machine_desc;
}
if (uboot_tag == UBOOT_TAG_CMDLINE)
append_cmdline = true;
ignore_uboot_args:
if (use_embedded_dtb) {
machine_desc = setup_machine_fdt(__dtb_start);
if (!machine_desc)
panic("Embedded DT invalid\n");
}
/*
* NOTE: @boot_command_line is populated by setup_machine_fdt() so this
* append processing can only happen after.
*/
if (append_cmdline) {
/* Ensure a whitespace between the 2 cmdlines */
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, uboot_arg, COMMAND_LINE_SIZE);
}
}
void __init setup_arch(char **cmdline_p)
{
#ifdef CONFIG_ARC_UBOOT_SUPPORT
/* make sure that uboot passed pointer to cmdline/dtb is valid */
if (uboot_tag && is_kernel((unsigned long)uboot_arg))
panic("Invalid uboot arg\n");
/* See if u-boot passed an external Device Tree blob */
machine_desc = setup_machine_fdt(uboot_arg); /* uboot_tag == 2 */
if (!machine_desc)
#endif
{
/* No, so try the embedded one */
machine_desc = setup_machine_fdt(__dtb_start);
if (!machine_desc)
panic("Embedded DT invalid\n");
/*
* If we are here, it is established that @uboot_arg didn't
* point to DT blob. Instead if u-boot says it is cmdline,
* append to embedded DT cmdline.
* setup_machine_fdt() would have populated @boot_command_line
*/
if (uboot_tag == 1) {
/* Ensure a whitespace between the 2 cmdlines */
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, uboot_arg,
COMMAND_LINE_SIZE);
}
}
handle_uboot_args();
/* Save unparsed command line copy for /proc/cmdline */
*cmdline_p = boot_command_line;


@ -25,15 +25,11 @@
#endif
#ifdef CONFIG_ARC_HAS_LL64
# define PREFETCH_READ(RX) prefetch [RX, 56]
# define PREFETCH_WRITE(RX) prefetchw [RX, 64]
# define LOADX(DST,RX) ldd.ab DST, [RX, 8]
# define STOREX(SRC,RX) std.ab SRC, [RX, 8]
# define ZOLSHFT 5
# define ZOLAND 0x1F
#else
# define PREFETCH_READ(RX) prefetch [RX, 28]
# define PREFETCH_WRITE(RX) prefetchw [RX, 32]
# define LOADX(DST,RX) ld.ab DST, [RX, 4]
# define STOREX(SRC,RX) st.ab SRC, [RX, 4]
# define ZOLSHFT 4
@ -41,8 +37,6 @@
#endif
ENTRY_CFI(memcpy)
prefetch [r1] ; Prefetch the read location
prefetchw [r0] ; Prefetch the write location
mov.f 0, r2
;;; if size is zero
jz.d [blink]
@ -72,8 +66,6 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy32_64bytes
;; LOOP START
LOADX (r6, r1)
PREFETCH_READ (r1)
PREFETCH_WRITE (r3)
LOADX (r8, r1)
LOADX (r10, r1)
LOADX (r4, r1)
@ -117,9 +109,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_1
;; LOOP START
ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 24)
or r7, r7, r5
@ -162,9 +152,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_2
;; LOOP START
ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 16)
or r7, r7, r5
@ -204,9 +192,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_3
;; LOOP START
ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 8)
or r7, r7, r5


@ -9,6 +9,7 @@ menuconfig ARC_SOC_HSDK
bool "ARC HS Development Kit SOC"
depends on ISA_ARCV2
select ARC_HAS_ACCL_REGS
select ARC_IRQ_NO_AUTOSAVE
select CLK_HSDK
select RESET_HSDK
select HAVE_PCI


@ -1400,6 +1400,7 @@ config NR_CPUS
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
select GENERIC_IRQ_MIGRATION
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.


@ -729,7 +729,7 @@
&cpsw_emac0 {
phy-handle = <&ethphy0>;
phy-mode = "rgmii-txid";
phy-mode = "rgmii-id";
};
&tscadc {


@ -651,13 +651,13 @@
&cpsw_emac0 {
phy-handle = <&ethphy0>;
phy-mode = "rgmii-txid";
phy-mode = "rgmii-id";
dual_emac_res_vlan = <1>;
};
&cpsw_emac1 {
phy-handle = <&ethphy1>;
phy-mode = "rgmii-txid";
phy-mode = "rgmii-id";
dual_emac_res_vlan = <2>;
};


@ -144,30 +144,32 @@
status = "okay";
};
nand@d0000 {
nand-controller@d0000 {
status = "okay";
label = "pxa3xx_nand-0";
num-cs = <1>;
marvell,nand-keep-config;
nand-on-flash-bbt;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
nand@0 {
reg = <0>;
label = "pxa3xx_nand-0";
nand-rb = <0>;
nand-on-flash-bbt;
partition@0 {
label = "U-Boot";
reg = <0 0x800000>;
};
partition@800000 {
label = "Linux";
reg = <0x800000 0x800000>;
};
partition@1000000 {
label = "Filesystem";
reg = <0x1000000 0x3f000000>;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "U-Boot";
reg = <0 0x800000>;
};
partition@800000 {
label = "Linux";
reg = <0x800000 0x800000>;
};
partition@1000000 {
label = "Filesystem";
reg = <0x1000000 0x3f000000>;
};
};
};
};


@ -160,12 +160,15 @@
status = "okay";
};
nand@d0000 {
nand-controller@d0000 {
status = "okay";
label = "pxa3xx_nand-0";
num-cs = <1>;
marvell,nand-keep-config;
nand-on-flash-bbt;
nand@0 {
reg = <0>;
label = "pxa3xx_nand-0";
nand-rb = <0>;
nand-on-flash-bbt;
};
};
};


@ -81,49 +81,52 @@
};
nand@d0000 {
nand-controller@d0000 {
status = "okay";
label = "pxa3xx_nand-0";
num-cs = <1>;
marvell,nand-keep-config;
nand-on-flash-bbt;
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
nand@0 {
reg = <0>;
label = "pxa3xx_nand-0";
nand-rb = <0>;
nand-on-flash-bbt;
partition@0 {
label = "u-boot";
reg = <0x00000000 0x000e0000>;
read-only;
};
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
partition@e0000 {
label = "u-boot-env";
reg = <0x000e0000 0x00020000>;
read-only;
};
partition@0 {
label = "u-boot";
reg = <0x00000000 0x000e0000>;
read-only;
};
partition@100000 {
label = "u-boot-env2";
reg = <0x00100000 0x00020000>;
read-only;
};
partition@e0000 {
label = "u-boot-env";
reg = <0x000e0000 0x00020000>;
read-only;
};
partition@120000 {
label = "zImage";
reg = <0x00120000 0x00400000>;
};
partition@100000 {
label = "u-boot-env2";
reg = <0x00100000 0x00020000>;
read-only;
};
partition@520000 {
label = "initrd";
reg = <0x00520000 0x00400000>;
};
partition@120000 {
label = "zImage";
reg = <0x00120000 0x00400000>;
};
partition@e00000 {
label = "boot";
reg = <0x00e00000 0x3f200000>;
partition@520000 {
label = "initrd";
reg = <0x00520000 0x00400000>;
};
partition@e00000 {
label = "boot";
reg = <0x00e00000 0x3f200000>;
};
};
};
};


@ -443,7 +443,7 @@
};
display-controller@6a000000 {
status = "disabled";
status = "okay";
port@0 {
reg = <0>;


@ -13,10 +13,25 @@
stdout-path = "serial0:115200n8";
};
memory@80000000 {
/*
* Note that recent versions of the device tree compiler (starting with
* version 1.4.2) warn about this node containing a reg property, but
* missing a unit-address. However, the bootloader on these Chromebook
* devices relies on the full name of this node to be exactly /memory.
* Adding the unit-address causes the bootloader to create a /memory
* node and write the memory bank configuration to that node, which in
* turn leads the kernel to believe that the device has 2 GiB of
* memory instead of the amount detected by the bootloader.
*
* The name of this node is effectively ABI and must not be changed.
*/
memory {
device_type = "memory";
reg = <0x0 0x80000000 0x0 0x80000000>;
};
/delete-node/ memory@80000000;
host1x@50000000 {
hdmi@54280000 {
status = "okay";


@ -212,10 +212,11 @@ K256:
.global sha256_block_data_order
.type sha256_block_data_order,%function
sha256_block_data_order:
.Lsha256_block_data_order:
#if __ARM_ARCH__<7
sub r3,pc,#8 @ sha256_block_data_order
#else
adr r3,sha256_block_data_order
adr r3,.Lsha256_block_data_order
#endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap


@ -93,10 +93,11 @@ K256:
.global sha256_block_data_order
.type sha256_block_data_order,%function
sha256_block_data_order:
.Lsha256_block_data_order:
#if __ARM_ARCH__<7
sub r3,pc,#8 @ sha256_block_data_order
#else
adr r3,sha256_block_data_order
adr r3,.Lsha256_block_data_order
#endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap


@ -274,10 +274,11 @@ WORD64(0x5fcb6fab,0x3ad6faec, 0x6c44198c,0x4a475817)
.global sha512_block_data_order
.type sha512_block_data_order,%function
sha512_block_data_order:
.Lsha512_block_data_order:
#if __ARM_ARCH__<7
sub r3,pc,#8 @ sha512_block_data_order
#else
adr r3,sha512_block_data_order
adr r3,.Lsha512_block_data_order
#endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap


@ -141,10 +141,11 @@ WORD64(0x5fcb6fab,0x3ad6faec, 0x6c44198c,0x4a475817)
.global sha512_block_data_order
.type sha512_block_data_order,%function
sha512_block_data_order:
.Lsha512_block_data_order:
#if __ARM_ARCH__<7
sub r3,pc,#8 @ sha512_block_data_order
#else
adr r3,sha512_block_data_order
adr r3,.Lsha512_block_data_order
#endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap


@ -25,7 +25,6 @@
#ifndef __ASSEMBLY__
struct irqaction;
struct pt_regs;
extern void migrate_irqs(void);
extern void asm_do_IRQ(unsigned int, struct pt_regs *);
void handle_IRQ(unsigned int, struct pt_regs *);


@ -31,7 +31,6 @@
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/seq_file.h>
#include <linux/ratelimit.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/kallsyms.h>
@ -109,64 +108,3 @@ int __init arch_probe_nr_irqs(void)
return nr_irqs;
}
#endif
#ifdef CONFIG_HOTPLUG_CPU
static bool migrate_one_irq(struct irq_desc *desc)
{
struct irq_data *d = irq_desc_get_irq_data(desc);
const struct cpumask *affinity = irq_data_get_affinity_mask(d);
struct irq_chip *c;
bool ret = false;
/*
* If this is a per-CPU interrupt, or the affinity does not
* include this CPU, then we have nothing to do.
*/
if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
return false;
if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
affinity = cpu_online_mask;
ret = true;
}
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
cpumask_copy(irq_data_get_affinity_mask(d), affinity);
return ret;
}
/*
* The current CPU has been marked offline. Migrate IRQs off this CPU.
* If the affinity settings do not allow other CPUs, force them onto any
* available CPU.
*
* Note: we must iterate over all IRQs, whether they have an attached
* action structure or not, as we need to get chained interrupts too.
*/
void migrate_irqs(void)
{
unsigned int i;
struct irq_desc *desc;
unsigned long flags;
local_irq_save(flags);
for_each_irq_desc(i, desc) {
bool affinity_broken;
raw_spin_lock(&desc->lock);
affinity_broken = migrate_one_irq(desc);
raw_spin_unlock(&desc->lock);
if (affinity_broken)
pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n",
i, smp_processor_id());
}
local_irq_restore(flags);
}
#endif /* CONFIG_HOTPLUG_CPU */


@ -254,7 +254,7 @@ int __cpu_disable(void)
/*
* OK - migrate IRQs away from this CPU
*/
migrate_irqs();
irq_migrate_all_off_this_cpu();
/*
* Flush user cache and TLB mappings, and then remove this CPU


@ -2390,4 +2390,6 @@ void arch_teardown_dma_ops(struct device *dev)
return;
arm_teardown_iommu_dma_ops(dev);
/* Let arch_setup_dma_ops() start again from scratch upon re-probe */
set_dma_ops(dev, NULL);
}


@ -247,7 +247,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *or
}
/* Copy arch-dep-instance from template. */
memcpy(code, (unsigned char *)optprobe_template_entry,
memcpy(code, (unsigned long *)&optprobe_template_entry,
TMPL_END_IDX * sizeof(kprobe_opcode_t));
/* Adjust buffer according to instruction. */


@ -351,7 +351,7 @@
reg = <0>;
pinctrl-names = "default";
pinctrl-0 = <&cp0_copper_eth_phy_reset>;
reset-gpios = <&cp1_gpio1 11 GPIO_ACTIVE_LOW>;
reset-gpios = <&cp0_gpio2 11 GPIO_ACTIVE_LOW>;
reset-assert-us = <10000>;
};


@ -37,7 +37,7 @@
};
memory@86200000 {
reg = <0x0 0x86200000 0x0 0x2600000>;
reg = <0x0 0x86200000 0x0 0x2d00000>;
no-map;
};


@ -158,8 +158,8 @@ ENTRY(hchacha_block_neon)
mov w3, w2
bl chacha_permute
st1 {v0.16b}, [x1], #16
st1 {v3.16b}, [x1]
st1 {v0.4s}, [x1], #16
st1 {v3.4s}, [x1]
ldp x29, x30, [sp], #16
ret
@ -532,6 +532,10 @@ ENTRY(chacha_4block_xor_neon)
add v3.4s, v3.4s, v19.4s
add a2, a2, w8
add a3, a3, w9
CPU_BE( rev a0, a0 )
CPU_BE( rev a1, a1 )
CPU_BE( rev a2, a2 )
CPU_BE( rev a3, a3 )
ld4r {v24.4s-v27.4s}, [x0], #16
ld4r {v28.4s-v31.4s}, [x0]
@ -552,6 +556,10 @@ ENTRY(chacha_4block_xor_neon)
add v7.4s, v7.4s, v23.4s
add a6, a6, w8
add a7, a7, w9
CPU_BE( rev a4, a4 )
CPU_BE( rev a5, a5 )
CPU_BE( rev a6, a6 )
CPU_BE( rev a7, a7 )
// x8[0-3] += s2[0]
// x9[0-3] += s2[1]
@ -569,6 +577,10 @@ ENTRY(chacha_4block_xor_neon)
add v11.4s, v11.4s, v27.4s
add a10, a10, w8
add a11, a11, w9
CPU_BE( rev a8, a8 )
CPU_BE( rev a9, a9 )
CPU_BE( rev a10, a10 )
CPU_BE( rev a11, a11 )
// x12[0-3] += s3[0]
// x13[0-3] += s3[1]
@ -586,6 +598,10 @@ ENTRY(chacha_4block_xor_neon)
add v15.4s, v15.4s, v31.4s
add a14, a14, w8
add a15, a15, w9
CPU_BE( rev a12, a12 )
CPU_BE( rev a13, a13 )
CPU_BE( rev a14, a14 )
CPU_BE( rev a15, a15 )
// interleave 32-bit words in state n, n+1
ldp w6, w7, [x2], #64


@ -36,4 +36,8 @@
#include <arm_neon.h>
#endif
#ifdef CONFIG_CC_IS_CLANG
#pragma clang diagnostic ignored "-Wincompatible-pointer-types"
#endif
#endif /* __ASM_NEON_INTRINSICS_H */


@ -539,8 +539,7 @@ set_hcr:
/* GICv3 system register access */
mrs x0, id_aa64pfr0_el1
ubfx x0, x0, #24, #4
cmp x0, #1
b.ne 3f
cbz x0, 3f
mrs_s x0, SYS_ICC_SRE_EL2
orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1


@ -1702,19 +1702,20 @@ void syscall_trace_exit(struct pt_regs *regs)
}
/*
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
* We also take into account DIT (bit 24), which is not yet documented, and
* treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
* allocated an EL0 meaning in future.
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
* We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
* not described in ARM DDI 0487D.a.
* We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
* be allocated an EL0 meaning in future.
* Userspace cannot use these until they have an architectural meaning.
* Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
* We also reserve IL for the kernel; SS is handled dynamically.
*/
#define SPSR_EL1_AARCH64_RES0_BITS \
(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
#define SPSR_EL1_AARCH32_RES0_BITS \
(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
static int valid_compat_regs(struct user_pt_regs *regs)
{


@ -339,6 +339,9 @@ void __init setup_arch(char **cmdline_p)
smp_init_cpus();
smp_build_mpidr_hash();
/* Init percpu seeds for random tags after cpus are set up. */
kasan_init_tags();
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
/*
* Make sure init_thread_info.ttbr0 always generates translation


@ -252,8 +252,6 @@ void __init kasan_init(void)
memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
kasan_init_tags();
/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n");


@ -70,6 +70,8 @@ static struct platform_device bcm63xx_enet_shared_device = {
static int shared_device_registered;
static u64 enet_dmamask = DMA_BIT_MASK(32);
static struct resource enet0_res[] = {
{
.start = -1, /* filled at runtime */
@ -99,6 +101,8 @@ static struct platform_device bcm63xx_enet0_device = {
.resource = enet0_res,
.dev = {
.platform_data = &enet0_pd,
.dma_mask = &enet_dmamask,
.coherent_dma_mask = DMA_BIT_MASK(32),
},
};
@ -131,6 +135,8 @@ static struct platform_device bcm63xx_enet1_device = {
.resource = enet1_res,
.dev = {
.platform_data = &enet1_pd,
.dma_mask = &enet_dmamask,
.coherent_dma_mask = DMA_BIT_MASK(32),
},
};
@ -157,6 +163,8 @@ static struct platform_device bcm63xx_enetsw_device = {
.resource = enetsw_res,
.dev = {
.platform_data = &enetsw_pd,
.dma_mask = &enet_dmamask,
.coherent_dma_mask = DMA_BIT_MASK(32),
},
};


@ -54,10 +54,9 @@ unsigned long __xchg_small(volatile void *ptr, unsigned long val, unsigned int s
unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
unsigned long new, unsigned int size)
{
u32 mask, old32, new32, load32;
u32 mask, old32, new32, load32, load;
volatile u32 *ptr32;
unsigned int shift;
u8 load;
/* Check that ptr is naturally aligned */
WARN_ON((unsigned long)ptr & (size - 1));


@ -384,7 +384,8 @@ static void __init bootmem_init(void)
init_initrd();
reserved_end = (unsigned long) PFN_UP(__pa_symbol(&_end));
memblock_reserve(PHYS_OFFSET, reserved_end << PAGE_SHIFT);
memblock_reserve(PHYS_OFFSET,
(reserved_end << PAGE_SHIFT) - PHYS_OFFSET);
/*
* max_low_pfn is not a number of pages. The number of pages


@ -31,8 +31,8 @@ static int vmmc_probe(struct platform_device *pdev)
dma_addr_t dma;
cp1_base =
(void *) CPHYSADDR(dma_alloc_coherent(NULL, CP1_SIZE,
&dma, GFP_ATOMIC));
(void *) CPHYSADDR(dma_alloc_coherent(&pdev->dev, CP1_SIZE,
&dma, GFP_KERNEL));
gpio_count = of_gpio_count(pdev->dev.of_node);
while (gpio_count > 0) {


@ -79,8 +79,6 @@ enum reg_val_type {
REG_64BIT_32BIT,
/* 32-bit compatible, need truncation for 64-bit ops. */
REG_32BIT,
/* 32-bit zero extended. */
REG_32BIT_ZERO_EX,
/* 32-bit no sign/zero extension needed. */
REG_32BIT_POS
};
@ -343,12 +341,15 @@ static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg)
const struct bpf_prog *prog = ctx->skf;
int stack_adjust = ctx->stack_size;
int store_offset = stack_adjust - 8;
enum reg_val_type td;
int r0 = MIPS_R_V0;
if (dest_reg == MIPS_R_RA &&
get_reg_val_type(ctx, prog->len, BPF_REG_0) == REG_32BIT_ZERO_EX)
if (dest_reg == MIPS_R_RA) {
/* Don't let zero extended value escape. */
emit_instr(ctx, sll, r0, r0, 0);
td = get_reg_val_type(ctx, prog->len, BPF_REG_0);
if (td == REG_64BIT)
emit_instr(ctx, sll, r0, r0, 0);
}
if (ctx->flags & EBPF_SAVE_RA) {
emit_instr(ctx, ld, MIPS_R_RA, store_offset, MIPS_R_SP);
@ -692,7 +693,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (dst < 0)
return dst;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) {
if (td == REG_64BIT) {
/* sign extend */
emit_instr(ctx, sll, dst, dst, 0);
}
@ -707,7 +708,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (dst < 0)
return dst;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) {
if (td == REG_64BIT) {
/* sign extend */
emit_instr(ctx, sll, dst, dst, 0);
}
@ -721,7 +722,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (dst < 0)
return dst;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX)
if (td == REG_64BIT)
/* sign extend */
emit_instr(ctx, sll, dst, dst, 0);
if (insn->imm == 1) {
@ -860,13 +861,13 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (src < 0 || dst < 0)
return -EINVAL;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) {
if (td == REG_64BIT) {
/* sign extend */
emit_instr(ctx, sll, dst, dst, 0);
}
did_move = false;
ts = get_reg_val_type(ctx, this_idx, insn->src_reg);
if (ts == REG_64BIT || ts == REG_32BIT_ZERO_EX) {
if (ts == REG_64BIT) {
int tmp_reg = MIPS_R_AT;
if (bpf_op == BPF_MOV) {
@ -1254,8 +1255,7 @@ jeq_common:
if (insn->imm == 64 && td == REG_32BIT)
emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32);
if (insn->imm != 64 &&
(td == REG_64BIT || td == REG_32BIT_ZERO_EX)) {
if (insn->imm != 64 && td == REG_64BIT) {
/* sign extend */
emit_instr(ctx, sll, dst, dst, 0);
}
@ -1819,7 +1819,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
/* Update the icache */
flush_icache_range((unsigned long)ctx.target,
(unsigned long)(ctx.target + ctx.idx * sizeof(u32)));
(unsigned long)&ctx.target[ctx.idx]);
if (bpf_jit_enable > 1)
/* Dump JIT code */


@ -308,15 +308,29 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
long do_syscall_trace_enter(struct pt_regs *regs)
{
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs)) {
if (test_thread_flag(TIF_SYSCALL_TRACE)) {
int rc = tracehook_report_syscall_entry(regs);
/*
* Tracing decided this syscall should not happen or the
* debugger stored an invalid system call number. Skip
* the system call and the system call restart handling.
* As tracesys_next does not set %r28 to -ENOSYS
* when %r20 is set to -1, initialize it here.
*/
regs->gr[20] = -1UL;
goto out;
regs->gr[28] = -ENOSYS;
if (rc) {
/*
* A nonzero return code from
* tracehook_report_syscall_entry() tells us
* to prevent the syscall execution. Skip
* the syscall call and the syscall restart handling.
*
* Note that the tracer may also just change
* regs->gr[20] to an invalid syscall number,
* that is handled by tracesys_next.
*/
regs->gr[20] = -1UL;
return -1;
}
}
/* Do the secure computing check after ptrace. */
@ -340,7 +354,6 @@ long do_syscall_trace_enter(struct pt_regs *regs)
regs->gr[24] & 0xffffffff,
regs->gr[23] & 0xffffffff);
out:
/*
* Sign extend the syscall number to 64bit since it may have been
* modified by a compat ptrace call


@ -1593,6 +1593,8 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
pnv_pci_ioda2_setup_dma_pe(phb, pe);
#ifdef CONFIG_IOMMU_API
iommu_register_group(&pe->table_group,
pe->phb->hose->global_number, pe->pe_number);
pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL);
#endif
}


@ -1147,6 +1147,8 @@ static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
return 0;
pe = &phb->ioda.pe_array[pdn->pe_number];
if (!pe->table_group.group)
return 0;
iommu_add_device(&pe->table_group, dev);
return 0;
case BUS_NOTIFY_DEL_DEVICE:


@ -297,7 +297,7 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->crycbd = 0;
apie_h = vcpu->arch.sie_block->eca & ECA_APIE;
if (!apie_h && !key_msk)
if (!apie_h && (!key_msk || fmt_o == CRYCB_FORMAT0))
return 0;
if (!crycb_addr)


@ -1,3 +1,3 @@
ifneq ($(CONFIG_BUILTIN_DTB_SOURCE),"")
obj-y += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o
obj-$(CONFIG_USE_BUILTIN_DTB) += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o
endif


@ -841,7 +841,7 @@ union hv_gpa_page_range {
* count is equal to how many entries of union hv_gpa_page_range can
* be populated into the input parameter page.
*/
#define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) / \
#define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) / \
sizeof(union hv_gpa_page_range))
struct hv_guest_mapping_flush_list {
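
The added parentheses matter because division binds tighter than subtraction:
without them the macro computes PAGE_SIZE - 2 instead of dividing the space
left after the 16-byte header by the entry size. A small userspace
demonstration (the PAGE_SIZE value and the 8-byte stand-in for union
hv_gpa_page_range are assumptions)::

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096
typedef uint64_t gpa_page_range;	/* stand-in: the real union is 8 bytes */

#define COUNT_BROKEN (PAGE_SIZE - 2 * sizeof(uint64_t) / sizeof(gpa_page_range))
#define COUNT_FIXED  ((PAGE_SIZE - 2 * sizeof(uint64_t)) / sizeof(gpa_page_range))

int main(void)
{
	/* broken: 4096 - (16 / 8) = 4094 "entries", far too many */
	printf("broken: %zu\n", (size_t)COUNT_BROKEN);
	/* fixed: (4096 - 16) / 8 = 510 entries fit in the page */
	printf("fixed:  %zu\n", (size_t)COUNT_FIXED);
	return 0;
}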


@ -299,6 +299,7 @@ union kvm_mmu_extended_role {
unsigned int cr4_smap:1;
unsigned int cr4_smep:1;
unsigned int cr4_la57:1;
unsigned int maxphyaddr:6;
};
};
@ -397,6 +398,7 @@ struct kvm_mmu {
void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
u64 *spte, const void *pte);
hpa_t root_hpa;
gpa_t root_cr3;
union kvm_mmu_role mmu_role;
u8 root_level;
u8 shadow_root_level;


@ -284,7 +284,7 @@ do { \
__put_user_goto(x, ptr, "l", "k", "ir", label); \
break; \
case 8: \
__put_user_goto_u64((__typeof__(*ptr))(x), ptr, label); \
__put_user_goto_u64(x, ptr, label); \
break; \
default: \
__put_user_bad(); \
@ -431,8 +431,10 @@ do { \
({ \
__label__ __pu_label; \
int __pu_err = -EFAULT; \
__typeof__(*(ptr)) __pu_val; \
__pu_val = x; \
__uaccess_begin(); \
__put_user_size((x), (ptr), (size), __pu_label); \
__put_user_size(__pu_val, (ptr), (size), __pu_label); \
__pu_err = 0; \
__pu_label: \
__uaccess_end(); \


@ -335,6 +335,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
unsigned f_la57 = 0;
/* cpuid 1.edx */
const u32 kvm_cpuid_1_edx_x86_features =
@ -489,7 +490,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
// TSC_ADJUST is emulated
entry->ebx |= F(TSC_ADJUST);
entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
f_la57 = entry->ecx & F(LA57);
cpuid_mask(&entry->ecx, CPUID_7_ECX);
/* Set LA57 based on hardware capability. */
entry->ecx |= f_la57;
entry->ecx |= f_umip;
/* PKU is not yet implemented for shadow paging. */
if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
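cpuid_mask() filters a leaf through the host kernel's own feature bits, and a host that does not itself use 5-level paging reports LA57 as disabled even on capable hardware; capturing the raw bit before the mask and OR-ing it back afterwards ties guest-visible LA57 to hardware capability. A compact restatement of the save/restore-around-mask pattern from the hunk:

    u32 f_la57 = entry->ecx & F(LA57);    /* raw hardware capability bit */
    entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
    cpuid_mask(&entry->ecx, CPUID_7_ECX); /* may clear LA57 if the host
                                           * kernel does not use it */
    entry->ecx |= f_la57;                 /* restore from hardware */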


@ -3555,6 +3555,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
&invalid_list);
mmu->root_hpa = INVALID_PAGE;
}
mmu->root_cr3 = 0;
}
kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
@ -3610,6 +3611,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
} else
BUG();
vcpu->arch.mmu->root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
return 0;
}
@ -3618,10 +3620,11 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
{
struct kvm_mmu_page *sp;
u64 pdptr, pm_mask;
gfn_t root_gfn;
gfn_t root_gfn, root_cr3;
int i;
root_gfn = vcpu->arch.mmu->get_cr3(vcpu) >> PAGE_SHIFT;
root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
root_gfn = root_cr3 >> PAGE_SHIFT;
if (mmu_check_root(vcpu, root_gfn))
return 1;
@ -3646,7 +3649,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu->root_hpa = root;
return 0;
goto set_root_cr3;
}
/*
@ -3712,6 +3715,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root);
}
set_root_cr3:
vcpu->arch.mmu->root_cr3 = root_cr3;
return 0;
}
@ -4163,7 +4169,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
struct kvm_mmu_root_info root;
struct kvm_mmu *mmu = vcpu->arch.mmu;
root.cr3 = mmu->get_cr3(vcpu);
root.cr3 = mmu->root_cr3;
root.hpa = mmu->root_hpa;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
@ -4176,6 +4182,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
}
mmu->root_hpa = root.hpa;
mmu->root_cr3 = root.cr3;
return i < KVM_MMU_NUM_PREV_ROOTS;
}
@ -4770,6 +4777,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
ext.cr4_pse = !!is_pse(vcpu);
ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
ext.valid = 1;
@ -5516,11 +5524,13 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
vcpu->arch.root_mmu.root_hpa = INVALID_PAGE;
vcpu->arch.root_mmu.root_cr3 = 0;
vcpu->arch.root_mmu.translate_gpa = translate_gpa;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
vcpu->arch.guest_mmu.root_hpa = INVALID_PAGE;
vcpu->arch.guest_mmu.root_cr3 = 0;
vcpu->arch.guest_mmu.translate_gpa = translate_gpa;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
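The new root_cr3 field records which guest CR3 a cached shadow root was built for. The previous-roots check used to compare against mmu->get_cr3(vcpu), i.e. whatever CR3 the vCPU holds at lookup time, which can be stale across root/guest MMU transitions; matching on the recorded value keeps the fast-CR3-switch cache sound. A minimal sketch of the rule (names hypothetical):

    struct cached_root {
            hpa_t hpa;  /* shadow root page */
            gpa_t cr3;  /* the guest CR3 it was built for */
    };

    static bool root_usable(const struct cached_root *r, gpa_t new_cr3)
    {
            /* compare against the recorded CR3, never the current one */
            return r->hpa != INVALID_PAGE && r->cr3 == new_cr3;
    }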


@ -117,67 +117,11 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
}
EXPORT_SYMBOL_GPL(ex_handler_fprestore);
/* Helper to check whether a uaccess fault indicates a kernel bug. */
static bool bogus_uaccess(struct pt_regs *regs, int trapnr,
unsigned long fault_addr)
{
/* This is the normal case: #PF with a fault address in userspace. */
if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX)
return false;
/*
* This code can be reached for machine checks, but only if the #MC
* handler has already decided that it looks like a candidate for fixup.
* This e.g. happens when attempting to access userspace memory which
* the CPU can't access because of uncorrectable bad memory.
*/
if (trapnr == X86_TRAP_MC)
return false;
/*
* There are two remaining exception types we might encounter here:
* - #PF for faulting accesses to kernel addresses
* - #GP for faulting accesses to noncanonical addresses
* Complain about anything else.
*/
if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) {
WARN(1, "unexpected trap %d in uaccess\n", trapnr);
return false;
}
/*
* This is a faulting memory access in kernel space, on a kernel
* address, in a usercopy function. This can e.g. be caused by improper
* use of helpers like __put_user and by improper attempts to access
* userspace addresses in KERNEL_DS regions.
* The one (semi-)legitimate exception are probe_kernel_{read,write}(),
* which can be invoked from places like kgdb, /dev/mem (for reading)
* and privileged BPF code (for reading).
* The probe_kernel_*() functions set the kernel_uaccess_faults_ok flag
* to tell us that faulting on kernel addresses, and even noncanonical
* addresses, in a userspace accessor does not necessarily imply a
* kernel bug, root might just be doing weird stuff.
*/
if (current->kernel_uaccess_faults_ok)
return false;
/* This is bad. Refuse the fixup so that we go into die(). */
if (trapnr == X86_TRAP_PF) {
pr_emerg("BUG: pagefault on kernel address 0x%lx in non-whitelisted uaccess\n",
fault_addr);
} else {
pr_emerg("BUG: GPF in non-whitelisted uaccess (non-canonical address?)\n");
}
return true;
}
__visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
struct pt_regs *regs, int trapnr,
unsigned long error_code,
unsigned long fault_addr)
{
if (bogus_uaccess(regs, trapnr, fault_addr))
return false;
regs->ip = ex_fixup_addr(fixup);
return true;
}
@ -188,8 +132,6 @@ __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
unsigned long error_code,
unsigned long fault_addr)
{
if (bogus_uaccess(regs, trapnr, fault_addr))
return false;
/* Special hack for uaccess_err */
current->thread.uaccess_err = 1;
regs->ip = ex_fixup_addr(fixup);


@ -122,8 +122,10 @@ static void alg_do_release(const struct af_alg_type *type, void *private)
int af_alg_release(struct socket *sock)
{
if (sock->sk)
if (sock->sk) {
sock_put(sock->sk);
sock->sk = NULL;
}
return 0;
}
EXPORT_SYMBOL_GPL(af_alg_release);
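Nulling sock->sk right after the final sock_put() makes any later release of the same socket a no-op instead of a second put on freed memory; the pointer dies together with the reference it held:

    if (sock->sk) {
            sock_put(sock->sk); /* may free the sk */
            sock->sk = NULL;    /* later callers see NULL, not a stale sk */
    }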


@ -95,7 +95,7 @@ static void __update_runtime_status(struct device *dev, enum rpm_status status)
static void pm_runtime_deactivate_timer(struct device *dev)
{
if (dev->power.timer_expires > 0) {
hrtimer_cancel(&dev->power.suspend_timer);
hrtimer_try_to_cancel(&dev->power.suspend_timer);
dev->power.timer_expires = 0;
}
}
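hrtimer_cancel() busy-waits until a concurrently running timer callback has finished, so calling it while holding a lock that the callback also takes can deadlock; hrtimer_try_to_cancel() returns immediately instead of waiting. pm_runtime_deactivate_timer() runs under dev->power.lock with interrupts off, which is exactly that situation. A hedged sketch of the rule:

    spin_lock_irqsave(&dev->power.lock, flags);
    /* The suspend timer callback takes dev->power.lock as well.
     * hrtimer_cancel() would spin here waiting for a callback that is
     * itself spinning on this lock; hrtimer_try_to_cancel() merely
     * reports that the callback is running and never blocks. */
    hrtimer_try_to_cancel(&dev->power.suspend_timer);
    dev->power.timer_expires = 0;
    spin_unlock_irqrestore(&dev->power.lock, flags);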


@ -144,8 +144,7 @@ static void __init at91sam9x5_pmc_setup(struct device_node *np,
return;
at91sam9x5_pmc = pmc_data_allocate(PMC_MAIN + 1,
nck(at91sam9x5_systemck),
nck(at91sam9x35_periphck), 0);
nck(at91sam9x5_systemck), 31, 0);
if (!at91sam9x5_pmc)
return;
@ -210,7 +209,7 @@ static void __init at91sam9x5_pmc_setup(struct device_node *np,
parent_names[1] = "mainck";
parent_names[2] = "plladivck";
parent_names[3] = "utmick";
parent_names[4] = "mck";
parent_names[4] = "masterck";
for (i = 0; i < 2; i++) {
char name[6];


@ -240,7 +240,7 @@ static void __init sama5d2_pmc_setup(struct device_node *np)
parent_names[1] = "mainck";
parent_names[2] = "plladivck";
parent_names[3] = "utmick";
parent_names[4] = "mck";
parent_names[4] = "masterck";
for (i = 0; i < 3; i++) {
char name[6];
@ -291,7 +291,7 @@ static void __init sama5d2_pmc_setup(struct device_node *np)
parent_names[1] = "mainck";
parent_names[2] = "plladivck";
parent_names[3] = "utmick";
parent_names[4] = "mck";
parent_names[4] = "masterck";
parent_names[5] = "audiopll_pmcck";
for (i = 0; i < ARRAY_SIZE(sama5d2_gck); i++) {
hw = at91_clk_register_generated(regmap, &pmc_pcr_lock,


@ -207,7 +207,7 @@ static void __init sama5d4_pmc_setup(struct device_node *np)
parent_names[1] = "mainck";
parent_names[2] = "plladivck";
parent_names[3] = "utmick";
parent_names[4] = "mck";
parent_names[4] = "masterck";
for (i = 0; i < 3; i++) {
char name[6];


@ -264,9 +264,9 @@ static SUNXI_CCU_GATE(ahb1_mmc1_clk, "ahb1-mmc1", "ahb1",
static SUNXI_CCU_GATE(ahb1_mmc2_clk, "ahb1-mmc2", "ahb1",
0x060, BIT(10), 0);
static SUNXI_CCU_GATE(ahb1_mmc3_clk, "ahb1-mmc3", "ahb1",
0x060, BIT(12), 0);
0x060, BIT(11), 0);
static SUNXI_CCU_GATE(ahb1_nand1_clk, "ahb1-nand1", "ahb1",
0x060, BIT(13), 0);
0x060, BIT(12), 0);
static SUNXI_CCU_GATE(ahb1_nand0_clk, "ahb1-nand0", "ahb1",
0x060, BIT(13), 0);
static SUNXI_CCU_GATE(ahb1_sdram_clk, "ahb1-sdram", "ahb1",


@ -542,7 +542,7 @@ static struct ccu_reset_map sun8i_v3s_ccu_resets[] = {
[RST_BUS_OHCI0] = { 0x2c0, BIT(29) },
[RST_BUS_VE] = { 0x2c4, BIT(0) },
[RST_BUS_TCON0] = { 0x2c4, BIT(3) },
[RST_BUS_TCON0] = { 0x2c4, BIT(4) },
[RST_BUS_CSI] = { 0x2c4, BIT(8) },
[RST_BUS_DE] = { 0x2c4, BIT(12) },
[RST_BUS_DBG] = { 0x2c4, BIT(31) },


@ -187,8 +187,8 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
cpufreq_cooling_unregister(priv->cdev);
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
kfree(priv);
dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
kfree(priv);
return 0;
}
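The reordering above fixes a use-after-free: the old exit path freed priv and then dereferenced it to reach priv->cpu_dev. The corrected sequence releases everything reachable through priv first and frees the structure last:

    cpufreq_cooling_unregister(priv->cdev);
    dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
    dev_pm_opp_remove_all_dynamic(priv->cpu_dev); /* last use of priv */
    kfree(priv);                                  /* safe only after all
                                                   * uses are done */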


@ -30,7 +30,7 @@ static inline int cc_pm_init(struct cc_drvdata *drvdata)
return 0;
}
static void cc_pm_go(struct cc_drvdata *drvdata) {}
static inline void cc_pm_go(struct cc_drvdata *drvdata) {}
static inline void cc_pm_fini(struct cc_drvdata *drvdata) {}


@ -30,6 +30,7 @@
#define GPIO_REG_EDGE 0xA0
struct mtk_gc {
struct irq_chip irq_chip;
struct gpio_chip chip;
spinlock_t lock;
int bank;
@ -189,13 +190,6 @@ mediatek_gpio_irq_type(struct irq_data *d, unsigned int type)
return 0;
}
static struct irq_chip mediatek_gpio_irq_chip = {
.irq_unmask = mediatek_gpio_irq_unmask,
.irq_mask = mediatek_gpio_irq_mask,
.irq_mask_ack = mediatek_gpio_irq_mask,
.irq_set_type = mediatek_gpio_irq_type,
};
static int
mediatek_gpio_xlate(struct gpio_chip *chip,
const struct of_phandle_args *spec, u32 *flags)
@ -254,6 +248,13 @@ mediatek_gpio_bank_probe(struct device *dev,
return ret;
}
rg->irq_chip.name = dev_name(dev);
rg->irq_chip.parent_device = dev;
rg->irq_chip.irq_unmask = mediatek_gpio_irq_unmask;
rg->irq_chip.irq_mask = mediatek_gpio_irq_mask;
rg->irq_chip.irq_mask_ack = mediatek_gpio_irq_mask;
rg->irq_chip.irq_set_type = mediatek_gpio_irq_type;
if (mtk->gpio_irq) {
/*
* Manually request the irq here instead of passing
@ -270,14 +271,14 @@ mediatek_gpio_bank_probe(struct device *dev,
return ret;
}
ret = gpiochip_irqchip_add(&rg->chip, &mediatek_gpio_irq_chip,
ret = gpiochip_irqchip_add(&rg->chip, &rg->irq_chip,
0, handle_simple_irq, IRQ_TYPE_NONE);
if (ret) {
dev_err(dev, "failed to add gpiochip_irqchip\n");
return ret;
}
gpiochip_set_chained_irqchip(&rg->chip, &mediatek_gpio_irq_chip,
gpiochip_set_chained_irqchip(&rg->chip, &rg->irq_chip,
mtk->gpio_irq, NULL);
}
@ -310,7 +311,6 @@ mediatek_gpio_probe(struct platform_device *pdev)
mtk->gpio_irq = irq_of_parse_and_map(np, 0);
mtk->dev = dev;
platform_set_drvdata(pdev, mtk);
mediatek_gpio_irq_chip.name = dev_name(dev);
for (i = 0; i < MTK_BANK_CNT; i++) {
ret = mediatek_gpio_bank_probe(dev, np, i);
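Embedding the irq_chip in struct mtk_gc gives every GPIO bank its own chip instance, so per-device fields such as .name and the new .parent_device are no longer stamped onto a file-scope static that all banks share. A hedged sketch of the per-instance pattern (names hypothetical):

    struct bank {
            struct irq_chip chip; /* one chip per bank, never shared */
    };

    static void bank_setup(struct device *dev, struct bank *b)
    {
            /* safe: writes affect only this bank's chip */
            b->chip.name = dev_name(dev);
            b->chip.parent_device = dev;
    }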


@ -245,6 +245,7 @@ static bool pxa_gpio_has_pinctrl(void)
{
switch (gpio_type) {
case PXA3XX_GPIO:
case MMP2_GPIO:
return false;
default:


@ -411,6 +411,8 @@ struct amdgpu_fpriv {
struct amdgpu_ctx_mgr ctx_mgr;
};
int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv);
int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
unsigned size, struct amdgpu_ib *ib);
void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,


@ -131,7 +131,7 @@ static void amdgpu_doorbell_get_kfd_info(struct amdgpu_device *adev,
void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
{
int i, n;
int i;
int last_valid_bit;
if (adev->kfd.dev) {
@ -142,7 +142,9 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
.gpuvm_size = min(adev->vm_manager.max_pfn
<< AMDGPU_GPU_PAGE_SHIFT,
AMDGPU_GMC_HOLE_START),
.drm_render_minor = adev->ddev->render->index
.drm_render_minor = adev->ddev->render->index,
.sdma_doorbell_idx = adev->doorbell_index.sdma_engine,
};
/* this is going to have a few of the MSBs set that we need to
@ -172,35 +174,20 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
&gpu_resources.doorbell_aperture_size,
&gpu_resources.doorbell_start_offset);
if (adev->asic_type < CHIP_VEGA10) {
kgd2kfd_device_init(adev->kfd.dev, &gpu_resources);
return;
}
n = (adev->asic_type < CHIP_VEGA20) ? 2 : 8;
for (i = 0; i < n; i += 2) {
/* On SOC15 the BIF is involved in routing
* doorbells using the low 12 bits of the
* address. Communicate the assignments to
* KFD. KFD uses two doorbell pages per
* process in case of 64-bit doorbells so we
* can use each doorbell assignment twice.
*/
gpu_resources.sdma_doorbell[0][i] =
adev->doorbell_index.sdma_engine[0] + (i >> 1);
gpu_resources.sdma_doorbell[0][i+1] =
adev->doorbell_index.sdma_engine[0] + 0x200 + (i >> 1);
gpu_resources.sdma_doorbell[1][i] =
adev->doorbell_index.sdma_engine[1] + (i >> 1);
gpu_resources.sdma_doorbell[1][i+1] =
adev->doorbell_index.sdma_engine[1] + 0x200 + (i >> 1);
}
/* Doorbells 0x0e0-0ff and 0x2e0-2ff are reserved for
* SDMA, IH and VCN. So don't use them for the CP.
/* Since SOC15, the BIF statically uses the
* lower 12 bits of doorbell addresses for routing
* based on settings in registers like
* SDMA0_DOORBELL_RANGE etc.
* In order to route a doorbell to the CP engine, the lower
* 12 bits of its address have to be outside the ranges
* set for the SDMA, VCN, and IH blocks.
*/
gpu_resources.reserved_doorbell_mask = 0x1e0;
gpu_resources.reserved_doorbell_val = 0x0e0;
if (adev->asic_type >= CHIP_VEGA10) {
gpu_resources.non_cp_doorbells_start =
adev->doorbell_index.first_non_cp;
gpu_resources.non_cp_doorbells_end =
adev->doorbell_index.last_non_cp;
}
kgd2kfd_device_init(adev->kfd.dev, &gpu_resources);
}


@ -204,38 +204,25 @@ void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
}
/* amdgpu_amdkfd_remove_eviction_fence - Removes eviction fence(s) from BO's
/* amdgpu_amdkfd_remove_eviction_fence - Removes eviction fence from BO's
* reservation object.
*
* @bo: [IN] Remove eviction fence(s) from this BO
* @ef: [IN] If ef is specified, then this eviction fence is removed if it
* @ef: [IN] This eviction fence is removed if it
* is present in the shared list.
* @ef_list: [OUT] Returns list of eviction fences. These fences are removed
* from BO's reservation object shared list.
* @ef_count: [OUT] Number of fences in ef_list.
*
* NOTE: If called with ef_list, then amdgpu_amdkfd_add_eviction_fence must be
* called to restore the eviction fences and to avoid memory leak. This is
* useful for shared BOs.
* NOTE: Must be called with BO reserved i.e. bo->tbo.resv->lock held.
*/
static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
struct amdgpu_amdkfd_fence *ef,
struct amdgpu_amdkfd_fence ***ef_list,
unsigned int *ef_count)
struct amdgpu_amdkfd_fence *ef)
{
struct reservation_object *resv = bo->tbo.resv;
struct reservation_object_list *old, *new;
unsigned int i, j, k;
if (!ef && !ef_list)
if (!ef)
return -EINVAL;
if (ef_list) {
*ef_list = NULL;
*ef_count = 0;
}
old = reservation_object_get_list(resv);
if (!old)
return 0;
@ -254,8 +241,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
f = rcu_dereference_protected(old->shared[i],
reservation_object_held(resv));
if ((ef && f->context == ef->base.context) ||
(!ef && to_amdgpu_amdkfd_fence(f)))
if (f->context == ef->base.context)
RCU_INIT_POINTER(new->shared[--j], f);
else
RCU_INIT_POINTER(new->shared[k++], f);
@ -263,21 +249,6 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
new->shared_max = old->shared_max;
new->shared_count = k;
if (!ef) {
unsigned int count = old->shared_count - j;
/* Alloc memory for count number of eviction fence pointers.
* Fill the ef_list array and ef_count
*/
*ef_list = kcalloc(count, sizeof(**ef_list), GFP_KERNEL);
*ef_count = count;
if (!*ef_list) {
kfree(new);
return -ENOMEM;
}
}
/* Install the new fence list, seqcount provides the barriers */
preempt_disable();
write_seqcount_begin(&resv->seq);
@ -291,46 +262,13 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
f = rcu_dereference_protected(new->shared[i],
reservation_object_held(resv));
if (!ef)
(*ef_list)[k++] = to_amdgpu_amdkfd_fence(f);
else
dma_fence_put(f);
dma_fence_put(f);
}
kfree_rcu(old, rcu);
return 0;
}
/* amdgpu_amdkfd_add_eviction_fence - Adds eviction fence(s) back into BO's
* reservation object.
*
* @bo: [IN] Add eviction fences to this BO
* @ef_list: [IN] List of eviction fences to be added
* @ef_count: [IN] Number of fences in ef_list.
*
* NOTE: Must call amdgpu_amdkfd_remove_eviction_fence before calling this
* function.
*/
static void amdgpu_amdkfd_add_eviction_fence(struct amdgpu_bo *bo,
struct amdgpu_amdkfd_fence **ef_list,
unsigned int ef_count)
{
int i;
if (!ef_list || !ef_count)
return;
for (i = 0; i < ef_count; i++) {
amdgpu_bo_fence(bo, &ef_list[i]->base, true);
/* Re-adding the fence takes an additional reference. Drop that
* reference.
*/
dma_fence_put(&ef_list[i]->base);
}
kfree(ef_list);
}
static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
bool wait)
{
@ -346,18 +284,8 @@ static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
if (ret)
goto validate_fail;
if (wait) {
struct amdgpu_amdkfd_fence **ef_list;
unsigned int ef_count;
ret = amdgpu_amdkfd_remove_eviction_fence(bo, NULL, &ef_list,
&ef_count);
if (ret)
goto validate_fail;
ttm_bo_wait(&bo->tbo, false, false);
amdgpu_amdkfd_add_eviction_fence(bo, ef_list, ef_count);
}
if (wait)
amdgpu_bo_sync_wait(bo, AMDGPU_FENCE_OWNER_KFD, false);
validate_fail:
return ret;
@ -444,7 +372,6 @@ static int add_bo_to_vm(struct amdgpu_device *adev, struct kgd_mem *mem,
{
int ret;
struct kfd_bo_va_list *bo_va_entry;
struct amdgpu_bo *pd = vm->root.base.bo;
struct amdgpu_bo *bo = mem->bo;
uint64_t va = mem->va;
struct list_head *list_bo_va = &mem->bo_va_list;
@ -484,14 +411,8 @@ static int add_bo_to_vm(struct amdgpu_device *adev, struct kgd_mem *mem,
*p_bo_va_entry = bo_va_entry;
/* Allocate new page tables if needed and validate
* them. Clearing of new page tables and validate need to wait
* on move fences. We don't want that to trigger the eviction
* fence, so remove it temporarily.
* them.
*/
amdgpu_amdkfd_remove_eviction_fence(pd,
vm->process_info->eviction_fence,
NULL, NULL);
ret = amdgpu_vm_alloc_pts(adev, vm, va, amdgpu_bo_size(bo));
if (ret) {
pr_err("Failed to allocate pts, err=%d\n", ret);
@ -504,13 +425,9 @@ static int add_bo_to_vm(struct amdgpu_device *adev, struct kgd_mem *mem,
goto err_alloc_pts;
}
/* Add the eviction fence back */
amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true);
return 0;
err_alloc_pts:
amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true);
amdgpu_vm_bo_rmv(adev, bo_va_entry->bo_va);
list_del(&bo_va_entry->bo_list);
err_vmadd:
@ -809,24 +726,11 @@ static int unmap_bo_from_gpuvm(struct amdgpu_device *adev,
{
struct amdgpu_bo_va *bo_va = entry->bo_va;
struct amdgpu_vm *vm = bo_va->base.vm;
struct amdgpu_bo *pd = vm->root.base.bo;
/* Remove eviction fence from PD (and thereby from PTs too as
* they share the resv. object). Otherwise during PT update
* job (see amdgpu_vm_bo_update_mapping), eviction fence would
* get added to job->sync object and job execution would
* trigger the eviction fence.
*/
amdgpu_amdkfd_remove_eviction_fence(pd,
vm->process_info->eviction_fence,
NULL, NULL);
amdgpu_vm_bo_unmap(adev, bo_va, entry->va);
amdgpu_vm_clear_freed(adev, vm, &bo_va->last_pt_update);
/* Add the eviction fence back */
amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true);
amdgpu_sync_fence(NULL, sync, bo_va->last_pt_update, false);
return 0;
@ -1002,7 +906,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
pr_err("validate_pt_pd_bos() failed\n");
goto validate_pd_fail;
}
ret = ttm_bo_wait(&vm->root.base.bo->tbo, false, false);
ret = amdgpu_bo_sync_wait(vm->root.base.bo, AMDGPU_FENCE_OWNER_KFD, false);
if (ret)
goto wait_pd_fail;
amdgpu_bo_fence(vm->root.base.bo,
@ -1389,8 +1293,7 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
* attached
*/
amdgpu_amdkfd_remove_eviction_fence(mem->bo,
process_info->eviction_fence,
NULL, NULL);
process_info->eviction_fence);
pr_debug("Release VA 0x%llx - 0x%llx\n", mem->va,
mem->va + bo_size * (1 + mem->aql_queue));
@ -1617,8 +1520,7 @@ int amdgpu_amdkfd_gpuvm_unmap_memory_from_gpu(
if (mem->mapped_to_gpu_memory == 0 &&
!amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm) && !mem->bo->pin_count)
amdgpu_amdkfd_remove_eviction_fence(mem->bo,
process_info->eviction_fence,
NULL, NULL);
process_info->eviction_fence);
unreserve_out:
unreserve_bo_and_vms(&ctx, false, false);
@ -1679,7 +1581,7 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct kgd_dev *kgd,
}
amdgpu_amdkfd_remove_eviction_fence(
bo, mem->process_info->eviction_fence, NULL, NULL);
bo, mem->process_info->eviction_fence);
list_del_init(&mem->validate_list.head);
if (size)
@ -1945,16 +1847,6 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info)
amdgpu_sync_create(&sync);
/* Avoid triggering eviction fences when unmapping invalid
* userptr BOs (waits for all fences, doesn't use
* FENCE_OWNER_VM)
*/
list_for_each_entry(peer_vm, &process_info->vm_list_head,
vm_list_node)
amdgpu_amdkfd_remove_eviction_fence(peer_vm->root.base.bo,
process_info->eviction_fence,
NULL, NULL);
ret = process_validate_vms(process_info);
if (ret)
goto unreserve_out;
@ -2015,10 +1907,6 @@ static int validate_invalid_user_pages(struct amdkfd_process_info *process_info)
ret = process_update_pds(process_info, &sync);
unreserve_out:
list_for_each_entry(peer_vm, &process_info->vm_list_head,
vm_list_node)
amdgpu_bo_fence(peer_vm->root.base.bo,
&process_info->eviction_fence->base, true);
ttm_eu_backoff_reservation(&ticket, &resv_list);
amdgpu_sync_wait(&sync, false);
amdgpu_sync_free(&sync);


@ -124,6 +124,7 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
struct amdgpu_ring *rings[AMDGPU_MAX_RINGS];
struct drm_sched_rq *rqs[AMDGPU_MAX_RINGS];
unsigned num_rings;
unsigned num_rqs = 0;
switch (i) {
case AMDGPU_HW_IP_GFX:
@ -166,12 +167,16 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
break;
}
for (j = 0; j < num_rings; ++j)
rqs[j] = &rings[j]->sched.sched_rq[priority];
for (j = 0; j < num_rings; ++j) {
if (!rings[j]->adev)
continue;
rqs[num_rqs++] = &rings[j]->sched.sched_rq[priority];
}
for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j)
r = drm_sched_entity_init(&ctx->entities[i][j].entity,
rqs, num_rings, &ctx->guilty);
rqs, num_rqs, &ctx->guilty);
if (r)
goto error_cleanup_entities;
}


@ -158,9 +158,6 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
while (size) {
uint32_t value;
if (*pos > adev->rmmio_size)
goto end;
if (read) {
value = RREG32(*pos >> 2);
r = put_user(value, (uint32_t *)buf);


@ -71,6 +71,8 @@ struct amdgpu_doorbell_index {
uint32_t vce_ring6_7;
} uvd_vce;
};
uint32_t first_non_cp;
uint32_t last_non_cp;
uint32_t max_assignment;
/* Per engine SDMA doorbell size in dword */
uint32_t sdma_doorbell_range;
@ -143,6 +145,10 @@ typedef enum _AMDGPU_VEGA20_DOORBELL_ASSIGNMENT
AMDGPU_VEGA20_DOORBELL64_VCE_RING2_3 = 0x18D,
AMDGPU_VEGA20_DOORBELL64_VCE_RING4_5 = 0x18E,
AMDGPU_VEGA20_DOORBELL64_VCE_RING6_7 = 0x18F,
AMDGPU_VEGA20_DOORBELL64_FIRST_NON_CP = AMDGPU_VEGA20_DOORBELL_sDMA_ENGINE0,
AMDGPU_VEGA20_DOORBELL64_LAST_NON_CP = AMDGPU_VEGA20_DOORBELL64_VCE_RING6_7,
AMDGPU_VEGA20_DOORBELL_MAX_ASSIGNMENT = 0x18F,
AMDGPU_VEGA20_DOORBELL_INVALID = 0xFFFF
} AMDGPU_VEGA20_DOORBELL_ASSIGNMENT;
@ -222,6 +228,9 @@ typedef enum _AMDGPU_DOORBELL64_ASSIGNMENT
AMDGPU_DOORBELL64_VCE_RING4_5 = 0xFE,
AMDGPU_DOORBELL64_VCE_RING6_7 = 0xFF,
AMDGPU_DOORBELL64_FIRST_NON_CP = AMDGPU_DOORBELL64_sDMA_ENGINE0,
AMDGPU_DOORBELL64_LAST_NON_CP = AMDGPU_DOORBELL64_VCE_RING6_7,
AMDGPU_DOORBELL64_MAX_ASSIGNMENT = 0xFF,
AMDGPU_DOORBELL64_INVALID = 0xFFFF
} AMDGPU_DOORBELL64_ASSIGNMENT;


@ -184,61 +184,6 @@ u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
return vrefresh;
}
void amdgpu_calculate_u_and_p(u32 i, u32 r_c, u32 p_b,
u32 *p, u32 *u)
{
u32 b_c = 0;
u32 i_c;
u32 tmp;
i_c = (i * r_c) / 100;
tmp = i_c >> p_b;
while (tmp) {
b_c++;
tmp >>= 1;
}
*u = (b_c + 1) / 2;
*p = i_c / (1 << (2 * (*u)));
}
int amdgpu_calculate_at(u32 t, u32 h, u32 fh, u32 fl, u32 *tl, u32 *th)
{
u32 k, a, ah, al;
u32 t1;
if ((fl == 0) || (fh == 0) || (fl > fh))
return -EINVAL;
k = (100 * fh) / fl;
t1 = (t * (k - 100));
a = (1000 * (100 * h + t1)) / (10000 + (t1 / 100));
a = (a + 5) / 10;
ah = ((a * t) + 5000) / 10000;
al = a - ah;
*th = t - ah;
*tl = t + al;
return 0;
}
bool amdgpu_is_uvd_state(u32 class, u32 class2)
{
if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
return true;
if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
return true;
if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
return true;
if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
return true;
if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
return true;
return false;
}
bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
{
switch (sensor) {
@ -949,39 +894,6 @@ enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
return AMDGPU_PCIE_GEN1;
}
u16 amdgpu_get_pcie_lane_support(struct amdgpu_device *adev,
u16 asic_lanes,
u16 default_lanes)
{
switch (asic_lanes) {
case 0:
default:
return default_lanes;
case 1:
return 1;
case 2:
return 2;
case 4:
return 4;
case 8:
return 8;
case 12:
return 12;
case 16:
return 16;
}
}
u8 amdgpu_encode_pci_lane_width(u32 lanes)
{
u8 encoded_lanes[] = { 0, 1, 2, 0, 3, 0, 0, 0, 4, 0, 0, 0, 5, 0, 0, 0, 6 };
if (lanes > 16)
return 0;
return encoded_lanes[lanes];
}
struct amd_vce_state*
amdgpu_get_vce_clock_state(void *handle, u32 idx)
{


@ -486,10 +486,6 @@ void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev);
void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev);
bool amdgpu_is_uvd_state(u32 class, u32 class2);
void amdgpu_calculate_u_and_p(u32 i, u32 r_c, u32 p_b,
u32 *p, u32 *u);
int amdgpu_calculate_at(u32 t, u32 h, u32 fh, u32 fl, u32 *tl, u32 *th);
bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor);
@ -505,11 +501,6 @@ enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
enum amdgpu_pcie_gen asic_gen,
enum amdgpu_pcie_gen default_gen);
u16 amdgpu_get_pcie_lane_support(struct amdgpu_device *adev,
u16 asic_lanes,
u16 default_lanes);
u8 amdgpu_encode_pci_lane_width(u32 lanes);
struct amd_vce_state*
amdgpu_get_vce_clock_state(void *handle, u32 idx);


@ -73,9 +73,10 @@
* - 3.27.0 - Add new chunk to AMDGPU_CS to enable BO_LIST creation.
* - 3.28.0 - Add AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES
* - 3.29.0 - Add AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID
* - 3.30.0 - Add AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE.
*/
#define KMS_DRIVER_MAJOR 3
#define KMS_DRIVER_MINOR 29
#define KMS_DRIVER_MINOR 30
#define KMS_DRIVER_PATCHLEVEL 0
int amdgpu_vram_limit = 0;
@ -1179,6 +1180,22 @@ static const struct file_operations amdgpu_driver_kms_fops = {
#endif
};
int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv)
{
struct drm_file *file;
if (!filp)
return -EINVAL;
if (filp->f_op != &amdgpu_driver_kms_fops) {
return -EINVAL;
}
file = filp->private_data;
*fpriv = file->driver_priv;
return 0;
}
static bool
amdgpu_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
bool in_vblank_irq, int *vpos, int *hpos,
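The f_op comparison is what makes amdgpu_file_to_fpriv() safe on arbitrary descriptors: only files opened through amdgpu's own fops carry an amdgpu_fpriv in private_data. Its intended call shape, as in the scheduler ioctl later in this diff:

    struct file *filp = fget(fd); /* fd supplied by userspace */
    struct amdgpu_fpriv *fpriv;
    int r;

    if (!filp)
            return -EINVAL;
    r = amdgpu_file_to_fpriv(filp, &fpriv); /* rejects non-amdgpu files */
    if (r) {
            fput(filp);
            return r;
    }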


@ -140,9 +140,7 @@ void amdgpu_ih_ring_fini(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih)
* Interrupt handler (VI), walk the IH ring.
* Returns irq process return code.
*/
int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih,
void (*callback)(struct amdgpu_device *adev,
struct amdgpu_ih_ring *ih))
int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih)
{
u32 wptr;
@ -162,7 +160,7 @@ restart_ih:
rmb();
while (ih->rptr != wptr) {
callback(adev, ih);
amdgpu_irq_dispatch(adev, ih);
ih->rptr &= ih->ptr_mask;
}


@ -69,8 +69,6 @@ struct amdgpu_ih_funcs {
int amdgpu_ih_ring_init(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih,
unsigned ring_size, bool use_bus_addr);
void amdgpu_ih_ring_fini(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih);
int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih,
void (*callback)(struct amdgpu_device *adev,
struct amdgpu_ih_ring *ih));
int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih);
#endif


@ -130,29 +130,6 @@ void amdgpu_irq_disable_all(struct amdgpu_device *adev)
spin_unlock_irqrestore(&adev->irq.lock, irqflags);
}
/**
* amdgpu_irq_callback - callback from the IH ring
*
* @adev: amdgpu device pointer
* @ih: amdgpu ih ring
*
* Callback from IH ring processing to handle the entry at the current position
* and advance the read pointer.
*/
static void amdgpu_irq_callback(struct amdgpu_device *adev,
struct amdgpu_ih_ring *ih)
{
u32 ring_index = ih->rptr >> 2;
struct amdgpu_iv_entry entry;
entry.iv_entry = (const uint32_t *)&ih->ring[ring_index];
amdgpu_ih_decode_iv(adev, &entry);
trace_amdgpu_iv(ih - &adev->irq.ih, &entry);
amdgpu_irq_dispatch(adev, &entry);
}
/**
* amdgpu_irq_handler - IRQ handler
*
@ -170,7 +147,7 @@ irqreturn_t amdgpu_irq_handler(int irq, void *arg)
struct amdgpu_device *adev = dev->dev_private;
irqreturn_t ret;
ret = amdgpu_ih_process(adev, &adev->irq.ih, amdgpu_irq_callback);
ret = amdgpu_ih_process(adev, &adev->irq.ih);
if (ret == IRQ_HANDLED)
pm_runtime_mark_last_busy(dev->dev);
return ret;
@ -188,7 +165,7 @@ static void amdgpu_irq_handle_ih1(struct work_struct *work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
irq.ih1_work);
amdgpu_ih_process(adev, &adev->irq.ih1, amdgpu_irq_callback);
amdgpu_ih_process(adev, &adev->irq.ih1);
}
/**
@ -203,7 +180,7 @@ static void amdgpu_irq_handle_ih2(struct work_struct *work)
struct amdgpu_device *adev = container_of(work, struct amdgpu_device,
irq.ih2_work);
amdgpu_ih_process(adev, &adev->irq.ih2, amdgpu_irq_callback);
amdgpu_ih_process(adev, &adev->irq.ih2);
}
/**
@ -394,14 +371,23 @@ int amdgpu_irq_add_id(struct amdgpu_device *adev,
* Dispatches IRQ to IP blocks.
*/
void amdgpu_irq_dispatch(struct amdgpu_device *adev,
struct amdgpu_iv_entry *entry)
struct amdgpu_ih_ring *ih)
{
unsigned client_id = entry->client_id;
unsigned src_id = entry->src_id;
u32 ring_index = ih->rptr >> 2;
struct amdgpu_iv_entry entry;
unsigned client_id, src_id;
struct amdgpu_irq_src *src;
bool handled = false;
int r;
entry.iv_entry = (const uint32_t *)&ih->ring[ring_index];
amdgpu_ih_decode_iv(adev, &entry);
trace_amdgpu_iv(ih - &adev->irq.ih, &entry);
client_id = entry.client_id;
src_id = entry.src_id;
if (client_id >= AMDGPU_IRQ_CLIENTID_MAX) {
DRM_DEBUG("Invalid client_id in IV: %d\n", client_id);
@ -416,7 +402,7 @@ void amdgpu_irq_dispatch(struct amdgpu_device *adev,
client_id, src_id);
} else if ((src = adev->irq.client[client_id].sources[src_id])) {
r = src->funcs->process(adev, src, entry);
r = src->funcs->process(adev, src, &entry);
if (r < 0)
DRM_ERROR("error processing interrupt (%d)\n", r);
else if (r)
@ -428,7 +414,7 @@ void amdgpu_irq_dispatch(struct amdgpu_device *adev,
/* Send it to amdkfd as well if it isn't already handled */
if (!handled)
amdgpu_amdkfd_interrupt(adev, entry->iv_entry);
amdgpu_amdkfd_interrupt(adev, entry.iv_entry);
}
/**


@ -108,7 +108,7 @@ int amdgpu_irq_add_id(struct amdgpu_device *adev,
unsigned client_id, unsigned src_id,
struct amdgpu_irq_src *source);
void amdgpu_irq_dispatch(struct amdgpu_device *adev,
struct amdgpu_iv_entry *entry);
struct amdgpu_ih_ring *ih);
int amdgpu_irq_update(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type);
int amdgpu_irq_get(struct amdgpu_device *adev, struct amdgpu_irq_src *src,


@ -207,11 +207,12 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
if (!r) {
acpi_status = amdgpu_acpi_init(adev);
if (acpi_status)
dev_dbg(&dev->pdev->dev,
dev_dbg(&dev->pdev->dev,
"Error during ACPI methods call\n");
}
if (amdgpu_device_is_px(dev)) {
dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP);
pm_runtime_use_autosuspend(dev->dev);
pm_runtime_set_autosuspend_delay(dev->dev, 5000);
pm_runtime_set_active(dev->dev);


@ -406,6 +406,7 @@ struct amdgpu_crtc {
struct amdgpu_flip_work *pflip_works;
enum amdgpu_flip_status pflip_status;
int deferred_flip_completion;
u64 last_flip_vblank;
/* pll sharing */
struct amdgpu_atom_ss ss;
bool ss_enabled;


@ -1284,6 +1284,30 @@ void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
reservation_object_add_excl_fence(resv, fence);
}
/**
* amdgpu_bo_sync_wait - Wait for BO reservation fences
*
* @bo: buffer object
* @owner: fence owner
* @intr: Whether the wait is interruptible
*
* Returns:
* 0 on success, errno otherwise.
*/
int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct amdgpu_sync sync;
int r;
amdgpu_sync_create(&sync);
amdgpu_sync_resv(adev, &sync, bo->tbo.resv, owner, false);
r = amdgpu_sync_wait(&sync, intr);
amdgpu_sync_free(&sync);
return r;
}
/**
* amdgpu_bo_gpu_offset - return GPU offset of bo
* @bo: amdgpu object for which we query the offset


@ -266,6 +266,7 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
bool shared);
int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr);
u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo);
int amdgpu_bo_validate(struct amdgpu_bo *bo);
int amdgpu_bo_restore_shadow(struct amdgpu_bo *shadow,


@ -54,16 +54,20 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
enum drm_sched_priority priority)
{
struct file *filp = fget(fd);
struct drm_file *file;
struct amdgpu_fpriv *fpriv;
struct amdgpu_ctx *ctx;
uint32_t id;
int r;
if (!filp)
return -EINVAL;
file = filp->private_data;
fpriv = file->driver_priv;
r = amdgpu_file_to_fpriv(filp, &fpriv);
if (r) {
fput(filp);
return r;
}
idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
amdgpu_ctx_priority_override(ctx, priority);
@ -72,6 +76,39 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
return 0;
}
static int amdgpu_sched_context_priority_override(struct amdgpu_device *adev,
int fd,
unsigned ctx_id,
enum drm_sched_priority priority)
{
struct file *filp = fget(fd);
struct amdgpu_fpriv *fpriv;
struct amdgpu_ctx *ctx;
int r;
if (!filp)
return -EINVAL;
r = amdgpu_file_to_fpriv(filp, &fpriv);
if (r) {
fput(filp);
return r;
}
ctx = amdgpu_ctx_get(fpriv, ctx_id);
if (!ctx) {
fput(filp);
return -EINVAL;
}
amdgpu_ctx_priority_override(ctx, priority);
amdgpu_ctx_put(ctx);
fput(filp);
return 0;
}
int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
@ -81,7 +118,7 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
int r;
priority = amdgpu_to_sched_priority(args->in.priority);
if (args->in.flags || priority == DRM_SCHED_PRIORITY_INVALID)
if (priority == DRM_SCHED_PRIORITY_INVALID)
return -EINVAL;
switch (args->in.op) {
@ -90,6 +127,12 @@ int amdgpu_sched_ioctl(struct drm_device *dev, void *data,
args->in.fd,
priority);
break;
case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE:
r = amdgpu_sched_context_priority_override(adev,
args->in.fd,
args->in.ctx_id,
priority);
break;
default:
DRM_ERROR("Invalid sched op specified: %d\n", args->in.op);
r = -EINVAL;
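A hedged userspace sketch of the new AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE op; field names follow the drm_amdgpu_sched union as this revision ships it, while master_fd, target_fd and target_ctx are placeholders the caller must provide (the ioctl is restricted to sufficiently privileged callers, and the usual amdgpu_drm.h/ioctl includes are assumed):

    union drm_amdgpu_sched args = {0};

    args.in.op       = AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE;
    args.in.fd       = target_fd;  /* fd of the target render node */
    args.in.ctx_id   = target_ctx; /* context whose priority changes */
    args.in.priority = AMDGPU_CTX_PRIORITY_HIGH;

    if (ioctl(master_fd, DRM_IOCTL_AMDGPU_SCHED, &args))
            perror("DRM_IOCTL_AMDGPU_SCHED");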


@ -652,12 +652,14 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
struct ttm_bo_global *glob = adev->mman.bdev.glob;
struct amdgpu_vm_bo_base *bo_base;
#if 0
if (vm->bulk_moveable) {
spin_lock(&glob->lru_lock);
ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
spin_unlock(&glob->lru_lock);
return;
}
#endif
memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
@ -698,8 +700,6 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_vm_bo_base *bo_base, *tmp;
int r = 0;
vm->bulk_moveable &= list_empty(&vm->evicted);
list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
struct amdgpu_bo *bo = bo_base->bo;
@ -828,7 +828,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
WARN_ON(job->ibs[0].length_dw > 64);
r = amdgpu_sync_resv(adev, &job->sync, bo->tbo.resv,
AMDGPU_FENCE_OWNER_UNDEFINED, false);
AMDGPU_FENCE_OWNER_KFD, false);
if (r)
goto error_free;
@ -1332,31 +1332,6 @@ static void amdgpu_vm_cpu_set_ptes(struct amdgpu_pte_update_params *params,
}
}
/**
* amdgpu_vm_wait_pd - Wait for PT BOs to be free.
*
* @adev: amdgpu_device pointer
* @vm: related vm
* @owner: fence owner
*
* Returns:
* 0 on success, errno otherwise.
*/
static int amdgpu_vm_wait_pd(struct amdgpu_device *adev, struct amdgpu_vm *vm,
void *owner)
{
struct amdgpu_sync sync;
int r;
amdgpu_sync_create(&sync);
amdgpu_sync_resv(adev, &sync, vm->root.base.bo->tbo.resv, owner, false);
r = amdgpu_sync_wait(&sync, true);
amdgpu_sync_free(&sync);
return r;
}
/**
* amdgpu_vm_update_func - helper to call update function
*
@ -1451,7 +1426,8 @@ restart:
params.adev = adev;
if (vm->use_cpu_for_update) {
r = amdgpu_vm_wait_pd(adev, vm, AMDGPU_FENCE_OWNER_VM);
r = amdgpu_bo_sync_wait(vm->root.base.bo,
AMDGPU_FENCE_OWNER_VM, true);
if (unlikely(r))
return r;
@ -1772,9 +1748,9 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
params.adev = adev;
params.vm = vm;
/* sync to everything on unmapping */
/* sync to everything except eviction fences on unmapping */
if (!(flags & AMDGPU_PTE_VALID))
owner = AMDGPU_FENCE_OWNER_UNDEFINED;
owner = AMDGPU_FENCE_OWNER_KFD;
if (vm->use_cpu_for_update) {
/* params.src is used as flag to indicate system Memory */
@ -1784,7 +1760,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
/* Wait for PT BOs to be idle. PTs share the same resv. object
* as the root PD BO
*/
r = amdgpu_vm_wait_pd(adev, vm, owner);
r = amdgpu_bo_sync_wait(vm->root.base.bo, owner, true);
if (unlikely(r))
return r;


@ -2980,7 +2980,7 @@ static int dce_v6_0_pageflip_irq(struct amdgpu_device *adev,
struct amdgpu_irq_src *source,
struct amdgpu_iv_entry *entry)
{
unsigned long flags;
unsigned long flags;
unsigned crtc_id;
struct amdgpu_crtc *amdgpu_crtc;
struct amdgpu_flip_work *works;


@ -266,7 +266,8 @@ flr_done:
}
/* Trigger recovery for world switch failure if no TDR */
if (amdgpu_device_should_recover_gpu(adev))
if (amdgpu_device_should_recover_gpu(adev)
&& amdgpu_lockup_timeout == MAX_SCHEDULE_TIMEOUT)
amdgpu_device_gpu_recover(adev, NULL);
}


@ -32,7 +32,7 @@
static u32 nbio_v7_4_get_rev_id(struct amdgpu_device *adev)
{
u32 tmp = RREG32_SOC15(NBIO, 0, mmRCC_DEV0_EPF0_STRAP0);
u32 tmp = RREG32_SOC15(NBIO, 0, mmRCC_DEV0_EPF0_STRAP0);
tmp &= RCC_DEV0_EPF0_STRAP0__STRAP_ATI_REV_ID_DEV0_F0_MASK;
tmp >>= RCC_DEV0_EPF0_STRAP0__STRAP_ATI_REV_ID_DEV0_F0__SHIFT;


@ -128,7 +128,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2_init[] = {
static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
{
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831d07),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CLK_CTRL, 0xffffffff, 0x3f000100),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0000773f, 0x00004002),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),
@ -158,7 +158,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
};
static const struct soc15_reg_golden golden_settings_sdma1_4_2[] = {
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0000773f, 0x00004002),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),

Some files were not shown because too many files have changed in this diff.