Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Two AF_* families were adding entries to the lockdep tables
at the same time.

Signed-off-by: David S. Miller <davem@davemloft.net>
commit 02ac5d1487
Author: David S. Miller <davem@davemloft.net>
Date:   2017-01-11 14:43:39 -05:00
101 changed files with 778 additions and 415 deletions


@@ -137,6 +137,7 @@ Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
 Rudolf Marek <R.Marek@sh.cvut.cz>
 Rui Saraiva <rmps@joel.ist.utl.pt>
 Sachin P Sant <ssant@in.ibm.com>
+Sarangdhar Joshi <spjoshi@codeaurora.org>
 Sam Ravnborg <sam@mars.ravnborg.org>
 Santosh Shilimkar <ssantosh@kernel.org>
 Santosh Shilimkar <santosh.shilimkar@oracle.org>
@@ -150,10 +151,13 @@ Shuah Khan <shuah@kernel.org> <shuah.kh@samsung.com>
 Simon Kelley <simon@thekelleys.org.uk>
 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr>
 Stephen Hemminger <shemminger@osdl.org>
+Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+Subhash Jadavani <subhashj@codeaurora.org>
 Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
 Sumit Semwal <sumit.semwal@ti.com>
 Tejun Heo <htejun@gmail.com>
 Thomas Graf <tgraf@suug.ch>
+Thomas Pedersen <twp@codeaurora.org>
 Tony Luck <tony.luck@intel.com>
 Tsuneo Yoshioka <Tsuneo.Yoshioka@f-secure.com>
 Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de>


@@ -0,0 +1,42 @@
Page fragments
--------------

A page fragment is an arbitrary-length, arbitrary-offset area of memory
which resides within a 0 or higher order compound page. Multiple
fragments within that page are individually refcounted in the page's
reference counter.
The page_frag functions, page_frag_alloc and page_frag_free, provide a
simple allocation framework for page fragments. This is used by the
network stack and network device drivers to provide a backing region of
memory for use as either an sk_buff->head, or to be used in the "frags"
portion of skb_shared_info.

In order to make use of the page fragment APIs a backing page fragment
cache is needed. This provides a central point for the fragment allocation
and allows multiple calls to make use of a cached page. The advantage to
doing this is that multiple calls to get_page can be avoided, which can be
expensive at allocation time. However, due to the nature of this caching it
is required that any calls to the cache be protected by either a per-cpu
limitation, or a per-cpu limitation combined with disabling interrupts while
executing the fragment allocation.
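
Putting that together, a minimal sketch of the protected allocation pattern
might look as follows. The per-CPU cache and the my_* helpers are
illustrative only; page_frag_alloc and page_frag_free are the real entry
points:

#include <linux/gfp.h>
#include <linux/percpu.h>
#include <linux/irqflags.h>

/* Hypothetical per-CPU cache; each CPU gets its own instance to satisfy
 * the per-cpu protection requirement described above. */
static DEFINE_PER_CPU(struct page_frag_cache, my_frag_cache);

/* Allocate a fragment from a context where interrupts could race with
 * us, so IRQs are disabled around the cache access (the stricter of the
 * two options above). */
static void *my_alloc_frag(unsigned int fragsz)
{
    struct page_frag_cache *nc;
    unsigned long flags;
    void *data;

    local_irq_save(flags);
    nc = this_cpu_ptr(&my_frag_cache);
    data = page_frag_alloc(nc, fragsz, GFP_ATOMIC);
    local_irq_restore(flags);

    return data;
}

/* Freeing drops one reference on the fragment's backing page. */
static void my_free_frag(void *data)
{
    page_frag_free(data);
}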
The network stack uses two separate caches per CPU to handle fragment
allocation. The netdev_alloc_cache is used by callers making use of the
__netdev_alloc_frag and __netdev_alloc_skb calls. The napi_alloc_cache is
used by callers of the __napi_alloc_frag and __napi_alloc_skb calls. The
main difference between these two calls is the context in which they may be
called. The "netdev" prefixed functions are usable in any context as these
functions will disable interrupts, while the "napi" prefixed functions are
only usable within the softirq context.
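
For illustration, a hypothetical driver helper choosing between the two
(the helper and its bool parameter are made up for this example;
netdev_alloc_frag and napi_alloc_frag are the real calls):

#include <linux/skbuff.h>

static void *alloc_rx_frag(bool in_napi_poll, unsigned int fragsz)
{
    if (in_napi_poll)
        /* softirq context only: uses the per-CPU napi_alloc_cache
         * without touching the IRQ state, which is cheaper */
        return napi_alloc_frag(fragsz);

    /* safe in any context: disables interrupts internally while it
     * accesses the per-CPU netdev_alloc_cache */
    return netdev_alloc_frag(fragsz);
}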
Many network device drivers use a similar methodology for allocating page
fragments, but the page fragments are cached at the ring or descriptor
level. In order to enable these cases it is necessary to provide a generic
way of tearing down a page cache. For this reason __page_frag_cache_drain
was implemented. It allows for freeing multiple references from a single
page via a single call. The advantage to doing this is that it allows for
cleaning up the multiple references that were added to a page in order to
avoid calling get_page per allocation.
Alexander Duyck, Nov 29, 2016.
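
Editor's sketch of the teardown pattern described above, loosely modeled
on the igb changes included later in this merge; the ring structure and
field names are hypothetical, __page_frag_cache_drain is the real API:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical ring-level cache: the driver kept one page and took a
 * batch of references up front instead of calling get_page per frag. */
struct my_rx_ring {
    struct page *page;          /* backing page for fragments */
    unsigned int pagecnt_bias;  /* references still held by the driver */
};

static void my_ring_teardown(struct my_rx_ring *ring)
{
    if (!ring->page)
        return;

    /* Release all remaining driver-held references with one call
     * rather than pagecnt_bias separate put_page() calls. */
    __page_frag_cache_drain(ring->page, ring->pagecnt_bias);
    ring->page = NULL;
}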


@@ -81,7 +81,6 @@ Descriptions of section entries:
 	Q: Patchwork web based patch tracking system site
 	T: SCM tree type and location.
 	   Type is one of: git, hg, quilt, stgit, topgit
-	B: Bug tracking system location.
 	S: Status, one of the following:
 	   Supported:	Someone is actually paid to look after this.
 	   Maintained:	Someone actually looks after it.
@@ -4123,7 +4122,7 @@ F:	drivers/gpu/drm/cirrus/
 RADEON and AMDGPU DRM DRIVERS
 M:	Alex Deucher <alexander.deucher@amd.com>
 M:	Christian König <christian.koenig@amd.com>
-L:	dri-devel@lists.freedesktop.org
+L:	amd-gfx@lists.freedesktop.org
 T:	git git://people.freedesktop.org/~agd5f/linux
 S:	Supported
 F:	drivers/gpu/drm/radeon/


@@ -1020,7 +1020,8 @@ struct {
    const char *basename;
    struct simd_skcipher_alg *simd;
 } aesni_simd_skciphers2[] = {
-#if IS_ENABLED(CONFIG_CRYPTO_PCBC)
+#if (defined(MODULE) && IS_ENABLED(CONFIG_CRYPTO_PCBC)) || \
+    IS_BUILTIN(CONFIG_CRYPTO_PCBC)
    {
        .algname = "pcbc(aes)",
        .drvname = "pcbc-aes-aesni",


@@ -25,6 +25,7 @@
 #include <linux/genhd.h>
 #include <linux/highmem.h>
 #include <linux/slab.h>
+#include <linux/backing-dev.h>
 #include <linux/string.h>
 #include <linux/vmalloc.h>
 #include <linux/err.h>
@@ -112,6 +113,14 @@ static inline bool is_partial_io(struct bio_vec *bvec)
    return bvec->bv_len != PAGE_SIZE;
 }
 
+static void zram_revalidate_disk(struct zram *zram)
+{
+    revalidate_disk(zram->disk);
+    /* revalidate_disk reset the BDI_CAP_STABLE_WRITES so set again */
+    zram->disk->queue->backing_dev_info.capabilities |=
+        BDI_CAP_STABLE_WRITES;
+}
+
 /*
  * Check if request is within bounds and aligned on zram logical blocks.
  */
@@ -1095,15 +1104,9 @@ static ssize_t disksize_store(struct device *dev,
    zram->comp = comp;
    zram->disksize = disksize;
    set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
+   zram_revalidate_disk(zram);
    up_write(&zram->init_lock);
 
-   /*
-    * Revalidate disk out of the init_lock to avoid lockdep splat.
-    * It's okay because disk's capacity is protected by init_lock
-    * so that revalidate_disk always sees up-to-date capacity.
-    */
-   revalidate_disk(zram->disk);
-
    return len;
 
 out_destroy_comp:
@@ -1149,7 +1152,7 @@ static ssize_t reset_store(struct device *dev,
    /* Make sure all the pending I/O are finished */
    fsync_bdev(bdev);
    zram_reset_device(zram);
-   revalidate_disk(zram->disk);
+   zram_revalidate_disk(zram);
    bdput(bdev);
 
    mutex_lock(&bdev->bd_mutex);


@@ -205,7 +205,7 @@ static int mxs_gpio_set_wake_irq(struct irq_data *d, unsigned int enable)
    return 0;
 }
 
-static int __init mxs_gpio_init_gc(struct mxs_gpio_port *port, int irq_base)
+static int mxs_gpio_init_gc(struct mxs_gpio_port *port, int irq_base)
 {
    struct irq_chip_generic *gc;
    struct irq_chip_type *ct;


@@ -1317,12 +1317,12 @@ void gpiochip_remove(struct gpio_chip *chip)
    /* FIXME: should the legacy sysfs handling be moved to gpio_device? */
    gpiochip_sysfs_unregister(gdev);
+   gpiochip_free_hogs(chip);
    /* Numb the device, cancelling all outstanding operations */
    gdev->chip = NULL;
    gpiochip_irqchip_remove(chip);
    acpi_gpiochip_remove(chip);
    gpiochip_remove_pin_ranges(chip);
-   gpiochip_free_hogs(chip);
    of_gpiochip_remove(chip);
 
    /*
     * We accept no more calls into the driver from this point, so


@@ -840,6 +840,9 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
        else if (type == CGS_UCODE_ID_SMU_SK)
            strcpy(fw_name, "amdgpu/polaris10_smc_sk.bin");
        break;
+   case CHIP_POLARIS12:
+       strcpy(fw_name, "amdgpu/polaris12_smc.bin");
+       break;
    default:
        DRM_ERROR("SMC firmware not supported\n");
        return -EINVAL;


@@ -73,6 +73,7 @@ static const char *amdgpu_asic_name[] = {
    "STONEY",
    "POLARIS10",
    "POLARIS11",
+   "POLARIS12",
    "LAST",
 };
 
@@ -1277,6 +1278,7 @@ static int amdgpu_early_init(struct amdgpu_device *adev)
    case CHIP_FIJI:
    case CHIP_POLARIS11:
    case CHIP_POLARIS10:
+   case CHIP_POLARIS12:
    case CHIP_CARRIZO:
    case CHIP_STONEY:
        if (adev->asic_type == CHIP_CARRIZO || adev->asic_type == CHIP_STONEY)


@@ -418,6 +418,13 @@ static const struct pci_device_id pciidlist[] = {
    {0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
    {0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
    {0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+   /* Polaris12 */
+   {0x1002, 0x6980, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+   {0x1002, 0x6981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+   {0x1002, 0x6985, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+   {0x1002, 0x6986, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+   {0x1002, 0x6987, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+   {0x1002, 0x699F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
 
    {0, 0, 0}
 };


@@ -98,6 +98,7 @@ static int amdgpu_pp_early_init(void *handle)
    switch (adev->asic_type) {
    case CHIP_POLARIS11:
    case CHIP_POLARIS10:
+   case CHIP_POLARIS12:
    case CHIP_TONGA:
    case CHIP_FIJI:
    case CHIP_TOPAZ:


@@ -65,6 +65,7 @@
 #define FIRMWARE_STONEY		"amdgpu/stoney_uvd.bin"
 #define FIRMWARE_POLARIS10	"amdgpu/polaris10_uvd.bin"
 #define FIRMWARE_POLARIS11	"amdgpu/polaris11_uvd.bin"
+#define FIRMWARE_POLARIS12	"amdgpu/polaris12_uvd.bin"
 
 /**
  * amdgpu_uvd_cs_ctx - Command submission parser context
@@ -98,6 +99,7 @@ MODULE_FIRMWARE(FIRMWARE_FIJI);
 MODULE_FIRMWARE(FIRMWARE_STONEY);
 MODULE_FIRMWARE(FIRMWARE_POLARIS10);
 MODULE_FIRMWARE(FIRMWARE_POLARIS11);
+MODULE_FIRMWARE(FIRMWARE_POLARIS12);
 
 static void amdgpu_uvd_idle_work_handler(struct work_struct *work);
 
@@ -149,6 +151,9 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
    case CHIP_POLARIS11:
        fw_name = FIRMWARE_POLARIS11;
        break;
+   case CHIP_POLARIS12:
+       fw_name = FIRMWARE_POLARIS12;
+       break;
    default:
        return -EINVAL;
    }


@@ -52,6 +52,7 @@
 #define FIRMWARE_STONEY		"amdgpu/stoney_vce.bin"
 #define FIRMWARE_POLARIS10	"amdgpu/polaris10_vce.bin"
 #define FIRMWARE_POLARIS11	"amdgpu/polaris11_vce.bin"
+#define FIRMWARE_POLARIS12	"amdgpu/polaris12_vce.bin"
 
 #ifdef CONFIG_DRM_AMDGPU_CIK
 MODULE_FIRMWARE(FIRMWARE_BONAIRE);
@@ -66,6 +67,7 @@ MODULE_FIRMWARE(FIRMWARE_FIJI);
 MODULE_FIRMWARE(FIRMWARE_STONEY);
 MODULE_FIRMWARE(FIRMWARE_POLARIS10);
 MODULE_FIRMWARE(FIRMWARE_POLARIS11);
+MODULE_FIRMWARE(FIRMWARE_POLARIS12);
 
 static void amdgpu_vce_idle_work_handler(struct work_struct *work);
 
@@ -121,6 +123,9 @@ int amdgpu_vce_sw_init(struct amdgpu_device *adev, unsigned long size)
    case CHIP_POLARIS11:
        fw_name = FIRMWARE_POLARIS11;
        break;
+   case CHIP_POLARIS12:
+       fw_name = FIRMWARE_POLARIS12;
+       break;
    default:
        return -EINVAL;


@@ -167,6 +167,7 @@ static void dce_v11_0_init_golden_registers(struct amdgpu_device *adev)
                         (const u32)ARRAY_SIZE(stoney_golden_settings_a11));
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        amdgpu_program_register_sequence(adev,
                         polaris11_golden_settings_a11,
                         (const u32)ARRAY_SIZE(polaris11_golden_settings_a11));
@@ -608,6 +609,7 @@ static int dce_v11_0_get_num_crtc (struct amdgpu_device *adev)
        num_crtc = 6;
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        num_crtc = 5;
        break;
    default:
@@ -1589,6 +1591,7 @@ static int dce_v11_0_audio_init(struct amdgpu_device *adev)
        adev->mode_info.audio.num_pins = 8;
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        adev->mode_info.audio.num_pins = 6;
        break;
    default:
@@ -2388,7 +2391,8 @@ static u32 dce_v11_0_pick_pll(struct drm_crtc *crtc)
    int pll;
 
    if ((adev->asic_type == CHIP_POLARIS10) ||
-       (adev->asic_type == CHIP_POLARIS11)) {
+       (adev->asic_type == CHIP_POLARIS11) ||
+       (adev->asic_type == CHIP_POLARIS12)) {
        struct amdgpu_encoder *amdgpu_encoder =
            to_amdgpu_encoder(amdgpu_crtc->encoder);
        struct amdgpu_encoder_atom_dig *dig = amdgpu_encoder->enc_priv;
@@ -2822,7 +2826,8 @@ static int dce_v11_0_crtc_mode_set(struct drm_crtc *crtc,
        return -EINVAL;
 
    if ((adev->asic_type == CHIP_POLARIS10) ||
-       (adev->asic_type == CHIP_POLARIS11)) {
+       (adev->asic_type == CHIP_POLARIS11) ||
+       (adev->asic_type == CHIP_POLARIS12)) {
        struct amdgpu_encoder *amdgpu_encoder =
            to_amdgpu_encoder(amdgpu_crtc->encoder);
        int encoder_mode =
@@ -2992,6 +2997,7 @@ static int dce_v11_0_early_init(void *handle)
        adev->mode_info.num_dig = 6;
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        adev->mode_info.num_hpd = 5;
        adev->mode_info.num_dig = 5;
        break;
@@ -3101,7 +3107,8 @@ static int dce_v11_0_hw_init(void *handle)
    amdgpu_atombios_crtc_powergate_init(adev);
    amdgpu_atombios_encoder_init_dig(adev);
    if ((adev->asic_type == CHIP_POLARIS10) ||
-       (adev->asic_type == CHIP_POLARIS11)) {
+       (adev->asic_type == CHIP_POLARIS11) ||
+       (adev->asic_type == CHIP_POLARIS12)) {
        amdgpu_atombios_crtc_set_dce_clock(adev, adev->clock.default_dispclk,
                           DCE_CLOCK_TYPE_DISPCLK, ATOM_GCK_DFS);
        amdgpu_atombios_crtc_set_dce_clock(adev, 0,


@@ -139,6 +139,13 @@ MODULE_FIRMWARE("amdgpu/polaris10_mec.bin");
 MODULE_FIRMWARE("amdgpu/polaris10_mec2.bin");
 MODULE_FIRMWARE("amdgpu/polaris10_rlc.bin");
 
+MODULE_FIRMWARE("amdgpu/polaris12_ce.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_pfp.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_me.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_mec.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_mec2.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_rlc.bin");
+
 static const struct amdgpu_gds_reg_offset amdgpu_gds_reg_offset[] =
 {
    {mmGDS_VMID0_BASE, mmGDS_VMID0_SIZE, mmGDS_GWS_VMID0, mmGDS_OA_VMID0},
@@ -689,6 +696,7 @@ static void gfx_v8_0_init_golden_registers(struct amdgpu_device *adev)
                         (const u32)ARRAY_SIZE(tonga_golden_common_all));
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        amdgpu_program_register_sequence(adev,
                         golden_settings_polaris11_a11,
                         (const u32)ARRAY_SIZE(golden_settings_polaris11_a11));
@@ -903,6 +911,9 @@ static int gfx_v8_0_init_microcode(struct amdgpu_device *adev)
    case CHIP_POLARIS10:
        chip_name = "polaris10";
        break;
+   case CHIP_POLARIS12:
+       chip_name = "polaris12";
+       break;
    case CHIP_STONEY:
        chip_name = "stoney";
        break;
@@ -1768,6 +1779,7 @@ static int gfx_v8_0_gpu_early_init(struct amdgpu_device *adev)
        gb_addr_config = TONGA_GB_ADDR_CONFIG_GOLDEN;
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        ret = amdgpu_atombios_get_gfx_info(adev);
        if (ret)
            return ret;
@@ -2682,6 +2694,7 @@ static void gfx_v8_0_tiling_mode_table_init(struct amdgpu_device *adev)
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
                PIPE_CONFIG(ADDR_SURF_P4_16x16) |
                TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
@@ -3503,6 +3516,7 @@ gfx_v8_0_raster_config(struct amdgpu_device *adev, u32 *rconf, u32 *rconf1)
        *rconf1 |= 0x0;
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        *rconf |= RB_MAP_PKR0(2) | RB_XSEL2(1) | SE_MAP(2) |
              SE_XSEL(1) | SE_YSEL(1);
        *rconf1 |= 0x0;
@@ -4021,7 +4035,8 @@ static void gfx_v8_0_init_pg(struct amdgpu_device *adev)
            cz_enable_cp_power_gating(adev, true);
        else
            cz_enable_cp_power_gating(adev, false);
-   } else if (adev->asic_type == CHIP_POLARIS11) {
+   } else if ((adev->asic_type == CHIP_POLARIS11) ||
+          (adev->asic_type == CHIP_POLARIS12)) {
        gfx_v8_0_init_csb(adev);
        gfx_v8_0_init_save_restore_list(adev);
        gfx_v8_0_enable_save_restore_machine(adev);
@@ -4095,7 +4110,8 @@ static int gfx_v8_0_rlc_resume(struct amdgpu_device *adev)
            RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK);
    WREG32(mmRLC_CGCG_CGLS_CTRL, tmp);
    if (adev->asic_type == CHIP_POLARIS11 ||
-       adev->asic_type == CHIP_POLARIS10) {
+       adev->asic_type == CHIP_POLARIS10 ||
+       adev->asic_type == CHIP_POLARIS12) {
        tmp = RREG32(mmRLC_CGCG_CGLS_CTRL_3D);
        tmp &= ~0x3;
        WREG32(mmRLC_CGCG_CGLS_CTRL_3D, tmp);
@@ -4283,6 +4299,7 @@ static int gfx_v8_0_cp_gfx_start(struct amdgpu_device *adev)
        amdgpu_ring_write(ring, 0x0000002A);
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        amdgpu_ring_write(ring, 0x16000012);
        amdgpu_ring_write(ring, 0x00000000);
        break;
@@ -4664,7 +4681,8 @@ static int gfx_v8_0_cp_compute_resume(struct amdgpu_device *adev)
            (adev->asic_type == CHIP_FIJI) ||
            (adev->asic_type == CHIP_STONEY) ||
            (adev->asic_type == CHIP_POLARIS11) ||
-           (adev->asic_type == CHIP_POLARIS10)) {
+           (adev->asic_type == CHIP_POLARIS10) ||
+           (adev->asic_type == CHIP_POLARIS12)) {
            WREG32(mmCP_MEC_DOORBELL_RANGE_LOWER,
                   AMDGPU_DOORBELL_KIQ << 2);
            WREG32(mmCP_MEC_DOORBELL_RANGE_UPPER,
@@ -4700,7 +4718,8 @@ static int gfx_v8_0_cp_compute_resume(struct amdgpu_device *adev)
        mqd->cp_hqd_persistent_state = tmp;
        if (adev->asic_type == CHIP_STONEY ||
            adev->asic_type == CHIP_POLARIS11 ||
-           adev->asic_type == CHIP_POLARIS10) {
+           adev->asic_type == CHIP_POLARIS10 ||
+           adev->asic_type == CHIP_POLARIS12) {
            tmp = RREG32(mmCP_ME1_PIPE3_INT_CNTL);
            tmp = REG_SET_FIELD(tmp, CP_ME1_PIPE3_INT_CNTL, GENERIC2_INT_ENABLE, 1);
            WREG32(mmCP_ME1_PIPE3_INT_CNTL, tmp);
@@ -5279,7 +5298,8 @@ static int gfx_v8_0_late_init(void *handle)
 static void gfx_v8_0_enable_gfx_static_mg_power_gating(struct amdgpu_device *adev,
                               bool enable)
 {
-   if (adev->asic_type == CHIP_POLARIS11)
+   if ((adev->asic_type == CHIP_POLARIS11) ||
+       (adev->asic_type == CHIP_POLARIS12))
        /* Send msg to SMU via Powerplay */
        amdgpu_set_powergating_state(adev,
                         AMD_IP_BLOCK_TYPE_SMC,
@@ -5353,6 +5373,7 @@ static int gfx_v8_0_set_powergating_state(void *handle,
        gfx_v8_0_enable_gfx_dynamic_mg_power_gating(adev, false);
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        if ((adev->pg_flags & AMD_PG_SUPPORT_GFX_SMG) && enable)
            gfx_v8_0_enable_gfx_static_mg_power_gating(adev, true);
        else


@@ -46,6 +46,7 @@ static int gmc_v8_0_wait_for_idle(void *handle);
 MODULE_FIRMWARE("amdgpu/tonga_mc.bin");
 MODULE_FIRMWARE("amdgpu/polaris11_mc.bin");
 MODULE_FIRMWARE("amdgpu/polaris10_mc.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_mc.bin");
 
 static const u32 golden_settings_tonga_a11[] =
 {
@@ -130,6 +131,7 @@ static void gmc_v8_0_init_golden_registers(struct amdgpu_device *adev)
                         (const u32)ARRAY_SIZE(golden_settings_tonga_a11));
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        amdgpu_program_register_sequence(adev,
                         golden_settings_polaris11_a11,
                         (const u32)ARRAY_SIZE(golden_settings_polaris11_a11));
@@ -225,6 +227,9 @@ static int gmc_v8_0_init_microcode(struct amdgpu_device *adev)
    case CHIP_POLARIS10:
        chip_name = "polaris10";
        break;
+   case CHIP_POLARIS12:
+       chip_name = "polaris12";
+       break;
    case CHIP_FIJI:
    case CHIP_CARRIZO:
    case CHIP_STONEY:


@@ -60,6 +60,8 @@ MODULE_FIRMWARE("amdgpu/polaris10_sdma.bin");
 MODULE_FIRMWARE("amdgpu/polaris10_sdma1.bin");
 MODULE_FIRMWARE("amdgpu/polaris11_sdma.bin");
 MODULE_FIRMWARE("amdgpu/polaris11_sdma1.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_sdma.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_sdma1.bin");
 
 static const u32 sdma_offsets[SDMA_MAX_INSTANCE] =
@@ -206,6 +208,7 @@ static void sdma_v3_0_init_golden_registers(struct amdgpu_device *adev)
                         (const u32)ARRAY_SIZE(golden_settings_tonga_a11));
        break;
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        amdgpu_program_register_sequence(adev,
                         golden_settings_polaris11_a11,
                         (const u32)ARRAY_SIZE(golden_settings_polaris11_a11));
@@ -278,6 +281,9 @@ static int sdma_v3_0_init_microcode(struct amdgpu_device *adev)
    case CHIP_POLARIS10:
        chip_name = "polaris10";
        break;
+   case CHIP_POLARIS12:
+       chip_name = "polaris12";
+       break;
    case CHIP_CARRIZO:
        chip_name = "carrizo";
        break;


@@ -56,7 +56,6 @@
 #define BIOS_SCRATCH_4                                    0x5cd
 
 MODULE_FIRMWARE("radeon/tahiti_smc.bin");
-MODULE_FIRMWARE("radeon/tahiti_k_smc.bin");
 MODULE_FIRMWARE("radeon/pitcairn_smc.bin");
 MODULE_FIRMWARE("radeon/pitcairn_k_smc.bin");
 MODULE_FIRMWARE("radeon/verde_smc.bin");
@@ -3488,19 +3487,6 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
            (adev->pdev->device == 0x6817) ||
            (adev->pdev->device == 0x6806))
            max_mclk = 120000;
-   } else if (adev->asic_type == CHIP_VERDE) {
-       if ((adev->pdev->revision == 0x81) ||
-           (adev->pdev->revision == 0x83) ||
-           (adev->pdev->revision == 0x87) ||
-           (adev->pdev->device == 0x6820) ||
-           (adev->pdev->device == 0x6821) ||
-           (adev->pdev->device == 0x6822) ||
-           (adev->pdev->device == 0x6823) ||
-           (adev->pdev->device == 0x682A) ||
-           (adev->pdev->device == 0x682B)) {
-           max_sclk = 75000;
-           max_mclk = 80000;
-       }
    } else if (adev->asic_type == CHIP_OLAND) {
        if ((adev->pdev->revision == 0xC7) ||
            (adev->pdev->revision == 0x80) ||
@@ -7687,49 +7673,49 @@ static int si_dpm_init_microcode(struct amdgpu_device *adev)
        chip_name = "tahiti";
        break;
    case CHIP_PITCAIRN:
-       if ((adev->pdev->revision == 0x81) ||
-           (adev->pdev->device == 0x6810) ||
-           (adev->pdev->device == 0x6811) ||
-           (adev->pdev->device == 0x6816) ||
-           (adev->pdev->device == 0x6817) ||
-           (adev->pdev->device == 0x6806))
+       if ((adev->pdev->revision == 0x81) &&
+           ((adev->pdev->device == 0x6810) ||
+            (adev->pdev->device == 0x6811)))
            chip_name = "pitcairn_k";
        else
            chip_name = "pitcairn";
        break;
    case CHIP_VERDE:
-       if ((adev->pdev->revision == 0x81) ||
-           (adev->pdev->revision == 0x83) ||
-           (adev->pdev->revision == 0x87) ||
-           (adev->pdev->device == 0x6820) ||
-           (adev->pdev->device == 0x6821) ||
-           (adev->pdev->device == 0x6822) ||
-           (adev->pdev->device == 0x6823) ||
-           (adev->pdev->device == 0x682A) ||
-           (adev->pdev->device == 0x682B))
+       if (((adev->pdev->device == 0x6820) &&
+            ((adev->pdev->revision == 0x81) ||
+             (adev->pdev->revision == 0x83))) ||
+           ((adev->pdev->device == 0x6821) &&
+            ((adev->pdev->revision == 0x83) ||
+             (adev->pdev->revision == 0x87))) ||
+           ((adev->pdev->revision == 0x87) &&
+            ((adev->pdev->device == 0x6823) ||
+             (adev->pdev->device == 0x682b))))
            chip_name = "verde_k";
        else
            chip_name = "verde";
        break;
    case CHIP_OLAND:
-       if ((adev->pdev->revision == 0xC7) ||
-           (adev->pdev->revision == 0x80) ||
-           (adev->pdev->revision == 0x81) ||
-           (adev->pdev->revision == 0x83) ||
-           (adev->pdev->revision == 0x87) ||
-           (adev->pdev->device == 0x6604) ||
-           (adev->pdev->device == 0x6605))
+       if (((adev->pdev->revision == 0x81) &&
+            ((adev->pdev->device == 0x6600) ||
+             (adev->pdev->device == 0x6604) ||
+             (adev->pdev->device == 0x6605) ||
+             (adev->pdev->device == 0x6610))) ||
+           ((adev->pdev->revision == 0x83) &&
+            (adev->pdev->device == 0x6610)))
            chip_name = "oland_k";
        else
            chip_name = "oland";
        break;
    case CHIP_HAINAN:
-       if ((adev->pdev->revision == 0x81) ||
-           (adev->pdev->revision == 0x83) ||
-           (adev->pdev->revision == 0xC3) ||
-           (adev->pdev->device == 0x6664) ||
-           (adev->pdev->device == 0x6665) ||
-           (adev->pdev->device == 0x6667))
+       if (((adev->pdev->revision == 0x81) &&
+            (adev->pdev->device == 0x6660)) ||
+           ((adev->pdev->revision == 0x83) &&
+            ((adev->pdev->device == 0x6660) ||
+             (adev->pdev->device == 0x6663) ||
+             (adev->pdev->device == 0x6665) ||
+             (adev->pdev->device == 0x6667))) ||
+           ((adev->pdev->revision == 0xc3) &&
+            (adev->pdev->device == 0x6665)))
            chip_name = "hainan_k";
        else
            chip_name = "hainan";


@@ -791,15 +791,10 @@ static int uvd_v5_0_set_clockgating_state(void *handle,
 {
    struct amdgpu_device *adev = (struct amdgpu_device *)handle;
    bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
-   static int curstate = -1;
 
    if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
        return 0;
 
-   if (curstate == state)
-       return 0;
-
-   curstate = state;
    if (enable) {
        /* wait for STATUS to clear */
        if (uvd_v5_0_wait_for_idle(handle))


@@ -320,11 +320,12 @@ static unsigned vce_v3_0_get_harvest_config(struct amdgpu_device *adev)
 {
    u32 tmp;
 
-   /* Fiji, Stoney, Polaris10, Polaris11 are single pipe */
+   /* Fiji, Stoney, Polaris10, Polaris11, Polaris12 are single pipe */
    if ((adev->asic_type == CHIP_FIJI) ||
        (adev->asic_type == CHIP_STONEY) ||
        (adev->asic_type == CHIP_POLARIS10) ||
-       (adev->asic_type == CHIP_POLARIS11))
+       (adev->asic_type == CHIP_POLARIS11) ||
+       (adev->asic_type == CHIP_POLARIS12))
        return AMDGPU_VCE_HARVEST_VCE1;
 
    /* Tonga and CZ are dual or single pipe */


@@ -88,6 +88,7 @@ MODULE_FIRMWARE("amdgpu/polaris10_smc.bin");
 MODULE_FIRMWARE("amdgpu/polaris10_smc_sk.bin");
 MODULE_FIRMWARE("amdgpu/polaris11_smc.bin");
 MODULE_FIRMWARE("amdgpu/polaris11_smc_sk.bin");
+MODULE_FIRMWARE("amdgpu/polaris12_smc.bin");
 
 /*
  * Indirect registers accessor
@@ -312,6 +313,7 @@ static void vi_init_golden_registers(struct amdgpu_device *adev)
        break;
    case CHIP_POLARIS11:
    case CHIP_POLARIS10:
+   case CHIP_POLARIS12:
    default:
        break;
    }
@@ -671,6 +673,7 @@ static int vi_read_register(struct amdgpu_device *adev, u32 se_num,
    case CHIP_TONGA:
    case CHIP_POLARIS11:
    case CHIP_POLARIS10:
+   case CHIP_POLARIS12:
    case CHIP_CARRIZO:
    case CHIP_STONEY:
        asic_register_table = cz_allowed_read_registers;
@@ -994,6 +997,11 @@ static int vi_common_early_init(void *handle)
        adev->pg_flags = 0;
        adev->external_rev_id = adev->rev_id + 0x50;
        break;
+   case CHIP_POLARIS12:
+       adev->cg_flags = AMD_CG_SUPPORT_UVD_MGCG;
+       adev->pg_flags = 0;
+       adev->external_rev_id = adev->rev_id + 0x64;
+       break;
    case CHIP_CARRIZO:
        adev->cg_flags = AMD_CG_SUPPORT_UVD_MGCG |
            AMD_CG_SUPPORT_GFX_MGCG |
@@ -1346,6 +1354,7 @@ static int vi_common_set_clockgating_state(void *handle,
    case CHIP_TONGA:
    case CHIP_POLARIS10:
    case CHIP_POLARIS11:
+   case CHIP_POLARIS12:
        vi_common_set_clockgating_state_by_smu(adev, state);
    default:
        break;
@@ -1429,6 +1438,7 @@ int vi_set_ip_blocks(struct amdgpu_device *adev)
        break;
    case CHIP_POLARIS11:
    case CHIP_POLARIS10:
+   case CHIP_POLARIS12:
        amdgpu_ip_block_add(adev, &vi_common_ip_block);
        amdgpu_ip_block_add(adev, &gmc_v8_1_ip_block);
        amdgpu_ip_block_add(adev, &tonga_ih_ip_block);


@@ -23,7 +23,7 @@
 #ifndef __AMD_SHARED_H__
 #define __AMD_SHARED_H__
 
-#define AMD_MAX_USEC_TIMEOUT		100000  /* 100 ms */
+#define AMD_MAX_USEC_TIMEOUT		200000  /* 200 ms */
 
 /*
  * Supported ASIC types
@@ -46,6 +46,7 @@ enum amd_asic_type {
    CHIP_STONEY,
    CHIP_POLARIS10,
    CHIP_POLARIS11,
+   CHIP_POLARIS12,
    CHIP_LAST,
 };


@@ -95,6 +95,7 @@ int hwmgr_init(struct amd_pp_init *pp_init, struct pp_instance *handle)
            break;
        case CHIP_POLARIS11:
        case CHIP_POLARIS10:
+       case CHIP_POLARIS12:
            polaris_set_asic_special_caps(hwmgr);
            hwmgr->feature_mask &= ~(PP_UVD_HANDSHAKE_MASK);
            break;
@@ -745,7 +746,7 @@ int polaris_set_asic_special_caps(struct pp_hwmgr *hwmgr)
    phm_cap_set(hwmgr->platform_descriptor.platformCaps,
                        PHM_PlatformCaps_TablelessHardwareInterface);
 
-   if (hwmgr->chip_id == CHIP_POLARIS11)
+   if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id == CHIP_POLARIS12))
        phm_cap_set(hwmgr->platform_descriptor.platformCaps,
                    PHM_PlatformCaps_SPLLShutdownSupport);
    return 0;


@@ -521,7 +521,7 @@ int smu7_enable_didt_config(struct pp_hwmgr *hwmgr)
            PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
            result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris10);
            PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
-       } else if (hwmgr->chip_id == CHIP_POLARIS11) {
+       } else if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id == CHIP_POLARIS12)) {
            result = smu7_program_pt_config_registers(hwmgr, GCCACConfig_Polaris11);
            PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
            result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris11);


@@ -65,6 +65,7 @@ int smum_init(struct amd_pp_init *pp_init, struct pp_instance *handle)
            break;
        case CHIP_POLARIS11:
        case CHIP_POLARIS10:
+       case CHIP_POLARIS12:
            polaris10_smum_init(smumgr);
            break;
        default:


@@ -1259,8 +1259,10 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 
    if (!nonblock) {
        ret = drm_atomic_helper_wait_for_fences(dev, state, true);
-       if (ret)
+       if (ret) {
+           drm_atomic_helper_cleanup_planes(dev, state);
            return ret;
+       }
    }
 
    /*


@@ -51,6 +51,9 @@ static int meson_plane_atomic_check(struct drm_plane *plane,
    struct drm_crtc_state *crtc_state;
    struct drm_rect clip = { 0, };
 
+   if (!state->crtc)
+       return 0;
+
    crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
    if (IS_ERR(crtc_state))
        return PTR_ERR(crtc_state);


@@ -38,6 +38,11 @@
  * - TV Panel encoding via ENCT
  */
 
+/* HHI Registers */
+#define HHI_VDAC_CNTL0		0x2F4 /* 0xbd offset in data sheet */
+#define HHI_VDAC_CNTL1		0x2F8 /* 0xbe offset in data sheet */
+#define HHI_HDMI_PHY_CNTL0	0x3a0 /* 0xe8 offset in data sheet */
+
 struct meson_cvbs_enci_mode meson_cvbs_enci_pal = {
    .mode_tag = MESON_VENC_MODE_CVBS_PAL,
    .hso_begin = 3,
@@ -242,6 +247,20 @@ void meson_venc_disable_vsync(struct meson_drm *priv)
 
 void meson_venc_init(struct meson_drm *priv)
 {
+   /* Disable CVBS VDAC */
+   regmap_write(priv->hhi, HHI_VDAC_CNTL0, 0);
+   regmap_write(priv->hhi, HHI_VDAC_CNTL1, 8);
+
+   /* Power Down Dacs */
+   writel_relaxed(0xff, priv->io_base + _REG(VENC_VDAC_SETTING));
+
+   /* Disable HDMI PHY */
+   regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL0, 0);
+
+   /* Disable HDMI */
+   writel_bits_relaxed(0x3, 0,
+               priv->io_base + _REG(VPU_HDMI_SETTING));
+
    /* Disable all encoders */
    writel_relaxed(0, priv->io_base + _REG(ENCI_VIDEO_EN));
    writel_relaxed(0, priv->io_base + _REG(ENCP_VIDEO_EN));


@@ -167,7 +167,7 @@ static void meson_venc_cvbs_encoder_disable(struct drm_encoder *encoder)
 
    /* Disable CVBS VDAC */
    regmap_write(priv->hhi, HHI_VDAC_CNTL0, 0);
-   regmap_write(priv->hhi, HHI_VDAC_CNTL1, 0);
+   regmap_write(priv->hhi, HHI_VDAC_CNTL1, 8);
 }
 
 static void meson_venc_cvbs_encoder_enable(struct drm_encoder *encoder)


@@ -213,7 +213,14 @@ void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
 void adreno_flush(struct msm_gpu *gpu)
 {
    struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
-   uint32_t wptr = get_wptr(gpu->rb);
+   uint32_t wptr;
+
+   /*
+    * Mask wptr value that we calculate to fit in the HW range. This is
+    * to account for the possibility that the last command fit exactly into
+    * the ringbuffer and rb->next hasn't wrapped to zero yet
+    */
+   wptr = get_wptr(gpu->rb) & ((gpu->rb->size / 4) - 1);
 
    /* ensure writes to ringbuffer have hit system memory: */
    mb();


@@ -106,7 +106,8 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
            pagefault_disable();
        }
 
-       if (submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) {
+       if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) ||
+           !(submit_bo.flags & MSM_SUBMIT_BO_FLAGS)) {
            DRM_ERROR("invalid flags: %x\n", submit_bo.flags);
            ret = -EINVAL;
            goto out_unlock;
@@ -290,7 +291,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj,
 {
    uint32_t i, last_offset = 0;
    uint32_t *ptr;
-   int ret;
+   int ret = 0;
 
    if (offset % 4) {
        DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset);
@@ -318,12 +319,13 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj,
 
        ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc));
        if (ret)
-           return -EFAULT;
+           goto out;
 
        if (submit_reloc.submit_offset % 4) {
            DRM_ERROR("non-aligned reloc offset: %u\n",
                    submit_reloc.submit_offset);
-           return -EINVAL;
+           ret = -EINVAL;
+           goto out;
        }
 
        /* offset in dwords: */
@@ -332,12 +334,13 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj,
        if ((off >= (obj->base.size / 4)) ||
                (off < last_offset)) {
            DRM_ERROR("invalid offset %u at reloc %u\n", off, i);
-           return -EINVAL;
+           ret = -EINVAL;
+           goto out;
        }
 
        ret = submit_bo(submit, submit_reloc.reloc_idx, NULL, &iova, &valid);
        if (ret)
-           return ret;
+           goto out;
 
        if (valid)
            continue;
@@ -354,9 +357,10 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj,
        last_offset = off;
    }
 
+out:
    msm_gem_put_vaddr_locked(&obj->base);
 
-   return 0;
+   return ret;
 }
 
 static void submit_cleanup(struct msm_gem_submit *submit)


@@ -23,7 +23,8 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int size)
    struct msm_ringbuffer *ring;
    int ret;
 
-   size = ALIGN(size, 4);   /* size should be dword aligned */
+   if (WARN_ON(!is_power_of_2(size)))
+       return ERR_PTR(-EINVAL);
 
    ring = kzalloc(sizeof(*ring), GFP_KERNEL);
    if (!ring) {


@@ -50,7 +50,6 @@ MODULE_FIRMWARE("radeon/tahiti_ce.bin");
 MODULE_FIRMWARE("radeon/tahiti_mc.bin");
 MODULE_FIRMWARE("radeon/tahiti_rlc.bin");
 MODULE_FIRMWARE("radeon/tahiti_smc.bin");
-MODULE_FIRMWARE("radeon/tahiti_k_smc.bin");
 
 MODULE_FIRMWARE("radeon/PITCAIRN_pfp.bin");
 MODULE_FIRMWARE("radeon/PITCAIRN_me.bin");
@@ -1657,9 +1656,6 @@ static int si_init_microcode(struct radeon_device *rdev)
    switch (rdev->family) {
    case CHIP_TAHITI:
        chip_name = "TAHITI";
-       /* XXX: figure out which Tahitis need the new ucode */
-       if (0)
-           new_smc = true;
        new_chip_name = "tahiti";
        pfp_req_size = SI_PFP_UCODE_SIZE * 4;
        me_req_size = SI_PM4_UCODE_SIZE * 4;
@@ -1671,12 +1667,9 @@ static int si_init_microcode(struct radeon_device *rdev)
        break;
    case CHIP_PITCAIRN:
        chip_name = "PITCAIRN";
-       if ((rdev->pdev->revision == 0x81) ||
-           (rdev->pdev->device == 0x6810) ||
-           (rdev->pdev->device == 0x6811) ||
-           (rdev->pdev->device == 0x6816) ||
-           (rdev->pdev->device == 0x6817) ||
-           (rdev->pdev->device == 0x6806))
+       if ((rdev->pdev->revision == 0x81) &&
+           ((rdev->pdev->device == 0x6810) ||
+            (rdev->pdev->device == 0x6811)))
            new_smc = true;
        new_chip_name = "pitcairn";
        pfp_req_size = SI_PFP_UCODE_SIZE * 4;
@@ -1689,15 +1682,15 @@ static int si_init_microcode(struct radeon_device *rdev)
        break;
    case CHIP_VERDE:
        chip_name = "VERDE";
-       if ((rdev->pdev->revision == 0x81) ||
-           (rdev->pdev->revision == 0x83) ||
-           (rdev->pdev->revision == 0x87) ||
-           (rdev->pdev->device == 0x6820) ||
-           (rdev->pdev->device == 0x6821) ||
-           (rdev->pdev->device == 0x6822) ||
-           (rdev->pdev->device == 0x6823) ||
-           (rdev->pdev->device == 0x682A) ||
-           (rdev->pdev->device == 0x682B))
+       if (((rdev->pdev->device == 0x6820) &&
+            ((rdev->pdev->revision == 0x81) ||
+             (rdev->pdev->revision == 0x83))) ||
+           ((rdev->pdev->device == 0x6821) &&
+            ((rdev->pdev->revision == 0x83) ||
+             (rdev->pdev->revision == 0x87))) ||
+           ((rdev->pdev->revision == 0x87) &&
+            ((rdev->pdev->device == 0x6823) ||
+             (rdev->pdev->device == 0x682b))))
            new_smc = true;
        new_chip_name = "verde";
        pfp_req_size = SI_PFP_UCODE_SIZE * 4;
@@ -1710,13 +1703,13 @@ static int si_init_microcode(struct radeon_device *rdev)
        break;
    case CHIP_OLAND:
        chip_name = "OLAND";
-       if ((rdev->pdev->revision == 0xC7) ||
-           (rdev->pdev->revision == 0x80) ||
-           (rdev->pdev->revision == 0x81) ||
-           (rdev->pdev->revision == 0x83) ||
-           (rdev->pdev->revision == 0x87) ||
-           (rdev->pdev->device == 0x6604) ||
-           (rdev->pdev->device == 0x6605))
+       if (((rdev->pdev->revision == 0x81) &&
+            ((rdev->pdev->device == 0x6600) ||
+             (rdev->pdev->device == 0x6604) ||
+             (rdev->pdev->device == 0x6605) ||
+             (rdev->pdev->device == 0x6610))) ||
+           ((rdev->pdev->revision == 0x83) &&
+            (rdev->pdev->device == 0x6610)))
            new_smc = true;
        new_chip_name = "oland";
        pfp_req_size = SI_PFP_UCODE_SIZE * 4;
@@ -1728,12 +1721,15 @@ static int si_init_microcode(struct radeon_device *rdev)
        break;
    case CHIP_HAINAN:
        chip_name = "HAINAN";
-       if ((rdev->pdev->revision == 0x81) ||
-           (rdev->pdev->revision == 0x83) ||
-           (rdev->pdev->revision == 0xC3) ||
-           (rdev->pdev->device == 0x6664) ||
-           (rdev->pdev->device == 0x6665) ||
-           (rdev->pdev->device == 0x6667))
+       if (((rdev->pdev->revision == 0x81) &&
+            (rdev->pdev->device == 0x6660)) ||
+           ((rdev->pdev->revision == 0x83) &&
+            ((rdev->pdev->device == 0x6660) ||
+             (rdev->pdev->device == 0x6663) ||
+             (rdev->pdev->device == 0x6665) ||
+             (rdev->pdev->device == 0x6667))) ||
+           ((rdev->pdev->revision == 0xc3) &&
+            (rdev->pdev->device == 0x6665)))
            new_smc = true;
        new_chip_name = "hainan";
        pfp_req_size = SI_PFP_UCODE_SIZE * 4;


@@ -3008,19 +3008,6 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
            (rdev->pdev->device == 0x6817) ||
            (rdev->pdev->device == 0x6806))
            max_mclk = 120000;
-   } else if (rdev->family == CHIP_VERDE) {
-       if ((rdev->pdev->revision == 0x81) ||
-           (rdev->pdev->revision == 0x83) ||
-           (rdev->pdev->revision == 0x87) ||
-           (rdev->pdev->device == 0x6820) ||
-           (rdev->pdev->device == 0x6821) ||
-           (rdev->pdev->device == 0x6822) ||
-           (rdev->pdev->device == 0x6823) ||
-           (rdev->pdev->device == 0x682A) ||
-           (rdev->pdev->device == 0x682B)) {
-           max_sclk = 75000;
-           max_mclk = 80000;
-       }
    } else if (rdev->family == CHIP_OLAND) {
        if ((rdev->pdev->revision == 0xC7) ||
            (rdev->pdev->revision == 0x80) ||


@@ -856,7 +856,7 @@ irqreturn_t tilcdc_crtc_irq(struct drm_crtc *crtc)
    struct tilcdc_crtc *tilcdc_crtc = to_tilcdc_crtc(crtc);
    struct drm_device *dev = crtc->dev;
    struct tilcdc_drm_private *priv = dev->dev_private;
-   uint32_t stat;
+   uint32_t stat, reg;
 
    stat = tilcdc_read_irqstatus(dev);
    tilcdc_clear_irqstatus(dev, stat);
@@ -921,17 +921,26 @@ irqreturn_t tilcdc_crtc_irq(struct drm_crtc *crtc)
        dev_err_ratelimited(dev->dev, "%s(0x%08x): Sync lost",
                    __func__, stat);
        tilcdc_crtc->frame_intact = false;
-       if (tilcdc_crtc->sync_lost_count++ >
-           SYNC_LOST_COUNT_LIMIT) {
-           dev_err(dev->dev, "%s(0x%08x): Sync lost flood detected, recovering", __func__, stat);
-           queue_work(system_wq, &tilcdc_crtc->recover_work);
-           if (priv->rev == 1)
+       if (priv->rev == 1) {
+           reg = tilcdc_read(dev, LCDC_RASTER_CTRL_REG);
+           if (reg & LCDC_RASTER_ENABLE) {
                tilcdc_clear(dev, LCDC_RASTER_CTRL_REG,
-                        LCDC_V1_SYNC_LOST_INT_ENA);
-           else
+                        LCDC_RASTER_ENABLE);
+               tilcdc_set(dev, LCDC_RASTER_CTRL_REG,
+                      LCDC_RASTER_ENABLE);
+           }
+       } else {
+           if (tilcdc_crtc->sync_lost_count++ >
+               SYNC_LOST_COUNT_LIMIT) {
+               dev_err(dev->dev,
+                   "%s(0x%08x): Sync lost flood detected, recovering",
+                   __func__, stat);
+               queue_work(system_wq,
+                      &tilcdc_crtc->recover_work);
                tilcdc_write(dev, LCDC_INT_ENABLE_CLR_REG,
                         LCDC_SYNC_LOST);
-           tilcdc_crtc->sync_lost_count = 0;
+               tilcdc_crtc->sync_lost_count = 0;
+           }
        }
    }


@@ -190,7 +190,7 @@ static netdev_tx_t ipddp_xmit(struct sk_buff *skb, struct net_device *dev)
  */
 static int ipddp_create(struct ipddp_route *new_rt)
 {
-   struct ipddp_route *rt = kmalloc(sizeof(*rt), GFP_KERNEL);
+   struct ipddp_route *rt = kzalloc(sizeof(*rt), GFP_KERNEL);
 
    if (rt == NULL)
        return -ENOMEM;


@@ -3964,8 +3964,8 @@ static void igb_clean_rx_ring(struct igb_ring *rx_ring)
                     PAGE_SIZE,
                     DMA_FROM_DEVICE,
                     DMA_ATTR_SKIP_CPU_SYNC);
-       __page_frag_drain(buffer_info->page, 0,
-                 buffer_info->pagecnt_bias);
+       __page_frag_cache_drain(buffer_info->page,
+                   buffer_info->pagecnt_bias);
 
        buffer_info->page = NULL;
    }
@@ -6991,7 +6991,7 @@ static struct sk_buff *igb_fetch_rx_buffer(struct igb_ring *rx_ring,
        dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
                     PAGE_SIZE, DMA_FROM_DEVICE,
                     DMA_ATTR_SKIP_CPU_SYNC);
-       __page_frag_drain(page, 0, rx_buffer->pagecnt_bias);
+       __page_frag_cache_drain(page, rx_buffer->pagecnt_bias);
    }
 
    /* clear contents of rx_buffer */


@@ -2275,7 +2275,7 @@ static int mlx4_en_change_mtu(struct net_device *dev, int new_mtu)
 
    if (priv->tx_ring_num[TX_XDP] &&
        !mlx4_en_check_xdp_mtu(dev, new_mtu))
-       return -ENOTSUPP;
+       return -EOPNOTSUPP;
 
    dev->mtu = new_mtu;


@@ -3669,14 +3669,8 @@ static void mlx5e_nic_init(struct mlx5_core_dev *mdev,
 
 static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
 {
-   struct mlx5_core_dev *mdev = priv->mdev;
-   struct mlx5_eswitch *esw = mdev->priv.eswitch;
-
    mlx5e_vxlan_cleanup(priv);
 
-   if (MLX5_CAP_GEN(mdev, vport_group_manager))
-       mlx5_eswitch_unregister_vport_rep(esw, 0);
-
    if (priv->xdp_prog)
        bpf_prog_put(priv->xdp_prog);
 }
@@ -3801,9 +3795,14 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
 
 static void mlx5e_nic_disable(struct mlx5e_priv *priv)
 {
+   struct mlx5_core_dev *mdev = priv->mdev;
+   struct mlx5_eswitch *esw = mdev->priv.eswitch;
+
    queue_work(priv->wq, &priv->set_rx_mode_work);
+   if (MLX5_CAP_GEN(mdev, vport_group_manager))
+       mlx5_eswitch_unregister_vport_rep(esw, 0);
    mlx5e_disable_async_events(priv);
-   mlx5_lag_remove(priv->mdev);
+   mlx5_lag_remove(mdev);
 }
 
 static const struct mlx5e_profile mlx5e_nic_profile = {


@@ -109,7 +109,6 @@ static bool mlx5e_am_on_top(struct mlx5e_rx_am *am)
    switch (am->tune_state) {
    case MLX5E_AM_PARKING_ON_TOP:
    case MLX5E_AM_PARKING_TIRED:
-       WARN_ONCE(true, "mlx5e_am_on_top: PARKING\n");
        return true;
    case MLX5E_AM_GOING_RIGHT:
        return (am->steps_left > 1) && (am->steps_right == 1);
@@ -123,7 +122,6 @@ static void mlx5e_am_turn(struct mlx5e_rx_am *am)
    switch (am->tune_state) {
    case MLX5E_AM_PARKING_ON_TOP:
    case MLX5E_AM_PARKING_TIRED:
-       WARN_ONCE(true, "mlx5e_am_turn: PARKING\n");
        break;
    case MLX5E_AM_GOING_RIGHT:
        am->tune_state = MLX5E_AM_GOING_LEFT;
@@ -144,7 +142,6 @@ static int mlx5e_am_step(struct mlx5e_rx_am *am)
    switch (am->tune_state) {
    case MLX5E_AM_PARKING_ON_TOP:
    case MLX5E_AM_PARKING_TIRED:
-       WARN_ONCE(true, "mlx5e_am_step: PARKING\n");
        break;
    case MLX5E_AM_GOING_RIGHT:
        if (am->profile_ix == (MLX5E_PARAMS_AM_NUM_PROFILES - 1))
@@ -282,10 +279,8 @@ static void mlx5e_am_calc_stats(struct mlx5e_rx_am_sample *start,
    u32 delta_us = ktime_us_delta(end->time, start->time);
    unsigned int npkts = end->pkt_ctr - start->pkt_ctr;
 
-   if (!delta_us) {
-       WARN_ONCE(true, "mlx5e_am_calc_stats: delta_us=0\n");
+   if (!delta_us)
        return;
-   }
 
    curr_stats->ppms = (npkts * USEC_PER_MSEC) / delta_us;
    curr_stats->epms = (MLX5E_AM_NEVENTS * USEC_PER_MSEC) / delta_us;


@@ -161,15 +161,21 @@ static void mlx5e_detach_encap(struct mlx5e_priv *priv,
 	}
 }
 
+/* we get here also when setting rule to the FW failed, etc. It means that the
+ * flow rule itself might not exist, but some offloading related to the actions
+ * should be cleaned.
+ */
 static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
 			      struct mlx5e_tc_flow *flow)
 {
 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 	struct mlx5_fc *counter = NULL;
 
-	counter = mlx5_flow_rule_counter(flow->rule);
-
-	mlx5_del_flow_rules(flow->rule);
+	if (!IS_ERR(flow->rule)) {
+		counter = mlx5_flow_rule_counter(flow->rule);
+		mlx5_del_flow_rules(flow->rule);
+		mlx5_fc_destroy(priv->mdev, counter);
+	}
 
 	if (esw && esw->mode == SRIOV_OFFLOADS) {
 		mlx5_eswitch_del_vlan_action(esw, flow->attr);
@@ -177,8 +183,6 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
 			mlx5e_detach_encap(priv, flow);
 	}
 
-	mlx5_fc_destroy(priv->mdev, counter);
-
 	if (!mlx5e_tc_num_filters(priv) && (priv->fs.tc.t)) {
 		mlx5_destroy_flow_table(priv->fs.tc.t);
 		priv->fs.tc.t = NULL;
@@ -225,6 +229,11 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
 	void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
 				       outer_headers);
 
+	struct flow_dissector_key_control *enc_control =
+		skb_flow_dissector_target(f->dissector,
+					  FLOW_DISSECTOR_KEY_ENC_CONTROL,
+					  f->key);
+
 	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
 		struct flow_dissector_key_ports *key =
 			skb_flow_dissector_target(f->dissector,
@@ -237,28 +246,34 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
 		/* Full udp dst port must be given */
 		if (memchr_inv(&mask->dst, 0xff, sizeof(mask->dst)))
-			return -EOPNOTSUPP;
-
-		/* udp src port isn't supported */
-		if (memchr_inv(&mask->src, 0, sizeof(mask->src)))
-			return -EOPNOTSUPP;
+			goto vxlan_match_offload_err;
 
 		if (mlx5e_vxlan_lookup_port(priv, be16_to_cpu(key->dst)) &&
 		    MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap))
 			parse_vxlan_attr(spec, f);
-		else
+		else {
+			netdev_warn(priv->netdev,
+				    "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->dst));
 			return -EOPNOTSUPP;
+		}
 
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c,
 			 udp_dport, ntohs(mask->dst));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
 			 udp_dport, ntohs(key->dst));
 
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+			 udp_sport, ntohs(mask->src));
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+			 udp_sport, ntohs(key->src));
 	} else { /* udp dst port must be given */
-		return -EOPNOTSUPP;
+vxlan_match_offload_err:
+		netdev_warn(priv->netdev,
+			    "IP tunnel decap offload supported only for vxlan, must set UDP dport\n");
+		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+	if (enc_control->addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
 		struct flow_dissector_key_ipv4_addrs *key =
 			skb_flow_dissector_target(f->dissector,
 						  FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
@@ -280,10 +295,10 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
 			 dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
 			 ntohl(key->dst));
-	}
 
 		MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IP);
+	}
 
 	/* Enforce DMAC when offloading incoming tunneled flows.
 	 * Flow counters require a match on the DMAC.
@@ -346,6 +361,9 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 			if (parse_tunnel_attr(priv, spec, f))
 				return -EOPNOTSUPP;
 			break;
+		case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
+			netdev_warn(priv->netdev,
+				    "IPv6 tunnel decap offload isn't supported\n");
 		default:
 			return -EOPNOTSUPP;
 		}
@@ -375,6 +393,10 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 			MLX5_SET(fte_match_set_lyr_2_4, headers_c, frag, 1);
 			MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
 				 key->flags & FLOW_DIS_IS_FRAGMENT);
+
+			/* the HW doesn't need L3 inline to match on frag=no */
+			if (key->flags & FLOW_DIS_IS_FRAGMENT)
+				*min_inline = MLX5_INLINE_MODE_IP;
 		}
 	}
@@ -647,17 +669,14 @@ static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv,
 #if IS_ENABLED(CONFIG_INET)
 	rt = ip_route_output_key(dev_net(mirred_dev), fl4);
-	if (IS_ERR(rt)) {
-		pr_warn("%s: no route to %pI4\n", __func__, &fl4->daddr);
-		return -EOPNOTSUPP;
-	}
+	if (IS_ERR(rt))
+		return PTR_ERR(rt);
 #else
 	return -EOPNOTSUPP;
 #endif
 
 	if (!switchdev_port_same_parent_id(priv->netdev, rt->dst.dev)) {
-		pr_warn("%s: Can't offload the flow, netdevices aren't on the same HW e-switch\n",
-			__func__);
+		pr_warn("%s: can't offload, devices not on same HW e-switch\n", __func__);
 		ip_rt_put(rt);
 		return -EOPNOTSUPP;
 	}
@@ -718,12 +737,12 @@ static int mlx5e_create_encap_header_ipv4(struct mlx5e_priv *priv,
 					  struct net_device **out_dev)
 {
 	int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
+	struct neighbour *n = NULL;
 	struct flowi4 fl4 = {};
-	struct neighbour *n;
 	char *encap_header;
 	int encap_size;
-	__be32 saddr;
-	int ttl;
+	__be32 saddr = 0;
+	int ttl = 0;
 	int err;
 
 	encap_header = kzalloc(max_encap_size, GFP_KERNEL);
@@ -750,7 +769,8 @@ static int mlx5e_create_encap_header_ipv4(struct mlx5e_priv *priv,
 	e->out_dev = *out_dev;
 
 	if (!(n->nud_state & NUD_VALID)) {
-		err = -ENOTSUPP;
+		pr_warn("%s: can't offload, neighbour to %pI4 invalid\n", __func__, &fl4.daddr);
+		err = -EOPNOTSUPP;
 		goto out;
 	}
@@ -772,6 +792,8 @@ static int mlx5e_create_encap_header_ipv4(struct mlx5e_priv *priv,
 	err = mlx5_encap_alloc(priv->mdev, e->tunnel_type,
 			       encap_size, encap_header, &e->encap_id);
 out:
+	if (err && n)
+		neigh_release(n);
 	kfree(encap_header);
 	return err;
 }
@@ -792,9 +814,17 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 	int tunnel_type;
 	int err;
 
-	/* udp dst port must be given */
+	/* udp dst port must be set */
 	if (!memchr_inv(&key->tp_dst, 0, sizeof(key->tp_dst)))
+		goto vxlan_encap_offload_err;
+
+	/* setting udp src port isn't supported */
+	if (memchr_inv(&key->tp_src, 0, sizeof(key->tp_src))) {
+vxlan_encap_offload_err:
+		netdev_warn(priv->netdev,
+			    "must set udp dst port and not set udp src port\n");
 		return -EOPNOTSUPP;
+	}
 
 	if (mlx5e_vxlan_lookup_port(priv, be16_to_cpu(key->tp_dst)) &&
 	    MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap)) {
@@ -802,6 +832,8 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 		info.tun_id = tunnel_id_to_key32(key->tun_id);
 		tunnel_type = MLX5_HEADER_TYPE_VXLAN;
 	} else {
+		netdev_warn(priv->netdev,
+			    "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->tp_dst));
 		return -EOPNOTSUPP;
 	}
@@ -809,6 +841,9 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 	case AF_INET:
 		info.daddr = key->u.ipv4.dst;
 		break;
+	case AF_INET6:
+		netdev_warn(priv->netdev,
+			    "IPv6 tunnel encap offload isn't supported\n");
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -986,7 +1021,7 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
 	if (IS_ERR(flow->rule)) {
 		err = PTR_ERR(flow->rule);
-		goto err_free;
+		goto err_del_rule;
 	}
 
 	err = rhashtable_insert_fast(&tc->ht, &flow->node,
@@ -997,7 +1032,7 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
 	goto out;
 
 err_del_rule:
-	mlx5_del_flow_rules(flow->rule);
+	mlx5e_tc_del_flow(priv, flow);
 
 err_free:
 	kfree(flow);

@@ -1217,7 +1217,8 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
 {
 	int err = 0;
 
-	mlx5_drain_health_wq(dev);
+	if (cleanup)
+		mlx5_drain_health_wq(dev);
 
 	mutex_lock(&dev->intf_state_mutex);
 	if (test_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state)) {
@@ -1402,9 +1403,10 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
 	mlx5_enter_error_state(dev);
 	mlx5_unload_one(dev, priv, false);
 
-	/* In case of kernel call save the pci state */
+	/* In case of kernel call save the pci state and drain the health wq */
 	if (state) {
 		pci_save_state(pdev);
+		mlx5_drain_health_wq(dev);
 		mlx5_pci_disable_device(dev);
 	}

@@ -279,6 +279,7 @@ config MARVELL_PHY
 
 config MESON_GXL_PHY
 	tristate "Amlogic Meson GXL Internal PHY"
+	depends on ARCH_MESON || COMPILE_TEST
 	---help---
 	  Currently has a driver for the Amlogic Meson GXL Internal PHY

@@ -1192,7 +1192,8 @@ static int marvell_read_status(struct phy_device *phydev)
 	int err;
 
 	/* Check the fiber mode first */
-	if (phydev->supported & SUPPORTED_FIBRE) {
+	if (phydev->supported & SUPPORTED_FIBRE &&
+	    phydev->interface != PHY_INTERFACE_MODE_SGMII) {
 		err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_FIBER);
 		if (err < 0)
 			goto error;

@@ -3576,39 +3576,87 @@ static bool delay_autosuspend(struct r8152 *tp)
 	return false;
 }
 
-static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
+static int rtl8152_rumtime_suspend(struct r8152 *tp)
 {
-	struct r8152 *tp = usb_get_intfdata(intf);
 	struct net_device *netdev = tp->netdev;
 	int ret = 0;
 
-	mutex_lock(&tp->control);
+	if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) {
+		u32 rcr = 0;
 
-	if (PMSG_IS_AUTO(message)) {
-		if (netif_running(netdev) && delay_autosuspend(tp)) {
+		if (delay_autosuspend(tp)) {
 			ret = -EBUSY;
 			goto out1;
 		}
 
-		set_bit(SELECTIVE_SUSPEND, &tp->flags);
-	} else {
-		netif_device_detach(netdev);
+		if (netif_carrier_ok(netdev)) {
+			u32 ocp_data;
+
+			rcr = ocp_read_dword(tp, MCU_TYPE_PLA, PLA_RCR);
+			ocp_data = rcr & ~RCR_ACPT_ALL;
+			ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data);
+			rxdy_gated_en(tp, true);
+			ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA,
+						 PLA_OOB_CTRL);
+			if (!(ocp_data & RXFIFO_EMPTY)) {
+				rxdy_gated_en(tp, false);
+				ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, rcr);
+				ret = -EBUSY;
+				goto out1;
+			}
+		}
+
+		clear_bit(WORK_ENABLE, &tp->flags);
+		usb_kill_urb(tp->intr_urb);
+
+		tp->rtl_ops.autosuspend_en(tp, true);
+
+		if (netif_carrier_ok(netdev)) {
+			napi_disable(&tp->napi);
+			rtl_stop_rx(tp);
+			rxdy_gated_en(tp, false);
+			ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, rcr);
+			napi_enable(&tp->napi);
+		}
 	}
 
+	set_bit(SELECTIVE_SUSPEND, &tp->flags);
+
+out1:
+	return ret;
+}
+
+static int rtl8152_system_suspend(struct r8152 *tp)
+{
+	struct net_device *netdev = tp->netdev;
+	int ret = 0;
+
+	netif_device_detach(netdev);
+
 	if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) {
 		clear_bit(WORK_ENABLE, &tp->flags);
 		usb_kill_urb(tp->intr_urb);
 		napi_disable(&tp->napi);
-		if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
-			rtl_stop_rx(tp);
-			tp->rtl_ops.autosuspend_en(tp, true);
-		} else {
-			cancel_delayed_work_sync(&tp->schedule);
-			tp->rtl_ops.down(tp);
-		}
+		cancel_delayed_work_sync(&tp->schedule);
+		tp->rtl_ops.down(tp);
 		napi_enable(&tp->napi);
 	}
-out1:
+
+	return ret;
+}
+
+static int rtl8152_suspend(struct usb_interface *intf, pm_message_t message)
+{
+	struct r8152 *tp = usb_get_intfdata(intf);
+	int ret;
+
+	mutex_lock(&tp->control);
+
+	if (PMSG_IS_AUTO(message))
+		ret = rtl8152_rumtime_suspend(tp);
+	else
+		ret = rtl8152_system_suspend(tp);
+
 	mutex_unlock(&tp->control);
 
 	return ret;

@@ -262,7 +262,9 @@ static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb,
 		.flowi4_iif = LOOPBACK_IFINDEX,
 		.flowi4_tos = RT_TOS(ip4h->tos),
 		.flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF,
+		.flowi4_proto = ip4h->protocol,
 		.daddr = ip4h->daddr,
+		.saddr = ip4h->saddr,
 	};
 	struct net *net = dev_net(vrf_dev);
 	struct rtable *rt;
@@ -1249,6 +1251,8 @@ static int vrf_newlink(struct net *src_net, struct net_device *dev,
 		return -EINVAL;
 
 	vrf->tb_id = nla_get_u32(data[IFLA_VRF_TABLE]);
+	if (vrf->tb_id == RT_TABLE_UNSPEC)
+		return -EINVAL;
 
 	dev->priv_flags |= IFF_L3MDEV_MASTER;

@@ -1047,6 +1047,7 @@ int rtl_usb_probe(struct usb_interface *intf,
 		return -ENOMEM;
 	}
 	rtlpriv = hw->priv;
+	rtlpriv->hw = hw;
 	rtlpriv->usb_data = kzalloc(RTL_USB_MAX_RX_COUNT * sizeof(u32),
 				    GFP_KERNEL);
 	if (!rtlpriv->usb_data)

@@ -691,8 +691,8 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
 				      pgoff_t index, unsigned long pfn)
 {
 	struct vm_area_struct *vma;
-	pte_t *ptep;
-	pte_t pte;
+	pte_t pte, *ptep = NULL;
+	pmd_t *pmdp = NULL;
 	spinlock_t *ptl;
 	bool changed;
@@ -707,21 +707,42 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
 		address = pgoff_address(index, vma);
 		changed = false;
-		if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
+		if (follow_pte_pmd(vma->vm_mm, address, &ptep, &pmdp, &ptl))
 			continue;
-		if (pfn != pte_pfn(*ptep))
-			goto unlock;
-		if (!pte_dirty(*ptep) && !pte_write(*ptep))
-			goto unlock;
 
-		flush_cache_page(vma, address, pfn);
-		pte = ptep_clear_flush(vma, address, ptep);
-		pte = pte_wrprotect(pte);
-		pte = pte_mkclean(pte);
-		set_pte_at(vma->vm_mm, address, ptep, pte);
-		changed = true;
-unlock:
-		pte_unmap_unlock(ptep, ptl);
+		if (pmdp) {
+#ifdef CONFIG_FS_DAX_PMD
+			pmd_t pmd;
+
+			if (pfn != pmd_pfn(*pmdp))
+				goto unlock_pmd;
+			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+				goto unlock_pmd;
+
+			flush_cache_page(vma, address, pfn);
+			pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+			pmd = pmd_wrprotect(pmd);
+			pmd = pmd_mkclean(pmd);
+			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+			changed = true;
+unlock_pmd:
+			spin_unlock(ptl);
+#endif
+		} else {
+			if (pfn != pte_pfn(*ptep))
+				goto unlock_pte;
+			if (!pte_dirty(*ptep) && !pte_write(*ptep))
+				goto unlock_pte;
+
+			flush_cache_page(vma, address, pfn);
+			pte = ptep_clear_flush(vma, address, ptep);
+			pte = pte_wrprotect(pte);
+			pte = pte_mkclean(pte);
+			set_pte_at(vma->vm_mm, address, ptep, pte);
+			changed = true;
+unlock_pte:
+			pte_unmap_unlock(ptep, ptl);
+		}
 
 		if (changed)
 			mmu_notifier_invalidate_page(vma->vm_mm, address);

@@ -3303,6 +3303,16 @@ static int ocfs2_downconvert_lock(struct ocfs2_super *osb,
 	mlog(ML_BASTS, "lockres %s, level %d => %d\n", lockres->l_name,
 	     lockres->l_level, new_level);
 
+	/*
+	 * On DLM_LKF_VALBLK, fsdlm behaves differently with o2cb. It always
+	 * expects DLM_LKF_VALBLK being set if the LKB has LVB, so that
+	 * we can recover correctly from node failure. Otherwise, we may get
+	 * invalid LVB in LKB, but without DLM_SBF_VALNOTVALID being set.
+	 */
+	if (!ocfs2_is_o2cb_active() &&
+	    lockres->l_ops->flags & LOCK_TYPE_USES_LVB)
+		lvb = 1;
+
 	if (lvb)
 		dlm_flags |= DLM_LKF_VALBLK;

@@ -48,6 +48,12 @@ static char ocfs2_hb_ctl_path[OCFS2_MAX_HB_CTL_PATH] = "/sbin/ocfs2_hb_ctl";
  */
 static struct ocfs2_stack_plugin *active_stack;
 
+inline int ocfs2_is_o2cb_active(void)
+{
+	return !strcmp(active_stack->sp_name, OCFS2_STACK_PLUGIN_O2CB);
+}
+EXPORT_SYMBOL_GPL(ocfs2_is_o2cb_active);
+
 static struct ocfs2_stack_plugin *ocfs2_stack_lookup(const char *name)
 {
 	struct ocfs2_stack_plugin *p;

@@ -298,6 +298,9 @@ void ocfs2_stack_glue_set_max_proto_version(struct ocfs2_protocol_version *max_p
 int ocfs2_stack_glue_register(struct ocfs2_stack_plugin *plugin);
 void ocfs2_stack_glue_unregister(struct ocfs2_stack_plugin *plugin);
 
+/* In ocfs2_downconvert_lock(), we need to know which stack we are using */
+int ocfs2_is_o2cb_active(void);
+
 extern struct kset *ocfs2_kset;
 
 #endif  /* STACKGLUE_H */

@@ -38,9 +38,8 @@ struct vm_area_struct;
 #define ___GFP_ACCOUNT		0x100000u
 #define ___GFP_NOTRACK		0x200000u
 #define ___GFP_DIRECT_RECLAIM	0x400000u
-#define ___GFP_OTHER_NODE	0x800000u
-#define ___GFP_WRITE		0x1000000u
-#define ___GFP_KSWAPD_RECLAIM	0x2000000u
+#define ___GFP_WRITE		0x800000u
+#define ___GFP_KSWAPD_RECLAIM	0x1000000u
 /* If the above are modified, __GFP_BITS_SHIFT may need updating */
 
 /*
@@ -172,11 +171,6 @@ struct vm_area_struct;
 * __GFP_NOTRACK_FALSE_POSITIVE is an alias of __GFP_NOTRACK. It's a means of
 * distinguishing in the source between false positives and allocations that
 * cannot be supported (e.g. page tables).
- *
- * __GFP_OTHER_NODE is for allocations that are on a remote node but that
- * should not be accounted for as a remote allocation in vmstat. A
- * typical user would be khugepaged collapsing a huge page on a remote
- * node.
 */
 #define __GFP_COLD	((__force gfp_t)___GFP_COLD)
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
@@ -184,10 +178,9 @@ struct vm_area_struct;
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 #define __GFP_NOTRACK	((__force gfp_t)___GFP_NOTRACK)
 #define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK)
-#define __GFP_OTHER_NODE ((__force gfp_t)___GFP_OTHER_NODE)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT 26
+#define __GFP_BITS_SHIFT 25
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /*
@@ -506,11 +499,10 @@ extern void free_hot_cold_page(struct page *page, bool cold);
 extern void free_hot_cold_page_list(struct list_head *list, bool cold);
 
 struct page_frag_cache;
-extern void __page_frag_drain(struct page *page, unsigned int order,
-			      unsigned int count);
-extern void *__alloc_page_frag(struct page_frag_cache *nc,
-			       unsigned int fragsz, gfp_t gfp_mask);
-extern void __free_page_frag(void *addr);
+extern void __page_frag_cache_drain(struct page *page, unsigned int count);
+extern void *page_frag_alloc(struct page_frag_cache *nc,
+			     unsigned int fragsz, gfp_t gfp_mask);
+extern void page_frag_free(void *addr);
 
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)

@@ -120,7 +120,7 @@ struct mem_cgroup_reclaim_iter {
 */
 struct mem_cgroup_per_node {
 	struct lruvec		lruvec;
-	unsigned long		lru_size[NR_LRU_LISTS];
+	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 
 	struct mem_cgroup_reclaim_iter	iter[DEF_PRIORITY + 1];
@@ -432,7 +432,7 @@ static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-		int nr_pages);
+		int zid, int nr_pages);
 
 unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
 					   int nid, unsigned int lru_mask);
@@ -441,9 +441,23 @@ static inline
 unsigned long mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru)
 {
 	struct mem_cgroup_per_node *mz;
+	unsigned long nr_pages = 0;
+	int zid;
 
 	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-	return mz->lru_size[lru];
+	for (zid = 0; zid < MAX_NR_ZONES; zid++)
+		nr_pages += mz->lru_zone_size[zid][lru];
+	return nr_pages;
+}
+
+static inline
+unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
+		enum lru_list lru, int zone_idx)
+{
+	struct mem_cgroup_per_node *mz;
+
+	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+	return mz->lru_zone_size[zone_idx][lru];
 }
 
 void mem_cgroup_handle_over_high(void);
@@ -671,6 +685,12 @@ mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru)
 {
 	return 0;
 }
+static inline
+unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
+		enum lru_list lru, int zone_idx)
+{
+	return 0;
+}
 
 static inline unsigned long
 mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,

struct vm_area_struct *vma); struct vm_area_struct *vma);
void unmap_mapping_range(struct address_space *mapping, void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen, int even_cows); loff_t const holebegin, loff_t const holelen, int even_cows);
int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp, int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
spinlock_t **ptlp); pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
int follow_pfn(struct vm_area_struct *vma, unsigned long address, int follow_pfn(struct vm_area_struct *vma, unsigned long address,
unsigned long *pfn); unsigned long *pfn);
int follow_phys(struct vm_area_struct *vma, unsigned long address, int follow_phys(struct vm_area_struct *vma, unsigned long address,

View File

@@ -39,7 +39,7 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 {
 	__update_lru_size(lruvec, lru, zid, nr_pages);
 #ifdef CONFIG_MEMCG
-	mem_cgroup_update_lru_size(lruvec, lru, nr_pages);
+	mem_cgroup_update_lru_size(lruvec, lru, zid, nr_pages);
 #endif
 }

@@ -2477,14 +2477,19 @@ static inline int skb_gro_header_hard(struct sk_buff *skb, unsigned int hlen)
 	return NAPI_GRO_CB(skb)->frag0_len < hlen;
 }
 
+static inline void skb_gro_frag0_invalidate(struct sk_buff *skb)
+{
+	NAPI_GRO_CB(skb)->frag0 = NULL;
+	NAPI_GRO_CB(skb)->frag0_len = 0;
+}
+
 static inline void *skb_gro_header_slow(struct sk_buff *skb, unsigned int hlen,
 					unsigned int offset)
 {
 	if (!pskb_may_pull(skb, hlen))
 		return NULL;
 
-	NAPI_GRO_CB(skb)->frag0 = NULL;
-	NAPI_GRO_CB(skb)->frag0_len = 0;
+	skb_gro_frag0_invalidate(skb);
 	return skb->data + offset;
 }

@@ -854,6 +854,16 @@ struct signal_struct {
 
 #define SIGNAL_UNKILLABLE	0x00000040 /* for init: ignore fatal signals */
 
+#define SIGNAL_STOP_MASK (SIGNAL_CLD_MASK | SIGNAL_STOP_STOPPED | \
+			  SIGNAL_STOP_CONTINUED)
+
+static inline void signal_set_stop_flags(struct signal_struct *sig,
+					 unsigned int flags)
+{
+	WARN_ON(sig->flags & (SIGNAL_GROUP_EXIT|SIGNAL_GROUP_COREDUMP));
+	sig->flags = (sig->flags & ~SIGNAL_STOP_MASK) | flags;
+}
+
 /* If true, all threads except ->group_exit_task have pending SIGKILL */
 static inline int signal_group_exit(const struct signal_struct *sig)
 {

@@ -2485,7 +2485,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
 
 static inline void skb_free_frag(void *addr)
 {
-	__free_page_frag(addr);
+	page_frag_free(addr);
 }
 
 void *napi_alloc_frag(unsigned int fragsz);

@@ -226,7 +226,7 @@ static inline const char *__check_heap_object(const void *ptr,
 * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
 */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif
@@ -239,7 +239,7 @@ static inline const char *__check_heap_object(const void *ptr,
 * be allocated from the same page.
 */
 #define KMALLOC_SHIFT_HIGH	PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX	30
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif

@@ -150,8 +150,9 @@ enum {
 	SWP_FILE	= (1 << 7),	/* set after swap_activate success */
 	SWP_AREA_DISCARD = (1 << 8),	/* single-time swap area discards */
 	SWP_PAGE_DISCARD = (1 << 9),	/* freed swap page-cluster discards */
+	SWP_STABLE_WRITES = (1 << 10),	/* no overwrite PG_writeback pages */
 					/* add others here before... */
-	SWP_SCANNING	= (1 << 10),	/* refcount in scan_swap_map */
+	SWP_SCANNING	= (1 << 11),	/* refcount in scan_swap_map */
 };
 
 #define SWAP_CLUSTER_MAX 32UL

@@ -8,23 +8,7 @@
 #ifndef _LINUX_TIMERFD_H
 #define _LINUX_TIMERFD_H
 
-/* For O_CLOEXEC and O_NONBLOCK */
-#include <linux/fcntl.h>
-
-/* For _IO helpers */
-#include <linux/ioctl.h>
-
-/*
- * CAREFUL: Check include/asm-generic/fcntl.h when defining
- * new flags, since they might collide with O_* ones. We want
- * to re-use O_* flags that couldn't possibly have a meaning
- * from eventfd, in order to leave a free define-space for
- * shared O_* flags.
- */
-#define TFD_TIMER_ABSTIME (1 << 0)
-#define TFD_TIMER_CANCEL_ON_SET (1 << 1)
-#define TFD_CLOEXEC O_CLOEXEC
-#define TFD_NONBLOCK O_NONBLOCK
+#include <uapi/linux/timerfd.h>
 
 #define TFD_SHARED_FCNTL_FLAGS (TFD_CLOEXEC | TFD_NONBLOCK)
 /* Flags for timerfd_create. */
@@ -32,6 +16,4 @@
 /* Flags for timerfd_settime. */
 #define TFD_SETTIME_FLAGS (TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET)
 
-#define TFD_IOC_SET_TICKS	_IOW('T', 0, u64)
-
 #endif /* _LINUX_TIMERFD_H */

@@ -47,8 +47,7 @@
 	{(unsigned long)__GFP_WRITE,		"__GFP_WRITE"},		\
 	{(unsigned long)__GFP_RECLAIM,		"__GFP_RECLAIM"},	\
 	{(unsigned long)__GFP_DIRECT_RECLAIM,	"__GFP_DIRECT_RECLAIM"},\
-	{(unsigned long)__GFP_KSWAPD_RECLAIM,	"__GFP_KSWAPD_RECLAIM"},\
-	{(unsigned long)__GFP_OTHER_NODE,	"__GFP_OTHER_NODE"}	\
+	{(unsigned long)__GFP_KSWAPD_RECLAIM,	"__GFP_KSWAPD_RECLAIM"}\
 
 #define show_gfp_flags(flags)						\
 	(flags) ? __print_flags(flags, "|",				\

@@ -414,6 +414,7 @@ header-y += telephony.h
 header-y += termios.h
 header-y += thermal.h
 header-y += time.h
+header-y += timerfd.h
 header-y += times.h
 header-y += timex.h
 header-y += tiocl.h

@@ -0,0 +1,36 @@
+/*
+ *  include/linux/timerfd.h
+ *
+ *  Copyright (C) 2007  Davide Libenzi <davidel@xmailserver.org>
+ *
+ */
+
+#ifndef _UAPI_LINUX_TIMERFD_H
+#define _UAPI_LINUX_TIMERFD_H
+
+#include <linux/types.h>
+
+/* For O_CLOEXEC and O_NONBLOCK */
+#include <linux/fcntl.h>
+
+/* For _IO helpers */
+#include <linux/ioctl.h>
+
+/*
+ * CAREFUL: Check include/asm-generic/fcntl.h when defining
+ * new flags, since they might collide with O_* ones. We want
+ * to re-use O_* flags that couldn't possibly have a meaning
+ * from eventfd, in order to leave a free define-space for
+ * shared O_* flags.
+ *
+ * Also make sure to update the masks in include/linux/timerfd.h
+ * when adding new flags.
+ */
+#define TFD_TIMER_ABSTIME (1 << 0)
+#define TFD_TIMER_CANCEL_ON_SET (1 << 1)
+#define TFD_CLOEXEC O_CLOEXEC
+#define TFD_NONBLOCK O_NONBLOCK
+
+#define TFD_IOC_SET_TICKS	_IOW('T', 0, __u64)
+
+#endif /* _UAPI_LINUX_TIMERFD_H */

@@ -1176,6 +1176,10 @@ config CGROUP_DEBUG
 
 	  Say N.
 
+config SOCK_CGROUP_DATA
+	bool
+	default n
+
 endif # CGROUPS
 
 config CHECKPOINT_RESTORE

@@ -1977,7 +1977,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	}
 
 	rcu_read_lock();
-	sem_lock(sma, sops, nsops);
+	locknum = sem_lock(sma, sops, nsops);
 
 	if (!ipc_valid_object(&sma->sem_perm))
 		goto out_unlock_free;

@@ -56,7 +56,7 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 	    attr->value_size == 0 || attr->map_flags)
 		return ERR_PTR(-EINVAL);
 
-	if (attr->value_size >= 1 << (KMALLOC_SHIFT_MAX - 1))
+	if (attr->value_size > KMALLOC_MAX_SIZE)
 		/* if value_size is bigger, the user space won't be able to
 		 * access the elements.
 		 */

@@ -274,7 +274,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 		 */
 		goto free_htab;
 
-	if (htab->map.value_size >= (1 << (KMALLOC_SHIFT_MAX - 1)) -
+	if (htab->map.value_size >= KMALLOC_MAX_SIZE -
 	    MAX_BPF_STACK - sizeof(struct htab_elem))
 		/* if value_size is bigger, the user space won't be able to
 		 * access the elements via bpf syscall. This check also makes

@@ -246,7 +246,9 @@ static void devm_memremap_pages_release(struct device *dev, void *data)
 	/* pages are dead and unused, undo the arch mapping */
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(resource_size(res), SECTION_SIZE);
+	mem_hotplug_begin();
 	arch_remove_memory(align_start, align_size);
+	mem_hotplug_done();
 	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
 	pgmap_radix_release(res);
 	dev_WARN_ONCE(dev, pgmap->altmap && pgmap->altmap->alloc,
@@ -358,7 +360,9 @@ void *devm_memremap_pages(struct device *dev, struct resource *res,
 	if (error)
 		goto err_pfn_remap;
 
+	mem_hotplug_begin();
 	error = arch_add_memory(nid, align_start, align_size, true);
+	mem_hotplug_done();
 	if (error)
 		goto err_add_memory;

@@ -346,7 +346,7 @@ static bool task_participate_group_stop(struct task_struct *task)
 	 * fresh group stop.  Read comment in do_signal_stop() for details.
 	 */
 	if (!sig->group_stop_count && !(sig->flags & SIGNAL_STOP_STOPPED)) {
-		sig->flags = SIGNAL_STOP_STOPPED;
+		signal_set_stop_flags(sig, SIGNAL_STOP_STOPPED);
 		return true;
 	}
 	return false;
@@ -843,7 +843,7 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force)
 			 * will take ->siglock, notice SIGNAL_CLD_MASK, and
 			 * notify its parent. See get_signal_to_deliver().
 			 */
-			signal->flags = why | SIGNAL_STOP_CONTINUED;
+			signal_set_stop_flags(signal, why | SIGNAL_STOP_CONTINUED);
 			signal->group_stop_count = 0;
 			signal->group_exit_code = 0;
 		}

@@ -164,7 +164,7 @@ config DEBUG_INFO_REDUCED
 
 config DEBUG_INFO_SPLIT
 	bool "Produce split debuginfo in .dwo files"
-	depends on DEBUG_INFO
+	depends on DEBUG_INFO && !FRV
 	help
 	  Generate debug info into separate .dwo files. This significantly
 	  reduces the build directory size for builds with DEBUG_INFO,

@@ -138,7 +138,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
 				dax_radix_locked_entry(0, RADIX_DAX_EMPTY));
 			/* Wakeup waiters for exceptional entry lock */
 			dax_wake_mapping_entry_waiter(mapping, page->index, p,
-						      false);
+						      true);
 		}
 	}
 	__radix_tree_replace(&mapping->page_tree, node, slot, page,

@@ -883,15 +883,17 @@ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
 {
 	pmd_t entry;
 	unsigned long haddr;
+	bool write = vmf->flags & FAULT_FLAG_WRITE;
 
 	vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
 	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
 		goto unlock;
 
 	entry = pmd_mkyoung(orig_pmd);
+	if (write)
+		entry = pmd_mkdirty(entry);
 	haddr = vmf->address & HPAGE_PMD_MASK;
-	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry,
-				vmf->flags & FAULT_FLAG_WRITE))
+	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry, write))
 		update_mmu_cache_pmd(vmf->vma, vmf->address, vmf->pmd);
 
 unlock:
@@ -919,8 +921,7 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
 	}
 
 	for (i = 0; i < HPAGE_PMD_NR; i++) {
-		pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE |
-					       __GFP_OTHER_NODE, vma,
+		pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma,
 					       vmf->address, page_to_nid(page));
 		if (unlikely(!pages[i] ||
 			     mem_cgroup_try_charge(pages[i], vma->vm_mm,

@@ -1773,23 +1773,32 @@ free:
 }
 
 /*
- * When releasing a hugetlb pool reservation, any surplus pages that were
- * allocated to satisfy the reservation must be explicitly freed if they were
- * never used.
- * Called with hugetlb_lock held.
+ * This routine has two main purposes:
+ * 1) Decrement the reservation count (resv_huge_pages) by the value passed
+ *    in unused_resv_pages.  This corresponds to the prior adjustments made
+ *    to the associated reservation map.
+ * 2) Free any unused surplus pages that may have been allocated to satisfy
+ *    the reservation.  As many as unused_resv_pages may be freed.
+ *
+ * Called with hugetlb_lock held.  However, the lock could be dropped (and
+ * reacquired) during calls to cond_resched_lock.  Whenever dropping the lock,
+ * we must make sure nobody else can claim pages we are in the process of
+ * freeing.  Do this by ensuring resv_huge_page always is greater than the
+ * number of huge pages we plan to free when dropping the lock.
 */
 static void return_unused_surplus_pages(struct hstate *h,
 					unsigned long unused_resv_pages)
 {
 	unsigned long nr_pages;
 
-	/* Uncommit the reservation */
-	h->resv_huge_pages -= unused_resv_pages;
-
 	/* Cannot return gigantic pages currently */
 	if (hstate_is_gigantic(h))
-		return;
+		goto out;
 
+	/*
+	 * Part (or even all) of the reservation could have been backed
+	 * by pre-allocated pages. Only free surplus pages.
+	 */
 	nr_pages = min(unused_resv_pages, h->surplus_huge_pages);
 
 	/*
@@ -1799,12 +1808,22 @@ static void return_unused_surplus_pages(struct hstate *h,
 	 * when the nodes with surplus pages have no free pages.
 	 * free_pool_huge_page() will balance the the freed pages across the
 	 * on-line nodes with memory and will handle the hstate accounting.
+	 *
+	 * Note that we decrement resv_huge_pages as we free the pages.  If
+	 * we drop the lock, resv_huge_pages will still be sufficiently large
+	 * to cover subsequent pages we may free.
 	 */
 	while (nr_pages--) {
+		h->resv_huge_pages--;
+		unused_resv_pages--;
 		if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
-			break;
+			goto out;
 		cond_resched_lock(&hugetlb_lock);
 	}
+
+out:
+	/* Fully uncommit the reservation */
+	h->resv_huge_pages -= unused_resv_pages;
 }

@@ -943,7 +943,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
 	/* Only allocate from the target node */
-	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_OTHER_NODE | __GFP_THISNODE;
+	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
 
 	/*
 	 * Before allocating the hugepage, release the mmap_sem read lock.
@@ -1242,7 +1242,6 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 	struct vm_area_struct *vma;
 	unsigned long addr;
 	pmd_t *pmd, _pmd;
-	bool deposited = false;
 
 	i_mmap_lock_write(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
@@ -1267,26 +1266,10 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 			spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);
 			/* assume page table is clear */
 			_pmd = pmdp_collapse_flush(vma, addr, pmd);
-			/*
-			 * now deposit the pgtable for arch that need it
-			 * otherwise free it.
-			 */
-			if (arch_needs_pgtable_deposit()) {
-				/*
-				 * The deposit should be visibile only after
-				 * collapse is seen by others.
-				 */
-				smp_wmb();
-				pgtable_trans_huge_deposit(vma->vm_mm, pmd,
-							   pmd_pgtable(_pmd));
-				deposited = true;
-			}
 			spin_unlock(ptl);
 			up_write(&vma->vm_mm->mmap_sem);
-			if (!deposited) {
-				atomic_long_dec(&vma->vm_mm->nr_ptes);
-				pte_free(vma->vm_mm, pmd_pgtable(_pmd));
-			}
+			atomic_long_dec(&vma->vm_mm->nr_ptes);
+			pte_free(vma->vm_mm, pmd_pgtable(_pmd));
 		}
 	}
 	i_mmap_unlock_write(mapping);
@@ -1326,8 +1309,7 @@ static void collapse_shmem(struct mm_struct *mm,
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
 	/* Only allocate from the target node */
-	gfp = alloc_hugepage_khugepaged_gfpmask() |
-		__GFP_OTHER_NODE | __GFP_THISNODE;
+	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
 
 	new_page = khugepaged_alloc_page(hpage, gfp, node);
 	if (!new_page) {

@@ -625,8 +625,8 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
 unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
 					   int nid, unsigned int lru_mask)
 {
+	struct lruvec *lruvec = mem_cgroup_lruvec(NODE_DATA(nid), memcg);
 	unsigned long nr = 0;
-	struct mem_cgroup_per_node *mz;
 	enum lru_list lru;
 
 	VM_BUG_ON((unsigned)nid >= nr_node_ids);
@@ -634,8 +634,7 @@ unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
 	for_each_lru(lru) {
 		if (!(BIT(lru) & lru_mask))
 			continue;
-		mz = mem_cgroup_nodeinfo(memcg, nid);
-		nr += mz->lru_size[lru];
+		nr += mem_cgroup_get_lru_size(lruvec, lru);
 	}
 	return nr;
 }
@@ -1002,6 +1001,7 @@ out:
 * mem_cgroup_update_lru_size - account for adding or removing an lru page
 * @lruvec: mem_cgroup per zone lru vector
 * @lru: index of lru list the page is sitting on
+ * @zid: zone id of the accounted pages
 * @nr_pages: positive when adding or negative when removing
 *
 * This function must be called under lru_lock, just before a page is added
@@ -1009,27 +1009,25 @@ out:
 * so as to allow it to check that lru_size 0 is consistent with list_empty).
 */
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-				int nr_pages)
+				int zid, int nr_pages)
 {
 	struct mem_cgroup_per_node *mz;
 	unsigned long *lru_size;
 	long size;
-	bool empty;
 
 	if (mem_cgroup_disabled())
 		return;
 
 	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-	lru_size = mz->lru_size + lru;
-	empty = list_empty(lruvec->lists + lru);
+	lru_size = &mz->lru_zone_size[zid][lru];
 
 	if (nr_pages < 0)
 		*lru_size += nr_pages;
 
 	size = *lru_size;
-	if (WARN_ONCE(size < 0 || empty != !size,
-		"%s(%p, %d, %d): lru_size %ld but %sempty\n",
-		__func__, lruvec, lru, nr_pages, size, empty ? "" : "not ")) {
+	if (WARN_ONCE(size < 0,
+		"%s(%p, %d, %d): lru_size %ld\n",
+		__func__, lruvec, lru, nr_pages, size)) {
 		VM_BUG_ON(1);
 		*lru_size = 0;
 	}

@@ -3772,8 +3772,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-static int __follow_pte(struct mm_struct *mm, unsigned long address,
-		pte_t **ptepp, spinlock_t **ptlp)
+static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	pud_t *pud;
@@ -3790,11 +3790,20 @@ static int __follow_pte(struct mm_struct *mm, unsigned long address,
 	pmd = pmd_offset(pud, address);
 	VM_BUG_ON(pmd_trans_huge(*pmd));
-	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
-		goto out;
 
-	/* We cannot handle huge page PFN maps. Luckily they don't exist. */
-	if (pmd_huge(*pmd))
+	if (pmd_huge(*pmd)) {
+		if (!pmdpp)
+			goto out;
+
+		*ptlp = pmd_lock(mm, pmd);
+		if (pmd_huge(*pmd)) {
+			*pmdpp = pmd;
+			return 0;
+		}
+		spin_unlock(*ptlp);
+	}
+
+	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
@@ -3810,17 +3819,31 @@ out:
 	return -EINVAL;
 }
 
-int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
-	       spinlock_t **ptlp)
+static inline int follow_pte(struct mm_struct *mm, unsigned long address,
+			     pte_t **ptepp, spinlock_t **ptlp)
 {
 	int res;
 
 	/* (void) is needed to make gcc happy */
 	(void) __cond_lock(*ptlp,
-			   !(res = __follow_pte(mm, address, ptepp, ptlp)));
+			   !(res = __follow_pte_pmd(mm, address, ptepp, NULL,
+					   ptlp)));
+	return res;
+}
+
+int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+{
+	int res;
+
+	/* (void) is needed to make gcc happy */
+	(void) __cond_lock(*ptlp,
+			   !(res = __follow_pte_pmd(mm, address, ptepp, pmdpp,
+					   ptlp)));
 	return res;
 }
+EXPORT_SYMBOL(follow_pte_pmd);
 
 /**
 * follow_pfn - look up PFN at a user virtual address
 * @vma: memory mapping

@@ -1864,14 +1864,14 @@ int move_freepages(struct zone *zone,
 #endif
 
 	for (page = start_page; page <= end_page;) {
-		/* Make sure we are not inadvertently changing nodes */
-		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
-
 		if (!pfn_valid_within(page_to_pfn(page))) {
 			page++;
 			continue;
 		}
 
+		/* Make sure we are not inadvertently changing nodes */
+		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
+
 		if (!PageBuddy(page)) {
 			page++;
 			continue;
@@ -2583,30 +2583,22 @@ int __isolate_free_page(struct page *page, unsigned int order)
 * Update NUMA hit/miss statistics
 *
 * Must be called with interrupts disabled.
- *
- * When __GFP_OTHER_NODE is set assume the node of the preferred
- * zone is the local node. This is useful for daemons who allocate
- * memory on behalf of other processes.
 */
-static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
-								gfp_t flags)
+static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 {
 #ifdef CONFIG_NUMA
-	int local_nid = numa_node_id();
 	enum zone_stat_item local_stat = NUMA_LOCAL;
 
-	if (unlikely(flags & __GFP_OTHER_NODE)) {
+	if (z->node != numa_node_id())
 		local_stat = NUMA_OTHER;
-		local_nid = preferred_zone->node;
-	}
 
-	if (z->node == local_nid) {
+	if (z->node == preferred_zone->node)
 		__inc_zone_state(z, NUMA_HIT);
-		__inc_zone_state(z, local_stat);
-	} else {
+	else {
 		__inc_zone_state(z, NUMA_MISS);
 		__inc_zone_state(preferred_zone, NUMA_FOREIGN);
 	}
+	__inc_zone_state(z, local_stat);
 #endif
 }
@@ -2674,7 +2666,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 	}
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-	zone_statistics(preferred_zone, zone, gfp_flags);
+	zone_statistics(preferred_zone, zone);
 	local_irq_restore(flags);
 
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
@@ -3904,8 +3896,8 @@ EXPORT_SYMBOL(free_pages);
 * drivers to provide a backing region of memory for use as either an
 * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
 */
-static struct page *__page_frag_refill(struct page_frag_cache *nc,
-				       gfp_t gfp_mask)
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
 {
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
@@ -3925,22 +3917,23 @@ static struct page *__page_frag_refill(struct page_frag_cache *nc,
 	return page;
 }
 
-void __page_frag_drain(struct page *page, unsigned int order,
-		       unsigned int count)
+void __page_frag_cache_drain(struct page *page, unsigned int count)
 {
 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
 	if (page_ref_sub_and_test(page, count)) {
+		unsigned int order = compound_order(page);
+
 		if (order == 0)
 			free_hot_cold_page(page, false);
 		else
 			__free_pages_ok(page, order);
 	}
 }
-EXPORT_SYMBOL(__page_frag_drain);
+EXPORT_SYMBOL(__page_frag_cache_drain);
 
-void *__alloc_page_frag(struct page_frag_cache *nc,
-			unsigned int fragsz, gfp_t gfp_mask)
+void *page_frag_alloc(struct page_frag_cache *nc,
+		      unsigned int fragsz, gfp_t gfp_mask)
 {
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
@@ -3948,7 +3941,7 @@ void *__alloc_page_frag(struct page_frag_cache *nc,
 	if (unlikely(!nc->va)) {
 refill:
-		page = __page_frag_refill(nc, gfp_mask);
+		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
@@ -3991,19 +3984,19 @@ refill:
 	return nc->va + offset;
 }
-EXPORT_SYMBOL(__alloc_page_frag);
+EXPORT_SYMBOL(page_frag_alloc);
 
 /*
 * Frees a page fragment allocated out of either a compound or order 0 page.
 */
-void __free_page_frag(void *addr)
+void page_frag_free(void *addr)
 {
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
 		__free_pages_ok(page, compound_order(page));
 }
-EXPORT_SYMBOL(__free_page_frag);
+EXPORT_SYMBOL(page_frag_free);
 
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
 			      size_t size)

@@ -2457,7 +2457,6 @@ union freelist_init_state {
 		unsigned int pos;
 		unsigned int *list;
 		unsigned int count;
-		unsigned int rand;
 	};
 	struct rnd_state rnd_state;
 };
@@ -2483,8 +2482,7 @@ static bool freelist_state_initialize(union freelist_init_state *state,
 	} else {
 		state->list = cachep->random_seq;
 		state->count = count;
-		state->pos = 0;
-		state->rand = rand;
+		state->pos = rand % count;
 		ret = true;
 	}
 	return ret;
@@ -2493,7 +2491,9 @@ static bool freelist_state_initialize(union freelist_init_state *state,
 /* Get the next entry on the list and randomize it using a random shift */
 static freelist_idx_t next_random_slot(union freelist_init_state *state)
 {
-	return (state->list[state->pos++] + state->rand) % state->count;
+	if (state->pos >= state->count)
+		state->pos = 0;
+	return state->list[state->pos++];
 }
 
 /* Swap two freelist entries */

@@ -943,11 +943,25 @@ bool reuse_swap_page(struct page *page, int *total_mapcount)
 	count = page_trans_huge_mapcount(page, total_mapcount);
 	if (count <= 1 && PageSwapCache(page)) {
 		count += page_swapcount(page);
-		if (count == 1 && !PageWriteback(page)) {
+		if (count != 1)
+			goto out;
+		if (!PageWriteback(page)) {
 			delete_from_swap_cache(page);
 			SetPageDirty(page);
+		} else {
+			swp_entry_t entry;
+			struct swap_info_struct *p;
+
+			entry.val = page_private(page);
+			p = swap_info_get(entry);
+			if (p->flags & SWP_STABLE_WRITES) {
+				spin_unlock(&p->lock);
+				return false;
+			}
+			spin_unlock(&p->lock);
 		}
 	}
+out:
 	return count <= 1;
 }
@@ -2448,6 +2462,10 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		error = -ENOMEM;
 		goto bad_swap;
 	}
+
+	if (bdi_cap_stable_pages_required(inode_to_bdi(inode)))
+		p->flags |= SWP_STABLE_WRITES;
+
 	if (p->bdev && blk_queue_nonrot(bdev_get_queue(p->bdev))) {
 		int cpu;

@@ -242,6 +242,16 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru)
 	return node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
 }
 
+unsigned long lruvec_zone_lru_size(struct lruvec *lruvec, enum lru_list lru,
+				   int zone_idx)
+{
+	if (!mem_cgroup_disabled())
+		return mem_cgroup_get_zone_lru_size(lruvec, lru, zone_idx);
+
+	return zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zone_idx],
+			       NR_ZONE_LRU_BASE + lru);
+}
+
 /*
  * Add a shrinker callback to be called from the vm.
  */
@@ -1382,8 +1392,7 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
  * be complete before mem_cgroup_update_lru_size due to a santity check.
  */
 static __always_inline void update_lru_sizes(struct lruvec *lruvec,
-			enum lru_list lru, unsigned long *nr_zone_taken,
-			unsigned long nr_taken)
+			enum lru_list lru, unsigned long *nr_zone_taken)
 {
 	int zid;
@@ -1392,11 +1401,11 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 			continue;
 
 		__update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]);
+#ifdef CONFIG_MEMCG
+		mem_cgroup_update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]);
+#endif
 	}
-
-#ifdef CONFIG_MEMCG
-	mem_cgroup_update_lru_size(lruvec, lru, -nr_taken);
-#endif
 }
 
 /*
@@ -1501,7 +1510,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	*nr_scanned = scan;
 	trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, nr_to_scan, scan,
 				    nr_taken, mode, is_file_lru(lru));
-	update_lru_sizes(lruvec, lru, nr_zone_taken, nr_taken);
+	update_lru_sizes(lruvec, lru, nr_zone_taken);
 	return nr_taken;
 }
@@ -2047,10 +2056,8 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
 		if (!managed_zone(zone))
 			continue;
 
-		inactive_zone = zone_page_state(zone,
-				NR_ZONE_LRU_BASE + (file * LRU_FILE));
-		active_zone = zone_page_state(zone,
-				NR_ZONE_LRU_BASE + (file * LRU_FILE) + LRU_ACTIVE);
+		inactive_zone = lruvec_zone_lru_size(lruvec, file * LRU_FILE, zid);
+		active_zone = lruvec_zone_lru_size(lruvec, (file * LRU_FILE) + LRU_ACTIVE, zid);
 
 		inactive -= min(inactive, inactive_zone);
 		active -= min(active, active_zone);


@@ -259,10 +259,6 @@ config XPS
 config HWBM
 	bool
 
-config SOCK_CGROUP_DATA
-	bool
-	default n
-
 config CGROUP_NET_PRIO
 	bool "Network priority cgroup"
 	depends on CGROUPS


@@ -4437,7 +4437,9 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
 	    pinfo->nr_frags &&
 	    !PageHighMem(skb_frag_page(frag0))) {
 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
-		NAPI_GRO_CB(skb)->frag0_len = skb_frag_size(frag0);
+		NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
+						    skb_frag_size(frag0),
+						    skb->end - skb->tail);
 	}
 }
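The min_t() clamp caps frag0_len at the skb's remaining tailroom (skb->end - skb->tail), presumably so the frag0 fast path never claims more contiguous bytes than the head can back and later pulls drop to the slow path instead of reading out of bounds. For reference, a minimal userspace sketch of the typed-minimum idiom (the kernel's min_t() additionally avoids double evaluation):

    #include <stdio.h>

    /* compare both operands in one explicitly chosen type */
    #define min_t(type, x, y) ((type)(x) < (type)(y) ? (type)(x) : (type)(y))

    int main(void)
    {
        unsigned int frag_size = 4096; /* e.g. skb_frag_size(frag0) */
        long tailroom = 1500;          /* e.g. skb->end - skb->tail */

        printf("%u\n", min_t(unsigned int, frag_size, tailroom)); /* 1500 */
        return 0;
    }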


@@ -67,8 +67,8 @@ EXPORT_SYMBOL(skb_flow_dissector_init);
  * The function will try to retrieve a be32 entity at
  * offset poff
  */
-__be16 skb_flow_get_be16(const struct sk_buff *skb, int poff, void *data,
-			 int hlen)
+static __be16 skb_flow_get_be16(const struct sk_buff *skb, int poff,
+				void *data, int hlen)
 {
 	__be16 *u, _u;


@@ -369,7 +369,7 @@ static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
 	local_irq_save(flags);
 	nc = this_cpu_ptr(&netdev_alloc_cache);
-	data = __alloc_page_frag(nc, fragsz, gfp_mask);
+	data = page_frag_alloc(nc, fragsz, gfp_mask);
 	local_irq_restore(flags);
 	return data;
 }
@@ -391,7 +391,7 @@ static void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
-	return __alloc_page_frag(&nc->page, fragsz, gfp_mask);
+	return page_frag_alloc(&nc->page, fragsz, gfp_mask);
 }
 
 void *napi_alloc_frag(unsigned int fragsz)
@@ -441,7 +441,7 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 	local_irq_save(flags);
 	nc = this_cpu_ptr(&netdev_alloc_cache);
-	data = __alloc_page_frag(nc, len, gfp_mask);
+	data = page_frag_alloc(nc, len, gfp_mask);
 	pfmemalloc = nc->pfmemalloc;
 	local_irq_restore(flags);
@@ -505,7 +505,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 	if (sk_memalloc_socks())
 		gfp_mask |= __GFP_MEMALLOC;
 
-	data = __alloc_page_frag(&nc->page, len, gfp_mask);
+	data = page_frag_alloc(&nc->page, len, gfp_mask);
 	if (unlikely(!data))
 		return NULL;
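These four skbuff.c hunks are call-site renames from __alloc_page_frag() to page_frag_alloc(), the fragment allocator becoming a public API rather than an skbuff-internal helper. A hypothetical kernel-style sketch of an out-of-stack caller, assuming the companion page_frag_free() rename from the same series (the demo_* names are invented for illustration):

    #include <linux/gfp.h>
    #include <linux/mm_types.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(struct page_frag_cache, demo_frag_cache);

    /* Carve a small buffer out of the per-CPU fragment cache; as with
     * the netdev/napi caches above, the caller must provide CPU
     * pinning and IRQ exclusion around the cache. */
    static void *demo_buf_alloc(unsigned int size)
    {
        struct page_frag_cache *nc = this_cpu_ptr(&demo_frag_cache);

        return page_frag_alloc(nc, size, GFP_ATOMIC);
    }

    static void demo_buf_free(void *buf)
    {
        page_frag_free(buf); /* drops the fragment's page reference */
    }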


@@ -222,7 +222,7 @@ static const char *const af_family_key_strings[AF_MAX+1] = {
   "sk_lock-AF_RXRPC" , "sk_lock-AF_ISDN" , "sk_lock-AF_PHONET" ,
   "sk_lock-AF_IEEE802154", "sk_lock-AF_CAIF" , "sk_lock-AF_ALG" ,
   "sk_lock-AF_NFC" , "sk_lock-AF_VSOCK" , "sk_lock-AF_KCM" ,
-  "sk_lock-AF_SMC" , "sk_lock-AF_MAX"
+  "sk_lock-AF_QIPCRTR", "sk_lock-AF_SMC" , "sk_lock-AF_MAX"
 };
 static const char *const af_family_slock_key_strings[AF_MAX+1] = {
   "slock-AF_UNSPEC", "slock-AF_UNIX" , "slock-AF_INET" ,
@@ -239,7 +239,7 @@ static const char *const af_family_slock_key_strings[AF_MAX+1] = {
   "slock-AF_RXRPC" , "slock-AF_ISDN" , "slock-AF_PHONET" ,
   "slock-AF_IEEE802154", "slock-AF_CAIF" , "slock-AF_ALG" ,
   "slock-AF_NFC" , "slock-AF_VSOCK" ,"slock-AF_KCM" ,
-  "slock-AF_SMC" , "slock-AF_MAX"
+  "slock-AF_QIPCRTR", "slock-AF_SMC" , "slock-AF_MAX"
 };
 static const char *const af_family_clock_key_strings[AF_MAX+1] = {
   "clock-AF_UNSPEC", "clock-AF_UNIX" , "clock-AF_INET" ,
@@ -256,7 +256,7 @@ static const char *const af_family_clock_key_strings[AF_MAX+1] = {
   "clock-AF_RXRPC" , "clock-AF_ISDN" , "clock-AF_PHONET" ,
   "clock-AF_IEEE802154", "clock-AF_CAIF" , "clock-AF_ALG" ,
   "clock-AF_NFC" , "clock-AF_VSOCK" , "clock-AF_KCM" ,
-  "closck-AF_smc" , "clock-AF_MAX"
+  "clock-AF_QIPCRTR", "closck-AF_smc" , "clock-AF_MAX"
 };
 
 /*
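The three sock.c hunks insert the AF_QIPCRTR entry at the same position in each of the parallel lockdep name tables (the pre-existing "closck-AF_smc" string is carried through unchanged by this patch). Tables like these must stay in lockstep with the AF_* enum; a userspace sketch of the usual compile-time guard, with stand-in names rather than the kernel's:

    #include <stdio.h>

    enum { DEMO_AF_UNSPEC, DEMO_AF_UNIX, DEMO_AF_QIPCRTR, DEMO_AF_SMC, DEMO_AF_MAX };

    static const char *const demo_key_strings[] = {
        "sk_lock-AF_UNSPEC", "sk_lock-AF_UNIX",
        "sk_lock-AF_QIPCRTR", "sk_lock-AF_SMC",
        "sk_lock-AF_MAX", /* sentinel */
    };

    /* breaks the build if the table and the enum drift apart */
    _Static_assert(sizeof(demo_key_strings) / sizeof(demo_key_strings[0]) ==
                   DEMO_AF_MAX + 1, "name table out of sync with families");

    int main(void)
    {
        printf("%s\n", demo_key_strings[DEMO_AF_SMC]);
        return 0;
    }

Designated initializers ([DEMO_AF_SMC] = "...") would pin each string to its index even more robustly.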


@@ -378,9 +378,11 @@ static int dsa_dst_apply(struct dsa_switch_tree *dst)
 			return err;
 	}
 
-	err = dsa_cpu_port_ethtool_setup(dst->ds[0]);
-	if (err)
-		return err;
+	if (dst->ds[0]) {
+		err = dsa_cpu_port_ethtool_setup(dst->ds[0]);
+		if (err)
+			return err;
+	}
 
 	/* If we use a tagging format that doesn't have an ethertype
 	 * field, make sure that all packets from this point on get
@@ -417,7 +419,8 @@ static void dsa_dst_unapply(struct dsa_switch_tree *dst)
 		dsa_ds_unapply(dst, ds);
 	}
 
-	dsa_cpu_port_ethtool_restore(dst->ds[0]);
+	if (dst->ds[0])
+		dsa_cpu_port_ethtool_restore(dst->ds[0]);
 
 	pr_info("DSA: tree %d unapplied\n", dst->tree);
 	dst->applied = false;


@@ -1618,8 +1618,13 @@ void fib_select_multipath(struct fib_result *res, int hash)
 void fib_select_path(struct net *net, struct fib_result *res,
 		     struct flowi4 *fl4, int mp_hash)
 {
+	bool oif_check;
+
+	oif_check = (fl4->flowi4_oif == 0 ||
+		     fl4->flowi4_flags & FLOWI_FLAG_SKIP_NH_OIF);
+
 #ifdef CONFIG_IP_ROUTE_MULTIPATH
-	if (res->fi->fib_nhs > 1 && fl4->flowi4_oif == 0) {
+	if (res->fi->fib_nhs > 1 && oif_check) {
 		if (mp_hash < 0)
 			mp_hash = get_hash_from_flowi4(fl4) >> 1;
@@ -1629,7 +1634,7 @@ void fib_select_path(struct net *net, struct fib_result *res,
 #endif
 	if (!res->prefixlen &&
 	    res->table->tb_num_default > 1 &&
-	    res->type == RTN_UNICAST && !fl4->flowi4_oif)
+	    res->type == RTN_UNICAST && oif_check)
 		fib_select_default(fl4, res);
 
 	if (!fl4->saddr)


@@ -930,7 +930,7 @@ static struct ctl_table ipv4_net_table[] = {
 		.data		= &init_net.ipv4.sysctl_tcp_notsent_lowat,
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_douintvec,
 	},
 	{
 		.procname	= "tcp_tw_reuse",
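tcp_notsent_lowat is an unsigned int whose legal values can exceed INT_MAX, so the handler is switched from the signed proc_dointvec to proc_douintvec, which parses and range-checks the value as unsigned. A userspace illustration of why the signed path is wrong for such values (demo only, not the sysctl code):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *input = "3000000000"; /* fits unsigned int, not int */

        int as_int = (int)strtol(input, NULL, 10);             /* mangled */
        unsigned int as_uint = (unsigned int)strtoul(input, NULL, 10);

        printf("signed parse:   %d\n", as_int);  /* typically negative */
        printf("unsigned parse: %u\n", as_uint); /* 3000000000 */
        return 0;
    }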


@@ -606,7 +606,6 @@ bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst,
 	return ret;
 }
-EXPORT_SYMBOL_GPL(tcp_peer_is_proven);
 
 void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry *dst)
 {


@@ -191,6 +191,7 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
 	ops = rcu_dereference(inet6_offloads[proto]);
 	if (!ops || !ops->callbacks.gro_receive) {
 		__pskb_pull(skb, skb_gro_offset(skb));
+		skb_gro_frag0_invalidate(skb);
 		proto = ipv6_gso_pull_exthdrs(skb, proto);
 		skb_gro_pull(skb, -skb_transport_offset(skb));
 		skb_reset_transport_header(skb);


@@ -1464,7 +1464,7 @@ static struct rt6_info *__ip6_route_redirect(struct net *net,
 	struct fib6_node *fn;
 
 	/* Get the "current" route for this destination and
-	 * check if the redirect has come from approriate router.
+	 * check if the redirect has come from appropriate router.
 	 *
 	 * RFC 4861 specifies that redirects should only be
 	 * accepted if they come from the nexthop to the target.
@@ -2768,7 +2768,7 @@ static int rt6_mtu_change_route(struct rt6_info *rt, void *p_arg)
 	   old MTU is the lowest MTU in the path, update the route PMTU
 	   to reflect the increase. In this case if the other nodes' MTU
 	   also have the lowest MTU, TOO BIG MESSAGE will be lead to
-	   PMTU discouvery.
+	   PMTU discovery.
 	 */
 	if (rt->dst.dev == arg->dev &&
 	    dst_metric_raw(&rt->dst, RTAX_MTU) &&


@@ -1044,7 +1044,8 @@ static int iucv_sock_sendmsg(struct socket *sock, struct msghdr *msg,
 {
 	struct sock *sk = sock->sk;
 	struct iucv_sock *iucv = iucv_sk(sk);
-	size_t headroom, linear;
+	size_t headroom = 0;
+	size_t linear;
 	struct sk_buff *skb;
 	struct iucv_message txmsg = {0};
 	struct cmsghdr *cmsg;
@@ -1122,18 +1123,20 @@ static int iucv_sock_sendmsg(struct socket *sock, struct msghdr *msg,
 	 * this is fine for SOCK_SEQPACKET (unless we want to support
 	 * segmented records using the MSG_EOR flag), but
 	 * for SOCK_STREAM we might want to improve it in future */
-	headroom = (iucv->transport == AF_IUCV_TRANS_HIPER)
-		   ? sizeof(struct af_iucv_trans_hdr) + ETH_HLEN : 0;
-	if (headroom + len < PAGE_SIZE) {
+	if (iucv->transport == AF_IUCV_TRANS_HIPER) {
+		headroom = sizeof(struct af_iucv_trans_hdr) + ETH_HLEN;
 		linear = len;
 	} else {
-		/* In nonlinear "classic" iucv skb,
-		 * reserve space for iucv_array
-		 */
-		if (iucv->transport != AF_IUCV_TRANS_HIPER)
-			headroom += sizeof(struct iucv_array) *
-				    (MAX_SKB_FRAGS + 1);
-		linear = PAGE_SIZE - headroom;
+		if (len < PAGE_SIZE) {
+			linear = len;
+		} else {
+			/* In nonlinear "classic" iucv skb,
+			 * reserve space for iucv_array
+			 */
+			headroom = sizeof(struct iucv_array) *
+				   (MAX_SKB_FRAGS + 1);
+			linear = PAGE_SIZE - headroom;
+		}
 	}
 	skb = sock_alloc_send_pskb(sk, headroom + linear, len - linear,
 				   noblock, &err, 0);


@@ -252,7 +252,7 @@ static struct sk_buff *qrtr_alloc_resume_tx(u32 src_node,
 	const int pkt_len = 20;
 	struct qrtr_hdr *hdr;
 	struct sk_buff *skb;
-	u32 *buf;
+	__le32 *buf;
 
 	skb = alloc_skb(QRTR_HDR_SIZE + pkt_len, GFP_KERNEL);
 	if (!skb)
@@ -269,7 +269,7 @@ static struct sk_buff *qrtr_alloc_resume_tx(u32 src_node,
 	hdr->dst_node_id = cpu_to_le32(dst_node);
 	hdr->dst_port_id = cpu_to_le32(QRTR_PORT_CTRL);
 
-	buf = (u32 *)skb_put(skb, pkt_len);
+	buf = (__le32 *)skb_put(skb, pkt_len);
 	memset(buf, 0, pkt_len);
 	buf[0] = cpu_to_le32(QRTR_TYPE_RESUME_TX);
 	buf[1] = cpu_to_le32(src_node);
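Typing buf as __le32 * matches the cpu_to_le32() stores, so sparse can check the byte-order annotations end to end instead of flagging a u32/__le32 mix. A userspace analogue of the pattern, building a fixed little-endian wire layout with a demo conversion helper (names invented, not kernel code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* store v in little-endian byte order regardless of host endianness */
    static uint32_t cpu_to_le32_demo(uint32_t v)
    {
        unsigned char b[4] = { v & 0xff, (v >> 8) & 0xff,
                               (v >> 16) & 0xff, (v >> 24) & 0xff };
        uint32_t le;

        memcpy(&le, b, sizeof(le));
        return le;
    }

    int main(void)
    {
        uint32_t buf[5] = { 0 };

        buf[0] = cpu_to_le32_demo(1);  /* e.g. a message type */
        buf[1] = cpu_to_le32_demo(42); /* e.g. a source node id */
        printf("first wire byte: %u\n", ((unsigned char *)buf)[0]); /* 1 */
        return 0;
    }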


@@ -1048,7 +1048,7 @@ static void sctp_outq_flush(struct sctp_outq *q, int rtx_timeout, gfp_t gfp)
 		     (new_transport->state == SCTP_PF)))
 			new_transport = asoc->peer.active_path;
 		if (new_transport->state == SCTP_UNCONFIRMED) {
-			WARN_ONCE(1, "Atempt to send packet on unconfirmed path.");
+			WARN_ONCE(1, "Attempt to send packet on unconfirmed path.");
 			sctp_chunk_fail(chunk, 0);
 			sctp_chunk_free(chunk);
 			continue;


@@ -531,7 +531,7 @@ static ssize_t sockfs_listxattr(struct dentry *dentry, char *buffer,
 	return used;
 }
 
-int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
+static int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
 {
 	int err = simple_setattr(dentry, iattr);


@@ -655,7 +655,6 @@ static const struct {
 	{ "__GFP_RECLAIM",		"R" },
 	{ "__GFP_DIRECT_RECLAIM",	"DR" },
 	{ "__GFP_KSWAPD_RECLAIM",	"KR" },
-	{ "__GFP_OTHER_NODE",		"ON" },
 };
 
 static size_t max_gfp_len;


@@ -90,7 +90,7 @@ ifdef INSTALL_PATH
 	done;
 
 	@# Ask all targets to emit their test scripts
-	echo "#!/bin/bash" > $(ALL_SCRIPT)
+	echo "#!/bin/sh" > $(ALL_SCRIPT)
 	echo "cd \$$(dirname \$$0)" >> $(ALL_SCRIPT)
 	echo "ROOT=\$$PWD" >> $(ALL_SCRIPT)


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/sh
 
 SRC_TREE=../../../../


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/sh
 
 echo "--------------------"
 echo "running socket test"
