Instead of falling back to C code to do a memcpy of the output of the
last block, handle this in the asm code directly if possible, which is
the case if the entire input is longer than 16 bytes.
Cc: Nathan Huckleberry <nhuck@google.com>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
LDWB is being used incorrectly in HW when
CPT_AF_LF()_PTR_CTL[IQB_LDWB]=1 and the CPT instruction queue has fewer
than 320 free entries. So, increase the HW instruction queue size by
320 and expose 320 fewer entries to SW/NIX RX as a SW workaround.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When CPT_AF_DIAG[FLT_DIS] = 0 and a CPT engine access to
LLC/DRAM encounters a fault/poison, a rare case may result
in unpredictable data being delivered to a CPT engine.
So, this patch adds code to set FLT_DIS as a workaround.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When software sets CPT_AF_CTL[RNM_REQ_EN]=1 and RNM is not producing
entropy (i.e., RNM_ENTROPY_STATUS[NORMAL_CNT] < 0x40), the first cycle of
the response may be lost due to a conditional clocking issue. Due to
this, the subsequent random number stream will be corrupted. So, this
patch adds support to ensure RNM_ENTROPY_STATUS[NORMAL_CNT] = 0x40
before writing CPT_AF_CTL[RNM_REQ_EN] = 1, as a workaround.
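As an illustration of the workaround's shape, here is a minimal sketch;
the register offsets, field masks and accessor pattern below are
assumptions for illustration only, not the real CPT AF register map or
driver code:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/io.h>

/* All offsets and field positions below are illustrative assumptions. */
#define RNM_ENTROPY_STATUS_SKETCH       0x1810
#define CPT_AF_CTL_SKETCH               0x0000
#define RNM_NORMAL_CNT_SKETCH           GENMASK_ULL(7, 0)
#define RNM_REQ_EN_SKETCH               BIT_ULL(0)

static void cpt_rnm_req_en_workaround(void __iomem *af_base)
{
        /* Wait until the RNM reports full entropy (NORMAL_CNT == 0x40)
         * before enabling random number requests. */
        while (FIELD_GET(RNM_NORMAL_CNT_SKETCH,
                         readq(af_base + RNM_ENTROPY_STATUS_SKETCH)) < 0x40)
                ;

        writeq(readq(af_base + CPT_AF_CTL_SKETCH) | RNM_REQ_EN_SKETCH,
               af_base + CPT_AF_CTL_SKETCH);
}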
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For two reasons, current_quality may become zero within the rngd
kernel thread: (1) The user lowers current_quality to 0 by writing
to the sysfs module parameter file (note that increasing the quality
from zero is without effect at the moment), or (2) there are two or
more hwrng devices registered, and those which provide quality>0 are
unregistered, but one with quality==0 remains.
If current_quality is 0, the randomness is not trusted and cannot help
to increase the entropy count. That will lead to continuous calls to
the hwrngd thread and continuous stirring of the input pool with
untrusted bits.
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case the user-specified rng device is not working, it is not used;
therefore cur_rng_set_by_user must not be set to 1.
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Using rng_buffer in add_early_randomness() may race with rng_dev_read().
Use rng_fillbuf instead, as it is otherwise only used within the kernel
by hwrng_fillfn() and therefore never exposed to userspace.
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
According to <linux/hw_random.h>, the @max parameter of the ->read
callback "is a multiple of 4 and >= 32 bytes". That promise was not
kept by add_early_randomness(), which only asked for 16 bytes. As
rng_buffer_size() is at least 32, we can simply ask for 32 bytes.
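A hedged sketch of the idea, not core.c's exact code (the helper name
and buffer handling are simplified; only the ->read callback contract
from <linux/hw_random.h> is relied upon):

#include <linux/hw_random.h>
#include <linux/random.h>

static void add_early_randomness_sketch(struct hwrng *rng, u8 *kernel_buf)
{
        int bytes_read;

        /* @max must be a multiple of 4 and >= 32 bytes, and
         * rng_buffer_size() guarantees room for at least 32 bytes,
         * so simply ask for 32. */
        if (!rng->read)
                return;

        bytes_read = rng->read(rng, kernel_buf, 32, false);
        if (bytes_read > 0)
                add_device_randomness(kernel_buf, bytes_read);
}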
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hw-random device drivers depend on the hw-random core being
initialized. Make this ordering explicit, also for the case where
these drivers are built-in. As the core itself depends on
misc_register() which is set up at subsys_initcall time,
advance the initialization of the core (only) to the
fs_initcall() level.
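A minimal sketch of the initcall-level change (the init function name
here is illustrative, not necessarily the core's real one): for built-in
code, module_init() maps to device_initcall(), so moving the core to
fs_initcall() makes it run after subsys_initcall() (where the misc
device infrastructure behind misc_register() is set up) but before the
device_initcall-level hwrng drivers that depend on it.

#include <linux/init.h>

static int __init hwrng_core_sketch_init(void)
{
        return 0;       /* core registration would happen here */
}
fs_initcall(hwrng_core_sketch_init);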
Cc: Matt Mackall <mpm@selenic.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
padata_do_parallel() calls cpumask_weight() to check if any bit of a
given cpumask is set. We can do it more efficiently with cpumask_empty()
because cpumask_empty() stops traversing the cpumask as soon as it finds
the first set bit, while cpumask_weight() counts all bits unconditionally.
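For illustration, a small generic sketch of the difference (not
padata's exact code):

#include <linux/cpumask.h>
#include <linux/errno.h>

/* Both checks reject an empty cpumask, but cpumask_empty() can stop at
 * the first set bit while cpumask_weight() always counts every bit. */
static int check_mask_slow(const struct cpumask *mask)
{
        if (!cpumask_weight(mask))      /* counts all bits */
                return -EINVAL;
        return 0;
}

static int check_mask_fast(const struct cpumask *mask)
{
        if (cpumask_empty(mask))        /* stops at the first set bit */
                return -EINVAL;
        return 0;
}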
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a bug in scatterlist processing that may cause incorrect AES block encryption/decryption.
Fixes: 2e6d793e1b ("crypto: mxs-dcp - Use sg_mapping_iter to copy data")
Signed-off-by: Tomas Paukrt <tomaspaukrt@email.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel test robot reported this warning: Uninitialized variable: ret.
One code path may return the value of ret while it is still
uninitialized; fix this.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the correct print format. Printing an unsigned int value should
use %u instead of %d.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR counter rolls over at 32 bits by default in the BD, but the
NIST standard requires a 128-bit rollover. This causes test failures,
so fix the BD configuration.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the maximum AAD length for CCM mode, which is limited by the hardware.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Modify a printed message that might mislead users. Currently only XTS
mode needs the fallback tfm when using a 192-bit key; other algorithms
do not need a software fallback tfm, so they can return directly.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the ICV (integrity check value) checking, which was wrongly enabled
on Kunpeng 930.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The OcteonTX2 CPT driver fails to link without devlink support.
aarch64-linux-gnu-ld: otx2_cpt_devlink.o: in function `otx2_cpt_dl_egrp_delete':
otx2_cpt_devlink.c:18: undefined reference to `devlink_priv'
aarch64-linux-gnu-ld: otx2_cpt_devlink.o: in function `otx2_cpt_dl_egrp_create':
otx2_cpt_devlink.c:9: undefined reference to `devlink_priv'
aarch64-linux-gnu-ld: otx2_cpt_devlink.o: in function `otx2_cpt_dl_uc_info':
otx2_cpt_devlink.c:27: undefined reference to `devlink_priv'
Fixes: fed8f4d5f9 ("crypto: octeontx2 - parameters for custom engine groups")
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The C standard does not support dereferencing pointers that are not
aligned with respect to the pointed-to type, and doing so is technically
undefined behavior, even if the underlying hardware supports it.
This means that conditionally dereferencing such pointers based on
whether CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y is not the right thing
to do, and actually results in alignment faults on ARM, which are fixed
up on a slow path. Instead, we should use the unaligned accessors in
such cases: on architectures that don't care about alignment, they will
result in identical codegen whereas, e.g., codegen on ARM will avoid
doubleword loads and stores but use ordinary ones, which are able to
tolerate misalignment.
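A short generic example of the pattern (not taken from any particular
crypto file):

#include <asm/unaligned.h>
#include <linux/types.h>

static u64 load_word(const u8 *p)
{
        /* Dereferencing '(const u64 *)p' directly is undefined
         * behaviour if p is not 8-byte aligned, even when the hardware
         * tolerates it. The accessor below gives identical codegen on
         * architectures that don't care about alignment, and safe
         * ordinary/byte-wise loads (avoiding doubleword loads on ARM)
         * elsewhere. */
        return get_unaligned((const u64 *)p);
}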
Link: https://lore.kernel.org/linux-crypto/CAHk-=wiKkdYLY0bv+nXrcJz3NH9mAqPAafX7PpW5EwVtxsEu7Q@mail.gmail.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_authenc_decrypt_tail discards its flags
argument and always relies on the flags from the original request
when starting its sub-request.
This is clearly wrong as it may cause the SLEEPABLE flag to be
set when it shouldn't.
Fixes: 92d95ba917 ("crypto: authenc - Convert to new AEAD interface")
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The new convention for akcipher_alg::verify makes it unclear which
values are the lengths of the signature and digest. Add local variables
to make it clearer what is going on.
Also rename the digest_size variable in pkcs1pad_sign(), as it is
actually the digest *info* size, not the digest size which is different.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Before checking whether the expected digest_info is present, we need to
check that there are enough bytes remaining.
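A hedged sketch of the kind of check that was missing (identifiers are
illustrative, not the pkcs1pad code):

#include <linux/string.h>
#include <linux/types.h>

/* Only compare against the expected digest_info when enough bytes
 * remain; a too-short buffer must be rejected, not read past. */
static bool digest_info_matches(const u8 *out_buf, size_t remaining,
                                const u8 *info, size_t info_size)
{
        if (remaining < info_size)
                return false;   /* the check that was missing */

        return memcmp(out_buf, info, info_size) == 0;
}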
Fixes: a49de377e0 ("crypto: Add hash param to pkcs1pad")
Cc: <stable@vger.kernel.org> # v4.6+
Cc: Tadeusz Struk <tadeusz.struk@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
RSA PKCS#1 v1.5 signatures are required to be the same length as the RSA
key size. RFC8017 specifically requires the verifier to check this
(https://datatracker.ietf.org/doc/html/rfc8017#section-8.2.2).
Commit a49de377e0 ("crypto: Add hash param to pkcs1pad") changed the
kernel to allow longer signatures, but didn't explain this part of the
change; it seems to be unrelated to the rest of the commit.
Revert this change, since it doesn't appear to be correct.
We can be pretty sure that no one is relying on overly-long signatures
(which would have to be front-padded with zeroes) being supported, given
that they would have been broken since commit c7381b0128
("crypto: akcipher - new verify API for public key algorithms").
Fixes: a49de377e0 ("crypto: Add hash param to pkcs1pad")
Cc: <stable@vger.kernel.org> # v4.6+
Cc: Tadeusz Struk <tadeusz.struk@linaro.org>
Suggested-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit c7381b0128 ("crypto: akcipher - new verify API for public key
algorithms") changed akcipher_alg::verify to take in both the signature
and the actual hash and do the signature verification, rather than just
return the hash expected by the signature as was the case before. To do
this, it implemented a hack where the signature and hash are
concatenated with each other in one scatterlist.
Obviously, for this to work correctly, akcipher_alg::verify needs to
correctly extract the two items from the scatterlist it is given.
Unfortunately, it doesn't correctly extract the hash in the case where
the signature is longer than the RSA key size, as it assumes that the
signature's length is equal to the RSA key size. This causes a prefix
of the hash, or even the entire hash, to be taken from the *signature*.
(Note, the case of a signature longer than the RSA key size should not
be allowed in the first place; a separate patch will fix that.)
It is unclear whether the resulting scheme has any useful security
properties.
Fix this by correctly extracting the hash from the scatterlist.
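A hedged sketch of correct extraction (names are illustrative; the real
code operates on the pkcs1pad request context): req->src carries
sig_len bytes of signature followed by digest_len bytes of digest, so
the digest must be copied starting at offset sig_len, i.e. the actual
signature length, not the RSA key size.

#include <linux/errno.h>
#include <linux/scatterlist.h>

static int copy_digest_sketch(struct scatterlist *src, unsigned int sig_len,
                              unsigned int digest_len, u8 *digest_out)
{
        int nents = sg_nents_for_len(src, (u64)sig_len + digest_len);

        if (nents < 0)
                return nents;

        /* Skip the signature bytes, then copy exactly the digest. */
        if (sg_pcopy_to_buffer(src, nents, digest_out, digest_len,
                               sig_len) != digest_len)
                return -EINVAL;

        return 0;
}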
Fixes: c7381b0128 ("crypto: akcipher - new verify API for public key algorithms")
Cc: <stable@vger.kernel.org> # v5.2+
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The pkcs1pad template can be instantiated with an arbitrary akcipher
algorithm, which doesn't make sense; it is specifically an RSA padding
scheme. Make it check that the underlying algorithm really is RSA.
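A hedged sketch of the check (not the template's exact code): when
instantiating pkcs1pad(<alg>, <hash>), refuse any inner akcipher whose
algorithm name is not plain "rsa".

#include <crypto/akcipher.h>
#include <linux/errno.h>
#include <linux/string.h>

static int pkcs1pad_check_inner_sketch(struct akcipher_alg *inner)
{
        if (strcmp(inner->base.cra_name, "rsa") != 0)
                return -EINVAL;

        return 0;
}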
Fixes: 3d5b1ecdea ("crypto: rsa - RSA padding algorithm")
Cc: <stable@vger.kernel.org> # v4.5+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The logic that detects, enables and disables pfvf interrupts was
expecting a single CSR per VF. Instead, the source and mask are two
registers, each with one bit per VF.
Due to this, the driver is reading and setting reserved CSRs and not
masking the correct source of interrupts.
Fix the access to the source and mask register for QAT GEN4 devices by
removing the outer loop in adf_gen4_get_vf2pf_sources(),
adf_gen4_enable_vf2pf_interrupts() and
adf_gen4_disable_vf2pf_interrupts() and changing the helper macros
ADF_4XXX_VM2PF_SOU and ADF_4XXX_VM2PF_MSK.
Fixes: a9dc0d9666 ("crypto: qat - add PFVF support to the GEN4 host driver")
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Co-developed-by: Siming Wan <siming.wan@intel.com>
Signed-off-by: Siming Wan <siming.wan@intel.com>
Reviewed-by: Xin Zeng <xin.zeng@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Marco Chiappero <marco.chiappero@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It makes no sense to leave crc32_be using the generic code while we
only accelerate the little-endian ops.
Even though the big-endian form doesn't fit as smoothly into arm64,
we can speed it up and avoid hitting the D cache.
Tested on Cortex-A53. Without acceleration:
crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
crc32: self tests passed, processed 225944 bytes in 192240 nsec
crc32c: CRC_LE_BITS = 64
crc32c: self tests passed, processed 112972 bytes in 21360 nsec
With acceleration:
crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
crc32: self tests passed, processed 225944 bytes in 53480 nsec
crc32c: CRC_LE_BITS = 64
crc32c: self tests passed, processed 112972 bytes in 21480 nsec
Signed-off-by: Kevin Bracey <kevin@bracey.fi>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32c_le self test had a stray multiply by two inherited from
the crc32_le+crc32_be test loop.
Signed-off-by: Kevin Bracey <kevin@bracey.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32_le and __crc32c_le can be overridden - extend this to crc32_be.
Signed-off-by: Kevin Bracey <kevin@bracey.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Casts were added in commit 8f243af42a ("sections: fix const sections
for crc32 table") to cope with the tables not being const. They are no
longer required since commit f5e38b9284 ("lib: crc32: constify crc32
lookup table").
Signed-off-by: Kevin Bracey <kevin@bracey.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In addition to sha256 we must also enable hmac for the kdf self-test
to work.
Reported-by: kernel test robot <oliver.sang@intel.com>
Fixes: 304b4acee2 ("crypto: kdf - select SHA-256 required...")
Fixes: 026a733e66 ("crypto: kdf - add SP800-108 counter key...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When adding hash support to sun8i-ss, I added it only for the A83T.
But I forgot that 0 is a valid algorithm ID, so hashes are enabled on A80 but
with an incorrect ID.
Anyway, even with correct IDs, hashes do not work on A80 and I cannot
find why.
So let's disable all of them on A80.
Fixes: d9b45418a9 ("crypto: sun8i-ss - support hash algorithms")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use SPDX-License-Identifier instead of a verbose license text and
update external link.
Cc: James Guilford <james.guilford@intel.com>
Cc: Sean Gulley <sean.m.gulley@intel.com>
Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As testmgr is part of cryptomgr which was designed to be unloadable
as a module, it shouldn't export any symbols for other crypto
modules to use as that would prevent it from being unloaded. All
its functionality is meant to be accessed through notifiers.
The symbol crypto_simd_disabled_for_test was added to testmgr
which caused it to be pinned as a module if its users were also
loaded. This patch moves it out of testmgr and into crypto/algapi.c
so cryptomgr can again be unloaded and replaced on demand.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Cavium ThunderX Random Number Generator is only present on Cavium
ThunderX SoCs, and not available as an independent PCIe endpoint. Hence
add a dependency on ARCH_THUNDER, to prevent asking the user about this
driver when configuring a kernel without Cavium Thunder SoC support.
Fixes: cc2f1908c6 ("hwrng: cavium - Add Cavium HWRNG driver for ThunderX SoC.")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Return the value from ccp_crypto_enqueue_request() directly instead of
storing it in a redundant variable.
Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
Signed-off-by: CGEL ZTE <cgel.zte@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The "ret" variable needs to be signed or there is an error message which
will not be printed correctly.
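A generic illustration of why the type matters (not the driver's actual
code; the helper below is hypothetical):

#include <linux/device.h>
#include <linux/errno.h>

/* Hypothetical helper standing in for any call returning 0 or -errno. */
static int some_setup_call(void)
{
        return -ENOMEM;
}

/* With 'unsigned int ret' a negative errno wraps to a huge positive
 * value, so 'ret < 0' never fires and '%d' prints a bogus number;
 * a signed 'int' behaves as intended. */
static int example_probe_step(struct device *dev)
{
        int ret;        /* must be signed, not unsigned int */

        ret = some_setup_call();
        if (ret < 0) {
                dev_err(dev, "setup failed: %d\n", ret);
                return ret;
        }

        return 0;
}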
Fixes: 0cec19c761 ("crypto: qat - add support for compression for 4xxx")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Initialize psp_ret inside of __sev_platform_init_locked() because there
are many failure paths in PSP initialization that return before
__sev_do_cmd_locked() can set it.
Fixes: e423b9d75e77 ("crypto: ccp - Move SEV_INIT retry for corrupted data")
Signed-off-by: Peter Gonda <pgonda@google.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Marc Orr <marcorr@google.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: John Allen <john.allen@amd.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tcrypt supports testing of SM3 hash algorithms that use AVX
instruction acceleration.
In order to add the sm3 asynchronous test at the appropriate position,
shift the multi-buffer test case sequence numbers backward to start
from 450.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds AVX assembly accelerated implementation of SM3 secure
hash algorithm. From the benchmark data, compared to the pure software
implementation sm3-generic, the performance increase is up to 38%.
The main algorithm implementation is based on the SM3 AES/BMI2
accelerated work by libgcrypt at:
https://gnupg.org/software/libgcrypt/index.html
Benchmark on an Intel i5-6200U 2.30GHz, comparing the performance of
the two implementations, pure software sm3-generic and sm3-avx
acceleration. The data comes from tcrypt modes 326 and 422. The columns
are different per-update lengths. The data is tabulated and the unit is
Mb/s:
update-size | 16 64 256 1024 2048 4096 8192
------------+-------------------------------------------------------
sm3-generic | 105.97 129.60 182.12 189.62 188.06 193.66 194.88
sm3-avx | 119.87 163.05 244.44 260.92 257.60 264.87 265.88
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SM3 generic library is a stand-alone implementation, so make the
sm3-generic implementation depend on the SM3 library. The crypto_sm3_*()
functions provided by sm3_generic are no longer exported.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SM3 generic library is a stand-alone implementation, so make the
calculation of the sm2 z digest depend on the SM3 library instead of
sm3-generic.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SM3 generic library is a stand-alone implementation, so sm3-ce can
depend on the SM3 library instead of sm3-generic.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a stand-alone implementation of the SM3 algorithm, designed to
have as few dependencies as possible. In other cases you should
generally use the hash APIs from include/crypto/hash.h, especially when
hashing large amounts of data, as those APIs may be hw-accelerated. In
the new SM3 stand-alone library, sm3_transform() has also been
optimized rather than simply reusing the code from sm3_generic.
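For the cases where the library is not the right choice, a minimal
sketch of going through the generic hash API instead (an illustrative
helper, not part of this series):

#include <crypto/hash.h>
#include <linux/err.h>

/* Hash through the crypto API, which may select a hardware-accelerated
 * or SIMD SM3 implementation when one is available. */
static int sm3_digest_via_shash(const u8 *data, unsigned int len, u8 out[32])
{
        struct crypto_shash *tfm;
        int ret;

        tfm = crypto_alloc_shash("sm3", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        ret = crypto_shash_tfm_digest(tfm, data, len, out);

        crypto_free_shash(tfm);
        return ret;
}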
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update documentation describing DebugFS for function's QoS limiting.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update documentation describing DebugFS for function's QoS limiting.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update documentation describing DebugFS for function's QoS limiting.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use tabs in all HiSilicon accelerator documentation, including hpre,
sec and zip.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'perf-tools-for-v5.17-2022-01-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull more perf tools updates from Arnaldo Carvalho de Melo:
- Fix printing 'phys_addr' in 'perf script'.
- Fix failure to add events with 'perf probe' in ppc64 due to not
removing leading dot (ppc64 ABIv1).
- Fix cpu_map__item() python binding building.
- Support event alias in form foo-bar-baz, add pmu-events and
parse-event tests for it.
- No need to setup affinities when starting a workload or attaching to
a pid.
- Use path__join() to compose a path instead of ad-hoc snprintf()
equivalent.
- Override attr->sample_period for non-libpfm4 events.
- Use libperf cpumap APIs instead of accessing the internal state
directly.
- Sync x86 arch prctl headers and files changed by the new
set_mempolicy_home_node syscall with the kernel sources.
- Remove duplicate include in cpumap.h.
- Remove redundant err variable.
* tag 'perf-tools-for-v5.17-2022-01-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
perf tools: Remove redundant err variable
perf test: Add parse-events test for aliases with hyphens
perf test: Add pmu-events test for aliases with hyphens
perf parse-events: Support event alias in form foo-bar-baz
perf evsel: Override attr->sample_period for non-libpfm4 events
perf cpumap: Remove duplicate include in cpumap.h
perf cpumap: Migrate to libperf cpumap api
perf python: Fix cpu_map__item() building
perf script: Fix printing 'phys_addr' failure issue
tools headers UAPI: Sync files changed by new set_mempolicy_home_node syscall
tools headers UAPI: Sync x86 arch prctl headers with the kernel sources
perf machine: Use path__join() to compose a path instead of snprintf(dir, '/', filename)
perf evlist: No need to setup affinities when disabling events for pid targets
perf evlist: No need to setup affinities when enabling events for pid targets
perf stat: No need to setup affinities when starting a workload
perf affinity: Allow passing a NULL arg to affinity__cleanup()
perf probe: Fix ppc64 'perf probe add events failed' case