Summary:
This patch can be treated as a rebase of D33960. It also fixes PR35547.
A fix for `llvm/test/Other/close-stderr.ll` is proposed in D68164. The
consensus seems to be that the test passes only by chance, and I'm not
sure how important it is for us, so it is removed for now, as in D33960.
The rest of the test fixes are just adding `--crash` flag to `not` tool.
** The reason it fixes PR35547 is that
`exit` performs cleanup, including calling class destructors, whereas `abort`
does not do any cleanup. In a multithreaded environment such as ThinLTO or JIT,
threads may share state, mostly ManagedStatic<>. If the faulting thread
tears down a class while another thread is using it, there is a chance of
memory corruption. This is bad: 1. it stops error reporting such as the pretty
stack printer; 2. the memory corruption is distracting and nondeterministic in
terms of error message and corruption type (depending on the timing, it
could be a double free, a use after free, etc.).
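As a minimal illustration of the hazard (a hypothetical sketch, not code from this patch):
  #include <cstdlib>

  struct Registry {                  // stands in for an LLVM ManagedStatic<>
    ~Registry() { /* frees state shared with other threads */ }
  };
  static Registry TheRegistry;

  void reportFatalError() {
    // std::exit(1) would run ~Registry() while another thread may still be
    // reading TheRegistry -- double free / use-after-free territory.
    std::abort();                    // terminate immediately; no destructors run
  }

  int main() { reportFatalError(); }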
Reviewers: rnk, chandlerc, zturner, sepavloff, MaskRay, espindola
Reviewed By: rnk, MaskRay
Subscribers: wuzish, jholewinski, qcolombet, dschuff, jyknight, emaste, sdardis, nemanjai, jvesely, nhaehnle, sbc100, arichardson, jgravelle-google, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, lenary, s.egerton, pzheng, cfe-commits, MaskRay, filcab, davide, MatzeB, mehdi_amini, hiraditya, steven_wu, dexonsmith, rupprecht, seiya, llvm-commits
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D67847
As detailed in https://blog.regehr.org/archives/1709, we don't make use of the known leading/trailing zeros of shifted values in cases where we don't know the shift amount.
This patch adds support to SelectionDAG::ComputeKnownBits to use KnownBits::countMinTrailingZeros and countMinLeadingZeros to set the minimum guaranteed leading/trailing known zero bits.
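For instance (a hedged illustration of the idea, not the actual SelectionDAG code): if a value is known to be a multiple of 16, shifting it left by an unknown amount still leaves at least four trailing zero bits.
  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t x = 16 * 7;                  // at least 4 trailing zero bits known
    for (unsigned s = 0; s < 28; ++s)     // shift amount is "unknown"
      assert(((x << s) & 0xF) == 0);      // >= 4 trailing zeros are preserved
    return 0;
  }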
Differential Revision: https://reviews.llvm.org/D72573
Previously an extern function was added as BTF_KIND_VAR. This does not work
well with the existing BTF infrastructure, as functions are expected to use
BTF_KIND_FUNC and BTF_KIND_FUNC_PROTO.
This patch adds extern functions as BTF_KIND_FUNC. The two bits 0:1
of btf_type.info are used to indicate what kind of function it is
(see the sketch after the list):
0: static
1: global
2: extern
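A hedged sketch of the encoding (the struct layout follows the Linux UAPI struct btf_type; the helper name and exact kind bit positions are stated as assumptions):
  #include <cstdint>

  struct btf_type {
    uint32_t name_off;
    uint32_t info;   // low bits: vlen (linkage for FUNC); bits 24 and up: kind
    uint32_t type;   // for a FUNC: type id of its BTF_KIND_FUNC_PROTO
  };

  enum FuncLinkage { FuncStatic = 0, FuncGlobal = 1, FuncExtern = 2 };

  // Hypothetical helper: pack the linkage into bits 0:1 of info.
  uint32_t encodeFuncInfo(FuncLinkage Linkage) {
    const uint32_t BTF_KIND_FUNC = 12;
    return (BTF_KIND_FUNC << 24) | uint32_t(Linkage);
  }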
Differential Revision: https://reviews.llvm.org/D71638
Previously, a BTF field relocation was always at an assignment like
  r1 = 4
which is converted from an ld_imm64 instruction.
This patch implements an optimization such that the relocation
instruction may be a load, store, or shift. Specifically, the
following insns may also carry a relocation, in addition to BPF_MOV:
LDB, LDH, LDW, LDD, STB, STH, STW, STD,
LDB32, LDH32, LDW32, STB32, STH32, STW32,
SLL, SRL, SRA
To accomplish this, a few BPF target specific
codegen only instructions are invented. They
are generated at backend BPF SimplifyPatchable phase,
which is at early llc phase when SSA form is available.
The new codegen only instructions will be converted to
real proper instructions at the codegen and BTF emission stage.
Note that, as revealed by a few tests, this optimization might
actually generate more relocations:
Scenario 1:
if (...) {
  ... __builtin_preserve_field_info(arg->b2, 0) ...
} else {
  ... __builtin_preserve_field_info(arg->b2, 0) ...
}
The compiler could do CSE to keep only one relocation. But if both
of the above are translated into codegen internal instructions,
the compiler will not be able to do that.
Scenario 2:
offset = ... __builtin_preserve_field_info(arg->b2, 0) ...
...
... offset ...
... offset ...
... offset ...
For whatever reason, the compiler might temporarily do copy
propagation of the right-hand side of the "offset" assignment like
... __builtin_preserve_field_info(arg->b2, 0) ...
... __builtin_preserve_field_info(arg->b2, 0) ...
and CSE will be able to deduplicate later.
But if these intrinsics are converted to BPF pseudo instructions,
they will not be able to get deduplicated.
I do not expect a big instruction count difference.
It may actually reduce the instruction count, since the relocation
is now deeper in the insn dependency chain.
For example, for test offset-reloc-fieldinfo-2.ll, this patch
generates 7 instead of 6 relocations for non-alu32 mode, but it
actually reduces the instruction count from 29 to 26.
Differential Revision: https://reviews.llvm.org/D71790
Currently, for extern variables with a section attribute, the
BTF_KIND_VARs are not placed in any DataSec. This is
inconvenient, as every other generated BTF_KIND_VAR belongs to
a DataSec. This patch puts these extern variables into a
".extern" section so the bpf loader can have a consistent
processing mechanism for all data sections and variables.
Extern variable usage in BPF differs from a traditional
pure user-space application. A recent discussion on the linux bpf
mailing list has two use cases where debug info types are
required for extern variables:
- extern types are required to have a suitable interface
in libbpf (bpf loader) to provide kernel config parameters
to bpf programs.
https://lore.kernel.org/bpf/CAEf4BzYCNo5GeVGMhp3fhysQ=_axAf=23PtwaZs-yAyafmXC9g@mail.gmail.com/T/#t
- extern types are required so the kernel bpf verifier can
  verify programs which use external functions more precisely.
  A later link with the actual external function then needs
  no reverification.
https://lore.kernel.org/bpf/87eez4odqp.fsf@toke.dk/T/#m8d5c3e87ffe7f2764e02d722cb0d8cbc136880ed
This patch adds bpf support to consume such info into BTF,
which can then be used by the bpf loader. The function processFuncPrototypes()
only adds extern function declarations into BTF; functions
with an actual definition are added to BTF elsewhere.
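A minimal hedged example of the first use case (variable and function names are illustrative):
  /* CONFIG_HZ stands in for any kernel config value resolved by libbpf. */
  extern int CONFIG_HZ;
  int uses_config(void) { return CONFIG_HZ > 100; }
With this patch, the BTF_KIND_VAR for CONFIG_HZ lands in the ".extern" DataSec instead of floating outside any DataSec.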
Differential Revision: https://reviews.llvm.org/D70697
Generate types for global variables with the "weak" attribute.
Keep the allocation scope the same for both weak and non-weak
globals, as the ELF symbol table can determine whether a global
symbol is weak or not.
Differential Revision: https://reviews.llvm.org/D71162
Currently, the BPF backend creates some global variables with a name like
  <type_name>:<reloc_type>:<patch_imm>$<access_str>
to carry certain information to the BPF backend.
With direct clang compilation, the following code in
llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
is triggered and the above globals are emitted to the ELF file
(clang enables this as the opt flag -faddrsig is on by default):
  if (TM.Options.EmitAddrsig) {
    // Emit address-significance attributes for all globals.
    OutStreamer->EmitAddrsig();
    for (const GlobalValue &GV : M.global_values())
      if (!GV.use_empty() && !GV.isThreadLocal() &&
          !GV.hasDLLImportStorageClass() && !GV.getName().startswith("llvm.") &&
          !GV.hasAtLeastLocalUnnamedAddr())
        OutStreamer->EmitAddrsigSym(getSymbol(&GV));
  }
...
10162: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND tcp_sock:0:2048$0:117
10163: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND tcp_sock:0:2112$0:126:0
10164: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND tcp_sock:1:8$0:31:6
...
While in llc, those globals are not emitted, since the EmitAddrsig
option defaults to false for llc. The llc flag "-addrsig" can be used to
enable the above code.
This patch adds an "llvm." prefix to these internal globals so that
they can be ignored by the above code and possibly other
places.
Differential Revision: https://reviews.llvm.org/D70703
Commit a0841dfe85 ("[BPF] Fix a bug in peephole optimization")
fixed a bug in peephole optimization. Recursion was introduced
to handle COPY and PHI instructions.
Unfortunately, multiple PHI instructions may form a cycle,
which causes infinite recursion and an eventual segfault.
For commit a0841dfe85, I indeed tried a few loops to ensure
that I would not see the recursion, but I did not try
complex control flows, which, as demonstrated by the test case
in this patch, may introduce PHI cycles.
This patch fixes the issue by introducing a set to remember
visited PHI instructions, as sketched below. This way, cycles can be
properly detected and handled.
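A minimal sketch of the fix's shape (toy types; the actual pass operates on MachineInstr):
  #include <set>
  #include <vector>

  struct Insn {
    bool IsPHI;
    bool ZeroExtends;                 // e.g. a 32-bit load that zero extends
    std::vector<const Insn *> Defs;   // defs feeding this insn's operands
  };

  // Check that every def reachable through PHIs zero extends.
  bool allDefsZeroExtend(const Insn *I, std::set<const Insn *> &VisitedPHIs) {
    if (I->IsPHI) {
      if (!VisitedPHIs.insert(I).second)
        return true;                  // PHI already under examination: cycle, stop
      for (const Insn *D : I->Defs)
        if (!allDefsZeroExtend(D, VisitedPHIs))
          return false;
      return true;
    }
    return I->ZeroExtends;
  }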
Differential Revision: https://reviews.llvm.org/D70586
One of the current peephole optimizations is to remove SLL/SRL if
the sub-register has been zero extended. This phase has two bugs
and one limitation.
First, for a physical subregister used in a pseudo insn COPY
like below, it permits incorrect optimization:
%0:gpr32 = COPY $w0
...
%4:gpr = MOV_32_64 %0:gpr32
%5:gpr = SLL_ri %4:gpr(tied-def 0), 32
%6:gpr = SRA_ri %5:gpr(tied-def 0), 32
The $w0 could be from the return value of a previous function call,
and its upper 32 bits might contain non-zero values.
The same applies to function arguments.
Second, the current code may permit removing SLL/SRA like below:
%0:gpr32 = COPY $w0
%1:gpr32 = COPY %0:gpr32
...
%4:gpr = MOV_32_64 %1:gpr32
%5:gpr = SLL_ri %4:gpr(tied-def 0), 32
%6:gpr = SRA_ri %5:gpr(tied-def 0), 32
The reason is that it did not follow the def-use chain to skip all
intermediate 32bit-to-32bit COPY instructions.
The current implementation is also very conservative for PHI
instructions. If any PHI insn component is another PHI or COPY insn,
it simply keeps the SLL/SRA.
This patch fixes the issues as follows (see the sketch after this list):
- During def/use chain traversal, if any physical register is read,
  the SLL/SRA are preserved, as these physical registers mostly come
  from function return values or current function arguments.
- Recursively visit all COPY and PHI instructions.
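A hedged sketch of the traversal (a toy model, not the pass's real data structures; it assumes the chain terminates at a non-COPY def):
  struct MInsn {
    bool IsCopy32;                 // a 32bit-to-32bit COPY
    bool ReadsPhysReg;             // reads a physical register
    const MInsn *SrcDef;           // def of the source operand
  };

  // Walk through intermediate 32-bit COPYs; report whether the shifts
  // must be preserved because a physical register is read along the way.
  bool mustKeepShifts(const MInsn *I) {
    while (I->IsCopy32) {
      if (I->ReadsPhysReg)
        return true;               // e.g. $w0 from a call's return value
      I = I->SrcDef;
    }
    return I->ReadsPhysReg;
  }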
Enable generating BTF_KIND_VARs for non-static
default-section globals, which was not allowed previously.
The existing test case is modified to accommodate the new change.
Also removed the unused linkage enum members VAR_GLOBAL_TENTATIVE and
VAR_GLOBAL_EXTERNAL.
Differential Revision: https://reviews.llvm.org/D70145
Bitfield handling is not robust in the current implementation.
I have seen the two issues described below.
Issue 1:
struct s {
  long long f1;
  char f2;
  char b1:1;
} *p;
The current approach will generate an access bit size of
56 (from b1 to the end of the structure), which will be
rejected as it is not a power of 2.
Issue 2:
struct s {
  char f1;
  char b1:3;
  char b2:5;
  char b3:6;
  char b4:2;
  char f2;
};
LLVM will group the four bitfields together into 2 bytes. But
loading 2 bytes is not correct, as it violates the alignment
requirement. Note that sometimes LLVM breaks a large
bitfield group into multiple groups, but not in this case.
To resolve the above two issues, this patch takes a
different approach. The alignment of the structure is used
to construct the offset of the bitfield access. The bitfield-incurred
memory access is an aligned memory access with alignment/size
equal to the alignment of the structure.
This also simplifies the code.
This may not be the optimal memory access in terms of access
width, but it should be okay, since extracting the bitfield value
takes the same amount of work regardless of the
memory access width.
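A worked example for Issue 1 above, under the assumption that struct s has 8-byte alignment (which it does, because of the long long member):
  #include <cassert>

  int main() {
    unsigned StructAlign = 8;                 // alignof(struct s)
    unsigned BitOffset   = 72;                // b1 starts after f1 (64) + f2 (8)
    unsigned ByteSize    = StructAlign;       // access size = struct alignment
    unsigned ByteOffset  = (BitOffset / 8) / StructAlign * StructAlign;
    assert(ByteOffset == 8 && ByteSize == 8); // aligned 8-byte load covers b1
    return 0;
  }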
Differential Revision: https://reviews.llvm.org/D69837
When deriving the proper bitfield access FIELD_BYTE_SIZE,
the function Member->getStorageOffsetInBits() is used to
get the llvm IR type storage offset in bits, so that
the byte size can permit aligned loads/stores with the previously
derived FIELD_BYTE_OFFSET.
The function should only be used with bitfield members, and it will
assert if assertions are enabled in the cmake build:
Constant *getStorageOffsetInBits() const {
  assert(getTag() == dwarf::DW_TAG_member && isBitField());
  if (auto *C = cast_or_null<ConstantAsMetadata>(getExtraData()))
    return C->getValue();
  return nullptr;
}
This patch fixes the issue by checking Member->isBitField()
directly, and a test case is added to cover this missing case.
This issue was discovered when running Andrii's linux kernel CO-RE
tests.
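A hedged sketch of the guarded usage after the fix (toy stand-in types; the real code uses LLVM's DIDerivedType):
  #include <cassert>

  struct DIMember {
    bool BitField;
    long StorageOffsetInBits;
    bool isBitField() const { return BitField; }
    long getStorageOffsetInBits() const {
      assert(isBitField());          // mirrors the assert quoted above
      return StorageOffsetInBits;
    }
  };

  // The fix: consult the storage offset only for actual bitfields,
  // so the assert can never fire on plain members.
  long storageOffsetOrSentinel(const DIMember &M) {
    return M.isBitField() ? M.getStorageOffsetInBits() : -1;
  }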
Differential Revision: https://reviews.llvm.org/D69761
Ilya Leoshkevich (<iii@linux.ibm.com>) reported an issue where,
with -mattr=+alu32, CO-RE has a segfault in the BPF MISimplifyPatchable
pass.
The pattern to be transformed by the MISimplifyPatchable
pass looks like below:
r5 = ld_imm64 @"b:0:0$0:0"
r2 = ldw r5, 0
... r2 ... // use r2
The pass will remove the intermediate 'ldw' instruction
and replace all uses of r2 with r5, like below:
r5 = ld_imm64 @"b:0:0$0:0"
... r5 ... // use r5
Later, the ld_imm64 insn will be replaced with
  r5 = <patched immediate>
for field relocation purposes.
With -mattr=+alu32, the input code may become
r5 = ld_imm64 @"b:0:0$0:0"
w2 = ldw32 r5, 0
... w2 ... // use w2
Replacing "w2" with "r5" is incorrect and will
trigger compiler internal errors.
To fix the problem, if the register class of ldw* dest
register is sub_32, we just replace the original ldw*
register with:
w2 = w5
Directly replacing all uses of w2 with in-place
constructed w5 for the use operand seems not working in all cases.
The latest kernel will have -mattr=+alu32 on by default,
so this flag is added to all CO-RE tests.
Tested with the latest kernel bpf-next branch as well with this patch.
Differential Revision: https://reviews.llvm.org/D69438
Currently, for an indirect call, the assembly code is printed as
  callx <imm>
This is not right; it should be
  callx <reg>
Fixed the issue with the proper format.
Differential Revision: https://reviews.llvm.org/D69229
llvm-svn: 375386
Currently, the BPF backend performs truncation elimination. If one truncation
is performed on a value defined by narrow loads, then it could be redundant,
given that BPF loads zero extend the destination register implicitly.
When the definition of the truncated value is a merging value (PHI node)
that could come from different code paths, then checks need to be done on
all possible code paths.
The optimization described above was introduced in r306685; however, it
doesn't work when there is a back-edge, for example when a loop is used
inside BPF code.
For example, for the following code, a zero-extended value should be stored
into b[i], but the "and reg, 0xffff" is wrongly eliminated, which then
generates corrupted data.
void cal1(unsigned short *a, unsigned long *b, unsigned int k)
{
  unsigned short e;
  e = *a;
  for (unsigned int i = 0; i < k; i++) {
    b[i] = e;
    e = ~e;
  }
}
The reason is that r306685 was trying to do the PHI node checks inside the
isel DAG2DAG phase, and the checks were done on MachineInstr. This is actually
wrong, because the MachineInstr is being built during the isel phase and the
associated information is not complete yet. A quick search shows that no
target other than BPF accesses MachineInstr info during the isel phase.
For a PHI node, when you reach it during the isel phase, it may have all
predecessors linked, but not successors. It seems successors are linked to
a PHI node only in SelectionDAGISel::FinishBasicBlock, and this
happens later than the PreprocessISelDAG hook.
Previously, BPF programs didn't allow loops, which is probably
why this bug was not exposed.
This patch therefore fixes the bug by the following approach:
- The existing truncation elimination code and the associated
"load_to_vreg_" records are removed.
- Instead, implement truncation elimination using a MachineSSA pass; this
  is where all the information is built. Keep the pass together with other
  similar peephole optimizations inside BPFMIPeephole.cpp. The redundant move
  elimination logic is updated accordingly.
- Unit testcase included + no compilation errors for kernel BPF selftest.
Patch Review
===
Patch was sent to and reviewed by BPF community at:
https://lore.kernel.org/bpf
Reported-by: David Beckett <david.beckett@netronome.com>
Reviewed-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 375007
Previously, patchable extern relocations were introduced to patch
external variables used for multi-versioning in the
compile once, run everywhere use case. The load instruction
would be converted into a move with a patchable immediate
which can be changed by the bpf loader on the host.
The kernel verifier has evolved and is able to load
and propagate constant values, so compiler relocation
becomes unnecessary. This patch removes the code related to this.
Differential Revision: https://reviews.llvm.org/D68760
llvm-svn: 374367
While studying support for bitfields, I found an issue with
an example like the one in test offset-reloc-middle-chain.ll:
struct t1 { int c; };
struct s1 { struct t1 b; };
struct r1 { struct s1 a; };
#define _(x) __builtin_preserve_access_index(x)
void test1(void *p1, void *p2, void *p3);
void test(struct r1 *arg) {
  struct s1 *ps = _(&arg->a);
  struct t1 *pt = _(&arg->a.b);
  int *pi = _(&arg->a.b.c);
  test1(ps, pt, pi);
}
The IR looks like:
%0 = llvm.preserve.struct.access(base, ...)
%1 = llvm.preserve.struct.access(%0, ...)
%2 = llvm.preserve.struct.access(%1, ...)
using %0, %1 and %2
In this case, we need to generate three relocations
corresponding to the chains: (%0), (%0, %1) and (%0, %1, %2).
After collecting all the chains, the current implementation
processes each chain (in a map) with code generation sequentially.
For example, after (%0) is processed, the code may look like:
%0 = base + special_global_variable
// llvm.preserve.struct.access(base, ...) is delisted
// from the instruction stream.
%1 = llvm.preserve.struct.access(%0, ...)
%2 = llvm.preserve.struct.access(%1, ...)
using %0, %1 and %2
When processing chain (%0, %1), the current implementation
tries to visit the intrinsic llvm.preserve.struct.access(base, ...)
to get some of its properties, and this causes a segfault.
This patch fixes the issue by remembering all necessary
information (kind, metadata, access_index, base) during the
analysis phase, so in the code generation phase there is
no need to examine the intrinsic call instructions, as sketched below.
This also simplifies the code.
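A hedged sketch of the cached record (field types simplified to opaque pointers; the real pass stores LLVM IR types):
  #include <cstdint>
  #include <map>

  struct CallInfo {
    uint32_t Kind;          // which preserve_*_access_index intrinsic
    uint32_t AccessIndex;   // the member/element index
    const void *Metadata;   // debuginfo type attached to the call
    const void *Base;       // base operand of the access
  };

  // Filled during analysis; codegen consults this map instead of
  // re-examining (possibly already-delisted) intrinsic calls.
  std::map<const void *, CallInfo> CallInfoMap;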
Differential Revision: https://reviews.llvm.org/D68389
llvm-svn: 373621
Currently, if an array element type size is 0, the number of
array elements is set to 0, regardless of what the user
specified. This implementation dates from the beginning, when
BTF was mostly used to calculate member offsets.
For example,
struct s {};
struct s1 {
  int b;
  struct s a[2];
};
struct s1 s1;
The BTF will have struct "s1" member "a" with element count 0.
Now BTF types are used for compile-once and run-everywhere
relocations, and we need a more precise type representation
for type comparison. Andrii reported the issue, as there
are differences between the original structure and the BTF-generated
structure.
This patch makes the change to correctly assign "2"
as the number of elements of member "a".
Some dead code related to ElemSize computation is also removed.
Differential Revision: https://reviews.llvm.org/D67979
llvm-svn: 372785
Currently, not all user-specified relocations
(with the clang intrinsic __builtin_preserve_access_index())
will turn into relocations.
In the current implementation, a __builtin_preserve_access_index()
chain is turned into a relocation only if the result of the clang
intrinsic is used in a function call or a nonzero offset computation
of getelementptr. For all other cases, the relocation request
is ignored and the __builtin_preserve_access_index() is turned
into regular getelementptr instructions.
The main reason was to mimic the bpf_probe_read() requirement.
But there are other use cases where a relocatable offset is
generated but not used for bpf_probe_read(). This patch
relaxes the previous constraints on when to generate relocations.
Now, all user __builtin_preserve_access_index() calls will have
relocations generated.
Differential Revision: https://reviews.llvm.org/D67688
llvm-svn: 372198
Commit https://reviews.llvm.org/D57939 ("[DWARF] Refactor
RelocVisitor and fix computation of SHT_RELA-typed relocation entries")
made a change to relocation resolution when operating
on an object file.
The change unfortunately broke BPF: given SymbolValue (S) and
Addend (A), previously a relocation was resolved to
  S + A
and after the change, it is resolved to
  S
This patch fixes the issue by resolving the relocation correctly, as in the sketch below.
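A minimal sketch of the corrected computation (the function name is illustrative):
  #include <cstdint>

  // BPF relocations on object files must honor the addend again.
  uint64_t resolveBPFReloc(uint64_t S, int64_t A) {
    return S + A;   // previously the addend A was dropped
  }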
It looks like not all relocation resolutions reach here, and I did not
trace down exactly when. But I do find that if the object file includes
code in two ELF sections other than the default ".text",
the above bug is triggered.
This patch includes a trivial two-function source file to
demonstrate this issue. The relocation for .debug_loc is resolved
incorrectly due to this, and llvm-objdump cannot display the
source-annotated assembly.
Differential Revision: https://reviews.llvm.org/D66372
llvm-svn: 369199
Currently, when a GVN or CSE optimization happens,
the llvm.preserve.access.index metadata is dropped.
This caused a problem for the BPF AbstractMemberAccess phase,
as it relies on the metadata (debuginfo types).
This patch adds proper hooks in lib/Transforms to
preserve !preserve.access.index metadata, as sketched below. A test
case is added to ensure metadata is preserved under CSE.
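A hedged sketch of what such a hook does (the helper shape is illustrative; the actual hooks are threaded through the GVN/CSE code paths):
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // When NewI replaces OrigI, carry the !preserve.access.index metadata
  // over so the BPF passes can still find the debuginfo type.
  static void preserveAccessIndexMD(Instruction *OrigI, Instruction *NewI) {
    if (MDNode *MD = OrigI->getMetadata(LLVMContext::MD_preserve_access_index))
      NewI->setMetadata(LLVMContext::MD_preserve_access_index, MD);
  }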
Differential Revision: https://reviews.llvm.org/D65700
llvm-svn: 367769
With the newly added debuginfo type
metadata for the preserve_array_access_index() intrinsic,
this patch does the following two things:
(1) checking validity before adding a new access index
    to the access chain;
(2) calculating the access byte offset in the IR phase
    BPFAbstractMemberAccess instead of when BTF is emitted.
For (1), the metadata provided by all preserve_*_access_index()
intrinsics is used to check whether the to-be-added type
is a proper struct/union member or array element.
For (2), with all available metadata, calculating access byte
offset becomes easier in BPFAbstractMemberAccess IR phase.
This enables us to remove the unnecessary complexity in
BTFDebug.cpp.
New tests are added for
. user explicit casting to array/structure/union
. a global variable (or its dereference) as the source of the base
. multi-dimensional arrays
. array access given a base pointer
. cases where we won't generate a relocation if we cannot find a
  type name.
Differential Revision: https://reviews.llvm.org/D65618
llvm-svn: 367735
Previously, debuginfo types were annotated for the
IR builtins preserve_struct_access_index() and
preserve_union_access_index(), but not
preserve_array_access_index(). The debug info
is useful to identify the root type name, which
later will be used for type comparison.
For user access without explicit type conversions,
the previous scheme works, as we can ignore intermediate
compiler-generated type conversions (e.g., from union types to
union members) and still generate the correct access index string.
The issue comes with user explicit type conversions, e.g.,
converting an array to a structure like below:
  struct t { int a; char b[40]; };
  struct p { int c; int d; };
  struct t *var = ...;
  ... __builtin_preserve_access_index(&(((struct p *)&(var->b[0]))->d)) ...
Although the BPF backend can derive the type of &(var->b[0]),
explicit type annotation makes checking more consistent
and less error prone.
Another benefit is for multiple-dimension array handling.
For example,
  struct p { int c; int d; } g[8][9][10];
  ... __builtin_preserve_access_index(&g[2][3][4].d) ...
It would be possible to calculate the number of "struct p"'s
before accessing its member "d" if array debug info is
available, as it contains each dimension's range.
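A worked example of that calculation for &g[2][3][4].d (assuming a 4-byte int, so sizeof(struct p) == 8):
  #include <cassert>

  int main() {
    unsigned Skipped = (2 * 9 + 3) * 10 + 4;  // 214 "struct p" elements
    unsigned Offset  = Skipped * 8 + 4;       // + offsetof(struct p, d)
    assert(Offset == 1716);
    return 0;
  }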
This patch enables annotating the IR builtin preserve_array_access_index()
with the proper debuginfo type. The unit test case and language reference
are updated as well.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65664
llvm-svn: 367724
Currently, the CO-RE offset relocation does not work
if any struct/union member or array element is a typedef.
For example,
typedef const int arr_t[7];
struct input {
  arr_t a;
};
func(...) {
  struct input *in = ...;
  ... __builtin_preserve_access_index(&in->a[1]) ...
}
The default offset calculated by the BPF backend is 0, while
4 is the correct answer. Similar issues exist for struct/union
typedefs.
When getting a struct/union member or array element type,
we should trace down to the final type by skipping typedefs
and the qualifiers const/volatile, as this is what clang does
to generate getelementptr instructions.
(const/volatile member type qualifiers are already
ignored by clang.)
This patch fixes the issue by skipping typedef and
const/volatile/restrict BTF types for each access index, as sketched below.
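A hedged sketch of the stripping loop (the BTF kind values are from the Linux UAPI; the surrounding types are toy stand-ins):
  struct BTFType {
    int Kind;                 // BTF_KIND_TYPEDEF / CONST / VOLATILE / RESTRICT / ...
    const BTFType *Base;      // the type this one modifies
  };

  const BTFType *skipModifiers(const BTFType *Ty) {
    const int BTF_KIND_TYPEDEF = 8, BTF_KIND_VOLATILE = 9,
              BTF_KIND_CONST = 10, BTF_KIND_RESTRICT = 11;
    while (Ty->Kind == BTF_KIND_TYPEDEF || Ty->Kind == BTF_KIND_VOLATILE ||
           Ty->Kind == BTF_KIND_CONST || Ty->Kind == BTF_KIND_RESTRICT)
      Ty = Ty->Base;
    return Ty;
  }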
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65259
llvm-svn: 367062
Currently, we expect the CO-RE offset relocation to record
a string encoding the original getelementptr access index,
so the kernel bpf loader can decode it correctly.
For example,
struct s { int a; int b; };
struct t { int c; int d; };
#define _(x) (__builtin_preserve_access_index(x))
int get_value(const void *addr1, const void *addr2);
int test(struct s *arg1, struct t *arg2) {
  return get_value(_(&arg1->b), _(&arg2->d));
}
We expect two offset relocations:
reloc 1: type s, access index 0, 1
reloc 2: type t, access index 0, 1
Two globals are created whose names retain the access indexes for
the above two relocations.
The first global has the name "0:1:". Unfortunately,
the second global has the name "0:1:.1", as the llvm
internals automatically add the suffix ".1" to a global
with the same name. Later on, the BPF backend peels off the last
character and records "0:1" and "0:1:." in the
relocation table.
This is not desirable. The BPF backend could use knowledge of the
global variable suffixes to generate the correct access string.
This patch instead takes an approach not relying on
that knowledge. It generates "s:0:1:" and "t:0:1:" to
avoid global variable suffixes, and later on generates the
correct index access string "0:1" for both records.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65258
llvm-svn: 367030
This is a followup patch for https://reviews.llvm.org/D61810/new/,
which adds the new intrinsics preserve_{array,union,struct}_access_index.
Currently, only the BPF backend utilizes the
preserve_{array,union,struct}_access_index intrinsics,
so all tests are compiled with the BPF target.
https://reviews.llvm.org/D61524 already added some tests for these
intrinsics, but some of them are pretty complex.
This patch adds a few unit test cases focusing on individual intrinsic
functions.
Also made a few clarifications in the language reference for these intrinsics.
Differential Revision: https://reviews.llvm.org/D64606
llvm-svn: 366038
Introduction
============
This patch adds initial support for bpf program compile once
and run everywhere (CO-RE).
The main motivation is bpf programs which depend on
kernel headers which may vary between different kernel versions.
The initial discussion can be found at https://lwn.net/Articles/773198/.
Currently, a bpf program accesses kernel internal data structures
through the bpf_probe_read() helper. The idea is to capture the
kernel data structures to be accessed through bpf_probe_read()
and relocate them on different kernel versions.
On each host, right before bpf program load, the bpf loader
will look at the types of the running linux kernel through vmlinux BTF,
calculate the proper access offsets, and patch the instructions.
To accommodate this, three intrinsic functions
  preserve_{array,union,struct}_access_index
are introduced, which in clang will preserve the base pointer,
the struct/union/array access_index, and the struct/union debuginfo type
information. Later, a bpf IR pass can reconstruct the whole gep
access chain without looking at the gep itself.
This patch does the following:
. An IR pass is added to convert preserve_*_access_index to a
  global variable whose name encodes the getelementptr
  access pattern. The global variable has metadata
  attached to describe the corresponding struct/union
  debuginfo type.
. A SimplifyPatchable MachineInstruction pass is added
  to remove unnecessary loads.
. The BTF output pass is enhanced to generate relocation
  records located in the .BTF.ext section.
Typical CO-RE also needs support for global variables which can
be assigned different values on different hosts. For example,
the kernel version can be used to guard different versions of code.
This patch adds the support for patchable externs as well.
Example
=======
The following is an example.
struct pt_regs {
  long arg1;
  long arg2;
};
struct sk_buff {
  int i;
  struct net_device *dev;
};
#define _(x) (__builtin_preserve_access_index(x))
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) =
    (void *) 4;
extern __attribute__((section(".BPF.patchable_externs"))) unsigned __kernel_version;
int bpf_prog(struct pt_regs *ctx) {
  struct net_device *dev = 0;
  // ctx->arg* does not need bpf_probe_read
  if (__kernel_version >= 41608)
    bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg1)->dev));
  else
    bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg2)->dev));
  return dev != 0;
}
In the above, we want to translate the third argument of
bpf_probe_read() as relocations.
-bash-4.4$ clang -target bpf -O2 -g -S trace.c
The compiler will generate two new subsections in .BTF.ext,
OffsetReloc and ExternReloc.
OffsetReloc records the structure member offset operations,
and ExternReloc records the external globals, where
only u8, u16, u32 and u64 are supported.
  BPFOffsetReloc Size
  struct SecOffsetReloc for ELF section #1
  A number of struct BPFOffsetReloc for ELF section #1
  struct SecOffsetReloc for ELF section #2
  A number of struct BPFOffsetReloc for ELF section #2
  ...
  BPFExternReloc Size
  struct SecExternReloc for ELF section #1
  A number of struct BPFExternReloc for ELF section #1
  struct SecExternReloc for ELF section #2
  A number of struct BPFExternReloc for ELF section #2
struct BPFOffsetReloc {
  uint32_t InsnOffset;    ///< Byte offset in this section
  uint32_t TypeID;        ///< TypeID for the relocation
  uint32_t OffsetNameOff; ///< The string to traverse types
};
struct BPFExternReloc {
  uint32_t InsnOffset;    ///< Byte offset in this section
  uint32_t ExternNameOff; ///< The string for external variable
};
Note that only externs with the attribute section ".BPF.patchable_externs"
are considered for Extern Reloc, which will be patched by the bpf loader
right before the load.
For the above test case, two offset records and one extern record
will be generated:
OffsetReloc records:
  .long .Ltmp12   # Insn Offset
  .long 7         # TypeId
  .long 242       # Type Decode String
  .long .Ltmp18   # Insn Offset
  .long 7         # TypeId
  .long 242       # Type Decode String
ExternReloc record:
  .long .Ltmp5    # Insn Offset
  .long 165       # External Variable
In the string table:
  .ascii "0:1"              # string offset=242
  .ascii "__kernel_version" # string offset=165
The default member offset can be calculated as
the 2nd member offset (0 representing the 1st member) of struct "sk_buff".
The asm code:
.Ltmp5:
.Ltmp6:
        r2 = 0
        r3 = 41608
.Ltmp7:
.Ltmp8:
        .loc 1 18 9 is_stmt 0 # t.c:18:9
.Ltmp9:
        if r3 > r2 goto LBB0_2
.Ltmp10:
.Ltmp11:
        .loc 1 0 9 # t.c:0:9
.Ltmp12:
        r2 = 8
.Ltmp13:
        .loc 1 19 66 is_stmt 1 # t.c:19:66
.Ltmp14:
.Ltmp15:
        r3 = *(u64 *)(r1 + 0)
        goto LBB0_3
.Ltmp16:
.Ltmp17:
LBB0_2:
        .loc 1 0 66 is_stmt 0 # t.c:0:66
.Ltmp18:
        r2 = 8
        .loc 1 21 66 is_stmt 1 # t.c:21:66
.Ltmp19:
        r3 = *(u64 *)(r1 + 8)
.Ltmp20:
.Ltmp21:
LBB0_3:
        .loc 1 0 66 is_stmt 0 # t.c:0:66
        r3 += r2
        r1 = r10
.Ltmp22:
.Ltmp23:
.Ltmp24:
        r1 += -8
        r2 = 8
        call 4
For instructions .Ltmp12 and .Ltmp18, "r2 = 8", the number
8 is the structure offset based on the current BTF.
The loader needs to adjust it if it changes on the host.
For instruction .Ltmp5, "r2 = 0", the external variable
got a default value 0; the loader needs to supply an appropriate
value for the particular host.
Compiling to generate object code and disassemble:
0000000000000000 bpf_prog:
0: b7 02 00 00 00 00 00 00 r2 = 0
1: 7b 2a f8 ff 00 00 00 00 *(u64 *)(r10 - 8) = r2
2: b7 02 00 00 00 00 00 00 r2 = 0
3: b7 03 00 00 88 a2 00 00 r3 = 41608
4: 2d 23 03 00 00 00 00 00 if r3 > r2 goto +3 <LBB0_2>
5: b7 02 00 00 08 00 00 00 r2 = 8
6: 79 13 00 00 00 00 00 00 r3 = *(u64 *)(r1 + 0)
7: 05 00 02 00 00 00 00 00 goto +2 <LBB0_3>
0000000000000040 LBB0_2:
8: b7 02 00 00 08 00 00 00 r2 = 8
9: 79 13 08 00 00 00 00 00 r3 = *(u64 *)(r1 + 8)
0000000000000050 LBB0_3:
10: 0f 23 00 00 00 00 00 00 r3 += r2
11: bf a1 00 00 00 00 00 00 r1 = r10
12: 07 01 00 00 f8 ff ff ff r1 += -8
13: b7 02 00 00 08 00 00 00 r2 = 8
14: 85 00 00 00 04 00 00 00 call 4
Instructions #2, #5 and #8 need relocation resolutions from the loader.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61524
llvm-svn: 365503
The variables in the BTF DataSec type encode in-section offsets.
R_BPF_NONE should be generated instead of R_BPF_64_32.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D62460
llvm-svn: 361742
Currently, without -g, BTF sections may still be emitted with
data sections, e.g., for the linux kernel bpf selftest
test_tcp_check_syncookie_kern.c; the issue was discovered by Martin,
as shown below.
-bash-4.4$ bpftool btf dump file test_tcp_check_syncookie_kern.o
[1] VAR 'results' type_id=0, linkage=global-alloc
[2] VAR '_license' type_id=0, linkage=global-alloc
[3] DATASEC 'license' size=0 vlen=1
type_id=2 offset=0 size=4
[4] DATASEC 'maps' size=0 vlen=1
type_id=1 offset=0 size=28
Let us disable BTF generation if there is no debuginfo, which is
the original design.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61826
llvm-svn: 360556
This fixes https://bugs.llvm.org/show_bug.cgi?id=41355.
Previously with -r we printed the relocation section name instead of the target section name.
It was: "RELOCATION RECORDS FOR [.rel.text]"
Now it is: "RELOCATION RECORDS FOR [.text]"
Also, when a relocation target section had more than one relocation section,
we did not combine the output. Now we do.
Differential revision: https://reviews.llvm.org/D61312
llvm-svn: 360143
Summary:
Targets like ARM, MSP430, PPC, and SystemZ have complex behavior when
printing the address of a MachineOperand::MO_GlobalAddress. Move that
handling into a new overridden method in each subclass. A virtual
method was added to the base class for handling the generic case.
Refactors a few subclasses to support the target-independent %a, %c, and
%n.
The patch also contains small cleanups for AVRAsmPrinter and
SystemZAsmPrinter.
It seems that NVPTXTargetLowering is possibly missing some logic to
transform GlobalAddressSDNodes for
TargetLowering::LowerAsmOperandForConstraint to handle the "i" extended
inline assembly constraint.
Fixes:
- https://bugs.llvm.org/show_bug.cgi?id=41402
- https://github.com/ClangBuiltLinux/linux/issues/449
Reviewers: echristo, void
Reviewed By: void
Subscribers: void, craig.topper, jholewinski, dschuff, jyknight, dylanmckay, sdardis, nemanjai, javed.absar, sbc100, jgravelle-google, eraman, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, jrtc27, atanasyan, jsji, llvm-commits, kees, tpimh, nathanchance, peter.smith, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60887
llvm-svn: 359337
For a multi-dimensional array like below,
  int a[2][3];
the previous implementation generates one BTF_KIND_ARRAY type
like below:
  . element_type: int
  . index_type: unsigned int
  . number of elements: 6
This is not the best way to represent arrays, especially
when converting BTF back to headers, where users will see
  int a[6];
instead.
This patch generates proper support for multi-dimensional arrays.
For "int a[2][3]", two BTF_KIND_ARRAY types will be
generated:
  Type #n:
    . element_type: int
    . index_type: unsigned int
    . number of elements: 3
  Type #(n+1):
    . element_type: #n
    . index_type: unsigned int
    . number of elements: 2
The linux kernel already supports such a multi-dimensional
array representation properly.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D59943
llvm-svn: 357215
The .BTF.ext FuncInfoTable and LineInfoTable contain
information organized per ELF section. The current definitions
of FuncInfoTable/LineInfoTable are:
  std::unordered_map<uint32_t, std::vector<BTFFuncInfo>> FuncInfoTable
  std::unordered_map<uint32_t, std::vector<BTFLineInfo>> LineInfoTable
where the key is the section name offset in the string table.
The unordered_map may cause the order of the section output
to differ between platforms.
The same goes for the unordered map definition of
  std::unordered_map<std::string, std::unique_ptr<BTFKindDataSec>>
      DataSecEntries
where BTF_KIND_DATASEC entries may have different ordering
on different platforms.
This patch fixes the issue by using std::map, as shown below.
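Concretely, the three declarations quoted above become (std::map iterates in key order, so the output is deterministic across platforms):
  std::map<uint32_t, std::vector<BTFFuncInfo>> FuncInfoTable;
  std::map<uint32_t, std::vector<BTFLineInfo>> LineInfoTable;
  std::map<std::string, std::unique_ptr<BTFKindDataSec>> DataSecEntries;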
The test static-var-derived-type.ll is modified to generate two
DataSecs, which ensures the ordering is the same on all
supported platforms.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 357077
DataSecEntries is defined as an unordered_map since
order does not really matter:
  std::unordered_map<std::string, std::unique_ptr<BTFKindDataSec>>
      DataSecEntries;
This seems to make the test static-var-derived-type.ll flaky,
as the two sections ".bss" and ".readonly" have nondeterministic
ordering during map iteration, which decides the
output assembly code sequence of the BTF_KIND_DATASEC entries.
Fix the test to have only one data section to remove the
flakiness.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356731
Currently, the type id for a derived type is computed incorrectly.
For example,
  type #1: int
  type #2: ptr to #1
For a global variable "int *a", type #1 will be attributed to variable "a".
This is due to a bug which assigns the type id of the base type of
a derived type as the derived type's own type id. This happens
for the "const", "volatile", "restrict", "typedef" and "pointer" types.
This patch fixes this bug, fixes the existing test cases, and adds
a new one focusing on pointers plus other derived types.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356727
Two new kinds, BTF_KIND_VAR and BTF_KIND_DATASEC, are added.
BTF_KIND_VAR has the following specification:
  btf_type.name: var name
  btf_type.info: type kind
  btf_type.type: var type
  // btf_type is followed by one u32
  u32: varinfo (currently, only 0 - static, 1 - global allocated in elf sections)
Not all globals are supported in this patch. The following globals are supported:
  . static variables with or without section attributes
  . global variables with section attributes
The inclusion of globals with section attributes
is for future potential extraction of key/value
type ids from map definitions.
BTF_KIND_DATASEC has the following specification:
  btf_type.name: section name associated with variable or
                 one of .data/.bss/.readonly
  btf_type.info: type kind and vlen for # of variables
  btf_type.size: 0
  #vlen number of the following:
    u32: id of corresponding BTF_KIND_VAR
    u32: in-section offset of the var
    u32: the size of memory the var occupies
At the time of debug info emission, the data section
size is unknown, so btf_type.size = 0 for
BTF_KIND_DATASEC. The loader can patch it at
load time.
The in-section offset of the var is only available
for static variables. For global variables, the
loader needs to assign the global variable's symbol value in the
symbol table to the in-section offset.
The size of memory specifies the amount of memory a
variable occupies. Typically, it equals the type size, but
for certain structures, e.g.,
  struct tt {
    int a;
    int b;
    char c[];
  };
  static volatile struct tt s2 = {3, 4, "abcdefghi"};
the static variable s2 has a size of 20 (8 bytes for a and b,
plus 10 bytes for "abcdefghi" including the terminating null,
rounded up to the 4-byte alignment).
Note that the BTF_KIND_DATASEC name does not contain
the object name. The compiler does have the
input module name. For example, for the two cases below:
. clang -target bpf -O2 -g -c test.c
  The compiler knows the input file (module) is test.c
  and can generate sec names like test.data/test.bss etc.
. clang -target bpf -O2 -g -emit-llvm -c test.c -o - |
  llc -march=bpf -filetype=obj -o test.o
  The llc compiler has the input file as stdin, and
  would generate something like stdin.data/stdin.bss etc.,
  which does not really make sense.
For any user-specified section name, e.g.,
  static volatile int a __attribute__((section("id1")));
  static volatile const int b __attribute__((section("id2")));
the DataSecs with names "id1" and "id2" do not contain
information on whether the section is readonly or not.
The loader needs to check the corresponding elf section
flags for such information.
A simple example:
-bash-4.4$ cat t.c
int g1;
int g2 = 3;
const int g3 = 4;
static volatile int s1;
struct tt {
  int a;
  int b;
  char c[];
};
static volatile struct tt s2 = {3, 4, "abcdefghi"};
static volatile const int s3 = 4;
int m __attribute__((section("maps"), used)) = 4;
int test() { return g1 + g2 + g3 + s1 + s2.a + s3 + m; }
-bash-4.4$ clang -target bpf -O2 -g -S t.c
Checking t.s, 4 BTF_KIND_VARs are generated (s1, s2, s3 and m).
4 BTF_KIND_DATASECs are generated with the names
".data", ".bss", ".rodata" and "maps".
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D59441
llvm-svn: 356326
The previous commit 6bc58e6d3dbd ("[BPF] do not generate unused local/global types")
tried to exclude global variables from type generation. The condition was:
  if (Global.hasExternalLinkage())
    continue;
This is not right. It also excluded initialized globals.
The correct condition (from AssemblyWriter::printGlobal()) is:
  if (!GV->hasInitializer() && GV->hasExternalLinkage())
    Out << "external ";
Let us do the same in BTF type generation, as sketched below. Also added a test for it.
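A hedged sketch of the corrected skip condition in the BTF type generation loop (adapting the two fragments quoted above):
  // Skip only true externs: external linkage with no initializer.
  // Initialized globals now get their types generated.
  if (!Global.hasInitializer() && Global.hasExternalLinkage())
    continue;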
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356279
The kernel currently has a limit of 64KB for the number of types and
64KB for the size of the string subsection. A simple bcc tool,
runqlat.py, generates:
. a type section of ~33KB, roughly ~10K types
. a string section of ~17KB
The majority of the types come from the types referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch does the following optimizations to avoid generating
unused types:
. do not generate types for local variables unless they are
  function arguments;
. do not generate types for external globals.
If an external global is not used in the program, llvm
already removes it from the IR, so the global variable saving is
typically small. For runqlat.py, only one variable, "llvm.used",
is an external global.
The types for locals and external globals can be added back
once there is a usage for them.
After the above optimization, runqlat.py generates:
. a type section of ~1.5KB, roughly 500 types
. a string section of ~0.7KB
UPDATE:
resubmitted the patch after the previous revert with
the following fix:
use Global.hasExternalLinkage() to test "external"
linkage instead of using Global.getInitializer(),
which will assert on external variables.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356234
The kernel currently has a limit of 64KB for the number of types and
64KB for the size of the string subsection. A simple bcc tool,
runqlat.py, generates:
. a type section of ~33KB, roughly ~10K types
. a string section of ~17KB
The majority of the types come from the types referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch does the following optimizations to avoid generating
unused types:
. do not generate types for local variables unless they are
  function arguments;
. do not generate types for external globals.
If an external global is not used in the program, llvm
already removes it from the IR, so the global variable saving is
typically small. For runqlat.py, only one variable, "llvm.used",
is an external global.
The types for locals and external globals can be added back
once there is a usage for them.
After the above optimization, runqlat.py generates:
. a type section of ~1.5KB, roughly 500 types
. a string section of ~0.7KB
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356232
Summary:
A number of optimizations are inhibited by single-use TokenFactors not
being merged into the TokenFactor using them. This patch makes us consider
whether we can do the merge immediately.
Most test changes here are due to the change in visitation causing
minor reorderings and the associated reassociation of paired memory
operations.
CodeGen tests with non-reordering changes:
  X86/aligned-variadic.ll -- memory-based add folded into stored leaq value.
  X86/constant-combiners.ll -- optimizes out overlap between stores.
  X86/pr40631_deadstore_elision -- folds constant byte store into
  preceding quad word constant store.
Reviewers: RKSimon, craig.topper, spatel, efriedma, courbet
Reviewed By: courbet
Subscribers: dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, eraman, hiraditya, kbarton, jrtc27, atanasyan, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59260
llvm-svn: 356068
If there are no types/non-empty strings, do not generate
the .BTF section. If there is no func_info/line_info, do
not generate the .BTF.ext section.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D58936
llvm-svn: 355360
Supporting sub-register code-gen for XADD is like supporting any other load
and store patterns.
No new instruction is introduced.
  lock *(u32 *)(r1 + 0) += w2
has exactly the same underlying insn as:
  lock *(u32 *)(r1 + 0) += r2
The BPF_W width modifier has guaranteed they behave the same at runtime.
This patch merely teaches the BPF back-end that the BPF_W width modifier
can work with the GPR32 register class, and that's all that is needed for
sub-register code-gen support for XADD.
test/CodeGen/BPF/xadd.ll is updated to include sub-register code-gen tests.
A new testcase, test/CodeGen/BPF/xadd_legal.ll, is added to make sure the
legal case passes on all code-gen modes. It also tests the dead-Def
check on GPR32. If there is no proper handling like what has been done
inside BPFMIChecking.cpp:hasLivingDefs, this testcase will fail.
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
llvm-svn: 355126
Currently, LLVM will print an error like
  Unsupported relocation: try to compile with -O2 or above,
  or check your static variable usage
if the user defines more than one static variable in a single
ELF section (e.g., .bss or .data).
There is an ongoing effort to support static and global
variables in libbpf and the kernel. This patch removes the
assertion so user programs with static variables won't
fail compilation.
The static variable's in-section offset is written to
the "imm" field of the corresponding to-be-relocated
bpf instruction. Below is an example showing how the
application (e.g., libbpf) can relate variables to relocations.
-bash-4.4$ cat g1.c
static volatile long a = 2;
static volatile int b = 3;
int test() { return a + b; }
-bash-4.4$ clang -target bpf -O2 -c g1.c
-bash-4.4$ llvm-readelf -r g1.o
Relocation section '.rel.text' at offset 0x158 contains 2 entries:
Offset Info Type Symbol's Value Symbol's Name
0000000000000000 0000000400000001 R_BPF_64_64 0000000000000000 .data
0000000000000018 0000000400000001 R_BPF_64_64 0000000000000000 .data
-bash-4.4$ llvm-readelf -s g1.o
Symbol table '.symtab' contains 6 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FILE LOCAL DEFAULT ABS g1.c
2: 0000000000000000 8 OBJECT LOCAL DEFAULT 4 a
3: 0000000000000008 4 OBJECT LOCAL DEFAULT 4 b
4: 0000000000000000 0 SECTION LOCAL DEFAULT 4
5: 0000000000000000 64 FUNC GLOBAL DEFAULT 2 test
-bash-4.4$ llvm-objdump -d g1.o
g1.o: file format ELF64-BPF
Disassembly of section .text:
0000000000000000 test:
0: 18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
2: 79 11 00 00 00 00 00 00 r1 = *(u64 *)(r1 + 0)
3: 18 02 00 00 08 00 00 00 00 00 00 00 00 00 00 00 r2 = 8 ll
5: 61 20 00 00 00 00 00 00 r0 = *(u32 *)(r2 + 0)
6: 0f 10 00 00 00 00 00 00 r0 += r1
7: 95 00 00 00 00 00 00 00 exit
-bash-4.4$
. from symbol table, static variable "a" is in section #4, offset 0.
. from symbol table, static variable "b" is in section #4, offset 8.
. the first relocation is against symbol #4:
4: 0000000000000000 0 SECTION LOCAL DEFAULT 4
and in-section offset 0 (see llvm-objdump result)
. the second relocation is against symbol #4:
4: 0000000000000000 0 SECTION LOCAL DEFAULT 4
and in-section offset 8 (see llvm-objdump result)
. therefore, the first relocation is for variable "a", and
the second relocation is for variable "b".
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 354954