/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
/* Copyright (c) 2018 Facebook */
#ifndef __LIBBPF_BTF_H
#define __LIBBPF_BTF_H

#include <stdarg.h>
#include <stdbool.h>
#include <linux/btf.h>
#include <linux/types.h>

#include "libbpf_common.h"

#ifdef __cplusplus
extern "C" {
#endif

#define BTF_ELF_SEC ".BTF"
#define BTF_EXT_ELF_SEC ".BTF.ext"
/*
 * libbpf: allow specifying map definitions using BTF
 *
 * This patch adds support for a new way to define BPF maps. It relies on BTF
 * to describe mandatory and optional attributes of a map, and it naturally
 * captures the type information of the key and value. This eliminates the
 * need for the BPF_ANNOTATE_KV_PAIR hack and ensures key/value sizes are
 * always in sync with the key/value types.
 *
 * Relying on BTF, this approach allows for both forward and backward
 * compatibility w.r.t. extending supported map definition features. By
 * default, any unrecognized attributes are treated as an error, but it's
 * possible to relax this using the MAPS_RELAX_COMPAT flag. New attributes
 * added in the future will need to be optional.
 *
 * The outline of the new map definition (in short, BTF-defined maps) is as
 * follows:
 * 1. All the maps should be defined in the .maps ELF section. It's possible
 *    to have both "legacy" map definitions in `maps` sections and BTF-defined
 *    maps in .maps sections. Everything will still work transparently.
 * 2. The map declaration and initialization are done through a global/static
 *    variable of a struct type with a few mandatory and extra optional
 *    fields:
 *    - the type field is mandatory and specifies the type of the BPF map;
 *    - key/value fields are mandatory and capture key/value type/size
 *      information;
 *    - the max_entries attribute is optional; if max_entries is not specified
 *      or initialized, it has to be provided at runtime through the libbpf
 *      API before loading the bpf_object;
 *    - map_flags is optional and, if not defined, is assumed to be 0.
 * 3. Key/value fields should be **a pointer** to a type describing the
 *    key/value. The pointee type is assumed (and will be recorded as such and
 *    used for size determination) to be the type describing the key/value of
 *    the map. This is done to avoid allocating excessive amounts of space in
 *    the corresponding ELF sections for keys/values of big size.
 * 4. As some maps disallow having a BTF type ID associated with the
 *    key/value, it's possible to specify key/value sizes explicitly without
 *    associating a BTF type ID with them. Use the key_size and value_size
 *    fields to do that (see the example below).
 *
 * Here's an example of a simple ARRAY map definition:
 *
 *	struct my_value { int x, y, z; };
 *
 *	struct {
 *		int type;
 *		int max_entries;
 *		int *key;
 *		struct my_value *value;
 *	} btf_map SEC(".maps") = {
 *		.type = BPF_MAP_TYPE_ARRAY,
 *		.max_entries = 16,
 *	};
 *
 * This defines a BPF ARRAY map 'btf_map' with 16 elements. The key is of type
 * int and thus the key size is 4 bytes. The value is struct my_value of size
 * 12 bytes. This map can be used from C code exactly the same as maps defined
 * through struct bpf_map_def.
 *
 * Here's an example of a STACKMAP definition (which currently disallows BTF
 * type IDs for key/value):
 *
 *	struct {
 *		__u32 type;
 *		__u32 max_entries;
 *		__u32 map_flags;
 *		__u32 key_size;
 *		__u32 value_size;
 *	} stackmap SEC(".maps") = {
 *		.type = BPF_MAP_TYPE_STACK_TRACE,
 *		.max_entries = 128,
 *		.map_flags = BPF_F_STACK_BUILD_ID,
 *		.key_size = sizeof(__u32),
 *		.value_size = PERF_MAX_STACK_DEPTH * sizeof(struct bpf_stack_build_id),
 *	};
 *
 * This approach extends naturally to support map-in-map, by making the value
 * field another struct that describes the inner map. This feature is not
 * implemented yet. It's also possible to incrementally add features like
 * pinning with full backwards and forward compatibility. Support for static
 * initialization of BPF_MAP_TYPE_PROG_ARRAY using pointers to BPF programs is
 * also on the roadmap.
 *
 * Signed-off-by: Andrii Nakryiko <andriin@fb.com>
 * Acked-by: Song Liu <songliubraving@fb.com>
 * Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
 */
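
/*
 * Usage sketch (illustrative only, not from the patch above): looking up the
 * BTF-defined ARRAY map from BPF program code, assuming the "btf_map"
 * definition shown above and the bpf_map_lookup_elem() helper from
 * bpf_helpers.h:
 *
 *	int key = 0;
 *	struct my_value *v;
 *
 *	v = bpf_map_lookup_elem(&btf_map, &key);
 *	if (v)
 *		v->x += 1;
 */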
#define MAPS_ELF_SEC ".maps"

struct btf;
struct btf_ext;
struct btf_type;

struct bpf_object;

LIBBPF_API void btf__free(struct btf *btf);
LIBBPF_API struct btf *btf__new(const void *data, __u32 size);
LIBBPF_API struct btf *btf__new_empty(void);
LIBBPF_API struct btf *btf__parse(const char *path, struct btf_ext **btf_ext);
LIBBPF_API struct btf *btf__parse_elf(const char *path, struct btf_ext **btf_ext);
LIBBPF_API struct btf *btf__parse_raw(const char *path);
LIBBPF_API int btf__finalize_data(struct bpf_object *obj, struct btf *btf);
LIBBPF_API int btf__load(struct btf *btf);
LIBBPF_API __s32 btf__find_by_name(const struct btf *btf,
				   const char *type_name);
LIBBPF_API __s32 btf__find_by_name_kind(const struct btf *btf,
					const char *type_name, __u32 kind);
LIBBPF_API __u32 btf__get_nr_types(const struct btf *btf);
LIBBPF_API const struct btf_type *btf__type_by_id(const struct btf *btf,
						  __u32 id);
/*
 * libbpf: Handle BTF pointer sizes more carefully
 *
 * With libbpf and BTF it is pretty common to have libbpf built for one
 * architecture, while BTF information was generated for a different
 * architecture (typically, but not always, BPF). In such a case, the size of
 * a pointer might differ between architectures. libbpf previously always
 * assumed that the pointer size for BTF is the same as the native
 * architecture pointer size, but that breaks when libbpf is built as a 32-bit
 * library while the BTF is for a 64-bit architecture.
 *
 * To solve this, add a heuristic that determines the pointer size by
 * searching for a `long` or `unsigned long` integer type and using its size
 * as the pointer size. Also allow overriding the pointer size with a new API,
 * btf__set_pointer_size(), for cases where the application knows which
 * pointer size should be used. A user application can check what libbpf
 * "guessed" by looking at the result of btf__pointer_size(). If it's not 0,
 * then libbpf successfully determined the pointer size; otherwise the native
 * arch pointer size will be used.
 *
 * For cases where BTF is parsed from an ELF file, use the ELF's class (32-bit
 * or 64-bit) to determine the pointer size.
 *
 * Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf")
 * Fixes: 351131b51c7a ("libbpf: add btf_dump API for BTF-to-C conversion")
 * Signed-off-by: Andrii Nakryiko <andriin@fb.com>
 * Signed-off-by: Alexei Starovoitov <ast@kernel.org>
 * Link: https://lore.kernel.org/bpf/20200813204945.1020225-5-andriin@fb.com
 */
LIBBPF_API size_t btf__pointer_size(const struct btf *btf);
LIBBPF_API int btf__set_pointer_size(struct btf *btf, size_t ptr_sz);
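
/*
 * Usage sketch (illustrative only, not part of the API): force 8-byte
 * pointers when libbpf could not guess the target pointer size. The "btf"
 * variable is assumed to come from btf__parse() or a similar call.
 *
 *	if (btf__pointer_size(btf) == 0) {
 *		int err = btf__set_pointer_size(btf, 8);
 *
 *		if (err)
 *			return err;
 *	}
 */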
LIBBPF_API __s64 btf__resolve_size(const struct btf *btf, __u32 type_id);
LIBBPF_API int btf__resolve_type(const struct btf *btf, __u32 type_id);
LIBBPF_API int btf__align_of(const struct btf *btf, __u32 id);
LIBBPF_API int btf__fd(const struct btf *btf);
LIBBPF_API void btf__set_fd(struct btf *btf, int fd);
LIBBPF_API const void *btf__get_raw_data(const struct btf *btf, __u32 *size);
LIBBPF_API const char *btf__name_by_offset(const struct btf *btf, __u32 offset);
LIBBPF_API int btf__get_from_id(__u32 id, struct btf **btf);
LIBBPF_API int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
				    __u32 expected_key_size,
				    __u32 expected_value_size,
				    __u32 *key_type_id, __u32 *value_type_id);

LIBBPF_API struct btf_ext *btf_ext__new(__u8 *data, __u32 size);
LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext);
LIBBPF_API const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext,
					     __u32 *size);
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_func_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
int btf_ext__reloc_func_info(const struct btf *btf,
			     const struct btf_ext *btf_ext,
			     const char *sec_name, __u32 insns_cnt,
			     void **func_info, __u32 *cnt);
LIBBPF_API LIBBPF_DEPRECATED("btf_ext__reloc_line_info was never meant as a public API and has wrong assumptions embedded in it; it will be removed in the future libbpf versions")
int btf_ext__reloc_line_info(const struct btf *btf,
			     const struct btf_ext *btf_ext,
			     const char *sec_name, __u32 insns_cnt,
			     void **line_info, __u32 *cnt);
LIBBPF_API __u32 btf_ext__func_info_rec_size(const struct btf_ext *btf_ext);
LIBBPF_API __u32 btf_ext__line_info_rec_size(const struct btf_ext *btf_ext);

LIBBPF_API struct btf *libbpf_find_kernel_btf(void);

/*
 * libbpf: Allow modification of BTF and add btf__add_str API
 *
 * Allow the internal BTF representation to switch from the default read-only
 * mode, in which raw BTF data is a single non-modifiable block of memory with
 * the BTF header, types, and strings laid out sequentially and contiguously,
 * into a writable representation with types and strings data split out into
 * separate memory regions that can be dynamically expanded.
 *
 * Such a writable internal representation is transparent to users of libbpf
 * APIs, but allows appending new types and strings at the end of the BTF,
 * which is a typical use case when generating BTF programmatically. All the
 * basic guarantees of the BTF types and strings layout are preserved, i.e.,
 * the user can get a `struct btf_type *` pointer and read it directly. Such
 * btf_type pointers might be invalidated if the BTF is modified, so some care
 * is required in such mixed read/write scenarios.
 *
 * The switch from the read-only to the writable configuration happens
 * automatically the first time the user attempts to modify the BTF by adding
 * a new type or a new string. It is still possible to get raw BTF data, which
 * is a single piece of memory that can be persisted in an ELF section or into
 * a file as raw BTF. Such raw data memory is also still owned by the BTF and
 * will be freed either when the BTF object is freed or when another
 * modification to the BTF happens, as any modification invalidates the raw
 * BTF representation.
 *
 * This patch adds the first two BTF manipulation APIs: btf__add_str(), which
 * allows adding arbitrary strings to the BTF string section, and
 * btf__find_str(), which allows finding an existing string's offset, but not
 * adding it if it's missing. All added strings are automatically
 * deduplicated. This is achieved by maintaining an additional string lookup
 * index for all unique strings. This index is built when the BTF is switched
 * to modifiable mode. If at that time the BTF strings section contained
 * duplicate strings, they are not de-duplicated. This is done specifically to
 * not modify the existing content of the BTF (types, their string offsets,
 * etc.), which could cause confusion; this property is especially important
 * if there is a struct btf_ext associated with the struct btf. By following
 * this "imperfect deduplication" process, btf_ext is kept consistent and
 * correct. If deduplication of strings is necessary, it can be forced by
 * doing BTF deduplication, at which point all strings will be eagerly
 * deduplicated and all string offsets, both in struct btf and struct btf_ext,
 * will be updated.
 *
 * Signed-off-by: Andrii Nakryiko <andriin@fb.com>
 * Signed-off-by: Alexei Starovoitov <ast@kernel.org>
 * Acked-by: John Fastabend <john.fastabend@gmail.com>
 * Link: https://lore.kernel.org/bpf/20200926011357.2366158-6-andriin@fb.com
 */
LIBBPF_API int btf__find_str(struct btf *btf, const char *s);
LIBBPF_API int btf__add_str(struct btf *btf, const char *s);
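
/*
 * Usage sketch (illustrative only): add a string to a freshly created BTF
 * object and look it up again. Both calls return the string's offset within
 * the BTF string section, or a negative error code.
 *
 *	struct btf *btf = btf__new_empty();
 *	int off, again;
 *
 *	off = btf__add_str(btf, "task_struct");
 *	again = btf__find_str(btf, "task_struct");	// same offset as "off"
 *	btf__free(btf);
 */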

/*
 * libbpf: Add BTF writing APIs
 *
 * Add APIs for appending new BTF types at the end of a BTF object.
 *
 * Each BTF kind has an API of the form btf__add_<kind>(). For types that have
 * a variable number of additional items (struct/union, enum, func_proto,
 * datasec), an additional API is provided to emit each such item. E.g., for
 * emitting a struct, one would use the following sequence of API calls:
 *
 *	btf__add_struct(...);
 *	btf__add_field(...);
 *	...
 *	btf__add_field(...);
 *
 * Each btf__add_field() will ensure that the last BTF type is of STRUCT or
 * UNION kind and will automatically increment that type's vlen field.
 *
 * All strings are provided as C strings (const char *), not string offsets.
 * This significantly improves the usability of the BTF writer APIs. All such
 * strings are automatically appended to the string section, or an existing
 * string is reused if it was already added previously.
 *
 * Each API attempts to do all reasonable validations, like enforcing
 * non-empty names for entities with required names, proper value bounds,
 * various bit offset restrictions, etc. Type ID validation is minimal because
 * it's possible to emit a type that refers to a type that will be emitted
 * later, so libbpf has no way to enforce such cases. The user must be careful
 * to properly emit all the necessary types and specify type IDs that will be
 * valid in the finally generated BTF.
 *
 * Each of the btf__add_<kind>() APIs returns the new type ID on success or a
 * negative value on error. APIs like btf__add_field() that emit additional
 * items return zero on success and a negative value on error.
 *
 * Signed-off-by: Andrii Nakryiko <andriin@fb.com>
 * Signed-off-by: Alexei Starovoitov <ast@kernel.org>
 * Acked-by: John Fastabend <john.fastabend@gmail.com>
 * Link: https://lore.kernel.org/bpf/20200929020533.711288-2-andriin@fb.com
 */
LIBBPF_API int btf__add_int(struct btf *btf, const char *name, size_t byte_sz, int encoding);
LIBBPF_API int btf__add_ptr(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_array(struct btf *btf,
			      int index_type_id, int elem_type_id, __u32 nr_elems);

/* struct/union construction APIs */
LIBBPF_API int btf__add_struct(struct btf *btf, const char *name, __u32 sz);
LIBBPF_API int btf__add_union(struct btf *btf, const char *name, __u32 sz);
LIBBPF_API int btf__add_field(struct btf *btf, const char *name, int field_type_id,
			      __u32 bit_offset, __u32 bit_size);

/* enum construction APIs */
LIBBPF_API int btf__add_enum(struct btf *btf, const char *name, __u32 bytes_sz);
LIBBPF_API int btf__add_enum_value(struct btf *btf, const char *name, __s64 value);

enum btf_fwd_kind {
	BTF_FWD_STRUCT = 0,
	BTF_FWD_UNION = 1,
	BTF_FWD_ENUM = 2,
};

LIBBPF_API int btf__add_fwd(struct btf *btf, const char *name, enum btf_fwd_kind fwd_kind);
LIBBPF_API int btf__add_typedef(struct btf *btf, const char *name, int ref_type_id);
LIBBPF_API int btf__add_volatile(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_const(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_restrict(struct btf *btf, int ref_type_id);

/* func and func_proto construction APIs */
LIBBPF_API int btf__add_func(struct btf *btf, const char *name,
			     enum btf_func_linkage linkage, int proto_type_id);
LIBBPF_API int btf__add_func_proto(struct btf *btf, int ret_type_id);
LIBBPF_API int btf__add_func_param(struct btf *btf, const char *name, int type_id);

/* var & datasec construction APIs */
LIBBPF_API int btf__add_var(struct btf *btf, const char *name, int linkage, int type_id);
LIBBPF_API int btf__add_datasec(struct btf *btf, const char *name, __u32 byte_sz);
LIBBPF_API int btf__add_datasec_var_info(struct btf *btf, int var_type_id,
					 __u32 offset, __u32 byte_sz);
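
/*
 * Usage sketch (illustrative only): programmatically construct
 *
 *	struct pair { int x; int y; };
 *
 * in an empty BTF object. For plain (non-bitfield) members, bit_size is 0 and
 * bit_offset is the member's offset in bits.
 *
 *	struct btf *btf = btf__new_empty();
 *	int int_id, pair_id;
 *
 *	int_id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED);
 *	pair_id = btf__add_struct(btf, "pair", 8);	// 8 bytes total
 *	btf__add_field(btf, "x", int_id, 0, 0);		// member at bit offset 0
 *	btf__add_field(btf, "y", int_id, 32, 0);	// member at bit offset 32
 */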

struct btf_dedup_opts {
	unsigned int dedup_table_size;
	bool dont_resolve_fwds;
};

LIBBPF_API int btf__dedup(struct btf *btf, struct btf_ext *btf_ext,
			  const struct btf_dedup_opts *opts);
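
/*
 * Usage sketch (illustrative only): force eager type and string
 * deduplication of a BTF object together with its .BTF.ext data (pass NULL
 * for btf_ext if there is none).
 *
 *	struct btf_dedup_opts opts = { .dont_resolve_fwds = false };
 *	int err = btf__dedup(btf, btf_ext, &opts);
 *
 *	if (err)
 *		return err;
 */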

struct btf_dump;

struct btf_dump_opts {
	void *ctx;
};

typedef void (*btf_dump_printf_fn_t)(void *ctx, const char *fmt, va_list args);

LIBBPF_API struct btf_dump *btf_dump__new(const struct btf *btf,
					  const struct btf_ext *btf_ext,
					  const struct btf_dump_opts *opts,
					  btf_dump_printf_fn_t printf_fn);
LIBBPF_API void btf_dump__free(struct btf_dump *d);

LIBBPF_API int btf_dump__dump_type(struct btf_dump *d, __u32 id);
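
/*
 * Usage sketch (illustrative only): dump every type in a BTF object as C
 * source, assuming <stdio.h> is available for vprintf():
 *
 *	static void print_cb(void *ctx, const char *fmt, va_list args)
 *	{
 *		vprintf(fmt, args);
 *	}
 *
 *	struct btf_dump_opts opts = { .ctx = NULL };
 *	struct btf_dump *d = btf_dump__new(btf, NULL, &opts, print_cb);
 *	__u32 i;
 *
 *	for (i = 1; i <= btf__get_nr_types(btf); i++)
 *		btf_dump__dump_type(d, i);
 *	btf_dump__free(d);
 */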

struct btf_dump_emit_type_decl_opts {
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* optional field name for type declaration, e.g.:
	 * - struct my_struct <FNAME>
	 * - void (*<FNAME>)(int)
	 * - char (*<FNAME>)[123]
	 */
	const char *field_name;
	/* extra indentation level (in number of tabs) to emit for multi-line
	 * type declarations (e.g., anonymous struct); applies to lines
	 * starting from the second one (the first line is assumed to have the
	 * necessary indentation already)
	 */
	int indent_level;
	/* strip all the const/volatile/restrict mods */
	bool strip_mods;
};
#define btf_dump_emit_type_decl_opts__last_field strip_mods

LIBBPF_API int
btf_dump__emit_type_decl(struct btf_dump *d, __u32 id,
			 const struct btf_dump_emit_type_decl_opts *opts);
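
/*
 * Usage sketch (illustrative only): emit a C declaration of type "id" for a
 * variable named "my_var", using the opts helper macro from libbpf_common.h:
 *
 *	DECLARE_LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
 *			    .field_name = "my_var",
 *			    .indent_level = 1);
 *	int err = btf_dump__emit_type_decl(d, id, &opts);
 */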

/*
 * A set of helpers for easier BTF types handling
 */
static inline __u16 btf_kind(const struct btf_type *t)
{
	return BTF_INFO_KIND(t->info);
}

static inline __u16 btf_vlen(const struct btf_type *t)
{
	return BTF_INFO_VLEN(t->info);
}

static inline bool btf_kflag(const struct btf_type *t)
{
	return BTF_INFO_KFLAG(t->info);
}

/*
 * libbpf: Add support for extracting kernel symbol addresses
 *
 * Add support for another special kind of extern in BPF code (in addition to
 * the existing Kconfig externs): kernel symbol externs. Such externs allow
 * BPF code to "know" a kernel symbol's address and either use it for
 * comparisons with kernel data structures (e.g., struct file's f_op pointer,
 * to distinguish different kinds of file), or, with the help of
 * bpf_probe_read_kernel(), to follow pointers and read data from global
 * variables. Kernel symbol addresses are found through /proc/kallsyms, which
 * should be present in the system.
 *
 * Currently, such kernel symbol variables are typeless: they have to be
 * defined as `extern const void <symbol>` and the only operation you can do
 * (in C code) with them is to take their address. Such externs should reside
 * in a special section, '.ksyms'. The bpf_helpers.h header provides the
 * __ksym macro for this. Strong vs. weak semantics stay the same as with
 * Kconfig externs. If a symbol is not found in /proc/kallsyms, this is a
 * failure for a strong (non-weak) extern, but weak externs default to 0.
 *
 * If the same symbol is defined multiple times in /proc/kallsyms, it is an
 * error if any of the associated addresses differ. In that case the address
 * is ambiguous, so libbpf errs on the side of caution rather than confusing
 * the user with a randomly chosen address.
 *
 * In the future, once the kernel is extended with variable BTF information,
 * such ksym externs will be supported in a typed version, which will allow a
 * BPF program to read a variable's contents directly, similarly to how it's
 * done for fentry/fexit input arguments.
 *
 * Signed-off-by: Andrii Nakryiko <andriin@fb.com>
 * Signed-off-by: Alexei Starovoitov <ast@kernel.org>
 * Reviewed-by: Hao Luo <haoluo@google.com>
 * Link: https://lore.kernel.org/bpf/20200619231703.738941-3-andriin@fb.com
 */
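
/*
 * Usage sketch (illustrative only, BPF program side, not part of this
 * header): a typeless ksym extern declared with the __ksym macro from
 * bpf_helpers.h. The symbol name below is just an example; only the
 * variable's address may be taken.
 *
 *	extern const void socket_file_ops __ksym;
 *
 *	SEC("raw_tp/sys_enter")
 *	int handle(void *ctx)
 *	{
 *		unsigned long addr = (unsigned long)&socket_file_ops;
 *
 *		bpf_printk("socket_file_ops is at %lx", addr);
 *		return 0;
 *	}
 */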
static inline bool btf_is_void(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_UNKN;
}

static inline bool btf_is_int(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_INT;
}

static inline bool btf_is_ptr(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_PTR;
}

static inline bool btf_is_array(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_ARRAY;
}

static inline bool btf_is_struct(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_STRUCT;
}

static inline bool btf_is_union(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_UNION;
}

static inline bool btf_is_composite(const struct btf_type *t)
{
	__u16 kind = btf_kind(t);

	return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
}
static inline bool btf_is_enum(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_ENUM;
}

static inline bool btf_is_fwd(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FWD;
}

static inline bool btf_is_typedef(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_TYPEDEF;
}

static inline bool btf_is_volatile(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_VOLATILE;
}

static inline bool btf_is_const(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_CONST;
}

static inline bool btf_is_restrict(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_RESTRICT;
}

static inline bool btf_is_mod(const struct btf_type *t)
{
	__u16 kind = btf_kind(t);

	return kind == BTF_KIND_VOLATILE ||
	       kind == BTF_KIND_CONST ||
	       kind == BTF_KIND_RESTRICT;
}
static inline bool btf_is_func(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FUNC;
}

static inline bool btf_is_func_proto(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_FUNC_PROTO;
}

static inline bool btf_is_var(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_VAR;
}

static inline bool btf_is_datasec(const struct btf_type *t)
{
	return btf_kind(t) == BTF_KIND_DATASEC;
}

static inline __u8 btf_int_encoding(const struct btf_type *t)
{
	return BTF_INT_ENCODING(*(__u32 *)(t + 1));
}

static inline __u8 btf_int_offset(const struct btf_type *t)
{
	return BTF_INT_OFFSET(*(__u32 *)(t + 1));
}

static inline __u8 btf_int_bits(const struct btf_type *t)
{
	return BTF_INT_BITS(*(__u32 *)(t + 1));
}
static inline struct btf_array *btf_array(const struct btf_type *t)
{
	return (struct btf_array *)(t + 1);
}

static inline struct btf_enum *btf_enum(const struct btf_type *t)
{
	return (struct btf_enum *)(t + 1);
}

static inline struct btf_member *btf_members(const struct btf_type *t)
{
	return (struct btf_member *)(t + 1);
}

/* Get bit offset of a member with specified index. */
static inline __u32 btf_member_bit_offset(const struct btf_type *t,
					  __u32 member_idx)
{
	const struct btf_member *m = btf_members(t) + member_idx;
	bool kflag = btf_kflag(t);

	return kflag ? BTF_MEMBER_BIT_OFFSET(m->offset) : m->offset;
}

/*
 * Get bitfield size of a member, assuming t is BTF_KIND_STRUCT or
 * BTF_KIND_UNION. If the member is not a bitfield, zero is returned.
 */
static inline __u32 btf_member_bitfield_size(const struct btf_type *t,
					     __u32 member_idx)
{
	const struct btf_member *m = btf_members(t) + member_idx;
	bool kflag = btf_kflag(t);

	return kflag ? BTF_MEMBER_BITFIELD_SIZE(m->offset) : 0;
}
static inline struct btf_param *btf_params(const struct btf_type *t)
{
	return (struct btf_param *)(t + 1);
}

static inline struct btf_var *btf_var(const struct btf_type *t)
{
	return (struct btf_var *)(t + 1);
}

static inline struct btf_var_secinfo *
btf_var_secinfos(const struct btf_type *t)
{
	return (struct btf_var_secinfo *)(t + 1);
}

#ifdef __cplusplus
} /* extern "C" */
#endif

#endif /* __LIBBPF_BTF_H */