This allows instructions like `lla a5, (0xFF + end) - 4` (supported by GNU as)
to be parsed.
Add a missing test that an operand like `foo + foo` is not allowed.
Reviewed By: jrtc27
Differential Revision: https://reviews.llvm.org/D92293
- most importantly, fix a use-after-free when using thin archives by moving
the archive's unique_ptr into the arena allocator (see the sketch after this
list). This ports D65565 to MachO.
- correctly demangle symbol names from archives in diagnostics
- add a test for thin archives -- it catches this UaF, but only when run under
ASan (it also covers the demangling fix)
- make forceLoadArchive() use addFile() with a bool parameter so the
archive-loading code lives in fewer places. No behavior change; this matches
the COFF port a bit better.
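To illustrate the use-after-free and the fix, here is a minimal, hypothetical
sketch (not lld's actual code; all names are made up): data handed out by the
archive points into memory the archive owns, so the archive object has to be
moved into a long-lived arena rather than destroyed at the end of the loading
function.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for the real archive class; the string below
// represents memory owned by the archive (e.g. a thin archive member's data).
struct Archive {
  explicit Archive(std::string p) : path(std::move(p)) {}
  std::string path;
  const char *memberData() const { return path.c_str(); }
};

// Long-lived "arena": keeps archives alive for the whole link.
std::vector<std::unique_ptr<Archive>> arena;

const char *loadArchive(const std::string &path) {
  auto archive = std::make_unique<Archive>(path);
  const char *data = archive->memberData();
  // Before the fix, `archive` died at the end of this scope, so any later
  // read of `data` was a use-after-free (ASan catches it with thin archives).
  // The fix transfers ownership into the arena instead.
  arena.push_back(std::move(archive));
  return data;
}

int main() {
  const char *data = loadArchive("libfoo.a");
  std::cout << data << "\n"; // safe: the arena still owns the archive
}
```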
Differential Revision: https://reviews.llvm.org/D92360
When handling a DSOLocalEquivalent operand change:
- Remove the assertion checking that the `To` type and the current type are
the same. This is not always a requirement.
- Add a missing bitcast from an old DSOLocalEquivalent to the type of
the new one.
When we fail to select an optimized compare against 0, fall back to selecting
Bcc rather than TB(N)Z.
Also simplify selectCompareBranch a little while we're here, because the logic
was kind of hard to follow.
At -O0, this is a 0.1% geomean code size improvement for CTMark.
A simple example of where this can kick in is here:
https://godbolt.org/z/4rra6P
In the example above, GlobalISel currently produces a subs, cset, and tbnz.
SelectionDAG, on the other hand, just emits a compare and b.le.
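For reference, a function of roughly this shape can trigger the path in
question (a hedged reconstruction of the kind of code the godbolt link shows,
not copied from it): a signed compare against 0 feeding a conditional branch.

```cpp
// Per the description above: at -O0, GlobalISel previously lowered this kind
// of branch via subs + cset + tbnz; with this change it can fall back to a
// plain compare + b.le, matching SelectionDAG.
int pick(int x) {
  if (x <= 0)
    return 1;
  return 2;
}
```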
Differential Revision: https://reviews.llvm.org/D92358
PDL patterns are now supported via a new `PDLPatternModule` class. This class contains a ModuleOp with the pdl::PatternOp operations representing the patterns, as well as a collection of registered C++ functions for native constraints/creations/rewrites/etc. that may be invoked via the pdl patterns. Instances of this class are added to an OwningRewritePatternList in the same fashion as C++ RewritePatterns, i.e. via the `insert` method.
The PDL bytecode is an in-memory representation of the PDL interpreter dialect that can be efficiently interpreted/executed. The representation of the bytecode boils down to a code array (for opcodes, memory locations, etc.) and a memory buffer (for storing attributes, operations, values, and any other necessary data). The bytecode operations are effectively a 1-1 mapping to the PDLInterp dialect operations, with a few exceptions in cases where the in-memory representation of the bytecode can be more efficient than the MLIR representation. For example, a generic `AreEqual` bytecode op can be used to represent AreEqualOp, CheckAttributeOp, and CheckTypeOp.
The execution of the bytecode is split into two phases: matching and rewriting. When matching, all of the matched patterns are collected to avoid the overhead of re-running parts of the matcher. These matched patterns are then considered alongside the native C++ patterns, which rewrite immediately in-place via `RewritePattern::matchAndRewrite`, for the given root operation. When a PDL pattern is matched and has the highest benefit, it is passed back to the bytecode to execute its rewriter.
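To make the "code array + memory buffer" design concrete, here is a heavily
simplified, self-contained C++ sketch of that execution model (illustrative
only; it is not MLIR's actual bytecode, and all names and opcodes are made
up):

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Opcodes loosely analogous to PDLInterp operations.
enum Opcode : uint64_t { AreEqual, Finalize };

// The "memory buffer": uniform slots holding whatever the matcher needs
// (integers here stand in for attributes/operations/values).
using Memory = std::vector<int64_t>;

// The "code array": opcodes interleaved with operands, where an operand is
// either an index into the memory buffer or a jump destination.
bool execute(const std::vector<uint64_t> &code, Memory &mem) {
  size_t pc = 0;
  while (true) {
    switch (static_cast<Opcode>(code[pc])) {
    case AreEqual: {
      // Operands: lhs slot, rhs slot, destination on failure.
      uint64_t lhs = code[pc + 1], rhs = code[pc + 2], onFailure = code[pc + 3];
      if (mem[lhs] != mem[rhs]) { pc = onFailure; break; }
      pc += 4;
      break;
    }
    case Finalize:
      // Operand: 1 = match succeeded, 0 = match failed.
      return code[pc + 1] != 0;
    }
  }
}

int main() {
  Memory mem = {42, 42};
  // AreEqual(mem[0], mem[1]) -> on failure jump to index 6 (Finalize 0).
  std::vector<uint64_t> code = {AreEqual, 0, 1, 6, Finalize, 1, Finalize, 0};
  std::cout << (execute(code, mem) ? "matched\n" : "no match\n");
}
```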
Differential Revision: https://reviews.llvm.org/D89107
This is the same logic that ld64 uses to determine which sections
contain functions. This was added so that we could determine which
STABS entries should be N_FUN.
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D92430
This addresses a lot of the comments in {D89257}. Ideally it'd have been
done in the same diff, but the commits in between make that difficult.
This diff:
* Implements N_GSYM and N_STSYM, the STABS for global and static symbols
* Has the STABS reflect the section IDs of their referent symbols
* Ensures we don't fail when encountering absolute symbols or files with
no debug info
* Sorts STABS symbols by file to minimize the number of N_OSO entries
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D92366
We should also set the modtime when running LTO. That will be done in a
future diff, together with support for the `-object_path_lto` flag.
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D91318
ld64 emits string tables which start with a space and a zero byte. We
match its behavior here since some tools depend on it.
Similar rationale to {D89561}.
Reviewed By: #lld-macho, smeenai
Differential Revision: https://reviews.llvm.org/D89639
Symbols of the same type must be laid out contiguously: following ld64's
lead, we choose to emit all local symbols first, then external symbols,
and finally undefined symbols. For each symbol type, the LC_DYSYMTAB
load command will record the range (start index and total number) of
those symbols in the symbol table.
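A simplified sketch of that layout step (hypothetical types; not lld's actual
code): partition the symbol table into the three groups and record each
group's start index and count, which is what the LC_DYSYMTAB fields describe.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

enum class Kind { Local, External, Undefined };

struct Symbol {
  Kind kind;
  // ... name, value, etc.
};

struct DysymtabRanges {
  uint32_t ilocalsym, nlocalsym;   // first local symbol index, count
  uint32_t iextdefsym, nextdefsym; // first external symbol index, count
  uint32_t iundefsym, nundefsym;   // first undefined symbol index, count
};

DysymtabRanges layoutSymtab(std::vector<Symbol> &symbols) {
  // Locals first, then externals, then undefineds; stable so that the
  // relative order within each group is preserved.
  auto externBegin = std::stable_partition(
      symbols.begin(), symbols.end(),
      [](const Symbol &s) { return s.kind == Kind::Local; });
  auto undefBegin = std::stable_partition(
      externBegin, symbols.end(),
      [](const Symbol &s) { return s.kind == Kind::External; });

  DysymtabRanges r;
  r.ilocalsym = 0;
  r.nlocalsym = static_cast<uint32_t>(externBegin - symbols.begin());
  r.iextdefsym = r.nlocalsym;
  r.nextdefsym = static_cast<uint32_t>(undefBegin - externBegin);
  r.iundefsym = r.iextdefsym + r.nextdefsym;
  r.nundefsym = static_cast<uint32_t>(symbols.end() - undefBegin);
  return r;
}
```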
This work was motivated by the fact that LLDB won't search for debug
info if LC_DYSYMTAB says there are no local symbols (since STABS symbols
are all local symbols). With this change, LLDB is now able to display
the source lines at a given breakpoint when debugging our binaries.
Some tests had to be updated due to local symbol names now appearing in
`llvm-objdump`'s output.
Reviewed By: #lld-macho, smeenai, clayborg
Differential Revision: https://reviews.llvm.org/D89285
Debug sections contain a large amount of data. In order not to bloat the size
of the final binary, we remove them and instead emit STABS symbols for
`dsymutil` and the debugger to locate their contents in the object files.
With this diff, `dsymutil` is able to locate the debug info. However, we need
a few more features before `lldb` is able to work well with our binaries --
e.g. having `LC_DYSYMTAB` accurately reflect the number of local symbols,
emitting `LC_UUID`, and more. Those will be handled in follow-up diffs.
Note also that the STABS we emit differ slightly from what ld64 does. First, we
emit the path to the source file as one `N_SO` symbol instead of two. (`ld64`
emits one `N_SO` for the dirname and one for the basename.) Second, we do not
emit `N_BNSYM` and `N_ENSYM` STABS to mark the start and end of functions,
because the `N_FUN` STABS already serve that purpose. @clayborg recommended
these changes based on his knowledge of what the debugging tools look for.
Additionally, this current implementation doesn't accurately reflect the size
of function symbols. It uses the size of their containing sections as a proxy,
but that is only accurate if `.subsections_via_symbols` is set, and if there
isn't an `N_ALT_ENTRY` in that particular subsection. I think we have two
options to solve this:
1. We can split up subsections by symbol even if `.subsections_via_symbols`
is not set, but include constraints to ensure those subsections retain
their order in the final output. This is `ld64`'s approach.
2. We could just add a `size` field to our `Symbol` class. This seems simpler,
and I'm more inclined toward it, but I'm not sure if there are use cases
that it doesn't handle well. As such I'm punting on the decision for now.
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D89257
* Enable PIE by default if targeting 10.6 or above on x86-64. (The
manpage says 10.7, but that actually applies only to i386, and in
general varies based on the target platform. I didn't update the
manpage because listing all the different behaviors would make for a
pretty long description.)
* Add support for `-no_pie`
* Remove `HelpHidden` from `-pie`
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D92362
This reverts commit cf1c774d6a.
This change caused several regressions in the gdb test suite; at least a
sample of them were due to line-zero instructions making breakpoints
un-lined. I think they're worth investigating/understanding more (and
possibly addressing) before moving forward with this change.
Revert "[FastISel] NFC: Clean up unnecessary bookkeeping"
This reverts commit 3fd39d3694.
Revert "[FastISel] NFC: Remove obsolete -fast-isel-sink-local-values option"
This reverts commit a474657e30.
Revert "Remove static function unused after cf1c774."
This reverts commit dc35368ccf.
Revert "[lldb] Fix TestThreadStepOut.py after "Flush local value map on every instruction""
This reverts commit 53a14a47ee.
This patch carries forward our aim to remove the offset field from
qRegisterInfo packets and the XML register description. I have created a new
function which returns whether offset fields are dynamic, meaning the client
can calculate offsets on its own based on the register number sequence and
register sizes. For now this function only returns true for
NativeRegisterContextLinux_arm64, but we can test this for other architectures
and make it standard later.
As a consequence, we do not send the offset field from lldb-server (arm64 for
now); other stubs don't have an offset field, so this won't affect them for now.
On the client side, we have replaced the previous offset calculation
algorithm with a new scheme, where we sort all primary registers in increasing
order of remote regnum and then calculate offsets incrementally.
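A small sketch of that client-side scheme (hypothetical types, not lldb's
actual code): sort the primary registers by remote register number, then
assign offsets by accumulating sizes with no unused spacing.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RegInfo {
  uint32_t remoteRegnum;   // regnum assigned by the remote stub
  uint32_t byteSize;       // register size in bytes
  uint32_t byteOffset = 0; // computed offset into the g/G packet payload
};

// Assign offsets incrementally, in increasing order of remote regnum.
void assignOffsets(std::vector<RegInfo> &primaryRegs) {
  std::sort(primaryRegs.begin(), primaryRegs.end(),
            [](const RegInfo &a, const RegInfo &b) {
              return a.remoteRegnum < b.remoteRegnum;
            });
  uint32_t offset = 0;
  for (RegInfo &reg : primaryRegs) {
    reg.byteOffset = offset;
    offset += reg.byteSize;
  }
}
```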
This commit also includes a test to verify all of the above functionality
on Arm64.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D91241
This came up while putting together our new strategy to create g/G packets
in compliance with the GDB RSP protocol, where register offsets are calculated
in increasing order of register numbers without any unused spacing.
The RegisterInfoPOSIX_arm64::GPR size was being calculated after alignment
correction to 8 bytes, which meant there were 4 bytes of unused space between
the last GPR (cpsr) and the first vector register. We have put the
LLVM_PACKED_START decorator on RegisterInfoPOSIX_arm64::GPR to make sure
single-byte alignment is enforced. Moreover, we now use the arm64 user_pt_regs
struct defined in ptrace.h for accessing ptrace user registers.
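To illustrate the padding issue (a simplified stand-in, not the actual
RegisterInfoPOSIX_arm64::GPR definition): without packing, the trailing 32-bit
cpsr forces the struct size up to the next 8-byte boundary, leaving 4 unused
bytes before the first vector register in the combined register layout.

```cpp
#include <cstdint>

struct GPR {             // natural alignment: 8 bytes
  uint64_t x[31];
  uint64_t sp;
  uint64_t pc;
  uint32_t cpsr;         // 4 bytes; 4 bytes of tail padding follow
};
static_assert(sizeof(GPR) == 34 * 8, "size rounded up to an 8-byte multiple");

#pragma pack(push, 1)    // conceptually what LLVM_PACKED_START expands to
struct PackedGPR {
  uint64_t x[31];
  uint64_t sp;
  uint64_t pc;
  uint32_t cpsr;
};
#pragma pack(pop)        // conceptually LLVM_PACKED_END
static_assert(sizeof(PackedGPR) == 33 * 8 + 4, "no tail padding");
```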
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D92063
This allows us to use its value everywhere, rather than just in clang. Some
other places, like opt and lld, will use its value soon.
Rename it internally to LLVM_ENABLE_NEW_PASS_MANAGER.
The #define for it is now in llvm-config.h.
The initial land accidentally set the value of LLVM_ENABLE_NEW_PASS_MANAGER
to the literal string "ENABLE_EXPERIMENTAL_NEW_PASS_MANAGER" instead of that
variable's value.
Reviewed By: rnk, hans
Differential Revision: https://reviews.llvm.org/D92072
- Change InferTypeOpInterface::inferResultTypes to use fully qualified types matching
the ones generated by genTypeInterfaceMethods, so the redundancy can be detected.
- Move genTypeInterfaceMethods() before genOpInterfaceMethods() so that the
inferResultTypes method generated by genTypeInterfaceMethods() takes precedence
over the declaration that might be generated by genOpInterfaceMethods().
- Modify an op in the test dialect to exercise this (the modified op would fail to
generate valid C++ code due to duplicate inferResultTypes methods).
Differential Revision: https://reviews.llvm.org/D92414
These changes add support for Intel's umonitor/umwait usage in wait
code, for architectures that support those intrinsic functions. Usage of
umonitor/umwait is off by default, but can be turned on by setting the
KMP_USER_LEVEL_MWAIT environment variable.
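A rough sketch of how these intrinsics are typically used in a wait loop (an
illustration of the mechanism only, not the OpenMP runtime's actual code;
assumes a compiler and CPU with WAITPKG support, e.g. building with
-mwaitpkg, and in the runtime this path is gated by KMP_USER_LEVEL_MWAIT):

```cpp
#include <atomic>
#include <immintrin.h>
#include <x86intrin.h>

// Wait until `flag` becomes nonzero, using umonitor/umwait instead of a
// busy spin.
void waitForFlag(std::atomic<int> &flag) {
  while (flag.load(std::memory_order_acquire) == 0) {
    // Arm the monitor on the flag's cache line.
    _umonitor(&flag);
    // Re-check after arming so a wake-up between the load and the monitor
    // isn't missed.
    if (flag.load(std::memory_order_acquire) != 0)
      break;
    // Sleep until the monitored line is written, an interrupt arrives, or
    // the TSC deadline passes, then loop and re-check. Control value 0
    // requests the deeper C0.2 state (1 would request C0.1).
    unsigned long long deadline = __rdtsc() + 100000;
    _umwait(/*ctrl=*/0, deadline);
  }
}
```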
Differential Revision: https://reviews.llvm.org/D91189
This allows us to use its value everywhere, rather than just in clang. Some
other places, like opt and lld, will use its value soon.
The #define for it is now in llvm-config.h.
Reviewed By: rnk, hans
Differential Revision: https://reviews.llvm.org/D92072
Currently, `llvm_bb_addr_map` sections are generated per section name because we use
the `LinkedToSymbol` argument of getELFSection. This causes the address map tables of
functions to be grouped into the same section when `-function-sections=true
-unique-section-names=false`, which is not the intended behaviour. This patch lets the
unique id of every `.text` section propagate to the associated `.llvm_bb_addr_map`
section.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D92113