Summary:
How does RecursiveASTVisitor call the WalkUp callback for expressions?
* In pre-order traversal mode, RecursiveASTVisitor calls the WalkUp
callback from the default implementation of Traverse callbacks.
* In post-order traversal mode when we don't have a DataRecursionQueue,
RecursiveASTVisitor also calls the WalkUp callback from the default
implementation of Traverse callbacks.
* However, in post-order traversal mode when we have a DataRecursionQueue,
RecursiveASTVisitor calls the WalkUp callback from PostVisitStmt.
As a result, when the user overrides the Traverse callback, in pre-order
traversal mode they never get the corresponding WalkUp callback. However,
in post-order traversal mode the WalkUp callback is or is not invoked
depending on whether the data recursion optimization could be applied.
I had to adjust the implementation of TraverseCXXForRangeStmt in the
syntax tree builder to call the WalkUp method directly, as it was
relying on this behavior. There is an existing test for this
functionality and it prompted me to make this extra fix.
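As a rough sketch (my own illustration, not the exact syntax tree builder
code), an overridden Traverse callback that still wants the WalkUp behavior
now has to invoke it explicitly:
```
#include "clang/AST/RecursiveASTVisitor.h"
#include "clang/AST/StmtCXX.h"

class ForRangeVisitor : public clang::RecursiveASTVisitor<ForRangeVisitor> {
public:
  bool TraverseCXXForRangeStmt(clang::CXXForRangeStmt *S) {
    // Overriding the Traverse callback bypasses the default implementation,
    // which is what would have called WalkUpFrom*, so call it explicitly.
    if (!WalkUpFromCXXForRangeStmt(S))
      return false;
    // ... custom traversal of the statement's children goes here ...
    return true;
  }
};
```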
In addition, I had to fix the default implementation of
RecursiveASTVisitor::TraverseSynOrSemInitListExpr to call WalkUpFrom in
the same manner as the implementation generated by the DEF_TRAVERSE_STMT
macro. Without this fix, the InitListExprIsPostOrderNoQueueVisitedTwice
test was failing because WalkUpFromInitListExpr was never called.
Reviewers: eduucaldas, ymandel
Reviewed By: eduucaldas, ymandel
Subscribers: gribozavr2, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D82486
Use a simpler code sequence when the shift amount is known not to be
zero modulo the bit width.
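For illustration (my own C++ sketch, not code from the patch), here is the
difference for a 32-bit fshl: the general expansion must keep the second
shift amount in range even when the amount is a multiple of 32, while a
known non-zero amount allows a single shift per operand.
```
#include <cstdint>

// General expansion: shift the second operand right by one first, so the
// remaining shift amount stays within [0, 31] even when c % 32 == 0.
std::uint32_t fshl_general(std::uint32_t x, std::uint32_t y, std::uint32_t c) {
  return (x << (c & 31)) | ((y >> 1) >> (~c & 31));
}

// Simpler sequence: only valid when c % 32 is known to be non-zero.
std::uint32_t fshl_nonzero(std::uint32_t x, std::uint32_t y, std::uint32_t c) {
  return (x << (c & 31)) | (y >> ((32 - c) & 31));
}
```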
Nothing much uses this until D77152 changes the translation of fshl and
fshr intrinsics.
Differential Revision: https://reviews.llvm.org/D82540
Using a negation instead of a subtraction from a constant can save an
instruction on some targets.
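A tiny worked example (my own sketch): for a 32-bit shift amount reduced
modulo the bit width, the subtraction from the constant and the negation
yield the same value, so the constant never has to be materialized.
```
#include <cstdint>

// 32 - c and 0 - c are congruent modulo 32, so both give the same
// complementary shift amount after masking to the low five bits.
std::uint32_t complement_sub(std::uint32_t c) { return (32u - c) & 31u; }
std::uint32_t complement_neg(std::uint32_t c) { return (0u - c) & 31u; }
```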
Nothing much uses this until D77152 changes the translation of fshl and
fshr intrinsics.
Differential Revision: https://reviews.llvm.org/D82539
Since strong dependencies aren't user-facing (it's hardly ever legal to disable
them), let's enforce that they are hidden. Modeling checkers that aren't
dependencies are of course not impacted, but there is only so much you can do
against developers shooting themselves in the foot :^)
I also made some changes to the test files, reserving the "test" package for,
well, testing.
Differential Revision: https://reviews.llvm.org/D81761
Adds implementation of getMainExecutable() and is_local_impl() to
Support/Unix/Path.inc. Both are needed to compile LLVM for z/OS.
Reviewed By: hubert.reinterpretcast, emaste
Differential Revision: https://reviews.llvm.org/D82544
This is needed to build LLVM on z/OS, as there is no header file
which provides these constants.
Reviewed By: hubert.reinterpretcast
Differential Revision: https://reviews.llvm.org/D82368
Compilers may evaluate call arguments in a different order,
which would result in a different order of IR, which would break the tests.
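A minimal illustration (my own, not one of the affected tests): the order in
which f() and g() are evaluated below is unspecified, so two conforming
compilers may emit their calls, and therefore the surrounding IR, in
different order.
```
int f();
int g();
int use(int, int);

int driver() {
  // Whether f() or g() is evaluated first is unspecified.
  return use(f(), g());
}
```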
Spotted thanks to Dmitri Gribenko!
This expands the existing extend costs with a few extras for types
larger than legal, which will usually be split under MVE. It also adds
trunc support for the same thing. These should not have a large effect
on many things, but they make the costs explicit and keep a certain balance
between truncs and extends.
Differential Revision: https://reviews.llvm.org/D82457
Summary:
I'm interested in taking the original C++ input,
for which we are currently stuck with an alloca,
and producing roughly the lower IR (from the link below),
with neither an alloca nor vector ops:
https://godbolt.org/z/cRRWaJ
For that, as an intermediate step, I'd like to somehow perform scalarization.
As per @arsenm's suggestion, I'm trying to see if the scalarizer can help me
avoid reinventing the wheel.
I'm not sure if it's really intentional that a variable insert is not handled currently.
If it really is, and is supposed to stay that way (?), I guess I could guard it.
See [[ https://bugs.llvm.org/show_bug.cgi?id=46524 | PR46524 ]].
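For reference, this is roughly what a variable insert looks like at the
source level (my own illustration using Clang vector extensions, not a test
from this patch); the lane index is not a compile-time constant:
```
typedef int v4si __attribute__((vector_size(16)));

v4si set_lane(v4si v, int lane, int value) {
  v[lane] = value;  // insert with a non-constant index
  return v;
}
```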
Reviewers: bjope, cameron.mcinally, arsenm, jdoerfert
Reviewed By: jdoerfert
Subscribers: arphaman, uabelho, wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D82961
Summary:
It appears to be better IR-wise to aggressively scalarize it,
rather than relying on gathering it, and leaving it as-is.
Reviewers: jdoerfert, bjope, arsenm, cameron.mcinally
Reviewed By: jdoerfert
Subscribers: arphaman, wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D83101
Summary: As can be clearly seen from the diff, this results in nicer IR.
Reviewers: jdoerfert, arsenm, bjope, cameron.mcinally
Reviewed By: jdoerfert
Subscribers: arphaman, wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D83102
Summary:
1000 iterations is still kind of a lot.
Would it make sense to iteratively lower it until it becomes `2`,
with some delay in between, in order to give users a chance to actually encounter it?
Reviewers: spatel, nikic, kuhar
Reviewed By: nikic
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D83160
This alters getMemoryOpCost to use the Base TargetTransformInfo version
that includes some additional checks for whether extending loads are
legal. This will generally have the effect of making <2 x ..> and some
<4 x ..> loads/stores more expensive, which in turn should help favour
larger vector factors.
Notably it alters the cost of a <4 x half>, which with the current
codegen will be expensive if it is not extended.
Differential Revision: https://reviews.llvm.org/D82456
Summary:
Change stack alignment from 64 bits to 128 bits to follow the ABI correctly.
Also add a regression test for the datalayout.
Reviewers: simoll, k-ishizaka
Reviewed By: simoll
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #llvm, #ve, #clang
Differential Revision: https://reviews.llvm.org/D83173
A user can provide their own version of coroutine_handle::address() whose return
type is not void*, by specializing coroutine_handle<> for some promise_type.
In that case, the code may break compatibility with existing async C APIs
that accept a void* data parameter which is then passed back to the
user-provided callback.
Patch by ChuanqiXu
Differential Revision: https://reviews.llvm.org/D82442
We make assumptions about what projects and runtimes are enabled
when configuring our toolchain build, so we should enable those in
the cache file as well rather than relying on those being set
externally.
Differential Revision: https://reviews.llvm.org/D81514
If the compilation fails, the test is marked as unsupported.
-> This will never change for a specific version of gcc
If the linking fails, the test is marked as expected to fail.
-> This might change as LLVM/OpenMP implements the missing GOMP interface function
Reviewed by: Hahnfeld
Differential Revision: https://reviews.llvm.org/D83077
Making -g[no-]column-info opt-out reduces the length of a typical CC1 command line.
Additionally, in a non-debug compile, we won't see -dwarf-column-info.
Whether an instruction is deemed to have side effects is determined by
whether it has a tblgen pattern that emits a single instruction.
Because a lot of the vcvt instructions are specified
either in dagtodag code or with patterns that emit multiple
instructions, they don't get marked as not having side effects.
This just marks them as not having side effects manually. It can help
especially with instruction scheduling, to not create artificial
barriers, but one of these tests also managed to produce fewer
instructions.
Differential Revision: https://reviews.llvm.org/D81639
This patch resolves crash that occurs when user wanted to remove all
symbols and add a brand new one using:
```
llvm-objcopy -R .symtab --add-symbol foo=1234 in.o out.o
```
Before these changes, the symbol table was internally null when adding
new symbols. For now, we regenerate the symbol table in this case.
This fixes: https://bugs.llvm.org/show_bug.cgi?id=43930
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D82935
OSS-Fuzz and the Emscripten test suite uncovered some edge cases in
which the range check instruction seemed to be an (i32.const 0) or
other unexpected instruction, triggering an assertion. Unfortunately
the reproducers are rather complicated, so they don't make good unit
tests. This commit removes the bad assertion and conservatively
optimizes range checks only when the range check instruction is
i32.gt_u.
Differential Revision: https://reviews.llvm.org/D83169
Xcode 12 beta apparently has the Wrange-loop-analysis changes from
half a year ago, but it seems to lack https://reviews.llvm.org/D72212
which made the warning usable again.
This fixes the build of compiler-rt on macOS when _not_ using
clang_base_path in args.gn: Xcode clang knows where to find the
SDK, but regular clang doesn't and needs a -isysroot parameter.
We correctly add that parameter when clang_base_path is set,
but otherwise we omit it. If clang_base_path was not set, we also
didn't add the flag for stage2_unix_toolchain() when we build
compiler-rt with just-built clang.
Make stage2_unix_toolchain() use clang_base_path instead of setting
cc / cxx. It's less code, and it gets things like this right.
As can be seen in the newly added (previously crashing) test case,
there can be a situation where multiple GVs are used in an instruction,
and we would schedule the same instruction for deletion several times,
crashing when trying to delete it the second time.
We could either store WeakVH (done here), or use something set-like.
I think using WeakVH is prevalent in these cases elsewhere.
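A minimal sketch (my own, assuming the usual llvm::WeakVH behavior of nulling
itself out when its value is deleted) of why a WeakVH-based worklist tolerates
duplicate entries:
```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/ValueHandle.h"

using namespace llvm;

static void eraseQueued(SmallVectorImpl<WeakVH> &DeadInsts) {
  for (WeakVH &VH : DeadInsts) {
    // A duplicate entry whose instruction was already erased has been
    // nulled out, so it is skipped instead of being deleted twice.
    Value *V = VH;
    if (auto *I = dyn_cast_or_null<Instruction>(V))
      I->eraseFromParent();
  }
}
```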
As can be seen in the newly added (previously crashing) test case,
there can be a situation where multiple arguments are used in an instruction,
and we would schedule the same instruction for deletion several times,
crashing when trying to delete it the second time.
We could either store WeakVH (done here), or use something set-like.
I think using WeakVH is prevalent in these cases elsewhere.