Summary:
This patch adds a special DAG combine for SSE1 to recognize the IR pattern
InstCombine gives us for movmsk. It only does the recognition for a few cases
where it's obvious the input won't be scalarized, which would result in
building a vector just to do the movmsk. I've kept it separate from our
existing movmsk matching since that's called in multiple places and I didn't
spend time to see if the other callers would make sense here; the restrictions
and additional checks would also complicate that.
This fixes the case from PR42870, but it's probably still broken in the
presence of logic ops feeding the movmsk pattern, which would further hide
the v4f32 type.
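For reference, a minimal C sketch of the scenario (a hypothetical reproducer;
the IR shape in the comment paraphrases what InstCombine is described as
producing, not verbatim compiler output):

  #include <xmmintrin.h>

  /* Compiled with SSE1 but not SSE2, InstCombine rewrites this intrinsic
     into a generic sequence (bitcast <4 x float> to <4 x i32>, icmp slt
     against zero, bitcast <4 x i1> to i4, zext). The new combine matches
     that back to a single movmskps instead of scalarizing. */
  int sign_mask(__m128 v) {
    return _mm_movemask_ps(v);
  }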
Reviewers: spatel, RKSimon, xbolva00
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65689
llvm-svn: 368506
Summary:
On Windows, if the frame size exceeds 4096 bytes, the compiler needs to
generate a call to _alloca_probe. The X86CallFrameOptimization pass
changes the reserved stack size, which can cause the stack probe function
not to be inserted. This patch fixes the issue by checking the call
frame size and dropping X86CallFrameOptimization when the size exceeds
4096 bytes.
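A hedged C illustration of the kind of function that needs a probe (the
helper is hypothetical): Windows requires stack pages to be touched in
order, so frames larger than a page get a probe call.

  void consume(char *p);   /* hypothetical external helper */

  void big_frame(void) {
    char buf[8192];        /* frame > 4096 bytes, so a probe is required */
    consume(buf);
  }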
Reviewers: craig.topper, wxiao3, annita.zhang, rnk, RKSimon
Reviewed By: rnk
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65923
llvm-svn: 368503
Introducing non-global control for default block-scan-limit in MemDep analysis.
Useful when there are many compilations per initialized LLVM instance (e.g. JIT).
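A minimal C++ sketch of how a JIT-style embedder might use this, assuming
the limit is a constructor parameter on the analysis as the summary
suggests:

  #include "llvm/Analysis/MemoryDependenceAnalysis.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;

  void registerAnalyses(FunctionAnalysisManager &FAM) {
    // Per-instance scan limit instead of the process-wide
    // -memdep-block-scan-limit flag.
    FAM.registerPass([] {
      return MemoryDependenceAnalysis(/*DefaultBlockScanLimit=*/100);
    });
  }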
Reviewed By: asbirlea
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65806
llvm-svn: 368502
Summary:
I split out the "extract parent instead of this" logic from the "this isn't
worth extracting" logic (now in eligibleForExtraction()), because I found it
hard to reason about.
While here, handle overloaded as well as builtin assignment operators.
This also uncovered a bug in getCallExpr(), which I fixed.
Reviewers: SureYeaah
Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, kadircet, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D65337
llvm-svn: 368500
Summary:
clangd supports a -j option to limit the number of threads to use for
parsing TUs. However, when using -background-index (the default in later
versions of clangd), the parallelism used by clangd defaults to the
hardware parallelism, i.e. the number of physical cores.
On shared hardware environments, with large projects, this can significantly
affect performance with no way to tune it down.
This change makes the -j parameter apply equally to parsing and background
index. It's not perfect, because the total number of threads is 2x the -j value,
which may still be unexpected. But at least this change allows users to prevent
clangd using all CPU cores.
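For example, a hypothetical invocation that caps each of the two thread
pools at four threads:

  clangd -background-index -j=4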
Reviewers: kadircet, sammccall
Reviewed By: sammccall
Subscribers: javed.absar, jfb, sammccall, ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D66031
llvm-svn: 368498
The default behavior of Clang's indirect function call checker will replace
the address of each CFI-checked function in the output file's symbol table
with the address of a jump table entry which will pass CFI checks. We refer
to this as making the jump table ``canonical``. This property allows code that
was not compiled with ``-fsanitize=cfi-icall`` to take a CFI-valid address
of a function, but it comes with a couple of caveats that are especially
relevant for users of cross-DSO CFI:
- There is a performance and code size overhead associated with each
exported function, because each such function must have an associated
jump table entry, which must be emitted even in the common case where the
function is never address-taken anywhere in the program, and must be used
even for direct calls between DSOs, in addition to the PLT overhead.
- There is no good way to take a CFI-valid address of a function written in
assembly or a language not supported by Clang. The reason is that the code
generator would need to insert a jump table in order to form a CFI-valid
address for assembly functions, but there is no way in general for the
code generator to determine the language of the function. This may be
possible with LTO in the intra-DSO case, but in the cross-DSO case the only
information available is the function declaration. One possible solution
is to add a C wrapper for each assembly function, but these wrappers can
present a significant maintenance burden for heavy users of assembly in
addition to adding runtime overhead.
For these reasons, we provide the option of making the jump table non-canonical
with the flag ``-fno-sanitize-cfi-canonical-jump-tables``. When the jump
table is made non-canonical, symbol table entries point directly to the
function body. Any instances of a function's address being taken in C will
be replaced with a jump table address.
This scheme has its own caveats, however. It ends up breaking
function address equality more aggressively than the default behavior,
especially in cross-DSO mode which normally preserves function address
equality entirely.
Furthermore, it is occasionally necessary for code not compiled with
``-fsanitize=cfi-icall`` to take a function address that is valid
for CFI. For example, this is necessary when a function's address
is taken by assembly code and then called by CFI-checking C code. The
``__attribute__((cfi_canonical_jump_table))`` attribute may be used to make
the jump table entry of a specific function canonical so that the external
code will end up taking an address for the function that will pass CFI checks.
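A hedged C sketch of the attribute's intended use (the function name is
hypothetical):

  /* This function's address is taken by assembly code and later called
     indirectly from CFI-checked C code, so its jump table entry must stay
     canonical even under -fno-sanitize-cfi-canonical-jump-tables. */
  __attribute__((cfi_canonical_jump_table))
  void callback_used_from_asm(void) {}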
Fixes PR41972.
Differential Revision: https://reviews.llvm.org/D65629
llvm-svn: 368495
Ensure that malloc_default_zone and malloc_zone_from_ptr return the
sanitizer-installed malloc zone even when MallocStackLogging (MSL) is
requested. This prevents crashes in certain situations. Note that the
sanitizers and MSL cannot be used together. If both are enabled, MSL
functionality is essentially deactivated since it only hooks the default
allocator, which is replaced by a custom sanitizer allocator.
rdar://53686175
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D65990
llvm-svn: 368492
Summary:
This patch adds support for the close map modifier in Clang.
This ensures that the new map type is marked and passed to the OpenMP runtime appropriately.
Additional regression tests have been merged from patch D55892 (author @saghir).
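A hedged OpenMP/C example of the modifier (names are illustrative):

  void scale(float *a, int n) {
    /* 'close' requests a device-local copy of the mapped data. */
  #pragma omp target map(close, tofrom: a[0:n])
    for (int i = 0; i < n; ++i)
      a[i] *= 2.0f;
  }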
Reviewers: ABataev, caomhin, jdoerfert, kkwli0
Reviewed By: ABataev
Subscribers: kkwli0, Hahnfeld, saghir, guansong, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D65341
llvm-svn: 368491
This is the codegen part of fixing:
https://bugs.llvm.org/show_bug.cgi?id=32939
Even with the optimal/canonical IR that is ideally created by D65954,
we would reverse that transform in DAGCombiner and end up with the same
asm on AArch64 or x86.
I see 2 options for trying to correct this:
1. Limit isNegatibleForFree() by special-casing the fmul pattern (this patch).
2. Avoid creating (fmul X, 2.0) in the 1st place by adding a special-case
transform to SelectionDAG::getNode() and/or SelectionDAGBuilder::visitFMul()
that matches the transform done by DAGCombiner.
This seems like the less intrusive patch, but if there's some reason to
prefer one option over the other, we can change to the other option.
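As a hedged C illustration of the kind of pattern at issue (an assumed
shape, not copied from the report):

  double f(double x) {
    return -(x + x);   /* ideally folds to a single multiply by -2.0 */
  }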
Differential Revision: https://reviews.llvm.org/D66016
llvm-svn: 368490
Summary:
This patch adds support for the close map modifier.
The close map modifier overrides the unified shared memory requirement and creates a device copy of the data.
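A hedged sketch of the interaction (illustrative only):

  #pragma omp requires unified_shared_memory

  void bump(int *p, int n) {
    /* Under unified shared memory, mapping is normally a no-op; 'close'
       still forces a discrete device copy of p[0:n]. */
  #pragma omp target map(close, tofrom: p[0:n])
    p[0] += 1;
  }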
Reviewers: ABataev, Hahnfeld, caomhin, grokos, jdoerfert, AlexEichenberger
Reviewed By: Hahnfeld, AlexEichenberger
Subscribers: guansong, openmp-commits
Tags: #openmp
Differential Revision: https://reviews.llvm.org/D65340
llvm-svn: 368488
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
%1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_ASHR %2:_(s32), i32 1
would reasonably combine to:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 25
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
%2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
%3:_(s32) = G_ASHR %2:_(s32), i32 1
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_MUL %2:_(s32), i32 2
can become:
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 23
All upstream targets have been configured to lower it to the current
G_SHL,G_ASHR pair but will likely want to make it legal in some cases to
handle their faster cases.
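As a hedged C++ sketch, opting in might look like this inside a target's
LegalizerInfo constructor (the chosen types are illustrative):

  getActionDefinitionsBuilder(TargetOpcode::G_SEXT_INREG)
      .legalFor({LLT::scalar(32), LLT::scalar(64)}) // assumed fast cases
      .lower(); // everything else becomes the G_SHL/G_ASHR pair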
To follow up: Provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction, so to give a concrete example,
saying %2 in:
%1 = G_CONSTANT 16
%2 = G_SEXT_INREG %0, %1
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
%2 = G_SEXT_INREG %0, 16
or:
%1 = G_CONSTANT 16
%2:_(s32) = G_SHL %0, %1
%3:_(s32) = G_ASHR %2, %1
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.
Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm
Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls, javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61289
llvm-svn: 368487
clang-doc now generates a file that contains only an index of all the
infos, which can be used as the landing page for the generated website.
Differential Revision: https://reviews.llvm.org/D65918
llvm-svn: 368484
Summary:
A block address may be used in inline assembly, in which case it requires
a name so that the asm parser has something to parse. Creating
a name for every block address is a large hammer, but is necessary
because at the point when a temp symbol is created we don't necessarily
know if it's used in inline asm. This ensures that it exists regardless.
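A hedged C shape of the kind of input involved (asm goto references block
addresses via %l operands; this exact snippet is illustrative):

  void f(void) {
    asm goto("# branch target: %l0" : : : : out);
  out:
    return;
  }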
Reviewers: nickdesaulniers, craig.topper
Subscribers: nathanchance, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65352
llvm-svn: 368478
Summary:
This patch keeps track of MCSymbols created for blocks that were
referenced in inline asm. It prevents creating a new symbol which
doesn't refer to the block.
Inline asm may have a reference to a label. The asm parser however
doesn't recognize it as a label and tries to create a new symbol. The
result is that the parser replaces the original symbol (e.g. ".Ltmp0") in
the inline asm with the new one (e.g. ".Ltmp00") without updating it in
the symbol table. So the machine basic block
retains the "old" symbol (".Ltmp0"), but the inline asm uses the new one
(".Ltmp00").
Reviewers: nickdesaulniers, craig.topper
Subscribers: nathanchance, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65304
llvm-svn: 368477
This regressed in r368322, and was reported as PR42948 and on the
mailing list. The fix is to ignore the specific error code for this
case. The problem doesn't seem to reproduce on Windows, where a
different error code is used instead.
llvm-svn: 368475
CFStrings should be 8-byte aligned when built for the Swift CF runtime
ABI as the atomic CF info field must be properly aligned. This is a
problem on 32-bit platforms which would give the structure 4-byte
alignment rather than 8-byte alignment.
llvm-svn: 368471
We have one global RTLs.RequiresFlags; I don't see a need to make a
copy per device that the runtime manages. This was problematic anyway
because the copy happened during the first __tgt_register_lib(). This
made it impossible to call __tgt_register_requires() from normal user
functions for testing.
Hence, this change also fixes unified_shared_memory/shared_update.c for
older versions of Clang that don't call __tgt_register_requires() before
__tgt_register_lib().
Differential Revision: https://reviews.llvm.org/D66019
llvm-svn: 368465
Summary:
The purpose is to be able to extract the number of new edges added to
the original (i.e. output) corpus directory after doing the merge. Use case
example: in ClusterFuzz, we do merge after every fuzzing session, to avoid
uploading too many corpus files, and we also record coverage stats at that
point. Having a separate line indicating stats after reading the initial output
corpus directory would make the stats extraction easier for both humans and
parsing scripts.
Context: https://github.com/google/clusterfuzz/issues/802.
Reviewers: morehouse, hctim
Reviewed By: hctim
Subscribers: delcypher, #sanitizers, llvm-commits, kcc
Tags: #llvm, #sanitizers
Differential Revision: https://reviews.llvm.org/D66020
llvm-svn: 368461
Two command line options have been added to clang-doc.
--repository=<string> - URL of repository that hosts code; used for links to definition locations.
--source-root=<string> - Directory where processed files are stored. Links to definition locations will only be generated if the file is in this dir.
If the file is in the source-root and a repository option is passed,
a link to the source code will be rendered by the HTML generator.
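A hypothetical invocation combining the two flags:

  clang-doc --format=html \
      --repository=https://github.com/llvm/llvm-project \
      --source-root=/path/to/llvm-project <sources>...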
Differential Revision: https://reviews.llvm.org/D65483
llvm-svn: 368460
Summary:
The problem:
When an operand had bits explicitly set to "1" (as in the InitValue.td test case attached), the decoder was ignoring those bits, and the DecoderMethod was receiving an input where the bits were still zero.
The solution:
We added an "InitValue" variable that stores the initial value of the operand based on what bits were explicitly initialized to 1 in TableGen code. The generated decoder code then uses that initial value to initialize the "tmp" variable, then calls fieldFromInstruction to read the values for the remaining bits that were left unknown in TableGen.
This is mainly useful when there are variations of an instruction that differ based on what bits are set in the operands, since this change makes it possible to access those bits in a DecoderMethod. The DecoderMethod can use those bits to know how to handle the input.
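A hedged C++ sketch of the shape of the generated code (illustrative, not
the exact emitted text):

  // 'tmp' starts from the bits TableGen fixed to 1 instead of from zero.
  uint64_t tmp = 0x3; // InitValue: assume the low two bits were set to 1
  // Read only the bits that were left unknown in the TableGen encoding.
  tmp |= fieldFromInstruction(Insn, /*StartBit=*/2, /*NumBits=*/6) << 2;
  // 'tmp' is then passed to the operand's DecoderMethod, which can now
  // inspect the explicitly-set bits.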
Patch by Nicolas Guillemot
Reviewers: craig.topper, dsanders, fhahn
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D63741
llvm-svn: 368458