ELFYAML.h contains a `Section` class which is a base for several other
section classes that are used for mapping different section types.
`Section` has a `StringRef Info` field used for storing sh_info.
However, sh_info has very different meanings for different section types
and cannot be processed in a uniform way; for example, ELFDumper does not
handle it in `dumpCommonSection` but does so in `dumpGroup`
and `dumpCommonRelocationSection` respectively.
At the moment we store and handle it as a string, because that was sufficient
for the current use cases. But it can also simply be a number:
for SHT_GNU_verdef it is "The number of version definitions within the section."
The patch moves the `Info` field out of the base class so that it can be a number.
With that change, each class can decide what type and purpose
of the sh_info field it wants to use.
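A minimal sketch of the resulting layout, with simplified class and field names (the real ELFYAML classes differ in detail):
```
// Hypothetical, simplified sketch of the restructuring.
struct Section {
  // No generic `Info` member in the base class anymore.
};
struct RelocationSection : Section {
  StringRef Info; // sh_info as a name, e.g. the section relocations apply to
};
struct VerdefSection : Section {
  uint64_t Info; // sh_info as a number: the count of version definitions
};
```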
I also had to edit 2 test cases, because the patch fixes a bug: previously we
accepted YAML files with Info fields for all sections (for example, for SHT_DYNSYM too),
but we did not handle them, and the resulting objects had zero sh_info fields for
such sections. Now Info is accepted only for sections that support it.
Differential revision: https://reviews.llvm.org/D58054
llvm-svn: 353810
This patch adds a -time-regions option to tablegen that can enable timers
(currently only one) that assess the performance of tablegen itself. This
can be useful for identifying scaling problems with tablegen backends.
This particular timer has allowed me to ignore time that is not attributed
to the GISel combiner pass. It's useful by itself but it is particularly
useful in combination with https://reviews.llvm.org/D52954 which causes
this period of time to be annotated within Xcode Instruments which in turn
allows profile samples and recorded allocations attributed to reading
instructions to be filtered out.
llvm-svn: 353763
Currently there is no way to lazy-load an in-memory IR module without
first writing it to disk. This patch just exposes the existing
implementation of getLazyIRModule.
This is effectively a revert of rL212364
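A hedged sketch of the intended use, assuming an in-memory bitcode buffer named `IRText` (the local names here are illustrative, not from the patch):
```
// Lazily materialize a module; function bodies deserialize on demand.
LLVMContext Ctx;
SMDiagnostic Err;
std::unique_ptr<Module> M =
    getLazyIRModule(MemoryBuffer::getMemBuffer(IRText), Err, Ctx);
```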
Differential Revision: https://reviews.llvm.org/D56203
llvm-svn: 353755
Summary:
This verification may fail after certain transformations due to
BasicAA's fragility. Added a small explanation and a testcase that
triggers the assert in checkClobberSanity (before its removal).
Addresses PR40509.
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, llvm-commits, Prazek
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57973
llvm-svn: 353739
Loop::setAlreadyUnrolled() and
LoopVectorizeHints::setLoopAlreadyUnrolled() both add loop metadata that
stops the same loop from being transformed multiple times. This patch
merges both implementations.
In doing so we fix 3 potential issues:
* setLoopAlreadyUnrolled() kept the llvm.loop.vectorize/interleave.*
metadata even though it will not be used anymore. This already caused
problems such as http://llvm.org/PR40546. Change the behavior to that
of setAlreadyUnrolled(), which deletes this loop metadata.
* setAlreadyUnrolled() used to create a new LoopID by calling
MDNode::get with nullptr as the first operand, then replacing it with
the returned reference using replaceOperandWith. It is possible
that MDNode::get would instead return an existing node (due to
de-duplication) that then gets modified. To avoid this, use a fresh
TempMDNode that does not get uniqued with anything else before
replacing it with replaceOperandWith (see the sketch after this list).
* LoopVectorizeHints::matchesHintMetadataName() only compares the
suffix of the attribute to set the new value for. That is, when
called with "enable", it would erase attributes such as
"llvm.loop.unroll.enable", "llvm.loop.vectorize.enable" and
"llvm.loop.distribute.enable" instead of only the one to replace.
Fortunately, the function was only called with "isvectorized".
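A hedged sketch of the TempMDNode idiom from the second bullet; `OtherProperty` stands in for whatever loop properties are being preserved:
```
// Build a distinct, self-referential LoopID without mutating a uniqued node.
TempMDNode TempNode = MDNode::getTemporary(Context, None);
Metadata *Args[] = {TempNode.get(), OtherProperty};
MDNode *NewLoopID = MDNode::get(Context, Args);
NewLoopID->replaceOperandWith(0, NewLoopID); // close the self-reference
```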
Differential Revision: https://reviews.llvm.org/D57566
llvm-svn: 353738
Summary:
This patch fixes PR40587.
When a dbg.value intrinsic is emitted to the DAG
by using EmitFuncArgumentDbgValue the resulting
DBG_VALUE is hoisted to the beginning of the entry
block. I think the idea is to make a formal
argument locatable from the very start of the
function.
However, EmitFuncArgumentDbgValue only checked that
the value used to describe a variable originated
from a function parameter, not that the variable
itself actually was an argument to the function.
So when, for example, a local variable "local" was
assigned the value from an argument "a", the
associated DBG_VALUE instruction would be hoisted
to the beginning of the function, even if the scope
for "local" started somewhere else (or if "local"
was mapped to other values earlier in the function).
This patch adds some logic to EmitFuncArgumentDbgValue
to check that the variable being described actually
is an argument to the function, and that the dbg.value
being lowered already is in the entry block. Otherwise
we bail out, and the dbg.value is handled as an
ordinary dbg.value (not as a "FuncArgumentDbgValue").
A tricky situation is when both the variable and
the value are related to function arguments, but not
necessarily the same argument. We make sure that we
do not describe the same argument more than once as
a "FuncArgumentDbgValue". This solution works as long
as opt has injected a "first" dbg.value that corresponds
to the formal argument at the function entry.
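A hedged sketch of the added guards; the real checks live in SelectionDAGBuilder::EmitFuncArgumentDbgValue and differ in detail:
```
// Bail out (handle as an ordinary dbg.value) unless both conditions hold.
if (!Variable->isParameter())
  return false; // variable describes a local, not a formal argument
if (!IsInEntryBlock)
  return false; // the dbg.value being lowered is not in the entry block
```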
Reviewers: jmorse, aprantl
Subscribers: jyknight, hiraditya, fedor.sergeev, dstenb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57702
llvm-svn: 353735
Summary:
If there is no clobbering access for a store inside the loop, that store
can only be hoisted if there are no interfering loads.
A more general verification is introduced here: there are no loads that
are not optimized to an access outside the loop.
Addresses PR40586.
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57967
llvm-svn: 353734
Summary:
rL189250 added a realpath call, and rL352916 removed it because realpath breaks assumptions with some build systems. However, the /usr/lib/debug case has since been clarified: falling back to /usr/lib/debug is currently broken if the obj passed in is a relative path. Adding a call to use absolute paths when falling back to /usr/lib/debug fixes that while still not making any realpath assumptions.
This also adds a --fallback-debug-path command line flag for testing (since we probably can't write to /usr/lib/debug from buildbot environments), but was also verified manually:
```
$ rm -f path/to/dwarfdump-test.elf-x86-64
$ strace llvm-symbolizer --obj=relative/path/to/dwarfdump-test.elf-x86-64.debuglink 0x40113f |& grep dwarfdump
```
Lookups went to relative/path/to/dwarfdump-test.elf-x86-64, relative/path/to/.debug/dwarfdump-test.elf-x86-64, and then finally /usr/lib/debug/absolute/path/to/dwarfdump-test.elf-x86-64.
Reviewers: dblaikie, samsonov
Reviewed By: dblaikie
Subscribers: krytarowski, aprantl, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57916
llvm-svn: 353730
This is a follow-up to r353706. When the scheduler fails to issue a ready
instruction to the underlying pipelines, it now updates a mask of 'busy resource
units'. That information will be used in future to obtain the set of
"problematic" resources in the case of bottlenecks caused by resource pressure.
No functional change intended.
llvm-svn: 353728
In case of bottlenecks caused by pipeline pressure, we want to be able to
correctly report the set of problematic pipelines. This is a first step towards
adding support for bottleneck hints in llvm-mca (see PR37494). No functional
change intended.
llvm-svn: 353706
`CallBase` class rather than `CallSite` wrappers.
I pushed this change down through most of the statepoint infrastructure,
completely removing the use of CallSite where I could reasonably do so.
I ended up making a couple of cut-points: generic call handling
(instcombine, TLI, SDAG). As soon as it hit truly generic handling with
users outside the immediate code, I simply transitioned into or out of
a `CallSite` to make this a reasonable sized chunk.
Differential Revision: https://reviews.llvm.org/D56122
llvm-svn: 353660
SimplifySetCC still has much room for improvement, but this should
fix the remaining problem examples from:
https://bugs.llvm.org/show_bug.cgi?id=40657
The initial fix for this problem was rL353615.
llvm-svn: 353639
In preparation for supporting vector expansion.
Also drop a variant of ExpandLibCall, of which the MULO expansions
were the only user.
llvm-svn: 353611
This teaches the tools to parse and dump
the .dynamic section and its dynamic tags.
Differential revision: https://reviews.llvm.org/D57691
llvm-svn: 353606
Summary:
Take care of some missing clean-ups that belong with r249548 and some
other copy/paste that had happened. In particular, the destructors are
no longer vtable anchors after r249548; and `setSectionName` in
`MCSectionWasm` is private and unused since r313058 culled its only
caller. The destructors are now implicitly defined, and the unused
function is removed.
Reviewers: nemanjai, jasonliu, grosbach
Reviewed By: nemanjai
Subscribers: sbc100, aheejin, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57182
llvm-svn: 353597
This patch accompanies the RFC posted here:
http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html
This patch adds a new CallBr IR instruction to support asm-goto
inline assembly, like GCC's, as used by the Linux kernel. This
instruction is both a call instruction and a terminator
instruction with multiple successors. Only inline assembly
usage is supported today.
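For context, a hedged sketch of the GNU asm-goto form this models; the assembly string and label are purely illustrative:
```
// Compiled as C/C++ with asm-goto support; control either falls through
// or jumps to the 'failed' label, which CallBr models as successors.
int check(int X) {
  asm goto("testl %0, %0; jnz %l[failed]"
           : /* asm goto permits no outputs */
           : "r"(X)
           : "cc"
           : failed);
  return 0;
failed:
  return 1;
}
```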
This also adds a new INLINEASM_BR opcode to SelectionDAG and
MachineIR to represent an INLINEASM block that is also
considered a terminator instruction.
There will likely be more bug fixes and optimizations to follow
this, but we felt it had reached a point where we would like to
switch to an incremental development model.
Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii
Differential Revision: https://reviews.llvm.org/D53765
llvm-svn: 353563
Summary:
The motivating use case is eliminating duplicate profile data registered
for the same inline function in two object files. Before this change,
users would observe multiple symbol definition errors with VC link, but
links with LLD would succeed.
Users (Mozilla) have reported that PGO works well with clang-cl and LLD,
but when using LLD without this static registration, we would get into a
"relocation against a discarded section" situation. I'm not sure what
happens in that situation, but I suspect that duplicate, unused profile
information was retained. If so, this change will reduce the size of
such binaries with LLD.
Now, Windows uses static registration and is in line with all the other
platforms.
Reviewers: davidxl, wmi, inglorion, void, calixte
Subscribers: mgorny, krytarowski, eraman, fedor.sergeev, hiraditya, #sanitizers, dmajor, llvm-commits
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D57929
llvm-svn: 353547
Differential revision: https://reviews.llvm.org/D55444
A DPP move with uses and the old register initializer should be in the same BB.
bound_ctrl:0 is only considered when bank_mask and row_mask are fully enabled (0xF); otherwise the old register value is checked for identity.
Added add, subrev, and, or instructions to the old folding function.
The kill flag is cleared for src0 (the DPP register) as it may be copied into more than one user.
The pass is still disabled by default.
llvm-svn: 353513
Check that when SimplifyCFG is flattening a 'br', all related debug intrinsic instructions are removed, including any dbg.label referencing a label associated with the basic blocks being removed.
Differential Revision: https://reviews.llvm.org/D57444
llvm-svn: 353511
This broke the Chromium build on Windows, see https://crbug.com/930058
> Summary:
> When adding one thin archive to another, we currently chop off the relative path to the flattened members. For instance, when adding `foo/child.a` (which contains `x.txt`) to `parent.a`, when flattening it we should add it as `foo/x.txt` (which exists) instead of `x.txt` (which does not exist).
>
> As a note, this also undoes the `IsNew` parameter of handling relative paths in r288280. The unit test there still passes.
>
> This was reported as part of testing the kernel build with llvm-ar: https://patchwork.kernel.org/patch/10767545/ (see the second point).
>
> Reviewers: mstorsjo, pcc, ruiu, davide, david2050
>
> Subscribers: hiraditya, llvm-commits
>
> Tags: #llvm
>
> Differential Revision: https://reviews.llvm.org/D57842
This reverts commit bf990ab5aa.
llvm-svn: 353507
Summary: Assumption cache's self-updating mechanism does not correctly handle the case when blocks are extracted from the function by the CodeExtractor. As a result function's assumption cache may have stale references to the llvm.assume calls that were moved to the outlined function. This patch fixes this problem by removing extracted llvm.assume calls from the function’s assumption cache.
Reviewers: hfinkel, vsk, fhahn, davidxl, sanjoy
Reviewed By: hfinkel, vsk
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57215
llvm-svn: 353500
Add a flag to allow symbols to have a wasm import name which differs from the
linker symbol name, allowing the linker to link code using the import_module
attribute.
This is the MC/Object portion of the patch.
Differential Revision: https://reviews.llvm.org/D57632
llvm-svn: 353474
This is pretty much directly ported from SelectionDAG. Doesn't include
the shift by non-constant but known bits version, since there isn't a
globalisel version of computeKnownBits yet.
This shows a disadvantage of targets not specifying which type
should be used for the shift amount. If type 0 is legalized before
type 1, the operations on the shift amount type use the wider type
(which is also less likely to legalize). This can be avoided by
targets specifying legalization actions on type 1 earlier than for
type 0.
llvm-svn: 353455
Introduce a new function which handles instructions with multiple type
indices but the same number of vector elements.
Also legalize v2s16 shifts when applicable.
llvm-svn: 353432
Summary:
When adding one thin archive to another, we currently chop off the relative path to the flattened members. For instance, when adding `foo/child.a` (which contains `x.txt`) to `parent.a`, when flattening it we should add it as `foo/x.txt` (which exists) instead of `x.txt` (which does not exist).
As a note, this also undoes the `IsNew` parameter of handling relative paths in r288280. The unit test there still passes.
This was reported as part of testing the kernel build with llvm-ar: https://patchwork.kernel.org/patch/10767545/ (see the second point).
Reviewers: mstorsjo, pcc, ruiu, davide, david2050
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57842
llvm-svn: 353424
When type streams with forward references were merged using GHashes, cycles
were introduced in the debug info. This was caused by
GlobalTypeTableBuilder::insertRecordAs() not inserting the record on the second
pass, thus yielding an empty ArrayRef at that record slot. Later on, upon PDB
emission, TpiStreamBuilder::commit() would skip that empty record, thus
offsetting all indices that came after in the stream.
This solution comes in two steps:
1. Fix the hash calculation, by doing a multiple-step resolution, iff there are
forward references in the input stream.
2. Fix merge by resolving with multiple passes, therefore moving records with
forward references at the end of the stream.
This patch also adds support for llvm-readobj --codeview-ghash.
Finally, fix dumpCodeViewMergedTypes() which previously could reference deleted
memory.
Fixes PR40221
Differential Revision: https://reviews.llvm.org/D57790
llvm-svn: 353412
Modify GenerateConstantOffsetsImpl to create offsets that can be used
by indexed addressing modes. If formulae can be generated which
result in the constant offset being the same size as the recurrence,
we can generate a pre-indexed access. This allows the pointer to be
updated via the single pre-indexed access so that (hopefully) no
add/subs are required to update it for the next iteration. For small
cores, this can significantly improve the performance of DSP-like loops.
Differential Revision: https://reviews.llvm.org/D55373
llvm-svn: 353403
Summary:
Experimentally we found that promotion to scalars carries less benefits
than sinking and hoisting in LICM. When using MemorySSA, we build an
AliasSetTracker on demand in order to reuse the current infrastructure.
We only build it if less than AccessCapForMSSAPromotion exist in the
loop, a cap that is by default set to 250. This value ensures there are
no runtime regressions, and there are small compile time gains for
pathological cases. A much lower value (20) was found to yield a single
regression in the llvm-test-suite and much higher benefits for compile
times. Conservatively we set the current cap to a high value, but we will
explore lowering it when MemorySSA is enabled by default.
Reviewers: sanjoy, chandlerc
Subscribers: nemanjai, jlebar, Prazek, george.burgess.iv, jfb, jsji, llvm-commits
Differential Revision: https://reviews.llvm.org/D56625
llvm-svn: 353339
Summary:
Pass the alias info to addPointer when available. Will save an alias()
call for must sets when adding a known Must or May alias.
[Part of a series of cleanup patches]
Reviewers: reames, mkazantsev
Subscribers: sanjoy, jlebar, llvm-commits
Differential Revision: https://reviews.llvm.org/D56613
llvm-svn: 353335
As far as I can tell, malloc.h is only being used here to provide
a definition of mallinfo (malloc itself is declared in stdlib.h via
cstdlib). We already have a macro for whether mallinfo is available,
so switch to using that instead.
Differential Revision: https://reviews.llvm.org/D57807
llvm-svn: 353329
There was a lot of repeated code wrt unary math intrinsics in
translateKnownIntrinsic. This factors out the repeated MIRBuilder code into
two functions: translateSimpleUnaryIntrinsic and getSimpleUnaryIntrinsicOpcode.
This simplifies adding simple unary intrinsics, since after this, all you have
to do is add the mapping to SimpleUnaryIntrinsicOpcodes.
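A hedged sketch of the mapping's shape; the exact set of opcodes handled may differ:
```
static unsigned getSimpleUnaryIntrinsicOpcode(Intrinsic::ID ID) {
  switch (ID) {
  default:
    return Intrinsic::not_intrinsic; // not a simple unary intrinsic
  case Intrinsic::fabs:
    return TargetOpcode::G_FABS;
  case Intrinsic::sqrt:
    return TargetOpcode::G_FSQRT;
  }
}
```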
Differential Revision: https://reviews.llvm.org/D57774
llvm-svn: 353316
Previously, there were two different scripts for generating VCS headers:
one used by LLVM and one used by Clang and lldb. They were both similar,
but different. They were both broken in their own ways; for example, the
one used by Clang didn't properly handle the monorepo, resulting in
incorrect version information reported by Clang.
This change unifies the two scripts by introducing a new script that's
used by LLVM, Clang, and lldb, ensures that the new script
supports both monorepo and standalone SVN and Git setups, and removes
the old scripts.
Differential Revision: https://reviews.llvm.org/D57063
llvm-svn: 353268
DomTreeUpdater depends on headers from Analysis, but is in IR. This is a
layering violation since Analysis depends on IR. Relocate this code from IR
to Analysis to fix the layering violation.
llvm-svn: 353265
Summary:
Use a small cache for Values tested by nonEscapingLocalObject().
Since the calls to PointerMayBeCaptured are fairly expensive, this saves
a good amount of compile time for anything relying heavily on
BasicAA.alias() calls.
This uses the same approach as the AliasCache, i.e. the cache is reset
after each alias() call. The cache is not used or updated by modRefInfo
calls since it's harder to know when to reset the cache.
Testcases that show improvements with this patch are too large to
include. Example compile time improvement: 7s to 6s.
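A hedged sketch of the caching scheme; the member and helper names are assumptions:
```
// Reset after each alias() call, mirroring the existing AliasCache.
SmallDenseMap<const Value *, bool, 8> IsCapturedCache;

bool isNonEscapingLocalObject(const Value *V) {
  auto Cached = IsCapturedCache.try_emplace(V, false);
  if (!Cached.second)
    return Cached.first->second; // reuse the earlier capture query
  bool Ret = !PointerMayBeCaptured(V, /*ReturnCaptures=*/true,
                                   /*StoreCaptures=*/true);
  Cached.first->second = Ret;
  return Ret;
}
```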
Reviewers: chandlerc, sunfish
Subscribers: sanjoy, jlebar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57627
llvm-svn: 353245
The existing scheme of class template static members for Name and
NameMutex is a bit verbose, involves global ctors (even if they're cheap
for string and mutex, still not entirely free), and (importantly/my
immediate motivation here) trips over a bug in LLVM's modules
implementation that's a bit involved (hmm, sounds like Mr. Smith has a
fix for the modules thing - but I'm still inclined to commit this patch
as general goodness).
llvm-svn: 353241
A fallible iterator is one whose increment or decrement operations may fail.
This would usually be supported by replacing the ++ and -- operators with
methods that return error:
```
class MyFallibleIterator {
public:
  // ...
  Error inc();
  Error dec();
  // ...
};
```
The downside of this style is that it no longer conforms to the C++ iterator
concept, and cannot make use of standard algorithms and features such as
range-based for loops.
The fallible_iterator wrapper takes an iterator written in the style above
and adapts it to (mostly) conform with the C++ iterator concept. It does this
by providing standard ++ and -- operator implementations, returning any errors
generated via a side channel (an Error reference passed into the wrapper at
construction time), and immediately jumping the iterator to a known 'end'
value upon error. It also marks the Error as checked any time an iterator is
compared with a known end value and found to be unequal, allowing early exit
from loops without redundant error checking*.
Usage looks like:
```
MyFallibleIterator I = ..., E = ...;
Error Err = Error::success();
for (auto &Elem : make_fallible_range(I, E, Err)) {
  // Loop body is only entered when safe.
  // Early exits from loop body permitted without checking Err.
  if (SomeCondition)
    return;
}
if (Err)
  // Handle error.
```
* Since failure causes a fallible iterator to jump to end, testing that a
fallible iterator is not an end value implicitly verifies that the error is a
success value, and so is equivalent to an error check.
Reviewers: dblaikie, rupprecht
Subscribers: mgorny, dexonsmith, kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57618
llvm-svn: 353237
https://reviews.llvm.org/D57608
It's a common pattern in GISel to have a MachineInstrBuilder from which we get various regs
(commonly MIB->getOperand(0).getReg()). This adds a helper method so the above can be
replaced with MIB.getReg(0).
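A minimal before/after sketch for any `MachineInstrBuilder MIB`:
```
unsigned DstOld = MIB->getOperand(0).getReg(); // previous pattern
unsigned DstNew = MIB.getReg(0);               // new helper, same result
```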
llvm-svn: 353223
Summary:
According to
https://docs.nvidia.com/cuda/archive/10.0/ptx-writers-guide-to-interoperability/index.html#cuda-specific-dwarf,
the compiler should emit the DW_AT_address_class attribute for all
variables and parameters. This means that the DW_AT_address_class attribute
should be used in a non-standard way to support compatibility with the
cuda-gdb debugger.
Clang is able to generate information about the variable address
class. This information is emitted as the expression sequence
`DW_OP_constu <DWARF Address Space> DW_OP_swap DW_OP_xderef`. The patch
tries to find all such expressions and transform them into
`DW_AT_address_class <DWARF Address Space>` if the target is NVPTX and the debugger is gdb.
If the expression is not found, then default values are used: for
local variables <DWARF Address Space> is set to ADDR_local_space(6); for
globals it is set to ADDR_global_space(5). The
values are taken from the table in the same section 5.2, CUDA-Specific
DWARF Definitions.
Reviewers: echristo, probinson
Subscribers: jholewinski, aprantl, llvm-commits
Differential Revision: https://reviews.llvm.org/D57157
llvm-svn: 353203
Summary:
Adds the standard gauntlet of accessors for global indirect functions and updates the echo test.
Now it would be nice to have a target abstraction so one could know if they have access to a suitable ELF linker and runtime.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D56177
llvm-svn: 353193
Summary:
Add support for options that always prefix their value, giving an error
if the value is in the next argument or if the option is given a value
assignment (i.e., opt=val). This is the desired behavior for the -D option
of FileCheck, for instance.
Copyright:
- Linaro (changes in version 2 of revision D55940)
- GraphCore (changes in later versions and introduced when creating
D56549)
Reviewers: jdenny
Subscribers: llvm-commits, probinson, kristina, hiraditya,
JonChesterfield
Differential Revision: https://reviews.llvm.org/D56549
llvm-svn: 353172
DispatchStage should always delegate to an object of class RegisterFile the task
of updating data dependencies. ReadState and WriteState objects should not be
modified directly by DispatchStage.
This patch also renames stage IS_AVAILABLE to IS_DISPATCHED.
llvm-svn: 353170
Some use cases are appearing where salvaging is needed that does not
correspond to an instruction being deleted -- for example an instruction
being sunk, or a Value not being available in a block being isel'd.
Enable more fine-grained control over how salvaging occurs by splitting
the logic into helper functions, separating things that are specific to
working on DbgVariableIntrinsics from those specific to interpreting IR
and building DIExpressions.
Differential Revision: https://reviews.llvm.org/D57696
llvm-svn: 353156
This patch improves code generation for some AArch64 ACLE intrinsics. It adds
support to CGP to duplicate and sink operands to their user, if they can be
folded into a target instruction, like zexts and sub into usubl. It adds a
TargetLowering hook shouldSinkOperands, which looks at the operands of
instructions to see if sinking is profitable.
I decided to add a new target hook, as for the sinking to be profitable,
at least on AArch64, we have to look at multiple operands of an
instruction, instead of looking at the users of a zext for example.
The sinking is done in CGP, because it works around an instruction
selection limitation. If instruction selection is not limited to a
single basic block, this patch should not be needed any longer.
Alternatively this could be done in the LoopSink pass, which tries to
undo LICM for instructions in blocks that are not executed frequently.
Note that we do not force the operands to sink to have a single user,
because we duplicate them before sinking. Therefore this is only
desirable if the duplication really is free. Additionally we could
consider the impact on live ranges later on.
This should fix https://bugs.llvm.org/show_bug.cgi?id=40025.
As for performance, we have internal code that uses intrinsics and can
be sped up by 10% by this change.
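A hedged sketch of the hook's shape; the signature follows the description, and the body is a placeholder rather than the real AArch64 heuristic:
```
bool AArch64TargetLowering::shouldSinkOperands(
    Instruction *I, SmallVectorImpl<Use *> &Ops) const {
  // Inspect I and its operands; push each Use that is profitable to
  // duplicate and sink next to I (e.g. zexts feeding a usubl) into Ops.
  return false;
}
```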
Reviewers: SjoerdMeijer, t.p.northover, samparker, efriedma, RKSimon, spatel
Reviewed By: samparker
Differential Revision: https://reviews.llvm.org/D57377
llvm-svn: 353152
The fewerElementsVectors implementation for load/stores
handles the scalar reduction case just as well, so drop
the redundant code in narrowScalar. This also introduces
support for narrowing irregular size breakdowns for
scalars.
llvm-svn: 353125
Don't handle vector conditions.
I think this can be merged in the future with
fewerElementsVectorSelect, although this becomes slightly tricky with
a vector condition.
llvm-svn: 353122
Try to use the underlying source registers.
This enables legalization in more cases where some irregular
operations are widened and others narrowed.
This seems to make the test_combines_2 AArch64 test worse, since the
MERGE_VALUES has multiple uses. Since this should be required for
legalization, a hasOneUse check is probably inappropriate (or maybe
should only be used if the merge is legal?).
llvm-svn: 353121
I'm looking at adding a second INLINEASM opcode for better modeling asm-goto
as a terminator. Using the existing predicate will reduce the number of
places that will need to use the new opcode.
llvm-svn: 353095
The LiveDebugValues pass recognizes spills but not restores, which can
cause large gaps in location information for some variables, depending
on control flow. This patch makes LiveDebugValues recognize restores and
generate appropriate DBG_VALUE instructions.
This patch was posted previously with r352642 and reverted in r352666 due
to buildbot errors. A missing return statement was the cause for the
failures.
Reviewers: aprantl, NicolaPrica
Differential Revision: https://reviews.llvm.org/D57271
llvm-svn: 353089
This fixes two problems with CSE done in buildConstant. First, this
would hit an assert when used with a vector result type. Solve this by
allowing CSE on the vector elements, but not on the result vector for
now.
Second, this was also performing the CSE based on the input
ConstantInt pointer. The underlying buildConstant could potentially
convert the constant depending on the result type, yielding a
different ConstantInt*. Stop allowing the APInt and ConstantInt forms
from automatically casting to the result type to avoid any similar
problems in the future.
llvm-svn: 353077
Summary:
This patch fixes clang-tidy warnings on wasm-only files.
The list of checks used is:
`-*,clang-diagnostic-*,llvm-*,misc-*,-misc-unused-parameters,readability-identifier-naming,modernize-*`
(LLVM's default .clang-tidy list is the same except it does not have
`modernize-*`. But I've seen in multiple CLs in LLVM the modernize style
was recommended and code was fixed based on the style, so I added it as
well.)
The common fixes are:
- Variable names start with an uppercase letter
- Function names start with a lowercase letter
- Use `auto` when you use casts so the type is evident
- Use inline initialization for class member variables
- Use `= default` for empty constructors / destructors
- Use `using` in place of `typedef`
Reviewers: sbc100, tlively, aardappel
Subscribers: dschuff, sunfish, jgravelle-google, yurydelendik, kripken, MatzeB, mgorny, rupprecht, llvm-commits
Differential Revision: https://reviews.llvm.org/D57500
llvm-svn: 353075
This reverts commit b05ecba6d687fcb3078509220c67458bf1d77a2e.
Apparently adding floor breaks AMDGPU somehow, so I have to back this out
while I look into it.
llvm-svn: 353065
See https://github.com/WebAssembly/tool-conventions/pull/95.
This is less typing and IMHO more readable, and it also fits with
our naming around the binary format which tends to use the short name.
e.g.
include/llvm/BinaryFormat/Wasm.h
tools/llvm-objdump/WasmDump.cpp
etc..
Differential Revision: https://reviews.llvm.org/D57611
llvm-svn: 353062
Add an intrinsic that takes two unsigned integers, with their scale
provided as the third argument, and performs fixed-point multiplication
on them.
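A hedged sketch of the semantics in plain C++, assuming 32-bit operands with `Scale` fractional bits:
```
#include <cstdint>

// Widen, multiply, then shift back down so the result keeps the scale.
uint32_t umul_fix(uint32_t A, uint32_t B, unsigned Scale) {
  return (uint32_t)(((uint64_t)A * B) >> Scale);
}
```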
This is a part of implementing fixed point arithmetic in clang where some of
the more complex operations will be implemented as intrinsics.
Differential Revision: https://reviews.llvm.org/D55625
llvm-svn: 353059
This introduces a generic opcode for floating point floor, working towards
selecting @llvm.floor.
Differential Revision: https://reviews.llvm.org/D57484
llvm-svn: 353057
This patch removes hidden codegen flag -print-schedule effectively reverting the
logic originally committed as r300311
(https://llvm.org/viewvc/llvm-project?view=revision&revision=300311).
Flag -print-schedule was originally introduced by r300311 to address PR32216
(https://bugs.llvm.org/show_bug.cgi?id=32216). That bug was about adding "Better
testing of schedule model instruction latencies/throughputs".
These days, we can use llvm-mca to test scheduling models. So there is no longer
a need for flag -print-schedule in LLVM. The main use case for PR32216 is
now addressed by llvm-mca.
Flag -print-schedule is mainly used for debugging purposes, and it is only
actually used by x86 specific tests. We already have extensive (latency and
throughput) tests under "test/tools/llvm-mca" for X86 processor models. That
means, most (if not all) existing -print-schedule tests for X86 are redundant.
When flag -print-schedule was first added to LLVM, several files had to be
modified; a few APIs gained new arguments (see for example method
MCAsmStreamer::EmitInstruction), and MCSubtargetInfo/TargetSubtargetInfo gained
a couple of getSchedInfoStr() methods.
Method getSchedInfoStr() had to originally work for both MCInst and
MachineInstr. The original implementation of getSchedInfoStr() introduced a
subtle layering violation (reported as PR37160 and then fixed/worked-around by
r330615).
In retrospect, that new API could have been designed more optimally. We can
always query MCSchedModel to get the latency and throughput. More importantly,
the "sched-info" string should not have been generated by the subtarget.
Note, r317782 fixed an issue where "print-schedule" didn't work very well in the
presence of inline assembly. That commit is also reverted by this change.
Differential Revision: https://reviews.llvm.org/D57244
llvm-svn: 353043
This is the most important uaddo problem mentioned in PR31754:
https://bugs.llvm.org/show_bug.cgi?id=31754
...but that was overcome in x86 codegen with D57637.
That patch also corrects the inc vs. add regressions seen with the previous attempt at this.
Still, we want to make this matcher complete, so we can potentially canonicalize the pattern
even if it's an 'add 1' operation.
Pattern matching, however, shouldn't assume that we have canonicalized IR, so we match 4
commuted variants of uaddo.
There's also a test with a crazy type to show that the existing CGP transform based on this
matcher is not limited by target legality checks.
I'm not sure if the Hexagon diff means the test is no longer testing what it intended to
test, but that should be solvable in a follow-up.
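A hedged sketch of querying the matcher; the names are illustrative and `using namespace llvm::PatternMatch` is assumed:
```
// Cmp is an icmp that may be the overflow test of an unsigned add.
Value *A, *B, *Sum;
if (match(&Cmp, m_UAddWithOverflow(m_Value(A), m_Value(B), m_Value(Sum)))) {
  // Any of the 4 commuted forms, including the 'add 1' variant, matched.
}
```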
Differential Revision: https://reviews.llvm.org/D57516
llvm-svn: 352998
Summary:
The analysis result of DA caches pointers to AA, SCEV, and LI, but it
never checks for their invalidation. Fix that.
Reviewers: chandlerc, dmgreen, bogner
Reviewed By: dmgreen
Subscribers: hiraditya, bollu, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D56381
llvm-svn: 352986
For the scalar case only.
Also move the similar G_MERGE_VALUES handling to a separate function
and cleanup to make them look more similar.
llvm-svn: 352979
We already have the getConstantOperandVal helper which returns a uint64_t, but along comes the fuzzer, inserts an i128 -1 constant or something, and the whole thing asserts.
I've updated a few obvious cases, and tried to make use of the const reference where possible, but there's more to do. A number of existing oss-fuzz tickets should be fixed if we start using APInt and perform value clamping where necessary.
llvm-svn: 352961
Summary: This makes SEH use the correct stack registers when stack realignment is needed or when variable-size objects are present.
Reviewers: rnk, efriedma, ssijaric, TomTan
Reviewed By: rnk, efriedma
Subscribers: javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D57183
llvm-svn: 352923
This cleans up all CallInst creation in LLVM to explicitly pass a
function type rather than deriving it from the pointer's element-type.
Differential Revision: https://reviews.llvm.org/D57170
llvm-svn: 352909
Background: At the moment, we record the AtomicOrdering of an access in the MMO, but also mark any atomic access as volatile in SelectionDAG. GlobalISEL keeps the two separate, but currently doesn't know how to lower an atomic G_LOAD at all. See https://reviews.llvm.org/D57601 for context.
The definition used for unordered was only checking volatility, not atomicity. As noted above, all atomic MMOs are currently also volatile, so this is a latent bug only. Copy the definition used in IR, after auditing the two (2) uses of the function to be sure the desired semantics are the same.
Differential Revision: https://reviews.llvm.org/D57593
llvm-svn: 352898
The version of FoldConstantArithmetic() that takes arbitrary nodes
was confusingly naming those nodes as constants when they might
not be; also "Cst" reads like "Cast".
llvm-svn: 352884
InlineCost's isInlineViable() is changed to return InlineResult
instead of bool. This provides messages for failure reasons and
makes it possible to get more specific messages for cases where
call sites are not viable for inlining.
Reviewed By: xbolva00, anemet
Differential Revision: https://reviews.llvm.org/D57089
llvm-svn: 352849
For targets where i32 is not a legal type (e.g. 64-bit RISC-V),
LegalizeIntegerTypes must promote the integer operand of ISD::FPOWI. As this
is a signed value, this should be sign-extended.
This patch enables all tests in test/CodeGen/RISCV/float-intrinsics.ll for
RV64; prior to this patch that file couldn't be compiled for RV64 due to an
assertion when performing codegen for fpowi.
Differential Revision: https://reviews.llvm.org/D54574
llvm-svn: 352832
Recommit r352791 after tweaking DerivedTypes.h slightly, so that gcc
doesn't choke on it, hopefully.
Original Message:
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.
Then:
- update the CallInst/InvokeInst instruction creation functions to
take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.
One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.
However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)
Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.
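A hedged sketch of the resulting usage pattern; the function name and signature are illustrative:
```
FunctionCallee Callee = M.getOrInsertFunction(
    "my_runtime_helper",
    FunctionType::get(Type::getVoidTy(Ctx), {Type::getInt8PtrTy(Ctx)},
                      /*isVarArg=*/false));
Builder.CreateCall(Callee, {Arg}); // no pointee-type inspection needed
```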
Differential Revision: https://reviews.llvm.org/D57315
llvm-svn: 352827
This reverts commit f47d6b38c7 (r352791).
Seems to run into compilation failures with GCC (but not clang, where
I tested it). Reverting while I investigate.
llvm-svn: 352800
The FunctionCallee type is effectively a {FunctionType*,Value*} pair,
and is a useful convenience to enable code to continue passing the
result of getOrInsertFunction() through to EmitCall, even once pointer
types lose their pointee-type.
Then:
- update the CallInst/InvokeInst instruction creation functions to
take a Callee,
- modify getOrInsertFunction to return FunctionCallee, and
- update all callers appropriately.
One area of particular note is the change to the sanitizer
code. Previously, they had been casting the result of
`getOrInsertFunction` to a `Function*` via
`checkSanitizerInterfaceFunction`, and storing that. That would report
an error if someone had already inserted a function declaration with
a mismatching signature.
However, in general, LLVM allows for such mismatches, as
`getOrInsertFunction` will automatically insert a bitcast if
needed. As part of this cleanup, cause the sanitizer code to do the
same. (It will call its functions using the expected signature,
however they may have been declared.)
Finally, in a small number of locations, callers of
`getOrInsertFunction` actually were expecting/requiring that a brand
new function was being created. In such cases, I've switched them to
Function::Create instead.
Differential Revision: https://reviews.llvm.org/D57315
llvm-svn: 352791
Summary:
EarlyCSE needs to optimize MemoryPhis after an access is removed and has
special handling for it. This should be handled by MemorySSA instead.
The default remains that MemoryPhis are *not* optimized after an access
is removed.
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, llvm-commits, Prazek
Differential Revision: https://reviews.llvm.org/D57199
llvm-svn: 352787
Summary:
Previously, llvm-nm would report symbols for .debug and .note sections as '?' with an empty section name:
```
00000000 ?
00000000 ?
...
```
With this patch the output more closely resembles GNU nm:
```
00000000 N .debug_abbrev
00000000 n .note.GNU-stack
...
```
This patch calls `getSectionName` for sections that belong to symbols of type `ELF::STT_SECTION`, which returns the name of the section from the section string table.
Reviewers: Bigcheese, davide, jhenderson
Reviewed By: davide, jhenderson
Subscribers: rupprecht, jhenderson, llvm-commits
Differential Revision: https://reviews.llvm.org/D57105
llvm-svn: 352785
The original commit of this function (r129800 in 2011) had a typo where
part of the "Micro" version check was actually comparing against the "Minor"
version number.
llvm-svn: 352776
This is the most important uaddo problem mentioned in PR31754:
https://bugs.llvm.org/show_bug.cgi?id=31754
We were failing to match the canonicalized pattern when it's an 'add 1' operation.
Pattern matching, however, shouldn't assume that we have canonicalized IR, so we
match 4 commuted variants of uaddo.
There's also a test with a crazy type to show that the existing CGP transform
based on this matcher is not limited by target legality checks, but that's a
different problem.
Differential Revision: https://reviews.llvm.org/D57516
llvm-svn: 352766
Summary:
COFF requires that COMDAT name match that of the leader. When we promote
and rename an internal leader in ThinLTO due to an import, ensure we
subsequently rename the associated COMDAT. Similar to D31963 which did
this during ThinLTO module splitting.
Fixes PR40414.
Reviewers: pcc, inglorion
Subscribers: mehdi_amini, dexonsmith, dmajor, llvm-commits
Differential Revision: https://reviews.llvm.org/D57395
llvm-svn: 352763
Introduces a pass that provides a default lowering strategy for the
`experimental.widenable.condition` intrinsic, replacing all its uses with
`i1 true`.
Differential Revision: https://reviews.llvm.org/D56096
Reviewed By: reames
llvm-svn: 352739
And instead just generate a libcall. My motivating example on ARM was a simple:
shl i64 %A, %B
for which the code bloat is quite significant. For other targets that also
accept __int128/i128 such as AArch64 and X86, it is also beneficial for these
cases to generate a libcall when optimising for minsize. On these 64-bit targets,
the 64-bit shifts are of course unaffected because the SHIFT/SHIFT_PARTS
lowering operation action is not set to custom/expand.
Differential Revision: https://reviews.llvm.org/D57386
llvm-svn: 352736
Previously, there were two different scripts for generating VCS headers:
one used by LLVM and one used by Clang. They were both similar, but
different. They were both broken in their own ways; for example, the one
used by Clang didn't properly handle the monorepo, resulting in incorrect
version information reported by Clang.
This change unifies the two scripts by introducing a new script that's
used from both LLVM and Clang, ensures that the new script supports both
monorepo and standalone SVN and Git setups, and removes the old scripts.
Differential Revision: https://reviews.llvm.org/D57063
llvm-svn: 352729
Summary:
We planned to delete this intrinsic and do custom lowering from
`wasm.get.exception`, which has a token argument, to
`EXTRACT_EXCEPTION`, a wasm pseudo instruction that simulates popping a
value from the wasm stack.
To do that, we need to introduce a new `WebAssemblyISD` node for this,
which itself is not a problem, but also have to introduce the
`WebAssemblyISD` namespace in SelectionDAGBuilder.cpp. I don't think any
other targets are doing that in the file. And also putting a
target-specific intrinsic in the common file is a little weird too. (All
other intrinsic functions in this `visitIntrinsicCall` functions are not
target-specific ones. Other target-specific intrinsics are usually
handled in `lib/Target/[TargetName]/[TargetName]ISelLowering.cpp`. The
reason we can't do this is it has a token argument.
Anyway, so I think I prefer the current code with one redundant
intrinsic more than adding one more `WebAssemblyISD` node and
also introducing the `WebAssemblyISD` namespace into
SelectionDAGBuilder.cpp. What do you think?
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D57480
llvm-svn: 352695
This adds instruction selection support for G_FABS in AArch64. It also updates
the existing basic FP tests and adds a selection test for G_FABS.
https://reviews.llvm.org/D57418
llvm-svn: 352684
This introduces a generic instruction for computing the floating point
square root of a value.
Right now, we can't select @llvm.sqrt, so this is working towards fixing that.
llvm-svn: 352668
This is meant to be used with clang's __builtin_dynamic_object_size.
When 'true' is passed to this parameter, the intrinsic has the
potential to be folded into instructions that will be evaluated
at run time. When 'false', the objectsize intrinsic behaviour is
unchanged.
rdar://32212419
Differential revision: https://reviews.llvm.org/D56761
llvm-svn: 352664
The LiveDebugValues pass recognizes spills but not restores, which can
cause large gaps in location information for some variables, depending
on control flow. This patch makes LiveDebugValues recognize restores and
generate appropriate DBG_VALUE instructions.
Reviewers: aprantl, NicolaPrica
Differential Revision: https://reviews.llvm.org/D57271
llvm-svn: 352642
Linker relaxation may change code size. We need to fix up alignment
directives in the text section by inserting nops and the R_RISCV_ALIGN
relocation type, so that the linker can satisfy the alignment by removing nops.
To do this:
1. Add the shouldInsertExtraNopBytesForCodeAlign target hook to calculate
the nops we need to insert.
2. Add the shouldInsertFixupForCodeAlign target hook to insert the
R_RISCV_ALIGN fixup type.
Differential Revision: https://reviews.llvm.org/D47755
llvm-svn: 352616
Summary:
This patch fixes access to FPO streams in native PDB from DbiStream and makes
the code consistent with DbiStreamBuilder.
Patch By: leonid.mashinskiy
Reviewers: zturner, aleksandr.urakov
Reviewed By: zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D56725
llvm-svn: 352615
Summary:
This patch does the following to simplify the asm-goto patch:
- Move isInlineAsm from CallInst to CallBase to share with CallBrInst in the asm-goto patch.
- Forward CallSite's data_operands_begin()/data_operands_end() to CallBase's implementation.
- Forward CallSite's getOperandBundlesAsDefs to CallBase.
Reviewers: chandlerc
Reviewed By: chandlerc
Subscribers: nickdesaulniers, llvm-commits
Differential Revision: https://reviews.llvm.org/D57415
llvm-svn: 352600
Summary:
This switches the EH implementation to the new proposal:
https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md
(The previous proposal was
https://github.com/WebAssembly/exception-handling/blob/master/proposals/old/Exceptions.md)
- Instruction changes
- Now we have one single `catch` instruction that returns an except_ref
value
- `throw` can now take a variable number of operands
- `rethrow` does not have 'depth' argument anymore
- `br_on_exn` queries an except_ref to see if it matches the tag and
branches to the given label if true.
- `extract_exception` is a pseudo instruction that simulates popping
values from the wasm stack. This is to make `br_on_exn`, a very special
instruction, work: `br_on_exn` puts values onto the stack only if it
is taken, and the number of values can vary depending on the tag.
- Now that there's only one `catch` per `try`, this patch removes all special
handling for the terminate pad with a call to `__clang_call_terminate`.
Before, that was the only case with two catch clauses (a normal
`catch` and a `catch_all` per `try`).
- Make `rethrow` act as a terminator like `throw`. This splits BB after
`rethrow` in WasmEHPrepare, and deletes an unnecessary `unreachable`
after `rethrow` in LateEHPrepare.
- Now we stop at all catchpads (because we add a wasm `catch` instruction
that catches all exceptions); this adds a new
`findWasmUnwindDestinations` function in SelectionDAGBuilder.
- Now we use the `br_on_exn` instruction to figure out if an except_ref
matches the current tag or not; LateEHPrepare generates this sequence
for catch pads:
```
catch
block i32
br_on_exn $__cpp_exception
end_block
extract_exception
```
- Branch analysis for `br_on_exn` in WebAssemblyInstrInfo
- Other various misc. changes to switch to the new proposal.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D57134
llvm-svn: 352598
The absolute values of this enum are important at least in that
they get printed by SelectionDAGISel, e.g.:
`Before: -O2 ; After: -O0`
Differential Revision: https://reviews.llvm.org/D57430
llvm-svn: 352587
Since these pass the pointer in m0 unlike other DS instructions, these
need to worry about whether the address is uniform or not. This
assumes the address is dynamically uniform, and just uses
readfirstlane to get a copy into an SGPR.
I don't know if these have the same 16-bit add for the addressing mode
offset problem on SI or not, but I've just assumed they do.
Also includes some misc. changes to avoid test differences between the
LDS and GDS versions.
llvm-svn: 352422
This introduces generic instructions for floating point sin and cos, G_FCOS and
G_FSIN. It updates the tests, etc.
https://reviews.llvm.org/D57197
1/3
llvm-svn: 352400
The only caller has been deleted in r352076, and I'd like to minimize
the amount of code walking Type hierarchies generically, to make it
easier to identify code depending on pointee types.
llvm-svn: 352353
Add generic cost calculations for the SADDSAT/SSUBSAT intrinsics. This uses the generic costs for sadd_with_overflow/ssub_with_overflow, plus an extra sign comparison and selects based on the sign/overflow.
This completes PR40316
Differential Revision: https://reviews.llvm.org/D57239
llvm-svn: 352315
The IR-enforced limit for the address space is 24 bits, but LLT was
only using 23 bits. Additionally, the argument to the constructor was
truncated to 16 bits.
A similar problem still exists for the number of vector elements. The
IR enforces no limit, so if you try to use a vector with > 65535
elements the IRTranslator asserts in the LLT constructor.
llvm-svn: 352264
These intrinsics may return different values every time they are called
and should not be CSE'd. IntrInaccessibleMemOnly appears to be the right
attribute to model this behavior.
Differential Revision: https://reviews.llvm.org/D57259
llvm-svn: 352256
N_FUNC_COLD is a new MachO symbol attribute. It's a hint to the linker
to order a symbol towards the end of its section, to improve locality.
Example:
```
void a1() {}
__attribute__((cold)) void a2() {}
void a3() {}
int main() {
  a1();
  a2();
  a3();
  return 0;
}
```
A linker that supports N_FUNC_COLD will order _a2 to the end of the text
section. From `nm -njU` output, we see:
```
_a1
_a3
_main
_a2
```
Differential Revision: https://reviews.llvm.org/D57190
llvm-svn: 352227
This patch extends TableGen language with !cond operator.
Instead of embedding !if inside !if which can get cumbersome,
one can now use !cond.
Below is an example to convert an integer 'x' into a string:
!cond(!lt(x,0) : "Negative",
!eq(x,0) : "Zero",
!eq(x,1) : "One",
1 : "MoreThanOne")
Reviewed By: hfinkel, simon_tatham, greened
Differential Revision: https://reviews.llvm.org/D55758
llvm-svn: 352185
https://reviews.llvm.org/D57178
Now add a hook in TargetPassConfig to query whether CSE needs to be
enabled. By default this hook returns false only for the -O0 opt level, but
this can be overridden by the target.
As a consequence of CSE being enabled by default for non-O0, a few tests
needed to be updated to not use CSE (by passing -O0 on the run
line).
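A hedged sketch of a target override; the hook and class names here are assumptions based on the description:
```
// Hypothetical target keeping CSE enabled even at -O0.
bool MyTargetPassConfig::isGISelCSEEnabled() const {
  return true;
}
```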
reviewed by: arsenm
llvm-svn: 352126
Summary:
An alignment should be a non-zero positive power of two; anything and everything else is UB.
We should not have checks for all these prerequisites here; it's just UB.
Also, that was likely confusing middle-end passes.
While there, `CreateIntCast()` should be called with `/*isSigned*/ false`.
Think about it, there are two explanations: "An alignment should be positive",
therefore the sign bit is unset, so `zext` and `sext` is equivalent.
Or a second one: you have `i2 0b10` - a valid alignment,
now you `sext` it: `i2 0b110` - no longer valid alignment.
Reviewers: craig.topper, jyknight, hfinkel, erichkeane, rjmccall
Reviewed By: hfinkel, rjmccall
Subscribers: hfinkel, llvm-commits
Differential Revision: https://reviews.llvm.org/D54653
llvm-svn: 352089
It should be emitted when any floating-point operations (including
calls) are present in the object, not just when calls to printf/scanf
with floating point args are made.
The difference caused by this is very subtle: in static (/MT) builds,
on x86-32, in a program that uses floating point but doesn't print it,
the default x87 rounding mode may not be set properly upon
initialization.
This commit also removes the walk of the types pointed to by pointer
arguments in calls. (To assist in opaque pointer types migration --
eventually the pointee type won't be available.)
The latter implies that it will no longer consider a call like
`scanf("%f", &floatvar)` as sufficient to emit _fltused on its
own. And without _fltused, `scanf("%f")` will abort with error R6002. This
new behavior is unlikely to bite anyone in practice (you'd have to
read a float, and do nothing with it!), and also, is consistent with
MSVC.
Differential Revision: https://reviews.llvm.org/D56548
llvm-svn: 352076
Add generic cost calculations for the UADDSAT/USUBSAT intrinsics; this falls back to using the generic costs for uadd_with_overflow/usub_with_overflow plus a select.
Differential Revision: https://reviews.llvm.org/D56907
llvm-svn: 352044
Summary:
UBSan wants to detect when unreachable code is actually reached, so it
adds instrumentation before every `unreachable` instruction. However,
the optimizer will remove code after calls to functions marked with
`noreturn`. To avoid this UBSan removes `noreturn` from both the call
instruction as well as from the function itself. Unfortunately, ASan
relies on this annotation to unpoison the stack by inserting calls to
`_asan_handle_no_return` before `noreturn` functions. This is important
for functions that do not return but access the stack memory, e.g.,
unwinder functions *like* `longjmp` (`longjmp` itself is actually
"double-proofed" via its interceptor). The result is that when ASan and
UBSan are combined, the `noreturn` attributes are missing and ASan
cannot unpoison the stack, so it has false positives when stack
unwinding is used.
Changes:
1. UBSan now adds the `expect_noreturn` attribute whenever it removes
the `noreturn` attribute from a function.
2. ASan additionally checks for the presence of this attribute.
Generated code:
```
call void @__asan_handle_no_return // Additionally inserted to avoid false positives
call void @longjmp
call void @__asan_handle_no_return
call void @__ubsan_handle_builtin_unreachable
unreachable
```
The second call to `__asan_handle_no_return` is redundant. This will be
cleaned up in a follow-up patch.
rdar://problem/40723397
Reviewers: delcypher, eugenis
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D56624
llvm-svn: 352003
Summary:
Profile sample files include the number of times each entry or inlined
call site is sampled. This is translated into the entry count metadata
on functions.
When sample data is being read, if a call site that was inlined
in the sample program is considered cold and not inlined, then
the entry count of the out-of-line functions does not reflect
the current compilation.
In this patch, we note call sites where the function was not inlined
and as a last action of the sample profile loading, we update the
called function's entry count to reflect the calls from these
call sites which are not included in the profile file.
Reviewers: danielcdh, wmi, Kader, modocache
Reviewed By: wmi
Subscribers: davidxl, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D52845
llvm-svn: 352001
Summary:
Renamed setBaseDiscriminator to cloneWithBaseDiscriminator, to match
similar APIs. Also changed its behavior to copy over the other
discriminator components, instead of eliding them.
Renamed cloneWithDuplicationFactor to
cloneByMultiplyingDuplicationFactor, which more closely matches what
this API does.
Reviewers: dblaikie, wmi
Reviewed By: dblaikie
Subscribers: zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D56220
llvm-svn: 351996
Summary:
Previously, no client of ilist traits needed to know about transfers
of nodes within the same list, so as an optimization, ilist doesn't call
transferNodesFromList in that case. However, now there are clients that
want to use ilist traits to cache instruction ordering information to
optimize dominance queries of instructions in the same basic block.
This change updates the existing ilist traits users to detect in-list
transfers and do nothing in that case.
After this change, we can start caching instruction ordering information
in LLVM IR data structures. There are two main ways to do that:
- by putting an order integer into the Instruction class
- by maintaining order integers in a hash table on BasicBlock
I plan to implement and measure both, but I wanted to commit this change
first to enable other out-of-tree ilist clients to implement this
optimization as well.
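A sketch of the detection pattern, with simplified trait and iterator
names (the in-tree callbacks take more parameters):
```
// An ilist callback can now see same-list splices too; clients that
// only track which list a node belongs to simply ignore them.
template <typename NodeTy> struct ParentTrackingTraits {
  using iterator = NodeTy *; // simplified stand-in for ilist's iterator

  void transferNodesFromList(ParentTrackingTraits &SrcTraits,
                             iterator First, iterator Last) {
    if (this == &SrcTraits)
      return; // in-list transfer: the owner is unchanged, nothing to do
    for (iterator I = First; I != Last; ++I)
      setOwner(*I); // cross-list move: update per-node bookkeeping
  }

  void setOwner(NodeTy &) { /* e.g. update the node's parent pointer */ }
};
```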
Reviewers: lattner, hfinkel, chandlerc
Subscribers: hiraditya, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D57120
llvm-svn: 351992
This patch adds a new ReadAdvance definition named ReadInt2Fpu.
ReadInt2Fpu allows x86 scheduling models to accurately describe delays caused by
data transfers from the integer unit to the floating point unit.
ReadInt2Fpu currently defaults to a delay of zero cycles (i.e. no delay) for all
x86 models excluding BtVer2. That means this patch is a functional
change for the Jaguar cpu model only.
Tablegen definitions for instructions (V)PINSR* have been updated to account for
the new ReadInt2Fpu. That read is mapped to the GPR input operand.
On Jaguar, int-to-fpu transfers are modeled as a +6cy delay. Before this patch,
that extra delay was added to the opcode latency. In practice, the insert opcode
only executes for 1cy. Most of the latency is actually contributed by the
so-called operand latency. According to the AMD SOG for family 16h, (V)PINSR*
latency is defined by expression f+1, where f is defined as a forwarding delay
from the integer unit to the fpu.
When printing instruction latency from MCA (see InstructionInfoView.cpp) and LLC
(only when flag -print-schedule is specified), we now need to account for any
extra forwarding delays. We do this by checking if scheduling classes declare
any negative ReadAdvance entries. Quoting a code comment in TargetSchedule.td:
"A negative advance effectively increases latency, which may be used for
cross-domain stalls". When computing the instruction latency for the purpose of
our scheduling tests, we now add any extra delay to the formula. This avoids
regressing existing codegen and mca schedule tests. It comes with the cost of an
extra (but very simple) hook in MCSchedModel.
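A sketch of the adjusted computation, under the stated rule that negative
ReadAdvance entries add to latency (a hypothetical helper, not the actual
MCSchedModel hook):
```
#include <algorithm>
#include <vector>

// Fold cross-domain forwarding delays (modeled as negative ReadAdvance
// cycles) into the printed latency; e.g. on Jaguar, (V)PINSR* is 1cy of
// execution plus a 6cy int-to-fpu delay, printed as 7cy.
int effectiveLatency(int OpcodeLatency, const std::vector<int> &Advances) {
  int ExtraDelay = 0;
  for (int Cycles : Advances)
    if (Cycles < 0)
      ExtraDelay = std::max(ExtraDelay, -Cycles);
  return OpcodeLatency + ExtraDelay;
}
```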
Differential Revision: https://reviews.llvm.org/D57056
llvm-svn: 351965
This patch replaces the existing LLVMVectorSameWidth matcher with LLVMScalarOrSameVectorWidth.
The matching args must be either scalars or vectors with the same number of elements, but in either case the scalar/element type can differ, specified by LLVMScalarOrSameVectorWidth.
I've updated the _overflow intrinsics to demonstrate this, allowing them to return an i1 or <N x i1> overflow result, matching the scalar/vector width of the other (add/sub/mul) result type.
The masked load/store/gather/scatter intrinsics have also been updated to use this, although since we specify the reference type to be llvm_anyvector_ty, the mask is guaranteed to be <N x i1>, so there is no change in behaviour.
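In plain C++ terms, the widened matching lets the overflow result mirror
the shape of the arithmetic result, one flag per element (an analogy,
not the intrinsic definition):
```
// Scalar case: i32 result, i1 flag. Vector case: <N x i32> results
// paired with an <N x i1>-like mask of per-lane overflow flags.
void vuadd_with_overflow(const unsigned *A, const unsigned *B,
                         unsigned *Sum, bool *Overflow, int N) {
  for (int I = 0; I < N; ++I)
    Overflow[I] = __builtin_add_overflow(A[I], B[I], &Sum[I]);
}
```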
Differential Revision: https://reviews.llvm.org/D57090
llvm-svn: 351957
#pragma clang loop pipeline(disable)
Disables the software pipelining (SWP) optimization for the next loop.
“disable” is the only possible value.
#pragma clang loop pipeline_initiation_interval(number)
Sets the initiation interval of the SWP optimization for the
next loop to the given number, which must be a positive integer.
These pragmas can be used for debugging or for reducing compile
time. Disabling SWP for specific loops saves compilation time and
helps find bugs by excluding those loops from pipelining. Setting
the initiation interval to a specific value saves compilation time
by skipping extra pipeliner passes, and makes it possible to check
the created schedule for a given initiation interval.
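A usage sketch of both pragmas (loop bodies and the interval value are
illustrative):
```
void saxpy(float *X, float *Y, float A, int N) {
#pragma clang loop pipeline(disable)
  for (int I = 0; I < N; ++I) // SWP is skipped for this loop
    Y[I] += A * X[I];

#pragma clang loop pipeline_initiation_interval(10)
  for (int I = 0; I < N; ++I) // the pipeliner targets II = 10 here
    Y[I] -= A * X[I];
}
```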
This is the LLVM part of the fix.
Clang part of fix: https://reviews.llvm.org/D55710
Patch by Alexey Lapshin!
Differential Revision: https://reviews.llvm.org/D56403
llvm-svn: 351923
Each hwasan check requires emitting a small piece of code like this:
https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html#memory-accesses
The problem with this is that these code blocks typically bloat code
size significantly.
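For reference, each instrumentation point boils down to a tag comparison
along these lines (a simplification of the design linked above, not the
emitted code):
```
#include <cstdint>

// Top-byte pointer tag vs. shadow memory tag, 16-byte shadow granules.
bool tagMismatch(uintptr_t Addr, const uint8_t *ShadowBase) {
  uint8_t PtrTag = static_cast<uint8_t>(Addr >> 56);
  uint8_t MemTag = ShadowBase[(Addr & ((1ULL << 56) - 1)) >> 4];
  return PtrTag != MemTag; // a mismatch takes the slow path and reports
}
```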
An obvious solution is to outline these blocks of code. In fact, this
has already been implemented under the -hwasan-instrument-with-calls
flag. However, as currently implemented this has a number of problems:
- The functions use the same calling convention as regular C functions.
This means that the backend must spill all temporary registers as
required by the platform's C calling convention, even though the
check only needs two registers on the hot path.
- The functions take the address to be checked in a fixed register,
which increases register pressure.
Both of these factors can diminish the code size effect and increase
the performance hit of -hwasan-instrument-with-calls.
The solution that this patch implements is to involve the aarch64
backend in outlining the checks. An intrinsic and pseudo-instruction
are created to represent a hwasan check. The pseudo-instruction
is register allocated like any other instruction, and we allow the
register allocator to select almost any register for the address to
check. A particular combination of (register selection, type of check)
triggers the creation in the backend of a function to handle the check
for specifically that pair. The resulting functions are deduplicated by
the linker. The pseudo-instruction (really the function) is specified
to preserve all registers except for the registers that the AAPCS
specifies may be clobbered by a call.
To measure the code size and performance effect of this change, I
took a number of measurements using Chromium for Android on aarch64,
comparing a browser with inlined checks (the baseline) against a
browser with outlined checks.
Code size: the size of .text decreases from 243897420 to 171619972
bytes, a 30% reduction.
Performance: Using Chromium's blink_perf.layout microbenchmarks I
measured a median performance regression of 6.24%.
The fact that a perf/size tradeoff is evident here suggests that
we might want to make the new behaviour conditional on -Os/-Oz.
But for now I've enabled it unconditionally, my reasoning being that
hwasan users typically expect a relatively large perf hit, and ~6%
isn't really adding much. We may want to revisit this decision in
the future, though.
I also tried experimenting with varying the number of registers
selectable by the hwasan check pseudo-instruction (which would result
in fewer variants being created), on the hypothesis that creating
fewer variants of the function would expose another perf/size tradeoff
by reducing icache pressure from the check functions at the cost of
register pressure. Although I did observe a code size increase with
fewer registers, I did not observe a strong correlation between the
number of registers and the performance of the resulting browser on the
microbenchmarks, so I conclude that we might as well use ~all registers
to get the maximum code size improvement. My results are below:
Regs | .text size | Perf hit
-----+------------+---------
~all | 171619972 | 6.24%
16 | 171765192 | 7.03%
8 | 172917788 | 5.82%
4 | 177054016 | 6.89%
Differential Revision: https://reviews.llvm.org/D56954
llvm-svn: 351920
Some member functions of StringRef/SmallVector/StringSwitch
are marked with the `always_inline` attribute. The result
is that the body of these functions is not emitted, hence the
debugger can't evaluate them (a typical example is
StringRef::size()), even if the code is built with `-O0`.
The main driver behind this was getting faster turnaround
when running `check-llvm`. A previous commit clarifies how to
get good performance when running the testsuite, so we can
get rid of the attribute here.
An alternative approach considered was using the `used` attribute,
but in the end we preferred not to slap yet another attribute on
these functions.
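The pattern being dropped, sketched (simplified; the real code uses
LLVM_ATTRIBUTE_ALWAYS_INLINE from Support/Compiler.h):
```
#include <cstddef>

class StringRefLike {
  const char *Data = nullptr;
  size_t Length = 0;

public:
  // Previously forced inline, so no out-of-line body was emitted and
  // the debugger could not evaluate calls like size(), even at -O0.
  /* LLVM_ATTRIBUTE_ALWAYS_INLINE */ size_t size() const { return Length; }
};
```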
llvm-svn: 351891
For AMDGPU the shift amount is never 64-bit, and
this needs to use a 32-bit shift.
X86 uses i8, but seemed to be hacking around this before.
llvm-svn: 351882
Summary: Initial function labels must follow the debug location for the correct relocation info generation.
Reviewers: tra, jlebar, echristo
Subscribers: jholewinski, llvm-commits
Differential Revision: https://reviews.llvm.org/D45784
llvm-svn: 351843
llvm::is_trivially_copyable portability is verified at compile time using
std::is_trivially_copyable as the reference implementation.
Unfortunately, the latter is not available on all platforms, so introduce
a proper configure check to detect if it is available on the target platform.
In a similar manner, std::is_copy_assignable is not fully supported
by gcc 4.9, so provide a portable (?) implementation instead.
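A sketch of the cross-check this enables, assuming an illustrative
configure macro name:
```
#include <type_traits>

#include "llvm/Support/type_traits.h" // llvm::is_trivially_copyable

// When the standard trait is available, verify the portable
// implementation agrees with it at compile time (macro name is
// illustrative, not the exact generated config symbol).
#ifdef HAVE_STD_IS_TRIVIALLY_COPYABLE
static_assert(std::is_trivially_copyable<int *>::value ==
                  llvm::is_trivially_copyable<int *>::value,
              "portable trait must match the standard one");
#endif
```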
Differential Revision: https://reviews.llvm.org/D57018
llvm-svn: 351820