Another step toward a TableGen-like structure for the AArch64
RegisterBankInfo. By doing this, we also save a bit of compile time for
the exact same output.
llvm-svn: 282550
In some cases, non-special member functions were being marked as defaulted
in templated classes. This can cause problems for later code that expects
a defaulted function to be one of the special member functions. Fix the
check so that templated class members are checked the same way as
non-templated class members are.
llvm-svn: 282547
The 'or' case shows up in copysign. The copysign code also had
redundant checking for a scalar zero operand with 'and', so I
removed that.
I'm not sure how to test vector 'and', 'andn', and 'xor' yet,
but it seems better to just include all of the logic ops since
we're fixing 'or' anyway.
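For reference, here is a standalone illustration (not code from this
patch) of why these logic ops show up once copysign is lowered to integer
bit operations:

  #include <cstdint>
  #include <cstring>

  // copysign(Mag, Sgn): clear Mag's sign bit (an 'andn' of the sign mask),
  // isolate Sgn's sign bit ('and'), and merge the two halves ('or').
  double copysignBits(double Mag, double Sgn) {
    uint64_t M, S;
    std::memcpy(&M, &Mag, sizeof(M));
    std::memcpy(&S, &Sgn, sizeof(S));
    const uint64_t SignMask = 1ULL << 63;
    uint64_t R = (M & ~SignMask) | (S & SignMask);
    std::memcpy(&Mag, &R, sizeof(R));
    return Mag;
  }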
llvm-svn: 282546
Summary:
The current implementation of isConstantPhysReg() checks for defs of
physical registers to determine if they are constant. Some
architectures (e.g. AArch64 XZR/WZR) have registers that are constant
and may be used as destinations to indicate the generated value is
discarded, preventing isConstantPhysReg() from returning true. This
change adds a TargetRegisterInfo hook that overrides the no defs check
for cases such as this.
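For illustration only, a minimal sketch of the idea behind such a hook
(placeholder names and register numbers, not the actual hook signature
from this patch):

  // A target can declare that certain physical registers always hold a
  // constant value, so a def that merely discards a result (e.g. into
  // XZR/WZR) no longer disqualifies them.
  struct TargetRegisterInfoSketch {
    virtual ~TargetRegisterInfoSketch() = default;
    // Default: rely on the usual "no defs" check only.
    virtual bool isConstantPhysRegOverride(unsigned PhysReg) const {
      return false;
    }
  };

  enum : unsigned { WZR = 31, XZR = 63 }; // placeholder register numbers

  struct AArch64Sketch : TargetRegisterInfoSketch {
    bool isConstantPhysRegOverride(unsigned PhysReg) const override {
      return PhysReg == WZR || PhysReg == XZR; // always read as zero
    }
  };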
Reviewers: MatzeB, qcolombet, t.p.northover, jmolloy
Subscribers: junbuml, aemerson, mcrosier, rengolin
Differential Revision: https://reviews.llvm.org/D24570
llvm-svn: 282543
There is really no reason for these to be separate.
The vectorizer started a pretty bad tradition in which the text of the
missed remarks is fairly meaningless (e.g. "vectorization failed"); there,
you have to query the analysis remarks to get the full picture.
I think we should just explain the reason for missing the optimization
in the missed remark when possible. Analysis remarks should provide
information that the pass gathers regardless of whether the optimization
succeeds or not.
llvm-svn: 282542
(Re-committed after moving the template specialization under the yaml
namespace. GCC was complaining about this.)
This allows various presentation of this data using an external tool.
This was first recommended here[1].
As an example, consider this module:
1 int foo();
2 int bar();
3
4 int baz() {
5 return foo() + bar();
6 }
The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):
remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
Function: baz
Hotness: 30
Args:
- Callee: foo
- String: will not be inlined into
- Caller: baz
...
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
Function: baz
Hotness: 30
Args:
- Callee: bar
- String: will not be inlined into
- Caller: baz
...
This is a summary of the high-level decisions:
* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:
  ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
               DEBUG_TYPE, "NotInlined", &I)
           << NV("Callee", Callee) << " will not be inlined into "
           << NV("Caller", CS.getCaller()) << setIsVerbose());
NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.
Subsequent patches will update ORE users to use the new streaming API.
* I am using YAML I/O for writing the YAML file. YAML I/O requires you
to specify reading and writing at once, but reading is highly non-trivial
for some of the more complex LLVM types. Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports
writing only and asserts that we never read. (A rough sketch of such a
write-only mapping appears after this list of bullets.)
On the other hand, I did verify experimentally that the class hierarchy
starting at DiagnosticInfoOptimizationBase can be mapped back from the
YAML generated here (see D24479).
* The YAML stream is stored in the LLVM context.
* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".
* As before, hotness is computed in the analysis pass rather than in
DiagnosticInfo. This avoids a layering problem, since BFI is in
Analysis while DiagnosticInfo is in IR.
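As mentioned above, here is a rough, self-contained sketch of what a
write-only YAML I/O mapping can look like (RemarkSketch is a stand-in
type, not the class this patch actually maps):

  #include "llvm/Support/YAMLTraits.h"
  #include <cassert>
  #include <string>

  struct RemarkSketch {
    std::string Pass;
    std::string Name;
    std::string Function;
  };

  namespace llvm {
  namespace yaml {
  // The specialization lives in the llvm::yaml namespace, which is the
  // detail the re-commit note at the top of this message refers to.
  template <> struct MappingTraits<RemarkSketch> {
    static void mapping(IO &io, RemarkSketch &R) {
      assert(io.outputting() && "reading is not implemented");
      io.mapRequired("Pass", R.Pass);
      io.mapRequired("Name", R.Name);
      io.mapRequired("Function", R.Function);
    }
  };
  } // end namespace yaml
  } // end namespace llvm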
[1] https://reviews.llvm.org/D19678#419445
Differential Revision: https://reviews.llvm.org/D24587
llvm-svn: 282539
Support overriding the Doxygen & OCamldoc install directories,
and provide more FHS-compliant defaults for both of them. This extends
r282240, which added this override for Sphinx-built documentation.
LLVM_INSTALL_DOXYGEN_HTML_DIR and LLVM_INSTALL_OCAMLDOC_HTML_DIR are
added to control the locations where Doxygen-generated and
OCamldoc-generated HTML docs are installed, respectively. They both
specify CMake-style install paths, and can therefore be either relative
to the install prefix or absolute.
The new defaults are subdirectories of share/doc/llvm, and replace
the previous 'docs/html' and 'docs/ocaml/html' directories that
resulted in creating an invalid '/usr/docs' directory which furthermore
lacked proper namespacing for the LLVM package. The new defaults are
consistent with
the ones used for Sphinx HTML documentation, differing only in the last
component. Since the 'html' subdirectory is already used for Sphinx
docs, the 'doxygen-html' and 'ocaml-html' directories are used instead.
Differential Revision: https://reviews.llvm.org/D24935
llvm-svn: 282536
It turns out several external projects relied on LLVM printing statistics
on exit. Go back to that behaviour by default, and add an optional
parameter to disable it.
llvm-svn: 282532
Handle this in the exact same way as IMAGE_REL_AMD64_SECREL
and IMAGE_REL_I386_SECREL.
Differential revision: https://reviews.llvm.org/D24608
llvm-svn: 282531
Summary:
This is a register context converter from the Minidump format to the
Linux register context. It knows the layout of the register context in
the Minidump file (which is the same as on Windows, FYI) and as a result
emits a binary data buffer that matches the Linux register context binary
layout.
This way we can reuse the existing RegisterContextLinux_x86_64 and
RegisterContextCorePOSIX_x86_64 classes.
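A very rough sketch of the conversion idea (the struct layouts below are
placeholders, not the real Minidump or Linux register layouts):

  #include <cstdint>
  #include <cstring>
  #include <vector>

  struct MinidumpGPR { uint64_t Rax, Rbx, Rip, Rsp; }; // Windows-style order
  struct LinuxGPR    { uint64_t Rip, Rsp, Rax, Rbx; }; // Linux-style order

  // Emit a byte buffer laid out the way the Linux register context classes
  // expect, so the existing machinery can consume it unchanged.
  std::vector<uint8_t> convertToLinuxLayout(const MinidumpGPR &Src) {
    LinuxGPR Dst{Src.Rip, Src.Rsp, Src.Rax, Src.Rbx};
    std::vector<uint8_t> Buf(sizeof(Dst));
    std::memcpy(Buf.data(), &Dst, sizeof(Dst));
    return Buf;
  }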
Reviewers: labath, zturner
Subscribers: beanz, mgorny, lldb-commits, amccarth
Differential Revision: https://reviews.llvm.org/D24919
llvm-svn: 282529
Revert r282483 as it causes build failures due to missing symbols when
not linking to -lgcc_s (i.e. doing a pure LLVM stack build). The patch can
be reintroduced when the build system is fixed to add all needed
libraries (libunwind, compiler-rt).
llvm-svn: 282524
LLVM developers might be surprised to learn that there are blocks
without valid insertion points (catchswitch), so it seems worth calling
that out explicitly. Also add a FIXME about what we should really be
doing if we ever need to make optimized Windows EH code debuggable.
While I'm here, make auto usage more consistent with LLVM standards and
avoid an unnecessary call to insertBefore.
llvm-svn: 282521
A landing pad can have live-in registers that are defined by the runtime,
not the program (exception pointer register and exception selector
register). Make sure to recognize that case and not link these registers
with any defs in the program.
Each landing pad will have phi nodes added at the beginning to provide
definitions of these registers, but the uses of those phi nodes will not
have any reaching defs.
llvm-svn: 282519
Summary:
The previous output was confusing, as it would print "Target triple:
x86_64-unknown-linux-gnu" even when LLVM_HOST_TRIPLE or
LLVM_DEFAULT_TARGET_TRIPLE were set on the CMake command line.
Patch by: Alex Richardson!
Reviewers: beanz
Subscribers: Eugene.Zelenko
Differential Revision: https://reviews.llvm.org/D17067
llvm-svn: 282516
I don't expect `PendingLoopPredicates` to have very many
elements (e.g. when -O3'ing the sqlite3 amalgamation,
`PendingLoopPredicates` has at most 3 elements). So now we use a
`SmallPtrSet` for it instead of the more heavyweight `DenseSet`.
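For reference, a small usage sketch (not code from this patch) showing why
`SmallPtrSet` is the cheaper fit for a set that almost never grows:

  #include "llvm/ADT/SmallPtrSet.h"

  void example() {
    int A = 0, B = 1;
    // The first 4 pointers are stored inline, so the common case performs
    // no hashing or heap allocation, unlike a DenseSet.
    llvm::SmallPtrSet<int *, 4> Pending;
    Pending.insert(&A);
    if (Pending.insert(&B).second) {
      // &B was newly inserted and still fits in the inline buffer.
    }
  }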
llvm-svn: 282511
New routines should be used for atomic operations of the form
"<int> OP= <float>" when <int> is unsigned. Using the
__kmpc_atomic_fixed<bits>_<op>_fp functions produces incorrect results.
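An illustrative example (not from this patch) of the kind of construct
affected, an unsigned integer atomically updated with a floating-point
operand:

  unsigned int Counter = 100;

  void scale() {
  #pragma omp atomic
    Counter *= 0.5; // lowered to one of the __kmpc_atomic_* entry points
  }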
Differential Revision: https://reviews.llvm.org/D24756
llvm-svn: 282509
We only use the .o-style debug info here regardless, so having
it run all three debuginfo styles was a waste.
This also strips out the custom build function and uses the
TestBase.build() method.
llvm-svn: 282508
This change set disables creation of the monitor thread by default. The global
counter maintained by the monitor thread was replaced by logic that uses the
system time directly, and cyclic yielding on the Linux target was also removed
since there was no clear benefit to using it. Setting the KMP_USE_MONITOR
variable (=1) re-enables creation of the monitor thread if it is really needed
for some reason.
Differential Revision: https://reviews.llvm.org/D24739
llvm-svn: 282507
Variables are sometimes missing their debug location information in
blocks in which the variables should be available. This would occur
when one or more predecessor blocks had not yet been visited by the
routine which propagated the information from predecessor blocks.
This is addressed by only considering predecessor blocks which have
already been visited.
The solution to this problem was suggested by Daniel Berlin on the
LLVM developer mailing list.
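A minimal standalone sketch of the visited-predecessor rule, using
placeholder types rather than the pass's actual data structures:

  #include <map>
  #include <set>
  #include <vector>

  using Block = int;
  using LocSet = std::set<int>; // available variable locations per block

  // Join locations from predecessors, but only from blocks that have
  // already been visited; an unprocessed (empty) predecessor must not wipe
  // out locations that are in fact available.
  LocSet joinPredecessors(const std::vector<Block> &Preds,
                          const std::set<Block> &Visited,
                          const std::map<Block, LocSet> &OutLocs) {
    LocSet Result;
    bool First = true;
    for (Block P : Preds) {
      if (!Visited.count(P))
        continue;                     // the key change: skip unvisited preds
      const LocSet &PredLocs = OutLocs.at(P);
      if (First) {
        Result = PredLocs;
        First = false;
      } else {
        LocSet Intersection;
        for (int L : Result)
          if (PredLocs.count(L))
            Intersection.insert(L);
        Result = std::move(Intersection);
      }
    }
    return Result;
  }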
Differential Revision: https://reviews.llvm.org/D24927
llvm-svn: 282506
llvm::join_items is similar to llvm::join, which produces a string
by concatenating a sequence of values together, separated by a
given separator. It differs in that the arguments to
llvm::join() are same-type members of a container, whereas the
arguments to llvm::join_items are arbitrary types passed into
a variadic template. The only requirement on the parameters to
llvm::join_items (including the separator itself) is
that they be implicitly convertible to std::string or have
an overload of std::string::operator+.
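A small usage example (assuming the interface described above, with the
separator as the first argument):

  #include "llvm/ADT/StringExtras.h"
  #include <string>

  void demo() {
    std::string Sub = "local";
    // Items may be string literals, std::string values, and so on.
    std::string Path = llvm::join_items("/", "usr", Sub, "bin");
    // Path == "usr/local/bin"
  }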
Differential Revision: https://reviews.llvm.org/D24880
llvm-svn: 282502
This allows various presentation of this data using an external tool.
This was first recommended here[1].
As an example, consider this module:
1 int foo();
2 int bar();
3
4 int baz() {
5 return foo() + bar();
6 }
The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):
remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 10 }
Function: baz
Hotness: 30
Args:
- Callee: foo
- String: will not be inlined into
- Caller: baz
...
--- !Missed
Pass: inline
Name: NotInlined
DebugLoc: { File: /tmp/s.c, Line: 5, Column: 18 }
Function: baz
Hotness: 30
Args:
- Callee: bar
- String: will not be inlined into
- Caller: baz
...
This is a summary of the high-level decisions:
* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:
  ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
               DEBUG_TYPE, "NotInlined", &I)
           << NV("Callee", Callee) << " will not be inlined into "
           << NV("Caller", CS.getCaller()) << setIsVerbose());
NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.
Subsequent patches will update ORE users to use the new streaming API.
* I am using YAML I/O for writing the YAML file. YAML I/O requires you
to specify reading and writing at once, but reading is highly non-trivial
for some of the more complex LLVM types. Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports
writing only and asserts that we never read.
On the other hand, I did verify experimentally that the class hierarchy
starting at DiagnosticInfoOptimizationBase can be mapped back from the
YAML generated here (see D24479).
* The YAML stream is stored in the LLVM context.
* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".
* As before, hotness is computed in the analysis pass rather than in
DiagnosticInfo. This avoids a layering problem, since BFI is in
Analysis while DiagnosticInfo is in IR.
[1] https://reviews.llvm.org/D19678#419445
Differential Revision: https://reviews.llvm.org/D24587
llvm-svn: 282499