Summary:
This change addresses https://github.com/google/sanitizers/issues/766. I
tested the change with make check-asan and the newly added test case.
Reviewers: ygribov, kcc, alekseyshl
Subscribers: kubamracek, llvm-commits
Patch by mrigger
Differential Revision: https://reviews.llvm.org/D30384
llvm-svn: 298650
Summary:
When dumping these records from an object file section, we should use
only one type database. However, when dumping from a PDB, we should use
two: one for the type stream and one for the IPI stream.
Certain type records that normally live in the .debug$T object file
section get moved over to the IPI stream of the PDB file and they get
new indices.
So far, I've noticed that the MSVC linker always moves these records
into IPI:
- LF_FUNC_ID
- LF_MFUNC_ID
- LF_STRING_ID
- LF_SUBSTR_LIST
- LF_BUILDINFO
- LF_UDT_MOD_SRC_LINE
These records have index fields that can point into TPI or IPI. In
particular, LF_SUBSTR_LIST and LF_BUILDINFO point to LF_STRING_ID
records to describe compilation command lines.
I've modified the dumper to have an optional pointer to the item DB, and
to do type name lookup of these fields in that DB. See printItemIndex.
The result is that our pdbdump-headers.test is more faithful to the PDB
contents and the output is less confusing.
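To make the lookup policy concrete, here is a minimal, self-contained
sketch. The class and member names (TypeNameDB, DumperSketch,
resolveItemIndex) are hypothetical stand-ins, not the actual LLVM
classes; only the fallback policy mirrors what printItemIndex does.

#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-in for a CodeView type name database.
struct TypeNameDB {
  std::vector<std::string> Names; // indexed by record index
  std::string getName(uint32_t Index) const {
    return Index < Names.size() ? Names[Index] : "<unknown>";
  }
};

struct DumperSketch {
  const TypeNameDB *TypeDB = nullptr; // TPI stream or .debug$T section
  const TypeNameDB *ItemDB = nullptr; // IPI stream (PDB only)

  // Item indices in LF_FUNC_ID, LF_STRING_ID, LF_BUILDINFO, etc. resolve
  // against IPI when dumping a PDB, and against the single object-file
  // database otherwise.
  std::string resolveItemIndex(uint32_t Index) const {
    const TypeNameDB *DB = ItemDB ? ItemDB : TypeDB;
    return DB ? DB->getName(Index) : "<no database>";
  }
};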
Reviewers: ruiu
Subscribers: amccarth, zturner, llvm-commits
Differential Revision: https://reviews.llvm.org/D31309
llvm-svn: 298649
The old candidate collection method in the outliner caused some very large
regressions in compile time on large tests. For MultiSource/Benchmarks/7zip it
caused a 284.07 s or 1156% increase in compile time. On average, using the
SingleSource/MultiSource tests, it caused an average increase of 8 seconds in
compile time (something like 1000%).
This commit replaces that candidate collection method with a new one which
only visits each node in the tree once. This reduces the worst compile time
increase (still 7zip) to a 0.542 s overhead (22%) and the average compile time
increase on SingleSource and MultiSource to 0.018 s (4%).
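The gist of the new approach is a single traversal of the suffix tree.
Below is a heavily simplified, hypothetical sketch; the real
MachineOutliner data structures and cost model differ, but the
single-visit worklist shape is the point.

#include <cstddef>
#include <vector>

// Hypothetical suffix-tree node; the real outliner stores more state.
struct Node {
  std::vector<Node *> Children;
  std::size_t Length = 0;      // length of the repeated sequence here
  std::size_t Occurrences = 0; // how many times that sequence appears
};

// Every node is pushed and popped exactly once, so candidate collection
// is linear in the size of the tree instead of revisiting subtrees.
std::vector<Node *> collectCandidates(Node *Root, std::size_t MinLength,
                                      std::size_t MinOccurrences) {
  std::vector<Node *> Candidates;
  std::vector<Node *> Worklist;
  if (Root)
    Worklist.push_back(Root);
  while (!Worklist.empty()) {
    Node *N = Worklist.back();
    Worklist.pop_back();
    if (N->Length >= MinLength && N->Occurrences >= MinOccurrences)
      Candidates.push_back(N);
    for (Node *Child : N->Children)
      Worklist.push_back(Child);
  }
  return Candidates;
}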
llvm-svn: 298648
Summary:
Loop unrolling and indirect call promotion (ICP) make sample profile annotation much harder in the backend, so disable these two optimizations in the ThinLTO compile phase.
Will add a test in cfe in a separate patch.
Reviewers: tejohnson
Reviewed By: tejohnson
Subscribers: mehdi_amini, llvm-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31217
llvm-svn: 298646
Now that we call ShrinkDemandedConstant on the RHS of sub, this should be taken care of. This code doesn't trigger on any in-tree regression tests, but did before ShrinkDemandedConstant was added to the RHS.
llvm-svn: 298644
GCC has the alloc_align attribute, which is similar to assume_aligned, except that the attribute's argument is the index of the integer parameter that supplies the alignment.
These are the LLVM changes required to make this happen.
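As a usage illustration (the function names below are made up and not
part of this change): alloc_align's argument is the 1-based position of
the parameter carrying the alignment, whereas assume_aligned takes the
alignment value itself.

#include <cstddef>

// Parameter 2 ("alignment") supplies the alignment of the returned pointer.
void *my_aligned_alloc(std::size_t size, std::size_t alignment)
    __attribute__((alloc_align(2)));

// For comparison, assume_aligned states a fixed alignment directly.
void *my_page_alloc(std::size_t size) __attribute__((assume_aligned(4096)));

void *make_buffer(std::size_t n) {
  // The optimizer may now assume the result is 64-byte aligned.
  return my_aligned_alloc(n, 64);
}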
Differential Revision: https://reviews.llvm.org/D29598
llvm-svn: 298643
It is not guaranteed that the memory used for MachineBasicBlocks in
the previous MachineFunction hasn't been freed, so holding on to a
pointer into the last function isn't correct. In particular, I have
observed the sret.ll test case failing because the first BasicBlock in
the new function happened to be allocated at exactly the same memory as
the previously saved (and since deleted) PrevInstBB.
llvm-svn: 298642
The new test asserts that scalarized memory operations get memcheck metadata
added even if the loop is only unrolled.
Differential Revision: https://reviews.llvm.org/D30972
llvm-svn: 298641
Using an AssemblyAnnotationWriter, the LVI printer emits LVI info for
instructions and basic blocks. So we explicitly need to print the LVI
info for the function's arguments as well (these are Values, not
Instructions).
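For context, here is a rough sketch of how a printer hangs off
AssemblyAnnotationWriter. This is not the actual LVI printer and the
annotation text is a placeholder; the point is the callback shape:
instructions and basic blocks get callbacks, while arguments must be
walked explicitly (here from the function callback).

#include "llvm/IR/AssemblyAnnotationWriter.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FormattedStream.h"

using namespace llvm;

struct AnnotatorSketch : public AssemblyAnnotationWriter {
  void emitFunctionAnnot(const Function *F,
                         formatted_raw_ostream &OS) override {
    // Arguments are Values, not Instructions, so emit their info here.
    for (const Argument &Arg : F->args())
      OS << "; info for argument " << Arg.getName() << '\n';
  }
  void emitInstructionAnnot(const Instruction *I,
                            formatted_raw_ostream &OS) override {
    OS << "; info for instruction\n";
  }
};

void printAnnotated(const Module &M, raw_ostream &OS) {
  AnnotatorSketch Annotator;
  M.print(OS, &Annotator);
}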
llvm-svn: 298640
Summary:
Clang companion patch to LLVM patch D31027, which adds support
for emitting a minimized bitcode file for use in the thin link step.
Add a cc1 option -fthin-link-bitcode=<file> to trigger this behavior.
Depends on D31027.
Reviewers: mehdi_amini, pcc
Subscribers: cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D31050
llvm-svn: 298639
Summary:
The cumulative size of the bitcode files for a very large application
can be huge, particularly with -g. In a distributed build environment,
all of these files must be sent to the remote build node that performs
the thin link step, and this can exceed size limits.
The thin link actually only needs the summary along with a bitcode
symbol table. Until we have a proper bitcode symbol table, simply
stripping the debug metadata results in significant size reduction.
Add support for an option to additionally emit minimized bitcode
modules, just for use in the thin link step, which for now just strips
all debug metadata. I plan to add a cc1 option so this can be invoked
easily during the compile step.
However, care must be taken to ensure that these minimized thin link
bitcode files produce the same index as with the original bitcode files,
as these original bitcode files will be used in the backends.
Specifically:
1) The module hash used for caching is typically produced by hashing the
written bitcode, and we want to include the hash that would correspond
to the original bitcode file. This is because we want to ensure that
changes in the stripped portions affect caching. Added plumbing to emit
the same module hash in the minimized thin link bitcode file.
2) The module paths in the index are constructed from the module ID of
each thin-linked bitcode file, which is typically generated automatically from
the input file path. This is the path used for finding the modules to
import from, and obviously we need this to point to the original bitcode
files. Added gold-plugin support to take a suffix replacement during the
thin link that is used to override the identifier on the MemoryBufferRef
constructed from the loaded thin link bitcode file. The assumption is
that the build system can specify that the minimized bitcode file has a
name that is similar but uses a different suffix (e.g. out.thinlink.bc
instead of out.o).
Added various tests to ensure that we get identical index files out of
the thin link step.
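To make point 2 concrete, here is a hypothetical helper (plain C++, not
the actual gold-plugin code) that maps the thin link module identifier
back to the original path, given the suffix pair supplied by the build
system:

#include <string>

// e.g. replaceSuffix("out.thinlink.bc", ".thinlink.bc", ".o") == "out.o"
std::string replaceSuffix(std::string Path, const std::string &OldSuffix,
                          const std::string &NewSuffix) {
  if (Path.size() >= OldSuffix.size() &&
      Path.compare(Path.size() - OldSuffix.size(), OldSuffix.size(),
                   OldSuffix) == 0)
    Path.replace(Path.size() - OldSuffix.size(), OldSuffix.size(),
                 NewSuffix);
  return Path;
}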
Reviewers: mehdi_amini, pcc
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31027
llvm-svn: 298638
Correct class-template deprecation behavior
Based on the comment in the test, and my reading of the standard, a deprecated warning should be issued in the following case:
template<typename T> [[deprecated]] class Foo{}; Foo<int> f;
This was not the case, because the ClassTemplateSpecializationDecl creation did not also copy the deprecated attribute.
Note: I did NOT audit the complete set of attributes to see WHICH ones should be copied, so instead I simply copy ONLY the deprecated attribute.
Previous DiffRev: https://reviews.llvm.org/D27486, was reverted.
This patch fixes the issues brought up here by the reverter: https://reviews.llvm.org/rL298410
Differential Revision: https://reviews.llvm.org/D31245
llvm-svn: 298634
Given the case below:
%y = shl %x, n
%z = ashr %y, m
when n = m, SCEV models it as sext(trunc(x)). This patch tries to handle
the case where n > m by using sext(mul(trunc(x), 2^(n-m))) as the SCEV
expression.
llvm-svn: 298631
Reverting until I can figure out the root cause.
Revert "Re-land: Make NativeExeSymbol a concrete subclass of NativeRawSymbol [PDB]"
This reverts commit f461a70cc376f0f91c8b4917be79479cc86330a5.
llvm-svn: 298626
Use the -color-output option explicitly to eliminate the ANSI color codes in
pdb-native-summary.test. (The default should have done this.)
llvm-svn: 298625
Summary:
The true and false operands for the CMOV are operands 0 and 1.
ARMISelLowering.cpp::computeKnownBits was looking at operands 1 and 2
instead. This can cause CMOV instructions to be incorrectly folded into
BFI if the value set by the CMOV is another CMOV, whose known bits are
computed incorrectly.
This patch fixes the issue and adds a test case.
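A self-contained illustration of the rule being restored (this is not
the ARMISelLowering code itself; the struct and function names are made
up): a bit of the CMOV result is known only if it is known, with the
same value, in both value operands, i.e. operands 0 and 1.

#include <cstdint>

struct KnownBitsSketch {
  uint32_t Zero = 0; // bits known to be 0
  uint32_t One = 0;  // bits known to be 1
};

// Combine the known bits of the two value operands of a conditional
// move. Looking at operands 1 and 2 instead, as the buggy code did,
// mixes in a non-value operand and can yield incorrect "known" bits.
KnownBitsSketch knownBitsForCMOV(const KnownBitsSketch &Op0,
                                 const KnownBitsSketch &Op1) {
  KnownBitsSketch Known;
  Known.Zero = Op0.Zero & Op1.Zero; // known 0 in both operands
  Known.One = Op0.One & Op1.One;    // known 1 in both operands
  return Known;
}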
Reviewers: kristof.beyls, jmolloy
Subscribers: llvm-commits, aemerson, srhines, rengolin
Differential Revision: https://reviews.llvm.org/D31265
llvm-svn: 298624
The new test should pass on all platforms now that llvm-pdbdump has the
`-color-output` option.
This moves exe symbol-specific method implementations out of NativeRawSymbol
into a concrete subclass. Also adds implementations for hasCTypes and
hasPrivateSymbols and a simple test to ensure the native reader can access
the summary information for the executable from the PDB.
Original Differential Revision: https://reviews.llvm.org/D31059
llvm-svn: 298623
Summary:
This further improves r295303: [clang-tidy] Ignore spaces between globs
in the Checks option.
It now trims all whitespace characters, not only spaces, and correctly
computes the offset into the checks list (using the size before
trimming).
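As a plain-C++ illustration of the intended parsing (this is not the
clang-tidy code; names are made up): each glob is trimmed of all
whitespace, and its offset in the original Checks string is computed
from the untrimmed text.

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Returns (glob, offset-in-original-string) pairs.
std::vector<std::pair<std::string, std::size_t>>
parseChecks(const std::string &Checks) {
  std::vector<std::pair<std::string, std::size_t>> Globs;
  const std::string WS = " \t\n\v\f\r";
  std::size_t Pos = 0;
  while (Pos <= Checks.size()) {
    std::size_t Comma = Checks.find(',', Pos);
    if (Comma == std::string::npos)
      Comma = Checks.size();
    std::string Item = Checks.substr(Pos, Comma - Pos);
    std::size_t Begin = Item.find_first_not_of(WS);
    std::size_t End = Item.find_last_not_of(WS);
    if (Begin != std::string::npos)
      Globs.emplace_back(Item.substr(Begin, End - Begin + 1), Pos + Begin);
    Pos = Comma + 1;
  }
  return Globs;
}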
Reviewers: alexfh
Reviewed By: alexfh
Subscribers: cfe-commits, JDevlieghere
Differential Revision: https://reviews.llvm.org/D30567
llvm-svn: 298621
This patch adds support for vectorizing GEPs. Previously, we only generated
vector GEPs on-demand when creating gather or scatter operations. All GEPs from
the original loop were scalarized by default, and if a pointer was to be stored
to memory, we would have to build up the pointer vector with insertelement
instructions.
With this patch, we will vectorize all GEPs that haven't already been marked
for scalarization.
The patch refines collectLoopScalars to more exactly identify the scalar GEPs.
The function now more closely resembles collectLoopUniforms. And the patch
moves vector GEP creation out of vectorizeMemoryInstruction and into the main
vectorization loop. The vector GEPs needed for gather and scatter operations
will have already been generated before vectorizing the memory accesses.
Differential Revision: https://reviews.llvm.org/D30710
llvm-svn: 298620
Provide a common way to test whether a statement contains something,
for both region and block statements. The first user is
RegionGenerator::addOperandToPHI.
Suggested-by: Tobias Grosser <tobias@grosser.es>
llvm-svn: 298617
The code for generating scalar base pointers in vectorizeMemoryInstruction is
not needed. We currently scalarize all GEPs and maintain the scalarized values
in VectorLoopValueMap. The GEP cloning in this unneeded code is the same as
that in scalarizeInstruction. The test cases affected by this patch
changed because we now reuse the scalarized GEP that we previously
generated instead of cloning a new one.
Differential Revision: https://reviews.llvm.org/D30587
llvm-svn: 298615
Summary: Add tests for all atomic operations for powerpc64le, so that all changes can be easily examined.
Reviewers: kbarton, hfinkel, echristo
Subscribers: mehdi_amini, nemanjai, llvm-commits
Differential Revision: https://reviews.llvm.org/D31285
llvm-svn: 298614
Summary:
sysconf(_SC_PAGESIZE) is called very early during sanitizer init, and
any instrumented code calling back into the sanitizer before init is
done (the sysconf() wrapper/interceptor will likely be instrumented)
will almost surely crash.
Second attempt, now with glibc version checks (D31092 was reverted).
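The general idea, as a hedged sketch (not necessarily what this patch
does): obtain the page size without going through the sysconf() path,
which may be intercepted and instrumented.

#include <unistd.h>
#if defined(__linux__)
#include <sys/auxv.h>     // getauxval, AT_PAGESZ (glibc >= 2.16)
#include <linux/param.h>  // EXEC_PAGESIZE
#endif

static unsigned long GetPageSizeEarly() {
#if defined(__linux__) && defined(AT_PAGESZ)
  // Query the aux vector directly rather than via sysconf().
  if (unsigned long PageSize = getauxval(AT_PAGESZ))
    return PageSize;
#endif
#if defined(__linux__) && defined(EXEC_PAGESIZE)
  return EXEC_PAGESIZE;          // compile-time fallback on Linux
#else
  return sysconf(_SC_PAGESIZE);  // only safe once init has finished
#endif
}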
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31221
llvm-svn: 298613
This fix is a follow-up to a previous change that stored value types
as signed integers in memory.
In the future, once the yaml<->wasm binary patch lands, we can add test
coverage for this kind of thing.
Differential Revision: https://reviews.llvm.org/D31227
Patch by Sam Clegg
llvm-svn: 298612
Add tests for vector lengths that could be handled without a libcall.
There's an explicit test for 'nobuiltin', so there's not much value
in a separate run that checks that same behavior over and over again.
llvm-svn: 298611
Adds a -color-output option to llvm-pdbdump pretty commands that lets the user
specify whether the output should have color. The default depends on whether
the output is going to a TTY (per prior discussion in
https://reviews.llvm.org/D31246).
This will enable tests that pipe llvm-pdbdump output to FileCheck to work
across platforms without regard to the differences in ANSI codes.
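For example, a lit RUN line can force plain output regardless of
whether stdout is a TTY (the .pdb path here is just a placeholder):

RUN: llvm-pdbdump pretty -color-output=false %p/Inputs/foo.pdb | FileCheck %s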
Differential Revision: https://reviews.llvm.org/D31263
llvm-svn: 298610
Summary:
1. Support pointer types as function arguments and return values
2. G_STORE/G_LOAD - set legal action for i8/i16/i32/i64/f32/f64/vec128
3. RegisterBank - support typeless operations like G_STORE/G_LOAD; for scalars, use the GPR bank.
4. Support instruction selection for G_LOAD/G_STORE
Reviewers: zvi, rovka, ab, qcolombet
Reviewed By: rovka
Subscribers: llvm-commits, dberris, kristof.beyls, eladcohen, guyblank
Differential Revision: https://reviews.llvm.org/D30973
llvm-svn: 298609
Catch trivially true statements of the form a != 1 || a != 3. Statements like
these are likely errors. They are particularly easy to miss when handling enums:
enum State {
RUNNING,
STOPPED,
STARTING,
ENDING
};
...
if (state != RUNNING || state != STARTING)
...
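A self-contained illustration (the function names are made up; the enum
is the one from the example above) of why the flagged form is trivially
true and what was presumably intended:

enum State { RUNNING, STOPPED, STARTING, ENDING };

bool isInactiveBuggy(State state) {
  return state != RUNNING || state != STARTING; // always true
}

bool isInactiveFixed(State state) {
  return state != RUNNING && state != STARTING; // likely intent
}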
Patch by Blaise Watson!
Differential revision: https://reviews.llvm.org/D29858
llvm-svn: 298607
When a python script is run, by default it creates the bytecode file if the directory is writable, and this ‘pollutes’ source folders.
From python's help:
-B Don’t write .py[co] files on import. See also PYTHONDONTWRITEBYTECODE.
Patch by Pere Mato (D30604)!
llvm-svn: 298603
Summary: ThinLTO will annotate the CFG twice. If the branch weight is already set by the first annotation, we should not set it again in the second annotation, because the first annotation is more accurate: less optimization has run at that point that could affect debug info accuracy.
Reviewers: tejohnson, davidxl
Reviewed By: tejohnson
Subscribers: mehdi_amini, aprantl, llvm-commits
Differential Revision: https://reviews.llvm.org/D31228
llvm-svn: 298602
Summary:
By driving the loop with a local variable, the loop can be fully
optimized away when there are no non-trivial destructors to run, and
__end_ is modified separately with a single store.
This results in a substantial improvement in the generated code.
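As a rough standalone sketch of the pattern (this is not the actual
libc++ <vector> source; member and function names are simplified):

template <class T>
struct VectorSketch {
  T *begin_ = nullptr;
  T *end_ = nullptr;

  // Drive the destructor loop with a local copy of the end pointer,
  // then store the member exactly once.
  void destruct_at_end(T *new_last) {
    T *soon_to_be_end = end_;
    while (new_last != soon_to_be_end)
      (--soon_to_be_end)->~T(); // no-op when T is trivially destructible
    end_ = new_last;
  }

  void clear() { destruct_at_end(begin_); }
};

When ~T() is trivial the whole loop folds away, leaving only the load of
begin_ and the store to end_, which matches the "after" assembly below.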
Prior to this change, this would be generated (on x86_64):
movq (%rdi), %rdx
movq 8(%rdi), %rcx
cmpq %rdx, %rcx
je LBB2_2
leaq -12(%rcx), %rax
subq %rdx, %rax
movabsq $-6148914691236517205, %rdx ## imm = 0xAAAAAAAAAAAAAAAB
mulq %rdx
shrq $3, %rdx
notq %rdx
leaq (%rdx,%rdx,2), %rax
leaq (%rcx,%rax,4), %rax
movq %rax, 8(%rdi)
And after:
movq (%rdi), %rax
movq %rax, 8(%rdi)
This brings this in line with what other implementations do.
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D25241
llvm-svn: 298601