objects. There were a few FIXMEs in ARMAsmBackend.cpp suggesting the class
definitions should be in a separate file. Starting with ARMAsmBackend, the
class definition has been put in a header file, and #includes reduced. Each
sub-type of ARMAsmBackend is now in its own header file.
Derived types have been painted with a different color of bike-shed:
s/DarwinARMAsmBackend/ARMAsmBackendDarwin/g
s/ARMWinCOFFAsmBackend/ARMAsmBackendWinCOFF/g
s/ELFARMAsmBackend/ARMAsmBackendELF/g
Finally, clang-format has been run across ARMAsmBackend.cpp
llvm-svn: 217866
The default implementation of getCmpSelInstrCost, which provides the cost of
icmp/fcmp/select instructions, did not deal sensibly with illegal vector types
that were scalarized. We'd ask for the legalization cost of the vector type,
which would return something like (4, f64) given an input of <4 x double>, and
we'd then check the TLI status of the ISD opcode on that scalar type. This would
result in querying (ISD::VSELECT, f64), for example. Amusingly enough,
ISD::VSELECT on scalar types is marked as Legal by default (as with most other
operations), and most backends never change this because VSELECT is never
generated on scalars. However, seeing the resulting operation as Legal, we'd
neglect to add the scalarization cost before returning. The result is that we'd
grossly under-estimate the cost of cmps/selects on illegal vector types.
Now, if type legalization clearly results in scalarization, we skip the early
return and add the scalarization cost.
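A rough sketch of the shape of the fixed check (simplified and not the
exact cost-model code; the names approximate the generic TTI
implementation):

  std::pair<unsigned, MVT> LT = TLI->getTypeLegalizationCost(ValTy);
  // If legalization turned the vector type into a scalar MVT, the
  // operation will be scalarized: skip the "legal operation" early return.
  bool Scalarized = ValTy->isVectorTy() && !LT.second.isVector();
  if (!Scalarized && TLI->isOperationLegalOrCustom(ISD, LT.second))
    return LT.first; // genuinely legal: roughly one op per legalized piece
  // ... otherwise fall through and add the per-element cmp/select cost
  // plus the insert/extract (scalarization) overhead.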
llvm-svn: 217859
Teach yaml2obj how to make a bigobj COFF file. Like the rest of LLVM,
we automatically decide whether or not to use regular COFF or bigobj
COFF on the fly depending on how many sections the resulting object
would have.
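For illustration, the decision is essentially a threshold test on the
section count (a minimal sketch; the constant reflects the usual 16-bit
COFF header limit and is not taken from the yaml2obj source):

  #include <cstdint>

  // bigobj is only needed once the section count no longer fits the
  // regular COFF header's 16-bit NumberOfSections field.
  static bool useBigObj(uint32_t NumSections) {
    const uint32_t MaxRegularCOFFSections = 65279; // assumed cutoff
    return NumSections > MaxRegularCOFFSections;
  }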
This ends the task of adding bigobj support to LLVM.
N.B. This was tested by forcing yaml2obj to be used in bigobj mode
regardless of the number of sections. While a dedicated test was
written, the smallest I could make it was 36 MB (!) of yaml and it still
took a significant amount of time to execute on a powerful machine.
llvm-svn: 217858
the blend that is matched by this are "used" in any sense, and so any
build_vector or other nodes feeding these will already drop other lanes.
llvm-svn: 217855
This finishes the ability of llvm-objdump to print out all information from
the LC_DYLD_INFO load command.
The -bind option prints out symbolic references that dyld must resolve
immediately.
The -lazy-bind option prints out symbolic references that are lazily resolved on
first use.
The -weak-bind option prints out information about symbols which dyld must
try to coalesce across images.
llvm-svn: 217853
constexpr function. Part of this fix is a tentative fix for an as-yet-unfiled
core issue (we're missing a prohibition against reading mutable members from
unions via a trivial constructor/assignment, since that doesn't perform an
lvalue-to-rvalue conversion on the members).
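Roughly the kind of case the tentative rule is meant to reject (an
illustrative example, not taken from the test suite):

  union U {
    mutable int m;
    int n;
  };
  constexpr U u1 = {4};   // active member is the mutable 'm'
  // constexpr U u2 = u1; // the trivial copy would read 'u1.m' without an
                          // lvalue-to-rvalue conversion; now rejected in
                          // constant evaluation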
llvm-svn: 217852
matching. This design just fundamentally didn't work because ADDSUB is
available prior to any legal lowerings of BLENDI nodes. Instead, we have
a dedicated ADDSUB synthetic ISD node which is pattern matched trivially
into the instructions. These nodes are then recognized by both the
existing and a trivial new lowering combine in the backend. Removing
these patterns required adding 2 missing shuffle masks to the DAG
combine, without which tests would have failed. Added the masks and
a helpful assert as well to catch if anything ever goes wrong here.
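For reference, ADDSUBPS takes the even result lanes from the FSUB and the
odd lanes from the FADD, so for v4f32 the blend mask being matched is
<0, 5, 2, 7> with the subtract as operand 0 and the add as operand 1.
A standalone sketch of that mask check (illustrative; not the in-tree
helper):

  // True if Mask[i] picks lane i of the sub for even i and lane i of the
  // add (offset by NumElts) for odd i -- the ADDSUB pattern.
  static bool isAddSubMask(const int *Mask, unsigned NumElts) {
    for (unsigned i = 0; i != NumElts; ++i) {
      int Expected = (i % 2 == 0) ? int(i) : int(i + NumElts);
      if (Mask[i] != Expected)
        return false;
    }
    return true;
  }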
llvm-svn: 217851
that we don't use VSELECT and directly emit an addsub synthetic node.
Also remove a stale comment referencing VSELECT.
The test case is updated to use 'core2' which only has SSE3, not SSE4.1,
and it still passes. Previously it would not because we lacked
sufficient blend support to legalize the VSELECT.
llvm-svn: 217849
ADDSUBPD nodes out of blends of adds and subs.
This allows us to actually form these instructions with SSE3 rather than
only forming them when we had both SSE3 for the ADDSUB instructions and
SSE4.1 for the blend instructions. ;] Kind-of important.
I've adjusted the CPU requirements on one of the tests to demonstrate
this kicking in nicely for an SSE3 cpu configuration.
llvm-svn: 217848
This adds the missing test case for the previous commit:
Allow handling of vectors during return lowering for little endian machines.
Sorry for the noise.
llvm-svn: 217847
Allow handling of vectors during return lowering at least for little endian machines.
This was restricted in r208200 to fix it for big endian machines (according to
the comment), but it also disabled it for little endian too.
llvm-svn: 217846
The problem was that the read_func we were supplying to the interactive interpreter wasn't stripping the newline from the end of the string. Now it does, and multi-line Python scripts can be typed in Xcode.
<rdar://problem/17696438>
llvm-svn: 217843
This allows us to fix up the address of the symbol as soon as we parse it
so that lldb is not confused into thinking there are two different symbols in
the binary (one with the thumb bit, one without). Also, differentiating
between THUMB and ARM symbols allows the debugger to place the right
type of breakpoint.
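The idea, as a minimal sketch (not the lldb code itself): Thumb function
symbols have bit 0 of their value set, so the low bit is stripped to get
the real address and remembered as the instruction-set flavor.

  #include <cstdint>

  struct ParsedSymbol {
    uint64_t address;
    bool is_thumb;
  };

  static ParsedSymbol fixUpARMSymbol(uint64_t raw_value) {
    ParsedSymbol sym;
    sym.is_thumb = (raw_value & 1) != 0;    // Thumb bit
    sym.address = raw_value & ~uint64_t(1); // strip it from the address
    return sym;
  }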
Change by Stephane Sezer.
llvm-svn: 217841
This changes the debug output of the llvm-cov tool to consistently
write to stderr, and moves the highlighting output closer to where
it's relevant.
llvm-svn: 217838
In r217746, though it was supposed to be NFC, I broke llvm-cov's
handling of showing regions without showing counts. This should've
shown up in the existing tests, except they were checking debug output
that was displayed regardless of what was actually output. I've moved
the relevant debug output to a more appropriate place so that the
tests catch this kind of thing.
llvm-svn: 217835
This lowers frem to a runtime libcall inside fast-isel.
The test case also checks the CallLoweringInfo bug that was exposed by this
change.
This fixes rdar://problem/18342783.
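Conceptually, the lowering turns the IR remainder into a call to the C
runtime's fmod family (illustrative only; the exact libcall depends on
the floating-point type):

  #include <cmath>

  // What an f64 frem effectively becomes once lowered to a libcall.
  double frem_via_libcall(double a, double b) {
    return fmod(a, b);
  }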
llvm-svn: 217833
This fixes a bug in FastISel::CallLoweringInfo, where the number of
arguments was obtained from the argument vector before it had been
initialized.
Test case follows in another commit.
llvm-svn: 217832
We already have routines to encode SLEB128 as well as encode/decode ULEB128.
This last function fills out the matrix. I'll need this for some llvm-objdump
work I am doing.
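For reference, the SLEB128 decoding scheme (a sketch in the style of the
existing ULEB128 routines, not necessarily the exact signature added):
seven payload bits per byte, least-significant group first, with sign
extension driven by bit 6 of the final byte.

  #include <cstdint>

  static int64_t decodeSLEB128(const uint8_t *p, unsigned *n = nullptr) {
    const uint8_t *orig_p = p;
    int64_t Value = 0;
    unsigned Shift = 0;
    uint8_t Byte;
    do {
      Byte = *p++;
      Value |= int64_t(Byte & 0x7f) << Shift;
      Shift += 7;
    } while (Byte >= 128);
    // Sign-extend if the final byte's bit 6 is set.
    if (Shift < 64 && (Byte & 0x40))
      Value |= static_cast<int64_t>(UINT64_MAX << Shift);
    if (n)
      *n = unsigned(p - orig_p);
    return Value;
  }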
llvm-svn: 217830
a property assignment due to numerous user errors.
Cannot come up with a reasonable test case due to the
array of user errors required before the crash point.
rdar://17813651.
llvm-svn: 217825
Summary: UsedByBranch is always true according to how BonusInst is defined.
Test Plan:
Passes check-all, and also verified
if (BonusInst && !UsedByBranch) {
...
}
is never entered during check-all.
Reviewers: resistor, nadav, jingyue
Reviewed By: jingyue
Subscribers: llvm-commits, eliben, meheff
Differential Revision: http://reviews.llvm.org/D5324
llvm-svn: 217824
Summary:
Expand list of supported targets for Mips to include mips32 r1.
Previously it only included r2. More patches are coming where there is
a difference, but in the current patches as pushed upstream, r1 and r2
are equivalent.
Test Plan:
simplestorefp1.ll
add new build bots at mips to test this flavor at both -O0 and -O2
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D5306
llvm-svn: 217821
introducing a synthetic X86 ISD node representing this generic
operation.
The relevant patterns for mapping these nodes into the concrete
instructions are also added, and a gnarly bit of C++ code in the
target-specific DAG combiner is replaced with simple code emitting this
primitive.
The next step is to generically combine blends of adds and subs into
this node so that we can drop the reliance on an SSE4.1 ISD node
(BLENDI) when matching an SSE3 feature (ADDSUB).
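In the combiner the replacement is essentially a single node creation
(schematic; operand order and the surrounding legality checks are
omitted):

  // Emit the generic node; the new patterns select ADDSUBPS/PD from it.
  return DAG.getNode(X86ISD::ADDSUB, DL, VT, LHS, RHS);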
llvm-svn: 217819
There are several places where multiple threads access the same variables simultaneously without any kind of protection. I propose using std::atomic<> to make this safer. I did a special build of lldb using the Google tool 'thread sanitizer', which identified many cases of multiple threads accessing the same memory. std::atomic is low overhead and does not use any locks for simple types such as int/bool.
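A minimal illustration of the shape of the change (not actual lldb code):
members written by one thread and read by another become std::atomic,
which needs no explicit locking for plain int/bool-sized values.

  #include <atomic>

  struct SharedState {
    std::atomic<bool> m_done{false}; // was: bool m_done;
    std::atomic<int> m_count{0};     // was: int m_count;
  };

  void worker(SharedState &s) {
    s.m_count.fetch_add(1); // safe concurrent increment
    s.m_done.store(true);   // visible to the reading thread
  }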
See http://reviews.llvm.org/D5302 for more details.
Change by Shawn Best.
llvm-svn: 217818
Change 1: we used to add static sanitizer runtimes at the
very beginning of the linker invocation, even before crtbegin.o, which
is gross and not correct in general. Fix this: now addSanitizerRuntimes()
adds all sanitizer-related link flags to the end of the linker invocation
being constructed. This means the function must be called in the correct
place, namely before AddLinkerInputs(), so that the sanitizer versions of
library functions are preferred.
Change 2: Put the system libraries that the sanitizer libraries depend on
at the end of the linker invocation, where the rest of the system
libraries are located. Respect the --nodefaultlibs and --nostdlib flags.
This is another way to fix PR15823. The original fix, which landed in
r215940, put "-lpthread" and friends immediately after the static ASan
runtime, before the user linker inputs. This caused a significant slowdown
in the dynamic linker for large binaries linked against thousands of
shared objects. Instead, to mark these system libraries as DT_NEEDED we
prepend them with the "--no-as-needed" flag, discarding any
"-Wl,--as-needed" flag that may have been provided by the user.
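Schematically, the dependencies now land at the tail of the link line,
wrapped so they stay DT_NEEDED even under --as-needed (a hypothetical
fragment in the style of the driver's argument lists, not the actual
code; the exact library set varies by sanitizer and platform):

  // ... user inputs, crt files and sanitizer runtimes already appended ...
  CmdArgs.push_back("--no-as-needed"); // override a user -Wl,--as-needed
  CmdArgs.push_back("-lpthread");
  CmdArgs.push_back("-lrt");
  CmdArgs.push_back("-ldl");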
Otherwise, this change is a code cleanup. Instead of having a special
method for each sanitizer, we introduce a function
collectSanitizerRuntimes() that analyzes the -fsanitize= flags and returns
the set of static and shared libraries that need to be linked.
llvm-svn: 217817