index greater than the size of the vector is invalid. The shuffle may be
shrinking the size of the vector. Fixes a crash!
Also drop the maximum recursion depth of the safety check for this
optimization to five.
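For illustration (hypothetical IR, not part of the original commit): in a shrinking shuffle the mask indices select from the concatenation of the two source vectors, so an index larger than the result vector's size is still valid.
define <2 x i32> @shrinking_shuffle(<4 x i32> %a, <4 x i32> %b) {
  ; The result has only 2 elements, but mask indices may legally refer to
  ; any of the 8 elements of the two source vectors.
  %s = shufflevector <4 x i32> %a, <4 x i32> %b, <2 x i32> <i32 1, i32 5>
  ret <2 x i32> %s
}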
llvm-svn: 183080
The MOV64ri64i32 instruction required hacky MCInst lowering because it
was allocated as setting a GR64, but the eventual instruction ("movl")
only set a GR32. This converts it into a so-called "MOV32ri64" which
still accepts an (appropriate) 64-bit immediate but defines a GR32.
This is then converted to the full GR64 by a SUBREG_TO_REG operation,
thus keeping everyone happy.
This fixes a typo in the opcode field of the original patch, which
should make the legacy JIT work again (and adds a test for that problem).
llvm-svn: 183068
Fixes rdar:14036816, PR16130.
There is an opportunity to compute precise trip counts for 'or'
expressions and multi-exit loops.
rdar:14038809: Optimize trip count computation for multi-exit loops.
To do this we need to record the fact that ExitLimit assumes NSW. When
it does not, we can safely assume that the loop trip count is the
minimum ExitLimit across all subexpressions and loop exits.
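As a sketch (hypothetical function, not from the commit), a loop with two exits: each exit has its own ExitLimit, the overall trip count is the minimum of the two, and the nsw flag is what allows assuming the induction variable does not wrap.
define i32 @two_exits(i32 %n, i32 %m) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %latch ]
  %c1 = icmp sge i32 %i, %m            ; first exit
  br i1 %c1, label %exit, label %latch
latch:
  %i.next = add nsw i32 %i, 1          ; nsw: no signed wrap assumed
  %c2 = icmp sge i32 %i.next, %n       ; second exit
  br i1 %c2, label %exit, label %loop
exit:
  ret i32 %i
}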
llvm-svn: 183060
We check that instructions in the loop don't have outside users (except if
they are reduction values). Unfortunately, we skipped this check for
if-convertible PHIs.
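A minimal sketch of the problematic shape (hypothetical IR, typed-pointer syntax): the PHI in the latch would be if-converted to a select, but it also has a user outside the loop, so the loop must not be vectorized unless that value is handled as a reduction.
define i32 @phi_outside_user(i32* %a, i32 %n) {
entry:
  br label %header
header:
  %i = phi i32 [ 0, %entry ], [ %i.next, %latch ]
  %p = getelementptr inbounds i32, i32* %a, i32 %i
  %v = load i32, i32* %p
  %c = icmp sgt i32 %v, 0
  br i1 %c, label %then, label %latch
then:
  %dbl = shl i32 %v, 1
  br label %latch
latch:
  %merge = phi i32 [ %dbl, %then ], [ %v, %header ]  ; if-convertible PHI
  %i.next = add nsw i32 %i, 1
  %done = icmp eq i32 %i.next, %n
  br i1 %done, label %exit, label %header
exit:
  ret i32 %merge                                     ; outside user
}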
Fixes PR16184.
llvm-svn: 183035
Namely, check whether the target allows folding more than one register in the
addressing mode and, if so, adjust the cost accordingly.
Prior to this commit, reg1 + scale * reg2 accesses were artificially preferred
to reg1 + reg2 accesses. Indeed, the cost model wrongly assumed that reg1 + reg2
needs a temporary register for the computation, whereas the cost was correctly
estimated for reg1 + scale * reg2.
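For illustration (hypothetical IR, not from the commit): on a target such as x86 that can fold a base register plus an index register into a single memory operand, neither of these loads should be charged for an extra temporary register.
define i32 @reg_plus_reg(i8* %base, i64 %off) {
  %p = getelementptr inbounds i8, i8* %base, i64 %off    ; reg1 + reg2
  %q = bitcast i8* %p to i32*
  %v = load i32, i32* %q
  ret i32 %v
}
define i32 @reg_plus_scaled_reg(i32* %base, i64 %idx) {
  %p = getelementptr inbounds i32, i32* %base, i64 %idx  ; reg1 + 4 * reg2
  %v = load i32, i32* %p
  ret i32 %v
}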
<rdar://problem/13973908>
llvm-svn: 183021
These instructions are deprecated oddities, but we still need to be able to
disassemble (and reassemble) them if and when they're encountered.
Patch by Amaury de la Vieuville.
llvm-svn: 183011
The disassembly of VEXT instructions was too lax in the bits checked. This
fixes the case where the instruction affects Q-registers but a misaligned lane
was specified (should be UNDEFINED).
Patch by Amaury de la Vieuville.
llvm-svn: 183003
Unlike most -- hopefully "all other", but I'm still checking -- memory
instructions we support, LOAD REVERSED and STORE REVERSED may access
the memory location several times. This means that they are not suitable
for volatile loads and stores.
This patch is a prerequisite for better atomic load and store support.
The same principle applies there: almost all memory instructions we
support are inherently atomic ("block concurrent"), but LOAD REVERSED
and STORE REVERSED are exceptions.
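As a rough sketch (hypothetical IR): a byte-swapping load like the one below would normally be a candidate for LOAD REVERSED, but when the load is volatile the backend must instead emit a plain load followed by a separate byte swap.
declare i32 @llvm.bswap.i32(i32)
define i32 @swapped_load(i32* %p) {
  ; volatile rules out instructions that may touch the location more than once
  %v = load volatile i32, i32* %p
  %s = call i32 @llvm.bswap.i32(i32 %v)
  ret i32 %s
}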
Other instructions continue to allow volatile operands. I will add
positive "allows volatile" tests at the same time as the "allows atomic
load or store" tests.
llvm-svn: 183002
Now that 3.3 is branched, we are re-enabling virtual registers to help
iron out bugs before the next release. Some of the post-RA passes do
not play well with virtual registers, so we disable them for now. The
needed functionality of the PrologEpilogInserter pass is copied to a
new backend-specific NVPTXPrologEpilog pass.
The test for this commit is that it does not break the existing tests.
llvm-svn: 182998
Before this change, each module defined a weak_odr global __msan_track_origins
with a value of 1 if origin tracking is enabled, 0 if disabled. If there are
modules with different values, any of them may win. If 0 wins, and there is at
least one module with 1, the program will most likely crash.
With this change, __msan_track_origins is only emitted if origin tracking is
on. The runtime library then detects whether there is at least one module with
origin tracking and enables runtime support for it.
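Roughly speaking (sketch; the exact type and linkage are recalled from memory and may differ), each instrumented module that has origin tracking enabled now emits:
@__msan_track_origins = weak_odr constant i32 1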
llvm-svn: 182997
The MOV64ri64i32 instruction required hacky MCInst lowering because it was
allocated as setting a GR64, but the eventual instruction ("movl") only set a
GR32. This converts it into a so-called "MOV32ri64" which still accepts an
(appropriate) 64-bit immediate but defines a GR32. This is then converted to
the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
llvm-svn: 182991
This fixes the test on ARM. Looks like it was broken by r182877. Not
sure if this is a bug in fast-isel on ARM, but this should help fix
the ARM bots.
llvm-svn: 182937
The pattern the test originally checked for doesn't occur on other -mcpu
settings. On Atom it's still there, though slightly differently scheduled.
llvm-svn: 182933
This test was failing on some hosts when an unexpected register was used for a
variable. This just extends the regexp to allow the new x86-64 registers.
llvm-svn: 182929
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.
Until recently, though, this sequence was a barrier to rematerialization. That
should now be fixed, so it's an appropriate time to make the change.
llvm-svn: 182928
The code to distinguish between unaligned and aligned addresses was
already there, so this is mostly just a switch-on-and-test process.
llvm-svn: 182920
For COFF and MachO, sections semantically have relocations that apply to them.
That is not the case on ELF.
In relocatable objects (.o), a section with relocations in ELF has offsets to
another section where the relocations should be applied.
In dynamic objects and executables, relocations don't have an offset, they have
a virtual address. The section sh_info may or may not point to another section,
but that is not actually used for resolving the relocations.
This patch exposes that in the ObjectFile API. It has the following advantages:
* Most (all?) clients can handle this more efficiently. They will normally walk
all relocations, so making an effort to iterate in a particular order doesn't
save time.
* llvm-readobj now prints relocations in the same way the native readelf does.
* Probably most important, relocations that don't point to any section are now
visible. This is the case for relocations in the rela.dyn section. See the
updated relocation-executable.test for an example.
llvm-svn: 182908
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.
Patch by Xiaoyi Guo!
This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.
llvm-svn: 182885
FastISel was only enabled for iOS ARM and Thumb2; this patch enables it
for ARM (not Thumb2) on Linux and NaCl.
Thumb2 support needs a bit more work, mainly around register class
restrictions.
The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.
The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch; it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.
The test changes are straightforward, similar to:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.
I ran all of test-suite on A15 hardware with --optimize-option=-O0 and
all the tests pass.
llvm-svn: 182877
This allows rematerialization during register coalescing to handle
more cases involving operations like SUBREG_TO_REG which might need to
be rematerialized using sub-register indices.
For example, code like:
v1(GPR64):sub_32 = MOVZ something
v2(GPR64) = COPY v1(GPR64)
should be convertible to:
v2(GPR64):sub_32 = MOVZ something
but previously we just gave up in places like this.
llvm-svn: 182872
Since the test case uses ref_addr, which requires DWARF version 3+ to work,
we will solve the DWARF version issue first.
This patch also causes failures on one of the bots. I will update the patch
accordingly in my next attempt.
rdar://13926659
llvm-svn: 182867
This patch solves the problem of numeric register values not being accepted:
../set_alias.s:1:11: error: expected valid expression after comma
.set r4,$4
^
The parsing of the .set directive has been changed, along with the handling of
symbols in code, to enable this feature.
A test example is added.
Patch by Vladimir Medic
llvm-svn: 182807
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more generic
by making it the root of additional loop metadata (a sketch follows this list)
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
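A sketch of the resulting IR shape (illustrative only; metadata syntax varies across LLVM IR versions, and the loop body is hypothetical):
define void @hinted(i32* %a, i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %p = getelementptr inbounds i32, i32* %a, i32 %i
  store i32 %i, i32* %p, !llvm.mem.parallel_loop_access !0
  %i.next = add nsw i32 %i, 1
  %done = icmp eq i32 %i.next, %n
  br i1 %done, label %exit, label %loop, !llvm.loop !0
exit:
  ret void
}
!0 = !{!0, !1, !2}                        ; llvm.loop root is self-referential
!1 = !{!"llvm.vectorizer.width", i32 4}
!2 = !{!"llvm.vectorizer.unroll", i32 1}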
Reviewed by: Nadav Rotem
llvm-svn: 182802
Previously we would read-modify-write the target bits when processing
relocations for the MCJIT. This had the problem that when relocations
were processed multiple times for the same object file (as they can
be), the result was not idempotent and the values became corrupted.
The solution to this is to take any bits used in the destination from
the pristine object file as LLVM emitted it.
This should fix PR16013 and remote MCJIT on ARM ELF targets.
llvm-svn: 182800