RUN: a
RUN: b || true
as "a && (b || true)" in Tcl mode, and as "(a && b) || true" in sh mode.
Everyone seems to (quite reasonably) write tests assuming the Tcl behavior,
so use that in sh mode too.
llvm-svn: 169441
This is more consistent with other vectors in this code. In addition, I ran some
tests compiling a large program and >96% of fragments have 4 or fewer fixups, so
SmallVector<4> is a good optimization.
llvm-svn: 169433
This is much simpler to reason about, more efficient, and
fixes some corner cases involving implicit super-register defs.
Fixed rdar://12797931.
llvm-svn: 169425
The encoding of NOP in ARMAsmBackend.cpp is missing a trailing zero, which
causes the emission of a coprocessor instruction rather than "mov r0, r0"
as indicated in the comment. The test also checks for the wrong encoding.
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20121203/157919.html
llvm-svn: 169420
The new command line option -unwind-info dumps the Win64 EH unwind
data to the console. This is a nice feature if you need to debug
generated EH data (e.g. from LLVM). Includes a test case.
Initial patch by João Matos, extensions and rework by Kai Nacke.
llvm-svn: 169415
Change member types of RuntimeFunction and UnwindInfo from uint64_t to
uint32_t:
These members represent addresses. According to MSDN, they are image
relative, that is, they are 32-bit offsets from the starting address
of the image that contains the function table entry.
See MSDN for more information:
RUNTIME_FUNCTION: http://msdn.microsoft.com/en-us/library/ft9x1kdx.aspx
UNWIND_INFO: http://msdn.microsoft.com/en-us/library/ddssxxy8.aspx
Make Win64.h platform-neutral:
The standard types uint8_t, uint16_t and uint32_t are replaced with
their counterparts from Endian.h. Accessor functions are introduced to
replace bit fields.
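As a rough sketch of the two ideas, with hypothetical struct and accessor
names (the actual Win64.h declarations may differ): endian-aware integer
types from Endian.h replace the plain fixed-width types, and accessor
functions replace the bit fields.

  #include "llvm/Support/Endian.h"
  #include <cstdint>

  struct RuntimeFunctionSketch {
    // Image-relative 32-bit offsets, stored little-endian regardless of host.
    llvm::support::ulittle32_t StartAddress;
    llvm::support::ulittle32_t EndAddress;
    llvm::support::ulittle32_t UnwindInfoOffset;
  };

  struct UnwindInfoSketch {
    uint8_t VersionAndFlags;
    uint8_t PrologSize;
    uint8_t NumCodes;
    uint8_t FrameRegisterAndOffset;
    // Accessors replace bit fields, whose layout is implementation-defined.
    uint8_t getVersion() const { return VersionAndFlags & 0x07; }
    uint8_t getFlags() const { return VersionAndFlags >> 3; }
    uint8_t getFrameRegister() const { return FrameRegisterAndOffset & 0x0F; }
    uint8_t getFrameOffset() const { return FrameRegisterAndOffset >> 4; }
  };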
Patch by João Matos and Kai Nacke.
llvm-svn: 169414
For OS X builds, we generate one version of config.h but then build for
multiple architectures. This means that the LLVM_HOSTTRIPLE setting may have
the wrong architecture. Adjust it dynamically to match the current
architecture. <rdar://problem/12715470>
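A minimal sketch of the kind of adjustment involved, using a hypothetical
helper (not the actual host-detection code): rewrite the architecture
component of the configure-time triple to match the architecture this
translation unit is being compiled for.

  #include <string>

  static std::string adjustTripleArch(std::string Triple) {
    std::string::size_type Dash = Triple.find('-');
    if (Dash == std::string::npos)
      return Triple;
  #if defined(__x86_64__)
    return "x86_64" + Triple.substr(Dash);
  #elif defined(__i386__)
    return "i386" + Triple.substr(Dash);
  #elif defined(__arm__)
    return "arm" + Triple.substr(Dash);
  #else
    return Triple;  // unknown architecture: leave the triple alone
  #endif
  }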
llvm-svn: 169405
A MachineInstr can only ever be constructed by CreateMachineInstr() and
CloneMachineInstr(), and those factories don't use the removed
constructors.
llvm-svn: 169395
This is for the lldb team: with this option most, but not all, of the
values are printed as hex. Some small values, like the scale in an X86
address, were requested to be printed in decimal without the leading 0x.
There may be some tweaks needed for places that are still in decimal but
that they want in hex, especially for ARM. I made my best guess. Any
tweaks from here should be simple.
I also did the best I know how, with help from the C++ gurus, to create
the cleanest formatImm() utility function and to contain the changes.
But if someone has a better idea to make something cleaner, I'm all ears
and game for changing the implementation.
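A minimal sketch of a formatImm()-style helper, assuming a simple cutoff
below which values stay in decimal (the real cutoff and the option
plumbing live in the instruction printers):

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>
  #include <string>

  static std::string formatImm(int64_t Value, bool PrintImmHex) {
    char Buf[32];
    // Small values, such as the scale in an X86 address, read better in
    // decimal without the leading 0x.
    if (!PrintImmHex || (Value >= 0 && Value < 10))
      std::snprintf(Buf, sizeof(Buf), "%" PRId64, Value);
    else
      std::snprintf(Buf, sizeof(Buf), "0x%" PRIx64, (uint64_t)Value);
    return Buf;
  }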
rdar://8109283
llvm-svn: 169393
- fixed ordering of calls to doFinalization to be the reverse of the pass run order due to potential dependencies
- fixed machine module info to operate in the doInitialization/doFinalization model, also fixes some FIXMEs
reviewed by Evan Cheng <evan.cheng@apple.com>
llvm-svn: 169391
At build time, register pressure was always computed in terms of
register units. But the compile-time API was expressed in terms of
register classes because it was intended for virtual registers (and
physical register units weren't yet used anywhere in codegen).
Now that the codegen uses physreg units consistently, prepare for
tracking register pressure also in terms of live units, not live
registers.
llvm-svn: 169360
Sorry for the massive commit, but I just wanted to knock this one down
and it is really straightforward.
There are still a couple of trivial (i.e. not related to the content)
things left to fix:
- Use of raw HTML links where :doc:`...` and :ref:`...` could be used
instead. If you are a newbie and want to help fix this it would make
for some good bite-sized patches; more experienced developers should
be focusing on adding new content (to this tutorial or elsewhere, but
please _do not_ waste your time on formatting when there is such dire
need for documentation (see docs/SphinxQuickstartTemplate.rst to get
started writing)).
- Highlighting of the kaleidoscope code blocks (currently left as bare
`::`). I will be working on writing a custom Pygments highlighter for
this, mostly as training for maintaining the `llvm` code-block's lexer
in-tree. I want to do this because I am extremely unhappy with how it
just "gives up" on the slightest deviation from the expected syntax
and leaves the whole code-block un-highlighted.
More generally I am looking at writing some Sphinx extensions and
keeping them in-tree as well, to support common use cases that
currently have no good solution (like "monospace text inside a link").
llvm-svn: 169343
This change attempts to simplify (X^Y) -> X or Y in the user's context if we know that
only bits from X or Y are demanded.
A minimized case is provided below. This change will simplify "t >> 16" into "val1 >> 16".
=============================================================
unsigned foo (unsigned val1, unsigned val2) {
  unsigned t = val1 ^ 1234;
  return (t >> 16) | t; // NOTE: t is used more than once.
}
=============================================================
Note that if "t" were used only once, the expression would eventually be
optimized as well. However, with this change, the optimization takes place earlier.
Reviewed by Nadav, Thanks a lot!
llvm-svn: 169317
The count attribute is more accurate with regard to the size of an array. It
also obviates the need for the upper bound attribute in the subrange. We can
also better handle an unbounded array by setting the count to -1 instead of
setting the lower bound to 1 and the upper bound to 0.
llvm-svn: 169312
This reapplies the fix for PR13303 now with more justification. Based on my
execution of the GDB 7.5 test suite this results in:
expected passes: 16101 -> 20890 (+30%)
unexpected failures: 4826 -> 637 (-77%)
There are 23 checks that used to pass and now fail. They are all in
gdb.reverse. Investigating a few suggests they were accidentally passing
due to extra breakpoints being set by this bug. They're generally due to
the difference in end location between gcc and clang: the test suite
tries to set breakpoints on the closing '}', which clang doesn't
associate with any instructions.
llvm-svn: 169304
textually as NativeClient. Also added a link to the native client project for
readers unfamiliar with it.
A Clang patch will follow shortly.
llvm-svn: 169291
on 64-bit PowerPC ELF.
The patch includes code to handle external assembly and MC output with the
integrated assembler. It intentionally does not support the "old" JIT.
For the initial-exec TLS model, the ABI requires the following to calculate
the address of external thread-local variable x:
  Code sequence            Relocation                Symbol
  ld 9,x@got@tprel(2)      R_PPC64_GOT_TPREL16_DS    x
  add 9,9,x@tls            R_PPC64_TLS               x
The register 9 is arbitrary here. The linker will replace x@got@tprel
with the offset relative to the thread pointer to the generated GOT
entry for symbol x. It will replace x@tls with the thread-pointer
register (13).
The two test cases verify correct assembly output and relocation output
as just described.
PowerPC-specific selection node variants are added for the two
instructions above: LD_GOT_TPREL and ADD_TLS. These are inserted
when an initial-exec global variable is encountered by
PPCTargetLowering::LowerGlobalTLSAddress(), and later lowered to
machine instructions LDgotTPREL and ADD8TLS. LDgotTPREL is a pseudo
that uses the same LDrs support added for medium code model's LDtocL,
with a different relocation type.
The rest of the processing is straightforward.
llvm-svn: 169281
Again, it is trickier to pick the main module header for tools than for
library source files. I've started to follow the pattern of using
LLVMContext.h when it is included as a stub for program source files.
llvm-svn: 169252
missed in the first pass because the script didn't yet handle include
guards.
Note that the script is now able to handle all of these headers without
manual edits. =]
llvm-svn: 169224
1) Teach it to handle files with #include on the first line -- these do
actually exist in LLVM.
2) Support llvm-c and clang-c include projects.
3) Nuke some stale imports.
4) Switch to using os.path to split the file extension off.
5) Remove debugging leftovers.
6) Add docstring (a really puny one) for the sort function.
I'm continuing to avoid stripping the whitespace on the RHS to preserve
whatever newline characters happen to be in the original file.
llvm-svn: 169222
The count field is necessary because there isn't a difference between the 'lo'
and 'hi' attributes for a one-element array and a zero-element array. When the
count is '0', we know that this is a zero-element array. When it's >=1, then
it's a normal constant sized array. When it's -1, then the array is unbounded.
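For illustration (the count values are taken from the description above;
the declarations themselves are hypothetical):

  int OneElt[1];          // count = 1: a normal constant-sized array
  int ZeroElt[0];         // count = 0: a zero-element array (GNU extension);
                          // bounds alone cannot distinguish this from OneElt
  extern int Unbounded[]; // count = -1: an unbounded array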
llvm-svn: 169218
Added the code that actually performs the if-conversion during vectorization.
We can now vectorize this code:
for (int i=0; i<n; ++i) {
  unsigned k = 0;
  if (a[i] > b[i])       <------ IF inside the loop.
    k = k * 5 + 3;
  a[i] = k;              <---- k is a phi node that becomes a vector-select.
}
llvm-svn: 169217
the alignment is clamped to TargetFrameLowering.getStackAlignment if the target
does not support stack realignment or the option "realign-stack" is off.
This will cause a miscompile if the address is treated as aligned and an
'add' is replaced with an 'or' in DAGCombine.
Added a bool StackRealignable to TargetFrameLowering to check whether stack
realignment is implemented for the target. Also added a bool RealignOption
to MachineFrameInfo to check whether the option "realign-stack" is on.
rdar://12713765
llvm-svn: 169197
Now that there can be multiple hint registers from targets, it doesn't
make sense to have a function that returns 'the' preferred register.
llvm-svn: 169190
Targets can provide multiple hints now, so getRegAllocPref() doesn't
make sense any longer because it only returns one preferred register.
Replace it with getSimpleHint() in the remaining heuristics. This
function only returns the simple, target-independent hint.
llvm-svn: 169188
This change tries to simplify E1 = "X >> C1 << C2" into:
- E2 = "X << (C2 - C1)" if C2 > C1, or
- E2 = "X >> (C1 - C2)" if C1 > C2, or
- E2 = X if C1 == C2.
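As a minimal arithmetic illustration with made-up constants, assuming the
bits discarded by the inner shift are zero (the situation the
simplification targets):

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t X = 0xABCD0000;               // the low 16 bits are zero
    assert(((X >> 16) << 20) == (X << 4)); // C2 > C1:  X << (C2 - C1)
    assert(((X >> 16) << 8)  == (X >> 8)); // C1 > C2:  X >> (C1 - C2)
    assert(((X >> 16) << 16) == X);        // C1 == C2: X itself
    return 0;
  }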
Reviewed by Nadav. Thanks!
llvm-svn: 169182
Virtual registers with a known preferred register are prioritized by
RAGreedy. This function makes the condition explicit without depending
on getRegAllocPref().
llvm-svn: 169179
This small change adds support for that. It will make all MCJIT tests pass
in make-check on BigEndian platforms.
Patch by Petar Jovanovic.
llvm-svn: 169178
This change adds endian-awareness to MipsJITInfo, and emitWordLE in
MipsCodeEmitter has become emitWord to support both endiannesses.
Patch by Petar Jovanovic.
llvm-svn: 169177
This provides the same functionality as getRawAllocationOrder() for the
even/odd hints, but without the many constant register arrays.
llvm-svn: 169169
"Windows.h" includes <Windows.h> which defines a bunch of stuff it shouldn't
(even with all the restriction macros). We have no control over this file, so
make its scope as small as possible.
llvm-svn: 169165
The TargetRegisterInfo::getRegAllocationHints() function is going to
replace the existing mechanisms for providing target-dependent hints to
the register allocator: ResolveRegAllocHint() and
getRawAllocationOrder().
The new hook is more flexible because it allows the target to provide
multiple preferred candidate registers for each virtual register, and it
is easier to use because targets are not required to return a reference
to a constant array like getRawAllocationOrder().
An optional VirtRegMap argument can be used to provide target-dependent
hints that depend on the provisional assignments of other virtual
registers.
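As a rough sketch of how a target might use the hook (the class name, the
predicate, and the exact parameter list are paraphrased from the
description above, not copied from TargetRegisterInfo.h):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/MC/MCRegisterInfo.h"

  namespace llvm {
  class MachineFunction;
  class VirtRegMap;
  }
  using namespace llvm;

  struct MyTargetRegisterInfoSketch {
    // Hypothetical target-specific predicate; as a placeholder it prefers
    // even-numbered registers, standing in for real target logic.
    bool isGoodCandidate(unsigned VirtReg, MCPhysReg Reg,
                         const VirtRegMap *VRM) const {
      (void)VirtReg; (void)VRM;
      return (Reg % 2) == 0;
    }

    void getRegAllocationHints(unsigned VirtReg, ArrayRef<MCPhysReg> Order,
                               SmallVectorImpl<MCPhysReg> &Hints,
                               const MachineFunction &MF,
                               const VirtRegMap *VRM) const {
      (void)MF;
      // Any number of preferred candidates may be appended; the allocator
      // tries them in the order given before the normal allocation order.
      for (MCPhysReg Reg : Order)
        if (isGoodCandidate(VirtReg, Reg, VRM))
          Hints.push_back(Reg);
    }
  };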
llvm-svn: 169154
which is the legality of the if-conversion transformation. The next step is to
implement the cost-model for the if-converted code as well as the
vectorization itself.
llvm-svn: 169152
is not yet good enough for more sophistication. The important goal of this
test is to make sure llc doesn't crash on this IR like it used to.
llvm-svn: 169146
AKA: Recompile *ALL* the source code!
This one went much better. No manual edits here. I spot-checked for
silliness and grep-checked for really broken edits and everything seemed
good. It all still compiles. Yell if you see something that looks goofy.
llvm-svn: 169133
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
llvm-svn: 169131
standards.
I am a terrible Python programmer. Patches more than welcome. Please tell
me how this should look if it should look differently. It's just a tiny
little script so it didn't make sense to go through pre-commit review,
especially as someone who actually knows python may want to just rip it
apart and do it The Right Way.
I will be preparing a commit shortly that uses this script to
canonicalize *all* of the #include lines in LLVM. Really, all of them.
llvm-svn: 169125
The partitioning logic attempted to handle uses of an alloca with an
offset starting before the alloca so long as the use had some overlap
with the alloca itself. However, there was a bug where we tested
'(uint64_t)Offset >= AllocSize' without first checking whether 'Offset'
was positive. As a consequence, essentially every negative offset (that
is, starting *before* the alloca does) would be thrown out, even if it
was overlapping. The subsequent code to throw out negative offsets which
were actually non-overlapping was essentially dead. The code to *handle*
overlapping negative offsets was actually dead!
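A minimal standalone illustration of the broken comparison, with made-up
numbers:

  #include <cassert>
  #include <cstdint>

  int main() {
    int64_t Offset = -8;     // the use starts 8 bytes before the alloca
    uint64_t AllocSize = 16; // but still overlaps its first 8 bytes
    // The buggy test: the negative offset wraps to a huge unsigned value,
    // so the use is rejected as starting past the end of the alloca.
    assert((uint64_t)Offset >= AllocSize);
    return 0;
  }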
I've just removed all of this, and taught SROA to discard any uses which
start prior to the alloca from the beginning. It has the lovely property
of simplifying the code. =] All the tests still pass, and in fact no new
tests are needed as this is already covered by our testsuite. Fixing the
code so that negative offsets work the way the comments indicate they
were supposed to work causes regressions. That's how I found this.
Anyways, this is all progress in the correct direction -- tightening up
SROA to be maximally aggressive. Some day, I really hope to turn
out-of-bounds accesses to an alloca into 'unreachable'.
llvm-svn: 169120
; CHECK: [[VAR:[a-z]]]
The problem was that to find the end of the regex var definition, it was
simplistically looking for the next ]] and finding the incorrect one. A
better approach is to count nesting of brackets (taking escaping into
account). This way the brackets that are part of the regex can be discovered
and skipped properly, and the ]] ending is detected in the right place.
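A minimal sketch of the counting idea (this is not the actual FileCheck
code): given the text following the "[[", find the "]]" that really
terminates the variable definition by tracking bracket nesting and
skipping escapes.

  #include <cstddef>
  #include <string>

  static size_t findBlockEnd(const std::string &Str, size_t Start) {
    size_t Nest = 0;
    for (size_t i = Start; i + 1 < Str.size(); ++i) {
      if (Str[i] == '\\') { ++i; continue; } // skip escaped characters
      if (Str[i] == '[') {
        ++Nest;
      } else if (Str[i] == ']') {
        if (Nest == 0 && Str[i + 1] == ']')
          return i;                          // this "]]" ends the block
        if (Nest > 0)
          --Nest;
      }
    }
    return std::string::npos;
  }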
llvm-svn: 169109
Also check in a test case that reproduces the issue, on which
'opt -globalopt' consumes 1.6GB of memory.
The cause of the big memory footprint is that GlobalOpt currently hoists
and stores the leaf element constants into the global array one by one;
in each iteration it recreates the global array initializer constant and
leaves the old initializer alone. This may result in many obsolete
constants being left around.
For example, we have the global array:
  @rom = global [16 x i32] zeroinitializer
After the first element value is hoisted and installed:
  @rom = global [16 x i32] [ 1, 0, 0, ... ]
After the second element value is installed:
  @rom = global [16 x i32] [ 1, 2, 0, 0, ... ]  // the previous initializer is now obsolete
...
When the transform is done, we are left with 15 obsolete, useless
initializers.
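One way to avoid that churn, sketched with a hypothetical helper (not
necessarily the approach taken in this commit), is to collect all the
element values first and construct the initializer constant once:

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/GlobalVariable.h"
  using namespace llvm;

  static void installElements(GlobalVariable *GV, ArrayType *Ty,
                              ArrayRef<Constant *> Elts) {
    // A single ConstantArray is created, instead of one per stored element,
    // so no chain of obsolete initializers is left behind.
    GV->setInitializer(ConstantArray::get(Ty, Elts));
  }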
llvm-svn: 169079
- Each macro instantiation introduces a new buffer, and FindBufferForLoc() is
linear, so previously macro instantiation could be N^2 for some pathological
inputs.
llvm-svn: 169073
The TwoAddressInstructionPass takes the machine code out of SSA form by
expanding REG_SEQUENCE instructions into copies. It is no longer
necessary to rewrite the registers used by a REG_SEQUENCE instruction
because the new coalescer algorithm can do it now.
REG_SEQUENCE is just converted to a sequence of sub-register copies now.
llvm-svn: 169067
MachineCopyPropagation doesn't understand super-register liveness well
enough to be able to remove implicit defs of super-registers.
This fixes a problem in ARM/2012-01-26-CopyPropKills.ll that is exposed
by a future TwoAddressInstructionPass change. The KILL instructions are
removed before the machine code is emitted.
llvm-svn: 169060
uses. APFloat::convert() takes the pointer to the fltSemantics
variable, which is later accessed in the ~APFloat() destructor.
That is, semantics must still be alive at the moment we delete
APFloat.
Found by experimental AddressSanitizer use-after-scope checker.
llvm-svn: 169047
The original patch removed a bunch of code that the SjLjEHPrepare pass placed
into the entry block if all of the landing pads were removed during the
CodeGenPrepare pass. The more natural way of doing things is to run the CGP
*before* we run the SjLjEHPrepare pass.
Make it so!
llvm-svn: 169044
This causes llc to repeat the module compilation N times, making it
possible to get more accurate information from -time-passes when
compiling small modules.
llvm-svn: 169040
Codegen was failing with an assertion because of unexpected vector
operands when legalizing the selection DAG for a MUL instruction.
The asserting code was legalizing multiplies for vectors of size 128
bits. It uses a custom lowering to try and detect cases where it can
use a VMULL instruction instead of a VMOVL + VMUL. The code was
looking for input operands to the MUL that had been sign or zero
extended. If it found the extended operands it would drop the
sign/zero extension and use the original vector size as input to a
VMULL instruction.
The code assumed that the original input vector was 64 bits so that
after dropping the extension it would fit directly into a D register
and could be used as an operand of a VMULL instruction. The input
code that triggered the failure used a vector of <4 x i8> that was
sign extended to <4 x i32>. It was not safe to drop the sign
extension in this case because the original vector is only 32 bits
wide. The fix is to insert a sign extension for the vector to reach
the required 64 bit size. In this particular example, the vector would
need to be sign extended to a <4 x i16>.
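A hypothetical source-level pattern of the kind described, with 8-bit
elements sign-extended to 32 bits before a vector multiply (the names and
shapes are illustrative, not taken from the failing test case):

  void mul_widen4(int out[4], const signed char a[4], const signed char b[4]) {
    for (int i = 0; i < 4; ++i)
      out[i] = (int)a[i] * (int)b[i]; // operands are sign-extended before the MUL
  }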
llvm-svn: 169024