On MachO, sections also have segment names. When a tool looking at a .o file
prints a segment name, this is what they mean. In reality, a .o has only one,
anonymous, segment.
This patch adds a MachO-only function to fetch that segment name. I named it
getSectionFinalSegmentName since the main use for the name seems to be informing
the linker which segment this section should go to.
The patch also changes MachOObjectFile::getSectionName to return just the
section name instead of computing SegmentName,SectionName.
The main difference from the previous patch is that it doesn't use
InMemoryStruct, which is extremely dangerous: if the endians match, it returns
a pointer to the file buffer; if not, it returns a pointer to an internal buffer
that is overwritten by the next API call.
We should change all of this code to use
support::detail::packed_endian_specific_integral like ELF, but since these
functions only handle strings, they work with big and little endian machines
as is.
I have tested this by installing ubuntu 12.10 ppc on qemu, that is why it took
so long :-)
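As a rough usage sketch (assuming the post-patch object::MachOObjectFile interface; the iteration loop and includes are illustrative, not part of the patch):
#include "llvm/Object/MachO.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
using namespace llvm::object;
// Print "segment,section" for every section, using the new accessor for the
// segment half instead of parsing it out of getSectionName().
static void printSegSect(MachOObjectFile &Obj) {
  error_code EC;
  for (section_iterator I = Obj.begin_sections(), E = Obj.end_sections();
       I != E; I.increment(EC)) {
    StringRef Sect;
    I->getName(Sect); // just "__text" after this patch
    StringRef Seg = Obj.getSectionFinalSegmentName(I->getRawDataRefImpl());
    outs() << Seg << "," << Sect << "\n";
  }
}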
llvm-svn: 170838
Instructions that are inserted in a basic block can still be decorated
with addOperand(MO).
Make the two-argument addOperand() function contain the actual
implementation. This function will now always have a valid MF reference
that it can use for memory allocation.
llvm-svn: 170798
This function is often used to decorate dangling instructions, so a
context reference is required to allocate memory for the operands.
Also add a corresponding MachineInstrBuilder method.
llvm-svn: 170797
Rename the AttributeImpl* from Attrs to pImpl to be consistent with other code.
Add comments where none were before. Or doxygen-ify other comments.
llvm-svn: 170767
This is supposed to be a mechanical change with no functional effects.
InstrEmitter can generate all types of MachineOperands which revealed
that MachineInstrBuilder was missing a few methods, added by this patch.
Besides providing a context pointer to MI::addOperand(),
MachineInstrBuilder seems like a better fit for this code.
llvm-svn: 170712
Similarly, inlining of the function is inhibited if that would duplicate the call (in particular, inlining is still allowed when there is only one callsite and the function has internal linkage).
llvm-svn: 170704
MC disassembler clients (LLDB) are interested in querying if an
instruction may affect control flow other than by virtue of being
an explicit branch instruction, for example instructions which
write directly to the PC on some architectures.
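A hedged sketch of the query this enables for a disassembler client (the method name on MCInstrDesc is taken from the description; treat the exact signature as an assumption):
#include "llvm/MC/MCInst.h"
#include "llvm/MC/MCInstrInfo.h"
#include "llvm/MC/MCRegisterInfo.h"
using namespace llvm;
// True if the decoded instruction may change control flow, including implicit
// cases such as an architecture where a plain move can write the PC.
static bool mayChangeControlFlow(const MCInst &Inst, const MCInstrInfo &MII,
                                 const MCRegisterInfo &MRI) {
  const MCInstrDesc &Desc = MII.get(Inst.getOpcode());
  return Desc.mayAffectControlFlow(Inst, MRI);
}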
llvm-svn: 170610
These were defined on TargetRegisterInfo, but they don't use any information
that's not available in MCRegisterInfo, so sink them down to be available
at the MC layer.
llvm-svn: 170608
Use the version that also takes an MF reference instead.
It would technically be possible to extract an MF reference from the MI
as MI->getParent()->getParent(), but that would not work for MIs that
are not inserted into any basic block.
Given the reasonably small number of places this constructor was used at
all, I preferred the compile time check to a run time assertion.
llvm-svn: 170588
Just like for addMemOperand(), the MachineFunction reference provides a context
for allocating memory. This will make it possible to use a better memory
allocation strategy for the MI operand list, which is currently a slow
std::vector.
Most calls to addOperand() come from MachineInstrBuilder, so give that
class an MF reference as well. Code using BuildMI() won't need changing
at all since the MF reference is already required to allocate a
MachineInstr.
Future patches will fix code that calls MI::addOperand(Op) directly, as
well as code that uses the now deprecated MachineInstrBuilder(MI)
constructor.
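A minimal sketch of the intended call pattern after this series (signatures assumed from the description above):
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"
using namespace llvm;
// Decorate an instruction (possibly not yet inserted in any block) with an
// extra register operand, passing the MF so operand memory can be allocated
// from something better than a std::vector.
static void addExtraUse(MachineFunction &MF, MachineInstr *MI, unsigned Reg) {
  MI->addOperand(MF, MachineOperand::CreateReg(Reg, /*isDef=*/false));
  // Or, through the builder, which now also carries the MF reference:
  // MachineInstrBuilder(MF, MI).addReg(Reg);
}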
llvm-svn: 170574
- An MVT can become an EVT when being split (e.g. v2i8 -> v1i8, the latter doesn't exist)
- Return the scalar value when an MVT is scalarized (v1i64 -> i64)
Fixes PR14639ff.
llvm-svn: 170546
I cannot reproduce the failures locally, so I will keep an eye on the ppc
bots. This patch does add the change to the "Disassembly of section" message,
but that is not what was failing on the bots.
Original message:
Add a function to get the segment name of a section.
On MachO, sections also have segment names. When a tool looking at a .o file
prints a segment name, this is what they mean. In reality, a .o has only one,
anonymous, segment.
This patch adds a MachO-only function to fetch that segment name. I named it
getSectionFinalSegmentName since the main use for the name seems to be informing
the linker which segment this section should go to.
The patch also changes MachOObjectFile::getSectionName to return just the
section name instead of computing SegmentName,SectionName.
llvm-svn: 170545
instructions in the assembly code variant if one exists.
The intended use for this is so tools like lldb and darwin's otool(1)
can be switched to print Intel-flavored disassembly.
I discussed this API extensively with Jim Grosbach, and we feel that
while it may not be fully general, in reality there is only one syntax
for each assembly language, with the exception of X86, which has exactly
two for historical reasons.
rdar://10989182
llvm-svn: 170477
The bundle flags are now maintained by the slightly higher-level
functions bundleWithPred() / bundleWithSucc() which enforce consistent
bundle flags between neighboring instructions.
See also MIBundleBuilder for an even higher-level approach to building
bundles.
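A hedged sketch of both levels of the new API (function names from the description; the surrounding code is illustrative):
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstrBundle.h"
using namespace llvm;
// Low level: tie an already-inserted instruction to the instruction before
// it; the matching flag on the predecessor is kept in sync automatically.
static void glueToPredecessor(MachineInstr *MI) {
  MI->bundleWithPred();
}
// Higher level: build a bundle at a given position and append to it.
static void makeBundle(MachineBasicBlock &MBB, MachineBasicBlock::iterator Pos,
                       MachineInstr *A, MachineInstr *B) {
  MIBundleBuilder Bundle(MBB, Pos);
  Bundle.append(A);
  Bundle.append(B);
}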
llvm-svn: 170475
The bundle_iterator::operator++ function now doesn't need to dig out the
basic block and check against end(). It can use the isBundledWithSucc()
flag to find the last bundled instruction safely.
Similarly, MachineInstr::isBundled() no longer needs to look at
iterators etc. It only has to look at flags.
llvm-svn: 170473
The bundle-related MI flags need to be kept in sync with the neighboring
instructions. Don't allow the bulk flag-setting setFlags() function to
change them.
Also don't copy MI flags when cloning an instruction. The clone's bundle
flags will be set when it is explicitly inserted into a bundle.
llvm-svn: 170459
Remove the instr_iterator versions of the splice() functions. It doesn't
seem useful to be able to splice sequences of instructions that don't
consist of full bundles.
The normal splice functions that take MBB::iterator arguments are not
changed, and they can move whole bundles around without any problems.
llvm-svn: 170456
The single-element ilist::splice() function supports a noop move:
List.splice(I, List, I);
The corresponding std::list function doesn't allow that, so add a unit
test to document that behavior.
This also means that
List.splice(I, List, F);
is somewhat surprisingly not equivalent to
List.splice(I, List, F, next(F));
This patch adds an assertion to catch the illegal case I == F above.
Alternatively, we could make I == F a legal noop, but that would make
ilist differ even more from std::list.
llvm-svn: 170443
The normal insert() function takes an MBB::iterator position, and
inserts a stand-alone MachineInstr as before.
The insert() function that takes an MBB::instr_iterator position can
insert instructions inside a bundle, and will now update the bundle
flags correctly when that happens.
When the insert position is between two bundles, it is unclear whether
the instruction should be appended to the previous bundle, prepended to
the next bundle, or stand on its own. The MBB::insert() function doesn't
bundle the instruction in that case, use the MIBundleBuilder class for
that.
llvm-svn: 170437
Most code is oblivious to bundles and uses the MBB::iterator which only
visits whole bundles. MBB::erase() operates on whole bundles at a time
as before.
MBB::remove() now refuses to remove bundled instructions. It is not safe
to remove all instructions in a bundle without deleting them since there
is no way of returning pointers to all the removed instructions.
MBB::remove_instr() and MBB::erase_instr() will now update bundle flags
correctly, lifting individual instructions out of bundles while leaving
the remaining bundle intact.
The MachineInstr convenience functions are updated so that:
- eraseFromParent() erases a whole bundle as before,
- eraseFromBundle() erases a single instruction, leaving the rest of its bundle,
- removeFromParent() refuses to operate on bundled instructions, and
- removeFromBundle() lifts a single instruction out of its bundle.
These functions will no longer accidentally split or coalesce bundles -
bundle flags are updated to preserve the existing bundling, and explicit
bundleWith* / unbundleFrom* functions should be used to change the
instruction bundling.
This API update is still a work in progress. I am going to update APIs
first so they maintain bundle flags automatically when possible. Then
I'll add stricter verification of the bundle flags.
llvm-svn: 170384
compilation directory.
This defaults to the current working directory, just as it always has,
but now an assembler can choose to override it with a custom directory.
I've taught llvm-mc about this option and added a test case.
llvm-svn: 170371
Mips16 is really a processor decoding mode (ala thumb 1) and in the same
program, mips16 and mips32 functions can exist and can call each other.
If a jal type instruction encounters an address with the lower bit set, then
the processor switches to mips16 mode (if it is not already in it). If the
lower bit is not set, then it switches to mips32 mode.
The linker knows which functions are mips16 and which are mips32.
When relocation is performed on code labels, this lower order bit is
set if the code label is a mips16 code label.
In general this works just fine, however when creating exception handling
tables and dwarf, there are cases where you don't want this lower order
bit added in.
This has been traditionally distinguished in gas assembly source by using a
different syntax for the label.
lab1: ; this will cause the lower order bit to be added
lab2=. ; this will not cause the lower order bit to be added
In some cases, it does not matter because in dwarf and debug tables
the difference of two labels is used and in that case the lower order
bits subtract each other out.
To fix this, I have added to mcstreamer the notion of a debuglabel.
The default is for label and debug label to be the same. So calling
EmitLabel and EmitDebugLabel produce the same result.
For various reasons, there is only one set of labels that needs to be
modified for the mips exceptions to work. These are the "$eh_func_beginXXX"
labels.
Mips overrides the debug label suffix from ":" to "=.".
This initial patch fixes exceptions. More changes will most likely
be needed in DwarfCFException to make all of this work
for actual debugging. These changes will be to emit debug labels in some
places where a simple label is emitted now.
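A rough sketch of the distinction at the MCStreamer level (the emission sites in the actual patch are in the EH begin-label handling; this standalone form is only illustrative):
#include "llvm/MC/MCContext.h"
#include "llvm/MC/MCStreamer.h"
using namespace llvm;
// On most targets both calls print "sym:".  A target like Mips16 overrides
// the debug-label form so it prints "sym = ." and the linker never adds the
// mips16 low bit to the label's address.
static void emitBothKinds(MCStreamer &Out, MCContext &Ctx) {
  Out.EmitLabel(Ctx.GetOrCreateSymbol(StringRef("lab1")));      // lab1:
  Out.EmitDebugLabel(Ctx.GetOrCreateSymbol(StringRef("lab2"))); // lab2 = .
}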
Some historical discussion on this from gcc can be found at:
http://gcc.gnu.org/ml/gcc-patches/2008-08/msg00623.html
http://gcc.gnu.org/ml/gcc-patches/2008-11/msg01273.html
llvm-svn: 170279
for a wider range of GOT entries that can hold thread-relative offsets.
This matches the behavior of GCC, which was not documented in the PPC64 TLS
ABI. The ABI will be updated with the new code sequence.
Former sequence:
ld 9,x@got@tprel(2)
add 9,9,x@tls
New sequence:
addis 9,2,x@got@tprel@ha
ld 9,x@got@tprel@l(9)
add 9,9,x@tls
Note that a linker optimization exists to transform the new sequence into
the shorter sequence when appropriate, by replacing the addis with a nop
and modifying the base register and relocation type of the ld.
llvm-svn: 170209
Accordingly, add helper functions getSimpleValueType (in parallel to
getValueType) in SDValue, SDNode, and TargetLowering.
This is the first, in a series of patches.
This is the second attempt. In the first attempt (r169837), a few
getSimpleVT() calls were hoisted too far, as detected by bootstrap failures.
llvm-svn: 170104
On MachO, sections also have segment names. When a tool looking at a .o file
prints a segment name, this is what they mean. In reality, a .o has only one,
anonymous, segment.
This patch adds a MachO only function to fetch that segment name. I named it
getSectionFinalSegmentName since the main use for the name seems to be informing
the linker which segment this section should go to.
The patch also changes MachOObjectFile::getSectionName to return just the
section name instead of computing SegmentName,SectionName.
llvm-svn: 170095
In a previous thread it was pointed out that isPowerOfTwo is not a very precise
name since it can return false for powers of two if it is unable to show that
they are powers of two.
llvm-svn: 170093
Provides m_Argument that allows matching against a CallSite's specified argument. Provides m_Intrinsic pattern that can be templatized over the intrinsic id and bind/match arguments similarly to other pattern matchers. Implementations provided for 0 to 4 arguments, though it's very simple to extend for more. Also provides example template specialization for bswap (m_BSwap) and example of code cleanup for its use.
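A hedged usage sketch of the new matchers (m_BSwap is named above; the header path and surrounding helper are illustrative):
#include "llvm/Support/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;
// bswap(bswap(X)) ==> X: return X when the nested pattern matches.
static Value *foldDoubleBSwap(Value *V) {
  Value *X;
  if (match(V, m_BSwap(m_BSwap(m_Value(X)))))
    return X;
  return 0;
}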
llvm-svn: 170091
Also add an MIBundleBuilder constructor that takes an existing bundle.
Together these functions make it possible to add instructions to
existing bundles.
llvm-svn: 170063
structures to and from YAML using traits. The first client will
be the test suite of lld. The documentation will show up at:
http://llvm.org/docs/YamlIO.html
llvm-svn: 170019
PowerPC target. This is the last of the four models, so we now have
full TLS support.
This is mostly a straightforward extension of the general dynamic model.
I had to use an additional Chain operand to tie ADDIS_DTPREL_HA to the
register copy following ADDI_TLSLD_L; otherwise everything above the
ADDIS_DTPREL_HA appeared dead and was removed.
As before, there are new test cases to test the assembly generation, and
the relocations output during integrated assembly. The expected code
gen sequence can be read in test/CodeGen/PowerPC/tls-ld.ll.
There are a couple of things I think can be done more efficiently in the
overall TLS code, so there will likely be a clean-up patch forthcoming;
but for now I want to be sure the functionality is in place.
Bill
llvm-svn: 170003
been used in the first place. It simply was passed to the function and to the
recursive invocations. Simply drop the parameter and update the callers for the
new signature.
Patch by Saleem Abdulrasool!
llvm-svn: 169988
When ASan replaces <alloca instruction> with
<offset into a common large alloca>, it should also patch
llvm.dbg.declare calls and replace debug info descriptors to mark
that we've replaced alloca with a value that stores an address
of the user variable, not the user variable itself.
See PR11818 for more context.
llvm-svn: 169984
Add R_ARM_NONE and R_ARM_PREL31 relocation types
to MCExpr. Both of them will be used while
generating .ARM.extab and .ARM.exidx sections.
llvm-svn: 169965
mention the inline memcpy / memset expansion code is a mess?
This patch splits the ZeroOrLdSrc argument into two: IsMemset and ZeroMemset.
The first indicates whether it is expanding a memset or a memcpy / memmove.
The latter indicates whether the memset is a memset of zero. It's totally possible
(likely even) that targets may want to do different things for memcpy and
memset of zero.
llvm-svn: 169959
Also added more comments to explain why it is generally ok to return true.
- Rename getOptimalMemOpType argument IsZeroVal to ZeroOrLdSrc. It's meant to
be true for loaded source (memcpy) or zero constants (memset). The poor name
choice is probably some kind of legacy issue.
llvm-svn: 169954
fsub X, +0 ==> X
fsub X, -0 ==> X, when we know X is not -0
fsub +/-0.0, (fsub -0.0, X) ==> X
fsub nsz +/-0.0, (fsub +/-0.0, X) ==> X
fsub nnan ninf X, X ==> 0.0
fadd nsz X, 0 ==> X
fadd [nnan ninf] X, (fsub [nnan ninf] 0, X) ==> 0
where nnan and ninf have to occur at least once somewhere in this expression
fmul X, 1.0 ==> X
llvm-svn: 169940
m_ConstantFP - match and bind a float constant
m_SpecificConstantFP - match a specific floating point value or vector of floats of that value
m_FPOne - match a floating point 1.0 or vector of 1.0s
m_NegZero - match -0.0
m_AnyZero - match 0 or -0.0
llvm-svn: 169939
ScalarTargetTransformInfo::getIntImmCost() instead. "Legal" is a poorly defined
term for something like integer immediate materialization. It is always possible
to materialize an integer immediate. Whether to use it for memcpy expansion is
more a "cost" conceern.
llvm-svn: 169929
Given a thread-local symbol x with global-dynamic access, the generated
code to obtain x's address is:
Instruction Relocation Symbol
addis ra,r2,x@got@tlsgd@ha R_PPC64_GOT_TLSGD16_HA x
addi r3,ra,x@got@tlsgd@l R_PPC64_GOT_TLSGD16_L x
bl __tls_get_addr(x@tlsgd) R_PPC64_TLSGD x
R_PPC64_REL24 __tls_get_addr
nop
<use address in r3>
The implementation borrows from the medium code model work for introducing
special forms of ADDIS and ADDI into the DAG representation. This is made
slightly more complicated by having to introduce a call to the external
function __tls_get_addr. Using the full call machinery is overkill and,
more importantly, makes it difficult to add a special relocation. So I've
introduced another opcode GET_TLS_ADDR to represent the function call, and
surrounded it with register copies to set up the parameter and return value.
Most of the code is pretty straightforward. I ran into one peculiarity
when I introduced a new PPC opcode BL8_NOP_ELF_TLSGD, which is just like
BL8_NOP_ELF except that it takes another parameter to represent the symbol
("x" above) that requires a relocation on the call. Something in the
TblGen machinery causes BL8_NOP_ELF and BL8_NOP_ELF_TLSGD to be treated
identically during the emit phase, so this second operand was never
visited to generate relocations. This is the reason for the slightly
messy workaround in PPCMCCodeEmitter.cpp:getDirectBrEncoding().
Two new tests are included to demonstrate correct external assembly and
correct generation of relocations using the integrated assembler.
Comments welcome!
Thanks,
Bill
llvm-svn: 169910
instead of the instruction. I've left a forwarding wrapper for the
instruction so users with the instruction don't need to create
a GEPOperator themselves.
This lets us remove the copy of this code in instsimplify.
I've looked at most of the other copies of similar code, and this is the
only one I've found that is actually exactly the same. The one in
InlineCost is very close, but it requires re-mapping non-constant
indices through the cost analysis value simplification map. I could add
direct support for this to the generic routine, but it seems overly
specific.
llvm-svn: 169853
the GEP instruction class.
This is part of the continued refactoring and cleaning of the
infrastructure used by SROA. This particular operation is also done in
a few other places which I'll try to refactor to share this
implementation.
llvm-svn: 169852
Accordingly, add helper functions getSimpleValueType (in parallel to
getValueType) in SDValue, SDNode, and TargetLowering.
This is the first, in a series of patches.
llvm-svn: 169837
This shouldn't affect codegen for -O0 compiles as tail call markers are not
emitted in unoptimized compiles. Testing with the external/internal nightly
test suite reveals no change in compile time performance. Testing with -O1,
-O2 and -O3 with fast-isel enabled did not cause any compile-time or
execution-time failures. All tests were performed on my x86 machine.
I'll monitor our arm testers to ensure no regressions occur there.
In an upcoming clang patch I will be marking the objc_autoreleaseReturnValue
and objc_retainAutoreleaseReturnValue as tail calls unconditionally. While
it's theoretically true that this is just an optimization, it's an
optimization that we very much want to happen even at -O0, or else ARC
applications become substantially harder to debug.
Part of rdar://12553082
llvm-svn: 169796
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
bytes. e.g. On x86, use two pairs of movups / movaps for 17 - 31 byte copies.
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
x86 and ARM.
3. When memcpy from a constant string, do *not* replace the load with a constant
if it's not possible to materialize an integer immediate with a single
instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned load / stores more aggressively if target hooks indicates they
are "fast".
5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 / vst1.8.
Also increase the threshold to something reasonable (8 for memset, 4 pairs
for memcpy).
This significantly improves Dhrystone, up to 50% on ARM iOS devices.
rdar://12760078
llvm-svn: 169791
When InitSections is called before the MCContext is initialized, it could cause
duplicate temporary symbols to be emitted later (after context initialization
resets the temporary label counter).
llvm-svn: 169785
The linker will call `lto_codegen_add_must_preserve_symbol' on all globals that
should be kept around. The linker will pretend that a dylib is being created.
<rdar://problem/12528059>
llvm-svn: 169770
The `-mno-red-zone' flag wasn't being propagated to the functions that code
coverage generates. This allowed some of them to use the red zone when that
wasn't allowed.
<rdar://problem/12843084>
llvm-svn: 169754
This visitor provides infrastructure for recursively traversing the
use-graph of a pointer-producing instruction like an alloca or a malloc.
It maintains a worklist of uses to visit, so it can handle very deep
recursions. It automatically looks through instructions which simply
translate one pointer to another (bitcasts and GEPs). It tracks the
offset relative to the original pointer as long as that offset remains
constant and exposes it during the visit as an APInt offset. Finally, it
performs conservative escape analysis.
However, currently it has some limitations that should be addressed
going forward:
1) It doesn't handle vectors of pointers.
2) It doesn't provide a cheaper visitor when the constant offset
tracking isn't needed.
3) It doesn't support non-instruction pointer values.
The current functionality is exactly what is required to implement the
SROA pointer-use visitors in terms of this one, rather than in terms of
their own ad-hoc base visitor, which was always very poorly specified.
SROA has been converted to use this, and the code there deleted which
this utility now provides.
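A hedged sketch of a small client of the new visitor (member and class names are assumptions based on the description, not the exact interface):
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/PtrUseVisitor.h"
using namespace llvm;
// Collect the constant byte offsets of all loads reachable from a pointer,
// looking through bitcasts and GEPs; escapes are handled by the base class.
class LoadOffsets : public PtrUseVisitor<LoadOffsets> {
  friend class PtrUseVisitor<LoadOffsets>;
  friend class InstVisitor<LoadOffsets>;
public:
  SmallVector<APInt, 8> Offsets;
  explicit LoadOffsets(const DataLayout &DL)
      : PtrUseVisitor<LoadOffsets>(DL) {}
  void visitLoadInst(LoadInst &LI) {
    if (IsOffsetKnown)   // offset tracking supplied by the base visitor
      Offsets.push_back(Offset);
  }
};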
Technically speaking, using this new visitor allows SROA to handle a few
more cases than it previously did. It is now more aggressive in ignoring
chains of instructions which look like they would defeat SROA, but in
fact do not because they never result in a read or write of memory.
While this is "neat", it shouldn't be interesting for real programs as
any such chains should have been removed by others passes long before we
get to SROA. As a consequence, I've not added any tests for these
features -- it shouldn't be part of SROA's contract to perform such
heroics.
The goal is to extend the functionality of this visitor going forward,
and re-use it from passes like ASan that can benefit from doing
a detailed walk of the uses of a pointer.
Thanks to Ben Kramer for the code review rounds and lots of help
reviewing and debugging this patch.
llvm-svn: 169728
- added function to VectorTargetTransformInfo to query cost of intrinsics
- vectorize trivially vectorizable intrinsic calls such as sin, cos, log, etc.
Reviewed by: Nadav
llvm-svn: 169711
There are still bugs in this pass, as well as other issues that are
being worked on, but the bugs are crashers that occur pretty easily in
the wild. Test cases have been sent to the original commit's review
thread.
This reverts the commits:
r169671: Fix a logic error.
r169604: Move the popcnt tests to an X86 subdirectory.
r168931: Initial commit adding the pass.
llvm-svn: 169683
This function sets the `_exportDynamic' ivar. When that's set, we export all
symbols (e.g. we don't run the internalize pass). This is equivalent to the
`--export-dynamic' linker flag in GNU land:
--export-dynamic
When creating a dynamically linked executable, add all symbols to the dynamic
symbol table. The dynamic symbol table is the set of symbols which are visible
from dynamic objects at run time. If you do not use this option, the dynamic
symbol table will normally contain only those symbols which are referenced by
some dynamic object mentioned in the link. If you use dlopen to load a dynamic
object which needs to refer back to the symbols defined by the program, rather
than some other dynamic object, then you will probably need to use this option
when linking the program itself.
The Darwin linker will support this via the `-export_dynamic' flag. We should
modify clang to support this via the `-rdynamic' flag.
llvm-svn: 169656
SmallString. This makes it possible to use the length-erased SmallVectorImpl
in the interface without imposing a buffer size. Thus, the size of MCInstFragment
is back down since a preallocated 8-byte contents buffer is enough.
It would be generally a good idea to rid all the fragments of SmallString as
contents, because a vector just makes more sense.
llvm-svn: 169644
Before this patch, when you objdump an LLVM-compiled file, objdump tried to
decode data-in-code sections as if they were code. This patch adds the missing
Mapping Symbols, as defined by "ELF for the ARM Architecture" (ARM IHI 0044D).
Patch based on work by Greg Fitzgerald.
llvm-svn: 169609
This is still a work in progress. The purpose is to make bundling and
unbundling operations explicit, and to catch errors where bundles are
broken or created inadvertently.
The old IsInsideBundle flag is replaced by two MI flags: BundledPred
which has the same meaning as IsInsideBundle, and BundledSucc which is
set on instructions that are bundled with a successor. Having two flags
provides redundancy to detect when a bundle is inadvertently torn by a
splice() or insert(), and it makes it possible to write bundle iterators
that don't need to peek at adjacent instructions.
The new flags can't be manipulated directly (once setIsInsideBundle is
gone). Instead there are MI functions to make and break bundle bonds.
The setIsInsideBundle function will be removed in a future commit. It
should be replaced by bundleWithPred().
llvm-svn: 169583
original change description:
change MCContext to work on the doInitialization/doFinalization model
reviewed by Evan Cheng <evan.cheng@apple.com>
llvm-svn: 169553
understand target implementation of any_extend / extload, just generate
zero_extend in place of any_extend for liveouts when the target knows the
zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).
rdar://12771555
llvm-svn: 169536
This is an alternative to the ImmutableMapRef interface where a factory
should still be canonicalizing by default, but in certain cases an
improvement can be made by delaying the canonicalization.
llvm-svn: 169532
Some languages, e.g. Ada and Pascal, allow you to specify that the array bounds
are different from the default (1 in these cases). If we have a lower bound
that's non-default, then we emit the lower bound. We also calculate the correct
upper bound in those cases.
llvm-svn: 169484
This is more consistent with other vectors in this code. In addition, I ran some
tests compiling a large program and >96% of fragments have 4 or less fixups, so
SmallVector<4> is a good optimization.
llvm-svn: 169433
This is much simpler to reason about, more efficient, and
fixes some corner cases involving implicit super-register defs.
Fixed rdar://12797931.
llvm-svn: 169425
Change member types of RuntimeFunction and UnwindInfo from uint64_t to
uint32_t:
These members represent addresses. According to MSDN, they are image
relative, that is, they are 32-bit offsets from the starting address
of the image that contains the function table entry.
See MSDN for more information:
RUNTIME_FUNCTION: http://msdn.microsoft.com/en-us/library/ft9x1kdx.aspx
UNWIND_INFO: http://msdn.microsoft.com/en-us/library/ddssxxy8.aspx
Make Win64.h platform-neutral:
The standard types uint8_t, uint16_t and uint32_t are replaced with
their counterparts from Endian.h. Accessor functions are introduced to
replace bit fields.
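As a small illustration of the Endian.h style being adopted (field names follow MSDN's RUNTIME_FUNCTION; treat this as a sketch rather than the exact header contents):
#include "llvm/Support/Endian.h"
namespace llvm {
namespace Win64EH {
// Image-relative 32-bit offsets, stored little-endian in the file and
// readable on hosts of either endianness without bit fields.
struct RuntimeFunction {
  support::ulittle32_t StartAddress;
  support::ulittle32_t EndAddress;
  support::ulittle32_t UnwindInfoOffset;
};
} // end namespace Win64EH
} // end namespace llvm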
Patch by João Matos and Kai Nacke.
llvm-svn: 169414
A MachineInstr can only ever be constructed by CreateMachineInstr() and
CloneMachineInstr(), and those factories don't use the removed
constructors.
llvm-svn: 169395
This is for the lldb team, so most, but not all, of the values are
to be printed as hex with this option. Some small values, like the
scale in an X86 address, were requested to be printed in decimal
without the leading 0x.
There may be some tweaks needed for places that are still printed in
decimal but that they want in hex, especially for ARM. I made my best
guess. Any tweaks from here should be simple.
I also did the best I could for now, with help from the C++ gurus, to
create the cleanest formatImm() utility function and to contain
the changes. But if someone has a better idea to make something
cleaner, I'm all ears and game for changing the implementation.
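A hedged sketch of the kind of helper described (the real formatImm() lives in the instruction printers; names and flag plumbing here are illustrative):
#include "llvm/Support/Format.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
// Print an immediate as hex when the option is on; callers that want small
// fields like the X86 address scale in decimal simply bypass this helper.
static void printImm(raw_ostream &OS, int64_t Value, bool PrintImmHex) {
  if (PrintImmHex)
    OS << format("0x%llx", static_cast<unsigned long long>(Value));
  else
    OS << Value;
}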
rdar://8109283
llvm-svn: 169393
- fixed ordering of calls to doFinalization to be the reverse of the pass run order due to potential dependencies
- fixed machine module info to operate in the doInitialization/doFinalization model, also fixes some FIXMEs
reviewed by Evan Cheng <evan.cheng@apple.com>
llvm-svn: 169391
At build-time register pressure was always computed in terms of
register units. But the compile-time API was expressed in terms of
register classes because it was intended for virtual registers (and
physical register units weren't yet used anywhere in codegen).
Now that the codegen uses physreg units consistently, prepare for
tracking register pressure also in terms of live units, not live
registers.
llvm-svn: 169360
The count attribute is more accurate with regards to the size of an array. It
also obviates the upper bound attribute in the subrange. We can also better
handle an unbound array by setting the count to -1 instead of the lower bound to
1 and upper bound to 0.
llvm-svn: 169312
textually as NativeClient. Also added a link to the native client project for
readers unfamiliar with it.
A Clang patch will follow shortly.
llvm-svn: 169291
on 64-bit PowerPC ELF.
The patch includes code to handle external assembly and MC output with the
integrated assembler. It intentionally does not support the "old" JIT.
For the initial-exec TLS model, the ABI requires the following to calculate
the address of external thread-local variable x:
Code sequence Relocation Symbol
ld 9,x@got@tprel(2) R_PPC64_GOT_TPREL16_DS x
add 9,9,x@tls R_PPC64_TLS x
The register 9 is arbitrary here. The linker will replace x@got@tprel
with the offset relative to the thread pointer to the generated GOT
entry for symbol x. It will replace x@tls with the thread-pointer
register (13).
The two test cases verify correct assembly output and relocation output
as just described.
PowerPC-specific selection node variants are added for the two
instructions above: LD_GOT_TPREL and ADD_TLS. These are inserted
when an initial-exec global variable is encountered by
PPCTargetLowering::LowerGlobalTLSAddress(), and later lowered to
machine instructions LDgotTPREL and ADD8TLS. LDgotTPREL is a pseudo
that uses the same LDrs support added for medium code model's LDtocL,
with a different relocation type.
The rest of the processing is straightforward.
llvm-svn: 169281
The count field is necessary because there isn't a difference between the 'lo'
and 'hi' attributes for a one-element array and a zero-element array. When the
count is '0', we know that this is a zero-element array. When it's >=1, then
it's a normal constant sized array. When it's -1, then the array is unbounded.
llvm-svn: 169218
the alignment is clamped to TargetFrameLowering.getStackAlignment if the target
does not support stack realignment or the option "realign-stack" is off.
This will cause a miscompile if the address is treated as aligned and an add is
replaced with an or in DAGCombine.
Added a bool StackRealignable to TargetFrameLowering to check whether stack
realignment is implemented for the target. Also added a bool RealignOption
to MachineFrameInfo to check whether the option "realign-stack" is on.
rdar://12713765
llvm-svn: 169197
Now that there can be multiple hint registers from targets, it doesn't
make sense to have a function that returns 'the' preferred register.
llvm-svn: 169190
Targets can provide multiple hints now, so getRegAllocPref() doesn't
make sense any longer because it only returns one preferred register.
Replace it with getSimpleHint() in the remaining heuristics. This
function only
llvm-svn: 169188
Virtual registers with a known preferred register are prioritized by
RAGreedy. This function makes the condition explicit without depending
on getRegAllocPref().
llvm-svn: 169179
The TargetRegisterInfo::getRegAllocationHints() function is going to
replace the existing mechanisms for providing target-dependent hints to
the register allocator: ResolveRegAllocHint() and
getRawAllocationOrder().
The new hook is more flexible because it allows the target to provide
multiple preferred candidate registers for each virtual register, and it
is easier to use because targets are not required to return a reference
to a constant array like getRawAllocationOrder().
An optional VirtRegMap argument can be used to provide target-dependent
hints that depend on the provisional assignments of other virtual
registers.
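A hedged caller-side sketch (the hook's signature here is an assumption based on the description of multiple ordered hints plus an optional VirtRegMap):
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/VirtRegMap.h"
#include "llvm/Target/TargetRegisterInfo.h"
using namespace llvm;
// Ask the target for its ordered preferences for VirtReg and return the
// most preferred one, or 0 if no hint was provided.
static MCPhysReg firstHint(const TargetRegisterInfo &TRI, unsigned VirtReg,
                           ArrayRef<MCPhysReg> Order,
                           const MachineFunction &MF, const VirtRegMap *VRM) {
  SmallVector<MCPhysReg, 4> Hints;
  TRI.getRegAllocationHints(VirtReg, Order, Hints, MF, VRM);
  return Hints.empty() ? MCPhysReg(0) : Hints.front();
}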
llvm-svn: 169154
AKA: Recompile *ALL* the source code!
This one went much better. No manual edits here. I spot-checked for
silliness and grep-checked for really broken edits and everything seemed
good. It all still compiles. Yell if you see something that looks goofy.
llvm-svn: 169133
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
llvm-svn: 169131
The original patch removed a bunch of code that the SjLjEHPrepare pass placed
into the entry block if all of the landing pads were removed during the
CodeGenPrepare pass. The more natural way of doing things is to run the CGP
*before* we run the SjLjEHPrepare pass.
Make it so!
llvm-svn: 169044
Rationale:
1) This was the name in the comment block. ;]
2) It matches Clang's __has_feature naming convention.
3) It matches other compiler-feature-test conventions.
Sorry for the noise. =]
I've also switch the comment block to use a \brief tag and not duplicate
the name.
llvm-svn: 168996
references from whether it supports an R-value reference *this. No
version of GCC today supports the latter, which breaks GCC C++11
compiles of LLVM and Clang now.
Also add doxygen comments clarifying what's going on here, and update
the usage in Optional. I'll update the usages in Clang next.
llvm-svn: 168993
For example, don't allow empty strings to be passed to getInt.
Move asserts inside parseSpecifier. (One day we may want to pass parse
error messages to the user - from LLParser - instead of using asserts,
but keep the code simple until then. There has been an attempt to do
this. See r142288, which got reverted, and r142605.)
llvm-svn: 168991
depends on the IR infrastructure, there is no sense in it being off in
Support land.
This is in preparation to start working to expand InstVisitor into more
special-purpose visitors that are still generic and can be re-used
across different passes. The expansion will go into the Analysis tree
though as nothing in VMCore needs it.
llvm-svn: 168972
This expands to '&', and is intended to be used when an /optional/ rvalue
override is available.
Before:
void foo() const { ... }
After:
void foo() const LLVM_LVALUE_FUNCTION { ... }
void foo() && { ... }
This is used to allow moving the contents of an Optional.
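A plausible definition of the macro, following the usual __has_feature detection pattern in Compiler.h (a sketch, not necessarily the committed text):
#ifndef __has_feature
#define __has_feature(x) 0
#endif
// Expands to the '&' ref-qualifier when the compiler supports
// reference-qualified member functions, and to nothing otherwise.
#if __has_feature(cxx_reference_qualified_functions)
#define LLVM_LVALUE_FUNCTION &
#else
#define LLVM_LVALUE_FUNCTION
#endif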
llvm-svn: 168963
This revision attempts to recognize the following population-count pattern:
while(a) { c++; ... ; a &= a - 1; ... },
where <c> and <a> could be used multiple times in the loop body.
TODO: On X86-64 and ARM, __builtin_ctpop() is not expanded to an efficient
instruction sequence, which needs to be improved in following commits.
Reviewed by Nadav, really appreciate!
llvm-svn: 168931
MachOObjectFile owns a MachOObj, but never frees it. Both MachOObjectFile
and MachOObj want to own the MemoryBuffer, though, so we have to be careful
and give them each one of their own.
Thanks to Greg Clayton, Eric Christopher and Michael Spencer for helping
figure out what's going wrong here.
rdar://12561773
llvm-svn: 168923
No functional change, just moved header files.
Targets can inject custom passes between register allocation and
rewriting. This makes it possible to tweak the register allocation
before rewriting, using the full global interference checking available
from LiveRegMatrix.
llvm-svn: 168806
appropriate unit tests. This change in itself is not expected to
affect any functionality at this point, but it will serve as a
stepping stone to improve FileCheck's variable matching capabilities.
Luckily, our regex implementation already supports backreferences,
although a bit of hacking is required to enable it. It supports both
Basic Regular Expressions (BREs) and Extended Regular Expressions
(EREs), without supporting backrefs for EREs, following POSIX strictly
in this respect. And EREs is what we actually use (rightly). This is
contrary to many implementations (including the default on Linux) of
POSIX regexes, that do allow backrefs in EREs.
Adding backref support to our EREs is a very simple change in the
regcomp parsing code. I fail to think of significant cases where it
would clash with existing things, and can bring more versatility to
the regexes we write. There's always the danger of a backref in a
specially crafted regex causing exponential matching times, but since
we mainly use them for testing purposes I don't think it's a big
problem. [it can also be placed behind a flag specific to FileCheck,
if needed].
For more details, see:
* http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-November/055840.html
* http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20121126/156878.html
llvm-svn: 168802
This is a simple, cheap infrastructure for analyzing the shape of a
DAG. It recognizes uniform DAGs that take the shape of bottom-up
subtrees, such as the included matrix multiplication example. This is
useful for heuristics that balance register pressure with ILP. Two
canonical expressions of the heuristic are implemented in scheduling
modes: -misched-ilpmin and -misched-ilpmax.
llvm-svn: 168773
The SectionMemoryManager now supports (and requires) applying section-specific page permissions. Clients using this memory manager must call either MCJIT::finalizeObject() or SectionMemoryManager::applyPermissions() before executing JITed code.
See r168718 for changes from the previous implementation.
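A hedged sketch of the required client sequence (names as given above; the surrounding setup is illustrative):
#include "llvm/ExecutionEngine/ExecutionEngine.h"
using namespace llvm;
// With MCJIT and a SectionMemoryManager, permissions must be applied before
// running any JITed code; finalizeObject() takes care of that.
static void *getCallablePointer(ExecutionEngine &EE, Function *F) {
  EE.finalizeObject();   // or SectionMemoryManager::applyPermissions()
  return EE.getPointerToFunction(F);
}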
llvm-svn: 168721
The default for 64-bit PowerPC is small code model, in which TOC entries
must be addressable using a 16-bit offset from the TOC pointer. Additionally,
only TOC entries are addressed via the TOC pointer.
With medium code model, TOC entries and data sections can all be addressed
via the TOC pointer using a 32-bit offset. Cooperation with the linker
allows 16-bit offsets to be used when these are sufficient, reducing the
number of extra instructions that need to be executed. Medium code model
also does not generate explicit TOC entries in ".section toc" for variables
that are wholly internal to the compilation unit.
Consider a load of an external 4-byte integer. With small code model, the
compiler generates:
ld 3, .LC1@toc(2)
lwz 4, 0(3)
.section .toc,"aw",@progbits
.LC1:
.tc ei[TC],ei
With medium model, it instead generates:
addis 3, 2, .LC1@toc@ha
ld 3, .LC1@toc@l(3)
lwz 4, 0(3)
.section .toc,"aw",@progbits
.LC1:
.tc ei[TC],ei
Here .LC1@toc@ha is a relocation requesting the upper 16 bits of the
32-bit offset of ei's TOC entry from the TOC base pointer. Similarly,
.LC1@toc@l is a relocation requesting the lower 16 bits. Note that if
the linker determines that ei's TOC entry is within a 16-bit offset of
the TOC base pointer, it will replace the "addis" with a "nop", and
replace the "ld" with the identical "ld" instruction from the small
code model example.
Consider next a load of a function-scope static integer. For small code
model, the compiler generates:
ld 3, .LC1@toc(2)
lwz 4, 0(3)
.section .toc,"aw",@progbits
.LC1:
.tc test_fn_static.si[TC],test_fn_static.si
.type test_fn_static.si,@object
.local test_fn_static.si
.comm test_fn_static.si,4,4
For medium code model, the compiler generates:
addis 3, 2, test_fn_static.si@toc@ha
addi 3, 3, test_fn_static.si@toc@l
lwz 4, 0(3)
.type test_fn_static.si,@object
.local test_fn_static.si
.comm test_fn_static.si,4,4
Again, the linker may replace the "addis" with a "nop", calculating only
a 16-bit offset when this is sufficient.
Note that it would be more efficient for the compiler to generate:
addis 3, 2, test_fn_static.si@toc@ha
lwz 4, test_fn_static.si@toc@l(3)
The current patch does not perform this optimization yet. This will be
addressed as a peephole optimization in a later patch.
For the moment, the default code model for 64-bit PowerPC will remain the
small code model. We plan to eventually change the default to medium code
model, which matches current upstream GCC behavior. Note that the different
code models are ABI-compatible, so code compiled with different models will
be linked and execute correctly.
I've tested the regression suite and the application/benchmark test suite in
two ways: Once with the patch as submitted here, and once with additional
logic to force medium code model as the default. The tests all compile
cleanly, with one exception. The mandel-2 application test fails due to an
unrelated ABI compatibility with passing complex numbers. It just so happens
that small code model was incredibly lucky, in that temporary values in
floating-point registers held the expected values needed by the external
library routine that was called incorrectly. My current thought is to correct
the ABI problems with _Complex before making medium code model the default,
to avoid introducing this "regression."
Here are a few comments on how the patch works, since the selection code
can be difficult to follow:
The existing logic for small code model defines three pseudo-instructions:
LDtoc for most uses, LDtocJTI for jump table addresses, and LDtocCPT for
constant pool addresses. These are expanded by SelectCodeCommon(). The
pseudo-instruction approach doesn't work for medium code model, because
we need to generate two instructions when we match the same pattern.
Instead, new logic in PPCDAGToDAGISel::Select() intercepts the TOC_ENTRY
node for medium code model, and generates an ADDIStocHA followed by either
a LDtocL or an ADDItocL. These new node types correspond naturally to
the sequences described above.
The addis/ld sequence is generated for the following cases:
* Jump table addresses
* Function addresses
* External global variables
* Tentative definitions of global variables (common linkage)
The addis/addi sequence is generated for the following cases:
* Constant pool entries
* File-scope static global variables
* Function-scope static variables
Expanding to the two-instruction sequences at select time exposes the
instructions to subsequent optimization, particularly scheduling.
The rest of the processing occurs at assembly time, in
PPCAsmPrinter::EmitInstruction. Each of the instructions is converted to
a "real" PowerPC instruction. When a TOC entry needs to be created, this
is done here in the same manner as for the existing LDtoc, LDtocJTI, and
LDtocCPT pseudo-instructions (I factored out a new routine to handle this).
I had originally thought that if a TOC entry was needed for LDtocL or
ADDItocL, it would already have been generated for the previous ADDIStocHA.
However, at higher optimization levels, the ADDIStocHA may appear in a
different block, which may be assembled textually following the block
containing the LDtocL or ADDItocL. So it is necessary to include the
possibility of creating a new TOC entry for those two instructions.
Note that for LDtocL, we generate a new form of LD called LDrs. This
allows specifying the @toc@l relocation for the offset field of the LD
instruction (i.e., the offset is replaced by a SymbolLo relocation).
When the peephole optimization described above is added, we will need
to do similar things for all immediate-form load and store operations.
The seven "mcm-n.ll" test cases are kept separate because otherwise the
intermingling of various TOC entries and so forth makes the tests fragile
and hard to understand.
The above assumes use of an external assembler. For use of the
integrated assembler, new relocations are added and used by
PPCELFObjectWriter. Testing is done with "mcm-obj.ll", which tests for
proper generation of the various relocations for the same sequences
tested with the external assembler.
llvm-svn: 168708
Added in the first optimization using fast-math flags, to serve as an example for following optimizations. SimplifyInstruction will now try to optimize an fmul, observing its FastMathFlags to see if it can fold a multiply by zero when the 'nnan' and 'nsz' flags are set.
llvm-svn: 168648
Added in bitcode enum for the serializing of fast-math flags. Added in the reading/writing of fast-math flags from the OptimizationFlags record for BinaryOps.
llvm-svn: 168646
Add in getter/setter methods for Instructions, allowing them to be the interface to FPMathOperator, similarly to how NUW/NSW is handled.
llvm-svn: 168642
Created FastMathFlags convenience struct for the getting and setting of fast-math flags en masse. Added SubclassOptionalData bitfields and corresponding getters/setters to FPMathOperator for the various fast-math flags.
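A hedged usage sketch tying these fast-math pieces together (method names assumed to mirror the flag names; header paths are illustrative):
#include "llvm/Instruction.h"
#include "llvm/Operator.h"
using namespace llvm;
// Mark a floating-point multiply with 'nnan' and 'nsz' in one shot, which is
// what later permits folding x * 0.0 to 0.0 in SimplifyInstruction.
static void relaxFMul(Instruction *FMul) {
  FastMathFlags FMF;
  FMF.setNoNaNs();
  FMF.setNoSignedZeros();
  FMul->setFastMathFlags(FMF);
}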
llvm-svn: 168641
- Widespread trailing space removal
- A dash of OCD spacing to block align enums
- joined a line that probably needed 80 cols a while back
llvm-svn: 168566
to support it. Original patch with the parsing and plumbing by the PaX team and
Roman Divacky. I added the bits in MCDwarf.cpp and the test.
llvm-svn: 168565
Necessary to give disassembler users (like darwin's otool) a possibility to
dlopen libLTO and still initialize the required LLVM bits. This used to go
through libMCDisassembler but that's a gross layering violation; the MC layer
can't pull in functions from the targets. Adding a function to libLTO is a bit
of a hack but not worse than exposing other disassembler bits from libLTO.
Fixes PR14362.
llvm-svn: 168545
Give MCCFIInstruction a single, private constructor and add helper static
methods that create each type of cfi instruction. This is in preparation
for changing its representation. The representation with a pair of
MachineLocations is older than MC and has been abused quite a bit to support
more cfi instructions.
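A brief sketch of the factory style being introduced (the helper names are assumptions consistent with the description):
#include "llvm/MC/MCDwarf.h"
using namespace llvm;
// Each kind of CFI instruction gets its own static creator instead of the
// old MachineLocation-pair constructor.
static MCCFIInstruction cfaOffsetAt(MCSymbol *Label, int Offset) {
  return MCCFIInstruction::createDefCfaOffset(Label, Offset);
}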
llvm-svn: 168532
I discovered a few more missing functions while migrating optimizations
from the simplify-libcalls pass to the instcombine (I already added some
in r167659).
llvm-svn: 168501
so that I can (someday) call SE->getSCEV without complaint.
No semantic change intended.
Patch from Preston Briggs <preston.briggs@gmail.com>.
llvm-svn: 168391
When code deletes the context, the AttributeImpls that the AttrListPtr points to
are now invalid. Therefore, instead of keeping a separate managed static for the
AttrListPtrs that's reference counted, move it into the LLVMContext and delete
it when deleting the AttributeImpls.
llvm-svn: 168354
The rationale is to get YAML filenames in diagnostics from
yaml::Stream::printError -- currently the filename is hard-coded as
"YAML" because there's no buffer information available.
Patch by Kim Gräsman!
llvm-svn: 168341
This patch moves the isInlineViable function from the InlineAlways pass into
the InlineCostAnalyzer and then changes the InlineCost computation to use that
simple check for always-inline functions. All the special-case checks for
AlwaysInline in the CallAnalyzer can then go away.
llvm-svn: 168300
positive.
In this particular case, R6 was being spilled by the register scavenger when it
was in fact dead. The isUsed function reported R6 as used because the R6_R7
alias was reserved (due to the fact that we've reserved R7 as the FP). The
solution is to only check if the original register (i.e., R6) isReserved and
not the aliases. The aliases are only checked to make sure they're available.
The test case is derived from one of the nightly tester benchmarks and is rather
intractable and difficult to reproduce, so I haven't included it.
rdar://12592448
llvm-svn: 168054
For global variables that get the same value stored into them
everywhere, GlobalOpt will replace them with a constant. The problem is
that a thread-local GlobalVariable looks like one value (the address of
the TLS var), but is different between threads.
This patch introduces Constant::isThreadDependent() which returns true
for thread-local variables and constants which depend on them (e.g. a GEP
into a thread-local array), and teaches GlobalOpt not to track such
values.
llvm-svn: 168037
This seems like redundant leftovers from r142288 - exposing
TargetData::parseSpecifier to LLParser - which got reverted. Removes
redunant td != NULL checks in parseSpecifier, and simplifies the
interface to parseSpecifier and init.
llvm-svn: 167924
Previously in a vector of pointers, the pointer couldn't be any pointer type,
it had to be a pointer to an integer or floating point type. This is a hassle
for dragonegg because the GCC vectorizer happily produces vectors of pointers
where the pointer is a pointer to a struct or whatever. Vector getelementptr
was restricted to just one index, but now that vectors of pointers can have
any pointer type it is more natural to allow arbitrary vector getelementptrs.
There is however the issue of struct GEPs, where if each lane chose different
struct fields then from that point on each lane will be working down into
unrelated types. This seems like too much pain for too little gain, so when
you have a vector struct index all the elements are required to be the same.
llvm-svn: 167828
This allows me to begin enabling (or backing out) misched by default
for one subtarget at a time. To run misched we typically want to:
- Disable SelectionDAG scheduling (use the source order scheduler)
- Enable more aggressive coalescing (until we decide to always run the coalescer this way)
- Enable MachineScheduler pass itself.
Disabling PostRA sched may follow for some subtargets.
llvm-svn: 167826
This patch migrates the math library call simplifications from the
simplify-libcalls pass into the instcombine library call simplifier.
I have typically migrated just one simplifier at a time, but the math
simplifiers are interdependent because:
1. CosOpt, PowOpt, and Exp2Opt all depend on UnaryDoubleFPOpt.
2. CosOpt, PowOpt, Exp2Opt, and UnaryDoubleFPOpt all depend on
the option -enable-double-float-shrink.
These two factors made migrating each of these simplifiers individually
more of a pain than it would be worth. So, I migrated them all together.
llvm-svn: 167815
If we have a type 'int a[1]' and a type 'int b[0]', the generated DWARF is the
same for both of them because we use the 'upper_bound' attribute. Instead use
the 'count' attrbute, which gives the correct number of elements in the array.
<rdar://problem/12566646>
llvm-svn: 167806
getNumContainedPasses() used to compute the size of the vector on demand. It is
called repeatedly in loops (such as runOnFunction()) and can be updated while
inside the loop.
llvm-svn: 167759
Uses the infrastructure from r167742 to support clustering instructions
that the target processor can "fuse". e.g. cmp+jmp.
Next step: target hook implementations with test cases, and enable.
llvm-svn: 167744
This infrastructure is generally useful for any target that wants to
strongly prefer two instructions to be adjacent after scheduling.
A following checkin will add target-specific hooks with unit
tests. Then this feature will be enabled by default with misched.
llvm-svn: 167742
This adds support for weak DAG edges to the general scheduling
infrastructure in preparation for MachineScheduler support for
heuristics based on weak edges.
llvm-svn: 167738
In some cases the library call simplifier may need to replace instructions
other than the library call being simplified. In those cases it may be
necessary for clients of the simplifier to override how the replacements
are actually done. As such, a new overrideable method for replacing
instructions was added to LibCallSimplifier.
A new subclass of LibCallSimplifier is also defined which overrides
the instruction replacement method. This is because the instruction
combiner defines its own replacement method which updates the worklist
when instructions are replaced.
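A hedged sketch of the override point (the constructor arguments and method signature are assumptions based on the description; the real subclass is the one defined inside the instruction combiner):
#include "llvm/Transforms/Utils/SimplifyLibCalls.h"
using namespace llvm;
// A client that keeps its own worklist overrides how replacements happen so
// the worklist can be updated before the underlying replacement is performed.
class WorklistLibCallSimplifier : public LibCallSimplifier {
public:
  WorklistLibCallSimplifier(const DataLayout *TD, const TargetLibraryInfo *TLI)
      : LibCallSimplifier(TD, TLI) {}
  void replaceAllUsesWith(Instruction *I, Value *With) /*override*/ {
    // ... push users of I onto the combiner's worklist here ...
    LibCallSimplifier::replaceAllUsesWith(I, With);
  }
};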
llvm-svn: 167681
In the process of migrating optimizations from the simplify-libcalls pass
to the instcombine pass I noticed that a few functions are missing from
the target library information. These functions need to be available for
querying in the instcombine library call simplifiers. More functions will
probably be added in the future as more simplifiers are migrated to
instcombine.
llvm-svn: 167659
- Add RTM code generation support through 3 X86 intrinsics:
xbegin()/xend() to start/end a transaction region, and xabort() to abort a
transaction region
llvm-svn: 167573
misched is disabled by default. With -enable-misched, these heuristics
balance the schedule to simultaneously avoid saturating processor
resources, expose ILP, and minimize register pressure. I've been
analyzing the performance of these heuristics on everything in the
llvm test suite in addition to a few other benchmarks. I would like
each heuristic check to be verified by a unit test, but I'm still
trying to figure out the best way to do that. The heuristics are still
in considerable flux, but as they are refined we should be rigorous
about unit testing the improvements.
llvm-svn: 167527
This patch adds the interface to expose events from MCJIT when an object is emitted or freed and implements the MCJIT functionality to send those events. The IntelJITEventListener implementation is left empty for now. It will be fleshed out in a future patch.
llvm-svn: 167475
Expose the processor resources defined by the machine model to the
scheduler and other clients through the TargetSchedule interface.
Normalize each resource count with respect to other kinds of
resources. This allows scheduling heuristics to balance resources
against other kinds of resources and latency.
llvm-svn: 167444
Prior to this patch RuntimeDyld attempted to re-apply relocations every time reassignSectionAddress was called (via MCJIT::mapSectionAddress). In addition to being inefficient and redundant, this led to a problem when a section was temporarily moved too far away from another section with a relative relocation referencing the section being moved. To fix this, I'm adding a new method (finalizeObject) which the client can call to indicate that it is finished rearranging section addresses so the relocations can safely be applied.
llvm-svn: 167400
is that the unit test doesn't have IntTy equal to APInt, instead it uses a class
derived from APInt. When, as in these lines, an IntTy& reference is returned
but is assigned to an APInt&, the compiler destroys the temporary the IntTy& was
referring to, leaving the APInt& referring to garbage. This causes the unittest
to fail systematically on my machine; it can also be caught by running the test
under valgrind.
llvm-svn: 167356
the inttoptr instruction. The conceptual model here is that
'getAddressSpace' refers to the address space of this instruction's
type. It just happens that for GEPs, that is always the same as the
pointer operand's address space. We want both names so that access
patterns can be consistent between different instruction types.
llvm-svn: 167229
compute the address space in the one place it was used.
Also write the getPointerAddressSpace member in terms of the
getPointerOperandType member.
llvm-svn: 167226
'@brief' doxygen markup to the now standard '\brief' markup form, in
conformance with the coding standards. This will let me continue to
write new comments in this form without making things inconsistent.
llvm-svn: 167225
politely and document this feature.
This simple API extension then allows us to write all of the
Instructions' address space query methods much more simply. No
functionality change intended here.
llvm-svn: 167223
r165941: Resubmit the changes to llvm core to update the functions to
support different pointer sizes on a per address space basis.
Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.
However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.
In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.
In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.
This is part of my ongoing effort to clean out the partial address space
code, which carries high risk and low test coverage and is not likely to be
finished as the 3.2 release looms closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.
llvm-svn: 167222
getIntPtrType support for multiple address spaces via a pointer type,
and also introduced a crasher bug in the constant folder reported in
PR14233.
These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to have moving
forward in order to reduce the amount of churn, and minimize the number
of commits that were reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.
Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)
After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.
Summary of reverted revisions:
r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
since I cancelled the command. Bleh, sorry about this!
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
on the address space.
llvm-svn: 167221
Explicitly allow composition of null sub-register indices, and handle
that common case in an inlinable stub.
Use a compressed table implementation instead of the previous nested
switches which generated pretty bad code.
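A simplified sketch of the shape of that stub (the out-of-line helper name
stands in for the generated compressed-table lookup):

  unsigned composeSubRegIndicesImpl(unsigned a, unsigned b); // generated lookup

  // Inlinable fast path: composing with the null index is trivial.
  unsigned composeSubRegIndicesSketch(unsigned a, unsigned b) {
    if (!a) return b;
    if (!b) return a;
    return composeSubRegIndicesImpl(a, b);
  }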
llvm-svn: 167190
the MachineInstr MayLoad/MayStore flags are based on the TableGen implementation.
For inline assembly, however, we need to compute these based on the constraints.
Revert r166929 as this is no longer needed, but leave the test case in place.
rdar://12033048 and PR13504
llvm-svn: 167040
When the switch-to-lookup tables transform landed in SimplifyCFG, it
was pointed out that this could be inappropriate for some targets.
Since there was no way at the time for the pass to know anything about
the target, an awkward reverse-transform was added in CodeGenPrepare
that turned lookup tables back into switches for some targets.
This patch uses the new TargetTransformInfo to determine if a
switch should be transformed, and removes
CodeGenPrepare::ConvertLoadToSwitch.
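A hedged sketch of the resulting query; the way SimplifyCFG reaches the hook
and the header path have shifted between LLVM versions, so take the call
shape as illustrative:

  #include "llvm/Analysis/TargetTransformInfo.h"

  // Only do the switch-to-lookup-table transform if the target wants tables.
  bool shouldUseLookupTable(const llvm::TargetTransformInfo &TTI) {
    return TTI.shouldBuildLookupTables();
  }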
llvm-svn: 167011
checks to avoid performing compile-time arithmetic on PPCDoubleDouble.
Now that APFloat supports arithmetic on PPCDoubleDouble, those checks
are no longer needed, and we can treat the type like any other.
llvm-svn: 166958
wrapper returns a vector of integers when passed a vector of pointers) by having
getIntPtrType itself return a vector of integers in this case. Outside of this
wrapper, I didn't find anywhere in the codebase that was relying on the old
behaviour for vectors of pointers, so give this a whirl through the buildbots.
llvm-svn: 166939
We may need to change the way profile counter values are stored, but
saturation is the wrong thing to do. Just remove it for now.
Patch by Alastair Murray!
llvm-svn: 166938
Add getCostXXX calls for different families of opcodes, such as casts, arithmetic, cmp, etc.
Port the LoopVectorizer to the new API.
The LoopVectorizer now finds instructions which will remain uniform after vectorization. It uses this information when calculating the cost of these instructions.
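A hedged sketch in the style of the interface at the time (costs were plain
unsigned; current LLVM returns InstructionCost and takes extra context
arguments, so treat the exact signature as approximate):

  // Ask the target what a vector add costs; the helper name is invented.
  unsigned vectorAddCost(const llvm::TargetTransformInfo &TTI,
                         llvm::Type *VecTy) {
    return TTI.getArithmeticInstrCost(llvm::Instruction::Add, VecTy);
  }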
llvm-svn: 166836
APInt::shl generated llvm.trap to guard against shifts greater than bit-width.
This was already checked with an assert, and there was a special case for
shifts equal to bit-width. Modify this check to catch shifts greater than or
equal to bit-width, so llvm.trap isn't generated.
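A stand-alone illustration of the tightened guard (not the actual APInt
source; the helper is invented):

  #include <cassert>
  #include <cstdint>

  uint64_t shlWord(uint64_t Val, unsigned BitWidth, unsigned Amt) {
    assert(BitWidth <= 64 && "sketch models a single 64-bit word");
    assert(Amt <= BitWidth && "shift amount out of range");
    if (Amt >= BitWidth)   // previously '=='; '>=' keeps the C++ shift in range
      return 0;
    return Val << Amt;     // Amt < BitWidth here, so the shift is well defined
  }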
Patch contributed by JF Bastien
llvm-svn: 166803
Enabled with -verify-scev. This could be extended significantly but hopefully
catches the common cases now. Note that it's not enabled by default in any
configuration because the way it tries to distinguish SCEVs is still fragile and
may produce false positives. Also, the test-suite isn't clean yet; one example
is that it fails if a pass drops an NSW bit that is still present in SCEV's
cache. Cleaning up all those cases will take some time.
llvm-svn: 166786
As discussed on IRC, add VectorTargetTransform::getNumberOfParts
to provide a stable interface to the vector legalization splitting factor.
llvm-svn: 166751
include/llvm/MC/MCTargetAsmParser.h:46:8: error: 'llvm::ParseInstructionInfo' has a field 'llvm::ParseInstructionInfo::AsmRewrites' whose type uses the anonymous namespace [-Werror]
llvm-svn: 166729
Most places can use PrintFatalError as the unwinding mechanism was not
used for anything other than printing the error. The single exception
was CodeGenDAGPatterns.cpp, where intermediate errors during type
resolution were ignored to simplify incremental platform development.
This use is replaced by an error flag in TreePattern and by bailing out earlier
in various places if it is set.
llvm-svn: 166712
Relationship maps are represented as InstrMapping records which are parsed by
TableGen and the information is used to construct mapping tables to represent
appropriate relations between instructions. These tables are emitted into
XXXGenInstrInfo.inc file along with the functions to query them.
Patch by Jyotsna Verma <jverma@codeaurora.org>.
llvm-svn: 166685
This patch adds initial PPC64 TOC MC object creation using the small mcmodel
(a single 64K TOC), adding some TOC relocations (R_PPC64_TOC,
R_PPC64_TOC16, and R_PPC64_TOC16DS).
The addition of 'undefinedExplicitRelSym' hook on 'MCELFObjectTargetWriter'
is meant to avoid the creation of an unreferenced ".TOC." symbol (used in
the .odp creation) as well as to set the R_PPC64_TOC relocation target to the
temporary ".TOC." symbol. In the PPC64 ABI, the R_PPC64_TOC relocation should
not point to any symbol.
llvm-svn: 166677
a bunch of errors for all the Operator subclasses such as:
include/llvm/Operator.h:76:7: error: deleted function 'virtual llvm::OverflowingBinaryOperator::~OverflowingBinaryOperator()'
include/llvm/Operator.h:43:3: error: overriding non-deleted function 'virtual llvm::Operator::~Operator()'
include/llvm/Operator.h:76:7: error: 'virtual llvm::OverflowingBinaryOperator::~OverflowingBinaryOperator()' is implicitly deleted because the default definition would be ill-formed:
include/llvm/Operator.h:43:3: error: 'virtual llvm::Operator::~Operator()' is private
include/llvm/Operator.h:76:7: error: within this context
llvm-svn: 166611
called. Provide an (asserting) definition of Operator's private destructor.
Remove destructors from all classes derived from Operator. We don't need them
for safety, because their implicit definitions would be ill-formed (they'd call
Operator's private destructor), and we don't need them to avoid emitting
vtables, because we don't do anything with Operator subclasses which would
trigger vtable instantiation.
The Operator hierarchy is still a complete disaster with regard to undefined
behavior, but this at least allows LLVM to link when using Clang's
-fcatch-undefined-behavior with a new vptr-based type checking mechanism.
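A simplified stand-in for the pattern (invented class names, not the real
Operator hierarchy, which also involves User and virtual dispatch):

  // The base class is never destroyed directly.
  class OperatorLike {
  private:
    ~OperatorLike(); // private; see the asserting definition below
  };

  // "Asserting" out-of-line definition so the link succeeds; calling it is a bug.
  OperatorLike::~OperatorLike() {
    __builtin_trap(); // stands in for llvm_unreachable(...)
  }

  // Subclasses declare no destructor of their own. Their implicit destructor
  // would be ill-formed (it would call the private base destructor), but that
  // is only diagnosed if it is actually needed, and nothing ever destroys
  // these objects.
  class OverflowingLike : public OperatorLike {};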
llvm-svn: 166530
Per the October 12, 2012 Proposal for annotated disassembly output sent out by
Jim Grosbach, this set of changes implements this for X86 and ARM. The llvm-mc
tool now has a -mdis option to produce the marked-up disassembly, and a couple
of small example test cases have been added.
rdar://11764962
llvm-svn: 166445
Patch by Quentin Colombet <qcolombet@apple.com>
Original description:
"""
The attached patch is the first step to have a better control on Oz related optimizations.
The Oz optimization level focuses on code size, thus I propose to add an attribute called ForceSizeOpt.
"""
llvm-svn: 166422
a memory operand. Retain this information and then add the sizing directives
to the IR. This allows the backend to do proper instruction selection.
llvm-svn: 166316
which is supposed to consistently raise SIGTRAP across all systems. In contrast,
__builtin_trap() behaves differently on different systems; e.g. it raises SIGTRAP on ARM and
SIGILL on X86. The purpose of __builtin_debugtrap() is to consistently provide "trap"
functionality while preserving compatibility with gcc's __builtin_trap().
The X86 backend is already able to handle debugtrap(). This patch is to:
1) make the front-end recognize "__builtin_debugtrap()" (embodied in the one-line change to Clang).
2) In the DAG legalization phase, by default, "debugtrap" will be replaced with "trap", which
makes __builtin_debugtrap() "available" to all existing ports without the hassle of
changing their code.
3) If trap-function is specified (via -trap-func=xyz to llc), both __builtin_debugtrap() and
__builtin_trap() will be expanded into the function call of the specified trap function.
This behavior may need change in the future.
The provided test case makes sure 2) and 3) work for the ARM port, and we
already have a test case for x86.
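A minimal usage sketch of the builtin described above (Clang-specific; the
function name is invented):

  // Consistently raises SIGTRAP; with -trap-func=xyz it is instead expanded
  // into a call to the named trap function.
  void checkInvariant(bool Broken) {
    if (Broken)
      __builtin_debugtrap();
  }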
llvm-svn: 166300
This allows llvm::Optional to be used with movable-but-not-copyable types.
While LLVM itself is still C++03, there's no reason why tools built on
top of it can't use C++11 features.
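A hedged sketch of what this enables for a C++11 consumer, assuming the
post-patch move overloads (llvm::Optional has since been superseded by
std::optional in newer trees):

  #include "llvm/ADT/Optional.h"
  #include <memory>

  // std::unique_ptr is movable but not copyable; the Optional can now hold it.
  llvm::Optional<std::unique_ptr<int>> makeSlot() {
    std::unique_ptr<int> P(new int(42));
    return llvm::Optional<std::unique_ptr<int>>(std::move(P));
  }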
llvm-svn: 166241
This more accurately reflects what is actually being stored in the
field.
No functionality change intended.
Author: Matthew Curtis <mcurtis@codeaurora.org>
llvm-svn: 166215
layer. Add the ParseMSInlineAsm() function, which is the new interface to
clang. Also expose the new MCAsmParserSemaCallback interface, which is used
by the back-end to do name lookup in Sema. Finally, remove the now defunct
APIs introduced in r165946.
llvm-svn: 166183
over the implicitly-formed-and-nesting CGSCC pass manager and function
pass managers, especially when using them on the opt commandline or
using extension points in the module builder. The '-barrier' opt flag
(or the pass itself) will create a no-op module pass in the pipeline,
resetting the pass manager stack, and allowing the creation of a new
pipeline of function passes or CGSCC passes to be created that is
independent from any previous pipelines.
For example, this can be used to test running two CGSCC passes in
independent CGSCC pass managers as opposed to in the same CGSCC pass
manager. It also allows us to introduce a further hack into the
PassManagerBuilder to separate the O0 pipeline extension passes from the
always-inliner's CGSCC pass manager, which they likely do not want to
participate in... At the very least none of the Sanitizer passes want
this behavior.
This fixes a bug with ASan at O0 currently, and I'll commit the ASan
test which covers this pass. I'm happy to add a test case showing that this pass
exists and works, but I'm not sure how much time folks would like me to
spend adding test cases for the details of how it partitions
pass managers.... The whole thing is just vile, and mostly intended to
unblock ASan, so I'm hoping to rip this all out in a brave new pass
manager world.
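For reference, a no-op module pass in the legacy pass manager looks roughly
like this (illustrative only, not the actual BarrierNoop sources; header
paths differ in 2012-era trees):

  #include "llvm/IR/Module.h"
  #include "llvm/Pass.h"

  namespace {
  // Changes nothing; its only effect is its position in the pipeline, which
  // resets the implicitly-formed CGSCC/function pass managers around it.
  struct BarrierSketch : public llvm::ModulePass {
    static char ID;
    BarrierSketch() : llvm::ModulePass(ID) {}
    bool runOnModule(llvm::Module &) override { return false; }
  };
  }
  char BarrierSketch::ID = 0;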
llvm-svn: 166172
The TargetTransform changes are breaking LTO bootstraps of clang. I am
working with Nadav to figure out the problem, but I am reverting it for now
to get our buildbots working.
This reverts svn commits: 165665 165669 165670 165786 165787 165997
and I have also reverted clang svn 165741
llvm-svn: 166168
any scheduling heuristics nor does it build up any scheduling data structure
that other heuristics use. It essentially linearizes the DAG by doing a DFA walk, but
it does handle glues correctly.
IMPORTANT: it probably can't handle all the physical register dependencies, so
it's not suitable for x86. It also doesn't deal with dbg_value nodes right now,
so it's definitely still WIP.
rdar://12474515
llvm-svn: 166122
All callers of these functions really want the isPhysRegOrOverlapUsed()
functionality which also checks aliases. For historical reasons, targets
without register aliases were calling isPhysRegUsed() instead.
Change isPhysRegUsed() to also check aliases, and switch all
isPhysRegOrOverlapUsed() callers to isPhysRegUsed().
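A hedged sketch of the alias-aware check (the used-register container here is
invented; the real query lives on MachineRegisterInfo):

  #include "llvm/ADT/BitVector.h"
  #include "llvm/MC/MCRegisterInfo.h"

  // True if Reg, or any register aliasing Reg, is marked as used.
  bool isPhysRegUsedSketch(unsigned Reg, const llvm::MCRegisterInfo &MRI,
                           const llvm::BitVector &UsedRegs) {
    for (llvm::MCRegAliasIterator AI(Reg, &MRI, /*IncludeSelf=*/true);
         AI.isValid(); ++AI)
      if (UsedRegs.test(*AI))
        return true;
    return false;
  }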
llvm-svn: 166117