The KILL pseudo-instruction may survive to the asm printer pass, just like the IMPLICIT_DEF. Print the KILL as a comment instead of just leaving a blank line in the output.
With -asm-verbose=0, a blank line is printed, like IMPLICIT_DEF.
llvm-svn: 86041
the testcase into:
_test1: ## @test1
## BB#0: ## %entry
leaq L_test1_bb6(%rip), %rax
jmpq *%rax
L_test1_bb: ## Address Taken
LBB1_1: ## %bb
movb $1, %al
ret
L_test1_bb6: ## Address Taken
LBB1_2: ## %bb6
movb $2, %al
ret
Note, it is very very strange that BlockAddressSDNode doesn't carry
around TargetFlags. Dan, please fix this.
llvm-svn: 85703
unfolding loads for hoisting. getOpcodeAfterMemoryUnfold returns the
opcode of the original operation without the load, not the load
itself, MachineLICM needs to know the operand index in order to get
the correct register class. Extend getOpcodeAfterMemoryUnfold to
return this information.
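As an illustration of why the operand index matters (toy types and placeholder names, not the real TargetInstrInfo interface): the unfold query reports, via an out-parameter, which operand of the unfolded opcode the new load defines, and the hoisting code then looks up the register class for that operand.

    #include <cassert>

    struct RegClass { const char *Name; };
    static const RegClass GR64 = {"GR64"};

    // Returns the opcode of the operation with the load split out;
    // *LoadRegIndex receives the index of the operand the new load defines.
    unsigned opcodeAfterMemoryUnfold(unsigned /*FoldedOpc*/, unsigned *LoadRegIndex) {
      *LoadRegIndex = 2;   // e.g. the memory source of a reg-mem add
      return 1;            // placeholder opcode for the register-register form
    }

    const RegClass *regClassOfOperand(unsigned /*Opc*/, unsigned /*OpIdx*/) {
      return &GR64;        // placeholder lookup into the instruction descriptions
    }

    int main() {
      unsigned LoadRegIdx = 0;
      unsigned NewOpc = opcodeAfterMemoryUnfold(/*FoldedOpc=*/0, &LoadRegIdx);
      assert(regClassOfOperand(NewOpc, LoadRegIdx) == &GR64);
      return 0;
    }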
llvm-svn: 85622
bunch of associated comments, because it doesn't have anything to do
with DAGs or scheduling. This is another step in decoupling MachineInstr
emitting from scheduling.
llvm-svn: 85517
encounters an OEQ or UNE comparison, and update its callers to check
for this return status and recover. This fixes a problem resulting from
the LowerOperation hooks being called from LegalizeVectorOps, because
LegalizeVectorOps only lowers vectors, so OEQ and UNE comparisons may
still be at large. This fixes PR5092.
llvm-svn: 84640
All of these "subreg32" modifier instructions are handled
explicitly by the MCInst lowering phase. If they got to
the asmprinter, they would explode. They should eventually
be replaced with correct use of subregs.
llvm-svn: 84526
LLC was scheduling compares before the adds, causing wrong branches to be taken
in programs, resulting in misoptimized code wherever atomic adds were used.
llvm-svn: 84485
stack slots and giving them different PseudoSourceValues did not fix the
problem of post-alloc scheduling miscompiling llvm itself.
- Apply Dan's conservative workaround by assuming any non-fixed stack slots can
alias other memory locations. This means a load from spill slot #1 cannot
move above a store of spill slot #2.
- Enable post-alloc scheduling for x86 at optimization level Default and above.
llvm-svn: 84424
1. Emit external function type information for all COFF targets since it's
a feature of the object format
2. Emit linker directives only for cygming (since this is ld-specific stuff)
llvm-svn: 84214
(for uses marked kill and defs marked dead) a few instructions in
addition to forwards. Also, increase the maximum number of instructions
to scan, as it appears to help in a fair number of cases.
llvm-svn: 84061
it to hold the address of an sret return value, for x86-64 ABI purposes.
Also, fix the test that was originally intended to test this to actually
test it, using FileCheck.
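For context on the rule being tested (my example, not from the patch): under the x86-64 ABI, a function returning a large aggregate receives a hidden sret pointer and must also hand that same address back to the caller in %rax, which is what the virtual register above ends up feeding.

    struct Big { long a, b, c, d; };   // too large to come back in registers

    Big make_big(long x) {   // caller passes &result in %rdi (the sret pointer)
      Big r = {x, x, x, x};
      return r;              // callee must also return that pointer in %rax
    }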
llvm-svn: 83853
when one of the bits being tested would end up being the sign bit in the
narrower type, and a signed comparison is being performed, since this would
change the result of the signed comparison. This fixes PR5132.
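A small worked example of the hazard (my own illustration, not the PR5132 test case): testing bit 7 and then comparing signed gives a different answer once the test is narrowed to i8, because bit 7 becomes the sign bit.

    #include <cassert>
    #include <cstdint>

    bool wide(uint32_t x)     { return (int32_t)(x & 128) > 0; }  // original
    bool narrowed(uint32_t x) { return (int8_t)(x & 128) > 0; }   // bad narrowing

    int main() {
      assert(wide(0x80));        // bit 7 set: value 128 > 0
      assert(!narrowed(0x80));   // (int8_t)128 == -128, the comparison flips
      return 0;
    }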
llvm-svn: 83670
implementations with a new MachineInstr::isInvariantLoad, which uses
MachineMemOperands and is target-independent. This brings MachineLICM
and other functionality to targets which previously lacked an
isInvariantLoad implementation.
llvm-svn: 83475
a virtual register to eliminate a frame index, it can return that register
and the constant stored there to PEI to track. When scavenging to allocate
for those registers, PEI then tracks the last-used register and value, and
if it is still available and matches the value for the next index, reuses
the existing value rather than re-materializing, and removes the re-materialization instructions.
Fancier tracking and adjustment of scavenger allocations to keep more
values live for longer is possible, but not yet implemented and would likely
be better done via a different, less special-purpose, approach to the
problem.
eliminateFrameIndex() is modified so the target implementations can return
the registers they wish to be tracked for reuse.
ARM Thumb1 implements and utilizes the new mechanism. All other targets are
simply modified to adjust for the changed eliminateFrameIndex() prototype.
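A rough sketch of the reuse rule (invented names; the real PEI/RegScavenger code is more involved): remember which register the last frame-index constant was materialized into, and if the next index needs the same value while that register is still available, reuse it instead of emitting the re-materialization again.

    #include <functional>
    #include <optional>

    struct FrameIndexReuse {
      std::optional<unsigned> LastReg;  // register holding the last materialized value
      long LastValue = 0;

      unsigned getRegHolding(long Value, bool LastStillAvailable,
                             const std::function<unsigned(long)> &Materialize) {
        if (LastReg && LastStillAvailable && LastValue == Value)
          return *LastReg;   // reuse; the remat instructions can be removed
        unsigned R = Materialize(Value);
        LastReg = R;
        LastValue = Value;
        return R;
      }
    };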
llvm-svn: 83467
verbose-asm mode, print comments instead. This eliminates a non-comment
difference between verbose-asm mode and non-verbose-asm mode.
Also, factor out the relevant code out of all the targets and into
target-independent code.
llvm-svn: 83392
the new predicates I added) instead of going through a context and doing a
pointer comparison. Besides being cheaper, this allows a smart compiler
to turn the if sequence into a switch.
llvm-svn: 83297
unused DECLARE instruction.
KILL is not yet used anywhere, it will replace TargetInstrInfo::IMPLICIT_DEF
in the places where IMPLICIT_DEF is just used to alter liveness of physical
registers.
llvm-svn: 83006
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
This eliminates MachineInstr's std::list member and allows the data to be
created by isel and live for the remainder of codegen, avoiding a lot of
copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
fields with the same information. This introduces some redundancy, but
it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
the greatest alignment.
Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.
llvm-svn: 82794
naming scheme used in SelectionDAG, where there are multiple kinds
of "target" nodes, but "machine" nodes are nodes which represent
a MachineInstr.
llvm-svn: 82790
And fix a bug with the behavior of min/max instructions formed from
fcmp uge comparisons.
Also, use FiniteOnlyFPMath() for this code instead of UnsafeFPMath,
as it is more specific.
llvm-svn: 82466
getSymbolForDwarfGlobalReference is smart enough to know that it
needs to register the stub it references with MachineModuleInfoMachO,
so that it gets emitted at the end of the file.
Move stub emission from X86ATTAsmPrinter::doFinalization to the
new X86ATTAsmPrinter::EmitEndOfAsmFile asmprinter hook. The important
thing here is that EmitEndOfAsmFile is called *after* the ehframes are
emitted, so we get all the stubs.
This allows us to remove a gross hack from the asmprinter where it would
"just know" that it needed to output stubs for personality functions.
Now this is all driven from a consistent interface.
The testcase change is just reordering the expected output now that the
stubs come out after the ehframe instead of before.
This also unblocks other changes that Bill wants to make.
llvm-svn: 82269
trying to create RMW opportunities in the x86 backend. This can cause a
cycle to appear in the graph, since the other uses may eventually feed into
the TokenFactor we are sinking the load below.
llvm-svn: 81996
the Intel instruction tables.
The patterns will stay blank because ADD reg, reg
is faster, but having the encoding available is
useful for the disassembler.
llvm-svn: 81994
Eliminate the PersonalityPrefix/Suffix & NeedsIndirectEncoding
fields from MAI: they aren't part of the asm syntax, they are
related to the structure of the object file.
To replace their functionality, add a new
TLOF::getSymbolForDwarfGlobalReference method which asks targets
to decide how to reference a global from EH in a pc-relative way.
The default implementation just returns the symbol. The default
darwin implementation references the symbol through an indirect
$non_lazy_ptr stub. The bizarro x86-64 darwin specialization
handles the weird "foo@GOTPCREL+4" hack.
DwarfException.cpp now uses this to emit the reference to the
symbol in the right way, and this also eliminates another
horrible hack from DwarfException.cpp:
- if (strcmp(MAI->getPersonalitySuffix(), "+4@GOTPCREL"))
- O << "-" << MAI->getPCSymbol();
llvm-svn: 81991
All of these do not have patterns (they're for the
disassembler).
Many of the floating-point instructions will probably
be rolled into definitions that have patterns, and may
eventually be superseded by mdefs. So I put them
together and left a comment.
llvm-svn: 81979
interrupt instruction, which shouldn't arise any other way). 0xcd is
also used by JITMemoryManager to initialize the buffer to garbage,
which means it could appear following a noreturn call even when
that is not a stub, confusing X86CompilationCallback2. PR 4929.
llvm-svn: 81888
has multiple uses, as one of the other uses may be on a path
to a different node above the callseq_start, because that
leads to a cyclic graph. This problem is exposed when
-combiner-global-alias-analysis is used. This fixes PR4880.
llvm-svn: 81821
full AsmPrinter, and change TargetRegistry to keep track
of registered MCInstPrinters.
llvm-mc is still linking in the entire
target foo to get the code emitter stuff, but this is an
important step in the right direction.
llvm-svn: 81754
Change the picbase symbol on non-darwin systems from ".Lllvm$4.$piclabel" to
".L4$pb". The actual name doesn't matter and the darwin name is shorter.
llvm-svn: 81688
Move GetMBBSymbol up to AsmPrinter and make printBasicBlockLabel use it so that
we only have one place that decides what to name bb labels. Hopefully various
clients of printBasicBlockLabel can start using GetMBBSymbol instead.
llvm-svn: 81652
this means that it can only lower one MachineInstr to one MCInst. To
make this fly, we need to pull handling of MO_GOT_ABSOLUTE_ADDRESS
(which generates an implicit label) out of X86MCInstLower.
llvm-svn: 81629
more efficient SmallPtrSet<MCSymbol*>. This eliminates string
craziness and fixes CodeGen/X86/darwin-quote.ll with the new asmprinter.
Codegen is producing stubs in a nondeterministic order, but it was doing
this before anyway.
llvm-svn: 81511
Mangler::getNameWithPrefix. In addition to avoiding some over
quoting, this also is more efficient because it uses smallvector
instead of std::string thrashing.
llvm-svn: 81508
safe. This can happen when a subreg_to_reg 0 has been coalesced. One
exception is when the instruction that folds the load is a move; then we
can simply turn it into a 32-bit load from the stack slot.
rdar://7170444
llvm-svn: 81494
that things like .word can be parsed as target specific. Moved parsing .word
out of AsmParser.cpp into X86AsmParser.cpp as it is 2 bytes on X86 and 4 bytes
for other targets that support the .word directive.
llvm-svn: 81461
the MCInst path of the asmprinter. Instead, pull comment printing
out of the autogenerated asmprinter into each target that uses the
autogenerated asmprinter. This causes code duplication into each
target, but in a way that will be easier to clean up later when more
asmprinter stuff is commonized into the base AsmPrinter class.
This also fixes an xcore strangeness where it inserted two tabs
before every instruction.
llvm-svn: 81396
asm printer into the "printInstruction" routine. This
fixes a problem where the experimental asmprinter would
drop debug labels in some cases, and fixes issues on ppc/xcore
where pseudo instructions like "mr" didn't get debug locs properly.
It is annoying that this moves the call from one place into each
target, but a future set of more invasive refactorings will fix
that problem.
llvm-svn: 81377
to instructions instead of zero extended ones. This makes the asmprinter
print signed values more consistently. This apparently only really affects
the X86 backend.
llvm-svn: 81265
depth first order, so it wouldn't process unreachable blocks.
When compiling at -O0, late dead block elimination isn't done
and the bad instructions got to isel.
llvm-svn: 81187
- when transforming a vector shift of a non-immediate scalar shift amount, zero
extend the i32 shift amount to i64 since the vector shift reads 64 bits (see the sketch below)
- when transforming i16 vectors to use a vector shift, zero extend i16 shift amount
- improve the code quality in some cases when transforming vectors to use a vector shift
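The first two points can be illustrated with SSE2 intrinsics (my example, not code from the patch): the per-element hardware shifts read a 64-bit count, so a 32-bit amount has to reach the count register with its upper bits zeroed.

    #include <emmintrin.h>

    __m128i shift_left_elements(__m128i v, int amt) {
      // _mm_cvtsi32_si128 zero-fills everything above bit 31, so the 64-bit
      // count read by the shift is exactly 'amt' with no garbage in bits 32-63.
      return _mm_sll_epi32(v, _mm_cvtsi32_si128(amt));
    }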
llvm-svn: 80935
disabling the use of 16-bit operations on x86. This doesn't yet work for
inline asms with 16-bit constraints, vectors with 16-bit elements,
trampoline code, and perhaps other obscurities, but it's enough to try
some experiments.
llvm-svn: 80930
from MCAsmLexer.h in preparation of supporting other targets. Changed the
X86AsmParser code to reflect this by removing AsmLexer::LexPercent and looking
for AsmToken::Percent when parsing in places that used AsmToken::Register.
Then changed X86ATTAsmParser::ParseRegister to parse out registers as an
AsmToken::Percent followed by an AsmToken::Identifier.
llvm-svn: 80929
different formatting from the old asmprinter, but it should be
semantically the same. We used to get:
popl %eax
addl $_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$6.$piclabel], %eax
...
Now we get:
popl %eax
.Lpicbaseref6:
addl $(_GLOBAL_OFFSET_TABLE_ + (.Lpicbaseref6 - .Lllvm$6.$piclabel)), %eax
...
llvm-svn: 80905
instruction tables to support segmented addressing (and other objects
of obscure type).
Modified the X86 assembly printers to handle these new operand types.
Added JMP and CALL instructions that use segmented addresses.
llvm-svn: 80857
encodings.
- Make some of the values emitted by the FDEs dependent upon the pointer
size. This is in line with how GCC does things. And it has the benefit of
working for Darwin in 64-bit mode now.
llvm-svn: 80428
- Note, this is a gigantic hack, with the sole purpose of unblocking further
work on the assembler (it's also possible to test the matcher more completely
now).
- Despite being a hack, it's actually good enough to work over all of 403.gcc
(although some encodings are probably incorrect). This is a testament to the
beauty of X86's MachineInstr, no doubt! ;)
llvm-svn: 80234
moves. This avoids the need to promote the operands (or implicitly
extend them, a partial register update condition), and can reduce
i8 register pressure. This substantially speeds up code such as
write_hex in lib/Support/raw_ostream.cpp.
subclass-coalesce.ll is too trivial and no longer tests what it was
originally intended to test.
llvm-svn: 80184
leads to partial-register definitions. To help avoid redundant
zero-extensions, also teach the h-register matching patterns that
use movzbl to match anyext as well as zext.
llvm-svn: 80099
MachineInstr and MachineOperand. This required eliminating a
bunch of stuff that was using DOUT; I hope that Bill doesn't
mind me stealing his fun. ;-)
llvm-svn: 79813
over absolute addressing even in non-PIC mode (unless the address
has an index or something else incompatible), because it has a
smaller encoding.
llvm-svn: 79553
Add patterns and instruction encoding information.
Add custom lowering to deal with hardwired return register of
uncertain type (xmm0).
llvm-svn: 79377
can asmprint:
NEW: movl "L___stack_chk_guard$non_lazy_ptr", %eax
OLD: movl L___stack_chk_guard$non_lazy_ptr, %eax
where 'new' is coming out of the MCInst version of the printer.
llvm-svn: 79170
what was there before. In "no FP mode", we weren't generating labels and unwind
table entries after each "push" instruction. While more than likely "okay", it's
not technically correct. The major thing was that the ordering of when to define
a new CFA register and at what offset wasn't correct. This would cause the
exception handling to fail in ways most miserable to users.
I also cleaned up some code a bit. There's one function which has a "return" at
the beginning, so it's never used. Should I just remove it? :-)
llvm-svn: 79139
support unaligned mem access only for certain types. (Should it be size
instead?)
ARM v7 supports unaligned access for i16 and i32, some v6 variants support it
as well.
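A minimal sketch of the per-type form of the hook (toy types; the real TargetLowering interface takes a value type from the SelectionDAG):

    enum class SimpleVT { i8, i16, i32, i64, f64 };

    struct ARMLoweringSketch {
      bool HasV7Ops = true;   // v7, and some v6 variants, allow unaligned i16/i32
      bool allowsUnalignedMemoryAccesses(SimpleVT VT) const {
        if (!HasV7Ops)
          return false;
        return VT == SimpleVT::i16 || VT == SimpleVT::i32;
      }
    };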
llvm-svn: 79127
specific printer (this only works on x86, for now).
- This makes it possible to do some correctness checking of the parsing and
matching, since we can compare the results of 'as' on the original input, to
those of 'as' on the output from llvm-mc.
- In theory, we could now have an easy ATT -> Intel syntax converter. :)
llvm-svn: 78986
TargetAsmInfo. This eliminates a dependency on TargetMachine.h from
TargetRegistry.h, which technically was a layering violation.
- Clients probably can only sensibly pass in the same TargetAsmInfo as the
TargetMachine has, but there are only limited clients of this API.
llvm-svn: 78928
x86_64-apple-darwin10.
--- Reverse-merging r78895 into '.':
U test/CodeGen/PowerPC/2008-12-12-EH.ll
U lib/Target/DarwinTargetAsmInfo.cpp
--- Reverse-merging r78892 into '.':
U include/llvm/Target/DarwinTargetAsmInfo.h
U lib/Target/X86/X86TargetAsmInfo.cpp
U lib/Target/X86/X86TargetAsmInfo.h
U lib/Target/ARM/ARMTargetAsmInfo.h
U lib/Target/ARM/ARMTargetMachine.cpp
U lib/Target/ARM/ARMTargetAsmInfo.cpp
U lib/Target/PowerPC/PPCTargetAsmInfo.cpp
U lib/Target/PowerPC/PPCTargetAsmInfo.h
U lib/Target/PowerPC/PPCTargetMachine.cpp
G lib/Target/DarwinTargetAsmInfo.cpp
llvm-svn: 78919
pair instead of from a virtual method on TargetMachine. This cuts the final
ties of TargetAsmInfo to TargetMachine, meaning that MC can now use
TargetAsmInfo.
llvm-svn: 78802
"inlineasmstart/end" strings so that the contents of the directive
are separate from the comment character. This lets elf targets
get #APP/#NOAPP for free even if they don't use "#" as the comment
character. This also allows hoisting the darwin stuff up to the
shared TAI class.
llvm-svn: 78737
- Used to mark fake instructions which don't correspond to an actual machine
instruction (or are duplicates of a real instruction). This is to be used for
"special cases" in the .td files, which should be ignored by things like the
assembler and disassembler. We still need a good solution to handle pervasive
duplication, like with the Int_ instructions.
- Set the bit on fake "mov 0" style instructions, which allows turning an
assembler matcher warning into a hard error.
- -2 FIXMEs.
llvm-svn: 78731
and short. Well, it's kinda short. Definitely nasty and brutish.
The front-end generates the register/unregister calls into the SjLj runtime,
call-site indices and landing pad dispatch. The back end fills in the LSDA
with the call-site information provided by the front end. Catch blocks are
not yet implemented.
Built on Darwin and verified no llvm-core "make check" regressions.
llvm-svn: 78625
instead of syntactically as a string. This means that it keeps track of the
segment, section, flags, etc directly and asmprints them in the right format.
This also includes parsing and validation support for llvm-mc and
"attribute(section)", so we should now start getting errors about invalid
section attributes from the compiler instead of the assembler on darwin.
Still todo:
1) Uniquing of darwin mcsections
2) Move all the Darwin stuff out to MCSectionMachO.[cpp|h]
3) there are a few FIXMEs, for example what is the syntax to get the
S_GB_ZEROFILL segment type?
llvm-svn: 78547
since they are in 64 bit mode with i64immSExt32 imms. JIT is not affected since
it handles both word absolute relocations in the same way
llvm-svn: 78479
- This doesn't actually improve the algorithm (its still linear), but the
generated (match) code is now fairly compact and table driven. Still need a
generic string matcher.
- The table still needs to be compressed, this is quite simple to do and should
shrink it to under 16k.
- This also simplifies and restructures the code to make the match classes more
explicit, in anticipation of resolving ambiguities.
llvm-svn: 78461
- Still not very sane, but at least it's not 60k lines on X86. :)
- In terms of correctness, currently some things are hard wired for X86, and we
still don't properly resolve ambiguities (this is ignoring the instructions
we don't even match due to funny .td stuff or other corner cases).
The high level changes:
1. Represent tokens which are significant for matching explicitly as separate
operands. This uniformly handles not only the instruction mnemonic, but
also 'significant' syntax like the '*' in "call * ...".
2. Separate the matching of operands to an instruction from the construction of
the MCInst. In theory this can be done during matching, but since the number
of variations is small I think it makes sense to decompose the problems.
3. Improved a few of the mechanisms to at least successfully flatten / tokenize
the assembly strings for PowerPC and ARM.
4. The comment at the top of AsmMatcherEmitter.cpp explains the approach I'm
moving towards for handling ambiguous instructions. The high-bit is to infer
a partial ordering of the operand classes (and force the user to specify one
if we can't) and use that to resolve ambiguities.
llvm-svn: 78378
by aggressive chain operand optimization. UpdateNodeOperands
does not modify the node in place if it would result in
a node identical to an existing node.
llvm-svn: 78297
a dirty hack and isn't needed anymore since the last x86 code emitter patch)
- Add a target-dependent modifier to addend calculation
- Use R_X86_64_32S relocation for X86::reloc_absolute_word_sext
- Use getELFSectionFlags whenever possible
- fix getTextSection to use TLOF and emit the right text section
- Handle global emission for static ctors, dtors and Type::PointerTyID
- Some minor fixes
llvm-svn: 78176
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerRet, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.
This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between target-independent portions, and the target dependent portion
in IsEligibleForTailCallOptimization.
This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.
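A highly simplified, self-contained sketch of the new shape (toy types; the real hooks take SelectionDAG state, argument flag lists, and more): each target overrides per-calling-convention entry points instead of custom-lowering the special nodes.

    #include <vector>

    struct ArgFlags { bool IsSExt, IsZExt, IsByVal; };
    struct Value {};  // stand-in for SDValue

    class CallLoweringSketch {
    public:
      virtual ~CallLoweringSketch() {}
      // Lower the incoming arguments of a function using calling convention CC.
      virtual Value LowerFormalArguments(unsigned CC,
                                         const std::vector<ArgFlags> &Ins) = 0;
      // Lower an outgoing call; the tail-call decision is split between
      // target-independent code and IsEligibleForTailCallOptimization.
      virtual Value LowerCall(unsigned CC, bool IsTailCall,
                              const std::vector<ArgFlags> &Outs) = 0;
      // Lower the return from the current function.
      virtual Value LowerRet(unsigned CC,
                             const std::vector<ArgFlags> &Outs) = 0;
    };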
llvm-svn: 78142
calls were originally put in place because errs() at one time was
not unbuffered, and these print routines are commonly used with errs()
for debugging. However, errs() is now properly unbuffered, so the
flush calls are no longer needed. This significantly reduces the
number of write(2) calls for regular asm printing when there are many
small functions.
llvm-svn: 78137
for ELF to work.
2) RIP addressing: Use SIB bytes for absolute relocations where RegBase=0,
IndexReg=0.
3) The JIT can get the real address of cstpools and jmptables during
code emission, fix that for object code emission
llvm-svn: 78129
Since we're generating stubs by hand, we don't follow the ABI and don't
create a register spill area.
Don't use this area in compilation callback!
llvm-svn: 77968
pushes in the function prolog if the function doesn't have any stack space,
i.e. for a prolog like:
0x40011870: push %r15
0x40011872: push %r14
0x40011874: push %rbx
Patch by Zoltan!
llvm-svn: 77919
Module*.
Also, dropped uses of TargetMachine where unnecessary. The only target which
still takes a TargetMachine& is Mips, I would appreciate it if someone would
normalize this to match other targets.
llvm-svn: 77918
the only real caller (GetFunctionSizeInBytes) uses it.
The custom ARM implementation of this is basically reimplementing
an assembler poorly for negligible gain. It should be removed
IMNSHO, but I'll leave that to ARMish folks to decide.
llvm-svn: 77877
getLSDASection() to be more specific. This makes it pretty obvious
that the ELF LSDA section is being specified wrong in PIC mode. We're
probably getting a lot of startup-time relocations to a readonly page,
which is expensive and bad.
Someone who cares about ELF C++ should investigate this.
llvm-svn: 77847
compute it based on what it knows. As part of this, rename getSectionForMergeableConstant
to getSectionForConstant because it works for non-mergable constants also.
The only functionality change from this is that Xcore will start dropping
its jump tables into readonly section instead of data section in -static mode.
This should be fine as the linker resolves the relocations. If this is a
problem, let me know and we'll come up with another solution.
llvm-svn: 77833
- Operands which are just a label should be parsed as immediates, not memory
operands (from the assembler perspective).
- Match a few more flavors of immediates.
- Distinguish match functions for memory operands which don't take a segment
register.
- We match the .s for "hello world" now!
llvm-svn: 77745
thing is #if0'd out anyway. Just simplify the code by reducing the interface.
Not deleting this is essential for Bill's continuing happiness.
llvm-svn: 77736
- This is "experimental" code, I am feeling my way around and working out the
best way to do things (and learning tblgen in the process). Comments welcome,
but keep in mind this stuff will change radically.
- This is enough to match "subb" and friends, but not much else. The next step is to
automatically generate the matchers for individual operands.
llvm-svn: 77657
When the return value is not used (i.e., we only care about the value in memory), x86 does not have to use xadd to implement these. Instead, it can use add, sub, inc, dec instructions with the "lock" prefix.
This is currently implemented using a bit of an instruction selection trick. The issue is that the target-independent pattern produces one output and a chain, and we want to map it into one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variant. DAG combiner can then transform the node before it gets to target node selection.
Problem #2 is we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-lock versions. We need a way to add target specific information to target nodes and have this information carried over to machine instructions. Asm printer (or JIT) can use this information to add the "lock" prefix.
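Illustration of the "result unused" case (the C++ is mine, not from the patch): when nothing reads the fetched value, the read-modify-write can be emitted as a plain locked add/inc instead of a lock xadd plus a dead register copy.

    #include <atomic>

    std::atomic<int> counter(0);

    void bump() {
      counter.fetch_add(1);   // old value unused
      // With the optimization described above, x86 can emit:
      //   lock incl counter(%rip)
      // instead of materializing the old value with:
      //   movl $1, %eax
      //   lock xaddl %eax, counter(%rip)
    }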
llvm-svn: 77582
and convert code to use it, instead of having lots of things
poke the isLookupPtrRegClass() method directly.
2. Make PointerLikeRegClass contain a 'kind' int, and store it in
the existing regclass field of TargetOperandInfo when the
isLookupPtrRegClass() predicate is set. Make getRegClass pass
this into TargetRegisterInfo::getPointerRegClass(), allowing
targets to have multiple ptr_rc things (sketched below).
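A hedged sketch of item 2 (illustrative names apart from getPointerRegClass): when an operand's "register class" field really encodes a pointer-class kind, that kind is forwarded to the target so one target can expose several pointer register classes.

    struct TargetRegisterClass {};

    struct OperandInfoSketch {
      bool IsLookupPtrRegClass;  // the predicate mentioned above
      unsigned RegClassOrKind;   // ordinary reg-class ID, or a ptr_rc 'kind'
    };

    struct RegisterInfoSketch {
      // Targets with more than one ptr_rc switch on Kind here.
      virtual const TargetRegisterClass *getPointerRegClass(unsigned Kind) const = 0;
      virtual ~RegisterInfoSketch() {}
    };

    const TargetRegisterClass *getRegClass(const OperandInfoSketch &OI,
                                           const RegisterInfoSketch &TRI) {
      if (OI.IsLookupPtrRegClass)
        return TRI.getPointerRegClass(OI.RegClassOrKind);
      return nullptr;  // otherwise: normal lookup by register-class ID (elided)
    }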
llvm-svn: 77504
it is highly specific to the object file that will be generated in the end,
this introduces a new TargetLoweringObjectFile interface that is implemented
for each of ELF/MachO/COFF/Alpha/PIC16 and XCore.
Though this is still a brutal and ugly refactoring, it is a major step
towards goodness.
This patch also:
1. fixes a bunch of dangling pointer problems in the PIC16 backend.
2. disables the TargetLowering copy ctor which PIC16 was accidentally using.
3. gets us closer to xcore having its own crazy target section flags and
pic16 not having to shadow sections with its own objects.
4. fixes weirdness where ELF targets would set CStringSection but not
CStringSection_. Factor the code better.
5. fixes some bugs in string lowering on ELF targets.
llvm-svn: 77294
'unnamed' bss section, but some impls would want a named one. Since
they don't have consistent behavior, just make each target do their
own thing, instead of doing something "sortof common" then having
targets change immutable objects later.
llvm-svn: 77165
group instead of a bunch of random unrelated ideas. Provide predicates
to categorize a SectionKind into a group, and use them instead of
getKind() throughout the code.
This also renames a ton of SectionKinds to be more consistent and
evocative, and adds a huge number of comments on the enums so that
I will hopefully be able to remember how this stuff works long from
now.
llvm-svn: 77129
1. Spell SectionFlags::Writeable as "Writable".
2. Add predicates for deriving SectionFlags from SectionKinds.
3. Sink ELF-specific getSectionPrefixForUniqueGlobal impl into
ELFTargetAsmInfo.
4. Fix SectionFlagsForGlobal to know that BSS/ThreadBSS has the
BSS bit set (the real fix for PR4619).
5. Fix isSuitableForBSS to not put globals with explicit sections
set in BSS (which was the reason #4 wasn't fixed earlier).
6. Remove my previous hack for PR4619.
llvm-svn: 77085
- Instead of requiring targets to define a JIT quality match function, we just
have them specify if they support a JIT.
- Target selection for the JIT just gets the host triple and looks for the best
target which matches the triple and has a JIT.
llvm-svn: 77060
- Some clients which used DOUT have moved to DEBUG. We are deprecating the
"magic" DOUT behavior which avoided calling printing functions when the
statement was disabled. In addition to being unnecessary magic, it had the
downside of leaving code in -Asserts builds, and of hiding potentially
unnecessary computations.
llvm-svn: 77019
Its classifications now include ELF-specific discriminators. Targets
that don't have these features (like darwin and pecoff) simply treat
data.rel like data, etc.
llvm-svn: 76993
The latter doesn't depend on any crazy LLVM IR stuff, and this
pulls the concatenation of prefix with GV name (the root problem behind
PR4584) out one level.
llvm-svn: 76948