'unnamed' bss section, but some implementations would want a named one. Since
they don't have consistent behavior, just make each target do its
own thing, instead of doing something "sort of common" and then having
targets change immutable objects later.
llvm-svn: 77165
group instead of a bunch of random unrelated ideas. Provide predicates
to categorize a SectionKind into a group, and use them instead of
getKind() throughout the code.
This also renames a ton of SectionKinds to be more consistent and
evocative, and adds a huge number of comments on the enums so that
I will hopefully be able to remember how this stuff works a long time
from now.
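To illustrate the predicate style this moves toward (the names below are
illustrative, not necessarily the exact predicates added in this revision),
callers ask a grouping question instead of switching on getKind():

  // Illustrative sketch only: grouping predicates on a section-kind value.
  struct SectionKindSketch {
    enum Kind { Text, ReadOnly, Data, BSS, ThreadBSS, ThreadData } K;

    bool isText() const        { return K == Text; }
    bool isBSS() const         { return K == BSS || K == ThreadBSS; }
    bool isThreadLocal() const { return K == ThreadBSS || K == ThreadData; }
    bool isWritable() const    { return K == Data || K == ThreadData || isBSS(); }
  };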
llvm-svn: 77129
- This is a simplified mechanism which just looks up a target based on the
target triple, with a few additional flags.
- Remove getClosestStaticTargetForModule; the moral equivalent is now:
  lookupTarget(Mod->getTargetTriple(), true, false, ...);
- This no longer does the fuzzy matching with target data (based on endianness
and pointer width) that getClosestStaticTargetForModule was doing, but this
was deemed unnecessary.
llvm-svn: 77111
and make it more aggressive: we now put
  const int G2 __attribute__((weak)) = 42;
into the text (read-only) segment, as gcc does; previously we put
it into the data (read-write) segment.
llvm-svn: 77104
1. Spell SectionFlags::Writeable as "Writable".
2. Add predicates for deriving SectionFlags from SectionKinds.
3. Sink ELF-specific getSectionPrefixForUniqueGlobal impl into
ELFTargetAsmInfo.
4. Fix SectionFlagsForGlobal to know that BSS/ThreadBSS has the
BSS bit set (the real fix for PR4619).
5. Fix isSuitableForBSS to not put globals that have an explicit section
set into BSS (which was the reason #4 wasn't fixed earlier); see the
sketch after this list.
6. Remove my previous hack for PR4619.
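As a rough sketch of the rule behind #5 (simplified, not the committed code):
a global that explicitly names a section keeps it, even when a zero
initializer would otherwise make it a BSS candidate.

  #include "llvm/IR/Constant.h"
  #include "llvm/IR/GlobalVariable.h"
  using namespace llvm;

  // Simplified sketch, not the committed implementation: an explicit section
  // request always wins over BSS placement; otherwise a global whose
  // initializer is all zeros may be placed in BSS.
  static bool isSuitableForBSSSketch(const GlobalVariable *GV) {
    if (GV->hasSection())
      return false;
    return GV->hasInitializer() && GV->getInitializer()->isNullValue();
  }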
llvm-svn: 77085
- Instead of requiring targets to define a JIT quality match function, we just
have them specify whether they support a JIT.
- Target selection for the JIT just gets the host triple and looks for the best
target that matches the triple and has a JIT (sketched below).
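A minimal sketch of that selection rule, with invented types and names; the
real registry does fuzzier triple matching than the exact comparison shown:

  #include <string>
  #include <vector>

  struct RegisteredTarget {
    std::string Triple;   // triple this target claims to handle
    bool HasJIT;          // the new yes/no flag replacing the quality hook
  };

  // Pick a JIT-capable target for the host triple; return null if none.
  const RegisteredTarget *
  pickJITTarget(const std::vector<RegisteredTarget> &Registry,
                const std::string &HostTriple) {
    for (const RegisteredTarget &T : Registry)
      if (T.HasJIT && T.Triple == HostTriple)
        return &T;
    return nullptr;
  }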
llvm-svn: 77060
Before:
adr r12, #LJTI3_0_0
ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
.long LBB3_24
.long LBB3_30
.long LBB3_31
.long LBB3_32
After:
adr r12, #LJTI3_0_0
add pc, r12, +r0, lsl #2
LJTI3_0_0:
b.w LBB3_24
b.w LBB3_30
b.w LBB3_31
b.w LBB3_32
This has several advantages.
1. This will make it easier to optimize this to a TBB / TBH instruction +
(smaller) table.
2. This eliminates the need for an ugly asm printer hack to force the addresses
into Thumb addresses (bit 0 set to one).
3. Same codegen for PIC and non-PIC.
4. This eliminates the need to align the table, so the constant pool island pass
won't have to over-estimate the size.
Based on my calculation, the latter is probably slightly faster as well, since
an ldr to pc with a shifter-operand address is very slow. That is, it should be
a win as long as the HW implementation can do a reasonable job of predicting
the second branch.
llvm-svn: 77024
- Some clients that used DOUT have moved to DEBUG. We are deprecating the
"magic" DOUT behavior that avoided calling printing functions when the
statement was disabled. In addition to being unnecessary magic, it had the
downside of leaving code in -Asserts builds, and of hiding potentially
unnecessary computations.
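For illustration, the migration looks roughly like this (DEBUG/DOUT are the
spellings of this era; current trees use LLVM_DEBUG and dbgs(), and the
function below is invented for the example):

  #define DEBUG_TYPE "example"   // illustrative pass name for -debug-only
  #include "llvm/Support/Debug.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void noteVisited(unsigned NumBlocks) {
    // Old style (deprecated): DOUT << "visited " << NumBlocks << " blocks\n";
    // New style: the whole statement compiles away in -Asserts builds, making
    // any work done purely for the message explicit and removable.
    DEBUG(errs() << "visited " << NumBlocks << " blocks\n");
  }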
llvm-svn: 77019
Its classifications now include ELF-specific discriminators. Targets
that don't have these features (like Darwin and PECOFF) simply treat
data.rel like data, etc.
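A hypothetical illustration of that fallback (enum and function invented for
the example; the real classification lives in SectionKind and the target
AsmInfo):

  // Invented sketch: a target without ELF-style relocation sections collapses
  // the relocation-bearing kinds back onto plain writable data.
  enum class KindSketch { Data, DataRel, DataRelRO, ReadOnly };

  KindSketch classifyForTarget(KindSketch K, bool HasELFRelocSections) {
    if (!HasELFRelocSections &&
        (K == KindSketch::DataRel || K == KindSketch::DataRelRO))
      return KindSketch::Data;   // e.g. Darwin/PECOFF: data.rel behaves as data
    return K;
  }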
llvm-svn: 76993