- The defines are in stdint.h, which is #include'd already.
- The block wasn't used anyway, since it was _OpenBSD_, and not __OpenBSD__.
Patch by David Hill!
llvm-svn: 161515
This patch allows us to use cmake to specify a cross compiler for Hexagon.
In particular, the patch adds a missing case for the target Hexagon in
cmake/config-ix.cmake, and it moves the LLVM_DEFAULT_TARGET_TRIPLE and TARGET_TRIPLE
variables from cmake/config-ix.cmake to the top-level CMakeLists.txt to make them
available at configure time. Here is the command line that I used to test
my patches:
$ cmake -G Ninja -D BUILD_SHARED_LIBS:BOOL=ON -D LLVM_TARGETS_TO_BUILD:STRING=Hexagon -D TARGET_TRIPLE:STRING=hexagon-unknown-linux-gnu -D LLVM_DEFAULT_TARGET_TRIPLE:STRING=hexagon-unknown-linux-gnu -D LLVM_TARGET_ARCH:STRING=hexagon-unknown-linux-gnu -D LLVM_ENABLE_PIC:BOOL=OFF ..
$ ninja check
llvm-svn: 161504
There are situations where inline ASM may want to change the section -- for
instance, to create a variable in the .data section. However, it cannot do this
without (potentially) switching back to the wrong section afterwards. E.g.:
asm volatile (".section __DATA, __data\n\t"
              ".globl _fnord\n\t"
              "_fnord: .quad 1f\n\t"
              ".text\n\t"
              "1:" :::);
This may be wrong if this is inlined into a function that has a "section"
attribute. The user should use `.pushsection' and `.popsection' here instead.
The addition of `.previous' is added for completeness.
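For illustration only, a hedged sketch of the recommended form (the wrapper function emit_fnord is made up; it assumes GNU-style inline asm and an assembler that accepts these directives, as the example above does):

  // Sketch: the same inline asm using .pushsection/.popsection, so the
  // assembler returns to whatever section the enclosing function is being
  // emitted into, even when that function carries a "section" attribute.
  static void emit_fnord(void) {
    asm volatile (".pushsection __DATA, __data\n\t"
                  ".globl _fnord\n\t"
                  "_fnord: .quad 1f\n\t"
                  ".popsection\n\t"
                  "1:" :::);
  }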
<rdar://problem/12048387>
llvm-svn: 161477
We perform the following:
1> Use SUB instead of CMP for i8, i16, i32 and i64 in ISel lowering.
2> Modify MachineCSE to correctly handle implicit defs.
3> Convert SUB back to CMP if possible at peephole.
Removed pattern matching of (a>b) ? (a-b):0 and the like, since they are handled
by the peephole pass now.
rdar://11873276
llvm-svn: 161462
We can't rematerialize a PIC base after register allocation anyway, and
scanning physreg use-def chains is very expensive in a function with
many calls.
<rdar://problem/12047515>
llvm-svn: 161461
The getSumForBlock function was quadratic in the number of successors
because getSuccWeight would perform a linear search for an already known
iterator.
llvm-svn: 161460
multiple scalar promotions on a single loop. This also has the effect of
preserving the order of stores sunk out of loops, which is aesthetically
pleasing, and it happens to fix the testcase in PR13542, though it doesn't
fix the underlying problem.
llvm-svn: 161459
This adds support for TargetIndex operands during isel. The meaning of
these (index, offset, flags) operands is entirely defined by the target.
llvm-svn: 161453
An unsigned value converted to floating-point will always be greater than
a negative constant. Unfortunately InstCombine reversed the check so that
unsigned values were being optimized to always be greater than all positive
floating-point constants. <rdar://problem/12029145>
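As a hedged illustration (hypothetical function, not taken from the patch), a comparison of this shape must always fold to true; the bug instead folded comparisons against positive constants that way:

  #include <cstdio>

  // An unsigned value converted to floating point is never negative, so it is
  // always greater than any negative constant.
  static bool always_true(unsigned x) {
    return static_cast<double>(x) > -1.5;
  }

  int main() {
    std::printf("%d %d\n", always_true(0u), always_true(42u)); // prints "1 1"
  }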
llvm-svn: 161452
A target index operand looks a lot like a constant pool reference, but
it is completely target-defined. It contains the 8-bit TargetFlags, a
32-bit index, and a 64-bit offset. It is preserved by all code generator
passes.
TargetIndex operands can be used to carry target-specific information in
cases where immediate operands won't suffice.
llvm-svn: 161441
Compare the critical paths of the two traces through an if-conversion
candidate. If the difference is larger than the branch prediction
penalty, reject the if-conversion. It would never pay.
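A minimal sketch of that heuristic with made-up names and types (the real early if-conversion code is more involved):

  #include <algorithm>

  struct IfConversionCandidate {
    unsigned TruePathCycles;  // critical path through the true-side trace
    unsigned FalsePathCycles; // critical path through the false-side trace
  };

  static bool worthIfConverting(const IfConversionCandidate &C,
                                unsigned MispredictPenalty) {
    unsigned Longer = std::max(C.TruePathCycles, C.FalsePathCycles);
    unsigned Shorter = std::min(C.TruePathCycles, C.FalsePathCycles);
    // If the slower trace exceeds the faster one by more than the branch
    // misprediction penalty, if-conversion can never pay off.
    return Longer - Shorter <= MispredictPenalty;
  }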
llvm-svn: 161433
a use or a BB, but it is inline in the handling of the invoke instruction.
This patch refactors it so that it can be used in other cases. For example, in
define i32 @f(i32 %x) {
bb0:
  %cmp = icmp eq i32 %x, 0
  br i1 %cmp, label %bb2, label %bb1
bb1:
  br label %bb2
bb2:
  %cond = phi i32 [ %x, %bb0 ], [ 0, %bb1 ]
  %foo = add i32 %cond, %x
  ret i32 %foo
}
GVN should be able to replace %x with 0 in any use that is dominated by the
true edge out of bb0. In the above example the only such use is the one in
the phi.
llvm-svn: 161429
and "instruction address -> file/line" lookup.
Instead of a plain collection of rows, the debug line table for a compilation unit is now
treated as a set of row ranges describing sequences (series of contiguous machine
instructions). The sequences are not always listed in order of increasing
address, so the previously used std::lower_bound() sometimes produced wrong results.
Now the instruction address lookup consists of two stages: finding the correct
sequence, and searching for the address within the rows of that sequence.
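A simplified sketch of that two-stage lookup (the types below are made up and do not mirror the real DWARFDebugLine classes):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Row { uint64_t Address; unsigned File, Line; };
  struct Sequence {
    uint64_t LowPC, HighPC;  // address range covered by this sequence
    std::vector<Row> Rows;   // rows sorted by address within the sequence
  };

  static const Row *lookupAddress(const std::vector<Sequence> &Seqs,
                                  uint64_t Addr) {
    // Stage 1: find the sequence containing Addr. Sequences need not be
    // sorted by address, so scan them.
    for (const Sequence &S : Seqs) {
      if (Addr < S.LowPC || Addr >= S.HighPC)
        continue;
      // Stage 2: rows within one sequence are sorted, so binary search.
      auto It = std::upper_bound(
          S.Rows.begin(), S.Rows.end(), Addr,
          [](uint64_t A, const Row &R) { return A < R.Address; });
      return It == S.Rows.begin() ? nullptr : &*(It - 1);
    }
    return nullptr;
  }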
llvm-svn: 161414
We give a bonus for every argument because the argument setup is not needed
anymore when the function is inlined. With this patch we interpret byval
arguments as a compact representation of many arguments. The byval argument
setup is implemented in the backend as an inline memcpy, so to model the
cost as accurately as possible we take the number of pointer-sized elements
in the byval argument and give a bonus of 2 instructions for every one of
those. The bonus is capped at 8 elements, which is the number of stores
at which the x86 backend switches from an expanded inline memcpy to a real
memcpy. It would be better to use the real memcpy threshold from the backend,
but it's not available via TargetData.
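As a rough, illustrative sketch of that arithmetic (not the actual inline-cost code; the names are made up):

  #include <algorithm>
  #include <cstdint>

  static unsigned byvalArgumentBonus(uint64_t ByValSizeInBytes,
                                     uint64_t PointerSizeInBytes) {
    const uint64_t NumElements = ByValSizeInBytes / PointerSizeInBytes;
    // Two "instructions" of bonus per pointer-sized element, capped at eight
    // elements -- roughly where the x86 backend stops expanding the memcpy
    // inline.
    return 2 * static_cast<unsigned>(std::min<uint64_t>(NumElements, 8));
  }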
This change brings the performance of c-ray in line with gcc 4.7. The included
test case tries to reproduce the c-ray problem to catch regressions for this
benchmark early; its performance is dominated by the inlining decision for a
specific call.
This only has a small impact on most code, more on x86 and arm than on x86_64
due to the way the ABI works. When building LLVM for x86 it gives a small
inline cost boost to virtually any function using StringRef or STL allocators,
but only a 0.01% increase in overall binary size. The size of gcc compiled by
clang actually shrank by a couple of bytes with this patch applied, but not
significantly.
llvm-svn: 161413
instsimplify+inline strategy.
The crux of the problem is that instsimplify was reasonably relying on
an invariant that is true within any single function, but is no longer
true mid-inline the way we use it. This invariant is that an argument
pointer != a local (alloca) pointer.
The fix is really lightweight though, and allows instsimplify to be
resilient to these situations: when checking the relationships to
function arguments, ensure that the arguments come from the same
function. If they come from different functions, then none of these
assumptions hold. All credit to Benjamin Kramer for coming up with this
clever solution to the problem.
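A hedged sketch of that guard, written against present-day LLVM headers rather than the 2012 tree (the helper name is made up):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // The "an argument pointer never equals an alloca pointer" assumption only
  // holds when the Argument and the AllocaInst belong to the same function.
  static bool argAndAllocaInSameFunction(const Value *A, const Value *B) {
    const auto *Arg = dyn_cast<Argument>(A);
    const auto *AI = dyn_cast<AllocaInst>(B);
    if (!Arg || !AI)
      return false;
    return Arg->getParent() == AI->getParent()->getParent();
  }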
llvm-svn: 161410
Previously, MBP essentially aligned every branch target it could. This
bloats code quite a bit, especially non-looping code which has no real
reason to prefer aligned branch targets so heavily.
As Andy said in review, it's still a bit odd to do this without a real
cost model, but this at least has much more plausible heuristics.
Fixes PR13265.
llvm-svn: 161409
If the result of a common subexpression is used at all uses of the candidate
expression, CSE should not increase the live range of the common subexpression.
rdar://11393714 and rdar://11819721
llvm-svn: 161396
initialize fields of the class that it used.
The result was nonsense code.
Before:
0000000000000000 <foo>:
0: 00441100 0x441100
4: 03e00008 jr ra
8: 00000000 nop
After:
0000000000000000 <foo>:
0: 00041000 sll v0,a0,0x0
4: 03e00008 jr ra
8: 00000000 nop
llvm-svn: 161377
This allows codegen passes to query properties like
InstrItins->SchedModel->IssueWidth. It also ensures that
computeOperandLatency returns the X86 defaults for loads and "high
latency ops". This should have no significant impact on existing
schedulers because X86 defaults happen to be the same as global
defaults.
llvm-svn: 161370
were using a class defined for 32-bit instructions, and
thus the instruction emitted was addiu instead of daddiu.
This was corrected by adding the instruction opcode as a
field in the base class to be filled in by the defs.
llvm-svn: 161359
When the command line target options were removed from the LLVM libraries, LTO
lost its ability to specify things like `-disable-fp-elim'. Add this back by
adding the command line variables to the `lto' project.
<rdar://problem/12038729>
llvm-svn: 161353
These 2 relocations gain access to the
highest and the second-highest 16 bits
of a 64-bit object.
R_MIPS_HIGHER %higher(A+S)
The %higher(x) function is [ (((long long) x + 0x80008000LL) >> 32) & 0xffff ].
R_MIPS_HIGHEST %highest(A+S)
The %highest(x) function is [ (((long long) x + 0x800080008000LL) >> 48) & 0xffff ].
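Those formulas transcribe directly into code; a small illustrative version (not from the patch):

  #include <cstdint>
  #include <cstdio>

  // %higher(x): bits 32-47 of x after adding the rounding constant.
  static uint16_t mipsHigher(uint64_t X) {
    return static_cast<uint16_t>((X + 0x80008000ULL) >> 32);
  }

  // %highest(x): bits 48-63 of x after adding the rounding constant.
  static uint16_t mipsHighest(uint64_t X) {
    return static_cast<uint16_t>((X + 0x800080008000ULL) >> 48);
  }

  int main() {
    const uint64_t Addr = 0x123456789abcdef0ULL;
    std::printf("higher=0x%04x highest=0x%04x\n",
                static_cast<unsigned>(mipsHigher(Addr)),
                static_cast<unsigned>(mipsHighest(Addr)));
  }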
llvm-svn: 161348
The MFTB instruction itself is being phased out, and its functionality
is provided by MFSPR. According to the ISA docs, using MFSPR works on all known
chips except for the 601 (which did not have a timebase register anyway)
and the POWER3.
Thanks to Adhemerval Zanella for pointing this out!
llvm-svn: 161346
On PPC64, this can be done with a simple TableGen pattern.
To enable this, I've added the (otherwise missing) readcyclecounter
SDNode definition to TargetSelectionDAG.td.
llvm-svn: 161302
This patch is mostly just refactoring a bunch of copy-and-pasted code, but
it also adds a check that the call instructions are readnone or readonly.
That check was already present for sin, cos, sqrt, log2, and exp2 calls, but
it was missing for the rest of the builtins being handled in this code.
llvm-svn: 161282
This option runs LiveIntervals before TwoAddressInstructionPass which
will eventually learn to exploit and update the analysis.
Eventually, LiveIntervals will run before PHIElimination, and we can get
rid of LiveVariables.
llvm-svn: 161270
The previous change caused fast isel to not attempt handling any calls to
builtin functions. That included things like "printf" and caused some
noticeable regressions in compile time. I wanted to avoid having fast isel
keep a separate list of functions that had to be kept in sync with what the
code in SelectionDAGBuilder.cpp was handling. I've resolved that here by
moving the list into TargetLibraryInfo. This is somewhat redundant in
SelectionDAGBuilder but it will ensure that we keep things consistent.
llvm-svn: 161263
I noticed that SelectionDAGBuilder::visitCall was missing a check for memcmp
in TargetLibraryInfo, so that it would use custom code for memcmp calls even
with -fno-builtin. I also had to add a new -disable-simplify-libcalls option
to llc so that I could write a test for this.
llvm-svn: 161262
The 'unused' state of a value number can be represented as an invalid
def SlotIndex. This also exposed code that shouldn't have been looking
at unused value VNInfos.
llvm-svn: 161258
The only real user of the flag was removeCopyByCommutingDef(), and it
has been switched to LiveIntervals::hasPHIKill().
All the code changed by this patch was only concerned with computing and
propagating the flag.
llvm-svn: 161255
The VNInfo::HAS_PHI_KILL flag is only half supported. We precompute it in
LiveIntervalAnalysis, but it isn't properly updated by live range
splitting and functions like shrinkToUses().
It is only used in one place: RegisterCoalescer::removeCopyByCommutingDef().
This patch changes that function to use a new LiveIntervals::hasPHIKill()
function that computes the flag for a given value number.
llvm-svn: 161254
to store additional flag options, since too many things can
and do override CPPFLAGS. Also, this is exported, unlike CPPFLAGS,
so it can actually be used elsewhere. This should enable us
to remove the AC_SUBSTs in the intel checks, but I have no way
of testing it.
llvm-svn: 161233
Fast isel doesn't currently have support for translating builtin function
calls to target instructions. For embedded environments where the library
functions are not available, this is a matter of correctness and not
just optimization. Most of this patch is just arranging to make the
TargetLibraryInfo available in fast isel. <rdar://problem/12008746>
llvm-svn: 161232
This just provides a way to look up a LibFunc::Func enum value for a
function name. Alphabetize the enums and function names so we can use a
binary search.
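A generic sketch of that lookup with a toy table and enum (not the real TargetLibraryInfo data):

  #include <algorithm>
  #include <cstring>
  #include <iterator>

  enum class LibFunc { memcmp_fn, memcpy_fn, memset_fn, printf_fn, Unknown };

  // Must be kept sorted so the binary search below is valid.
  static const char *const StandardNames[] = {"memcmp", "memcpy", "memset",
                                              "printf"};

  static LibFunc getLibFunc(const char *Name) {
    const char *const *Begin = std::begin(StandardNames);
    const char *const *End = std::end(StandardNames);
    const char *const *I =
        std::lower_bound(Begin, End, Name, [](const char *L, const char *R) {
          return std::strcmp(L, R) < 0;
        });
    if (I != End && std::strcmp(*I, Name) == 0)
      return static_cast<LibFunc>(I - Begin);
    return LibFunc::Unknown;
  }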
llvm-svn: 161231
The "findUsedStructTypes" method is very expensive to run. It needs to be
optimized so that LTO can run faster. Splitting this method out of the Module
class will help this occur. For instance, it can keep a list of seen objects so
that it doesn't process them over and over again.
llvm-svn: 161228
Now that TableGen supports references to NAME w/o it being explicitly
referenced in the definition's own name, use that to simplify
assembly InstAlias definitions in multiclasses.
llvm-svn: 161218
Add more comments and use early returns to reduce nesting in isLoadFoldable.
Also disable folding for V_SET0 to avoid introducing a const pool entry and
a const pool load.
rdar://10554090 and rdar://11873276
llvm-svn: 161207
yaml2obj takes a textual description of an object file in YAML format
and outputs the binary equivalent. This greatly simplifies writing
tests that take binary object files as input.
llvm-svn: 161205
Previously, def NAME values were only populated, and references to NAME
resolved, when NAME was referenced in the 'def' entry of the multiclass
sub-entry. e.g.,
multiclass foo<...> {
  def prefix_#NAME : ...
}
It's useful, however, to be able to reference NAME even when the default
def name is used. For example, when a multiclass has 'def : Pat<...>'
or 'def : InstAlias<...>' entries which refer to earlier instruction
definitions in the same multiclass. e.g.,
multiclass myMulti<RegisterClass rc> {
  def _r : myI<(outs rc:$d), (ins rc:$r), "r $d, $r", []>;
  def : InstAlias<"wilma $r", (!cast<Instruction>(NAME#"_r") rc:$r, rc:$r)>;
}
llvm-svn: 161198
Whenever both instruction depths and instruction heights are known in a
block, it is possible to compute the length of the critical path as
max(depth+height) over the instructions in the block.
The stored live-in lists make it possible to accurately compute the
length of a critical path that bypasses the current (small) block.
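For reference, a minimal sketch of that computation (illustrative types, not the MachineTraceMetrics API):

  #include <algorithm>
  #include <vector>

  struct InstrCycles {
    unsigned Depth;  // cycles from the top of the trace to issue
    unsigned Height; // cycles from issue to the end of the trace
  };

  static unsigned criticalPathLength(const std::vector<InstrCycles> &Block) {
    unsigned Len = 0;
    for (const InstrCycles &IC : Block)
      Len = std::max(Len, IC.Depth + IC.Height);
    return Len;
  }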
llvm-svn: 161197
LiveRangeEdit::eliminateDeadDefs() can delete a dead instruction that
reads unreserved physregs. This would leave the corresponding regunit
live interval dangling because we don't have shrinkToUses() for physical
registers.
Fix this problem by turning the instruction into a KILL instead of
deleting it. This happens in a landing pad in
test/CodeGen/X86/2012-05-19-CoalescerCrash.ll:
%vreg27<def,dead> = COPY %EDX<kill>; GR32:%vreg27
becomes:
KILL %EDX<kill>
An upcoming fix to the machine verifier will catch problems like this by
verifying regunit live intervals.
This fixes PR13498. I am not including the test case from the PR since
we already have one exposing the problem once the verifier is fixed.
llvm-svn: 161182
This trivial helper function tests if a register contains a register
unit. It is similar to regsOverlap(), but with asymmetric arguments.
llvm-svn: 161180
Machine CSE and other optimizations can remove instructions so folding
is possible at peephole while not possible at ISel.
This patch is a rework of r160919 and was tested on clang self-host on my local
machine.
rdar://10554090 and rdar://11873276
llvm-svn: 161152
The height of an instruction is the minimum number of cycles from when the
instruction is issued to the end of the trace. Heights are computed for
all instructions in and below the trace center block.
The method for computing heights is different from the depth
computation. As we visit instructions in the trace bottom-up, heights of
used instructions are pushed upwards. This way, we avoid scanning long
use lists, looking for uses in the current trace.
At each basic block boundary, a list of live-in registers and their
minimum heights is saved in the trace block info. These live-in lists
are used when restarting depth computations on a trace that
converges with an already computed trace. They will also be used to
accurately compute the critical path length.
llvm-svn: 161138
TinyPtrVector. With these, it is sufficiently functional for my more
normal / pedestrian uses.
I've not included some r-value reference stuff here because the value
type for a TinyPtrVector is, necessarily, just a pointer.
I've added tests that cover the basic behavior of these routines, but
they aren't as comprehensive as I'd like. In particular, they don't
really test the iterator semantics as thoroughly as they should. Maybe
some brave soul will feel enterprising and flesh them out. ;]
llvm-svn: 161104
Since the llvm::sys::fs::map_file_pages() support function it relies on
is not yet implemented on Windows, the unit tests for FileOutputBuffer
are currently conditionalized to run only on unix.
llvm-svn: 161099
No new test case is added.
This patch makes the test JITTest.FunctionIsRecompiledAndRelinked pass on the
mips platform.
Patch by Petar Jovanovic.
llvm-svn: 161098
instructions that decrement and increment the stack pointer before and after a
call when the function does not have a reserved call frame.
llvm-svn: 161093
MipsSEFrameLowering.
Implement MipsSEFrameLowering::hasReservedCallFrame. Call frames will not be
reserved if there is a call with a large call frame or there are variable sized
objects on the stack.
llvm-svn: 161090
The frame object which points to the dynamically allocated area will not be
needed after changes are made to cease reserving call frames.
llvm-svn: 161076
Assuming infinite issue width, compute the earliest each instruction in
the trace can issue, when considering the latency of data dependencies.
The issue cycle is recorded as a 'depth' from the beginning of the trace.
This is half the computation required to find the length of the critical
path through the trace. Heights are next.
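A sketch of that recurrence under the infinite-issue-width assumption (illustrative types only):

  #include <algorithm>
  #include <vector>

  struct DataDep {
    unsigned DefDepth;   // depth of the defining instruction
    unsigned DefLatency; // latency from that def to this use
  };

  // An instruction can issue once every data dependency has produced its
  // result; with unlimited issue width nothing else delays it.
  static unsigned computeDepth(const std::vector<DataDep> &Deps) {
    unsigned Depth = 0;
    for (const DataDep &D : Deps)
      Depth = std::max(Depth, D.DefDepth + D.DefLatency);
    return Depth;
  }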
llvm-svn: 161074
arguments to the stack in MipsISelLowering::LowerCall, use stack pointer and
integer offset operands rather than frame object operands.
llvm-svn: 161068
single-precision load and store.
Also avoid selecting LUXC1 and SUXC1 instructions during isel. It is incorrect
to map unaligned floating point load/store nodes to these instructions.
llvm-svn: 161063
One motivating example is to sink an instruction from a basic block which has
two successors: one outside the loop, the other inside the loop. We should try
to sink the instruction outside the loop.
rdar://11980766
llvm-svn: 161062
for this class. These tests exercise most of the basic properties, but
the API for TinyPtrVector is very strange currently. My plan is to start
fleshing out the API to match that of SmallVector, but I wanted a test
for what is there first.
Sadly, it doesn't look reasonable to just re-use the SmallVector tests,
as this container can only ever store pointers, and much of the
SmallVector testing is to get construction and destruction right.
Just to get this basic test working, I had to add value_type to the
interface.
While here I found a subtle bug in the combination of 'erase', 'begin',
and 'end'. Both 'begin' and 'end' wanted to use a null pointer to
indicate the "end" iterator of an empty vector, regardless of whether
there is actually a vector allocated or the pointer union is null.
Everything else was fine with this except for erase. If you erase the
last element of a vector after it has held more than one element, we
return the end iterator of the underlying SmallVector, which need not be
a null pointer. Instead, simply use the pointer and pointer + size()
begin/end definitions in the tiny case, and delegate to the inner vector
whenever it is present.
llvm-svn: 161024
We are extending live ranges, so kill flags are not accurate. They
aren't needed until they are recomputed after RA anyway.
<rdar://problem/11950722>
llvm-svn: 161023
We branch to the successor with higher edge weight first.
Convert from
je LBB4_8 --> to outer loop
jmp LBB4_14 --> to inner loop
to
jne LBB4_14
jmp LBB4_8
PR12750
rdar: 11393714
llvm-svn: 161018
CallInst for intrinsics. This allows users of the InstVisitor that would
like to special case certain very common intrinsics to do so naturally
in keeping with the type hierarchy's utility classes.
llvm-svn: 161006
This lets traces include the final iteration of a nested loop above the
center block, and the first iteration of a nested loop below the center
block.
We still don't allow traces to contain backedges, and traces are
truncated where they would leave a loop, as seen from the center block.
llvm-svn: 161003
Empty macro arguments at the end of the list should be treated as if not specified
at all, but those in the middle of the list need to be kept so as not to screw
up the positional numbering. E.g.:
.macro foo
foo_$0_$1_$2_$3:
nop
.endm
foo 1, 2, 3, 4
foo 1, , 3, 4
Should create two labels, "foo_1_2_3_4" and "foo_1__3_4".
rdar://11948769
llvm-svn: 161002
test more than a single instantiation of SmallVector.
Add testing for 0, 1, 2, and 4 element sized "small" buffers. These
appear to be essentially untested in the unit tests until now.
Fix several tests to be robust in the face of a '0' small buffer. As
a consequence of this buffer size, the growth patterns are actually
observable in the test -- yes, this means that many tests never caused
a grow to occur before. For some tests I've merely added a reserve call
to normalize behavior. For others, the growth is actually interesting,
and so I captured the fact that growth would occur and adjusted the
assertions to not assume how rapidly growth occurred.
Also update the specialization for a '0' small buffer length to have all
the same interface points as the normal small vector.
llvm-svn: 161001
When computing a trace, all the candidates for pred/succ must have been
visited. Filter out back-edges first, though. The PO traversal ignores
them.
Thanks to Andy for spotting this in review.
llvm-svn: 160995
where the other_half of the movt and movw relocation entries needs to be set,
and only with the 16 bits of the other half.
rdar://10038370
llvm-svn: 160978
This is a cleaned up version of the isFree() function in
MachineTraceMetrics.cpp.
Transient instructions are very unlikely to produce any code in the
final output. Either because they get eliminated by RegisterCoalescing,
or because they are pseudo-instructions like labels and debug values.
llvm-svn: 160977
A->isPredecessor(B) is the same as B->isSuccessor(A), but it can
tolerate a B that is null or dangling. This shouldn't happen normally,
but it is useful for verification code.
llvm-svn: 160968
Machine CSE and other optimizations can remove instructions so folding
is possible at peephole while not possible at ISel.
rdar://10554090 and rdar://11873276
llvm-svn: 160919
It is possible that an instruction can both use and update EFLAGS.
When checking safety, we should check the usage of EFLAGS first, before
declaring that it is safe to optimize based on the update.
llvm-svn: 160912
A value number is a PHI def if and only if it begins at a block
boundary. This can be derived from the def slot; a separate flag is not
necessary.
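A hedged sketch of that derivation against present-day headers (the accessor in the actual change may differ):

  #include "llvm/CodeGen/LiveInterval.h"

  // A value number is a PHI def exactly when its def slot is a block boundary.
  static bool isPHIDef(const llvm::VNInfo &VNI) {
    return VNI.def.isBlock();
  }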
llvm-svn: 160893
This option replaces the existing live interval computation with one
based on LiveRangeCalc.cpp. The new algorithm does not depend on
LiveVariables, and it can be run at any time, before or after leaving
SSA form.
llvm-svn: 160892
This can happen as long as the instruction is not reachable. InstCombine does generate these unreachable malformed selects when doing RAUW.
llvm-svn: 160874
The rationale here is that it's hard to write loops containing vector erases
correctly, and the bug only shows up if the vector contains non-trivial objects,
leading to crashes when they are formed out of garbage memory.
llvm-svn: 160854
These tables were indexed by [register][subreg index], which made them
very large and sparse.
Replace them with lists of sub-register indexes that match the existing
lists of sub-registers. MCRI::getSubReg() becomes a very short linear
search, like getSubRegIndex() already was.
llvm-svn: 160843
Now that the weird X86 sub_ss and sub_sd sub-register indexes are gone,
there is no longer a need for the CompositeIndices construct in .td
files. Sub-register index composition can be specified on the
SubRegIndex itself using the ComposedOf field.
Also enforce unique names for sub-registers in TableGen. The same
sub-register cannot be available with multiple sub-register indexes.
llvm-svn: 160842
The (COPY_TO_REGCLASS GR32:$src, VR128) pattern looks odd, but
copyPhysReg does the right thing with it. (The old pattern would
eventually produce the same cross-class copy).
llvm-svn: 160830
The SUBREG_TO_REG instruction has magic semantics asserting that the
source value was defined by an instruction that cleared the high half of
the register. Those semantics are never actually exploited for xmm
registers.
llvm-svn: 160818
These idempotent sub-register indices don't do anything --- They simply
map XMM registers to themselves. They no longer affect register classes
either since the SubRegClasses field has been removed from Target.td.
This patch replaces XMM->XMM EXTRACT_SUBREG and INSERT_SUBREG patterns
with COPY_TO_REGCLASS patterns which simply become COPY instructions.
The number of IMPLICIT_DEF instructions before register allocation is
reduced, and that is the cause of the test case changes.
llvm-svn: 160816
This is still a work in progress.
Out-of-order CPUs usually execute instructions from multiple basic
blocks simultaneously, so it is necessary to look at longer traces when
estimating the performance effects of code transformations.
The MachineTraceMetrics analysis will pick a typical trace through a
given basic block and provide performance metrics for the trace. Metrics
will include:
- Instruction count through the trace.
- Issue count per functional unit.
- Critical path length, and per-instruction 'slack'.
These metrics can be used to determine the performance limiting factor
when executing the trace, and how it will be affected by a code
transformation.
Initially, this will be used by the early if-conversion pass.
llvm-svn: 160796
hopefully make it more visible. Adjust the web-docs to have a link to
this file rather than the list itself. I described code owners as also
being gatekeepers for their part of the code, which I think is true but
isn't in the code owner explanation on the web page.
llvm-svn: 160776
It is redundant; RegisterCoalescer will do the remat if it can't eliminate
the copy. Collected instruction counts before and after this. A few extra
instructions are generated due to spilling but it is normal to see these kinds
of changes with almost any small codegen change, according to Jakob.
This also fixed rdar://11830760 where xor is expected instead of movi0.
llvm-svn: 160749
When a live range splits into multiple connected components, we would
arbitrarily assign <undef> uses to component 0. This is wrong when the
use is tied to a def that gets assigned to a different component:
%vreg69<def> = ADD8ri %vreg68<undef>, 1
The use and def must get the same virtual register.
Fix this by assigning <undef> uses to the same component as the value
defined by the instruction, if any:
%vreg69<def> = ADD8ri %vreg69<undef>, 1
This fixes PR13402. The PR has a test case which I am not including
because it is unlikely to keep exposing this behavior in the future.
llvm-svn: 160739
of an array element (rather than at the beginning of the element) and extended
into the next element, then the load from the second element was being handled
incorrectly because the notion of which byte to load next was not updated
correctly. This fixes PR13442. Thanks to Chris Smowton for reporting the
problem, analyzing it, and providing a fix.
llvm-svn: 160711
The long branch pass (fixed in r160601) no longer uses the global base register
to compute addresses of branch destinations, so it is not necessary to reserve
a slot on the stack.
llvm-svn: 160703
struct s {
  double x1;
  float x2;
};
__attribute__((regparm(3))) struct s f(int a, int b, int c);
void g(void) {
  f(41, 42, 43);
}
We need to be able to represent passing the address of s to f (sret) in a
register (inreg). Turns out that all that is needed is to not mark them as
mutually incompatible.
llvm-svn: 160695
if Condition Is Met instructions that was not correctly determining the target
instruction.
So for a jne rel32 instruction:
% cat x.s
.byte 0x0f, 0x85, 0x09, 0x00, 0x00, 0x00
% as x.s
it was incorrectly determining the target:
% otool -q -tv a.out
a.out:
(__TEXT,__text) section
0000000000000000 jne 0xd
and with the fix it gets this correct as:
% otool -q -tv a.out
a.out:
(__TEXT,__text) section
0000000000000000 jne 0xf
rdar://11505997
llvm-svn: 160694
are targeting an ELF platform. Only fold gs-relative (and fs-relative) loads
if it is actually sensible to do so for the target platform.
This fixes PR13438.
llvm-svn: 160687