politely and document this feature.
This simple API extension then allows us to write all of the
Instructions' address space query methods much more simply. No
functionality change intended here.
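As a hedged illustration only (the start of this log is truncated, so the
exact API is not shown here): once the pointer type itself can report its
address space, an instruction's query collapses to a one-liner. The names
below follow the current LLVM tree and are assumptions, not the patch:

  #include "llvm/IR/Instructions.h"

  // Hypothetical sketch: the load's address-space query simply forwards to
  // the address space recorded on the pointer operand's type.
  unsigned loadAddressSpace(const llvm::LoadInst &LI) {
    return LI.getPointerOperand()->getType()->getPointerAddressSpace();
  }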
llvm-svn: 167223
r165941: Resubmit the changes to llvm core to update the functions to
support different pointer sizes on a per address space basis.
Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.
However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.
In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.
In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.
This is part of my ongoing effort to clean out the partial address space
code, which carries high risk and low test coverage and is not likely to be
finished as the 3.2 release looms closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.
llvm-svn: 167222
This reverts the series of commits starting with r166578, which introduced
getIntPtrType support for multiple address spaces via a pointer type, and
also introduced a crasher bug in the constant folder reported in PR14233.
These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to keep moving forward,
in order to reduce the amount of churn and minimize the number of commits
reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.
Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)
After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.
Summary of reverted revisions:
r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
since I cancelled the command. Bleh, sorry about this!"
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
on the address space.
llvm-svn: 167221
We want the diagnostic, and if the load is optimized away, we still want to
trap it. Stop checking non-default address spaces; that doesn't work in
general.
llvm-svn: 167219
When target costs are available, use them to account for the costs of
shuffles on internal edges of the DAG of candidate pairs.
Because the shuffle costs here currently apply only to the internal edges,
the target cost model is still trivial, and the chain depth requirement is
still in place, I don't yet have an easy test case. Nevertheless, judging by
the debug output, it does seem to do the right thing to the effective "size"
of each DAG of candidate pairs.
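To make the accounting concrete, here is a minimal self-contained sketch of
the idea (the types and names are invented, not the actual BBVectorize code):
each fused pair contributes to the effective size of the DAG, and the
target-reported cost of each shuffle on an internal edge is subtracted:

  #include <vector>

  struct InternalEdge { int ShuffleCost; }; // cost from the target cost model

  struct PairDAG {
    int NumPairs;                            // candidate pairs fused by the DAG
    std::vector<InternalEdge> InternalEdges; // edges between pairs in the DAG
  };

  // Effective "size" of a DAG of candidate pairs: every fused pair is a win,
  // but each internal edge that requires a shuffle eats into that win.
  int effectiveSize(const PairDAG &DAG) {
    int Size = DAG.NumPairs;
    for (const InternalEdge &E : DAG.InternalEdges)
      Size -= E.ShuffleCost;
    return Size;
  }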
llvm-svn: 167217
The detection of values that need to be copied into the generated OpenMP
subfunction also detects the array base addresses needed in the SCoP. Hence, it
is not necessary to unconditionally copy all the base addresses to the generated
function.
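As a hypothetical C-style illustration (invented names, not the actual test
case): a global array's base address is directly visible in the subfunction,
while a parameter's base address is a value that must be copied in:

  int G[1024]; // global: visible in the OpenMP subfunction, no copy-in needed

  void f(int *A) { // parameter: A's base address needs copy-in
    for (int i = 0; i < 1024; ++i) {
      G[i] = 0; // the base address of G no longer appears in the struct
      A[i] = i; // the base address of A is passed via the copy-in struct
    }
  }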
Test cases are modified to reflect this change. Arrays that are global
variables no longer occur in the struct passed to the subfunction. A test
case for base address copy-in is added in copy_in_array.{c,ll}.
Committed with slight modifications
Contributed by: Armin Groesslinger <armin.groesslinger@uni-passau.de>
llvm-svn: 167215
In addition to the arrays and clast variables, a SCoP statement may also refer to
values defined before the SCoP or to function arguments. Detect these values and
add them to the set of values passed to the function generated for OpenMP
parallel execution of a clast.
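For illustration, a hypothetical example (not one of the added test cases)
where the statement in the SCoP refers both to a function argument and to a
value defined before the SCoP, so both must be passed to the subfunction:

  void f(float *A, int n) {
    float c = 2.0f * n;         // value defined before the SCoP
    for (int i = 0; i < n; ++i) // loop body to be executed in parallel
      A[i] = c * i;             // refers to c, A, and (via the bound) n
  }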
Committed with additional test cases and some refactoring.
Contributed by: Armin Groesslinger <armin.groesslinger@uni-passau.de>
llvm-svn: 167214
When generating OpenMP or GPGPU code, the original ValueMap and ClastVars must
be kept. We already recovered the original ClastVars by reverting the changes,
but we did not keep the content of the ValueMap. This patch now keeps an
explicit copy of both maps and restores them after generating OpenMP or GPGPU
code.
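A minimal sketch of the save/restore pattern, with stand-in map types (the
real ValueMap and ClastVars types differ):

  #include <map>
  #include <string>

  using ValueMapT  = std::map<const void *, void *>; // stand-in for ValueMap
  using ClastVarsT = std::map<std::string, void *>;  // stand-in for ClastVars

  void generateParallelCode(ValueMapT &ValueMap, ClastVarsT &ClastVars) {
    // Keep explicit copies of both maps ...
    ValueMapT  SavedValueMap  = ValueMap;
    ClastVarsT SavedClastVars = ClastVars;

    // ... generate the OpenMP (or GPGPU) code, which may rewrite both maps ...

    // ... and restore the original contents afterwards.
    ValueMap  = SavedValueMap;
    ClastVars = SavedClastVars;
  }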
This is an adapted version of a patch contributed by:
Armin Groesslinger <armin.groesslinger@uni-passau.de>
llvm-svn: 167213
Specifically, if adding a constraint makes the current system infeasible,
assume the constraint is false, instead of attempting to add its negation.
In +Asserts builds we will still assert that at least one state is feasible.
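A self-contained toy model of that logic (this is not the analyzer's
ConstraintManager API; the "state" here is just an integer range for a
single symbol):

  #include <cassert>
  #include <optional>

  struct State { int Lo, Hi; }; // toy state: feasible range of one symbol

  // Constrain with "symbol == V" (Assumption true) or "symbol != V"
  // (Assumption false); std::nullopt means the system became infeasible.
  std::optional<State> addConstraint(State S, int V, bool Assumption) {
    bool InRange = S.Lo <= V && V <= S.Hi;
    if (Assumption)
      return InRange ? std::optional<State>(State{V, V}) : std::nullopt;
    if (S.Lo == V && S.Hi == V) // range is exactly {V}: "!= V" is infeasible
      return std::nullopt;
    return S;
  }

  // Assume "symbol == V" both ways.  If asserting the constraint makes the
  // system infeasible, assume it is false instead of adding its negation.
  void assumeDual(State S, int V, std::optional<State> &StTrue,
                  std::optional<State> &StFalse) {
    StTrue = addConstraint(S, V, true);
    if (StTrue)
      StFalse = addConstraint(S, V, false);
    else
      StFalse = S; // the constraint is assumed false; no negation was added
    assert(StTrue || StFalse); // at least one state must remain feasible
  }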
Patch by Ryan Govostes!
llvm-svn: 167195
Explicitly allow composition of null sub-register indices, and handle
that common case in an inlinable stub.
Use a compressed table implementation instead of the previous nested
switches, which generated pretty bad code.
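The shape of the result, as a hedged sketch (the real table is compressed
and emitted by TableGen; the all-zero placeholder below is an assumption):

  #include <cstdint>

  enum { NumSubRegIndices = 4 }; // index 0 is the null sub-register index

  // Composition table for the non-null cases; a real target fills this in.
  static const uint16_t Composite[NumSubRegIndices][NumSubRegIndices] = {};

  // Inlinable stub: composing with the null index is the identity, so the
  // common case never touches the table.
  inline unsigned composeSubRegIndices(unsigned A, unsigned B) {
    if (!A) return B;
    if (!B) return A;
    return Composite[A][B];
  }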
llvm-svn: 167190
The adc/sbb optimization is able to convert the following expressions
into a single adc/sbb instruction:
(ult) ... = x + 1 // where the ult is unsigned-less-than comparison
(ult) ... = x - 1
This change flips "x >u y" (i.e. the ugt comparison) into the equivalent
"y <u x" in order to expose the adc/sbb opportunity.
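A hypothetical source-level example of why the flip helps (not taken from
the commit): an unsigned "x > y" is the same predicate as "y < x", and the
ult form is the one that matches the adc/sbb pattern:

  // In the ult form the comparison sets the carry flag, so the compare and
  // the add can become a single adc on x86.
  unsigned f(unsigned a, unsigned x, unsigned y) {
    return a + (x > y); // treated as a + (y < x)
  }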
llvm-svn: 167180
The stat cache has been essentially useless ever since we started
validating all file entries in the PCH.
But the motivating reason for removing it now is that it also affected
correctness in this situation:
- You have a header without include guards (using "#pragma once" or #import).
- When creating the PCH:
  - The same header is referenced in #includes with different filename cases.
  - In the PCH, of course, we record only one file entry for the header file,
    but we cache the stat info for both filename cases in the PCH file.
- Then the source files are updated, and the header file is updated in a way
  that keeps its size and modification time the same but changes its inode.
- When using the PCH:
  - We validate the headers; we check that header file and create a file
    entry with its current inode.
  - Another #include uses a filename whose case differs from the previously
    created file entry.
  - To get its stat info, we go through the cached stat info in the PCH and
    receive the old inode.
  - Because of the different inodes, we think they are different files, so we
    go ahead and include the header's contents again.
Removing the stat cache will potentially break clients that attempt to use
the stat cache as a way of avoiding having the actual input files available.
If that use case is important, patches are welcome
to bring it back in a way that will actually work correctly (i.e., emit a PCH that is self-contained, coping with
literal strings, line/column computations, etc.).
This fixes rdar://5502805
llvm-svn: 167172
cl does not attempt to complete a templated class when it is used in this
context. The conversion forces this to happen.
Thanks to Richard Smith for figuring this out.
llvm-svn: 167166