Summary:
Previously we were computing 1 << S. Since "1" is an int, the shift is
undefined behavior when S >= 32.
This patch also adds some static_asserts to these functions to ensure
that we don't hit UB by shifting left too much.
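A minimal sketch of the pattern being fixed (the function and exact form are illustrative, not necessarily the code this patch touches):
  #include <cstdint>
  template <unsigned N> constexpr uint64_t maskTrailingOnes() {
    static_assert(N <= 64, "shift amount must not exceed the bit width");
    // Shift a 64-bit value; "1 << N" with an int literal is UB for N >= 32.
    return N == 64 ? ~uint64_t(0) : (uint64_t(1) << N) - 1;
  }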
Reviewers: rnk
Subscribers: llvm-commits, dylanmckay
Differential Revision: https://reviews.llvm.org/D22441
llvm-svn: 275719
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(i++)) == 1)>
which is an unevaluated context.
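The usual fix is to hoist the increment out of the macro argument; a hedged sketch (names are illustrative):
  // Before: EXPECT_EQ(expected, i++);  // "i++" also appears inside an unevaluated sizeof()
  // After: keep the side effect outside the macro.
  EXPECT_EQ(expected, i);
  ++i;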
llvm-svn: 275717
Summary:
This shift is undefined behavior (and, as compiled by clang, gives the
wrong answer for maxUIntN(64)).
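One way to compute the value without the undefined shift, as a sketch of the idea (the committed fix may differ in detail):
  #include <cstdint>
  // For 1 <= N <= 64, shifting 2 by (N - 1) never shifts by the full bit width,
  // and the unsigned wrap at N == 64 yields 0, so the result is UINT64_MAX.
  inline uint64_t maxUIntN(uint64_t N) {
    return (UINT64_C(2) << (N - 1)) - 1;
  }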
Reviewers: mkuper
Subscribers: llvm-commits, jroelofs, rsmith
Differential Revision: https://reviews.llvm.org/D22430
llvm-svn: 275656
Block 1 and 2 of an MSF file are bit vectors that represent the
list of blocks allocated and free in the file. We had been using
these blocks to write stream data and other data, so we mark them
as the free page map now. We don't yet serialize these pages to
the disk, but at least we make a note of what it is, and avoid
writing random data to them.
Doing this also necessitated cleaning up some of the tests to be
more general and hardcode fewer values, which is nice.
llvm-svn: 275629
Previously we would read a PDB, then write some of it back out,
but write the directory, super block, and other pertinent metadata
back out unchanged. This generated incorrect PDBs, since the amount of
data written was not always the same as the amount of data read.
This patch changes things to use the newly introduced `MsfBuilder`
class to write out a correct and accurate set of Msf metadata for
the data *actually* written, which opens up the door for adding and
removing type records, symbol records, and other types of data to
an existing PDB.
llvm-svn: 275627
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(MockObjects[I++] + 1)) == 1)>
which is an unevaluated context.
llvm-svn: 275293
Summary: Normally when you do a bitwise operation on an enum value, you
get back an instance of the underlying type (e.g. int). But using this
macro, bitwise ops on your enum will return you back instances of the
enum. This is particularly useful for enums which represent a
combination of flags.
Suppose you have a function which takes an int and a set of flags. One
way to do this would be to take two numeric params:
  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
  void Fn(int Num, int Flags);
  void foo() {
    Fn(42, F2 | F3);
  }
But now if you get the order of arguments wrong, you won't get an error.
You might try to fix this by changing the signature of Fn so it accepts
a SomeFlags arg:
  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
  void Fn(int Num, SomeFlags Flags);
  void foo() {
    Fn(42, static_cast<SomeFlags>(F2 | F3));
  }
But now we need a static cast after doing "F2 | F3" because the result
of that computation is the enum's underlying type.
This patch adds a mechanism which gives us the safety of the second
approach with the brevity of the first.
  enum SomeFlags {
    F1 = 1, F2 = 2, F3 = 4, ..., F_MAX = 128,
    LLVM_MARK_AS_BITMASK_ENUM(F_MAX)
  };
  void Fn(int Num, SomeFlags Flags);
  void foo() {
    Fn(42, F2 | F3); // No static_cast.
  }
The LLVM_MARK_AS_BITMASK_ENUM macro enables overloads for bitwise
operators on SomeFlags. Critically, these operators return the enum
type, not its underlying type, so you don't need any static_casts.
An advantage of this solution over the previously-proposed BitMask class
[0, 1] is that we don't need any wrapper classes -- we can operate
directly on the enum itself.
The approach here is somewhat similar to OpenOffice's typed_flags_set
[2]. But we skirt the need for a wrapper class (and a good deal of
complexity) by judicious use of enable_if. We SFINAE on the presence of
a particular enumerator (added by the LLVM_MARK_AS_BITMASK_ENUM macro)
instead of using a traits class so that it's impossible to use the enum
before the overloads are present. The solution here also seamlessly
works across multiple namespaces.
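A simplified sketch of that SFINAE mechanism (the enumerator name reflects my reading of the macro; the real implementation is more involved):
  #include <type_traits>
  // LLVM_MARK_AS_BITMASK_ENUM(F_MAX) injects an extra enumerator; the operator
  // overload only participates in overload resolution for enums that have it.
  template <typename E, typename = decltype(E::LLVM_BITMASK_LARGEST_ENUMERATOR)>
  E operator|(E LHS, E RHS) {
    using U = typename std::underlying_type<E>::type;
    return static_cast<E>(static_cast<U>(LHS) | static_cast<U>(RHS));
  }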
[0] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20150622/283369.html
[1] http://lists.llvm.org/pipermail/llvm-commits/attachments/20150623/073434b6/attachment.obj
[2] https://cgit.freedesktop.org/libreoffice/core/tree/include/o3tl/typed_flags_set.hxx
Reviewers: chandlerc, rsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22279
llvm-svn: 275292
Because of the goop involved in the EXPECT_EQ macro, we were getting the
following warning
expression with side effects has no effect in an unevaluated context
because the "I++" was being used inside of a template type:
switch (0) case 0: default: if (const ::testing::AssertionResult gtest_ar = (::testing::internal:: EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(Args[I++])) == 1)>::Compare("Args[I++]", "&A", Args[I++], &A))) ; else ::testing::internal::AssertHelper(::testing::TestPartResult::kNonFatalFailure, "../src/unittests/IR/FunctionTest.cpp", 94, gtest_ar.failure_message()) = ::testing::Message();
llvm-svn: 275291
Summary:
This represents the adjustment applied to the implicit 'this' parameter
in the prologue of a virtual method in the MS C++ ABI. The adjustment is
always zero unless multiple inheritance is involved.
This increases the size of DISubprogram by 8 bytes, unfortunately. The
adjustment really is a signed 32-bit integer. If this size increase is
too much, we could probably win it back by splitting out a subclass with
info specific to virtual methods (virtuality, vindex, thisadjustment,
containingType).
Reviewers: aprantl, dexonsmith
Subscribers: aaboud, amccarth, llvm-commits
Differential Revision: http://reviews.llvm.org/D21614
llvm-svn: 274325
When concatenating two error lists, the ErrorList::join method (which is called
by joinErrors) was failing to set the checked bit on the second error, leading
to a 'failure to check error' assertion.
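A hedged usage sketch of the path being fixed (error messages are illustrative): consuming the joined Error must mark both operands as checked.
  #include "llvm/Support/Error.h"
  using namespace llvm;
  void demoJoin() {
    Error Combined = joinErrors(
        make_error<StringError>("first failure", inconvertibleErrorCode()),
        make_error<StringError>("second failure", inconvertibleErrorCode()));
    // Consuming the ErrorList must set the checked bit on *both* joined
    // errors, or the assertion fires when the Error is destroyed.
    consumeError(std::move(Combined));
  }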
llvm-svn: 274249
re-insertion of entries into the worklist moves them to the end.
This is fairly similar to a SetVector, but helps in the case where in
addition to not inserting duplicates you want to adjust the sequence of
a pop-off-the-back worklist.
I'm not at all attached to the name of this data structure if others
have better suggestions, but this is one that David Majnemer brought up
in IRC discussions that seems plausible.
I've trimmed the interface down somewhat from SetVector's because several
things make less sense here IMO, primarily iteration.
I'd prefer to add these back as we have users that need them. My use
case doesn't even need all of what is provided here. =]
I've also included a basic unittest to make sure this functions
reasonably.
Differential Revision: http://reviews.llvm.org/D21866
llvm-svn: 274198
This fixes an issue where occurrence counts would be unexpectedly
reset when parsing different parts of a command line multiple
times.
**ORIGINAL COMMIT MESSAGE**
This allows command line tools to use syntaxes like the following:
llvm-foo.exe command1 -o1 -o2
llvm-foo.exe command2 -p1 -p2
Where command1 and command2 contain completely different sets of
valid options. This is backwards compatible with previous uses
of llvm cl which did not support subcommands, as any option
which specifies no optional subcommand (e.g. all existing
code) goes into a special "top level" subcommand that expects
dashed options to appear immediately after the program name.
For example, code which is subcommand unaware would generate
a command line such as the following, where no subcommand
is specified:
llvm-foo.exe -q1 -q2
The top level subcommand can co-exist with actual subcommands,
as it is implemented as an actual subcommand which is searched
if no explicit subcommand is specified. So llvm-foo.exe as
specified above could be written so as to support all three
aforementioned command lines simultaneously.
There is one additional "special" subcommand called AllSubCommands,
which can be used to inject an option into every subcommand.
This is useful to support things like help, so that commands
such as:
llvm-foo.exe --help
llvm-foo.exe command1 --help
llvm-foo.exe command2 --help
All work and display the help for the selected subcommand
without having to explicitly go and write code to handle each
one separately.
This patch is submitted without an example of anything actually
using subcommands, but a followup patch will convert the
llvm-pdbdump tool to use subcommands.
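Although the patch ships no in-tree user yet, a rough sketch of the intended usage (option names and the selection check reflect my understanding of the cl API, not code from this patch):
  #include "llvm/Support/CommandLine.h"
  using namespace llvm;
  static cl::SubCommand Command1("command1", "First mode of llvm-foo");
  static cl::SubCommand Command2("command2", "Second mode of llvm-foo");
  static cl::opt<bool> O1("o1", cl::desc("Only valid for command1"), cl::sub(Command1));
  static cl::opt<bool> P1("p1", cl::desc("Only valid for command2"), cl::sub(Command2));
  int main(int argc, char **argv) {
    cl::ParseCommandLineOptions(argc, argv);
    if (Command1) {
      // "llvm-foo.exe command1 -o1" was parsed; O1 is populated here.
    }
    return 0;
  }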
Reviewed By: beanz
llvm-svn: 274171
This allows command line tools to use syntaxes like the following:
llvm-foo.exe command1 -o1 -o2
llvm-foo.exe command2 -p1 -p2
Where command1 and command2 contain completely different sets of
valid options. This is backwards compatible with previous uses
of llvm cl which did not support subcommands, as any option
which specifies no optional subcommand (e.g. all existing
code) goes into a special "top level" subcommand that expects
dashed options to appear immediately after the program name.
For example, code which is subcommand unaware would generate
a command line such as the following, where no subcommand
is specified:
llvm-foo.exe -q1 -q2
The top level subcommand can co-exist with actual subcommands,
as it is implemented as an actual subcommand which is searched
if no explicit subcommand is specified. So llvm-foo.exe as
specified above could be written so as to support all three
aforementioned command lines simultaneously.
There is one additional "special" subcommand called AllSubCommands,
which can be used to inject an option into every subcommand.
This is useful to support things like help, so that commands
such as:
llvm-foo.exe --help
llvm-foo.exe command1 --help
llvm-foo.exe command2 --help
All work and display the help for the selected subcommand
without having to explicitly go and write code to handle each
one separately.
This patch is submitted without an example of anything actually
using subcommands, but a followup patch will convert the
llvm-pdbdump tool to use subcommands.
Reviewed By: beanz
Differential Revision: http://reviews.llvm.org/D21485
llvm-svn: 274054
The bitset metadata currently used in LLVM has a few problems:
1. It has the wrong name. The name "bitset" refers to an implementation
detail of one use of the metadata (i.e. its original use case, CFI).
This makes it harder to understand, as the name makes no sense in the
context of virtual call optimization.
2. It is represented using a global named metadata node, rather than
being directly associated with a global. This makes it harder to
manipulate the metadata when rebuilding global variables, summarise it
as part of ThinLTO and drop unused metadata when associated globals are
dropped. For this reason, CFI does not currently work correctly when
both CFI and vcall opt are enabled, as vcall opt needs to rebuild vtable
globals, and fails to associate metadata with the rebuilt globals. As I
understand it, the same problem could also affect ASan, which rebuilds
globals with a red zone.
This patch solves both of those problems in the following way:
1. Rename the metadata to "type metadata". This new name reflects how
the metadata is currently being used (i.e. to represent type information
for CFI and vtable opt). The new name is reflected in the name for the
associated intrinsic (llvm.type.test) and pass (LowerTypeTests).
2. Attach metadata directly to the globals that it pertains to, rather
than using the "llvm.bitsets" global metadata node as we are doing now.
This is done using the newly introduced capability to attach
metadata to global variables (r271348 and r271358).
See also: http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html
Differential Revision: http://reviews.llvm.org/D21053
llvm-svn: 273729
This change is motivated by an upcoming change to the metadata representation
used for CFI. The indirect function call checker needs type information for
external function declarations in order to correctly generate jump table
entries for such declarations. We currently associate such type information
with declarations using a global metadata node, but I plan [1] to move all
such metadata to global object attachments.
In bitcode, metadata attachments for function declarations appear in the
global metadata block. This seems reasonable to me because I expect metadata
attachments on declarations to be uncommon. In the long term I'd also expect
this to be the case for CFI, because we'd want to use some specialized bitcode
format for this metadata that could be read as part of the ThinLTO thin-link
phase, which would mean that it would not appear in the global metadata block.
To solve the lazy loaded metadata issue I was seeing with D20147, I use the
same bitcode representation for metadata attachments for global variables as I
do for function declarations. Since there's a use case for metadata attachments
in the global metadata block, we might as well use that representation for
global variables as well, at least until we have a mechanism for lazy loading
global variables.
In the assembly format, the metadata attachments appear after the "declare"
keyword in order to avoid a parsing ambiguity.
[1] http://lists.llvm.org/pipermail/llvm-dev/2016-June/100462.html
Differential Revision: http://reviews.llvm.org/D21052
llvm-svn: 273336
We recently made MemorySSA own the walker it creates. As a part of this,
the MSSA test fixture was changed to have a `Walker*` instead of a
`unique_ptr<Walker>`. So, we no longer need to do `&*Walker` in order to
get a `Walker*`.
llvm-svn: 273189
pass manager passes' `run` methods.
This removes a bunch of SFINAE goop from the pass manager and just
requires pass authors to accept `AnalysisManager<IRUnitT> &` as a dead
argument. This is a small price to pay for the simplicity of the system
as a whole, despite the noise that changing it causes at this stage.
This will also helpfully allow us to make the signature of the run
methods much more flexible for different kinds of passes, to support
things like intelligently updating the pass's progression over IR units.
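A sketch of the uniform signature this requires for a function pass (illustrative only):
  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;
  struct NoopPass {
    // The AnalysisManager argument must be accepted even when it is unused.
    PreservedAnalyses run(Function &F, AnalysisManager<Function> &AM) {
      (void)F;
      (void)AM;
      return PreservedAnalyses::all();
    }
  };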
While this touches many, many files, the changes are really boring.
Mostly made with the help of my trusty perl one liners.
Thanks to Sean and Hal for bouncing ideas for this with me in IRC.
llvm-svn: 272978
We should update the results of BranchProbabilityInfo after removing a block in
JumpThreading. Otherwise we will get a dangling pointer inside the
BranchProbabilityInfo cache.
Differential Revision: http://reviews.llvm.org/D20957
llvm-svn: 272891
Differential Revision: http://reviews.llvm.org/D19842
Corresponding clang patch: http://reviews.llvm.org/D19843
Re-commit after addressing issues with generating too many warnings for Windows and asan test failures
Patch by Eric Niebler
llvm-svn: 272555
Undef uses are not real uses of a register and must be ignored by
findLastUseBefore(), so that handleMove() does not produce invalid live
intervals in some cases.
This fixes http://llvm.org/PR28083
llvm-svn: 272446
This fixes an alignment issue by forcing all cached allocations
to be 8 byte aligned, and also fixes an issue arising on big
endian systems by writing ulittle32_t's instead of uint32_t's
in the test.
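For context, a hedged sketch of the endian-safe write mentioned above (value and buffer are illustrative):
  #include "llvm/Support/Endian.h"
  #include <cstdint>
  #include <cstring>
  void writeField(uint8_t *Buf) {
    llvm::support::ulittle32_t Field;
    Field = 0x12345678; // stored as bytes 78 56 34 12 regardless of the host
    std::memcpy(Buf, &Field, sizeof(Field));
  }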
llvm-svn: 272437
This adds a method and tests for writing to a PDB stream. With
this, even a PDB stream which is discontiguous can be treated
as a sequential stream of bytes for the purposes of writing.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21157
llvm-svn: 272369
Summary:
Now DISubroutineType has a 'cc' field which should be a DW_CC_ enum. If
it is present and non-zero, the backend will emit it as a
DW_AT_calling_convention attribute. On the CodeView side, we translate
it to the appropriate enum for the LF_PROCEDURE record.
I added a new LLVM vendor specific enum to the list of DWARF calling
conventions. DWARF does not appear to attempt to standardize these, so I
assume it's OK to do this until we coordinate with GCC on how to emit
vectorcall convention functions.
Reviewers: dexonsmith, majnemer, aaboud, amccarth
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D21114
llvm-svn: 272197
The architecture enumeration is shared across ARM and AArch64. However, the
data is not. The code would incorrectly index into the array using the
architecture index, which was offset by the ARMv7 architecture enumeration. We
do not have a marker indicating the architectural family to which the
enumeration belongs, so we cannot be clever about offsetting the index (at least
it is not immediately apparent to me). Instead, fall back to the tried-and-true
method of slowly iterating the array (it's not a large array, so the impact of
this is not too high).
Because of the incorrect indexing, if we were lucky, we would crash, but usually
we would return an invalid StringRef. We did not have any tests for the AArch64
target parser previously. Extend the previous tests I had added for ARM to
cover AArch64, ensuring that we return the expected StringRefs.
Take the opportunity to change some iterator types to references.
This work is needed to support parsing `.arch name` directives in the AArch64
target asm parser.
llvm-svn: 272145
This allows mapping of any endian-aware type whose underlying
type (e.g. uint32_t) provides a ScalarTraits specialization.
Reviewed by: majnemer
Differential Revision: http://reviews.llvm.org/D21057
llvm-svn: 272049
Arrange to call verify(Function &) on each function, followed by
verify(Module &), whether the verifier is being used from the pass or
from verifyModule(). As a side effect, this fixes an issue that caused
us not to call verify(Function &) on unmaterialized functions from
verifyModule().
Differential Revision: http://reviews.llvm.org/D21042
llvm-svn: 271956
Also fix slice wrappers drop_front and drop_back.
The unittests are pretty awkward, but do the job; alternatives
welcome!
..and yes, I do have ArrayRefs with more than 4 billion elements.
llvm-svn: 271546
We only considered the length of the operation and the length of the
StreamRef, without considering what it meant for the offset to be at a
non-zero position.
llvm-svn: 271496
Add support for the new pass manager to MemorySSA pass.
Change MemorySSA to be computed eagerly upon construction.
Change MemorySSAWalker to be owned by the MemorySSA object that creates
it.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19664
llvm-svn: 271432
This will be necessary to allow the global merge pass to attach
multiple debug info metadata nodes to global variables once we reverse
the edge from DIGlobalVariable to GlobalVariable.
Differential Revision: http://reviews.llvm.org/D20414
llvm-svn: 271358
This tidies up some code that was manually constructing RuntimeDyld::SymbolInfo
instances from JITSymbols. It will save more mess in the future when
JITSymbol::getAddress is extended to return an Expected<TargetAddress> rather
than just a TargetAddress, since we'll be able to embed the error checking in
the conversion.
llvm-svn: 271350
StringError can be used to represent Errors that aren't recoverable based on
the error type, but that have a useful error message that can be reported to
the user or logged.
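A hedged sketch of the intended use (function and message are illustrative):
  #include "llvm/ADT/Twine.h"
  #include "llvm/Support/Error.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;
  Error openConfig(StringRef Path) {
    // Not recoverable by type, but carries a message worth reporting or logging.
    return make_error<StringError>(Twine("cannot open ") + Path,
                                   inconvertibleErrorCode());
  }
  void demo() {
    if (Error E = openConfig("settings.cfg"))
      logAllUnhandledErrors(std::move(E), errs(), "demo: ");
  }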
llvm-svn: 270948
APInt::slt was copying the LHS and RHS into temporaries and then making
them unsigned so that it could use an unsigned comparison. It did
this even on the paths where the result is trivial, such as when the
sign bit of the LHS is set while the RHS's is not.
This changes the logic to return immediately in the trivial cases and to
use an unsigned comparison in the remaining cases, but this time applying
the comparison directly without creating any temporaries.
This works because, for example:
true = (-2 slt -1) = (0xFE ult 0xFF)
Also added some tests explicitly for slt with APInts larger than 64 bits
so that this new code is tested.
Using the memory for 'opt -O2 verify-uselistorder.lto.opt.bc -o opt.bc'
(see r236629 for details), this reduces the number of allocations from
26.8M to 23.9M.
llvm-svn: 270881
Since r268966 the modern Verifier pass defaults to stripping invalid debug info
in nonasserts builds. This patch ports this behavior back to the legacy
Verifier pass as well. The primary motivation is that the clang frontend
accepts bitcode files as input but is still using the legacy pass pipeline.
Background: The problem I'm trying to solve with this sequence of patches is
that historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about breaking
bitcode compatibility with existing producers. For example, we don't necessarily
want IR produced by an older version of clang to be rejected by an LTO link just
because of malformed debug info, and rather provide an option to strip it. Note
that merely outdated (but well-formed) debug info would continue to be
auto-upgraded in this scenario.
http://reviews.llvm.org/D20629
<rdar://problem/26448800>
llvm-svn: 270768
This removes the subclasses of ProfileSummary and moves the members of the derived classes to the base class.
Differential Revision: http://reviews.llvm.org/D20390
llvm-svn: 270143
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
- Remove the base ProfError class to work around an MSVC ICE.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 270020
Having an enum member named Default is quite confusing: Is it distinct
from the others?
This patch removes that member and instead uses Optional<Reloc> in
places where we have a user input that still hasn't been mapped to the
default value, which, it is now clear, has to be one of the remaining 3
options.
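A sketch of the resulting pattern (the concrete fallback chosen here is illustrative):
  #include "llvm/ADT/Optional.h"
  #include "llvm/Support/CodeGen.h"
  using namespace llvm;
  Reloc::Model pickRelocModel(Optional<Reloc::Model> RM) {
    // RM is empty when the user did not specify a model; each target then
    // chooses its own default instead of interpreting a Reloc::Default sentinel.
    return RM ? *RM : Reloc::Static;
  }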
llvm-svn: 269988
Summary:
Add support to control where files for a distributed backend (the
individual index files and optional imports files) are created.
This is invoked with a new thinlto-prefix-replace option in the gold
plugin and llvm-lto. If specified, it expects a string of the form
"oldprefix:newprefix"; instead of generating these files in the
same directory path as the corresponding bitcode file, it will use a path
formed by replacing the bitcode file's path prefix matching oldprefix
with newprefix.
Also add a new replace_path_prefix helper to Path.h in libSupport.
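The prefix replacement amounts to the following (an illustrative sketch, not the actual replace_path_prefix implementation):
  #include <string>
  std::string replacePrefix(const std::string &Path, const std::string &OldPrefix,
                            const std::string &NewPrefix) {
    if (Path.compare(0, OldPrefix.size(), OldPrefix) != 0)
      return Path; // prefix does not match; leave the path unchanged
    return NewPrefix + Path.substr(OldPrefix.size());
  }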
Depends on D19636.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19644
llvm-svn: 269771
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Address undefined-var-template warning.
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269694
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269491
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269462
a sequence of values.
It increments through the values in the half-open range [Begin, End),
producing those values when the iterator is dereferenced. It should support
integers, iterators, and any other type providing these basic arithmetic
operations.
This came up in the C++ standards committee meeting, and it seemed like
a useful construct that LLVM might want as well, and I wanted to
understand how easily we could solve it. I suspect this can be used to
write simpler counting loops even in LLVM along the lines of:
  for (int i : seq(0, v.size())) {
    ...
  }
As part of this, I had to fix the lack of a proxy object returned from
the operator[] in our iterator facade.
Differential Revision: http://reviews.llvm.org/D17870
llvm-svn: 269390
Summary:
...loop after the last iteration.
This is really hard to do correctly. The core problem is that we need to
model liveness through the induction PHIs from iteration to iteration in
order to get the correct results, and we need to correctly de-duplicate
the common subgraphs of instructions feeding some subset of the
induction PHIs. All of this can be driven either from a side effect at
some iteration or from the loop values used after the loop finishes.
This patch implements this by storing the forward-propagating analysis
of each instruction in a cache to recall whether it was free and whether
it has become live and thus counted toward the total unroll cost. Then,
at each sink for a value in the loop, we recursively walk back through
every value that feeds the sink, including looping back through the
iterations as needed, until we have marked the entire input graph as
live. Because we cache this, we never visit instructions more than twice
-- once when we analyze them and put them into the cache, and once when
we count their cost towards the unrolled loop. Also, because the cache
is only two bits and because we are dealing with relatively small
iteration counts, we can store all of this very densely in memory to
avoid this from becoming an excessively slow analysis.
The code here is still pretty gross. I would appreciate suggestions
about better ways to factor or split this up; I've stared too long at
the algorithmic side to really have a good sense of what the design
should probably look like.
Also, it might seem like we should do all of this bottom-up, but I think
that is a red herring. Specifically, the simplification power is *much*
greater working top-down. We can forward propagate very effectively,
even across strange and interesting recurrences around the backedge.
Because we use data to propagate, this doesn't cause a state space
explosion. Doing this level of constant folding, etc, would be very
expensive to do bottom-up because it wouldn't be until the last moment
that you could collapse everything. The current solution is essentially
a top-down simplification with a bottom-up cost accounting which seems
to get the best of both worlds. It makes the simplification incremental
and powerful while leaving everything dead until we *know* it is needed.
Finally, a core property of this approach is its *monotonicity*. At all
times, the current UnrolledCost is a conservatively low estimate. This
ensures that we will never early-exit from the analysis due to exceeding
a threshold when, had we continued, the cost would have gone back
below the threshold. These kinds of bugs can cause incredibly
hard-to-track-down random changes in behavior.
We could use a similar (but much simpler) technique within the inliner
as well, to avoid considering speculated code in the inline cost.
Reviewers: chandlerc
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D11758
llvm-svn: 269388
Remove the ModuleLevelChanges argument, and the ability to create new
subprograms for cloned functions. The latter was added without review in
r203662, but it has no in-tree clients (all non-test callers pass false
for ModuleLevelChanges [1], so it isn't reachable outside of tests). It
also isn't clear that adding a duplicate subprogram to the compile unit is
always the right thing to do when cloning a function within a module. If
this functionality comes back it should be accompanied with a more concrete
use case.
Furthermore, all in-tree clients add the returned function to the module.
Since that's pretty much the only sensible thing you can do with the function,
just do that in CloneFunction.
[1] http://llvm-cs.pcc.me.uk/lib/Transforms/Utils/CloneFunction.cpp/rCloneFunction
Differential Revision: http://reviews.llvm.org/D18628
llvm-svn: 269110
Add convenience function to create MachineModuleInfo and
MachineFunctionAnalysis passes and add them to a pass manager.
Besides factoring out some shared code in
LiveIntervalTest/LLVMTargetMachine, this will be used by my upcoming llc
change.
llvm-svn: 269002
allow the transformation to strip invalid debug info.
This patch separates the Verifier into an analysis and a transformation
pass, with the transformation pass optionally stripping malformed
debug info.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, and rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
http://reviews.llvm.org/D19988
rdar://problem/25818489
This reapplies r268937 without modifications.
llvm-svn: 268966