`BasicBlockFwdRefs` (and `BlockAddrFwdRefs` before it) was being emptied
in a non-deterministic order. When predicting use-list order I've
worked around this another way, but even when parsing lazily (where we
can't recreate use-list order), use-lists should be deterministic.
Make them so by using a side-queue of functions with forward-referenced
blocks that gets visited in order.
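A minimal sketch of the mechanism, with hypothetical member and helper
names (the real change is in the bitcode reader):

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/IR/Function.h"
    #include <vector>

    struct ReaderSketch {
      // Fast, but unordered, lookup of pending forward references...
      llvm::DenseMap<llvm::Function *, std::vector<llvm::BasicBlock *>>
          BasicBlockFwdRefs;
      // ...paired with a side-queue that remembers insertion order.
      std::vector<llvm::Function *> BasicBlockFwdRefQueue;

      void materialize(llvm::Function *F); // parses F, erases its map entry

      void materializeForwardReferencedFunctions() {
        // Drain the queue front-to-back instead of walking the DenseMap,
        // so visitation no longer depends on pointer hashing.
        for (llvm::Function *F : BasicBlockFwdRefQueue)
          if (BasicBlockFwdRefs.count(F))
            materialize(F);
        BasicBlockFwdRefQueue.clear();
      }
    };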
llvm-svn: 214899
`parseBitcodeFile()` uses the generic `getLazyBitcodeFile()` function as
a helper. Since `parseBitcodeFile()` isn't actually lazy -- it calls
`MaterializeAllPermanently()` -- bypass the unnecessary call to
`materializeForwardReferencedFunctions()` by extracting out a common
helper function. This removes the last of the use-list churn caused by
blockaddresses.
This highlights that we can't reproduce use-list order of globals and
constants when parsing lazily -- but that's necessarily out of scope.
When we're parsing lazily, we never have all the functions in memory, so
the use-lists of globals (and constants that reference globals) are
always incomplete.
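A hedged sketch of the resulting shape, with `getLazyModule` and
`parseWholeFile` standing in for `getLazyBitcodeModule()` and
`parseBitcodeFile()`; the impl-helper's name and exact signatures are
assumptions, not the committed code:

    #include "llvm/Bitcode/ReaderWriter.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/ErrorOr.h"
    #include "llvm/Support/MemoryBuffer.h"
    using namespace llvm;

    // Common helper: opens the module lazily and only resolves blockaddress
    // forward references for callers that will actually stay lazy.
    static ErrorOr<Module *> getLazyModuleImpl(MemoryBuffer *Buffer,
                                               LLVMContext &Context,
                                               bool WillMaterializeAll);

    ErrorOr<Module *> getLazyModule(MemoryBuffer *Buffer,
                                    LLVMContext &Context) {
      return getLazyModuleImpl(Buffer, Context, /*WillMaterializeAll=*/false);
    }

    ErrorOr<Module *> parseWholeFile(MemoryBuffer *Buffer,
                                     LLVMContext &Context) {
      // Eager path: everything gets materialized anyway, so the
      // forward-reference pass inside the helper is skipped.
      ErrorOr<Module *> M =
          getLazyModuleImpl(Buffer, Context, /*WillMaterializeAll=*/true);
      if (!M)
        return M;
      if (std::error_code EC = (*M)->materializeAllPermanently())
        return EC;
      return M;
    }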
This is part of PR5680.
llvm-svn: 214581
Now that we can reliably handle forward references to `BlockAddress`
(r214563), change the mechanics to simplify predicting use-list order.
Previously, we created dummy `GlobalVariable`s to represent block
addresses. After every function was materialized, we'd go through any
forward references to its blocks and RAUW them with a proper
`BlockAddress` constant. This causes some (potentially a lot of)
unnecessary use-list churn, since any constant expression the dummy is a
part of needs to be rematerialized as well.
Instead, pre-construct a `BasicBlock` immediately -- without attaching
it to its (empty) `Function` -- and use that to construct a
`BlockAddress`. This constant will not have to be regenerated. When
the function body is parsed, hook this pre-constructed basic block up
in the right place using `BasicBlock::insertInto()`.
Both before and after this change, the IR is temporarily in an invalid
state that gets resolved when `materializeForwardReferencedFunctions()`
gets called.
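In API terms, the new scheme looks roughly like this (a sketch; the
helper name is hypothetical, and the forward-reference bookkeeping is
elided):

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    BlockAddress *makeForwardReferencedBlockAddress(Function *F) {
      // Pre-construct the referenced block, deliberately unattached (no
      // parent function), instead of a dummy GlobalVariable.
      BasicBlock *BB = BasicBlock::Create(F->getContext());
      // The BlockAddress is the real constant from day one: no RAUW later,
      // so constant expressions built on top of it never get regenerated.
      return BlockAddress::get(F, BB);
    }

    // When the function body is parsed, the pre-constructed block gets
    // hooked up in the right place:
    //   BB->insertInto(F, InsertBefore);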
This is a prep commit that's part of PR5680, but the only functionality
change is the reduction of churn in the constant pool.
llvm-svn: 214570
`BlockAddress`es are interesting in that they can reference basic blocks
from *outside* the block's function. Since basic blocks are not global
values, this presents particular challenges for lazy parsing.
One corner case was found in PR11677 and fixed in r147425. In that
case, a global variable references a block address. It's necessary to
load the relevant function to resolve the forward reference before doing
anything with the module.
By inspection, I found (and have fixed here) two other cases:
- An instruction from one function references a block address from
  another function, and only the first function is lazily loaded.
  I fixed this the same way as PR11677: by eagerly loading the
  referenced function.
- A function whose block address is taken is dematerialized, leaving
  invalid references to it.
  I fixed this by refusing to dematerialize functions whose block
  addresses are taken (if you have to load it, you can't unload it);
  a sketch of that guard follows below.
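The guard, assuming a set named `BlockAddressesTaken` in the reader (the
name and surrounding structure are guesses):

    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    struct MaterializerSketch {
      // Functions whose blockaddress was taken before their bodies were
      // parsed; populated when the forward reference is created.
      SmallPtrSet<Function *, 4> BlockAddressesTaken;

      void Dematerialize(Function *F) {
        // "If you have to load it, you can't unload it": dropping the body
        // would leave dangling references from BlockAddress constants in
        // other functions and globals.
        if (BlockAddressesTaken.count(F))
          return;
        F->deleteBody(); // stand-in for the real unloading logic
      }
    };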
llvm-svn: 214559
Correctly sort self-users (such as PHI nodes). I added a targeted test
in `test/Bitcode/use-list-order.ll` and the final missing RUN line to
tests in `test/Assembly`.
This is part of PR5680.
llvm-svn: 214417
Since initializers of GlobalValues are being assigned IDs before
GlobalValues themselves, explicitly exclude GlobalValues from the
constant pool. Added targeted test in `test/Bitcode/use-list-order.ll`
and added two more RUN lines in `test/Assembly`.
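In the order-prediction code this amounts to an early guard, roughly
(the function name is hypothetical):

    #include "llvm/IR/GlobalValue.h"
    #include "llvm/Support/Casting.h"
    using namespace llvm;

    static void predictConstantUseListOrder(const Value *V) {
      // GlobalValues were assigned IDs before their initializers, so adding
      // them to the constant pool here would give them a second, conflicting
      // position in the predicted order.
      if (isa<GlobalValue>(V))
        return;
      // ... handle ordinary constants ...
    }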
This is part of PR5680.
llvm-svn: 214368
When predicting use-list order, we visit functions in reverse order
followed by `GlobalValue`s and write out use-lists at the first
opportunity. In the reader, this will translate to *after* the last use
has been added.
For this to work, we actually need to descend into `GlobalValue`s.
Added a targeted test in `use-list-order.ll` and `RUN` lines to the
newly passing tests in `test/Bitcode`.
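A hedged sketch of the traversal (the per-function and per-value helpers
are hypothetical; iteration is spelled conservatively):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    static void predictUseListOrderInFunction(const Function &F); // hypothetical
    static void predictValueUseListOrder(const Value *V);         // hypothetical

    static void predictUseListOrder(const Module &M) {
      // Visit function bodies in reverse: the writer emits each use-list
      // order at its first opportunity, which the reader reaches only after
      // the *last* use has been added.
      SmallVector<const Function *, 16> Functions;
      for (const Function &F : M)
        Functions.push_back(&F);
      while (!Functions.empty())
        predictUseListOrderInFunction(*Functions.pop_back_val());
      // Then descend into the GlobalValues themselves (initializers and
      // aliasees), not just the symbols.
      for (auto I = M.global_begin(), E = M.global_end(); I != E; ++I)
        if (I->hasInitializer())
          predictValueUseListOrder(I->getInitializer());
    }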
There are two remaining failures in `test/Bitcode`:
- blockaddress.ll: I haven't thought through how to model the way
  block addresses change the order of use-lists (or how to work around
  it).
- metadata-2.ll: There's an old-style `@llvm.used` global array here
  that I suspect the .ll parser isn't upgrading properly. When it
  round-trips through bitcode, the .bc reader *does* upgrade it, so
  the extra variable (`i8* null`) has an extra use, and the shuffle
  vector doesn't match.
  I think the fix is to upgrade old-style global arrays (or reject
  them?) in the .ll parser.
This is part of PR5680.
llvm-svn: 214321
This commit fixes undefined behaviour that caused the revert in r214249.
The problem was two unsequenced operations on a `DenseMap<>`, giving
different behaviour in GCC and Clang. This:
    DenseMap<T*, unsigned> DM;
    for (auto &X : ...)
      DM[&X] = DM.size() + 1;
should have been:
    DenseMap<T*, unsigned> DM;
    for (auto &X : ...) {
      unsigned Size = DM.size();
      DM[&X] = Size + 1;
    }
Until r214242, this difference between compilers didn't matter. In
r214242, `OrderMap::LastGlobalValueID` was introduced and compared
against IDs, which in GCC were off by one from my expectations.
llvm-svn: 214270
To avoid unnecessary forward references, the reader doesn't process
initializers of `GlobalValue`s until after the constant pool has been
processed, and then in reverse order. Model this when predicting
use-list order. This gets two more Bitcode tests passing with
`llvm-uselistorder`.
Part of PR5680.
llvm-svn: 214242
This will let users in other libraries know which error occurred. In particular,
it will be possible to check if the parsing failed or if the file is not
bitcode.
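For example, a caller can now distinguish "not bitcode at all" from a
malformed file; the enumerator name below and the exact signatures are
assumptions about the era's API:

    #include "llvm/Bitcode/ReaderWriter.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    bool isProbablyTextualIR(MemoryBuffer *Buffer, LLVMContext &Context) {
      ErrorOr<Module *> M = parseBitcodeFile(Buffer, Context);
      if (std::error_code EC = M.getError())
        // Wrong magic number: fall back to the .ll parser. Any other code
        // means real bitcode that failed to parse.
        return EC == BitcodeError::InvalidBitcodeSignature;
      return false;
    }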
llvm-svn: 214209
Fix the sort of expected order in the reader to correctly return `false`
when comparing a `Use` against itself.
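The trap: std::sort requires a strict weak ordering, so comparing an
element against itself must return false. A minimal sketch of the fixed
comparator (the position helper is hypothetical):

    #include "llvm/IR/Use.h"
    #include <algorithm>
    #include <vector>

    unsigned expectedPosition(const llvm::Use *U); // hypothetical

    void sortUses(std::vector<const llvm::Use *> &Uses) {
      std::sort(Uses.begin(), Uses.end(),
                [](const llvm::Use *L, const llvm::Use *R) {
                  if (L == R)
                    return false; // a Use never precedes itself
                  return expectedPosition(L) < expectedPosition(R);
                });
    }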
This was caught by test/Bitcode/binaryIntInstructions.3.2.ll, so I'm
adding a `RUN` line using `llvm-uselistorder` for every test in
`test/Bitcode` that passes.
A few tests still fail, so I'll investigate those next.
This is part of PR5680.
llvm-svn: 214157
Since we're storing lots of these, save two-pointers per vector with a
custom type rather than using the relatively heavy `SmallVector`.
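A sketch of the trade-off (type name and exact layout are assumptions):
`SmallVector` carries begin, end, and capacity pointers plus inline
storage, while a shuffle whose size is fixed at creation only needs a
buffer pointer and a count.

    #include <memory>

    struct UseListShuffleSketch {
      std::unique_ptr<unsigned[]> Data; // one pointer...
      unsigned Size = 0;                // ...plus a count

      explicit UseListShuffleSketch(unsigned N)
          : Data(new unsigned[N]()), Size(N) {}
      unsigned *begin() { return Data.get(); }
      unsigned *end() { return Data.get() + Size; }
    };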
Part of PR5680.
llvm-svn: 214135
Predict and serialize use-list order in bitcode. This makes the option
`-preserve-bc-use-list-order` work *most* of the time, but this is still
experimental.
- Builds a full value-table up front in the writer, sets up a list of
  use-list orders to write out, and discards the table. This is a
  simpler first step than determining the order on-the-fly from the
  various overlapping IDs of values (see the sketch after this list).
- The shuffles stored in the use-list order list have an unnecessarily
  large memory footprint.
- `blockaddress` expressions cause functions to be materialized
  out-of-order. For now I've ignored this problem, so use-list orders
  will be wrong for constants used by functions that have block
  addresses taken. There are a couple of ways to fix this, but I
  don't have a concrete plan yet.
- When materializing functions lazily, the use-lists for constants
  will not be correct. This use case is out of scope: what should the
  use-list order be, if it's incomplete?
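The sketch promised above; the struct mirrors the description in the
first bullet, but every name here is an assumption:

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/IR/Module.h"
    #include <vector>
    using namespace llvm;

    struct UseListOrderSketch {
      const Value *V;                // value whose use-list is shuffled
      const Function *F;             // enclosing function, null at module level
      std::vector<unsigned> Shuffle; // permutation for the reader to apply
    };

    DenseMap<const Value *, unsigned> numberValues(const Module &M); // hypothetical
    void collectShuffles(const Module &M,                            // hypothetical
                         const DenseMap<const Value *, unsigned> &IDs,
                         std::vector<UseListOrderSketch> &Orders);

    std::vector<UseListOrderSketch> predictUseListOrders(const Module &M) {
      // Build the full value table up front, mirroring the writer's own
      // numbering of the module.
      DenseMap<const Value *, unsigned> IDs = numberValues(M);
      std::vector<UseListOrderSketch> Orders;
      collectShuffles(M, IDs, Orders); // sort each use-list by ID
      return Orders; // the big table dies here; only the shuffles are kept
    }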
This is part of PR5680.
llvm-svn: 214125
`ValueEnumerator::OptimizeConstants()` creates forward references within
the constant pools, which makes predicting constants' use-list order
difficult. For now, just disable the optimization.
This can be re-enabled in the future in one of two ways:
- Enable a limited version of this optimization that doesn't create
  forward references. One idea is to categorize constants by their
  "height" and make that the top-level sort (sketched below).
- Enable it entirely. This requires predicting how many times each
  constant will be recreated as its operands' and operands' operands'
  (etc.) forward references get resolved.
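A minimal sketch of the "height" idea from the first option (nothing
like this is in the tree yet):

    #include "llvm/IR/Constants.h"
    #include <algorithm>
    using namespace llvm;

    // Height of a constant: 1 + the maximum height of its constant operands.
    // Emitting constants in order of increasing height guarantees operands
    // are numbered before their users, so no forward references are created.
    static unsigned getHeight(const Constant *C) {
      unsigned H = 0;
      for (unsigned I = 0, E = C->getNumOperands(); I != E; ++I)
        if (const Constant *Op = dyn_cast<Constant>(C->getOperand(I)))
          H = std::max(H, getHeight(Op) + 1);
      return H;
    }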
This is part of PR5680.
llvm-svn: 213953
Add a -verify-use-list-order pass, which shuffles use-list order, writes
to bitcode, reads back, and verifies that the (shuffled) order matches.
- The utility functions live in lib/IR/UseListOrder.cpp.
- Moved (and renamed) the command-line option to enable writing
  use-lists, so that this pass can return early if the use-list orders
  aren't being serialized.
It's not clear that this pass is the right direction long-term (perhaps
a separate tool instead?), but short-term it's a great way to test the
use-list order prototype. I've added an XFAIL-ed testcase that I'm
hoping to get working pretty quickly.
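In outline, the pass does a shuffle-and-round-trip check along these
lines (all helper names are hypothetical; the real utilities live in
lib/IR/UseListOrder.cpp):

    #include "llvm/IR/Module.h"
    #include <memory>
    using namespace llvm;

    struct UseListSnapshot { // hypothetical: per-value use order
      bool operator==(const UseListSnapshot &RHS) const;
    };
    void shuffleUseLists(Module &M);                     // hypothetical
    UseListSnapshot captureUseListOrder(Module &M);      // hypothetical
    std::unique_ptr<Module> writeAndReadBitcode(Module &M); // hypothetical

    bool verifyUseListOrder(Module &M) {
      // Shuffle every use-list, serialize to bitcode, read it back, and
      // verify the (shuffled) order survived the round trip.
      shuffleUseLists(M);
      UseListSnapshot Before = captureUseListOrder(M);
      std::unique_ptr<Module> M2 = writeAndReadBitcode(M);
      return M2 && captureUseListOrder(*M2) == Before;
    }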
This is part of PR5680.
llvm-svn: 213945
This attribute indicates that the parameter or return pointer is
dereferenceable. Practically speaking, loads from such a pointer within the
associated byte range are safe to speculatively execute. Such pointer
parameters are common in source languages (C++ references, for example).
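For example, marking the first parameter of a function as pointing at
(at least) 8 dereferenceable bytes might look like this; the exact
builder helpers of this era are an assumption:

    #include "llvm/IR/Attributes.h"
    #include "llvm/IR/Function.h"
    #include <cstdint>
    using namespace llvm;

    void markFirstParamDereferenceable(Function *F, uint64_t Bytes) {
      AttrBuilder B;
      B.addDereferenceableAttr(Bytes); // e.g. 8 for a C++ `long &` parameter
      // Attribute index 0 is the return value; parameters start at 1.
      F->addAttributes(1, AttributeSet::get(F->getContext(), 1, B));
    }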
llvm-svn: 213385
Currently the only kind of integer IR attributes that we have are alignment
attributes, and so the attribute kind that takes an integer parameter is called
AlignAttr, but that will change (we'll soon be adding a dereferenceable
attribute that also takes an integer value). Accordingly, rename AlignAttribute
to IntAttribute (class names, enums, etc.).
No functionality change intended.
llvm-svn: 213352
This was an oversight in the original support. As it is, I stuffed this
bit into the alignment. The alignment is stored in log2 form, so it
doesn't need more than 5 bits, given that Value::MaximumAlignment is 1
<< 29.
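The arithmetic, as a sketch (the field layout is illustrative, not the
exact record format): alignments are powers of two stored as log2 + 1,
log2(1 << 29) = 29 encodes as 30, and five bits cover 0-31, so the new
flag can sit one bit higher.

    #include "llvm/Support/MathExtras.h"
    #include <cstdint>
    using namespace llvm;

    uint64_t packAlignmentAndFlag(unsigned Align, bool Flag) {
      // 0 means "no alignment specified"; otherwise store log2(Align) + 1,
      // which never exceeds 30 given Value::MaximumAlignment == 1 << 29.
      uint64_t Encoded = Align ? Log2_32(Align) + 1 : 0;
      return Encoded | (uint64_t(Flag) << 5); // the stuffed bit, above the 5
    }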
Reviewers: nicholas
Differential Revision: http://reviews.llvm.org/D3943
llvm-svn: 213118
The regular end of the bitcode parsing is in the BitstreamEntry::EndBlock
case.
Should fix the LTO bootstrap on OS X (this function is only used by ld64).
llvm-svn: 212357
This reverts commit r212342.
We can get a StringRef into the current Record, but not one in the bitcode
itself since the string is compressed in it.
llvm-svn: 212356
This new IR facility allows us to represent the object-file semantic of
a COMDAT group.
COMDATs allow us to tie together sections and make the inclusion of one
dependent on another. This is required to implement features like MS
ABI VFTables and optimizing away certain kinds of initialization in C++.
This functionality is only representable in COFF and ELF; Mach-O has no
similar mechanism.
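Using the new IR-level API looks roughly like this (treat the exact
method set as an approximation of what this change introduces):

    #include "llvm/IR/Comdat.h"
    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    void putInComdatGroup(Module &M, GlobalVariable *GV) {
      // Members of one COMDAT group are kept or discarded together.
      Comdat *C = M.getOrInsertComdat(GV->getName());
      C->setSelectionKind(Comdat::Any); // keep any one copy of the group
      GV->setComdat(C);
    }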
Differential Revision: http://reviews.llvm.org/D4178
llvm-svn: 211920
[LLVM part]
These patches rename the loop unrolling and loop vectorizer metadata
such that they have a common 'llvm.loop.' prefix. Metadata name
changes:
  llvm.vectorizer.* => llvm.loop.vectorizer.*
  llvm.loopunroll.* => llvm.loop.unroll.*
This was a suggestion from an earlier review
(http://reviews.llvm.org/D4090) which added the loop unrolling
metadata.
Patch by Mark Heffernan.
llvm-svn: 211710
This allows us to just use a std::unique_ptr to store the pointer to the buffer.
The flip side is that they have to support releasing the buffer back to the
caller.
Overall, this looks like a more efficient and less brittle API.
llvm-svn: 211542
LLVMGetBitcodeModuleInContext should not take ownership on error. I will
try to localize this odd API requirement, but this should get the bots green.
llvm-svn: 211213
We do have use cases for the bitcode reader owning the buffer or not, but we
always know which one we have when we construct it.
It might be possible to simplify this further, but this is a step in the
right direction.
llvm-svn: 211205
This commit adds a weak variant of the cmpxchg operation, as described
in C++11. A cmpxchg instruction with this modifier is permitted to
fail to store, even if the comparison indicated it should.
As a result, cmpxchg instructions must return a flag indicating
success in addition to their original iN value loaded. Thus, for
uniformity *all* cmpxchg instructions now return "{ iN, i1 }". The
second flag is 1 when the store succeeded.
At the DAG level, a new ATOMIC_CMP_SWAP_WITH_SUCCESS node has been
added as the natural representation for the new cmpxchg instructions.
It is a strong cmpxchg.
By default this gets Expanded to the existing ATOMIC_CMP_SWAP during
Legalization, so existing backends should see no change in behaviour.
If they wish to deal with the enhanced node instead, they can call
setOperationAction on it. Beware: as a node with 2 results, it cannot
be selected from TableGen.
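For a backend that wants the enhanced node, opting in happens in its
TargetLowering constructor, along these lines (a fragment, not a full
target):

    // In the target's TargetLowering constructor: keep the success flag
    // around instead of letting legalization expand the node back to the
    // plain ATOMIC_CMP_SWAP.
    setOperationAction(ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, MVT::i32, Custom);
    setOperationAction(ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, MVT::i64, Custom);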
Currently, no use is made of the extra information provided in this
patch. Test updates are almost entirely adapting the input IR to the
new scheme.
Summary for out of tree users:
------------------------------
+ Legacy Bitcode files are upgraded during read.
+ Legacy assembly IR files will be invalid.
+ Front-ends must adapt to different type for "cmpxchg".
+ Backends should be unaffected by default.
llvm-svn: 210903
Aliases with unnamed_addr were in a strange state: the flag is stored in
GlobalValue, the language reference talks about "unnamed_addr aliases",
but the verifier was rejecting them.
It seems natural to allow unnamed_addr in aliases:
* It is a property of how the value is accessed, not of the data itself.
* It is perfectly possible to write code that depends on the address
  of an alias.
This patch then makes unnamed_addr legal for aliases. One side effect is
that the syntax changes for a corner case: in globals, unnamed_addr is
now printed before the address space.
llvm-svn: 210302
The jumptable attribute comes with a pass that rewrites all indirect
calls to jumptable functions to pass through jump-instruction tables.
This also adds backend support for generating those tables on ARM and X86.
Note that since the jumptable attribute creates a second function pointer for a
function, any function marked with jumptable must also be marked with unnamed_addr.
llvm-svn: 210280
This patch changes GlobalAlias to point to an arbitrary ConstantExpr and it is
up to MC (or the system assembler) to decide if that expression is valid or not.
This reduces our ability to diagnose invalid uses and how early we can spot
them, but it also lets us do things like
  @test5 = alias inttoptr(i32 sub (i32 ptrtoint (i32* @test2 to i32),
                                   i32 ptrtoint (i32* @bar to i32)) to i32*)
An important implication of this patch is that the notion of aliased global
doesn't exist any more. The alias has to encode the information needed to
access it in its metadata (linkage, visibility, type, etc).
Another consequence to notice is that getSection has to return a "const char *".
It could return a NullTerminatedStringRef if there was such a thing, but when
that was proposed the decision was to just use "const char *" for that.
llvm-svn: 210062