piece can always be generated.
The default end of diagnostic path piece was failing to generate on a
BlockEdge that was outgoing from a basic block without a terminator,
resulting in a very simple diagnostic being rendered (e.g., no path
highlighting or custom visitors). Reuse another function that does
essentially the same thing, and correct it so that it does not fail
when a block has no terminator.
llvm-svn: 150659
We were not properly handling memory regions that escape into struct
fields, which led to a number of false positives. Be conservative here
and give up tracking when a pointer escapes into a struct.
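A hedged illustration of the pattern (hypothetical code, not the
commit's test case):

    #include <cstdlib>

    struct Wrapper { char *Buf; };

    void fill(Wrapper &W) {
      // The pointer escapes into a struct field; the checker now stops
      // tracking it instead of reporting a leak when 'fill' returns.
      W.Buf = static_cast<char *>(std::malloc(8));
    }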
llvm-svn: 150658
* Fix a bug when determining whether && / || are potential constant
  expressions (see the sketch after this list)
* Try harder when determining whether ?: is a potential constant expression
* Produce a diagnostic on sizeof(VLA) to provide a better source location
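A sketch of the first point (illustrative code, not the commit's test):
f is a potential constant expression because f(false) short-circuits
past the non-constant read.

    extern int runtime_value;

    // Valid constexpr function: some invocation, f(false), is a
    // constant expression even though f(true) would read a runtime
    // value.
    constexpr int f(bool b) { return b && runtime_value; }
    static_assert(f(false) == 0, "short-circuit keeps this constant");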
llvm-svn: 150657
that take formats or sizes.
Also document that scalar expression results can be used in any command
by wrapping the expression in backticks.
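For example (a hypothetical session; the variable names are
illustrative, and the backticks substitute the expression's scalar
result into the command):

    (lldb) memory read -c `end_ptr - start_ptr` start_ptr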
llvm-svn: 150652
The garbage collection metadata needs to be merged "intelligently", when two or
more modules are linked together, and not merely appended. (Appending creates a
section which is too large.) The module flags metadata method is the way to do
this.
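A minimal sketch of attaching a module flag, assuming the current C++
API (these names postdate this commit, and the key/value shown are
illustrative of the Objective-C GC flags):

    #include "llvm/IR/Module.h"

    using namespace llvm;

    void tagModule(Module &M) {
      // Error behavior: linking two modules with conflicting values
      // for this key fails, instead of silently appending both.
      M.addModuleFlag(Module::Error, "Objective-C Garbage Collection", 2);
    }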
<rdar://problem/8198537>
llvm-svn: 150648
pointers and block pointers). We use dummy definitions to keep the
invariant that an implicit, used definition has a body; IR generation
will substitute the actual contents, since they can't be represented
as C++.
For the block pointer case, compute the copy-initialization needed to
capture the lambda object in the block, which IR generation will need
later.
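A standard-C++ illustration (not code from the patch) of the function
pointer case:

    // The captureless lambda's implicit conversion function is "used"
    // by the initialization below; this commit gives it a dummy body
    // that IR generation later replaces with the real thunk.
    auto add_one = [](int x) { return x + 1; };
    int (*fp)(int) = add_one;
    int y = fp(2); // calls through the synthesized conversion function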
llvm-svn: 150645
-fno-objc-arc-exceptions. This will allow the optimizer to perform
optimizations which are only safe under that flag.
This is part of rdar://10803830.
llvm-svn: 150644
Call instructions no longer have a list of 43 call-clobbered registers.
Instead, they get a single register mask operand with a bit vector of
call-preserved registers.
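A hedged sketch of the idea using today's API (the signatures at the
time differed slightly; the helper name is illustrative):

    #include "llvm/CodeGen/MachineInstrBuilder.h"
    #include "llvm/CodeGen/TargetRegisterInfo.h"
    #include "llvm/IR/CallingConv.h"

    using namespace llvm;

    static bool addMaskAndTest(MachineInstrBuilder &MIB,
                               const TargetRegisterInfo &TRI,
                               const MachineFunction &MF, MCRegister Reg) {
      // One operand carries the whole bit vector: bit R set means
      // "register R is preserved across the call".
      const uint32_t *Mask = TRI.getCallPreservedMask(MF, CallingConv::C);
      MIB.addRegMask(Mask);
      // Passes test clobbers with a single bit test instead of
      // scanning dozens of implicit-def operands.
      return MachineOperand::clobbersPhysReg(Mask, Reg);
    }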
This saves a lot of memory: replacing the 43 imp-def operands with a
single mask operand leaves a net 42 fewer operands at 32 bytes each,
or 1344 bytes per call instruction. It also speeds up building call
instructions because those 43 imp-def operands no longer need to be
added to use-def lists (and removed, shifted, and re-added for every
explicit call operand).
Passes like LiveVariables, LiveIntervals, RAGreedy, PEI, and
BranchFolding are significantly faster because they can deal with call
clobbers in bulk.
Overall, clang -O2 is between 0% and 8% faster, uniformly distributed
depending on call density in the compiled code. Debug builds using
clang -O0 are 0% - 3% faster.
I have verified that this patch doesn't change the assembly generated
for the LLVM nightly test suite when building with -disable-copyprop
and -disable-branch-fold.
Branch folding behaves slightly differently in a few cases because call
instructions have different hash values now.
Copy propagation flushes its data structures when it crosses a register
mask operand. This causes it to leave a few dead copies behind, on the
order of 20 instructions across the entire nightly test suite, including
SPEC. Fixing this properly would require the pass to use different data
structures.
llvm-svn: 150638
The existing framework for post-RA scheduling is library local, and we
want to keep it that way. Soon we will have a more general
MachineScheduler interface; at that time, various bits will be exposed
to targets. In the meantime, the VLIWPacketizer wants to use
ScheduleDAGInstrs directly, so it needs to be wrapped in a PIMPL to
avoid exposing it to the target interface.
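A minimal PIMPL sketch (plain C++ with illustrative names, not the
actual header):

    // Public header: the interface never mentions the library-local
    // ScheduleDAGInstrs type.
    class VLIWPacketizerList {
      class Impl;  // defined in the .cpp; owns the ScheduleDAGInstrs
      Impl *pImpl;

    public:
      VLIWPacketizerList();
      ~VLIWPacketizerList(); // defined where Impl is complete
    };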
llvm-svn: 150633
partial types for contexts and forward decls while allowing us to
complete types later on for debug purposes.
This piggy-backs on the metadata replacement and RAUW changes for
temporary nodes and takes advantage of the incremental support I added
earlier. It allows us, if we so decide, to limit adding methods and
variables to structures in order to limit the amount of debug
information emitted into a .o file.
The caching is a bit complicated, though, so any thoughts on untangling
it are welcome.
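A hedged sketch with today's metadata API (the 2012 API differed):

    #include "llvm/IR/Metadata.h"

    using namespace llvm;

    void completeType(LLVMContext &Ctx, MDNode *FullDefinition) {
      // A temporary node stands in for the forward-declared type;
      // other debug info nodes may reference it in the meantime.
      TempMDTuple Fwd = MDTuple::getTemporary(Ctx, {});
      // Once the full definition is built, RAUW swaps it in everywhere.
      Fwd->replaceAllUsesWith(FullDefinition);
    }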
llvm-svn: 150631
method. This allows the target lowering code to not have to deal with MDNodes.
Also, avoid leaking memory like a sieve by not creating a global variable for
the image info section, but just emitting the code directly.
llvm-svn: 150624
CorrectTypo now snoops in other namespaces when the identifier being
corrected is already qualified (i.e. a valid CXXScopeSpec is passed to
CorrectTypo), and it ranks synthesized namespace qualifiers relative to
the existing qualifier. Support for disambiguating the string
representation of synthesized namespace qualifiers has also been added
(the change to test/Parser/cxx-using-directive.cpp is an example of an
ambiguous relative qualifier).
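A hypothetical sketch of the behavior (not the actual test case):

    namespace A { namespace B { struct Widget {}; } }
    namespace B {}

    // Even though the reference is already qualified with "B::", typo
    // correction may now look into sibling namespaces and rank the
    // synthesized qualifier "A::B::" against the written one.
    B::Widget w; // fix-it can suggest the unambiguous 'A::B::Widget'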
llvm-svn: 150622
Accomplished by moving the body of StringRef::edit_distance into
a separate function that accepts two ArrayRefs, and making
StringRef::edit_distance a wrapper around the new function.
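The refactored shape, roughly as it exists in today's tree
(llvm/ADT/edit_distance.h); the wrapper below is illustrative:

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/ADT/edit_distance.h"

    using namespace llvm;

    unsigned distance(StringRef A, StringRef B) {
      // StringRef::edit_distance forwards to the generic
      // ArrayRef-based implementation in the same way.
      return ComputeEditDistance(ArrayRef<char>(A.data(), A.size()),
                                 ArrayRef<char>(B.data(), B.size()));
    }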
llvm-svn: 150621