Commit Graph

138853 Commits

Author SHA1 Message Date
Greg Clayton bc41bf70bd Make sure that the lldb globals:
    lldb.target
    lldb.process
    lldb.thread
    lldb.frame

are initialized to at least contain empty lldb classes in case Python code that uses them gets imported.

llvm-svn: 169750
2012-12-10 19:18:23 +00:00
Jim Grosbach f7cbcf2648 CMake: Don't run 'git svn' if there is no .git/svn directory.
If the local checkout does not have 'git svn' references set up, don't try
to use 'git svn' for version information.

llvm-svn: 169749
2012-12-10 19:03:37 +00:00
Eli Bendersky c01322ee90 This patch adds statistics for other non-DWARF fragments emitted by
the assembler. This is useful for knowing how the numbers add up, since
the Align fragments in particular account for a non-trivial portion of
the emitted fragments (especially at -O0, which sets relax-all).

llvm-svn: 169747
2012-12-10 18:59:39 +00:00
Daniel Jasper a4396865d0 Add formatting tests for pointer template parameters.
Fix spacing before ",".

llvm-svn: 169746
2012-12-10 18:59:13 +00:00
Hal Finkel 66859ae0f6 Use GetUnderlyingObjects in misched
misched used GetUnderlyingObject to break false load/store
dependencies, and the -enable-aa-sched-mi feature similarly relied on
GetUnderlyingObject to ensure that alias analysis can be used safely.
Unfortunately, GetUnderlyingObject does not recurse through phi nodes, and so
(especially due to LSR) all of these mechanisms failed for
induction-variable-dependent loads and stores inside loops.

This change replaces uses of GetUnderlyingObject with GetUnderlyingObjects
(which will recurse through phi and select instructions) in misched.

Andy reviewed, tested and simplified this patch; Thanks!

llvm-svn: 169744
2012-12-10 18:49:16 +00:00
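For illustration, a minimal sketch of the kind of check this commit describes, written against the GetUnderlyingObjects/isIdentifiedObject declarations of that era's ValueTracking.h and AliasAnalysis.h (later revisions rename and re-parameterize them); the helper name is invented, and this is not the actual misched change:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/AliasAnalysis.h"   // isIdentifiedObject
  #include "llvm/Analysis/ValueTracking.h"   // GetUnderlyingObjects

  using namespace llvm;

  // True only if every object the address may point at is an identified
  // allocation (alloca, global, noalias argument, ...), which is what the
  // dependence-breaking and AA-based scheduling queries need to know.
  static bool hasOnlyIdentifiedUnderlyingObjects(Value *Addr) {
    SmallVector<Value *, 4> Objs;
    // Unlike GetUnderlyingObject, this recurses through PHIs and selects,
    // which is what the induction-variable-dependent cases from LSR need.
    GetUnderlyingObjects(Addr, Objs);
    for (Value *V : Objs)
      if (!isIdentifiedObject(V))
        return false;
    return !Objs.empty();
  }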
Michael Ilseman 43e17ad1d0 Remove unneeded typedef and volatile
llvm-svn: 169743
2012-12-10 18:48:08 +00:00
Sean Silva aab278fbba Fix funky copy-pasted grammatical error.
PR14343

llvm-svn: 169742
2012-12-10 18:37:26 +00:00
Chandler Carruth 867c7bff9a Revert "Make '-mtune=x86_64' assume fast unaligned memory accesses."
Accidental commit... git svn betrayed me. Sorry for the noise.

llvm-svn: 169741
2012-12-10 18:23:52 +00:00
Chandler Carruth 7eaa45c738 Make '-mtune=x86_64' assume fast unaligned memory accesses.
Summary:
Not all chips targeted by x86_64 have this feature, but a dramatically
increasing number do. Specifying a chip-specific tuning parameter will
continue to turn the feature on or off as appropriate for that
particular chip, but the generic flag should try to achieve the best
performance on the most widely available hardware. Today, the number of
chips with fast unaligned access dwarfs those without in the x86-64 space.

Note that this also brings LLVM's code generation for this '-march' flag
more in line with that of modern GCCs.

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D195

llvm-svn: 169740
2012-12-10 18:22:42 +00:00
Chandler Carruth 17f25c4e0d Fix a typo in my previous commit -- bloomfield is 0x1A not 0x2A.
Thanks to the PaX folks for noticing in review! We need some tests here,
any suggestions welcome...

llvm-svn: 169739
2012-12-10 18:22:40 +00:00
Alexander Kornienko 2ca766f32a Clang-format: error recovery for access specifiers
Reviewers: klimek

Reviewed By: klimek

CC: cfe-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D198

llvm-svn: 169738
2012-12-10 16:34:48 +00:00
Manuel Klimek dd1ff5ece0 Clarifying comments for the MatchFinder and matchesNames matcher.
llvm-svn: 169737
2012-12-10 16:13:06 +00:00
Alexander Potapenko af5a108ea8 [ASan] Typo fix in memcpy() and memmove() interceptors: ASAN_WRITE_RANGE and ASAN_READ_RANGE were swapped.
This was spotted by Anna Zaks (ganna@apple.com).

llvm-svn: 169736
2012-12-10 16:02:13 +00:00
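A small reproducer one could use to observe the difference, assuming a clang build with -fsanitize=address: the overflow is on the memcpy destination, so it should be reported as a WRITE; with the swapped annotations it would presumably have shown up as a READ.

  #include <cstdlib>
  #include <cstring>

  int main() {
    char *dst = static_cast<char *>(std::malloc(8));
    char src[16] = {};
    std::memcpy(dst, src, 16);  // heap-buffer-overflow writing past 'dst'
    std::free(dst);
    return 0;
  }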
Kostya Serebryany 13bdbe698e [asan] move FakeStack into a separate file
llvm-svn: 169734
2012-12-10 14:19:15 +00:00
Kostya Serebryany 14282a91d5 [asan] introduce asan_allocator2.cc, which will have the replacement for asan allocator (now, just a bit of boilerplate)
llvm-svn: 169733
2012-12-10 13:52:55 +00:00
Alexander Potapenko 1746f555ee Add a libsanitizer API __sanitizer_sandbox_on_notify(void* reserved), which should be used by
client programs to notify the tools that sandboxing is about to be turned on.

llvm-svn: 169732
2012-12-10 13:10:40 +00:00
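A hypothetical usage sketch, assuming the program is linked against one of the sanitizer runtimes so the symbol is provided; the prototype is taken from the commit message, and the helper name below is invented:

  // Provided by the sanitizer runtime (later trees also declare it in
  // <sanitizer/common_interface_defs.h>).
  extern "C" void __sanitizer_sandbox_on_notify(void *reserved);

  void enter_sandbox() {
    // Tell the tool that sandboxing is about to start so it can do whatever
    // preparation it needs while access is still unrestricted.
    __sanitizer_sandbox_on_notify(nullptr);  // 'reserved' is unused for now
    // ... now actually enable the sandbox (seccomp, chroot, dropped
    // privileges, etc.) ...
  }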
Chandler Carruth 0f58558101 Address a FIXME and update the fast unaligned memory feature for newer
Intel chips.

The model number rules were determined by inspecting Intel's
documentation for their newer chip model numbers. My understanding is
that all of the newer Intel chips have fast unaligned memory access, but
if anyone is concerned about a particular chip, just shout.

No tests updated; it's not clear we have dedicated tests for the chips'
various features, but if anyone would like tests (or can point me at
some existing ones), I'm happy to oblige.

llvm-svn: 169730
2012-12-10 09:18:44 +00:00
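For context, a purely illustrative sketch of how family/model are decoded from CPUID leaf 1 when gating a feature such as fast unaligned memory access; the 0x1A (Bloomfield) value comes from the typo-fix commit above, but the threshold check is an invented simplification, not LLVM's per-model table:

  #include <cpuid.h>
  #include <cstdio>

  int main() {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
      return 1;
    unsigned family = (eax >> 8) & 0xF;   // extended-family handling omitted
    unsigned model = (eax >> 4) & 0xF;
    if (family == 0x6 || family == 0xF)   // extended model bits apply here
      model += ((eax >> 16) & 0xF) << 4;
    // Invented cutoff, for illustration only.
    bool maybeFastUnaligned = (family == 0x6 && model >= 0x1A);
    std::printf("family 0x%X model 0x%X fast-unaligned(guess)=%d\n",
                family, model, (int)maybeFastUnaligned);
    return 0;
  }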
Dmitry Vyukov 1046031e1a tsan: exclude flaky test
llvm-svn: 169729
2012-12-10 09:16:17 +00:00
Chandler Carruth e41e7b7901 Add a new visitor for walking the uses of a pointer value.
This visitor provides infrastructure for recursively traversing the
use-graph of a pointer-producing instruction like an alloca or a malloc.
It maintains a worklist of uses to visit, so it can handle very deep
recursions. It automatically looks through instructions which simply
translate one pointer to another (bitcasts and GEPs). It tracks the
offset relative to the original pointer as long as that offset remains
constant and exposes it during the visit as an APInt offset. Finally, it
performs conservative escape analysis.

However, currently it has some limitations that should be addressed
going forward:
1) It doesn't handle vectors of pointers.
2) It doesn't provide a cheaper visitor when the constant offset
   tracking isn't needed.
3) It doesn't support non-instruction pointer values.

The current functionality is exactly what is required to implement the
SROA pointer-use visitors in terms of this one, rather than in terms of
their own ad-hoc base visitor, which was always very poorly specified.
SROA has been converted to use this, and the code there deleted which
this utility now provides.

Technically speaking, using this new visitor allows SROA to handle a few
more cases than it previously did. It is now more aggressive in ignoring
chains of instructions which look like they would defeat SROA, but in
fact do not because they never result in a read or write of memory.
While this is "neat", it shouldn't be interesting for real programs as
any such chains should have been removed by other passes long before we
get to SROA. As a consequence, I've not added any tests for these
features -- it shouldn't be part of SROA's contract to perform such
heroics.

The goal is to extend the functionality of this visitor going forward,
and re-use it from passes like ASan that can benefit from doing
a detailed walk of the uses of a pointer.

Thanks to Ben Kramer for the code review rounds and lots of help
reviewing and debugging this patch.

llvm-svn: 169728
2012-12-10 08:28:39 +00:00
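To make the description above concrete, here is a compressed, hand-rolled sketch of the same technique (explicit worklist, looking through bitcasts and GEPs, constant-offset tracking as an APInt, conservative escape on anything unrecognized). It uses current LLVM C++ APIs and invented function names; it is not the PtrUseVisitor code itself:

  #include "llvm/ADT/APInt.h"
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Operator.h"

  using namespace llvm;

  // Returns false if the pointer, or anything derived from it through
  // bitcasts/GEPs, escapes to a user we do not understand.
  static bool walkPointerUses(Instruction &Root, const DataLayout &DL) {
    struct Item { Use *U; APInt Offset; bool OffsetKnown; };
    unsigned BW = DL.getIndexTypeSizeInBits(Root.getType());
    SmallVector<Item, 8> Worklist;
    SmallPtrSet<Use *, 8> Visited;
    for (Use &U : Root.uses())
      if (Visited.insert(&U).second)
        Worklist.push_back({&U, APInt(BW, 0), true});

    while (!Worklist.empty()) {
      Item It = Worklist.pop_back_val();   // LIFO, like the SROA visitors
      User *Usr = It.U->getUser();

      if (isa<LoadInst>(Usr))
        continue;                          // a read at a (possibly known) offset
      if (auto *SI = dyn_cast<StoreInst>(Usr)) {
        if (SI->getValueOperand() == It.U->get())
          return false;                    // the pointer itself is stored: escape
        continue;
      }
      if (isa<BitCastInst>(Usr) || isa<GetElementPtrInst>(Usr)) {
        APInt Off = It.Offset;
        bool Known = It.OffsetKnown;
        if (auto *GEP = dyn_cast<GEPOperator>(Usr))
          Known = Known && GEP->accumulateConstantOffset(DL, Off);
        for (Use &U : Usr->uses())
          if (Visited.insert(&U).second)
            Worklist.push_back({&U, Off, Known});
        continue;
      }
      return false;                        // conservative: unknown user escapes
    }
    return true;
  }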
Craig Topper d8005db486 Teach DAG combine to handle vector add/sub with vectors of all 0s.
llvm-svn: 169727
2012-12-10 08:12:29 +00:00
NAKAMURA Takumi f9573f1174 [CMake] TARGET_TRIPLE may be internal alias of LLVM_DEFAULT_TARGET_TRIPLE.
llvm-svn: 169726
2012-12-10 07:14:29 +00:00
Manuel Klimek e792efd7ba Adding tests, since when I was asked whether this works I wasn't
100% sure.

llvm-svn: 169725
2012-12-10 07:08:53 +00:00
NAKAMURA Takumi 6b819c5fb1 [CMake] Update dependencies to intrinsics_gen corresponding to r169711.
llvm-svn: 169724
2012-12-10 05:27:15 +00:00
Michael J. Spencer 74f29afd58 [Driver] Add test.
llvm-svn: 169721
2012-12-10 02:53:10 +00:00
Bill Wendling 39d3809368 Revert to the old behavior until the linker can pass the export-dynamic option.
llvm-svn: 169720
2012-12-10 02:51:16 +00:00
Chandler Carruth e45f4658a3 Fix PR14548: SROA was crashing on a mixture of i1 and i8 loads and stores.
When SROA was evaluating a mixture of i1 and i8 loads and stores, in
just a particular case, it would tickle a latent bug where we compared
bits to bytes rather than bits to bits. As a consequence of the latent
bug, we would allow integers through which were not byte-size multiples,
a situation the later rewriting code was never intended to handle.

In release builds this could trigger all manner of oddities, but the
reported issue in PR14548 was forming invalid bitcast instructions.

The only downside of this fix is that it makes it more clear that SROA
in its current form is not capable of handling mixed i1 and i8 loads and
stores. Sometimes with the previous code this would work by luck, but
usually it would crash, so I'm not terribly worried. I'll watch the LNT
numbers just to be sure.

llvm-svn: 169719
2012-12-10 00:54:45 +00:00
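A hypothetical illustration of the class of bug (names invented, not SROA's actual code): the widths have to be compared in the same unit. With the buggy bits-to-bytes comparison, an i1 load (width 1 bit) over a 1-byte alloca slips through because 1 == 1; compared bits-to-bits it is correctly rejected because 1 != 8:

  #include <cstdint>

  // Buggy form: return IntWidthBits == AllocaSizeBytes;   // bits vs. bytes
  bool isWholeAllocaInteger(uint64_t IntWidthBits, uint64_t AllocaSizeBytes) {
    uint64_t AllocaSizeBits = AllocaSizeBytes * 8;
    // Comparing in one unit also rules out widths that are not byte-size
    // multiples, which the later rewriting code cannot handle.
    return IntWidthBits == AllocaSizeBits;
  }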
Michael J. Spencer 99b99d26bb [Driver] Properly handle -entry for X86 Linux.
llvm-svn: 169718
2012-12-09 23:56:37 +00:00
Michael J. Spencer 3825760550 [Driver] Add -### support for printing out the core command line.
llvm-svn: 169717
2012-12-09 23:56:26 +00:00
Michael J. Spencer 48ed572710 Remove left over debugging output.
llvm-svn: 169716
2012-12-09 23:56:03 +00:00
Michael J. Spencer c8b60a70be [Driver] Make the X86Linux target use X86 (not x64) and properly initialize WriterOptions.
llvm-svn: 169715
2012-12-09 23:55:52 +00:00
Dmitri Gribenko 38782b8e87 Documentation: convert ReleaseNotes.html to reST.
Patch by Anthony Mykhailenko with small fixes by me.

llvm-svn: 169714
2012-12-09 23:14:26 +00:00
Benjamin Kramer 7464efcac8 Unbreak the clang build after r169712.
llvm-svn: 169713
2012-12-09 21:58:24 +00:00
Michael Ilseman 65f1435a6f Reorganize FastMathFlags to be a wrapper around unsigned, and streamline some interfaces.
llvm-svn: 169712
2012-12-09 21:12:04 +00:00
Paul Redmond 2adb13c100 LoopVectorize: support vectorizing intrinsic calls
- Add a function to VectorTargetTransformInfo to query the cost of intrinsics.
- Vectorize trivially vectorizable intrinsic calls such as sin, cos, log, etc.

Reviewed by: Nadav

llvm-svn: 169711
2012-12-09 20:42:17 +00:00
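A hypothetical sketch of the shape of that decision, with all names invented for the example (the real hook is the new query on VectorTargetTransformInfo): only an allowlist of intrinsics is treated as trivially vectorizable, and the target-reported cost of one widened call is weighed against the width-many scalar calls it replaces:

  #include <string>
  #include <unordered_set>

  // Invented stand-in for the target cost query described in the commit.
  static unsigned targetIntrinsicCost(const std::string &, unsigned VectorWidth) {
    return VectorWidth;  // a real target would return a tuned number
  }

  bool shouldVectorizeCall(const std::string &Name, unsigned VectorWidth,
                           unsigned ScalarCost) {
    // The commit names sin, cos and log; the real list is longer.
    static const std::unordered_set<std::string> Trivial = {"sin", "cos", "log"};
    if (!Trivial.count(Name))
      return false;  // unknown side effects or no vector lowering: stay scalar
    // Profitable only if one vector call beats VectorWidth scalar calls.
    return targetIntrinsicCost(Name, VectorWidth) < ScalarCost * VectorWidth;
  }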
Michael Ilseman 6d2ffa1858 Have the bitcode reader/writer just use FPMathOperator's fast math enum directly
llvm-svn: 169710
2012-12-09 20:23:16 +00:00
Paul Redmond f7cd6b391a test commit.
llvm-svn: 169709
2012-12-09 19:46:31 +00:00
Aaron Ballman 02df2e0872 Virtual method overrides can no longer have mismatched calling conventions. This fixes PR14339.
llvm-svn: 169705
2012-12-09 17:45:41 +00:00
Chris Lattner c11b91ec18 So many people have touched this, it doesn't make sense to ascribe authorship anymore.
llvm-svn: 169704
2012-12-09 16:55:39 +00:00
Jakub Staszak 8432185ec2 Use m_OneUse pattern instead of hasOneUse() method.
No functionality change.

llvm-svn: 169703
2012-12-09 16:06:44 +00:00
Sean Silva 18dc538690 docs: Convert GarbageCollection.html to reST
Patch by Alexander Zinenko!

llvm-svn: 169702
2012-12-09 15:52:47 +00:00
Jakub Staszak 538e3861e3 Remove trailing spaces.
llvm-svn: 169701
2012-12-09 15:37:46 +00:00
Dmitri Gribenko 2247085c37 Documentation: HowToReleaseLLVM.rst: remove trailing whitespace.
llvm-svn: 169700
2012-12-09 15:33:26 +00:00
Dmitri Gribenko 0c79cd6107 Documentation: don't create TOCs manually.
Thanks to Sean Silva for pointing out!

llvm-svn: 169699
2012-12-09 15:29:56 +00:00
Chandler Carruth 93ff2447ec Switch SROA to pop Uses off the back of its visitors' queues.
This will more closely match the behavior of the new PtrUseVisitor that
I am adding. Hopefully this will not change the actual behavior in any
way, but making the processing order more similar should help in debugging.

llvm-svn: 169697
2012-12-09 11:56:01 +00:00
Chandler Carruth 6bb5c06e17 Add a triple to this test. It depends on little-endian bitfield layout.
llvm-svn: 169696
2012-12-09 10:39:18 +00:00
Benjamin Kramer 054ab17c17 Drop the address space limit for tests in the makefile build.
The limit seems to break newer Python versions (see PR13598), so just drop it for now.
Eventually lit should learn to set limits for its children instead of a global
limit in the makefile.

If some PPC bots fail after this change: That's a good thing, they actually run
clang tests now.

llvm-svn: 169695
2012-12-09 10:34:22 +00:00
Chandler Carruth ed72cdc010 Cleanup and fix an assert that was mis-firing.
Note that there is no test suite update. This was found by a couple of
tests failing when the test suite was run on a powerpc64 host (thanks
Roman!). The tests don't specify a triple, which might seem surprising
for a codegen test. But in fact, these tests don't even inspect their
output. Not at all. I could add a bunch of triples to these tests so
that we'd get the test coverage for normal builds, but really someone
needs to go through and add actual *tests* to these tests. =[ The ones
in question are:

  test/CodeGen/bitfield-init.c
  test/CodeGen/union.c

llvm-svn: 169694
2012-12-09 10:33:27 +00:00
Chandler Carruth e8f7a95941 Add a test case that I've been using to clarify the bitfield layout for
both LE and BE targets.

AFAICT, Clang gets this correct for PPC64. I've compared it to GCC 4.8
output for PPC64 (thanks Roman!) and, to my limited ability to read Power
assembly, it looks functionally equivalent. It would be really good to
fill in the assertions on this test case for x86-32, PPC32, ARM, etc.,
but I've reached the limit of my time and energy... Hopefully other
folks can chip in as it would be good to have this in place to test any
subsequent changes.

To those who care about PPC64 performance, a side note: there is some
*obnoxiously* bad code generated for these test cases. It would be worth
someone's time to sit down and teach the PPC backend to pattern match
these IR constructs better. It appears that things like '(shr %foo,
<imm>)' turn into 'rldicl R, R, 64-<imm>, <imm>' or some such. They
don't even get combined with other 'rldicl' instructions *immediately
adjacent*. I'll add a couple of these patterns to the README, but
I think it would be better to look at all the patterns produced by this
and other bitfield access code, and systematically build up a collection
of patterns that efficiently reduce them to the minimal code.

llvm-svn: 169693
2012-12-09 10:08:22 +00:00
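For readers unfamiliar with why endianness matters here, a tiny stand-alone example of the kind of construct such a test exercises (an illustration, not the committed test): the same declaration is packed starting from the least significant bits of the storage unit on little-endian targets and from the most significant bits on big-endian ones, so the load/shift/mask sequences the backend emits differ between x86-64 and PPC64:

  #include <cstdio>

  struct S {
    unsigned a : 3;
    unsigned b : 5;
    unsigned c : 24;
  };

  // Compiles to a load plus a shift-and-mask whose constants depend on the
  // target's bit-field layout.
  unsigned read_b(const S &s) { return s.b; }

  int main() {
    S s{1, 2, 3};
    std::printf("%u %u %u\n", s.a + 0u, read_b(s), s.c + 0u);
    return 0;
  }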
Craig Topper 5ea3bdd75b Remove extra blank line.
llvm-svn: 169692
2012-12-09 08:20:52 +00:00
Chandler Carruth fd8eca202f Fix the bitfield record layout in codegen for big endian targets.
This was an egregious bug due to the several iterations of refactorings
that took place. Size no longer meant what it originally did by the time
I finished, but this line of code never got updated. Unfortunately we
had essentially zero tests for this in the regression test suite. =[

I've added a PPC64 run over the bitfield test case I've been primarily
using. I'm still looking at adding more tests and making sure this is
the *correct* bitfield access code on PPC64 linux, but it looks pretty
close to me, and it is *worlds* better than before this patch as it no
longer asserts! =] More commits to follow with at least additional tests
and maybe more fixes.

Sorry for the long breakage due to this....

llvm-svn: 169691
2012-12-09 07:26:04 +00:00