Commit Graph

874 Commits

Alexey Bataev c71a4099cf [OPENMP] Preserve alignment of the original variables for the captured references.
The patch makes codegen preserve the alignment of shared variables captured and used in OpenMP regions.

llvm-svn: 247401
2015-09-11 10:29:41 +00:00
Alexey Bataev 2377fe95c6 [OPENMP] Outlined function for parallel and other regions with list of captured variables.
Currently all variables used in OpenMP regions are captured into a record and passed to outlined functions in this record. This may result in poor performance because of overly complex analysis in later optimization passes. This patch instead emits outlined functions for parallel-based regions with a list of captured variables, which removes at least 2*n GEPs, stores, and loads.
Codegen for task-based regions remains unchanged because the runtime requires that all captured variables be passed in the captured record.

llvm-svn: 247251
2015-09-10 08:12:02 +00:00
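
A rough illustration of the two outlining strategies described above (hypothetical names; the real outlined functions also take thread-id parameters and are not spelled like this):

  // Old scheme: captures bundled into a record, fields reached via GEP + load.
  struct CapturedRecord { int *a; double *b; };
  void outlined_record(void *captures) {
    CapturedRecord *c = static_cast<CapturedRecord *>(captures);
    *c->a += 1;     // GEP, load, use
    *c->b *= 2.0;   // GEP, load, use
  }

  // New scheme for parallel-based regions: captured variables passed as a list.
  void outlined_list(int *a, double *b) {
    *a += 1;
    *b *= 2.0;
  }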
Peter Collingbourne 2c7f7e31c4 CFI: Introduce -fsanitize=cfi-icall flag.
This flag causes the compiler to emit bit set entries for functions as well
as runtime bitset checks at indirect call sites. Depends on the new function
bitset mechanism.

Differential Revision: http://reviews.llvm.org/D11857

llvm-svn: 247238
2015-09-10 02:17:40 +00:00
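
A minimal sketch of the kind of indirect call the new check guards; the build line assumes the usual CFI requirements (LTO, hidden visibility) and is illustrative rather than an exact recipe:

  // indirect.cpp - e.g.: clang++ -flto -fvisibility=hidden -fsanitize=cfi-icall indirect.cpp
  #include <cstdio>

  static void greet() { std::puts("hello"); }

  int main() {
    void (*fp)() = greet;  // address-taken function of type void()
    fp();                  // indirect call site: checked against the bitset
                           // of address-taken functions with matching type
    return 0;
  }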
Peter Collingbourne ee381ffba4 CodeGen: Add CFI unrelated cast checks to the new pointer code path.
llvm-svn: 247105
2015-09-09 00:01:31 +00:00
Michael Zolotukhin 84df12375c Introduce __builtin_nontemporal_store and __builtin_nontemporal_load.
Summary:
Currently clang provides no general way to generate nontemporal loads/stores.
There are some architecture-specific builtins for doing so (e.g. on x86), but
there is no way to generate a non-temporal store on, e.g., AArch64. This patch adds
generic builtins which are expanded to a simple load/store with the '!nontemporal'
metadata attached in IR.

Differential Revision: http://reviews.llvm.org/D12313

llvm-svn: 247104
2015-09-08 23:52:33 +00:00
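
A small usage sketch of the new generic builtins (the copy loop and names are illustrative):

  // Streaming copy whose accesses carry !nontemporal metadata in IR.
  void stream_copy(float *dst, float *src, int n) {
    for (int i = 0; i < n; ++i) {
      float v = __builtin_nontemporal_load(&src[i]);  // load with !nontemporal
      __builtin_nontemporal_store(v, &dst[i]);        // store with !nontemporal
    }
  }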
John McCall 7f416cc426 Compute and preserve alignment more faithfully in IR-generation.
Introduce an Address type to bundle a pointer value with an
alignment.  Introduce APIs on CGBuilderTy to work with Address
values.  Change core APIs on CGF/CGM to traffic in Address where
appropriate.  Require alignments to be non-zero.  Update a ton
of code to compute and propagate alignment information.

As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment
helper function to CGF and made use of it in a number of places in
the expression emitter.

The end result is that we should now be significantly more correct
when performing operations on objects that are locally known to
be under-aligned.  Since alignment is not reliably tracked in the
type system, there are inherent limits to this, but at least we
are no longer confused by standard operations like derived-to-base
conversions and array-to-pointer decay.  I've also fixed a large
number of bugs where we were applying the complete-object alignment
to a pointer instead of the non-virtual alignment, although most of
these were hidden by the very conservative approach we took with
member alignment.

Also, because IRGen now reliably asserts on zero alignments, we
should no longer be subject to an absurd but frustrating recurring
bug where an incomplete type would report a zero alignment and then
we'd naively do an alignmentAtOffset on it and emit code using an
alignment equal to the largest power-of-two factor of the offset.

We should also now be emitting much more aggressive alignment
attributes in the presence of over-alignment.  In particular,
field access now uses alignmentAtOffset instead of min.

Several times in this patch, I had to change the existing
code-generation pattern in order to more effectively use
the Address APIs.  For the most part, this seems to be a strict
improvement, like doing pointer arithmetic with GEPs instead of
ptrtoint.  That said, I've tried very hard to not change semantics,
but it is likely that I've failed in a few places, for which I
apologize.

ABIArgInfo now always carries the assumed alignment of indirect and
indirect byval arguments.  In order to cut down on what was already
a dauntingly large patch, I changed the code to never set align
attributes in the IR on non-byval indirect arguments.  That is,
we still generate code which assumes that indirect arguments have
the given alignment, but we don't express this information to the
backend except where it's semantically required (i.e. on byvals).
This is likely a minor regression for those targets that did provide
this information, but it'll be trivial to add it back in a later
patch.

I partially punted on applying this work to CGBuiltin.  Please
do not add more uses of the CreateDefaultAligned{Load,Store}
APIs; they will be going away eventually.

llvm-svn: 246985
2015-09-08 08:05:57 +00:00
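
For readers unfamiliar with the change, the core of the new abstraction is roughly the following (a simplified sketch, not the actual clang::CodeGen::Address definition):

  #include "clang/AST/CharUnits.h"
  #include "llvm/IR/Value.h"
  #include <cassert>

  // Bundle a pointer with its known alignment so every load/store emitted
  // through it carries an explicit, non-zero alignment.
  class Address {
    llvm::Value *Pointer;
    clang::CharUnits Alignment;
  public:
    Address(llvm::Value *P, clang::CharUnits A) : Pointer(P), Alignment(A) {
      assert(!A.isZero() && "Address must have non-zero alignment");
    }
    llvm::Value *getPointer() const { return Pointer; }
    clang::CharUnits getAlignment() const { return Alignment; }
  };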
Alexey Bataev caacd53dde [OPENMP] Fix for http://llvm.org/PR24674: assertion failed and abort trap
Fix processing of shared variables with reference types in OpenMP constructs. Previously, if the variable was not marked in one of the private clauses, the reference to this variable was emitted incorrectly and caused an assertion later.

llvm-svn: 246846
2015-09-04 11:26:21 +00:00
Alexey Bataev d6fdc8b685 [OPENMP 4.0] Codegen for array sections.
Added codegen for array sections in the 'depend' clause of the 'task' directive. It emits two pointers, one for the beginning of the array section and another for its end. The size of the section is calculated as (end + 1 - start) * sizeof(basic_element_type).

llvm-svn: 246422
2015-08-31 07:32:19 +00:00
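
For context, an array section in a 'depend' clause looks like the following (illustrative; section syntax is [lower-bound : length]). The begin/end pointers described above would cover a[2] through a[6]:

  void producer(int *a) {
  #pragma omp task depend(out: a[2:5])   // 5 elements: a[2] .. a[6]
    {
      for (int i = 2; i < 7; ++i)
        a[i] = i;
    }
    // recorded dependence size: (end + 1 - begin) * sizeof(int)
  }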
Daniel Jasper ad5b7962c9 Revert "[OPENMP 4.0] Codegen for array sections."
The test is currently failing on bots:
http://lab.llvm.org:8080/green/job/clang-stage1-cmake-RA-incremental_check/12747/

llvm-svn: 246288
2015-08-28 08:42:22 +00:00
Alexey Bataev 117fb35cf7 [OPENMP 4.0] Codegen for array sections.
Added codegen for array sections in the 'depend' clause of the 'task' directive. It emits two pointers, one for the beginning of the array section and another for its end. The size of the section is calculated as (end + 1 - start) * sizeof(basic_element_type).

llvm-svn: 246278
2015-08-28 06:09:05 +00:00
Filipe Cabecinhas 7af183d841 Propagate SourceLocations through to get a Loc on float_cast_overflow
Summary:
float_cast_overflow is the only UBSan check without a source location attached.
This patch propagates SourceLocations where necessary to get them to the
EmitCheck() call.

Reviewers: rsmith, ABataev, rjmccall

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D11757

llvm-svn: 244568
2015-08-11 04:19:28 +00:00
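
For reference, this is the kind of code the check fires on; with the SourceLocations plumbed through, the runtime report can point at the cast itself (illustrative):

  // cast.cpp - e.g.: clang++ -fsanitize=float-cast-overflow cast.cpp
  #include <cstdio>

  int main() {
    double big = 1e100;
    int i = static_cast<int>(big);  // value does not fit in 'int'; check fires here
    std::printf("%d\n", i);
    return 0;
  }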
Benjamin Kramer 9938310ab3 [CodeGen] Simplify creation of shuffle masks.
No functional change intended.

llvm-svn: 243439
2015-07-28 16:25:32 +00:00
David Blaikie f05779e21c Pass an iterator range to EmitCallArgs
llvm-svn: 242824
2015-07-21 18:37:18 +00:00
David Blaikie 4ba525b727 Rely on default zero-arg value for IRBuilder::CreateCall calls to zero-arg functions
Patch by servuswiegehtz at yahoo.de

llvm-svn: 242168
2015-07-14 17:27:39 +00:00
Ulrich Weigand 03ce2a16bf Respect alignment of nested bitfields
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}

The "align 4" is actually wrong.  Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.

This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp.  Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.

Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).

In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage.  This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.

Differential Revision: http://reviews.llvm.org/D11034

llvm-svn: 241916
2015-07-10 17:30:00 +00:00
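
The resulting computation is essentially "alignment at an offset": given the best-known alignment of the enclosing lvalue and the StorageOffset of the bitfield's storage, the usable alignment is the largest power of two dividing both. A standalone sketch of that arithmetic (not the actual CharUnits::alignmentAtOffset implementation):

  #include <algorithm>
  #include <cstdint>

  // Largest power-of-two alignment usable for an access 'offset' bytes past
  // a pointer aligned to 'align' bytes ('align' is a power of two).
  uint64_t alignmentAtOffset(uint64_t align, uint64_t offset) {
    if (offset == 0)
      return align;
    uint64_t offsetFactor = offset & (~offset + 1);  // largest power of two dividing offset
    return std::min(align, offsetFactor);
  }
  // For the packed example above: alignment 1 of gbitfield, StorageOffset 1
  // -> alignmentAtOffset(1, 1) == 1, not the old "align 4".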
Akira Hatanaka 85365cd72a Attach attribute "trap-func-name" to call sites of llvm.trap and llvm.debugtrap.
This is needed to use clang's command line option "-ftrap-function" for LTO and
enable changing the trap function name on a per-call-site basis.

rdar://problem/21225723

Differential Revision: http://reviews.llvm.org/D10831

llvm-svn: 241306
2015-07-02 22:15:41 +00:00
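
In practice a trap emitted through the builtin now gets the name attached at the call site rather than per-module, so different translation units can use different trap functions under LTO. A minimal sketch (the trap function name here is made up):

  // trap.cpp - e.g.: clang++ -ftrap-function=my_trap -c trap.cpp
  void fail_fast(bool bad) {
    if (bad)
      __builtin_trap();  // lowered to llvm.trap; the call site carries
                         // the "trap-func-name"="my_trap" attribute
  }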
Alexander Kornienko ab9db51042 Revert r240270 ("Fixed/added namespace ending comments using clang-tidy").
llvm-svn: 240353
2015-06-22 23:07:51 +00:00
Alexander Kornienko 3d9d929e42 Fixed/added namespace ending comments using clang-tidy. NFC
The patch is generated using this command:

  $ tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
      -checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
      work/llvm/tools/clang

To reduce churn, not touching namespaces spanning less than 10 lines.

llvm-svn: 240270
2015-06-22 09:47:44 +00:00
Peter Collingbourne 6708c4a176 Implement diagnostic mode for -fsanitize=cfi*, -fsanitize=cfi-diag.
This causes programs compiled with this flag to print a diagnostic when
a control flow integrity check fails instead of aborting. Diagnostics are
printed using UBSan's runtime library.

The main motivation of this feature over -fsanitize=vptr is fidelity with
the -fsanitize=cfi implementation: the diagnostics are printed under exactly
the same conditions as those which would cause -fsanitize=cfi to abort the
program. This means that the same restrictions apply regarding compiling
all translation units with -fsanitize=cfi, cross-DSO virtual calls are
forbidden, etc.

Differential Revision: http://reviews.llvm.org/D10268

llvm-svn: 240109
2015-06-19 01:51:54 +00:00
Peter Collingbourne 9881b78b53 Introduce -fsanitize-trap= flag.
This flag controls whether a given sanitizer traps upon detecting
an error. It currently only supports UBSan. The existing flag
-fsanitize-undefined-trap-on-error has been made an alias of
-fsanitize-trap=undefined.

This change also cleans up some awkward behavior around the combination
of -fsanitize-trap=undefined and -fsanitize=undefined. Previously we
would reject command lines containing the combination of these two flags,
as -fsanitize=vptr is not compatible with trapping. This required the
creation of -fsanitize=undefined-trap, which excluded -fsanitize=vptr
(and -fsanitize=function, but this seems like an oversight).

Now, -fsanitize=undefined is an alias for -fsanitize=undefined-trap,
and if -fsanitize-trap=undefined is specified, we treat -fsanitize=vptr
as an "unsupported" flag, which means that we error out if the flag is
specified explicitly, but implicitly disable it if the flag was implied
by -fsanitize=undefined.

Differential Revision: http://reviews.llvm.org/D10464

llvm-svn: 240105
2015-06-18 23:59:22 +00:00
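
Example invocations reflecting the behaviour described above (a sketch, following the command style used elsewhere in this log):

  # Trap on any UBSan failure (replaces -fsanitize-undefined-trap-on-error):
  $ clang++ -fsanitize=undefined -fsanitize-trap=undefined a.cpp

  # Explicitly requesting vptr together with trapping is rejected:
  $ clang++ -fsanitize=undefined,vptr -fsanitize-trap=undefined a.cpp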
David Blaikie 43f9bb7371 API update for streamlining of IRBuilder::CreateCall to just use ArrayRef/initializer_list+braced init
llvm-svn: 237625
2015-05-18 22:14:03 +00:00
Peter Collingbourne 3eea677f3a Unify sanitizer kind representation between the driver and the rest of the compiler.
No functional change.

Differential Revision: http://reviews.llvm.org/D9618

llvm-svn: 237055
2015-05-11 21:39:14 +00:00
Justin Bogner 66242d6c5e InstrProf: Stop using RegionCounter outside of CodeGenPGO (NFC)
The RegionCounter type does a lot of legwork, but most of it is only
meaningful within the implementation of CodeGenPGO. The uses elsewhere
in CodeGen generally just want to increment or read counters, so do
that directly.

llvm-svn: 235664
2015-04-23 23:06:47 +00:00
Benjamin Kramer f3e67de85a [CodeGen] Do a more principled fix for PR23165, always use the inner type.
We were still using the MaterializeTemporaryExpr's type to check if the
transform is legal. Always use the inner Expr type.

llvm-svn: 234543
2015-04-09 22:50:07 +00:00
Benjamin Kramer b2b81439a3 [CodeGen] When promoting a reference temporary to a global use the inner type to fold it.
The MaterializeTemporaryExpr can have a different type than the inner
expression, miscompiling the constant. PR23165.

llvm-svn: 234499
2015-04-09 16:09:29 +00:00
David Blaikie 2e80428dc5 clang-format my last commit
(sorry, keep forgetting that)

llvm-svn: 234129
2015-04-05 22:47:07 +00:00
David Blaikie 1ed728c499 [opaque pointer type] More GEP API migrations
Looks like the VTable code in particular will need some work to pass
around the pointee type explicitly.

llvm-svn: 234128
2015-04-05 22:45:47 +00:00
David Blaikie 17ea266bac [opaque pointer type] More GEP API migrations
llvm-svn: 234109
2015-04-04 21:07:17 +00:00
Nick Lewycky 84146bee6c Fix the LLVM type used when lowering initializer list reference temporaries to global variables. Reapplies r232454 with fix for PR22940.
llvm-svn: 232579
2015-03-18 01:06:24 +00:00
Hans Wennborg f9d865b059 Revert r232454 and r232456: "Fix the LLVM type used when lowering initializer list reference temporaries to global variables."
This caused PR22940.

llvm-svn: 232496
2015-03-17 16:38:58 +00:00
Nick Lewycky cf191adaf5 Fix the LLVM type used when lowering initializer list reference temporaries to global variables.
llvm-svn: 232454
2015-03-17 02:21:31 +00:00
Peter Collingbourne d2926c91d5 Implement bad cast checks using control flow integrity information.
This scheme checks that pointer and lvalue casts are made to an object of
the correct dynamic type; that is, the dynamic type of the object must be
a derived class of the pointee type of the cast. The checks are currently
only introduced where the class being cast to is polymorphic.

Differential Revision: http://reviews.llvm.org/D8312

llvm-svn: 232241
2015-03-14 02:42:25 +00:00
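
Roughly the kind of bug this catches (a sketch; driver details simplified, CFI requires LTO and hidden visibility):

  // badcast.cpp - e.g.: clang++ -flto -fvisibility=hidden -fsanitize=cfi badcast.cpp
  struct A { virtual ~A() {} };
  struct B : A { int extra = 0; };

  int main() {
    A a;
    A *p = &a;
    B *q = static_cast<B *>(p);  // p's dynamic type is not derived from B:
                                 // the cast check fails here
    return q->extra;
  }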
Benjamin Kramer f8b86964ca Reapply r231508 "CodeGen: Emit constant temporaries into read-only globals."
I disabled putting the new global into the same COMDAT as the function for now.
There's a fundamental problem when we inline references to the global but still
have the global in a COMDAT linked to the inlined function. Since this is only
an optimization there may be other versions of the COMDAT around that are
missing the new global and hell breaks loose at link time.

I hope the chromium build doesn't break this time :)

llvm-svn: 231564
2015-03-07 13:37:13 +00:00
Hans Wennborg cd8f011157 Revert r231508 "CodeGen: Emit constant temporaries into read-only globals."
This broke the Chromium build. Links were failing with messages like:

obj/dbus/libdbus_test_support.a(obj/dbus/dbus_test_support.mock_object_proxy.o):../../dbus/mock_object_proxy.cc:function dbus::MockObjectProxy::Detach(): warning: relocation refers to discarded section
/usr/local/google/work/chromium/src/third_party/binutils/Linux_x64/Release/bin/ld.gold: error: treating warnings as errors

llvm-svn: 231541
2015-03-07 00:46:19 +00:00
Benjamin Kramer 3d8aa5c77d CodeGen: Emit constant temporaries into read-only globals.
Instead of creating a copy on the stack just stash them in a private
constant global. This saves both the copying overhead and the stack
space, and gives the optimizer more room to constant fold.

This tries to make array temporaries more similar to regular arrays; they
can't use the same logic because a temporary has no VarDecl to be bound to,
so we roll our own version here.

The original use case for this optimization was code like
  for (int i : {1, 2, 3, 4, 5, 6, 7, 8, 10})
    foo(i);
where without this patch (assuming that the loop is not unrolled) we
would alloca an array on the stack, copy the 10 values over and
iterate over that. With this patch we put the array in .text and use it
directly. Apart from that case this helps on virtually any passing of
a constant std::initializer_list as a function argument.

Differential Revision: http://reviews.llvm.org/D8034

llvm-svn: 231508
2015-03-06 20:00:03 +00:00
David Majnemer ced8bdf74a Sema: Parenthesized bound destructor member expressions can be called
We would wrongly reject (a.~A)() in both the destructor and
pseudo-destructor cases.

This fixes PR22668.

llvm-svn: 230512
2015-02-25 17:36:15 +00:00
Alexey Bataev 3eff5f46d7 [OPENMP] Rename methods of OpenMPRuntime class. NFC.
llvm-svn: 230470
2015-02-25 08:32:46 +00:00
David Majnemer eeaec26534 Try to unbreak the Hexagon bot
llvm-svn: 229219
2015-02-14 02:18:14 +00:00
David Majnemer ce27e42d47 CodeGen: _Atomic(_Complex) shouldn't crash
We could be a little kinder if we did a compare-exchange loop instead of
an atomic-load/store pair.

llvm-svn: 229212
2015-02-14 01:48:17 +00:00
David Majnemer a5b195a1dc Revert "Revert r229082 for a bit, it caused PR22577."
This reverts commit r229123.  It was a red herring, the bug was present
without r229082.

llvm-svn: 229205
2015-02-14 01:35:12 +00:00
David Majnemer 6866a3c6f4 CodeGen: Correctly convert atomic bool from i8 to i1
Bools are a little tricky, they are i8 in memory and must be coerced
back to i1 before further operations can be performed on them.

This fixes PR22577.

llvm-svn: 229204
2015-02-14 01:35:07 +00:00
Nico Weber 7ce96b853d Revert r229082 for a bit, it caused PR22577.
llvm-svn: 229123
2015-02-13 16:27:00 +00:00
David Majnemer abc482effc MS ABI: Implement /volatile:ms
The /volatile:ms semantics turn volatile loads and stores into atomic
acquire and release operations.  This distinction is important because
volatile memory operations do not form a happens-before relationship
with non-atomic memory.  This means that a volatile store is not
sufficient for implementing a mutex unlock routine.

Differential Revision: http://reviews.llvm.org/D7580

llvm-svn: 229082
2015-02-13 07:55:47 +00:00
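
A sketch of why the distinction matters: under /volatile:ms the flag store below becomes an atomic release store, so the payload write cannot be reordered past it; under plain ISO volatile semantics no such ordering exists (names are illustrative):

  // e.g. built with clang-cl /volatile:ms
  int payload;                 // written before "unlocking"
  volatile int lock_flag = 1;  // 1 = locked, 0 = unlocked

  void unlock_after_write(int value) {
    payload = value;  // ordinary store
    lock_flag = 0;    // /volatile:ms: release store; payload is visible
                      // to an acquiring reader before the flag clears
  }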
David Blaikie 9f7ae2c948 DebugInfo: Attribute calls to overloaded operators with the operator, not the start of the whole expression
llvm-svn: 227028
2015-01-25 01:25:37 +00:00
David Blaikie 9b47966615 DebugInfo: Use the preferred location rather than the start location for expression line info
This causes things like assignment to refer to the '=' rather than the
LHS when attributing the store instruction, for example.

There were essentially 3 options for this:

* The beginning of an expression (this was the behavior prior to this
  commit). This meant that stepping through subexpressions would bounce
  around from subexpressions back to the start of the outer expression,
  etc. (e.g.: x + y + z would go x, y, x, z, x (the repeated 'x's would be
  where the actual addition occurred)).

* The end of an expression. This seems to be what GCC does /mostly/, and
  certainly does this for function calls. This has the advantage that
  progress is always 'forwards' (never jumping backwards - except for
  independent subexpressions if they're evaluated in interesting orders,
  etc). "x + y + z" would go "x y z" with the additions occurring at y
  and z after the respective loads.
  The problem with this is that the user would still have to think
  fairly hard about precedence to realize which subexpression is being
  evaluated or which operator overload is being called in, say, an asan
  backtrace.

* The preferred location or 'exprloc'. In this case you get sort of what
  you'd expect, though it's a bit confusing in its own way due to going
  'backwards'. In this case the locations would be: "x y + z +" in
  lovely postfix arithmetic order. But this does mean that if the op+
  were an operator overload, say, and in a backtrace, the backtrace will
  point to the exact '+' that's being called, not to the end of one of
  its operands.

(actually the operator overload case doesn't work yet for other reasons,
but that's being fixed - but this at least gets scalar/complex
assignments and other plain operators right)

llvm-svn: 227027
2015-01-25 01:19:10 +00:00
David Blaikie 2d321fb79d DebugInfo: Correct the line location of geps on array accesses
llvm-svn: 227023
2015-01-24 23:35:17 +00:00
David Blaikie a1fd099575 DebugInfo: Remove outdated comment. Column info is no longer needed to differentiate inline callsites.
llvm-svn: 226955
2015-01-23 22:48:27 +00:00
David Blaikie 835afb205f DebugInfo: Remove forced column-info workaround for inlined calls
This workaround was to provide unique call sites to ensure LLVM's inline
debug info handling would properly unique two calls to the same function
on the same line. Instead, this has now been fixed in LLVM (r226736) and
the workaround here can be removed.

Originally committed in r176895, but this isn't a straight revert due to
all the changes since then. I just searched for anything ForcedColumn*
related and removed them.

We could test this, but it didn't strike me as terribly valuable: once
we're no longer adding this workaround, everything just works as expected
and it's no longer a special case to test for.

llvm-svn: 226738
2015-01-21 23:08:17 +00:00
Chandler Carruth 0d9593ddec [cleanup] Re-sort *all* #include lines with llvm/utils/sort_includes.py
Sorry for the noise, I managed to miss a bunch of recent regressions of
include orderings here. This should actually sort all the includes for
Clang. Again, no functionality changed, this is just a mechanical
cleanup that I try to run periodically to keep the #include lines as
regular as possible across the project.

llvm-svn: 225979
2015-01-14 11:29:14 +00:00
David Blaikie 66e4197f07 Reapply r225000 (reverted in r225555): DebugInfo: Generalize debug info location handling (and follow-up commits).
Several pieces of code were relying on implicit debug location setting
which usually led to incorrect line information anyway. So I've fixed
those (in r225955 and r225845) separately which should pave the way for
this commit to be cleanly reapplied.

The reason these implicit dependencies resulted in crashes with this
patch is that the debug location would no longer implicitly leak from
one place to another, but be set back to invalid. Once a call with
no/invalid location was emitted, if that call was ever inlined it could
produce invalid debugloc chains and assert during LLVM's codegen.

There may be further cases of such bugs in this patch - they're hard to
flush out with regression testing, so I'll keep an eye out for reports
and investigate/fix them ASAP if they come up.

Original commit message:

Reapply "DebugInfo: Generalize debug info location handling"

Originally committed in r224385 and reverted in r224441 due to concerns
this change might've introduced a crash. Turns out this change fixes the
crash introduced by one of my earlier more specific location handling
changes (those specific fixes are reverted by this patch, in favor of
the more general solution).

Recommitted in r224941 and reverted in r224970 after it caused a crash
when building compiler-rt. Looks to be due to this change zeroing out
the debug location when emitting default arguments (which were meant to
inherit their outer expression's location) thus creating call
instructions without locations - these create problems for inlining and
must not be created. That is fixed and tested in this version of the
change.

Original commit message:

This is a more scalable (fixed in mostly one place, rather than many
places that will need constant improvement/maintenance) solution to
several commits I've made recently to increase source fidelity for
subexpressions.

This resetting had to be done at the DebugLoc level (not the
SourceLocation level) to preserve scoping information (if the resetting
was done with CGDebugInfo::EmitLocation, it would've caused the tail end
of an expression's codegen to end up in a potentially different scope
than the start, even though it was at the same source location). The
drawback to this is that it might leave CGDebugInfo out of sync. Ideally
CGDebugInfo shouldn't have a duplicate sense of the current
SourceLocation, but for now it seems it does... - I don't think I'm
going to tackle removing that just now.

I expect this'll probably cause some more buildbot fallout & I'll
investigate that as it comes up.

Also, these sorts of improvements might be starting to show a weakness/bug
in LLVM's line table handling: we don't correctly emit is_stmt for
statements, we just put it on every line table entry. This means one
statement split over multiple lines appears as multiple 'statements' and
two statements on one line (without column info) are treated as one
statement.

I don't think we have any IR representation of statements that would
help us distinguish these cases and identify the beginning of each
statement - so that might be something we need to add (possibly to the
lexical scope chain - a scope for each statement). This does cause some
problems for GDB and possibly other DWARF consumers.

llvm-svn: 225956
2015-01-14 07:38:27 +00:00