Commit Graph

2926 Commits

David Majnemer 8fa8c384d2 Basic: Update clang to reflect changes made to LLVM datalayout
We now give x86-64 COFF targets a different mangling code; update clang
to use it.

llvm-svn: 232571
2015-03-17 23:55:00 +00:00
Joerg Sonnenberger 1d3b431c98 Global inline assembler blocks are merged before parsing, so no specific
location data is available. If pragma handling wants to look up the
position, it finds the LLVM buffer and wants to compare it with the
special built-in buffer, failing badly. Extend the special handling
of the built-in buffer to also check for the inline asm buffer. Expect
only a single asm buffer. Sort it between the built-in buffers and the
normal file buffers.

Fixes the assert part of PR 22576.

llvm-svn: 232389
2015-03-16 17:54:54 +00:00
Ahmed Bougacha 5a4aa42a59 Add a bunch of missing "CHECK" colons in tests. NFC.
llvm-svn: 232237
2015-03-14 01:10:19 +00:00
Robert Lougher 7607b918a7 Make tests more robust. No functional change.
In preparation for recommit of revision 232190, change tests so that they
are resilient to operands being commuted by the reassociate pass.

llvm-svn: 232206
2015-03-13 20:35:45 +00:00
David Blaikie bdf40a62a7 Test case updates for explicit type parameter to the gep operator
llvm-svn: 232187
2015-03-13 18:21:46 +00:00
David Blaikie 81e96f8192 Update test case to make it easier to automatically port to typeless pointer gep operator changes coming soon
llvm-svn: 232185
2015-03-13 18:21:11 +00:00
Sanjay Patel 0a6da5de55 [X86, AVX2] Replace inserti128 and extracti128 intrinsics with generic shuffles
This is nearly identical to the v*f128_si256 parts of r231792 and r232052.

AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.

This should complete the front end fixes for insert/extract128 intrinsics. 
Corresponding LLVM patch to follow.

llvm-svn: 232109
2015-03-12 21:54:24 +00:00
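
A minimal C sketch of the generic-shuffle approach described in this commit: extracting
the upper 128-bit half of a 256-bit integer vector with __builtin_shufflevector instead
of a target-specific builtin. The helper and typedef names are made up for illustration;
this is not the actual avx2intrin.h definition.

    #include <immintrin.h>

    typedef long long v4di_sketch __attribute__((vector_size(32)));
    typedef long long v2di_sketch __attribute__((vector_size(16)));

    /* Hypothetical helper: lanes 2 and 3 of the 4 x i64 vector are the
       upper 128 bits, so a two-element shuffle extracts them. */
    static inline __m128i extract_hi128_sketch(__m256i a) {
        v4di_sketch va = (v4di_sketch)a;
        v2di_sketch hi = __builtin_shufflevector(va, va, 2, 3);
        return (__m128i)hi;
    }

Compiled with -mavx2, a shuffle like this is expected to lower to vextracti128, which is
the point of replacing the dedicated intrinsics with generic shuffles.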
Sanjay Patel 0c351aba25 [X86, AVX] replace vextractf128 intrinsics with generic shuffles
This is very much like D8088 (checked in at r231792).

Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.

Differential Revision: http://reviews.llvm.org/D8275

llvm-svn: 232052
2015-03-12 15:50:36 +00:00
Joerg Sonnenberger 27173288c2 Under duress, move check for target support of __builtin_setjmp/
__builtin_longjmp to Sema as requested by John McCall.

llvm-svn: 231986
2015-03-11 23:46:32 +00:00
Hal Finkel 0d0a1a53e3 [PowerPC] ABI support for the QPX vector instruction set
Support for the QPX vector instruction set, used on the IBM BG/Q supercomputer,
has recently been added to the LLVM PowerPC backend. This vector instruction
set requires some ABI modifications because the ABI on the BG/Q expects
<4 x double> vectors to be provided with 32-byte stack alignment, and to be
handled as native vector types (similar to how Altivec vectors are handled on
mainline PPC systems). I've named this ABI variant elfv1-qpx, have made this
the default ABI when QPX is supported, and have updated the ABI handling code
to provide QPX vectors with the correct stack alignment and associated
register-assignment logic.

llvm-svn: 231960
2015-03-11 19:14:15 +00:00
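
As a hedged illustration of the type involved, a <4 x double> vector can be declared at
the C level with a GCC-style vector attribute; under the elfv1-qpx ABI described above,
such a value is expected to get 32-byte stack alignment. The typedef name is made up.

    /* Hypothetical QPX-style vector type: 4 doubles, 32 bytes. */
    typedef double qpx_v4d_sketch __attribute__((vector_size(32)));

    _Static_assert(sizeof(qpx_v4d_sketch) == 32, "4 x double is 32 bytes");
    /* _Alignof(qpx_v4d_sketch) is normally 32 as well, matching the
       alignment the elfv1-qpx ABI expects for these arguments. */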
Kit Barton 8553bec911 Add builtins for the 64-bit vector integer arithmetic instructions added in POWER8.
These are the Clang-related changes for the instructions added to LLVM in http://reviews.llvm.org/D7959.

Phabricator review: http://reviews.llvm.org/D8041

llvm-svn: 231931
2015-03-11 15:57:19 +00:00
Rafael Espindola 3937738b81 Remove a bogus test.
This was using the driver to test LLVM.

I checked that disabling the code path that the test was testing causes
llvm tests to fail.

llvm-svn: 231895
2015-03-11 00:28:59 +00:00
Sanjay Patel 7f6aa52e93 [X86, AVX] Replace vinsertf128 intrinsics with generic shuffles.
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.

This is the sibling patch for the LLVM half of this change:
http://reviews.llvm.org/D8086

Differential Revision: http://reviews.llvm.org/D8088

llvm-svn: 231792
2015-03-10 15:19:26 +00:00
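
A minimal C sketch of the same idea for the insert direction, assuming a hypothetical
helper name (this is not the actual header code): the 128-bit operand is first widened
with one shuffle and then blended into the destination with a second one.

    #include <immintrin.h>

    /* Hypothetical helper: insert a 128-bit float vector into the low half
       of a 256-bit vector using only generic shuffles. */
    static inline __m256 insert_lo128_sketch(__m256 a, __m128 b) {
        /* Widen b to 8 lanes; the upper four lanes are don't-care (-1). */
        __m256 wide = __builtin_shufflevector(b, b, 0, 1, 2, 3, -1, -1, -1, -1);
        /* Low four lanes from the widened b, high four lanes from a. */
        return __builtin_shufflevector(wide, a, 0, 1, 2, 3, 12, 13, 14, 15);
    }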
NAKAMURA Takumi f65421ad9f Suppress a couple of tests, clang/test/CodeGen/catch-undef-behavior.c and one, for -Asserts for now. They were introduced in r231711.
llvm-svn: 231717
2015-03-09 22:32:03 +00:00
Alexey Samsonov 21d2dda3d2 [UBSan] Split -fsanitize=shift into -fsanitize=shift-base and -fsanitize=shift-exponent.
This is a recommit of r231150, reverted in r231409. It turns out
that the -fsanitize=shift-base check implementation only works if the
shift exponent is valid; otherwise it contains undefined behavior
itself.

Make sure we check that the exponent is valid before we proceed to
check the base. Make sure that we actually report invalid values
of the base or exponent if -fsanitize=shift-base or
-fsanitize=shift-exponent is specified, respectively.

llvm-svn: 231711
2015-03-09 21:50:19 +00:00
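
A short example of the two classes of shift UB these options separate, compiled with
-fsanitize=shift-exponent or -fsanitize=shift-base respectively (the function names are
made up; the comments assume a 32-bit int):

    /* Invalid exponent: caught by -fsanitize=shift-exponent. */
    int shift_by(int x, int n) {
        return x << n;            /* UB if n < 0 or n >= 32 */
    }

    /* Base overflow: caught by -fsanitize=shift-base. */
    int shift_base_overflow(int x) {
        return x << 24;           /* UB if the result does not fit in int, e.g. x = 1 << 10 */
    }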
Tim Northover d157e19562 ARM: use ABI-specified alignment for byval parameters.
When passing a type with large alignment byval, we were specifying the type's
alignment rather than the alignment that the backend is actually capable of
producing (ABIAlign).

This would be OK (if odd) assuming the backend dealt with it properly;
unfortunately it doesn't, and trying to pass types with "byval align 16" can
cause it to set the frame pointer incorrectly and trash the stack during the
prologue. I'll be fixing that in a separate patch, but Clang should still be
emitting IR that's as close to its intent as possible.

rdar://20059039

llvm-svn: 231706
2015-03-09 21:40:42 +00:00
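
A hedged example of the kind of argument involved (the struct and function names are
made up): a type with 16-byte alignment passed by value on an AAPCS target could
previously be given "byval align 16" in the IR, the alignment this commit stops
requesting when the backend cannot actually produce it.

    /* Over-aligned aggregate passed by value. */
    struct over_aligned {
        int data[8];
    } __attribute__((aligned(16)));

    void callee(struct over_aligned s);

    void caller(void) {
        struct over_aligned s = { { 0 } };
        callee(s);   /* typically lowered to a byval argument on ARM */
    }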
Alexey Samsonov 48a9db034a Revert "[UBSan] Split -fsanitize=shift into -fsanitize=shift-base and -fsanitize=shift-exponent."
It's not that easy. If we're only checking -fsanitize=shift-base we
still need to verify that the exponent has a sane value; otherwise the
UBSan-inserted checks for the base will contain undefined behavior
themselves.

llvm-svn: 231409
2015-03-05 21:57:35 +00:00
Rick Foos e9c019a7a6 Temporary XFAILs for Hexagon
Summary: Temporary XFAIL's until patches done.

Reviewers: echristo, adasgupt, colinl

Reviewed By: colinl

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8044

llvm-svn: 231318
2015-03-04 23:40:38 +00:00
Nemanja Ivanovic 55e757db4a Add Clang support for PPC cryptography builtins
Review: http://reviews.llvm.org/D7951

llvm-svn: 231291
2015-03-04 21:48:22 +00:00
Reid Kleckner 533bd17268 Fix test/CodeGen/builtins.c for platforms that don't lower sjlj
Opt in Win64 to supporting sjlj lowering. We have the backend lowering,
so I think this was just an oversight because WinX86_64TargetCodeGenInfo
doesn't inherit from X86_64TargetCodeGenInfo.

llvm-svn: 231280
2015-03-04 19:24:16 +00:00
Daniel Jasper 8dc4a2a3be Prevent test from writing files.
llvm-svn: 231247
2015-03-04 15:02:17 +00:00
Joerg Sonnenberger 244a577754 Adjust the changes from r230255 to bail out if the backend can't lower
__builtin_setjmp/__builtin_longjmp and don't fall back to the libc
functions.

llvm-svn: 231245
2015-03-04 14:25:35 +00:00
Duncan P. N. Exon Smith bcf02b44de DebugInfo: Remove useless test
This test doesn't provide any value (it just checks that the frontend
produces exactly one compile unit), and it certainly isn't doing what
the comment says.  Noticed via IRC review of my update to it in r231083.

llvm-svn: 231152
2015-03-03 22:18:24 +00:00
Alexey Samsonov 783b8174ad [UBSan] Split -fsanitize=shift into -fsanitize=shift-base and -fsanitize=shift-exponent.
-fsanitize=shift is now a group that includes both these checks, so
existing users should not be affected.

This change introduces two new UBSan kinds that sanitize only the left-hand
side and right-hand side of a shift operation. In practice, an invalid
exponent value (negative or too large) tends to cause more portability
problems, including inconsistencies between different compilers, crashes,
and inadequate results on non-x86 architectures, etc. That is,
-fsanitize=shift-exponent failures should generally be addressed first.

As a bonus, this change simplifies CodeGen implementation for emitting left
shift (separate checks for base and exponent are now merged by the
existing generic logic in EmitCheck()), and LLVM IR for these checks
(the number of basic blocks is reduced).

llvm-svn: 231150
2015-03-03 22:15:35 +00:00
Duncan P. N. Exon Smith f04be1fb3a DebugInfo: Move new hierarchy into place (clang)
Update testcases for LLVM change in r231082 to use the new debug info
hierarchy.

llvm-svn: 231083
2015-03-03 17:25:55 +00:00
Juergen Ributzka 9baa03fc07 Lower _mm256_broadcastsi128_si256 directly to a vector shuffle.
Originally we were using the same GCC builtins to lower this AVX2 vector
intrinsic. Instead we will now lower it directly to a vector shuffle.

This will not only allow LLVM to generate better code, but it will also allow us
to remove the GCC intrinsics.

Reviewed by Andrea

This is related to rdar://problem/18742778.

llvm-svn: 231081
2015-03-03 17:22:53 +00:00
David Blaikie a953f2825b Update Clang tests to handle explicitly typed load changes in LLVM.
llvm-svn: 230795
2015-02-27 21:19:58 +00:00
David Blaikie 218b783192 Update Clang tests to handle explicitly typed gep changes in LLVM.
llvm-svn: 230783
2015-02-27 19:18:17 +00:00
Nico Weber 6bdd9b0608 Reland __leave tests (r230717 and r230720, reverted in r230740).
The only change is that line 266 changed from
    // CHECK:  br label %[[except]]
to
    // CHECK:  br label %[[except:[^ ]*]]

llvm-svn: 230764
2015-02-27 16:40:43 +00:00
Daniel Jasper 7fe82ad80b Revert r230717 (and subsequent r230720).
The tests keeps failing on build bots:
http://lab.llvm.org:8080/green/job/clang-stage2-configure-Rlto_check/2355/testReport/junit/Clang/CodeGen/exceptions_seh_leave_c/

llvm-svn: 230740
2015-02-27 08:16:32 +00:00
Craig Topper b1bc5cf4bc [X86] Remove pblendw and pblendd builtins that aren't being used by the intrinsic headers.
llvm-svn: 230738
2015-02-27 06:54:25 +00:00
Alexey Bataev b832926176 [OPENMP] Codegen for "#pragma omp atomic write"
For a global register lvalue, use a regular store through the global register.
For a simple lvalue, use a simple atomic store.
For bitfields, vector elements, and extended vector elements, the original value of the whole storage (for vector elements) or of some aligned value (for bitfields) is read atomically, the part of that value corresponding to the given lvalue is modified, and an atomic compare-and-exchange operation is then used to try to write the modified value back (the write succeeds only if the storage was not modified in the meantime).
Also, the changes in this patch fix a bug in '#pragma omp atomic read' applied to extended vector elements.
Differential Revision: http://reviews.llvm.org/D7369

llvm-svn: 230736
2015-02-27 06:33:30 +00:00
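
A hedged C11 sketch of the compare-and-exchange pattern that the bitfield case
conceptually lowers to (plain atomics rather than the emitted IR; it assumes the field
occupies the low 3 bits of a 32-bit storage unit, and the pointer cast is for
illustration only):

    #include <stdatomic.h>
    #include <stdint.h>

    struct S { unsigned a : 3; unsigned rest : 29; };

    void atomic_write_bitfield(struct S *s, unsigned newval) {
        /* The whole aligned storage unit holding the bitfield. */
        _Atomic uint32_t *word = (_Atomic uint32_t *)s;
        uint32_t oldval = atomic_load(word);
        uint32_t desired;
        do {
            /* Splice the new 3-bit field value into the old word. */
            desired = (oldval & ~0x7u) | (newval & 0x7u);
        } while (!atomic_compare_exchange_weak(word, &oldval, desired));
    }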
Nico Weber 5339a52c9a Add last missing __leave test.
llvm-svn: 230720
2015-02-27 02:26:14 +00:00
Nico Weber 497bf5587e Add another __leave test.
llvm-svn: 230717
2015-02-27 01:58:08 +00:00
Nico Weber ff62a6a0b7 Don't crash on leaving nested __finally blocks through an EH edge.
The __finally emission block tries to be clever by removing unused continuation
edges if there's an unconditional jump out of the __finally block. With
exception edges, the EH continuation edge isn't always unused though and we'd
crash in a few places.

Just don't be clever. That makes the IR for __finally blocks a bit longer in
some cases (hence small and behavior-preserving changes to existing tests), but
it makes no difference in general and it fixes the last crash from PR22553.

http://reviews.llvm.org/D7918

llvm-svn: 230697
2015-02-26 22:34:33 +00:00
Petar Jovanovic d55ae6ba37 Add support for generating MIPS legacy NaN
Currently, the NaN values emitted for MIPS architectures do not cover
non-IEEE754-2008 compliant case. This change fixes the issue.

Patch by Vladimir Radosavljevic.

Differential Revision: http://reviews.llvm.org/D7882

llvm-svn: 230653
2015-02-26 18:19:22 +00:00
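
For background (an assumption about the encodings, not taken from this commit): IEEE
754-2008 marks a quiet NaN with the top fraction bit set, while legacy MIPS uses the
opposite convention, so the default quiet float NaN is typically 0x7fc00000 in 2008 mode
but 0x7fbfffff in legacy mode. A small check:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = __builtin_nanf("");     /* default quiet NaN */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        printf("quiet NaN bits: 0x%08x\n", (unsigned)bits);
        return 0;
    }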
Craig Topper ac0d58bc4c [X86] Remove the blendps/blendpd builtins. They aren't used by the intrinsic headers. We use appropriate shuffle vector instead.
llvm-svn: 230616
2015-02-26 08:09:05 +00:00
Nico Weber d9b8bd6b86 Make __leave test pass in -Asserts builds.
llvm-svn: 230514
2015-02-25 17:44:04 +00:00
Nico Weber e68b9f3e0a Reland r230460 with a test fix for -Asserts builds.
Original CL description:
Produce less broken basic block sequences for __finally blocks.

The way cleanups (such as PerformSEHFinally) get emitted is that codegen
generates some initialization code, then calls the cleanup's Emit() with the
insertion point set to a good place, then the cleanup is supposed to emit its
stuff, and then codegen might tack in a jump or similar to where the insertion
point is after the cleanup.

The PerformSEHFinally cleanup tries to just stash away the block it's supposed
to codegen into, and then does codegen later, into that stashed block.  However,
after codegen'ing the __finally block, it used to set the insertion point to
the finally's continuation block (where the __finally cleanup goes when its body
is completed after regular, non-exceptional control flow).  That's not correct,
as that block can (and generally does) already end in a jump.  Instead,
remember the insertion point that was current before the __finally got emitted,
and restore that.

Fixes two of the crashes in PR22553.

llvm-svn: 230503
2015-02-25 16:25:00 +00:00
Daniel Jasper cd94c40b10 Revert "Produce less broken basic block sequences for __finally blocks."
The test is broken on buildbots:
http://lab.llvm.org:8080/green/job/clang-stage2-configure-Rlto_check/2279/

This reverts commit adda738b6dc533c42db5f5f5b31344098a3aba7d.

llvm-svn: 230472
2015-02-25 10:07:14 +00:00
Nico Weber 795bd2d411 Produce less broken basic block sequences for __finally blocks.
The way cleanups (such as PerformSEHFinally) get emitted is that codegen
generates some initialization code, then calls the cleanup's Emit() with the
insertion point set to a good place, then the cleanup is supposed to emit its
stuff, and then codegen might tack in a jump or similar to where the insertion
point is after the cleanup.

The PerformSEHFinally cleanup tries to just stash away the block it's supposed
to codegen into, and then does codegen later, into that stashed block.  However,
after codegen'ing the __finally block, it used to set the insertion point to
the finally's continuation block (where the __finally cleanup goes when its body
is completed after regular, non-exceptional control flow).  That's not correct,
as that block can (and generally does) already end in a jump.  Instead,
remember the insertion point that was current before the __finally got emitted,
and restore that.

Fixes two of the crashes in PR22553.

llvm-svn: 230460
2015-02-25 04:05:18 +00:00
Adrian Prantl cbc368c5b5 Revert "Wrap clang module files in a Mach-O, ELF, or COFF container."
llvm-svn: 230454
2015-02-25 02:44:04 +00:00
Adrian Prantl 8bf7af3de8 Wrap clang module files in a Mach-O, ELF, or COFF container.
This is a necessary prerequisite for debugging with modules.
The .pcm files become containers that hold the serialized AST which allows
us to store debug information in the module file that can be shared by all
object files that were built importing the module.

This reapplies r230044 with a fixed configure+make build and updated
dependencies and testcase requirements. Over the last iteration this
version adds
- missing target requirements for testcases that specify an x86 triple,
- a missing clangCodeGen.a dependency to libClang.a in the make build.

rdar://problem/19104245

llvm-svn: 230423
2015-02-25 01:31:45 +00:00
Tim Northover bc784d1caa ARM: Simplify PCS handling.
The backend should now be able to handle all AAPCS rules based on argument
type, which means Clang no longer has to duplicate the register-counting logic
and the CodeGen can be significantly simplified.

llvm-svn: 230349
2015-02-24 17:22:40 +00:00
Michael Kuperstein 4f818708a8 [WinX86_64 ABI] Treat C99 _Complex as a struct
MSVC does not support C99 _Complex.
ICC, however, does support it on windows x86_64, and treats it, for purposes of parameter passing, as equivalent to a struct containing two fields (for the real and imaginary part). 

Differential Revision: http://reviews.llvm.org/D7825

llvm-svn: 230315
2015-02-24 09:35:58 +00:00
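
A hedged illustration of the equivalence being described (the struct and function names
are made up): for parameter passing under this rule, a _Complex double argument is
handled like a two-field struct.

    #include <complex.h>

    struct complex_double_equiv {
        double re;
        double im;
    };

    /* Under the rule above, these two parameters are passed the same way. */
    void takes_complex(double _Complex z);
    void takes_struct(struct complex_double_equiv z);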
Joerg Sonnenberger 096feeb741 Only lower __builtin_setjmp / __builtin_longjmp to
llvm.eh.sjlj.setjmp / llvm.eh.sjlj.longjmp, if the backend is known to
support them outside the Exception Handling context. The default
handling in LLVM codegen doesn't work and will create incorrect code.
The ARM backend, on the other hand, will assert if the intrinsics are
used.

llvm-svn: 230255
2015-02-23 20:23:47 +00:00
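
A hedged usage example of the builtins in question (not from this commit):
__builtin_setjmp takes a buffer of five pointer-sized words, and the second argument to
__builtin_longjmp must be the constant 1.

    #include <stdio.h>

    static void *jmp_buffer[5];   /* five pointer-sized words */

    static void bail(void) {
        __builtin_longjmp(jmp_buffer, 1);
    }

    int main(void) {
        if (__builtin_setjmp(jmp_buffer)) {
            puts("longjmp taken");
            return 0;
        }
        bail();
        return 1;   /* not reached */
    }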
Rafael Espindola 6b07a1c6ee Add -funique-section-names and -fno-unique-section-names options.
For now -funique-section-names is the default, so no change in default behavior.

The total .o size in a build of llvm and clang goes from 241687775 to 230649031
bytes if -fno-unique-section-names is used.

llvm-svn: 230031
2015-02-20 18:08:57 +00:00
Filipe Cabecinhas 54a2ba8b76 [Headers] Add tests for _mm256_insert_epi64 and fix its definition
Summary:
The definition for _mm256_insert_epi64 was taking an int, which would get
truncated before being inserted in the vector.

Original patch by Joshua Magee!

Reviewers: bruno, craig.topper

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D7179

llvm-svn: 229811
2015-02-19 03:02:33 +00:00
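
A hedged sketch of the shape of the fix (a hypothetical helper, not the actual
avxintrin.h definition): the value parameter must be a 64-bit type so it is not
truncated to int before the element is written.

    #include <immintrin.h>

    typedef long long v4di_insert_sketch __attribute__((vector_size(32)));

    static inline __m256i insert_epi64_sketch(__m256i a, long long value, int index) {
        v4di_insert_sketch v = (v4di_insert_sketch)a;
        v[index & 3] = value;   /* with an 'int' parameter this would have been truncated */
        return (__m256i)v;
    }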
Daniel Sanders 933f0a04d3 Fix test/CodeGen/atomic_ops.c failure on clang-cmake-mips builder (and others).
Not all targets generate 'store atomic' instructions for
'_Atomic(_Complex int)'. Some targets use the __atomic_store builtin instead. 

This commit makes the test accept either one.

llvm-svn: 229676
2015-02-18 15:08:37 +00:00
David Majnemer 5927e7b681 CodeGen: Relax a FileCheck line for SystemZ
llvm-svn: 229617
2015-02-18 02:28:15 +00:00