Commit Graph

1604 Commits

Author · SHA1 · Message · Date
Nick Desaulniers ef1d4bec89 [Clang][CGM] style cleanups NFC
Summary:
Forked from:
https://reviews.llvm.org/D80242

Use the getter for access to DebugInfo consistently.
Use break in switch in CodeGenModule::EmitTopLevelDecl consistently.

Reviewers: dblaikie

Reviewed By: dblaikie

Subscribers: cfe-commits, srhines

Tags: #clang

Differential Revision: https://reviews.llvm.org/D80840
2020-06-01 09:33:08 -07:00
James Y Knight aca3d067ef Fix Darwin 'constinit thread_local' variables.
Unlike other platforms using ItaniumCXXABI, Darwin does not allow the
creation of a thread-wrapper function for a variable in the TU of
users. Because of this, it can set the linkage of the thread-local
symbol to internal, with the assumption that no TUs other than the one
defining the variable will need it.

However, constinit thread_local variables do not require the use of
the thread-wrapper call, so users reference the variable
directly. Thus, it must not be converted to internal, or users will
get a link failure.

This was a regression introduced by the optimization in
00223827a9.

Differential Revision: https://reviews.llvm.org/D80417
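
A minimal C++20 sketch of the situation described above (file and variable names are illustrative, not taken from the patch):

    // tls_def.cpp -- defining TU; the symbol must keep external linkage on Darwin.
    constinit thread_local int tls_counter = 0;

    // tls_use.cpp -- using TU; because the declaration is constinit, no
    // thread-wrapper call is emitted and the symbol is referenced directly,
    // so internal linkage on the definition would cause a link failure.
    extern constinit thread_local int tls_counter;
    int read_counter() { return tls_counter; }
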
2020-05-27 11:59:30 -04:00
Fangrui Song 9d55e4ee13 Make explicit -fno-semantic-interposition (in -fpic mode) infer dso_local
-fno-semantic-interposition is currently the CC1 default. (The opposite
disables some interprocedural optimizations.) However, it does not infer
dso_local: on most targets accesses to ExternalLinkage functions/variables
defined in the current module still need PLT/GOT.

This patch makes explicit -fno-semantic-interposition infer dso_local,
so that PLT/GOT can be eliminated if targets implement local aliases
for AsmPrinter::getSymbolPreferLocal (currently only x86).

Currently we check whether the module flag "SemanticInterposition" is 0.
If yes, infer dso_local. In the future, we can infer dso_local unless
"SemanticInterposition" is 1: frontends other than clang will also
benefit from the optimization if they don't bother setting the flag.
(There will be risks if they do want ELF interposition: they need to set
"SemanticInterposition" to 1.)
2020-05-25 20:48:18 -07:00
Eli Friedman 62f3ef2b53 [CGCall] Annotate references with "align" attribute.
If we're going to assume references are dereferenceable, we should also
assume they're aligned: otherwise, we can't actually dereference them.

See also D80072.

Differential Revision: https://reviews.llvm.org/D80166
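
A small C++ sketch of what the added attribute means for a reference parameter; the IR in the comment is approximate:

    double add_one(const double &x) { return x + 1.0; }
    // Roughly expected parameter attributes after this change:
    //   define double @_Z7add_oneRKd(double* align 8 dereferenceable(8) %x)
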
2020-05-19 20:21:30 -07:00
George Burgess IV 94908088a8 [CodeGen] fix inline builtin-related breakage from D78162
In cases where we have multiple decls of an inline builtin, we may need
to go hunting for the one with a definition when setting function
attributes.

An additional test-case was provided on
https://github.com/ClangBuiltLinux/linux/issues/979
2020-04-16 11:54:10 -07:00
Richard Smith bab6df86ae Rework how UuidAttr, CXXUuidofExpr, and GUID template arguments and constants are represented.
Summary:
Previously, we treated CXXUuidofExpr as quite a special case: it was the
only kind of expression that could be a canonical template argument, it
could be a constant lvalue base object, and so on. In addition, we
represented the UUID value as a string, whose source form we did not
preserve faithfully, and that we partially parsed in multiple different
places.

With this patch, we create an MSGuidDecl object to represent the
implicit object of type 'struct _GUID' created by a UuidAttr. Each
UuidAttr holds a pointer to its 'struct _GUID' and its original
(as-written) UUID string. A non-value-dependent CXXUuidofExpr behaves
like a DeclRefExpr denoting that MSGuidDecl object. We cache an APValue
representation of the GUID on the MSGuidDecl and use it from constant
evaluation where needed.

This allows removing a lot of the special-case logic to handle these
expressions. Unfortunately, many parts of Clang assume there are only
a couple of interesting kinds of ValueDecl, so the total amount of
special-case logic is not really reduced very much.

This fixes a few bugs and issues:
 * PR38490: we now support reading from GUID objects returned from
   __uuidof during constant evaluation.
 * Our Itanium mangling for a non-instantiation-dependent template
   argument involving __uuidof no longer depends on which CXXUuidofExpr
   template argument we happened to see first.
 * We now predeclare ::_GUID, and permit use of __uuidof without
   any header inclusion, better matching MSVC's behavior. We do not
   predefine ::__s_GUID, though; that seems like a step too far.
 * Our IR representation for GUID constants now uses the correct IR type
   wherever possible. We will still fall back to using the
      {i32, i16, i16, [8 x i8]}
   layout if a definition of struct _GUID is not available. This is not
   ideal: in principle the two layouts could have different padding.

Reviewers: rnk, jdoerfert

Subscribers: arphaman, cfe-commits, aeubanks

Tags: #clang

Differential Revision: https://reviews.llvm.org/D78171
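
A sketch of the user-visible effect under -fms-extensions; the uuid value and the simplified _GUID definition below are illustrative:

    struct _GUID {                       // ::_GUID is predeclared; a definition like
      unsigned int   Data1;              // this (simplified) lets members be read
      unsigned short Data2, Data3;
      unsigned char  Data4[8];
    };
    struct __declspec(uuid("01234567-89ab-cdef-0123-456789abcdef")) Widget {};
    constexpr const _GUID &g = __uuidof(Widget);   // behaves like a DeclRefExpr to an MSGuidDecl
    static_assert(g.Data1 == 0x01234567);          // PR38490: readable in constant evaluation
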
2020-04-15 12:20:42 -07:00
George Burgess IV 2dd17ff081 [CodeGen] only add nobuiltin to inline builtins if we'll emit them
There are some inline builtin definitions that we can't emit
(isTriviallyRecursive & callers go into why). Marking these
nobuiltin is only useful if we actually emit the body, so don't mark
these as such unless we _do_ plan on emitting that.

This suboptimality was encountered in Linux (see some discussion on
D71082, and https://github.com/ClangBuiltLinux/linux/issues/979).

Differential Revision: https://reviews.llvm.org/D78162
2020-04-15 11:05:22 -07:00
George Burgess IV 91c8c74180 [CodeGen] clarify a comment; NFC
Prompted by discussion on https://reviews.llvm.org/D78148.
2020-04-14 14:33:01 -07:00
Ayke van Laethem cfc002714a
[AVR] Support aliases in non-zero address space
This fixes code like the following on AVR:

void foo(void) {
}
void bar(void) __attribute__((alias("foo")));

Code like this is present in compiler-rt, which I'm trying to build.

Differential Revision: https://reviews.llvm.org/D76182
2020-04-14 00:42:19 +02:00
Michael Liao c97be2c377 [hip] Remove `hip_pinned_shadow`.
Summary:
- Use `device_builtin_surface` and `device_builtin_texture` for
  surface/texture reference support. So far, both the host and device
  use the same reference type, which could be revised later when the
  interface/implementation is stabilized.

Reviewers: yaxunl

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D77583
2020-04-07 09:51:49 -04:00
Eli Friedman 83fa811e5b [clang][opaque pointers] Fix up a bunch of "getType()->getElementType()"
In contexts where we know an LLVM type is a pointer, there's generally
some simpler way to get the pointee type.
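
One typical instance of the rewrite, sketched from memory rather than copied from the patch:

    // Before: relies on the pointee type of the global's pointer type.
    //   llvm::Type *ValueTy = GV->getType()->getElementType();
    // After: ask the GlobalValue directly, which keeps working once pointers are opaque.
    //   llvm::Type *ValueTy = GV->getValueType();
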
2020-04-03 18:00:33 -07:00
Michael Liao 5be9b8cbe2 [cuda][hip] Add CUDA builtin surface/texture reference support.
Summary: - Re-commit after fixing Sema checks on partial template specialization.

Reviewers: tra, rjmccall, yaxunl, a.sidorin

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76365
2020-03-27 17:18:49 -04:00
Artem Belevich fe8063e1a0 Revert "[cuda][hip] Add CUDA builtin surface/texture reference support."
This reverts commit 6a9ad5f3f4.
The patch breaks CUDA compilation.

Differential Revision: https://reviews.llvm.org/D76365
2020-03-27 10:01:38 -07:00
Johannes Doerfert befb4be3a8 [OpenMP] `omp begin/end declare variant` - part 2, sema ("+CG")
This is the second part loosely extracted from D71179 and cleaned up.

This patch provides semantic analysis support for `omp begin/end declare
variant`, mostly as defined in OpenMP technical report 8 (TR8) [0].
The sema handling makes code generation obsolete as we generate "the
right" calls that can just be handled as usual. This handling also
applies to the existing, albeit problematic, `omp declare variant`
support. As a consequence, a lot of unneeded code generation and
complexity is removed.

A major purpose of this patch is to provide proper `math.h`/`cmath`
support for OpenMP target offloading. See PR42061, PR42798, PR42799. The
current code was developed with this feature in mind, see [1].

The logic is as follows:

If we have seen a `#pragma omp begin declare variant match(<SELECTOR>)`
but not the corresponding `end declare variant`, and we find a function
definition we will:
  1) Create a function declaration for the definition we were about to generate.
  2) Create a function definition but with a mangled name (according to
     `<SELECTOR>`).
  3) Annotate the declaration with the `OMPDeclareVariantAttr`, the same
     one used already for `omp declare variant`, using the mangled
     function definition as the specialization for the context defined by
     `<SELECTOR>`.

When a call is created we inspect it. If the target has an
`OMPDeclareVariantAttr` attribute we try to specialize the call. To this
end, all variants are checked, the best applicable one is picked and a
new call to the specialization is created. The new call is used instead
of the original call to the base function. To keep AST printing and
tooling working, we utilize a PseudoObjectExpr: the original call is
the syntactic expression, the specialized call is the semantic
expression.

[0] https://www.openmp.org/wp-content/uploads/openmp-TR8.pdf
[1] https://reviews.llvm.org/D61399#change-496lQkg0mhRN

Reviewers: kiranchandramohan, ABataev, RaviNarayanaswamy, gtbercea, grokos, sdmitriev, JonChesterfield, hfinkel, fghanim, aaron.ballman

Subscribers: bollu, guansong, openmp-commits, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75779
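
A minimal sketch of the user-facing construct handled here; the function name and context selector are illustrative:

    double fast_sin(double x);                 // base function, defined elsewhere

    #pragma omp begin declare variant match(device = {kind(gpu)})
    double fast_sin(double x) {                // per the scheme above: emitted under a
      return x - x * x * x / 6.0;              // mangled name and attached to the base
    }                                          // declaration via OMPDeclareVariantAttr
    #pragma omp end declare variant

    double use(double x) { return fast_sin(x); }   // the call is specialized (via a
                                                    // PseudoObjectExpr) in GPU contexts
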
2020-03-27 02:30:58 -05:00
Michael Liao 6a9ad5f3f4 [cuda][hip] Add CUDA builtin surface/texture reference support.
Summary:
- Even though the bindless surface/texture interfaces are promoted,
  there are still code using surface/texture references. For example,
  [PR#26400](https://bugs.llvm.org/show_bug.cgi?id=26400) reports the
  compilation issue for code using `tex2D` with texture references. For
  better compatibility, this patch proposes the support of
  surface/texture references.
- Due to the absence of documentation and the magic headers, it's
  `nvcc` does use builtins for texture support. From the limited NVVM
  documentation[^nvvm] and NVPTX backend texture/surface related
  tests[^test], it's believed that surface/texture references are
  supported by replacing their reference types, which are annotated with
  `device_builtin_surface_type`/`device_builtin_texture_type`, with the
  corresponding handle-like object types, `cudaSurfaceObject_t` or
  `cudaTextureObject_t`, in the device-side compilation. On the host
  side, those global handle variables are registered and will be
  established and updated later when the corresponding binding/unbinding
  APIs are called[^bind]. Surface/texture references behave much like
  device global variables but are represented with different types on
  the host and device sides.
- In this patch, the following changes are proposed to support that
  behavior:
  + Refine `device_builtin_surface_type` and
    `device_builtin_texture_type` attributes to apply to `Type` decls
    only, and use them to check whether a variable is of the
    surface/texture reference type.
  + Add hooks in code generation to replace those reference types with
    the corresponding object types, as well as all accesses to them. In
    particular, `nvvm.texsurf.handle.internal` should be used to load
    object handles from global reference variables[^texsurf] as well as
    metadata annotations.
  + Generate host-side registration with proper template argument
    parsing.

---
[^nvvm]: https://docs.nvidia.com/cuda/pdf/NVVM_IR_Specification.pdf
[^test]: https://raw.githubusercontent.com/llvm/llvm-project/master/llvm/test/CodeGen/NVPTX/tex-read-cuda.ll
[^bind]: See section 3.2.11.1.2 `Texture reference API` in [CUDA C Programming Guide](https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf).
[^texsurf]: According to NVVM IR, `nvvm.texsurf.handle` should be used, but the current backend doesn't support it yet. We may revise that later.

Reviewers: tra, rjmccall, yaxunl, a.sidorin

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76365
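
A sketch of the kind of legacy CUDA code this enables (it requires the CUDA headers; the names follow the description above):

    texture<float, 2, cudaReadModeElementType> tex;   // type carries device_builtin_texture_type

    __global__ void sample(float *out, float x, float y) {
      *out = tex2D(tex, x, y);   // device side: 'tex' is rewritten to a cudaTextureObject_t
    }                            // handle loaded via nvvm.texsurf.handle.internal
    // Host side: 'tex' is registered so the cudaBindTexture* APIs can establish/update it.
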
2020-03-26 14:44:52 -04:00
Shiva Chen fc3752665f [RISCV] Passing small data limitation value to RISCV backend
Pass the small data limit to RISCVELFTargetObjectFile via a module flag,
so the backend can set the small data section threshold from that value.
Data will be put into the small data section if it is smaller than
the threshold.

Differential Revision: https://reviews.llvm.org/D57497
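
An illustrative sketch; the option spelling and the module-flag name are given from memory and should be treated as assumptions:

    // clang --target=riscv32 -msmall-data-limit=8 -S small.c
    int counter;          // 4 bytes <= limit: eligible for .sdata/.sbss
    int table[64];        // 256 bytes > limit: stays in regular .data/.bss
    // The limit travels to the backend as a module flag, roughly:
    //   !{i32 1, !"SmallDataLimit", i32 8}
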
2020-03-20 11:03:51 +08:00
Michael Liao 4cf01ed75e [hip] Revise `GlobalDecl` constructors. NFC.
Summary:
- https://reviews.llvm.org/D68578 revises the `GlobalDecl` constructors
  to ensure all GPU kernels have `ReferenceKernelKind` initialized
  properly, with an explicit constructor and a static one. But there are
  lots of places using the implicit constructor, triggering the assertion
  on non-GPU kernels. That's found in the compilation of many tests and
  workloads.
- Fixing all of them may change more code and, more importantly, all of
  them assume the default kernel reference kind. This patch changes
  that constructor to check for `CUDAGlobalAttr` and construct `GlobalDecl`
  properly.

Reviewers: yaxunl

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76344
2020-03-18 09:33:39 -04:00
Jon Chesterfield c45eaeabb7 [Clang] Undef attribute for global variables
Summary:
[Clang] Attribute to allow defining undef global variables

Initializing global variables is very cheap on hosted implementations. The
C semantics of zero initializing globals work very well there. It is not
necessarily cheap on freestanding implementations. Where there is no loader
available, code must be emitted near the start point to write the appropriate
values into memory.

At present, external variables can be declared in C++ and definitions provided
in assembly (or IR) to achieve this effect. This patch provides an attribute in
order to remove this reason for writing assembly for performance-sensitive
freestanding implementations.

A close analogue in tree is LDS memory for amdgcn, where the kernel is
responsible for initializing the memory after it starts executing on the gpu.
Uninitialized variables in LDS are observably cheaper than zero-initialized ones.

The patch is loosely based on the CUDA __shared__ and OpenCL __local variable
implementation, which also produces undef global variables.

Reviewers: kcc, rjmccall, rsmith, glider, vitalybuka, pcc, eugenis, vlad.tsyrklevich, jdoerfert, gregrodgers, jfb, aaron.ballman

Reviewed By: rjmccall, aaron.ballman

Subscribers: Anastasia, aaron.ballman, davidb, Quuxplusone, dexonsmith, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D74361
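
The commit message does not name the attribute; it landed as loader_uninitialized, so the sketch below uses that spelling as an assumption:

    // Freestanding target: there is no loader to zero-fill this, and the attribute
    // asks for an undef global instead of a zeroinitializer, so no startup code is
    // needed to write it before use.
    int scratch_buffer[4096] __attribute__((loader_uninitialized));
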
2020-03-17 21:22:23 +00:00
Reid Kleckner e08464fb45 Avoid including FileManager.h from SourceManager.h
Most clients of SourceManager.h need to do things like turning source
locations into file & line number pairs, but this doesn't require
bringing in FileManager.h and LLVM's FS headers.

The main code change here is to sink SM::createFileID into the cpp file.
I reason that this is not performance critical because it doesn't happen
on the diagnostic path; it happens along the paths of macro expansion
(could be hot) and new includes (less hot).

Saves some includes:
    309 -    /usr/local/google/home/rnk/llvm-project/clang/include/clang/Basic/FileManager.h
    272 -    /usr/local/google/home/rnk/llvm-project/clang/include/clang/Basic/FileSystemOptions.h
    271 -    /usr/local/google/home/rnk/llvm-project/llvm/include/llvm/Support/VirtualFileSystem.h
    267 -    /usr/local/google/home/rnk/llvm-project/llvm/include/llvm/Support/FileSystem.h
    266 -    /usr/local/google/home/rnk/llvm-project/llvm/include/llvm/Support/Chrono.h

Differential Revision: https://reviews.llvm.org/D75406
2020-03-11 13:53:12 -07:00
Yaxun (Sam) Liu 22c457a869 [HIP] Fix device stub name
HIP emits a device stub function for each kernel in host code.

The HIP debugger requires the device stub function to have a different unmangled name than the kernel.

Currently the name of the device stub function is the mangled name with a postfix .stub. However,
this does not work with the HIP debugger since the unmangled name is the same as the kernel's.

This patch adds the prefix __device__stub__ to the unmangled name of the device stub before mangling,
so the device stub function has a valid mangled name which is different from the device kernel
name. The device-side kernel name is kept unchanged. Kernels declared extern "C" also get the prefix
added to the corresponding device stub function.

Differential Revision: https://reviews.llvm.org/D68578
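
A naming sketch based on the description above; the kernel name is an example and the exact mangled spellings are omitted:

    // __global__ void axpy(float a, float *x);
    //   device-side kernel:      mangling of "axpy"                      (unchanged)
    //   host-side stub, before:  mangling of "axpy" plus a ".stub" suffix
    //                            (same unmangled name as the kernel)
    //   host-side stub, after:   mangling of "__device__stub__axpy"
    //                            (distinct unmangled name, visible to the debugger)
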
2020-03-09 16:40:05 -04:00
Erich Keane 7b66160828 Fix Target Multiversioning renaming.
The initial implementation only did 'first declaration renaming' when
a default version came after. This is insufficient in cases where a
default does not exist, so this patch makes sure that we do the renaming
in all cases.

This renaming is necessary because we emit the first declaration before
knowing that it IS a target multiversion function, which would change
its name. The second declaration (the one that caused the
multiversioning) then needs to make sure that the first one has its name
changed to be consistent with the resolver usage.
2020-03-09 08:29:18 -07:00
Yaxun (Sam) Liu 29e1a16be8 [NFC] Let mangler accept GlobalDecl
Differential Revision: https://reviews.llvm.org/D75700
2020-03-07 23:51:41 -05:00
hsmahesha cac068600e [HIP] Make sure unused hip-pinned-shadow global var is kept within device code
Summary:
The hip-pinned-shadow global var should remain in the final code object irrespective
of whether it is used within the code. Add it to the used list so that it
will not get eliminated when it is unused.

Reviewers: yaxunl, tra, hliao

Reviewed By: yaxunl

Subscribers: hliao, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D75402
2020-03-04 10:54:26 +05:30
Dan Gohman 00072c08c7 [WebAssembly] Mangle the argc/argv `main` as `__main_argc_argv`.
WebAssembly enforces a rule that caller and callee signatures must
match. This means that the traditional technique of passing `argc` and
`argv` to `main` even when it doesn't need them doesn't work.

Currently the backend renames `main` to `__original_main`, however this
doesn't interact well with LTO'ing libc, and the name isn't intuitive.
This patch allows us to transition to `__main_argc_argv` instead.

This implements the proposal in
https://github.com/WebAssembly/tool-conventions/pull/134
with a flag to disable it when targeting Emscripten, though this is
expected to be temporary, as discussed in the proposal comments.

Differential Revision: https://reviews.llvm.org/D70700
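
For reference, the source-level signature is unchanged; only the emitted symbol name differs. A trivial sketch:

    // Emitted as __main_argc_argv in the wasm object per this change, so the
    // startup code's call signature matches the callee's.
    int main(int argc, char **argv) { return argc > 1 ? 0 : 1; }
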
2020-02-27 07:55:36 -08:00
Roman Lebedev 3dd5a298bf
[clang] Annotating C++'s `operator new` with more attributes
Summary:
Right now we annotate C++'s `operator new` with the `noalias` attribute,
which is very healthy for optimizations.

However as per [[ http://eel.is/c++draft/basic.stc.dynamic.allocation | `[basic.stc.dynamic.allocation]` ]],
there are more promises on global `operator new`, namely:
* non-`std::nothrow_t` `operator new` *never* returns `nullptr`
* If `std::align_val_t align` parameter is taken, the pointer will also be `align`-aligned
* ~~global `operator new`-returned pointer is `__STDCPP_DEFAULT_NEW_ALIGNMENT__`-aligned ~~ It's more caveated than that.

Supplying this information may not cause immediate landslide effects
on any specific benchmarks, but it will certainly be healthy for the optimizer,
in the sense that the IR will better reflect the guarantees provided in the source code.

The caveat is `-fno-assume-sane-operator-new`, which currently prevents emitting the `noalias`
attribute, and is automatically passed by Sanitizers ([[ https://bugs.llvm.org/show_bug.cgi?id=16386 | PR16386 ]]) - should it also cover these attributes?
The problem is that the flag is back-end-specific, as seen in `test/Modules/explicit-build-flags.cpp`.
But while it is okay to add `noalias` metadata in the backend, we really should be adding at least
the alignment metadata to the AST, since that allows us to perform sema checks on it.

Reviewers: erichkeane, rjmccall, jdoerfert, eugenis, rsmith

Reviewed By: rsmith

Subscribers: xbolva00, jrtc27, atanasyan, nlopes, cfe-commits

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D73380
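
A short C++ sketch of what the optimizer can now assume at allocation call sites; the comments paraphrase the guarantees listed above:

    #include <new>

    int *make() {
      int *p = new int(42);     // non-nothrow operator new: never returns nullptr,
      return p;                 // so null checks on p can be folded away
    }

    void *make_aligned() {
      // align_val_t overload: the result can be assumed 32-byte aligned.
      return ::operator new(64, std::align_val_t(32));
    }
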
2020-02-26 01:37:17 +03:00
Xiangling Liao 8bee52bdb5 [AIX][Frontend] C++ ABI customizations for AIX boilerplate
This PR enables "XL" C++ ABI in frontend AST to IR codegen. And it is driven by
static init work. The current kind in Clang by default is Generic Itanium, which
has different behavior on static init with IBM xlclang compiler on AIX.

Differential Revision: https://reviews.llvm.org/D74015
2020-02-24 10:26:51 -05:00
serge_sans_paille e67cbac812 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
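
A sketch of the code patterns affected when building with -fstack-clash-protection on x86; the function names are illustrative:

    void large_frame(void) {
      volatile char buf[1 << 20];   // locals > PAGE_SIZE: probed during frame lowering
      buf[0] = 0;
    }

    void dynamic_frame(unsigned n) {
      volatile char *p = (volatile char *)__builtin_alloca(n);  // dynamic allocation:
      p[0] = 0;                                                 // probed chunk by chunk
    }
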
2020-02-09 10:42:45 +01:00
serge-sans-paille 4546211600 Revert "Support -fstack-clash-protection for x86"
This reverts commit 0fd51a4554.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-win-x-armv7l/builds/4354
2020-02-09 10:06:31 +01:00
serge_sans_paille 0fd51a4554 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with proper LiveIn
declaration, better option handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-09 09:35:42 +01:00
serge-sans-paille 658495e6ec Revert "Support -fstack-clash-protection for x86"
This reverts commit e229017732.

Failures:

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-debian/builds/2604
http://lab.llvm.org:8011/builders/llvm-clang-win-x-aarch64/builds/4308
2020-02-08 14:26:22 +01:00
serge_sans_paille e229017732 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with better option
handling and more portable testing.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-08 13:31:52 +01:00
Nico Weber b03c3d8c62 Revert "Support -fstack-clash-protection for x86"
This reverts commit 4a1a0690ad.
Breaks tests on mac and win, see https://reviews.llvm.org/D68720
2020-02-07 14:49:38 -05:00
serge_sans_paille 4a1a0690ad Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

This is a recommit of 39f50da2a3 with correct option
flags set.

Differential Revision: https://reviews.llvm.org/D68720
2020-02-07 19:54:39 +01:00
serge-sans-paille f6d98429fc Revert "Support -fstack-clash-protection for x86"
This reverts commit 39f50da2a3.

The -fstack-clash-protection flag is being passed to the linker too, which
is not intended.

Reverting and fixing that in a later commit.
2020-02-07 11:36:53 +01:00
serge_sans_paille 39f50da2a3 Support -fstack-clash-protection for x86
Implement protection against the stack clash attack [0] through inline stack
probing.

Probe stack allocation every PAGE_SIZE during frame lowering or dynamic
allocation to make sure the page guard, if any, is touched when touching the
stack, in a similar manner to GCC[1].

This extends the existing `probe-stack' mechanism with a special value `inline-asm'.
Technically the former uses a function call before stack allocation while this
patch provides inlined stack probes and chunk allocation.

Only implemented for x86.

[0] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
[1] https://gcc.gnu.org/ml/gcc-patches/2017-07/msg00556.html

Differential Revision: https://reviews.llvm.org/D68720
2020-02-07 10:56:15 +01:00
Matt Arsenault a3c814d234 Separately track input and output denormal mode
AMDGPU and x86 at least both have separate controls for whether
denormal results are flushed on output, and for whether denormals are
implicitly treated as 0 as an input. The current DAGCombiner use only
really cares about the input treatment of denormals.
2020-02-04 12:59:21 -05:00
Fangrui Song dbc96b518b Revert "[CodeGenModule] Assume dso_local for -fpic -fno-semantic-interposition"
This reverts commit 789a46f2d7.

Accidentally committed.
2020-02-03 10:09:39 -08:00
Fangrui Song 789a46f2d7 [CodeGenModule] Assume dso_local for -fpic -fno-semantic-interposition
Summary:
Clang -fpic defaults to -fno-semantic-interposition (GCC -fpic defaults
to -fsemantic-interposition).
Users need to specify -fsemantic-interposition to get semantic
interposition behavior.

Semantic interposition is currently a best-effort feature. There may
still be some cases where it is not handled well.

Reviewers: peter.smith, rnk, serge-sans-paille, sfertile, jfb, jdoerfert

Subscribers: dschuff, jyknight, dylanmckay, nemanjai, jvesely, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, arphaman, PkmX, jocewei, jsji, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D73865
2020-02-03 09:52:48 -08:00
serge-sans-paille fd09f12f32 Implement -fsemantic-interposition
First attempt at implementing -fsemantic-interposition.

Rely on GlobalValue::isInterposable that already captures most of the expected
behavior.

Rely on a ModuleFlag to state whether we should respect SemanticInterposition or
not. The default remains no.

So this should be a no-op if -fsemantic-interposition isn't used, and if
it is, since isInterposable is already used in most optimisations, they
should honor it properly.

Note that it only impacts code compiled with -fPIC and no PIE.

Differential Revision: https://reviews.llvm.org/D72829
2020-01-31 14:02:33 +01:00
Benjamin Kramer adcd026838 Make llvm::StringRef to std::string conversions explicit.
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.

This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.

This doesn't actually modify StringRef yet, I'll do that in a follow-up.
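
A typical call-site adjustment implied by this change (a sketch, not an excerpt from the patch):

    #include "llvm/ADT/StringRef.h"
    #include <string>

    std::string copyName(llvm::StringRef Name) {
      // std::string S = Name;      // previously implicit; now ill-formed
      std::string S = Name.str();   // explicit, mirroring std::string_view usage
      return S;
    }
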
2020-01-28 23:25:25 +01:00
Teresa Johnson 458676db6e [WPD/VFE] Always emit vcall_visibility metadata for -fwhole-program-vtables
Summary:
First patch to support Safe Whole Program Devirtualization Enablement,
see RFC here: http://lists.llvm.org/pipermail/llvm-dev/2019-December/137543.html

Always emit !vcall_visibility metadata under -fwhole-program-vtables,
and not just for -fvirtual-function-elimination. The vcall visibility
metadata will (in a subsequent patch) be used to communicate to WPD
which vtables are safe to devirtualize, and we will optionally convert
the metadata to hidden visibility at link time. Subsequent follow on
patches will help enable this by adding vcall_visibility metadata to the
ThinLTO summaries, and always emit type test intrinsics under
-fwhole-program-vtables (and not just for vtables with hidden
visibility).

In order to do this safely with VFE (since for VFE all vtable loads must
be type checked loads, which will no longer be the case), this patch adds
a new "Virtual Function Elim" module flag to communicate to GlobalDCE
whether to perform VFE using the vcall_visibility metadata.

One additional advantage of using the vcall_visibility metadata to drive
more WPD at LTO link time is that we can use the same mechanism to
enable more aggressive VFE at LTO link time as well. The link time
option proposed in the RFC will convert vcall_visibility metadata to
hidden (aka linkage unit visibility), which combined with
-fvirtual-function-elimination will allow it to be done more
aggressively at LTO link time under the same conditions.

Reviewers: pcc, ostannard, evgeny777, steven_wu

Subscribers: mehdi_amini, Prazek, hiraditya, dexonsmith, davidxl, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71907
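
A small C++ sketch; the metadata in the comment is approximate and the enum values are assumptions based on LLVM's VCallVisibility:

    // Compiled with -fwhole-program-vtables (no -fvirtual-function-elimination needed).
    struct Base {
      virtual void f();
    };
    void Base::f() {}
    // Expected, roughly: the vtable global carries the metadata
    //   @_ZTV4Base = ... !vcall_visibility !{i64 0}
    // where 0 = public, 1 = linkage unit, 2 = translation unit; the LTO link can
    // later tighten it to enable more devirtualization or VFE.
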
2020-01-23 11:36:01 -08:00
Zakk Chen e15fb06e2d [RISCV] Pass target-abi via module flag metadata
Reviewers: lenary, asb

Reviewed By: lenary

Tags: #clang

Differential Revision: https://reviews.llvm.org/D72755
2020-01-20 23:30:54 -08:00
Matt Arsenault a4451d88ee Consolidate internal denormal flushing controls
Currently there are 4 different mechanisms for controlling denormal
flushing behavior, and about as many equivalent frontend controls.

- AMDGPU uses the fp32-denormals and fp64-f16-denormals subtarget features
- NVPTX uses the nvptx-f32ftz attribute
- ARM directly uses the denormal-fp-math attribute
- Other targets indirectly use denormal-fp-math in one DAGCombine
- cl-denorms-are-zero has a corresponding denorms-are-zero attribute

AMDGPU wants a distinct control for f32 flushing from f16/f64, and as
far as I can tell the same is true for NVPTX (based on the attribute
name).

Work on consolidating these into the denormal-fp-math attribute, and a
new type specific denormal-fp-math-f32 variant. Only ARM seems to
support the two different flush modes, so this is overkill for the
other use cases. Ideally we would error on the unsupported
positive-zero mode on other targets from somewhere.

Move the logic for selecting the flush mode into the compiler driver,
instead of handling it in cc1. denormal-fp-math/denormal-fp-math-f32
are now both cc1 flags, but denormal-fp-math-f32 is not yet exposed as
a user flag.

-cl-denorms-are-zero, -fcuda-flush-denormals-to-zero and
-fno-cuda-flush-denormals-to-zero will be mapped to
-fdenormal-fp-math-f32=ieee or preserve-sign rather than the old
attributes.

Stop emitting the denorms-are-zero attribute for the OpenCL flag. It
has no in-tree users. The meaning would also be target dependent, such
as the AMDGPU choice to treat this as only meaning allow flushing of
f32 and not f16 or f64. The naming is also potentially confusing,
since DAZ in other contexts refers to instructions implicitly treating
input denormals as zero, not necessarily flushing output denormals to
zero.

This also does not attempt to change the behavior for the current
attribute. The LangRef now states that the default is ieee behavior,
but this is inaccurate for the current implementation. The clang
handling is slightly hacky to avoid touching the existing
denormal-fp-math uses. Fixing this will be left for a future patch.

AMDGPU is still using the subtarget feature to control the denormal
mode, but the new attributes are now emitted. A future change will
switch this and remove the subtarget features.
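
A comment-only sketch of the flag plumbing described above; the exact spellings follow the message and may have evolved since:

    // driver:  clang -x cuda -fcuda-flush-denormals-to-zero ...
    // cc1:     -fdenormal-fp-math-f32=preserve-sign        (not yet a user-facing flag)
    // IR:      attributes #0 = { ... "denormal-fp-math-f32"="preserve-sign" ... }
    // The generic "denormal-fp-math" attribute continues to cover the non-f32 cases.
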
2020-01-17 20:09:53 -05:00
serge-sans-paille d437fba8ef Reapply Allow system header to provide their own implementation of some builtin
This reverts commit 3d210ed3d1.

See https://reviews.llvm.org/D71082 for the patch and discussion that make it
possible to reapply this patch.
2020-01-17 09:58:32 +01:00
Alexey Bataev 25b542c61f [OPENMP]Do not emit RTTI descriptor for NVPTX devices.
Need to disable emission of RTTI descriptors for NVPTX devices to be
able to use dynamic classes without unresolved symbols at link stage.
2020-01-16 18:12:50 -05:00
Simon Pilgrim 19c5057e8d Fix "pointer is null" static analyzer warnings. NFCI.
Use castAs<> instead of getAs<> since the pointer is dereferenced immediately in all cases and castAs will perform the null assertion for us.
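
An illustrative instance of the pattern; the specific type is an example:

    #include "clang/AST/Type.h"

    unsigned numParams(clang::QualType T) {
      // getAs<> may return null, and the immediate dereference is what the analyzer
      // flagged; castAs<> asserts non-null, matching how the result is actually used.
      return T->castAs<clang::FunctionProtoType>()->getNumParams();
    }
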
2020-01-16 13:02:40 +00:00
Amy Huang 3d210ed3d1 Revert "Allow system header to provide their own implementation of some builtin"
This reverts commit 921f871ac4 because it
causes libc++ code to trigger __warn_memset_zero_len.

See https://reviews.llvm.org/D71082.
2020-01-15 15:03:45 -08:00
Amy Huang 53539bb032 [DebugInfo] Add another level to DebugInfoKind called Constructor
The option will limit debug info by only emitting complete class
type information when its constructor is emitted.
This patch changes comparisons with LimitedDebugInfo to use the new
level instead.

Differential Revision: https://reviews.llvm.org/D72427
2020-01-13 15:59:03 -08:00
serge-sans-paille 921f871ac4 Allow system header to provide their own implementation of some builtin
If a system header provides an (inline) implementation of some of its
functions, clang still matches on the function name and generates the appropriate
llvm builtin, e.g. memcpy. This behavior is in line with the glibc recommendation
« users may not provide their own version of symbols » but doesn't account for the
fact that glibc itself can provide inline versions of some functions.

This is the case for the memcpy function when -D_FORTIFY_SOURCE=1 is on. In that
case an inline version of memcpy calls __memcpy_chk, a function that performs
extra runtime checks. Clang currently ignores the inline version and thus
provides no runtime check.

This patch fixes the issue by detecting functions whose name is a builtin name
but which also have an inline implementation.

Differential Revision: https://reviews.llvm.org/D71082
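
A simplified sketch of the glibc-style pattern this accounts for; real fortify headers are more elaborate:

    extern void *__memcpy_chk(void *dst, const void *src,
                              unsigned long n, unsigned long dstlen);

    // System-header-style inline wrapper: adds a runtime bounds check.
    extern inline __attribute__((gnu_inline, always_inline, artificial))
    void *memcpy(void *dst, const void *src, unsigned long n) {
      return __memcpy_chk(dst, src, n, __builtin_object_size(dst, 0));
    }
    // Before this change, Clang recognized the name and emitted the plain memcpy
    // builtin, silently bypassing the checking wrapper.
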
2020-01-10 09:44:20 +01:00
Craig Topper d35bcbbb5d [Sema][X86] Consider target attribute into the checks in validateOutputSize and validateInputSize.
The validateOutputSize and validateInputSize functions need to check whether
AVX or AVX512 is enabled. But this can be affected by the
target attribute, so we need to factor that in.

This patch moves some of the code from CodeGen to create an
appropriate feature map that we can pass to the function.

Differential Revision: https://reviews.llvm.org/D68627
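
A sketch of the kind of code the size validation must now accept; the constraint and instruction are chosen for illustration:

    typedef double d8 __attribute__((vector_size(64)));   // 512-bit vector

    __attribute__((target("avx512f")))   // enables AVX-512 just for this function
    d8 passthrough(d8 x) {
      d8 y;
      // A 512-bit inline-asm operand is only valid because the per-function
      // target attribute provides AVX-512, even if the TU-wide target doesn't.
      __asm__("vmovapd %1, %0" : "=v"(y) : "v"(x));
      return y;
    }
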
2019-12-23 11:23:30 -08:00