Commit a4451d88ee by Matt Arsenault: Consolidate internal denormal flushing controls
Currently there are 4 different mechanisms for controlling denormal
flushing behavior, and about as many equivalent frontend controls.

- AMDGPU uses the fp32-denormals and fp64-f16-denormals subtarget features
- NVPTX uses the nvptx-f32ftz attribute
- ARM directly uses the denormal-fp-math attribute
- Other targets indirectly use denormal-fp-math in one DAGCombine
- cl-denorms-are-zero has a corresponding denorms-are-zero attribute

AMDGPU wants a distinct control for f32 flushing from f16/f64, and as
far as I can tell the same is true for NVPTX (based on the attribute
name).

Work on consolidating these into the denormal-fp-math attribute, plus a
new type-specific denormal-fp-math-f32 variant. Only ARM seems to
support the two different flush modes, so this is overkill for the
other use cases. Ideally we would emit an error somewhere when the
unsupported positive-zero mode is requested on other targets.

Move the logic for selecting the flush mode into the compiler driver
instead of handling it in cc1. Both denormal-fp-math and
denormal-fp-math-f32 are now cc1 flags, but denormal-fp-math-f32 is not
yet exposed as a user-facing driver flag.

-cl-denorms-are-zero, -fcuda-flush-denormals-to-zero and
-fno-cuda-flush-denormals-to-zero will be mapped to
-fdenormal-fp-math-f32=ieee or =preserve-sign rather than to the old
attributes.
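
A hedged way to check the new driver mapping (inspecting the cc1 job
with -### is an illustration, not something prescribed by this change):

    # Print the cc1 invocation the driver constructs; with this change,
    # -cl-denorms-are-zero is expected to appear as an
    # -fdenormal-fp-math-f32 option rather than a denorms-are-zero
    # function attribute:
    clang -### -x cl -c -cl-denorms-are-zero kernel.cl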

Stop emitting the denorms-are-zero attribute for the OpenCL flag. It
has no in-tree users. Its meaning would also be target-dependent; for
example, AMDGPU chose to treat it as allowing flushing of f32 only, not
f16 or f64. The naming is also potentially confusing, since DAZ in
other contexts refers to instructions implicitly treating input
denormals as zero, not necessarily flushing output denormals to zero.

This also does not attempt to change the behavior for the current
attribute. The LangRef now states that the default is ieee behavior,
but this is inaccurate for the current implementation. The clang
handling is slightly hacky to avoid touching the existing
denormal-fp-math uses. Fixing this will be left for a future patch.

AMDGPU is still using the subtarget features to control the denormal
mode, but the new attributes are now emitted. A future change will
switch this over and remove the subtarget features.
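
As a rough way to observe the newly emitted attribute in textual IR (a
sketch only; the exact attribute string and the default mode are not
guaranteed by this description):

    # Emit textual IR and look for the consolidated function attribute:
    clang -S -emit-llvm -fdenormal-fp-math=preserve-sign foo.c -o - | grep denormal-fp-math
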
Committed 2020-01-17 20:09:53 -05:00
Top-level directories and files, with the most recent commit touching each:

clang: Consolidate internal denormal flushing controls (2020-01-17 20:09:53 -05:00)
clang-tools-extra: Another speculative fix for the Windows bots. (2020-01-17 10:23:45 -05:00)
compiler-rt: hwasan: Remove dead code. NFCI. (2020-01-17 15:12:38 -08:00)
debuginfo-tests: Add test for GDB pretty printers. (2020-01-11 09:17:15 +01:00)
libc: [libc] Replace the use of gtest with a new light weight unittest framework. (2020-01-17 16:24:53 -08:00)
libclc: libclc: Drop the old python based build system (2019-11-08 09:59:40 -05:00)
libcxx: [libcxx] Introduce LinuxRemoteTI for remote testing (2020-01-18 01:27:30 +03:00)
libcxxabi: [demangle] Copy back some NFC commits from LLVM (2020-01-09 10:27:24 -08:00)
libunwind: Bump the trunk major version to 11 (2020-01-15 13:38:01 +01:00)
lld: [ELF] Allow R_PLT_PC (R_PC) to a hidden undefined weak symbol (2020-01-17 13:06:42 -08:00)
lldb: [lldb/Docs] Fix formatting for the variable formatting page (2020-01-17 14:17:26 -08:00)
llgo: IR: Support parsing numeric block ids, and emit them in textual output. (2019-03-22 18:27:13 +00:00)
llvm: Consolidate internal denormal flushing controls (2020-01-17 20:09:53 -05:00)
mlir: [mlir][Linalg] Extend linalg vectorization to MatmulOp (2020-01-17 17:09:47 -05:00)
openmp: [OpenMP][Tool] Fix memory leak and double-allocation (2020-01-16 10:05:06 -10:00)
parallel-libs: Fix typos throughout the license files that somehow I and my reviewers (2019-01-21 09:52:34 +00:00)
polly: PointerLikeTypeTraits: Standardize NumLowBitsAvailable on static constexpr rather than anonymous enum (2020-01-16 15:30:50 -08:00)
pstl: Bump the trunk major version to 11 (2020-01-15 13:38:01 +01:00)
.arcconfig: Update monorepo .arcconfig with new project callsign. (2019-01-31 14:34:59 +00:00)
.clang-format: Add .clang-tidy and .clang-format files to the toplevel of the (2019-01-29 16:43:16 +00:00)
.clang-tidy: Disable tidy checks with too many hits (2019-02-01 11:20:13 +00:00)
.git-blame-ignore-revs: Add LLDB reformatting to .git-blame-ignore-revs (2019-09-04 09:31:55 +00:00)
.gitignore: Add a newline at the end of the file (2019-09-04 06:33:46 +00:00)
CONTRIBUTING.md: Add contributing info to CONTRIBUTING.md and README.md (2019-12-02 15:47:15 +00:00)
README.md: Add contributing info to CONTRIBUTING.md and README.md (2019-12-02 15:47:15 +00:00)

README.md

The LLVM Compiler Infrastructure

This directory and its subdirectories contain source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and runtime environments.

The README briefly describes how to get started with building LLVM. For more information on how to contribute to the LLVM project, please take a look at the Contributing to LLVM guide.

Getting Started with the LLVM System

Taken from https://llvm.org/docs/GettingStarted.html.

Overview

Welcome to the LLVM project!

The LLVM project has multiple components. The core of the project is itself called "LLVM". This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests.

C-like languages use the Clang front end. This component compiles C, C++, Objective C, and Objective C++ code into LLVM bitcode -- and from there into object files, using LLVM.

Other components include: the libc++ C++ standard library, the LLD linker, and more.

Getting the Source Code and Building LLVM

The LLVM Getting Started documentation may be out of date. The Clang Getting Started page might have more accurate information.

This is an example workflow and configuration to get and build the LLVM source; a consolidated end-to-end command sequence is shown after the steps below:

  1. Check out LLVM (including related subprojects like Clang):

    • git clone https://github.com/llvm/llvm-project.git

    • Or, on Windows, git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git

  2. Configure and build LLVM and Clang:

    • cd llvm-project

    • mkdir build

    • cd build

    • cmake -G <generator> [options] ../llvm

      Some common generators are:

      • Ninja --- for generating Ninja build files. Most LLVM developers use Ninja.
      • Unix Makefiles --- for generating make-compatible parallel makefiles.
      • Visual Studio --- for generating Visual Studio projects and solutions.
      • Xcode --- for generating Xcode projects.

      Some Common options:

      • -DLLVM_ENABLE_PROJECTS='...' --- semicolon-separated list of the LLVM subprojects you'd like to additionally build. Can include any of: clang, clang-tools-extra, libcxx, libcxxabi, libunwind, lldb, compiler-rt, lld, polly, or debuginfo-tests.

        For example, to build LLVM, Clang, libcxx, and libcxxabi, use -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi".

      • -DCMAKE_INSTALL_PREFIX=directory --- Specify the full path name of the directory where the LLVM tools and libraries should be installed (default /usr/local).

      • -DCMAKE_BUILD_TYPE=type --- Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

      • -DLLVM_ENABLE_ASSERTIONS=On --- Compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).

    • Run your build tool of choice!

      • The default target (i.e. ninja or make) will build all of LLVM.

      • The check-all target (i.e. ninja check-all) will run the regression tests to ensure everything is in working order.

      • CMake will generate build targets for each tool and library, and most LLVM sub-projects generate their own check-<project> target.

      • Running a serial build will be slow. To improve speed, try running a parallel build. That's done by default in Ninja; for make, use make -j NNN, where NNN is the number of parallel jobs (e.g. the number of CPUs you have).

    • For more information, see the CMake documentation at https://llvm.org/docs/CMake.html.
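
Putting the steps above together, a minimal end-to-end sequence might look like the following (the Ninja generator, a Release build, and the clang subproject are chosen purely as an example):

    git clone https://github.com/llvm/llvm-project.git
    cd llvm-project
    mkdir build && cd build
    cmake -G Ninja \
          -DLLVM_ENABLE_PROJECTS="clang" \
          -DCMAKE_BUILD_TYPE=Release \
          -DLLVM_ENABLE_ASSERTIONS=On \
          ../llvm
    ninja              # build the default target (all of LLVM and clang)
    ninja check-all    # run the regression tests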

Consult the Getting Started with LLVM page for detailed information on configuring and compiling LLVM. You can visit Directory Layout to learn about the layout of the source code tree.