There are 2 parts to getting the -fassociative-math command-line flag translated to LLVM FMF:

1. In the driver/frontend, we accept the flag and its 'no' inverse and deal with the interactions with other flags like -ffast-math, -fno-signed-zeros, and -fno-trapping-math. This was mostly already done - we just need to translate the flag as a codegen option. The test file is complicated because there are many potential combinations of flags here. Note that we are matching gcc's behavior, which requires 'nsz' and no-trapping-math.

2. In codegen, we map the codegen option to FMF in the IR builder. This is simple code with a corresponding test.

For the motivating example from PR27372:

```c
float foo(float a, float x) { return ((a + x) - x); }
```

```
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math -emit-llvm | egrep 'fadd|fsub'
  %add = fadd nnan ninf nsz arcp contract float %0, %1
  %sub = fsub nnan ninf nsz arcp contract float %add, %2
```

So 'reassoc' is off as expected (and so is the new 'afn', but that's a different patch). This case now works as expected end-to-end, although the underlying logic is still wrong:

```
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math | grep xmm
  addss %xmm1, %xmm0
  subss %xmm1, %xmm0
```

We're not done because the case where 'reassoc' is set is ignored by optimizer passes. Example:

```
$ ./clang -O2 27372.c -S -o - -fassociative-math -fno-signed-zeros -fno-trapping-math -emit-llvm | grep fadd
  %add = fadd reassoc float %0, %1

$ ./clang -O2 27372.c -S -o - -fassociative-math -fno-signed-zeros -fno-trapping-math | grep xmm
  addss %xmm1, %xmm0
  subss %xmm1, %xmm0
```

Differential Revision: https://reviews.llvm.org/D39812

llvm-svn: 320920
Files in this directory:

- ABIInfo.h
- Address.h
- BackendUtil.cpp
- CGAtomic.cpp
- CGBlocks.cpp
- CGBlocks.h
- CGBuilder.h
- CGBuiltin.cpp
- CGCUDANV.cpp
- CGCUDARuntime.cpp
- CGCUDARuntime.h
- CGCXX.cpp
- CGCXXABI.cpp
- CGCXXABI.h
- CGCall.cpp
- CGCall.h
- CGClass.cpp
- CGCleanup.cpp
- CGCleanup.h
- CGCoroutine.cpp
- CGDebugInfo.cpp
- CGDebugInfo.h
- CGDecl.cpp
- CGDeclCXX.cpp
- CGException.cpp
- CGExpr.cpp
- CGExprAgg.cpp
- CGExprCXX.cpp
- CGExprComplex.cpp
- CGExprConstant.cpp
- CGExprScalar.cpp
- CGGPUBuiltin.cpp
- CGLoopInfo.cpp
- CGLoopInfo.h
- CGObjC.cpp
- CGObjCGNU.cpp
- CGObjCMac.cpp
- CGObjCRuntime.cpp
- CGObjCRuntime.h
- CGOpenCLRuntime.cpp
- CGOpenCLRuntime.h
- CGOpenMPRuntime.cpp
- CGOpenMPRuntime.h
- CGOpenMPRuntimeNVPTX.cpp
- CGOpenMPRuntimeNVPTX.h
- CGRecordLayout.h
- CGRecordLayoutBuilder.cpp
- CGStmt.cpp
- CGStmtOpenMP.cpp
- CGVTT.cpp
- CGVTables.cpp
- CGVTables.h
- CGValue.h
- CMakeLists.txt
- CodeGenABITypes.cpp
- CodeGenAction.cpp
- CodeGenFunction.cpp
- CodeGenFunction.h
- CodeGenModule.cpp
- CodeGenModule.h
- CodeGenPGO.cpp
- CodeGenPGO.h
- CodeGenTBAA.cpp
- CodeGenTBAA.h
- CodeGenTypeCache.h
- CodeGenTypes.cpp
- CodeGenTypes.h
- ConstantEmitter.h
- ConstantInitBuilder.cpp
- CoverageMappingGen.cpp
- CoverageMappingGen.h
- EHScopeStack.h
- ItaniumCXXABI.cpp
- MacroPPCallbacks.cpp
- MacroPPCallbacks.h
- MicrosoftCXXABI.cpp
- ModuleBuilder.cpp
- ObjectFilePCHContainerOperations.cpp
- README.txt
- SanitizerMetadata.cpp
- SanitizerMetadata.h
- SwiftCallingConv.cpp
- TargetInfo.cpp
- TargetInfo.h
- VarBypassDetector.cpp
- VarBypassDetector.h
README.txt
IRgen optimization opportunities.

//===---------------------------------------------------------------------===//

The common pattern of

  short x; // or char, etc
  (x == 10)

generates a zext/sext of x which can easily be avoided.

//===---------------------------------------------------------------------===//

Bitfield accesses can be shifted to simplify masking and sign extension. For example, if the bitfield width is 8 and it is appropriately aligned, then it is a lot shorter to just load the char directly.

//===---------------------------------------------------------------------===//

It may be worth avoiding creation of allocas for formal arguments for the common situation where the argument is never written to or has its address taken. The idea would be to begin generating code by using the argument directly, and if its address is taken or it is stored to, then generate the alloca and patch up the existing code. In theory, the same optimization could be a win for block-local variables as long as the declaration dominates all statements in the block.

NOTE: The main case we care about this for is -O0 -g compile-time performance, and in that scenario we currently need to emit the alloca anyway to emit proper debug info. So this is blocked by being able to emit debug information which refers to an LLVM temporary, not an alloca.

//===---------------------------------------------------------------------===//

We should try to avoid generating basic blocks which contain only jumps. At -O0, this penalizes us all the way from IRgen (malloc & instruction overhead) down through code generation and assembly time. On 176.gcc:expr.ll, it looks like over 12% of basic blocks are just direct branches!

//===---------------------------------------------------------------------===//