The expression language is currently tracked in a few places within the ClangExpressionParser constructor.
This patch adds a private lldb::LanguageType attribute to the ClangExpressionParser class and tracks the expression language from that one place.
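A minimal sketch of the shape of the change (the member name is hypothetical, and the real constructor takes more parameters):

  #include "lldb/lldb-enumerations.h"

  class ClangExpressionParser {
  public:
    explicit ClangExpressionParser(lldb::LanguageType language)
        : m_expr_lang(language) {}

    lldb::LanguageType GetLanguage() const { return m_expr_lang; }

  private:
    lldb::LanguageType m_expr_lang; // the one place the language is tracked
  };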
Author: Luke Drummond <luke.drummond@codeplay.com>
Differential Revision: http://reviews.llvm.org/D17719
llvm-svn: 263099
Only half of the intrinsics in this file are documented here. The patch for the other half will be sent out later.
The doxygen comments are automatically generated based on Sony's intrinsics document.
I got an OK from Eric Christopher to commit doxygen comments without prior code review upstream.
llvm-svn: 263098
The SCALAR_TO_VECTOR operation for v64i8 and v32i16 should be lowered if the BW feature is on.
Differential Revision: http://reviews.llvm.org/D17994
llvm-svn: 263097
Summary:
This provides a macro that expands to __builtin_debugtrap() for clang,
and __debugbreak() for MSVC.
It intentionally expands to nothing for compilers that do not support a
similar mechanism that halts the debugger without otherwise crashing the
process.
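A sketch of the kind of macro described (the macro name below is illustrative, not necessarily the one the patch adds):

  #if defined(__clang__)
    #define DEBUG_TRAP() __builtin_debugtrap()
  #elif defined(_MSC_VER)
    #define DEBUG_TRAP() __debugbreak()
  #else
    // No known way to halt the debugger without crashing the process,
    // so expand to nothing.
    #define DEBUG_TRAP()
  #endif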
Differential Revision: http://reviews.llvm.org/D18002
llvm-svn: 263095
Summary:
They are needed for inline asm during LTO.
In particular we hit the report_fatal_error on
llvm/lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp:138
LLVM ERROR: Inline asm not supported by this streamer because we don't have an asm parser for this target
Reviewers: ruiu, rafael
Subscribers: Bigcheese, llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18027
llvm-svn: 263094
This change adds support for the preserve_most calling convention to the AArch64 backend, similar to how it was done for X86-64.
There is also a subsequent patch on top of this one to add tail-call support for this calling convention.
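For reference, a caller-side illustration of the convention as exposed through Clang (hypothetical code, not part of the patch):

  // Calls to a preserve_most function clobber almost no registers, so a hot
  // caller pays very little for calling it on a rarely taken path.
  __attribute__((preserve_most)) void log_rare_event(int code);

  void scan(const int *data, int n) {
    for (int i = 0; i < n; ++i)
      if (data[i] < 0)           // rare
        log_rare_event(data[i]); // caller's live registers stay untouched
  }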
Differential Revision: http://reviews.llvm.org/D18016
llvm-svn: 263092
See https://llvm.org/bugs/show_bug.cgi?id=26894 for details. This change
fixes the incorrect flags to Clang and the piping issue. It also disables
the FileCheck portion of the test, which is currently failing.
llvm-svn: 263091
opt adds Verifier passes in AddOptimizationPasses even if
-disable-verify is on. Fix it so that the extra verification occurs
either when (1) -disable-verify is off, or (2) -verify-each is on.
Thanks to David Jones for pointing out this behavior!
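A sketch of the intended gating, with stand-in booleans for the cl::opt flags backing -disable-verify and -verify-each:

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/IR/Verifier.h"

  void addExtraVerifier(llvm::legacy::PassManager &PM, bool DisableVerify,
                        bool VerifyEach) {
    // Only add the extra verifier pass when verification hasn't been
    // disabled, or when per-pass verification was explicitly requested.
    if (!DisableVerify || VerifyEach)
      PM.add(llvm::createVerifierPass());
  }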
llvm-svn: 263090
MinVecRegSize is currently hardcoded to 128; this patch adds a cl::opt
to allow changing it. I tried not to change any existing behavior for the default
case.
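For illustration, the usual cl::opt pattern for such a knob (the flag name and description shown here are made up, not the ones from the patch):

  #include "llvm/Support/CommandLine.h"

  // Default keeps the previously hardcoded 128-bit behavior.
  static llvm::cl::opt<unsigned> MinVecRegSizeOption(
      "slp-min-reg-size", llvm::cl::init(128), llvm::cl::Hidden,
      llvm::cl::desc("Minimum vector register width to vectorize for, in bits"));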
Differential revision: http://reviews.llvm.org/D13278
llvm-svn: 263089
Summary:
For PseudoObjectExpr, the DeclMatcher needs to search not only all the semantic
expressions but also needs to search past OpaqueValueExpr for all potential uses
of the Decl.
Reviewers: thakis, rtrieu, rjmccall, doug.gregor
Subscribers: xazax.hun, rjmccall, doug.gregor, cfe-commits
Differential Revision: http://reviews.llvm.org/D17627
llvm-svn: 263087
Summary:
This is intended to be a performance flag, on the same level as clang
cc1 option "--disable-free". LLVM will never initialize it by default,
it will be up to the client creating the LLVMContext to request this
behavior. Clang will do it by default in Release build (just like
--disable-free).
"opt" and "llc" can opt-in using -disable-named-value command line
option.
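A sketch of the client-side opt-in (the setter is named as it exists on LLVMContext today; the spelling in this patch may differ):

  #include "llvm/IR/LLVMContext.h"

  // A client opting in when it knows IR value names are not needed.
  void configureForRelease(llvm::LLVMContext &Ctx) {
    // Skip storing names for IR values; debug output becomes less readable,
    // but IR construction gets cheaper and smaller.
    Ctx.setDiscardValueNames(true);
  }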
When performing LTO on llvm-tblgen, the initial merging of IR peaks
at 92MB without this patch and at 86MB with it; setNameImpl()
drops from 6.5MB to 0.5MB.
The total link time goes from ~29.5s to ~27.8s.
Compared to a compile-time flag (like the IRBuilder one), it performs
very similarly. I profiled SROA and obtained these results:
420ms with an IRBuilder that preserves names
372ms with an IRBuilder that strips names
375ms with an IRBuilder that preserves names, plus a runtime flag to strip them
Reviewers: chandlerc, dexonsmith, bogner
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D17946
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263086
Summary:
GCC does not emit DW_AT_data_member_location for members of a union.
Starting with a 0 value for member locations helps in reading union types
in such cases.
Reviewers: clayborg
Subscribers: ldrumm, lldb-commits
Differential Revision: http://reviews.llvm.org/D18008
llvm-svn: 263085
need to be changed for porting to the new pass manager.
Also sink the comment on the ValueTable class back to that class instead
of it dangling on an anonymous namespace.
No functionality changed.
llvm-svn: 263084
This is a fairly straightforward port to the new pass manager with one
exception. It removes a very questionable use of releaseMemory() in
the old pass to invalidate its caches between runs on a function.
I don't think this is really guaranteed to be safe. I've just used the
more direct port to the new PM to address this by nuking the results
object each time the pass runs. While this could cause some minor malloc
traffic increase, I don't expect the compile time performance hit to be
noticeable, and it makes the correctness and other aspects of the pass
much easier to reason about. In some cases, it may make things faster by
making the sets and maps smaller with better locality. Indeed, the
measurements collected by Bruno (thanks!!!) show mostly compile time
improvements.
There is sadly very limited testing at this point as there are only two
tests of memdep, and both rely on GVN. I'll be porting GVN next and that
will exercise this heavily though.
Differential Revision: http://reviews.llvm.org/D17962
llvm-svn: 263082
Previously line table parsing code assumed that the only gaps would
occur at the end of functions. In practice this isn't true, so this
patch makes the line table parsing more robust in the face of
functions with non-contiguous byte arrangements.
llvm-svn: 263078
Summary:
__BIG_ENDIAN__ and __LITTLE_ENDIAN__ are not supported by gcc, so with gcc
e.g. ubsan's Value::getFloatValue will silently fall through to
the little-endian branch, breaking the display of float values by ubsan.
Use __BYTE_ORDER__ == __ORDER_BIG/LITTLE_ENDIAN__ as the condition
instead, which is supported by both clang and gcc.
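The portable check looks roughly like this (sketch):

  // __BYTE_ORDER__ and the __ORDER_*_ENDIAN__ values are predefined by both
  // clang and gcc, unlike __BIG_ENDIAN__/__LITTLE_ENDIAN__.
  #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    // big-endian handling
  #else
    // little-endian handling
  #endif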
Noticed while porting ubsan to s390x.
Patch by Marcin Kościelnicki!
Differential Revision: http://reviews.llvm.org/D17660
llvm-svn: 263077
Since it's provided by the compiler. This allows a system module map
file to declare a module for it.
No test change for cstd.m, since stdatomic.h doesn't function without a
relatively complete stdint.h and stddef.h, which tests using this module
don't provide.
rdar://problem/24931246
llvm-svn: 263076
MemoryDependenceAnalysis had a hard-coded exception to the general aliasing rules for malloc and calloc. The reasoning that applied there is equally valid in BasicAA and clarifies the remaining logic in MDA.
In principle, this can expose slightly more optimization opportunities, but since essentially all of our aliasing-aware memory optimization passes go through MDA, this will likely be NFC in practice.
Differential Revision: http://reviews.llvm.org/D15912
llvm-svn: 263075
This patch teaches CGP to duplicate addressing mode computations into cold paths (detected via an explicit cold attribute on calls) if required to let the addressing mode be safely sunk into the basic block containing each load and store.
In general, duplicating code into cold blocks may result in code growth, but should not affect performance. In this case, it's better to duplicate some code than to put extra pressure on the register allocator by making it keep the address live through the entirety of the fast path.
This patch only handles addressing computations, but in principle, we could implement a more general cold-path scheduling heuristic which tries to reduce register pressure in the fast path by duplicating code into the cold path. Getting the profitability of the general case right seemed likely to be challenging, so I stuck to the existing case (addressing computation) we already had.
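A hypothetical C++ illustration of the shape of code being targeted (function and variable names are invented):

  __attribute__((cold)) void report_error(int);

  int load_field(int *base, long idx, bool failed) {
    int *addr = base + idx;   // addressing computation shared by both users
    if (failed)
      report_error(*addr);    // cold block: CGP may duplicate base + idx here
    return *addr;             // so the fast path can sink its own copy into
                              // the load's addressing mode
  }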
Differential Revision: http://reviews.llvm.org/D17652
llvm-svn: 263074
This patch teaches LICM's implementation of store promotion to exploit the fact that the memory location being accessed might be provably thread-local. The fact that it is thread-local weakens the requirements for where we can insert stores, since no other thread can observe the write. This allows us to perform store promotion even in cases where the store is not guaranteed to execute in the loop.
Two key assumptions worth drawing out are that this assumes a) no-capture is strong enough to imply no-escape, and b) standard allocation functions like malloc, calloc, and operator new return values which can be assumed not to have previously escaped.
In future work, it would be nice to generalize this so that it works without directly seeing the allocation site. I believe that the nocapture return attribute should be suitable for this purpose, but haven't investigated carefully. It's also likely that we could support unescaped allocas with similar reasoning, but since SROA and Mem2Reg should destroy those, they're less interesting than they first might seem.
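An illustrative (hypothetical) example of a case this enables: the allocation below is fresh and never captured, so the conditional store can be promoted even though it may never execute.

  #include <cstdlib>

  int count_negatives_flag(const int *a, int n) {
    // 'flag' comes straight from malloc and is never captured, so no other
    // thread can observe writes to it. LICM can therefore promote the
    // conditional store to a register plus a single store after the loop,
    // even though the store is not guaranteed to execute on any iteration.
    int *flag = static_cast<int *>(std::malloc(sizeof(int)));
    if (!flag)
      return 0;
    *flag = 0;
    for (int i = 0; i < n; ++i)
      if (a[i] < 0)
        *flag = 1;
    int result = *flag;
    std::free(flag);
    return result;
  }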
Differential Revision: http://reviews.llvm.org/D16783
llvm-svn: 263072
Summary:
This implements another part of -save-temps.
After this, the only remaining part is dumping the optimized bitcode. But
currently LLD's LTO doesn't have a non-intrusive place to put this.
Eventually we probably will and it will make sense to add it then.
Reviewers: ruiu, rafael
Subscribers: joker.eph, Bigcheese, llvm-commits
Differential Revision: http://reviews.llvm.org/D18009
llvm-svn: 263070
The irony of this patch is that one CPU that is affected is AMD Jaguar, and Jaguar
has a completely double-pumped AVX implementation. But getting the cost model to
reflect that is a much bigger problem. The small goal here is simply to improve on
the lie that !AVX2 == SandyBridge.
Differential Revision: http://reviews.llvm.org/D18000
llvm-svn: 263069
Instead of a variable-blend instruction, form a blend with immediate because those are always cheaper.
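For context, a sketch of the two forms at the intrinsics level (SSE4.1; illustrative, not taken from the patch's tests):

  #include <smmintrin.h>

  // Constant mask: lowering can use the immediate form (BLENDPS xmm, xmm, imm8).
  __m128 blend_const(__m128 a, __m128 b) {
    return _mm_blend_ps(a, b, 0x5);
  }

  // Variable mask: needs BLENDVPS plus a mask register, which is generally
  // more expensive; hence the preference for the immediate form above.
  __m128 blend_var(__m128 a, __m128 b, __m128 mask) {
    return _mm_blendv_ps(a, b, mask);
  }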
Differential Revision: http://reviews.llvm.org/D17899
llvm-svn: 263067
Summary:
From Adrian McCarthy:
"Running ninja check-lldb now has one crash in a Python process, due to deferencing a null pointer in IRExecutionUnit.cpp: candidate_sc.symbol is null, which leads to a call with a null this pointer."
Reviewers: zturner, spyffe, amccarth
Subscribers: ted, jingham, lldb-commits
Differential Revision: http://reviews.llvm.org/D17860
llvm-svn: 263066
When checking whether an smin is positive, we can move the comparison to one of the inputs if the other is known positive. If the known positive one is the min, then the other can't be negative. If the other is the min, then the min is simply that input, so we check it directly.
Differential Revision: http://reviews.llvm.org/D17873
llvm-svn: 263059
I somehow missed this. The case in GCC (global_alloc) was similar to
the new testcase except it had an array of structs rather than a two
dimensional array.
Fixes PR26885.
llvm-svn: 263058
Summary:
This is useful for debugging issues with LTO.
The option follows the analogous option in ld64 and the gold plugin (per
Rafael's suggestion).
For starters, this only dumps the combined bitcode file.
In a future patch I will add dumping for the .o file.
The naming of the output follows ld64's convention which is slightly more
consistent IMO (consistent `.lto.<extension>` for all the files).
Reviewers: rafael, ruiu
Subscribers: joker.eph, Bigcheese, llvm-commits
Differential Revision: http://reviews.llvm.org/D18006
llvm-svn: 263055
Includes a new built-in, conversion of the built-in to a target-independent intrinsic,
and an update to the header file. Tests are also updated. There is a second part in
the backend, for which I will post a separate code review. THE BACKEND PART SHOULD BE
COMMITTED FIRST.
Phabricator: http://reviews.llvm.org/D17816
llvm-svn: 263051
That way you can set offset breakpoints that will move as the function they are
contained in moves (which address breakpoints can't do...)
I don't align the new address to instruction boundaries yet, so you have to get
this right yourself for now.
<rdar://problem/13365575>
llvm-svn: 263049