Some older versions of clang emitted bit offsets that were negative; these bitfields would have their bitfield-ness stripped off, which would cause a clang assertion if clang assertions were enabled. I updated the bitfield C test to make sure we don't regress.
<rdar://problem/21082998>
llvm-svn: 267248
Code was added in ClangExpressionParser::ClangExpressionParser that was calling through
the process without checking that it was valid. Also, we were pretending that we could do something
reasonable if we had no target, but that's actually not true, so I check for a target at the
beginning of the constructor and don't make a compiler in that case.
<rdar://problem/25841198>
llvm-svn: 266944
Recommit modified version of r266311 including build bot regression fix.
This differs from the original r266311 by:
- Fixing Scalar::Promote to correctly zero- or sign-extend the value depending
on the signedness of the *source* type, not the target type (see the sketch below).
- Omitting a few stand-alone fixes that were already committed separately.
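A minimal sketch of the promotion rule in the first bullet, using illustrative names rather than the actual Scalar code:

#include "llvm/ADT/APInt.h"

// Whether we zero- or sign-extend is decided by the signedness of the
// *source* type, not the target type.
llvm::APInt Promote(const llvm::APInt &value, bool source_is_signed,
                    unsigned target_bits) {
  return source_is_signed ? value.sext(target_bits) : value.zext(target_bits);
}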
llvm-svn: 266422
This routine contained a stray "return false;" that made part of the code
unreachable. Also, the stack offset at which to find on-stack arguments
was incorrect.
llvm-svn: 266417
This implements a PDBASTParser and corresponding logic in
SymbolFilePDB to do type lookup by name. This is just a first
pass that leaves many aspects of type lookup unimplemented and
focuses on laying the framework. With this patch, you should
be able to look up basic types by name from a PDB.
Full class definitions are not completed yet; we will instead
just return a forward declaration of the class.
Differential Revision: http://reviews.llvm.org/D18848
Reviewed by: Greg Clayton
llvm-svn: 266392
Code in ObjectFileELF::ParseTrampolineSymbols assumes that the sh_info
field of the .rel(a).plt section identifies the .plt section.
However, with recent GNU ld this is no longer true. As a result of this:
https://sourceware.org/bugzilla/show_bug.cgi?id=18169
in object files generated with current linkers the sh_info field of
.rel(a).plt now points to the .got.plt section (or .got on some targets).
This causes LLDB to fail to identify any PLT stubs, causing a number of
test case failures.
This patch changes LLDB to simply always look for the .plt section by
name. This should be safe across all linkers and targets.
Differential Revision: http://reviews.llvm.org/D18973
llvm-svn: 266316
Running the ARM instruction emulation test on a big-endian system
would fail, since the code doesn't respect endianness properly.
In EmulateInstructionARM::TestEmulation, the code assumes that an
instruction opcode read in from the test file is in target byte
order, but it was in fact read in host byte order.
More difficult to fix, the EmulationStateARM structure models
the overlapping sregs and dregs by a union in _sd_regs. This
only works correctly if the host is a little-endian system.
I've removed the union in favor of a simple array containing
the 32 sregs, and changed any code accessing dregs to explicitly
use the correct two sregs overlaying that dreg in the proper
target order.
Also, EmulationStateARM::ReadPseudoMemory and WritePseudoMemory
track memory as a map of uint32_t values in host byte order, and
implement 64-bit memory accesses by splitting them into two
uint32_t accesses. However, callers expect memory contents to be
provided in the form of a byte array (in target byte order).
This means the uint32_t contents need to be byte-swapped on
BE systems, and when splitting up a 64-bit access into two 32-bit
ones, byte order has to be respected.
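A minimal sketch of the 64-bit splitting described above, using illustrative names (not the EmulationStateARM API) and assuming memory is tracked as a map of uint32_t values:

#include <cstdint>
#include <map>

// Write a 64-bit value as two 32-bit words.  On a big-endian target the
// high word lives at the lower address; on a little-endian target it is
// the other way around.
void WriteU64AsTwoU32(std::map<uint64_t, uint32_t> &memory, uint64_t addr,
                      uint64_t value, bool target_is_big_endian) {
  const uint32_t lo = static_cast<uint32_t>(value);
  const uint32_t hi = static_cast<uint32_t>(value >> 32);
  memory[addr]     = target_is_big_endian ? hi : lo;
  memory[addr + 4] = target_is_big_endian ? lo : hi;
}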
Differential Revision: http://reviews.llvm.org/D18984
llvm-svn: 266314
This patch fixes a bunch of issues that show up on big-endian systems:
- The gnu_libstdcpp.py script doesn't follow the way libstdc++ encodes
bit vectors: it should identify the enclosing *word* and then access
the appropriate bit within that word. Instead, the script simply
operates on bytes. This gives the same result on little-endian
systems, but not on big-endian.
- lldb_private::formatters::WCharSummaryProvider always assumes wchar_t
is UTF16, even though it could also be UTF8 or UTF32. This is mostly
not an issue on little-endian systems, but immediately fails on BE.
Fixed by checking the size of wchar_t like WCharStringSummaryProvider
already does.
- ClangASTContext::GetChildCompilerTypeAtIndex uses uint32_t to access
the virtual base offset stored in the vtable, even though the size
of this field matches the target pointer size according to the C++
ABI. Again, this is mostly not visible on LE, but fails on BE.
- Process::ReadStringFromMemory uses strncmp to search for a terminator
consisting of multiple zero bytes. This doesn't work since strncmp
already stops at the first zero byte. Use memcmp instead.
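A small sketch of the last point (hypothetical helper, not the actual Process::ReadStringFromMemory code): strncmp treats the first zero byte as end-of-string, so a multi-byte terminator has to be compared with memcmp:

#include <cstring>

// Check for a 4-byte (e.g. UTF-32) NUL terminator at buffer[pos].
bool IsWideTerminator(const char *buffer, size_t pos) {
  static const char terminator[4] = {0, 0, 0, 0};
  // strncmp(buffer + pos, terminator, 4) would report a match as soon as
  // buffer[pos] == 0, even if the remaining bytes are non-zero; memcmp
  // compares all four bytes.
  return std::memcmp(buffer + pos, terminator, sizeof(terminator)) == 0;
}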
Differential Revision: http://reviews.llvm.org/D18983
llvm-svn: 266313
The Scalar implementation and a few other places in LLDB directly
access the internal implementation of APInt values using the
getRawData method. Unfortunately, pretty much all of these places
do not handle big-endian systems correctly. While on little-endian
machines, the pointer returned by getRawData can simply be used as
a pointer to the integer value in its natural format, no matter
what size, this is not true on big-endian systems: getRawData
actually points to an array of type uint64_t, with the first element
of the array always containing the least-significant word of the
integer. This means that if the bitsize of that integer is smaller
than 64, we need to add an offset to the pointer returned by
getRawData in order to access the value in its natural type, and
if the bitsize is *larger* than 64, we actually have to swap the
constituent words before we can access the value in its natural type.
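To make these rules concrete, here is a minimal sketch (not the actual Scalar::GetBytes implementation) of what natural-format access to the APInt storage requires on a big-endian host; it assumes bit widths that are a whole number of 64-bit words once they exceed one word:

#include "llvm/ADT/APInt.h"
#include "llvm/Support/Host.h"
#include <algorithm>
#include <cstdint>
#include <vector>

const void *NaturalBytes(const llvm::APInt &value, std::vector<uint64_t> &storage) {
  const uint64_t *raw = value.getRawData();  // words, least-significant first
  const unsigned byte_size = (value.getBitWidth() + 7) / 8;
  if (llvm::sys::IsLittleEndianHost)
    return raw;                              // already the natural layout
  if (byte_size <= 8)                        // value fits in the first word
    return reinterpret_cast<const uint8_t *>(raw) + (8 - byte_size);
  // Wider than 64 bits: swap the constituent words so the most-significant
  // word comes first.
  storage.assign(raw, raw + value.getNumWords());
  std::reverse(storage.begin(), storage.end());
  return storage.data();
}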
This patch fixes every incorrect use of getRawData in the code base.
For the most part, this is done by simply removing uses of getRawData
in the first place, and using other APInt member functions to operate
on the integer data.
This can be done in many member functions of Scalar itself, as well
as in Symbol/Type.h and in IRInterpreter::Interpret. For the latter,
I've had to add a Scalar::MakeUnsigned routine to parallel the existing
Scalar::MakeSigned, e.g. in order to implement an unsigned divide.
The Scalar::RawUInt, Scalar::RawULong, and Scalar::RawULongLong
were already unused and can be simply removed. I've also removed
the Scalar::GetRawBits64 function and its few users.
The one remaining user of getRawData in Scalar.cpp is GetBytes.
I've implemented all the cases described above to correctly
implement access to the underlying integer data on big-endian
systems. GetData now simply calls GetBytes instead of reimplementing
its contents.
Finally, two places in the clang interface code were also accessing
APInt.getRawData in order to actually construct a byte representation
of an integer. I've changed those to make use of a Scalar instead,
to avoid having to re-implement the logic there.
The patch also adds a couple of unit tests verifying correct operation
of the GetBytes routine as well as the conversion routines. Those tests
actually exposed more problems in the Scalar code: the SetValueFromData
routine didn't work correctly for 128- and 256-bit data types, and the
SChar routine should have an explicit "signed char" return type to work
correctly on platforms where char defaults to unsigned.
Differential Revision: http://reviews.llvm.org/D18981
llvm-svn: 266311
Scalar::GetBytes provides a non-const access to the underlying bytes
of the scalar value, supposedly allowing for modification of those
bytes. However, even with the current implementation, this is not
really possible. For floating-point scalars, the pointer returned
by GetBytes refers to a temporary copy; modifications to that copy
will be simply ignored. For integer scalars, the pointer refers
to internal memory of the APInt implementation, which isn't
supposed to be directly modifiable; GetBytes simply casts away
the const-ness of the pointer.
With my upcoming patch to fix Scalar::GetBytes for big-endian
systems, this problem is going to get worse, since there we need
temporary copies even for some integer scalars. Therefore, this
patch makes Scalar::GetBytes const, fixing all those problems.
As a follow-on change, RegisterValue::GetBytes must be made const
as well. This in turn means that the way of initializing a
RegisterValue by doing a SetType followed by writing to GetBytes
no longer works. Instead, I've changed SetValueFromData to do
the equivalent of SetType itself, and then re-implemented
SetFromMemoryData to work on top of SetValueFromData.
There is still a need for RegisterValue::SetType, since some
platform-specific code uses it to reinterpret the contents of
an already filled RegisterValue. To make this usage work in
all cases (even changing from a type implemented via Scalar
to a type implemented as a byte buffer), SetType now simply
copies the old contents out, and then reloads the RegisterValue
from this data using the new type via SetValueFromData.
This in turn means that there is no remaining caller of
Scalar::SetType, so it can be removed.
The only other follow-on change was in MIPS EmulateInstruction
code, where some uses of RegisterValue::GetBytes could be made
const trivially.
Differential Revision: http://reviews.llvm.org/D18980
llvm-svn: 266310
This patch adds support for Linux on SystemZ:
- A new ArchSpec value of eCore_s390x_generic
- A new directory Plugins/ABI/SysV-s390x providing an ABI implementation
- Register context support
- Native Linux support including watchpoint support
- ELF core file support
- Misc. support throughout the code base (e.g. breakpoint opcodes)
- Test case updates to support the platform
This should provide complete support for debugging the SystemZ platform.
Not yet supported are optional features like transaction support (zEC12)
or SIMD vector support (z13).
There is no instruction emulation, since our ABI requires that all code
provide correct DWARF CFI at all PC locations in .eh_frame to support
unwinding (i.e. -fasynchronous-unwind-tables is on by default).
The implementation follows existing platforms in a mostly straightforward
manner. A couple of things that are different:
- We do not use PTRACE_PEEKUSER / PTRACE_POKEUSER to access single registers,
since some registers (the access registers) reside at offsets in the user area
that are multiples of 4, but the PTRACE_PEEKUSER interface only allows
accessing aligned 8-byte blocks in the user area. Instead, we use the s390-
specific ptrace interface PTRACE_PEEKUSR_AREA / PTRACE_POKEUSR_AREA, which
allows accessing a whole block of the user area in one go, in effect letting
us treat parts of the user area as register sets (a rough sketch follows this list).
- SystemZ hardware does not provide any means to implement read watchpoints,
only write watchpoints. In fact, we can only support a *single* write
watchpoint (but this can span a range of arbitrary size). In LLDB this
means we support only a single watchpoint. I've set all test cases that
require read watchpoints (or multiple watchpoints) to expected failure
on the platform. [ Note that there were two test cases that install
a read/write watchpoint even though they nowhere rely on the "read"
property. I've changed those to simply use plain write watchpoints. ]
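To illustrate the PTRACE_PEEKUSR_AREA usage mentioned in the first bullet, here is a rough sketch; the ptrace_area field names are taken from my reading of the s390 asm/ptrace.h and should be treated as an assumption rather than a copy of the LLDB code:

#include <sys/ptrace.h>
#include <asm/ptrace.h>   // ptrace_area and PTRACE_PEEKUSR_AREA on s390

// Read buf_size bytes starting at 'offset' within the traced process's
// user area, in a single ptrace call.
long PeekUserArea(pid_t pid, unsigned long offset, void *buf,
                  unsigned int buf_size) {
  ptrace_area parea;
  parea.len = buf_size;                     // how much of the user area to copy
  parea.kernel_addr = offset;               // offset within the user area
  parea.process_addr = (unsigned long)buf;  // destination buffer in the tracer
  return ptrace((__ptrace_request)PTRACE_PEEKUSR_AREA, pid, &parea, nullptr);
}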
Differential Revision: http://reviews.llvm.org/D18978
llvm-svn: 266308
If the UnwindPlan did not identify how to unwind the stack pointer
register, LLDB currently assumes it can determine the caller's SP
from the current frame's CFA. This is true on most platforms
where CFA is by definition equal to the incoming SP at function
entry.
However, on the s390x target, we instead define the CFA to equal
the incoming SP plus an offset of 160 bytes. This is because
our ABI defines that the caller has to provide a register save
area of size 160 bytes. This area is allocated by the caller,
but is considered part of the callee's stack frame, and therefore
the CFA is defined as pointing to the top of this area.
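A tiny illustration of the difference described above (the 160-byte constant comes from the text; this is not the actual ABI plugin code):

#include <cstdint>

// On most targets the CFA is defined as the incoming SP, so the caller's SP
// is simply the CFA.  On s390x the CFA is the incoming SP plus the 160-byte
// register save area, so that bias has to be subtracted again.
uint64_t GetCallerStackPointer(uint64_t cfa, bool is_s390x) {
  return is_s390x ? cfa - 160 : cfa;
}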
In order to make this work on s390x, this patch introduces a new
ABI callback GetFallbackRegisterLocation that provides platform-
specific fallback register locations for unwinding. The existing
code to handle SP unwinding as well as volatile registers is moved
into the default implementation of that ABI callback, to allow
targets where that implementation is incorrect to override it.
This patch in itself is a no-op for all existing platforms.
But it is a pre-requisite for adding s390x support.
Differential Revision: http://reviews.llvm.org/D18977
llvm-svn: 266307
(lldb) b ~Foo
(lldb) b Foo::~Foo
(lldb) b Bar::Foo::~Foo
Improved our C++ breakpoint location tests as well to cover this issue.
<rdar://problem/25577252>
llvm-svn: 266139
The result variables aren't useful, and if you have a breakpoint on a
common function you can generate a lot of these. So I changed the
code that checks the condition to set ResultVariableIsInternal in the
EvaluateExpressionOptions that we pass to the execution.
Unfortunately, the check for this variable was done in the wrong place
(the static UserExpression::Evaluate) which is not how breakpoint
conditions execute expressions (UserExpression::Execute). So I moved
the check to UserExpression::Execute (which Evaluate also calls) and made the
overridden method DoExecute.
llvm-svn: 266093
The structure definitions are not provided, but we perform a sizeof operation on
them, which causes a build failure. Include `asm/ptrace.h` to get the structure
definitions.
llvm-svn: 266042
The Python import works by ensuring the directory of the module or package is in sys.path, and then it does a Python `import foo`. The original code was not escaping the backslashes in the directory path, so this wasn't working.
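A hedged sketch of the escaping involved (hypothetical helper, not the actual ScriptInterpreterPython code):

#include <string>

// Double every backslash so a Windows directory path survives being embedded
// in a Python source string, e.g. C:\tools\lldb -> C:\\tools\\lldb.
std::string EscapeBackslashesForPython(const std::string &dir) {
  std::string escaped;
  for (char c : dir) {
    if (c == '\\')
      escaped += "\\\\";
    else
      escaped += c;
  }
  return escaped;
}

// The escaped path can then be used in the statement handed to Python, e.g.
//   "sys.path.insert(1, '" + EscapeBackslashesForPython(dir) + "'); import foo"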
Differential Revision: http://reviews.llvm.org/D18873
llvm-svn: 265738
os to "ios" or "macosx" if it is unspecified. For environments
where there genuinely is no os, we don't want to errantly
convert that to ios/macosx, e.g. bare board debugging.
Change PlatformRemoteiOS, PlatformRemoteAppleWatch, and
PlatformRemoteAppleTV to not create themselves if we have
an unspecified OS. Same problem - these are not appropriate
platforms for bare board debugging environments.
Have Process::Attach's logging take place if either
process or target logging is enabled.
<rdar://problem/25592378>
llvm-svn: 265732
It turns out this does make a functional change, in the case where the inferior hits an int3 that was
not placed by the debugger. Backing out for now.
llvm-svn: 265647
Summary:
SetThreadStopInfo was checking for a breakpoint at the current PC several times. This merges the
identical code into a separate function. I've left one breakpoint check alone, as it was doing
more complicated stuff, and I did not see a way to merge that without making the interface
more complicated. NFC.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18819
llvm-svn: 265560
Summary:
This resolves a similar problem as D16720 (which handled the case when we single-step onto a
breakpoint), but this one deals with involuntary stops: when we stop a thread (e.g. because
another thread has hit a breakpoint and we are doing a full stop), we can end up stopping it right
before it executes a breakpoint instruction. In this case, the stop reason will be empty, but we
will still step over the breakpoint when we do the next resume, thereby missing a breakpoint hit.
I have observed this happening in TestConcurrentEvents, but I have no idea how to reproduce this
behavior more reliably.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18692
llvm-svn: 265525
Summary:
The logic to read modules from memory was added to LoadModuleAtAddress
in the dynamic loader, but not in process gdb remote. This means that when
the remote uses svr4 packets to give library info, libraries only present
on the remote will not be loaded.
This patch therefore involves some code duplication from LoadModuleAtAddress
in the dynamic loader, but removing this would require some amount of code
refactoring.
Reviewers: ADodds, tberghammer, tfiala, deepak2427, ted
Subscribers: tfiala, lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18531
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265418
Summary:
There was a bug in linux core file handling, where if there was a running process with the same
process id as the id in the core file, the core file debugging would fail, as we would pull some
pieces of information (ProcessInfo structure) from the running process instead of the core file.
I fix this by routing the ProcessInfo requests through the Process class and overriding it in
ProcessElfCore to return correct data.
A (slightly convoluted) test is included.
Reviewers: clayborg, zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18697
llvm-svn: 265391
in thumb mode into one method in ArchSpec, replace checks for
specific cores in the disassembler with calls to this. Also call
this from the arm instruction emulation code.
The determination of whether a given ArchSpec is thumb-only is still
a bit of a hack, but at least the hack is consolidated into a single
place. In my original version of this patch http://reviews.llvm.org/D13578
I was calling into llvm's arm feature tables to make this
determination, like
#include "llvm/Support/TargetRegistry.h"
#include "llvm/MC/MCSubtargetInfo.h"
#include "llvm/../../lib/Target/ARM/ARMGenRegisterInfo.inc"
#include "llvm/../../lib/Target/ARM/ARMFeatures.h"
[...]
std::string triple (GetTriple().getTriple());
const char *cpu = "";
const char *features_str = "";
const llvm::Target *curr_target = llvm::TargetRegistry::lookupTarget(triple.c_str(), Error);
std::unique_ptr<llvm::MCSubtargetInfo> subtarget_info_up (curr_target->createMCSubtargetInfo(triple.c_str(), cpu, features_str));
if (subtarget_info_up->getFeatureBits()[llvm::ARM::FeatureNoARM])
{
    return true;
}
but those tables are generated as part of the llvm build, and linking against
them from all of our different build systems was a big hiccup that I
haven't had time to revisit convincingly.
I'll keep that reviews.llvm.org patch around to remind myself that I
need to take another run at linking against the necessary tables
again in llvm.
<rdar://problem/23022803>
llvm-svn: 265377
TypeSP SymbolFileDWARF::FindDefinitionTypeForDWARFDeclContext (const DWARFDeclContext &dwarf_decl_ctx);
The problem was we might be looking for a type "Foo", and find one from another language. Then the DWARFASTParserClang would try to make an AST type using a CompilerType that might be empty.
This fix makes sure that when we create a DWARFDeclContext from a DWARFDIE, we set the language of the DIE in the DWARFDeclContext. Then when we go to find matches for a DWARFDeclContext, we end up with a bunch of DIEs. We check each DWARFDIE that we found by asking it for its language and making sure the language is compatible with the type system that we want to use. This keeps us from using the wrong types to resolve forward declarations.
<rdar://problem/25276165>
llvm-svn: 265196
rnb_err_t
RNBRemote::HandlePacket_stop_process (const char *p)
{
    if (!DNBProcessInterrupt(m_ctx.ProcessID()))
        HandlePacket_last_signal (NULL);
    return rnb_success;
}
In the call to DNBProcessInterrupt we did:
nub_bool_t
DNBProcessInterrupt(nub_process_t pid)
{
    MachProcessSP procSP;
    if (GetProcessSP (pid, procSP))
        return procSP->Interrupt();
    return false;
}
This would always return false. It would cause HandlePacket_stop_process to always call "HandlePacket_last_signal (NULL);" which would send an extra stop reply packet _if_ the process is stopped. On a machine with enough cores, it would call DNBProcessInterrupt(...) and then HandlePacket_last_signal(NULL) so quickly that it will never send out an extra stop reply packet. But if the machine is slow enough or doesn't have enough cores, it could cause the call to HandlePacket_last_signal() to actually succeed and send an extra stop reply packet. This would cause problems up in GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse() where it would get the first stop reply packet and then possibly return or execute an async packet. If it returned, then the next packet that was sent will get the second stop reply as its response. If it executes an async packet, the async packet will get the wrong response.
To fix this I did the following:
1 - in debugserver, I fixed "bool MachProcess::Interrupt()" to return true if it sends the signal so we avoid sending the stop reply twice on slower machines
2 - Added a log line to RNBRemote::HandlePacket_stop_process() to say if we ever send an extra stop reply so we will see this in the darwin console output if this does happen
3 - Added response validators to StringExtractorGDBRemote so that we can verify some responses to some packets.
4 - Added validators to packets that often follow stop reply packets like the "m" packet for memory reads, JSON packets since "jThreadsInfo" is often sent immediately following a stop reply.
5 - Modified GDBRemoteCommunicationClient::SendPacketAndWaitForResponseNoLock() to validate responses. Any "StringExtractorGDBRemote &response" that contains a valid response verifier will verify the response and keep looking for correct responses up to 3 times. This will help us get back on track if we do get extra stop replies. If a StringExtractorGDBRemote does not have a response validator, it will accept any packet in response.
6 - In GDBRemoteCommunicationClient::SendPacketAndWaitForResponse we copy the response validator from the "response" argument over into m_async_response so that if we send the packet by interrupting the running process, we can validate the response we actually get in GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse()
7 - Modified GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse() to always check for an extra stop reply packet for 100ms when the process is interrupted. We were already doing this because we might interrupt a process with a \x03 packet, yet the process was in the process of stopping due to another reason. This race condition could cause an extra stop reply packet because the GDB remote protocol says if a \x03 packet is sent while the process is stopped, we should send a stop reply packet back. Now we always check for an extra stop reply packet when we manually interrupt a process.
The issue was showing up when our IDE would attempt to set a breakpoint while the process is running and this would happen:
--> \x03
<-- $T<stop reply 1>
--> z0,AAAAA,BB (set breakpoint)
<-- $T<stop reply 1> (incorrect extra stop reply packet)
--> c
<-- OK (response from z0 packet)
Now all packet traffic was off by one response. Since we now have a validator on the response for "z" packets, we do this:
--> \x03
<-- $T<stop reply 1>
--> z0,AAAAA,BB (set breakpoint)
<-- $T<stop reply 1> (Ignore this because this can't be the response to z0 packets)
<-- OK -- (we are back on track as this is a valid response to z0)
...
As time goes on we should add more packet validators.
<rdar://problem/22859505>
llvm-svn: 265086
Summary:
In the case of Dwo, DIERef stores a compile unit offset in the main object file, and not in the dwo.
The implementation of SymbolFileDWARFDwo::GetDIE inherited from SymbolFileDWARF tried to look up
the compilation unit in the DWO based on the main object file offset (and failed). I change the
implementation to verify that the DIERef indeed references a compile unit belonging to this dwo and then
look up the die based on the die offset alone.
Includes a couple of fixes for mismatched struct/class tags.
Reviewers: tberghammer, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18646
llvm-svn: 265011
1 - DWARF in .o files with debug map in executable: we would place the compile unit index in the upper 32 bits of the 64 bit value and the lower 32 bits would be the DIE offset
2 - DWO: we would place the compile unit offset in the upper 32 bits of the 64 bit value and the lower 32 bits would be the DIE offset
There was a mixing and matching of this and it wasn't done consistently.
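A minimal sketch of the 64-bit packing described in the two cases above (hypothetical helpers, not the DIERef API):

#include <cstdint>

// Upper 32 bits: compile unit index (debug map case) or compile unit offset
// (DWO case).  Lower 32 bits: the DIE offset.
uint64_t MakeUserID(uint32_t cu_index_or_offset, uint32_t die_offset) {
  return (uint64_t(cu_index_or_offset) << 32) | die_offset;
}

uint32_t GetCompileUnitPart(uint64_t uid) { return uint32_t(uid >> 32); }
uint32_t GetDIEOffsetPart(uint64_t uid) { return uint32_t(uid & 0xffffffffu); }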
Major changes include:
The DIERef constructor that takes a lldb::user_id_t now requires a SymbolFileDWARF:
DIERef(lldb::user_id_t uid, SymbolFileDWARF *dwarf)
It is needed so that it can be decoded correctly. If it is DWARF in .o files with debug map in executable, then we get the right compile unit from the SymbolFileDWARFDebugMap, otherwise, we use the compile unit offset and DIE offset for DWO or normal DWARF.
The function:
lldb::user_id_t DIERef::GetUID() const;
Now becomes
lldb::user_id_t DIERef::GetUID(SymbolFileDWARF *dwarf) const;
Again, we need the DWARF file to encode it correctly.
This removes the need for "lldb::user_id_t SymbolFileDWARF::MakeUserID() const" and for "bool SymbolFileDWARF::UserIDMatches (lldb::user_id_t uid) const". There were also many places that were doing things inefficiently, like:
1 - encode a dw_offset_t into a lldb::user_id_t
2 - call the public SymbolFile interface to resolve types using the lldb::user_id_t
3 - This would then decode the lldb::user_id_t into a DIERef, and then try to find that type.
There are many places that are now doing this more efficiently by storing DW_AT_type form values as DWARFFormValue objects and then making a DIERef from them and directly calling the underlying function to resolve the lldb_private::Type, lldb_private::CompilerType, lldb_private::CompilerDecl, lldb_private::CompilerDeclContext.
If there are any regressions in DWARF with DWO, we will need to fix any issues that arise since the original patch wasn't functional for the much more widely used DWARF in .o files with debug map.
<rdar://problem/25200976>
llvm-svn: 264909
The problem was that the static DynamicLoaderDarwinKernel::Initialize() was recently changed to come before DynamicLoaderMacOSXDYLD::Initialize() which caused the DynamicLoaderDarwinKernel::CreateInstance(...) to be called before DynamicLoaderMacOSXDYLD::CreateInstance(...) and DynamicLoaderDarwinKernel would claim it could be the dynamic loader for a user space MacOSX process. The fix is to make DynamicLoaderDarwinKernel::CreateInstance() a bit more thorough when vetting the process so that it doesn't claim MacOSX user space processes.
<rdar://problem/25425373>
llvm-svn: 264794
quietly apply fixits for those who really trust clang's fixits.
Also, moved the retry into ClangUserExpression::Evaluate, where I can make a whole new ClangUserExpression
to do the work. Reusing any of the parts of a UserExpression in situ isn't supported at present.
<rdar://problem/25351938>
llvm-svn: 264793
Summary:
Since r264316, clang started adding DW_AT_GNU_dwo_name attribute to dwo files (previously, this
attribute was only present in main object files), breaking pretty much every dwo test. The
problem was that we were treating the presence of said attribute as a signal that we should look
for information in an external object file, which caused us to enter an infinite loop. I fix this
by making sure we do not go looking for an external dwo file if we already *are* parsing a dwo
file.
Reviewers: tberghammer, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18547
llvm-svn: 264729
This allows these functions to be re-used by a forthcoming
PDBASTParser. The functions in question are CanCompleteType,
CompleteType, and CanImport. Conceptually, these functions belong
on ClangASTImporter anyway, and previously they were just ping
ponging around through a few levels of indirection to end up there
as well, so this patch actually makes the code somewhat simpler.
A few methods were moved to a new file called ClangUtil, so that
they can be shared between ClangASTImporter and ClangASTContext
without creating a circular dependency between those two cpp
files.
Differential Revision: http://reviews.llvm.org/D18381
llvm-svn: 264685
Blocks and lambdas have their implementation functions stored in the IR for an
expression. If we put the block/lambda into a result variable it needs to stay
around. As a heuristic, remember any execution unit that has more than one
function in it.
<rdar://problem/22864976>
llvm-svn: 264483
months back to PlatformRemoteAppleTV and PlatformRemoteAppleWatch
to help understand what's happening when lldb can't find binaries
that it should be finding.
llvm-svn: 264380
This feature is controlled by an expression command option, a target property and the
SBExpressionOptions setting. FixIts are only applied to UserExpressions, not UtilityFunctions;
those you have to get right when you make them.
This is just a first stage. At present the fixits are applied silently. The next step
is to tell the user about the applied fixit.
<rdar://problem/25351938>
llvm-svn: 264379
Summary:
Fixes SBCommandReturnObject::SetImmediateOutputFile() and
SBCommandReturnObject::SetImmediateErrorFile() for files opened
with "a" or "a+" by resolving inconsistencies between File and
our Python parsing of file objects.
Reviewers: granata.enrico, Eugene.Zelenko, jingham, clayborg
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18228
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 264351
Summary:
Though r264012 was fancy enough to make reading the jit entry struct
work with templates, the packing and alignment attributes do not work on
Windows. So, this change makes it plain and simple with manual reading
of the jit entry struct.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18379
llvm-svn: 264217
This patch adds ThreadSanitizer support into LLDB:
- Adding a new InstrumentationRuntime plugin, ThreadSanitizerRuntime, in the same way ASan is implemented.
- A breakpoint stops in `__tsan_on_report`, then we extract all sorts of information by evaluating an expression. We then populate this into StopReasonExtendedInfo.
- SBThread gets a new API, SBThread::GetStopReasonExtendedBacktraces(), which returns TSan’s backtraces in the form of regular SBThreads. Non-TSan stop reasons return an empty collection.
- Added some test cases.
Reviewed by Greg Clayton.
llvm-svn: 264162
This patch adds a new ExecutionPolicy, eExecutionPolicyTopLevel, which
tells the expression parser that the expression should be JITted as top
level code but nothing (except static initializers) should be run. I
have modified the Clang expression parser to recognize this execution
policy. On top of the existing patches that support storing IR and
maintaining a map of arbitrary Decls, this is mainly just patching up a
few places in the expression parser.
I intend to submit a patch for review that exposes this functionality
through the "expression" command and through the SB API. That patch
also includes a testcase for all of this.
<rdar://problem/22864976>
llvm-svn: 264095
Win32 API calls that are Unicode aware require wide character
strings, but LLDB uses UTF8 everywhere. This patch does conversions
wherever necessary when passing strings into and out of Win32 API
calls.
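A hedged sketch of the kind of conversion involved, using the standard Win32 conversion APIs (this is not the specific helper the patch adds):

#include <windows.h>
#include <string>

// Convert a UTF-8 string to the UTF-16 form expected by the wide ("W")
// Win32 entry points.
std::wstring Utf8ToWide(const std::string &utf8) {
  const int len =
      MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, nullptr, 0);
  if (len <= 0)
    return std::wstring();
  std::wstring wide(len, L'\0');
  MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], len);
  wide.resize(len - 1);  // drop the terminating NUL the API also converted
  return wide;
}

// Usage: CreateFileW(Utf8ToWide(path).c_str(), ...) instead of the ANSI
// CreateFileA(path, ...).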
Patch by Cameron
Differential Revision: http://reviews.llvm.org/D17107
Reviewed By: zturner, amccarth
llvm-svn: 264074
IRExecutionUnits contain code and data that persistent declarations can
depend on. In order to keep them alive and provide for lookup of these
symbols, we now allow any PersistentExpressionState to keep a list of
execution units. Then, when doing symbol lookup on behalf of an
expression, any IRExecutionUnit can consult the persistent expression
states on a particular Target to find the appropriate symbol.
<rdar://problem/22864976>
llvm-svn: 263995
a way for compilation to take a "thread to use for compilation". If it isn't set then the
compilation will use the currently selected thread. This should help keep function execution
to the one thread intended.
llvm-svn: 263972
Persistent decls have traditionally only been types. However, we want to
be able to persist more things, like functions and global variables. This
changes some of the nomenclature and the lookup rules to make this possible.
<rdar://problem/22864976>
llvm-svn: 263864
We want to do a better job presenting errors that occur when evaluating
expressions. Key to this effort is getting away from a model where all
errors are spat out onto a stream where the client has to take or leave
all of them.
To this end, this patch adds a new class, DiagnosticManager, which
contains errors produced by the compiler or by LLDB as an expression
is created. The DiagnosticManager can dump itself to a log as well as
to a string. Clients will (in the future) be able to filter out the
errors they're interested in by ID or present subsets of these errors
to the user.
This patch is not intended to change the *users* of errors - only to
thread DiagnosticManagers to all the places where streams are used. I
also attempt to standardize our use of errors a bit, removing trailing
newlines and making clients omit 'error:', 'warning:' etc. and instead
pass the Severity flag.
The patch is testsuite-neutral, with modifications to one part of the
MI tests because it relied on "error: error:" being erroneously
printed. This patch fixes the MI variable handling and the testcase.
<rdar://problem/22864976>
llvm-svn: 263859
Summary:
This also adds a basic smoke test for linux core file reading. I'm checking in the core files as
well, so that the tests can run on all platforms. With some tricks I was able to produce
reasonably-sized core files (~40K).
This fixes the first part of pr26322.
Reviewers: zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18176
llvm-svn: 263628
This can cause differences in which bit patterns end up meaning YES or NO. In general, however, 0 == NO and 1 == YES.
To keep it simple, LLDB will now show "YES" and "NO" only for 1 and 0 respectively, and format other values as the plain numeric value instead.
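A minimal sketch of the new rule (illustrative only, not the actual summary provider):

#include <cstdint>
#include <string>

std::string FormatBOOL(int64_t value) {
  if (value == 0)
    return "NO";
  if (value == 1)
    return "YES";
  return std::to_string(value);  // any other bit pattern is shown numerically
}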
Fixes rdar://24809994
llvm-svn: 263604
Build-id support is being added to lld and by default it may produce a
64-bit build-id.
Prior to this change lldb would reject such a build-id. However, it would then
fall back to a 4-byte crc32, which is a poorer-quality identifier.
Differential Revision: http://reviews.llvm.org/D18096
llvm-svn: 263432
Turns out that most of the code that runs expressions (e.g. the ObjC runtime grubber) on
behalf of the expression parser was using the currently selected thread. But sometimes,
e.g. when we are evaluating breakpoint conditions/commands, we don't select the thread
we're running on, we instead set the context for the interpreter, and explicitly pass
that to other callers. That wasn't getting communicated to these utility expressions, so
they would run on some other thread instead, and that could cause a variety of subtle and
hard to reproduce problems.
I also went through the commands and cleaned up the use of GetSelectedThread. All those
uses should have been trying the thread in the m_exe_ctx belonging to the command object
first. It would actually have been pretty hard to get misbehavior in these cases, but for
correctness sake it is good to make this usage consistent.
<rdar://problem/24978569>
llvm-svn: 263326
Removed lldb_private::File::Duplicate() and the copy constructor and the assignment operator that used to duplicate the file handles, and made them private so no one uses them. Previously the lldb_private::File::Duplicate() function duplicated files that used file descriptors (int), but not file streams (FILE *), so the lldb_private::File::Duplicate() function only worked some of the time. No one else except the ScriptInterpreterPython was using these functions, so they aren't needed nor desired. Previously every time you would drop into the python interpreter we would duplicate files, and now we avoid this file churn.
<rdar://problem/24877720>
llvm-svn: 263161
Fix a problem raised with the previous patches being applied in the wrong order.
Committed on behalf of: Dean De Leo <dean@codeplay.com>
llvm-svn: 263134
This commit implements the reading of stack spilled function arguments for little endian MIPS targets.
Committed on behalf of: Dean De Leo <dean@codeplay.com>
llvm-svn: 263131
This commit implements the reading of stack spilled function arguments for little endian MIPS targets.
Committed on behalf of: Dean De Leo <dean@codeplay.com>
llvm-svn: 263130
Currently it is not specified, and since allocations are usually
requested once we hit a renderscript breakpoint, the language will be
inferred as renderscript by the ExpressionParser.
Actually, allocations attempt to invoke functions that are part of the RS runtime,
written in C/C++, so evaluating the calls in RenderScript could be
misleading.
In particular, in MIPS, the ABI between C/C++ (mips o32) and
renderscript (arm) might introduce subtle bugs when evaluating such
expressions.
This change explicitly sets the language used to evaluate the allocations
as C++.
Committed on behalf of: Dean De Leo <dean@codeplay.com>
llvm-svn: 263129
The expression language is currently tracked in a few places within the ClangExpressionParser constructor.
This patch adds a private lldb::LanguageType attribute to the ClangExpressionParser class and tracks the expression language from that one place.
Author: Luke Drummond <luke.drummond@codeplay.com>
Differential Revision: http://reviews.llvm.org/D17719
llvm-svn: 263099
Summary:
GCC does not emit DW_AT_data_member_location for members of a union.
Starting with a 0 value for member locations helps in reading union types
in such cases.
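A small sketch of the effect, with hypothetical names rather than the actual DWARF parser code:

#include <cstdint>

// If DW_AT_data_member_location is absent (as GCC emits for union members),
// fall back to offset 0, which is correct because every union member starts
// at the beginning of the union.
uint64_t GetMemberByteOffset(bool has_data_member_location,
                             uint64_t data_member_location) {
  return has_data_member_location ? data_member_location : 0;
}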
Reviewers: clayborg
Subscribers: ldrumm, lldb-commits
Differential Revision: http://reviews.llvm.org/D18008
llvm-svn: 263085
Previously line table parsing code assumed that the only gaps would
occur at the end of functions. In practice this isn't true, so this
patch makes the line table parsing more robust in the face of
functions with non-contiguous byte arrangements.
llvm-svn: 263078
That way you can set offset breakpoints that will move as the function they are
contained in moves (which address breakpoints can't do...)
I don't align the new address to instruction boundaries yet, so you have to get
this right yourself for now.
<rdar://problem/13365575>
llvm-svn: 263049