The Python import works by ensuring the directory of the module or package is in sys.path, and then it does a Python `import foo`. The original code was not escaping the backslashes in the directory path, so this wasn't working.
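For illustration, here is a minimal standalone sketch of the escaping fix; the helper and variable names are made up for this example, not the actual lldb code:

    #include <string>
    #include <cstdio>

    // Double every backslash so the directory survives being spliced into a
    // Python string literal (e.g. C:\foo must become "C:\\foo").
    static std::string EscapeBackslashes(const std::string &path) {
      std::string escaped;
      for (char c : path) {
        if (c == '\\')
          escaped += '\\';
        escaped += c;
      }
      return escaped;
    }

    int main() {
      std::string dir = "C:\\Users\\me\\scripts";  // directory containing foo.py
      std::string command =
          "sys.path.insert(0, \"" + EscapeBackslashes(dir) + "\"); import foo";
      // Prints: sys.path.insert(0, "C:\\Users\\me\\scripts"); import foo
      printf("%s\n", command.c_str());
      return 0;
    }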
Differential Revision: http://reviews.llvm.org/D18873
llvm-svn: 265738
os to "ios" or "macosx" if it is unspecified. For environments
where there genuinely is no os, we don't want to errantly
convert that to ios/macosx, e.g. bare board debugging.
Change PlatformRemoteiOS, PlatformRemoteAppleWatch, and
PlatformRemoteAppleTV to not create themselves if we have
an unspecified OS. Same problem - these are not appropriate
platforms for bare board debugging environments.
Have Process::Attach's logging take place if either
process or target logging is enabled.
<rdar://problem/25592378>
llvm-svn: 265732
It turns out this does make a functional change, in the case where the inferior hits an int3 that was
not placed by the debugger. Backing out for now.
llvm-svn: 265647
TargetOptions is ambiguous due to a definition in LLVM and in clang. This was
exposed by SVN r265640. Update to fix the build against the newer revision.
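For reference, a hedged standalone illustration of the ambiguity; the class definitions below are stand-ins, not the real headers:

    // Both namespaces declare a TargetOptions type, so an unqualified use
    // fails to compile once both sets of headers are visible.
    namespace llvm  { class TargetOptions {}; }
    namespace clang { class TargetOptions {}; }

    using namespace llvm;
    using namespace clang;

    int main() {
      // TargetOptions opts;      // error: reference to 'TargetOptions' is ambiguous
      clang::TargetOptions opts;  // OK: qualify the name to pick the clang type
      (void)opts;
      return 0;
    }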
llvm-svn: 265644
Summary:
SetThreadStopInfo was checking for a breakpoint at the current PC several times. This merges the
identical code into a separate function. I've left one breakpoint check alone, as it was doing
more complicated stuff, and I did not see a way to merge that without making the interface
complicated. NFC.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18819
llvm-svn: 265560
Summary:
This resolves a similar problem as D16720 (which handled the case when we single-step onto a
breakpoint), but this one deals with involuntary stops: when we stop a thread (e.g. because
another thread has hit a breakpoint and we are doing a full stop), we can end up stopping it right
before it executes a breakpoint instruction. In this case, the stop reason will be empty, but we
will still step over the breakpoint when we do the next resume, thereby missing a breakpoint hit.
I have observed this happening in TestConcurrentEvents, but I have no idea how to reproduce this
behavior more reliably.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18692
llvm-svn: 265525
Summary: Flag updated in D233237
Reviewers: spyffe, jingham, Eugene.Zelenko
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18660
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265421
Summary: Print environment from triple if it exists.
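As a rough sketch of the idea (not the lldb code path itself), reading the environment off an llvm::Triple looks like this:

    #include "llvm/ADT/Triple.h"  // header location at the time of this change
    #include <cstdio>

    int main() {
      llvm::Triple triple("x86_64-unknown-linux-gnu");
      // Only print the environment component when the triple actually has one.
      if (triple.hasEnvironment())
        printf("environment: %s\n", triple.getEnvironmentName().str().c_str());
      return 0;
    }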
Reviewers: tfiala, clayborg
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18620
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265420
Summary:
The logic to read modules from memory was added to LoadModuleAtAddress
in the dynamic loader, but not in process gdb remote. This means that when
the remote uses svr4 packets to give library info, libraries only present
on the remote will not be loaded.
This patch therefore involves some code duplication from LoadModuleAtAddress
in the dynamic loader, but removing this would require some amount of code
refactoring.
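A purely illustrative sketch of the fallback described above, with stand-in names that do not match the real LLDB interfaces:

    #include <cstdint>
    #include <memory>
    #include <string>

    struct Module {};  // stand-in for lldb_private::Module

    // Stubs for this sketch; the real code consults the host file system,
    // the module cache, and the gdb-remote connection.
    static std::shared_ptr<Module> FindOnHost(const std::string &path) { return nullptr; }
    static std::shared_ptr<Module> ReadModuleFromMemory(const std::string &path,
                                                        uint64_t load_addr) {
      return std::make_shared<Module>();
    }

    static std::shared_ptr<Module> LoadModuleAtAddress(const std::string &path,
                                                       uint64_t load_addr) {
      if (auto module_sp = FindOnHost(path))       // library present on the host?
        return module_sp;
      return ReadModuleFromMemory(path, load_addr); // only on the remote: read it from memory
    }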
Reviewers: ADodds, tberghammer, tfiala, deepak2427, ted
Subscribers: tfiala, lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18531
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265418
Summary:
There was a bug in linux core file handling, where if there was a running process with the same
process id as the id in the core file, the core file debugging would fail, as we would pull some
pieces of information (ProcessInfo structure) from the running process instead of the core file.
I fix this by routing the ProcessInfo requests through the Process class and overriding it in
ProcessElfCore to return correct data.
A (slightly convoluted) test is included.
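Illustration only; the class and method names below approximate the routing described above rather than the exact LLDB interfaces:

    #include <string>

    struct ProcessInfo {
      int pid;
      std::string name;
    };

    struct Process {
      virtual ~Process() = default;
      // Default behavior: ask the host OS about the live process with this pid.
      virtual ProcessInfo GetProcessInfo() { return ProcessInfo{1234, "live-process"}; }
    };

    struct ProcessElfCore : Process {
      // Override: never consult the host OS, so a running process that happens
      // to share the pid cannot leak its info into core-file debugging.
      ProcessInfo GetProcessInfo() override { return ProcessInfo{1234, "from-core-notes"}; }
    };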
Reviewers: clayborg, zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18697
llvm-svn: 265391
in thumb mode into one method in ArchSpec, replace checks for
specific cores in the disassembler with calls to this. Also call
this from the arm instruction emulation code.
The determination of whether a given ArchSpec is thumb-only is still
a bit of a hack, but at least the hack is consolidated into a single
place. In my original version of this patch http://reviews.llvm.org/D13578
I was calling into llvm's ARM feature tables to make this
determination, like
#include "llvm/Support/TargetRegistry.h"
#include "llvm/MC/MCSubtargetInfo.h"
#include "llvm/../../lib/Target/ARM/ARMGenRegisterInfo.inc"
#include "llvm/../../lib/Target/ARM/ARMFeatures.h"
[...]
std::string triple (GetTriple().getTriple());
const char *cpu = "";
const char *features_str = "";
const llvm::Target *curr_target = llvm::TargetRegistry::lookupTarget(triple.c_str(), Error);
std::unique_ptr<llvm::MCSubtargetInfo> subtarget_info_up (curr_target->createMCSubtargetInfo(triple.c_str(), cpu, features_str));
if (subtarget_info_up->getFeatureBits()[llvm::ARM::FeatureNoARM])
{
return true;
}
but those tables are post-llvm-build generated and linking against them
for all of our different build system methods was a big hiccup that I
haven't had time to revisit convincingly.
I'll keep that reviews.llvm.org patch around to remind myself that I
need to take another run at linking against the necessary tables
again in llvm.
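For reference, the consolidated check amounts to something like the following sketch; the function name and core list are illustrative, not the actual ArchSpec API:

    #include <string>

    // Cortex-M parts only execute Thumb/Thumb-2 instructions; classic ARM
    // application cores do not, so they are not listed here.
    static bool CoreIsThumbOnly(const std::string &core) {
      return core == "cortex-m0" || core == "cortex-m3" ||
             core == "cortex-m4" || core == "cortex-m7";
    }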
<rdar://problem/23022803>
llvm-svn: 265377
Teach LLDB that different shells have different characters they are sensitive to, and use that knowledge to do shell-aware escaping
This helps solve a class of problems on OS X where LLDB would try to launch via sh, and run into problems if the command line being passed to the inferior contained such special markers (hint: the shell would error out and we'd fail to launch)
This makes those launch scenarios work transparently via shell expansion
Slightly improve the error message when this kind of failure occurs to at least suggest that the user try going through 'process launch' directly
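A hedged sketch of the shell-aware escaping idea; the character sets below are illustrative, not the exact tables LLDB uses:

    #include <string>

    // Each shell is sensitive to a different set of characters.
    static std::string GetSpecialCharsForShell(const std::string &shell) {
      if (shell == "sh" || shell == "bash" || shell == "zsh")
        return "'\"<>()&;";
      if (shell == "csh" || shell == "tcsh")
        return "'\"<>()&;!";
      return "";  // unknown shell: escape nothing
    }

    // Prefix every shell-special character in the argument with a backslash.
    static std::string EscapeForShell(const std::string &arg, const std::string &shell) {
      const std::string special = GetSpecialCharsForShell(shell);
      std::string escaped;
      for (char c : arg) {
        if (special.find(c) != std::string::npos)
          escaped += '\\';
        escaped += c;
      }
      return escaped;
    }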
Fixes rdar://problem/22749408
llvm-svn: 265357
Summary:
Even though FileSpec attempted to handle both kinds of path syntaxes (posix and windows) on both
platforms, it relied on the llvm path library, whose behavior differed between platforms, to do
its work. This led to subtle differences in FileSpec behavior between platforms. This
replaces the pieces of the llvm library with our own implementations. The functions are simply
copied from llvm, with #ifdefs replaced by runtime checks for ePathSyntaxWindows.
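A minimal sketch of that pattern, assuming a hypothetical IsPathSeparator helper (only ePathSyntaxWindows is a real name from this change):

    enum PathSyntax { ePathSyntaxPosix, ePathSyntaxWindows };

    // Replace a compile-time #ifdef _WIN32 with a runtime check on the
    // FileSpec's declared syntax.
    static bool IsPathSeparator(char c, PathSyntax syntax) {
      if (syntax == ePathSyntaxWindows)
        return c == '/' || c == '\\';  // Windows accepts both separators
      return c == '/';                 // POSIX only uses forward slashes
    }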
Reviewers: zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18689
llvm-svn: 265299
In doing so, two bugs were uncovered (and fixed). The first bug
is that ClangASTContext::RemoveFastQualifiers() was broken, and
was not removing fast qualifiers (or doing anything else for that
matter). The second bug is that UnifyAccessSpecifiers treated
AS_None asymmetrically, which is probably an edge case, but seems
like a bug nonetheless.
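A hedged sketch of symmetric AS_None handling; the enum and the "keep the stricter specifier" merge policy are assumptions for illustration, not the actual UnifyAccessSpecifiers code:

    enum AccessSpecifier { AS_public, AS_protected, AS_private, AS_None };

    static AccessSpecifier UnifyAccessSpecifiers(AccessSpecifier lhs, AccessSpecifier rhs) {
      // Treat AS_None symmetrically: an unset specifier never wins,
      // no matter which argument it appears in.
      if (lhs == AS_None) return rhs;
      if (rhs == AS_None) return lhs;
      // Illustrative merge policy: keep the more restrictive of the two
      // (ordering here: public < protected < private).
      return lhs > rhs ? lhs : rhs;
    }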
llvm-svn: 265200
TypeSP SymbolFileDWARF::FindDefinitionTypeForDWARFDeclContext (const DWARFDeclContext &dwarf_decl_ctx);
The problem was we might be looking for a type "Foo" and find one from another language. Then the DWARFASTParserClang would try to make an AST type using a CompilerType that might be empty.
This fix makes sure that when we create a DWARFDeclContext from a DWARFDIE, we set the language of the DIE on the DWARFDeclContext. Then when we go to find matches for a DWARFDeclContext, we end up with a bunch of DIEs. We check each DWARFDIE that we found by asking it for its language and making sure the language is compatible with the type system that we want to use. This keeps us from using the wrong types to resolve forward declarations.
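An illustrative sketch of that language filter, using stand-in types rather than the real SymbolFileDWARF code:

    #include <vector>

    enum LanguageType { eLanguageTypeC_plus_plus, eLanguageTypeSwift, eLanguageTypeUnknown };

    struct CandidateDIE { LanguageType language; /* ... */ };

    // Stand-in for asking the type system whether it can handle a language;
    // a Clang-based type system would accept C-family languages only.
    static bool TypeSystemSupportsLanguage(LanguageType lang) {
      return lang == eLanguageTypeC_plus_plus;
    }

    // Keep only the candidate DIEs whose source language the requesting
    // type system can actually represent.
    static std::vector<CandidateDIE> FilterByLanguage(const std::vector<CandidateDIE> &dies) {
      std::vector<CandidateDIE> matches;
      for (const CandidateDIE &die : dies)
        if (TypeSystemSupportsLanguage(die.language))
          matches.push_back(die);
      return matches;
    }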
<rdar://problem/25276165>
llvm-svn: 265196
rnb_err_t
RNBRemote::HandlePacket_stop_process (const char *p)
{
    if (!DNBProcessInterrupt(m_ctx.ProcessID()))
        HandlePacket_last_signal (NULL);
    return rnb_success;
}
In the call to DNBProcessInterrupt we did:
nub_bool_t
DNBProcessInterrupt(nub_process_t pid)
{
    MachProcessSP procSP;
    if (GetProcessSP (pid, procSP))
        return procSP->Interrupt();
    return false;
}
This would always return false. It would cause HandlePacket_stop_process to always call "HandlePacket_last_signal (NULL);", which would send an extra stop reply packet _if_ the process is stopped. On a machine with enough cores, it would call DNBProcessInterrupt(...) and then HandlePacket_last_signal(NULL) so quickly that it would never send out an extra stop reply packet. But if the machine is slow enough or doesn't have enough cores, the call to HandlePacket_last_signal() could actually succeed and send an extra stop reply packet. This would cause problems up in GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse(), where it would get the first stop reply packet and then possibly return or execute an async packet. If it returned, then the next packet that was sent would get the second stop reply as its response. If it executed an async packet, the async packet would get the wrong response.
To fix this I did the following:
1 - in debugserver, I fixed "bool MachProcess::Interrupt()" to return true if it sends the signal so we avoid sending the stop reply twice on slower machines
2 - Added a log line to RNBRemote::HandlePacket_stop_process() to say if we ever send an extra stop reply so we will see this in the darwin console output if this does happen
3 - Added response validators to StringExtractorGDBRemote so that we can verify some responses to some packets.
4 - Added validators to packets that often follow stop reply packets, like the "m" packet for memory reads, and to JSON packets, since "jThreadsInfo" is often sent immediately following a stop reply.
5 - Modified GDBRemoteCommunicationClient::SendPacketAndWaitForResponseNoLock() to validate responses. Any "StringExtractorGDBRemote &response" that contains a valid response verifier will verify the response and keep looking for correct responses up to 3 times. This will help us get back on track if we do get extra stop replies. If a StringExtractorGDBRemote does not have a response validator, it will accept any packet in response.
6 - In GDBRemoteCommunicationClient::SendPacketAndWaitForResponse we copy the response validator from the "response" argument over into m_async_response so that if we send the packet by interrupting the running process, we can validate the response we actually get in GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse()
7 - Modified GDBRemoteCommunicationClient::SendContinuePacketAndWaitForResponse() to always check for an extra stop reply packet for 100ms when the process is interrupted. We were already doing this because we might interrupt a process with a \x03 packet, yet the process was in the process of stopping due to another reason. This race condition could cause an extra stop reply packet because the GDB remote protocol says if a \x03 packet is sent while the process is stopped, we should send a stop reply packet back. Now we always check for an extra stop reply packet when we manually interrupt a process.
The issue was showing up when our IDE would attempt to set a breakpoint while the process is running and this would happen:
--> \x03
<-- $T<stop reply 1>
--> z0,AAAAA,BB (set breakpoint)
<-- $T<stop reply 1> (incorrect extra stop reply packet)
--> c
<-- OK (response from z0 packet)
Now all packet traffic was off by one response. Since we now have a validator on the response for "z" packets, we do this:
--> \x03
<-- $T<stop reply 1>
--> z0,AAAAA,BB (set breakpoint)
<-- $T<stop reply 1> (Ignore this because this can't be the response to z0 packets)
<-- OK -- (we are back on track as this is a valid response to z0)
...
As time goes on we should add more packet validators.
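As a hedged sketch (the real StringExtractorGDBRemote interface differs), a validator for "z"/"Z" breakpoint packets could look roughly like this:

    #include <functional>
    #include <string>

    using ResponseValidator = std::function<bool(const std::string &)>;

    // A "z"/"Z" packet may only be answered with "OK", "" (unsupported), or an
    // "Exx" error, so a "T..." stop reply can be recognized as stray and skipped.
    static ResponseValidator MakeZPacketValidator() {
      return [](const std::string &response) {
        if (response == "OK" || response.empty())
          return true;
        if (response.size() == 3 && response[0] == 'E')
          return true;   // error reply, e.g. "E09"
        return false;    // anything else (such as "T05...") is not a z0 response
      };
    }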
<rdar://problem/22859505>
llvm-svn: 265086
Summary:
In the case of Dwo, DIERef stores a compile unit offset in the main object file, and not in the dwo.
The implementation of SymbolFileDWARFDwo::GetDIE inherited from SymbolFileDWARF tried to look up
the compilation unit in the DWO based on the main object file offset (and failed). I changed the
implementation to verify that the DIERef indeed references a compile unit belonging to this dwo and
then look up the die based on the die offset alone.
Includes a couple of fixes for mismatched struct/class tags.
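An illustrative sketch of the corrected lookup, with approximate names rather than the real SymbolFileDWARFDwo code:

    #include <cstdint>

    typedef uint32_t dw_offset_t;

    struct DIERef {
      dw_offset_t cu_offset;   // offset of the compile unit in the *main* object file
      dw_offset_t die_offset;  // offset of the DIE itself
    };

    struct DWARFCompileUnit {
      dw_offset_t base_obj_offset;  // skeleton unit offset this dwo was created for
      const void *GetDIEByOffset(dw_offset_t) const { return nullptr; }  // stub
    };

    // A dwo holds a single compile unit: verify the DIERef belongs to it,
    // then resolve the DIE purely by its offset rather than searching the
    // dwo's unit table with a main-object-file offset.
    static const void *GetDIEInDwo(const DWARFCompileUnit &dwo_cu, const DIERef &ref) {
      if (ref.cu_offset != dwo_cu.base_obj_offset)
        return nullptr;  // DIERef does not reference this dwo's compile unit
      return dwo_cu.GetDIEByOffset(ref.die_offset);
    }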
Reviewers: tberghammer, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18646
llvm-svn: 265011
1 - DWARF in .o files with debug map in executable: we would place the compile unit index in the upper 32 bits of the 64 bit value and the lower 32 bits would be the DIE offset
2 - DWO: we would place the compile unit offset in the upper 32 bits of the 64 bit value and the lower 32 bits would be the DIE offset
There was a mixing and matching of this and it wasn't done consistently.
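A minimal sketch of the two 64-bit encodings described above; the helpers are illustrative, not the actual DIERef implementation:

    #include <cstdint>

    // Debug map: upper 32 bits = compile unit *index*, lower 32 bits = DIE offset.
    // DWO:       upper 32 bits = compile unit *offset* in the main object file,
    //            lower 32 bits = DIE offset.
    static uint64_t MakeUserID(uint32_t cu_index_or_offset, uint32_t die_offset) {
      return (uint64_t(cu_index_or_offset) << 32) | die_offset;
    }

    static uint32_t GetCUField(uint64_t uid)   { return uint32_t(uid >> 32); }
    static uint32_t GetDIEOffset(uint64_t uid) { return uint32_t(uid & 0xffffffff); }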
Major changes include:
The DIERef constructor that takes a lldb::user_id_t now requires a SymbolFileDWARF:
DIERef(lldb::user_id_t uid, SymbolFileDWARF *dwarf)
It is needed so that it can be decoded correctly. If it is DWARF in .o files with debug map in executable, then we get the right compile unit from the SymbolFileDWARFDebugMap, otherwise, we use the compile unit offset and DIE offset for DWO or normal DWARF.
The function:
lldb::user_id_t DIERef::GetUID() const;
Now becomes
lldb::user_id_t DIERef::GetUID(SymbolFileDWARF *dwarf) const;
Again, we need the DWARF file to encode it correctly.
This removes the need for "lldb::user_id_t SymbolFileDWARF::MakeUserID() const" and for "bool SymbolFileDWARF::UserIDMatches (lldb::user_id_t uid) const". There were also many places that were doing things inefficiently, like:
1 - encode a dw_offset_t into a lldb::user_id_t
2 - call the public SymbolFile interface to resolve types using the lldb::user_id_t
3 - This would then decode the lldb::user_id_t into a DIERef, and then try to find that type.
There are many places that are now doing this more efficiently by storing DW_AT_type form values as DWARFFormValue objects and then making a DIERef from them and directly calling the underlying function to resolve the lldb_private::Type, lldb_private::CompilerType, lldb_private::CompilerDecl, lldb_private::CompilerDeclContext.
If there are any regressions in DWARF with DWO, we will need to fix any issues that arise since the original patch wasn't functional for the much more widely used DWARF in .o files with debug map.
<rdar://problem/25200976>
llvm-svn: 264909
They're not supposed to go in the symbol table, and in fact the way the JIT
is currently implemented it sometimes crashes when you try to get the
address of such a function. So we skip them.
llvm-svn: 264821
The problem was that the static DynamicLoaderDarwinKernel::Initialize() was recently changed to come before DynamicLoaderMacOSXDYLD::Initialize(), which caused DynamicLoaderDarwinKernel::CreateInstance(...) to be called before DynamicLoaderMacOSXDYLD::CreateInstance(...), and DynamicLoaderDarwinKernel would claim it could be the dynamic loader for a user space MacOSX process. The fix is to make DynamicLoaderDarwinKernel::CreateInstance() a bit more thorough when vetting the process so that it doesn't claim MacOSX user space processes.
<rdar://problem/25425373>
llvm-svn: 264794
quietly apply fixits for those who really trust clang's fixits.
Also, moved the retry into ClangUserExpression::Evaluate, where I can make a whole new ClangUserExpression
to do the work. Reusing any of the parts of a UserExpression in situ isn't supported at present.
<rdar://problem/25351938>
llvm-svn: 264793