On Darwin, if a file is mmap'ed and its code signature is invalid, touching the mapping used to crash. If we specify the MAP_RESILIENT_CODESIGN mmap flag when mapping a file for reading, we can avoid crashing.
Another mmap flag, MAP_RESILIENT_MEDIA, allows us to survive mmap'ing files that live on removable media such as network servers or removable hard drives. If a file was mapped and the media holding it later became unavailable, we would crash the next time we touched a page that wasn't paged in. Now such reads return zeroes instead of crashing us.
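A minimal sketch of how these flags might be combined when mapping a file for reading (error handling is illustrative only, and the flags are guarded since they exist only on Darwin):

  #include <sys/mman.h>
  #include <cstddef>

  // Map a file read-only, tolerating invalid code signatures and removable
  // media disappearing (reads of lost pages then return zeroes).
  void *MapFileResilient(int fd, size_t length) {
    int flags = MAP_PRIVATE;
  #if defined(MAP_RESILIENT_CODESIGN)
    flags |= MAP_RESILIENT_CODESIGN;
  #endif
  #if defined(MAP_RESILIENT_MEDIA)
    flags |= MAP_RESILIENT_MEDIA;
  #endif
    void *addr = ::mmap(nullptr, length, PROT_READ, flags, fd, 0);
    return addr == MAP_FAILED ? nullptr : addr;
  }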
<rdar://problem/25918698>
llvm-svn: 270254
Summary: One of the cases handled by ValueObjectChild::UpdateValue() uses the entire width of the parent's scalar value as the size of the child, and extracts the child by calling Scalar::ExtractBitfield(). This seems valid, but APInt::trunc(), APInt::sext() and APInt::zext() assert when the requested width is the same as the value's current width. Replacing those calls with sextOrTrunc(), zextOrTrunc(), sextOrSelf() and zextOrSelf() fixes the assertion failures.
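To illustrate (a standalone sketch, not code from the patch): the *Or* variants simply return the value unchanged when the requested width equals the current one, which is exactly the full-width-bitfield case described above.

  #include "llvm/ADT/APInt.h"

  llvm::APInt ExtractWholeValue(const llvm::APInt &value) {
    // value.zext(value.getBitWidth()) or value.trunc(value.getBitWidth())
    // would assert; the width must strictly grow or shrink.
    return value.zextOrTrunc(value.getBitWidth());  // equal width: returned as-is
  }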
Reviewers: uweigand, labath
Subscribers: labath, lldb-commits
Differential Revision: http://reviews.llvm.org/D20355
llvm-svn: 270062
This is a pretty straightforward first pass at removing a number of uses of
Mutex in favor of std::mutex or std::recursive_mutex. The complication is that
some interfaces take a Mutex::Locker & to lock internal locks. This patch
cleans up most of the easy cases. The only non-trivial change is in
CommandObjectTarget.cpp, where a Mutex::Locker was split into two.
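The typical mechanical change looks roughly like this (class and member names are made up for illustration; this is not code from the patch):

  #include <mutex>
  #include <vector>

  class ExampleList {
  public:
    void Append(int value) {
      // Before: Mutex::Locker locker(m_mutex); with lldb_private::Mutex
      std::lock_guard<std::recursive_mutex> guard(m_mutex);
      m_values.push_back(value);
    }

  private:
    std::recursive_mutex m_mutex;  // was Mutex(Mutex::eMutexTypeRecursive)
    std::vector<int> m_values;
  };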
llvm-svn: 269877
The main issues were:
- Listeners were recently converted to being handed out as shared pointers. When they listened to broadcasters, a strong reference to the listener was added, meaning listeners would never go away. This caused memory usage to grow and caused performance issues when many steps were done.
- The lldb_private::Process private state thread had an issue where, if a "stop" control signal sent to that thread was not acknowledged within 2 seconds, the thread would be cancelled; cancelling a thread that held a locked mutex could deadlock the test.
This change makes broadcasters hold weak references to their listeners. It also fixes some racy threading code: m_events_mutex is now non-recursive, the fragile use of a Predicate<bool> to say that new events are available is gone, and access to the events is controlled more safely by m_events_mutex together with a new m_events_condition.
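The weak-reference pattern looks roughly like this (a simplified sketch with made-up names, not the real Broadcaster/Listener API):

  #include <memory>
  #include <vector>

  class Listener;  // stands in for lldb_private::Listener

  class Broadcaster {
  public:
    void AddListener(const std::shared_ptr<Listener> &listener_sp) {
      // Weak: registering does not keep the listener alive.
      m_listeners.push_back(listener_sp);
    }

    void BroadcastEvent(/* event */) {
      for (const auto &weak : m_listeners) {
        if (std::shared_ptr<Listener> listener_sp = weak.lock()) {
          // Deliver the event; expired listeners are simply skipped.
          (void)listener_sp;
        }
      }
    }

  private:
    std::vector<std::weak_ptr<Listener>> m_listeners;
  };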
The private state thread now uses a safer way to confirm that a control event has been received: it creates an EventDataReceipt instance, attaches it to the event that carries the control request to the private state thread, and uses it to synchronize on the fact that the private state thread has received the event, instead of conveying this through a Predicate<bool>. When the signal event is received, the private state thread pulls the event off its queue, which causes EventData::DoOnRemoval() to be called and signals that the event has been received. This cleans up the signal-delivery notification so it no longer relies on a member variable of the Process class.
std::shared_ptr<EventDataReceipt> event_receipt_sp(new EventDataReceipt());
m_private_state_control_broadcaster.BroadcastEvent(signal, event_receipt_sp);
<rdar://problem/26256353> Listeners are being kept around longer than they should be due to recent changes
<rdar://problem/26256258> Private process state thread can be cancelled and cause deadlocks in test suite
llvm-svn: 269377
Patch by Nitesh Jain.
Summary: ArchSpec::m_flags will be set based on the ABI flags in the ELF header.
Reviewers: ovyalov, clayborg
Subscribers: lldb-commits, mohit.bhakkad, sagar, jaydeep, bhushan
Differential: D18858
llvm-svn: 269181
The Clear() log message was claiming to come from the destructor, which had me very confused when looking
at the log messages. Fix the message, and add a log message to the real destructor.
I also noticed that the destructor was needlessly locking the broadcaster mutex (Clear was going to
lock it again anyway), so remove that as well.
llvm-svn: 269058
"Allow LanguageRuntimes to return an error if they fail in the course of dynamic type discovery
This is not meant to report that a value doesn't have a dynamic type - it is only meant as a mechanism to propagate actual type discovery issues (e.g. malformed type metadata for languages that have such a notion)
This information is used by ValueObjectDynamic to set its own m_error, which is a fairly sharp and heavyweight tool to begin with
For the time being, this is an architectural improvement but a practical no-op as no existing runtimes are actually setting errors"
I need to think about what I want to do in this space more carefully - this attempt might be too heavy a hammer for the nail I am trying to fix, and I don't want to leave it in while I ponder.
llvm-svn: 268686
This is not meant to report that a value doesn't have a dynamic type - it is only meant as a mechanism to propagate actual type discovery issues (e.g. malformed type metadata for languages that have such a notion)
This information is used by ValueObjectDynamic to set its own m_error, which is a fairly sharp and heavyweight tool to begin with
For the time being, this is an architectural improvement but a practical no-op as no existing runtimes are actually setting errors
llvm-svn: 268591
We don't want a mutex in Debugger, as it would cause A/B locking issues with the lldb_private::Target mutex, but we do need to stop two threads from running Debugger::Clear at the same time. We have seen issues with this in the C++ global destructor chain, where the global debugger list is being destroyed and Debugger::~Debugger() calls Clear while another thread is in the middle of running that same function.
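One mutex-free way to serialize Clear is a once-flag; the sketch below only illustrates the idea and is not necessarily the mechanism this patch uses:

  #include <mutex>

  class Debugger {  // heavily simplified, for illustration only
  public:
    void Clear() {
      // Whichever thread gets here first runs the teardown; any other
      // caller (e.g. the global destructor chain) waits and then returns.
      std::call_once(m_clear_once, [this]() { ClearImpl(); });
    }

  private:
    void ClearImpl() {
      // Close IO handlers, clear the target list, etc.
    }

    std::once_flag m_clear_once;
  };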
<rdar://problem/26098913>
llvm-svn: 268563
Summary:
AdbClient was attempting to handle the case where the socket input arrived in pieces, but it was
failing to handle the case where the connection was closed before that happened. In this case, it
would just spin in an infinite loop calling Connection::Read. (This was also the cause of the
spurious timeouts on the darwin->android buildbot. The exact cause of the premature EOF remains
to be investigated, but is likely a server bug.)
Since this wait-for-a-certain-number-of-bytes seems like useful functionality to have, I am
moving it (with the infinite loop fixed) to the Connection class, and adding an
appropriate test for it.
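The fixed loop looks roughly like this (names and status values are illustrative, not the exact Connection API):

  #include <cstddef>
  #include <cstdint>

  enum class ReadStatus { Success, EndOfFile, Error };

  // Keep reading until exactly `length` bytes have arrived, but bail out on
  // EOF or error instead of spinning forever on a closed connection.
  ReadStatus ReadAll(uint8_t *dst, size_t length,
                     ReadStatus (*read_some)(uint8_t *, size_t, size_t &)) {
    size_t total = 0;
    while (total < length) {
      size_t bytes_read = 0;
      ReadStatus status = read_some(dst + total, length - total, bytes_read);
      if (status != ReadStatus::Success)
        return status;  // connection closed or errored: stop looping
      total += bytes_read;
    }
    return ReadStatus::Success;
  }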
Reviewers: clayborg, zturner, ovyalov
Subscribers: tberghammer, danalbert, lldb-commits
Differential Revision: http://reviews.llvm.org/D19533
llvm-svn: 268380
This reverts commit r267833 as it breaks the build. It looks like some work in progress got
committed together with the actual fix, but I'm not sure which one is which, so I'll revert the
whole patch and let the author resubmit it after fixing the build error.
llvm-svn: 267861
In templated const functions, trying to run an expression would produce the
error
  error: out-of-line definition of '$__lldb_expr' does not match any declaration in 'foo'
  member declaration does not match because it is const qualified
  error: 1 error parsing expression
which is no good. It turned out we don't actually need to worry about "const";
we just need to be consistent between the declaration of the expression and the
FunctionDecl we inject into the class for "this."
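A minimal example of the kind of code that hit this (illustrative only, not necessarily the shape of the added test case):

  template <typename T> struct Foo {
    T value;
    T GetValue() const {  // const member function of a class template
      return value;       // stop here and run "expr value"
    }
  };

  int main() {
    Foo<int> foo{47};
    return foo.GetValue();  // the expression above used to fail to parse
  }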
Also added a test case.
<rdar://problem/24985958>
llvm-svn: 267833
rL267291 introduced a lot of regressions in the arm-linux LLDB test suite.
This patch fixes half of them. I am merging it under its already reviewed android counterpart.
Another patch fixing the rest of the issues will follow this commit.
Differential revision: http://reviews.llvm.org/D19480
llvm-svn: 267508
Recommit modified version of r266311 including build bot regression fix.
This differs from the original r266311 by:
- Fixing Scalar::Promote to correctly zero- or sign-extend the value depending
on the signedness of the *source* type, not the target type.
- Omitting a few stand-alone fixes that were already committed separately.
llvm-svn: 266422
Currently, the DataExtractor::GetMaxU64Bitfield and GetMaxS64Bitfield
routines assume the incoming "bitfield_bit_offset" parameter uses
little-endian bit numbering, i.e. a bitfield_bit_offset 0 refers to
a bitfield whose least-significant bit coincides with the least-
significant bit of the surrounding integer.
On many big-endian systems, however, the big-endian bit numbering
is used for bit fields. Here, a bitfield_bit_offset 0 refers to
a bitfield whose most-significant bit coincides with the most-
significant bit of the surrounding integer.
Now, in principle LLDB could arbitrarily choose which semantics of
bitfield_bit_offset to use. However, there are two problems with
the current approach:
- When parsing DWARF, LLDB decodes bit offsets in little-endian
bit numbering on LE systems, but in big-endian bit numbering
on BE systems. Passing those offsets later on into the
DataExtractor routines gives incorrect results on BE.
- In the interim, LLDB's type layer combines byte and bit offsets
into a single number. I.e. instead of recording bitfields by
specifying the byte offset and byte size of the surrounding
integer *plus* the bit offset of the bit field within that field,
it simply records a single bit offset number.
Now, note that converting from byte offset + bit offset to a
single offset value and back is well-defined if we either use
little-endian byte order *and* little-endian bit numbering,
or use big-endian byte order *and* big-endian bit numbering.
Any other combination will yield incorrect results.
Therefore, the simplest approach would seem to be to always use
the bit numbering that matches the system byte order. This makes
storing a single bit offset valid, and makes the existing DWARF
code correct. The only place to fix is to teach DataExtractor
to use big-endian bit numbering on big endian systems.
However, there is one additional caveat: we also get bit offsets
from LLDB synthetic bitfields. While the exact semantics of those
don't seem to be well-defined, from test cases it appears that
the intent was for the user-provided synthetic bitfield offset to
always use little-endian bit numbering. Therefore, on a big-endian
system we now have to convert those to big-endian bit numbering
to remain consistent.
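The conversion implied here is sketched below, assuming the usual definition that big-endian bit offset 0 names the most-significant bit of the surrounding integer:

  #include <cstdint>

  // Convert a little-endian bit offset (0 == least-significant bit) into the
  // equivalent big-endian bit offset (0 == most-significant bit) for a
  // bitfield of `bit_size` bits inside a container of `container_bits` bits.
  uint32_t LittleToBigEndianBitOffset(uint32_t le_bit_offset, uint32_t bit_size,
                                      uint32_t container_bits) {
    return container_bits - bit_size - le_bit_offset;
  }

  // Example: a 3-bit field at LE offset 0 in a 32-bit integer is at BE
  // offset 29; it occupies the 3 least-significant bits either way.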
Differential Revision: http://reviews.llvm.org/D18982
llvm-svn: 266312
The Scalar implementation and a few other places in LLDB directly
access the internal implementation of APInt values using the
getRawData method. Unfortunately, pretty much all of these places
do not handle big-endian systems correctly. While on little-endian
machines, the pointer returned by getRawData can simply be used as
a pointer to the integer value in its natural format, no matter
what size, this is not true on big-endian systems: getRawData
actually points to an array of type uint64_t, with the first element
of the array always containing the least-significant word of the
integer. This means that if the bitsize of that integer is smaller
than 64, we need to add an offset to the pointer returned by
getRawData in order to access the value in its natural type, and
if the bitsize is *larger* than 64, we actually have to swap the
constituent words before we can access the value in its natural type.
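As an illustration of the small-value case (a standalone sketch, not code from the patch): on a big-endian host a 32-bit APInt lives in the low half of the first uint64_t word returned by getRawData, so a byte offset is needed to view it in its natural type.

  #include "llvm/ADT/APInt.h"
  #include <cstdint>

  // Assumes `value` holds a 32-bit integer and the host is big-endian.
  const uint8_t *NaturalBytes32BE(const llvm::APInt &value) {
    const uint8_t *raw = reinterpret_cast<const uint8_t *>(value.getRawData());
    // Skip the zero-filled high half of the 64-bit storage word.
    return raw + (sizeof(uint64_t) - sizeof(uint32_t));
  }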
This patch fixes every incorrect use of getRawData in the code base.
For the most part, this is done by simply removing uses of getRawData
in the first place, and using other APInt member functions to operate
on the integer data.
This can be done in many member functions of Scalar itself, as well
as in Symbol/Type.h and in IRInterpreter::Interpret. For the latter,
I've had to add a Scalar::MakeUnsigned routine to parallel the existing
Scalar::MakeSigned, e.g. in order to implement an unsigned divide.
The Scalar::RawUInt, Scalar::RawULong, and Scalar::RawULongLong
were already unused and can be simply removed. I've also removed
the Scalar::GetRawBits64 function and its few users.
The one remaining user of getRawData in Scalar.cpp is GetBytes.
There, I've implemented all the cases described above so that access
to the underlying integer data is correct on big-endian
systems. GetData now simply calls GetBytes instead of reimplementing
its contents.
Finally, two places in the clang interface code were also accessing
APInt.getRawData in order to actually construct a byte representation
of an integer. I've changed those to make use of a Scalar instead,
to avoid having to re-implement the logic there.
The patch also adds a couple of unit tests verifying correct operation
of the GetBytes routine as well as the conversion routines. Those tests
actually exposed more problems in the Scalar code: the SetValueFromData
routine didn't work correctly for 128- and 256-bit data types, and the
SChar routine should have an explicit "signed char" return type to work
correctly on platforms where char defaults to unsigned.
Differential Revision: http://reviews.llvm.org/D18981
llvm-svn: 266311
Scalar::GetBytes provides a non-const access to the underlying bytes
of the scalar value, supposedly allowing for modification of those
bytes. However, even with the current implementation, this is not
really possible. For floating-point scalars, the pointer returned
by GetBytes refers to a temporary copy; modifications to that copy
will be simply ignored. For integer scalars, the pointer refers
to internal memory of the APInt implementation, which isn't
supposed to be directly modifiable; GetBytes simply casts away
the const-ness of the pointer.
With my upcoming patch to fix Scalar::GetBytes for big-endian
systems, this problem is going to get worse, since there we need
temporary copies even for some integer scalars. Therefore, this
patch makes Scalar::GetBytes const, fixing all those problems.
As a follow-on change, RegisterValues::GetBytes must be made const
as well. This in turn means that the way of initializing a
RegisterValue by doing a SetType followed by writing to GetBytes
no longer works. Instead, I've changed SetValueFromData to do
the equivalent of SetType itself, and then re-implemented
SetFromMemoryData to work on top of SetValueFromData.
There is still a need for RegisterValue::SetType, since some
platform-specific code uses it to reinterpret the contents of
an already filled RegisterValue. To make this usage work in
all cases (even changing from a type implemented via Scalar
to a type implemented as a byte buffer), SetType now simply
copies the old contents out, and then reloads the RegisterValue
from this data using the new type via SetValueFromData.
This in turn means that there is no remaining caller of
Scalar::SetType, so it can be removed.
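The reinterpretation flow described above looks roughly like this (the types and signatures below are simplified stand-ins, not the real RegisterValue API):

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct RegInfo { uint32_t byte_size; /* encoding, format, ... */ };

  struct RegValue {
    std::vector<uint8_t> bytes;

    // "SetType"-style reinterpretation: copy the current contents out,
    // then reload them under the new register description.
    void Reinterpret(const RegInfo &new_info) {
      std::vector<uint8_t> old_bytes = bytes;  // copy the old contents out
      SetValueFromData(new_info, old_bytes.data(), old_bytes.size());
    }

    void SetValueFromData(const RegInfo &info, const uint8_t *data, size_t len) {
      bytes.assign(data, data + std::min<size_t>(len, info.byte_size));
    }
  };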
The only other follow-on change was in MIPS EmulateInstruction
code, where some uses of RegisterValue::GetBytes could be made
const trivially.
Differential Revision: http://reviews.llvm.org/D18980
llvm-svn: 266310
This patch adds support for Linux on SystemZ:
- A new ArchSpec value of eCore_s390x_generic
- A new directory Plugins/ABI/SysV-s390x providing an ABI implementation
- Register context support
- Native Linux support including watchpoint support
- ELF core file support
- Misc. support throughout the code base (e.g. breakpoint opcodes)
- Test case updates to support the platform
This should provide complete support for debugging the SystemZ platform.
Not yet supported are optional features like transaction support (zEC12)
or SIMD vector support (z13).
There is no instruction emulation, since our ABI requires that all code
provide correct DWARF CFI at all PC locations in .eh_frame to support
unwinding (i.e. -fasynchronous-unwind-tables is on by default).
The implementation follows existing platforms in a mostly straightforward
manner. A couple of things that are different:
- We do not use PTRACE_PEEKUSER / PTRACE_POKEUSER to access single registers,
since some registers (the access registers) reside at offsets in the user area
that are multiples of 4, while the PTRACE_PEEKUSER interface only allows
accessing aligned 8-byte blocks in the user area. Instead, we use the s390
specific ptrace interface PTRACE_PEEKUSR_AREA / PTRACE_POKEUSR_AREA, which
allows accessing a whole block of the user area in one go, in effect
letting us treat parts of the user area as register sets (see the sketch
after this list).
- SystemZ hardware does not provide any means to implement read watchpoints,
only write watchpoints. In fact, we can only support a *single* write
watchpoint (but this can span a range of arbitrary size). In LLDB this
means we support only a single watchpoint. I've set all test cases that
require read watchpoints (or multiple watchpoints) to expected failure
on the platform. [ Note that there were two test cases that installed
a read/write watchpoint even though they never relied on the "read"
property. I've changed those to simply use plain write watchpoints. ]
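The area-based ptrace access mentioned above looks roughly like this; the field usage of ptrace_area is written from memory and should be treated as an assumption rather than a reference:

  #include <sys/ptrace.h>
  #include <asm/ptrace.h>  // ptrace_area, PTRACE_PEEKUSR_AREA (s390/s390x only)
  #include <sys/types.h>
  #include <cstdint>

  // Read `size` bytes starting at `offset` within the traced process's user
  // area into `buf`, using the s390-specific area interface.
  long PeekUserArea(pid_t pid, uint32_t offset, void *buf, uint32_t size) {
    ptrace_area area;
    area.len = size;                               // length of the block
    area.kernel_addr = offset;                     // offset within the user area
    area.process_addr = (uint64_t)(uintptr_t)buf;  // tracer buffer for the data
    return ptrace((__ptrace_request)PTRACE_PEEKUSR_AREA, pid, &area, nullptr);
  }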
Differential Revision: http://reviews.llvm.org/D18978
llvm-svn: 266308
Summary: Print environment from triple if it exists.
Reviewers: tfiala, clayborg
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18620
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265420
Consolidate the knowledge of which ARM cores can only execute instructions
in thumb mode into one method in ArchSpec, and replace checks for
specific cores in the disassembler with calls to it. Also call
this from the arm instruction emulation code.
The determination of whether a given ArchSpec is thumb-only is still
a bit of a hack, but at least the hack is consolidated into a single
place. In my original version of this patch, http://reviews.llvm.org/D13578,
I was calling into llvm's arm feature tables to make this
determination, like
#include "llvm/Support/TargetRegistry.h"
#include "llvm/MC/MCSubtargetInfo.h"
#include "llvm/../../lib/Target/ARM/ARMGenRegisterInfo.inc"
#include "llvm/../../lib/Target/ARM/ARMFeatures.h"
[...]
std::string triple (GetTriple().getTriple());
const char *cpu = "";
const char *features_str = "";
const llvm::Target *curr_target = llvm::TargetRegistry::lookupTarget(triple.c_str(), Error);
std::unique_ptr<llvm::MCSubtargetInfo> subtarget_info_up (curr_target->createMCSubtargetInfo(triple.c_str(), cpu, features_str));
if (subtarget_info_up->getFeatureBits()[llvm::ARM::FeatureNoARM])
{
    return true;
}
but those tables are generated as part of the llvm build, and linking against them
from all of our different build systems was a big hiccup that I
haven't had time to revisit convincingly.
I'll keep that reviews.llvm.org patch around to remind myself that I
need to take another run at linking against the necessary tables
again in llvm.
<rdar://problem/23022803>
llvm-svn: 265377
Summary: On Windows (and possibly other hosts with LLDB_DISABLE_LIBEDIT defined), the (lldb) prompt won't print after async output, like from a breakpoint hit or a step. This patch forces the prompt to be printed out after async output.
Reviewers: zturner, clayborg
Subscribers: amccarth, lldb-commits
Differential Revision: http://reviews.llvm.org/D18335
llvm-svn: 264332
Win32 API calls that are Unicode aware require wide character
strings, but LLDB uses UTF8 everywhere. This patch does conversions
wherever necessary when passing strings into and out of Win32 API
calls.
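The conversion in the UTF-8 to UTF-16 direction looks roughly like this (a minimal sketch; the patch adds shared helpers rather than open-coding this at every call site):

  #include <windows.h>
  #include <string>

  // Convert a UTF-8 string into the UTF-16 form expected by the *W Win32 APIs.
  std::wstring Utf8ToWide(const std::string &utf8) {
    if (utf8.empty())
      return std::wstring();
    int needed = ::MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                       (int)utf8.size(), nullptr, 0);
    std::wstring wide(needed, L'\0');
    ::MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(),
                          &wide[0], needed);
    return wide;
  }

  // Usage (illustrative): ::CreateFileW(Utf8ToWide(path).c_str(), GENERIC_READ, ...);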
Patch by Cameron
Differential Revision: http://reviews.llvm.org/D17107
Reviewed By: zturner, amccarth
llvm-svn: 264074
On FreeBSD _LIBCPP_EXTERN_TEMPLATE is being defined from something
included by lldb/lldb-private.h. Undefine it after the #include to avoid
the redefinition warning.
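The workaround is essentially the following (where exactly the patch places it may differ):

  #include "lldb/lldb-private.h"

  // On FreeBSD something in the include chain above defines
  // _LIBCPP_EXTERN_TEMPLATE; drop it again so a later definition does not
  // produce a redefinition warning.
  #ifdef _LIBCPP_EXTERN_TEMPLATE
  #undef _LIBCPP_EXTERN_TEMPLATE
  #endif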
Differential Revision: http://reviews.llvm.org/D17402
llvm-svn: 263486
When the parent of an expression is anonymous, skip adding '.' or '->' before the expression name.
Differential Revision: http://reviews.llvm.org/D18005
llvm-svn: 263166
That way you can set offset breakpoints that will move as the function they are
contained in moves (which address breakpoints can't do...)
I don't align the new address to instruction boundaries yet, so you have to get
this right yourself for now.
<rdar://problem/13365575>
llvm-svn: 263049
The System-V x86_64 ABI requires floating point values to be passed
in 128-bit SSE vector registers (xmm0, ...). Printing such a
variable currently yields an <invalid load address>.
This patch makes LLDB's DWARF expression evaluator accept 128-bit
registers as scalars. It also relaxes the check that the size of the
result of the DWARF expression must equal the size of the variable:
the result may now be larger, since DWARF defers to the ABI how smaller
values are placed in a larger register.
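The relaxed check amounts to something like the following (variable names are illustrative, not the evaluator's actual code):

  #include <cstddef>

  // Before, the DWARF expression result had to match the variable size
  // exactly.  Now a larger result (e.g. a 128-bit xmm register holding a
  // 64-bit double) is fine; only a smaller result is an error.
  bool ResultSizeIsAcceptable(size_t expr_result_byte_size,
                              size_t variable_byte_size) {
    return expr_result_byte_size >= variable_byte_size;
  }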
Implementation note: I found the code in Value::SetContext() that changes
the m_value_type after the fact to be questionable. I added a sanity check
that the Value's memory buffer has indeed been written to (this is
necessary, because we may have a scalar value in a vector register), but
really I feel like this is the wrong place to be setting it.
Reviewed by Greg Clayton.
http://reviews.llvm.org/D17897
rdar://problem/24944340
llvm-svn: 262947