Code in ObjectFileELF::ParseTrampolineSymbols assumes that the sh_info
field of the .rel(a).plt section identifies the .plt section.
However, with recent GNU ld this is no longer true. As a result of this:
https://sourceware.org/bugzilla/show_bug.cgi?id=18169
in object files generated with current linkers the sh_info field of
.rel(a).plt now points to the .got.plt section (or .got on some targets).
This causes LLDB to fail to identify any PLT stubs, causing a number of
test case failures.
This patch changes LLDB to simply always look for the .plt section by
name. This should be safe across all linkers and targets.
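For illustration, a minimal sketch of the new lookup (assuming LLDB's
SectionList and ConstString APIs and a hypothetical obj_file variable;
this is not the verbatim patch):

  // Sketch only: find .plt by name instead of trusting the sh_info
  // field of .rel(a).plt.
  Section *plt_section = nullptr;
  if (SectionList *sections = obj_file.GetSectionList())
    plt_section = sections->FindSectionByName(ConstString(".plt")).get();
  if (plt_section == nullptr)
    return; // no PLT stubs to parse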
Differential Revision: http://reviews.llvm.org/D18973
llvm-svn: 266316
A number of test cases were failing on big-endian systems simply due to
byte order assumptions in the tests themselves, not any underlying bug
in LLDB.
These two test cases:
tools/lldb-server/lldbgdbserverutils.py
python_api/process/TestProcessAPI.py
actually check for big-endian target byte order, but contain Python errors
in the corresponding code paths.
These test cases:
functionalities/data-formatter/data-formatter-python-synth/TestDataFormatterPythonSynth.py
functionalities/data-formatter/data-formatter-smart-array/TestDataFormatterSmartArray.py
functionalities/data-formatter/synthcapping/TestSyntheticCapping.py
lang/cpp/frame-var-anon-unions/TestFrameVariableAnonymousUnions.py
python_api/sbdata/TestSBData.py (first change)
could be fixed to check for big-endian target byte order and update the
expected result strings accordingly. For the two synthetic tests, I've
also updated the source to make sure the fake_a value is always nonzero
on both big- and little-endian platforms.
These test cases:
python_api/sbdata/TestSBData.py (second change)
functionalities/memory/cache/TestMemoryCache.py
simply accessed memory with the wrong size, which went unnoticed on LE
but fails on BE.
Differential Revision: http://reviews.llvm.org/D18985
llvm-svn: 266315
Running the ARM instruction emulation test on a big-endian system
would fail, since the code doesn't respect endianness properly.
In EmulateInstructionARM::TestEmulation, the code assumes that an
instruction opcode read in from the test file is in target byte
order, but it is actually read in host byte order.
More difficult to fix, the EmulationStateARM structure models
the overlapping sregs and dregs by a union in _sd_regs. This
only works correctly if the host is a little-endian system.
I've removed the union in favor of a simple array containing
the 32 sregs, and changed any code accessing dregs to explicitly
use the correct two sregs overlaying that dreg in the proper
target order.
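As a self-contained illustration of that overlay (the 2N/2N+1 mapping
follows ARM VFP register banking; the names here are made up for the
example and are not the actual EmulationStateARM code):

  #include <cstdint>

  // 32 single-precision registers held as plain uint32_t values.
  // dreg N overlays sregs 2N (low half) and 2N+1 (high half) regardless
  // of host byte order; target endianness only matters once the value
  // is serialized into a byte buffer.
  uint64_t ReadDReg(const uint32_t sregs[32], unsigned n) {
    const uint64_t lo = sregs[2 * n];
    const uint64_t hi = sregs[2 * n + 1];
    return (hi << 32) | lo;
  }

  void WriteDReg(uint32_t sregs[32], unsigned n, uint64_t value) {
    sregs[2 * n] = static_cast<uint32_t>(value);           // low half
    sregs[2 * n + 1] = static_cast<uint32_t>(value >> 32); // high half
  }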
Also, EmulationStateARM::ReadPseudoMemory and WritePseudoMemory
track memory as a map of uint32_t values in host byte order, and
implement 64-bit memory accesses by splitting them up into two
uint32_t accesses. However, callers expect memory contents to be
provided in the form of a byte array (in target byte order).
This means the uint32_t contents need to be byte-swapped on
BE systems, and when splitting up a 64-bit access into two 32-bit
ones, byte order has to be respected.
Differential Revision: http://reviews.llvm.org/D18984
llvm-svn: 266314
This patch fixes a bunch of issues that show up on big-endian systems:
- The gnu_libstdcpp.py script doesn't follow the way libstdc++ encodes
bit vectors: it should identify the enclosing *word* and then access
the appropriate bit within that word. Instead, the script simply
operates on bytes. This gives the same result on little-endian
systems, but not on big-endian.
- lldb_private::formatters::WCharSummaryProvider always assumes wchar_t
is UTF16, even though it could also be UTF8 or UTF32. This is mostly
not an issue on little-endian systems, but immediately fails on BE.
Fixed by checking the size of wchar_t like WCharStringSummaryProvider
already does.
- ClangASTContext::GetChildCompilerTypeAtIndex uses uint32_t to access
the virtual base offset stored in the vtable, even though the size
of this field matches the target pointer size according to the C++
ABI. Again, this is mostly not visible on LE, but fails on BE.
- Process::ReadStringFromMemory uses strncmp to search for a terminator
  consisting of multiple zero bytes. This doesn't work, since strncmp
  stops at the first zero byte. Use memcmp instead (see the sketch
  after this list).
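A self-contained illustration of the last point (hypothetical buffer
contents, not LLDB code):

  #include <cassert>
  #include <cstring>

  int main() {
    // Looking for a two-byte terminator (e.g. the end of a UTF-16 string).
    const char buf[] = { 'A', '\0', 'B', '\0', '\0', '\0' };
    const char terminator[2] = { '\0', '\0' };

    // strncmp stops at the first zero byte, so it reports a "match" at
    // offset 1 even though only a single zero byte is present there.
    assert(strncmp(buf + 1, terminator, 2) == 0); // false positive
    // memcmp compares every byte and only matches the real terminator.
    assert(memcmp(buf + 1, terminator, 2) != 0);
    assert(memcmp(buf + 4, terminator, 2) == 0);
    return 0;
  }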
Differential Revision: http://reviews.llvm.org/D18983
llvm-svn: 266313
Currently, the DataExtractor::GetMaxU64Bitfield and GetMaxS64Bitfield
routines assume the incoming "bitfield_bit_offset" parameter uses
little-endian bit numbering, i.e. a bitfield_bit_offset 0 refers to
a bitfield whose least-significant bit coincides with the least-
significant bit of the surrounding integer.
On many big-endian systems, however, the big-endian bit numbering
is used for bit fields. Here, a bitfield_bit_offset 0 refers to
a bitfield whose most-significant bit coincides with the most-
significant bit of the surrounding integer.
Now, in principle LLDB could arbitrarily choose which semantics of
bitfield_bit_offset to use. However, there are two problems with
the current approach:
- When parsing DWARF, LLDB decodes bit offsets in little-endian
bit numbering on LE systems, but in big-endian bit numbering
on BE systems. Passing those offsets later on into the
DataExtractor routines gives incorrect results on BE.
- In the interim, LLDB's type layer combines byte and bit offsets
into a single number. I.e. instead of recording bitfields by
specifying the byte offset and byte size of the surrounding
integer *plus* the bit offset of the bit field within that integer,
it simply records a single bit offset number.
Now, note that converting from byte offset + bit offset to a
single offset value and back is well-defined if we either use
little-endian byte order *and* little-endian bit numbering,
or use big-endian byte order *and* big-endian bit numbering.
Any other combination will yield incorrect results.
Therefore, the simplest approach would seem to be to always use
the bit numbering that matches the system byte order. This makes
storing a single bit offset valid, and makes the existing DWARF
code correct. The only place to fix is to teach DataExtractor
to use big-endian bit numbering on big endian systems.
However, there is one additional caveat: we also get bit offsets
from LLDB synthetic bitfields. While the exact semantics of those
don't seem to be well-defined, from test cases it appears that
the intent was for the user-provided synthetic bitfield offset to
always use little-endian bit numbering. Therefore, on a big-endian
system we now have to convert those to big-endian bit numbering
to remain consistent.
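A minimal sketch of that conversion (illustrative only, not the exact
LLDB code; the function name is made up):

  #include <cstdint>

  // Convert a bitfield offset expressed in little-endian bit numbering
  // to big-endian bit numbering within an integer of storage_bits bits.
  uint32_t ToBigEndianBitOffset(uint32_t storage_bits, uint32_t field_bits,
                                uint32_t le_bit_offset) {
    return storage_bits - field_bits - le_bit_offset;
  }
  // Example: a 4-bit field at LE offset 0 inside a 32-bit integer gets
  // BE offset 28 -- both describe the field occupying the
  // least-significant bits of the integer.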
Differential Revision: http://reviews.llvm.org/D18982
llvm-svn: 266312
The Scalar implementation and a few other places in LLDB directly
access the internal implementation of APInt values using the
getRawData method. Unfortunately, pretty much all of these places
do not handle big-endian systems correctly. While on little-endian
machines, the pointer returned by getRawData can simply be used as
a pointer to the integer value in its natural format, no matter
what size, this is not true on big-endian systems: getRawData
actually points to an array of type uint64_t, with the first element
of the array always containing the least-significant word of the
integer. This means that if the bitsize of that integer is smaller
than 64, we need to add an offset to the pointer returned by
getRawData in order to access the value in its natural type, and
if the bitsize is *larger* than 64, we actually have to swap the
constituent words before we can access the value in its natural type.
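A self-contained sketch of the access pattern described above (assuming
a little-endian array of 64-bit words as getRawData returns it, and
GCC/Clang byte-order macros; the function name is made up):

  #include <cstdint>

  // Return a pointer to the value in its natural byte layout on this host.
  const uint8_t *NaturalBytes(const uint64_t *words, unsigned bit_size) {
    const uint8_t *bytes = reinterpret_cast<const uint8_t *>(words);
  #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    if (bit_size <= 64) {
      const unsigned byte_size = (bit_size + 7) / 8;
      return bytes + (sizeof(uint64_t) - byte_size);
    }
    // Values wider than 64 bits would also need their 64-bit words
    // swapped; a pointer adjustment alone is not enough.
  #endif
    return bytes;
  }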
This patch fixes every incorrect use of getRawData in the code base.
For the most part, this is done by simply removing uses of getRawData
in the first place, and using other APInt member functions to operate
on the integer data.
This can be done in many member functions of Scalar itself, as well
as in Symbol/Type.h and in IRInterpreter::Interpret. For the latter,
I've had to add a Scalar::MakeUnsigned routine to parallel the existing
Scalar::MakeSigned, e.g. in order to implement an unsigned divide.
The Scalar::RawUInt, Scalar::RawULong, and Scalar::RawULongLong routines
were already unused and could simply be removed. I've also removed
the Scalar::GetRawBits64 function and its few users.
The one remaining user of getRawData in Scalar.cpp is GetBytes.
I've implemented all the cases described above to correctly
implement access to the underlying integer data on big-endian
systems. GetData now simply calls GetBytes instead of reimplementing
its contents.
Finally, two places in the clang interface code were also accessing
APInt.getRawData in order to actually construct a byte representation
of an integer. I've changed those to make use of a Scalar instead,
to avoid having to re-implement the logic there.
The patch also adds a couple of unit tests verifying correct operation
of the GetBytes routine as well as the conversion routines. Those tests
actually exposed more problems in the Scalar code: the SetValueFromData
routine didn't work correctly for 128- and 256-bit data types, and the
SChar routine should have an explicit "signed char" return type to work
correctly on platforms where char defaults to unsigned.
Differential Revision: http://reviews.llvm.org/D18981
llvm-svn: 266311
Scalar::GetBytes provides a non-const access to the underlying bytes
of the scalar value, supposedly allowing for modification of those
bytes. However, even with the current implementation, this is not
really possible. For floating-point scalars, the pointer returned
by GetBytes refers to a temporary copy; modifications to that copy
will be simply ignored. For integer scalars, the pointer refers
to internal memory of the APInt implementation, which isn't
supposed to be directly modifiable; GetBytes simply casts away
the const-ness of the pointer.
With my upcoming patch to fix Scalar::GetBytes for big-endian
systems, this problem is going to get worse, since there we need
temporary copies even for some integer scalars. Therefore, this
patch makes Scalar::GetBytes const, fixing all those problems.
As a follow-on change, RegisterValue::GetBytes must be made const
as well. This in turn means that the way of initializing a
RegisterValue by doing a SetType followed by writing to GetBytes
no longer works. Instead, I've changed SetValueFromData to do
the equivalent of SetType itself, and then re-implemented
SetFromMemoryData to work on top of SetValueFromData.
There is still a need for RegisterValue::SetType, since some
platform-specific code uses it to reinterpret the contents of
an already filled RegisterValue. To make this usage work in
all cases (even changing from a type implemented via Scalar
to a type implemented as a byte buffer), SetType now simply
copies the old contents out, and then reloads the RegisterValue
from this data using the new type via SetValueFromData.
This in turn means that there is no remaining caller of
Scalar::SetType, so it can be removed.
The only other follow-on change was in MIPS EmulateInstruction
code, where some uses of RegisterValue::GetBytes could be made
const trivially.
Differential Revision: http://reviews.llvm.org/D18980
llvm-svn: 266310
This fixes several test case failures on s390x caused by the fact that
on this platform, the default "char" type is unsigned.
- In ClangASTContext::GetBuiltinTypeForEncodingAndBitSize we should return
an explicit *signed* char type for encoding eEncodingSint and bit size 8,
instead of the default platform char type (which may be unsigned).
This fix matches existing code in ClangASTContext::GetIntTypeFromBitSize,
and fixes the TestClangASTContext.TestBuiltinTypeForEncodingAndBitSize
unit test case.
- The test/expression_command/char/TestExprsChar.py test case is known to
fail on platforms defaulting to unsigned char (pr23069), and just needs
to be xfailed on s390x like on arm.
- The test/functionalities/watchpoint/watchpoint_on_vectors/main.c test
  case defines a vector of "char" and implicitly assumes it to be signed.
  Use an explicit "signed char" instead (see the illustration after this
  list).
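A tiny self-contained illustration of the underlying signedness
difference (hypothetical values, not taken from the test):

  #include <cstdio>

  int main() {
    char c = (char)0xFF;         // -1 where plain char is signed, 255 on s390x/arm
    signed char sc = (signed char)0xFF; // in practice -1 everywhere
    unsigned char uc = 0xFF;     // 255 everywhere
    printf("%d %d %d\n", c, sc, uc);
    return 0;
  }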
Differential Revision: http://reviews.llvm.org/D18979
llvm-svn: 266309
This patch adds support for Linux on SystemZ:
- A new ArchSpec value of eCore_s390x_generic
- A new directory Plugins/ABI/SysV-s390x providing an ABI implementation
- Register context support
- Native Linux support including watchpoint support
- ELF core file support
- Misc. support throughout the code base (e.g. breakpoint opcodes)
- Test case updates to support the platform
This should provide complete support for debugging the SystemZ platform.
Not yet supported are optional features like transaction support (zEC12)
or SIMD vector support (z13).
There is no instruction emulation, since our ABI requires that all code
provide correct DWARF CFI at all PC locations in .eh_frame to support
unwinding (i.e. -fasynchronous-unwind-tables is on by default).
The implementation follows existing platforms in a mostly straightforward
manner. A couple of things that are different:
- We do not use PTRACE_PEEKUSER / PTRACE_POKEUSER to access single registers,
  since some registers (the access registers) reside at offsets in the user
  area that are multiples of 4, but the PTRACE_PEEKUSER interface only allows
  accessing aligned 8-byte blocks in the user area. Instead, we use the
  s390-specific ptrace interface PTRACE_PEEKUSR_AREA / PTRACE_POKEUSR_AREA,
  which allows accessing a whole block of the user area in one go, in effect
  allowing us to treat parts of the user area as register sets (see the
  sketch after this list).
- SystemZ hardware does not provide any means to implement read watchpoints,
only write watchpoints. In fact, we can only support a *single* write
watchpoint (but this can span a range of arbitrary size). In LLDB this
means we support only a single watchpoint. I've set all test cases that
require read watchpoints (or multiple watchpoints) to expected failure
on the platform. [ Note that there were two test cases that installed
a read/write watchpoint even though they did not rely on the "read"
property anywhere. I've changed those to simply use plain write
watchpoints. ]
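A rough sketch of the PTRACE_PEEKUSR_AREA usage mentioned in the first
bullet above. The ptrace_area field names and the call shape follow my
reading of the s390 kernel headers and should be treated as assumptions,
not as a copy of the LLDB implementation:

  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <asm/ptrace.h> // s390: ptrace_area, PTRACE_PEEKUSR_AREA
  #include <cstdint>

  // Read `len` bytes of the user area starting at `offset` into `buf`.
  long PeekUserArea(pid_t tid, uint32_t offset, void *buf, uint32_t len) {
    ptrace_area parea;
    parea.len = len;
    parea.kernel_addr = offset;                       // offset in user area
    parea.process_addr = reinterpret_cast<unsigned long>(buf); // destination
    return ptrace(static_cast<__ptrace_request>(PTRACE_PEEKUSR_AREA),
                  tid, &parea, nullptr);
  }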
Differential Revision: http://reviews.llvm.org/D18978
llvm-svn: 266308
If the UnwindPlan did not identify how to unwind the stack pointer
register, LLDB currently assumes it can determine the caller's SP
from the current frame's CFA. This is true on most platforms
where CFA is by definition equal to the incoming SP at function
entry.
However, on the s390x target, we instead define the CFA to equal
the incoming SP plus an offset of 160 bytes. This is because
our ABI defines that the caller has to provide a register save
area of size 160 bytes. This area is allocated by the caller,
but is considered part of the callee's stack frame, and therefore
the CFA is defined as pointing to the top of this area.
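As a tiny illustration of that relationship (illustrative names only,
not LLDB code):

  #include <cstdint>

  // On s390x the CFA is the incoming SP plus the 160-byte register save
  // area, so the caller's SP is not simply equal to the CFA.
  constexpr uint64_t kS390xRegisterSaveAreaSize = 160;

  uint64_t CallerSPFromCFA(uint64_t cfa) {
    return cfa - kS390xRegisterSaveAreaSize;
  }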
In order to make this work on s390x, this patch introduces a new
ABI callback GetFallbackRegisterLocation that provides platform-
specific fallback register locations for unwinding. The existing
code to handle SP unwinding as well as volatile registers is moved
into the default implementation of that ABI callback, to allow
targets where that implementation is incorrect to override it.
This patch in itself is a no-op for all existing platforms.
But it is a pre-requisite for adding s390x support.
Differential Revision: http://reviews.llvm.org/D18977
llvm-svn: 266307
Summary:
In D18689, I removed the call to Normalize() in FileSpec::SetFile, because it no longer seemed
needed, and it resolved a quirk in the FileSpec API (spec.GetCString() returns a path with
backslashes, but spec.GetDirectory().GetCString() has forward slashes). This turned out to be a
problem because we would consider paths with different separators as different (which led to
unresolved breakpoints for instance).
Here, I am putting back in the call to Normalize() and adding a unittest for FileSpec::Equal. I
am commenting out the GetDirectory unittests until we figure out what the expected behaviour
is here.
Reviewers: zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D19060
llvm-svn: 266286
result_formatter used inspect.getfile() to get the python file name, which returned "*.pyc" if
the bytecode file was present. This resulted in files being displayed with the wrong extension,
and more critically, would confuse the rerun logic because it would try to rerun the pyc file
(which resulted in an empty rerun list as unittest refused to run those).
Fix: use inspect.getsourcefile() instead.
I am not sure why this was not an issue before. I can only assume that some system update
tricked python into producing bytecode files more aggressively.
llvm-svn: 266192
will not exceed the bounds of their Section. This is addressing a
problem where a file had a large space between two sections that
were not used by this module - the last symbol in the text section
had an enormous size because the distance between that and the first
symbol in the data section was used to compute the size.
http://reviews.llvm.org/D19004
<rdar://problem/25227945>
llvm-svn: 266165
When run with the multiprocess test runner, the getchar() trick doesn't work, so ninja check-lldb would fail on this test, but running the test directly worked fine.
Differential Revision: http://reviews.llvm.org/D19035
llvm-svn: 266145
(lldb) b ~Foo
(lldb) b Foo::~Foo
(lldb) b Bar::Foo::~Foo
Improved our C++ breakpoint locations tests as well to cover this issue.
<rdar://problem/25577252>
llvm-svn: 266139
The result variables aren't useful, and if you have a breakpoint on a
common function you can generate a lot of these. So I changed the
code that checks the condition to set ResultVariableIsInternal in the
EvaluateExpressionOptions that we pass to the execution.
Unfortunately, the check for this variable was done in the wrong place
(the static UserExpression::Evaluate) which is not how breakpoint
conditions execute expressions (UserExpression::Execute). So I moved
the check to UserExpression::Execute (which Evaluate also calls) and made the
overridden method DoExecute.
llvm-svn: 266093
this test was unintentionally XFAILed due to a change in the behavior of the expectedFailure
decorator. Fix that. Also, mark the test as debug-info independent while I'm in there.
llvm-svn: 266072
The structure definitions are not provided, but we perform a sizeof operation on
them, which causes a build failure. Include `asm/ptrace.h` to get the structure
definitions.
llvm-svn: 266042
Summary:
If we receive a SIGCONT or SIGTSTP while the driver is shutting down (which, sometimes, we do,
for reasons that are not completely clear to me), we would crash due to a null pointer
dereference. Guard against this situation.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18965
llvm-svn: 265958
-thread-info in lldb-mi does not conform to the protocol: it should end with the
current thread id, as described here:
https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Thread-Commands.html#GDB_002fMI-Thread-Commands
When printing all threads, the current thread id should be printed
afterwards.
Example:
-thread-info
^done,threads=[
{id="2",target-id="Thread 0xb7e14b90 (LWP 21257)",
frame={level="0",addr="0xffffe410",func="__kernel_vsyscall",
args=[]},state="running"},
{id="1",target-id="Thread 0xb7e156b0 (LWP 21254)",
frame={level="0",addr="0x0804891f",func="foo",
args=[{name="i",value="10"}],
file="/tmp/a.c",fullname="/tmp/a.c",line="158"},
state="running"}],
current-thread-id="1"
(gdb)
Patch from jacdavis@microsoft.com
Reviewers: zturner, chuckr
Differential Revision: http://reviews.llvm.org/differential/revision/edit/18880/
llvm-svn: 265858
This code was getting evaluated unintentionally at binding
generation time instead of binding file compilation time.
Addresses:
https://bugs.swift.org/browse/SR-1192
llvm-svn: 265829
The Python import works by ensuring the directory of the module or package is in sys.path, and then it does a Python `import foo`. The original code was not escaping the backslashes in the directory path, so this wasn't working.
Differential Revision: http://reviews.llvm.org/D18873
llvm-svn: 265738
os to "ios" or "macosx" if it is unspecified. For environments
where there genuinely is no os, we don't want to errantly
convert that to ios/macosx, e.g. bare board debugging.
Change PlatformRemoteiOS, PlatformRemoteAppleWatch, and
PlatformRemoteAppleTV to not create themselves if we have
an unspecified OS. Same problem - these are not appropriate
platforms for bare board debugging environments.
Have Process::Attach's logging take place if either
process or target logging is enabled.
<rdar://problem/25592378>
llvm-svn: 265732
It turns out this does make a functional change, in the case when the inferior hits an int3 that
was not placed by the debugger. Backing out for now.
llvm-svn: 265647
TargetOptions is ambiguous due to a definition in LLVM and in clang. This was
exposed by SVN r265640. Update to fix the build against the newer revision.
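For example (a hypothetical snippet, not the actual change): one way to
resolve such an ambiguity is to spell out the namespace explicitly
instead of relying on using-directives:

  #include "clang/Basic/TargetOptions.h"

  // With both the clang and llvm namespaces visible, a bare
  // "TargetOptions" no longer names a unique type; qualifying it does.
  clang::TargetOptions MakeOptions() { return clang::TargetOptions(); }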
llvm-svn: 265644
Summary:
SetThreadStopInfo was checking for a breakpoint at the current PC several times. This merges the
identical code into a separate function. I've left one breakpoint check alone, as it was doing
more complicated stuff, and I did not see a way to merge that without making the interface
complicated. NFC.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18819
llvm-svn: 265560
Summary:
This resolves a problem similar to D16720 (which handled the case when we single-step onto a
breakpoint), but this one deals with involuntary stops: when we stop a thread (e.g. because
another thread has hit a breakpoint and we are doing a full stop), we can end up stopping it right
before it executes a breakpoint instruction. In this case, the stop reason will be empty, but we
will still step over the breakpoint when we do the next resume, thereby missing a breakpoint hit.
I have observed this happening in TestConcurrentEvents, but I have no idea how to reproduce this
behavior more reliably.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18692
llvm-svn: 265525
This test sets the compiler optimization level to -O1 and
makes some assumptions about how local frame vars will be
stored (i.e. in registers). These assumptions are not always
true.
I did a first-pass set of improvements that:
(1) no longer assumes that every one of the target locations has
every variable in a register. Sometimes the compiler
is even smarter and skips the register entirely.
(2) simply expects one of the 5 or so variables it checks
to be in a register.
This test probably passes on a whole lot more systems than it
used to now. This is certainly true on OS X.
llvm-svn: 265498
Summary:
The '-p' option for dotest.py was ignored in multiprocess mode,
as the -p argument to the inferior would overwrite the -p argument
passed on the command line.
Reviewers: zturner, tfiala
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18779
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265422
Summary: Flag updated in D233237
Reviewers: spyffe, jingham, Eugene.Zelenko
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18660
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265421
Summary: Print environment from triple if it exists.
Reviewers: tfiala, clayborg
Subscribers: lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18620
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265420
Summary:
The logic to read modules from memory was added to LoadModuleAtAddress
in the dynamic loader, but not in process gdb remote. This means that when
the remote uses svr4 packets to give library info, libraries only present
on the remote will not be loaded.
This patch therefore involves some code duplication from LoadModuleAtAddress
in the dynamic loader, but removing this would require some amount of code
refactoring.
Reviewers: ADodds, tberghammer, tfiala, deepak2427, ted
Subscribers: tfiala, lldb-commits, sas
Differential Revision: http://reviews.llvm.org/D18531
Change by Francis Ricci <fjricci@fb.com>
llvm-svn: 265418
Previously we had 3 different methods to run shell commands on the
target and 4 copies of code waiting until a given file appears on the
target device (used for synchronization). This CL merges them into
one run_platform_command and one wait_for_file_on_target function
located in some utility classes.
Differential revision: http://reviews.llvm.org/D18789
llvm-svn: 265398
Summary:
There was a bug in linux core file handling, where if there was a running process with the same
process id as the id in the core file, the core file debugging would fail, as we would pull some
pieces of information (ProcessInfo structure) from the running process instead of the core file.
I fix this by routing the ProcessInfo requests through the Process class and overriding it in
ProcessElfCore to return correct data.
A (slightly convoluted) test is included.
Reviewers: clayborg, zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18697
llvm-svn: 265391
in thumb mode into one method in ArchSpec, replace checks for
specific cores in the disassembler with calls to this. Also call
this from the arm instruction emulation code.
The determination of whether a given ArchSpec is thumb-only is still
a bit of a hack, but at least the hack is consolidated into a single
place. In my original version of this patch http://reviews.llvm.org/D13578
I was calling into llvm's ARM feature tables to make this
determination, like
#include "llvm/Support/TargetRegistry.h"
#include "llvm/MC/MCSubtargetInfo.h"
#include "llvm/../../lib/Target/ARM/ARMGenRegisterInfo.inc"
#include "llvm/../../lib/Target/ARM/ARMFeatures.h"
[...]
std::string triple (GetTriple().getTriple());
const char *cpu = "";
const char *features_str = "";
const llvm::Target *curr_target = llvm::TargetRegistry::lookupTarget(triple.c_str(), Error);
std::unique_ptr<llvm::MCSubtargetInfo> subtarget_info_up (curr_target->createMCSubtargetInfo(triple.c_str(), cpu, features_str));
if (subtarget_info_up->getFeatureBits()[llvm::ARM::FeatureNoARM])
{
return true;
}
but those tables are post-llvm-build generated and linking against them
for all of our different build system methods was a big hiccup that I
haven't had time to revisit convincingly.
I'll keep that reviews.llvm.org patch around to remind myself that I
need to take another run at linking against the necessary tables
again in llvm.
<rdar://problem/23022803>
llvm-svn: 265377
Teach LLDB that different shells have different characters they are sensitive to, and use that knowledge to do shell-aware escaping.
This helps solve a class of problems on OS X where LLDB would try to launch via sh, and run into problems if the command line being passed to the inferior contained such special markers (hint: the shell would error out and we'd fail to launch).
This makes those launch scenarios work transparently via shell expansion.
Slightly improve the error message when this kind of failure occurs, to at least suggest that the user try going through 'process launch' directly.
Fixes rdar://problem/22749408
llvm-svn: 265357
$(LLDB_PYTHON_TESTSUITE_CC) defaults to the just-built clang. Together
with changes to the zorg repo, this enables the Green Dragon LLDB OS X
Xcode-based builder to run the new TSAN LLDB tests.
llvm-svn: 265315
Summary:
Even though FileSpec attempted to handle both kinds of path syntaxes (posix and windows) on both
platforms, it relied on the llvm path library to do its work, whose behavior differed on
different platforms. This led to subtle differences in FileSpec behavior between platforms. This
replaces the pieces of the llvm library with our own implementations. The functions are simply
copied from llvm, with #ifdefs replaced by runtime checks for ePathSyntaxWindows.
Reviewers: zturner
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D18689
llvm-svn: 265299