Commit Graph

2079 Commits

Author SHA1 Message Date
Nico Weber cf16437e05 fix typos to cycle bots 2020-12-12 20:19:33 -05:00
Georgii Rymar ffbce65f95 [lib/Object, tools] - Make ELFObjectFile::getELFFile return reference.
We always have an object, so we don't have to return a pointer.

Differential revision: https://reviews.llvm.org/D92560
2020-12-04 16:02:29 +03:00
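A minimal self-contained sketch of the pattern this change adopts (names are illustrative, not the exact ELFObjectFile API): when the callee is guaranteed to hold an object, returning a reference removes the null check and pointer syntax at every call site.

```cpp
#include <iostream>
#include <string>

// Illustrative analogue of the change: the wrapper always owns a File, so the
// accessor can return a reference instead of a pointer.
struct File { std::string Name; };

class ObjectWrapper {
  File F{"example.o"};
public:
  // Before (hypothetical): const File *getFile() const { return &F; }
  const File &getFile() const { return F; } // after: no nullable return
};

int main() {
  ObjectWrapper Obj;
  // Callers no longer need a null check or '->':
  std::cout << Obj.getFile().Name << "\n";
  return 0;
}
```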
Amy Huang efd1ec0dec Recommit "[llvm-symbolizer] Switch to using native symbolizer by default on Windows"
This reverts commit 1b63177a56.
2020-11-30 17:36:12 -08:00
Amy Huang 00bbef2bb2 [llvm-symbolizer] Fix native symbolization on windows for inline sites.
The existing code handles this correctly and I checked that the code
in NativeInlineSiteSymbol also handles this correctly, but it was
wrong in the NativeFunctionSymbol code.

Differential Revision: https://reviews.llvm.org/D92134
2020-11-30 14:27:35 -08:00
Amy Huang 1b63177a56 Revert "[llvm-symbolizer] Switch to using native symbolizer by default on Windows"
Breaks some asan tests on the buildbot.

This reverts commit c74b427cb2.
2020-11-23 16:29:45 -08:00
Amy Huang c74b427cb2 [llvm-symbolizer] Switch to using native symbolizer by default on Windows
llvm-symbolizer used to use the DIA SDK for symbolization on
Windows; this patch switches to using native symbolization, which was
implemented recently.

Users can still make the symbolizer use DIA by adding the `-dia` flag
in the LLVM_SYMBOLIZER_OPTS environment variable.

Differential Revision: https://reviews.llvm.org/D91814
2020-11-23 15:57:08 -08:00
Amy Huang b4902bcd98 [NFC] remove print statement I accidentally added. 2020-11-23 10:51:09 -08:00
Georgii Rymar 9a99d23a1b [lib/Object] - Generalize the RelocationResolver API.
This allows reusing the RelocationResolver from code
that doesn't want to deal with the `RelocationRef` class.

I am going to use it in llvm-readobj. See the description
of D91530 for more details.

Differential revision: https://reviews.llvm.org/D91533
2020-11-20 10:32:49 +03:00
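A hedged sketch of the idea (signatures are simplified illustrations, not the actual RelocationResolver declarations): once a resolver is expressed in terms of raw relocation fields instead of a `RelocationRef`, callers that only have those fields, such as llvm-readobj, can invoke it directly.

```cpp
#include <cstdint>

// Illustrative resolver type: it depends only on raw relocation data, not on
// object::RelocationRef, so any caller that has these values can use it.
using RelocResolverFn = uint64_t (*)(uint64_t Type, uint64_t Offset,
                                     uint64_t SymValue, int64_t Addend);

// Hypothetical resolver for a simple "symbol + addend" relocation kind.
static uint64_t resolveAbs64(uint64_t /*Type*/, uint64_t /*Offset*/,
                             uint64_t SymValue, int64_t Addend) {
  return SymValue + static_cast<uint64_t>(Addend);
}

int main() {
  RelocResolverFn Resolver = resolveAbs64;
  // A tool could call this with values it parsed itself from the object file.
  uint64_t V = Resolver(/*Type=*/1, /*Offset=*/0x10,
                        /*SymValue=*/0x400000, /*Addend=*/8);
  return V == 0x400008 ? 0 : 1;
}
```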
Jorge Gorbe Moya 314a0d73a8 Fix crash after looking up dwo_id=0 in CU index.
In the current state, if getFromHash(0) is called and there's no CU with
dwo_id=0, the lookup will stop at an empty slot, then the check
`Rows[H].getSignature() != S` won't cause the lookup to fail and return
a nullptr (as it should), because the empty slot has a 0 in the
signature field, and a pointer to the empty slot will be incorrectly
returned.

This patch fixes this by using the index field in the hash entry to
check for empty slots: signature = 0 can match a valid hash but
according to the spec the index for an occupied slot will always be
non-zero.

Differential Revision: https://reviews.llvm.org/D91670
2020-11-19 11:15:01 -08:00
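A hedged sketch of the corrected lookup (slot layout, names, and probing are illustrative, not the exact DWARFUnitIndex internals): emptiness is decided by the index field, so a signature of 0 can still be matched by a real unit.

```cpp
#include <cstdint>
#include <vector>

// Illustrative hash-table slot: per the DWARF5 unit-index spec, the index of
// an occupied slot is never 0, so Index == 0 marks an empty slot.
struct Slot {
  uint64_t Signature = 0; // dwo_id; 0 is a legal signature for a real unit
  uint32_t Index = 0;     // 0 only in unoccupied slots
};

const Slot *getFromHash(const std::vector<Slot> &Rows, uint64_t S) {
  uint64_t Mask = Rows.size() - 1;      // table size is a power of two
  uint64_t H = S & Mask;                // initial probe
  uint64_t HP = ((S >> 32) & Mask) | 1; // secondary probe step (always odd)
  while (true) {
    const Slot &R = Rows[H];
    if (R.Index == 0)       // empty slot: lookup fails, even when S == 0
      return nullptr;
    if (R.Signature == S)   // occupied slot with matching dwo_id
      return &R;
    H = (H + HP) & Mask;    // the table is never full, so probing terminates
  }
}
```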
Amy Huang bc98034040 [llvm-symbolizer] Add inline stack traces for Windows.
This adds inline stack frames for symbolizing on Windows.

Differential Revision: https://reviews.llvm.org/D88988
2020-11-17 13:19:13 -08:00
serge-sans-paille 9218ff50f9 llvmbuildectomy - replace llvm-build by plain cmake
No longer rely on an external tool to build the llvm component layout.

Instead, leverage the existing `add_llvm_componentlibrary` cmake function and
introduce `add_llvm_component_group` to accurately describe component behavior.

These functions store extra properties in the created targets. These properties
are processed once all components are defined to resolve library dependencies
and produce the header expected by llvm-config.

Differential Revision: https://reviews.llvm.org/D90848
2020-11-13 10:35:24 +01:00
David Truby 81b96bb6f1 [Aarch64] Fix assumption that Windows implies x86
When compiling for Windows on Arm the amd64 debug interface from the Visual
Studio SDK is used, as the CMake currently only distinguishes between x86 and
amd64 by checking the pointer size. Instead we can get the target
architecture of the compiler and check that to distinguish between
architectures.
2020-10-30 12:11:34 +00:00
Amy Huang 7669f3c0f6 Recommit "[CodeView] Emit static data members as S_CONSTANTs."
We used to only emit static const data members in CodeView as
S_CONSTANTS when they were used; this patch makes it so they are always emitted.

This changes CodeViewDebug.cpp to find the static const members from the
class debug info instead of creating DIGlobalVariables in the IR
whenever a static const data member is used.

Bug: https://bugs.llvm.org/show_bug.cgi?id=47580

Differential Revision: https://reviews.llvm.org/D89072

This reverts commit 504615353f.
2020-10-28 16:35:59 -07:00
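For illustration, a hedged example of the kind of member this affects (hypothetical code, not taken from the patch): with the change, `kRetries` gets an S_CONSTANT record whether or not any function reads it.

```cpp
// Hypothetical translation unit built with CodeView debug info (e.g. clang-cl /Z7).
struct Settings {
  static const int kRetries = 3; // static const data member, in-class initialized
};

// Nothing here uses Settings::kRetries, so previously no DIGlobalVariable was
// created for it and no S_CONSTANT was emitted; now it is found through the
// class debug info and emitted regardless of use.
int main() { return 0; }
```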
Amy Huang 504615353f Revert "[CodeView] Emit static data members as S_CONSTANTs."
Seems like there's an assert in here that we shouldn't be running into.

This reverts commit 515973222e.
2020-10-27 11:29:58 -07:00
Amy Huang 515973222e [CodeView] Emit static data members as S_CONSTANTs.
We used to only emit static const data members in CodeView as
S_CONSTANTS when they were used; this patch makes it so they are always emitted.

I changed CodeViewDebug.cpp to find the static const members from the
class debug info instead of creating DIGlobalVariables in the IR
whenever a static const data member is used.

Bug: https://bugs.llvm.org/show_bug.cgi?id=47580

Differential Revision: https://reviews.llvm.org/D89072
2020-10-26 15:30:35 -07:00
David Blaikie 0ec5baa132 llvm-dwarfdump: Support verbose printing DW_OP_convert to print the CU local offset before the resolved absolute offset 2020-10-23 18:50:15 -07:00
David Blaikie a67d164a82 Revert several changes related to llvm-symbolizer exiting non-zero on failure.
Seems users have enough different uses of the symbolizer where they
might have unknown binaries and offsets such that "best effort" behavior
is all that's expected of llvm-symbolizer - so even erroring on unknown
executables and out of bounds offsets might not be suitable.

This reverts commit 1de0199748.
This reverts commit a7b209a6d4.
This reverts commit 338dd138ea.
2020-10-21 15:21:44 -07:00
Luqman Aden 51892a42da [COFF][ARM] Fix CodeView for Windows on 32bit ARM targets.
Create the LLVM / CodeView register mappings for the 32-bit ARM Window targets.

Reviewed By: compnerd

Differential Revision: https://reviews.llvm.org/D89622
2020-10-19 22:16:16 -07:00
David Blaikie a7b209a6d4 llvm-symbolizer: Exit non-zero when DWARF parsing errors have been rendered 2020-10-14 23:42:00 -07:00
David Blaikie 9670a45c98 libDebugInfoDWARF: Don't try to parse loclist[.dwo] headers when parsing debug_info[.dwo]
There's no way to know whether there's a loclist contribution to parse
if there's no loclistx encoding - and if there is one, there's no need
to walk back from the loclist_base (or, in the case of
info.dwo/loclist.dwo - starting at 0 in the contribution) to parse the
header, instead rely on the DWARF32/64 and address size in the CU
that's already available.

This would come up in split DWARF (non-split wouldn't try to read a
loclist header in the absence of a loclist_base) when one unit had
location lists and another did not (because the loclists.dwo section
would be non-empty in that case - in the case where it's empty the
parsing would silently skip).

Simplify the testing a bit, rather than needing a whole dwp, etc - by
creating a malformed loclists.dwo section (and use single file Split
DWARF) that would trip up any attempt to parse it - but no attempt
should be made.
2020-10-13 22:28:59 -07:00
Greg Clayton a4b842e294 Show register names in DWARF unwind info.
Register context information was already being passed into the DWARFDebugFrame code that dumps unwind information but it wasn't being used. This change adds the ability to dump register names if a valid MC register context was passed in and if it knows about the register. Updated the tests to use the newly returned register names.

Differential Revision: https://reviews.llvm.org/D88767
2020-10-05 15:34:33 -07:00
David Blaikie 6d0be74af5 llvm-dwarfdump: Don't try to parse rnglist tables when dumping CUs
It's not possible to do this in complete generality - a CU using a
sec_offset DW_AT_ranges has no way of knowing where its rnglists
contribution starts, so should not attempt to parse any full rnglist
table/header to do so. And even using FORM_rnglistx there's no need to
parse the header - the offset can be computed using the CU's DWARF
format (32 or 64) to compute offset entry sizes, and then the list
parsed at that offset without ever trying to find a rnglist contribution
header immediately prior to the rnglists_base.
2020-10-04 19:18:14 -07:00
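A hedged sketch of the computation described above (simplified, not the actual llvm-dwarfdump code): with DW_FORM_rnglistx, the position of the offset entry follows from the CU's DWARF format alone, so no rnglists header has to be parsed.

```cpp
#include <cstdint>

// Each entry in the offsets array that follows rnglists_base is 4 bytes in
// DWARF32 and 8 bytes in DWARF64; the rnglistx value is an index into it.
uint64_t rnglistOffsetEntryPos(uint64_t RnglistsBase, uint64_t RnglistxIndex,
                               bool IsDWARF64) {
  uint64_t OffsetEntrySize = IsDWARF64 ? 8u : 4u;
  return RnglistsBase + RnglistxIndex * OffsetEntrySize;
}
// The value read at that position, added to rnglists_base, is where the range
// list itself starts, and the list can be parsed directly from there.
```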
David Blaikie 92c45e4ee2 llvm-dwarfdump: Add support for DW_RLE_startx_endx 2020-10-04 17:50:43 -07:00
David Blaikie 628a319475 llvm-dwarfdump: Print addresses in debug_line to the parsed address size 2020-10-04 16:05:49 -07:00
David Blaikie ea83e0b17e llvm-dwarfdump: Dump address forms in their encoded length rather than always in 64 bits
Few places did this already - refactor them all into a common helper.
2020-10-04 15:48:57 -07:00
David Blaikie 8036cf7f54 llvm-dwarfdump: Skip tombstoned address ranges
Make the dumper & API a bit more informative by using the new tombstone
addresses to filter out or otherwise render more explicitly dead code
ranges.
2020-10-04 13:43:29 -07:00
Reid Kleckner 5519e4da83 Re-land "[PDB] Merge types in parallel when using ghashing"
Stored Error objects have to be checked, even if they are success
values.

This reverts commit 8d250ac3cd.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.

Original commit message:
-----------------------------------------

This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.

To give an idea, here is the /time output placed side by side:
                              BEFORE    | AFTER
  Input File Reading:           956 ms  |  968 ms
  Code Layout:                  258 ms  |  190 ms
  Commit Output File:             6 ms  |    7 ms
  PDB Emission (Cumulative):   6691 ms  | 4253 ms
    Add Objects:               4341 ms  | 2927 ms
      Type Merging:            2814 ms  | 1269 ms  -55%!
      Symbol Merging:          1509 ms  | 1645 ms
    Publics Stream Layout:      111 ms  |  112 ms
    TPI Stream Layout:          764 ms  |   26 ms  trivial
    Commit to Disk:            1322 ms  | 1036 ms  -300ms
----------------------------------------- --------
Total Link Time:               8416 ms    5882 ms  -30% overall

The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.

This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.

Algorithm
----------

Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.

In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206

The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
  tpiSources[tpiSrcIndex]->ghashes[typeIndex]

By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.

To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
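A hedged sketch of that cell layout and priority rule (field widths and names are assumed for illustration, not lld's exact GHashCell):

```cpp
#include <cstdint>

struct GHashCellSketch {
  uint32_t SourceIndex; // index of the input type stream (TpiSource)
  uint32_t TypeIndex;   // index of the type within that stream
  // Treat the pair as a single 64-bit number; a smaller value wins, so types
  // from objects earlier on the command line get the higher priority.
  uint64_t priority() const { return (uint64_t(SourceIndex) << 32) | TypeIndex; }
  bool higherPriorityThan(const GHashCellSketch &O) const {
    return priority() < O.priority();
  }
};
```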

After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.

Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.

Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.

Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.

Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.

Differential Revision: https://reviews.llvm.org/D87805
2020-09-30 15:44:38 -07:00
Reid Kleckner 8d250ac3cd Revert "[PDB] Merge types in parallel when using ghashing"
This reverts commit 49b3459930.
2020-09-30 14:55:32 -07:00
Reid Kleckner 49b3459930 [PDB] Merge types in parallel when using ghashing
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.

To give an idea, here is the /time output placed side by side:
                              BEFORE    | AFTER
  Input File Reading:           956 ms  |  968 ms
  Code Layout:                  258 ms  |  190 ms
  Commit Output File:             6 ms  |    7 ms
  PDB Emission (Cumulative):   6691 ms  | 4253 ms
    Add Objects:               4341 ms  | 2927 ms
      Type Merging:            2814 ms  | 1269 ms  -55%!
      Symbol Merging:          1509 ms  | 1645 ms
    Publics Stream Layout:      111 ms  |  112 ms
    TPI Stream Layout:          764 ms  |   26 ms  trivial
    Commit to Disk:            1322 ms  | 1036 ms  -300ms
----------------------------------------- --------
Total Link Time:               8416 ms    5882 ms  -30% overall

The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.

This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.

Algorithm
----------

Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.

In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206

The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
  tpiSources[tpiSrcIndex]->ghashes[typeIndex]

By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.

To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.

After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.

Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.

Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.

Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.

Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.

Differential Revision: https://reviews.llvm.org/D87805
2020-09-30 14:22:48 -07:00
David Blaikie 0328feb086 DebugInfo: Filter DWARFv5 TUs out of the debug_info unit list when CUs requested
Since DWARFv5 places TUs in debug_info, some of DWARFContext's APIs have
become a bit erroneous, including TUs in the CU list by accident.
Correct that by providing compile_units (& dwo_compile_units) that
filter out the type units from the debug_info units.

Differential Revision: https://reviews.llvm.org/D87935
2020-09-23 22:15:53 -07:00
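A hedged sketch of the filtering idea (simplified stand-ins for DWARFContext/DWARFUnit, not the actual API surface): all the units live in the debug_info list, and the compile-unit view simply skips the DWARFv5 type units.

```cpp
#include <memory>
#include <vector>

// Stand-in for llvm::DWARFUnit with just the bit this example needs.
struct UnitSketch {
  bool IsTypeUnit;
  bool isTypeUnit() const { return IsTypeUnit; }
};

// compile_units()-style view: iterate the debug_info units, drop type units.
std::vector<const UnitSketch *>
compileUnitsOnly(const std::vector<std::unique_ptr<UnitSketch>> &InfoUnits) {
  std::vector<const UnitSketch *> CUs;
  for (const auto &U : InfoUnits)
    if (!U->isTypeUnit())
      CUs.push_back(U.get());
  return CUs;
}
```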
Jonas Devlieghere e1ef7183c6 [dwarfdump] Warn for tags with DW_CHILDREN_yes but no children.
Flag DIEs that have DW_CHILDREN_yes set in their abbreviation but don't
actually have any children.

rdar://59809554

Differential revision: https://reviews.llvm.org/D88048
2020-09-23 22:12:04 -07:00
David Blaikie ad68a8b952 DebugInfo: Cleanup RLE dumping, using a length-constrained DataExtractor rather than carrying the end offset separately 2020-09-18 19:32:38 -07:00
David Blaikie 51a505340d DebugInfo: Simplify line table parsing to take all the units together, rather than CUs and TUs separately 2020-09-18 11:18:23 -07:00
David Blaikie e0802fe016 DebugInfo: Tidy up initializing multi-section contributions in DWARFContext 2020-09-18 10:54:43 -07:00
Simon Pilgrim ed53ff4cde SymbolizableObjectFile.h - remove unnecessary includes. NFCI.
Use forward declarations where possible, move includes down to SymbolizableObjectFile.cpp and avoid duplicate includes.
2020-09-17 13:18:53 +01:00
Reid Kleckner e47d2927de Include (Type|Symbol)Record.h less
Most clients only need CVType and CVSymbol, not structs for every type
and symbol. Move CVSymbol and CVType to CVRecord.h to accomplish this.
Update some of the common headers that need CVSymbol and CVType to use
the new location.
2020-09-16 09:59:03 -07:00
Petr Hosek 9c73e55510 Revert "[DebugInfo] Remove dots from getFilenameByIndex return value"
This is failing on Windows bots due to path separator normalization.

This reverts commit 042c235068.
2020-09-15 10:06:47 -07:00
Petr Hosek 042c235068 [DebugInfo] Remove dots from getFilenameByIndex return value
When concatenating directory with filename in getFilenameByIndex, we
might end up with a path that contains extra dots. For example, if the
input is /path and ./example, we would return /path/./example. Run
sys::path::remove_dots on the output to eliminate unnecessary dots.

Differential Revision: https://reviews.llvm.org/D87657
2020-09-14 20:19:06 -07:00
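A hedged usage sketch of the cleanup described above (POSIX-style paths assumed; this is an illustration built on llvm::sys::path, not the getFilenameByIndex code itself):

```cpp
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/Path.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::SmallString<128> Path("/path");
  llvm::sys::path::append(Path, "./example"); // yields "/path/./example"
  llvm::sys::path::remove_dots(Path);         // strips the "." -> "/path/example"
  llvm::outs() << Path << "\n";
  return 0;
}
```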
David Blaikie 69da27c749 llvm-symbolizer: Add optional "start file" to match "start line"
Since a function might have portions of its code coming from multiple
different files, "start line" is ambiguous (it can't just be resolved
relative to the file/line specified). Add start file to disambiguate it.
2020-09-08 15:40:58 -07:00
Xing GUO 67ce11405b [llvm-dwarfdump] Warn user when it encounters no null terminated strings.
When llvm-dwarfdump encounters a string that is not null terminated, we
should warn the user about it rather than ignore it and print nothing.

Before this patch, when llvm-dwarfdump dumps a .debug_str section whose
content is "abc", it prints:

```
.debug_str contents:
```

After this patch:

```
.debug_str contents:
warning: no null terminated string at offset 0x0
```

Reviewed By: jhenderson, MaskRay

Differential Revision: https://reviews.llvm.org/D86998
2020-09-03 08:49:57 +08:00
Xing GUO 369f9169a5 [DebugInfo] Simplify string table dumpers.
This patch adds a helper function DumpStrSection to simplify codes.
Besides, nonprintable chars in debug_str and debug_str.dwo sections
are printed as escaped chars.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D86918
2020-09-02 08:41:10 +08:00
Jordan Rupprecht 202766947e [NFC] Fix unused var in release builds.
This was always unused, but the change in D86354 upgraded this to a compiler warning.
2020-09-01 16:38:24 -07:00
David Blaikie f7a49d2aa6 [WIP][DebugInfo] Lazily parse debug_loclist offsets
Parsing DWARFv5 debug_loclist offsets when a CU is parsed is weighing
down memory usage of symbolizers that don't need to parse this data at
all. There's not much benefit to caching these anyway - since they are
O(1) lookup and reading once you know where the offset list starts (and
can do bounds checking with the offset list size too).

In general, I think it might be time to start paying down some of the
technical debt of loc/loclist/range/rnglist parsing to try to unify it a
bit more.

eg:

* Currently DWARFUnit has: RangeSection, RangeSectionBase, LocSection,
  LocSectionBase, LocTable, RngListTable, LoclistTableHeader (it'd be nice if
  these were all wrapped up in two variables - one for loclists, one for
  rnglists)

* rnglists and loclists are handled differently (see:
  LoclistTableHeader, but no RnglistTableHeader)

* maybe all these types could be less stateful - lazily parse what they
  need to, even reparsing rather than caching because it doesn't seem
  too expensive, for instance. (though admittedly so long as it's a
  constant cost/overhead per compilation that's probably adequate)

* Maybe implementing and using a DWARFDataExtractor that can be
  sub-ranged (so we could slice it up to just the single contribution) -
  though maybe that's not so useful because loc/ranges need to refer to
  it by absolute, not contribution-relative mechanisms

Differential Revision: https://reviews.llvm.org/D86110
2020-08-18 10:49:39 -07:00
Igor Kudrin 95fad44e34 [DebugInfo] Avoid an infinite loop with a truncated pre-v5 .debug_str_offsets.dwo.
dumpStringOffsetsSection() expects the size of a contribution to be
correctly aligned. The patch adds the corresponding verifications for
pre-v5 cases.

Differential Revision: https://reviews.llvm.org/D85739
2020-08-14 13:11:37 +07:00
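A hedged sketch of the kind of verification this adds (illustrative only, not the dumpStringOffsetsSection code): a pre-v5 contribution is a flat array of fixed-size entries, so a size that is not a multiple of the entry size signals a truncated section and the dump should stop rather than loop.

```cpp
#include <cstdint>

bool preV5ContributionLooksValid(uint64_t ContributionSize, bool IsDWARF64) {
  uint64_t EntrySize = IsDWARF64 ? 8u : 4u; // each entry is one section offset
  return ContributionSize % EntrySize == 0; // otherwise: truncated or misaligned
}
```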
Igor Kudrin 9ceb192e14 [llvm-dwarfdump] Avoid crashing if an abbreviation offset is invalid.
Note that DWARFUnit::getAbbreviations() returns nullptr if the
abbreviations could not be read, but callers used the returned
pointer without checking.

Differential Revision: https://reviews.llvm.org/D85738
2020-08-12 16:01:53 +07:00
David Stenberg 91bd9db2cd [DebugInfo] Allow GNU macro extension to be read
Allow the GNU .debug_macro extension to be parsed and printed by
llvm-dwarfdump. In an upcoming patch support will be added for emitting
that format also.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D82974
2020-08-11 13:30:52 +02:00
James Henderson ca05601cd2 [DebugInfo] Don't error for zero-length arange entries
Although the DWARF specification states that .debug_aranges entries
can't have length zero, these can occur in the wild. There's no
particular reason to enforce this part of the spec, since functionally
they have no impact. The patch removes the error and introduces a new
warning for premature terminator entries which does not stop parsing.

This is a relanding of cb3a598c87, adding the missing obj2yaml part
that was needed.

Fixes https://bugs.llvm.org/show_bug.cgi?id=46805. See also
https://reviews.llvm.org/D71932 which originally introduced the error.

Reviewed by: ikudrin, dblaikie, Higuoxing

Differential Revision: https://reviews.llvm.org/D85313
2020-08-10 14:57:52 +01:00
Nico Weber bc5d68dd8a Revert "[DebugInfo] Don't error for zero-length arange entries"
This reverts commit cb3a598c87.
Breaks build of check-llvm dep obj2yaml everywhere.
2020-08-10 08:20:35 -04:00
James Henderson cb3a598c87 [DebugInfo] Don't error for zero-length arange entries
Although the DWARF specification states that .debug_aranges entries
can't have length zero, these can occur in the wild. There's no
particular reason to enforce this part of the spec, since functionally
they have no impact. The patch removes the error and introduces a new
warning for premature terminator entries which does not stop parsing.

Fixes https://bugs.llvm.org/show_bug.cgi?id=46805. See also
https://reviews.llvm.org/D71932 which originally introduced the error.

Reviewed by: ikudrin, dblaikie

Differential Revision: https://reviews.llvm.org/D85313
2020-08-10 12:48:31 +01:00
Raphael Isemann 1de43bd6df Revert "PDBExtras.h - remove unnecessary raw_ostream forward declaration. NFCI."
This reverts commit 87c5437afd.

The commit includes several headers in the middle of a function, which
breaks pretty much everything.
2020-08-06 15:15:43 +02:00