[lld/mac] Parallelize code signature computation
According to ministat, this is a small but measurable speedup (using the
repro in PR56121):

    N           Min           Max        Median           Avg        Stddev
x  10     3.7439518     3.7783802     3.7730219     3.7655502   0.012375226
+  10     3.6149218      3.692198     3.6519327     3.6502951   0.025905601
Difference at 95.0% confidence
        -0.115255 +/- 0.0190746
        -3.06078% +/- 0.506554%
        (Student's t, pooled s = 0.0203008)

(Without 858e8b17f7, this change here to use parallelFor is an 18% speedup,
and doing 858e8b17f7 on top of this change is just a 2.55% +/- 0.58% win.
Doing both results in a total speedup of 20.85% +/- 0.44%.)

Differential Revision: https://reviews.llvm.org/D128298
parent 6d6268dcbf
commit 0baf13e282
@@ -22,6 +22,7 @@
 #include "llvm/Support/EndianStream.h"
 #include "llvm/Support/FileSystem.h"
 #include "llvm/Support/LEB128.h"
+#include "llvm/Support/Parallel.h"
 #include "llvm/Support/Path.h"
 
 #if defined(__APPLE__)
@@ -1247,11 +1248,11 @@ void CodeSignatureSection::writeHashes(uint8_t *buf) const {
   // NOTE: Changes to this functionality should be repeated in llvm-objcopy's
   // MachOWriter::writeSignatureData.
   uint8_t *hashes = buf + fileOff + allHeadersSize;
-  for (uint64_t i = 0; i < getBlockCount(); ++i) {
+  parallelFor(0, getBlockCount(), [&](size_t i) {
     sha256(buf + i * blockSize,
            std::min(static_cast<size_t>(fileOff - i * blockSize), blockSize),
            hashes + i * hashSize);
-  }
+  });
 #if defined(__APPLE__)
   // This is macOS-specific work-around and makes no sense for any
   // other host OS. See https://openradar.appspot.com/FB8914231