I found that the initial corpus allocation in fork mode has certain shortcomings,
so I designed a new initial corpus allocation strategy based on grouping seeds by size.
This method gives more energy to the small seeds in the corpus and
increases fuzzing throughput.
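To illustrate the idea (this is a minimal sketch, not the actual libFuzzer implementation; the helper name and the energy formula are assumptions made for the example):

```
// Illustrative only: group seeds by file size and give the groups of
// smaller seeds a larger run multiplier ("energy") in fork mode.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct Seed { std::string Path; size_t Size; };

std::vector<size_t> AssignEnergyBySizeGroup(std::vector<Seed> &Corpus,
                                             size_t NumGroups) {
  std::vector<size_t> Energy(Corpus.size(), 1);
  if (Corpus.empty() || NumGroups == 0) return Energy;
  std::sort(Corpus.begin(), Corpus.end(),
            [](const Seed &A, const Seed &B) { return A.Size < B.Size; });
  size_t PerGroup = (Corpus.size() + NumGroups - 1) / NumGroups;
  for (size_t I = 0; I < Corpus.size(); ++I) {
    size_t Group = I / PerGroup;        // group 0 holds the smallest seeds
    Energy[I] = NumGroups - Group;      // smaller seeds get more runs
  }
  return Energy;
}
```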
FuzzBench data ("glibfuzzer" is the variant with -fork_corpus_groups=1):
https://www.fuzzbench.com/reports/experimental/2021-08-05-parallel/index.html
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D105084
Extend the existing single-pass algorithm for `Merger::Merge` with an algorithm that gives better results. This new implementation can be used with a new **set_cover_merge=1** flag.
This greedy set cover implementation gives a substantially smaller final corpus (40%-80% fewer test cases) while preserving the same features/coverage. At the same time, the execution-time penalty is not that significant (+50% for ~1M corpus files and far less for smaller corpora). These results were obtained by comparing several targets with corpora of varying sizes.
Change `Merger::CrashResistantMergeInternalStep` to collect all features from each file and not just unique ones. This is needed for the set cover algorithm to work correctly. The implementation of the algorithm in `Merger::SetCoverMerge` uses a bitvector to store features that are covered by a file while performing the pass. Collisions while indexing the bitvector are ignored similarly to the fuzzer.
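A simplified sketch of the greedy selection (the real `Merger::SetCoverMerge` reads features from the merge control file and differs in details; the function below is illustrative):

```
// Greedy set cover: repeatedly pick the file that covers the most
// not-yet-covered features, tracking coverage in a bitvector.
// Collisions from the modulo indexing are ignored, as described above.
#include <cstddef>
#include <vector>

std::vector<size_t>
GreedySetCover(const std::vector<std::vector<size_t>> &FileFeatures,
               size_t NumFeatureBits) {
  std::vector<size_t> Selected;
  if (NumFeatureBits == 0) return Selected;
  std::vector<bool> Covered(NumFeatureBits, false);
  std::vector<bool> Used(FileFeatures.size(), false);
  while (true) {
    size_t BestFile = 0, BestGain = 0;
    for (size_t F = 0; F < FileFeatures.size(); ++F) {
      if (Used[F]) continue;
      size_t Gain = 0;
      for (size_t Feature : FileFeatures[F])
        if (!Covered[Feature % NumFeatureBits]) ++Gain;
      if (Gain > BestGain) { BestGain = Gain; BestFile = F; }
    }
    if (BestGain == 0) break;           // no remaining file adds coverage
    Used[BestFile] = true;
    Selected.push_back(BestFile);
    for (size_t Feature : FileFeatures[BestFile])
      Covered[Feature % NumFeatureBits] = true;
  }
  return Selected;
}
```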
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D105284
This fails with:
/tmp/FlagsTest-5761bc.o: In function `sancov.module_ctor_8bit_counters':
FlagsTest.cpp:(.text.sancov.module_ctor_8bit_counters[sancov.module_ctor_8bit_counters]+0x14): undefined reference to `__start___sancov_cntrs'
FlagsTest.cpp:(.text.sancov.module_ctor_8bit_counters[sancov.module_ctor_8bit_counters]+0x18): undefined reference to `__stop___sancov_cntrs'
<...>
This has failed since https://reviews.llvm.org/D107374. However, the changes
there don't seem to be the real cause, so XFAIL the test while I look into it.
- Enable extra coverage counters on Windows.
- Update extra_counters.test to run on Windows also.
- Update TableLookupTest.cpp to include the required pragma/declspec for the extra coverage counters.
Patch By: MichaelSquires
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D106676
This reverts commit 52aeacfbf5.
There isn't full agreement on a path forward yet, but there is agreement that
this shouldn't land as-is. See discussion on https://reviews.llvm.org/D105338
Also reverts unreviewed "[clang] Improve `-Wnull-dereference` diag to be more in-line with reality"
This reverts commit f4877c78c0.
And all the related changes to tests:
This reverts commit 9a0152799f.
This reverts commit 3f7c9cc274.
This reverts commit 329f8197ef.
This reverts commit aa9f58cc2c.
This reverts commit 2df37d5ddd.
This reverts commit a72a441812.
This change introduces libMutagen/libclang_rt.mutagen.a as a subset of libFuzzer/libclang_rt.fuzzer.a. This library contains only the fuzzing strategies used by libFuzzer to produce new test inputs from provided inputs, dictionaries, and SanitizerCoverage feedback.
Most of this change is simply moving sections of code to one side or the other of the library boundary. The only meaningful new code is:
* The Mutagen.h interface and its implementation in Mutagen.cpp.
* The following methods in MutagenDispatcher.cpp:
* UseCmp
* UseMemmem
* SetCustomMutator
* SetCustomCrossOver
* LateInitialize (similar to the MutationDispatcher's original constructor)
* Mutate_AddWordFromTORC (uses callbacks instead of accessing TPC directly)
* StartMutationSequence
* MutationSequence
* DictionaryEntrySequence
* RecommendDictionary
* RecommendDictionaryEntry
* FuzzerMutate.cpp (which now just sets callbacks and handles printing)
* MutagenUnittest.cpp (which adds tests of Mutagen.h)
A note on performance: This change was tested with 100 passes of test/fuzzer/LargeTest.cpp with 1000 runs per pass, both with and without the change. The running time distribution was qualitatively similar both with and without the change, and the average difference was within 30 microseconds (2.240 ms/run vs 2.212 ms/run, respectively). Both times were much higher than observed with the fully optimized system clang (~0.38 ms/run), most likely due to the combination of CMake "dev mode" settings (e.g. CMAKE_BUILD_TYPE="Debug", LLVM_ENABLE_LTO=OFF, etc.). The difference between the two versions built similarly seems to be "in the noise" and suggests no meaningful performance degradation.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D102447
These have been broken by https://reviews.llvm.org/D104494.
However, `lib/fuzzer/dataflow/` is unused (?) so addressing this is not a priority.
Added TODOs to re-enable them in the future.
Reviewed By: stephan.yichao.zhao
Differential Revision: https://reviews.llvm.org/D104568
Complete support for fast8:
- amend shadow size and mapping in runtime
- remove fast16 mode and -dfsan-fast-16-labels flag
- remove legacy mode and make fast8 mode the default
- remove dfsan-fast-8-labels flag
- remove functions in dfsan interface only applicable to legacy
- remove legacy-related instrumentation code and tests
- update documentation.
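For reference, a small usage sketch under the 8-bit label model, where a label is a plain bitmask and label unions are bitwise ORs (illustrative; consult the DFSan documentation for the exact interface available in your build):

```
// Build with: clang++ -fsanitize=dataflow fast8_example.cpp
#include <cassert>
#include <sanitizer/dfsan_interface.h>

int main() {
  int a = 10, b = 20;
  dfsan_set_label(/*bit 0*/ 1, &a, sizeof(a));
  dfsan_set_label(/*bit 1*/ 2, &b, sizeof(b));
  int c = a + b;                       // taint propagates through the add
  dfsan_label lc = dfsan_get_label(c); // expected: 1 | 2 == 3
  assert(dfsan_has_label(lc, 1) && dfsan_has_label(lc, 2));
  return 0;
}
```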
Reviewed By: stephan.yichao.zhao, browneee
Differential Revision: https://reviews.llvm.org/D103745
Address sanitizer can detect stack exhaustion via its SEGV handler, which is
executed on a separate stack using the sigaltstack mechanism. When libFuzzer is
used with address sanitizer, it installs its own signal handlers which defer to
those put in place by the sanitizer before performing additional actions. In the
particular case of a stack overflow, the current setup fails because libFuzzer
doesn't preserve the flag for executing the signal handler on a separate stack:
when we run out of stack space, the operating system can't run the SEGV handler,
so address sanitizer never reports the issue. See the included test for an
example.
This commit fixes the issue by making libFuzzer preserve the SA_ONSTACK flag
when installing its signal handlers; the dedicated signal-handler stack set up
by the sanitizer runtime appears to be large enough to support the additional
frames from the fuzzer.
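The essence of the fix, sketched outside of libFuzzer's actual handler-installation code (illustrative only):

```
// When replacing an existing SEGV handler, carry over SA_ONSTACK so the
// new handler also runs on the sigaltstack set up by the sanitizer.
#include <csignal>
#include <cstring>

static void MyCrashHandler(int, siginfo_t *, void *) {
  // ... defer to the previously installed (sanitizer) handler ...
}

void InstallSegvHandlerPreservingOnStack() {
  struct sigaction Old;
  std::memset(&Old, 0, sizeof(Old));
  sigaction(SIGSEGV, nullptr, &Old);   // query the current handler

  struct sigaction New;
  std::memset(&New, 0, sizeof(New));
  New.sa_sigaction = MyCrashHandler;
  New.sa_flags = SA_SIGINFO;
  if (Old.sa_flags & SA_ONSTACK)       // the actual fix: keep SA_ONSTACK
    New.sa_flags |= SA_ONSTACK;
  sigaction(SIGSEGV, &New, nullptr);
}
```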
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D101824
This test was found to fail for some of our downstream builds, on
computers where python was not on the default $PATH. Therefore
add a %python substitution to use sys.executable, based on similar
solutions for python calls in tests elsewhere in LLVM.
Differential Revision: https://reviews.llvm.org/D96799
Adds a new option, `handle_winexcept`, to try to intercept uncaught
Visual C++ exceptions on Windows. On Linux, such exceptions are handled
implicitly by `std::terminate()` raising `SIGABRT`. This option brings the
Windows behavior in line with Linux.
Unfortunately, this exception code is intentionally undocumented; however,
it has remained stable for the last decade. More information can be found
here: https://devblogs.microsoft.com/oldnewthing/20100730-00/?p=13273
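Conceptually, the interception amounts to something like the following (a hedged sketch, not libFuzzer's exact code; 0xE06D7363 is the exception code MSVC uses for C++ exceptions, per the blog post above):

```
// Windows-only sketch: treat an unhandled MSVC C++ exception like a crash.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

static LONG WINAPI WinExceptFilter(EXCEPTION_POINTERS *Info) {
  if (Info->ExceptionRecord->ExceptionCode == 0xE06D7363) {
    std::fprintf(stderr, "ERROR: uncaught C++ exception\n");
    std::_Exit(1);                  // report and bail out, as for a crash
  }
  return EXCEPTION_CONTINUE_SEARCH; // unrelated exception; keep searching
}

void InstallWinExceptHandler() {
  SetUnhandledExceptionFilter(WinExceptFilter);
}
```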
Reviewed By: morehouse, metzman
Differential Revision: https://reviews.llvm.org/D89755
-print_full_coverage=1 produces a detailed branch coverage dump when run on a single file.
Uses the same infrastructure as the -print_coverage flag, but prints all branches (regardless of coverage status) in an easy-to-parse format.
Usage: for internal use with machine-learning fuzzing models that require detailed coverage information on seed files to generate mutations.
Differential Revision: https://reviews.llvm.org/D85928
The test started to consistently fail after the unrelated
2ffaa9a173.
Even before that patch it was possible for the test to fail,
e.g. with -seed=1660180256 on my workstation.
Also, these checks do not look related to strcmp.
When using a custom mutator (e.g. thrift mutator, similar to LPM)
that calls back into libfuzzer's mutations via `LLVMFuzzerMutate`, the mutation
sequences needed to achieve new coverage can get prohibitively large.
Printing these large sequences has two downsides:
1) It makes the logs hard to understand for a human.
2) The performance cost slows down fuzzing.
In this patch I change the `PrintMutationSequence` function to take a max
number of entries, to achieve this goal. I also update `PrintStatusForNewUnit`
to default to printing only 10 entries, in the default verbosity level (1),
requiring the user to set verbosity to 2 if they want the full mutation
sequence.
For our use case, turning off verbosity is not an option, as that would also
disable `PrintStats()`, which is very useful for infrastructure that analyzes
the logs in real time. I imagine most users of libFuzzer want those logs
in the default configuration.
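A simplified sketch of the capping behavior (the real function also prints the dictionary entries and uses libFuzzer's internal types; the signature below is illustrative):

```
// Print at most MaxNumberOfEntries mutation names; verbosity level 2
// effectively passes a large cap to recover the full sequence.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

void PrintMutationSequence(const std::vector<std::string> &Sequence,
                           size_t MaxNumberOfEntries) {
  std::printf("MS: %zu ", Sequence.size());
  size_t N = std::min(Sequence.size(), MaxNumberOfEntries);
  for (size_t I = 0; I < N; ++I)
    std::printf("%s-", Sequence[I].c_str());
}
```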
I built a fuzzer locally with this patch applied to libfuzzer.
When running with the default verbosity, I see logs like this:
#65 NEW cov: 4799 ft: 10443 corp: 41/1447Kb lim: 64000 exec/s: 1 rss: 575Mb L: 28658/62542 MS: 196 Custom-CrossOver-ChangeBit-EraseBytes-ChangeBit-ChangeBit-ChangeBit-CrossOver-ChangeBit-CrossOver- DE: "\xff\xff\xff\x0e"-"\xfe\xff\xff\x7f"-"\xfe\xff\xff\x7f"-"\x17\x00\x00\x00\x00\x00\x00\x00"-"\x00\x00\x00\xf9"-"\xff\xff\xff\xff"-"\xfa\xff\xff\xff"-"\xf7\xff\xff\xff"-"@\xff\xff\xff\xff\xff\xff\xff"-"E\x00"-
#67 NEW cov: 4810 ft: 10462 corp: 42/1486Kb lim: 64000 exec/s: 1 rss: 577Mb L: 39823/62542 MS: 135 Custom-CopyPart-ShuffleBytes-ShuffleBytes-ChangeBit-ChangeBinInt-EraseBytes-ChangeBit-ChangeBinInt-ChangeBit- DE: "\x01\x00\x00\x00\x00\x00\x01\xf1"-"\x00\x00\x00\x07"-"\x00\x0d"-"\xfd\xff\xff\xff"-"\xfe\xff\xff\xf4"-"\xe3\xff\xff\xff"-"\xff\xff\xff\xf1"-"\xea\xff\xff\xff"-"\x00\x00\x00\xfd"-"\x01\x00\x00\x05"-
Staring hard at the logs it's clear that the cap of 10 is applied.
When running with verbosity level 2, the logs look like the below:
#66 NEW cov: 4700 ft: 10188 corp: 37/1186Kb lim: 64000 exec/s: 2 rss: 509Mb L: 47616/61231 MS: 520 Custom-CopyPart-ChangeBinInt-ChangeBit-ChangeByte-EraseBytes-PersAutoDict-CopyPart-ShuffleBytes-ChangeBit-ShuffleBytes-CopyPart-EraseBytes-CopyPart-ChangeBinInt-CopyPart-ChangeByte-ShuffleBytes-ChangeBinInt-ShuffleBytes-ChangeBit-CMP-ShuffleBytes-ChangeBit-CrossOver-ChangeBinInt-ChangeByte-ShuffleBytes-CrossOver-EraseBytes-ChangeBinInt-InsertRepeatedBytes-PersAutoDict-InsertRepeatedBytes-InsertRepeatedBytes-CrossOver-ChangeByte-ShuffleBytes-CopyPart-ShuffleBytes-CopyPart-CrossOver-ChangeBit-ShuffleBytes-CrossOver-PersAutoDict-ChangeByte-ChangeBit-ShuffleBytes-CrossOver-ChangeByte-EraseBytes-CopyPart-ChangeBinInt-PersAutoDict-CrossOver-ShuffleBytes-CrossOver-CrossOver-EraseBytes-CrossOver-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-ChangeByte-EraseBytes-ShuffleBytes-ShuffleBytes-ChangeBit-EraseBytes-ChangeBinInt-ChangeBit-ChangeBinInt-CopyPart-EraseBytes-PersAutoDict-EraseBytes-CopyPart-ChangeBinInt-ChangeByte-CrossOver-ChangeBinInt-ShuffleBytes-PersAutoDict-PersAutoDict-ChangeBinInt-CopyPart-ChangeBinInt-CrossOver-ChangeBit-ChangeBinInt-CopyPart-ChangeByte-ChangeBit-CopyPart-CrossOver-ChangeByte-ChangeBit-ChangeByte-ShuffleBytes-CMP-ChangeBit-CopyPart-ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBit-ChangeBinInt-ChangeBinInt-PersAutoDict-ChangeBinInt-ChangeBinInt-ChangeByte-CopyPart-ShuffleBytes-ChangeByte-ChangeBit-ChangeByte-ChangeByte-EraseBytes-CrossOver-ChangeByte-ChangeByte-EraseBytes-EraseBytes-InsertRepeatedBytes-ShuffleBytes-CopyPart-CopyPart-ChangeBit-ShuffleBytes-PersAutoDict-ShuffleBytes-ChangeBit-ChangeByte-ChangeBit-ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBit-EraseBytes-CopyPart-ChangeByte-CrossOver-EraseBytes-CrossOver-ChangeByte-ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-ChangeByte-InsertRepeatedBytes-InsertByte-ShuffleBytes-PersAutoDict-ChangeBit-ChangeByte-ChangeBit-ShuffleBytes-ShuffleBytes-CopyPart-ShuffleBytes-EraseBytes-ShuffleBytes-ShuffleBytes-CrossOver-ChangeBinInt-CopyPart-CopyPart-CopyPart-EraseBytes-EraseBytes-ChangeByte-ChangeBinInt-ShuffleBytes-CMP-InsertByte-EraseBytes-ShuffleBytes-CopyPart-ChangeBit-CrossOver-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-EraseBytes-ChangeByte-ChangeBinInt-ChangeBit-ChangeBit-ChangeByte-ShuffleBytes-PersAutoDict-PersAutoDict-CMP-ChangeBit-ShuffleBytes-PersAutoDict-ChangeBinInt-EraseBytes-EraseBytes-ShuffleBytes-ChangeByte-ShuffleBytes-ChangeBit-EraseBytes-CMP-ShuffleBytes-ChangeByte-ChangeBinInt-EraseBytes-ChangeBinInt-ChangeByte-EraseBytes-ChangeByte-CrossOver-ShuffleBytes-EraseBytes-EraseBytes-ShuffleBytes-ChangeBit-EraseBytes-CopyPart-ShuffleBytes-ShuffleBytes-CrossOver-CopyPart-ChangeBinInt-ShuffleBytes-CrossOver-InsertByte-InsertByte-ChangeBinInt-ChangeBinInt-CopyPart-EraseBytes-ShuffleBytes-ChangeBit-ChangeBit-EraseBytes-ChangeByte-ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBinInt-ShuffleBytes-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-ShuffleBytes-CrossOver-EraseBytes-CopyPart-CopyPart-CopyPart-ChangeBit-ShuffleBytes-ChangeByte-EraseBytes-ChangeByte-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes-PersAutoDict-EraseBytes-ShuffleBytes-ChangeByte-ShuffleBytes-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeBit-CrossOver-CrossOver-ShuffleBytes-CrossOver-CopyPart-CrossOver-CrossOver-CopyPart-ChangeByte-ChangeByte-CrossOver-ChangeBit-ChangeBinInt-EraseBytes-ShuffleBytes-EraseBytes-CMP-PersAutoDict-PersAutoDict-InsertByte-
ChangeBit-ChangeByte-CopyPart-CrossOver-ChangeByte-ChangeBit-ChangeByte-CopyPart-ChangeBinInt-EraseBytes-CrossOver-ChangeBit-CrossOver-PersAutoDict-CrossOver-ChangeByte-CrossOver-ChangeByte-ChangeByte-CrossOver-ShuffleBytes-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-ChangeBinInt-ChangeBinInt-ChangeBinInt-ShuffleBytes-CrossOver-ChangeBinInt-ShuffleBytes-ChangeBit-PersAutoDict-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeByte-CrossOver-ChangeBit-CopyPart-ChangeBit-ChangeBit-CopyPart-ChangeByte-PersAutoDict-ChangeBit-ShuffleBytes-ChangeByte-ChangeBit-CrossOver-ChangeByte-CrossOver-ChangeByte-CrossOver-ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-CopyPart-ChangeBinInt-ChangeBit-CrossOver-ChangeBit-PersAutoDict-ShuffleBytes-EraseBytes-CrossOver-ChangeByte-ChangeBinInt-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-PersAutoDict-CrossOver-ChangeByte-Custom-PersAutoDict-CopyPart-CopyPart-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeBit-ShuffleBytes-CrossOver-CMP-ChangeByte-CopyPart-ShuffleBytes-CopyPart-CopyPart-CrossOver-CrossOver-CrossOver-ShuffleBytes-ChangeByte-ChangeBinInt-ChangeBit-ChangeBit-ChangeBit-ChangeByte-EraseBytes-ChangeByte-ChangeBit-ChangeByte-ChangeByte-CopyPart-PersAutoDict-ChangeBinInt-PersAutoDict-PersAutoDict-PersAutoDict-CopyPart-CopyPart-CrossOver-ChangeByte-ChangeBinInt-ShuffleBytes-ChangeBit-CopyPart-EraseBytes-CopyPart-CopyPart-CrossOver-ChangeByte-EraseBytes-ShuffleBytes-ChangeByte-CopyPart-EraseBytes-CopyPart-CrossOver-ChangeBinInt-ChangeBinInt-InsertByte-ChangeBinInt-ChangeBit-ChangeByte-CopyPart-ChangeByte-EraseBytes-ChangeByte-ChangeBit-ChangeByte-ShuffleBytes-CopyPart-ChangeBinInt-EraseBytes-CrossOver-ChangeBit-ChangeBit-CrossOver-EraseBytes-ChangeBinInt-CopyPart-CopyPart-ChangeBinInt-ChangeBit-EraseBytes-InsertRepeatedBytes-EraseBytes-ChangeBit-CrossOver-CrossOver-EraseBytes-EraseBytes-ChangeByte-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeBit-ChangeByte-EraseBytes-ChangeBit-ChangeByte-ChangeByte-CrossOver-CopyPart-EraseBytes-ChangeByte-EraseBytes-ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-CopyPart-ChangeByte-ChangeByte-ChangeBit-CopyPart-ChangeBit-ChangeBinInt-CopyPart-ShuffleBytes-ChangeBit-ChangeBinInt-ChangeBit-EraseBytes-CMP-CrossOver-CopyPart-ChangeBinInt-CrossOver-CrossOver-CopyPart-CrossOver-CrossOver-InsertByte-InsertByte-CopyPart-Custom- DE: "warn"-"\x00\x00\x00\x80"-"\xfe\xff\xff\xfb"-"\xff\xff"-"\x10\x00\x00\x00"-"\xfe\xff\xff\xff"-"\xff\xff\xff\xf6"-"U\x01\x00\x00\x00\x00\x00\x00"-"\xd9\xff\xff\xff"-"\xfe\xff\xff\xea"-"\xf0\xff\xff\xff"-"\xfc\xff\xff\xff"-"warn"-"\xff\xff\xff\xff"-"\xfe\xff\xff\xfb"-"\x00\x00\x00\x80"-"\xfe\xff\xff\xf1"-"\xfe\xff\xff\xea"-"\x00\x00\x00\x00\x00\x00\x012"-"\xe2\x00"-"\xfb\xff\xff\xff"-"\x00\x00\x00\x00"-"\xe9\xff\xff\xff"-"\xff\xff"-"\x00\x00\x00\x80"-"\x01\x00\x04\xc9"-"\xf0\xff\xff\xff"-"\xf9\xff\xff\xff"-"\xff\xff\xff\xff\xff\xff\xff\x12"-"\xe2\x00"-"\xfe\xff\xff\xff"-"\xfe\xff\xff\xea"-"\xff\xff\xff\xff"-"\xf4\xff\xff\xff"-"\xe9\xff\xff\xff"-"\xf1\xff\xff\xff"-
#48 NEW cov: 4502 ft: 9151 corp: 27/750Kb lim: 64000 exec/s: 2 rss: 458Mb L: 50772/50772 MS: 259 ChangeByte-ShuffleBytes-ChangeBinInt-ChangeByte-ChangeByte-ChangeByte-ChangeByte-ChangeBit-CopyPart-CrossOver-CopyPart-ChangeByte-CrossOver-CopyPart-ChangeBit-ChangeByte-EraseBytes-ChangeByte-CopyPart-CopyPart-CopyPart-ChangeBit-EraseBytes-ChangeBinInt-CrossOver-CopyPart-CrossOver-CopyPart-ChangeBit-ChangeByte-ChangeBit-InsertByte-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes-ChangeBinInt-EraseBytes-InsertRepeatedBytes-InsertByte-ChangeBit-ShuffleBytes-ChangeBit-ChangeBit-CopyPart-ChangeBit-ChangeByte-CrossOver-ChangeBinInt-ChangeByte-CrossOver-CMP-ChangeByte-CrossOver-ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-ChangeBinInt-CopyPart-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-InsertByte-ChangeBit-CopyPart-ChangeBinInt-ChangeByte-CrossOver-ChangeBit-EraseBytes-CopyPart-ChangeBinInt-ChangeBit-ChangeBit-ChangeByte-CopyPart-ChangeBinInt-CrossOver-PersAutoDict-ChangeByte-ChangeBit-ChangeByte-ChangeBinInt-ChangeBinInt-EraseBytes-CopyPart-CopyPart-ChangeByte-ChangeByte-EraseBytes-PersAutoDict-CopyPart-ChangeByte-ChangeByte-EraseBytes-CrossOver-CopyPart-CopyPart-CopyPart-ChangeByte-ChangeBit-CMP-CopyPart-ChangeBinInt-ChangeBinInt-CrossOver-ChangeBit-ChangeBit-EraseBytes-ChangeByte-ShuffleBytes-ChangeBit-ChangeBinInt-CMP-InsertRepeatedBytes-CopyPart-Custom-ChangeByte-CrossOver-EraseBytes-ChangeBit-CopyPart-CrossOver-CMP-ShuffleBytes-EraseBytes-CrossOver-PersAutoDict-ChangeByte-CrossOver-CopyPart-CrossOver-CrossOver-ShuffleBytes-ChangeBinInt-CrossOver-ChangeBinInt-ShuffleBytes-PersAutoDict-ChangeByte-EraseBytes-ChangeBit-CrossOver-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-EraseBytes-InsertByte-InsertRepeatedBytes-InsertByte-InsertByte-ChangeByte-ChangeBinInt-ChangeBit-CrossOver-ChangeByte-CrossOver-EraseBytes-ChangeByte-ShuffleBytes-ChangeBit-ChangeBit-ShuffleBytes-CopyPart-ChangeByte-PersAutoDict-ChangeBit-ChangeByte-InsertRepeatedBytes-CMP-CrossOver-ChangeByte-EraseBytes-ShuffleBytes-CrossOver-ShuffleBytes-ChangeBinInt-ChangeBinInt-CopyPart-PersAutoDict-ShuffleBytes-ChangeBit-CopyPart-ShuffleBytes-CopyPart-EraseBytes-ChangeByte-ChangeBit-ChangeBit-ChangeBinInt-ChangeByte-CopyPart-EraseBytes-ChangeBinInt-EraseBytes-EraseBytes-PersAutoDict-CMP-PersAutoDict-CrossOver-CrossOver-ChangeBit-CrossOver-PersAutoDict-CrossOver-CopyPart-ChangeByte-EraseBytes-ChangeByte-ShuffleBytes-ChangeByte-ChangeByte-CrossOver-ChangeBit-EraseBytes-ChangeByte-EraseBytes-ChangeBinInt-CrossOver-CrossOver-EraseBytes-ChangeBinInt-CrossOver-ChangeBit-ShuffleBytes-ChangeBit-ChangeByte-EraseBytes-ChangeBit-CrossOver-CrossOver-CrossOver-ChangeByte-ChangeBit-ShuffleBytes-ChangeBit-ChangeBit-EraseBytes-CrossOver-CrossOver-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-CopyPart-CrossOver-CopyPart-CrossOver-CrossOver-EraseBytes-EraseBytes-ShuffleBytes-InsertRepeatedBytes-ChangeBit-CopyPart-Custom- DE: "\xfe\xff\xff\xfc"-"\x00\x00\x00\x00"-"F\x00"-"\xf3\xff\xff\xff"-"St9exception"-"_\x00\x00\x00"-"\xf6\xff\xff\xff"-"\xfe\xff\xff\xff"-"\x00\x00\x00\x00"-"p\x02\x00\x00\x00\x00\x00\x00"-"\xfe\xff\xff\xfb"-"\xff\xff"-"\xff\xff\xff\xff"-"\x01\x00\x00\x07"-"\xfe\xff\xff\xfe"-
These are prohibitively large and of limited value in the default case (when
someone is running the fuzzer, not debugging it), in my opinion.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D86658
This patch adds a new command-line option -mutation_graph_file=FILE for
debugging purposes, which traces how corpus inputs evolve during a fuzzing
run. For each new input added to the corpus, a new vertex corresponding to the
added input and a new edge connecting its base input to the new input are
written to the given file. Each vertex is labeled with the filename of the
input, and each edge is labeled with the mutation sequence that produced the
input from its base input.
The format of the mutation graph file is the dot file format. Once prepended and
appended with "graph {" and "}", respectively, the graph becomes a valid dot
file and can be visualized.
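Roughly, for each newly added input the runtime appends a vertex line and a labeled edge line, along the lines of the sketch below (the quoting and edge syntax are illustrative assumptions; the `--` edge form is what an undirected "graph { ... }" wrapper expects):

```
// Illustrative: append one vertex and one labeled edge per new corpus input.
#include <fstream>
#include <string>

void AppendToMutationGraphFile(const std::string &GraphFile,
                               const std::string &NewInput,
                               const std::string &BaseInput,
                               const std::string &MutationSequence) {
  std::ofstream Out(GraphFile, std::ios::app);
  Out << "\"" << NewInput << "\"\n";                      // new vertex
  Out << "\"" << BaseInput << "\" -- \"" << NewInput
      << "\" [label=\"" << MutationSequence << "\"];\n";  // labeled edge
}
```

After wrapping the resulting file in "graph {" and "}", it can be rendered with Graphviz, e.g. `dot -Tsvg mutation_graph.dot`.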
Differential Revision: https://reviews.llvm.org/D86560
This patch scales the energy computed by the Entropic schedule based on the
execution time of each input. The input execution time is compared with the
average execution time of inputs in the corpus, and, based on the amount by
which they differ, the energy is scaled from 0.1x (for slow-executing inputs) to
3x (for fast-executing inputs). Note that the exact scaling criteria and formula
are borrowed from AFL.
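Roughly, the scaling behaves like the following (the thresholds below are illustrative, loosely following AFL's scheme, and are not the exact values used by this patch):

```
// Map (input exec time / average exec time) to an energy factor in
// [0.1, 3.0]: slower-than-average inputs are penalized, faster ones boosted.
double ExecTimeScale(double ExecTime, double AvgExecTime) {
  double Ratio = ExecTime / AvgExecTime;
  if (Ratio > 10.0) return 0.1;
  if (Ratio > 4.0)  return 0.25;
  if (Ratio > 2.0)  return 0.5;
  if (Ratio > 1.0)  return 0.75;
  if (Ratio < 0.25) return 3.0;
  if (Ratio < 0.5)  return 2.0;
  return 1.0;
}
// Energy *= ExecTimeScale(InputExecTime, AvgExecTime);
```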
On FuzzBench, this gives a sizeable throughput increase, which in turn leads to
more coverage on several benchmarks. For details, see the following report.
https://storage.googleapis.com/fuzzer-test-suite-public/exectime-report/index.html
Differential Revision: https://reviews.llvm.org/D86092
This patch adds an option "cross_over_uniform_dist", which, if 1, considers all
inputs in the corpus for the crossover input selection. More specifically, this
patch uses a uniform distribution of all inputs in the corpus for the CrossOver
input selection. Note that input selection for mutation is still fully
determined by the scheduling policy (i.e., vanilla or Entropic); the uniform
distribution applies only to the secondary input selection, i.e., to the
crossover mutation of the base input chosen by the scheduling policy. This way
the corpus inputs that have useful fragments in them, even though they are
deprioritized by the scheduling policy, have chances of getting mixed with other
inputs that are prioritized and selected as base input for mutation.
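In effect, the secondary (crossover) input is chosen like this when the option is enabled (a simplified sketch; the actual selection code in libFuzzer differs):

```
// Pick the crossover partner: uniformly over the whole corpus when
// UniformDist is set, otherwise via the weighted scheduling policy.
// Assumes CorpusSize > 0.
#include <cstddef>
#include <random>

size_t ChooseCrossOverPartner(size_t CorpusSize, bool UniformDist,
                              std::mt19937 &Rng,
                              std::discrete_distribution<size_t> &Weighted) {
  if (UniformDist) {
    std::uniform_int_distribution<size_t> Uniform(0, CorpusSize - 1);
    return Uniform(Rng);       // every corpus element equally likely
  }
  return Weighted(Rng);        // prioritized by the scheduling policy
}
```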
Differential Revision: https://reviews.llvm.org/D86954
This patch adds an option "keep_seed" to keep all initial seed inputs in the
corpus. Previously, only the initial seed inputs that find new coverage were
added to the corpus, and all the other initial inputs were discarded. We
observed in some circumstances that useful initial seed inputs are discarded as
they find no new coverage, even though they contain useful fragments in them
(e.g., the SQLITE3 FuzzBench benchmark). This newly added option provides a way
to keep seed inputs in the corpus for those circumstances. With this patch, and
with -keep_seed=1, all initial seed inputs are kept in the corpus regardless of
whether they find new coverage or not. Further, these seed inputs are not
replaced with smaller inputs even if -reduce_inputs=1.
Differential Revision: https://reviews.llvm.org/D86577
Currently, libFuzzer will exit with an error message if a non-existent
directory is provided for any of the appropriate arguments. For cases
where libFuzzer is used in a specialized embedded environment, it would
be much easier to have libFuzzer create the directories for the user.
This patch accommodates this scenario by allowing the user to provide
the argument `-create_missing_dirs=1` which makes libFuzzer attempt to
create the `artifact_prefix`, `exact_artifact_path`,
`features_dir` and/or corpus directory if they don't already exist, rather
than throwing an error and exiting.
Split off from D84808 as requested [here](https://reviews.llvm.org/D84808#2208546).
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D86733
The CrossOver mutator is meant to cross over two given buffers (referred to as
the first/second buffer henceforth). Previously InsertPartOf/CopyPartOf calls
used in the CrossOver mutator incorrectly inserted/copied part of the second
buffer into a "scratch buffer" (MutateInPlaceHere of the size
CurrentMaxMutationLen), rather than the first buffer. This is not the intended
behavior, because the scratch buffer does not always (i) contain the content of
the first buffer, and (ii) have the same size as the first buffer;
CurrentMaxMutationLen is typically a lot larger than the size of the first
buffer. This patch fixes the issue by using the first buffer instead of the
scratch buffer in InsertPartOf/CopyPartOf calls.
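A simplified illustration of the corrected semantics, where the destination and the size bound both come from the first buffer rather than from a separate scratch buffer (names simplified; this is not the actual MutationDispatcher code):

```
// Copy a random chunk of the second buffer over a random position in the
// first buffer; everything is bounded by the first buffer's size.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>

size_t CopyPartOfSecondIntoFirst(uint8_t *Data1, size_t Size1,
                                 const uint8_t *Data2, size_t Size2,
                                 std::mt19937 &Rng) {
  if (Size1 == 0 || Size2 == 0) return Size1;
  std::uniform_int_distribution<size_t> Dist(0, ~size_t(0));
  size_t CopySize = 1 + Dist(Rng) % std::min(Size1, Size2);
  size_t FromPos = Dist(Rng) % (Size2 - CopySize + 1);
  size_t ToPos = Dist(Rng) % (Size1 - CopySize + 1);
  std::copy(Data2 + FromPos, Data2 + FromPos + CopySize, Data1 + ToPos);
  return Size1;               // CopyPartOf keeps the first buffer's size
}
```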
A FuzzBench experiment was run to make sure that this change does not
inadvertently degrade the performance. The performance is largely the same; more
details can be found at:
https://storage.googleapis.com/fuzzer-test-suite-public/fixcrossover-report/index.html
This patch also adds two new tests, namely "cross_over_insert" and
"cross_over_copy", which specifically target InsertPartOf and CopyPartOf,
respectively.
- cross_over_insert.test checks if the fuzzer can use InsertPartOf to trigger
the crash.
- cross_over_copy.test checks if the fuzzer can use CopyPartOf to trigger the
crash.
These newly added tests were designed to pass with the current patch, but not
without it (with 790878f291 these tests do not
pass). To achieve this, -max_len was intentionally given a high value. Without
this patch, InsertPartOf/CopyPartOf will generate larger inputs, possibly with
unpredictable data in it, thereby failing to trigger the crash.
The test pass condition for these new tests is narrowed down by (i) limiting
mutation depth to 1 (i.e., a single CrossOver mutation should be able to trigger
the crash) and (ii) checking whether the mutation sequence of "CrossOver-" leads
to the crash.
Also note that these newly added tests and an existing test (cross_over.test)
all use "-reduce_inputs=0" flags to prevent reducing inputs; it's easier to
force the fuzzer to keep the original input string this way than by tweaking
cov-instrumented basic blocks in the source code of the fuzzer executable.
Differential Revision: https://reviews.llvm.org/D85554
value-profile-load.test needs adjustment for a mutator change in
bb54bcf849, which is reverted as of now but will be
recommitted after this patch lands.
This patch makes value-profile-load.test more friendly to (and aware of) the
current value profiling strategy, which is based on the hamming as well as the
absolute distance. To this end, this patch adjusts the set of input values that
trigger an expected crash. More specifically, this patch now uses a single value
0x01effffe as a crashing input, because this value is close to values like
{0x1ffffff, 0xffffff, ...}, which are very likely to be added to the corpus per
the current hamming- and absolute-distance-based value profiling strategy. Note
that previously the crashing input values were {1234567 * {1, 2, ...}, s.t. <
INT_MAX}.
Every byte in the chosen value 0x01effffe is intentionally different; this was
to make it harder to find the value without the intermediate inputs added to the
corpus by the value profiling strategy.
Also note that LoadTest.cpp now uses a narrower condition (Size != 8) for
initial pruning of inputs, effectively preventing libFuzzer from generating
inputs longer than necessary and spending time on mutating such long inputs in
the corpus - a functionality not meant to be tested by this specific test.
Differential Revision: https://reviews.llvm.org/D86247
Currently, libFuzzer will exit with an error message if a non-existent
corpus directory is provided. However, if a user provides a non-existent
directory for the `artifact_prefix`, `exact_artifact_path`, or
`features_dir`, libFuzzer will continue execution but silently fail to
write artifacts/features.
To improve the user experience, this PR adds validation for the existence of
all user supplied directories before executing the main fuzzing loop. If they
don't exist, libFuzzer will exit with an error message.
Patch By: dgg5503
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D84808