I have come across a scenario where certain targets are so buggy that Mayhem generates defects for most inputs.
As an example, please refer to this Mayhem run:
Mayhem Run
Github Repo
Although these codebases are technically broken, the fuzzer should in theory still be able to find real defects in them. How should such cases be fuzzed? Should they be ignored entirely?
Additionally, while discussing this topic, I would like to know if this specific target would be considered ‘appropriate.’ The content seems to be slightly less than professional but not outright NSFW. Your guidance on the acceptability of such targets would be appreciated.
I took a look and it may be considered mockery, so it’d be tough to accept.
For the more general case, I think it’s fine if your fuzzer finds a lot of defects: the developer is responsible for fixing issues; the fuzzer just finds them. That being said, if you know the specific exception or assertion being thrown is “noisy” and you want to suppress it, you have a couple of options:

- Catch the exception, or silence the assertion via compiler flags (or, in the case of Rust, annotations). This isn’t ideal, since the crash will still happen and will slow the fuzzer down a bit.
- Filter for and throw away test cases that you know trigger the assertion. This way the assertion is never triggered, but Mayhem can generate other test cases that penetrate more deeply into the code.

Again, the short answer is, of course, to fix the code in question, but since you likely don’t have control over it as a third party, the above should provide some viable alternatives.
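The filtering option can be sketched in plain Rust. Everything here is hypothetical: `buggy_parse` stands in for the real target, and the leading-`!` check stands in for whatever input class you know triggers the noisy assertion:

```rust
// Hypothetical stand-in for the real target: panics on any input
// that starts with '!' (our example of a known-noisy assertion class).
fn buggy_parse(data: &[u8]) -> usize {
    assert!(data.first() != Some(&b'!'), "unsupported leading '!'");
    data.len()
}

// Fuzz entry sketch: discard inputs known to trigger the noisy
// assertion so the fuzzer spends its budget deeper in the code.
fn fuzz_one(data: &[u8]) -> Option<usize> {
    if data.first() == Some(&b'!') {
        return None; // known-noisy class: filter out, don't execute
    }
    Some(buggy_parse(data))
}
```

The filter runs before the target, so the crash never fires; the trade-off is that any real bugs hiding behind the filtered input class also become unreachable.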
Well, I think the issue I’m seeing here is that the fuzzer doesn’t find a single input that doesn’t crash. Would targets like these be accepted? For fuzzing purposes I would perhaps convert the API to return Result<(), Error> myself. Would changes like these be allowed in a fuzz target, even if upstream has not accepted them yet?
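For what it’s worth, a panic-to-`Result` conversion can often live entirely in the harness via `std::panic::catch_unwind`, without patching upstream at all. A sketch, where `buggy_eval` is a hypothetical stand-in for the panicking upstream API:

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical stand-in for an upstream API that panics instead of
// returning errors (not any real crate's function).
fn buggy_eval(input: &str) -> i64 {
    input.parse::<i64>().expect("not a number")
}

// Harness-side wrapper: converts an unwinding panic into an Err,
// so the upstream source can stay untouched.
fn eval_checked(input: &str) -> Result<i64, String> {
    panic::catch_unwind(AssertUnwindSafe(|| buggy_eval(input)))
        .map_err(|_| format!("panicked on input {:?}", input))
}
```

Note that `catch_unwind` only intercepts unwinding panics (it does nothing under `panic = "abort"` builds), and the default panic hook still prints to stderr; `std::panic::set_hook` can silence that output if the noise matters.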
- Crashing on every input is usually indicative of a broken harness or a poor target. I’m guessing it’s the latter in this case.
- We would encourage you not to make changes to the source if possible.
We would still accept the target, even if it’s buggy, as long as the harness is valid and the repo meets all of the criteria. You may want to look at emulating some of the test functions (e.g. here: pua-lang/mod.rs at master · 4812571/pua-lang · GitHub) for more narrowly scoped fuzzing, but no, there’s nothing explicitly wrong with a target that is buggy.
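To illustrate what a more narrowly scoped target might look like, here is a sketch that fuzzes only a toy lexer, mirroring the shape of upstream unit tests instead of driving the whole (crash-prone) interpreter. The `Token` and `lex` names are illustrative only and are not pua-lang’s real API:

```rust
// Illustrative token type and lexer, loosely shaped like the tests
// in a small interpreter (NOT pua-lang's real types).
#[derive(Debug, PartialEq)]
enum Token {
    Ident(String),
    Int(i64),
    Plus,
    Unknown(char),
}

fn lex(src: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = src.chars().peekable();
    while let Some(&c) = chars.peek() {
        if let Some(d) = c.to_digit(10) {
            // accumulate a (wrapping) integer literal
            let mut n = d as i64;
            chars.next();
            while let Some(v) = chars.peek().and_then(|ch| ch.to_digit(10)) {
                n = n.wrapping_mul(10).wrapping_add(v as i64);
                chars.next();
            }
            tokens.push(Token::Int(n));
        } else if c.is_ascii_alphabetic() {
            let mut s = String::new();
            while let Some(&a) = chars.peek() {
                if a.is_ascii_alphanumeric() { s.push(a); chars.next(); } else { break; }
            }
            tokens.push(Token::Ident(s));
        } else if c == '+' {
            chars.next();
            tokens.push(Token::Plus);
        } else if c.is_whitespace() {
            chars.next();
        } else {
            chars.next();
            tokens.push(Token::Unknown(c));
        }
    }
    tokens
}

// Narrow fuzz entry: exercise only the lexer, mirroring the upstream
// unit tests instead of the full interpreter pipeline.
fn fuzz_lexer(data: &[u8]) {
    if let Ok(src) = std::str::from_utf8(data) {
        let _ = lex(src);
    }
}
```

Scoping the entry point this way keeps the fuzzer inside a component that doesn’t crash on everything, at the cost of not covering the rest of the pipeline.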
One thing to add here: if you are unable to proceed with fuzzing due to the excessively buggy nature of the target, providing a seed corpus with at least 5 passing test cases may get you past the smoke-testing phase.
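A seed corpus is just a directory of known-good inputs, so generating one can be as simple as the sketch below. The `corpus` directory name, the seed contents, and the way Mayhem discovers the directory (e.g. a testsuite entry in the Mayhemfile) are all assumptions to check against your own setup:

```rust
use std::fs;

// Sketch: write a handful of inputs the target is known to handle
// without crashing. Directory name, seed contents, and how Mayhem
// picks the directory up are assumptions -- check the Mayhem docs.
fn write_seed_corpus() -> std::io::Result<()> {
    let seeds = ["let x = 1;", "x + 2", "1 + 2 * 3", "fn f() { }", "x"];
    fs::create_dir_all("corpus")?;
    for (i, s) in seeds.iter().enumerate() {
        fs::write(format!("corpus/seed{}", i + 1), s)?;
    }
    Ok(())
}
```

Before submitting, it’s worth running each seed through the harness locally to confirm they genuinely pass rather than crash.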