[fuzzing] Fuzzing tradeoffs - where previously described?
discojonny at gmail.com
Tue Jan 30 11:53:30 CST 2007
On 30/01/07, Ari Takanen <ari.takanen at codenomicon.com> wrote:
> Hello all,
> Oh, who cares if it is exploitable... Bug is a bug...
I agree entirely, but for costing purposes there is significantly more
time and effort invested in finding certain types of bug as opposed to
others.
> > > > So I find this helpful for thinking about what has to happen
> > > > before a "smart" fuzzer is justified.
> How about taking a text book metric for ROSI (return on security
> investment). If the investment in a fuzzer is less than the (value of a
> security incident * probability of incident), the investment is
> justified.
Isn't this the metric that justifies testing in the first place? I
think the OP was trying to work out which type is worth doing
(intelligent vs. random style); for that we would need new metrics. So
his manager, or whoever, says "I have $100k to spend on testing in
total (maybe security only?)": how do we spend it?
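To make that concrete, here is a back-of-the-envelope sketch of the
ROSI comparison in Python. Every figure in it is a made-up placeholder
(the costs, probabilities, and risk-reduction numbers are assumptions
for illustration, not real data):

    # Back-of-the-envelope ROSI (return on security investment) comparison.
    # All figures are hypothetical placeholders for illustration only.

    def expected_loss(incident_cost, incident_probability):
        """Expected loss if we do nothing: cost * probability."""
        return incident_cost * incident_probability

    def rosi(loss_avoided, investment):
        """Classic ROSI: (risk reduction - cost of control) / cost."""
        return (loss_avoided - investment) / investment

    # Hypothetical options within a $100k testing budget.
    options = {
        "smart fuzzer":  {"cost": 80000, "risk_reduction": 0.8},
        "random fuzzer": {"cost": 20000, "risk_reduction": 0.4},
    }

    baseline = expected_loss(incident_cost=500000, incident_probability=0.3)

    for name, opt in options.items():
        avoided = baseline * opt["risk_reduction"]
        print("%s: ROSI = %.2f" % (name, rosi(avoided, opt["cost"])))

Which option wins depends entirely on the numbers you plug in, which
is exactly why the question needs metrics beyond plain ROSI.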
> Comparing two fuzzers is easy (at least that is what our customers
> have noted).
and the customer is always right... ;)
> If we (I am referring to our company here) find 10 times
> more flaws than the closest competitor and we cover all the flaws they
> found, the result is simple.
> Now this is valuable (I have to tell this to our marketing guys). But
> I am sure you meant "cost in fixing each bug"? ;)
I'm pretty sure he meant the cost of finding the bug.
> This is really where
> the value of intelligence in fuzzing comes from. We bring huge value to
> the testers if our tool finds 1000 tests that crash the software,
> 200 tests that leak memory, and 10 which take 10 times more processing
> power from the SUT, but automatically categorizes these findings so
> that the tester can immediately see that they are created by 10
> programming flaws. With a "non-intelligent" fuzzer you will have
> 100,000 tests that find something... Analyzing that will take too much
> time... and might actually just discover 2 flaws.
Depending on how it is done, that is not always true. For example, I
have worked at a company where they used fixing bugs to narrow down
the result set: not ideal, but surprisingly enough it works! [yeah, I
was stunned too]
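For what it's worth, the step that turns 100,000 crashing tests into
10 programming flaws is usually some form of crash bucketing. Here is
a minimal sketch of one common approach (hashing the top frames of
each crash's stack trace); the trace data and format are invented for
illustration:

    import hashlib
    from collections import defaultdict

    def bucket_key(stack_trace, depth=3):
        """Group crashes by a hash of the top N stack frames; crashes
        sharing a key are *probably* the same underlying flaw."""
        top_frames = stack_trace[:depth]
        return hashlib.md5("|".join(top_frames).encode()).hexdigest()

    def triage(crashes):
        """crashes: list of (test_id, stack_trace) pairs."""
        buckets = defaultdict(list)
        for test_id, trace in crashes:
            buckets[bucket_key(trace)].append(test_id)
        return buckets

    # Hypothetical data: four crashing tests, two distinct root causes.
    crashes = [
        (1, ["parse_header", "read_packet", "main"]),
        (2, ["parse_header", "read_packet", "main"]),
        (3, ["memcpy", "handle_option", "main"]),
        (4, ["parse_header", "read_packet", "main"]),
    ]

    for key, tests in triage(crashes).items():
        print("bucket %s: tests %s" % (key[:8], tests))

The hash depth is itself a tradeoff: too shallow and distinct bugs
collapse into one bucket, too deep and one bug spreads across many.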
> > As far as the question: which fuzzer should I create, a smart or
> > dumb fuzzer? The obvious answer to me is that we should create the
> > fuzzer which we believe is most likely to yield the most bugs of any
> > type (especially if we're fuzzing the attack surface).
> Exactly. Of course unless you are building a weapon (oh but isn't that
I am not so sure. I am sure that it has to be approached from an
objectives point of view. I think we are in danger of slipping into
talking at cross purposes here. Are we talking about:
1 - Writing a bespoke fuzzer to be used by an internal test team?
2 - Writing a bespoke fuzzer to be used in a compliance/standards team?
3 - Writing a fuzzer to find the highest number of exploitable bugs?
4 - Auditing 3rd-party code?
Each one of the above has a different answer to the question :)
> Then it is enough if you find one exploitable flaw, and
> might even automatically exploit it. But I am sure we are all into
> fuzzer development with good intentions.
> > However, the point that Disco brings up is valid and very
> > interesting. It does seem that as randomness in a tool increases we
> > tend to find more null ptrs and the like.
> Uh. I would like to see an example like this.
I think this is the same point you made earlier, with your example of
1000 bugs actually being 2 bugs. However, random _will_ be cheaper; see
the sketch below.
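As a rough illustration of why: a "dumb" mutation fuzzer is only a few
lines of code, while an intelligent fuzzer needs a model of the
protocol before it can generate a single test. A minimal sketch,
assuming a target program that reads a file ("./target" and the input
file names here are hypothetical):

    import random
    import subprocess

    def dumb_mutate(data, ratio=0.01):
        """Flip a fraction of bytes at random: no protocol knowledge."""
        buf = bytearray(data)
        for _ in range(max(1, int(len(buf) * ratio))):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    with open("valid_input.bin", "rb") as f:
        seed = f.read()  # one known-good sample is all we need

    for i in range(10000):
        case = dumb_mutate(seed)
        with open("fuzzed.bin", "wb") as f:
            f.write(case)
        result = subprocess.run(["./target", "fuzzed.bin"])
        if result.returncode < 0:  # killed by a signal: probable crash
            with open("crash_%d.bin" % i, "wb") as f:
                f.write(case)

Cheap to write, but every crash it finds still has to be triaged by
hand, which is where the bucketing argument above comes back in.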
> I am sure the same would
> have been found by an intelligent fuzzer also.
At what cost? Just to find 10 null derefs?
> And if you ask a
> financial corp or a carrier, a DoS is the worst that can happen. Hackers
> are interested in exploits (I wonder why); our customers are
> interested in any critical flaw.
So what is a critical flaw? What about non-critical flaws? Do they
count as bugs?
> > If you work for a software company you want to quickly find the most
> > bugs possible, and fix 'em fast. If you're a hacker, perhaps you'd
> > like to find just one "very tricky" bug that will stay in the wild
> > for a long time.
> > This shows the difficulty of doing such work. For software companies
> > doing testing (Disco, jump in here) I'm guessing they also have teams
> > of testers filling various roles.
Yes, they do. :) What info would you like? I don't think it is how you
imagine it, though. No general-purpose QA spods bother reverse
engineering the product they are working on: too much time, no reward.
Get a code audit. :)
> This would be very valuable to all of us fuzzer developers. What are
> the real pain points that the testing organizations have?
The mindset that "anyone can test". What sort of information would you like?