[fuzzing] Fuzzing tradeoffs - where previously described?
ari.takanen at codenomicon.com
Tue Jan 30 09:28:32 CST 2007
Comments to several messages in the same thread:
> > > I have been thinking about tradeoffs between "smarter" fuzz
> > > tools that take some time per trial but try to increase the
> > > chance that a test case exhibits a bug over less smart tools,
> > > and "dumber" fuzzing that is very fast per trial but doesn't do
> > > anything special. An equation that seems to capture some of the
> > > tradeoffs is the following:
> > >
> > > Expected # bugs = #trials/time * time * Pr[Bug per trial]
Doesn't the "time" cancel itself out of the equation? No, but
seriously, I think this is interesting thinking. There are several
factors you should actually think about:
- what is the "model" built from (traffic capture, specification)
- how is the "intelligence" built in (attack heuristics, secure
programming practices, ...)
- how do you measure "time" (processor time, clock, parallel tests)
And many others...
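To make the tradeoff concrete, here is a minimal Python sketch of the equation quoted above. All rates and probabilities are invented illustrations, not measurements from any real fuzzer:

```python
# Hedged sketch of the smart-vs-dumb fuzzer tradeoff. The numbers are
# made-up illustrations only.

def expected_bugs(trials_per_second: float,
                  wall_clock_seconds: float,
                  p_bug_per_trial: float) -> float:
    """Expected #bugs = (#trials/time) * time * Pr[bug per trial]."""
    return trials_per_second * wall_clock_seconds * p_bug_per_trial

HOUR = 3600.0

# A "dumb" fuzzer: very fast per trial, low chance any trial hits a bug.
dumb = expected_bugs(trials_per_second=1000.0,
                     wall_clock_seconds=HOUR,
                     p_bug_per_trial=1e-6)

# A "smart" fuzzer: far slower per trial, but each trial is more likely
# to exercise a bug because it is built from a model of the protocol.
smart = expected_bugs(trials_per_second=10.0,
                      wall_clock_seconds=HOUR,
                      p_bug_per_trial=5e-4)

print(f"dumb:  {dumb:.1f} expected bugs/hour")   # 3.6
print(f"smart: {smart:.1f} expected bugs/hour")  # 18.0
```

With these (assumed) numbers the smart fuzzer wins despite running 100 times fewer trials; flip the per-trial probabilities and the dumb fuzzer wins, which is exactly the tradeoff under discussion.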
> > If you just want to find exploitable bugs then it will form a
> > large part of the equation.
Oh, who cares if it is exploitable... a bug is a bug...
> > > So I find this helpful for thinking about what has to happen
> > > before a "smart" fuzzer is justified.
How about taking a textbook metric for ROSI (return on security
investment)? If the investment in a fuzzer is less than (value of a
security incident * probability of incident), the investment is
justified.
Comparing two fuzzers is easy (at least that is what our customers
have noted). If we (I am referring to our company here) find 10 times
more flaws than the closest competitor and we cover all the flaws they
found, the result is simple.
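The ROSI rule of thumb above can be sketched in a few lines of Python. The dollar figures and the 10% incident probability are illustrative assumptions, not real prices:

```python
# Textbook ROSI check: invest in a fuzzer only if it costs less than
# the expected loss of the incident it prevents. Figures are invented.

def fuzzer_justified(investment: float,
                     incident_value: float,
                     incident_probability: float) -> bool:
    """True when the investment is below the expected incident loss."""
    return investment < incident_value * incident_probability

# e.g. a $50k tool against a $2M incident with 10% yearly probability:
# expected loss is $200k, so the investment is justified.
print(fuzzer_justified(50_000, 2_000_000, 0.10))   # True
print(fuzzer_justified(300_000, 2_000_000, 0.10))  # False
```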
> > > We can also consider a variant that looks at cost per bug, since
> > > you could use a farm full of VM images or something to drive the
> > > #trials/time as high as you like. Maybe other variants are
> > > possible, but this is the idea.
Now this is valuable (I have to tell this to our marketing guys). But
I am sure you meant "cost of fixing each bug"? ;) This is really where
the value of intelligence in fuzzing comes from. We bring huge value
to the testers if our tool finds 1,000 tests that crash the software,
200 tests that leak memory, and 10 that take 10 times more processing
power from the SUT, but automatically categorizes these findings so
that the tester can immediately see that they are caused by 10
programming flaws. With a "non-intelligent" fuzzer you will have
100,000 tests that find something... Analyzing that will take too much
time... and might actually uncover just 2 flaws.
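The categorization step described above can be sketched as a simple bucketing of crashing test cases by a crash signature. The test IDs and signature strings below are invented; a real tool would derive the signature from something like a hash of the top stack frames at the fault:

```python
from collections import defaultdict

# Minimal triage sketch: collapse many crashing test cases into a few
# underlying programming flaws by bucketing on a crash signature.
# All (test_id, signature) pairs are hypothetical examples.

def bucket_findings(findings):
    """Group crashing test cases by crash signature."""
    buckets = defaultdict(list)
    for test_id, signature in findings:
        buckets[signature].append(test_id)
    return dict(buckets)

findings = [
    ("t001", "strcpy@parse_header"),
    ("t002", "strcpy@parse_header"),
    ("t003", "null_deref@free_session"),
    ("t004", "strcpy@parse_header"),
]

buckets = bucket_findings(findings)
print(len(buckets), "distinct flaws from", len(findings), "crashes")
# 2 distinct flaws from 4 crashes
```

The point is the ratio: the tester reviews one finding per bucket instead of one per crashing test case.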
> Great conversation guys. No there really hasn't been much formal
> fuzzing work, of this sort, that I know of. If Ari and I find time, we
> intend to write a book on Fuzzing this year. Could be out as soon as
> October of this year ... assuming we get our butts in gear! :)
Sure we will. ;)
> As far as the question: which fuzzer should I create, a smart or
> dumb fuzzer? The obvious answer to me is that we should create the
> fuzzer which we believe is most likely to yield the most bugs of any
> type (especially if we're fuzzing the attack surface).
Exactly. Of course, unless you are building a weapon (oh, but isn't
that illegal?). Then it is enough to find one exploitable flaw, and
you might even exploit it automatically. But I am sure we are all in
fuzzer development with good intentions.
> However, the point that Disco brings up is valid and very
> interesting. It does seem that as randomness in a tool increases we
> tend to find more null ptrs and the like.
Uh, I would like to see an example like this. I am sure the same flaws
would have been found by an intelligent fuzzer as well. And if you ask
a financial corp or a carrier, a DoS is the worst that can happen.
Hackers are interested in exploits (I wonder why); our customers are
interested in any critical flaw.
> Also, we can't really predict which fuzzing heuristics would lead to
> "tricky" (null pts, etc.) bugs.
> If you work for a software company you want to quickly find the most
> bugs possible, and fix 'em fast. If you're a hacker, perhaps you'd
> like to find just one "very tricky" bug that will stay in the wild
> for a long time.
> This shows the difficulty of doing such work. For software companies
> doing testing (Disco jump in here) I'm guessing they also have teams
> of testers filling various roles.
This would be very valuable to all of us fuzzer developers. What are
the real pain points that the testing organizations have?
Ari Takanen
Codenomicon Ltd.
Tutkijantie 4E
FIN-90570 Oulu
tel: +358-40 50 67678
ari.takanen at codenomicon.com