[fuzzing] Fuzzing tradeoffs - where previously described?
discojonny at gmail.com
Thu Feb 1 10:09:17 CST 2007
kinda playing devil's advocate here - don't get too mad - you hit one of
my pet hates.
On 01/02/07, Ari Takanen <ari.takanen at codenomicon.com> wrote:
> Hello all,
> Note that because the input space is always infinite,
pardon? no it isn't. it might be huge, phenomenally huge, but physics
stops it being infinite. and this is more than just a technical point,
it actually makes a real difference.
This is probably not the best forum to discuss this in, but no
computer program can have a truly infinite surface area, nor an
infinite number of internal states.
I agree with the sentiment though - it is generally too large to test
at the end of a product's life cycle (but not too large to calculate).
> it will always
> take an infinite time to find all flaws.
This is because testing does not prove the absence of bugs, not
because the input space is infinite.
> Optimizing (i.e. adding
> intelligence) will find more flaws earlier, but can still never ensure
> that "all" flaws have been found. This would not be true of course for
> protocols that have a "physical" limit such as UDP packet size (and no
> fragmentation options). Then you will have a physical limit that will
> cover "all" tests for that single packet.
isn't this the same for the RAM in my machine? it's the same idea that
limits any application's inputs to be less than infinite.
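a back-of-envelope sketch of the "finite but huge" point (the numbers are my own illustration, not from the thread; the payload limit is the standard IPv4 figure):

```python
# Count every possible single UDP payload (IPv4, no fragmentation).
MAX_UDP_PAYLOAD = 65507  # bytes: 65535 - 8 (UDP header) - 20 (IP header)

# Distinct payloads of every length 0..MAX_UDP_PAYLOAD form a geometric
# series: sum_{n=0}^{65507} 256^n = (256^65508 - 1) / 255.
total = (256 ** (MAX_UDP_PAYLOAD + 1) - 1) // 255

# Far too big to print in full; its size in bits alone (over half a
# million) shows the scale - finite, but way beyond exhaustive testing.
print(total.bit_length(), "bits")
```

so the space is perfectly calculable, just hopeless to enumerate - which is the point above.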
> But that would be a LOT of
> tests. Even then the internal state of the SUT will influence the
> possibility of finding some of the flaws.
yup :( - still, it's not an excuse not to do the work just because
there is too much of it :)
determinism helps a lot with this though. most things are not actually
as big as you expect them to be.
> I agree with you, both approaches can co-exist. I think there is a
> place for both random testing (fuzzing) and systematic testing
> (robustness testing).
are we talking about your and jared's interpretation of random? or
true random?
>If you have time and possibility to run random
> fuzzers, then fine, go ahead! You might be the lucky one. I still
> think random fuzzing is a nice community effort, and certainly if we
> add more the monkeys in the process, we might find our
we will find our shakespeare. they got the first 64 digits of omega
this way. but again, it is a time/cost thing.
> But I also think random testing does not fit very well in
> systematic software development process where repeatability and
> predictable test execution time are crucial.
if you are in at the very start of the development project it is a lot
more useful than at the end.
not all bugs are repeatable - also, if you are doing your random stuff
properly you record what you do for repeatability's sake. so how would
this differ from the systematic approach? (don't get me wrong, i have
used random data; it has a place, but not a very big one at that - i
am looking at other methods)
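a minimal sketch of what "record what you do" can look like (the names are mine, not from the thread): derive every "random" test case from a logged seed, so a run can be replayed exactly.

```python
import random

def fuzz_case(seed, max_len=64):
    """Generate one random test input, fully determined by the seed."""
    rng = random.Random(seed)             # seeded PRNG: deterministic stream
    length = rng.randint(0, max_len)
    return bytes(rng.randrange(256) for _ in range(length))

seed = random.SystemRandom().randrange(2 ** 32)
print("seed:", seed)                      # logging the seed IS the record
payload = fuzz_case(seed)
assert payload == fuzz_case(seed)         # same seed -> identical input
```

with the seed logged, a crashing "random" case is as repeatable as any systematic one.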
Random testing does have its place in software testing, but it is up
to the tester and their skillset how much of it is used.
> Here and there in large
> manufacturing organizations you might find people that find time for
> random fuzzers, but it will be very difficult to enforce random
> testing into the product development life-cycle.
why? do you think the time would be better spent doing something else?
it is a cost thing. [and these are the metrics we are trying to work
out]
> I would not even try
> to say this or that is better. Both of them have their uses. And I am
> not sure the same people would even benefit from having both
the more testing that gets done the better, but it is a case by case thing.
> I have already heard e.g. our customers say that we have
> too many test cases, and our tests are extremely optimized. They would
> switch to other tools if we said that we just added 2 Million test
> cases (especially if those tests would not find enough new issues).
that is a customer education issue... i guess the market drives your
testing - perfection drives mine... (not having a dig at you - but we
are coming at this from different angles.)
> an optimized set of tests could find 90% of the vulnerabilities, then
> how much can random testing add and at what cost?
at most it could add the extra 10% to what you have done. but this is
what we are talking about: when should you use random, and when should
you spend the time on developing intelligent tests instead?
> I think we should
> try to find facts instead of speculation. If anyone has any data in
> relation to this, please do not hesitate to send that to me. I can
> summarize the results (and anonymize the data if you wish).
what data are you after? I have lots. - contact me off list if you like?
btw I am in as much of a dilemma as you as to what is better and
when... for pretty much the same reasons :)