[fuzzing] MoKB take?
discojonny at gmail.com
Thu Nov 9 07:32:22 CST 2006
more goodness inline
On 09/11/06, Gadi Evron <ge at linuxbox.org> wrote:
> On Wed, 8 Nov 2006, Disco Jonny wrote:
> > > This vulnerability isn't triggered by MULTIPLE ',' characters. The first
> > > character, the buffer prefix, should be ','.
> > perfect, this could easily have been found with code coverage then.
After speaking to a few coders here, I find they have a very different
idea of code coverage than I do. I will elaborate more on this in my
response to Charlie. (It's a big 'un.)
> > > The approach we chose with beSTORM to explore this type of vulnerabilities
> > > is to define a variable length buffer prefix that attempts all the known
> > > characters 0x00-0xFF.
> > confirmed my suspicions, they are not doing reactive fuzzing (think
> > autodafe, then a little bit more - props to Martin Vuagnoux - a man
> "When you assume you make an ASS out of U and ME."
Is that a dig at me, Gadi? I said suspicion, not assumption. And I got
this suspicion after speaking to the creators of beSTORM about their
product - I was not trying to belittle it. Your statement backed up
> > Would you agree with the statement that it is not intelligently
> > fuzzing the application?
Hurm, cool - I will post some questions then :)
> > > An example basic optimization could be achieved by attempting only
> > > printable characters (0x20-0x7e). This is significant.
> > and needlessly limiting. you are blindly guessing that there are not
> > bugs with a certain subset.
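As a rough sketch of the trade-off being argued here (the toy target and function names are mine for illustration, not beSTORM's actual interface): sweeping a one-byte prefix over the full 0x00-0xFF range versus the printable-only 0x20-0x7E subset looks something like this:

```python
# Hypothetical sketch of the prefix strategy under discussion: try every
# possible single-byte prefix against a target parser, versus the
# "optimized" printable-only subset. parse() is an invented stand-in for
# the real target, not any vendor's API.

def parse(data: bytes) -> None:
    """Toy target: only a ',' prefix reaches the vulnerable path."""
    if data[:1] == b",":
        raise RuntimeError("crash: reached the ',' handling code")

def fuzz_prefix(byte_range, payload=b"AAAA"):
    """Return the prefix bytes that made the target misbehave."""
    crashes = []
    for b in byte_range:
        try:
            parse(bytes([b]) + payload)
        except RuntimeError:
            crashes.append(b)
    return crashes

full_sweep = fuzz_prefix(range(0x00, 0x100))      # 256 cases
printable_sweep = fuzz_prefix(range(0x20, 0x7F))  # 95 cases
print(full_sweep, printable_sweep)  # prints: [44] [44]
```

Here the printable subset happens to catch the ',' (0x2C) bug anyway - the argument is that you only know the subset was safe to skip after the fact.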
> Of course I am, otherwise you solved the Turing halting problem
Not really - there are a couple of nice ways to sidestep the halting
problem [both rely heavily on boundary value analysis (BVA) and
equivalence partitioning (EP)]. (Hence why we have the first 64-odd
digits of omega - although they brute-forced those digits, there are
the speculated extra 20-odd digits that they have found.)
Remember, the halting problem is about a general proof, not a specific proof.
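For anyone unfamiliar with the acronyms, here is a minimal sketch of what BVA/EP buys you - the field range and helper below are invented purely for illustration:

```python
# Rough illustration of boundary value analysis (BVA) with equivalence
# partitioning (EP): instead of testing every value of an integer field,
# treat the interior as one equivalence class and test only the
# boundaries plus one representative. The ranges here are made up.

def bva_candidates(lo, hi):
    """Boundary values plus one mid-range representative for [lo, hi]."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

# One representative per class plus its boundaries replaces an
# exhaustive sweep of hi - lo + 1 cases.
print(bva_candidates(0, 65535))  # 5 cases instead of 65536
```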
> You can't cover everything, you need to choose what you test, and more
> importantly - what you test first. This is difficult, but is easier in
> security as we are interested (mostly) in specific bugs resulting in
This is an argument I have continually with the compsec people. Let's
just agree to disagree :) - I really think this logic is false, and
not actually addressing the problem.
It would be nice if we could at least see that people are attempting
to get the best spread of input complexity coverage, as opposed to
hiding behind the "well there probably isn't security related stuff..." excuse.
Is it only me that sees fuzzing as more like password cracking than antivirus?
> As an example, testing for non-printables would in some cases prove a
> waste of resources (in comparison to other tests), while for example,
> when testing binary protocols they would prove effective.
How do you know where and when those cases are? This is the point:
you need to do exploratory testing. Like when you buy clothes, you look
at the sizes, try them on, and buy them if they fit, and not if they
don't. What about when you go to a tailor's? They do not have every
size and shape of suit, but they have the tools to create the exact
one tailored to your shape. This is what people should be
concentrating on, not making the best one-size-fits-all. (This is
just my opinion.)
> > > Note: Without attempting to optimize, the process becomes significantly
> > > slower. Every character added, increases the time to completion by a
> > > factor of 256. This multiplies again by the defined prefix size.
> > well no, not really, you shouldnt need to check all of those to know
> > how the software will react in a given situation. - what you do need
> > to do is make sure that you _reactivly_ set your equivalences and
> > partitions [this is not rocket science] - then you need to
> > statistically analyse this data [this is the hard part] and then you
> > are all set.
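To put numbers on the disagreement above (the class count is a made-up example, not measured data): an exhaustive n-byte prefix sweep costs 256^n cases, while reactively collapsing bytes the target treats identically into equivalence classes costs classes^n:

```python
# Sketch of the cost claim quoted above and the equivalence-class answer
# to it. Exhaustively sweeping an n-byte prefix costs 256**n cases;
# after observing how the target reacts, each byte position only needs
# one representative per equivalence class. The class count of 4 is an
# invented example.

def exhaustive_cases(prefix_len):
    return 256 ** prefix_len

def partitioned_cases(prefix_len, classes):
    return classes ** prefix_len

print(exhaustive_cases(3))      # 16777216
print(partitioned_cases(3, 4))  # 64
```

The hard part, as said above, is the statistical analysis that decides the classes - the arithmetic afterwards is trivial.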
> I believe someone else covered that much better than I could. There are
> instances where you will not be able to predict how a program reacts
> without testing, even if you reach 100% code coverage.
From what I have found out today, 100% code coverage does not mean that
100% of the code's behaviour has been tested. The code coverage our devs
do here doesn't seem to be all that helpful in fuzzing.
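A toy illustration of that point (the function is invented, not our devs' code): a suite can execute every line of this function - 100% line coverage - and still never see the crash, because the bug is value-dependent, not path-dependent:

```python
# Every test below executes the single line of scale(), so a line-coverage
# tool reports 100%, yet the covered line still blows up for one specific
# input value. The function and values are made up for illustration.

def scale(x: int) -> int:
    return 100 // (x - 7)   # covered by every test below

# "100% coverage" suite - the line runs, no failure observed
assert scale(8) == 100
assert scale(107) == 1

# yet the same, fully covered line crashes for one specific value
try:
    scale(7)
except ZeroDivisionError:
    print("crash on x=7 despite full line coverage")
```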
Anyway, I will elaborate on these points in my other mail.