Myths about static analysis. The fifth myth - a small test program is enough to evaluate a tool

Nov 07 2011

While talking to people on forums, I have noticed a few persistent misconceptions about the static analysis methodology. I decided to write a series of brief articles to show the real state of things.

The fifth myth: "You can easily evaluate the capabilities of a static analyzer on a small piece of test code".

This is how the statement usually looks in forum discussions (a composite quote):

I've written a special test program of 100 lines, but the analyzer doesn't report anything even though all the warning levels are enabled. This [tool of yours] / [static analysis] in general is just rubbish.

It is not the static analysis methodology that is rubbish, but this approach to evaluating a particular tool. Such an evaluation is flawed in two ways:

1.

Programmers believe they don't make simple mistakes; this phenomenon was discussed in the second myth. So they feed the analyzer a tricky sample and are secretly pleased when it fails to find the error. This game is entertaining but pointless.

You should understand that most errors are dead simple, and static analyzers detect them very well. The paradox is that it is much harder to invent a simple mistake than a complicated one. Here is an example. Would you ever think of writing a sample like this?

int threadcounts[] = { 1, kNumThreads };
for (size_t i = 0;
     i < sizeof(threadcounts) / sizeof(threadcounts); i++) {

I doubt it. I cannot imagine anyone deliberately making such a silly mistake as writing "sizeof(threadcounts) / sizeof(threadcounts)", so an example like this will never be created on purpose. By the way, this fragment comes not from a student's lab work, but from the Chromium project. PVS-Studio diagnoses it very easily, of course.
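For clarity, here is a minimal self-contained sketch of what the condition was supposed to look like; the value of kNumThreads and the loop body are assumed purely for illustration. The array size must be divided by the size of a single element, not by the size of the whole array:

#include <cstddef>
#include <cstdio>

int main()
{
  const int kNumThreads = 4;               // value assumed for illustration
  int threadcounts[] = { 1, kNumThreads };

  // Buggy form: sizeof(threadcounts) / sizeof(threadcounts) is always 1,
  // so only threadcounts[0] would ever be processed.
  // Intended form: divide the array size by the size of one element.
  for (std::size_t i = 0;
       i < sizeof(threadcounts) / sizeof(threadcounts[0]); i++)
  {
    std::printf("%d\n", threadcounts[i]);
  }
  return 0;
}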

2.

Hand-written samples are few and arbitrary, so the results depend heavily on chance. You may invent 5 errors that one analyzer finds and another doesn't, or write a program with five errors for which two analyzers give opposite results. The sample size of such an investigation is simply too small. To compare and study tools with even somewhat reliable results, you would have to write a program containing at least 500 different errors; an investigation based on 5-10 errors proves nothing.

Moreover, programmers expect to see diagnostics for errors of one particular type and forget about the rest. For example, almost everyone writes the same sample with a memory release defect:

void Foo()
{
  int *a = (int *)malloc(X);
  int *b = (int *)malloc(Y);
  //...
  free(a);
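  // "b" is never freed: this is the memory leak the sample is meant to demonstrate.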
}

Some analyzers detect this error, others don't. For instance, PVS-Studio does not diagnose memory leaks at the moment.

Blast from the past: a lot has changed since 2017. You're welcome to check out the article "Yes, PVS-Studio Can Detect Memory Leaks".

But it can find the following stuff:

static int rr_cmp(uchar *a,uchar *b)
{
  if (a[0] != b[0])
    return (int) a[0] - (int) b[0];
  if (a[1] != b[1])
    return (int) a[1] - (int) b[1];
  if (a[2] != b[2])
    return (int) a[2] - (int) b[2];
  if (a[3] != b[3])
    return (int) a[3] - (int) b[3];
  if (a[4] != b[4])
    return (int) a[4] - (int) b[4];
  if (a[5] != b[5])
    return (int) a[1] - (int) b[5];
  if (a[6] != b[6])
    return (int) a[6] - (int) b[6];
  return (int) a[7] - (int) b[7];
}

It should be "return (int) a[5] - (int) b[5];" instead of "return (int) a[1] - (int) b[5];".

Why doesn't anybody write test samples like this? Note that PVS-Studio found this error in the MySQL project.
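As a side note, rewriting the unrolled comparison as a loop eliminates this whole class of copy-paste slips. The sketch below only illustrates the idea and is not the actual MySQL fix:

typedef unsigned char uchar;

// The same byte-by-byte comparison written as a loop: there is simply
// no place left for the a[1]/b[5] copy-paste slip to occur.
static int rr_cmp(uchar *a, uchar *b)
{
  for (int i = 0; i < 7; i++)
    if (a[i] != b[i])
      return (int) a[i] - (int) b[i];
  return (int) a[7] - (int) b[7];
}

The loop keeps the original semantics (returning the signed difference of the first mismatching byte), which a plain memcmp call would not guarantee, since memcmp only promises the sign of its result.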

The conclusion: an adequate investigation or comparison of tools can be carried out only on real projects. You take project A, check it with PC-Lint / Visual C++ / PVS-Studio / C++Test, study all the messages attentively, and draw up a table of results (how many and which errors each analyzer has found). This is the only meaningful kind of investigation and comparison. For an example, see "Comparing the general static analysis in Visual Studio 2010 and PVS-Studio by examples of errors detected in five open source projects".


