Incorrectness Logic by Example

Incorrectness Logic is a formal method to reason about program behavior. Similar formal reasoning techniques have been used for a multitude of different tasks in program verification, bug hunting and exploitation. To give some examples, Symbolic Execution (SE) has been used to generate hundreds of working exploits against a (at that time) current version of Debian [1]. While possible, it is often difficult to scale SE with all its precision to large targets. However, SE can also be used to perform many smaller steps in the workflow of reverse engineers, bug hunters and vulnerability developers: in the past, it has been used to uncover novel ways to turn heap corruptions into useful exploitation primitives [2], to deobfuscate bytecode handlers to reverse engineer VM-based obfuscation [3], to search for very specific code patterns that allow triggering memory access violations without crashing the target application [4], and for many other program analysis tasks.

Current techniques are usually based on overestimating the set of possible states. That is, the formal approach underlying the mental models behind techniques such as SE, Abstract Interpretation and, to some extent, even Concolic Execution is usually heavily informed by the ideas behind so-called Hoare Logic [5]. Hoare Logic was designed as a method to prove the correctness of programs by overestimating the set of states that a program can possibly reach. If this larger set doesn't contain a faulty state, then the program must be correct. While this is very useful in theory, in practice it's almost impossible to prove the correctness of even simple programs. Plus, many of us aren't even that interested in proving correctness; rather, we want to find bugs.

Incorrectness Logic simply adapts the ideas from Hoare Logic to underapproximation: where Hoare Logic proves that at most a given set of states is reachable, we can use Incorrectness Logic to prove that a given set of states is definitely reachable. Where we show correctness in Hoare Logic by demonstrating that a bug is not reachable, we prove incorrectness in Incorrectness Logic by demonstrating that a bug is certainly reachable.

Recently, two papers [6, 7] introduced the concept of Incorrectness Logic, and it took me a while to get my head around how to use it. To help, I applied it to a bunch of toy programs and tried to figure out what we can do with it.

Applying Hoare Logic

To help with understanding Incorrectness Logic, I'll first give a short introduction to Hoare Logic. If you are already familiar with Hoare Logic, feel free to skip this part.

In classic Hoare Logic, we create so-called "Hoare triples" for each statement of the program. Such a triple consists of a pre-condition, the statement and a post-condition. For example, if we know that x==0 and we execute y = z, the post-conditions would be x==0 and y == z. Typically these conditions are written between { and }, so in a paper you would see the following snippet:

{ x == 0 } y = z { x == 0 and y == z }

In Hoare Logic these triples have to satisfy the following condition: if we execute the statement starting in a state that satisfies the pre-condition, then the resulting state after the statement has to satisfy the post-condition. It is easy to see that our example has this property. This property also allows us to "forget" constraints that we already know:

{ x == 0 } y = z { y == z }

We can see that weakening the post-condition is always a valid operation in Hoare Logic, since it only increases the set of states that we are allowed to end up in. To verify a complete program, we need to find such triples for each statement in the program, such that the post-condition of each statement is the pre-condition of the next statement. This allows us to chain multiple individual steps together.
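For tiny state spaces, this validity condition can be checked mechanically. The sketch below (plain Python; the helper names are mine, not from the post) enumerates all start states over a small domain and verifies that every state satisfying the pre-condition ends in a state satisfying the post-condition:

```python
from itertools import product

def hoare_valid(pre, stmt, post, domain=range(-3, 4)):
    """{ pre } stmt { post } is valid iff every start state satisfying
    `pre` ends in a state satisfying `post` after executing `stmt`."""
    for x, y, z in product(domain, repeat=3):
        state = {"x": x, "y": y, "z": z}
        if pre(state):
            stmt(state)
            if not post(state):
                return False
    return True

assign = lambda s: s.update(y=s["z"])   # the statement: y = z

# { x == 0 } y = z { x == 0 and y == z }
print(hoare_valid(lambda s: s["x"] == 0, assign,
                  lambda s: s["x"] == 0 and s["y"] == s["z"]))  # → True
# weakening the post-condition stays valid: { x == 0 } y = z { y == z }
print(hoare_valid(lambda s: s["x"] == 0, assign,
                  lambda s: s["y"] == s["z"]))                  # → True
# but an unjustified post-condition is rejected: { true } y = z { x == 0 }
print(hoare_valid(lambda s: True, assign,
                  lambda s: s["x"] == 0))                       # → False
```

Of course this only works for toy domains; real verifiers discharge these conditions with a solver instead of enumeration.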

Example 1

Consider a simple program like this example that calculates the absolute value of the input:

x = user_input()
y = 0
if x < 0 then
  y = -x
else
  y = x
end
assert(y >= 0)

To prove that the program is correct, we need to show that the assert statement is always true, that is, the post-condition of the program needs to imply y >= 0 . It’s easy to build Hoare triples for the first statements:

{ true }
x = user_input() # we don't really learn anything about x
{ true }
y = 0
{ y == 0 }

The if statement is a bit more tricky. Here we need to come up with a pre-condition PRE and a post-condition POST such that:

{ PRE }           # `\
if C then         # |
  { PRE and C }   # |   `\
  do_something    # |   |  must be valid
  { POST }        # |   ,/
else              # |
  { PRE and !C }  # |   `\
  do_something    # |   |  must be valid
  { POST }        # |   ,/
end               # |
{ POST }          # ,/  { PRE } if ... end { POST } is valid,
                  #     if both branches are valid

In this example, PRE: y == 0 will work, while POST needs to imply y >= 0. Consequently we pick POST: y >= 0. After inserting PRE and POST in our code, we obtain the following "then"-branch for the if. Note that we need to weaken the post-condition after the assignment y = -x to obtain POST. Normally, we would assume that the post-condition of this branch should be y > 0. While y > 0 is indeed a valid post-condition for the "then" branch, it is not a valid post-condition for the overall "if" statement. Consequently we weaken the post-condition to y >= 0:

{ y == 0 }               # PRE
if x < 0 then
  { y == 0 and x < 0 }   # PRE and C
  y = -x
  { y == -x and x < 0 }
  { y >= 0 }             # POST (weaker than the line above)
else
  #...

After filling in the "else"-branch, we obtain the triple for the whole program: if we start in any state at all, we are guaranteed that after executing the program y >= 0 holds. Consequently we have proven that the assertion can never fail.

{ true }
x = user_input() # we don't really learn anything about x
{ true }
y = 0
{ y == 0 }                 # PRE
if x < 0 then
  { y == 0 and x < 0 }     # PRE and C
  y = -x
  { y == -x and x < 0 }
  { y >= 0 }               # POST (weaker than the line above)
else
  { y == 0 and !(x < 0) }  # PRE and !C
  y = x
  { y == x and !(x < 0) }
  { y >= 0 }               # POST (weaker than the line above)
end
{ y >= 0 }                 # POST
assert(y >= 0)
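The proof guarantees the assertion for every input. As a quick dynamic cross-check, here is the same program as a Python sketch, with user_input() replaced by a parameter:

```python
def absolute(x):
    # mirrors the pseudocode: y = 0; if x < 0 then y = -x else y = x end
    y = 0
    if x < 0:
        y = -x
    else:
        y = x
    assert y >= 0   # the property we just proved with Hoare Logic
    return y

for x in range(-100, 101):
    absolute(x)      # never raises, matching the proof
print(absolute(-5))  # → 5
```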

Example 2

Consider a somewhat more complex program containing a loop like this:

x = 0;
y = user_input_unsigned()
z = 0
while ( x < y ) do
  x = x + 1
  z = z + 2
end
assert(z == 2 * y);

To prove that the program is correct (assuming unbounded integers), again, we need to show that the assert statement is always true. That is, the post-condition of the program needs to imply z == 2*y . This time however, we need to deal with a loop. Again, it’s easy to build Hoare triples for the first statements:

{ true }
x = 0
{ x == 0 }
y = user_input_unsigned();
{ x == 0 and y > 0 }
z = 0
{ x == 0 and y > 0 and z == 0 }
# What now? How to deal with loops?
while ( x < y ) do
  x = x + 1
  z = z + 2
end
assert(z == 2 * y)

To build a valid Hoare triple for a loop statement, we need to find a so-called loop invariant. A loop invariant is a condition that holds before entering the loop, at the start and end of every iteration, and after exiting the loop. Assuming we picked an invariant I, we can show that it holds by showing that the following triples are valid:

{ I }            # `\
while ( C ) do   # |
  { I and C }    # |   `\
  do_something   # |   |  if this triple is valid
  { I }          # |   ,/
end              # |
{ I and !C }     # ,/  { I } while ... end { I and !C } is valid

In our example, we can pick I: x <= y and z == 2*x and we obtain the following program:

#... code so far ...
{ x == 0 and y > 0 and z == 0 }
{ x <= y and z == 2 * x }
while ( x < y ) do
  { x <= y and z == 2 * x and x < y }
  x = x + 1
  z = z + 2
  { x <= y and z == 2 * x }
end
{ x <= y and z == 2 * x and !(x < y) }
assert(z == 2 * y)

First of all, we need to check that the invariant is indeed implied by the post-condition of the code prior to the loop. Indeed we can see that since x == 0 and y > 0, it must also be true that x <= y, and due to x == 0 and z == 0, z == 2*x also holds. The next step is to verify that the invariant also holds after executing the loop body:

#... code so far ...
{ x <= y and z == 2 * x and x < y }   # I and C
x = x + 1;
z = z + 2;
{ x <= y and z == 2 * x }             # I

According to the rules of Hoare Logic, if we want to build post-conditions for a variable assignment a = some_expr, where some_expr contains a, we first need to ensure that all occurrences of the variable a in the pre-condition appear exactly in the shape some_expr. Again, we use our ability to arbitrarily weaken the post-condition at any time to introduce the intermediate conditions:

#... code so far ...
{ x <= y and z == 2 * x and x < y }        # I and C
{ x < y and z == 2 * x }                   # (weaker than the line above)
{ x + 1 <= y and z + 2 == 2 * (x + 1) }    # (weaker than the line above)
x = x + 1;
z = z + 2;
{ x <= y and z == 2 * x }                  # I

Now all occurrences of updated variables have the right shape, and we can simply replace all occurrences of some_expr by a :

#... code so far ...
{ x <= y and z == 2 * x and x < y }        # I and C
{ x < y and z == 2 * x }                   # (weaker than the line above)
{ x + 1 <= y and z + 2 == 2 * (x + 1) }    # (weaker than the line above)
x = x + 1;
{ x <= y and z + 2 == 2 * x }
z = z + 2;
{ x <= y and z == 2 * x }                  # I

Which proves that I was indeed a valid loop invariant. In total we get:

{ true }
x = 0
{ x == 0 }
y = user_input_unsigned()
{ x == 0 and y > 0 }
z = 0
{ x == 0 and y > 0 and z == 0 }
{ x <= y and z == 2 * x }                  # I (weaker than the line above)
while ( x < y ) do
  { x <= y and z == 2 * x and x < y }      # I and C
  { x < y and z == 2 * x }                 # (weaker than the line above)
  { x + 1 <= y and z + 2 == 2 * (x + 1) }  # (weaker than the line above)
  x = x + 1
  { x <= y and z + 2 == 2 * x }
  z = z + 2
  { x <= y and z == 2 * x }                # I
end
{ x <= y and z == 2 * x and !(x < y) }     # I and !C
{ x == y and z == 2 * x }                  # (weaker than the line above)
{ z == 2 * y }                             # (weaker than the line above)
assert(z == 2 * y)

Since the last post-condition implies that the assert statement is true, we have proven the program correct. To be more precise, we have shown that it never triggers the assert - it still might run into an infinite loop.
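The proof covers every unsigned input at once. As a dynamic cross-check, the Python sketch below (input as a parameter) asserts the invariant I at the end of every iteration and the proven property at the end:

```python
def double_by_loop(y):
    x, z = 0, 0
    while x < y:
        x = x + 1
        z = z + 2
        assert x <= y and z == 2 * x   # the loop invariant I holds here
    assert z == 2 * y                  # the property the proof establishes
    return z

for y in range(0, 100):
    double_by_loop(y)      # never raises: matches the proof
print(double_by_loop(21))  # → 42
```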

More Exercises

If you have never used Hoare Logic before, I’d recommend you try to find proofs for the following examples to get a feeling for finding invariants:

i = 0
y = user_input_unsigned()
res = 1
while (i < y) do
  res *= 2
  i += 1
end
assert( res == 2 ** y )

i = 0
y = user_input_unsigned()
res = 0
while (i < y) do
  res += i
  i += 1
end
assert( res == sum( 1 .. y) ) # hint: use res == sum(1..i) as invariant

i = 0
y = user_input_unsigned()
res = 1
while (i < y) do
  if i % 2 == 0 then
    res += i
  end
  i += 1
end
assert( res % 2 == 1 )
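If you want to sanity-check the assertions dynamically before (or after) hunting for invariants, the three exercises translate directly to Python. A sketch, with user input as a parameter and sum(1..y) read as the half-open sum 0 + 1 + ... + (y-1), which is what the hinted invariant res == sum(1..i) suggests:

```python
def power(y):
    i, res = 0, 1
    while i < y:
        res *= 2
        i += 1
    assert res == 2 ** y
    return res

def summation(y):
    i, res = 0, 0
    while i < y:
        res += i
        i += 1
    assert res == sum(range(y))  # sum(1..y), half-open reading
    return res

def parity(y):
    i, res = 0, 1
    while i < y:
        if i % 2 == 0:
            res += i
        i += 1
    assert res % 2 == 1
    return res

for y in range(0, 40):
    power(y); summation(y); parity(y)
print("all assertions hold for y in 0..39")
```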

Applying Incorrectness Logic

Similar to Hoare Logic, Incorrectness Logic operates on triples of the shape [pre]statement[post]. However, the interpretation of the pre- and post-conditions changes. Since the goal is to show that some states are definitely reachable, we require that every state satisfying the post-condition is reached, by executing the statement, from at least one state that satisfies pre. Consider a simple assignment without any information about the right hand side: x = y. The most obvious valid IL triple would also be a valid Hoare triple: [ true ]x = y[ x==y ]. However, while { true }x = y{ true } is a valid Hoare triple, [ true ]x = y[ true ] is not a valid IL triple. For example, the state where x==1 and y==2 satisfies the post-condition while being unreachable. Conversely, the triple [ true ]x = y[ 0 < x < 10 and x==y ] is valid in IL, but not in Hoare Logic. This is because in IL we are free to explicitly exclude states. Similarly, while in Hoare Logic we always have to consider all paths, in IL we are free to drop paths. This affects the rules for control flow. In IL, the following rules are all valid for if statements:

[ PRE ]              # `\
if C then            # |
  [ PRE and C ]      # |   `\
  do_then            # |   |  if this is valid
  [ POST ]           # |   ,/
else                 # |
  # ignore else case # |  then the whole statement is valid.
  do_else            # |  Since we are underapproximating, we can
end                  # |  ignore paths, such as the `else` case.
[ POST ]             # ,/

[ PRE ]              # `\
if C then            # |  Similarly, we can ignore the `then` case
  # ignore then case # |  if the else case is valid.
  do_then            # |
else                 # |
  [ PRE and !C ]     # |   `\
  do_else            # |   |  if this is valid
  [ POST ]           # |   ,/
end                  # |
[ POST ]             # ,/

Obviously, the rule for loops changes as well, as we are now free to pick any number of loop iterations. Again, there are multiple valid ways to construct IL triples:

[ PRE ]          # `\
while C do       # |  This is always valid, since it represents
  do_something   # |  the case where we never enter the loop.
end              # |  Hence the body has no effect.
[ PRE and !C ]   # ,/

We can also unroll the loop any number of times:

[ PRE ]              # `\
while C do           # |
  [ PRE and C ]      # |   `\
  do_something       # |   |  if all of these are valid,
  [ POST_1 and C ]   # |   <
  do_something       # |   |
  [ POST_2 and C ]   # |   <
  do_something       # |   |
  [ POST_3 and C ]   # |   ,/
  # ...              # |
end                  # |  unrolling the loop n times is valid.
[ POST_n and !C ]    # ,/

Lastly, we can use IL to prove that some states are reachable with any number of loop iterations. Here we will need a loop invariant again. However, we now parameterize the loop invariant with the number of loop iterations taken, n. The "loop invariant" I(n) is then proven by induction:

[ I(0) ]                                           # `\
while C do                                         # |
  [ n >= 0 and I(n) and C ]                        # |   `\
  do_something                                     # |   |  if this is valid,
  [ I(n + 1) ]                                     # |   ,/
end                                                # |  this is valid too.
[ 'there exists an n > 0 such that' I(n) and !C ]  # ,/

Note how in contrast to Hoare Logic the result explicitly mentions the number of loop iterations taken. Also, if the loop is not terminating, the post-condition is still valid: it is unsatisfiable. While in Hoare Logic { true } is always a valid post-condition, in IL [ false ] is always valid.
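For tiny state spaces, the reachability-based validity condition of IL can also be brute-forced. The sketch below (plain Python; the helper names are mine) enumerates all start states over a small domain and tests whether every post state is actually reached:

```python
from itertools import product

DOMAIN = range(-2, 12)

def il_valid(pre, stmt, post):
    """[ pre ] stmt [ post ] is valid in IL iff every state satisfying
    `post` is reached from at least one state satisfying `pre`."""
    reached = set()
    for x, y in product(DOMAIN, repeat=2):
        state = {"x": x, "y": y}
        if pre(state):
            stmt(state)
            reached.add((state["x"], state["y"]))
    return all((x, y) in reached
               for x, y in product(DOMAIN, repeat=2)
               if post({"x": x, "y": y}))

assign = lambda s: s.update(x=s["y"])   # the statement: x = y

# [ true ] x = y [ x == y ] -- valid in IL (and also a valid Hoare triple)
print(il_valid(lambda s: True, assign, lambda s: s["x"] == s["y"]))  # → True
# [ true ] x = y [ true ] -- valid in Hoare Logic, but NOT in IL:
# e.g. x == 1 and y == 2 satisfies the post-condition yet is unreachable
print(il_valid(lambda s: True, assign, lambda s: True))              # → False
# [ true ] x = y [ 0 < x < 10 and x == y ] -- valid in IL only: we may
# explicitly exclude reachable states from the post-condition
print(il_valid(lambda s: True, assign,
               lambda s: 0 < s["x"] < 10 and s["x"] == s["y"]))      # → True
```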

Example 1

Consider the following program that is a horrible manual implementation of multiplication.

x = user_input_unsigned()
y = user_input_unsigned()
z = 0
res = 0
while ( z < y ) do
  res = res + x
  z = z + 1
end
assert(res != 20)

Using IL, there are many (in fact infinitely many) ways we can analyze the program. The most restricted version is picking an arbitrary concrete execution:

[ true ]
x = user_input_unsigned()
[ x == 50 ]
y = user_input_unsigned()
[ x == 50 and y == 2 ]
z = 0
[ x == 50 and y == 2 and z == 0 ]
res = 0
[ x == 50 and y == 2 and z == 0 and res == 0 ]
while ( z < y ) do
  [ x == 50 and y == 2 and z == 0 and res == 0 and z < y ]   # `\
  res = res + x                                              # |
  [ x == 50 and y == 2 and z == 0 and res == 50 ]            # |  unroll #1
  z = z + 1                                                  # |
  [ x == 50 and y == 2 and z == 1 and res == 50 ]            # ,/
  [ x == 50 and y == 2 and z == 1 and res == 50 and z < y ]  # `\
  res = res + x                                              # |
  [ x == 50 and y == 2 and z == 1 and res == 100 ]           # |  unroll #2
  z = z + 1                                                  # |
  [ x == 50 and y == 2 and z == 2 and res == 100 ]           # ,/
end
[ x == 50 and y == 2 and z == 2 and res == 100 and !(z < y) ]
assert(res != 20)

This is probably not something we are overly interested in, as we don’t really need a logic to perform concrete execution. We can simply use the CPU to execute the program and observe the outcome. Also, using this particular trace we cannot show that the assertion fails.
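Running the fixed trace concretely confirms the derived condition. The sketch below executes the multiplication program with x == 50 and y == 2:

```python
def multiply_by_loop(x, y):
    # the horrible manual multiplication from above
    z, res = 0, 0
    while z < y:
        res = res + x
        z = z + 1
    return z, res

z, res = multiply_by_loop(50, 2)
print(z, res)     # → 2 100, matching the final condition of the trace
assert res != 20  # this particular trace cannot trigger the assertion
```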

Another common approach is to perform a so-called "concolic" execution, where a path is fixed and we execute symbolically along the path of a concrete execution. If any constraint gets too complicated, we can substitute the concrete values. Note how we need to track x>0, which doesn't really affect the rest of the execution. In Hoare Logic we would be allowed to drop this constraint, but in IL we need to keep it to ensure soundness. If we dropped it, we would wrongly conclude that the state res==-2 and z==2 and y==2 and x==-1 is reachable.

[ true ]
x = user_input_unsigned()
[ x > 0 ]
y = user_input_unsigned()
[ x > 0 and y == 2 ]   # we fix y, but not x
z = 0
[ x > 0 and y == 2 and z == 0 ]
res = 0
[ x > 0 and y == 2 and z == 0 and res == 0 ]
while ( z < y ) do
  [ x > 0 and y == 2 and z == 0 and res == 0 and z < y ]  # `\
  res = res + x                                           # |
  [ x > 0 and y == 2 and z == 0 and res == x ]            # |  unroll #1
  z = z + 1                                               # |
  [ x > 0 and y == 2 and z == 1 and res == x ]            # ,/
  [ x > 0 and y == 2 and z == 1 and res == x and z < y ]  # `\
  res = res + x                                           # |
  [ x > 0 and y == 2 and z == 1 and res == x + x ]        # |  unroll #2
  z = z + 1                                               # |
  [ x > 0 and y == 2 and z == 2 and res == x + x ]        # ,/
end
[ x > 0 and y == 2 and z == 2 and res == x + x and !(z < y) ]  # POST
assert(res != 20)

Using concolic execution with y==2, we can already prove that the final assertion fails, by demonstrating that x==10 and y==2 and z==2 and res == 20 is a valid, reachable state that triggers the assertion. After we obtain the final condition POST, we can automatically check whether the assertion is violated by using an SMT solver to find a variable assignment that satisfies POST and !(res!=20). This approach has been known long before the formal description of IL; however, IL puts it into a nice mental framework. Consider the case where we change the assertion to assert(res != 139*31). With this larger number, concolic execution would have a very hard time finding exactly the right number of loop iterations to falsify the assertion. If we consider concolic execution to be a special case of IL, we can generalize it to reason about multiple different paths at the same time. If we keep both x and y symbolic, we can prove that the assertion still fails. By applying the induction rule for loops we obtain the following result:

[ true ]
x = user_input_unsigned()
[ x > 0 ]
y = user_input_unsigned()
[ x > 0 and y > 0 ]   # we fix neither x, nor y
z = 0
[ x > 0 and y > 0 and z == 0 ]
res = 0
[ x > 0 and y > 0 and z == 0 and res == 0 ]
[ I(0) ]                                                  # `\
while ( z < y ) do                                        # |
  [ n >= 0 and I(n) and z < y ]                           # |  need to find an I(n)
  res = res + x                                           # |  that satisfies
  z = z + 1                                               # |  these conditions.
  [ n >= 0 and I(n + 1) ]                                 # |
end                                                       # |
[ 'there exists an n > 0 such that' I(n) and !(z < y) ]   # ,/
assert(res != 139 * 31)

A natural candidate for I(n) is I(n): res==x*n and z==n and z<=y and x>0 and y>0. Since this is rather long, we define the rest R: x>0 and y>0 and n>=0 to shorten I(n) to the relevant part res==x*n and z==n and z<=y and R. Let's plug it in and see if it works:

# ... previous code ...
[ x > 0 and y > 0 and z == 0 and res == 0 ]                 # I(0)          <-,
[ res == x * 0 and z == 0 and z <= y and x > 0 and y > 0 ]  # stronger than   ´
while ( z < y ) do
  [ res == x * n and z == n and z <= y and R and z < y ]    # I(n) and C    <-,
  [ res + x == x * (n + 1) and z + 1 == (n + 1) and
    z + 1 <= y and R ]                                      # stronger than   ´
  res = res + x
  [ res == x * (n + 1) and z + 1 == (n + 1) and z + 1 <= y and R ]
  z = z + 1
  [ res == x * (n + 1) and z == (n + 1) and z <= y and R ]  # I(n+1)
end
[ 'there exists an n > 0 such that' res == x * n and z == n and
  z <= y and !(z < y) and R ]                               # I(n) and !(C) <-,
[ 'there exists an n > 0 such that' res == x * y and z == n and
  z == y and R ]                                            # stronger than   ´
assert(res != 139 * 31)

As we can see, similar to using invariants in Hoare Logic, we applied I(n) to the pattern for loops and used our ability to strengthen post-conditions to show that [ I(n) and C ]body[ I(n+1) ] is indeed a valid IL triple. In most cases we didn't even really strengthen the post-condition, but simply applied equivalence transformations. For example, we transform z<=y and !(z<y) into z==y. As a consequence we have shown that any state where res==x*y and x>0 and y>0 and ... is reachable. Using an SMT solver to check POST and !(res!=139*31), we can easily confirm that the assertion can still be triggered.
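An SMT solver would discharge POST and !(res != 139*31) immediately; as a stand-in, the brute-force search below (a sketch, not a real solver) finds a witness for res == x*y == 139*31 with x > 0 and y > 0, and replays it through the loop program:

```python
TARGET = 139 * 31  # 4309

def find_witness():
    # search for a model of POST and not(res != TARGET),
    # i.e. x > 0 and y > 0 and x * y == TARGET
    for x in range(1, TARGET + 1):
        if TARGET % x == 0:
            return x, TARGET // x
    return None

x, y = find_witness()

# replay the witness through the original loop program
z, res = 0, 0
while z < y:
    res = res + x
    z = z + 1
assert res == TARGET  # the proven-reachable state is really reached
print(x, y, res)
```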

Example 2

Let's pick another, somewhat more elaborate example. This time we will deal with arrays. This function reads an array from the user input and copies the absolute value of every element to the result array. In this example, we skip the simple cases and instead try to prove that we can reach states where forall j in (0..x.len): res[j] == x[j] and ... holds. To keep this example readable, we skip dealing with the length, and assume that setting elements in an array always succeeds.

x = user_input_array()
res = array()
i = 0
while ( i < x.len ) do
  if x[i] < 0 then
    res[i] = -x[i]
  else
    res[i] = x[i]
  end
  i = i + 1
end

We will also skip the prefix and jump right into dealing with the loop body. For the first case we allow arbitrary numbers of loop iterations, while fixing that we always take the else case in the if statement. To simplify the rest of this example, we add the additional constraint that x.len >= 1 since otherwise the inductive rule doesn’t apply and we have to use the much simpler rule for loops with zero iterations. Hence we have to find an I(n) such that

[ i == 0 and x.len >= 1 ]                                                #             <-,
[ I(0) ]                                                                 # stronger than  ´
while ( i < x.len ) do
  [ n >= 0 and I(n) and i < x.len ]
  if x[i] < 0 then
    # don't care, fixed the else path
  else
    [ n >= 0 and I(n) and i < x.len and !(x[i] < 0) ]
    res[i] = x[i]
    [ n >= 0 and I(n) and i < x.len and !(x[i] < 0) and res[i] == x[i] ]
  end
  [ n >= 0 and I(n) and i < x.len and !(x[i] < 0) and res[i] == x[i] ]
  i = i + 1
  [ n >= 0 and I(n + 1) ]
end
[ 'there exists an n > 0 such that' I(n) and !(i < x.len) ]              #             <-,
[ 'forall j in (0..x.len)' res[j] == x[j] and ... ]                      # stronger than  ´

Again, the hard part is to come up with a proper I(n). We will try the following: I(n) = 'forall j in (0..n)' res[j]==x[j] and x[j]>=0 and i<=x.len and i==n and R, where R = x.len>=1 and n>=0 is the part that we don't care about but still have to carry around. Now beware: this example is the most complex one so far.

[ i == 0 and x.len >= 1 ]
[ 'forall j in (0..0)' res[j] == x[j] and x[j] >= 0 and
  i <= x.len and i == 0 and x.len >= 1 ]
while ( i < x.len ) do
  [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
    i <= x.len and i == n and i < x.len and R ]
  # in this step we pick an equivalent post-condition
  # to aggregate i <= x.len and i < x.len into the latter
  [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
    i == n and i < x.len and R ]
  if x[i] < 0 then
    # don't care
  else
    [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
      i == n and i < x.len and R and !(x[i] < 0) ]
    res[i] = x[i]
    [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
      i == n and i < x.len and R and !(x[i] < 0) and res[i] == x[i] ]
  end
  [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
    i == n and i < x.len and R and !(x[i] < 0) and res[i] == x[i] ]
  # Here we transform the post-condition to be a valid pre-condition
  # for the following assignment. Note the weird i == (i+1)-1 trick
  # that is needed to replace i+1 with i. This condition is
  # equivalent to the previous condition.
  [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
    i + 1 == n + 1 and i + 1 <= x.len and R and
    !(x[i + 1 - 1] < 0) and res[i + 1 - 1] == x[i + 1 - 1] ]
  i = i + 1
  [ 'forall j in (0..n)' res[j] == x[j] and x[j] >= 0 and
    i == n + 1 and i <= x.len and R and
    !(x[i - 1] < 0) and res[i - 1] == x[i - 1] ]
  # Since we know that x[i-1] >= 0 and res[i-1] == x[i-1] and
  # i == n+1, we also know that the forall statement now holds for
  # j in (0..n+1). This is indeed exactly [ n >= 0 and I(n+1) ]
  [ 'forall j in (0..n+1)' res[j] == x[j] and x[j] >= 0 and
    i == n + 1 and i <= x.len and R ]
end
[ 'there exists an n > 0 such that' 'forall j in (0..n+1)'
  res[j] == x[j] and x[j] >= 0 and
  i == n + 1 and i <= x.len and R and !(i < x.len) ]
# use !(i < x.len) and i <= x.len to infer i == x.len
[ 'there exists an n > 0 such that' 'forall j in (0..n+1)'
  res[j] == x[j] and x[j] >= 0 and
  n + 1 == x.len and i == x.len and R ]
# use that n+1 == x.len, ignore the rest
[ 'forall j in (0..x.len)' res[j] == x[j] and x[j] >= 0 and ... ]

As we can see, I(n) is indeed applicable to this loop, and after exiting the loop, we can see that res[j]==x[j] holds. However, we also learn that to reach these states, x[j]>=0 also has to hold.
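The same conclusion shows up dynamically: running the program (Python sketch below) yields res == x elementwise exactly when every element is non-negative, matching the x[j] >= 0 constraint in the proven post-condition:

```python
def abs_copy(x):
    res = [0] * len(x)
    i = 0
    while i < len(x):
        if x[i] < 0:
            res[i] = -x[i]
        else:
            res[i] = x[i]
        i = i + 1   # the loop counter increment the proof relies on
    return res

print(abs_copy([3, 0, 7]) == [3, 0, 7])    # → True: all x[j] >= 0
print(abs_copy([3, -1, 7]) == [3, -1, 7])  # → False: x[1] < 0 breaks res[j] == x[j]
```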

Exercise:

I would recommend trying to apply the same approach to prove that states where forall j in (0..x.len): res[j]==-x[j] and ... is true are reachable.

How can we use Incorrectness Logic?

After gaining an intuition for how IL works we obviously want to know how we can use it. What do we gain by thinking of concrete and concolic execution as special cases of IL?

For starters, IL allows us to smoothly increase the amount of symbolism from zero (concrete execution) to a hundred percent (underapproximating SE). As a consequence, techniques based on IL can sit almost anywhere on the spectrum between dynamic and static analysis. Hence, IL lends itself to integrating dynamic, testing-based approaches with static, reasoning-based approaches.

Internal Fuzzing

For example, imagine using IL to prove that some states in the middle of the target application are reachable:

data = read_data()
data2 = decode_zip(data)
[ "we prove that data2 can be any string" and ... ]
stuff = parse(data2)

Now we could spawn a fuzzer that directly generates data2 and targets the parsing, without having to deal with decode_zip().
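As a sketch of the idea, with Python's zlib standing in for decode_zip() and a hypothetical toy parse() (both placeholders I made up, not part of the original example):

```python
import zlib

def decode_zip(data):
    return zlib.decompress(data)   # stand-in for the real decoder

def parse(data2):
    # hypothetical toy parser: the code we actually want to fuzz
    return data2.startswith(b"MAGIC")

# Fuzzing `data` directly, nearly every mutated input dies in the decoder:
try:
    parse(decode_zip(b"\x00" * 16))
except zlib.error:
    print("mutated input rejected before parse() is ever reached")

# Having proven [ "data2 can be any string" ], we can generate data2
# directly, so every fuzz input exercises the parser:
print(parse(b"MAGICstuff"))   # → True
```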

Fuzzing for Target States.

One of the really big issues of symbolic approaches is that it's very hard to model all environment effects. Consider a program that uses IPC to communicate with a database. Building a model that allows us to reason about values flowing into and out of the database is almost certainly prohibitively expensive. Fuzzers, however, usually don't suffer from this problem. Symbolic reasoning, on the other hand, easily pinpoints bugs that are very hard to trigger randomly, such as an off-by-one integer overflow. Ideally these two components would be combined to compensate for each other's weaknesses while combining their strengths. Most current approaches use symbolic reasoning and fuzzing in an orthogonal fashion: the symbolic solver and the fuzzer generate inputs almost independently, and tools such as QSYM, Driller etc. only share the inputs found between the fuzzer and the symbolic engine.

One can imagine using IL to prove that certain sets of internal states trigger interesting behavior, and then pass this information on to the fuzzer's instrumentation, so that the fuzzer can specifically target these states that are guaranteed to yield interesting behavior. In such a tool, the symbolic reasoning would work backwards from seemingly vulnerable positions to find interesting target states, while the fuzzer works forwards to overcome un-modeled side effects, unknown instructions, or constraints that are plainly too hard for a solver.

Randomized Symbolic Execution

While the previous two ideas aim to move fuzzing towards the strength of SE based approaches, one could also start from SE and move towards the strength of fuzzing / random testing.

Since our goal is not to prove the absence of bugs, but to hunt for bugs, one could start from the classical SE based approach and make it more dynamic. Instead of trying to follow all possible paths and to enumerate the whole state space, an IL based randomized symbolic executor could explore random paths until they become too hard to solve or too boring to explore further. In such cases the randomized SE could either add additional constraints that make solving easier (until the input becomes entirely concrete, and solving becomes trivial), or pick different paths altogether. Note that this is already similar to what tools like KLEE are doing. However, it's really uncommon for SE based tools to a) randomly drop paths, b) systematically add constraints to make the paths easier to solve, or c) combine loop invariants with bug finding logic instead of aiming at correctness proofs.

It is yet too early to tell what Incorrectness Logic adds to the world of software testing. While I am very firmly in the camp of "dumb dynamic approaches / guided fuzzing", I personally think there are some interesting properties that IL can offer to software testing. And no matter whether the engineering behind IL based tools ends up becoming as influential as other approaches, a novel way to think about formal reasoning in program analysis is always very exciting!