Code Satisfiability
Code Satisfiability - Achievable in constrained contexts.
Code Validity - Very difficult; I haven't found much academic work that tackles it successfully.
When we want to thoroughly test code, specifically for security vulnerabilities, we have a few options:
- Code audit - Slow, manual, not scalable, but can find complex bugs
- Blind fuzzing - Fast and scalable, but mostly catches the lowest-hanging fruit
- Whitebox fuzzing - Fast, scalable, and can identify some intermediate-complexity bugs
- Symbolic execution - Can be prohibitively slow, but exercises all input combinations, so it has the potential to identify complex bugs at scale
There is a great deal of academic work being done on code satisfiability, much of it excellent. However, there appears to be a lot of duplicated implementation effort across the academic world.
Thesaurus check:
Dynamic Symbolic Execution ≈ Concolic Testing (Or as I liked to call it "Mixed symbolic and concrete testing")
The aim of concolic/dynamic symbolic execution is to deal with the path explosion that comes from exploring every valid state of every variable in pure symbolic execution.
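To make the path-explosion problem concrete, here is a toy illustration (the function and names are invented for this sketch, not taken from any tool): with n independent branches, a naive symbolic executor faces up to 2^n distinct paths, and the count doubles with every branch added.

```python
from itertools import product

# A toy function with three independent branches; each input bit flips one.
def toy(a: bool, b: bool, c: bool) -> str:
    path = []
    path.append("A1" if a else "A0")
    path.append("B1" if b else "B0")
    path.append("C1" if c else "C0")
    return "-".join(path)

# Enumerate every distinct path: 2**3 = 8, doubling with each new branch.
paths = {toy(*bits) for bits in product([False, True], repeat=3)}
print(len(paths))  # 8
```

Three branches already give eight paths; thirty independent branches would give over a billion, which is why naive exhaustive exploration does not scale.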
One of the key strategies, outlined well in the KATCH paper, is to use a regression suite to concretely execute a binary up to the vicinity of the code under test, after which you can execute symbolically. This limits the search space, which helps deal with the path explosion issue.
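The mix of concrete and symbolic execution can be sketched in miniature (this is illustrative only, not KATCH's actual implementation; `target`, `trace`, and the brute-force `solve` are invented stand-ins): run the target concretely while recording branch outcomes, then negate the last recorded branch and search for an input that takes the other side. A real engine would hand the constraints to an SMT solver; here a brute-force search over a small integer domain stands in for the solver.

```python
def target(x):
    # Two nested branches; the "deep" path needs x > 10 and x even.
    if x > 10:
        if x % 2 == 0:
            return "deep"
        return "odd"
    return "shallow"

def trace(x):
    """Concretely execute target(), recording each branch predicate taken."""
    t = [("x > 10", x > 10)]
    if x > 10:
        t.append(("x % 2 == 0", x % 2 == 0))
    return t

def flip_last(tr):
    """Keep all but the last branch condition; negate the last one."""
    parts = [p if taken else f"not ({p})" for p, taken in tr[:-1]]
    last_pred, last_taken = tr[-1]
    parts.append(f"not ({last_pred})" if last_taken else last_pred)
    return " and ".join(parts)

def solve(pred):
    """Stand-in for an SMT solver: brute-force a small integer domain."""
    for x in range(-100, 101):
        if eval(pred):
            return x
    return None

seed = 5                                       # concrete seed input
new_input = solve(flip_last(trace(seed)))      # steer to the unexplored side
print(target(seed), "->", target(new_input))   # reaches a new path
```

Iterating this negate-and-solve loop over every recorded branch is, in essence, how concolic engines enumerate paths without blindly exploring all of them.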
To step back to the basics of how to symbolically execute a piece of code, you can take one of two approaches. The first is to translate the source code into an abstract syntax tree (AST), then walk the tree to generate constraints in a format such as SMT-LIB2, which a solver like Z3 or STP can use to execute the function symbolically.
Ideally you would use something like ANTLR for the AST generation, as ANTLR grammars are available for nearly all the popular source languages.
- Good (pretty much the only way) for dynamic languages
- Simpler component design.
- Lots of code to maintain
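A minimal sketch of this first approach, using Python's built-in `ast` module as a stand-in for an ANTLR-generated parser (the tiny expression subset and the emitted declarations are illustrative): parse the source into an AST, walk it, and emit SMT-LIB2 text that a solver such as Z3 or STP could consume.

```python
import ast

SRC = "result = x + 2 * y"

# Map Python AST operator nodes to their SMT-LIB2 function symbols.
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def to_smt(node):
    """Translate a tiny expression subset of Python's AST into SMT-LIB2 terms."""
    if isinstance(node, ast.BinOp):
        return f"({OPS[type(node.op)]} {to_smt(node.left)} {to_smt(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise NotImplementedError(type(node).__name__)

tree = ast.parse(SRC)
assign = tree.body[0]              # the single assignment statement
term = to_smt(assign.value)
print("(declare-const x Int)")
print("(declare-const y Int)")
print("(declare-const result Int)")
print(f"(assert (= result {term}))")  # (assert (= result (+ x (* 2 y))))
```

A real translator would also handle statements, branches, and loops, which is exactly where the "lots of code to maintain" downside comes from.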
The second option is to use the abstraction of LLVM or the Binary Analysis Platform (BAP), which is the approach most popular with the existing academic tools.
Adding the LLVM Intermediate Representation (IR) abstraction removes the maintenance overhead of the grammar-transform work; LLVM IR can be walked in much the same way as an AST.
- Less code to maintain
- Can analyze code without having the source: any binary that LLVM can give you IR for should work with your solver.
- Concrete execution can be achieved through LLVM too, so no requirement for a separate test runner.
- Doesn't work for non-compiled languages, e.g. JavaScript, though there are workarounds like Jalangi
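The benefit of the IR abstraction can be shown with a toy example (the three-address "IR" below is invented for illustration; real tools walk LLVM IR): because the IR is a small, uniform instruction set, the same walker can execute it either concretely or symbolically, which is why one engine can serve both modes without per-language code.

```python
# A made-up three-address IR for: y = x * x + c
PROGRAM = [
    ("mul", "t0", "x", "x"),   # t0 = x * x
    ("add", "y", "t0", "c"),   # y  = t0 + c
]

def run(env, symbolic=False):
    """Walk the IR once; interpret concretely (ints) or symbolically (terms)."""
    env = dict(env)
    for op, dst, a, b in PROGRAM:
        if symbolic:
            sym = {"mul": "*", "add": "+"}[op]
            env[dst] = f"({env.get(a, a)} {sym} {env.get(b, b)})"
        else:
            fn = {"mul": lambda p, q: p * q, "add": lambda p, q: p + q}[op]
            env[dst] = fn(env[a], env[b])
    return env

print(run({"x": 3, "c": 1})["y"])                      # 10
print(run({"x": "x", "c": "c"}, symbolic=True)["y"])   # ((x * x) + c)
```

The concrete mode plays the role of the test runner, the symbolic mode produces the terms you would hand to a solver; both reuse the same walker.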
References:
- KLEE - A framework that lets you declare specific values in your code to be symbolically executed; built on LLVM alongside a solver of your choice (the default is STP).
- KATCH - An automated patch-analysis tool built on KLEE that uses some interesting control-flow-graph distance heuristics to concretely execute your code to the vicinity of the patch, then symbolically execute the variables in that area.