- Robert Richardson, Editorial Director
Formal verification is one of those things we don’t talk much about in mainstream information security circles. If we could get it to work on a broader scale, beyond system design, we could prove—literally—that various pieces of software were free from security vulnerabilities. The desire to do this dates back to code-breaker Alan Turing, but proof of correctness against formal “behavior” specifications gained attention in 1967, when computer scientist Robert W. Floyd published his groundbreaking paper, Assigning Meanings to Programs. (Don’t follow that link unless mathematical proofs of program specifications intrigue you; more digestible is this “gentle introduction to formal verification.”)
Multiple approaches to verification exist, but Floyd’s, as refined by C.A.R. Hoare in 1969, was particularly useful early on; it was possible to formally prove—in the same sense that you create proofs in geometry—that a small section of source code was bulletproof.
The fundamental problem is that formal verification requires stating all the possible conditions or logic branches of a piece of software, then testing each to ensure it doesn’t cause behavior that breaks the rules, which must themselves be formally defined. Against this set of statements (or, depending on your approach, finite state tables), you then have to test data sets that represent every possibility for input, including illegal input.
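As a toy illustration of what checking “all the possibilities for input” means (my own sketch, not the DARPA tooling), consider verifying a tiny 8-bit function against a formally stated rule over its entire input space. This is feasible only because the domain is minuscule; scaling the idea to real software is exactly the hard part:

```python
# Toy illustration: exhaustively check a small function against a formal
# rule, over its *entire* input space. Feasible here only because the
# domain is 256 x 256 values.

def saturating_add(a: int, b: int) -> int:
    """Add two 8-bit values, clamping to 255 instead of wrapping around."""
    return min(a + b, 255)

def check_all_inputs() -> bool:
    # The "specification," stated as two rules: the result always fits in
    # 8 bits, and it equals the true sum whenever that sum fits in 8 bits.
    for a in range(256):
        for b in range(256):
            r = saturating_add(a, b)
            assert 0 <= r <= 255, (a, b, r)
            if a + b <= 255:
                assert r == a + b, (a, b, r)
    return True

print(check_all_inputs())  # True: the rules hold for every possible input
```

Real software has input spaces far too large to enumerate this way, which is why practical verification works from logical statements about the code rather than brute-force loops.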
The difficulty is that creating these specifications and tests is too time-consuming and expensive. Because it’s a manual process that requires expertise, it’s also hard to scale. And formal verification has, over the years, resisted automation.
This is not the same thing as fuzz testing, where you may find some inputs that cause the software to break, but you haven’t proven there are no inputs that will cause a fault. With formal verification, you don’t know in advance how many tests are required, but when you’re done, the proof is finished, Q.E.D.
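The contrast can be made concrete with a small sketch (my own illustration; the function and its hidden fault are invented for the example). Random fuzzing samples the input space and can easily miss a rare fault; an exhaustive check either finds every fault or establishes that there are none:

```python
import random

def buggy(x: int) -> float:
    # One fault in a million inputs: x == 777_777 divides by zero.
    return 1.0 / (x - 777_777)

def fuzz(trials: int = 1000) -> bool:
    """Random testing: True if a crashing input happened to be found."""
    rng = random.Random(42)
    for _ in range(trials):
        try:
            buggy(rng.randrange(1_000_000))
        except ZeroDivisionError:
            return True
    return False

def exhaust() -> list:
    """Exhaustive check: returns *every* crashing input in the domain."""
    bad = []
    for x in range(1_000_000):
        try:
            buggy(x)
        except ZeroDivisionError:
            bad.append(x)
    return bad

print(fuzz())     # almost certainly False: ~0.1% chance of hitting the fault
print(exhaust())  # [777777], and we now know there are no other faults
```

The fuzzer’s silence proves nothing; the exhaustive sweep is what turns “we found no faults” into “there are no faults.”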
For two years now, DARPA has been funding a program to crowdsource all that analysis and testing. The trick has been to make games of it. You can see the games for yourself (and sign up to play them) at verigames.com.
Five game-design teams have joined the project, each team comprising several organizations. John Murray, the program director of the computer science laboratory at SRI International, is the leader of one of the teams, which also includes members from the University of California at Santa Cruz and the Commissariat à l’énergie Atomique, in Saclay, France.
Murray’s team recently launched their second game, dubbed Binary Fission. “You want to take very small pieces of the testing and turn those into activities,” he says. “Players solve puzzles, and in the process of doing that they’re contributing to just a little slice of the problem.”
The point, Murray says, is not to automate the finding of the final proof but to assist the verification specialists by getting the vast quantity of small tests out of the way with less direct effort.
Playing Binary Fission isn’t like playing a popular first-person shooter. It’s a more abstract, puzzle-based challenge. The goal is to sort atomic particles in as few steps as possible, but it’s not easy. You’re presented with a collection of tan and blue balls inside a circle. You have a large choice of “filters,” represented by pentagonal shapes of differing sizes. You choose one, and the contents of the circle are split into two separate circles, each holding some of the tan and blue balls. If all the blues are in one circle and all the tans are in the other, then you’ve finished that round.
Some of the early filters that seem very effective turn out to put you in situations where the final sort can’t be achieved in the number of available rounds. The logic behind the game is confusing because you don’t know what the filters are actually doing; all you see is the result, so when you’re stuck you back up and try something else. It’s not a thrill-a-minute, but it’s surprisingly mesmerizing.
Or at least that’s how it went for me. There may be logic there that I’m just not seeing, but I suspect that part of the point is that you don’t know what you’re testing in terms of specific code or input values. After all, sorting the atomic particles is a puzzle. Shooting your way through a game level is about fast physical reaction against targets you’ve seen before. “These types of puzzles don’t lend themselves to that genre,” Murray says.
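One way to read that mechanic (my own loose analogy, not the project’s actual internals) is that each filter acts like a predicate splitting a mixed population in two, and winning means finding predicates that leave each side a single color. The balls, hidden values and filters below are invented for the sketch:

```python
# Analogy only: each ball carries a hidden value the player never sees,
# and each "filter" is a predicate on that hidden value.
balls = [("tan", 3), ("blue", 8), ("tan", 1), ("blue", 12), ("tan", 4)]

# Hypothetical filters; the player only observes how they split the set.
filters = {
    "less_than_5": lambda v: v < 5,
    "is_even": lambda v: v % 2 == 0,
}

def apply_filter(population, pred):
    """Split the population into (matches, non-matches), like one round."""
    left = [b for b in population if pred(b[1])]
    right = [b for b in population if not pred(b[1])]
    return left, right

def is_sorted(group) -> bool:
    """A circle is 'done' when it holds at most one color."""
    return len({color for color, _ in group}) <= 1

left, right = apply_filter(balls, filters["less_than_5"])
print(is_sorted(left), is_sorted(right))  # True True: this filter wins the round
```

Under this reading, a filter that looks promising but leaves both circles mixed is a predicate that doesn’t cleanly partition the cases, which matches the experience of backing up and trying another one.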
So what kinds of software are being tested behind the façade of these games? Murray points to medical equipment and stock exchange systems. “It is the case that the funding is coming from DARPA, but the applications have very broad applicability,” he says.
Can crowdsourcing via gaming be used by malicious actors to find vulnerabilities? After all, discovering that a section of code can’t be proven sound is, in effect, learning something about where it may be insecure.
It’s not worth the trouble if hacking is the intent, according to Murray: “You might be better off using your resources in some other way,” he says. “Maybe you pay to compromise people involved to provide you with the source code.” Or you use the funds to outsource DDoS attacks or to buy zero-day vulnerabilities from other researchers. As always, defenders bear the burden of completeness, while attackers just need to find a single forgotten detail.
Formal verification was sidelined in the earliest days of computer security because it was impractical. This led to a long infatuation with tools for perimeter defense that created no incentives to get our primary, business-critical software up to snuff. Programs like Binary Fission give us a glimpse of a possible way to train our attention back to fundamentals of getting stuff right in the first place. Like so many other things in the Internet world, crowdsourcing offers new ways to organize huge projects into bite-sized chunks.
Robert Richardson is the editorial director of TechTarget’s Security Media Group. Follow him on Twitter @cryptorobert.