Marcus Ranum: Brian, thank you for taking the time to chat! I hope I'm not going to frustrate you too much if we...
jump straight into what I suspect is a pain point for you. It seems to me computer programming is a game of "one step forward, two steps back," and every time there's a push for quality improvements, it's immediately offset by something that seems to encourage throwing quality to the winds. Is it a lack of tools, or are the incentives wrong or backwards? Do people just not care if their programs are buggy or full of malware? I am still semi-stunned by the fact that most "Web programming" is done in an environment of trial and error. Is that an accurate perception? What's going on?
Brian Chess: This is a pain point for me, but perhaps not for the reason you suspect. I've recently taken off my code analysis hat and gotten back to writing some Web software from scratch. The last time I started this fresh was around 2000 when we were building the foundation that became NetSuite. Here are some of the things that stand out to me about software development practices, then and now:
Then: If you're serious, you put your data in Oracle.
Now: Choose between a few good open source relational databases and a dozen or more non-relational and somewhat dubious young data stores (CouchDB, MongoDB, Cassandra, etc.).
Then: Java is cool.
Now: Functional is cool. Choose Ruby, Python, Node.js, or Scala. (I'm a fan of type checking, so we went the Scala route. And while the shine has come off of Java, I still think the Java Virtual Machine is cool, and Scala runs on the JVM.)
Then: Internet Explorer sucked but it was pretty much the only game in town.
Now: The browsers are much better, but you really can't ignore IE, Firefox, Chrome, or Safari. Quadruple the testing fun!
Then: Want a server? Rack it.
Now: Want a server? Click it. Or click a few times and have a whole bunch. The world (rightfully) has some doubts about this whole cloud thing, but from a developer's perspective, it's awesome.
So in a little more than a decade, "Web programming" has become an entirely different game. It's still evolving quickly enough that anyone who wants to keep up is forced into a lot of trial and error. You can't look at this rate of change and believe anyone understands the long-term ramifications of the stuff they made up yesterday. And don't think you can pick an older stack and just stand there. The new stuff allows you to build a lot faster. The economic advantage is huge. Oh, and the security people are going to force you to upgrade anyway.
Marcus: Your comment "You can't look at this rate of change and believe anyone understands the long-term ramifications of the stuff they made up yesterday" makes my blood run cold. The security world is already desperate because of the huge code mass that has been pushed into production already; we've been stuck in the world of "penetrate and patch" bug hunting for nearly 15 years now, and we both know how well that has worked. What you've just mapped out is a software environment that is differentiating unbearably rapidly. Some of those frameworks are going to die, others are going to suck, and there are so many people who won't even take the time to make a sober assessment of what has a rosy future; they'll use whatever their favorite software blogger hero suggests, and that'll be the next big business app. So the problem isn't the lack of tools; it's the profusion of environments, which makes it unlikely that the tools (which have to be environment-specific) will come along.
Brian: Tools encode knowledge about what's safe and what's unsafe. Clever tool makers can encode their knowledge in such a way that the tool can automatically adapt to a wide range of scenarios and can be quickly updated when new kinds of problems emerge, but automation is still about preventing repeat mistakes, not discovering the next new variety of problems. Don't get me wrong, we've come a long way. We don't have to wait for a human tester to find the next buffer overflow or SQL injection vulnerability. There are good ways (static and dynamic) to find brand new instances of those vulnerability types. But when the next new kind of vulnerability comes along, we'll need new tooling to find it. So while a profusion of environments does make the job harder, it's the rate of change (and the necessary lag involved in learning about what's safe/unsafe and encoding it in a tool) that creates unavoidable exposure.
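[Editor's note: Brian's point that "tools encode knowledge about what's safe and what's unsafe" can be made concrete with a toy sketch. The rule below (flag `execute()` calls whose SQL argument is not a constant string, a classic injection warning sign) is an illustrative assumption, not the logic of any real product; the function names are made up for this example.]

```python
import ast

# A toy static-analysis rule: flag calls to a method named "execute"
# whose first argument is built at runtime (concatenation, formatting)
# rather than being a constant string -- a common SQL injection sign.
RISKY_SINK = "execute"

def find_suspect_calls(source: str):
    """Return line numbers of execute(...) calls with non-constant SQL."""
    suspects = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == RISKY_SINK
                and node.args
                and not isinstance(node.args[0], ast.Constant)):
            suspects.append(node.lineno)
    return suspects

code = '''
cursor.execute("SELECT * FROM users WHERE id = " + user_id)
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
'''
print(find_suspect_calls(code))  # flags only the concatenated query
```

Once the rule is encoded, the tool finds every new instance of this known pattern automatically; what it cannot do, as Brian notes, is anticipate the next new variety of problem.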
But I don't want to leave the impression that all change in software development practices has been bad for security. There are more and more new systems doing things like allowing for updates to their cryptographic primitives. We've seen enough change that we know today's algorithms and key lengths won't work forever, and we can build software that anticipates the eventual need for improvements. That's cool stuff, as is the idea of creating attack-aware software in which a program's notion of what constitutes an "attack" can be updated independently of the program's functionality.
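[Editor's note: one common way systems "allow for updates to their cryptographic primitives" is to tag every stored value with the scheme that produced it, so old data still verifies while new data uses a stronger algorithm. The sketch below illustrates that idea for password hashing; the version tags, parameters, and record format are assumptions made up for this example.]

```python
import hashlib
import hmac
import os

# Each scheme maps (password, salt) -> digest. When a primitive weakens,
# add a new entry and bump CURRENT; old records keep their version tag.
SCHEMES = {
    "v1": lambda pw, salt: hashlib.pbkdf2_hmac("sha1", pw, salt, 10_000),
    "v2": lambda pw, salt: hashlib.pbkdf2_hmac("sha256", pw, salt, 100_000),
}
CURRENT = "v2"

def hash_password(password: bytes) -> str:
    """Produce a self-describing record: version$salt$digest."""
    salt = os.urandom(16)
    digest = SCHEMES[CURRENT](password, salt)
    return f"{CURRENT}${salt.hex()}${digest.hex()}"

def verify(password: bytes, stored: str) -> bool:
    """Verify against whichever scheme the record says it used."""
    version, salt_hex, digest_hex = stored.split("$")
    digest = SCHEMES[version](password, bytes.fromhex(salt_hex))
    return hmac.compare_digest(digest.hex(), digest_hex)

record = hash_password(b"s3cret")
print(verify(b"s3cret", record))  # True; a "v1" record would verify too
```

Because the version travels with the data, the program's choice of primitive can change independently of the records already on disk, which is the same independence Brian describes for attack-aware software.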
Marcus: I recently saw some nice things being said about Apple's "game changer" idea of turning off a feature if it hasn't been used. Apparently, the operating environment notices that you haven't used the Flash Player (for example) and turns it off after a month of inactivity. Then, the next time you want to use it, it asks "Are you sure?" Seems like a fair idea to me, but hardly a game changer. The real point seems to be that our software environments still have a LONG way to go in terms of adding user-friendly checks to make sure the right thing is happening. I'm pretty pleased with my iPad and its constant desire to keep its software up to date. I'm still constantly gobsmacked that application whitelisting seems to be so slow catching on, compared to running antivirus and getting malware. Where do you see this going? Are there any cool ideas the industry has been overlooking?
Brian: Yes, turning off stuff you haven't used in a while isn't exactly a game changer, but there's a lot to be learned from what Apple has done in the last five years. The walled garden is a powerful concept. Users relinquish a substantial amount of control over their devices and data, but they get substantial benefits in return. For example, the Apple app store is a form of whitelisting. Apple has done a less than perfect job of making the application approval process a security screen, but the opportunity is there. And when bad code gets through, the mobile device management (MDM) service has the ability to revoke applications after the fact. I don't think these capabilities were built specifically for improving security, but they have great potential. For many years the tech industry has celebrated "open" systems (by that I mean systems that could be extended and built upon by the customer), but what I take away from watching Apple is that "open" is a major security burden.
So are we better off in the Wild West, where developers can do as they like and consumers have a devil of a time making an informed choice, or is life better in the walled garden, where Big Brother is always watching? Between iOS, Kindle, and Windows 8, we're going to learn more about the walled garden in the next few years.
Marcus: I don’t care if my garden is walled, as long as it’s good. Well, I’d better let you go, because I know you’re busy. Thank you for taking the time to talk.
Brian: Exactly! Keep in touch!