How to develop software the secure, Gary McGraw way
A comprehensive collection of articles, videos and more, hand-picked by our editors
Mobile security has been such a hot topic for so long that it has become boring and hackneyed, even when it comes to the increasingly important subject of security for mobile apps. I've noticed that every consultant who can hang up a shingle seems to be a newly minted "mobile security expert!" It reminds me of the hotshot developer I interviewed for a job in 1997 who claimed to have five years of Java programming experience. Uh, check the calendar lately? (He didn't get the job; Java was introduced in 1995.)
But as ridiculously overhyped as it is, mobile security is critically important -- and mobile app security plays a central role. In the world of bring your own device, massive convergence and mobile commerce, what is the grown-up approach to mobile app security? How can we as security professionals look past the hyperbole and the hack-a-minute headlines and formalize the way enterprises should approach mobile app security?
The three-legged stool of mobile app security
Remember that mobile security is about way more than apps. There's the chipset, the radios, the mobile OS, the virtual machines and the carriers -- just to name a few things in the supply chain from parts vendors to the end user. But we're going to concentrate on the apps in this article. Why? Because virtually no firm that produces mobile apps has any control over the chipsets, OSes, VMs, carrier shenanigans (e.g., uncoupling the SSL pipe as a "watcher in the middle"), and platform preferences of their users. Imagine that you're a multinational bank with millions of customers. Do you get to tell your customers which smartphone they have to use? No, you do not. But do your customers want a mobile app to move their bits -- um, I mean money? Yes, they do. That makes mobile app security your problem, and it's a problem enterprises in many industries now face.
From the point of view of security engineering, the mobile app security stool has three legs: everyday software security activities as applied to mobile; app store creation and curating (including the assurance testing of apps before posting); and app-situation awareness (we'll dig into this last stool leg more fully in a minute). These three legs all have to do with essentially the same thing: building something trusted (an app) to run on something busted (the device itself), also known as "trusted on busted."
Leg 1. The first thing to realize is that the mobile app platform is somewhat familiar. Your really cool smartphone is actually a portable Internet browser that fits in your pocket -- and is thus really easy to lose. When writing code for any platform, you need to do what you can to design and implement that code with security in mind, driven by your business objectives and some kind of a threat model. Everything you and your development team learned about software security applies directly to developing mobile apps. If this message sounds familiar, it's because this was exactly the point I made in my July 2012 column. What? Nothing special about mobile app security? Just the same old, same old software security stuff? Well, not quite. As I wrote, despite the familiarity, there are some important contrasts:
There are some differences in potential vulnerabilities when it comes to mobile software security, even if the security review techniques are the same. Mobile applications must more diligently authenticate their users, make fewer assumptions about the protection and transport of data, and more carefully handle the storage, creation, and deletion of subscriber data than standard-issue software.
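To make the "more carefully handle the storage of subscriber data" point concrete, here is a minimal sketch of sealing subscriber data with authenticated encryption before it ever touches device storage. The class name and API are hypothetical illustrations; in a real app the key would live in a hardware-backed keystore rather than in process memory.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Hypothetical helper: never store subscriber data in plaintext on the device.
// In production the key belongs in a hardware-backed keystore, not in memory.
public class SubscriberDataVault {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;
    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public SubscriberDataVault() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        this.key = kg.generateKey();
    }

    // Encrypt with AES-GCM; prepend the fresh random IV so open() can recover it.
    public byte[] seal(byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        random.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Decrypt and verify the GCM authentication tag; tampering throws.
    public byte[] open(byte[] sealed) throws Exception {
        byte[] iv = Arrays.copyOfRange(sealed, 0, IV_BYTES);
        byte[] ct = Arrays.copyOfRange(sealed, IV_BYTES, sealed.length);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return c.doFinal(ct);
    }
}
```

Note what the sketch assumes the platform cannot give you for free: fewer assumptions about the protection of data means encrypting and authenticating it yourself rather than trusting the filesystem.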
Leg 2. App stores are an important issue to consider when it comes to writing and publishing apps. How do your users get your app? Should your firm create its own app store? How do you vet an app before you put it in your store? What about malware? What about flaky apps that were written and shipped too fast? What about users telling the difference between your app and a fake app that looks just like yours? What about auto-update features that might get hijacked? (I had promised back in that July article to address all of this in a future article, and just like Congress and the fiscal cliff, I think I will kick the can down the road again! Future article, here we come.) For now, I'll simply advise you to consider all these questions carefully. They're important.
Leg 3. What in the heck is this "app situation awareness," you ask? It all boils down to this: Your app is going to run in a hostile environment. What should you do to try to control the risk of a rooted (jailbroken) device? What kind of permissions should your app ask for? How can we use the functions and capabilities unique to mobile devices to help with the app security situation (think geolocation, call history and so on)? Is it a problem if another app records a complete history of when, where and how your app is used?
The thorny situation that is leg 3 reminds me of Rene Descartes' "malicious demon," the 17th-century ancestor of the "brain in a vat" thought experiment. Descartes wondered how we might possibly know whether we are not simply a brain in a vat receiving just the right kinds of input from a malicious demon (as opposed to creatures with free will). Working through that question led him to his famous line: "I think, therefore I am." It turns out that Descartes asked a very good question. A typical app tends to believe everything its platform tells it. In the end, the app is the brain in the malicious demon's mobile platform vat. The app and its developers should know that it may be bamboozled, decompiled, relinked, abused and otherwise tormented, sometimes in surprising ways.
With that said, let's dig into this situation-awareness concept in more detail.
Trusted Computing, coprocessors and security
Consider leg 3 through the eyes of the poor app being told to give its heart and soul over to the processor and tech stack it's about to run on.
Scenario one: The device you need to run on is a rooted iPhone. To run or not to run? Apple prides itself on its somewhat successful "walled garden" security ecosystem. Want an app on your iPhone? It needs to be vetted by the Apple app store first. In theory, this lessens the possibility of a Trojan horse app, but in practice, you never really know. Still, let's credit Apple for trying to create a kindergarten-safe walled garden. By contrast, Android devices run apps from all over. In some (weak) sense, they come "prerooted" -- any device can run any app anytime. We've moved on from kindergarten to power tools, but many users may not be comfortable with such power.
In any case, an app must be able to determine whether the phone has been rooted. Commercial tools that detect rooting rely on a few telltale signs of classic jailbreak approaches, but there are many more ways to root a phone than there are popular jailbreaking packages. In fact, at Cigital, we're not sure it's even possible to tell if a phone has been rooted by relying on commercial tools alone. Bottom line? Drawing a line in the sand at "rooted phone" may not help much. The best bet is to treat all phones as rooted. Here's why.
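The "telltale signs" those commercial tools look for can be sketched in a few lines. This is illustrative only: the path list below is a sample of classic root artifacts, not an exhaustive or authoritative set, and as the paragraph above argues, a careful attacker can hide every one of them.

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the heuristics commercial root detectors use.
// A false result proves nothing -- treat every device as rooted anyway.
public class RootHeuristics {
    static final List<String> SUSPECT_PATHS = Arrays.asList(
            "/system/bin/su",
            "/system/xbin/su",
            "/sbin/su",
            "/system/app/Superuser.apk",
            "/data/local/xbin/su");

    // Returns true if any classic root/jailbreak artifact is present.
    public static boolean looksRooted() {
        for (String path : SUSPECT_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        // Test-signed builds are another common signal on Android;
        // System.getProperty stands in here for android.os.Build.TAGS.
        String tags = System.getProperty("ro.build.tags", "");
        return tags.contains("test-keys");
    }
}
```

The weakness is structural: the check runs on the very platform it is trying to judge, so a rooted OS can simply lie about each artifact. That is the "trusted on busted" problem in miniature.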
Scenario two: Imagine the phone is rooted. Are there extra-app data sources that you can rely on to determine how much trust to put in the user and the environment? Geolocation can, in theory, tell you where the phone is right now. Does that help? How about call history or text message history?
We have some bad news. If the phone is rooted, it can lie about many of the data sources you might want to verify. If a phone is in Wi-Fi mode, airplane mode or otherwise out of its service area, it is entirely possible to make it believe it is somewhere it's not. Remember the brain in the vat mentioned above? Well, the attacker who steals a legitimate phone is the malicious demon. Trouble indeed.
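One practical response is to stop trusting any single device-reported fix and instead sanity-check the sequence of fixes on the server, where the demon can't reach. Here is a minimal, hypothetical sketch of an "impossible travel" check: two location reports that imply faster-than-airliner movement get flagged. The class name and the 1,000 km/h threshold are illustrative assumptions, not a standard.

```java
// Hypothetical server-side check: since a rooted phone can spoof its own
// geolocation, verify the *sequence* of reported fixes on the back end.
public class ImpossibleTravel {
    static final double EARTH_RADIUS_KM = 6371.0;
    static final double MAX_PLAUSIBLE_KMH = 1000.0; // roughly airliner speed

    // Great-circle distance between two lat/lon points (haversine formula).
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_KM * 2 * Math.asin(Math.sqrt(a));
    }

    // True if covering the distance in elapsedHours is physically implausible.
    static boolean implausible(double lat1, double lon1,
                               double lat2, double lon2, double elapsedHours) {
        if (elapsedHours <= 0) return true;
        return distanceKm(lat1, lon1, lat2, lon2) / elapsedHours > MAX_PLAUSIBLE_KMH;
    }
}
```

For example, fixes in New York and then London one hour apart would be flagged, while the same pair eight hours apart would pass. The point is architectural: the plausibility judgment lives on infrastructure the attacker doesn't control.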
In the end, what we need is a solution that allows untrusted code to run safely and trusted code to protect itself from attack by the broken platform and its "malicious demon" controller. This is exactly the problem set that Trusted Computing set out to tackle some decades ago, with decidedly mixed results. Trusted coprocessors are a key part of this equation. Sadly, though some phones come with these, they are not exposed for use by app developers. Maybe they need to be.
Dang, this is more complicated than we thought! However, there is hope. In fact, there are three things you can do right away to work through these issues:
- Learn about Trusted Computing from multiple sources (the organization we pointed to above, WhatIs.com and Ross Anderson's site are several good ones) and ask the mobile platform vendors what kind of support they are planning to expose to app writers.
- Think carefully about app design and expose as little as you can in the app itself, keeping the bulk of processing on the server.
- Seek help from knowledgeable technologists who understand that the app security problem is really the Trusted Computing problem with a new twist.
- A bonus fourth tip thrown in for good measure: Don't believe what device vendors and Mobile Trusted Module people say without digging past the surface. They just want to sell you product.
Drawing lines in the sand and decomposing the impossible
Now that we've run out of space in this column, we'll resort to some quick-hit questions and answers:
- What should our mobile app security policy be?
Customers will dictate parts of your policy (like rooting their own phones). Should we let them have free rein? No. Your customers may drive your app's platforms and even its functionality, but you drive its architecture and security. Don't abdicate that responsibility.
- How do we do assessment and what do we assess?
First, make sure your code can't be directly attacked through its own bugs and flaws. Think carefully about the app security "special sauce" invoked above. Include known attacks and threat models in your design and development processes. Ensure that any third-party code you can control is built securely. Run everything through your secure software development lifecycle (you have one of those, right?). You can't protect against every attack, but you can still make attacks much harder to pull off than they might be otherwise.
- How do our enterprise architects embrace the mobile tidal wave?
Hold on tight and don't believe the vendors selling magic solutions. There is no magic solution. Remember that the back-end systems for your mobile apps are mostly the same kinds of secure back-end systems that your wise and talented security architects are already making for your "normal" software applications. Get these people involved immediately.
- What are the risk guys supposed to think?
Know first that the risk here is about the same as letting customers use their browser on their malware-infected PC to perform sensitive transactions on your current website. The Zeus toolkit has already arrived on mobile platforms, and it will be a headache. Account for this and other risks when thinking about identification, authentication, data storage, key management, encryption and so on. Not every solution is going to be a technical one on the mobile device, so don't forget about fraud detection.
- And the $50,000,000 question: What am I supposed to teach my developers when it comes to software security and mobile?
Teach them about software security just like normal, and then add in platform and tech-stack kung fu. Then, teach them first principles related to Trusted Computing and the "brain in the vat" problem so they know what really might happen to their code.
Head not spinning yet? We can fix that. Guess what? Your phone is a really small tablet, which is kind of like a laptop with no keyboard. Maybe we should talk about "tablet security" from now on?!
Thanks to Sammy Migues and Joel Scambray for comments on an early draft. Joel is working on a mobile security book that will be published in July.
About the author
Gary McGraw is the chief technology officer of Cigital Inc., a software security consulting firm with headquarters in the Washington, D.C., area and offices throughout the world. He is a globally recognized authority on software security and the author of eight best-selling books on this topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games and six other books; and he is editor of the Addison-Wesley Software Security series. Dr. McGraw has also written more than 100 peer-reviewed scientific publications, authors a monthly security column for SearchSecurity.com and Information Security magazine, and is frequently quoted in the press. Besides serving as a strategic counselor for top business and IT executives, Gary is on the Advisory Boards of Dasient (acquired by Twitter), Fortify Software (acquired by HP), Wall + Main Inc. and Raven White. His dual doctorate is in cognitive science and computer science from Indiana University, where he serves on the Dean's Advisory Council for the School of Informatics. Gary served on the IEEE Computer Society Board of Governors and produces the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine (syndicated by SearchSecurity).