Even on the sprawling 52-inch high-definition screen, the black and white image is grainy, denying observers anything that resembles usable intelligence. The man's face, taken from surveillance footage, is obscured by a hood and poorly lit; his profile is barely visible. But the confident smirk on Marios Savvides' face tells a different story. So does the sure-handed urgency with which he rhythmically taps out commands on his keyboard.
Within an instant, the man's face, barely distinguishable from the static, snowy image on the screen, begins to churn. Underneath, algorithms developed inside Savvides' biometrics lab at Carnegie Mellon University's Electrical and Computer Engineering Department whirr and whizz. Soon the outcome is there: a three-dimensional model of the subject's face that rotates at the whim of a mouse click. The animated model is a golden image, one that can be compared, for example, to images of known terrorists stored in a database.
This is typical of the security brainpower percolating on the CMU campus, in particular at CyLab, the university's cybersecurity research wing. The grad students, faculty and researchers at CyLab may not be tackling the latest Trojan to threaten networks or the latest phishing scam taking aim at personally identifiable information, but they are concentrating on facial and iris recognition software, modeling what a disgruntled insider might look like, developing simple key exchange, and examining ways to preserve the sanctity of virtual computing environments.
It's a think tank addressing tomorrow's information security concerns, a brainy conglomerate set against the brawny landscape that is the city of Pittsburgh.
FACE (AND IRIS) TIME
Savvides is also managing his charges toward the perfection of facial matching: essentially taking two-dimensional images--for example, newspaper photos or subway surveillance images--and translating them into three-dimensional computer models. Since criminals and terrorists are experts at evading detection, the system takes minimal facial characteristics and creates a three-dimensional model that can then be rotated to produce a frontal, posed two-dimensional image suitable for enrollment. The algorithm developed by Savvides' lab compares the captured information to the facial landmarks of thousands of faces stored in a database, and makes a more than educated guess at what the three-dimensional image should look like. The lab's work is filling a huge gap in facial recognition; most algorithms today require a posed, well-lit image to succeed. Not very practical, Savvides says.
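The core idea--estimating a full 3D shape from a handful of 2D landmarks by leaning on a statistical model built from thousands of stored faces--can be sketched in miniature. This is not the lab's algorithm; it is a toy orthographic least-squares fit of a PCA-style shape model, with all names and dimensions invented for illustration:

```python
import numpy as np

def fit_3d_face(landmarks_2d, mean_shape, basis, visible):
    """Estimate 3D shape coefficients from sparse 2D landmarks.

    The 3D shape is modeled as mean_shape + basis @ coeffs (a PCA-style
    statistical model learned from a face database). We solve a linear
    least-squares problem so the x,y coordinates of the visible
    landmarks match the observed 2D points; the model then predicts
    the occluded points too.
    """
    idx = np.flatnonzero(visible)
    # basis has shape (n_points, 3, n_components); keep only x,y rows.
    A = basis[idx][:, :2, :].reshape(-1, basis.shape[2])
    b = (landmarks_2d[idx] - mean_shape[idx, :2]).ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean_shape + basis @ coeffs  # full 3D model, incl. occluded points

# Toy model: 5 landmarks, 2 shape components.
rng = np.random.default_rng(0)
mean = rng.normal(size=(5, 3))
basis = rng.normal(size=(5, 3, 2))
true_coeffs = np.array([0.5, -0.3])
full = mean + basis @ true_coeffs
obs = full[:, :2]                      # observed 2D projections
vis = np.array([1, 1, 1, 1, 0], bool)  # last landmark occluded (hood)
recon = fit_3d_face(obs, mean, basis, vis)
```

With noise-free observations the fit recovers the full 3D shape, including the hidden landmark--the same "more than educated guess" the article describes, made rigorous by the statistical prior.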
"These are very challenging problems," he says.
Savvides wouldn't confirm whether any of his lab's work is already in practical application, but reading between the lines, it's not a stretch to conclude that is the case. Labs such as this one are the heart of security research, and much of the work goes publicly unheralded. But it's invaluable to those entities chasing terrorists, or soldiers on the ground in Iraq and Afghanistan who were the early adopters of facial and iris recognition technologies.
Iris recognition is another primary calling of Savvides' lab. Researchers, for example, have as many medical journals on their desks as technical tomes. Not only are they developing algorithms to pattern-match iris characteristics, but they also need to understand how diseases such as diabetes, cataracts and cancer affect iris modeling.
"Iris recognition is as unique as fingerprinting and more stable throughout a person's lifetime," Savvides says. "This allows you to identify individuals over diverse periods of time."
Acquisition of iris information is a primary thinking exercise at CyLab, since most pattern-matching algorithms are mature and in use today. The difficulty in acquiring sufficient information lies in the fact that most surveillance cameras, for example, shoot from above, and the angles aren't conducive to proper enrollment, Savvides explains. A couple of years ago, the lab acquired Sarnoff's Iris on the Move system, a seven-foot-tall portal similar to an airport metal detector. The system captures iris images while a subject walks through the portal. Unlike other capture systems, this one does not require a subject to stand still, pose and repeatedly adjust their glance into a scanner at a very close distance.
The system detects iris shape and characteristics on mobile subjects. While simultaneous detection and enrollment is not possible yet, the system is able to match captured information immediately with data stored on watchlist databases. In Iraq, for example, U.S. troops use portable iris enrollment and recognition devices; widespread use of the Sarnoff system would be useful should an insurgent make his way to U.S. soil and be stopped at the airport.
"We are very close," Savvides says. "This is extremely useful to law enforcement or in the fight against terrorism."
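The mature pattern-matching step Savvides alludes to is classically done by comparing binary "iris codes" with a fractional Hamming distance, counting only bits that both captures agree are unoccluded. The sketch below illustrates that general technique with made-up data; it is not CyLab's code:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits both masks mark as valid (not occluded by
    eyelids, lashes or glare). Distances near 0 mean the same iris;
    around 0.5 means statistically independent irises."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(1)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
# A fresh capture of the same iris: flip a few noisy bits.
probe = enrolled.copy()
noise = rng.choice(2048, size=100, replace=False)
probe[noise] ^= 1
mask = np.ones(2048, dtype=np.uint8)
hd = hamming_distance(enrolled, probe, mask, mask)
```

Here the distance works out to 100/2048, about 0.05--comfortably below the kind of match threshold such systems use--which is why the matching itself is considered solved and the hard problem is capturing a good image from a moving subject in the first place.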
SO YOU THINK YOU KNOW INSIDERS
Insiders have widely been identified as the biggest threat to assets, in particular sensitive data such as customer information or intellectual property. Insiders are pegged as threats because they frequently have unimpeded access to these assets and are often aided by lax authorization and provisioning policies that dole out credentials to more applications and systems than are necessary to do one's job.
While technology solutions, such as identity management, can solve some of the problems, IT and business managers such as human resources executives can't rely on hardware and software alone to stop the riskiest threats: privileged insiders or disgruntled employees who have been let go or are on the verge of termination.
Spotting these troubled individuals before problems are unleashed is critical. CERT/CC has developed a detailed model of what disgruntled insiders look like and the sparks that set them off.
For privileged insiders, such as system administrators and database administrators, and for those intent on causing some kind of IT sabotage, there is very little in the way of a demographic profile beyond the credentials they possess or hand out, says team lead Dawn M. Capelli.
But one thing does transcend all offenders.
"If you look at the people you work with, there are the one or two people who don't get along well with others, cause problems, can't take criticisms, and people walk on eggshells around them," Capelli says. "Those are the people who commit IT sabotage. We don't have a single case where people said, 'He was such a nice guy, I can't believe he did it.'"
That narrows the field of potentially risky insiders, but certain conditions still cause these situations to manifest, such as a withheld promotion or a lower-than-expected pay raise. These conditions usually aren't exclusive to the insider, but some people aren't able to overcome them psychologically, and they become disgruntled.
"We've validated this with all our cases," Capelli says, noting that CERT/CC has a database of 150 actual cases from which it builds and refines its models. "This is a distinct pattern."
Capelli says the insider stews, often acting out via conflicts with colleagues, tardiness or skipping work altogether. All the while, these privileged insiders are planning their attacks, knowing that termination may be inevitable and vengeance will be theirs.
"Organizations have to be watching so they can notice the signs. Some organizations are not monitoring for these signs at all, paying no attention to these people. Off the bat, they're going to be victims," Capelli says. "Technical people we talk to are not surprised by what we find in these cases. Their frustration is that management doesn't understand, and they can't get the funding and resources they need to be proactive and take the actions they know they need to take."
For those organizations that are paying attention, sanctions should be immediate. For employees who aren't predisposed to these behaviors, Capelli says a formal write-up by HR or even a demotion will bring the insider in line. The saboteurs, by contrast, will get angrier and continue to act out. The ultimate sanction, Capelli says, is termination, which triggers an attack.
"These people are sysadmins; they know your flaws and how to exploit them," Capelli says.
Technical monitoring is important. Savvy review of logs or account auditing could intercept unknown access paths laid down by the disgruntled insider, or the creation of backdoors or the insertion of logic bombs and other malware.
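The kind of log review described above--catching privileged actions such as account creation at suspicious times--can be sketched as a simple filter. The log format and account names below are entirely hypothetical; real audit logs differ, but the pattern of flagging off-hours administrative activity is the same:

```python
import re
from datetime import datetime

# Hypothetical audit-log excerpt: timestamp, admin account, action, target.
LOG = """\
2008-03-01 14:02:11 admin.jsmith useradd backup_svc
2008-03-01 23:47:05 admin.jsmith useradd maint2
2008-03-02 02:13:40 admin.rdoe  chmod /etc/init.d/logic_bomb
"""
LINE = re.compile(r"(\S+ \S+) (\S+)\s+(useradd|chmod) (\S+)")

def flag_off_hours(log, start=8, end=18):
    """Return privileged actions logged outside business hours,
    a simple signal for unknown access paths or backdoor creation."""
    alerts = []
    for m in LINE.finditer(log):
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        if not (start <= ts.hour < end):
            alerts.append((m.group(2), m.group(3), m.group(4)))
    return alerts

print(flag_off_hours(LOG))
```

A real deployment would correlate such signals with HR events--the behavioral warning signs Capelli describes--rather than rely on timestamps alone.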
"The pattern is distinct where things start going downhill," Capelli says. "There is time to head this off."
The IT saboteur model is the most mature; Capelli says CERT/CC is working on a model for insider theft of confidential information, as well as a model of an insider who commits fraud.
CERT/CC is also developing an insider threat diagnostic, which is being funded by CyLab, that it can take to organizations to help them evaluate the insider threat and what can be done about it. Capelli says the diagnostic is a three-to-five-day onsite visit to an organization where managers are interviewed about their processes, policies and technologies based on the thousands of technical vulnerabilities and psychological traits generated from the hundreds of cases in CERT/CC's database.
"If a manager understands the signs, they can work with IT, HR, legal and others and come at the problem of insider threats together," Capelli says. "We're trying to raise awareness and perhaps give IT a justification of the problem they can take higher up to management to get more resources to fight the problem."
Capelli's teams have embarked on their first pilots, and they're asking pointed questions--for example, about account auditing and how often new accounts are vetted and with whom.
"Our report will identify areas of concern and say whether they're easy or difficult to fix," Capelli says. "Easier problems can be fixed at a lower cost, and maybe they'll look at the harder ones and fold them into their overall enterprise risk management strategy."
DEEP THOUGHTS, PRACTICAL SECURITY
Adrian Perrig has the skills, but his greatest gift is context. He adeptly associates problems with solutions, though, perhaps to the horror of most security professionals, he experiments with putting security in the hands of the user or within the interaction between users.
"My group in particular is concerned about people who don't have computer science degrees and Ph.D.s in security. Even I have problems using and configuring products," says Perrig, associate professor at CMU and CyLab technical director.
"I approach security by thinking about my family and how they deal with it. I have friends with Ph.D.s in computer science who take three hours to install their 802.11 access point security. We're just trying to create security that's easy to use."
One such project, developed by Perrig and CMU colleagues Michael K. Reiter (who has since left CMU) and Jonathan M. McCune, is the Seeing is Believing (SiB) protocol, which enables secure communication between mobile devices that have no contextual relationship. The protocol employs two-dimensional barcodes tied to the devices' respective public encryption keys. The barcode is photographed by the second SiB-enabled device, which decodes it, then contacts the first device via Bluetooth to obtain another copy of the public key. If the two match, the devices are authenticated and secure communication can happen without the need for a certificate authority.
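The trick that makes this work is that the barcode travels over a channel an attacker cannot forge--a photograph--so it can vouch for a key that travels over an untrusted wireless link. A minimal sketch of that idea, with illustrative names rather than the actual CMU implementation, using a hash of the key as the barcode payload:

```python
import hashlib
import os

def make_barcode_payload(public_key: bytes) -> bytes:
    """What a device encodes into its on-screen 2D barcode: a short
    commitment (hash) of its public key."""
    return hashlib.sha256(public_key).digest()

def verify_received_key(barcode_payload: bytes, key_from_bluetooth: bytes) -> bool:
    """Run by the camera-side device after decoding the barcode and
    fetching the peer's key over the untrusted Bluetooth link."""
    return hashlib.sha256(key_from_bluetooth).digest() == barcode_payload

alice_key = os.urandom(32)             # stand-in for a real public key
payload = make_barcode_payload(alice_key)

honest = verify_received_key(payload, alice_key)    # genuine peer key
mitm_key = os.urandom(32)
spoofed = verify_received_key(payload, mitm_key)    # attacker-substituted key
```

The honest key verifies; the substituted one does not, because the attacker cannot alter what the camera saw--which is exactly the man-in-the-middle resistance the article describes next.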
"Whenever we need to use encrypted email, we need to trust certificates. There are a lot of problems with certificates," Perrig says. "With this system, you get rid of the certificate authority and essentially create your own."
Perrig sees several important business applications of his protocol, most notably in collaborative settings where certificates aren't necessarily well managed (see "Goodbye PKI, Hello AIP," below).
"This technology sets up a trusted relationship, without PKI, so it's much cheaper," Perrig says. "It would need pretty much no infrastructure; it just locally works. If you have people from different companies and have the system installed, you can instantly set up keys and securely communicate."
SiB also reduces the chances of falling victim to a man-in-the-middle attack, in which an attacker spoofs one end of a communication and reroutes traffic to himself. SiB has a built-in failsafe that detects the intercession of another key and asks whether the user wants to allow it access. In most cases, this key would be an attacker's.
"We want to provide security that is easy to use and provides security guarantees in all aspects of a transaction," Perrig says.
Guarantees are another thing Perrig is big on. Take, for example, his guarantee that a tiny 1,000-byte piece of hypervisor code he co-wrote with fellow CyLab researchers Arvind Seshadri, Mark Luk and Ning Qu, called SecVisor, will protect an operating system against any malware in the wild today.
"SecVisor write-protects the kernel so that no one can access it," Perrig says. "It will only allow a list of modules that are allowed to run on a particular OS and only permits this software to execute."
SecVisor stops kernel-level rootkits in their tracks and even detects the undetectable Blue Pill virtual rootkit, Perrig says. Only code approved by an admin is executed with kernel privilege, and all code loaded into the kernel is checked against this policy before it runs.
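SecVisor enforces its policy with hardware memory protections inside a hypervisor; the approval logic itself, though, amounts to a whitelist of code the administrator has blessed. The sketch below illustrates only that policy check, in ordinary user-space Python with placeholder bytes, not the real system:

```python
import hashlib

# Admin-approved hashes of code allowed to run with kernel privilege.
APPROVED = set()

def admin_approve(module_code: bytes) -> None:
    """Administrator adds a module's hash to the whitelist."""
    APPROVED.add(hashlib.sha256(module_code).hexdigest())

def may_execute_in_kernel(module_code: bytes) -> bool:
    """Policy check: only code on the approved list may be mapped
    executable with kernel privilege; anything else (e.g. a rootkit
    trying to load itself) is refused."""
    return hashlib.sha256(module_code).hexdigest() in APPROVED

good_driver = b"\x55\x48\x89\xe5 placeholder driver bytes"
rootkit = b"\x90\x90\xcc placeholder rootkit bytes"
admin_approve(good_driver)

allowed = may_execute_in_kernel(good_driver)   # approved code
blocked = may_execute_in_kernel(rootkit)       # unapproved code
```

The crucial difference in the real system is that this check is made tamper-proof: because the hypervisor sits beneath the kernel and write-protects it, even fully compromised kernel code cannot edit the whitelist or bypass the check.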
"SecVisor virtualizes the physical memory, which allows it to set hardware protections over kernel memory that are independent of any protections set by the kernel," Perrig, et al, write in a paper describing the project.
Virtualization is hot right now. Companies are consolidating servers and systems to cut licensing costs and conserve data center space and, more importantly, energy consumption. Many organizations are moving forward on these projects with little consideration for the security of virtual environments. Attacks against the kernel are especially dangerous because usually, once a kernel is owned, it's owned forever.
Perrig says he's ported SecVisor to Windows and Linux, with very few modifications to either OS. He says SecVisor could be commercialized soon.
"Microsoft is very interested in it; we're talking to them about adopting some of the technology," Perrig says. "From the time I sent them an email about SecVisor, I had a response within hours, and within days they sent me a disk with the source code of the Windows kernel."
CyLab boasts 50 faculty and more than 130 graduate students, all of whom are contributing to a diverse set of projects such as SecVisor, Seeing is Believing, insider threat modeling or facial matching. Additional work is being done around privacy, risk management, and more technical areas such as audio CAPTCHA for authentication, botnet detection and e-voting security.