SAN FRANCISCO -- If you can't code, you can't do security at Google.
Engineering-friendly, email-happy Google sports an information security team comparable to that of most Fortune 100 enterprises, but with an important distinction: The people charged with protecting both its customer-facing applications and internal operations must be developers.
"Google has a decidedly go-it-alone, unconventional approach to solving problems," security director Scott Petry said Tuesday during an interview at RSA Conference 2008. "This is most evident in the value of security inside engineering."
Security engineers write shared libraries -- authentication modules, routines that pull data from logs, and modules that apply cryptography to static data -- which are used company-wide.
"They write these libraries and they're embraced across the organization. If they're not, you're violating a standard that Google has set," Petry said. "We put all of our security eggs in one basket; people who know a narrow set of technology are chartered with defining how applications are secured."
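The shared-library model Petry describes can be sketched in miniature: one vetted code path that every team reuses rather than reimplementing its own crypto. The class and API below are illustrative inventions, not Google's actual code.

```python
# Hypothetical sketch of a centrally owned security library: one team
# writes and maintains it, every other team is expected to call it.
import hashlib
import hmac
import secrets


class TokenSigner:
    """Company-wide helper for signing and verifying opaque tokens."""

    def __init__(self, key: bytes):
        self._key = key

    def sign(self, payload: bytes) -> bytes:
        return hmac.new(self._key, payload, hashlib.sha256).digest()

    def verify(self, payload: bytes, signature: bytes) -> bool:
        # Constant-time comparison avoids timing side channels --
        # exactly the kind of detail a shared library gets right once,
        # for everyone.
        return hmac.compare_digest(self.sign(payload), signature)


signer = TokenSigner(secrets.token_bytes(32))
tag = signer.sign(b"user=42")
assert signer.verify(b"user=42", tag)
assert not signer.verify(b"user=43", tag)
```

The point of the pattern is the single choke point: if a weakness is found in the signing scheme, it is fixed in one place rather than hunted down across every product team.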
That philosophy permeates the enterprise. Google has reached the plateau most companies only talk about: it has integrated security into its development lifecycle. New developers, known as Nooglers, face rigorous multi-day security training seminars before they are assigned to a team or a project; during the seminar they're grilled on and taught everything from policy and process development to code hacking. Production code must pass peer-review muster before it goes live, not only with security teams but with nearly any member of the Google engineering community, depending on the size and scope of the project.
"No one person is authorized to write code into production," Petry said.
Petry is the founder of Postini, a hosted email security company the search giant acquired last July; it delivers services such as message security, encryption, archiving and policy enforcement to Gmail users. He explained that Google has to take a black-box approach to application-level security because input variables are infinite with today's Web applications, making it impossible to anticipate every outcome. Peer reviews are therefore essential to the development process.
"We don't know what our endpoints are doing, and we don't know what the vulnerabilities are," Petry said. "We have to get more eyes to fill those gaps."
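The black-box reasoning Petry outlines -- inputs are unbounded, so you exercise behavior rather than enumerate cases -- is the same idea behind simple fuzz testing. A minimal sketch, in which both the handler under test and the harness are hypothetical:

```python
import random
import string


def parse_query(q: str) -> dict:
    # Hypothetical endpoint code under test: a naive key=value parser.
    result = {}
    for part in q.split("&"):
        if "=" in part:
            key, _, value = part.partition("=")
            result[key] = value
    return result


def fuzz(handler, trials: int = 1000) -> None:
    # Throw randomly generated strings at the handler; any unhandled
    # exception is treated as a bug, since real inputs are unpredictable.
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(trials):
        q = "".join(
            rng.choice(string.printable)
            for _ in range(rng.randint(0, 40))
        )
        handler(q)  # raises if the handler chokes on arbitrary input


fuzz(parse_query)
```

This is deliberately the opposite of a spec-driven test suite: instead of asserting known-good outputs, it probes for the unknown failure modes Petry says Google cannot anticipate.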
Google also gathers intelligence from outside attacks and probes against its infrastructure, such as cross-site scripting and buffer overflow attempts. It keeps a database of these attacks and replays them against production code in order to prevent exposures.
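The replay process the article describes amounts to a regression suite built from observed attack payloads. A small sketch of the idea; the payload corpus, the rendering function and the pass/fail checks are all illustrative assumptions, not Google's actual tooling.

```python
# Sketch: replay logged attack payloads against candidate production
# code before release, failing the build if any payload gets through.
import html

# Hypothetical corpus of attacks previously seen in the wild.
ATTACK_DB = [
    "<script>alert(1)</script>",   # cross-site scripting probe
    "A" * 10_000,                  # oversized-input / overflow probe
    "'; DROP TABLE users; --",     # SQL injection probe
]


def render_comment(text: str) -> str:
    # Candidate production code: must neutralize hostile input by
    # truncating and HTML-escaping it before rendering.
    return "<p>{}</p>".format(html.escape(text[:4096]))


def replay_attacks(render) -> list:
    """Return the payloads that were NOT neutralized (empty = pass)."""
    failures = []
    for payload in ATTACK_DB:
        out = render(payload)
        if "<script>" in out or len(out) > 8192:
            failures.append(payload)
    return failures


assert replay_attacks(render_comment) == []
```

Each new attack observed against the live site would extend the corpus, so every future release is automatically tested against everything attackers have already tried.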
"Google is probably the highest-value target on the Internet today. The bigger we are, the more we're attacked and the quicker we learn what others are doing against us," Petry said. "You can look at them as criminals, or consider it research. Attacks are lessons."
Petry also discussed Google's policy of responsible disclosure and response. Google hopes hackers will share their vulnerability research with the company; when they do, Google categorizes the research, replicates it, validates it with an engineer and responds to the researcher with a patch timeframe. In turn, Google asks that the researcher not go public with the finding.
"It's a popular process inside of Google and we find that most of the time, people apply it," Petry said. He added that Google publishes a list of acknowledgements online recognizing researchers who have reported bugs responsibly.