Information Security

Defending the digital infrastructure

Machine versus the bots: Does your website pass the Turing 2.0 test?

New Web security models use browser behavior and polymorphism to protect against data theft and fraud.

When security people talk about botnets these days, a lot of the focus is on crippling DDoS attacks. But if your team secures a Web-facing retail business, bots are going full tilt at your storefront all the time for reasons that have nothing to do with denial of service.

Botnets are sometimes deployed to take snapshots of user behavior or steal content. Your competitors, for example, may be scraping your Web pages to build up databases of your available inventory.

If your Internet business is susceptible to ad fraud or click fraud, you're likely losing significant money to bots: The Interactive Advertising Bureau reported last year that click fraud racked up $11 billion in losses worldwide.

Stopping non-DDoS botnet attacks entails two key elements. On the "pwned" endpoint, protection largely boils down to detecting the malware itself. But how often do you get the opportunity to scan your customers' computers? And even if you do get access, the success rate of most scanning tools is far from perfect.

On the Web application side, there's a kind of inverted Turing test going on. The Turing test, as you may recall, is an artificial intelligence exercise Alan Turing proposed in 1950. It asks whether a human interrogator, given a set of written responses to questions, can discern whether an interlocutor is a human or a computer trying to pass as a human. With the "inverted" approach, a computer is tasked with figuring out, based on queries about the state of browsers and related activity, whether Web requests are coming from a human user or a malicious bot.

Bot signals

At least two companies have developed security technology that automates this inverted interrogation approach: White Ops in New York probes how the user’s or botnet’s browser handles JavaScript requests and blocks fraudulent traffic. Distil Networks in San Francisco takes a more general look at browser behavior to create visitor profiles and automated threat response based on behavioral machine learning.

White Ops is the brainchild of co-founder and chief scientist Dan Kaminsky, who's probably best known for finding a show-stopping cache-poisoning flaw in the DNS system back in 2008. At last year's RSA Conference, Kaminsky told me that it's easy to spot a bot: "Your attacker in Shanghai stays in Shanghai. No matter how clever the exploit is, it's not going to teleport him in front of the computer."

That’s assuming someone is actually looking for malicious Web activity. “There’s little awareness of bots out there right now,” says Rami Essaid, Distil Networks’ co-founder and CEO. “Web admins don’t know how much bot traffic they have, or what’s really going on.”

Websites nowadays use JavaScript all over the place. Botnet writers need a browser that not only pulls down raw Web pages, but also interprets JavaScript. To sidestep the huge complexity of implementing a browser and JavaScript engine from scratch (just imagine how much Microsoft has invested by now in Internet Explorer), botmakers invariably borrow the internal workings of other browsers. But they don't do this by emulating keystrokes or fully rendering page elements onscreen. Botmakers use browsers in ways that won't blow their cover, Kaminsky says, but the internal state of the JavaScript engine doesn't match what you'd find if a real user were typing at the keyboard. The White Ops fraud sensors query the internal state of the browser, and wrong answers expose potential bots.
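White Ops' actual sensors are proprietary, but the core idea -- catch a browser whose claimed identity contradicts its observed engine internals -- can be sketched in a few lines. Everything here is invented for illustration: the probed property names, the per-browser profiles, and the decision rule are hypothetical, not White Ops' real checks.

```python
# Hypothetical sketch of browser-state consistency checking, in the spirit of
# the "inverse Turing test" described above. Real fraud sensors probe far more
# internals; these property names and expected values are invented.

# Illustrative JavaScript-engine traits expected per browser family.
ENGINE_PROFILES = {
    "Chrome": {"engine": "V8", "has_chrome_object": True},
    "Firefox": {"engine": "SpiderMonkey", "has_chrome_object": False},
}

def looks_like_bot(claimed_browser: str, probed: dict) -> bool:
    """Return True if probed engine state contradicts the claimed browser."""
    profile = ENGINE_PROFILES.get(claimed_browser)
    if profile is None:
        return True  # unrecognized browser claim: treat as suspicious
    # Any mismatch between the claim and observed engine behavior is a tell.
    return any(probed.get(key) != value for key, value in profile.items())

# A scraper claiming to be Chrome but built on borrowed Firefox internals:
print(looks_like_bot("Chrome", {"engine": "SpiderMonkey",
                                "has_chrome_object": False}))  # True
# A browser whose probed state matches its claim:
print(looks_like_bot("Chrome", {"engine": "V8",
                                "has_chrome_object": True}))   # False
```

The point of the sketch is the asymmetry: a botmaker can fake any single answer, but keeping every probed property consistent with a genuine browser is much harder.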

Distil Networks takes a similar approach, using a statistical profile of server traffic from real users. “You can think of [it] as like a captcha,” Essaid explains, “but completely transparent, behind the scenes and on every single page.” And this detection technique turns up lots of bots, too. “The botmakers don’t know what your traffic is supposed to look like, and hard as they try, they tend to look either really systematic or really random,” he says.
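Distil's behavioral models are likewise proprietary, but Essaid's "really systematic or really random" observation suggests a toy version: measure how regular a visitor's inter-request timing is and flag the extremes. The thresholds below are invented for illustration and bear no relation to Distil's real classifiers.

```python
# Toy statistical profile of request timing, illustrating the idea that bots
# tend to be either metronome-regular or noise-random. Thresholds are invented.
import statistics

def interval_suspicion(timestamps: list[float]) -> bool:
    """Flag a visitor whose request timing is far more regular or far more
    erratic than a plausible human baseline."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    # Coefficient of variation: spread of gaps relative to their mean.
    spread = statistics.stdev(gaps) / statistics.mean(gaps)
    # Humans are neither metronomes (spread near 0) nor pure noise (huge spread).
    return spread < 0.1 or spread > 2.0

# A scripted crawler hitting a page exactly every 2 seconds:
print(interval_suspicion([0.0, 2.0, 4.0, 6.0, 8.0]))   # True (too systematic)
# Irregular, human-like browsing:
print(interval_suspicion([0.0, 3.1, 4.8, 9.5, 12.2]))  # False
```

Like the captcha Essaid compares it to, a check of this kind runs entirely behind the scenes: the visitor never sees the test, only the botmaker fails it.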

Shape shifting

Another approach to Web user interface security is worth mentioning because it differs considerably from the techniques Distil Networks and White Ops use. With Shape Security, a startup in Mountain View, Calif., the idea is to present bots with Web pages that are different every time a browser loads a page. The page, when fully rendered on screen, will look perfectly normal to a user, but a bot that is trying to, say, scrape the screen contents, will never be able to orient itself to the contents of the page the same way twice. Shape’s network security appliance introduces polymorphism -- a common malware technique -- to Web pages.
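Shape's appliance rewrites pages in flight; as a rough sketch of the polymorphism idea, consider randomizing form field names on every render while keeping a server-side map back to the real names. The function and field names below are hypothetical, not Shape's implementation.

```python
# Illustrative sketch of page polymorphism: field names change on every render,
# so a bot that hard-codes selectors never finds the same page twice.
import secrets

def polymorph_form(template: str, field_names: list[str]) -> tuple[str, dict]:
    """Rewrite a form with fresh random field names, returning the rewritten
    HTML and a server-side map from each random alias to the real name."""
    alias_to_real = {}
    html = template
    for name in field_names:
        alias = "f_" + secrets.token_hex(4)  # fresh random alias per render
        alias_to_real[alias] = name
        html = html.replace(f'name="{name}"', f'name="{alias}"')
    return html, alias_to_real

template = '<input name="card_number"><input name="cvv">'
html, mapping = polymorph_form(template, ["card_number", "cvv"])
html2, _ = polymorph_form(template, ["card_number", "cvv"])
# Two renders of the same page share no field names:
print(html != html2)  # True
```

To the human user nothing changes: the rendered page looks identical every time. Only the markup underneath keeps shifting, which is exactly what a screen scraper depends on staying still.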

Your run-of-the-mill screen scraper will stumble over polymorphism, but clearly there’s an arms race between botmakers and Web application defenders. Attackers may well find their way past the polymorphism (consider, after all, that the rendered page will show all the data just as it’s supposed to look). When it comes to querying the state of the browser, any single question asked in the inverse Turing test can, on a one-off basis, be given a convincing potted answer.

Nevertheless, the approaches of all three companies seem to turn the tables in an interesting way, giving the attacker a harder job than the defender -- an absolute rarity in computer security.

Robert Richardson is the editorial director of TechTarget’s Security Media Group. Follow him on Twitter: @cryptorobert.

This was last published in April 2015
