As fears of government surveillance grow, the phrase “facial recognition” often triggers public panic, and some commentators are exploiting that fear to overstate the risks of Apple’s new Face ID security system.
One of the more common misunderstandings concerns who has access to Face ID data. Some claim Apple is building a giant database of facial scans that the government could compel Apple to share through court orders.
However, the same security measures that stymied the FBI’s attempts to bypass the Touch ID fingerprint scanner on iPhones also protect Face ID. According to Apple, the facial scan data is stored in the Secure Enclave on the user’s iPhone and is never transmitted to the cloud; this is the same system that has protected users’ fingerprint data since Touch ID debuted in 2013.
Over the past four years, Touch ID has shipped on hundreds of millions of iPhones and iPads, and not one report has surfaced of that fingerprint data being compromised, harvested by Apple or shared with law enforcement. Yet some claim this system will suddenly fail simply because it now holds Face ID data.
The fact is that data stored in the iOS Secure Enclave is accessible only on the specific device. Apple has built in layers of security so that even Apple cannot access that data, let alone share it with law enforcement, without rearchitecting iOS at a fundamental level, a burden that has so far proved too high to be compelled by court order.
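For developers, that isolation is visible in the API itself: third-party apps never touch the underlying biometric data, only a pass/fail answer from the system. A minimal sketch using Apple's LocalAuthentication framework (the prompt string is an illustrative placeholder):

```swift
// Sketch of how an iOS app requests Face ID / Touch ID authentication
// via Apple's LocalAuthentication framework. The app receives only a
// success/failure result; the biometric template itself stays inside
// the Secure Enclave and is never exposed to the app, Apple or anyone else.
import LocalAuthentication

let context = LAContext()
var authError: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                             error: &authError) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your notes") { success, error in
        // `success` is all the app ever learns about the scan.
        print(success ? "Authenticated" : "Denied: \(String(describing: error))")
    }
} else {
    // Biometrics unavailable (no sensor, nothing enrolled, etc.).
    print("Biometric authentication not available")
}
```

Note that there is no API for reading, exporting or enumerating enrolled faces or fingerprints; the design choice is that matching happens entirely inside the Secure Enclave.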
Face ID security vs facial recognition
This is not to say that there is no reason to be wary of facial recognition systems. Having been a cybersecurity reporter for close to three years now, I’ve learned that there will always be an organization that abuses a system like this or fails to protect important data (**cough**Equifax**cough**).
Facial recognition systems should be questioned, and the measures protecting user privacy should be scrutinized. But just because a technology feels “creepy” or could be abused doesn’t mean all logic should go out the window.
Facebook and Google have used facial recognition for years to help users tag faces in photos. Those are genuine databases of faces, and they should be far more worrying than Face ID, because the companies holding those databases could be compelled to share them via court order.
Of course, one could also argue that many of the photos used by Facebook and Google to train those recognition systems are public, and it is known that the FBI has its own facial recognition database of more than 411 million photos.
Creating panic over Apple’s Face ID when comparable facial data already sits in the hands of law enforcement is little more than clickbait, and it is unworthy of the outlets spreading the fear.