Quantum computing may not be at the point where it can instantly break current encryption protocols, but Steve Grobman believes that when that day finally arrives, we likely won't know it.
Grobman, senior vice president and CTO of McAfee, is urging faster action on developing and implementing new encryption protocols that can stave off quantum computing threats. Unfortunately, he says, the infosec industry isn't moving fast enough. Furthermore, Grobman believes it will be extremely difficult -- and perhaps impossible -- to determine if nation-states have encryption-breaking quantum computing systems.
In part one of our conversation with Grobman, he discussed the need for caution regarding artificial intelligence and machine learning models for cybersecurity. In part two, he talks about the development of quantum-resistant encryption, what's holding those efforts back and how enterprises should prepare for quantum computing threats.
Editor's note: This interview has been edited for length and clarity.
You've talked in the past about quantum computing in relation to cybersecurity. How worried should we be about quantum computing threats?
Steve Grobman: I think as a society, we should be worried. And here's why -- we rely on a set of algorithms to protect data in essentially everything that we do. The same algorithms, primarily the public key algorithms like RSA and elliptic curve, are used in SSL to protect our web traffic, and they're used in data protection. And the point that's being missed in this discussion is that the problem doesn't start when quantum computing becomes practical enough to break these algorithms. The problem is now, and the reason is an adversary can take interesting encrypted data and just put it on the shelf, wait for quantum computing to become viable for breaking public key encryption, and then act on it at that point.
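The "put it on the shelf" risk Grobman describes can be framed as a simple timing check, sometimes called Mosca's inequality: if the time data must stay secret, plus the time it takes to migrate to new encryption, exceeds the time until quantum attacks become practical, then data intercepted today is already exposed. The Python sketch below is purely illustrative; the year figures are hypothetical assumptions (the migration figure echoes the interview's six-to-seven-year estimate), not predictions.

```python
# Illustrative sketch of the "harvest now, decrypt later" timing risk.
# All year figures are hypothetical assumptions, not predictions.

MIGRATION_YEARS = 7       # rough 6-7 year estimate to deploy quantum-resistant encryption
QUANTUM_BREAK_YEARS = 15  # assumed years until quantum attacks on RSA/ECC are practical

def harvested_today_at_risk(secrecy_years: float) -> bool:
    """Data captured off the wire today is exposed if it must stay
    secret longer than it takes for quantum decryption to arrive."""
    return secrecy_years > QUANTUM_BREAK_YEARS

def mosca_inequality_violated(secrecy_years: float) -> bool:
    """Mosca-style check: required secrecy plus migration time must not
    exceed the time until a practical quantum attack exists."""
    return secrecy_years + MIGRATION_YEARS > QUANTUM_BREAK_YEARS
```

With these assumed figures, data that must stay secret for 25 years fails the check even though the hypothetical break is over a decade away, while data with a two-week horizon does not.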
I think there are a few things that we need to do differently. At the highest level, we have to go faster; unfortunately, the way that we're looking at developing quantum-resistant encryption algorithms is akin to the way that we would create the next version of a protocol, such as going from TLS 1.2 to TLS 1.3. Yes, there are advantages [with the latest version], but if it takes a year or two to get there, then such is life.
And right now, NIST is doing a good job on the technical vetting of some of the proposed new algorithms, but we're looking at the next set of steps very linearly. It's going to take a couple of years to get new algorithms defined. And then we're going to start looking at protocols and figure out, 'OK, for these new algorithms, how do they actually work in the new protocols?' And then companies will start looking at, 'Well, how do we transition from the old protocols to the new protocols?' And by the time all of those things are done, we could easily be looking at six or seven years before we're using quantum-resistant encryption in the mainstream.
I don't want to be one of those people who just says we have a problem without offering prescriptive advice, so I think there are two things we can do. From an industry perspective, we need to parallelize our work and start looking at the protocol impact and the product impact even before we make final decisions on things like the algorithms themselves. We can look at the top candidates and at least start doing some rough engineering work around the likely ones. And if we have to do a little bit of rework, then I think that's a reasonable price to pay for getting ahead of the game. From a customer or organization perspective, they can start triaging their data today.
And the most important thing is to understand the difference between data sensitivity in the moment and data sensitivity over a time horizon. I'll give you an example of a very sensitive piece of data that has a very short time horizon: earnings for a publicly traded company prior to public release. The data is going to become public in two weeks, so you probably don't need to worry about encrypting it with quantum-resistant technology, because all you care about is that nobody sees it for two weeks.
The smaller the window, the less of a priority it is for quantum-resistant encryption, then?
Grobman: Right. There's a lot of encrypted data that has a requirement for long-term confidentiality or secrecy, especially in government. In business, certain aspects of intellectual property or trade secrets have long-term secrecy requirements. If you're Coca-Cola, the secret recipe is something that needs to be kept secret for the foreseeable future. Your trade secrets are things you care about. When you look at your information, understanding this timeline of confidentiality requirements is the key thing you need to triage, so that you can then ask, number one, is my data isolated if I know that I can't protect it? Number two, start building the plan for when technologies become commercially available. Focus on the most time-sensitive things first, and then move throughout your organization.
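The triage Grobman recommends amounts to inventorying data by its confidentiality horizon and isolation, then migrating the longest-lived, least-isolated data to quantum-resistant encryption first. Here is a minimal sketch with a hypothetical inventory loosely based on the interview's examples; the asset names and figures are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    secrecy_years: float  # how long the data must remain confidential
    isolated: bool        # kept off networks an adversary could tap?

# Hypothetical inventory echoing the interview's examples.
assets = [
    DataAsset("pre-release quarterly earnings", 0.04, False),  # ~2 weeks
    DataAsset("customer web traffic (TLS)", 5.0, False),
    DataAsset("trade-secret recipe", 50.0, True),
]

# Longest secrecy requirement first; among equals, non-isolated data first.
migration_order = sorted(assets, key=lambda a: (-a.secrecy_years, a.isolated))
for asset in migration_order:
    print(asset.name)
```

Under this ordering, the 50-year trade secret tops the migration list while the two-week earnings data falls to the bottom, matching the triage logic described above.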
I think those are some of the key things. The final point that I'd make is I think we're naive if we believe we'll actually know when nation-states have become successful in implementing quantum computing deployments that are able to break our public-key cryptographic algorithms. And I think it takes no more than looking to history as an example. The Allies didn't disclose that Enigma had been cracked for three decades.
I'm glad you mentioned that. Is there no practical way to determine if quantum computing threats have broken current encryption protocols?
Grobman: If a country was able to develop and successfully implement quantum computing for the purpose of breaking RSA encryption, they're not going to tell anyone. At some point, academia or the private sector will make advances that might show that it's plausible. But I think we have to be realistic and understand that the largest investors in this area operate in such a way that it is highly unlikely we will actually be aware when they are successful.
I suppose the only way that you would know if quantum computing broke your encryption is in a case where there's only one instance of that data, you know what it was encrypted with, there was a limited number of people that had access to it, and there was no trace that there was any insider threat or any type of access control issue.
Grobman: But I think even in that scenario, with all of those assumptions you need to make, there's a probability that one of them is actually flawed. You can assume that none of the people with access to the data were insider threats, but can you be 99.99% sure? Could that actually be the way the data was leaked? Or could it be flaws in the implementation of existing algorithms? It's not enough to have strong algorithms; we need strong implementations of those algorithms. If data is suddenly leaked, was it because the algorithm was cracked, or because a government agency identified a vulnerability that it chose not to disclose? We should be aware of the [quantum computing] threat, but it's important we don't jump to conclusions as well.