Transparency after a cyber attack: How much is too much?

Sharing threat intelligence and proof-of-concept exploits can often help other organizations better defend themselves, but such efforts are hampered by obstacles and restrictions.

While infosec experts agree that publicly available proof-of-concept exploits and other information can be weaponized by threat actors, the implications of not sharing information can be far greater.

How transparent a company should be following an attack has been an ongoing debate in the infosec industry, one heightened by recent major cyber attacks. Because indicators of compromise (IOCs) and proofs of concept (PoCs) can offer a peek into an enterprise's security posture, some argue that such sensitive information should be kept private or not revealed until long after the investigation is complete. On the other hand, full cyber attack transparency can help security researchers and the broader community better understand the threat landscape and the immediate risk to other organizations.

For example, SolarWinds was applauded for its transparency following a massive supply chain attack last year. During an RSA webcast last month titled, "SolarWinds: The Detailed Account of the Incident Response," Ronald Plesco, partner at law firm DLA Piper, recalled the moment CrowdStrike discovered the backdoor inside Orion software updates and the rapid communication that followed. Plesco credited SolarWinds for the company's decision to go public on Dec. 13 -- the day after it was informed of the breach by FireEye -- with a security advisory that disclosed as much information as it had at that point.

[Image: Ronald Plesco and other experts at an RSA Conference webcast] In an RSA Conference webcast about the SolarWinds incident response effort, Ronald Plesco, partner at law firm DLA Piper, applauded the vendor's decision to publish a security advisory one day after learning of the breach.

"I've never had a client, in all the investigations I've done, either under attorney-client privilege with another law firm or myself as the attorney, say, 'Let's go public,' when they don't know the results of the investigation yet," Plesco said during the webcast. "Putting in the time and effort to do that in the middle of one the most complex investigations in history -- it's just unheard of."

In addition to SolarWinds sharing the information with law enforcement and government agencies, incident response (IR) partners CrowdStrike and KPMG published blog posts for the general community, as well.

"SolarWinds was all about transparency with the investigation, getting out in front of that, because they firmly believe, as do I, what they did there is definitely happening elsewhere or attempted elsewhere, and to get word out to prevent that is literally the right thing to do," Plesco said.

Many security researchers and threat analysts favor maximum transparency for cyber attacks and vulnerabilities. Some technology companies believe transparency is a badge of honor.

Omar Santos, principal engineer at Cisco who works on the company's Product Security Incident Response Team, told SearchSecurity that the company aims to publish advisories for everything -- from vulnerabilities discovered internally to those found by the IR team or ethical hackers. Even when the Cisco IR team merely suspects active exploitation and no patch is available, Santos said, it will still go public.

"We are super transparent. That's one thing that we take pride in," Santos said.

But a tug-of-war has emerged in recent years over what information should be shared and disclosed to the public. While researchers and incident response professionals routinely upload malware samples and PoC exploits to public services like VirusTotal and GitHub, they sometimes face obstacles such as restrictions set by those services, as well as nondisclosure agreements (NDAs) imposed by employers or vendors.

For example, GitHub came under fire earlier this year when it removed a PoC for the ProxyLogon Exchange Server vulnerabilities, even though patches had already been released for the flaws. GitHub, which is owned by Microsoft, said the exploit code violated the service's acceptable use policy. The move sparked a debate over restrictions on security research. And such incidents have led some in the infosec community to express concern about a lack of transparency, while others are wondering how much might be too much.

Walking a fine line

Sharing data after a breach or incident can be tricky. Hank Schless, senior manager of security solutions at mobile security vendor Lookout, said it's often done more privately than the public sharing of information about newly discovered malicious campaigns.


But private sharing can be limited and often leaves many infosec professionals on the outside looking in. During a session on supply chain attacks at RSA Conference, Marco Figueroa, principal threat researcher at SentinelOne, lamented the fact that some malware samples used in the SolarWinds attacks were not shared publicly by the vendors that discovered them. He also said NDAs imposed by companies often restrict valuable information about threats that could benefit other organizations.

In March, Intel released a study examining how transparency, security innovation and ongoing security assistance affect purchase decisions. The study, conducted independently by the Ponemon Institute, surveyed 1,875 individuals involved in overseeing the security of their organizations' IT infrastructure in the U.S., the United Kingdom, Europe, the Middle East, Africa and Latin America. According to the survey, 64% of respondents said it is highly important for their technology provider to be transparent about available security updates and mitigations, and 47% said their technology provider doesn't provide this transparency.

But transparency isn't always in the cards for enterprises. Whether it's in response to critical vulnerabilities or major cyber attacks, organizations often seek to limit the amount of information that's documented internally and shared publicly in order to reduce legal liability and protect the company's reputation.

Wam Voster, senior director of research at Gartner, published a report last month on the mounting dangers of "weaponized" operational technology (OT) that could lead to loss of life. Along with outlining 10 controls to safeguard the safety of OT systems, Voster recommended security and risk managers shift their priorities. "Rather than focus on protecting confidentiality, integrity and availability," he wrote, "they should implement an OT security control framework to include controls to safeguard the safety of their OT systems."

There are different ways to share security information, and not all routes are public. As ransomware attacks ramped up over the past year, the U.S. government provided an additional way to share and report attacks through the website Stopransomware.org. Intelligence sharing can also be done through groups like the nonprofit Cyber Threat Alliance, as well as through industry partnerships with security companies and government agencies.

Sharing intelligence through those means is really the way forward, said Craig Williams, director of outreach for Cisco Talos. The way the bad guys win, he said, is when companies don't share information. "If we all know what their software looks like, and we all know how to detect it, there will be a lot less successful deployment," Williams said.

Sharing intelligence can benefit both security researchers and vendors by contributing to the bigger picture of the overall threat landscape. From a researcher's perspective, threat intelligence provides details on active threat actors and their tactics, techniques and procedures. Analyzing IOCs can support researcher investigations and, in turn, result in more accurate assessments of attribution and attacker motivations, said Stefano De Blasi, threat researcher at Digital Shadows. On the vendor side, having an established intelligence sharing framework can help an organization implement what is needed to protect its security posture.

"Sharing intelligence is crucial to reducing knowledge gaps in the cybersecurity industry," De Blasi wrote in an email to SearchSecurity.

Sharing intel is worth the risks

While sharing information can be used to improve security postures, in the wrong hands it can be exploited. Nathan Einwechter, director of security research at Vectra, said it's a complicated issue with no clear right or wrong answer.

Santos said the issue extends even further than that, raising the question of what transparency actually means. "Is there a global definition for that? I don't think there is, even in the industry. Is there a global definition on it, now that we're creating some? It's actually coordinated disclosure."

Sometimes, the decision on how transparent to be boils down to maintaining a reputation.

According to Santos, many vendors disclose only externally discovered vulnerabilities and keep internally found flaws confidential. Similarly, De Blasi said many organizations prioritize maintaining a competitive advantage, which can hinder security cooperation.


More problems occur with major discoveries, where it can take research teams months or years to get from the initial finding to the point of being able to share the information without giving the threat actor a way out.

"As these discoveries are made, vendors will quietly implement coverage for their customers in order to protect them as more research is being done," Einwechter wrote in an email to SearchSecurity. "From a competitive standpoint, vendors may only want to publicly share a portion of their findings."

While that competitive nature isn't going anywhere, there has been wider recognition that anyone can be breached. According to Einwechter, the reputational damage and judgment that follow a breach are no longer as significant a concern as they have been in the past.

There are other concerns, though. Sharing incident details too early may let an adversary know the target is on to them, and Einwechter said it can often cause the attacker to ensure they've established backup access and then go dormant or, even worse, turn destructive, as is the case in ransomware incidents.

"For this reason, we often remind responders not to submit potentially unique malware samples to public repositories for analysis (like VirusTotal). Samples submitted to these sites are often monitored by adversaries to provide an early sign of response activities within target environments," Einwechter said.

While free services like public repositories can add value, there are drawbacks as well. Ryan Olson, vice president of threat intelligence for Palo Alto Networks' Unit 42, said if someone is analyzing a file for free, there is a chance they can turn around and sell that file, or some analysis based on it, to someone else. As a result, companies should be aware of what they're sharing, including whether the files contain any private or identifying information about the organization. Not every company has a malware team that can sort through files before they are submitted for scanning.

"It's amazing how much information can be included in a small piece of data that you share, without really realizing what it's going to reveal," Olson said.

Transparency affects all levels of cybersecurity; even at the government level it's an issue. In May, the White House issued an executive order on improving the nation's cybersecurity. According to the order, the security of software used by the federal government is vital to the government's ability to perform its critical functions. However, the order also said the development of commercial software often lacks transparency.

What can be done?

Santos said Cisco is trying to lead some of the cyber attack transparency efforts. Part of that effort is being one of the first vendors to offer an API that lets users subscribe to and pull information about vulnerabilities. However, publishing such information openly can be controversial.

"One of the arguments is that the bad guys are probably taking advantage of this. Yes, absolutely. Now, they can reverse engineer your patch anyways. Right? So, it doesn't matter," Santos said. "An exploit proof of concept, of course, that will be weaponized in two seconds. But, at the same time, we're foolish to think that the real, sophisticated attackers cannot get that by not even looking at your advisory -- they just look at your patch."

Further steps include the use of machine learning. If the data set gets large enough, Schless said, it reaches a critical mass where telemetry and artifacts can be automatically ingested and flagged as malicious.

"Being able to apply machine learning to a data set that constantly takes new information helps rapidly broaden coverage for customers and gives researchers more to work with as they conduct their own investigations," Schless said.

Most of the concerns and issues around transparency revolve around information sharing while an incident is ongoing. After it has been resolved, Einwechter said, researchers, security analysts and incident responders feel much more strongly that publishing a complete set of details is extremely important.

Despite all the nuance, and the significant benefit that reporting incident details can provide for the broader community, he said security teams will often meet resistance from other teams when they propose publishing details.

"While the concerns frequently raised are valid and worth discussion, publishing should be viewed as the default, in lieu of other clear factors that might preclude it, in particular legal or customer communication requirements," Einwechter said.

Next Steps

'ProxyLogon' Exchange bug resurfaces after presentation

Hackers selling access to breached networks for $10,000

T-Mobile offers details of data breach that affected 40M
