
Who's to blame for ransomware attacks -- beyond the attackers?

Cyberattackers are to blame for ransomware attacks, but what about companies that release flawed software or don't install patches? Our expert looks at where the buck stops.

In early May, hackers infiltrated the computer network of Baltimore, Md. The ransomware attack halted normal business operations, interrupted critical city services, cost the city millions of dollars and inconvenienced hundreds of thousands of residents.

Baltimore joined the list of cities that have fallen victim to serious ransomware threats that affect business and commerce. While ransomware attacks have many variations, they generally render victims' data unrecoverable by encrypting it with strong encryption; the cyberattackers then demand payment to decrypt the data.

While Baltimore may be typical of many ransomware attacks against government and businesses, it is atypical in other ways. The city said the attack was facilitated by the use of EternalBlue, a cyberweapon developed by the U.S. National Security Agency (NSA). The capability behind EternalBlue was allegedly stolen from or leaked by an NSA employee and later released in April 2017 by a group called the Shadow Brokers.

Fingerprints of EternalBlue's use by cybercriminals actually showed up as early as 14 months before the Shadow Brokers dumped the files. The NSA disputes Baltimore's claim that EternalBlue was involved in the attack. But the NSA's objection doesn't change the basic problem -- cyberweapons were either stolen or released, and U.S. government tools were subsequently used to attack businesses and individuals. Baltimore refused to pay the ransom, and the city government asked for millions of dollars in relief from the federal government, which ultimately means from the taxpayers.

Who's to blame for cyberweapons in the wild?

While EternalBlue is high-profile and serious, it is just one of many tools and exploits believed to have been released into the wild due to the NSA breach, and many organizations around the world have suffered from the impact.

Beyond the cyberattackers who carry out the breach, whose fault is a ransomware attack? Is it the fault of the software company that puts out vulnerable software? The EternalBlue exploit is very effective, but only if the victim fails to patch the software vulnerability that allows its execution. After all, Microsoft released a patch for the previously unknown vulnerability long before the Baltimore attack. Baltimore and many other breach victims could have patched their systems and avoided the ransomware attack entirely.

Does responsibility fall to the government agency that knew the software was vulnerable, built an exploit and then failed to warn the public? More controversially, does the blame fall to the victim who, after being warned, didn't patch its systems to block the exploit? Considering end-user license agreements, and without evidence the software maker knew or should have known about the vulnerability, it's difficult to hold software makers responsible.

If the U.S. government released these tools, even unintentionally, and knew the vulnerabilities existed long enough for them to be exploited, you could argue the government should bear the responsibility. On the other hand, those who were ultimately impacted could have prevented the ransomware attack by patching their systems.

There's plenty of responsibility to go around, but the core responsibility in the case of EternalBlue keeps coming back to the NSA. To allow these tools into the wild by any means is equivalent to releasing biological or nuclear secrets. The level of damage that can be done with these tools can't be overstated. Cyberweapons have been used by foreign bad actors to devastate individuals, businesses and organizations with life-critical missions, such as hospitals and police departments.

The ability to take action against our enemies is vital to the NSA, and secrecy -- keeping access to cyberweapons private -- is core to mission success. The problem begins when that secrecy fails and the tools get into the wild, where they are used to victimize innocent individuals and companies.

Patch management -- an obvious fix

Organizations must get serious about patching known critical vulnerabilities as soon as possible. An estimated 70% to 80% of all breaches can be prevented by software patching. Before an organization moves on to invest in advanced technology, it makes sense to close the vulnerabilities hackers actually use to attack. Software vendors need to act more ethically and quickly when they become aware of serious vulnerabilities. Time is critical, and the window from vulnerability discovery to active exploitation narrows with every passing day.
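
As a concrete illustration, here is a minimal Python sketch of a baseline patch check: it asks a Windows host which hotfixes are installed and looks for a few of the MS17-010 updates that close the vulnerability EternalBlue exploits. The KB numbers are illustrative examples only -- the applicable updates vary by Windows version -- and a real patch management program would rely on dedicated vulnerability management tooling rather than an ad hoc script.

```python
# Minimal sketch: check a Windows host for MS17-010 hotfixes that block EternalBlue.
# The KB IDs below are an illustrative subset (Windows 7 / Server 2008 R2 era);
# verify the correct updates for each OS build against Microsoft's advisory.
import subprocess

MS17_010_KBS = {"KB4012212", "KB4012215", "KB4013429"}  # example KBs only

def installed_hotfixes():
    """Return the set of installed hotfix IDs reported by 'wmic qfe'."""
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    found = installed_hotfixes() & MS17_010_KBS
    if found:
        print("MS17-010 patch detected:", ", ".join(sorted(found)))
    else:
        print("None of the checked MS17-010 patches found -- investigate further.")
```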

In April 2019, the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency released new requirements for remediating critical and high vulnerabilities. DHS's Binding Operational Directive (BOD) 19-02 sets vulnerability remediation requirements for internet-accessible systems to enhance federal agencies' coordinated approach to ensuring effective and timely remediation of critical and high vulnerabilities in information systems. It requires agencies to remediate critical vulnerabilities within 15 calendar days of initial detection and high vulnerabilities within 30.

This directive was driven by the fact that DHS clearly understands the immense security gains of patching. Patching is not simple or easy, and it often requires overtime or additional staffing. But what it requires most is commitment -- and no interference from executives and others who are unwilling to allow the outages needed to complete critical patching.

The U.S. government needs to understand that the tools it creates are dangerous in the wrong hands and should be protected as secret, deadly weapons. Criminal penalties and career consequences for releasing these tools, or abetting their release, should be severe and unwavering.

What's the plan?

We the people keep wringing our hands after attacks, and we are still months, years or even decades behind on systems upgrades and security remediation. Chief information security officers should be obligated to report to the CEO and board of directors, not to CIOs who may not want to tell the entire and accurate story of an organization's security posture. If we hope to have any chance of defending ourselves and avoiding potentially global outages that directly impact human survivability, we need to get serious and do it now.
