Insider risk indicators thwart potential threats

By paying attention to risk indicators, enterprises can tell the difference between an insider threat and an insider risk and avoid falling victim to one of their own.

Insider threats aren't just the subject of Joe Payne's book or the target of products created by the company where he is CEO; he has firsthand experience with one.

Five days after she left security software company Code42, one of Payne's employees downloaded the entire contents of her laptop, including payroll data and employee Social Security numbers, onto an external hard drive -- a prime insider threat indicator. Fortunately, the act was caught by the company's own product. When confronted, the well-liked HR employee said she was only trying to copy her contact list.

"Not only was I thinking how embarrassing it would have been to be breached, but it also reinforced to me that every company has really important data," Payne said.

That data comes in many forms, he added: Every salesperson has access to Salesforce data, for example, just as every marketing person has access to the company's entire marketing database. Likewise, every HR employee has access to sensitive employee data, and engineers have access to the company's source code.

This experience -- combined with eye-opening statistics that two-thirds of all breaches are caused by insiders and that only 10% of security budgets are allocated to address insider threats -- led Payne and his co-authors to write Inside Jobs to help others avoid falling victim to insider risks.

Here, Payne shares key insights from the book, including how to identify insider risk indicators, how his company's file activity product counters such threats and more. For more information on the types of insider risks, read an excerpt of Chapter 3 of Inside Jobs.

What is exacerbating the security threats presented by insiders?

Joe Payne: The whole concept for the book came out of the idea that the world was changing technologically and culturally in ways the security community hadn't adjusted to.

Technologically, new tools to help people share and collaborate are being rolled out: Slack, OneDrive, Box, Google Drive. They're fantastic tools, but traditional security software that addresses the issues of insider risk and data loss was written to block sharing and collaboration. Now, you have the CIO and CEO saying, 'Share, work together, collaborate!' and the CISO saying, 'Sharing and collaboration is bad because that creates risk!' -- a major disconnect on the security side from what the business is trying to do.

Culturally, we're seeing changes around where people work at their jobs. They're working from Starbucks, working from home, on the road or in hotels. Now, with the pandemic, this point is being proven even more.

Beyond where people work, we're also seeing cultural changes around how long people stay at their jobs. Young people stay an average of three years; older people [stay] an average of four years. And, when people switch jobs, they stay in their same industry for the most part, essentially going to work at a competitor.

The combination of new collaboration software, employees working from everywhere and people changing jobs a lot has created the perfect storm for insider risk.

You call them internal risks versus internal threats. What's the difference?

Payne: This is a topic we address in the book that we've also been discussing as an industry. Talking about insiders is different from talking about external threats. Most external threats are actual threats -- if somebody who doesn't belong is in your network, for example. Malware, phishing, spam, ransomware -- all of them are literally threats.

Internal activity by your own employees is typically not a threat, but a risk. We look for what we call 'insider risk indicators.' We're careful not to call employees 'threats' because they might not actually be threats at all; their behavior might just be an indicator that something needs following up.

What are some examples of risk indicators?

Payne: An employee working at odd hours has been an insider risk red flag for years. In the old days, they would come into the office at midnight and make a bunch of copies on the copy machine. That off-hours-and-weekends mentality persists, believe it or not. Today, people are working from home, and instead of copying a bunch of files to the cloud or uploading to their Dropbox account at noon, they wait until the day is over. Eight times out of 10, that action is probably just somebody doing their job. But it is an indicator of risky behavior.

Likewise, somebody deleting a bunch of files is an indicator of risk. People planning to exfiltrate files cover their tracks by deleting the files they exfiltrated. But deleting a bunch of files might also just be somebody cleaning up their desktop, which isn't a big deal.

The biggest indicator of risk, by far, is when somebody quits. It's such a big indicator of risk that we devote an entire part of our product to people who are leaving. The fact that an employee quits doesn't mean they've done anything wrong or taken any data. But it's something to consider as risky behavior.
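
Each of these indicators is straightforward to detect on its own. As a rough illustration only -- not Code42's implementation -- the sketch below tags a single file event with hypothetical indicator flags, assuming a basic event record and a list of employees who have given notice:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileEvent:
    """A single observed file action (hypothetical fields for illustration)."""
    user: str
    action: str        # e.g., "upload", "delete"
    timestamp: datetime

# Hypothetical context: business hours and employees who have given notice.
BUSINESS_HOURS = range(8, 18)
DEPARTING_USERS = {"jdoe"}

def indicator_flags(event: FileEvent) -> set:
    """Tag one event with any insider risk indicators it exhibits."""
    flags = set()
    if event.timestamp.hour not in BUSINESS_HOURS or event.timestamp.weekday() >= 5:
        flags.add("off_hours")           # late nights and weekends
    if event.action == "delete":
        flags.add("deletion")            # possible track covering -- or just cleanup
    if event.user in DEPARTING_USERS:
        flags.add("departing_employee")  # the biggest single indicator
    return flags

# A departing employee deleting files at 11 p.m. trips all three indicators.
print(indicator_flags(FileEvent("jdoe", "delete", datetime(2024, 3, 5, 23, 0))))
```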

In the book, you wrote: 'Insider risk is a game of odds.' How do you measure those odds? When does a risk turn into a threat?

Payne: Code42 doesn't run the software for customers; rather, we build the tools that our customers operate. Our product pulls together a bunch of different risk factors. If an employee triggers one factor, their boss probably doesn't care -- and doesn't want to create a lot of noise for the security team. But, if an employee hits a bunch of risk factors, the product correlates that and creates an alert for an investigator to look into the situation.

The product does not stop employees from doing their jobs and doesn't treat them like a criminal because they exhibited risky behavior. Rather, we characterize the product as a big 'DVR' that always has rules and alerts running.

If an employee quits and worked late at night and deleted a bunch of files, for example, the product will send out an alert for further investigation. If, in looking at that data, the investigator finds the employee exfiltrated customer or employee lists or uploaded source code to Dropbox, the HR and legal teams should be brought in to have a conversation with the employee.
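
The correlation step Payne describes -- no alert on a single factor, an alert when several coincide -- can be pictured as a simple threshold over per-employee indicator counts. This is a minimal sketch under that assumption, not Code42's actual scoring logic:

```python
from collections import Counter

def should_alert(indicator_counts: Counter, min_distinct: int = 2) -> bool:
    """Alert an investigator only when several distinct indicators coincide."""
    distinct = sum(1 for count in indicator_counts.values() if count > 0)
    return distinct >= min_distinct

# A departing employee who worked late at night and deleted a batch of files.
observed = Counter({"departing_employee": 1, "off_hours": 4, "deletion": 120})
print(should_alert(observed))  # True -> investigator looks, then HR/legal if confirmed
```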

You also wrote that security teams 'can't have the foresight to create a policy for every possible insider risk.' How can new and previously unknown risk indicators be accommodated?

Payne: We capture data about data -- a lot of metadata, basically. That data will say, for example, this file moved to this location, was uploaded to this Gmail account, went out via public share on this cloud service, etc. Thus, we're always capturing new areas of exfiltration. For example, we added AirDrop recently per a customer's request. We also added printing -- if an employee prints something, it will capture what they printed and where.
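
The 'data about data' Payne mentions could be modeled as a small record of what moved and through which vector, with no file contents retained. The field names and vector list below are illustrative assumptions, not Code42's schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto

class ExfilVector(Enum):
    """Possible destinations a file movement can be tagged with (illustrative only)."""
    CLOUD_SYNC = auto()       # e.g., Dropbox, OneDrive, Google Drive
    WEB_EMAIL = auto()        # e.g., attached to a message from a Gmail account
    PUBLIC_SHARE = auto()     # public link created on a cloud service
    REMOVABLE_MEDIA = auto()  # external hard drive or USB stick
    AIRDROP = auto()
    PRINT = auto()

@dataclass
class FileActivity:
    """Metadata about a single file movement -- the file's contents are not stored."""
    user: str
    file_name: str
    destination: str
    vector: ExfilVector
    timestamp: datetime

record = FileActivity("jdoe", "customer_list.csv", "personal Gmail",
                      ExfilVector.WEB_EMAIL, datetime.now())
```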

We capture file activity and help customers look at different user behavior around it. Here's a different type of an example: Some of our customers say employees uploading resumes to job sites is a risk indicator. It's not that the employee did anything wrong -- they're not going to be stopped or reported. But it puts them in a higher category [of] risk because it suggests they're thinking about leaving.
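
Treating a signal like a resume upload as a category bump rather than a violation might look like the sketch below; the tier names and single-step bump are assumptions:

```python
RISK_TIERS = ["baseline", "elevated", "high"]  # hypothetical tier names

def bump_tier(current: str, steps: int = 1) -> str:
    """Move an employee up the watch list without blocking or reporting them."""
    idx = min(RISK_TIERS.index(current) + steps, len(RISK_TIERS) - 1)
    return RISK_TIERS[idx]

# A resume upload to a job site nudges someone from baseline to elevated scrutiny.
print(bump_tier("baseline"))  # "elevated"
```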

Is there any sort of time frame to assess insider risk indicators?

Payne: Our 'DVR' goes back 90 days. We have clients asking us to extend it, and over time, we'll probably make that an option. But we found that, typically, 90 days is good. Even in the departing employee example, most will start taking data about two to three weeks before they leave.
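
The 90-day 'DVR' amounts to considering only events inside a rolling retention window. A minimal sketch, assuming events that carry a timestamp and keeping the window length configurable:

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(days=90)  # the retention window described in the interview

def within_window(events, now=None):
    """Keep only events recent enough to be reviewed during an investigation."""
    now = now or datetime.now()
    return [e for e in events if now - e.timestamp <= LOOKBACK]
```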
