Enterprises are justifiably worried about attacks by unauthorized outsiders, but they should not ignore the insider threat.
Insider threats refer to any activities where employees expose the network to some degree of risk that wasn’t there originally, says Toby Lewis, head of threat analysis at Darktrace. In other words, it’s the organization’s own staff who are eroding the organization’s security layer.
The common image of the insider threat is a person deliberately trying to circumvent security mechanisms, such as an angry employee who has been fired or a disgruntled worker trying to damage the enterprise. But focusing on just those scenarios puts the organization at risk, because security teams may overlook people who don’t realize the consequences of seemingly small actions.
For example, an employee who is trying to perform a task as part of their day-to-day work may have to deal with a process that, to them, feels cumbersome or bureaucratic. When they figure out a shortcut, they are not deliberately trying to break the rules to personally profit off the activity. But they are not thinking about the fact that there may be a reason why the process was created in that specific way, either.
“They’ve invented their own process,” Lewis says. “In doing so, they might not realize what the security implications are of going down this route or that this little shortcut they’ve just invented actually might be quite risky.”
To Err Is Human
Another type of insider threat is user error. Someone forgot to do something or did something even though they’d been trained not to. When the ones making mistakes have valid usernames and passwords and can access data-rich systems and applications, security teams have to recognize that these mistakes can potentially lead to security incidents, Lewis says.
“If your last line of defense is hoping that somebody won’t click a link or open an attachment, then you’ve done so many things wrong up until that point,” Lewis says.
Training doesn’t cover all the bases, but someone making a mistake doesn’t necessarily mean the training failed. Some people will absorb the training material more thoroughly than others.
“You will have a spectrum of individuals who took part in training,” Lewis says. “Some will get it, and some will pay lip service and check the boxes to just pass the test so that they can get on with the next part of their day. And others will have no idea what they’re doing and just randomly click buttons.”
People will sometimes forget what they were taught. For example, parents may be more inclined to click on something that references their children. Or if a person has a specific passion, a message referencing that topic may trick them into doing something unsafe.
“There’s always going to be something that when it happens, you will forget all the rules and training,” Lewis says.
The “Great Resignation” can also raise some issues. If people are leaving the organization, they may become more relaxed about security because their priorities have shifted.
“The security of the company they’re employed by doesn’t matter anymore because they think, ‘I’m not going to be here in two weeks,'” Lewis says. Or they may consider taking company information — which could just mean their email contacts list, or files they worked on — with them on their way out.
Zero Trust Tackles Insider Risk
In traditional network and perimeter security, people outside the network were considered inherently bad while those on the inside were good. This rule breaks down, however, when an outsider uses stolen credentials to access internal resources, or bypasses security controls and compromises a system to gain a foothold in the network.
“They’re using a good person’s information to access the network, but they’ve got bad motivation,” Lewis says. “Are they good? Are they bad? How do you differentiate?”
Zero trust treats every connection and action as suspicious. There are signals to verify, such as the device being used, the time of day, and the order in which applications are accessed. If the user strays outside what’s expected, it triggers an investigation, even if the activity originates from inside the environment.
“They need to prove who they are. They need to prove they are coming from a safe device. And they need to prove they’ve got good intentions,” Lewis says.
In a zero-trust organization, it would be harder for insiders to act badly, Lewis notes. By managing identity, security teams understand who the users are and determine what “normal” looks like. This way, they can assess the level of risk for each person and get a sense of when to ask for more information.
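The baseline comparison Lewis describes can be pictured with a minimal sketch. The signal names, scoring, and threshold below are illustrative assumptions, not any particular product’s behavior:

```python
from dataclasses import dataclass

@dataclass
class UserBaseline:
    """What 'normal' looks like for one user, learned over time (hypothetical)."""
    known_devices: set
    usual_hours: range   # e.g. range(8, 19) for an 8am-7pm working day
    usual_apps: set

def risk_score(baseline: UserBaseline, device: str, hour: int, app: str) -> int:
    """Add a point for each signal that strays from the user's baseline."""
    score = 0
    if device not in baseline.known_devices:
        score += 1
    if hour not in baseline.usual_hours:
        score += 1
    if app not in baseline.usual_apps:
        score += 1
    return score

def requires_step_up(baseline: UserBaseline, device: str, hour: int,
                     app: str, threshold: int = 2) -> bool:
    """Ask for extra verification once enough signals look unusual."""
    return risk_score(baseline, device, hour, app) >= threshold
```

With a baseline of `UserBaseline({"laptop-42"}, range(8, 19), {"mail", "crm"})`, a login from a known laptop at 10am to the CRM scores zero, while an unknown phone at 3am crosses the threshold and prompts the user to prove who they are.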
The other part is network segmentation. If the network has been divided into different compartments, then users have to authenticate each time they cross into a new area. Different parts of the network can be carved out based on risk and where sensitive data is stored.
“Each part of your network should be behind its own set of locked doors,” Lewis says. “You could only cross this barrier if you are a trusted person.”
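That “locked doors” rule can be sketched as a toy session model. Real enforcement would live in the network layer, and the segment names here are made up:

```python
# Segments this deployment defines (illustrative names only).
SEGMENTS = {"office-lan", "hr-records", "finance", "prod-database"}

def may_access(segment: str, authenticated_segments: set) -> bool:
    """Allow access only if the session has authenticated into this segment.

    Crossing into a new segment requires a fresh authentication,
    regardless of where the request originates inside the network.
    """
    if segment not in SEGMENTS:
        raise ValueError(f"unknown segment: {segment}")
    return segment in authenticated_segments
```

A session that has only authenticated at the perimeter (`{"office-lan"}`) is refused at the finance segment until it authenticates again, at which point `"finance"` is added to its set.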
People are unpredictable, and security should not depend on knowing exactly what they are going to do. Security teams should implement technical controls to catch all those times when an employee acts counter to their training. Technology can minimize the potential impact of a mistake or block a potentially problematic action.
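As a simple illustration of a technical control absorbing a human mistake, consider a hypothetical outbound-email check (the domain and marker list are assumptions, not any vendor’s implementation):

```python
INTERNAL_DOMAIN = "example.com"  # assumed company domain
SENSITIVE_MARKERS = ("confidential", "payroll", "customer-list")

def block_risky_send(recipient: str, attachment_name: str) -> bool:
    """Return True when a sensitive-looking file is headed outside the company.

    This catches the honest mistake of mailing internal data to an
    external address, without relying on the sender remembering training.
    """
    external = not recipient.lower().endswith("@" + INTERNAL_DOMAIN)
    sensitive = any(m in attachment_name.lower() for m in SENSITIVE_MARKERS)
    return external and sensitive
```

Here the control, not the employee, is the last line of defense: a payroll spreadsheet addressed to a personal mailbox is blocked, while the same file sent internally goes through.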
“Wherever possible, technology should take the burden,” Lewis says.