Ntiva Live: Cybersecurity for the Rest of Us

Insider Threats: All You Need to Know

Episode Overview: Insider Threats

In today's episode, our new CISO, Dr. Jerry Craig, takes a deep dive on insider threats. Learn how to spot them and how they can affect your business!

Sign Up Today

Complete the form to register for the "Cybersecurity for the Rest of Us" series. You’ll get an email reminder before each livestream, plus an email with a link to the recording in case you miss any of the live events.

Insider Threats: Episode Transcript

Okay. It's one after, so I'm going to go ahead and get started. My name is Dr. Jerry Craig. I'm with Ntiva cybersecurity. I'm going to use a PowerPoint to facilitate this, so give me one second and I will have that shared out with you.

Insider Threats: A Deep Dive

Today's topic is going to be on insider threats. The agenda for today, we're going to cover an introduction, a basic definition of insider threats. Give you some threat case examples in case you're not used to hearing things about insider threats.

Discuss some of the behavioral patterns that you would expect to see, or we have seen in these cases, some best practices on how you can identify some insider behavior and insider threats themselves, as well as things that you can do for best practices to try to mitigate and remediate.

Then I do want to cover unintentional insider threats. It's something that always comes up and folks maybe get hung up on. Did someone mean to do something or was it unintentional? And then a Q and A. For the Q and A component, because I am sharing my screen and in this presenter mode, I cannot see questions that are being asked.

So if you want to type your questions in the Q and A portion of the meeting, then as soon as I get out of presenter mode and I finished the slides, there's not that many of them, then I can go ahead and get into that chat and address those questions. All right.

And again, to introduce myself, I'm Dr. Craig. I have about 20 plus years in IT and in security in general. I started with the Marine Corps on active duty for about four years, setting up networks and infrastructures. And then I moved into the healthcare space and installed healthcare systems for DOD.

Spent another decade installing networks and securing them overseas with deployed active duty Marines. And then I, again, transitioned back into the healthcare space for the last six years where I've been in charge of securing some of the largest data centers for the federal government.

Now, with Ntiva, I've taken over from my predecessor, Frank Smith, who you may have seen on some of these past sessions. And I will be here to support all of our Ntiva customers as well as Ntiva on the corporate side.


What Are Insider Threats?

To start off, I think the definition of insider threats is probably the best way to go, to get everybody on the same page. There is no one definition that is accepted across the industry. Depending on which vendor you talk to, or what certification they may be offering, each one puts a little bit of its own spin on it.

So the ones I threw up here are kind of a mix of what others have done, just putting the keywords together, so that when you look at it, or if you talk to anybody else, I think 90% of folks or more would agree, "Okay, this makes sense. This is what we're talking about."


Three Types of Insider Threats: Sabotage, Fraud, and Theft

There are three main types of insider threats that I'm going to cover today. These are the ones that make up the greatest percentage of insider threats. There could be some outliers, but they make up such a small percentage that they're probably not worth addressing. The three are IT sabotage, fraud, and the theft of intellectual property.

And one thing I want to mention before we really get into the definitions is as you're watching this or participating in this webinar, you may be thinking to yourself, "Well, my organization does X and you're doing Y, and my industry is this," and you're in a different industry.

So it's not going to be cookie cutter and it's not going to be necessarily perfect to fit across all industries, but it should be a general enough presentation that you can say, "I see where this would fit in my organization," or, "I see if I had to adapt something or adapt it a little bit differently, it will still fit and it will still work."

On that note, IT sabotage: this is really just an insider's use of IT to do any type of harm to an organization or an individual. You can really think of these as your disgruntled employees. They want to pay the organization back for whatever they feel is wrong or has harmed them, whatever has held them back, those kinds of things. And I'll give you an example here in a minute.

Insider fraud: this is using IT to modify, add, or delete organizational data for personal gain or theft. This one is usually related to stealing money or transferring money; money is at the center of most cases. A lot of times fraud is really tied to the financial industry.

You see that a lot on television shows and in the news. Then there's the third kind, which a lot of people overlook unless they're in certain industries: the theft of intellectual property. This is really using IT to steal intellectual property from the organization. If you create software, if you're a manufacturing organization, if you're DOD, those kinds of folks, intellectual property is a big deal.

Your designs, how you build and execute, and how you price things are all considered intellectual property. A lot of times, organizations have that ripped off and sold to their competitors, or, as we'll discuss with unintentional insider threats, simply carried out the door.

It could be as simple as someone leaving your organization and taking that with them because they feel that will help them get a jump start at their next org. One of the things that I've seen over my career is that a lot of folks, when they leave an organization, they're leaving to go to a competitor.

The competitor is offering them a better place to work, more money, whatever the case may be. And whether it's explicitly stated or not, the competitor expects you to kind of bring your knowledge, your contacts, your trade secrets, whatever.

They're hiring you for a reason. It's not just because you're good at your job. They feel there's going to be some form of inside advantage. In that case, intellectual property is normally the place that we would see it.

Starting off with IT sabotage as an example. The next few slides are going to be the same format, just going through the different examples. In this case, it's a logic bomb.

This was a system administrator who worked for a healthcare organization. The employee found out that layoffs were imminent. He or she felt like the layoff was going to affect them personally, decided to embed a logic bomb to wipe out the data on 70 servers on the employee's birthday.

A logic bomb is a software package that gets deployed and sits within the organization, effectively hidden. It does nothing; it just sits there.

And then when it's coded to blow up, for lack of a better term, it gets executed. In this particular case, a different administrator found the code, reported it, and was able to eliminate the threat prior to it going off.

And as you'll see in this example and in others, the attacker, the insider threat, has a couple of options here. They could set this bomb to go off while they're there or after they've left the organization. It's really up to them what they want to do.

So you can't always assume, "Well, as long as the person stays here, they aren't a threat. If they were going to do something, it would be after they leave." They might just put it in there and say, "I'm going to have it go off in 30 days. That way, it doesn't draw attention to whatever triggered it, maybe a bad annual performance review."

They wait a month, they wait two months, they set it off, but they haven't left the organization. What you'll see a lot from people managers in this space is that you look for employees who are disgruntled, but a lot of times they can hide it.

And if we assume that they haven't left the organization, then maybe they're okay, maybe they're happy, maybe it didn't bother them as much as we thought, when in reality, it did.

They have a mechanism of just waiting to deploy this. In this case, there was a consequence. The person was prosecuted. They spent 30 months in prison and they were forced to pay $82,000 in restitution.

The second one is a fraud example, using fake email addresses. A help desk agent working for a military contractor fraudulently requested over $8 million worth of replacement parts, then took those parts and sold them on the black market for $500,000.

It's hard to tell with some of these; they don't give the specifics. So over what period of time that $8 million worth of parts moved isn't always available to us. If you're talking about the defense space, $8 million could be one item, two items, or it could be a ton of small items.

Ideally, we would think that if it was a ton of small items, the security programs would have caught that behavior over time. Obviously, they didn't. We don't always get the facts.

The companies don't always share, unless it makes the news and someone investigates and reports on it. We sometimes just have to go with what we have here, which is that they did a lot of damage to the organization and were able to profit off of it.

This particular individual did spend a little over four years in prison, followed up by two years of supervised release, and had to pay the $8 million in restitution.

And one of the things that comes up in a lot of Q and A's and discussions is whether people get prosecuted and whether they have to pay something back. If you only made $500,000 off of your behavior, you likely spent it, and then you have to pay $8 million in restitution. How is that possible?

Even if you're lucky enough to catch an insider, prosecute them, and get a court to order some form of restitution, let's be realistic: if this person is a single individual, how would they ever pay back $8 million? They could work the rest of their life and probably never even make $8 million.

It can be kind of a false sense of security and closure that, "Oh, wow, the system worked. It caught this person." But in reality, if you're the organization, you have to come up with, "How do I get my $8 million back?"

And then the third one's intellectual property. This one was based on large downloads. This was a programmer, also working for a government organization.

Their motive to do this was that there was a reduction in access level with termination on the horizon, so this person already saw something's coming, "I'm probably going to be terminated. They've reduced my access level."

So they used a backdoor in the system to get their elevated privileges back before downloading source code and password files. This is fairly typical. Maybe it doesn't always play out this way, but if you're an admin, you probably have access to multiple admin accounts on multiple hosts.

So even if someone were to reduce the privileges on your account, you have alternate means of getting those privileges back or using an account where it's less likely someone is going to catch what you're doing.

This is one of the reasons why we preach not to use shared accounts, shared local accounts on hosts. It's a pretty common practice because if something goes wrong in the network, the local account is a way to get in. But typically, if I have a hundred servers or a thousand servers, I'm not going to put a different password on every one of them.

I'm going to have a single one and I'm going to share it across my team. So an individual whose Active Directory account is now maybe just a basic user still knows that password across all of those hosts.
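The alternative is to give every host its own local-admin credential, which is the idea behind tools like Microsoft LAPS. A minimal sketch of the concept in Python; the hostnames are made up, and a real deployment would pull inventory from your directory and store the results in a vault:

```python
import secrets
import string

def generate_local_admin_passwords(hostnames, length=20):
    """Give every host its own random local-admin password.

    Because no two hosts share a local credential, a departing admin
    who remembers one password cannot reuse it across the fleet.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return {
        host: "".join(secrets.choice(alphabet) for _ in range(length))
        for host in hostnames
    }

# Illustrative hostnames only; store real passwords in a vault,
# never in memory or a shared spreadsheet.
passwords = generate_local_admin_passwords(["srv01", "srv02", "srv03"])
```

The point is less the code than the policy: rotating and randomizing per-host credentials removes the "one password opens everything" problem described above.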

When you hear some of these recommendations and why we're making them from the security team's perspective, we understand that they are probably roadblocks and hindrances to the business, but there are technical solutions, which do cost money, that can help alleviate those. There really is a good reason for doing it.

If you're not doing it, then in this example, one person having one shared password was able to download all of this organization's source code and password files, and then potentially sell it.

What captured them and basically set off the alarms was the large remote download size, so this triggered an investigation. And in this case, at least the organization was protected enough to be able to watch for that. Had this person, for example, downloaded small numbers of files or small file sizes, it probably would have fallen right under the radar.
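The control that worked in this case, an alert on unusually large remote downloads, can be sketched as a simple threshold check. The event fields and the 500 MB cutoff below are illustrative assumptions, not from any particular product:

```python
def flag_large_downloads(events, threshold_bytes=500_000_000):
    """Return the download events whose size exceeds a fixed threshold.

    Each event is a dict like {"user": ..., "bytes": ...}; in practice
    these would come from proxy, VPN, or file-server logs.
    """
    return [e for e in events if e["bytes"] > threshold_bytes]

# A bulk source-code pull stands out; routine traffic does not.
events = [
    {"user": "dev1", "bytes": 4_000_000},       # normal activity
    {"user": "dev2", "bytes": 12_000_000_000},  # 12 GB remote pull
]
alerts = flag_large_downloads(events)
```

A check this simple is exactly what tripped the alarm here, and exactly what fails when the thief stays under the threshold, which is the limitation discussed next.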


Data Loss Prevention

One of the things that we talk about on the technology side is a way to prevent that through data loss prevention or DLP. And when organizations start to look into these technologies, you'll see that it's very difficult to prevent this type of behavior.

If you're talking about a large image, for example, something huge, it's very easy to watch for that and say, "Okay, someone's downloading something enormous." And then you look at who's doing it, when they're doing it, where they're doing it: "Fine, we'll block it."

But if we're talking about an Excel file, you could have literally thousands of patient records in it, and the file size might be kilobytes or a megabyte. That will not normally be blocked or trigger any type of alert, because we email ourselves files all the time, let's say up to 15 megs.

So anytime you get under that threshold of what's normal behavior, it gets very difficult to track and investigate. And that's, again, if your organization has invested in the tools and the personnel with the right skill sets to be able to do so.
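One way around a single fixed threshold is to compare each transfer against the user's own history. A hedged sketch, assuming you already collect per-user transfer sizes; the three-sigma cutoff is an illustrative choice:

```python
from statistics import mean, stdev

def unusual_transfer(history_bytes, new_bytes, z_threshold=3.0):
    """Flag a transfer that is far above this user's own baseline.

    A fixed size cutoff misses small-but-sensitive files; a per-user
    baseline at least catches a sudden departure from normal behavior.
    Needs a few historical samples before it can say anything.
    """
    if len(history_bytes) < 2:
        return False  # not enough history to judge
    mu = mean(history_bytes)
    sigma = stdev(history_bytes)
    if sigma == 0:
        return new_bytes > mu
    return (new_bytes - mu) / sigma > z_threshold

# A user who normally moves ~1 MB files suddenly moves 200 MB.
history = [900_000, 1_100_000, 1_000_000, 950_000, 1_050_000]
```

This still won't catch the one small spreadsheet full of patient records, which is why the people and process measures later in this talk matter as much as the tooling.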

Again, in this case the organization caught it, so good for them. You'll notice, though, that even though this person was stealing intellectual property from the organization, the sentence was five months in prison and $10,000 in restitution. This one, they're likely able to pay back: the $10,000, the five months in prison, okay.

The question then really becomes, "Is that enough of a deterrent that would stop the next person, and how many people have done this in the past before someone was actually prosecuted?" If it was one in 10, you may not be scaring anybody into not behaving this way.


Spotting an Insider Threat

Some of the signs and events you can see from behavioral patterns: a lot of them here on the left side are common sense, but organizations will overlook them sometimes.

I think because either they're busy, they don't have the right processes and procedures in place, or they honestly feel like the employee just would not harm them, their customers, or their coworkers. That is not the case.

When you look for things, your frontline managers are normally your first folks that you can go to to say, "Do you have people who are having conflicts with fellow coworkers?" It could be on the same team, it could be across teams, it could be folks that are having a problem dealing and interacting with your customer base.

If you have employees who won't escalate their issues to supervisors, but they complain a lot to other personnel or to competitors, that's a big sign.

Missing work, and patterns of being late and leaving early: they could be doing something else. Now, this could be something completely different and have nothing to do with an insider threat, but you should investigate the pattern either way.

Any sudden decline in job performance: this could easily come after a reprimand, a demotion, or a poor performance review. Then there are external factors: drug use, gambling, alcohol problems, mood swings, aggressive or violent behavior, someone going through a messy divorce or a bad breakup.

Those things are all triggers for these types of behaviors. Poor hygiene, an inability to follow the rules. Again, some of these are very vague and broad, so they could be pointing to a completely different problem that has nothing to do with insider threats.

But one of the things that most organizations should do, if they're not doing, is when you see these types of patterns and these types of behaviors, put something in place to just monitor that employee. You don't have to be big brother and read their emails and their text messages, but just watch their behavior.

When are they logging on? When are they logging off? If they've only worked 9:00 to 5:00 for five years, and all of a sudden they're logging in Saturday at 2:00 AM, that may be a sign that something's going on. And some of these stressful events act as triggers.

Like I said, demotions and reprimands are big ones. Suspensions may or may not be, because most of the time, if you suspend someone, you're going to suspend their access. When they come back, I would hope people are going to be cognizant that they might still be upset and will be watching what they're doing. If you go to remove access or responsibilities, that can be a big one.
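The log-on pattern check mentioned a moment ago can be sketched as a simple rule. A real system would learn each person's window from their own history; the fixed 9-to-5 weekday window here is just an assumption for illustration:

```python
from datetime import datetime

def off_hours_logins(logins, start_hour=9, end_hour=17):
    """Return the login timestamps outside a normal working window.

    Flags anything before 9:00, after 17:00, or on a weekend. In
    practice the window would be derived per user from their history.
    """
    flagged = []
    for ts in logins:
        outside_hours = not (start_hour <= ts.hour < end_hour)
        weekend = ts.weekday() >= 5  # Monday is 0, Saturday is 5
        if outside_hours or weekend:
            flagged.append(ts)
    return flagged

# A long-time 9-to-5 user suddenly logging in at 2:00 AM on a Saturday.
logins = [datetime(2023, 1, 4, 10, 30), datetime(2023, 1, 7, 2, 0)]
```

Even something this crude gives a frontline manager a concrete list to review instead of a vague feeling that an employee's habits have changed.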

Now, one type that normally doesn't trigger it, but should still occur is if someone moves roles within an organization, say they're going from a system admin to a project manager.

As a project manager, you wouldn't need the same level of access, so it should just be a company policy where before someone moves, you evaluate all of their access, grant them whatever new access they need, and remove the old access.

If they are happy in what they're doing, you may forget to do this. You may move them and then a year down the road, let's say they're disgruntled, or you have to reprimand them, you don't even realize that they still have those permissions that granted them the access to do harm.
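That evaluate-grant-remove step on a role change can be sketched as a diff between what the person currently holds and what the new role's baseline says they need. The entitlement names below are invented for the example:

```python
def role_change_review(current_access, new_role_baseline):
    """Compare a user's entitlements against the baseline for a new role.

    Returns what to revoke (held but no longer needed) and what to
    grant (needed but not yet held), so nothing lingers after a move.
    """
    current = set(current_access)
    baseline = set(new_role_baseline)
    return {
        "revoke": sorted(current - baseline),
        "grant": sorted(baseline - current),
    }

# A sysadmin moving into project management should lose server access.
review = role_change_review(
    {"prod-server-admin", "jira", "vpn"},
    {"jira", "vpn", "project-tracker"},
)
```

Running a diff like this as part of the transfer checklist is what keeps the "forgotten admin rights a year later" scenario from happening.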

And then if they're in financial hardship, that's a big one, especially when it comes to the fraud and the intellectual property.


Best Practices and Mitigation Strategies

Best practices and some mitigation strategies. Really set job expectations early, offer assistance programs to employees so that they feel that the company really is there and invested in them. That builds a lot of loyalty.

And even if they are disgruntled, they're more likely just to up and leave, and not try to harm the company. Invest in their professional development.

As I said earlier, do some monitoring. You can do some targeted monitoring, in this case, of access paths and behaviors. Again, when are they logging in? Where are they going? What kind of files are they accessing?

If they've never tried to access a finance file because they're an admin, and all of a sudden, they're looking through that or attempting to look through those types of files, that can be a trigger.

Disabling access before you terminate is a great one that most organizations, from what I've seen, do not practice. If you know you're going to fire someone at noon on Friday, then maybe at 9:00 AM, you call them into the office to have the conversation with them.

While they're sitting in the office, all of their legitimate access has already been terminated. This has been pre-planned through your HR security org. And that way, if they try to return to their desks to go in there and delete something, they're already locked out.
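That pre-planned lockout works best as a checklist executed in one pass, with failures recorded rather than silently skipped. A sketch; the step names are hypothetical, and in a real environment each function would call your directory, VPN, and badge systems:

```python
def run_offboarding(username, steps):
    """Execute each deprovisioning step for a departing user.

    'steps' is a list of (name, function) pairs. Every step runs even
    if an earlier one fails, and the result of each is recorded so HR
    and security can verify the lockout was complete.
    """
    results = {}
    for name, step in steps:
        try:
            step(username)
            results[name] = "done"
        except Exception as exc:
            results[name] = f"failed: {exc}"
    return results

def badge_system_down(user):
    # Simulates one system being unreachable during the lockout window.
    raise RuntimeError("badge system offline")

# Placeholder actions standing in for real directory/VPN/badge calls.
steps = [
    ("disable_directory_account", lambda user: None),
    ("revoke_vpn_certificate", lambda user: None),
    ("deactivate_badge", badge_system_down),
]
results = run_offboarding("jdoe", steps)
```

Recording every outcome matters: a lockout that silently skipped the badge system is exactly the gap a disgruntled ex-employee walks back in through.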

Also, bringing in their equipment so that they don't have the ability to walk out with it. Having configuration management processes and procedures in place. People make changes all the time, and most organizations do, let's say, a less than adequate job of capturing those.

Most organizations aren't worried about the potential insider threat as much as they are just, "We need to know who did something in case we have to roll it back." In this case, if you're actually following those processes and have them documented and you're reviewing them, you may catch the insider threat. Use a risk-based approach to prioritize high value assets.

Again, you're going to have a lot of assets in your environment. It's going to be difficult to secure them all. Can you prioritize them and start with the most important? Which ones are the ones that you couldn't live with if they were down?

Securing access to the organization's log files. Clearly, we don't want admins going in and deleting or modifying the log files, removing the trail of breadcrumbs that they've left.

Performing backups and testing restoration. This is probably one of the most important on here.
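Testing restoration can start as small as a checksum comparison between the original data and the restored copy. A sketch over byte strings; in practice you would stream the real files from disk and restore into an isolated environment first:

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of a blob of data."""
    return hashlib.sha256(data).hexdigest()

def restore_is_intact(original: bytes, restored: bytes) -> bool:
    """True only if the restored copy matches the original bit for bit."""
    return checksum(original) == checksum(restored)

# A corrupted restore fails the check even on a one-character change.
ok = restore_is_intact(b"payroll-2023.db", b"payroll-2023.db")
bad = restore_is_intact(b"payroll-2023.db", b"payroll-2O23.db")
```

The design point is to verify restores on a schedule, not during an incident, since that is when you discover which part of the restoration fails.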

Most organizations do a fairly adequate job with backups. Most rarely ever test the restoration until they need it. And there's a high percentage chance that some part of that restoration will fail, so you need to know in advance. Getting separation of duties and a two-person rule in place where applicable.

For example, if you're a software development organization, you may have one individual who can promote code through the non-production environments but not into production, and a second individual who can do the production piece but not the non-prod.

This way, each one has a set role, each one has a limited amount of access. It requires two people to do something. One person can't go in there and wipe out the entire organization or promote bad code all the way through the organization without being seen.
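A two-person rule can be enforced in software as well as in process: a change only reaches production when enough distinct people, none of them the author, have signed off. A minimal sketch with made-up names:

```python
def can_promote_to_production(author, approvers, required=2):
    """Enforce a two-person rule on a production promotion.

    Counts distinct approvers other than the author; self-approval
    and duplicate sign-offs don't help reach the quorum.
    """
    distinct = {a for a in approvers if a != author}
    return len(distinct) >= required

# Two independent reviewers: allowed. Author vouching for themselves: not.
allowed = can_promote_to_production("alice", ["bob", "carol"])
blocked = can_promote_to_production("alice", ["alice", "bob"])
```

Most source-control and CI platforms can express this same constraint as a branch-protection or approval policy, which is usually the practical place to enforce it.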

Educate your employees. This is annual security awareness training, role-based security training, and I would even say ad hoc training as different things occur within your organization, as well as within just the threat landscape as you watch the news and social media.

If you can afford the solutions and the skills from the personnel perspective, putting in data exfiltration prevention solutions, as well as implementing physical security. If you work in a space that requires annual certification, physical security measures are probably already something you're doing. You may just have to beef them up a little bit to make them more comprehensive.

And then whenever possible, encrypt data. Obviously, on employee laptops and remote devices, encrypting data is very easy. It's encrypted automatically with the right tools. Internally though within servers and your more critical assets, that can be difficult.

Sometimes encrypting data as you try to move it across the network causes problems for the security tools to be able to inspect that traffic. It also makes it difficult for some applications. And then if you're in an application development type environment, it can be extremely painful for your developers.

So there's going to be a business balance to what you do. You have to decide how much security risk you want to accept versus how much security you want to implement. But again, these are steps that you can take that everyone should take at some level. You just have to determine how far you want to implement them.

And lastly, the unintentional insider threats, the non-malicious activity. There's a lot of debate if you go to seminars and conferences on security as to, "Is an unintentional non-malicious user different than an intentional one?"

From a behavioral standpoint, from an intent or motive standpoint, yes, but the risk and damage to the organization is the same. Whether a person deletes all of your data off of a server, like in the first example I gave you, because they're disgruntled and want to harm you, or they delete it accidentally, it doesn't make a difference. For the organization, you're in the same boat either way.

How you punish the employee or try to get restitution may be different, but the behavior leads to the same event for you and the same harm.

So I recommend and many of my colleagues recommend that you don't treat an unintentional non-malicious user any different than an intentional one.

Just treat them all the same, give the training, the security awareness training and the role-based training, clearly communicate your rules of behavior through policy, implement good password and account management policies, take care of separation of duties and least privilege, like I talked about on the last slide, and then monitor and log the employee's actions.

If you do that, then you can probably spot when someone's going to accidentally do something. It's probably just a poor behavior that you can correct. If they're malicious, you might be able to stop it before it actually takes place.


Insider Threats: Q & A

And on that note, I will stop sharing so that I can go to the Q and A portion.

The question was about the whistleblower at Google taking information that would harm children and sharing it. I was under the impression, up until very recently, that that was illegal, that the organization owns that, and you sign all sorts of confidentiality and NDAs and all sorts of agreements.

However, in the most recent one with Facebook... And you may be speaking about Facebook in this one, I'm not sure if we just swapped out company names.

But the one with Facebook, when the woman testified on Capitol Hill, they actually stated that there is a law in place protecting the whistleblower, that if there's any harm being done or they think there's harm being done, they actually can take the insider information and share it with the federal government.

So from the insider's perspective, there is coverage. Now obviously, any individual who goes in front of Congress exposing an organization is probably going to have a hard time finding a job in certain locations, but they are protected legally.

The risk to the organization is obviously that anything that you've said and done is now out there for everybody to see and read. I think it's less about, at that point, stealing property or committing fraud. It's almost two different scenarios.

It would still be considered insider information and, technically, an insider threat. But at that point, if you're protecting against someone exposing things you're doing that harm others, I don't know that I would call that an insider threat.

Hopefully, that answers the question. But again, in case you didn't know this already, whistleblowers are protected; doing the right thing is protected by the federal government.

Got about three minutes left. Are there any other questions anyone would like to ask in the Q and A chat box? Okay.

If not, then I appreciate everybody who took the time out of their day to join. Thank you for joining. We hope you'll come back and join future sessions. And for anyone who didn't join live, but is watching us after the fact, thank you for taking the time as well.

We'll talk to you next time. Thank you.

About the Ntiva Cybersecurity for the Rest of Us Livestream

Ntiva’s Dr. Jerry Craig hosts the Cybersecurity for the Rest of Us livestream every other Thursday from 12:00 to 12:30pm ET. These live events, presented by the Ntiva team of cybersecurity experts, are sharply focused, easily digestible, and cover topics surrounding cyber security in today's modern workplace. We take questions from the audience and share what's working for us and others in the industry.