Cyber threats and cybersecurity are topics that impact every business, no matter how small or how large, or what industry they’re in. Here, a panel of cybersecurity experts discusses the evolving cyber crime landscape—and what businesses can do to fight back.

PARTICIPANTS:

Sarah Clark, Salt Lake Chamber

Joe Crandall, JourneyTEAM

Bruce James, Intermountain Healthcare

Tsutomu Johnson, Parsons Behle & Latimer

Robert Jorgensen, Utah Valley University

Sean Lawson, University of Utah

Elaina Maragakis, Ray Quinney & Nebeker

Eric Montague, Executech

Aubrey Murray, Perpetual Storage, Inc.

Shawn Orr, Big-D Construction

Dean Sapp, Braintrace

David Sonnenreich, Utah Attorney General’s Office

Matt Sorensen, Secuvant

A special thank you to Romaine Marshall, partner at Holland & Hart, for moderating the discussion.

What are some of the major cybersecurity threats in 2018 that businesses should be aware of?

SAPP: When we work with clients, we’re seeing a lot of what I would call common hygiene problems around credentials that are impacting businesses. We see a lot of theft of usernames and passwords associated with email accounts, and then that email account access is being leveraged for wire fraud or phishing fraud. It’s commonly called CEO fraud or business email compromise, where your CEO or accounts payable individuals in the organization are exchanging wire transfer information and it’s fraudulent. Large amounts of money get wired and approved because companies don’t have very strong dual controls over the movement of money. And so they’re realizing large losses. If I were to average the breaches we’ve responded to recently in the valley, the losses are in the neighborhood of $200,000 to $300,000. So significant amounts of money.
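
The “dual controls” Sapp describes can be enforced in software as well as policy: no single person should be able to both initiate and release a large wire. Below is a minimal, hypothetical sketch in Python; the roles, threshold and names are illustrative assumptions, not any panelist’s actual system.

```python
# Hypothetical sketch of a dual-control check before releasing a wire transfer.
# Roles, threshold, and names are illustrative, not from any real system.
from dataclasses import dataclass, field


@dataclass
class WireRequest:
    beneficiary: str
    amount: float
    approvals: set = field(default_factory=set)  # user IDs who have approved


AUTHORIZED_APPROVERS = {"cfo", "controller", "ap_manager"}
DUAL_CONTROL_THRESHOLD = 10_000  # wires at or above this need two approvers


def approve(request: WireRequest, user_id: str) -> None:
    if user_id not in AUTHORIZED_APPROVERS:
        raise PermissionError(f"{user_id} is not authorized to approve wires")
    request.approvals.add(user_id)


def can_release(request: WireRequest) -> bool:
    # Small wires need one approver; large ones need two *different* approvers.
    required = 2 if request.amount >= DUAL_CONTROL_THRESHOLD else 1
    return len(request.approvals) >= required


wire = WireRequest(beneficiary="Acme Supply Co.", amount=250_000)
approve(wire, "controller")
print(can_release(wire))   # False: a second, independent approver is required
approve(wire, "cfo")
print(can_release(wire))   # True
```

Pairing a check like this with out-of-band confirmation of any new wire instructions—calling the vendor at a number you already have on file—addresses the fraudulent-instruction half of the scheme.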

SORENSEN: We’ve also seen an uptick in the delivery channel for phishing and malware into social media. Where companies have invested some resources to watch email and filter email and prevent those links from getting to the end user, they’re now getting them through Facebook and even LinkedIn. You tighten up one area and it just spreads into another.

MARAGAKIS: From a liability perspective, one of the things we’re starting to have to advise our clients about is, if they are going to offer multi-factor authentication, are you going to require it or is it going to be optional? Oftentimes what they’ll require is a password and a voiceprint, for example. Well, if the user doesn’t set up the voiceprint, are you going to let them use the system or not? And that’s a huge liability factor, because they say, “Well, why didn’t you tell me that if I didn’t set up the voiceprint, I couldn’t use the system?”

And then your client is left in a position where they’re saying, “Well, you’re the one who didn’t set up the voiceprint, so why didn’t you do that?" You can sink a lot of money into litigating the liability on that.

JORGENSEN: UVU recently enforced two-factor authentication for all employees. It was basically, “You can’t log in if you have not set this up.” It is optional for students, just because there are obviously some extra things involved with that, with 35,000 students, but we do offer it to all our students.

MARSHALL: Do you guys ever get any push-back to that from the user community? “This is harder, I want to get into the systems quicker."

JORGENSEN: There always is push-back. And faculty is probably one of the worst user groups to deal with, up there with executives—and that can be on the record—as far as user acceptance. So yes, there’s a lot of push-back initially. But we use Duo, so we’ve got push authentication, and once people see that they type in their password, half a second later their phone buzzes, they tap approve and they’re in, it alleviates a lot of that. It really adds seconds to the login, at most.
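
Jorgensen’s point about friction is easy to demonstrate. The sketch below uses a generic time-based one-time password (TOTP) second factor via the pyotp library rather than Duo’s proprietary push API, which is what UVU actually uses; it is an illustration of the concept, not their implementation, and assumes pyotp is installed.

```python
# Minimal sketch of a TOTP second factor (not Duo push; pyotp assumed installed).
import pyotp

# At enrollment, the server generates and stores a per-user secret;
# the user loads it into an authenticator app (often via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login, after the password check, the user types the six-digit code
# currently showing in their app, and the server verifies it.
code_from_user = totp.now()          # simulating what the app would display
print(totp.verify(code_from_user))   # True: second factor satisfied
print(totp.verify("000000"))         # almost certainly False: wrong code
```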

MONTAGUE: I’ve seen it way too often, where you meet with the company, you present them with security solutions, and they’re like, “That is fantastic. Let’s do it. We’re all in. Oh, but don’t do it for us five executives. I don’t want two-factor, I don’t want my password to expire."

You’re like, “Wait, you’re an idiot. You five executives probably have the most critical data, and all this we’re putting in place is probably going to protect you five 80 percent, the rest of the company 20 percent.” Half the time we do a security analysis, I get some variant of that: “This is awesome. Do it for the whole company, but not for us."

JOHNSON: It’s important when you implement these strong procedures to get buy-in from the executives at the top of the company. It’s really making the case that when we implement these new security structures or buy new technology in order to solve these problems, it’s not just so that we can spend money—it’s so we can secure against real threats, that there’s a real reason we’re doing this. In addition to getting that buy-in, it’s important to roll out an educational campaign throughout the organization to say, “This is what we’re doing, these are the guidelines and guideposts of what we’re doing, and this is why it’s imperative that we have you go through these processes,” so people don’t view it as just a waste of time.

LAWSON: The most important layer is that human layer. Right? You can have all the great tech in the world, but tech can’t solve stupid. If people click that link, download that file, go to that malicious website—if they don’t have good cybersecurity education, the tech is not going to save you.

JORGENSEN: A lot of times when we talk about things like business email compromise and electronic funds transfer, that’s not necessarily a technology problem. It’s often a business process problem. You don’t have proper controls for your financial transactions, things like that. So one of the ways to approach both dual-factor authentication and business email compromise is to look at it holistically as a business problem. It’s not a technology problem, it’s not a cybersecurity problem, it’s a business problem to address. And, yes, technology and cybersecurity are pieces of it, but so are business practices, user education, policies—all those sorts of things.

SORENSEN: Cybersecurity is a business problem and it needs to be solved by the business leadership. When I meet with executive teams, I ask them about the primary risks they face as a business. They rarely, if ever, include cybersecurity. And then they tell us, “Well, one of our risks is our brand and reputation. If we lose our reputation in our community, we would have a serious problem.”

They understand that cybersecurity can impact their reputation and brand, but they don’t know how to manage that risk the way they know how to manage the risk of labor shortage or supply chain or other things.

MURRAY: CIOs have a lot of burden put on them, because they’re not just trying to protect the desktop equipment and the cell phones being used at the company, they’re also having to do the cybersecurity and manage the data. What’s been really interesting is that even larger companies are having trouble with the definition of their data tiering—backup, archive and disaster recovery all meld into this weird place. CIOs are in charge of the desktops and phones and the cybersecurity and all of the parts that go into that—and the budget.

Let’s just talk about how their budgets aren’t big enough. Right? They’ve got too few people doing too much work, and they’re supposed to strategize about how to do all of this, and you just get this very overwhelmed population within your company that’s saying, “I’m putting out a million fires and I can’t even think that far into the future. I would like to think about strategy, but unfortunately we’ve got this fire going on over here.”

MARAGAKIS: The flip side of cybersecurity is privacy. I can’t tell you how many paper breaches we’ve had to deal with, because we still have a lot of paper and people take pictures of it with their iPhones, or you go to a doctor’s office and the chart for a patient is sitting right there and people take pictures of it. That happens quite a bit.

Even if it’s not being enforced under state data protection laws, if you tell a client, “We didn’t notify you because it wasn’t an electronic or computerized record,” that’s not going to cut it. They’re going to want to know anyway, because that information is just as valuable to the thieves as computerized data.

So when we talk about protecting the organization, we should talk about enforcing a culture of privacy generally. Not just, “Protect your password," but, “Look, if there’s something sensitive, don’t leave it on your desk."

SONNENREICH: The FBI would tell you that more identity theft is still occurring from traditional forms than from the residue of big data breaches. The Utah Protection of Personal Information Act covers personal information even if it’s in paper form. Companies should be aware of what information they’re giving out. We’ve seen situations where, for example, companies mail windowed envelopes through which you can see the account number as well as the person’s name. That is also a data breach.

What trends and potential attacks are on the horizon?

LAWSON: We’re seeing cryptocurrency mining malware—cryptojacking, where you infect your target with some sort of malware that then steals processor power to mine for bitcoin or ethereum or any of the other cryptocurrencies that are coming out. We’re going to see more of those kinds of threats. Perhaps we’ll even see Internet of Things cryptojacking, where you hijack lots of smart home devices and use their processor power to mine bitcoin. Maybe that’s even already happening.

We’ll probably see more hybrid kinds of attacks, like the data breach-driven propaganda and blackmail we saw in the election. I would suspect that other criminal groups will probably get an idea that says, “Hmm, if you can potentially throw an election with that, what kind of data breach-plus-propaganda disinformation combination might we use for blackmail or some other kind of criminal purpose?”

And then critical infrastructure. We’ve seen a few more really scary examples of attacks against critical infrastructure, like trauma systems, in the last couple of years, especially some of the recent ones we saw in 2016 in Ukraine. There’s been a lot of worrying about power grids going down and other apocalyptic doom scenarios for 30 years, and those things haven’t happened. And I still think they probably won’t happen—not the kind of “everything is falling out of the sky and blowing up and we’re all dying because of cyber-attacks”—but we probably are on the cusp of beginning to see more actual attacks against critical infrastructure that, at minimum, disrupt service or cause harm to the companies providing those services.

JOHNSON: If hackers are looking at this like a business proposition, a return on investment, one of the easiest things to do is to approach a business’s employees and pay them money in order to get information. That’s something I’ve seen with clients, and it’s very simple. It’s, “Hey, I know that you deal with this sensitive information. I will give you $5 for each social security number you’re willing to give me."

That’s probably something that most businesses don’t think about, but it underscores the point that there’s a human factor to cybersecurity. Sometimes basic threat vectors get overshadowed by the flashy trend of what’s happening in big data breaches.

SAPP: Verizon puts out its Data Breach Investigations Report every year, and it attributed 25 percent of data breaches and losses to internal risk. That can be employees who made poor decisions, or who were in a tough economic situation and intentionally made a poor decision of opportunity.

As far as trends, we’re seeing mobile device attacks starting to really scale up, because these devices are so ever-present in our environment. They can be used for tracking—we basically opt into a lot of that when we use these cool, nifty apps. And using the mobile device to harvest data—financial data, card data, personally identifiable information that can be resold—is perpetuating itself. A lot of the attacks coming out now are also memory-based, because security technology is getting better. We’re able to detect patterns and trends, so now the attackers are going after memory-based attacks on a device, which are much harder to observe, detect and prevent.

We’re seeing an evolution of sophistication going after both individuals and corporate users. Really, it’s based off of how much value you represent to an attacker. If you’re a C-level executive, you’re going to be in one pool of attack targets. If you’re a consumer, you’re maybe in another pool. But the attackers are extremely good at understanding the ROI of going after individuals, and then they leverage that ROI to figure out who they will attack so they can make money off of the victim.

MURRAY: Insider threat is something that we’ve seen a lot, and it can be managed through policies and procedures and managing access. It touches on multi-factor authentication and it touches on budget as well. If you have only a few people in charge of everything, you’re going to have a lot of people able to access everything. So you need to look at your budget and say, “Look, we need to have these levels of access, and if we don’t have these levels of access, then we’re leaving ourselves open to all of these people." If an admin has the same access level as the CIO, I’m sorry, but your cloud is very much at risk. You’re leaving yourself open.

CRANDALL: The one thing that is really going to peak this next year is cryptojacking, just because it’s so nuanced. It’s not as invasive as ransomware. The idea behind cryptojacking is quite simple. In order to generate currencies like bitcoin, you have to do some type of processing, and the graphics processor in your computer is very, very good at doing that. So instead of installing ransomware—instead of encrypting your data, making that a really intensive thing, and then waiting a week or a month for someone to potentially pay you to decrypt it—we can simply hijack a web server, have you go to this website, and in the background process this algorithm without you realizing it and generate bitcoin.

What’s so scary is the scale at which they can do that. Companies have hundreds of thousands of desktops and laptops and phones, and how many times do you go and check your pay stub on your phone or on your desktop and hit a web server somewhere? When the cryptojacking occurs, they could have compromised your bank, they could have compromised your cloud provider—just one innocuous web server that you happened to have run some background task on—and now you’ve got all of your employees with a thousand machines, and you’re paying for the power on them. And that’s really what it costs you: power and processing time.
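
One low-tech defense against the browser-based cryptojacking Crandall describes is to watch which script sources your pages actually load. Here is a minimal, hypothetical sketch; the domains are made up, and a Content-Security-Policy header enforced by the browser is the more robust version of the same idea.

```python
# Flag external <script> tags whose host isn't on an allowlist.
# The domains below are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

ALLOWED_SCRIPT_HOSTS = {"cdn.example-bank.com", "www.example-bank.com"}


class ScriptAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).netloc
        if host and host not in ALLOWED_SCRIPT_HOSTS:
            self.suspicious.append(src)


page = """
<html><body>
  <script src="https://cdn.example-bank.com/app.js"></script>
  <script src="https://coin-miner.example.net/miner.js"></script>
</body></html>
"""

auditor = ScriptAuditor()
auditor.feed(page)
print(auditor.suspicious)   # ['https://coin-miner.example.net/miner.js']
```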

How can businesses keep themselves as safe as possible from cybersecurity threats?

JOHNSON: I don’t want to create a misperception that there’s a way to keep yourself 100 percent safe. Given that this issue is dynamic, you are always going to have to be addressing it. You’re always going to need to be talking to your attorneys, your IT, your security people and your insurance people to make sure that you are constantly on top of it.

But at a base level, you should create a policy that affects the entire organization, and that policy should, first and foremost, identify the set of data that comes into the organization and how it flows through the organization. Figure out how the organization is consuming that information, how it is manipulating that information and, ultimately, deleting it out of the organization. What security is in place at each of those manipulation points, until the data is at rest or deleted? Assign risk variables to that data based on confidentiality, integrity and availability. That is probably one of the most critical exercises an organization can do.

In addition to identifying the digital assets that are swimming through your organization, you then need to create tight policies around not only security but privacy, in order to help your employees, your customers and your vendors understand what you’re doing to secure that information. Then create a strong, detailed incident response plan, along with a high-level, one-page version you can hand to any employee that lets everybody know how the organization responds to incidents and identifies who the incident response coordinator is, so that when there is an incident, people can respond effectively.

The organization, at the very least, also should find insurance for this type of thing. If you can’t do anything else, at least you can insure against this.
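
The risk-variable exercise Johnson describes—scoring each data set on confidentiality, integrity and availability—can start as something as simple as a scored inventory. A minimal, hypothetical sketch in Python; the data sets and the 1–5 scale are illustrative assumptions, not anyone’s actual classification scheme.

```python
# Score each data set on the classic confidentiality/integrity/availability triad.
from dataclasses import dataclass


@dataclass
class DataSet:
    name: str
    confidentiality: int  # 1 (low impact if exposed) .. 5 (severe)
    integrity: int        # 1 (low impact if altered) .. 5 (severe)
    availability: int     # 1 (low impact if unavailable) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.confidentiality + self.integrity + self.availability


inventory = [
    DataSet("Customer payment records", 5, 5, 4),
    DataSet("Public marketing site content", 1, 3, 3),
    DataSet("Employee HR files", 5, 4, 2),
]

# Work the highest-risk data sets first.
for ds in sorted(inventory, key=lambda d: d.risk_score, reverse=True):
    print(f"{ds.risk_score:>2}  {ds.name}")
```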

JORGENSEN: A lot of companies simply don’t know what data they have or where all their data is, especially when you get into larger organizations where you start getting into the phantom IT sort of things, where somebody doesn’t like your CRM solution so they’ve gone and got their own, and they’ve imported a whole bunch of data into some other platform. Your company uses Salesforce. Somebody doesn’t like Salesforce, they want to use an API for this because they’re an app developer, and so they import all the data into there, and suddenly all your customer data is somewhere you don’t know about.

When you think about other kinds of insurance, insurers want to know what your assets are, what you’re protecting. But especially when we talk about digital things, we don’t always think about them as individual pieces of data or information that live here or live there; we kind of just think of it as being in the cloud, or on this system. We don’t know where our data is, where it’s being stored, who has it, and what copies we have.

SORENSEN: There are two resources for companies that are coming into their own. The NIST Cybersecurity Framework consists of five high-level functions—identify, protect, detect, respond and recover—and if you have something going on in all five of them, your security program is on its way. The other resource, for the IT people, is the Center for Internet Security’s 20 critical controls.
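
As a rough illustration of Sorensen’s point, a gap check against the five CSF functions can be as simple as asking whether anything at all is happening under each one. The activities listed here are illustrative examples, not the framework’s official categories.

```python
# Toy self-check across the five NIST CSF functions.
# The example activities are illustrative, not the framework's official categories.
program = {
    "Identify": ["asset inventory", "data classification"],
    "Protect":  ["MFA for employees", "patching cadence"],
    "Detect":   [],                      # nothing in place yet
    "Respond":  ["one-page incident response plan"],
    "Recover":  ["tested backups"],
}

gaps = [function for function, activities in program.items() if not activities]
print("Gaps:", gaps or "none")   # Gaps: ['Detect']
```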

JAMES: From the Intermountain Healthcare standpoint, identifying the data we had was something that we struggled with quite a few years ago, and it just had to do with the size of the organization. Something to understand is that if you haven’t gone through that step as an organization, it’s going to take years to do.

Part of the framework is to associate a risk value with that data. At Intermountain, our electronic medical record system is one of the highest-risk systems we could lose data from, so maybe we need to focus there first and understand where the data moves there first, whereas some of these other systems that have data—maybe our public website—aren’t quite as risky. Those are just examples, but understand that there should be a risk-based approach to this.

LAWSON: What’s the internal process like for reporting a breach, especially if it’s the employee’s fault—they messed up and something got breached? You want them to report. You want them to report as quickly as possible. I mean, you want to hold them accountable for messing up and maybe costing the company some money, but at the same time, and probably more importantly, you want them to report that as quickly as possible.

So figure out what business processes you can implement that don’t punish people so harshly that they’re deterred from reporting in the first place, or tempted to hide what they did, but instead encourage them, if they do make a mistake that leads to a breach, to report it.

JOHNSON: It’s very important for organizations to have a strong privacy impact assessment process that has a security component too, so that when we have new technology, new processes, new applications, new hardware, there is a review to determine the impact on security and on privacy before those things are implemented. Early in the development or roll-out cycle, we can pinpoint privacy or security concerns. Let’s address them right now, while we’re in the formation phase of this new idea; then all of those concerns are addressed early on and it will be easier to roll out. Once a lot of resources have already been spent and you’re at the last phase of implementation and roll-out, what can you really change? All the money’s been spent to get the thing from concept to actual practice and application.

MONTAGUE: The security auditor and the IT department should be separate. As humans, we’re really bad at constructively assessing ourselves, so security and IT should be separate.

Another negative impact to businesses when they suffer a cyber-attack is the potential for investigations by government regulators. Is there really a potential for that?

SONNENREICH: Yes, there most definitely is a potential for that. Data breach is basically a joint federal/state effort. The FTC takes the lead primarily on the federal side. When they’re looking at data breach issues, they primarily treat them under Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. That may not seem like a natural fit, but that’s where they tend to look at this. So what they’re saying is, basically, when you obtain data from someone, what did you promise them and have you fulfilled your promises?

On the state side, we typically have specific acts. In our case, it’s the Utah Protection of Personal Information Act and the Utah Consumer Credit Protection Act. And the Utah Division of Consumer Protection enforces those acts.

My personal jurisdiction is the UPPIA and the UCCPA. There are three basic requirements that we look for. Two apply outside of a data breach and one after a data breach. The first, outside of a data breach, is: are you using reasonable practices to prevent a data breach? That’s where C-level engagement is very important. None of us can ever prevent all data breaches. As a government regulator, I understand that. We’re looking to see how engaged the company is in the process.

When an attack occurs, we want to see four things: Were they proactive? Were they quick? Have they been transparent? And, candidly, are they empathetic to the people whose data was breached? Are they trying to solve their problem or are they trying to solve the company’s image problem first and foremost, at the risk of exposing people to identity theft?

The second thing we’re looking at is a very simple one. And that is, you have a statutory obligation to properly dispose of protected information when you no longer need it. If you are no longer processing credit card payments directly because you’ve outsourced that, and you have a whole bunch of legacy credit card information, why are you keeping it? Why are you providing a resource for somebody to hack into? If that’s what got hacked and you’ve had it on your books for the last 10 years, I’m going to be a lot less sympathetic, because you have a specific obligation to destroy that information—and in paper form, too.

Finally, once a breach has occurred, have you assessed it in terms of its importance as a potential source of identity theft? And if it is deemed to be a potential source of identity theft, have you provided timely and adequate notice to the people who are affected so they can be protected?

MARAGAKIS: Oftentimes people will focus in on state law or they’ll focus in on federal law without any understanding of what their notice and reporting obligations are. So, for example, if you have a HIPAA breach, you’re going to have to report it. You’re probably going to get audited, so you better have those processes in place before that happens. Because people understand breaches; they don’t understand failures to act or failures to prepare.

Whenever I do presentations, I always start with the penalties, because that’s when people pay attention. Maybe $3,500 doesn’t seem like a lot—until you multiply it by a hundred thousand. You need to be honest with your clients about what their risks are for being investigated, and about viewing law enforcement, when it’s appropriate, as your partner and not as an adversary.

SONNENREICH: The sooner you work with us, the easier it is to make things happen. If we have a question about what notice you’re giving to consumers, it’s easier if we work it out up front than after the fact, when six months later we find out what you told people.

What industries are most vulnerable to attacks?

JORGENSEN: As we have industries that are becoming more and more digital—in the cloud, online—that are traditionally not companies that deal with data, they’re going to be targeted just because they’re new. They’re kind of walking around like, “Oh, we’re on the information superhighway now. What do we do?” There’s a broad spectrum of non-technology companies that fall into this category.

SAPP: It’s that same function of ROI. They go after the lowest-hanging fruit, the simplest to compromise. You can never outrun the bear—right?—but you can outrun a lot of your neighbors and the campers next to you. So if a company is making reasonable investments, that raises its maturity and reduces the likelihood it will be targeted or taken advantage of. And if it’s investing very, very little, then the likelihood is very high it will be targeted, because it’s easier to hack.

SONNENREICH: I’m concerned about the Internet of Things particularly in this regard. There’s a lack of sufficient thought within companies that are coming up with brilliant new ways to integrate with the internet, and we know they’re often not doing a very good job of building in security. Think about any physical device that is part of the Internet of Things that you could put an RFID skimmer on and start pulling information off of.

JORGENSEN: The Internet of Things really expands that whole “where is your data going?” question. Right now, if you want some form of security camera, you can buy them for next to nothing on Amazon, but if you look at some of those cameras, particularly ones coming out of China, they are actually, in the firmware, sending your video from your house to their servers in China, and there’s no way to turn that off. It’s a feature. And the EULA in there says that they’ll record six seconds at a time when motion is detected and store it on their server forever.

Imagine you’re a small business and you put those security cameras in, and they happen to be watching your cash register. Well, you’re going to be transmitting everyone’s credit card information to China or somewhere else as the clerk is looking at their card.

I’m not going to say don’t adopt tech, because that’s ridiculous. But you need to be smart about how you do it and think about what you’re actually getting—if you buy the cheapest whatever, is that company actually going to be a company that you can trust with your data and trust them to update their systems?

If you look at a big company—Amazon, Google, some of these others that are providing these smart devices and AI systems—they’re pretty good about patching things up quickly. But some of these smaller companies, they could go out of business and you have this device that’s connected to your house that’s never going to be updated and is going to be mining bitcoins for the rest of the device’s life or be compromised.

However, with services like Amazon’s cloud, we’re finding things up there that shouldn’t have been up there in the first place. We’re seeing things from federal agencies, we’re seeing other data that’s just sitting in publicly available storage buckets where it’s like, “We stuck it up there so we could access it from home to work on it,” or whatever. If you put something in Amazon with no authentication at all and just leave it there, you’re basically saying, “We don’t care about that data.” You’re not doing any kind of due diligence. You’re not taking reasonable measures to protect that data by throwing it up there with no authentication.
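
A quick way to check whether a bucket is, in Jorgensen’s words, publicly available storage is to look for grants to the all-users groups in its ACL. The sketch below assumes boto3 is installed and AWS credentials are already configured; the bucket name is hypothetical. (S3 Block Public Access, enforced account-wide, is the stronger control.)

```python
# Flag S3 buckets whose ACL grants access to everyone on the internet.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def public_grants(bucket_name: str):
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return [
        (grant["Grantee"].get("URI"), grant["Permission"])
        for grant in acl["Grants"]
        if grant["Grantee"].get("Type") == "Group"
        and grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]


# Hypothetical bucket name for illustration.
for grant in public_grants("example-company-backups"):
    print("Publicly accessible:", grant)
```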

LAWSON: You have to think about what data you have that hackers are looking for. If they see dollar signs, that hangs a sign on your back that says, “Attack me.” So, for example, law firms. Think about all the intellectual property and other confidential and super valuable data that law firms take in in massive amounts.

JORGENSEN: Think about how effectively you could craft a phishing lure if you compromised somebody’s law firm or other business partner. Right now, with just open source information, you can craft a pretty effective phishing lure. But if you can breach someone’s partner, whether it be a law firm, a business associate, even certain clients, you get information that is just a treasure-trove. When you try to motivate people, you can motivate them with fear or with greed, and if you know exactly what’s going on with their organization, you can easily do one of those two things with insider information that only one of these organizations is going to know.