Why IoT Security Is Like a Zombie Marathon: Q&A
A veteran Navy cybersecurity expert has a Halloween-esque view of hackers. They are like zombies, he says, but getting away from them doesn’t have to be overly complicated.
October 7, 2016
You don’t have to be a cybersecurity ninja to keep an IoT device from being hacked. “I have this running joke that I tell people: It is like being in a zombie marathon,” says Chase Cunningham, PhD, a former U.S. Navy chief cryptologic technician who supported U.S. Special Forces and Navy SEALs during three tours in Iraq. “If I can outrun the guy next to me, and he gets eaten by the zombies, I don’t look back; I just keep going because I won,” Cunningham says.
In a conversation with the Internet of Things Institute, Chase, now director of cyber operations at A10 Networks, shares his thoughts on the recent botnet involving more than 150,000 IoT devices. He also explains why he is terrified of driverless vehicles and of hackers targeting the U.S. water supply, and offers advice on getting away from the “zombies.”
What are your thoughts on the recent Mirai IoT botnet that infected more than 150,000 IoT devices?
Chase: The source code for the botnet has been released and is spreading like wildfire. All the code needed was 61 different username-and-password combinations to create this giant botnet. It takes just seconds to grab a device and use it for a botnet or a DDoS attack. I just ran a query with some code I have that looks for devices identifying themselves as “IoT” that are online right now. I found 3,551 devices sitting there, waiting for somebody to tell them what to do.
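Cunningham doesn’t share his query code, but a rough sketch of that kind of lookup, assuming the public Shodan search service and its official Python client (the API key below is a placeholder), might look like this:

```python
# Hypothetical sketch: count Internet-facing devices that identify
# themselves as "IoT", using the Shodan search service via its
# official `shodan` Python client. Requires a valid API key.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder, not a real key

api = shodan.Shodan(API_KEY)

# Ask Shodan how many live hosts advertise "IoT" in their banners.
result = api.count("IoT")
print(f"Devices identifying as IoT right now: {result['total']}")
```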
What do you make of all of the media attention IoT security is getting?
Chase: It is hard to talk about IoT security at large. It really depends on the vertical. In the past seven years, banks and government have done a good job of learning how to lock things down and minimize risk and threat. Healthcare has done a terrible job; it has ten years of catch-up to do. The legal industry has ten to 15 years of catch-up to do. Small- and medium-sized businesses are well behind the power curve. It is not the doomsday scenario that a lot of people put out there, though.
Cyberwarfare has become one of the themes that Clinton and Trump discuss in the current presidential election. How big of a problem do you think this is?
Chase: In 2010, the U.S. military declared cyberspace a combat arena. The U.S. government fights in it, as do the Russians and the Chinese. I mean, everyone who plugs into the Internet somewhere is in a live-fire zone in cyberspace. In all of history, there has never been a time when regular people, businesses, and children were all walking around in a live battlefield.
Driverless cars are another topic making headlines these days. How concerned are you about the security ramifications of driverless vehicles?
Chase: It scares the crap out of me, to be perfectly honest. There are already guys who have remotely locked up the brakes on a Jeep. They have turned Corvettes’ fuel systems on and off. Tesla just had some security problems. It is not that the companies are doing anything wrong; they are just trying to innovate.
For driverless cars, all of the redundant sensors they have are great, but unless you have a human being who can stomp a foot on the brake, bad things are going to happen. With 4,000 pounds of metal going 30 miles an hour and nobody slamming on the brakes, somebody is going to get hurt.
You also have fleets of eighteen-wheelers that are going to cross the U.S. with no human in them. That is a great big chunk of metal flying down the road at high speed. Without somebody behind the wheel, God help us.
I can see that, logically, it would be safer to have driverless cars. Lord knows there are plenty of jackasses paying more attention to their text messages than to their cars. But just the possibility of a bad guy modifying something while the car is in fully autonomous mode is scary. What if that car was hacked right when school let out, and it goes blazing through a stop sign with 15 kids walking in front of it?
The simple fact is that if you can execute logic on the computer in that car, you can make it do things it is not supposed to do. If I have access to the car and a USB stick with the brains to tell it what to do, game over. Hell, you can get a Raspberry Pi for thirty bucks with a USB connection, program that sucker like a little computer, pop it in there, and walk away; that’s all you have to do.
There are also risks with wireless software updates. If you can push an update, you can push malicious code. Have you ever tried to push an update to your iPhone? Can you imagine pushing one to 100,000 cars across the U.S. that are all running on different cellular signals?
I think it boils down to the question of can you versus should you. Can you put a car or an eighteen-wheeler in fully autonomous mode? Great, but the next question is: should we really do that? Is it going to be completely safe, so that we don’t have to worry about problems ever arising? If you look at the benefit versus the risk, in my opinion, I don’t want an autonomous eighteen-wheeler on I-5 going 70 miles per hour any time I am on the highway. Can it be partially automated with a human in the cab? Sure, but when it is fully autonomous, there is too much room for something to go awry.
What else do you think is a significant IoT-related security risk?
Chase: If I were the head bad-hacker guy for ISIS and I wanted to do something that could kill a thousand or ten thousand people, I would be hoping and praying that the water treatment plants in the U.S. were using automated, IoT-enabled chemical induction systems. I would find a way to get into one of those systems, dump every ounce of chlorine and other chemicals into the water, and watch how many people get sick and die from it. That kind of stuff is what worries me about where we are going.
Power grids are also at risk, but there is so much manual redundancy built into them. I am retired Navy, so I worked on a lot of power plants on ships. There are IoT-enabled things on those power plants, and yes, you can modify things and cause outages. But the doomsday scenario of shutting down all of the power to the U.S. is unlikely. If a hacked IoT device flips a switch off, you can usually still turn it back on manually.
What kind of protections do you think are being overlooked?
Chase: The thing that always boggles my mind is that a lot of good security is not a moonshot. If you implement just one or two simple things, such as two-factor authentication, biometrics, or basic encryption, you can knock down your risk by a lot. If you have one of those pieces of the puzzle, you have effectively made yourself a harder target than the next guy. That is the whole goal. Bad guys, threat actors, and whoever else are looking at where they can get access and what they can do with it.
Two-factor authentication alone can really knock down the risk. IoT companies should make it so you can’t just jump on a device and modify settings with only a username and password. Put something else in there: send an SMS, make a phone call, anything. Remember the threat of credit card theft at gas stations? When Chevron and Exxon made you enter your zip code at the pump, they saw credit card theft go down by 94% across their enterprise with that one step.
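That second factor does not take much code. Here is a minimal sketch using the open-source pyotp library for time-based one-time passwords; the secret handling and password check are illustrative stand-ins, not a production design:

```python
# Minimal sketch: adding a time-based one-time password (TOTP) as a
# second factor for an IoT device's admin login. Uses the open-source
# `pyotp` library; all names and checks here are illustrative only.
import pyotp

# Generated once per device and enrolled in the owner's authenticator app.
DEVICE_TOTP_SECRET = pyotp.random_base32()

def check_password(username: str, password: str) -> bool:
    """Stand-in for the device's existing username/password check."""
    return username == "admin" and password == "a-unique-per-device-secret"

def login(username: str, password: str, one_time_code: str) -> bool:
    """Require both the password and a valid, current one-time code."""
    if not check_password(username, password):
        return False
    return pyotp.TOTP(DEVICE_TOTP_SECRET).verify(one_time_code)
```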
If you are building IoT devices that are going to communicate wirelessly, they have to have built-in encryption on the data side and on the communications side. If you are not at least making WPA2 encryption the default and are counting on the end user to set it up, you are selling yourself short.
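As a rough illustration of what encryption “on the data side” can mean, here is a minimal sketch that encrypts a telemetry reading before it leaves the device, using the open-source Python cryptography library; the key handling shown is a placeholder, not real key management:

```python
# Minimal sketch: encrypt telemetry on the device so the payload stays
# protected even if the wireless link is misconfigured. Uses the
# open-source `cryptography` library; key handling is illustrative only.
from cryptography.fernet import Fernet

# In practice the key would be provisioned at manufacture and stored in
# secure hardware, not generated at runtime like this.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = b'{"sensor": "temp-01", "celsius": 21.4}'
ciphertext = cipher.encrypt(reading)    # what actually goes over the air
plaintext = cipher.decrypt(ciphertext)  # what the backend recovers
assert plaintext == reading
```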
It is not rocket science.
Do you think IoT companies should set traps to lure hackers in?
Chase: I am a huge fan of honeypotting. That is something that every enterprise and organization should have, period. The last time I fired up a honeypot with basic system controls, I had something like 186,000 connections in under 12 hours. Within the first five minutes, we had five thousand hits.
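Cunningham doesn’t describe his setup, but a low-interaction honeypot of the kind he mentions can be sketched in a few lines of standard-library Python; the port choice and logging here are illustrative only, and real deployments typically use purpose-built tools:

```python
# Minimal sketch of a low-interaction honeypot: listen on a Telnet-like
# port and log every connection attempt. Real honeypots do far more;
# this only illustrates how quickly the hits arrive.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2323  # use 23 for real Telnet; that requires elevated privileges

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen()
    while True:
        conn, (addr, port) = server.accept()
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"{stamp} connection attempt from {addr}:{port}")
        conn.close()
```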
What do you think needs to happen for IoT security to improve substantially?
Chase: Unfortunately, pain is the greatest teacher there is. Until we really feel some sort of physical, financial, or societal pain from this IoT security issue, it is not going to be anything other than a line item on a budget. Once something really bad happens, then all of a sudden it will become a priority. Until then it is: ‘Ooh, neat, look what my device can do.’