Supply Chain Security: Malware, Not Missiles
How hostiles will target our aircraft: An Air Force cadet reflects on supply chain security and the need to secure hardware.
September 23, 2019
By Julia Pack
Disclaimer: The opinions expressed in this paper are the author’s and do not represent an official position of the U.S. Air Force or the U.S. Air Force Academy.
The recent crashes of two Boeing 737 MAX-8 passenger jets, despite the pilots' efforts to save them, set off a global alarm about the vulnerability of large, sophisticated aircraft to a flaw in a single subsystem. We were reminded of this fundamental truth after losing the Challenger shuttle; now we are reminded of it again. The Boeing disasters resulted from unintentional mistakes in software code. Imagine the potential of deliberate, targeted defects introduced into military aircraft carrying potent weaponry. It is yet another example of the pervasive concern of supply chain security.
Recently, during the summer between my third and fourth years attending the Air Force Academy in Colorado Springs, I worked as an intern at a specialized cybersecurity engineering firm based in Ohio. Though far from Silicon Valley, this group guides tech giants, including leading chip-makers, in developing their hardware-level security.
During my too-brief exposure to the topic of military hardware-level cybersecurity, I learned that in the near future, hostiles could attempt to use malware and counterfeit chips to attack our aircraft. They don’t need to rely on expensive missiles. I’m not a cybersecurity engineer, but as a future officer and warfighter, I began asking questions about supply chain security and securing aircraft systems. What follows is a quick look into a deep topic that could impact anyone who flies.
For a broad look at claimed hacks of commercial aircraft, and at test cyberattacks on commercial jets by the U.S. Department of Homeland Security, this article in Newsweek is a good starting point.
Are military aircraft vulnerable to failure in a single, relatively small onboard system? There is certainly evidence that such a gap existed in the past. In 2008, a USAF B-2 Spirit bomber crashed in Guam after takeoff when moisture in three of its 24 port-side air-data sensors, which measure airspeed, produced faulty readings. The B-2's computer-controlled system largely flies the aircraft, automatically manipulating flight surfaces and settings to maintain stability, but on this occasion it was acting on incorrect data. The control system overrode the pilots' inputs, and they were unable to recover the aircraft before ejecting. Both pilots were seriously injured, and this first-ever loss of a B-2 cost an estimated $1.4 billion.
It's easy to understand why malware is one threat to device security that keeps chief information security officers (and their military counterparts) up at night. This rogue software is loaded onto a device by unauthorized means, often during a reboot. The malicious code is typically designed to help attackers steal information, sabotage the device itself, or interfere with the operation of the overall system, whether that system is a weapon, a vehicle, or an aircraft.
The Traditional Way to Attack Military Aircraft
Attacking a sophisticated fighter aircraft by sending a physical object at it, a missile for example, is both expensive and risky. The attacker cannot remain undetected, and there will be retribution. It also requires proximity and a fleeting window of opportunity. Slipping counterfeit hardware into military systems would be far more appealing to certain adversaries if it carried a high chance of success.
Cyberattacks on aircraft may well be more cost-effective for hostile entities than an expensive missile. Depending on the security of the device or subsystem, an everyday laptop or a small computer chip may be capable of interrupting and hindering operations in the air. We can assume that hackers would aim to jam or interfere with GPS, corrupt data communications, and even gain access to DoD weapons systems. It is therefore critical to have an effective means of detecting even the smallest change in code.
The specialized chips that are used in aircraft subsystems can be designed and produced to detect and block rogue software — while signaling that the subsystem has been compromised.
That's the good news. But what if the so-called "security chip" is replaced by one that appears identical yet serves the purposes of hostile actors? Could that actually happen?
It would be quite challenging to replace a subsystem chip on an active USAF aircraft. A saboteur posing as a mechanic cannot sneak up to a fighter jet on the tarmac and begin swapping out parts at 3 a.m. without being noticed, even at a maintenance depot. A hostile infiltrator who has been accepted as part of a skilled maintenance crew could pose a greater threat, but that scenario seems unlikely.
During the manufacturing process, and along the supply chain of different defense suppliers, a lapse in security might leave an opportunity for a hostile actor to carry out the swap. The many components in a system are made in different places, and then they are shipped to where they are incorporated into larger systems. It’s challenging to ensure that an extensive supply chain remains 100% “pure” at all times, and some defense contractors have complex, far-flung supply chains.
If this seems like succumbing to paranoia, it’s not. It’s embracing reality.
To be safe, we should assume attacks will come, and we should take measures to stop them. It makes sense to consider that a counterfeit, malicious chip could be substituted into a military aircraft component. One way to defend against that is to architect dependencies between the chips in a component: the chips communicate with one another, requesting a key or other signal that only the legitimate original part can answer with.
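That kind of dependency can be illustrated as a simple challenge-response check. This is a minimal sketch, assuming a shared secret provisioned into both legitimate parts at manufacture; the key, function names, and protocol details here are hypothetical, and real parts would hold per-device keys in tamper-resistant storage.

```python
import hashlib
import hmac
import secrets

# Hypothetical secret provisioned into both legitimate parts at
# manufacture; a counterfeit chip would not possess it.
PROVISIONED_KEY = b"example-key-not-for-real-use"

def issue_challenge() -> bytes:
    """The dependent chip sends a fresh random nonce."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """The queried chip proves it holds the key via an HMAC over the nonce."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Constant-time comparison against the expected response."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A legitimate part answers the challenge correctly...
nonce = issue_challenge()
assert verify(nonce, respond(nonce, PROVISIONED_KEY), PROVISIONED_KEY)

# ...while a counterfeit without the right key fails the check.
assert not verify(nonce, respond(nonce, b"wrong-key"), PROVISIONED_KEY)
```

Using a fresh random nonce each time prevents a counterfeit part from simply replaying an answer it recorded earlier.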
How Hardware Can Detect a Hostile Insertion of Hardware or Software
A security chip, such as Micron's Authenta or a trusted platform module, protects against attacks by assuring device identity and the "health" of the code being run. Security chips are designed to measure (hash) the code present on a part and compare that measurement to the value recorded for the original, thereby identifying rogue software, which will never hash identically to the authorized software.
When the active code measures to exactly the correct value, it can be trusted, and the security chip releases an encrypted key that unlocks the remaining data so the system can continue running. When the device containing the security chip, another device, or an operator of the system requests a connection to another subsystem or remote device in the aircraft, the security chip participates in a protocol that authenticates the two devices to one another. If both devices are uncompromised, they can communicate normally.
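The measure-then-unlock flow described above can be sketched in a few lines. The hashing step and the idea of releasing a key only on an exact match reflect how measured boot generally works, but the firmware bytes, "golden" value, and key in this example are purely illustrative.

```python
import hashlib
import hmac
from typing import Optional

# Illustrative "golden" measurement, recorded when the authorized
# firmware image was provisioned (not a real firmware image).
authorized_image = b"\x7fELF...authorized flight-control firmware..."
GOLDEN_HASH = hashlib.sha256(authorized_image).hexdigest()

def measure(firmware_image: bytes) -> str:
    """Hash the code exactly as loaded; changing any byte changes the digest."""
    return hashlib.sha256(firmware_image).hexdigest()

def release_key_if_trusted(firmware_image: bytes) -> Optional[bytes]:
    """Release the unlock key only when the measurement matches the golden value."""
    if hmac.compare_digest(measure(firmware_image), GOLDEN_HASH):
        return b"illustrative-unlock-key"
    return None  # rogue code: withhold the key and raise the alarm

# Authorized code measures correctly and receives the key.
assert release_key_if_trusted(authorized_image) is not None

# Tampered code, even with a one-word change, is refused.
tampered = authorized_image.replace(b"authorized", b"malicious!")
assert release_key_if_trusted(tampered) is None
```

Because the key never leaves the security chip unless the measurement matches, rogue software cannot simply skip the check: without the key, the rest of the system stays locked.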
Cyber Resiliency in the Face of Attack
If the software code is wrong, the security chip can help keep the aircraft operating safely: it will be architected to run the system in a mode that minimizes damage. This capability gives the system, and the machine it is part of, some resiliency: the ability to keep functioning through an attack. Resiliency is crucial. There is almost no conceivable situation of compromised systems in which you would want a jet engine stopped in flight.
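The fail-operational idea can be made concrete with a tiny sketch: on a failed verification, the subsystem falls back to a limited, pre-vetted mode instead of halting. The mode names and selection logic here are hypothetical, not any particular aircraft's design.

```python
from enum import Enum

class Mode(Enum):
    FULL = "full capability"     # verified code: normal operation
    SAFE = "degraded but flying" # unverified code: minimal trusted control path

def select_mode(code_is_trusted: bool) -> Mode:
    # Never halt outright: if verification fails, drop to a limited
    # safe mode so the aircraft keeps flying while the alarm is raised.
    return Mode.FULL if code_is_trusted else Mode.SAFE

assert select_mode(True) is Mode.FULL
assert select_mode(False) is Mode.SAFE
```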
Effective Hardware Cyber Resiliency Is an Intricate Puzzle
Each device inside the aircraft that is connected to anything — the Internet, or other devices inside the aircraft — needs to have its own onboard security at the hardware level to help sound the alarm when an attempt to load rogue software takes place. It is a truly complex task to correctly design the required authentications between systems and subsystems, and the automated responses of each subsystem in case of compromise or when missing components are detected.
The Air Force has been extremely proactive in tapping the talents and patriotism of white-hat, ethical hackers to uncover vulnerabilities. Its 'Hack the Air Force' initiatives, open to participants from 191 countries, are an important and community-building way of discovering bugs in Air Force systems. With upwards of eight million lines of code built into newer models of aircraft, we have a sharp need to be certain that exactly, and only, the correct eight million lines are loaded.
Though my time working with TrustiPhi was short, it fundamentally changed how I perceive security. When our peers strap into the seat of an F-35, we want to know, and they need to know, that the aircraft's mission will prevail over the almost inevitable hacks and cyber-sabotage by hostile actors, because resiliency is built into its subsystems. Having had a first look under the hood of the cyber world, I will maintain awareness of security issues and work to ensure resiliency against malware and sabotage in our hardware. It is with full confidence in our training, our vigilance against cyberattack, and the resiliency of our equipment that the Air Force will continue its success in defending from the skies.
Julia Pack is a cadet first class at the U.S. Air Force Academy (CS-09 “Viking 9;” USAF Academy, Class of 2020) and a 2019 Intern at TrustiPhi.