Research shows way to secure cyber-physical systems via anomaly detection
Recent successful cybersecurity attacks, such as the 2010 Stuxnet attack on uranium centrifuges central to Iran’s nuclear program, have made it impossible to ignore how poorly secured today’s cyber-physical systems are.
Cyber-physical systems, including modern automotive and avionics applications, power plants, and industrial automation systems, have historically not been subject to software security breaches because they were not networked and were isolated from the outside world. Today’s world, however, is increasingly interconnected, which opens up new avenues for attacking these systems.
ITI/TCIPG researcher Sibin Mohan, along with co-PI Lui Sha, a professor of computer science and researcher in the Coordinated Science Laboratory, has recently been awarded a three-year, $500,000 NSF grant, "Small: Behavior-Based Zero-Day Intrusion Detection for Real-Time Cyber-Physical Systems," to help solve this problem.
“This research is incredibly timely because now we are seeing a lot of attacks on critical systems,” Mohan said. “Even five years ago, people thought these kinds of attacks weren’t possible. Unfortunately, the attacks are not only happening, but they’re not just limited to stealing data. Attacks also could result in real physical damage, such as crashed cars or power grid crashes that could harm equipment or worse, people.”
According to Mohan, real-time cyber-physical systems are much harder to secure because they have limited hardware resources. You can’t simply run an anti-virus program: it is hard to gauge how much time and how many resources the software will consume, and if it runs too long or uses too many resources, it could interfere with the execution of critical tasks.
“We are trying to look at the practical side of how you detect security violations in a real-time system,” Mohan said. “These systems are designed to be predictable and behave in a certain predetermined fashion, so we can use those behaviors to look for attacks. When a system deviates from what it’s designed for, that’s an indication of an attack.”
What Mohan and Sha propose is to look for predictable behavior in a system and try to identify anything that is irregular. The architecture they’re developing will also ensure that the underlying physical system remains safe and running.
For instance, Mohan and Sha will look at how long a particular program takes to run, what resources it uses in the system, and how much memory it uses -- and how it’s being used -- to determine whether there are anomalies.
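The kind of check described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual implementation: the task names, time bounds, and memory bounds are all invented, and a real system would monitor these metrics in hardware or at the OS level.

```python
# Illustrative sketch: compare a task's observed execution time and memory
# use against the bounds the system was designed for, and flag any
# deviation as a possible intrusion. All names and numbers are invented.

EXPECTED = {
    # task name: (max execution time in ms, max memory in KB)
    "sensor_read": (2.0, 64),
    "control_loop": (5.0, 128),
}

def check_task(name, exec_time_ms, memory_kb):
    """Return True if the observed behavior matches the design-time profile."""
    max_time, max_mem = EXPECTED[name]
    if exec_time_ms > max_time:
        return False  # took longer than designed: possible injected code
    if memory_kb > max_mem:
        return False  # using more memory than designed: possible anomaly
    return True

# A run within the profile passes; an abnormally long run is flagged.
assert check_task("control_loop", 4.2, 100)
assert not check_task("control_loop", 9.7, 100)
```

The appeal of this approach, as Mohan notes, is that real-time tasks are designed to be predictable, so tight bounds like these are feasible in a way they would not be for general-purpose software.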
The first step is to build an expected behavioral profile of a system, some of which can be determined at design time.
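One simple way to build such a profile, sketched below under invented assumptions (the task names, timings, and safety margin are illustrative, not from the project), is to run each task repeatedly under representative conditions at design time and record the observed range of execution times:

```python
# Illustrative sketch: derive an expected behavioral profile from
# design-time measurements of each task's execution time.

def build_profile(measurements, slack=1.10):
    """Map each task to a (min, max) execution-time range in ms.

    measurements: dict of task name -> list of observed times (ms).
    slack: multiplicative margin on the upper bound to reduce false alarms.
    """
    profile = {}
    for task, times in measurements.items():
        profile[task] = (min(times), max(times) * slack)
    return profile

# Hypothetical design-time measurement runs.
design_time_runs = {
    "sensor_read": [1.1, 1.3, 1.2, 1.4],
    "control_loop": [4.0, 4.4, 4.2],
}
profile = build_profile(design_time_runs)
# profile["control_loop"] is roughly (4.0, 4.84); runs outside that
# range would be treated as suspect at run time.
```

The slack margin is a judgment call: too tight and normal jitter triggers false alarms, too loose and a slow-running attack slips under the bound.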
As the technology becomes more refined, Mohan hopes to collaborate with automotive, aviation, or power system companies, or even mobile companies such as Qualcomm, to adapt the newly developed architecture to a variety of devices. Mohan anticipates that working with real systems will be the biggest challenge once the technology leaves the lab.
“A lot of systems are very proprietary and the technology is secret because they deal with critical infrastructures or have taken a lot of time to develop,” Mohan said. “In our labs, we can do simulations and work on smaller testbeds, but that’s still not representative of real-world configurations. We’ve gotten great positive feedback so far, so hopefully this will lead to many collaborations.”