The past few years have been a mixed bag for facial recognition. In 2017, the technology stepped into the global spotlight as Apple launched the iPhone X — its first smartphone to rely on face, rather than fingerprint, scanning for authentication.
But facial recognition has also courted controversy, with well-publicised studies revealing its limitations: the software can be fooled into misidentifying a person by face coverings such as masks, eyeglasses, “Dazzle” makeup, and customised QR-code-like stickers. In one alarming instance, researchers tweaked just a handful of pixels in images of turtles, successfully tricking the AI system into classifying the reptiles as guns.
All around the world, AI researchers followed these developments closely. Trevor E. Carlson, an assistant professor at NUS Computing, was one of them. He recalls: “I started thinking to myself: can we trick a popular AI algorithm into doing something that it didn’t expect to do?”
Carlson and his team — comprising PhD students Arash Pashrashid and Ali Hajiabadi — carefully considered a handful of AI systems to study. They eventually settled on PerSpectron, a state-of-the-art tool that employs machine learning to detect malicious attacks on computer processors.
“A lot of people assume that an AI can do a good job,” explains Carlson. “But there are applications that can trick the AI into thinking everything’s fine, and then all of a sudden, it gets really confused and the detector can no longer uncover problematic areas with high accuracy, and the system starts leaking data.”
The breached data represents a loss of privacy and, if it falls into the wrong hands, can prove extremely dangerous.
Sneaking in through the side
Such attacks are known as ‘side-channel attacks,’ so-called because hackers use channels that unintentionally leak information to steal data from hardware systems. This indirect means to an end is one frequently adopted in science — you can’t see wind, for instance, but you know it’s there from the rustling of leaves in the trees or the feel of it upon your face.
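The indirect nature of a side channel is easiest to see in a toy example. The sketch below is purely illustrative (it is not from the paper, and the function names are invented): a string comparison that stops at the first mismatch reveals, through how much work it does, how close a guess is to the secret.

```python
# Toy illustration of a side channel: an early-exit comparison "leaks"
# how many leading characters of a guess are correct, via the amount of
# work performed -- a stand-in for measurable execution time.
def guarded_compare(secret: str, guess: str) -> tuple[bool, int]:
    """Return (match, steps), where steps models elapsed time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

# The attacker never sees the secret, only the "timing" -- yet a closer
# guess measurably takes longer, letting the secret be recovered char by char.
_, t_bad = guarded_compare("hunter2", "xxxxxxx")   # wrong from the 1st char
_, t_near = guarded_compare("hunter2", "huntexx")  # first 5 chars correct
assert t_near > t_bad  # the side channel: timing reveals partial progress
```

Real attacks measure nanosecond-scale differences in cache or execution timing rather than a step counter, but the principle is the same: information escapes through a channel nobody designed for communication.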
Most high-performance CPUs, or central processing units, are vulnerable to side-channel attacks, says Carlson. That’s because they rely on a technique called ‘speculative execution’ to function quickly and efficiently, by predicting the commands they’re likely to receive and executing them ahead of time. “If they didn’t do this, our mobile phones and laptops would be significantly slower than they are today,” he says.
However, the drawback of such speculation is that it leaves behind a trail of data breadcrumbs. “The vast majority of things get cleaned up properly, but some do not,” explains Carlson. “And those traces that get left behind can be detected by attackers.”
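One way to picture those leftover breadcrumbs is a Spectre-style cache trace. The toy model below is our illustration in pure Python, not real hardware and not the team's code: the CPU mispredicts a bounds check, speculatively reads out of bounds, and although the result is rolled back, the cache line the load touched stays cached, where an attacker can find it.

```python
# Toy model of a speculative-execution trace (illustrative only):
# a mispredicted bounds check lets an out-of-bounds read run, and the
# cache line it touches survives the rollback.
SECRET = 42
cache = set()  # models which cache lines of a probe array are now "hot"

def speculative_victim(i, array_len=16):
    """Model the mispredicted path: the bounds check 'passes' speculatively."""
    value = SECRET if i >= array_len else i  # out-of-bounds read hits the secret
    cache.add(value)       # the load caches the line indexed by the value
    return None            # architectural result is discarded on rollback

speculative_victim(1000)   # out-of-bounds access under (modelled) speculation
# The attacker then times accesses to every probe line; the cached one
# responds fastest, revealing which value was used as the index.
recovered = next(iter(cache))
assert recovered == SECRET  # data leaked purely through cache state
```

On real hardware the attacker infers the hot line by timing memory accesses, which is exactly the kind of residual trace Carlson describes.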
It’s a catch-22: CPUs can’t do without speculative execution, but relying on it makes them susceptible to side-channel attacks. Indeed, the number of such attacks (theoretical for now, as researchers demonstrate them only on systems in controlled settings) has grown in recent years, with chips from IBM, Intel, and other manufacturers proving vulnerable to ominously named exploits such as Spectre, Meltdown, and Foreshadow.
A better AI detector
Although PerSpectron is a leading AI-based side-channel detector, Carlson’s team suspected it wasn’t infallible. To test their hunch, they spent the past year developing two types of speculative side-channel attacks designed to evade the system, which they call Expanded-Spectre and Benign-Program-Spectre.
When the team deployed these attacks against the PerSpectron detector, they discovered something astonishing: the algorithm’s accuracy in detecting an attack fell from 99% to 14% with Expanded-Spectre and 12% with Benign-Program-Spectre. In other words, only about one in every eight attacks was being successfully identified.
“There are significant limitations with this — these AI algorithms appear to be quite fragile,” says Carlson.
And so he and his team set about building a detector that could better guard against such side-channel attacks, without the need to use opaque AI algorithms that could be easily tricked. What they came up with is a system called Spectify, which they described in a new paper published in November.
Spectify works by monitoring changes to a CPU’s microarchitecture: the low-level organisation a processor uses to carry out instructions, which shapes both the performance and the security of a system.
“The whole idea is that we tried to break things down to the fundamental pieces,” says Carlson. “All this requires very low-level knowledge of how the system functions, instead of relying on AI.”
Monitoring the microarchitecture allows one to detect the precise location of a data leak, he explains, because tracking changes in a CPU’s microarchitecture means being “extremely detailed.” Carlson likens this to keeping meticulous track of the food in your fridge: knowing, for instance, that you have 15 eggs, each with its own distinctive marks or appearance, on the third shelf, sitting in a transparent 5x5 plastic container that is 23 centimetres from the left wall, 32.5 centimetres from the right, and 3 centimetres from the edge.
“You need to know exactly where your eggs are before you can detect if someone has moved them,” he explains. “Which egg is taken matters too.”
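The fridge analogy can be sketched as a snapshot-and-diff over fine-grained state. This is our conceptual illustration only, not Spectify’s actual implementation (the paper's mechanism is far more detailed): record every entry's exact position, then diff snapshots to pinpoint precisely which entry moved.

```python
# Conceptual sketch (illustrative, not Spectify's design): track state at
# fine granularity -- like knowing exactly which egg sits where -- and
# diff snapshots to localise exactly what changed.
def snapshot(state: dict) -> dict:
    return dict(state)  # record every "egg" and its exact position

def diff(before: dict, after: dict) -> dict:
    """Return exactly which entries changed, appeared, or vanished."""
    keys = before.keys() | after.keys()
    return {k: (before.get(k), after.get(k))
            for k in keys if before.get(k) != after.get(k)}

# Model cache lines as (set, way) -> contents; a stray load disturbs one.
fridge = {(3, 0): "milk", (3, 1): "egg#07", (5, 2): "butter"}
before = snapshot(fridge)
fridge[(3, 1)] = "egg#12"  # something swapped the egg in shelf 3, slot 1
assert diff(before, fridge) == {(3, 1): ("egg#07", "egg#12")}
```

Knowing not just that something changed, but exactly which entry and how, is what lets a detector localise a leak before an attacker can reconstruct the leaked data.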
His team designed Spectify in this manner because “it allows the system to determine the details about the data leaks before a potential attacker gets a chance to reconstruct the leaked data,” enabling appropriate defence measures to be deployed.
“By doing so, we can restore the system to high accuracy and effectively catch all of the problems that are happening,” he says.
When tested against the two attacks the team created, Spectify caught every one: zero false negatives, with only a negligible number of false positives.
Beyond this improved detection, the new technique also offers early warning at a low performance overhead, meaning the computer can keep running without significantly slowing down.
Carlson and his team are now exploring whether Spectify can be used to detect other types of side-channel attacks. He says: “At the end of the day, I hope our work can lead to a new class of energy-efficient systems that can continuously and efficiently detect potential side-channel issues early.”
Paper: Fast, Robust and Accurate Detection of Cache-based Spectre Attack Phases