
Are Autonomous Vehicles Safe from Cyber Attacks? (Part II of II)

Abhinav Raj, Writer
@uxconnections

As self-driving cars come to rely ever more heavily on artificial intelligence modules and machine learning algorithms, we must take cognizance of the cybersecurity risks these technologies carry.

In part I, we explored a process called ‘adversarial contamination’, through which cyber threat actors can disrupt the performance of machine learning algorithms by injecting malicious samples into their training data, impairing the algorithm’s ability to correctly identify objects.

This can be understood by looking into how machine learning models for image processing work. 

A machine learning algorithm identifying cars in road traffic. (Image: Xaltius)

ML algorithms are trained on what is called a ‘training set’: a collection of data the algorithm or ML model uses to learn to recognize objects and patterns. The data is to the ML algorithm what a teacher is to a student.

ML algorithms develop relationships between data points and store information that they later recall to extrapolate from and make sense of what is fed to them. The performance of the model is contingent upon the quality and the quantity of the training data it receives.
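
As an illustration, here is a minimal sketch, assuming Python with the scikit-learn library and a synthetic dataset; the classifier and the two class labels are hypothetical stand-ins for a real perception pipeline, not a description of any production system.

# A toy training loop: fit a simple classifier on a synthetic training set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled training set (e.g. 0 = speed breaker,
# 1 = pedestrian; the labels here are purely illustrative).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns relationships between data points from the training set...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and recalls them to classify inputs it has never seen before.
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2%}")

The score on held-out data is exactly what suffers when the training set is contaminated, as the next sketch shows.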

Adversarial contamination, or ‘poisoning’, works by feeding deceptive samples into an ML model’s training data, thereby hindering its ability to make sense of its surroundings.
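
To make that concrete, the sketch below continues the illustrative example above with a crude label-flipping attack, one simple variant of poisoning: most samples of one class are relabeled as the other before training, so the retrained model largely stops recognizing that class.

import numpy as np

# Imitate a label-flipping poisoning attack: relabel most samples of
# class 1 ("pedestrian" in our toy labeling) as class 0 ("speed breaker").
rng = np.random.default_rng(0)
positives = np.flatnonzero(y_train == 1)
flipped = rng.choice(positives, size=int(0.8 * len(positives)), replace=False)

y_poisoned = y_train.copy()
y_poisoned[flipped] = 0  # the contaminated labels

# Retrain on the poisoned set: the model now rarely recognizes class 1,
# so its accuracy on clean, unseen data drops.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print(f"Clean-data accuracy after poisoning: {poisoned_model.score(X_test, y_test):.2%}")

Real attacks are subtler than wholesale label flipping, but the mechanism is the same: corrupt the teacher, and the student learns the wrong lesson.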

An example of an adversarial input.

Adversarial contamination can have potentially catastrophic consequences when one of the algorithm’s use-cases is differentiating a pedestrian from a speed breaker in an autonomous vehicle.

The EU Agency for Cybersecurity (ENISA) has advised investigating the vulnerabilities introduced by the uptake of AI technology.

“Cybersecurity risks in autonomous driving vehicles can have a direct impact on the safety of passengers, pedestrians, other vehicles, and related infrastructures. It is therefore essential to investigate potential vulnerabilities introduced by the usage of AI,” stated the EU cybersecurity watchdog. 

It is evident that cybersecurity vigilance is the smartest way forward; preparedness, however, is an absolute necessity.

In a study by the University of Exeter and Sheffield Hallam University, researchers Matthew Channon and James Marson have called for an insurance framework to compensate the owners of vehicles compromised by cyber threat actors.

“It’s impossible to measure the risk of driverless vehicles being hacked, but it’s important to be prepared”, commented Dr Channon. 

“We suggest the introduction of an insurance-backed Maliciously Compromised Connected Vehicle Agreement to compensate low-cost hacks and a government-backed guarantee fund to compensate high-cost hacks.”

The study appears in the journal Computer Law & Security Review.

In an ever-changing, ever-evolving landscape of technology interspersed with artificial intelligence, we may arrive at a time when what’s interspersed becomes ever-present.

Is it a good time to start preparing, or can we afford to turn a blind AI?

Leave your thoughts in the comments. 
