
Are Autonomous Vehicles Safe from Cyber Attacks? (Part I of II)
As self-driving cars come to rely ever more heavily on artificial intelligence modules and machine learning algorithms, we must reckon with the cybersecurity risks those systems carry.
When our world and our cars are being driven by autonomous technology, road safety shouldn’t be an afterthought.
At least that’s what the European Union Agency for Cybersecurity (ENISA) says. In a February report published jointly with the Joint Research Centre, ENISA underscored the cybersecurity risks that policymakers, national authorities, regulatory bodies and artificial intelligence frontrunners must reckon with.
Rise of the Autonomous Automobile
To anyone who has been watching the automotive industry, the rise of automation is no surprise. Automation was set in motion long before Tesla’s stock went public. Ever since the advent of the automatic transmission, driving functions have been automated one at a time.

Levels of driving automation as categorised by the European Parliamentary Research Service (EPRS). (Image: EPRS, European Commission)
The European Parliamentary Research Service (EPRS) has categorized the levels of driving automation in road vehicles: from level zero, meaning no automation, through driver assistance and partial automation, up to full automation, which requires no human intervention at all.
The EPRS estimates that fully automated vehicles will arrive by 2030. The vehicles will autonomously perform all of the tasks expected of a driver today and require minimal to no human correction.
Today’s artificial intelligence modules and machine learning algorithms aim to mitigate the most common cause of road accidents: human error. But what happens when the adversary on the road is not human at all?
AIs on the Road

Waymo aims to put fully driverless cars on the road soon. (Image: Waymo)
Governments and regulatory bodies have voiced growing concern about the potential vulnerabilities of autonomous vehicles, a concern cybersecurity experts share.
Like anything that connects to the internet, the complex AI and ML algorithms that run an autonomous vehicle, giving it the ability to identify objects on the road, are susceptible to adversarial attacks from threat actors.
There are myriad ways a threat actor can target an autonomous vehicle.
Self-driving vehicles rely on numerous sensors that feed the machine the visual data it needs to make accurate decisions. Cameras and infrared devices act as the eyes and ears of an autonomous vehicle. Through targeted jamming, spoofing or saturation attacks, a threat actor can cripple this sensing infrastructure and endanger lives.
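To make the idea concrete, here is a toy Python sketch of a spoofing attack on a single position sensor, paired with a naive plausibility check that bounds the speed implied by successive readings. Everything in it, the noise level, the 30 m jump, the 25 m/s bound, is a hypothetical illustration, not a model of any real vehicle’s sensor stack.

```python
# Toy sketch: spoofing a position sensor, and a naive plausibility check.
# All values are hypothetical illustrations, not real AV parameters.
import numpy as np

rng = np.random.default_rng(0)

dt = 0.1                        # seconds between readings
true_speed = 15.0               # m/s, vehicle cruising at constant speed
t = np.arange(0.0, 10.0, dt)
true_position = true_speed * t

# Honest sensor: true position plus small measurement noise.
readings = true_position + rng.normal(0.0, 0.2, t.size)

# Spoofing attack: from t = 5 s the attacker abruptly shifts the
# reported position by 30 m, a "teleport" the vehicle never made.
readings += np.where(t >= 5.0, 30.0, 0.0)

# Naive plausibility check: flag any reading whose implied speed
# exceeds what the vehicle could physically do (generous 25 m/s bound).
implied_speed = np.diff(readings) / dt
flags = np.abs(implied_speed) > 25.0

print(f"readings flagged as implausible: {flags.sum()} of {flags.size}")
```

A sudden spoof like this trips the check, while a slow drift would stay under the threshold, which is one reason production systems cross-validate redundant sensors rather than trusting any single plausibility rule.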
In one widely reported case of adversarial spoofing, researchers tricked the Mobileye EyeQ3 camera of a Tesla on the road into accelerating 50 miles per hour past the speed limit, simply by extending the middle stroke of the ‘3’ on a speed limit sign so the system read ‘85’ instead of the intended 35.

An example of adversarial spoofing: tampering with a speed limit sign. (Image: MIT Technology Review)
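That tampered sign is the physical cousin of the digital adversarial example, in which tiny, deliberately chosen pixel changes flip a classifier’s output. Below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) in PyTorch; the toy network, class indices and epsilon are hypothetical placeholders and have nothing to do with the actual Mobileye attack.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that most
# increases the loss, bounded by epsilon so the change stays subtle.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier mapping a 3x32x32 "sign image" to 10 classes.
# Hypothetical placeholder only; this is not the Mobileye EyeQ3 model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # toy input image
label = torch.tensor([3])  # pretend class 3 means "35 mph"

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM step: move along the sign of the input gradient.
epsilon = 0.05  # perturbation budget; small enough to look unchanged
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("clean prediction:      ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Against this untrained toy model the predicted label may or may not flip, but against a trained classifier, perturbations this small routinely do, even though the altered image looks unchanged to a human eye.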
In the next feature, we dive into the mechanics of adversarial contamination, the countermeasures recommended by the EU cybersecurity watchdog, and the pressing need for proactive policymaking fit for this changing paradigm.