
FYI: You could make Tesla's Autopilot swerve into traffic with a few stickers on the road

His Muskiness praises Tencent's car-hacking boffins for the warning, fixes bugs

Video The Autopilot feature in Tesla Model S vehicles could be fooled into swerving across lanes into oncoming traffic by mere stickers on the road, researchers discovered. The eggheads also found a way to take control of the flash motor's steering with a wireless gamepad.

Tencent's Keen Security Lab documented this week how it successfully infiltrated the Autopilot system's engine control unit (ECU) to take remote control of the car. The Chinese boffins also described how they fed the on-board cameras dodgy inputs, such as stickers on the road surface, to force the vehicle to swerve across lanes or turn on its windscreen wipers.

The flaws were uncovered from 2017 to 2018, and reported to the automaker, which has since seemingly patched the security bugs. An in-depth paper describing the attacks was published at the end of last month, and Tesla CEO Elon Musk praised those behind the discoveries.

Here’s a video demonstrating how to exploit the uncovered flaws, with English subtitles:

YouTube Video

Sticky crash

By far the most worrying finding was the researchers' ability to make a Tesla Model S 75 swerve across lanes, potentially into the path of oncoming vehicles, when in Autopilot mode, just by laying down a few stickers in front of it.

Tesla cars employ a deep neural network to analyse real-time sensor data and camera images of the road ahead. From these inputs, the network identifies obstacles and lane markings, allowing the software to build a virtual map of its environment. That map is then used to direct Autopilot’s autosteer function so that the Tesla stays in lane. Autopilot is a super cruise-control system, for what it's worth, rather than a full-blown self-driving car brain.
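To picture that loop, here's a minimal sketch in Python. Every name and number in it (the offset heuristic, the steering gain, the fake camera frame) is invented for illustration, standing in for Tesla's actual network and controller rather than reproducing them:

```python
# Toy sketch of a lane-keeping loop: camera frame in, steering correction out.
# Everything here (the offset heuristic, the gain) is a placeholder for
# Tesla's real neural network and controller, not a reproduction of them.
import numpy as np

def detect_lane_offset(frame: np.ndarray) -> float:
    """Stand-in for the deep neural network: estimate how far the car sits
    from the lane centre. Faked here with a trivial brightness heuristic."""
    mid = frame.shape[1] // 2
    left, right = frame[:, :mid], frame[:, mid:]
    return float(right.mean() - left.mean())   # not a real NN, just a placeholder

def steering_correction(offset: float, gain: float = 0.5) -> float:
    """Proportional autosteer: turn back towards the lane centre."""
    return -gain * offset

frame = np.random.rand(480, 640)               # pretend camera frame
print(f"correction: {steering_correction(detect_lane_offset(frame)):+.4f} rad")
```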

The researchers manipulated the car's camera image feed by placing three small squares on the ground, thus strategically changing a few pixels around the area where lanes are marked out. Alternatively, malware present in the system could modify the pixels directly as they streamed in from the camera.

These adversarial inputs subsequently tricked the neural network into thinking it was in the wrong lane, or drifting into another lane, forcing it to swerve across the roadway to correct itself, potentially resulting in a deadly crash. In one attack, the car failed to identify certain lane markings, as a result of the stickers on the road, and didn’t steer in the appropriate direction to stay in lane. In another scenario, the car was made to see phantom lanes, causing it to change direction for no good reason.
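The underlying trick is the classic adversarial-example attack on neural networks. Below is a generic, textbook-style sketch of a targeted fast gradient sign method (FGSM) step against a throwaway PyTorch model; it is not Keen Lab's code, and the toy network, labels, and perturbation budget are placeholders. The takeaway is that the gradient tells an attacker exactly which pixels to nudge, and by how little, to drag the network toward the answer they want:

```python
# Generic targeted-FGSM sketch (not Keen Lab's code): a small, carefully chosen
# pixel perturbation pushes a classifier toward the class the attacker wants.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))  # toy "lane / no lane" net
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in camera patch
target = torch.tensor([1])                             # class the attacker wants

loss = nn.functional.cross_entropy(model(image), target)
loss.backward()                                        # gradient of loss w.r.t. the pixels

# Step *against* the gradient so the model grows more confident in `target`;
# epsilon caps how much any single pixel may change.
epsilon = 0.05
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

# With a trained model, even a small epsilon is typically enough to flip the output.
print("before:", model(image).argmax().item(),
      "after:", model(adversarial).argmax().item())
```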

This attack posed the frightening possibility of nefarious miscreants making roads unsafe for Model S cars on Autopilot. Accident black spots for Teslas could have been created overnight, and the tiny stickers would have been very difficult for smash investigators to spot.

Ultimate Mario Kart

The team also found that – after gaining root access on the engine control unit (ECU), by getting the in-car dashboard or entertainment system's WebKit-based browser to open a malicious webpage, and then exploiting the underlying Android kernel to pivot through the internal computer network – they could control the vehicle's steering over the air with a Bluetooth game controller.

That gamepad would connect wirelessly to an attacker's laptop, tablet, or some other mobile device, which turned the pad's key presses into commands sent to the compromised Autopilot system via 3G or Wi-Fi. Essentially, it appears, you get the car's built-in browser to open a booby-trapped webpage somehow, gaining remote code execution and escalated privileges to then open a network connection over Wi-Fi or 3G between the ECU and your laptop or tablet, which then feeds the game controller key presses to the steering system. Voila, you're remotely controlling someone's motor.
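The relay side of that setup is the mundane bit. Here's a rough sketch of what an attacker's forwarding loop might look like, reading the pad with pygame and shipping values over UDP; the ECU address, port, and JSON wire format are invented for illustration, and the actual exploit chain and in-car command injection are deliberately not reproduced:

```python
# Rough sketch of the attacker-side relay: read a Bluetooth gamepad locally and
# forward steering values to the compromised Autopilot ECU over the network.
# The address, port, and wire format below are made up for illustration only.
import json
import socket
import time

import pygame  # handles Bluetooth/USB gamepads via the operating system

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)
pad.init()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ECU = ("192.168.90.100", 20100)        # hypothetical address of the pwned ECU

while True:
    pygame.event.pump()                # refresh controller state
    steer = pad.get_axis(0)            # left stick X axis, -1.0 .. 1.0
    sock.sendto(json.dumps({"steer": steer}).encode(), ECU)
    time.sleep(0.02)                   # roughly 50 commands per second
```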

The report noted that this was a complex hack, and was dependent on maintaining constant communication with the car's on-board systems; the demonstration showed an attacker in the front seat controlling the car with a game controller, to give you an idea of how practical this was.

Neural networks might be flashy and cool, but they're terribly insecure

Tesla also uses a convolutional neural network to process images taken by its in-car camera to control the windscreen wipers.

A fisheye camera monitors moisture collecting on the car’s windscreen. Splashes of water blur the camera's view, and when that blurriness passes a threshold level, the neural network sends out a command to kick the windscreen wipers into action.
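That decision is easy to picture in code. The snippet below fakes it with a Laplacian-variance sharpness score and a hard-coded threshold; Tesla's real autowiper logic is a trained convolutional network, not this hand-rolled heuristic, so treat it purely as an illustration of the threshold step:

```python
# Toy illustration of the wiper decision: score how blurry the fisheye image is
# and fire the wipers above a threshold. Tesla's actual system uses a trained
# convolutional neural network; this heuristic just stands in for it.
import cv2
import numpy as np

WIPER_THRESHOLD = 50.0  # arbitrary sharpness cut-off for this sketch

def wipers_should_run(frame: np.ndarray) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance = blurry/wet
    return sharpness < WIPER_THRESHOLD

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in camera frame
print("wipers on" if wipers_should_run(frame) else "wipers off")
```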

The researchers tricked the car into turning on the windscreen wipers when there wasn’t any moisture on the windscreen. They used malware running on the internal computer network to manipulate the fisheye camera feed, confusing the neural network into thinking the windscreen was covered with water. “We uploaded the adversarial image to the APE [Autopilot ECU] with the autowipers function on, and the wipers started working very fast,” they said.

Another way to perform this attack is to place an image in front of the cameras themselves: a TV screen showing rain-like blobs was placed in front of the sensor to fool it. That's hardly a practical attack, of course.

Should we be scared?

“Although machine learning represents the future of technology, from the consumer’s point of view, we hope it could have better stability,” the paper stated. “We hope that the potential product defects exposed by these tests can be paid attention to by the manufacturers, and improve the stability and reliability of their consumer-facing automotive products.”


Tesla said it fixed the ECU vulnerabilities, particularly the gamepad-related one, with system updates in 2017 and 2018. It's not clear if the automaker fully fixed the lane and wiper computer-vision flaws, as it considers them an unrealistic attack method. We've asked for clarification.

“We developed our bug bounty program in 2014 in order to engage with the most talented members of the security research community, with the goal of soliciting this exact type of feedback," a spokesperson told The Register.

"While we always appreciate this group’s work, the primary vulnerability [the gamepad hack] addressed in this report was fixed by Tesla through a robust security update in 2017, followed by another comprehensive security update in 2018, both of which we released before this group reported this research to us.

"The rest of the findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or Autopilot system behave differently, which is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so, and can manually operate the windshield wiper settings at all times." ®
