Tencent Keen Security Lab Tricks Tesla Autopilot into Oncoming Traffic

Tesla Autopilot recognizes lane markings on the road and uses them to assist steering. Tencent Keen Security Lab demonstrated that small interference stickers placed on the road surface can cause the Autopilot system to misjudge the lane and steer the vehicle into the oncoming lane. Tesla responded that “a driver can easily override Autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times,” adding that the findings did not represent a real-world problem and that no drivers had encountered the issues identified in the report. At the Black Hat USA 2018 security conference, Keen Lab presented the first ever demonstration of remotely compromising the Autopilot system on a Tesla Model S.

Control Steering System with a Gamepad

After hacking the Autopilot system on the Tesla Model S (ver. 2018.6.1), Keen Lab further proved that they could control the steering system through the Autopilot system with a wireless gamepad, even when Autopilot had not been activated by the driver.

SOURCES: Tencent Keen Security Lab, Tesla

Written by Brian Wang, Nextbigfuture.com

15 thoughts on “Tencent Keen Security Lab Tricks Tesla Autopilot into Oncoming Traffic”

  1. You are correct that I did not read the linked story. However, I disagree that the detail that the problem was found, fixed, and reintroduced by accident invalidates my overall point.

    That the problem was reintroduced illustrates that Tesla had a problem managing their software updates. That, too, will get fixed, and is only a temporary blip in the overall continuous improvement of the cars’ self driving capabilities.

  2. What? This pisses over the Trump Fluffery much more than I ever do.

    How did this end up being published by NBF?

  3. As a society, we must accept that the initial versions of self-driving cars will not be perfect. It is reasonable to require that they be at least as good at driving as the typical, average human driver before they are allowed on the road. It also is reasonable to require that bad decisions made by self-driving cars that lead to injury or death (and perhaps to any property damage) be analyzed, the software fixed so as to react correctly in those situations, and the updated software distributed to all the cars of that manufacturer.

  4. I suppose the disagreement depends on what “it” means in this context.

    My point is that the manufacturers of the self-driving cars presumably have a way to correct the software when it is found to have made a wrong decision. (If not, they have no business selling such a product.) Once the manufacturer corrects the problem, the software in all the cars will be updated, and that exact mistake will never be made again.

    There might be some similar circumstances in which a similar mistake might still occur, but that is not, to my mind, the same mistake. When that similar mistake occurs, it will be corrected, as above, and thereafter it also will never be made again.

    If the software that is driving cars cannot be corrected in this way, the entire approach is wrong, and the regulators must step in and force a more reliable approach to be adopted.

    The regulators also should require that enough information about the circumstances that led any self-driving car to make a bad decision that resulted in injury or death be shared among all self-driving car manufacturers so that all manufacturers have the opportunity to correct their own cars’ software to properly react to the situation, if their cars do not already handle that situation correctly.

    (comment size limit — continued on the next rock)

  5. “it never happens again”

    Sorry, friend, you must not be a software developer, because having the same issue happen multiple times is extremely common. Also, we’re dealing with machine learning, which can be unpredictable in its results.

  6. You seem to overlook a VERY important difference: when a defect is found in the software driving a car and that defect is fixed, it never happens again, in any of the cars.

    You can’t really fix the defects in humans driving cars, and even if you say a human’s experience fixes him, that only fixes that one human, not all of the humans. To my mind, the choice is clear: once self-driving cars reach the level where their skill is comparable to an average human driver and can be deployed widely, their competence will rapidly improve until they all are better than any human driver.

  7. But humans can take control if they want to override the system. And who is most self-confident and convinced they can drive better than any “stupid computer!”? Why, the person who just finished a bottle of tequila mixed with Red Bull.

  8. There are also some videos appearing on the web of the latest Tesla software upgrade steering the car into freeway crash barriers.
    https://www.techspot.com/news/79331-tesla-autopilot-steering-towards-lane-dividers-again.html

    Remember, just because your Tesla is working OK now doesn’t mean there won’t be a software “upgrade” overnight, and a new bug appears tomorrow while you are driving the same road under the same conditions you have grown completely used to.

    Having controls and behaviour change on you is just about the scariest thing I can imagine for a car.

    (I might be fairly negative at the moment. Currently got an injured leg and broken rib from a crash or three on the weekend.)
