Control Steering System with a Gamepad
After hacking the Autopilot system on the Tesla Model S (software version 2018.6.1), Keen Lab further proved that it could control the steering system through the Autopilot system with a wireless gamepad, even when Autopilot had not been activated by the driver.
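In broad strokes, the demonstration amounts to a loop that reads gamepad input and converts it into steering commands. The sketch below shows only that shape and is not Keen Lab’s code: the gamepad side uses pygame’s real joystick API, while send_steering_command() is a hypothetical stub standing in for the injection path through the compromised Autopilot ECU, which the report summarized here does not detail.

```python
# Rough sketch of the control loop such an attack implies.
# The gamepad reading uses pygame's real joystick API; the
# send_steering_command() call is a hypothetical stand-in for
# the actual injection path through the compromised Autopilot
# ECU, which is not public in this article.
import time
import pygame

MAX_STEERING_ANGLE_DEG = 30.0  # illustrative limit, not Tesla's


def send_steering_command(angle_deg: float) -> None:
    """Hypothetical stub: in the real attack this would be a
    crafted message to the Autopilot ECU's steering interface."""
    print(f"steering command: {angle_deg:+.1f} deg")


def main() -> None:
    pygame.init()
    pygame.joystick.init()
    pad = pygame.joystick.Joystick(0)  # first connected gamepad
    while True:
        pygame.event.pump()                        # refresh input state
        x = pad.get_axis(0)                        # left stick, -1.0 .. 1.0
        send_steering_command(x * MAX_STEERING_ANGLE_DEG)
        time.sleep(0.05)                           # ~20 Hz command rate


if __name__ == "__main__":
    main()
```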
SOURCES: Tencent Keen Labs, Tesla
Written By Brian Wang, Nextbigfuture.com
You are correct that I did not read the linked story. However, the fact that the problem was found, fixed, and accidentally reintroduced does not, to my mind, invalidate my overall point.
That the problem was reintroduced illustrates that Tesla had a problem managing their software updates. That, too, will get fixed, and is only a temporary blip in the overall continuous improvement of the cars’ self-driving capabilities.
Thank you.
You didn’t read the linked story.
https://www.techspot.com/news/79331-tesla-autopilot-steering-towards-lane-dividers-again.html
This was a software fault that was detected in early 2018: people complained, it was fixed, and then this recent software update reintroduced the fault.
The fault was found, solved, and then came back.
So no, software doesn’t fix issues once and for all.
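For what it’s worth, the standard guard against exactly this failure mode is a regression test pinned to the original fault, so that any later update which reintroduces it fails before shipping. A minimal sketch in Python; the steering function and the gore-point scenario are hypothetical stand-ins, not anything from Tesla’s codebase:

```python
# Minimal regression-test sketch (pytest style). steer_decision()
# and the lane-divider scenario are hypothetical stand-ins for
# whatever the manufacturer's internal test harness exercises.


def steer_decision(left_line: float, right_line: float) -> float:
    """Toy lane-keeping: steer toward the midpoint of the lane
    markings (positive = steer right, in lane-width units)."""
    return (left_line + right_line) / 2.0


def test_gore_point_does_not_pull_into_barrier():
    # Regression pin for the 2018 fault: where a lane divider
    # splits the lane, the car must not steer toward the barrier
    # between the diverging markings.
    left_line, right_line = -0.5, 0.5  # markings diverging at a gore point
    assert abs(steer_decision(left_line, right_line)) < 0.1
```

The point is the mechanism: once the 2018 fix landed, a test like this in the release pipeline would have flagged the later update that brought the fault back.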
What? This pisses over the Trump Fluffery much more than I ever do.
How did this end up being published by NBF?
As a society, we must accept that the initial versions of self-driving cars will not be perfect. It is reasonable to require that they be at least as good at driving as the typical, average human driver before they are allowed on the road. It is also reasonable to require that bad decisions made by self-driving cars that lead to injury or death (and perhaps to any property damage) be analyzed, that the software be fixed so as to react correctly in those situations, and that the updated software be distributed to all the cars of that manufacturer.
I suppose the disagreement depends on what “it” means in this context.
My point is that the manufacturers of the self-driving cars presumably have a way to correct the software when it is found to have made a wrong decision. (If not, they have no business selling such a product.) Once the manufacturer corrects the problem, the software in all the cars will be updated, and that exact mistake will never be made again.
There might be some similar circumstances in which a similar mistake might still occur, but that is not, to my mind, the same mistake. When that similar mistake occurs, it will be corrected, as above, and thereafter it also will never be made again.
If the software that is driving cars cannot be corrected in this way, the entire approach is wrong, and the regulators must step in and force a more reliable approach to be adopted.
The regulators also should require that, whenever a bad decision by any self-driving car results in injury or death, enough information about the circumstances be shared among all self-driving car manufacturers, so that every manufacturer has the opportunity to correct its own cars’ software to react properly to that situation, if its cars do not already handle it correctly.
(comment size limit — continued on the next rock)
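To make the data-sharing proposal above concrete, here is a hypothetical sketch of the kind of record regulators could require manufacturers to exchange; every field name here is an assumption, not any real reporting standard:

```python
# Hypothetical incident record for cross-manufacturer sharing.
# All fields are illustrative assumptions, not a real standard.
from dataclasses import dataclass, field


@dataclass
class IncidentRecord:
    manufacturer: str
    software_version: str
    road_geometry: str       # e.g. "gore point, diverging lane markings"
    sensor_conditions: str   # e.g. "low sun, faded paint"
    bad_decision: str        # what the planner actually chose to do
    expected_decision: str   # what it should have done
    severity: str            # "property damage" | "injury" | "fatality"
    scenario_replay: dict = field(default_factory=dict)  # data to reproduce it


record = IncidentRecord(
    manufacturer="ExampleMotors",          # made-up name
    software_version="2019.5.15",
    road_geometry="gore point, diverging lane markings",
    sensor_conditions="faded left marking",
    bad_decision="tracked left marking toward barrier",
    expected_decision="hold lane center past the split",
    severity="property damage",
)
```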
I hope you will feel better and get well soon.
Musk should send Keen Labs a thank you note.
I can’t believe it. Something we agree on.
“it never happens again”
Sorry friend, you must not be a software developer, because having the same issue happen multiple times is extremely common. Also, we’re dealing with machine learning, which can be unpredictable in its results.
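To illustrate that unpredictability: two models trained on exactly the same data, differing only in their random seed, can report different answers on a borderline input. A toy sketch with made-up data, where training is deliberately cut short so the random initialization shows through, much as real pipelines vary with data ordering, augmentation, and hardware nondeterminism:

```python
# Toy illustration: identical data, identical training code, and the
# only difference between the two runs is the random seed. The data
# and model are made up for illustration.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train(X, y, seed, steps=30, lr=0.1):
    """Briefly-trained logistic regression; with training cut short,
    the random initialization still shows through in the result."""
    w = np.random.default_rng(seed).normal(size=X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)  # gradient step
    return w


# Two overlapping point clouds: genuinely ambiguous near the middle.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-0.3, 1.0, (50, 2)),
               rng.normal(+0.3, 1.0, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)

borderline = np.array([0.5, -0.5])
for seed in (1, 2):
    p = sigmoid(borderline @ train(X, y, seed))
    print(f"seed {seed}: P(class 1) = {p:.2f}")  # typically two different answers
```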
You seem to overlook a VERY important difference: when a defect is found in the software driving a car and that defect is fixed, it never happens again, in any of the cars.
You can’t really fix the defects in humans driving cars, and even if you say a human’s experience fixes him, that only fixes that one human, not all of the humans. To my mind, the choice is clear: once self-driving cars get to the level that their skill is comparable to an average human driver’s and so can be deployed widely, their competence will rapidly improve until they all are better than any human driver.
As opposed to faulty humans programming the cars.
Musk needs to recruit some black hats, it seems.
Wouldn’t be surprised if he does.
But humans can take control if they want to override the system. And who is most self-confident and most convinced they can drive better than any “stupid computer”? Why, the person who just finished a bottle of tequila mixed with Red Bull.
There are also some videos appearing on the web of the latest Tesla software upgrade steering the car into freeway crash barriers.
https://www.techspot.com/news/79331-tesla-autopilot-steering-towards-lane-dividers-again.html
Remember: just because your Tesla is working OK now doesn’t mean there won’t be a software “upgrade” overnight, with a new bug appearing tomorrow while you drive the same road, under the same conditions, that you have grown completely used to.
Having controls and behaviour change on you is just about the scariest thing I can imagine for a car.
(I might be fairly negative at the moment. I currently have an injured leg and a broken rib from a crash or three over the weekend.)