Any Required Human User Input is an Error for the Tesla Self-Driving Autopilot

Elon Musk views any human intervention as an error condition for Tesla Autopilot. He means that whenever a human has to take control from the Autopilot system, it indicates an error that must be fixed before a future fully autonomous car is possible.

Currently, Autopilot does not recognize stoplights. A regular Autopilot user still does a fair bit of adjusting of the maximum speed and the following gap while commuting. The system can mostly handle on-ramp to off-ramp highway commuting.

It can still take a fair bit of work to monitor Autopilot through the lane changes needed to reach the HOV lane.

SOURCES: YouTube, Elon Musk
Written By Alvin Wang, Nextbigfuture.com

6 thoughts on “Any Required Human User Input is an Error for the Tesla Self-Driving Autopilot”

  1. In principle there isn’t any reason you couldn’t roll out an update where the lights transmit their status, planned transitions, and exact GPS coordinates on some designated frequency. Just include it in the new light specs, and implement it as lights get replaced. (A rough sketch of what such a broadcast might carry appears after the comments.)

    You’d have to be very careful that the system would be resistant to spoofing, though, because you KNOW that as soon as self-driving cars are relying on externally delivered information, creeps are going to try causing accidents for fun.

    The positive thing about doing it all with vision systems is that it would be backwards compatible.

  2. I will take that further: I wouldn’t be surprised if we need to change the way we build roads and signs in order to have autonomy. I wouldn’t be surprised if, in order to have autonomy, ALL cars will need to be autonomous.

  3. I wonder if it will always be the case that it would be easier to just retrofit all stoplights with EM signals that an autonomous car can read reliably, instead of lights that are ambiguous to robot vision. Maybe ditto for road lines; embed them with sensors that cars can read instead of relying on robot vision to “see” them.
    Of course, this would be hugely expensive and invasive, but what if there is a limit to the low-hanging fruit of robot vision, and after that the cost scales exponentially?

  4. Kind of odd that they rank so low when they have the most advanced capabilities in cars sold to consumers. Put your money where your mouth is and short that stock 🙂

  5. Recognizing stop lights in a known set is easy. Things get muddy when you have several red lights with ambiguous positions. If the whole area is pre-mapped then it is so much easier. But still, if it cannot reliably recognize a red light then there is no “auto” in the pilot… (A rough sketch of matching detections against a pre-mapped set follows the comments.)
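
The broadcast idea in comments 1 and 3 can be made a little more concrete. The sketch below is purely illustrative: the message fields, the shared key, and the helper names are assumptions, not any real vehicle-to-infrastructure standard or Tesla interface. It just shows a light publishing its status, planned transition, and GPS position, with a signature so a receiver can discard spoofed or stale packets.

```python
# Hypothetical sketch only; not a real V2X standard or any vendor's API.
# A traffic light periodically publishes its status, its planned transition,
# and its GPS position, signed so receivers can reject forged messages.
import hashlib
import hmac
import json
import time
from dataclasses import asdict, dataclass, field

# Placeholder key; a real deployment would not rely on a single shared secret.
SHARED_KEY = b"example-key-distributed-to-certified-receivers"


@dataclass
class LightBroadcast:
    intersection_id: str       # hypothetical identifier for the light
    lat: float                 # exact GPS coordinates of the light
    lon: float
    status: str                # "red", "yellow", or "green"
    next_status: str           # planned transition
    seconds_to_change: float   # when the transition is expected
    timestamp: float = field(default_factory=time.time)

    def signed_payload(self) -> dict:
        """Serialize the message and attach an HMAC over the body."""
        body = json.dumps(asdict(self), sort_keys=True)
        sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "signature": sig}


def verify(payload: dict, max_age_s: float = 5.0) -> bool:
    """Receiver side: check the signature and reject stale (possibly replayed) packets."""
    expected = hmac.new(SHARED_KEY, payload["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, payload["signature"]):
        return False
    msg = json.loads(payload["body"])
    return (time.time() - msg["timestamp"]) <= max_age_s


if __name__ == "__main__":
    packet = LightBroadcast("main-and-5th", 37.3948, -122.1503,
                            "red", "green", 12.0).signed_payload()
    print(verify(packet))  # True for the genuine packet; a tampered body fails
```

A shared symmetric key is only a stand-in for the spoofing concern in comment 1; a real rollout would more likely use public-key signatures so that receivers can verify broadcasts without being able to forge them.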
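
Comment 5’s pre-mapping point can be sketched the same way. Everything below is an assumption made for illustration (the coordinates, the 15 m tolerance, the helper names): a vision detection of a red light becomes far less ambiguous when it can be snapped to a known light in a pre-built map, while anything that does not match is treated as ambiguous.

```python
# Hypothetical sketch only: snapping a detected red light to a pre-mapped set.
import math

# Pre-mapped stoplights: (latitude, longitude) -> label. Coordinates are made up.
LIGHT_MAP = {
    (37.39480, -122.15030): "Main St & 5th Ave, northbound",
    (37.39510, -122.15100): "Main St & 6th Ave, northbound",
}


def approx_distance_m(a, b):
    """Rough equirectangular distance in meters; adequate at city-block scales."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6_371_000 * math.hypot(x, y)


def match_detection(detected_pos, tolerance_m=15.0):
    """Return the mapped light nearest the detection, or None if nothing is close enough."""
    best_label, best_dist = None, float("inf")
    for mapped_pos, label in LIGHT_MAP.items():
        d = approx_distance_m(detected_pos, mapped_pos)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= tolerance_m else None


if __name__ == "__main__":
    print(match_detection((37.39482, -122.15033)))  # matches the 5th Ave light
    print(match_detection((37.40000, -122.16000)))  # None: no mapped light nearby
```

The design choice the comment points at is exactly this fallback: a pre-mapped area lets the system say “this red light is that known light,” while an unmapped or ambiguous detection has to be handled conservatively.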
