Elon Musk Tweets that Feature-Complete Full Self Driving Will Be Released Soon

Trent Eady writes on self-driving and artificial intelligence on Seeking Alpha.

Trent asked Elon Musk when Tesla's feature-complete full self-driving would be released. Elon Musk replied that it would be released soon.

Trent reported that Tesla is working on a dedicated computer, Dojo, for training neural networks using self-supervised learning. Tesla will use a technique called active learning to automatically curate only the most useful video clips for self-supervised learning from its fleet of roughly 750,000 camera-equipped, Internet-connected cars.
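
A minimal sketch of what such active-learning clip curation could look like, assuming a model that can score its own uncertainty on each clip; the `Clip` class, its fields, and `select_clips_for_training` are hypothetical illustrations, not Tesla's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    uncertainty: float  # e.g., disagreement between model outputs, 0..1

def select_clips_for_training(clips, budget):
    """Keep only the clips the current model is least confident about."""
    ranked = sorted(clips, key=lambda c: c.uncertainty, reverse=True)
    return ranked[:budget]

# Usage: upload only the highest-uncertainty clips rather than all footage.
selected = select_clips_for_training(
    [Clip("a", 0.92), Clip("b", 0.11), Clip("c", 0.67)], budget=2
)
print([c.clip_id for c in selected])  # ['a', 'c']
```

The point of curation along these lines is that the fleet filters for rare or confusing situations at the edge, so only a small fraction of the raw video ever needs to be uploaded or used for training.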

Tesla has about 750,000 cars, each with eight surround cameras, that likely drive about an hour per day on average. That works out to roughly 20 million hours of 360-degree video per month across the whole fleet, or about 170 million hours of single-camera video per month when counting all eight cameras on every vehicle. Self-supervised learning can pull useful cases from that video.
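
As a back-of-envelope check of those figures, assuming one driving hour per car per day and a 30-day month (the article's 20 million and 170 million round these estimates down slightly):

```python
# Rough fleet-video arithmetic; the inputs are assumptions, not Tesla data.
cars = 750_000
hours_per_car_per_day = 1
days_per_month = 30
cameras_per_car = 8

fleet_hours = cars * hours_per_car_per_day * days_per_month    # 22.5 million
per_camera_hours = fleet_hours * cameras_per_car               # 180 million

print(f"~{fleet_hours / 1e6:.1f} million hours of 360-degree video per month")
print(f"~{per_camera_hours / 1e6:.0f} million single-camera hours per month")
```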

Baidu published a study indicating that deep learning improvements are predictable based on the amount of data that is leveraged: model error falls along a power-law curve as the training set grows. Applying the Baidu research, if Tesla is able to collect 1,000x as much training data as its competitors, then its neural network performance could end up being roughly 10x better in those areas where it has 1,000x the data.
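
The arithmetic behind that claim is a power law: if error falls as data^(-beta), then "1,000x the data gives 10x better" implies beta ≈ 1/3. A quick check under that assumption (the Baidu work reports empirically measured exponents that vary by task):

```python
# Power-law scaling check; beta = 1/3 is the exponent implied by the claim
# above, not a value measured by Baidu for Tesla's tasks.
data_advantage = 1_000
beta = 1 / 3
error_reduction = data_advantage ** beta
print(f"Implied improvement: ~{error_reduction:.0f}x")  # ~10x
```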

12 thoughts on “Elon Musk Tweets that Feature-Complete Full Self Driving Will Be Released Soon”

  1. I don’t know if this is it or not but, sooner or later, there will be a good enough system with enough data to show that it is far safer in most situations and has a far lower accident rate than humans.

    Again, don’t know when that will be, but the day will come. It is no longer a matter of if.

    My grandfather still drove to Florida every year (across four states) when he was 101. His license was good to 103, but he cashed in at 102 (and not in a car accident). All the same, self-driving vehicles, accompanied by a demonstrably superior safety record, will make me feel safer. Especially if I’m the one that’s driving to Florida when I’m 101 (and assuming Florida has not yet been submerged by rising seas).

  2. That sounds like that ridiculous Uber SDC crash, where it turns out the software was still using a “rough draft: do not allow to actually control vehicle” level series of hacks.
    1. There were too many false positives, so they had a one second delay between detecting a hazard and acting on it. (That, right there, is a “do not allow to actually control a vehicle” level hack.)
    2. It was programmed so that humans only walk across roads at crosswalks. Another rough draft hack that must have been intended for later revision and got forgotten about.
    3. The human WAS crossing at a crosswalk, but the crosswalk, while physically on the road, wasn’t programmed into the maps.
    4. The human was pushing a bike, which meant the recognition software got confused. It’s a bike? No, a human? (Can’t be a human, not crossing at a programmed crosswalk.) Is it a vehicle? No, a bike? No, it’s unknown? No, a bike…
    5. (And this is the kicker) Each time the software changed its guess as to what it was looking at, it restarted the 1 second timer.
    6. Finally it hit the pedestrian at full speed.

    Now the real issue is program management. I’ve worked with robots and there is a whole procedure about full code reviews and multiple people going over algorithms to make sure all the basic kludges (that everyone starts with) are fixed, finalised, and actually implemented in full long before a robot that can kill someone, or even hurt their finger, is EVER let out into even closed room testing.
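
    A minimal sketch of the timer-reset behaviour described in points 1 and 5 above, written as an illustration of the reported failure mode rather than Uber's actual code: if the action-delay timer restarts on every reclassification, an object whose label keeps flickering never triggers braking.

    ```python
    def should_brake(labels_per_tick, delay_ticks=10):
        """Reported (buggy) logic: act only after delay_ticks of a stable label."""
        stable_ticks = 0
        for i, label in enumerate(labels_per_tick):
            if i > 0 and label != labels_per_tick[i - 1]:
                stable_ticks = 0   # the reported bug: reclassification resets the timer
            else:
                stable_ticks += 1
            if stable_ticks >= delay_ticks:
                return True
        return False

    flickering = ["bike", "unknown", "vehicle", "bike", "unknown"] * 6
    print(should_brake(flickering))             # False: never brakes
    print(should_brake(["pedestrian"] * 15))    # True: stable classification
    ```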

  3. This is the third instance of this happening that I am aware of (a Tesla ramming an emergency vehicle on the left-hand side of the road).

    Not to belittle the whole “people kinda suck too” line, but it is worth pointing out that collision-avoidance radar should prevent this crash from happening. So the question is this: we have had multiple instances of a wholly obvious mode of failure over several years, and yet it keeps happening.

    To me, as someone who writes software for a living, a single bug like this is pure gold: it is an opportunity to address a serious shortcoming in the program/AI training. I’d be all over fixing this. Why is this still a problem?

  4. The nice thing about software is that, once you identify a case where it fails, you can add a routine to handle that case separately.

    I think the problem here may be that Tesla relies on a combination of cameras, ultrasonics for short range, and radar for long range. This leaves them incapable of detecting moderately distant obstacles that look like open road, and don’t reflect radar.

    Either upgrading the image recognition to see those obstacles, or adding lidar to handle obstacles that don’t reflect radar well enough, should do the job.
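
    A minimal sketch of that coverage gap, using hypothetical sensor flags rather than anything from Tesla's software: a moderately distant object that the camera reads as open road, and that returns little radar energy, is also outside ultrasonic range, so nothing flags it.

    ```python
    def obstacle_detected(camera_flags_obstacle, radar_return_strong,
                          in_ultrasonic_range, ultrasonic_echo):
        # Short range is covered by ultrasonics; beyond that, detection needs
        # either the camera or a strong radar return.
        if in_ultrasonic_range and ultrasonic_echo:
            return True
        return camera_flags_obstacle or radar_return_strong

    # The failure case from this comment: every sensor reports "clear".
    print(obstacle_detected(camera_flags_obstacle=False, radar_return_strong=False,
                            in_ultrasonic_range=False, ultrasonic_echo=False))  # False
    ```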

  5. That is a problem of how we perceive things. 42,000 people get killed on our roads every year and we shrug our shoulders and say: “Well, what can you expect?” If ten people every year get killed by self-driving cars we will say: “Oh my God, we must ban them!”

  6. Haha so that seems like a glitch in the imitation learning xD

    DL Algorithm: “Humans slam into emergency vehicles every once in a while? Ok, I’m on it!”

  7. He has said that so many times in the past; some idiots will always believe him. He is falling again for the same fallacy that neural networks, if you make them big enough, can learn anything by themselves. Instead of being so myopic, it is good to review what other players, ones that have actually yielded better results than Tesla, have done so far in the field of autonomous driving:

    https://arstechnica.com/cars/2020/01/intels-mobileye-has-a-plan-to-dominate-self-driving-and-it-might-work/

  8. But will it still plow into emergency vehicles on the side of the road? Because that seems to have been an ongoing bug for years.
