Tesla AI Processes Driving Images 10X Faster Than Humans

Andrej Karpathy, Director of Artificial Intelligence and Autopilot Vision at Tesla, says that the cameras and AI used by Tesla can process visual driving data ten times faster than humans.

    SOURCES: Tesla, Andrej Karpathy
    Written by Brian Wang, Nextbigfuture.com (Brian owns shares of Tesla)

29 thoughts on “Tesla AI Processes Driving Images 10X Faster Than Humans”

  1. Not just creating randomness, but accessing awareness or information that is practically useful through quantum interactions.

    What we have here is a universe of energy coalescing to physical matter in cooperation with surrounding energies. Is intent an energy? As Native Americans practiced the modification of weather through focused attention, so are there investigations of remote healings, prayer, visualization, etc. that show repeatable and verifiable results, and many practices can be found online to experiment for oneself (like Law of Attraction stuff). Maybe much is still somewhat speculative and anecdotal but the science is still evolving and maybe we will be fascinated to see it more quantified even in our lifetimes.

    Here is one study:

    "Consciousness has been shown to affect plant growth, bacteria growth, rodent behavior, cats and dogs, enzyme activity, crime rates and even Healing in strangers."

    Anyway, I learned to set the intention to be safe when I first sit down in the driver's seat, and I have liked the results for 30+ years.

  2. If the law stops you from using the correct lights at night, the same law will probably insist you turn off self driving when on long boring stretches of highway.

  3. I am very interested in night driving. When I drive on an interstate highway at 75 mph (legal on rural stretches in most western states) with only my low beams on, I am often uncomfortable. (I still do it, but I am uncomfortable. Silly me.) The law restricts me from using my high beams.

    Has anyone driven a Tesla at night on rural stretches of Interstate Highway using the current version of self driving? I am curious as to how well it does. I know this has been thought out, but I haven't seen much data.

  4. The currently available statistics show Tesla's self driving to be safer than human drivers by quite a large margin.
    Human drivers cause 30,000 deaths each year in the US alone. 10,000 of those are from drunk drivers. I think that FSD will make our roads a lot safer, even if it won't be perfect in the beginning.

  5. So, it can mistake a plastic bag blowing across the road for a genuine obstacle much faster than a human can realize it's safe to ignore?

  6. I believe I understand your skepticism, however I don't have in mind to try to get the public to understand a very complex topic. The main, maybe only, point to be learned would be that self driving cars make different kinds of errors than human drivers make, BUT FAR FEWER ERRORS THAN HUMAN DRIVERS MAKE, and so self driving cars are safer than human drivers.

    I believe it would have to be a continuing campaign because from time to time, accidents will occur that ambitious lawyers will try to capitalize on to get undeserved money from the car manufacturers or to get unneeded regulations put on self driving cars. At such times, the public needs to be reminded of the main point I stated above so that they don't put pressure on politicians to "fix" the "self driving car problem".

  7. A purely speculative paper that speculates that overall neuron firing may be stochastically determined by quantum probability.
    Not really any different from the way that transistors and optical sensors that govern robot cars are also activated, at the most basic levels, by quantum functions, hence leading to some stochastic rather than deterministic behaviour.
    The emergent chaotic behaviour of neural networks means that, rather than being averaged and damped out (as would happen in a pure logic circuit), such randomness can be amplified to give a degree of randomness in overall behaviour.

    Cool, and as a speculative paper (which is what this is openly presented as) totally valid.

    But nothing there about how such things result in decisions that are in tune with your "intention to remain safe".

  8. Exactly. A little kid playing with a ball is a classic example that everyone mentions when talking about driver anticipation. That's a prime example of what WOULD be incorporated into a driving AI.

    What does not get incorporated is something else, something that you don't think of, but that once again a human would recognise as dangerous when they see it but the AI might not.

  9. At a wild guess, as it approaches actual release into the wild, the lawyers have come in and made sure that every statement now goes through them before it leaves the building.

    No more complete schmozzles like calling the cruise control "autopilot" and then acting surprised when consumers think it can be an autopilot.

  10. My faith in the ability of public education campaigns to educate the public about anything more complex than suncream, or to even make it from science to campaign without getting half the facts mixed up and reversed, has suffered a major blow over the past 18 months.

  11. Why do you believe things like that cannot be included in the self driving car? Do you think the developers cannot anticipate them, cannot incorporate them into the AI, or do you have some other reason in mind?

    I think the developers can anticipate such things and include appropriate responses into the AI. They will not anticipate every possibility, but when new ones are discovered (by accidents or just by close calls), the new ones could be added to the AI's evaluation.

  12. Those sort of errors are why we need a strong and continued public education effort getting people not to lose focus on the overall safety advantage of self driving cars (once they reach that point). The cars are going to make mistakes, some of which will look pretty "dumb". They will make different mistakes than humans make because they are different from humans. But they will make far fewer mistakes, and that is what the general public must not be allowed to lose sight of (and why it will need to be a continuing public education effort).

    I imagine an important part of the approach, though not the whole approach, would be to frequently list the many errors the self driving car never makes — never is drunk, never sleepy, never distracted by things happening inside the car, never gets road rage, never shows off to its buddy, and probably more.

    Another advantage that probably is too technical to use is that once an error happens and is analyzed, the correction will rapidly be in *all* cars, but maybe someone clever can figure a good way to present that, too.

  13. So far I doubt the AI is doing the same "processing" as the brain. For example, I'd bet it won't recognize "there's a little kid playing with a ball, better go extra slow in case he runs out into the road."

  14. I'm afraid that AI driving is a lot more complicated than just "processing images".
    Not only must the AI tag every object; it must also have a range of expected behaviors for each one:
    normal behavior (regular driving), extreme behavior (sudden braking or turning), weird behavior (a drunken driver, a car suddenly changing from forward to reverse…), unidentified objects (a plastic bag flying by), etc.

    Identifying an object is just a small part of it. Because some reactions should be faster than others, it's better if more than one algorithm works in parallel, and identification is not a 0-or-1 result but a progression.

    Unknown object, 3m -> car (unknown behavior), 3m -> car (regular behavior), each with different reactions. While the object is unknown, maintaining distance is the safest strategy. Once the behavior analysis has been completed, the distance criterion could be relaxed, and a shorter distance, though still enough to brake, could seem more natural.

    That's more like how humans work. If you are not sure whether a fast-moving object is a plastic bag (no problem with a "collision") or a wild animal, it is better to assume the worst-case scenario until identification is completed.
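The progressive-identification idea described in this comment can be sketched in a few lines. Everything here is hypothetical for illustration: the labels, confidence thresholds, and distances are invented, and this is in no way Tesla's actual logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of "progression of identification": each detection
# carries a label and a confidence, and the distance policy stays
# conservative until the object is confidently identified.
@dataclass
class Detection:
    label: str         # e.g. "unknown", "car", "plastic_bag" (invented labels)
    confidence: float  # 0.0 .. 1.0, how sure the classifier is

def following_distance_m(det: Detection, base_gap_m: float = 40.0) -> float:
    """Target gap to keep from the object: keep the full gap until the
    object is confidently identified and its behavior is modeled."""
    if det.label == "unknown" or det.confidence < 0.5:
        return base_gap_m            # unknown: assume the worst, keep full gap
    if det.label == "car" and det.confidence >= 0.9:
        return base_gap_m * 0.6      # identified and predictable: relax, keep braking margin
    if det.label == "plastic_bag" and det.confidence >= 0.9:
        return 0.0                   # confidently harmless: safe to ignore
    return base_gap_m                # identified, but behavior still uncertain
```

With these made-up numbers, an unknown blob keeps the full 40 m gap, a confidently tracked car relaxes to 24 m, and a confidently identified plastic bag is ignored; a half-identified car still gets the full gap.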

  15. What is remarkable is how the flow of information from Tesla about their FSD program has been reduced to a trickle. Elon tweets now and then that version 9.0 is a couple of weeks out or that the "button" to request FSD will come imminently. But other than that, no new bits of information are forthcoming.

    Oh and by the way, according to James Douma, the FSD SW runs at 17 Hz, even if the cameras run at 36 Hz…

  16. Awarding costs against the plaintiff is a common method of discouraging nuisance suits.

    I think the sticking point will be crashes that occur when the SDC makes an error that a human wouldn't have made.
    An error that the average person looks at and goes "yeah, I can see how that would be super tricky…" that's subject to reasoned debate.
    But when the car's driving system thinks that a white truck is the sky because they have the same albedo. Or that a person pushing a bike across a road can't be a pedestrian because humans only cross roads at crosswalks. Or when a plastic bag blowing across the highway causes the car to lock up the brakes and results in a pileup.
    Those sorts of errors will make most people shake their heads and vote "NOT READY YET" on self driving cars, even if the overall number of crashes/km is down.

  17. Probably not better yet, but it will become better at some point. I don't know how soon we will get to that point, but it will come.

    However, shifting to the whole self-driving system (not just the vision processing), the point where society should be willing to accept full self driving cars is not when they are better than the best human driver, but when they are better than the average performance of all human drivers, statistically.

    I believe that if you had a calm conversation with most people today, they would intellectually agree with that position, but they would still be easily swayed away from it by sensational news coverage of any fatal accident that was clearly caused by a mistake made by a self driving car. We probably will need a strong and continued public education effort to pound home the point that nothing is perfect, but the death toll that can be blamed on self driving cars is far lower than that of human drivers.

    Also we will need strong legislation that blocks liability being assessed against the manufacturers and owners of self driving cars except when they have been seriously negligent. It will be hard to craft such legislation that can block lawsuits that have no real basis, but are intended only to take the opportunity to harass the manufacturer or owner, while still being an effective deterrent against negligence. I don't know what the techniques are that such legislation should use, but I hope there is a way to create such legislation.

  18. There are experiments with reliable data showing people can make decisions before they're consciously aware of seeing the event, not before the event occurs. Any result as stated would be parapsychology/precognition, and if repeatable it would overturn everything in science, from physics to neurophysiology. There are NO repeatable experimental results supporting ANYTHING in parapsychology, much less results that challenge causality.

  19. Studies have shown that the human brain can make a decision about an event before the event even occurs. Sounds impossible, but it has been observed.

  20. Interesting; Tesla hadn't shared the video resolution and frame rate before now.
    This is an interesting data point on what the sensors are capable of capturing.
    36 Hz seems more than sufficient.
    1.2MP might be a bit low if you want to see far ahead in the distance. But combining multiple images for 4D imaging, as Tesla does, might enhance long distance viewing.
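Taking the figures quoted in these comments at face value (1.2 MP frames, 36 Hz cameras, and the 17 Hz processing rate mentioned in comment 15), a back-of-envelope estimate of the raw sensor load is easy to sketch. The eight-camera count and 1 byte per pixel are assumptions for illustration, not figures from the comments.

```python
# Back-of-envelope data-rate estimate from figures quoted in the
# comments (1.2 MP frames, 36 Hz cameras, 17 Hz processing).
# Assumptions (NOT from the comments): 8 cameras, 1 byte per pixel.
pixels_per_frame = 1.2e6
camera_hz = 36
processing_hz = 17
cameras = 8
bytes_per_pixel = 1

raw_mb_per_s = pixels_per_frame * camera_hz * cameras * bytes_per_pixel / 1e6
frames_used_fraction = processing_hz / camera_hz

print(f"raw sensor stream: ~{raw_mb_per_s:.0f} MB/s")   # ~346 MB/s
print(f"frames processed:  ~{frames_used_fraction:.0%}")  # ~47%
```

Under these assumptions the cameras produce on the order of a few hundred MB/s of raw data, and a 17 Hz processing loop would touch roughly every other frame.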

  21. No contest? Human brain still has a quantum aspect that further informs decision making with impulses/intuitions based on your intention to remain safe.
