Stringent Tesla FSD Tester Says FSD 10.69.1.1 Feels Ready $TSLA

Chuck Cook performs some of the most stringent testing of Tesla’s Full Self-Driving (FSD) software, including many tests of unprotected left turns and roundabouts. Based on his latest batch of tests of FSD version 10.69.1.1, he feels the software is ready for wide release.

He says it is still too early to jump up and down, but the drive feels better and more confident than on the previous release.

FSD is not perfect; it did run a red light. The acceleration is good, though, and the driving is smooth.

18 thoughts on “Stringent Tesla FSD Tester Says FSD 10.69.1.1 Feels Ready $TSLA”

  1. One person’s experience can be very misleading. It takes a lot more data than this to validate something as ready to release.

    Note that even if it is ready now, with Tesla’s frequent update policy it could regress into something downright dangerous at the drop of a hat. Tesla needs to fully retest each update from scratch; it is a property of neural networks that any change to the training data can have unknown consequences.

    • Agreed.
      This is yet another example of how a machine is close to doing human work.
      The fact that it ran a stoplight probably means the programmers are more concerned with what is going on in the real world than with just following rules, as an inferior mapped autonomous service would *have* to do.
      If a vehicle had been coming, it probably would not have run the light; that is not to say they didn’t consider programming it to recognize lights.
      I believe that, because autonomous accidents most probably won’t make sense, the legal issues will take many more years to iron out.
      Yet, there will be some jurisdictions on the planet that will permit it before others. This is when Tesla will substantially increase in value (yet again).

      • This is why a lot of people don’t like us.

        Quick summary: red lights should be optional if Tesla thinks the red light is inconvenient. And from there, anything else that we think is dumb, that’s your problem.

        Simple solution: you are responsible for what your car does. 100%, no excuse. You run a red light, it’s on you. You run up the back of a motorcycle and kill two people, you go to jail for negligent driving. Don’t like this? Don’t buy a car you can’t effectively control.

        The issue with wanting to change laws now, while accepting that the cars will get better eventually, is that the price you are willing to pay is other people’s lives, safety, and sanity. Claims of ‘greater good’ just make it sound like we Tesla owners don’t care about anyone alive now, just future people, and I’m tired of being considered an a****e for having a nice car.

  2. The safety aspect is interesting. To buy an FSD car I wouldn’t focus on whether it was statistically better than people as a whole – I’d want to know whether it was safer than me as an experienced, attentive middle-aged driver of the kind that relatively rarely has accidents. For me as an individual purchaser of a car, comparing to the population is a fallacy, because it incorporates all the young, reckless, drunk, and drug-addled drivers who increase the accident rates but who have nothing to do with me (and who probably are not going to be able to afford FSD anytime soon anyway).
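
    To put rough numbers on the fallacy (all of them invented, purely for illustration): the population average accident rate is a mixture of very different groups, so a careful driver can sit well below an average that a small high-risk group drags upward. A minimal sketch in Python:

        # Invented strata: (share of drivers, accidents per million miles).
        # Both the shares and the rates are made up to illustrate the point.
        strata = {
            "attentive middle-aged": (0.60, 1.0),
            "young / inexperienced": (0.25, 4.0),
            "impaired / distracted": (0.15, 8.0),
        }

        # The population average is the share-weighted mixture of the strata.
        population_rate = sum(share * rate for share, rate in strata.values())
        print(population_rate)  # about 2.8 accidents per million miles

        # The careful stratum (1.0) sits well below the population average (2.8),
        # so beating the average driver is a weaker bar than beating a careful one.
        print(strata["attentive middle-aged"][1] < population_rate)  # True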

    • This response is probably more common than I had realized. Just about everyone thinks he is a better-than-average driver, right? Just like the children of Lake Wobegon.

      Your self-description as “an experienced, attentive middle-aged driver of the kind that relatively rarely has accidents” seems to me to describe pretty much the typical, average driver. I don’t have any actual statistics about drivers and accidents on which to base that, but the experience of my friends and family over the years makes it feel right. Since I don’t have statistics, I might be wrong about what the average accident rate is. If it is substantially higher than I imagine, maybe we would have to set the acceptable accident rate for authorizing self driving cars somewhat lower than the average of all human drivers, as Kevin suggested in another comment. It would not be unreasonable to consider that. But I think it would be unreasonable to require the accident rate of self driving cars to be hugely better than that of human drivers. Keep in mind that their accident rate will constantly improve, so they will fairly quickly become hugely better than human drivers anyway.

  3. Watched his commute. Self driving is making excellent strides. The FedEx situation is a genuinely difficult one, as the FSD car has the right of way and does not have to let the delivery truck make a left in front of it. It is a situation where human drivers waiting at a light will often work to accommodate other vehicles.

  4. What a low standard.
    Maybe we can lower it some more by reducing the number of cameras to 2. Because comparison to humans is what matters, right?

    • Yes, but less often than human drivers do.

      Which raises the interesting question: will we allow self driving cars when they reach the point where they save lives? Or are we going to wait until they save ALL lives? If we wait for the latter, then we’ll never use them, and many lives will be lost that could have been saved.

      It’s worth remembering that some fraction of the human drivers on the road right now are driving while drunk or high, some fraction are severely sleep deprived, some are 16 and just got their license, some are over 90 and don’t drive as well as they used to, some (many?) are texting on their phone. No computer will ever be perfect. The question is whether it can be better than the average human on the road right now.

      • Yes. I have been making this point in comments about self driving cars for quite a while. Self driving cars should be authorized when we can see they statistically demonstrate safer driving than human drivers. Self driving cars will never be perfect, and they will make different mistakes than human drivers make, but when they are statistically safer than human drivers, they should be permitted on the roads.

        But the manufacturers and owners of self driving cars will have to have protection against frivolous lawsuits. When a self driving car makes some mistake that no human driver would make, that must not be allowed to be a basis for a lawsuit against the manufacturer or owner of the car. Putting in place such legal protections that still allow for action against incompetent manufacturers of self driving cars probably will be much harder than solving the technical problems of self driving.

        There probably also needs to be some discussion, and probably laws or regulations, about how much detail should be shared among manufacturers about the accidents that occur that involve self driving cars. On the one hand, you would like every self driving car to be updated so it never again makes the mistakes that allowed the accident to occur. On the other hand, car manufacturers would want to maintain competitive advantages by improving their cars’ safety over that of other manufacturers’ cars. I don’t know how best to balance those competing interests.

        • Perhaps some target should be adopted. For example, before being released, driverless cars should be ten times less likely to cause an accident, and those accidents should be half as serious.

          • Requiring that self driving cars have a somewhat lower accident rate than the human driver population might not be unreasonable, but I’d say putting it at a tenth of the human rate takes the idea too far. Even putting the requirement at half the human rate feels too far to me.

            Currently, we allow everyone from the least experienced (and generally incautious) 16-year-olds to rather old people with noticeably slowed reaction times to drive (not to mention all the people who drive while intoxicated). It seems to me that requiring self driving cars to be statistically better than the average of that population would be good enough, and as more and more people use self driving, the overall driving safety statistics would improve.

            But maybe it would be easier to get the public to accept self driving cars if the requirement were that their average accident rate be, say, 80% or less of that of the whole population of human drivers. That would make them, on average, a bit better than the average human driver, and I’d hope that would be acceptable to most people. An important selling point ought to be that self driving cars’ accident statistics would improve fairly rapidly, since whatever any manufacturer learns from the accidents that do occur would quickly be incorporated into all of its vehicles (assuming over-the-air updates). And if we work out fair ways to share the details of those accidents among all the manufacturers, the improvements would not be limited to just one manufacturer’s cars.

            And I want to mention again that I think none of this will happen unless the manufacturers and car owners are protected against frivolous lawsuits; that will be a harder problem to solve than any actual self driving problem.
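
            To make “statistically demonstrate safer driving” concrete, here is a minimal sketch of one possible test, assuming Python with SciPy; the function is hypothetical and every number in it is invented, with the 80% threshold being just the one I suggested above:

                from scipy.stats import chi2

                def meets_threshold(av_accidents, av_miles, human_rate,
                                    threshold=0.80, alpha=0.05):
                    # Exact upper confidence bound on a Poisson mean
                    # (chi-square form), given the observed accident count.
                    upper_mean = chi2.ppf(1 - alpha, 2 * (av_accidents + 1)) / 2
                    # With (1 - alpha) confidence, is the self driving accident
                    # rate below threshold * the human baseline rate?
                    return upper_mean / av_miles < threshold * human_rate

                # Hypothetical figures: humans at 2 accidents per million miles,
                # a self driving fleet with 120 accidents over 100 million miles.
                human_rate = 2.0 / 1_000_000
                print(meets_threshold(120, 100_000_000, human_rate))  # True here

            The particular numbers don’t matter; the point is that “safer than human drivers” becomes a checkable claim once a threshold and a confidence level are agreed upon.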

        • That kind of legal shield is not going to go over well with the general public. Say a self driving car makes a mistake that a good driver wouldn’t, and it results in a death. Try telling a grieving parent that no one is to blame… This means self driving cars have to be close to perfect, or the manufacturers have to be prepared to pay for the mistakes their cars make.

          • What you mention is exactly the point I was making: there has to be a pretty big public education and lobbying campaign to persuade the public and politicians that self driving cars will make mistakes, because nothing is perfect, but that when they are statistically safer, as a group, than human drivers, taken as a group, it is the smart thing to enable, and even encourage, adoption of self driving cars. Part of that campaign also would have to make clear that some of the mistakes self driving cars make will be different from the mistakes humans make, sometimes seeming ludicrous. But the campaign must convince people that, on balance, the lower overall accident rate is better than not permitting self driving cars.

            The campaign also should make clear that the accident rate of self driving cars will improve fairly quickly, given that once any mistake is found, it will be fixed in all the self driving cars (assuming the accident details are shared among all manufacturers, which itself will be controversial to some).

            Without successfully educating the public and the politicians, and thereby getting frivolous lawsuits legally blocked when self driving cars make mistakes that cause accidents, I doubt self driving cars can be economically viable. I don’t claim that will be easy to do. But I believe it is necessary if we are ever going to have self driving cars as more than an experiment or demonstration.
