Better 24/7 all-conditions vision for self-driving cars

Startup AdaSky is creating a far-infrared (FIR) thermal camera called Viper for self-driving cars. Viper is a complete sensing solution for 24/7 driving: it combines a far-infrared thermal camera with advanced machine-vision algorithms, letting autonomous vehicles see and understand the road in any lighting or weather condition.

AdaSky’s thermal sensing solution allows the vehicle to sense and analyze its surroundings by passively detecting the thermal energy (FIR radiation) that objects and living beings emit. AdaSky’s image-processing and computer-vision algorithms process the signals collected by the camera to provide accurate object detection and scene analysis, giving the vehicle a new layer of information.

The camera spots differences in the heat emitted by objects. Warm-blooded humans and animals are clearly seen. Road surfaces stand out from trees and vegetation. Oncoming headlights, direct sunlight, and abrupt lighting changes do not wash out the scene.
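AdaSky’s actual detection algorithms are proprietary, but the basic idea of segmenting warm objects from a cooler background can be illustrated with a minimal, hypothetical sketch: threshold a thermal frame (pixel values in °C) and draw a bounding box around the hot pixels. The frame data, threshold value, and function names below are illustrative assumptions, not AdaSky’s API.

```python
# Hedged sketch of thermal-image segmentation: flag pixels warmer
# than the background and box them. This is NOT AdaSky's algorithm,
# only an illustration of the underlying principle.

def warm_regions(frame, threshold=30.0):
    """Return (row, col) pixels whose temperature (degrees C) exceeds threshold."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, t in enumerate(row)
            if t > threshold]

def bounding_box(pixels):
    """Axis-aligned box (top, left, bottom, right) around detected pixels."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), min(cols), max(rows), max(cols))

# Toy 4x5 "thermal frame": ambient road around 10 C, a pedestrian around 36 C.
frame = [
    [10.0, 10.5, 11.0, 10.0, 10.0],
    [10.0, 36.2, 36.5, 10.0, 10.0],
    [10.0, 36.0, 36.4, 10.0, 10.0],
    [10.0, 10.0, 10.0, 10.0, 10.0],
]

hot = warm_regions(frame)
print(bounding_box(hot))  # → (1, 1, 2, 2)
```

Because body heat contrasts with the background regardless of visible light, this kind of thresholding works identically in darkness, glare, or fog — which is the point of FIR sensing.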

14 thoughts on “Better 24/7 all-conditions vision for self-driving cars”

  1. All the articles on this site suck!!!!! They are just nonsensical clippings and plagiarized text from other sites!

  2. A new type of sensor information is a good thing, but nobody should pretend that just one kind of sensor information is sufficient.

  3. Nice. But how does this help SDCs solve the Trolley Car Problem?

    It doesn’t. Because nothing they are doing with AI’s can solve it.

    But until this is solved, they shouldn’t be allowed on the roads.

    • It’s not like the Trolley Car Problem is a requirement in the driving exam. It’s clearly a non-issue that doesn’t need to be solved.

      • This, the trolley problem is very academic, and its only use in this discussion is to show how much perfect is the enemy of good enough.
        A good-enough self-driving car is far better than keeping all the bad drivers around.

        Now one related issue to the trolley problem is whether to drive into the ditch rather than hit another car or person. This is far more relevant, at least if you drive a lot on icy roads.

        • …or another approach.

          A car AI’s decisions are an extension of the driver’s (because it was the driver who decided to use the AI). In this kind of situation, the driver would have to choose whether he is willing to sacrifice his own life or not, and AFAIK nobody could hold him responsible for the result (unless the dangerous situation itself was his fault) – for a civilian, sacrifice is a privilege, not a duty.

          So, let’s just have the driver make the decision in advance and in complete privacy – a setting (save me first vs. calculate losses and save the most people) that would be hidden and inaccessible to anyone else until it takes effect – just like his own private decision would be. After all, the AI in the car does not remove the responsibility; it is still the driver’s car.

        • This, the trolley problem is very academic …
          Now one related issue to the trolley problem is whether to drive into the ditch rather than hit another car or person. This is far more relevant …

          So you say the trolley problem is academic, and then you say that one version of the trolley problem is relevant? This is the trolley problem.

      • You can’t just not solve it. The software has to be written. The software decision tree will be queried when someone inevitably gets killed.
        Then what do they find? The car just chose an action based on a random number generator because the programmer decided not to solve that particular problem?
        Well obviously not. But what do you mean by not solving the problem?

    • Save vehicle occupants first, then bystanders. Ownership of the vehicle makes a car the owner’s vassal and requires loyalty first. That’s kind of how humans have worked for centuries (loyalty before morals), so good enough, right?

Comments are closed.