There is a lot of hype around the accomplishments of Artificial Intelligence. Nextbigfuture keeps you up to date on the achievements made in AI. Two hours ago, Nextbigfuture published an article on what IBM has done with AI.
"At some point in the early twenty-first century, all of mankind was united in celebration. We marveled at our own magnificence… as we gave birth… to A.I." – Morpheus, The Matrix
Latest Advanced AI Solutions Versus Twenty-Year-Old Technology
Waze and Google Maps use a lot of Artificial Intelligence and real-time updates to provide the best driving instructions. They are useful products and provide good value.
What did we have twenty years ago and what is the difference?
MapQuest and Yahoo Maps in the late 1990s worked pretty decently. They sometimes messed up the route, but you generally got a good one. They did not have real-time traffic updates. Updates on bad traffic came from radio reports: radio stations had traffic helicopters, and listeners phoned in reports.
MapQuest was clearly superior to paper maps. There was a huge leap in certain situations, although even paper maps and memorizing how to get places work. If you had never made the drive and were not familiar with the area, then you would need maps and would plan out the route. I would use paper maps maybe twice a year, mainly on long road trips.
The other driving solution was to ask someone who knew the way, before you started the drive. How many follow-up questions were asked might vary depending upon your gender. There is a difference between most men and most women.
I used to have a regular long commute for a 30-mile drive. I had this commute every day for seven years. I already had the Waze and Google Maps options. However, there were only three practical highway options, two ways to get to the highway, and two main ways to get off the highway. In general, one route was best 80% of the time. Waze could add value when there was horrible traffic: it could offer a complex route on surface streets as an alternative to avoid a traffic jam. This happened maybe twenty times a year because of Bay Area traffic, and the route was worth taking maybe ten times a year. There were actually only two main off-the-beaten-path routing options. After driving each of them twice, I knew what they were and when I might use them. One was an alternative route onto one of the highways to get out of Oakland, and the other was an alternative path through Hayward to avoid problems on the 880 or 580.
The two-decade-old AI or software was getting to an 85-90% solution, a good approximation of the answer.
Waze uses a lot of AI, but its alternative routing may not work well. Waze, with its AI and real-time information, can direct drivers to single-lane surface streets to theoretically gain two minutes. But it can send too many cars down this "better" route, suddenly creating a Waze traffic jam on a single-lane road.
Yes, computing power went up a lot. AI and software got a lot better, but there was already a best non-AI or low-AI alternative, and that alternative may not be far off optimal.
There are other technological solutions and advancements that can vastly move the needle toward better solutions.
We now have drones and better cameras. These can act like 1,000 traffic helicopters for the same cost. There is some AI in the control software of the drones. However, stationary cameras on tall poles, trees, and buildings would not need AI to fly. A lot of stationary cameras work too.
Simple control software could have a drone fly up and then hover in place.
Even Elon Musk Says Too Much Robotics is Not Always Best
Elon says humans are underrated. He was speaking in the context of robots and automation in factories. There is AI in robots and AI in automation.
Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.
— Elon Musk (@elonmusk) April 13, 2018
Self-driving cars will be where AI can make a huge difference versus human driving. It will save lives and allow more cars to drive safely on existing roads. It will save a lot of money and grow the economy.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
15 thoughts on “We Hype Artificial Intelligence But How Good Are Non-AI Solutions?”
I mean, this seems obvious to me, but you can use a state estimator and form an adaptive feedback loop, which corrects for some disturbances. You can use a CNN or DCNN on the state estimator to get more accurate results. You can treat congestion on the shortest non-congested route (your typical steady-state output) as your disturbance and treat it like noise, which will then deviate the route based on your estimated congestion and the shortest time to the next route. You can incorporate loops for timed traffic lights and compare logic on whether you will hit a green or a red.
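The commenter's estimate-and-correct loop can be sketched with a one-dimensional Kalman-style filter. This is a minimal illustration under my own assumptions (names, noise values, and probe data are all invented), treating congestion as a noisy disturbance around a steady-state travel time:

```python
# Minimal sketch (hypothetical names and numbers): a scalar
# Kalman-style state estimator for travel time on one route,
# treating congestion as a noisy disturbance around a baseline.

def update_estimate(estimate, variance, measurement,
                    process_noise=1.0, measurement_noise=4.0):
    """One predict/correct step for the estimated travel time (minutes)."""
    # Predict: congestion drifts over time, so uncertainty grows.
    variance += process_noise
    # Correct: blend in the new probe measurement by the Kalman gain.
    gain = variance / (variance + measurement_noise)
    estimate += gain * (measurement - estimate)
    variance *= (1.0 - gain)
    return estimate, variance

# Feed in noisy travel-time probes; the estimate settles near the
# congested travel time while the variance shrinks.
est, var = 30.0, 10.0  # prior: about 30 minutes, fairly uncertain
for probe in [34.0, 35.5, 33.8, 36.2, 35.1]:
    est, var = update_estimate(est, var, probe)
```

A router could then compare this corrected estimate against the estimates for alternative routes, which is the disturbance-driven deviation the comment describes.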
Sure, non-deterministic seeding is a real thing. Usually, the seed has to be sourced from non-deterministic reality. However, non-deterministic code is not possible as far as I know. That is why we love computers.
This makes current AI software very non-magical. It's really just a statistical application where functions are approximated by sampling a lot of data. Same input == same output. Completely deterministic.
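The "same input == same output" point is easy to demonstrate. Below is a toy sketch (the scoring function is invented for illustration): once the seed is fixed, even a "noisy" model is bit-for-bit reproducible.

```python
import random

def noisy_model(x, seed):
    """Toy 'AI' scoring function: deterministic once the seed is fixed."""
    rng = random.Random(seed)            # private, seeded generator
    return x * 2 + rng.gauss(0.0, 0.1)   # same seed -> same 'noise'

a = noisy_model(3.0, seed=42)
b = noisy_model(3.0, seed=42)
# Same input and same seed give identical output every run.
```

This is exactly why trained neural networks are reproducible given fixed seeds and fixed input data: the "randomness" is pseudorandom, so the whole pipeline is a deterministic function.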
Yes, but with the caveat that the vertices have real-world statistical distributions. It's also fairly easy, for example, to conjure vertex lists which individual heuristic solvers miserably fail to solve optimally. The key is "conjure": real-world statistical distributions have a vanishingly low probability of ever producing such rare special cases.
Mmm… I wrote a random number generator which is definitely not deterministic, and it wasn't even very difficult. Seed bits derived from the LSB of sum-of-packet-sizes did the trick, followed by a 10⁴⁸-depth pseudorandom generator space and reasonably asynchronous reseeding. Definitely a deterministic algorithm, with non-deterministic output. Blessed be one's computer's background chatter of unknowable packet traffic.
Other decent sources of nondeterministic randomness are inter-keystroke µs timing, mouse movements, streaming video feeds, and on some computers, sampling the LSB (least significant bit) of the audio-IN A-to-D converter. You have to watch out for harmonic beats, though. They all combine to be quite good entropy randomizers, as seeds for pseudorandom future-value generators.
Yet… even if one couldn't really obtain nondeterministic entropy to seed a source of pseudorandom values, it wouldn't matter: run-to-run (non-restart) pseudoentropy would guarantee no-repeat non-determinism, especially if code-branch choices are made in a fuzzy-logic manner throughout the corpus of a 'determined' piece of code.
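The entropy-mixing idea in the comments above can be sketched in a few lines. This is an illustrative sketch only (the function name is invented, and real systems should just rely on the OS entropy pool): several weakly unpredictable sources are hashed together into one seed.

```python
import hashlib
import os
import time

def gather_entropy_seed():
    """Mix several weakly unpredictable sources into one seed.

    Illustrative only: none of these sources is great alone, but
    hashing them together yields a reasonable seed for a pseudorandom
    generator. Production code should just use os.urandom / secrets.
    """
    h = hashlib.sha256()
    h.update(os.urandom(16))                             # OS entropy pool
    h.update(time.perf_counter_ns().to_bytes(8, "big"))  # timing jitter
    h.update(str(os.getpid()).encode())                  # per-process variation
    return int.from_bytes(h.digest(), "big")

seed = gather_entropy_seed()
```

The algorithm itself stays deterministic; only the inputs (timing jitter, OS entropy) are non-deterministic, which matches the "deterministic algorithm, non-deterministic output" framing above.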
A good personal rapid transit (PRT) system could accomplish all the same things that self-driving cars can with a fraction of the computing power. It would require more infrastructure, though.
SkyTran and Musk’s Boring tunnels are good examples of possibilities.
For now. But things will change in the future, and people will probably be asking whether too many humans is always best.
AI as used today is a buzzword to make some good algorithmic work seem "sexy", or a narrow use of Bayesian networks. My yardstick for AI is HAL… I will accept no substitute.
The set of human life problems is huge and growing, potentially infinite, while the set of problems with automated solutions is much smaller (but also growing).
The only general solution engine for human problems known so far is the human. Humans can be applied to nearly any problem (except death and taxes, apparently) and find a solution to it. Understanding of the problem results in abstraction and tool creation, and eventually in automation.
Automation can be replicated infinitely, giving the illusion of eventual human intelligence replacement, but the set of unsolved problems simply shifts to other areas and variations of itself, often brought about by the solutions themselves.
The case of Tesla is telling, because by defining the problem as fully automating a car factory, they created a large set of complex secondary problems that couldn't be tackled in a timely or efficient fashion until humans got involved and solved them.
Route finding between two points is deterministic and not statistically based.
Route finding between multiple points could be written to use a quantum annealer, but you can find a route very close to optimal with the CPU power of a modern smartphone using any one of many heuristic approaches.
No worries there is a lot of AI marketing buzzword bingo going on. Want to make a splash in your marketing blog post? Mention AI!
You are correct. I should probably say more powerful computers, better software, more data, and in some cases a little bit of AI or algorithmic improvement.
A lot of companies say that AI is the main source of an innovation when the solution mostly depends upon other things. AI could be only 0.1% of the overall tasks, but then they call it an AI solution in the marketing.
There is a balance between accuracy and keeping the flow of the story in an article.
The takeaways would be to have less fear about AI and to get a better understanding of how much of a difference it makes.
There is also parsing marketing versus reality.
The problem is having so much precision and so many caveats that most people's eyes glaze over.
A separate article goes into how what is called AI by the general public and in marketing is not the same thing. There are also moving definitions: programming that was previously called AI in research is no longer called AI; now it is just an algorithm.
It is a lot of parsing. People fear better computers and software. They fear job loss whether it is from AI, software marketed as AI, computers, sensors, drones, or technology and automation or process changes in general.
How much do people need to know to make good decisions?
AI so far is also deterministic. It helps to look at it as a statistical application. It's not really possible to program non-deterministic software on conventional computers at all. We can't even code a random number generator that is non-deterministic. This is probably the main reason why AI in its current form will not behave like living creatures at all.
With quantum computers and AI, this may change.
Getting a route with real time traffic doesn’t require AI. Period.
Whoever told you this lied to you.
Finding a route between two points is an AI-free, purely deterministic algorithm, even in the presence of traffic. The algorithms used to preprocess large graphs are AI-free and purely deterministic even if traffic will be present.
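The deterministic algorithm in question is classic shortest-path search. A minimal sketch using Dijkstra's algorithm (the toy road network and its weights are invented for illustration): live traffic only changes the edge weights before the search runs; the search itself involves no AI.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: purely deterministic, no AI required.

    `graph` maps node -> list of (neighbor, travel_time) edges. Live
    traffic just adjusts the travel_time weights before the search.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    # Walk the predecessor chain back from goal to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy road network: heavy traffic inflates A->C, so the detour via B wins.
roads = {
    "A": [("B", 4.0), ("C", 10.0)],  # A->C weight raised by traffic
    "B": [("C", 3.0)],
    "C": [("D", 2.0)],
}
path, minutes = shortest_route(roads, "A", "D")
```

Given the same graph and the same weights, this always returns the same route, which is the commenter's point: real-time traffic routing is weight updates plus deterministic search.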
Google has used AI to mine data from their street view cameras (e.g. addresses, street names, stop signs and the like).
Hmmm… yes. But you know, even though there are excellent solutions to many problems that aren't even remotely related to AI, and recognizing that a lot of what is called "AI" these days really is kind of disingenuous, it all comes down to whether algorithms that fish unusual inference paths through truly enormous volumes of non-obviously-interrelated data need to use AI methods, or can be done with conventional programming.
I, for one, place high regard on conventional algorithm design for many problem-solving situations, even with profound data loads.
My living, at least for a while, was taking enormous lists of contacts and possibly associated (but not linked) data, and correlating them to provide what today is almost amusingly called "big data" analytics. Did it require AI? Nope. Were the results remarkable? Yep. Would it have been either easier or far more sophisticated using state-of-the-art AI? Maybe. Actually no, I don't think so. It would take far more learning investment just to get the first answers out, and given what I know, I rather doubt that the longer-term "annealed" results would be substantially smarter than what conventional programming achieved.
A whole lot of AI is faddish.
And a whole little is not.
Comments are closed.