Managers are finding AI projects difficult to implement, and the results are often disappointing.
* Getting and cleaning the data needed to train AI is time-consuming and expensive, and even then the data may not be good enough
What works
* interpreting X-ray and MRI images. It is straightforward to correlate particular images with a disease or no-disease result (a toy sketch of this supervised setup follows this list).
* driving cars. Again, many images and videos from 1 million+ Tesla vehicles can be correlated with the desired driving outcomes.
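What these working cases share is a direct supervised mapping from input to label, learned from large labeled datasets. Here is a minimal sketch of that setup in PyTorch, with random tensors standing in for a real X-ray dataset (the model and data are illustrative assumptions, not any deployed system):

```python
import torch
import torch.nn as nn

# Toy stand-in for a labeled medical-imaging dataset:
# 64 grayscale "scans" of 32x32 pixels, each labeled disease (1) or no-disease (0).
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,)).float()

# Small convolutional classifier: image in, disease logit out.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Straightforward supervised loop: correlate images with their labels.
for step in range(100):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
```

With enough labeled examples, this kind of pattern matching works well; the hard cases below are precisely the ones this recipe does not cover.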
What seems to be missing
* operating when much of the data is missing
* moving from pattern matching to knowledge graphs, and building context and actual understanding (or at least pseudo-understanding)
* properly generalizing what has been learned
* having a model of the world and of reality that sanity checks results (a toy illustration follows this list)
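Of these gaps, the last is the easiest to picture. Below is a hypothetical sanity-check wrapper that vetoes a model's output when it violates simple known facts about the world; everything here is an invented toy, not an existing system:

```python
def sanity_check_speed(predicted_speed_kmh: float) -> bool:
    """Reject predictions that a crude 'world model' knows are impossible.

    Here the world model is just two hard physical constraints on road
    vehicles; a real system would need far richer knowledge.
    """
    return 0.0 <= predicted_speed_kmh <= 500.0

def predict_with_veto(model, features):
    prediction = model(features)
    if not sanity_check_speed(prediction):
        # Refuse to report a physically impossible result.
        raise ValueError(f"Prediction {prediction} fails world-model check")
    return prediction

# Example: a dummy "model" that returns an impossible speed.
try:
    predict_with_veto(lambda features: -40.0, features=None)
except ValueError as err:
    print(err)  # the world-model check catches the impossible output
```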
Other critiques say:
A serious challenge is how to develop algorithms that can deal with the combinatorial explosion as researchers address increasingly complex visual tasks in increasingly realistic conditions. Although Deep Nets will surely be one part of the solution, we believe that we will also need complementary approaches involving compositional principles and causal models that capture the underlying structures of the data.
Can all of these problems be solved without the added complexity of the fixes causing the systems to stop improving significantly, or even to get worse?
Transferring Learning to Other Applicable Domains, and Continual Learning
Geoffrey Hinton (a godfather of deep learning) and Demis Hassabis (DeepMind) have indicated that AGI is far away. Many fundamental issues remain in generalizing deep learning and reinforcement learning, including transferring skills between systems and amplifying weak reinforcement signals.
One way to address reinforcement learning's scalability problem is to amplify the reinforcement signal with a hierarchical architecture, which would create a system of sub-goals.
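As a toy sketch of the idea (not any specific published architecture): a high-level policy proposes nearby sub-goals, and reaching each sub-goal yields a dense intrinsic reward, so a single sparse external reward is amplified into many intermediate learning signals.

```python
# Toy 1-D corridor: the agent starts at 0 and only position 10 pays reward.
GOAL = 10

def high_level_policy(state):
    # Hypothetical manager: propose a nearby sub-goal instead of the far goal.
    return min(state + 3, GOAL)

def intrinsic_reward(state, sub_goal):
    # Dense signal for progress toward the sub-goal; the sparse external
    # reward (nonzero only at GOAL) is amplified into many such signals.
    return 1.0 if state == sub_goal else -0.1

state = 0
while state < GOAL:
    sub_goal = high_level_policy(state)
    while state < sub_goal:
        state += 1  # stand-in for an action chosen by the low-level policy
        reward = intrinsic_reward(state, sub_goal)
        reward += 1.0 if state == GOAL else 0.0  # sparse external reward
        # a real agent would update the low-level policy from `reward` here
```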
There are limitations with deep learning vision systems.
A typical neural network, when trained on a new task, forgets what it had previously learned. Virtually all neural networks today suffer from this catastrophic forgetting.
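The effect is easy to reproduce. In the PyTorch sketch below (the tasks and data are synthetic stand-ins), a small network is trained on task A, then on a conflicting task B; the gradient updates for B overwrite the weights A relied on, and accuracy on A collapses:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic binary tasks with opposite labeling rules, so learning
# task B directly overwrites the weights that task A needed.
x_a = torch.randn(256, 20)
y_a = (x_a[:, 0] > 0).float()
x_b = torch.randn(256, 20)
y_b = (x_b[:, 0] < 0).float()  # conflicting rule

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()

def train(x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return ((model(x).squeeze(1) > 0).float() == y).float().mean().item()

train(x_a, y_a)
print("Task A accuracy after training A:", accuracy(x_a, y_a))  # high
train(x_b, y_b)
print("Task B accuracy after training B:", accuracy(x_b, y_b))  # high
print("Task A accuracy now:             ", accuracy(x_a, y_a))  # collapses
```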
In 2020, Jeff Clune (then at OpenAI) and collaborators published a paper called Learning to Continually Learn, introducing ANML (a neuromodulated meta-learning algorithm), which is able to learn 600 sequential tasks with minimal catastrophic forgetting. Clune believes AI like ANML is key to a faster path to the greatest challenge: artificial general intelligence (AGI).
He argued a faster path to AGI can be achieved by improving meta-learning architectures, improving the meta-learning algorithms themselves, and automatically generating training environments.
ANML scales catastrophic forgetting reduction to a deep learning model with 6 million parameters.
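The core mechanism in the paper is a neuromodulatory network that gates the activations of a separate prediction network, so learning can be routed to some units while others are protected. The sketch below shows only that gating idea in miniature; the layer sizes and class names are invented here, and ANML's actual meta-trained convolutional architecture is considerably more involved:

```python
import torch
import torch.nn as nn

class GatedPredictionNet(nn.Module):
    """Toy neuromodulation: a second network emits a per-unit gate in [0, 1]
    that multiplicatively masks the prediction network's hidden activations."""

    def __init__(self, in_dim=20, hidden=64, out_dim=10):
        super().__init__()
        self.neuromodulator = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Sigmoid()  # gate values in [0, 1]
        )
        self.features = nn.Linear(in_dim, hidden)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        gate = self.neuromodulator(x)          # input-conditioned gating mask
        hidden = torch.relu(self.features(x))  # prediction-network activations
        return self.head(gate * hidden)        # elementwise modulation

logits = GatedPredictionNet()(torch.randn(8, 20))  # -> shape (8, 10)
```

Because the gate is conditioned on the input, different tasks can activate different subsets of units, which is one intuition for why the approach limits interference between sequential tasks.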