Ray Kurzweil and Peter Diamandis presented an Abundance 360 webinar on Friday, October 13 on mind-boggling predictions and transformative (even “dangerous”) ideas.
They discussed
* Radical life extension
* Space
* Artificial intelligence
* Uplifting the developing world with the internet and other tools
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 science news blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
That’s a fascinating graph, but on some levels it’s kind of like assuming if you dump enough mice into a cargo bin you’ll end up with an elephant.
I’ve asked Brian to turn on comment-editing. It would also be useful to see “comment clustering” where replies are indented. Right now, there’s next-to-nothing.
HOWEVER, we do have bold and possibly italic festooning. GoatGuy
Previous attempts to comment gave me a notice that I’m banned from NBF, which is a surprise since I wasn’t banned before the comments system changed. I’m trying this from a different email, but I would like to be able to use my regular email and old user name.
Kurzweil’s been making this type of prediction for a long time. Most of it is based on Moore’s law which, as has been discussed on NBF, has stopped. So unless there is some technological breakthrough, I don’t think Kurzweil’s predictions will count for much.
Yes. Kurzweil’s predictions are already off the rails, since Moore’s law is no more.
Whilst Moore’s law has been part of the reason for Ray’s success, it is not the whole reason.
His hypothesis for a technological singularity does not rely solely on Moore’s law.
Many of his predictions relate to health, way of life, etc., which are independent of Moore’s law.
Please read his book before making one-line remarks.
Having said that, I agree that his stated prediction accuracy of 85+% is overestimated and that the time frame for the singularity may be up to 50 years too early. However, in the scheme of things, that is a minuscule difference in time.
Moore’s Law may not apply to a single processor but it still applies to the amount of computational power a chip can deliver.
IIRC Kurzweil’s basis is more than Moore’s law; it’s the combination of exponential progress in that and other tech fields.
Comment thread structure is broken.
What do u mean?
Unless I can’t find how to turn it on, it seems there’s no actual thread structure (post-reply clustering) anymore.
Making a faster calculator will in no way make it sentient or smart. Software is where it’s at. And making predictions about software advances based on Moore’s law is crazy talk.
Deep learning is a very powerful AI learning technique which currently can deliver human level performance in some areas. Increasing the raw computational power available increases the size of the problems that can be tackled with it so clearly raw computational power matters.
There’s not a lot to Deep Learning that computer scientists couldn’t have come up with back in the 1970s. They simply couldn’t even begin to dream of attempting it with what they had, so they stuck with ‘toy’ problems, demonstrations of principles, etc.
So processing ‘faster’ can easily open up new capabilities.
The current limit on Deep Learning seems to be the need for LOTS of examples.
So the next big movement will be finding ways to train AI systems to apply previous learning to accelerate learning of new tasks, rather than starting mostly from scratch.
Let the AI do the work of identifying and ‘abstracting’ more general approaches/methods, e.g. to pre-process data. E.g. train 10000 deep learning networks, then have a deep learning network examine those to build pre-trained networks that it then tests to see how fast they learn.
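The core idea above — reuse features learned on one task to accelerate learning of a related one — can be sketched in miniature. This is an illustrative toy (not anything from Kurzweil or NBF): a tiny two-layer NumPy network learns AND as a source task, then its feature layer is transferred to a fresh head for the related OR task. The network sizes, tasks, and hyperparameters are all assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four binary inputs; AND is the source task, OR the related target task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([[0], [0], [0], [1]], dtype=float)
y_or = np.array([[0], [1], [1], [1]], dtype=float)

def train(y, W1, W2, lr=0.2, epochs=2000):
    """Full-batch gradient descent on a 2-layer tanh network, MSE loss."""
    for _ in range(epochs):
        h = np.tanh(X @ W1)                      # shared feature layer
        err = h @ W2 - y                         # linear head, MSE error
        gW2 = h.T @ err / len(X)                 # gradient w.r.t. head
        gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)  # backprop to features
        W1 -= lr * gW1
        W2 -= lr * gW2
    loss = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
    return W1, W2, loss

# 1) Learn the source task from scratch.
W1, _, loss_and = train(y_and, rng.normal(0, 0.5, (2, 4)),
                        rng.normal(0, 0.5, (4, 1)))

# 2) Target task, transferring the learned feature layer (fresh head,
#    smaller step budget).
_, _, loss_transfer = train(y_or, W1.copy(),
                            rng.normal(0, 0.5, (4, 1)), epochs=500)

# 3) Same target task entirely from scratch, same small step budget,
#    for comparison.
_, _, loss_scratch = train(y_or, rng.normal(0, 0.5, (2, 4)),
                           rng.normal(0, 0.5, (4, 1)), epochs=500)
```

The “network examining 10000 trained networks” idea in the comment is a meta-learning step beyond this sketch; what the sketch shows is only the base mechanism it would exploit — that trained feature layers carry over between related tasks.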