Nvidia’s AI platform set eight records in training performance, including three in overall performance at scale and five on a per-accelerator basis.
The AI platform now trains models in under two minutes that once took a whole workday.

Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
I guess you cannot bluff your way out now…
Can’t some irrational or inept guy use the super-smart, non-self-aware AI to cause the extinction of man in order to get all the gold in the world for himself?
A very, very smart AI could be used to potentially solve most of our problems; it could also be used to create a customized personal hell for everyone.
It’s best for everyone if super-smart AI and tech like longevity are kept far out of the reach of “children”.
The recent AI that beat poker players is damn scary.
I for one welcome our new AI Overlords!
You are exactly right. The only basis for that claim might be that some animal species seem to be self-aware while others are not. That self-awareness does not seem to directly correlate with the species’ intelligence or even an individual animal’s intelligence.
Quote: “The biological examples suggest it can be an emergent property.”
Of course that is right; how else could it be, unless we were programmed that way? But that does not mean it emerged as a trait of intelligence.
It should have been called “(super)optimizing compilers”:
https://news.ycombinator.com/item?id=17949990
https://en.wikipedia.org/wiki/Program_optimization#Levels_of_optimization
Easy things are hard, but the things that are hard for humans are not so hard for a computer. It might be easier to replace highly qualified specialists, using just a fraction of a supercomputer’s cycles, while being unable to replace a plumber. That would still send shockwaves across the economy, as no education would be worth the money and time invested, for example.
Keep in mind that, in the past, NVIDIA’s hardware and CUDA were more generalized and optimized for different tasks, which resulted in less-than-optimal performance on AI workloads. These gains are the result of deep hardware and software optimizations tailored to AI workloads.
There is no need to fear these efficiency gains in and of themselves.
> Self-awareness is caused by intentionally programming it that way.
There’s no basis for that claim. We don’t know how self-awareness works, or how it formed. Our only examples of self-awareness are biological, and the accepted theory suggests it wasn’t intentionally programmed. The biological examples suggest it can be an emergent property.
I’m not that worried at the moment, since the very largest supercomputers are only just reaching par with the human brain in raw computing power, and cost on the order of $200 million. They’re still a long way from economically out-competing humans.
If the cost of computing keeps falling exponentially over the course of the next half-century, though, there could be serious cause for alarm. If computing gets 3 orders of magnitude cheaper than at present, then professionals’ jobs could be seriously threatened. At 4 orders of magnitude, human employment is potentially obsolete across the board; at 5 orders of magnitude, humans are hopelessly outclassed; and at 6 orders of magnitude, we’re in Skynet territory.
“to smart or to fast”
I’m still waiting for the computer to know that the correct form in this context is “too” instead of “to” since so few humans seem to understand the difference now.
Intelligence is not the same as being self-aware.
We want our computers to be as smart as possible. They can not be to smart or to fast.
Self-awareness is caused by intentionally programming it that way.
A self-aware computer could easily mean the extinction of man, while a computer that is not self-aware but very, very smart could potentially solve most of our problems.
Examples: the best and cheapest way to produce fusion, genetic engineering, new medicines, new materials, etc.
I have been saying for a long time that they need to train logic diagram simplifiers (compiler optimizers) on those machines. It would accelerate CAD and CAE software significantly, I guess:
https://drive.google.com/file/d/1GSv89tiQmPDcnFEu4n4CqfaJcUJxVmL5KrSCJ047g4o/edit