This site has looked at a list of technologies for what I call the Mundane Singularity. The Technological Singularity and Transhumanism are often criticized because the primary technologies used to justify them are molecular nanotechnology and greater-than-human artificial general intelligence, which some believe are not possible. Yet much of the projected benefit of a technological singularity could be achieved without molecular nanotechnology and without greater-than-human AGI as the triggering technologies.
A Mundane Singularity could bring about a great deal of:
1. Economic abundance
2. Radical life extension
3. Physical and cognitive enhancement
4. Bloodstream robots
5. Open access to space
6. Pollution elimination
7. Computer advancement
8. Shape-changing functional devices like utility fog
Early versions of the controversial molecular nanotechnology are emerging with DNA nanotechnology, DNA origami and synthetic biology. The vision and work of Shawn Douglas, Ido Bachelet and George Church could be part of realizing radical life extension and something more powerful than mere bloodstream robots.
DNA nanorobots have been demonstrated in live cockroaches, could be in humans by 2019, and could scale to Commodore 64-class (eight-bit) computing power.
Nanoparticles with computational logic have already been demonstrated. An ensemble of drugs can be loaded into many particles for programmed release based on the conditions found in the body.
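The "programmed release based on conditions" above is essentially a boolean gate evaluated by the particle. A minimal sketch of that gating logic, with invented marker names (the real systems implement this in aptamer/DNA chemistry, not software):

```python
# Hypothetical illustration of logic-gated drug release. The marker
# names are made up for the example; they are not from any published work.

def should_release(markers: set) -> bool:
    """AND-gate with an inhibitor: release the payload only when both
    disease markers are present and the healthy-tissue marker is absent."""
    return ("antigen_A" in markers
            and "antigen_B" in markers
            and "healthy_marker" not in markers)

print(should_release({"antigen_A", "antigen_B"}))       # True: release
print(should_release({"antigen_A", "healthy_marker"}))  # False: stay closed
```

Loading different drug ensembles behind different gates is what gives the situational, multi-drug release described above.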
J. Storrs Hall defines a technical takeoff as something that:
– embodies the essential function of the proposed technology
– is proof that the concept works
– focuses technical effort
– is a vehicle for practical experience
– attracts financial (and other) resources
– forms a crack in the dam
The list of technologies and policies that I believe will play a major part in achieving those things over the next 20 years:
1. Pro-growth Policies and aggressive adoption and deployment of best practices
2. Energy Efficiency – superconductors, thermoelectrics, improved grid
3. Energy Revolution – mass-produced fission, fusion, maybe cold fusion, and a battery singularity
4. Additive manufacturing
5. Not so mundane – neuromorphic chips, quantum computers, photonics
6. Automated transportation (leading to robotic cars and planes)
7. Urbanization – megacities
8. Urbanization – Broad Group skyscrapers, Tata flat-packed buildings
9. Improved medicine and public health
10. Synthetic biology and recombineering
11. Sensors everywhere
12. Education transformed and accelerated innovation
13. Supersmartphones, exoskeletons and wearable systems
14. Memristors and other significant computing and electronic improvements
The Mundane Singularity still has a normal adoption and deployment cycle, so the impact will increase over time: more robots in 2020, and still more in 2025 and 2030.
1. Pro-growth Policies and aggressive adoption and deployment of best practices
There are great gains to be made just by replicating best practices and adopting and deploying mature existing technology. The world economy could be doubled, and life expectancy could reach 90-100 for the entire world.
Air pollution remains a major problem because the world has not moved on from fossil fuel technologies that are over 100 years old. This also harms public health, with indoor and outdoor air pollution together causing 7 million premature deaths.
The World Bank is pushing efforts to eliminate extreme poverty. To end extreme poverty, the ranks of the poorest – those earning less than $1.25 a day – will have to decrease by 50 million people each year until 2030. This means that 1 million people each week will have to lift themselves out of poverty for the next 16 years. There has been progress over the last twenty years in halving the percentage of people in extreme poverty, mainly from the rise of China’s economy. Eliminating extreme poverty is part of improving public health for the world’s poorest 2 billion. Extreme poverty goes together with bad nutrition and bad health, and also causes reduced intelligence through stunting.
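The quoted rate is easy to sanity-check: 50 million people a year is roughly a million a week, and sustained for 16 years it would lift 800 million people out of extreme poverty.

```python
# Sanity check of the poverty-reduction arithmetic quoted above.
people_per_year = 50_000_000
years = 16
weeks_per_year = 52

print(round(people_per_year / weeks_per_year))  # 961538, i.e. ~1 million/week
print(people_per_year * years)                  # 800000000 over 16 years
```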
The developed world can also achieve large gains by spreading best practices.
A new report from the McKinsey Global Institute (MGI), China’s digital transformation: The Internet’s impact on productivity and growth, projects that new Internet applications could fuel some 7 to 22 percent of China’s incremental GDP growth through 2025, depending on the rate of adoption. That translates into 4 trillion to 14 trillion renminbi in annual GDP in 2025.
General Electric has a vision for the Industrial Internet [sensors and analytics embedded into all aspects of industrial processes] which could boost average incomes by 25 to 40 percent over the next 20 years and lift growth back to levels not seen since the late 1990s. If the rest of the world achieved half of the U.S. productivity gains, the Industrial Internet could add from $10 to $15 trillion to global GDP – the size of today’s U.S. economy – over the same period. In November, 2012, GE announced it would invest $1.5 billion in efforts to fine-tune its machines’ performance and capture big efficiency gains by connecting them to its enterprise software and to the wider Internet. GE thinks that cheaper computing power and sensors are now poised to usher in a new era of big data for industry. Jeff Immelt, GE’s CEO, has called the idea a revolution, and the company’s top economist has suggested it could help increase worker productivity by as much as 1.5 percent a year.
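GE's numbers are roughly self-consistent: 1.5 percent productivity growth compounded over 20 years lands within the quoted 25 to 40 percent income-gain range.

```python
# Compound 1.5% annual productivity growth over 20 years.
growth = 1.015 ** 20 - 1
print(f"{growth:.1%}")  # 34.7%
```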
In 2011, McKinsey talked about policies to get the economic growth of the USA up to 3.5%-5% per year. These policies can also be adapted and applied to other countries.
* adopt best practices systematically across industries (and across countries)
* adopt the next wave of innovation (like RFID for end-to-end supply chains)
* adopt practices for faster response to customer needs
* Drive productivity gains in public and regulated sectors (20% of the economy and 5-15% productivity gap with private sector)
* Reinvigorate the innovation economy (data-driven business decisions, cloud computing, application of advances in biology and life sciences)
* Develop the talent pool to match the economy of the future and harness full capabilities of population. [This also involves transforming education and accelerating innovation]
* Build 21st century infrastructure [this is also talked about in Energy efficiency, urbanization sections and hyperbroadband]
* Enhance the competitiveness of business and regulatory environment
* Embrace the energy productivity challenge
* Harness the regional and local capabilities to boost growth and productivity
Also: increased availability and speed of broadband.
Better broadband boosts GDP.
I think super wireless and fiber broadband that was truly always available (i.e. a terabit per second everywhere) would make industrial and consumer robotics and sensor revolutions far easier.
Energy Revolution – Battery Singularity
Lithium-ion batteries have a fifteen-year history of exponential price reduction. Between 1991 and 2005, the capacity that could be bought with $100 went up by a factor of 11, and the trend continues to the present day. Lithium-ion could get even cheaper, if only from economies of scale from factories ten times larger, like Elon Musk's planned Gigafactory. Lithium-sulfur batteries are getting close to commercialization. They have the potential to drive costs to about $60 per kWh, which would be about $5,000 for a Tesla Model S battery pack.
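The pack figure follows from simple arithmetic, assuming the 85 kWh Tesla Model S pack (the pack size is my assumption, not stated above):

```python
# Lithium-sulfur cost target applied to an assumed 85 kWh pack.
cost_per_kwh = 60   # dollars, the potential lithium-sulfur cost
pack_kwh = 85       # assumed Tesla Model S pack size
print(cost_per_kwh * pack_kwh)  # 5100, i.e. "about $5000"
```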
China’s current long-term plan is to use better fast-neutron reactors (China’s own designs or Russian versions; Russia has 600 MWe and 800 MWe fast-neutron reactors in commercial operation) and pyroprocessing.
Molten salt fission reactors – like Terrestrial Energy’s Integral Molten Salt Reactor (IMSR) – would achieve a far deeper burn (using more of the fuel in one pass) and leave less waste for handling or reprocessing. I think Terrestrial Energy has the best economics and business model for a commercial molten salt reactor. The IMSR could get down to 0.86 cents per kWh. I think a few hundred will be used in Canada’s oil sands.
China’s supercritical water reactor could cut costs in half and is an evolutionary design step from pressurized water reactors, mostly leveraging the pressurized water reactor supply chain.
A startup company called Optalysys is trying to invent a fully optical computer aimed at many of the same tasks for which GPUs are currently used. Remarkably, Optalysys claims it can create an optical-solver supercomputer reaching an astonishing 17 exaFLOPS by 2020.
A 340 gigaflops proof-of-concept model is slated for launch in January 2015, sufficient to analyze large data sets, and produce complex model simulations in a laboratory environment, according to the company. Unlike current supercomputers, which still use what are essentially serial processors, the Optalysys Optical Processor takes advantage of the properties of light to perform the same computations in parallel and at the speed of light.
The company is developing two products: a ‘Big Data’ analysis system and an Optical Solver Supercomputer, both on track for a 2017 launch.
The analysis unit works in tandem with a traditional supercomputer. Initial models will start at 1.32 petaflops and will ramp up to 300 petaflops by 2020.
The Optalysys Optical Solver Supercomputer will initially offer 9 petaflops of compute power, increasing to 17.1 exaflops by 2020.
Perhaps the most impressive trait of all is the reduced energy footprint. Power remains one of the foremost barriers to reaching exascale with a traditional silicon processor approach, but these optical computers are said to need only a standard mains supply. Estimated running cost: just £2,100 per year (US$3,500).
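It is worth noting how large a jump the Optalysys roadmap implies, from the proof-of-concept model to the 2020 goal:

```python
# Scale of the claimed Optalysys ramp-up.
poc = 340e9       # 340 gigaflops proof-of-concept (January 2015)
target = 17.1e18  # 17.1 exaflops goal (2020)
print(f"{target / poc:,.0f}x")  # ~50 million-fold in about five years
```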
The DARPA 2015 budget (page 192) reported that a 1-million-neuron chip was developed in 2013. This was announced in the last week by IBM, with the research published in the journal Science.
In 2015 the goal [stated in 2010] is a prototype chip system simulating 10 billion neurons connected via 1 trillion synapses. The device must use 1 kilowatt or less (about what a space heater uses) and take up less than 2 liters in volume. 100 of the systems would have 1 trillion neurons and 100 trillion synapses and would be about the complexity of the human brain.
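The ensemble arithmetic checks out: 100 systems at the 2015 target spec give the quoted totals, at a combined power budget of about 100 kW.

```python
# Scale arithmetic for the stated 2015 neuromorphic goal.
neurons_per_system = 10 * 10**9        # 10 billion neurons
synapses_per_system = 10**12           # 1 trillion synapses
systems = 100

print(systems * neurons_per_system)    # 1 trillion neurons total
print(systems * synapses_per_system)   # 100 trillion synapses total
print(systems * 1, "kW")               # 100 kW at 1 kW per system
```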
In 2014, IBM should have integrated the board with 16 chips into a larger rack with 4 billion neurons using 4 kilowatts of power. IBM would need another iteration or two of chip design to get to about triple the density with four times lower power usage.
In 2011, IBM research suggested that a full-scale model of the human brain—which has 20 billion neurons connected by about 200 trillion synapses—could be reached by 2019, given enough processing power. This is still on track. It is a hardware model. This does not indicate the actual intelligence that would be in the system. It also does not specify the quality of the neurons and synapses that are part of the system. However, it is one hundred times more energy efficient for various pattern recognition applications. Still being at human brain scale would be interesting and it would be interesting to see what could be possible and what will be learned. Refinement to better neurons and synapses could progress in the 2020s.
Dharmendra S Modha envisions augmenting their neurosynaptic cores with synaptic plasticity to create a new generation of field–adaptable neurosynaptic computers capable of online learning.
The IBM SyNAPSE team's hope is that it will become an integral component of IBM Watson group offerings.
There is also a nearly US$2 billion European brain computing project, which should advance the understanding needed for brain emulation and for better neuron and synapse components.
Dwave has had a 1000-qubit quantum computer in its lab since the end of 2013 and will release a commercial version of the 1000-qubit system this year. Dwave Systems has a substantial line of government and commercial customers beyond its first three major customers, which probably means several dozen now or next year. Dwave will be releasing a 2000-qubit chip in 2015 and 4000 qubits in 2016. Research is still ongoing to determine the types of problems where quantum computing systems will be clearly superior to classical computing.
Memristors and HP’s machine
HP will start delivering memristor-based RAM DIMMs in 2016, and the Machine [HP’s name for its new memristor and photonic communication systems is “the Machine”] itself is expected to be available as a product by 2019. But within the next year, HP will release an open-source Machine OS software developer’s kit and start producing prototypes for collaboration with software vendors.
HP promises 6 times the computing while using 80 times less power and transforming the cloud into a distributed compute mesh. The Machine will scale from embedded systems to data centers and clouds.