This is part of a series of articles, posted at Accelerating Future, CRNano (Center for Responsible Nanotechnology), Foresight and this site (nextbigfuture), that consider how nanofactories will emerge and what their impact will be.
From the outside, the line from here to nanofactories goes through 3D printers. To the user, a nanofactory as described in the technical discussions above is just a 3D printer that can produce a wider variety of products than the ones now available. My guess is that they start being called nanofactories when the process includes nanoscale printing (as in embedded circuitry, or surface nantennas for color effects and photovoltaics). That can be done now, so it should not be long before someone includes it in a solid-freeform-fabrication process.
So: why should we expect a sudden jump in 3D printer capabilities? We shouldn’t. They will continue getting cheaper for the same capabilities, and more capable for the same price; but on the same smooth growth curve we’ve been seeing all along.
On the other end of the scale, a billion-dollar factory will always be able to out-produce a million-dollar factory, and that will always outproduce a thousand-dollar countertop machine. The fact that the workstation I’m using to write this essay could out-calculate all the Cray-1s ever built doesn’t mean that they quit building supercomputers: it means that for the same money they build unimaginably monsterific humongulated gigantoid supercomputers. The same will be true of factories.
J Storrs Hall has a 1998 paper on nanofactory architecture that introduces the concept of a “Zeno Factory,” the end product of iterative design bootstrapping.
Iterative Design Bootstrap – J Storrs Hall
This is the optimal bootstrapping pathway.
Given a single “hand built” simple system, we could:
* Build a complex system immediately atom by atom. This would take 2×10^10 seconds (over 600 years).
* Reproduce simple systems until, 69 days later, one has macroscopic capability.
Simply assume that building the first complex system is the goal, and employ the formula above; it gives an optimum of 17 generations. This results in a first, reproductive, phase of 472 hours, resulting in 131,072 assemblers, which build the complex system in a second phase of 42 hours, for a total of 21 and a half days.
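Hall's numbers can be reproduced with a quick sketch. Assuming a generation time of about 10^5 seconds (inferred from the 472-hour reproductive phase he quotes) and the 2×10^10-second single-assembler build time given above, minimizing total time over the number of replication generations recovers the 17-generation, roughly 21.5-day optimum:

```python
# Sketch of the replicate-then-build optimization (a reconstruction;
# the ~1e5 s generation time is an assumption inferred from the figures above).
T_BUILD = 2e10   # seconds for ONE simple assembler to build the complex system
T_GEN   = 1e5    # assumed seconds per replication generation (doubling step)

def total_time(g):
    """Total time: g generations of doubling, then 2**g assemblers build in parallel."""
    return g * T_GEN + T_BUILD / 2**g

best_g = min(range(1, 40), key=total_time)
print(best_g)                      # 17 generations -> 131,072 assemblers
print(total_time(best_g) / 86400)  # ~21.4 days total
```

Trying one more or one fewer generation gives a longer total, since the extra doubling either wastes replication time or leaves too few assemblers for the build phase.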
This has implications with respect to an optimal overall approach to bootstrapping replicators. Suppose we have a series of designs, each some small factor, say 5.8, more complex, and some small factor, say 2, faster to replicate, than the previous. Then we optimally run each design 2 generations and build one unit of the next design. As long as we have a series of new designs whose size and speed improve at this rate, we can build the entire series in a fixed, constant amount of time no matter how many designs are in the series. (It’s the sum of an arbitrarily long series of exponentially decreasing terms. Perhaps we should call such a scheme “Zeno’s Factory”.)
For example, with the appropriate series of designs starting from the simple system above, the asymptotic limit is a week and a half. (About 100 hours for design 1 to build design 2, followed by 50 hours for design 2 to build design 3, plus 25 hours for design 3 to build design 4, etc.) Note that this sequence runs through systems with a generation time similar to the “complex system” at about the 25th successive design. Attempts to push the sequence much further will founder on physical constraints, such as the ability to provide materials, energy, and instructions in a timely fashion. Well before then we will run into the problem of not having the requisite designs. Since all the designs need to be ready essentially at once, construction time is to all intents and purposes limited by the time it takes to design all the replicators in the series.
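The arithmetic behind that asymptotic limit is a geometric series: if each design halves the time to build the next, the total converges to twice the first term no matter how many designs follow. A minimal sketch, taking the 100-hour first step quoted above:

```python
# "Zeno's Factory" arithmetic: each successive design halves the build time
# of the next, so total build time converges (100 + 50 + 25 + ... -> 200 h).
FIRST_STEP_HOURS = 100.0  # design 1 builds design 2 in ~100 h (figure from the text)

def zeno_total(n_designs):
    """Total hours for a chain of n successively faster designs."""
    return sum(FIRST_STEP_HOURS / 2**k for k in range(n_designs))

print(zeno_total(25))       # ~200 hours, essentially the limit already
print(zeno_total(25) / 24)  # ~8.3 days: "a week and a half"
```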
Systems Analysis of Self-replicating Manufacturing
J Storrs Hall has done a systems analysis for self-replicating manufacturing systems. There are several results of this analysis that have implications for their design. First, replicators do not benefit from raw internal parallelism but do benefit from concurrency of effort involving specialization and pipelining. Given the enormous range of possible replicator designs, the optimal pathway from a given (microscopic) replicator to a given (macroscopic) product generally involves a series of increasingly complex replicators. The optimal procedure for replicators of any fixed design to build a given product is to replicate until the quantity of replicators is 69% of the quantity of desired product, and then divert to building the product.
Therefore, it is desirable to design a replicator with the understanding that it will reproduce itself only for a few generations, and then build something else. Furthermore, it is crucial to design replicators that can cooperate in the construction of objects larger and more complex than themselves. I have outlined a system that embodies these desiderata.
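The 69% figure is ln(2) ≈ 0.693. A rough sketch of where it comes from, under the simplifying assumption (mine, not spelled out in the text) that replicating and building one product unit take the same unit of time: with R replicators, the total is log2(R) doubling steps plus Q/R of parallel building, and the minimum falls at R = Q·ln(2):

```python
import math

# Why "replicate until replicators = 69% of product quantity":
# assume one time unit per doubling and per product unit built (a simplification).
Q = 100_000  # desired product quantity (arbitrary example)

def total_time(r):
    """log2(r) generations of doubling, then Q items split across r builders."""
    return math.log2(r) + Q / r

best_r = min(range(1, Q), key=total_time)
print(best_r / Q)  # ~0.693, i.e. ln(2): stop replicating at ~69% of Q
```

The calculus version: d/dR [log2(R) + Q/R] = 1/(R ln 2) − Q/R² = 0 gives R = Q·ln 2.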
Finally, considerations encountered in the design of the present system have convinced the author that, given a precursor or bootstrap technology (e.g. self-assembly from biomolecules) that can produce usable parts, the first thing to be built from the parts should be a parts-assembly mechanism and not a parts-fabrication mechanism. A parts-assembly robot constitutes a self-replicating kernel in an environment of parts, and a growth path that maintains the self-replicating system property (i.e. that each vertex of its diagram is the terminus of an endogenous capital edge) appears to work best.
The most important constraint is the availability of designs for the successive systems.
J Storrs Hall argues that there will not be a lengthy period in which perfected desktop nanofactory capability exists without transforming everything around it:
You’re within a month of replacing the entire infrastructure of the Earth, every last farmer’s hut and the plants and animals grown for food as well as the cars, trucks, roads, and cities, with one vast, integrated machine. Luxury apartment, robot servants, personal aircraft, you name it, for everyone (and all still a tiny fraction of the capabilities of the overall machine). Ask for anything, and it will simply ooze out of the nearest wall, which will of course be a solid slab of productive nanomachinery (or Utility Fog). To recycle anything, just drop it on the floor.
This would also mean that the question of who has access to the actual machines and factories becomes less vital. With widespread specialized production systems, access to high-quality products would be mostly unconstrained. The harder issue is how things play out in the intermediate phase, when the technology does not yet work so well but already has a big impact.
We can project forward from:
* Flexible electronics production
* Advanced rapid manufacturing
* DNA nanotechnology, guided self-assembly and other developing methods
to see how close we can get to the start of useful bootstrapping.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.