The first nanofactories will probably be DNA/RNA/protein gadgets requiring thousands of steps by skilled scientists to coax them to build a new gadget (which will consist only of DNA/RNA/protein), or diamondoid gadgets in high vacuum requiring thousands of steps by skilled scientists to coax them to build a new gadget (which will consist only of diamondoid), or possibly even tungsten carbide gadgets doing EDM with nanotubes, requiring thousands of steps by skilled scientists to coax them to build a new gadget (which will consist only of tungsten carbide, the nanotubes having to be supplied from outside). Early nanofactories will be cranky and experimental, expensive, require expensive inputs, be able to produce only very limited products, and be very lucky to replicate themselves before they break down.
95% of the investment cost in building a nanofactory will go into building nanoscale machines, including an assembler; making them work reliably; and putting them into a cooperative, redundant architecture that works without letting in molecular contamination or letting molecular contamination escape its internal confines, and so on. These are all low-level problems. If they aren’t all pretty much solved, you are going to get precisely nowhere. The cost of building the first nanofactory will be immense. But if you have a basic nanoscale modular architecture that can reliably build itself up from the micron level to the centimeter level, then it’s not going to matter whether you are building 100 centimeter-scale units or a million. The salient scaling issues are at the nanoscale and microscale. By the time you’re at the macroscale, the system has to be completely automated, and hence likely inexpensive.
The number one expense in any product comes from human input, attention, and craftsmanship on a per-unit basis: the less you need, the cheaper it is. Desktop nanofactories will need to be almost completely automated, or they wouldn’t exist in the first place. You cannot micromanage each of 10^17 fabrication events and expect to leave the workbench any time in this geologic eon.
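A rough back-of-the-envelope check on that claim (the one-second-per-event figure is my own illustrative assumption, not from the text):

```python
# Back-of-the-envelope: supervising 10^17 fabrication events by hand.
# Assumption (illustrative only): one second of human attention per event.
events = 10 ** 17
seconds_per_year = 365.25 * 24 * 3600      # ~3.16e7 seconds in a year
years_of_attention = events / seconds_per_year

print(f"{years_of_attention:.2e} years")   # on the order of 3e9 years
```

Three billion years of continuous attention is indeed a geologic eon, which is why anything short of near-total automation is a non-starter.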
So there is general agreement that current trends point toward specialized “potentially buggy multi-step mainframe” assembly systems built from micro- and nano-scale components.
How useful will the specialized kluge systems be?
How long will it take to go from “almost” getting things to work right to reliably and cheaply doing end-to-end molecular manufacturing?
Once you start bootstrapping up the capability ladder, how fast can you proceed?
Chris Phoenix, at the Center for Responsible Nanotechnology, has recently been considering issues around a “fast-takeoff”.
The starting point is being able to perform atomically precise chemistry at a reasonable rate at all, beyond the proof-of-concept handful of reactions where we are now.
In some processes, it may be relatively easy to build a 100-atom part (of which maybe 98% will be perfect), test and throw out the defective 2%, then stick the perfect parts together to make a perfect meta-part.
In other processes, an error will not only destroy the workpiece, but also the tool. So if a 10,000-atom tool is destroyed by each error, then the success rate needs to be 99.99% or better.
In a multi-stage process, each stage may have a different error rate for a different reason, and need different error handling.
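A minimal numeric sketch of how those two error regimes differ (the per-operation error rates below are assumed purely for illustration):

```python
def part_yield(error_rate: float, operations: int) -> float:
    """Probability a part survives `operations` placement steps intact,
    assuming independent errors at `error_rate` per step."""
    return (1.0 - error_rate) ** operations

# Regime 1: cheap parts, test-and-discard. A 100-atom part built at a
# 0.02% per-step error rate yields about 98% good parts; discard the rest.
print(part_yield(0.0002, 100))   # ~0.98

# Regime 2: an error destroys the 10,000-atom tool itself. The expected
# number of operations before the first failure is roughly 1/error_rate,
# so a 99.99% per-step success rate gives only ~10,000 operations per
# tool -- barely enough for the tool to rebuild its own replacement.
error_rate = 1e-4
expected_ops_before_failure = (1 - error_rate) / error_rate
print(expected_ops_before_failure)   # ~9999
```

The contrast is the point: in the first regime a 2% error rate is a minor tax, while in the second the same arithmetic makes anything worse than 99.99% success self-extinguishing.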
Tools that can only be built by molecular manufacturing may reduce the error rate drastically. For example, sorting rotor cascades can give you any purity you need, and atomically perfect sliding seals can exclude all contaminants. In that case, it may be a quick step from just barely good enough, to so good you don’t even have to think about it.
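To see why a sorting cascade can deliver “any purity you need,” note that the contaminant fraction falls geometrically with each stage. A quick sketch, where the 1%-pass-through-per-stage figure is an assumption for illustration, not a measured property of rotor cascades:

```python
def contaminant_fraction(initial: float, pass_through: float, stages: int) -> float:
    """Contaminant fraction remaining after a cascade of sorting stages,
    assuming each stage independently passes a fixed fraction of
    contaminant molecules."""
    return initial * pass_through ** stages

# Starting from 1% contamination, with each stage letting through
# only 1% of the remaining contaminants:
for k in range(1, 5):
    print(k, contaminant_fraction(0.01, 0.01, k))
# By stage 4 the contaminant fraction is around 1e-10.
```

Because each added stage multiplies purity rather than adding to it, going from “just barely good enough” to “don’t even have to think about it” may cost only a few extra stages.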
A more general argument is that once you have general-purpose molecular manufacturing, you can probably build improved versions of your tools right away. So… although I can’t prove that any given molecular manufacturing pathway will have a fast takeoff thanks to error rates crossing a threshold of significance, it does seem pretty likely.
This follows a general trend of argument: the difference between unfeasible and adequate is generally bigger than the difference between adequate and excellent. Feed “excellent” into an exponential growth equation, and you get a fast takeoff.
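What feeding “excellent” into exponential growth looks like in miniature (both doubling times are assumed, purely to contrast adequate with excellent):

```python
def factories_after(days: float, doubling_days: float) -> float:
    """Number of self-replicating nanofactories after `days`, starting
    from one, assuming a fixed doubling time and no failures."""
    return 2.0 ** (days / doubling_days)

# "Adequate": a factory that barely replicates itself, doubling every 30 days.
print(factories_after(365, 30))   # a few thousand after a year

# "Excellent": doubling daily.
print(factories_after(30, 1))     # over a billion in a month
```

A modest improvement in the base rate produces a qualitative difference in outcome, which is the mechanical meaning of “fast takeoff” here.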
This does not mean that the difference between adequate and excellent takes less than one year, or that the transition is perfectly smooth. Also, many early scenarios for a fast takeoff assumed more pre-design work (particularly system design work), so that once we reached “adequate” we could rapidly roll out pre-planned upgrade strategies. That pre-design is not happening yet, but it will likely happen during the transition from “clearly feasible” through “almost adequate” to “barely adequate” and beyond.
There are industries and product lines already using nanopatterned surfaces and nanoparticles that could readily take “almost adequate” nanofactory technology and run with it: various kinds of computer disks and memory, and certain communication and computing components. Synthetic biology, medicine, gene therapy, stem cells, microbiology, and other areas would also get a big boost from almost-nanofactory-ready technology.
How useful will the specialized kluge systems be? I would say very useful and very profitable. Beyond the current few billion dollars per year going into the loosely defined, mostly-chemistry research area of “nanotechnology,” money will become more sharply focused on real molecular manufacturing development, and one hundred times that money and effort will come from the big industries that benefit from each small step from feasible to barely adequate to excellent. However, the funding transition from clearly feasible to barely adequate could be slower; we have seen that many people are willing to deny “clearly feasible.” We are likely to continue at our current pace, with a slight increase, into the specialized kluge phase that J. Storrs Hall described. Some companies will have to make whacks of dough and make splashy impacts to jar people awake to the fact that “the game is finally on and molecular manufacturing will be leaving the station.”
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.