This achievement is a milestone in framework-directed self-assembly of composite nanosystems, a line of development that I’ve argued is a strategic direction in atomically precise fabrication — useful in itself, and as part of a technology platform for further progress. In self-assembling molecular machine systems, carbon nanotubes could serve as structural components that are orders of magnitude stiffer than biomolecules, and can also serve as moving parts, including low-friction linear and rotary bearings.
The work is scalable to billions of units self-assembled in parallel.
2. J Storrs Hall discusses the AI takeover
There are at least four stages of intelligence levels that AI will have to get through to reach the take-over-the-world level. In Beyond AI I referred to them as hypohuman, diahuman, epihuman, and hyperhuman; but just for fun let’s use fake species names:
Robo insectis: rote, mechanical gadgets (or thinkers) with hand-coded skills, such as Roomba or industrial robots or automated call-center systems or dictation programs.
Robo habilis: intelligence at the level of Rosie the housemaid robot, able to handle service-level jobs in the real world, but not a rocket scientist.
Robo sapiens: up to and including rocket scientists, AI researchers, corporate executives, any human capability.
Robo googolis: a collection of top R. sapiens wired together in a box running at accelerated speed, equivalent to, say, Google (the company and the search engine together).
First point: One R. googolis can’t take over the world, any more than Google could. You’d have to get to the next stage (R. unclesammus).
3. J Storrs Hall asks do we need Friendly AI?
What we should be spending our time on is figuring out how to build competent AI.
First principle of competent AI design: Build a machine that understands what you want. The paperclip maximizer is a study in amazing contrasts — presumably an intelligence powerful enough to take over the world would be capable of understanding human motivations even better than we do, so as to manipulate us effectively. Yet it’s built with a complete cognitive deficit of appropriate motivations, goals, and values for itself. Incompetent.
Second principle: build machines that know their limitations. This basically means that it should confine its activities to those areas where it does understand the effects of its actions.
But in order to do that, we first have to be able to build a machine that can actually understand something — anything — in the full human-level meaning of understanding.
4. J Storrs Hall proposes a Robo Habilis Test
One of the goals of the AGI Roadmap is to chart paths to full human intelligence, and one of the paths might follow the one that evolution took. The Wozniak Test, i.e. being able to make coffee in any randomly-chosen home, is a case of tool use competence. It is a special case of what we might call the Nilsson Test, as outlined in a paper in 2005 by Nils Nilsson, one of the leading figures in AI:
I suggest we replace the Turing test by something I will call the “employment test.” To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines.
J Storrs Hall defends the Robo Habilis test by indicating that it is necessary to test intelligence, and also to test what we think of as the "simple things": decades ago, AI researchers made the mistake of confusing what was easy with what was hard.
5. Drexler notes how press releases and articles overstate the capabilities of a recent MIT quantum algorithm. The algorithm does not provide solutions to systems of linear equations: it outputs scalars, not vectors, and this is not at all the same thing.
The algorithm delivers a scalar measurement on a solution vector, which can be a function of the entire vector or, as a special case, any one of the trillion vector components.
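The scalar-versus-vector distinction can be made concrete with a small classical sketch (this is an illustration of the distinction, not the quantum algorithm itself; the matrices `A`, `b`, and the measurement operator `M` are hypothetical values chosen for the example):

```python
import numpy as np

# A classical linear solver returns the FULL solution vector x of A x = b:
# one number per component, so even writing it out costs O(N).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)  # the entire vector

# The quantum algorithm instead yields a single SCALAR derived from x,
# such as the quadratic form x^T M x for some measurement operator M.
# As a special case, M can pick out one component:
M = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # this choice of M extracts |x_0|^2

scalar = x @ M @ x  # one number, not the vector

print(x)       # the full solution vector
print(scalar)  # a single scalar measurement on it
```

Reading off a scalar like `x @ M @ x` is a far weaker output than the vector `x` itself, which is exactly the gap Drexler points out between the press coverage and the algorithm's actual capability.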