April 03, 2009

Mars Has a Layer of Ice

Formed sometime between January and September 2008, this fresh crater has dredged up barely buried water ice and splashed it onto the Martian surface. The HiRISE camera aboard NASA's Mars Reconnaissance Orbiter recorded this colour close-up image on 1 November 2008. The scene is about 30 metres across. (Image: NASA/JPL/University of Arizona)

Mars has a layer of ice as shallow as a few tens of centimeters below the surface. If the Viking lander had been able to dig deeper, it would have found the ice in the 1970s. Analyses of recent impact craters show the exposed ice, which then sublimates into the atmosphere.

The water ice seems to have a high salt content based on the analysis.

BBC News has more coverage:

Over tens to hundreds of millions of years, the ice has been transported to lower latitudes. We have found evidence for huge tropical mountain glaciers where the sides of big volcanoes at the equator have these huge deposits - 170,000 sq km - on their north-west flanks that are caused by big changes in Mars' obliquity.

On Earth, obliquity variations actually caused the Ice Ages that we experienced over the last tens of thousands of years. But changes in Mars' obliquity have been significantly greater. So we're seeing evidence for ice having been transported all the way down to the equator.

Some of it has gone to make the polar caps. Some amount, certainly, has left the planet through dissociation in the upper atmosphere. But what we're finding is that a significant amount may have been sequestered in glacial deposits.

Data from the Sharad (SHAllow RADar) instrument on the Mars Reconnaissance Orbiter document that some of these glacier-like features, found at Mars' mid-latitudes, have a significant volume of ice left below a surface of rock debris.


The expected depth to ground ice is close to 84 cm while the crater depth is 65-70 cm. This particular model uses an average water vapor concentration of 20 pr μm, and these new data are so far consistent with this value or perhaps one slightly higher. This contrasts with the current observations of average atmospheric water vapor of ~14 pr μm or ~10 pr μm. Thus the ground ice exposed here is probably in the process of retreat from a previously larger extent, perhaps due to recent variations in the argument of perihelion.

New Scientist also has coverage.

The disappearing act might also be due in part to a coating of dust blown in from the atmosphere. Either way, notes HiRISE investigator Shane Byrne of the University of Arizona, the icy deposits had to be at least a couple of inches (several centimetres) thick, and they couldn't have been unearthed from more than a foot or two (0.3-0.6 m) down.

Byrne announced these findings on Friday at the Lunar and Planetary Science Conference in The Woodlands, Texas. He points out that prior surveys, particularly one done by the neutron spectrometer aboard NASA's Mars Odyssey orbiter, show that vast reservoirs of ice lie barely buried across most of the planet's polar and mid-latitude regions.

Trend Tracking and Projections

The first six weeks of technology developments in 2009 and the second six weeks have both seen a lot of big developments. One reader comment asked for trend tracking and projections to be included in the big-development highlight roundups.

That will be included as part of future highlight packages.

Here are the trends and projections for the first two highlight packages.

Graphene electronics, material applications and commercialization are happening faster than many expected.

Eric Drexler also covers the graphene nanotechnology progress.

Production volumes and the specifics of what is being produced, including quality and sizes, are key to tracking the progress of carbon nanotubes and graphene. It is mostly not a question of whether something can be done; it is whether it can be commercially developed, brought out of lab demonstrations, and scaled up to industrial quality, consistency and volume.

The target markets for graphene and carbon nanotubes will be places where superior properties are useful and enable something highly valuable that was not possible with cheaper alternatives, along with niche markets with lower volumes of material requirements.

Replacing and improving all of the electrical shielding in airplanes with carbon nanotubes formed into tape instead of copper wiring can currently save one third of the weight, and production volumes of the required form of carbon nanotubes are being ramped up. The retrofits and production can occur over the next 2-5 years. This will be discussed in detail in an upcoming post based on a phone interview with Nanocomp Technologies. Flat-out replacing copper in the electric power grid with carbon nanotubes is still many years away. That kind of bulk application needs many thousands to millions of tons of carbon nanotubes, while current production is less than a couple of hundred tons of carbon nanotubes that are mostly powder-like. Millimeter-long carbon nanotubes will perhaps be produced a few tons at a time next year. Many-centimeter-long carbon nanotubes are expensive lab curiosities at this point.

Carbon fiber, a cruder material from which many people have already bought sporting goods, is still only at about 52,000 tons/year in global production. Steel is over 1 billion tons per year. World copper production is about 18.5 million tons.

Carbon nanotubes and graphene do not need to replace existing materials ton for ton. Higher performance means you can use less to do the same thing. Also, you can mix the pure material into polymers to enhance properties; the final product might be only a few percent carbon nanotubes or graphene. Still, as common as carbon fiber products are at 52,000 tons/year, carbon fiber is not a major part of the power grid.

DNA nanotechnology, self-assembly, synthetic biology, synthetic life, quantum dots, and other nanotechnology seem to be on the verge of commercial breakthroughs to significant first markets.

There is a lot of interesting capability just above the atomically precise level. Control, science and capabilities at the 2-15 nanometer scale are looking to be very useful.

Something to keep a close eye on is the guided self-assembly of 2-nanometer-precise structures for computer chips. This is claimed to be easily commercializable. Whether a factory can produce tens of thousands of wafers per month is what needs to be tracked if we are to see a major replacement of existing silicon technology.

The scale of our current technological society is why it takes time for superior technology to make a big impact. How long has it taken flash memory to get to where it is now? Hard drives also kept improving at the same time.

Stem cells, gene therapy, regeneration, tissue engineering, super-cheap biometrics and diagnosis are all areas where the science is advancing rapidly and I expect many more announcements of development of lab capabilities. There is a lag in implementation and deployment, which is made even longer because of regulations and societal resistance. Super-cheap biometric marker analysis and diagnosis will likely have a major impact first because the approval and deployment processes are easier. Stem cell therapies, gene therapies and other advanced medicine will likely be tested and tried first in overseas markets.

I am becoming more and more confident that breakthroughs in energy technology will be happening. Energy technology game-changers would be absolutely certain with a greater shift from incremental refinement to a willingness to build more, test more and fail more, instead of debate, papers and lab studies. Technological progress in aircraft was purposely slowed down by Robert McNamara because the X-plane projects were complicating arms control negotiations by advancing capabilities too quickly.

It helps no one and makes no one safer to take 20-40 years to build or fix a bridge or skyscraper. Do 5-10 years of regulatory work help make nuclear plants safer? Will the tons of paperwork be used as extra radiation shielding? China is only building as fast as the USA used to build. Over 99% of what the US built in the old days has done just fine in terms of safety over the decades. Design improvements and other advances could be incorporated for added safety without the delays.

Where the future of key technologies starts to have big impact will be greatly determined by regulations and policies. Clearly the economic incentives will force some of those restrictions to be lifted.

The tree of market niches for each area of technology needs to be known in terms of size and the requirements (including marketing and regulatory requirements, and the willingness and cost to switch) to displace the current technology. This will determine which areas are conquered first and the impact they will have.

Microbes to Convert Coal to Methane and Algae Fuel are Both Close to Industrial Scaleup

Microbes to Convert Coal to Methane Scale Up Announcement Soon
At Synthetic Genomics, the San Diego startup he co-founded in 2005, Venter said scientists are using such techniques to create new microbial species with enhanced and even unique capabilities. For example, he said Synthetic Genomics has created new species of microbes that grow on the surface of coal particles—and produce methane by consuming the coal.

He displayed a black-and-white image of a piece of coal that appeared to be carpeted with a mossy substance, saying it’s an organism that eats coal and makes a cleaner-burning fuel. “We and BP think we can scale this up substantially,” Venter said, referring to the global energy giant that became a development partner and investor in Synthetic Genomics two years ago. “We’re not too far away from making an announcement to scale this up.”

Venter says the team at Synthetic Genomics also has created new types of cells that consume carbon dioxide and hydrogen and make methane and long chains of organic molecules with as many as 18 carbon atoms “in a pure form.”

Algae Biofuel Prospects
Sapphire Energy’s algae fuel process has been used successfully to make the three most important fuels, gasoline, diesel, and jet fuel, Pyle says, and all three products have been independently certified to meet fuel standards set by the American Society for Testing and Materials. In September 2008, Sapphire Energy raised $100 million in a second venture round from Bill Gates, Arch Venture Partners and others.

Biofuels technologies appear capable of someday producing 200,000 barrels of jet fuel a day—enough to supply the needs of the U.S. Air Force—from algae grown on less than 800,000 acres. [That is 10-11 gallons per acre per day, or 3,650-4,000 gallons per acre per year.] “It’s not crazy to imagine that by the year 2050 we (the United States) could become an oil exporter again,” Briggs said. 80 million acres, about 3% of total land in the United States, would replace the current US oil demand. Other estimates are 1-2% or less as the processes are improved. Light pipes allow for deeper algae ponds and over ten times more efficient land usage.

H/T and references at Alfin

There are algae fuel companies that are targeting near term (5 years or less) production costs of $1.50/gallon of algae produced biofuel.

The tantalizing quality of algae is that some algal species contain up to 40 percent lipids by weight. And therefore, according to some sources, an acre of algae could yield 5,000 to 10,000 gallons of oil a year, making algae far more productive than soy (50 gallons per acre), rapeseed (110 to 145 gallons), mustard (140 gallons), jatropha (175 gallons), palm (650 gallons) or cellulosic ethanol from poplars (2,700 gallons).

More optimistic data from less informed people indicate the theoretical biodiesel yield from microalgae is in the range of 11,000 to 20,000 gallons per acre per year.

But according to Dr. John Benemann, a cantankerous algae consultant whose research is widely cited in the field, the realistic potential production level (despite claims to the contrary) is about 2,000 gallons of algal oil per acre per year.

At 20,000 gallons per acre per year, it would take 16 million acres to replace current US oil demand. At 10,000 gallons per acre per year, it would take 32 million acres.
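The land-use arithmetic above can be checked in a few lines. This is a minimal sketch that assumes US oil demand of roughly 20 million barrels/day (a circa-2009 figure not stated in the article) and 42 gallons per barrel:

```python
# Sanity-check of the algae land-use figures.
# Assumption (not from the article): US oil demand ~20 million barrels/day.
GAL_PER_BARREL = 42

# Jet-fuel example: 200,000 barrels/day grown on 800,000 acres
daily_gal_per_acre = 200_000 * GAL_PER_BARREL / 800_000
annual_gal_per_acre = daily_gal_per_acre * 365
print(f"{daily_gal_per_acre:.1f} gal/acre/day, {annual_gal_per_acre:,.0f} gal/acre/year")
# ~10.5 gal/acre/day and ~3,800 gal/acre/year, matching the bracketed figures

# Acres needed to replace total US oil demand at a given algal yield
us_demand_gal_per_year = 20e6 * GAL_PER_BARREL * 365
for yield_gal_acre_yr in (20_000, 10_000, 2_000):
    acres = us_demand_gal_per_year / yield_gal_acre_yr
    print(f"{yield_gal_acre_yr:>6} gal/acre/yr -> {acres/1e6:.0f} million acres")
# roughly 15, 31 and 153 million acres respectively
```

The 20,000 and 10,000 gal/acre/yr cases land close to the 16 and 32 million acre figures quoted above; Benemann's more conservative 2,000 gal/acre/yr estimate implies roughly ten times more land.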

Biofuels Compared and Land Use

Here is a comparison of some biofuel sources:

Unmodified Miscanthus has been found to be 2.5 times more efficient than corn and switchgrass.
Growing Miscanthus on 9.3% of cropland-equivalent land would offset 20% of fuel use; 23.25% would offset 50%. Genetic modifications can boost Miscanthus efficiency by 300%, so modified Miscanthus would need only about 8% of the land to offset 50% of fuel.
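As a quick consistency check on those percentages (assuming, as the quoted numbers imply, that required land scales linearly with the fuel offset):

```python
# Miscanthus land-use scaling, assuming linear scaling of cropland with offset.
base_cropland_pct = 9.3                      # % of cropland to offset 20% of fuel
cropland_for_50 = base_cropland_pct * (50 / 20)
print(f"{cropland_for_50:.2f}% of cropland for a 50% offset")   # 23.25%

# A "300% efficiency boost" read as a 3x yield gain:
modified_cropland = cropland_for_50 / 3
print(f"{modified_cropland:.1f}% with modified Miscanthus")     # ~7.8%, i.e. the ~8% cited
```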

So algae and modified Miscanthus should be pushed for biofuels, with the other feedstocks as stopgaps.

The cropland argument against biofuels is not correct

Metamaterials for higher resolution ultrasound, sonar invisibility/camouflage and wide-band optics outside the Visible range

Schematic showing the experimental setup. The sample with PI/NI interface is composed of an array of differently designed Helmholtz resonators machined from an aluminum plate. Unit cells of each half part and the corresponding inductor–capacitor circuit analogy are shown in the insets.

Metamaterials are progressing to enable invisibility to sonar and to create superlenses for ultrasound. Superlenses can enable resolution ten to twenty times smaller than the wavelength being used, instead of the classic limit of half a wavelength. Typical ultrasound physics is compared to X-rays here, which lists resolutions at around one millimeter. Currently the sonic superlenses are just at the best classical limit of half a wavelength, but these proofs of concept are expected to be improved. Improved ultrasound resolution would enable simple and cheap higher-resolution scans of the brain and other organs for science and diagnostics.
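For a sense of scale, the diffraction-limited focus at the 60.5 kHz used in the experiment quoted below can be estimated from textbook sound speeds. The medium is an assumption here; these are generic values, not figures from the paper:

```python
# Rough scale of the resolutions discussed: the classical diffraction limit
# is about half the wavelength; a lambda/20 superlens would beat it tenfold.
# Sound speeds are generic textbook values (assumed, not from the paper).
FREQ_HZ = 60.5e3
for medium, speed_m_s in [("air", 343.0), ("water", 1480.0)]:
    wavelength_mm = speed_m_s / FREQ_HZ * 1e3
    print(f"{medium}: wavelength {wavelength_mm:.1f} mm, "
          f"half-wavelength focus {wavelength_mm / 2:.1f} mm, "
          f"lambda/20 superlens focus {wavelength_mm / 20:.2f} mm")
```

In air this gives a half-wavelength focus of a few millimeters; in water a couple of centimeters. Either way, a factor-of-ten-to-twenty superlens improvement is what would push ultrasound toward sub-millimeter imaging.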

Note: The whole field of metamaterials is relatively new, and several years ago many believed metamaterials to be impossible. They believed that negative indexes of refraction were impossible. They were wrong because we did not really understand, in detail, how light interacts with matter at small scales. Why is it then unreasonable to believe that we also have a very incomplete understanding of the details of what is going on with cold fusion and physical reactions at small scales? How can scientists who now admit that there is real science and possibly new physics being uncovered with cold fusion (low energy nuclear reactions, LENR) be so sure that it will not lead to anything important? By definition of "new physics," they do not know what it is doing or what could be done.

This is the first experimental demonstration of focusing ultrasound waves through a flat acoustic metamaterial lens composed of a planar network of subwavelength Helmholtz resonators. We observed a tight focus of half-wavelength width at 60.5 kHz by imaging a point source. This result is in excellent agreement with the numerical simulation by a transmission line model in which we derived the effective mass density and compressibility. This metamaterial lens also displays variable focal length at different frequencies. Our experiment shows the promise of designing compact and lightweight ultrasound imaging elements.

The resolution of 0.5wavelength was recorded by focusing the acoustic field of a point source. This is not sub diffraction imaging, but among the best achievable passive acoustic imaging elements. The unit cell of the acoustic network is only one eighth of the operating wavelength, making the lens in a compact size. Compared with conventional lenses, the flat thin slab lens takes advantages in that there is no need to manufacture the shapes of spherical curvatures and the focus position is insensitive to the offset of source along the axis. Also this negative index lens offers tunable focal length at different frequencies. More generally, this design approach may lead to novel strategies of acoustic cloak for camouflage under sonar.

Metamaterials are also now able to work in a wide band of optical frequencies, currently outside the visible range. If the same width of frequencies were shifted into the visible range, it would cover the entire visible range.

For the ultrasound metamaterial lenses, it was noted that a single PI/NI interface does not allow enough growth of evanescent fields to achieve sub-diffraction focusing, while a sandwich structure (two PI/NI interfaces) offers a better chance to overcome the diffraction limit.

April 02, 2009

Mass Incidents in China and the United States and Predicting Results

China's government tracks an official statistic of mass incidents.

China's Public Security Ministry reported 87,000 mass incidents in 2005, up 6.6 per cent over the number in 2004, and 50 per cent over the 2003 figure. The ministry has not released the latest figures.

Mass incidents - the Chinese government's term for riots, demonstrations and protests - should not be mistaken for attempts to "rebel against or overthrow the government", said Dr Wang Erping of the Chinese Academy of Sciences' Institute of Psychology.

The United States, Europe and other places also have mass protests. Here is a google map of Tea party tax protests. (H/T Instapundit and Freedomworks.org)


UPDATE: Bruce Bueno de Mesquita's Predictions on Iran from Feb 2009.

Just counting the number of incidents is not especially meaningful, since one incident might be a gathering of a handful of people while another could be 20,000 people burning things.

When do protests lead to change within the current system, and when does the system get changed? Are simple metrics able to give effective indications?

A useful prediction of whether movements will be effective, and to what degree, seems to require deeper analysis: the volume of people involved, their resources, goals and profiles, and detailed examination of movement leaders and actors in the establishment.

The methodology of mathematician Bruce Bueno de Mesquita seems to be the current state of the art.

In fact, the professor says that a computer model he built and has perfected over the last 25 years can predict the outcome of virtually any international conflict, provided the basic input is accurate. What’s more, his predictions are alarmingly specific. His fans include at least one current presidential hopeful, a gaggle of Fortune 500 companies, the CIA, and the Department of Defense. Naturally, there is also no shortage of people less fond of his work. “Some people think Bruce is the most brilliant foreign policy analyst there is,” says one colleague. “Others think he’s a quack.”

Bueno de Mesquita has made a slew of uncannily accurate predictions—more than 2,000, on subjects ranging from the terrorist threat to America to the peace process in Northern Ireland—that would seem to prove him right.

He is the chairman of New York University’s Department of Politics, a senior fellow at the Hoover Institution at Stanford, and the author of many weighty academic tomes.

To verify the accuracy of his model, the CIA set up a kind of forecasting face-off that pit predictions from his model against those of Langley’s more traditional in-house intelligence analysts and area specialists. “We tested Bueno de Mesquita’s model on scores of issues that were conducted in real time—that is, the forecasts were made before the events actually happened,” says Stanley Feder, a former high-level CIA analyst. “We found the model to be accurate 90 percent of the time,” he wrote. Another study evaluating Bueno de Mesquita’s real-time forecasts of 21 policy decisions in the European community concluded that “the probability that the predicted outcome was what indeed occurred was an astounding 97 percent.” What’s more, Bueno de Mesquita’s forecasts were much more detailed than those of the more traditional analysts. “The real issue is the specificity of the accuracy,” says Feder. “We found that DI (Directorate of National Intelligence) analyses, even when they were right, were vague compared to the model’s forecasts. To use an archery metaphor, if you hit the target, that’s great. But if you hit the bull’s eye—that’s amazing.”

How does Bueno de Mesquita do this? With mathematics. “You start with a set of assumptions, as you do with anything, but you do it in a formal, mathematical way,” he says. “You break them down as equations and work from there to see what follows logically from those assumptions.” The assumptions he’s talking about concern each actor’s motives. You configure those motives into equations that are, essentially, statements of logic based on a predictive theory of how people with those motives will behave. From there, you start building your mathematical model. You determine whether the predictive theory holds true by plugging in data, which are numbers derived from scales of preferences that you ascribe to each actor based on the various choices they face.
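To make that description concrete, here is a deliberately toy sketch of an actor-based forecast of this general flavor. This is not Bueno de Mesquita's actual model, whose game-theoretic machinery is far richer; the actors, scales and weights below are entirely hypothetical:

```python
# Toy actor-based forecast: each actor has a position on a 0-100 policy scale,
# plus influence (power) and salience (how much they care). A first-cut
# forecast is the influence*salience-weighted mean of positions.
# All names and numbers are hypothetical illustrations.

def weighted_forecast(actors):
    """Predicted outcome as the influence*salience-weighted mean position."""
    total_weight = sum(a["influence"] * a["salience"] for a in actors)
    return sum(a["position"] * a["influence"] * a["salience"]
               for a in actors) / total_weight

actors = [
    {"name": "hardliners", "position": 10, "influence": 0.6, "salience": 0.9},
    {"name": "reformers",  "position": 80, "influence": 0.4, "salience": 0.7},
    {"name": "military",   "position": 30, "influence": 0.8, "salience": 0.5},
]
print(f"forecast position: {weighted_forecast(actors):.1f}")
```

Even this crude weighted-mean version shows the key input requirement he describes: a numeric position, influence and salience for every actor, from which the outcome follows mechanically.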

A sample of Bruce Bueno de Mesquita’s wilder—and most accurate—predictions:

Forecasted the second Intifada and the death of the Mideast peace process, two years before it happened.

Defied Russia specialists by predicting who would succeed Brezhnev. “The model identified Andropov, who nobody at the time even considered a possibility,” he says.

Predicted that Daniel Ortega and the Sandinistas would be voted out of office in Nicaragua, two years before it happened.

Four months before Tiananmen Square, said China’s hardliners would crack down harshly on dissidents.

Predicted France’s hair’s-breadth passage of the European Union’s Maastricht Treaty.

Predicted the exact implementation of the 1998 Good Friday Agreement between Britain and the IRA.

Predicted China’s reclaiming of Hong Kong and the exact manner the handover would take place, 12 years before it happened.

Why Not Try to Find Technological Solutions For Problems?

* Why start with the assumption that no current, emerging or future technology can solve a particular class of problem?
* Why assume that no technology can be a significant part of a solution to, or reduction of, a problem?
* People say that it is Utopian, and imply it is impossible, to solve all of our current problems. Perhaps. But some people who are familiar with technology can try to propose new approaches.

There are many different kinds of technologies and creative approaches and rethinking of issues can change the dynamics of situations.

This site is trying to do several things:
1. Technological due diligence: Up to date status on technology and projects that can have big impact or be very useful. Provide full understanding of poorly understood technology and relevant history. Most people do not understand nuclear power, nuclear fuel/waste, nuclear weapons, molecular nanotechnology, cognitive enhancement etc... People are unaware of many existing and upcoming technological options and real choices and plans.

2. This site is trying to propose better development plans, policies and options. Ideally ones that can be acted upon by smaller and more responsive groups.

3. This site is researching and trying to communicate understanding of what the real risks and scenarios are.

If anything, people's attitudes and beliefs are too entrenched and do not change based on real facts, and people do not seek out full information on which to base their decisions.

Technology is often empowering. Technology is also reaching the level where smaller groups can do what previously could only be funded by large and successful nation-states. This is already the case for certain things, like funding the cure for some disease. It is a two-edged thing: smaller groups can cause big problems, a la super-terrorists, but smaller groups can also solve bigger problems and challenges. This is an eons-long trend. It is unlikely to be changed without complete relinquishment, which would likely only be a unilateral move by some Luddites.

Smaller empowered groups means that effective action can be taken instead of allowing deadlocks to persist because there is insufficient consensus to authorize necessary resources.

The new Smart Dew 25-cent sensors can enable a 10-mile-deep by 2,000-mile sensor grid for border security for $10-100 million, including base stations. This should change the debate and dynamics of the border security issue.
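Back of the envelope, the quoted budget implies the following sensor counts and densities. These densities are derived here, not stated in the source, and base-station costs are ignored for simplicity:

```python
# Implied sensor counts and densities for the Smart Dew border grid.
# (Derived figures; base-station costs ignored for simplicity.)
AREA_SQ_MI = 10 * 2000          # 10-mile-deep strip along a 2,000-mile border
COST_PER_SENSOR = 0.25          # dollars per sensor

for budget in (10e6, 100e6):
    sensors = budget / COST_PER_SENSOR
    per_sq_mi = sensors / AREA_SQ_MI
    print(f"${budget / 1e6:.0f}M -> {sensors / 1e6:.0f}M sensors, "
          f"{per_sq_mi:,.0f} per square mile")
# $10M buys 40M sensors (~2,000/sq mi); $100M buys 400M (~20,000/sq mi)
```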

New technology can change the facts around access to space. Currently there is a devoted minority that is interested in space access and development. Sufficient reduction of costs would mean that indifferent majorities would not need to be persuaded.

Then the situation becomes: what do the groups on the other side do when more and more minorities can implement their own solutions and break deadlocks in their favor?

If multiple groups were able to have competing implementations it would be an arms race of solutions and counter-solutions.

Where there is no real opposition to something (say, fighting poverty and disease) but just an unwillingness to spend the money or effort, technology that lowers costs or enables a solution would mean actual progress toward results. Being able to actually achieve significant results and improvements would bring things closer to Utopia, where problems are being solved faster than new problems arise or grow.

April 01, 2009

Mesoscale Resolution Brain Map Project Proposed

More details on the mesoscale mammal brain map proposal that this site first mentioned in February 2009 as the last entry in this post. Mesoscale is a few tens of nanometers and a few thousand molecules. There is a millimeter cube resolution brain map. So this would be about one million times higher resolution.

The importance of circuit considerations for differentially characterizing disorders such as major depression, anxiety, and obsessive–compulsive disorders, and substance (including nicotine) addiction is beginning to be recognized. These illnesses are considered disorders of the affective circuitry underlying emotion and motivated behaviors, which spans the brainstem, hypothalamus, frontal and cingulate cortices, and basal cortical nuclei.

We propose a concerted experimental effort to comprehensively determine brainwide mesoscale neuronal connectivity in model organisms. Our proposal is to employ existing neuroanatomical methods, including tracer injections and viral gene transfer, which have been sufficiently well-established and are appropriately scalable for deployment at this level. The first and primary objective is to apply these methods in a standardized, high-throughput experimental program to fully map the mesoscale wiring diagram for the mouse brain and, following the model of successful genome projects, to rapidly make the results and digitized primary data publicly accessible. The second objective is to collate and, where possible, digitize existing experimental data from the macaque, and to pursue targeted experiments using standardized protocols to plug key gaps in our knowledge of primate brain connectivity. Additionally, we argue for similar efforts in other model organisms and for the pursuit of experimental methods that can be used in postmortem human brain tissue.

For as little as a few million dollars, ranging up to perhaps $20 million depending on the redundancy in coverage committed to, researchers could get a detailed mammal brain map.

The full dataset will comprise hundreds of terabytes - a very modest data-storage burden, according to Mitra. "That will give us about twofold coverage of the entire brain circuit in a first draft, and could be accomplished in two or three years, with cooperation from the neuroscience community. At more robust funding levels, we could attempt 10-fold coverage over, perhaps, a five-year period." In the spirit of genome projects, all data from the proposed program would be made rapidly available over the web to the entire research community.

While the immediate focus now is to bring a mouse connectivity project to fruition, the authors also make the case for cataloging and digitizing results from existing studies in other species, and for filling key gaps with targeted studies. Moreover, there is an important need, they argue, for further development and validation of experimental techniques that can be used directly in the human brain.

Smooth Carbon Nanotubes With 40 Times Higher Luminescence Efficiency, Complex Nanomembranes Made, Molecular Cluster Assembly

The wrapped carbon nanotube has defects shaved off so the surface is smooth and it is protected from further damage so that it tends to stay smooth.

1. Chemists at the University of Connecticut have found a way to increase the luminescence efficiency of single-walled carbon nanotubes 40-fold, a discovery that could have significant applications in medical imaging and other areas.

The best scientists have been able to do with solution-suspended carbon nanotubes was to raise their luminescence efficiency to about one-half of one percent, which is extremely low compared to other materials, such as quantum dots and quantum rods.

By wrapping a chemical ‘sleeve’ around a single-walled carbon nanotube, Papadimitrakopoulos and his research team were able to reduce exterior defects caused by chemically absorbed oxygen molecules.

This process can best be explained by imagining sliding a small tube into a slightly larger diameter tube, Papadimitrakopoulos says. In order for this to happen, all deposits or protrusions on the smaller tube have to be removed before the tube is allowed to slip into the slightly larger diameter tube. What is most fascinating with carbon nanotubes however, Papadimitrakopoulos says, is the fact that in this case the larger tube is not as rigid as the first tube (i.e. carbon nanotube) but is rather formed by a chemical “sleeve” comprised of a synthetic derivative of flavin (an analog of vitamin B2) that adsorbs and self organizes onto a conformal tube. Papadimitrakopoulos claims that this process of self-assembly is unique in that it not only forms a new structure but also actively “cleans” the surface of the underlying nanotube. It is that active cleaning of the nanotube surface that allows the nanotube to achieve luminescence efficiency to as high as 20 percent.


2. Complex nanomembrane structures can be made with silicon and other materials.

Research on nanomembranes and graphene sheets represents the “third wave” of work on nanomaterials, following earlier studies of nanoparticles/fullerenes and, somewhat later, nanowires/nanotubes. Inorganic semiconductor nanomembranes are particularly appealing due to their materials diversity, the ease with which they can be grown with high quality over large areas, and the ability to exploit them in unique, high-performance electronic and optoelectronic systems. The mechanics of such nanomembranes and the coupling of strain to their electronic properties are topics of considerable current interest. A new paper by the Lagally group in this issue combines single-crystalline silicon nanomembranes with chemical vapor deposition techniques to form “mechano-electronic” superlattices whose properties could lead to unusual classes of electronic devices.

3. Reviewing current methods and capabilities with molecular-cluster-assembled materials.

Cluster-assembled materials offer the ability to tune component properties, lattice parameters, and thus coupling of physical properties through the careful selection and assembly of building blocks. Multi-atom clusters have been found to exhibit physical properties beyond those available from the standard elements in the periodic table; classification of the properties of such clusters effectively enables expansion of the periodic table to a third dimension. Using clusters as superatomic building blocks for hierarchically assembled materials allows these properties to be incorporated into designer materials with tailored properties. Cluster-assembled materials are currently being explored and methods developed to control their design and function. Here, we discuss examples of building block syntheses, assembly strategies, and property control achieved to date.

High Temperature Reactor Joint Venture and Potential to Replace Just the Coal Furnace of Coal Plants

Rod Adams at Atomic Insights talks about the joint venture between China and South Africa to develop pebble bed reactor technology

China has started construction of the first of many 210 MWe high temperature reactors, which it is targeting for completion in 2013.

The first commercial-scale plant (HTR-PM) in China will make use of indirect cycle, steam turbine systems, while PBMR has been developing a direct cycle gas turbine system. The HTR-PM features two 250 MW (thermal) reactor modules and a 210 MW (electric) steam turbine-generator set.

Towards Deep Burn
Next Generation Nuclear Plant research has already invested about four years in turning the "art" of making TRISO particles into a repeatable process that can reliably produce fuel that is demonstrating superior results in its test runs in the high flux test facility. Testing has reached the point where the fuel is achieving 16% burn-up without failure. (TRISO particles are tiny coated fuel kernels; thousands of them are embedded in each of the billiard-ball-sized pebbles used in pebble bed reactors.) To provide some perspective, most light water fuels are only licensed to achieve about 5% burn-up.

There is a $7 million project to work on achieving deep burn (60-70% burnup) of TRISO fuel pebbles.

The concept of destruction of spent fuel transuranics in a TRISO-fueled (TRIstructural ISOtropic) gas-cooled reactor is known as Deep-Burn. The term "Deep-Burn" reflects the large fractional burnup of up to 60-70% fissions per initial metal atom (FIMA) that can be achieved with a single-pass, multi-cycle irradiation in these reactors. The concept is particularly attractive because it employs the same reactor design that is used for the NGNP program, with the same potential for highly efficient electricity and hydrogen production. Spent TRISO fuel from Deep-Burn can be either placed directly into geologic storage to provide long-term containment of the residual radioactivity or recycled for fast reactor fuel.

In parallel to the physics analysis, preliminary work has indicated that, due to the large amount of useful energy that can be extracted from the Deep-Burn TRISO fuel (up to 20 times larger than from mixed-oxide (MOX) fuel in LWRs), it may be possible to recover all or part of the costs of reprocessing LWR spent fuel.

A 4-page presentation on Project Deep Burn.

Gas reactors have an advantage over light-water reactors in terms of their ability to burn TRU because the use of robust particle fuel, a solid moderator and a neutronically transparent coolant enables the use of fully enriched TRU TRISO fuel, and the attainment of very high burnups (~500,000 - 700,000 MWD/tHM). Thus, the overall amount of TRU burned in a single recycle can be much greater in a DB-VHTR than an LWR. In addition, the higher thermal efficiency of the VHTR increases the amount of electricity produced during consumption of the TRU. Preliminary assessments of the DB-VHTR indicate that fuel cycle lengths of 1 to 1.5 years are feasible and that the reactivity swing over the cycle could be managed.
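The quoted burnup range can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming the standard approximation of ~200 MeV of recoverable energy per fission and U-238 as a representative heavy-metal mass:

```python
# Sketch: convert fractional burnup (FIMA) to MWD per tonne of heavy metal.
# Assumptions: ~200 MeV per fission, U-238 molar mass as representative.

AVOGADRO = 6.022e23      # atoms per mole
MEV_TO_J = 1.602e-13     # joules per MeV
E_FISSION_MEV = 200.0    # approximate recoverable energy per fission
M_HEAVY_METAL = 238.0    # g/mol, representative heavy-metal mass

def fima_to_mwd_per_thm(fima_fraction):
    """MWD/tHM for a given fraction of initial metal atoms fissioned."""
    atoms_per_tonne = 1e6 / M_HEAVY_METAL * AVOGADRO
    energy_j = atoms_per_tonne * fima_fraction * E_FISSION_MEV * MEV_TO_J
    return energy_j / 8.64e10   # 1 MW-day = 8.64e10 joules

print(fima_to_mwd_per_thm(0.60))  # 60% FIMA: roughly 560,000 MWD/tHM
print(fima_to_mwd_per_thm(0.70))  # 70% FIMA: roughly 660,000 MWD/tHM
```

The result lands inside the article's ~500,000-700,000 MWD/tHM range, which is consistent with the 60-70% FIMA figure quoted earlier.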

In the early days of VHTR technology, a crush-burn-leach process was proposed to reprocess VHTR fuel. This process produced large quantities of carbon dioxide that needed to be trapped. A new head-end process consistent with the UREX+ separation technologies has been identified and demonstrated at the proof-of-principle level for TRISO fuel in the past few years. The process flow consists of separation of the compacts from the graphite block, disposal of the graphite block as low-level waste, grinding and jet-milling of the compact components (matrix, coatings, fuel kernels) into a fine powder to support chemical separation, and leaching to dissolve the TRU for aqueous separation or a novel electrochemical process termed METROX for the pyroprocessing separation. The project will develop a full flowsheet for TRISO recycling using both aqueous and non-aqueous reprocessing, particularly as it pertains to spent DB-TRISO fuel. The process of crushing the ceramic coatings and exposing the spent-fuel kernels to dissolving agents will be brought up to today's standards of low secondary waste streams and process losses. The project will study the crush-leach flowsheet to minimize waste, establish and test a laboratory filtering system, and study the suitability of the fuel solution for liquid separation. Lab-scale tests of the equipment for separation of the solid coating and compact material from the fuel solutions will be performed.

The current TRU destruction scenario adopted by GNEP/AFCI is termed the single-tier approach; it is the simplest demonstration of closing the fuel cycle. In this case, spent fuel from LWRs is sent directly to the Advanced Burner Reactor (ABR) for destruction. Our studies suggest that DB-VHTRs can have a synergistic relationship with the ABR when operated in dual-tier mode. This synergy allows relaxed operating parameters for the two reactor types and a smaller inventory of recycling TRU relative to the single-tier approach. It would also reduce the number of fast reactors by a factor of 3 as compared to the LWR two-tier scenario (i.e., a thermal-to-fast reactor ratio of 9 to 1 rather than 3 to 1).

Details on Using Pebble Beds to Replace Coal Burners from Coal Plants
The principles of converting coal plants have been described in detail at the coal2nuclear site.

HOW the modified power plant would work: The reactor (right) is in a sealed underground silo located in the power plant's coal storage yard. The heat comes from the bed (or pile) of atomic pebbles (the little red dots). The pebbles heat helium gas in the reactor to 1,300°F. The hot helium gas is circulated clockwise to carry the heat from the pebbles to the attached helium-to-water heat exchanger (a "fire-tube" water heater). The heated water (red) that exits through the bottom water pipe of the heat exchanger is supercritically hot (1,150°F) and under about 4,500 pounds per square inch of pressure to keep the water from turning into steam. The reactor is rated at about 500 megawatts thermal, so even though the water carries a temperature differential of almost 1,000°F, this will still be a large volume of water. In comparison, a conventional PWR reactor's pressurized water undergoes a differential of only about 70°F as it passes through its reactor core, requiring massive water volume and 6,000 horsepower circulating pumps instead.
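The contrast between the two flow rates falls out of the basic heat-transport relation Q = m·cp·ΔT. A minimal sketch, assuming round-number heat capacities (supercritical water's cp varies strongly with temperature, so 5 kJ/kg-K is only an illustrative value):

```python
# Sketch: water mass flow needed to carry 500 MW thermal, from Q = m * cp * dT.
# Assumed values: cp ~5 kJ/kg-K (supercritical loop), ~4.2 kJ/kg-K (PWR loop).

def mass_flow_kg_s(q_watts, cp_j_per_kg_k, delta_t_k):
    """Mass flow rate required to carry q_watts across a given temperature rise."""
    return q_watts / (cp_j_per_kg_k * delta_t_k)

Q = 500e6                       # 500 megawatts thermal
dT_pebble = 1000.0 * 5.0 / 9.0  # ~1,000 F differential, converted to kelvin
dT_pwr = 70.0 * 5.0 / 9.0       # ~70 F differential, converted to kelvin

print(mass_flow_kg_s(Q, 5000.0, dT_pebble))  # pebble bed loop: ~180 kg/s
print(mass_flow_kg_s(Q, 4200.0, dT_pwr))     # PWR core: ~3,000 kg/s
```

The roughly 17-fold difference in required flow illustrates why the PWR needs the massive water volume and 6,000 horsepower pumps mentioned above.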

NEXT, The heat is carried by the water through new, heavily insulated pipes to a new steam generator located in the power plant. The steam generator is also a type of heat exchanger. This time, the 1,150°F supercritical water is used to make the 1,000°F, 2,400 psi superheated steam needed by the power plant's turbine. The steam generator's steam pipes are connected to the three-stage steam turbine (devices 11, 9, 6) that spins the electricity generator (device 5). The "new" steam is identical to the "old" steam that used to be made by the coal boiler.

Two identical special 200 ton storage vault railroad cars, equipped with elliptically-keyed wheels (temporarily removed), would be welded to the ground next to the silo to supply and remove pebbles through pneumatic tubes connected to the car bottoms. The Germans used automated pneumatic transport systems on their pebble bed reactors; the U.S. MIT pebble bed reactor design is even more sophisticated. The pebbles would be held in metal clips on a conveyor belt storage system in the railroad cars. A full load of 450,000 pebbles is about 112 tons, containing perhaps 9 tons of uranium.

That's all there is to it folks! What a simple way to end Climate Change. The only new items are the reactor, the two heat exchangers, and a small control and service building located in the now-unneeded coal yard. It should be pointed out that power plant water heaters and steam generators, while not trivial devices, are about 30% the size of conventional nuclear power plant steam equipment so they are much less expensive and can be built in several months almost anywhere.

Research on one-pass deep burn of fuel pellets.

Singapore has made computer memory devices using graphene

Ones and zeros: By depositing a ferroelectric material on top of graphene, researchers have coaxed graphene into holding on to two different levels of electrical conductivity, which could serve as bits 1 and 0 in computer memory.
Credit: Barbaros Özyilmaz, National University of Singapore

MIT Technology Review reports that researchers at the National University of Singapore have made computer memory devices using graphene based on the well understood ferroelectric effect. This is the first step toward memory that could be much denser and faster than the magnetic memory used in today's hard drives. The researchers have made hundreds of prototype graphene memory devices, and they work reliably, according to Barbaros Özyilmaz, the physics professor who led the work presented at a recent American Physical Society meeting in Pittsburgh.

The new memory idea is "thrilling because it's very simple," says Andre Geim, professor of physics at the University of Manchester, UK, who first isolated graphene sheets from graphite. "Ferroelectrics are well known. It's also known that an electric field changes graphene's resistivity by a factor of typically 10. [Özyilmaz] combines those two very well-known facts."

Graphene memory would have significant advantages over today's magnetic memory. Bits could be read 30 times faster because electrons move through graphene quickly. Plus, the memory could be denser. Bit areas on hard disks are currently a few tens of nanometers across. At densities of 1 terabit per square inch, they will be about 25 nanometers across, too small to hold their magnetization direction. With graphene, bits could shrink to 10 nanometers or even smaller. In fact, the memory devices would work better with smaller graphene areas. Stanford University researchers have shown that cutting graphene into ribbons a few nanometers wide enhances the difference between its two conductivity states.
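The ~25 nanometer figure follows directly from the areal density. A minimal sketch converting bits per square inch into the side length of a square bit cell:

```python
# Sketch: side length of a square bit cell at a given areal storage density.
import math

IN2_TO_M2 = 0.0254 ** 2   # one square inch in square metres

def bit_pitch_nm(bits_per_in2):
    """Side length (nm) of a square cell if each bit gets an equal share of area."""
    area_per_bit_m2 = IN2_TO_M2 / bits_per_in2
    return math.sqrt(area_per_bit_m2) * 1e9

print(bit_pitch_nm(1e12))   # 1 terabit per square inch -> ~25 nm per bit
```

At 1 terabit per square inch each bit gets a cell about 25 nm on a side, matching the article's figure for when magnetic bits become too small to hold their magnetization direction.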

University of Ohio is one of many places making progress on enabling mass production of graphene electronics. They use a stamping method.

Simple graphene frequency boosters could take communication chips or computer chips up to the terahertz range.

Graphene is also one of the strongest materials and there is research to use it to boost the strength of polymers. Development with graphene for materials and electronics is proceeding quickly and is likely to have significant commercial impact starting in two years.

The unique linear energy band dispersion and its purely 2D crystalline structure have made graphene a rising star not only for fundamental research but also for nanoscale device applications. Here we demonstrate a novel non-volatile memory device using a combination of graphene and a ferroelectric thin film. The binary information, i.e. "1" and "0", is represented by the high and low resistance states of the graphene working channels and is switched by the polarization directions of the ferroelectric thin film. A highly reproducible resistance change exceeding 300% is achieved in our graphene-ferroelectric hybrid devices under ambient conditions. The experimental observations are explained by the electrostatic doping of graphene by the remnant electrical field at the ferroelectric/graphene interface.

March 31, 2009

Next Big Future Highlights of Weeks 7-12 in 2009

The highlights from the first 6 weeks of 2009 had 128 qubit quantum computers, artificial brain projects, breakthroughs with carbon nanotubes and more.

1. DNA based assembly of nanoclusters.

2. Self assembled memory could hold 250 DVDs in the space of a quarter. 10-100 terabits per square inch.

3. Synthetic Ribosome made.

4. DNA Origami can grow

5. Wireless brain computer interfaces with nanoparticles communicating with and controlling neurons.

6. Graphene chips could enable 1 terahertz communication chips within 2-3 years.

7. Atomic Force Microscopes (AFM) are made 100 times more stable by taking into account photons reflecting off of the microscope tip.

8. Atomic Force Microscopes (AFM) can swap atoms from the tip to the target by pressing into a target.

9. Carbon nanotubes are being developed into faster and stronger artificial muscle for robots and other actuator applications.

10. Carbon nanotubes are being used to stick together layers of material for stronger and more durable airplane skins and for other applications.

11. Room temperature single atom quantum dots have been made.

12. Pyrite could be a key material for molecular nanotechnology that could scale solar power to terawatt levels.

13. Another roundup of nanotechnology. Nanowalls where we want them, control of nanocrystals and better nanolenses and more

Stem cells, gene therapy and advanced medicine
14. Stem cells for universal tissue replacement.

15. Unlimited stem cell supplies and bone marrow and fat stem cells able to survive in bioscaffolds.

16. Stem cells resistant to chemotherapy have a tissue-replacement advantage: chemotherapy kills the diseased tissue while leaving the healthier or enhanced stem cells to replace it.

17. Lab grown nerves promote nerve regeneration.

18. Localization and delivery are keys to early gene therapy strategies.

19. Nerve and brain modifications could be keys to enhancing human strength to gorilla and chimpanzee level.

20. Monoclonal antibodies can provide fast durable immunity against bird flu and other flu.

21. Super cheap paper based medical tests could transform medicine and diagnostics.

22. Micro-cantilevers enable faster and more accurate virus detection.

23. Taking short nanosecond range lasers bursts to the petawatt, exawatt and zettawatt levels.

24. Mercury laser enables frequent powerful shots, which are key to enabling laser nuclear fusion.

25. Solid state lasers are past the minimum battlefield strength of 100 kilowatts. Now at 105 kW, and adding an eighth section moves it to 120 kW. Further progress to the megawatt level is funded.

Nuclear Space Cannon
This is a nextbigfuture variation on the nuclear pulse propulsion project of Project Orion.

26A. One underground shot Nuclear Orion idea introduced. No fallout or electromagnetic pulse issues.

26B. Containing underground nuclear explosion details.

26C. The nuclear space cannon could be used to jump start asteroid mining and lower costs of follow on space travel with fuel depots and deploying space based tethers.

Military Technology
27. Lockheed is deploying exoskeletons for the army which can help soldiers carry 200 lb loads and move at 7-10 mph with those loads. Attachments are also possible, such as large riot shields or an over-the-shoulder gun turret.

28. Exoskeletons that can carry 200 lbs can be combined with electric bikes and scooters for an all-terrain personal transportation solution. You can walk onto public transportation in an exoskeleton with a folded electric bike on its back.

29. Another article on the exoskeleton electric bike combination for public transportation.

Energy Technology
30. Algae fuel cost and production.

31. Neutron tracks have been detected in conjunction with cold fusion.

32. The superconducting Meissner effect has been detected at -40 Celsius.

Quantum Computers, AI and advanced electronics and communications
33. Quantum computer tunable qubits and other quantum computer related advances.

34. An IPhone can help monitor the state of your brain and help you put yourself into memorization mode.

35. 25 cent sensors could be used to make an affordable border security system. Monitor motion and cars up to 10 miles deep across a 2000 mile border for $10 to 100 million.

36. $1000 electronic gear on a soldiers helmet can form a network of sensors for locating snipers and the type of gun they are using.

37. Wolfram Alpha can offer computed answers for math, science and other fields of interest as an alternative to multiple answers from internet search.

38. 12.5 gigabit per second 5G mobile phone service is on the way.

39. Supercomputers are starting to defeat expert human players at the game of Go.

Big Ideas
40. Collapsitarians and doomers do not take into account the societal response if things really start getting bad. Therefore, total collapse over the span of several years to several decades will not happen. Real disasters that threaten civilization and humanity would have to hit so fast that there is no time to adapt or mount an emergency response.

41. Emerging technological black swans that could change the world.

42. Open source crowdsourcing of financial regulation would be effective. This should be used mostly in place of cumbersome and ineffective layering of regulation onto the SEC.

Carnival of Space 96

March 30, 2009

Gorilla, Chimp Strength Secret in Brain and Muscle

Evolutionary biologist Alan Walker, a professor at Penn State University, argues that humans may lack the strength of chimps because our nervous systems exert more control over our muscles. Our fine motor control prevents great feats of strength, but allows us to perform delicate and uniquely human tasks.

Walker's hypothesis stems partly from a finding by primatologist Ann MacLarnon. MacLarnon showed that, relative to body mass, chimps have much less grey matter in their spinal cords than humans have. Spinal grey matter contains large numbers of motor neurons—nerve cells that connect to muscle fibers and regulate muscle movement. More grey matter in humans means more motor neurons, Walker proposes. And having more motor neurons means more muscle control.

If augmentations were made to enable better brain and nerve control of muscles then people could retain fine motor control while being able to unleash more strength when necessary.

There will be human trials of surgically implanted stem cells to reverse paralysis in New Zealand.

The switch for turning stem cells into muscle cells has been found. This can lead to muscle stem cells being able to regenerate muscle tissue. You could also add muscle tissue.

Great apes, with their all-or-nothing muscle usage, are explosive sprinters, climbers and fighters, but not nearly as good at complex motor tasks. In other words, chimps make lousy guests in china shops.

In addition to fine motor control, Walker suspects that humans also may have a neural limit to how much muscle we use at one time. Only under very rare circumstances are these limits bypassed—as in the anecdotal reports of people able to lift cars to free trapped crash victims.

"Add to this the effect of severe electric shock, where people are often thrown violently by their own extreme muscle contraction, and it is clear that we do not contract all our muscle fibers at once," Walker writes. "So there might be a degree of cerebral inhibition in people that prevents them from damaging their muscular system that is not present, or not present to the same degree, in great apes."

A previous look at gene therapy for enhanced strength and speed.

DNA based Assembly Line for Precision Nano-Cluster Construction

Single dots 5-10 nanometers in size are put together to form two-particle dimer pairs.

Using DNA to assemble nanoclusters: (a) (1) DNA linker strands (squiggly lines) are used to attach DNA-coated nanoparticles to a surface. (2) Linker strands are attached to the top side of the nanoparticle. (b) (3a) A nanoparticle of a second type with complementary DNA encoding recognizes the exposed linker strands and attaches to the surface-anchored nanoparticle. (4a and 5a) The assembled structure is released from the surface support, resulting in a two-particle, dimer cluster. (c) (3b) Alternatively, the immobilized particles produced in step (a) are released from the surface, leaving the opposite-side linker strands free to bind with multiple particles (4b) to form asymmetric "Janus" clusters.

Building on the idea of using DNA to link up nanoparticles — particles measuring mere billionths of a meter — scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have designed a molecular assembly line for predictable, high-precision nano-construction. Such reliable, reproducible nanofabrication is essential for exploiting the unique properties of nanoparticles in applications such as biological sensors and devices for converting sunlight to electricity. The work will be published online March 29, 2009, by Nature Materials.

The Brookhaven team has previously used DNA, the molecule that carries life’s genetic code, to link up nanoparticles in various arrangements, including 3-D nano-crystals. The idea is that nanoparticles coated with complementary strands of DNA — segments of genetic code sequence that bind only with one another like highly specific Velcro — help the nanoparticles find and stick to one another in highly specific ways. By varying the use of complementary DNA and strands that don’t match, scientists can exert precision control over the attractive and repulsive forces between the nanoparticles to achieve the desired construction. Note that the short DNA linker strands used in these studies were constructed artificially in the laboratory and don’t “code” for any proteins, as genes do.
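The "highly specific Velcro" behavior of complementary strands can be illustrated with a toy model. A minimal sketch of Watson-Crick base pairing, using hypothetical sequences (the actual linker sequences are not given in the article):

```python
# Sketch: a toy model of Watson-Crick complementarity, illustrating why a
# DNA-coated particle sticks only to a particle bearing the complementary strand.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return "".join(PAIRS[base] for base in reversed(seq))

def binds(strand_a, strand_b):
    """In this toy model, two strands hybridize only if they are exact complements."""
    return strand_b == complement(strand_a)

linker = "ATGCCGT"                        # hypothetical linker sequence
print(binds(linker, complement(linker)))  # True: complementary particle attaches
print(binds(linker, "ATGCCGT"))           # False: non-complementary strand is repelled
```

Real hybridization tolerates partial mismatches and depends on temperature and salt conditions, but the exact-match rule captures why mismatched strands let scientists program repulsive as well as attractive interactions.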


The latest advance has been to use the DNA linkers to attach some of the DNA-coated nanoparticles to a solid surface to further constrain and control how the nanoparticles can link up. This yields even greater precision, and therefore a more predictable, reproducible high-throughput construction technique for building clusters from nanoparticles.

Instead of assembling millions and millions of nanoparticles into 3-D nanocrystals, as was done in the previous work, this technique allows the assembly of much smaller structures from individual particles. In the Nature Materials paper, the scientists describe the details for producing symmetrical, two-particle linkages, known as dimers, as well as small, asymmetrical clusters of particles — both with high yields and low levels of other, unwanted assemblies.

New size- and geometry-based properties emerge.

The scientists describe an optical effect that occurs when nanoparticles are linked as dimer clusters. When an electromagnetic field interacts with the metallic particles, it induces a collective oscillation of the material's conductive electrons. This phenomenon, known as a plasmon resonance, leads to strong absorption of light at a specific wavelength.

“The size and distance between the linked particles affect the plasmonic behavior,” said Gang. By adjusting these parameters, scientists might engineer clusters for absorbing a range of wavelengths in solar-energy conversion devices. Modulations in the plasmonic response could also be useful as a new means for transferring data, or as a signal for a new class of highly specific biosensors.

Asymmetric clusters, which were also assembled by the Brookhaven team, allow an even higher level of control, and therefore open new ways to design and engineer functional nanomaterials.

Because of its reliability and precision control, Brookhaven’s nano-assembly method would be scalable for the kind of high-throughput production that would be essential for commercial applications.

The Christian Science Monitor has an article describing DNA nanotechnology pioneer Ned Seeman as the Henry Ford of nanotechnology for his progress toward DNA robotic factories.

Singularity Motivated Donation and Singularity Related Research at Oxford

The James Martin 21st Century School has received 36 million pounds in a matching grant donation from James Martin. [H/T to Michael Anissimov at Accelerating Future]

The institutes that will share the money:
Cancer Therapy
Carbon Reduction
Emerging Infections
Energy Materials
Environmental Change
Ethics of Biosciences
Future of Humanity [this is where Nick Bostrom, Robin Hanson, Anders Sandberg and others are working]
Future of the Mind
Oceans Science
Innovation & Society
Stem Cells

The James Martin 21st Century School was founded in June 2005 at the University of Oxford with about 3 million pounds in initial funding.

Atomic Force Microscopes 100 times More Stable: Picometer Stability

The JILA team has controlled the probe’s position in three dimensions to better than 40 picometers (1 nanometer = 1000 picometers) over 100 seconds. In imaging applications, they showed the long-term drift at room temperature was a mere 5 picometers per minute, a 100-fold improvement over the best previous results under ambient conditions. Just like photographers use the stability of a tripod and longer exposures to improve picture quality, the JILA team used their improved stability to scan the AFM probe more slowly, leading to a 5-fold improvement in AFM image quality. A bonus, says Perkins, is the technique works with standard commercial probes.
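The practical meaning of a 5 picometer-per-minute drift rate can be put in atomic terms. A minimal sketch, assuming a rough atomic diameter of ~300 pm and taking the article's 100-fold improvement to imply a ~500 pm/min baseline for previous ambient instruments:

```python
# Sketch: how long before an AFM tip drifts by one atomic diameter at a
# given drift rate. Assumption: ~300 pm as a rough atomic diameter.

ATOM_DIAMETER_PM = 300.0

def minutes_to_drift(drift_pm_per_min, distance_pm=ATOM_DIAMETER_PM):
    """Minutes for the tip to drift the given distance at a constant rate."""
    return distance_pm / drift_pm_per_min

print(minutes_to_drift(5.0))    # JILA result: ~60 minutes per atomic diameter
print(minutes_to_drift(500.0))  # ~100x worse baseline: well under a minute
```

An hour of atomic-scale registration, versus less than a minute previously, is what makes the slow-scan imaging and the long protein-dynamics observations described below feasible at room temperature.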

By taking account of the photons reflecting from the tip of an atomic force microscope, researchers have improved stability to picometer accuracy and greatly reduced undesired drifting of the tip.

Instrumental drift in atomic force microscopy (AFM) remains a critical, largely unaddressed issue that limits tip−sample stability, registration, and the signal-to-noise ratio during imaging. By scattering a laser off the apex of a commercial AFM tip, we locally measured and thereby actively controlled its three-dimensional position above a sample surface to <40 pm (Δf = 0.01−10 Hz) in air at room temperature. With this enhanced stability, we overcame the traditional need to scan rapidly while imaging and achieved a 5-fold increase in the image signal-to-noise ratio. Finally, we demonstrated atomic-scale (100 pm) tip−sample stability and registration over tens of minutes with a series of AFM images on transparent substrates. The stabilization technique requires low laser power (<1 mW), imparts a minimal perturbation upon the cantilever, and is independent of the tip−sample interaction. This work extends atomic-scale tip−sample control, previously restricted to cryogenic temperatures and ultrahigh vacuum, to a wide range of perturbative operating environments.

In an atomic force microscope (AFM), force is measured by a laser beam (yellow in this artist's rendition) bouncing off the diving-board like cantilever. To make an ultrastable AFM, researchers at JILA added two other lasers (green and red) to measure the three dimensional position of both the tip and a reference mark in the sample. These measurements allow researchers to remove drift and vibration in the instrument's measurements caused by environmental factors.

The JILA research group at the University of Colorado.

While extremely sensitive to atomic-scale features, AFMs also are extremely sensitive to interference from acoustic noise, temperature shifts and vibration, among other factors. This makes it difficult or impossible either to hold the probe in one place to observe the specimen under it over time (useful for studying the dynamics of proteins) or to move the probe away and return to exactly the same spot (potentially useful for nanoscale manufacturing). “At this scale, it’s like trying to hold a pen and draw on a sheet of paper while riding in a jeep,” observes NIST physicist Thomas Perkins. A few instruments in specialized labs, including some at NIST, solve this problem by operating at extremely cold temperatures in ultra-high vacuums and in heavily isolated environments, but those options aren’t available for the vast majority of AFMs, particularly those used in bioscience laboratories where the specimen often must be immersed in a fluid.

The JILA solution uses two additional laser beams to sense the three-dimensional motion of both the test specimen and the AFM probe. The beams are held stable relative to each other to provide a common reference. To hold the specimen, the team uses a transparent substrate with tiny silicon disks—"fiducial marks"—embedded in it at regular intervals. One laser beam is focused on one of these disks. A small portion of the light scatters backwards to a detector. Any lateral vibration or drift of the sample shows up at the detector as a motion of the spot, while any vertical movement shows up as a change in light intensity. A similar trick with the second beam is used to detect vibration or drift in the probe tip, with the added complication that the system has to work with the scant amount of light reflected off the apex of the AFM probe. Unwanted motion of the tip relative to the sample is corrected on the fly by moving the substrate in the opposite direction. "This is the same idea as active noise cancellation headphones, but applied to atomic force microscopy," says Perkins.
