July 25, 2009

Carnival of Space 113

PICA and PICA-X Heat Shields

NASA is testing two ablative heat shield materials, AVCOAT and PICA.

Both materials proved successful in previous missions to space. AVCOAT, which is manufactured directly onto the spacecraft and has an embedded honeycomb-like material, was used for the original Apollo capsules. PICA, or Phenolic Impregnated Carbon Ablator, which is manufactured in blocks and attached to the vehicle after fabrication, was used on Stardust, NASA's first unmanned space mission dedicated solely to exploring a comet.

SpaceX is developing a family of PICA variants called PICA-X.

The "X" stands for the SpaceX-developed variants of the rigid, lightweight material, which has several improved properties and greater ease of manufacture.

"We tested three different variants developed by SpaceX," said Tom Mueller, VP of Propulsion, SpaceX. "Compared to the PICA heat shield flown successfully on NASA's Stardust sample return capsule, our SpaceX versions equaled or improved the performance of the heritage material in all cases."

The Dragon capsule will enter the Earth's atmosphere at around 7 kilometers per second (15,660 miles per hour), heating the exterior of the shield to up to 1850 degrees Celsius. However, just a few inches of the PICA-X material will keep the interior of the capsule at room temperature.

In January 2006, NASA's Stardust sample return capsule, equipped with a PICA heat shield, set the record for the fastest reentry speed of a spacecraft into Earth's atmosphere - experiencing 12.9 kilometers per second (28,900 miles per hour). SpaceX's Dragon spacecraft will return at just over half of that speed, and will experience only one tenth as much heating.

PICA is a modern TPS [thermal protection system] material. It has the advantage of low density (much lighter than carbon phenolic) coupled with efficient ablative capability at high heat flux. Stardust's heat shield (0.81 m base diameter) was manufactured from a single monolithic piece sized to withstand a nominal peak heating rate of 1200 W/cm^2.
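The velocity figures in this section can be tied together with the Sutton-Graves stagnation-point heating correlation, in which convective heating rate scales with the cube of entry velocity. This is a rough sketch; the atmospheric density and nose radius below are illustrative assumptions, not mission data:

```python
import math

K_EARTH = 1.7415e-4  # Sutton-Graves constant for Earth entry (SI units)

def sutton_graves_heating(rho, r_nose, v):
    """Stagnation-point convective heating rate in W/cm^2:
    q = K * sqrt(rho / r_nose) * v^3, with rho in kg/m^3, r_nose in m, v in m/s."""
    return K_EARTH * math.sqrt(rho / r_nose) * v**3 / 1e4  # W/m^2 -> W/cm^2

rho = 3e-4     # kg/m^3, assumed density near peak-heating altitude
r_nose = 0.23  # m, approximate Stardust nose radius (assumed)

q_stardust = sutton_graves_heating(rho, r_nose, 12_900)  # 12.9 km/s entry
q_dragon = sutton_graves_heating(rho, r_nose, 7_000)     # ~7 km/s entry

print(f"velocity ratio: {7.0 / 12.9:.2f}")                           # 0.54
print(f"heating-rate ratio (v^3 law): {q_dragon / q_stardust:.2f}")  # 0.16
```

With these assumed inputs the Stardust case lands near the ~1200 W/cm^2 figure quoted above. The cube law alone gives about one sixth the heating rate at Dragon's lower velocity; the one-tenth figure presumably also folds in differences in trajectory, geometry and density between the two entries.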

Atmospheric re-entry at Wikipedia

US Federal Budgets Past, Present and Future : What are the Components of $2 trillion per year deficits ?

The US annual deficit was less than $500 billion/year for every year before the first budget of the Obama administration. The current 2009 deficit is projected to be about $2 trillion. In the pictures, you can see the actual deficit spending under President George Bush and the projected deficits under President Obama. The projected deficits under President Obama now look optimistic, as they assume unemployment peaks in 2010 at 9% and GDP growth returns to 3-4% per year. Interest rates could also rise because of the high deficits, making the situation worse than projected.

If unemployment, GDP growth and interest rates do not turn out as good as or better than the projections, then the deficits will exceed the roughly $2 trillion in 2010, $1.4 trillion in 2011 and about $1 trillion per year out to 2019.

The unemployment rate is 9.5 percent [July 18, 2009 statistic], a 26-year high, and is expected to go higher.

Revenue drops from the 2008-2009 levels of $2.66-2.7 trillion down to an estimated $2.38 trillion, while spending goes up from $2.8-3.1 trillion to over $4 trillion.

The Committee for a Responsible Federal Budget (CRFB) provided a five-page PDF analysis.

The US goes from spending $1.20 for every $1 of revenue to $1.80-2.00 for every $1 of revenue.
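The per-dollar figures follow from the totals quoted earlier in this section. A quick check, under the assumption that 2010 spending equals revenue plus a deficit near the projected $2 trillion (all figures in trillions of dollars, taken from this article):

```python
# Figures quoted in this article, in trillions of dollars
revenue_2008 = 2.66
spending_2008 = 3.1        # upper end of the $2.8-3.1 trillion range
revenue_2010_est = 2.381
spending_2010_est = 4.3    # revenue plus a deficit near the projected $2 trillion

print(f"2008:      ${spending_2008 / revenue_2008:.2f} spent per $1 of revenue")          # ~$1.17
print(f"2010 est.: ${spending_2010_est / revenue_2010_est:.2f} spent per $1 of revenue")  # ~$1.81
```

Which end of each range you pair determines whether the ratio lands nearer $1.20 or $2.00, which is why a band rather than a single figure is quoted.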

Wikipedia has information on the federal budget.

Here is the CBO's (Congressional Budget Office's) long-term budget outlook.

The CRFB reaction to the long-term outlook.

Under its baseline scenario, CBO projects deficits will briefly drop below 2 percent of GDP next decade before rising to above 5.5 percent in 2035, 8 percent in 2050, and over 19 percent by the end of the 75-year window. Under their "Alternative Fiscal Scenario," which makes policy assumptions consistent with current practices, deficits would never drop below 4 percent of GDP, would hit 15 percent by 2035, and would surpass 45 percent of GDP by the end of the 75-year period. Unfortunately, given current political sentiment, the second scenario appears far more likely.

This outlook is considerably worse than previous projections, with the 75-year fiscal gap increasing from 6.9 percent of GDP to 8.1 percent of GDP since the December 2007 Long-Term Outlook.

"Having spent over a decade worrying about budget deficits, I can quite honestly say that things have never looked as bad as they do now. We need to be focused on slowing spending and finding better ways to raise revenue, not on cutting taxes and introducing new entitlement programs," said MacGuineas. "We can either make these hard choices now, on our own terms, or we can make them in a panic on the heels of a full-blown fiscal crisis."

Federal, State and Local Government

USgovernmentspending.com has charts and interactivity for people to analyze government policy.

Total US government spending of about $6.2 trillion is about 42% of GDP. In 2006, total US government spending was $4.7 trillion and 36% of GDP.

2010 Federal Spending Buckets

Estimated receipts for fiscal year 2010 are $2.381 trillion [Revenue]

Mandatory spending: $2.184 trillion (-17.9%)
$695 billion (+4.9%) - Social Security
$453 billion (+6.6%) - Medicare
$290 billion (+12.0%) - Medicaid
$0 billion (-100%) - Troubled Asset Relief Program (TARP)
$0 billion (-100%) - Financial stabilization efforts
$11 billion (+275%) - Potential disaster costs
$571 billion (-15.2%) - Other mandatory programs
$164 billion (+18.0%) - Interest on National Debt

Discretionary spending: $1.368 trillion (+7.0%)
$663.7 billion (+12.7%) - Department of Defense (including Overseas Contingency Operations)
$78.7 billion (-1.7%) - Department of Health and Human Services
$72.5 billion (+2.8%) - Department of Transportation
$52.5 billion (+10.3%) - Department of Veterans Affairs
$51.7 billion (+40.9%) - Department of State and Other International Programs
$47.5 billion (+18.5%) - Department of Housing and Urban Development
$46.7 billion (+12.8%) - Department of Education
$42.7 billion (+1.2%) - Department of Homeland Security
$26.3 billion (-0.4%) - Department of Energy
$26.0 billion (+8.8%) - Department of Agriculture
$23.9 billion (-6.3%) - Department of Justice
$18.7 billion (+5.1%) - National Aeronautics and Space Administration
$13.8 billion (+48.4%) - Department of Commerce
$13.3 billion (+4.7%) - Department of Labor
$13.3 billion (+4.7%) - Department of the Treasury
$12.0 billion (+6.2%) - Department of the Interior
$10.5 billion (+34.6%) - Environmental Protection Agency
$9.7 billion (+10.2%) - Social Security Administration
$7.0 billion (+1.4%) - National Science Foundation
$5.1 billion (-3.8%) - Corps of Engineers
$5.0 billion (+100%) - National Infrastructure Bank
$1.1 billion (+22.2%) - Corporation for National and Community Service
$0.7 billion (0.0%) - Small Business Administration
$0.6 billion (-14.3%) - General Services Administration
$19.8 billion (+3.7%) - Other Agencies
$105 billion - Other
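As a sanity check, the line items above can be summed and compared against the headline totals (all figures in billions of dollars, as listed):

```python
mandatory = {
    "Social Security": 695, "Medicare": 453, "Medicaid": 290,
    "TARP": 0, "Financial stabilization": 0, "Potential disaster costs": 11,
    "Other mandatory programs": 571, "Interest on National Debt": 164,
}

discretionary = {
    "Defense": 663.7, "HHS": 78.7, "Transportation": 72.5, "VA": 52.5,
    "State": 51.7, "HUD": 47.5, "Education": 46.7, "DHS": 42.7,
    "Energy": 26.3, "Agriculture": 26.0, "Justice": 23.9, "NASA": 18.7,
    "Commerce": 13.8, "Labor": 13.3, "Treasury": 13.3, "Interior": 12.0,
    "EPA": 10.5, "SSA": 9.7, "NSF": 7.0, "Corps of Engineers": 5.1,
    "Infrastructure Bank": 5.0, "CNCS": 1.1, "SBA": 0.7, "GSA": 0.6,
    "Other Agencies": 19.8, "Other": 105,
}

print(f"mandatory total:     ${sum(mandatory.values()):,} billion")         # $2,184 billion
print(f"discretionary total: ${sum(discretionary.values()):,.1f} billion")  # $1,367.8 billion
```

The mandatory buckets sum to exactly the stated $2.184 trillion, and the discretionary buckets come to $1,367.8 billion, within rounding of the stated $1.368 trillion.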

July 24, 2009

Financial Crisis Impact on Russia and Russia's Nuclear Energy Program

Russia will draw 1.36 trillion rubles (US$43.7 billion) from its US$95.4 billion Reserve Fund in Q3 2009 to finance its first budget deficit in nearly a decade. The fund has fallen from US$137 billion at the end of 2008. (Bloomberg)

Russia has been drawing heavily on its sovereign wealth funds to finance its anti-crisis measures and its budget deficit. At the current rate, its savings risk being depleted by 2010.

Russia is planning a US$102 billion deficit in 2010.

Russia will slow the pace of its nuclear power reactor construction program due to the financial crisis. Meanwhile, the country's president has laid down three priorities for Russia's nuclear industry.

Sergei Kiriyenko, director general of the Rosatom corporation, told a meeting of the Committee on Modernization and Technological Development of Economy that the rate of nuclear reactor construction in Russia would be reduced from two per year to just one.

"We are implementing the program of nuclear power plant construction in Russia in compliance with our task. The task has not been changed and we will have to build all of the 26 units stipulated by the program," Kiriyenko told the meeting at the Federal Research Institute of Experimental Physics in Sarov in the Nizhny Novgorod region.

In April 2007, the Russian government approved in principle a construction program to 2020 for electricity-generating plants. The program is designed to maximise the share of electricity from nuclear, coal, and hydro while reducing that from gas to make more available for export. This envisaged starting up one unit per year from 2009, two from 2012, three from 2015 and four from 2016. Present nuclear capacity is to increase at least 2.3 times by 2020.

However, he said, "now in the face of the financial crisis and declining energy demand, we have decided to put off the peak of the program for several years."

Russian President Dmitry Medvedev put forward three priority challenges for the country's nuclear power industry.

The first task, he said, was to improve the performance of pressurised water reactors over the next two to three years. The second, over the medium-term, is to develop a new technological basis for nuclear energy based on a closed fuel cycle with fast neutron reactors. Thirdly, the industry must develop nuclear fusion as a future energy source.

Medvedev optimistically called for a detailed timetable for the implementation of these programs to be submitted by the next meeting of the committee.

July 23, 2009

Skin cells from Human Stem Cells and Baby Mice Cloned from Mice Skin Stem Cells

1. -- Dental and tissue engineering researchers at Tufts University School of Dental Medicine and the Sackler School of Graduate Biomedical Sciences at Tufts have harnessed the pluripotency of human embryonic stem cells (hESC) to generate complex, multilayer tissues that mimic human skin and the oral mucosa (the moist tissue that lines the inside of the mouth). The proof-of-concept study is published online in advance of print in Tissue Engineering Part A.

Using a combination of chemical nutrients and specialized surfaces for cell attachment, an hES cell line (H9) was directed to form two distinct specialized cell populations. The first population forms the surface layer (ectodermal, the precursor to epithelial tissue), while the second is found beneath the surface layer (mesenchymal).

Following the isolation and characterization of these cell populations, the researchers incorporated them into an engineered, three-dimensional tissue system where they were grown at an air-liquid interface to mimic their growth environment in the oral cavity. Within two weeks, tissues developed that were similar in structure to those constructed using mature cells derived from newborn skin, which are the current gold standard for tissue fabrication.

"These engineered tissues are remarkably similar to their human counterparts and can be used to address major concerns facing the field of stem cell biology that are related to their clinical use. We can now use these engineered tissues as 'tissue surrogates' to begin to predict how stable and safe hESC-derived cells will be after therapeutic transplantation. Our goal is to produce functional tissues to treat oral and skin conditions, like the early stages of cancer and inflammatory disease, as well as to accelerate the healing of recalcitrant wounds," said Garlick.

2. Mice have been cloned from adult mouse skin cells that have been reprogrammed to turn them into a versatile embryo-like state, marking an important advance in stem-cell research.

The results demonstrate for the first time that it is possible for adult tissue to develop into the full range of the body’s different cell types, in a manner similar to embryonic stem cells.

If the technique were to be repeated in humans, it could offer the prospect of a limitless supply of an individual’s own stem cells and be used to treat conditions such as Parkinson’s disease, paralysis and diabetes.

In the Chinese study, details of which were published online yesterday in the journal Nature, skin cells were taken from adult mice. These were then reprogrammed to turn them into a versatile, embryo-like state by modifying four key genes using viruses.

Previously, iPS (induced pluripotent stem) cells have been shown to be capable of turning into different cell types in culture, such as blood, skin and muscle cells. But until now they had not passed the ultimate test of versatility — that of being turned into a living creature.

In order to create a suitable environment for the stem cells to grow into an embryo, they were injected into a blastocyst — a group of cells that can only become placental tissue. This was implanted into an adult female mouse, which went on to give birth to pups that were clones of the mouse from which the skin cell had been taken.

While mice and other mammals — but not human beings — have been cloned before, this has always involved inserting DNA from an adult cell into an empty egg.

“This paper demonstrates that mouse cells can be reprogrammed to reacquire the characteristics of genuine embryonic stem cells — namely the ability to form an entire mouse,” said Professor Ernst Wolvetang, a stem-cell specialist at the Australian Institute of Bioengineering and Nanotechnology.

In total, 27 mice have been created using the new technique. They have since gone on to produce about 200 offspring, which, in turn, have also reproduced. The majority of the mice showed no obvious health problems.

Interview with Ross Tierney of Direct Launch by Sander Olson

Here is an interview with Ross Tierney. Mr. Tierney is a representative of the Direct Launcher organization, which has a proposal to get to the moon using NASA shuttle components and other existing technology. This Jupiter rocket system could also be used to go to near-earth objects and possibly even Phobos and Mars. The Direct Launch system is based on the Jupiter rocket, which can provide all of the capabilities of the NASA Ares system in less time and at a fraction of the cost.

Question: Tell us about Direct Launcher. How did it get started, and what is its main objective?

Answer: The Direct Launcher concept began about four years ago, when I began collaborating with NASA engineers on better ways to get payloads into orbit. I put some ideas on a discussion board regarding creating a launch system based largely on shuttle and other off-the-shelf components, and some NASA engineers responded positively to them. Many of these ideas had been proposed by the NASA engineers but NASA management had not been receptive. When Mike Griffin became NASA's head, he came in with the Ares launch system proposal, which basically entails creating an entirely new series of rockets from scratch. So NASA rejected our concepts in favor of Ares. But we now have a team of 78 individuals who have devised a detailed launch system that is fundamentally superior to the Ares launch system, and we are trying to get the Obama administration and NASA to accept it.

Question: Direct Launcher's most recent analysis indicates that the Jupiter heavy lift vehicle could be developed for only $12 billion. How much confidence do you have in these cost estimates?

Answer: We are extremely confident in these estimates. Our system makes extensive use of proven hardware, such as the Space Shuttle Main Engines (SSME) and booster rockets. These components have been functioning reliably for decades, and are already extensively tested and man-rated. Moreover we can take full advantage of the launch infrastructure which already exists. The Ares concept, by contrast, will require all new hardware and a new infrastructure, and that will be quite expensive in terms of both time and money.

Question: You estimate that per launch costs of a heavy-lift Jupiter will run $240 million, less than half the anticipated cost of an equivalent Ares V launch. Are these cost estimates realistic?

Answer: The Ares V heavy-lift launch vehicle is a huge rocket that will require the development of many new technologies. For example, the upper stage will need to be made of composite materials, which haven't been used on rockets before. All of our cost estimates are based on existing heritage equipment. With our Jupiter 246 rockets, we are using RL-10 engines, which have been reliably operated for four decades and are quite inexpensive. Moreover, our designs are quite robust - they can survive multiple engine failures during flight.

Question: Have any engineers found inconsistencies or flaws with your proposed designs?

Answer: None so far. We currently have 69 engineers involved with a broad range of structural experience. Besides NASA engineers, we also have private contractors involved. These individuals have comprehensively examined the plans and are all confident in them. We also have a major contractor who has done their own study of our approach. So multiple sources have independently verified our launch system. Some within NASA have tried to portray this concept as “breaking the laws of physics”, but we find it ironic that NASA was able to make this concept work in 1991 and now tries to claim it can’t work – all because their management have their own competing proposal.

Question: What launch frequencies can be expected from these rockets? Will weekly launches be feasible with this system?

Answer: Although this system is theoretically capable of launching 24 times a year, we do not see that frequency as being realistic due to cost constraints. Even though per-unit costs go down with increasing launches, total costs still rise. We are confident that we can launch about twelve times per year for around $4 billion, and we will also be able to launch payloads within days of each other when the need arises.

Question: A team from Direct Launcher is meeting with NASA officials. What do you hope to accomplish with this meeting?

Answer: We have ongoing meetings with various officials. The Augustine Commission and Aerospace Corporation are in essence trying to create an objective baseline on which to evaluate all cost proposals. Although it is still preliminary we believe that the Aerospace Corporation has been able to validate our figures.

Question: To what extent could the Direct Launch system be used to launch commercial satellites?

Answer: Although this system is not appropriate for commercial launches, it could be used for a variety of tasks besides supplying the space station and sending astronauts to the moon. For instance, it could be used to launch space telescopes pointed either at space targets or Earth targets. The Jupiter launch system is primarily a government asset that will provide a capability that the commercial industry can't. But hopefully this system will provide time for the commercial launch industry to advance the technology and upgrade their capabilities.

Question: If NASA accepts your proposals, when is the earliest that a Jupiter rocket could fly? How much upfront funding is required?

Answer: We are highly confident that we will be able to send a crew to the International Space Station (ISS) in 2014. Our plan would conservatively cost about $8.3 billion, and would include three test flights before 2014.

Question: Your design relies extensively on Space Shuttle Main Engines. Aren't these engines excessively complex and expensive?

Answer: They are expensive and complex, but they have several virtues. First and foremost, they have been flying for nearly thirty years and have established an impressive reliability record, so they are proven technology and already man-rated. Second, the specific impulse of the SSME is 453 seconds, making it one of the most efficient conventional rocket engines ever constructed. Finally, a factory to construct SSMEs already exists, so the infrastructure necessary to build and maintain them is already in place. Furthermore, when one factors in economies of scale, costs go down considerably.
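[For reference: a specific impulse quoted in seconds converts to effective exhaust velocity via ve = Isp · g0. A quick sketch using the 453-second figure from the interview:]

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s):
    """Effective exhaust velocity (m/s) from specific impulse given in seconds."""
    return isp_s * G0

ve_ssme = exhaust_velocity(453)  # vacuum Isp quoted in the interview
print(f"SSME effective exhaust velocity: {ve_ssme:.0f} m/s")  # 4442 m/s
```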

Question: But the SSMEs were never designed as expendable hardware.

Answer: Pratt & Whitney Rocketdyne, the makers of the SSME, have indicated that they will have seventeen SSMEs available after the final shuttle flight. Since each flight consumes three engines, that would last us for several years. PWR say they will require five years to modify and test the new, inexpensive, expendable versions of the SSME. These new expendable versions could cost only 2/3 as much as current versions and could be mass-produced.

Question: What avionics system will the Jupiter rockets use? The Space Shuttle avionics are obsolete.

Answer: We believe that this system will require an entirely new avionics system, and this will be the single biggest technical challenge and development cost of the program. Avionics are critically important, and current avionics systems are not adequate. So this will definitely be the largest single expense of the program. But even with this requirement, the Jupiter spacecraft could still be launched at least six months before the Orion spacecraft.

Question: What is your opinion of commercial space launchers such as SpaceX? Couldn't they provide many of the services that NASA now performs?

Answer: I am delighted to hear about the success that private space contractors such as SpaceX are achieving. These new companies are creating a competitive atmosphere that will compel established corporations to improve their designs. We hope to establish an industry in which Jupiter rockets launch spacecraft and commercial rockets launch consumables, such as fuel. That reduces the cost for space missions while providing business for these commercial rockets.

Question: What is the likelihood of NASA endorsing your vision?

Answer: We are confident that the Augustine Commission will endorse this. Our cost and performance estimates are quite conservative, and our timelines are realistic. If the Augustine Commission officially endorses the DIRECT plans and the Jupiter rocket, then NASA will have to follow the guidance of this Presidential committee. There are currently certain managers within NASA who are fixated on the Ares rocket, even though the Ares program is prohibitively expensive. If such managers are unwilling to change their opinion despite the endorsement of a Presidential committee, then perhaps there isn't a place at NASA for them any longer.

Question: If the Jupiter rocket scheme is accepted and properly funded, how will it affect space exploration during the next decade?

Answer: I see a major revolution in the space exploration industry. We could begin to explore the solar system in a serious manner. We will be able to launch massive payloads into orbit. Missions to the moon and near-earth objects will become feasible. We will also be able to lay the groundwork for missions to Phobos and Mars. It might take 20-30 years, but it will happen. This marks a radical change from the Shuttle era, when we were limited to taking extremely expensive trips to low-earth orbit. This truly represents a once-in-a-generation opportunity to jump-start a new era of exploration and eventual colonization of our solar system.

Background on Shuttle Derived Vehicles

Shuttle-derived launch vehicles have been around since before the Shuttle-C (1984-1995); I recall seeing shuttle variants proposed since before the Space Shuttle itself flew. Astronautix has an excellent illustrated history of the various shuttle variants.

Relevant Wikipedia Entries

DIRECT at Wikipedia

Three major versions of the DIRECT proposal have been released with the latest, Version 3.0, having been unveiled in May 2009. On 17 June 2009, the group presented its proposal at a public hearing of the Review of U.S. Human Space Flight Plans Committee, a panel reviewing US space efforts, in Washington, D.C.

Direct Launcher website

Monolayer Nanotechnology Will Enable Silicon to Maintain Conductance for Smaller Devices and Sustain Moore's Law Progress

Scientists at Rice University and North Carolina State University have found a method of attaching molecules to semiconducting silicon that may help manufacturers reach beyond the current limits of Moore's Law as they make microprocessors both smaller and more powerful.

The electronic properties of silicon, such as the conductivity, are largely dependent on the density of the mobile charge carriers, which can be tuned by gating and impurity doping. When the device size scales down to the nanoscale, routine doping becomes problematic due to inhomogeneities. Here we report that a molecular monolayer, covalently grafted atop a silicon channel, can play a role similar to gating and impurity doping. Charge transfer occurs between the silicon and the molecules upon grafting, which can influence the surface band bending, and makes the molecules act as donors or acceptors. The partly charged end-groups of the grafted molecular layer may act as a top gate. The doping- and gating-like effects together lead to the observed controllable modulation of conductivity in pseudo-metal-oxide-semiconductor field-effect transistors (pseudo-MOSFETs). The molecular effects can even penetrate through a 4.92-μm thick silicon layer. Our results offer a paradigm for controlling electronic characteristics in nanodevices at the future diminutive technology nodes.

The paper suggests that monolayer molecular grafting -- basically, attaching molecules to the surface of the silicon rather than mixing them in -- essentially serves the same function as doping, but works better at the nanometer scale. "We call it silicon with afterburners," Tour said. "We're putting an even layer of molecules on the surface. These are not doping in the same way traditional dopants do, but they're effectively doing the same thing."

Tour said years of research into molecular computing with an eye toward replacing silicon has yielded little fruit. "It's hard to compete with something that has trillions of dollars and millions of person-years invested into it. So we decided it would be good to complement silicon, rather than try to supplant it."

He anticipates wide industry interest in the process, in which carbon molecules could be bonded with silicon either through a chemical bath or evaporation. "This is a nice entry point for molecules into the silicon industry. We can go to a manufacturer and say, 'Let us make your fabrication line work for you longer. Let us complement what you have.'

"This gives the Intels and the Microns and the Samsungs of the world another tool to try, and I guarantee you they'll be trying this."

Carbon Nanotubes integrated with MEMS

Researchers in Israel have integrated suspended carbon nanotubes into micro-fabricated (MEMS) devices. They grow the carbon nanotubes directly onto the MEMS. Chirality is not controlled in their method. Other researchers have used DNA to sort carbon nanotubes by chirality, but those methods have not been integrated.

The full paper is available for 30 days with a free registration.

The integration of suspended carbon nanotubes into micron-scale silicon-based devices offers many exciting advantages in the realm of nano-scale sensing and micro- and nano-electromechanical systems (MEMS and NEMS). To realize such devices, simple fabrication schemes are needed. Here we present a new method to integrate carbon nanotubes into silicon-based devices by applying conventional micro-fabrication methods combined with a guided chemical vapor deposition growth of single-wall carbon nanotubes. The described procedure yields clean, long, taut and well-positioned tubes in electrical contact to conducting electrodes. The positioning, alignment and tautness of the tubes are all controlled by the structural and chemical features of the micro-fabricated substrate. As the approach described consists of common micro-fabrication and chemical vapor deposition growth procedures, it offers a viable route toward MEMS–NEMS integration and commercial utilization of carbon nanotubes as nano-electromechanical transducers.

In separate work, European researchers have used carbon nanotubes to weigh a single atom. Eventually, arrays of carbon nanotubes could be used to determine the composition of the atoms in any gas. The two methods could work together to make very sensitive sensors.

Weighing atoms - separate European work

A noted impediment in the present technique is the inability to control the CNT chirality. Thus, the electrical properties of the tethered tubes vary greatly between different devices. It should be noted that the mechanical properties of CNTs, on the other hand, do not depend much on the chirality. Although undesired, this large variability in device performance is not a major impediment, considering that MEMS devices suffer from exactly the same problem.

In fact, commercial devices are often individually tested and calibrated, so the variability between CNT devices can readily be overcome during this testing stage. Raman spectroscopy can also readily be implemented to characterize the tubes and to identify their metallic or semiconducting properties; a detailed study of the Raman maps of our devices is currently underway and will be discussed elsewhere. With the fabrication process described here at hand, the manner by which CNTs can be effectively integrated into MEMS devices strongly depends on the MEMS device design, so that the CNT response to mechanical deformation is fully and efficiently exploited. Additionally, in these devices the tubes have to be properly anchored, and the range of deformation limited to avoid plastic effects.

To summarize, we used a simple CVD CNT growth method to achieve the integration of carbon nanotubes into micro-fabricated devices. Material selectivity of the CNT growth enabled us to tailor novel electronic devices as a step toward building CNT-based NEMS devices. The procedure described above is suitable for straightforward utilization of CNTs as electromechanical elements in otherwise silicon-based fabrication, thus opening up the prospect of commercial utilization of CNT technology in micro-electronic applications.

July 22, 2009

Anti-poverty Devices: Smart-phone microscope with Fluorescent Imaging and a Combo Stove/Refrigerator/Electricity Generator

1. It’s a cooker, a fridge and a generator in one — and it could have a huge impact on the lives of people in the world’s poorest communities.

Two billion people use open fires as their primary cooking method. These fires have been found to be highly inefficient, with 93 per cent of the energy generated lost. And when used in enclosed spaces, smoke from the fires can cause health problems.

The unit would be capable of converting heat into acoustic energy and then electricity, for around one hour’s use per kilogram of fuel. The cost target for the generator is £20 [USD33] per household.

Score Technical Targets:
- Cost: £20 per household in quantities of 1 million; weight: 10-20 kg
- Power output: 150 W electrical; 1.6 kWth for cooking and 0.75 kWth for simmering
- Fuel consumption: 1 kg/hour of wood, dung and other biomass
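For scale, assuming a wood lower heating value of roughly 16 MJ/kg (my assumption; the Score targets do not state one), the fuel and output targets imply this rough energy budget for the cooking mode:

```python
LHV_WOOD = 16e6   # J/kg, assumed lower heating value of dry wood (not from the article)
FUEL_RATE = 1.0   # kg/hour, from the Score targets above

input_power = LHV_WOOD * FUEL_RATE / 3600  # average thermal input in watts

useful_w = {
    "cooking": 1600,     # 1.6 kWth target
    "electricity": 150,  # 150 W electrical target
}
total_useful = sum(useful_w.values())

print(f"thermal input:  {input_power:.0f} W")                  # ~4444 W
print(f"useful output:  {total_useful} W "
      f"(~{100 * total_useful / input_power:.0f}% of input)")  # ~39%
```

Even this rough sketch is several times the roughly 7 per cent useful fraction implied by the 93 per cent loss figure quoted above for open fires.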

2. From MIT Technology Review: The Cellscope uses a blue-light LED and filters for fluorescence imaging. The sample is inserted next to the metal focusing knob.

The contraption--a tube-like extension hooked onto the cell phone with a modified belt clip--works just like a traditional microscope, using a series of lenses that magnify blood or spit samples on a microscope slide. To detect TB, for example, a spit sample is infused with an inexpensive dye called auramine. An "excitation" wavelength is emitted by the light source--a blue light-emitting diode (LED) on the opposite end of the device from the cell phone--and absorbed by the auramine dye in the spit sample, which fluoresces green to illuminate TB bacteria.

Self Assembled Superlens

From MIT Technology Review: Korean researchers have created nanoscale lenses with superhigh resolution using a novel self-assembly method. So far, they've demonstrated that the tiny lenses can be used for ultraviolet lithography, for imaging objects too tiny for conventional lenses, and for capturing individual photons from a light-emitting nanostructure called a quantum dot.

The new lenses, developed by researchers at the Pohang University of Science and Technology in Korea, overcome the diffraction limit because of their size. The lenses are flat on one side and spherical on the other and range in diameter from about 50 nanometers to three micrometers.
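For context, the far-field resolution limit that these lenses beat can be estimated from the Abbe formula, d = λ/(2·NA). A minimal sketch; the wavelength and numerical aperture values are illustrative assumptions, not from the paper:

```python
# Abbe diffraction limit for a conventional lens: d = wavelength / (2 * NA).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2 * numerical_aperture)

# Green light through a good conventional objective (NA ~ 1.0, assumed):
d = abbe_limit_nm(500, 1.0)
print(f"Conventional resolution limit: {d:.0f} nm")
# The smallest self-assembled lenses (~50 nm) are well below this limit.
```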

Kim's team makes the tiny spherical lenses by evaporating a solution containing cup-shaped organic molecules. First, the molecules, which are based on carbon rings, are dissolved in an organic solvent; then water is added, and the solution is allowed to slowly evaporate. During the evaporation process, the organic molecules form crystalline nanotubes that form the lenses. By changing the temperature and the evaporation rate, Kim says, it is possible to control the lenses' ultimate size. Once the lenses have formed, they're stable.

21 page pdf with supplemental information.

Singularity Summit 2009 in New York

The fourth annual Singularity Summit, a conference devoted to the better understanding of increasing intelligence and accelerating change, will be held in New York on October 3-4 in Kaufmann Hall at the historic 92nd St Y.

David Chalmers will be there talking about uploading. The Summit is branching out into topics like biotechnology and decision theory instead of just robotics and AI. Peter Thiel, who backed SENS (Strategies for Engineered Negligible Senescence) with $3.5 million in funding, will be speaking as well.

Ned Seeman [Prolific DNA Nanotechnology pioneer], Robin Hanson and many others will be speaking.

Here are highlights of the 2008 Singularity Summit.

Kurzweil AI has some information as well.

Composite of Diamond and Copper Helps Make Heat Transfer 100 Times Better

A sample of a novel material that may be used to remove heat from future radar systems. (Georgia Tech Photo: Gary Meek)

Researchers at the Georgia Tech Research Institute (GTRI) are developing a novel material for transferring heat away from ultra-high-power defense electronics. The exotic material, a composite of diamond and copper, is one of the materials under development as part of a new concept called a “Thermal Ground Plane” that aims to remove heat up to 100 times more effectively than present thermal-conducting schemes.

Georgia Tech is working with the Raytheon Co. on a project that seeks to raise thermal conductivity capabilities to 20,000 watts per meter Kelvin (a measure of thermal-conductivity efficiency). That’s a tall order, considering that the current conductivity champion, for radar applications, is a copper material with performance of approximately 200 to 300 watts per meter Kelvin.

The three-phase, four-year project is sponsored by the Microsystems Technology Office of the Defense Advanced Research Projects Agency (DARPA).

This improved cooling capability could benefit future high-power transmit-receive (T/R) module packages. Because of their higher power, those transmit-receive modules will also have higher cooling needs that may require a Thermal Ground Plane—a sort of heat-dissipating sandwich about one millimeter thick that would be part of the T/R module’s packaging.

"A Thermal Ground Plane is basically a materials system,” Nadler explained. “The most thermally conductive natural material, pure diamond, has a conductivity of about 2,000 watts per meter Kelvin. We’re aiming for 20,000, and to do that we have to look at the problem from a materials systems standpoint.”

The conductivity of that material would be improved with the addition of a liquid coolant able to carry heat away from the T/R module devices in the same way that sweat cools a body. A metal heat sink would help the liquid coolant dissipate the heat by condensing the vapor back to a fluid.

Using a liquid coolant takes advantage of phase changes—the conversion of matter between liquid and vapor states. The diamond-copper material would conduct heat to the liquid coolant and optimize cooling through wicking and evaporation. Then, the heat would be rejected as the vapor is re-condensed to a liquid on the side attached to the metal heat sink.

"The trick is to use evaporation, condensation and intrinsic thermal conductivity together, in series, in a continuous system,” Nadler said. “The whole device is a closed loop.”

In addition, the porous internal structure of the diamond-copper material must have exactly the right size and shape to maximize its own intrinsic heat conductivity. Yet its internal structure must also be designed in ways that can help draw the liquid coolant toward the heat source to facilitate evaporation.

Nadler explained that liquid coolant flow can be maximized by fine tuning such mechanisms as the capillarity of the diamond-copper material. Capillarity refers to a given structure’s ability to draw in a substance, especially a liquid, the way a sponge absorbs water or a medical technician pulls a drop of blood up into a narrow glass tube.

Pebble Bed Reactors Safety and Feasibility

There have been some articles that are critical of the design and safety of pebble bed reactors and that predict their demise.

Here is the rebuttal of the criticism of pebble bed reactors. Dr Albert Koster, PBMR (Pty) senior consultant, nuclear safety, replies directly to criticisms of the PBMR reactor.

Moormann’s views are old news, not supported by new, advanced work or by the preponderance of contemporary evidence, analysis and expert opinion established during the AVR and THTR operating periods and more recently. Further, modern tools and techniques, only conceived of a short time ago, make it possible to accurately predict system and component performance in ways unimagined when earlier designs were developed.

On 5 February 2009 PBMR issued a news bulletin stating that the company will be focusing on the design of a plant to service both the electricity and process heat markets. In the March issue of this magazine [pp22-3] Prof. Stephen Thomas insinuates that the change was motivated not by commercial considerations but because PBMR was aware of problems with pebble bed performance at high temperatures (see article), as alleged by Dr. Rainer Moormann in a second article [pp16-20]. In order to set the record straight it is necessary to chart a short history of the development of HTRs and PBMR in particular.

This brings us to the allegations of hiding facts and supposed safety problems in pebble bed reactors as so stridently described in the March issue. To this end, only the safety concerns raised by Moormann need to be addressed, as Thomas based his argument on the premise that Moormann is correct: that PBMR knew about safety problems all the time and opted to keep quiet about it.

In his articles, Moormann presents a number of different arguments but by some analysis, these can be boiled down to two major issues. The first is that both the AVR and THTR were shut down because of safety concerns. The second is that the AVR was highly contaminated and this was due to high fuel temperatures that caused excessive release of caesium and strontium from the fuel. He then advances reasons why the fuel temperatures might have been high and draws conclusions about the safety of future pebble bed reactors based on his speculations. The major contentions are addressed below; others have been covered at the HTR 2008 conference or in prior published articles.

Technical arguments aside, it appears that China will be proceeding with their commercial pebble bed reactor in September of this year. Presumably the people on this project, which involves a few thousand people and a few hundred million dollars, are doing their homework. The Shidaowan site will eventually install 18 additional modules, for a total of 3,800 MWe.

Upon startup of the Shidaowan plant, China will become the first country to commercially venture into HTR nuclear technology. The plant will be owned and operated by Huaneng Group, one of China’s largest independent utilities; China Nuclear Engineering and Construction Corp., China’s construction company for the nuclear island; and Tsinghua University.

The Shidaowan project received environmental clearance in March 2008 for construction start in 2009 and commissioning by 2013. The 200 MWe (two reactor modules, each of 250 MWt) plant will drive a single steam turbine at about 40 percent thermal efficiency. The reactor module, which was originally planned for 458 MWt, was reduced to 250 MWt in order to retain the same core configuration as the prototype HTR-10.
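The quoted figures are self-consistent, as a quick check shows:

```python
# Sanity check of the Shidaowan figures quoted above: two 250 MWt
# reactor modules at roughly 40% thermal efficiency driving one turbine.
modules = 2
mwt_per_module = 250
thermal_efficiency = 0.40

mwe = modules * mwt_per_module * thermal_efficiency
print(f"Electrical output: {mwe:.0f} MWe")  # matches the quoted 200 MWe
```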

The HTR-10 is powered by graphite balls about the size of standard billiard balls packed with tiny flecks of uranium, rather than with the conventional white-hot fuel rods used in existing nuclear reactors. Instead of water, the core is bathed in inert helium, which can reach much higher temperatures. The HTR-10 reached full power in 2003 and has an outlet temperature of 700 C to 950 C.

“First and foremost, this generator will be the safest nuclear power plant ever designed and built,” said Wu. The major safety issue regarding nuclear reactors lies in how to cool them efficiently, as they continue to produce residual heat even after shutdown. Gas-cooled reactors discharge surplus heat and don’t need additional safety systems like water-cooled reactors do. The HTR-10 was subject to a test of its intrinsic safety in September 2004 when, as an experiment, it was shut down with no cooling. Fuel temperature reached less than 1600 C and there was no failure.

“Using the existing operating HTR-10 reactor at the Institute of Nuclear and New Energy Technology of Tsinghua University in Beijing, we have already done what would be unthinkable in a conventional reactor—we switched off the helium coolant and successfully let the reactor cool down by itself,” said Wu.

Second, the modular design enables the plant to be assembled much more quickly and cost-effectively than traditional nuclear generators. Its streamlined construction timetable is also a first for the nuclear power industry, where designing and building generators usually take decades, rather than years.

The modules are manufactured from standardized components that can be mass-produced, shipped by road or rail, and assembled relatively quickly.

The pebble bed reactor program is number six in terms of priority for national projects. There have been some delays but it appears on track now.

South Africa continues to press forward. Their new version of the project is to try to get an 80MW version going in 2018.

There are 25 nuclear plants (mostly Westinghouse AP1000 and CPR-1000 reactors; the CPR-1000 is based on a transferred Areva design) forecast to be built in the next five years in China, compared to only two new plants scheduled to be built in the next 10 years in the U.S.

Moormann proposes areas where he feels more research is needed, some of which are addressed below.

Full evaluation of the operational experience and problems of AVR and THTR300.

PBMR (Pty) Ltd. has been in the process of evaluating the AVR for the last two years. The starting point was to collect all the design drawings and descriptions to (for the first time) enable the AVR to be modelled in detail. The latest results were presented at HTR-2008 [5]. Additional results explaining the fuel temperatures will appear soon in print.

Components from the AVR continue to be examined, at the request of PBMR (Pty) Ltd., for dust characterization, concentration and dust adherence to better understand the mobility of agglomerated dust, or the lack of it.

Experiments on iodine release from fuel elements in core heat-up accidents.

This is part of the planned PBMR fuel qualification tests where fuel will be placed in test reactors and subjected to expected operating temperature conditions. Afterwards the irradiated fuel will be subjected to post-irradiation heat-up testing to simulate design-based accident events where measurements of all significant nuclides will be made.

Full understanding and reliable modelling of core temperature behaviour, and of pebble bed mechanics, including pebble rupture.

The publications presented at HTR-2008 and the final model results show that this is already achieved. The terminology ‘pebble rupture’ is misleading; pebbles do not rupture. A very small percentage may fail due to mechanical handling and movement and the faulty pebbles are automatically removed from the core when they exit at the bottom of the core.

Commercial Shipping Uses 9% of world oil and is Major Air Pollution Source

Air pollution from commercial shipping kills 60,000 people per year.

Converting all commercial ships to run on nuclear power would be economic even without considering carbon taxes or fees.

In 2000, there were 6800 container ships in the world. At the cold war peak the Soviets had or had almost built about 400 nuclear powered ships and the USA had over 200.

Factory mass-produced small nuclear reactors like the one being developed by Hyperion Power Generation, variants of the pebble bed reactor being made in China, or new liquid fluoride thorium reactor proposals would all work for fully nuclearizing commercial shipping. There would also be the benefit that the ships would rarely need to stop for refueling and in general could operate at faster speed.

The slide show below presents more on commercial shipping and pollution and discusses more conventional ways to make it a little cleaner.

Britain and other European governments have been accused of underestimating the health risks from shipping pollution following research which shows that one giant container ship can emit almost the same amount of cancer and asthma-causing chemicals as 50m cars.

Confidential data from maritime industry insiders based on engine size and the quality of fuel typically used by ships and cars shows that just 15 of the world's biggest ships may now emit as much pollution as all the world's 760m cars. Low-grade ship bunker fuel (or fuel oil) has up to 2,000 times the sulphur content of diesel fuel used in US and European automobiles.
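The two pollution figures quoted are consistent with each other, as a quick check shows:

```python
# Consistency check of the figures quoted above: if 15 of the biggest
# ships emit as much sulphurous pollution as all 760 million cars, each
# ship is equivalent to roughly 50 million cars -- matching the "one
# giant container ship ~ 50m cars" claim in the previous paragraph.
ships = 15
cars_million = 760

cars_per_ship_million = cars_million / ships
print(f"~{cars_per_ship_million:.1f} million cars per ship")
```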

Pressure is mounting on the UN's International Maritime Organisation and the EU to tighten laws governing ship emissions following the decision by the US government last week to impose a strict 230-mile buffer zone along the entire US coast, a move that is expected to be followed by Canada.

The setting up of a low emission shipping zone follows US academic research which showed that pollution from the world's 90,000 cargo ships leads to 60,000 deaths a year in the US alone and costs up to $330bn per year in health costs from lung and heart diseases. The US Environmental Protection Agency estimates the buffer zone, which could be in place by next year, will save more than 8,000 lives a year with new air quality standards cutting sulphur in fuel by 98%, particulate matter by 85% and nitrogen oxide emissions by 80%

July 21, 2009

Nuclear Power for Commercial Shipping

There were four nuclear powered cargo ships. The NS Savannah was one of them.

NOTE: A follow up article has a video about air pollution from commercial shipping and has some other statistics on the scope of the effort needed and conventional alternatives for lessening but not eliminating the pollution.

Commercial shipping releases half as much particulates as all of the world's cars.

In 2000, there were 6800 container ships in the world. At the cold war peak the Soviets had or had almost built about 400 nuclear powered ships and the USA had over 200.

A nuclear powered container ship was analyzed by Femenia, C.R. Cushing & Co, Inc. in 2008.

Capacity 15,000 TEU (a big container ship)
Length 405 m
Beam 60 m
Draft 15.5 m
Speed 32 knots
Power 150 MW (200,000 SHP)
Propellers 2

Economic Issues
Capital Costs (Source: Femenia, C.R. Cushing & Co, Inc)
150,000 kW (200,000 HP)
1. Assumes Nuclear @ $2500 / kW
2. Assumes Diesel @ $800 / kW
3. Assumes Plant Life 40 Years
4. Assumes Interest Rate 10%
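Under those assumptions, the capital cost gap can be annualized with a standard capital recovery factor. A sketch; fuel and crew costs, which drive the actual economic case for nuclear, are not included here:

```python
# Annualizing the Femenia capital-cost assumptions quoted above.
def capital_recovery_factor(rate, years):
    """Fraction of capital repaid per year for a loan at the given rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

POWER_KW = 150_000
crf = capital_recovery_factor(0.10, 40)   # 10% interest, 40-year plant life

nuclear_capex = POWER_KW * 2500   # $2500/kW
diesel_capex = POWER_KW * 800     # $800/kW

print(f"Capital recovery factor: {crf:.4f}")
print(f"Nuclear: ${nuclear_capex / 1e6:.0f}M capital, ${nuclear_capex * crf / 1e6:.1f}M/yr")
print(f"Diesel:  ${diesel_capex / 1e6:.0f}M capital, ${diesel_capex * crf / 1e6:.1f}M/yr")
```

The nuclear plant carries roughly triple the annualized capital charge, which the fuel-cost savings must overcome.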

Fujitsu Building 10 Petaflop Supercomputer and IBM, Fujitsu and Intel's Fastest Chips

Fujitsu torus network (right)

Fujitsu Sparc VIIIfx chip (left)

Fujitsu and Japan's Institute of Physical and Chemical Research, known as RIKEN, announced that RIKEN has decided to employ a new system configuration with a scalar processing architecture for its next-generation supercomputer. Despite the NEC/Hitachi withdrawal, the plan is to get a "partially operational system" by late 2010, and the complete production system ready by 2012.

* Supercomputer will boast a performance of 10 petaflops upon completion in 2012, as initially planned

* Scalar system will be built from the world's fastest CPUs (128 gigaflops)

* CPUs will feature error-recovery function; network will have excellent fault tolerance and operability

The system will adopt Fujitsu's SPARC64™ VIIIfx CPU (8 cores, 128 gigaflops), which is manufactured using the company's 45-nm process technology. As the world's highest-performance general-purpose CPU, the processor offers both performance and energy efficiency, achieving a computational speed of 128 gigaflops per CPU. The inclusion of an error-recovery function also enhances its operability.
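From the quoted per-CPU figure, the scale of the machine follows directly (peak numbers only; sustained performance and interconnect overhead are ignored in this sketch):

```python
# Scale implied by the figures quoted above: a 10-petaflop system built
# from 128-gigaflop, 8-core SPARC64 VIIIfx CPUs.
target_flops = 10e15     # 10 petaflops
per_cpu_flops = 128e9    # 128 gigaflops per CPU
cores_per_cpu = 8

cpus = target_flops / per_cpu_flops
print(f"CPUs needed:  {cpus:,.0f}")
print(f"Total cores:  {cpus * cores_per_cpu:,.0f}")
```

On the order of 78,000 CPUs, which is why the fault tolerance of the CPUs and network gets so much emphasis in the announcement.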

Venus was advertised as the fastest CPU on the planet at 128 gigaflops.

It's doubtful Venus will hold that title when it is deployed in Japan's prototype machine late next year. By 2010 the eight-core Power7 chips should be in the field, and IBM is saying those processors will deliver over 256 gigaflops per CPU. The Power7 will be used in the multi-petaflop "Blue Waters" supercomputer for NCSA, which is scheduled to be running full tilt in 2011. Even Intel's Xeon chips should be well into triple-digit gigaflops when the Westmere 32nm Xeon processors hit the streets in 2010.

What may set Venus apart from its competition is its energy efficiency. Fujitsu is claiming the SPARC64 VIIIfx design allows it to operate at less than one-third the power of current Intel processors. The company didn't specify which Intel parts it was referring to, but since even the high-end Itanium CPUs top out at about 122 watts, the Venus chip should draw no more than 40 watts or so.

Aside from Fujitsu silicon, the next-gen Japanese super will also feature a multidimensional mesh/torus network as well as custom system software to glue it all together. The fact that there will no longer be vector hardware to contend with will undoubtedly make this software simpler than it otherwise would have been.

But there will be some attempt to accommodate applications developed for NEC's SX vector machines. According to the press announcement: "Although the next-generation supercomputer will consist only of scalar units, through the use of application parallelization and tuning it will support applications that have run on previous supercomputers with vector units. Other ways to assist users of vector-based supercomputers are also being considered."

Implantable Device for Monitoring Cancer and Biomarkers Developed

MIT researchers have developed a device, right, that can be implanted into a tumor to monitor how it responds to treatment

MIT, Boston General Hospital and other researchers have developed an implantable device for monitoring cancer.

The researchers describe an implantable diagnostic device that senses the local in vivo environment. The device, which could be left behind during a biopsy, uses a semi-permeable membrane to contain nanoparticle magnetic relaxation switches. In testing, a cell line secreting a model cancer biomarker produced ectopic tumors in mice. Short-term applications for the device are numerous, including verification of successful tumor resection. It may represent the first continuous monitoring device for soluble cancer biomarkers in vivo.

MIT has an article on this from May 2009.

The cylindrical, 5-millimeter implant contains magnetic nanoparticles coated with antibodies specific to the target molecules. Target molecules enter the implant through a semipermeable membrane, bind to the particles and cause them to clump together. That clumping can be detected by MRI (magnetic resonance imaging).

The device is made of polyethylene, a polymer commonly used in orthopedic implants. The semipermeable membrane, which allows target molecules to enter but keeps the magnetic nanoparticles trapped inside, is made of polycarbonate, a common plastic.

Cima said he believes an implant to test for pH levels could be commercially available in a few years, followed by devices to test for complex chemicals such as hormones and drugs.

Recent PhD recipient Christophoros Vassiliou, right, holds the cancer monitoring device that he and Professor Michael Cima, left, and recent PhD recipient Grace Kim developed.

Photograph of in vivo device (a) and schematic of MRSw aggregation (b). Two populations of MRSw, each functionalized with a different monoclonal antibody for the β-subunit of hCG. Both particle populations must be present for aggregation of the MRSw to occur.

July 20, 2009

Radiation Sickness Cures and Anti-radiation Pills

1. Dr. Andrei Gudkov has developed medication that suppresses the "suicide mechanism" of cells hit by radiation, while enabling them to recover from the radiation-induced damage that prompted them to activate the suicide mechanism in the first place.

The first series of tests included experiments on more than 650 monkeys. Each test featured two groups of monkeys exposed to radiation, but only one group was given the medication. The radiation dosage was equal to the highest dosage sustained by humans as a result of the Chernobyl accident.

The experiment's results were dramatic: 70% of the monkeys that did not receive the cure died, while the ones that survived suffered from the various maladies associated with lethal nuclear radiation. However, the group that did receive the anti-radiation shot saw almost all monkeys survive, most of them without any side-effects. The tests showed that injecting the medication between 24 hours before the exposure to 72 hours following the exposure achieves similar results.

Another test on humans, who were given the drug without being exposed to radiation, showed that the medication does not have side-effects and is safe. Prof. Gudkov's company now needs to expand the safety tests, a process expected to be completed by mid-2010 via a shortened test track approved for bio-defense drugs. Should experiments continue at the current rate, the medication is estimated to be approved for use by the FDA within a year or two.

Technical information on the radiation sickness cure is here

The drug CBLB502 activates TLR5 and the NF-kB pathway but did not cause an excessive response from the immune system. It was also far less toxic than raw flagellin, and mice could tolerate double the dose. The drug activated a parade of protective proteins that greatly reduced the levels of apoptosis in the vulnerable intestines of irradiated mice, and protected the stem cells in both their guts and their bloodstreams.

With a single shot of CBLB502 at less than 1% of the maximum dose, 87% of mice managed to survive an otherwise lethal 13 Gray of radiation. The drug completely outclassed all known protective chemicals. Even the maximum possible dose of the second-best chemical - amifostine - only saved 54% of the irradiated mice.

The drug could also be given to cancer patients being treated with radiation, to protect healthy cells from dying off. Burdelya treated tumour-bearing mice with three daily radiation doses of 4 Gray to mimic the regular treatments that cancer patients often go through. When mice were injected with a simple saline solution, the accumulated damage killed them all, but when the doses were pre-empted by injections of CBLB502, every one of the mice survived.

The celebrations would be short-lived, however, if the drug defended tumour cells in a similar way. Fortunately, that wasn't the case, and the tumours in protected mice succumbed to the radiotherapy as usual. If anything, the addition of CBLB502 killed slightly more cancer cells than usual, which may be due to small immune boosts triggered by the compound's resemblance to flagellin.

Burdelya also saw to a final worry. Apoptosis exists for a reason, and there is a risk that protected cells could survive the effects of radiation but live with damage that will lead to cancer in the long run. With this in mind, Burdelya tested the drug on a strain of cancer-prone mice and found that after a burst of radiation, they did not develop tumours any quicker or more frequently than they normally would.

Abstract: An Agonist of Toll-Like Receptor 5 Has Radioprotective Activity in Mouse and Primate Models.

The drug CBLB502 seems to be far ahead of the next two treatments in terms of proven effectiveness and in terms of deployment. However, the next two work on different mechanisms and appear to be complementary to CBLB502. (You might take both: one to reduce the free radicals that cause damage, and CBLB502 to switch on genes that prevent cell death.)

2. Researchers from Boston University School of Medicine (BUSM) and collaborators have discovered and analyzed several new compounds, collectively called the "EUK-400 series," which could someday be used to prevent radiation-induced injuries to the kidneys, lungs, skin, intestinal tract and brains of radiological terrorism victims. The findings, which appear in the June issue of the Journal of Biological Inorganic Chemistry, describe new agents which can be given orally in pill form, which would be more expedient in an emergency situation.

3. There are still no online results for Rice University's DARPA-funded work on Nanovector Trojan Horses (NTH). These carbon nanotube-based drugs scavenge free radicals and mitigate the biological effects induced by the initial ionizing radiation.

Preliminary tests had found the drug was greater than 5,000 times more effective at reducing the effects of acute radiation injury than the most effective drugs currently available.

The drug is based on single-walled carbon nanotubes, hollow cylinders of pure carbon that are about as wide as a strand of DNA. To form NTH, Rice scientists coat nanotubes with two common food preservatives -- the antioxidant compounds butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) -- and derivatives of those compounds.

THz detection and Imaging by nanometer size FETs at room temperature

Room-temperature image of the contents of an envelope with millimeter resolution.

At Arxiv, "Field Effect Transistors for Terahertz Detection: Physics and First Imaging Applications" THz detection and imaging has been demonstrated by nanometer size FETs at room temperature. Other work has enabled terahertz resolution to be increased to the nanometer level. In Japan, there is work to combine carbon nanotubes and quantum dots into a room temperature terahertz video camera.

Experimental and theoretical results clearly indicate that nanometer transistors are promising candidates for a new class of efficient THz detectors. The natural next step is the realization of real-time imaging THz cameras. To understand whether FETs are the best candidates for this purpose, let us briefly consider other approaches that have already demonstrated their potential in THz real-time recording systems.

The simplest way is to use a commercial infrared 160×120-element microbolometer camera. Although the device is designed for wavelengths of 7.5-14 μm, it retains sensitivity to the THz radiation delivered by an optically pumped molecular THz laser. It was shown that in transmission mode, THz images can be obtained at a video rate of 60 frames/s; the signal-to-noise ratio is estimated to be 13 dB for a single frame of video at 10 mW power. An essential step in scaling down the dimensions of a real-time imaging system is the replacement of the optically pumped laser by a quantum cascade laser. For instance, a quantum cascade laser operating at 4.3 THz with a power of 50 mW allowed reaching a signal-to-noise ratio of 340 at a 20 frames/s acquisition rate and an optical NEP of 320 pW.

In another promising approach, based on a thin-film absorber on a silicon nitride membrane with thermopile temperature readout produced with CMOS technology, a 5 ms thermal time constant of the detector, together with a noise equivalent power of 1 nW/√Hz, enables real-time imaging at 50 frames/s with a signal-to-noise ratio of 10 for an optical intensity of 30 μW/cm². Very recently, THz images below 1 THz at room temperature were recorded using InGaAs-based bowtie diodes with a broken symmetry. The operating principle relies on nonuniform carrier heating in a specific diode structure merging an antenna concept for coupling of the radiation with a high-mobility 2DEG as the active medium. The response time was found to be less than 7 ns, the NEP about 5.8 nV/√Hz, the sensitivity in the range of 6 V/W, and the dynamic range about 20 dB at a bandwidth of 100 MHz.
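The imaging examples above mix linear and decibel signal-to-noise figures; converting between them is straightforward (SNR_dB = 10·log10(SNR)):

```python
# Converting between the linear and decibel SNR figures quoted above.
import math

def snr_to_db(snr_linear):
    return 10 * math.log10(snr_linear)

def db_to_snr(db):
    return 10 ** (db / 10)

print(f"13 dB (microbolometer, single frame) -> {db_to_snr(13):.0f}x")
print(f"340x (quantum cascade laser source)  -> {snr_to_db(340):.1f} dB")
```

So the quantum cascade laser setup's SNR of 340 is roughly 25 dB, about 17 times better than the 13 dB single-frame figure for the optically pumped system.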

In this context, FETs can be regarded as the most promising option.

An electrical current applied to the metamaterial – a hybrid structure of metallic split-ring resonators – controlled the phase of a terahertz (THz) beam 30 times faster and with far greater precision than a conventional optical device, the researchers report in the journal Nature Photonics.
The metamaterial devised by the research team electronically controlled the flow of terahertz radiation over roughly 70 percent of the frequency band, not simply at the points of maximum or minimum frequency.

Roberto Merlin of the University of Michigan has devised a different way of making a superlens that promises to focus light more efficiently, and to an even smaller spot – perhaps 500 times smaller than light's wavelength.

Terahertz (THz) near-field microscopy can resolve 40 nanometers based on THz scattering at atomic force microscope tips.

Carnival of Space 112

Carnival of Space 112 is up at Out of the Cradle.

As we are celebrating the 40th anniversary of the Apollo moon landing, the Carnival of Space 112 is loaded with a lot of Apollo and moon articles.

From NextBigFuture there are more details on what Powersat is planning to do to enable affordable space-based solar power. They want to launch to low earth orbit, inflate their system there, and use the power to drive electric propulsion to move up to geosynchronous orbit. They are also looking at wireless coupling of the modules to reduce or eliminate the cost of any orbital construction.

Centauri Dreams presents the idea of a fast orbiter mission to Haumea, a mysterious ellipsoidal object in trans-Neptunian space.

July 19, 2009

J Storrs Hall: Feynman Path to Molecular Nanotechnology

Here are links and summaries of the first ten parts of the Feynman path to molecular nanotechnology as conceived and written by Foresight President J Storrs Hall.

1. Feynman's proposal to achieve molecular manufacturing.

The idea is to start from the macroscale machining and fabrication and move to the nanoscale without ever losing the general fabrication and manipulation ability.

2. A historical note on the idea: Heinlein's fictional waldoes. The waldoes in the story were (a) self-replicating ("Reduplicating") and (b) scale-shifting ("Pantograph").

3. Why hasn’t the Feynman Path been attempted, or at least studied and analyzed?

* there still seems to be a “giggle factor” associated with the notion of a compact, macroscale, self-replicating machine using standard fabrication and assembly techniques
* In standard technology a factory is much bigger and more complex than whatever it makes
* KSRMs (kinematic self-replicating machines) are difficult
* KSRMs defy standard design methodologies

A major step toward the Feynman Path would be to work out a scalable architecture for a workable KSRM that actually closed the circle all the way. A reasonable start would be a deposition-based fab machine, a multi-axis mill for surface tolerance improvement, and a pair of waldoes. See how close you could get to replication with that, and iterate.

4. The Feynman path involves more than MEMS

A full machining and manipulation capability at the microscale would allow lapping, polishing, and other surface improvement techniques, which photolithography-based MEMS does not.

5. Is it worth starting now?

The bottom-up folks are not nearly as close to real nanotech as the nano-hype in the news suggests.

Top-down and bottom-up can meet in the middle. When nanoscientists succeed in making an atomically precise nanogear, for example, it means that when the Feynman Path machines get to that scale, they can take the gear off the shelf instead of having to fabricate it the hard way. In fact it seems likely that bottom-up approaches will be the way parts are made, and top-down the way they're put together.

I’ll stick my neck out and say at a wild guess that if only bottom-up approaches are pursued, we have 20 years to wait for real nanotech; but if the Feynman Path is actively pursued as well, it could be cut to 10.

6. Some of the Open Questions

1. Is it in fact possible to build a compact self-replicating machine using macroscopic parts fabricators and manipulators? We know that a non-compact one is possible — the world industrial infrastructure can replicate itself — and we know that a compact microscopic replicator can work, e.g. a bacterium. But the bacterium uses diffusive transport, associative recognition of parts by shape-based docking, and other complexity-reducing techniques that are not available at the macroscale.
2. Not quite the same question: how much cheating can we get away with? In KSRM theory, it’s common to specify an environment for the machine to replicate in and some “vitamins,” bits that the machine can’t make and have to be provided, just as our bodies can’t synthesize some of the molecules we need and must get them pre-made in our diets.

and others

7. Outline of the steps to make a Feynman Path roadmap.

1. Design a scalable, remotely-operated manufacturing and manipulation workstation capable of replicating itself anywhere from its own scale to one-quarter relative scale. As noted before, the design is allowed to take advantage of any “vitamins” or other inputs available at the scales they are needed.
2. Implement the architecture at macroscale to test, debug and verify the design. This would be a physical implementation, probably in plastic or similar materials, at desktop scale, and would include operator controls that would not have to be replicated.
3. Identify phase changes and potential roadblocks in the scaling pathway and determine scaling steps. Verify scalability of the architecture through these points in simulation. Example: electromagnetic to electrostatic motors. It would be perfectly legitimate to use externally supplied coils above a certain scale if they were available, and shift to electrostatic actuation, which would involve only conducting plates, below that scale, and never require the system to be able to wind coils.
4. Identify the smallest scale, using best available fabrication and assembly technology, at which the target architecture can currently be built.
5. Write up a detailed, actionable roadmap to the desired fabrication and manipulation techniques at the nanoscale.
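To get a feel for the scaling ladder in step 1, here is a rough back-of-the-envelope calculation. The starting workcell size (~30 cm, desktop scale) and the nanoscale target (~100 nm) are my illustrative assumptions, not figures from the roadmap; only the one-quarter scale step per generation comes from the post.

```python
import math

# Illustrative assumptions (not from the roadmap itself):
start_scale_m = 0.3      # ~30 cm desktop-scale starting workcell
target_scale_m = 100e-9  # ~100 nm nanoscale target

step = 4                 # each generation is 1/4 the linear scale of its parent

# Number of quarter-scale replication generations to span the range:
generations = math.ceil(math.log(start_scale_m / target_scale_m, step))
print(generations)  # 11
```

Under these assumptions the whole path from desktop to nanoscale takes only about eleven successful quarter-scale replications, which is what makes the approach look tractable at all.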

8. An example of prior work which suggests that 1/1000th scale is a good place to start on the Feynman Path.

In 1994 Japanese researchers at Nippondenso Co. Ltd. fabricated a 1/1000th-scale working electric car. As small as a grain of rice, the micro-car was a 1/1000-scale replica of the Toyota Motor Corp’s first automobile, the 1936 Model AA sedan.

9. Promising candidate technologies for fabricating key components, and considerations for the Feynman Path.

* It seems very likely that the motors we use will be electrostatic steppers
* A particularly important aspect of the Feynman Path is that not much more than halfway down to molecular scale in part size, we already hit atomic scale in tolerance
* A Feynman Path workcell actually avoids the problem that a standard solid-freeform-fab (SFF) design has with building something its own size, because it’s building a copy that’s smaller than itself
* electrodeposition (and electro-removal, as in EDM) and electroplating will be useful
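The claim that atomic-scale tolerance arrives "not much more than halfway down" can be checked with a quick sketch. The fractional tolerance (1 part in 10^5, typical of precision machining) and the ~0.2 nm atomic diameter are my assumptions for illustration:

```python
import math

# Assumptions for illustration: the workcell holds tolerances of roughly
# 1 part in 10^5 of the part size, and "atomic scale" means ~0.2 nm.
fractional_tolerance = 1e-5   # assumed relative tolerance
atom_m = 0.2e-9               # rough atomic diameter

# Part size at which achievable tolerance equals one atom:
crossover_m = atom_m / fractional_tolerance
print(crossover_m)  # 2e-05 m, i.e. ~20 micrometers

# How far down is that on a log scale from 1 m to 1 nm?
fraction_down = math.log(1.0 / crossover_m) / math.log(1.0 / 1e-9)
print(round(fraction_down, 2))  # ~0.52: just over halfway down
```

So at around 20 micrometer part sizes, machining "as accurately as you can" already means placing surfaces to within a single atom, consistent with the bullet above.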

10. The Feynman Path initiative is a specific, concrete proposal; more than that, it is one that can be done in an open-source way, at least for the first stage, the roadmap.

There’s a fundamental similarity between a Feynman Path machine (FPm) and a RepRap, obviously, in their orientation to self-replication. This includes the fact that both schemes require a human to be actively involved in the replication process, in the FPm by teleoperation. But there are some fundamental differences:

Attitude to cost: a RepRap is intended to be a means to cheap manufacturing, so it’s oriented to using the least expensive materials available. An FPm has much less concern about that: each successive machine in the series uses less than 2% the material of the previous one. It would be perfectly reasonable to design an FPm that had to carve all its parts out of solid diamond, once past the millimeter scale, for example. The goal is to understand principles, not supplant the economy (at least until the nanoscale is reached).
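The "less than 2% of the material" figure follows directly from cube-law scaling, assuming each generation is geometrically similar to its parent at one-quarter the linear scale:

```python
# Each generation is 1/4 the linear scale of its parent, so (assuming
# geometric similarity) volume -- and hence material -- scales as (1/4)**3.
linear_ratio = 1 / 4
material_ratio = linear_ratio ** 3
print(material_ratio)  # 0.015625, i.e. about 1.6%, under the 2% cited
```

This is why material cost stops mattering a few generations in: the entire remainder of the series uses only a small fraction of the material of the first machine.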

Attitude to closure: RepRap assumes human assembly labor, but an FPm has to provide its own manipulating capabilities. RepRap allows exogenous parts that are widely available and inexpensive; an FPm can use only exogenous parts that are available at every scale it works at.

Assembly time vs accuracy: As a consumer-goods production machine, RepRap has at least some concern for how long it takes to do its job. An FPm has much less concern about time, but much more about accuracy, since it has to improve its product’s tolerance over its own by a substantial factor.
