July 04, 2009

House and Senate Climate Bills and Stimulus Bill Energy Impact

The last few months and the next few months are seeing a flurry of energy bills and energy impacting legislation.

Stimulus Bill's Energy Impact
The stimulus bill, the American Recovery and Reinvestment Act of 2009 (ARRA), allocated a total of $9.45 billion to weatherize and/or increase the energy efficiency of low-income housing and to assist local governments in implementing energy efficiency programs.

ARRA allocates $3.1 billion for States to implement or enhance energy efficiency programs.

ARRA contains several changes to the plug-in hybrid electric vehicle (PHEV) tax credit originally included in the Energy Improvement and Extension Act of 2008 that have been included in the updated reference case. For example, ARRA allows a $2,500 tax credit for the purchase of qualified PHEVs with a battery capacity of at least 4 kilowatthours. Starting at a battery capacity of 5 kilowatthours, PHEVs earn an additional $417 per kilowatthour battery credit up to a maximum of $5,000. The maximum total PHEV credit that can be earned is capped at $7,500 per vehicle.
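The credit schedule described above can be sketched as a small calculation. The function name is illustrative, and the assumption that the $417 per kilowatthour bonus accrues for each kilowatthour above the 4 kilowatthour base (once capacity reaches 5 kilowatthours) is an interpretation of the text, not a reading of the statute:

```python
def phev_tax_credit(battery_kwh):
    """Sketch of the ARRA PHEV credit as described in the text.

    Assumes the $417/kWh bonus applies to each kilowatthour above the
    4 kWh base once capacity reaches 5 kWh (an interpretation; the
    statute's exact phase-in may differ).
    """
    if battery_kwh < 4:
        return 0                      # below the qualifying battery capacity
    credit = 2500                     # base credit for a qualifying PHEV
    if battery_kwh >= 5:
        bonus = 417 * (battery_kwh - 4)
        credit += min(bonus, 5000)    # bonus portion is capped at $5,000
    return min(credit, 7500)          # total credit is capped at $7,500
```

Under this reading, a 16 kilowatthour battery reaches the $7,500 cap.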

Prior to the passage of ARRA, the production tax credit (PTC) for certain renewable technologies was to expire on January 1, 2010. ARRA extended this date to January 1, 2013, for wind and January 1, 2014, for all other eligible renewable resources. In addition, ARRA allows companies to choose an investment tax credit (ITC) of 30 percent in lieu of the PTC and allows for a grant in lieu of this credit to be funded by the U.S. Treasury.

ARRA provides $6 billion to pay the cost of guarantees for loans authorized by the Energy Policy Act of 2005.

Wind generation with ARRA is expected to be more than twice that projected in the no-stimulus case: 201 billion kilowatthours compared to 86 billion kilowatthours, versus estimated generation of 53 billion kilowatthours in 2008.

ARRA reduces commercial sector energy expenditures by an average of $5.7 billion (2.7 percent) annually (real 2007 dollars) between 2010 and 2030.

Excluding transportation-related expenditures, total residential and commercial energy bills are $13 billion (2.6 percent) and $21 billion (3.8 percent) lower, respectively, in 2020.

In the AEO2009 reference case, with assumptions developed prior to the current economic downturn, domestic cellulosic ethanol production was projected to reach 150 million gallons in 2012. However, a review of projects proceeding towards construction suggests that, without assistance, only about 74 million gallons of domestic cellulosic ethanol production capacity will be built by 2012, because financing for these developers has become extremely difficult to obtain and some projects have been canceled. With the loan guarantees arising from the stimulus package, it is assumed that 2012 production rises back to about 110 to 170 million gallons, with additional capacity additions occurring under the same financing structure as in AEO2009.

ARRA provides $3.4 billion for additional research and development on fossil energy.

ARRA provides $4.5 billion for smart grid demonstration projects. The funds provided will not cover widespread implementation of smart grid technologies; in July 2004 the Electric Power Research Institute (EPRI) estimated that full deployment would cost $165 billion. However, successful deployment of several demonstration projects could stimulate more rapid investment than would otherwise occur. Smart grid technologies generally include a wide array of measurement, communications, and control equipment employed throughout the transmission and distribution system, enabling real-time monitoring of the production, flow, and use of power from generator to consumer.

In the updated reference case, it is assumed that the Federal expenditures on smart grid technologies will stimulate further efforts to lower losses, reducing them by an additional 10 to 15 billion kilowatthours, roughly one-third the maximum EPRI estimate. In a 2008 report, EPRI estimated that smart grid technologies could reduce line losses in 2030 by between 3.5 and 28.0 billion kilowatthours.

House and Senate Climate Bills
The Kansas City Star reports that the nonprofit American Council for an Energy-Efficient Economy examined the bill's efficiency provisions and concluded that they would save 1.4 million barrels of oil per day in 2030. That is roughly 10 percent of the projected use of 14.3 million barrels a day in that year, according to the government's Energy Information Administration.

The Environmental Protection Agency put the oil savings at 700,000 barrels a day by 2030. The EPA looked mainly at the bill's terms that would put a declining cap on the amount of emissions of heat-trapping gases allowed each year and create a pollution-permit trading system.

EPA's analysis showed only a modest decrease because the bill would have little impact on the price of gasoline - and thus little impact on people's driving behavior and choice of cars. EPA estimated that gasoline prices would go up about 25 cents a gallon in 2030 as a result of the bill.

The House-passed climate legislation focuses primarily on electricity generation. Its backers said they sought the quickest and cheapest ways to bring down U.S. emissions to 83 percent below 2005 levels by 2050.

The Senate bill will yield energy efficiency savings of about 2 quadrillion Btu ("quads") in 2020 and nearly 4 quads in 2030, according to a preliminary analysis released today by the American Council for an Energy-Efficient Economy (ACEEE). ACEEE estimates that this bill will save about half as much energy in 2020, and one-third as much in 2030, as the House-passed bill.

ACEEE estimates that 70% of the 2020 energy savings in the Senate bill will come from buildings, including a major building retrofit program, improvements to building codes, and a variety of other buildings provisions. Of the remaining savings, 18% are from new minimum efficiency standards on appliances and 12% from industrial programs.

Impact of 25% renewable energy requirement

The RES program analyzed in this report has the following characteristics:

The program begins in 2012 with the required renewable share starting at 6 percent and growing in scheduled increments to 25 percent in 2025. The program sunsets in 2040.

Power sellers with retail sales of at least 1 billion kilowatthours (1,000,000 megawatthours) are covered. Entities with retail sales below this level are exempt.

Generation from existing hydroelectric and municipal solid waste (MSW) facilities is not included in the base electricity sales but also does not earn compliance credits.
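The required-share schedule above gives only the 2012 and 2025 endpoints; assuming a linear ramp between them (the actual scheduled increments are not stated in the text), the requirement in any year can be sketched as:

```python
def res_required_share(year):
    """Required renewable share under the RES program described in the text:
    6% in 2012, rising to 25% by 2025, then flat until the program
    sunsets after 2040. The linear year-by-year ramp is an assumption;
    the text gives only the endpoints."""
    if year < 2012 or year > 2040:
        return 0.0                    # program not in effect
    if year >= 2025:
        return 0.25                   # requirement holds at its maximum
    # linear ramp assumption between the stated endpoints
    return 0.06 + (0.25 - 0.06) * (year - 2012) / (2025 - 2012)
```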

Most of the projected increase in wind generation is due to existing State renewable portfolio standard programs and the passage of ARRA. This occurs in both the reference case and the RES cases. Total wind generation in the two RES cases is projected to increase from 32 billion kilowatthours in 2007 to between 208 billion kilowatthours and 249 billion kilowatthours in 2030. Total biomass generation increases from 39 billion kilowatthours in 2007 to between 438 billion kilowatthours and 577 billion kilowatthours in 2030 in the two RES cases. The renewable provisions of ARRA do not have as large an impact on biomass as on wind, because the production subsidies provided for the co-firing of biomass are smaller and because new dedicated biomass plants generally take longer to develop than would be required to meet the deadline to qualify for production subsidies under ARRA.

The higher renewable generation stimulated by the Federal RES leads to lower coal and natural gas generation. In the two RES cases, coal generation ranges between 182 billion kilowatthours (8 percent) and 257 billion kilowatthours (11 percent) below the reference case level. Similarly, natural gas generation in the two RES cases in 2030 is between 55 billion kilowatthours (6 percent) and 150 billion kilowatthours (15 percent) below the level projected in the reference case.
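As a consistency check on the figures above, the reference-case generation implied by each reduction/percentage pair can be backed out; small disagreements between the two implied values for a fuel simply reflect rounding in the source percentages:

```python
def implied_reference(reduction_bkwh, fraction):
    """Back out the reference-case generation (billion kWh) implied by a
    stated reduction and its stated share of the reference level."""
    return reduction_bkwh / fraction

coal_low  = implied_reference(182, 0.08)   # ~2,275 billion kWh
coal_high = implied_reference(257, 0.11)   # ~2,336 billion kWh
gas_low   = implied_reference(55, 0.06)    # ~917 billion kWh
gas_high  = implied_reference(150, 0.15)   # 1,000 billion kWh
```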

Given the amount of eligible renewable generation projected in the reference case, the RES is not expected to affect national average electricity prices until after 2020. As the required RES share increases to its maximum value in 2025, the value of RES credits increases, and impacts on national average electricity prices become evident. The peak effect on national average electricity prices, 2.7 percent in the RESFEC case and 2.9 percent in the RESNEC case, occurs as the required renewable share ramps up more rapidly than the demand for electricity is growing. In the later years of the projections, the impact on national average electricity prices is smaller, as the impact of the RES requirement on the cost of coal and natural gas, fuels whose use is reduced by added renewables, is increasingly reflected in electricity prices. By 2030, electricity prices are projected to be little changed from the reference case in both RES cases, with 2030 prices less than 1 percent higher than in the reference case.

Main page for EIA Energy analysis.

Laser Switched Optical Transistor Could Enable Future Generation of Ultrafast Light-Based Computers

An artist's impression of a molecule acting as a transistor that makes it possible to use one laser beam to tune the power of another (Image: Robert Lettow)

An optical transistor that uses one laser beam to control another could form the heart of a future generation of ultrafast light-based computers, say Swiss researchers.

Conventional computers are based on transistors, which allow one electrode to control the current moving through the device and are combined to form logic gates and processors. The new component achieves the same thing, but for laser beams, not electric currents.

A green laser beam is used to control the power of an orange laser beam passing through the device.

They suspended a hydrocarbon dye in tetradecane, an organic liquid. They then froze the suspension to -272 °C using liquid helium, creating a crystalline matrix in which individual dye molecules could be targeted with lasers.

When a finely tuned orange laser beam is trained on a dye molecule, the molecule efficiently soaks up most of the beam, leaving a much weaker "output" beam to continue beyond the dye.

But when the molecule is also targeted with a green laser beam, it starts to produce strong orange light of its own and so boosts the power of the orange output beam.

This effect is down to the hydrocarbon molecule absorbing the green light, only to lose the equivalent energy in the form of orange light.

"That light constructively interferes with the incoming orange beam and makes it brighter," says Sandoghdar's colleague Jaesuk Hwang.

Abstract at the journal Nature: A single-molecule optical transistor

a, Energy level scheme of a molecule with ground state |1⟩, and ground |2⟩ and first excited |3⟩ vibrational states of the first electronic excited state. Manifold |4⟩ shows the vibronic levels of the electronic ground state, which decay rapidly to |1⟩. Blue arrow, excitation by the gate beam; green double-headed arrow, coherent interaction of the CW source beam with the zero-phonon line (ZPL); red arrow, Stokes-shifted fluorescence; black dashed arrows, non-radiative internal conversion. b, Time-domain description of laser excitations and corresponding response of the molecule simulated by the Bloch equations with periodic boundary conditions. Blue spikes and red curve represent the pump laser pulses and the population of the excited state |2⟩, respectively. Black curve shows the time trajectory of Im(ρ21). Straight green line indicates the constant probe laser intensity that is on at all times. Inset, magnified view of curves during a laser pulse. c, Schematic diagram of the optical set-up. BS, beam splitter; LP, long-pass filter; BP, band-pass filter; HWP, half-wave plate; LPol, linear polarizer; S, sample; SIL, solid-immersion lens; PD1, PD2, avalanche photodiodes. Transmission of the probe beam (green) is monitored on PD1, and the Stokes-shifted fluorescence (red) is recorded on PD2.

The transistor is one of the most influential inventions of modern times and is ubiquitous in present-day technologies. In the continuing development of increasingly powerful computers as well as alternative technologies based on the prospects of quantum information processing, switching and amplification functionalities are being sought in ultrasmall objects, such as nanotubes, molecules or atoms. Among the possible choices of signal carriers, photons are particularly attractive because of their robustness against decoherence, but their control at the nanometre scale poses a significant challenge as conventional nonlinear materials become ineffective. To remedy this shortcoming, resonances in optical emitters can be exploited, and atomic ensembles have been successfully used to mediate weak light beams. However, single-emitter manipulation of photonic signals has remained elusive and has only been studied in high-finesse microcavities or waveguides. Here we demonstrate that a single dye molecule can operate as an optical transistor and coherently attenuate or amplify a tightly focused laser beam, depending on the power of a second 'gating' beam that controls the degree of population inversion. Such a quantum optical transistor has also the potential for manipulating non-classical light fields down to the single-photon level. We discuss some of the hurdles along the road towards practical implementations, and their possible solutions.

Nature's editor's summary of nano-optics and optical transistors.

Quantum information processing systems and related technologies are likely to involve switching and amplification functions in ultrasmall objects such as nanotubes. In today's electronic devices the transistor performs these functions. A 'quantum age' equivalent of the conventional transistor would, ideally, use photons rather than electrons as information carriers because of their speed and robustness against decoherence. But robustness also stops them being easily controlled. Now a team from optETH and ETH in Zurich demonstrates the realization of a single-molecule optical transistor. In it, a single dye molecule coherently attenuates or amplifies a tightly focused laser beam, depending on the power of a second 'gating' beam.

A single molecule, represented here as a rotating mirror, can in principle behave as an all-optical transistor — it can modulate the transmission of a beam of light (the source beam, blue) in response to another beam of light (the gate beam, red). The waist-shaped surface represents a beam of light, focused on the molecule. The diagrams under each of the transistors represent the electronic energy levels of the molecule. a, If the molecule is in its ground state (g) and the source photons are equivalent in energy to the electronic energy transition from g to an excited state (e), then the source photons are resonantly scattered (totally reflected) as electrons oscillate between the e and g states. b, A gate photon of appropriate energy (different from that of the source photons) excites the molecule to a long-lived excited state (s). c, The excited molecule no longer absorbs source photons, which are instead perfectly transmitted. Hwang et al. report the first all-optical transistor that works on similar principles.

Lasers Can Create Temporal Lens of Attosecond Electron Pulses for Molecular Movies

A team at the University of Nebraska-Lincoln has figured out a possible way to observe and record the behavior of matter at the molecular level. That ability could open the door to a wide range of applications in ultrafast electron microscopy used in a large array of scientific, medical and technological fields.

The "lenses" in question are not made of glass like those found in standard tabletop microscopes. They're created by laser beams that would keep pulses of electrons from dispersing and instead focus the electron packets on a target. The timescales required, however, are hardly imaginable on a human scale -- measured in femtoseconds (quadrillionths of a second) and attoseconds (quintillionths of a second).

The physicists modeled two types of lenses. One was a temporal "thin" lens created using one laser beam that could compress electron pulses to less than 10 femtoseconds. The second was a "thick" lens created using two counterpropagating laser beams that showed the potential of compressing electron pulses to reach focuses of attosecond duration.

Abstract from PNAS: Temporal lenses for attosecond and femtosecond electron pulses

Here, we describe the “temporal lens” concept that can be used for the focus and magnification of ultrashort electron packets in the time domain. The temporal lenses are created by appropriately synthesizing optical pulses that interact with electrons through the ponderomotive force. With such an arrangement, a temporal lens equation with a form identical to that of conventional light optics is derived. The analog of ray diagrams, but for electrons, are constructed to help the visualization of the process of compressing electron packets. It is shown that such temporal lenses not only compensate for electron pulse broadening due to velocity dispersion but also allow compression of the packets to durations much shorter than their initial widths. With these capabilities, ultrafast electron diffraction and microscopy can be extended to new domains, and, just as importantly, electron pulses can be delivered directly to the target specimen.
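The abstract's statement that the temporal lens equation has "a form identical to that of conventional light optics" suggests the familiar thin-lens relation with object and image distances replaced by times before and after the laser focus. The symbols below are illustrative, a sketch of the analogy rather than the paper's own notation:

```latex
% Temporal analogue of the thin-lens equation (illustrative symbols):
% t_obj and t_img play the roles of object and image distance,
% t_f the role of focal length, and M the temporal magnification.
\[
  \frac{1}{t_{\mathrm{obj}}} + \frac{1}{t_{\mathrm{img}}} = \frac{1}{t_f},
  \qquad
  M = -\frac{t_{\mathrm{img}}}{t_{\mathrm{obj}}}
\]
```

With |M| < 1, an initially long electron packet is compressed at the image "plane" in time, which is how the paper's femtosecond and attosecond focuses arise.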

10 page pdf of supplemental information.

University of Nebraska-Lincoln press release, which likens the effect of the short pulses to using a strobe to create clearer pictures of a baseball.

July 03, 2009

Carbon Nanotube Quantum Dot Terahertz Detectors and On-Chip High Resolution near-field terahertz detector

Two types of emerging terahertz detectors are based on novel nanoelectronic technologies. Future work to combine the two could enable a real-time terahertz video camera.

1. A highly sensitive and frequency tunable terahertz detector based on a carbon nanotube (CNT) quantum dot (QD).

Observations have been made of electron tunneling via terahertz-photon detection, called photon-assisted tunneling. This result means that the CNT-QD structure can be utilized as a frequency-tunable terahertz detector. The CNT-QD detector functions properly up to approximately 7 K. Higher-temperature operation of the CNT-QD terahertz detector is also possible with more refined fabrication techniques.

The next important step is to improve detector performance in two important ways: sensitivity and frequency selectivity. A much more sensitive readout of the terahertz-detected signal could be achieved by capacitively coupling a CNT-QD with a quantum point contact device on a GaAs/AlGaAs heterostructure, which makes it possible to observe single-electron dynamics. And frequency selectivity could be improved by using a double-coupled CNT-QD, in which photon-assisted tunneling takes place as a result of electron transitions between two well-defined discrete levels.

2. A near-field terahertz detector for high-resolution imaging.

Contrary to the situation in the microwave and visible-light regions, near-field imaging in the terahertz region is not well established. RIKEN in Japan has developed a new device for near-field terahertz imaging in which all components—an aperture, a probe, and a detector—are integrated on one gallium arsenide/aluminum gallium arsenide (GaAs/AlGaAs) chip. This scheme allows highly sensitive detection of the terahertz evanescent field alone, without requiring optical or mechanical alignment.

Two approaches can be used to achieve high spatial resolution in optical imaging: a solid immersion lens and near-field imaging. Though we have previously constructed a terahertz imaging setup based on a solid immersion lens, its resolution is restricted by the diffraction limit. A powerful method for overcoming the diffraction limit is the use of near-field imaging. This technique has been well established in the visible and microwave regions using either a tapered, metal-coated optical fiber or a metal tip, and either a waveguide or a coaxial cable. However, the development of near-field imaging in the terahertz region has been hindered by the lack of terahertz fibers or other bulk terahertz-transparent media suitable for generating near-field waves, as well as the low sensitivity of commonly used detectors in the terahertz region.

In conventional near-field imaging systems, the propagation field arising from the scattering of the near-field (evanescent) wave is measured with a distant detector, which requires detecting very weak waves (and the influence of far-field waves is unavoidable). In contrast, our near-field terahertz imager places the aperture, probe, and detector in close proximity. The 8-µm-diameter aperture and planar probe, each of which is insulated by a 50-nm-thick silicon dioxide (SiO2) layer, are deposited on the surface of a GaAs/AlGaAs heterostructure chip.

An optical micrograph (left) and a schematic representation (right) shows the design of a highly sensitive on-chip near-field THz detector. The 8-µm-diameter aperture and planar metallic probe, each of which is insulated by a 50-nm-thick silicon dioxide (SiO2) layer, are deposited on the surface of a GaAs/AlGaAs heterostructure chip. (Courtesy of RIKEN)

Because integration with the CNT-QD detector requires improvements in the device fabrication process (specifically, by using higher-performance electron-beam lithography equipment), a two-dimensional electron gas (2DEG)—located only 60 nm below the chip surface—is used as the terahertz detector.

Why Terahertz Detection is Tough

The photon energy of the terahertz wave, on the order of millielectron volts (meV), is two to three orders of magnitude lower than that of visible light, making the development of a high-performance terahertz detector a difficult task. Another problem with terahertz detection is the low spatial resolution of terahertz imaging, which results from the longer wavelengths of terahertz radiation compared to those of visible light.
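The "two to three orders of magnitude" comparison follows directly from E = hf; the check below uses illustrative frequencies (1 THz for the terahertz wave, 500 THz for visible light, roughly 600 nm):

```python
# Photon energy E = h*f, expressed in meV, to check the claim that terahertz
# photons carry two to three orders of magnitude less energy than visible light.
H = 6.626e-34    # Planck constant, J*s
EV = 1.602e-19   # joules per electron volt

def photon_energy_mev(freq_hz):
    """Photon energy in millielectron volts for a given frequency in Hz."""
    return H * freq_hz / EV * 1000.0

thz = photon_energy_mev(1e12)      # ~4.1 meV at 1 THz
visible = photon_energy_mev(5e14)  # ~2070 meV (~2.07 eV) for visible light
ratio = visible / thz              # ~500, i.e. between 10^2 and 10^3
```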

Work to Combine the Carbon Nanotube Quantum Dot Detector for Near Field Detection

One of the challenges for future terahertz sensing technology is to achieve high detection sensitivity and high spatial resolution simultaneously. To realize this, we are now trying to combine the two techniques described above; namely to modify the CNT-QD terahertz detector into a similar structure for near-field detection. Compared to the 2DEG detector, the CNT detector exhibits much higher sensitivity and has a much smaller sensing area (approximately 200 nm compared to 8 µm for the 2DEG detector). This detector, integrated with an aperture and a probe, would show ultrahigh sensitivity and nanometer resolution simultaneously.

We further expect that when many CNTs are integrated in a two dimensional configuration, the resulting device will serve as a real-time, high-resolution terahertz imaging detector; in effect, a terahertz video camera.

DARPA Funds Phase 2 of Nano UAV Development - 10 gram Fake Hummingbirds

DARPA is providing follow-up funding to develop 10 gram UAVs (Nano Unmanned Aerial Vehicles, or NAVs) (4 page pdf). Phase 2 will end in the summer of 2010.

The U.S. Air Force is also funding a number of research projects in universities across the country. An Air Force Research Laboratory report, obtained by the Air Force Times and described in a recent article, suggests just where the Air Force wants to go with this research: it wants so-called Micro-Air Vehicles, or MAVs, about the size of a sparrow, ready to fly by 2015, and even smaller, dragonfly-sized drones ready to fly in swarms by 2030. Currently popular are Raven UAVs, which are about 4.5 feet across, weigh six pounds, and can stay aloft for about an hour and a half.

The goals of the NAV program, namely to develop an approximately 10 gram aircraft that can hover for extended periods, fly at forward speeds up to 10 meters per second, withstand 2.5 meter per second wind gusts, operate inside buildings, and have up to a kilometer command and control range, will stretch our understanding of flight at these small sizes and require novel technology development.

Nano air vehicles will be revolutionary in their ability to harness flapping wing, low Reynolds number physics, navigate in complex environments, and communicate over significant distances. Flight-enabling nano air vehicle system technologies being developed in the program include:
• Aerodynamic design tools to achieve high lift-to-drag airfoils;
• Lightweight, efficient propulsion and power subsystems; and
• Advanced manufacturing and innovative subsystem packaging and configuration layout.

The program will continue to develop conformal, multifunctional structural hardware and strong, light, robust aerodynamic lifting surfaces for efficient flight at low Reynolds numbers (<15,000). In addition, researchers will remain focused on developing advanced technologies that enable collision avoidance and navigation systems for use in GPS-denied indoor and outdoor environments as well as improving efficiency and stability in hovering flight and during the deployment or emplacement of sensors.

A micro aircraft (6 inches or less in size), carrying all necessary systems on board such as energy sources and flight control sensors, achieved 20 seconds of hovering in December 2008.

The Phase II effort will concentrate on optimizing the aircraft for longer flight endurance, transition capability from hover to forward flight and back, and reduced size, weight, and acoustic signature. "All of which are distinct technical challenges in their own right that actually conflict with each other," Keennon elaborates. Dr. Hylton added, “There are still many hurdles to achieve the vehicle we envisioned when the program was started, but we believe that the progress to date puts us on the path to such a vehicle.”

July 02, 2009

New Nanomedicine Writing from Robert Freitas

Robert Freitas published a major new theory paper on aspects of medical nanorobot control, providing an early glimpse of future discussions of this topic that are planned to appear in Chapter 12 (Nanorobot Control) of Nanomedicine, Vol. IIB: Systems and Operations, the third volume of the Nanomedicine book series (still in preparation).

The paper is part of an edited book collection on bio-inspired nanoscale computing that was published about a week ago by Wiley.

Robert Freitas contributed the 15th chapter:

Robert A. Freitas Jr., “Chapter 15. Computational Tasks in Medical Nanorobotics,” in M.M. Eshaghian-Wilner, ed., Bio-inspired and Nano-scale Integrated Computing, John Wiley & Sons, New York, 2009, pp. 391-428.

The chapter is about 5.2 MB in size and a draft preprint version may be downloaded from the nanomedicine website: http://www.nanomedicine.com/Papers/NanorobotControl2009.pdf

Nanomedicine is the application of nanotechnology to medicine: the preservation
and improvement of human health, using molecular tools and molecular knowledge
of the human body. Medical nanorobotics is the most powerful form of
future nanomedicine technology. Nanorobots may be constructed of diamondoid
nanometer-scale parts and mechanical subsystems including onboard sensors,
motors, manipulators, power plants, and molecular computers. The presence of
onboard nanocomputers would allow in vivo medical nanorobots to perform
numerous complex behaviors which must be conditionally executed on at least a
semiautonomous basis, guided by receipt of local sensor data and constrained by
preprogrammed settings, activity scripts, and event clocking, and further limited
by a variety of simultaneously executing real-time control protocols. Such
nanorobots cannot yet be manufactured in 2007 but preliminary scaling studies
for several classes of medical nanorobots have been published in the literature.
These designs are reviewed with an emphasis on the basic computational tasks
required in each case, and a summation of the various major computational
control functions common to all complex medical nanorobots is extracted from
these design examples. Finally, we introduce the concept of nanorobot control
protocols which are required to ensure that each nanorobot fully completes its
intended mission accurately, safely, and in a timely manner according to plan. Six
major classes of nanorobot control protocols have been identified and include
operational, biocompatibility, theater, safety, security, and group protocols. Six
important subclasses of theater protocols include locational, functional, situational, phenotypic, temporal, and identity control protocols.

Robert Freitas' nanomedicine books remain freely available online at http://www.nanomedicine.com, with links to MNT-based medical nanorobot designs at http://www.nanomedicine.com/index.htm#NanorobotAnalyses.

China's Nuclear Energy Target for 2020 is 86 Gigawatts and Wind Energy Target is 150 GW

China Daily reports: China is planning for an installed nuclear power capacity of 86 gigawatts (GW) by 2020, up nearly 10-fold from the 9 GW capacity it had by the end of last year, two people familiar with the matter said. The new target is higher than the 70-75 GW targets set earlier this year and the 40 GW target of two to three years ago.

The goal, which is part of an alternative energy development roadmap covering 2009-20, seeks to have at least 12 GW of installed nuclear power capacity by 2011.

The plan "will call for the government to accelerate nuclear power development in coastal provinces and autonomous regions, namely Liaoning, Guangdong, Zhejiang, Fujian, Guangxi, Jiangsu, Shandong and Hainan," the sources said.

In order to achieve the goal, the government will also set up a "reasonable number of nuclear power plants in inland provinces in Jiangxi, Anhui, Hunan and Hubei", they said.

The government is also planning to have 150 GW of installed wind power capacity by 2020, of which 30 GW will come from offshore wind farms. Installed wind power capacity should reach 35 GW by the end of 2011, of which 5 GW will come from offshore wind farms.

The energy industry would attract investment worth 2.97 trillion yuan by 2011, creating 5 million jobs. Total investment in the sector would reach 13.5 trillion yuan and create 20 million jobs by 2020.

Chinese nuclear build continues apace with procurements for multi-unit power plants Hongyanhe and Ningde.

Having already won a contract for a simulator for Hongyanhe 1 and 2, Canada's L3-MAPPS has now been picked to provide another for Hongyanhe 3 and 4.

The plant's first two nuclear power generators are currently under construction on the Hongyan river in Liaoning province with first concrete for those coming in August 2007 and April 2008. First concrete at Hongyanhe 3 was poured on 15 March this year with the same for Hongyanhe 4 set for 15 September.

A similar plant, also based on domestic CPR-1000 pressurized water reactors, is being built at Ningde in Fujian province. The first two units there had first concrete in February and November 2008, the second two are set for 15 November this year and July 2010.

Both plants are based on the domestic CPR-1000 design and are being managed by China Guangdong Nuclear Power Company (CGNPC), which is the lead partner in both projects. They will both also feature forged steel valves from China Valves Technology, after a contract signed a few days ago. CGNPC has paid 10% of the contract value up front, with the rest due on delivery. Half of the valves are required by the end of this year, with the others before March 2010.

July 01, 2009

Progress in Understanding Regeneration in Salamanders

Schwann cells are shown here in a salamander limb. When the limb regrew after being amputated, only these cells wrapped around nerve fibers; other cell types did not turn into Schwann cells. Credit: D. Knapp/E. Tanaka

The salamander is a superhero of regeneration, able to replace lost limbs, damaged lungs, sliced spinal cord — even bits of lopped-off brain. In a paper set to appear Thursday in the journal Nature, a team of seven researchers, including a University of Florida zoologist, debunks the idea that the source of salamander regeneration is “pluripotent” cells.

The researchers show that cells from the salamander’s different tissues retain the “memory” of those tissues when they regenerate, contributing with few exceptions only to the same type of tissue from whence they came. The new findings suggest that harnessing the salamander’s regenerative wonders is at least within the realm of possibility for human medical science.

The researchers’ main conclusion: Only ‘old’ muscle cells make ‘new’ muscle cells, only old skin cells make new skin cells, only old nerve cells make new nerve cells, and so on. The only hint that the axolotl cells could revamp their function came with skin and cartilage cells, which in some circumstances seemed to swap roles, Maden said.

MIT Technology review has coverage.

Tanaka's team employed a novel method for tracking the fate of cells from different tissues in a type of salamander called the axolotl. The researchers first created transgenic axolotls that carried green fluorescent protein (GFP) throughout their entire bodies. When the animals were still embryos, the researchers removed a piece of tissue from the limb region of the transgenic animals and transplanted the tissue into the same location in nontransgenic axolotls. The transplants were incorporated into the growing body as normal cells, and when the limbs of the transplant recipients were then severed, the researchers could track the fate of the fluorescent cells as the limb regrew.

Sánchez also says that the idea that blastemas held several different cell types was a "minority hypothesis" and that this study "shows that this hypothesis turns out to be correct." He cautions that scientists now need to determine whether this phenomenon is the same in adult axolotls and in newts, which are a primary model organism for regeneration studies. But if the same mechanism turns out to underlie other cases of regeneration, it would change what scientists believe is required to regrow body parts, Sánchez says. It leaves a major question unanswered: if humans already have tissue-specific stem cells, what exactly is the difference between our cells and those of salamanders?

Maden said the findings will help researchers zero in on why salamander cells are capable of such remarkable regeneration. “If you can understand how they regenerate, then you ought to be able to understand why mammals don’t regenerate,” he said.

Maden said UF researchers will soon begin raising and experimenting on transgenic axolotls at UF as part of The Regeneration Project, an effort to treat human brain and other diseases by examining regeneration in salamanders, newts, starfish and flatworms.

The Value of Real Disease Cures and Inexpensive Tests

A blog makes the point that the healthcare funding battles are like generals fighting the last war: new healthcare policy should focus on cures and cheap tests.

This site covered the detailed statistics showing that most healthcare costs are concentrated on the chronic diseases of the sickest 5% of people.

Curing cancer is worth $50 trillion to the USA alone according to a 2006 study by Kevin M. Murphy and Robert H. Topel of the University of Chicago.

- A 10% reduction in cancer death rates has a value of roughly 5 trillion dollars to current and future Americans
- Reducing cancer death rates by 10% would generate roughly 180 billion dollars annually in value for the U.S. population
- These figures don’t even count any gains from reduced morbidity and improved quality of life
- Gains in longevity from 1970 to 2000 were worth roughly 95 trillion dollars to current and future Americans
- This amounts to a gain of over 3 trillion dollars per year (roughly 25% of annual GDP)
- The value of reducing the death rate by 1/10,000 is worth roughly $630 to one person
- This corresponds to a value of a statistical life of $6.3 million
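The last two bullets are related by simple scaling: dividing the per-person willingness to pay by the size of the risk reduction yields the value of a statistical life. A quick check (the $630 and 1/10,000 figures are from the study):

```python
# Value of a statistical life implied by willingness to pay for
# a small reduction in mortality risk.
risk_reduction = 1 / 10_000     # reduction in the death rate
value_per_person = 630          # dollars, per the Murphy & Topel figures

vsl = value_per_person / risk_reduction
print(f"Implied value of a statistical life: ${vsl:,.0f}")   # $6,300,000
```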

A critical factor is not to implement care that is more expensive than the value of the benefit, in order to improve the economics of healthcare. (Only pay for what we can afford.) This is illustrated in the following example.

Simple Example
200 billion dollar “war on cancer”
50% probability of success – 50% probability of total failure
Success = 10% reduction in cancer death rates
Based on Murphy & Topel – value of success = $5 trillion
What about costs of care?

Costs of care, two scenarios:
“good” outcome = treatment adds 2.5 trillion (50% of value) to costs of care
“bad” outcome = treatment adds 10 trillion (200% of value) to costs of care

Assume each scenario is equally likely
Three potential outcomes:
50% chance of “Failure” = -$200 billion
25% chance of “Good Success” = +$2.3 trillion
25% chance of “Bad Success” = -$5.2 trillion
Expected gain = -$825 billion
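The expected-value arithmetic above can be verified with a short script. All figures (probabilities, the $5 trillion success value, the $200 billion research cost, and the two care-cost scenarios) are taken directly from the example:

```python
# Expected value of the hypothetical $200B "war on cancer" example,
# with all amounts in trillions of dollars.
RESEARCH_COST = 0.2    # $200 billion research program
SUCCESS_VALUE = 5.0    # Murphy & Topel: a 10% mortality cut is worth $5T
GOOD_CARE_COST = 2.5   # "good" outcome: care adds 50% of value to costs
BAD_CARE_COST = 10.0   # "bad" outcome: care adds 200% of value to costs

# Three outcomes: failure (50%), good success (25%), bad success (25%)
failure = -RESEARCH_COST                                       # -$0.2T
good_success = SUCCESS_VALUE - GOOD_CARE_COST - RESEARCH_COST  # +$2.3T
bad_success = SUCCESS_VALUE - BAD_CARE_COST - RESEARCH_COST    # -$5.2T

expected = 0.50 * failure + 0.25 * good_success + 0.25 * bad_success
print(f"Expected gain: {expected:+.3f} trillion")              # -0.825

# With good care decisions, high-cost care is simply not implemented,
# so the "bad success" branch loses only the research cost.
expected_contained = (0.50 * failure + 0.25 * good_success
                      + 0.25 * -RESEARCH_COST)
print(f"With cost containment: {expected_contained:+.3f} trillion")  # +0.425
```

This reproduces both numbers in the example: a -$825 billion expected gain without cost containment, and a +$425 billion expected gain with it, even though the nominal "failure" chance rises to 75%.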

What matters in this calculation?
* Costs of research are small by comparison to costs and benefits (making them $100 billion or $300 billion has little effect)
* Probability of success matters some but not much
* Expected costs of care matter a lot
* Question: What can we do to improve the situation?
* Answer: Make good care decisions!

* Improve care system = don’t implement if costs of care are high
* Chance of “failure” now 75%
* But expected gain now +$425 billion
* Bottom line: appropriate cost containment RAISES the value of research by eliminating the major downside
* The potential downside to research is not failure but unaffordable “success”

Best solution: improve incentives and decisions in the delivery system – research will follow
Second best: change the direction of research to look only for lowest costs solutions
Both enhance the case for more research

Improve incentives for doctors and patients to control costs
Use technologies appropriately – not all or nothing – many treatments will be cost effective for some patients not for others
Focus on treatments with low incremental costs – reduces problem of over use

Singapore Makes Aligned Nanoarches

Researchers in Singapore have successfully fabricated a family of aligned one-dimensional C-curved nanoarches of different compositions by a simple and scalable method for the first time. Article in ACS Nano: A Family of Aligned C-Curved Nanoarches. The nanoarches are actually nanotubes with their extremities firmly attached to the silicon surface, thereby forming a turned letter C.

One-dimensional (1-D) nanomaterials are basic building blocks for the construction of nanoscale devices. However, the fabrication and alignment of 1-D nanomaterials with specific geometry and composition on a given substrate is a significant challenge. Herein we show a successful example of fabricating a family of aligned 1-D C-curved nanoarches of different compositions on an extended Si surface by a simple and scalable method. The nanoarches are made up of either single-crystalline Sn nanorods encapsulated in carbon nanotubes (CNTs), SnO2 nanotubes, or CNTs. The aligned 1-D C-curved nanoarches of single-crystalline Sn nanorods in CNTs are prepared first by a facile in situ reduction of SnO2 nanoparticles under standard chemical vapor deposition conditions. Nanoarches of CNTs and SnO2 nanotubes were then derived from the Sn@CNT nanoarches by acid etching and by calcination in air, respectively.

Coverage at Nanowerk.com as well.

The paper presents a new methodology for synthesizing tin oxide nanotubes using in situ formed carbon nanotubes as the active template. The fabrication method is generic and could, in principle, be applied to the preparation of other aligned 1-D nanomaterials.

June 29, 2009

Cobalt atoms and carbon rings proposed as subnanometer magnetic storage bits

It is demonstrated by means of density functional and ab initio quantum chemical calculations that transition metal-carbon systems have the potential to enhance the presently achievable areal density of magnetic recording by three orders of magnitude (1000 times). As a model system, Co2-benzene with a diameter of 0.5 nm is investigated. It shows a magnetic anisotropy on the order of 0.1 eV per molecule, large enough to permanently store one bit of information at temperatures considerably higher than 4 K. A similar performance can be expected if cobalt dimers are deposited on graphene or on graphite. It is suggested that the subnanometer bits can be written by simultaneous application of a moderate magnetic and a strong electric field.

Long-term magnetic data storage requires that spontaneous magnetization reversals occur significantly less often than once in ten years. This implies that the total magnetic anisotropy energy (MAE) of each magnetic particle should exceed 40 kT, where k is the Boltzmann constant and T is the temperature.
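The 40 kT stability criterion translates an anisotropy barrier directly into a maximum storage temperature. A minimal sketch of that conversion, using the standard value of the Boltzmann constant in eV/K:

```python
# Maximum storage temperature for a magnetic bit with a given anisotropy
# barrier, from the stability criterion MAE > 40 kT.
K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin

def max_storage_temperature(mae_ev):
    """Temperature (K) at which a barrier of mae_ev equals exactly 40 kT."""
    return mae_ev / (40 * K_BOLTZMANN_EV)

# Co2-benzene: ~0.1 eV anisotropy per molecule (the paper's figure)
t_max = max_storage_temperature(0.1)
print(f"Max storage temperature: {t_max:.0f} K")
```

This gives roughly 29 K, consistent with the paper's claim of stable storage "considerably higher than 4 K", though still far below room temperature.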

More recently, the magnetic properties of transition metal dimers came into the focus of interest. Isolated magnetic dimers are the smallest chemical objects that possess a magnetic anisotropy, as their energy depends on the relative orientation between the dimer axis and the magnetic moment. Huge MAE values of up to 100 meV per atom were predicted by density functional theory (DFT) calculations for the cobalt dimer.

The researchers predict that bonding of Co dimers on hexagonal carbon rings like benzene or graphene results in a perpendicular arrangement of the dimers with respect to the carbon plane and in a magnetic ground state. In this structure, a division of tasks takes place: while the Co atom closer to the carbon ring is responsible for the chemical bonding, the outer Co atom hosts the larger share of the magnetic moment. The huge magnetic anisotropy of the free dimer is preserved in this structure, since the degeneracy of the highest occupied 3d orbital is not lifted in a hexagonal symmetry. Thus, it should be possible to circumvent the hitherto favored use of heavy metal substrates to achieve large magnetic anisotropies. Instead, robust and easy-to-prepare carbon-based substrates are well suited for this task. Once confirmed, the present results may constitute an important step towards a molecular magnetic storage technology.

Carnival of Space 109

Carnival of Space 109 is up at Twisted Physics, a Discovery blog.

This site contributed its article about General Fusion getting funded by the Canadian government.

The Event Horizon Telescope: Are We Close to Imaging a Black Hole?

(Sub)Millimeter VLBI observations in the near future will combine the angular resolution necessary to identify the overall morphology of quiescent emission, such as an accretion disk or outflow, with a fine enough time resolution to detect possible periodicity in the variable component of emission. In the next few years, it may be possible to identify the spin of the black hole in Sgr A*, either by detecting the periodic signature of hot spots at the innermost stable circular orbit or by parameter estimation in models of the quiescent emission. Longer term, a (sub)millimeter VLBI "Event Horizon Telescope" will be able to produce images of the Galactic center emission to see the silhouette predicted by general relativistic lensing.

The paper discusses very long baseline interferometry and the challenges of black hole imaging. (13 page pdf)

Several technological advancements are currently in progress to increase the sensitivity of the millimeter VLBI array. A phased array processor to sum the collecting area on Mauna Kea (Weintroub 2008) has been tested. Similar hardware could be used to increase sensitivity at CARMA, Plateau de Bure, and the Atacama Large Millimeter/ submillimeter Array (ALMA) in Chile. Digital backends (DBEs) have been developed to process 1 GHz of data (4 Gbit s−1 with 2-bit Nyquist sampling), and next-generation DBEs will improve upon this by a factor of four. Mark 5B+ recorders can already record 2 Gbit s−1 data streams (presently requiring two at each site per DBE), and the Mark 5C recorders currently being developed will be able to handle even faster data rates. Cryogenic sapphire oscillators are being examined as a possible frequency standard to supplement or replace hydrogen masers to provide greater phase stability, which may improve coherence at higher frequencies.
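The bandwidth-to-data-rate figures quoted above follow from Nyquist sampling: a real signal of bandwidth B must be sampled at 2B samples per second, and each sample here carries 2 bits. A quick check of the arithmetic:

```python
# VLBI recorded data rate from Nyquist sampling: a real signal of
# bandwidth B GHz requires 2B gigasamples/s, at bits_per_sample each.
def vlbi_data_rate_gbps(bandwidth_ghz, bits_per_sample=2):
    """Recorded data rate in Gbit/s for a single data stream."""
    return 2 * bandwidth_ghz * bits_per_sample

print(vlbi_data_rate_gbps(1.0))   # 1 GHz DBE bandwidth -> 4 Gbit/s
print(vlbi_data_rate_gbps(4.0))   # 4x next-gen DBE -> 16 Gbit/s
```

The 4 Gbit/s figure matches the paper's current DBEs, and the factor-of-four improvement mentioned for next-generation DBEs corresponds to 16 Gbit/s per stream.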

Future observations will initially focus on improving sensitivity by observing a wider bandwidth and using phased array processors. Dual polarization observations will become a priority not only for the √2 improvement in sensitivity for total-power observations but also to allow full polarimetric VLBI of Sgr A*. Higher frequency observations, such as in the 345 GHz atmospheric window, will provide even greater spatial resolution in a frequency regime where interstellar scattering and optical depth effects are minimized.

The timing is right to move forward on building an Event Horizon Telescope to produce high-fidelity images of Sgr A* as well as other scientifically compelling sources, such as M87. Receivers currently being produced en masse for ALMA could be procured for other millimeter VLBI stations, in many cases providing substantial improvements in sensitivity. Studies of climate and weather will be necessary to provide information on the astronomical suitability of prospective sites for future telescopes, such as those at the present ALMA Test Facility or additional telescopes constructed specifically for millimeter VLBI (which would mesh well with present ALMA construction). Some existing telescopes will require improvements to their systems, such as increasing the bandwidth of the intermediate frequency signal after mixing. It will also be highly desirable to install permanent VLBI hardware at all sites to allow turnkey VLBI observing in order to maximize the efficiency of VLBI observations in terms of personnel time and transportation costs.

Current 1.3 mm VLBI observations have established that the millimeter emission emanates from a compact region offset from the center of the black hole. These data are already being used to constrain key physical parameters (e.g., spin, inclination, orientation) in models of the emission (e.g., RIAF models). Future additions to the VLBI array would allow the millimeter emission to be imaged directly.

Phil Plait, the Bad Astronomer, looks at the conflicting studies of Saturn's moon Enceladus. One study finds evidence of a salt water ocean and the other finds the opposite.

Check out the Carnival of Space 109 at Twisted Physics for a lot more.

Interview of Eric Lerner, Lawrenceville Plasma Physics/Focus Fusion, by Sander Olson

Eric Lerner interview.

Mr. Lerner heads the Focus Fusion Society, a charitable organization attempting to create focus fusion technology. He believes that his technique is fundamentally superior to Tri-Alpha Energy's (colliding beam fusion in the field-reversed configuration) and EMC2's (inertial electrostatic confinement/polywell fusion) because it results in more of the proton-boron fuel being burned. He is confident that this technology could lead to electricity generation at 0.2 cents per kilowatt hour. We should know whether this technology is feasible within the next two years. If it is as successful as Lerner hopes, this technology could have a profound impact on the world.

Question: Tell us about the PB-11 fusion reactor

Answer: We are using a technique which we call focus fusion. This technology involves using dense plasma to burn hydrogen-boron fuel (PB-11). The advantage of this approach is that the reaction does not produce neutrons but rather charged particles. So we get the energy out in the form of moving charged particles which is already electricity. So we don't need to use any turbine, and this dramatically reduces size, costs, and energy requirements.

Question: How easy would this fusion reactor be to operate? Would there be any danger of a meltdown, or a terrorist incident with this technology?

Answer: The safe and easy operation of this device is one of its selling features. The amount of fuel being burned at any given time is extremely small, so the possibility of uncontrolled release of energy is nil - a misfire will simply cause the device to stop operating. Furthermore the container will be shielded by water and boron-10, which absorbs neutrons. So radioactivity simply isn't an issue.

Question: What major milestones has your project achieved?

Answer: We developed the general theory for this back in the 1980s, based on astrophysical phenomena such as quasars and solar flares. NASA's Jet Propulsion Laboratory funded us in the 1990s, and in 2001 we demonstrated that we could achieve the extremely high temperatures (over a billion degrees) that would be needed for fusion. Unfortunately NASA ceased funding fusion in 2001, but with some difficulty we were able to obtain private funding to continue our research. We are now performing experiments which will test the scientific feasibility of our approach, and trying to obtain net energy production. We will begin to build the energy device within weeks, and we hope to get the experimental data back by fall 2009.

Question: How does your reactor compare to the EMC2 fusion reactor?

Answer: We, Bussard [those who carry on the late Robert Bussard's fusion project], and a company called Tri-Alpha Energy are all trying to burn the PB-11 fuel, which is difficult to burn but otherwise highly desirable. EMC2 and Tri-Alpha require a stable plasma. Our approach, however, uses a very dense, unstable plasma, which results in more of the fuel being burned. On this metric we are several orders of magnitude closer to the goal of maximum fuel efficiency than either EMC2 or Tri-Alpha. We strongly support the funding of competing projects but believe we have a superior approach that will yield the quickest results.

Question: Your website claims that electricity could be produced for 1/50th the cost of conventional plants. Is 2/10 of a cent per kilowatt hour feasible?

Answer: We are confident that 2/10 of a penny per kilowatt hour is eventually attainable for several reasons. First, for conventional energy sources, the equipment for energy conversion production costs about $1 per watt produced. We are planning on building a 5 megawatt generator that when in mass-production should cost approximately $300,000. So that comes out to about 6 cents per watt for the equipment. Second, the fuel itself is virtually free, and each generator only consumes about 5 pounds of fuel per year. Third, labor costs should be quite modest since few workers are required to run these plants.
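The 6-cents-per-watt equipment figure in the answer above is simple arithmetic on Lerner's stated targets (a $300,000 mass-production unit cost and 5 MW output):

```python
# Capital cost per watt for the proposed 5 MW focus fusion generator,
# using the interview's stated mass-production targets.
unit_cost_usd = 300_000       # target cost per generator in mass production
capacity_watts = 5_000_000    # 5 megawatts of output

cost_per_watt = unit_cost_usd / capacity_watts
print(f"${cost_per_watt:.2f} per watt")   # $0.06 per watt
```

Compare this with the roughly $1 per watt that the interview cites for conventional energy conversion equipment.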

Question: Your website mentions 5 megawatt plants, but larger 5 gigawatt plants will be required. Is this technology scalable?

Answer: Not in individual units, since there are limits as to how frequently these machines can be pulsed. It should be possible, however, to build large numbers of modules in one location. So for instance an aluminum plant might want to construct 100 fusion generators at their plant.

Question: You have divided your fusion development projects into stages. What are these stages, and are you meeting the timetables?

Answer: There are three basic stages. We are currently trying to determine the scientific feasibility of our approach. This involves constructing a laboratory device that generates net energy and unequivocally proves viability. This phase has just begun and should be completed within the next two years. The second stage would result in a working prototype, and that will be a much larger project, involving about $20 million and taking about 3 years. The final stage would be implementation - getting our fusion technology out to the economy.

Question: How long do you anticipate between a successful prototype demonstration and commercial production?

Answer: We anticipate having a commercial reactor no more than eighteen months after the prototype is completed. So eighteen months after the prototype there should be significant numbers of reactors being manufactured. If this technology can generate electricity for 1/10 the cost of current approaches, as we believe, then it will quickly supplant them. We could eventually see a million of these units being produced per year.

Question: You have given a talk on fusion at Google. Is Google or any other corporation funding any of your research?

Answer: No corporation is currently funding our research. The Abell Foundation in Baltimore is our only institutional funder, and they have invested half a million dollars. That has constituted half of our budget. We hope that the Federal Government will fund our research, and we are currently applying for a grant from ARPA-E. We are also hoping to garner funding from private accredited investors.

Question: Can this proposed fusion reactor use multiple fuel sources, or is it limited to hydrogen-boron fuel?

Answer: It is limited to hydrogen-boron fuel. Although we plan on experimenting with other fuels, such as deuterium, we don't anticipate that there is a way of using other fuels to economically generate energy using our fusion approach.

Question: At what point do you anticipate that your company, Lawrenceville Plasma Physics, will become consistently profitable?

Answer: Shortly after the successful completion of our second stage, we plan on selling licenses for the production of these generators, since demand for the product will overwhelm any single corporation. Although we haven't worked out the details of these license agreements, we intend to ensure that the price of focus fusion power is as close as possible to the cost of production.

City Scale Climate Engineering: Expanding the Eden Project Concept into Houston and other City Domes


Discovery Channel's Mega-Engineering discusses a proposal to place a dome over Houston if climate change threats fully materialize. The Houston dome could be more economical even without addressing climate change, as a more efficient way to keep residents comfortable than air conditioning individual cars and buildings. A detailed cost and efficiency analysis might show that such structures and retrofits of many cities could be economically and environmentally justified.

Economic benefits:
1. Removes or greatly reduces the need and cost for individual hurricane and weather insurance
2. Shifts the costs for air conditioning and temperature control of buildings
3. Reduces/shifts environmental damage from structures under the dome to the dome

If we rethink and retrofit how we climate control cities, it could turn out to make more sense than climate control for all the buildings inside the city.

For such dome cities all vehicles under the dome should be zero emission and electrically powered.

For farming/agriculture in a climate change scenario where such domed cities are common there should be vertical farming (farming in high rise buildings).

Eden Project
These city dome proposals are an extension of the work of famed architect Buckminster Fuller and of the Eden Project in the UK, where domes were actually built.

Buckminster Fuller's proposed one-mile-wide dome

The Eden Project is a visitor attraction in the United Kingdom, including the world's largest greenhouse. Inside the artificial biomes are plants that are collected from all around the world.

Sir Robert McAlpine and Alfred McAlpine constructed the Eden Project and MERO designed and built the biomes. The project took 2½ years to construct and opened to the public on 17 March 2001. It is made of steel and thermoplastic.

The Rainforest Biome, which is the largest greenhouse in the world, covers 1.559 hectares (3.9 acres) and measures 180 feet (55 m) high, 328 feet (100 m) wide and 656 feet (200 m) long. It is used for tropical plants, such as fruiting banana trees, coffee, rubber and giant bamboo, and is kept at a tropical temperature.

The Mediterranean Biome covers 0.654 hectares (1.6 acres) and measures 115 feet (35 m) high, 213 feet (65 m) wide and 443 feet (135 m) long. It houses familiar warm temperate and arid plants such as olives and grape vines and various sculptures.

The biomes are constructed from a tubular steel space-frame (hex-tri-hex) with mostly hexagonal external cladding panels made from the thermoplastic ETFE. Glass was avoided due to its weight and potential dangers. The cladding panels themselves are created from several layers of thin UV-transparent ETFE film, which are sealed around their perimeter and inflated to create a large cushion.

Eden Project has received £130 million of funding from various sources.

How Stuff Works describes the construction of the Eden Project.

Eden's designers decided not to use these traditional materials in their greenhouses -- they went with glazed ethylene tetrafluoroethylene (ETFE) foil instead. ETFE foil is a perfect covering for a greenhouse because it is strong, transparent and lightweight. A piece of ETFE weighs less than 1 percent of a piece of glass with the same volume. It is also a better insulator than glass, and it is much more resistant to the weathering effects of sunlight.

More technical details:
- The dome can be made of Texlon ETFE which can protect the city from 180 MPH winds, water and fire.
- The Houston dome area would be over 21 million square feet.
- The Houston dome's broadest panels will be 15 feet across. It will take 147,000 panels to cover the city of Houston.
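The panel figures above imply an average panel area that can be sanity-checked against the size of the broadest panels. The dome area and panel count are the article's figures; modeling the broadest panel as a regular hexagon 15 feet across the flats is an assumption for illustration:

```python
# Sanity check on the Houston dome panel figures: average area per panel
# versus a regular hexagon 15 ft across the flats (an assumed shape).
import math

dome_area_sqft = 21_000_000   # dome surface area (article figure)
panel_count = 147_000         # number of cladding panels (article figure)

avg_panel_area = dome_area_sqft / panel_count        # ~143 sq ft
# Regular hexagon with across-flats width w has area (sqrt(3)/2) * w^2
hex_area = (math.sqrt(3) / 2) * 15 ** 2              # ~195 sq ft
print(f"Average panel: {avg_panel_area:.0f} sq ft")
print(f"15-ft hexagon: {hex_area:.0f} sq ft")
```

The average comes out somewhat below the area of the broadest panel, which is consistent: only the broadest panels are 15 feet across, and smaller panels pull the average down.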

Monolithic domes at nextbigfuture.

Bolonkin air supported city dome for missile protection and other uses.


