April 16, 2009

Energetics Technologies and Other New Cold Fusion Research Will Be on CBS 60 Minutes April 19, 2009

One of three segments on 60 Minutes will be on cold fusion.

COLD FUSION IS HOT AGAIN - Presented in 1989 as a revolutionary new source of energy, cold fusion was quickly dismissed as junk science. But today, the buzz among scientists is that these experiments produce a real physical effect that could lead to monumental breakthroughs in energy production. Scott Pelley reports. Denise Schrier Cetta is the producer.

This site has videos of the American Chemical Society press conferences on cold fusion / low-energy nuclear reactions (LENR).

Pamela Mosier-Boss and colleagues at Space and Naval Warfare Systems Command (SPAWAR) in San Diego, California, are claiming to have made a "significant" discovery – clear evidence of the products of cold fusion. Tracks of energetic neutrons were detected.

Some of the most important new research was done in Japan by Arata, where heat was produced without any electrical input, just by loading deuterium into metal powder.

Energetics Technologies LLC and its superwave fusion process will be one of the groups featured on the next 60 Minutes.

Energetics Technologies’ proprietary SuperWave Fusion has already demonstrated the production of extraordinary amounts of excess heat. The SuperWave-driven cells have generated over 25 times (2,500%) the amount of energy used to operate the system.

At the present time, using the approaches described above, and thanks in large part to these unique relationships, Energetics Technologies is able to produce excess heat in a significant percentage of its experiments. Extraordinary breakthroughs have been accomplished, backed by tested reproducibility through the multiple independent channels of SRI and ENEA. With proof of principle established, it is now time to accelerate the work leading to the commercialization of this promising technology.

The Promise of SuperWave™ Fusion / Dr. Irving Dardik

Is it Cold Fusion? / Dr. Irving Dardik [the fusion is motion]

New Energy Times is all over the 60 Minutes feature.

CBS asked Robert Duncan, vice chancellor for research at the University of Missouri and an expert in low-temperature physics, to look into the LENR research. Duncan was referred to CBS by Allen Goldman, the head of the condensed matter physics group at the American Physical Society.

Duncan spent several weeks (on his own time) investigating LENR in October. CBS paid his travel expenses to meet with researchers at Energetics' laboratory in Omer, Israel, and observe a working LENR excess-heat experiment. Duncan emphasized to New Energy Times his objectivity about, and independence from, the research.

"‘60 Minutes’ asked the American Physical Society for a reference for someone like myself who’s done very careful measurements in related fields but not specifically in LENR," Duncan said. "I've never been involved in any 'cold fusion' research in the past, nor am I involved in any now."

Duncan also met with researchers at NRL in Washington, D.C., and the SPAWAR researchers when they were in Salt Lake City at the American Chemical Society meeting in March.

He was skeptical of the LENR excess heat before his investigation. New Energy Times spoke with Duncan today.

"Sam Hornblower of CBS asked me to read some papers and talk to some of the scientists, and it quickly became clear to me that it was a very interesting result. After I saw some of the hardware, I had a chance to ask about the experimental configurations and dig in deeper, and now I am convinced that this excess-heat effect is real."

Duncan was particularly impressed with the SPAWAR research because of its clear evidence for nuclear reactions.

Preserving and Repairing Lungs Outside the Body

Lung tissue is preserved in the Toronto XVIVO Lung Perfusion System developed at the Toronto General Hospital with the aim of repairing lungs prior to transplant. The technique allows for lungs to be removed from a donor and kept at body temperature while being assessed for function and transplant outcome.
Credit: Toronto General Hospital

A new out-of-body lung-repair technique developed at the Toronto General Hospital may dramatically increase the number of lungs that can be used in transplants and improve surgical outcome.

In an operating room at the hospital, the technology can keep a pair of human lungs slowly breathing inside a glass dome attached to a ventilator, pump, and filters. The lungs are maintained at normal body temperature of 37 °C and perfused with a bloodless solution that contains nutrients, proteins, and oxygen. The organs are kept alive in the machine, developed with Vitrolife, for up to 12 hours while surgeons assess function and repair them.

Normally, as few as one in ten lungs available for transplant is usable, and even those may not work properly when grafted. "The system allows you to assess the lungs, to diagnose what's wrong with them, and then repair them," says Shaf Keshavjee, who directs the hospital's Lung Transplant Program. "Therefore, we're transplanting lungs that have a more predictable outcome."

Some speculation beyond transplants: a patient could be placed on a lung machine while their own lungs were removed, repaired and improved, and then placed back into the original patient.

Other experts hail the Toronto technique but caution that more work is needed on how to fix lungs, stop the inflammatory response in grafting, and improve mortality in transplant patients.

"The Toronto system seems to re-create normal lung function outside the body," says Jacques-Pierre Fontaine of Brown University's cardiothoracic-surgery division. "If we can keep the organ outside the body longer with minimal ischemic damage, we can go farther to get a lung." However, says Fontaine, "the real test" will be how well the patients do with the transplanted lungs. "Proof will be in the survival data."

List of other videos of regenerative medicine and stem cell work from the McEwen Centre for Regenerative Medicine, established at the University of Toronto

Vitrolife transplantation website

Inertial Electrostatic (Bussard) Fusion Gets $2 million in Funding

The Department of Defense (DoD) released its expenditure plan for the projects to be funded by the American Recovery and Reinvestment Act of 2009 ($7.4 billion), and $2 million of it is going to fund inertial electrostatic fusion. [H/T IECfusiontech.blogspot.com]

There is a PDF of the plan. On PDF page 166 there is a small item under the heading Domestic Energy Supply/Distribution. It is as follows:

Plasma Fusion (Polywell) Demonstrate fusion plasma confinement system for shore and shipboard applications; Joint OSD/USN project. 2.0 [million]

Introduction to Bussard Fusion

This site has covered IEC (Bussard) fusion many times. Bottom line: it is one of the most promising technologies for achieving cheap, clean and non-controversial energy within ten years. Success would alter energy production, the world economy and the propulsion of ships and other vehicles, and would enable inexpensive access to space.

IEC fusion uses magnets to contain an electron cloud in the center. It is a variation on the electron gun and vacuum tube used in television technology. The fuel (deuterium, lithium or boron) is then injected as positive ions. The positive ions are attracted to the high negative charge and accelerate to speeds sufficient for fusion. Particle energy in electron volts can be converted to an equivalent temperature; the electrons hitting a TV screen correspond to roughly 200 million degrees.
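The electron-volt-to-temperature conversion mentioned above is a one-line calculation via E = k_B·T. A minimal sketch; the 20 kV accelerating voltage is an assumed, typical CRT-gun figure, not one from the article:

```python
# Convert a particle energy in electron volts to the temperature whose
# thermal energy k_B * T matches it. The 20 keV value is an assumption.

K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant, eV per kelvin

def ev_to_kelvin(energy_ev):
    """Equivalent temperature in kelvin for a given energy in eV."""
    return energy_ev / K_B_EV_PER_K

tv_electron_ev = 20_000
print(f"{ev_to_kelvin(tv_electron_ev):.3g} K")  # ~2.3e8 K, roughly 200 million degrees
```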

The old problem was that a physical grid in the center limited you to about 98% efficiency, because ions would collide with the grid. With grids, the very best you can do is about 2% electron losses (the 98% limit), and with those kinds of losses net power is impossible. Losses have to get below 1 part in 100,000 (99.999% efficiency) to achieve net power.

Bussard's system uses magnets on the outside to contain the electrons, so the electrons recirculate about 100,000 times before being lost outside the magnetic field.

The fuel either comes in as ions from an ion gun, or it comes in without a charge and some of it is ionized by collisions with the rapidly circulating electrons. The fuel feels the same forces as the electrons but responds a little differently because it is going much slower: about 64 times slower in the case of deuterium fuel (hydrogen with one extra neutron). These positively charged deuterium ions are attracted to the virtual electrode (the electron cloud) in the center of the machine, so they come rushing in. If they come rushing in fast enough and hit each other nearly head-on, they join together to make a He-3 nucleus (two protons and a neutron) and give off a high-energy neutron.
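The speed ratio can be sanity-checked: at equal kinetic energy (the same accelerating voltage), speed scales as the inverse square root of mass, so the ratio is sqrt(m_D/m_e). A quick sketch using standard physical constants:

```python
# Deuteron-to-electron speed ratio at equal kinetic energy.
# Masses are standard physical constants (CODATA values, rounded).
import math

M_ELECTRON = 9.109e-31  # electron mass, kg
M_DEUTERON = 3.344e-27  # deuteron mass, kg

speed_ratio = math.sqrt(M_DEUTERON / M_ELECTRON)
print(f"deuterons move ~{speed_ratio:.0f}x slower than equal-energy electrons")
```

This comes out near 61, the same ballpark as the "about 64 times slower" figure above.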

Ions that miss will go rushing through the center and then head for one of the grids. When the voltage field they have traveled through equals the energy they had at the center of the machine, the ions have given up their energy to the grids (which repel them); they then head back to the center of the machine, where they have another chance to hit another ion at high enough speed, and closely enough, to cause a fusion.

Discussion Board Technical Details From IEC Fusion Research Lead Dr Nebel

Some technical comments from Dr Nebel

A few comments on scaling laws….
To a certain extent we are in the same boat as everyone else as far as the previous experiments go since Dr. Bussard’s health was not good when we started this program and he died before we had a chance to discuss the previous work in any detail. Consequently, we have had to use our own judgement as to what we believe from the earlier experiments and what we think may be questionable. Here’s how we look at it:
1. We don’t rely on any scaling results from small devices. The reason for this is that these devices tend to be dominated by surface effects (such as outgassing) and it’s difficult to control the densities in the machines. This is generally true for most plasma devices, not just Polywells.
2. Densities for devices prior to the WB-7 were surmised by measuring the total light output with a PMT and assuming that the maximum occurred when beta= 1. We’re not convinced that this is reliable. Consequently, we have done density interferometry on the WB-7. We chose this diagnostic for the WB-7 because we knew through previous experience that we could get it operational in a few months (unlike Thomson scattering which by our experience takes more than a man-year of effort and requires a laser which was outside of our budget) and density is always the major issue with electrostatic confinement. This is particularly true for Polywells which should operate in the quasi-neutral limit where Debye lengths are smaller than the device size.
3. As discussed by several people earlier, power output for a constant beta device should scale like B**4*R**3. All fusion machines scale this way at constant beta. Input power scales like the losses. This is easy to derive for the wiffleball, and I’ll leave that as an “exercise to the reader”. This is the benchmark that we compare the data to.
4. As for Mr. Tibbet’s questions relating to alpha ash, these devices are non-ignited (i.e. very little alpha heating) since the alpha particles leave very quickly through the cusps. If you want to determine if the alphas hit the coils, the relevant parameter is roughly the comparison of the alpha Larmor radius to the width of the confining magnetic field layer. I’ll leave that as an “exercise to the reader” as well.

Loss fraction = (summation (pi*rl**2))/(4*pi*R**2) where rl is the electron gyroradius and R is the coil radius. The summation is a summation over each of the point cusps. If you calculate rl from one of the coil faces, then there are "effectively" ~ 10 point cusps (fields are larger in the corners than the faces). The factor that your observed confinement exceeds this model is then lumped together as the cusp recycle factor.

The other model is to look at mirror motion along field lines. For this model you look at loss cones and assume that the electrons effectively scatter every time they pass through the field null region. This model describes the confinement which was observed on the DTI machine in the late 80s.

I don't know how to predict cross-field diffusion on these devices. The gradient scale lengths of the magnetic fields are smaller than the larmor radii and the electrostatic fields should give rise to large shear flows. On top of that, the geometry is 3-D.

The mirror model is a bit of a handwaving model that I believe Nick Krall came up with. The mirror ratio is calculated from the field where the electron Larmor radius is on the order of the device size. Any smaller field than that will not have adiabatic motion. If particles enter the field null region, it is assumed that they effectively scatter. I believe that Dave Anderson at LLNL did a fair amount of particle tracing calculations for FRMs in the late 70s, and not surprisingly saw jumps in the adiabatic invariants when moving through field null regions. I presume similar behavior was observed on FRC simulations. Anyway, it's a ballpark model.

My other comment was related to electrons trapped in the wiffleball. Over most of their orbit there is little or no magnetic field (i.e. Larmor radius bigger than the device size) with the electrons turning when they hit the barrier magnetic field. The electron behavior is stochastic since there are no invariants. We don't have any direct measure of the internal magnetic fields, but we do know the density and have a pretty good idea what the electron energy is. High beta discharges should expel the magnetic field. The vacuum fields should be in a mirror regime (as was the DTI device) while the wiffleball fields should transition to better confinement. There is about 3 orders of magnitude difference in the predicted confinement times so it's pretty easy to see which regime the device operates in (unless, of course, the cusp recycle is truly enormous).

As you suggest, Bohm diffusion is kind of a catch-all for any kind of confinement you don't understand. We hope we don't end up there, and so far we're OK.

If you are interested in pumps, the specifications for ITER can be found at:
http://www.iter.org/a/index_nav_4.htm. If I am reading this correctly, the pumping power is about 60,000 liters/second. This is ~ 30 times more than the WB-7. It doesn't take a lot of power. Our system takes ~ 500 watts of power. ITER probably requires 10-20 kW.
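Two of the quantitative relations in Dr. Nebel's comments above, the constant-beta B**4 * R**3 power scaling and the point-cusp loss fraction, can be sketched numerically. All numerical inputs below are illustrative assumptions, not WB-7 data:

```python
# Sketch of two relations from Dr. Nebel's comments: fusion power at
# constant beta scaling as B**4 * R**3, and the point-cusp loss fraction
# sum(pi*rl**2) / (4*pi*R**2) with ~10 effective point cusps.
import math

def power_scaling(b_factor, r_factor):
    """Relative fusion power when B and R are each scaled at constant beta."""
    return b_factor**4 * r_factor**3

def cusp_loss_fraction(rl, R, n_cusps=10):
    """Fraction of the sphere area lost through the point cusps."""
    return n_cusps * math.pi * rl**2 / (4 * math.pi * R**2)

# Doubling both field and radius multiplies fusion power by 2**4 * 2**3 = 128.
print(power_scaling(2, 2))

# Assumed 1 mm electron gyroradius in a 0.15 m coil-radius machine.
print(f"{cusp_loss_fraction(1e-3, 0.15):.1e}")
```

The cusp-recycle factor Nebel mentions is then whatever multiple the observed confinement exceeds this simple model by.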

Highlights of Bussard's Google Talk on IEC Fusion

IEC Fusion for Dummies

Polywell fusion discussion board

The next IEC fusion reactor could be a 100 MW version.

Successful development of IEC fusion would transform space travel and energy

Non-electric uses of nuclear fusion. Commercial generation of energy from pure fusion is one of the toughest goals.

The most promising opportunities for non-electric applications of fusion fall into four categories:
1. Near-Term Applications (isotopes for medicine, detection of explosives)
2. Transmutation (this also includes fusion/fission hybrid reactors; fusion can make fission burn nearly 100% of the fuel, i.e. almost no waste from unburned fuel)
3. Hydrogen Production
4. Space Propulsion

Here is an introduction to the inertial electrostatic fusion concept.

ETH Zurich researchers copy bacteria in Bacteriabot

Artificial bacterial flagella are about half as long as the thickness of a human hair. They can swim at a speed of up to one body length per second. This means that they already resemble their natural role models very closely. (Image: Institute of Robotics and Intelligent Systems/ETH Zurich)

ETH Zurich researchers have built micro-robots as small as bacteria. Their purpose is to help cure human beings.

They look like spirals with tiny heads, and screw through the liquid like miniature corkscrews. When moving, they resemble rather ungainly bacteria with long whip-like tails. They can only be observed under a microscope because, at a total length of 25 to 60 µm, they are almost as small as natural flagellated bacteria, most of which are between 5 and 15 µm long, with a few more than 20 µm.

The tiny spiral-shaped, nature-mimicking lookalikes of E. coli and similar bacteria are called “Artificial Bacterial Flagella” (ABFs), the “flagella” referring to their whip-like tails. They were invented, manufactured and enabled to swim in a controllable way by researchers in the group led by Bradley Nelson, Professor at the Institute of Robotics and Intelligent Systems at ETH Zurich. In contrast to their natural role models, some of which cause diseases, the ABFs are intended to help cure diseases in the future.

The practical realization of these artificial bacteria, the smallest yet created with a rigid flagellum and external actuation, was made possible mainly by the self-scrolling technique from which the spiral-shaped ABFs are constructed. ABFs are fabricated by vapor-depositing several ultra-thin layers of the elements indium, gallium, arsenic and chromium onto a substrate in a particular sequence. They are then patterned from it by means of lithography and etching. This forms super-thin, very long, narrow ribbons that curl themselves into a spiral shape as soon as they are detached from the substrate, because of the unequal molecular lattice structures of the various layers. Depending on the deposited layer thickness and composition, spirals of different sizes are formed, which can be precisely defined by the researchers. Nelson says, “We can specify not only how small the spiral is, but even the scrolling direction of the ribbon that forms the spiral.”

External propulsion via magnetic field

Even before releasing the ribbon that will afterwards form the artificial flagellum, a kind of head for the mini-robot is attached to one of its ends. It consists of a chromium-nickel-gold tri-layer film, also vapor-deposited. Nickel is soft-magnetic, in contrast to the other materials used, which are non-magnetic. Nelson explains, “This tiny magnetic head enables the ABF to move in a specific way in a magnetic field.” The spiral-shaped ABFs swim through the liquid, and their movements can be observed and recorded under a microscope.

With the software developed by the group, the ABF can be steered to a specific target by tuning the strength and direction of the rotating magnetic field which is generated by several coils. The ABFs can move forwards and backwards, upwards and downwards, and can also rotate in all directions. Brad Nelson says “There’s a lot of physics and mathematics behind the software.” The ABFs do not need energy of their own to swim, nor do they have any moving parts. The only decisive thing is the magnetic field, towards which the tiny head constantly tries to orientate itself and in whose direction it moves. The ABFs currently swim at a speed of up to 20 µm, i.e. up to one body length, per second. Nelson expects that it will be possible to increase the speed to more than 100 µm per second. For comparison: E. coli swims at 30 µm per second.

Possible applications in medicine

The ABFs have been designed for biomedical applications. For example, they could carry medicines to predetermined targets in the body, remove plaque deposits in the arteries or help biologists to modify cellular structures that are too small for direct manipulation by researchers. In initial experiments, the ETH Zurich researchers have already made the ABFs carry around polystyrene micro-spheres.

More Videos
More videos and information are here

The Institute of Robotics and Intelligent Systems has a lot more related robotics research

April 15, 2009

Getting Graphene Edges to Atomic Precision and Real Time Atom Observation

Foresight highlighted two important research developments.

MIT can use heat to get the edges of graphene to atomic precision, and scientists at the University of California at Berkeley and the Lawrence Berkeley National Laboratory were able to observe carbon atoms moving around the edges of a hole punched in a graphene crystal.

Live Action Movies of Carbon Atoms Around Graphene

Researchers with the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), working with TEAM 0.5, the world’s most powerful transmission electron microscope, have made a movie that shows in real-time carbon atoms repositioning themselves around the edge of a hole that was punched into a graphene sheet. Viewers can observe how chemical bonds break and form as the suddenly volatile atoms are driven to find a stable configuration. This is the first ever live recording of the dynamics of carbon atoms in graphene.

Two years ago, co-authors Cohen and Louie, theorists who hold joint appointments with Berkeley Lab’s Materials Sciences Division and UC Berkeley, calculated that nanoribbons of graphene can conduct a spin current and could therefore serve as the basis for nanosized spintronic devices. Spin, a quantum mechanical property arising from the magnetic field of a spinning electron, carries a directional value of either “up” or “down” that can be used to encode data in the 0s and 1s of the binary system. Spintronic devices promise to be smaller, faster and far more versatile than today’s devices because — among other advantages — data storage does not disappear when the electric current stops.

Said Cohen, “Our calculations showed that zigzag graphene nanoribbons are magnetic and can carry a spin current in the presence of a sufficiently large electric field. By carefully controlling the electric field, it should be possible to generate, manipulate, and detect electron spins and spin currents in spintronics applications.”

Said Louie, “If electric fields can be made to produce and manipulate a 100-percent spin-polarized carrier system through a chosen geometric structure, it will revolutionize spintronics technology.”…

Action movie of individual atoms.

Simulation of individual atoms

Controlling the Edges of Graphene Nanoribbons

Controlled Formation of Sharp Zigzag and Armchair Edges in Graphitic Nanoribbons

Graphene nanoribbons can exhibit either quasi-metallic or semiconducting behavior, depending on the atomic structure of their edges. Thus, it is important to control the morphology and crystallinity of these edges for practical purposes. MIT researchers demonstrated an efficient edge-reconstruction process, at the atomic scale, for graphitic nanoribbons by Joule heating. During Joule heating and electron beam irradiation, carbon atoms are vaporized, and subsequently sharp edges and step-edge arrays are stabilized, mostly with either zigzag- or armchair-edge configurations. Model calculations show that the dominant annealing mechanisms involve point defect annealing and edge reconstruction.

9 pages of supporting material on the research work.

Diamond-based Quantum Information Processing and Communication Project

Projects at the University of California at Santa Barbara will focus on developing new quantum measurement techniques to manipulate and read out single electron spins in diamond. The projects will also focus on the on-chip integration of single electron spins with photonics, for communication. Additionally, the project aims to build a world-class research facility for the creation of synthetic crystal diamond and diamond heterostructure materials and devices. Diamonds fabricated by the team will complement many ongoing research initiatives on campus and around the world, including programs working towards solid-state lighting, nanoelectronics, and atomic-level storage.

Two government funding agencies are putting $6.1 million into a pair of research projects aimed at utilizing diamond for quantum communication and information processing. UCSB is leading the charge on both efforts, owing to dramatic developments in quantum physics at the university over the past decade.

This work relates to work by the company Element Six to get longer coherence times by increasing the Carbon 12 purity of diamond.

The UCSB work will advance quantum computing, quantum communications and spintronics.

Muscle Gene Therapy that is Effective as Well as Safe

Gene therapy volunteers were evaluated at set intervals through 180 days, and the therapy's effectiveness was measured by assessing alpha-SG protein expression in the muscle, which was four to five times higher than in the muscles that received only the saline [placebo]. The volunteers encountered no adverse health events, and the transferred genes continued to produce the needed protein for at least six months after treatment.

Limb-girdle muscular dystrophy actually describes more than 19 disorders that occur because patients have a faulty alpha-sarcoglycan gene. In each of the disorders, the muscle fails to produce a protein essential for muscle fibers to thrive. It can occur in children or adults, and it causes their muscles to get weaker throughout their lifetimes. The trial evaluated the safety of a modified adeno-associated virus — an apparently harmless virus known as AAV that already exists in most people — as a vector to deliver the alpha-SG gene to muscle tissue.

It is easier to treat localized conditions, so the initial successes with gene therapy are for localized conditions. Several trials have demonstrated the safety of the new gene therapy methods.

Muscle-fiber size increased in the treated areas, suggesting that it may be possible to combat the so-called "dystrophic process" that causes muscles to waste away during the course of the disease. Beyond muscular dystrophy, the discovery shows muscle tissue can be an effective avenue to deliver therapeutic genes for a variety of muscle disorders, including some that are resistant to treatment, such as inclusion body myositis, and in conditions where muscle is atrophied, such as in cancer and aging.

"These exciting results demonstrate the feasibility of gene therapy to treat limb-girdle muscular dystrophy," said Jane Larkindale, portfolio director with Muscular Dystrophy Association Venture Philanthropy, a program that moves basic research into treatment development. "The lack of adverse events seen in this trial not only supports gene therapy for this disease, but it also supports such therapies for many other diseases."

DUPIC Fuel Cycle : Direct Use of Pressurized Water Reactor Spent Fuel in CANDU

DUPIC stands for Direct Use of Pressurized Water Reactor Spent Fuel in CANDU. CANDU is the Canadian heavy water nuclear reactor. (H/T David Walters)

The extra cost of DUPIC has been estimated to be six to ten percent above the once-through cost. This is far below credible estimates for the cost of the fast-reactor fuel cycle, which started at 25 percent above once-through. Moreover, there are many CANDU reactors operating around the world, with excellent in-service records. Their operating costs are well known, and competitive with those of light water reactors.

DUPIC Advantages and Technical Issues
The DUPIC technique has certain advantages:

* No materials are separated during the refabrication process. Uranium, plutonium, fission products and minor actinides are kept together in the fuel powder and bound together again in the DUPIC fuel bundles.
* A high net destruction rate of actinides and plutonium can be achieved.
* Up to 25% more energy can be realised compared to other PWR used fuel recycling techniques.
* A DUPIC fuel cycle could reduce a country's need for used PWR fuel disposal by 70% while reducing fresh uranium requirements by 30%.
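A minimal sketch of what the disposal and uranium reductions claimed above would mean in practice; the percentage figures come from the text, but the baseline tonnages are purely hypothetical:

```python
# Hypothetical illustration of the DUPIC savings. The 70% and 30%
# reduction figures are from the text; the baseline tonnages below
# are invented for illustration only.

pwr_spent_fuel_t = 10_000  # assumed spent PWR fuel needing disposal, tonnes
fresh_uranium_t = 8_000    # assumed fresh uranium requirement, tonnes

disposal_after_dupic = pwr_spent_fuel_t * (1 - 0.70)  # 70% less disposal
uranium_after_dupic = fresh_uranium_t * (1 - 0.30)    # 30% less fresh uranium

print(disposal_after_dupic, "tonnes still needing disposal")
print(uranium_after_dupic, "tonnes of fresh uranium still required")
```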

Used nuclear fuel is highly radioactive and generates heat. This high activity means that the DUPIC manufacture process must be carried out remotely behind heavy shielding. While these restrictions make the diversion of fissile materials much more difficult and hence increase security, they also make the manufacture process more complex compared with that for the original PWR fuel, which is barely radioactive before use.

Canada, which developed the CANDU reactor, and South Korea, which hosts four CANDU units as well as many PWRs, have initiated a bilateral joint research program to develop DUPIC, and the Korea Atomic Energy Research Institute (KAERI) has been implementing a comprehensive development program since 1992 to demonstrate the DUPIC fuel cycle concept.

KAERI believes that although it is too early to commercialise the DUPIC fuel cycle, the key technologies are in place for a practical demonstration of the technique. Challenges which remain include the development of a technology to produce fuel pellets of the correct density, the development of remote fabrication equipment and the handling of the used PWR fuel. However, KAERI successfully manufactured DUPIC small fuel elements for irradiation tests inside the HANARO research reactor in April 2000 and fabricated full-size DUPIC elements in February 2001. AECL is also able to manufacture DUPIC fuel elements.

Research is also underway on the reactor physics of DUPIC fuel and the impacts on safety systems.

A further complication is the loading of highly radioactive DUPIC fuel into the CANDU reactor. Normal fuel handling systems are designed for the fuel to be hot and highly radioactive only after use, but it is thought that the used fuel path from the reactor to cooling pond could be reversed in order to load DUPIC fuel, and studies of South Korea's Wolsong CANDU units indicate that both the front- and rear-loading techniques could be used with some plant modification.

Used fuel from light water reactors (at normal US burn-up levels) contains approximately:

95.6% uranium (U-232: 0.1-0.3%; U-234: 0.1-0.3%; U-235: 0.5-1.0%; U-236: 0.4-0.7%; balance: U-238)
2.9% stable fission products
0.9% plutonium
0.3% caesium & strontium (fission products)
0.1% iodine and technetium (fission products)
0.1% other long-lived fission products
0.1% minor actinides (americium, curium, neptunium)
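As a quick consistency check, the category percentages listed above should total roughly 100%:

```python
# The spent-fuel composition categories listed above should sum to ~100%.
# Percentages are copied directly from the text.

composition = {
    "uranium": 95.6,
    "stable fission products": 2.9,
    "plutonium": 0.9,
    "caesium & strontium": 0.3,
    "iodine & technetium": 0.1,
    "other long-lived fission products": 0.1,
    "minor actinides": 0.1,
}

total = sum(composition.values())
print(f"total = {total:.1f}%")
```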

South Korea Nuclear Power

South Korea is now moving toward a full 60% of electricity capacity from nuclear power (versus 40% now), which could put it close to 75% of actual generation. It currently has five 1 GW-plus reactors under construction and three more planned to start over the next five years. These three, including the last one started in the fall of last year, are the APR-1400: a 1,350 MW, indigenously built, partially Korean-designed nuclear power plant based on a Generation III+ reactor originally conceived, but never implemented, by Westinghouse as the NRC-approved design known as "System 80+".

The Koreans are also working on advanced reprocessing facilities to recover the roughly 97% of the energy that remains in spent nuclear fuel.

DUPIC Status Report from 2006

6 page DUPIC status report from 2006

The Korea Atomic Energy Research Institute (KAERI) established the DUPIC fuel development facility (DFDF) in 1999 to process PWR spent fuel and to fabricate DUPIC fuel on a laboratory scale. About 25 pieces of fuel fabrication equipment are installed in this lab-scale facility.

1) Decladding machine, OREOX furnace, off-gas treatment system, attrition mill and mixer to produce DUPIC fuel powder from the PWR spent fuel
2) Compaction press, high temperature sintering furnace, center-less grinder, pellet cleaner and dryer, pellet stack length adjuster and pellet loader to fabricate DUPIC fuel pellets
3) Remote laser welder and welding chamber to fabricate DUPIC fuel elements
4) Quality inspection devices to characterize the DUPIC fuel powder, pellets and elements.

KAERI fabricated full-size DUPIC fuel elements in February 2001.

A comparison of the optical microscopy photos showed that the irradiation behavior of the DUPIC fuel is similar to that of the standard CANDU spent fuel or PWR spent fuel of 40,000 MWd/tHM.

The engineering-scale DUPIC facility will be designed with a capacity of 50 ton/yr and a plant lifetime of 40 years. The design also considers the expansion of the facility to a commercial-scale plant. The main process building is located in the centre, surrounded by auxiliary buildings such as a utility facility, health physics buildings, etc. The overall process can be categorized into DUPIC fuel fabrication, structural part recycling and radioactive waste treatment. A detailed flow path of the main processes is as follows:
- PWR spent fuel receiving and storage
- Spent fuel disassembly and decladding (99% recovery of the fuel material from the cladding)
- Fuel powder preparation by the OREOX process
- Fuel pellet fabrication with a theoretical density of more than 95%
- Fuel rod fabrication, including surface decontamination and fissile content measurement
- Fuel bundle fabrication in the CANFLEX geometry
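The engineering-scale figures above imply a simple material balance. A minimal sketch, using only the numbers quoted in this section (illustrative, not a design calculation):

```python
# Rough lifetime material balance for the engineering-scale DUPIC facility,
# using the capacity, lifetime, and recovery figures quoted above.

CAPACITY_T_PER_YR = 50    # throughput, tonnes heavy metal per year
LIFETIME_YR = 40          # design plant lifetime
CLAD_RECOVERY = 0.99      # fraction of fuel material recovered from the cladding

lifetime_feed = CAPACITY_T_PER_YR * LIFETIME_YR       # PWR spent fuel processed
recovered_fuel = lifetime_feed * CLAD_RECOVERY        # heavy metal recovered for pellets

print(f"Lifetime PWR spent fuel feed: {lifetime_feed} t")
print(f"Recovered for DUPIC pellets:  {recovered_fuel:.0f} t")
```

So a single engineering-scale plant would work through roughly 2,000 tonnes of spent fuel over its life, recovering nearly all of it for refabrication.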

Though it is too early to commercialize DUPIC fuel on the basis of the technologies developed so far, the key technologies for the DUPIC fuel cycle have been established. There should therefore be no technical obstacle to developing commercial DUPIC fuel technology once its performance is demonstrated through practical use of the fuel, which would be an important turning point in the history of nuclear power development. By utilizing spent fuel with an internationally proven, proliferation-resistant technology, the burden of spent fuel accumulation could be relieved not only for Korea's domestic nuclear fleet but for the worldwide nuclear power industry.

Electrolytic/electrometallurgical processing techniques ('pyroprocessing') to separate nuclides from a radioactive waste stream have been under development in the US Department of Energy laboratories, notably Argonne, as well as by the Korea Atomic Energy Research Institute (KAERI) in conjunction with work on DUPIC.

The KAERI advanced spent fuel conditioning process (ACP) involves separating uranium, transuranics including plutonium, and fission products including lanthanides. It utilises a high-temperature lithium-potassium cathode. Development of this process is at the heart of US-South Korean nuclear cooperation, and will be central to the renewal of the bilateral US-South Korean nuclear cooperation agreement in 2014, so is already receiving considerable attention in negotiations.

With US assistance through the International Nuclear Energy Research Initiative (I-NERI) program, KAERI built the Advanced Spent Fuel Conditioning Process Facility (ACPF) at KAERI in 2005. KAERI hopes the project will be expanded to engineering scale by 2012, leading to the first stage of a Korea Advanced Pyroprocessing Facility (KAPF) starting in 2016 and becoming a commercial-scale demonstration plant in 2025.

Older 6 page report on Korea's DUPIC plans

Canada nuclear yearbook for 2008. 54 pages

Rice University and Stanford University Researchers Have Unzipped Carbon Nanotubes into Thin Graphene Strips

Rice University researchers have unzipped carbon nanotubes to make graphene ribbons tens of nanometres wide. This is the cover story of the April 16, 2009 issue of the journal Nature.

"Ribbon structures are very important structures and they're not easy to make," says James Tour, a chemist at Rice University in Houston, Texas. Early techniques used chemicals or ultrasound to chop graphene sheets into ribbons, but could not make ribbons in large amounts or with controlled widths.

As a solution, Tour and his co-workers, and a separate group led by Hongjie Dai of Stanford University in California, decided to try to generate ribbons from carbon nanotubes.

Dai and his colleagues opted to slice the tubes using an etching technique borrowed from the semiconductor industry. They stuck nanotubes onto a polymer film and then used ionized argon gas to etch away a strip of each tube. Once cleaned, the remaining ribbons were just 10–20 nanometres wide.

Tour's group, by contrast, used a combination of potassium permanganate and sulphuric acid to rip the tubes open along a single axis. The resulting ribbons are wider — around 100–500 nanometres — and not semiconducting, but easier to make in large amounts.

"The techniques complement each other," says Mauricio Terrones, a physicist at the Institute for Science and Technology Research of San Luis Potosi in Mexico, who was not involved in the work.

Nanowerk has coverage.

In addition to being fairly straightforward and easy to do, the process can be extremely efficient. "We can open up every carbon nanotube at the same time and convert many nanotubes into ribbons at the same time," Dai said.
Depending on how large a surface they cover with nanotubes – anything from a chip to a wafer – Dai said his team can create anywhere from one to tens of thousands of graphene nanoribbons at a time. The ribbons can easily be removed from the polymer film and transferred onto any other substrate, making it easy to create items such as graphene transistors, which hold promise for high-performance electronic devices.

"How much better computer chips using graphene nanoribbons would be than silicon chips is an open question," Dai said. "But there is definite potential for them to give a very good performance."

Another advantage of Dai's method is that the edges of the nanoribbons produced are fairly smooth, which is critical to having them perform well in electronics applications.

The next step in the team's research is to better characterize the ribbons and try to refine their control of the production process. Dai said it is important to control the width of the ribbon and the edges of the structure of the ribbon, as those things could potentially affect the electrical properties of the ribbons and any device in which they are used. [There is separate recent MIT work in using heat to control the edges of graphite]

MIT Technology Review has coverage as well

Tour's unzipping method yields graphene in bulk, which is an advantage from a manufacturing perspective. But "[Dai]'s going to have better control," admits Tour. The width of the Rice group's nanoribbons is determined by the diameter of the nanotubes that they come from. In contrast, using the Stanford team's technique, it's possible to finely control the width of the nanoribbons. In today's publication, Dai and his colleagues describe nanoribbons six nanometers wide, but he says that they have subsequently made narrower and more semiconducting ones. "There might be an optimum width; that needs to be investigated," he says.

Tour's nanoribbons are easy to process because they are graphene oxide, which is soluble in water. "You can use sheer force to align them like logs in a river lining up in parallel," says Tour. "You can paint them down, and they will align." Tour adds that the nanoribbons can be made into devices using ink-jet printing. Once the ribbons are in place on a chip, they're treated with hydrogen at high heat to remove the oxygen at their edges and turn them into semiconductors. Without this step, the ribbons are insulators.

The Stanford research was funded by Intel, and Tour says that he is in talks with companies interested in licensing his manufacturing method as well as devices made with the nanoribbons.

Both techniques are likely to be useful to researchers, and both have a variety of potential applications. Tour believes that his larger ribbons could be used in solar panels and flexible touch displays, where cheap, transparent materials are in demand. They could even be spun into lightweight, conducting fibres that might replace bulky copper wiring on aircraft and spacecraft. Dai's narrower ribbons, meanwhile, might find uses in electronics because of their semiconducting properties.

Dai says that his group has already used the ribbons to make basic transistors, but, he adds, it's too early to tell whether they will be commercially competitive. "It's very early in the game," he says.

April 14, 2009

Neutron Star Crust is Ten Billion Times Stronger than Steel

Computer simulations show that the breaking strength of neutron star crust is about 10 billion times greater than that of terrestrial engineering materials such as metal alloys, whose strength is measured in fractions of a GPa. The largest contributor to this tremendous difference is of course the enormous pressure, and thus density, of the crust.

The screened Coulomb interaction is purely repulsive (in a neutron star) and has no explicit length scale; i.e., the system at twice the density behaves just like the system at the original density, only at a lower temperature (Eq. 2). This causes the material to fail abruptly and collectively at a large strain, rather than yielding continuously at low strain through the formation of dislocations, as observed in metals. For example, the breaking strain of steel is around 0.005, some twenty times smaller than what we find for the neutron star crust. We speculate that the collective plastic behavior found here could help to improve design strategies that suppress the weakening effects of dislocations and other more localized defects in conventional materials. Note that small Coulomb solids have been studied in the laboratory using cold trapped ions.

Materials like rock and steel break because their crystals have gaps and other defects that link up to create cracks. But the enormous pressures in neutron stars squeeze out many of the imperfections. That produces extraordinarily clean crystals that are harder to break. A cube of neutron star crust can be deformed by 20 times more than a cube of stainless steel before breaking.

So if metals and other materials could be made with perfect crystals, they would have 20 times the strength of regular materials.
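The 20x figure can be turned into an explicit breaking strain using the steel value quoted from the paper:

```python
# Breaking-strain comparison using the numbers quoted above:
# steel breaks at a strain of ~0.005, and the simulated neutron star
# crust sustains about 20 times that before failing.
steel_breaking_strain = 0.005
crust_to_steel_ratio = 20
crust_breaking_strain = steel_breaking_strain * crust_to_steel_ratio
print(crust_breaking_strain)  # 0.1, i.e. ~10% deformation before breaking
```

A strain of 0.1 means the crust can be deformed by about 10% of its length before fracturing, which is what makes the taller "mountains" discussed below possible.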

Now, "all else being equal, the maximum height of a 'mountain' on a neutron star is now 10 times what we thought," Owen told New Scientist.

That would produce gravitational waves with 100 times the energy as those previously calculated, which could boost the likelihood that ground-based experiments like the US Laser Interferometer Gravitational-Wave Observatory (LIGO) could spot the signals, he added.
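The jump from 10x mountain height to 100x wave energy follows from the standard scaling that gravitational-wave strain amplitude grows linearly with the mass deformation while radiated energy grows with the square of the amplitude. A minimal sketch:

```python
# Why a 10x taller 'mountain' implies ~100x the gravitational-wave energy:
# the strain amplitude h scales linearly with the deformation, and the
# radiated energy scales as h squared.
height_factor = 10
amplitude_factor = height_factor      # h is proportional to mountain height
energy_factor = amplitude_factor**2   # E is proportional to h^2
print(energy_factor)                  # 100
```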

Cellscope Will Enable Malaria and Tuberculosis Diagnosis in Developing World

A few years ago, Daniel Fletcher posed a challenge to the students in his Optics and Microscopy class at the University of California at Berkeley (UC Berkeley).

He told them to imagine working in a remote African village at the time of a disease outbreak, with only a camera cell phone and an assortment of basic optics lenses and mounts at their disposal. Would it be possible, Fletcher asked, to convert the camera phone into a sort of mobile microscope that could be used to diagnose the disease?

Using less than US$100 in supplies, they proved that their idea of converting a camera phone into a low-cost, clinical-quality light microscope was feasible. Now, with funding and software support from Microsoft Research, a small team of graduate, postdoctoral and undergraduate students is working with Fletcher to refine their invention, which they call CellScope.

Cancer patients, who often have to make frequent trips to the hospital for complete blood cell counts, could use a CellScope to do in-home tests and then send the results to their physician. Farmers could use the device to take images of crop blights and then transmit the results to an expert. Fletcher and his team have submitted a patent disclosure on CellScope. But he says the researchers will follow UC Berkeley's policy of offering free licenses for technologies used to improve health and welfare in developing countries.

Half the world is middle class and there are over 4 billion cellphones.

Fletcher and his students have developed two lens systems for CellScope so far—a low-magnification scope for viewing conditions such as infections or skin rashes, and a high-magnification scope for analyzing microorganisms in blood, saliva or other samples.

They used a standard clip-on cell-phone holder to make a mounting system for the lenses and for the sample slides. They have also developed an illumination system using light-emitting diodes (LEDs) powered by the cell-phone battery. The team hopes to begin field testing CellScope in early 2009, but first they have to refine the microscope and make it more robust.

“Right now, there’s an awful lot of epoxy,” Fletcher says. They are also still working on two software packages: CellScopeCapture, an imaging application that will run on the Microsoft® Windows Mobile® operating system, and CellScopeAnalysis, an image-analyzer.

Information from the Human Enhancement & Nanotechnology Conference

The Human Enhancement & Nanotechnology Conference was held at Western Michigan University in Kalamazoo on March 28-29, 2009. The conference was organized by the Ethics and Emerging Technologies group, headed by Patrick Lin at California State Polytechnic University. About fifty people attended.

Nicole Hassoun, an Assistant Professor of Philosophy at Carnegie Mellon University, gave a talk titled "Nanotechnology, Enhancement, and Human Nature." Her discussion centered on eco-aesthetics and environmental ethics, and her conclusion was that neither tells us much about which enhancements are permissible or impermissible.

Ron Sandler, a professor of philosophy at Northeastern University, focused on whether human enhancement technologies are likely to impair social justice. His conclusion boils down to making access to enhancements more equal.

Daniel Moore, who works on nano-applications to semiconductors for IBM and has done work on nano-neural scaffolding at MIT’s Institute for Soldier Nanotechnologies, spoke on military applications of enhancement. He started by describing the enhancing technologies of the last three thousand years, from spears, shields, swords and armor. His argument was that there is a continuity from those enhancements to the ones being explored by military research today. He distinguished between civilizational vs. individual technologies, defensive vs. offensive, permanent vs. temporary, and internal vs. external. A temporary external nano-enhancement would be better armor, for instance, while a chip in the head would be a permanent internal nano-enhancement. Some of the problems being addressed with nano-enhancement include carrying heavy loads, non-lethal crowd control, stamina under stress and sleeplessness, and surviving battlefield wounds. Ubiquitous nano-sensors on the battlefield and internal medical sensors can improve command and control, and allow remote triggering of nano-therapies in wounded soldiers.

The Human Enhancement & Nanotechnology conference site has abstracts of the talks.

Colin Allen: Goggles vs. Implants: Why Cognitive Nanoethics Just Ain't in the Head
For under $5,000 one can already purchase a computer the size of a cigarette pack that is 10-15 times more powerful than the average desktop machine and can be worn on one's belt. When combined with a heads-up VR display, these systems are being used to serve up a virtually-enhanced reality -- a capability that the U.S. military is already developing in order to train soldiers by blending virtual combatants into real physical environments. Networks of these wearable supercomputers will enhance humans in ways that we can barely imagine, and nanoscale computing will further extend the possibilities for enhancing ordinary sensory input with computer-mediated and computer-generated information. Because these augmented reality devices use (at least three of) the familiar five senses, they don't depend on the development of new neural-technological interfaces that are required by implants. Implants will continue to be developed, especially for people whose medical conditions mean that standard sensory routes are impaired. But because of the technological hurdles facing neural implants I surmise that goggles will deliver nanocomputer-based cognitive enhancements to most humans sooner than implants. Although the idea of embedding nanocomputers into our bodies has captured the imaginations of many futurists and nanoethicists, I will suggest that the actual issues for cognitive nanoethics may be somewhat different from what they've imagined.

Tee Toth-Fejel
Nanotechnology and Productive Nanosystems for the U.S. Military: Progress and Implications
A survey of recent and ongoing nanoscale research at government defense contractors shows continual improvements that will lead to high-performance equipment for warfighters. Continued progress in nanoscale structures, devices, machines, and systems will lead to Productive Systems, and this direction is most notable in DARPA's Tip-Based Nanofabrication program. Defense-oriented research in nanotechnology, while currently aimed at clothing and other external gear, will eventually end up inside the bodies of warfighters, with a wide variety of implications. The ethical evaluation of these implications depends on non-provable assumptions about reality, and the most important relevant issues have been discussed by philosophers for millennia: the nature of the human person and the ethics of war.

Enhancing Stem Cell Production for Miraculous Bone Healing in Less than Half the Time

The drug teriparatide (Forteo), approved by the FDA in 2002 for the treatment of osteoporosis, appears also to boost bone stem cell production, enabling "miraculous bone healing".

Astute observations led a team of clinicians and researchers to uncover how this drug can also boost our bodies' bone stem cell production to the point that adults' bones appear to have the ability to heal at a rate typically seen when they were young kids.

"The decreased healing time is significant, especially when fractures are in hard-to-heal areas like the pelvis and the spine, where you can't easily immobilize the bone - and stop the pain," Bukata added. "Typically, a pelvic fracture will take months to heal, and people are in extreme pain for the first eight to 12 weeks. This [healing] time was more than cut in half; we saw complete pain relief, callus formation, and stability of the fracture in people who had fractures that up to that point had not healed."

When a fracture occurs, a bone becomes unstable and can move back and forth creating a painful phenomenon known as micromotion. As the bone begins healing it must progress through specific, well-defined stages. First, osteoclasts - cells that can break down bone - clean up any fragments or debris produced during the break. Next, a layer of cartilage - called a callus - forms around the fracture that ultimately calcifies, preventing the bony ends from moving, providing relief from the significant pain brought on by micromotion.

Only after the callus is calcified do the bone forming cells - osteoblasts - begin their work. They replace the cartilage with true bone, and eventually reform the fracture to match the shape and structure of the bone into what it was before the break.

According to Puzas, teriparatide significantly speeds up fracture healing by changing the behavior and number of the cartilage and the bone stem cells involved in the process.

"Teriparatide dramatically stimulates the bone's stem cells into action," Puzas said. "As a result, the callus forms quicker and stronger. Osteoblasts form more bone and the micromotion associated with the fracture is more rapidly eliminated. All of this activity explains why people with non-healing fractures can now return to normal function sooner."

"I had patients with severe osteoporosis, in tremendous pain from multiple fractures throughout their spine and pelvis, who I would put on teriparatide," said Bukata. "When they would come back for their follow-up visits three months later, it was amazing to see not just the significant healing in their fractures, but to realize they were pain-free - a new and welcome experience for many of these patients."

Bukata began prescribing teriparatide to patients with non-healing fractures, and was amazed at her findings: 93 percent showed significant healing and pain control after being on teriparatide for only eight to 12 weeks. And in the lab, Puzas began to understand how teriparatide stimulates bone stem cells into action.

PG&E Signs Space-Based Solar Power Contract with Startup Solaren

Pacific Gas and Electric Co. has agreed to buy power from a startup company that wants to tap the strong, unfiltered sunlight found in space to solve the growing demand for clean energy.

Sometime before 2016, Solaren Corp. plans to launch the world's first orbiting solar farm. Unfurled in space, the panels would bask in near-constant sunshine and provide a steady flow of electricity day and night.

PG&E asked the California Public Utilities Commission on Friday for permission to buy 200 megawatts of electricity from Solaren's orbiting power plant when and if it's built. That's enough electricity for 150,000 homes.

"We're convinced it's a very serious possibility that they can make this work," said PG&E spokesman Jonathan Marshall. "It's staggering how much power is potentially available in space. And I say 'potentially' because a lot remains unknown about the cost and other details."

Many of the project's details remain under wraps, and others haven't been decided yet, said Cal Boerman, Solaren's director of energy services. For example, Solaren still hasn't decided whether to use crystalline silicon solar cells or newer, thin-film cells that weigh less than silicon but aren't as efficient.

But the young company, a collection of aerospace engineers based in Manhattan Beach (Los Angeles County), has the technology and expertise to make it work, Boerman said.

There is an interview with Solaren CEO Gary Spirnak at the PG&E blog Next 100.

Solaren is a California C Corporation that formed in 2001 and is based in Manhattan Beach, CA. Solaren was formed by a team of satellite engineers and space scientists to build a space energy company to generate and distribute electricity at competitive prices from Space Solar Power (SSP) stations in geosynchronous orbit. Solaren currently consists of about ten engineers and scientists, but plans to grow to more than 100 over the next twelve months.

Solaren's patented SSP plant design uses satellites in Earth orbit to collect solar energy in space and generate power, which is transmitted to the ground receive station for conversion to electricity for delivery to PG&E. Specifically Solaren's SSP satellites use solar cells in space to convert the sun's energy to electricity. This electricity powers high efficiency generator devices, known as solid state power amplifiers (SSPA). The SSPA devices on-board the satellite convert electricity into RF energy. Next the SSP satellite, using the RF energy and the satellite's antenna, directs and transmits the RF power to the California ground receive station. The ground receiver directly converts the RF energy to electricity, and uses the local power grid for transmission to the PG&E delivery point.
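Each stage in the chain just described (solar cells, SSPA, RF beam, ground receiver) loses some energy, so the end-to-end efficiency is the product of the stage efficiencies. The numbers below are assumed round figures for illustration only; Solaren has not published its stage efficiencies.

```python
# End-to-end efficiency of the SSP conversion chain described above.
# All stage efficiencies are ASSUMED illustrative values, not Solaren's figures.
stages = {
    "solar cells (space)":    0.30,  # assumed
    "SSPA (DC -> RF)":        0.80,  # assumed
    "beam capture at ground": 0.90,  # assumed
    "RF -> electricity":      0.85,  # assumed
}

chain = 1.0
for name, eff in stages.items():
    chain *= eff
    print(f"after {name}: {chain:.1%}")

print(f"end-to-end: {chain:.1%}")  # ~18% with these assumed values
```

The point of the sketch is that even with individually high stage efficiencies, the delivered fraction of collected solar energy is modest, which is why the near-constant orbital sunlight matters so much to the economics.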

The SSP pilot plant satellites are designed to use existing launch capabilities. No new space launch vehicle capabilities need to be developed to launch our satellites into space. The SSP pilot plant design for the power satellites and ground receive station will be built and validated and the power satellites prepared for shipment to the launch site during the construction phase. At the launch site, the power satellites are launched into space using existing launch vehicle capabilities and moved to their final orbital positions.
When will Solaren be able to provide more details about your SSP pilot plant project?

A: We are currently supporting the CPUC (California Public Utilities Commission) regulatory filing process, and plan to provide additional details about our SSP pilot plant project in early Summer 2009.

Cleantech coverage of the deal and plans.

Solaren would deploy a solar array into space to beam an average of 850 gigawatt hours (“GWh”) for the first year of the term, and 1,700 GWh per year over the remaining term, according to a filing to the PUC. Under the agreement, Solaren, a startup, would design, build and launch the solar array into space, operate the satellite and deliver the electricity to PG&E's grid.
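The 1,700 GWh/yr figure in the filing is consistent with the 200 MW contract running nearly around the clock, which is the key advantage of an orbiting plant over ground solar. A quick check:

```python
# Consistency check: 200 MW delivered nearly continuously is roughly
# the 1,700 GWh/yr in the PUC filing.
power_mw = 200
hours_per_year = 8760
gwh_per_year = power_mw * hours_per_year / 1000
print(gwh_per_year)  # 1752.0 GWh, close to the 1,700 GWh contracted
```

A ground solar farm with a ~25% capacity factor would need about four times the nameplate capacity to deliver the same annual energy.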

Solaren is based in Manhattan Beach, Calif., and is seeking investors for a private stock placement to raise "billions" of dollars for its business plan, said Gary Spirnak, CEO of Solaren to Cleantech Group.

Solaren is in talks with investment trusts in Europe and the United States, with which it hopes to finalize investment agreements by the summer, said Spirnak.

Next would come engineering and design of the solar plant that would orbit in space, catch the sun's rays and send them down to a ground station on Earth, he continued.

While Solaren would provide 200MW of electricity to PG&E, according to the filing with the PUC, Solaren anticipates generating a total 1,000MW from its satellite, said Spirnak.

Hobbyspace is also tracking this development.

The Solaren company website is just a logo and a contact email as of April 14, 2009.

European Space Agency Hypersonic Plane Work: FAST 20XX and Lapcat II

European Space Agency (ESA) activities in the non-space themes of the EC's programmes include Long-Term Advanced Propulsion Concepts and Technologies (LAPCAT) II and Future High-Altitude High-Speed Transport (FAST) 20XX. LAPCAT II is a logical follow-up to LAPCAT I, whose objective was to reduce the duration of antipodal flights (flights between two diametrically opposite points on the globe) to less than four, or even less than two, hours.

Flight Global has coverage.

Funding for a hypersonic spaceliner is at a few million euros per year, and it is not expected to be completed until 2075. A date that far in the future means toying with the idea until it becomes obvious it can work, or until there is a need to catch up with another country; in the meantime, producing some concept images and analyzing some technical problems. A suborbital competitor to Virgin Galactic is likely for 2015.

German aerospace center DLR has been studying the rocket spaceliner concept.

April 13, 2009

Tracking China's Move to Second-Largest Economy on an Exchange-Rate Basis

The move to second place is not complete yet; in the near term it depends primarily on weakness in the Japanese yen. If the yen moves to 110 to the US dollar over the next couple of months, or to 105 to the US dollar later in the year, and the Chinese yuan stays about constant, then China's GDP will be larger than Japan's on an exchange-rate basis.
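The comparison works by converting each country's nominal GDP in local currency to dollars at the prevailing exchange rate. A sketch of the mechanics, using approximate 2008 GDP figures (illustrative assumptions, not official data):

```python
# Exchange-rate GDP comparison. The GDP figures are APPROXIMATE 2008
# nominal values in local currency, used only to show the mechanics.
japan_gdp_trillion_yen = 505    # assumed approximate 2008 figure
china_gdp_trillion_yuan = 31.4  # assumed approximate 2008 figure
yuan_per_usd = 6.83             # roughly constant, per the text

def usd_gdp(local_gdp, rate_per_usd):
    """Convert a local-currency GDP to US dollars at the given rate."""
    return local_gdp / rate_per_usd

for yen_per_usd in (100, 105, 110):
    japan = usd_gdp(japan_gdp_trillion_yen, yen_per_usd)
    china = usd_gdp(china_gdp_trillion_yuan, yuan_per_usd)
    leader = "China" if china > japan else "Japan"
    print(f"at {yen_per_usd} yen/USD: Japan ${japan:.2f}T, "
          f"China ${china:.2f}T -> {leader} larger")
```

With these assumed figures the crossover happens right around 110 yen to the dollar, which is why the near-term ranking hinges on the yen rather than on Chinese growth.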

The USD-Yen exchange rate can be seen at this link

Encouraging economic data flowed out of China over the weekend and on Monday. Domestic lending hit a record in March, in a signal that stimulus spending is trickling through the economy, and exports fell less than expected (See "And Beijing Said: Let There Be Loans"). But Fan Gang, a member of the central bank monetary advisory committee, warned that the Chinese economy has not yet 'bottomed out,' and still needs one or two years to rebound. He said he expected the global recession to last for three or four years, according to the official Shanghai Securities News.

Some technical analysis of the USD and Yen suggests further weakness in the Yen with resistance in its exchange rate slide at 103.3

China is planning a new economic stimulus package targeted at boosting consumption, the China Securities Journal reported on Monday, citing a senior official of the State Information Center, which is affiliated with the country's top planning agency.

Japan is trying to fight off a slide toward deflation.

This site had looked at China's and Japan's GDP situation and forecast.

D-Wave 128-Qubit Adiabatic Quantum Computer Chip Wired Up

Coal to Natural Gas Process Will be Piloted in China

GreatPoint Energy has a coal-to-natural-gas catalytic process that would make coal cleaner than natural gas as a power source.

GreatPoint Energy's current plan is to reach full-scale plants in 2022.

The company's analysis found that it can produce natural gas with a lower carbon footprint than extracting natural gas from the ground, assuming a carbon storage site can be found. Its financial target is making gas at between $4 and $5 per million British thermal units (MMBtu), which is in the range of today's prices but lower than natural gas prices before the global recession hit.

Massachusetts Governor Deval Patrick played a key role in getting its demonstration facility sited and quickly approved. It took just 18 months to build and get its plant online.

Its next project is to open a larger pilot facility in China at a coal-fired power plant with Datang Huanyin Electric Power, one of the biggest polluters on the planet. "If we can show (Datang) that they can make more money being clean rather than dirty, then we can make a real impact," says GreatPoint Energy CEO Andrew Perlman.

These facilities aren't cheap: the pilot plant in China will cost between $100 million and $200 million, financed primarily by Datang. A full-scale operation would cost $1 billion to build.

In its demonstration plant in China, GreatPoint's technology will convert 1,500 tons of coal a day into natural gas.

Carnival of Space 98

Smaller and More Powerful Particle Accelerators

In computer simulations, proton-driven plasma-wakefield acceleration (PWFA) accelerated electron bunches to 500 GeV in 300 meters of plasma. Compare that to the proposed $7 billion International Linear Collider (ILC), which will need at least nine miles to hit the same target, and SLAC's linear accelerator, which needed 10 times the distance to reach a tenth of the energy. Combining the new proton-driven PWFA with the LHC's powerful proton beam, Caldwell says, it might be possible to accelerate electrons to several TeV, so that physicists can have their power and their precision too.
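Those distances translate directly into average accelerating gradients. A sketch using only the figures quoted above (taking 1 mile as 1609 m, and reading SLAC's "10 times the distance, a tenth of the energy" as relative to the 300 m simulation):

```python
# Average accelerating gradients implied by the energy/length figures above.
def avg_gradient_gv_per_m(energy_gev, length_m):
    return energy_gev / length_m

pwfa = avg_gradient_gv_per_m(500, 300)      # simulated proton-driven PWFA
ilc = avg_gradient_gv_per_m(500, 9 * 1609)  # ILC: ~9 miles for 500 GeV
slac = avg_gradient_gv_per_m(50, 10 * 300)  # SLAC: 10x the distance, 1/10 the energy

print(f"PWFA: {pwfa:.2f} GV/m")
print(f"ILC:  {ilc * 1000:.0f} MV/m")
print(f"SLAC: {slac * 1000:.0f} MV/m")
print(f"PWFA gradient is ~{pwfa / ilc:.0f}x the ILC's")
```

The simulated plasma gradient of about 1.7 GV/m is what lets a 500 GeV machine shrink from miles to hundreds of meters.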

Perhaps the biggest issue is the proton bunch length, which must be very small to allow the electrons to overshoot and create the wakefield. "It's easy to do for electron bunches," says co-author Frank Simon of the Max Planck Institute. "But hadron colliders have bunches that are centimeters in length. We need bunches that are a hundred micrometers in length. We're still looking at how to test the idea with present technology."

Proton-driven plasma-wakefield acceleration

Plasmas excited by laser beams or bunches of relativistic electrons have been used to produce electric fields of 10–100 GV m⁻¹. This has opened up the possibility of building compact particle accelerators at the gigaelectronvolt scale. However, it is not obvious how to scale these approaches to the energy frontier of particle physics—the teraelectronvolt regime. Here, we introduce the possibility of proton-bunch-driven plasma-wakefield acceleration, and demonstrate through numerical simulations that this energy regime could be reached in a single accelerating stage.

There are only two ways for accelerators to increase the power: create a stronger electric field, or increase the distance over which particles are accelerated. We've already pretty much maxed out the strength of electric fields that can be contained without ripping electrons off the walls and essentially melting the inside of the accelerator. The other option is to create ever larger accelerators.

While proton accelerators are more powerful because of the continuous circular acceleration, electron accelerators are important because they are more precise. This is where plasma-wakefield acceleration may be able to help.

This radically new kind of acceleration skirts the electric field issue by using plasma — gas in which electrons have been ripped from their nuclei. This soup of ionized gas can handle electric fields about a thousand times stronger than can conventional accelerators, meaning the accelerators can potentially be a thousand times shorter.

In PWFA, tightly-packed bunches of electrons are fired into the plasma like bullets from a machine gun, blowing the plasma's electrons away in all directions leaving the heavier plasma nuclei behind. These positively charged nuclei form a bubble of electron-free plasma behind the particle bullet. The negatively charged expelled electrons are drawn back toward the positively charged bubble.

But as the electrons snap back toward the bubble, they overshoot their original positions. So the particle bullet leaves behind a wake of mispositioned electrons, creating an intense electric field. By riding in this wake, the electrons can reach very high energies in a very short distance.

In 2007, a collaboration between SLAC, UCLA, and USC demonstrated PWFA's potential: in a single meter of plasma, they were able to boost electrons zooming down SLAC's linear track to twice the energy they can achieve over the entire two-mile-long accelerator.
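The average accelerating gradients implied by those figures are easy to compare. Assuming a beam energy of roughly 42 GeV (the energy SLAC's linac delivered in that experiment) over the two-mile (~3.2 km) linac, versus a comparable energy gain in about one meter of plasma:

```python
# Rough average-gradient comparison based on the figures quoted above.
# The 42 GeV and 3.2 km numbers are assumptions consistent with the text,
# not exact experimental parameters.
slac_energy_gev = 42.0   # energy from the full two-mile linac
slac_length_m = 3.2e3    # ~two miles
pwfa_gain_gev = 42.0     # energy roughly doubled in the plasma stage
pwfa_length_m = 1.0

conventional_gradient = slac_energy_gev / slac_length_m * 1e3  # MV/m
pwfa_gradient = pwfa_gain_gev / pwfa_length_m * 1e3            # MV/m

print(f"conventional: ~{conventional_gradient:.0f} MV/m")
print(f"PWFA:         ~{pwfa_gradient:.0f} MV/m "
      f"({pwfa_gradient / conventional_gradient:.0f}x)")
```

The plasma stage works out to tens of GV/m, more than a thousand times the average gradient of the conventional linac.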

Direct Conversion of Nuclear Power to Electricity and Powerplant Efficiency Review

A method that eliminates the radiation damage problem is the two-step Photon Intermediate Direct Energy Conversion (PIDEC) method, which uses the efficient generation of photons from the interaction of particulate radiation with a fluorescer medium. The photons are then transported to wide-band-gap photovoltaic cells, where electrical current is generated. PIDEC holds the promise of 40% energy conversion efficiency in a single cycle, and can be applied both to large power-generation systems and to small-scale nuclear batteries based on radioisotopes (Radioisotope Energy Conversion System, RECS).
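Because PIDEC is a two-step chain, its overall efficiency is the product of the efficiencies of each stage. The source does not give a stage-by-stage breakdown, so the numbers below are purely illustrative, chosen only to show how a ~40% single-cycle figure could decompose:

```python
# Illustrative PIDEC efficiency budget. All three stage efficiencies are
# hypothetical placeholders, not values from the research.
eta_fluorescer = 0.80  # particulate radiation -> photons in the fluorescer
eta_transport = 0.90   # photon transport to the photovoltaic cells
eta_pv = 0.55          # narrow-band photons -> electricity in wide-band-gap PV

eta_total = eta_fluorescer * eta_transport * eta_pv
print(f"overall conversion efficiency: {eta_total:.1%}")
```

The structure of the calculation is the point here: a high overall figure requires every stage in the chain to be efficient, which is why matching the fluorescer's emission band to the photovoltaic cell's band gap matters.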

The direct energy conversion system would be more efficient than current steam cycle systems and would not use water. The system that the University of Missouri and U.S. Semiconductor Corp are developing is mechanically simple, potentially leading to more compact, more reliable and less expensive systems. Mark Prelas is the lead researcher.

A typical light-water nuclear power plant offers thermal efficiency of around 35%, while a modern coal-fired plant with a supercritical boiler tops out at 44%. New high-temperature nuclear reactors could also reach those levels of efficiency.

The latest gas turbines offer thermal efficiencies in the 40% range, with one recent model reportedly reaching 46%. These values refer to simple-cycle operation, where the turbine exhaust is not used further. The real advantage comes from feeding gas turbine exhaust into a standard steam turbine in a combined-cycle power plant; this is where new-generation gas turbines can become the driving engine for 60%+ overall thermal efficiency.
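The 60%+ figure follows from how the two cycles stack: the steam turbine recovers work from the heat the gas turbine rejects. A minimal sketch of the idealized combined-cycle formula (it neglects heat-recovery losses, so it is an upper bound; the 35% steam-cycle figure is an assumption taken from the plant efficiencies above):

```python
def combined_cycle_efficiency(eta_gas: float, eta_steam: float) -> float:
    """Idealized combined-cycle efficiency: the steam bottoming cycle
    converts a fraction eta_steam of the heat rejected by the gas turbine."""
    return eta_gas + (1.0 - eta_gas) * eta_steam

# A 46%-efficient gas turbine feeding a ~35%-efficient steam cycle:
print(f"{combined_cycle_efficiency(0.46, 0.35):.1%}")
```

This idealized bound comes out near 65%; real plants land around 60% once heat-recovery and other losses are counted, consistent with the figure quoted above.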

Japan's HAL Exoskeleton, Which Can Boost Strength Ten Times, Is Going into Mass Production at US$4,200

H Plus Magazine reports that the Japanese HAL robotic exoskeleton is going into mass production at a rate of 400 suits per year, priced at US$4,200.

The suit can boost strength by ten times.

Lockheed is deploying the HULC lower extremity exoskeleton for the US military.

April 12, 2009

Reviewing Graphene Production

There are a number of methods for generating graphene and chemically modified graphene from graphite and derivatives of graphite, each with different advantages and disadvantages. A new approach is to use colloidal suspensions to produce new materials composed of graphene and chemically modified graphene. This approach is both versatile and scalable, and is adaptable to a wide variety of applications.

Graphene has been made by four different methods. The first was chemical vapour deposition (CVD) and epitaxial growth, such as the decomposition of ethylene on nickel surfaces. These early efforts (which started in 1970) were followed by a large body of work by the surface-science community on 'monolayer graphite'. The second was the micromechanical exfoliation of graphite. This approach, which is also known as the 'Scotch tape' or peel-off method, followed on from earlier work on micromechanical exfoliation from patterned graphite. The third method was epitaxial growth on electrically insulating surfaces such as SiC and the fourth was the creation of colloidal suspensions.

Properties of Graphene

The remarkable properties of graphene reported so far include high values of its Young's modulus (approx 1,100 GPa), fracture strength (125 GPa), thermal conductivity (approx 5,000 W m⁻¹ K⁻¹), mobility of charge carriers (200,000 cm² V⁻¹ s⁻¹) and specific surface area (calculated value, 2,630 m² g⁻¹), plus fascinating transport phenomena such as the quantum Hall effect. Graphene and chemically modified graphene (CMG) are promising candidates as components in applications such as energy-storage materials, 'paper-like' materials, polymer composites, liquid crystal devices and mechanical resonators.
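The quoted specific surface area of ~2,630 m² g⁻¹ is not an empirical fit but follows directly from graphene's geometry: each carbon atom occupies a fixed area in the honeycomb lattice, and both faces of the one-atom-thick sheet are exposed. A quick check from the C–C bond length:

```python
import math

A_CC = 1.42e-10  # C-C bond length in graphene, m
# In the honeycomb lattice, the area per carbon atom is (3*sqrt(3)/4) * a^2.
AREA_PER_ATOM = 3 * math.sqrt(3) / 4 * A_CC**2   # m^2 per atom
MASS_PER_ATOM = 12.011 * 1.66054e-27             # kg per atom (12.011 u)

# Factor of 2: both faces of the single-atom sheet are accessible.
ssa = 2 * AREA_PER_ATOM / MASS_PER_ATOM / 1000   # m^2 per gram
print(f"specific surface area ~ {ssa:.0f} m^2/g")
```

The result lands within a few m²/g of the calculated value cited above, confirming it is a geometric limit for a fully exfoliated single layer.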

Comparison of a set of chemical approaches to produce colloidal suspensions of CMG sheets

Challenges and Perspectives

Colloidal suspensions are of great importance in the preparation of many types of materials, and suspensions of chemically modified graphenes (CMGs) hold great promise in this regard. Looming issues for wide-scale applicability include scalability (yield, quantity, cost, etc.), the safety of the solvents used, and the removal (if necessary) of residual solvents or stabilizers from the product material. It is also worth emphasizing that although colloidal suspensions are normally regarded as stable only if they persist for very long periods of time, dispersions of CMGs need only remain stable long enough to be processed into something else.

Another critical issue is related to our understanding of the chemical structure(s) of CMG sheets and their reaction mechanisms. The better our knowledge of the chemistry of these materials, the better the graphene-based composites, thin films, paper-like materials and so on that we can make. For example, the prospects for sensors based on CMG will hinge on our ability to chemically tune the CMG for each sensing modality.

So far, the graphenes derived by the reduction of graphene oxide have contained a significant amount of oxygen and, possibly, significant numbers of defects. Thermal annealing of reduced graphene oxide sheets has improved their properties, and finding routes for the complete restoration of the sp² carbon network of pristine graphene is of interest. (The graphenes produced from graphite intercalation compounds or expandable graphite may have fewer defects, although they are produced in lower yields and are less amenable to functionalization than graphenes derived from graphene oxide.)

Finally, we mention the development of new reaction routes and starting materials as an alternative. The worldwide supply of natural graphite has been estimated at 800,000,000 tonnes. If graphene or very thin platelets of multilayer graphene could be produced on a large scale by CVD from various precursors, new routes for creating colloidal suspensions might also be found, and the supply of graphene/few-layer graphene might be enormously increased.
