September 19, 2009

Carnival of Space 121 - Our Moon, Jupiter's Moons, Black Holes and Space Technology Now and in the Future

1. Centauri Dreams sends "Lightcraft: A Laser Push to Orbit," about a laser-powered launch technology that could drastically change the economics of getting to LEO (low Earth orbit). Experiments are continuing.

2. Out of the Cradle covers the Lunar Lander Challenge and the successes and vehicles of Armadillo Aerospace.

3. The Planetary Society blog covers some of the first results from the Lunar Reconnaissance Orbiter.

4. Weird Warp looks at the US space program's best hope for putting a man back on the moon, along with some alternatives.

5. Orbital Hub looks at PROBA-2, part of an ESA effort called the In-Orbit Technology Demonstration Program, which is dedicated to demonstrating innovative technologies.

Among the new equipment and technologies demonstrated by PROBA-2 are new models of star trackers, GPS receivers, and reaction wheels, a new type of lithium-ion battery, an advanced data and power management system, composite carbon-fibre and aluminum structural panels, and magnetometers. PROBA-2 also hosts a digital Sun-sensor, an experimental solar panel, and a xenon gas propulsion system.

6. 21st Century Waves talks about Japan's ambitious space-based solar power plans.

Japan has announced its spectacular new $21 billion space-based solar power initiative. According to Japan's Institute of Energy Economics, Mitsubishi Electric Corp. and IHI Corp. will lead a 15-company team that will build the first major solar power plant in space. Via microwaves, it will eventually beam enough energy back to Japan for nearly 300,000 houses.

Black Holes, Novae and Stars
7. The Chandra X-ray Observatory images a black hole that is emitting jets of radio emission enhanced with iron.

8. Astro Swanny covers a dwarf nova outburst.

9. Dynamics of Cats looks at the exoplanet of the star HD 61005.

Jupiter's Moons
10. Phil Plait of Bad Astronomy looks at how Jupiter's moons light up Jupiter's aurorae.

11. An article covers a paper which discusses spectroscopic observation of Jupiter's moon Io acquired using NASA's Infrared Telescope Facility (IRTF) in Hawaii during five eclipse reappearances in April, May, and June 2004.

Our moon and Apollo Related
12. Cheap Astronomy podcast on the Celestial Sphere

Cheap Astronomy podcast on Ed White's glove

13. Beyond Apollo looks at the LEM (Lunar Excursion Module) radio lab

14. Collect Space tracks down Apollo moon rocks:
Where Today Are The Apollo 11 Lunar Sample Displays?

Where Today Are The Apollo 17 Goodwill Moon Rocks?

15. Habitation Intention talks about the urgency of space habitation.

Astronomy for Newcomers and Amateurs

16. Spacewriter's Ramblings talks about whether young astronomers should focus on outreach to the public now or later.

17. Many people enjoyed seeing the space shuttle Discovery and the International Space Station flying across the sky together recently, and it seems that "ISS spotting" is becoming something of a spectator sport for experienced and new skywatchers alike.

Cumbrian Sky has a new "Beginner's Guide" to ISS spotting, complete with recommendations for which websites to visit for pass predictions, plus tips and hints on how to see the ISS, and other spacecraft, from where you live.

18. Simostronomy has a personal tale of a recent late night as an amateur astronomer in his observatory.

19. Alice's Astroinfo looks at the September and October sky of 2009 (planets and constellations).

Canada Related Space
20. Commercial Space has a collection of Canadian-focused space news.

21. Mang's Bat has links to four star parties around Ontario, Canada.

Propulsion Based on Exploiting Mach's Effect
22. A second article with answers to various questions in the comments on Mach effect propulsion.

The questions were spurred by the first Nextbigfuture article on Mach effect propulsion, which has an interview with Paul March.

Mach effect investigation could be a path to the unification of general relativity and quantum mechanics. Here are links to about 20 hours of Stanford lectures on general relativity and another 20 on quantum mechanics, plus a short video from one of the investigators of the Mach effect as potentially revolutionary propulsion.

If the Mach effect can be used for propulsion as envisioned, then the space travel capabilities envisioned in Star Trek, and possibly even wormholes for faster-than-light travel and communication, become possible. The work is based on solid general relativity, quantum mechanics, and an understanding of inertia, and the experiments are being carefully conducted. Successful development would be a candidate for one of the greatest accomplishments of humanity.

Late Arriving
23. The Skinny on Solar System Sizes from Music of the Spheres

Music of the Spheres reports on some graphics and tools that make it easier to grasp the relative sizes of planets, moons, and other objects in our solar system.

On a Path to Unification of General Relativity and Quantum Mechanics, Mach Effect Propulsion and a Large Collection of Videos Explaining Physics

Paul March believes that the Quantum Vacuum Fluctuation / Magnetohydrodynamics work of Harold White is the QM (quantum mechanical) based version of Dr. Woodward's M-E (Mach effect) work.

Paul March's comment: the GRT and QM worlds will ultimately be combined into a coherent "Quantum Gravity" theory that will better explain our current M-E experimental results and will lead the way to FTL interstellar flight.

The first Nextbigfuture article on Mach effect propulsion, which has an interview with Paul March.

A second article with answers to various questions in the comments on Mach effect propulsion.

[Below are links to 12 videos - about 20 hours - on general relativity from Stanford, other shorter videos on Mach effect propulsion, and six lecture videos on quantum mechanics, also from Stanford.]

If the core team of ~5 people including Dr. Woodward could work full time on developing the M-E, we think we could have an R-C controlled M-E technology demonstrator that could move itself over an air hockey table, up and running within 2-to-3 years (for about $4 million in funding).

Inertial Mass Dependency on Local Vacuum Fluctuation Mean Free Path

The paper proposes a relationship between the vacuum energy density, the light-radius of the universe, and the Planck force. The equation is proposed to infer a connection between inertial mass and an observer's light horizon. This horizon is conjectured to be the mean free path for vacuum fluctuations as seen by an observer in deep space. This fundamental relationship will then be derived from a gravitational wave equation. Once this has been derived, the results will be extended to derive an equation to calculate the effect local matter has on the mean free path of a vacuum fluctuation, and hence the local vacuum energy density (vacuum fluctuation pileup). The paper will conclude by applying the theoretical framework to calculate expected thrust signals in an externally applied ExB configuration meant to induce plasma drift in the vacuum fluctuations.

More on the Dielectric Needed for Really Good Mach Effect Propulsion

Now, in any M-E device, per Andrew Palfreyman's STAIF-2006 M-E math model and a later unpublished "constrained input power" math model we created together in 2008, which are both based on Jim Woodward's M-E derivation, the magnitude of the generated M-E derived mass/energy fluctuation signal in the energy-storing dielectric is proportional to the available active dielectric mass, but inversely proportional to the density and volume of this active dielectric mass. What these three requirements translate out to is that the magnitude of the M-E delta mass/energy signal is proportional to the peak electrical and mechanical stresses applied to a given volume of the dielectric, at least until it breaks. This high dielectric stress requirement limits the maximum lifetime of the dielectric, so in any M-E device a tradeoff between performance and lifetime will have to be made. Also of note is that since the M-E signal is expressed in a cyclic manner that is in counter-(180 deg)-phase to the cap's self-generated electrostrictive signal, using a dielectric material with a small electrostrictive constant is a big plus. Otherwise the M-E signal is cancelled out by the electrostrictive signal (E-S) until the M-E signal is driven large enough to overwhelm the E-S signal. This can happen because the M-E signal's expression is much more nonlinear with input power than the E-S signal.

Operationally, the controlling M-E parameters of interest are the following. The dielectric's M-E signal is proportional to the summation of the applied AC and DC bulk (relative to the distant stars) accelerations and the square of the da/dt "jerk" accelerations. Desired peak bulk accelerations should be measured in thousands of gees or higher. Next, the M-E signal is proportional to the capacitance of the accelerated M-E cap dielectric, the cube of the applied operating voltage, the cube of the operating frequency, and the square of the active dielectric constant, but varies inversely with the dielectric's loss factor (i.e., lower AC equivalent resistance (E-R) is better), which controls the dissipated power and temperature rise of the caps for a given input power. If making a solid-state Mach-Lorentz thruster (MLT), the magnitude of the rectified unidirectional force is proportional to the volumetric crossed B-field in the dielectric and the thickness of the dielectric in the direction of the applied E-field, which increases the lever arm of the applied crossed B-field. MLTs also require the use of a single cap dielectric layer to preclude the Lorentz force cancellation issues that arise with standard multilayer capacitors, where the applied E-field reverses direction in each layer at a given point in time.
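The proportionalities described above can be sketched as a relative figure of merit for comparing operating points. This is a minimal sketch assuming only the stated scalings (signal proportional to capacitance, voltage cubed, frequency cubed, dielectric constant squared, and inversely to loss factor); it computes no absolute thrust, and all numeric values are illustrative, not from any experiment.

```python
# Relative (dimensionless) figure of merit for the claimed M-E signal,
# following only the scaling relations stated in the text. This compares
# candidate operating points; it does not compute absolute magnitudes.

def me_figure_of_merit(cap_farads, volts, freq_hz, eps_r, loss_tangent):
    # signal ~ C * V^3 * f^3 * eps_r^2 / loss
    return (cap_farads * volts**3 * freq_hz**3 * eps_r**2) / loss_tangent

# Two hypothetical operating points differing only in drive voltage:
baseline = me_figure_of_merit(1e-9, 1_000, 1e6, 1_000, 0.005)
upgraded = me_figure_of_merit(1e-9, 2_000, 1e6, 1_000, 0.005)

# Doubling drive voltage alone should give a 2**3 = 8x larger signal.
print(round(upgraded / baseline, 3))  # 8.0
```

The cubic voltage and frequency dependence is why the spec list that follows pushes toward tens of MHz and tens of kilovolts.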

Given the above M-E output signal's optimization parameter space, the desired characteristics of operational M-E energy storage devices, AKA capacitors, are as follows:

1. Relative dielectric constant (e-r)= 1,000 or greater but depends on operating voltage.
2. Dielectric density should be less than 5.6 grams/cc (BaTiO3) and preferably much less.
3. Operating frequency should be optimized for the 10-to-50 MHz range.
4. Dielectric Loss Tangent should be less than 0.5% at the operating frequency.
5. Operating voltage should be up to 100.0 kV-p (See EEStor process), but depends on obtained e-r. Higher e-r allows lower peak voltage for a given energy storage value.
6. Operating times should be measured in thousands to tens of thousands of hours. This will require using low-k plastic film caps or higher-k single crystal or nano-crystal caps.
7. For MLTs the dielectric magnetic permeability should be 10 or greater in a single layer arrangement.
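The seven desired characteristics can be encoded as a simple pass/fail checklist. This is a sketch only: the thresholds come straight from the list above, while the function name and the candidate dielectric values are hypothetical.

```python
# Pass/fail checklist for the desired M-E capacitor characteristics
# listed above. Thresholds are taken from the list; candidate values
# below are made up for illustration.

def meets_me_cap_specs(eps_r, density_g_cc, freq_mhz, loss_pct,
                       voltage_kv, lifetime_hours,
                       permeability=None, for_mlt=False):
    ok = (eps_r >= 1000 and            # item 1
          density_g_cc < 5.6 and       # item 2 (BaTiO3 density as ceiling)
          10 <= freq_mhz <= 50 and     # item 3
          loss_pct < 0.5 and           # item 4
          voltage_kv <= 100.0 and      # item 5
          lifetime_hours >= 1000)      # item 6
    if for_mlt:                        # item 7: MLTs need permeability >= 10
        ok = ok and permeability is not None and permeability >= 10
    return ok

# A hypothetical candidate dielectric:
print(meets_me_cap_specs(eps_r=2000, density_g_cc=4.0, freq_mhz=25,
                         loss_pct=0.3, voltage_kv=50,
                         lifetime_hours=5000))  # True
```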

STAIF 2006: Andrew Palfreyman on Reactionless Drives

25:57
Inventor and engineer Andrew Palfreyman talks about the Mach-Lorentz Thruster, which he helped present at the STAIF 2006 conference as the world's first true reactionless drive. His research could lead to future Star Trek-style propulsion based on a novel application of conventional physics, and he discusses a set of replication results published by the American Institute of Physics in support of the claimed discovery. Palfreyman is part of a joint research team including Paul March, Dr. James Woodward, and Dr. Martin Tajmar of the ESA, who are collaborating on several replications to ensure top-quality laboratory research to confirm the effect.

Some other STAIF presentation videos online

Dr. Martin Tajmar Interview: Warp-Drives, Antigravity, and FTL

18:03
In our exclusive STAIF 2006 interview, Dr. Martin Tajmar joins us to speak about his theoretical research into FTL, Warp-Drive, and Antigravity technology. Tajmar works at ARC Seibersdorf in Austria, which subcontracts to the European Space Agency (ESA) for a number of innovative new technology projects.

Greg Meholic on FTL Propulsion and the STAIF 2007 Conference

14:48
This STAIF 2007 presentation by aerospace engineer Greg Meholic provides an overview of this year's conference experience, with updates on Meholic's own theoretical work in a fluid-dynamics model of physics and his research in Breakthrough Propulsion Physics. Meholic is a communications session co-chair for Section-F of the STAIF Conference, focusing on emerging space propulsion physics technologies.

General Theory of Relativity

Lecture 1 of Leonard Susskind's Modern Physics concentrating on General Relativity. Recorded September 22, 2008 at Stanford University. 1 hour 38 minutes

Einstein's General Theory of Relativity | Lecture 2

Einstein's General Theory of Relativity | Lecture 3

Einstein's General Theory of Relativity | Lecture 4

Stanford's Felix Bloch Professor of Physics, Leonard Susskind, discusses covariant and contravariant indices; tensor arithmetic, algebra and calculus; and the geometry of expanding spacetime.

Einstein's General Theory of Relativity | Lecture 5

Leonard Susskind's Modern Physics concentrating on General Relativity.

Einstein's General Theory of Relativity | Lecture 6

Geodesics and geodesic motion through spacetime.

Einstein's General Theory of Relativity | Lecture 7

Einstein's General Theory of Relativity | Lecture 8

Einstein's General Theory of Relativity | Lecture 9

Einstein's General Theory of Relativity | Lecture 10

Einstein's General Theory of Relativity | Lecture 11

Einstein's General Theory of Relativity | Lecture 12

Lecture 1 | Modern Physics: Quantum Mechanics (Stanford)

Lecture 1 of Leonard Susskind's Modern Physics course concentrating on Quantum Mechanics. Recorded January 14, 2008 at Stanford University.

This Stanford Continuing Studies course is the second of a six-quarter sequence of classes exploring the essential theoretical foundations of modern physics. The topics covered in this course focus on quantum mechanics.

Lecture 2 | Modern Physics: Quantum Mechanics (Stanford)

Lecture 4 | Modern Physics: Quantum Mechanics (Stanford)

Lecture 7 | Modern Physics: Quantum Mechanics (Stanford)

The rest of the course's 10 quantum mechanics lectures are available through this link.

Relativity Lorentz Transformation Equations Part 1 of 2

List of other online courses from Stanford.

9 lectures on Quantum entanglement (course 1)

8 videos from course 3 on Quantum Entanglement

Stanford university on Youtube

Korean Scientists Claim Breakthrough in Spintronics

South Korean researchers reported the first-ever creation of a spin field-effect transistor, which had previously existed only in theory, claiming it as a breakthrough in the emerging field of spintronics.

Short for spin-based electronics, and also called magnetoelectronics, spintronics is an up-and-coming technology that focuses on harnessing the spin of particles, with the ultimate goal of unlocking far greater computing power and storage.

Korea Institute of Science and Technology (KIST) researchers led by Han Suk-hee described the demonstration of a spin-injected field effect transistor, which is based on a semiconducting channel with two ferromagnetic electrodes.

The transistor's basic structure of source, gate and drain is similar to the complementary metal-oxide-semiconductor (CMOS) model used for making microprocessors and other integrated circuits. However, Han's transistor is different in that the source and drain are made of ferromagnetic materials and that the injected spins are controlled by gate voltage.

Control of Spin Precession in a Spin-Injected Field Effect Transistor

Spintronics increases the functionality of information processing while seeking to overcome some of the limitations of conventional electronics. The spin-injected field effect transistor, a lateral semiconducting channel with two ferromagnetic electrodes, lies at the foundation of spintronics research. We demonstrated a spin-injected field effect transistor in a high-mobility InAs heterostructure with empirically calibrated electrical injection and detection of ballistic spin-polarized electrons. We observed and fit to theory an oscillatory channel conductance as a function of monotonically increasing gate voltage.

16 pages of supplemental information

A high electron mobility transistor with an InAs active layer is used as a spin transport channel. The epitaxial layers comprise an In0.52Al0.48As buffer layer (300 nm), an In0.52Al0.48As n-doped (4×10^18 cm−3) carrier supply layer (7 nm), an In0.52Al0.48As layer (6 nm), an In0.53Ga0.47As layer (2.5 nm), an InAs quantum well (2 nm) acting as a two-dimensional electron gas (2DEG) channel, an In0.53Ga0.47As layer (13.5 nm), an In0.52Al0.48As layer (20 nm), and an InAs capping layer (2 nm). The In0.52Al0.48As and In0.53Ga0.47As cladding layers act as potential barriers that confine electrons to the quantum well.
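As a quick sanity check, the layer stack just described can be written down and its total thickness summed. The thicknesses come straight from the text; the short role labels in parentheses are descriptive guesses where the text gives none.

```python
# Epitaxial layer stack from the supplemental information, thicknesses in nm.
# Role labels beyond what the text states are descriptive guesses.
stack = [
    ("In0.52Al0.48As buffer",            300.0),
    ("In0.52Al0.48As carrier supply",      7.0),
    ("In0.52Al0.48As (spacer)",            6.0),
    ("In0.53Ga0.47As cladding",            2.5),
    ("InAs quantum well (2DEG channel)",   2.0),
    ("In0.53Ga0.47As cladding",           13.5),
    ("In0.52Al0.48As (barrier)",          20.0),
    ("InAs cap",                           2.0),
]
total_nm = sum(thickness for _, thickness in stack)
print(total_nm)  # 353.0
```

So the whole active heterostructure above the substrate is only about a third of a micron thick, with the 2 nm InAs well buried roughly 35 nm below the surface.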

The InAs channel was defined using photolithography and Ar-ion milling.

September 18, 2009

John Cramer's Retrocausal Experiments

From a slide from 2007; the later slides below are from a 2009 talk.

From wikipedia: Retrocausality (also called retro-causation, backward causation and similar terms) is any of several hypothetical phenomena or processes that reverse causality, allowing an effect to occur before its cause.

According to Paul March from a Talk Polywell comment: Dr. Cramer's retrocausal experiment should be completed by the end of this year. And if verified it would buttress Dr. Woodward's M-E (Mach Effect) arguments and provide a path to finally merging GRT (General relativity) with QM (Quantum Mechanics).

A 36 page presentation from 2007: The UW Nonlocal Quantum Communication Experiment by John Cramer

Entanglement: The separated but "entangled" parts of the same quantum system can only be described by referencing the state of the other part.
The possible outcomes of measurement M2 depend on the results of measurement M1, and vice versa. This is usually a consequence of conservation laws.

Nonlocality: This "connectedness" between the separated system parts is called quantum nonlocality. It should act even if the system parts are separated by light years. Einstein called this "spooky action at a distance."

* A series of EPR experiments, beginning with the 1972 Freedman-Clauser experiment, have demonstrated convincingly that measurements performed on one of a pair of polarization-entangled photons affect the outcome of measurements performed on the other entangled photon.

[Einstein–Podolsky–Rosen paradox at wikipedia]

In quantum mechanics, the EPR paradox (or Einstein–Podolsky–Rosen paradox) is a thought experiment which challenged long-held ideas about the relation between the observed values of physical quantities and the values that can be accounted for by a physical theory. "EPR" stands for Einstein, Podolsky, and Rosen, who introduced the thought experiment in a 1935 paper to argue that quantum mechanics is not a complete physical theory.

The EPR paradox draws on a phenomenon predicted by quantum mechanics, known as quantum entanglement, to show that measurements performed on spatially separated parts of a quantum system can apparently have an instantaneous influence on one another.

This effect is now known as "nonlocal behavior" (or colloquially as "quantum weirdness" or "spooky action at a distance").

Simple version
Before delving into the complicated logic that leads to the 'paradox', it is perhaps worth mentioning the simple version of the argument, as described by Greene and others, which Einstein used to show that 'hidden variables' must exist.

A positron and an electron are emitted from a source, by pion decay, so that their spins are opposite; one particle’s spin about any axis is the negative of the other's. Also, due to uncertainty, making a measurement of a particle’s spin about one axis disturbs the particle so you now can’t measure its spin about any other axis.

Now say you measure the electron’s spin about the x-axis. This automatically tells you the positron’s spin about the x-axis. Since you’ve done the measurement without disturbing the positron in any way, it can’t be that the positron "only came to have that state when you measured it", because you didn’t measure it! It must have had that spin all along. Also you can now measure the positron’s spin about the y-axis. So it follows that the positron has had a definite spin about two axes – much more information than the positron is capable of holding, and a "hidden variable" according to EPR.
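The perfect anticorrelation the argument rests on can be illustrated with a toy "hidden variable" simulation, in which each pair is created with predetermined opposite spins about every axis. This is a sketch of EPR's intuition only: it reproduces the perfect anticorrelations above, but deliberately not the quantum correlations at intermediate measurement angles, which is exactly what Bell's theorem later used to rule such local models out.

```python
import random

# Toy local-hidden-variable model of the EPR pair described above:
# each electron/positron pair carries predetermined, opposite spins
# (+1 or -1) about both the x and y axes.

def make_pair():
    sx, sy = random.choice([+1, -1]), random.choice([+1, -1])
    electron = {"x": sx, "y": sy}
    positron = {"x": -sx, "y": -sy}  # opposite spins, by conservation
    return electron, positron

random.seed(0)
for _ in range(1000):
    e, p = make_pair()
    # Measuring the electron about x tells you the positron's x spin
    # exactly, without touching the positron - EPR's point.
    assert e["x"] == -p["x"] and e["y"] == -p["y"]
print("perfect anticorrelation about both axes")
```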

* It is now well established that quantum nonlocality really does “connect” the separated parts of the same quantum mechanical system (c.f. Freedman-Clauser, Aspect, etc.)

* There are several “No-Signal Theorems” in the literature (c.f. P. Eberhard, A. Shimony, …) showing that quantum nonlocal signaling is impossible, e.g., a change on one measurement has no observable effect on the other, in the absence of coincidence links.

* However, Peacock and Hepburn have argued that these “proofs” are tautological and that certain key assumptions (e.g., measurements are local) are inconsistent with the quantum formalism (e.g., Bose-Einstein symmetrization).
Therefore, the question of nonlocal signaling remains “open” (at least a crack) and should be tested.

Status of the UW Test of Nonlocal Quantum Communication with Momentum-Entangled Photon Pairs

The Cramer Symposium was held on Sept. 10-11, 2009, with several interesting presentations listed and linked from the program, including an update on the Cramer retrocausal experiments.

Tech Roundup: Super High Density IBM eDRAM, Xbox Supercomputers, and a Sharp Blue-Violet Laser That Can Enable 100 GB Blu-ray Discs

1. IBM has successfully developed a prototype of the semiconductor industry's smallest, densest and fastest on-chip dynamic memory device in next-generation, 32-nanometer, silicon-on-insulator (SOI) technology. IBM used 32-nanometer SOI technology to fabricate a test chip with embedded dynamic random access memory (eDRAM) whose transistor density is four times higher than conventional 32-nanometer SRAM, twice the density of any announced 22 nm embedded SRAM, and equal to the density expected of 15 nm SRAM.

The IBM eDRAM in 32nm SOI technology is the fastest embedded memory announced to date, achieving latency and cycle times of less than 2 nanoseconds. In addition, the IBM eDRAM uses four times less standby power (power used by the chip as it sits idle) and has up to a thousand times lower soft-error rate (errors caused by electrical charges), offering better power savings and reliability compared to a similar SRAM.

2. IBM Corporation (NYSE: IBM) today announced the industry's highest-performance, highest-throughput processor for system-on-chip (SoC) product families in the communication, storage, consumer, and aerospace and defense markets. The PowerPC 476FP operates at clock speeds in excess of 1.6 GHz and delivers 2.5 Dhrystone MIPS (million instructions per second) per MHz, over two times the performance of IBM's most advanced embedded core currently available for the original equipment manufacturer (OEM) market. The processor dissipates just 1.6 watts at these performance levels when fabricated in IBM's 45-nanometer silicon-on-insulator (SOI) technology, positioning the 476FP as one of the most energy-efficient embedded processor cores in the industry. It provides a growth platform for emerging applications such as 4G networks and WiMAX infrastructure products.

3. A new study by a University of Warwick researcher has demonstrated that researchers trying to model a range of processes could use the power and capabilities of a particular Xbox chip as a much cheaper alternative to other forms of parallel processing hardware. The results of his work have just been published in the journal Computational Biology and Chemistry under the title "Implications of the Turing completeness of reaction-diffusion models, informed by GPGPU simulations on an XBox 360: Cardiac arrhythmias, re-entry and the Halting problem". Sony PS3s with Cell processors have already been made into supercomputers.

(H/T Sander Olson)

4. Sharp Corporation has announced the development of a new 500 mW blue-violet semiconductor laser for triple- and quadruple-layer Blu-ray discs.

The semiconductor laser is blue-violet, producing optical output of up to 500 mW at a 405 nm oscillation wavelength under pulsed operation. The new laser has proven reliable over 1,000 hours of testing.

The device is designed for use in Blu-ray Disc recorders, and can write at 8x speed on both triple- and quadruple-layer discs. This means recordable discs (at the present 25 GB per layer) of 75 or 100 GB. The development follows the mass production of a 320 mW blue-violet semiconductor laser starting in June this year; the 320 mW device can write at 8x speed on single- and dual-layer discs.
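The 75 GB and 100 GB figures follow directly from the 25 GB-per-layer capacity:

```python
# Capacities of multi-layer Blu-ray discs at the current 25 GB per layer,
# matching the 75 GB and 100 GB figures quoted above.
GB_PER_LAYER = 25
capacities = {layers: layers * GB_PER_LAYER for layers in (1, 2, 3, 4)}
print(capacities)  # {1: 25, 2: 50, 3: 75, 4: 100}
```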

Evidence the Three Forks Formation is Separate from the Bakken, Which Would Mean a Lot More Oil

Testing done in the Bakken shale area found a "stacked play," meaning one oil formation sits on top of another, which could allow more oil to be recovered at a lower cost in a smaller area with less environmental damage, Continental Resources' Harold Hamm said. Hamm said the testing showed two distinct formations. He said the Three Forks well initially fetched 140 barrels daily; the Bakken well fetched about 1,200. State officials said in July that production results from 103 wells in the Three Forks-Sanish formation show some wells recovering more than 800 barrels a day, considered "as good or better" than some in the Bakken, where the record is thought to be more than 4,000 barrels a day.

State geologist Ed Murphy called Continental's findings interesting but said more wells are needed before researchers know for sure the characteristics and potential of the Three Forks formation.

The company's tests and other promising results from Three Forks wells have fueled speculation that the formation could add billions of barrels to government reserve estimates.

Continental, which is marking 20 years in North Dakota, also trademarked the process of drilling multiple wells from one pad, the area cleared for drilling machinery. It plans to drill two wells into the Bakken and another two into the Three Forks from one pad, which means the well site's footprint will be cut from 20 acres to six acres, Hamm said.

The company estimates its ECO-Pad process will cut drilling and well completion costs, which run as high as $7 million in the Bakken, by about 10 percent. The process could be in place by the end of the year.

The company also plans to use a single drill rig that can be moved to different sites on a pad, which will require only one road and fewer power lines, pipelines and other infrastructure, he said.

Seeking Alpha has the transcript of the August 2009 conference call for continental resources.

This test was very important to us and I believe we did (inaudible) is stacking two laterals and established that even with unrealistically tight spacing the Middle Bakken and Three Forks/Sanish reservoirs are separate and need to be developed individually. Consequently, in terms of testing, we have seen what we effectively need to see. So given the extensive number of wells that we and others have completed across the play in both zones, as I said earlier, Continental is now transitioning into development mode with a staggered drilling pattern that we will use to harvest the two reservoirs.

The most effective way to drain these two tanks, so to speak, is to drill a north-south oriented Middle Bakken well, then step over about 660 feet east or west and drill a Three Forks/Sanish well in the same orientation, and then step over another 660 feet and drill the next Middle Bakken well, working your way out across the play. We think this development plan dovetails very well with the ECO-Pad concept that the NDIC approved this last week. Continental has developed an innovative new approach for drilling multiple wells from the same drilling pad, specifically two Middle Bakken and two Three Forks/Sanish wells per ECO-Pad.

The key advantages, we think, are very apparent. We drill four wells from one ECO-Pad, minimizing the environmental impact. One ECO-Pad will have about 70% less surface footprint area than four conventional drilling pads. Instead of four pads, each using about 5 acres for (inaudible) the drilling platform, we will be drilling four wells sequentially from a single 6-acre ECO-Pad.

The NDIC granted ECO-Pads an exemption from setback requirements on section [ph] property lines. We'll be drilling fence to fence from one 1,280-acre spacing unit to the next, instead of leaving about 1,100 feet or more of untouched rock between these two 1,280-acre spacing units. So we will be utilizing all the reservoirs within our spacing unit.

September 17, 2009

Superconducting Radiofrequency Cavity Advance for Accelerators and Other Purposes

The U.S. Department of Energy's (DOE's) Thomas Jefferson National Accelerator Facility marked a step forward in the field of advanced particle accelerator technology with the successful test of the first U.S.-built superconducting radiofrequency (SRF) niobium cavity to meet the exacting specifications of the proposed International Linear Collider (ILC).

Superconducting radiofrequency accelerator cavities are crucial components of particle accelerators or colliders, harnessing the energy that the collider pumps into a beam of particles. If it were built, the ILC would require about 16,000 niobium cavities, and vendors worldwide are vying to produce test cavities that meet the ILC's stringent performance goals.

The cavity was cooled to operating temperature (2 kelvin, or negative 456 degrees Fahrenheit) and its ability to harness radiofrequency energy was gauged. The test revealed that the cavity's accelerating gradient (its ability to impart energy to particles) was 41 megavolts per meter, far exceeding the GDE (Global Design Effort) specification of 35 MV/m.
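To put the gradient figure in perspective, here is a quick sketch of the implied energy gain per cavity. The roughly 1 m active length is an assumption for a TESLA-style nine-cell ILC cavity; the text itself gives no length.

```python
# Energy an electron gains traversing a cavity at a given accelerating
# gradient: delta_E (MeV) = gradient (MV/m) * active length (m).
gradient_mv_per_m = 41.0   # measured, from the text
spec_mv_per_m = 35.0       # ILC (GDE) specification, from the text
active_length_m = 1.0      # ASSUMED length for a TESLA-style 9-cell cavity

energy_gain_mev = gradient_mv_per_m * active_length_m
margin_pct = 100 * (gradient_mv_per_m - spec_mv_per_m) / spec_mv_per_m

print(energy_gain_mev)       # 41.0 MeV per (assumed) metre-long cavity
print(round(margin_pct, 1))  # 17.1 percent above specification
```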

Controversial EmDrive
These kinds of superconducting cavities would be useful for enabling the EmDrive, if the EmDrive is feasible.

China is building prototype EmDrive systems

EmDrive presentation at the Space 08 conference

Key points from the slideshow: the Chinese are making an S-band prototype engine. A version 1.5 superconducting system with a Q of 6×10^6 would have 100 times more thrust than the version 1 system, which would be 32 newtons of static thrust. That is about one thousand times less than a version 2 superconducting system that would match the best current superconducting Q of 5×10^9. If these systems work, the sizes of the forces involved should be unambiguous: not tiny millinewton forces, which could come from mistakes or other causes, but larger forces.
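Assuming thrust scales linearly with cavity Q, as the slides imply, the quoted jumps follow directly from the Q ratios. This is a sketch of the slideshow's own claim, not an established result (the EmDrive itself remains unverified).

```python
# If EmDrive thrust scales linearly with cavity Q (the slides' claim),
# the thrust jumps follow from the Q ratios. Q values and the 32 N
# figure are from the text; linear scaling is the claimed relationship.
q_v15 = 6e6                 # version 1.5 superconducting cavity Q
q_v2 = 5e9                  # best current superconducting Q (version 2)
thrust_v15_newtons = 32.0   # claimed version 1.5 static thrust

ratio = q_v2 / q_v15
print(round(ratio))                           # 833, roughly the "1000x" quoted
print(round(thrust_v15_newtons * ratio))      # 26667 N for a version 2 system
```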

Koreans Show Feasibility of a Room-Temperature Version of IBM's Millipede Super-High-Density Memory

Ultrahigh-density phase-change data storage without the use of heating in Nature Nanotechnology

Non-volatile memories based on scanning probes offer very high data densities, but existing approaches require the probe to be heated, which increases the energy expenditure and complexity of fabrication. Here, we demonstrate the writing, reading and erasure of an ultrahigh-density array of nanoscopic indentations without heating either the scanning probe tip or the substrate. An atomic force microscope tip causes microphase transitions of a polystyrene-block-poly(n-pentyl methacrylate) block copolymer at room temperature by application of pressure alone. We demonstrate a data storage density of 1 Tb in^-2, which is limited only by the size of the tip. This demonstration of a pressure-based phase-change memory at room temperature may expedite the development of next-generation ultrahigh-density data storage media.
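A quick back-of-the-envelope check of what 1 terabit per square inch implies for the indentation pitch (the "limited only by the size of the tip" claim):

```python
import math

# 1 Tb per square inch: area per bit = (inch in nm)^2 / 1e12,
# and the bit-to-bit pitch is the square root of that area.
inch_nm = 25.4e6                          # one inch in nanometres
area_per_bit_nm2 = inch_nm**2 / 1e12      # nm^2 available per bit
pitch_nm = math.sqrt(area_per_bit_nm2)
print(round(pitch_nm, 1))  # 25.4
```

So each indentation sits on a grid of roughly 25 nm, consistent with the density being set by the AFM tip radius.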

New Scientist has coverage

The Korean team has shown that the tip of an atomic force microscope (AFM) can etch the kind of tiny pits that store data in millipede-like systems simply by pressing on the new material. Lighter pressure can be used to feel for and read out the pits without altering them. It solves one issue but raises another, according to a researcher quoted by New Scientist: "The forces needed are relatively high, and this is likely to lead to tip wear issues," he says. Even systems that use heat suffer such problems, and they would be worse if more force were being used, he says.

"The key development on the polymer side is new bilayer materials," he says. These combine a hard polymer skin just a few nanometres thick with a softer layer – for example polystyrene – beneath. "It combines the softness you want [to avoid damaging the probe tip and for fast writing speeds] with the thermal stability necessary for long data lifetimes."

18 pages of supplemental information in a pdf.

Nanopattern formation on various polymer films by the indentation of an AFM tip

We used other polymer films for the fabrication of nanopatterns by indentation with an AFM tip at room temperature. Here, we employed three different homopolymers: PS, PnPMA, and PMMA. All nanopatterns were prepared using the same AFM tip (AR5-NCHR) while varying the indentation force from 200 to 1400 nN at room temperature.

The shape of the generated nanopatterns and the depth were maintained for 5 months at room temperature.

The depth of each written nanopattern is almost the same (~7.5 nm). Once this patterned film was placed for 2 s onto a heating plate maintained at 120 °C, the nanopatterns were completely erased and a flat film was observed. We did not see any evidence of polymer degradation during repeated writing and erasing. Although we did not carry out the experiments for very long cycles (say, more than 10^4), we consider that the cyclability would still be good even after long write-erase cycles, because no degradation of polymer films is observed during repeated writing and erasing.

The recording time for the fabrication of a pattern with a 5 nm depth is calculated to be 5 ms at a down/up speed of 2 μm/s, and it would decrease as the down/up speed increases.
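The 5 ms figure follows directly from the stated depth and tip speed: the tip has to travel down and back up through the pattern depth.

```python
# Recording time for one indentation: round trip (down + up) through
# the pattern depth at the stated tip speed.

def recording_time(depth_m, speed_m_per_s):
    """Round-trip (down + up) time in seconds for a single indentation."""
    return 2 * depth_m / speed_m_per_s

t = recording_time(5e-9, 2e-6)  # 5 nm depth, 2 um/s down/up speed
print(f"{t*1e3:.1f} ms")         # 5.0 ms, matching the figure quoted above
```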

India Plans to Export Uranium and Thorium Fueled Nuclear Reactors

The head of India's Atomic Energy Commission, Anil Kakodkar, announced yesterday in Vienna a special version of the forthcoming Advanced Heavy Water Reactor (AHWR) adapted to use low-enriched uranium (LEU) fuel. The original design is fuelled by a mix of uranium-233 and plutonium bred from thorium using fast neutron power reactors earlier in a thorium fuel cycle. The LEU variant is suitable for export because it does away with the plutonium, replacing it with uranium enriched to 19.75% uranium-235.

The long-term goal of India's nuclear program has been to develop an advanced heavy-water thorium cycle. The first stage of this employs pressurized heavy-water reactors and light-water reactors to produce plutonium.

Stage two uses fast neutron reactors to burn the plutonium and breed uranium-233 from locally mined thorium. The blanket around the core will have uranium as well as thorium, so that further plutonium is produced as well.

In stage three, AHWRs burn the uranium-233 from stage two with plutonium and thorium, getting about two thirds of their power from the thorium.

The first AHWR is meant to start construction in 2012, although no site has yet been announced. A prototype 500 MWe fast neutron reactor being built at Kalpakkam should be complete in 2011.

Producing 300 MWe, the unit is less than one third the capacity of a typical large reactor. It is designed to operate for up to 100 years and has a "next generation" level of safety that grants operators three days' grace in the event of a serious incident and requires no emergency planning beyond the site boundary under any circumstances.

Aubrey de Grey Singularity and Methuselarity Paper

8-page PDF: "The Singularity and the Methuselarity: Similarities and Differences" by Aubrey de Grey. H/T Accelerating Future (Michael Anissimov).

A fundamental difference between the singularity and the Methuselarity that I wish to highlight is its impact on “the human condition” – on humanity’s experience of the world and its view of itself. I make at this point perhaps my most controversial claim in this essay: that in this regard, the Methuselarity will probably be far more momentous than the singularity.

How can this be? Surely I have just shown that the Methuselarity will be the consequence of only quite modest (and, thereafter, actually decreasing) rates of progress in postponing aging, whereas the singularity will result from what for practical purposes can be regarded as infinite rates of progress in the prowess of computers? Indeed I have. But when we focus on humanity’s experience of the world and its view of itself, what matters is not how rapidly things are changing but how rapidly those changes affect us. In the case of the singularity, I have noted earlier in this essay that if we survive it at all (by virtue of having succeeded in making these ultra-powerful computers permanently friendly to us) then we will move from a shortly-pre-singularity situation in which computers already make our lives rather easy to a situation in which they fade into the background and stay there. I contend that, from our point of view, this is really not much of a difference, psychologically or socially: computers are already far easier to use than the first PCs were, and are getting easier all the time, and the main theme of that progression is that we are increasingly able to treat them as if they were not computers at all. It seems to me that the singularity may well, in this regard, merely be the icing on a cake that will already have been baked.

Compare this to the effect of the Methuselarity on the human condition. In this case we will progressively and smoothly improve our remaining life expectancy as calculated from the rate of accumulation of those types of damage that we cannot yet fix. So far, so boring. But wait – is that the whole story? No, because what will matter is the bottom line, how long people think they’re actually going to live.

The singularity will take us from a point of considerable computing power that is mostly hidden from our concern to one of astronomical computing power that is just slightly more hidden. The Methuselarity, by contrast, will take us from a point of considerable medical prowess that only modestly benefits how long we can reasonably expect to live, to one of just slightly greater medical prowess that allows us confidence that we can live indefinitely.

A More Affordable, High G force Magnetic Space Launcher Proposal

A magnetic space launcher is proposed by Bolonkin.

Suppose this magnetic launcher costs $50 million, the installation lifetime is 10 years, and maintenance is $2 million per year. The launcher operates 350 days per year and launches a 100 kg payload every 30 minutes (about 5000 kg/day and 1750 tons/year). Then the amortized installation cost is $2.86/kg and the total cost is about $6/kg.
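The amortization behind these numbers is straightforward. The sketch below uses the article's rounded throughput figure and leaves out energy and other operating costs, which presumably account for the gap between the $2.86/kg installation figure and the quoted $6/kg total.

```python
# Amortized launch-cost sketch using the figures quoted above.
# (1750 tons/year is the article's rounded annual throughput.)

install_cost = 50e6      # $, launcher construction
lifetime_yr = 10
maintenance = 2e6        # $/yr
payload_per_yr = 1.75e6  # kg (100 kg every 30 min, 350 days/yr, rounded)

install_per_kg = install_cost / lifetime_yr / payload_per_yr
total_per_kg = (install_cost / lifetime_yr + maintenance) / payload_per_yr

print(f"Installation cost: ${install_per_kg:.2f}/kg")  # $2.86/kg
print(f"With maintenance:  ${total_per_kg:.2f}/kg")    # $4.00/kg before energy costs
```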

The installation consists of a space apparatus, power drive stations, which include a flywheel accumulator for energy storage, a variable reducer, a powerful homopolar electric generator, and electric rails. The drive stations accelerate the apparatus up to hypersonic speed. The estimations and computations show the possibility of making this project a reality in a short period of time (for payloads which can tolerate high g-forces). The launch will be very cheap at a projected cost of 3 to 5 dollars per pound.

A homopolar generator is a DC electrical generator in which an electrically conductive rotating disk passes through a magnetic field (it can be thought of as slicing through the field lines). Relatively speaking, homopolar generators can source tremendous electric currents (10 to 10,000 amperes) but at low potential differences (typically 0.5 to 3 volts). This property is due to the fact that the homopolar generator has very low internal resistance.

The engine accelerates the flywheel to maximum safe rotation speed. At launch time, the flywheel connects through the variable reducer to the homopolar electric generator, which produces a high-amperage current. The gas gun takes a shot and accelerates the space apparatus up to a speed of 1500 to 2000 m/s. The apparatus leaves the gun and gains further motion on the rails, where its body switches on the heavy electric current from the electric generator. The magnetic force of the electric rails accelerates the space apparatus up to speeds of 8000 m/s or more. The initial acceleration with a gas gun can decrease the size and cost of the installation when the final speed is not high. An affordable gas gun produces a projectile speed of about 2000 m/s. The railgun does not have this limit, but it produces some engineering problems, such as the required short (pulsed) gigantic surge of electric power, sliding contacts for currents of some millions of amperes, storage of energy, etc.

Current capacitors have a small specific energy of about 0.002 MJ/kg ([2], p. 465). About 10^10 J of energy would be needed, requiring 5000 tons of these expensive capacitors. Flywheels made of cheap artificial fiber have a specific energy of about 0.5 MJ/kg ([2], p. 464), which reduces the needed flywheel mass to a relatively small 25 to 30 tons. Per unit mass, a flywheel is significantly cheaper than a capacitor bank.
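The storage-mass comparison follows from dividing the shot energy by each technology's specific energy:

```python
# Mass of energy storage needed for one ~10^10 J shot, comparing
# capacitors with flywheels, using the specific energies cited above.

SHOT_ENERGY = 1e10           # J per launch

capacitor_density = 0.002e6  # J/kg (0.002 MJ/kg)
flywheel_density = 0.5e6     # J/kg (0.5 MJ/kg, cheap artificial fiber)

cap_mass_t = SHOT_ENERGY / capacitor_density / 1000
fly_mass_t = SHOT_ENERGY / flywheel_density / 1000

print(f"Capacitor bank: {cap_mass_t:.0f} t")  # 5000 t
print(f"Flywheel:       {fly_mass_t:.0f} t")  # 20 t (the 25-30 t quoted allows some margin)
```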

Bolonkin's ideas to reduce costs:

1. Flywheels (25 tons, 710 m/s maximum rotation speed) made from artificial fiber.
2. A small variable reducer with smooth change of turns and a high variable rate.
3. A multi-stage homopolar electric generator capable of producing millions of amperes at a variable high voltage for a short time.
4. A sliding mercury (gallium) contact with high current capacity.
5. A double switch with high capacity and short switching time.
6. A special projectile design (conductor ring) in permanent contact with the electric rail.
7. A thin (lead) film on the projectile contacts that improves contact between the projectile body and the conductor rail.
8. Magnets inserted into the homopolar generator's disk (wheel), which significantly simplifies the electric generator.
9. Internal water cooling for the rails and electric generator.
10. After a shot, the generator can return rotation energy to the flywheel, and the rails can return electromagnetic energy to the installation. That way part of the shot energy may be recovered, which increases the efficiency of the launch installation.

A short rail section (412 m) could launch payloads into orbit at an acceleration of 7500 g.

A manned rail launcher must be about 1100 km long for an acceleration of a = 3g (untrained passengers) and about 500 km for a = 6g (trained cosmonauts).
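These track lengths follow from the constant-acceleration relation L = v^2 / (2a). Assuming a final speed of 8000 m/s reproduces the quoted figures; the paper's 412 m presumably assumes a slightly lower final speed.

```python
# Rail length needed to reach a given final speed at constant
# acceleration: L = v^2 / (2 a). Final speed of 8000 m/s is the
# launcher's target speed from the description above.

G = 9.81  # m/s^2

def rail_length(v_final, accel_g):
    """Track length (m) to reach v_final (m/s) at accel_g times g."""
    return v_final**2 / (2 * accel_g * G)

v = 8000.0
print(f"3 g (untrained passengers): {rail_length(v, 3)/1e3:.0f} km")  # 1087 km, ~1100 km in round numbers
print(f"6 g (trained cosmonauts):   {rail_length(v, 6)/1e3:.0f} km")  # 544 km
print(f"7500 g (cargo):             {rail_length(v, 7500):.0f} m")    # 435 m, close to the quoted 412 m
```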

Graphene and gallium arsenide

The Physikalisch-Technische Bundesanstalt (PTB) has for the first time made graphene visible on gallium arsenide.

Scientists of the Physikalisch-Technische Bundesanstalt (PTB) have now, with the aid of a special design, succeeded in making graphene visible on gallium arsenide. Previously this had only been possible on silicon oxide. Now that they can view the graphene layer, which is thinner than one thousandth of a light wavelength, with an optical microscope, the researchers want to measure the electrical properties of their new material combination.

They use the principle of the anti-reflective layer: if a very thin, nearly transparent layer of one material is superimposed on another, the reflectivity of the lower layer changes visibly. To make their lower layer of gallium arsenide (plus the graphene atomic layer) visible, the PTB physicists chose aluminium arsenide (AlAs). However, AlAs is so similar to gallium arsenide (GaAs) in its optical properties that they had to employ a few tricks: they vapour-coated not just one but several wafer-thin layers.
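The anti-reflective-layer principle at work here can be illustrated with the textbook quarter-wave condition. The sketch below is purely illustrative: the refractive index is an assumed value, not PTB's actual layer design.

```python
# Quarter-wave anti-reflection condition: reflections from the top and
# bottom of a coating of thickness d = lambda / (4 n) interfere
# destructively, which is the mechanism that lets a thin over-layer
# change the visible reflectivity of the stack. The index below is
# an assumption for illustration, not the PTB layer design.

def quarter_wave_thickness(wavelength_nm, n):
    """Coating thickness (nm) for destructive interference at one wavelength."""
    return wavelength_nm / (4 * n)

d = quarter_wave_thickness(550, 3.0)  # green light, assumed refractive index n = 3.0
print(f"{d:.1f} nm")                  # 45.8 nm
```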

Applied Physics Letters: Graphene on gallium arsenide: Engineering the visibility

September 16, 2009

MIT team finds way to combine Silicon and Gallium Nitride for Microprocessors

Silicon and gallium nitride have been used to create a single hybrid microchip. This will allow transistors to be made smaller, and the several chips made of different materials in a cellphone could be combined into a single chip. It is also an advance toward photonics on a chip, which is needed for high-speed interchip communication and for zettaflop computers. It could take a couple of years to reach the point where the technology could be commercialized.

Results: An MIT team led by Tomás Palacios, assistant professor in the Department of Electrical Engineering and Computer Science, has succeeded in combining two semiconductor materials, silicon and gallium nitride, that have different and potentially complementary characteristics, into a single hybrid microchip. This is something researchers have been attempting to do for decades.

Why it matters: This advance could point to a way of overcoming fundamental barriers of size and speed facing today's silicon chips. "We won't be able to continue improving silicon by scaling it down for long," Palacios says, so it's crucial to find other approaches. Besides microprocessor chips, the new integrated technology can be used for other applications such as hybrid chips that combine lasers and electronic components on a single chip, and energy-harvesting devices that can harness the pressure and vibrations from the environment to produce enough power to run the silicon components. It could also lead to more efficient cell phone manufacturing, replacing four or five separate chips made from different semiconductor materials. "With this technology, you could potentially integrate all these functions on a single chip," Palacios says.

Source: "Seamless On-Wafer Integration of Si(100) MOSFETs and GaN HEMTs," Jinwook W. Chung, Tomás Palacios, et al, IEEE Electron Device Letters, October 2009

The chips can be manufactured using the standard fab technology currently used for commercial silicon chips. The current chips are one inch square, so the process must be scaled up to 6-, 8- and 12-inch wafer sizes.

The faster chip is also highly efficient: most of the transistors operate at slower speeds, consuming less energy.

Thomas Kazior, technical director of Advanced Microelectronics Technology at Raytheon Integrated Defense Systems, said this provides a path to RF "systems on a chip."


First Production Extreme Ultraviolet Lithography On Track for Second Half of 2010

Carl Zeiss, the world’s leading manufacturer of optical systems for chip fabrication, has now delivered a complete optical system for production-ready Extreme Ultraviolet Lithography (EUVL), a new technology for microchip fabrication. This optical system forms a core module of the first EUVL production system from the Dutch manufacturer and long-term partner to Carl Zeiss, ASML. Delivery of the complete EUVL system, starting at a rate of 60 wafers per hour, is planned in the second half of 2010. It is intended for production of microchips with structures in the 20 nanometer range.

ASML has already received five orders for the EUVL production system, with deliveries starting in 2010. "Our recent successes are important milestones which show that EUVL is making excellent progress as a cost effective single patterning technology. EUVL has the resolution power to carry Moore's law beyond the next decade," says Christian Wagner, Senior Product Manager at ASML.

Berkeley Lab: Putting a Strain on Nanowires Could Yield Colossal Magnetoresistance

These optical images of a multiple-domain vanadium oxide microwire taken at various temperatures show pure insulating (top) and pure metallic (bottom) phases and co-existing metallic/insulating phases (middle) as a result of strain engineering. (Image from Junqiao Wu)

Berkeley Lab researchers found that structural irregularities in correlated electron materials, a phenomenon known as "phase inhomogeneity," can be engineered at the sub-micron scale to achieve desired properties such as colossal magnetoresistance.

This unique class of materials is commanding much attention now because they can display properties such as colossal magnetoresistance and high-temperature superconductivity, which are highly coveted by the high-tech industry.

Wu says that in the future strain engineering might be achieved by interfacing a correlated electron material such as vanadium oxide with a piezoelectric - a non-conducting material that creates a stress or strain in response to an electric field.

“By applying an electric field, the piezoelectric material would strain the correlated electron material to achieve a phase transition that would give us the desired functionality,” says Wu. “To reach this capability, however, we will first need to design and synthesize such integrated structures with good material quality.”

Nanotube Roundup: Carbon Nanotubes in Computer Chips, Nanotubes in Solar Cells

1. From MIT, a new technique for growing carbon nanotubes should be easier to integrate with existing semiconductor manufacturing processes.

"Low Temperature Synthesis of Vertically Aligned Carbon Nanotubes with Electrical Contact to Metallic Substrates Enabled by Thermal Decomposition of the Carbon Feedstock," Gilbert Nessim, Carl V. Thompson et al, Nano Letters, Aug. 31, 2009

Results: Researchers in the lab of MIT materials science professor Carl V. Thompson grew dense forests of crystalline carbon nanotubes on a metal surface at temperatures close to those characteristic of computer chip manufacturing. Unlike previous attempts to do the same thing, the researchers' technique relies entirely on processes already common in the semiconductor industry. The researchers also showed that the crucial step in their procedure was to preheat the hydrocarbon gas from which the nanotubes form, before exposing the metal surface to it.

Why it matters: The transistors in computer chips are traditionally connected by tiny copper wires. But as chip circuitry shrinks and the wires become thinner, their conductivity suffers and they become more likely to fail. A simple enough manufacturing process could enable carbon nanotubes to replace the vertical wires in chips, permitting denser packing of circuits.

2. Researchers at Cornell University have made a photovoltaic cell out of a single carbon nanotube that can take advantage of more of the energy in light than conventional photovoltaics can, wringing twice the charge from light.

Researchers led by Paul McEuen, professor of physics at Cornell, began by putting a single nanotube in a circuit and giving it three electrical contacts called gates, one at each end and one underneath. They used the gates to apply a voltage across the nanotube, then illuminated it with light. When a photon hits the nanotube, it transfers some of its energy to an electron, which can then flow through the circuit off the nanotube. This one-photon, one-electron process is what normally happens in a solar cell. What's unusual about the nanotube cell, says McEuen, is what happens when you put in what he calls "a big photon" -- a photon whose energy is twice the energy normally required to get an electron off the cell. In conventional cells, this extra energy is lost as heat. In the nanotube device, it kicks a second electron into the circuit. The work was described last week in the journal Science.
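A toy model makes the distinction concrete. The threshold energy below is illustrative, not a value from the paper:

```python
# Toy model of the "big photon" effect described above: in a
# conventional cell a photon yields at most one electron, with excess
# energy lost as heat; in the nanotube device a photon carrying at
# least twice the threshold energy kicks out a second electron.
# Energies are in arbitrary units; the threshold is illustrative.

def electrons_from_photon(e_photon, e_threshold, nanotube=False):
    """Electrons collected per absorbed photon in this simplified picture."""
    if e_photon < e_threshold:
        return 0
    if nanotube and e_photon >= 2 * e_threshold:
        return 2
    return 1

# A "big photon" at twice the threshold energy:
print(electrons_from_photon(2.0, 1.0, nanotube=False))  # 1 (excess energy becomes heat)
print(electrons_from_photon(2.0, 1.0, nanotube=True))   # 2
```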

McEuen cautions that his work on carbon nanotube photovoltaics is fundamental. "We've made the world's smallest solar cell, and that's not necessarily a good thing," he says. To take advantage of the nanotubes' superefficiency, researchers will first have to develop methods for making large arrays of the diodes. "We're not at a point where we can scale up carbon nanotubes, but that should be the ultimate goal," says Lee, who developed the first nanotube diodes while a researcher at General Electric. It's not clear why the nanotube photovoltaic cell offers this two-for-one energy conversion.

3. An international team studying the effects of friction on carbon nanotubes claims that friction can be cut in half when carbon nanotubes are aligned lengthwise rather than transversely.

September 15, 2009

Liposuction Fat Leftovers Can Be Easily Converted to Stem Cells

Globs of human fat removed during liposuction conceal versatile cells that are more quickly and easily coaxed to become induced pluripotent stem cells, or iPS cells, than are the skin cells most often used by researchers, according to a new study from Stanford’s School of Medicine.

“Thirty to 40 percent of adults in this country are obese,” agreed cardiologist Joseph Wu, MD, PhD, the paper’s senior author. “Not only can we start with a lot of cells, we can reprogram them much more efficiently. Fibroblasts, or skin cells, must be grown in the lab for three weeks or more before they can be reprogrammed. But these stem cells from fat are ready to go right away.”

The fact that the cells can also be converted without the need for mouse-derived “feeder cells” may make them an ideal starting material for human therapies. Feeder cells are often used when growing human skin cells outside the body, but physicians worry that cross-species contamination could make them unsuitable for human use.

Even those of us who are not obese would probably be happy to part with a couple of pounds (or more) of flab. Nestled within this unwanted latticework of fat cells and collagen are multipotent cells called adipose, or fat, stem cells. Unlike highly specialized skin-cell fibroblasts, these cells in the fat have a relatively wide portfolio of differentiation options, becoming fat, bone or muscle as needed. It’s this pre-existing flexibility, the researchers believe, that gives these cells an edge over the skin cells.

“These cells are not as far along on the differentiation pathway, so they’re easier to back up to an earlier state,” said first author and postdoctoral scholar Ning Sun, PhD, who conducted the research in both Longaker’s and Wu’s laboratories. “They are more embryonic-like than fibroblasts, which take more effort to reprogram.”

These reprogrammed iPS cells are usually created by expressing four genes, called Yamanaka factors, that are normally unexpressed (or expressed at very low levels) in adult cells.

Sun found that the fat stem cells actually express higher starting levels of two of the four reprogramming genes than do adult skin cells—suggesting that these cells are already primed for change. When he added all four genes, about 0.01 percent of the skin-cell fibroblasts eventually became iPS cells but about 0.2 percent of the fat stem cells did so—a 20-fold improvement in efficiency.
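The 20-fold figure is simply the ratio of the two reported reprogramming rates:

```python
# The 20-fold efficiency improvement quoted above is the ratio of the
# two reprogramming rates reported in the study.

fibroblast_rate = 0.0001  # 0.01% of skin-cell fibroblasts became iPS cells
fat_stem_rate = 0.002     # 0.2% of fat stem cells did

improvement = fat_stem_rate / fibroblast_rate
print(f"{improvement:.0f}-fold improvement")  # 20-fold
```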

The new iPS cells passed the standard tests for pluripotency.

“Imagine if we could isolate fat cells from a patient with some type of congenital cardiac disease,” said Wu. “We could then differentiate them into cardiac cells, study how they respond to different drugs or stimuli and see how they compare to normal cells. This would be a great advance.”

“The field now needs to move forward in ways that the Food and Drug Administration would approve —with cells that can be efficiently reprogrammed without the risk of cross-species contamination—and Stanford is an ideal place for that to happen.”

September 14, 2009

Nanotechnology from Metamodern (Drexler) and CRNano (Phoenix)

Chris Phoenix looks at carbohydrate strands, a new molecular building block.

Cells in a multicellular organism are surrounded by a matrix of molecules called, appropriately enough, the extracellular matrix (ECM). The ECM is made up of protein and carbohydrate. It anchors the cells and provides structure to the organism. It also provides signaling for mobile cells such as immune system cells. There's a technology being developed to build many different carbohydrate strands in an array in parallel under optical control, similar to the way an array of DNA strands can be built. The point of this - or at least, one application - is to research the way cells react with the ECM, and perhaps develop new medical sensors.

A molecular manufacturing system based on carbohydrates might start with an extension of whatever approach is being used for synthesizing the carbohydrate arrays. Then, as useful structures were built, it might be possible to mechanically protect and deprotect various sites on the carbohydrate molecules, making the chemistry simpler and more flexible.

Drexler at Metamodern points out new DNA Nanotechnology

1. Ned Seeman’s lab has created the first engineered, high-quality set of 3D DNA crystals. This result provides a basis for organizing other components into regular 3D arrays.

We live in a macroscopic three-dimensional (3D) world, but our best description of the structure of matter is at the atomic and molecular scale. Understanding the relationship between the two scales requires a bridge from the molecular world to the macroscopic world. Connecting these two domains with atomic precision is a central goal of the natural sciences, but it requires high spatial control of the 3D structure of matter. The simplest practical route to producing precisely designed 3D macroscopic objects is to form a crystalline arrangement by self-assembly, because such a periodic array has only conceptually simple requirements: a motif that has a robust 3D structure, dominant affinity interactions between parts of the motif when it self-associates, and predictable structures for these affinity interactions. Fulfilling these three criteria to produce a 3D periodic system is not easy, but should readily be achieved with well-structured branched DNA motifs tailed by sticky ends. Complementary sticky ends associate with each other preferentially and assume the well-known B-DNA structure when they do so; the helically repeating nature of DNA facilitates the construction of a periodic array. It is essential that the directions of propagation associated with the sticky ends do not share the same plane, but extend to form a 3D arrangement of matter. Here we report the crystal structure at 4 Å resolution of a designed, self-assembled, 3D crystal based on the DNA tensegrity triangle. The data demonstrate clearly that it is possible to design and self-assemble a well-ordered macromolecular 3D crystalline lattice with precise control.

a, Schematic of the tensegrity triangle. The three unique strands are shown in magenta (strands restricted to a single junction), green (strands that extend over each edge of the tensegrity triangle) and dark blue (one unique nicked strand at the centre passing through all three junctions). Arrowheads indicate the 3' ends of strands. Nucleotides with A-DNA-like characteristics are written in bright blue. Cohesive ends are shown in red letters. b, An optical image of crystals of the tensegrity triangle. The rhombohedral shape of the crystals and the scale are visible.

4 pages of supplemental information

2. Paul Rothemund's work was already blogged here: DNA scaffolding.

3. Shih's work on DNA shapes was also already blogged here: DNA nanotech makes more shapes and tools.

4. An article summarizes work on delivering small interfering RNA (siRNA), the key double-stranded molecule in the RNAi gene-silencing pathway, into cells.

Lipid- and polymer-based technologies are poised to enter late-stage trials and possibly even reach the market in the next few years.

Scientists trying to deliver siRNA need to engineer around several troublesome properties. RNA has a molecular weight that is 10 to 20 times that of a traditional small-molecule drug. And because the molecule is highly negatively charged, it typically can’t cross the similarly negatively charged plasma membranes to enter the cell. It’s no wonder naked strands of siRNA didn’t make it as a therapeutic approach.

Delivery systems for siRNA must overcome three major obstacles: getting the drug to its target in the body, coaxing it inside the cell, and releasing it. Even after all that is accomplished, companies then need to worry about safety, a major concern given the power of siRNA to turn off cellular processes.

Lipid- and polymer-based systems are the most established approaches for systemic delivery of RNAi. In the clinic, lipid nanoparticles (LNPs) have advanced the most. Alnylam Pharmaceuticals, widely acknowledged as the leader in the RNAi arena, has a liver cancer drug in Phase I trials that applies Tekmira Pharmaceuticals’ stable nucleic acid lipid particle technology. Alnylam is also conducting early-stage studies of other drugs that use its own LNP formulations.

One bright spot is that once researchers figure out how to get an LNP or polymer system into a particular tissue, the possibility opens for treating a host of diseases; in general, companies need only to change the siRNA payload.

So far, most of the success with both LNPs and polymer-based systems has been in delivering siRNA to the liver, which has leaky walls that enable the particles to slip in, explains Jon Wolff, vice president and head of research at Roche’s Madison, Wis., labs. Tumor vasculature and blood vessels are similarly “leaky,” and both are the subject of intensive drug development efforts, he adds.

Currently, the drug industry is focused on designing newer and better lipids that could enhance delivery to different kinds of tissues. As a result, scientists have become better at getting the particle inside the cell, but getting the siRNA out of the pathway to the destructive lysosome remains a major hurdle. “If the particle is unable to escape the endosome, it enters into a degradation pathway. Basically, it’s game over,” Akinc says.

More cutting-edge research and thinking will be needed to overcome the myriad obstacles to broader therapeutic use for siRNA. “One would like to say the path is clear,” MDRNA’s Polisky says, “but in reality this is still a large challenge.”

Wireless Standards: 802.11n Ratified and 802.11ac (6 GHz and Below) and 802.11ad (60 GHz) in Development

The IEEE ratified the 802.11n wireless standard.

The 802.11ac (6 GHz and below) and 802.11ad (60 GHz) standards are in development; both aim for speeds of 1 Gbps and faster.

60 GHz technology has been covered here before.

General Fusion will Leverage Computer Technology

The Cleantech Group provides some info on General Fusion. Advances in computing technology could be the key to magnetized target fusion becoming a cheaper energy source than coal, according to Canadian startup General Fusion.

General Fusion is developing nuclear fusion technology that could one day provide power more cheaply than coal and more safely than nuclear fission plants. The company is using 30-year-old magnetized target fusion (MTF) technology but applying modern computer processing capabilities to control and speed compressions.

General Fusion is seeking to raise an additional $4.75 million before the end of the year to close its Series A round at $13.75 million. The company secured $9 million from GrowthWorks Capital, Braemar Energy Ventures, Chrysalix Energy Ventures and the Entrepreneurs Funds in August, in addition to about $2 million in seed and friends-and-family funding.

General Fusion also secured C$13.9 million (US$12.9 million) from Sustainable Technology Development Canada in August, but that money requires matching funds and is to be disbursed as the company meets technological milestones.

The capital is expected to finance the first, two-year phase of General Fusion's project, which is now underway. Richardson estimated a cost of $47 million to $50 million for the entire four-year project.

In the first phase, General Fusion plans to build full-scale prototypes to demonstrate that all the elements work to the specifications required. That includes the magnetized ball of plasma, and demonstrating the compression screen. However, the company doesn’t plan to build the reactor until the second phase, which is expected to start in July 2011.

General Fusion plans to return to private financiers before beginning the final phase.

“By then we will be backed by a whole lot of technological demonstrations that what we do is feasible,” Richardson said. “It will be a lot easier to raise funds.”

After the second phase is completed, it could take five years or more until the technology could be incorporated in a grid-connected power plant. But General Fusion will likely seek licensing agreements or strategic partners for that step in the technology deployment, Richardson said.

Popular Science has step by step diagrams of how the sonic waves driven by pistons hitting a metal sphere would drive fusion.

Shockwave already on the way in

As I noted before: M Simon over at IECfusion Tech has come around to thinking it could work, although it will be very tough.

A later version will have 200 big pistons.

General Fusion has $7 million of a $10 million second round raised.

Previous update: General Fusion had completed proof-of-concept experiments and performed full-scale computer simulations.

The first article by this site on General Fusion and its magnetized target approach.
