February 01, 2008

Singularity lite: one to two levels of faster technological change

The technological singularity is a hypothesized point in the future variously characterized by the technological creation of self-improving intelligence, unprecedentedly rapid technological progress, or some combination of the two.

I want to focus on the aspect of "unprecedentedly rapid technological progress". I feel that a reasonable proxy for measuring technological progress is the rate of world GDP (gross domestic product) growth, i.e. economic growth.

Money is a near-universal medium of exchange: it can be exchanged for goods and services. Therefore, it is a proxy for increasing value and progress.

Economic growth would in general mean positive technological change. Faster growth would be faster technological change.

The Importance of Growth

From Tyler Cowen at the Marginal Revolution:
The importance of the growth rate increases, the further into the future we look. If a country grows at two percent, as opposed to growing at one percent, the difference in welfare in a single year is relatively small. But over time the difference becomes very large. For instance, had America grown one percentage point less per year, between 1870 and 1990, the America of 1990 would be no richer than the Mexico of 1990. At a growth rate of five percent per annum, it takes just over eighty years for a country to move from a per capita income of $500 to a per capita income of $25,000, defining both in terms of constant real dollars. At a growth rate of one percent, such an improvement takes 393 years. There are enormous long-run benefits of economic growth.
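Cowen's figures can be verified with basic compound-growth arithmetic (a quick sketch, not part of the quoted passage):

```python
import math

def years_to_grow(start, end, rate):
    """Years for per-capita income to go from `start` to `end` at a compound annual rate."""
    return math.log(end / start) / math.log(1 + rate)

# $500 -> $25,000 in constant dollars, at 5% vs. 1% annual growth
print(round(years_to_grow(500, 25_000, 0.05), 1))  # 80.2 years
print(round(years_to_grow(500, 25_000, 0.01), 1))  # 393.2 years
```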

History of Economic growth

The shifts in rates of growth can be clearly measured in the shift to a higher magnitude of growth rate. (Hanson)

Mode        Doubling     Date Began     Doubles   Doubles
Grows       Time (DT)    To Dominate    of DT     of WP
----------  -----------  -------------  --------  -------
Brain size  34M yrs      550M B.C.      ?         "~16"
Hunters     230K yrs     2000K B.C.     7.2       8.7
Farmers     860 yrs      4700 B.C.      8.1       7.5
??          58 yrs       1730           3.9       3.2
Industry    15 yrs       1903           1.9       >6.3

The 15-year doubling time corresponds to 4.7-4.8% annual GDP growth. An improvement in doubling time by 3-5 times would indicate another level of progress that is in line with the long-term historic trend.
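The doubling-time figures map to compound annual growth rates via rate = 2^(1/T) − 1 (a quick check of the numbers, not from Hanson's table):

```python
def doubling_time_to_rate(years):
    """Annual compound growth rate implied by a doubling time of `years`."""
    return 2 ** (1 / years) - 1

for t in (15, 5, 3):
    print(f"{t}-year doubling = {doubling_time_to_rate(t):.1%} annual growth")
# 15-year doubling = 4.7%, 5-year = 14.9%, 3-year = 26.0%
```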

Some things to examine: what were the sustainable technological and process innovations that enabled the historic shifts to sustainably higher levels of growth for human civilization overall? And which current countries, regions, industries and companies already have the next higher level of growth?

Note: There is no inevitability to higher levels of growth. Large segments of the world did not make the shift to Industry back in 1903. China was a late adopter, and much of Africa has not really adopted the innovations of Industry at all. Bad choices and the lack of various precursors and enabling factors can prevent the shift to higher levels of growth. Removing the bottlenecks to higher growth can be a haphazard process or a more directed and planned effort. There are many different ways to screw up growth: corruption, lack of education, an incorrect financial or economic system (one that does not reward or encourage higher productivity), etc.

Three times faster doubling would be 5 years to double, or 15% annual growth. China has come close to this level of progress over several decades, at times achieving 12% annual GDP growth. 26% growth would be a doubling every 3 years. Warren Buffett's investments compounded in excess of 30% annually between 1956 and 1969, in a market where 7% to 11% was the norm. There have been companies and industries which have sustained 26% compounded annual growth for a decade or three. Aspects of the Internet can be considered to have those levels of high long-term growth rates.

China achieved its high levels of growth because it was catching up with past technological and business progress. So if some technology were to enable faster discovery of improved technological or process innovations, the effect would be like more advanced nations also being in "catch-up or higher growth mode". China also had higher rates of investment.

Technologies that could provide vast improvements in the ability to find optimal solutions

Giga-qubit and tera-qubit quantum computers: Dwave Systems could be making a breakthrough in quantum computers in 2008. This could change the rate of progress by enabling vastly superior molecular models of the physical world.

In terms of capital inputs, drastically reduced energy costs combined with vastly increased supplies of energy, and higher growth rates in energy supplies (say, from a breakthrough in nuclear fusion), could also provide a sustainable increase in the economic growth rate.

High-performance printable electronics and faster, cheaper reel-to-reel production could increase growth rates and capital production.

Reconfigurable phase change chips could allow for in place hardware to be improved on the fly as easily as a software update.

DNA nanotechnology and synthetic biology seem to be reaching new levels of capability and could provide a steady stream of innovations (synthetic life, more efficient bio-fuels, etc.) and enable enhancements to human health and performance (physical and mental). I would focus less on whether intelligence is enhanced and more on whether productivity is enhanced, and whether growth in productivity is sustainably improved (year after year there is some extra percentage improvement in productivity).

Continuing advances in robotics are a multiplier on human productivity. If robotic cars are able to convert commuting time into productive time for people, that would be a one-time 6-20% increase in productivity. There is a constant stream of successes in robotics and automation for handling human tasks (vacuuming, dish washing, factory robots, etc.). Robotics needs to break through more completely, with robots becoming capable and seamless assistants to people. The artificial general intelligence (AGI) scenario is when computers and AI can take over making faster innovations by themselves.

Wider and more successful adoption of the best business practices of the growth leading companies and industries combined with innovation and resource enhancing technologies should be able to sustain 10-20% growth rates even without AGI.

Urbanization and what a Higher Rate of Growth Means at the Industry, Corporate and Individual Level

For the current growth cycle of Industrialization, from 1903 to now, we are looking at almost seven doublings, a 128-fold increase. The developed world shifted from 80-90% rural to mostly urban. Countries like China that are catching up are seeing the same shift at an accelerated pace. People in cities are two to three times more productive on a per capita income basis.

So part of China's 8-12% growth comes from 1-2% of people in the countryside shifting to small and large cities each year. Those people adapt and are absorbed into the higher-productivity cities. So 2% of people are in a pipeline to becoming 300% more productive, with 300% more income; this is roughly a 6% boost to annual GDP growth. The overall 5-12% growth masks far larger shifts for smaller population segments, which propagate through the population.
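The arithmetic in this paragraph can be sketched as a toy model (the 2% migration rate and the 4x urban income multiple are the article's illustrative numbers; the baseline assumes average income near the rural level, which makes this an upper-bound sketch):

```python
def urbanization_gdp_boost(migrating_share=0.02, urban_income_multiple=4.0):
    """Extra annual GDP growth contributed by migrants who go from rural income r
    to urban income 4r ("300% more income"), relative to an all-rural baseline.
    The rural income r cancels out of the ratio."""
    return migrating_share * (urban_income_multiple - 1)

print(f"{urbanization_gdp_boost():.0%}")  # 6%
```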

This is similar to higher-growth companies and venture capital, where there is a portfolio of companies: several fail, but one is the ten- or hundred-bagger that raises the growth rate of the whole portfolio.

Looking over the long term of 105 years, it is a 100-fold boost in productivity: from $500 per person per year for farm workers and low-productivity industrial work (1903) to $50,000/year for white-collar work, information technologists, etc.

The next wave would be 100 times over 21-35 years. If people at the end are productive enough to justify $5,000,000/year, then many new industries and waves of new products and services would be needed. Something approaching nanofactories would be needed for that level of commerce and scale of productivity.
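The implied compound annual rates for a 100-fold productivity gain over these horizons can be checked directly (my arithmetic on the figures above):

```python
def rate_for_multiple(multiple, years):
    """Compound annual growth rate that yields `multiple` over `years` years."""
    return multiple ** (1 / years) - 1

for years in (105, 35, 21):
    print(f"100x over {years} years = {rate_for_multiple(100, years):.1%}/yr")
# 105 years = 4.5%/yr, 35 years = 14.1%/yr, 21 years = 24.5%/yr
```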

Looking at each 3-5 year doubling and 6-10 year quadrupling: every cycle, a company either gets twice and then four times as big, or it stays closer to the same size and trims down its workforce. That workforce then becomes part of the new companies that make up the increased economic size. IBM workers laid off while IBM stayed the same size went on to spawn the Intels, Ciscos and Microsofts; the next wave would do the same at a faster pace.

Economic Growth Models

Some background definition and theory:
The Exogenous growth model, also known as the Neo-classical growth model or Solow growth model is a term used to sum up the contributions of various authors to a model of long-run economic growth within the framework of neoclassical economics.

Total Factor Productivity (TFP) addresses any effects on total output not caused by the measured inputs.

The equation below (in Cobb-Douglas form) represents total output (Y) as a function of total-factor productivity (A), capital input (K), labor input (L), and the two inputs' respective shares of output (α is the capital input share of contribution).

Y = A × K^α × L^(1−α)

Technology Growth and Efficiency are regarded as two of the biggest sub-sections of Total Factor Productivity, the former possessing "special" inherent features such as positive externalities and non-rivalness which enhance its position as a driver of economic growth.
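A minimal sketch of the Cobb-Douglas production function above, with illustrative numbers only:

```python
def cobb_douglas(A, K, L, alpha):
    """Total output Y = A * K^alpha * L^(1-alpha)."""
    return A * K ** alpha * L ** (1 - alpha)

Y0 = cobb_douglas(A=1.0, K=100.0, L=50.0, alpha=0.3)

# Doubling TFP (A) doubles output with inputs unchanged...
assert abs(cobb_douglas(2.0, 100.0, 50.0, 0.3) - 2 * Y0) < 1e-6
# ...and doubling both K and L also doubles output (constant returns to scale),
# which is why sustained growth per person must come from A, i.e. technology.
assert abs(cobb_douglas(1.0, 200.0, 100.0, 0.3) - 2 * Y0) < 1e-6
```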

Removing Human Population as a Limiting Factor in Growth

Robin Hanson describes how unlimited automation would remove the limitations of human population from the growth equations.

While machines have sometimes displaced human workers, they have much more often helped humans be more productive at tasks that machines cannot do. Machines have thus on net raised the value, and hence the cost, of human labor. And because people are essential, the limited rate of human population growth has limited the economic growth rate.

Once we have machines that can do almost all the tasks that people can do, however, this picture changes dramatically. Since the number of machines can grow as fast as the economy needs them, human population growth no longer limits economic growth. In fact, simple growth models which assume no other changes can easily allow a new doubling time of a month, a week, or even less.

Now admittedly, progress in robotics and artificial intelligence has been slow over the decades, primarily because it is so hard to write the software. And at these rates it could be centuries before we have software that can do almost all tasks that people do. The “upload” approach, however, of scanning human brains then simulating them in detail in computers, seems likely to succeed within the next half century or so.

The transition from farming to industry seems to have been more gradual than the transition from hunting to farming. Even such a “gradual” transition, however, would be very dramatic. Assume that a new transition was as gradual as the one to industry, and that the world economic growth rate was six percent in both 2039 and 2040, plus or minus a typical yearly fluctuation of half a percent.

If so, then in 2041, the increase in the growth rate might be the size of a typical fluctuation, and then in 2042 the growth rate would be a noticeably different eight percent. Growth would then be 14% in 2043, 50% in 2044, 150% in 2045, and 500% in 2046. Within five years the change would go from barely noticeable to overwhelming.
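Compounding Hanson's illustrative sequence of yearly growth rates shows how fast the change snowballs; this is just arithmetic on the quoted figures, with 2041's "typical fluctuation" increase assumed to be 6.5%:

```python
# Annual growth rates for 2039 through 2046, from the passage above
# (2041's value is an assumption: 6% plus a half-percent fluctuation).
rates = [0.06, 0.06, 0.065, 0.08, 0.14, 0.50, 1.50, 5.00]

gdp = 1.0
for r in rates:
    gdp *= 1 + r
print(round(gdp, 1))  # roughly a 33-fold increase in world output in eight years
```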

Femtosecond laser can change any metal to any color

Using a tabletop femtosecond laser powered from a regular wall electrical outlet, University of Rochester optical scientists can change any metal to have any color. Pictured is Guo in his lab at the Institute of Optics at the University of Rochester (photo credit: Richard Baker, University of Rochester). This technique should also be adaptable to future rapid manufacturing and rapid prototyping.

Chunlei Guo, the researcher who a year ago used intense laser light to alter the properties of a variety of metals to render them pitch black, has pushed the same process further in a paper in today's Applied Physics Letters. He now believes it's possible to alter the properties of any metal to turn it any color—even multi-colored iridescence like a butterfly's wings.

Gold Aluminum, Blue Titanium, Gold Platinum (photo credit Richard Baker, University of Rochester)

Guo and his assistant, Anatoliy Vorobeyv, use an incredibly brief but incredibly intense laser burst that changes the surface of a metal, forming nanoscale and microscale structures that selectively reflect a certain color to give the appearance of a specific color or combinations of colors.

Guo's black metal, with its very high absorption properties, is ideal for any application where capturing light is desirable. The potential applications range from making better solar energy collectors, to more advanced stealth technology, he says. The ultra-brief/ultra-intense light Guo uses is produced by a femtosecond laser, which produces pulses lasting only a few quadrillionths of a second. A femtosecond is to a second what a second is to about 32 million years. During its brief burst, Guo's laser unleashes as much power as the entire electric grid of North America does, all focused onto a spot the size of a needlepoint.
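The femtosecond analogy in that paragraph checks out numerically:

```python
# 1 femtosecond is to 1 second as 1 second is to 1e15 seconds; in years:
seconds_per_year = 365.25 * 24 * 3600
ratio_in_years = 1e15 / seconds_per_year
print(round(ratio_in_years / 1e6, 1))  # ~31.7 million years
```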

The intense blast forces the surface of the metal to form nanostructures: pits, globules, and strands that respond to incoming light in different ways depending on how the laser pulse sculpted them. Since the structures are smaller than the wavelength of light, the way they reflect light is highly dependent upon their specific size and shape, says Guo. Varying the laser intensity, pulse length, and number of pulses allows Guo to control the configuration of the nanostructures, and hence control what color the metal reflects.

To alter an area of metal the size of a dime currently takes 30 minutes or more, but the researchers are working on refining the technique. Fortunately, despite the incredible intensity involved, the femtosecond laser can be powered by a simple wall outlet, meaning that when the process is refined, implementing it should be relatively simple.


Enhanced Oil Recovery part of getting 17 million more barrels per day of oil in North America

Enhanced oil recovery already supplies 650,000 barrels per day of oil in the USA. It could scale to 2-3 million barrels per day by 2030.

CO2-EOR (Enhanced oil Recovery) is already being applied to selected, geologically favorable oil reservoirs with access to affordably priced natural and industrial sources of CO2. Based on the latest (April, 2004) Oil and Gas Journal’s enhanced oil recovery survey, approximately 206,000 barrels per day is being produced domestically from the application of CO2-EOR, with the bulk of this oil production coming from the Permian Basin. Another 102,000 barrels per day is produced using hydrocarbon miscible and flue gas immiscible enhanced oil recovery from fields that would be amenable to CO2-EOR should affordable supplies of CO2 become available. Finally, application of thermal EOR technology (TEOR), primarily in the large heavy oil fields of California, provides 346,000 barrels per day.
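The three production streams quoted sum to the roughly 650,000 barrels per day cited above:

```python
co2_eor = 206_000        # CO2-EOR, mostly Permian Basin
other_gas_eor = 102_000  # hydrocarbon miscible / flue gas immiscible EOR
thermal_eor = 346_000    # thermal EOR, mostly California heavy oil

total = co2_eor + other_gas_eor + thermal_eor
print(total)  # 654000 barrels per day
```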

California will be shifting to steam-based thermal EOR to access seven times more shallow heavy oil (2.45 billion barrels instead of 350 million barrels). To access deeper heavy oil, there is work in Canada and the USA to develop more advanced technologies involving horizontal wells, low-cost immiscible CO2, and advanced thermal EOR that could significantly increase the recovery of this otherwise “stranded” oil.

A 2006 consideration of what is economic for CO2-EOR oil has a $35 per barrel oil assumption. More oil becomes economic at higher prices. It seems safe to assume that $35/barrel oil prices will be sustained to justify accessing more of the oil.

More “advanced” CO2-EOR and other EOR technologies, such as gravity stable CO2 injection and horizontal wells, could improve the recovery efficiency of “stranded” oil from domestic reservoirs. Miscibility enhancers, conformance control agents, and advanced immiscible CO2-EOR technology could extend the application of CO2-EOR to reservoir and basin settings currently excluded from further development. Extending these technologies to recovery of “residual oil in the transition zone” (ROZ) would add additional volumes of recoverable oil. Successful pursuit of advanced EOR technology will be central to achieving the 70% national oil recovery efficiency goal established by DOE/FE for its oil technology R&D program.

North America could have 17 million barrels per day of oil with an all-out push for oil. This does not include a reassessment of the Bakken oil formation.

Getting more of the oil that is under the ground

How much oil could be accessed in which states?

Here is a 64-page report from the US Department of Energy on undeveloped oil.

A 120-page DOE report on game-changing technology for enhanced oil recovery.

Five potential “next-generation” advances in CO2-EOR technology, namely:

1. Increasing the volume of injected CO2 to 1.5 hydrocarbon pore volumes (HCPV), considerably beyond what has been traditionally used.

2. Examining innovative flood design and well placement options for contacting and producing the higher oil-saturated (less efficiently waterflood swept) portions of the reservoir, often containing the bulk of the ”stranded” oil. This would include adding new horizontal and vertical wells targeting selected reservoir strata and using gravity-stable CO2-EOR process designs (in steeply dipping and domed oil reservoirs) to increase overall reservoir contact and oil displacement by the injected CO2.

3. Improving the viscosity of the injected water to reduce the mobility ratio between the injected CO2/water and the reservoir’s oil to reduce viscous fingering of the CO2 through the mobilized oil bank.

4. Adding “miscibility enhancers” to extend miscible CO2-EOR to additional oil reservoirs that would otherwise be produced by the less efficient immiscible CO2-EOR process.

5. Finally, using the full combination of “next generation” CO2-EOR technologies, which involves injecting higher volumes of CO2, adopting innovative CO2 flood and well design, and adding mobility control, to bring about “game changer” increases in oil recovery efficiency from favorable domestic oil reservoirs.

Costs of enhanced oil recovery versus regular methods: regular methods get 10% of the oil and enhanced methods get 47% in this case. The DOE target is to get at 70% of the original oil in place (%OOIP).

This 2006 DOE press release has links to the research reports and summarizes the findings in terms of billions of barrels to be made accessible.

A counter position lists complaints against these targets. I would note that the peak oil position wants to have it both ways: that oil prices will be too high and we will face economic ruin and societal collapse, and that if the oil is developed, we cannot make the new oil efforts in time or without environmental damage. I would say that there will be a balance. If oil prices are high enough (economically and to society), then more costs will be incurred to get at the oil. Getting at the oil should be a transition phase while we get other sources of energy going, like nuclear (fission and fusion), wind, solar, geothermal, and more efficiency (using less energy).

Improving wind power

For the same wind velocity, FloDesign’s Mixer Ejector Wind Turbine (MEWT), with a maximum diameter 50% smaller than an existing 3-bladed regular wind turbine, can potentially generate over 50% more power, and can potentially cost 25-35% less than the same conventional wind turbine (horizontal-axis wind turbine, HAWT).
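A back-of-envelope implication of the quoted claims (my arithmetic, not FloDesign's numbers): a 50% smaller maximum diameter means one quarter the frontal area, so 50% more power implies roughly six times the power per unit of frontal area.

```python
diameter_ratio = 0.5   # MEWT max diameter vs. a conventional HAWT
power_ratio = 1.5      # "over 50% more power" at the same wind velocity

area_ratio = diameter_ratio ** 2          # frontal area scales with diameter squared
power_density_ratio = power_ratio / area_ratio
print(power_density_ratio)  # 6.0
```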

Hat tip to Al fin for this and the Aerogenerator info later in this article.

FloDesign Wind Turbine used advanced aerospace technology to develop a unique, state-of-the-art wind power machine called the Mixer Ejector Wind Turbine. The MEWT machine uses cambered ringed airfoils (shrouds) surrounding a stator-rotor turbine cascade design, and an efficient mixer/ejector pump, to produce more energy than a HAWT system from any wind at any site location. The cambered shrouds act similarly to an aircraft wing when landing: the camber produces low pressure on the shroud's inside surface, which sucks more wind flow into the turbine. The same low pressure on a wing would produce more aircraft lift for landing or taking off.

The low inertia, smaller rotor blades spin faster and provide more energy extraction at both lower and higher wind speeds. The shrouded blades and higher rotor speeds also reduce gear box complexity and result in quieter, safer wind turbines.

Here is a ten-page research paper that describes the new FloDesign system.

An MEWT can produce much higher power levels at higher annual mean wind speeds—such as encountered in off-shore applications for example.

FloDesign’s MEWT machine also delivers many additional valuable benefits such as:
• Significant load shift from the rotating to static parts
• Earlier, easier startup
• Minimization or elimination of rotor stall complications
• More robust, easier to manufacture blades
• Reduction of gearing requirements
• Reduced sensitivity to wind incidence or gusts
• Quieter and safer design
• Lower first and life costs

Another proposed new wind generator is the vertical Aerogenerator. It could be up to 144 meters tall, should have lower maintenance costs, and could generate up to 9MW.

It will be at least 2013 before we see Aerogenerators as powerful as 9MW.

I do not think the Aerogenerator has enough advantages to be the dominant wind turbine design. 9MW is not enough, as conventional horizontal systems can be made 10MW or larger by using superconducting wire to reduce component size.

I also believe kitegen can work better.

Dubai's current and future wonders: biggest skyscrapers, man made islands but also palaces and mansions developed by Tiger Woods and Dubailand

There will be snowboarding in a Dubailand dome, part of a massive larger development shown in an architectural drawing. DUBAILAND will cover an area of 3 billion square feet and have a population of 2.5 million people (tourists, workers and residents) once fully operational. The development will have multiple theme parks, culture & art, science & planetariums, sports & sports academies, wellbeing & health, shopping & retail, and resorts & hotels.

A section of the Dubailand development is Tiger Woods' new golf course. Tiger Woods plans to build his own 16,500 square-foot (1,533 square meter) mansion on the estate, where the 287 homes will sell for between $12 million and $23 million. Dubai is east of Saudi Arabia and is a big duty-free shopping destination (a duty-free country) and playground for the rich.

One of the Tiger Woods Dubai homes. Many more will be custom built.

The Tiger Woods Dubai will include 21 palaces, 75 mansions, 100 villas, 90 leased villas, a boutique hotel, a golf academy, a clubhouse and the Al Ruwaya golf course.

There will be a 60,000 square foot clubhouse.

More houses and information are at the Tiger Woods Dubai residence site.

There are many massive projects going on in Dubai.

Dubailand map

A separate development is the Bawadi, the longest strip of hotels: 51 hotels along 10 kilometers (6 miles). Each hotel will be individually themed.

Dubai already has the world's tallest skyscraper almost completed and has another even taller one in the works.

The two tallest buildings in this picture are the proposed Dubai tower and the one that is being completed. The third is the current tallest, the Taipei 101 building, followed by the Empire State Building and the Eiffel Tower.

Dubai is also building the largest artificial islands. The largest island/peninsula group is the Palm Islands in Dubai, and the largest of those is the Palm Deira, which will be 80 square kilometers; Manhattan is 64 square kilometers.

Dubai Palm Deira

Dubai is part of the United Arab Emirates, which borders Saudi Arabia. Dubai lies east of Saudi Arabia, along the Persian Gulf.

Revenues from petroleum and natural gas contribute less than 6% (2006) of Dubai's US$ 37 billion economy (2005). A majority of the emirate's revenues are from the Jebel Ali free zone authority (JAFZA) and, increasingly, from tourism and other service-oriented businesses.

January 31, 2008



Artificial letters added to the four natural DNA bases

Two artificial DNA "letters" that are accurately and efficiently replicated by a natural enzyme have been created by US researchers. Adding the two artificial building blocks to the four that naturally comprise DNA could allow wildly different kinds of genetic engineering, they say.

This combines with previous articles about using DNA to assemble millions of three-dimensional nanoparticles, the ability to synthesize strings of DNA over 500,000 base pairs long, and programmable molecular DNA construction.

I would say that the combined work indicates that we are completely within the age of DNA nanotechnology (using DNA for programmatic molecular control and construction).

As these processes are further mastered, there is the potential to quickly scale up to rapid construction of many trillions of components (various nanoparticles and strands of DNA).

The ability to scale these approaches is illustrated by the world’s first gene detection platform made up entirely of self-assembled DNA nanostructures. The other interesting aspect is to generalize the techniques used to rapidly create 100 trillion reactive and functional DNA components with easily readable results.

Frustrated by the slow pace of designing and synthesising potential new bases one at a time, Romesberg borrowed some tricks from drug development companies. The resulting large-scale experiments generated many potential bases at random, which were then screened to see if they would be treated normally by a polymerase enzyme.

With the help of graduate student Aaron Leconte, the group synthesized and screened 3600 candidates. Two different screening approaches turned up the same pair of molecules, called dSICS and dMMO2.

The molecular pair that worked surprised Romesberg. "We got it and said, 'Wow!' It would have been very difficult to have designed that pair rationally."

But the team still faced a challenge. The dSICS base paired with itself more readily than with its intended partner, so the group made minor chemical tweaks until the new compounds behaved properly.

"We probably made 15 modifications," says Romesberg, "and 14 made it worse." Sticking a carbon atom attached to three hydrogen atoms onto the side of dSICS, changing it to d5SICS, finally solved the problem. "We now have an unnatural base pair that's efficiently replicated and doesn't need an unnatural polymerase," says Romesberg. "It's starting to behave like a real base pair."

The team is now eager to find out just what makes it work. "We still don't have a detailed understanding of how replication happens," says Romesberg. "Now that we have an unnatural base pair, we are continuing experiments to understand it better."

In the near future, Romesberg expects the new base pairs will be used to synthesize DNA with novel and unnatural properties. These might include highly specific primers for DNA amplification; tags for materials, such as explosives, that could be detected without risk of contamination from natural DNA; and building novel DNA-based nanomaterials.

More generally, Romesberg notes that DNA and RNA are now being used for hundreds of purposes: for example, to build complex shapes, build complex nanostructures, silence disease genes, or even perform calculations. A new, unnatural, base pair could multiply and diversify these applications.

The most challenging goal, says Romesberg, will be to incorporate unnatural base pairs into the genetic code of organisms. "We want to import these into a cell, study RNA trafficking, and in the longest term, expand the genetic code and 'evolvability' of an organism."

Scripps Research Institute

Romesberg Lab

Expanding the genetic alphabet

Unnatural base design and characterization

The Kool research group

DNA used to assemble and glue a 3D structure of one million 15 nm gold nanoparticles

DNA was used to build a three-dimensional structure out of 15 nanometer gold nanoparticles. The gold nanoparticles are the bricks and the DNA is scaffold and mortar. Three-dimensional nanoparticle arrays are likely to be the foundation of future optical and electronic materials.

The novel part of the work is that the researchers use DNA to drive the assembly of the crystal. Changing the DNA strand’s sequence of As, Ts, Gs and Cs changes the blueprint, and thus the shape, of the crystalline structure.

“We are now closer to the dream of learning, as nanoscientists, how to break everything down into fundamental building blocks, which for us are nanoparticles, and reassembling them into whatever structure we want that gives us the properties needed for certain applications,” said Chad A. Mirkin, one of the paper’s senior authors and George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences, professor of medicine and professor of materials science and engineering. In addition to Mirkin, George C. Schatz, Morrison Professor of Chemistry, directed the work.

Giving nanoparticles arms of carefully designed DNA can let them assemble themselves into complex 3D structures (Image: Nature)

Using the extremely brilliant X-rays produced by the Advanced Photon Source synchrotron at Argonne National Laboratory in combination with computational simulations, the research team imaged the crystals to determine the exact location of the particles throughout the structure. The final crystals have approximately 1 million nanoparticles.

Mirkin, Schatz and their team just used one building block, gold spheres, but as the method is further developed, a multitude of building blocks of different sizes can be used -- with different composition (gold, silver and fluorescent particles, for example) and different shapes (spheres, rods, cubes and triangles). Controlling the distance between the nanoparticles is also key to the structure’s function.

“Once you get good at this you can build anything you want,” said Mirkin, director of Northwestern’s International Institute for Nanotechnology.

Mirkin says that he and his team are just getting started. "To me, it's really only the start rather than the ending," he says. Over the past three years, Mirkin's group has been demonstrating methods to place different DNA linkers on different faces of nonspherical particles, such as triangle-faced prisms and virus particles. That, he says, should enable programming of more complex materials with repeating patterns of three or more components. "The really intriguing possibility here is the ability to program the formation of any structure you want," says Mirkin.
Stroud says that the structures already produced will be useful as the DNA-programmed assembly is extended to particles other than gold. Applications could include photonic crystals, in which the precise periodicity of particles can tune the overall materials to manipulate specific wavelengths of light, and photovoltaics that capture a broader range of the solar spectrum.

The structures are highly porous--10 percent particles and DNA and 90 percent water. That could hinder applications in which water is undesirable. Drain out the water, and the crystals collapse. Gang says that one could stabilize the crystals by filling the lattice with a polymer, but he is also exploring alternate stabilization schemes that would preserve the lattice's open space.

This work is the cover story of the January 31, 2008 issue of Nature.

New Scientist coverage of DNA construction

The article in nature on DNA assembly of nanoparticles

Mirkin research publications

Chad Mirkin Research group site

Carnival of Space #39

January 30, 2008

SpaceX progress to Falcon 9

SpaceX test-fired two Merlin 1C engines at full power while attached to a rocket strapped to the launch pad. SpaceX offers the possibility of less expensive US rockets and a possible replacement for the Space Shuttle after its planned 2010 retirement.
UPDATE: SpaceX successfully launched the first Falcon 9 on June 4, 2010. Nextbigfuture covered the launch live.

SpaceX falcon 9 test hold down test firing

The achievement clears the way for more multiple-engine tests and a Falcon 9 test flight scheduled for later in 2008. The final Falcon 9 design calls for nine Merlin engines generating over 450,000 kg (1 million pounds) of thrust, or four times the maximum thrust of a 747 aircraft. SpaceX plans to steadily increase the number of simultaneously firing engines on the rocket over the next few months. A three-engine test is scheduled for February, to be followed by a five, seven, and finally a nine-engine test.

Current and expected heavy lift launch systems are compared at wikipedia

The Falcon 9, if successful, should be able to launch for about $3,200/kg to LEO, slightly cheaper than the well-proven Proton. To GTO, the Falcon 9 ($7,500/kg) would be less than half the price of the Proton ($18,359/kg). It would be roughly three times cheaper than the Shuttle to LEO and seven times cheaper to GTO.
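A quick sketch of the cost ratios implied by these figures. Only the Falcon 9 and Proton prices are given in the text; the Shuttle numbers are back-calculated from the stated "three times" and "seven times" ratios, so they are inferences, not published prices.

```python
# Launch-cost comparison using only the per-kilogram figures quoted above.
falcon9_leo = 3_200    # $/kg to LEO, from the article
falcon9_gto = 7_500    # $/kg to GTO, from the article
proton_gto = 18_359    # $/kg to GTO, from the article

gto_ratio = proton_gto / falcon9_gto
print(f"Proton is {gto_ratio:.2f}x the Falcon 9 price per kg to GTO")

# Shuttle prices implied by the stated ratios (inferred, not quoted):
shuttle_leo_approx = falcon9_leo * 3   # ~$9,600/kg to LEO
shuttle_gto_approx = falcon9_gto * 7   # ~$52,500/kg to GTO
```

The GTO ratio comes out near 2.4x, which is why "less than half the price" is the accurate reading of the two quoted figures.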

Wikipedia on Spacex plans and follow on to Falcon 9.

Q4 2008: Demonstration flight of Falcon 9 from Cape Canaveral
Q4 2008: Demo flight 1 of Falcon 9 for NASA Commercial Orbital Transportation Services (COTS) program
Q4 2008: Launch of MDA Corp. payload on Falcon 9[2]
Q2 2009: Launch of HYLAS for Avanti Communications. First geosynchronous launch.
Q2 2009: Demo flight 2 of Falcon 9 for NASA COTS program, 2nd stage becomes a rendezvous target for the Dragon capsule
Q3 2009: Demo flight 3 of Falcon 9 for NASA COTS program, demonstration of cargo delivery to the International Space Station
Q1 2010: Launch of Bigelow Aerospace prototype inflatable space station module on Falcon 9

Merlin 1C engine

SpaceX has officially completed development of the Merlin 1C engine, another major milestone for SpaceX. This new version of Merlin uses regenerative cooling, wherein the rocket grade kerosene propellant first flows around the combustion chamber and nozzle walls before igniting with the liquid oxygen in the thrust chamber. This active cooling allows for higher performance without significantly increasing engine mass, and represents a huge improvement over the ablatively cooled Merlin, which lofted the first two Falcon 1 flights.

Falcon 9 payload quarter fairing

In its Falcon 9 configuration, Merlin has a thrust at sea level of 95,000 lbs, a vacuum thrust of over 108,000 pounds, vacuum specific impulse of 304 seconds and sea level thrust to weight ratio of 92. A planned turbo pump upgrade in 2009 will improve the thrust by over 20% and the thrust to weight ratio by approximately 25%.
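Multiplying the per-engine figures above out to the nine-engine totals is a useful sanity check (a rough sketch using only numbers from the article; the "over 1 million pounds" figure quoted earlier corresponds to the nine-engine vacuum total, with rounding):

```python
# Merlin 1C figures from the article, per engine in the Falcon 9 configuration.
SEA_LEVEL_THRUST_LBF = 95_000   # sea-level thrust per engine
VACUUM_THRUST_LBF = 108_000     # vacuum thrust per engine
THRUST_TO_WEIGHT = 92           # sea-level thrust-to-weight ratio
N_ENGINES = 9                   # Falcon 9 first stage

total_sea_level_lbf = N_ENGINES * SEA_LEVEL_THRUST_LBF   # 855,000 lbf
total_vacuum_lbf = N_ENGINES * VACUUM_THRUST_LBF         # 972,000 lbf

# Planned 2009 turbopump upgrade: >20% more thrust, ~25% better T/W.
upgraded_thrust_lbf = SEA_LEVEL_THRUST_LBF * 1.20        # ~114,000 lbf
upgraded_tw = THRUST_TO_WEIGHT * 1.25                    # 115

print(f"Nine engines: {total_sea_level_lbf:,} lbf sea level, "
      f"{total_vacuum_lbf:,} lbf vacuum")
```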

Falcon 9 Heavy configuration

Falcon 9 rocket

Dragon crew capsule schematic
The company is also designing a crewed capsule, called the Dragon, that can carry up to seven people or a mixture of personnel and cargo. The SpaceX Dragon will be launched atop a Falcon 9 rocket and will be capable of docking with the International Space Station. SpaceX plans to have the capsule in service by 2009, in time for the retirement of NASA's shuttle fleet in 2010.

Dragon module in orbit graphic



Nanofibers in complex shapes and unlimited lengths

The continuous fabrication of complex, three-dimensional nanoscale structures and the ability to grow individual nanowires of unlimited length are now possible with a process developed by researchers at the University of Illinois.

Based on the rapid evaporation of solvent from simple “inks,” the process has been used to fabricate freestanding nanofibers, stacked arrays of nanofibers and continuously wound spools of nanowires. Potential applications include electronic interconnects, biocompatible scaffolds and nanofluidic networks.

They have fabricated freestanding nanofibers approximately 25 nanometers in diameter and 20 microns long, and straight nanofibers approximately 100 nanometers in diameter and 16 millimeters long (limited only by the travel range of the device that moves the micropipette).

The researchers drew nanofibers out of sugar, out of potassium hydroxide (a major industrial chemical) and out of densely packed quantum dots. While the nanofibers are currently fabricated from water-based inks, the process is readily extendable to inks made with volatile organic solvents, Yu said.

“Our procedure offers an economically viable alternative for the direct-write manufacture of nanofibers made from many materials,” Yu said. “In addition, the process can be used to integrate nanoscale and microscale components.”

Min-Feng Yu led the research

Beckman institute

Min Feng's page at the Beckman Institute, Nanoelectronics group

Min-Feng Yu research publications list

Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign

Center for Microanalysis of Materials website

Yu Research on Nanoscale Mechanics and Physics page: NOTE: slow page, not updated since 2006

Advanced Materials Journal will publish the article

January 29, 2008

Genetic testing Lab on a chip for less than $100 Canadian

Since a journal article was submitted to the Royal Society of Chemistry, the U of Alberta researchers have already made the processor and unit smaller and have brought the cost of building a portable unit for genetic testing down to about $100 Cdn. In addition, these systems are also portable and even faster (they take only minutes). Backhouse, Elliott and McMullin are now demonstrating prototypes of a USB key-like system that may ultimately be as inexpensive as standard USB memory keys that are in common use – only tens of dollars. It could help with Pandemic disease control and detecting and controlling tainted water supplies.

This development fits in with my belief that there should be widespread, inexpensive blood, biomarker and genetic tests to help catch disease early and to develop an understanding of biomarker changes for tracking disease and aging. We can also create adaptive clinical trials to shorten the development and approval process for new medical procedures.

The device is now much smaller than a shoe-box (USB-stick sized), with the optics and supporting electronics filling the space around the microchip.

Canadian scientists have succeeded in building the least expensive portable device for rapid genetic testing ever made. The cost of carrying out a single genetic test currently varies from hundreds to thousands of pounds, and the wait for results can take weeks. Now a group led by Christopher Backhouse at the University of Alberta, Edmonton, has developed a reusable microchip-based system that costs just £500 to build, is small enough to be portable, and can be used for point-of-care medical testing.

To keep costs down, 'instead of using the very expensive confocal optics systems currently used in these types of devices we used a consumer-grade digital camera', Backhouse explained.

The device can be adapted for use in many different genetic tests. 'By making small changes to the system you could test for a person's predisposition to cancer, carry out pharmacogenetic tests for adverse drug reactions or even test for pathogens in a water supply,' said Backhouse.

The heart of the unit, the ‘chip,’ looks like a standard microscope slide etched with fine silver and gold lines. That microfabricated chip applies nano-biotechnologies within tiny volumes, sometimes working with only a few molecules of sample. Because of this highly integrated chip (containing microfluidics and microscale devices), the remainder of the system is inexpensive ($1,000) and fast.

There are many possible uses for such a portable genetic testing unit:

Backhouse notes that adverse drug reactions are a major problem in health care. By running a quick genetic test on a cancer patient, for example, doctors might pinpoint the type of cancer and determine the best drug and correct dosage for the individual.

Or health-care professionals can easily look for the genetic signature for a virus or E. coli – also making it useful for testing water quality.

“From a public health point of view, it would be wonderful during an epidemic to be able to do a quick test on a patient when they walk into an emergency room and be able to say, ‘you have SARS, you need to go into that (isolation) room immediately.’ ”

A family doctor might determine a person’s genetic predisposition to an illness during an office visit and advise the patient on preventative lifestyle changes.

Microfabrication technologies research at the University of Alberta

Rapid genetic analysis

In collaboration with the Glerum Lab we have been developing microchip based implementations of genetic amplification (PCR - the polymerase chain reaction) and capillary electrophoresis (CE) that are extremely fast.

- Cancer diagnostics

- Cell manipulation on a chip

- On chip PCR (polymerase chain reaction)

- Single cell PCR

- DNA Sequencing

Progress to artificial gecko like wall climbing for people

Researchers at the University of California, Berkeley, have developed an adhesive that is the first to master the easy attach and easy release of the gecko's padded feet. The material could prove useful for a range of products, from climbing equipment to medical devices. One of my predictions from 2006 was that there would be artificial gecko-like wall climbing. [Gecko-mimicking wallcrawling suits for military and enthusiasts 2008-2012]

This collage illustrates gecko adhesion, from toes to nanostructures.
Credit: K. Autumn, Lewis and Clark College. Full resolution images are available for license, and require permission from Kellar Autumn for use (http://lclark.edu/~autumn).

A patch of the adhesive two centimeters on a side can support 400 grams (close to a pound). While tape sticks when it presses onto a surface, the new adhesive sticks as it slides along a surface and releases as it lifts -- this is the trick behind a gecko's speedy vertical escapes.

Therefore, if this scales linearly, then 300 square centimeters would support a 131 lb person. A 30 cm by 10 cm pad would be a largish shoe. 600 square centimeters would support a 262 lb person plus gear. 200 square centimeters per foot and hand would allow one limb to be moved while the other three kept contact with the wall.
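The scaling arithmetic above can be made explicit. Note an assumption here: the figures work out if the test patch is read as 2 square centimeters holding 400 g (200 g/cm^2); a 2 cm x 2 cm patch would instead give 100 g/cm^2 and halve all the weights. The loading density below is inferred from the article's own numbers, not stated in it.

```python
# Linear-scaling estimate for the gecko-inspired adhesive.
GRAMS_PER_CM2 = 400 / 2    # 200 g/cm^2, inferred from the article's arithmetic
GRAMS_PER_LB = 453.6       # grams per pound

def supported_weight_lb(area_cm2):
    """Weight in pounds a patch of the given area could hold, scaling linearly."""
    return area_cm2 * GRAMS_PER_CM2 / GRAMS_PER_LB

print(round(supported_weight_lb(300)))   # ~132 lb (article: 131 lb)
print(round(supported_weight_lb(600)))   # ~265 lb (article: 262 lb)
```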

The gecko-inspired adhesive can support significant weight. Increasing the load increases the contact area of the adhesive (the bright area near the top of the patch): as the load grows, more fibers are recruited into contact, strengthening the adhesion parallel to the surface. When the sliding force is removed, the fibers straighten and the patch releases with negligible pull-off force. The patch has demonstrated better than one-sixth of a real gecko's adhesion stress on the same glass surface.

Researchers have developed a directional adhesive, inspired by the gecko, using microfibers made from a hard polymer, polypropylene. The polymer fibers are 600 nanometers in diameter, just 1/100 the diameter of a human hair, and are formed by a casting process. Like the gecko, the synthetic microfiber array is not sticky except when fibers slide a small distance along a surface. While the present microfiber array works on smooth glass, future versions could be useful for medical equipment, sporting goods, or climbing robots where a directional and easy attach-release adhesive is needed.
Credit: J. Lee and R.S. Fearing, UC Berkeley

The current work is an improvement over the Gekkomat: it should not need air tanks for suction, and its contact pads can be smaller and less cumbersome.

Another step closer to large scale graphene electronics

Researchers from the University of Wollongong, New South Wales, have developed a new and better way of separating graphene sheets. Their process allows sheets to be kept apart in aqueous solution by electrostatic repulsion alone -- without the need for chemical stabilizers.

Graphene sheets have an extremely large surface area and non-bonding interactions can cause the sheets to stack together into graphite. Current ways of producing them involve carefully peeling away individual sheets from graphite - a process that is impractical on a large scale.

One alternative is to use chemicals to break down graphite oxide into graphene - but this has previously required surfactants and polymers to keep the individual sheets apart, preventing the graphene from being easily integrated into materials or devices.

Director of the ARC Centre of Excellence for Electromaterials Science (ACES), Professor Gordon Wallace, said that this low-cost approach offers the potential for large-scale production of stable graphene colloids that can be processed using well-established solution-based techniques -- such as filtration or spraying -- to make conductive films.

Professor Wallace said results already indicated that the discovery would lead to advances in energy conversion (new transparent electrodes for solar cells), energy storage (new electrodes for batteries, especially flexible batteries) and new electrodes in medical bionics.

“In addition to antistatic coatings, these materials are expected to have applications in flexible transparent electronics, high-performance composites and nanomedicine,” he said.

PhD student Benjamin Mueller holds a solution of graphene oxide, watched by fellow research team members Dr Dan Li and Professor Gordon Wallace

'The method proposed in this paper should allow easier production of high quality graphene,' Kostya Novoselov of the University of Manchester's mesoscopic physics group in the UK, told Chemistry World. 'There are many possibilities for this, such as making transparent electrodes for LCD displays. At the moment we can only make small displays with graphene, but using this method we could potentially make full-scale displays.'

The team have filed a patent on their new process and are continuing to study the fundamental properties of graphene and investigating its potential in energy conversion and storage.

Meanwhile, another study published earlier this month reports a new chemical technique to make strips of graphene or 'carbon nanoribbons'. Hongjie Dai and colleagues at Stanford University, US, first loosened layers of graphene from graphite by heating it to 1000°C for a minute in 3 per cent hydrogen in argon gas. The team then broke up the graphene into strips using ultrasound. Nanoribbons made in this way have much 'smoother' edges than those produced by traditional lithographic methods, the researchers say.

Bakken oil field is highly profitable for Petrobank

Besides EOG Resources, there are other players developing the Bakken oil resource, and the economics of the Bakken play are attractive. At least four wells per section can be drilled. Petrobank reports that drilling and completion costs are approximately $1.7 million per well, and that according to its independent reserve evaluator, proved plus probable reserves are 100,000 barrels of oil per well -- representing less than 10 percent recovery of original oil-in-place, well below the company's internal estimates of well potential. This leaves considerable upside potential for improved recoveries.

Of the eight horizontal wells drilled in 2006 by Petrobank, seven were successful. The first four producers came on-stream at over 250 bopd and, in the first three months of production, have already produced more than 12,000 barrels per well. Each of these successful wells has yielded at least three follow-up development locations to be drilled through 2007.

So most wells are profitable after 6 months.
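A rough payback estimate supports this claim. The well cost and first-quarter production come from the figures above; the netback is an assumption, taken near the $62.71/bbl that Crescent Point reported for its Bakken oil (quoted later in this post), and a flat production rate is assumed even though real wells decline.

```python
# Back-of-the-envelope payback time for a Bakken well.
WELL_COST = 1_700_000        # drilling + completion, $ per well (from the article)
FIRST_QUARTER_BBL = 12_000   # barrels produced in the first 3 months (from the article)
NETBACK = 60                 # $/bbl, assumed

monthly_revenue = FIRST_QUARTER_BBL * NETBACK / 3    # $240,000 per month
months_to_payback = WELL_COST / monthly_revenue
print(f"~{months_to_payback:.0f} months to pay back the well cost")
```

At these assumptions the payback works out to roughly seven months, consistent with the article's ballpark of about half a year.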

Petrobank’s Canadian Business Unit production now exceeds 17,000 boepd including more than 12,200 boepd of high netback, Bakken production. Petrobank now has an inventory of 540 net Bakken locations based on a drilling density of only four wells per section, and we plan to drill 154 of these locations in 2008, which we expect will make Petrobank the most active operator in the play.

At an average of 250 bopd from each location, that would be about 52,000 bopd by the end of 2008 and 147,000 bopd by about 2010 for this one company's current holdings. Some of the wells have been coming in strong at 1,000-2,000 bopd.
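The projection above can be reproduced from the stated figures. As in the article's own estimate, this ignores decline rates, so the totals are optimistic upper bounds; the article's end-of-2008 figure of 52,000 is slightly above the straight multiplication, presumably from rounding or partner wells.

```python
# Reproducing Petrobank's Bakken production projection from the article's figures.
EXISTING_BOPD = 12_200      # current Petrobank Bakken production
BOPD_PER_WELL = 250         # average rate assumed per well
WELLS_2008 = 154            # locations planned for drilling in 2008
TOTAL_LOCATIONS = 540       # full inventory of net Bakken locations

end_of_2008 = EXISTING_BOPD + WELLS_2008 * BOPD_PER_WELL          # 50,700 bopd
all_locations = EXISTING_BOPD + TOTAL_LOCATIONS * BOPD_PER_WELL   # 147,200 bopd
print(f"End of 2008: ~{end_of_2008:,} bopd; all locations: ~{all_locations:,} bopd")
```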

Petrobank is also a leader in developing the THAI (Toe to heel air injection) oil sand recovery process. They are developing a 100,000 bpd site using THAI.

Carbon nanotube based drug could be 5000 times more effective for acute radiation treatment

A carbon nanotube based drug has shown in preliminary tests that it is more than 5,000 times more effective at reducing the effects of acute radiation injury than the best drugs currently available. Earlier results had already shown that gene therapy can increase resistance to radiation.

"More than half of those who suffer acute radiation injury die within 30 days, not from the initial radioactive particles themselves but from the devastation they cause in the immune system, the gastrointestinal tract and other parts of the body," said James Tour, Rice's Chao Professor of Chemistry, director of Rice's Carbon Nanotechnology Laboratory (CNL) and principal investigator on the grant. "Ideally, we'd like to develop a drug that can be administered within 12 hours of exposure and prevent deaths from what are currently fatal exposure doses of ionizing radiation."

To form Nanovector Trojan Horses (NTH), Rice scientists coat nanotubes with two common food preservatives -- the antioxidant compounds butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) -- and derivatives of those compounds. "The same properties that make BHA and BHT good food preservatives, namely their ability to scavenge free radicals, also make them good candidates for mitigating the biological effects that are induced through the initial ionizing radiation event," Tour said. In preliminary tests at M.D. Anderson in July 2007, mice showed enhanced protection against lethal doses of ionizing radiation when they were given first-generation NTH drugs prior to exposure. Tour said the researchers are also interested in finding out whether the new drugs can prevent the unwanted side effects that cancer patients suffer after undergoing radiation therapy.

January 28, 2008

New polymer for lower cost computer chips and compatible with lithography and nanoimprinting

Researchers at Rensselaer Polytechnic Institute and Polyset Company have developed a new inexpensive, quick-drying polymer that could lead to dramatic cost savings and efficiency gains in semiconductor manufacturing and computer chip packaging. The compatibility with lithography (the current process for making computer chips) and nanoimprinting (a likely future method of computer chip making with smaller features) with the same material will allow for easier, more flexible and cheaper development of future computer chips.

Chip manufacturers will be able to trim several steps from their production and packaging processes.

The widely adopted technique of photolithography involves using a mix of light and chemicals to generate intricate micro- and nano-scale patterns on tiny areas of silicon. As part of the process, a thin polymer film -- called a redistribution layer, and crucial to the effectiveness of the device -- is deposited onto the silicon wafer to reduce signal propagation delay and to protect the chip from environmental and mechanical stresses. The new PES material can also be used as a thin polymer film for ultraviolet (UV) on-chip nanoimprint lithography, a technology still in the early phases of development.

PES cures, or dries and hardens, at 165 degrees Celsius, about 35 percent cooler than competing materials. The need for less heat should translate directly into lower overhead costs for manufacturers, Lu said. Another advantage of PES is its low water uptake rate of less than 0.2 percent, lower than that of the other materials.

Further Bakken Formation news

The Bakken Formation is being hailed as the most significant find since the Pembina Cardium play discovered in Alberta in 1957.

For those in need of a history lesson, Pembina's reserves were estimated to contain 7.8 billion barrels of oil, of which 1.6 billion were recoverable. To date, more than 1.2 billion barrels have been produced and it's still going strong. Big oil, say the engineering types, gets bigger.

What they mean is simply that when companies start to develop these big pools of reserves, they tend to find more.

By extension, then, the current estimate of the Bakken play containing three billion barrels is likely on the low side because the limits of the formation have yet to be determined. [Three billion barrels is also only the current conservative estimate for the Canadian portion, which covers 25% of the formation's land area and an unknown share of its oil]

Crescent Point Energy Trust -- which earlier this week announced a $370-million deal to increase its stake in the play by buying privately held Landex Petroleum -- said its netbacks were $62.71 per barrel on the Bakken oil it produced in the third quarter. In addition to Crescent Point -- which made two acquisitions in 2007 to expand its asset base -- Petrobank Resources and TriStar Oil and Gas have also been busy shoring up positions in the area through a series of deals. Petrobank bought Peerless Energy in late 2007 for its Bakken assets and TriStar bought Bulldog Resources for the same reason.

Unlike the Pembina formation, which has been developed using standard vertical wells, the Bakken play requires the use of horizontal wells.

These tend to cost about twice as much as a conventional well.

In the past 12 months, a Calgary-based company called Packers Plus has cracked the nut on how to get at these more challenging formations using horizontal drilling techniques.

More specifically, it used to be that the exploitation of these tight reservoirs -- whether gas or oil -- was dependent on where the natural fracturing in the reservoir occurred. Packers Plus has developed a technology that allows companies to control where the fracturing takes place, avoid water, and access multiple zones through the well bore.

"The technology has increased recovery rates by 50 per cent," said Tristone Capital's Chris Theal.

Petrobank Energy and Resources Ltd. (TSX:PBG) says its Canadian unit will be the most active player in southeast Saskatchewan‘s Bakken trend in 2008, with drilling in 135 locations.

Petrobank said early Tuesday its Canadian unit is currently producing more than 11,000 barrels of oil equivalent a day and drilled or participated in 100 wells in the Bakken light oil play in Saskatchewan last year.

For 2008, the company will drill 135 Bakken wells, making the company the most active player in the area. In addition to these wells, Petrobank expects to take part in a further 38 Bakken wells with partners this year.

In late November, Petrobank struck a deal to acquire Peerless Energy Inc., in a move that will boost Petrobank‘s current conventional Canadian production to about 16,600 oil equivalent barrels a day.

A new USGS study of Bakken is expected to be released April, 2008.

Marathon fact sheet on Bakken, Sept 2006
NY times on Bakken in North Dakota
North Dakota news from the state government on Bakken

Peakoil message board on Bakken

World Oil reserves on wikipedia

Country                 Reserves      Production    Reserve life
                        (10^9 bbl)    (10^6 bpd)    (years)
Saudi Arabia            260           8.8           81
Canada                  179           2.7           182
Iran                    136           3.7           101
Iraq                    115           2.2           143
Kuwait                  99            2.5           108
United Arab Emirates    97            2.5           107
Venezuela               80            2.4           91
Russia                  60            9.5           17
Libya                   41.5          1.8           63
Nigeria                 36.2          2.3           43
United States           21            4.9           12
Mexico                  12            3.2           10

1. Estimated reserves in billions (10^9) of barrels. (Source: Oil & Gas Journal, January 2007)
2. Production rate in millions (10^6) of barrels per day. (Source: US Energy Information Administration, September 2007)
3. Reserve life in years, calculated as reserves divided by annual production from the figures above.
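The reserve-life column can be recomputed directly from the other two, as footnote 3 describes (a spot check on a few rows; all figures from the table above):

```python
# Reserve life = reserves / annual production.
# Reserves in 1e9 barrels, production in 1e6 barrels per day.
reserves_and_production = {
    "Saudi Arabia": (260, 8.8),
    "Canada": (179, 2.7),
    "Russia": (60, 9.5),
    "United States": (21, 4.9),
}

def reserve_life_years(reserves_bbbl, production_mbpd):
    """Years of reserves at the current production rate."""
    return reserves_bbbl * 1e9 / (production_mbpd * 1e6 * 365)

for country, (reserves, production) in reserves_and_production.items():
    print(f"{country}: {reserve_life_years(reserves, production):.0f} years")
```

The recomputed values (81, 182, 17 and 12 years) match the table, confirming the column is a simple static ratio that ignores future changes in production or reserve estimates.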

China yuan breaks through another technical milestone
