September 19, 2008

Space flux telescope without upper size limit

Here is a diagram of a 200-meter-diameter space telescope whose many small mirror modules are held in position by magnetic flux pinning.

The phenomenon of magnetic flux pinning might provide a way to connect the telescope components that overcomes the limitations of formation flight or a mechanical support structure. A type of interaction between a magnet and a type II superconductor, flux pinning is analogous to a damped spring force that acts over a distance. A simple model is that of a magnet and a superconductor connected by a virtual spring and damper. This interaction is passively stable and requires only the power needed to keep the superconductor cooled below a critical temperature.
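The spring-and-damper analogy above is easy to sketch numerically. The toy 1-D simulation below (with illustrative stiffness, damping and mass values, not measured superconductor parameters) shows a displaced mirror segment settling back to its pinned equilibrium with no active control:

```python
# Toy model of a flux-pinned interface as a damped spring.
# k, c, m, and the displacement are made-up illustrative values.
def simulate_flux_pin(x0, v0, k=5.0, c=1.0, m=2.0, dt=0.001, steps=20000):
    """Integrate m*x'' = -k*x - c*x' with semi-implicit Euler; return final offset."""
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m  # spring restoring force plus damping
        v += a * dt
        x += v * dt
    return x

# A segment nudged 5 cm off station decays back toward zero displacement,
# the passive stability described above.
final = simulate_flux_pin(x0=0.05, v0=0.0)
print(abs(final) < 1e-3)  # True: the segment has settled
```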

Other Ways to Make Space Telescopes
The limited capacity of a launch vehicle places an upper bound on the size of monolithic telescopes that can be assembled on the ground and sent into orbit. To maximize the size of a monolithic telescope that can be launched, several ingenious strategies have been developed. These strategies include designing inflatable structures and using creative folding techniques to minimize the volume of the telescope. The 6.6m primary mirror of the James Webb Space Telescope, for example,
is designed to fold compactly to allow the telescope to fit into a 5m shroud.

-An origami-like folding technique allows a 25m lens to fit inside current launch vehicles. The resulting telescope has a focal length of approximately 1 km, necessitating the use of two spacecraft: one for the lens and one for the detector.

-A monolithic telescope with a primary mirror over 40m in diameter would be needed for directly imaging Earth-size planets in other solar systems.
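As a rough sanity check on the aperture numbers above, the standard Rayleigh diffraction criterion (not from the source; actual direct imaging of exoplanets also demands extreme star-planet contrast, which this ignores) can compare apertures against the roughly 0.1 arcsecond star-planet separation of an Earth analog at 10 parsecs:

```python
def rayleigh_limit_rad(aperture_m, wavelength_m=550e-9):
    """Diffraction-limited angular resolution, theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

# Star-planet separation of 1 AU seen from 10 parsecs:
AU, PARSEC = 1.496e11, 3.086e16
separation_rad = AU / (10 * PARSEC)

for d_m in (6.5, 40, 200):
    margin = separation_rad / rayleigh_limit_rad(d_m)
    print(f"{d_m} m aperture: separation / resolution = {margin:.0f}")
```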

There are several potential advantages to the flux telescope design.

First and foremost, the design is scalable. Although the initial radius of the telescope is determined by the number of mirror segments that are launched, the radius can grow if additional mirror segments are added later. As a result, the aperture of the telescope can be increased gradually, spreading the cost over time.

The reconfigurable nature of the flux-pinning interfaces offers another advantage: unlike the Hubble Space Telescope, which required an expensive manned mission to repair a defective mirror, a telescope whose mirror segments were flux-pinned could be repaired autonomously. Misplaced or imperfectly deformed mirror segments can be repositioned or reoriented remotely, and since the mirror segments are interchangeable, removing and replacing any damaged or destroyed segments can require minimal human involvement.

In addition, since flux pinning requires no active control, the telescope is passively stable in the event of a software-related failure and able to maintain its shape using only the minimal amount of power required to cool the superconductors.

One final advantage of this design stems from its overall geometry. As a spherical shell of mirror segments with a detector floating at the center, the telescope is roughly isotropic, so it can be repointed by rotating the central detector and deforming the mirror segments appropriately. If the mirrors are capable of deforming sufficiently quickly, then this telescope could repoint in less time [shapeshifting like a liquid Terminator to point in a new direction] than an equivalently sized telescope requires to slew, lending the telescope high agility.

September 18, 2008

DARPA Seeks to Use the Force, the Casimir Force

The Casimir force is the result of virtual particles and could be related to inertia and gravity.

DARPA is soliciting innovative research proposals in the area of Casimir Effect Enhancement (CEE). The goal of this program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir Force. One could leverage this ability to control phenomena such as adhesion in nanodevices, drag on vehicles and many other interactions of interest to the DoD.

A specific goal of this single-phase DARPA program is to demonstrate the ability to manipulate and engineer the Casimir force including the ability to neutralize the Casimir force.

This site has covered recent success in reducing the Casimir force by 30-40% and in reversing the Casimir force using nanoscale combs and perfect mirrors. Sufficient control of the Casimir force could enable breakthroughs in space propulsion, energy extraction from the vacuum, and highly efficient energy conversion.

Measurement of the Casimir force is described here

The Casimir force's effect on MEMS and NEMS devices

31 pages detailing what DARPA wants to do with the Casimir force

Program metrics to be used for determination of success will be drawn from this list:
(1) Unambiguous detection of Casimir Force
(2) Demonstrated ability to neutralize the Casimir Force
(3) Real-time manipulation of the Casimir Force

Explanations and Rationale for Goals
Casimir Force Detection: The Casimir force is one of many forces present at the
nanometer scale near surfaces. In order to unambiguously detect the Casimir force, it is necessary to show that the Casimir force can be distinguished from all the other forces present in the system. This can be accomplished by studying the dependence of the measured force on controllable geometric, materials, environmental or other parameters, and distinguishing all of the forces from one another. A successful proposal will describe how the team intends to distinguish Casimir forces from other forces in their device and system.
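For a sense of the magnitudes a detection experiment must distinguish, the textbook ideal parallel-plate formula (not part of the DARPA solicitation; real materials and geometries deviate from this perfect-mirror result) shows how steeply the force grows at small gaps:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s

def casimir_pressure_pa(gap_m):
    """Attractive pressure between ideal parallel mirrors:
    P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi**2 * HBAR * C / (240 * gap_m**4)

# The 1/d^4 scaling is why the Casimir force dominates at nanometer gaps:
print(f"{casimir_pressure_pa(100e-9):.1f} Pa at a 100 nm gap")  # ~13 Pa
print(f"{casimir_pressure_pa(10e-9):.2e} Pa at a 10 nm gap")    # ~1.3 atmospheres
```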

Casimir Force Neutralization: In order to utilize the Casimir force in applications, it is important to demonstrate the ability to design devices in which the Casimir force can be completely neutralized. Model-based design of materials and surfaces will be necessary to develop a successful demonstration of a Casimir-free interface. Successful proposals will include a detailed description of the means by which the Casimir force will be neutralized in a specific structure.

Dynamic Manipulation of the Casimir Force: Of particular interest are approaches that
allow the Casimir force to be dynamically modified within a device or structure –
specifically, the ability to modulate the Casimir force between “normal” and
“neutralized” states. To meet this objective, proposers should describe the materials, structures or other concepts that can provide this dynamic modification of the Casimir force within a device. Detailed models and calculations will be necessary to provide a convincing discussion of the approach used.

A scientific paper from 1988 suggested that advanced control of the Casimir force could be used to stabilize a wormhole and enable faster-than-light travel and time travel.

Largest Semantic Map of the English Language, and AI Deciphers Complex Multigene Relationships

Artificial intelligence is helping to accelerate and deepen our understanding of genetics and getting better at understanding our language.

Cognition Technologies, a next-generation Semantic Natural Language Processing (NLP) company, has announced the release of the largest commercially available Semantic Map of the English language. The scope of Cognition's Semantic Map is more than double the size of any other computational linguistic dictionary for English, and includes over 10 million semantic connections comprising semantic contexts, meaning representations, taxonomy and word meaning distinctions.

The semantic map is reportedly the world's largest, and gives computers a vocabulary more than 10 times as extensive as that of a typical US college graduate.

Artificial Intelligence was used to identify the genes and genetic interrelationships underlying the impact of calorie restriction on maximum lifespan.

- They pooled data from several different microarray studies
- They used an unusual algorithm to classify samples as CR or normal (for the computer scientists: they used genetic programming to learn an ensemble of classification rules).

Their algorithm votes for whether a sample is CR or normal based on the outputs of several short classification rules (short because each rule looks only at the expression levels of a few genes).
An advantage of this type of approach (over, say, a ‘black box’ neural network) is that the classification rules are easy to interpret biologically: you can search through them to identify important genes and genetic relationships. A gene is important for CR if it appears in many different rules, and two (or more) genes are related if they appear together in many rules.
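The voting scheme described above can be sketched in a few lines. The gene thresholds below are invented for illustration; the actual learned rules are not given in the source:

```python
# Each short rule inspects the expression levels of a few genes and votes;
# the ensemble majority decides CR vs. normal. Thresholds are hypothetical.
RULES = [
    lambda s: s["Mrpl12"] > 1.5,
    lambda s: s["Uqcrh"] > 1.2 and s["Snip1"] < 0.8,
    lambda s: s["Snip1"] < 0.9,
]

def classify_sample(sample):
    votes = sum(rule(sample) for rule in RULES)
    return "CR" if votes > len(RULES) / 2 else "normal"

# Interpretability: a gene appearing in many rules (here Snip1, twice) is
# flagged as important; genes co-occurring in one rule are flagged as related.
cr_like = {"Mrpl12": 2.0, "Uqcrh": 1.4, "Snip1": 0.7}
print(classify_sample(cr_like))  # CR
```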

The AI-based interpretation of the genes was done by Biomind:

Ben Goertzel, Biomind LLC, Rockville, Maryland.
Cassio Pennachin, Biomind LLC, Rockville, Maryland.
Maurício de Alvarenga Mudado, Biomind LLC, Rockville, Maryland.
Lúcio de Souza Coelho, Biomind LLC, Rockville, Maryland.

Novel artificial intelligence methodologies were applied to analyze gene expression microarray data gathered from mice under a calorie restriction (CR) regimen. The data were gathered from three previously published mouse studies; these datasets were merged together into a single composite dataset for the purpose of conducting a broader-based analysis. The result was a list of genes that are important for the impact of CR on lifespan, not necessarily in terms of their individual actions but in terms of their interactions with other genes. Furthermore, a map of gene interrelationships was provided, suggesting which intergene interactions are most important for the effect of CR on life extension. In particular our analysis showed that the genes Mrpl12, Uqcrh, and Snip1 play central roles regarding the effects of CR on life extension, interacting with many other genes (which the analysis enumerates) in carrying out their roles. This is the first time that the genes Snip1 and Mrpl12 have been identified in the context of aging. In a follow-up analysis aimed at validating these results, the analytic process was rerun with a fourth dataset included, yielding largely comparable results. Broadly, the biological interpretation of these analytical results suggests that the effects of CR on life extension are due to multiple factors, including factors identified in prior theories of aging, such as the hormesis, development, cellular, and free radical theories.

Solar Electric Sail Developments and Plans

A simplified picture of the electric sail. An actual system would have 50 to 100 or more 20-kilometer wires. 100 kg spacecraft could be accelerated to final speeds of 40-100 km/second. [Further refinement can enable an 800 km/s top speed - the top solar wind speed]

This site has identified the solar electric sail as one of the top ten near term space developments that would have the most impact in increasing capabilities in space.

The preparation of components for an actual deployment in space of an electric sail is proceeding. There was an electric sail workshop by ESA ESTEC (European Space Agency) on May 19, 2008

A powerpoint where progress on electric sail work was presented.

Technical Status Summary
-Tether manufacture: Progressing well, required before test mission can fly
-Tether reels: No serious problems seen, but must be done to demonstrate reeling of final-type tether
-Electron gun: Straightforward (could use spare cathodes/guns for redundancy)
-Tether direction sensors: Should be straightforward
-Dynamic tether simulations: No problems seen, but should be done more comprehensively still
-Orbital calculations: OK
-Overall design: OK

Demonstration Goals
-Reel to reel tether production (10 m, 100 m, 1 km, 10 km) with quality control
-Reliable reeling of the tether
-After these, one can make decision to build test mission. Technological development risk remaining after this is small.

Commercial Uses of E-sails
Electric Sail is a propellantless non-impulsive propulsion method, suitable for small and medium payloads
● Electric Sail does not produce much thrust inside the magnetosphere, i.e. at Earth orbit
● Water mining and transport from asteroids, to produce chemical propellant, is one way the E-sail could benefit almost any space activity

Asteroid mining schemes
● Water
– Mine water at ice-containing asteroid (KY-26 ?)
– Transfer to Earth orbit by E-sailer
– Water customers at LEO, GTO or MEO:
● Electrolysis spacecraft (Orbital Transfer Service for satellites)
● Platinum group metals:
– Challenge: mining
– Transfer by E-sailer to Earth reentry
– PGMs are rare on Earth (differentiated planet), needed as catalysts (fuel cells + other “green” technologies)
● Structural materials (bricks, stones, basic trusses)

How to mine water
● Straightforward way: Dig out material one piece at a time. Put each piece into a container, close the lid and heat. The container fills with vapour. Open a pipe into a cold trap where the H2O condenses.
● Another way: Enclose whole asteroid in gold covered bag so that it gets heated. Install pipe to a cold (white) bag where ice condenses. Might be feasible for small asteroid such as KY-26.
Benefit: insensitive to type of asteroid material.

Getting to Earth orbit from asteroid
● E-sailer used to get payload to Earth-Moon system rendezvous
● Lunar capture used to kill incoming delta-v (up to 1.5 km/s) ==> get into high elliptic orbit (stable for ~1 year)
● Use aerobrakings to lower apogee (using solar panels, like Mars missions do) until at GTO or LEO
● E-sailer can detach before Moon ==> no need to fly with E-sail through near-Earth region

Mining platinum group metals
● Many benefits and one big challenge
● Benefits:
– Easy to store in space during E-sail transportation
– Easy to sell once dropped to Earth
– Precious enough (> 10,000 eur/kg)
– Guaranteed, growing market (automotive industry)
● Challenge:
– Mining (enrichment) at asteroid is probably not simple
– Can be done, since can be done at Earth; but at what initial cost?

E-sail logistics chain. How to use that capability?
● Cheaper launches to GEO and MEO, and cheaper space operations beyond LEO

Is E-sail required for asteroid mining ?
● If icy asteroids exist nearby, water can be fetched by electrolysis rockets without losing too much on the way. But E-sail is more lightweight than any
electrolysis rocket.
● “Dry” ores reasonable to fetch by electrolysis rocket only if water is also mined nearby. E-sail is not dependent on any fuel supply.
● E-sail has better thrust/power ratio than ion engines, plus needs no propellant

Tether Material and Technology selection was made
Technology options covered:
-Laser-cut tether from metal sheet (efficiency? quality?)
-Metal-clad fibres (CTE? radiation?)
-Wire-wire bonding
---Laser welding
---Ultrasonic welding [This was chosen, others are fallback]
---Soldering (temperature range? CTE?)
---Glueing (reliability? CTE?)
---Wrap wire (not done at 20 um scale?)

Good-conductivity alloys:
90% Cu, 10% Ag: Tensile strength 1000-1600 MPa, Density 9 g/cm3
99% Al, 1% Si: Tensile strength ~300 MPa, Density 2.7 g/cm3
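A quick strength-to-weight comparison of the two quoted alloys (a back-of-the-envelope check, not from the source) shows their specific strengths are nearly identical at the lower bound of the Cu-Ag range, with Cu-Ag pulling ahead only toward the top of its 1000-1600 MPa range:

```python
def specific_strength_knm_per_kg(strength_mpa, density_g_cm3):
    """Tensile strength divided by density, in kN*m/kg (a breaking-length proxy)."""
    return strength_mpa * 1e6 / (density_g_cm3 * 1000) / 1000

# Figures from the table above; Cu-Ag taken at its lower bound.
cu_ag = specific_strength_knm_per_kg(1000, 9.0)
al_si = specific_strength_knm_per_kg(300, 2.7)
print(f"Cu-Ag 90/10: {cu_ag:.0f} kN*m/kg, Al-Si 99/1: {al_si:.0f} kN*m/kg")
```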

Tether Reels
Baseline plan
-Spinning reel, maybe with capstans
-Outreeling only, or reeling both in and out
-Ordinary or magnetic bearing
-Other ideas also considered
TRL 4 level work can commence when at least a few-metre piece of tether is available.

Electron Gun
Evlanov, Space Research Institute IKI, Moscow
Three new designs produced, based on IKI heritage hardware:
-300 V low-voltage gun for ionospheric testing
-20 kV/2kW baseline model for solar wind
-40 kV/2kW enhanced voltage model for solar wind

-40 kV, 2 kW, 50 mA gun: Mass 3.9 kg including power supply (2 kg) and radiator (0.9 kg)
-LaB6 cathode lifetime: theoretically should be at least 10 years in high vacuum

Tether Direction Sensors
Main idea: Detect tethers optically with a stereo camera and reconstruct their 3-D directions from images onboard.
Purpose: Tether lengths must be actively fine-tuned to avoid their collisions. One must first detect them.
-TRL 3 analysis done, basically
-Modest-sized cameras enough unless >10-15 AU distance
-May have to mat-finish wires to create diffuse reflectance
-Seeing root of tether enough to determine its direction
-Seeing the tip would be good as tether breakage alarm

Integration of Components
-Design whole s/c around electric sail
-Add electric sail to existing s/c design
-Spinup strategy
-Spinup rockets
-Siamese Twins
-Placement of reels
-At outer edge of s/c disk
-At deployable booms at ends of solar panel arrays
-High voltage path design (grounding plan)
-Whole s/c at high positive potential
-Only reels and electron gun at high positive potential

Need two controls: potential (controls solar wind force) and length (controls angular speed)
-Length fine-tuning strategies:
-Reel in and out (needs reliable reeling of partly damaged tether or thicker monofilament base tether)
-Reel out only (must have enough spare tether)

Comparison of solar electric sail to dandelion seed and tethering several e-sails together to move heavier objects.

The base design is for one hundred 20km wires that would generate 0.1-0.2 N thrust which gives 1-2 mm/s^2 acceleration to a 100 kg spacecraft. In one year this acceleration changes the velocity vector of the spacecraft by 30-60 km/s which is already an excellent achievement.
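The quoted figures are straightforward to verify. A constant-thrust approximation (ignoring the falloff of thrust with distance from the Sun) reproduces the 30-60 km/s per year number:

```python
SECONDS_PER_YEAR = 3.156e7

def esail_delta_v_m_s(thrust_n, mass_kg, years=1.0):
    """Delta-v from constant thrust: a = F/m, dv = a * t."""
    return thrust_n / mass_kg * years * SECONDS_PER_YEAR

# 0.1-0.2 N acting on a 100 kg spacecraft, as in the base design above:
for thrust in (0.1, 0.2):
    print(f"{thrust} N -> {esail_delta_v_m_s(thrust, 100)/1000:.0f} km/s per year")
# 0.1 N -> 32 km/s; 0.2 N -> 63 km/s, matching the 30-60 km/s figure
```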

One can increase the thrust by increasing the number of tethers [no limit], their length [100 km with current material] and the power of the electron gun. In addition, it may be possible to use part of the electric power for radio frequency modulation of the electron beam, which may make it possible to heat the electron population that is trapped in the potential well of the tethers. The heating expands the electron cloud; in other words, the Debye length increases. The electric field of the tether then penetrates a longer distance into the surrounding solar wind plasma, so the effective sail area of the tether increases, which increases the thrust. Modelling the electron heating is challenging, but testing it in space would be straightforward. For this reason, an electric sail test mission should be built as soon as possible. After becoming familiar with electric sail technology, one could increase its thrust perhaps even hundredfold, that is, to some tens of newtons, by using these techniques. Progress of material physics toward industrial production of tethers made of carbon nanowires could possibly increase the upper limit of the thrust even further. With faster acceleration and more force the solar electric sail could reach the top speed of the solar wind at 800 km/s [about 1.8 million mph].

IBM Develops Computational Scaling Solution for Next Generation "22nm" Semiconductors

IBM (NYSE: IBM) today announced the semiconductor industry's first computationally based process for production of next generation 22nm semiconductors. Known as Computational Scaling (CS) -- a process that enables the production of complex, powerful and energy-efficient semiconductors at 22nm and beyond -- this new initiative will feature support from several of IBM's key partners initially including Mentor Graphics and Toppan Printing.

Computational lithography was used at the 32 nanometer lithography node.

Intel technologists have also been working with computational lithography which involves etching pixels with various shapes and slopes on what appears to be a totally transparent, chromeless piece of glass. When 193 nm light is projected, the pixelated mask creates phase-shifted patterns that could extend immersion lithography to 22 nm.

Introducing circuits at 22nm is a challenging milestone since current lithography methods -- the process of designing photomasks to image circuit patterns on silicon wafers in mass quantity -- are not adequate for critical layers at 22nm due to fundamental physical limitations. Computational Scaling overcomes these limitations by using mathematical techniques to modify the shape of the masks and characteristics of the illuminating source at each layer of an integrated circuit.
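The "fundamental physical limitations" can be illustrated with the standard lithography resolution formula CD = k1 * lambda / NA (a back-of-the-envelope sketch, not from the press release; the NA value is a typical immersion-tool figure):

```python
def min_half_pitch_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Single-exposure resolution limit: CD = k1 * lambda / NA.
    k1 ~ 0.25 is the theoretical floor for conventional projection lithography."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm ArF light through a water-immersion lens with NA ~ 1.35:
print(f"{min_half_pitch_nm(193, 1.35):.1f} nm")  # ~35.7 nm, well above 22 nm,
# hence the need for double patterning or computational source/mask techniques
```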

The individual components of IBM's CS solution include:

Source Mask Optimization
IBM has partnered with Mentor Graphics on a new resolution enhancement technique to enable cost-effective printing of two-dimensional patterns for the 22nm semiconductor technology generation. This new technology, known as source mask optimization, will provide a means to minimize the use of double patterning by employing highly customized sources with optimized mask shapes.

"Our partnership with IBM will ensure production-ready technologies are in place when they are needed for the 22nm node," said Joseph Sawicki, vice president and general manager for the design-to-silicon division at Mentor Graphics. "Because this next generation solution will be built on the familiar Calibre platform, designers will see a smooth transition path to 22nm, and will also enjoy added benefits in managing turnaround time and the cost of computing."

Virtual Fabricator
Together with Rensselaer Polytechnic Institute and the State of New York, IBM has made significant investments in the area of high performance computing and remains devoted to the advancement of semiconductor technology through the establishment of the Computational Center for Nanotechnology Innovations (CCNI). CCNI provides the unprecedented computational power to enable accurate predictions of advanced manufacturing processes. When combined with predictive models and TCAD, this platform will allow virtual co-optimization of semiconductor unit processes and critical circuit design elements to cut development learning cycles and improve time-to-market for advanced semiconductor technology.

Design Technology Co-Optimization
Within semiconductor fabrication, design 'rules' are created as an abstract representation of the information or model that describes the technology being created. Often, these rules are only defined after an exhaustive negotiation process between the technology and design team. To improve the timeliness and certainty of this process, IBM's Design Technology Co-optimization (DTCO) process helps integrate and automate this complex procedure, cutting the time it takes to reach a clear and stable set of rules for use by the circuit design teams.

Design Enablement Tools
As a result of using IBM's DTCO, a semiconductor modeling process will have a new class of design rules that are simpler and more prescriptive (what to do vs. what not to do). Working with electronic design automation (EDA) suppliers, IBM will be providing new design enablement solutions for a seamless transition.

Critical Dimension Variance Control
Working with leading equipment suppliers, IBM will play the role of lead integrator, providing an adaptive control system to minimize critical dimension variance. As a result, production yield and circuit parameters will be more stable, reducing the cost of production.

Photomask Fabrication
To address the gap in raw optical resolution, aggressive resolution enhancement techniques such as SMO drive unprecedented minimum feature sizes on the photomask -- the opaque plate with holes or transparencies that allow light to shine through in a defined guide for casting the circuit patterns. IBM has partnered with Toppan to ensure timely availability of masks with the required feature sizes.

September 17, 2008

SiCortex Introduces the World’s Most Energy Efficient High-Productivity Computers

SiCortex, maker of energy-efficient high-productivity computing (HPC) platforms, today announced that it has doubled the price/performance metric of its entire product line.
This breakthrough is the result of increased processor speed, advancements to system software and leading-edge compilers. SiCortex computers scale from 72 to 5,832 processors running Linux and other open-source codes, in packages ranging from desk-side to departmental to the data center. All are uniquely simple to deploy: unpack, plug in, turn on.

SiCortex’s breakthrough cost-per-teraflop sets a new standard for delivered price/performance, in particular when considering the Total Cost of Ownership (TCO) due to savings in power, cooling, staffing and space. Compared to Intel x86-based cluster systems, SiCortex computers deliver:

* Up to twice the delivered Results/$CapEx advantage
* Up to nine times the results-per-kilowatt advantage
* Up to three times results/$TCO advantage over a 3-year period

The SiCortex 5832 is the first and only computer system to pack Top500 performance onto a single backplane. It offers 5832 1GFlops 64-bit processors, each dissipating just 600 milliwatts of power. The SC5832 was a 5.8 teraflop system (Nov 2006) but with double the performance it would be almost 12 teraflops.

Before doubling performance the systems had:
•The SC5832, a 5.8 Teraflop system with up to 8 Terabytes of memory. The SC5832 fits into a single cabinet and draws 18 KW of power.
•The SC1458, a 1458 Gigaflop system with up to 1.94 Terabytes of memory. The SC1458 fits in a single cabinet and draws 4 KW of power.
•The SC648, a 648 Gigaflop system with up to 864 Gigabytes of memory. A single SC648 system draws 2 KW of power.
•The SC072, a 72 GigaFlop system with 48 Gigabytes of memory. The low-power desk-side cabinet uses less than 200 watts of power.
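From the figures listed above, performance per watt works out to roughly a third of a GFlops per watt across the pre-doubling line (a simple derived calculation, not a vendor benchmark):

```python
def gflops_per_watt(gflops, kilowatts):
    """Peak performance per watt from the quoted system specs."""
    return gflops / (kilowatts * 1000)

systems = {  # name: (GFlops, kW), figures from the list above
    "SC5832": (5832, 18),
    "SC1458": (1458, 4),
    "SC648": (648, 2),
    "SC072": (72, 0.2),
}
for name, (gf, kw) in systems.items():
    print(f"{name}: {gflops_per_watt(gf, kw):.2f} GFlops/W")
```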

SiCortex recently closed on a $37 million round of venture funding.

September 16, 2008

Petrobank's CAPRI/THAI processes for upgrading and recovering oilsands and heavy oil

Graphics and information from the Sept 2008 Petrobank investor presentation (48 pages). If the CAPRI/THAI processes are successful, then Canada's oilsands and other oilsand and heavy oil deposits around the world will have higher recovery rates using a more economic process, and the oil will be upgraded in the ground to a higher, more valuable quality. This would be the technology that would crush peak oil for several decades and allow an orderly transition to a post-oil world. The processes would enable trillions of barrels of oil to be economically accessed. In a few months the CAPRI process could be proven out and the energy world would be changed. It would be a game changer. More projects like the one in Ecuador would go ahead to access 3-4 billion barrels of oil at 120,000 bpd within 5 years.

Petrobank aims to upgrade oilsand bitumen before it ever comes to the surface. Petrobank will soon launch the CAPRI component of its in situ toe-to-heel air injection production technology. Upgrading the oil in the ground avoids the delays and costs associated with building refineries and transporting the bitumen to upgraders.

It's called the CAPRI system and it's been designed to do the job of a refinery at the bottom of Petrobank's patented THAI (toe-to-heel air injection) wells at the company's pilot project near Christina Lake in northeastern Alberta. In the THAI system, an air pressure-driven combustion front loosens heavy oil as it slowly works its way forward, and the freed oil flows under gravity through slots in horizontal collector pipes, then is gas-lifted to surface processing systems.

For the CAPRI pilot, the horizontal pipes have been uniquely configured such that after passing through the slots, the hot crude will pass through a bed of catalyst and on through slots in a concentric inner pipe before being lifted.

The cracking to be achieved by CAPRI will be a step further in the upgrading process already occurring with THAI. With temperatures of over 600C, THAI has achieved coking, raising 8 API oil to 13 or 14 degrees.

Lab results have predicted a 7 API boost using CAPRI. This would bring the oil up to 20 or 21 API, which would be almost as good as Mexican heavy oil. API 21 oil could get 80% of the price of Brent oil.

"Even with another two or three out of CAPRI, you're getting significantly up the food chain in terms of oil quality," he says. In that minimum case, together with THAI, it means they'd be going from an 8-degree crude to a 16-degree crude. "With such a viscosity, it greatly reduces the need for a condensate," he says. "So there's an economic advantage."

Once the THAI-CAPRI pair is working in concert, it will be a genuine self-contained underground refinery. "Basically, you've got the coker and the cat cracker [catalytic cracking]," says Bloomer.

Along with the efficiencies expected from the pair, Petrobank has complementary technologies it's developing through its research and development facility, Archon Technologies, to run in parallel.

"We're looking at a number of technologies," says Bloomer. "Enriched oxygen, sulphur recovery – taking the H2S and creating a solid sulphur product that is close enough to 100 per cent to be usable."

It is a global heavy oil technology. It can be applied around the world in all kinds of reservoirs. Colombia, Venezuela, the United States, Saskatchewan, Russia, offshore Brazil.

API and Price

Generally speaking, oil with an API gravity between 40 and 45 commands the highest prices. Above 45 degrees the molecular chains become shorter and less valuable to refineries.

Light crude oil is defined as having an API gravity higher than 31.1 °API
Medium oil is defined as having an API gravity between 22.3 °API and 31.1 °API
Heavy oil is defined as having an API gravity below 22.3 °API.

Bitumen sinks in fresh water, while oil floats.

Crude oil with API gravity less than 10 °API is referred to as extra heavy oil or bitumen. Bitumen derived from the oil sands deposits in the Alberta, Canada area has an API gravity of around 8 °API. It is 'upgraded' to an API gravity of 31 °API to 33 °API and the upgraded oil is known as synthetic crude.
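The API classifications above can be captured in a short helper. The API-to-specific-gravity relation used here is the standard one (API = 141.5/SG - 131.5, not stated in the source); it also confirms why 8 API bitumen sinks in fresh water while lighter oil floats:

```python
def specific_gravity(api):
    """Invert the standard API gravity relation: API = 141.5/SG - 131.5."""
    return 141.5 / (api + 131.5)

def classify(api):
    """Grade categories from the definitions listed above."""
    if api < 10:
        return "extra heavy / bitumen"
    if api < 22.3:
        return "heavy"
    if api <= 31.1:
        return "medium"
    return "light"

# Athabasca bitumen at ~8 API is denser than water (SG > 1), so it sinks:
print(f"SG of 8 API bitumen: {specific_gravity(8):.3f}")  # ~1.014
print(classify(8), "->", classify(21))  # THAI+CAPRI's ~21 API is still 'heavy'
```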

Oil prices for different API grade oil from the EIA.

THAI Process Benefits
• Minimal natural gas and water use
• Higher recovery rates - 70-80% of oil in place
• Improved economics
• Lower capital cost – 1 horizontal well, no steam & water handling facilities
• Lower operating cost – negligible natural gas & minimal water handling
• Higher netbacks for partially upgraded product
• Faster project execution time
• Lower environmental impact
• 50% less greenhouse gas emissions
• Net useable water production
• Partial upgraded oil requires less refining
• Smaller surface footprint
• THAI /CAPRI - step change heavy oil technologies
• Up to 804 mmbbls recoverable (based on SAGD) in Petrobank's Whitesand block

Petrobank is also big in Saskatchewan's part of the Bakken Oil Formation

A coker or coker unit is an oil refinery processing unit that converts the residual oil from the vacuum distillation column or the atmospheric distillation column into low molecular weight hydrocarbon gases, naphtha, light and heavy gas oils, and petroleum coke. The process thermally cracks the long chain hydrocarbon molecules in the residual oil feed into shorter chain molecules.

Fluid catalytic cracking (FCC) is the most important conversion process used in petroleum refineries. It is widely used to convert the high-boiling hydrocarbon fractions of petroleum crude oils to more valuable gasoline, olefinic gases and other products. Cracking of petroleum hydrocarbons was originally done by thermal cracking which has been almost completely replaced by catalytic cracking because it produces more gasoline with a higher octane rating. It also produces byproduct gases that are more olefinic, and hence more valuable, than those produced by thermal cracking.

Heavy oil issues

Ecuador project that would be funded by another company, Ivanhoe, using Petrobank technology

Pungarayacu reserves are 3-4 billion bbl of 8° gravity oil, and the company hopes to produce 30,000-120,000 b/d from the field within 5 years.

Petrobank Energy & Resources Ltd. has evaluated the commercial viability of developing the field, examining various alternatives for the optimal exploitation strategy for the extensive heavy oil resource. Ivanhoe has the technology to transform the heavy crude into a lighter, 23° gravity grade.

If the project to develop Pungarayacu is approved, the Canadian company plans to invest nearly $5 billion in the project, according to a Bloomberg report, with Ecuador paying $37/bbl for the oil extracted.

According to project plans, the field will be evaluated and results ratified, then production would begin at 30,000 b/d, rising gradually to as much as 120,000 b/d. The contract will be for 20 years, with options for 10-year extensions.

Cray CX-1 Personal Supercomputer starting at $25,000 and delivering up to 786 gigaflops of performance

Nuclear energy roundup Sept 16, 2008

1. Brazil's nuclear energy company has submitted a six-reactor plan to government, while ministers talk of building more than one per year until 2050.

At present Brazil employs only the two nuclear power units at Angra, giving 1900 MWe, while the completion of the long-stalled Angra 3 would take this to 3120 MWe around 2014. Eletronuclear projected the completion of the first two northeast reactors in 2019 and 2021, and the southeast ones in 2023 and 2025.

Speaking at the Angra 3 site on 12 September, minister for mines and energy Edison Lobao said four states in the northeast had already expressed interest in hosting a plant: Pernambuco, Alagoas, Sergipe and Bahia.

He went on to say he thought Brazil would need 50,000 to 60,000 MWe of nuclear capacity by 2050, as compared to the country's current total electricity generating capacity of 100,000 MWe.

2. The IAEA projects that world nuclear power could double by 2030, an increase of 57 GWe over last year's projection.

For 2030, the World Nuclear Association (WNA) projects global nuclear generating capacity of 552 GWe under its low scenario and 1203 GWe under its high scenario. These rise to 1136 GWe and 3488 GWe, respectively, by 2060. By the end of the century the WNA puts maximum nuclear capacity at around 11,000 GWe under the high scenario.
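The WNA figures quoted above imply modest but sustained annual growth rates. A small sketch of the arithmetic, using only the 2030 and 2060 scenario numbers from the projection:

```python
# WNA projected global nuclear generating capacity (GWe)
low_2030, low_2060 = 552.0, 1136.0
high_2030, high_2060 = 1203.0, 3488.0

def cagr(start, end, years):
    """Compound annual growth rate implied by two capacity figures."""
    return (end / start) ** (1.0 / years) - 1.0

low_rate = cagr(low_2030, low_2060, 30)    # ~2.4% per year
high_rate = cagr(high_2030, high_2060, 30) # ~3.6% per year
print(f"low scenario:  {low_rate:.1%}/yr")
print(f"high scenario: {high_rate:.1%}/yr")
```

Even the high scenario's tripling by 2060 works out to under 4% capacity growth per year, which is why such large end-of-century numbers remain arithmetically plausible.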

3. Western Australia's anti-uranium-mining Labor government lost power. Colin Barnett, leader of the Liberal Party - which supports uranium mining - was announced as Premier on 14 September 2008.

Mega Uranium said there is "no longer any political impediment" to the development of its Lake Maitland uranium project, Western Australia's most advanced uranium project. The Canadian-based company, which had threatened to pull out of Australia if Carpenter had won the election, said that it is on schedule to develop a mine and commission a plant in 2011 with an initial production capacity of 750 tonnes U3O8 per year.

International Rectifier new gallium nitride devices will make Power Conversion a lot more efficient

International Rectifier Corporation (NYSE:IRF) announced the successful development of a gallium nitride (GaN)-based power device technology platform. The company says the platform can improve key application-specific figures of merit (FOM) by up to a factor of ten over state-of-the-art silicon-based technology platforms, dramatically increasing performance and cutting energy consumption in end applications across market segments such as computing and communications, automotive and appliances.

IR’s GaN-based power device technology platform enables revolutionary advancements in power conversion solutions. The portfolio of system solution products and related intellectual property (IP) extends far beyond leading-edge discrete power devices by effectively deploying the company’s 60-year heritage in power conversion expertise in a wide variety of applications including AC-DC power conversion, DC-DC power conversion, motor drives, lighting, high density audio and automotive systems.

Power MOSFETs at wikipedia

Power semiconductor devices at wikipedia

Power semiconductor devices are semiconductor devices used as switches or rectifiers in power electronic circuits (switch mode power supplies for example). They are also called power devices or when used in integrated circuits, called power ICs.

Power Electronics at wikipedia

Power electronic systems are found in virtually every electronic device. For example, around us:

* DC/DC converters are used in most mobile devices (mobile phones, PDAs...) to maintain the voltage at a fixed value regardless of the battery's charge level. These converters are also used for electronic isolation and power factor correction.

* AC/DC converters (rectifiers) are used whenever an electronic device is connected to the mains (computers, televisions...).

* AC/AC converters are used to change either the voltage level or the frequency (international power adapters, light dimmer). In power distribution networks AC/AC converters may be used to exchange power between utility frequency 50 Hz and 60 Hz power grids.

* DC/AC converters (inverters) are used primarily in UPSs and emergency lighting. Under normal mains conditions, the mains charges the DC battery; during a blackout, the battery is used to produce AC electricity to power the appliances.
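The DC/DC regulation described above is typically done by switching. For an ideal buck (step-down) converter, the textbook relation Vout = D × Vin ties the output voltage to the switching duty cycle D. A minimal sketch of this standard formula (not from the article):

```python
def buck_duty_cycle(v_in, v_out):
    """Ideal buck converter: Vout = D * Vin, so D = Vout / Vin.
    Valid only for step-down operation (0 < D <= 1)."""
    if not 0 < v_out <= v_in:
        raise ValueError("buck converter can only step down")
    return v_out / v_in

# Regulating a 12 V supply down to a fixed 3.3 V logic rail:
d = buck_duty_cycle(12.0, 3.3)
print(f"duty cycle = {d:.3f}")
```

The controller holds the output fixed by adjusting D as the input sags or rises, which is exactly how a phone keeps its rails constant as the battery discharges.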

September 15, 2008

Carbon Nanotube Reinforced Composites for artificial muscle and skin

Nanotube composites can deliver more than an order-of-magnitude improvement in longitudinal modulus (up to 3300%) as well as in damping capability (up to 2100%). It is also observed that composites with a random distribution of nanotubes of the same length and similar filler fraction provide roughly three times less effective reinforcement.
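The alignment effect reported above can be pictured with the standard rule of mixtures for fiber composites, discounting randomly oriented fillers with a Krenchel orientation efficiency factor. This is a textbook sketch with illustrative moduli and filler fraction, not the authors' actual model or data:

```python
def composite_modulus(e_fiber, e_matrix, v_f, eta=1.0):
    """Rule of mixtures with Krenchel orientation factor eta
    (eta = 1 for aligned continuous fibers, ~3/8 for random in-plane)."""
    return eta * v_f * e_fiber + (1.0 - v_f) * e_matrix

# Illustrative values: nanotube modulus ~1000 GPa, soft matrix ~1 GPa, 5% filler
aligned = composite_modulus(1000.0, 1.0, 0.05, eta=1.0)
random_2d = composite_modulus(1000.0, 1.0, 0.05, eta=3.0 / 8.0)
print(f"aligned:   {aligned:.2f} GPa")
print(f"random 2D: {random_2d:.2f} GPa")
```

With these assumed numbers the aligned case carries roughly 2.7 times the fiber reinforcement of the random case, in line with the "three times less effective" observation.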

Jonghwan Suhr, an assistant professor of mechanical engineering, said his study of continuously reinforced carbon nanotube composites brings him a step closer to his goal of bio-mimicking artificial muscles or skins, which could be applied to a wide variety of fields.

In addition, the continuous composites are lightweight and flexible, are mechanically robust, offer outstanding fatigue resistance, conduct electricity and heat, and exhibit tissue-like behavior, Suhr said.

While Suhr is interested in the mechanical uses for the composite, he is also exploring its use for mimicking muscle tissue. Suhr is currently working with the aircraft company Boeing to investigate creating artificial skin, made from continuously reinforced carbon nanotube composites, for the wing structures of unmanned air vehicles. Suhr said he hopes the artificial skin will reduce the vehicles' drag, improving energy efficiency. He also hopes to develop artificial skin for wind turbine blades to increase the efficiency of those renewable energy systems.

Suhr’s plan for the new composite also includes biological applications. He hopes to make the currently inactive material electroactive, which would eliminate the need for many mechanical parts in a mechanism.

“This fascinating soft tissue-like material can be made into an electroactive polymer,” Suhr said. “So that we don’t have to add mechanical motors, which is typically heavy. So maybe we can develop bio-mimicking artificial muscles using this material.”

Suhr and his colleagues’ advance in creating a new nanotube composite material leads to a new frontier in nanotechnology, and it makes Suhr’s future plans for mimicking muscles and producing new mechanical and structural applications possible.

“We need new material to break through our state of art technology,” Suhr said. “There are many interesting nanomaterials whose properties have not been fully understood yet. We may want to explore them and understand the fundamentals so as to be utilized for emerging applications such as next generation aircraft or alternative energy systems.”

Financial Condition of Different Countries

Light Crude trading below $92 per barrel

Light Crude has fallen below $92. It was at $91.90 at 11:36 a.m. Singapore time on the New York Mercantile Exchange.

The plunge in oil, cotton and copper led to the Reuters/Jefferies CRB Index of 19 commodities erasing its gains for the year. The CRB index fell 3.3 percent to 348.26 yesterday, down 2.9 percent for the year.

Gold declined as some investors sold the precious metal to raise cash after U.S. stocks tumbled.

Gold for immediate delivery fell 1 percent to $778.63 an ounce at 9:49 a.m. in Singapore after earlier rising to $788.10 an ounce, the highest in a week. Silver for immediate delivery fell 1.8 percent to $10.93 an ounce.

Superconductors Under a Pile of Regular Metal Could Have Critical Temperatures of 200K instead of 50K

Theorists propose that for certain types of superconductors, contact with a metal layer could greatly increase the transition temperatures of these materials—in some cases by as much as an order of magnitude.

This relates to recent research suggesting that superconductors do not achieve their best performance because of quantum traffic jams among electrons. Piling regular metal on top can help unblock the electron traffic jam.

Designing ways to raise the superconducting transition temperature (Tc) has always been an important goal of condensed matter research. In the past twenty years, two families of superconducting materials with transition temperature above 50 K have been discovered: the cuprates and more recently, the iron-pnictides. Many believe that some cuprate compounds should be very high temperature superconductors (that is, with a Tc~200 K) were it not for the fact that the superconducting carriers, the Cooper pairs, have a low mobility. Writing in Physical Review B, Erez Berg and Steve Kivelson of Stanford University and Dror Orgad of The Hebrew University in Jerusalem turn this logic around and suggest that making contact between a nominally low-mobility superconductor and a high-mobility metal will increase the mobility of Cooper pairs in the superconductor and raise Tc.

Berg et al. consider a two-dimensional lattice where Δ0 is the attraction between two electrons on the same site. To mimic the poor Cooper pair mobility, they assume either that the probability that electron pairs “hop” from site to site is zero, or that the pairs can only hop in one direction. As constructed, this model exhibits a finite Δ0 but zero superfluid stiffness ρs, and cannot be superconducting—at least not in all directions—even at zero temperature. Since Tc is set by ρs, the trick is to find a way to increase ρs by modifying the system so the electrons can move around more easily. Berg et al. therefore propose to put the two-dimensional lattice in contact with a normal metallic layer (Fig. 1). Electrons can now hop to the metal and ρs increases. They demonstrate that for an appropriate choice of the electron transfer parameter between the two layers, the Tc of the composite system can be raised to a substantial fraction of Δ0/kB. Plugging in the numbers appropriate for the relevant cuprate compounds, this amounts to increasing Tc from around 10 K to over 100 K, assuming the interfaces are “ideal” as in the model.
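Since the claim is that Tc can reach a substantial fraction of Δ0/kB, it helps to convert a pairing energy into a temperature scale. The 20 meV gap below is an illustrative cuprate-scale assumption, not a figure from the paper:

```python
K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K (CODATA value)

def pairing_temperature(delta0_mev):
    """Temperature scale Delta0 / kB for a pairing energy given in meV."""
    return delta0_mev * 1e-3 / K_B_EV

# An assumed cuprate-scale pairing energy of ~20 meV:
t_scale = pairing_temperature(20.0)
print(f"Delta0/kB ~ {t_scale:.0f} K")
```

A 20 meV pairing scale corresponds to over 200 K, so even a modest fraction of Δ0/kB comfortably exceeds 100 K, consistent with the 10 K to over 100 K estimate above.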

The fact that coupling a normal metal to a low ρs superconductor can raise the Tc actually has a close analogy to what is found in electronic devices called Josephson junction arrays (Fig. 1). A Josephson junction array consists of many superconducting islands connected by insulating materials. In such a system, the quantum tunneling of the Cooper pairs between neighboring islands eventually triggers superconductivity through the entire array. Once the superconducting island is smaller than a certain size, however, even a single extra Cooper pair would significantly raise the electrostatic energy (the so-called charging energy) of the island. When this extra charging energy overwhelms the kinetic-energy gain of allowing the Cooper pairs to spread out, the system becomes insulating. Under this condition the system has a finite Δ0 (proportional to the bulk Tc of the superconducting island) but zero ρs—just as in the model from Berg et al.

There have been other advances this year in creating superconducting compounds that can reach 195 K, the temperature of dry ice.

3-D processor designed from the start to run in 3-D, not just a stack of 2-D chips

The 'Rochester Cube' points way to more powerful chip designs.
The next major advance in computer processors will likely be the move from today's two-dimensional chips to three-dimensional circuits, and the first three-dimensional synchronization circuitry is now running at 1.4 gigahertz at the University of Rochester.

Unlike past attempts at 3-D chips, the Rochester chip is not simply a number of regular processors stacked on top of one another. It was designed and built specifically to optimize all key processing functions vertically, through multiple layers of processors, the same way ordinary chips optimize functions horizontally. The design means tasks such as synchronicity, power distribution, and long-distance signaling are all fully functioning in three dimensions for the first time.

But with vertical expansion will come a host of difficulties, and Friedman says the key is to design a 3-D chip where all the layers interact like a single system. Friedman says getting all three levels of the 3-D chip to act in harmony is like trying to devise a traffic control system for the entire United States—and then layering two more United States above the first and somehow getting every bit of traffic from any point on any level to its destination on any other level—while simultaneously coordinating the traffic of millions of other drivers.

Complicate that by changing those two added layers to something like China and India, where the driving laws and roads are quite different, and the complexity and challenge of designing a single control system to work in any chip begins to become apparent, says Friedman.

Since each layer could be a different processor with a different function, such as converting MP3 files to audio or detecting light for a digital camera, Friedman says that the 3-D chip is essentially an entire circuit board folded up into a tiny package. He says the chips inside something like an iPod could be compacted to a tenth their current size with ten times the speed.

Eby Friedman has co-written a book on three-dimensional integrated circuit design.

V. F. Pavlidis and E. G. Friedman, Three-Dimensional Integrated Circuit Design, Morgan Kaufmann, 2008, ISBN # 978-0-12-374343-

September 14, 2008

Curing Disease Versus Curing Disease and Aging

The Speculist has a clear article and graph that explain how curing all diseases (other than aging) compares to curing aging in terms of life expectancy. This is simplified.

If we cure all diseases (all diseases, that is, except aging itself), 20% will make it to 95. So if you're part of that lucky 1 in 5, curing all disease would give you only 10 years more than you would have had in 1960.

The graph assumes that we never get any better at avoiding accidents or violence - but at least 80% of us would make it to age 250, and 20% would live 1000 years or more.
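The fixed-accident-rate assumption can be illustrated with a constant-hazard survival model: if aging no longer raises mortality with age, survival to age t is simply (1 − p)^t for an annual extrinsic death rate p. The 0.1% rate below is an assumed illustration, not a figure from the article:

```python
import math

def survival(annual_death_rate, years):
    """Probability of surviving `years` under a constant annual hazard."""
    return (1.0 - annual_death_rate) ** years

def median_lifespan(annual_death_rate):
    """Age by which half of a cohort has died under a constant hazard."""
    return math.log(0.5) / math.log(1.0 - annual_death_rate)

p = 0.001  # assumed 0.1%/yr death rate from accidents and violence
print(f"survive to age 250: {survival(p, 250):.0%}")
print(f"median lifespan:    {median_lifespan(p):.0f} years")
```

With that assumed hazard, roughly 78% of a cohort reaches 250 and the median lifespan is close to 700 years, which is in the same ballpark as the graph's 80%-to-250 claim.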

Aubrey de Grey and Chris Phoenix worked out various graphs based on how old someone is when effective anti-aging therapy is developed and the constant need for ever improving rejuvenation treatments.

The de Grey/Phoenix model predicts that, if progress in eliminating successive types of damage continues at a sufficient pace (which, we stress, is a pace very typical of past technologies – prominent examples include powered flight, computers and the combating of infectious diseases), the amount of damage present in the body can be kept low, with a negligible rate of age-related death per year, irrespective of a person’s age.

Your Chances for Extreme Longevity Depend on Your Age When the First in a Series of Rejuvenation Treatments Arrives

How long you could live, and the odds of it, depend on when major rejuvenation technology starts.
If you are 50 when SENS 1.0 arrives, then you have better than even odds of living hundreds of years.
If you are 70 when SENS 1.0 arrives, then you have a 10% chance of living hundreds of years.
If you are 80 when SENS 1.0 arrives, then you would most likely gain a few extra years and then die, as in the current normal situation.

The gap between successive anti-aging treatments and the quality of those treatments is also a factor. The above graph assumes the aging damage addressed by treatments is halved every 40 years. If the damage is halved at a faster pace, more people will live longer; if the treatments are less effective or take longer to develop, more people will die.
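The halving assumption can be sketched as a toy model: damage accumulates at a steady rate, and a new generation of treatment arriving every 40 years removes half of whatever damage is present. This is an illustration of the assumption only, with arbitrary units, not the actual de Grey/Phoenix model:

```python
def damage_over_time(accrual_per_year=1.0, treatment_interval=40,
                     removal_fraction=0.5, total_years=200):
    """Track accumulated damage with periodic rejuvenation treatments
    that each remove a fixed fraction of the damage present."""
    damage, history = 0.0, []
    for year in range(1, total_years + 1):
        damage += accrual_per_year
        if year % treatment_interval == 0:
            damage *= (1.0 - removal_fraction)
        history.append(damage)
    return history

h = damage_over_time()
# Damage stays bounded instead of growing without limit:
print(f"damage at year 200: {h[-1]:.2f} (untreated would be 200.00)")
```

The repeated halving keeps total damage oscillating below a fixed ceiling, which is the mechanism behind "negligible age-related death irrespective of a person's age" in the model above.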

Vaccines to eliminate extracellular aggregates (especially amyloid, which is a big part of Alzheimer's) seem likely to be deployed soon

Aubrey de Grey thinks there's a 50% chance of getting the first-generation SENS [life extension] therapies working within 25-30 years.

Stem cells could provide unlimited disease free blood and significant rejuvenation therapies.

Calorie Restriction mimicking drug could be available in five years and provide a 3-13 year boost in life expectancy.

De-aging livers and preventing cellular aging makes progress.

Light Crude Oil Below $100 a barrel

Light crude oil prices have fallen below $100 amid signs that refineries along the Gulf of Mexico coast will soon resume operations after shutting for Hurricane Ike and escaping major damage.

``It looks like we've dodged another bullet,'' said Peter Beutel, president of energy consultant Cameron Hanover Inc. in New Canaan, Connecticut. ``The refineries in the Houston area seem to have come out of the storm remarkably intact.''

Crude oil for October delivery fell $2.10, or 2.1 percent, to $99.08 a barrel at 1:22 p.m. on the Nymex. Futures touched $98.55, the lowest since Feb. 26. Prices are up 25 percent from a year ago. Gasoline for October delivery fell 12.91 cents, or 4.7 percent, to $2.6405 a gallon in New York.

Oil prices have fallen to $96 a barrel

On Saturday, the chief executive of Eni, Italy’s top oil company, predicted that prices might rapidly drop as low as $70 a barrel.

The financial crisis finally took the wind out of the great oil rally, with Lehman Brothers filing for bankruptcy and Merrill Lynch agreeing to sell itself to Bank of America. Analysts said the market had become convinced that Wall Street's meltdown could spread to other parts of the world, and that Asian economic growth would suffer, slowing down global oil demand.
