March 02, 2013

Japan altering regulations to approve practical regenerative medicine in 5 to 7 years instead of 10 years

The government will likely create a system to approve regenerative medicine products earlier than before as part of efforts to promote practical use of induced pluripotent stem cells and other such treatments.

The Health, Labor and Welfare Ministry has been proceeding with a plan to create the system; granting production approval at an earlier stage is a key pillar of the plan.

The government has so far approved only two regenerative medicine products--one for the reproduction of new skin to repair severe burns and another for the regeneration of knee cartilage.

The government's screening process, which usually takes about 10 years, is set to become a major bottleneck and hinder the progress of a large number of regenerative medicine products.

Under the new system, the government plans to grant approval with some conditions after confirming the safety and estimating the effectiveness of the products.

After the products are put on the market, the government will verify their effectiveness and grant official approval.

The introduction of the new system is expected to shorten the time needed to secure government approval by three to five years and help lessen the burden on companies--which should accelerate the commercialization of the products.

Japan's government also promises to back regenerative medicine and stem cell ventures with 110 billion yen ($1.17 billion) of hard cash.

Genetic level map of the newt regeneration process

Scientists at the University of Dayton have mapped out, at the genetic level, the process newts use to regenerate lenses, limbs and other tissue.

The research, published this week in BioMed Central's open access journal Genome Biology, identifies the protein families expressed during tissue regeneration in the common North American newt, laying the groundwork for research into what particular sets of genes are used for this purpose. This is the first comprehensive map of all RNA molecules — called the transcriptome — expressed in regeneration.

For 250 years, scientists have believed old age and repeated amputation weaken a newt's ability to regenerate. They were wrong. And that's good news for humans.

Panagiotis Tsonis, director of the University of Dayton's Center for Tissue Regeneration and Engineering at Dayton (TREND), said his discovery will benefit the entire field of regeneration research and brings us one step closer to a complete understanding of how newts regenerate, which could one day enable humans to replicate the process.

Nature Communications published Tsonis' research July 12, 2011. The study shows even after surgically removing the lens from a newt 18 times over 16 years, the newt was still able to regenerate a perfect lens. Tsonis' findings overturn long-accepted theories proposed by regeneration scientists, including Charles Darwin.

"When would a person benefit from regeneration most? It's when they are older," Tsonis said. "This shows the newt is an excellent source for finding answers to regeneration, particularly as it relates to old age. It has the ability to protect and preserve regeneration."

Genome Biology - A de novo assembly of the newt transcriptome combined with proteomic validation identifies new protein families expressed during tissue regeneration

Zhang Demonstrates a cloaking box at TED that makes anything placed behind it invisible to the audience

A scientist from Nanyang Technological University in Singapore demonstrated a small box made of calcite optical crystal that made anything placed behind it appear invisible to people watching the demonstration at the TED conference.

Professor Zhang admitted that his research was in its early stages, and said that his team was still working out how to make larger and more useful prototypes of the invisibility cloak.

“There are still many limitations here and I don’t have the answers for how to solve them,” he said. “At this stage, this is already the best we can do. There will be quite a long way to go before it can be applied on a practical level. But all researchers in this field, including myself, are making progress, albeit slowly.”

He said that his work with calcite might have more uses than hiding objects, as the substance could also help to improve optical fibres, such as the cables used for broadband internet, or create better “imaging” products such as digital cameras.

Previous Lab Demonstration by Zhang of Invisibility

March 01, 2013

Elon Musk tweets that the thrusters are working on Spacex Dragon

5 minutes ago Elon Musk tweeted
Pods 1 and 4 now online and thrusters engaged. Dragon transitioned from free drift to active control. Yes!!

SpaceX’s Falcon 9 rocket lifted off on time this morning from the Cape Canaveral Air Force Station, sending a Dragon freighter into orbit on its way to the International Space Station. The spacecraft separated from the second stage about 10 minutes after launch.

However, SpaceX announced that the Dragon experienced some kind of problem in orbit. The company then abruptly cut off its live coverage, saying it might have more to say at a press conference in a few hours.

There were problems starting up some of the four thrusters on the Dragon.

Apparently Elon says they fixed it.

Icarus Interstellar Fusion and Beamed Propulsion Starship Studies and Eight other Projects

Project Icarus President Richard Obousy gave an overview of the projects and work of the Icarus interstellar starship studies to the Centauri Dreams website.

Project Icarus is an engineering challenge and designer capability exercise to design an unmanned fusion based, interstellar starship capable of exploring a star system within 15 light-years. The total mission duration is limited to a maximum of 100 years from launch. This study started in September 2009 and is being conducted by an international team of volunteer physicists, engineers, and other suitably qualified people.

Project Forward, Beamed Propulsion Starship Study

Project Forward, led by Dr. Jim Benford, is a parallel study performed by members of Icarus Interstellar and affiliated organizations with expertise in the field of beamed propulsion. The study involves:

1) Analyzing past concepts to see if they are sub-optimal in terms of the recent cost-optimized model, and so can be improved, then quantifying such improved sail system concepts.

2) Exploring properties of materials that are being used for solar sails or have been suggested for beam-powered sails to determine their practicality. In particular, studying their properties in several domains of EM (microwave, millimeter wave, laser) to find out what accelerations they are limited to due to heating in the beam.

3) Quantifying an alternate use of sails: deceleration of sail probes from a fusion-powered starship as it approaches stellar systems.

Big Dog Robot gets a Robotic Arm to throw concrete blocks

BigDog handles heavy objects. The goal is to use the strength of the legs and torso to help power motions of the arm. This sort of dynamic, whole-body approach is routinely used by human athletes and animals, and will enhance the performance of advanced robots. The control techniques and actuators needed for dynamic manipulation are being developed by Boston Dynamics with funding from the Army Research Laboratory's RCTA program.

BigDog is the size of a large dog or small mule; about 3 feet long, 2.5 feet tall and weighs 240 lbs.

Seamless integration of computing into everyday objects

Seamless integration of computing into everyday objects isn’t quite here yet, in large part because we still don’t have cheap, thin, flexible electronics. But the technology is already on a path toward ubiquity: radio frequency identification (RFID) tags are used to track goods (and, increasingly, pets and people), flexible sensors in car seats warn parents not to leave their babies behind when they go shopping, and bendable displays are on the way for e-readers. These inherently flexible products can be mass-produced, and some can even be printed, inkjet style, to create large displays.

February 28, 2013

Superconducting Distributed propulsion - many small engines that are integrated with the airframe for radically different airplanes

Advances in computational and experimental tools along with new technologies in materials, structures, and aircraft controls, etc. are enabling a high degree of integration of the airframe and propulsion system in aircraft design. The National Aeronautics and Space Administration (NASA) has been investigating a number of revolutionary distributed propulsion vehicle concepts to increase aircraft performance. The concept of distributed propulsion is to fully integrate a propulsion system within an airframe such that the aircraft takes full synergistic benefit of coupling airframe aerodynamics and the propulsion thrust stream by distributing thrust using many propulsors on the airframe. Some of the concepts are based on the use of distributed jet flaps, distributed small multiple engines, gas-driven multi-fans, mechanically driven multi-fans, cross-flow fans, and electric fans driven by turboelectric generators. This paper describes some early concepts of the distributed propulsion vehicles and the current turboelectric distributed propulsion (TeDP) vehicle concepts being studied under NASA's Subsonic Fixed Wing (SFW) Project to drastically reduce aircraft related fuel burn, emissions, and noise by the year 2030 to 2035.

Other aspects of this work were covered in 2009

All Superconducting motors could be three times smaller

Next Generation More-Electric Aircraft: A Potential Application for HTS Superconductors

230 years after the first recorded manned lighter-than-air flight and 110 years after the first heavier-than-air flight

110 years ago, the Wrights took to the air on December 17, 1903, making two flights each from level ground into a freezing headwind gusting to 27 miles per hour (43 km/h). The first flight, by Orville, of 120 feet (37 m) in 12 seconds, at a speed of only 6.8 miles per hour (10.9 km/h) over the ground, was recorded in a famous photograph. The next two flights covered approximately 175 feet (53 m) and 200 feet (61 m), by Wilbur and Orville respectively. Their altitude was about 10 feet (3.0 m) above the ground.
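As a quick sanity check, the quoted 6.8 mph ground speed follows directly from covering 120 feet in 12 seconds. A minimal arithmetic sketch:

```python
# Check the flight figures quoted in the account above.
FT_PER_MILE = 5280

def ground_speed_mph(distance_ft, time_s):
    """Average ground speed in miles per hour from distance and time."""
    return distance_ft / time_s * 3600 / FT_PER_MILE

print(round(ground_speed_mph(120, 12), 1))  # first flight: 6.8 mph
print(round(ground_speed_mph(852, 59), 1))  # fourth flight: about 9.8 mph
```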

The following is Orville Wright's account of the final flight of the day:

Wilbur started the fourth and last flight at just about 12 o'clock. The first few hundred feet were up and down, as before, but by the time three hundred ft had been covered, the machine was under much better control. The course for the next four or five hundred feet had but little undulation. However, when out about eight hundred feet the machine began pitching again, and, in one of its darts downward, struck the ground. The distance over the ground was measured to be 852 feet; the time of the flight was 59 seconds. The frame supporting the front rudder was badly broken, but the main part of the machine was not injured at all. We estimated that the machine could be put in condition for flight again in about a day or two.

The original Wright brothers' plane is in the Smithsonian Air and Space Museum. The museum has changed out the cloth of the wings and body; however, the original cloth is also at the museum. It is a beautiful piece of technology.

Montgolfier Brothers and the first manned hot air balloon

Joseph-Michel Montgolfier and Jacques-Étienne Montgolfier were the inventors of the Montgolfière-style hot air balloon, globe aérostatique. The brothers succeeded in launching the first manned ascent, carrying Étienne into the sky on October 15, 1783.

Japan testing Convoys of Robotic Cargo Trucks to Save 15 percent or more on Fuel

Japan is already making headway with autonomous heavy duty trucks. In order to save fuel, the New Energy and Industrial Technology Development Organization (NEDO) has programmed a convoy of four trucks to drive just four meters (about 13 feet) apart. That cuts down on air resistance, reducing drag (and thus improving fuel efficiency) similar to drafting with a race car.

NEDO's been working on the idea for a while, demonstrating a three-car convoy with the cars travelling about 15 meters (about 50 feet) apart in 2010. Three years later, the organization's four-car caravan is far more efficient. Japanese broadcaster NHK reports that the plan could reduce the vehicles' fuel consumption by up to 15 percent.
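The arithmetic linking a drag reduction to a fuel saving can be sketched with a toy model. The numbers below (aerodynamic drag accounting for roughly half a highway truck's fuel burn, drafting cutting average drag by about 30%) are illustrative assumptions, not NEDO's figures:

```python
def fuel_saving(aero_share, drag_reduction):
    """Fraction of total fuel saved when aerodynamic drag is cut by
    `drag_reduction` and aero drag accounts for `aero_share` of fuel burn."""
    return aero_share * drag_reduction

# Illustrative assumptions: ~50% of a highway truck's fuel burn fights
# aerodynamic drag, and close-formation driving cuts average drag ~30%.
print(f"{fuel_saving(0.5, 0.3):.0%}")  # 15%, the order of magnitude NEDO reports
```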

This combines the road-train technology that Europe has worked on in the SARTRE program.

Radical Improvement like a 290 mpg car from 3D Printing Requires Radical Redesign

The Urbee 2 is a 3D printed car and it could revolutionize parts manufacturing while creating a cottage industry of small-batch automakers intent on challenging the status quo.

Precise Control to Enable Lighter and Stronger Construction

Making the car body via FDM [3D Printed ABS plastic via Fused Deposition Modeling (FDM)] affords precise control that would be impossible with sheet metal. When designer Jim Kor builds a bumper, the printer can add thickness and rigidity to specific sections. When applied to the right spots, this makes for a fender that’s as resilient as the one on your Prius, but much lighter. That translates to less weight to push, and a lighter car means more miles per gallon. The current model has a curb weight of just 1,200 pounds.

Radically fewer Parts for more Strength and Simplicity

To further remedy the issues caused by modern car-construction techniques, Kor used the design freedom of 3-D printing to combine a typical car’s multitude of parts into simple unibody shapes. For example, when he prints the car’s dashboard, he’ll make it with the ducts already attached without the need for joints and connecting parts. What would be dozens of pieces of plastic and metal end up being one piece of 3-D printed plastic.

More Aerodynamic Designs are Enabled

“The thesis we’re following is to take small parts from a big car and make them single large pieces,” Kor says. By using one piece instead of many, the car loses weight and gets reduced rolling resistance, and with fewer spaces between parts, the Urbee ends up being exceptionally aerodynamic. How aerodynamic? The Urbee 2’s teardrop shape gives it just a 0.15 coefficient of drag.
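To see what a 0.15 drag coefficient means in practice, aerodynamic drag power scales as ½·ρ·Cd·A·v³. A minimal sketch comparing the Urbee against a typical sedan; the frontal areas used here are assumed values for illustration, not figures from the article:

```python
def drag_power_kw(cd, frontal_area_m2, speed_kmh, rho=1.225):
    """Aerodynamic drag power in kW at a given road speed.

    rho is air density in kg/m^3 (sea-level standard by default)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * rho * cd * frontal_area_m2 * v**3 / 1000

# Assumed frontal areas for illustration: ~1.5 m^2 for the teardrop Urbee,
# ~2.2 m^2 for a typical sedan with Cd around 0.30.
urbee = drag_power_kw(0.15, 1.5, 100)
sedan = drag_power_kw(0.30, 2.2, 100)
print(f"Urbee ~{urbee:.1f} kW vs sedan ~{sedan:.1f} kW at 100 km/h")
```

Under these assumptions the Urbee needs roughly a third of the sedan's power to push through the air at highway speed.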

U.S. crude oil production tops 7 million barrels per day in December 2012, highest since December 1992

U.S. crude oil production exceeded an average 7 million barrels per day (bbl per day) in November and December 2012, the highest volume since December 1992.

The breakdown of oil production by state shows:

Texas had 2.22 million bpd in December (68.88 million barrels for the month)
North Dakota had 739,000 bpd in December (23.83 million barrels for the month)
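Converting a monthly total back to an average daily rate is a one-line calculation (December has 31 days); the Texas figures check out:

```python
def avg_bpd(monthly_barrels_millions, days_in_month=31):
    """Average barrels per day from a monthly total given in millions of barrels."""
    return monthly_barrels_millions * 1e6 / days_in_month

print(round(avg_bpd(68.88) / 1e6, 2))  # Texas, December 2012: 2.22 million bpd
```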

February 27, 2013

Feasibility Analysis for a Manned Mars Free-Return Mission in 2018 technical paper is online

The 18 page technical paper is online. Feasibility Analysis for a Manned Mars Free-Return Mission in 2018

In 1998 Patel et al searched for Earth-Mars free-return trajectories that leave Earth, fly by Mars, and return to Earth without any deterministic maneuvers after Trans-Mars Injection. They found fast trajectory opportunities occurring two times every 15 years with a 1.4-year duration, significantly less than most Mars free-return trajectories, which take up to 3.5 years. This paper investigates these fast trajectories. It also determines the launch and life support feasibility of flying such a mission using hardware expected to be available in time for an optimized fast trajectory opportunity in January, 2018.
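As background on why such opportunities recur, Earth-Mars alignments repeat with the synodic period, which follows from the two orbital periods. A sketch of that calculation, not the paper's trajectory work:

```python
EARTH_PERIOD_YR = 1.000  # Earth orbital period, years
MARS_PERIOD_YR = 1.881   # Mars orbital period, years

# Synodic period: time between successive identical Earth-Mars alignments.
synodic_yr = 1 / (1 / EARTH_PERIOD_YR - 1 / MARS_PERIOD_YR)
print(round(synodic_yr, 2))  # about 2.14 years between alignments
```

The especially fast 1.4-year free-return geometry recurs less often (about twice per 15 years, per the paper) because it also depends on where Mars sits in its eccentric orbit.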

The authors optimized the original trajectory using patched-conic approximations, and then modeled the trajectory using numerical integration with high fidelity force models and the JPL planetary ephemerides. They calculated an optimum trajectory launching in early January, 2018. At the Mars encounter, the spacecraft will pass within a few hundred kilometers of the surface. They investigated the Earth reentry conditions and developed some aerocapture options to mitigate G-loads on the returning crew. They also describe tradeoffs and studies necessary to develop the Thermal Protection System (TPS).

To size the Environmental Control and Life Support System (ECLSS), they set the initial mission assumption to two crew members for 500 days in a modified SpaceX Dragon class of vehicle. The journey is treated as a high-risk mission, which drives towards reliable but minimalist accommodations and provisions. As such, they investigated State Of the Art (SOA) technologies that would meet only basic human needs to support metabolic requirements and limited crew comfort allowances.

Spaceref has a review of the details of the plan

One concern is that a large coronal mass ejection would likely be fatal to the crew.

Mars Flyby Plan will cost 1 to 2 Billion dollars and will send Middle Aged Couple that Already had Children

Dennis Tito provided details of the Mars Manned Flyby Plan

The mission, a “return fly-by”, in which the spacecraft would fly around Mars rather than land, would last for 500 days. It is expected to cost between $1 billion and $2 billion, which Mr Tito is hoping to fund partly through television rights and by selling data to Nasa.

His organisation, Inspiration Mars, is planning to select a middle-aged couple who may have already had children and would be willing to accept the potential risk to their fertility from being exposed to radiation for a prolonged period. (Note: A few days ago, when details were not released and people discussed the problem of sending two people together, I noted that sending a married couple would make more sense.)

They would be forced to spend a year and a half together in a 14 ft x 12 ft Dragon spacecraft, accompanied by supplies ranging from more than a tonne of dehydrated food to 28 kg of lavatory paper.

They will use a private rocket (probably a Spacex Falcon Heavy) and space capsule (Spacex Dragon probably) and some kind of habitat that might be inflatable (Bigelow Aerospace), employing an austere design that could take people to Mars for a fraction of what it would cost NASA to do with robots.

UPDATE - The 18 page technical paper is online: Feasibility Analysis for a Manned Mars Free-Return Mission in 2018

A technical paper will be presented in a couple of weeks with details on the mission.

The Mars mission will launch January 5, 2018 to enable a free-return trajectory.

Fabrication Progress Toward Atomic Layer Deposition of solar nanorectennas that could collect solar energy with 70% efficiency

A novel fabrication technique developed by UConn engineering professor Brian Willis could provide the breakthrough technology scientists have been looking for to vastly improve today’s solar energy systems. Over the next year, Willis and his collaborators in Pennsylvania plan to build prototype rectennas and begin testing their efficiency.

For years, scientists have studied the potential benefits of a new branch of solar energy technology that relies on incredibly small nanosized antenna arrays that are theoretically capable of harvesting more than 70 percent of the sun’s electromagnetic radiation and simultaneously converting it into usable electric power.

The potential breakthrough lies in a novel fabrication process called selective area atomic layer deposition (ALD) that was developed by Willis, an associate professor of chemical and biomolecular engineering and the previous director of UConn’s Chemical Engineering Program.

It is through atomic layer deposition that scientists can finally fabricate a working rectenna device. In a rectenna device, one of the two interior electrodes must have a sharp tip, similar to the point of a triangle. The secret is getting the tip of that electrode within one or two nanometers of the opposite electrode, something similar to holding the point of a needle to the plane of a wall. Before the advent of ALD, existing lithographic fabrication techniques had been unable to create such a small space within a working electrical diode. Using sophisticated electronic equipment such as electron guns, the closest scientists could get was about 10 times the required separation. Through atomic layer deposition, Willis has shown he is able to precisely coat the tip of the rectenna with layers of individual copper atoms until a gap of about 1.5 nanometers is achieved. The process is self-limiting and stops at 1.5 nanometer separation.

The size of the gap is critical because it creates an ultra-fast tunnel junction between the rectenna’s two electrodes, allowing a maximum transfer of electricity. The nanosized gap gives energized electrons on the rectenna just enough time to tunnel to the opposite electrode before their electrical current reverses and they try to go back. The triangular tip of the rectenna makes it hard for the electrons to reverse direction, thus capturing the energy and rectifying it to a unidirectional current.
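The sensitivity to gap size can be illustrated with the textbook rectangular-barrier tunneling estimate, T ≈ exp(−2κd). This is a generic physics sketch, not the device's measured behavior, and the 1.0 eV barrier height below is an assumed value for illustration:

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electronvolt

def tunnel_transmission(gap_nm, barrier_ev=1.0):
    """Crude rectangular-barrier estimate: T ~ exp(-2 * kappa * d)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * gap_nm * 1e-9)

# Compare the ~1.5 nm ALD-defined gap with the ~10x larger gap that
# lithography could reach: the tunneling probability differs by dozens
# of orders of magnitude, which is why the ALD process matters.
ratio = tunnel_transmission(1.5) / tunnel_transmission(15.0)
print(f"{ratio:.1e}")
```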

Complex Carbon Nanotube Circuits Demonstrated

Subhasish Mitra and Philip Wong at Stanford University have built a wafer of complex carbon nanotube circuits.

The demonstration carbon nanotube circuit converts an analog signal from a capacitor—the same type of sensor found in many touch screens—into a digital signal that’s comprehensible by a microprocessor. The Stanford researchers rigged a wooden mannequin hand with the capacitive switch in its palm. When someone grasped the hand, turning on the switch, the nanotube circuit sent its signal to the computer, which activated a motor on the robot hand, moving it up and down to shake the person’s hand.

The nanotube circuit is still relatively slow—its transistors are large and far apart compared to the latest silicon circuits. But the work is an important experimental demonstration of the potential of carbon nanotube computing technology.

“This shows that carbon nanotube transistors can be integrated into logic circuits that perform at low voltage,” says Aaron Franklin, who is developing nanotube electronics at the IBM Watson Research Center.

Carbon complexity: This wafer is patterned with a complex carbon nanotube circuit that serves as a sensor interface

Holographic System Creates Live 3D movie of a room and its contents to allow firefighters to see through smoke and flames

The ability to see behind flames is a key challenge for the industrial field and particularly for the safety field. Development of new technologies to detect live people through smoke and flames in fire scenes is an extremely desirable goal since it can save human lives. The latest technologies, including equipment adopted by fire departments, use infrared bolometers for infrared digital cameras that allow users to see through smoke. However, such detectors are blinded by flame-emitted radiation. Here we show a completely different approach that makes use of lensless digital holography technology in the infrared range for successful imaging through smoke and flames. Notably, we demonstrate that digital holography with a cw laser allows the recording of dynamic human-size targets. In this work, easy detection of live, moving people is achieved through both smoke and flames, thus demonstrating the capability of digital holography at 10.6 μm.

In the United States, for example, fire departments respond to about 1.6 million fire calls per year, and domestic house fires make up the majority of them (3000 deaths occurring each year in house fires).

The recent generation of infrared (IR) bolometer detectors, commercially available for imaging in the IR spectrum in the range of 7–14 µm, are uncooled (i.e., they operate without liquid nitrogen), thus they are lighter in weight and have reached high density and resolution array (680×480 pixels; pixel size down to 25 µm). Also, the cost of such devices is no longer so high, considering their brilliant performance. Such devices allow passive or active clear vision (i.e., with laser IR illumination) through smoke or fog since IR electromagnetic radiation is scattered just slightly by fog drops or smoke particles. However, visible radiation is strongly affected by scattering, and vision can be completely impaired in such situations. Many fire departments use IR cameras based on a bolometer for exploring fire scenes in order to have clearer vision and to allow the rescue of human lives, or to operate safely in such a hostile environment. As explained above, while imaging through smoke is possible in the range of 7–14 µm, flames can completely blind the detector. In fact, electromagnetic radiation emitted by flames can severely saturate the detector, occluding the scene behind them.

Target imaging through smoke. (a) Metal object in Plexiglas™ box. Images recorded by a standard white-light photo camera before and after letting smoke into the box. (b) Thermographic imaging of the metal object through smoke. (c) Holographic amplitude reconstruction. This confirms that holography has the same capability as IR imaging to see through smoke.

4th Augmented Human Conference for augmented reality, smart environments, cognition enhancement and wearable devices

The Fourth Augmented Human conference will be held in Germany.

The conference looks at new work in augmented reality and having a more seamless and productive interaction with devices.

There are finger worn input devices.

Finger-worn interfaces remain a vastly unexplored space for user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. In this paper we present design guidelines and implementation of a finger-worn I/O device, the EyeRing, which leverages the universal and natural gesture of pointing. We present use cases of EyeRing for both visually impaired and sighted people. We discuss initial reactions from visually impaired users which suggest that EyeRing may indeed offer a more seamless solution for dealing with their immediate surroundings than the solutions they currently use. We also report on a user study that demonstrates how EyeRing reduces effort and disruption to a sighted user. We conclude that this highly promising form factor offers both audiences enhanced, seamless interaction with information related to objects in the environment.

February 26, 2013

Wearable Electronic sensor readings communicated via touch feedback can simulate a "Spider Sense"

Recent scientific advances allow the use of technology to expand the number of forms of energy that can be perceived by humans. Smart sensors can detect hazards that human senses are unable to perceive, for example radiation. This fusing of technology to human forms of perception enables exciting new ways of perceiving the world around us. In this paper we describe the design of SpiderSense, a wearable device that projects the wearer's near environment on the skin and allows for directional awareness of objects around him. The millions of sensory receptors that cover the skin present opportunities for conveying alerts and messages. We discuss the challenges and considerations of designing similar wearable devices.

There are three scenarios that can benefit from the use of SpiderSense.

1. One of the wearer's senses has already identified an object and SpiderSense helps localize the direction of the object. Pedestrians, for example, use their vision when walking to locate obstacles and avoid them. By using SpiderSense they could benefit by "feeling" on their body how far away, qualitatively, an obstacle is. This is especially useful if, at some point, the object is hidden from the wearer as they approach.

2. Sometimes senses are overwhelmed with information and SpiderSense may be used to ease the load on one sense by displaying this information through another sense. Firemen, for example, when working in a hazardous environment have limited visibility because of smoke and need to be constantly aware of their surroundings to avoid falling debris. By using SpiderSense they get spatial information of the room from the Sensor Modules, therefore potentially allowing them to concentrate their vision on the fire hazards.

3. There is an incoming obstacle or threat that is not being detected by any of the other senses (e.g. an intruder approaching from behind).
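The core idea in the scenarios above, mapping a range-sensor reading to a tactile intensity so that nearer objects press harder on the skin, can be sketched in a few lines. The function, linear mapping, and parameters here are hypothetical illustrations, not the authors' implementation:

```python
def vibration_intensity(distance_m, max_range_m=3.0):
    """Map a measured obstacle distance to an actuator intensity in [0, 1].

    Hypothetical linear mapping: 1.0 means the obstacle is at the skin,
    0.0 means it is at or beyond the sensor's maximum range.
    """
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

print(round(vibration_intensity(0.5), 2))  # nearby obstacle: strong feedback
```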

Alex Knapp at Forbes had coverage

The suit weighs a little over 3 pounds.
The prototype cost around $500.
Technology miniaturization and getting the SpiderSense into a production line will cut the costs down even more.

Revolutionary Superconducting Technology Capabilities are here and will scale for high impact deployment over the next 10-30 years

Hyper fuel efficient 150 person electric aircraft doable without superconductors using just today's technology

ESAero, which studied hybrid propulsion systems, became convinced that conventional, non-superconducting electrical systems could be made to work in a large aircraft. It was funded by NASA Ames to take the ECO-150 concept and rework it around ambient-temperature generator and motor technology available to meet NASA's 2020-25 timeframe N+2 goals (40% less fuel used, lower emissions and lower noise).

To the evident surprise of both ESAero and NASA, the N+2 ECO-150 design closed (met its requirements) despite having a significantly heavier turboelectric distributed-propulsion system using technology available today in industries outside aerospace. "Our main interest was could we even get the aircraft to close, and the answer is yes," says Gibson.

"This is our first shot at getting the aircraft to close, and performance is about equal to a CFM56-powered 737-700," he says. Without the benefit of high-efficiency superconducting motors and generators, the propulsors are significantly larger (below, superconducting on the right and non-superconducting on the left). Gibson says ESAero might redo the N+2 ECO-150 design and increase fan diameter, which would allow the motors to be shorter.

From the ESAero website.

China continues work toward nuclear aircraft carriers and launches new stealth missile frigates

Beijing has approved funding for major projects to develop core technologies for nuclear-powered vessels, a first official indication of plans to build nuclear-powered aircraft carriers.

China probably kicked off a research program aimed at developing nuclear reactors to power its future aircraft carriers.

A report posted on the website of the state-owned China Shipbuilding Industry Corporation (CSIC) on Feb. 19 stated that the Ministry of Science and Technology has formally kicked off an effort to develop nuclear power plants for ships.

All of the U.S. Navy's aircraft carriers and submarines are nuclear powered. The key advantage of nuclear powered ships is that they don't have to refuel nearly as often as conventionally fueled vessels -- think decades rather than months. (On a side note, naval nuclear reactors tend to use highly enriched uranium, the same stuff that's key to making nuclear weapons.)

China's planned homemade carriers are said to be based on the Liaoning's design and will incorporate lessons learned from operating the "starter carrier," as she has been called. Media reports have suggested that the first two locally built carriers will be conventionally powered and enter service around 2015, with a third nuclear-powered vessel possibly entering service around 2020.

Regional Power Balance

Economist - A fairly small carrier fitted with a “ski jump” ramp rather than a catapult, the Liaoning is no match for America’s Nimitz-class supercarriers, which are almost double the displacement, let alone the new Ford-class ships, the first of which is expected to enter service in 2015. Nor does China yet have any fast jets to fly from the Liaoning. The Shenyang J-15, a not entirely convincing copy of Russia’s Sukhoi Su-33, has flown, but is unlikely to enter service until 2016.

As a military threat to America, the Liaoning is therefore negligible and that will remain true even when it is joined over the next 15 years by two indigenously-built carriers that have been modelled on it. What worries America far more are the impressive anti-access/area denial capabilities that China has built up (mainly with missiles and submarines).

It is likely that the PLAN is seeking a more limited power-projection capability that will support the defence of China’s regional interests.

Efficient Stealth Missile Frigates

China has launched the first ship in a new class of stealth missile frigates.

The China Defense blog has coverage.

Hole Spin Quantum Dots Bring us closer to new high-speed quantum computers

A new method preserves spin qubits up to ten times longer: hole spins, rather than electron spins, can keep quantum bits in the same physical state far longer than previously possible.

The holes within hole spins, Frolov explained, are literally empty spaces left when electrons are taken out. Using extremely thin filaments called InSb (indium antimonide) nanowires, the researchers created a transistor-like device that could transform the electrons into holes. They then precisely placed one hole in a nanoscale box called “a quantum dot” and controlled the spin of that hole using electric fields. This approach, featuring nanoscale size and a higher density of devices on an electronic chip, is far more advantageous than magnetic control, which has typically been employed until now, said Frolov.

“Our research shows that holes, or empty spaces, can make better spin qubits than electrons for future quantum computers.”

“Spins are the smallest magnets in our universe. Our vision for a quantum computer is to connect thousands of spins, and now we know how to control a single spin,” said Frolov. “In the future, we’d like to scale up this concept to include multiple qubits.”
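The kind of electrically driven single-spin control described above can be pictured as Rabi oscillation of a two-level system. A minimal sketch (the 50 MHz Rabi frequency is an illustrative assumption, not a number from the reported experiment):

```python
import numpy as np

# A resonantly driven spin in a quantum dot behaves as a two-level system.
# In the rotating frame, the flip probability is sin^2(Omega * t / 2).
omega = 2 * np.pi * 50e6             # assumed Rabi (drive) frequency, rad/s
t = np.linspace(0, 100e-9, 1001)     # 0 to 100 ns
p_flip = np.sin(omega * t / 2) ** 2  # probability the spin has flipped

t_pi = np.pi / omega                 # duration of a pi-pulse (full spin flip)
print(f"pi-pulse: {t_pi * 1e9:.1f} ns")  # -> 10.0 ns
```

Faster electrical control relative to the decoherence time is exactly what longer-lived hole spins buy: more flips before the qubit loses its state.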

Nature Nanotechnology - Electrical control of single hole spins in nanowire quantum dots

Arxiv - Electrical control over single hole spins in nanowire quantum dots

Blueprint for a Memristor based artificial brain and neurorobotics

Andy Thomas constructed a memristor that is capable of learning. Thomas and Bielefeld University researchers are now using his memristors as key components in a blueprint for an artificial brain. Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Thomas is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve-like) computer is to function. Among these principles: memristors, just like synapses, have to ‘note’ earlier impulses, and neurons react to an impulse only when it passes a certain threshold.

Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.
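That continuous raising and lowering of resistance can be sketched with the widely cited HP linear ion-drift memristor model (the parameter values below are illustrative assumptions, not measurements of Thomas's devices):

```python
# Sketch of the HP linear ion-drift memristor model (illustrative parameters).
R_on, R_off = 100.0, 16_000.0   # resistance when fully doped / undoped (ohms)
D = 10e-9                        # device thickness (m)
mu = 1e-14                       # dopant mobility (m^2 s^-1 V^-1)
dt = 5e-4                        # time step (s)

w = D / 2                        # width of doped region (the "memory" state)
history = []
for step in range(2000):
    v = 1.0 if step < 1000 else -1.0          # +1 V pulse train, then -1 V
    R = R_on * (w / D) + R_off * (1 - w / D)  # resistance varies continuously
    w += mu * (R_on / D) * (v / R) * dt       # state drifts with current flow
    w = min(max(w, 0.0), D)                   # dopant front stays inside device
    history.append(R)

print(f"resistance swung between {min(history):.0f} and {max(history):.0f} ohms")
```

Each voltage pulse nudges the doped-region width w, so the resistance, and hence the "synaptic weight", changes gradually rather than snapping between two levels the way a bit does.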

Hewlett Packard should begin selling chips in 2014 with billions to trillions of memristors that compete with flash memory in density and scale. Memristors seem able to create simple, scalable and efficient devices for mimicking trillions of neurons and synapses.

Schematic representation of two interconnected neurons. The contact areas where the information is transmitted are called synapses. A signal from the presynaptic cell is transmitted through the synapses to the postsynaptic cell.

Journal of Physics D: Applied Physics - Memristor-based neural networks

DARPA VTOL X-Plane pushes for a novel mix of fixed-wing and rotary-wing design to efficiently double the speed of helicopters

One of the greatest challenges of the past half century for aerodynamics engineers has been how to increase the top speeds of aircraft that take off and land vertically without compromising the aircraft's lift-to-power ratio in hover or its efficiency during long-range flight.

The versatility of helicopters and other vertical take-off and landing (VTOL) aircraft makes them ideal for a host of military operations. Currently, only helicopters can maneuver in tight areas, land in unprepared areas, move in all directions, and hover in midair while holding a position. This versatility often makes rotary-wing and other VTOL aircraft the right aerial platform for transporting troops, surveillance operations, special operations and search-and-rescue missions.

"For the past 50 years, we have seen jets go higher and faster while VTOL aircraft speeds have flat-lined and designs have become increasingly complex," said Ashish Bagai, DARPA program manager. "To overcome this problem, DARPA has launched the VTOL X-Plane program to challenge industry and innovative engineers to concurrently push the envelope in four areas: speed, hover efficiency, cruise efficiency and useful load capacity."

"We have not made this easy," he continued. "Strapping rockets onto the back of a helicopter is not the type of approach we're looking for. The engineering community is familiar with the numerous attempts in the past that have not worked. This time, rather than tweaking past designs, we are looking for true cross-pollinations of designs and technologies from the fixed-wing and rotary-wing worlds. The elegant confluence of these engineering design paradigms is where this program should find some interesting results."

Carnival of Space 290

Harvard study shows wind power will have a lot of trouble scaling beyond a few terawatts

The generating capacity of large-scale wind farms has been overestimated.

Each wind turbine creates behind it a "wind shadow" in which the air has been slowed down by drag on the turbine's blades. The ideal wind farm strikes a balance, packing as many turbines onto the land as possible, while also spacing them enough to reduce the impact of these wind shadows. But as wind farms grow larger, they start to interact, and the regional-scale wind patterns matter more.

Keith’s research has shown that the generating capacity of very large wind power installations (larger than 100 square kilometers) may peak at between 0.5 and 1 watts per square meter. Previous estimates, which ignored the turbines' slowing effect on the wind, had put that figure at between 2 and 7 watts per square meter.
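The revised power density translates directly into land area. A back-of-envelope check (the arithmetic is mine, not from the paper):

```python
# Land area implied by Keith's revised ~1 W/m^2 power density for very
# large wind farms, versus the upper end of the older 2-7 W/m^2 estimates.
target_w = 3e12                          # 3 TW, ~10% of global energy use
for density in (1.0, 7.0):               # watts per square meter of farm
    area_km2 = target_w / density / 1e6  # m^2 -> km^2
    print(f"{density:.0f} W/m^2 -> {area_km2:,.0f} km^2")
# At 1 W/m^2 the 3 TW target needs ~3 million km^2, roughly the area
# of India; at 7 W/m^2 it would have been about 430,000 km^2.
```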

“If wind power’s going to make a contribution to global energy requirements that’s serious, 10 or 20 percent or more, then it really has to contribute on the scale of terawatts in the next half-century or less,” says Keith.

If we were to cover the entire Earth with wind farms, he notes, “the system could potentially generate enormous amounts of power, well in excess of 100 terawatts, but at that point my guess, based on our climate modeling, is that the effect of that on global winds, and therefore on climate, would be severe—perhaps bigger than the impact of doubling CO2.”

“Our findings don't mean that we shouldn’t pursue wind power—wind is much better for the environment than conventional coal—but these geophysical limits may be meaningful if we really want to scale wind power up to supply a third, let’s say, of our primary energy,” Keith adds.

“It’s worth asking about the scalability of each potential energy source—whether it can supply, say, 3 terawatts, which would be 10 percent of our global energy need, or whether it’s more like 0.3 terawatts and 1 percent.”

“Wind power is in a middle ground,” he says. "It is still one of the most scalable renewables, but our research suggests that we will need to pay attention to its limits and climatic impacts if we try to scale it beyond a few terawatts."

Carnival of Nuclear Energy 145

The Carnival of Nuclear Energy 145 is up at Atomic Power Review

ANS Nuclear Cafe looks at potential nuclear plant closures and what could be done to stop them from closing

Kewaunee may not be the last plant to close for purely economic reasons. Many experts are saying that several other small plants in merchant power markets (including Vermont Yankee, Fitzpatrick, Nine Mile Point, Cooper, Ginna, Indian Point, and Clinton) are at risk of closing, due to weak demand and continuing low natural gas prices.

* Staffing is higher than it needs to be.
* Natural gas prices will not stay as low as they are now.

February 25, 2013

F-22 Deadly to F-22 Pilots and US Budgets

The Air Force admitted losing two of its 184 — make that 182 — top-of-the-line F-22 Raptor stealth fighters on Thursday. It was one of the worst days yet in what’s turning out to be a bad year for the pricey, radar-evading jet built by Lockheed Martin.

The F-22 stealth fighter costs as much as $678 million per copy.

The recent crashes are only the latest bad news for the cutting-edge F-22, which currently ranks as the Air Force’s most accident-prone fighter. The last of the Raptors rolled out of the Marietta, Georgia, factory in December and flew into a veritable firestorm of controversy.

The Air Force twice grounded all or some of the fleet over concerns about the Raptor’s apparently faulty oxygen system, which might have contributed to a fatal crash in 2010. Two F-22 pilots even mutinied, refusing to fly the speedy, high-flying jet until the Air Force worked out its problems. Months of investigation costing millions of dollars failed to definitively solve the jet’s oxygen woes, although the Air Force is installing a backup oxygen generator just in case.

It seems clear neither the May crash nor yesterday’s incident are related to the stealth plane’s oxygen flaw. But that hardly softens the blow from the recent mishaps. The Air Force wanted 381 F-22s but in 2009 then-Defense Secretary Robert Gates cut that number to just 187, dismissing the pricey jet as a “niche, silver-bullet solution” to the Pentagon’s air-defense needs.

Oxygen Flaw and Raptor Cough

The F-22’s faulty oxygen system has, since at least 2008, been choking pilots, leading to confusion, memory loss and blackouts (collectively known as hypoxia) that may have contributed to at least one fatal crash. Ground crews have also reported growing sick while working around F-22s whose engines are running.

The Air Force claims it has a handle on the in-flight blackouts. All 180 or so F-22s are having faulty filters removed and new backup oxygen generators installed. There have also been changes to the G-suits pilots wear. But the Air Force says the alterations won’t do anything to fix the so-called “Raptor cough,” a chronic condition afflicting almost all F-22 pilots.

Elon Musk Talks Spacex and Mentions Hyperloop on Jimmy Kimmel

Elon Musk was interviewed on Jimmy Kimmel.

Spacex is making version 2 of the Dragon spacecraft with landing gear and rockets for propulsive landing. This work is likely proceeding in parallel with the Grasshopper reusable first-stage effort.

There is no more news on Hyperloop, other than that Elon wants to get Tesla profitable before publishing the Hyperloop design, so as not to make shareholders irate that he is distracted.

Tesla is expected to be slightly profitable this quarter.

Model S production began last June but started slowly. The company made 3,100 cars during the year, with the vast majority of them - 2,750 - built during the fourth quarter. The company delivered 2,650 cars for the year. The car is priced from $59,900 to $94,900 before state and federal incentives are factored in.

Tesla had 15,000 reservations for the Model S on hand at the end of the year. More than 6,000 of those reservations landed in the fourth quarter. But during the same period, about 4,000 potential customers dropped reservations they had already made when asked by Tesla to configure their cars and lock in the sale.

Still, Musk said the company is generating enough new reservations to sell its entire expected production run of 20,000 cars this year. Anyone placing a reservation for a Model S faces an average wait of five months.


Solid-State Sequencer can Scan 1 million nucleotides per second for targeted sequences

Nabsys, a DNA technology startup, showed off today its solid-state gene sequencing machine at the Advances in Genome Biology and Technology conference in Marco Island, Florida. The company says that later this year it will begin selling its machine, which will allow researchers to determine the structural organization of long stretches of DNA. This differs from most existing sequencing methods, which read DNA in short snippets that are later stitched together by software. The new system will, at first, complement existing methods, but it could eventually offer cheaper and faster sequencing than other approaches.

Groups such as Oxford Nanopore (see “Nanopore Sequencing”), which introduced its technology a year ago at the same conference, and Gundlach’s lab are developing nanopore technologies as another method for getting long sequences, but so far no nanopore technology has made it to the market. These systems use a biological pore as the site of DNA analysis, which limits the speed at which DNA can be read.

Nabsys’s technology also passes DNA through a pore, but instead of the protein pore approach that Oxford Nanopore and others are taking, Nabsys uses a pore cut into a solid-state chip. According to the journal Biotechniques, Oxford Nanopore’s system can process DNA at a maximum rate of 400 bases per second. Nabsys claims its system can read up to a million nucleotides per second. Such speed could be critical in clinical settings, where fast diagnoses are needed to make treatment choices.
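The quoted rates put the throughput gap in perspective. A rough comparison over a human-genome-sized 3.2 billion bases (the single-detector framing is my simplification; real instruments run many detectors in parallel):

```python
# Time for one single-detector pass over a human-genome-sized run of DNA
# at the two quoted read rates.
genome_bases = 3.2e9
for name, rate in (("protein nanopore, 400 b/s ", 400.0),
                   ("Nabsys solid-state, 1M nt/s", 1e6)):
    seconds = genome_bases / rate
    print(f"{name}: {seconds / 3600:,.1f} hours per pass")
# 400 b/s works out to over 2,200 hours; 1 million nt/s to under an hour.
```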

Combination Sequencing with Nabsys Positional Maps

Because of the highly repetitive nature of human DNA and the relatively short length scales over which DNA sequencing platforms obtain information, assembling the data produced by these platforms is computationally intensive and results in contig lengths that are very short compared to the lengths of chromosomes. Complete genomes typically have required additional finishing to unambiguously place repeated or difficult regions. In contrast, Positional Sequencing as developed by Nabsys, using nanoscale detectors and specific hybridization probes, will provide information over hundreds of kilobases and even megabases of contiguous sequence. Specifically, the platform locates, with sub-diffraction-limit resolution, the positions of oligonucleotide probes that have bound to long DNA fragments. This information can be assembled into contigs whose lengths approach the lengths of chromosomes. It can also be used to automatically finish sequencing projects as well as correct misassemblies.

The limitations of short read sequencing are becoming more and more recognized. Long range sequence information such as that offered by Nabsys Positional Sequencing is essential for full genomic analysis. We have demonstrated a complementary relationship between short read sequencing and Nabsys mapping. The combination of these two low cost technologies produces sequence quality surpassing that of current standard practices.

We have demonstrated the ability to place short read contigs on a genome wide scaffold. This type of information is useful for discovering clinically relevant structural variants. We have also demonstrated the ability to detect and correct misassemblies in short read de novo sequence. These improvements point the way to a regime in which sequencing is not only fast and cheap but also correct and complete.

Electrophoresis - Mapping and sequencing DNA using nanopores and nanodetectors

Samsung Galaxy Note 8.0 and Chromebook Pixel are like the Acura NSX - Brand Defining Extreme Products

Branding is the use of a name, term, symbol or design to give a product a unique identity in the marketplace. Samsung and Google appear to be using their most recent products to shift or entrench their brands in the smartphone/tablet and laptop categories.

Samsung announced the launch of the 8 inch tablet, the GALAXY Note 8.0; a new era of intelligent Note technology set to re-ignite the mid-size tablet category that Samsung established in 2010. Providing unrivalled multimedia performance within a compact one-hand-grip screen, the GALAXY Note 8.0 has the power and advanced technology to evolve the tablet experience and ensure you achieve new levels of efficient multi-tasking while benefitting from superb voice call functionality. Furthermore, the intelligent S Pen brings together the latest innovation and the ease of using a traditional pen and paper; creating a sophisticated mobile experience that will enhance life on the go.

The Galaxy Note 8.0 features a 1.6 GHz quad-core CPU, a WXGA display, 5-megapixel camera with 1080p HD recording, a front-facing camera, cellular connectivity and Android 4.1 Jelly Bean with TouchWiz. It is very similar in size to the Apple iPad Mini.

The Google Chromebook Pixel has a higher screen resolution (2560x1700) than the Apple MacBook Air (1440x900). The Chromebook Pixel costs $1,299, or $1,449 for the LTE version. The screen is 12.85 inches. Interestingly, no one is talking about holding the Chromebook Pixel with LTE up to their ear; apparently the obvious solution of using the speaker or some kind of earbud is expected.

Chromebook is not just a utilitarian product.

The Chromebook Pixel shows that the line can go high-end as well.

The Chromebook Pixel has a lot of pixels per inch.
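The pixels-per-inch figure follows from resolution and diagonal by simple geometry. A quick check (the 13.3-inch diagonal for the 1440x900 MacBook Air is my assumption; the article gives only the resolutions):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal pixel count divided by diagonal in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"Chromebook Pixel: {ppi(2560, 1700, 12.85):.0f} ppi")  # ~239
print(f"MacBook Air 13in: {ppi(1440, 900, 13.3):.0f} ppi")    # ~128
```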

February 24, 2013

Brookhaven on the verge of revolutionizing superconducting magnet technology to 20-25 tesla in 2013 and 35-40 tesla by 2018

A 15+ tesla high temperature superconductor solenoid was already designed, built and tested in 2012. An all-superconductor solenoid can be combined with a conventional 10 tesla magnet to achieve a hybrid 25 tesla. Ramesh Gupta of Brookhaven National Laboratory and others are working toward a more ambitious 20-25 tesla goal (all high temperature superconductor) in 2013 across multiple programs.

The ultimate target is about 40 tesla in a hybrid design (HTS+LTS).

• High strength HTS conductors (e.g., with Hastelloy substrate from SuperPower) are very attractive for high field applications
• Progress in conductors to date has been impressive. There is even more room for progress – even higher Ic and more uniform Ic
• But the conductor is only the beginning. There are several challenges in making very high field magnets out of it

If the Large Hadron Collider were upgraded to 25 tesla high temperature superconducting magnets, it could have three times the colliding energy. There have also been breakthroughs with cryocoolers that do not use helium, which can cool down to 12-50K with lower operating costs.
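The "three times the energy" figure follows from the fact that a synchrotron's beam energy scales linearly with its bending field at fixed ring radius. A rough check (the formula and the LHC's ~2.8 km effective bending radius are standard accelerator-physics values, not from the article):

```python
# Beam energy of a proton synchrotron at fixed bending radius:
# E [TeV] ~= 0.3 * B [T] * rho [km] for ultrarelativistic protons.
rho_km = 2.804                     # LHC effective bending radius
for b_field in (8.33, 25.0):       # current NbTi dipoles vs proposed HTS
    e_tev = 0.3 * b_field * rho_km
    print(f"{b_field:5.2f} T -> ~{e_tev:.1f} TeV per beam")
# 8.33 T reproduces the ~7 TeV design energy; 25 T gives ~21 TeV, about 3x.
```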

Zubrin on Green Antihumanism and Paul Ehrlich repeats call for more Abortion and Birth control

On February 11, 2013, the Denver Post ran a guest commentary of great clinical interest. In the piece in question, Colorado State University philosophy professor Philip Cafaro advanced the argument that immigration needs to be sharply cut, because otherwise people from Third World nations will come to the United States and become prosperous, thereby adding to global warming.

“And make no mistake: Immigrants are not coming to the United States to remain poor,” warns the philosopher. “Those hundreds of millions of new citizens will want to live as well and consume energy at the same rates as other Americans. . . . What climate change mitigation measures . . . could possibly equal the increased greenhouse gas emissions we would lock in by adding 145 million more new citizens to our population?”

Robert Zubrin notes that according to Cafaro’s liberal argument, the wretched of the Earth must be kept poor wherever they reside, because otherwise they will ruin the weather for the rest of us. Following this logic, the United States should adopt the role of the world’s oppressor, enforcing the continuation of poverty around the globe.

The argument has always been the same:

1. There isn’t enough of x to go around.
2. Therefore human numbers, activities, or liberties must be severely constrained.
3. Those of us enlightened by wisdom must be empowered to do the constraining.
4. And having obtained such power, let’s make the best of it and stick it to those we despise anyway.

All these cases were frauds. Ireland never lacked the capacity to feed its people. During the entire “great famine,” the island continued to produce massive amounts of beef and grain. The Irish just couldn’t afford to buy any of it due to the enforcement of rack-renting, high taxation, and suppression of manufactures. Germany never needed additional living space. It has a bigger population now than it did under the Third Reich, on much less land, yet it has a far higher living standard. Hitler just used the Lebensraum imperative as an excuse for genocide. Contrary to Population Bomb author Paul Ehrlich, the world was not overpopulated in 1967. In fact, since that time, as world population has doubled, average GDP per capita has nearly tripled. Yet, unfortunately, that did not stop population-control advocates from obtaining billions of dollars of U.S. taxpayer money to help Third World regimes stop reproduction among their poor, in general, and despised national minorities, in particular. And there is certainly no moral case for limiting carbon emissions.

Paul Ehrlich wrote The Population Bomb and has been wrongly predicting starvation doom since the 1960s.

Paul Ehrlich has again repeated his forecast of a food calamity, claiming the only solution is population control.

Quantum Hypercube Memory will Enable Parallel Small Quantum Computers to Provide Exponential Speed up over Classical Computing

A quantum computer doesn't need to be a single large device but could be built from a network of small parts, new research from the University of Bristol has demonstrated. As a result, building such a computer would be easier to achieve.

Many groups of research scientists around the world are trying to build a quantum computer to run algorithms that take advantage of the strange effects of quantum mechanics such as entanglement and superposition. A quantum computer could solve problems in chemistry by simulating many body quantum systems, or break modern cryptographic schemes by quickly factorising large numbers.

Previous research shows that if a quantum algorithm is to offer an exponential speed-up over classical computing, there must be a large entangled state at some point in the computation and it was widely believed that this translates into requiring a single large device.

In a paper published in the Proceedings of the Royal Society A, Dr Steve Brierley of Bristol's School of Mathematics and colleagues show that, in fact, this is not the case. A network of small quantum computers can implement any quantum algorithm with a small overhead.

The key breakthrough was learning how to efficiently move quantum data between the many sites without causing a collision or destroying the delicate superposition needed in the computation. This allows the different sites to communicate with each other during the computation in much the same way a parallel classical computer would do.
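The data-movement idea has a classical analogue: on a hypercube network of 2^n small nodes, bit-fixing routing reaches any node in at most n hops. A sketch of that routing pattern (an illustration of the network topology, not the quantum protocol in the paper):

```python
def bit_fix_route(src: int, dst: int) -> list:
    """Route on a hypercube by correcting differing address bits from
    lowest to highest; each correction is one hop along a cube edge."""
    path, node, bit = [src], src, 0
    while node != dst:
        if (node ^ dst) >> bit & 1:
            node ^= 1 << bit        # flip one bit = traverse one edge
            path.append(node)
        bit += 1
    return path

# 16-node (4-dimensional) hypercube: at most 4 hops between any pair.
print(bit_fix_route(0b0000, 0b1011))  # -> [0, 1, 3, 11]
```

Because path lengths grow only logarithmically with node count, communication between the many small sites adds only a small overhead to the computation.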

Arxiv - Efficient Distributed Quantum Computing

Saudi Arabia, China, Kuwait, Pakistan and Azerbaijan Competing for the Next World's Tallest Building

Kuwait, China and Azerbaijan had already announced their plans for the tallest tower in previous years; now Pakistan has announced new plans as well.

Kingdom Holding Company has commenced work on the 1,000-metre-high Kingdom Tower in Jeddah, which is set to overtake the Burj Khalifa, currently the tallest tower in the world, by 2017.

With a total construction area of over 500,000 square meters, the Kingdom Tower will be a mixed-use building, featuring a Four Seasons Hotel, Four Seasons serviced apartments, office space, luxury condominiums and an observatory at a higher level than the world's current highest observation deck.

The Kingdom Tower complex will contain 59 elevators, including 54 single-deck and five double-deck elevators, along with 12 escalators. Elevators serving the observatory will travel at a rate of 10 meters per second in both directions. Another unique feature of the design is a sky terrace, roughly 30 meters (98 feet) in diameter, at level 157. It is an outdoor amenity space intended for use by the penthouse floor.

The Kingdom Tower was previously known as the Mile-High Tower. It is a supertall skyscraper proposed for construction in Jeddah, Saudi Arabia at a preliminary cost of SR4.6 billion (US$1.23 billion). It will be the centerpiece and first phase of a SR75 billion (US$20 billion) proposed development known as Kingdom City that will be located along the Red Sea on the north side of Jeddah. If completed as planned, the tower will reach unprecedented heights, becoming the tallest building in the world, as well as the first structure to reach the one-kilometer mark. The tower was initially planned to be 1.6 kilometres (1 mile) high; however, the geology of the area proved unsuitable for a tower of that height.

Kingdom Tower

China’s Broad Group is waiting for government approval to build the 220-storey Sky City in Changsha, 10 meters taller than the 828-metre Burj Khalifa. Once approved, the tower is to be erected in a mere three months.

The 838-meter-high Sky City is expected to have residences, offices, elementary and secondary schools, a kindergarten, a home for the elderly, a hospital, stores, a hotel, sports and entertainment centres and 17 helipads, and to house nearly 30,000 people.

Sky City will use BSB modular technology which features 95 per cent factory prefabrication with a construction pace of five storeys a day.
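The three-month claim is at least consistent with the quoted pace. A sanity check (arithmetic only; ignores foundations, fit-out and weather):

```python
# How long does stacking 220 storeys take at five storeys a day?
storeys = 220
storeys_per_day = 5
days = storeys / storeys_per_day
print(f"{days:.0f} days of stacking at the quoted pace")  # -> 44 days
```

That leaves roughly half of the three-month window for everything the prefabrication pace does not cover.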

Plans for a 2,000 meter, 636 story follow-up to Sky City

Pinned up on the office wall of Broad Group's CEO Zhang Yue are plans for an even more audacious project: a building that is two kilometers high. When asked to estimate the odds of this 636-floor giganto-scraper ever being built, Zhang responds without hesitation, "One hundred percent! Some say that it's sensationalism to construct such a tall building. That's not so. Land shortages are already a grave problem. There's also the very serious transportation issue. We must bring cities together and stretch for the sky in order to save cities and save the Earth. We must eliminate most traffic, traffic that has no value! And we must reduce our dependency on roads and transportation."

Nextbigfuture has discussed what it would mean to have 600-700 story buildings.

Higher density and larger cities would boost a city's per capita GDP.

Sky City skyscrapers (200-300 stories) and robotic cars (4 times the density of road traffic) will make certain megacities (future New York, Shanghai, Tokyo, etc.) home to one third to one half of the world's population, with 75% more GDP per capita than they have today. There would be rural, regular urban and then super-urban areas. Research shows that doubling population and increasing urban density boosts productivity by about 15%.
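The 75% figure is roughly what the 15%-per-doubling rule gives when compounded. A quick check (assuming, as an illustration, that the 15% productivity gain compounds once per population doubling):

```python
import math

# How many 15%-per-doubling productivity boosts compound to a 75% gain?
gain_per_doubling = 0.15
target = 1.75                    # 75% more GDP per capita
doublings = math.log(target) / math.log(1 + gain_per_doubling)
print(f"about {doublings:.1f} doublings of city population")
# 1.15^4 ~= 1.75, so roughly four doublings (a 16x larger city)
```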

Search for Modifications and Alternatives after the NIF fusion laser missed a key milestone

The world's biggest laser missed a key target date on the road to producing clean energy via nuclear fusion, but an independent review panel says the technology holds enough promise to continue the quest – with a few modifications.

NIF's approach was to fire a 192-beam laser at a metal shell the size of a pencil eraser, holding a ball of frozen hydrogen. This produces a burst of X-rays that heats and compresses the hydrogen, fusing the nuclei in a brief implosion.

When NIF was being built in the 1990s, computer models predicted that short laser pulses delivering 1.8 megajoules of energy would create the pressures needed for ignition. The giant laser surpassed this energy level last year but still wasn't achieving enough pressure.

Until we know why NIF fell short, the panel recommends trying out other options, such as shifting to a different type of laser. For instance, firing an electron beam through a mixture of krypton and fluorine produces bright laser pulses at a shorter wavelength. This technology is less mature, but if it works it could implode the targets more uniformly than NIF's lasers.

Google Expanding the Googleplex

Hint of 150 MHz radio emission from the Neptune-mass extrasolar transiting planet HAT-P-11b

Since the radio-frequency emission from planets is expected to be strongly influenced by their interaction with the magnetic field and corona of the host star, the physics of this process can be effectively constrained by making sensitive measurements of the planetary radio emission. Up to now, however, numerous searches for radio emission from extrasolar planets have only yielded negative results. Here we report deep radio observations of the nearby Neptune-mass extrasolar transiting planet HAT-P-11b at 150 MHz, using the Giant Metrewave Radio Telescope (GMRT). On July 16, 2009, we detected a 3σ emission whose light curve is consistent with an eclipse when the planet passed behind the star. This emission is at a position 14′′ from the transiting exoplanet’s coordinates; thus, with a synthesized beam of FWHM∼16′′, the position uncertainty of this weak radio signal encompasses the location of HAT-P-11. We estimate a 5% false positive probability that the observed radio light curve mimics the planet’s eclipse light curve. If the faint signature is indeed a radio eclipse event associated with the planet, then its flux would be 3.87 mJy±1.29 mJy at 150 MHz. However, our equally sensitive repeat observations of the system on November 17, 2010 did not detect a significant signal in the radio light curve near the same position. This lack of confirmation leaves us with the possibility of either a variable planetary emission, or a chance occurrence of a false positive.

Exoplanet Habitability adjusted for atmospheric pressure and seasonality

A new Energy Balance Model (EBM) provides more insight into the habitability of extrasolar planets. It also has a seasonal model of planetary climate, with new prescriptions for most physical quantities. Researchers use the EBM to investigate the surface habitability of planets with an Earth-like atmospheric composition but different levels of surface pressure. The habitability, defined as the mean fraction of the planet's surface on which liquid water could exist, is estimated from the pressure-dependent liquid water temperature range, taking into account seasonal and latitudinal variations of surface temperature. By running several thousand EBM simulations they generated a map of the habitable zone (HZ) in the plane of the orbital semi-major axis, a, and surface pressure, p, for planets in circular orbits around a Sun-like star.

As pressure increases, the HZ becomes broader, with an increase of 0.25 AU in its radial extent from p=1/3 bar to p=3 bar. At low pressure, the habitability is low and varies with a; at high pressure, the habitability is high and relatively constant inside the HZ. We interpret these results in terms of the pressure dependence of the greenhouse effect, the efficiency of horizontal heat transport, and the extent of the liquid water temperature range. Within the limits discussed in the paper, the results can be extended to planets in eccentric orbits around non-solar type stars. The main characteristics of the pressure-dependent HZ are modestly affected by variations of planetary properties, particularly at high pressure.

Circumstellar habitable zone of planets with Earth-like atmospheres and different levels of surface pressure obtained with our EBM climate simulations. Abscissae: semi-major axis, a (bottom axis), or insolation (top axis). Ordinates: logarithm of the total surface pressure, p. The circles indicate solutions with mean global annual habitability h over 0. The area of the circles is proportional to h; the colors are coded according to the mean annual global surface temperature, Tm. The size and color scales are shown in the legend. The solid lines are contours of equal mean temperature Tm = 273 K (magenta), 333 K (red) and 393 K (black). Results above the contour at Tm = 333 K (red line) are tentative; see Section 3.3. Red crosses: simulations stopped on the basis of the water loss limit criterion; blue crosses: simulations interrupted when Tm less than Tmin.
