February 04, 2017

Evidence Supporting Accelerated Universe Expansion

The Hubble constant — the rate at which the Universe is expanding — is one of the fundamental quantities describing our Universe. A group of astronomers from the H0LiCOW collaboration, led by Sherry Suyu, Max Planck professor at the Technical University Munich (TUM) and the Max Planck Institute for Astrophysics in Garching, Germany, used the NASA/ESA Hubble Space Telescope and other telescopes in space and on the ground to observe five galaxies in order to arrive at an independent measurement of the Hubble constant.

The new measurement is completely independent of — but in excellent agreement with — other measurements of the Hubble constant in the local Universe that used Cepheid variable stars and supernovae as points of reference.

However, the value measured by Suyu and her team, like the values measured using Cepheids and supernovae, differs from the measurement made by the ESA Planck satellite. But there is an important distinction — Planck measured the Hubble constant for the early Universe by observing the cosmic microwave background.

While the value for the Hubble constant determined by Planck fits with our current understanding of the cosmos, the values obtained by the different groups of astronomers for the local Universe are in disagreement with our accepted theoretical model of the Universe. “The expansion rate of the Universe is now starting to be measured in different ways with such high precision that actual discrepancies may possibly point towards new physics beyond our current knowledge of the Universe,” elaborates Suyu.

The targets of the study were massive galaxies positioned between Earth and very distant quasars — incredibly luminous galaxy cores. The light from the more distant quasars is bent around the huge masses of the galaxies as a result of strong gravitational lensing. This creates multiple images of the background quasar, some smeared into extended arcs.
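As a rough illustration of what the Hubble constant means: in the local Universe, recession velocity grows linearly with distance (v = H0 * d). A minimal sketch, using an illustrative value of about 72 km/s/Mpc, close to the local measurements discussed here:

```python
# Illustrative Hubble-law calculation: v = H0 * d.
# The H0 value is approximate and for illustration only.
H0 = 72.0  # km/s per megaparsec, roughly the local-Universe value

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7,200 km/s.
print(recession_velocity(100))  # 7200.0
```

The tension with Planck amounts to a few km/s/Mpc in this one number, which is why such precise independent measurements matter.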

International astronomers using the NASA/ESA Hubble Space Telescope have made an independent measurement of how fast the Universe is expanding. The newly measured expansion rate for the local Universe is consistent with earlier findings. These are, however, in intriguing disagreement with measurements of the early Universe.
Credits: NASA, ESA, Suyu (Max Planck Institute for Astrophysics), Auger (University of Cambridge)



Elon Musk tweets picture of his tunneling machine as he plans to make tunneling up to ten times faster

Elon Musk has tweeted out a picture of his tunneling machine.

Last week, Musk admitted that “they don’t really know what they are doing” when it comes to digging holes, but he sees an opportunity to significantly increase the speed of making tunnels.

Elon Musk estimates a 5 to 10x increase in tunneling speed is possible.

The goal is to make tunneling easier and faster so that tunnels become more popular, which Musk sees as a solution to traffic in urban areas and a way to bring transportation into the three-dimensional world – like buildings.









Process for producing ammonia that generates electricity instead of consuming energy. 500 million tons of ammonia are made each year for fertilizer

Nearly a century ago, German chemist Fritz Haber won the Nobel Prize in Chemistry for a process to generate ammonia from hydrogen and nitrogen gases. The process, still in use today, ushered in a revolution in agriculture, but now consumes around one percent of the world's energy to achieve the high pressures and temperatures that drive the chemical reactions to produce ammonia.

Today, University of Utah chemists publish a different method, using enzymes derived from nature, that generates ammonia at room temperature. As a bonus, the reaction generates a small electrical current.

Although chemistry and materials science and engineering professor Shelley Minteer and postdoctoral scholar Ross Milton have only been able to produce small quantities of ammonia so far, their method could lead to a less energy-intensive source of ammonia, which is used worldwide as a vital fertilizer.
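The scale of the hydrogen input can be estimated from the reaction stoichiometry alone (N2 + 3 H2 -> 2 NH3); a back-of-the-envelope sketch, using the 500-million-ton annual production figure from the headline above:

```python
# Stoichiometry of ammonia synthesis: N2 + 3 H2 -> 2 NH3.
# Molar masses in g/mol.
M_H2, M_NH3 = 2.016, 17.031

# Mass of hydrogen consumed per unit mass of ammonia produced:
# 3 mol H2 per 2 mol NH3.
h2_per_nh3 = (3 * M_H2) / (2 * M_NH3)

# Hydrogen needed for the ~500 million tons of ammonia made each year.
h2_tons = 500e6 * h2_per_nh3
print(round(h2_per_nh3, 3))   # ~0.178 tons of H2 per ton of NH3
print(round(h2_tons / 1e6))   # ~89 million tons of H2 per year
```

This is why even a small efficiency gain in ammonia synthesis matters at global scale.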




Angewandte Chemie International Edition - Bioelectrochemical Haber–Bosch Process: An Ammonia-Producing H2/N2 Fuel Cell

Google Word Lens translates written Japanese in real time

The Google Word Lens app is now available in Japanese. You’ll never have to worry about taking a wrong turn on a busy Shibuya street or ordering something you wouldn't normally eat.

The Google Translate app already lets you snap a photo of Japanese text and get a translation for it in English. But it’s a whole lot more convenient if you can just point your camera and instantly translate text on the go. With Word Lens, you just need to fire up the Translate app, point your camera at the Japanese text, and the English translations will appear overlaid on your screen—even if you don't have an Internet or data connection. It’s every savvy traveller’s dream.

With the addition of Japanese, the instant translation feature Word Lens now supports about 30 languages, including Chinese. Word Lens can translate both Simplified and Traditional Chinese to English, or the other way around.








23,000 atoms precisely mapped in nanoparticle

Scientists at Berkeley Lab's Molecular Foundry used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.




Nature - Deciphering chemical order/disorder and material properties at the single-atom level

FedEx is investing in autonomous trucks, and is interested in delivery robots and an Alexa app

Your FedEx package might someday be delivered by a robot.

Rob Carter, FedEx’s chief information officer, says the shipping giant is considering small vehicles that could drive around neighborhoods and make deliveries without human drivers.

Carter is responsible for setting the technology agenda across FedEx’s various operating companies, including its planes-and-trucks Express shipping service and office-and-home Ground delivery service, which operate in 220 countries.

The investments FedEx makes in AI and robotics technologies could shape the multi-trillion-dollar logistics market, affecting everything from the way people send and receive parcels to the global movement of large fleets of vehicles.

FedEx is working with the startup Peloton Technology, whose semi-autonomous technology electronically links trucks into small caravan groups called platoons. The system, which uses wireless vehicle-to-vehicle communication to let the driver of a lead truck control the gas and brakes of a truck following closely behind it, is designed to reduce wind resistance and save fuel. The technology is considered a significant step toward fully autonomous trucks, and Peloton has said it will release it in late 2017.

Carter says FedEx is also “very much interested in” completely autonomous trucking and has partnered with several automakers that specialize in that technology, including Daimler and its Freightliner truck division and Volvo. Daimler has piloted semi-autonomous trucks on highways in Nevada and Germany while Volvo recently demonstrated a fully autonomous construction truck in an underground Swedish mine. Carter says he expects to see “significant implementations” of automated vehicles in the shipping industry within 10 years, but declined to specify when FedEx might adopt semi- or fully autonomous trucks.



February 03, 2017

1,000 times more efficient nano-LED opens door to faster microchip

The electronic data connections within and between microchips are increasingly becoming a bottleneck in the exponential growth of data traffic worldwide. Optical connections are the obvious successors, but optical data transmission requires an adequate nanoscale light source, and this has been lacking. Scientists at Eindhoven University of Technology (TU/e) have now created a light source that has the right characteristics: a nano-LED that is 1,000 times more efficient than its predecessors, and is capable of handling data speeds of gigabits per second. They have published their findings in the online journal Nature Communications.

With electrical cables reaching their limits, optical connections like fiberglass are increasingly becoming the standard for data traffic. Over longer distances almost all data transmission is optical. Within computer systems and microchips, too, the growth of data traffic is exponential, but that traffic is still electronic, and this is increasingly becoming a bottleneck. Since these connections ('interconnects') account for the majority of the energy consumed by chips, many scientists around the world are working on enabling optical (photonic) interconnects. Crucial to this is the light source that converts the data into light signals, which must be small enough to fit into the microscopic structures of microchips. At the same time, the output capacity and efficiency have to be good. Efficiency in particular is a challenge: small light sources, powered by nano- or microwatts, have to date performed very inefficiently.

This is a scanning electron microscope picture of the new nano-LED, including some details. Credit: Eindhoven University of Technology

Nature Communications - Waveguide-coupled nanopillar metal-cavity light-emitting diodes on silicon

China carrying forward with large scale development of nuclear energy from US research that has been underdeveloped

China’s rapid nuclear expansion will result in it overtaking the U.S. as the nation with the largest atomic power capacity by 2026, according to BMI Research.

The world’s second biggest economy will almost triple its nuclear capacity to nearly 100 gigawatts by 2026, making it the biggest market globally, analysts said in a note dated Jan. 27. The nation added about 8 gigawatts of nuclear power last year, boosting its installed capacity to about 34 gigawatts, according to BMI.
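The implied growth rate can be checked with a quick calculation, assuming a ten-year run from the roughly 34 GW installed at the end of 2016:

```python
# Implied annual growth rate to nearly triple nuclear capacity:
# from about 34 GW (end of 2016) to about 100 GW by 2026.
start_gw, end_gw, years = 34.0, 100.0, 10

cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 11.4% per year
```

Sustaining double-digit annual capacity growth for a decade is what makes the 2026 crossover with the US plausible.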

China has 20 reactors currently under construction, according to the International Atomic Energy Agency. Another 176 are either planned or proposed, far more than any other nation, according to the World Nuclear Association.

Coal’s share in the nation’s energy mix will gradually fall to just under 54 percent by 2026 from its current 70 percent.

China is building new conventional reactors, as well as investing in research for new next-generation hardware, such as thorium molten-salt reactors, high-temperature gas-cooled reactors, and sodium-cooled fast reactors.

The new reactors are not that exotic; nuclear technology has been virtually frozen in the United States and other countries since the 1970s. The main advances since then have been modular construction and the computerization of control systems.

Molten salt reactors were tested in the 1960s. The boiling water and pressurized water reactors built in the 1960s and 1970s were actually expected to be stopgap technology until better systems were developed. During the Nixon administration the expectation was that by the year 2000 there would be thousands of nuclear reactors. Instead, high interest rates in the 1970s, cheap oil, coal and gas, and the increased cost of regulations after Three Mile Island froze nuclear energy development.

China is carrying forward with large-scale development of nuclear energy based on US research that was left underdeveloped. There are some startups in the US, Canada, UK and other countries that are trying to develop next-generation nuclear power, but most of them are poorly funded compared to the larger efforts in China.

China also has the advantage that the vast majority of all new power generation is being built in China. The US, Japan and Europe, with 0 to 3% GDP growth, only need about 1% more power generation each year after energy efficiency. China, with 7% GDP growth, is adding 5% more power each year. China is planning to double its power generation by the 2030s from its current level, and is already at about 130% of the US power generation level.
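The doubling claim is consistent with the 5% annual growth figure; a quick sketch of the doubling-time arithmetic:

```python
import math

# Doubling time for power generation growing about 5% per year,
# consistent with China's plan to double generation by the 2030s.
growth = 0.05
doubling_years = math.log(2) / math.log(1 + growth)
print(round(doubling_years, 1))  # about 14.2 years, i.e. early 2030s from 2017
```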

China is building and spending over ten times more on building new energy generation and distribution than the USA.

The US would only get closer to the scale of what China is doing in energy if it chose to replace all of its coal generation.



Trump may fund the SpaceX Mars colonization plan

Elon Musk, the founder of SpaceX and Tesla, has made trips to Trump Tower. He met with Trump and, the Washington Post has been reliably told, discussed Mars and public-private partnerships.

Elon Musk and SpaceX have the bold dream of colonizing Mars, and think they can launch the first human mission to the surface of the Red Planet as soon as 2024 — when Trump, if reelected, would still be in the White House. (We understand that Musk also talked with Trump about other issues, including the need for a smart grid — the kind of infrastructure that would give a boost to the solar energy business, in which Musk is a leader via his investments in the company SolarCity.)

Trump seems to be cozying up to Elon Musk and is entertaining the idea of financing Musk’s Mars colonization project

Elon's Vision of the Mars Colony

Initially, glass panes with carbon fiber frames to build geodesic domes on the surface, plus a lot of miner/tunneling droids. With the latter, you can build out a huge amount of pressurized space for industrial operations and leave the glass domes for green living space.



Real Mars and Spacex Plans

The current Mars plan is:

  1. Send Dragon scouting missions, initially just to make sure we know how to land without adding a crater and then to figure out the best way to get water for the CH4/O2 Sabatier Reaction.
  2. Heart of Gold spaceship flies to Mars loaded only with equipment to build the propellant plant.
  3. First crewed mission with equipment to build rudimentary base and complete the propellant plant.
  4. Try to double the number of flights with each Earth-Mars orbital rendezvous, which is every 26 months, until the city can grow by itself.
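The 26-month cadence in step 4 is the Earth-Mars synodic period, which follows directly from the two planets' orbital periods; a quick check:

```python
# Earth-Mars launch windows recur at the synodic period:
# 1 / (1/T_earth - 1/T_mars), using sidereal orbital periods in days.
T_EARTH, T_MARS = 365.25, 686.98

synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)
synodic_months = synodic_days / 30.44  # average month length in days
print(round(synodic_days))    # about 780 days
print(round(synodic_months))  # about 26 months
```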


The Flight Tank for the Interplanetary Transport System was the most important part of the announcement

The flight tank will actually be slightly longer than the development tank shown, but the same diameter.

It was built with the latest and greatest carbon fiber prepreg. In theory, it should hold cryogenic propellant without leaking and without a sealing liner. Early tests are promising.

SpaceX will take it up to 2/3 of burst pressure on an ocean barge in the coming weeks.

The spaceship would be limited to around 5 g's nominal, but able to take peak loads 2 to 3 times higher without breaking up.

The booster would take a nominal 20 g's, and maybe 30 to 40 g's without breaking up.

SpaceX and Elon Musk have released the 61-page presentation of the Interplanetary Transport System and the plan from early exploration to a sustainable colony on Mars.

SpaceX has built a full-sized carbon composite fuel tank.

The Interplanetary Transport System can launch 550 tons to low Earth orbit, which is nearly four times as much as the Saturn V. It would also be over four times as powerful as the final version of NASA's SLS.




Next version of Falcon 9 will have uprated thrust

The final Falcon 9 has a lot of minor refinements that collectively are important, but uprated thrust and improved legs are the most significant.

Elon thinks the F9 boosters could be used almost indefinitely, so long as there is scheduled maintenance and careful inspections. Falcon 9 Block 5 -- the final version in the series -- is the one that has the most performance and is designed for easy reuse, so it just makes sense to focus on that long term and retire the earlier versions. Block 5 starts production in about 3 months and initial flight is in 6 to 8 months, so there isn't much point in ground testing Block 3 or 4 much beyond a few reflights.


Robert Zubrin, Longtime Mars Colonization advocate, gives a Critique of the SpaceX Interplanetary Transport System.

Zubrin was struck by many good and powerful ideas in the Musk plan. However, Musk’s plan assembled some of those good ideas in an extremely suboptimal way, making the proposed system impractical. Still, with some corrections, a system using the core concepts Musk laid out could be made attractive — not just as an imaginative concept for the colonization of Mars, but as a means of meeting the nearer-at-hand challenge of enabling human expeditions to the planet.

Zubrin explains the conceptual flaws of the new SpaceX plan, showing how they can be corrected to benefit, first, the near-term goal of initiating human exploration of the Red Planet, and then, with a cost-effective base-building and settlement program, the more distant goal of future Mars colonization.

Robert Zubrin, a New Atlantis contributing editor, is president of Pioneer Energy of Lakewood, Colorado, and president of the Mars Society.

Highlights
* Have the second stage go only out to the distance of the Moon and return, enabling 5 payloads to be sent instead of one
* Leave the 100-person capsule on Mars and have only a small cabin return to Earth
* Use the refueling in orbit and other optimizations to enable a Falcon Heavy to deliver 40 tons to Mars instead of 12 for exploration missions in 2018, 2020 and beyond
* Use the reusable first stage to make rocketplanes going point to point anywhere on Earth feasible; Falcon Heavy would have the capacity of a Boeing 737 and could travel anywhere in about one hour

There are videos of the Elon Musk presentation and an interview with Zubrin about the Musk plan at the bottom of the article


SpaceX Falcon Heavy




Design of the SpaceX Interplanetary Transport System

As described by Musk, the SpaceX ITS would consist of a very large two-stage fully-reusable launch system, powered by methane/oxygen chemical bipropellant. The suborbital first stage would have four times the takeoff thrust of a Saturn V (the huge rocket that sent the Apollo missions to the Moon). The second stage, which reaches orbit, would have the thrust of a single Saturn V. Together, the two stages could deliver a maximum payload of 550 tons to low Earth orbit (LEO), about four times the capacity of the Saturn V. (Note: All of the “tons” referenced in this article are metric tons.)

At the top of the rocket, the spaceship itself — where some hundred passengers reside — is inseparable from the second stage. (Contrast this with, for example, NASA’s lunar missions, where each part of the system was discarded in turn until just the Command Module carried the Apollo astronauts back to Earth.) Since the second-stage-plus-spaceship will have used its fuel in getting to orbit, it would need to refuel in orbit, filling up with about 1,950 tons of propellant (which means that each launch carrying passengers would require four additional launches to deliver the necessary propellant). Once filled up, the spaceship can head to Mars.
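The four-tanker figure follows from the vehicle's own numbers; a quick sketch, assuming each tanker flight delivers roughly the 550-ton maximum LEO payload:

```python
import math

# Tanker flights needed to refuel the spaceship in orbit, assuming each
# tanker flight delivers roughly the 550-ton maximum LEO payload.
propellant_tons = 1950
payload_per_flight = 550

flights = math.ceil(propellant_tons / payload_per_flight)
print(flights)  # 4, matching the "four additional launches" figure
```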

The duration of the journey would of course depend on where Earth and Mars are in their orbits; the shortest one-way trip would be around 80 days, according to Musk’s presentation, and the longest would be around 150 days. (Musk stated that he thinks the architecture could be improved to reduce the trip to 60 or even 30 days.)

After landing on Mars and discharging its passengers, the ship would be refueled with methane/oxygen bipropellant made on the surface of Mars from Martian water and carbon dioxide, and then flown back to Earth orbit.

SpaceX will have to fix turbopumps in the next version of Falcon 9 to qualify it for NASA crewed flights

The Wall Street Journal indicates a forthcoming report from the US Government Accountability Office focuses most closely on issues with turbopumps in SpaceX's Falcon 9 rocket. The report has found a "pattern of problems" with the turbine blades within the turbopumps, which deliver rocket fuel into the combustion chamber of the Merlin rocket engine. Some of the components used in the turbopumps are prone to cracks, the government investigators say, and may require a redesign before NASA allows the Falcon 9 booster to be used for crewed flights. NASA has been briefed on the report's findings, and the agency's acting administrator, Robert Lightfoot, told the newspaper that he thinks “we know how to fix them.”

A spokesman for SpaceX, John Taylor, said the company already has a plan in place to fix the potential cracking issue. "We have qualified our engines to be robust to turbine wheel cracks," Taylor said. "However, we are modifying the design to avoid them altogether. This will be part of the final design iteration on Falcon 9." This final variant of the Falcon 9 booster, named Block 5, is being designed for optimal safety and easier return for potential reuse. According to company founder Elon Musk, it could fly by the end of this year.

The new report also cites other problems with the commercial crew development efforts by SpaceX and Boeing. The latter company, for example, may be having difficulty with ensuring the reliability of its parachute systems to bring crews safely back to a land-based landing.

SpaceX and Boeing are struggling to meet NASA's mission requirement for a loss-of-crew probability of 1-in-270. NASA has previously acknowledged this issue, citing the challenge of dealing with micrometeoroid and orbital debris.
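The 1-in-270 requirement can be put in perspective with a quick illustrative calculation (treating missions as independent, which is a simplification):

```python
# NASA's commercial crew requirement: loss-of-crew probability no worse
# than 1-in-270 per mission. Illustrative compounding over many flights.
p_loss = 1 / 270

def p_all_safe(n_flights):
    """Probability that n independent flights all avoid loss of crew."""
    return (1 - p_loss) ** n_flights

print(f"{p_all_safe(10):.1%}")  # about 96.4% over 10 flights
```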




February 02, 2017

Elon Musk believes he can convince Trump to support colonizing Mars






Toward all-solid lithium batteries

Most batteries are composed of two solid, electrochemically active layers called electrodes, separated by a polymer membrane infused with a liquid or gel electrolyte. But recent research has explored the possibility of all-solid-state batteries, in which the liquid (and potentially flammable) electrolyte would be replaced by a solid electrolyte, which could enhance the batteries’ energy density and safety.

Now, for the first time, a team at MIT has probed the mechanical properties of a sulfide-based solid electrolyte material, to determine its mechanical performance when incorporated into batteries.

Lithium-ion batteries have provided a lightweight energy-storage solution that has enabled many of today’s high-tech devices, from smartphones to electric cars. But substituting the conventional liquid electrolyte with a solid electrolyte in such batteries could have significant advantages. Such all-solid-state lithium-ion batteries could provide even greater energy storage ability, pound for pound, at the battery pack level. They may also virtually eliminate the risk of tiny, fingerlike metallic projections called dendrites that can grow through the electrolyte layer and lead to short-circuits.

“Batteries with components that are all solid are attractive options for performance and safety, but several challenges remain,” says MIT professor Krystyn Van Vliet. In the lithium-ion batteries that dominate the market today, lithium ions pass through a liquid electrolyte to get from one electrode to the other while the battery is being charged, and then flow through in the opposite direction as it is being used. These batteries are very efficient, but “the liquid electrolytes tend to be chemically unstable, and can even be flammable,” she says. “So if the electrolyte was solid, it could be safer, as well as smaller and lighter.”

Lithium metal anodes exhibit a significant increase in capacity compared to state-of-the-art graphite anodes. This could translate into about a 100 percent increase in energy density compared to [conventional] Li-ion technology.


Advanced Energy Materials - Compliant Yet Brittle Mechanical Behavior of Li2S–P2S5 Lithium-Ion-Conducting Solid Electrolyte

Young's modulus, hardness, and fracture toughness are measured by instrumented nanoindentation for an amorphous Li2S–P2S5 Li-ion solid electrolyte. Although low elastic modulus suggests accommodation of significant chemomechanical strain, low fracture toughness can facilitate brittle crack formation in such materials.

Modular construction improvement for the second Ford-class aircraft carrier

There is an improved build strategy for the second Gerald R. Ford-class aircraft carrier. The Kennedy (CVN-79) is being built using more modular construction, a process where smaller sections of the ship are welded together to form large structural units, equipment is installed, and the large units are lifted into the dry dock using the shipyard’s 1,050-metric-ton gantry crane. The modules can weigh over 1,000 tons.

CVN-79 is about 25 percent complete and set for delivery in 2022. The ship is on track to be completed with 445 module lifts, which is 51 fewer than Ford and 149 fewer than USS George H.W. Bush (CVN-77), the last Nimitz-class carrier, according to a company statement.
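The reported reductions imply the lift counts for the earlier ships; a quick sketch of the arithmetic:

```python
# Module-lift counts implied by the reported reductions:
# Kennedy (CVN-79) needs 445 lifts, 51 fewer than Ford and 149 fewer than Bush.
kennedy_lifts = 445
ford_lifts = kennedy_lifts + 51    # Gerald R. Ford (CVN-78)
bush_lifts = kennedy_lifts + 149   # George H.W. Bush (CVN-77)
print(ford_lifts, bush_lifts)  # 496 594
```

Fewer, larger lifts mean more outfitting is done before modules reach the dry dock, which is the source of the schedule and cost savings.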


The largest module in the first Ford carrier was 1,026 tons.



Rewritable Paper that uses light and no ink

Developing efficient photoreversible color switching systems for constructing rewritable paper is of significant practical interest owing to the potential environmental benefits including forest conservation, pollution reduction, and resource sustainability. Here we report that the color change associated with the redox chemistry of nanoparticles of Prussian blue and its analogues could be integrated with the photocatalytic activity of TiO2 nanoparticles to construct a class of new photoreversible color switching systems, which can be conveniently utilized for fabricating ink-free, light printable rewritable paper with various working colors. The current system also addresses the phase separation issue of the previous organic dye-based color switching system so that it can be conveniently applied to the surface of conventional paper to produce an ink-free light printable rewritable paper that has the same feel and appearance as the conventional paper. With its additional advantages such as excellent scalability and outstanding rewriting performance (reversibility over 80 times, legible time over 5 days, and resolution over 5 μm), this novel system can serve as an eco-friendly alternative to regular paper in meeting the increasing global needs for environment protection and resource sustainability.

Currently, paper production and disposal have a large negative impact on the environment: paper production is a leading source of industrial pollution, discarded paper is a major component (approximately 40%) of landfills, and even recycling paper contributes to pollution due to the process of ink removal. There is also the issue of deforestation: in the US, about one-third of all harvested trees are used for paper and cardboard production.

Working to address these problems, researchers have been investigating alternatives to disposable paper. One possibility is to take advantage of the color-switching ability of certain chemicals when exposed to light, although in the past this approach has faced challenges in terms of stability, limited reversibility, high cost, toxicity, and difficulty in applying the coating to ordinary porous paper.



Nanoletters - Photocatalytic Color Switching of Transition Metal Hexacyanometalate Nanoparticles for High-Performance Light-Printable Rewritable Paper

Space based observation for earthquakes

The quantity and quality of satellite-geodetic measurements of tectonic deformation have increased dramatically over the past two decades, improving our ability to observe active tectonic processes. We now routinely respond to earthquakes using satellites, mapping surface ruptures and estimating the distribution of slip on faults at depth for most continental earthquakes. Studies directly link earthquakes to their causative faults, allowing us to calculate how resulting changes in crustal stress can influence future seismic hazard. This revolution in space-based observation is driving advances in models that can explain the time-dependent surface deformation and the long-term evolution of fault zones and tectonic landscapes.

The next decade should see us begin to discriminate between earthquake models using more and better Earth Observation data that describe the evolution of deformation in space and time for an increasing number of earthquake faults. The models make specific predictions about the temporal and spatial behavior of deformation that can be discriminated with long time-series of observations. At the same time, complementary data from seismic imaging and rheological constraints from rock mechanics will be vital in solving this problem.

Satellite geodesy offers the opportunity to measure the complete earthquake cycle: first, coseismic slip in the seismogenic upper crust, its relationship with aftershocks and fault segmentation; second, postseismic deformation localized on fault structures as shallow and deep afterslip, or more widely distributed through the ductile lower crust and upper mantle flow as viscoelastic relaxation; and third, interseismic strain accumulation across fault zones between earthquakes. By using the high spatial and temporal resolution of satellite observations, it will become possible to determine the time-dependent rates of deformation as well as the spatial extent of shear zones and weak zones beneath faults. Improved measurements of these processes in time and space will allow us to better constrain the lateral variability and depth-dependent rheology within the crust.

On a broader scale, Earth Observation data are now reaching the spatial resolution and accuracy to enable us to assess the fundamental mechanics of how continents deform. We have known for decades that the continents do not deform as large rigid plates like the oceans, but the kinematics and dynamics of continental deformation are still unclear. The debate has historically been polarized between two end-member views. In one, the continents have been considered to act like a viscous fluid, with internal buoyancy forces playing a key role in controlling the distribution of deformation, and faults only acting as passive markers reflecting the deformation of a deeper, controlling layer. The alternative view has been that the continents can be considered to be a collection of rigid blocks, each behaving in essence like an independent plate.

Resolving this issue is important for earthquake hazard assessment: we need to understand the degree to which deformation and earthquakes are focused on the major, ‘block-bounding’ faults, as opposed to being distributed throughout the continents. Long time-series of surface deformation data from Earth Observation satellites will enable us to quantify the degree to which deformation occurs away from the major ‘block-bounding’ faults.


Conceptual cartoon of deformation in the crust and uppermost mantle.

High quality graphene made from soybean oil in a single step

Until now, the high cost of graphene production has been the major roadblock to its commercialization. Previously, graphene was grown in a highly controlled environment with explosive compressed gases, requiring long hours of operation at high temperatures and extensive vacuum processing. Australian CSIRO scientists have developed a novel “GraphAir” technology which eliminates the need for such a highly controlled environment. The technology grows graphene film in ambient air with a natural precursor, making its production faster and simpler.

“This ambient-air process for graphene fabrication is fast, simple, safe, potentially scalable, and integration-friendly,” said CSIRO scientist Dr Zhao Jun Han, co-author of the paper published today in Nature Communications.

“Our unique technology is expected to reduce the cost of graphene production and improve the uptake in new applications.”

GraphAir transforms soybean oil, a renewable, natural material, into graphene films in a single step.

“Our GraphAir technology results in good and transformable graphene properties, comparable to graphene made by conventional methods,” CSIRO scientist and co-author of the study Dr Dong Han Seo said.

With heat, soybean oil breaks down into a range of carbon building units that are essential for the synthesis of graphene.

The team also transformed other types of renewable and even waste oil, such as those leftover from barbecues or cooking, into graphene films.

“We can now recycle waste oils that would have otherwise been discarded and transform them into something useful,” Dr Seo said.


Growing graphene films in the ambient-air process.

Nature Communications - Single-step ambient-air synthesis of graphene from renewable precursors as electrochemical genosensor

Large-scale microwave trapped ion universal quantum computer design can scale to billions of trapped ions and would solve 2048-bit Shor factoring in 110 days

The microwave trapped ion universal quantum computer design features a new invention permitting actual quantum bits to be transmitted between individual quantum computing modules in order to obtain a fully modular large-scale machine capable of reaching nearly arbitrarily large computational processing power.

Previously, scientists had proposed using fibre optic connections to connect individual computer modules. The new invention introduces connections created by electric fields that allow charged atoms (ions) to be transported from one module to another. This new approach allows 100,000 times faster connection speeds between individual quantum computing modules compared to current state-of-the-art fibre link technology.

The new blueprint is the work of an international team of scientists from the University of Sussex (UK), Google (USA), Aarhus University (Denmark), RIKEN (Japan) and Siegen University (Germany).

They estimate that a 2-billion-trapped-ion system could be used to crack 2048-bit encryption in 110 days. For comparison, in December 2009, Lenstra and his team announced the factorization of a 768-bit RSA modulus.

In 2012, a 923-bit code was cracked using 21 computers.

The Lenstra group estimated that factoring a 1024-bit RSA modulus would be about 1,000 times harder than their record effort with the 768-bit modulus, or in other words, on the same hardware, with the same conditions, it would take about 1,000 times as long. They also estimated that their record achievement would have taken 1,500 years if they normalized processing power to that of the standard desktop machine at the time - this assumption is based on a 2.2 GHz AMD Opteron processor with 2GB RAM. Breaking a DigiCert 2048-bit SSL certificate would take about 4.3 billion times longer (using the same standard desktop processing) than doing it for a 1024-bit key. It is therefore estimated that standard desktop computing power would take 4,294,967,296 x 1.5 million years to break a DigiCert 2048-bit SSL certificate. Or, in other words, a little over 6.4 quadrillion years.
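The scaling arithmetic above can be reproduced directly; a minimal sketch, using only the Lenstra-group and DigiCert estimates quoted in the text:

```python
# Reproduce the desktop-years scaling quoted above (estimates, not measurements).
years_768 = 1_500                  # normalized desktop-years for the 768-bit record
factor_1024 = 1_000                # 1024-bit is ~1,000x harder than 768-bit
factor_2048 = 4_294_967_296        # 2048-bit is ~2^32 times harder than 1024-bit

years_1024 = years_768 * factor_1024       # 1.5 million desktop-years
years_2048 = years_1024 * factor_2048      # "a little over 6.4 quadrillion years"

print(f"1024-bit: {years_1024:,} desktop-years")
print(f"2048-bit: ~{years_2048 / 1e15:.1f} quadrillion desktop-years")
```

The 1.5 million figure multiplied into the final estimate is simply the 1,500-year 768-bit baseline scaled by the 1,000x factor for 1024-bit keys.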

As a next step, the team will construct a prototype quantum computer, based on this design, at the University of Sussex.





The effort is part of the UK Government’s plan to develop quantum technologies towards industrial exploitation and makes use of a recent invention by the Sussex team to replace billions of laser beams required for quantum computing operations within a large-scale quantum computer with the simple application of voltages to a microchip.

Professor Hensinger said: “The availability of a universal quantum computer may have a fundamental impact on society as a whole. Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application building on the UK’s strengths in this ground-breaking technology. I am very excited to work with industry and government to make this happen.”

The computer’s potential applications could be endless. Its size, however, will be anything but small. The machine is expected to fill a large building, consisting of sophisticated vacuum apparatus featuring integrated quantum computing silicon microchips that hold individual charged atoms (ions) using electric fields.

In previously proposed trapped ion quantum computing architectures, modules are powered by laser-driven single- and multiqubit gates. However, the vast amount of individually controlled and stabilized laser beams required in such architectures would make the required engineering to build a large-scale quantum computer challenging. Here, we propose an architecture that is based on a concept involving global long-wavelength radiation and locally applied magnetic fields. The gate interactions are based on a mechanism first proposed by Mintert and Wunderlich in 2001, making use of magnetic field gradients within dedicated gate zones. Only global laser light fields are required for loading, Doppler cooling, and state preparation and readout of ions, whereas laser-driven quantum gates requiring careful alignment in each gate zone are not required in our approach.

Large-scale quantum computers, which rely on laser gates and are capable of solving classically intractable problems, may require millions of individual laser beams that have to be precisely aligned with respect to individual entanglement regions and need to be individually controlled. In our microwave-based architecture, all laser fields do not have to be precisely aligned or individually controlled. However, one should note that our architecture still incorporates a number of technical challenges, such as the creation of strong magnetic field gradients and the requirement of calibration operations and well-controlled voltages, which are required to execute quantum gates.

We present the blueprint for a scalable microwave trapped ion quantum computer module, which is based on today’s silicon semiconductor and ion trap technology. The modules, driven by global laser and microwave fields, perform ion loading and ion shuttling, generate locally addressable magnetic fields as well as magnetic field gradients to perform single- and multiqubit gates, and feature on-chip photo detectors for state readout. All gate, shuttling, and state readout operations are controlled by on-chip electronics, and a cooling system is integrated into the module to allow for efficient temperature management. Each module, when placed in an ultrahigh vacuum (UHV) system and powered by global laser and microwave fields, operates as a modular stand-alone quantum computer.


They propose a blueprint for a scalable quantum computer module, which makes use of the discussed microwave-based multiqubit gate scheme and is fabricated using silicon microfabrication technology. Each module is a unit cell for a large-scale quantum computer and features microfabricated ion trap X-junction arrays. In each X-junction, two or more ions are trapped and feature up to three different zones including a microwave-based gate zone, a state readout zone, and a loading zone. Once an ion is trapped in the loading zone, high-fidelity ion shuttling operations transfer the ion to the gate zone. There, ions can be individually addressed using locally adjustable magnetic fields and entangled using static magnetic field gradients in conjunction with global microwave and rf fields. When the state of the qubit needs to be detected, the ion is transferred to the readout zone, where global laser fields and on-chip photo detectors are used for state readout. A second ion species is used to sympathetically cool the qubit ion without affecting its internal states. All coherent quantum operations are performed and controlled by on-chip electronics, relying only on global microwave and rf fields. In our microwave-based architecture, laser light is only required for state preparation and detection, photoionization, and sympathetic cooling. The required laser beams have much less stringent requirements than laser beams for quantum gate realization. The laser beams do not need to have high intensity, and do not need to be phase-stable; the mode profile only requires some overlap with the ion to scatter sufficient photons. Laser beams for sympathetic cooling can even be provided as sheets.

On the basis of the same scheme, they can give quantitative estimates on the system size and processing time for a machine that solves a relevant, hard problem, such as the Shor factoring of a 2048-bit number. For the calculations, they assume a single-qubit gate time of 2.5 μs, two-qubit gate time of 10 μs, ion separation and shuttling time of 15 μs each, static magnetic field gradient ramp-up and ramp-down time of 5 μs each, and a measurement time of 25 μs, resulting in a total error correction cycle time of 235 μs. On the basis of these numbers, performing a 2048-bit number Shor factorization will take on the order of 110 days and require a system size of 2 billion trapped ions. Shor factoring of a 1024-bit number will take on the order of 14 days. Both of these factorizations will require almost the same amount of physical qubits because the required pace of the ancilla qubit generation is the same for a 2048-bit and a 1024-bit factorization. Trapping 2 billion ions will require 23 × 23 vacuum chambers occupying an area of ca. 103.5 × 103.5 m2.
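The quoted system size implies some per-chamber figures worth making explicit; a back-of-envelope sketch derived only from the paper's stated totals (2 billion ions, a 23 × 23 grid of vacuum chambers, a ~103.5 × 103.5 m2 footprint):

```python
# Derived per-chamber figures from the paper's stated totals (a rough sketch).
ions_total = 2_000_000_000     # trapped ions needed for 2048-bit Shor factoring
grid_side = 23                 # vacuum chambers per side of the array
total_side_m = 103.5           # total footprint side length, metres

chambers = grid_side ** 2                   # 529 vacuum chambers
ions_per_chamber = ions_total / chambers    # ~3.8 million ions per chamber
chamber_side_m = total_side_m / grid_side   # ~4.5 m per chamber

print(chambers, round(ions_per_chamber), round(chamber_side_m, 1))
```

Each chamber in the array would thus hold millions of ions across its microfabricated trap modules, which is why the machine fills a large building.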

Science Advances - Blueprint for a microwave trapped ion quantum computer

US military indicates Navy F-35C is only version at risk for F-18 Super Hornet replacement

Of the three types of Lockheed Martin’s F-35 Joint Strike Fighter, only the US Navy’s carrier-launched F-35C is at risk of being replaced by Boeing’s F-18 Super Hornet, the Marine Corps’s top pilot said today. It’s not on the table to substitute Hornets for either the land-based F-35A variant or the vertical-takeoff-and-landing F-35B, said Lt. Gen. Jon Davis, deputy commandant for aviation.

Those instructions come from Defense Secretary Jim Mattis, a former Marine infantryman himself, who on Jan. 27 ordered two “parallel” reviews of the F-35: one of the program in general to find “opportunities to significantly reduce the cost of the F-35”; and one of the F-35C specifically as compared to “an advanced Super Hornet.” Trump had proposed to replace the F-35 — not specifying any particular model — with “a comparable F-18 Super Hornet.”

The current US plan is that the Marine Corps will purchase 340 of the F-35B and 80 of the F-35C, while the Navy will purchase 260 of the F-35C. The US Air Force plans to buy 1,763 of the conventional takeoff and landing F-35A variant.


The F-18 does not have vertical-takeoff capability, but it would not be unreasonable to look at displacing some of the US Air Force's F-35A order.




F-22 getting stealth and weapons upgrades

Lockheed Martin and the United States Air Force have been working on improving the performance of the F-22 Raptor’s stealth coatings.

While the Raptor is by far the most capable air superiority fighter ever built, its Achilles’ heel since it entered service has been maintenance. The F-22’s sensitive radar-absorbent coatings have proven to be costly and difficult to repair, but the Air Force and Lockheed have been working on improving their performance. Those efforts are starting to pay off, as Lockheed has completed work on the first Raptor to enter the company’s Inlet Coating Repair (ICR) Speedline facility, delivering the aircraft back to the U.S. Air Force ahead of schedule.

Lockheed notes that the recent increase in the number of F-22 deployments, particularly the ongoing operations in the Middle East, has increased the demand for its stealth repair services. The jet has always required periodic overhauls to maintain its very low radar signature; however, the current workload is greater than usual.

While the F-22 might no longer be in production, Lockheed is heavily involved in keeping the Raptor fleet flying. Lockheed provides sustainment services to the F-22 fleet through a comprehensive weapons management program called Follow-on Agile Sustainment for the Raptor (FASTeR). With a fleet of only 186 operational jets—of which only 120 are “combat-coded”—the Air Force needs every flyable Raptor it can get.

The Air Force is continually working to improve its F-22 fleet, which is the core of its air superiority capability. Currently, the Raptor fleet is operating in an Increment 3.2A/Upgrade 5 configuration that finally allows the jets to carry the AIM-9X Sidewinder high off-boresight missile, but the weapon won’t be fully integrated into the F-22’s systems until the Increment 3.2B configuration is fielded. Even then, it won’t be until later upgrades that the jet will be equipped with a helmet-mounted cuing system for that weapon.

The F-22 was designed for a lifespan of 30 years and 8,000 flight hours, with a $100 million "structures retrofit program". Upgrades to extend the fleet's useful life further are being investigated.

F-22 Increment 3.2B Modernization (F-22 Inc 3.2B Mod) integrates the Air Intercept Missiles AIM-9X and AIM-120D into the F-22, adds Electronic Protection techniques, incorporates new hardware, enhances Geolocate capability, and expands IFDL (Intra-Flight Data Link) functionality.

The 3.2B mod is a $641 million upgrade, part of a $6.9 billion modernization program running from 2013 to 2023.






China test launches an ICBM with 10 independent warheads

China flight tested a new variant of a long-range missile with 10 warheads in what defense officials say represents a dramatic shift in Beijing's strategic nuclear posture.

The flight test of the DF-5C missile was carried out earlier this month using 10 multiple independently targetable reentry vehicles, or MIRVs. The test of the inert warheads was monitored closely by U.S. intelligence agencies, said two officials familiar with reports of the missile test.

The missile was fired from the Taiyuan Space Launch Center in central China and flew to an impact range in the western Chinese desert.

The DF-5C missile has a range of 12,000 kilometers and can carry 12 nuclear warheads, though China’s nuclear stockpile of roughly 260 warheads is no match for America’s estimated 6,800 warheads.




China's manufacturing cost advantage is eroding, so China will spend trillions on automation, robotics, 3D manufacturing and research

While the USA has been extremely concerned about losing jobs (particularly manufacturing jobs) to China, a survey of businesses in the American Chamber of Commerce in Beijing found that 25% had moved or were planning to move their operations out of China. Half were going to other Asian countries and 40% to America, Canada or Mexico.

China's worker wages are rising about 7-8% each year, and its working-age population is shrinking as the country ages.

China is making big moves in automation and large scale deployment of robotics. In 2014, President Xi Jinping talked about a robotics revolution. China has been the number one buyer of industrial robots since 2013. However, China lags other nations in terms of robots per worker.

This is more than the occasional story, such as the fully automated factory that in 2015 shed 90% of its 600 workers, or Foxconn automating its iPhone factories, which employ thousands of workers. Roughly 100 million people work in the Chinese manufacturing sector, which contributes nearly 36% of China's gross domestic product, but IDC's Zhang believes Beijing is prepared to manage the transition.

There will be about 1.3 million industrial robots in the world in 2018.

If China were to become a top ten country in industrial robot density in 2025 then they would need about 13-15 million robots (if they maintained the 100 million workforce). If the manufacturing workforce were halved to 50 million then China would need 7 to 10 million robots by 2025.

The automation could boost production and revenue from manufacturing by 25%.

China's 13th five year plan includes a made in China 2025 plan. The plan is to make China an advanced manufacturing power within a decade.

The current global leader in industrial robotic automation is South Korea. South Korea's robotic density exceeds the global average by a good seven-fold (478 units), followed by Japan (314 units) and Germany (292 units). At 164 units, the USA currently occupies seventh place in the world.

At 36 units per 10,000 employees, or about half the global average figure, China is currently in 28th place. Within the overall global statistics, this is roughly on a par with Portugal (42 units), or Indonesia (39 units). However, about five years ago, China embarked on a historically unparalleled game of catch-up aimed at changing the status quo, and already today it is the world's largest sales and growth market for industrial robots.
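Robot density is conventionally quoted as installed robots per 10,000 manufacturing employees, so density and workforce size together pin down the installed base. A small sketch (the ~100-million workforce figure comes from earlier in the article; the densities are the ones quoted here):

```python
# Convert robot density (units per 10,000 employees) to an installed-robot count.
def robots_from_density(workers: int, density_per_10k: float) -> float:
    return workers * density_per_10k / 10_000

china_workforce = 100_000_000   # ~100 million Chinese manufacturing workers

print(robots_from_density(china_workforce, 36))    # at China's density: ~360,000 robots
print(robots_from_density(china_workforce, 478))   # at South Korea's density: ~4.8 million
```

The second figure illustrates the scale of the catch-up: matching South Korea's density across a workforce that size means millions of installed robots.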

Never before have so many robot units been sold in one year as were sold in China in 2014 (57,100 units). The boom is continuing unabated in line with the forecasts: In 2018, China will account for more than one-third of the industrial robots installed worldwide.

Progress toward automation is moving at a good clip in both the public and private spheres. In 2015, Guangdong province, long China's top manufacturing hub, pledged to spend $150 billion to install industrial robots in its factories and establish new advanced automation centers.

Global manufacturing competitiveness was analyzed by Deloitte.

By 2030, China's share of younger population, i.e. those in the age group of 15-39 years, will likely drop to 28 percent of the population from 38 percent in 2013.










February 01, 2017

Dune will get another attempt at a feature movie adaptation

Denis Villeneuve has been nominated for an Oscar for his movie "Arrival", and his "Blade Runner 2049" sequel arrives in October. Villeneuve has been hired to make a new feature movie version of the science fiction classic Dune.

Frank Herbert wrote six novels in the "Dune" series, beginning in 1965. The books were adapted into David Lynch's 1984 movie, as well as two TV miniseries.

The 1984 version was generally poorly received. One problem is that even the first Dune book needs a longer movie or several movies or a miniseries to properly adapt. The other problem was the 1984 movie had some funky dialog and David Lynch quirkiness.

Here is some dialog from the 1984 movie.
Paul: What do you call the mouse shadow on the second moon?

Stilgar: We call that one, Muad'Dib

Paul: Could I be known as Paul Muad'Dib?

Stilgar: You are Paul Muad'Dib.


There is a lot of repetition of the phrase “Dune, desert planet.”

The Lord of the Rings and the Hobbit adaptations give a guide for adapting intricate books into movies.



A decelerating gravity slingshot and solar pressure could be used to slow an interstellar solar sail travelling up to 4.6% of lightspeed

In April last year, billionaire Yuri Milner announced the Breakthrough Starshot Initiative. He plans to invest 100 million US dollars in the development of an ultra-light light sail that can be accelerated to 20 percent of the speed of light to reach the Alpha Centauri star system within 20 years. The problem of how to slow down this projectile once it reaches its target remains a challenge. René Heller of the Max Planck Institute for Solar System Research in Göttingen and his colleague Michael Hippke propose to use the radiation and gravity of the Alpha Centauri stars to decelerate the craft. It could then even be rerouted to the red dwarf star Proxima Centauri and its Earth-like planet Proxima b.

* a very light sail, about 300 meters on a side for a 10 gram probe, could approach a target star system to within 5 solar radii, slow down, and then swing to the next star, where it would fully stop and go into orbit

Heller and his colleague Michael Hippke wondered, “How could you optimize the scientific yield of this type of a mission?” Such a fast probe would cover the distance from the Earth to the Moon in just six seconds. It would therefore hurtle past the stars and planets of the Alpha Centauri system in a flash.

The solution is for the probe’s sail to be redeployed upon arrival so that the spacecraft would be optimally decelerated by the incoming radiation from the stars in the Alpha Centauri system. René Heller, an astrophysicist working on preparations for the upcoming Exoplanet mission PLATO, found a congenial spirit in IT specialist Michael Hippke, who set up the computer simulations.

The two scientists based their calculations on a space probe weighing less than 100 grams in total, which is mounted to a 100,000-square-metre sail, equivalent to the area of 14 soccer fields. During the approach to Alpha Centauri, the braking force would increase. The stronger the braking force, the more effectively the spacecraft’s speed can be reduced upon arrival. Vice versa, the same physics could be used to accelerate the sail at departure from the solar system, using the sun as a photon cannon.

The tiny spacecraft would first need to approach the star Alpha Centauri A as close as around four million kilometres, corresponding to five stellar radii, at a maximum speed of 13,800 kilometres per second (4.6 per cent of the speed of light). At even higher speeds, the probe would simply overshoot the star.

During its stellar encounter, the probe would not only be repelled by the stellar radiation, but it would also be attracted by the star’s gravitational field. This effect could be used to deflect it around the star. These swing-by-manoeuvres have been performed numerous times by space probes in our solar system. “In our nominal mission scenario, the probe would take a little less than 100 years – or about twice as long as the Voyager probes have now been travelling. And these machines from the 1970s are still operational,” says Michael Hippke.

Theoretically, the autonomous, active light sail proposed by Heller and Hippke could settle into a bound orbit around Alpha Centauri A and possibly explore its planets. However, the two scientists are thinking even bigger. Alpha Centauri is a triple star system. The two binary stars A and B revolve around their common centre of mass in a relatively close orbit, while the third star, Proxima Centauri, is 0.22 light years away, more than 12,500 times the distance between the Sun and the Earth.

The sail could be configured so that the stellar pressure from star A brakes and deflects the probe toward Alpha Centauri B, where it would arrive after just a few days. The sail would then be slowed again and catapulted towards Proxima Centauri, where it would arrive after another 46 years − about 140 years after its launch from Earth.

In order to keep the weight down, the sail would have to be just a few atoms thick. That means it would be orders of magnitude thinner than the wavelength of light that it aims to reflect, and so its reflectivity would be low. “It does not appear feasible to reduce the weight by so many orders of magnitude and yet maintain the rigidity and reflectivity of the sail material,” says Harvard astrophysicist Avi Loeb, who chairs the Breakthrough Starshot advisory committee.

Hippke acknowledges the problem. “The issue of producing an extremely thin material with high surface reflectivity seems to be a very challenging exercise,” he says. However, he can see solutions coming over the horizon. A one-atom thick coating of silicon would boost the reflectivity of the graphene sail enormously, he points out, and silicon-based metamaterial monolayers are now being designed.

At a distance of about 4.22 ly, it would take about 100,000 years for humans to visit our closest stellar neighbor Proxima Centauri using modern chemical thrusters. New technologies are now being developed that involve high-power lasers firing at 1 gram solar sails in near-Earth orbits, accelerating them to 20% of the speed of light (c) within minutes. Although such an interstellar probe could reach Proxima 20 years after launch, without propellant to slow it down it would traverse the system within hours. Here we demonstrate how the stellar photon pressures of the stellar triple α Cen A, B, and C (Proxima) can be used together with gravity assists to decelerate incoming solar sails from Earth. The maximum injection speed at α Cen A to park a sail with a mass-to-surface ratio (σ) similar to graphene (7.6 × 10^−4 gram m^−2) in orbit around Proxima is about 13,800 km per second (4.6% c), implying travel times from Earth to α Cen A and B of about 95 years and another 46 years (with a residual velocity of 1280 km s^−1) to Proxima. The size of such a low-σ sail required to carry a payload of 10 grams is about 100,000 square meters = (316 meters)^2. Such a sail could use solar photons instead of an expensive laser system to gain interstellar velocities at departure. Photogravitational assists allow visits of three stellar systems and an Earth-sized potentially habitable planet in one shot, promising extremely high scientific yields.
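The abstract's mass budget and travel time can be cross-checked from the numbers it quotes. A minimal sketch, assuming the standard ~4.37 ly distance to α Cen A (not stated in the text) and treating 4.6% c as the mean cruise speed, which the 95-year figure implies:

```python
# Cross-check of the sail's mass budget and cruise time from the quoted numbers.
sigma_g_per_m2 = 7.6e-4        # graphene-like mass-to-surface ratio, g/m^2
sail_side_m = 316.0            # sail side length -> ~100,000 m^2 of area
payload_g = 10.0               # instrument payload

sail_area_m2 = sail_side_m ** 2
sail_mass_g = sigma_g_per_m2 * sail_area_m2      # ~76 g of sail material
total_mass_g = sail_mass_g + payload_g           # ~86 g, under the ~100 g budget

distance_ly = 4.37             # Earth to alpha Cen A (assumed standard value)
cruise_speed_c = 0.046         # 4.6% of lightspeed, the maximum injection speed
travel_years = distance_ly / cruise_speed_c      # ~95 years, matching the abstract

print(round(total_mass_g, 1), round(travel_years, 1))
```

The sail itself dominates the mass budget, which is why the mass-to-surface ratio has to be pushed down to graphene-like values.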

They present a new method of decelerating interstellar light sails from Earth at the α Cen system using a combination of the stars’ gravitational pulls and their photon pressures. This sailing technique, which we refer to as a photogravitational assist, allows multiple stellar fly-bys in the α Cen stellar triple system and deceleration of a sail into a bound orbit. In principle, photogravitational assists could also allow sample return missions to Earth. The maximum injection speed to deflect an incoming, extremely light and tensile sail (with properties akin to graphene) carrying a payload of 10 grams into a bound orbit around Proxima is about 4.6% c, corresponding to travel times of 95 years from Earth. After initial fly-bys at α Cen A and B, the sail could absorb another 1280 km s^−1 upon the arrival at Proxima, implying an additional travel time between α Cen AB and Proxima of 46 years.

Arrival at Proxima with maximum velocity could result in a highly elliptical orbit around the star, which could be circularized into a habitable zone orbit using the photon pressure near periastron. The time required for such an orbit transfer is small (years) compared to the total travel time. Once parked in orbit around Proxima, a sail could eventually use the stellar photon pressure to transfer into a planetary orbit around Proxima b. In a more general context, photogravitational assists of a large, roughly 100,000-square-meter = (316 meters)^2 sized graphene sail could
(1.) decelerate a small probe into orbit around a nearby exoplanet and therefore substantially reduce the technical demands on the onboard imaging systems;
(2.) in principle allow sample return missions from distant stellar systems;
(3.) avoid the necessity of a large-scale Earth-based laser launch system by instead using the sun’s radiation at the departure from the solar system;
(4.) limit accelerations to about 1000 g compared to some 10,000 g invoked for a 1 m2 laser-riding sail; and
(5.) leave of the order of 10 grams for the sail’s reflective coating and equipment.

These benefits come at the price of a yet to be developed large graphene sail, which needs to be assembled or unfolded in near-Earth space and which needs to withstand the harsh radiation environment within 5 stellar radii of the target star for several hours. This technical challenge, however, could be easier to tackle than the construction of a high-power ground-based laser system shooting laser sails in near-Earth orbits.




Arxiv - Deceleration of High-velocity Interstellar Photon Sails into Bound Orbits at α Centauri

Researchers want more proof of Harvard's claim to have created solid metallic hydrogen, a holy grail of physics

Researchers doubt the Harvard claim that solid metallic hydrogen has been created. Ranga Dias and Isaac Silvera, both physicists at Harvard University in Cambridge, Massachusetts, first posted a report of their results on the arXiv preprint server last October, which attracted immediate criticism. A peer-reviewed version of the report was published on 26 January in Science, but sceptics say that it includes little new information. Silvera and Dias say that they wanted to publish their first observation before making further tests on their fragile material.

Physicists have crushed tiny samples of hydrogen between diamond anvils at pressures exceeding those in the centre of Earth. The experiments are delicate and fraught with potential for error. Researchers have seen the material change from transparent to dark as it is compressed, which suggests that as electrons are crowded together, they are able to absorb photons of visible light. But no one has proven the existence of metallic, shiny hydrogen, which would reflect light.

Dias and Silvera say that they were able to squeeze their hydrogen gas at greater pressures than anyone else has managed. To do so, they used an anvil that can fit inside a cryostat, enabling them to cool their hydrogen sample to just above absolute zero. They also say they have found a better way to polish the tips of their diamonds, to remove irregularities that could break the gems. They then turned a screw to crank up the pressure to 495 billion pascals (495 GPa), or almost 5 million times higher than atmospheric pressure at sea level.
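For reference, the quoted pressure converts to atmospheres as follows (using the standard atmosphere, 1 atm = 101,325 Pa):

```python
# Convert the reported 495 GPa to standard atmospheres.
pressure_pa = 495e9            # 495 GPa
PA_PER_ATM = 101_325.0         # one standard atmosphere, in pascals

atmospheres = pressure_pa / PA_PER_ATM
print(f"{atmospheres / 1e6:.2f} million atm")   # ~4.89 million, "almost 5 million"
```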

“Then, suddenly, it becomes a lustrous, reflective sample, which you can only believe is a metal,” Silvera says. Seen through a microscope, the sample appeared shiny, and it reflected light in the way metallic hydrogen should do, he says.


Other researchers aren't convinced. It’s far from clear that the shiny material the researchers see is actually hydrogen, says geophysicist Alexander Goncharov of the Carnegie Institution for Science in Washington DC. Goncharov has criticized the Silvera lab’s methods before. He suggests that the shiny material may be alumina (aluminium oxide), which coats the tips of the diamonds in the anvil, and may behave differently under pressure.

Paul Loubeyre, a physicist at the French Atomic Energy Commission, and others think that Silvera and Dias are overestimating the pressure that they reached, by relying on an imprecise calibration between turns of the screw and pressure inside the anvil. Eugene Gregoryanz, a physicist at the University of Edinburgh, UK, adds that part of the problem is that the researchers took only a single detailed measurement of their sample at the highest pressure — making it hard to see how pressure shifted during the experiment.

“If they want to be convincing, they have to redo the measurement, really measuring the evolution of pressure,” says Loubeyre. “Then they have to show that, in this pressure range, the alumina is not becoming metallic.”

But Silvera says that he just wanted to get the news out there before making confirmation tests, which, he says, could break their precious specimen. “We wanted to publish this breakthrough event on this sample,” he says. To preserve the material, he and Dias have kept it in the cryostat; the lab has only two cryostats, and the other is in use for other experiments, he says. “Now that the paper has been accepted, we’re going to do further experiments.”
