January 28, 2017

Five XPRIZE teams are launching to the Moon this year

Five finalist teams in the Google Lunar XPRIZE have been set, and they all have rocket launch contracts to go to the Moon in 2017.

$30 million in prizes is on the line, including the $20 million grand prize for whoever gets there first (and completes the competition goals), and these teams are employing a variety of technologies to land first and claim the fame.

They hail from Japan, India, the United States, and Israel, with launches set to happen from places as far-flung as California, USA; Sriharikota, India; and Mahia, New Zealand. Here’s a look at all five teams, and their technology that’s set to make big space history.

* SpaceIL, Tel Aviv, Israel

SpaceIL signed a deal with SpaceX (through Spaceflight Services) to launch in late 2017. Its “hopper”-style craft will land on the lunar surface, fly 500 meters, and then touch down again to claim the prize.

* Moon Express, Cape Canaveral, Florida

Famous for being the first private organization to get approval from its government to operate on the Moon (a necessary step under international space law), United States space start-up Moon Express is ready to bring the Moon back in a big way to Florida’s Space Coast. Using its hopping Moon lander, Moon Express looks to open up a bright future for the “eighth continent” and bring a real business plan to space. It has signed up with New Zealand’s up-and-coming Rocket Lab to launch from Mahia, New Zealand in late 2017.

* Synergy Moon, International

Why buy a ride when you can build one? Synergy Moon team member Interorbital Systems will serve as the team’s launch provider, using a NEPTUNE 8 rocket to carry its lander to the Moon from an open-ocean location off the California coast during the second half of 2017.

John Hurt dies; famous for the Alien “chestburster” scene and for playing Doctor Who’s War Doctor in the Dalek Time War

John Hurt, who died on Friday, was a brilliant and versatile actor who made memorable film appearances over five decades.

As the subject of the celebrated ‘chestburster’ scene, Hurt is on the receiving end of one of the most played clips ever. The extra dose of chicken giblets and offal that director Ridley Scott used to surprise the actors gave them all a nasty shock, not least Veronica Cartwright who played Lambert.

Peter Capaldi’s Doctor referenced the Time War in the episode “Hell Bent.”

Organovo is bioprinting human tissue for drug testing, and within six years aims to implant bioprinted human liver tissue

Organovo designs and creates functional human tissues using its proprietary three-dimensional bioprinting technology. Its goal is to build living human tissues that are proven to function like native tissues. With reproducible 3D tissues that accurately represent human biology, the company is enabling ground-breaking therapies by:

  • Partnering with biopharmaceutical companies and academic medical centers to design, build, and validate more predictive in vitro tissues for disease modeling and toxicology.
  • Giving researchers something they have never had before – the opportunity to test drugs on functional human tissues before ever administering the drug to a living person; bridging the gulf between preclinical testing and clinical trials.
  • Creating functional, three-dimensional tissues that can be implanted or delivered into the human body to repair or replace damaged or diseased tissues.

Bioprinting is the automated fabrication of a tissue through the spatially controlled deposition of cells and/or cell-containing materials in defined, user-controlled geometric patterns, wherein the resulting multi-cellular tissue is viable, three-dimensional, and mimics key aspects of native tissue architecture and/or function. Organovo’s finished tissues are scaffold-free, composed only of the relevant tissue cell types and the extracellular matrix the cells produce.

Organovo has advanced work on bioprinting skin and liver, and has just started a partnership for kidneys.

Organovo announced a collaboration with Professor Melissa Little and the Murdoch Childrens Research Institute, The Royal Children’s Hospital, Melbourne, Australia to develop an architecturally correct kidney for potential therapeutic applications. The collaboration has been made possible by a generous gift from the Methuselah Foundation (“Methuselah”) as part of its ongoing University 3D Bioprinter Program.

“Partnerships with world-class institutions can accelerate groundbreaking work in finding cures for critical unmet disease needs and the development of implantable therapeutic tissues,” said Keith Murphy, CEO, Organovo. “This collaboration with Professor Little’s lab is another important step in this direction. With the devoted and ongoing support of the Methuselah Foundation, leading researchers are able to leverage Organovo’s powerful technology platform to achieve significant breakthroughs.”

“We have developed an approach for recreating human kidney tissue from stem cells,” said Professor Melissa Little, Theme Director of Cell Biology at Murdoch Childrens Research Institute. “Using Organovo’s bioprinter will give us the opportunity to bioprint these cells into a more accurate model of the kidney. While initially important for modelling disease and screening drugs, we hope that this is also the first step towards regenerative medicine for kidney disease. We are very grateful to Organovo and the Methuselah Foundation for this generous support, which will enable us to advance our research with the first Organovo bioprinter in the southern hemisphere.”

Under Methuselah Foundation's University 3D Bioprinter Program, Methuselah is donating at least $500,000 in direct funding to be divided among several institutions for Organovo bioprinter research projects. This funding will cover budgeted bioprinter costs and key aspects of project execution.

Printed human body parts could be available for human transplants within a few years

Every year about 120,000 organs, mostly kidneys, are transplanted from one human being to another. Sometimes the donor is a living volunteer. Usually, though, he or she is the victim of an accident, stroke, heart attack or similar sudden event that has terminated the life of an otherwise healthy individual. But a lack of suitable donors, particularly as cars get safer and first-aid becomes more effective, means the supply of such organs is limited. Many people therefore die waiting for a transplant. That has led researchers to study the question of how to build organs from scratch.

One promising approach is to print them. Lots of things are made these days by three-dimensional printing, and there seems no reason why body parts should not be among them. As yet, such “bioprinting” remains largely experimental. But bioprinted tissue is already being sold for drug testing, and the first transplantable tissues are expected to be ready for use in a few years’ time.

Researchers have implanted printed ears, bones and muscles into animals, and watched these integrate properly with their hosts.

Last year a group at Northwestern University, in Chicago, even printed working prosthetic ovaries for mice. The recipients were able to conceive and give birth with the aid of these artificial organs.

Sichuan Revotek, a biotechnology company based in Chengdu, China, has successfully implanted a printed section of artery into a monkey. This is the first step in trials of a technique intended for use in humans.

Similarly, Organovo, a firm in San Diego, announced in December that it had transplanted printed human-liver tissue into mice, and that this tissue had survived and worked. Organovo hopes, within three to five years, to develop this procedure into a treatment for chronic liver failure and for inborn errors of metabolism in young children. The market for such treatments in America alone, the firm estimates, is worth more than $3 billion a year.

Nature Biotechnology also published a review of bioprinting.

January 26, 2017

Tsunami prevention with massive deep-ocean sound waves

Devastating tsunamis could be halted before hitting the Earth’s shoreline by firing deep-ocean sound waves at the oncoming mass of water, new research has proposed.

Dr Usama Kadri, from Cardiff University’s School of Mathematics, believes that lives could ultimately be saved by using acoustic-gravity waves (AGWs) against tsunamis that are triggered by earthquakes, landslides and other violent geological events.

AGWs are naturally occurring sound waves that move through the deep ocean at the speed of sound and can travel thousands of meters below the surface.

AGWs can measure tens or even hundreds of kilometers in length and it is thought that certain lifeforms such as plankton, that are unable to swim against a current, rely on the waves to aid their movement, enhancing their ability to find food.

In a paper published today in the journal Heliyon, Dr Kadri proposes that if we can find a way to engineer these waves, they can be fired at an incoming tsunami and will react with the wave in such a way that reduces its amplitude, or height, and causes its energy to be dissipated over a large area.

Medical first: children’s cancer cured with genetically engineered T-cells from another person

Doctors in London say they have cured two babies of leukemia in the world’s first attempt to treat cancer with genetically engineered immune cells from a donor.

The experiments, which took place at London’s Great Ormond Street Hospital, raise the possibility of off-the-shelf cellular therapy using inexpensive supplies of universal cells that could be dripped into patients’ veins at a moment’s notice.

The ready-made approach could pose a challenge to companies including Juno Therapeutics and Novartis, each of which has spent tens of millions of dollars pioneering treatments that require collecting a patient’s own blood cells, engineering them, and then re-infusing them.

Both methods rely on engineering T cells—the hungry predator cells of the immune system—so they attack leukemic cells.

The British infants, ages 11 and 16 months, each had leukemia and had undergone previous treatments that failed, according to a description of their cases published Wednesday in Science Translational Medicine. Waseem Qasim, a physician and gene-therapy expert who led the tests, reported that both children remain in remission.

Although the cases drew wide media attention in Britain, some researchers said that because the London team also gave the children standard chemotherapy, they failed to show the cell treatment actually cured the kids. “There is a hint of efficacy but no proof,” says Stephan Grupp, director of cancer immunotherapy at the Children’s Hospital of Philadelphia, who collaborates with Novartis. “It would be great if it works, but that just hasn’t been shown yet.”

Rights to the London treatment were sold to the biotech company Cellectis, and the treatment is now being further developed by the drug companies Servier and Pfizer.

Treatments using engineered T-cells, commonly known as CAR-T, are new and not yet sold commercially. But they have shown stunning success against blood cancers. In studies so far by Novartis and Juno, about half of patients are permanently cured after receiving altered versions of their own blood cells.

NASA has new test for life on other planets that is ten thousand times more sensitive

A simple chemistry method could vastly enhance how scientists search for signs of life on other planets.

The test uses a liquid-based technique known as capillary electrophoresis to separate a mixture of organic molecules into its components. It was designed specifically to analyze for amino acids, the structural building blocks of all life on Earth. The method is 10,000 times more sensitive than current methods employed by spacecraft like NASA's Mars Curiosity rover, according to a new study published in Analytical Chemistry. The study was carried out by researchers from NASA's Jet Propulsion Laboratory, Pasadena, California.

One of the key advantages of the authors' new way of using capillary electrophoresis is that the process is relatively simple and easy to automate for liquid samples expected on ocean world missions: it involves combining a liquid sample with a liquid reagent, followed by chemical analysis under conditions determined by the team. By shining a laser across the mixture -- a process known as laser-induced fluorescence detection -- specific molecules can be observed moving at different speeds. They get separated based on how quickly they respond to electric fields.
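
The separation principle described above is simple to sketch numerically: each analyte migrates at a velocity set by its apparent mobility times the electric field, so different amino acids reach the detector at different times. The mobility values and capillary geometry below are invented for illustration; they are not figures from the JPL study.

```python
# Sketch of capillary electrophoresis separation: analytes with different
# electrophoretic mobilities arrive at the detector at different times.
# All numbers are illustrative, not from the JPL instrument.

def migration_time_s(mu_ep, mu_eof, voltage_V, total_len_cm, detect_len_cm):
    """Time (s) for an analyte to reach the detection window.

    mu_ep  -- electrophoretic mobility of the analyte (cm^2 / V*s)
    mu_eof -- electro-osmotic flow mobility of the bulk liquid (cm^2 / V*s)
    """
    field = voltage_V / total_len_cm                 # field strength, V/cm
    velocity = (mu_ep + mu_eof) * field              # net velocity, cm/s
    return detect_len_cm / velocity

# Hypothetical mobilities for two labeled amino acids:
mu_eof = 5.0e-4
analytes = {"glycine": 1.2e-4, "aspartate": -0.8e-4}

for name, mu in analytes.items():
    t = migration_time_s(mu, mu_eof, voltage_V=20000,
                         total_len_cm=50, detect_len_cm=40)
    print(f"{name}: reaches detector after {t:.0f} s")
```

Because each species has its own arrival time, a single laser-induced fluorescence trace at the detector window resolves the whole mixture in one run.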

While capillary electrophoresis has been around since the early 1980s, this is the first time it has been tailored specifically to detect extraterrestrial life on an ocean world, said lead author Jessica Creamer, a postdoctoral scholar at JPL.

“Our method improves on previous attempts by increasing the number of amino acids that can be detected in a single run," Creamer said. "Additionally, it allows us to detect these amino acids at very low concentrations, even in highly salty samples, with a very simple 'mix and analyze' process."

CRISPR genome engineering research institute expands into agriculture, microbiology

An initiative launched two years ago by UC Berkeley and UC San Francisco to use CRISPR-Cas9 gene editing to develop new disease therapies is expanding into research on the planet’s major crops and poorly understood microbiomes, with plans to invest $125 million in these areas over the next five years.

The funds will not only boost support for biomedical research, but also allow the renamed Innovative Genomics Institute to explore the potential of gene editing in the globally important areas of agriculture and microbiology, and fund projects focused on the social and ethical implications of editing human, animal and plant genomes.

“The CRISPR-Cas9 technology, which is only four years old, is improving by leaps and bounds and has already altered the way doctors approach disease and scientists do research,” said IGI executive director Jennifer Doudna, a professor of molecular and cell biology and Howard Hughes Medical Institute investigator at UC Berkeley. “The IGI has shown that the technology can cure the defect that causes sickle cell anemia, and we are moving toward clinical trials within a few years.”

“But we’ve realized,” she added, “that there are many other arenas in which better gene-editing tools can promote global health, specifically by improving crops and sustaining a healthy microbial environment that has been shown to prevent illness, improve crop yields and nurture a balanced ecosystem. At UC Berkeley we have the expertise in plant science and microbiology research to make a real contribution by designing higher-yield, more pest-resistant crops that a large proportion of the world’s population depends on, and fostering the microbial populations critical to human health and the health of the planet.”

Scientists unveil new form of matter: time crystals

To most people, crystals mean diamond bling, semiprecious gems or perhaps the jagged amethyst or quartz crystals beloved by collectors.

To Norman Yao, these inert crystals are the tip of the iceberg.

If crystals have an atomic structure that repeats in space, like the carbon lattice of a diamond, why can’t crystals also have a structure that repeats in time? That is, a time crystal?

In a paper published online last week in the journal Physical Review Letters, the UC Berkeley assistant professor of physics describes exactly how to make and measure the properties of such a crystal, and even predicts what the various phases surrounding the time crystal should be — akin to the liquid and gas phases of ice.

This is not mere speculation. Two groups followed Yao’s blueprint and have already created the first-ever time crystals. The groups at the University of Maryland and Harvard University reported their successes, using two totally different setups, in papers posted online last year, and have submitted the results for publication. Yao is a co-author on both papers.

Time crystals repeat in time because they are kicked periodically, sort of like tapping Jell-O repeatedly to get it to jiggle, Yao said. The big breakthrough, he argues, is less that these particular crystals repeat in time than that they are the first of a large class of new materials that are intrinsically out of equilibrium, unable to settle down to the motionless equilibrium of, for example, a diamond or ruby.

“This is a new phase of matter, period, but it is also really cool because it is one of the first examples of non-equilibrium matter,” Yao said. “For the last half-century, we have been exploring equilibrium matter, like metals and insulators. We are just now starting to explore a whole new landscape of non-equilibrium matter.”

A one-dimensional chain of ytterbium ions was turned into a time crystal by physicists at the University of Maryland, based on a blueprint provided by UC Berkeley’s Norman Yao. Each ion behaves like an electron spin and exhibits long-range interactions indicated as arrows. (Image courtesy of Chris Monroe)

Two Bellefonte nuclear reactors could be completed by 2028 with $13 billion investment

The future development of Bellefonte Nuclear Plant, the northeast Alabama facility sold at auction in November, will generate an economic impact of more than $1 billion per year and provide more than 12,000 jobs, according to the buyer’s proposal.

Nuclear Development said in its proposal that it plans to complete the two partially completed nuclear power plants and to operate those plants as merchant power plants connected to the grid through the existing transmission lines of TVA and Southern Company.

Nuclear Holdings LLC, a Washington, D.C.-based company established in 2012 and affiliated with Birmingham landlord and Chattanooga-based developer Franklin Haney, purchased the plant in November from the Tennessee Valley Authority for more than $111 million. According to the initial release from the TVA, the company plans to invest an additional $13 billion to bring the unfinished nuclear station online, which would create 2,000 permanent jobs in the region, along with 4,000 temporary construction jobs.

According to the TVA, the plant is roughly 55 percent constructed. As recently as 2011, the TVA sought to restart work on one of the reactors, but by 2014, the utility was ready to discontinue the project again. Earlier this year, the TVA Board of Directors deemed Bellefonte surplus property and began accepting preliminary bidding offers for the site in September.

TVA also determined the power demand wouldn't catch up to the plant's output for another 20 years, which could buy some time for a potentially lengthy permitting process.

Bellefonte has Nuclear Regulatory Commission construction licenses. When completed, the plants will be the largest capacity advanced reactors in the United States, according to the proposal.

To complete the plant, Nuclear Development’s proposal said it would require 8,000 to 10,000 direct and indirect construction jobs during peak construction.

In a prior proposal, Nuclear Development was trying to capitalize on more than $2 billion of investment tax credit then available for new nuclear generation. As a public power entity, TVA does not qualify for such credits, however. TVA rejected Haney's offer and those credits have since been phased out, although Congress could reconsider such incentives for new power generation.

Watts Bar Unit 2 was 80% complete when construction on both units was stopped in the 1980s due in part to a projected decrease in power demand. In 2007, the Tennessee Valley Authority (TVA) Board approved completion of Unit 2 on August 1, and construction resumed on October 15, 2007. The project was expected to cost $2.5 billion, and employ around 2,300 contractor workers. It was fully completed in 2016 at a cost of $4.7 billion.

The letter of intent to bid submitted by Nuclear Development LLC, which outlined the value of its proposal, was released Thursday by TVA - the federal utility which sold the mothballed plant at auction.

Nuclear Development submitted a bid of $111 million to win the auction for the plant, which is located in Jackson County near Scottsboro.

"The positive ongoing economic impact to the surrounding region will exceed $1 billion per year," the proposal stated.

There are 39 nuclear reactors operating across 10 Southeast states. Along with extensions granted to their original operating licenses, most of these reactors are poised to operate well into the second half of this century. Add to that the reactors that are under construction, two each in Georgia and South Carolina and projected to begin generating electricity by 2020.

New Spacesuit Unveiled for Starliner Astronauts

Astronauts heading into orbit aboard Boeing’s Starliner spacecraft will wear lighter and more comfortable spacesuits than earlier versions. The suit capitalizes on historical designs, meets NASA requirements for safety and functionality, and introduces cutting-edge innovations. Boeing unveiled its spacesuit design Wednesday as the company continues to move toward flight tests of its Starliner spacecraft and launch systems that will fly astronauts to the International Space Station.


A few of the advances in the design:
  • Lighter and more flexible through use of advanced materials and new joint patterns
  • Helmet and visor incorporated into the suit instead of detachable
  • Touchscreen-sensitive gloves
  • Vents that allow astronauts to be cooler, but can still pressurize the suit immediately
  • The full suit, which includes an integrated shoe, weighs about 20 pounds with all its accessories – about 10 pounds lighter than the launch-and-entry suits worn by space shuttle astronauts.

The new Starliner suit's material lets water vapor pass out of the suit, away from the astronaut, but keeps air inside. That makes the suit cooler without sacrificing safety. Materials in the elbows and knees give astronauts more movement, too, while strategically located zippers allow them to adapt the suit's shape when standing or seated.

January 25, 2017

US Navy will fire 150 kilowatt laser on a test ship in 2018 and then from carriers and destroyers in 2019

The U.S. Navy is moving at warp speed to develop lasers with more lethality, precision and power sources as a way to destroy attacking missiles, drones, aircraft and other threats.

The US Navy plans to fire a 150-kilowatt weapon off a test ship within a year, a Navy official said. “Then a year later, we’ll have that on a carrier or a destroyer or both.”

That’s quite a jump from the 30-kilowatt AN/SEQ-3(XN-1) Laser Weapon System (LaWS), which deployed in 2014 on the amphibious transport dock USS Ponce.

And the kind of power needed to power such a weapon won’t come with a simple flip of a switch.

“The Navy will be looking at ships’ servers to provide three times that much power,” says Donald Klick, director of business development for DRS Power and Control Technologies. “To be putting out 150 kws, they (the laser systems) will be consuming 450 kws.”

That is more than most currently operational ships are designed to accommodate, at least when they are conducting other tasks.

“Few power systems onboard ships can support sustained usage of a high-powered laser without additional energy storage,” noted a recent Naval Postgraduate School paper titled “Power Systems and Energy Storage Modeling for Directed Energy Weapons.”

The paper said, “The new DDG-1000 may have enough electrical energy, but other platforms … may require some type of ‘energy magazine.’ This magazine stores energy for on-demand usage by the laser. It can be made up of batteries, capacitors, or flywheels, and would recharge between laser pulses. The energy magazine should allow for sustained usage against a swarm of targets in an engagement lasting up to twenty minutes.”

The Navy has contracted the development of a Li-Ion battery subsystem, designed and provided by Lithiumstart, housed in three distributed welded-steel cabinets that are 48” x 66” x 100” – although they are modular, Klick says, and can be arranged for a tailored fit. Each cabinet contains 18 drawers with 480 Li-Ion phosphate cells in each drawer.

The redundant power modules can provide 465 kw each for a total of 930 kw. The bank can hold that full-power mark for about three minutes.
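
The figures quoted above imply a rough energy budget that is easy to check. This back-of-the-envelope sketch uses only numbers from the article; the three-minute hold time is the module spec, not a tactical requirement.

```python
# Back-of-the-envelope sizing for the shipboard laser "energy magazine"
# using figures quoted in the article.

laser_output_kw = 150
laser_draw_kw = 450            # power consumed while firing the 150 kW laser
efficiency = laser_output_kw / laser_draw_kw     # wall-plug efficiency, ~33%

module_kw = 465                # per redundant power module
modules = 2
total_kw = module_kw * modules                   # 930 kW combined
hold_time_s = 3 * 60                             # ~3 minutes at full power

# Energy delivered over the full-power hold, in kilowatt-hours.
stored_energy_kwh = total_kw * hold_time_s / 3600

print(f"laser wall-plug efficiency ~{efficiency:.0%}")
print(f"battery bank: {total_kw} kW for {hold_time_s} s "
      f"= {stored_energy_kwh:.1f} kWh delivered")
```

At a 450 kW draw, that delivered energy corresponds to roughly six minutes of continuous lasing, which is why the Naval Postgraduate School paper frames the magazine around twenty-minute engagements with recharging between pulses.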

Improvements of high power fiber lasers used to form the laser beam enable the increased 150 kilowatt power levels and extended range capabilities.

D-Wave’s next redesign will allow for higher connection density and 4,000 qubits by late 2018, and expansion beyond 10,000 qubits

D-Wave is also working on a fifth model, which will provide even greater capacity and connectivity and a closer fit to scientists’ needs. Likely to launch within two years, the machine will again double the number of qubits, to around 4,000. Crucially, it will also provide more-complex connections between qubits, allowing it to tackle more-complicated problems.

“Changing the underlying connectivity is going to be a game-changer,” says Mark Novotny, a physicist at Charles University in Prague, who is exploring a D-Wave machine’s applications to cybersecurity. “I’m basically drooling hoping for it. It’s very exciting.”

D-Wave’s latest 2000 qubit iteration includes an upgrade that Novotny has been clamoring for. The feature gives more control when different groups of qubits go through the annealing process. In at least one case, D-Wave has shown that this can speed up certain calculations 1,000-fold. For Novotny, the feature is crucial because it will allow his team to “sample” qubits during the process, which opens the door to D-Wave exploring a different type of machine-learning algorithm that could learn to recognize much more complex patterns of cyberattacks.

Arxiv - Quantum Annealing amid Local Ruggedness and Global Frustration

A recent Google study [Phys. Rev. X, 6:031015 (2016)] compared a D-Wave 2X quantum processing unit (QPU) to two classical Monte Carlo algorithms: simulated annealing (SA) and quantum Monte Carlo (QMC). The study showed the D-Wave 2X to be up to 100 million times faster than the classical algorithms. The Google inputs are designed to demonstrate the value of collective multiqubit tunneling, a resource that is available to D-Wave QPUs but not to simulated annealing. But the computational hardness in these inputs is highly localized in gadgets, with only a small amount of complexity coming from global interactions, meaning that the relevance to real-world problems is limited. In this study we provide a new synthetic problem class that addresses the limitations of the Google inputs while retaining their strengths. We use simple clusters instead of more complex gadgets and more emphasis is placed on creating computational hardness through global interactions like those seen in interesting real-world inputs.
We use these inputs to evaluate the new 2000-qubit D-Wave QPU. We include the HFS algorithm---the best performer in a broader analysis of Google inputs---and we include state-of-the-art GPU implementations of SA and QMC. The D-Wave QPU solidly outperforms the software solvers: when we consider pure annealing time (computation time), the D-Wave QPU reaches ground states up to 2600 times faster than the competition. In the task of zero-temperature Boltzmann sampling from challenging multimodal inputs, the D-Wave QPU holds a similar advantage and does not see significant performance degradation due to quantum sampling bias.
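
The classical baseline in these benchmarks, simulated annealing, is easy to sketch for a toy problem. The 8-spin ferromagnetic ring and the cooling schedule below are invented for illustration; they are not the benchmark inputs, and a real SA solver would be far more heavily optimized.

```python
import math
import random

# Minimal simulated annealing (SA) on a toy Ising problem, the classical
# baseline the D-Wave benchmarks above compare against. The instance here
# is an 8-spin ferromagnetic ring, chosen only for illustration.
random.seed(0)

n = 8
# Couplings J[(i, j)] = -1 favor aligned neighboring spins on a ring.
J = {(i, (i + 1) % n): -1.0 for i in range(n)}

def energy(spins):
    """Ising energy E = sum over couplings of J_ij * s_i * s_j (lower is better)."""
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def anneal(sweeps=2000, t_hot=5.0, t_cold=0.01):
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for k in range(sweeps):
        # Geometric cooling from t_hot down to t_cold.
        temp = t_hot * (t_cold / t_hot) ** (k / (sweeps - 1))
        i = random.randrange(n)
        # Flipping spin i negates every coupling term it appears in,
        # so the energy change is -2 times its current local contribution.
        local = sum(Jij * spins[a] * spins[b]
                    for (a, b), Jij in J.items() if i in (a, b))
        delta = -2.0 * local
        # Metropolis acceptance: always take downhill moves, sometimes uphill.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            spins[i] = -spins[i]
    return spins, energy(spins)

spins, final_energy = anneal()
print("final energy:", final_energy, "| ground-state energy:", -float(n))
```

Quantum annealing replaces the thermal uphill moves with tunneling through energy barriers, which is the resource the Google and D-Wave studies above are trying to isolate.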

But researchers want greater connectivity. Currently, each qubit in the processor can ‘talk’ to only six others, says Scott Pakin, a computer scientist and D-Wave scientific and technical lead at the Los Alamos National Laboratory in New Mexico, which has had a D-Wave computer since August. “The richer the connections, the easier and faster it is to get problems onto the D-Wave. So that’s top of my wish list.”

D-Wave is redesigning its fifth processor to increase connectivity significantly, says ­Jeremy Hilton, the company’s senior vice-president responsible for technology. And because this upgrade involves a hardware overhaul, it will have an additional benefit: allowing the firm to expand beyond the 10,000-qubit limit imposed by the current processor’s design in future machines, he adds.

D-Wave machines are a long way from showing the exponential speed increase over classical computers that their advocates hope to see. But in a paper posted on 17 January and not yet peer-reviewed, a D-Wave team claimed the 2000Q could find solutions up to 2,600 times faster than any known classical algorithm.

D-Wave’s qubits are much easier to build than the equivalent in more traditional quantum computers, but their quantum states are also more fragile, and their manipulation less precise. So although scientists now agree that D-Wave devices do use quantum phenomena in their calculations, some doubt that they can ever be used to solve real-world problems exponentially faster than classical computers — however many qubits are clubbed together, and whatever their configuration. The uncertainty hasn’t stopped the number of users growing: last September, around 100 scientists attended D-Wave’s first users’ conference in Santa Fe, New Mexico.

Existing D-Wave computers are located in the United States, but researchers globally can access them remotely, including through schemes such as the USRA’s. The machines are attracting new kinds of researcher, says Venturelli, who uses one of them to try to find the best way for rovers to autonomously schedule operations and manage time. “Universities with nothing to do with quantum physics are now trying their algorithms,” he says.

D-Wave machines have attracted scepticism as well as excitement since they went on sale six years ago. So far, researchers have proved that, for a problem crafted to suit the machine’s abilities, the quantum computer can offer a huge increase in processing speed over a classical version of an algorithm (V. S. Denchev et al. Phys. Rev. X 6,031015; 2016). But the computers do not beat every classical algorithm, and no one has found a problem for which they outperform all classical rivals.

What is the Computational Value of Finite-Range Tunneling? [2016]

Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite-range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to simulated annealing (SA). For instances with 945 variables, this results in a time-to-99%-success-probability that is ∼10^8 times faster than SA running on a single processor core. We also compare physical QA with the quantum Monte Carlo algorithm, an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X again runs up to ∼10^8 times faster than an optimized implementation of the quantum Monte Carlo algorithm on a single core. We note that there exist heuristic classical algorithms that can solve most instances of Chimera structured problems in a time scale comparable to the D-Wave 2X. However, it is well known that such solvers will become ineffective for sufficiently dense connectivity graphs. To investigate whether finite-range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that algorithms designed to simulate QA scale better than SA. We discuss the implications of these findings for the design of next-generation quantum annealers.

Ukraine and China progressing on super-heavy helicopter with 30-ton capacity

Ukraine has test-fired its newly designed AI-136T engine for the first time. The engine is intended for China's super-heavy helicopter program. The helicopter itself is planned to be based on the Soviet Mi-26 airframe design, although scaled down to a somewhat smaller size. Combined with a more powerful engine than previously used, the new helicopter is expected to be a much more energetic machine, with a higher combat ceiling, higher speed, and better maneuverability. Expected payload capacity is 30 tons.

Ukraine AI-136T engine

planned heavy copter

The Mi-26 is the largest and most powerful helicopter to have gone into series production.

Soviet MI-26

Russia was also supplying technology and engineering for China's planned heavy copter

Robotic fabricator can precisely lay bricks and weld wire; next-generation bot will be stronger and lighter

A construction robot has to be powerful enough to handle heavy material, small enough to enter standard buildings, and flexible enough to navigate the terrain.

Researchers have developed a new class of robot capable of creating novel structures on a construction site. They call their new robot the In Situ Fabricator1 and today show what it is capable of.

The In Situ Fabricator is designed from the bottom up to be practical. It must build structures using a range of tools with a precision of better than five millimeters, operate semi-autonomously in a complex, changing environment, reach the height of a standard wall, and fit through ordinary doorways. It must also be dust- and waterproof, run off standard electricity, and have battery backup. On top of all this, it must be Internet-connected so that an architect can make real-time changes to the plans if necessary.

Those are a tricky set of targets but ones that the In Situ Fabricator1 largely meets. It has a set of cameras to sense its environment and powerful onboard processors for navigating and planning tasks. It also has a flexible, powerful robotic arm to position construction tools.

To show off its capabilities, Giftthaler and co have used it to build a pair of structures in an experimental construction site in Switzerland called NEST (Next Evolution is Sustainable building Technologies). The first is a double-leaf undulating brick wall that is 6.5 meters long and two meters high and made of 1,600 bricks.

Even positioning such a wall correctly on a construction site is a tricky task. In Situ Fabricator1 does this by comparing the map of the construction site it has gathered from its sensors with the architect’s plans. But even then, it must have the flexibility to allow for unforeseen problems such as uneven terrain or material sagging that changes a structure’s shape.

“To fully exploit the design-related potentials of using such a robot for fabrication, it is essential to make use not only of the manipulation skills of this robot, but to also use the possibility to feed back its sensing data into the design environment,” say Giftthaler and co.

The resulting wall, in which all the bricks are positioned to within seven millimeters, is an impressive structure.

The second task was to weld wires together to form a complex, curved steel mesh that can be filled with concrete. Once again, In Situ Fabricator1’s flexibility proved crucial. One problem with welding is that the process creates tensions that can change the overall shape of the structure in unpredictable ways. So at each stage in the construction, the robot must assess the structure and allow for any shape changes as it welds the next set of wires together. Once again, the results at NEST are impressive.

In Situ Fabricator1 is not perfect, of course. As a proof-of-principle device, Giftthaler and co use it to identify improvements they can make to the next generation of construction robot. One of these is that at almost 1.5 metric tons, In Situ Fabricator1 is too heavy to enter many standard buildings—500 kilograms is the goal for future machines.

But perhaps the most significant problem is a practical limit on the strength and flexibility of robotic arms. In Situ Fabricator1 is capable of manipulating objects up to about 40 kilograms but ideally ought to be able to handle objects as heavy as 60 kilograms.

But that pushes it up against a practical limit. In Situ Fabricator1’s arm is controlled by electric motors that are incapable of handling heavier objects with the same level of precision. What’s more, electric motors are notoriously unreliable in the conditions found on construction sites, which is why most heavy machinery on these sites is hydraulic.

They have designed and built a hydraulic actuator that can control a next-generation robot arm while handling heavier objects more reliably and with the same precision. They are already using this design to build the next generation construction robot that they call In Situ Fabricator2, which should be ready by the end of this year.

Automation speeds clinical safety surveillance

A study using patient outcomes data from approximately 1,800 hospitals, reported in this week's New England Journal of Medicine, is the largest demonstration to date of automated safety surveillance of a medical device.

Vanderbilt medical informatics researcher and internal medicine specialist Michael Matheny, M.D., MPH, M.S., and colleagues demonstrate the effectiveness of an automated web-based surveillance system for spotting potential safety problems with less delay.

Their demonstration concerns a device used to close the small hole left in an artery after the insertion and removal of flexible tubing, that is, the hole left after vascular catheterization.

After three quarterly data uploads from the National Cardiovascular Data Registry (NCDR) CathPCI Registry, the automated system found a statistical signal indicating a potential safety issue with a vascular closure device called the Mynx.

The study's final results were based on data from approximately 146,000 registry patients, half of whom got the Mynx.

"The takeaway is we were able to detect a safety signal fairly quickly. When it comes to post-market safety surveillance for medical devices, you have to be really timely in your safety alerting for it to matter. In our world, nine months is quick enough that if you could get the signal to the FDA, they could act on it," said Matheny, assistant professor of Biomedical Informatics, Medicine and Biostatistics.
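The kind of check such a surveillance system runs on each data upload can be illustrated with a simple two-proportion z-test on adverse-event rates. The counts below are invented for illustration and are not the Mynx study data:

```python
# Compare the adverse-event rate for a monitored device against all
# comparator devices after one hypothetical quarterly registry upload.
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-proportion z statistic with a pooled standard error."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# hypothetical quarter: 1.2% event rate on the device vs 0.8% on comparators
z = two_proportion_z(events_a=120, n_a=10_000, events_b=80, n_b=10_000)
signal = z > 2.576  # flag at the two-sided 1% significance level
print(round(z, 2), signal)
```

Real surveillance systems use more careful sequential methods (repeated looks at the data inflate false-positive rates), but the core idea of automatically re-testing after each upload is the same.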

Deep learning algorithm could enable the 6.3 billion smartphones expected by 2021 to diagnose skin cancer with dermatologist-level accuracy

Computer scientists at Stanford set out to create an artificially intelligent diagnosis algorithm for skin cancer. They made a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test, it performed with inspiring accuracy.

“We realized it was feasible, not just to do something well, but as well as a human dermatologist,” said Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory. “That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.’”

The final product, the subject of a paper in the Jan. 25 issue of Nature, was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.

A dermatologist uses a dermatoscope, a type of handheld microscope, to look at skin. Computer scientists at Stanford have created an artificially intelligent diagnosis algorithm for skin cancer that matched the performance of board-certified dermatologists. (Image credit: Matt Young)

Neural Network Layout

Nature - Dermatologist-level classification of skin cancer with deep neural networks

US Navy sticking with Advanced Arresting Gear on next carrier despite cost tripling from $476 million to $1.4 billion

The Navy is electing to use the controversial Advanced Arresting Gear on its next Gerald R. Ford-class carrier, John F. Kennedy (CVN-79), USNI News has learned.

The arresting gear was estimated in 2009 to cost $476 million in research, development, and acquisition for four systems, but the 2016 estimate has grown to $1.4 billion – about a 130 percent increase when adjusted for inflation.

Earlier this month, the Navy’s chief weapons buyer notified Congress it was set to install the General Atomics-built AAG on JFK following an evaluation between the AAG and the legacy Mk-7 MOD3 hydraulic arresting system found on the Nimitz-class carriers.

In the last half of 2016, the future of the AAG on carriers beyond Gerald R. Ford (CVN-78) was in doubt and drew scrutiny from the Senate Armed Services Committee and the Office of the Secretary of Defense as part of a larger look at the Ford program.

For its part, the Navy stood up a review board to evaluate use of the system past Ford.

The board – which included Chief of Naval Operations Adm. John Richardson and the Navy’s head of research – reported back to the House and Senate defense committees that reverting to the Mk-7 arresting gear would be cost-prohibitive and result in disruption to construction of future carriers.

In 2015, Naval Sea Systems Command identified a design flaw in the AAG's water twister, a complex paddle wheel designed to absorb 70 percent of the force of an airplane's tailhook landing against an arresting wire, bringing the airplane to a stop. In November, the head of NAVSEA, Vice Adm. Tom Moore, said the testing program for the AAG had shown marked improvements.

“When that ship delivers we’ll be ready to land aircraft on AAG. I think (CVN) 78 is doing much better, and I think we’ll have a fully functional system,” Moore said.
“I don’t want to presuppose any decision, but I believe if the system functions the way it does on 78 — and given where we are on CVN-79 and the construction of the ship — that it’s a very strong and viable path forward for us.”

Still, ground testing for the system, which had been slated to be completed two years earlier, had to continue even as Ford was wrapping up pre-delivery testing. The failures in AAG development were in part responsible for several delays in the delivery of Ford.

In a Monday statement, NAVAIR lauded progress of the program.

“AAG works,” said Capt. Steve Tedford, program manager for Aircraft Launch and Recovery Equipment (PMA 251), whose team manages the recovery system program.
“The progress of AAG testing this past year has been significant and has demonstrated the system’s ability to meet Navy requirements. The team overcame many challenges to get the system to this point and ensure its readiness to support CVN 78 and future Ford-class ships.”

Elon Musk will start making traffic busting tunnels next month

Elon Musk thinks being stuck in traffic is “soul-destroying” — but, he has a solution: tunnels. Musk has been tweeting about tunnels for a month now, and even said he’s going to build a tunnel boring machine and start digging. In developed cities, we can’t widen roadways, and elevated highways are ugly. So, we dig.

“Without tunnels, we will all be in traffic hell forever,” Musk told The Verge via Twitter DM today. “I really do think tunnels are the key to solving urban gridlock. Being stuck in traffic is soul-destroying. Self-driving cars will actually make it worse by making vehicle travel more affordable.”

Asked where the tunnel would start, Musk replied on Twitter to @_wsimson: “Starting across from my desk at SpaceX. Crenshaw and the 105 Freeway, which is 5 mins from LAX.”

Physicists patent detonation technique to mass-produce graphene

A Kansas State University team of physicists has discovered a way to mass-produce graphene with three ingredients: hydrocarbon gas, oxygen and a spark plug.

Their method is simple: Fill a chamber with acetylene or ethylene gas and oxygen. Use a vehicle spark plug to create a contained detonation. Collect the graphene that forms afterward.
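As a rough sanity check on the chemistry, an idealized oxygen-lean reaction puts an upper bound on how much carbon a gram of acetylene can yield. The reaction choice and the assumption of 100% conversion are simplifications for illustration, not figures from the patent:

```python
# Idealized limiting reaction: C2H2 + 1/2 O2 -> 2 C + H2O.
# With oxygen kept lean, the hydrogen burns preferentially and the
# carbon is left behind as soot/graphene.
M_C, M_H = 12.011, 1.008                 # atomic masses, g/mol
m_acetylene = 2 * M_C + 2 * M_H          # molar mass of C2H2
carbon_fraction = (2 * M_C) / m_acetylene

# ideal grams of carbon recoverable per gram of acetylene
print(round(carbon_fraction, 3))
```

In other words, more than 90% of the acetylene's mass is carbon, which is part of why a single-spark detonation route can be energy-cheap per gram of product.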

Chris Sorensen, Cortelyou-Rust university distinguished professor of physics, is the lead inventor of the recently issued patent, "Process for high-yield production of graphene via detonation of carbon-containing material." Other Kansas State University researchers involved include Arjun Nepal, postdoctoral researcher and instructor of physics, and Gajendra Prasad Singh, former visiting scientist.

"We have discovered a viable process to make graphene," Sorensen said. "Our process has many positive properties, from the economic feasibility, the possibility for large-scale production and the lack of nasty chemicals. What might be the best property of all is that the energy required to make a gram of graphene through our process is much less than other processes because all it takes is a single spark."

Graphene is a single atom-thick sheet of hexagonally coordinated carbon atoms, which makes it the world's thinnest material. Since graphene was isolated in 2004, scientists have found it has valuable physical and electronic properties with many possible applications, such as more efficient rechargeable batteries or better electronics.

January 24, 2017

Terrestrial Energy notifies nuclear regulator of planned 2019 molten salt reactor licensing application

Terrestrial Energy USA announced today it had informed the US Nuclear Regulatory Commission (NRC) of its plans to license a small modular reactor (SMR) in the USA. Terrestrial said it intends to start "pre-application interactions" with the regulator this year and to make its licensing application in late 2019.

The NRC recently published a letter from Terrestrial responding to the agency's Regulatory Issue Summary (RIS) published on 7 June last year. An RIS is an NRC request for information regarding future nuclear reactor licence filings.

In its letter, dated 18 November 2016, Terrestrial said it plans to submit an application to the NRC for a design certification or a construction permit "no later than October 2019".

Terrestrial included the status of the design, analyses, testing, licensing, and project planning for its Integral Molten Salt Reactor (IMSR), which is a liquid-fuelled, high-temperature, 400 MWt advanced reactor power plant design.

Why is Terrestrial Energy's Integral Molten Salt Reactor a big deal?
  • A molten salt 7.4 MWth test reactor was operated at Oak Ridge from 1965-1969, so there is no question about technical feasibility
  • A conservative first IMSR design should be competitive with established power at about 3 cents per kWh
  • Later designs should be able to get lower than 1 cent per kWh
  • Design is walk away safe with passive safety systems
  • First designs would produce 6 times less nuclear waste and later designs can close the fuel cycle
  • Canada can use the first several hundred reactors to directly produce steam to profitably produce oil from the oilsands
  • Canada and Terrestrial Energy can thus use the oilsand reactors to profitably climb the learning curve before factory mass production of supersafe, super efficient and disruptively lower cost reactors
  • These systems could provide 100% of global electricity demand without any emissions
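For a sense of the arithmetic behind a ~3 cents/kWh claim, a simple levelized-cost sketch shows what inputs would be required. Every number below is an illustrative placeholder, not a Terrestrial Energy figure:

```python
# Simplified levelized cost of electricity (LCOE): capital charge per
# kWh plus operations and fuel. All inputs are assumptions.
capacity_mwe = 190            # ~400 MWt at ~47% thermal efficiency (assumed)
overnight_cost = 0.5e9        # $ total capital (assumed)
fixed_charge_rate = 0.07      # annualized financing rate (assumed)
capacity_factor = 0.90        # fraction of the year at full power
om_fuel_per_kwh = 0.008       # $ per kWh for O&M plus fuel (assumed)

kwh_per_year = capacity_mwe * 1000 * 8760 * capacity_factor
capital_per_kwh = overnight_cost * fixed_charge_rate / kwh_per_year
lcoe = capital_per_kwh + om_fuel_per_kwh
print(round(lcoe * 100, 2), "cents/kWh")
```

The point of the sketch is that the 3-cent figure hinges mostly on keeping capital cost and financing low, which is exactly what climbing the learning curve on oilsands steam units is meant to achieve.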

D-Wave adiabatic quantum system can factor numbers up to 200,000, and likely much larger numbers in the 40-bit range

There are adiabatic factoring algorithms and methods.

Dwave is focused on optimization problems, however the system can be used to solve other problems including factoring.

In November 2014, it was discovered that this 2012 adiabatic quantum computation had also factored larger numbers, the largest being 56,153. A 2016 paper discussed factoring with D-Wave using about 900 qubits to factor numbers up to 200,099 (about 18 bits). Extrapolating to 2000 qubits would suggest roughly 40 bits. The latest D-Wave has 2000 qubits.

As of 2014, the largest RSA number factored on a classical computer was RSA-768, which has 768 bits and took two years to compute (from 2007 to 2009).

Tutorial on adiabatic quantum computation (42 pages)

Quantum adiabatic optimization is a class of procedures for solving optimization problems using a quantum computer.

Basic strategy:
• Design a Hamiltonian whose ground state encodes the solution of an optimization problem.
• Prepare the known ground state of a simple Hamiltonian.
• Interpolate slowly.
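The strategy above can be made concrete with a one-qubit toy model: interpolate H(s) = -(1-s)·σx - s·σz, where the easy Hamiltonian's ground state is known and the problem Hamiltonian's ground state encodes the answer. The required runtime grows like the inverse square of the minimum spectral gap along the path, so the gap is the quantity to track. This toy Hamiltonian is for illustration only, not D-Wave's actual hardware Hamiltonian:

```python
# Spectral gap of the one-qubit interpolated Hamiltonian
# H(s) = -(1-s)*sigma_x - s*sigma_z, whose eigenvalues are
# +/- sqrt((1-s)^2 + s^2).
import math

def gap(s):
    return 2 * math.sqrt((1 - s) ** 2 + s ** 2)

gaps = [gap(i / 100) for i in range(101)]
min_gap = min(gaps)
s_min = gaps.index(min_gap) / 100
print(s_min, round(min_gap, 4))  # gap is smallest mid-interpolation
```

Even in this trivial case the gap is smallest in the middle of the sweep; for hard optimization instances the minimum gap can close exponentially, which is what limits how "slowly" one must interpolate.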

There is a list of quantum algorithms (adiabatic and regular quantum) at the quantum zoo

D-Wave systems are adiabatic

D-Wave Systems Inc., the leader in quantum computing systems and software, today announced general commercial availability of the D-Wave 2000Q quantum computer. D-Wave also announced the first customer for the new system, Temporal Defense Systems Inc. (TDS), a cutting-edge cyber security firm. With 2000 qubits and new control features, the D-Wave 2000Q system can solve larger problems than was previously possible, with faster performance, providing a big step toward production applications in optimization, cybersecurity, machine learning, and sampling.

There are adiabatic and quantum annealing prime factoring algorithms

Arxiv - Prime factorization using quantum annealing and computational algebraic geometry (2016). Note this is a public paper. The NSA has worked with D-Wave systems, and it is plausible that the NSA has more efficient quantum annealing prime factoring algorithms.

We [researchers] investigate prime factorization from two perspectives: quantum annealing and computational algebraic geometry, specifically Gröbner bases. We present a novel autonomous algorithm which combines the two approaches and leads to the factorization of all bi-primes up to just over 200 000, the largest number factored to date using a quantum processor. We also explain how Gröbner bases can be used to reduce the degree of Hamiltonians.

They used one of the D-Wave 2X processors, DW2X SYS4, as their quantum annealing solver. This processor operates at a temperature of 26 (±5) millikelvin and has 1,100 qubits with a 95.5% qubit yield. To embed the problem graph into the hardware graph they used the sapiFindEmbedding and sapiEmbedProblem modules, and to solve the problems they used the sapiSolveIsing and sapiUnembedAnswer modules. For all problems they opted for the maximum number of reads available (10,000) in order to increase the fraction of ground-state samples. The following table shows statistics of the embedding and solving stages for several of the highest numbers they were able to successfully embed and solve.

Prime factorization is at the heart of secure data transmission because it is widely believed to be computationally intractable for classical computers. In the prime factorization problem, for a large bi-prime M, the task is to find the two prime factors p and q such that M = pq. In secure data transmission, the message to be transmitted is encrypted using a public key which is, essentially, a large bi-prime that can only be decrypted using its prime factors, which are kept in a private key. Prime factorization also connects to many branches of mathematics; two branches relevant here are computational algebraic geometry and quantum annealing.

Column factoring procedure
They used two single-bit multiplication methods of the two primes p and q. The first method generates a Hamiltonian for each of the columns of the long multiplication expansion, while the second method generates a Hamiltonian for each of the multiplying cells in the long multiplication expansion.

The equation for an arbitrary column i can be written as the sum of the column's multiplication terms plus all carry terms generated by less significant columns. This sum equals the column's bi-prime bit m_i plus the carries passed on to more significant columns.
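The column equations can be checked directly on a tiny bi-prime. The sketch below verifies the arithmetic identity for M = 5 × 7 = 35; the paper's contribution is turning each such column equation into a Hamiltonian penalty term, which this sketch does not attempt:

```python
# Verify the long-multiplication column equations for p * q = M,
# bit by bit (least significant bit first).
def bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

p, q = 5, 7
M = p * q
pb, qb = bits(p, 3), bits(q, 3)   # p = 101b, q = 111b
mb = bits(M, 6)                   # M = 100011b

carry = 0
for i in range(6):
    # sum of partial products p_j * q_k with j + k = i, plus carry in
    col = sum(pb[j] * qb[i - j] for j in range(3) if 0 <= i - j < 3) + carry
    assert col % 2 == mb[i], f"column {i} mismatch"
    carry = col // 2              # carry out to the next column
assert carry == 0
print("all column equations satisfied for", p, "*", q, "=", M)
```

In the quantum annealing formulation, the unknown bits of p and q (and the carry bits) become qubits, and each column equation becomes a quadratic penalty that is zero only when the equation holds.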

The paper used about 900 qubits; larger numbers could be factored with the 2000-qubit D-Wave system.

There are other papers and ongoing work on solving other problems with D-Wave-like adiabatic systems.

Another adiabatic algorithm solves exact cover problems

Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem

An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions.
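For readers unfamiliar with the target problem, a brute-force sketch of a toy EC3 instance shows what the quantum algorithm is asked to solve. The instance below is invented for illustration; the paper's contribution is the fast adiabatic algorithm, not brute force:

```python
# EC3: each clause names three bits, and a satisfying assignment must
# set exactly one of the three to 1. Brute force over a 4-bit instance.
from itertools import product

n_bits = 4
clauses = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]

solutions = [
    assignment for assignment in product((0, 1), repeat=n_bits)
    if all(assignment[a] + assignment[b] + assignment[c] == 1
           for a, b, c in clauses)
]
print(solutions)
```

Brute force scales as 2^n, which is why EC3 is a standard benchmark for adiabatic algorithms: the clauses map naturally onto a problem Hamiltonian whose ground state is the satisfying assignment.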

High-frequency financial traders can now have 20-nanosecond latency, down from 200 nanoseconds

In high frequency trading, the latency gold standard is 200 nanoseconds. If you’re an equity trader using a Bloomberg Terminal or Thomson Reuters Eikon, latency of more than 200 nanoseconds is considered to be shockingly pedestrian, putting you at risk of buying or selling a stock at a higher or lower price than the one you saw quoted. Now, with its announcement of TCPDirect, Solarflare said it has cut latency by 10X, to 20-30 nanoseconds.

High frequency trading occurs because it is highly profitable. Removing the advantage of sub-second racing would take a regulatory step, such as having all orders execute precisely at the top of each second, with no intra-second trading. Until then, the race remains profitable.

Solarflare Communications is an unheralded soldier in the eternal war on latency. With its founding in 2001, Solarflare took on the daunting raison d’être of grinding down latency from one product generation to the next for the most latency-sensitive use cases, such as high frequency trading. Today, the company has more than 1,400 customers using its networking I/O software and hardware to cut the time between decision and action.

Solarflare offers customers the lowest latency networking solutions for electronic/high frequency trading and other financial services applications with its high-performance 10GbE server adapters and Onload™ application acceleration middleware. These products enable customers to leverage their existing Ethernet and IP infrastructures while achieving the absolute lowest latency with no need to modify applications.

The CTO of an equity trading firm, who agreed to talk with HPCwire‘s sister pub EnterpriseTech anonymously, said his company has been a Solarflare customer for four years and that its IT department has validated Solarflare’s claims for TCPDirect of 20-30 nanoseconds latency.

Financial traders are in a race to make transactions ever faster. In today's high-tech exchanges, firms can execute more than 100,000 trades in a second for a single customer. This summer, London and New York's financial centres will become able to communicate 2.6 milliseconds (about 10%) faster after the opening of a transatlantic fibre-optic line dubbed the Hibernia Express, costing US$300 million. As technology advances, trading speed is increasingly limited only by fundamental physics, and the ultimate barrier — the speed of light.

Through glass optical fibres, information travels at two-thirds of the speed of light in a vacuum (300,000 kilometres per second). To go faster, data must travel through the air. Next up may be hollow-core fibre cables, through which light would travel in a tiny air gap at light speed.
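The speeds quoted above translate directly into one-way latencies. Assuming an illustrative ~5,600 km New York-London path (real cable routes are longer than the great circle, so actual latencies are higher):

```python
# One-way travel time at vacuum light speed vs ~2/3 c in solid-core fibre.
C = 299_792.458          # speed of light in vacuum, km/s
distance_km = 5_600      # assumed New York-London path length

t_vacuum_ms = distance_km / C * 1000
t_fiber_ms = distance_km / (C * 2 / 3) * 1000
advantage_ms = t_fiber_ms - t_vacuum_ms
print(round(t_fiber_ms, 2), round(t_vacuum_ms, 2), round(advantage_ms, 2))
```

The gap between the two numbers is several milliseconds, which is why a route like the Hibernia Express that shaves 2.6 ms is worth $300 million, and why hollow-core fibre, where light travels through air at nearly vacuum speed, is the next frontier.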

High-frequency trading relies on fast computers, algorithms for deciding what and when to buy or sell, and live feeds of financial data from exchanges. Every microsecond of advantage counts. Faster data links between exchanges minimize the time it takes to make a trade; firms fight over whose computer can be placed closest; traders jockey to sit closer to the pipe. It all costs money — renting fast links costs around $10,000 per month.

Colocation beats the speed of light

Colocation means placing servers at major exchanges, as close as possible to the actual computers that execute the trades.

A group of computers colocated with the exchange servers is optimal. Solarflare further shaves the trading latency

Solarflare is regarded as a partner that allows high frequency trading firms to focus on core competencies, rather than devoting in-house time and resources to lowering latency.

“It used to be the case that there weren’t a lot of commercial, off-the-shelf products applicable to this space,” he said. “If one of our competitors wanted to do something like this for competitive advantage, Solarflare can do it better, faster, cheaper, so they’re basically disincentivized from doing so. In a sense this is leveling the playing field in our industry, and we like that because we want to do what we’re good at, rather than spending our time working on hardware. We’re pleased when external vendors provide state-of-the-art technology that we can leverage.”

TCPDirect is a user-space, kernel-bypass application library that implements Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), the industry standards for network data exchange, over Internet Protocol (IP). It ships as part of Onload, Solarflare's application acceleration middleware designed to reduce CPU utilization and increase message rates.

The latency through any TCP/IP stack, even written to be low-latency, is a function of the number of processor and memory operations that must be performed between the application sending/receiving and the network adapter serving it. According to Ahmet Houssein, Solarflare VP/marketing and strategic development, TCP/IP’s feature-richness and complexity means implementation trade-offs must be made between scalability, feature support and latency. Independently of the stack implementation, going via the kernel imposes system calls, context switches and, in most cases, interrupts that increase latency.
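The kernel-crossing overhead described here can be felt even from a high-level language. Below is a minimal (and machine-dependent) comparison of a write() syscall against a pure user-space operation; it measures generic syscall cost on whatever machine runs it, not Solarflare's stack:

```python
# Per-operation cost of crossing into the kernel vs staying in user space.
import os
import time

N = 100_000
fd = os.open(os.devnull, os.O_WRONLY)
buf = b"x"

t0 = time.perf_counter()
for _ in range(N):
    os.write(fd, buf)          # user -> kernel -> user on every iteration
syscall_ns = (time.perf_counter() - t0) / N * 1e9

acc = []
t0 = time.perf_counter()
for _ in range(N):
    acc.append(buf)            # stays entirely in user space
userspace_ns = (time.perf_counter() - t0) / N * 1e9
os.close(fd)

print(f"syscall ~{syscall_ns:.0f} ns/op, user-space ~{userspace_ns:.0f} ns/op")
```

Kernel-bypass stacks like TCPDirect eliminate exactly this per-message crossing, along with the context switches and interrupts that come with it.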

Carnival of Space 492
