June 04, 2016

Northrop Grumman funded for $91 million to advance US Navy 150-kilowatt tactical laser modules

Northrop Grumman's space and mission systems business will provide support for the Solid State High Power Laser Weapon System Demonstrator program under a potential three-year, $91.1 million U.S. Navy contract.

The Defense Department said Thursday the cost-plus-fixed-fee contract has a base value of $53.1 million and base period of performance through Oct. 21, 2016.

Northrop will perform work in California in support of the Office of Naval Research's efforts to further develop the weapon system's design and components as well as boost the lethality of its laser weapon systems, DoD added.

The obligated amount of $36.5 million at the time of award is from the Navy’s research, development, test and evaluation funds for fiscal years 2015 and 2016.

The US government believes that improvements in lethality may be achieved through maturation and optimization of a variety of system characteristics, including laser power, beam quality, beam director architecture, and other physical and optical aspects of the laser system design.

A 53-page description of the Solid State, High Power Laser Weapon System Demonstrator (LWSD) Design, Development and Demonstration for the Surface Navy, USN

Government estimates indicate that systems with laser power of 100-150 kW may be supportable using ship power and cooling.

The Government is interested in an integrated TLCM, as demonstrated in Figure 1 below, which will
include, at a minimum:
• The high power SSL subsystem,
• The beam director subsystem (including accommodation for Mission Specific Modules described later in this document),
• The targeting and tracking subsystem,
• The fire control subsystem, and
• The necessary power or cooling subsystems to address interface or capacity issues that might be presented by the available ship utilities.
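The power and cooling question in the last bullet can be bounded with simple arithmetic. A minimal sketch, assuming a wall-plug efficiency of about 30% (an illustrative figure, not from the solicitation): a 150 kW-class laser implies roughly 500 kW of electrical draw and 350 kW of waste heat for the cooling subsystem to reject.

```python
# Back-of-envelope ship power budget for a 150 kW-class laser.
# The 30% wall-plug efficiency is an illustrative assumption, not a
# figure from the solicitation; real SSL efficiencies vary by design.
def laser_power_budget(optical_kw, wall_plug_eff=0.30):
    """Return (electrical draw, waste heat) in kW for a given optical output."""
    electrical_kw = optical_kw / wall_plug_eff
    waste_heat_kw = electrical_kw - optical_kw
    return electrical_kw, waste_heat_kw

elec, heat = laser_power_budget(150)
print(f"electrical draw ~{elec:.0f} kW, waste heat to cooling ~{heat:.0f} kW")
```

Numbers of this size are consistent with the government estimate above that 100-150 kW systems may be supportable from ship utilities.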





Body's immune system induced to attack tumors while remaining largely free of side effects - could lead to universal cancer vaccine

Scientists have taken a “very positive step” towards creating a universal vaccine against cancer that makes the body’s immune system attack tumours as if they were a virus, experts have said.

Writing in Nature, an international team of researchers described how they had taken pieces of cancer’s genetic RNA code, put them into tiny nanoparticles of fat and then injected the mixture into the bloodstreams of three patients in the advanced stages of the disease.

The patients' immune systems responded by producing "killer" T-cells designed to attack cancer.

The vaccine was also found to be effective in fighting “aggressively growing” tumors in mice, according to researchers, who were led by Professor Ugur Sahin from Johannes Gutenberg University in Germany.

“[Such] vaccines are fast and inexpensive to produce, and virtually any tumor antigen [a protein attacked by the immune system] can be encoded by RNA,” they wrote.

The paper said the three patients were given low doses of the vaccine and the aim of the trial was not to test how well the vaccine worked. While the patients' immune systems seemed to react, there was no evidence that their cancers went away as a result.

In one patient, a suspected tumor on a lymph node got smaller after they were given the vaccine. Another patient, whose tumours had been surgically removed, was cancer-free seven months after vaccination.

The third patient had eight tumours that had spread from the initial skin cancer into their lungs. These tumours remained “clinically stable” after they were given the vaccine, the paper said.

The vaccine, which used a number of different pieces of RNA, activated dendritic cells that select targets for the body's immune system to attack. This was followed by a strong response from the "killer" T-cells that normally deal with infections.

Cancer immunotherapy is currently causing significant excitement in the medical community.




Nature - Systemic RNA delivery to dendritic cells exploits antiviral defense for cancer immunotherapy

Modified stem cells used for overall 11.4-point improvement in motor function of stroke patients - people in wheelchairs are walking

People disabled by a stroke demonstrated substantial recovery long after the event when modified adult stem cells were injected into their brains.

Injecting modified, human, adult stem cells directly into the brains of chronic stroke patients proved not only safe but effective in restoring motor function, according to the findings of a small clinical trial led by Stanford University School of Medicine investigators.

The patients, all of whom had suffered their first and only stroke between six months and three years before receiving the injections, remained conscious under light anesthesia throughout the procedure, which involved drilling a small hole through their skulls. The next day they all went home.

Although more than three-quarters of them suffered from transient headaches afterward — probably due to the surgical procedure and the physical constraints employed to ensure its precision — there were no side effects attributable to the stem cells themselves, and no life-threatening adverse effects linked to the procedure used to administer them, according to a paper, published online June 2 in Stroke, that details the trial’s results.

“My right arm wasn’t working at all,” said trial participant Sonia Olea Coontz. “It felt like it was almost dead. My right leg worked, but not well.” She walked with a noticeable limp. “I used a wheelchair a lot.”

Not anymore, though.

“After my surgery, they woke up,” she said of her limbs.

Sonia Olea Coontz had a stroke in 2011 that affected the movement of her right arm and leg. After modified stem cells were injected into her brain as part of a clinical trial, she says her limbs "woke up."


June 03, 2016

Complex analogue and digital computations in engineered bacterial cells

A team of researchers at MIT has developed a technique to integrate both analogue and digital computation in living cells, allowing them to form gene circuits capable of carrying out complex processing operations.

Living cells are capable of performing complex computations on the environmental signals they encounter.

These computations can be continuous, or analogue, in nature — the way eyes adjust to gradual changes in the light levels. They can also be digital, involving simple on or off processes, such as a cell’s initiation of its own death.

Synthetic biological systems, in contrast, have tended to focus on either analogue or digital processing, limiting the range of applications for which they can be used.

The synthetic circuits are capable of measuring the level of an analogue input, such as a particular chemical relevant to a disease, and deciding whether the level is in the right range to turn on an output, such as a drug that treats the disease.

In this way they act like electronic devices known as comparators, which take analogue input signals and convert them into a digital output, according to Timothy Lu, an associate professor of electrical engineering and computer science and of biological engineering, and head of the Synthetic Biology Group at MIT’s Research Laboratory of Electronics, who led the research alongside former microbiology PhD student Jacob Rubens.
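The comparator behavior can be sketched in ordinary code. This is only a software analogy of what the gene circuits do chemically, and the band thresholds are made-up illustrative numbers, not values from the paper:

```python
# Software analogy of the biological comparator: an analogue input
# (e.g. the level of a disease-relevant chemical) produces a digital
# on/off output only when it falls inside a target band.
# The thresholds are illustrative numbers, not values from the paper.
def comparator(level, low=2.0, high=8.0):
    """Return True (turn the output on) when the analogue level is in band."""
    return low <= level <= high

for level in (0.5, 5.0, 12.0):
    print(f"input {level:>4}: output {'on' if comparator(level) else 'off'}")
```

In the cells, the same in-band decision could gate production of an output protein such as a therapeutic.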



A startup company, Synlogic, is aiming to create a new class of medicines, by re-programming bacteria found in the gut as “living therapeutics.” In March, 2016, Synlogic raised an additional $40 million in venture capital and secured its first industry partnership with pharmaceutical giant AbbVie.

Synlogic is working with Abbvie to create innovative synthetic biotics for the treatment of certain forms of inflammatory bowel disease (IBD) such as Crohn’s disease and ulcerative colitis.

Synlogic engineers synthetic biotics, a new class of medicines designed from natural probiotic bacteria that are programmed with exquisite precision to correct disease-causing metabolic dysregulation while operating within the microbiome.

Two of Synlogic’s main candidate drugs, expected to enter clinical trials during the next 12 months, treat rare genetic metabolic disorders. One drug candidate is for treating urea cycle disorder (UCD), which is caused by an enzyme deficiency that leads to a buildup of toxic ammonia in the blood. The other is for treating phenylketonuria (PKU), which involves a dangerous excess of the amino acid phenylalanine due to a mutation in another metabolic enzyme. In both cases, Synlogic’s drugs process and flush out the toxic metabolites from the body.

Local programming of the microbiome using synthetic biology enables therapeutic impact throughout the body. Synlogic achieves the therapeutic programming of synthetic biotics by engineering the bacteria to carry specialized assemblies of DNA, called genetic circuits. Genetic circuits are built using synthetic biology methods and components from our proprietary platform. The genetic circuits allow the synthetic biotic to sense a patient’s internal environment and respond by turning an engineered metabolic pathway on or off. When turned on, the synthetic biotic completes all of the necessary, programmed biochemical steps in a metabolic pathway to achieve therapeutic effect.

They engineer synthetic biotics from probiotic bacteria.

The human microbiota consists of hundreds of trillions of symbiotic microbial cells that live within and on each of us. The human microbiome comprises the quadrillion genes of those cells. To put these numbers into perspective, there are 10 times more microbial cells than human cells and 100 times more microbial genes than human genes in each of us. The role of the human microbiome as an agent of health and a potential avenue for therapeutic intervention is rapidly evolving.

Many scientists are beginning to regard the microbiota that resides in the gut as an additional human organ, albeit an organ whose function is still emerging. It weighs as much as many organs, somewhere between two and six pounds, is highly organized, and carries out functions essential to our health.

Nature Communications - Synthetic mixed-signal computation in living cells

Infinite number of quantum mechanics based speed limits

Techniques from information geometry have been used to show that there are an infinite number of quantum speed limits. The researchers also developed a way to determine which of these speed limits are the strictest, or in other words, which offer the tightest lower bounds. As the researchers explain, the search for the ultimate quantum speed limits is closely related to the very nature of time itself.

The attempt to gain a theoretical understanding of the concept of time in quantum mechanics has triggered significant progress towards the search for faster and more efficient quantum technologies. One such advance is the interpretation of the time-energy uncertainty relations as lower bounds for the minimal evolution time between two distinguishable states of a quantum system, also known as quantum speed limits. The researchers investigate how the nonuniqueness of a bona fide measure of distinguishability defined on the quantum-state space affects the quantum speed limits and can be exploited to derive improved bounds. Specifically, they establish an infinite family of quantum speed limits valid for unitary and nonunitary evolutions, based on an elegant information-geometric formalism. Their work unifies and generalizes existing results on quantum speed limits and provides instances of novel bounds that are tighter than any established one based on the conventional quantum Fisher information.
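For orientation, the two classic quantum speed limits that this family of bounds generalizes are the Mandelstam-Tamm and Margolus-Levitin bounds on the minimal time to evolve between orthogonal states:

```latex
% Mandelstam-Tamm bound, in terms of the energy variance \Delta E:
\tau \ge \frac{\pi \hbar}{2\, \Delta E}
% Margolus-Levitin bound, in terms of the mean energy above the ground state:
\tau \ge \frac{\pi \hbar}{2\, \langle E \rangle}
% The tighter of the two applies:
\tau \ge \max\!\left( \frac{\pi \hbar}{2\, \Delta E},\; \frac{\pi \hbar}{2\, \langle E \rangle} \right)
```

The paper's information-geometric family contains bounds of this type as special cases and identifies when tighter ones exist.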

Researchers illustrated their findings with relevant examples, demonstrating the importance of choosing different information metrics for open system dynamics, as well as clarifying the roles of classical populations versus quantum coherences, in the determination and saturation of the speed limits. Their results can find applications in the optimization and control of quantum technologies such as quantum computation and metrology, and might provide new insights in fundamental investigations of quantum thermodynamics.


Illustration of geometric quantum speed limits. The ultimate speed limit arises from the length of the shortest geodesic path between two states, which here is the dashed blue curve.

In the future, the researchers plan to experimentally investigate the quantum speed limits derived here using nuclear magnetic resonance (NMR) techniques.

Arxiv - Generalized Geometric Quantum Speed Limits (7 pages)

Update of Death per Terawatt hour by Energy Source

This is an updating of my older articles based upon the updated estimates of air pollution related deaths.

The estimated deaths from coal, oil, natural gas and biofuels are now higher.

Basically, the burning of fuel produces particulates and increases deaths by 20 to 200 times compared to other forms of energy production.

However, deaths from extreme world poverty are worse than those from air pollution. Diseases of poverty kill approximately 14 million people annually versus about 6.45-7 million for outdoor air pollution, although there is overlap (and double counting), as indoor air pollution is mainly a problem for poor people.

Some researchers estimate that particle pollutants killed 3.15 million individuals in 2010, with strokes (cerebrovascular disease) and heart attacks (ischemic heart disease) contributing most heavily. Analysis of ozone-related mortality gave a total estimate of 3.30 million people dying prematurely in 2010. About 50-90% of the particulates are from energy and transportation. The rest is from volcanoes, construction (much of it construction in China), road dust, large fires and dust storms.

Particulate and ozone levels have increased by about 10% since 2010, so the combined 6.45 million deaths would be about 7.0 million in 2016. However, about half of the particulates are from non-energy and non-transportation sources, and there are some other sources of ozone. That leaves roughly 5.0 million total deaths from energy generation, industrial and transportation usage.
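The arithmetic behind those estimates can be laid out explicitly. All inputs below are the article's own rough figures, not precise epidemiology; an effective ~70% attribution to energy, industry and transport reproduces the 5 million figure:

```python
# Sketch of the article's arithmetic using its own rough figures.
particulate_2010 = 3.15e6   # premature deaths/yr from particulates (2010)
ozone_2010 = 3.30e6         # premature deaths/yr from ozone (2010)
growth = 1.10               # ~10% rise in both since 2010

total_2016 = (particulate_2010 + ozone_2010) * growth
# An effective ~70% attribution to energy, industry and transport
# (after removing non-energy particulates and other ozone sources)
# reproduces the article's 5 million figure:
energy_linked = total_2016 * 0.70

print(f"total 2016: {total_2016 / 1e6:.1f} million")
print(f"energy/industry/transport-linked: {energy_linked / 1e6:.1f} million")
```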

India's pollution board identified road dust as the biggest contributor (52.5%) to particulate matter in Delhi’s air, followed by industries (22.1%). The study attributed only 6.6% of particulate emissions to vehicles. For NOx, the study found industries contributed 79% and vehicles 18%; vehicles were the main source for CO and hydrocarbons: 59% and 50% respectively. India's vehicles are mainly not electric, so I will include the dust generated by an oil-powered vehicle as part of deaths per TWh.

Outdoor air pollution is linked to 1.6 million deaths per year in China

My original deaths per TWh article was written in 2008, so the references were looking at data from about 2005 and 2006.

ExternE, the European study on energy deaths, was a major source.

ExternE numbers, which were for Europe, would underestimate energy deaths for the world.
I calculated world and China numbers for coal using World Health Organization numbers.

I had a 2011 lifetime deaths per twh article

I looked at how to lower deaths per TWh

Here is a review of the peer reviewed literature on air pollution deaths

Prof Michael Greenstone has some newer studies with higher numbers

Outdoor particulate air pollution kills over 3 million people.

In 2012, world primary energy supply amounted to 155,505 terawatt-hours (TWh) or 13,371 Mtoe, while world final energy consumption was 104,426 TWh, or about 32% less than the total supply. World final energy consumption includes products such as lubricants, asphalt and petrochemicals which have chemical energy content but are not used as fuel. This non-energy use amounted to 9,404 TWh (809 Mtoe) in 2012.

The world's electricity consumption was 18,608 TWh in 2012. This figure is about 18% smaller than the generated electricity, due to grid losses, storage losses, and self-consumption from power plants (gross generation). Cogeneration (CHP) power stations use some of the energy that is otherwise wasted for heating buildings or in industrial processes.



Taking the TWh generation numbers for the world and for China (for coal):

ENERGY SOURCE        DEATHS FATAL/TWH     TWH NOTES

----------------- --------- --------- ------- -------------------------------------
Coal – world avg. 2,200,000    244.00   9,000 (10% world energy, 41% of electricity.)
Coal – China      1,300,000    325.00   4,000  Utilizing heavily-manual practices

Coal – USA                      10.00         Mostly open-pit and u/g machine
Oil               2,400,000     52.00  42,000 (40% of world energy, 4.4% of electricity)
Natural Gas         300,000     20.00  15,000 (15% of world energy, 11% of electricity)

Biofuel/Biomass                 50.00         
Peat                            50.00         
Solar (rooftop)          12      0.1      110  (1.0% of world electricity)
Wind                    105      0.15     700  (2.8% of world electricity)

Hydro                   400      0.10   4,000 (EU deaths, 2.2% of world energy)
Hydro + Banqiao       4,000      1.00   4,000 (~4000 TWh/yr + 171,000 Banqiao dead)
Nuclear                 104      0.04   2,600 (3% of world energy, 10% of electricity)

----------------- --------- --------- ------- -------------------------------------
World             5,000,000     47    105,000 Terawatt-hours
Unaccounted for      95,400             1,500 TWh = 6.00% … fatalities prorated
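The deaths-per-TWh column is simply estimated deaths divided by generation. A quick check against three rows of the table above, with the deaths and TWh figures copied from the table:

```python
# Deaths per TWh = estimated deaths / TWh generated, checked against
# three rows of the table above.
rows = {
    "Coal - world": (2_200_000, 9_000),   # (deaths, TWh)
    "Natural Gas":  (300_000, 15_000),
    "Nuclear":      (104, 2_600),
}
for name, (deaths, twh) in rows.items():
    print(f"{name}: {deaths / twh:.2f} deaths/TWh")
```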

China will use about 4.5 billion tons of coal this year. Previously, official figures underreported coal usage by 18%. This is triple the level in 2000.
There are no good numbers for deaths from solar and wind; they are not well tracked. Occupational deaths from Middle East and Russian oil are easy to find publicly.

World coal usage is about 9 billion tons.

June 02, 2016

Navies will likely need to start with smaller 5 megajoule railguns

University of North Carolina physicist Mark Gubrud says the main limitation of the railgun is how much energy per shot you can deliver to the projectile and sabot without destroying the rails too fast. Patrick Tucker at Defense One details the challenges facing railguns using more than 5 megawatts of power per shot.

This is about 5 megajoules, depending on the energy loss. “All that plasma that you see when the gun erupts, that’s material from the rails and sabot being vaporized at the sliding contact".
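Energy per shot relates to projectile mass and muzzle velocity through the kinetic energy formula E = ½mv². As an illustration (the 10 kg projectile mass is an assumption, not a figure from the article), 5 MJ delivered to a 10 kg round corresponds to a muzzle velocity of 1 km/s:

```python
import math

# Muzzle energy vs. velocity: E = 0.5 * m * v^2, so v = sqrt(2E/m).
# The 10 kg projectile mass is an illustrative assumption; actual
# railgun projectile masses are not given in the article.
def muzzle_velocity(energy_j, mass_kg):
    """Muzzle velocity in m/s for a given kinetic energy and mass."""
    return math.sqrt(2 * energy_j / mass_kg)

v = muzzle_velocity(5e6, 10.0)   # 5 MJ into a 10 kg projectile
print(f"muzzle velocity: {v:.0f} m/s")
```

Pushing to larger energies at fixed mass raises velocity only with the square root, which is part of why rail erosion per shot grows so punishing at higher energies.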

Much of the blast seen in this railgun shot is vaporized rail material

The railgun pulse power system by Raytheon has diamond heat spreaders and manifolds that have a high density of embedded micro-fins to spread the heat very efficiently away from the small point heat sources and move them through things like graphite and aluminum graphite to the edges of cold rails, so that coolant can be circulated at that point and brought back to a chiller.

3D manufacturing could be used for packaging the inductor and capacitor into a very small size while also providing some of the insulating capabilities that you can get by printing dielectrics.

Engineering at the molecular scale, and future breakthroughs in dielectric materials, will enable more efficient railguns that could make their way onto a wider variety of naval platforms.

World oil demand and supply could hit 100 million barrels per day by 2018 or 2019

The International Energy Association has a mid-term world oil forecast

By 2021 non-OECD Asia will be importing 16.8 million barrels per day of crude oil and products, a rise of 2.8 million barrels per day compared to 2015. The People’s Republic of China (hereafter ‘China’), remains central to this growth, partly because of the underlying rise of oil demand but also due to its build-up of strategic reserves which will reach at least 500 million barrels by 2020.


World oil demand seems to be running ahead of the mid-term projection by about 1 million barrels per day. If this trend holds, then world oil supply and demand could cross 100 million barrels per day in 2018 or 2019.

World oil supply hit 97.2 million barrels per day at the end of 2015.
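The crossing date follows from a simple linear extrapolation of that 97.2 million barrels/day figure. The annual growth rates below are illustrative assumptions, not IEA numbers; slower growth pushes the crossing toward 2019:

```python
# Linear extrapolation from the article's 97.2 million barrels/day
# at end-2015. The growth rates are illustrative assumptions.
def crossing_year(target=100.0, start=97.2, start_year=2015.0, growth=1.0):
    """Year at which supply reaches `target` Mb/d at constant annual growth."""
    return start_year + (target - start) / growth

for g in (1.0, 0.7):
    print(f"growth {g} Mb/d/yr -> crosses 100 Mb/d around {crossing_year(growth=g):.0f}")
```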


Spacex to send 3 unmanned missions to Mars that would lead to a manned Mars mission in 2024

SpaceX will send an unmanned mission to Mars using the Dragon V2 capsule, starting in 2018, and then launch a mission to Mars every 26 months. The plan is for the first manned Mars mission in 2024.

The plan for 2018 is to deliver a sample-return Mars rover to the Martian surface while also testing techniques for entering the Martian atmosphere with equipment a human crew could eventually use.


Elon Musk will present his architecture for Mars colonization at the International Astronautical Congress in September.

On 27 April 2016 SpaceX announced that they would be going forward with the uncrewed mission for a 2018 launch and NASA will be providing technical support. NASA expects to spend "on the order of $30 million" helping SpaceX send the capsule to Mars.

A modified Dragon V2 capsule may perform all the necessary entry, descent and landing (EDL) functions in order to deliver payloads of 1 tonne (2,200 lb) or more to the Martian surface without using a parachute; the use of parachutes is not feasible without significant vehicle modifications.

It is calculated that the capsule's own aerodynamic drag may slow it sufficiently for the remainder of the descent to be within the capability of the SuperDraco retro-propulsion thrusters; 1,900 kg of propellant would provide the Δv required for a soft landing.
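That propellant figure can be sanity-checked with the Tsiolkovsky rocket equation, Δv = Isp · g0 · ln(m0/mf). The dry mass, payload and SuperDraco specific impulse below are rough public estimates (assumptions), not figures from the article:

```python
import math

# Tsiolkovsky rocket-equation check of the retro-propulsive landing
# budget. Dry mass, payload and Isp are rough public estimates
# (assumptions), not figures from the article.
G0 = 9.81            # m/s^2, standard gravity
ISP = 235.0          # s, approximate SuperDraco specific impulse
dry_mass = 6400.0    # kg, approximate Dragon V2 dry mass
payload = 1000.0     # kg, the ~1 tonne payload cited
propellant = 1900.0  # kg, from the article

m_final = dry_mass + payload
m_initial = m_final + propellant
delta_v = ISP * G0 * math.log(m_initial / m_final)
print(f"available delta-v: {delta_v:.0f} m/s")
```

Under these assumptions the budget comes out on the order of 500 m/s, a plausible terminal-descent allowance once aerodynamic drag has shed most of the entry velocity.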

This approach should make it possible to land the capsule at much higher Martian elevations than could be done if a parachute was used, and with 10 km (6.2 mi) landing accuracy. The engineering team continues developing options for payload integration with the Dragon capsule. Potential landing sites would be polar or mid-latitude sites with proven near-surface ice.

A study of a potential 2021 Red Dragon mission suggested that it could offer a low-cost way for NASA to achieve a Mars sample return for study. The Red Dragon capsule would be equipped with the system needed to return samples gathered on Mars, including a Mars Ascent Vehicle (MAV), an Earth Return Vehicle (ERV), and hardware to transfer a sample collected in a previously landed rover mission, such as NASA's planned Mars 2020 rover, to the ERV. The ERV would transfer the samples to high Earth orbit, where a separate future mission would pick up the samples and de-orbit to Earth.



China will soon deploy J-20 stealth fighter and Y-20 transport plane as part of shrinking air technology gap

China’s J-20 stealth fighter-bomber and Y-20 tactical transport plane will be put into service “in the near future.” The official Chinamil.com.cn website made this assertion while denying reports that the J-20 stealth fighter had entered PLA Southern Theater Command service and that comprehensive training between the J-20 and J-10 fighters is already under way.

The J-20’s stealth features have been copied from the US F-22 and the Y-20 transport is a replica of the Boeing C-17. China still has inferior jet engines.

On 6 February 2016 the fifth Y-20 prototype was flown for the first time, and pictures of it in flight appeared on Chinese military webpages. Other known prototypes carry identification numbers 781, 783 and 785. On 27 January 2016, former Chinese test pilot Xu Yongling reported in a Xinhua article that Chinese aviation industry officials had stated that the Y-20 "completed development" at the end of 2015. Xu, who participated in the Chengdu Aircraft Corporation J-10 fighter test programme, suggested that the Y-20 could enter service with the People's Liberation Army Air Force (PLAAF) in 2016.

The first Y-20 prototype is powered by four 12-ton-thrust Soloviev D-30KP-2 engines; early production units are likely to be similarly powered. The Chinese intend to replace the D-30 with the 14-ton-thrust WS-20, which is required for the Y-20 to achieve its maximum cargo capacity of 66 tons. The Shenyang WS-20 is derived from the core of the Shenyang WS-10A, an indigenous Chinese turbofan engine for fighter aircraft. In 2013, the Shenyang Engine Design and Research Institute was reportedly developing the SF-A, a 28,700-pound-thrust engine, for the Y-20 and the Comac C919. The SF-A is derived from the core of the WS-15. Compared to the WS-20, the SF-A is a conservative design that does not seek to match the technology of more modern engines.



The J-20 stealth fighter and Y-20 transport plane are undergoing relevant test flights based on schedules

The Pentagon’s latest annual report to the US Congress on China's military and security progress indicates that China is closing the military technology gap in several areas.

The J-20 and the FC-31 are fifth-generation stealth aircraft with high maneuverability, low-observability and internal weapons bays, capable of operating in a network-centric environment. They could enter service as early as 2018, although the report is undecided on whether the FC-31 is for export only. Both of them have radars with advanced tracking and targeting capabilities, and protection against electronic countermeasures.


The PLAAF “is rapidly closing the gap with western air forces across a broad spectrum of capabilities,” the report assesses. These include command-and-control, electronic warfare and datalinks. The J-10B, the latest version of the indigenous fighter that was unveiled in 2007, is expected to enter service shortly. Four J-11Bs (the Chinese-produced Su-27) have been deployed to one of the islands in the South China Sea that China has been expanding by land reclamation. An indigenous version of the Russian Kh-31P anti-radiation missile is being fielded on Chinese fighter-bombers.

The PLAAF has acquired three Ilyushin Il-78 aerial refueling aircraft from Ukraine to augment the domestically produced H-6U tanker. Flight tests of the Y-20 large airlifter continue, and it could also be produced as a tanker, as well as an AEW (airborne early warning) aircraft.

Russia and India have timeline for mach 7 hypersonic missile and will export existing mach 3 missile to about a dozen countries

A prototype of the BrahMos hypersonic missile is expected to be manufactured in 2024 according to the Marketing Director of the Russian-Indian BrahMos Aerospace Company Praveen Pathak.

  • Speed of the existing BrahMos supersonic missile will be increased to Mach 4 by 2020.
  • Development of a prototype hypersonic BrahMos will begin in 2022.
  • The prototype will be flight tested starting in 2024.
  • A Mach 7+ hypersonic missile should then be fielded from 2029-2034.

The future hypersonic missile was expected to have the same weight and size as the existing BrahMos missile so that it can be used with the same platforms and launchers.

The BrahMos-II is expected to have a range of 290 kilometers (180 mi; 160 nmi) and a speed of Mach 7.
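The practical payoff of the higher Mach number is reaction time. Using a nominal sea-level speed of sound of 340 m/s for illustration (the actual value varies with altitude), the stated 290 km range takes under five minutes at Mach 3 but only about two minutes at Mach 7:

```python
# Flight time over the stated 290 km range at Mach 3 vs Mach 7.
# 340 m/s is a nominal sea-level speed of sound; the true value
# varies with altitude, so these are illustrative figures.
SPEED_OF_SOUND = 340.0  # m/s
RANGE_M = 290_000.0     # the stated 290 km range

for mach in (3, 7):
    t = RANGE_M / (mach * SPEED_OF_SOUND)
    print(f"Mach {mach}: {t:.0f} seconds")
```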

Previously, the hypersonic missile was expected to be ready for testing by 2017.



India and Russia have agreed 'in principle' to export the existing Mach 3 anti-ship cruise missile, BrahMos, to third countries such as the UAE, Vietnam, South Africa and Chile, according to Praveen Pathak, spokesman for BrahMos Aerospace.

Talks with countries like the UAE, Chile, South Africa and Vietnam are in advanced stages. Discussions with several other countries, including the Philippines, South Korea, Algeria, Greece, Malaysia, Thailand, Egypt, Singapore, Venezuela and Bulgaria, have now been taken to the next level.

There have been claims that an upgraded model capable of reaching Mach 6 speed is already being tested.



Jeff Bezos Wants to Build Giant Factories in Space to save the Earth

How do you protect planet Earth? "By going into outer space," Bezos told Walt Mossberg Tuesday. What Bezos meant was something "slightly more measured" than Elon Musk's idea of building colonies on Mars.

Activities that require the most energy should be performed in space, in order to leave Earth clean and habitable for humans.

"Earth will be zoned residential and light industrial. You shouldn't be doing heavy energy on Earth. We can build gigantic chip factories in space," he said.



Bezos also said that we are on the verge of a golden era of computers with true natural language understanding.



Amazon.com and Blue Origin CEO Jeff Bezos envisions “millions” of people living in orbit as his exploration company, Blue Origin, and other commercial ventures develop spacecraft to make travel more widely available.

Investment from wealthy entrepreneurs with a passion for space will usher in a new era that makes leaving the Earth’s atmosphere accessible to anyone, Bezos said Tuesday.

Earlier, he announced that Blue Origin will put $200 million into a new rocket assembly facility and launch site in Cape Canaveral, Florida.

"Our ultimate vision is millions of people living and working in space," Bezos said during a rare, 30-minute interview in Florida with reporters after the Blue Origin announcement.

"We have a long way to go."

The 52-year-old Bezos is the world’s seventh-richest man, with a net worth of around $62 billion, more than four times that of Elon Musk ($13 billion). Elon Musk also wants millions of people in space, although he wants more people in cities on Mars than in orbit.

Bigelow expandable space stations and larger reusable rockets would enable large scale space colonization

Bigelow Aerospace has designed 2100 cubic meter expandable space station modules which might be launchable by a slightly refined Spacex Heavy. Bigelow now has an expandable room attached to the International Space Station.

The larger planned Mars colonization transport (MCT) would be able to launch modules that are three to five times larger.
Fuel could be launched and stored at fuel depots in orbit. This would enable more cargo to be moved to Mars, with refueling in orbit and at other locations in space.


Spacex could launch 100 Bigelow 2100 cubic meter modules for about $1 billion using two reusable Spacex Heavies in as little as one year (one launch per week per rocket).

This would be about 210,000 cubic meters of volume. This would be enough for 2000 people with the same facilities per person as the Hercules resupply depot design.

Spacex could launch 100 Bigelow 6000 cubic meter modules in one year.

This would be 600,000 cubic meters of volume. This would be enough for 6000 people with the same facilities per person as the Hercules resupply depot design.

Reaching 1 million people in orbit would take about 170 of these 6000-person clusters of expandable modules. 6000 people is a bit more than the number of people on a large aircraft carrier. The Mandalay Bay hotel in Las Vegas has 3309 rooms and suites.

1 million people in orbit would be like 170 large, lightweight versions of cruise ships, hotels or aircraft carrier structures.
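The arithmetic above follows from the roughly 100 cubic meters per person implied by the Hercules depot comparison. A sketch using the article's own module counts and headcounts:

```python
# Reconstruction of the article's habitat arithmetic.
module_volume_m3 = 6000
modules_per_cluster = 100
people_per_cluster = 6000   # article's figure for one 600,000 m^3 cluster

volume_per_person = module_volume_m3 * modules_per_cluster / people_per_cluster
clusters_for_million = 1_000_000 / people_per_cluster

print(f"{volume_per_person:.0f} m^3 per person")
print(f"~{clusters_for_million:.0f} clusters for 1 million people")
```

That works out to about 167 clusters, which the article rounds to 170.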

Robotic and additive manufacturing could enable massive frames and massive solar power arrays

Tethers Unlimited is currently developing a revolutionary suite of technologies called "SpiderFab" to enable on-orbit fabrication of large spacecraft components such as antennas, solar panels, trusses, and other multifunctional structures. SpiderFab provides order-of-magnitude packing- and mass- efficiency improvements over current deployable structures and enables construction of kilometer-scale apertures within current launch vehicle capabilities, providing higher-resolution data at lower life-cycle cost.

They have received a $500,000 phase 2 NASA NIAC contract, which follows a $100,000 phase 1 contract to develop the technology.




  • 100 of the 2100 cubic meter stations would be about $50 billion without any volume discount.
  • 100 of the 6000 cubic meter stations might be about $100 billion.
  • Launching with reusable rockets would be about $1 billion.
  • Say $10-20 billion for SpiderFab-constructed solar power dish arrays and structure.
  • $10-20 billion for operations.

The total would be less than the cost of the International Space Station.
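A quick sanity check on the larger scenario's total, taking mid-range values where the figures above give a range. The ~$150 billion figure commonly cited for the ISS's lifetime cost is an outside assumption, not from the article.

```python
# Sum the rough cost figures for the 100 x 6000 m^3 scenario.
# Mid-range values assumed where the article gives a $10-20B range.
costs_billion = {
    "stations (100 x 6000 m^3)": 100,
    "reusable launches": 1,
    "SpiderFab solar arrays and structure": 15,
    "operations": 15,
}
total = sum(costs_billion.values())
print(f"total ~${total}B vs ISS ~$150B")  # comes in under the ISS figure
assert total < 150
```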

Can D-Wave Quantum Computers help save finance and prevent future financial meltdowns from flawed models?

D-Wave Systems and 1QB Information Technologies Inc. (1QBit), a quantum software firm, and financial industry experts today announced the launch of Quantum for Quants (quantumforquants.org), an online community designed specifically for quantitative analysts and other experts focused on complex problems in finance. Launched at the Global Derivatives Trading and Risk Management conference in Budapest, the online community will allow quantitative finance and quantum computing professionals to share ideas and insights regarding quantum technology and to explore its application to the finance industry. Through this community financial industry experts will also be granted access to quantum computing software tools, simulators, and other resources and expertise to explore the best ways to tackle the most difficult computational problems in finance using entirely new techniques.

“Quantum computers enable us to use the laws of physics to solve intractable mathematical problems,” said Marcos López de Prado, Senior Managing Director at Guggenheim Partners and a Research Fellow at Lawrence Berkeley National Laboratory's Computational Research Division. “This is the beginning of a new era, and it will change the job of the mathematician and computer scientist in the years to come."




Marcos Lopez de Prado argues that some of the most popular optimization techniques used in finance are in fact detrimental. Take mean-variance optimization (MVO), the most commonly used portfolio construction technique, with its multiple upgrades and variations throughout the past 60 years. A great majority of academic papers apply MVO when the authors are faced with the dilemma of building a diversified portfolio. One would expect that such a venerable technique would be among the best performing portfolio construction methods, right? Think again.

A number of studies have demonstrated that MVO portfolios underperform the so-called "naïve portfolio," that is, the portfolio that splits assets equally among holdings. And yet MVO is taught in every business school as one of the key results in finance. Shouldn't students be warned that MVO is detrimental, relative to a naïve allocation? How can a Nobel prize-winning theory lose to the most rudimentary scheme?
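A minimal sketch (not the cited studies' methodology) of one mechanism behind this result: MVO weights are hypersensitive to estimation error in expected returns, while the naive 1/N allocation ignores the estimates entirely. All numbers below are toy assumptions.

```python
# Toy demonstration: MVO instability under estimation error vs the 1/N portfolio.
import numpy as np

rng = np.random.default_rng(0)
n = 5
true_mu = np.full(n, 0.05)            # all assets identical in truth
cov = 0.04 * np.eye(n) + 0.01        # identical variance, mild correlation

def mvo_weights(mu, cov):
    """Unconstrained tangency (max-Sharpe) weights, normalized to sum to 1."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

naive = np.full(n, 1 / n)
# With the true inputs, MVO agrees with 1/N (all assets are identical)...
assert np.allclose(mvo_weights(true_mu, cov), naive)
# ...but small estimation noise in the means swings the weights widely.
noisy_mu = true_mu + rng.normal(0, 0.02, n)
w_noisy = mvo_weights(noisy_mu, cov)
print("naive:", naive)
print("MVO on noisy means:", np.round(w_noisy, 3))
print("max deviation from 1/N:", np.abs(w_noisy - naive).max())
```

Out of sample, that instability turns into turnover and concentration risk that the naive allocation simply does not have.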

This is not a unique example. There are plenty of revered financial techniques that fail to perform as advertised. Cointegration models are known to lack robustness, in the sense that small changes in a few observations can lead to entirely different forecasts. This is particularly problematic in a discipline like Finance, where the signal-to-noise ratio is low and measurements are far from precise. Still, unstable econometric methods are routinely used by economists to forecast macro variables and by the Federal Reserve to inform their life-changing decisions.

In fairness, these methods were designed and vetted for academic consumption only. They are toy models, to be used for in-sample philosophical disquisitions, not in out-of-sample industrial applications.

Real-world applications require a degree of complexity and robustness that simple models cannot satisfy.

How Quantum Computing May Save Finance

There are numerous instances in which machine learning methods deliver better results than classical calculus or linear algebra applications. But machine learning often deals with NP-complete or NP-hard problems, which demand overwhelming computational power.

Building Diversified Portfolios that Outperform Out-of-Sample, by Marcos Lopez de Prado (Guggenheim Partners, LLC; Lawrence Berkeley National Laboratory; Harvard University - RCC)

The promise of financial quantum computing (QC) is that soon we will not need to dumb down models or rely on heuristics. We will develop models cognizant of reality's complexity, and solve them in their NP-complete grandeur. Think about it: if HRP (hierarchical risk parity) can improve your out-of-sample Sharpe ratio by 31% over MVO's, what will the improvement be once you replace the DC+Heuristics tandem with QC+Completeness? Perhaps 50%? That means boosting your Sharpe ratio from 1.50 to 2.25, quite worth the management fee.
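The Sharpe-ratio arithmetic above, made explicit. The 31% HRP improvement is from the cited paper; the 50% QC improvement is the article's own speculation.

```python
# Sharpe-ratio arithmetic from the paragraph above.
base_sharpe = 1.50
hrp_improvement = 1.31           # HRP's reported 31% out-of-sample gain over MVO
speculative_qc_improvement = 1.50  # the article's speculative 50% figure
print(base_sharpe * hrp_improvement)                          # HRP case
print(round(base_sharpe * speculative_qc_improvement, 2))     # the 2.25 quoted
```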

June 01, 2016

China's domestic wind turbines are 50% inferior to US wind turbines

Overall wind turbine quality in the United States is higher than in China. China's domestically produced wind turbines are currently inferior.

The total capacity of wind farms installed in China exceeds that in the US, which should lead to greater wind-generated electricity in China relative to the US (67.7%); on the other hand, the advantage of greater installed capacity in China is more than offset by the combined effects of delayed grid connection (negative 50.3%), less favorable wind resources (negative 17.9%), and lower quality of wind turbines (negative 50.2%). China's high curtailment rate for wind power (negative 49.3%) further reduces the actual wind power generation. In total, wind-generated electricity in China is 39.3 TWh less than that in the US. With other factors fixed, curtailment of wind power in China would contribute to such shortage by 19.37 TWh, the magnitude of which is comparable to the total electricity generated from wind in Canada in 2012.
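The curtailment figures quoted above can be checked directly: curtailment alone accounts for roughly half of the China-vs-US wind generation shortfall.

```python
# Curtailment's share of the generation shortfall, per the figures above.
shortfall_twh = 39.3      # total China shortfall vs the US
curtailment_twh = 19.37   # contribution of curtailment, other factors fixed
share = curtailment_twh / shortfall_twh
print(f"curtailment share of shortfall: {share:.0%}")  # roughly half
```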





The inferior domestic wind turbines can be addressed with a short-term switch to more international suppliers, while focusing on domestic research and development efforts and technology transfer agreements with other nations in the long term.

Pumped hydro storage (PHS) facilities could provide an opportunity to store electricity when supply exceeds demand. Current PHS capacity in China amounts to 16.9 GW, and is projected to increase to 50 GW by 2020. Much of this additional capacity is planned for the eastern and southern regions of the country, with only 16 GW for the north (China's wind farms are mainly in the north). Increasing the development of PHS in the north could make an important contribution to limiting wind curtailment.

China is planning to ramp up non-pumped hydro storage to 14.7 GW by 2020. This will likely be battery systems.

Neural Dust - ultra small brain interfaces - is being used to make cyborg insects

As the computation and communication circuits we build radically miniaturize (i.e. become so low power that 1 picoJoule is sufficient to bang out a bit of information over a wireless transceiver; become so small that 500 square microns of thinned CMOS can hold a reasonable sensor front-end and digital engine), the barrier to introducing these types of interfaces into organisms will get pretty low. Put another way, the rapid pace of computation and communication miniaturization is swiftly blurring the line between the technological base that created us and the technological base we've created. Michel Maharbiz, University of California, Berkeley, is giving an overview (June 16, 2016) of recent work in his lab that touches on this concern. Most of the talk will cover their ongoing exploration of the remote control of insects in free flight via implantable radio-equipped miniature neural stimulating systems; recent results with neural interfaces and extreme miniaturization directions will be discussed. If time permits, he will show recent results building extremely small neural interfaces they call "neural dust," work done in collaboration with the Carmena, Alon and Rabaey labs.

Radical miniaturization has created the ability to introduce a synthetic neural interface into a complex, multicellular organism, as exemplified by the creation of a “cyborg insect.”

“The rapid pace of computation and communication miniaturization is swiftly blurring the line between technological base we’ve created and the technological base that created us,” explained Dr. Maharbiz. “These combined trends of extreme miniaturization and advanced neural interfaces have enabled us to explore the remote control of insects in free flight via implantable radio-equipped miniature neural stimulating systems.”







Strapping tiny computers and wireless radios onto the backs of giant flower beetles and recording neuromuscular data as the bugs flew untethered, scientists determined that a muscle known for controlling the folding of wings was also critical to steering. The researchers then used that information to improve the precision of the beetles’ remote-controlled turns.

The beetle backpack is made up of a tiny, off-the-shelf microcontroller and a built-in wireless receiver and transmitter. Six electrodes are connected to the beetle’s optic lobes and flight muscles. The entire device is powered by a 3.9-volt micro lithium battery and weighs 1 to 1.5 grams.

A recent 2016 paper is Application of canonical polyadic decomposition for ultrasonic interrogation of neural dust grids: a simulation study

One of the major engineering challenges in the current 'century of the brain' is the development of chronic neuromonitoring techniques, i.e., devices that allow the brain to be monitored 24/7 over a period of 10-20 years or longer. Such a technology would be an important breakthrough for brain-machine interfaces (BMIs), improving the quality of life of people suffering from debilitating neurological conditions. Recently, a neural recording platform based on a distributed ultrasonic backscattering system has been proposed, referred to as 'neural dust' (ND). The ND system consists of a large number of free-floating ND motes (NDMs), which are implanted at 3mm depth in the cortex in a grid with a less than 100 µm pitch. These NDMs measure extracellular action potentials or 'spikes' generated by neurons in their neighborhood.

Sub-dural ultrasound (US) transceiver modules, referred to as ’interrogators’, are implanted on top of the cortex (without penetrating it), to collect the spike signals recorded by the NDMs. The interrogators send a US carrier wave to the targeted NDM, which then modulates the recorded neural signal onto the reflected carrier. The interrogator then demodulates the reflected wave and sends the result to an external transceiver through near-field electromagnetic communication.

US communication is based on passive backscattering, which previously allowed only 10% of the neural dust motes to be interrogated within a desired timeframe. An alternative approach transmits a few random (non-focal) US beam patterns in Tx mode towards the entire grid of NDMs simultaneously, resulting in a grid-wide MIMO source separation problem.

Tesla Motors one million car per year target for 2020 would make it 20th largest in the world and about half of BMW

At the 2016 annual Tesla Motors conference call, Elon Musk said Tesla plans to advance the Model 3 build plan substantially. Tesla is aiming to get to the half million unit per year run rate in 2018 instead of 2020. This is based on the tremendous amount of interest received for the Model 3, which Musk thinks is actually a fraction of the ultimate demand once people fully understand what the car is capable of and are able to do a test drive.

Tesla would then plan to have around a 1 million car per year production rate in 2020.

Reaching the 2020 target of a million cars per year would make Tesla Motors about the 20th largest car maker by production volume.


In 2015, BMW had a production volume of about 2.25 million.

In 2020, Tesla would be nearing the production volume of Mazda and Mitsubishi.

Tesla autonomous driving gathering twice as many miles every day as Google's entire multi-year self driving project

The combined fleet of Tesla Model S and X are driving more than 3 million miles a day. So in just one day, Tesla cars do about twice the distance that Google has done in the entire history of its self-driving car project. This gives Tesla an advantage in the race for sustainable transport and accident-free driving.

Tesla plans to log billions of miles showing that the car is unequivocally safer in autonomous mode compared to manual mode in a wide range of circumstances in countries all around the world with different rules of the road and ways of behavior. And it'll have to be something statistically significant like billions of miles.
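At the quoted fleet data rate, "billions of miles" accumulate quickly. The 1-billion-mile threshold below is an illustrative assumption for a statistically meaningful sample, not a figure from the article.

```python
# Time to accumulate 1 billion fleet miles at the article's quoted rate.
miles_per_day = 3_000_000          # stated Model S + X fleet rate
days_to_billion = 1_000_000_000 / miles_per_day
print(f"~{days_to_billion:.0f} days (~{days_to_billion / 365:.1f} years) per billion miles")
```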

Tesla will argue for autonomous driving, but Tesla will not argue against manual driving. Elon Musk believes people should have the freedom to choose to do what they want to do. Yes, sometimes those things are dangerous, but freedom is important, and in his view, if people want to drive, even if it's dangerous, they should be allowed to drive. But then the autonomous safety systems should be in there such that even in manual mode, the car will still aid the driver in avoiding an accident.





Tesla plans to make 500,000 cars per year by 2018 and one million cars per year by 2020




Tesla would need to triple the total planned battery output of the Gigafactory to ~105 GWh of cells and ~150 GWh of battery packs – or over 3 times the current total li-ion battery production worldwide. The new potential total capacity would be based on the current planned factory of 13 million sq-ft – with no expansion needed. Tesla and Panasonic, the automaker’s strategic partner in the Gigafactory, will manufacture a new 20700 cell format – compared to the current 18650. The battery cells will be a bit larger than the current cells.

One third of the Gigafactory production would go toward Tesla Energy products for energy storage, Powerwall and Powerpacks, and two thirds toward battery packs for Tesla vehicles.
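The Gigafactory arithmetic implied by the two paragraphs above can be laid out explicitly. The "original plan" figures are inferred as one third of the tripled output, not stated directly in the article.

```python
# Gigafactory output split implied by the paragraphs above.
cells_gwh = 105   # tripled planned cell output
packs_gwh = 150   # tripled planned pack output
storage_share = packs_gwh / 3       # one third for Tesla Energy products
vehicle_share = packs_gwh * 2 / 3   # two thirds for vehicle battery packs
print(f"storage: {storage_share:.0f} GWh, vehicles: {vehicle_share:.0f} GWh")
print(f"implied original plan: {cells_gwh / 3:.0f} GWh cells, {packs_gwh / 3:.0f} GWh packs")
```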

Elon Musk said that he wants to turn the Gigafactory into a product itself: a machine building machines.

Tesla has a lot more land around the Gigafactory, so further expansion at the site is planned as well.

The higher Model 3 volume is the biggest change strategically. Tesla is going to be hell-bent on becoming the best manufacturer on earth. Thus far, Elon Musk thinks they have done a good job on design and technology of their products.

Tesla should comfortably reach 89,000 car deliveries this year.

The date Tesla setting with suppliers to get to a volume production capability with the Model 3 is July 1, 2017.

Now, will Tesla actually be able to achieve volume production on July 1 next year? Of course not. The reason is that even if 99% of the internally produced items and supplier items are available on July 1, they still cannot produce the car, because you cannot produce a car that is missing 1% of its components. Nonetheless, they need to both internally and with suppliers take that date seriously.

Tesla aims to produce 100,000 to 200,000 Model 3s in the second half of 2017. If you place your order now, there's a high probability you will actually receive your car in 2018.

The design of the Model 3 lends itself to high-volume production very efficiently. It is being designed for easy high volume manufacturing.

The Tesla battery cost is now under $190 per kilowatt hour. Battery costs will go down another 30% when the Gigafactory is online.
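The battery-cost arithmetic above: a further 30% reduction from the current sub-$190/kWh figure once the Gigafactory is online.

```python
# Projected cell cost after the Gigafactory's stated 30% reduction.
current_cost = 190                     # $/kWh, the article's stated upper bound
gigafactory_cost = current_cost * (1 - 0.30)
print(f"~${gigafactory_cost:.0f}/kWh")
```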

Tesla has almost completed the design of the Model 3. In fact, the prototype that was driving at the [Motor Event] at the end of March was actually using the production drivetrain. So they feel pretty good about engineering completion of the last items for the Model 3, probably within six to eight weeks.

May 31, 2016

Bayer And Planetary Resources Intend To Collaborate To Improve Agriculture With Space Data

Bayer and the aerospace technology company Planetary Resources, based in Redmond, Washington, USA, have signed a memorandum of understanding about the development of applications and products based on satellite images. Bayer intends to purchase these data from Planetary Resources to create new agricultural products and improve existing ones. The collaboration will be part of the Digital Farming Initiative at Bayer.


Planetary Resources’ space-based Earth observation constellation Ceres will provide a new level of crop intelligence for the global agricultural industry.

Automating DNA origami makes it easy to build DNA nanoparticles

Researchers can build complex, nanometer-scale structures of almost any shape and form, using strands of DNA. But these particles must be designed by hand, in a complex and laborious process.

This has limited the technique, known as DNA origami, to just a small group of experts in the field.

Now a team of researchers at MIT and elsewhere has developed an algorithm that can build these DNA nanoparticles automatically.

A novel synthesis approach could allow the DNA origami technique to be used to develop nanoparticles for a much broader range of applications, including scaffolds for vaccines, carriers for gene editing tools, and in archival memory storage.

Unlike traditional DNA origami, in which the structure is built up manually by hand, the algorithm starts with a simple, 3-D geometric representation of the final shape of the object, and then decides how it should be assembled from DNA, according to Mark Bathe, an associate professor of biological engineering at MIT, who led the research.

“The paper turns the problem around from one in which an expert designs the DNA needed to synthesize the object, to one in which the object itself is the starting point, with the DNA sequences that are needed automatically defined by the algorithm,” Bathe says. “Our hope is that this automation significantly broadens participation of others in the use of this powerful molecular design paradigm.”



Science - Designer nanoscale DNA assemblies programmed from the top down

Liquid armor and tiny high power engines will be in 2018 special forces exoskeleton prototypes

US Special Ops Command plans to have some initial TALOS exoskeleton suit prototypes by 2018.

Progress is being made on exoskeletons for US special forces. The exoskeletons are designed to increase strength and protection and help keep valuable operators alive when they kick down doors and engage in combat.

The technologies currently being developed include

  • body suit-type exoskeletons
  • strength and power-increasing systems and
  • additional protection.



Liquid Piston high efficiency engine

Liquid Piston is developing several small rotary internal combustion engines developed to operate on the High Efficiency Hybrid Cycle (HEHC). The cycle, which combines high compression ratio (CR), constant-volume (isochoric) combustion, and overexpansion, has a theoretical efficiency of 75% using air-standard assumptions and first-law analysis. This innovative rotary engine architecture shows a potential indicated efficiency of 60% and brake efficiency of over 50%. As this engine does not have poppet valves and the gas is fully expanded before the exhaust stroke starts, the engine has potential to be quiet. Similar to the Wankel rotary engine, the ‘X’ engine has only two primary moving parts – a shaft and rotor, resulting in compact size and offering low-vibration operation. Unlike the Wankel, however, the X engine is uniquely configured to adopt the HEHC cycle and its associated efficiency and low-noise benefits. The result is an engine which is compact, lightweight, low-vibration, quiet, and fuel-efficient.

  • High power density – up to 2 HP / Lb (3.3 kW / kg)
  • 30% smaller and lighter for spark-ignition (SI) gasoline engines
  • Up to 75% smaller and lighter for compression-ignition (CI) diesel engines
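The quoted power density can be checked as a unit conversion: 2 HP/lb does come out to about 3.3 kW/kg.

```python
# Unit-conversion check on the quoted power density figures.
HP_TO_KW = 0.7457   # mechanical horsepower in kilowatts
LB_TO_KG = 0.4536   # pounds in kilograms
kw_per_kg = 2 * HP_TO_KW / LB_TO_KG
print(f"2 HP/lb = {kw_per_kg:.1f} kW/kg")  # matches the quoted 3.3 kW/kg
```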

In an exoskeleton the engines would only be run to recharge batteries.





Liquid Armor

A SOCOM statement said some of the potential technologies planned for TALOS research and development include
  • advanced armor,
  • command and control computers,
  • power generators, and
  • enhanced mobility exoskeletons.

The TALOS program is costing an estimated $80 million.

TALOS will have a physiological subsystem that lies against the skin and is embedded with sensors to monitor core body temperature, skin temperature, heart rate, body position and hydration levels.

MIT and Poland working on liquid body armor

MIT is developing a next-generation kind of armor called “liquid body armor.”

Liquid body armor transforms from liquid to solid in milliseconds when a magnetic field or electrical current is applied.

Scientists at a Polish company that produce body armor systems are working to implement a non-Newtonian liquid in their products.

The liquid is called Shear-Thickening Fluid (STF). STF does not conform to the model of Newtonian liquids, such as water, whose resistance to flow is proportional to how fast the fluid is forced to move and changes with temperature. Instead, STF hardens upon impact at any temperature, providing protection from penetration by high-speed projectiles and additionally dispersing energy over a larger area.

Volvo Trucks’ concept truck cuts fuel consumption by more than 30 %

Almost one-third lower fuel consumption. Volvo Trucks’ new concept vehicle shows how it is possible to drastically boost productivity in long-haul operations. Among the secrets behind these remarkable fuel savings are aerodynamic design and lower kerb weight.

With support from the Swedish Energy Agency, Volvo Trucks has developed a new concept vehicle, the Volvo Concept Truck. It is the result of a five-year research project aimed at creating more energy-efficient vehicles. The new concept truck cuts fuel consumption by more than 30 %.

One of the key factors behind the low fuel consumption is the massive 40 % improvement in aerodynamic efficiency that has benefited both the tractor and trailer.

"We've modified the entire rig and optimised it for improved aerodynamics as much as possible. For instance, we use cameras instead of rear-view mirrors. This cuts air resistance, so less energy is needed to propel the truck," explains Åke Othzen, Chief Project Manager at Volvo Trucks.

In addition to the aerodynamic improvements, the concept vehicle is fitted with newly developed tyres with lower rolling resistance. The trailer weighs two tonnes less than the reference trailer, which translates into either lower fuel consumption or the possibility of higher payload. The project also includes an improved driveline. The rig was test driven on Swedish roads in autumn 2015.

Work on the Volvo Concept Truck has been in progress since 2011. The aim is to improve the efficiency of long-haul truck transportation by 50 %.



The aerodynamic improvements

  • Optimised aerodynamic trailer and tractor.
  • In order to reduce air resistance, the conventional rear-view mirrors have been replaced by cameras, which have the added advantage of offering better visibility and increased safety.
  • Aerodynamically optimised chassis side-skirts cover the rear wheels on the tractor and all the trailer wheels.
  • Aerodynamic spoilers extend the trailer and cut air resistance.
  • Optimised air flow for the engine's cooling system
  • Minimised air resistance at the front of the tractor, the wheel housings and entry steps.


Eagles and falcons used to intercept drones: 3000 year old falconry applied to modern drone problems

Dutch company Guard from Above intercepts rogue drones using birds of prey.

Not every location faces the same threat from drones, and a drone doesn't have to be a threat at all. Is a drone carrying an HD camera? That could be a threat, but it doesn't always pose one.

The first step is always to do a threat analysis.
The next step is to select the best combination of counter-drone solutions: detection, classification and of course neutralisation (interception).
After a drone incident it is important to investigate the background of the incident and to share the incident data.

They are working on a database of global drone incidents and future threats (including red teaming).

Guard from Above (GFA)-trained birds and GFA-trained bird handlers are stationed at high-risk locations. Their services are like security dog handler services. They also train staff of police, defense forces and security companies to handle GFA-trained birds.

Their training program is based on over 25 years' specialist experience in working with birds of prey, combined with experience in international consulting.

Falconry is a practice that is over 3000 years old. Falconry was practiced in Mongolia at a very remote period and was already in high favor some 1000 years BC, that is, 3000 years ago. Falcons were given as presents to Chinese princes as early as 2200 BC, but these may have been kept as pets and not for hunting. Falconry appeared with the emergence of civilizations and was already popular in the Middle East and Arabian Gulf region several millennia BC. In the Al Rafidein region (Iraq) it was widely practiced 3500 years BC; in 2000 BC the Gilgamesh Epic clearly referred to hunting with birds of prey in Iraq.

It achieved a very high level of refinement during the military campaigns of the Great Khans, who practiced falconry for food and for sport between battles. One such military expedition reached almost to the gates of Vienna. By the time of Marco Polo there were over 60 officials managing over 5,000 trappers and more than 10,000 falconers and falconry workers.

Today the IAF - International Association for Falconry and Conservation of Birds of Prey, founded in 1968, represents 75 falconry clubs and conservation organisations from 50 countries worldwide totaling over 30,000 members. Currently there are an estimated 4,000 falconers in the United States with roughly 5,000 birds.



Currently the available anti-drone options fall into two camps: “shoot it” or “jam its sensors.” In the former case, you might miss (and hit something else), and even if you hit you’re left with a heavy drone falling to earth, potentially onto someone’s head. In the latter, you end up canceling out GPS or radio signals for everyone in the area, which isn’t practical as a preventative measure.





Compact spherical tokamak would be 100 times smaller than ITER and has a chance to start operating decades earlier

Startup company Tokamak Energy has published three papers showing that size is not an important factor in fusion reactors and that a compact spherical tokamak reactor can produce high power. This turns the pursuit of fusion into a series of engineering challenges. The Tokamak Energy plan is to overcome these challenges, such as the development of magnets made from high temperature superconductors, and deliver a fusion power gain within five years, first electricity within ten years and a 100 MWe power plant within 15 years.

The best-performing tokamak in the world is JET, which produced 16 MW of fusion power with 24 MW of input power in 1997, i.e. about two-thirds as much energy out as was put in. It holds the world record for total fusion power produced and for getting closest to breakeven. To reach this point, fusion research followed a Moore's law-like path. The triple product of temperature, density and energy confinement time, which indicates fusion performance, was increasing at a faster and faster rate up until the JET experiments.
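The JET figures above, expressed as a fusion gain Q (fusion power out divided by heating power in), where Q = 1 is breakeven:

```python
# Fusion gain Q for JET's 1997 record shot, from the figures above.
p_fusion_mw = 16.0   # fusion power produced
p_input_mw = 24.0    # heating power supplied
q = p_fusion_mw / p_input_mw
print(f"Q = {q:.2f} (~{q:.0%} of input power returned as fusion power)")
```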

But since then it seems that progress has stalled. There have still been experiments built and much learned, but progress towards energy breakeven has slowed. We still haven't actually reached energy breakeven almost 20 years after we nearly got there.

Traditional designs have moved to larger dimensions, culminating in the ITER experiment currently under construction in the south of France. This will be over 30m tall and weigh about 23,000 tonnes. The demonstration reactor that follows, dubbed DEMO, will likely be slightly bigger again. When ITER was being designed in the 1990s, it was believed that the only feasible way to increase fusion power was to increase machine size. But the size and complexity of ITER has led to very slow progress in the fusion program, with first fusion set for the mid 2020s. Tired of waiting so long and recognising the inherent difficulties of such a big project, some have been questioning the possibility of a smaller way to fusion.

Fusion reactor development could proceed much more rapidly by scaling down the size of reactors being developed, potentially helping the first compact fusion pilot plants to be ready to produce electricity for the first time within the next decade.




Theoretical calculations show that a Spherical Tokamak using high fields produced by HTS magnets could be significantly smaller than other fusion machines currently proposed. For example, a compact ST power plant would have a volume up to 100 times smaller than ITER – the successor to JET currently being built in France at a cost of €15bn – so would be approximately room-sized rather than aircraft-hangar-sized.



DEMO (DEMOnstration Power Station) is a proposed nuclear fusion power station that is intended to build upon the ITER experimental nuclear fusion reactor. The objectives of DEMO are usually understood to lie somewhere between those of ITER and a "first of a kind" commercial station. While there is no clear international consensus on exact parameters or scope, the following parameters are often used as a baseline for design studies: DEMO should produce at least 2 gigawatts of fusion power on a continuous basis, and it should produce 25 times as much power as required for breakeven. DEMO's design of 2 to 4 gigawatts of thermal output will be on the scale of a modern electric power station.

To achieve its goals, DEMO must have linear dimensions about 15% larger than ITER, and a plasma density about 30% greater than ITER. As a prototype commercial fusion reactor, DEMO could make fusion energy available some 15 years after ITER. ITER schedule is slipping. DEMO will not start tests before 2035. It is estimated that subsequent commercial fusion reactors could be built for about a quarter of the cost of DEMO

PROTO is a beyond-DEMO experiment, part of the European Commission's long-term strategy for fusion energy research. PROTO would act as a prototype power station, taking in any remaining technology refinements and demonstrating electricity generation on a commercial basis. It is only expected after DEMO, beyond 2050, and may or may not be a second part of a combined DEMO/PROTO experiment.

Navy Lasers, Railgun and Hypervelocity Projectile are each game changers but combined will be a revolution

The Navy is currently developing three potential new weapons that could improve the ability of its surface ships to defend themselves against enemy missiles: solid state lasers (SSLs), the electromagnetic railgun (EMRG), and the hypervelocity projectile (HVP). Any one of these new weapon technologies, if successfully developed and deployed, might be regarded as a "game changer" for defending Navy surface ships against enemy missiles. If two or three of them are successfully developed and deployed, the result might be considered not just a game changer, but a revolution. Rarely has the Navy had so many potential new types of surface-ship missile-defense weapons simultaneously available for development and potential deployment.

Although Navy surface ships have a number of means for defending themselves against anti-ship cruise missiles (ASCMs) and anti-ship ballistic missiles (ASBMs), some observers are concerned about the survivability of Navy surface ships in potential combat situations against adversaries, such as China, that are armed with advanced ASCMs and with ASBMs. Concern about this issue has led some observers to conclude that the Navy’s surface fleet in coming years might need to avoid operating in waters that are within range of these weapons, or that the Navy might need to move toward a different fleet architecture that relies less on larger surface ships and more on smaller surface ships and submarines.

Two key limitations that Navy surface ships currently have in defending themselves against ASCMs and ASBMs are limited depth of magazine and unfavorable cost exchange ratios. Limited depth of magazine refers to the fact that Navy surface ships can use surface-to-air missiles (SAMs) and their Close-in Weapon System (CIWS) Gatling guns to shoot down only a certain number of enemy unmanned aerial vehicles (UAVs) and anti-ship missiles before running out of SAMs and CIWS ammunition—a situation sometimes called “going Winchester”—that can require a ship to withdraw from battle, spend time traveling to a safe reloading location (which can be hundreds of miles away), and then spend more time traveling back to the battle area.
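The depth-of-magazine problem can be illustrated with simple arithmetic. The magazine size and engagement doctrine below are illustrative assumptions, not figures from the report:

```python
# Illustrative sketch of "depth of magazine" (not an official Navy model).
# The 96-cell magazine and the 2-shots-per-threat doctrine are assumptions
# chosen only to make the arithmetic concrete.

def intercepts_before_winchester(magazine_size: int, shots_per_threat: int) -> int:
    """Number of incoming threats a ship can engage before exhausting its SAMs."""
    return magazine_size // shots_per_threat

# Assume a 96-missile magazine and a shoot-shoot-look doctrine (2 SAMs per threat):
print(intercepts_before_winchester(96, 2))  # -> 48 threats, then the ship must withdraw to reload
```

However large the magazine, it is finite; an SSL drawing on ship fuel has no comparable hard limit, which is the point of the contrast drawn below.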

Unfavorable cost exchange ratios refer to the fact that a SAM used to shoot down a UAV or anti-ship missile can cost the Navy more (perhaps much more) to procure than it cost the adversary to build or acquire the UAV or anti-ship missile. In the FY2016 defense budget, procurement costs for Navy SAMs range from about $900,000 per missile to several million dollars per missile, depending on the type.


SSLs, EMRG, and HVP offer a potential for dramatically improving depth of magazine and the cost exchange ratio:

  • Depth of magazine. SSLs are electrically powered, drawing their power from the ship’s overall electrical supply, and can be fired over and over, indefinitely, as long as the SSL continues to work and the ship has fuel to generate electricity. The EMRG’s projectile and the HVP (which are one and the same—see next section) can be stored by the hundreds in a Navy surface ship’s weapon magazine.
  • Cost exchange ratio. An SSL can be fired for a marginal cost of less than one dollar per shot (which is the cost of the fuel needed to generate the electricity used in the shot), while the EMRG’s projectile / HVP has an estimated unit procurement cost of about $25,000.
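The cost-exchange argument above can be made concrete with a quick comparison. The interceptor costs come from the figures quoted in this article (SAMs at roughly $900,000 and up, HVP at about $25,000, an SSL shot at under $1); the $100,000 threat cost is a purely hypothetical round number for an inexpensive UAV:

```python
# Hedged sketch of cost-exchange ratios using the per-shot costs quoted in
# the text. The $100,000 threat cost is an assumption for illustration only.

def cost_exchange_ratio(interceptor_cost: float, threat_cost: float) -> float:
    """Defender dollars spent per attacker dollar destroyed (>1 favors the attacker)."""
    return interceptor_cost / threat_cost

THREAT_COST = 100_000  # assumed cost of an adversary UAV or cheap anti-ship missile

for name, cost in [("SAM (low end)", 900_000), ("HVP round", 25_000), ("SSL shot", 1)]:
    print(f"{name}: {cost_exchange_ratio(cost, THREAT_COST):g}")
```

Under these assumptions the SAM trade is 9-to-1 against the defender, while the HVP and SSL invert the ratio decisively in the defender's favor.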

Will the kinds of surface ships that the Navy plans to procure in coming years have sufficient space, weight, electrical power, and cooling capacity to take full advantage of SSLs (particularly solid state lasers with beam powers above 200 kW) and EMRG? What changes, if any, would need to be made in Navy plans for procuring large surface combatants (i.e., destroyers and cruisers) or other Navy ships to take full advantage of SSLs and EMRG?
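The ship-power question turns on a simple budget: a laser's electrical demand is its beam power divided by its wall-plug efficiency. The 30% efficiency and 800 kW of spare ship power below are illustrative assumptions, not Navy data; the government estimate quoted earlier says only that 100-150 kW systems "may be supportable" with existing ship power and cooling:

```python
# Back-of-the-envelope check of whether assumed spare ship power can support
# a laser of a given beam power. The 30% wall-plug efficiency and the 800 kW
# spare-power figure are assumptions for illustration, not Navy specifications.

def electrical_demand_kw(beam_power_kw: float, wall_plug_efficiency: float = 0.30) -> float:
    """Electrical input needed to produce the stated optical beam power."""
    return beam_power_kw / wall_plug_efficiency

def supportable(beam_power_kw: float, spare_power_kw: float,
                wall_plug_efficiency: float = 0.30) -> bool:
    """True if the ship's assumed spare electrical power covers the laser's demand."""
    return electrical_demand_kw(beam_power_kw, wall_plug_efficiency) <= spare_power_kw

print(supportable(150, spare_power_kw=800))  # 150 kW beam needs ~500 kW electrical
print(supportable(300, spare_power_kw=800))  # 300 kW beam needs ~1,000 kW electrical
```

This is why beam powers above roughly 200 kW raise the ship-design question: at a fixed efficiency, electrical (and cooling) demand scales linearly with beam power, and at some point it exceeds what current hull designs can spare.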



