During the past decade the amount of electricity used by data centers has steadily increased, and data centers now consume almost 3% of worldwide electricity. Despite continuing efforts to increase computing efficiency, data centers continue to consume ever greater quantities of energy. Keith Lofstrom is an electrical engineer who believes that he may have found a long term solution to the data center problem. Lofstrom wants to put a huge number of miniature, solar powered servers (called thinsats) in orbit. This concept, known as “server sky”, would be facilitated by a low-cost launch system referred to as the “Launch Loop”. This loop could put many thousands of tons into space at a small fraction of the cost of using rockets. In an interview with Sander Olson for Next Big Future, Lofstrom describes how the server-sky concept could greatly reduce the need for ground-based data centers, and how the Launch Loop might be the best solution to opening up space.
Question: A number of methods for putting payloads into space without rockets have been proposed. How does the Launch Loop concept compare with other schemes, such as the space elevator, the airship to orbit initiative, or James Powell’s Startram concept?
Getting into orbit requires a bit of altitude and lots of velocity. Both require energy. 100 kilometers of altitude requires 270 kilowatt hours per ton, about $33 of 12 cent electricity at 100% efficiency. A 10 kilometer per second lunar transfer orbit requires 15 megawatt hours per ton, about $1800 of electricity. An inefficient system (like a rocket, a laser powered space elevator, or a hypersonic airship) requires far more energy. A launch loop can be about 40% efficient, because energy is carried by a 14 kilometer per second iron rotor powered by high efficiency linear motors, and delivered to vehicles as magnetic drag. The energy conversion process is very cheap, compared to the switching electronics of a startram mass driver, an airship solar cell and ion engine, or a space elevator climber.
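Those energy figures can be checked with a few lines of back-of-envelope Python (assuming the 12-cent electricity and 100% efficiency stated above; the bare kinetic energy at 10 km/s comes out slightly under the 15 MWh quoted, which presumably also covers altitude and margin):

```python
# Back-of-envelope launch energy costs (figures from the interview).
G = 9.8            # m/s^2, surface gravity
KWH = 3.6e6        # joules per kilowatt hour
PRICE = 0.12       # dollars per kWh
TON = 1000.0       # kilograms

# Potential energy to lift one ton to 100 km altitude
pe_kwh = G * TON * 100e3 / KWH            # ~272 kWh
print(f"100 km altitude: {pe_kwh:.0f} kWh, ${pe_kwh * PRICE:.0f}")

# Kinetic energy of one ton at 10 km/s (lunar transfer orbit)
ke_kwh = 0.5 * TON * 10e3**2 / KWH        # ~13,900 kWh
print(f"10 km/s: {ke_kwh / 1000:.1f} MWh, ${ke_kwh * PRICE:.0f}")
```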
I’m fond of cheap dumb materials like iron and Kevlar. Startram requires superconductors far beyond the state of the art, the space elevator requires molecular-engineered superstrong materials, and Powell’s airship requires ultrathin materials at wing leading edges that can tolerate hypersonic heat for days. Proponents of those systems are not calculating skeptically. Optimism is a fundraising tool, not an engineering material.
All electrical launch systems require huge amounts of power electronics, which costs pennies to dollars per watt depending on the cooling system. Vacuum is a lousy coolant. Most launch loop power electronics are on the surface of the ocean, a very good coolant.
I’m a skeptic. I’ve looked at, and discarded, a lot of alternative launch systems over many decades. So far, I’ve found a lot of problems with the launch loop, but none that my colleagues and I don’t know how to solve. I hope advocates of other approaches will diligently look for and solve their own problems – those solutions will help all the rest of us.
Question: The launch loop structure would be approximately 2,000 kilometers long and 80 kilometers high. What methods would need to be used to build such a structure? How would the high-altitude labor be handled?
The launch loop rotor moves faster than orbital velocity. Force is needed to hold it down, about twice the rotor’s weight. This can support a lot of structure against gravity. The rotor and a stationary vacuum enclosure are coupled by a near-frictionless DC magnetic field, a small fraction of a Tesla. When the system is up to speed, it will lift off the ground, held down by Kevlar cables. Robert Forward called structures supported by moving mass “dynamic structures”.
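The "about twice the rotor's weight" figure follows from simple orbital mechanics, and can be sanity-checked like this (Earth's radius and gravitational parameter are assumed round values, not from the interview):

```python
# Why the rotor must be held DOWN: at 14 km/s, well above orbital
# velocity, the centripetal demand v^2/r exceeds local gravity, so
# the net force on the structure is upward.
MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R = 6.378e6 + 80e3     # radius at 80 km altitude, m
V = 14e3               # rotor speed, m/s

g_local = MU / R**2              # local gravity, ~9.6 m/s^2
centrifugal = V**2 / R           # ~30.4 m/s^2
net_up = centrifugal - g_local   # net upward accel per kg of rotor
print(f"net upward force ≈ {net_up / g_local:.1f}x rotor weight")
```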
The elevated portions, 50 to 80 kilometers high, will need occasional maintenance, mostly provided by small tele-operated robots. Large elevated stations at the east and west ends will sometimes be visited by maintenance workers, but visits will be brief to limit cosmic ray exposure. The vast majority of the workforce will be on the surface. If the system needs major repair, the elevated systems will be brought back down to the surface.
Question: You’ve estimated costs of around $10 billion for a first-generation system, and have provided materials cost estimates. Could you break down the labor costs – how long would construction take, and how much skilled labor would be required?
In the late 70s, when I first estimated those costs, schedule, and labor, I was a young engineer. With more experience, I’ve learned how little I (or any engineer) knows about what things cost until they’ve been done a few times. Rocket development cost the U.S. four trillion dollars – not even an organizational genius like Wernher von Braun could have guessed that figure. Boeing, with decades of experience developing passenger jets, still can’t pin down the development cost of a new family of jets to within a factor of two.
I’ve also learned that the best way to develop a new technology is with a small series of incremental improvements from older technologies. Dynamic structures like the launch loop will be developed from smaller dynamic structures, like kilometer scale offshore loops for radar platforms. Those will be developed from energy storage loops, which will use similar technologies to store terawatt-days of energy. The trick is to work backwards to even smaller but still profitable systems, just as rockets evolved from aircraft, which evolved from automobiles, kites, and bicycles.
So no, I can’t tell you how much a launch loop will cost, but I can tell you how to make hundreds of billions of dollars developing the technologies that will lead to it.
Question: What difficulties are presented by having the Launch Loop constructed in the ocean? How intractable are these problems?
The launch loop is constructed in factories on land, and assembled over the ocean for safety and ease of deployment. It will sometimes fail, and throw pieces. Those should land in the ocean or go into orbit, not hit people. A launch loop will move millions of tons of payload a year – the cheapest way to get the payload there is by ship. Ocean structures are easier to secure; planes and ships can be watched. From 50 kilometers altitude, a radar can see 2 million square kilometers.
Question: Have any detailed computer simulations been made of the Launch Loop concept?
I did most of the simulations back in the 80s, on ancient CDC Cyber computers. I verified stability problems that showed up in the mathematics, and found some control systems that seemed to fix them.
The launch loop is a lot more complicated now, with systems of cables for shifting lateral stabilizing forces to the ground, actuators to adjust the forces, sensors and tolerances. Most of the issues involve accurate sensing – small perturbations can grow to large ones in tens of milliseconds, so measurement and error propagation in control systems is very important.
Loose particles inside the vacuum enclosure can literally make hell break loose. Ultra high speed particle collisions can also be studied with a lot of computing.
Promising tools for these studies are the numerical array processors that are used as shading engines on graphics cards; a $200 nVidia graphics card can outperform those ancient cybers by factors of thousands. However, I’ve been too busy with other projects to learn how to program these complicated systems; perhaps some of your physics savvy readers can help turn large systems of partial differential equations into simulations.
For now, my computation time is mostly spent studying simple stuff like rotor heating, and making pretty graphs with gnuplot and pretty pictures with povray and libgd, with hundreds of little C and Perl programs to solve small problems. Power storage rings will be designed using millions of lines of code. Hopefully, all of it will be open source, evolved from other open source projects. Borrow from the best!
Question: How much maintenance would the Launch Loop require? How would the high-altitude maintenance be performed?
We will know more after we’ve operated thousands of power storage rings for thousands of hours each. The complicated bits are stationary – they can be swapped out (in a vacuum enclosure) while the system is running. Robots take up less room than people in space suits, so there will be many specialized robots working inside the vacuum enclosures.
The high speed rotor is dumb and hard to break, but if a segment is behaving less than perfectly, a robot carrying a replacement can accelerate at 200 gees, rendezvous with the damaged segment, replace it in perhaps six seconds, then slow down at 200 gees, elapsed time 20 seconds and total distance less than 200 kilometers. The important task is not the replacement, but bringing back the oddball segment for microscopic failure analysis. I hope there is a simpler way, but the fast robot is fun to think about.
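The repair robot's timeline works out as claimed; here is the kinematics, using the 200 gee and 14 km/s numbers from the text:

```python
# Kinematics of the hypothetical 200-gee repair robot chasing a
# 14 km/s rotor segment.
G = 9.8
A = 200 * G          # 1960 m/s^2 acceleration
V = 14e3             # rotor speed, m/s
SWAP = 6.0           # seconds spent alongside the segment

t_accel = V / A                      # ~7.1 s to match rotor speed
d_accel = 0.5 * A * t_accel**2       # ~50 km per accel/decel leg
d_swap = V * SWAP                    # ~84 km covered while swapping
total_t = 2 * t_accel + SWAP         # ~20 s elapsed
total_d = 2 * d_accel + d_swap       # ~184 km, under 200 km
print(f"{total_t:.0f} s, {total_d / 1000:.0f} km")
```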
I design semiconductors made in automated factories by thousands of specialized robots. Engineers sit in comfortable offices, sometimes many time zones away, writing programs for the robots, and analyzing voluminous measurements to improve the programs. We analyze the snarf out of failed components (I license technologies for that) so we can drive failures down to a few parts per million. We know more about every ten cent chip we make than you know about your car. Launch loop will evolve from that same obsession with measurement and analysis and continual improvement.
Question: When is the earliest that a Launch Loop could become operational? How long would it take for the launch loop to ramp up to full capacity?
Launch loops will get built after the enabling technologies are developed for other profitable markets. More importantly, ten thousand ton per day launchers will wait until the launch market scales to many times that level.
Though most of my space cadet friends are obsessed with cheap launch, I’m a lot more interested in developing applications that can grow exponentially even with expensive launch. I spend far more time designing mass-efficient satellites. Using the solid state technologies I understand, we can build gram-weight satellites a few microns thick, thousands of times lighter and more maneuverable than the multiton monsters we launch now. It is better to launch a kilogram of satellite at $10,000 a kilogram than a functionally equivalent three tons of satellite at $10 a kilogram. See http://server-sky.com for a hugely lucrative application.
In electronics, we talk about the learning curve – a factor of 10 increase in manufacturing quantity (and experience, and data, and defect analysis) leads to a factor of two drop in price. That is what drives Moore’s Law. So if we’ve spent 4 trillion on rockets, making rockets a thousand times cheaper will require ten billion times the accumulated experience and 40 quintillion dollars of expenditure. Not likely.
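The "40 quintillion dollars" remark can be reconstructed from the learning-curve rule just stated (a sketch of the arithmetic, assuming cumulative spend scales as quantity times unit price):

```python
import math

# Learning curve: each 10x in cumulative quantity halves unit price.
target_price_ratio = 1 / 1000          # rockets 1000x cheaper
halvings = math.log2(1000)             # ~10 price halvings needed
quantity_ratio = 10 ** halvings        # ~1e10 times the experience

# Cumulative expenditure ~ quantity x unit price:
spend_ratio = quantity_ratio * target_price_ratio   # ~1e7
spend = 4e12 * spend_ratio             # ~$4e19, i.e. ~40 quintillion
print(f"{quantity_ratio:.2e}x experience, ${spend:.2e} total spend")
```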
Question: SpaceX believes that they may eventually reduce launch costs to $10 per pound using conventional reusable rockets.
I don’t know what Elon Musk actually believes – I’d like to see the actual quote. I have seen quotes of $1000 per kilogram (10x cheaper), and I expect him to do that with a relentless drive for automated logistics and the same kind of data handling we use for semiconductors. This is the guy that automated a global internet bank (PayPal) without getting robbed blind by thousands of clever online crooks. That, and not spiffy new rocket technologies, is how he will make SpaceX deliver low cost launch services.
The fuel and LOX cost of a rocket launch is far more than $10 a pound, and designing a rocket to be fully reusable means adding weight to components that are cheaper to use once and then replace. Most of the weight of a rocket is fuel tanks, and those are essentially big soda pop cans. Imagine the cost of a Coke if the containers were designed to be reused many times (they were called glass bottles, and cost a LOT to make, transport, and prepare for re-use). (BTW, the Russian aluminum plant that rolled the metal for rocket tanks is now operated by Alcoa, making pop cans for Europe.) No, I expect SpaceX will use the “pop can” as a cheesy parachute for the (reusable) avionics and flight recorders, slowing down enough of the other components to do detailed analysis. They will use that data to design cheaper components for subsequent rockets. A relentless drive for more automation in the recovery and analysis of data will mean fewer employees can launch more rockets with fewer material inputs. So SpaceX may gather two thousand times as much applicable experience as the rest of the launch industry has, and that will give them their 10x cost advantage without spending a quadrillion dollars to get there.
But it can’t take them down to launch loop costs. Even if a rocket was free and zero mass, most of the fuel is there to accelerate other fuel, which in turn accelerates more fuel, which accelerates the small fraction of payload. Rockets are inherently expensive (and polluting).
Meanwhile, an electrically powered launcher like the launch loop can launch space solar power systems using server sky style thinsat technology, which can beam down cheaper electricity, which can make launch cheaper (electricity is the main cost). That will make electricity even cheaper, and launch and energy prices can plummet at Moore’s Law rates, all without polluting the atmosphere with rocket exhaust.
Will this put SpaceX out of business? Hardly. This is how SpaceX will someday deliver ever-cheaper launch to their customers – they are a /launch/ company, not a /rocket/ company, and they are clever enough to use the most cost effective technology available.
Moore’s Law is not a physical scaling prediction, but a promise of ever decreasing cost per value that customers can use to plan future technology buys. When scaling quits, the semiconductor companies will focus on power, versatility, time to market, and other cost and value inputs so they can keep decreasing cost per value. I expect SpaceX to do the same.
Question: Let’s talk about your concept for having solar powered servers in orbit. Data centers currently consume 2-3% of the nation’s electricity. But efforts are underway to massively increase data center efficiency. If the efficiency of data centers increases 10x, would the server sky concept still be viable?
The efficiency of data centers is indeed increasing rapidly, but not as fast as the demand for computing, which may be doubling every year. So the power per core is going down, but the number of cores is increasing far faster.
Indeed, the techniques being developed for faster, cheaper, lower power CPUs will increase the performance of Server Sky thinsats at the same rate they increase the performance of ground data centers. For example, the University of Michigan is working with Intel to develop RAZOR, a technique to detect and correct bit errors in computer logic. Why? Because if you dial down the power per bit, you can increase compute efficiency, but the bit errors increase from (for example) one per quintillion to one per billion. But if you can detect the error and recompute a fraction of a percent of the time, you might halve the power while increasing the compute time unnoticeably. Similar techniques can compensate for radiation-induced bit flips. That, plus other techniques, eliminates the need for shielding, and makes orbits in the Van Allen belt practical.
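The detect-and-recompute tradeoff described above is easy to quantify; the specific recompute fraction below is an assumed illustration (the interview only says "a fraction of a percent"):

```python
# Illustrative arithmetic for the detect-and-recompute tradeoff:
# halve the supply power, accept more frequent detected errors, and
# replay the affected work.  All numbers assumed for illustration.
recompute_fraction = 0.005   # redo work 0.5% of the time (assumed)
power_ratio = 0.5            # supply power halved

# Average energy and time per useful result, relative to full power:
energy_ratio = power_ratio * (1 + recompute_fraction)   # ~0.5025
time_ratio = 1 + recompute_fraction                     # ~1.005
print(f"energy {energy_ratio:.4f}x, time {time_ratio:.4f}x")
```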
However, I do not expect to go head-to-head with first world industrial park data centers. The richest man in the world is Carlos Slim Helú of Mexico, who got rich peddling cell phones to the developing world. Those cell phones can be greatly enhanced with massive computation in orbit, while building and staffing and powering data centers and communication infrastructure in those countries is very difficult. Clayton Christensen tells us that the chance of success for a startup is perhaps 40% if it serves a real and unmet need, while it is 4% if it competes head-to-head with established players in a mature market (no matter what cost or performance advantage it offers).
I am a lot more interested in making the developing world rich (and getting a share) with technologies that do not damage the environment the way our developed world technologies do. Facebook, Amazon, Google, and other companies may dabble in “alternative” energy, but primarily they burn fossil fuel and kill salmon to deliver computation to customers. They should focus on their strengths, and the wonderful services they deliver to all of us. Intel and AMD should focus on better chip technology, and reducing unnecessary power waste in data centers. But we’ve got a whole planet to bring into the 21st century (not just the 4% of us in the United States). That will require new technologies.
We shouldn’t do that with fossil fuels. So-called alternative sources are feeble and mismatched to demand. Land-eating technologies like solar compete with nature for sunlight. Nuclear power frightens people. Meanwhile, the Sun blows 384 trillion terawatts into empty space, lost forever. We can learn to use a tiny fraction of that, and dump the waste heat into the void. And when efficiency improvements collide with the brick wall of thermal noise, we can continue to expand compute infrastructure to serve new applications to new customers.
Question: Your concept of a thinsat has evolved, with current versions being glass-silicon cylinders. According to your latest research, what performance would a typical thinsat provide? How much power would a thinsat require, and how long would the average thinsat last?
A thinsat communicates with neighbors in an array by radio links – that is less efficient than wires, so thinsats will always be somewhat less efficient as array processors for a given technology generation. However, their power source is free and effectively infinite, while their heat sink is a universe at 2.7 K.
Google’s biggest line item expense is electric power. If a thinsat is thin enough, it costs more to make the chips than it does to launch the thinsat. A Pentium die costs around $100, and costs about $100/year to power with 12 cent electricity. The actual silicon die weighs about one milligram (the package and heat sink weigh much more!) and at $10,000 per kilogram would cost a penny to put into orbit.
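Checking those numbers (the $100/year power bill and 12-cent electricity are from the text; the implied continuous draw is derived here):

```python
# A bare die is nearly free to launch compared with powering it.
die_mass_kg = 1e-6         # ~1 milligram of bare silicon
launch_price = 10_000      # $/kg (current launch price from text)
launch_cost = die_mass_kg * launch_price          # $0.01

power_bill = 100.0         # $/year at 12-cent electricity
kwh_per_year = power_bill / 0.12                  # ~833 kWh/year
avg_watts = kwh_per_year * 1000 / 8760            # ~95 W continuous
print(f"launch ${launch_cost:.2f}, draw ~{avg_watts:.0f} W")
```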
The problem is, the value of that processor (on the ground or in orbit) drops at Moore’s law rates, as faster and more efficient processors appear. So a processor designed for thinsats might last 10 or even 100 years, but its value will drop exponentially. Thinsats will be scrap long before the electronics fail.
But there is a compensating “problem”. We would like to make thinsats lighter and lighter. That makes them cheaper to launch (even with high fixed launch costs) and more nimble (they maneuver by light pressure, and can travel halfway around their orbit in a few weeks). However, if they are too light, they are too sensitive to light pressure, and won’t stay in their assigned orbit.
What to do? You launch very light thinsats, a tiny fraction of the mass they need to stay put, and use fragments of old thinsats as ballast (as well as all that yummy “space debris” going unused up there). You can continue the mass reduction until a 5 watt thinsat is perhaps a micrometer thick and has a launch weight of 100 milligrams ( $1 per thinsat at $10,000/kg current launch cost). A 3 ton, $30M launch would put 150 megawatts of thinsat in orbit. Compared to a $5/watt terrestrial power plant, plus power lines and buildings and concrete and fiber infrastructure (all of which Google pays for with their enormous monthly electric and communication bill), space computation looks mighty attractive. Far more attractive to countries that can’t afford all that infrastructure.
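The 150-megawatt launch arithmetic above works out like this (all input numbers are from the text):

```python
# One 3-ton launch of 100-milligram, 5-watt thinsats.
thinsat_mass_kg = 100e-6       # 100 milligrams each
thinsat_watts = 5.0
launch_price = 10_000          # $/kg
payload_kg = 3000              # 3-ton launch

per_sat = thinsat_mass_kg * launch_price        # $1 per thinsat
count = payload_kg / thinsat_mass_kg            # 30 million thinsats
total_watts = count * thinsat_watts             # 150 MW
cost = payload_kg * launch_price                # $30M
print(f"{count:.0f} thinsats, {total_watts / 1e6:.0f} MW, "
      f"${cost / 1e6:.0f}M -> ${cost / total_watts:.2f}/W")
```

The resulting ~$0.20 per watt is what makes the comparison with a $5/watt terrestrial power plant so lopsided.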
Question: You have proposed various solutions to radiation damage of thinsats, such as the RAZOR error correction technology and frequent rewrites for the FLASH memory. Even using such techniques, how will the reliability and accuracy of thinsats compare with ground based servers?
We won’t know how reliable thinsats will be until we build them, develop them, and measure them – and until we decide what to set as design goals. Regardless of technology or location, any data you do not have multiple copies of, you will eventually have zero copies of. A constellation of thinsat arrays might need to make six copies to get the same reliability as three copies on the ground. On the other hand, a system designed for reliable interchange between multiple copies, on the ground or in the sky, will have big advantages over our current atomized and uncoordinated processes. With server sky, we have an opportunity to start fresh, bypassing the limitations designed into our existing infrastructure. Will we make better choices if we have the chance? That is an investment decision, not a technology decision.
Ground server farms fail too, usually for external reasons: power grid failures, earthquakes, storms, and, most importantly, bad market forecasts and changing market conditions. “Cloud computing” can move new apps to old data centers, but it cannot physically relocate the equipment in a data center to a place with more reliable electricity or better fiber. Thinsats can move between arrays, expanding and shrinking them as needed.
Again, the initial markets will be places with sparse or nonexistent data service; western China, rural India, the mountains of Afghanistan. Server sky arrays communicate by microwave, and can see 25% of the globe at once. They can refocus their computation and communications from one part of the world to another in milliseconds, and provide pennies per year of computation to the very poorest, cost effectively, either directly or through the cell network. If a region loses fiber data connectivity in an earthquake or tsunami, arrays can reestablish communication instantly.
Question: Collision hazards in orbit are an obvious concern, one that you have discussed at length. Could you describe how using discarded thinsats as ballast could solve the Kessler syndrome caused by satellites colliding?
As mentioned above, ballast allows us to reuse the mass of a thinsat, forever, making future thinsats cheaper to launch.
We can also use the arrays as “radar guns”, making very narrow time-coded beams of radar energy to light up potential colliders so that radar satellites can see and track them (small thinsats make lousy receivers, though they might be able to physically connect into large antenna arrays. TBD.) We currently track space debris with a few ground stations, with tracking errors on the order of a kilometer. With high orbit, continuous tracking, we can track much smaller objects, and locate them by time of flight to centimeters. So besides identifying and precisely locating debris so that other satellites can precisely maneuver to avoid it (missing by meters is good enough), we can also make rendezvous and capture faster and more cost effective.
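Centimeter-scale time-of-flight ranging implies tens-of-picosecond timing, which a one-line speed-of-light calculation (not from the interview) makes concrete:

```python
# Timing precision implied by centimeter-scale time-of-flight ranging.
C = 299_792_458.0                  # speed of light, m/s
cm_time = 0.01 / C                 # one-way light time across 1 cm
print(f"1 cm of range error = {cm_time * 1e12:.0f} ps of timing error")
```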
A major reason for the debris problem is that we have too many satellites in uncoordinated, high inclination, low Earth orbits. That means a lot of orbits that intersect close to the poles at high relative velocity. Server sky will be deployed in near-equatorial orbits, nested such that neighboring orbits will have very small relative velocities. Very few fratricidal collisions, even with trillions of thinsats in orbit.
However, the equatorial location also means that cities like Anchorage and Stockholm will be below the horizon from the server sky constellation. Many northern cities are small, rich, and have ample existing service through fiber. The ones that don’t can communicate through satellite services like Iridium and Globalstar. Server sky won’t solve everybody’s problems, but providing direct data services to the 90% of the world’s inadequately served population is a good start.
Question: Thinsats will use light pressure from the sun to make minor orbital adjustments. But how long will larger orbital maneuvers take? How difficult will such maneuvers be to coordinate?
Thinsats accelerate a few microns per second squared, a few centimeters per minute squared, many meters per hour squared, etc. The accumulated distance goes as the square of the time, so halfway around the orbit might take a couple of months. Orbital mechanics says that satellites in lower orbit move around the sky faster. We can designate “highways” for thinsats (lower orbits moving forwards, higher orbits moving backwards) and coordinate very large movements. To manage the arrays and point at precise locations on the ground, we will use time of flight location and triangulation to know where thinsats are, to fractions of a millimeter, all the time. With long distance journeys moving at perhaps 20 meters per second relative to the array, and 100 km above or below, avoiding collisions will be easy. A “lane” might be 10 meters wide, and thinsats in the next lane are moving only 2 millimeters per second faster or slower. If only terrestrial freeways had such low relative velocities, and their drivers had microsecond reaction time!
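A rough drift-time estimate supports the "couple of months" figure. The acceleration and orbit radius below are assumptions for illustration (the text says only "a few microns per second squared"; the radius is a server-sky-style orbit of roughly 12,800 km), and the constant-thrust d = a·t²/2 estimate ignores the details of orbital phasing:

```python
import math

# Naive drift-time estimate under constant light-pressure thrust.
a = 3e-6                                   # m/s^2 (assumed)
print(f"{a * 60**2 * 100:.1f} cm/min^2")   # ~1 cm per minute squared

half_orbit = math.pi * 12.8e6              # ~40,000 km of arc, m
t = math.sqrt(2 * half_orbit / a)          # d = a*t^2/2 estimate
print(f"~{t / 86400:.0f} days halfway around")
```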
Question: You have estimated that thinsats in orbit could cost 10x as much as ground-based servers in 2015, but might cost only 1/100 of ground-based server costs in 2035. Is this predicted massive cost reduction almost entirely based on getting the Launch Loop established?
Not at all. Mostly this will be driven by launch mass reduction and ballast recycling, but there are many other opportunities for improvement that will be discovered through experience. We are on the same silicon technology curve as the ground servers, but we will not face the same changes in energy and infrastructure cost. Most of a thinsat’s cost will be manufacturing, not launch, so increased volumes and experience will reduce that. Cheaper launch from organizations like SpaceX will help, too, especially if we speed their growth with our own.
An unfortunate part of the changing differential is that terrestrial energy will become more expensive as we deplete resources and chase dirtier sources (requiring more cost to clean up). The billions in the developing world will get a bigger slice of a smaller pie. So thinsats may get 100x cheaper while terrestrial data centers may get 10x more expensive to operate. Technology scales, finite resources don’t.
I see launch loop coming later, partly driven by increased fuel cost for rockets and a greatly expanded launch market. We will develop power storage rings first.