Proposed First Gravity Lens Mission by 2028 that Could Spot Large Islands on Exoplanets by 2050

A meter-class telescope with a coronagraph to block solar light, placed in the strong interference region of the solar gravitational lens (SGL), is capable of imaging an exoplanet at a distance of up to 30 parsecs with 10-km-scale resolution on its surface. The picture shows results of a simulation of the effects of the SGL on an Earth-like exoplanet image. Earth's diameter is 12,742 kilometers, so ten-kilometer resolution would be better than a one-megapixel image.

Above: Left: original RGB color image with (1024×1024) pixels; center: image blurred by the SGL, sampled at an SNR of ~1000 per color channel, or overall SNR of 3000; right: the result of image deconvolution.

The resolution used against the Earth would let an observer identify Java, the main island of Indonesia, and the island of Cuba.

Executive Summary: Innovations and Advanced Concepts Enabled
Direct multipixel imaging of exoplanets requires significant light amplification and very high angular resolution. With optical telescopes and interferometers, we face a sobering reality: i) to capture even a single-pixel image of an "Earth 2.0" at 30 parsecs (pc), a ~90 kilometer (km) telescope aperture is needed (for a wavelength of λ = 1 µm); ii) interferometers with large telescopes (~30 meters) and baselines (~1 kilometer) would require integration times of ~10,000 years to achieve a signal-to-noise ratio of SNR = 7 against the exozodiacal background. These scenarios involving classical optical instruments are impractical, giving us no hope of spatially resolving and characterizing exolife features.
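The ~90 km aperture figure follows directly from the diffraction limit. A minimal sketch, using round-number constants for Earth's diameter and the parsec (both assumptions, not from the text):

```python
# Diffraction-limited aperture needed to treat an Earth-size planet at 30 pc
# as a single resolved "pixel" at lambda = 1 micron (Rayleigh criterion).
wavelength = 1e-6              # m (lambda = 1 micron)
planet_diameter = 1.2742e7     # m (Earth diameter, assumed target size)
distance = 30 * 3.0857e16     # m (30 parsecs)

# aperture d such that 1.22 * lambda / d equals the planet's angular size
aperture = 1.22 * wavelength * distance / planet_diameter
print(f"required aperture: {aperture / 1e3:.0f} km")  # ~89 km
```

This reproduces the "~90 km aperture" quoted above; the interferometer case fails on photon starvation rather than resolution.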

To overcome these challenges, a NIAC Phase II study examined the solar gravitational lens (SGL) as the means to produce direct high-resolution, multipixel images of exoplanets. The SGL results from the diffraction of light by the solar gravitational field, which acts as a lens by focusing incident light at distances >548 AU behind the Sun (Figure 1). The properties of the SGL are remarkable: it offers maximum light amplification of ~100 billion and angular resolution of ~10 billionths of an arcsecond, for λ = 1 µm. A probe with a 1-meter telescope in the SGL focal region (SGLF), namely in its strong interference region, can build an image of an exoplanet at 30 pc [~100 light years] with 10-km-scale resolution of its surface, which is not possible with any known classical optical instrument. This resolution is sufficient to observe seasonal changes, oceans, continents and surface topography.

They reached and exceeded all objectives set for the Phase II study:
* They developed a new wave-optical approach to study the imaging of exoplanets while treating them as extended, resolved, faint sources at large but finite distances.
* They designed coronagraph and spectrograph instruments needed to work with the SGL.
* They properly accounted for the solar corona brightness.
* They developed deconvolution algorithms and demonstrated the feasibility of high-quality image reconstruction.
* They identified the most effective observing scenarios and integration times.

As a result, they are now able to estimate the SNR for light from realistic sources in the presence of the solar corona. They have shown that multipixel imaging and spectroscopy of exoplanets up to 30 pc [~100 light years] are feasible. By doing so, they moved the idea of applying the SGL from the domain of theoretical physics to the practical mainstream of astronomy and astrophysics. Under the Phase II NIAC program, they confirmed the feasibility of the SGL-based technique for direct imaging and spectroscopy of an exoplanet, yielding a technology readiness level of TRL 3.

They have developed a new mission concept that delivers an array of optical telescopes to the SGL focal region and then flies along the focal line to produce high-resolution, multispectral images of a potentially habitable exoplanet. The multisatellite architecture is designed to perform concurrent observations of multiple planets and moons in a target exoplanetary system. It allows for a reduction in integration time, accounts for the target's temporal variability, and helps "remove the cloud cover".

In this Report, they describe the mission architecture and the relevant technology steps, which they can begin today, that would allow the launch of a Solar Gravity Lens Focus mission by 2028-2030.

About Six Times Faster than Voyager, Less Than 25-Year Travel Time – Maybe 2050 Arrival

The new architecture developed in this study uses smallsats (less than 100 kg) with solar sails to fly a trajectory spiraling inward toward a solar perihelion of 0.1-0.25 AU and then out of the solar system on a nearly radial trajectory at 15-25 AU/year. Our design goal is 25 AU/year, to permit reaching the SGLF region in less than 25 years (maybe a 2050 arrival). That is a long time, but less than the time it took Voyager to reach the heliopause, at less than a fifth of the distance of our goal in the interstellar medium (ISM). We would reach the heliopause and enter the ISM in ~7 years, compared to the ~40 years of Voyager.
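The cruise-time claims can be sanity-checked with simple division (a sketch that ignores the initial inward spiral, which is why the heliopause figure below comes out a couple of years shorter than the ~7 years quoted):

```python
# Rough cruise-time check at the 25 AU/year design speed, assuming the speed
# is reached quickly after perihelion so cruise time dominates.
speed = 25.0         # AU/year, design goal
sgl_focus = 550.0    # AU, approximate start of the SGL strong-interference region
heliopause = 120.0   # AU, approximate

print(f"time to SGL focus:  {sgl_focus / speed:.0f} years")   # 22 years
print(f"time to heliopause: {heliopause / speed:.1f} years")  # ~5 years of cruise
```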
Today we are technologically ready to seize the unprecedented opportunity of using the SGL with a mission transit time only ~2.5× longer than the transit time of New Horizons to Pluto.

The SGLF CONOPS uses multiple small satellites in an innovative "string-of-pearls" (SoP) architecture, where a pearl consisting of an ensemble of smallsats is periodically launched. As a series of such pearls are launched (to form the "string"), they provide the comm relays, observational redundancy, and data management needed to perform the mission. For example, if pearls are launched annually, then they will fly outward towards and then along the SGLF at 20 AU intervals.

By employing smallsats using AI technologies to operate interdependently, we build in mission flexibility, reduce risk, and drive down mission cost. This makes possible concurrent investigations of multiple exosolar systems by launching strings towards multiple exoplanet candidates.

We concluded that most of the technologies for the SGLF mission either already exist (rideshare/cluster launch, sailcraft, RF/optical comm, all at TRL 9), or are at intermediate levels of readiness: sail materials (TRL 2-3), thermal management in solar proximity (TRL 7), swarm operations (TRL 5), terabit onboard processing (either FPGA or GPU, TRL 9/7), and CONOPS (TRL 7).

What is missing is the system approach to assemble all these technologies for autonomous operations in deep space (TRL3).

There is a clear path on how to close this gap, maturing the SGLF concept to TRL 4-5.

This affordable architecture design reduces cost in many ways:
1) It cuts the cost for each participant by giving multiple participants (space agencies, commercial firms, universities, etc.) broad choices in funding, building, deploying, operating, and analyzing system elements.
2) It delivers economies of scale in an open architecture designed for mass production to minimize recurring costs.
3) It drives down the total mass (and thereby both NRE and recurring costs) by using smallsats.
4) It uses solar sails of realistic size (~16 vanes of 1,000 square meters each) to achieve high velocity at perihelion (~150 km/s).
5) It applies maturing AI technologies to allow virtually autonomous mission execution, eliminating the need for operator-intensive mission management.
6) It reduces launch costs by relying on "rideshare" opportunities to launch the smallsats, avoiding the costs of large dedicated launchers.
7) The SoP approach makes possible concurrent and affordable investigations of multiple exosolar systems by launching strings towards multiple exoplanet candidates.

The SGLF mission concept proposes three innovations:
i) a new way to enable exoplanet imaging,
ii) use of smallsat solar sails to go further and faster at lower cost into the interstellar medium, and
iii) an open architecture to take advantage of swarm technology in the future. It enables entirely
new missions, providing a great leap in capabilities for NASA and the greater aerospace community.

It lays the foundation for fast transit (over 20 AU/yr) and exploration of our solar system and beyond (outer planets, moons, Kuiper Belt Objects (KBOs), and interstellar objects/comets).

Treatment of extended sources

The entire image of an Earth-like planet at 30 pc is compressed by the SGL into a cylinder with a diameter of ~1.3 km in the vicinity of the focal line. The telescope, acting as a single-pixel detector while traversing this region, can build an image of the exoplanet with kilometer-scale resolution of its surface.

NOTE: Everything along the path to the exoplanet can be observed with 100-billion-times amplification. You can look beyond it at stars and galaxies.

Assuming that the planet is positioned at z0 = 30 pc from the Sun, we estimated the signal from the planet as Q_planet = 8.01×10^4 (d/1 m)^2 (650 AU/z)^(1/2) (30 pc/z0) (λ/1 µm) photons/s.
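A minimal sketch of this flux formula, with the scalings coded exactly as quoted (the λ dependence is taken linearly, as written); it reproduces the figures in the next paragraph:

```python
# Photon-flux estimate for an SGL-aided telescope, per the formula above.
# Defaults: d = 1 m telescope, z = 650 AU heliocentric distance, lambda = 1 um.
def q_planet(d_m=1.0, z_au=650.0, z0_pc=30.0, lam_um=1.0):
    return 8.01e4 * d_m**2 * (650.0 / z_au)**0.5 * (30.0 / z0_pc) * lam_um

print(f"{q_planet(z0_pc=30.0):,.0f} photons/s")  # 80,100 at 30 pc
print(f"{q_planet(z0_pc=10.0):,.0f} photons/s")  # ~240,000 at 10 pc
print(f"{q_planet(z0_pc=1.3):,.0f} photons/s")   # ~1.85 million at 1.3 pc
```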

This estimate translates to a flux of 240,000 photons/s for an exoplanet at z0 = 10 pc and 1.85 million photons/s for one at z0 = 1.3 pc. Figure 11 summarizes the photon fluxes and the relevant signal-limited SNR (i.e., no noise) as a function of heliocentric distance.

Using these estimates, we compared the performance of a conventional telescope against one aided by the SGL. Resolving features of size D in the source plane would require a telescope with aperture d_D ~ 1.22 (λ/D) z0 ~ 1.19×10^5 km = 18.60 R⊕, which is not realistic. The photon flux received by a d = 1 m telescope from such a small area on the exoplanet is 1.97×10^-8 photons/s, which is extremely small. Comparing this flux with Q_planet received with the SGL, we see that the SGL, used in conjunction with a d = 1 m telescope, amplifies the light from the directly imaged region (i.e., an unresolved source) by a factor of ~3.38×10^9 (d/1 m)(650 AU/z)^(3/2)(z0/30 pc)^2. This estimate justifies using the SGL for imaging of faint sources.
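The amplification scaling can be coded directly from the expression above; a quick sketch showing how the gain over a bare 1-m telescope falls off as the spacecraft moves further along the focal line:

```python
# SGL gain over an unaided telescope for the directly imaged (unresolved)
# region, using the scaling quoted above.
def sgl_gain(d_m=1.0, z_au=650.0, z0_pc=30.0):
    return 3.38e9 * d_m * (650.0 / z_au)**1.5 * (z0_pc / 30.0)**2

for z in (650, 800, 1000):
    print(f"z = {z:4d} AU: gain ~ {sgl_gain(z_au=z):.2e}")
```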

Realistic Signal to Noise

They accounted for the zodiacal background, solar corona brightness, spacecraft jitter, realistic losses, etc., and assumed a coronagraph suppression of one million. With these assumptions, they estimate that a 1-m telescope operating beyond 650 AU would reach a post-deconvolution SNR of 7 in ~1 year, yielding an image of the target with (100×100)-pixel resolution. Ten thousand pixels.

Creating a megapixel image requires one million separate measurements. For a typical photograph, each detector pixel within the camera performs a separate measurement. This is not the case for the SGL. Only the pixels in the telescope detector that image the Einstein ring measure the exoplanet, and the ring contains information from the entire exoplanet, due both to the blur of the SGL and to the mapping of different regions of the exoplanet to different azimuths of the ring.

What is encouraging is that temporal variability in the cloud cover helps the deconvolution. Assuming N ~ 50 observations of every pixel, clouds "disappear" after ~10 observations. If spectroscopic data is also used, we can reduce this issue and "see through" the clouds. The deconvolution of data from several spacecraft makes it possible to see the surface of the Earth in a few months of data.

By studying direct deconvolution, we have shown that with a 1-m telescope we would need ~1 year to build a (100×100)-pixel image with SNR ~ 7. Two factors that can reduce the integration time by up to 100 times are i) the number of image pixels, N, and ii) the telescope diameter.

A larger image of 1000×1000 pixels of an exoplanet at 25 pc may be produced in ~6 years with a 2-meter telescope. This time may be reduced if there are time-varying features of predictable periodicity on the planet's surface or in its atmosphere. Also, the integration time is reduced by a factor of ~1/n if we fly n imaging spacecraft. Six 2-meter telescopes would let the megapixel image be produced in one year; seventy 2-meter telescopes could produce a megapixel image in one month.
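The 1/n scaling with spacecraft count is easy to check (a sketch taking the ~6-year single-telescope baseline from the text as given):

```python
# Integration-time scaling: time shrinks as 1/n for n imaging spacecraft
# (2 m telescope, 1000x1000-pixel image of a target at 25 pc).
base_years = 6.0  # ~6 years with a single 2 m telescope, per the study

def imaging_time_years(n_spacecraft):
    return base_years / n_spacecraft

print(f"1 telescope:   {imaging_time_years(1):.1f} years")
print(f"6 telescopes:  {imaging_time_years(6):.1f} year")          # ~1 year
print(f"70 telescopes: {imaging_time_years(70) * 12:.1f} months")  # ~1 month
```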

Accounting for the motion of the Sun in the solar system barycentric reference frame (BCRF), consider the plate scale. The SGL reduces the size of the image of an exoplanet at 30 pc by a factor of ~10,000 at 650 AU. An orbital radius of 1 AU becomes ~15,000 km, and an orbital velocity of 30 km/s translates into ~3 meters per second. Solar gravity accelerates the Earth at ~6 mm/s², so the imager spacecraft needs to accelerate at ~1 micron per second squared to move in a curved line mimicking the motion of the exoplanet.
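These plate-scale numbers can be reproduced directly; a sketch with rounded constants for the AU and parsec (the acceleration comes out nearer 0.6 µm/s² than 1 µm/s², consistent with the order-of-magnitude quoted):

```python
# Plate-scale check: at z = 650 AU the SGL demagnifies a target at z0 = 30 pc
# by roughly the distance ratio z/z0.
AU = 1.496e11              # m
z = 650 * AU               # heliocentric distance of the imager
z0 = 30 * 3.0857e16        # m (30 parsecs)
scale = z / z0             # ~1e-4 demagnification

print(f"demagnification: 1/{1 / scale:,.0f}")                 # ~1/10,000
print(f"1 AU orbit maps to {AU * scale / 1e3:,.0f} km")       # ~15,000 km
print(f"30 km/s maps to {30e3 * scale:.1f} m/s")              # ~3 m/s
print(f"6 mm/s^2 maps to {6e-3 * scale * 1e6:.2f} um/s^2")    # ~0.6 um/s^2
```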

For a given number of desired pixels, more distant (fainter) planets will require longer integration times, resulting in a longer imaging mission phase. In fact, image quality improves with time, allowing more repeat scans of the same pixel, and as the spacecraft moves to greater heliocentric ranges, the SNR increases. These factors allow for improved image reconstruction. It is desirable for the imaging mission phase to last on the order of 10 years. This would translate into much-increased image quality and temporal resolution of the atmospheric and surface processes occurring on the target exoplanet.

Mass Production of Smallsats

However, the SGL mission requires radioisotope power. A first mission will easily cost billions. Technology and methods can be drastically improved to bring each mission's cost down to the tens of millions. It would be very beneficial to drive costs down further so that more solar systems could be observed.

SpaceX is mass-producing Starlink satellites of comparable size for less than $1 million. It could be possible to bring the costs for these types of satellites down to $100,000. Mass producing a million would cost about $100 billion. About $200 billion was spent on the international space station. We could have eighty satellites observing each of over ten thousand solar systems within 100 light-years.
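A quick sketch of the arithmetic behind these round-number estimates (all figures are the article's aspirational assumptions, not demonstrated costs):

```python
# Mass-production cost arithmetic for SGL imaging smallsats.
unit_cost = 100_000        # $ per mass-produced smallsat (aspirational)
n_sats = 1_000_000         # total satellites produced
sats_per_system = 80       # observers assigned per target solar system

total_cost = unit_cost * n_sats
print(f"total: ${total_cost / 1e9:.0f} billion")          # $100 billion
print(f"systems covered: {n_sats // sats_per_system:,}")  # 12,500 systems
```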

Test Flight

Interplanetary smallsats are still to be developed – the recent success of MarCO brings them perhaps to TRL 7. Solar sails have now flown – IKAROS and LightSail-2, already mentioned – and NASA is preparing to fly NEA-Scout. Scaling sails to be thinner, and using materials that withstand higher temperatures near the Sun, remains to be done. As mentioned above, we propose to do this in a technology test flight to the aforementioned 0.3 AU with an exit velocity of ~6 AU/year. This would still be the fastest spacecraft ever flown. They have roughly estimated this could be done within three years at a cost of less than $40 million, using a rideshare launch to approximately GEO.

Getting There – Existing and Near-Term Technology


Analysis based on current materials shows that solar orbit injection with a perihelion distance of 10 solar radii (the approximate orbit of NASA's Parker Solar Probe) generates exit velocities of 15–18 AU/yr for an A/M ratio of 100–200 m²/kg, and 25–30 AU/yr for A/M of 400–600 m²/kg.

Key technologies being developed to drive down weight, risk, and cost include solar sail materials, solar sail propulsion control, and higher-speed and rad-hard computers. Over the next 10 years, battery energy density (J/kg) is anticipated to increase by a factor of 2 to 4, removing about 5-7 kg of mass per SGL spacecraft. Chip-scale atomic clocks foresee a factor-of-100 improvement, on the order of 33 years per 1 Hz of drift (0.1 ppm). Star trackers using high-resolution data from the Gaia mission should reach 1 microarcsecond angular resolution.

Millimeter-wave D-band RF antenna arrays could provide efficient RF crosslinks with lower SWaP (e.g., NuvoTronics). NASA's Inter-spacecraft Omnidirectional Optical Communicator (ISOC), or Honeywell's Optical Pointing and Tracking Relay Assembly (OPTRAC) 10 Gbps optical link, could enable intersatellite communication. And for large-volume downlink data transfer, NASA's Terabyte Infrared Delivery (TBIRD) program could be utilized.


A design for the SGL coronagraph (Zhou, 2018) rejects sunlight with a contrast ratio of about ten million. At this level of rejection, light from the solar disk is blocked to a level comparable to the brightness of the solar corona. Taking a further step, we consider two possible coronagraph concepts: a conventional coronagraph (which we call a "disk coronagraph") that blocks light only from the solar disk and the solar corona up to the inner boundary, b−, of the λ/d annulus centered on the Einstein ring, and a coronagraph that also blocks light outside the outer boundary, b+, of the λ/d annulus centered on the Einstein ring (the "annular coronagraph").

Figure 12 describes the relevant sizes and observing configuration.

FIG. 9: The annular coronagraph concept. The coronagraph blocks light from both within and outside the Einstein ring.
The thickness of the exposed area is determined by the diffraction limit of the optical telescope at its typical observational

Solar coronagraphy was invented by Lyot to study the solar corona by blocking out the Sun, reproducing solar eclipses artificially. Coronagraphs are also used to block out light from point sources, such as the host star of an exoplanet imaged with a conventional telescope. The SGL coronagraph is different, as it needs to block the light from the Sun and the solar corona, leaving visible only those areas where the Einstein ring appears.

Written By Brian Wang,

41 thoughts on “Proposed First Gravity Lens Mission by 2028 that Could Spot Large Islands on Exoplanets by 2050”

  1. Would the Einstein ring deconvolution be consistent with the use of a quantum algorithm for further multiplication of any apertures resolution by a million fold? I read about this several years ago in New Scientist. It didn't go into any details about what the algorithm is, but it supposedly uses literally quantum principles.

  2. Just dumb on so many levels. Fusion is privatized too. It’s the government run stuff that is lagging.

  3. After some more thought, for r close to r_sol, the solar corona will likely obscure the Oort objects. The focal distance increases as the square of r ( D = r/tan[2GM/(r*c^2)] ~= (rc)^2/2GM ), so for r large enough that the corona isn’t a problem, we’d be well inside the Oort cloud. And then the question becomes: is using the SGL from there still better than just looking around locally, in the same region of the Oort cloud?

  4. There is a difference between a solar shade (such as the ones on JWST), which only needs to block the Sun, and can be relatively thick and heavy, vs a solar sail, which has to be as thin and light as physically possible.

  5. Excellent work MK.

    Because, googling Oort cloud and Kuiper Belt they are both predicted to have
    Trillions of objects a km or more in size.

    The asteroid belt is just a sample taster.

  6. However, we are not that far off from getting lots more exoplanet data, and telescopes that can get information on the atmospheres of Earth-sized planets. If it's oxygen, we send something like this.

  7. Nevertheless, when one looks at the James Webb Space Telescope solar shield, they do manage to do exactly that (with 5 or 6 spaced layers). I wouldn't write it off that easily.

  8. See my posts above. If I got the math right, the SGL can image the inner Oort cloud (but not the Kuiper belt, since it’s too close). But finding stuff in it is another matter entirely, since you’ll need to scan a wide area.

  9. Yah. Funny you should have mentioned this. Couple of days back, I thought to myself, “well … what’s really going to be needed is an extensive map of all the large-mass objects in the Kuiper and inner Oört belts, to be used for course-correction gravitational slingshot vectoring.”

    Supposedly, and who really knows, there are millions-to-trillions of chunks of primordial stuff out there, still gravitationally “attached” to Sol.  Named after the famous guys.  Oört and Kuiper.

    heck, we can’t even find Planet X.  If it were the size of Neptune (3.78 R♁), with the same reflectance as Pluto (ρ 0.35), and at 2500 AU on its long aphelion, well … it’d be dâhmned dim. 1 photon per second arriving, here at Earth, per m² of telescope (in space, one presumes) looking for it.  Very dim indeed.

    And very slow moving, relative to the background of so-called fixed stars. i’m sure fairly easily overlooked, even if we knew kind-of-sort-of where it was supposed to be.  

    My own thinking is that there must certainly be a few thousands of objects, maybe up to a million, of sizes suitable for calling any one of them a “dwarf planet”. Whether they’d individually be ‘useful’ for gravitational sling-shot gambits, well … time would only tell.  

    PS: and no, the darn Sol Gravity Lens doesn’t help one squit in finding these darn things.  

    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅

  10. After a quick skim of , it seems the basic math is pretty simple:

    The angle of deflection is given by: Q = 4GM/(r*c^2), where M is the mass of the lensing body, and r is the distance from its center where the light passes.

    For an object at distance Dobj from the lensing body, the incoming angle Qin towards a point r “sideways” is simple trigonometry. From there, the light will bend by angle Q, which gives the outgoing angle Qout. Then it’s a bit more trig to find the focal distance Dimg.

    For the special case where Dobj = Dimg = D, Q = 2*Qin = 4GM/(r*c^2). So:
    Qin = 2GM/(r*c^2)
    D = r/tan(Qin) = r/tan[2GM/(r*c^2)]

    Taking r = Rsol and M = Msol, I get D ~= 1.65e14 m, which is 1100 AU. Surprisingly little.

    If this is true, we should be able to image the opposite side of the inner Oort cloud from a little inside the inner Oort cloud.
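    The commenter's special-case figure checks out numerically; a quick sketch with standard solar constants (note the commonly quoted ~550 AU focal distance assumes a source at infinity, where the full bend angle 4GM/(rc²) applies, which is why this symmetric Dobj = Dimg case comes out at roughly twice that):

    ```python
    import math

    G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
    M_sun = 1.989e30   # kg
    c = 2.998e8        # m/s
    R_sun = 6.957e8    # m, grazing-ray impact parameter
    AU = 1.496e11      # m

    # Special case D_obj = D_img: Q_in = 2GM/(r c^2), D = r / tan(Q_in)
    q_in = 2 * G * M_sun / (R_sun * c**2)
    D = R_sun / math.tan(q_in)
    print(f"D ~ {D:.2e} m = {D / AU:.0f} AU")  # ~1.6e14 m, ~1100 AU
    ```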

  11. —————————————————————————————————-
    part IV, working with a non-collimated solar sail instead, at max_g = 1.00 m/s^2
    netting count     = 438178.000000 ea
    len  per tether    = 3496.16 m
    force per tether (max) = 0.013600
    area per tether (max) = 2.72e-12 m^2
    dia. per tether (max) = 1.86097e-06 m
    tether volume, total  = 0.00950954 m^3 
    tether re-mass     = 33.3 kg  
    netting volume     = 0.0015 m^3  
    sum of solar sail work = 2.08e+14
    delta V … out to 227957.33 au 4
    velocity (v) = 227957.3 m/s (0.0760% of c) (48.12 AU/a)

  12. Agreed, all I meant was that if we have a target that checks all the boxes that the funding will appear for a plausible colonization effort. It will involve a lot of research, sure, but with a real target the funding becomes easier, and plans will be presented to do it in budgetary and technological means. People would love to have their own planet.
    If we are shooting blind, just to go to other solar systems not knowing what is there, the likelihood of funding is minuscule. Couple hundred years does sound about right, unless someone manages to increase Elon Musk’s life span anyway. Finding the planet first gives us a goal and incentive.

  13. vi integ_light_sail ; ./integ_light_sail 
    part I, setting up parameters
    sail density = 250.0 kg/m^3 (g/L and mg/cc)
    sail thicknes= 0.000001 m (1.00 um)
    tether dens. = 3500.0 kg/m^3 (g/L and mg/cc)
    teth modulus = 5000000000.0 Pa (5.0 GPa)     
    teth count  = 1000000             
    teth angle  = 45 deg           
    sail sp. mass= 0.000250 kg/m^2
    payload   = 2000 kg
    payload frac = 25.0%
    tether frac = 15.0%
    max accel  = 1.00 m/s^2
    sail reflect = 85.00%
    sail absorb = 0.02%
    sail temp mx = 400 K
    stefan-bolz = 5.67e-08
    absorb load = 1451.5 W/m^2
    illum sp.pwr = 9676800 W/m^2
    min Sol AU  = 0.0119 AU (for max allowed sail heating)
    total mass = 8000
    sail  mass = 4800
    tether mass = 1200
    max force  = 8000.00 N
    max sol power= 2398339664000.00 W
    sail area  = 19200000 m^2
    sail diameter= 4944 m  4.9 km
    min sol AU  = 0.1045 AU (for max accel)
    … yielding (next message)

  14. And just to be complete, in modeling, there actually were iteratively tested physics reasons for assuming 60% (sail), 15% (tether) and 25% (all the rest AKA “payload”).  

    To offer a useful amount of thrust (for the whole thing), the sail part needs to be really, really, really big.  And very light weight per unit area.  To capture and reflect the most light possible with the gossamer nano-cloth sail.  BUT… it’d be of no use at all, if the captured photon force weren’t transferred to a payload.  The telescope. The transmitter to Earth, to get the data collected. The guidance hooz-i-dongers to make sure El Sail ends up precisely where it is optimally positioned for imaging. Computational stuff. Bits and pieces. A superstructure.   

    Well, that takes distributed tethers.  My calcs show the tethers (and “balloon netting”) of distributing and collecting the force by itself is only about 3% of the total. But the motors, actuators, and control system to yaw-the-job (loosely, of course) will call for the rest of the mass fraction.

    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅

  15. It is the usual problem, when working with things that have to be astoundingly light weight just to work at all.  Might not seem related, but think about aircraft.  

    Not only does every pound-or-kilogram of payload … need lifting, but every bit requires a superstructure strong enough to safely hold that kilogram, with all the buffeting and torquing that’s going to happen on any conceivable trip, over the life of the airplane. And, in turn, every last kilogram of airframe requires more kg of airframe to support the airframe itself.  Surely a diminishing exponential growth curve, but grow it does.  

    And so it does to carry fuel. Kilograms that require a bigger airframe, which requires a bigger airframe, which requires more fuel, which … arr … ggg … hhh… !!!

    By the grace of the Physics Gods, it turns out that this is all quite readily model-able using iterative computer programming techniques. Complex, but not hard. The biggest issue I ran into when modeling the solar-sail-to–550-AU-with-decent-sized-telescope is that even with a 50% sail, 25% tether and 25% payload design point (by mass fraction), every additional kilogram makes the whole thing pretty preposterously big.  

    Gods of Physics…
    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅

  16. And time is a resource as well. We can wait decades to make modest investments in this kind of technology, but the transit time is significant. Improved technology may slice away at the delay in arrival time, but getting to 0.1c is already pretty fast. Assuming we don’t invent antimatter drives in the mean time delaying this kind of mission is delaying the insights as well.

  17. I have seen a proposal to use a planetary atmosphere as a similar sort of lens. So we could have a Terrascope or a Jovescope.

  18. A few things follow:

    1. For any given M and r, the amount of bending is only a function of m (the photon energy, in our case). Same m => same force => same amount of bending.

    2. If the amount of bending is the same, then the closer the object is, the further out its image is (the further out it’ll come into focus). The closer the object is to the minimum focal distance (i.e. the closer it is to ~500 AU from the Sun), the closer out to infinity its image will be focused. And vice versa: the closer the object is to infinity, the closer to the Sun it becomes focused. Other stars are close enough to infinity that there isn’t a big difference between their image distance.

    3. That means that anything inside of the minimal focal distance can’t be focused at all. But in more practical terms, any objects that are “too close” will have their image focused too far.

    The inner Oort cloud is 2000-5000 AU, which is likely too close to 500 AU to be practically observed via the SGL. The focus will be too far out. If we’re lucky, we may be able to view the outer Oort cloud 20,000-50,000 AU, from about the opposite end of the outer Oort cloud. This is where the math comes in to find the actual numbers…

    BUT, you should be able to view another star’s inner Oort cloud in pretty good detail, by using both the stars’ gravity lenses.

  19. Without actually knowing the answer, here’s what I can guesstimate:

    We know that the less massive an object is, the further out its gravitational focus. Conversely, the more massive, the closer the focus. We also know the focus “point” isn’t a point, but a line segment. This image gives an important clue:

    As we can see, the further a group of photons passes from the heavy object, the further out they are focused.

    All of this is a result of the gravity force: F = GMm/r^2. (Roughly speaking, you can treat photons as having some small equivalent mass m, due to their energy.) The bigger M is => the greater the gravity force => the more the light bends => the closer it focuses. And the smaller r is => the greater the gravity force => the more the light bends => the closer it focuses.

    Conclusion in the reply… (out of room here)

  20. it would be interesting to use the solar sail mass as propellant mass for deceleration when reaching the target point. However, the engine+reactor, and the whole contraption might become too heavy / too big.

  21. It would be possible to detect light pollution from alien cities with this sensitivity ? 🙂

  22. In my opinion seeing pretty good image of exoplanets with its continents and so on is priceless. Imagine just inspiration people will get and would want to go there, study science, work hard. That is priceless. To see another Gaia like habitable exoplanet is just amazing. 

    I think it would be a good decision to invest in private fusion. Fusion rocket could shorten such a trip significantly, the key is to get fusion to work and public sector is just too slow.

  23. Hey GG, you can probably answer this one*:

    Yes, we know that gravitational lensing can give us apparently spectacular levels of resolution of other star systems. But what is the nearest objects they’ll work on?

    Namely can we send a handful (OK, several hands) of probes out to the lens points to get Google maps level images of all the stuff in the Oort cloud?

    Because then we’ve dropped the time horizon of when this will become actually useful from a century or two to… well almost right away. As in, by the time we have put telescopes out 600AU, then we are about ready to start work on sending probes out to 2000 AU, which is Oort territory.

    *Yeah, I looked up the grav lens maths myself, but I got scared.

  24. A planetary imaging mission won’t be able to image other stellar systems, unless you really luck out on their angular separation, but it could still do really good deep field astronomy in the same direction.

  25. If I were to speculate, due to the fuzzy nature of the gravitational lens, even when you’re off location you should be able to “see” the planet, just at much lower resolution. So you can use that lower resolution image to navigate to the precise focal point.

  26. The less massive the object, the further out its gravitational lens focal point. The Sun’s gravitational lens (SGL) is the closest one, and the most powerful one in our vicinity.

    We can still get good results much closer to home without a gravitational lens, but nowhere near as good as at the SGL.

  27. Ignorant question here, but is there some way to do this without going so far out? I’m dismayed by the long mission time and complexity. Could we do something similar with Jupiter or other massive nearby objects? I assume the gravitational magnification effect from Jupiter would be a lot less than the Sun’s, but maybe the telescopes wouldn’t need to be so far away? I don’t know the optical science; just guessing about that.
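    The question above has a neat back-of-envelope answer. For light grazing a body's limb at radius R, general relativity gives a deflection of 4GM/(c²R), so the grazing rays cross the optical axis at roughly d = c²R²/(4GM). The sketch below (standard textbook formula, nominal values for the radii and masses) shows why Jupiter does not help:

```python
# Back-of-envelope: minimum focal distance of a body's gravitational lens.
# Light grazing the limb at radius R is bent by 4GM/(c^2 R), so it crosses
# the axis at roughly d = c^2 R^2 / (4 G M).
G  = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c  = 2.998e8        # speed of light, m/s
AU = 1.496e11       # astronomical unit, m

def focal_au(radius_m: float, mass_kg: float) -> float:
    """Minimum gravitational-lens focal distance, in AU."""
    return c**2 * radius_m**2 / (4 * G * mass_kg) / AU

print(f"Sun:     {focal_au(6.96e8, 1.989e30):7.0f} AU")   # ~548 AU
print(f"Jupiter: {focal_au(7.15e7, 1.898e27):7.0f} AU")   # ~6060 AU
```

So Jupiter's lens, being far less massive (even relative to its smaller radius), focuses roughly ten times farther out than the Sun's, around 6,000 AU. Using a planet instead of the Sun makes the trip longer, not shorter, which is exactly the point made in comment 26.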

  28. The MOST important question it could answer, really, is validation of the range-of-planet-forms found in other systems; to give much more accurate ‘legs’ to some of the Drake Equation’s mostly-just-wild-guessed parameters.

    THEN, the next finding might be to determine the probabilities of finding what look to be interesting rocky planets with oceans, not of methane but of water. Until we actually venture to worlds covered with what seems to be life-but-not-based-on-water, well … water is kind of a given. And water implies clouds, implies oceans, implies all sorts of other things that have to be “parameters that work” of known statistical value.

    AND THEN… comes spectrographic research of the likely-very-few planets that seem to have “all the right stuff” going for them. Their positions will be very well known, and sending more serious probes to the SGL point to image them with detailed spectrographic analysis will be warranted. Millions searched, millions imaged, dozens to hundreds as follow-up candidates.

    BY THIS TIME, Science (say, 100–200 years away) will have dealt with the prodigious amounts of power needed to transport large masses (think people, computers, AIs, all foodstuffs, factories, etc.) at sub-luminal speed. 5% to 10% of ‘c’. That’ll be enough. Then mankind will earn a place in the stars.


    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅

  29. Terra-Luna-Mars, but I think you’re right on the money. Or perhaps “while we’re setting up competing satellite internet providers” or something like that, as it looks like they want to rideshare on commercial projects more than the NASA ones.

  30. While it’s true that TANSTAAFL, the history of tech development, and of development in general, is one of finding lunches that were already paid for by someone or something else but not currently being eaten. And grabbing them, if not for free, then for cents on the dollar.

    Good old fire. Grab the energy stored by plants over decades and use it for near free to keep your cave warm for a night.

    Fossil fuels? Paid for by millions of years of ancient life forms. Currently sitting around making the geology untidy and not being used by anyone (except maybe a handful of particularly inventive microbes).

    Nuclear power? Paid for by killing entire stars to make the heavy nuclei, but now capable of being pulled out using WW2 level tech.

    Sailing itself (both solar and old fashioned wind). Just hang some textiles out to dry and get nature to replace multiple decks of oarsmen.

  31. I think the proposal is, “this is potentially so darn cheap, that WHILE we’re investing trillions on the Terra-Luna-Mars circuit, we can also do this for like a penny on a buck, and get outstanding science at the same time.” ⋅-=≡ GoatGuy ✓ ≡=-⋅

  32. Nice, but wouldn’t work: the sailcloth is so thin that the incident solar radiation heats it to its radiative equilibrium temperature in milliseconds. Once it reaches equilibrium (where it radiates its heat to the depths of space as infrared), at that temperature it’ll stay. Equilibrium.

    Also, double sailing doubles the sail mass without doubling the force of the solar radiation pressure.  So, acceleration is diminished.  

    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅

    TANSTAAFL … There Ain’t No Such Thing As A Free Lunch
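    The equilibrium claim in comment 32 is easy to sketch: absorbed sunlight balances infrared emission from the sail's two faces, αS(r) = 2εσT⁴. The absorptivity of 0.15 matches the 85%-reflective sail discussed downthread; the infrared emissivity of 0.5 is an assumed illustrative value, not a material datum:

```python
# Radiative-equilibrium temperature of a thin solar sail (sketch).
# Balance: alpha * S(r) = 2 * eps * sigma * T^4  (radiates from both faces).
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S_1AU = 1361.0      # solar constant at 1 AU, W/m^2

def sail_temp_K(r_au: float, absorptivity: float = 0.15,
                emissivity: float = 0.5) -> float:
    """Equilibrium temperature at r_au; emissivity is an assumed value."""
    flux = S_1AU / r_au**2
    return (absorptivity * flux / (2 * emissivity * SIGMA)) ** 0.25

print(f"{sail_temp_K(1.0):.0f} K at 1 AU")      # ~245 K
print(f"{sail_temp_K(0.11):.0f} K at 0.11 AU")  # ~739 K
```

Since flux falls as 1/r², the equilibrium temperature scales as r^(-1/2): the same sail that sits near 245 K at 1 AU runs several times hotter at a 0.11 AU perihelion, which is why the flip-and-cool scheme in the next comment doesn't buy anything once equilibrium is reached.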

  33. You could put two sails behind each other in tandem. When one sail comes close to overheating, you flip the spacecraft and the second sail is used while the first cools down. Anyway, with weight and packaging being an issue this is a trade space, but it could solve your overheating problem.

  34. My understanding is that it is a one-planetary-system mission, since adjusting the angular position 300,000 light-seconds out from Sol is, shall we say, difficult. But that is the point of small-sats: large fleets may be constructed and sent to different points out beyond the Kuiper belt to point at different star systems.

  35. I’m not qualified to say whether this is feasible or not; I leave the skull work to smart people like GoatGuy. What I can say definitively is that, if you want the funds to develop and launch ships to other solar systems, this is where you start. It is hard to get the gargantuan funding for something like that without a defined target. Show us another planet close to Earthlike, one prime for habitation with the right temperature, atmospheric composition, gravity, etc., and funding will appear like magic.

  36. Hmmm… I suppose.

    I’ve been noodling with a passive (not-illuminated-by-a-laser) 85% reflective, 250 mg/m² sail concept, with 250 K max heating, and significant mass for sail-to-payload tethers. Integration by parts may be inelegant compared to classical Eulerian calculus, but hey… Goats are inelegant. Accurate, tho!

    I come up with 25 AU/year with a minimum solar approach of 0.11 AU, and a maximum useful acceleration distance of 2.5 AU. Losing, or not losing, sails is merely a matter of deciding (like the imagined mission per the article) whether to ‘pick up fuel’ to decelerate significantly “out there”, for SGL loitering and extended optical metrology. Maybe, but others here have chimed in that “you’ll have 20 years of observations” even if the thing races past the 550 AU point.


    Feasible? It all still depends on whether unicorn nanofibers can be woven into 85% reflective cloth AND equally amazing nanofibers can be spun to create the tens-of-thousands of tethers for the 8 km diameter solar sail.  

    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅
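    The 25 AU/year figure above can be cross-checked with a crude energy integral. Both photon thrust and solar gravity fall off as 1/r², so integrating v dv = a_net(r) dr from perihelion outward gives a closed form. The sketch below counts sail mass only (no payload or tethers) and starts from rest at perihelion, so it is an optimistic upper bound, not a reconstruction of GoatGuy's actual trajectory:

```python
# Crude upper bound on solar-sail coast-out speed (sail mass only).
# a_net(r) falls as 1/r^2, so v_inf^2 = 2 * a_net(1 AU) * AU^2 / r_perihelion.
import math

AU   = 1.496e11       # m
YEAR = 3.156e7        # s
S1   = 1361.0         # solar constant at 1 AU, W/m^2
C    = 2.998e8        # speed of light, m/s
GM   = 1.327e20       # Sun's gravitational parameter, m^3/s^2

sigma_areal  = 2.5e-4   # sail areal density, kg/m^2 (250 mg/m^2, per the comment)
reflectivity = 0.85     # per the comment; (1 + r) factor for specular reflection

a_thrust_1au = (1 + reflectivity) * S1 / C / sigma_areal  # m/s^2 at 1 AU
a_grav_1au   = GM / AU**2                                 # solar gravity at 1 AU
a_net_1au    = a_thrust_1au - a_grav_1au

r0    = 0.11 * AU                                 # perihelion from the comment
v_inf = math.sqrt(2 * a_net_1au * AU**2 / r0)     # m/s, starting from rest
print(f"{v_inf / (AU / YEAR):.0f} AU/yr")         # ~58 AU/yr, sail alone
```

The sail-only bound comes out around 58 AU/yr; loading the same sail with payload and tens of thousands of tethers raises the areal density and pulls the number down toward the quoted 25 AU/yr, so the two estimates are consistent.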

  37. With multiple-point imaging, you can get much further once you move into the multi-pixel arena.

  38. It wasn’t clear to me whether you could easily re-configure the system to view multiple targets over the years (decades). Certainly this can’t be a one-exoplanet mission?

  39. I very much want to be excited by exoplanets and their viewing, discovery, understanding, and eventual mission planning. But creating GEO/LEO infrastructure, placing bases and mining on the Moon, accessing and exploiting asteroids, planning Mars colonization routes, and overall just populating the cis-lunar system is going to take massive resources. So perhaps the more pure-sciencey, early-humanity-scattering thinking about exoplanets needs to be aligned with the searches, launch areas, and viewing-and-sensing equipment within that zone. Sciencey stuff, exploitation stuff, touristy stuff, etc., may all mutually reinforce each other if they all integrate into a larger, kind-of-self-contained LEO/GEO/Lagrange/Moon-surface catchment area… just my 2c.
