Stanford researchers and collaborators in Korea have developed a new architecture for OLED – organic light-emitting diode – displays that could enable televisions, smartphones and virtual or augmented reality devices with resolutions of up to 10,000 pixels per inch (PPI). Resolutions of new smartphones are around 400 to 500 PPI. This technology was adapted from existing designs for electrodes of ultra-thin solar panels.
Such displays could provide stunning images with true-to-life detail for virtual reality.
The new “metaphotonic” OLED displays would also be brighter and have better color accuracy than existing versions, and they’d be easier and more cost-effective to produce as well.
This research aims to offer an alternative to the two types of OLED displays that are currently commercially available. One type – called a red-green-blue OLED – has individual sub-pixels that each contain only one color of emitter. These OLEDs are fabricated by spraying each layer of materials through a fine metal mesh to control the composition of each pixel. However, they can only be produced at small scales, such as the displays used in smartphones.
Larger devices like TVs employ white OLED displays. Each of these sub-pixels contains a stack of all three emitters and relies on filters to determine the final sub-pixel color, an approach that is simpler to fabricate. But since the filters reduce the overall output of light, white OLED displays are more power-hungry and prone to having images burn into the screen.
The crucial innovation behind both the solar panel and the new OLED is a base layer of reflective metal with nanoscale (smaller than microscopic) corrugations, called an optical metasurface. The metasurface can manipulate the reflective properties of light and thereby allow the different colors to resonate in the pixels. These resonances are key to facilitating effective light extraction from the OLEDs.
In lab tests, the researchers successfully produced miniature proof-of-concept pixels. Compared with color-filtered white-OLEDs (which are used in OLED televisions) these pixels had a higher color purity and a twofold increase in luminescence efficiency – a measure of how bright the screen is compared to how much energy it uses. They also allow for an ultrahigh pixel density of 10,000 pixels-per-inch.
Science – Metasurface-driven OLED displays beyond 10,000 pixels per inch
Organic light-emitting diodes (OLEDs) have found wide application in high-resolution, large-area televisions and the handheld displays of smartphones and tablets. With the screen located some distance from the eye, the typical number of pixels per inch is in the region of hundreds. For near-eye microdisplays—for example, in virtual and augmented reality applications—the required pixel density runs to several thousand pixels per inch and cannot be met by present display technologies. Joo et al. developed a full-color, high-brightness OLED design based on an engineered metasurface as a tunable back-reflector. An ultrahigh density of 10,000 pixels per inch readily meets the requirements for the next-generation microdisplays that can be fabricated on glasses or contact lenses.
Optical metasurfaces are starting to find their way into integrated devices, where they can enhance and control the emission, modulation, dynamic shaping, and detection of light waves. In this study, we show that the architecture of organic light-emitting diode (OLED) displays can be completely reenvisioned through the introduction of nanopatterned metasurface mirrors. In the resulting meta-OLED displays, different metasurface patterns define red, green, and blue pixels and ensure optimized extraction of these colors from organic, white light emitters. This new architecture facilitates the creation of devices at the ultrahigh pixel densities (over 10,000 pixels per inch) required in emerging display applications (for instance, augmented reality), and can make use of scalable nanoimprint lithography. The fabricated pixels also offer twice the luminescence efficiency and superior color purity relative to standard color-filtered white OLEDs.
SOURCES- Stanford, Science
Written By Brian Wang, Nextbigfuture.com
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
23 thoughts on “Ultrahigh-res OLED Displays With Over 10,000 Pixels Per Inch”
The digital lightfield display tech was already solved (7 years ago) by Nvidia, up to the point where it can scan for eye curvature & make it such that you could have the equivalent of 20/15 vision (of the lightfield scene) without glasses on, while wearing the display.
Their problem was market fit & purchasers/manufacturers of their (working) prototype digital lightfield technology.
If they can make such a screen, let's say for use in a smartphone, then why can't they link up a couple of fancy mirrors and lenses, and use a smartphone to power a 75 inch television? We may be seeing the rebirth of the projection TV era.
At a certain point, the consumer says, "This is good enough". There are places that sell 8K televisions, but few people buy them. I just shopped around for a new TV and I saw some priced at over $10,000. I ended up spending $300 on a 50-inch Vizio because it was good enough.
Make a smartphone with only 10% of the smarts, but a really nice screen. Blow it up to 65 inches and you have a really nice projector TV for cheap.
It is a big deal. Streaming is out of the question, and there are no storage devices that could fit entire movies on personal computers at that resolution either.
And better not talk about GPU rendering (games, CAD, …) at that resolution.
25.4 mm/inch ÷ 10,000 pixels per inch = 0.00254 mm per pixel or 2.54µm per pixel
They actually do have cameras with pixels that are even smaller.
Your pupil is only 2-8 mm across depending on lighting conditions. The only light you can ever usefully see is what makes it in through that opening (a small amount of light will diffuse through tissues and make it to the retina anyway, e.g. staring at the sun with eyes closed). The ultimate goal is to properly format <10 mW of light and feed it directly through that opening; this light should ideally appear to come from the right focal length (digital lightfield), but that will probably take some time to get right.
You want about 120 pixels per degree. That's how well humans with very good vision can see high-contrast lines (e.g. dark hairs against a white background). You want this display to cover about 100×100 degrees field of view and appear to come from at least a metre away for comfortable use. That's a resolution of 12000×12000. Your eye doesn't actually have that level of acuity anywhere except a tiny area near your fovea. Knowing where you are looking is necessary so that you can skimp wherever you are not. With 3 zones of resolution you get about a factor 100 savings in computational power needed to render this view. The bandwidth requirements and the performance requirements are much closer to 1440p. The bottleneck is not the performance required! The bottleneck is optics, very fast eye tracking, screens, compression protocols to handle multiple zones of different resolution, manufacturability, cost, etc.
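A quick Python sanity check of that "factor 100" claim. The zone sizes and pixel densities below are illustrative assumptions, not figures from the comment; only the 100×100° field and 120 px/degree full-acuity figure come from above.

```python
# Illustrative foveated-rendering budget with three concentric zones.
FULL_FOV_DEG = 100           # square field of view, per the comment above
FULL_DENSITY = 120           # px/degree needed for ~1 arc-minute acuity

# (field of view in degrees, rendered density in px/degree) -- assumed values
zones = [
    (5,   120),  # fovea: tiny window at full acuity
    (30,   20),  # mid-periphery at reduced density
    (100,   8),  # far periphery, very coarse
]

full_res_pixels = (FULL_FOV_DEG * FULL_DENSITY) ** 2   # the 12000 x 12000 case
foveated_pixels = sum((fov * density) ** 2 for fov, density in zones)
savings = full_res_pixels / foveated_pixels

print(f"full: {full_res_pixels:,} px, foveated: {foveated_pixels:,} px")
print(f"savings factor: ~{savings:.0f}x")
```

With these (made-up but plausible) zones the savings lands right around the factor 100 the comment describes.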
Integral imaging. Put an array of microlenses in front of the screen and you have a 3D display.
I don't think 'the feed' is all that wicked. Just above, for an eye-proximal mounted emitter of foveal-quality resolution, over a 120° by 90° field, I computed it out at 39 megapixels. 39 times RGB is about 110 megapixels of color information.
Refreshing at 150 Hz or so … (which is on par with the retina's detection time constant in bright light), that works out to about 16 gigapixels per second … or so. If we assume what, 12 bit per-color levels … 1.5 bytes, then we're talking 24 GB/s, for the video feed.
Know what I mean? Hardly seems like THAT big of a deal. Already today, on single glass monomode fibers, we squirt over 6 GB/s per encoded wavelength. 4 wavelengths encode the 24 GB/s on a fiber that maximally 'shaved down' might be on the order of a very fine human hair in width. Very flexible, that!
⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
⋅-=≡ GoatGuy ✓ ≡=-⋅
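The same arithmetic, run without the intermediate rounding (all inputs are the figures quoted in the comment above):

```python
# Back-of-the-envelope check of the video-feed estimate above:
# 7,200 x 5,400 pixels, 3 subpixels (RGB), 150 Hz refresh,
# 1.5 bytes (12 bits) per subpixel.
H_PIX, V_PIX = 7_200, 5_400
SUBPIXELS = 3                # R, G, B
REFRESH_HZ = 150
BYTES_PER_SUBPIXEL = 1.5     # 12-bit color depth

pixels = H_PIX * V_PIX                            # ~39 megapixels
subpixel_rate = pixels * SUBPIXELS * REFRESH_HZ   # color samples per second
bandwidth = subpixel_rate * BYTES_PER_SUBPIXEL    # bytes per second

print(f"{pixels/1e6:.1f} Mpx, {subpixel_rate/1e9:.1f} Gsamples/s, "
      f"{bandwidth/1e9:.1f} GB/s")
```

Without rounding it comes out near 26 GB/s, the same ballpark as the ~24 GB/s figure above (which used the rounded 110 Mpx).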
Tho' fractal is probably accurate, hierarchical coding might be a better term. At least the computer scientist in me finds cause to say so… ⊕1 back at yah.
Fractal image compression. It has the handy feature that it's resolution independent: You can locally decode the signal to whatever resolution you want, and as you exceed the resolution of the original source, it just invents details that "look right".
How will you feed these monsters? We can barely deal with 8K right now.
Very good explanation. Much appreciated!
10,000×10,000 pixels per square inch would be comparable to the resolution of the human eye. If they can make cameras with that sort of resolution, it could be used in bionic eyes.
I wonder about the application to high speed resin printers?
Indeed. But 'in front of one's eye' as part of a VR headset is a very real application and, I'm guessing, very likely to become high-demand in the not very distant future. Turns out that at quite-good surround-field resolution (more than 200° horizontally and about 120° vertically), the actual planar emitter would need to be less than 30 mm by 24 mm. Pretty tame.
⊕1 'cuz its funny as well as true. But mirthfully silly.
See my reply to rick . c . above. It takes the hand-waving away and replaces it with hard numbers, which can also be scaled per a team's actual design spec.
At the very height of our lives (say between ages of 5 years old and perhaps 25 or 35), the very best human eyes have a foveal resolution of about 1 arc-minute = ¹⁄₆₀°
Mathematically, combined with one's field of view, this translates readily into pixels-per-millimeter (or inch) …
So, say one wants about a 120° field-of-view, at a ¹⁄₆₀° resolution. Well, work it backward:
| horiz pixels = 120° / (¹⁄₆₀°)
| horiz pixels = 7,200
| vert pixels = 90° / (¹⁄₆₀°)
| vert pixels = 5,400
| total = h × v
| total = 38,880,000 ≈ 39,000,000
Another way to resolve this (bad pun) is to then divide the horizontal and vertical pixels by the 10,000 px/in (400 px/mm):
| horiz mm = 7,200 ÷ 400
| horiz mm = 18 mm
| vert mm = 5,400 ÷ 400
| vert mm = 13.5 mm
So… those would be the emitter chip sizes, at the article's stated pixel-packing resolution. Coming up with a compact planar-to-curvilinear focussing arrangement for the closely positioned eyeball … ah, that's a challenge.
⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
⋅-=≡ GoatGuy ✓ ≡=-⋅
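The emitter-size arithmetic above, redone without the 400 px/mm rounding (10,000 PPI is actually ~393.7 px/mm; the pixel counts are the 7,200 × 5,400 derived above):

```python
# Emitter chip size for a 120 x 90 degree field at 1 arc-minute pitch
# (60 px/degree), packed at the article's 10,000 pixels per inch.
MM_PER_INCH = 25.4
PPI = 10_000                    # pixel density from the article
H_PIX, V_PIX = 7_200, 5_400     # 120 x 90 degrees at 60 px/degree

px_per_mm = PPI / MM_PER_INCH   # ~393.7 px/mm, not the rounded 400
width_mm = H_PIX / px_per_mm
height_mm = V_PIX / px_per_mm

print(f"emitter: {width_mm:.1f} mm x {height_mm:.1f} mm")
```

That gives roughly 18.3 mm × 13.7 mm, essentially the same 18 × 13.5 mm chip the comment arrives at.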
Fluffy will indeed appreciate it, engaging in VR hunting from the comfort of the living room.
Great. But if the intention is to ever use them for any sort of macro-screen display, I don't think there's a human on this planet that could appreciate the resolution. It'll become one of those overpriced features we can't see a difference on, literally.
Augmented reality for cats.
For VR, where your eye is a couple of cm from the screen, you want very high resolution.
For most other uses it's kind of overkill.
What is the application for displays with resolutions way above the human eye?
MicroLED with 1 million nits and 5,000 PPI can be had today. 2 million nits and 10,000 PPI has already been demonstrated. These are monochrome microdisplays made on a single silicon die (just like a processor), but RGB(W) is coming. I can't see OLED making sense for microdisplays.