3D games and CGI just received a massive boost: a new 3D rendering technique promises 100X faster rendering of metallic and shiny surfaces. Computer scientists at the University of California San Diego developed a method that improves how computer graphics software reproduces the way light interacts with extremely small surface details, called glints, on a wide range of materials, including metallic car paints, metal finishes for electronics and injection-molded plastic finishes.
The method developed by Ravi Ramamoorthi and colleagues is 100 times faster than the current state of the art. They are presenting their work this month at SIGGRAPH 2016 in Anaheim, California. The method requires minimal computational resources and can be used in animations; current methods can reproduce these so-called glints only in still images.
Iron Man’s suit. Captain America’s shield. The Batmobile. These all could look a lot more realistic thanks to a new algorithm developed by a team of U.S. computer graphics experts.
Paper teaser caption: A scratched stainless steel kettle rendered with our method (left). The kettle is lit by small area lights and an environment map, with surface microstructure modeled using a high-resolution normal map. Our method uses millions of 4D Gaussians to fit the position-normal distribution induced by the normal map; this lets us approximate the normal distribution function of a given pixel almost as accurately as Yan et al., but our evaluation is two orders of magnitude faster. Moreover, our technique can integrate area and environment lighting, and multiple importance sampling, which was not practical with Yan et al. Our rendering takes only 1.4× longer than a standard microfacet BRDF rendering (right).
Position-Normal Distributions for Efficient Rendering of Specular Microstructure
Accurate rendering of a material’s appearance has always been a critical feature of computer graphics, Ramamoorthi said. It has become even more important with the advent of today’s ever-higher display resolutions.
The standard approach to modeling the way surfaces reflect light assumes that the surfaces are smooth at the pixel level. But that’s not the case in the real world for metallic materials as well as fabrics, wood finishes and wood grain, among others. As a result, with current methods, these surfaces will appear noisy, grainy or glittery.
“There is currently no algorithm that can efficiently render the rough appearance of real specular surfaces,” Ramamoorthi said. “This is highly unusual in modern computer graphics, where almost any other scene can be rendered given enough computing power.”
The researchers’ solution was to break each pixel of an uneven, intricate surface down into patches covered by thousands of light-reflecting facets smaller than a pixel, called microfacets. For each microfacet, the team then computed the vector perpendicular to the surface at that point, called the point’s normal. The normal is key to figuring out how light reflects off a surface.
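To make the idea of per-microfacet normals concrete, here is a minimal Python/NumPy sketch (the toy heightfield, sizes, and names are illustrative, not from the paper): it derives a unit normal for every texel of a microstructure map via central finite differences.

```python
import numpy as np

# A hypothetical toy heightfield standing in for a high-resolution
# surface microstructure map (values and resolution are made up).
rng = np.random.default_rng(0)
height = rng.normal(scale=0.02, size=(256, 256))

# Per-texel slopes via central differences (wrapping at the edges
# for simplicity), then the standard heightfield normal (-dx, -dy, 1).
dh_dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) / 2.0
dh_dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) / 2.0
normals = np.dstack([-dh_dx, -dh_dy, np.ones_like(height)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)  # unit length
```

Each entry of `normals` is the per-microfacet direction that the rest of the article's reflection reasoning operates on.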
For any specific computer-generated scene, a microfacet on a surface reflects light back to the computer’s virtual camera only if its normal lies exactly halfway between the ray from the light source and the ray that bounces back toward the camera. The computer scientists calculated the normals’ distribution within each patch of microfacets, then used that distribution to determine which normals were in that halfway position.
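The "halfway" condition above is the half-vector test from standard microfacet theory. A minimal sketch (the example directions are illustrative):

```python
import numpy as np

def half_vector(wi, wo):
    """Unit vector halfway between the incoming light direction wi and
    the outgoing (camera) direction wo. A microfacet mirror-reflects wi
    into wo exactly when its normal equals this half vector."""
    h = np.asarray(wi, dtype=float) + np.asarray(wo, dtype=float)
    return h / np.linalg.norm(h)

# Example: light arriving straight down the z axis, camera at 45 degrees.
wi = np.array([0.0, 0.0, 1.0])
wo = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
h = half_vector(wi, wo)
```

A renderer then asks how many microfacet normals in the pixel's footprint line up with `h`; that count (really, a density) determines the glint's brightness.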
The key to the algorithm’s speed is its ability to approximate this normal distribution at each surface location, called a “position-normal distribution.” This enables the algorithm to easily compute the amount of net reflected light with a speed that is orders of magnitude faster than previous methods. Using a distribution rather than trying to calculate how light interacts with every single microfacet resulted in considerable time and computer power savings.
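A rough sketch of that idea in Python, under heavy simplifying assumptions (isotropic Gaussians and made-up parameters; the paper's actual elements are general 4D Gaussians with full covariance and a more careful fit): the pixel's normal distribution is approximated by summing closed-form Gaussian weights instead of visiting individual microfacets.

```python
import numpy as np

# Illustrative only: each "element" is a 4D Gaussian with a mean
# position (u, v) in texture space, a mean normal (s, t) in the
# projected-normal plane, and isotropic standard deviations.
rng = np.random.default_rng(1)
n_elements = 10_000
pos_mean = rng.uniform(0.0, 1.0, size=(n_elements, 2))   # texture space
nrm_mean = rng.normal(scale=0.1, size=(n_elements, 2))   # normal space
sigma_p, sigma_n = 0.01, 0.05

def pndf(footprint_center, query_normal, footprint_sigma=0.05):
    """Approximate the position-normal distribution for one pixel:
    weight every element by how much it overlaps the pixel footprint
    (two Gaussians convolve in closed form) times how close its mean
    normal is to the queried half vector."""
    dp = pos_mean - footprint_center          # positional offsets
    dn = nrm_mean - query_normal              # normal-space offsets
    s2 = footprint_sigma**2 + sigma_p**2      # convolved variance
    w_pos = np.exp(-(dp ** 2).sum(axis=1) / (2.0 * s2))
    w_nrm = np.exp(-(dn ** 2).sum(axis=1) / (2.0 * sigma_n ** 2))
    return float((w_pos * w_nrm).sum() / n_elements)

density = pndf(np.array([0.5, 0.5]), np.array([0.0, 0.0]))
```

The key property this sketch shares with the paper's method is that the whole query is a closed-form sum over Gaussians, so no per-microfacet loop is ever needed.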
In addition to Ramamoorthi, study co-authors are: Ling-Qi Yan, from the University of California, Berkeley; Milos Hasan from Autodesk and Steve Marschner from Cornell University.
Specular BRDF rendering traditionally approximates surface microstructure using a smooth normal distribution, but this ignores glinty effects, easily observable in the real world. While modeling the actual surface microstructure is possible, the resulting rendering problem is prohibitively expensive. Recently, Yan et al. and Jakob et al. made progress on this problem, but their approaches are still expensive and lack full generality in their material and illumination support. We introduce an efficient and general method that can be easily integrated in a standard rendering system. We treat a specular surface as a four-dimensional position-normal distribution, and fit this distribution using millions of 4D Gaussians, which we call elements. This leads to closed-form solutions to the required BRDF evaluation and sampling queries, enabling the first practical solution to rendering specular microstructure.
SOURCES: University of California San Diego
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.