Setting its sights on evolving graphics processing units for a growing universe of generative AI, Intel has announced several research papers outlining efforts it is pursuing in what observers say is a multibillion-dollar opportunity for the semiconductor giant in the coming years.
Intel is presenting seven papers at three conferences, covering advances in computer graphics.
The first papers were formally presented last month at the joint conference of the High Performance Graphics (HPG) forum and the Eurographics Symposium on Rendering, held at Delft University of Technology in the Netherlands. The remaining papers will be discussed at a conference held by SIGGRAPH (the Special Interest Group on Computer Graphics and Interactive Techniques) in August.
A key focus is improving historically compute-heavy graphics-rendering processes.
The papers discuss two processes in particular, ray tracing and path tracing. Both are used to render realistic images, especially in gaming, where accurately representing the physics of light is critical for natural-looking imagery.
Ray tracing applies algorithms to track the trajectory of rays of light and calculate color values, reflections and shadows. The processing power required for real-time rendering is so great that frame rates often take a noticeable hit.
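To make the idea concrete, here is a minimal Python sketch, not drawn from Intel's papers, of the core ray-tracing step: firing a ray at a sphere and shading the hit point with simple diffuse lighting. All scene values are illustrative placeholders.

```python
# A minimal sketch (not Intel's code) of one ray-tracing step: intersect a
# camera ray with a sphere and shade the hit point with Lambertian lighting.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance to the nearest sphere hit, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed normalized (so a = 1)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade(point, normal, light_pos):
    """Lambertian shading: brightness falls off with the angle to the light."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    norm = math.sqrt(sum(v * v for v in to_light))
    to_light = tuple(v / norm for v in to_light)
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# One ray fired from the camera through a pixel, toward a sphere 5 units away:
t = intersect_sphere((0, 0, 0), (0, 0, 1), center=(0, 0, 5), radius=1.0)
if t is not None:
    hit = (0, 0, t)
    normal = (0, 0, -1)  # surface normal at the point facing the camera
    print("pixel brightness:", shade(hit, normal, light_pos=(2, 2, 0)))
```

A full renderer repeats this for millions of rays per frame, which is where the frame-rate cost comes from.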
Path tracing can require even heavier processing. It follows many rays of light, tracking their paths as they bounce off surfaces and interact with lights and other scene elements. A statistical technique known as Monte Carlo integration averages the contributions of these random paths to determine accurate color and shading values.
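The averaging itself is simple to illustrate. The Python sketch below estimates an integral by sampling it at random points, which is the same statistical machinery a path tracer uses per pixel; the cosine-weighted integrand is a stand-in for a light path's contribution, not code from the papers.

```python
# A minimal sketch of Monte Carlo integration: average many random samples
# of an integrand instead of evaluating it exhaustively. Path tracers apply
# this idea to the light arriving at each pixel.
import math
import random

def monte_carlo_estimate(f, a, b, num_samples):
    """Estimate the integral of f over [a, b] by uniform random sampling."""
    total = sum(f(random.uniform(a, b)) for _ in range(num_samples))
    return (b - a) * total / num_samples

# Stand-in integrand: incoming light weighted by the cosine of the incident
# angle, as in the rendering equation's diffuse term.
integrand = lambda theta: math.cos(theta) * math.sin(theta)

# The exact integral over the hemisphere's polar angle is 0.5; the estimate
# converges toward it as the sample count grows.
print(monte_carlo_estimate(integrand, 0.0, math.pi / 2, 10_000))
```

The catch is noise: accurate pixels need many samples, which is why smarter sampling strategies translate directly into speed.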
Intel says these tracing methods can be accomplished more efficiently. One of its papers, "Sampling Visible GGX Normals with Spherical Caps," describes a new approach to sampling microfacet normal directions on the hemisphere that achieved "systematic speed-ups in our benchmarks."
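For the curious, the sketch below shows how spherical-cap sampling of visible normals can work, based on our reading of the paper's published construction; the simplification to isotropic roughness, the variable names and the example values are ours, not Intel's.

```python
# A hedged sketch of the spherical-cap idea: rather than sampling the visible
# hemisphere with trigonometric rejection steps, draw a point uniformly from
# a spherical cap determined by the view direction, then form the halfway
# vector. Based on our reading of Dupuy et al.; details here are illustrative.
import math
import random

def sample_vndf_spherical_cap(wi, alpha):
    """Sample a visible GGX normal for view direction wi (local frame)."""
    # Warp the view direction to the hemisphere (unit-roughness) configuration.
    x, y, z = wi[0] * alpha, wi[1] * alpha, wi[2]
    inv = 1.0 / math.sqrt(x * x + y * y + z * z)
    wi_std = (x * inv, y * inv, z * inv)

    # Draw a point uniformly on the spherical cap z in (-wi_std.z, 1].
    u1, u2 = random.random(), random.random()
    phi = 2.0 * math.pi * u1
    cz = (1.0 - u2) * (1.0 + wi_std[2]) - wi_std[2]
    s = math.sqrt(max(0.0, 1.0 - cz * cz))
    cap = (s * math.cos(phi), s * math.sin(phi), cz)

    # The halfway vector of the cap sample and the view direction is a
    # visible normal in the hemisphere configuration; warp it back.
    hx, hy, hz = cap[0] + wi_std[0], cap[1] + wi_std[1], cap[2] + wi_std[2]
    hx, hy = hx * alpha, hy * alpha
    inv = 1.0 / math.sqrt(hx * hx + hy * hy + hz * hz)
    return (hx * inv, hy * inv, hz * inv)

# Example: view direction 45 degrees off the surface normal, roughness 0.3.
wi = (math.sin(math.radians(45)), 0.0, math.cos(math.radians(45)))
print(sample_vndf_spherical_cap(wi, alpha=0.3))
```

Because this sampling runs for every bounce of every path, even a small saving per sample compounds into the systematic speed-ups the paper reports.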
Another paper reveals a 500% speed improvement in renderings of "glittery" objects such as speckled car paints, snow, molded plastics and running water. "Real-Time Rendering of Glinty Appearances using Distributed Binomial Laws on Anisotropic Grids" explains that current approaches achieve stunning realism but "come at a very high cost" in terms of processing power and speed.
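The statistical shortcut behind such methods can be sketched in a few lines: rather than simulating every sparkle-sized facet in a pixel, model the number of facets that happen to glint with a binomial law. The sketch below illustrates the general idea only; its parameters are invented, not taken from the paper.

```python
# A hedged illustration of the statistics behind glint rendering: a pixel's
# footprint covers many microscopic facets, and the count that reflect the
# light toward the camera can be modeled with a binomial distribution rather
# than simulated facet by facet. Parameters below are purely illustrative.
import random

def count_glints(num_facets, reflect_probability):
    """Sample how many of num_facets glint, via a binomial distribution."""
    # Direct per-facet simulation is shown for clarity; a real-time renderer
    # would sample the binomial law directly to avoid this loop entirely.
    return sum(random.random() < reflect_probability for _ in range(num_facets))

# One pixel covering 10,000 sparkle-sized facets, each with a 0.05% chance
# of reflecting the light source into the camera this frame:
glints = count_glints(num_facets=10_000, reflect_probability=0.0005)
print(f"visible sparkles this frame: {glints}")
```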
In a paper to be discussed at the August SIGGRAPH conference, Intel will review advances in neural graphics, an approach the company says "is revolutionizing the graphics field." It is used to quickly scale high-quality graphics across games and movies.
"New neural level of detail representation achieves 70%–95% compression compared to 'vanilla' path tracing," Intel reported.
Other papers examine improvements in renderings of translucent materials and "sampling photon trajectories in difficult illumination scenarios."
Ultimately, Intel hopes significant progress in processing approaches will allow users to enjoy realistic imagery in real time without requiring high-power GPUs.
"The new building blocks presented at this year's conferences, along with our wide offering of GPU products and scalable cross-architecture rendering stack, will help developers and businesses to do more efficient rendering of digital twins, future immersive AR and VR experiences, as well as synthetic data for sim2real AI training," Intel stated in its blog.
Intel plans to make its work open source.
The Intel papers have appeared throughout June on the preprint server arXiv.
More information: Jonathan Dupuy et al, Sampling Visible GGX Normals with Spherical Caps, arXiv (2023). DOI: 10.48550/arxiv.2306.05044
Real-Time Rendering of Glinty Appearances using Distributed Binomial Laws on Anisotropic Grids, arXiv (2023). DOI: 10.48550/arxiv.2306.05051
Laurent Belcour et al, One-to-Many Spectral Upsampling of Reflectances and Transmittances, arXiv (2023). DOI: 10.48550/arxiv.2306.11464
Intel's blog: www.intel.com/content/www/us/e … ative-ai-update.html
Journal information: arXiv
© 2023 Science X Network