
This AI Paper from NVIDIA Proposes Compact NGP (Neural Graphics Primitives): A Machine Learning Framework Combining Hash Tables with Learned Probes for Optimal Speed and Compression


Neural graphics primitives (NGP) promise to enable the smooth integration of old and new assets across applications. They represent images, shapes, volumetric data, and spatial-directional data, supporting novel view synthesis (NeRFs), generative modeling, light caching, and more. Particularly successful are primitives that represent data through a feature grid of trained latent embeddings, which are subsequently decoded by a multi-layer perceptron (MLP).
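To make the feature-grid-plus-MLP idea concrete, here is a minimal NumPy sketch of that pipeline: latent embeddings stored on a small 2D grid are bilinearly interpolated at a query coordinate and decoded by a tiny MLP. All sizes (grid resolution, feature dimension, hidden width) and the random weights are illustrative assumptions, not the paper's configuration; in practice both the grid and the MLP are trained jointly by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
GRID_RES = 16   # resolution of the feature grid per axis
FEAT_DIM = 4    # latent features stored per grid vertex
HIDDEN = 8      # width of the small decoder MLP

# Trainable latent embeddings on a 2D grid, plus decoder weights.
grid = rng.normal(0, 0.1, (GRID_RES, GRID_RES, FEAT_DIM))
W1 = rng.normal(0, 0.1, (FEAT_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, 3))  # decode to e.g. an RGB value

def lookup(x, y):
    """Bilinearly interpolate the latent features at continuous coords in [0, 1]."""
    gx, gy = x * (GRID_RES - 1), y * (GRID_RES - 1)
    x0, y0 = int(gx), int(gy)
    x1, y1 = min(x0 + 1, GRID_RES - 1), min(y0 + 1, GRID_RES - 1)
    fx, fy = gx - x0, gy - y0
    return ((1 - fx) * (1 - fy) * grid[x0, y0] + fx * (1 - fy) * grid[x1, y0]
            + (1 - fx) * fy * grid[x0, y1] + fx * fy * grid[x1, y1])

def decode(x, y):
    """Feature-grid lookup followed by a small ReLU MLP decoder."""
    h = np.maximum(lookup(x, y) @ W1, 0.0)
    return h @ W2

rgb = decode(0.3, 0.7)  # a 3-vector of decoded values
```

The key property is that most of the representational capacity lives in the grid of embeddings, so the MLP can stay tiny and cheap to evaluate.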

Researchers at NVIDIA and the University of Toronto propose Compact NGP, a machine-learning framework that combines the speed of hash tables with the compactness of learned indexing, using the latter to resolve hash collisions through learned probing. This combination is achieved by unifying all feature grids into a shared framework in which each acts as an indexing function mapping into a table of feature vectors.
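The "indexing function" view can be sketched as follows: a spatial hash maps an integer grid vertex to a slot in a fixed-size table of feature vectors, as in Instant NGP. The primes and XOR construction below follow the widely used Instant NGP spatial hash; the table size and feature dimension are illustrative assumptions.

```python
import numpy as np

# Primes from the Instant NGP spatial hash; the table size is a power of
# two so the modulo reduces to a bit mask.
PRIMES = (1, 2654435761, 805459861)
LOG2_T = 14                  # table holds 2**14 feature vectors
TABLE_SIZE = 1 << LOG2_T

def spatial_hash(ix, iy, iz):
    """Indexing function: map an integer grid vertex to a table slot."""
    h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])
    return h & (TABLE_SIZE - 1)

# The table of feature vectors the indexing function maps into.
feature_table = np.random.default_rng(1).normal(0, 0.1, (TABLE_SIZE, 2))

idx = spatial_hash(12, 345, 6789)
feat = feature_table[idx]  # the 2-dim feature vector for this vertex
```

Because the table is smaller than the virtual grid, distinct vertices collide; Compact NGP's contribution is to spend a few learned bits per bucket to disambiguate those collisions rather than relying on the MLP alone.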

Compact NGP has been designed with content distribution in mind, aiming to amortize compression overhead. Its decoding remains low-cost, low-power, and multi-scale on user equipment, enabling graceful degradation in bandwidth-constrained environments.

These data structures can be combined in novel ways through simple arithmetic on their indices, yielding state-of-the-art compression-versus-quality trade-offs. Mathematically, these arithmetic combinations assign different data structures to subsets of the bits within the indexing function, which significantly reduces the cost of learned indexing; that cost would otherwise scale exponentially with the number of bits.
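One way to read the bit-splitting idea is sketched below: the most significant bits of the final index come from a cheap spatial hash, while the least significant bits come from a small learned per-bucket probe codebook, so only a few bits per bucket need to be learned instead of the full index. The bit budget, codebook contents, and exact bit layout here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bit budget for illustration.
HASH_BITS = 10    # most-significant bits: cheap spatial hash
PROBE_BITS = 4    # least-significant bits: learned probe per bucket
TABLE_SIZE = 1 << (HASH_BITS + PROBE_BITS)

PRIMES = (1, 2654435761, 805459861)

def spatial_hash(ix, iy, iz):
    h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])
    return h & ((1 << HASH_BITS) - 1)

# A second, much smaller codebook stores one learned probe index per hash
# bucket. Learning 2**HASH_BITS small entries is far cheaper than learning
# all HASH_BITS + PROBE_BITS index bits directly.
probe_codebook = rng.integers(0, 1 << PROBE_BITS, size=1 << HASH_BITS)
feature_table = rng.normal(0, 0.1, (TABLE_SIZE, 2))

def combined_index(ix, iy, iz):
    """Concatenate hash bits (high) with learned probe bits (low)."""
    h = spatial_hash(ix, iy, iz)
    return (h << PROBE_BITS) | int(probe_codebook[h])

idx = combined_index(12, 345, 6789)
feat = feature_table[idx]
```

Assigning each structure to a disjoint bit range keeps the learned component small while the hash supplies the bulk of the index bits for free.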

Their approach inherits the speed advantages of hash tables while achieving significantly better compression, approaching levels comparable to JPEG for image representation. It remains differentiable and does not rely on a dedicated decompression scheme such as an entropy code. Compact NGP supports a range of user-controllable compression rates and offers streaming capabilities, allowing partial results to be loaded in bandwidth-limited environments.

The researchers evaluated NeRF compression on both real-world and synthetic scenes, comparing against several contemporary NeRF compression techniques, most of which build on TensoRF. For the real-world scenes they used masked wavelets as a strong recent baseline. Across both settings, Compact NGP outperforms Instant NGP on the trade-off between quality and size.

Compact NGP’s design is tailored to real-world applications where random-access decompression, level-of-detail streaming, and high performance matter in both training and inference. The authors are therefore eager to explore its potential in domains such as streaming applications, video game texture compression, live training, and many other areas.


Check out the Paper. All credit for this research goes to the researchers of this project.



Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive technological advancement. He is passionate about understanding nature fundamentally with the help of tools such as mathematical models, ML models, and AI.

