
NTU and Meta Researchers Introduce URHand: A Universal Relightable Hand AI Model that Generalizes Across Viewpoints, Poses, Illuminations, and Identities


Hands are constantly visible in our daily activities, which makes them crucial to a sense of self-embodiment in virtual settings. This creates the need for a digital hand model that is photorealistic, personalized, and relightable. Photorealism ensures a convincing visual representation, personalization captures individual differences, and relightability allows the hand to appear coherent across diverse virtual environments, together contributing to a more immersive and natural user experience.

Building photorealistic relightable hand models has followed two main approaches. Physically based rendering generalizes to various illuminations through offline path tracing, but it often falls short of real-time photorealism and struggles with accurate material parameter estimation. Neural relighting achieves real-time photorealism by directly inferring outgoing radiance, but it requires costly data augmentation to generalize to natural illuminations. Cross-identity generalization remains a challenge for both.
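To make the trade-off concrete, here is a minimal sketch of the physically based route in Python, using a toy Lambertian-plus-Blinn-Phong shading model as a stand-in (not the BRDF used in the paper); `pbr_shade` is a hypothetical helper, and its explicit material parameters, albedo and roughness, are precisely the quantities such methods must estimate accurately.

```python
import numpy as np

def pbr_shade(normal, view_dir, light_dir, albedo, roughness, light_rgb):
    """Toy physically based shading: Lambertian diffuse plus a Blinn-Phong
    specular lobe. Direction vectors are unit length with shape (3,);
    albedo and light_rgb are RGB arrays of shape (3,)."""
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)
    half = view_dir + light_dir
    half /= np.linalg.norm(half)
    shininess = 2.0 / max(roughness ** 2, 1e-3)  # rougher surface -> broader lobe
    spec = max(float(np.dot(normal, half)), 0.0) ** shininess
    return (albedo / np.pi + spec) * n_dot_l * light_rgb
```

If albedo or roughness is estimated wrongly, every rendered illumination inherits the error, which is one reason this family of methods struggles in practice.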

The researchers from Codec Avatars Lab, Meta, and Nanyang Technological University have proposed URHand, the first universal relightable hand model, designed to generalize across viewpoints, poses, illuminations, and identities. Their method combines physically based rendering with data-driven appearance modeling via neural relighting, balancing generalization and fidelity by incorporating known physics, such as the linearity of light transport, through a spatially varying linear lighting model.
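The linearity of light transport is easy to state concretely: the image of a scene under any illumination equals a weighted sum of its images under individual point lights (one-light-at-a-time, or OLAT, captures). Below is a minimal sketch of that principle, assuming precomputed OLAT renders with illustrative shapes; it is this property that lets a model trained under discrete studio lights be driven by arbitrary environment maps.

```python
import numpy as np

def relight_from_olat(olat_images: np.ndarray, light_rgb: np.ndarray) -> np.ndarray:
    """Relight via superposition: because light transport is linear, the
    image under any illumination is a weighted sum of OLAT basis images.
    olat_images: (K, H, W, 3) renders, one per point light;
    light_rgb:   (K, 3) per-light RGB intensities from the target environment."""
    return np.einsum("khwc,kc->hwc", olat_images, light_rgb)
```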

Training runs in a single stage, made possible by preserving this linearity, and a physics-based refinement branch estimates material parameters and high-resolution geometry. Concretely, the model comprises two parallel rendering branches trained jointly: a physical branch that uses a parametric BRDF to refine geometry and supply accurate shading features, and a neural branch that employs the spatially varying linear lighting model for real-time relighting of the final appearance. Because light transport stays linear, the model generalizes to arbitrary illuminations.
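As a hedged illustration of what a spatially varying linear lighting model can look like, the PyTorch sketch below predicts per-texel light transport coefficients from input features; the module name, layer sizes, and shapes are assumptions for illustration, not the paper's architecture. The design constraint it demonstrates is that the network may be arbitrarily nonlinear in geometry and pose, while outgoing radiance remains strictly linear in the per-light intensities.

```python
import torch
import torch.nn as nn

class LinearLightingHead(nn.Module):
    """Illustrative spatially varying linear lighting model: a small MLP
    predicts non-negative per-texel, per-light transport coefficients,
    and radiance is their linear combination with the light intensities."""

    def __init__(self, in_dim: int, num_lights: int):
        super().__init__()
        self.num_lights = num_lights
        # Nonlinear in the geometry/pose features, linear in the lighting.
        self.transport = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_lights * 3),
        )

    def forward(self, feats: torch.Tensor, light_rgb: torch.Tensor) -> torch.Tensor:
        # feats: (N_texels, in_dim); light_rgb: (num_lights, 3)
        t = self.transport(feats).view(-1, self.num_lights, 3)
        t = torch.relu(t)  # transport coefficients are non-negative
        return (t * light_rgb).sum(dim=1)  # (N_texels, 3) outgoing radiance
```

Because the mapping from `light_rgb` to radiance is linear by construction, swapping in a new environment map at inference requires no retraining or illumination augmentation.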

Compared with state-of-the-art 3D hand relighting and reconstruction methods such as RelightableHands and Handy, URHand significantly outperforms the baselines under per-identity training, reproducing detailed geometry, specularities, and shadows at higher quality and showcasing the effectiveness of its design. Its generalizability is equally evident on withheld test subjects, where it outperforms the baselines by a significant margin.

In conclusion, the researchers introduce URHand and demonstrate its capability to generalize across viewpoints, poses, illuminations, and identities. Their physics-inspired spatially varying linear lighting model and hybrid neural-physical learning framework enable scalable cross-identity training, yielding high-fidelity relightable hands. Experiments also showcase URHand's adaptability beyond studio data, allowing quick personalization from a phone scan.


Check out the Paper and Project. All credit for this research goes to the researchers of this project.


Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur. Asjad is a machine learning and deep learning enthusiast who is always researching applications of machine learning in healthcare.



