
This AI Paper from China Proposes a Novel Architecture Named ViTAR (Vision Transformer with Any Resolution)


The remarkable strides made by the Transformer architecture in Natural Language Processing (NLP) have ignited a surge of interest within the Computer Vision (CV) community. The Transformer’s adaptation to vision tasks, termed the Vision Transformer (ViT), splits an image into non-overlapping patches, converts each patch into a token, and then applies Multi-Head Self-Attention (MHSA) to capture inter-token dependencies.
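To make this pipeline concrete, here is a minimal PyTorch sketch of the standard ViT front end described above; the patch size, embedding width, and head count are illustrative choices for the sketch, not values taken from the paper.

```python
import torch
import torch.nn as nn

class PatchEmbedAndAttend(nn.Module):
    """Patchify an image into tokens, then apply one MHSA layer."""
    def __init__(self, img_channels=3, patch_size=16, dim=192, heads=3):
        super().__init__()
        # Non-overlapping patches: a strided convolution maps each
        # patch_size x patch_size patch to one token of width `dim`.
        self.patchify = nn.Conv2d(img_channels, dim,
                                  kernel_size=patch_size, stride=patch_size)
        # Multi-Head Self-Attention captures inter-token dependencies.
        self.mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, images):                       # (B, C, H, W)
        tokens = self.patchify(images)               # (B, dim, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2)   # (B, N, dim)
        attended, _ = self.mhsa(tokens, tokens, tokens)
        return attended

x = torch.randn(2, 3, 224, 224)
print(PatchEmbedAndAttend()(x).shape)                # torch.Size([2, 196, 192])
```

Note that the token count N grows with the input resolution, which is exactly why fixed positional encodings and attention costs become problematic at unseen resolutions.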

Leveraging the robust modeling prowess inherent in Transformers, ViTs have demonstrated commendable performance across a spectrum of visual tasks encompassing image classification, object detection, vision-language modeling, and even video recognition. However, despite these successes, ViTs face limitations in real-world scenarios that require handling variable input resolutions; several studies have reported significant performance degradation when the inference resolution differs from the training resolution.

To address this challenge, recent efforts such as ResFormer (Tian et al., 2023) have emerged. These methods incorporate multi-resolution images during training and refine positional encodings into more flexible, convolution-based forms. Nevertheless, they still struggle to maintain high performance across wide resolution variations and to integrate seamlessly into prevalent self-supervised frameworks.

In response to these challenges, a research team from China proposes Vision Transformer with Any Resolution (ViTAR). This novel architecture is designed to process high-resolution images with minimal computational burden while exhibiting robust resolution generalization capabilities. Key to ViTAR’s efficacy is the introduction of the Adaptive Token Merger (ATM) module, which iteratively processes tokens after patch embedding, efficiently merging them into a fixed grid shape, thus enhancing resolution adaptability while mitigating computational complexity.
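The paper's ATM merges neighboring tokens with a cross-attention mechanism; as a rough illustration of the fixed-grid idea only, the sketch below substitutes simple average pooling for that attention-based merge. The iterative reduction to a fixed grid reflects the concept described above, while the merge operator and the 14x14 target grid are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def adaptive_token_merge(tokens, in_hw, target=14):
    """tokens: (B, N, D) from patch embedding; in_hw: (H, W) token grid."""
    B, N, D = tokens.shape
    h, w = in_hw
    grid = tokens.transpose(1, 2).reshape(B, D, h, w)
    # Iteratively halve the token grid while it remains much larger than
    # the target (the paper performs this step with cross-attention merges).
    while grid.shape[-2] > 2 * target and grid.shape[-1] > 2 * target:
        grid = F.avg_pool2d(grid, kernel_size=2)
    # One final adaptive step lands exactly on the fixed grid shape, so
    # every input resolution yields the same token count downstream.
    grid = F.adaptive_avg_pool2d(grid, (target, target))
    return grid.flatten(2).transpose(1, 2)           # (B, target*target, D)

# Any token-grid resolution collapses to the same 14 x 14 = 196 tokens.
for hw in [(32, 32), (64, 48), (80, 80)]:
    toks = torch.randn(1, hw[0] * hw[1], 192)
    print(adaptive_token_merge(toks, hw).shape)      # torch.Size([1, 196, 192])
```

Because the grid shape after merging is fixed, the cost of all subsequent attention layers is independent of the input resolution, which is where the computational savings come from.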

Furthermore, to enable generalization to arbitrary resolutions, the researchers introduce Fuzzy Positional Encoding (FPE), which perturbs positional information with random noise, transforming precise positional perception into a fuzzy one. This prevents the model from overfitting to any single resolution and enhances its adaptability.
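As a rough illustration of this positional perturbation (not the paper's implementation), the sketch below jitters each token's grid coordinate with uniform noise during training and bilinearly samples a learnable embedding table; the table size, noise range, and sampling scheme are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def fuzzy_positional_encoding(pos_table, grid_hw, training=True):
    """pos_table: (D, Hp, Wp) learnable embedding grid; grid_hw: token grid."""
    h, w = grid_hw
    yy, xx = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing="ij")
    if training:
        # Positional perturbation: jitter each exact coordinate with
        # uniform noise so position perception becomes "fuzzy".
        yy = yy + torch.rand_like(yy) - 0.5
        xx = xx + torch.rand_like(xx) - 0.5
    # Normalise coordinates to [-1, 1] and bilinearly sample the table.
    coords = torch.stack([2 * xx / max(w - 1, 1) - 1,
                          2 * yy / max(h - 1, 1) - 1], dim=-1)
    pe = F.grid_sample(pos_table.unsqueeze(0), coords.unsqueeze(0),
                       mode="bilinear", padding_mode="border",
                       align_corners=True)
    return pe.flatten(2).transpose(1, 2)             # (1, h*w, D)

pos_table = torch.randn(192, 14, 14)                 # learnable in practice
print(fuzzy_positional_encoding(pos_table, (14, 14)).shape)  # (1, 196, 192)
```

At inference time the noise is disabled, so the model sees exact coordinates, but having trained on perturbed ones it tolerates the coordinate shifts that arise at unseen resolutions.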

The study’s contributions thus include an effective multi-resolution adaptation module (ATM), which significantly enhances resolution generalization and reduces computational load under high-resolution inputs, and Fuzzy Positional Encoding (FPE), which provides robust position perception during training and improves adaptability to varying resolutions.

Their extensive experiments unequivocally validate the efficacy of the proposed approach. The base model not only demonstrates robust performance across a range of input resolutions but also showcases superior performance compared to existing ViT models. Moreover, ViTAR exhibits commendable performance in downstream tasks such as instance segmentation and semantic segmentation, underscoring its versatility and utility across diverse visual tasks.


Check out the Paper. All credit for this research goes to the researchers of this project.


Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive technological advancement. He is passionate about understanding nature at a fundamental level with the help of tools like mathematical models, ML models, and AI.



