
Huawei Researchers Introduce a Novel and Adaptively Adjustable Loss Function for Weak-to-Strong Supervision


The progress and development of artificial intelligence (AI) heavily rely on human evaluation, guidance, and expertise. In computer vision, convolutional networks acquire a semantic understanding of images through extensive labeling provided by experts, such as delineating object boundaries in datasets like COCO or categorizing images in ImageNet. 

Similarly, in robotics, reinforcement learning often relies on human-defined reward functions to guide machines toward optimal performance. In Natural Language Processing (NLP), recurrent neural networks and Transformers learn the intricacies of language from vast amounts of unlabeled, human-written text. This symbiotic relationship highlights how AI models advance by leveraging human intelligence, tapping into the depth and breadth of human expertise to enhance their capabilities and understanding.

Researchers from Huawei build on the concept of "superalignment," which addresses the challenge of effectively leveraging human expertise to supervise superhuman AI models. Superalignment aims to align superhuman models so that they learn as much as possible from human input. A seminal concept in this area is Weak-to-Strong Generalization (WSG), which explores using weaker models to supervise stronger ones.

WSG research has shown that stronger models can surpass their weaker counterparts in performance through simple supervision, even with incomplete or flawed labels. This approach has demonstrated effectiveness in natural language processing and reinforcement learning.

The researchers extend this idea to "vision superalignment," specifically examining how Weak-to-Strong Generalization (WSG) applies to vision foundation models. They design and examine multiple computer vision scenarios, including few-shot learning, transfer learning, learning with noisy labels, and traditional knowledge distillation.

Their approach’s effectiveness stems from its capacity to blend direct learning from the weak model with the strong model’s inherent capability to comprehend and interpret visual data. By leveraging the guidance provided by the weak model while capitalizing on the advanced capabilities of the strong model, this method enables the strong model to transcend the constraints of the weak model, thereby enhancing its predictions.
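To make this blending concrete, here is a minimal sketch in PyTorch of how such a weak-to-strong objective could look. The function name, the fixed mixing weight `alpha`, and the overall structure are illustrative assumptions, not the paper's actual adaptively adjustable loss.

```python
# Minimal sketch (illustrative, not the authors' exact loss): train the strong
# model against a target that blends the weak model's soft labels with the
# strong model's own (detached) predictions.
import torch
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits: torch.Tensor,
                        weak_logits: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    # Soft labels from the weak supervisor; no gradients flow into it.
    weak_probs = F.softmax(weak_logits.detach(), dim=-1)
    # The strong model's own beliefs, detached so they act as a fixed target.
    strong_probs = F.softmax(strong_logits.detach(), dim=-1)
    # Mixed target: part weak-model guidance, part strong-model self-belief.
    # The paper's adaptive loss adjusts this balance; a fixed alpha is assumed here.
    target = alpha * weak_probs + (1.0 - alpha) * strong_probs
    # Cross-entropy of the strong model's predictions against the mixed target.
    return -(target * F.log_softmax(strong_logits, dim=-1)).sum(dim=-1).mean()
```

In training, `weak_logits` would come from the frozen weak model and `strong_logits` from the strong model being fine-tuned.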

However, because the weak model cannot provide precise guidance and the strong model sometimes produces incorrect labels of its own, simply mixing the two label sources is not enough. Since the accuracy of any individual label is hard to assess, the researchers plan, in future work, to use confidence as a criterion for selecting the label most likely to be correct, which should make the strong model's predictions more accurate and reliable overall.
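As a rough illustration of that confidence-based selection (again an assumption about the mechanics, not the paper's exact procedure), one could keep, for each sample, the hard label from whichever model is more confident:

```python
import torch
import torch.nn.functional as F

def pick_confident_labels(strong_logits: torch.Tensor,
                          weak_logits: torch.Tensor) -> torch.Tensor:
    # Top-1 confidence and predicted class from each model.
    strong_conf, strong_pred = F.softmax(strong_logits, dim=-1).max(dim=-1)
    weak_conf, weak_pred = F.softmax(weak_logits, dim=-1).max(dim=-1)
    # Per sample, trust the strong model only where it is more confident
    # than the weak supervisor; otherwise fall back to the weak label.
    return torch.where(strong_conf > weak_conf, strong_pred, weak_pred)
```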


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.



Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive technological progress. He is passionate about understanding nature at a fundamental level with the help of tools such as mathematical models, ML models, and AI.
