Researchers find way to boost self-supervised AI models’ robustness

Abstract: In self-supervised learning, an AI technique in which training data is labeled automatically by a feature extractor, that extractor often exploits low-level features (known as “shortcuts”) that cause it to ignore more useful representations. In search of a technique that removes such shortcuts autonomously, researchers at Google Brain developed a framework, a “lens,” that alters images in ways that let self-supervised models outperform those trained conventionally.

As the researchers explain in a preprint paper published this week, in self-supervised learning, extractor-generated labels are used to create a pretext task that requires learning abstract, semantic features. A model pre-trained on the task can then be transferred to tasks for which labels are expensive to obtain, for example by fine-tuning the model for a given target task. But defining pretext tasks is often challenging because models are biased toward exploiting the simplest features, like logos, watermarks, and color fringes caused by camera lenses.
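
For readers unfamiliar with the setup, here is a minimal sketch of pretext-task pretraining in PyTorch. It assumes a rotation-prediction pretext task, one common choice; the paper's framework applies to self-supervised pretext tasks in general, and the network choice and hyperparameters here are illustrative, not from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Feature extractor trained on a pretext task whose labels are generated
# automatically: here, predicting which of four rotations (0/90/180/270
# degrees) was applied to each image.
encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()           # expose the 512-d features, drop the classifier
rotation_head = nn.Linear(512, 4)    # 4-way rotation classifier

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(rotation_head.parameters()), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

def rotate_batch(images):
    """Apply a random rotation to each image; the labels come for free."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

# One pretraining step on an unlabeled batch (random tensors stand in
# for real images here).
images = torch.randn(32, 3, 224, 224)
rotated, labels = rotate_batch(images)
loss = criterion(rotation_head(encoder(rotated)), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# For transfer, rotation_head is discarded and a task-specific head is
# fine-tuned on top of the pretrained encoder.
```

Because the rotation labels are derived from the images themselves, no human annotation is needed, which is what makes the approach attractive for label-scarce target tasks. The catch, per the article, is that nothing stops the encoder from solving such a task via trivial cues like a watermark rather than semantic content.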

Fortunately, any feature a model can use to solve a pretext task can also be used by an adversary to make that task harder. The researchers’ framework, which targets self-supervised computer vision models, therefore processes images with a lightweight image-to-image model called a “lens,” trained adversarially to reduce pretext-task performance. Once trained, the lens can be applied to unseen images, so it remains usable when the model is transferred to a new task. The lens can also help visualize shortcuts by spotlighting the differences between its input and output images, providing insight into how shortcuts differ.
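
As a rough illustration of the idea, the sketch below adds an adversarially trained lens to the pretraining loop above (reusing encoder, rotation_head, criterion, and rotate_batch). The tiny residual network, the reconstruction penalty that keeps the lens’s edits small, and all hyperparameters are assumptions for illustration; the article specifies only a lightweight image-to-image model trained adversarially to reduce pretext-task performance.

```python
import torch
import torch.nn as nn

class Lens(nn.Module):
    """Lightweight image-to-image model that edits its input. A tiny
    residual conv net is an assumption here, chosen for brevity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict a small residual edit of the input

lens = Lens()
lens_opt = torch.optim.Adam(lens.parameters(), lr=1e-4)
model_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(rotation_head.parameters()), lr=1e-3
)
recon_weight = 0.1  # illustrative: penalize large edits so the lens stays "light"

images = torch.randn(32, 3, 224, 224)
rotated, labels = rotate_batch(images)

# 1) Lens step: edit the images so the pretext task gets *harder*
#    (negated loss), pushing the lens to erase shortcut features.
lensed = lens(rotated)
lens_loss = (
    -criterion(rotation_head(encoder(lensed)), labels)
    + recon_weight * (lensed - rotated).pow(2).mean()
)
lens_opt.zero_grad()
lens_loss.backward()
lens_opt.step()

# 2) Model step: solve the pretext task on lens-processed images, forcing
#    the encoder to rely on features beyond the removed shortcuts.
with torch.no_grad():
    lensed = lens(rotated)  # no gradients needed through the lens here
loss = criterion(rotation_head(encoder(lensed)), labels)
model_opt.zero_grad()
loss.backward()
model_opt.step()

# Visualizing (lensed - rotated).abs() highlights what the lens removed,
# i.e., the suspected shortcuts.
```

The adversarial structure is the key design choice: whatever easy cue the model latches onto becomes exactly the signal the lens is rewarded for erasing, so shortcut removal needs no manual specification of what the shortcuts are.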

......


Full Text: venturebeat

If you like this article, please like our Facebook page: Big Data In Finance

 

