Researchers Are Training Image-Generating AI With Fewer Labels

Thursday March 7, 2019. 09:45 PM , from Slashdot
An anonymous reader shares a report: Generative AI models have a propensity for learning complex data distributions, which is why they're great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply.

The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper [PDF] published on the preprint server arXiv.org ('High-Fidelity Image Generation With Fewer Labels'), they describe a 'semantic extractor' that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. Together, they say, these self- and semi-supervised techniques can outperform state-of-the-art methods on popular benchmarks like ImageNet.

'In a nutshell, instead of providing hand-annotated ground truth labels for real images to the discriminator, we... provide inferred ones,' the paper's authors explained. In one of several unsupervised methods the researchers posit, they first use the aforementioned semantic extractor to compute a feature representation of the target training dataset -- a learned encoding that automatically captures the characteristics needed to classify the raw data -- and then infer labels from those features.
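The idea of inferring labels for a large dataset from a small labeled subset can be illustrated with a minimal sketch. This is not the paper's actual method: the features here are synthetic stand-ins for the output of a pretrained extractor, and the "inference" step is a simple nearest-centroid rule rather than the clustering and semi-supervised losses the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extractor features: each sample's feature vector
# sits near one of a few class centers (hypothetical data, not ImageNet).
num_classes, dim, n = 3, 16, 600
true_labels = rng.integers(0, num_classes, size=n)
class_centers = rng.normal(scale=5.0, size=(num_classes, dim))
features = class_centers[true_labels] + rng.normal(size=(n, dim))

# Semi-supervised setting: pretend only 10 samples per class are labeled.
labeled_idx = np.concatenate(
    [np.flatnonzero(true_labels == c)[:10] for c in range(num_classes)]
)

# "Label inference": compute per-class centroids from the labeled subset,
# then assign every sample (labeled or not) to the nearest centroid.
labeled_feats = features[labeled_idx]
labeled_y = true_labels[labeled_idx]
centroids = np.stack(
    [labeled_feats[labeled_y == c].mean(axis=0) for c in range(num_classes)]
)
dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
inferred_labels = dists.argmin(axis=1)

# These inferred labels would then play the role of ground-truth labels
# when conditioning a GAN discriminator.
accuracy = (inferred_labels == true_labels).mean()
print(f"agreement with ground truth: {accuracy:.2f}")
```

With well-separated synthetic clusters the inferred labels agree with the hidden ground truth almost everywhere, which is why a small labeled subset can stand in for full annotation when the feature space is good.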

Read more of this story at Slashdot.