
Google DeepMind Introduces DRaFT for Fine-Tuning Diffusion Models

Text-to-image diffusion models (Google DeepMind)

Diffusion models have been game-changers in generative modeling across many kinds of data. In practical applications such as generating aesthetically pleasing images from text descriptions, however, some form of fine-tuning is usually required. To improve alignment and image quality, text-to-image diffusion models rely on techniques like classifier-free guidance and curated datasets such as LAION Aesthetics.


Direct Reward Fine-Tuning (DRaFT)

The authors of this work present a simple and efficient approach for fine-tuning diffusion models directly on differentiable rewards. Their method, Direct Reward Fine-Tuning (DRaFT), backpropagates the reward gradient through the full 50-step unrolled computation graph of the sampling chain. Instead of updating all of the model parameters, they optimize LoRA weights and use gradient checkpointing to keep memory and compute costs manageable.
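To make the idea concrete, here is a minimal sketch of this full-backpropagation setup in PyTorch. The toy denoiser, update rule, and reward function are stand-ins chosen for illustration rather than DeepMind's implementation, and gradient checkpointing is omitted for brevity; only the LoRA parameters receive gradient updates.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Frozen base network with trainable LoRA factors on a single linear layer."""
    def __init__(self, dim=16, rank=4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # LoRA: low-rank update A @ B added to the frozen weight matrix.
        self.lora_a = nn.Parameter(torch.zeros(dim, rank))
        self.lora_b = nn.Parameter(torch.randn(rank, dim) * 0.01)

    def forward(self, x, t):
        del t  # the toy model ignores the timestep
        return self.base(x) + x @ self.lora_b.T @ self.lora_a.T

def sample(model, x_t, num_steps=50):
    """Unrolled sampler; gradients flow through every step (full DRaFT)."""
    for t in reversed(range(num_steps)):
        eps = model(x_t, t)
        x_t = x_t - eps / num_steps  # toy update standing in for a DDIM step
    return x_t

def reward(x):
    """Differentiable stand-in for an aesthetics score."""
    return -(x ** 2).sum(dim=-1).mean()

model = ToyDenoiser()
opt = torch.optim.Adam([model.lora_a, model.lora_b], lr=1e-3)

for _ in range(100):
    noise = torch.randn(8, 16)   # batch of initial latents
    x0 = sample(model, noise)    # backprop reaches through all 50 steps
    loss = -reward(x0)           # maximize the reward by minimizing its negative
    opt.zero_grad()
    loss.backward()
    opt.step()
```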

The authors also describe improvements to DRaFT that boost its efficiency and performance. The first is the DRaFT-K variant, which truncates backpropagation to only the last K sampling steps when computing the fine-tuning gradient.

Empirical results show that this truncated-gradient strategy significantly outperforms full backpropagation given the same number of training steps, because backpropagating through the entire sampling chain can suffer from exploding gradients.
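A minimal sketch of this truncation, reusing the toy sampler above (an assumption for illustration, not the paper's code): the early steps run without gradient tracking, and only the final K steps are differentiated.

```python
import torch

def sample_draft_k(model, x_t, num_steps=50, k=1):
    """Toy sampler with truncated backprop: only the last k steps build a graph."""
    # Early steps run without gradient tracking, so no graph is stored for them.
    with torch.no_grad():
        for t in reversed(range(k, num_steps)):
            x_t = x_t - model(x_t, t) / num_steps
    # Only the final k steps are differentiated when the reward is backpropagated.
    for t in reversed(range(k)):
        x_t = x_t - model(x_t, t) / num_steps
    return x_t
```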


The authors also introduce DRaFT-LV, a variant of DRaFT-1 that averages over numerous noise samples to produce lower-variance gradient estimates.
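The sketch below illustrates one way such a lower-variance estimate could be formed, again using the toy pieces from the first example (the re-noising scale and single differentiable step are assumptions for illustration): the sample is generated once without a graph, re-noised several times, and only the final denoising step of each copy is differentiated before the rewards are averaged.

```python
import torch

def draft_lv_loss(model, noise, reward_fn, num_steps=50, n=4, sigma=0.1):
    """Lower-variance loss: sample once, re-noise n times, differentiate only
    the final denoising step of each copy, and average the resulting losses."""
    with torch.no_grad():
        x0 = sample(model, noise, num_steps)  # full sampling chain, no graph
    losses = []
    for _ in range(n):
        x_noised = x0 + sigma * torch.randn_like(x0)            # re-noise the clean sample
        x_refined = x_noised - model(x_noised, 0) / num_steps   # one differentiable step
        losses.append(-reward_fn(x_refined))
    return torch.stack(losses).mean()

# Usage with the toy model and reward defined earlier:
# loss = draft_lv_loss(model, torch.randn(8, 16), reward)
# loss.backward()
```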


Gradient-based fine-tuning method

The research team implemented DRaFT on Stable Diffusion 1.4 and evaluated it across a variety of reward functions and prompt sets. Their gradient-based techniques were far more sample-efficient than RL-based fine-tuning baselines: when optimizing LAION Aesthetics classifier scores, for instance, they outperformed RL algorithms by a factor of more than 200.

One of the proposed variants, DRaFT-LV, was especially effective, learning roughly twice as fast as ReFL, an earlier gradient-based method. The authors also demonstrated DRaFT's flexibility by merging DRaFT models with pre-trained models and by mixing or scaling LoRA weights.
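Because the fine-tuned behavior lives entirely in the LoRA adapters, mixing and scaling can be done directly on the adapter weights. The sketch below assumes LoRA parameters stored in plain state dicts with the lora_a / lora_b naming from the first example; it illustrates the idea rather than reproducing the authors' code.

```python
import torch

def mix_lora(lora_x, lora_y, alpha=0.5):
    """Linearly interpolate two LoRA state dicts with matching keys;
    alpha=0 keeps the first adapter, alpha=1 keeps the second."""
    return {k: (1 - alpha) * lora_x[k] + alpha * lora_y[k] for k in lora_x}

def scale_lora(lora, scale):
    """Scale only the lora_a factors so the low-rank update A @ B scales
    linearly; scale=0 recovers the frozen pre-trained model."""
    return {k: (scale * v if "lora_a" in k else v) for k, v in lora.items()}

# Usage with the toy model above (hypothetical second adapter):
# adapters = {"lora_a": model.lora_a.detach(), "lora_b": model.lora_b.detach()}
# blended = mix_lora(adapters, other_adapters, alpha=0.3)
```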

In conclusion, directly fine-tuning diffusion models on differentiable rewards is a promising way to improve generative modeling techniques, with implications for applications spanning images, text, and more. Its efficiency, adaptability, and effectiveness make it useful for researchers and practitioners in machine learning and generative modeling.

