
A Distributed Conditional Wasserstein Deep Convolutional Relativistic Loss Generative Adversarial Network With Improved Convergence


Impact Statement:
D-GANs enable clients to collectively train a global GAN by employing local GANs on individual clients, trained with local private data and coordinated by a central server. However, contemporary D-GANs are often computationally expensive and struggle to achieve convergence due to poor synchronization between the clients and the central server, particularly in the presence of nonconvex loss functions at both the local and global levels. These challenges limit their practical applicability. This article introduces a novel and computationally inexpensive D-GAN model, named DRL-GAN, designed to achieve stable training and convergence in linear time, even in the presence of nonconvex loss functions, without encountering mode collapse or vanishing gradients. This makes DRL-GAN highly applicable in a variety of real-world distributed applications.
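
To make the client/server coordination concrete, the following is a minimal Python/PyTorch sketch of the general D-GAN pattern described above. The names (`train_locally`, the FedAvg-style parameter averaging, the round structure) are illustrative assumptions and do not represent the exact DRL-GAN protocol from the article.

```python
# Minimal sketch of a distributed-GAN training round: clients train on local
# private data, a central server coordinates and aggregates their updates.
# All names and the averaging rule are assumptions for illustration only.
import copy

def train_distributed_gan(server_generator, clients, rounds, local_steps):
    """Broadcast the global generator, let each client train locally,
    then aggregate client parameters back into the global model."""
    for _ in range(rounds):
        client_states = []
        for client in clients:
            # Each client starts from the current global generator weights.
            local_gen = copy.deepcopy(server_generator)
            client.train_locally(local_gen, steps=local_steps)  # uses local private data
            client_states.append(local_gen.state_dict())
        # Server aggregation (FedAvg-style parameter mean, assumed here).
        averaged = {
            key: sum(state[key] for state in client_states) / len(client_states)
            for key in client_states[0]
        }
        server_generator.load_state_dict(averaged)
    return server_generator
```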

Abstract:

Generative adversarial networks (GANs) excel in diverse applications such as image enhancement, image manipulation, and generating images and videos from text. Yet, training GANs on large datasets remains computationally intensive for standalone systems, and synchronization issues between the generator and the discriminator lead to unstable training, poor convergence, and vanishing or exploding gradients. In decentralized environments, standalone GANs also struggle with data distributed across client machines. Researchers have turned to federated learning (FL) for distributed-GAN (D-GAN) implementations, but these efforts often fall short due to training instability and poor synchronization between GAN components. In this study, we present DRL-GAN, a lightweight Wasserstein conditional distributed relativistic loss-GAN designed to overcome these limitations. DRL-GAN ensures training stability in the face of nonconvex losses by employing a single global generator on the central server and a discriminator...
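
As a rough illustration of the relativistic, conditional critic objective referenced in the title and abstract, the sketch below scores real samples relative to the average fake score (and vice versa) with a conditional critic, using a hinge margin as a common stand-in for the Wasserstein-style relativistic loss. The conditional critic signature, the hinge form, and the function names are assumptions for illustration; the exact DRL-GAN objective is defined in the article.

```python
# Hedged sketch of a conditional relativistic-average critic/generator loss
# in PyTorch. The hinge margin and the critic(images, labels) signature are
# assumptions; they only illustrate the relativistic idea, not DRL-GAN itself.
import torch
import torch.nn.functional as F

def critic_loss(critic, real_images, fake_images, labels):
    """Relativistic average critic loss: real samples should score above the
    mean fake score, and fake samples below the mean real score."""
    c_real = critic(real_images, labels)            # conditional critic scores
    c_fake = critic(fake_images.detach(), labels)   # stop gradients into the generator
    loss_real = F.relu(1.0 - (c_real - c_fake.mean())).mean()
    loss_fake = F.relu(1.0 + (c_fake - c_real.mean())).mean()
    return loss_real + loss_fake

def generator_loss(critic, real_images, fake_images, labels):
    """The generator tries to reverse the relativistic ordering of the critic."""
    c_real = critic(real_images, labels)
    c_fake = critic(fake_images, labels)
    return (c_real - c_fake.mean()).mean() - (c_fake - c_real.mean()).mean()
```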
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 9, September 2024)
Page(s): 4344 - 4353
Date of Publication: 09 April 2024
Electronic ISSN: 2691-4581