4 January 2024 Dynamic PDGAN: discriminator-boosted knowledge distillation for StyleGANs
Yuesong Tian, Li Shen, Xiang Tian, Zhifeng Li, Yaowu Chen
Abstract

Generative adversarial networks have shown remarkable success in image synthesis, especially StyleGANs. Equipped with delicate and specific designs, StyleGANs are capable of synthesizing high-resolution and high-fidelity images. Previous works aiming to improve StyleGANs mainly focus on modifying their architecture or transferring knowledge from other domains. However, the knowledge contained in StyleGANs trained in the same domain remains unexplored. We aim to further boost the performance of StyleGANs from the perspective of knowledge distillation, i.e., improving uncompressed StyleGANs with the aid of teacher StyleGANs trained in the same domain. Motivated by the implicit distribution captured by the pretrained teacher discriminator, we propose to exploit the teacher discriminator to additionally supervise the student generator of StyleGANs, thereby leveraging the knowledge in the teacher discriminator. With the proposed distillation scheme, our method outperforms the original StyleGANs on several large-scale datasets, achieving state-of-the-art results on AFHQv2.
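The core idea of the abstract, a frozen teacher discriminator providing an extra supervision signal for the student generator, can be sketched as a combined generator objective. This is a minimal illustrative sketch, not the paper's actual implementation: the function names (`distilled_generator_loss`), the non-saturating loss choice, and the weighting factor `lam` are all assumptions for illustration.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def generator_loss(fake_scores):
    # Non-saturating GAN generator loss (as used in StyleGAN training):
    # E[softplus(-D(G(z)))], driving discriminator scores on fakes upward.
    return softplus(-np.asarray(fake_scores, dtype=float)).mean()

def distilled_generator_loss(student_scores, teacher_scores, lam=0.5):
    # Hypothetical combined objective: the student generator is trained
    # against its own discriminator's scores (student_scores) plus an extra
    # term from a frozen, pretrained teacher discriminator (teacher_scores).
    # `lam` balances the two supervision signals; its value here is arbitrary.
    return generator_loss(student_scores) + lam * generator_loss(teacher_scores)
```

In a full training loop, `student_scores` and `teacher_scores` would be the two discriminators' outputs on the same batch of generated images, with only the student generator and student discriminator receiving gradient updates.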

© 2024 SPIE and IS&T
Yuesong Tian, Li Shen, Xiang Tian, Zhifeng Li, and Yaowu Chen "Dynamic PDGAN: discriminator-boosted knowledge distillation for StyleGANs," Journal of Electronic Imaging 33(1), 013005 (4 January 2024). https://doi.org/10.1117/1.JEI.33.1.013005
Received: 30 June 2023; Accepted: 12 December 2023; Published: 4 January 2024
KEYWORDS: Education and training, Gallium nitride, Visualization, Performance modeling, Signal attenuation, Data modeling, Head
