Research Article

Investigating the effect of loss functions on single-image GAN performance

Year 2024, Volume: 8 Issue: 2, 213 - 225
https://doi.org/10.38088/jise.1497968

Abstract

Loss functions are central to training generative adversarial networks (GANs) and to shaping their outputs. These functions, designed specifically for GANs, optimize the generator and discriminator networks jointly but with opposing objectives. GAN models, which are typically trained on large datasets, have been highly successful in deep learning. However, exploring the factors that determine the success of GAN models developed for limited-data problems remains an important area of research. In this study, we conducted a comprehensive investigation of loss functions commonly used in the GAN literature: binary cross entropy (BCE), the Wasserstein GAN (WGAN) loss, the least squares GAN (LSGAN) loss, and the hinge loss. Our research focused on how these loss functions affect output quality and training convergence in single-image GANs. Specifically, we evaluated a single-image GAN model, SinGAN, trained with each of these loss functions, in terms of image quality and diversity. Our experimental results demonstrated that all of the examined loss functions can produce high-quality, diverse images from a single training image. Additionally, we found that the WGAN-GP and LSGAN-GP loss functions (GP: gradient penalty) are the most effective for single-image GAN models.
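
For readers who want a concrete picture of the objectives being compared, the sketch below shows, in plain PyTorch, one common way to write the four adversarial losses named above, together with a WGAN-GP-style gradient penalty. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are ours, and d_real / d_fake are assumed to be raw discriminator outputs (logits).

import torch
import torch.nn.functional as F

def discriminator_loss(kind, d_real, d_fake):
    if kind == "bce":    # standard GAN loss [3]
        return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    if kind == "wgan":   # Wasserstein critic loss [19]; pair with a gradient penalty [20]
        return d_fake.mean() - d_real.mean()
    if kind == "lsgan":  # least squares loss [21]
        return 0.5 * (((d_real - 1) ** 2).mean() + (d_fake ** 2).mean())
    if kind == "hinge":  # hinge loss [22]
        return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
    raise ValueError(kind)

def generator_loss(kind, d_fake):
    if kind == "bce":
        return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    if kind in ("wgan", "hinge"):
        return -d_fake.mean()
    if kind == "lsgan":
        return 0.5 * ((d_fake - 1) ** 2).mean()
    raise ValueError(kind)

def gradient_penalty(discriminator, real, fake):
    # Penalty on random interpolates between real and fake samples [20];
    # combining it with the least squares loss gives the LSGAN-GP variant.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(discriminator(interp).sum(), interp, create_graph=True)
    return ((grads.reshape(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

In a multi-scale single-image model such as SinGAN [12], these objectives would be applied per scale; for the GP variants, the discriminator objective becomes discriminator_loss(kind, d_real, d_fake) + lam * gradient_penalty(D, real, fake), where lam is a weighting coefficient (10 is the value suggested in [20]).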

Ethical Statement

Ethical approval is not required.

References

  • [1] Shahriar, S. (2022). GAN computers generate arts? A survey on visual arts, music, and literary text generation using generative adversarial network. Displays, 73, 102237.
  • [2] Chakraborty, T., KS, U. R., Naik, S. M., Panja, M., & Manvitha, B. (2024). Ten years of generative adversarial nets (GANs): a survey of the state-of-the-art. Machine Learning: Science and Technology, 5(1), 011001.
  • [3] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139-144.
  • [4] Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29.
  • [5] Zhang, Z., Li, M., & Yu, J. (2018). On the convergence and mode collapse of GAN. SIGGRAPH Asia 2018 Technical Briefs, 1-4.
  • [6] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30.
  • [7] Iglesias, G., Talavera, E., & Díaz-Álvarez, A. (2023). A survey on GANs for computer vision: Recent research, analysis and taxonomy. Computer Science Review, 48, 100553.
  • [8] Xia, W., Zhang, Y., Yang, Y., Xue, J. H., Zhou, B., & Yang, M. H. (2022). GAN inversion: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3), 3121-3138.
  • [9] Wang, P., Li, Y., Singh, K. K., Lu, J., & Vasconcelos, N. (2021). IMAGINE: Image synthesis by image-guided model inversion. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3681-3690.
  • [10] Yildiz, E., Yuksel, M. E., & Sevgen, S. (2024). A single-image GAN model using self-attention mechanism and DenseNets. Neurocomputing, 596, 127873.
  • [11] Zhang, Z., Han, C., & Guo, T. (2021). ExSinGAN: Learning an explainable generative model from a single image. 32nd British Machine Vision Conference.
  • [12] Shaham, T. R., Dekel, T., & Michaeli, T. (2019). SinGAN: Learning a generative model from a single natural image. IEEE/CVF International Conference on Computer Vision, 4570-4580.
  • [13] Ulyanov, D., Vedaldi, A., & Lempitsky, V. (2018). Deep image prior. IEEE Conference on Computer Vision and Pattern Recognition, 9446-9454.
  • [14] Shocher, A., Bagon, S., Isola, P., & Irani, M. (2019). InGAN: Capturing and retargeting the "DNA" of a natural image. IEEE/CVF International Conference on Computer Vision, 4492-4501.
  • [15] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. IEEE Conference on Computer Vision and Pattern Recognition, 1125-1134.
  • [16] Hinz, T., Fisher, M., Wang, O., & Wermter, S. (2021). Improved techniques for training single-image GANs. IEEE/CVF Winter Conference on Applications of Computer Vision, 1300-1309.
  • [17] Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018). Progressive growing of GANs for improved quality, stability, and variation. International Conference on Learning Representations.
  • [18] Granot, N., Feinstein, B., Shocher, A., Bagon, S., & Irani, M. (2022). Drop the GAN: In defense of patches nearest neighbors as single image generative models. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13460-13469.
  • [19] Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein generative adversarial networks. 34th International Conference on Machine Learning, ICML, 298–321.
  • [20] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems, 30.
  • [21] Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., & Paul Smolley, S. (2017). Least squares generative adversarial networks. IEEE International Conference on Computer Vision, 2794-2802.
  • [22] Lim, J. H., & Ye, J. C. (2017). Geometric GAN. arXiv preprint arXiv:1705.02894.
  • [23] Iglesias, G., Talavera, E., & Díaz-Álvarez, A. (2023). A survey on GANs for computer vision: Recent research, analysis and taxonomy. Computer Science Review, 48, 100553.
  • [24] Jabbar, A., Li, X., & Omar, B. (2021). A survey on generative adversarial networks: Variants, applications, and training. ACM Computing Surveys (CSUR), 54(8), 1-49.
  • [25] Wang, Z., Simoncelli, E. P., & Bovik, A. C. (2003). Multiscale structural similarity for image quality assessment. The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, IEEE, 1398-1402.
  • [26] Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. IEEE Conference on Computer Vision and Pattern Recognition, 586-595.

Details

Primary Language English
Subjects Computer Vision, Pattern Recognition, Deep Learning, Neural Networks, Machine Learning (Other)
Journal Section Research Articles
Authors

Eyyüp Yıldız 0000-0002-7051-3368

Erkan Yüksel 0000-0001-8976-9964

Selçuk Sevgen 0000-0003-1443-1779

Early Pub Date December 11, 2024
Publication Date
Submission Date June 8, 2024
Acceptance Date August 9, 2024
Published in Issue Year 2024, Volume: 8, Issue: 2

Cite

APA Yıldız, E., Yüksel, E., & Sevgen, S. (2024). Investigating the effect of loss functions on single-image GAN performance. Journal of Innovative Science and Engineering, 8(2), 213-225. https://doi.org/10.38088/jise.1497968
AMA Yıldız E, Yüksel E, Sevgen S. Investigating the effect of loss functions on single-image GAN performance. JISE. December 2024;8(2):213-225. doi:10.38088/jise.1497968
Chicago Yıldız, Eyyüp, Erkan Yüksel, and Selçuk Sevgen. “Investigating the Effect of Loss Functions on Single-Image GAN Performance”. Journal of Innovative Science and Engineering 8, no. 2 (December 2024): 213-25. https://doi.org/10.38088/jise.1497968.
EndNote Yıldız E, Yüksel E, Sevgen S (December 1, 2024) Investigating the effect of loss functions on single-image GAN performance. Journal of Innovative Science and Engineering 8 2 213–225.
IEEE E. Yıldız, E. Yüksel, and S. Sevgen, “Investigating the effect of loss functions on single-image GAN performance”, JISE, vol. 8, no. 2, pp. 213–225, 2024, doi: 10.38088/jise.1497968.
ISNAD Yıldız, Eyyüp et al. “Investigating the Effect of Loss Functions on Single-Image GAN Performance”. Journal of Innovative Science and Engineering 8/2 (December 2024), 213-225. https://doi.org/10.38088/jise.1497968.
JAMA Yıldız E, Yüksel E, Sevgen S. Investigating the effect of loss functions on single-image GAN performance. JISE. 2024;8:213–225.
MLA Yıldız, Eyyüp et al. “Investigating the Effect of Loss Functions on Single-Image GAN Performance”. Journal of Innovative Science and Engineering, vol. 8, no. 2, 2024, pp. 213-25, doi:10.38088/jise.1497968.
Vancouver Yıldız E, Yüksel E, Sevgen S. Investigating the effect of loss functions on single-image GAN performance. JISE. 2024;8(2):213-25.


Creative Commons License

The works published in Journal of Innovative Science and Engineering (JISE) are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.