Review

Automatic Code Generation Techniques from Images or Sketches: A Review Study

Year 2023, Volume: 16, Issue: 2, 125 - 136, 20.11.2023
https://doi.org/10.54525/tbbmd.1190177

Abstract

In the process of developing software, design and prototyping are the most important and time-consuming stages. Users attach great importance to the visual interfaces and designs of software: an application with a well-designed visual interface is preferred over a similar one with better functionality but an unusable interface. In the visual interface design process, developers first sketch the design on paper and then turn it into a digital design with visual interface design programs. In the next step, the design must be coded with various markup languages (XML, HTML, CSS, etc.) or directly with programming languages. The aim of automatic code generation approaches is to develop efficient and fast applications in a short time with minimum software-developer cost. In this study, a broad literature review was compiled of studies that perform automatic code generation using various methods. In the reviewed articles, mostly deep learning, image processing, artificial neural network, or machine learning methods were used. This review is intended to guide researchers who will work in this field.
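To illustrate the kind of pipeline the reviewed studies describe, the sketch below shows a hypothetical final stage of an image-to-code system: UI elements detected in a sketch (hard-coded here for illustration) are mapped to HTML fragments. The element names, templates, and layout are illustrative assumptions only and are not taken from any specific reviewed paper.

```python
# Hypothetical sketch of the code-generation stage of an image-to-code
# pipeline. In a real system, `detected_elements` would come from an
# object detector (e.g. a CNN) run on the sketch image; here it is
# hard-coded for illustration.

# Detected elements as (type, text) pairs.
detected_elements = [
    ("header", "My App"),
    ("text_input", "username"),
    ("button", "Sign in"),
]

# Illustrative mapping from detected element types to HTML templates.
TEMPLATES = {
    "header": "<h1>{text}</h1>",
    "text_input": '<input type="text" placeholder="{text}">',
    "button": "<button>{text}</button>",
}

def elements_to_html(elements):
    """Render each detected element through its HTML template."""
    body = "\n".join(
        TEMPLATES[kind].format(text=text) for kind, text in elements
    )
    return f"<body>\n{body}\n</body>"

print(elements_to_html(detected_elements))
```

Real systems replace this fixed template lookup with learned decoders (e.g. encoder-decoder models that emit a DSL or markup tokens), but the input/output contract is the same: detected UI components in, markup out.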

References

  • D. Stone, C. Jarrett, M. Woodroffe, and S. Minocha, User interface design and evaluation. Elsevier, 2005.
  • S. Mohian and C. Csallner, “Doodle2App: Native app code by freehand UI sketching,” in Proceedings - 2020 IEEE/ACM 7th International Conference on Mobile Software Engineering and Systems, MOBILESoft 2020, Jul. 2020, pp. 81–84. doi: 10.1145/3387905.3388607.
  • T. M. Mitchell, Machine learning, vol. 1, no. 9. McGraw-Hill, New York, 1997.
  • D. Ozdemir and M. S. Kunduraci, “Comparison of Deep Learning Techniques for Classification of the Insects in Order Level With Mobile Software Application,” IEEE Access, vol. 10, pp. 35675–35684, 2022, doi: 10.1109/ACCESS.2022.3163380.
  • M. F. Kunduraci and H. K. Örnek, “Vehicle Brand Detection Using Deep Learning Algorithms,” International Journal of Applied Mathematics Electronics and Computers, pp. 0–3, 2019.
  • M. Mandal, “Introduction to Convolutional Neural Networks (CNN),” analyticsvidhya.com, May 01, 2021.
  • S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” pp. 1–14, 2016.
  • J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.
  • W. Liu, “SSD: Single Shot MultiBox Detector,” Dec. 2015.
  • M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in International conference on machine learning, 2019, pp. 6105–6114.
  • R. Yang and Y. Yu, “Artificial Convolutional Neural Network in Object Detection and Semantic Segmentation for Medical Imaging Analysis,” Frontiers in Oncology, vol. 11. Frontiers Media S.A., Mar. 09, 2021. doi: 10.3389/fonc.2021.638182.
  • L. R. Medsker and L. C. Jain, “Recurrent neural networks,” Design and Applications, vol. 5, pp. 64–67, 2001.
  • M. Gao, G. Shi, and S. Li, “Online prediction of ship behavior with automatic identification system sensor data using bidirectional long short-term memory recurrent neural network,” Sensors (Switzerland), vol. 18, no. 12, Dec. 2018, doi: 10.3390/s18124211.
  • S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput, vol. 9, no. 8, pp. 1735–1780, 1997.
  • Y. Guo, X. Cao, B. Liu, and K. Peng, “El Nino index prediction using deep learning with ensemble empirical mode decomposition,” Symmetry (Basel), vol. 12, no. 6, Jun. 2020, doi: 10.3390/SYM12060893.
  • J. A. Landay and B. A. Myers, “Interactive sketching for the early stages of user interface design,” in Proceedings of the SIGCHI conference on Human factors in computing systems, 1995, pp. 43–50.
  • D. Baulé, C. G. von Wangenheim, A. von Wangenheim, J. C. R. Hauck, and E. C. V. Júnior, “Automatic code generation from sketches of mobile applications in end-user development using Deep Learning,” arXiv preprint arXiv:2103.05704, 2021.
  • Y. Han, J. He, and Q. Dong, “CSSSketch2Code: An automatic method to generate web pages with CSS style,” in ACM International Conference Proceeding Series, Oct. 2018, pp. 29–35. doi: 10.1145/3292448.3292455.
  • B. Asiroglu et al., “Automatic HTML Code Generation from Mock-up Images Using Machine Learning Techniques,” IEEE, 2019.
  • T. Calò and L. de Russis, “Style-Aware Sketch-to-Code Conversion for the Web,” in EICS 2022 - Companion of the 2022 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Jun. 2022, pp. 44–47. doi: 10.1145/3531706.3536462.
  • G. Vitkare, R. Jejurkar, S. Kamble, Y. Thakare, and A. P. Lahare, “Automated HTML Code Generation from Hand Drawn Images Using Machine Learning Methods.”
  • B. B. Adefris, “Automatic Code Generation From Low Fidelity Graphical User Interface Sketches Using Deep Learning,” 2020.
  • Y. S. Yun, J. Park, J. Jung, S. Eun, S. Cha, and S. S. So, “Automatic Mobile Screen Translation Using Object Detection Approach Based on Deep Neural Networks,” Journal of Korea Multimedia Society, vol. 21, no. 11, pp. 1305–1316, 2018, doi: 10.9717/kmms.2018.21.11.1305.
  • Y. S. Yun, J. Jung, S. Eun, S. S. So, and J. Heo, “Detection of GUI elements on sketch images using object detector based on deep neural networks,” in Lecture Notes in Electrical Engineering, 2019, vol. 502, pp. 86–90. doi: 10.1007/978-981-13-0311-1_16.
  • Jisu Park, Jinman Jung, Seungbae Eun, and Young-Sun Yun, “UI Elements Identification for Mobile Applications based on Deep Learning using Symbol Marker,” The Journal of The Institute of Internet, Broadcasting and Communication (IIBC), vol. 20, no. 3, pp. 89–95, Mar. 2020, doi: https://doi.org/10.7236/JIIBC.2020.20.3.89.
  • A. A. Rahmadi and A. Sudaryanto, “Visual Recognition Of Graphical User Interface Components Using Deep Learning Technique,” Surabaya, Jan. 2020.
  • V. Jain, P. Agrawal, S. Banga, R. Kapoor, and S. Gulyani, “Sketch2Code: Transformation of Sketches to UI in Real-time Using Deep Neural Network,” Oct. 2019, [Online]. Available: http://arxiv.org/abs/1910.08930
  • S. Kim et al., “Identifying UI Widgets of Mobile Applications from Sketch Images,” 2018.
  • X. Ge, “Android GUI Search Using Hand-drawn Sketches.”
  • W. O. Galitz, The essential guide to user interface design: an introduction to GUI design principles and techniques. John Wiley & Sons, 2007.
  • R. Lal, Digital design essentials: 100 ways to design better desktop, web, and mobile interfaces. Rockport Pub, 2013.
  • D. Gavalas and D. Economou, “Development platforms for mobile applications: Status and trends,” IEEE Softw, vol. 28, no. 1, pp. 77–86, 2010.
  • X. Pang, Y. Zhou, P. Li, W. Lin, W. Wu, and J. Z. Wang, “A novel syntax-aware automatic graphics code generation with attention-based deep neural network,” Journal of Network and Computer Applications, vol. 161, Jul. 2020, doi: 10.1016/j.jnca.2020.102636.
  • Y. Liu, S. Chen, L. Fan, L. Ma, T. Su, and L. Xu, “Automated Cross-Platform GUI Code Generation for Mobile Apps,” 2019.
  • C. Chen, T. Su, G. Meng, Z. Xing, and Y. Liu, “From UI design image to GUI skeleton: A neural machine translator to bootstrap mobile GUI implementation,” in Proceedings - International Conference on Software Engineering, May 2018, pp. 665–676. doi: 10.1145/3180155.3180240.
  • C. Chen, S. Feng, Z. Xing, L. Liu, S. Zhao, and J. Wang, “Gallery D.C.: Design search and knowledge discovery through auto-created GUI component gallery,” Proc ACM Hum Comput Interact, vol. 3, no. CSCW, Nov. 2019, doi: 10.1145/3359282.
  • X. Xiao, X. Wang, Z. Cao, H. Wang, and P. Gao, “IconIntent: Automatic Identification of Sensitive UI Widgets based on Icon Classification for Android Apps.”
  • N. Sethi, A. Kumar, and R. Swami, “Automated web development: Theme detection and code generation using Mix-NLP,” in ACM International Conference Proceeding Series, Jun. 2019. doi: 10.1145/3339311.3339356.
  • K. Kolthoff, “Automatic generation of graphical user interface prototypes from unrestricted natural language requirements,” in Proceedings - 2019 34th IEEE/ACM International Conference on Automated Software Engineering, ASE 2019, Nov. 2019, pp. 1234–1237. doi: 10.1109/ASE.2019.00148.
  • T. T. Nguyen, P. M. Vu, H. V. Pham, and T. T. Nguyen, “Deep learning UI design patterns of mobile apps,” in Proceedings - International Conference on Software Engineering, May 2018, pp. 65–68. doi: 10.1145/3183399.3183422.
  • J. Chen et al., “Object detection for graphical user interface: Old fashioned or deep learning or a combination?,” in ESEC/FSE 2020 - Proceedings of the 28th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Nov. 2020, pp. 1202–1214. doi: 10.1145/3368089.3409691.
  • M. Xie, S. Feng, Z. Xing, J. Chen, and C. Chen, “UIED: A hybrid tool for GUI element detection,” in ESEC/FSE 2020 - Proceedings of the 28th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Nov. 2020, pp. 1655–1659. doi: 10.1145/3368089.3417940.
  • S. Mohian and C. Csallner, “PSDoodle: Searching for App Screens via Interactive Sketching,” Apr. 2022, doi: 10.1145/3524613.3527807.
  • W. Y. Chen, P. Podstreleny, W. H. Cheng, Y. Y. Chen, and K. L. Hua, “Code generation from a graphical user interface via attention-based encoder–decoder model,” Multimed Syst, vol. 28, no. 1, pp. 121–130, Feb. 2022, doi: 10.1007/s00530-021-00804-7.
  • V. Saravanan, “Automated Web Design And Code Generation Using Deep Learning,” Turkish Journal of Computer and Mathematics Education (TURCOMAT), vol. 12, no. 6, pp. 364–373, 2021.
  • T. Zhao, C. Chen, Y. Liu, and X. Zhu, “Guigan: Learning to generate gui designs using generative adversarial networks,” in 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 2021, pp. 748–760.
  • Y. Xu, L. Bo, X. Sun, B. Li, J. Jiang, and W. Zhou, “image2emmet: Automatic code generation from web user interface image,” Journal of Software: Evolution and Process, vol. 33, no. 8, p. e2369, 2021.
  • J. Wu, X. Zhang, J. Nichols, and J. P. Bigham, “Screen Parsing: Towards Reverse Engineering of UI Models from Screenshots,” in The 34th Annual ACM Symposium on User Interface Software and Technology, 2021, pp. 470–483.
  • K. Moran, B. Li, C. Bernal-Cárdenas, D. Jelf, and D. Poshyvanyk, “Automated reporting of GUI design violations for mobile apps,” May 2018, pp. 165–175. doi: 10.1145/3180155.3180246.
  • K. Moran, C. Bernal-Cárdenas, M. Curcio, R. Bonett, and D. Poshyvanyk, “Machine learning-based prototyping of graphical user interfaces for mobile apps,” IEEE Transactions on Software Engineering, vol. 46, no. 2, pp. 196–221, 2018.
  • A. A. Abdelhamid, S. R. Alotaibi, and A. Mousa, “Deep learning-based prototyping of android gui from hand-drawn mockups,” IET Software, vol. 14, no. 7, pp. 816–824, Dec. 2020, doi: 10.1049/iet-sen.2019.0378.
  • T. Bouças and A. Esteves, “Converting web pages mockups to HTML using machine learning,” in WEBIST 2020 - Proceedings of the 16th International Conference on Web Information Systems and Technologies, 2020, pp. 217–224. doi: 10.5220/0010116302170224.
  • G. Jadhav, H. Gaikwad, and M. Gawande, “Generation Source Code from Hand Draw Image–A Machine Learning Approach,” Generation Source Code from Hand Draw Image–A Machine Learning Approach (February 25, 2022), 2022.
  • B. Deka et al., “Rico: A mobile app dataset for building data-driven design applications,” in UIST 2017 - Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, Oct. 2017, pp. 845–854. doi: 10.1145/3126594.3126651.
  • B. Deka, Z. Huang, and R. Kumar, “ERICA: Interaction Mining Mobile Apps,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Oct. 2016, pp. 767–776. doi: 10.1145/2984511.2984581.
  • A. S. Shirazi, N. Henze, A. Schmidt, R. Goldberg, B. Schmidt, and H. Schmauder, Insights into Layout Patterns of Mobile User Interfaces by an Automatic Analysis of Android Apps. 2013.
  • X. Zhang, L. de Greef, and S. White, “Screen Recognition: Creating Accessibility Metadata for Mobile Applications from Pixels,” in Conference on Human Factors in Computing Systems - Proceedings, May 2021. doi: 10.1145/3411764.3445186.
  • Y. Liu, Y. Zhou, S. Wen, and C. Tang, “A strategy on selecting performance metrics for classifier evaluation,” International Journal of Mobile Computing and Multimedia Communications (IJMCMC), vol. 6, no. 4, pp. 20–35, 2014.
  • K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th annual meeting of the Association for Computational Linguistics, 2002, pp. 311–318.
  • S. Banerjee and A. Lavie, “METEOR: An automatic metric for MT evaluation with improved correlation with human judgments,” in Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, 2005, pp. 65–72.
  • C.-Y. Lin, “Rouge: A package for automatic evaluation of summaries,” in Text summarization branches out, 2004, pp. 74–81.
  • J. Sauro and E. Kindlund, “A method to standardize usability metrics into a single score,” in Proceedings of the SIGCHI conference on Human factors in computing systems, 2005, pp. 401–409.


Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles (Review)
Authors

Musa Selman Kunduracı 0000-0001-9823-3387

Turgay Tugay Bilgin 0000-0002-9245-5728

Early View Date: October 22, 2023
Publication Date: November 20, 2023
Published in Issue: Year 2023, Volume: 16, Issue: 2

How to Cite

APA Kunduracı, M. S., & Bilgin, T. T. (2023). Görüntülerden veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması. Türkiye Bilişim Vakfı Bilgisayar Bilimleri Ve Mühendisliği Dergisi, 16(2), 125-136. https://doi.org/10.54525/tbbmd.1190177
AMA Kunduracı MS, Bilgin TT. Görüntülerden veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması. TBV-BBMD. Kasım 2023;16(2):125-136. doi:10.54525/tbbmd.1190177
Chicago Kunduracı, Musa Selman, ve Turgay Tugay Bilgin. “Görüntülerden Veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması”. Türkiye Bilişim Vakfı Bilgisayar Bilimleri Ve Mühendisliği Dergisi 16, sy. 2 (Kasım 2023): 125-36. https://doi.org/10.54525/tbbmd.1190177.
EndNote Kunduracı MS, Bilgin TT (01 Kasım 2023) Görüntülerden veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması. Türkiye Bilişim Vakfı Bilgisayar Bilimleri ve Mühendisliği Dergisi 16 2 125–136.
IEEE M. S. Kunduracı ve T. T. Bilgin, “Görüntülerden veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması”, TBV-BBMD, c. 16, sy. 2, ss. 125–136, 2023, doi: 10.54525/tbbmd.1190177.
ISNAD Kunduracı, Musa Selman - Bilgin, Turgay Tugay. “Görüntülerden Veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması”. Türkiye Bilişim Vakfı Bilgisayar Bilimleri ve Mühendisliği Dergisi 16/2 (Kasım 2023), 125-136. https://doi.org/10.54525/tbbmd.1190177.
JAMA Kunduracı MS, Bilgin TT. Görüntülerden veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması. TBV-BBMD. 2023;16:125–136.
MLA Kunduracı, Musa Selman ve Turgay Tugay Bilgin. “Görüntülerden Veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması”. Türkiye Bilişim Vakfı Bilgisayar Bilimleri Ve Mühendisliği Dergisi, c. 16, sy. 2, 2023, ss. 125-36, doi:10.54525/tbbmd.1190177.
Vancouver Kunduracı MS, Bilgin TT. Görüntülerden veya Çizimlerden Otomatik Kod Oluşturma Teknikleri: Bir Derleme Çalışması. TBV-BBMD. 2023;16(2):125-36.

Article Acceptance

To submit a manuscript online, use the user registration/login.

The acceptance process for manuscripts submitted to the journal consists of the following stages:

1. In the first stage, every submitted manuscript is sent to at least two reviewers.

2. Reviewer assignment is performed by the journal editors. The journal's reviewer pool contains approximately 200 reviewers, classified by their fields of interest. Each reviewer is sent manuscripts in their area of expertise. Reviewers are selected so as to avoid conflicts of interest.

3. Author names are hidden in the manuscripts sent to reviewers.

4. Reviewers are given instructions on how to evaluate a manuscript and are asked to complete the evaluation form shown below.

5. Manuscripts that receive positive opinions from two reviewers undergo a similarity check by the editors. The similarity score of a manuscript is expected to be below 25%.

6. A manuscript that has passed all stages is reviewed by the editor for language and presentation, and the necessary corrections and improvements are made. The authors are informed if needed.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.