Generative AI in Automated Software Testing: A Comparative Study

International Journal of Science and Technology (IJST)

An open-access, peer-reviewed, quarterly journal

ISSN: 3049-1118

Article Title

Generative AI in Automated Software Testing: A Comparative Study

Author(s) Ayush Mishra.
Country India
Abstract

Software testing is a crucial phase in the software development lifecycle, ensuring quality, reliability, and performance. Traditional automated testing tools, such as Selenium and JUnit, have improved efficiency but often require extensive manual intervention for test case creation. Recent advancements in Generative AI, particularly models like GPT-4, Codex, and CodeT5, have introduced a new paradigm in test automation by generating intelligent, dynamic test cases with minimal human involvement. This paper presents a comparative study of Generative AI models in automated software testing, analyzing their effectiveness in terms of test coverage, accuracy, execution time, and false positive rates. We benchmark multiple AI-driven testing approaches against traditional methods and evaluate their strengths and limitations. Experimental results indicate that Generative AI significantly enhances test efficiency, with models like GPT-4 achieving up to 92% test coverage and a 95% accuracy rate. However, challenges such as AI hallucinations, dependence on training data, and ethical considerations remain critical concerns.
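The abstract evaluates AI-generated test suites on test coverage, accuracy, and false positive rate. As a minimal illustrative sketch (the formulas below are the standard definitions of these metrics, not the paper's own evaluation code, and all numbers are hypothetical), these quantities can be computed from test-run outcomes as follows:

```python
# Minimal sketch: the standard definitions of the evaluation metrics
# named in the abstract (coverage, accuracy, false positive rate).
# All inputs and the example numbers are illustrative, not the paper's data.

def coverage(lines_executed: int, lines_total: int) -> float:
    """Fraction of code lines exercised by the generated test suite."""
    return lines_executed / lines_total

def accuracy(true_pos: int, true_neg: int,
             false_pos: int, false_neg: int) -> float:
    """Share of test verdicts (fault flagged / not flagged) that were correct."""
    total = true_pos + true_neg + false_pos + false_neg
    return (true_pos + true_neg) / total

def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """Share of correct code wrongly flagged as faulty."""
    return false_pos / (false_pos + true_neg)

# Hypothetical run: a generated suite touching 92 of 100 lines,
# with verdicts TP=60, TN=35, FP=3, FN=2.
print(coverage(92, 100))                       # 0.92
print(accuracy(60, 35, 3, 2))                  # 0.95
print(round(false_positive_rate(3, 35), 3))    # 0.079
```

Under these illustrative counts, the sketch reproduces figures of the same magnitude as the abstract's reported 92% coverage and 95% accuracy.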

Area Computer Engineering
Published In Volume 2, Issue 1, February 2025
Published On 09-02-2025
Cite This Mishra, A. (2025). Generative AI in Automated Software Testing: A Comparative Study. International Journal of Science and Technology, 2(1), pp. 1-13.
