AI-Assisted Test Automation Framework (Data Labeling AI Training)
Integrated LLM tools such as Claude Code and Cursor to automatically generate and validate complex test cases for software systems. Developed automated black-box and functional testing workflows that use AI to augment manual effort and broaden test diversity. The system produced high-quality test data used to benchmark and score internal software tools for robustness and performance.
• Used AI models to generate edge-case code scenarios for functional validation
• Improved test coverage and reduced manual workload for development teams
• Enabled continuous integration of AI-assisted test data into internal pipelines
• Focused on generating, labeling, and validating programming code for test coverage
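The validation loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual internal pipeline: the function names, the hardcoded stand-in for the LLM response, and the idempotence property used for scoring are all hypothetical assumptions.

```python
import json

def generate_edge_cases(signature: str) -> list:
    # Hypothetical stand-in for an LLM call (e.g. via Claude Code):
    # in a real pipeline this would prompt the model for edge-case
    # inputs; here the "response" is hardcoded so the sketch runs.
    response = json.dumps(["", "aaa", "ABC", "MiXeD", "123", "  spaced  "])
    return json.loads(response)

def normalize(s: str) -> str:
    """Example function under test: trim whitespace and lowercase."""
    return s.strip().lower()

def run_black_box_suite(fn, cases):
    """Score a function against generated cases using a black-box
    property: a case passes if the output is idempotent, i.e.
    fn(fn(x)) == fn(x)."""
    results = {}
    for case in cases:
        out = fn(case)
        results[case] = (fn(out) == out)
    return results

if __name__ == "__main__":
    cases = generate_edge_cases("normalize(s: str) -> str")
    report = run_black_box_suite(normalize, cases)
    passed = sum(report.values())
    print(f"{passed}/{len(report)} generated cases satisfied the property")
```

The pass/fail report from a loop like this is the kind of labeled test data that can be fed back into benchmarking and CI scoring.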