AI Code Evaluator & Software Tester (Freelance – Outlier, uTest)
As an AI Code Evaluator & Software Tester, I reviewed AI-generated programming tasks and provided targeted feedback to improve model understanding of software logic. My primary responsibility was to assess the reasoning, syntax, and efficiency of machine-generated code for advanced coding challenges, work that directly contributed to refining machine learning models for programming expertise and correctness.

• Evaluated logic, syntax, and algorithmic soundness in AI-generated Python and C++ code submissions.
• Provided clear, structured feedback to support targeted machine learning model improvements.
• Performed rigorous quality assurance testing and debugged code for accuracy and reliability.
• Collaborated on a highly technical project aimed at improving model performance on complex coding tasks.