Code Review Practice - LLM Prompting and Evaluation
I regularly review and evaluate code snippets in Python, Java, and SQL for correctness and style. My process involves creating structured prompts that simulate large language model (LLM) training and evaluation scenarios (a minimal sketch of such a prompt follows this list). These practices support the development and refinement of coding standards for AI training.
• Analyzed and rated code against industry best practices.
• Developed and tested prompts for generative AI and LLM workflows.
• Built a reference database of code review findings and annotated issues (see the second sketch below).
• Deepened familiarity with model evaluation and code-quality criteria in AI contexts.
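
As an illustration of the structured prompts mentioned above, here is a minimal Python sketch assuming a simple two-dimension rubric. The RUBRIC dictionary, the build_review_prompt helper, and the score anchors are hypothetical; they indicate only the general shape of such a prompt, not any specific production template.

```python
# Hypothetical sketch: assembling a structured code-review prompt from a
# rubric. All names and rubric wording here are illustrative examples.

# Rubric dimensions with 1-5 score anchors (illustrative values).
RUBRIC = {
    "correctness": "1 = does not run or gives wrong output; 5 = correct on all stated cases",
    "style": "1 = ignores language conventions; 5 = idiomatic and consistently formatted",
}

def build_review_prompt(language: str, snippet: str) -> str:
    """Return a structured prompt asking a model to rate a snippet per rubric."""
    criteria = "\n".join(f"- {name}: {anchors}" for name, anchors in RUBRIC.items())
    return (
        f"You are reviewing a {language} snippet. Rate it on each criterion "
        f"from 1 to 5 and justify each score in one sentence.\n\n"
        f"Criteria:\n{criteria}\n\n"
        f"Snippet:\n{snippet}\n\n"
        'Respond as JSON: {"correctness": <int>, "style": <int>, "notes": "<string>"}'
    )

if __name__ == "__main__":
    sample = "def add(a, b):\n    return a + b"
    print(build_review_prompt("python", sample))
```

Keeping the rubric in a plain dictionary makes it easy to add or reword criteria between evaluation rounds without touching the prompt-building logic.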
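The reference database of findings can be pictured along these lines. This is a minimal sketch using Python's standard sqlite3 module; the findings table, its columns, and the sample row are hypothetical, not the actual schema.

```python
# Hypothetical sketch: a minimal SQLite store for code-review findings.
# The table name, columns, and sample data are illustrative only.
import sqlite3

def init_db(path: str = "review_findings.db") -> sqlite3.Connection:
    """Create (if needed) and return a connection to the findings database."""
    conn = sqlite3.connect(path)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS findings (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            language TEXT NOT NULL,   -- e.g. 'python', 'java', 'sql'
            issue TEXT NOT NULL,      -- short description of the problem
            severity TEXT NOT NULL,   -- e.g. 'correctness' or 'style'
            annotation TEXT           -- reviewer's note or suggested fix
        )
        """
    )
    return conn

if __name__ == "__main__":
    conn = init_db(":memory:")  # in-memory database for demonstration
    conn.execute(
        "INSERT INTO findings (language, issue, severity, annotation) "
        "VALUES (?, ?, ?, ?)",
        ("python", "mutable default argument", "correctness",
         "Replace def f(x=[]) with def f(x=None) and initialize inside."),
    )
    for row in conn.execute("SELECT language, issue, severity FROM findings"):
        print(row)
    conn.close()
```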