Aether
• I am actively contributing to the Aether project on Outlier, supporting the development and refinement of large language models through expert data labeling and evaluation.
• I review and assess model outputs for accuracy, relevance, clarity, safety, and adherence to instructions.
• I perform detailed annotation tasks, including rating responses, identifying policy or safety concerns, comparing multiple outputs, and selecting the highest‑quality answer.
• I create gold‑standard rewrites and corrected responses that serve as training data for model improvement.
• I generate new prompts, edge cases, and test scenarios to broaden model evaluation coverage across diverse subjects and reasoning types.
• I ensure high levels of accuracy, consistency, and clarity by closely following project guidelines and established quality standards.
• I participate in ongoing quality‑control reviews and feedback cycles to continually refine my annotations.