Social Media Quality Evaluation (Ads & Content Personalization)
Project Scope
Conducted large-scale data annotation for Project Nile, a long-term AI initiative aimed at optimizing feed personalization and ad relevance for a global social media leader (Meta/Facebook). Served as a human-in-the-loop (HITL) annotator, providing high-quality labels used to train and validate machine learning models on user intent, cultural context, and content safety.

Specific Tasks Performed
- Ad & Search Evaluation: Critically analyzed 20+ ads or search results per hour, rating each against a rigorous Needs Met and Relevancy scale.
- Intent Mapping: Deciphered complex user queries to determine the specific informational or transactional intent behind social media interactions.
- Content Safety Labeling: Identified and flagged low-quality content, misinformation, and offensive material according to evolving platform policies.

Project Size
Contributed to a large distributed workforce, processing hundreds of data units monthly over a continuous 6-month period. Maintained consistent output in a high-volume production environment, typically averaging 29 hours/week.

Quality Measures Adhered To
- Accuracy Thresholds: Consistently maintained an accuracy score of 85%.
- Instructional Adherence: Followed 100+ pages of detailed project-specific guidelines, adapting quickly to frequent algorithmic updates and policy changes.
- Inter-rater Reliability: Participated in consensus-based tasks where judgments from multiple annotators were compared to ensure data consistency and reduce subjective bias.
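
The consensus-based reliability checks described above can be illustrated with a small agreement calculation. This is only a sketch: the source does not name the metric the project used, so Cohen's kappa (a standard two-rater, chance-corrected agreement statistic) is assumed here, and the annotator labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance.

    kappa = (p_observed - p_expected) / (1 - p_expected)
    where p_expected is the agreement two raters would reach by chance
    given their individual label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical relevancy judgments from two annotators on the same 8 ads.
a = ["relevant", "relevant", "off-topic", "relevant",
     "unsafe", "relevant", "off-topic", "relevant"]
b = ["relevant", "off-topic", "off-topic", "relevant",
     "unsafe", "relevant", "off-topic", "relevant"]

print(round(cohens_kappa(a, b), 3))  # substantial, but not perfect, agreement
```

Values near 1.0 indicate near-perfect agreement; values near 0 mean the raters agree no more often than chance, which is the signal a consensus pipeline uses to flag ambiguous guidelines or drifting annotators.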