Olusegun Opalanwo

Fullstack Developer

Ilesa, Nigeria
$20.00/hr · Intermediate · Other · Scale AI · Clickworker

Key Skills

Software

Other
Scale AI
Clickworker
Appen
SuperAnnotate

Top Subject Matter

No subject matter listed

Top Data Types

Text
Computer Code Programming
Image

Top Label Types

Classification
Text Summarization
Evaluation Rating
Action Recognition
Prompt Response Writing SFT
Question Answering
Object Detection
Polygon
Segmentation
Data Collection

Freelancer Overview

I have hands-on experience in AI prompt engineering and data annotation, having worked with Scale AI to engineer and optimize prompts for high-level programming languages and technical problem-solving tasks. My responsibilities included rating, categorizing, and evaluating AI model outputs to improve dataset quality, which gave me a strong understanding of the data labeling process and of quality assurance in AI training data. With a background in computer science and expertise in PHP, Laravel, CodeIgniter, JavaScript, Node.js, Express, jQuery, C#, ASP.NET, VB.NET, MS SQL, Python, SQL, MongoDB, and AWS cloud technology, I am skilled at building functional systems, optimizing data pipelines, web scraping, and developing backend systems. I am passionate about improving data quality and have contributed to projects that required careful attention to detail, such as optimizing database performance and developing scalable data processing modules. I am eager to apply my technical skills and analytical mindset to advance the quality and effectiveness of AI training data. In addition, I am an experienced software developer.

Intermediate · English · Yoruba

Labeling Experience

Social Media Algorithm Evaluator

Other · Text · Classification · Text Summarization
As a social media algorithm trainer at Facebook, I evaluated and classified user-generated posts to enhance content recommendation systems. This included analyzing engagement metrics, applying machine learning techniques for sentiment analysis, and ensuring the accuracy of classification algorithms to improve user experience and content relevance. The project focused on developing a robust framework for verifying the authenticity of social media posts. This involved: Data Collection: gathering a diverse set of posts from various social media platforms to create a comprehensive dataset for analysis; Evaluation Criteria: establishing clear guidelines for evaluating the truthfulness of posts, including fact-checking against reliable sources and assessing the credibility of the information presented; and Classification Techniques: implementing machine learning algorithms, such as Naive Bayes and Support Vector Machines, to classify posts as true, false, or misleading based on the established criteria.


2023 - 2024
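The classification step described above can be sketched in miniature. The `TinyNaiveBayes` class below and the sample posts are illustrative only, not part of the original project; it implements a multinomial Naive Bayes over bag-of-words features with add-one smoothing, the simpler of the two algorithms named:

```python
from collections import Counter, defaultdict
import math

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes over bag-of-words features."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            # log prior + log likelihood with add-one (Laplace) smoothing
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical labeled posts standing in for the real dataset.
posts = [
    "official report confirms the event",
    "verified sources confirm the report",
    "shocking secret they do not want you to know",
    "unbelievable miracle cure hidden from you",
]
labels = ["true", "true", "misleading", "misleading"]
clf = TinyNaiveBayes().fit(posts, labels)
print(clf.predict("sources confirm the official report"))  # prints "true"
```

A production system would use a library classifier with proper tokenization and a fact-checked training set; the sketch only shows the shape of the technique.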

Prompt Rating - Scale AI

Computer Code Programming · Evaluation Rating
The AutoSxS (Automatic Side-by-Side) evaluation tool assesses the performance of generative AI models by comparing their responses to the same prompts. We rated the paired responses to judge their quality, ensuring that the evaluation was objective and aligned with predefined criteria for tasks such as summarization and question answering. The main tasks included pairwise comparison, evaluation against predefined criteria, dataset preparation, and assessing overall response quality and correctness.


2024 - 2024
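The pairwise-comparison aggregation at the heart of a side-by-side evaluation can be sketched as follows; the verdict strings and sample data are hypothetical, not drawn from the actual AutoSxS tooling:

```python
from collections import Counter

def summarize_sxs(judgments):
    """Aggregate per-prompt A-vs-B verdicts into win/tie rates.

    `judgments` is a list of "A", "B", or "tie" verdicts,
    one per prompt in the evaluation set.
    """
    counts = Counter(judgments)
    total = len(judgments)
    return {
        "win_rate_a": counts["A"] / total,
        "win_rate_b": counts["B"] / total,
        "tie_rate": counts["tie"] / total,
    }

verdicts = ["A", "A", "tie", "B", "A", "tie"]
summary = summarize_sxs(verdicts)
print(summary)  # win_rate_a is 0.5 for the sample verdicts above
```

Real side-by-side pipelines add per-criterion scores and significance testing, but the win-rate summary is the core output a rater's judgments feed into.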

Prompt Engineering - Scale AI

Computer Code Programming
AI prompt training involves designing effective instructions or queries that guide generative AI models toward accurate, relevant, and high-quality outputs. We focused on crafting clear, concise, and contextually relevant instructions. Effective prompts typically include specific details, clear objectives, and any necessary context to help the model understand the task, especially in code generation, code explanation, bug fixing, code error detection, and troubleshooting.


2024 - 2024
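The prompt structure described above (specific details, clear objectives, necessary context) can be sketched as a small template builder; the function name and field layout are illustrative assumptions, not an actual tool used on the project:

```python
def build_code_prompt(objective, language, context="", constraints=()):
    """Assemble a structured coding prompt from its components:
    a clear objective, the target language, optional context,
    and any explicit constraints."""
    parts = [f"Task: {objective}", f"Language: {language}"]
    if context:
        parts.append(f"Context: {context}")
    for constraint in constraints:
        parts.append(f"Constraint: {constraint}")
    return "\n".join(parts)

prompt = build_code_prompt(
    objective="Fix the off-by-one error in the loop below",
    language="Python",
    context="The function should sum the first n integers.",
    constraints=("Do not change the function signature",),
)
print(prompt)
```

Keeping each element on its own labeled line makes it easy to audit a prompt against the "specific details, clear objectives, context" checklist before it is sent to a model.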
Scale AI

AI Prompt Engineer

Scale AI · Computer Code Programming · Evaluation Rating
As an AI Prompt Engineer at Scale AI, I engineered and optimized AI prompts for evaluating programming languages and solving technical problems. I rated, categorized, and evaluated outputs from language models to improve overall dataset quality, and I checked and validated co-raters' labeling work for accuracy and adherence to standards.
• Optimized prompt structures for maximum clarity and model performance.
• Assessed and labeled model outputs for correctness, logic, and relevance.
• Conducted peer reviews and quality assurance checks on label validity.
• Collaborated closely with technical teams to refine data labeling protocols.


2024 - 2024
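One standard way to quantify the co-rater validation described above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The implementation and the sample labels below are illustrative; the source does not say which agreement metric the project used:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who labeled the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independent labeling at each
    # rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two raters on six model outputs.
a = ["correct", "correct", "wrong", "correct", "wrong", "correct"]
b = ["correct", "wrong", "wrong", "correct", "wrong", "correct"]
print(round(cohens_kappa(a, b), 3))  # prints 0.667
```

Values near 1 indicate raters are applying the guidelines consistently; low values flag label sets that need a peer review or a guideline clarification.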

Social Media Evaluator - Appen

Text · Question Answering
The Appen Social Media Evaluator role involves evaluating social media content for quality and relevance and requires strong analytical skills. As a Social Media Evaluator at Appen, my primary responsibilities included:
1. Evaluating content: reviewing and assessing posts, advertisements, and search results on various social media platforms to ensure they met quality and relevance standards.
2. Providing feedback: offering constructive feedback to enhance the credibility and effectiveness of the content.
3. Identifying issues: reporting potential violations of community guidelines or content quality issues.
4. Analyzing engagement: evaluating user engagement metrics to understand how well content resonated with the audience.


2022 - 2024

Education


Lagos State University

Bachelor of Science, Computer Science

2023 - 2023

Lagos State University

Bachelor, Computer Science

2018 - 2023

Work History


Livingston Research

IT Specialist and Research Assistant

Remote
2023 - Present

FillyCoder

Fullstack Developer

Remote
2023 - Present