Peter

WordPress Developer (Freelance / Contract) in Contract Review, Compliance, and Legal Research

Akure, Nigeria
$20.00/hr · Intermediate · Other

Key Skills

Software

Other

Top Subject Matter

Legal Services & Contract Review
Regulatory Compliance & Risk Analysis
Legal Research & Document Analysis

Top Data Types

Image
Text
Document

Top Task Types

Bounding Box
Classification
Segmentation
Object Detection
Text Generation
Question Answering
Text Summarization
Fine-tuning
Transcription
Evaluation/Rating

Freelancer Overview

WordPress Developer (Freelance / Contract) in Contract Review, Compliance, and Legal Research. Brings 5+ years of professional experience across legal operations, contract review, compliance, and structured analysis. Education includes Bachelor of Science, Adekunle Ajasin University (2024). Well suited for text-focused AI training, including legal document review, compliance annotation, and rubric-based quality evaluation.

Intermediate · English · Yoruba

Labeling Experience

Medical Image Tumor Segmentation

Image · Segmentation
This ongoing project involves annotating MRI scans for tumor regions. A team of 5 board-certified radiologists (plus 2 QA reviewers) segmented tumors on 15,000 scans (about 450,000 image slices) using ITK-SNAP and Labelbox. We established strict guidelines for tumor boundaries and tissue classes, and each annotator passed benchmark tests on a pilot set. A double-annotation process (20% overlap) yielded a Dice coefficient of ~0.78 and Cohen’s κ ≈ 0.82 on a gold-standard subset. These high-quality annotations boosted a CNN’s tumor detection recall by ~15% over the baseline while keeping false positives low.

2024 - Present
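The two QA metrics above can be sketched in a few lines. This is an illustrative re-implementation, not the project's actual pipeline; the toy masks and labels are invented for the example.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

def cohens_kappa(labels_a, labels_b) -> float:
    """Chance-corrected agreement between two annotators' label sequences."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    observed = (a == b).mean()
    # Expected agreement if both annotators labeled independently
    expected = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))
    return (observed - expected) / (1.0 - expected)

# Toy example: two annotators' tumor masks on a 4x4 slice
m1 = np.array([[0,0,1,1],[0,1,1,1],[0,1,1,0],[0,0,0,0]])
m2 = np.array([[0,0,1,1],[0,0,1,1],[0,1,1,0],[0,0,0,0]])
print(round(dice_coefficient(m1, m2), 3))  # → 0.923
```

In a double-annotation workflow these would be computed per scan on the 20% overlap set and averaged.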

Multi-Label Product Feedback Classification

Text · Classification
In this project, the team collected ~25,000 e-commerce product reviews and annotated each review with multiple labels (e.g. product category, sentiment, feature tags). We developed detailed annotation guidelines and ran a pilot with 3 annotators (plus 1 QA reviewer) to calibrate label consistency. Weekly QA reviews (including blind re-labeling of 10% of data) tracked inter-annotator agreement (Cohen’s κ ~0.85) and achieved an overall F1 score ≈0.92 on a held-out test set. The high-quality labels trained an NLP model that improved sentiment classification accuracy from 80% to 90% and reduced false negatives by ~50%.

2024 - 2024
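An overall F1 score on a multi-label task is typically micro-averaged over all label decisions. A minimal sketch, assuming each review's annotation is a set of labels (the label names below are hypothetical):

```python
def micro_f1(true_sets, pred_sets) -> float:
    """Micro-averaged F1 over multi-label annotations.

    Each element of true_sets / pred_sets is the set of labels
    assigned to one review.
    """
    tp = fp = fn = 0
    for truth, pred in zip(true_sets, pred_sets):
        tp += len(truth & pred)   # labels both gold and model assigned
        fp += len(pred - truth)   # labels only the model assigned
        fn += len(truth - pred)   # labels only the gold set assigned
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy example: gold vs. model labels for three reviews
gold = [{"electronics", "positive"}, {"apparel", "negative"}, {"positive"}]
pred = [{"electronics", "positive"}, {"apparel"}, {"positive", "negative"}]
print(round(micro_f1(gold, pred), 3))  # → 0.8
```

The blind re-labeled 10% subset can be scored the same way, treating one annotator's labels as "predictions" against the other's.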

LiDAR Object Detection Annotation

3D Sensor · Bounding Box
In a 12-month annotation effort, we labeled 50,000 LiDAR frames for vehicles and pedestrians. A team of 10 annotators (with 3 QA leads) used CVAT and a Scale AI pipeline to draw 3D bounding boxes. We created thorough guidelines (including occlusion rules) and performed double-labeling on 10% of frames to compute consistency. The process achieved inter-annotator Cohen’s κ ~0.90 and average 3D IoU ≈0.75 on QA sets. Incorporating this labeled data into the perception model reduced its object classification error by ~20% and increased recall to 0.88 (mAP improvement of ~5%).

2023 - 2023
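The 3D IoU used in that QA process can be illustrated for the simplified axis-aligned case; production LiDAR boxes are usually oriented (with a yaw angle), so this sketch only shows the overlap computation, not the project's actual evaluation code.

```python
def iou_3d(box_a, box_b) -> float:
    """IoU of two axis-aligned 3D boxes.

    Boxes are (xmin, ymin, zmin, xmax, ymax, zmax). Oriented boxes
    would additionally require rotating into a shared frame.
    """
    overlap = 1.0
    for axis in range(3):
        lo = max(box_a[axis], box_b[axis])
        hi = min(box_a[axis + 3], box_b[axis + 3])
        if hi <= lo:          # no overlap along this axis
            return 0.0
        overlap *= hi - lo

    def volume(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    union = volume(box_a) + volume(box_b) - overlap
    return overlap / union

a = (0, 0, 0, 2, 2, 2)   # 2x2x2 box, volume 8
b = (1, 1, 0, 3, 3, 2)   # same box shifted by 1 in x and y
print(round(iou_3d(a, b), 3))  # → 0.143
```

On a double-labeled QA set, averaging this per matched box pair gives the reported consistency figure.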

Education

Adekunle Ajasin University

Bachelor of Science, Computer Science

2019 - 2024

Work History

Self-Employed

WordPress Developer (Freelance / Contract)

Akure
2022 - Present

Livepetals Systems Limited

Full Stack Development Intern

Akure
2023 - 2024