Desktop Support Engineer in Contract Review, Compliance, and Legal Research
London, United Kingdom
$25.00/hr · Intermediate
Key Skills
Software
HiveMind
Dataturk
Datasaur
CVAT
Data Annotation Tech
CrowdSource
AWS SageMaker
Anno-Mage
Roboflow
Remotasks
Telus
Top Subject Matter
Legal Services & Contract Review
Regulatory Compliance & Risk Analysis
Legal Research & Document Analysis
Top Data Types
Video
Text
Document
Top Task Types
Bounding Box
Entity (NER) Classification
Segmentation
Polygon
Question Answering
Text Generation
Text Summarization
Evaluation/Rating
Freelancer Overview
Desktop Support Engineer specializing in contract review, compliance, and legal research, with 12+ years of professional experience across complex workflows, research, and quality-focused execution.
Education includes a Certificate from North Kent College and a Bachelor of Science from London Metropolitan University.
Languages: English (Intermediate), Yoruba
Labeling Experience
Data Labeling
Image · Entity (NER) Classification
I have worked on several large-scale data labeling and AI training projects across computer vision and natural language processing, handling datasets ranging from 50,000 to over 500,000 data points per project.
In a computer vision project for an autonomous driving use case, I annotated over 120,000 image and video frames. My tasks included drawing bounding boxes around vehicles, pedestrians, and cyclists, performing polygon annotations for irregular objects, and semantic segmentation for road elements such as lanes, sidewalks, and traffic signs. I also handled keypoint annotation for pedestrian movement tracking. The project required careful handling of edge cases such as occlusion, low visibility, and crowded urban scenes.
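Bounding-box work of this kind is usually audited against reference labels with an intersection-over-union (IoU) check. A minimal sketch, assuming boxes in `[x, y, width, height]` format (an illustrative convention, not taken from any specific project spec):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x, y, width, height]."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Coordinates of the overlapping rectangle, if any.
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 region.
print(iou([0, 0, 10, 10], [5, 5, 10, 10]))  # ~0.143
```

An annotation passes review when its IoU with the gold-standard box clears a project-defined threshold (0.5 is a common starting point).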
For a retail analytics project, I worked on annotating approximately 80,000 shelf images. My responsibilities included object detection, SKU-level classification, and attribute tagging, for example brand, packaging type, and placement. This required strict adherence to a detailed taxonomy and consistent labeling across visually similar products.
On the NLP side, I contributed to a customer support AI dataset consisting of over 200,000 chat and text records. I performed tasks such as intent classification, sentiment labeling, and named entity recognition (NER), identifying entities like names, locations, order IDs, and product references. I also worked on a content moderation project, labeling text and image data for categories such as spam, abusive language, and policy-violating content.
To ensure high-quality outputs, I followed rigorous quality assurance measures, including multi-level review processes, peer audits, and gold-standard benchmarking. I consistently maintained high inter-annotator agreement (IAA) by strictly following annotation guidelines and contributing to their refinement when ambiguities were identified. I performed regular self-QA checks, error analysis, and batch validation before submission.
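Inter-annotator agreement of the kind mentioned above is commonly scored with Cohen's kappa, which discounts agreement expected by chance. A minimal sketch for two annotators (the label values are illustrative, not drawn from any actual project):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["spam", "ok", "ok", "abuse", "ok"]
b = ["spam", "ok", "abuse", "abuse", "ok"]
print(round(cohens_kappa(a, b), 3))  # 0.688
```

Kappa values above roughly 0.8 are generally read as strong agreement; lower scores flag guideline ambiguities worth refining.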
Additionally, I adhered to key performance metrics such as precision, recall, and accuracy thresholds defined by the project. I also ensured consistency, completeness, and compliance with project-specific guidelines while meeting turnaround time requirements. My focus on quality and attention to detail helped reduce rework rates and improve overall dataset reliability for downstream AI model training.
2025
Education
Baptist Secondary School
General Certificate of Secondary Education, General Education