Daniel Mwaka

AI Telco Troubleshooting Challenge (LLM Fine-tuning)

Nairobi, Kenya
Intermediate
Other

Key Skills

Software

Other

Top Subject Matter

Telecommunications/Network Signals
Education/K-12 Mathematics
Social Media/User Sentiment Analysis

Top Data Types

Text
Document

Top Task Types

Fine Tuning

Freelancer Overview

AI Telco Troubleshooting Challenge (LLM Fine-tuning). Brings 7+ years of professional experience across complex technical workflows, research, and quality-focused execution. Education includes a Certificate from Flatiron School (2025) and a Bachelor of Science from Dedan Kimathi University of Technology (2019). AI-training focus includes data types such as Text and labeling workflows including Fine-tuning.


Labeling Experience

Rubric Editing and Evaluation Instruction Review for Failure-Axis Alignment of LLM Chatbots

Text, Text Generation
Review multi-turn conversations, evaluate pre-provided rubric criteria against three LLM chatbot model responses, edit rubrics when necessary, and rate each rubric on quality dimensions.


2026 - 2026

Twitter Sentiment Analysis (Transformer Fine-tuning)

Other, Text, Fine Tuning
Fine-tuned the RoBERTa transformer to classify sentiment in user posts about Google and Apple products. Responsible for preparing labeled training data, training the model, and evaluating its effectiveness. Achieved a high macro F1 score, demonstrating the quality of the labeling and fine-tuning process.
• Curated and labeled sentiment data from Twitter
• Implemented data preprocessing and model evaluation steps
• Used Python and deep learning libraries for experiments
• Improved the accuracy of sentiment classification


2024 - 2024
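The macro F1 score cited above is the unweighted mean of per-class F1 scores, so minority sentiment classes count as much as majority ones. A minimal, dependency-free sketch with hypothetical labels (not the project's actual data):

```python
def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical sentiment labels for product posts
y_true = ["pos", "neg", "neu", "pos", "neg", "neu"]
y_pred = ["pos", "neg", "neu", "pos", "neu", "neu"]
print(round(macro_f1(y_true, y_pred), 3))  # 0.822
```

Because each class contributes equally to the average, macro F1 penalizes a model that only gets the dominant class right, which makes it a common choice for imbalanced sentiment datasets.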

Charting Student Math Misunderstandings (Transformer Fine-tuning)

Other, Text, Fine Tuning
Fine-tuned SciBERT and MathBERTa transformer models to identify and classify specific pedagogical errors in student responses to K-12 math problems. The project focused on error analysis for improving educational AI systems, with extensive data labeling and preparation to support effective fine-tuning.
• Labeled and pre-processed K-12 student response data for model training
• Applied fine-tuning techniques to transformer architectures
• Used Python and deep learning libraries for all modeling tasks
• Improved detection of nuanced student misunderstandings


2024 - 2024
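Before fine-tuning a classifier like the one described, free-text misconception tags are typically mapped to stable integer class ids. A tiny sketch with hypothetical tag names (the project's real label set is not shown here):

```python
def build_label_map(labels):
    """Map each distinct misconception tag to a stable integer id."""
    return {lab: i for i, lab in enumerate(sorted(set(labels)))}

# Hypothetical misconception tags on student math responses
tags = ["sign_error", "order_of_operations", "sign_error", "fraction_inversion"]
label_map = build_label_map(tags)
encoded = [label_map[t] for t in tags]
print(label_map)  # {'fraction_inversion': 0, 'order_of_operations': 1, 'sign_error': 2}
print(encoded)    # [2, 1, 2, 0]
```

Sorting the distinct tags before assigning ids keeps the mapping deterministic across runs, so a saved model checkpoint and its label map stay in sync.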

AI Telco Troubleshooting Challenge (LLM Fine-tuning)

Other, Text, Fine Tuning
Fine-tuned the Qwen2.5-1.5B-Instruct large language model with QLoRA (PEFT) for automated root-cause analysis in telecommunications troubleshooting. The project aimed to improve the model's ability to classify suboptimal network signals with greater accuracy and efficiency, and involved data preparation, model training, and evaluation using Python and modern deep learning frameworks.
• Automated root-cause analysis tasks in network signal troubleshooting
• Curated and prepared a dataset for fine-tuning a language model
• Employed QLoRA, Python, and modern deep learning libraries
• Measured improvements in classification accuracy and memory efficiency


2024 - 2024
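The memory-efficiency gains mentioned above come from LoRA, the adapter method at the heart of QLoRA: the base weight matrix is frozen and only a low-rank update is trained, so the effective weight is W + (alpha / r) * B @ A. A toy, dependency-free sketch of that arithmetic (illustrative sizes, not the Qwen model's):

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_out, d_in, r, alpha = 8, 8, 2, 4

W = [[0.1] * d_in for _ in range(d_out)]   # frozen base weight
A = [[0.01] * d_in for _ in range(r)]      # trainable, rank r
B = [[0.0] * r for _ in range(d_out)]      # trainable, initialized to zero

scale = alpha / r
delta = [[scale * v for v in row] for row in matmul(B, A)]
W_eff = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

full_params = d_out * d_in            # 64 parameters if W were trained directly
lora_params = r * d_in + d_out * r    # 32 trainable LoRA parameters
print(full_params, lora_params)       # 64 32
```

Because B starts at zero, the adapted layer initially behaves exactly like the frozen base layer; at LLM scale the rank r is tiny relative to the layer dimensions, so the trainable-parameter savings are far larger than this 2x toy ratio suggests. QLoRA adds 4-bit quantization of the frozen W on top of this.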

Education

Flatiron School

Certificate, Data Science

2025 - 2025
Dedan Kimathi University of Technology

Bachelor of Science, Electrical and Electronics Engineering

2013 - 2019

Work History

Numerical Machining Complex

Maintenance Department Intern

Nairobi
2024 - Present
Kalu Electrical Works

Electrical Technician Apprentice

Machakos
2020 - 2023