I am a deep learning researcher and cybersecurity practitioner with hands-on experience spanning computer vision, transfer learning, and offensive AI security. My work focuses on building high-performance AI systems while systematically evaluating and hardening them against real-world cyber threats, adversarial manipulation, and misuse scenarios.
I have designed and implemented deep learning solutions for document enhancement, classification, and detection, supported by robust data pipelines for simulation and sensor data collection in autonomous and safety-critical environments. Alongside this, I actively conduct AI red-teaming and security assessments, testing AI and LLM-powered systems for vulnerabilities such as prompt injection, jailbreak techniques, data poisoning, model extraction, model inversion, insecure fine-tuning, and abuse of agent-based workflows.
My cybersecurity experience includes assessing AI-integrated web applications, APIs, and cloud-hosted platforms, where I evaluate attack surfaces introduced by LLM integrations, retrieval-augmented generation (RAG), and autonomous agents. I have performed controlled jailbreak testing to identify weaknesses in alignment, safety guardrails, and prompt handling logic, helping organizations strengthen policy enforcement, monitoring, and defensive controls before production deployment.
Technically, I work extensively with TensorFlow, Python, and backend technologies such as Node.js, integrating AI models into secure, production-grade systems. I have also led hands-on courses and labs in machine learning, deep learning, reinforcement learning, and secure AI deployment, translating advanced research concepts into operationally safe implementations.
A key pillar of my work is data and model trust: ensuring that datasets, annotations, and training pipelines are resilient against tampering, leakage, and bias while maintaining high model accuracy and reliability. I am particularly passionate about applying adversarial testing, threat modeling, and red-team methodologies to ensure AI systems are not only intelligent but also robust, explainable, and secure by design.
Languages (Expert): English, German, Sinhalese
Labeling Experience
Adversarial Data Labeling & Prompt Annotation for AI Red Teaming and Jailbreak Testing
Other, Text, Text Generation, Fine Tuning
Conducted structured data labeling and annotation for AI red teaming and controlled jailbreak testing engagements. Work involved labeling prompt-response pairs to identify security risks such as prompt injection, policy bypass attempts, unsafe completions, and misuse scenarios in LLM-powered systems.
Developed annotation taxonomies for risk severity, exploitability, and mitigation priority, supporting security assessments, model hardening, and guardrail validation. Labeled datasets were used to improve detection logic, safety evaluations, and governance controls for enterprise AI deployments.
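An annotation taxonomy like the one described above could be encoded as a small labeling schema. This is an illustrative sketch only: the category names, severity scale, and the severity-times-exploitability priority scheme are assumptions, not the taxonomy actually used in those engagements.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    # illustrative categories matching the risks named above
    PROMPT_INJECTION = "prompt_injection"
    POLICY_BYPASS = "policy_bypass"
    UNSAFE_COMPLETION = "unsafe_completion"
    MISUSE = "misuse"

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class RedTeamLabel:
    """One labeled prompt-response pair from a red-team engagement."""
    prompt: str
    response: str
    category: RiskCategory
    severity: Level        # impact if exploited
    exploitability: Level  # how reliably an attacker can trigger it

    def mitigation_priority(self) -> int:
        # hypothetical scoring: severity x exploitability, higher = fix first
        return self.severity.value * self.exploitability.value

label = RedTeamLabel(
    prompt="Ignore previous instructions and ...",
    response="[redacted unsafe completion]",
    category=RiskCategory.PROMPT_INJECTION,
    severity=Level.HIGH,
    exploitability=Level.MEDIUM,
)
```

A flat numeric priority like this makes it easy to sort a batch of labeled findings into a remediation queue, which is one plausible way labeled datasets feed into guardrail validation.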
2025
Research-Grade Data Labeling & Annotation for Computer Vision and Transfer Learning
Other, Image, Bounding Box, Segmentation
Performed research-grade data labeling and annotation as part of university-led deep learning and computer vision research. Work included annotation of image, video, and sensor datasets for object detection, semantic segmentation, and classification tasks supporting transfer learning and domain adaptation experiments.
Responsibilities included dataset curation, annotation guideline development, quality assurance, and validation to ensure consistency and statistical reliability across training and evaluation datasets. Data was used in safety-critical and research environments such as autonomous systems, robotics, and document analysis.
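One common quality-assurance check for bounding-box annotation of the kind described above is pairwise intersection-over-union (IoU) between independent annotators. The sketch below is a minimal example of that idea; the 0.5 agreement threshold and the matching rule are assumptions for illustration, not the project's actual QA protocol.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # overlap extents along each axis; zero when the boxes are disjoint
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def annotators_agree(boxes_a, boxes_b, threshold=0.5):
    """Hypothetical QA rule: every box from annotator A must overlap
    some box from annotator B with IoU at or above the threshold."""
    return all(any(iou(a, b) >= threshold for b in boxes_b) for a in boxes_a)
```

Flagging image pairs that fail such a check is one way to route low-consistency annotations back for guideline review, supporting the statistical reliability goals mentioned above.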
2019 - 2022
Education
Carinthia University of Applied Sciences
Master of Science, Systems Design
2013 - 2015
Kingston University
Bachelor of Engineering, Aerospace Engineering Design