Voice chat with multimodal understanding
The model called tools such as internet search, maps, calendar, and notes. I compared two responses that used tool calls and rated them.
I am a native Japanese speaker and fluent in English, currently studying at the University of Tokyo and working in AI training and data annotation roles. My experience spans projects at Outlier, RWS, and Invisible Technologies, where I have handled a wide range of data types, including text, audio, image, and multimodal datasets for AI development. I am skilled in ensuring data quality, accuracy, and unbiased labeling, with hands-on experience in RLHF tasks and image annotation using Roboflow. My technical background includes programming in Python and JavaScript, as well as adapting simulation tools such as OpenMM for research purposes. I am passionate about making complex information accessible and eager to contribute my skills to innovative AI projects.
I generated many spoken-audio files to answer the prompts.
I corrected the responses, and in some cases wrote entire paragraphs myself.
I compared pairs of chats with four or more turns (meaning there were at least four prompts and four responses).
I used DataCompute to conduct this task.
Bachelor of Science, Physics
Anju K. hasn’t added any Work History to their OpenTrain profile yet.