Project Apollo
This project, run with Alignerr, was designed to evaluate rubrics created for individual engagements with an AI. Taskers evaluated each rubric for importance, specificity, atomicity, verifiability, and difficulty. They then provided feedback explaining why each score was given, and either modified the given dynamic rubrics or wrote their own where needed.
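The evaluation record described above can be sketched in code. This is a hypothetical illustration, not the project's actual schema: the class name, the assumed 1–5 scoring scale, and the field names are all assumptions for clarity.

```python
from dataclasses import dataclass
from typing import Optional

# The five dimensions each rubric criterion is scored on.
DIMENSIONS = ("importance", "specificity", "atomicity", "verifiability", "difficulty")


@dataclass
class RubricEvaluation:
    """One tasker's evaluation of a single rubric criterion (hypothetical schema)."""
    criterion: str                          # the rubric criterion being evaluated
    scores: dict                            # dimension name -> score (assumed 1-5 scale)
    feedback: str                           # why each score was given
    revised_criterion: Optional[str] = None  # set when the tasker rewrites the rubric

    def validate(self) -> bool:
        # Every dimension must be scored, on the assumed 1-5 scale.
        for dim in DIMENSIONS:
            score = self.scores.get(dim)
            if score is None or not 1 <= score <= 5:
                raise ValueError(f"missing or out-of-range score for {dim!r}")
        return True


# Example: a tasker scores one criterion and leaves the rubric unmodified.
ev = RubricEvaluation(
    criterion="The response cites at least one primary source.",
    scores=dict(zip(DIMENSIONS, (5, 4, 5, 5, 2))),
    feedback="Highly important and easy to verify; difficulty is low.",
)
print(ev.validate())  # → True
```

A `revised_criterion` field covers the rewrite path: when a tasker replaces a weak criterion, the original and its replacement stay paired in one record.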