Turnitin Affirms Guiding Principles For Responsible AI Integration Into Education Technologies
In recognition of World AI Week and the growing significance of artificial intelligence in education, leading EdTech solutions provider Turnitin issued the following statement about its products and the role of AI. Dr. Eric Wang, Senior Director of Machine Learning at Turnitin AI, states:
World AI Week provides the opportunity to remind ourselves, EdTech developers, and technology users of the responsibility and human oversight needed when integrating machine learning and artificial intelligence into the product roadmap.
At Turnitin, we believe in human-centered AI. This means putting the human user at the center of research, development, and function. We seek to build AI that assists, strengthens, and scales human abilities rather than replacing them. Our work with AI furthers our mission to ensure that students, educators, and institutions have the power to make data-driven decisions that promote academic integrity and improve learning outcomes at scale.
We see an opportunity for AI to reduce repetitive tasks, so that the human relational element of education is enhanced. For example, one of our flagship solutions, Gradescope by Turnitin, helps over 110,000 educators cut grading time in half with AI-assisted answer grouping. After more than a year that has put heavy strain on classrooms, we strive to deliver this kind of impact across all of our AI features, giving students and teachers more opportunities to engage in quality teaching and learning moments.
Over the last year, Turnitin has coalesced its AI teams into a single, global organization called Turnitin AI. As Turnitin grows this division, seeking the new talent, perspectives, and efficiencies that come from integrating AI technologies into our products, we pledge to the following guiding principles in our commitment to responsible AI:
- AI should help improve learning outcomes and promote and protect academic integrity.
- AI should be intentionally designed to mitigate the impact of potential biases.
- AI should be designed, developed, and tested by a wide range of learners and educators, not only engineers and AI scientists.
- AI should adhere to high standards of privacy and data ownership.
- AI should be rooted in rigorous and peer-reviewed statistical and machine learning standards.
- AI should be continuously improved to make it more accessible, fair, and beneficial.
- AI should be built by a diverse team.
[To share your insights with us, please write to email@example.com]