Journal Article · Parallel publication · Published version
DOI: 10.48548/pubdata-3125

Automated scoring in the era of artificial intelligence: An empirical study with Turkish essays

Chronological data

Date of first publication: 2025-07-21
Date of publication in PubData: 2026-03-11

Language of the resource

English

Related external resources

Variant form of DOI: 10.1016/j.system.2025.103784
Aydın, B., Kışla, T., Elmas, N. T., & Bulut, O. (2025). Automated scoring in the era of artificial intelligence: An empirical study with Turkish essays. System, 133, Article 103784.
Published in: System (ISSN: 0346-251X)

Abstract

Automated scoring (AS) has gained significant attention as a tool for enhancing the efficiency and reliability of assessment, yet its application to under-represented languages such as Turkish remains limited. This study addresses that gap by empirically evaluating AS for Turkish using a zero-shot, rubric-based approach powered by OpenAI's GPT-4o. A dataset of 590 essays written by learners of Turkish as a second language was scored by professional human raters and by an artificial intelligence (AI) model integrated via a custom-built interface. The scoring rubric, grounded in the Common European Framework of Reference for Languages, assessed six dimensions of writing quality. Results revealed strong alignment between human and AI scores, with a Quadratic Weighted Kappa of 0.72, a Pearson correlation of 0.73, and an overlap measure of 83.5%. Analysis of rater effects showed minimal influence on score discrepancies, although factors such as rater experience and gender exhibited modest effects. These findings demonstrate the potential of AI-driven scoring for Turkish and offer insights for broader implementation in under-represented languages, including possible sources of disagreement between human and AI scores. Because the conclusions rest on a specific writing task with a single human rater, future research should explore diverse inputs and multiple raters.

Keywords

Automated Scoring; Large Language Models; Rater Reliability; Turkish; Essays

Creation Context

Research