Technology-Enhanced Language Assessment: Validating Digital Tools for Measuring English Proficiency in Higher Education

Jing Sun
Nur Ainil
Nur Ehsan Mohd Said

Abstract

This study explores students’ and instructors’ perceptions of the validity, usability, and educational impact of technology-enhanced assessments (TEAs) of English language proficiency in higher education. Using a qualitative research design, semi-structured interviews were conducted with six participants (three students and three instructors) who had experience with digital English proficiency assessments. Thematic analysis was used to identify patterns in participant responses related to assessment validity, platform usability, and instructional impact. Participants acknowledged the efficiency and practicality of TEAs for evaluating basic language skills such as grammar and vocabulary, but questioned their ability to assess higher-order competencies such as argumentation and interaction. Usability concerns were reported, especially among less digitally literate students, while instructors noted a shift in teaching practices toward test-oriented strategies. Mixed perceptions of fairness and accessibility further underscored the need for more inclusive, pedagogically aligned assessment models. The findings suggest that TEAs should be complemented by human evaluation to ensure comprehensive language assessment. Institutions must prioritize accessibility and inclusive design, and educators require training to balance test preparation with communicative pedagogy. Future research should extend to more diverse contexts and incorporate performance data for validation.
