COMPARATIVE ANALYSIS OF AI AND TEACHER-GENERATED MATHEMATICS MULTIPLE-CHOICE ITEMS USING 2-PARAMETER MODEL
Keywords

Test
Teacher-made test
AI-generated test
2-parameter model

Abstract

The study was a comparative analysis of AI-generated and teacher-generated mathematics multiple-choice items using the 2-parameter model. An instrumentation design was adopted. A sample of 178 SS2 students was drawn from a population of 3,450 across public schools in Obio-Akpor L.G.A. of Rivers State. Two 50-item versions of the Mathematics Performance Test were used: an AI-generated version (MPT-AI) and a manually (teacher) generated version (MPT-T). Findings showed that, for the difficulty index (using the reasonable range of 0.30-0.70 from Jaipurkar et al., 2021), the MPT-AI format had 27 items with moderate difficulty indices, meaning that 23 items were either too difficult or too easy. For the MPT-T, 38 items had moderate difficulty indices, indicating that 12 items were either too difficult or too easy. For item discrimination (using the cut-off of >0.40 proposed by Aljehani et al., 2020), the MPT-AI test had 36 items that discriminated adequately, with 14 items discarded. The MPT-T test had 26 items with adequate discrimination indices, with 24 items discarded. Comparatively, the MPT-T test had a greater number of items with good difficulty indices than the MPT-AI test (38 > 27). Conversely, the MPT-AI test had more items with good discrimination indices than the MPT-T test (36 > 26). Additionally, KR-20 was used to determine the reliability indices, which were 0.75 for the MPT-AI and 0.96 for the MPT-T format respectively. Based on the findings, it was recommended, among others, that test developers should pay particular attention to checking the difficulty index of AI-generated items rather than the discrimination index, while for teacher-generated tests they should pay particular attention to checking the discrimination index rather than the difficulty index.
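To make the reported indices concrete, the following is a minimal Python sketch, not the authors' estimation procedure, of how a scored 0/1 response matrix can yield the 2-parameter logistic probability, a classical difficulty index (proportion correct), a point-biserial discrimination index, and the KR-20 reliability, screened against the 0.30-0.70 difficulty band and the >0.40 discrimination cut-off cited in the abstract. The response matrix, sample size, and item count used below are placeholder assumptions for illustration.

import numpy as np

def two_pl_probability(theta, a, b):
    # 2-parameter logistic IRT model: probability of a correct response
    # given ability theta, item discrimination a, and item difficulty b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_statistics(X):
    # Classical item statistics from a 0/1 response matrix X
    # (rows = examinees, columns = items).
    n_items = X.shape[1]
    total = X.sum(axis=1)
    p = X.mean(axis=0)                      # difficulty index (proportion correct)
    disc = np.empty(n_items)
    for j in range(n_items):
        rest = total - X[:, j]              # item-total score with the item removed
        disc[j] = np.corrcoef(X[:, j], rest)[0, 1]  # point-biserial discrimination
    # KR-20 reliability: (k/(k-1)) * (1 - sum(p*q) / total-score variance)
    var_total = total.var(ddof=1)
    kr20 = (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / var_total)
    return p, disc, kr20

# Placeholder data: 178 examinees, 50 items (matching the study's sample and test length).
rng = np.random.default_rng(0)
X = (rng.random((178, 50)) < 0.6).astype(int)
p, disc, kr20 = item_statistics(X)
keep = (p >= 0.30) & (p <= 0.70) & (disc > 0.40)
print(f"retained items: {keep.sum()} / 50, KR-20 = {kr20:.2f}")

Under this kind of screening, an item is retained only if it falls inside the moderate-difficulty band and exceeds the discrimination cut-off, which is the joint criterion the abstract applies separately to the MPT-AI and MPT-T forms.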