Determining the performance of five multiple choice scoring methods in estimating examinee’s ability / Lau Sie Hoe ... [et al.]

Bibliographic Details
Main Authors: Lau, Sie Hoe, Paul Lau, Ngee Kiong, Ling, Siew Eng, Hwa, Tee Yong
Format: Thesis
Language: English
Published: 2006
Online Access: https://ir.uitm.edu.my/id/eprint/94756/1/94756.pdf
Summary: Despite the current popularity of performance-based assessment and the emergence of new assessment methods, multiple choice (MC) items remain a major form of assessment. The conventional Number Right (NR) scoring method, which awards one point for a correct response and zero otherwise, has been consistently criticized for failing to credit partial knowledge and for encouraging guessing. Various alternative scoring methods, such as Number Right with Correction for Guessing (NRC), Elimination Testing (ET), Confidence Weighting (CW) and Probability Measurement (PM), have been proposed to overcome these two weaknesses. However, to date, none has been widely accepted, although the theoretical rationale behind the various scoring methods under Classical Test Theory (CTT) is sound. A major cause of concern is the possibility that complicated scoring instructions might introduce other factors which may affect the reliability and validity of the test scores. Studies on whether examinees can realistically be trained to follow the new test instructions have been inconclusive, and whether they can consistently follow the test instructions throughout the whole test remains an open question. There have been intensive comparison studies of scores obtained through various CTT scoring methods against NR scores. What has yet to be explored is the comparison of these scores with Item Response Theory (IRT) ability estimates.
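
As a point of reference for the scoring rules named in the abstract, the sketch below contrasts NR scoring with the classic CTT correction-for-guessing formula commonly used for NRC, S = R − W/(k − 1) for k-option items. This is a minimal illustration of the general technique under that assumed formula, not the scoring procedure used in the thesis itself; the function names and the example response data are hypothetical.

```python
def nr_score(responses, key):
    """Number Right (NR): one point per correct answer, zero otherwise."""
    return sum(1 for r, k in zip(responses, key) if r == k)

def nrc_score(responses, key, n_options=4):
    """Number Right with Correction for Guessing (NRC), using the classic
    CTT formula S = R - W / (k - 1), where R is the number right, W the
    number wrong (omitted items are not penalized), and k the number of
    options per item. Assumed here for illustration."""
    right = sum(1 for r, k in zip(responses, key) if r == k)
    wrong = sum(1 for r, k in zip(responses, key) if r is not None and r != k)
    return right - wrong / (n_options - 1)

# Hypothetical 5-item test: the examinee answers four items and omits one.
key = ["A", "C", "B", "D", "A"]
responses = ["A", "C", "D", None, "A"]  # None marks an omitted item

print(nr_score(responses, key))   # 3 (three correct answers)
print(nrc_score(responses, key))  # 3 - 1/3 ≈ 2.67 (one wrong, one omitted)
```

Under this correction, purely random guessing has an expected score of zero, which is the rationale for penalizing wrong answers but not omits; the thesis examines how such CTT scores compare with IRT ability estimates.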