Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur

The purpose of this study is to evaluate the validity of an existing English test in order to examine its potential and shortcomings in assessing engineering students' English ability. The test was built mainly to measure grammar and reading ability, while adopting recognition testing techniques.

Bibliographic Details
Main Author: Mohamed Bannur, Fahima
Format: Thesis
Language:English
Published: 2016
Online Access:https://ir.uitm.edu.my/id/eprint/18559/1/TP_FAHIMA%20MOHAMED%20BANNUR%20APB%2016_5.pdf
id my-uitm-ir.18559
record_format uketd_dc
spelling my-uitm-ir.18559 2022-03-08T07:24:43Z Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur 2016 Mohamed Bannur, Fahima The purpose of this study is to evaluate the validity of an existing English test in order to examine its potential and shortcomings in assessing engineering students' English ability. The test was built mainly to measure grammar and reading ability, while adopting recognition testing techniques. The focus on validating the ESP reading test arose from the urgent need at the University of Tripoli, Libya, as well as from students' appeal for an improved English test: thousands of students from different engineering departments at the Faculty of Engineering study English (ESL) as a compulsory course and take the ESP test in order to continue their academic studies at the faculty. The current method of providing the English test to these students presents the university with problems in terms of test design, construction, content, efficiency, reliability and validity. These are significant aspects of any validation process, and to date they have not been addressed formally at the university. To address them, a framework for validating a reading test (Weir 2005) was adopted throughout the study. The framework is instructive and comprehensive in nature: it has five components and various parameters that ensure a meaningful and systematic validation process at all test stages, namely a priori, during, and a posteriori the test event. The framework was operationalized such that data collection and analysis were conducted according to its validity elements, and all findings were systematically reported. The study involved three phases: a validation study of the Existing English Test (T1); the development, administration and validation of a Sample Proposed ESP Test (T2); and a comparative analysis of the two tests. Data gathered from the main validation study point to deficiencies in the existing test, such as in test specifications, test format and content, test construction, the rating process, and other administrative and evaluative issues. These issues were addressed in the sample proposed test (T2). The comparative validity report of the two ESP tests addressed the question of whether the use of an alternative test fulfills, to some extent, the requirements of a valid test and students' needs for academic study and their future careers. Recommendations were made for using systematic frameworks, such as that proposed by Weir (2005), in which validity parameters are incorporated, to validate and improve language tests so that further validation can subsequently be conducted. 2016 Thesis https://ir.uitm.edu.my/id/eprint/18559/ https://ir.uitm.edu.my/id/eprint/18559/1/TP_FAHIMA%20MOHAMED%20BANNUR%20APB%2016_5.pdf text en public phd doctoral Universiti Teknologi MARA Academy of Language Studies
institution Universiti Teknologi MARA
collection UiTM Institutional Repository
language English
description The purpose of this study is to evaluate the validity of an existing English test in order to examine its potential and shortcomings in assessing engineering students' English ability. The test was built mainly to measure grammar and reading ability, while adopting recognition testing techniques. The focus on validating the ESP reading test arose from the urgent need at the University of Tripoli, Libya, as well as from students' appeal for an improved English test: thousands of students from different engineering departments at the Faculty of Engineering study English (ESL) as a compulsory course and take the ESP test in order to continue their academic studies at the faculty. The current method of providing the English test to these students presents the university with problems in terms of test design, construction, content, efficiency, reliability and validity. These are significant aspects of any validation process, and to date they have not been addressed formally at the university. To address them, a framework for validating a reading test (Weir 2005) was adopted throughout the study. The framework is instructive and comprehensive in nature: it has five components and various parameters that ensure a meaningful and systematic validation process at all test stages, namely a priori, during, and a posteriori the test event. The framework was operationalized such that data collection and analysis were conducted according to its validity elements, and all findings were systematically reported. The study involved three phases: a validation study of the Existing English Test (T1); the development, administration and validation of a Sample Proposed ESP Test (T2); and a comparative analysis of the two tests. Data gathered from the main validation study point to deficiencies in the existing test, such as in test specifications, test format and content, test construction, the rating process, and other administrative and evaluative issues. These issues were addressed in the sample proposed test (T2). The comparative validity report of the two ESP tests addressed the question of whether the use of an alternative test fulfills, to some extent, the requirements of a valid test and students' needs for academic study and their future careers. Recommendations were made for using systematic frameworks, such as that proposed by Weir (2005), in which validity parameters are incorporated, to validate and improve language tests so that further validation can subsequently be conducted.
format Thesis
qualification_name Doctor of Philosophy (PhD)
qualification_level Doctorate
author Mohamed Bannur, Fahima
spellingShingle Mohamed Bannur, Fahima
Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur
author_facet Mohamed Bannur, Fahima
author_sort Mohamed Bannur, Fahima
title Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur
title_short Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur
title_full Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur
title_fullStr Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur
title_full_unstemmed Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur
title_sort evaluating and validating esp testing in a specific context: stakeholders' perspectives / fahima mohamed bannur
granting_institution Universiti Teknologi MARA
granting_department Academy of Language Studies
publishDate 2016
url https://ir.uitm.edu.my/id/eprint/18559/1/TP_FAHIMA%20MOHAMED%20BANNUR%20APB%2016_5.pdf
_version_ 1783733694259789824