Validity for Automatic Generation of Items for the Basic Competences Exam (Excoba)

Authors

  • María Fabiana Ferreyra, Métrica Educativa
  • Eduardo Backhoff-Escudero, National Institute for Education Evaluation (INEE) in Mexico

DOI:

https://doi.org/10.7203/relieve.22.1.8048

Keywords:

Automatic Item Generation, Educational Testing, Construct Validity, Factor Structure, Item Analysis

Abstract

Automatic Item Generation (AIG) is the process of designing and producing items for a test, as well as generating different versions of exams that are conceptually and statistically equivalent. AIG tools are developed with the assistance of information systems, which makes them very efficient. With this aim, GenerEx, an automatic item generation tool, was developed; it is used to automatically generate different versions of the Basic Competences Exam (Excoba). Although AIG represents a great advance for the development of psychological and educational assessment, obtaining validity evidence for the enormous number of possible items and tests generated automatically is a methodological challenge. The purpose of this paper is to describe an approach for analyzing the internal structure and psychometric equivalence of exams generated by GenerEx, and to describe the kinds of results obtained toward this objective. The approach is based on the process of selecting samples from the generation tool, founded on the assumption that items and exams must be psychometrically equivalent. This work includes three conceptually different and complementary kinds of analysis: Classical Test Theory, Item Response Theory, and Confirmatory Factor Analysis. Results show that GenerEx produces psychometrically similar exams; however, there are problems in some learning areas. The methodology was useful for obtaining a description of GenerEx's psychometric functioning and of the internal structure of two randomly generated versions of Excoba. The analysis can be complemented by a qualitative study of these item deficiencies.
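
The abstract outlines a three-part psychometric comparison (Classical Test Theory, Item Response Theory, Confirmatory Factor Analysis) between randomly sampled versions of the exam. As a minimal illustrative sketch, and not the authors' actual procedure, the Python fragment below shows what the CTT portion of such an equivalence check might look like: item difficulty (proportion correct) and corrected item-total discrimination are computed for two simulated versions whose items come from the same item models, then compared item by item. All data, variable names, and parameters are hypothetical.

import numpy as np

def simulate_version(rng, b, n_persons=500):
    """Simulate 0/1 responses under a simple Rasch-like (1PL) model (hypothetical data)."""
    theta = rng.normal(size=(n_persons, 1))        # examinee ability
    prob = 1.0 / (1.0 + np.exp(-(theta - b)))      # probability of a correct answer
    return (rng.random((n_persons, b.size)) < prob).astype(int)

def ctt_item_stats(responses):
    """Classical Test Theory statistics for a persons-by-items 0/1 matrix."""
    difficulty = responses.mean(axis=0)            # proportion correct (p)
    total = responses.sum(axis=1)
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]             # corrected (item removed) total score
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

rng = np.random.default_rng(0)
b = rng.normal(scale=0.5, size=20)                 # shared item-model difficulties (hypothetical)
version_a = simulate_version(rng, b)               # stands in for one generated exam version
version_b = simulate_version(rng, b)               # and for a second, supposedly parallel version

p_a, d_a = ctt_item_stats(version_a)
p_b, d_b = ctt_item_stats(version_b)

# If the generator yields psychometrically equivalent versions, paired item
# statistics should be close; large gaps flag items (or learning areas) to review.
print("max |p_A - p_B|:", float(np.abs(p_a - p_b).max()))
print("mean discrimination A, B:", float(d_a.mean()), float(d_b.mean()))

In the study itself, this kind of item-level comparison is complemented by IRT parameter estimates and confirmatory factor models fitted to the sampled versions, as described in the abstract.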


Author Biographies

María Fabiana Ferreyra, Métrica Educativa

Mathematics teacher at the Instituto Nacional Superior del Profesorado Joaquín V. González, Buenos Aires, Argentina. She holds a master's degree and a Ph.D. in Education Sciences, both from the Institute for Education Development and Research of the Universidad Autónoma de Baja California, Mexico. Her areas of interest are the development and validation of large-scale learning tests and the teaching of mathematics. She is currently a research associate at Métrica Educativa, A.C., Mexico. Her postal address is: Métrica Educativa, Alvarado 921, Zona Centro, Ensenada, Baja California, C.P. 22800 (México).

Eduardo Backhoff-Escudero, National Institute for Education Evaluation (INEE) in Mexico

He holds a bachelor’s degree in Psychology from the Universidad Nacional Autónoma de México, a master’s degree in Education from the University of Washington, and a Ph.D. in Education from the Universidad Autónoma de Aguascalientes. His areas of interest are the development and validation of large-scale learning tests and computer-aided assessment. He has served as Director of Testing and Measurement at the National Institute for Education Evaluation (INEE) in Mexico and is currently a Member of the Governing Board of INEE.

Published

2016-02-16

How to Cite

Ferreyra, M. F., & Backhoff-Escudero, E. (2016). Validity for Automatic Generation of Items for the Basic Competences Exam (Excoba). RELIEVE – Electronic Journal of Educational Research and Evaluation, 22(1). https://doi.org/10.7203/relieve.22.1.8048

Issue

Vol. 22, No. 1 (2016)

Section

Research Articles