
The Use of a Meta-Analysis Technique in Equating and Its Comparison with Several Small Sample Equating Methods

Title: The Use of a Meta-Analysis Technique in Equating and Its Comparison with Several Small Sample Equating Methods.
Name(s): Caglak, Serdar, author
Paek, Insu, professor directing dissertation
Patrangenaru, Victor, university representative
Almond, Russell G., committee member
Roehrig, Alysia D., 1975-, committee member
Florida State University, degree granting institution
College of Education, degree granting college
Department of Educational Psychology and Learning Systems, degree granting department
Type of Resource: text
Genre: Text
Issuance: monographic
Date Issued: 2015
Publisher: Florida State University
Place of Publication: Tallahassee, Florida
Physical Form: computer
online resource
Extent: 1 online resource (164 pages)
Language(s): English
Abstract/Description: The main objective of this study was to improve the accuracy of small-sample equating, which typically arises in teacher certification/licensure examinations because of the low volume of test takers per test administration, under the Non-Equivalent Groups with Anchor Test (NEAT) design by combining previous and current equating outcomes with a meta-analysis technique. The proposed meta-analytic score transformation procedure is called "meta-equating" throughout this study. To conduct meta-equating, the previous and current equating outcomes obtained from the chosen equating methods (identity (ID), circle-arc (CA), and nominal weights mean (NW) equating) and from synthetic functions (SFs) of these methods (CAS and NWS) were used; empirical Bayesian (EB) and meta-equating (META) procedures were then implemented to estimate the equating relationship between test forms at the population level. The SFs were created by giving equal weight to each of the chosen equating methods and ID equating. Finally, the chosen equating methods, their SFs (e.g., CAS and NWS), and their META and EB versions (e.g., NW-EB, CA-META, and NWS-META) were investigated and compared under varying testing conditions. These conditions were created by manipulating factors that influence the accuracy of test score equating: the effects of test form difficulty level, group-mean ability difference, number of previous equatings, and sample size on the accuracy of the equating outcomes were investigated. Chained equipercentile (CE) equating with 6-univariate- and 2-bivariate-moment log-linear presmoothing was used as the criterion equating function to establish the equating relationship between the new form and the base (reference) form with 50,000 examinees per test form.
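As a rough illustration of two components named above, the sketch below shows one common formulation of nominal weights mean (NW) equating under the NEAT design and of an equal-weight synthetic function (as in CAS/NWS). The abstract does not give the exact formulations used in the dissertation; all variable names and the weighting scheme here are illustrative assumptions.

```python
def nw_mean_equate(x, mx1, mv1, mv2, my2, kx, ky, kv, w1=0.5):
    """Equate raw score x on form X to the form Y scale via nominal
    weights mean equating (one common NEAT-design formulation).

    mx1, mv1: new-group means on form X and on the anchor V
    my2, mv2: old-group means on form Y and on the anchor V
    kx, ky, kv: numbers of items on X, Y, and the anchor
    w1: synthetic-population weight for the new group (w2 = 1 - w1)
    """
    w2 = 1.0 - w1
    # Synthetic-population means, using nominal (test-length) weights
    # kx/kv and ky/kv in place of estimated regression slopes.
    mu_xs = mx1 - w2 * (kx / kv) * (mv1 - mv2)
    mu_ys = my2 + w1 * (ky / kv) * (mv1 - mv2)
    # Mean equating: shift x by the difference in synthetic means.
    return x - mu_xs + mu_ys


def synthetic_function(equate, x, w=0.5):
    """Equal-weight synthetic function: a weighted average of an
    equating function and identity equating (w = 0.5 gives each
    component equal weight, as described in the abstract)."""
    return w * equate(x) + (1.0 - w) * x
```

For example, when the two groups have identical anchor means, NW mean equating reduces to shifting scores by the difference in form means, and the synthetic function pulls any equating function halfway back toward the identity line.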
To compare the performance of the equating methods, small samples of examinees were randomly drawn from examinee populations with different ability levels in each simulation replication. Each pair of new and base test forms was randomly and independently selected from all available condition-specific test form pairs; those forms were then used to obtain the previous equating outcomes. In contrast, the examinee ability and test form difficulty distributions were selected purposefully to obtain the current equating outcomes in each simulation replication. The previous equating outcomes were later used to implement both the META and EB score transformation procedures. The effects of the study factors and their possible interactions on each of the accuracy measures were investigated along the entire score range and the cut (reduced) score range using a series of mixed-factorial ANOVA (MFA) procedures. The performances of the equating methods were also compared using post-hoc tests. Results show that the behavior of the equating methods varies with each level of group ability difference, test form difficulty difference, and new-group examinee sample size. The use of both the META and EB procedures improved the accuracy of the equating results on average. The META and EB versions of the chosen equating methods therefore may be a solution for equating test forms that are similar in their psychometric characteristics and are taken by new-form examinee samples of fewer than 50. However, since many factors affect equating results in practice, one should always expect that equating methods and score transformation procedures, or, in more general terms, estimation procedures, may function differently, to some degree, depending on the conditions in which they are implemented.
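The abstract describes combining previous and current equating outcomes meta-analytically but does not state the exact pooling rule. A generic sketch of the standard fixed-effect, inverse-variance meta-analysis estimator, applied at a single raw-score point, shows the kind of combination such a "meta-equating" procedure could build on; the function name and inputs are hypothetical.

```python
def meta_combine(estimates, variances):
    """Pool equated-score estimates at one raw-score point.

    estimates: equated scores y_i from the previous and current
               equatings at that score point
    variances: sampling error variances var(y_i) of those estimates
    Returns the inverse-variance weighted mean and its variance,
    i.e., the fixed-effect meta-analytic estimate.
    """
    weights = [1.0 / v for v in variances]  # more precise -> more weight
    total = sum(weights)
    pooled = sum(w * y for w, y in zip(weights, estimates)) / total
    return pooled, 1.0 / total
```

Pooling in this way shrinks a noisy current small-sample result toward the more stable evidence accumulated from previous administrations, which is the intuition behind using collateral information in small-sample equating.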
Therefore, the recommendations made in this study for the use of the proposed equating methods should be treated as one source of information rather than as an absolute guideline or rule of thumb for practicing small-sample test equating in teacher certification/licensure examinations.
Identifier: FSU_2015fall_Caglak_fsu_0071E_12863 (IID)
Submitted Note: A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Degree Awarded: Fall Semester 2015.
Date of Defense: October 28, 2015.
Keywords: Collateral Information, Empirical Bayesian, Meta-Analysis, NEAT design, Small Samples, Test Equating
Bibliography Note: Includes bibliographical references.
Advisory Committee: Insu Paek, Professor Directing Dissertation; Victor Patrangenaru, University Representative; Russell Almond, Committee Member; Alysia Roehrig, Committee Member.
Subject(s): Educational tests and measurements
Educational evaluation
Host Institution: FSU

Caglak, S. (2015). The Use of a Meta-Analysis Technique in Equating and Its Comparison with Several Small Sample Equating Methods (Doctoral dissertation, Florida State University).