In research synthesis, researchers may aim to summarize people's attitudes toward and perceptions of phenomena that have been assessed with different measures. Self-report rating scales are among the most commonly used measurement tools for quantifying such latent constructs in education and psychology. However, self-report rating-scale questions measuring the same construct may differ from one another in many ways: scale format, number of response options, wording of questions, and labeling of response categories can all vary across questions. Such variation across measures of the same construct raises the issue of comparability of results across studies in meta-analytic investigations. In this study, I examine the complexities of summarizing, in a meta-analytic fashion, the results of different survey questions about the same construct. More specifically, this study focuses on the practical problems that arise when combining survey items that differ from one another in the wording of question stems, the number of response-option categories, scale direction (i.e., unipolar vs. bipolar scales), response-scale labeling (i.e., fully labeled vs. endpoint-labeled scales), and response-option labeling (e.g., "extremely happy" vs. "completely happy" vs. "most happy"; "pretty happy" vs. "quite happy" vs. "moderately happy"; and "not at all happy" vs. "least happy" vs. "most unhappy"). In addition, I propose practical solutions for handling the issues that arise from such variations when conducting a meta-analysis. I discuss the implications of the proposed solutions from the perspective of meta-analysis. Examples are drawn from the collection of studies in the World Happiness Database (Veenhoven, 2006), which includes a variety of single-item happiness measures.
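
The abstract does not spell out the proposed solutions, but a minimal sketch of the core harmonization problem may help fix ideas: one common step when pooling single-item measures that use different numbers of response categories is to linearly stretch each scale onto a shared 0-10 range. The sketch below (in Python) assumes linear stretching as the harmonization rule; the function name and the study data are hypothetical illustrations, not the method advanced in this study.

    def stretch_to_0_10(mean_rating: float, n_options: int) -> float:
        """Linearly map a mean rating on a 1..n_options scale onto 0..10."""
        if n_options < 2:
            raise ValueError("A rating scale needs at least two options.")
        return 10.0 * (mean_rating - 1.0) / (n_options - 1.0)

    # Three hypothetical studies: (mean rating, number of response options).
    # A reverse-keyed or bipolar item would need recoding before this step.
    study_means = [(2.4, 3), (3.2, 4), (5.0, 7)]
    harmonized = [stretch_to_0_10(m, k) for m, k in study_means]
    print([round(h, 2) for h in harmonized])  # [7.0, 7.33, 6.67]

Note that linear stretching preserves rank order but assumes equal spacing between response categories, an assumption that differences in category wording and scale labeling, of the kind catalogued above, may undermine.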