A demonstration and evaluation of the use of cross-classified random-effects models for meta-analysis

Belén Fernández-Castilla, Marlies Maes, Lies Declercq, Laleh Jamshidi, S. Natasha Beretvas, Patrick Onghena, Wim van Den Noortgate

Research output: Contribution to journal › Article

Abstract

It is common for the primary studies in meta-analyses to report multiple effect sizes, generating dependence among them. Hierarchical three-level models have been proposed as a means to deal with this dependency. Sometimes, however, dependency may be due to multiple random factors, and random factors are not necessarily nested, but rather may be crossed. For instance, effect sizes may belong to different studies, and, at the same time, effect sizes might represent the effects on different outcomes. Cross-classified random-effects models (CCREMs) can be used to model this nonhierarchical dependent structure. In this article, we explore by means of a simulation study the performance of CCREMs in comparison with the use of other meta-analytic models and estimation procedures, including the use of three- and two-level models and robust variance estimation. We also evaluated the performance of CCREMs when the underlying data were generated using a multivariate model. The results indicated that, whereas the quality of fixed-effect estimates is unaffected by any misspecification in the model, the standard error estimates of the mean effect size and of the moderator variables’ effects, as well as the variance component estimates, are biased under some conditions. Applying CCREMs led to unbiased fixed-effect and variance component estimates, outperforming the other models. Even when a CCREM was not used to generate the data, applying the CCREM yielded sound parameter estimates and inferences.
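The cross-classified structure described in the abstract can be illustrated with a small simulation: each effect size belongs to one study and one outcome, and the two random factors are crossed rather than nested. The sketch below is purely illustrative — the fully crossed layout, the variance components, and the parameter values are hypothetical assumptions, not taken from the study's design or its original code.

```python
import numpy as np

rng = np.random.default_rng(42)
n_studies, n_outcomes = 200, 8      # hypothetical fully crossed design
gamma0 = 0.5                        # overall mean effect (assumed)
sigma_u, sigma_v, sigma_e = 0.3, 0.2, 0.1   # study, outcome, residual SDs (assumed)

u = rng.normal(0.0, sigma_u, n_studies)     # study random effects
v = rng.normal(0.0, sigma_v, n_outcomes)    # outcome random effects

# Index arrays: every study reports every outcome, so studies and
# outcomes are crossed, not nested.
study = np.repeat(np.arange(n_studies), n_outcomes)
outcome = np.tile(np.arange(n_outcomes), n_studies)

# CCREM-style data-generating model: d = gamma0 + u_study + v_outcome + e
d = gamma0 + u[study] + v[outcome] + rng.normal(0.0, sigma_e, n_studies * n_outcomes)

# Implied correlations of the crossed structure: two effect sizes from the
# same study (or measuring the same outcome) share a random effect.
total = sigma_u**2 + sigma_v**2 + sigma_e**2
print(f"implied correlation, same study  : {sigma_u**2 / total:.3f}")
print(f"implied correlation, same outcome: {sigma_v**2 / total:.3f}")
print(f"estimated overall mean effect    : {d.mean():.3f}")
```

A hierarchical three-level model would capture only the same-study dependence; the same-outcome dependence cuts across studies, which is exactly the nonhierarchical structure a CCREM is designed to model.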

Language: English (US)
Pages: 1-19
Number of pages: 19
Journal: Behavior Research Methods
ISSN: 1554-351X
DOI: 10.3758/s13428-018-1063-2
State: Accepted/In press - Jun 5 2018

Keywords

  • Cross-classified random-effects model
  • Meta-analysis
  • Multiple effect sizes

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)
  • Psychology (miscellaneous)
  • Psychology (all)

Cite this

Fernández-Castilla, B., Maes, M., Declercq, L., Jamshidi, L., Beretvas, S. N., Onghena, P., & van Den Noortgate, W. (Accepted/In press). A demonstration and evaluation of the use of cross-classified random-effects models for meta-analysis. Behavior Research Methods, 1-19. https://doi.org/10.3758/s13428-018-1063-2
