During the congress the following keynotes were presented:

Towards a deeper understanding of the effectiveness of interventions: New methods based on structural equation models and causal inference

Axel Mayer

RWTH Aachen University, Germany

Traditionally, the majority of studies investigating the effectiveness of interventions focused on the average effect, but there is much more to learn about the effects of a treatment or an intervention. Researchers are, for example, interested in heterogeneity of effects, in subgroup effects, or in conditional effects given values of one or multiple covariates. In addition, there is interest in and need for visualizing the effects of interventions and for making the key findings from intervention studies usable for selecting the best treatment for a specific person. In personalized medicine and related fields, attention is shifting towards estimation of interindividual differences in effects, and there is a variety of statistical approaches that can be used for this purpose. However, the investigation of conditional effects and interindividual differences in effects is particularly challenging in the social and behavioral sciences, because many constructs of interest, such as depression or anxiety, are latent variables. In addition, the selection of variables for the analysis is crucial, and modeling conditional effects often requires interactions and potentially non-linear relationships. In this keynote, I will bring together concepts from the causal inference literature and from structural equation modeling to allow researchers to gain a deeper understanding of the effectiveness of interventions based on latent variable models. I will use causality conditions and effect definitions from the causal inference literature and show how multigroup structural equation models with stochastic group sizes can be used to estimate the effects of interest in experimental and quasi-experimental studies. The new approach (and the accompanying open-source R package) is termed the EffectLiteR approach. Several empirical examples from psychology and educational science are used to illustrate the proposed approach.
Furthermore, I will show that many popular methods, such as ANOVA and moderation analysis, are special cases of the general multigroup SEM approach for analyzing treatment effects. An SEM-based approach also has the advantage that many recent advancements in this area, such as robust estimators and standard errors, modern fit statistics, and measurement models, can be used for estimating causal effects. Finally, I will show some extensions of the proposed model, namely how to include propensity scores in the analysis, how the model can be extended to multilevel SEM approaches, and how Bayesian non-linear structural equation models can be used to deal with latent interactions and non-normally distributed latent variables.
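To make the idea of a conditional effect concrete, here is a minimal sketch with simulated, made-up data (this is an ordinary regression illustration only, not the EffectLiteR package or its latent-variable machinery): a treatment-by-covariate interaction yields an effect function that varies across persons, from which an average effect can also be recovered.

```python
# Minimal illustration of conditional vs. average treatment effects
# via an interaction term.  All data and coefficients are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.binomial(1, 0.5, n)           # treatment indicator (0/1)
x = rng.normal(0.0, 1.0, n)           # covariate, e.g. a pretest score
# true data-generating model: the effect of z given x is 0.5 + 0.3 * x
y = 1.0 + 0.5 * z + 0.8 * x + 0.3 * z * x + rng.normal(0.0, 1.0, n)

# design matrix: intercept, treatment, covariate, interaction
X = np.column_stack([np.ones(n), z, x, z * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def conditional_effect(x0):
    """Estimated effect of treatment for a person with covariate value x0."""
    return beta[1] + beta[3] * x0

avg_effect = conditional_effect(x.mean())   # effect at the covariate mean
print(beta, avg_effect)
```

The estimated coefficients recover the simulated effect function, so `conditional_effect(x0)` shows how the treatment effect differs across covariate values while `avg_effect` summarizes it for an average person.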

How to make missingness ignorable in longitudinal modeling

Sophia Rabe-Hesketh

University of California, Berkeley, USA

In longitudinal modeling, maximum likelihood estimators of model parameters are consistent if missingness depends only on the covariates and/or observed outcomes. Such missingness processes are hence ignorable. When missingness depends on unobserved outcomes or on the random effects in mixed-effects/multilevel models, it is said to be not missing at random (NMAR) and is no longer ignorable. For such NMAR missingness, joint modeling of the outcomes and missingness has been advocated, but these approaches rely on strong, unverifiable assumptions, such as a parametric specification of the missingness process. In this talk, I will show that minimal assumptions about missingness, such as whether it depends on random effects or on contemporaneous observed/unobserved outcomes, often allow us to make the missingness ignorable. In other words, we can obtain consistent estimates of (some) model parameters using standard estimators, a concept also referred to as "protective" estimation. Perhaps surprisingly, one approach is to simply discard more data. Another approach is to change the estimator, for example, by switching from a random-effects model to a fixed-effects model (Skrondal & Rabe-Hesketh, 2014, Biometrika 101, 175-188). This is joint work with Anders Skrondal.
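The switch from a random-effects to a fixed-effects estimator can be sketched in a small simulation (an assumed setup for illustration, not the estimator from the cited paper): the within transformation demeans each unit's data, which removes the unit-specific effect, so missingness that depends only on that effect does not bias the slope estimate.

```python
# Sketch: a within (fixed-effects) estimator under missingness that
# depends on the unit-specific random effect.  Simulated, made-up data.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_times, beta_true = 500, 6, 0.7
alpha = rng.normal(0.0, 1.0, n_units)              # unit-specific effects
x = rng.normal(0.0, 1.0, (n_units, n_times))
y = beta_true * x + alpha[:, None] + rng.normal(0.0, 1.0, (n_units, n_times))

# NMAR-style missingness: units with larger alpha lose observations more often
p_miss = 0.5 / (1.0 + np.exp(-alpha))              # depends on the random effect
observed = rng.random((n_units, n_times)) > p_miss[:, None]

# within estimator: demean y and x per unit over the observed occasions only
num = den = 0.0
for i in range(n_units):
    m = observed[i]
    if m.sum() < 2:                                # need >= 2 occasions to demean
        continue
    xd = x[i, m] - x[i, m].mean()
    yd = y[i, m] - y[i, m].mean()
    num += xd @ yd
    den += xd @ xd
beta_fe = num / den                                # close to beta_true
print(beta_fe)
```

Because demeaning eliminates `alpha` from the transformed outcome, the slope estimate remains consistent even though the dropout process depends on the random effect.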

The Statistics of Replication

Larry Hedges

Northwestern University, Illinois, USA

Replication is a fundamental aspect of the scientific method and is central to the rhetoric of science. Yet recent empirical research has called into question the replicability of experimental research in fields as diverse as economics, medicine, and psychology. This work undermines the credibility of science and the evidence science provides. Surprisingly, there has been little research on the methodology of replication itself, including the design of replication studies and appropriate statistical analyses to determine whether a set of studies replicate one another. Perhaps as a result, some recent programs of research on replication have used multiple, and sometimes mutually contradictory, methods to study replication. This talk will draw on a meta-analytic perspective to formalize ideas about the definition of replication and the analysis of replication studies. I will focus on three problems that seem straightforward, but will argue that each of them is more complex than it first appears. One is the precise definition of replication: What exactly does it mean to say that the results of a set of studies replicate one another? The second is the statistical analysis of replications: Given a definition of replication, what statistical analysis is appropriate? The third is the design of replication studies: What kind of ensemble of two or more studies should we assemble to evaluate whether results replicate?
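One concrete instance of the meta-analytic perspective on the second question (which statistical analysis is appropriate?) is the classical Q test of effect-size homogeneity; under exact replication, the weighted squared deviations of the study effects from their pooled value follow a chi-square distribution. The sketch below uses made-up effect sizes and is only one possible formalization, not necessarily the one developed in the talk.

```python
# Q test of homogeneity as one formal criterion for replication.
# Effect sizes and sampling variances below are hypothetical.
import numpy as np

effects = np.array([0.42, 0.35, 0.50, 0.38])       # study effect estimates
variances = np.array([0.010, 0.020, 0.015, 0.012])  # their sampling variances

w = 1.0 / variances                     # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled) ** 2)  # ~ chi-square(k - 1) under homogeneity
df = len(effects) - 1
crit = 7.81                             # chi-square critical value, df=3, alpha=.05
consistent = Q < crit                   # no evidence against replication
print(Q, consistent)
```

Here the small Q value falls well below the critical value, so these hypothetical studies would be judged consistent with one another under this homogeneity criterion.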