To better understand psychological mechanisms, researchers are increasingly capitalizing on longitudinal studies. The lead-lag structure and the possibility of disentangling between-person and within-person sources of variance are two major assets of longitudinal (panel) data. But how can this information best be exploited in data analysis and interpretation? In this presentation, I want to identify several common problems and show how to avoid them by paying closer attention to the role of time. I will begin with a short example that illustrates some of the typical problems and questions faced by applied researchers and practitioners. Second, I will distinguish between static versus dynamic and discrete- versus continuous-time modeling approaches and discuss their advantages and disadvantages in the study of psychological mechanisms. Third, I will review different approaches to dealing with between-person differences, highlighting their dual role as a potential source of confounding as well as a source of information that can improve estimation and causal inference. I will outline a possible way to better integrate information on between-person differences and within-person changes in the search for causal mechanisms in future research, and end with a discussion of current problems and limitations.
Tests are the assessment technology most widely used by psychologists in their professional practice. In recent years there have been great advances in psychological assessment, which have affected both the tests themselves and their use. In this presentation, recent advances in the construction and use of tests are reviewed, and some future challenges are discussed. The review is structured around six dimensions of change: the evolution of psychometric models, changes in the technology used, developments in item construction, the estimation of reliability, the conceptualization of validity, and the use of tests in professional practice. Finally, some future perspectives are discussed, taking into account the great impact of new information technologies on assessment methods, tests included.
Optimal design allows the parameters of statistical models to be estimated according to important optimality criteria, e.g., minimizing the standard errors of estimators. Optimal designs may therefore considerably reduce the number of experimental units, such as respondents or items, in empirical studies. For a long time, optimal design received little attention within psychology, but interest in the subject is now increasing rapidly, as such designs are needed in large-scale assessments such as PISA and in adaptive testing.
In this presentation, the fundamental principles of optimal design are first introduced using well-known linear models, e.g., analysis of variance or multiple regression. The rationale of the adaptive, Bayesian, and minimax designs needed for nonlinear models will then be outlined. Such designs are presented for fixed- and random-effects models, e.g., IRT models or growth curve models. Finally, two R packages for deriving Bayesian and minimax designs based on recently developed algorithms will be demonstrated briefly.
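The efficiency gain from an optimal design can be shown with a minimal sketch (assumptions: a simple linear regression on the design space [-1, 1], for which the D-optimal design is known to place half the observations at each endpoint; the helper `slope_variance` is illustrative, not from any particular package):

```python
import numpy as np

def slope_variance(x, sigma2=1.0):
    """Variance of the OLS slope estimator: sigma^2 / sum((x - mean(x))^2)."""
    x = np.asarray(x, dtype=float)
    return sigma2 / np.sum((x - x.mean()) ** 2)

n = 20
uniform = np.linspace(-1.0, 1.0, n)                       # naive equally spaced design
optimal = np.array([-1.0] * (n // 2) + [1.0] * (n // 2))  # D-optimal design for a straight line

# The endpoint design estimates the slope roughly 2.7 times more precisely here,
# i.e., far fewer experimental units are needed for the same standard error.
print(slope_variance(uniform) / slope_variance(optimal))
```

The same logic, applied to nonlinear models such as IRT or growth curve models, is what the adaptive, Bayesian, and minimax designs discussed above generalize.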
Although once considered by some researchers to be on the fringe of conventional methodology, systematic observation has been progressively incorporated into diverse areas of psychological research. Psychological science is increasingly focusing on the study of everyday behavior, and studies applying systematic observation methodologies can now be found in mainstream interdisciplinary psychology journals (e.g., Frontiers in Psychology and Psicothema) and methodology journals (e.g., Behavior Research Methods and Quality & Quantity).
In this state-of-the-art lecture, I will discuss core aspects of systematic observation as a scientific method, with a focus on the profile of this approach and the specific processes it involves. Observational methodology is characterized by high scientific rigor and flexibility throughout its different stages and allows the objective study of spontaneous behavior in natural settings.
The study of spontaneous behavior yields a richness of information that can only be captured, without elicitation, by video or sound recordings. Furthermore, the tools now available to explore this richness, which is often hidden within the deeper layers of the data, have been greatly enhanced by recent technological advances. Quantification in observational methodology is particularly robust, and observational studies applying this methodology deserve consideration as mixed methods research.
One particularly interesting area is the use of indirect observation of everyday behavior in natural settings based on textual material, such as conversations, blog posts, tweets, etc. This approach involves ‘liquefying’ original or transcribed texts into a format in which the original qualitative data can be quantified and analyzed using techniques based on the order or sequence of events rather than on traditional frequency counts.
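The contrast between frequency counts and order-based analysis can be sketched as follows (a minimal illustration; the event codes Q, A, and S and the example sequence are invented for demonstration, not drawn from real data):

```python
from collections import Counter

# Hypothetical event sequence 'liquefied' from a transcribed conversation,
# coded as Q = question, A = answer, S = off-topic remark (codes are invented).
events = list("QAQASQAQSAQA")

freq = Counter(events)                           # traditional frequency counts
transitions = Counter(zip(events, events[1:]))   # lag-1 sequential (order-based) information

print(freq)                        # how often each behavior occurs
print(transitions.most_common(3))  # which behaviors tend to follow which
```

Frequency counts alone would treat any reshuffling of the same events as identical; the transition counts preserve the sequential structure that this approach exploits.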
I will conclude this lecture by providing an overview of the broad range of applications of scientific systematic observation and highlighting the possibilities of systematically studying spontaneous behavior in natural settings.
Incidental data are data that people produce incidentally, as a byproduct of the normal operations of a platform, business, or government. Well-known examples include using Twitter, Facebook, Google search, smartphones, badges, and similar sources to study social phenomena such as election behavior, attitudes, employment, or consumer confidence. It has been almost ten years since various high-impact papers and books proclaimed the end of traditional social research and the beginning of a new era of exciting possibilities for social science.
In this talk, I review the evidence from the past decade or so regarding the value of incidental data for social research. This includes research not only in the social sciences, but also in the humanities, data mining, and machine learning communities. While it is safe to say that traditional social research has not ended, I conclude that incidental data may indeed allow for an "update" of (some of) social science. However, to accomplish this, a considerable amount of work is still needed; I envision that methodologists will be at the forefront of this work, provided we can "update" ourselves as well. I end the lecture with some suggestions for where methodologists could start "pulling the thread" of other literatures in order to leverage incidental data for social research.
Randomized experiments provide the strongest warrant for causal inference. However, randomized experiments assume full treatment adherence for a proper estimate of the causal effect. One form of treatment non-adherence is binary, in which participants in the treatment group do not accept treatment. Biases resulting from binary treatment non-adherence under several traditional analysis approaches (e.g., per-protocol analysis) are briefly illustrated. Newer approaches that compare the effect for those participants in the treatment group who received treatment with those participants in the control group who would have accepted treatment had it been offered (the average treatment effect on the treated) are presented. Another form of non-adherence is partial adherence, in which participants in the treatment group receive only a proportion of the intervention (e.g., 5 sessions of a 10-session intervention program). Confounder-adjustment and instrumental-variable approaches that estimate the treatment effect conditional on the proportion of the full treatment received are presented. We discuss the assumptions of these approaches and the conditions under which they may be most useful. The usefulness of binary and partial adherence approaches as supplements to intention-to-treat analyses in estimating the causal effect will be discussed.
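The per-protocol bias mentioned above can be reproduced with a minimal simulation sketch (assumptions: a null true treatment effect and a single latent confounder `u` that drives both treatment acceptance and the outcome; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
true_effect = 0.0                          # null effect: any nonzero estimate is bias

u = rng.normal(size=n)                     # latent trait affecting adherence AND outcome
z = rng.integers(0, 2, size=n)             # randomized treatment assignment
accepts = (u + rng.normal(size=n)) > 0     # e.g., healthier participants accept treatment
d = z * accepts                            # treatment actually received
y = true_effect * d + u + rng.normal(size=n)

itt = y[z == 1].mean() - y[z == 0].mean()                       # intention to treat
per_protocol = y[(z == 1) & (d == 1)].mean() - y[z == 0].mean()

print(f"ITT estimate:          {itt:.3f}")            # close to the true effect of 0
print(f"Per-protocol estimate: {per_protocol:.3f}")   # clearly positive, i.e., biased
```

Because acceptance is not randomized, the per-protocol comparison selects systematically healthier participants into the treated group, whereas the ITT contrast preserves the randomization.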