Research topics

Note: There are currently no job openings at the Cognition and Individual Differences lab.

Working at CIDLAB

Researchers at CIDLAB can contribute to any of the research projects currently running in the lab and in collaboration with other research groups at UC Irvine, or they can begin new projects of their own.

Current projects include quantitative modeling of cognition and individual differences, Bayesian statistics, and the implementation and deployment of useful computational algorithms. I am also interested in quantitative approaches to detecting and mitigating some of the societal challenges currently faced by psychological science (such as publication bias, academic fraud, and closed access to the scientific literature) and in new design and analysis methods.

Since research at CIDLAB is focused on mathematical modeling, strong quantitative skills are highly desirable. No advanced knowledge of classical statistics is required, nor is a degree in psychology or cognitive science (although the latter might help). Knowledge of at least one programming language such as MATLAB/Octave, R, Python, Julia, or C(++) is a requirement, and background knowledge of linear algebra, advanced calculus, or stochastic processes is helpful. Depending on the project, a willingness to learn some of these topics may be required. Strong candidates will also be proficient writers.

CIDLAB tends to be well funded, and its researchers are typically given many opportunities to attend academic conferences and professional development seminars. The University of California, Irvine is an equal opportunity employer, and Southern California is generally a wonderful place to live.

If you have any questions, feel free to contact the Principal Investigator at any time.

Topic: Modeling individual differences in cognition

This is the primary activity at CIDLAB. The basic idea of this research line is to apply computational and mathematical models of cognition in contexts where measurement models would ordinarily be used. Ultimately, the goal is to improve measurement by making informed assumptions about behavior in a task and taking into account various qualitatively different sources of variability. Improving the quality of measurement has applications in a wide variety of situations, ranging from fundamental research to clinical diagnosis.

As an example, consider a typical psychophysical 2AFC task in which both accuracy and response time are measured in a large number of conditions. A measurement model for this type of data would be a signal detection model that translates "hits" and "false alarms" into a discrimination parameter (d') and a bias parameter. In a second stage, some nonlinear link function might be used to relate the d' parameter to, say, stimulus intensity. A cognitive model could instead be used to take the response times into account as well, effectively using more data from the same paradigm. A cognitive model for the joint analysis of accuracy and response time can account for the fact that people are able to trade speed for accuracy (i.e., choose to respond more slowly in order to be more accurate). This speed-accuracy trade-off is an undesirable confound that a cognitive model, for example a diffusion model, can account for (Vandekerckhove, Tuerlinckx, & Lee, 2011).
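
As a minimal illustration of the first stage, the sketch below (in Python, using SciPy; the function name signal_detection and the example rates are made up for this illustration) converts hit and false-alarm rates into equal-variance Gaussian estimates of the discrimination parameter d' and the criterion c:

    from scipy.stats import norm

    def signal_detection(hit_rate, fa_rate):
        """Equal-variance Gaussian signal detection estimates."""
        z_hit = norm.ppf(hit_rate)  # z-transform of the hit rate
        z_fa = norm.ppf(fa_rate)    # z-transform of the false-alarm rate
        d_prime = z_hit - z_fa             # discriminability
        criterion = -0.5 * (z_hit + z_fa)  # response bias
        return d_prime, criterion

    # Example: 80% hits and 20% false alarms give d' of about 1.68 and c = 0
    print(signal_detection(0.80, 0.20))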

Useful: Good knowledge of linear algebra or stochastic calculus; knowledge of sampling and optimization algorithms (e.g., Gibbs, Metropolis, particle swarm optimization); good programming skills; some knowledge of neuropsychological testing or other research in which batteries of behavioral tests are used.

Topic: Bayesian statistics

Bayesian statistics is the new thing. Between immense advances in commercial computing power and a growing recognition that null hypothesis testing carries an inherent logical flaw, there is now much talk of a Bayesian revolution. Without going into detail, the core of Bayesian data analysis is to use probability theory to compute the relative probability that one theory is true compared to some other theory or set of theories. This is very different from classical null hypothesis testing, which typically singles out one specific (often rather unlikely) hypothesis, only to conclude that it is probably not true and that one of its infinitely many competitors is true (Lindley, 1993).
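
To make the idea concrete, here is a minimal sketch (in Python, with SciPy; the data are made up) that compares two accounts of binomial data by their marginal likelihoods, yielding a Bayes factor:

    from scipy.stats import binom
    from scipy.special import beta as beta_fn, comb

    # H0: theta = 0.5 (a fair coin); H1: theta ~ Uniform(0, 1)
    k, n = 60, 100  # 60 successes in 100 trials (made-up data)

    # Marginal likelihood under H0 is the binomial likelihood at 0.5
    m0 = binom.pmf(k, n, 0.5)

    # Under H1, integrate the likelihood over the uniform prior:
    # integral of C(n,k) theta^k (1-theta)^(n-k) dtheta = C(n,k) B(k+1, n-k+1)
    m1 = comb(n, k) * beta_fn(k + 1, n - k + 1)

    print("BF10 =", m1 / m0)  # relative evidence for H1 over H0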

There are two paths forward for Bayesian statistics in cognitive science: one practical and one more theoretical.

The practical next step is to provide easy-to-use software for common Bayesian analyses. Existing packages like JAGS and Stan are extremely powerful and flexible, but that flexibility sometimes makes them hard to use. It would be helpful if software existed to run simple, common analyses without the user having to learn a complete software package.
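
To sketch what such a tool might look like, the one-call interface below (hypothetical, in Python with SciPy; not part of JAGS or Stan) estimates a success rate with a conjugate beta-binomial model, without requiring the user to write any model code:

    from scipy.stats import beta

    def estimate_rate(successes, failures, a_prior=1.0, b_prior=1.0):
        """Posterior mean and 95% credible interval for a rate,
        under a Beta(a, b) prior (uniform by default)."""
        posterior = beta(a_prior + successes, b_prior + failures)
        return posterior.mean(), posterior.interval(0.95)

    mean, (lo, hi) = estimate_rate(7, 3)
    print(f"posterior mean {mean:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")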

A theoretical next step could be the development of Bayesian analogues of existing, commonly used analyses. Some recent examples in that vein are Bayesian structural equation models; Bayesian versions of the t test, ANOVA, and regression; and Bayesian power analysis.
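
As one illustration of such an analogue, the sketch below (in Python, with NumPy and SciPy; the data are simulated) implements a stripped-down Bayesian one-sample test via the Savage-Dickey density ratio, assuming for simplicity, and unlike a full Bayesian t test, that the variance is known:

    import numpy as np
    from scipy.stats import norm

    def savage_dickey_bf01(x, sigma=1.0, tau=1.0):
        """BF01 for H0: mu = 0 versus H1: mu ~ Normal(0, tau^2),
        with data x ~ Normal(mu, sigma^2) and sigma known."""
        n = len(x)
        post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)  # conjugate update
        post_mean = post_var * np.sum(x) / sigma**2
        # Savage-Dickey: posterior density at 0 over prior density at 0
        return norm.pdf(0, post_mean, np.sqrt(post_var)) / norm.pdf(0, 0, tau)

    rng = np.random.default_rng(1)
    x = rng.normal(0.3, 1.0, size=50)  # made-up data with a small effect
    print("BF01 =", savage_dickey_bf01(x))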

Useful: Good knowledge of calculus; knowledge of sampling and optimization algorithms (e.g., Gibbs, Metropolis, particle swarm optimization); good programming skills.

Topic: Societal challenges for academic psychology

Recently there has been a lot of attention focused on perceived problems in the world of academic psychology. One issue that has found its way into the popular media is a number of cases of academic fraud. I am interested not only in the statistical methods used to uncover fraud, but also in the consequences of academic misconduct and questionable research practices.

A lesser version of the same problem is publication bias. This type of bias, which comes in many forms, is not a form of misconduct by individual researchers, but rather a result of a somewhat pathological state of the "current standards and practices" in academic publishing. While it is commonly recognized that publication bias is both pervasive and damaging to scientific progress, there is no agreed-upon strategy for combating it. I am interested in research that aims to uncover, undo, or prevent publication bias.

Finally, I am supportive of any graduate applicant who wishes to contribute to reproducibility, transparency, team science, or open science with novel quantitative methods.

Useful: Good knowledge of classical statistics, logic, and philosophy of science.

Topic: General-purpose implementations of computational algorithms

There exist in the statistical literature a number of highly useful algorithms, models, and programming techniques that nevertheless see little use in practice. The reason, presumably, is that researchers rarely have the time to invest in learning about new models, let alone to build and test software implementations of those models and algorithms. However, the field is advanced by the application of state-of-the-art methods, and providing accessible, user-friendly tools for other researchers is good practice for any modeler.
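
As a small example of the kind of reusable implementation meant here, the sketch below (in Python, with NumPy) is a self-contained random-walk Metropolis sampler, one of the algorithms named in the skills list that follows:

    import numpy as np

    def metropolis(log_density, x0, n_samples=5000, step=0.5, seed=0):
        """Draw samples from an unnormalized log density
        using random-walk Metropolis."""
        rng = np.random.default_rng(seed)
        x, logp = float(x0), log_density(x0)
        samples = np.empty(n_samples)
        for i in range(n_samples):
            proposal = x + step * rng.standard_normal()
            logp_prop = log_density(proposal)
            # Accept with probability min(1, p(proposal) / p(current))
            if np.log(rng.uniform()) < logp_prop - logp:
                x, logp = proposal, logp_prop
            samples[i] = x
        return samples

    # Example: sample from a standard normal (log density up to a constant)
    draws = metropolis(lambda x: -0.5 * x**2, x0=0.0)
    print(draws.mean(), draws.std())  # roughly 0 and 1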

Useful: Background in computer science; good programming skills; knowledge of linear algebra or stochastic calculus; knowledge of sampling and optimization algorithms (e.g., Gibbs, Metropolis, particle swarm optimization).