I joined a panoply of scholars who argue that it is necessary to justify your alpha (that is, the acceptable long-run proportion of false positive results in a specific statistical test) rather than redefine a threshold for statistical significance across all fields. We’d hoped to emphasize that more than just alpha needed justification, but space limitations (and the original article’s title) focused our response on that specific issue. We also didn’t have the space to include specific examples of how to justify alpha or other kinds of analytic decisions. Because people are really curious about this issue, I elaborate here on my thought process about how one can justify alpha and other analytic decisions.
0) Do you even need to consider alpha?
The whole notion of an alpha level doesn’t make sense in multiple statistical frameworks. For one, alpha is only meaningful in the frequentist school of probability, in which we consider how often a particular event would occur in the real world out of the total number of such possible events. In Bayesian statistics, the concern is typically adjusting the probability that a proposition of some sort is true, which is a radically different way of thinking about statistical analysis. Indeed, the Bayes factor represents how much more likely the observed data are under an alternative hypothesis than under a null hypothesis. There’s also no clear mapping of p values to Bayes factors, so many discussions about alpha aren’t relevant if that’s the statistical epistemology you use.
Another set of considerations comes from Sir Ronald Fisher, one of the founders of modern statistics. In this view, there’s no error control to be concerned with; p values instead represent levels of evidence against a null hypothesis. Falling below a prespecified low p value (i.e., alpha) then entails rejection of a statistical null hypothesis. However, this particular point of view has fallen out of favor, and Fisher’s attempts to construct a system of fiducial inference also ended up being criticized. Finally, there are systems of statistical inference based on information theory and non-Bayesian likelihood estimation that do not entail error control in their epistemological positions.
I come from a clinical background and teach the psychodiagnostic assessment of adults. As a result, I come from a world in which I must use the information we gather to make a series of dichotomous decisions. Does this client have major depressive disorder or not? Is this client likely to benefit from exposure and response prevention interventions or not? This dichotomous framework extends to my thinking about how to conduct scientific studies. Will I pursue further work on a specific measure or not? Will my research presume this effect is worth including in the experimental design or not? Thus, the error control epistemological framework (born of Neyman-Pearson reasoning about statistical testing) seems to be a good one for my purposes, and I’m reading more about it to verify that assumption. In particular, I’m interested in disambiguating this kind of statistical reasoning from the common practice of null hypothesis significance testing, which amalgamates Fisherian inferential philosophy and Neyman-Pearson operations.
I don’t argue that it’s the only relevant possible framework to evaluate the results of my research. That’s why I still report p values in text whenever possible (the sea of tabular asterisks can make precise p values difficult to report there) to allow a Fisherian kind of inference. Such p value reporting also allows those in the Neyman-Pearson tradition to use different decision thresholds in assessing my work. It should also be possible to use the n/df and values of my statistics to compute Bayes factors for the simpler statistics I report (e.g., correlations and t tests), though more complex inferences may be difficult to reverse engineer.
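For the simpler statistics, one route a reader could take is the BIC-based approximation to the Bayes factor described by Wagenmakers (2007). Here is a minimal sketch in Python; the helper functions and the numbers at the bottom are my own illustrations rather than statistics from any of my studies, and other priors (such as the JZS default) will yield somewhat different values.

```python
from math import sqrt

def bf01_from_t(t, n, df):
    """BIC-approximated Bayes factor favoring the null (BF01), given a t statistic,
    the total number of observations n, and the test's degrees of freedom."""
    return sqrt(n) * (1 + t**2 / df) ** (-n / 2)

def bf01_from_r(r, n):
    """BIC-approximated BF01 for a Pearson correlation computed on n pairs."""
    return sqrt(n) * (1 - r**2) ** (n / 2)

# Purely illustrative numbers: a correlation of .20 observed in 200 participants
bf01 = bf01_from_r(0.20, 200)
print(f"BF01 = {bf01:.2f}; BF10 = {1 / bf01:.2f}")
```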
1) What are the goals of your study?
My lab has run four different kinds of studies, each of which has unique goals based on the population being sampled, the methods being used, and the practical importance of the questions being asked. A) One kind of study uses easy-to-access and convenient students or MTurk workers as a proxy for “people at large” for studies involving self-report assessments of various characteristics. B) Another study draws from a convenient student population to make inferences about basic emotional functioning through psychophysiological or behavioral measures. C) A third study draws from (relatively) easily sampled clinical populations in the community to bridge self-report and clinical symptom ratings, behavioral, and psychophysiological methods of assessment. D) The last study type comes from my work on the Route 91 shooting, in which the population sampled is time-sensitive, non-repeatable, and accessible only through electronic means. In each case, the sampling strategy from these populations entails constraints on generality that must be discussed when contextualizing the findings.
In study type A), I want to be relatively confident that the effects I observe are there. I’m also cognizant that measuring everything through self-report has attendant biases, so it’s possible processes like memory inaccuracies, self-representation goals, and either willful or unintentional ignorance may systematically bias results. Additionally, the relative ease of running a few hundred more students (or MTurk workers, if funding is available) makes running studies with high power to detect small effects a simpler proposition than in other study types, as once the study is programmed, there’s very little work needed to run participants through it. Indeed, they often just run themselves! In MTurk studies, it may take a few weeks to run 300 participants; if I run them in the lab, I can run about 400 participants in a year on a given study. Thus, I want to have high power to detect even small effects, I’ll use a lower alpha level to guard against spurious results born of mega-multiple comparisons, and I’m happy to collect large numbers of participants to reduce the errors surrounding any parameter estimates.
Study type B) deals with taking measurements across domains that may be less reliable and that do not suffer from the same single-method biases as self-report studies. I’m willing to achieve a lower evidential value for the sake of being able to say something about these studies. In part, I think for many psychophysiological measures, the field is learning how to do this work well, and we need to walk statistically before we can run. I also believe that to the extent these studies are genuinely cross-method, we may reduce some of the crud factor especially inherent in single-method studies and produce more robust findings. However, these data require two research assistants to spend an hour of their time applying sensors to participants’ bodies and then spend an extra two to three hours collecting data and removing those sensors, so the cost of acquiring new participants is higher than in study type A). For reasons not predictable from the outset, a certain percentage of participants will also not yield interpretable data. However, the participants are still relatively easy to come by, so replacement is less of an issue, and I can plan to run around 150 participants a year if the lab’s efforts are fully devoted to the study. In such studies, I’ll sacrifice some power and precision to run a medium number of participants with a higher alpha level. I also use a lot of within-subjects designs in these kinds of studies to maximize the power of any experimental manipulations, as they allow participants to serve as their own controls.
In study type C), I run into many of the same kinds of problems as study type B) except that participants are relatively hard to come by. They come from clinical groups that require substantial recruitment resources to characterize and bring in, and they’re relatively scarce compared to unselected undergraduates. For cost comparison purposes, it’s taken me two years to bring in 120 participants for such a study (with a $100+ session compensation rate to reflect that they’d be in the lab for four hours). Within-subjects designs are imperative here to keep power high, and I also hope that study type B) has shown us how to maximize our effect sizes such that I can power a study to detect medium-sized effects as opposed to small ones.
Study type D) entails running participants who cannot be replaced in any way, making good measurement imperative to increase precision to detect effects. Recruitment is also a tremendous challenge, and it’s impossible to know ahead of time how many people will end up participating in the study. Nevertheless, it’s still possible to specify desired effect sizes ahead of time to target along with the precision needed to achieve a particular statistical power. I was fortunate to get just over 200 people initially, though our first follow-up had the approximately 125 participants I was hoping to have throughout the study. I haven’t had the ability to bring such people into the lab so far, so it’s the efforts needed in recruitment and maintenance of the sample that represent substantial costs, not the time it takes research assistants to collect the data.
2) Given your study’s goals, what’s the minimum effect size that you want your statistical test to detect?
Many different effect sizes can be computed to address different kinds of questions, yet many researchers answer this question by defaulting to an effect size that either corresponds to a lay person’s intuitions about the size of the effect (Cohen, 1988) or a typical effect size in the literature. However, in my research domain, it’s often important to consider whether effect sizes make a practical difference in real-world applications. That doesn’t mean that effects must be whoppingly large to be worth studying; wee effects with cheap interventions that feature small side effect profiles are still well worth pursuing. Nevertheless, whether theoretical, empirical, or pragmatic, researchers should take care to justify a minimum effect size of interest, as this choice will guide the rest of the justification process.
In setting this minimum effect size of interest, researchers should also consider the reliability of the measures being used in a study. All things being equal, more reliable measures will be able to detect smaller effects given the same number of observations. However, savvy researchers should take into account the unreliability of their measures when detailing the smallest effect size of interest. For instance, a researcher may want to detect a correlation of .10 – which corresponds to the two measures sharing 1% of their variance – and the two measures the researcher is using have internal consistencies of .80 and .70. Rearranging Lord and Novick’s (1968) correction for attenuation, the smallest observed effect size of interest should be calculated as .10*√(.80*.70), or .10*√.56, or .10*.748, or .075.
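In code, that adjustment is a one-liner; here is a small sketch using the numbers from the example above.

```python
from math import sqrt

def smallest_observed_effect(r_of_interest, reliability_x, reliability_y):
    """Attenuate the construct-level correlation of interest by the measures'
    reliabilities to get the smallest observed correlation worth detecting."""
    return r_of_interest * sqrt(reliability_x * reliability_y)

print(round(smallest_observed_effect(0.10, 0.80, 0.70), 3))  # ~0.075
```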
However, unreliability of measurement is not the only kind of uncertainty that might lead researchers to choose a smaller minimum effect size of interest to detect. Even if researchers consult previous studies for estimates of relevant effect sizes, publication bias and uncertainty around the size of an effect in the population throw additional complications into these considerations. Adjusting the expected effect size of interest in light of these issues may further aid in justifying an alpha.
In the absence of an effect that passes the statistical threshold in a well-powered study, it may be useful to examine whether the observed effect is instead inconsistent with the smallest effect size of interest. In this way, we can articulate whether proceeding as if even that effect is absent is reasonable rather than defaulting to “retaining the null hypothesis”. This step is important for completing the error control process to ensure that some conclusions can be drawn from the study rather than leaving it out in an epistemological no-man’s land should results not pass the justified statistical threshold.
3) Given your study’s goals, what alpha level represents an adequate level of sensitivity to detect effects (or signals) of interest balanced against a specificity against interpreting noise?
Utility functions. Ideally, the field could compute some kind of utility function whose maximum value represents a balance among sample size from a given population, minimum effect size of interest, alpha, and power. This function could provide an objective alpha to use in any given situation. However, because each of these quantities has costs and benefits associated with it – and the relative costs and benefits will vary by study and investigator – such a function is unlikely to be computable. Thus, when justifying an alpha level, we need to resort to other kinds of arguments. This means that it’s unlikely all investigators will agree that a given justification is sufficient, but a careful layout of the rationale behind the reported alpha combined with detailed reporting of p values would allow other researchers to re-evaluate a set of findings to determine how they comport with those researchers’ own principles, costs, and benefits. I would also argue that there is no situation in which there are no costs, as other studies could always be run in place of one that’s chosen, participants could be allocated to other studies instead of the one proposed, and the time spent programming a study, reducing its data, and analyzing the results is itself a cost inherent in any study.
However, the paper proposing to redefine statistical significance provides additional sets of justifications for a stricter alpha level that entail multiple possible inferential benefits derived from normative or bridged principles. In particular, the authors of that paper bridge frequentist and Bayesian statistical inferential principles to emphasize the added rigor an alpha of .005 would lend the field. They also note that normatively, other fields have adopted stricter levels for declaring findings “significant” or as “discoveries”, and that such a strict alpha level would reduce the false positive rate below that of the field currently while requiring less than double the number of participants to maintain the same level of power.
Bridging principles. One could theoretically justify a (two-tailed) alpha level of .05 on multiple grounds. For instance, humans tend to think in 5s and 10s, and a 5/100 cutoff seems intuitively and conveniently “rare”. I should also note here that I use the term “convenient” to denote something adopted as more than “arbitrary”, inasmuch as our 5×2 digit hands provide us a quick, shared grouping for counting across humans. I fully expect that species with non-5/10 digit groupings (or pseudopods instead of digits) might use different cutoffs, which would similarly shape their thinking about convenient cutoffs for their statistical epistemology.
Such a cutoff has also been bridged to values of the normal distribution, as a two-tailed alpha of .05 corresponds to values roughly two standard deviations (more precisely, 1.96) away from that distribution’s mean. Because many parametric frequentist statistical tests assume normally distributed errors – and thus approximately normal sampling distributions for their test statistics – this bridge links the alpha level to a fundamental assumption of these kinds of statistical tests.
Another bridging principle entails considering that lower alpha levels correspond to increasingly severe tests of theories. Thus, a researcher may prefer a lower alpha level if the theory is more well-developed, its logical links are clearer (from the core theoretical propositions to its auxiliary corollaries to its specific hypothetical propositions to the statistical hypotheses to test in any study), and its constructs are more precisely measured.
How to describe findings meeting or exceeding alpha? From a normative perspective, labeling findings as “statistically significant” has led to decades of misinterpretation of the practical importance of statistical tests (particularly in the absence of effect sizes). In our commentary, we encouraged abandoning that phrase, but we didn’t offer an alternative. I propose describing these results as “passing threshold” to reduce misinterpretative possibilities. This term is far less charged with…significance…and may help separate evaluation of the statistical hypotheses under test from larger practical or theoretical concerns.
4) Given that alpha level, what beta level (or 1 − power) represents an acceptable tradeoff between missing a potential effect of interest and leaving noise uninterpreted?
Though justifying alpha is an important step, it’s just as important to justify your beta (which is the long-run proportion of false retentions of the null hypothesis). From a Neyman-Pearson perspective, the lower the beta, the more evidential value a null finding possesses. This is also why Neyman-Pearson reasoning is inductive rather than deductive: Null hypotheses have information value as opposed to being defaults that can only be refuted with deductive logic’s modus tollens tests. However, the lower the beta, the more observations are needed to make a given effect size pass the statistical threshold set by alpha. As shown above, one key to making minimum effect sizes of interest larger is measuring that effect more reliably. A second is maximizing the strength of any manipulation such that a larger minimum effect size would be interesting to a researcher.
Another angle on the question leading this section is: How precise would the estimate of the effect size need to be to make me comfortable with accepting the statistical hypothesis being tested rather than just retaining it in light of a test statistic that doesn’t pass the statistical threshold? Just because a test has a high power (on account of a large effect size) doesn’t mean that the estimate of that effect is precise. More observations are needed to make precise estimates of an effect – which also reduces the beta (and thus heightens the power) of a given statistical test.
Power curves visualize the tradeoffs among effect size, beta, and the number of observations. They can aid researchers in determining how feasible it is to have a null hypothesis with high evidential value versus being able to conduct the study in the first place. Many power curves begin to show sharply diminishing returns – each additional observation buying less and less reduction in beta – around a beta of .20 (or power of about .80), consistent with historical guidelines. However, other considerations may take precedence over the shape of a power curve. Implicitly, traditional alpha (.05) and beta (.20) levels imply that erroneously declaring an effect passes threshold is four times worse than erroneously declaring an effect does not. Some researchers might believe even higher ratios should be used. Alternatively, it may be more important for researchers to fix one error rate or another at a specific value and let the other vary as resources dictate. These values should be articulated in the justification.
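As a sketch of what drawing such curves might look like (using the statsmodels and matplotlib packages; the candidate effect sizes and the alpha of .05 below are placeholders rather than recommendations):

```python
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
sample_sizes = list(range(10, 1001, 10))            # participants per group
for d in (0.2, 0.35, 0.5):                          # candidate minimum effect sizes (Cohen's d)
    betas = [1 - analysis.power(effect_size=d, nobs1=n, ratio=1.0, alpha=0.05)
             for n in sample_sizes]
    plt.plot(sample_sizes, betas, label=f"d = {d}")
plt.axhline(0.20, linestyle="--", color="gray")     # the conventional beta of .20
plt.xlabel("Participants per group")
plt.ylabel("beta (Type II error rate)")
plt.legend()
plt.show()
```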
5) Given your study’s goals and the alpha and beta levels above, how can you adjust for multiple comparisons to maintain an acceptable level of power while guarding against erroneous findings?
Most studies do not conduct a single comparison. Indeed, many studies toss in a number of different variables and assess the relationships, mediation, and moderation among them. As a result, there are many more comparisons conducted than the chosen alpha and beta levels are designed to guard against! There are four broad methods to use when considering how multiple comparisons impact your stated alpha and beta levels.
Per-comparison error rate (PCER) does not adjust comparisons at all and simply accepts the risk of there likely being multiple spurious results. In this case, no adjustments to alpha or beta need to be made in determining how many observations are needed.
False discovery error rate (FDER) allows that there will be a certain proportion of false discoveries in any set of multiple tests; FDER corrections attempt to keep the rate of these false discoveries at the given alpha level. However, this comes at a cost of complexity for those trying to justify alpha and beta, as each comparison uses a different critical alpha level. One common method for controlling FDER (the Benjamini-Hochberg step-up procedure) adjusts the critical alpha level for each comparison in a relatively linear fashion: working down from the highest p value, null hypotheses are retained until reaching the first comparison for which p ≤ [(rank of that p value, counting from the smallest)/(total # of comparisons)]*(justified alpha). That comparison and all comparisons with smaller p values are judged as passing threshold. So, which comparison’s alpha value should be used in justifying comparisons? This may require knowing on average how many comparisons would typically pass threshold within a given comparison set size to plan for a final alpha to justify. After that, the number of observations may need adjusting to maintain the desired beta.
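To make that verbal description concrete, here is a minimal sketch of the step-up logic applied to a hypothetical set of five p values; the alpha of .05 is only a placeholder.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return booleans indicating which comparisons pass threshold under
    Benjamini-Hochberg false discovery rate control."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])    # indices, smallest p first
    largest_passing_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:            # step-up criterion
            largest_passing_rank = rank
    passes = [False] * m
    for rank, idx in enumerate(order, start=1):
        passes[idx] = rank <= largest_passing_rank         # that rank and below pass
    return passes

print(benjamini_hochberg([0.001, 0.012, 0.020, 0.043, 0.31]))
```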
Corrections to the family-wise error rate (FWER) seek to reduce the error rate across a set of comparisons by lowering alpha in a more dramatic way across comparisons than do corrections for FDER. One popular method for controlling FWER entails dividing the desired alpha level by the number of comparisons. If the smallest p value in that set is smaller than alpha/(# comparisons), then it passes threshold and the next smallest p value is compared to alpha/(# comparisons-1). Once the p value of a comparison is greater than that fraction, that comparison and the remaining comparisons are considered not to have passed threshold. This correction has the same problems of an ever-shifting alpha and beta as the FDER, so the same cautions apply.
Per-family error rate (PFER) represents the most stringent error control of all. In this view, making multiple errors across families of comparisons is more damaging than making one error. Thus, tight control should be exercised over the possibility of making an error in any family of comparisons. The Bonferroni correction is one method of maintaining a PFER that is familiar to many researchers. In this case, alpha simply needs to be divided by the number of comparisons to be made, beta adjusted to maintain the appropriate power, and the appropriate number of observations collected.
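In practice, none of these corrections needs to be computed by hand. A sketch with the multipletests function from statsmodels (applied to a made-up family of p values) shows how the three approaches can differ in what passes threshold:

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.020, 0.043, 0.31]   # hypothetical family of comparisons
for method in ("fdr_bh", "holm", "bonferroni"):
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method=method)
    # FDER control typically passes at least as many comparisons as FWER or PFER control
    print(method, reject)
```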
Many researchers reduce alpha in the face of multiple comparisons to address the PCER without taking steps to address other kinds of error rates formally. Such ad hoc adjustments should at least also report how many tests would pass the statistical threshold by chance alone. Using FDER or FWER control techniques represents a balance between leniency and strictness in error control, though researchers should specify in advance whether false discovery or family-wise error control is more in line with their epistemological stance at a given time. Researchers may prefer to control the PFER when the number of comparisons is kept to a minimum through the use of a few critical, focal tests of a well-developed theory.
In FDER, FWER, and PFER control mechanisms, the notion of “family” must be justified. Is it all comparisons conducted in a study? Is it a set of exploratory factors that are considered separately from focal confirmatory comparisons? Does it group together conceptually similar measures (e.g., normal-range personality, abnormal personality, time-limited psychopathology, and well-being)? All of these and more may be reasonable families to use in lumping or splitting comparisons. However, to help researchers believe that these families were considered separable at the outset of a study, family membership decisions should be pre-registered.
6) Examples of justified alphas
Though the epistemological principles involved in justifying alphas and similar quantities run deeply, I don’t believe that a good alpha justification requires more than a paragraph. Ideally, I would like to see this paragraph placed at the start of a Method section in a journal article, as it sets the epistemological stage for everything that comes afterward. For each study type listed above, here are some possible paragraphs to justify a particular alpha, beta, and number of observations. I note that these are riffs off possible justifications; they do not necessarily represent the ways I elected to treat the same data detailed in the first sentences of each paragraph. To determine appropriate corrections for unreliability in measures when computing power estimates, I used Rosnow and Rosenthal’s (2003) conversions of effect sizes to correlations (rs).
Study type A): We planned to sample from a convenience population of undergraduates to provide precise estimates of two families of five effects each; we expected all of these effects to be relatively small. Because our measures in this study have historically been relatively reliable (i.e., internal consistencies > .80), we planned our study to detect a minimum correlation of .08, as that corresponds to a presumed population correlation of .10 (or proportion of variance shared of 1%) measured with instruments with at least 80% true score variance. We also recognized that our study was conducted entirely with self-report measures, making it possible that method variance would contaminate some of our findings. As a result, we adopted a critical one-tailed α level of .005. Because we believed that spuriously detecting an effect was ten times as undesirable as failing to detect an effect, we chose to run enough participants to ensure we had 95% power to detect a correlation of .08. We used the Holm-Bonferroni method to provide a family-wise error rate correction entailing a minimum α of .001 for the largest effect in the study in each of the two families. This required a sample size of 3486 participants with analyzable data according to G*Power (Faul, Erdfelder, Buchner, & Lang, 2009). Based on previous studies with this population (e.g., Molina, Pierce, Bergquist, & Benning, 2018), we anticipated that 2% of our sample would produce invalid personality profiles that would require replacement to avoid distorting the results (Benning & Freeman, 2017). Consequently, we targeted a sample size of 3862 participants to anticipate these replacements.
Study type B): We planned to sample from a convenience population of undergraduates to provide initial estimates of the extent to which four pleasant picture contents potentiated the postauricular reflex compared to neutral pictures. From a synthesis of the extant literature (Benning, 2018), we expected these effects to vary between ds of 0.2 and 0.5, with an average of 0.34. Because our measures in this study have historically been relatively unreliable (i.e., internal consistencies ~ .35; Aaron & Benning, 2016), we planned our study to detect a minimum mean difference of 0.20, as that corresponds to a presumed population mean difference of 0.34 measured with approximately 35% true score variance. We adopted a critical α level of .05 to keep false discoveries at a traditional level as we sought to improve the reliability and effect size of postauricular reflex potentiation. Following conventions that spuriously detecting an effect was four times as undesirable as failing to detect an effect, we chose enough participants to ensure we had 80% power to detect a d of 0.20. We used the Benjamini-Hochberg (1995) method to provide a false discovery error rate correction entailing a minimum one-tailed α of .0125 for the largest potentiation in the study. This required a sample size of 156 participants with analyzable data according to G*Power (Faul, Erdfelder, Buchner, & Lang, 2009). Based on previous studies with this population (e.g., Ait Oumeziane, Molina, & Benning, 2018), we anticipated that 15% of our sample would produce unscoreable data in at least one condition for various reasons and would require additional data to fill in those conditions. Thus, we targeted a sample size of 180 participants to accommodate these additional participants.
Study type C): We planned to sample from our university’s community mental health clinic to examine how anhedonia manifests itself in depression across seven different measures that are modulated by emotional valence. However, because there were insufficient cases with major depressive disorder in that sampling population, we instead used two different advertisements on Craigslist to recruit local depressed and non-depressed participants who were likely to be drawn from the same population. We believed that a medium effect size for the Valence x Group interaction (i.e., an f of 0.25) represented an effect that would be clinically meaningful in this assessment context. Because our measures’ reliabilities vary widely (i.e., internal consistencies ~ .35-.75; Benning & Ait Oumeziane, 2017), we planned our study to detect a minimum f of 0.174, as that corresponds to a presumed population f of 0.25 measured with approximately 50% true score variance. We adopted a critical α level of .007 in evaluating the Valence x Group interactions, using a Bonferroni correction to maintain a per-family error rate of .05 across all seven measures. To balance the number of participants needed from this selected population with maintaining power to detect effects, we chose enough participants to ensure we had 80% power to detect an f of 0.174. This required a sample size of 280 participants with analyzable data according to G*Power (Faul, Erdfelder, Buchner, & Lang, 2009). Based on previous studies with these measures (e.g., Benning & Ait Oumeziane, 2017), we anticipated that 15% of our sample would produce unscoreable data in at least one condition for various reasons and would require additional data to fill in those conditions. Thus, we targeted a sample size of 322 participants to accommodate these additional participants.
Study type D): We planned to sample from the population of survivors of the Route 91 Harvest Festival shooting on October 1, 2017, and from the population of the greater Las Vegas valley area who learned of that shooting within 24 hours of it happening. Because we wanted to sample this population within a month after the incident to examine acute stress reactions – and recruit as many participants as possible – this study did not have an a priori number of participants or targeted power. To maximize the possibility of detecting effects in this unique population, we adopted a critical two-tailed α level of .05 with a per-comparison error rate, as we were uncertain about the possible signs of all effects. Among the 45 comparisons conducted this way, chance would predict approximately 2 to pass threshold. However, we believed the time-sensitive, unrepeatable nature of the sample justified using looser evidential thresholds to speak to the effects in the data.
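As a rough cross-check of the study type B numbers (assuming its 80% power figure refers to a one-tailed, within-subjects comparison of d = 0.20 at an α of .05), a statsmodels calculation should land near the G*Power figure reported above, though the exact result may differ slightly across programs.

```python
from math import ceil
from statsmodels.stats.power import TTestPower

# Paired/within-subjects design: smallest d of interest = 0.20,
# one-tailed alpha = .05, desired power = .80
n_analyzable = TTestPower().solve_power(effect_size=0.20, alpha=0.05,
                                        power=0.80, alternative="larger")
print(ceil(n_analyzable))          # should be close to the 156 analyzable participants above
print(ceil(n_analyzable * 1.15))   # one way of budgeting for ~15% unscoreable records
```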
On New Year’s Eve 2016, Mariah Carey had a…notable performance in which she had difficulties rendering the songs “Emotions” and “We Belong Together”. She roared back on New Year’s Eve 2017, sparking the first meme of 2018.
Alas, it is unlikely that the field of psychophysiology will un-mangle its measurement of emotions with reflexes in such a short span of time.
My lab uses two reflexes to assess the experience of emotion, both of which can be elicited through short, loud noise probes. The startle blink reflex is measured underneath the eye, and it measures a defensive negative emotional state. The postauricular reflex is a tiny reflex behind the ear that measures a variety of positive emotional states. Unfortunately, neither reflex assesses emotion reliably.
When I say “reliably”, I mean an old-school meaning of reliability that addresses what percentage of variability in a measurement’s score is due to the construct it’s supposed to measure. The higher that percentage, the more reliable the measurement. In the case of these reflexes, in the best-case scenarios, about half of the variability in scores is due to the emotion they’re supposed to assess.
That’s pretty bad.
For comparison, the reliability of many personality traits is at least 80%, especially from modern scales with good attention to the internal consistency of what’s being measured. The reliability of height measurements is almost 95%.
Why is reflexive emotion’s reliability so bad?
Part of it likely stems from the fact that (at least in my lab) we measure emotion as a difference of reactivity during a specific emotion versus during neutral. For the postauricular reflex, we take the reflex magnitude during pleasant pictures and subtract from that the reflex magnitude during neutral pictures. For the startle blink, we take the reflex magnitude during aversive pictures and subtract from that the reflex magnitude during neutral pictures. Differences can have lower reliabilities than single measurements because the unreliability in both the emotion and neutral measures compounds when making the difference scores.
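The standard formula for the reliability of a difference score makes that compounding concrete. Here is a small sketch; the numbers plugged in at the bottom are hypothetical, not estimates from my data.

```python
def difference_score_reliability(rel_x, rel_y, r_xy, sd_x=1.0, sd_y=1.0):
    """Reliability of the difference X - Y, given each score's reliability,
    their intercorrelation, and their standard deviations."""
    true_variance = rel_x * sd_x**2 + rel_y * sd_y**2 - 2 * r_xy * sd_x * sd_y
    total_variance = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
    return true_variance / total_variance

# Two equally variable scores, each with reliability .60, correlated .45
print(round(difference_score_reliability(0.60, 0.60, 0.45), 2))  # ~0.27
```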
However, it’s even worse when we use reflex magnitudes just during pleasant or aversive pictures. In fact, it’s so bad that I’ve found both reflexes have negative reliabilities when measured just as the average magnitude during either pleasant or aversive pictures! That’s a recipe for a terrible, awful, no good, very bad day in the lab. That’s why I don’t look at reflexes during single emotions by themselves as good measures of emotion.
Now, some of these difficulties look like they can be alleviated if you look at raw reflex magnitude during each emotion. If you do that, it looks like we could get reliabilities of 98% or more! So why don’t I do this?
Because reflex magnitudes during any stimulus can differ from person to person by a factor of more than 100, it’s a person’s overall reflex magnitude that raw reflex magnitudes are measuring – irrespective of any emotional state the person’s in at that moment.
Let’s take the example of height again. Let’s also suppose that feeling sad makes people’s shoulders stoop and heads droop, so they should be shorter (that is, have a lower height measurement) whenever they’re feeling sad. I have people stand up while watching a neutral movie and a sad movie, and I measure their height four times during each movie to get a sense of how reliable the measurement of height is.
If all I do is measure the reliability of people’s mean height across the four sadness measurements, I’m likely to get a really high value. But what have I really measured there? Well, it’s just basically how tall people are – it doesn’t have anything to do with the effect of sadness on their height! To understand how sadness specifically affects people’s heights, I’d have to subtract their measured height in the neutral condition from that in the sad condition: a difference score.
Furthermore, if I wanted to take out entirely the variability associated with people’s heights from the effects of sadness I’m measuring (perhaps because I’m measuring participants whose heights vary from 1 inch to 100 inches), I can use a process called “within-subject z scoring”, which is what I use in my work. It doesn’t seem like the overall reflex magnitude people have predicts many interesting psychological states, so I feel confident in this procedure. Though my measurements aren’t great, at least they measure what I want to some degree.
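For the curious, here is a minimal sketch of within-subject z scoring using pandas; the data frame, its column names, and its values are invented purely for illustration.

```python
import pandas as pd

# Long-format data: one reflex magnitude per trial, per participant
df = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["pleasant", "pleasant", "neutral", "neutral"] * 2,
    "magnitude":   [120.0, 140.0, 100.0, 110.0, 2.1, 1.8, 1.5, 1.4],
})

# z score each participant's magnitudes against their own mean and SD,
# so huge between-person differences in overall magnitude drop out
df["z"] = df.groupby("participant")["magnitude"].transform(
    lambda x: (x - x.mean()) / x.std()
)

# Condition means of the z scores per participant, and their difference
means = df.pivot_table(index="participant", columns="condition", values="z")
print(means["pleasant"] - means["neutral"])   # within-subject potentiation scores
```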
What could I do to make reflexive measures of emotion better? Well, I’ve used four noise probes in each of four different picture contents to cover a broad range of positive emotions. One thing I could do is target a specific emotion within the positive or negative emotional domain and probe it sixteen times. Though it would reduce the generalizability of my findings, it would substantially improve reliability of the reflexes, as reliabilities tend to increase the more trials you include (because random variations have more opportunities to get cancelled out through averaging). For the postauricular reflex, I could also present lots of noise clicks instead of probes to increase the number of reflexes elicited during each picture. Unfortunately, click-elicited and probe-elicited reflexes only share about 16% of their variability, so it may be difficult to argue they’re measuring the same thing. That paper also shows you can’t do that for startle blinks, so that’s a dead end method for that reflex.
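As a rough illustration of that trial-count logic, the Spearman-Brown prophecy formula projects what quadrupling the probes for a single content might buy, assuming the additional trials behave like the existing ones and starting from a reliability around .35.

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability after multiplying the number of trials by length_factor."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Going from 4 probes of one picture content to 16
print(round(spearman_brown(0.35, 4), 2))  # ~0.68
```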
In short, there’s a lot of work to do before the psychophysiology of reflexive emotion can relax with its cup of tea after redeeming itself with a reliable, well-received performance (in the lab).
The 50th anniversary of the TV show Star Trek‘s first broadcast is today. It was a formative franchise for me growing up, informing many of my first ideas about space exploration, heroism, and a collaborative society. Debates abound about the best episode of the series. However, I agree with Business Insider’s choice of the episode Balance of Terror. It’s essentially a space version of submarine warfare, for which I’ve been a sucker ever since the game Red Storm Rising for the Commodore 64. This episode has everything: Lore building of the political and technological history of the Federation, the introduction of a new opponent, a glimpse of life on the lower decks, and character development galore for multiple cast members – including a guest star.
One of the moments that always stuck with me was one in the Captain’s quarters as the Enterprise and its Romulan counterpart wait each other out in silence. Dr. McCoy comes to speak with Captain Kirk, who expresses a rare moment of self-doubt regarding his decisions during tactical combat. The doctor’s compassionate nature comes through as he reminds the captain how across 3 million Earth-like planets that might exist, recapitulated across 3 million million galaxies, there’s only one of each of us – and not to destroy the one named Kirk. The lesson of that moment resonates 50 years later and is one I like to revisit when I feel myself beset by doubts about myself or my career.
Another moment I appreciate is the imperfection allowed in Spock’s character without being under the influence of spores, temporal vortices, or other sci-fi contrivances. Already, he has been accused of being a Romulan spy by a bigoted member of the crew who lost multiple family members in a war with the Romulans decades before visual communication was possible. Now, Spock breaks the silence under which the Enterprise was operating with a clumsy grip on the console he is repairing. Is this the action of a spy? Or just an errant mistake that anyone could make, especially when under heightened scrutiny?
Indeed, this error might be expected when Mr. Spock operates under stereotype threat. Just hours earlier, he was revealed to share striking physiological similarities with the Romulan enemies, who Spock described as possible warrior offshoots of the Vulcan race before Vulcans embraced logic. This revelation caused Lt. Stiles, who had branches of his family wiped out in the prior war with the Romulans, to view Spock with distrust and outright bigotry that was so blatant that the captain called him on it on the bridge. Still, Stiles’s prejudice against Spock is keenly displayed throughout the episode, making it more likely that Spock would conform to the sabotaging behavior expected of him by his bridgemate.
On their own ship, the sneaky and cunning Romulans were not depicted as mere stereotypes of those adjectives but instead as a richly developed martial culture. Their commander and his centurion have a deep bond that extends over a hundred campaigns; the regard these two have for each other is highlighted in the actors’ subtle inflections and camaraderie. The internal politics of the Romulan empire are detailed through select lines of dialog surrounding the character of Decius and the pique that character elicits in his commander. In the end, the Romulan commander is shown to be sensitive to the demands of his culture and his subordinates in the culminating action of the episode, though the conflict between these and his own plans is palpable.
The contrast between Romulans and Spock highlights how alien Vulcan logic seems to everyone else. Spock is a character who represents the outsider, the one struggling for acceptance among an emotional human crew even as he struggles to maintain his culture’s logical discipline. Authors with autism have even remarked how Spock helped them understand how they perceive the world differently from neurotypicals in a highly logical fashion. However, given the emotional militarism of the Romulans, I believe that Vulcan logic is a strongly culturally conditioned behavior rather than a reflection of fundamental differences in baseline neurobiological processing.
There are neurobiological differences in sustained attention to different kinds of objects in autism compared to neurotypical controls. Work I did in collaboration with Gabriel Dichter has demonstrated that individuals with autism spectrum disorders have heightened attention to objects of high interest to these individuals (e.g., trains, computers) compared to faces, whereas neurotypicals show the opposite pattern of attention (access here). Based on decades of cultural influence, Mr. Spock might be expected to show equal attention to objects and faces, but Dr. McCoy, Captain Kirk, and the Romulans all would be expected to be exquisitely sensitive to faces, as they convey a lot of information about the social world.
UPDATE 20190820: This post led to this paper in the special issue of the Journal of Abnormal Psychology about increasing replicability, transparency, and openness in clinical psychological research. In it, we describe a two-dimensional continuum of registration efforts and now describe preregistrations as those that occur before data are collected, coregistrations as those that occur after data collection starts but before data analysis begins, and postregistrations as those that occur after data analysis begins. The preprint is here.
This is a long post written for both professionals and curious lay people; the links below allow you to jump among the post’s sections. The links in all CAPS represent the portions of this post I view as its unique intellectual contributions.
Preregistration: prelude, problems addressed, and concerns
Psychology is beset with ways to find things that are untrue. Many famous and influential findings in the field are not standing up to closer scrutiny with tightly controlled designs and methods for analyzing data. For instance, a registered replication report in which my lab was involved found that holding a pen between your teeth to force a smiling pose does not, in fact, make cartoons funnier. Indeed, less than half of 100 studies published in top-tier psychology journals replicated.
One proposal for solving these problems is preregistration. Preregistration refers to making available – in an accessible repository – a detailed plan about how researchers will conduct a study and analyze its results. Any report that is subsequently written on the study would ideally refer to this plan and hew closely to it in its initial methods and results descriptions. Preregistration can help mitigate a host of questionable research practices that take advantage of researcher degrees of freedom, or the hidden steps behind the scenes that researchers can take to influence their results. This garden of forking paths can transmute data from almost any study into something statistically significant that could be written up somewhere; preregistration prunes this garden into a single, well-defined shrub for any set of studies.
Yet prominent figures doubt the benefits of preregistration. Some even deny there’s a replication crisis that would require these kinds of corrections. And to be sure, there are other steps to take to solve the reproducibility crisis. However, I argue that preregistration has three virtues, which I describe below. In addition to enhancing reproducibility of scientific findings, it provides a method for managing conflicts of interest in a transparent way above and beyond required institutional disclosures. Furthermore, I also believe preregistration permits a lab to demonstrate its increasing competence and a field’s cumulative knowledge.
Enhancing reproducibility
Chief among the proposed benefits of preregistration is the ability of science to know what actually happened in a study. Preregistration is one part of a larger open science movement that aims to make science more transparent to everyone – fellow researchers and the public alike. Preregistration is probably more useful for people on the inside, though, as it helps people knowledgeable in the field assess how a study was done and what the boundaries were on the initial design and analysis. Nevertheless, letting the general public see how science is conducted would hopefully foster trust in the research enterprise, even if it may be challenging to understand the particulars without formal training.
Hypothesizing After the Results are Known (HARKing): You can’t say you thought all along something you found in your data if it’s not described in your preregistration.
Altering sample sizes to stop data collection prematurely (if you find the effect you want) or prolong it (to increase the power, or the likelihood you have to detect effects): You said how many observations you were going to make, so you have a preregistered point to stop. Ideally, this stopping point would be determined from a power analysis using reasonable assumptions from the literature or basic study design about the expected effect sizes (e.g., differences between conditions or strengths of relationships between variables).
Eliminating participants or data points that don’t yield the effect you want: There are many reasons to drop participants after you’ve seen the data, but preregistering reasons for eliminating any participants or data from your analyses stops you from doing so to “jazz up” your results.
Dropping variables that were analyzed: If you collect lots of measures, you’ve got lots of ways to avoid putting your hypotheses to rigorous tests; preregistration forces you to specify which variables are focal tests of your hypothesis beforehand. It also ensures you think about making appropriate corrections for making lots of tests. If you run 20 different analyses, each with a 5% chance (or .05 probability) of yielding a result you want (a typical setup in psychology), then you’re likely to find 1 significant result by chance alone! (See the quick calculation after this list.)
Dropping conditions or groups that “didn’t work”: Though it may be convenient to collect some conditions “just to see what happens”, preregistering your conditions and groups makes you consider them when you write them up.
Invoking hidden moderators to explain group differences: Preregistering all the things you believe might change your results ensures you won’t pull an analytic rabbit out of your hat.
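To put numbers on the multiple-testing point in the dropped-variables item above: with 20 independent tests at an alpha of .05, the expected number of spurious passes is 1, and the chance of at least one is roughly two in three.

```python
alpha, n_tests = 0.05, 20
expected_false_positives = alpha * n_tests           # = 1.0
prob_at_least_one = 1 - (1 - alpha) ** n_tests       # ~= 0.64
print(expected_false_positives, round(prob_at_least_one, 2))
```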
Many of these solutions can be summed up in 21 words. Ultimately, rather than having lots of hidden “lab secrets” about how to get an effect to work or a multitude of unknown ingredients working their way into the fruit of the garden of forking paths, research will be cleanly defined and obvious, with bright and shiny fruit from its shrubbery.
Managing conflicts of interest
As I was renewing my CITI training (the stuff we researchers have to refresh every 4 years to ensure we keep up to date on performing research ethically and responsibly), I also realized that preregistration of analytic plans creates a conflict of interest management plan. Preregistered methods and data analytic plans require researchers to describe exactly what they’re going to do in a study. Those plans can be reviewed by experts – including officials at an individual’s university, at a funding agency, or in a journal’s editorial processes – to detect ways in which researchers’ own interests might be put ahead of the integrity of the data or analyses in the study. Conscientious researchers can also scrutinize their own plans to see how their own best interests might have crept ahead of the most scientifically justifiable procedures to follow in a study.
These considerations led the clinical trials field to adopt a set of guidelines to prevent conflicts of interest from altering the scientific record. Far more than institutional disclosure forms, these guidelines force scientists to show their work and stick to the script of their initial study design. Since adopting these guidelines, the number of clinical trials showing null outcomes has increased dramatically. This pattern suggests that conflicts of interest may have guided some of the positive findings for various therapies rather than scientific evidence analyzed according to best practices. The preregistered shrub may not bear as much fruit as the garden of forking paths, but the fruit preregistered science bears is less likely to be poisonous to the consumer of the research literature.
Demonstrating scientific competence and cumulative knowledge
One underappreciated benefit of preregistration is the way it allows researchers to demonstrate their increasing competence in an area of study. When we start out exploring something totally new, we have ideas about basic things to consider in designing, implementing, and analyzing our studies. However, we often don’t think of all the probable ways that data might not comport with our assumptions, the procedural shifts that might be needed to make things work better, or the optimal analytic paths to follow.
When you run a first study, loads of these issues creep up. For example, I didn’t realize how hard it was going to be to recruit depressed patients from our clinic for my grant work on depression (especially after changing institutions right as the grant started), so I had to switch recruitment strategies. Right as we were starting to recruit participants, there was also a conference talk in 2013 that totally changed the way I wanted to analyze our data, as the mood reactivity item was better for what we wanted to look at than an entire set of diagnostic subtypes. In dealing with those challenges, you learn a lot for the second time you run a similar study. Now I know how to specify my recruitment population, and I can point to that talk as a reason for doing things a different way than my grant described. Over time, I’ll know more and more about this topic and the experimental methods in it, plugging additional things into my preregistrations to reflect my increased mastery of the domain.
Ideally, the transition from less detailed exploratory analyses to more detailed confirmatory work is a marker of a lab’s competence with a specific set of techniques. One could even judge a lab’s technical proficiency by the number of considerations advanced in their preregistrations. Surveying preregistered projects for various studies might let you know who the really skilled scientists in an area are. That information could be useful to graduate students wanting to know with whom they’d like to work – or potential collaborators seeking out expertise in a particular topic. Ideally, a set of techniques would be well-established enough within a lab to develop a standard operating procedure (SOP) for analyzing data, just as many labs have SOPs for collecting data.
In this way, the fruits of research become clearer and more readily picked. Rather than taking fruitless dead ends down the garden of forking paths with hidden practices and ad hoc revisions to study designs, the well-manicured shrubbery of preregistered research and SOPs gives everyone a way to evaluate the soundness of a lab’s methods without ever having to visit. Indeed, some journals take preregistration so seriously now that they are willing to provisionally pre-accept papers with sound, rigorous, and preregistered methodology. Tenure committees can likewise peek behind the hood of the studies you’ve conducted, which could alleviate a bit of the publish-or-perish culture in academia. A university’s standards could even reward an investigator’s rigor of research beyond a publication history (which may be more like a lottery than a meritocracy).
A model for confirmatory and exploratory reporting and review
In my ideal world, results sections would be divided into confirmatory and exploratory sections. Literally. Whether written as RESULTS: CONFIRMATORY and RESULTS: EXPLORATORY, PREREGISTERED RESULTS and EXPLORATORY RESULTS, or some other set of headings, it should be glaringly obvious to the reader which is which. The confirmatory section contains all the stuff in the preregistered plan; the exploratory section contains all the stuff that came after. Right now, I would prefer that details about the exploratory analyses be kept in that exploratory results section to make it clear it came after the fact and to create a narrative of the process of discovery. However, similar Data Analysis: Confirmatory and Data Analysis: Exploratory or Preregistered Data Analysis and Exploratory Data Analysis sections might make it easier to separate the data analytics from the meat of the results.
It’s also important to recognize that exploratory analyses shouldn’t be pooh-poohed. Curious scientists who didn’t find what they expected could systematically explore a number of questions in their data subsequent to its collection and preliminary analysis. However, it is critical that all deviations from the preregistration be reported in full detail and with sufficient justification to convince the skeptical reader that the extra analyses were reasonable to perform. Much of the problem with our existing literature is that we haven’t reported these details and justifications; in my view, we just need to make them explicit to bolster confidence in exploratory findings.
Reviewers should ask about those justifications if they’re not present, but exploratory analyses should be held to essentially the same standards as we hold current results sections. After all, without preregistration, we’re all basically doing exploratory analyses! As time passes, confirmatory analyses will likely hold more weight with reviewers. However, for the next 5-10 years, we should all recall that we came from an exploratory framework, and to an exploratory framework we may return when justified. When considering an article, reviewers should also look carefully at the confirmatory plan (which should be provided as an appendix to a reviewed article if a link that would not compromise reviewer anonymity cannot be provided). If the researchers deviated from their preregistered plan, call them on it and make them run their preregistered analyses! In any case, preregistration’s goals can fail if reviewers don’t exercise due diligence in following up the correspondence between the preregistration and the final report.
The broad strokes of a paper I’m working on right now demonstrate the value of preregistration in correcting mistakes and the ways exploratory results might be described. I was showing a graduate student a dataset I’d collected years before, and there were three primary dependent variables I planned on analyzing. To my chagrin, when the student looked through the data, that student pointed out one of those three variables had never been computed! Had I preregistered my data analytic plan, I would have remembered to compute that variable before conducting all of my analyses. When that variable turned out to be the only one with interesting effects, we also thought of ways to drill down and better understand the conditions under which the effect we found held true. We found these breakdowns were justifiable in the literature but were not part of our original analytic plan. Preregistration would have given us a cleaner way to separate these exploratory analyses from the original confirmatory analyses.
In any future work with the experimental paradigm, we’ll preregister both our original and follow-up analyses so there’s no confusion. Such preregistration also acts as a signal of our growing competence with this paradigm. We’ll be able to give sample sizes based on power analyses from the original work, prespecify criteria for excluding data and methods of dealing with missing values, and more precisely articulate how we will conduct our analyses.
My template
Many people talk about the difficulties of preregistering studies, so I advance a template I’ve been working on. In it, I pose a bunch of questions in a format structured like a journal article to guide researchers through questions I’d like to have answered as I start a study. It’s a work in progress, and I hope to add to it as my own thoughts on what all could be preregistered grow. I also hope we can publish some data analytic SOPs along with our psychophysiological SOPs that we use in the lab (a shortened version of which we have available for participants to view). I hope it’s useful in considering your own work and the way you’d preregister. If this seems too daunting, a simplified version of preregistration that hosts the registration for you can get you started!
As the heat of summer washes over the country, basic home safety becomes a concern. Sometimes, parents become worried that their messy houses might cause Child Protective Services to view them as unfit parents. A new paper from my research collaborators and me has shown that even in homes with genuine safety concerns, the beauty of a home (or lack thereof) isn’t associated with child abuse potential or socioeconomic status. Thus, it doesn’t appear that messy homes come from abusive parenting environments, and unattractive or unsafe homes are just as likely to be found in poorer as in richer neighborhoods.
We found that trained assessors and people inhabiting homes had reasonable agreement about the beauty of the homes, but they didn’t agree on the safety risks present in the home. Part of that may have been because the trained assessors had checklists with over 50 items to check over in each room to assess safety and appearance, whereas the occupants of the homes only provided summary ratings of room safety and appearance on a 1-6 scale. It’s probably easier to give an overall judgment of the attractiveness of a room than to summarize in your mind all the possible safety risks that exist.
Because it’s so hard to notice these safety risks without a detailed guide, the assessment we developed can also be used as a way to point parents to specific things to fix in the home to make their children’s environment safer. We didn’t want people overwhelmed when thinking about what to clean up or make safer – rather, we wanted to give people specific things to address. We’ll be interested to see if people are better able to make their homes cleaner and safer places with the help of that assessment.