“Theories are nets cast to catch what we call the ‘world’: to rationalize, to explain, and to master it. We endeavor to make the mesh ever finer and finer.” - Karl Popper
Theory is the story or narrative we apply to better understand the world. Theories are a blend of intuition and empirical data. In popular culture, it is common to hear someone say "Oh, that's just a theory" and wave a dismissive hand. In doing so, they are suggesting that theory is simply philosophy with no empirical data and consequently of little value. Instead, theory is best understood as a blend of philosophy and data. Data and philosophy have a reciprocal relationship: data inform our thoughts, and our thoughts inform how we search for and interpret our data. Data can come from many sources, including personal experience, case studies, qualitative narratives, field experiments, surveys, and laboratory experiments.
Get to Know a Theory
Researchers often feel pressure to come up with novel and exciting new ideas. Contrary to this notion, much of science involves testing theory that, for the most part, has already been laid out for the researcher. Simply knowing and caring about a theory will provide any researcher with enough questions to study for a lifetime. There are likely, for example, many branches of any given theory that are either unexplored or underdeveloped. If you know a theory well, the hypotheses, and your future work, will reveal themselves. Lastly, please do not underestimate the importance of replication; statistical significance is no guarantee that a finding will replicate.
A great book on the topic of theory itself is Thomas Kuhn's "The Structure of Scientific Revolutions." In it, Kuhn describes "normal science" as a form of puzzle solving and shows how strongly a scientific community can shape our data-analytic decisions.
Another book of interest is "Thinking, Fast and Slow," written by the Nobel Prize-winning psychologist Daniel Kahneman, which chronicles his work with Amos Tversky. In it, Kahneman discusses how our natural thinking styles influence the way we collect and interpret data.
Theory as Motivation
Working with a specific theory can be exciting and motivational. It’s fun to believe in something, identify with it, and attempt to change it by conducting research and contributing to the field. It also places an emphasis on knowing the truth rather than simply looking for, and likely finding, associations that require post-hoc explanations. The random search for findings based on underdeveloped thought has been referred to as dustbowl empiricism.
As an undergraduate and graduate student, I (Jamie) spent most of my years walking around campus with the work of Harry Stack Sullivan in my back pocket. I appreciated his focus on close relationships and his human approach to schizophrenia.
I became more excited about the theory as I explored a book by Lorna Smith Benjamin that applied Sullivan's theory to the treatment of personality disorders. Her book made it easy for me to see the many hypotheses that could be tested about the development, maintenance, and treatment of personality disorders through an interpersonal lens. Even today, there are many unexplored, underdeveloped, and unreplicated branches of her theory that could be tested by interested researchers.
The Components of a Theory
The phenomena that make up a theory are called hypothetical constructs. These are the concepts central to our theory, about which we make predictions and posit causal associations. Each construct can have an infinite number of specific definitions, called operational definitions. The distinction between hypothetical constructs and their measurement was discussed in a seminal work by MacCorquodale and Meehl (linked below). For example, depression is a construct that can be operationally defined using DSM criteria, a self-report measure, and so on. Our hypotheses make specific predictions about the relationships between two or more constructs. General rules for developing a good hypothesis are outlined in the Wampold article cited below.
As an undergraduate, the work of Meehl and his ilk was likely kept from your view. Instead, you were introduced to the more familiar topics of reliability and validity. Reliability speaks to the stability or internal consistency of each operational definition. Validity speaks to how well operational definitions relate to one another when they are expected, or not expected, to be associated. You will revisit these topics in your research methods and assessment/psychometrics courses.
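Reliability also has a concrete computational side. Below is a minimal sketch of one common internal-consistency index, Cronbach's alpha. Python and numpy are my own choices (neither is named in this chapter), and the item responses are entirely made up.

```python
# Minimal sketch of Cronbach's alpha, a common internal-consistency index.
# Assumes numpy; the responses below are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-item self-report scale answered by 6 respondents
responses = np.array([
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [4, 5, 4, 4, 5],
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```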
[MacCorquodale, K., & Meehl, P. E. (1948). On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55, 95-107. Reprinted 1991 in P. E. Meehl, Selected philosophical and methodological papers (pp. 249-264; C. A. Anderson & K. Gunderson, Eds.). Minneapolis: University of Minnesota Press.](http://meehl.umn.edu/sites/g/files/pua1696/f/013hypotheticalcconstructs.pdf)
Wampold, B. E., Davis, B., & Good, R. H. (1990). Hypothesis validity of clinical research. Journal of Consulting and Clinical Psychology, 58, 360-367.
To run statistics on your constructs, you need to turn them into numbers. That's where scales of measurement come in. The distinction between nominal, ordinal, interval, and ratio scales is something you learned quite early in your education, but the importance of these scales is rarely made clear. They matter because they shape how you summarize, visualize, model (run statistics on), and generally think about your data. For example, you cannot calculate a mean for gender or ethnicity, and for ordinal data you might consider non-parametric statistics (e.g., Spearman's rho).
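As one concrete illustration of letting the scale drive the statistic, here is a minimal sketch in Python using numpy and scipy (tools of my choosing, not mentioned above; the data are hypothetical). Pearson's r treats its inputs as interval-level, while Spearman's rho uses only rank order and is the safer choice for ordinal data.

```python
# Minimal sketch: correlation choices for ordinal data.
# Assumes numpy and scipy; the rankings and win totals are hypothetical.
import numpy as np
from scipy.stats import pearsonr, spearmanr

preseason_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8])   # ordinal: rank order only
season_wins = np.array([12, 11, 7, 9, 8, 5, 6, 2])    # ratio: true zero

r, r_p = pearsonr(preseason_rank, season_wins)        # assumes equal intervals
rho, rho_p = spearmanr(preseason_rank, season_wins)   # uses ranks only

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
```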
At the same time, the scale of measurement is inherently limiting. It forces you to think about a problem or concept in a particular way. A good exercise is to take the variables in your study and consider how you might assess the same constructs on a different scale. For example, depression might be measured with a self-report measure, which is likely on the interval scale. Consider the other ways depression might be measured (e.g., number of pleasant events attended, hours slept, frequency of negative thoughts).
| Scale | Definition | Example | Abstract Number System |
|---|---|---|---|
| Nominal | A number is used like a name | 0 = Male; 1 = Female | Identity |
| Ordinal | Numbers represent rank order, but the difference between ranks 1 and 2 need not equal the difference between ranks 3 and 4 | College football rankings | Magnitude |
| Interval | Same as ordinal, but with equal differences between adjacent numbers | Most self-report measures in psychology, such as personality and intelligence scales | Equal Intervals |
| Ratio | Same as interval, but with a true zero | Distance ridden on a bicycle, number of beers in one night | True Zero |
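To connect the table to everyday analysis, the sketch below (assuming Python with pandas; every variable and value is hypothetical) shows how the scale of measurement determines which summary statistic is meaningful.

```python
# Minimal sketch: the scale of measurement determines the meaningful summary.
# Assumes pandas; all variables below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "female", "female", "male", "female"],  # nominal
    "finish_rank": [1, 2, 3, 4, 5],                            # ordinal
    "depression_score": [48, 62, 55, 70, 51],                  # interval
    "km_cycled": [0.0, 12.5, 3.2, 25.0, 8.1],                  # ratio
})

print(df["gender"].mode()[0])         # nominal: mode (a mean would be meaningless)
print(df["finish_rank"].median())     # ordinal: median respects rank order
print(df["depression_score"].mean())  # interval: means and SDs are defensible
print(df["km_cycled"].mean())         # ratio: means and meaningful ratios
```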
Arbitrary Metrics
…it is possible that evidence-based treatments with effects demonstrated on arbitrary metrics do not actually help people, that is, reduce their symptoms and improve their functioning. - Alan Kazdin
You should consider and review the concept of arbitrary metrics as discussed by Alan Kazdin in the 2006 article cited below. He raises the point that many of our measurements have no real, practical, or "non-arbitrary" meaning. For example, most people cannot readily interpret a 5-point change on a self-report measure of depression; such a measure is considered arbitrary because it has no real-world, practical meaning. Examples of non-arbitrary metrics include fewer days of unemployment, having a job or not, number of social events attended, number of days out of bed, and so on. It is much easier for most people to appreciate non-arbitrary metrics because of their practical significance.
Generally speaking, the more ways of measuring an outcome the better, and it would be best to include both arbitrary and non-arbitrary metrics in your study design. By doing so, you will have a better chance of capturing the breadth of the nomological network and improving the ecological validity of your study.
Kazdin, A. E. (2006). Arbitrary Metrics: Implications for Identifying Evidence-Based Treatments. American Psychologist, 61(1), 42-49. doi:10.1037/0003-066X.61.1.42
Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396-403.