Imagine a conversation between two people: Preston Power, the CEO of a prestigious corporation, and Alan Awkward, the assistant to the assistant to the regional manager. It wouldn’t take very long to pick up on the difference in social status between these two individuals even if you had no information about who they were. Body language, the tendency to interrupt, volume of speech, and a host of other nonverbal behaviors automatically cue us in to who is the alpha dog in this scenario. While these behaviors are often viewed as personal choices that we can control, Fei Wang and colleagues at the Chinese Academy of Sciences suggest that poor Mr. Awkward may not be at fault for his plight—his neurons may be to blame.
The field of psychology had its modern origin just over 100 years ago, and yet interest in the field has grown rapidly. Researchers with broad and varied interests have expanded the field, and as a result there are many different subdisciplines. Highlighted here are several key areas of psychology.
Biological psychologists apply biological principles to the study of mental processes and behavior. The field examines the basic biological processes that underlie normal and abnormal behavior at the level of nerves, neurotransmitters, and brain circuitry.
In clinical psychology, science, theory, and clinical knowledge are combined to improve psychological distress or dysfunction, and to promote personal well-being. Clinical and counseling psychology are similar subdisciplines.
Cognitive psychology is the scientific study of how people perceive, remember, think, speak, and solve problems, by exploring internal mental processes in the brain.
Emotions are a central component of the human experience. They facilitate social interactions, allow us to both appreciate and create powerful works of art and literature, and guide us in achieving personal goals. These are only a few of the myriad ways emotions play an important role in our lives. In a letter to his brother Theo, Vincent Van Gogh (1889) advised him not to forget that “emotions are the captains of our lives, and we obey them without realizing it.” Given the source, we might not be inclined to trust such an insight on affect from someone whose life was plagued by severe emotional distress, but common experience forces us to acknowledge a certain amount of truth to his words: there are times in each of our lives when we have fallen under the sway of an intense emotional experience without even realizing it (at least at first). Perhaps we were propelled to an angry outburst at a reckless driver, or could not hold back the tears while watching a sad movie. Indeed, much research has investigated the ways in which emotions influence our cognitive abilities, such as attention, memory, and decision-making, often without our conscious awareness (Dolan, 2002).
The feud between religion and science can be compared to the relationship between the Montagues and the Capulets: hateful at times, dismissive often, and bridged rarely, often with tragic results for those who try.
A recent article in the journal Science (see Can Science and Religion Get Along?) discussed a controversial panel that aimed to bring together players from both sides in the hopes of starting some sort of dialogue. There were cries of foul from both sides before the panel even took place, but to me it seems that conversation in general is good, as long as both sides come to the table with the right intention: to listen, not just to talk.
Over the last couple of decades, learning and memory researchers have become increasingly interested in bringing scientific findings out of the lab and into the classroom, where they can be implemented into teaching methods to produce more efficient and effective learning. In a nation mired in an educational crisis, there’s never been a better time or place to bridge the gap between modern scientific knowledge and outdated teaching techniques.
One of the greatest insights in the last 20 years that has serious potential to improve classroom teaching has been Robert Bjork’s concept of desirable difficulties (Bjork, 1994; McDaniel & Butler, in press), which suggests that introducing certain difficulties into the learning process can greatly improve long-term retention of the learned material. In psychology studies thus far, these difficulties have generally been modifications to commonly used methods that add some sort of additional hurdle during the learning or studying process. Some notable examples:
Here’s a question that’s been on my mind lately:
Whose job is it to make sure that the non-scientist consumers of science get it right?
I’ve had a few discussions with various psychologists about this lately and they frequently bring up two answers to this question:
(1) It’s the consumer’s job. I heard from a few of the psychologists I spoke to that they are frustrated that non-scientists who read science and then use it (e.g., your dad when he listens to the evening news and then tells you about it) don’t take enough time and put in enough effort to understand it accurately. This leads to misrepresentations of the results and, even worse, a misuse of the findings. The summary of this argument is that “people are too lazy/uninformed to do anything more than read headlines and then run with it.” The implication there is that non-scientists should be more motivated to go the extra mile and try to understand what scientists are doing.
I listen to a lot of podcasts in which various psychological articles are often discussed (e.g., Stuff You Should Know, Radiolab, etc.). As a psychologist, I am often frustrated when a podcast mentions a study’s finding (e.g., having a sister is associated with better self-esteem than having a brother) but then says something like this: “well, I’m kind of suspicious of that finding/we should take that finding with a grain of salt/I kind of question that finding because I don’t think that’s the whole story/I have a brother and my self-esteem is great.”
I get frustrated because a little more information about effect sizes would help turn those kinds of statements, which undermine and disregard what are often interesting, useful findings, into the kinds of statements that help people understand exactly how useful those findings are.
So, with that in mind, let’s talk about effect sizes. Usually, when you hear someone talk about a study, if they say that something is associated with something, or that there’s a difference between two things, that means that the study found a statistically significant effect, which most of the time means that the effect they found is different from 0 with at least 95% confidence. With regard to the sisters and self-esteem example, this means that the relationship between having a sister and self-esteem is different from 0 with at least 95% confidence. That’s interesting information, sure, but it doesn’t tell you anything about the strength or size of that relationship. “Different from 0” just means that the strength of the relationship could be anywhere from a tiny bit above 0 to a huge number.
This is where effect sizes come in. Effect sizes give you some information, like you might expect, about the size of the effect, which is much more useful than simply knowing that the effect is different from 0. When you know something about the effect size, you can understand that the effect of having a sister may not be able to explain everything about self-esteem (hence the counter-examples you can think of) but it can explain something, which makes it useful. Understanding effect sizes gives you a sense of just how useful.
I’m going to go over two commonly-used effect sizes: R-Squared and Cohen’s d.
Let’s stick to the sister/self-esteem example here. Let’s imagine that someone does a study of self-esteem and that all of the self-esteem data from the study is represented in the circle below.
We can also talk about this circle as showing 100% of the variance (how much people in the study differ from each other) in self-esteem; if we could explain 100% of the variance, we could predict the exact self-esteem of everyone in the study. Let me say right off that no study in psychology ever does this; if you find a study that comes even close, be very, very suspicious. Why? Because there is so much that goes into self-esteem, or any other psychological construct, for that matter, that it would be very difficult to capture it all in one study. For example, part of self-esteem could be explained by having a sister, part of it by how you happened to feel that morning, part of it by your relationship with your parents, part of it by how the experimenter looked at you when you first came in, and so on.
So when a study reports a statistically significant association between having a sister and self-esteem, that means that having a sister explains more than 0% of the variance in self-esteem. That means that having a sister might explain anywhere from .001% of the variance to 99% of the variance.
What R-Squared tells you is what percentage of the total variance in self-esteem is explained by having a sister; it’s that easy. If a study reports an R-Squared of .5, that means that 50% of the total variance in self-esteem is explained by having a sister (shown in red below).
Another quick thing to note about R-Squared before I move on is that R-Squared is most commonly used as a measure of how much variance is explained by a set of predictors. For example, if a study says that when trying to explain self-esteem from having a sister, parental relationship, and body image, they found an R-Squared of .62, that means that 62% of the variance in self-esteem was explained by a combination of having a sister, parental relationship, and body image.
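To make this concrete, here’s a minimal sketch in Python of how R-Squared could be computed for the sister example. The data are entirely made up for illustration; the point is just that, for a single predictor, R-Squared is the squared correlation between the predictor and the outcome, i.e., the share of variance explained.

```python
# Hypothetical data (invented for illustration): 1 = has a sister, 0 = does not,
# paired with self-esteem scores on some 1-10 scale.
has_sister = [1, 1, 1, 0, 0, 0, 1, 0]
self_esteem = [7.0, 8.0, 6.5, 5.0, 6.0, 4.5, 7.5, 5.5]

def mean(xs):
    return sum(xs) / len(xs)

def r_squared(x, y):
    """Proportion of variance in y explained by a linear fit on x
    (the squared Pearson correlation, for a single predictor)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    r = cov / (var_x * var_y) ** 0.5  # Pearson correlation
    return r ** 2

print(round(r_squared(has_sister, self_esteem), 2))  # ≈ 0.76 with these toy numbers
```

With these invented numbers, having a sister would explain about 76% of the variance in self-esteem, which, as noted above, is far larger than anything you should expect from a real psychology study. For several predictors at once, the R-Squared would come from a full multiple regression rather than a single correlation.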
In order to talk about Cohen’s d, we need to think about our example in a slightly different way. Let’s say a study shows that there is a statistically significant difference in self-esteem between two groups: one that has a sister (group a) and one that does not (group b). Remember from before that knowing that there is a statistically significant difference in self-esteem between the two groups means only that the difference between the two groups is different from 0. That could mean that the self-esteem score in group a is 5 and in group b is 4.5, or that the self-esteem score in group a is 5 and in group b is 2.
Cohen’s d is pretty simple. It takes the literal difference between the two groups (group a self-esteem – group b self-esteem) and then divides it by the pooled standard deviation of the data (a measure of how much the data vary). It’s easiest to explain why the difference is divided by the standard deviation using an example.
Let’s say we want to know how big the difference is between how much a garden snail weighs on day 1 and day 2 (after eating a big meal on day 1). Let’s say that difference is .02 ounces. Is that big? Is that small? The number itself is very small, but does that mean that the difference is actually small? Let’s say the difference between how much an elephant weighs on day 1 and day 2 (after eating a big meal on day 1) is 3.4 lbs. Is that big? Compared to the .02 ounces for the snail, that’s huge! The problem with looking at the raw difference is that we don’t know what a typical difference in snail weight or elephant weight actually is, so we have nothing to compare our difference to. If we knew that elephants actually fluctuate in weight day-to-day, even without a big meal, by about 3.35 lbs, that 3.4 lbs doesn’t seem like that big of a deal. Similarly, if we know that snails actually fluctuate in weight day-to-day, even without a big meal, by about .001 ounces, that .02 ounces is a very large difference.
Cohen’s d allows us to understand the size of a difference, even across completely different comparisons. Using Cohen’s d, we can compare the size of the change in a snail’s weight from one day to the next with the change in an elephant’s weight from one day to the next.
Conveniently, Cohen gave us some guidelines for interpreting d. He said that around .2 is a “small” effect, around .5 is a “medium” effect, and .8 and above is a “large” effect.
A Final Note on Effect Sizes
It’s important to remember that effect sizes need to be taken in context with the rest of the study. If the study is on a topic we already know a lot about, explaining a tiny bit of variance (a small R-Squared) or a small Cohen’s d is not very impressive. However, if the study is on a topic we know nothing about and is breaking new ground, small effect sizes can still be provocative and useful – they can mean that the study gives us some information as a starting point for future research! The best way to be a good consumer of science is to try to use as much information as possible before drawing your conclusions. It’s a little more difficult than just reading a headline, but science is complicated, particularly the science of human behavior, thought, and emotion.
I hope this quick tutorial on effect sizes is helpful. I’ve only gone over two here – there are many other kinds of effect sizes. A more mathematically involved outline of effect sizes is available on Wikipedia (http://en.wikipedia.org/wiki/Effect_size), and a less mathematically involved outline is available here (http://www.leeds.ac.uk/educol/documents/00002182.htm). A Google search will reveal many other resources on effect sizes and their interpretation.