I WANT MY FAME TV: VALUES ON TV FOR CHILDREN 1967-2007

It’s an age-old refrain: adults claim that kids today are completely different from when they were growing up, usually for the worse. And that claim often extends to the TV shows kids are exposed to: more sex, less depth, and endless shows about celebrities and reality TV stars.

But hasn’t Hollywood always glamorized being rich and famous? The pursuit of fame is embedded in the fabric of our society; in America, every person, no matter where they come from, is supposed to have the opportunity to become successful and achieve to the fullest extent of their abilities.

So maybe adults are just waxing nostalgic about the past, and things really haven’t changed that drastically. Our study, completed at the Children’s Digital Media Center@LA and just published in Cyberpsychology, suggests otherwise.

We took a look at the top two shows for tweens (ages 9 to 12) in one year of each of the last five decades. Children at these ages are beginning to form their values, as the most important sphere of influence shifts from their families to peers and other forces outside the home.

We found that, out of a list of 16 values, fame was the most important value in tween television shows in 2007. Moreover, in every other decade, from the sixties through the nineties, fame ranked at the bottom of the list! So, in just one decade, from 1997 to 2007, fame went from being the least important value to the most important one. In stark contrast, community feeling, number one or two in every other decade, dropped to number 11 in 2007. The table below shows the rankings for each decade, keyed to the 2007 ranks.


How not to conduct research: Online ethics edition

Note: Everything in the following article and the provided links (at least at the time of posting) is work-safe, though some links may contain explicit language. However, please exercise caution in clicking other links found on the web pages referenced here!

A quick Google search for recent Boston University grads Ogi Ogas and Sai Gaddam reveals buzz about their book, “A Billion Wicked Thoughts,” out today. Ogas and Gaddam analyzed traffic to erotic web sites and millions of web searches to discover, in the words of Amazon’s product description, “a revolutionary and shocking new vision of human desire that overturns conventional thinking.” The New York Post, Huffington Post, and Newsweek, among others, have all picked up the story. After all, who doesn’t love the science of erotica?

It turns out, some individuals are less than thrilled about Ogas and Gaddam’s research.  Allow me to awkwardly transition to the real focus of this post: the ethics of conducting research online.


Research about teen texting from the Society for Research in Child Development

This was first posted on the Society for Research in Child Development’s blog; here is the link if you want to read more news from the conference from other bloggers as well.

SRCD in Montreal, Day 1! One of the first symposia I attended, bright and early this morning at 8 AM, was on a topic I am very interested in: the media habits of adolescents, in particular texting. I already knew that texting is the number two reason adolescents use mobile phones. The number one reason? Not talking on the phone, but checking the time. As this symposium confirmed, texting may be the 21st-century adolescent’s most frequent form of communication.

More school, better test scores?

The results of the recent PISA tests, an international assessment comparing countries around the world in reading, math, and science, showed extraordinary scores for students in Shanghai, China. Meanwhile, 15-year-olds in the US ranked 23rd out of 34 countries!

Why is the US falling so far behind other countries in math and science? Some claim more instructional days are the answer. As Malcolm Gladwell points out in Outliers, China has a long history of working hard, and working longer.

Should we blame the media?

The NY Times, using nearly all anecdotal evidence based on one child, says the media may be responsible for poor grades and lack of focus.

Don Tapscott rebuts this argument and cites a great deal of research.

This is such an interesting example of how even a respected newspaper like the NY Times can fan the flames. I can’t say I agree with everything that Tapscott says. For example, he writes that “Time spent online is not coming at the expense of less time hanging out with friends; it’s less time watching television,” and this is not factual. Indeed, according to Kaiser and Nielsen, children spend more time watching TV than using any other medium; it’s just that they watch it on many different platforms. But he does make some really interesting and important points.

First posted on Parenting in the Digital Age.

What Can Effect Sizes Do for You? A Quick Tutorial for a Deeper Understanding of Psychological Research

I listen to a lot of podcasts in which psychological articles are often discussed (e.g., Stuff You Should Know, Radiolab). As a psychologist, I am often frustrated when a podcast mentions a study’s finding (e.g., having a sister is associated with better self-esteem than having a brother) but then says something like this: “well, I’m kind of suspicious of that finding/we should take that finding with a grain of salt/I kind of question that finding because I don’t think that’s the whole story/I have a brother and my self-esteem is great.”

I get frustrated because a little bit more information about effect sizes would help turn those kinds of statements from ones that undermine and disregard what are often interesting, useful findings into ones that help people understand exactly how useful those findings are.

So, with that in mind, let’s talk about effect sizes. Usually, when you hear someone talk about a study, if they say that something is associated with something or there’s a difference between two things, that means that the study found a statistically significant effect, which most of the time means that the effect they found is different from 0 at at least 95% certainty. With regards to the sisters and self-esteem example, this means that the relationship between having a sister and self-esteem is different from 0 at at least 95% certainty. That’s interesting information, sure, but it doesn’t tell you anything about the strength or size of that relationship. Different from 0 just means that the strength of the relationship could be anywhere from just a tiny bit above 0 to a huge number.

This is where effect sizes come in. Effect sizes give you some information, like you might expect, about the size of the effect, which is much more useful than simply knowing that the effect is different from 0. When you know something about the effect size, you can understand that the effect of having a sister may not be able to explain everything about self-esteem (hence the counter-examples you can think of) but it can explain something, which makes it useful. Understanding effect sizes gives you a sense of just how useful.
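To make that distinction concrete, here is a minimal sketch in Python; the numbers and the use of a one-sample t-test are my own invented illustration, not from any real study. Two simulated datasets can both be “statistically significant” while having very different effect sizes.

```python
# A tiny effect with a huge sample and a large effect with a modest sample
# can both come out "significant"; only the effect size tells them apart.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tiny_effect = rng.normal(0.05, 1.0, size=100_000)  # tiny mean shift, huge n
large_effect = rng.normal(0.80, 1.0, size=100)     # big mean shift, modest n

for name, sample in [("tiny", tiny_effect), ("large", large_effect)]:
    t, p = stats.ttest_1samp(sample, 0.0)   # is the mean different from 0?
    d = sample.mean() / sample.std(ddof=1)  # one-sample standardized effect
    print(f"{name} effect: p = {p:.2g}, d = {d:.2f}")
```

Both p-values clear the significance bar, but the standardized effects differ by more than a factor of ten; that gap is exactly what the p-value alone hides.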

I’m going to go over two commonly-used effect sizes: R-Squared and Cohen’s d.

R-Squared

Let’s stick to the sister/self-esteem example here. Let’s imagine that someone does a study of self-esteem and that all of the self-esteem data from the study is represented in the circle below.

We can also talk about this circle as showing 100% of the variance (how much people in the study differ from each other) in self-esteem; if we could explain 100% of the variance, we could predict the exact self-esteem score of everyone in the study. Let me say right off that no study in psychology ever does this; if you find a study that comes even close, be very, very suspicious. Why? Because there is so much that goes into self-esteem, or any other psychological construct, for that matter, that it would be very difficult to capture it all in one study. For example, part of self-esteem could be explained by having a sister, part of it by how you happened to feel that morning, part of it by your relationship with your parents, part of it by how the experimenter looked at you when you first came in, and so on, and so on.

So when a study reports a statistically significant association between having a sister and self-esteem, that means that having a sister explains more than 0% of the variance in self-esteem. That means that having a sister might explain anywhere from .001% of the variance to 99% of the variance.

What R-Squared tells you is what percentage of the total variance in self-esteem is explained by having a sister; it’s that easy. If a study reports an R-Squared of .5, that means that 50% of the total variance in self-esteem is explained by having a sister (shown in red below).
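If it helps to see this in code, here is a minimal sketch in Python of computing R-Squared for a single predictor; the data are simulated and the variable names are just stand-ins for the running example.

```python
# R-squared for one predictor is the squared Pearson correlation.
import numpy as np

rng = np.random.default_rng(1)
has_sister = rng.integers(0, 2, size=500)  # 0 = no sister, 1 = has a sister
self_esteem = 3.0 + 0.4 * has_sister + rng.normal(0.0, 1.0, size=500)

r = np.corrcoef(has_sister, self_esteem)[0, 1]  # correlation between the two
r_squared = r ** 2
print(f"R-squared = {r_squared:.3f}, i.e., {r_squared:.1%} of the variance "
      f"in self-esteem is explained by having a sister")
```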

Another quick thing to note about R-Squared before I move on is that R-Squared is most commonly used as a measure of how much variance is explained by a set of predictors. For example, if a study says that when trying to explain self-esteem from having a sister, parental relationship, and body image, they found an R-Squared of .62, that means that 62% of the variance in self-esteem was explained by a combination of having a sister, parental relationship, and body image.
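The multi-predictor version works the same way, except R-Squared is computed from a full regression fit. Here is a sketch, again with invented data and made-up coefficients, using ordinary least squares:

```python
# R-squared = 1 - SS_residual / SS_total for a regression with several
# predictors: having a sister, parental relationship, and body image.
import numpy as np

rng = np.random.default_rng(2)
n = 500
has_sister = rng.integers(0, 2, size=n).astype(float)
parental_rel = rng.normal(0.0, 1.0, size=n)
body_image = rng.normal(0.0, 1.0, size=n)
self_esteem = (3.0 + 0.4 * has_sister + 0.6 * parental_rel
               + 0.5 * body_image + rng.normal(0.0, 1.0, size=n))

X = np.column_stack([np.ones(n), has_sister, parental_rel, body_image])
beta, *_ = np.linalg.lstsq(X, self_esteem, rcond=None)  # OLS coefficients
residuals = self_esteem - X @ beta

ss_res = (residuals ** 2).sum()
ss_tot = ((self_esteem - self_esteem.mean()) ** 2).sum()
print(f"R-squared = {1 - ss_res / ss_tot:.3f}")  # variance explained jointly
```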

Cohen’s d

In order to talk about Cohen’s d, we need to think about our example in a slightly different way. Let’s say a study shows that there is a statistically significant difference in self-esteem between two groups: one that has a sister (group a) and one that does not (group b). Remember from before that knowing that there is a statistically significant difference in self-esteem between the two groups means only that the difference between the two groups is different from 0. That could mean that the self-esteem score in group a is 5 and in group b is 4.5, or that the self-esteem score in group a is 5 and in group b is 2.

Cohen’s d is pretty simple. It takes the literal difference between the two groups (group a self-esteem minus group b self-esteem) and then divides it by the standard deviation of the data (a measure of how much the data varies). It’s easiest to explain why the difference is divided by the standard deviation using an example.
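Before the example, here is that formula as a minimal sketch in Python. One caveat: there are several conventions for which standard deviation to use; the version below uses the pooled standard deviation of the two groups, a common choice, and the self-esteem scores are invented for illustration.

```python
# Cohen's d: difference between group means divided by the pooled
# standard deviation of the two groups.
import numpy as np

def cohens_d(group_a, group_b):
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    pooled_var = (((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                  / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical self-esteem scores for the two groups.
with_sister = [5.1, 4.8, 5.4, 5.0, 5.2]
without_sister = [4.5, 4.9, 4.4, 4.7, 4.6]
print(f"d = {cohens_d(with_sister, without_sister):.2f}")
```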

Let’s say we want to know how big the difference is between how much a garden snail weighs on day 1 and day 2 (after eating a big meal on day 1). Let’s say that difference is .02 ounces. Is that big? Is that small? The number itself is very small but does that mean that the difference is actually small? Let’s say the difference between how much an elephant weighs on day 1 and day 2 (after eating a big meal on day 1) is 3.4 lbs. Is that big? Compared to the .02 ounces for the snail, that’s huge! The problem with looking at the raw difference is that we don’t know what an “average” difference in snail weight or elephant weight actually is so we can compare it to the difference we care about. If we knew that elephants actually fluctuate in weight day-to-day even without a big meal by about 3.35 lbs, that 3.4 lbs doesn’t seem like that big of a deal. Similarly, if we know that snails actually fluctuate in weight day-to-day even without a big meal by about .001 ounces, that .02 ounces is a very large difference.

Cohen’s d allows us to understand the size of a difference, even across completely different comparisons. Using Cohen’s d, we can compare the size of the difference between a snail’s weight from one day to the next and an elephant’s weight from one day to the next.
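The snail-versus-elephant arithmetic can be spelled out in a few lines; the day-to-day fluctuation figures stand in for the standard deviation, and all numbers are the invented ones from the example above.

```python
# Dividing each raw difference by the typical day-to-day fluctuation puts
# both differences on the same standardized scale.
snail_diff, snail_fluctuation = 0.02, 0.001      # ounces
elephant_diff, elephant_fluctuation = 3.4, 3.35  # pounds

print(f"snail:    d = {snail_diff / snail_fluctuation:.2f}")        # 20.00
print(f"elephant: d = {elephant_diff / elephant_fluctuation:.2f}")  # 1.01
```

Twenty fluctuation-units for the snail versus about one for the elephant: the “tiny” raw number turns out to be the far bigger effect.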

Conveniently, Cohen gave us some guidelines for interpreting d. He suggested that around .2 is a “small” effect, around .5 is a “medium” effect, and .8 and above is a “large” effect.
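Those benchmarks are easy to encode; the helper below is just a convenience, and the cutoffs should be read as rough guidelines rather than hard rules.

```python
# Map a Cohen's d value to Cohen's rough verbal labels.
def label_effect(d):
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

for d in (0.1, 0.25, 0.55, 1.2):
    print(f"d = {d}: {label_effect(d)}")
```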

A Final Note on Effect Sizes

It’s important to remember that effect sizes need to be taken in context with the rest of the study. If the study is on a topic we already know a lot about, explaining a tiny bit of variance (a small R-Squared) or a small Cohen’s d is not very impressive. However, if the study is on a topic we know nothing about and is breaking new ground, small effect sizes can still be provocative and useful – they can mean that the study gives us some information as a starting point for future research! The best way to be a good consumer of science is to try to use as much information as possible before drawing your conclusions. It’s a little more difficult than just reading a headline, but science is complicated, particularly the science of human behavior, thought, and emotion.

I hope this quick tutorial on effect sizes is helpful. I’ve only gone over two here; there are many other kinds of effect sizes. A more mathematically involved outline of effect sizes is available on Wikipedia (http://en.wikipedia.org/wiki/Effect_size), and a less mathematically involved outline is available here (http://www.leeds.ac.uk/educol/documents/00002182.htm). A Google search will reveal many other resources on effect sizes and their interpretation.

Moniker mumbo jumbo

Social psychology research is known for its counterintuitive, surprising, sometimes even “cute” findings. One of the latest findings in this series is that your initials can affect how successful you are; for instance, students whose names start with C or D get worse grades than students whose names start with A or B. Authors Leif Nelson and Joseph Simmons (2007) describe this effect as a manifestation of implicit egotism, or the tendency to like things that bear some resemblance to you, in their article “Moniker maladies: When names sabotage success.” In this case, because your name is Donald and you like the letter D, you are less averse to getting a bad grade than Amy is, so you don’t try as hard. Similarly, Nelson and Simmons find that baseball players whose names start with K (the letter posted after a strikeout) strike out more than other players, students with C and D names attend lower-ranked law schools than A and B name students, and people whose initials match a consolation prize solve fewer puzzles.

Sound too good to be true? Unfortunately, it is. A scathing analysis of Nelson and Simmons’ results (McCullough & McWilliams, in prep.) reveals that multiple misapplications of statistics and a few just plain odd assumptions actually account for the results. Take the baseball letter K finding, for example. Nelson and Simmons compared the strikeout rate of players whose names start with K against the average of all other letters. When the analysis was re-run using other letters, McCullough and McWilliams discovered that all initials except C, M, R, U, and V were statistically significant. Basically, any letter you test is likely to correlate with more strikeouts than average or fewer strikeouts than average simply because of variance from the mean. But, of course, the original authors didn’t test the letters that weren’t convenient to them.
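To see how easily this trap springs, here is a minimal simulation sketch in Python. All numbers are invented: each letter’s players share a tiny, meaningless quirk, and with large samples nearly every letter tests as “significantly” different from the pooled rest, echoing the re-analysis described above.

```python
# Fake strikeout rates: each letter gets a tiny, arbitrary offset (pure
# noise, not a real "K effect"). With big samples, testing each letter
# against the average of all the others makes almost every letter look
# "significant."
import string
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
letters = list(string.ascii_uppercase)

rates = {L: rng.normal(0.16 + rng.normal(0.0, 0.005), 0.03, size=2000)
         for L in letters}

significant = []
for L in letters:
    rest = np.concatenate([rates[o] for o in letters if o != L])
    _, p = stats.ttest_ind(rates[L], rest, equal_var=False)  # Welch's t-test
    if p < 0.05:
        significant.append(L)

print(f"{len(significant)} of 26 letters 'significant':", significant)
```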

Or the GPA example. The natural hypothesis would be that GPA(A) > GPA(B) > GPA(C) > GPA(D) > GPA(F), but that’s not what Nelson and Simmons tested. Rather, they combined A and B into one group, C and D into another group, and left out F altogether. According to a footnote, they “did not consider F initials to be grade relevant because, compared with A through D, F is much less universally associated with an academic-performance outcome” (p. 1107). I’m not sure how that one got past reviewers, but it seems like if you’re looking for initials associated with grades, F should probably be one of the first letters you try.

And the list goes on. The point here is not that implicit egotism as a whole doesn’t exist; there is a wealth of strong evidence that it does. However, the way in which it has been misapplied in this article is troubling. Do reviewers really sacrifice rigorous examination of the methods and analysis used for a cute and memorable set of conclusions? Let’s hope “Moniker maladies” was just a fluke.