It was the headlines that most upset Amy Orben. In 2017, when she was a graduate student in experimental psychology at the University of Oxford researching how social media influences communication, alarming articles began to appear. Giving a child a smartphone was like giving a kid cocaine, claimed one. Smartphones might have destroyed a generation, said another. Orben didn’t think such extreme statements were warranted. At one point, she stayed up all night reanalyzing data from a paper linking increases in depression and suicide to screen time. “I figured out that tweaks to the data analysis caused major changes to the study results,” Orben says. “The effects were actually tiny.”
She published several blog posts, some with her Oxford colleague Andrew K. Przybylski, saying so. “Great claims require great evidence,” she wrote in one. “Yet this kind of evidence does not exist.” Then Orben decided to make her point scientifically and changed the focus of her work. With Przybylski, she set out to rigorously analyze the large-scale data sets that are widely used in studies of social media.
The two researchers were not the only ones who were concerned. A few years ago, Jeff Hancock, a psychologist who runs the Social Media Lab at Stanford University, set an alert to let him know when his research was cited by other scientists in their papers. As the notifications piled up in his in-box, he was perplexed. A report on the ways that Facebook made people more anxious would be followed by one about how social media enhanced social capital. “What is going on with all these conflicting ideas?” Hancock wondered. How could they all be citing his work? He decided to seek clarity and embarked on the largest meta-analysis to date of the effects of social media on psychological well-being. Ultimately he included 226 papers and data on more than 275,000 people.