What's So Bad About Facebook Editing Our Feeds?
The internet has been ablaze the past few days with commentary on Facebook’s non-consensual “mood manipulation” research. You can read the paper based upon the study here.
The study has been critiqued by many, including Violet Blue, who, in Facebook: Unethical, Untrustworthy, and now Downright Harmful, writes about the choice to tamper with 689,003 people's emotional well-being and also questions the tools used to interfere with users. Earlier, Jaron Lanier wrote a New York Times op-ed, Should Facebook Manipulate Users?, in which he noted:
“The manipulation of emotion is no small thing. An estimated 60 percent of suicides are preceded by a mood disorder. Even mild depression has been shown to increase the risk of heart failure by 5 percent; moderate to severe depression increases it by 40 percent.”
He goes on to argue for full consent for research of this nature.
Kate Crawford posted a lovely piece at The Atlantic, The Test We Can — and Should — Run on Facebook, giving a nod to the sociological research that preceded this study and challenging Facebook to actually experiment with consent in a new way.
“Perhaps we could nudge that process with Silicon Valley’s preferred tool: an experiment. But this time, we request an experiment to run on Facebook and similar platforms. Rather than assuming Terms of Service are equivalent to informed consent, platforms should offer opt-in settings where users can choose to join experimental panels. If they don’t opt in, they aren’t forced to participate.”
And here I stand.
I am a psychologist with a strong interest in researching digital culture and social media, but I am also an ethicist. My Ethics Code, which the lead researcher on this paper, Dr. Adam Kramer, would also be expected to follow, makes clear what rules apply to informed consent and research with human subjects. That said, sometimes there are grey areas.
The APA Ethics Code does allow psychologists to dispense with informed consent, under the following criteria (emphasis mine):
8.05 Dispensing with Informed Consent for Research
Psychologists may dispense with informed consent only (1) where research *would not reasonably be assumed to create distress or harm* and involves (a) the study of normal educational practices, curricula, or classroom management methods conducted in educational settings; (b) only anonymous questionnaires, *naturalistic observations* or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability or reputation, and confidentiality is protected; or (c) the study of factors related to job or organization effectiveness conducted in organizational settings for which there is no risk to participants’ employability, and confidentiality is protected or (2) where otherwise permitted by law or federal or institutional regulations.
It is hard for me to categorize the manipulation of what shows up on a person’s news feed as “naturalistic” data collection. And I still think the salient issue is whether this research could reasonably be assumed to create distress or harm. For some people, that is a complex question.
How harmful is a depressed mood? It depends. Perhaps it is not that big of a deal to a person who logs off of Facebook, goes to a party, and doesn’t check in regularly. But I’d guess there are 689,003 answers to this question, one for each person who was unwittingly part of this study.
I also think there are other less obvious consequences to modifying the news feeds of users.
For example, some segment of the Facebook population was already aware of and managing “mood contagion.” For quite a while, it hasn’t been uncommon for me or those in my social circle to unfollow people who seemed to continuously complain, rant, post annoying messages, or just post things that left us feeling less happy. In 2008, I first read Tantek Çelik’s wiki post on communication protocols, including reasons he might not follow you back on social media. While his rules may have seemed excessive, some of them made good sense, including not following users who produced a high proportion of “negative reinforcement.”
On some Twitter apps, you can temporarily “mute” someone, but on Facebook, it’s easier to just unsubscribe from someone’s feed and no longer keep up with them. The result is that these people drop off your radar unless you make a point of visiting their profile. In my friend networks and in psychotherapy sessions with my clients, it has not been uncommon to hear someone say, “I stopped following ‘x’ because her/his news feed was just too much of a downer.”
We have become a generation used to curating our content.
Therefore, if Facebook suppressed positive postings — even for a week — it’s not a stretch to imagine some users spending part of that week tuning out people whose posts were showing up on their Walls because they were just not contributing to a positive outlook.
These likely would have been “looser tie” connections rather than close friends, since a looser tie is easier to ignore or drop. If it were a close friend, many of us might call or text and say, “What is going on with all your sad posts on FB this week?” Dropping someone may seem unlikely to happen in just a week, but for a heavy Facebook user, a lot happens in one day on Facebook. A week can feel like a long time.
For those who were already on the fence about a particular poster, maybe this week clinched it: “This is a person whose postings don’t contribute to my overall sense of happiness.” In some cases, this may have had a deeper impact on the development of these relationships, both on and offline. It may have led to some people losing friends, support, or the sense that people they cared about were still interested in their lives.
Most of us who participate in various online communities know that friendships and online connections often grow (or wither) based upon the material posted. People filter email from certain senders and group email lists out of their inboxes. People tune out those who post things that are less rewarding to read.
Thus, I believe the ripple effects of this type of personal manipulation potentially reach further than the “mood manipulation” that has been the focus so far. This experiment may also have resulted in relationship manipulation that has yet to be analyzed. If the Facebook study led any user to detach from a friendship, or to lose a friend because that friend found the user’s postings overwhelmingly sad or negative, then we could revisit the concept of “minimal harm” in a deeper and more comprehensive way.
This is another reason why formal debriefing would have been useful in this study, even if informed consent was waived. Debriefed just a week after the suppression of Wall postings, people could have revisited the decisions they made then, rather than, two years later, trying to remember what happened in that period and track which relationships were affected.
APA Ethics Code on Debriefing:
8.08 Debriefing
(a) Psychologists provide a prompt opportunity for participants to obtain appropriate information about the nature, results, and conclusions of the research, and they take reasonable steps to correct any misconceptions that participants may have of which the psychologists are aware.
(b) If scientific or humane values justify delaying or withholding this information, psychologists take reasonable measures to reduce the risk of harm.
(c) When psychologists become aware that research procedures have harmed a participant, they take reasonable steps to minimize the harm.
In summary, Facebook’s unwitting deliberate manipulation of user data and large scale data collection without consent poses a number of important ethical issues. Assessing harm can also be a tricky exercise. I would like to support Kate Crawford’s call for more clear language that allows those users who want to participate in research to opt-in, rather than being oblivious guinea pigs.
Comments are open. Please feel free to share your thoughts on this post.
For more reading, check the following links:
Everything We Know About Facebook’s Secret Mood Manipulation Experiment
Facebook Flunks its Apology – A data scientist looks at the language of the Facebook apology (touché!).
The journal that published Facebook’s psychological study is raising a red flag about it
APA Ethics Code Addresses When Informed Consent From Research Participants Is Necessary
Privacy Group Complains to FTC About Facebook Emotion Study
Jennifer Shewmaker
July 4, 2014 @ 7:30 am
Thanks for this well thought out post. I’ll be sharing!
Michael Fenichel
July 5, 2014 @ 3:25 pm
As I’ve been following, writing, and lecturing about Facebook “psychology” for years now, I’d agree with most of the points you highlighted. However, to me there is a huge and glaring error in your summary: You state: “In summary, Facebook’s *unwitting manipulation of user data*….” [emphasis mine] to which I’d respond that the whole issue and ‘harm’ and media coverage is due precisely to the fact that this ‘study’ was NOT “unwitting,” not at all! That’s the heart of the issue in large part, to me and many others, the fact that their “research” and “marketing” are so intertwined with all their legal and linguistic masking of what they do – intentionally!
drkkolmes
July 6, 2014 @ 11:42 pm
Thank you, Michael, for your comment, and for also pointing out what was an unintentional error in my post. I meant that the participants were unwitting, certainly not Facebook. I have edited the post with a strikethrough.