"Facebook tinkered with users’ feeds for a massive psychology experiment"

And coming here is like walking off a battlefield into the barracks. I mean that in a good way.
 
And coming here is like walking off a battlefield into the barracks. I mean that in a good way.

Well, in a way the politeness policy here acts in a similar way. We suppress impolite posts, so people are in general more polite with their own posts. It's politeness contagion. :)
 
Probably a better use would be to detect someone's emotions, and use that info to modify the mix of ads fed to them (and maybe their friends).
I agree, because at the end of the day this research has to have a financial gain; otherwise it wouldn't make sense from an economic point of view.
 
I agree, because at the end of the day this research has to have a financial gain; otherwise it wouldn't make sense from an economic point of view.

I disagree. Corporations often fund fairly pure research because occasionally something useful (and profitable) will come out of it. That does not mean that every single piece of research they fund has to have a specific profit goal.
 
They don't need to manipulate posts/feeds to do that though. Just monitor them.

But the effect shown in the research could still be used here. If they can get a measure of the upcoming posts in a user's feed, then they can predict a user's emotions (in the aggregate, over a large population) before they have them.
 
I disagree. Corporations often fund fairly pure research because occasionally something useful (and profitable) will come out of it. That does not mean that every single piece of research they fund has to have a specific profit goal.
Could you please provide a few examples of corporations funding research for the sole purpose of studying something, without using that research to determine their next course of action?
 
Fearbook (coined here first) controls what you see so they can get you to click on (buy something) from the advertising they show you. Pretty much what Alex Jones and other fear mongers do. But then no one is forced to join Fearbook or listen to Alex Jones.
 
Could you please provide a few examples of corporations funding research for the sole purpose of studying something, without using that research to determine their next course of action?

Microsoft Research is a good example.
http://research.microsoft.com/en-us/groups/mldept/default.aspx?0hp=001a

It's just general stuff. Some of it has more immediate utility than others.

Point is, one should not assume that there's an immediate profit motive behind this FB study. It just advances the general knowledge in the field, which benefits FB. But then they also give the info away to everyone.
 
Microsoft Research is a good example.
http://research.microsoft.com/en-us/groups/mldept/default.aspx?0hp=001a

It's just general stuff. Some of it has more immediate utility than others.

Point is, one should not assume that there's an immediate profit motive behind this FB study. It just advances the general knowledge in the field, which benefits FB. But then they also give the info away to everyone.
Why did they report the research, and if they hadn't reported it, would the public have even known it was going on? How much research like this gets carried out on the net without the public's awareness?
 
Well, in a way the politeness policy here acts in a similar way. We suppress impolite posts, so people are in general more polite with their own posts. It's politeness contagion.
I'm gonna need to see some data on that. Perhaps remove the politeness policy for a week and let's see if you're right.
 
I'm gonna need to see some data on that. Perhaps remove the politeness policy for a week and let's see if you're right.
That would have to be the control week. We would also need a week where being polite is not allowed.
 
Why did they report the research, and if they hadn't reported it, would the public have even known it was going on? How much research like this gets carried out on the net without the public's awareness?

It's academic research. Businesses probably do lots of similar things with A/B-type testing. But Facebook is in a rather unique position.
 
But Facebook is in a rather unique position
How so?
They have minute-by-minute access to how millions (hundreds of millions?) of people appear to be feeling.

Edit:
Apparently over a billion.
 
I agree, because at the end of the day this research has to have a financial gain; otherwise it wouldn't make sense from an economic point of view.

I think there is a gain for Facebook, but not necessarily financial. It helps legitimise Facebook as a form of communication. By showing that emotion contagion can happen online just as in "real life", they are showing online communication is not as facile as critics think.
 
I think there is a gain for Facebook, but not necessarily financial. It helps legitimise Facebook as a form of communication. By showing that emotion contagion can happen online just as in "real life", they are showing online communication is not as facile as critics think.
But can't the emotional contagion be applied to advertising?
 
They have minute-by-minute access to how millions (hundreds of millions?) of people appear to be feeling.

Edit:
Apparently over a billion.
Good point. Do we know the demographics of this research? Were the chosen participants all residing in the US, or did they use participants from around the world?
 
Good point. Do we know the demographics of this research? Were the chosen participants all residing in the US, or did they use participants from around the world?

Round the world. Random sample.

http://www.pnas.org/content/111/24/8788.full
People who viewed Facebook in English were qualified for selection into the experiment.
...
The experiments took place for 1 wk (January 11–18, 2012). Participants were randomly selected based on their User ID, resulting in a total of ∼155,000 participants per condition who posted at least one status update during the experimental period.
 
I think there is a gain for Facebook, but not necessarily financial. It helps legitimise Facebook as a form of communication. By showing that emotion contagion can happen online just as in "real life", they are showing online communication is not as facile as critics think.
Not with this study. I think you would have to use a friendless environment (and longer than a week, obviously).

When I read a post by a friend I know exactly what their face looks like and what their body language is. So it might as well be in real life.
 
Maybe a little bit. But the actual HOW of that is probably a much bigger question.
I see
Round the world. Random sample.

http://www.pnas.org/content/111/24/8788.full
People who viewed Facebook in English were qualified for selection into the experiment.
...
The experiments took place for 1 wk (January 11–18, 2012). Participants were randomly selected based on their User ID, resulting in a total of ∼155,000 participants per condition who posted at least one status update during the experimental period.
So they only selected English-speaking participants, probably for ease in developing the programs and software to decipher emotions. But there's a whole other demographic of those who use FB and don't speak English.

When FB or Google or whoever decide to do research like this, who oversees the research to make sure there isn't an incursion into the participants' privacy and security? Does a company like this just decide to do the research, or do they need to get permission from the government or the FCC?
 
When FB or Google or whoever decide to do research like this, who oversees the research to make sure there isn't an incursion into the participants' privacy and security? Does a company like this just decide to do the research, or do they need to get permission from the government or the FCC?

It's internal.
LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.
 

Not with this study. I think you would have to use a friendless environment (and longer than a week, obviously).

When I read a post by a friend I know exactly what their face looks like and what their body language is. So it might as well be in real life.
But we have no way of knowing the actual relationships, do we? Again this is due to lack of screening, but it seems they are trying to show FB is a facsimile of real life. However, I do agree with @Jason on the cynical advertising aspect.
 
It's internal.
LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.
Yeah, but I don't recall the TOS saying "we have the right to manipulate the emotional context of what posts you see in your News Feed".

And not having human eyes read the text is ridiculous; as SF pointed out, many people use sarcasm often, in my neck of the woods anyway. "Lovely" means not lovely, etc. So the study, in my opinion, IS unethical AND worthless.
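For context on why sarcasm trips it up: LIWC-style analysis is essentially automated word counting against fixed dictionaries of emotion words, with no model of tone. A minimal sketch of the idea (the real LIWC dictionaries are proprietary; these word lists are invented for illustration):

```python
# Toy sketch of dictionary-based sentiment counting, the general
# technique behind LIWC. Word lists here are made up for the example.
POSITIVE = {"happy", "great", "lovely", "wonderful", "fun"}
NEGATIVE = {"sad", "angry", "awful", "terrible", "hate"}

def classify(post):
    words = [w.strip(".,!?\"'").lower() for w in post.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify("What a lovely day!"))              # positive
print(classify("Traffic was awful this morning"))  # negative
print(classify("Lovely, another Monday"))          # also "positive",
                                                   # even if it's sarcasm
```

The last line is the problem being raised: a word counter scores a sarcastic "Lovely" exactly the same as a sincere one.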
 
Yeah, but I don't recall the TOS saying "we have the right to manipulate the emotional context of what posts you see in your News Feed".

And not having human eyes read the text is ridiculous; as SF pointed out, many people use sarcasm often, in my neck of the woods anyway. "Lovely" means not lovely, etc. So the study, in my opinion, IS unethical AND worthless.
So the software would have to be incredibly complex and expensive to be able to distinguish sarcasm from intent. That's why I'm inclined to believe there's more at stake here than simply research.
 
Doesn't the TOS give them the right to control what you see? And the emotional responses to what is seen aren't FB making people feel that way. There is a good chance people would have seen those posts and felt the way they did had the study not happened.
 
Then I say we probably saw a large influx of Grumpy Cat. One good way things could get skewed.
Yeah, he cheers me up, so the connection between negativity in the feed and the resultant mood would be off (if Grumpy Cat memes increased, that is, though they probably didn't).

I think people have always been annoyed at not getting updates in their news feed for everyone they follow, with Facebook only giving priority to people you interact with a lot or who already have a large following, so this furthers the sense of not being in control of something they feel they should be.

Increasing the chances for people to experience negative emotional reactions is a little iffy. Negative emotions are induced in studies, but those would tend to be emotionally secure people who consent to be manipulated and studied; there are a lot of emotionally unstable people out there who probably shouldn't be nudged.
But then again that is negativity that was 'there' in the news feed anyway, just not lightened by contrasts.
Ideally though a grown adult should have developed enough mental health to cope with negativity.
I wonder if the study only used adults. There may be an ethical question for manipulating mopey adolescents. Making teenagers depressed is like shooting fish in a barrel.

Would the studied status updates only have included written words, or article shares, music videos, etc?
I rarely post them, but when I do they are usually quotes or music clips, not personal insights into how I'm feeling.
 
I think people have always been annoyed at not getting updates in their news feed for everyone they follow, with Facebook only giving priority to people you interact with a lot or who already have a large following, so this furthers the sense of not being in control of something they feel they should be.
You can control who you see in your feed; just click Follow on anyone you want to be sure to follow.
 
I would imagine that when a certain number of friends/likes is reached, it is near impossible to read everything posted due to the overwhelming amount of posts.
 
Badly designed studies are pretty common, so that aspect doesn't indicate anything nefarious.
NOT nefarious; I was speaking more along the lines of advertising or financial gains. Think about it: if FB could filter out negative comments or ads pertaining to Verizon or AT&T, they could, in theory, based on the research, manipulate members into choosing one service over the other.
 
You can control who you see in your feed; just click Follow on anyone you want to be sure to follow.
Yeah, I do that, but I sometimes think there's still some filtering; I've reloaded and got posts I didn't see the first time.

I would imagine that when a certain number of friends/likes is reached, it is near impossible to read everything posted due to the overwhelming amount of posts.
Yeah could be that.
 
Yeah, I do that, but I sometimes think there's still some filtering; I've reloaded and got posts I didn't see the first time.
I just think that's a consequence of how FB is run and how much traffic they get; it happens to me too on FB, and even on this site.
 
Most likely if you see posts appear on this site you hadn't seen before, it's because they're a new poster and had to have a post approved for it to show up.
 
Most likely if you see posts appear on this site you hadn't seen before, it's because they're a new poster and had to have a post approved for it to show up.
I didn't know that, thanks Pete. Not that it matters, but I've had to regularly leave the thread and click on Forums for it to update alerts, especially from my phone.
 
It's subtle mind control! Aaagghhhhxxx.

No but really, it's an interesting experiment.
My question is, how can we find out if we were one of the test dummies?
The link seems to only mention that people were selected at random (by user ID), but how was that determined? An algorithm?
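The paper only says participants were "randomly selected based on their User ID" and doesn't describe the mechanism. A common industry approach (this is an assumption, not something the paper specifies, and the function names are hypothetical) is to hash the user ID into buckets, which is deterministic but looks random:

```python
import hashlib

def bucket(user_id, experiment, num_buckets=100):
    # Hash the ID together with an experiment name so different
    # experiments get independent, non-overlapping assignments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def in_experiment(user_id, experiment, percent=5):
    # Deterministic: the same user always gets the same answer.
    return bucket(user_id, experiment) < percent
```

If it worked anything like this, a given user was always in or out for the whole week, but without knowing the hashing scheme and experiment name there is no way to check from the outside, which is exactly the question.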
 