And coming here is like walking off a battlefield into the barracks, I mean that in a good way.
Quote: Probably a better use would be to detect someone's emotions, and use that info to modify the mix of ads fed to them (and maybe their friends)

I agree, because at the end of the day this research has to have a financial gain, otherwise it wouldn't make sense from an economic point of view.
Quote: Probably a better use would be to detect someone's emotions, and use that info to modify the mix of ads fed to them (and maybe their friends)

They don't need to manipulate posts/feeds to do that though. Just monitor them.
Quote: I disagree. Corporations often fund fairly pure research because occasionally something useful (and profitable) will come out of it. It does not mean that every single piece of research they fund has to have a specific profit goal.

Could you please provide a few examples of corporations funding research for the sole purpose of studying something, without using that research to determine their next course of action?
Microsoft Research is a good example.
http://research.microsoft.com/en-us/groups/mldept/default.aspx?0hp=001a
It's just general stuff. Some of it has more immediate utility than others.
Point is, one should not assume that there's an immediate profit motive behind this FB study. It just advances the general knowledge in the field, and this benefits FB. But then they also give the info away to everyone.
Quote: Well, in a way the politeness policy here acts in a similar way. We suppress impolite posts, so people are in general more polite with their own posts. It's politeness contagion.

I'm gonna need to see some data on that. Perhaps remove the politeness policy for a week and let's see if you're right.
Quote: I'm gonna need to see some data on that. Perhaps remove the politeness policy for a week and let's see if you're right.

That would have to be the control week. We would also need a week where being polite is not allowed.
Why did they report the research, and if they didn't report it, would the public have even known it was going on? How much research like this gets carried out on the net without the public's awareness?
External Quote: But Facebook is in a rather unique position

Quote: How so?

They have minute-by-minute access to how they perceive millions (hundreds of millions?) of people are feeling.
Quote: I think there is a gain for Facebook, but not necessarily financial. It helps legitimise Facebook as a form of communication. By showing that emotion contagion can happen online just as in "real life", they are showing online communication is not as facile as critics think.

But can't the emotional contagion be applied to advertising?
Quote: They have minute-by-minute access to how they perceive millions (hundreds of millions?) of people are feeling. Edit: Apparently over a billion.

Good point. Do we know the demographics of this research? Were the chosen participants all residing in the US, or did they use participants from around the world?
External Quote: People who viewed Facebook in English were qualified for selection into the experiment.
...
The experiments took place for 1 wk (January 11–18, 2012). Participants were randomly selected based on their User ID, resulting in a total of ∼155,000 participants per condition who posted at least one status update during the experimental period.
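For what it's worth, the paper's phrase "randomly selected based on their User ID" suggests a deterministic bucketing scheme. A minimal sketch in Python, where the condition names and the hash-based four-way split are my assumptions for illustration, not anything Facebook has published:

```python
# Hypothetical sketch of assigning users to experimental conditions from
# their user ID alone. Condition names and the hash-based split are
# assumptions for illustration, not Facebook's published method.
import hashlib

CONDITIONS = ["positivity_reduced", "positivity_control",
              "negativity_reduced", "negativity_control"]

def assign_condition(user_id: int) -> str:
    # Hash the ID so assignment is stable: the same user always lands
    # in the same condition, with roughly equal-sized buckets.
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]
```

A scheme like this needs no stored assignment table; the ID itself determines the bucket, which matters when you have ~155,000 participants per condition.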
Quote: I think there is a gain for Facebook, but not necessarily financial. It helps legitimise Facebook as a form of communication. By showing that emotion contagion can happen online just as in "real life", they are showing online communication is not as facile as critics think.

Not with this study. I think you would have to use a friendless environment (and longer than a week, obviously).
Quote: Maybe a little bit. But the actual HOW of that is probably a much bigger question.

I see.
Quote: Round the world. Random sample.

So they only selected English-speaking participants, probably for ease in developing the programs and software to decipher emotions. But there's a whole other demographic of those who use FB and don't speak English.
http://www.pnas.org/content/111/24/8788.full
When FB or Google or whoever decide to do research like this, who oversees the research to make sure there isn't an incursion into the participants' privacy and security? Does a company like this just decide to do the research, or do they need to get permission from the government or the FCC?
External Quote: LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook's Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.
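LIWC is essentially word-list counting, which is why no human needs to read the text. A toy sketch of that kind of classification (the word lists below are invented stand-ins; the real LIWC dictionaries are far larger):

```python
# Toy LIWC-style classifier: a post is tagged "positive" or "negative"
# if it contains at least one word from the corresponding list. The
# word lists are invented stand-ins for the real LIWC dictionaries.
POSITIVE = {"happy", "love", "nice", "great", "lovely"}
NEGATIVE = {"sad", "hate", "awful", "hurt"}

def classify(post: str) -> set:
    # Lowercase and strip simple punctuation before matching.
    words = {w.strip(".,!?").lower() for w in post.split()}
    labels = set()
    if words & POSITIVE:
        labels.add("positive")
    if words & NEGATIVE:
        labels.add("negative")
    return labels
```

Note that `classify("Lovely, just lovely.")` comes back "positive" no matter how sarcastic the poster was; nothing in a word-count scheme looks past individual words.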
Quote: not with this study. I think you would have to use a friendless environment (and longer than a week obviously).

But we have no way of knowing the actual relationships, do we? Again, this is due to lack of screening, but it seems they are trying to show FB is a facsimile of real life. However, I do agree with @Jason on the cynical advertising aspect.
When I read a post by a friend I know exactly what their face looks like and what their body language is. So it might as well be in real life.
Quote: It's internal.

Yeah, but I don't recall the TOS saying "we have the right to manipulate the emotional context of what posts you see in your Newsfeed".
Quote: yea but I don't recall the TOS saying "we have the right to manipulate the emotional context of what posts you see in your Newsfeed".

So the software would have to be incredibly complex and expensive to be able to decipher sarcasm from intent. That's why I'm inclined to believe there's more at stake here than simply research.
And not having human eyes read the text is ridiculous. As SF pointed out, many people use sarcasm often, in my neck of the woods anyway: "Lovely" means not lovely, etc. So the study, in my opinion, IS unethical AND worthless.
Quote: That's why I'm inclined to believe there's more at stake here than simply research.

Badly designed studies are pretty common, so that aspect doesn't indicate anything nefarious.
Quote: Then I say we probably saw a large influx of Grumpy Cat. One good way things could get skewed.

Yeah, he cheers me up, so the connection between negativity in the feed and resultant mood would be off (if Grumpy Cat memes increased, that is, though they probably didn't).
Quote: I think people have always been annoyed at not getting updates in their news feed for everyone they follow, with Facebook only giving priority to people you interact with a lot or who already have a large following base, so this furthers the sense of not being in control of something they feel they should be.

You can control who you see in your feed: just click "follow" on anyone you want to be sure to follow.
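The behaviour being described, interaction-weighted feed ranking, can be caricatured in a few lines. The scoring is invented for illustration; Facebook's actual ranking is proprietary:

```python
# Caricature of interaction-weighted feed ranking: posts from friends
# you interact with more float to the top, so low-interaction friends
# effectively disappear from the feed. Scores here are invented.
def rank_feed(posts, interaction_score):
    # posts: list of (friend, text) pairs
    # interaction_score: mapping of friend -> float (higher = closer)
    return sorted(posts,
                  key=lambda p: interaction_score.get(p[0], 0.0),
                  reverse=True)
```

Under a scheme like this, clicking "follow" on someone would amount to boosting their score so their posts never sink below the cut-off.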
Quote: badly designed studies are pretty common. so that aspect doesn't indicate anything nefarious.

NOT nefarious; I was speaking more along the lines of advertising or financial gains. Think about it: if FB could filter out negative comments or ads pertaining to Verizon or AT&T, they could in theory, based on the research, manipulate members into choosing one service over the other.
Quote: You can control who you see in your feed, just click follow on anyone you want to be sure to follow.

Yeah, I do that, but I sometimes think there's still some filtering; I've reloaded and got posts I didn't see the first time.
Quote: I would imagine that when a certain number of friends/likes is reached, it is near impossible to read everything posted due to the overwhelming amount of posts.

Yeah, could be that.
Quote: Yeah I do that, but I sometimes think there's still some filtering, I've reloaded and got posts I didn't see the first time.

I just think that's a consequence of how FB is run and how much traffic they get; it happens to me too on FB and even on this site.
Quote: Most likely if you see posts appear on this site you hadn't seen before, it's because they're a new poster and had to have a post approved for it to show up.

I didn't know that, thanks Pete. Not that it matters, but I've had to regularly leave the thread and click on Forums for it to update alerts, especially from my phone.