
Ever Complained about a Student Survey? Facebook Introduces a Whole New Level of Creepy




Many of you have probably seen this utterly appalling and creepy story about the manipulation of Facebook users' feeds in the service of a psychological study. I'm not sure which horrifies me more: Facebook's willingness to treat its users as guinea pigs, or the highly questionable ethics (to put it mildly) of the academics involved.

http://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/

Given Linden Lab's willingness to play along with the NSA in the past, would we ever see anything like this here?

Perhaps not: Linden Lab doesn't have the money for this kind of research. And the results would be highly questionable in terms of their application to RL.

Still . . . it's becoming evident that, so far as some social media corporations (and their pet academics) are concerned, online social platforms are giant petri dishes.



Syo Emerald wrote:

Reads like someone has again wasted money and time on useless research.

Well, yeah. For two reasons.

First, the "results" tell us nothing we didn't know. Over 250 years ago, the novelist Henry Fielding noted that theatre-goers who were leaving a comedy were much more inclined to be generous to beggars than those who had just viewed a tragedy -- and he wasn't the first to notice this effect by a long shot. We all intuitively know that what we read changes our mood, and that that is reflected in our communications. So, yeah . . . duh.

Second, the methodology of this study seems astonishingly crude. I don't understand everything there is to know about digital text analysis (although I've played with a few software tools a bit), but I do know that, properly done, it involves much more than merely cherry-picking words out of context. Sophisticated text analysis examines words in context, and generally in multiple dimensions, in order to plot any given expression in relation to perhaps dozens of other variables. I haven't read the actual research, but this one seems to have been, as I say, much cruder. And frankly, I'm deeply suspicious of the value of this kind of "sentiment analysis" anyway.
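
To illustrate what I mean by crude, here is a toy sketch in Python of the kind of context-blind word counting at issue. The word lists and example sentences are invented for illustration; they are not the study's actual lexicon or method.

# A toy sketch of naive word-count sentiment scoring -- the sort of
# context-blind approach being criticized above. Word lists are invented.

POSITIVE = {"happy", "great", "love", "wonderful"}   # hypothetical word list
NEGATIVE = {"sad", "awful", "hate", "terrible"}      # hypothetical word list

def crude_sentiment(text):
    """Count positive minus negative words, ignoring context entirely."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(crude_sentiment("i am not happy about this"))        # +1, even though it reads as negative
print(crude_sentiment("what a terrible, wonderful mess"))  # +1 -- "terrible," (with the comma) doesn't even match

Negation, sarcasm, and even punctuation defeat it completely, which is exactly why I'm suspicious of conclusions built on it.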

The point, though, has less to do with the research outcome, which is probably fairly useless, and more to do with the ethics of this kind of research. This one involved not merely deception, but also outright manipulation of its subjects -- all without so much as a notification or request for consent.

Personally, I think that Facebook should be censured for this kind of thing, and the academics involved raked over the coals. This was deeply unethical.



LaskyaClaren wrote:


Syo Emerald wrote:

Reads like someone has again wasted money and time on useless research.

Well, yeah. For two reasons.

First, the "results" tell us nothing we didn't know. Over 250 years ago, the novelist Henry Fielding noted that theatre-goers who were leaving a comedy were much more inclined to be generous to beggars than those who had just viewed a tragedy -- and he wasn't the first to notice this effect by a long shot. We all intuitively know that what we read changes our mood, and that that is reflected in our communications. So, yeah . . . duh.

Second, the methodology of this study seems astonishingly crude. I don't understand everything there is to know about digital text analysis (although I've played with a few software tools a bit), but I do know that, properly done, it involves much more than merely cherry-picking words out of context. Sophisticated text analysis examines words in context, and generally in multiple dimensions, in order to plot any given expression in relation to perhaps dozens of other variables. I haven't read the actual research, but this one seems to have been, as I say, much cruder. And frankly, I'm deeply suspicious of the value of this kind of "sentiment analysis" anyway.

The point, though, has less to do with the research outcome, which is probably fairly useless, and more to do with the ethics of this kind of research.
This one involved not merely deception, but also outright manipulation of its subjects -- all without so much as a notification or request for consent.

Personally, I think that Facebook should be censured for this kind of thing, and the academics involved raked over the coals. This was deeply unethical.

Google has been doing this to your search results for ages. We've discussed it. They know what you like, so they return results you'll like. This is how they keep you using Google. So your world view gets colored by glasses you don't even know you're wearing.

But I suppose this has always been the case. Maddy wears slightly rose-tinted glasses in SL and their metaphorical equivalent in RL. I'm sure I don't know how much of that tint is of my own choosing, and how much is an unwitting response to manipulations by the world around me. What's changing is that those external manipulations are increasingly being narrowcast. In the past they were largely broadcast.

I might choose a favorite news network, religion, political party, etc., which broadcasts messages I like. But they're not particularly aware of me, and so I receive the "average" message. Now, algorithms can watch my online (and increasingly offline) behavior and tailor the messages specifically to me. This allows for far more sophisticated and covert manipulation. When the Democratic party twists the truth in a campaign ad, someone will call them out. If Google decides to give my search results a liberal spin, who's going to catch that?

I can (when I'm in a particularly nefarious mood) imagine a world in which the algorithms herd us like border collies. (Have you ever attended a party at the home of a border collie owner? It's an amusing experience.)
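
As a rough illustration of the narrowcast tailoring described above, here's a toy Python sketch. The profiles, topics, and scoring are invented for the example; this is not anyone's actual ranking algorithm.

# Toy sketch of preference-weighted ranking: the same items, scored against
# a per-user interest profile, come back in different orders for different
# users. Topics and weights are invented for illustration.

from typing import Dict, List

def rank_for_user(items: List[dict], profile: Dict[str, float]) -> List[dict]:
    """Order items by how strongly their topics match this user's inferred interests."""
    def score(item):
        return sum(profile.get(topic, 0.0) for topic in item["topics"])
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "Tax cuts explained", "topics": ["politics-right"]},
    {"title": "Minimum wage debate", "topics": ["politics-left"]},
    {"title": "Local weather", "topics": ["neutral"]},
]

# Same items, two inferred profiles, two different orderings of "reality".
print([i["title"] for i in rank_for_user(items, {"politics-left": 0.9, "neutral": 0.2})])
print([i["title"] for i in rank_for_user(items, {"politics-right": 0.9, "neutral": 0.2})])

The glasses get tinted one user at a time, and neither user ever sees the ordering the other one got.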



Derek Torvalar wrote:

[Image: Borg Queen, disembodied]

 

"We are the FaceBorg. Your biological and technological distinctiveness will be added to our own.
Your culture will adapt to service us. 
Resistance is futile."

Well, if Donna Haraway is right (and Donna Haraway is always right, duh!), then we're all already borgs anyway. Especially here in Second Life.



Madelaine McMasters wrote:


LaskyaClaren wrote:

The point, though, has less to do with the research outcome, which is probably fairly useless, and more to do with the ethics of this kind of research.
This one involved not merely deception, but also outright manipulation of its subjects -- all without so much as a notification or request for consent.


Google has been doing this to your search results for ages. We've discussed it. They know what you like, so they return results you'll like. This is how they keep you using Google. So your world view gets colored by glasses you don't even know you're wearing.

But I suppose this has always been the case. Maddy wears slightly rose-tinted glasses in SL and their metaphorical equivalent in RL. I'm sure I don't know how much of that tint is of my own choosing, and how much is an unwitting response to manipulations by the world around me. What's changing is that those external manipulations are increasingly being narrowcast. In the past they were largely broadcast.

I might choose a favorite news network, religion, political party, etc., which broadcasts messages I like. But they're not particularly aware of me, and so I receive the "average" message. Now, algorithms can watch my online (and increasingly offline) behavior and tailor the messages specifically to me. This allows for far more sophisticated and covert manipulation. When the Democratic party twists the truth in a campaign ad, someone will call them out. If Google decides to give my search results a liberal spin, who's going to catch that?

I can (when I'm in a particularly nefarious mood) imagine a world in which the algorithms herd us like border collies. (Have you ever attended a party at the home of a border collie owner? It's an amusing experience.)

Yeah, the filter bubble is pretty old news, I guess.

What makes this a bit different, though, is that this is no longer just about analyzing and responding to what we write by feeding us filtered information that we will supposedly "like." It's about actually changing what we write.

Now, it's probably true that the underlying, unspoken point of filtering algorithms is to impact upon our state of mind and emotion, but I've never seen an experiment involving quite this degree of social engineering before.
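
For concreteness, here's a loose Python sketch of the kind of intervention that was reported: silently withholding a fraction of posts that score positive (or negative), so what remains skews the feed's emotional tone. The rate and the scoring function below are placeholders, not the study's actual parameters or procedure.

# Loose sketch of sentiment-skewed feed filtering, roughly in the spirit of
# the published description. Rates and the classifier are placeholders.

import random

def filter_feed(posts, sentiment_of, suppress="positive", rate=0.3, seed=None):
    """Silently drop roughly `rate` of posts whose sentiment matches `suppress`."""
    rng = random.Random(seed)
    kept = []
    for post in posts:
        if sentiment_of(post) == suppress and rng.random() < rate:
            continue  # withheld -- the user never sees it, or knows it existed
        kept.append(post)
    return kept

def label(post):
    # Placeholder classifier for the demo, not a real sentiment model.
    if "great" in post:
        return "positive"
    if "awful" in post:
        return "negative"
    return "neutral"

feed = ["had a great day!", "traffic was awful", "lunch was fine"]
print(filter_feed(feed, label, suppress="positive", rate=1.0, seed=1))
# -> ['traffic was awful', 'lunch was fine']

The subject's own feed becomes the experimental apparatus, and there is nothing visible to consent to or opt out of.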

The other aspect that really bothers me about this is the uncritical participation of academics. Academics working hand-in-hand with the corporate world, governments, or the military is also not new news, but we have research ethics guidelines for a reason, and this study completely flouts their spirit, if not the letter. I really want to see these researchers nailed for this one -- and maybe the REB that okayed the study in the first place as well.

 

ETA: Is this participation of these academic researchers a sign of the increasing corporatization of the academy? Insofar as research funding is increasingly coming from corporations, and not from arms-length government agencies, yes, I think it is.



LaskyaClaren wrote:


Madelaine McMasters wrote:


LaskyaClaren wrote:

The point, though, has less to do with the research outcome, which is probably fairly useless, and more to do with the ethics of this kind of research.
This one involved not merely deception, but also outright manipulation of its subjects -- all without so much as a notification or request for consent.


Google has been doing this to your search results for ages. We've discussed it. They know what you like, so they return results you'll like. This is how they keep you using Google. So your world view gets colored by glasses you don't even know you're wearing.

But I suppose this has always been the case. Maddy wears slightly rose-tinted glasses in SL and their metaphorical equivalent in RL. I'm sure I don't know how much of that tint is of my own choosing, and how much is an unwitting response to manipulations by the world around me. What's changing is that those external manipulations are increasingly being narrowcast. In the past they were largely broadcast.

I might choose a favorite news network, religion, political party, etc., which broadcasts messages I like. But they're not particularly aware of me, and so I receive the "average" message. Now, algorithms can watch my online (and increasingly offline) behavior and tailor the messages specifically to me. This allows for far more sophisticated and covert manipulation. When the Democratic party twists the truth in a campaign ad, someone will call them out. If Google decides to give my search results a liberal spin, who's going to catch that?

I can (when I'm in a particularly nefarious mood) imagine a world in which the algorithms herd us like border collies. (Have you ever attended a party at the home of a border collie owner? It's an amusing experience.)

Yeah, the filter bubble is pretty old news, I guess.

What makes this a bit different, though, is that this is no longer just about analyzing and responding to what we write by feeding us filtered information that we will supposedly "like." It's about actually changing what we write.

Now, it's probably true that the underlying, unspoken point of filtering algorithms is to impact upon our state of mind and emotion, but I've never seen an experiment involving quite this degree of social engineering before.

The other aspect that really bothers me about this is the uncritical participation of academics. Academics working hand-in-hand with the corporate world, governments, or the military is also not new news, but we have research ethics guidelines for a reason, and this study completely flouts their spirit, if not the letter. I really want to see these researchers nailed for this one -- and maybe the REB that okayed the study in the first place as well.

 

ETA: Is this participation of these academic researchers a sign of the increasing corporatization of the academy? Insofar as research funding is increasingly coming from corporations, and not from arms-length government agencies, yes, I think it is.

I presume Facebook and Google are doing PhD-level and cringeworthy research internally, 24/7. I wonder how much the academics were bringing to the table. Just because you've not seen this degree of social engineering before doesn't mean it hasn't been happening.

Regarding your ETA: The privatization of research has been happening for a long time. Once the world went online and data could be gathered remotely, the corporate world leaped well ahead of the academic world in research capability.

There are those who want the government to keep its fingers out of our stuff, and given the stereotype of government ineptitude, you can't brush off their argument. But there are times I'm not sure I want things in the hands of the efficient and focused.

;-).



LaskyaClaren wrote:


Madelaine McMasters wrote:


LaskyaClaren wrote:

The point, though, has less to do with the research outcome, which is probably fairly useless, and more to do with the ethics of this kind of research.
This one involved not merely deception, but also outright manipulation of its subjects -- all without so much as a notification or request for consent.


Google has been doing this to your search results for ages. We've discussed it. They know what you like, so they return results you'll like. This is how they keep you using Google. So your world view gets colored by glasses you don't even know you're wearing.

But I suppose this has always been the case. Maddy wears slightly rose-tinted glasses in SL and their metaphorical equivalent in RL. I'm sure I don't know how much of that tint is of my own choosing, and how much is an unwitting response to manipulations by the world around me. What's changing is that those external manipulations are increasingly being narrowcast. In the past they were largely broadcast.

I might choose a favorite news network, religion, political party, etc., which broadcasts messages I like. But they're not particularly aware of me, and so I receive the "average" message. Now, algorithms can watch my online (and increasingly offline) behavior and tailor the messages specifically to me. This allows for far more sophisticated and covert manipulation. When the Democratic party twists the truth in a campaign ad, someone will call them out. If Google decides to give my search results a liberal spin, who's going to catch that?

I can (when I'm in a particularly nefarious mood) imagine a world in which the algorithms herd us like border collies. (Have you ever attended a party at the home of a border collie owner? It's an amusing experience.)

Yeah, the filter bubble is pretty old news, I guess.

What makes this a bit different, though, is that this is no longer just about analyzing and responding to what we write by feeding us filtered information that we will supposedly "like." It's about actually changing what we write.

Now, it's probably true that the underlying, unspoken point of filtering algorithms is to impact upon our state of mind and emotion, but I've never seen an experiment involving quite this degree of social engineering before.

The other aspect that really bothers me about this is the uncritical participation of academics. Academics working hand-in-hand with the corporate world, governments, or the military is also not new news, but we have research ethics guidelines for a reason, and this study completely flouts their spirit, if not the letter. I really want to see these researchers nailed for this one -- and maybe the REB that okayed the study in the first place as well.

 

ETA: Is this participation of these academic researchers a sign of the increasing corporatization of the academy? Insofar as research funding is increasingly coming from corporations, and not from arms-length government agencies, yes, I think it is.

http://www.telegraph.co.uk/technology/facebook/10932534/Facebook-conducted-secret-psychology-experiment-on-users-emotions.html

"The lead scientist, Adam Kramer, said in an interview when he joined Facebook in March 2012 that he took the job because 'Facebook data constitutes the largest field study in the history of the world.' "

 

http://www.pnas.org/content/111/24/8788.full

 

  1. Adam D. I. Kramer (Core Data Science Team, Facebook, Inc., Menlo Park, CA 94025)
  2. Jamie E. Guillory (Center for Tobacco Control Research and Education, University of California, San Francisco, CA 94143)
  3. Jeffrey T. Hancock (Departments of Communication and Information Science, Cornell University, Ithaca, NY 14853)

Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved March 25, 2014 (received for review October 23, 2013)

 

http://www.pnas.org/site/authors/fees.xhtml

 

'You would be surprised at the tripe that passes for meaningful research' Or not. Anyone can pay to have their garbage published, especially online.

 

 

 



Madelaine McMasters wrote:

I presume Facebook and Google are doing PhD-level and cringeworthy research internally, 24/7. I wonder how much the academics were bringing to the table. Just because you've not seen this degree of social engineering before doesn't mean it hasn't been happening.

Regarding your ETA: The privatization of research has been happening for a long time. Once the world went online and data could be gathered remotely, the corporate world leaped well ahead of the academic world in research capability.

There are those who want the government to keep its fingers out of our stuff, and given the stereotype of government ineptitude, you can't brush off their argument. But there are times I'm not sure I want things in the hands of the efficient and focused.

;-).

Even the Editor of Facebook's Mood Study Thought It Was Creepy

 

Indeed, I'm sure this kind of thing isn't new. But that just makes it all the more alarming.

What makes the corporate involvement in this particularly sinister is that the Facebook ToS was taken, for all intents and purposes, to supercede the normal research ethics requirements. 

In other words, where research ethics would normally require informed consent, that was deemed unnecessary because the FB terms of service don't require it. So, corporate "morality" replaces academic ethics. That's just plain wrong.



LaskyaClaren wrote:


Madelaine McMasters wrote:

I presume Facebook and Google are doing PhD-level and cringeworthy research internally, 24/7. I wonder how much the academics were bringing to the table. Just because you've not seen this degree of social engineering before doesn't mean it hasn't been happening.

Regarding your ETA: The privatization of research has been happening for a long time. Once the world went online and data could be gathered remotely, the corporate world leaped well ahead of the academic world in research capability.

There are those who want the government to keep its fingers out of our stuff, and given the stereotype of government ineptitude, you can't brush off their argument. But there are times I'm not sure I want things in the hands of the efficient and focused.

;-).

 

Indeed, I'm sure this kind of thing isn't new. But that just makes it all the more alarming.

What makes the corporate involvement in this particularly sinister is that the Facebook ToS was taken, for all intents and purposes, to supercede the normal research ethics requirements. 

In other words, where research ethics would normally require informed consent, that was deemed unnecessary because the FB terms of service don't require it. So, corporate "morality" replaces academic ethics. That's just plain wrong.

Just a second Lasky.

It is unclear what is meant by Fiske's role as 'editor'. Was she the referee for peer review, or merely an editor checking punctuation? Either way, she has sullied her reputation by becoming associated with this nonsense: "...until I queried the authors and they said their local institutional review board had approved it"

 

Research ethics are not universal. Each publication has varying standards, as does each association. The corporate world, to my knowledge, is not bound by 'research ethics' per se, hence the discussion around whether the research was 'legal'. The lead author was an employee of FaceBorg and not affiliated with an academic or professional institution.

 

The FaceBorg ToS incorporates informed consent when you sign up. The question is one of time: how long does that informed consent apply? In most cases, as was pointed out in regard to the use of deception, participants have an expectation of being informed within a reasonable time frame, at some point during or near the completion of the research. In this case it is so open-ended as to warrant ethical eyebrow-raising.

 

The characterization in one of the articles of the study as having been published by "the prestigious" journal PNAS now calls that journal's reputation into question.

 

Yes, corporate morality rules in this case. Their money, their rules.

 


Facebook users have lower average IQs and worse manners than rats, so there is no need to worry about ethics, morality or even the accuracy of the results generated, which will simply demonstrate that morons allow their feelings to supersede (you may wish to note that this is the correct spelling, Laskya) their higher brain functions, something that those of us who retain our higher brain functions when confronted with differential gender attitudes already knew.

Father "Maybe you are my puppet" Jim

 

 



LaskyaClaren wrote:


Madelaine McMasters wrote:

I presume Facebook and Google are doing PhD-level and cringeworthy research internally, 24/7. I wonder how much the academics were bringing to the table. Just because you've not seen this degree of social engineering before doesn't mean it hasn't been happening.

Regarding your ETA: The privatization of research has been happening for a long time. Once the world went online and data could be gathered remotely, the corporate world leaped well ahead of the academic world in research capability.

There are those who want the government to keep its fingers out of our stuff, and given the stereotype of government ineptitude, you can't brush off their argument. But there are times I'm not sure I want things in the hands of the efficient and focused.

;-).

 

Indeed, I'm sure this kind of thing isn't new. But that just makes it all the more alarming.

What makes the corporate involvement in this particularly sinister is that the Facebook ToS was taken, for all intents and purposes, to supercede the normal research ethics requirements.

In other words, where research ethics would normally require informed consent, that was deemed unnecessary because the FB terms of service don't require it. So, corporate "morality" replaces academic ethics. That's just plain wrong.

I'd say this was more a case of academic involvement in something corporations have been doing forever. Microsoft employs game psychologists to help tune Windows and Office to ping our pleasure centers to encourage increased use of their products, not increase our desire or ability to get work done. I think that's pretty creepy. What might be making these things seem so sinister is the involvement of computer intelligence and the distancing of human involvement.

When persuasion and deception were the handiwork of nefarious individuals, we could always tell ourselves that good people outnumbered bad, and there were only so many places to hide. But when nefarious algorithms can propagate like invisible gremlins?

Research was once part of the product/service development workflow. Now, it's the product/service.



Madelaine McMasters wrote:


LaskyaClaren wrote:


Madelaine McMasters wrote:

I presume Facebook and Google are doing PhD-level and cringeworthy research internally, 24/7. I wonder how much the academics were bringing to the table. Just because you've not seen this degree of social engineering before doesn't mean it hasn't been happening.

Regarding your ETA: The privatization of research has been happening for a long time. Once the world went online and data could be gathered remotely, the corporate world leaped well ahead of the academic world in research capability.

There are those who want the government to keep its fingers out of our stuff, and given the stereotype of government ineptitude, you can't brush off their argument. But there are times I'm not sure I want things in the hands of the efficient and focused.

;-).

 

Indeed, I'm sure this kind of thing isn't new. But that just makes it all the more alarming.

What makes the corporate involvement in this particularly sinister is that the Facebook ToS was taken, for all intents and purposes, to supercede the normal research ethics requirements.

In other words, where research ethics would normally require informed consent, that was deemed unnecessary because the FB terms of service don't require it. So, corporate "morality" replaces academic ethics. That's just plain wrong.

I'd say this was more a case of academic involvement in something corporations have been doing forever. Microsoft employs game psychologists to help tune Windows and Office to ping our pleasure centers to encourage increased use of their products, not increase our desire or ability to get work done. I think that's pretty creepy. What might be making these things seem so sinister is the involvement of computer intelligence and the distancing of human involvement.

When persuasion and deception were the handiwork of nefarious individuals, we could always tell ourselves that good people outnumbered bad, and there were only so many places to hide. But when nefarious algorithms can propagate like invisible gremlins?

Research was once part of the product/service development workflow. Now, it's the product/service.


Speaking of tripe.

Christ! 

ROFL


The information gained from the study is, as has been pointed out already, very old news. Humans get emotionally involved in things they witness. People leaving the parking area after watching a night of dirt track racing are belligerent when two lanes merge to one. Fortunately, the guy who was driving the car I was riding in had wheels that stuck out past his fenders, so he could bang into the car next to us with his tires instead of his sheet metal. The guy driving the other car chose the better part of valor (or maybe of insurance) and we moved ahead.

That's just a small example. I can't believe there aren't tons of studies already that bear out the results. Surely someone's taken a look at men leaving a prize fight or even a live televised prize fight. Presumably women are susceptible to the same sort of triggers (which males have wished since the dawn of time they could figure out).

Either way, if you think Google respects your privacy, you're being silly. If you think Facebook respects your privacy, you're a nitwit.

 

 


just as a fyi bc is topical I think. Microsoft have just changed their Windows Services Agreement. I get a email about it from them. Is effective from 1 July 2014

am quite pleased with the changes. Specially this part:

"Privacy:
As part of our ongoing commitment to respecting your privacy, we won't use your documents, photos or other personal files or what you say in email, chat, video calls or voice mail to target advertising to you."



irihapeti wrote:

just as a fyi bc is topical I think. Microsoft have just changed their Windows Services Agreement. I get a email about it from them. Is effective from 1 July 2014

am quite pleased with the changes. Specially this part:

"Privacy:

As part of our ongoing commitment to respecting your privacy, we won't use your documents, photos or other personal files or what you say in email, chat, video calls or voice mail to target advertising to you."

#dbeyroti

Father "You won't find the word 'gullable' in the OED" Jim


"The U.S. Department of Health & Human Service’s (DHHS) Office for Human Research Protections (OHRP) mandates that scientists adhere to several requirements in order to have “informed consent.” As summarized in part at Social Psychology Network, people conducting studies need to give their test subjects:

  • A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject's participation, a description of the procedures to be followed, and identification of any procedures which are experimental
  • A description of any reasonably foreseeable risks or discomforts to the subject
  • For research involving more than minimal risk, an explanation as to whether there are any treatments or compensation if injury occurs and, if so, what they consist of, or where further information may be obtained (Note: A risk is considered "minimal" when the probability and magnitude of harm or discomfort anticipated in the proposed research are not greater, in and of themselves, than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests)
  • A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled"

 

Facebook May Have Broken The Law



irihapeti wrote:


madjim wrote:


 

 'gullable OED'

i dont think is whats is called. I think is called mswindows OS. the one where they already pwn you. like they not in your stuff. You in theirs

 

 

I think you need to retake Comprehensible Forum Posting 101, Granny Jane.

Father "Or have you caught the Early Late Onset Dementia from Phil?" Jim



Madelaine McMasters wrote:
Microsoft employs game psychologists to help tune Windows and Office to ping our pleasure centers to encourage increased use of their products,


Careful how you use that word "our".

Father "is not in your cult" Jim



Perrie Juran wrote:

"The U.S. Department of Health & Human Service’s (DHHS) Office for Human Research Protections (
) mandates that scientists adhere to several requirements in order to have “informed consent.” As summarized in part at
, people conducting studies need to give their test subjects:
  • A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject's participation, a description of the procedures to be followed, and identification of any procedures which are experimental
  • A description of any reasonably foreseeable risks or discomforts to the subject
  • For research involving more than minimal risk, an explanation as to whether there are any treatments or compensation if injury occurs and, if so, what they consist of, or where further information may be obtained (Note: A risk is considered "minimal" when the probability and magnitude of harm or discomfort anticipated in the proposed research are not greater, in and of themselves, than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests)
  • A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled"

 


As I understand it, they may also have broken rules associated with their research grant from the US military.

The problem is, I can't see much blowback from this, because there aren't many mechanisms in place -- other than purely reputational ones -- to deal with this sort of instance. On the one hand, Facebook probably simply doesn't care; on the other, the scholarly community tends to be self-regulating, and really doesn't have formal procedures for disciplining breaches of ethics.

PNAS has taken a hit to its reputation, as perhaps have some of the individual scholars, and the editor, associated with the piece. Institutions can nail academics who are caught fabricating or plagiarizing research, but I personally don't know of any case where this kind of ethical issue has been pursued post-publication.

