Research fraud (plagiarism, data faking, etc.)

btw, doesn’t look like I was linking the Data Colada updates

so here they are:

The post on the trial:
8 May 2024

So then Gino’s lawyers have to prove that we knew we were lying or that we were reckless. That poses something of a challenge because, amongst other things, we believe every single thing we wrote on the topic. Furthermore, the hundreds of hours of painstaking analysis that went into those blog posts doesn’t scream out “reckless”. So at this point, you’re probably wondering what the Gino attorneys are arguing. We’ll be honest here and say that, although we were at the hearing and listened to every word, we are not really sure. There was a moment when Gino’s lawyer tried to make a point by taking one of our statements out of context and simply misquoting another. But our lawyer took care of that. They make no legitimate claim that we knew that we were lying. And that makes sense. Because we weren’t.

Most recent post on the altered data:

from 9 July:

As you may know, Harvard professor Francesca Gino is suing us for defamation after (1) we alerted Harvard to evidence of fraud in four studies that she co-authored, (2) Harvard investigated and placed her on administrative leave, and (3) we summarized the evidence in four blog posts.

As part of their investigation, Harvard wrote a 1,288-page report describing what they found. Because of the lawsuit, that report was made public: .pdf. And because it was made public, we now know what the investigators say was in the “original” dataset for one of the four studies: Study 3A of Gino, Kouchaki, and Casciaro (2020) [1]. By simply comparing the Original and Posted versions of the dataset, we can see exactly how the data were altered to produce the published result. In this post, we will show that:

  • We correctly deduced how the data were altered.
  • Gino’s prevailing explanation for the alterations is extremely implausible.

I am jumping over stuff in the middle:

Gino’s hypothesis says that Participant 4 said that the event was “authentic” but truly rated the event as “inauthentic”. Our hypothesis, confirmed by Harvard, says that Participant 4 said that the event was “authentic” and truly rated the event as “authentic”.

We can look at this more completely. We had asked some workers to rate the positivity/negativity of the words that participants wrote about the networking event. This allows us to see the relationship between those word ratings and the moral impurity ratings. If these word ratings are valid, then participants who gave higher ratings of moral impurity should have written words that were rated to be more negative. And, indeed, in the Original dataset, this is what you see:

When people gave negative ratings they said negative things. When people gave positive ratings they said positive things. Of course [10].

Now if Gino’s hypothesis were true – if the Posted data are real and the Original data were altered – then we should see a more sensible relationship between the words and the ratings within the Posted dataset than within the Original dataset. We can test this by looking at the words/ratings relationship in the 104 altered observations. Is that relationship more sensible in the Posted data than in the Original data?

No. Opposite. In the Original data, the relationship is completely sensible: more negative ratings = less positive words. In the Posted data, it is . . . backwards: more negative ratings = more positive words.

To believe that the Original data are fake and the Posted data are real, you’d have to believe that the sensible data are fake and the backwards data are real. That is a difficult thing to believe.

An easier thing to believe is that when the fraudster changed the ratings, she forgot to change the words, and so, when she altered them, the relationship between the words and the ratings got all cattywampus [11].
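The check Data Colada describes boils down to comparing the sign of the correlation between word positivity and impurity ratings in the two versions of the data. Here is a minimal sketch of that comparison; the column names and all the numbers below are invented for illustration, not taken from the actual datasets:

```python
# Hypothetical sketch of the words/ratings consistency check.
# All data here are toy numbers invented for illustration.
import statistics

def pearson(xs, ys):
    # Pearson correlation between two equal-length lists.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rated positivity of the words each participant wrote (higher = more positive).
word_positivity = [1.0, 2.0, 3.0, 4.0, 5.0]

# "Sensible" pattern: higher moral-impurity ratings go with more negative words.
impurity_original = [6.5, 5.0, 4.0, 2.5, 1.0]

# "Backwards" pattern: higher impurity ratings go with more positive words.
impurity_posted = [1.5, 2.0, 3.5, 5.0, 6.0]

r_original = pearson(word_positivity, impurity_original)  # negative
r_posted = pearson(word_positivity, impurity_posted)      # positive
```

If the Original data are real, the correlation should be negative (as `r_original` is in this toy setup); a positive correlation, as in the Posted pattern, is the "backwards" relationship the post describes.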

Conclusions

We were right about how the data were altered, Gino’s prevailing explanation for the alterations does not make sense, and yet we are the defendants in this case.


And a different piece of falsified Qualtrics data, supposedly

I’ve used Qualtrics for various survey projects, and I have no idea how to change the data on their servers. Qualtrics could change it themselves, duh, but why would they? All they’ve got going for them is their credibility.

two podcast eps on research fraud – lots in Alzheimer’s research

Initial substack post I saw:

Part 1:

Part 2:


ZZ Biotech received $30 million from the NIH to explore drug candidate 3K3A-APC for stroke recovery, based on Zlokovic’s work. The drug didn’t work, and there were six deaths in the active group compared to one in the placebo group.

Yikes!


https://www.wsj.com/tech/ai/mit-says-it-no-longer-stands-behind-students-ai-research-paper-11434092

By Justin Lahart
May 16, 2025 9:46 am ET

MIT Says It No Longer Stands Behind Student’s AI Research Paper

The university says it has no confidence in a widely circulated paper by an economics graduate student

The Massachusetts Institute of Technology said Friday it can no longer stand behind a widely circulated paper on artificial intelligence written by a doctoral student in its economics program.

The paper said that the introduction of an AI tool in a materials-science lab led to gains in new discoveries, but had more ambiguous effects on the scientists who used it.

MIT didn’t name the student in its statement Friday, but it did name the paper. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets.

More stuff

In a press release, MIT said it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.”

The university said the author of the paper is no longer at MIT.

Toner-Rodgers didn’t respond to requests for comment.

The paper said that after an AI tool was implemented at a large materials-science lab, researchers discovered significantly more materials—a result that suggested that, in certain settings, AI could substantially improve worker productivity. But it also showed that most of the productivity gains went to scientists who were already highly effective, and that overall the AI tool made scientists less happy about their work.

The paper was championed by MIT economists Daron Acemoglu, who won the 2024 economics Nobel, and David Autor. The two said they were approached in January by a computer scientist with experience in materials science who questioned how the technology worked, and how a lab that he wasn’t aware of had experienced gains in innovation. Unable to resolve those concerns, they brought it to the attention of MIT, which began conducting a review.

MIT didn’t give details about what it believes is wrong with the paper. It cited “student privacy laws and MIT policy.”

Toner-Rodgers presented the paper at a National Bureau of Economic Research conference in November. The paper is on the preprint site arXiv, where researchers post papers prior to peer review.

MIT said it has asked for the paper to be removed from arXiv. The paper was submitted to the Quarterly Journal of Economics, a leading economics journal, but was still being evaluated. MIT has asked that it be withdrawn from consideration.

“More than just embarrassing, it’s heartbreaking,” Autor said.

Prior article:
https://www.wsj.com/economy/will-ai-help-hurt-workers-income-productivity-5928a389?mod=article_inline
29 Dec 2024

Will AI Help or Hurt Workers? One 26-Year-Old Found an Unexpected Answer.

New research shows AI made some workers more productive—but less happy.

Daron Acemoglu, the Massachusetts Institute of Technology professor who recently won the Nobel Prize in economics, worries that artificial intelligence will worsen income inequality and not do all that much for productivity. His friend and colleague David Autor is more hopeful, believing that AI could do just the opposite.

New research from Aidan Toner-Rodgers, an MIT doctoral student, challenges both Acemoglu’s pessimism and Autor’s optimism. Both professors are raving about it.

“It’s fantastic,” said Acemoglu.

“I was floored,” said Autor.

Neither Autor nor Acemoglu is changing his mind on AI. But the research by Toner-Rodgers, 26 years old, is a step toward figuring out what AI might do to the workforce, by examining AI’s effect in the real world.


That’s all I’m going to copy from the fakey fake piece.


The first time they’ve revoked tenure since the 1940s?

That did surprise me. Far more rare than I thought.

From fabricated research to paid authorships and citations, organized scientific fraud is on the rise, according to a new Northwestern University study.

When people think about scientific fraud, they might remember news reports of retracted papers, falsified data or plagiarism. These reports typically center around the isolated actions of one individual, who takes shortcuts to get ahead in an increasingly competitive industry. But Amaral and his team uncovered a widespread underground network operating within the shadows and outside of the public’s awareness.

“These networks are essentially criminal organizations, acting together to fake the process of science,” Amaral said. “Millions of dollars are involved in these processes.”

To conduct the study, the researchers analyzed extensive datasets of retracted publications, editorial records and instances of image duplication.

Most of the data came from major aggregators of scientific literature, including Web of Science (WoS), Elsevier’s Scopus, National Library of Medicine’s PubMed/MEDLINE and OpenAlex, which includes data from Microsoft Academic Graph, Crossref, ORCID, Unpaywall and other institutional repositories.

Richardson and his colleagues also collected lists of de-indexed journals, which are scholarly journals that have been removed from databases for failing to meet certain quality or ethical standards.

https://www.pnas.org/doi/10.1073/pnas.2420092122

The entities enabling scientific fraud at scale are large, resilient, and growing rapidly

Reese A. K. Richardson, Spencer S. Hong, Jennifer A. Byrne, +1, and Luís A. Nunes Amaral

Edited by Daniel Acuña, University of Colorado Boulder, Boulder, CO; received September 30, 2024; accepted March 18, 2025 by Editorial Board Member Mark Granovetter

August 4, 2025 · PNAS 122 (32) e2420092122
https://doi.org/10.1073/pnas.2420092122

Significance

Numerous recent scientific and journalistic investigations demonstrate that systematic scientific fraud is a growing threat to the scientific enterprise. In large measure this has been attributed to organizations known as research paper mills. We uncover footprints of activities connected to scientific fraud that extend beyond the production of fake papers to brokerage roles in a widespread network of editors and authors who cooperate to achieve the publication of scientific papers that escape traditional peer-review standards. Our analysis reveals insights into how such organizations are structured and how they operate.

Abstract

Science is characterized by collaboration and cooperation, but also by uncertainty, competition, and inequality. While there has always been some concern that these pressures may compel some to defect from the scientific research ethos—i.e., fail to make genuine contributions to the production of knowledge or to the training of an expert workforce—the focus has largely been on the actions of lone individuals. Recently, however, reports of coordinated scientific fraud activities have increased. Some suggest that the ease of communication provided by the internet and open-access publishing have created the conditions for the emergence of entities—paper mills (i.e., sellers of mass-produced low quality and fabricated research), brokers (i.e., conduits between producers and publishers of fraudulent research), predatory journals, who do not conduct any quality controls on submissions—that facilitate systematic scientific fraud. Here, we demonstrate through case studies that i) individuals have cooperated to publish papers that were eventually retracted in a number of journals, ii) brokers have enabled publication in targeted journals at scale, and iii), within a field of science, not all subfields are equally targeted for scientific fraud. Our results reveal some of the strategies that enable the entities promoting scientific fraud to evade interventions. Our final analysis suggests that this ability to evade interventions is enabling the number of fraudulent publications to grow at a rate far outpacing that of legitimate science.

An interesting comment in the comments section:

“It is absurd to expect medical students to do extracurricular stuff like research, let alone making publication (semi-)mandatory for residency application. MD program and PhD program are separated for some reason. If PhD students are not required to do rotations in hospitals, I don’t see why MD students are required to do research.”

…MD students aren’t required to do research

I mean, my sis wasn’t (for pediatrics). And has no academic publications.

Perhaps it depends on the specialty.

Probably more on the school’s department, which wants the research done.


There is a dual-degree program called the MD-PhD. Our university here offers it. One gets trained in both clinical practice and research. Some end up as faculty members in a medical school. It takes at least 6 years after the bachelor’s degree.

Our son has two friends who did this program.

An interesting series of comments on this general topic…

Maybe for residency?

there was no research component of my sister’s residency.

What country are these people in? Or specialties?

I can imagine specialties that don’t have enough patients or are cutting-edge enough that research would be an important component.

But my sister was just doing regular practice in a children’s hospital. She may have supported somebody else’s research, potentially, but it would have been as part of treating patients.

Again, if this is just a MD/PhD program, it makes sense.

Dr. Glaucomflecken is a character played by an ophthalmologist who I think lives in Oregon, so I’m pretty sure these are all US-focused posts.

ETA: this is residency-focused, so post-MD/PhD


PhDs have to do research. That’s the way it goes.

The Black Market for Fake Science Is Growing Faster Than Legitimate Research, Study Warns | WIRED?

This is fun.

https://www.science.org/content/article/how-easy-it-fudge-your-scientific-rank-meet-larry-world-s-most-cited-cat


Should I scan actuarial papers to see if Larry got a mention?