Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth




Author: Stuart Ritchie

Publisher: Metropolitan Books


Publish Date: July 21, 2020

ISBN-10: 1250222699

Pages: 368

File Type: EPub, PDF

Language: English


Book Preface

It is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than by negatives.

Francis Bacon, Novum Organum (1620)

January 31, 2011, was the day the world found out that undergraduate students have psychic powers.

A new scientific paper had hit the headlines: a set of laboratory experiments on over 1,000 people had found evidence for psychic precognition – the ability to see into the future using extrasensory perception. This wasn’t the work of some unknown crackpot: the paper was written by a top psychology professor, Daryl Bem, from the Ivy League’s Cornell University. And it didn’t appear in an obscure outlet – it was published in one of the most highly regarded, mainstream, peer-reviewed psychology journals.1 Science seemed to have given its official approval to a phenomenon that hitherto had been considered completely impossible.

At the time, I was a PhD student, studying psychology at the University of Edinburgh. I dutifully read Bem’s paper. Here’s how one of the experiments worked. Undergraduate students looked at a computer screen, where two images of curtains would appear. They were told that a picture was hidden behind one of the curtains, and that they had to click on the curtain they thought concealed it. Since they had no other information, they could only guess. After they’d chosen, the curtain disappeared and they saw whether they’d been correct. This was repeated thirty-six times, and then the experiment was over. The results were quietly stunning. When a picture of some neutral, boring object like a chair was behind one of the curtains, the outcome was almost perfectly random: the students chose correctly 49.8 per cent of the time, essentially fifty-fifty. However – and here’s where it gets strange – when the hidden picture was pornographic, the students tended to choose the correct curtain slightly more often than chance: 53.1 per cent of the time, to be exact. This met the threshold for ‘statistical significance’. In his paper, Bem suggested that some unconscious, evolved, psychic sexual desire had ever-so-slightly nudged the students towards the erotic picture even before it had appeared on screen.2
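
To get a rough sense of what ‘statistical significance’ means here, the short Python sketch below runs a simple one-sided binomial test: it asks how surprising a 53.1 per cent hit rate would be if the true chance of guessing correctly were exactly 50 per cent. This is only an illustration, not a reconstruction of Bem’s own analysis, and the trial count used is a hypothetical placeholder rather than a figure taken from his paper.

# Illustrative sketch only: a one-sided binomial test of whether a 53.1% hit
# rate is surprising under pure 50/50 guessing. The trial count below is a
# hypothetical placeholder, not a figure from Bem's paper.
from scipy.stats import binomtest

n_trials = 1000                  # hypothetical total number of erotic-picture trials
hits = round(0.531 * n_trials)   # 53.1 per cent correct guesses

result = binomtest(hits, n_trials, p=0.5, alternative="greater")
print(f"{hits}/{n_trials} correct, one-sided p-value = {result.pvalue:.4f}")
# A p-value below the conventional 0.05 cut-off is what researchers typically
# mean by a 'statistically significant' departure from chance.

Note that with a much smaller number of trials the same 53.1 per cent hit rate would not cross that threshold, which is one reason sample size matters so much in studies like these.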

Some of Bem’s other experiments were less explicit, but no less puzzling. In one of them, a list of forty unrelated words appeared on the screen, one at a time. Afterwards came a surprise memory test, where the students had to type in as many of the words as they could remember. At that point, the computer randomly selected twenty of the words and showed them to the students again. Then the experiment ended. Bem reported that, during the memory test, the students were more likely to remember the twenty words they were about to see again, even though they couldn’t have known – except by psychic intuition – which ones they were going to be shown. This would be a bit like studying for an exam, sitting the exam, then studying again afterwards, and that post-exam study somehow winding its way back in time to improve your grade. Unless the laws of physics had suddenly been repealed, time is supposed to run in only one direction; causes are supposed to come before, not after, their effects. But with the publication of Bem’s paper, these bizarre results were now part of the scientific literature.

Crucially, Bem’s experiments were extremely simple, requiring nothing more complicated than a desktop computer. If Bem was right, any researcher could produce evidence for the paranormal just by following his experimental instructions – even a PhD student with next to no resources. That is what I was, so that is exactly what I did. I got in touch with two other psychologists who were also sceptical of the results, Richard Wiseman of the University of Hertfordshire and Chris French of Goldsmiths, University of London. We agreed to re-run Bem’s word-list experiment three times, once at each of our respective universities. After a few weeks of recruiting participants, waiting for them to complete the memory test and then dealing with their looks of bewilderment as we explained afterwards what we’d been looking for, we had the results. They showed … nothing. Our undergraduates weren’t psychic: they recalled the words they’d later be shown again no better than the ones they wouldn’t. Perhaps the laws of physics were safe after all.

We duly wrote up our results and sent the resulting paper off to the same scientific journal that had published Bem’s study, the Journal of Personality and Social Psychology. Almost immediately the door was slammed in our faces. The editor rejected the paper within a few days, explaining to us that they had a policy of never publishing studies that repeated a previous experiment, whether or not those studies found the same results as the original.3

Were we wrong to feel aggrieved? The journal had published a paper that had made some extremely bold claims – claims that, if true, weren’t just interesting to psychologists, but would completely revolutionise science. The results had made their way into the public domain and received significant publicity in the popular media, including an appearance by Bem on the late-night talk show The Colbert Report, where the host coined the memorable phrase ‘time-travelling porn’.4 Yet the editors wouldn’t even consider publishing a replication study that called the findings into question.5

Meanwhile, another case was unfolding that also raised alarming questions about the current state of scientific practice. Science, widely considered one of the world’s most prestigious scientific journals (second only to Nature), had published a paper by Diederik Stapel, a social psychologist at Tilburg University in the Netherlands. The paper, entitled ‘Coping with Chaos’, described several studies performed in the lab and on the street, finding that people showed more prejudice – and endorsed more racial stereotypes – when in a messier or dirtier environment.6 This, and some of Stapel’s dozens of other papers, hit the headlines across the world. ‘Chaos Promotes Stereotyping’, wrote Nature’s news service; ‘Where There’s Rubbish There’s Racism’, alliterated the Sydney Morning Herald.7 The results exemplified a type of social psychology research that produced easy-to-grasp findings with, as Stapel himself wrote, ‘clear policy implications’: in this case, to ‘diagnose environmental disorder early and intervene immediately’.8

The problem was that none of it was real. Some of Stapel’s colleagues became suspicious after they noticed the results of his experiments were a little too perfect. Not only that, but whereas senior academics are normally extremely busy and rely on their students to do such menial tasks as collecting data, Stapel had apparently gone out and collected all the data himself. After the colleagues brought these concerns to the university in September 2011, Stapel was suspended from his professorship. Multiple investigations followed.9

In a confessional autobiography he wrote subsequently, Stapel admitted that instead of collecting the data for his studies, he would sit alone in his office or at his kitchen table late into the night, typing the numbers he required for his imaginary results into a spreadsheet, making them all up from scratch. ‘I did some things that were terrible, maybe even disgusting,’ he wrote. ‘I faked research data and invented studies that had never happened. I worked alone, knowing exactly what I was doing … I didn’t feel anything: no disgust, no shame, no regrets.’10 His scientific fraud was surprisingly elaborate. ‘I invented entire schools where I’d done my research, teachers with whom I’d discussed the experiments, lectures that I’d given, social-studies lessons that I’d contributed to, gifts that I’d handed out as thanks for people’s participation.’11

Stapel described printing off the blank worksheets he’d ostensibly be giving to his participants, showing them to his colleagues and students, announcing he was heading off to run the study … then dumping the sheets into the recycling when nobody was looking. It couldn’t last. The findings of the investigations were clear; he was fired not long after his suspension. Since then, no fewer than fifty-eight of his studies have been retracted – struck off the scientific record – due to their fake data.

The Bem and Stapel cases – where esteemed professors published seemingly impossible results (in Bem’s case) and outright fraudulent ones (in Stapel’s) – sent a jolt through psychology research, and through science more generally. How could prestigious scientific journals have allowed their publication? How many other studies had been published that couldn’t be trusted? It turned out that these cases were perfect examples of much wider problems with the way we do science.

In both cases, the central issue had to do with replication. For a scientific finding to be worth taking seriously, it can’t be something that occurred because of random chance, or a glitch in the equipment, or because the scientist was cheating or dissembling. It has to have really happened. And if it did, then in principle I should be able to go out, run the same experiment, and find broadly the same results. In many ways, that’s the essence of science, and something that sets it apart from other ways of knowing about the world: if it won’t replicate, then it’s hard to describe what you’ve done as scientific at all.

What was concerning, then, wasn’t so much that Bem’s experiments were unreliable or that Stapel’s were a figment of his imagination: some missteps and spurious results will always be with us (and so, alas, will fraudsters).12 What was truly problematic was how the scientific community had handled both situations. Our attempted replication of Bem’s experiment was unceremoniously rejected from the journal that published the original; in the case of Stapel, nobody had ever even tried to replicate his findings. In other words, the community had demonstrated that it was content to take the dramatic claims in these studies at face value, without checking how durable the results really were. And if there are no double-checks on the replicability of results, how do we know they aren’t just flukes or fakes?

Perhaps Bem himself best summed up many scientists’ attitudes to replication, in an interview some years after his infamous study. ‘I’m all for rigor,’ he said, ‘but I don’t have the patience for it … If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made. I used data as a point of persuasion, and I never really worried about, “Will this replicate or will this not?”’13

Worrying about whether results will replicate or not isn’t optional. It’s the basic spirit of science; a spirit that’s supposed to be made manifest in the system of peer review and journal publication, which acts as a bulwark against false findings, mistaken experiments and dodgy data. As this book will show, though, that system is badly broken. Important knowledge, discovered by scientists but not deemed interesting enough to publish, is being altered or hidden, distorting the scientific record and damaging our medicine, technology, educational interventions and government policies. Huge resources, poured into science in the expectation of a useful return, are being wasted on research that’s utterly uninformative. Entirely avoidable errors and slip-ups routinely make it past the Maginot Line of peer review. Books, media reports and our heads are being filled with ‘facts’ that are either incorrect, exaggerated, or drastically misleading. And in the very worst cases, particularly where medical science is concerned, people are dying.

Other books feature scientists taking the fight to a rogues’ gallery of pseudoscientists: creationists, homeopaths, flat-Earthers, astrologers, and their ilk, who misunderstand and abuse science – usually unwittingly, sometimes maliciously, and always irresponsibly.14 This book is different. It reveals a deep corruption within science itself: a corruption that affects the very culture in which research is practised and published. Science, the discipline in which we should find the harshest scepticism, the most pin-sharp rationality and the hardest-headed empiricism, has become home to a dizzying array of incompetence, delusion, lies and self-deception. In the process, the central purpose of science – to find our way ever closer to the truth – is being undermined.


The book begins by showing, in Part I, that doing science involves much more than just running experiments or testing hypotheses. Science is inherently a social thing, where you have to convince other people – other scientists – of what you’ve found. And since science is also a human thing, we know that scientists will be prone to all-too-human failings, such as irrationality, biases, lapses in attention, in-group favouritism and outright cheating to get what they want. To enable scientists to convince one another while trying to transcend the inherent limitations of human nature, science has evolved a system of checks and balances that – in theory – sorts the scientific wheat from the chaff. This process of scrutiny and validation, which leads to the supposed gold standard of publication in a peer-reviewed scientific journal, is described in Chapter 1. But Chapter 2 shows that the process has gone terribly wrong: there are numerous published findings across many different areas of science that can’t be replicated and whose truth is very much in doubt.

Then, in Part II, we’ll ask why. We’ll discover that our publication system, far from neutralising or overriding all these human problems, allows them to leave their mark on the scientific record – and does so precisely because we believe it to be objective and unbiased. A peculiar complacency, a strange arrogance, has taken hold, where the mere existence of the peer-review system seems to have stopped us from recognising its flaws. Peer-reviewed papers are supposedly as near as one can get to an objective, factual account of how the world works. But in our tour through many dozens of those papers, we’ll discover that peer review can’t be relied upon to ensure scientists are honest (Chapter 3), detached (Chapter 4), scrupulous (Chapter 5), or sober (Chapter 6) about their results.

Part III digs deeper into scientific practice. Chapter 7 shows that it’s not just that the system fails to deal with all the kinds of malpractice we’ve discussed. In fact, the way academic research is currently set up incentivises these problems, encouraging researchers to obsess about prestige, fame, funding and reputation at the expense of rigorous, reliable results. Finally, after we’ve diagnosed the problem, Chapter 8 describes a set of often-radical reforms to scientific practice that could help reorient it towards its original purpose: discovering facts about the world.

To make the case about the frailties of scientific research, throughout the book I’ll draw on cautionary tales from a wide variety of scientific fields. Partly because I’m a psychologist, there’ll be a preponderance of examples from that subject.15 My background isn’t the only reason there’s so much psychology in the book: it’s also because after the Bem and Stapel affairs (among many others), psychologists have begun to engage in some intense soul-searching. More than perhaps any other field, we’ve begun to recognise our deep-seated flaws and to develop systematic ways to address them – ways that are beginning to be adopted across many different disciplines of science.

The first step in fixing our broken scientific system is learning to spot, and correct, the mistakes that can lead it astray. And the only way to do this is with more science. Throughout the book, I’ll draw on meta-science: a relatively new kind of scientific research that focuses on scientific research itself. If science is the process of exposing and eliminating errors, meta-science represents that process aimed inwards.

Much can be learned from mistakes. On one of his albums, the musician Todd Rundgren has a spoken-word introduction encouraging the listener to play a little game he calls ‘Sounds of the Studio’. Rundgren describes all the missteps that can be made when recording music: hums, hisses, pops on the microphone whenever someone sings a word with a ‘p’ in it, choppy editing, and so on. He suggests that the listener pick out these mistakes in the songs that follow, and on other records. And just as a better understanding of recording studio slip-ups can give you a new insight into how music is made, learning about how science goes wrong can tell you a lot about the process by which we arrive at our knowledge.

