Data voids in search results can lead down rabbit holes that bolster belief in fake news
"Do your own research" is a popular tagline among fringe groups and ideological extremists. Noted conspiracy theorist Milton William Cooper first ushered this rallying cry into the mainstream in the 1990s through his radio show, where he discussed schemes involving things such as the assassination of President John F. Kennedy, an Illuminati cabal and alien life. Cooper died in 2001, but his legacy lives on. Radio host Alex Jones's fans, anti-vaccine activists and disciples of QAnon's convoluted alternate reality often implore skeptics to do their own research.
Yet more mainstream groups have also offered this advice. Digital literacy advocates and those seeking to combat online misinformation sometimes spread the idea that when you are faced with a piece of news that seems odd or out of sync with reality, the best course of action is to investigate it yourself. For instance, in 2021 the Office of the U.S. Surgeon General put out a guide recommending that those wondering about a health claim's legitimacy should "type the claim into a search engine to see if it has been verified by a credible source." Library and research guides often suggest that people "Google it!" or use other search engines to vet information.
Unfortunately, this time science seems to be on the conspiracy theorists' side. Encouraging Internet users to rely on search engines to verify questionable online articles can make them more prone to believing false or misleading information, according to a study published today in Nature. The new research quantitatively demonstrates how search results, especially those prompted by queries that contain keywords from misleading articles, can easily lead people down digital rabbit holes and backfire. Guidance to Google a topic is insufficient if people aren't considering what they search for and the factors that determine the results, the study suggests.
In five different experiments conducted between late 2019 and 2022, the researchers asked thousands of online participants to categorize timely news articles as true, false or unclear. A subset of the participants received prompting to use a search engine before categorizing the articles, whereas a control group didn't. At the same time, six professional fact-checkers evaluated the articles to provide definitive designations. Across the different tests, the nonprofessional respondents were about 20 percent more likely to rate false or misleading information as true after they were encouraged to search online. This pattern held even for very salient, heavily reported news topics such as the COVID pandemic and even after months had elapsed between an article's initial publication and the time of the participants' search (when presumably more fact-checks would be available online).
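To get a feel for what a figure like that summarizes, here is a minimal sketch of the comparison behind it. The counts and group sizes are invented for illustration; this is not the study's data or code.

```python
# Illustrative only: hypothetical counts, not the study's data or analysis code.
# Compares how often a false article is rated "true" with and without the
# prompt to search online, and the relative increase between the two groups.

control_true = 50      # hypothetical: control-group participants who rated a false article "true"
control_total = 250    # hypothetical group size
search_true = 60       # hypothetical: search-prompted participants who rated it "true"
search_total = 250

control_rate = control_true / control_total               # 0.20
search_rate = search_true / search_total                  # 0.24
relative_increase = (search_rate - control_rate) / control_rate

print(f"control group rated it true: {control_rate:.0%}")
print(f"search group rated it true:  {search_rate:.0%}")
print(f"relative increase:           {relative_increase:.0%}")  # 20% with these made-up numbers
```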
For one experiment, the study authors also tracked participants' search terms and the links provided on the first page of the results of a Google query. They found that more than a third of respondents were exposed to misinformation when they searched for more detail on misleading or false articles. And often respondents' search terms contributed to those troubling results: Participants used the headline or URL of a misleading article in about one in 10 verification attempts. In those cases, misinformation beyond the original article showed up in results more than half the time.
For example, one of the misleading articles used in the study was entitled "U.S. faces engineered famine as COVID lockdowns and vax mandates could lead to widespread hunger, unrest this winter." When participants included "engineered famine" (a unique term specifically used by low-quality news sources) in their fact-check searches, 63 percent of these queries prompted unreliable results. In comparison, none of the search queries that excluded the word "engineered" returned misinformation.
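The mechanism is easy to see in miniature. The sketch below, with an invented query log and an invented list of unreliable domains (nothing here comes from the study itself), tallies how often queries that reuse the article's distinctive phrase surface low-quality sources compared with queries that drop it.

```python
# Hypothetical sketch of the kind of query analysis described above; the domain
# list and query log are invented for illustration, not the researchers' data.

UNRELIABLE_DOMAINS = {"examplefringe.news", "lowquality.example"}  # placeholder list

query_log = [
    # (search query, domains on the first results page) -- made-up examples
    ("us engineered famine covid lockdowns", ["examplefringe.news", "example.com"]),
    ("engineered famine vax mandates winter", ["lowquality.example"]),
    ("covid lockdowns food shortage winter", ["example.org", "example.com"]),
    ("will vaccine mandates cause hunger", ["example.org"]),
]

def surfaces_unreliable(domains):
    """True if any first-page result comes from a known low-quality domain."""
    return any(d in UNRELIABLE_DOMAINS for d in domains)

with_term = [results for query, results in query_log if "engineered" in query]
without_term = [results for query, results in query_log if "engineered" not in query]

share_with = sum(map(surfaces_unreliable, with_term)) / len(with_term)
share_without = sum(map(surfaces_unreliable, without_term)) / len(without_term)

print(f"queries reusing 'engineered': {share_with:.0%} surfaced unreliable results")
print(f"queries without it:           {share_without:.0%}")
```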
"I was surprised by how many people were using this kind of naive search strategy," says the study's lead author Kevin Aslett, an assistant professor of computational social science at the University of Central Florida. "It's really concerning to me."
Tech Billionaires Need to Stop Trying to Make the Science Fiction They Grew Up on Real
Today's Silicon Valley billionaires grew up reading classic American science fiction. Now they're trying to make it come true, embodying a dangerous political outlook
Science fiction (SF) influences everything in this day and age, from the design of everyday artifacts to how we (including the current crop of 50-something Silicon Valley billionaires) work. And that's a bad thing: it leaves us facing a future we were all warned about, courtesy of dystopian novels mistaken for instruction manuals.
Billionaires who grew up reading science-fiction classics published 30 to 50 years ago are affecting our lives today in almost too many ways to list: Elon Musk wants to colonize Mars. Jeff Bezos prefers 1970s plans for giant orbital habitats. Peter Thiel is funding research into artificial intelligence, life extension and "seasteading." Mark Zuckerberg has blown $10 billion trying to create the Metaverse from Neal Stephenson's novel Snow Crash. And Marc Andreessen of the venture capital firm Andreessen Horowitz has published a "techno-optimist manifesto" promoting a bizarre accelerationist philosophy that calls for an unregulated, solely capitalist future of pure technological chaos.
These men collectively have more than half a trillion dollars to spend on their quest to realize inventions culled from the science fiction and fantasy stories that they read in their teens. But this is tremendously bad news, because so much of the past century's science fiction and fantasy comes loaded with dangerous assumptions.
SF is a profoundly ideological genre; it's about much more than new gadgets or inventions. Canadian science-fiction novelist and futurist Karl Schroeder has told me that "every technology comes with an implied political agenda." And the tech plutocracy seems intent on imposing its agenda on our planet's eight billion inhabitants.
We were warned about the ideology driving these wealthy entrepreneurs by Timnit Gebru, former technical co-lead of the ethical artificial intelligence team at Google and founder of the Distributed Artificial Intelligence Research Institute (DAIR), and Émile Torres, a philosopher specializing in existential threats to humanity. They named this ideology TESCREAL, which stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism." These are separate but overlapping beliefs in the circles associated with big tech in California. Transhumanists seek to extend human cognition and enhance longevity; extropians add space colonization, mind uploading, AI and rationalism (narrowly defined) to these ideals. Effective altruism and longtermism both discount relieving present-day suffering to fund a better tomorrow centuries hence. Underpinning visions of space colonies, immortality and technological apotheosis, TESCREAL is essentially a theological program, one meant to festoon its high priests with riches.
How did this ideology come about, and why do I think it's dangerous?
The science-fiction genre that today's billionaires grew up with, the one that existed in the 1970s, goes back to inventor and publisher Hugo Gernsback. Gernsback published general articles about science and technology and then fiction in that vein. He started publishing Amazing Stories magazine in 1926 as a vehicle for fantastic tales about a technological future. His magazine's strain of SF promoted the American dream of capitalist success, combined with uncritical technological solutionism and a side order of frontier colonialism.
Gernsbackian SF mirrored Italian futurism's rejection of the past and celebration of speed, machinery, violence, youth and industry, and both were wide open to far-right thought. Gernsback's rival, John W. Campbell, Jr. (editor of Astounding Science Fiction from 1937 until 1971), promoted many now-famous authors, including Robert Heinlein and Isaac Asimov. But Campbell was also racist, sexist and a red-baiter. Nor was Campbell alone on the right wing of SF: for example, bestselling author Ayn Rand held that the only social system compatible with her philosophy of objectivism was laissez-faire capitalism. The appeal this holds for today's billionaires is obvious.
Perhaps SF's weirdest contribution to TESCREAL is Russian cosmism, the post-1917 stepchild of the mystical theological speculation of philosopher Nikolai Fyodorovich Fyodorov. It pervades science fiction, in themes ranging from space colonization to immortalism, superhumans, the singularity and mind uploading.
Cosmism's contribution to the TESCREAL ideology is a secular quasi-religion with an implied destiny (colonize Mars and then the galaxy, achieve immortality, prioritize the long-term interests of humanity) that provides billionaires with an appealing justification for self-enrichment. We can see this with Thiel, who co-founded the analytics company Palantir Technologies with a Lord of the Rings-themed name and recently told the Atlantic that he wanted to be immortal like J.R.R. Tolkien's elves. And we can see it when Musk lands his rockets on barges with names taken from a science-fiction series by Iain M. Banks (ironically enough, one about a galactic socialist utopia). TESCREAL is also heavily contaminated with Christian theological reasoning, Campbellian white supremacism, Randian ruthlessness, the eugenics that was pervasive in the genre until the 1980s and the imperialist subtext of colonizing the universe.
But there is a problem: SF authors such as myself are popular entertainers who work to amuse an audience that has been trained on what to expect by previous generations of science-fiction authors. We are not trying to accurately predict possible futures but to earn a living: any foresight is strictly coincidental. We recycle the existing material, and the result is heavily influenced by the biases of earlier writers and readers. The genre operates a lot like a large language model that is trained on a body of text heavily contaminated by the output of previous LLMs; it tends to emit material like that of its predecessors. Most SF is small-c conservative insofar as it reflects the history of the field rather than trying to break ground or question received wisdom.
Science fiction, therefore, does not develop in accordance with the scientific method. It develops through popular entertainers trying to attract bigger audiences by pandering to them. Today's audience includes billionaires who read science fiction in their childhood and who appear unaware of the ideological underpinnings of their youthful entertainment: elitism, "scientific" racism, eugenics, fascism and a blithe belief in technology as the solution to societal problems.
In 2021 a meme arose based on writer and game designer Alex Blechman's tweet about this issue (which was later posted to Mastodon):
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus
It's a worryingly accurate summary of the situation in Silicon Valley right now: the billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we're trapped in the passenger seat. Let's hope there isn't a cliff in front of us.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.