How to Stay Smart in a Smart World



Author: Gerd Gigerenzer

Publisher: The MIT Press

Publish Date: August 2, 2022

ISBN-10: 0262046954

Pages: 320

Language: English

Book Preface

Imagine a digital assistant who does everything better than you. Whatever you say, it knows better. Whatever you decide, it will correct. When you come up with a plan for next year, it will have a superior one. At some point you may give up making any personal decisions on your own. Now it’s the AI that efficiently runs your finances, writes your messages, chooses your romantic partner and plans when it’s best to have children. Packages will be delivered to your door with products you didn’t even know you needed. A social worker may turn up because the digital assistant predicted that your child is at risk of severe depression. And, before you waste time agonizing over which political candidate you favour, your assistant will already know and cast your vote. It’s just a matter of time until tech companies run your life, and the faithful assistant morphs into a supreme superintelligence. Like a flock of sheep, our grandchildren will cheer or tremble in awe of their new master.

In recent years, I have spoken at many popular artificial intelligence (AI) events and have repeatedly been surprised at how widespread unconditional trust in complex algorithms appears to be. No matter what the topic was, representatives of tech companies assured listeners that a machine would do the job more accurately, quickly and cheaply. What’s more, replacing people with software might well make the world a better place. In the same vein, we hear that Google knows us better than we know ourselves and that AI can predict our behaviour almost perfectly, or soon will be able to do so. Tech companies proclaim this ability when they offer their services to advertisers, insurers or retailers. We too tend to believe it. Even those popular authors who paint doomsday pictures of robots ripping the guts out of humans assume the near-omniscience of AI, as do some of the tech industry’s most outspoken critics, who brand the business as evil surveillance capitalism and fear for our freedom and dignity.3 It is this belief that makes many worry about Facebook as a terrifying Orwellian surveillance machine. Data leaks and the Cambridge Analytica scandal have amplified this worry into fearful awe. Whether based on faith or fear, the storyline remains the same. It goes like this:

AI has beaten the best humans in chess and Go.

Computing power doubles every couple of years.

Therefore, machines will soon do everything better than humans.

Let’s call it the AI-beats-humans argument. Its two premises are correct, but the conclusion is wrong.

The reason is that computing power goes a long way for some kinds of problems, but not for others. To date, the stunning victories of AI have been in well-defined games with fixed rules such as chess and Go, with similar successes for face and voice recognition in relatively unchanging conditions. When the environment is stable, AI can surpass humans. If the future is like the past, large amounts of data are useful. However, if surprises happen, big data – which are always data from the past – may mislead us about the future. Big data algorithms missed the financial crisis of 2008 and in 2016 predicted Hillary Clinton’s victory by a large margin.

In fact, many problems we face are not well-defined games but situations in which uncertainty abounds: finding true love, predicting who will commit a crime and reacting in unforeseen emergency situations are examples. Here, more computing power and bigger data are of limited help. Humans are the key source of uncertainty. Imagine how much more difficult chess would be if the king could violate the rules at a whim and the queen could stomp off the board in protest after setting the rooks on fire. With people involved, trust in complex algorithms can lead to illusions of certainty that become a recipe for disaster.

The insight that complex algorithms are likely to succeed when situations are stable but to struggle under uncertainty exemplifies the general theme of this book, staying smart in a smart world:

Staying smart means understanding the potentials and risks of digital technologies, and the determination to stay in charge in a world populated by algorithms.

I wrote this book to help you understand the potential of digital technologies, such as AI, but even more important, the limitations and risks of these technologies. As these technologies become more widespread and dominant, I want to provide you with strategies and methods to stay in charge of your life rather than let yourself get steamrollered.

Should we simply lean back and relax while software makes our personal decisions? Definitely not. Staying smart does not mean obliviously trusting technology; nor does it mean anxiously mistrusting it. Instead, it is about understanding what AI can do and what remains the fancy of marketing hype and techno-religious faiths. It is also about one’s personal strength to control a device rather than being remote-controlled by it.

Staying smart is not the same as having the digital skills to use technology. Educational programmes worldwide seek to increase digital skills by buying tablets and smart whiteboards for classrooms and teaching children how to use them. But these programmes rarely teach children how to understand the risks posed by digital technology. As a consequence, most digital natives shockingly cannot tell hidden ads from real news and are taken in by the mere appearance of a website. For instance, a study of 3,446 digital natives showed that 96 per cent of them do not know how to check the trustworthiness of sites and posts.4

A smart world is not just the addition of smart TVs, online dating, and gimmicks to our lives. It is a world transformed by digital technology. When the door to the smart world was first opened, many pictured a paradise where everyone had access to the tree of truthful information, which would finally put an end to ignorance, lies and corruption. Facts about climate change, terrorism, tax evasion, exploitation of the poor and violations of human dignity would be laid open. Immoral politicians and greedy executives would be exposed and forced to resign. Government spying on the public and violations of privacy would be prevented. To some degree, this dream has become reality, although the paradise has also been polluted. What is really happening, however, is a transformation of society. The world does not simply get better or worse. How we think about good and bad is changing. For instance, not long ago, people were extremely concerned about privacy and took to the streets to protest against governments and corporations that tried to surveil them and get hold of their personal data. A wide spectrum of activists, young liberals and mainstream organizations held massive protests against the 1987 German census, fearing that computers could de-anonymize their answers, and angry people plastered the Berlin Wall with thousands of empty questionnaires. In the 2001 census in Australia, more than 70,000 people declared their religion as ‘Jedi’ (after the movie Star Wars), and as late as 2011, British citizens protested against questions infringing on their privacy, such as ones concerning religion.5 Today, when our smart home records everything we do 24/7, including in our bedroom, and our child’s smart doll records every secret it is entrusted with, we shrug our shoulders. Feelings of privacy and dignity adapt to technology, or risk becoming concepts of the past. The dream of the internet was once freedom; for many, freedom now means free internet.

Since time immemorial, humans have created impressive new technologies that they have not always used wisely. To reap the many benefits of digital technology, we need the insight and courage to remain smart in a smart world. This is not the time to sit back and relax, but to keep our eyes open and stay in charge.

Staying in Charge

If you are not a devil-may-care person, you might occasionally worry about your safety. Which disaster do you think is more likely to happen in the next ten years?

  • You will be killed by a terrorist.
  • You will be killed by a driver distracted by a smartphone.

If you opt for the terrorist attack, you are among the majority. Since the 9/11 attack, surveys in North America and Europe have indicated that many people believe terrorism to pose one of the greatest hazards to their lives. For some, it is the greatest fear. At the same time, most admit to texting while driving without much concern. In the ten years before 2020, 36 people on average were killed annually in the US by terrorists, Islamic, right-wing or other.6 In that same period, more than 3,000 people were killed annually by distracted drivers, often people busy on their phone texting, reading or streaming.7 That figure amounts to the death toll of the 9/11 attack, but for every year.
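
A back-of-the-envelope calculation makes the gap vivid. Here is a minimal Python sketch using only the ten-year US averages just cited:

    # Annual US death tolls, ten-year averages cited in the text.
    terrorism_deaths_per_year = 36
    distracted_driving_deaths_per_year = 3_000

    ratio = distracted_driving_deaths_per_year / terrorism_deaths_per_year
    print(f"Distracted driving kills roughly {ratio:.0f} times as many people per year.")

    # Over a decade, the toll compounds:
    print(f"Ten-year totals: {terrorism_deaths_per_year * 10:,} (terrorism) "
          f"vs {distracted_driving_deaths_per_year * 10:,} (distracted driving)")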

Most Americans are also more afraid of terrorism than of guns, even though they are less likely to be shot by a terrorist than by a child playing with a gun in their household. Unless you live in Afghanistan or Nigeria, you are much more likely to be killed by a distracted driver, possibly yourself. And it is not difficult to understand why. When twenty-year-old drivers use a phone, their reaction times decline to those of a seventy-year-old without one.8 That’s known as instant brain ageing.

Why do people text while driving? They might not be aware of how dangerous it is. However, in a survey, I found that most are well aware that a hazard exists.9 At issue is not lack of awareness. It is lack of self-control. ‘When a text comes in, I just have to look, no matter what,’ one student explains. And self-control has been made more difficult ever since platforms introduced notifications, likes and other psychological tricks to keep users’ eyes glued to their sites rather than their surroundings. Yet so much damage could be avoided if people managed to overcome their urge to check their phone when they should be paying attention to the road. And it’s not just young people. ‘Don’t text your loved ones when you know they are driving’ – so said a devastated mom who found her badly injured daughter in intensive care, face scarred and one eye lost, after having sent her child ‘a stupid text’.10 A smartphone is an amazing piece of technology, but it requires smart people who use it wisely. Here, the ability to stay in charge and control technology protects the personal safety of yourself and your loved ones.

Mass Surveillance Is a Problem, Not a Solution

Part of the reason why we fear a terrorist attack rather than a driver glued to a smartphone is that more media attention is devoted to terrorism than to distracted driving, and politicians have followed suit. To protect their citizens, governments all around the world experiment with face recognition surveillance systems. These systems do an exceptional job of recognizing faces when tested in a lab using visa or job application photographs, or other well-lit photos with people’s heads held in similar positions. But how accurate are they in the real world? One test took place close to my home.

On the evening of 19 December 2016, a twenty-four-year-old Islamist terrorist hijacked a heavy truck and ploughed into a busy Berlin Christmas market packed with tourists and locals enjoying sausages and mulled wine, killing twelve people and injuring forty-nine. The following year, the German Ministry of the Interior installed face recognition systems at a Berlin train station to test how accurately they recognized suspects. At the end of the year-long pilot test, the ministry proudly announced two exciting numbers in its press release: a hit rate of 80 per cent, meaning that of every ten suspects, the systems identified eight correctly and missed two; and a false alarm rate of 0.1 per cent, meaning that only one out of every 1,000 innocent passers-by was mistaken for a suspect. The minister hailed the system as an impressive success and concluded that nationwide surveillance is feasible and desirable.

After the press release, a heated debate arose. One group had faith that more security justifies more surveillance, while the other feared that the cameras would eventually become the ‘telescreens’ of George Orwell’s Nineteen Eighty-Four. Both, however, took the accuracy of the system for granted.11 Instead of taking sides in that emotional debate, let’s consider what would actually happen if such face recognition systems were widely implemented. Every day, about 12 million people pass through train stations in Germany. Apart from several hundred wanted suspects, these are normal people heading for work or out for pleasure. The impressive-sounding false alarm rate of 0.1 per cent translates into nearly 12,000 passers-by per day who would be mistaken for suspects. Each one would have to be stopped, searched for weapons or drugs, and restrained or held in custody until their identity was proven.12 Police resources, already strained, would be used for scrutinizing these innocent citizens rather than for effective crime prevention, meaning that such a system would in fact come at the cost of security. Ultimately, one would end up with a surveillance system that infringes on individual freedom and disrupts social and economic life.
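
To make the base-rate arithmetic concrete, here is a minimal Python sketch. The hit rate and false alarm rate are the pilot’s published figures; the number of genuine suspects among the daily passengers is an illustrative assumption (‘several hundred’ in the text above):

    # Base-rate arithmetic for nationwide face recognition screening.
    # Hit rate and false alarm rate are from the Berlin pilot; the
    # suspect count is an illustrative assumption.

    daily_passengers = 12_000_000
    suspects = 500                 # assumed: 'several hundred' wanted suspects
    innocents = daily_passengers - suspects

    hit_rate = 0.80                # share of suspects correctly flagged
    false_alarm_rate = 0.001       # share of innocents wrongly flagged

    hits = hit_rate * suspects
    false_alarms = false_alarm_rate * innocents

    print(f"Suspects caught per day:   {hits:,.0f}")          # 400
    print(f"Innocents flagged per day: {false_alarms:,.0f}")  # ~12,000

    # Of all alarms raised, only a small fraction point to genuine suspects:
    precision = hits / (hits + false_alarms)
    print(f"Share of alarms that are genuine: {precision:.1%}")  # ~3.2%

Under these assumptions, roughly 97 per cent of all alarms would point to innocent passers-by.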

Face recognition can perform a valuable service, but for a different task: identification of an individual rather than mass screening. After a crime happens at a subway station, or a car runs through a red light, a video recording can help to identify the perpetrator. Here we know that the person has committed a crime. When screening everyone at the station, in contrast, we do not know whether the people being screened are suspects. Most of them aren’t, which – like mass medical screenings – leads to the large number of false alarms. Face recognition is even better at another task. When you unlock your phone by looking at the screen, it performs a task called authentication. Unlike a perpetrator running away in the subway, you look directly into the camera, hold it close to your face and keep perfectly still; it’s virtually always you who tries to unlock your phone. This situation creates a fairly stable world: you and your phone. Errors rarely occur.

To discuss the pros and cons of face recognition systems, one needs to distinguish between these three situations: many-to-many, one-to-many and one-to-one. In mass screening, many people are compared with many others in a data bank; in identification, one person is compared with many others; and in authentication, one person is compared with one other. Once again, the smaller the uncertainty, as in identification as opposed to mass screening, the better the performance of the system. Recall the storming of the Capitol in January 2021, where face recognition systems speedily identified some of the intruders who had forced their way into the building. The general point is that AI is not good or bad, but useful for some tasks and less so for others.
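
A rough way to see why the same technology behaves so differently across the three tasks is to count the comparisons each one requires. The sketch below assumes an illustrative per-comparison false-match rate and watchlist size, and treats comparisons as independent, a simplification that nonetheless shows the scaling:

    # How the number of face comparisons explodes across the three tasks.
    # The false-match rate and watchlist size are illustrative assumptions.

    f = 1e-6                   # assumed false matches per comparison
    watchlist = 1_000          # faces in the data bank (assumed)
    passers_by = 12_000_000    # people screened per day

    comparisons = {
        "authentication (one-to-one)": 1,
        "identification (one-to-many)": watchlist,
        "mass screening (many-to-many)": passers_by * watchlist,
    }

    for task, n in comparisons.items():
        print(f"{task}: {n:,} comparisons, ~{f * n:,.6g} expected false matches")

Even a tiny error rate per comparison, multiplied across billions of daily comparisons, yields thousands of false matches in mass screening, while authentication remains essentially error-free.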

Last but not least, the concerns about privacy fit in with this analysis. The general public is most concerned about mass surveillance by governments, not identification of perpetrators and authentication. And mass surveillance is exactly what face recognition systems are most unreliable at doing. Understanding this crucial difference helps protect the individual freedoms valued in Western democracies against the surveillance interests of their own governments.

I Have Nothing to Hide

This phrase has become popular in discussions about social media companies that collect all of the personal data they can get their hands on. You might hear it from users who prefer to pay with their data, not with their money. And the phrase could well hold true for those of us who live uneventful lives without any serious health issues, have never made any potential enemies and wouldn’t speak up about civil rights denied by a government. Yet the issue is not about hiding or the freedom to post pictures of adorable kittens at no cost. Tech companies don’t care whether or not you have something to hide. Rather, because you don’t pay any money for their services, they have to employ psychological tricks to get you to spend as much time as possible on their apps. You are not the customer; the customers are the advertisers who pay tech companies to grab your attention. Many of us have become glued to our smartphones, get too little sleep because of our new bed partner, find hardly any time for anything else and eagerly await another dopamine shot via each new ‘like’. Jia Tolentino, a writer at the New Yorker, wrote about her struggle with her mobile phone:13 ‘I carry my phone around with me as if it were an oxygen tank. I stare at it while I make breakfast and take out the recycling, ruining what I prize most about working from home – the sense of control, the relative peace.’ Others are hurt by destructive online comments from strangers about their looks or wits. Still others drift into extremist groups, falling prey to fake news and hate speech.

The world is split between those who don’t worry much about being affected by digital technology and those like Tolentino who believe it makes them addicted, much as compulsive gamblers cannot stop thinking about gambling. Yet technology, and social media in particular, could well exist without being designed to rob people of time and sleep. It is not social media per se that make some of us addicted; it is the personalized ad-based business model. The damage to its users flows from that ‘original sin’.

The Free Coffeehouse

Imagine a coffeehouse that has eliminated all competitors in town by offering free coffee, leaving you no choice but to go there to meet your friends. While you enjoy the many hours you spend there chatting with your friends, bugs and cameras wired into the tables and walls closely monitor your conversations and record whom you are sitting with. The room is also filled with salespeople who pay for your coffee and constantly interrupt you to offer their personalized products and services on sale. The customers in this coffeehouse are in effect the salespeople, not you and your friends. This is basically how platforms like Facebook function.14

Social media could function in a healthier way if they were based on the business model of a real coffeehouse or of TV, radio or other services where you as the customer pay for the amenities you want. In fact, in 1998, Sergey Brin and Larry Page, the young founders of Google, criticized ad-based search engines because these are inherently biased towards the needs of advertisers, not those of consumers.15 Yet under the pressure of venture capitalists, they soon caved in and built the most successful personalized advertisement model in existence. In this business model, your attention is the product being sold. The actual customers are the companies who place ads on the sites. The more often people see their ads, the more the advertisers pay, leading social media marketers to run experiment after experiment in order to maximize the time you spend on their sites and to make you want to return as quickly as possible. The urge to grab the phone while driving a car is a case in point. In short, the quintessence of the business model is to capture users’ time and attention to the greatest extent possible.

To serve advertisers, tech companies collect data minute by minute on where you are, what you are doing and what you are looking at. Based on your habits, they build a kind of avatar of you. When an advertiser places an ad, say for the latest handguns or expensive lipsticks, the ad is shown to those who are most likely to click on it. Typically, advertisers pay the tech company each time a user clicks on the ad, or for every impression. Therefore, to increase the chance that you click on an ad, or just see it, everything is done to keep you on the page as long as possible. Likes, notifications and other psychological tricks work together to make you dependent – day and night. Thus, it’s not your data that are being sold, it’s your attention, time and sleep.
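
The incentive follows from simple expected-value arithmetic: revenue grows with the number of ads you see and the chance you click each one, and both grow with time on site. A minimal sketch, with all numbers made up for illustration:

    # Why time on site is the quantity being maximized.
    # All numbers are made up for illustration.

    ads_per_minute = 2           # ads shown per minute of scrolling
    click_rate = 0.01            # chance a user clicks a given ad
    revenue_per_click = 0.50     # what the advertiser pays per click

    def expected_revenue(minutes_on_site: float) -> float:
        """Expected ad revenue from a single user session."""
        impressions = ads_per_minute * minutes_on_site
        return impressions * click_rate * revenue_per_click

    for minutes in (5, 30, 120):
        print(f"{minutes:>3} minutes on site -> {expected_revenue(minutes):.2f} expected revenue")

Doubling the time a user spends on the site doubles the expected revenue, which is why armies of engineers and psychologists are paid to maximize exactly that variable.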

If Google and Facebook had a fee-for-service model, none of that would be necessary. The armies of engineers and psychologists who run experiments on how to keep you glued to your smartphone could be working on more useful technological innovations. Social media companies would still have to collect specific data for improving recommendations in order to meet your specific needs, but they would no longer be motivated to collect other superfluous personal data – such as data that might indicate that you are depressed, have cancer or are pregnant. The main motivation behind collecting these data on you – personalized advertisements – would disappear. Netflix is a good example of a company that has already implemented this fee-for-service model.16 From the user perspective, the small disadvantage would be that we all would have to pay a few pounds every month to use social media. For the social media companies, however, the big advantage of the more lucrative pay-with-your-data plan is that the men – yes, virtually all men – at the top of the ladder are now among the wealthiest and most powerful people on earth.

Staying on Top of Technology

These examples provide a first impression of what staying on top of technology is about. Resisting the siren call to text while driving entails the ability to stay in charge and control a technology. The possibilities and limitations of face recognition systems show us that the technology is excellent in fairly stable situations, such as unlocking your phone or for border control, where your passport photo is compared with another photo taken of you. But when screening faces in real-world conditions, the AI stumbles and creates too many false alarms, which can lead to huge problems when masses of innocent people are stopped and searched. Finally, the problems caused by social media – from loss of time, sleep and the ability to concentrate to addiction – are not the fault of social media per se, but of the companies’ pay-with-your-data business model. To stamp out these severe problems, we need to go beyond new privacy settings or government regulations of online content and tackle the root of the problem, such as by changing the underlying business model. Governments need more political courage to protect the people they represent.

One might think that helping everyone to understand the potentials and risks of digital technology is a primary goal of all education systems and governments worldwide. It is not. In fact, it is not even mentioned in the OECD’s ‘Key Issues for Digital Transformation in the G20’ of 2017 or the European Commission’s 2020 ‘White Paper on Artificial Intelligence’.17 These programmes focus on other important issues, including creating innovation hubs, digital infrastructures and proper legislation and increasing people’s trust in AI. As a consequence, most digital natives are woefully unprepared to tell facts from fakes, and news from hidden ads.

Solving the problems, however, entails more than infrastructure and regulation. It requires taking time to reflect and do some serious research. Have you ever had to wait a long time on a service hotline? It could be that your address or a prediction algorithm indicated that you are a low-value customer. Have you noticed that the first result in a Google search is not the most useful one for you? It is likely the one for which an advertiser paid most.18 Are you aware that your beloved smart TV may record your personal conversations in your living room or bedroom?19

If none of this is new to you, you might be surprised to learn that for most people it is. Few know that algorithms determine their waiting time or analyse what smart TVs record for the benefit of unnamed third parties. Studies report that about 50 per cent of adult users do not understand that the marked top search entries are ads rather than the most relevant or popular results.20 These ads are in fact marked, but over the years they have come to look more like ‘organic’ search results (non-ads). In 2013, Google stopped highlighting its ads with a special background colour and instead introduced a small yellow ‘Ad’ icon; since 2020, the yellow colour has also been removed, and the word ‘Ad’ appears in black and white, blending into the organic search results. Google gets paid by advertisers for each click on their ads, so if people mistakenly believe the first results are the most relevant for them, that is good for business.

As mentioned, many executives and politicians are excessively enthusiastic about big data and digitalization. Enthusiasm is not the same as understanding. Many of the overly zealous prophets do not appear to know what they are talking about. According to a study of over 400 executives in eighty large publicly listed companies, 92 per cent of them have no recognizable or documented experience with digitalization.21 Similarly, when Mark Zuckerberg had to testify about Facebook’s latest privacy controversy before politicians from the US Senate and the House, the most stunning revelation was not what he said in his rehearsed responses but how little US politicians seemed to know about the opaque ways in which social media companies operate.22 When I served on the Advisory Council for Consumer Affairs at the German Ministry of Justice and Consumer Protection, we looked at how the secret algorithms of credit-scoring companies are supervised by the data protection authorities, who are there to ensure that the algorithms are reliable indicators of creditworthiness that do not discriminate on the basis of gender, race or other individual features. When the largest credit-scoring company submitted its algorithms, the authorities admitted to lacking the necessary expertise in IT and statistics to evaluate them. In the end, the company itself bailed them out by selecting the experts who wrote the report, even paying their fees.23 Ignorance appears to be the rule rather than the exception in our smart world. We need to change that quickly, not in the distant future.

Technological Paternalism

Paternalism (from the Latin word pater, for father) is the view that a chosen group has the right to treat others like children, who should willingly defer to that group’s authority. Historically, its justification has been that the ruling group has been chosen by God, is part of the aristocracy or owns secret knowledge or splendid wealth. Those under their authority are considered inferior because they are women, people of colour, poor or uneducated, among others. During the twentieth century, paternalism was on the retreat after the vast majority of people finally had the opportunity to learn to read and write, and after governments eventually granted both men and women freedom of speech and movement, along with the right to vote. That revolution, in the name of which committed supporters were imprisoned or gave their lives, enabled the next generations, including us, to take matters into our own hands. The twenty-first century, however, is witnessing the rise of a new paternalism by corporations that use machines to predict and manipulate our behaviour, whether we consent or not. Its prophets even announce the coming of a new god, an omniscient superintelligence known as AGI (Artificial General Intelligence) that is said to surpass humans in all aspects of brainpower. Until its arrival, we should defer to its prophets.24

Technological solutionism is the belief that every societal problem is a ‘bug’ that needs a ‘fix’ through an algorithm. Technological paternalism is its natural consequence: government by algorithms. It doesn’t need to peddle the fiction of a superintelligence; it instead expects us to accept that corporations and governments record where we are, what we are doing, and with whom, minute by minute, and also to trust that these records will make the world a better place. As Eric Schmidt explains, ‘The goal is to enable Google users to be able to ask the question such as “What shall I do tomorrow?” and “What job shall I take?”’25 Quite a few popular writers instigate our awe of technological paternalism by telling stories that are, at best, economical with the truth.26 More surprisingly, even some influential researchers see no limits to what AI can do, arguing that the human brain is merely an inferior computer and that we should replace humans with algorithms whenever possible.27 AI will tell us what to do, and we should listen and follow. We just need to wait a bit until AI gets smarter. Oddly, the message is never that people need to become smarter as well.

I have written this book to enable people to gain a realistic appreciation of what AI can do and how it is used to influence us. We do not need more paternalism; we’ve had more than our fair share in the past centuries. But nor do we need technophobic panic, which is revived with every breakthrough technology. When trains were invented, doctors warned that passengers would die from suffocation.28 When radio became widely available, the concern was that listening too much would harm children because they need repose, not jazz.29 Instead of fright or hype, the digital world needs better-informed and healthily critical citizens who want to keep control of their lives in their own hands.

This book is not an academic introduction to AI, or to its subfields such as machine learning and big data analytics. Rather, it is about the human affair with AI: about trust, deception, understanding, addiction and personal and social transformation. It is written for a general audience as a guide to navigating the challenges of a smart world and draws, among other sources, on my own research on decision making under uncertainty at the Max Planck Institute for Human Development, a topic that continues to fascinate me. In the course of the book, my personal take on freedom and dignity is not disguised, but I do my best to stick to the evidence and let readers make up their own minds. My deeply held conviction is that we human beings are not as stupid and incapable of functioning as is often claimed – so long as we continue to remain active and make use of our brains, which developed over the intricate course of evolution. The danger of falling for the negative AI-beats-humans narrative and passively agreeing to let authorities or machines ‘optimize’ our lives on their terms is growing by the day, and has particularly motivated me to write this book. As in my previous books Gut Feelings and Risk Savvy, How to Stay Smart in a Smart World is ultimately a passionate call to keep the hard-fought legacies of personal liberty and democracy alive.

Today and in the near future, we face a conflict between two systems, autocratic and democratic, not unlike the Cold War. Yet, unlike in that era, when nuclear technology maintained an uneasy balance between the two forces, digital technology can easily tilt the scales towards autocratic systems. We have seen this happen during the Covid-19 pandemic, when some autocratic countries successfully contained the virus with the help of strict digital surveillance systems.

I cannot cover every aspect of the vast field of digitalization, but I will present a selection of topics that illustrate general principles with wider application, such as the stable-world principle and the Texas sharpshooter fallacy discussed in Chapter 2, and the adapt-to-AI principle and the Russian-tank fallacy discussed in Chapter 4. As you may have noticed, I use the term AI in a broad sense, to include any kind of algorithm meant to do what human intelligence does, but I will differentiate whenever necessary.

In each culture, we need to talk about the future world in which we and our children wish to live. There will be no single answer. But there is a general message that applies to all visions. Despite – or because of – technological innovation, we need to use our brains more than ever.

Let’s begin with a problem dear to our heart, finding true love, and with secret algorithms that are so simple that everyone can understand them.

