Chapter 3: The hype

Why is everyone talking about AI right now? 
Should we be scared?


In this section, I’ll outline some of the key myths and narratives that are driving our current moment of AI hype, and explain why it is important for journalists to be able to look past the hype in order to use and report on these systems critically.

Inevitability
The strangest thing about the name “artificial intelligence” is that, while we often refer to certain technologies as “AI,” an actual digital brain that can think like a human does not yet exist. The field is named not after what it does, but after what its creators imagine it could become.

A hypothetical machine that can do anything humans are intellectually capable of is often referred to as artificial general intelligence, or AGI. It is unclear whether AGI is even possible, but conversations about AI tend to operate under the assumption that AGI is not only achievable but inevitable. We frame conversations around being “ready” for AGI, as if it were not being built by humans but delivered to us from the heavens.

OpenAI’s guiding charter is a glaring example of this narrative in action. In the introductory paragraph, they write: “The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.” It goes on to state that OpenAI’s mission is to “ensure that artificial general intelligence (AGI) – by which we mean highly autonomous systems that outperform humans at most economically valuable work – benefits all of humanity.” 

The charter boldly assumes that we are all on a fixed “timeline” toward the eventual creation of AGI. It does not question who put us on that timeline, why we are on it in the first place, or whether we could have built a different one. This assumption underpins OpenAI’s founding declaration that they will be our noble shepherds, guiding us safely toward AGI’s eventual “arrival.”

This narrative also shows up in government policies surrounding artificial intelligence. In the paper “Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics,” AI governance researchers Christian Katzenbach and Jascha Bareis analyze the rhetoric surrounding AI government policies in the US, China, France, and Germany.

They find a remarkably uniform narrative of AI’s inevitability being promoted across nations. In these policies, the development of AI is frequently characterized as an unprecedented technological breakthrough rather than a product of human agency. 

This narrative, when perpetuated by governments, indirectly shapes the future of the field through the allocation of resources. As Katzenbach and Bareis put it: “As governments endow these imaginary pathways with massive resources and investments, they contribute to coproducing the installment of these futures and, thus, yield a performative lock-in function.”

Magical thinking and anthropomorphization
A great deal of AI hype also comes from the fact that many of these new commercially available AI products can make computers “talk” and “act” like humans in ways they never have before.

This is probably the coolest thing about AI tools like ChatGPT, but it is imperative to understand that the system’s seemingly coherent, often friendly output isn’t coming from some all-knowing, well-mannered ghost inside the machine. It has been anthropomorphized, given human characteristics through its design, to make it easier for the average person to use. The model’s ability to process natural language makes it extremely accessible, but it also creates an illusion of coherence that is particularly difficult for humans to see past.

In their paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” AI ethicists Emily Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell make the important point that human beings are predisposed to mistake the seemingly coherent language output by language models for a sign of genuine intelligence. They write:

“Our human understanding of coherence derives from our ability to recognize interlocutors’ beliefs and intentions within context. That is, human language use takes place between individuals who share common ground and are mutually aware of that sharing (and its extent), who have communicative intents which they use language to convey, and who model each others’ mental states as they communicate.”

Anthropomorphization plays a huge role in the marketing of these products. OpenAI’s blog often boasts about GPT-4’s ability to meet human academic and professional benchmarks like the SAT, LSAT, GRE, AP tests, and even the bar exam. But anyone who has ever taken one of these exams knows that they don’t always have a whole lot to do with the real value of human intelligence. As Arvind Narayanan and Sayash Kapoor write in an entry on their Substack, AI Snake Oil: “It’s not like a lawyer’s job is to answer bar exam questions all day.”

This anthropomorphic design, plus the nearly inexplicable complexity of AI, contributes to an overall air of mysticism surrounding the technology. This imagining of deep learning systems as both unfathomably mysterious and superhuman in their computational abilities leads to a way of thinking that AI researchers Alexander Campolo and Kate Crawford call “Enchanted Determinism.”

In their paper “Enchanted Determinism: Power without Responsibility in Artificial Intelligence,” they define the term as “a discourse that presents deep learning techniques as magical, outside the scope of present scientific knowledge, yet also deterministic, in that deep learning systems can nonetheless detect patterns that give unprecedented access to people’s identities, emotions and social character.”

They make the point that to imagine AI as inexplicably magical is to detach the technology from its social and political reality: AI systems do not magically arrive, but are built through thousands of hours of human labor and trillions of bytes of human-created data. They are powered by machines built in factories, using raw materials mined out of the earth by human hands. They are the products of our labor, politics, and systemic power structures.

This is the truth of AI that journalists need to report on, and it is why the ability to see through the hype matters so much. To ignore this reality is to ignore the real danger of AI, which is not “on the horizon” but already here. The allure of immense computational power, our bias toward automated decisions, and our hunger for ever more efficient systems can lead to dangerous outcomes, in which algorithms make dire decisions about our social conditions without human oversight.

This is already happening: biased algorithms have been used to determine where police officers are dispatched, to diagnose diseases, and, in one particularly disastrous case, to predict who is most likely to commit fraud.

These uses rest on a common assumption: that AI systems are so powerful they can understand humans better than we can understand each other. But in actuality, AI can only “see” the world as well as we can chop the world up into data. And when we do that, something always gets left out.
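To make this concrete, here is a minimal sketch of what “chopping the world up into data” can look like in practice. Everything in it is invented for illustration: the schema, the fields, and the person are hypothetical, not drawn from any real system.

```python
# A hypothetical illustration: reducing a person to the fields a
# risk-scoring system happens to collect. Everything not in the
# schema is silently discarded, so the model can never "see" it.

# The fields this invented system was designed to record.
SCHEMA = ["age", "zip_code", "prior_flags", "income_bracket"]

def datafy(person: dict) -> dict:
    """Keep only the fields the schema allows; drop everything else."""
    return {field: person.get(field) for field in SCHEMA}

person = {
    "age": 42,
    "zip_code": "48503",
    "prior_flags": 1,
    "income_bracket": "low",
    # Context a human would weigh, but the schema has no column for:
    "recently_widowed": True,
    "flag_was_a_clerical_error": True,
}

record = datafy(person)
print(record)
# {'age': 42, 'zip_code': '48503', 'prior_flags': 1, 'income_bracket': 'low'}
# Whatever a model predicts from this record, it is reasoning about
# the record, not the person. The dropped context is invisible to it.
```

The point is not the code itself but the shape of the problem: the schema decides in advance what can be seen, and everything outside it simply does not exist for the model.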


Chapter 3 Homework:
Here are some readings that cover key topics in AI ethics, like scale, transparency, the politics of tech, and human benchmarking.

If you want to read more about the ethics of building increasingly larger language models:
  • Read “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” by Emily Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell. The paper outlines the dangers of pursuing ever larger language models, considering their environmental effects, the difficulty of documentation, and the potential for the mass perpetuation of bias in LLMs. Gebru and Mitchell were fired from Google as a result of the paper’s publication and subsequent events.

If you want to read about how government policy is informed by and contributes to AI hype:
  • Read “Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics” by Christian Katzenbach and Jascha Bareis. Discussed earlier in this chapter, the paper analyzes how national AI strategies in the US, China, France, and Germany present AI as inevitable, and how that framing shapes where government resources go.
If you want to read about the magical thinking that goes on surrounding AI:
  • Read “Enchanted Determinism: Power without Responsibility in Artificial Intelligence” by Alexander Campolo and Kate Crawford. Quoted earlier in this chapter, the paper examines how framing deep learning as simultaneously magical and deterministic grants the technology enormous power while deflecting responsibility for its effects.
If you want to read about the ethics of computer vision and image recognition:
  • Read “Excavating AI” by Kate Crawford and Trevor Paglen. This essay tells the story of influential image recognition datasets, diving into how they were used to teach computers to classify people based on attributes like race and gender. The authors break down the biased assumptions built into these systems, and whether “fairness” and “neutrality” are even possible in a system that attempts to determine race based on digital facial measurements and algorithmic prediction. 

If you want to read about the issues with testing AI systems against human benchmarks:
  • Read Arvind Narayanan and Sayash Kapoor’s AI Snake Oil entry on GPT-4 and professional exams, quoted earlier in this chapter, which questions what passing tests like the bar exam actually tells us about what these models can do.
If you want to read more about the idea of “transparency”:
If you want to read AI giant OpenAI’s guiding mission statement:
  • Read the OpenAI charter. This document details the core principles around which OpenAI has organized its mission to build AGI in a “safe and beneficial” way. This document is said to be treated like “scripture” at the company, as reported in great detail by Karen Hao for MIT Technology Review.

If you want to read about what the AI industry thinks about the existential risk of this technology:
  • First, read the Future of Life Institute open letter calling for a pause on AI systems more powerful than GPT-4. The letter amassed signatures from over 33,000 people, including OpenAI co-founder Elon Musk.
  • Then, read AI Snake Oil’s analysis, which breaks down the letter and addresses some harms the authors believe are being ignored in favor of sci-fi speculation.

If you want to read an early critique of the politics of Silicon Valley:
  • Read “The Californian Ideology,” written in 1995 by English media theorists Richard Barbrook and Andy Cameron. This essay criticizes the techno-utopian political sensibilities of 1990s Silicon Valley tech founders. It raises interesting questions about how the technology industry came to prioritize the creation of powerful personal computing machines, and how the politics of the people creating technology impact the rest of us.