A number of books have looked at how science is portrayed in the movies with all of them bringing a unique perspective to the issue (in fact, IEEE Spectrum’s own Stephen Cass coauthored such a book just last year).
Now, a new book written by one of the preeminent experts on how to recognize and prevent risks in emerging technologies brings a new perspective to the topic, and asks: Can we use sci-fi films as a road map to avoid the pitfalls of new technologies?
Andrew Maynard, who is director of the Risk Innovation Lab in the School for the Future of Innovation in Society at Arizona State University, picks up on sci-fi’s long tradition of providing cautionary tales of hubris, or critiques of today’s social ills, and looks to see what both the science professional and the layperson can glean in order to forge a better path going forward.
Maynard’s new book Films From the Future: The Technology and Morality of Sci-Fi Movies is not a list of a scientist’s favorite sci-fi movies, or what he considers the most realistic portrayals of science in movies. Instead, Maynard examines whether sci-fi films can provide us with actionable insights and tools to address the big moral challenges we face in emerging technologies, including AI and artificially upgrading our brains.
We spoke with Maynard ahead of the release of his book to ask how he approached it and how he hopes the book will be used. This interview has been edited and condensed.
This book joins a number of titles that have examined the intersection of sci-fi movies (or literature) with real-world science and technology. How is yours different?
Many books that connect science fiction movies with real-world science and technology use these to unpack specific scientific principles and ideas, and to make science and technology more accessible (The Physics of Star Trek by Lawrence Krauss, for instance, or Jeanne Cavelos’s The Science of Star Wars). There are also books that use these films as a lens through which to examine and better understand ourselves and the society we live in. Cass Sunstein’s The World According to Star Wars is a good example, as is Judith Barad’s The Ethics of Star Trek (Star Trek and Star Wars seem to have a particular attraction to writers!).
These are tried and tested formats, but I wanted to do something different. My starting point was the increasingly tough challenge of making sense of emerging trends in technology, and what it means to innovate in socially responsible ways.
There are relatively few popular books that paint a broad yet informed picture of current advances in technology, and even fewer that grapple seriously with challenges around how we develop and use these in ways that improve lives without leading to unacceptable harm.
And yet, the challenge of how to develop and use emerging tech responsibly is becoming an increasingly important issue, especially in the light of widely discussed dangers from technologies like gene editing or AI, or the social challenges around issues like privacy, autonomy, and social equity.
In setting out to write about how to think usefully about such challenges, science fiction movies turned out to be an intriguing medium through which to do this; not necessarily because they reflect key trends in tech innovation (although many do), but because they have something useful to say about our relationship with science and technology.
Building on this, the book uses a backbone of 12 sci-fi movies on which to hang a narrative around how advances in biotech, cybertech, and materials tech are transforming both the nature of what is possible, and the societal challenges and opportunities this presents. Along with focusing on specific trends, including genetic engineering, smart drugs, human augmentation, and AI, it takes a broader look at trends in technological convergence. And it contextualizes these trends around how to think usefully about socially responsible innovation.
One of the aims you seem to be trying to achieve in your book is using science fiction as a road map for challenges we face. Could you talk about how your role as the director of the Risk Innovation Lab at Arizona State University (ASU) trained your focus on this?
While I started my professional life as a physicist, for the past 20 years or so I’ve been deeply involved in working with people across different disciplines and sectors on the complex interplay between science, technology, and society, starting with nanotechnology and expanding from there to a whole tapestry of other emerging and converging technologies. This has led to two driving questions: How do we think differently and usefully about “risk” as we develop novel technological capabilities; and how do we ensure that emerging technologies benefit society, without causing widespread and potentially irreversible harm?
Some years ago, I began to realize that conventional thinking around risk was inadequate to the task of ensuring the responsible development and use of new technologies. This was seen in particular with nanotechnology, where existing risk frameworks were stretched to the breaking point by the production of materials with novel properties. But the problem extends to many other technologies, including gene editing, autonomous vehicles, AI, and even blockchain and ubiquitous data collection.
The more I explored this growing landscape around emerging and converging technologies, the clearer it became that, if we are to understand how to avoid undue harm from emerging capabilities, we need to rethink what we mean by “harm,” and how to avoid it.
To illustrate this, conventional approaches to risk typically focus on what can be quantified and controlled, while risk-mediated decisions are frequently influenced by threats to what’s important. And the two often don’t coincide, especially when what’s important includes factors like self-worth, sense of identity, belief, and social justice.
Emerging technologies frequently lead to risks that are hard to quantify, and yet still threaten what is deeply important to people. This leads to a growing tension between established approaches to understanding and managing risk, and the risks (perceived or otherwise) that are actually driving decisions within society.
If we are to innovate responsibly, this tension needs to be reduced. And this in turn requires parallel innovation in how we think and act on risk.
In the book, I use the creativity inherent in sci-fi films to illuminate the evolving nature of the risk landscape around emerging technologies. But the movies also help reveal insights around risk and responsibility that may otherwise remain hidden. Here, they are a surprisingly powerful tool for helping better understand the changing nature of the dynamic between technology and society, and how this in turn modulates the risk landscape.
Do you see your book as focusing on the ethical considerations that sci-fi reveals to us, and how broad policy decisions can be positively influenced by paying attention to sci-fi?
There’s certainly an aspect of the book that uses sci-fi to reveal the ethical challenges around novel technological capabilities, and to help think through ways of addressing them. But the book goes beyond ethics and morals, to explore more broadly what it means to be socially responsible. Here, it doesn’t provide many easy answers. But it does help readers better understand the landscape around how emerging technologies potentially threaten what they consider important, and how they can begin to navigate this.
In this way, the book does help establish a foundation for making decisions at all levels—whether by individuals, corporations, policy makers, or others—that are guided by a sophisticated understanding of the relationship we have with emerging technologies. My hope is that, when combined with other resources, this will lead to less naïve and more informed decisions around the development and use of increasingly powerful technologies.
You’ve emphasized the idea of “convergence,” in which different sciences and technologies are merged together, as both a powerful force for technological change and a challenge. Would you say that this idea of convergence—or the singularity—is becoming as big of a threat in today’s sci-fi movies as the nuclear apocalypse was in sci-fi movies from the 1950s on? How is this threat different?
I would separate out the broader concept of technological convergence from the particular (and hypothetical) scenario of the “singularity”—the point at which technological advances accelerate so fast that it becomes impossible to predict the future beyond it. That said, I would argue—and I do so in the book—that we are at a unique point in human history where we are beginning to transcend the natural world in our capabilities, because of the ways in which we are combining what used to be quite distinct areas of science and technology.
This is a tipping point which may end up being just as significant as the development of nuclear capabilities—maybe more so. And yet, while an increasing number of movies hint at aspects of the potential impacts of convergence, there are still relatively few that fully embrace it as an emerging threat. And even here, many of the films are nuanced. Even the movie Transcendence, which takes the idea of the singularity head-on, ends on an optimistic note.
I suspect that part of the reason for this is that the transformative capabilities of technological convergence are multifaceted, complex, hard to predict, and by no means uniformly negative. This makes it hard to weave a simple dystopian narrative around potential threats, because they are so elusive.
That said, there are dystopic films that build on technological convergence—The Matrix and Terminator are two good examples, as is Blade Runner (films that sadly didn’t make the cut for the book). Yet compared to 1950s apocalyptic movies, even these are somewhat nuanced—again, I suspect, because of the near-impossibility of boiling down emerging capabilities to a simple all-consuming idea.
What we may be beginning to see are science fiction movies that are more ambivalent about the good or bad to come out of technological convergence and that, rather than fixating on an apocalyptic future, focus on a future that is simply different. The great thing about this is that it opens up conversations around “different-good” and “different-bad,” and how to avoid one while trying to achieve the other.
You cover a wide variety of films, each presenting its own unique challenges, such as the fully autonomous AI of Ex Machina and Ghost in the Shell and the social inequalities of Elysium. What common themes do you draw on for the book?
Two common themes running through every chapter in the book are the questions “What does it mean to innovate responsibly?” and “How do we do this?” These are emphasized in different ways, and to a different extent, around each of the movies. But in every case, the book begins to unpack where emerging capabilities are taking us, how they potentially threaten what we hold to be important, or what we aspire to; how this impacts an emerging (largely social) risk landscape around emerging technologies; and how to begin thinking about ways to successfully navigate this.
This is where I’m particularly intrigued by the human stories in each of the films, and how they in turn reflect our real-world relationships with technology. As I mention in the book, sci-fi movies are an unreliable source of information on science and technology, but they can reveal surprisingly pertinent aspects of how technology impacts us, and how we in turn impact it.
If I were to try and boil down some of the overarching subthemes, they would be: the question of “should” rather than “could”; the potential impacts of permissionless innovation, especially when spearheaded by wealthy entrepreneurs; the challenges of rethinking what is appropriate and inappropriate in a rapidly changing world; and how to channel the sheer wonder of what we are learning how to do toward social good.
Is there any common message that both the scientist and the layperson should take away from these movies?
In the closing chapter of the book, I reflect on the risk of becoming overwhelmed by the potential downsides of technology innovation, and channel Douglas Adams’s The Hitchhiker’s Guide to the Galaxy, with the simple message “Don’t Panic.”
At heart, Films From the Future is a book with a cautiously optimistic message. The challenge we face at this point in our technological development is that we have the power to do great and irreversible harm with the capabilities we’re developing, but we also have the chance to transform lives for the better.
Emerging trends in science and technology are quite astounding, and it would be irresponsible to slow them down simply because we are scared of the future we imagine they may bring about. And yet, if we are to use them to build a better future, we need to learn fast how to wield them responsibly. And this is ultimately the message of the book—that we all have a responsibility to ensure that emerging technologies are developed and used in ways that benefit the lives of as many people as possible, without causing undue harm.
This is going to take some big changes in how we think about our relationship with science and technology, how we understand the potential risks and benefits, and how we draw on the insights and expertise of everyone potentially impacted. But the prize is a future where more people live better lives, because we thought ahead of time about what we’re trying to do, and how to do it well.