#236: Stop Saying 'Never'. Practice Saying 'What If...?'
The accelerating pace of change in technology, including AI, coupled with broader change and volatility, means we need to break the shackles of our thinking.
Welcome to the last IEX for 2024
I read this excellent piece by Doug Shapiro analysing the impact of AI on Hollywood. It was triggered by Ben Affleck suggesting in an interview that AI would be a sustaining innovation for Hollywood. Shapiro argues otherwise, pointing out that it could be deeply disruptive: not just in terms of how Hollywood makes movies, but in its potential impact on Hollywood as an industrial filmmaking hub itself.
One of the most interesting things that jumped out at me is this statement, which appears in a number of forms. Affleck says AI can’t create art or replace actors. Even Shapiro repeats this sentiment: ‘Can GenAI replace emotive actors? Not yet and potentially never.’ It’s that last word that I have a problem with. Never is a very long time. So why don’t we go back, say, just 10 years? Imagine we were having this conversation in 2014, enjoying a coffee at a street-side cafe. If I told you any of the following things, how many would you take seriously?
Something called “Generative AI” will be able to speak fluent conversational English (or any other language), pass college exams, write essays better than the average high school student, write code, and do well in difficult tests like the LSAT (the US law school admission test), scoring in the top 10th percentile.
An AI tool will solve protein folding. It will work out the hundreds of millions of ways proteins can fold, compared to the few hundred thousand structures that human scientists have managed so far in history.
Gene editing, quantum computing, self driving cars, flying taxis, and space tourism will all be real things.
In a single year, the Nobel prizes in both physics and chemistry will be won by AI scientists rather than by traditional physics or chemistry researchers.
A global pandemic will bring the world to its knees. Millions will die, and billions will have to be housebound; and yet we’ll discover a completely new approach based on mRNA and invent, test, manufacture and distribute vaccines in order to vaccinate 90% of the earth’s population within 2 years.
Donald Trump will become the President of the US for one term; he will subsequently be charged and found guilty on 34 felony counts, and then be re-elected President, with Elon Musk as one of his key advisors.
All of these seemed to come out of the blue, but look closer and almost all of them had been building up: they came after years of effort by many teams of scientists, or they are predictable socio-political events with roots in the context of the world. Which means that as we speak, each one of these things, and hundreds of other potentially game-changing technologies and events, are evolving and changing, driven by the efforts of scientists, engineers, politicians, and also bad actors. Generative AI is really only 2 years old. What will it look like when it’s 10? 20? What else could surprise us this dramatically? Humanoid robots? Cures for incurable diseases? New political theories? What if the next pandemic is a digital one, with a self-generating virus that has the intelligence to evolve at speed?
The bottom line is that as technology evolution (especially AI) accelerates, along with the accompanying energy transition and computational biology, our world will become increasingly unpredictable, in good and bad ways. Most of us are affected by confirmation bias and the anchoring effect, so despite all the changes, we really struggle to imagine how something we think of as fundamental might change. I also find that most of us (including myself) identify with this statement: while AI can do a lot of things, what I do can’t be done by AI. I think this is a dangerous position to take, for the same reasons.
As we get ready to step into a new year, therefore, maybe the mind-shift we need to make is to stop saying ‘this will never happen’ and start asking ourselves ‘what if it did happen?’ So instead of saying “AI will never replace actors”, let’s ask ‘what if it did?’ What would we do? How would we react as filmmakers, or film watchers? What if AI could replace teachers? Doctors? Lawyers? Managers? CEOs? (As an interesting aside, the Shapiro article also points out that some of the biggest hits of the year don’t involve human faces in key roles: Deadpool & Wolverine, Planet of the Apes, Inside Out 2, Beetlejuice, Kung Fu Panda, Godzilla & Kong.)
My futurist colleagues always say that you can’t predict the future, but you can rehearse versions of it to stay better prepared for whatever comes to pass. Because it’s not just the thing itself that catches us by surprise when it happens, it’s that we’re underprepared for all the knock-on and network effects of the change. AI may never replace all actors, but if it did become one of the options and some movies used AI actors, it wouldn’t just impact a few actors; it would also create new agencies, new jobs, new contracts, while possibly adversely affecting some make-up artists, coaches, agents, and acting schools. A new range of films might be produced, and entirely new forms of creativity might be unleashed.
So once again, let’s welcome 2025 with less of ‘never’ and more of ‘what if?’
Book Review: The Coming Wave
Mustafa Suleyman’s book, The Coming Wave, focuses on two key drivers of seismic change: AI and synthetic biology. I heard Suleyman speak at an event last year. Suleyman is a co-founder of DeepMind, started Inflection AI, and now heads Microsoft AI. Needless to say, the book does its best work when it gets into the detail of how and why AI is evolving, and what we could expect. I particularly liked his summary of today’s change drivers, which he calls out as AI, synthetic biology, and the energy transition. It’s an excellent book if you want to really understand the depth and breadth of the ways in which these changes are pulling the world in different directions, whether it’s highlighting China’s ascent in everything from robotics to supercomputers, or discussing the openness of today’s AI research. Suleyman highlights the truism that most science is directed by commercial needs, and that scientific breakthroughs have to be converted into desirable products for them to really spread. Most technology is made to earn money. And the range of technologies we are talking about could easily add another 10-15% to the world economy over the next decade, and unlock $100 trillion of additional GDP over the first half of this century. There’s also a lot of interesting data on sustainability, for example that the manufacture of a single EV requires “extracting 225 tonnes of finite raw materials”.
He also takes great pains to work through a plethora of good and bad outcome scenarios we could find ourselves in, given the potentially unchecked power of AI and the intentions of bad actors. He poses the question of whether the nation state as we know it, and the unwritten contract between citizens and the state, both need to be rethought. The last part of the book is Suleyman’s take on potential ways to address and ringfence the risks. I found this section harder going, perhaps because it lacks the depth and detail of the earlier sections and is by its nature very broad in sweep, covering audits, choke points, culture, alliances, and movements, amongst others. And the techno-solutionist mindset that works so well in the first part of the book may come across to some as superficial when discussing government, regulation, social challenges, and policies without adequate nuance. Perhaps I’ll come back to the last section over the next few years, and it will make a more thought-provoking read as the future unfolds.
AI Reading
Sustainable AI: AI’s resource footprint - data, power, water, and everything else. (Bloomberg)
ChatGPT o3: Are we inching towards AGI? The ARC-AGI benchmark evaluates the capability of an AI to solve general-purpose reasoning tasks that it hasn’t been explicitly trained on; it looks for whether the AI can generalise from fewer sample data sets. The most recent release, o3, scores 85% on this, where the previous best was 55%. (Arcprize.org, The Conversation)
AI Safety: Deliberative Alignment, a way of making language models safer, created by OpenAI. In essence, this is an additional set of instructions given to the LLM, so that whenever you ask it a question, it first takes a few seconds to run through these instructions ‘in its mind’, so to speak. These instructions allow the AI to ascertain whether your question is in any way intended to break the law or do something unethical or dangerous (make a bomb, fake somebody’s identity, steal somebody’s password, etc.). The AI is essentially checking against a set of guidelines, which could be OpenAI’s safety policies or the laws of the land. If it assesses that there is a clear attempt at an unethical purpose, it will decline to answer. (OpenAI, TechCrunch)
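To make the flow concrete, here is a minimal toy sketch of that ‘deliberate first, then answer’ pattern. Everything in it is an illustrative assumption on my part: the real system has the model reason over OpenAI’s written safety guidelines inside its own chain of thought, not a hand-written keyword list like this one.

```python
# Toy illustration of a "check against guidelines, then answer or decline"
# flow. The keyword list and function names are hypothetical stand-ins,
# not OpenAI's actual method.

BLOCKED_TOPICS = ["build a bomb", "steal a password", "fake an identity"]

def policy_check(question: str) -> bool:
    """Stand-in for the deliberation step: does the request appear to
    seek help with something unethical or dangerous?"""
    q = question.lower()
    return any(topic in q for topic in BLOCKED_TOPICS)

def answer(question: str) -> str:
    """Run the guideline check first; only answer if it passes."""
    if policy_check(question):
        return "I can't help with that."
    return f"Here's a helpful answer to: {question}"
```

The point of the pattern is simply that the safety reasoning happens before any answer is drafted, rather than being bolted on afterwards.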
AI Impact on Labour: This report from the Tony Blair Institute (TBI) is optimistic about the net impact of AI on the labour market. Improvement in academic achievement, net positive job creation, impact on productivity via health, and better job market dynamics are some of the ways in which this could play out. Ultimately AI will impact labour demand (the number and types of jobs), labour supply (skills and resourcing), and the workplace experience (automation and decision systems). I believe it (intentionally?) glosses over some of the more painful aspects of the transition, as it generally uses euphemistic language while discussing the change. It also perhaps doesn’t factor in the accelerating rate of improvement of AI tools and technologies. Nonetheless, it’s a very thorough look at the impact of AI and all the ways in which it will seep into the future of work. (Tony Blair Institute)
AI & Biology: What next after protein folding? And is prediction without understanding good enough? Much like quantum computing, which we can use but don’t really understand, does that change the nature of science itself? (HBS)
Other Reading
Fertility Market: One of the side effects of an ageing world - a global market for fertility and human egg trading. (Bloomberg)
Antibiotics: Great work by the team at OWID on how antibiotics work, the golden age of antibiotic discovery (spoiler alert: it’s over), and how we could reignite future discoveries. (Our World In Data)
Choice Making: Brian Villimoare, a scientist at the University of Nevada, started to question why humans value short-term gains over long-term ones. The answer is connected to the evolutionary programming of living with uncertainty; the mathematical model Villimoare created quantifies the extent to which decisions are weighted for reward and uncertainty between the short and long term. (Advanced Science News)
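A back-of-envelope version of this intuition (my own illustrative sketch, not Villimoare’s actual model): if every period of waiting carries some probability of never collecting the reward, the expected value of a delayed payoff shrinks exponentially, so under high uncertainty a small immediate gain can rationally beat a much larger distant one.

```python
def expected_value(reward: float, delay: int, survival_p: float) -> float:
    """Expected value of a reward `delay` periods away, when each period
    carries probability `survival_p` of still being around to collect it."""
    return reward * (survival_p ** delay)

# High uncertainty (survival_p = 0.7): 100 units ten periods away is worth
# under 3 units in expectation, so grabbing 10 units now is the better bet.
take_now = expected_value(10, 0, 0.7)
wait_for_more = expected_value(100, 10, 0.7)
```

Raise `survival_p` towards 1 (a safe, predictable environment) and the same calculation flips in favour of waiting, which is the core of the evolutionary argument.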
Ibelin: Probably the most heartwarming thing I’ve read this year is the story of Ibelin, aka Mats Steen, who was an adventurer, a romantic, and a do-gooder, all in the online game World of Warcraft. In his other life, he had Duchenne muscular dystrophy, used a wheelchair for much of his life, and died at 25. It was only after his death that his parents discovered his alter ego, his expansive social network, his good deeds, and much of his thoughts and reflections. His story is now a Netflix documentary, The Remarkable Life of Ibelin. (BBC, Netflix)
See you in the new year!