#202: AI Policy Design - The Critical (And Tricky) Work of Shaping The Future
How to control risks while spurring innovation. Also in this issue: non-disruptive innovation, football-playing robots, and more.
AI policy might just be the most important thing any government does today for the future. Despite the absolutely bewildering explosion of technologies from quantum to fusion, sometimes it feels like the only game in town is AI. And the reason it seems so central and all-pervasive is this: most other technologies are tools for us to use - whether for working with materials, shaping our genetic material, or splitting the atom. But AI challenges us and threatens to do what we do - what we believe we do best - thinking, decision-making, and creativity. And the writing on the wall is that within a generation or so it will do these things at a scale and with an ability far superior to ours. That makes it an existential question for professions from law to medicine, a threat to many white-collar jobs, and ultimately a potential replacement for organisational leadership. At the very least, it promises to completely change how businesses are run - perhaps even how complex ecosystems of businesses work, and by extension economies and even countries. It shouldn't surprise us, therefore, that thinkers, leaders, organisations, and governments are all introspecting deeply on the future of AI.
As a government or a policy maker, you're weighing up a number of AI risks. Here are the top 10 that I can think of:
AI - Risks for policy making
1. Some companies become too powerful and exert undue influence on the market. Google and Microsoft are probably at the forefront, but Palantir is not far behind.
2. AI widens the digital divide and exacerbates economic inequality.
3. Specific groups of citizens (minorities and the underprivileged) are unfairly excluded or adversely impacted by algorithms.
4. Criminals deploy AI for illegal purposes such as cybercrime.
5. AI creates harmful unintended consequences through poor governance.
6. Other countries (notably China) leap ahead with AI capabilities and enjoy a competitive economic advantage.
7. Other countries weaponise AI (again, the fear is China or Russia).
8. International bad actors deploy AI to the detriment of humanity. BTW, it's getting easier.
9. AI becomes more powerful than, and potentially antagonistic towards, humanity - or at least, in Harari's words, hacks the societal code.
10. The rush towards AI harms the environment.
Interestingly, the first five are largely domestic concerns, while the next three relate to national security and competitiveness. The last two are at the scale of humanity and the planet, and should really override all other concerns. Policy makers need to balance the contrasting demands of these different kinds of risk. For the domestic concerns, for example, governments would want to control the pace at which AI evolves, while for international competitiveness they would want to accelerate their own growth and competence in the field. In fact, taking a leading role in AI development and deployment is arguably a fundamental objective for any government today.
Here's the UK Government's response: the AI Regulation white paper (currently out for consultation) is interesting in that it seeks specifically to inform lawmakers rather than make generic observations aimed at the market in general. At its core is the idea that any regulation in this area should continue to promote innovation rather than simply create roadblocks and regulatory constraints.
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
Some of the key aspects of the paper are:
Articulation of AI principles (Secure/ Fair/ Transparent/ Accountable/ Challengeable)
A risk function that cuts across domains, identifying new risks and brokering discussions on who should regulate and govern them
A monitoring and assessment function
Support for innovators by way of a sandbox that can address data challenges - specifically a multi-regulator sandbox
Overall education and awareness
Continuous horizon scanning
Here's how ChatGPT summarised this paper:
The UK Government policy paper 'A pro-innovation approach to AI regulation' sets out the government's approach to regulating artificial intelligence (AI) to foster innovation while ensuring that it is developed and used in a safe and responsible manner.
The policy paper emphasizes the need for regulation that is flexible, adaptive, and risk-based, and that balances the potential benefits of AI with the potential risks. It identifies four key principles for regulating AI: promoting innovation and growth, ensuring that AI is safe and operates within clear ethical boundaries, maintaining public trust and confidence, and ensuring that the regulatory framework is transparent and accountable.
To achieve these goals, the policy paper proposes a number of measures, including the establishment of an AI Council to provide strategic oversight and advice on AI policy, the creation of a new regulatory body to oversee the development and deployment of AI, the development of clear ethical guidelines for AI, and the establishment of a national research program to support the development of safe and responsible AI.
The policy paper also acknowledges the importance of international cooperation in regulating AI, and highlights the need for a coordinated approach to ensure that AI is developed and used in a way that promotes global economic growth and social progress while also protecting fundamental human rights and values.
Overall, the policy paper represents a proactive and forward-thinking approach to AI regulation that seeks to foster innovation and growth while ensuring that AI is developed and used in a way that is safe, ethical, and transparent.
The US, too, has set out its version of a government view of AI: the White House Blueprint for an AI Bill of Rights (not legally binding), which explores the future of AI and regulation.
https://medium.com/berkman-klein-center/building-on-the-blueprint-9471276f2cce
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
And here's how ChatGPT describes the US AI Blueprint:
The U.S. AI Blueprint report is a policy document outlining the government's strategy for advancing artificial intelligence (AI) in the United States. The report emphasizes the importance of AI for national security, economic growth, and social progress, and identifies key areas for investment and action.
The report identifies five key pillars of the U.S. AI strategy: 1) investing in AI research and development, 2) unleashing AI resources for economic prosperity, 3) leveraging AI for national security and defense, 4) ensuring that the development and deployment of AI is done safely and responsibly, and 5) building a diverse and skilled workforce that is equipped to thrive in the AI era.
To achieve these goals, the report sets out a series of specific recommendations, including increasing federal funding for AI research and development, expanding access to AI resources and expertise, promoting the adoption of AI in key industries such as healthcare and transportation, and investing in education and training programs to develop a diverse and highly skilled AI workforce.
The report also highlights the need for collaboration between government, industry, and academia to advance AI in a responsible and ethical way, and calls for the development of clear standards and guidelines for the development and deployment of AI.
Overall, the U.S. AI Blueprint report provides a comprehensive roadmap for advancing AI in the United States and ensuring that the country remains a global leader in this critical field.
The Berkman Klein Center at Harvard University offers its own take on the Blueprint and looks at how to build on this foundation.
The Indian Government's AI discussion is on this website, although there isn't as yet a national position or paper. The Chinese position is articulated here - the language also feels more legislative/ statutory (although it's possible something's lost in translation).
Expect this space to be of great interest over the coming months, as newer technological capabilities create unforeseen scenarios that call for fresh thinking on regulation and safety.
Disruptive vs Non-Disruptive Innovation
Disruptive innovation enjoys a disproportionate share of the spotlight in any discussion about innovation. But there are two categories of innovation that are arguably more important. The first is marginal innovation, which for a large organisation can add up to a big number anyway but, more importantly, can compound if done consistently and well. Think of the incremental improvements in chip design over the years that allow us to contemplate a trillion-transistor chip today. The second is breakthrough innovation that creates new markets rather than destroying existing ones. These can occur at an individual company level or a societal level. From sanitary pads to windshield wipers, and from dishwashers to microfinance, these have all created new markets rather than disrupting existing ones. This HBR piece on non-disruptive innovation presents a strong argument for them.
TLDR: Non-disruptive innovations (NDI) can be based on new or old tech, and can appear at the bottom or the top of the pyramid. Disruption creates value for consumers (Netflix/ Amazon/ Uber) but does so through a win/lose model, often creating "painful adjustment costs" for society. NDI can work at micro, meso, and macro (company/ group/ societal) levels, all of which create better overall outcomes for society. Kickstarter, for example, didn't cannibalise existing investments - it created a new category of investors. This kind of market-creating rather than market-destroying innovation has high social value and usually goes through a three-stage process: (1) identify the nondisruptive opportunity, (2) find a way to unlock it, and (3) realise it at scale via high-value/ low-cost means. Opportunities often lie at the edge of existing models, or in new problems altogether that have not been seen as solvable in the past. Square (payments), Not Impossible (music for deaf people), GoPro, Liquid Paper, Viagra, the windshield wiper, and the dishwasher are all examples of market-creating innovations; water-based photovoltaic cells and eSports are others.
Reading …
Sustainability: A security camera for the planet - tracking methane in the environment via satellite imaging, using ‘MethaneSAT’. (New Yorker)
Complex Decisions: It's no surprise that we under-estimate the challenge of complex decisions, and that our choices about the same things vary with the level of complexity. In this example, Tim Harford highlights people's views in referendums, where we want the government to reduce spending overall, yet in 90% of the areas where the government spends money, such as healthcare or education, we want it to spend more. (FT)
Innovation Management: Clive Thompson makes an interesting point here. When the SpaceX rocket exploded, Musk was quick to recognise the success of the launch as a classic innovation milestone. Yet his handling of Twitter suggests a completely different mode of management. (Medium)
Creativity: The New Yorker has an excellent article about the origins of creativity as we know it today. (New Yorker)
Robotics: How do you teach a robot to play football? Here's a research project on teaching agile soccer skills to robots. (Google/ arXiv)
Product Innovation: Watches. An old but interesting story about how Apple is taking on the world's watchmakers. And just for good measure, here's a fabulous explanation of how mechanical watches work. (Medium / Ciechanowski) Once you get past the engineering, though, the Watch is really Apple's Trojan horse into the healthcare space. It's a good example of moving from engineering to design thinking: rather than competing on the mechanics and electronics of the watch, you start by looking at users' needs and jobs to be done, and how the device could address some of them.
Have a great week! See you soon.