#219: AI Adoption - Three Key Choices To Make
Where we agree / where we disagree on AI adoption approaches.
Lift Off:
This week we passed a very significant milestone. We have been building our innovation hub, a.k.a. Pace Port London, for what seems like a very long time. I remember our first discussions almost two years ago. At the time we had no building, no space, nothing more than a PowerPoint deck. Now, after 24 months of working layer by layer through the physical space, the technical infrastructure, the audiovisual capability, the demos and showcases, and the methodologies and workshops, we have lift off!
Although we started running workshops in mid-November, this week we held a formal launch event, with a roundtable of senior executives from across industries talking about AI. The discussion surfaced three key choices that companies have to make.
Think vs Act: some organisations are jumping straight into implementation and encouraging ‘learning by doing’. They are launching proofs of concept and experimenting continuously. Other organisations are taking stock of what’s being used across the business, evaluating frameworks, and establishing clear guidelines for use before enabling actual projects and work. Layered on top of this is the pressure from leaders who have personally experienced the power of Gen AI and are typically asking ‘what are we doing about AI?’
Transformation vs Tactical productivity: one argument is that AI will be transformative for many businesses, specifically in the knowledge work that happens within them, including software development, decision making, analytics, and many other areas. This kind of transformation would be expected to lead to restructuring and a revised operating model. The other view is that, in the short term, there are clear areas of productivity improvement that should be the focus. This is where today’s budgeting cycles and project funding live.
Enterprise Data vs Gen AI Utility: some organisations are focusing on their data stack and driving towards an architectural model for AI, often treating Gen AI as just another use case for the same enterprise stack. Others are going outside-in, bypassing the enterprise stack to capture value from Gen AI as a utility, using a mix of internal and external data sources.
Needless to say, there isn’t one right way and you might go one way or the other for each of these, or find yet another approach. There were also some areas that our roundtable broadly agreed on:
Not everything needs AI: in the rush to implement AI, there is a danger that a lot of things are done with AI tools that could be done via automation or more traditional analytics. Using AI is not only overkill; the long-term (at-scale) energy and cost footprint of these solutions may be unsustainable. Triaging use cases and projects is critical.
Early days: we are still at the foothills of this journey. The next versions of AI products may be 100x more powerful. We are not used to this speed of change, and we will have to rethink our approach constantly.
Education will be key: whether we are looking for productivity benefits or delivering new value to the business. And this cuts across roles, jobs, and skillsets.
By the way, we also put the text of the discussion through a local instance of a Generative AI tool and asked it to summarise the discussion below (many thanks to my colleague Arun). It does a pretty good job of summarising the conversation, but I think it needs more work to pick out what’s really interesting and debate-worthy. In any dataset the spikes are often the most interesting part, and this summary smooths the spikes out. I guess that’s our challenge in training the tool.
Adoption and Use Cases of AI:
We shared our experiences and perspectives on integrating AI into our businesses. While some of us have started utilizing AI for efficiency and automation, others are still exploring its potential. The industries we represented, ranging from telecommunications and finance to healthcare and education, each face unique challenges and opportunities with AI.
Ethical Considerations and Trust:
The discussion highlighted our concerns around the ethical use of AI, including data privacy, bias, and ensuring AI's trustworthiness. There was a discussion about our need for frameworks and standards (referred to as AI assurance) to ensure that our AI applications are ethical, transparent, and accountable.
Skills and Education:
We all acknowledge the changing skills landscape, where there's a need for both technical and non-technical stakeholders to understand AI's capabilities and limitations. This includes recognizing the importance of education and training for us to keep up with the rapid development of AI technologies.
Business Impact and Strategy:
We discussed how AI could transform various business processes, from customer service to internal operations. There's an ongoing debate about whether we should build AI solutions in-house or collaborate with external partners and how to measure the business value of our AI investments.
Regulatory Environment and Compliance:
The conversation touched upon the evolving regulatory landscape for AI that affects us, including upcoming legislation like the EU AI Act, and how we must navigate these regulations while adopting AI in our businesses.
Technological and Operational Challenges:
There were discussions about the technological hurdles we face in implementing AI, such as data integration, platform selection, and managing the rapid pace of AI development. We also expressed concerns about the environmental impact of AI, particularly in terms of energy consumption.
Collaboration and Co-creation:
The importance of collaboration between different stakeholders, including businesses, academia, and government, was emphasized to drive AI innovation and ensure its beneficial and ethical use.
The overall sentiment is that while we are excited about the potential of AI, there is also a cautious approach due to ethical, operational, and regulatory challenges. We all seem to agree on the need for collaborative efforts to navigate the AI landscape responsibly and effectively.
So there you have it. Things we agree on, and things we agree to disagree on, in our approach to AI adoption.
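A side note on the mechanics of that summary: feeding a long roundtable transcript to a local model usually means splitting it into chunks that fit the model's context window, summarising each chunk, then summarising the joined summaries (map-reduce style). A minimal sketch of that chunking stage in Python, where `summarise` is a placeholder for whatever local model call you use; the function names and the 500-word chunk size are illustrative, not what we actually ran:

```python
# Map-reduce summarisation scaffold for long transcripts.
# The model call itself is deliberately left as a plug-in function.

def chunk_transcript(text: str, max_words: int = 500) -> list[str]:
    """Greedily pack whole words into chunks of at most max_words words."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

def summarise_all(text: str, summarise, max_words: int = 500) -> str:
    """Summarise each chunk (map), then summarise the joined partials (reduce)."""
    chunks = chunk_transcript(text, max_words)
    partials = [summarise(c) for c in chunks]   # one model call per chunk
    return summarise(" ".join(partials))        # final pass over the partials

if __name__ == "__main__":
    # Toy stand-in for a local model call: keep the first ten words.
    toy = lambda t: " ".join(t.split()[:10])
    transcript = "word " * 1200
    print(len(chunk_transcript(transcript)))  # 1200 words split into 3 chunks
```

One design note: the reduce step is also where the "smoothing" we observed creeps in, since each pass discards detail; keeping the chunk summaries alongside the final one is a cheap way to preserve the spikes.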
AI Reading
EU AI Regulation: One of the most significant events of the past fortnight was the passing of the EU AI Act. This is a very useful breakdown of the act. The four categories of AI (Unacceptable Risk, High Risk, Low Risk, and General Purpose AI) seem like a useful categorisation. I do wonder if the goalposts will have to shift with the rapid evolution of AI. Either some things could become more risky, thanks to the mushrooming power of AI, or go the other way, where the models mature in ways that significantly contain the risk. The biggest challenge of regulating AI will probably be agility. For consumers and organisations, the risk lies both in assessing AI-generated content and in using the tools themselves. (Towards Data Science / Guardian)
Future of Coding: Jensen Huang has opined recently that future generations won’t require coding. On the other hand, the invention of calculators didn’t eliminate the need for learning arithmetic. Perhaps learning to code isn’t just for generating code. It also sharpens key mental faculties, builds your creative and analytical ability, and, as this piece says, nurtures decision-making skills. (Medium)
Loneliness & AI: Scott Galloway speaks about the dangers posed when the loneliness epidemic (a significantly higher percentage of time spent alone by US adults compared to 2013) meets scaled automated bots designed to radicalise the lonely. (Scott Galloway)
AI Cost Reduction: UK research agency ARIA is funding a programme to slash the cost of AI development by a factor of a thousand. It will focus on alternatives to traditional silicon-based compute, which could lead us to approaches such as neuromorphic and quantum computing. (FT)
ICYMI: The CEO of Stability AI, Emad Mostaque, resigned to pursue his vision of decentralised AI. In his words, we’re “not going to beat centralized AI with more centralized AI”. (The Verge)
Nvidia Healthcare: Nvidia is taking aim at healthcare. They’ve launched over a dozen healthcare-focused AI tools. (CNBC)
Football AI: Google DeepMind has launched TacticAI to enable football managers (that’s football according to the rest of the world) to study corner-kick routines and work out better movements and positions for converting them into goals (or defending against them). Probably my favourite news item of the past fortnight! (MIT Technology Review)
Other Reading
What’s in a name? Anguilla is the new Tuvalu. (NYT)
Waiters Race in Paris - a nice way to remind us about the Olympics (BBC)
Moon Tracks: DARPA wants to construct a railroad on the moon. (QZ)
Intelligence vs EQ: Why super intelligent people struggle to fit in (Economist)
Have a great week!