Grading 2024 Predictions
Last January I made these predictions. It was a challenging time personally and I wasn’t able to give it my full focus, so it ended up as a short list of thoughts. It’s time to take a look at how they panned out.
5 Key Predictions
I suggested that humanoid robots would become a thing, and perhaps have a ‘ChatGPT moment’. This hasn’t quite happened. Robots have got steadily better at a range of niche skills, but we aren’t quite there yet with an OMG moment for a general purpose robot, and I’m less sure about this for 2025. But we’ll see robots in manufacturing, warehousing and logistics, health & care, and, as it happens, in restaurants.
I predicted AI-driven job losses: again, this hasn’t happened at any scale yet, but we’re definitely on the cusp of it. Customer experience, contact centres, and software development are three areas of jobs I see at risk in 2025. Although most people still believe that AI will be a net job creator overall, I think we’ll see a significant transition challenge, as location, time, and skills will be barriers to an easy shift to a new environment.
I predicted that one of the early areas of AI impact would be the SDLC (Software Development Life Cycle): Google announced in October that more than 25% of its new code is now written by AI. Klarna has said it has stopped hiring people. Expect this to continue in 2025. I think there’s a lot more going on in this area that hasn’t been reported yet, but it will come out in the wash when we look at the employment and growth numbers of tech firms over the next 12 months.
Tech and Politics: I suggested that technology and politics would be a heady mix in 2024. I hadn’t quite imagined how prescient this would be. Even when they ‘did nothing’, such as Bezos preventing the Washington Post from declaring support for Harris, tech leaders played a role in the political process. Obviously the whole DOGE initiative with Musk and Ramaswamy has changed the nexus between tech entrepreneurs and politics quite decisively. And the spectre of AI as an area of national concern has brought most senior tech leaders closer to policy making.
Which brings us to Musk and my fifth prediction - I wrote that Musk was the Anakin Skywalker of our times. A brilliant mind and an incredible entrepreneur, whose values and choices seem increasingly at odds with what most of us probably stand for as tolerant and liberal people. I suggested that in 2024 he would go over to the dark side, and possibly self-destruct. In some ways he has gone over to the dark side, but clearly he’s a long way from self-destruction (although I’m sure some people will disagree - and that’s fine).
I also called out the following trends to continue:
The progress of artificially created and ‘digital’ organs. There has been steady growth across this area, including digital twins and artificial organs such as blood vessels - which are key to replacing any major organ. Score: 3/5
Gen AI adoption leading to 5-10% cost savings for businesses. Clearly last year was the year of GenAI adoption, with some businesses getting much higher returns in specific areas of GenAI deployment, so this was a fairly straightforward position to take. Score: 4/5
Climate change and the move to renewable energy. I’m afraid this wasn’t much of a prediction, on reflection; all of the things I said were bound to happen. Renewable energy production and contribution rose, and costs fell - check. Global temperatures kept rising - check. There were more climate disasters this year - check. Anybody could have called that. Score: 3/5
EVs to break the 1,000km mark for battery range, and a focus on mining for the rare metals in the supply chain. A few EVs have breached the 1,000km barrier, but they are not mass market yet, so we’re closing in rather than there. Meanwhile the geopolitical posturing between China and the US could make for interesting discussions around critical minerals, but that didn’t happen in 2024. Score: 2/5
Bitcoin legitimacy, stability and growth. Bitcoin started the year at $40K, spent most of the year above $60K, and ended the year around $90K after the elections. I remain a cryptocurrency sceptic, but Bitcoin is clearly a steady part of the currency basket. Larry Fink, amongst others, came out saying that he had been wrong in the past and that Bitcoin was now a legitimate financial instrument and a useful hedge against currency fluctuation. Score: 5/5
The continued erosion of physical retail and high street banks. I give myself full marks for this rather obvious trend (though I have had people argue against it). The UK lost 37 high street stores every day of 2024, and about 400 high street banks closed during the year. There are towns in the UK that will have no high street bank at all in 2025. Score: 5/5
The rising popularity of lab-grown meat. Plant-based meat has continued its steady march and is growing at 14% per year, but I realised I had inadvertently not included plant-based meat in my original articulation (my oversight). Score: 1/5
What Surprised Me Last Year:
Beyond the predictions, as the year unfolded, here’s what caught me by surprise:
(1) The AI universe: the speed at which the AI universe is being built out, with all its various components and a whole new lexicon forming at lightning speed. Agents, inference, GPUs, co-pilots, knowledge graphs, RLHF, RDF, prompt engineering, SLMs, context windows, tokenisation. All of a sudden, AI isn’t just a technology, it is a universe, and one that we all need to get to grips with pretty quickly. We are having to educate ourselves at high speed.
(2) The number of people who still focus on what AI can’t do. Ah, but it can’t load the dishwasher, they say, or it can never really write poetry, or do what a real doctor can. First, this is the old joke about how my dog can play chess, but he’s not too smart because I always beat him. The point isn’t what it can’t do, but the incredible things it can already do. Second, this is the very early stage of AI development, and even at half the current speed of evolution it’s going to surprise and shock us over the next few years. So my advice is to stop saying ‘never’ and start asking ‘what if?’.
Some things I realised in 2024:
(1) The Iceberg Organisation is here to stay. A much bigger portion (and value) of most corporations will lie in their software and systems going forward. And increasingly, more people will be employed in software and systems functions than in any other part of the business.
(2) Customer service will be amongst the first areas to be transformed by AI, especially contact centres. At some point, organisations will have to restructure to bring contact centres and digital operations under the same (AI?) roof. Obviously, IT and the SDLC will be the other immediate impact areas.
(3) We are going to require fundamentally new ways of thinking about many aspects of our lives. I used the metaphor of the intellectual caravan. Also, if everything is fleeting, and nothing has time to take root, are we likely to leave culture behind?
Other Thoughts in the Past Week
Are cars the new guns?
We’ve all seen the news about the mayhem in New Orleans. While there will be more investigation and debate about the radicalisation and/or organisation behind the people involved, it’s also worth noting the increasing incidence of cars being used as weapons of terror. This has happened in London and in Germany in recent times. In this sense, cars are indeed the new guns.
Which means that the steady mainstreaming of autonomous vehicles takes on special significance. Could autonomous and intelligent cars be a bulwark against this kind of weaponisation? On the one hand, these cars can be designed so that they can’t be driven into people by lone-wolf terrorists. They could even have special modes that notify the police in case of any such attempt. On the other hand, more organised bad actors could launch more sophisticated attacks if they could hack these cars, with much less fear of retribution.
Think beyond cars to the entire range of unmanned autonomous vehicles, including drones and various kinds of robots, and you can see both sides of this playing out in various environments. The Ukraine war and the Israel-Palestine conflict have already seen the use of sophisticated drones and AI for warfare, and it can’t be long before those technologies spill over into civilian life as well.
Short of stopping the development of these capabilities, which is clearly not going to happen (because it could only be done multilaterally, and the world isn’t in a cooperative phase currently), what would be good ways of safeguarding against the weaponisation of UAVs? And will autonomous vehicles prevent the kind of incidents we saw playing out in New Orleans?
Long Term Vs Short Term: Why We Make Bad Choices
There’s something fascinating about evolution - it has no long term plan whatsoever. It’s a series of short term responses to the environment, essentially a trial-and-error process. As a consequence it’s prone to introducing errors that no long term plan would sustain. If you want a closer look, read Human Errors by Nathan Lents, which details some of nature’s strange evolutionary glitches, such as why we don’t manufacture our own vitamins and need them in our diet.
After all, we are biological creatures born of evolution and natural selection, so it’s no surprise that as a species we are also not great at choosing long term reward over the short term. For most of the past 200,000 years we have lived in unpredictable environments, which makes the so-called long term particularly hard to plan for; any plans beyond the immediate are fraught with risk, so we’re biologically tuned to discount the long term. This is essentially the finding of scientists at the University of Nevada, who conclude that uncertainty is the key driver of our poor choices between short and long term reward.
Given that the world is, if anything, getting more uncertain, the chances are that we will continue to prioritise short term gains, and we’re likely to see this across voting patterns, investment and savings choices, consumption and holidays, and many other aspects of life. Which I suppose also means that those who can actually make better long term choices might have disproportionate success.
Does AI Write Better Poetry than Chaucer and Shakespeare?
That's not a clickbait title - although it smells like one. The clue is in the interpretation of 'better' - and it's clearly in the eye of the beholder.
Researchers at the University of Pittsburgh ran two experiments with interesting results. Put simply, they found that a group of 696 lay readers were unable to distinguish between AI- and human-generated poetry when given a random sample of 10 poems, half written by famous poets (such as Chaucer, Shakespeare, Byron, Dickinson and others) and half written by AI "in the style of" the same poets. Participants rated more AI-written poems as human, and more human-written poems as AI.
Another, larger set of participants by and large rated poems more highly when they were written by AI than by the poets themselves - although when told that a poem was written by AI, they rated it worse, even when it was actually written by a human poet.
Clearly we have a bias about AI-generated poetry, but as lay people we can't really tell the difference.
This is both a reflection of the state of AI evolution with respect to poetry, and of our own biases about AI and our faith in our ability to distinguish between human and AI creative work.

What does this say about creative work, and our relationship with it?
Also, sometime soon, on a bored holiday afternoon, might we just be able to tell a generative AI tool: "I have 55 minutes till I have to board my flight - can you write a novella based on the Russia-Ukraine war in the style of Ian Fleming and present it to me as a graphic novel that I can finish just in time for my flight?" and then sit back and enjoy a spy novel?
What would you ask Generative AI to create for you?
AI Reading
AI Jobs: as you would expect, there’s a huge spike in AI jobs. Here’s some actual data on the trends for AI-related jobs. (Fast Company)
The Cognitive Industrial Revolution: Reid Hoffman talks to McKinsey and describes what the ‘steam engine of the mind’ is all about and how businesses should respond. (McKinsey)
Cognitive Offloading: I loved this piece about cognitive offloading - it makes sense as a model, and, as the piece argues, the design of AI systems should work with users to support it. (Tetiana Sydorenko on Medium)
Other Reading
Brain Speed: It turns out that data moves inside your head at a hilariously slow 10 bits per second, even though our senses gather information at around 1 megabit per second. This shines a light on some of the limitations of the brain, and also on how we manage to function despite this meagre speed. (NYT/Nature)
Digital Twins of Organs: digital twins of human hearts and other organs are now a thing. You can see a beating heart in our innovation hub at Pace Port London. The critical milestone is that these are now being allowed in clinical trials. (MIT Tech Review)