A couple of weeks ago, at the design resilience session with the Helen Hamlyn Centre for Design and the effervescent Madelaine Dowd, I got to meet Viraj Joshi and Dylan Yamada-Rice. Viraj's stories about the future of design and technology were captivating, and Dylan's work with children on storytelling was fascinating. Viraj's short story about a future where technology effectively becomes a form of government and decision-making was particularly thought-provoking for me. And here's an example of why:
Data - Somebody Always Knows What You Did
We usually play a cat-and-mouse game with speed cameras when we drive. We sneak above the regulated speed when we know there are no cameras around, and we slow down appropriately when we see them. Our behaviour suggests that we are often influenced as much by the risk of being caught as by any principled commitment to rule-following and road safety. In this sense we remain transactional about the ethics of driving.
Here's the thing though: when we drive today, most new cars have a little symbol on the dashboard that tells us what the speed limit is. Which means your car always knows when you're over the speed limit. In my Kia, the speed-limit indicator turns red when I exceed it. If you have a reasonably new model of car, yours probably does some version of this too.
Who else knows? It's likely that if your car knows, so does your auto-maker. Given your phone's accelerometer and GPS, it's likely that your phone and telecom provider (and possibly Apple and Google) also know that you're speeding, or can deduce it relatively easily. The question then is, why don't they share this with the road authorities? This is probably where we start to trade off privacy against safety, and in a typical European liberal democracy, we will tend to lean towards privacy, even at the cost of road safety. In a different context, with different sensibilities, priorities, and political systems, you could see that this choice might just go a different way.
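Just to show how deducible it is: two GPS fixes and a clock are all it takes. Here's a minimal sketch in Python - the coordinates and function names are mine, purely for illustration, not any provider's actual pipeline:

```python
# Sketch: deducing speed from two GPS fixes - a hypothetical example,
# not any phone maker's or telecom provider's actual method.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def speed_kmh(fix1, fix2):
    """Each fix is (lat, lon, unix_time_s); returns speed between them in km/h."""
    (lat1, lon1, t1), (lat2, lon2, t2) = fix1, fix2
    return haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1) * 3.6

# Two fixes ten seconds apart on a 50 km/h road:
a = (51.5074, -0.1278, 0)
b = (51.5094, -0.1278, 10)     # ~222 metres further north
print(round(speed_kmh(a, b)))  # ~80 km/h - clearly speeding
```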
The bottom line though is that we don't even need the civic authorities, do we? What if there was a progressive fine that took the money out of your account every time you exceeded the speed limit? A fine that increased exponentially for repeat offenders, for the extent of violation, and for the duration? Would that still be considered a privacy violation? Nobody else really needs to know. Or rather no other human or organisation needs to know.
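Purely as a thought experiment, here's what such a progressive fine might look like as code. A minimal sketch: the rates, the formula, and the exponential multiplier are all invented for illustration, not any real authority's rules:

```python
# A hypothetical progressive speeding fine - every number and rule here
# is invented for illustration.

def speeding_fine(
    speed_kmh: float,        # measured speed
    limit_kmh: float,        # posted speed limit
    duration_s: float,       # how long the violation lasted
    prior_offences: int,     # repeat offences on record
    base_rate: float = 1.0,  # currency units per km/h over the limit
) -> float:
    """A fine that grows with the extent and duration of the violation,
    and exponentially with repeat offences."""
    excess = speed_kmh - limit_kmh
    if excess <= 0:
        return 0.0
    # Linear in how far over the limit you were, and for how long...
    fine = base_rate * excess * (1 + duration_s / 60)
    # ...but exponential in the number of prior offences.
    return fine * (2 ** prior_offences)

# 20 km/h over the limit for two minutes, third offence: the fine doubles twice.
print(speeding_fine(70, 50, duration_s=120, prior_offences=2))  # 240.0
```

The point isn't the formula - it's that once the car knows both the limit and your speed, enforcement needs no human in the loop at all.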
Of course, you can see where this becomes tricky, because your bank knows, and maybe your insurance premium starts to grow because of your speeding fines. It all sounds draconian and Big Brother-ish. But if you think about it, do our civil liberties need to be embodied in our ability to endanger others, or in our willingness to break rules without being caught?
Traditionally, we've had to think about a few key aspects of data usage:
Is the data being shared with or without our knowledge? This question needs to be asked not just once but for each entity that gets our data.
Is it being used for the purpose for which it was created - not just now but in future? You might have shared your details with Amazon so that they can deliver to your home, but should they be combining your profile and your product searches to sell your data to advertisers? If you buy books on obesity, should they be using that data to sell you gym subscriptions? If it's not okay, why not? And if it is, why?
There's also the added concern of your data being used by 'authority' - any form of government. We have an inbuilt fear of and resistance to authoritarian control, so we worry about our data being used in any way to manipulate us.
Is our data being looked after well? Who is getting to see our data? Would we be comfortable if our neighbour knew our shopping history because he works at the eCommerce site we shop at? Which brings us to the level of trust we have in the competence of the people collecting our data.
Lastly, there's the derivative nature of data - your data may be used in aggregate, along with that of millions of others, to create products. Should you be compensated for this? OpenAI is being sued as we speak for this very reason. But if a photographer wins a cash prize for a picture of a group of football fans, do we expect that she will share the prize money with her subjects? Or that automobile companies and pharmaceutical firms should share revenues with the people who tested their products or gave feedback that helped shape them? It only gets more complicated from here.
The question we have to answer today is how we treat access to our data by machines, as opposed to humans.
Is machine access the same as access by humans?
The first question is the hardest to address, and it goes to the heart of how we define privacy. If all your driving data were known to a single smart machine, but no human ever looked at it, and the machine simply handed out fines for all those times you crossed the speed limit, would you think of it as a privacy problem? Do we consider machines to be entities like humans when it comes to our data? And if so, what about our laptops and all the servers our data goes to, and through, without our even thinking about it? Which means what we really have to think about is who controls the machine, and whether that matters. But again, if the humans in that organisation never look at your data, if only machines access and process it, is it still a privacy invasion? Would you be more or less comfortable giving access to your data if you knew that no human would ever see it?
I'm attempting, of course, to separate the value exchange around data from a more fundamental sense of privacy in the original sense - something others don't get to see - irrespective of how they act on that insight. If a stranger walked into your home, or if you discovered somebody had set up a video camera in your house, you would consider it an invasion of privacy in a way that is completely disconnected from what they did with the access they had. That wouldn't matter. And yet, when we set up a video camera for our own monitoring, we ignore that the data is potentially visible to a whole lot of intermediaries with whom we have no data relationship.
Let's invent a fictional person called Marilyn. Marilyn could be male or female, or perhaps androgynous. Marilyn is permanently in your home, observing what you cook, when you wake, what you watch on TV, what you shop for online, how much exercise you do, and what makes your heart beat faster. Marilyn also sits in your car and notes where you drive through the day. Creepy? Uncomfortable? But what if I told you that Marilyn wasn't a human but a broadband router, connected to all your devices? If Marilyn is your broadband router, which 'knows' which websites you visit and what you shopped for; your car, which knows where you drove; and your Alexa and your smart watch, which know when you woke up, what music you listened to, and how much you walked today - then there may be a dozen Marilyns in your home already.
The Evolving Roles of Machines - Welcome to Stage 3
A part of the problem, as we step into the world of AI, is the evolving role of machines, which seems to have gone through three stages:
Machines as routing points
Machines have long been routing points, as we just discussed - your broadband router or your laptop is an obvious example. We mostly behave as though, despite these machines having intimate access to much of our data, there is no desire or intent on their part (or that of the institutions associated with them) to understand or use our data for their own benefit, and potentially at our cost. That is, your telecom provider isn't building a complete profile of you and your family to sell you other goods and services. We do this with humans as well sometimes - we often treat taxi drivers and waiters as if they were inanimate objects, completely uninterested in the conversations we have in their presence. So this behaviour is ingrained.
Machines as surrogates
Machines have in recent times become surrogates for the people who make decisions in the organisations that hold our data. We worry about our data on the servers of Google or Amazon because we understand that it can be accessed by people who will in turn make choices about the services we consume and the prices we pay for them - for example, our driving data (or the data about our speeding habits) being accessed by insurance company executives, who will control or influence our premiums. This is largely true of the current world. It's the reason we worry about what Alexa can hear, or how much our smartphone maker knows about our location.
Machines as decision makers
The world we are stepping into is one where the machines are the end point - there is no human - the machine makes any decision that's required. That could include fixing your insurance premium or serving you your next ad. Many new challenges will present themselves in this stage:
(1) Many people will feel that these decisions are worse than those made by humans, even if the data and the maths suggest otherwise.
(2) The machines may use hundreds or even thousands of parameters for decisions, in ways that are not clear or apparent to humans - which is especially a problem if the earlier point is true.
(3) You can negotiate with humans, but you can't with machines. If you're over your weight limit at an airline check-in desk by 200 grams and you have a child travelling with you, the person behind the counter may wave you through, but a machine will not make any such allowances. So a new challenge will be how to design machines for empathy, as the sketch below suggests. This can be accentuated or moderated depending on whether the machines are supervised or unsupervised.
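To make the check-in example concrete, here's a toy sketch contrasting a rigid machine rule with one deliberately given a small, explicit tolerance - one crude way 'designing for empathy' might begin. The weight limits and tolerances are invented, not any airline's actual policy:

```python
# Toy illustration of the check-in example - all limits and rules
# here are made up for illustration.

ALLOWANCE_KG = 23.0

def rigid_check(bag_kg: float) -> bool:
    """A literal machine: 200 grams over is still over."""
    return bag_kg <= ALLOWANCE_KG

def tolerant_check(bag_kg: float, travelling_with_child: bool) -> bool:
    """A crude stand-in for human discretion: a small, explicit
    tolerance, slightly wider for passengers travelling with children."""
    tolerance = 0.5 if travelling_with_child else 0.2
    return bag_kg <= ALLOWANCE_KG + tolerance

print(rigid_check(23.2))           # False - no allowances made
print(tolerant_check(23.2, True))  # True  - waved through
```

Even this trivial version shows the design question: the 'empathy' has to be specified in advance, as a rule, rather than exercised in the moment as a human would.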
How Do We Feel About This?
My personal view is that if my data is viewed by a machine rather than a human, I would have fewer concerns about privacy. Of course, if the machine was also making decisions that impacted me in any way - socially or financially - then I would want to be sure that I could trust the machine. When I posted this as a poll on LinkedIn, though, almost twice as many people felt they would be more concerned about machines accessing their data than about humans doing so. On the plus side, almost 70% felt they would feel the same way or be less concerned. Read it as you will.
Either way, as AI adoption grows, as it promises to, we can expect machines and AI to have a growing impact on how our data is viewed, analysed, and acted upon.
Healthy Ageing Update
On the 29th of June, I spoke at the ‘Smart Housing/Smart Communities Knowledge Exchange Webinar’ organised by Janette Hughes and the DHI (the Digital Health Innovation Centre). We spoke about smart communities, data, and healthy ageing. The wheels seem to be moving in this space, so watch out for much more.
Future of Work Update
At the ManpowerGroup Conference, we got to speak about the future of work. One of the things I thought about was how we might need to re-componentize work. For most of our jobs, we have a homogenous view of work, but in future we might need to break it into its components and re-assemble them differently, specialising in components rather than jobs, and sharing these components with AI tools. In the future, as a marketer, a lawyer, or a doctor, our jobs could have a dozen components, of which we would do only a few. Alongside these, we might have other gigs - as musicians, cybersecurity analysts, or social workers. In the future, we're all gig economy workers.
Other Reading
It's hard to remember the time when multiple social media channels like Bebo and Friends Reunited duelled for supremacy. But with the launch of Threads, the wars have started again. Here's a view of the key differences between Twitter and Threads. The biggest advantage of Threads seems to be that it's part of a family. (WSJ)
Some of the reports of an AI apocalypse may be overblown. But the biosecurity threat seems to be real. (FT)
An excellent drill-down into large language models and the Generative AI landscape and its evolution. As a primer, this will take you from an introduction to Generative AI to players like Neeva and Jasper. (Medium)
Podcast Spotlight
A shout-out to my current favourite podcast - Adam Grant’s ReThinking. The snappy interviews with CEOs and achievers in many fields - conversations that get inside their heads and focus on what makes them tick - make for excellent listening. Guests range from Hamdi Ulukaya (Chobani CEO) to Abby Wambach, the former USWNT soccer player.
Also worth listening to is the Satya Nadella episode of the Freakonomics Podcast.
Re:Reading
I’m re-reading John Kay's book Obliquity, which suggests that the most direct way to achieve an outcome may not be the best way. Sometimes you have to structure your thinking and your actions differently, and the outcome becomes a by-product. One of the interesting examples in the book is right at the beginning: when you sail from the Atlantic Ocean to the Pacific Ocean through the Panama Canal, you are actually travelling west to east, because of the orientation of the canal.
Have a sunny week or two, and see you soon.
Thanks for reading!