From search to health cures and school lessons, AI is leaking into all areas of life. We need a national debate
I use Google Search all day, every day. This week, for example, to find out how long it takes to drive to West Wittering from London (I was planning a trip with my mother to the seaside), or whether West Beach, just over the River Arun from Littlehampton, is a better option.
My mother is not in the first flush of youth, so I also have to consider how accessible it is – we couldn’t drive all that way only to sit in the car park because there are a hundred steep steps down to the sandy bit. As I am searching, looking for useful links I can click on for various accessibility and route options, my phone tells me it is providing an “AI Overview”.
What in the age of sorcery is this? My phone is not offering me links to experts in navigation or accessibility, such as the AA; it is deciding for itself how to package and deliver that information to me in human-like prose. I don’t need to look any further: Google has done the research, and the thinking, for me. It has a “take”, a point of view, a position, created by Gemini, a large language model that analyses billions of sentence structures to produce answers. It relies on the content of the internet for its source material.
And it’s a bit creepy.
To be clear, I like Google and have covered the technology behemoth for much of its 26-year history. It takes its mission to “organise the world’s information” seriously, produces remarkable products, wants to engage with its critics and, in Demis Hassabis, the chief executive of its AI business, DeepMind, has one of the most thoughtful and ethical leaders in the tech world.
Thank goodness the man with that much influence over the future direction of who we are – “AI is in the water” as one tech leader told me – is not called Elon Musk. AI will help us cure diseases, discover climate change solutions and run governments more successfully. But the sudden appearance of “AI Overview” in search brought me up short. Google might think it’s “neat”, but I didn’t ask for it.
Admittedly, I am not expecting million-strong marches on Westminster to defend the way search provides results on the accessibility of West Bay, West Sussex. I doubt even Nigel Farage could create a culture war about it, assuming he doesn’t mind second generation immigrants like me going to English beaches. Which, come to think of it, he might.
And Google claims that Gemini, which it recently embedded into search with little fanfare, will not answer what the tech giant considers “political” questions. Search “what caused the Southport riots” or “is Keir Starmer a socialist” and there is no “AI Overview”, just the traditional links.
But here’s why we shouldn’t just move on from this new, seemingly innocuous, addition. It says something much deeper about the AI revolution. Who decides the direction of travel? What it does and does not do? Is it helping us think? Or telling us what to think? At the moment it is Google, Meta, Microsoft, TikTok. And we have seen where leaving them in charge has often led us.
Let’s just take that “no politics” point. What constitutes “political” is open to debate – a debate over which Google, as the master of search, now presides. Google commands more than 81 per cent of the global desktop search market, a figure that rises to 96 per cent on smartphones. A US federal judge, ruling in a Justice Department case, recently described Google as holding an “illegal monopoly” in search, a position the company disputes.
How it manages answers to questions matters for how discussions develop and what millions of people may think about different subjects. The delivery of political information is the 21st century equivalent of newspapers and television in the 20th century. Who controls the pipes – and what they deliver – matters.
The accessibility of West Bay may not be very contestable, but other issues are. Search “Is working from home a good thing” and I receive an AI Overview answer – six positive points, followed by five negative. Positive first – thereby framing the debate. “What are the consequences of a smoking ban?” produces only positive results from The Smoke Free Action Coalition, The British Medical Journal and the British Heart Foundation – though you would be hard pressed to spot the links to those organisations. The Institute of Economic Affairs report on the threat to pubs is nowhere to be seen. The Budget is described as “an independent analysis of the UK’s economic and fiscal situation”. The Budget policy documents might be that – although opposition parties would quibble – but “The Budget”, which includes Reeves’ financial statement to the House of Commons, certainly cannot.
Google says that “AI Overview” is in its experimental phase. It’s not wrong. Within days of its release in the US, users were sharing bizarre results, including “adding glue to pizza” and “eating one small rock a day”. In February Gemini was criticised for over-correcting the racial bias inherent in internet data when it produced images depicting the Founding Fathers as black.
The AI Overview also pushes search results further down the page, or shrinks them until they are easy to ignore. This changes the delicate economics of search, harming the businesses that produce the content Google relies on to power its highly profitable business. Fewer click-throughs mean lower revenues for those who spent the money producing the content in the first place.
Google is not alone. Last week Apple announced that ChatGPT – a Gemini-like product – will be embedded into its Siri assistant. Microsoft’s Bing search engine already embeds OpenAI’s technology.
Search is just one part of the AI forest. Last week, the Government announced that artificial intelligence products will help teachers plan lessons and mark homework. In a recent interview Hassabis said: “I think we could cure most diseases within the next decade or two if AI drug design works.”
“Controlling AI” is a fraught business. The Council of Europe has recently adopted the first international treaty on AI; it has already been criticised as the fastest way to stifle innovation. The UN suggests global governance, for which there will never be agreement or enforcement. In 2019 Google abandoned its “AI ethics board” after controversy over who should be invited to be a member. Its internal ethics council has just lost its leader.
A smart politician would make AI the public debate of the next decade. The UK has argued, sensibly, for a “principles based” approach. As one technology source told me, trying to regulate AI is like trying to “regulate maths”. Markets and innovation must be allowed to flourish. But big tech needs to take the public with them, not leave them tripping over new AI products which threaten to take over the very way we humans think.