"The Oldie is an incredible magazine - perhaps the best magazine in the world right now" Graydon Carter, founder of Air Mail and former Editor of Vanity Fair

Subscribe to the Oldie and get a free cartoon book

Subscribe

Real Intelligence v Artificial Intelligence. Matthew Webster: Digital Life

Blog | By Matthew Webster | Aug 23, 2023


Hold onto your hats: the digital world is about to take a giant leap, courtesy of the much-discussed advent of artificial intelligence (AI).

We will see as much change in the next two years as we have seen in the last ten. I hope that I can keep up.

It’s about time we had a bit of progress, anyway. I realised the other day that it is over 25 years since I bought my first internet-connected computer, when the World Wide Web was fresh and young. But if I look back, it’s hard to point to any really fundamental change in the underlying infrastructure, beyond its all becoming faster and more reliable.

You might say that websites like Facebook and Twitter, which allow us to interact online, were quite a change, but they’ve been around for the best part of two decades.

Or you might feel that the streaming services – iPlayer, Netflix, Spotify et al – were a significant development, and they were. But they really only reflected the increased availability of faster internet connections. Anyway, they’ve been churning stuff out since 2006 – so they are hardly the new kids on the block. Grandpa Microsoft was formed in 1975, almost 50 years ago.

So we are due something new, and AI might be it. AI is, essentially, software that works on existing computers and can allow them to accumulate and remember almost limitless quantities of data, way beyond human capacity.

It then identifies patterns in the data and uses these patterns to make predictions, decisions, conclusions and recommendations based on questions it’s asked. Or it even (and this is where it gets worrying) takes actions with little or no direct human intervention, as an aeroplane’s autopilot does.

Even at a very basic level, though, it’s impressive. Have a go at Google’s version, called Bard (bard.google.com), and ask it a question. Don’t be shy – ask for something like a summary of the English county-cricket system, a limerick that starts ‘A magazine known as The Oldie’ or a speech to deliver at a friend’s retirement party.

You'll be surprised by the results. The syntax will be excellent and the structure pretty good, if perhaps a little stilted.

However, this masks the danger. Where does the information that is so smoothly presented to you come from, and is it accurate? It offers no citations.

Try asking it about something you really know about, and you may find some bizarre inaccuracies. I asked it about my father, and was surprised to learn that he was married to the late Countess of Longford (which he certainly wasn’t). But it was all written in such a confident and convincing way that had he still been alive, I might have been tempted to make enquiries.

As the saying goes, garbage in, garbage out. However good a computer is at processing data, if you give it rubbish to work with, rubbish is what it will deliver. Bard is (I assume) scraping all this stuff from websites it’s looked through, chewing it over and cobbling together some convincing words – but not carefully enough.

No doubt it will become more sophisticated, but I’d be nervous about leaving it in charge of anything with a motor if it makes mistakes at this most basic level.

Nevertheless, this sort of mass data-sifting and analysis is entirely new, and change is coming, for good or ill – or perhaps for good and ill. It could certainly be useful for operating machines, but it could also be used to spread misinformation and propaganda.

So it’s up to humans to ensure that proper ethics are observed. What could possibly go wrong?