Barely a day goes by now without a Robots Taking Jobs story. If that wasn’t bad enough, once they’ve taken all our jobs they will eventually take over the world. They might even wipe us out or enslave us, though why they would bother is beyond me.
All good clean entertainment but in most news pieces there is little or no attempt to explain terms like artificial intelligence and machine learning. As Matt Ballantine pointed out a few weeks ago, some of the robot stories are pure hype.
Imagery matters. Imagery shapes the agenda. And there’s a whole load of crap, clichéd stock imagery that time-pressed and underpaid online editors attach to their copy without really thinking.
So just what is artificial intelligence and can machines really learn?
There’s no generally agreed definition but there is a useful explanation of the various terms here, summarised below, which is a good starting point.
Artificial Intelligence – a broad term referring to computers and systems that are capable of essentially coming up with solutions to problems on their own. The solutions aren’t hardcoded into the program; instead, the information needed to get to the solution is coded and AI uses the data and calculations to come up with a solution on its own.
Machine Learning takes the process one step further by offering the data necessary for a machine to learn and adapt when exposed to new data. Machine learning is capable of generalizing information from large data sets, detecting and extrapolating patterns in order to apply that information to new solutions and actions. Obviously, certain parameters must be set up at the beginning of the machine learning process so that the machine is able to find, assess, and act upon new data.
Essentially, what we call artificial intelligence (AI) has come about because we now have vast amounts of digital data and machines with massive computing power. They are therefore able to trawl this data within seconds, enabling them to do things which, for humans to do, requires intelligence.
Here’s a simple example. A few weeks ago, a friend of mine posted a picture of himself in an old church and, having stripped out any identifying tags, asked his friends to guess where he was. It wasn’t difficult. I knew he was on a trip to York. I knew that most people who go to York visit the cathedral first. It was a simple matter of getting a map, looking at the nearby churches and doing an image search until I found the right one. I found it on the third attempt.
Google have developed a programme which can do this. It can identify any location from a photograph, without needing digital GPS information. It would be able to find my friend’s location just by recognising the pixels and matching the photographs. It doesn’t need to know that he’s in York. It doesn’t need to know he’s in England. It doesn’t even need to know that it’s looking for a church. It can trawl millions of photographs at such speed that it has made the human intelligence that I needed to apply to solve the problem redundant. It’s not actually thinking but it can process data at such a rate that it achieves things that would require a lot of thinking for a human to do.
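To make that concrete, here is a toy sketch of the underlying idea, not Google’s actual system: reduce each photo to a feature vector and match the query against a reference database by finding the closest one. All the vectors, place names and the tiny database below are invented for illustration; the real thing would use a neural network to produce the features and a database of millions of geotagged images.

```python
# Toy sketch of location matching by brute-force image comparison.
# Each photo is reduced to a feature vector; the query is matched to
# whichever reference image lies closest in that feature space.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query, reference_db):
    """Return the label of the reference image closest to the query."""
    return min(reference_db, key=lambda label: distance(query, reference_db[label]))

# Hypothetical feature vectors: in reality these would come from a
# trained network, not be typed in by hand.
reference_db = {
    "York Minster":     [0.9, 0.1, 0.3],
    "Durham Cathedral": [0.2, 0.8, 0.5],
    "All Saints, York": [0.7, 0.2, 0.9],
}

query = [0.72, 0.18, 0.85]  # features of the friend's photo
print(best_match(query, reference_db))  # prints "All Saints, York"
```

No step in this requires the machine to know what a church, York or England is; it is just measuring distances between numbers, very fast and at vast scale.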
Furthermore, AI can recognise patterns in data and learn from them. Neural networks enable machines to cluster and classify data so that they can, for example, recognise faces and identify objects. They can also establish correlations and therefore make predictions based on past data.
Machine learning, too, involves the application of huge amounts of data. As Bernard Marr explains:
A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.
Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
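The feedback loop Marr describes can be sketched in a few lines. Below is a minimal, illustrative example, a single artificial neuron (a perceptron) rather than a full neural network: it makes a prediction, is told whether it was right or wrong, and nudges its weights so it does better next time. The task and data (learning logical AND) are invented purely for demonstration.

```python
# A minimal sketch of the feedback loop: predict, compare with the
# right answer, and adjust the weights for the future.

def predict(weights, bias, inputs):
    """Weighted sum pushed through a step function: returns 1 or 0."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)  # the feedback
            # "modify the approach it takes in the future"
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: output 1 only when both inputs are 1 (logical AND).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
for inputs, target in examples:
    print(inputs, predict(weights, bias, inputs))
```

Nobody hardcodes the rule “both inputs must be 1”; the program arrives at weights that embody that rule purely by being corrected, which is all that “learning” means here.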
There was a lot of fuss last year about a system designed to judge beauty contests which turned out to be racist. Or, at least, that is how some people interpreted it. I came across the story when someone on Twitter accused the people who developed Beauty.AI of programming racism into it. They didn’t, of course, but the truth is even more interesting.
Beauty.AI learnt its understanding of beauty from millions of images. The trouble is, most of those images were of white people. The programme therefore assumed that lighter skin was one of the criteria by which it should judge contestants and didn’t pick any dark-skinned people as winners. The machine itself wasn’t being racist. It was simply reflecting the data it had been given.
Microsoft’s AI chatbot, Tay, ran into similar problems when it began tweeting racist comments. Again, it hadn’t been programmed to be racist but it had been instructed to replicate the speech patterns of people with whom it engaged. It only took a few targeted tweets from a band of dedicated fascists, or mischief-makers pretending to be fascists, and before long, poor Tay was denying the holocaust, praising Hitler and going on about building a wall and making Mexico pay for it. Eventually it signed off sounding tired and emotional.
Of course, the programme wasn’t actually racist. It only appeared so because it was imitating the speech of the people with whom it had interacted. Like Beauty.AI, it was just doing as it was told and learning from the data it had been given.
We find it entertaining to endow artificial intelligence with human characteristics but really it is simply machines crunching massive amounts of data at incredible speed. I say simply, but the sheer power of these machines means they will be able to perform tasks which currently require a significant level of human intelligence.
A couple of weeks ago I chaired a panel on the future of work, made up of some very distinguished experts. One of them, Sarah O’Connor from the Financial Times, told us how she pitted herself against an AI programme called Emma in a competition to write a commentary on the latest employment figures. Both pieces were submitted to editor Malcolm Moore who then had to decide which one to run. This short video tells the story.
In the end Sarah won. The machine produced copy much more quickly than she did but it wasn’t as good. It lacked Sarah’s insight and ability to make wider connections.
Emma was indeed quick: she filed in 12 minutes to my 35. Her copy was also better than I expected. Her facts were right and she even included relevant context such as the possibility of Brexit (although she was of the dubious opinion that it would be a “tailwind” for the UK economy). But to my relief, she lacked the most important journalistic skill of all: the ability to distinguish the newsworthy from the dull. While she correctly pointed out the jobless rate was unchanged, she overlooked that the number of jobseekers had risen for the first time in almost a year.
Interestingly, Emma also appeared to blame poor wage growth on immigration. Again, this simply reflects the data the programme was accessing and its aggregation of previously written UK labour market commentary.
As Sarah went on to point out, Emma isn’t going to take her job but it could save her a lot of time. By pulling out the relevant data and creating a starter commentary, it would give Sarah more time to add the creative insights that make an article informative and thought-provoking. Machines might take over the more routine and tedious bits of people’s jobs, leaving them to do something more interesting.
There is evidence that this is starting to happen in a number of professions. Last week the FT reported on law firms using AI to do some of the mundane work that used to be done by junior lawyers, such as trawling through Land Registry documents and pulling out information from title deeds.
Bertalan Mesko reckons AI will make him a better doctor. It won’t literally tell doctors how to treat people but it can collate and give doctors rapid access to vast amounts of medical information on which to base their diagnoses. Last year, doctors in Japan used IBM’s Watson computer to cross-reference a patient’s condition against 20 million oncological records and discovered that the patient had a rare form of leukaemia. Using its ability to mine data and find patterns, a machine can provide a diagnosis that is quite often right.
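In schematic form, that kind of cross-referencing is just overlap-counting at enormous scale. The sketch below is not how Watson actually works, and the marker names, diagnoses and record sets are all invented; it only illustrates the principle of matching a patient’s profile against labelled records and returning the diagnosis whose records overlap most.

```python
# Schematic sketch of cross-referencing a patient against labelled
# records: the diagnosis whose recorded markers overlap most with the
# patient's markers wins. All names and data are invented.
records = {
    "common leukaemia": {"marker_1", "marker_2", "marker_3"},
    "rare leukaemia":   {"marker_2", "marker_4", "marker_5", "marker_6"},
}

def best_diagnosis(patient_markers):
    """Return the diagnosis with the largest marker overlap."""
    return max(records, key=lambda dx: len(records[dx] & patient_markers))

patient = {"marker_2", "marker_4", "marker_6"}
print(best_diagnosis(patient))  # prints "rare leukaemia" (overlap 3 vs 1)
```

Scale the dictionary up from two entries to 20 million records and the value becomes obvious: no human doctor can hold that much precedent in their head, but the comparison itself involves no medical understanding at all.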
Machines, then, can produce intelligent outcomes without actually being intelligent in the same way that humans are. They can do things that look intelligent to us because we need intelligence to do them, simply by processing huge amounts of data very quickly and by being able to recognise patterns within that data.
Will we ever develop artificial general intelligence, which would enable machines to think in the way humans can? Opinion is divided. Some scientists believe the human mind is too complex to replicate. Nigel Shadbolt, professor of AI at Southampton University, says:
Brilliant scientists and entrepreneurs talk about this as if it’s only two decades away. You really have to be taken on a tour of the algorithms inside these systems to realise how much they are not doing.
The machines, he says, might look clever but we are a long way from making them intelligent:
[I]t is easy to imagine we have endowed our AI systems with general intelligence. If you watch the performance of IBM’s Watson as it beats reigning human champions in the popular US TV quiz show you feel you are in the presence of a sharp intelligence. Watson displays superb general knowledge – but it has been exquisitely trained to the rules and tactics of that game and loaded with comprehensive data sources from Shakespeare to the Battle of Medway. But Watson couldn’t play Monopoly. Doubtless it could be trained – but it would be just another specialised skill.
We have no clue how to endow these systems with overarching general intelligence. DeepMind, a British company acquired by Google, has programs that learn to play old arcade games to superhuman levels. All of this shows what can be achieved with massive computer power, torrents of data and AI learning algorithms. But our programs are not about to become self-aware. They are not about to apply a cold calculus to determine that they and the planet would be better off without us.
What of “emergence” – the idea that at a certain point many AI components together display a collective intelligence – or the concept of “hard take-off”, the point at which programs become self-improving and ultimately self-aware? I don’t believe we have anything like a comprehensive idea of how to build general intelligence – let alone self-aware reflective machines.
Others, though, say that it is only a matter of time. A human brain is simply a series of atoms so, sooner or later, we will be able to replicate it. A paper by Oxford University’s Future of Humanity Institute noted:
[P]redictions on the future of AI are often not too accurate and tend to cluster around ‘in 25 years or so’, no matter at what point in time one asks.
As if to prove the point, their survey of 550 AI experts, carried out in 2013, concluded:
[T]he results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50.
As Andrew Ng, chief scientist at Chinese web search giant Baidu and associate professor at Stanford University, said:
Those of us on the frontline shipping code, we’re excited by AI, but we don’t see a realistic path for our software to become sentient.
There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.
But even if machines don’t learn to think in the near future, their sheer power may cause them to do things their creators didn’t anticipate. Machines learn from data but the sheer scale and complexity of that data means that humans can’t possibly know what conclusions the machines will draw. It’s not until they start discriminating on the grounds of skin colour, making racist remarks, creating super weapons in a computer game or concluding from data that asthma plus pneumonia means a lower risk of death that the people who programmed them realise something might be wrong. It is possible to make a computer do something you didn’t mean it to just by making a mistake in code. When you are telling it to learn for itself by trawling a mass of data, there will inevitably be some unintended consequences.
Furthermore, data produced by humans may also reflect long-standing social prejudices so, while we may think a machine is impartial, if it is basing its decisions on what has happened previously it will replicate the bias of the past. If prevailing social attitudes associate lighter skin with beauty then the machine will do so too. Likewise, if we train machines to select job candidates based on examples of people who have been good performers in the past, they will simply replicate the biases of an organisation’s human recruiters and managers. By ascribing objectivity to machines we might further entrench existing prejudices.
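A few lines of code show how this happens without any prejudice being programmed in. The hiring “model” below, with entirely invented data and group names, scores candidates purely by how often people like them were hired in the past; a historical skew becomes a future skew even though nothing in the code mentions bias.

```python
# Toy illustration of bias replication: a "model" that scores candidates
# by the historical hire rate for their group. The code encodes no
# prejudice; the skewed history does the discriminating.
from collections import Counter

past_hires = ["group_a"] * 90 + ["group_b"] * 10  # historically skewed data

def hire_rate(group, history):
    """Fraction of past hires belonging to the given group."""
    counts = Counter(history)
    return counts[group] / len(history)

def score(candidate_group):
    # The "model" is purely a reflection of past decisions.
    return hire_rate(candidate_group, past_hires)

print(score("group_a"))  # 0.9 - favoured, because the past favoured them
print(score("group_b"))  # 0.1 - penalised, for the same reason
```

Point the same logic at beauty-contest photos or CVs and you get Beauty.AI’s skin-colour problem or the recruitment problem described above: the machine looks objective precisely because the bias arrives in the data rather than the code.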
As Nigel Shadbolt says, the potential dangers in AI are not the stuff of apocalyptic science fiction. What we should be worried about, he says, is far more mundane:
[T]here is the danger that arises from a world full of dull, pedestrian dumb-smart programs.
We might also want to question the extent and nature of the great processing and algorithmic power that can be applied to human affairs, from financial trading to surveillance, to managing our critical infrastructure. What are those tasks that we should give over entirely to our machines?
Anyone who has wept with frustration trying to find a contact phone number on a corporation’s website when its hard-coded processes can’t answer a slightly unusual query will see the potential danger. An assumption that clever systems are comprehensive and objective could result in frustration for users, unfair decisions or even serious harm. The threat isn’t from robots running amok but from an alignment of unforeseen circumstances and small mistakes, amplified by the power and reach of connected machines. The usual perfect storm but running at breakneck speed.
No-one can be sure where artificial intelligence will take us and what it will enable us to do. It is likely to have a huge impact on work and employment over the next couple of decades. But Professor Shadbolt’s term ‘dumb-smart’ is a useful reminder that, at the moment, it’s not actually that clever and we are still not clever enough to anticipate what it might do with our instructions. AI therefore still requires human supervision and vigilance. It’s not intelligent enough to be allowed out on its own and perhaps it never will be.