
Artificial intelligence: all too human?

It's not the robots we have to worry about. It's ourselves

Last week, at a research institute in upstate New York, a robot passed a self-awareness test for the first time. Based on an age-old riddle about a king and his wise men, the experiment was designed to see whether the robots could take note of their own answers in relation to each other's. Of the trio that took part, only one answered correctly. But since all three were programmed identically, technically they all passed.

The result was a breakthrough for artificial intelligence, and New Scientist said the achievement “scaled the foothills of consciousness”. It signals progress towards programming robots to solve complex philosophical problems, in order to better understand how we think.
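To make the logic of such a test concrete, here is a minimal toy sketch in Python. It assumes the widely reported setup, in which two of the three robots are silenced and each is asked to work out whether it is the one that can still speak; the class, names and dialogue below are illustrative assumptions, not the researchers' actual code.

```python
# Toy sketch of a "wise men"-style self-awareness test.
# Illustrative only: the setup and names are assumptions,
# not the research institute's real implementation.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted  # a muted robot cannot voice its answer

    def try_to_answer(self):
        # All three robots run the same program: each tries to say
        # "I don't know" when asked which of them can still speak.
        if self.muted:
            return None  # the attempt produces no sound
        return "I don't know"

    def reflect(self, heard_own_voice):
        # The self-awareness step: the robot relates the voice it just
        # heard back to its own attempted answer and revises its belief.
        if heard_own_voice:
            return f"{self.name}: Sorry, I know now - I am the one who can speak."
        return f"{self.name}: (silence)"


robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]

for robot in robots:
    spoken = robot.try_to_answer()
    # A robot "hears its own voice" only if its answer was actually voiced.
    print(robot.reflect(heard_own_voice=spoken is not None))
```

The point of the sketch is the final step: a robot only "passes" if it can connect the sound of its own voice back to the answer it just tried to give. And since all three run identical programs, each would have passed in the speaking robot's place.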

As news of the awareness test was breaking, a group of academics who specialise in sentient machines gathered at Conway Hall in London to discuss the ethical implications of engineering artificial intelligence.

Chaired by writer, broadcaster and geneticist Dr Adam Rutherford, the London Thinks: Waiting For GLaDOS panel included leading roboticist Professor Alan Winfield; philosopher and technology ethicist Dr Blay Whitby; and Dr Kathleen Richardson, Senior Research Fellow in the Ethics of Robotics at De Montfort University.

Despite each having their own definition of artificial intelligence, they all agreed on one thing – it’s not what we think it is.

The singularity problem

Media coverage of artificial intelligence tends towards horror stories of robots engineered to surpass human capabilities and enslave us all. In fact, the moment when AI develops the potential to overtake human intelligence, often referred to as the singularity, may never happen. But that doesn't stop us from treating it like a real danger, even prioritising it over actual threats.

But it's still the main way we think about robots. Take Channel 4's new series Humans. The drama is set in the not-too-distant future, where a new type of android has been manufactured to undertake demanding physical work, such as caring or cleaning.

Luxury automated communism this is not. Instead, the series explores how we might interact with machines that look exactly like us, often mistaking their care for signs of affection. In the series, some of the characters develop significant emotional attachments to the droids, even choosing them over their own partners.

Of course, it transpires that some of the robots do have a level of consciousness, programmed into them by a rogue roboticist. They can experience emotion, but don't know how to relate to others, because they've never had to.

It might not be scientifically accurate, but Humans is a great exercise in ethics. For example, in one episode a father has sex with the robot who looks after the family. The family finds out and ostracises him, not just because he was "unfaithful" to the mother, but because he exploited the nanny.

If we're capable of mistreating something that looks and sounds like a person, isn't that just as bad as doing it to an actual person? Even if the object you're abusing cannot experience suffering, it's the act itself that is wrong. If the robot has no free will, it cannot consent, and non-consensual sex is wrong even if it causes no distress.

This is how we should treat the singularity: as a hypothetical problem, not an impending threat. But we don't. We're so preoccupied with the idea that technology might one day take over that we end up neglecting real dangers.

Professor Alan Winfield, a specialist in robotics, told the Conway Hall audience that he was once asked to brief a G8 committee, set up to address existential threats to humanity, on the possibility of the singularity. Winfield advised them that it couldn't happen, that the idea was worth entertaining only as a thought experiment, and that perhaps they should focus on more pressing issues, such as climate change.

But there are real dangers surrounding artificial intelligence; they're just more nuanced than we had anticipated. Even if we never manage to recreate intelligence, the mere impression of it could have a devastating impact on humanity.

How do we know if a robot has developed consciousness? If it looks, sounds and behaves like an emotional creature, then it might as well be one, because even when we know something isn't human, we tend to anthropomorphise it anyway.

Dr Blay Whitby recalled an experiment at Tufts University in Massachusetts that demonstrates how humans will often put emotions before reason.

"A group of undergraduate psychology students were given the task of commanding six orders to a commercially available robot. As a simple machine, the robot was only programmed to respond to those six tasks. The first command was to knock down a tower of blocks next to the robot. What the participants didn’t know was that when asked to knock down the tower the robot was designed to object. First it would plead ‘please don’t make me knock down that tower of bricks, it took me an hour to build it’ in a human voice, before adopting a crying pose in an attempt to appear distressed. 98 per cent of the students stopped following the test as told and began to negotiate and reason with the machine. One recipient even pretended to cry themselves.”

Even when we know that a robot is a robot, we're still likely to start interacting with it as if it were human. So why do we do this?

One of the general faults in our perception of artificial intelligence is the assumption that, if we ever manage to create consciousness in robots, it would be the same as our own.

“You know how people say cats rub themselves on humans because they think we're cats?” explained Dr Blay Whitby. “They don't. It's because they only have cat behaviour in their repertoire. As humans, we only have human behaviour in our repertoire.”

This vulnerability can easily be exploited: robots don't need to be superintelligent to deceive us. Part of the problem is who gets to manipulate them, an area that is not currently regulated. Then there's the issue of how we implement these new technologies.

Never trust a robot video

Driverless cars are the new face of AI. As stories emerge of fully autonomous cars learning to drive around purpose-built towns in Silicon Valley, the public's response has been sceptical. What if they malfunction? Can we really trust them? Yet if we compare a human driver to a driverless car, the advantages speak for themselves.

Humans are not very good on the road. We get drunk, distracted, tired and bored. Some 1,713 people die on the roads each year in the UK alone, yet we're still hesitant to make them significantly safer.

“If there's a technology that reduces those deaths, we're ethically bound to introduce it,” Dr Whitby remarked.

But we're not even completely sure how sophisticated the technology is. At the moment we know driverless cars can't make complex ethical decisions, meaning they only work in very precise circumstances.

“The problem with the current generation of driverless cars is that they are way too cautious. The interesting ethical debate is whether we should make them a little more aggressive.”

At the moment, we can only speculate on the level of progress made in AI, because most of the developments are shrouded in secrecy, for either commercial or military reasons. So, what information can we rely on?

“One thing I never believe is any robot footage,” explained Dr Kathleen Richardson.

“I’d advise anyone who goes on Google and sees robots doing amazing things to take it with a pinch of salt.”

“Or to ask the researchers how many times they filmed it falling over before they got that little bit of footage,” Whitby pitched in.

But there is one aspect of AI that we are familiar with, and that's because we use it every day.

Social networking relies on digital algorithms that are becoming increasingly in tune with our behavioural patterns, and in turn we are becoming increasingly reliant on them. In a study by the venture capital firm Kleiner Perkins Caufield & Byers, researchers found that the average person checks their phone around 150 times per day.

If the singularity represents a type of integration between humans and machines, then we may have reached that point already. Integrated services like Google have access to your emails, smartphone, GPS and search results, creating an unprecedented bank of data on human behaviour.

This large-scale capture of information poses a new problem for AI. Gone are the days when we feared robots would become more capable than humans: in reality, machines simply knowing more about us could be just as dangerous.

You can watch the whole London Thinks: Waiting For GLaDOS discussion here.

 

Caroline is the section editor of Art & Design at Little Atoms. She has written for The Guardian, Vice and Dazed & Confused.
