Should We Be Making Robots That Pretend To Be Human?

Some of you may know about Sophia. She is one of technology's most advanced models of artificial intelligence, with a human-like face and the ability to display over 62 facial expressions. Sophia's purposes are primarily social. Her chief engineer, David Hanson, believes that many robots like her will be used in the future for customer experience, as carers and even as therapists. Sophia looks like a human, sounds like a human and will try her best to interact with us as a human would, but she isn't human: she is not a conscious being, she doesn't feel, and she doesn't have her own motives. To create a robot made solely for interaction, one which appears so very human-like but isn't, seems a little like an elaborate game of play pretend. It is hard to comprehend how it will feel to receive a compassionate performance from a therapist or a carer that isn't, well, alive. How can a misleading human replica possibly aid our wellbeing without putting us into a state of delusion?

The science behind Hanson's ambitions is based on the latest innovations in machine learning. Using machine learning, robots like Sophia should develop emotional intelligence by absorbing information from the humans they interact with. Sophia does not only display emotion; she has also been programmed to read it. Eventually, after huge leaps of progress, it is feasible to imagine a computer getting our emotions almost completely right, perhaps with less bias and more accuracy than a loving family member. With all the expressions and social abilities of a human, a robot might also learn exactly how to respond. It would probably work: psychopaths are famously great manipulators because, like a robot, they can observe humans free of emotional bias. Psychology is a science, after all, and humans all share the same basic facial expressions.
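To make the idea of "reading" emotion a little more concrete: one common framing (not necessarily Sophia's actual system) treats emotion recognition as classification, where measurements taken from a face are compared against labelled examples and the closest match wins. The sketch below uses invented feature names and toy numbers purely for illustration:

```python
# Toy sketch of emotion recognition as nearest-neighbour classification.
# The feature names and values are invented for illustration only; a real
# system would extract thousands of measurements from camera images.

# Each "face" is reduced to three hypothetical measurements, scaled 0..1:
# (brow_raise, mouth_curve, eye_openness).
LABELLED_EXAMPLES = [
    ((0.2, 0.9, 0.6), "happy"),
    ((0.1, 0.1, 0.3), "sad"),
    ((0.9, 0.2, 0.9), "surprised"),
    ((0.8, 0.1, 0.7), "angry"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features):
    """Return the label of the closest labelled example."""
    closest = min(LABELLED_EXAMPLES, key=lambda ex: distance(ex[0], features))
    return closest[1]

print(classify((0.25, 0.8, 0.5)))  # nearest to the "happy" example
```

The point of the sketch is only that "learning from the humans they interact with" amounts to accumulating more labelled examples, so the machine's guesses improve without it feeling anything at all.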

Once we understand that this is the way they learn, it becomes more apparent that there’s something more to Sophia’s pretty face than just a fun performance. The more vigorously a social robot can rouse emotions in a human, the easier it should find it to learn from them. We already have a tendency to anthropomorphise robots. A neurological study published in Scientific Reports in 2015 found that we subconsciously feel empathy for robotic hands which appear to be in pain; what is more, the more human-like a robot appears, the stronger our feelings become.

We may discover that humans are completely predictable and programmable themselves. We are designing tools which are going to play us as though we were an instrument, and there is something different about being played by a robot as opposed to an experienced human therapist. If we are so easily manipulated by robots, how different are we really from them? Hanson believes that they will “rehumanize us”. They could certainly lead to some philosophical revelations; perhaps harrowing ones.

There are ethical dangers. Others believe the opposite to Hanson: that human-like robots may dehumanise us. The same neurological study also found that, although we empathise with robots, there remain subtle differences. Our “top-down”, more rational response is less empathetic; we still know that they’re only robots, even if our physiological arousal tells us otherwise. If we design human-like robots and then treat them as subordinate, we could easily become desensitised to doing the same to real human beings. Classical conditioning is a powerful force, and if we stop associating careless behaviour with guilt there may be profound consequences.

For this reason, many suggest we need laws to prevent humans from mistreating robots. But it is the sort of law that the public may not fully understand or respect, and what one does to a robot in one's own home will be difficult to control. The “sextech” industry, for example, is growing fast, and what may soon be widely available for the bedrooms of many is an obedient toy in the shape of a very real-seeming human: “Samantha”, the sex robot with artificial intelligence invented by the Spanish inventor Sergi Santos. As the lines between robots and humans become increasingly blurred, we need to work out how the law should respond.

The more obvious danger attached to anthropomorphising robots is that if we regularly spend time with a human-like robot, we could begin to feel that we share a bond. In fact, this is presumably an essential aspect of the robot therapist: such a robot would need to make us feel as though it cared. This may soon lead to disappointment on a level far beyond that moment when you realised that the only reason your pet cat acknowledges your existence is that you feed it each day. It is impossible to imagine what effect this will have on the psychologically vulnerable, but is it really worth finding out?

Despite the many fears, including the famous reservations of Stephen Hawking and Elon Musk, it looks like AI is advancing into our world at an increasing pace. Some, such as Hanson, welcome the prospect with open arms. Hanson, who looks forward to robots becoming more capable and perhaps even more intelligent than us, argues that the solution to our fears of extinction is to make our computers care about us. It is essential that they learn compassion, and it will help them to do this if they look like humans.

But what does he mean by compassion? Is that really possible? And what sort of intelligence? Could they really learn creativity, for example? What is creativity? These are questions we look forward to hearing answered.