18 Nov 2016
Is it possible to code the perfect moral... code?
Dispatches from the Canvas8 HQ

Are people fundamentally good? A year of political turmoil and public tragedy might make a compelling case for the contrary. But as we reflect on our progress as a species, optimism hasn’t been ruled out. Despite all kinds of human rights violations and some questionable products of democracy, there’s a bright side – world hunger and child labour are decreasing, and literacy and life expectancies are on the rise. And as time goes on, people seem to be raising the bar for ethical standards, too.

Author
Mira Kopolovic is a senior social scientist at Canvas8. She has a master’s degree that focused on visual culture and artist-brand collaborations, and spends her spare time poring over dystopian literature.

So now seems like the perfect time to address this question – what does ‘doing good’ mean? And do we – as humans – fit the bill? For Mindshare’s 2016 Huddle – the industry-renowned ‘unconference’ – doing good was the theme in question. But while figuring out how to embed ‘good’ in the heartland of a brand narrative, or how to appeal to people’s better sides via a multichannel campaign, might seem like the obvious answer, at Canvas8 we’re more concerned with who people really are.

Because while people say they aspire to be good – 70% of Gen Yers care whether brands do good, for example – these numbers would only point towards a promising future for mankind if they reflected the truth. But they don’t. In reality, research suggests admirable aspirations often turn out to be untrue, or at least selfishly motivated. People consistently misjudge their own convictions – despite professing concern over whether brands do good, in practice, research shows that 85% of people care more about the style, wash, and price of their jeans than about whether they’re a product of child labour. Which is why for our Huddle this year we chose to explore ‘The Science of Selfishness’, hosted by our Head of Insight Sam Shaw and Dr Nichola Raihani from the Department of Experimental Psychology at UCL.

Robots could one day be better people than people | Ibmphoto24 (2016)

So what’s so bad about doing good? Nothing, technically. But there’s usually something in it for the do-gooder in question – like making them look good. People have a tendency to install solar panels on the shady side of the house if it ensures passersby on the street will have no doubts about the homeowner’s moral righteousness. And people who are morally superior get nothing but eye-rolls from the rest of us. In the case of vegetarians, almost half of meat-eaters think poorly of them, describing them as ‘malnourished’ or ‘self-righteous’. For all that people claim to aspire to ethical living, self-sacrifice in others seems to annoy more than it impresses.

Since it’s already on the public agenda, it makes sense that – as technology progresses – morality is increasingly on the minds of those working in AI. What moral code should we upload to machines? Robots with human-level intelligence will require human-level morality as a safety mechanism against bad behaviour. But while AI pops up in fields from law to healthcare, people are calling for a moral code – and hearing nothing but crickets. The fact that we’ve been able to build machines that mimic human intelligence before we managed to agree on a universal standard of what being ‘fundamentally good’ actually means is telling. Morality, bound up with emotion and clouded by self-interest, isn’t a topic that we can debate with a cool head.

But while we can’t seem to figure out where to start with AI morality, the robots themselves might be able to help. As tech giants like Alphabet and Microsoft prepare to address ethics in AI, people are imagining the potential for machines to practise morality perfectly – without human flaws or biases. After all, if a driverless car is technically safer than one with a human behind the wheel, perhaps a robotic system of morals would be similarly superior. A robot police officer that never racially profiles, or a robot judge that’s entirely impartial – artificial intelligence puts these concepts within reach. As it stands, a bot can already outthink a human brain at certain tasks. But once robots learn a moral code, a bot could even be a ‘better person’ than an actual person.

Discover more insights like this by signing up to the Canvas8 Library.
