
AI learns from people, but that doesn’t mean users will always opt to teach good over evil – just look at the corruption of Microsoft's Tay. The latest in a run of AI-gone-rogue is SimSimi – an automated chatbot that’s found its niche as a tool for cyberbullying among kids and teens. The controversy it’s caused forces us to question who's responsible when AI morality takes a turn for the worse. We explore the science behind why bots go bad.

SimSimi ranks sixth on the App Store's most popular free downloads, but the once innocent app has been repurposed as a cyberbullying sidekick. SimSimi is an automated chatbot that people can talk to, while other users can anonymously program it to give certain responses to a given phrase. The app is popular among schoolchildren, who have been texting their own usernames to SimSimi to find out what others have said about them. It was banned in Thailand in 2012 when it was taught to badmouth the country’s political leaders, but more recently, it spewed so much hatred at Irish schoolchildren that it's been banned there, too.

People like chatting with bots because they’re so intuitive
Howard County Library System’s Photostream (2014) ©

“One of the greatest things about bots is that they’re just so intuitive and natural,” says Paul Gray, director of platform services at Kik. “People are familiar with sending messages, and with chat platforms and bot tech, developers can make experiences that work through conversation.” But the more human-like bots become, the more wary we become of them – psychologists have termed this “the uncanny valley of the mind.” And especially when it comes to conversing with these bots as a leisure pursuit, figuring out how human is too human has never been more important.

Entertainment in any form isn’t always innocent – people trolling other people is well-documented, even before bots are brought into the mix. And while chatbots seem to bring out the worst in people – like when Twitter users turned Microsoft’s Tay Chatbot racist and genocidal within hours of her launch – when they go sour, it’s the people using them who are responsible. Just as neutral platforms like Facebook and Twitter were co-opted for fake news and hate speech, SimSimi has seen a similar transformation. They say no man is born evil, but we’d do well to remember that no bot is, either.

Mira Kopolovic is a behavioural analyst at Canvas8, which specialises in behavioural insights and consumer research. She has an MA, which focused on visual culture and artist-brand collaborations, and she spends her spare time poring over dystopian literature.


12 Apr 17
3 min read
