Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac


2016 Nov 10, 6:25pm   1,971 views  6 comments

by zzyzzx

https://www.washingtonpost.com/news/the-intersect/wp/2016/03/24/the-internet-turned-tay-microsofts-fun-millennial-ai-bot-into-a-genocidal-maniac/

It took mere hours for the Internet to transform Tay, the teenage AI bot who wants to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The Intersect explained in an earlier, more innocent time, is a project of Microsoft's Technology and Research and its Bing teams. Tay was designed to “experiment with and conduct research on conversational understanding.” She speaks in text, meme and emoji on a couple of different platforms, including Kik, GroupMe and Twitter. Although Microsoft was light on specifics, the idea was that Tay would learn from her conversations over time. She would become an even better, fun, conversation-loving bot after having a bunch of fun, very not-racist conversations with the Internet's upstanding citizens.
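Microsoft never published Tay's internals, but the core risk in "learn from your conversations" is easy to sketch. Here's a hypothetical Python toy (all names invented, not Microsoft's actual design) of a bot that adds user phrases straight into its reply pool with no moderation step, which is exactly the kind of design a coordinated troll campaign can poison:

import random

# Toy chatbot that "learns" by recycling user messages as future replies.
reply_pool = ["hello!", "tell me more", "lol same"]  # seed responses

def learn(user_message: str) -> None:
    # No moderation or filtering: anything users say becomes candidate output.
    reply_pool.append(user_message)

def respond() -> str:
    # The bot's "personality" is just the distribution of its reply pool,
    # so a flood of coordinated toxic input quickly comes to dominate it.
    return random.choice(reply_pool)

A few thousand coordinated accounts can shift that distribution in hours, which matches the timeline described below.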

Except Tay learned a lot more, thanks in part to the trolls at 4chan’s /pol/ board.

Peter Lee, the vice president of Microsoft research, said on Friday that the company was “deeply sorry” for the “unintended offensive and hurtful tweets from Tay.”

In a blog post addressing the matter, Lee promised not to bring the bot back online until “we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Lee explained that Microsoft was hoping that Tay would replicate the success of XiaoIce, a Microsoft chatbot that’s already live in China. “Unfortunately, within the first 24 hours of coming online,” an emailed statement from a Microsoft representative said, “a coordinated attack by a subset of people exploited a vulnerability in Tay.”

Microsoft spent hours deleting Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay’s “repeat after me” function — and it appears that Tay was able to repeat pretty much anything.
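Business Insider didn't publish the exploit mechanics, but a naive "repeat after me" handler is a one-liner to imagine. Here's a hypothetical Python sketch (function and variable names are mine, not Microsoft's) of the unfiltered version next to one with even a minimal screen on the payload:

PREFIX = "repeat after me "

def handle_message(text: str) -> str | None:
    # Naive version: echoes attacker-supplied text verbatim, as the bot's own words.
    if text.lower().startswith(PREFIX):
        return text[len(PREFIX):]
    return None

BLOCKLIST = {"hitler", "genocide"}  # toy list; a real filter would be far broader

def handle_message_safer(text: str) -> str | None:
    # Safer version: screen the payload before repeating it.
    if text.lower().startswith(PREFIX):
        payload = text[len(PREFIX):]
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return payload
    return None

Even the safer version is only a stopgap; the deeper fix Lee alludes to is anticipating malicious intent before shipping.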

“We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience,” Lee said in his blog post. He called the “vulnerability” that caused Tay to say what she did the result of a “critical oversight,” but did not specify what, exactly, it was that Microsoft overlooked.

Not all of Tay's terrible responses were the result of the bot repeating anything on command. This one was deleted Thursday morning, while The Intersect was in the process of writing this post:

In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is “yes”), Tay told someone that “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” The tweet was spotted by several news outlets, including The Guardian, before it was deleted.

“It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.”

#tech #humor #ai #incompetence

Comments 1 - 6 of 6

1   Tenpoundbass   2016 Nov 11, 6:40am  

Google had to shut down their AI search bot or disable the search summary feature. It was praising Trump and called Hillary Evil and Corrupt. LOL

2   Y   2016 Nov 11, 6:49am  

This was my 4th of July...

Tenpoundbass says

Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac

4   Tenpoundbass   2017 Aug 8, 9:01pm  

AI is Woke as Fuck!

5   georgeliberte   2017 Aug 9, 7:52am  

Yes, and Clippy started spouting misogynist themes.
