"Microsoft is battling to control the public relations damage done by its “millennial” chatbot, which turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet. The chatbot, named “Tay” (and, as is often the case, gendered female), was designed to have conversations with Twitter users, and learn how to mimic a human by copying their speech patterns. It was supposed to mimic people aged 18–24 but a brush with the dark side of the net, led by emigrants from the notorious 4chan forum, instead taught her to tweet phrases such as “I fucking hate feminists and they should all die and burn in hell” and “HITLER DID NOTHING WRONG”."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label rogue AI chatbot Tay. Show all posts
Showing posts with label rogue AI chatbot Tay. Show all posts
Friday, March 25, 2016
Microsoft scrambles to limit PR damage over abusive AI bot Tay; Guardian, 3/24/16
Alex Hern, Guardian; Microsoft scrambles to limit PR damage over abusive AI bot Tay: