Peter Lee, Corporate Vice President at Microsoft Research, apologized for the chatbot's behavior in a blog post, and also had the following to say about this:

For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations.

We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It's through increased interaction where we expected to learn more and for the AI to get better and better. The logical place for us to engage with a massive group of users was Twitter.

Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums.