13:45 - 14:30
When Microsoft released its AI chatbot Tay on Twitter in March 2016, Tay was supposed to learn to chat like "an average American teenage girl". She quickly became sexist, racist, and antisemitic, however. Microsoft turned out to have been overly naïve about the intentions of the users whose interactions would "train" Tay.
Designing and developing technology always involves risks. Beyond a design's "hard" impacts on safety and health, its "soft" impacts, such as undesirable social consequences, also deserve our attention. Today, logging off has become an illusion in our always-on world: enormous data sets are generated every day, and algorithms not only shape how we see the world, they also predict our future behavior and even influence our thoughts. Yet neither algorithms nor data sets are ever neutral; on the contrary, the biases of users and developers slip into them. And still we rely on them daily. How can we anticipate and reduce these undesirable consequences and pitfalls?