ML CONFERENCE Blog


Jul 9, 2019

In the field of machine learning, many ethical questions are taking on new meaning: On what basis does artificial intelligence make decisions? How can we avoid the transfer of social prejudices to machine learning models? What responsibility do developers have for the results of their algorithms? In his keynote from the Machine Learning Conference 2019, Eric Reiss examines dark patterns in the ethics of machine learning and looks for a better answer than "My company won’t let me do that."

Eric Reiss started working with user experience (UX) long before the term was even known. Over the past 40 years, he has encountered many issues that have disturbed him – from creating purposely addictive programs, sites, and apps, to the current zeitgeist for various design trends at the expense of basic usability. He has seen research that is faked, ignored, or twisted by internal company politics and by the cognitive bias of the design team. And he has seen countless dark patterns that suppress accessibility and diversity by promoting false beliefs and false security.

Whenever we say, “That’s not my problem,” or, “My company won’t let me do that,” we are handing over our ethical responsibility to someone else – for better or for worse. Do innocent decisions evolve so that they promote racism or gender discrimination through inadvertent cognitive bias or unwitting apathy? Far too often they do.

We, as technologists, hold incredible power to shape the things to come. That is why Eric Reiss shares his thoughts with you – so you can use this power to truly build a better world for those who come after us.

