Bad People, Bad Computers
Earlier this year, NPR did a story answering the question: can computers be racist? (Yes.) Not long after, Microsoft launched an AI chatbot experiment called Tay, which was shut down after the software began spewing hateful speech on Twitter. One of the known fears around AI and machine learning is algorithmic bias, which can cause or indirectly allow machines to learn prejudiced behavior. In this talk, we will explore what it really means to “teach” a computer to have prejudices, and what this can mean for the future of computing.
Terri Burns is a developer and technologist based in San Francisco. She's an editorial contributor at Forbes, where she offers business and leadership advice in the form of data visualizations. In addition to making visualizations, Terri has cohosted Well, Technically, a Forbes podcast about startups and technology. She's the former president and current chair of Tech@NYU, the largest student technology organization in New York City. She regularly writes about technology, diversity, and inclusion, and her work has been featured in Forbes, Scientific American, and Model View Culture.