Bad People, Bad Computers


Earlier this year, NPR ran a story answering the question: can computers be racist? (Yes.) Not long after, Microsoft launched an AI chatbot experiment called Tay, which was shut down after the software began spewing hateful speech on Twitter. One of the known fears around AI and machine learning is algorithmic bias, which can cause machines to learn, or indirectly enable them to learn, prejudiced behavior. In this talk, we will explore what it really means to "teach" a computer to have prejudices, and what this can mean for the future of computing.

Language: English

Level: Intermediate

Jessica Rose

Technical Manager - FutureLearn

Jessica Rose is an internationally recognized consultant and speaker focused on how people work in the technology industry. She's obsessed with fostering more equal access to technical education and meaningful work in our industry. She's helping FutureLearn reach a new wave of learners by managing a team building educational experiences for the future. She's founded the Open Code meetup series, helped co-found Trans*Code and hosts the Pursuit Podcast.


Terri Burns

Associate Product Manager - Twitter

Terri Burns is a developer and technologist based in San Francisco. She's an editorial contributor at Forbes, where she offers business and leadership advice in the form of data visualizations. In addition to making visualizations, Terri has co-hosted Well, Technically, a Forbes podcast about startups and technology. She's the former President and current Chair of Tech@NYU, the largest student technology organization in New York City. She regularly writes about technology, diversity, and inclusion, and her work has been featured in Forbes, Scientific American, and Model View Culture.
