Might we be better able to understand what’s going on inside the “black box” of machine learning algorithms? In episode 53, Been Kim from Google Brain talks with us about her research into creating algorithms that can explain why they make the recommendations they do, via concepts that are relatable to their users. Her articles “Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)” and “Human-centered tools for coping with imperfect algorithms during medical decision-making” were first published on the open-access preprint server arXiv.org, and presented at the International Conference on Machine Learning in 2018.

Behind the Curtain of Algorithms - Been Kim



Subscribe: iTunes | Google Podcasts | Google Play | Spotify | RSS

Websites and other resources

Been’s website and Twitter

Been at Google Brain

TCAVs as described by Google CEO Sundar Pichai (through 39 minutes):



Been presenting on TCAVs:

Bonus Clips

Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.

Support us for as little as $1 per month at Patreon. Cancel anytime.



Clips available to patrons include …

Coming soon!

