If it wasn't already clear that Microsoft learned a few hard lessons after its Tay AI went off the deep end with racist and sexist remarks, it is now. The folks in Redmond have posted reflections on the incident that shed a little more light on both what happened and what the company learned. Believe it or not, Microsoft did stress-test its youthful chatbot to make sure you had a "positive experience." However, it also admits that it wasn't prepared for what would happen when it exposed Tay to a wider audience. The company made a "critical oversight": it didn't account for a dedicated group exploiting a vulnerability in Tay's behavior to make her repeat all kinds of vile statements.