http://arstechnica.com/information-technology/2016/03/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi/
I suspect this is on a par with ELIZA learning abusive language from users: bad data rather than conscious malice on the part of the software itself. Still, it's an interesting problem.