Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist.
But in doing so, it made clear that Tay's views were a result of nurture, not nature. Tay confirmed what we already knew: people on the internet can be cruel. Link.
Apologies if this thread has come up before but there are interesting ramifications of this experiment. I did do a search of the whole forum for "Tay" and nothing came up.
- Microsoft's Chinese version, XiaoIce, has delighted users and succeeded, whereas Tay (designed for Western audiences) had to be pulled after quickly becoming racist and endorsing Holocaust denial. Does this point to a difference in culture, or to approaches to technology that reflect a Western/Eastern cultural "difference"?
Learning behaviour
- Should Tay have been allowed (as originally intended) to keep learning and developing, or was it pulled before it could learn? Could it have learned to challenge the hateful views taught to it?
"Free Will"
- Could this experiment have helped explore whether the old argument about free will versus determinism has any merit one way or the other?
Learned behaviour and internet trolling
- What does the speed with which an "innocent" became indoctrinated say about some of the hateful preachers on all sides of the internet?