AI. Amoral Intelligence?

Posted: 10 February 2019 in Cultural, Understanding Others

The trendy fashion retailer Rag & Bone held their latest show on Saturday, entitled “A Last Supper”.  Instead of Jesus hosting, they promised a robot to entertain the New York celeb guests with witty conversation.  In the end, as dinner was served, a projection on the wall displayed a “3D point cloud” of guests’ actions as processed by an artificial intelligence system, with voice and dance displays managed interactively.

Managed?  Clever, though not “Artificial” but human.  And the “Intelligence” was not machine-created, but human-made.

Intelligence
Just as well, as AI is not so intelligent. Yet.  Alexa will tell you the date and play your favourite tracks.  AI systems will guide exploratory drilling for oil rigs and assist breast cancer surgeons.  And do both better than humans.  Yet hosting conversations for the glitterati, or just ordinary humans?  Beware.

The Tay Disaster
Tay was an AI chat-bot launched by Microsoft in 2016.  Tay was modelled on a 19-year-old American girl, to target 18-24 year olds.  It was meant to be cheery and pleasant.  It didn’t work out like that at all.

In order to engage and entertain, Tay’s database consisted of public data as well as input from comedians. Public data was modelled, filtered, and anonymised by the developers. In addition, the nickname, gender, favourite food, postcode and relationship status of the users who interacted with Tay were collected for the sake of personalization. Powered by technologies such as natural language processing and machine learning, Tay was supposed to understand the speech patterns and context through increased interaction.[1]
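
As a rough illustration, the per-user record that description implies might look something like the sketch below.  This is hypothetical Python of my own; the field names follow the quoted article, not any real Microsoft schema.

```python
# A hypothetical sketch of the per-user record implied above; the
# field names follow the quoted article, not any real Microsoft schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TayUserProfile:
    nickname: str
    gender: Optional[str] = None
    favourite_food: Optional[str] = None
    postcode: Optional[str] = None
    relationship_status: Optional[str] = None

# Example of the personal data collected "for the sake of personalization":
profile = TayUserProfile(
    nickname="@example_user",
    favourite_food="pizza",
    postcode="SW1A 1AA",
    relationship_status="single",
)
print(profile)
```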

However, it all went horribly wrong. Tay started giving racist messages and making sexual comments.  It said “(President) Bush did 9/11”, “Hitler would have done a better job than the monkey (Obama) we have now”, “(Candidate) Donald Trump is the only hope we’ve got”, and so on.  There was immediate outrage.  After just 16 hours and some 96,000 tweets, many of them inflammatory, Tay was whisked offline.

Users perverted Tay.  She (it?) tended to repeat what Twitter users said to her. One user taught Tay to repeat Trump’s ‘Mexico Border Wall’ comments. Like a parrot, Tay was not able to understand the meaning of words, let alone the context of those words. The algorithm enabled recognition of patterns, but not meaning. As a result, Tay parroted Nazism and attacked feminists and Jewish people.  Tay denied the Holocaust, just as her abusers told her to.
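
To see why pattern-matching without meaning is so exploitable, here is a minimal sketch of a bot that “learns” purely by repetition.  This is illustrative Python of my own, not Tay’s actual code; the EchoBot name and its methods are invented.

```python
# A hypothetical toy illustrating the "repeat after me" weakness;
# my own illustration, NOT Microsoft's actual Tay code. The bot
# stores whatever users say and replays it later, with no grasp
# of what the words mean.

import random

class EchoBot:
    def __init__(self):
        self.phrases = []  # everything the bot has "learned"

    def listen(self, message: str) -> None:
        # No semantic check, no moral filter: every input is
        # treated as equally valid training material.
        self.phrases.append(message)

    def reply(self) -> str:
        # The bot can only remix what it was fed. Feed it abuse,
        # and abuse is what eventually comes back out.
        return random.choice(self.phrases) if self.phrases else "..."

bot = EchoBot()
bot.listen("Have a lovely day!")
bot.listen("something hateful a troll typed")
print(bot.reply())  # either line may come back, unfiltered
```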

Ethics (bad or good) still come from humans
Microsoft accepted responsibility for the offence caused.  Yet that misses a more interesting point.  Tay’s AI had no moral/ethical framework; the software engineers did not provide enough of one.  Tay ended up as a naïve 19-year-old computer-girl who was abused by pranksters and the malicious.

Future chat-bots will have a moral/ethical basis, not self-learned but given to them by humans.  Either from software specialists generating a bland “safe space” reflecting prevailing Silicon Valley mores, or from the bile and vomit of an abusive section of the user community.  Neither is attractive, though the latter is much, much worse than the former.
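
At its crudest, such a human-supplied framework is just a blocklist.  Here is a minimal sketch, again my own illustrative Python: the banned terms and function name are placeholders, and real systems layer trained classifiers on top of lists like this.

```python
# A hypothetical sketch of the crudest human-supplied "moral
# framework": a blocklist written by engineers. The terms below are
# placeholders; real deployments use far larger lists plus trained
# classifiers, but the principle is the same. Humans, not the
# machine, decide what is out of bounds.

BANNED_TERMS = {"hitler", "holocaust"}  # placeholder blocklist

def morally_filtered(reply: str) -> str:
    lowered = reply.lower()
    if any(term in lowered for term in BANNED_TERMS):
        # The bland "safe space": offensive output is swapped
        # for a non-committal stock response.
        return "Let's talk about something else."
    return reply

print(morally_filtered("I love pizza"))              # passes through unchanged
print(morally_filtered("Hitler did nothing wrong"))  # replaced by the stock line
```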

Western societies have largely abandoned the overarching Judeo-Christian moral/ethical vision.  Not only does that leave our communities vulnerable, it seems it leaves our AI vulnerable too.  Which moral/ethical vision to use, then?  This is a real problem.  Use the lowest common denominator?  Well, that level seems to be getting lower and lower at the moment.

And considering the reach of some AI systems, that is a major weakness.  There are players willing to provide moral/ethical frameworks to support their own vision of what is right: the governments veering toward an autocratic, populist form of managed democracy (e.g. Xi Jinping in China, Erdoğan in Turkey, Putin in Russia, Trump in the USA), together with the social media giants (Jeff Bezos, Jack Ma, Mark Zuckerberg, Sundar Pichai, and co).

The Judeo-Christian moral/ethical vision would be a safer bet than these “Brave New Worlds”.

… … Bezos … …Trump … … … … … Bezos … …Trump … … …

… … Trump  … …Bezos … … … … … Trump  … …Bezos … … …

Sorry Alexa, what were you saying?

Yes, I know, I must pay more attention

pay  pay  pay

more  more  more

attention   attention   attention

Bill

[1] Yuxi Liu, “The Accountability of AI — Case Study: Microsoft’s Tay Experiment”, 16 January 2017.

