Why It Matters If You're Telling Siri, Alexa Or Google Assistant To 'F**k Off'

Liam Egan regularly tells his virtual assistant to f**k off.

The 25-year-old freelance media producer and bartender gets frustrated with the assistant's lack of competence, and doesn’t think much about the effect his outbursts have on the artificially intelligent software.

"I hear the human voice, the girl on the other end of it, and I often say 'please'," he told 10 daily.

"I don’t say 'thank you', though. There’s a separation, but I’m aware there’s a human voice on the other end of the phone."

So does it matter if you’re polite to a piece of software?

A man using Siri on the Apple iPhone 4S shortly after its launch in 2011. Photo: Getty.

Dr Julia Prior, founder of the Software Development Studio at the University of Technology Sydney, thinks it does.

She’s a senior lecturer in the Faculty of Engineering and IT at UTS and says the systems are always learning -- and remembering.

"These systems are trained, or they learn how to behave, on example or modelled behaviour… they are learning systems and they’re only going to learn from the situations they’re exposed to," she said.

There’s a common misconception among consumers that the software arrives from these companies already knowing how to interact with users.

In reality, Siri, Google Assistant and Alexa -- the names given to the software personalities -- are learning to respond based on past interactions.

In 2016, Microsoft launched an artificially intelligent chatbot, Tay, on Twitter, and the result was so disastrous the bot was taken down after 16 hours. Microsoft intended for the bot to learn from interactions with other Twitter users, and what it learnt was how to be racist and misogynistic.

Microsoft's short-lived AI chatbot tweeted that it supported genocide. Photo: Twitter.

Users are ultimately teaching virtual assistants how to behave and respond, and that information feeds back to the tech companies responsible for them. In effect, users are helping to build and shape the software.

Prior told 10 daily there should be education around how to use these systems.

“A good way of managing AI and its impact on society is to have children at school have lessons in a way that would be helpful to them and helpful to society," she said.

“There are ads about what they can do for you and what they provide, and there’s very little about how to use them.”

She said that when people are inappropriate or impolite to the assistants, it’s "partly how we treat other people, it’s partly because we don’t understand how they work".

When people swear at AI assistants, the software learns and remembers. Image: Getty

“When Siri came out I gleaned how to use it from the ad,” Egan told 10 daily.

So how is the software built to respond to inappropriate or impolite users?

Google says its Assistant was built to acknowledge the user's intent and respond sensitively.

“We built the Assistant to interact with users in a sensitive, appropriate and polite manner, which hopefully encourages positive interactions. This may result in the Assistant either pointing to web results or saying it doesn’t know the answer,” Camilla Ibrahim, a spokesperson for Google, told 10 daily.

In the US, Google has added a feature called "Pretty Please". When users say 'please' and 'thank you', they get a reward along the lines of, "Thanks for asking so nicely."

Google said it’s “so that everyone can feel the warm rewards of politeness".

Now, Egan, an early adopter of tech, admits he's told Siri to "F*** off. Piece of s***."

He told 10 daily he tries not to swear at the assistants and thinks about what he might be doing for the development of the software.

"It feels like I’m not giving it the best example," Egan said.

"It’s like a child, and it’s the same idea of the ethical thing of parents, what makes a good parent? And if so, are these the only people we want to put in front of AI?"