
“Why should I trust you, Alexa?”


The design goal of many of today’s conversational agents is something akin to usability, and they are evaluated with metrics such as the number of interactions, purchases, and surveys. There’s certainly also machine learning involved, trained on the vast amount of data collected during conversations and with the metrics mentioned above in mind. A problem with this is that machine learning that relies on human feedback risks tricking humans into believing it’s doing a good job when in reality it has found a shortcut [7]. My point is that it’s important to sit down and evaluate whether the agent is actually improving in terms of usability and trust, and not only appearing to be.

In the cited work, a robot trained with human feedback developed a way to trick the human evaluator into believing that it was grasping the ball when it was in fact only grasping air in front of the camera [7].

How would a conversational agent look if it were designed with a different goal in mind? How would we design a conversational agent that is built to optimize for trust instead of usability? I have a few suggestions.

Voice assistants such as Alexa, Siri, and Google Home are interfaces to their respective ecosystems and services. This should be clear when the user adopts one of these tools, but too often they are marketed as personal assistants that are there to help with whatever you need: an all-in-one assistant powered by Artificial Intelligence and invented by engineers in Silicon Valley.

An example of ambiguous marketing by Google

It’s better to communicate what the voice assistant is actually used for and what it is capable of doing, such as playing music, accessing smart-home functions, and helping you cook. A trustworthy voice assistant would be marketed as a small computer with a voice interface that is excellent at pulling information from the web and helping you keep track of small tasks. When asked what the meaning of life is, it should either make it very clear that it is performing a Google search with the keywords “what is the meaning of life” or answer that it does not have the capability to answer the question.
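To make that behaviour concrete, here is a minimal sketch in Python, with made-up intent names and a placeholder web_search helper (none of this reflects how any real assistant is built), of an agent that either states exactly where an answer comes from or admits that the question is out of scope:

# Capability disclosure sketch: either be explicit about provenance
# (a web search) or admit the request is out of scope.
# KNOWN_INTENTS and web_search are hypothetical placeholders.

KNOWN_INTENTS = {"play_music", "smart_home", "cooking_timer", "web_search"}

def web_search(query: str) -> str:
    # Stand-in for a real search API call.
    return f"Top result for '{query}' ..."

def respond(intent: str, query: str) -> str:
    if intent == "web_search":
        # State where the answer comes from instead of pretending to "know" it.
        return f'Here is what a web search for "{query}" returns: {web_search(query)}'
    if intent in KNOWN_INTENTS:
        return f"Okay, handling your {intent} request."
    # Out of scope: admit the limitation rather than improvise an answer.
    return "I don't have the capability to answer that question."

print(respond("web_search", "what is the meaning of life"))
print(respond("philosophy", "what is the meaning of life"))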

In everyday conversations, we appreciate when people mix it up. I personally struggle to come up with unique phrases to end emails with so that it doesn’t look like a bot has written them. In the same sense, conversational agents try to vary their language in order to appear more human. The trustworthy voice assistant would have to distinguish between communication that is factual and communication that is sociable. When asked how tall the Eiffel Tower is, the agent should always give the same answer, but in other cases, such as which greeting to use, the communication can be more varied, for example:

Google Assistant: Sadly, I can’t answer that question.
Google Assistant: Sorry, I can’t answer that question.
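One way to implement that split, sketched here in Python under my own assumptions rather than as any existing assistant’s behaviour, is to keep factual answers deterministic and only randomize the purely sociable phrasing:

import random

# Factual content is stored and returned verbatim, so a repeated question
# always gets the exact same answer.
FACTS = {
    "how tall is the eiffel tower": "The Eiffel Tower is 324 metres tall.",
}

# Sociable phrasings may vary without changing any factual content.
APOLOGIES = [
    "Sadly, I can't answer that question.",
    "Sorry, I can't answer that question.",
]

def answer(question: str) -> str:
    key = question.lower().rstrip("?").strip()
    if key in FACTS:
        return FACTS[key]              # factual: always identical
    return random.choice(APOLOGIES)    # sociable: free to vary

print(answer("How tall is the Eiffel Tower?"))
print(answer("What is the meaning of life?"))

The important property is that the same factual question always produces the same sentence, while the sociable fallback is free to be phrased differently each time.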

Siri, Alexa, Cortana, and the chatbots you encounter online have been very carefully designed. Every aspect of them can be tweaked and manipulated however the designers want: the voice could be that of Arnold Schwarzenegger, or it could encourage you to speak in another language, but the current voice and personality of Siri are what Apple has decided to use. In the example of Eugene Goostman, the agent pretended to be a young child to more easily pass as human in a Turing test, but in the case of a voice assistant, we know that it is a small computer. A conversational agent appearing human is impressive and is often seen as a sign of quality, but giving it too many human qualities can make it fall into the uncanny valley [8].

There are also many interesting design prototypes on this subject, for example a genderless voice for conversational agents; gender is something that we assign to the device ourselves anyway.

Our imaginary trustworthy voice assistant would have a personality and voice that are aligned with its capabilities and purpose; it should be clear from the way it speaks that this is a robot that can help you select music hands-free. Perhaps it should follow the same design tradition as GPS devices, prioritizing clarity and being heard over the noise of other cars. But I would love to hear a genderless and clearly robotic voice with human-like behavior in action.

A project at the Umeå Institute of Design explored the concept of a voice assistant with a completely different language that the user has to learn in order to talk with the assistant. This forces the user to re-evaluate their relationship to the device and also provides a sense of privacy, since the user knows the device can’t understand the language they speak in its presence [9].

Finally, the conversational agent should be designed with accountability and responsibility in mind. The agent interprets voice input and translates it into actionable commands that the computer can execute. Our trustworthy AI should make this process transparent to the user, making it clear what it heard and which function it executed. During or after a voice command, the user should be able to ask how and why it responded the way it did.

An example:

User: Okay agent, how tall is the Eiffel Tower?
Agent: According to Wikipedia, the Eiffel Tower is a 324-meter-tall iron tower located in Paris…
User: Okay agent, explain yourself.
Agent: What I heard was you asking for the height of the Eiffel Tower. I navigated to the Eiffel Tower’s Wikipedia page and pulled the information regarding the height of the structure. I processed this information, generated a shorter version, and communicated it to you.

This information can be overwhelming, and the person who created this function could probably break it down into many smaller steps. What is important is that the user can ask for an explanation and see that this is a piece of software with its own flaws and strengths, not the result of an intelligent being simply explaining something it remembers. If the answer is wrong, it is the fault of the engineers and designers, not the agent.
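As a rough illustration of this idea, the pipeline could record every step it takes so that an “explain yourself” request simply replays that log. The step names and the explain() helper below are my own invention, not an existing API:

# Sketch of a voice-command pipeline that keeps an audit trail of each
# step so the user can later ask "explain yourself". All names here are
# hypothetical; a real assistant would have many more, smaller steps.

class ExplainableAgent:
    def __init__(self):
        self.trace = []

    def _log(self, step):
        self.trace.append(step)

    def handle(self, utterance):
        # A real assistant would run speech recognition, intent parsing,
        # retrieval, and summarisation; here each stage just logs itself.
        self.trace = []
        self._log(f'Heard: "{utterance}"')
        self._log("Interpreted it as a request for the height of the Eiffel Tower")
        self._log("Pulled the height from the Eiffel Tower's Wikipedia page")
        self._log("Summarised the result into one sentence")
        return "According to Wikipedia, the Eiffel Tower is a 324-metre iron tower in Paris."

    def explain(self):
        return "Here is what I did:\n- " + "\n- ".join(self.trace)

agent = ExplainableAgent()
print(agent.handle("Okay agent, how tall is the Eiffel Tower?"))
print(agent.explain())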

Hedonismbot from Futurama

Designing conversational agents with responsibility, transparency, and trust in mind is growing in popularity, and people are starting to realize that it needs to happen for AI to be developed sustainably and ethically. I hope this article is read by someone working with conversational agents who thinks of these aspects when they design the chatbots and voice assistants of the future.

And I have much more to write; there are many questions I have only touched upon that I want to explain more thoroughly, such as: What is trust in a conversational agent? What data does a voice assistant collect, and should you be worried about it? What are the dangers of powerful natural language processing such as GPT-3 (https://en.wikipedia.org/wiki/GPT-3), and what are the implications for consumers and scientists?

[1] Horned, A. (2020). Conversational agents in a family context: A qualitative study with children and parents investigating their interactions and worries regarding conversational agents. Department of Informatics, Umeå University. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172348
[2] http://www.reading.ac.uk/news-archive/press-releases/pr583836.html
[3] Sciuto, A., Saini, A., Forlizzi, J., & Hong, J. I. (2018). “Hey Alexa, what’s up?”: Studies of in-home conversational agent usage. DIS 2018: Proceedings of the 2018 Designing Interactive Systems Conference, 857–868. https://doi.org/10.1145/3196709.3196772
[4] Ammari, T., Kaye, J., Tsai, J. Y., & Bentley, F. (2019). Music, Search, and IoT: How people (really) use voice assistants. ACM Transactions on Computer-Human Interaction, 26(3), 1–28. https://doi.org/10.1145/3311956
[5] Druga, S., Breazeal, C., Williams, R., & Resnick, M. (2017). “Hey Google, is it ok if I eat you?”: Initial explorations in child-agent interaction. IDC 2017: Proceedings of the 2017 ACM Conference on Interaction Design and Children, 595–600. https://doi.org/10.1145/3078072.3084330
[6] Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5286–5297. https://doi.org/10.1145/2858036.2858288
[7] https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/
[8] Troshani, I., Rao Hill, S., Sherman, C., & Arthur, D. (2020). Do we trust in AI? Role of anthropomorphism and intelligence. Journal of Computer Information Systems, 1–11. https://doi.org/10.1080/08874417.2020.1788473
[9] https://medium.com/designing-fluid-assemblages/cube-caring-through-the-language-of-the-voice-companions-to-mediate-privacy-concerns-da2a70f70d71

Source: https://chatbotslife.com/why-should-i-trust-you-alexa-48cd7be88dde?source=rss—-a49517e4c30b—4



