Conversational interfaces will soon become part of everyday life, present everywhere from the living room to the kitchen to the car. We will have far more voice-enabled devices than we do today. The challenge will be making sure that everyone benefits from the opportunities this emerging technology presents.
People with impaired hearing can use apps that transcribe speech on a screen in real time, allowing them to hold a conversation with anyone without difficulty.
Without looking at a screen, people with impaired vision can send messages, check the time, confirm a meeting, and more, simply by talking to a voice assistant.
Google Home offers one example of how a conversational interface can help someone with impaired mobility navigate daily life. It allows him to exercise control over his own surroundings while his caregiver rests or attends to other things.
There are many possibilities in this emerging field if we design for a wide range of use cases and for people with different dreams, experiences, and identities. The more we push the limits of access and usability, the more we broaden the scope of who is empowered by this technology.
Farmers without access to crop or weather information can interact with an app that tells them the optimal date to sow seeds based on the region's local weather and soil conditions, and that reminds them to allocate resources accurately for crop growth. All of this is information a voice assistant can provide.
For people living with at least one chronic condition, Pillo Health built "Pillo", a voice-activated robot that helps monitor a user's health. It reminds users to take their medication, provides nutritional advice, and alerts family members if help is required.
Are we safe?
As conversational interfaces spread and grow more intelligent, especially when combined with other branches of artificial intelligence (AI) such as facial and speech recognition, increasingly detailed data can be compiled about any individual who uses them, raising trust issues and fears of data misuse.
Researchers and user advocates have developed ways to strip emotion-based information from voice data, restoring it to a neutral state to preserve users' privacy. As we figure out better ways to use conversational interfaces, technology leaders and researchers must be held accountable for protecting users, especially their data.
Designers must keep advocating for users by creating smooth, seamless experiences that do not exploit them. UX designers need to be gatekeepers: protecting users, holding the leading technology companies in this space accountable, and pushing government officials to take UX ethics seriously.
It is important to note that conversational interfaces, though designed to simulate human conversation, are limited in what they can do and how they respond. Still, they are here to stay; they will keep getting better and more capable, and many of today's issues will be resolved over time.
Designers and technology leaders need to continually monitor developments in this rapidly growing space and create guidelines that keep conversational interfaces accessible while protecting users' data and privacy.
Most voice devices work only in English. Creating a voice assistant that speaks Twi or Yoruba would require a large amount of voice data in those languages, which is largely unavailable. Mozilla runs an open-source project called Common Voice that collects such data so designers and developers can build voice applications in many other languages. You can help teach machines how real people speak by donating your voice or by validating submitted recordings. It is accessible to everyone.