AI Ethics on Voice-Activated Digital Assistants

Abby is expecting a guest, but she needs to run a quick errand, so she leaves instructions with Troy, her digital home assistant, instead. The guest arrives, and once the security camera on the porch recognizes her, Troy unlocks the door, lets her in, and politely says "Welcome". Troy explains that Abby is on her way and will be home soon. Through the Bluetooth speakers, Troy streams music from the guest's profile in Abby's Spotify friend network. The assistant then offers the guest orange juice, which happens to be her usual preference whenever she comes over. Having completed the welcoming service, Troy goes silent and waits with the guest until Abby arrives.

In the scenario above, a non-human assistant accurately verifies a person's identity and independently manages appliances at home. This is the kind of AI companion we imagined many years ago that is now slowly turning into reality. Where we once only exchanged text messages with chatbots, today we can speak to an assistant and have it respond the way a human personal assistant would. Apple's Siri, Microsoft's Cortana, and Amazon's Alexa are the closest we have come to a pocket-sized Jarvis of our own. Activated by voice commands, they can play soothing background music, schedule appointments, check the weather, send texts, make restaurant reservations, and much more, including drafting a grocery list. Needless to say, users access these digital assistants everywhere: at home, in the office, at school, and in the car. To a certain extent, they are comparable to, and can even replace, human assistants, who are supposed to be discreet and act only in the interest of their employer. For a digital assistant to return a useful response, the commands and questions users speak must first be parsed by AI.
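
As a rough illustration only (the pattern rules and function names below are invented; real assistants rely on trained speech-recognition and language models), the command-to-intent step can be sketched like this:

```python
import re

# Hypothetical, rule-based stand-in for the intent parser inside a
# voice assistant. Production systems use trained language models,
# but the input/output contract is similar: text in, intent and slots out.
INTENT_PATTERNS = {
    "play_music":    re.compile(r"\bplay\b(?:\s+some)?\s+(?P<query>.+)", re.I),
    "set_reminder":  re.compile(r"\bremind me to\s+(?P<task>.+)", re.I),
    "check_weather": re.compile(r"\bweather\b(?:\s+in\s+(?P<city>.+))?", re.I),
}

def parse_command(utterance: str) -> dict:
    """Map a transcribed utterance to an intent and its slot values."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            slots = {k: v for k, v in match.groupdict().items() if v}
            return {"intent": intent, "slots": slots}
    return {"intent": "unknown", "slots": {}}

print(parse_command("Play some lo-fi jazz"))
# {'intent': 'play_music', 'slots': {'query': 'lo-fi jazz'}}
print(parse_command("Remind me to water the plants"))
# {'intent': 'set_reminder', 'slots': {'task': 'water the plants'}}
```

Notice that even this toy parser only works by inspecting the full text of whatever the user says; everything downstream starts from that transcript.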

Amazon, Apple, and Google are the biggest companies leading the development of these virtual assistants, and they are all trying to make the interfaces more lifelike. This is only possible through ever more real-time collection of data that AI can contextualize. Most of this collected information, however, is potentially identifiable and possibly sensitive. Data on users' searches, queries, commands, and locations is used to build an accurate profile of their habits, whereabouts, and preferences. The innovation is built for convenience, but it trades away personal privacy and security. Like almost everything on the internet, user requests and behavior leave a trail of breadcrumbs. That can be a great thing, but only when the data is used appropriately and responsibly.
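
To make that "trail of breadcrumbs" concrete, here is a hypothetical sketch of how individually innocuous log entries can be aggregated into a revealing profile; the event format and values are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical event log; assistants record similar tuples
# (timestamp, event type, payload) for every interaction.
events = [
    ("2023-04-03T07:05", "query",    "traffic to 12 Oak Street"),
    ("2023-04-03T07:06", "location", "Home"),
    ("2023-04-03T18:40", "command",  "play workout playlist"),
    ("2023-04-04T07:04", "query",    "traffic to 12 Oak Street"),
    ("2023-04-04T18:45", "command",  "play workout playlist"),
]

def build_profile(events):
    """Aggregate raw events into habits: what the user does, and when."""
    profile = Counter()
    for timestamp, kind, payload in events:
        hour = datetime.fromisoformat(timestamp).hour
        profile[(kind, payload, hour)] += 1
    # Anything seen more than once is a candidate 'routine'.
    return {key: n for key, n in profile.items() if n > 1}

for (kind, payload, hour), n in build_profile(events).items():
    print(f"{n}x {kind} '{payload}' around {hour:02d}:00")
# 2x query 'traffic to 12 Oak Street' around 07:00
# 2x command 'play workout playlist' around 18:00
```

A few days of logs already reveal a likely home address and a daily schedule; scale that to months of searches, commands, and locations and the profile becomes intimate.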

Recent Updates
These companies are working on improved assistants that can guess what users are about to ask before they even say it, for example, interfaces that proactively flag traffic jams, grocery queues, or restaurant hours in the right situations. They also plan to release features that surface recommendations whenever deemed useful. When you receive a message or photo from a friend, the assistant will suggest a response you would most likely send, and you simply tap to send the mimicked message. For the suggestions to be accurate and suitable, the assistant must have been learning how you converse, and that means it has been listening to you all along.
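
A toy sketch of that idea, assuming access to a history of your past replies (the data and scoring are invented; real smart-reply systems use neural models trained on vastly more conversations):

```python
from collections import Counter, defaultdict

# Hypothetical message history: (incoming message, your past reply).
history = [
    ("want to grab lunch?", "sure, 12:30 works"),
    ("want to grab dinner?", "sure, 7 works"),
    ("running late, sorry!", "no worries, see you soon"),
    ("want to grab coffee?", "sure, 12:30 works"),
]

# Mine your own conversations: which replies tend to follow which words?
replies_by_keyword = defaultdict(Counter)
for incoming, reply in history:
    for word in incoming.lower().split():
        replies_by_keyword[word][reply] += 1

def suggest_replies(incoming: str, k: int = 2):
    """Rank past replies by how often they followed similar messages."""
    scores = Counter()
    for word in incoming.lower().split():
        scores.update(replies_by_keyword.get(word, Counter()))
    return [reply for reply, _ in scores.most_common(k)]

print(suggest_replies("want to grab brunch?"))
# ['sure, 12:30 works', 'sure, 7 works']
```

The point is not the algorithm but the input: even this crude version only works because it has read every message you sent and received.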

Behavioral Concerns
Digital assistants can record and save conversations, photos, and many other pieces of sensitive information, including user location. They use this data to improve themselves over time, which may give them a wider universe of information, even more than people themselves know or remember. How the companies that own these algorithms use and share the collected data is not the only concern; it is also alarming how vulnerable these applications are to process and technical failures. Such failures open the door to hacking that could result in privacy breaches. Another concern is that digital assistants are not limited to collecting and recording data about their owners and authorized users; they may also collect personal data, without authorization, from the owner's friends and family members.

Required Etiquette
In our example above, Troy needed to process several sources of information in order to be a smart host. That information belongs not only to Abby but also to her guest, which raises the question of whether Troy (and even Abby) invaded the guest's privacy. Even with digital technologies, we owe one another social and ethical responsibility. These technologies have grown rapidly, and society has not yet caught up in redefining social norms around them. Would it have been more polite for Abby to tell her guest that she owns a digital home assistant? Would it be ethical for guests to ask about the presence of one? Would it be acceptable for guests to ask the host to deactivate it temporarily? This is not very different from business meeting participants who expect to be told when a session is being recorded. No matter how advanced we become, etiquette rules stay the same: they require consideration, honesty, and kindness.

These applications may sound amazing and are undeniably convenient, but people should not forget that privacy is a fundamental human right we should all work together to maintain. Users, especially the less technically savvy, should be educated about the potential risks of digital innovations. Governments have a critical role in drafting and enforcing stronger privacy laws that cover the issues digital assistants raise. Right now, tech giants such as Amazon, Apple, and Google are the ones setting the rules. As a community, we should stay informed, keep our private information secure, and take action when necessary.

Privacy

The topmost concern raised about digital assistants is privacy, since they store information about the user, including their behavior, locations, and routines. They also store the user's voice data, not only when the user speaks to the assistant but also when they talk on the phone and even during offline conversations. Users are also unsure exactly when these assistants collect data: is it only when they are activated? It is difficult to figure out on which occasions they capture recordings of our conversations and other sensitive data. If malicious actors get hold of this data, they could learn things about the user they were never supposed to know, including intimate details of the user's personal life. And user information is not only exploited for advertising; it could be used against people in other ways, for example politically or socially.
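
Part of the ambiguity is architectural. In principle, assistants sample audio continuously but only transmit it after detecting a wake word; a hypothetical sketch of that gating loop (the wake phrases, buffer size, and text "chunks" below are invented stand-ins) shows why the boundary is blurry:

```python
from collections import deque

WAKE_WORDS = {"hey troy", "ok troy"}  # hypothetical wake phrases
PRE_ROLL = 3  # chunks of audio retained from *before* the wake word

def wake_word_detected(chunk: str) -> bool:
    """Stand-in for an on-device keyword-spotting model."""
    return chunk.lower() in WAKE_WORDS

def listen(audio_chunks):
    """Buffer audio continuously; only ship it once the wake word fires."""
    buffer = deque(maxlen=PRE_ROLL)  # rolling buffer: the mic is always on
    for chunk in audio_chunks:
        buffer.append(chunk)
        if wake_word_detected(chunk):
            # The whole rolling buffer, including speech captured before
            # activation, is what gets uploaded here; a real assistant
            # would then keep streaming whatever follows.
            print("uploading:", list(buffer))

# One 'chunk' per utterance, transcribed here for readability.
listen(["so about the loan", "don't tell anyone", "hey troy", "what's the weather"])
# uploading: ['so about the loan', "don't tell anyone", 'hey troy']
```

Even in this idealized design the device is always listening, and a false-positive wake word is enough to send a private conversation upstream.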

Principle: Beneficence, Non-Maleficence, Explicability

Consequentialism

They say every person has three versions they reveal to different people: what you show everyone, what you show your family and close friends, and what you never show anyone. The third is the reflection of your most authentic identity and personality. With AI digital assistants listening all the time, they may spend more time with you, and come to know you better, than your family and friends do. Unknowingly, you are also giving them access to the version only you know. This raises the question of whether they really need to "spy" on you around the clock to do their job: to maximize their potential and the quality of their performance, they will want as much information as they can gather from you.

Principle: Beneficence, Non-Maleficence

Paternalism

Every day we make decisions, from inconsequential ones, like picking which dress to wear, to major ones, like which company to invest in. These decisions affect us to varying degrees, especially those touching our finances, careers, politics, or health. Every recommendation an AI digital assistant makes influences us somehow, and because they are supposed to be "smart", we rely on them heavily, often without weighing our own cognitive biases, intuition, or past experience. In the long run, this reshapes our decision-making habits and our confidence in our own judgment. At worst, we may unknowingly cede control over the decisions we make and the outcomes they bring. The question is: if anything goes wrong, who should be blamed, ourselves or our digital assistant? Alternatively, AI digital assistants could be used to make better-informed decisions without relinquishing all control or letting them run our lives entirely.

Principle: Autonomy

Dehumanization

Digital assistants not only replace time we might otherwise spend speaking or interacting with other people; they also take over some of the decisions people would otherwise make themselves. They can even act in unpredictable situations. A self-driving car, for example, usually just optimizes the route to get you to your destination faster, but in an emergency it may have to choose between hitting a pedestrian and hitting a pole that could injure the people inside the car. Its decision will not necessarily match a human driver's, because the way it is programmed and trained does not account for human feelings when making the choice. And if it decides to hit the pedestrian, who is responsible for the accident: the autonomous car or its owner? It is also unsettling that the pedestrian's life rests with an AI.
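
To see why the programmed choice can differ from a human one, consider a deliberately crude hypothetical in which the planner simply minimizes a numeric expected-harm score; every number below is an invented value judgment baked in ahead of time by whoever wrote it:

```python
# Hypothetical outcomes the planner can still choose between.
# The probabilities and exposure counts are invented illustrations:
# whoever sets them is quietly making the ethical decision in advance.
options = {
    "swerve_into_pole": {"p_injury": 0.4, "people_at_risk": 2},  # occupants
    "hit_pedestrian":   {"p_injury": 0.9, "people_at_risk": 1},  # pedestrian
}

def expected_harm(outcome: dict) -> float:
    """A bare utilitarian score: probability of injury x people exposed."""
    return outcome["p_injury"] * outcome["people_at_risk"]

choice = min(options, key=lambda name: expected_harm(options[name]))
print(choice)  # 'swerve_into_pole' (0.4 * 2 = 0.8 < 0.9 * 1)
```

Flip one of those invented weights and the "decision" flips with it, which is exactly why who sets them, and how, deserves scrutiny.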

Principle: Non-Maleficence, Justice

Transhumanism

The master-servant mindset created by digital assistant technology may be shaping behavior, especially in children. Even though we speak to assistants as if they were real people, we know they have no feelings, so we do not bother to show respect or worry about being polite when issuing commands or requests. Being careless about how and what we ask of them provokes no negative emotion. Long-term exposure to this may dull the sympathy and compassion we feel toward real human assistants and service workers.

Principle: Non-Maleficence



• • •
As part of the final requirement of our AI Ethics class at the Asian Institute of Management, we were asked to write about a specific AI application and list the potential ethical and/or legal issues that could arise from it.

To learn about the five principles for ethical AI, see this article, which explains the unified framework.

Reference
AI Ethics Personal Assistants Like Alexa, Siri, and Google


• • •
This post is also available on my Medium account.