Cognitive Ethics Series: Use Case Under Pressure: The Robot Naming Debate (Part Two)

This blog serves as a follow-up in our series on Cognitive Ethics for AI. To read the first post in the series, “The Unexpectedly Human Side of Artificial Intelligence (Part One),” click here.

There are three categories of Cognitive Ethics for AI as we see it: Humans to Robot, Robot to Humans, and Machines to Humanity. Today, we’re diving deeper into one of these categories: Humans to Robot. This relates to “roboethics” or “machine ethics,” where we consider ethical design and owner/creator responsibility.

One of the seemingly basic discussions within the industry right now is whether to name robots at all, and if so, what to name them. This raises the question: should robots be given names, humanizing them in some way? Naming is a natural human tendency, especially for things we have conversations with. So, if a robot has a name, are we more likely to trust it?

From here emerges a more controversial topic: gender slants in robot naming. Some question gender-normative decisions around bot names, with assistant robots consistently bearing female names (Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana) while many expert and advisory robots are given male names (IBM’s Watson, ROSS the lawyer bot, and Ernest, a UK Facebook bot that provides banking advice). Given that the data and technology industries are largely male, the concern is that gender bias, even in the act of naming, is subconsciously embedded in these designs. Gender bias could be only the beginning, as racial and socio-economic biases are likely to form within AI and robot development as well. Even something as simple as a global company giving its bot a name that some users cannot pronounce due to dialect can cause unintended consequences.

Companies that want to be mindful of broader impacts can take steps to fully understand the implications of robot naming (and other humanizing characteristics like speech patterns, vocabulary, and personality/temperament) through proper research, testing, and branding activities that ensure a strong user connection and avoid unwanted impacts. Just like any product, bots should be accompanied by branding efforts that define the brand promise, ambition, and personality before rollout, even if the bot is only an internal tool. Being more ethically minded involves more than considering the robot’s name; it also means considering the implications of releasing a new brand and personality into the world.

There are those who believe you shouldn’t name robots at all, and that we should keep our human connection to robots more distant. Over and over again, though, research shows it is most beneficial to humanize technology, personalizing experiences to the furthest extent possible and accepting anthropomorphism as a natural human tendency. Adoption of and engagement with these technologies are also accelerated when humans “connect” with the tools. Studies show that 80% of Roomba owners have given their vacuum bot a unique name, and that is just a perfunctory chore bot!

Whether you’re trying to build a Knowledge Management bot or looking to launch an AI product in-market, it’s important to contemplate all positions on robot naming through the lens of Cognitive Ethics.

To do this, we recommend organizations treat any bot rollout like a product launch and:

  1. Test continuously: The best technologists and innovators never stop scrutinizing or contemplating their design choices. Structure your teams (the more diverse the better!) and your project plan to ward against unintentionally ignorant design choices surrounding the name and “personality” of the bot, including obtaining user feedback on both. Tirelessly fail-test against worst-case scenarios.
  2. Fully vet names: It’s imperative to base naming on a macro-brand strategy, and to be sure any chosen name does not mistakenly touch on a global or diversity sensitivity. Take time to deeply research names, and engage stakeholders with diverse backgrounds from all parts of the organization to weigh in. Beyond personality, look at the construct of the name to determine how functional it is. The name Alexa was chosen because the hard consonant X allows it to be recognized with higher precision, as opposed to “Joe,” which might accidentally activate the device every time you say “No.” You also need to consider whether a global user can easily pronounce the name.
  3. Tell the human story: Rather than speaking only to the tech capabilities, tell the story of how the bot serves humans. Does your bot serve a long-desired need within your organization? Will it save your employees time and energy? Did you name the bot with this in mind? Think about how you can market the name choice and engage employees in the process.
  4. Connected communication: Communicate before, during, and after the launch about the intent of the bot, anchoring it to your brand and amplifying the context in which the bot will live, so its purpose and name origin are clear to users. Understanding your brand purpose is critical, but the larger portfolio of products and services, and how this new robot supports your master brand strategy, should also guide how you name it.

Mindful design with a cognitive ethics lens achieves a couple of key things: it increases engagement and speeds adoption when the name and “personality” of your bot are a cultural and organizational match, and it lets companies not only control the messaging and align the bot to the brand, but also avoid potential brand disasters that misrepresent the intention of the technology.

To learn more about Cognitive Ethics for AI and our perspective, stay tuned for the next blog in this series. In case you missed it, check out Part One of the Cognitive Ethics Series here and Part Three here.