Cognitive Ethics Series: The Unexpectedly Human Side of Artificial Intelligence (Part One)

It’s an incredibly exciting time to be alive. Regardless of your industry or profession, you are no doubt seeing transformations and technological advancements like never before. We are on the cusp of innovations that are already dramatically changing the way we live, the way we work, and our relationships with one another. Understandably, consumers and executives are reeling from the vulnerabilities this surge of technology introduces, especially given the pace at which it is becoming part of our everyday existence.

While most agree AI, also known as Cognitive Solutions, can make lives easier and businesses better, there is still a great deal of fear connected with the unknown and relatively unregulated space that AI occupies. The current narrative surrounding this technology is nothing short of apocalyptic, and by the time people have a foundational understanding of its true capabilities, it may be too late to correct bad behaviors.

Enter ethical AI deployment as a key differentiator.

The development and socialization of these technologies is raising profound philosophical questions that most businesses are not prepared to answer—most significantly, “What does it look like to explore and deploy these technologies responsibly?” Organizations must resolve the tension between leveraging the technology and simultaneously caring for and improving the human experience.

According to Forrester Research, “Last year, AI investments topped $40B. Yet less than 50% of firms are seeing results” (2017). What this tells us is that, while the technological capability is there, organizations are not well versed in how to design or implement for adoption. We’ve learned that in AI, organizations are sorely underestimating the level of human effort, connection, and ethical compass required to optimize these technologies quickly and responsibly. Just as general business ethics demands thoughtful policies and practices around potentially controversial issues, AI design and application require the same rigor. Companies that consider ethical topics in the design and deployment of these tools will set the tone and write the rules for the industry. Those who do not will become the cautionary tales.

We call this layer of design discipline Cognitive Ethics, and believe it is the most critical component in an organization’s journey from considering AI use cases to building and deploying them. So, how do you gain clarity around Cognitive Ethics responsibilities? How do you test your data for cognitive bias? How can you design and leverage these tools as an extension of your organization’s brand and purpose so they are meaningful?

To start, we believe it’s important to understand the three categories of Cognitive Ethics for AI:

Humans to Robots: What humans do with and to bots (also known as “roboethics”) – Roboethics was addressed as early as 2004, when the World Robot Declaration was signed at the International Robot Fair and participants began to recognize and define standard practices for the community. Questions of robot rights and owner responsibility live here. For instance, if you own a robot that does something illegal, who should be punished? What does justice look like for a robot?

Robots to Humans: How robots interact with humans – HRI (human-robot interaction), as researchers often call it, aims to define human expectations for robot interaction, allowing for more natural interaction between the two. This includes teaching robots to interact better with humans, whether through language, expressions, motions, behaviors, or their responses in various environments and situations. Inevitably, determining what language is “natural” and teaching robots cultural norms comes riddled with bias and, at worst, discriminatory practices.

Machines to Humanity: The (indirect) impact of these technologies on humanity at large – questions like “Should robots be designed to make life-and-death decisions?” loom over us. Robots can be designed to produce lifesaving healthcare innovations at scale, but they can also be created as complex killing machines for wartime endeavors. Though we aren’t living in an Iron Man 2 robot-drone world just yet, the idea isn’t as far-fetched as we might think. Additionally, what does human connection, which we need for survival, look like in a world enabled by HRI? What should remain inherently human, even when the technological capability is available?

It’s ever more important to refine and redefine appropriate Cognitive Ethics as technology advances at an exponential pace. Evangelizing these new categories of ethical sensibilities within your organization is only the beginning. Organizations that proactively ask themselves hard questions, take a position on these topics, and make brand decisions about how to deploy this technology will be far ahead of their competition.

Want to achieve results faster, stand out amongst competitors and avoid collateral damage as your organization undertakes the new world of AI? Join us in the coming weeks as we continue to explore ethical AI deployment and human impact in a 3-part blog series on Cognitive Ethics.

To learn more about Cognitive Ethics for AI and our perspective, stay tuned for our next blog in this series. Check out Part Two of the Cognitive Ethics Series here and Part Three here.