Pillar Three: Trust, Transparency, and Empathy
The third and final core pillar of our people-first approach can be defined by three qualities: Trust, transparency, and empathy. Combining these concepts is essential to establishing the foundation for responsible AI that serves, supports, and aligns with the needs of our teams.
Before we share three final strategies that define North Highland’s people-first approach to AI transformation, let’s briefly look at trust, transparency, and empathy individually:
Trust: Enterprise-wide AI adoption largely comes down to workforce confidence in new solutions—whether that’s performance, security, or otherwise. And, aside from upskilling, the best way to instill confidence is by building trust. Our Center of Excellence (CoE) has adopted a few behaviors to achieve this:
- Ensuring and communicating that AI systems have been thoroughly vetted prior to implementation, so users know the tools are safe and secure
- Leveraging Champion testimonies and early value reporting to show that AI tools are already enhancing ways of working and driving productivity
- Establishing clear governance structures to ensure responsible use and safe testing environments to drive positive experiences
“In a fast-paced world, where things are constantly evolving and changing, investing time and effort into something new feels like it has to be worth the time and energy involved – it’s important that organizations have the trust of their workforce, asking them to do the right thing, at the right time, in order to drive meaningful and impactful change.”
- Ella Smith, CoE Behavioral Science Lead
Transparency: Uncertainty breeds fear, and AI's revolutionary impact on work practices is no exception. For many, these new solutions have the potential to fundamentally reshape roles, triggering understandable anxiety. It's crucial that those most affected are fully informed about emerging capabilities, strategic decisions, inherent limitations, and potential risks. We are committed to ensuring our people have agency in our AI transformation, rather than making them passive bystanders. To bring this commitment to life, the CoE constructed a robust internal communications strategy that includes:
- Regular, in-depth status updates on AI market shifts.
- Comprehensive resources for tool optimization.
- Clear expectations around AI integration decisions.
- Open dialogue about potential impacts on various roles.
But the strategy is also about encouraging employees to be active participants in our transformation, ensuring lasting ownership of this change.
Empathy: Empathy in AI means considering the diverse experiences of users, reflecting on how new solutions will impact the day-to-day work of employees, and then factoring that into AI-related decisions. It also means handling the implementation of new tools, and the changes they bring, with care and consideration for fears about obsolescence, security, and more. While this kind of change cannot be prevented, maintaining empathy can help organizations steer clear of AI tools that impede or degrade existing processes, in favor of those that drive business value by enhancing ways of working and day-to-day experiences. Respecting perspectives across the entire organization helps leaders select the best solutions, leading to higher tool compatibility and stronger adoption rates. To put it simply, this principle crystallizes the core message of this docuseries: True AI transformation happens when you prioritize people, not just the technology.
Now, let’s dive into the actions North Highland is taking to promote trust, transparency, and empathy.
Building trust through governance
Fears of poor security (48 percent) and privacy (37 percent) are part of an ongoing, cyclical reaction to AI that both stems from, and deepens, distrust in new solutions. And when left unattended, these concerns can generate widespread cultural resistance to new tools.
But as the adage goes, trust is earned, not granted. The best avenue to trust? A robust governance framework.
Designing and implementing a framework that is customized to North Highland’s needs has been a high priority as part of our people-first approach to AI. You can take a deeper dive into why and how we have pursued AI governance with urgency in our piece, “AI Odyssey: The Urgency of Governance.”
But within the specific context of trust, transparency, and empathy, governance comes down to two motivations:
Assuaging fears: Merely declaring AI safe isn’t enough; people need concrete evidence and coaching to believe you. At North Highland, we are proactively supplying that evidence by:
- Implementing transparent policies that clearly outline our AI usage and safeguards,
- Conducting continuous risk monitoring to identify and address potential issues quickly, and
- Employing robust data and security management practices to protect sensitive information, to name a few.
These governance measures serve as tangible proof of our commitment to safety. They also allow us to address our employees’ fears with empathy, demonstrating that we take the privacy, security, and well-being of our people and clients seriously. By providing this evidence and fostering open dialogue, we're not just telling people AI is safe—we're showing them.
Creating positive experiences: Governance guardrails also create a safe environment for our people by preventing potential misuse of AI and biased or harmful system behavior. This positions teams to continually have positive and reliable experiences with AI, which compound over time to build trust in the technology.
For our teams, our governance framework provides an added layer of assurance: we are not only introducing the most advanced, high-performing tools, but also the safest, most trustworthy ones.
Connecting through communications
How else can people get on board with change, if not by having the right information?
As we said earlier, uncertainty breeds fear: in the absence of information, people are inclined to fill in the gaps with their own narratives, which take root and propagate across the organization. You’ve likely heard them already…
“AI will take my job.”
“New solutions are unreliable security risks.”
“The number of available solutions is overwhelming and they’re too complex to use effectively.”
“AI is prone to making mistakes.”
And these sentiments can trigger widespread resistance to change, an AI adoption hurdle faced by more than one in four leaders (28 percent) in our proprietary research. Our CoE is employing consistent and transparent internal communications to deconstruct and recast negative narratives before they take hold. As a result, we hear narratives more along the lines of…
“AI is helping accomplish mundane tasks faster, so I have more time for the ones I enjoy.”
“The quality of my work has improved vastly with the help of AI.”
“I’m learning new skills by using AI tools, which will help me grow professionally.”
Communication strategy #1: Behavioral science
The CoE set out to create a framework for internal communications that went a step beyond basic information delivery. The team wanted to turn AI anxiety into excitement, confusion into clarity, and passive observation into active participation. Enter: Behavioral science.
In the workplace, behavioral science combines insights from psychology and human behavior into evidence-based strategies for outcomes like higher employee productivity, a high-performance culture, and stronger engagement. To put it simply, it helps leaders align decisions with how people actually think and behave, rather than how one may assume they should. For that reason, behavioral science is immensely useful in anticipating and mitigating resistance, earning trust, and helping employees thrive with AI.
“Behavioral science enables you to better understand and work with individuals to design solutions that feel right—they make sense, they’re accessible and individuals can clearly see the “what’s in it for me and why should I bother?” Not only does this drive increased adoption—and more importantly value delivery—it ensures an overall positive impact and experience for individuals, where it feels collaborative and not like something is being done to you.”
- Ella Smith, CoE Behavioral Science Lead
Knowing the benefits, the CoE partnered with Ella Smith, a stakeholder and our resident behavioral science expert. Let’s look at a few techniques the team is using and why:
Communication strategy #2: Small change
“How do you eat an elephant? One bite at a time.” That common phrase captures the concept of small change as a behavioral science technique. And the long list of similar colloquialisms (“one step, one day, one week at a time”) attests to how well incremental change resonates. In the context of AI-related internal communications, small change shows up in two ways:
First, we’re gradually introducing new tools and processes, allowing teams to adapt at a comfortable pace while consistently moving forward. And because we focus on adopting one tool at a time, teams are more likely to use these solutions consistently and meaningfully.
Second, we’re helping teams build confidence and trust in AI in bite-size chunks. Communications have included tips and “quick wins” for getting started, such as prompt suggestions, ways to integrate AI into common daily tasks, and ideas for how to use it based on the employee’s role within the organization. New users aren’t expected to become overnight AI experts; they’re encouraged to start small. This taps into the psychology of habit stacking, helping employees build momentum through small wins and creating a snowball effect of positive change.
Communication strategy #3: Local-level advocacy
If you’ve read about Pillars One and Two, you’re well acquainted with our AI Champions program. Engaging this local-level network capitalizes on the credibility colleagues carry with those closest to them or in similar roles. It also harnesses the behavioral technique of social proof, where individuals are more likely to adopt new behaviors or ideas when they see their peers doing so. For example, if the CoE showcases a new prompt guide in the weekly newsletter, and AI Champions are seen implementing these prompts with success, others are more likely to follow suit.
Communication strategy #4: Fact-based motivation
Numbers don't lie, but they do tell stories. Statistics are being incorporated into internal communications to showcase the positive impact of AI on the firm so far, motivating users and building trust. By presenting clear, measurable benefits of AI implementation—like time saved, increased productivity, or improved accuracy—communications appeal to our employees' rational decision-making processes. The approach cuts through speculation and uncertainty, providing a solid foundation for understanding the value of AI in daily work. The CoE has been combining organization-wide metrics with team-specific success stories to create a narrative that motivates our workforce to embrace AI as a powerful tool for individual and collective growth.
Thoughtful tool selection
In our blog, “AI Odyssey: Learning Through Play,” we shared that our CoE is in the process of identifying the best solutions to become part of our suite of AI tools. But here's the twist: This tech-centric initiative is, at its core, all about people.
It comes down to how we're doing it—keeping it as transparent as possible by:
Democratizing decision-making: We've made criteria for tool selection and evaluation accessible to all.
Eliminating black boxes: We're openly discussing which solutions the CoE is considering and why, providing regular updates on the status of the list to keep everyone in the loop.
Crowdsourcing knowledge: From the C-suite to employees across functions and levels, we're engaging stakeholders to test, research, and provide real-world feedback on how these tools impact their ability to work and deliver value.
Painting the big picture: For each tool, we're clearly communicating its potential uses and impacts, helping everyone envision how they might leverage AI.
Keeping score: Our value tracker measures the real-world impact of these tools, ensuring we're not just chasing shiny objects, but delivering tangible benefits.
For our CoE, transparency isn't just a buzzword: it enables the flow of diverse perspectives and insights that collectively lead to strong alignment between solutions and business initiatives. It ensures the selection process isn't confined to a single viewpoint, but instead incorporates varied lenses for a more complete evaluation of each tool's potential. By understanding a multitude of experiences and viewpoints, we're ensuring our AI solutions aren't merely technically sound, but also organizationally aligned. This approach shatters the one-size-fits-all myth, allowing us to curate a versatile AI toolkit that caters to every function in our organization.
This approach goes beyond breaking down silos and supercharging collaboration; it builds confidence among our employees, fostering trust in the selection process itself. Our people know that it's about more than selecting tools. We're creating a process that considers the human dimensions of AI alongside the technological capabilities.
Assembling an optimal suite of tools also helps empower our teams with digital dexterity, a crucial skill in today's tech-driven landscape. We recognize that our clients' AI journeys are unique: No two organizations will use identical AI solutions or follow the same path. By exposing our teams to a wide range of AI tools, we're positioning them to better support clients in their own AI journeys. This broad exposure enables our teams to adapt quickly to different client scenarios and offer tailored support regardless of the AI tools or strategies a client chooses, because they will already have first-hand experience of changing their own ways of working by introducing AI.
As the AI landscape evolves, our teams' digital dexterity allows them to stay ahead of the curve, ready to tackle emerging technologies and challenges. This exposure also maintains transparency, giving our teams the insights needed to make informed decisions, whether that's for day-to-day use or developing client support strategies.
By factoring in the needs and concerns of our teams, we're taking our due diligence to the next level. We know this approach will lead to higher utilization of new tools and help build trust, which is great for adoption. Ultimately, ensuring we get the best solutions for our teams means we're getting the best solutions for our business.
Human values guide AI success
When it comes to AI, it’s easy to get swept up in the whirlwind of algorithms and processing power. But the true catalysts of AI transformation are much more human.
The principles outlined above—trust, transparency, and empathy—help form an essential bridge between cutting-edge technology and human potential. Trust turns AI from “Unknown Entity” to “Reliable Ally.” Transparency invites all to participate in shaping our AI-enabled future. And empathy ensures that, as we push boundaries, we remain anchored in human needs and experiences.
This third and final pillar is about combining these three concepts to form a transformation strategy that aligns technological progress with organizational, human values. By keeping these ideas at the heart of our transformation, we're evolving our entire approach to innovation.