AI Odyssey: Launching a Large Language Model

 

 

Our AI Odyssey

At North Highland, we understand that initiating and scaling change can be intimidating and complex.  

Like many of our clients, we are embarking on our own AI business transformation—made possible through an internal AI Center of Excellence (CoE) + Accelerator team. And we want to share this journey—or odyssey—with you.

Through a new docu-series, we will unveil our experience driving organizational growth through the CoE + Accelerator team, starting by sharing how we are:

  • Leveraging institutional knowledge  
  • Building and implementing a large language model (LLM)
  • Designing an AI governance framework
  • Maintaining a people-first approach in the age of AI
  • Measuring AI ROI

And much more.  

By chronicling our first-hand journey to AI adoption, we hope to provide a resource that is both relatable and enlightening for other organizations navigating this new digital landscape. 

On the stage of digital business transformation, the spotlight has remained firmly fixed on artificial intelligence (AI). And with nine in 10 leaders positioning AI as a priority in 2024, it’s safe to say that it’s set to be a permanent fixture across most industries in the years ahead.  

But what comes after saying ‘yes’ to AI?  

For many organizations, the next step is navigating the myriad questions and decisions surrounding large language models (LLMs)—or even considering how to create their own. These modern tools have the power to process large datasets, generate human-like text, improve insight analysis, and (when implemented properly) deliver powerful competitive advantages. You’ve heard of them: ChatGPT, Claude, Copilot, etc.  

But believe it or not, the technology itself has marginal influence over AI ROI. Achieving value from an enterprise LLM is largely about having a solid foundation of knowledge management practices. The LLM capabilities that can ultimately create business value originate directly from the knowledge base used during the model's training process.

Here are the steps we’re taking to achieve this at North Highland as we build our very own LLM:  

  1. Defining intent for strategic alignment
  2. Collaborating across functions for robust and diverse knowledge sharing
  3. Employing a gold standard for training materials  
  4. Adopting knowledge management best practices

Creating value with an LLM  

While the technology itself only goes so far on its own, we would be remiss not to touch on the capabilities an LLM stands to offer (with the right knowledge and training, of course).  

The Enterprise LLM

Anyone who has interacted with generative AI chatbots—think ChatGPT or Claude—has already interfaced with a “generalized” LLM. This type of AI uses non-company specific data sets and deep learning to process and generate human-like text. LLMs are designed for content generation, information retrieval, and automated decision-making.

100 percent of business leaders we surveyed in 2024 claimed to be pursuing AI use cases, but more than 1 in 5 say they are unprepared or very unprepared for content generation as a use case.

The enterprise LLM performs these functions, but with the added benefit of being customized or trained with company-specific data, ensuring relevancy, accuracy, privacy and security.

Customization: This tool can be fine-tuned to align with organizational objectives, workflows, and capabilities—even as the business evolves.

Security: Employees can safely use and reap the benefits of AI without compromising proprietary information or leaking sensitive data, thereby mitigating risk.

Internally, many organizations are exploring LLMs for their ability to…  

  • Make data and knowledge more accessible,
  • Spark innovation,
  • Drive operational efficiency,  
  • Improve employee experiences and engagement, and
  • Enhance enterprise connectivity.  

Externally, they’re seeking the ability to…  

  • Streamline and improve user/customer experience,
  • Strengthen engagement, and  
  • Maintain brand equity.  

For businesses in the Retail and CPG industry, this looks like personalized customer recommendations and buying assistance.

In the Life Sciences industry, LLMs are enhancing the accuracy and speed of diagnostics and streamlining clinical documentation, making critical health information more accessible.  

Financial services institutions are leaning into LLMs for compliance support and rapid fraud detection.  

And as a leader in change and transformation, North Highland wants to reduce the time spent on manual tasks, so our teams have greater bandwidth for the unique work that only highly skilled and specialized people can do.

Managing the knowledge 

At North Highland, we set out to design an enterprise LLM that gives our people the right capabilities to better drive transformation and deliver lasting change for our clients.  

To achieve this goal, our AI Center of Excellence took four key steps to ensure we have a model built to position our people to make change happen:  

  1. Defining intent
  2. Identifying key capabilities and core knowledge areas
  3. Establishing the critical knowledge foundations to train the model with premier content
  4. Creating traceability  

Zooming out & defining intent

Properly customizing an LLM requires a clear vision from the start. Leaders should begin by outlining core objectives or the key motivations behind an AI business transformation.

"The tech is the easy part. Let’s start with what you want to do as a business—your goals, your objectives. The tech is fun, but the processes around getting it to adoption, having people use it and bring it into their ways of working, that’s where the money is." - Paul Harris, Director of Technology

At North Highland, we call this the Design Intent phase—wherein our experts collaborate with internal stakeholders, other CoE team members, and senior leaders to define the intent behind the design.  

In other words, answering questions like...

What operations do we hope to improve?

Where do we want to see the most impact?

How will this tool help us transform for the future of work?

This is where we began to “zoom out” to get the wider context of how this tool would be used, who it would impact, and what it would take to make it a success. Answering these questions laid the initial foundation for the work ahead. Defining our core objectives illuminated the key qualities and capabilities our LLM would need to support North Highland’s overall mission.  

"The design intent phase allowed us to zoom out and align across workstreams, specificallyRag vs. fine-tuning: North Highland is utilizing retrieval augmented generation (RAG) to leverage and strengthen our LLM. This has allowed us to enrich and optimize responses, as the model references an external authoritative knowledge base in addition to internal training resources. with regards to AI Use Cases. Having that initial, high-level view of where LLMs and RAG applications could provide maximum value, then tailoring our design intent accordingly, kept us focused on delivering an impactful product and facilitated smoother collaboration by giving the whole team a shared strategic roadmap.  

You can do a lot of cool things with AI, but if you aren’t careful, you can find yourself getting caught up in what’s possible instead of what’s valuable." - Jonah Epstein, Scrum Master of LLM Workstream
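To make the RAG idea above concrete, here is a minimal, hypothetical sketch in Python. It is not North Highland's implementation: the knowledge base entries, scoring, and prompt format are illustrative placeholders, and retrieval is reduced to simple keyword overlap rather than the embedding-based vector search a production RAG pipeline would typically use.

```python
# Minimal RAG sketch: retrieve relevant passages, then ground the prompt in them.
# Hypothetical knowledge base and scoring; a production system would typically use
# embeddings and a vector store rather than keyword overlap.

KNOWLEDGE_BASE = [
    {"source": "change-management-playbook.pdf",
     "text": "Our change management approach pairs stakeholder analysis with adoption metrics."},
    {"source": "ai-governance-framework.docx",
     "text": "All AI outputs must be traceable to an approved internal source."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank passages by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = [
        (len(q_terms & set(doc["text"].lower().split())), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(question: str) -> str:
    """Assemble an LLM prompt grounded in (and citing) the retrieved context."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(question))
    return (
        "Answer using only the context below and cite the source in brackets.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How do we approach change management adoption?"))
```

The point of the sketch is the shape of the flow: retrieve from an authoritative knowledge base first, then ground the model's prompt in what was retrieved, rather than relying on the model's general training alone.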

Identifying core knowledge areas

As we pointed out earlier, knowledge management is the sine qua non of a successful LLM implementation. So naturally, the next step for our team was tapping into our firm’s existing knowledge landscape.

At North Highland, this meant:  


Defining objectives and partnering with stakeholders: Understanding our goals and the core knowledge areas needed to achieve them helped us mobilize critical internal expertise and partnerships. We incorporated multi-functional perspectives for a model that can support the range of functions, needs, and challenges across the organization. And we did this by bringing in executive leaders and partners in sales enablement, product, strategic proposals, and marketing, among others, for a comprehensive point of view. Here, we focused on understanding how we talk about our clients’ needs, our work, and the ways we show up with unique value for our clients.


Examining existing practices: Assessing data about our most important services, functions, and resource allocation in core knowledge areas was the next significant step. Our knowledge management leaders partnered with leaders in our consulting practices and industry teams to align on firm priorities and the kinds of knowledge needed to support them in delivering excellent value to our clients.


Aligning to firm values: An organization’s ethos matters. Embedding cultural values—like DEI and a people-first stance—within the model helps us uphold these and other core North Highland commitments. We partnered with internal experts to represent the way we talk about the elements that make us who we are as a firm. Ultimately, this step ensures that the content used to feed our LLM reflects our values and tone of voice.  


All of this adds up to a meticulous process, but it is a worthwhile one that the CoE knows will pay dividends. By taking the time to strategize properly up front, they're laying the groundwork for an LLM that won't just be a technological marvel, but an enabler of change and transformation—both for us and for our clients.  

Establishing critical knowledge foundations to train the model with premier content 

It’s no secret that LLMs and other AI tools rely on people to establish a solid foundation of critical knowledge and train them with it before they can perform properly. To have a model that generates quality content, you must first feed it with quality content.  

So, after outlining core knowledge areas and needs, our experts began sourcing the right content to train our model.  

What constitutes the “right” content?

From industry focus areas to best examples of prior work, organizations should source and leverage a spectrum of high-quality content—with heavy emphasis on high-quality.  

The content sourced and employed during this phase directly influences a model’s ability to support core objectives—whether that includes regulating revenue processes, streamlining customer support, or enhancing project proposals. To put it simply, the model’s performance is directly tied to how well this step is executed. Quality input = quality output. 

"People use Generative AI tools largely to save themselves time and effort, but this is only worth doing if, on the balance, the output of these tools is of equal or greater value and quality than what we could have produced without using them. In content development, it is rarely worthwhile to trade quality for speed of creation." - Lizzi Winter, Director of Knowledge Management

How can we ensure we are sourcing quality content? 

We’ve established that a model's performance directly correlates with the materials it’s trained on. But with vast amounts of data available across the organization, how could our LLM team ensure it was feeding the AI with the very best? Thankfully, quality content serves both machines and humans, so we were already focused on capturing our best knowledge. Our work in recent years on knowledge strategy and focused archiving and curation has served as the foundation for our model’s success.  

Modern day gold mining: Powerful LLMs require gold standard content. These are the pillars of our grading process: Aligned, Integrated, Scalable + Repeatable, High Quality.

Our core solution at North Highland was applying a gold standard: our custom benchmark for assessing content and identifying premium examples. Following this standard ensures our model is trained to generate only high-value content that emulates our best work.  

 

"As part of a focus on maturing our Knowledge Management capability over the past few years, we already had a strong foundation for our LLM by developing gold standard criteria and establishing repeatable and scalable curation processes to capture gold standard knowledge. Now, we are simply refining these processes and content types to accommodate LLMs as the “user” of the content. Our best content is our best content regardless of whether its audience is a human or a machine assisting a human." - Lizzi Winter, Director of Knowledge Management

To meet the gold standard, content must be:

  • Relevant to the core knowledge areas you want to feed the model  
  • Aligned with the internal or external capabilities you want your model to support or enhance
  • Exemplary of the high-caliber work you want your model to emulate  
  • Accurate and trustworthy
  • Comprised of diverse and robust datasets  

The processes used for grading and sourcing content must be thorough and rigorous. Failing to do this well can result in a model that generates inaccurate, irrelevant, or incomplete responses—reducing the value gained from an enterprise LLM.  
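As a purely illustrative aid, that grading step can be thought of as a simple pass/fail record against the checklist above. The sketch below is hypothetical: the field names and the "every criterion must pass" rule are assumptions for illustration, not the CoE's actual curation tooling.

```python
from dataclasses import dataclass

# Hypothetical content-grading record mirroring the gold standard checklist above.
# Field names and the "all criteria must pass" rule are illustrative assumptions.

@dataclass
class ContentCandidate:
    title: str
    source_path: str
    relevant_to_core_areas: bool = False      # core knowledge areas
    supports_target_capability: bool = False  # internal/external capabilities
    exemplary_of_best_work: bool = False      # high-caliber examples to emulate
    accurate_and_trustworthy: bool = False
    adds_dataset_diversity: bool = False      # robust, diverse coverage

    def meets_gold_standard(self) -> bool:
        """Only candidates that pass every criterion enter the training corpus."""
        return all([
            self.relevant_to_core_areas,
            self.supports_target_capability,
            self.exemplary_of_best_work,
            self.accurate_and_trustworthy,
            self.adds_dataset_diversity,
        ])

candidates = [
    ContentCandidate("2023 client proposal", "km/proposals/example.docx",
                     True, True, True, True, True),
    ContentCandidate("Outdated onboarding deck", "km/archive/onboarding_2016.pptx",
                     False, True, False, True, False),
]
training_corpus = [c for c in candidates if c.meets_gold_standard()]
print([c.title for c in training_corpus])  # only the gold standard example survives
```

However the rubric is actually implemented, the design point is the same: grading is explicit and repeatable, so only content that clears every bar reaches the model.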

"It is difficult to overstate the importance of high-quality knowledge foundations in producing high-quality outputs from your LLM. Knowledge management is having a moment because it is not just about collecting documents. It's about embedding valuable knowledge capital where it's most needed in our business decision-making and daily processes." - Lizzi Winter, Director of Knowledge Management

Embedding traceability 

When it comes to AI, accuracy and trustworthiness are top concerns for everyone involved—from the leaders driving the transformation to the employees using the technology daily. More than 75 percent of consumers are wary of generative AI producing misinformation.

This is a key reason why AI adoption fails in so many organizations.  

The CoE team knew that ensuring accuracy went beyond just using gold standard knowledge to train the model. To truly build trust and confidence in the model, the team needed to prioritize traceability between the information supplied to the user and the original source of that knowledge.  

"We wanted our own model to embody traceability. When it provided an answer, we wanted people to know the why, where, and what behind it. That’s the point of traceability." - Jonah Epstein, Scrum Master of LLM Workstream  

Here’s an example:  

Q: How is North Highland ranked as a leader in change management?

A: North Highland has been named one of America's best management consulting firms for nine consecutive years.

Source: 'Meet America's Best Management Consulting Firms' by Forbes Magazine, issued March 2024
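In practical terms, traceability means the system never returns an answer without the provenance behind it. The hypothetical sketch below shows one way a response object could carry that information; the field names are illustrative assumptions, not the CoE's actual schema.

```python
from dataclasses import dataclass

# Illustrative only: an answer that always carries its provenance, so users can
# see the "why, where, and what" behind each response.

@dataclass
class SourcedAnswer:
    question: str
    answer: str
    source_title: str
    source_detail: str  # e.g. publication and date, or an internal document path

    def render(self) -> str:
        return (
            f"Q: {self.question}\n"
            f"A: {self.answer}\n"
            f"Source: {self.source_title} ({self.source_detail})"
        )

response = SourcedAnswer(
    question="How is North Highland ranked as a leader in change management?",
    answer="Named one of America's best management consulting firms for nine consecutive years.",
    source_title="Meet America's Best Management Consulting Firms",
    source_detail="Forbes Magazine, March 2024",
)
print(response.render())
```

Whatever the actual schema looks like, the principle is that the citation travels with the answer rather than being reconstructed after the fact.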

 

This commitment to traceability has far-reaching benefits.  

First and foremost, it helps mitigate risk. With a clear audit trail, the team can identify and correct any biases, inconsistencies, or misused information, ensuring the model remains compliant and reliable.

Traceability also helps uphold data quality best practices, like maintaining a single source of truth and enabling our model to cite its sources. By holding the model accountable, the team can continually assess and strengthen the data it relies on.

Perhaps most importantly, traceability helps foster trust. And building workforce confidence in a new digital tool, like an LLM, plays a pivotal role in paving the way to full-scale acceptance and adoption.  

"Not only does traceability play a role in quality, but it also facilitates the democratization of knowledge as a whole across the enterprise. If an individual asks a question on a subject they are unfamiliar with, they will receive an answer and be pointed to the most relevant source to learn more." - Jonah Epstein, Scrum Master of LLM Workstream

Paving the way for adoption

We hope this blog has made it clear that the model itself is just one piece of the puzzle that makes up our AI transformation.

Because what we've seen is that it's our people, content, and processes that will secure sustainable change for our organization. And how well we prepare them will determine whether we reap the full benefits of AI.  

Our progress is made up of the non-negotiables we've covered in this piece: Meticulous planning, strategic alignment, core knowledge foundations, and immense cross-team collaboration. It’s not a one-and-done process, either. We’re committed to leaning into these processes continuously to ensure our model is evolving alongside our firm and remains up-to-date and relevant. But it doesn’t stop there. The members of our CoE continue to take critical measures to ready our firm for implementation, so that our custom LLM can be launched with confidence.

Coming up next

So stay tuned: We'll be back later in this docu-series with a behind-the-scenes look at our LLM rollout to share with you...

  • Lessons learned 
  • Hacks for adoption
  • The importance of change management

And more insights that can help you on your own AI transformation journey.