The Intelligent Enterprise 2.0 – Managing Transition

A compelling vision will stand the test of time, but it will also evolve to meet the needs of the day.

There are inherent challenges with developing strategy, for several reasons:

  • It is difficult to balance the long- and short-term goals of an organization
  • It generally takes significant investments to accomplish material outcomes
  • The larger the change, the more difficult it can be to align people to the goals
  • The time it takes to mobilize and execute can undermine the effort
  • The discipline needed to execute at scale requires a level of experience not always available
  • Seeking “perfection” can be counterproductive by comparison with aiming for “good enough”

The factors above, and others too numerous to list, should serve as reminders that excellence, speed, and agility require planning and discipline; they don’t happen by accident.  The focus of this article is to break down a few aspects of transitioning to an integrated future state, in the interest of increasing the probability not only of achieving predictable outcomes, but also of getting there quickly and cost-effectively, with quality.  That’s a fairly tall order, but if we don’t shoot for excellence, we are eventually choosing obsolescence.

This is the sixth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.


Key Considerations

Create the Template for Delivery

My assumption is that we care about three things in transitioning to a future, integrated environment:

  • Rapid, predictable delivery of capabilities over time (speed-to-market, competitive advantage)
  • Optimized costs (minimal waste, value disproportionate to expense)
  • Seamless integration of new capabilities as they emerge (ease of use and optimized value)

What these three things imply is the need for a consistent, repeatable approach and a standard architecture towards which we migrate capabilities over time.  Articles 2-5 of this series explore various dimensions of that conceptual architecture from an enterprise standpoint; the purpose here is to focus on approach.

As was discussed in the article on the overall framework, the key is to start the design of the future environment with the various consumers of technology, the capabilities being provided to them now and in the future, and the relative value of those capabilities over time.  That should be done at both an internal and an external level, so that individual roadmaps can be developed by domain and those capabilities surfaced and provided as part of a portfolio management process.  In the course of identifying those capabilities, the key questions are whether they are available today, whether they involve some level of AI services, whether they can or should be provided through a third party or internally, and whether the data ultimately exists to enable them.  This is where a significant inflection point should occur: the delivery of those capabilities should follow a consistent approach and pattern, so it can be repeated, leveraged, and made more efficient over time.
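
To make the portfolio exercise a bit more concrete, the key questions above could be captured in a consistent intake record for each candidate capability.  The sketch below is a hypothetical illustration in Python; the field names and the example entry are my own assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Sourcing(Enum):
    THIRD_PARTY = "third_party"
    INTERNAL = "internal"
    UNDECIDED = "undecided"

@dataclass
class CapabilityIntake:
    """Hypothetical intake record capturing the key questions for a candidate capability."""
    name: str
    domain: str                  # owning business domain for the roadmap
    consumers: list[str]         # internal and/or external consumers served
    available_today: bool        # does the capability already exist somewhere?
    requires_ai_services: bool   # does it involve AI/ML services?
    sourcing: Sourcing           # third party vs. internally built
    data_available: bool         # does the enabling data ultimately exist?
    relative_value: int          # prioritization score used by portfolio management
    notes: Optional[str] = None

backlog = [
    CapabilityIntake(
        name="Post-operative call summarization",
        domain="Care Management",
        consumers=["internal"],
        available_today=False,
        requires_ai_services=True,
        sourcing=Sourcing.UNDECIDED,
        data_available=True,
        relative_value=8,
    )
]

# A portfolio review might simply rank the backlog by relative value.
for item in sorted(backlog, key=lambda c: c.relative_value, reverse=True):
    print(item.name, item.domain, item.relative_value)
```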

For instance, if an internally developed, AI-enabled capability is needed, the way the data is exposed and processed, the way the AI service (or data product) is exposed, and the way it is integrated and consumed by a package or custom application should be exactly the same from a design standpoint, regardless of what the specific capability is.  That isn’t to say that the work needs to be done by only one team, as will be explored in the final article on IT organizational implications, but rather that we ideally want to determine the best “known path” to delivery, execute it repeatedly, and evolve it as required over time.
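
As a hedged illustration of what that “known path” might look like, the sketch below defines a single template that every AI-enabled capability would follow: collect the inputs, invoke the AI service or data product, and shape the result for the consuming application.  The interface and method names are assumptions for illustration, not a prescribed design.

```python
from abc import ABC, abstractmethod
from typing import Any

class AIEnabledCapability(ABC):
    """Illustrative template: every capability follows the same delivery pattern, end to end."""

    @abstractmethod
    def fetch_inputs(self, request: dict[str, Any]) -> dict[str, Any]:
        """Expose/collect the data the capability needs (virtualized or physical)."""

    @abstractmethod
    def invoke_service(self, inputs: dict[str, Any]) -> dict[str, Any]:
        """Call the AI service or data product that produces the insight."""

    @abstractmethod
    def to_consumer(self, result: dict[str, Any]) -> dict[str, Any]:
        """Shape the result for the package or custom application that consumes it."""

    def run(self, request: dict[str, Any]) -> dict[str, Any]:
        # The pattern is identical for every capability; only the step implementations vary.
        return self.to_consumer(self.invoke_service(self.fetch_inputs(request)))
```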

Taking this approach should provide a consistent end-user experience of services as they are brought online, a streamlined delivery process (you are essentially mass-producing capabilities over time), and a cost-optimized environment, because you are eliminating the bloat and waste of operating multiple delivery silos that would eventually impede speed-to-market at scale and lead to technical debt.

From an architecture standpoint, without going too deep into the mechanics here, to the extent that the current-state and future-state enterprise architecture models differ, it is worth evaluating things like data virtualization, as well as adapters and facades in the integration layer, as a way to translate between the current and future models.  That way there is logical consistency in the solution architecture even where the underlying physical implementations vary.  Our goal in enterprise architecture should always be to facilitate rapid execution while promoting standards and simplification, and to reduce complexity and technical debt wherever possible over time.
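
As a minimal sketch of the adapter/facade idea, assuming a hypothetical supplier record with invented legacy field names, the integration layer below translates a current-state shape into the future-state canonical model so that consumers only ever see the target model.

```python
from dataclasses import dataclass

@dataclass
class CanonicalSupplier:
    """Hypothetical future-state (canonical) representation."""
    supplier_id: str
    legal_name: str
    country_code: str

class LegacySupplierAdapter:
    """Facade over a current-state source; consumers see only the canonical model."""

    def __init__(self, legacy_source):
        self._source = legacy_source  # e.g., a virtualized view over the legacy system

    def get_supplier(self, supplier_id: str) -> CanonicalSupplier:
        raw = self._source.fetch(supplier_id)  # current-state shape, whatever it happens to be
        return CanonicalSupplier(
            supplier_id=str(raw["VENDOR_NO"]),      # legacy field names invented for illustration
            legal_name=raw["VENDOR_NAME"].strip(),
            country_code=raw.get("CTRY", "US"),
        )

class _FakeLegacySource:
    """Stand-in for the current-state system, just to make the sketch runnable."""
    def fetch(self, supplier_id: str) -> dict:
        return {"VENDOR_NO": supplier_id, "VENDOR_NAME": "  Acme Industrial  ", "CTRY": "DE"}

print(LegacySupplierAdapter(_FakeLegacySource()).get_supplier("10042"))
```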


Govern Standards and Quality

With a templatized delivery model and target architecture in place, the next key aspect of the transition is to govern the delivery of new capabilities, both to identify opportunities to develop and evolve standards and to evolve the “template” itself, whether that involves adding automation to the delivery process, building reusable frameworks or components, or creating other assets that can reduce friction and ease future efforts.

Once new capabilities are coming online, the other key aspect is to review them for quality and performance, looking for ways to evolve the approach, adjust the architecture, and continue to refine the understanding of how these integrated capabilities can best be delivered and leveraged on an end-to-end basis.

Again, the overall premise of this entire series of articles is to chart a path towards an integrated enterprise environment for applications, data, and AI in the future.  To be repeatable, we need to be consistent in how we plan, execute, govern, and evolve in the delivery of capabilities over time.

Certainly, there will be learnings that come from delivery, especially early in the adoption and integration of these capabilities. The way that we establish and enable the governance and evolution stemming from what we learn is critical in making delivery more predictable and ensuring more useful capabilities and insights over time.


Evolve in a Thoughtful Manner

With our learnings firmly in hand, the mental model I believe is worth considering is closer to a scaled Agile approach, with “increment-level” (e.g., monthly) retrospectives that review learnings across multiple sprints or iterations, ideally across multiple products and domains, to identify opportunities to adjust estimation metrics, refine design patterns, develop core or reusable services, and do anything else that would improve efficiency, cost, quality, or predictability.

Whether these occur monthly at the outset and eventually shift to a quarterly cadence as the environment and standards are better defined will depend on a number of factors, but the point is not to make the process too reactive to any individual implementation, and instead to look across a set of deliveries for consistent issues, patterns, and opportunities that will have a disproportionate impact on the environment as a whole.

The other key aspect to remember in relation to evolution is that the capabilities of AI in particular are evolving very rapidly, which is another reason to think about the overall architecture, separation of concerns, and integration standards in a very deliberate way.  A new version of a tool or technology shouldn’t require us to rewrite a significant portion of our footprint; ideally, upgrading a version makes the new capabilities available everywhere that service is consumed, and swapping out one technology for another leaves everything on the other side of the API nearly unchanged.  This is admittedly a fairly “perfect world” description of what would likely happen in practice, depending on the nature of the underlying change, but the point is that, without allowing for these changes up front in the design, the disruption they cause will likely be amplified and will slow overall progress.
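
A small sketch of that separation, under the assumption that consumers depend only on a narrow interface: swapping or upgrading the underlying tool or model becomes an implementation detail behind the API.  The provider classes below are placeholders, not real vendor SDKs.

```python
from abc import ABC, abstractmethod

class SummarizationService(ABC):
    """The contract consumers depend on; it should outlive any single tool or model."""

    @abstractmethod
    def summarize(self, text: str) -> str: ...

class ModelV1Summarizer(SummarizationService):
    def summarize(self, text: str) -> str:
        # Placeholder for whatever the current tool/model does today.
        return text[:200]

class ModelV2Summarizer(SummarizationService):
    def summarize(self, text: str) -> str:
        # A newer model can be dropped in without touching consumers.
        return text[:200].upper()

def document_workflow(summarizer: SummarizationService, document: str) -> str:
    # Consumers are written once against the interface and never change.
    return summarizer.summarize(document)

print(document_workflow(ModelV1Summarizer(), "quarterly maintenance report"))
print(document_workflow(ModelV2Summarizer(), "quarterly maintenance report"))  # upgrade is a one-line swap
```

The design choice that matters is that document_workflow never changes when the provider behind the interface does.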


Summing Up

Change is a constant in technology.  It’s a part of life given that things continue to evolve and improve, and that’s a good thing to the extent that it gives us the ability to solve more complex problems, create more value, and respond more effectively to business needs as they evolve.  The challenge we face is being disciplined enough to think through the approach so that we can become effective, fast, and repeatable.  Delivering individual projects isn’t the goal; the goal is delivering capabilities rapidly and repeatably, at scale, with quality.  That’s an exercise in disciplined delivery.

Having now covered the technology and delivery-oriented aspects of the future state concept, the remaining article will focus on how to think about the organizational implications on IT.


Up Next: IT Organizational Implications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/15/2025

The Intelligent Enterprise 2.0 – Integrating Artificial Intelligence

“If only I could find an article that focused on AI”… said no one, any time recently.

In a perfect world, I don’t want “AI” anything; I want to be more efficient, effective, and competitive.  I want all of my capabilities to be seamlessly folded into the way people work, so they become part of the fabric of the future environment.  That is why having an enterprise-level blueprint for the future is so critically important.  Things should fit together seamlessly, and they often don’t, especially when we don’t design with integration in mind from the start.  That friction slows us down, costs us more, and makes us less productive than we should be.

This is the third post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Natural Language First (1)

I don’t own an Alexa device, but I have certainly had the experience of talking to someone who does, and heard them say “Alexa, do this…”, then repeat themselves, then repeat themselves again, adjusting their word choice slightly or slowing down what they said, with increasing levels of frustration, until eventually the original request goes through.

These experiences of voice-to-text and natural language processing have been anything but frictionless: quite the opposite, in fact.  With the advent of large language models (LLMs), it’s likely that these kinds of interactions will become considerably easier and more accurate, and that written and spoken input will become a common means for end users to initiate one or more actions.

Is there a benefit?  Certainly.  Take the case of a medical care provider directing calls to a centralized number for post-operative and case management follow-ups.  A large volume of calls needs to be processed, and qualified medical personnel are available to handle them on a prioritized basis.  The technology can play the role of a silent listener, both recording key points of the conversation and recommended actions (saving time in documenting the calls) and making contextual observations integrated with the healthcare worker’s application (providing insights) to help address any needs that arise mid-discussion.  The net impact could be a higher volume of calls processed, due to the reduction in documentation time, and improved quality of care from the additional insights provided to the healthcare professional.  Is this artificial intelligence replacing workers?  No, it is helping them be more productive and effective by integrating into the work they are already doing, reducing lower-value activities and allowing them to focus more on patient care.

If natural language processing can be integrated such that comprehension is highly accurate, I can foresee a large amount of end-user input being provided this way in the future.  That said, the mechanics of a process and the associated experience still need to be evaluated so that it doesn’t become as cumbersome as some voice response systems in place today: asking you to “say or enter” a response, confirming what you said back to you, then asking you to confirm that, only to repeat the cycle multiple times.  No doubt there is a spreadsheet somewhere showing the savings organizations realize by using this kind of technology instead of having someone answer a phone call.  The problem is that there is a very tedious and unpleasant customer experience on the other side of those savings, and that shouldn’t be the way we design our future environments.

Orchestration is King (2)

Where artificial intelligence becomes powerful is when it pivots from understanding to execution.

Submitting a natural language request (“I would like to…” or “Do the following on my behalf…”), having the underlying engine convert that request into a sequence of actions, and then ultimately executing those actions is where the power of orchestration comes in.
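
As a heavily simplified sketch of that flow: a stubbed planning step stands in for the language model, turning a request into an ordered list of named actions, and a small orchestrator executes them against a registry of exposed capabilities.  The action names and registry are illustrative assumptions.

```python
from typing import Callable

# Registry of capabilities the orchestrator is allowed to invoke.
ACTIONS: dict[str, Callable[[dict], dict]] = {
    "look_up_order": lambda ctx: {**ctx, "order": {"id": ctx["order_id"], "status": "shipped"}},
    "draft_status_email": lambda ctx: {**ctx, "email": f"Order {ctx['order']['id']} is {ctx['order']['status']}."},
}

def plan_from_request(request: str) -> list[str]:
    # Stub: in practice an LLM or intent service would map the request to a plan of actions.
    if "order" in request.lower():
        return ["look_up_order", "draft_status_email"]
    return []

def orchestrate(request: str, context: dict) -> dict:
    for action_name in plan_from_request(request):
        context = ACTIONS[action_name](context)  # execute each planned step in sequence
    return context

result = orchestrate("Let the customer know where their order is", {"order_id": "A-123"})
print(result["email"])
```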

Going back to my earlier article on The Future of IT from March of 2024, I believe we will pivot from organizations needing to create, own, and manage a large percentage of their technology footprint to organizations largely becoming consumers of technologies produced by others, which they configure to enable their business rules and constraints and orchestrate to align with their business processes.

Orchestration will exist on four levels in the future:

  • That which is done on behalf of the end user to enable and support their work (e.g., review messages, notifications, and calendar to identify priorities for my workday)
  • That which is done within a given domain to coordinate transaction processing and optimize leverage of various components within a given ecosystem (e.g., new hire onboarding within an HR ecosystem or supplier onboarding within the procurement domain)
  • That which is done across domains to coordinate activity that spans multiple domains (e.g., optimizing production plans coming from an ERP system to align with MES and EAM systems in Manufacturing, given execution and maintenance needs)
  • Finally, that which is done within the data and analytics environment to minimize data movement and compute while leveraging the right services to generate a desired outcome (e.g., optimizing cost and minimizing the data footprint by comparison with more monolithic approaches)

Beyond the above, we will also see agents taking action on behalf of other, higher-level agents, in a more hierarchical relationship where a process is decomposed into subtasks executed (ideally in parallel) to serve an overall need.
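
A minimal sketch of that hierarchical pattern, assuming Python’s asyncio for parallelism: a higher-level agent decomposes a request into subtasks and awaits them concurrently.  The subtask agents here are trivial stand-ins for real services.

```python
import asyncio

async def pricing_agent(item: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a real lookup or model call
    return f"price quote for {item}"

async def availability_agent(item: str) -> str:
    await asyncio.sleep(0.1)
    return f"availability for {item}"

async def sourcing_agent(item: str) -> list[str]:
    # Higher-level agent: decompose the request into subtasks and run them in parallel.
    return list(await asyncio.gather(pricing_agent(item), availability_agent(item)))

print(asyncio.run(sourcing_agent("industrial bearings")))
```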

Each of these approaches refers back to the concept of leveraging defined ecosystems and standard integration, as discussed in the previous article on the overarching framework.

What is critical is to think about this as a journey towards maturing and exposing organizational capabilities.  If we assume an end user wants to initiate a set of transactions through a verbal command, which is then turned into a process to be orchestrated on their behalf, we need to be able to expose the services required to ultimately enable that request, whether that involves applications, intelligence, data, or some combination of the three.  If we establish the underlying framework to enable this kind of orchestration, however it is initiated (through an application, an agent, or some other mechanism), we could theoretically plug new capabilities into that framework to expand our enterprise-level technology capabilities over time, creating exponential opportunity to make more of our technology investments.  The goal is to break down the silos and make every capability we have accessible to be orchestrated on behalf of an end user or the organization.

I met with a business partner not that long ago who was a strong advocate for “liberating our data”.  My argument would be that the future of an intelligent enterprise should be to “liberate all of our capabilities”.

Insights, Agents, and Experts (3)

Having focused on orchestration, which is a key capability within agentic solutions, I did want to come back to three roles that I believe AI can fulfill in an enterprise ecosystem of the future (sketched briefly after the list):

  • Insights – observations or recommendations meant to inform a user to make them more productive, effective, or safer
  • Agents – applications that orchestrate one or more activities on behalf of or in concert with an end user
  • Experts – applications that act as a reference for learning and development and serve as a representation of the “ideal” state, either within a given domain (e.g., a Procurement “Expert” may have accumulated knowledge of best practices, market data, and internal KPIs and goals, allowing end users and applications to interact with it as an interactive knowledge base meant to help optimize performance) or across domains (i.e., extending the role of a domain-based expert to focus on enterprise-level objectives and to help calibrate the goals of individual domains so those overall outcomes are achieved more effectively)
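
One hedged way to express the three roles is as separate contracts that applications can implement or consume; the method names below are illustrative assumptions, not a standard.

```python
from typing import Protocol

class Insight(Protocol):
    def observe(self, context: dict) -> list[str]:
        """Return observations/recommendations that inform (but do not act for) the user."""

class Agent(Protocol):
    def act(self, goal: str, context: dict) -> dict:
        """Orchestrate one or more activities on behalf of, or in concert with, the user."""

class Expert(Protocol):
    def advise(self, question: str, scope: str = "domain") -> str:
        """Answer against accumulated best practices, market data, and KPIs ('domain' or 'enterprise' scope)."""
```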

I’m not aware of “Expert”-type capabilities existing today for the most part, but I do believe that a more autonomous entity able to provide support, guidance, and benchmarking to help optimize the performance of individuals and systems could be a compelling way to leverage AI in the future.

AI as a Service (4)

I will address how AI should be integrated into an application portfolio in the next article, but I felt it was important to clarify that, while AI is being discussed as an objective, a product, and an outcome in many cases today, it should be thought of as a service that lives within, and is developed as part of, a data and analytics capability.  This feels like the right logical association because the insights and capabilities associated with AI are largely data-centric and heavily model-dependent, and that should live separately from the applications meant to express those insights and capabilities to an end user.

Where the complicating factor can arise, in my experience, is in how the work is approached and the capabilities of the leaders charged with AI implementation, something I will address in the seventh article in this series on organizational considerations.

Suffice it to say that I see AI as an application-oriented capability, even though it is heavily dependent on data and the underlying model.  To the extent that many data leaders come from backgrounds focused on the storage, optimization, and performance of traditional or even advanced analytics and data science capabilities, they may not be ideal candidates to establish the vision for AI, given that it benefits from more of an outside-in (consumer-driven) mindset than an inside-out (data-focused) approach.

Summing Up

With all the attention being given to AI, the main purpose of breaking it down in the manner above is to think about how we integrate and leverage it within and across an enterprise and, most importantly, not to treat it as a silo or a one-off.  That is not the right way to approach AI moving forward.  It will absolutely become part of the way people work, but it is a capability like many others in technology, and it is critically important that we continue to start with the consumers of technology and how we are making them more productive, effective, safe, and so on.

The next two articles will focus on how we integrate AI into the application and data environments.

Up Next: Evolving Applications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/28/2025

Bringing AI to the End User

Overview

In my recent article on Exploring Artificial Intelligence, I covered several dimensions of how I think about the direction of AI, including how models will evolve from general-purpose and broad-based to be more value-focused and end consumer-specific as the above diagram is intended to illustrate.

The purpose of this article is to dive a little deeper into a mental model for how I believe the technology could become more relevant and valuable in an end-user (or end consumer) specific context.

Before that, a few assertions related to the technology and end user application of the technology:

  • The more we can passively collect data in the interest of simplifying end-user tasks and informing models, the better. People can be both inconsistent and unreliable in how they capture data.  In reality, our cell phones collect massive amounts of data on an ongoing basis that is used to drive targeted advertising and other capabilities without our involvement.  In a business context, however, the concept of doing so can be met with significant privacy and other concerns, which is a shame because, while that data is being collected on our devices regardless, we aren’t able to benefit from it in the context of doing our work
  • Moving from a broad- or persona-based means of delivering technology capabilities to a consumer-specific approach is a potentially significant advancement in enabling productivity and effectiveness. This would be difficult or impossible to achieve without leveraging an adaptive approach that synthesizes various technologies (personalization, customization, dynamic code generation, role-based access control, AI/ML models, LLMs, content management, and so forth) to create a more cohesive and personalized user experience
  • While I am largely focusing on end-user application of the technology, I would argue that the same concepts and approach could be leveraged for the next generation of intelligent devices and digital equipment, such as robotics in factory automation scenarios
  • To make the technology both performant and relevant, part of the design challenge is to continually reduce and refine the level of “model” information that is needed at the next layer of processing, so as not to overload the end computing device (presumably a cell phone or tablet) with a volume of data that isn’t required to enable effective action on behalf of the data consumer.

The rest of this article will focus on providing a mental model for how to think about the relationship across the various kinds of models that may make up the future state of AI.

Starting with a “Real World” example

Having spent a good portion of my time off traveling across the U.S., and despite having a printed road atlas in my car, I was reminded more than once of the trust I place in Google Maps, particularly when driving along an “open range” gravel road with cattle roaming about in northwest Nebraska on my way to South Dakota.  In many ways, navigation software represents a good starting point for where I believe intelligent applications will eventually go in the business environment.

Maps is useful as a tool because it synthesizes the data it has on roads and navigation options with information specific to my potential routes, like my chosen destination, location, speed traps, delays, and accidents, while allowing for a level of customization if I prefer to take routes that avoid tolls and so on.  From an end-user perspective, it provides the next recommended action, remaining contextually relevant to where I am and what I need to do, along with how long it will be until that action needs to be taken, the distance remaining, and the time I should arrive at my final destination.

In a connected setting, navigation software pulls pieces of its overall model and applies data on where I am and where I’m going to (ideally) help me get there as efficiently as possible.  The application is useful because it is specific to me, to my destination, and to my preferred route, and is different from what would be delivered to a car immediately behind me, despite leveraging the same application and infrastructure.  This is the direction I believe we need to go with intelligent applications: driving individual productivity and effectiveness.

Introducing the “Tree of Knowledge” concept

The Overall Model

The visual above is meant to represent the relationship of general-purpose and foundational models to what is ultimately delivered to an end user (or piece of digital equipment) in a distributed fashion.

Conceptually, I think of the relationship across data sets as if it were a tree. 

  • The general-purpose model (e.g., LLM) provides the trunk that establishes a foundation for downstream analytics
  • Domain-specific models (e.g., RAG) act as the branches that rely on the base model (i.e., the trunk) to provide process- or function-specific capabilities that can span a number of end-user applications, but have specific, targeted outcomes in mind
  • A “micro”-model is created when specific branches of the tree are deployed to an end-user based on their profile. This represents the subset that is relevant to that data consumer given their role, permissions, experience level, etc.
  • The data available at the end point (e.g., mobile device) then provides the leaves that populate the branches of the “micro”-models that have been deployed to create an adaptive model used to inform the end user and drive meaningful and productive action.

The adaptive model should also take into account user preferences (via customization options) and personalization to tune their experience as closely as possible to what they need and how they work.
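
To make the structure concrete, the sketch below models the trunk, branches, “micro”-models, and leaves as simple data structures, with deployment filtering branches down to what a given consumer is permitted to use.  The class and field names are illustrative assumptions, not a proposed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """Domain-specific model (e.g., RAG over a function's knowledge) built on the trunk."""
    name: str
    domain: str

@dataclass
class Trunk:
    """General-purpose foundation model the branches rely on."""
    name: str
    branches: list[Branch] = field(default_factory=list)

@dataclass
class MicroModel:
    """Subset of branches deployed to one consumer, plus device-local leaves and preferences."""
    user_id: str
    branches: list[Branch]
    leaves: dict[str, object] = field(default_factory=dict)        # endpoint data (location, equipment, ...)
    preferences: dict[str, object] = field(default_factory=dict)   # customization/personalization

def deploy_micro_model(trunk: Trunk, user_id: str, permitted_domains: set[str]) -> MicroModel:
    # Only the branches relevant to this consumer's role/permissions are deployed.
    relevant = [b for b in trunk.branches if b.domain in permitted_domains]
    return MicroModel(user_id=user_id, branches=relevant)

trunk = Trunk("general-llm", [Branch("maintenance-procedures", "EAM"), Branch("supplier-onboarding", "Procurement")])
model = deploy_micro_model(trunk, "user-42", {"EAM"})
model.leaves["location"] = "Line 3"  # leaves arrive from the endpoint at runtime
print([b.name for b in model.branches])
```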

In this way, the progression of models moves from general to very specific, end-user focused solutions that are contextualized with real-time data much the same as the navigation example above.

It is also worth noting that, in addition to delivering these capabilities, the mobile device (or endpoint) may collect and send data back to further inform and train the knowledge models by domain (e.g., process performance data) and potentially develop additional branches based on gaps that may surface in execution.

Applying the Model

Having set context on the overall approach, there are some notable ways these capabilities could create a different experience and level of productivity than what exists today, namely:

  • Rather than delivering content and transactional capabilities based on an end user’s role and persona(s), those capabilities would be deployed to a user’s device (the branches of the “micro”-model) and synthesized with other information (the “leaves”) like the user’s experience level, preferences, location, training needs, and equipment information (in a manufacturing-type context) to generate an interface specific to them that continually evolves to optimize their individual productivity
  • As new capabilities (i.e., “branches”) are developed centrally, they could be deployed to targeted users, and each user’s individual experience would adapt to incorporate them in ways that work best for that user and their given configuration, without having to relearn the underlying application(s)

Going Back to Navigation

On the last point above, a parallel example would be the introduction of weather information into navigation. 

At least in Google Maps, while there are real-time elements like speed traps, traffic delays, and accidents factored into the application, there is currently no mechanism to recognize or warn end users about significant weather events that may surface along the route.  In practice, where severe weather is involved, this could represent a safety risk to the traveler.  If the model were adapted to include a “branch” for this kind of data, one would hope that the application would behave the same from an end-user standpoint, but with the additional capability integrated into the application.

Wrapping Up

Understanding that we’re still early in exploring how AI will change the way we work, I believe that defining a framework for how various types of models can integrate and work together across purposes would enable significant value and productivity if designed effectively.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 09/14/2024