The Intelligent Enterprise 2.0 – Managing Transition

A compelling vision will stand the test of time, but it will also evolve to meet the needs of the day.

Developing strategy carries inherent challenges, for several reasons:

  • It is difficult to balance the long- and short-term goals of an organization
  • It generally takes significant investments to accomplish material outcomes
  • The larger the change, the more difficult it can be to align people to the goals
  • The time it takes to mobilize and execute can undermine the effort
  • The discipline needed to execute at scale requires a level of experience not always available
  • Seeking “perfection” can be counterproductive by comparison with aiming for “good enough”

The factors above, and others too numerous to list, should serve as reminders that excellence, speed, and agility require planning and discipline; they don’t happen by accident.  The focus of this article will be to break down a few aspects of transitioning to an integrated future state, in the interest of increasing the probability not only of achieving predictable outcomes, but also of getting there quickly and cost-effectively, with quality.  That’s a fairly tall order, but if we don’t shoot for excellence, we are eventually choosing obsolescence.

This is the sixth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Key Considerations

Create the Template for Delivery

My assumption is that we care about three things in transitioning to a future, integrated environment:

  • Rapid, predictable delivery of capabilities over time (speed-to-market, competitive advantage)
  • Optimized costs (minimal waste, value disproportionate to expense)
  • Seamless integration of new capabilities as they emerge (ease of use and optimized value)

What these three things imply is the need for a consistent, repeatable approach and a standard architecture towards which we migrate capabilities over time.  Articles 2-5 of this series explore various dimensions of that conceptual architecture from an enterprise standpoint; the purpose here is to focus in on approach.

As was discussed in the article on the overall framework, the key is to start the design of the future environment around the various consumers of technology and the capabilities being provided to them, now and in the future, and to prioritize the relative value of those capabilities over time.  That should be done on an internal and external level, so that individual roadmaps can be developed by domain to eventually surface and provide those capabilities as part of a portfolio management process.  In the course of identifying those capabilities, some key questions will be whether they are available today, whether they involve some level of AI services, whether they can/should be provided through a third party or internally, and whether the data ultimately exists to enable them.  This is where a significant inflection point should occur: the delivery of those capabilities should follow a consistent approach and pattern, so it can be repeated, leveraged, and made more efficient over time.

For instance, if an internally developed, AI-enabled capability is needed, the way the data is exposed and processed, the way the AI service (or data product) is published, and the way it is integrated with and consumed by a package or custom application should be exactly the same from a design standpoint, regardless of what the specific capability is.  That isn’t to say that the work needs to be done by only one team, as will be explored in the final article on IT organizational implications, but rather that we ideally want to determine the best “known path” to delivery, execute it repeatedly, and evolve it as required over time.
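To make that “known path” concrete, below is a purely illustrative sketch (in Python, with hypothetical names, since this series deliberately avoids prescribing specific technologies) of what a single, shared contract for internally developed AI-enabled capabilities could look like.  Each capability differs in what it computes, never in how it is exposed or consumed.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any

@dataclass
class InsightResult:
    capability: str          # e.g., "churn-risk" (hypothetical)
    payload: dict[str, Any]  # the insight itself, in a standard envelope

class AICapability(ABC):
    """One shared contract for every internally delivered AI-enabled capability."""

    @abstractmethod
    def gather(self, context: dict[str, Any]) -> dict[str, Any]:
        """Pull the governed data this capability needs."""

    @abstractmethod
    def infer(self, features: dict[str, Any]) -> InsightResult:
        """Run the AI service (or data product) against the gathered data."""

    def serve(self, context: dict[str, Any]) -> InsightResult:
        """The single "known path": gather, infer, return a standard envelope."""
        return self.infer(self.gather(context))
```

The specifics would vary by organization; the point is that every new capability is delivered against the same shape, so consuming applications integrate the same way every time.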

Taking this approach should provide a consistent end-user experience of services as they are brought online, a relatively streamlined delivery process as you are essentially mass-producing capabilities over time, and a relatively cost-optimized environment as you are eliminating the bloat and waste of operating multiple delivery silos that would eventually impede speed-to-market at scale and lead to technical debt.

From an architecture standpoint, without going too deep into the mechanics here: to the extent that the current-state and future-state enterprise architecture models differ, it is worth evaluating things like data virtualization, as well as adapters/facades in the integration layer, as ways to translate between the current and future models, so there is logical consistency in the solution architecture even where the underlying physical implementations vary.  Our goal in enterprise architecture should always be to facilitate rapid execution, while promoting standards and simplification and reducing complexity and technical debt wherever possible over time.
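As a simple illustration of the adapter/facade idea, and assuming hypothetical field names, the integration layer could translate legacy records into the future-state canonical model so consumers only ever see one logical shape:

```python
# Hypothetical field names: the integration layer adapts a current-state
# (legacy) record to the future-state canonical model, so the solution
# architecture stays logically consistent even where physical
# implementations differ.
LEGACY_TO_CANONICAL = {
    "CUST_NO": "customer_id",
    "CUST_NM": "customer_name",
    "REGION_CD": "region",
}

def to_canonical(legacy_record: dict) -> dict:
    """Facade: consumers only ever see the canonical shape."""
    return {new: legacy_record[old]
            for old, new in LEGACY_TO_CANONICAL.items()
            if old in legacy_record}

# to_canonical({"CUST_NO": "42", "CUST_NM": "Acme", "REGION_CD": "NA"})
# -> {"customer_id": "42", "customer_name": "Acme", "region": "NA"}
```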

 

Govern Standards and Quality

With a templatized delivery model and target architecture in place, the next key aspect of the transition is both to govern the delivery of new capabilities, identifying opportunities to develop and evolve standards, and to evolve the “template” itself, whether that involves adding automation to the delivery process, building reusable frameworks or components, or creating other assets that can reduce friction and ease future efforts.

Once new capabilities are coming online, the other key aspect is to review them for quality and performance, looking for ways to evolve the approach, adjust the architecture, and continue to refine the understanding of how these integrated capabilities can best be delivered and leveraged on an end-to-end basis.

Again, the overall premise of this entire series of articles is to chart a path towards an integrated enterprise environment for applications, data, and AI in the future.  To be repeatable, we need to be consistent in how we plan, execute, govern, and evolve in the delivery of capabilities over time.

Certainly, there will be learnings that come from delivery, especially early in the adoption and integration of these capabilities. The way that we establish and enable the governance and evolution stemming from what we learn is critical in making delivery more predictable and ensuring more useful capabilities and insights over time.

 

Evolve in a Thoughtful Manner

With our learnings firmly in hand, the mental model I believe is worth considering is a scaled Agile-type approach, with “increment-level” (e.g., monthly) scheduled retrospectives to review the learnings across multiple sprints/iterations (ideally across multiple products/domains) and to identify opportunities to adjust estimation metrics, refine design patterns, develop core/reusable services, and do anything else that would improve efficiency, cost, quality, or predictability.

Whether these occur monthly at the outset and eventually space out to a quarterly process as the environment and standards are better defined could depend on a number of factors, but the point is not to be too reactive to any individual implementation; instead, look across a set of deliveries for consistent issues, patterns, and opportunities that will have a disproportionate impact on the environment as a whole.

The other key aspect to remember in relation to evolution is that the capabilities of AI in particular are evolving very rapidly at this point, which is another reason for thinking about the overall architecture, separation of concerns, and standards for how you integrate in a very deliberate way.  A new version of a tool or technology shouldn’t require us to rewrite a significant portion of our footprint; ideally, it should be a matter of upgrading from the previous version and having the new capabilities available everywhere that service is consumed, or of swapping out one technology for another while everything on the other side of an API remains nearly unchanged.  This is, admittedly, a fairly “perfect world” description of what would likely happen in practice, depending on the nature of the underlying change, but the point is that, without allowing for these changes up front in the design, the level of disruption they cause will likely be amplified and slow overall progress.
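One way to picture “everything on the other side of an API remaining unchanged” is a thin, stable interface between consumers and whatever model or vendor sits behind it.  A minimal sketch, assuming Python and hypothetical names:

```python
from typing import Protocol

class TextModel(Protocol):
    """The stable interface consumers depend on, never a vendor SDK directly."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    """Stand-in implementation; a real one would wrap a specific vendor or version."""
    def complete(self, prompt: str) -> str:
        return f"[completion for: {prompt[:40]}]"

def summarize(model: TextModel, document: str) -> str:
    # Everything on this side of the interface stays unchanged when the
    # underlying model or vendor is upgraded or swapped.
    return model.complete(f"Summarize the following:\n{document}")
```

Swapping vendors then means writing one new implementation of the interface, not touching every consumer.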

 

Summing Up

Change is a constant in technology.  It’s a part of life given that things continue to evolve and improve, and that’s a good thing to the extent that it provides us with the ability to solve more complex problems, create more value, and respond more effectively to business needs as they evolve over time.  The challenge we face is in being disciplined enough to think through the approach so that we can become effective, fast, and repeatable.  Delivering individual projects isn’t the goal; it’s delivering capabilities rapidly and repeatably, at scale, with quality.  That’s an exercise in disciplined delivery.

Having now covered the technology and delivery-oriented aspects of the future state concept, the remaining article will focus on how to think about the organizational implications on IT.

 

Up Next: IT Organizational Implications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/15/2025

The Intelligent Enterprise 2.0 – Deconstructing Data-Centricity

Does having more data automatically make us more productive or effective?

When I wrote the original article on “The Intelligent Enterprise”, I noted that the overall ecosystem for analytics needed to change.  In many environments, data is moved from applications to secondary solutions, such as data lakes, marts, or warehouses, and enriched and integrated with other data sets to produce analytical outputs or dashboards that provide transparency into operating performance.  Much of this is reactive, ‘after-the-fact’ analysis of things we would rather do right or optimize the first time, as events occur.  The extension of that thought process was to move those insights to the front of the process, integrate them with the work as it is performed, and create a set of “intelligent applications” that would drive efficiency and effectiveness to levels we haven’t been able to accomplish before.  Does this eliminate the need for downstream analytics, dashboards, and reporting?  No, for many reasons, but the point is to make the future data and analytics environment about establishing a model that enables insight-driven, orchestrated action.

This is the fifth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

 

Starting with the Consumer (1)

“Just give me all the data” is a request that isn’t unusual in technology.  Whether that is a byproduct of the challenges associated with completing analytics projects, a lack of clear requirements, or something else, these situations raise an immediate issue in practice: what is the quality of the underlying data, and what are we actually trying to do with it?

It’s tempting to start an analytics effort from the data storage layer and work our way up the stack to the eventual consumer.  Arguably, this is a central premise of being “data-centric.”  While I agree with the importance of data governance and management (the next topic), that doesn’t mean everything is relevant or useful to an end consumer, and too much data very likely just creates management overhead, technical complexity, and information overload.

A thoughtful approach needs to start with identifying the end consumers of the data, their relative priority, and their information and insight needs, and then developing a strategy to deliver those capabilities over time.  In a perfect world, that should leverage a common approach and delivery infrastructure so it can be provided in iterations and elaborated to include broader data sets and capabilities across domains over time.  The end state should be on par with having an integrated data model and a consistent way of delivering data products and analytics services that can be consumed by the intelligent applications, agents, and solutions supporting the end consumer.
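For illustration only, a consistently delivered “data product” might amount to little more than a standard envelope that every domain publishes against; the names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A hypothetical, consistent envelope for every published data product."""
    name: str                # e.g., "sales.customer-360"
    owner: str               # the accountable business/domain owner
    schema: dict[str, str]   # field name -> type; the published contract
    quality_checks: list[str] = field(default_factory=list)

customer_360 = DataProduct(
    name="sales.customer-360",
    owner="sales-domain",
    schema={"customer_id": "string", "lifetime_value": "decimal"},
    quality_checks=["customer_id is unique", "lifetime_value >= 0"],
)
```

Applications, agents, and analytics solutions would all consume the same shape, regardless of the domain that produced it.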

As an interesting parallel, it is worth noting that OpenAI has looked to converge its reasoning models and standard large language models from the GPT-4 era into a single approach for its GPT-5 release, so that end customers don’t need to be concerned with having selected the “right” model for their inquiry.  It shouldn’t matter to the data consumer.  The engine should be smart enough to leverage the right capabilities based on the nature of the request, and that is what I am suggesting in this regard.

 

Data Management and Governance (2)

Without the right level of business ownership, the infrastructure for data and analytics doesn’t really matter, because the value to be obtained from optimizing the technology stack will be limited by the quality of the data itself.

Starting with master data, it is critical to identify and establish governance and management for the minimum essential data in each domain (e.g., customer in sales, chart of accounts in finance), and for the relationships between those entities in terms of an enterprise data model.

Governing data quality has a cost and requires time, depending on the level of tooling and infrastructure in place, and it is important to weigh the value of the expected outcomes in relation to the complexity of the operating environment overall (people, process, and technology combined).

 

From Content to Causation (3)

Finally, with the level of attention given to Generative AI and LLMs, it is important to note the value to be realized when we shift our focus from content to processes and transactions in the interest of understanding causation and influencing business outcomes.

In a manufacturing context, with the increasing level of interplay between digital equipment, digital sensors, robotics, applications, and digital workers, there is a significant opportunity to orchestrate, to gather and analyze increasing volumes of data, and ultimately to optimize production capacity, avoid unplanned events, and increase the safety and efficacy of workers on the shop floor.  This requires deliberate and intentional design, with outcomes in mind.

The good news is that technologies are advancing in their ability to analyze large data sets and derive models to represent the characteristics and relationships across the various actors in play, and I believe we’ve only begun to scratch the surface on the potential for value creation in this regard.

 

Summing Up

Pulling back to the overall level, data is critical, but it’s not the endgame.  Designing the future enterprise technology environment is about exposing and delivering the services that enable insightful, orchestrated action on behalf of the consumers of that technology.  That environment will be a combination of applications, AI, and data and analytics, synthesized into one meaningful, seamless experience.  The question is how long it will take us to make that possible.  The sooner we begin the journey of designing that future state with agility, flexibility, and integration in mind, the better.

Having now elaborated the framework and each of the individual dimensions, the remaining two articles will focus on how to approach moving from the current to future state and how to think about the organizational implications on IT.

Up Next: Managing Transition

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/03/2025

The Intelligent Enterprise 2.0 – Evolving Applications

No one builds applications or uses new technology with the intention of making things worse… and yet we have and still do at times.

Why does this occur, time and again, with technology?  The latest thing was supposed to “disrupt” or “transform” everything.  I read something that suggested this was it.  The thing we needed to do, that a large percentage of companies were planning, that was going to represent $Y billions of spending two years from now, generating disproportionate efficiency, profitability, and so on.  Two years later (if that), there was something else being discussed, a considerable number of “learnings” from the previous exercise, but the focus was no longer the same… whether that was Windows applications and client/server computing, the internet, enterprise middleware, CRM, Big Data, data lakes, SaaS, PaaS, microservices, mobile applications, converged infrastructure, or public cloud… the list is quite long, and I’m not sure that productivity and the value/cost equation for technology investments are any better in many cases.

The belief that technology can have such a major impact and the degree of continual change involved have always made the work challenging, inspiring, and fun.  That being said, the tendency to rush into the next advance without forming a thoughtful strategy or being deliberate about execution can be perilous in what it often leaves behind, which is generally frustration for the end users/consumers of those solutions and more unrealized benefits and technical debt for an organization.  We have to do better with AI, bringing intelligence into the way we work, not treating it as something separate entirely.  That’s when we will realize the full potential of the capabilities these technologies provide.  In the case of an application portfolio, this is about our evolution to a suite of intelligent applications that fit into the connected ecosystem framework I described earlier in the series.

This is the fourth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Before exploring various scenarios for how we will evolve the application landscape, it’s important to note a couple of overall assumptions:

  • End user needs and understanding should come first, then the capabilities
  • Not every application needs to evolve. There should be a benefit to doing so
  • I believe the vast majority of product/platform providers will eventually provide AI capabilities
  • Providing application services doesn’t mean I have to have a “front-end” in the future
  • Governance is critical, especially to the extent that citizen development is encouraged
  • If we’re not mindful of how many AI apps we deploy, we will cause confusion and productivity loss because of the fragmented experience

 

Purchased Software (1)

The diagram below highlights a few different scenarios for how I believe intelligence will find its way into applications.

In the case of purchased applications, between the market buzz and the continuing desire for differentiation, it is extremely likely that a large share of purchased software products and platforms will have some level of “AI” included in the future, whether that is an AI/ML capability leveraging OLTP data that lives within the product’s ecosystem, or something more causal and advanced in nature.

I believe it is important to delineate between internally generated insights and ones coming as part of a package, for several reasons.  First, we may not always want to include proprietary data in purchased solutions, especially to the degree they are hosted in the public cloud and we don’t want to expose our internal data to that environment from a security, privacy, or compliance standpoint.  Second, we may not want to expose the rules and IP associated with our decisioning and specific business processes to the solution provider.  Third, to the degree we maintain these as separate things, we create flexibility to migrate to a different platform more easily than if we are tightly woven into a specific package.  And, finally, the data ingress required to commingle a larger data set and expand what a package could provide “out of the box” may inflate the operating costs of the platforms unnecessarily (this can definitely be the case with ERP platforms).

The overall assumption is that, rather than requiring custom enhancements of a base product, the goal from an architecture standpoint would be for the application to be able to consume and display information from an external AI service provided by your organization.  This is available today within multiple ERP platforms, as an example.
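As a purely illustrative sketch of that pattern (the endpoint and payload below are hypothetical), the package merely renders an insight that is computed by a service your organization owns and exposes:

```python
import json
import urllib.request

# Hypothetical internal endpoint; the package only renders the result.
INSIGHT_URL = "https://ai.internal.example.com/insights/reorder-recommendation"

def fetch_insight(order_id: str) -> dict:
    """Call the internally owned AI service from (or on behalf of) the package."""
    req = urllib.request.Request(
        f"{INSIGHT_URL}?order_id={order_id}",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The proprietary data, rules, and model stay on your side of the boundary; the package consumes only the resulting insight.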

The graphic below shows two different migration paths towards a future state where applications have both package-provided and internally provided AI capabilities: one where the package provider moves first, internal capabilities are developed in parallel as a sidecar application and then eventually fully integrated into the platform as a service; and the other way around, where the internal capability is developed first, run in parallel, and then folded into the platform solution.

Custom-Developed Software (2)

In terms of custom software, the challenge is, first, evaluating whether there is value in introducing additional capabilities for the end user and, second, understanding the implications for trying to integrate the capabilities into the application itself versus leaving them separate.

In the event that there is uncertainty about the end-user value of having the capability, implementing the insights as a sidecar/standalone application, then integrating them within the application as a second step, may be the best approach.

If a significant amount of redesign or modernization is required to directly integrate the capabilities, it may make sense either to evaluate market alternatives as a replacement for the internal application or to leave the insights separate entirely.  Similar to purchased products, the insights should be delivered as a service and integrated into the application, rather than built as an enhancement, to provide greater flexibility for how they are leveraged and to simplify migration to a different solution in the future.

The third scenario in the diagram above is meant to reflect a separate insights application that is then folded into the custom application as a service over time, so that it becomes a more seamless experience for the end user.

Either way, whether it be a purchased or custom-built solution, the important points are to decouple the insights from the applications to provide flexibility, and to provide both a front-end for users to interact with the applications and a service-based approach, so that an agent acting on behalf of the user, or the system itself, could orchestrate the various capabilities exposed by that application without the need for user intervention.

 

From Disconnected to Integrated Insights (3)

One of the reasons for separating out these various migration scenarios is to highlight the risk that introducing too many sidecar or special-/single-purpose applications could cause significant complexity if not managed and governed carefully.  Insights should serve a process or need, and if the goal is to make a user more productive, effective, or safer, those capabilities should ultimately be used to create more intelligent applications that are easier to use.  To that end, there likely would be value in working through a full product lifecycle when introducing new capabilities, to determine whether each is meant to be preserved, integrated with a core application (as a service), or tested and possibly decommissioned once a more integrated capability is available.

 

Summing Up

While the experience of a consumer of technology likely will change and (hopefully) become more intuitive and convenient with the introduction of AI and agents, being thoughtful in how we develop an application architecture strategy, leverage components and services, and put the end user first will remain a priority if we are going to obtain the value of these capabilities at an enterprise level.  Intelligent applications are where we are headed, and our ability to work with an integrated vision of the future will be critical to realizing the benefits available in that world.

The next article will focus on how we should think about the data and analytics environment in the future state.

Up Next: Deconstructing Data-Centricity

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/30/2025

The Intelligent Enterprise 2.0 – Integrating Artificial Intelligence

“If only I could find an article that focused on AI”… said no one, any time recently.

In a perfect world, I don’t want “AI” anything, I want to be able to be more efficient, effective, and competitive.  I want all of my capabilities to be seamlessly folded into the way people work so they become part of the fabric of the future environment.  That is why having an enterprise-level blueprint for the future is so critically important.  Things should fit together seamlessly and they often don’t, especially when we don’t design with integration in mind from the start.  That friction slows us down, costs us more, and makes us less productive than we should be.

This is the third post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Natural Language First (1)

I don’t own an Alexa device, but I have certainly had the experience of talking to someone who does, and heard them say “Alexa, do this…”, then repeat themselves, then repeat themselves again, adjusting their word choice slightly, or slowing down what they said, with increasing levels of frustration, until eventually the original thing happens.

These experiences of voice-to-text and natural language processing have been anything but frictionless: quite the opposite, in fact.  With the advent of large language models (LLMs), it’s likely that these kinds of interactions will become considerably easier and more accurate, with written and spoken input becoming a means to initiate one or more actions from an end-user standpoint.

Is there a benefit?  Certainly.  Take the case of a medical care provider directing calls to a centralized number for post-operative and case management follow-ups.  A large volume of calls needs to be processed, and there are qualified medical personnel available to handle them on a prioritized basis.  The technology can play the role of a silent listener, both recording key points of the conversation and recommended actions (saving time in documenting the calls) and making contextual observations integrated with the healthcare worker’s application (providing insights) to potentially help address any needs that arise mid-discussion.  The net impact could be a higher volume of calls processed, due to the reduction in time spent documenting calls, and improved quality of care from the additional insights provided to the healthcare professional.  Is this artificial intelligence replacing workers?  No, it is helping them be more productive and effective by integrating into the work they are already doing, reducing the lower-value activities and allowing them to focus more on patient care.

If natural language processing can be integrated such that comprehension is highly accurate, I can foresee a large amount of end-user input being provided this way in the future.  That being said, the mechanics of a process and the associated experience still need to be evaluated so that it doesn’t become as cumbersome as some voice response mechanisms in place today, asking you to “say or enter” a response, then confirming what you said back to you, then asking you to confirm that, only to repeat this kind of process multiple times.  No doubt, there is a spreadsheet somewhere indicating savings for organizations in using this kind of technology by comparison with having someone answer a phone call.  The problem is that there is a very tedious and unpleasant customer experience on the other side of those savings, and that shouldn’t be the way we design our future environments.

Orchestration is King (2)

Where artificial intelligence becomes powerful is when it pivots from understanding to execution.

Submitting a natural language request, “I would like to…” or “Do the following on my behalf…”, having the underlying engine convert that request to a sequence of actions, and then ultimately executing those requests is where the power of orchestration comes in.

Going back to my earlier article on The Future of IT from March of 2024, I believe organizations will pivot from needing to create, own, and manage a large percentage of their technology footprint to largely becoming consumers of technologies produced by others, which they configure to enable their business rules and constraints and orchestrate to align with their business processes.

Orchestration will exist on four levels in the future:

  • That which is done on behalf of the end user to enable and support their work (e.g., review messages, notifications, and calendar to identify priorities for my workday)
  • That which is done within a given domain to coordinate transaction processing and optimize leverage of various components within a given ecosystem (e.g., new hire onboarding within an HR ecosystem or supplier onboarding within the procurement domain)
  • That which is done across domains to coordinate activity that spans multiple domains (e.g., optimizing production plans coming from an ERP system to align with MES and EAM systems in Manufacturing, given execution and maintenance needs)
  • Finally, that which is done within the data and analytics environment to minimize data movement and compute while leveraging the right services to generate a desired outcome (e.g., optimizing cost and minimizing the data footprint by comparison with more monolithic approaches)

Beyond the above, we will also see agents taking action on behalf of other, higher-level agents, in more of a hierarchical relationship where a process is decomposed into subtasks executed (ideally in parallel) to serve an overall need.

Each of these approaches refers back to the concept of leveraging defined ecosystems and standard integration, as discussed in the previous article on the overarching framework.

What is critical is to think about this as a journey towards maturing and exposing organizational capabilities.  If we assume an end user wants to initiate a set of transactions through a verbal command, which is then turned into a process to be orchestrated on their behalf, we need to be able to expose the services that are required to ultimately enable that request, whether that involves applications, intelligence, data, or some combination of the three.  If we establish the underlying framework to enable this kind of orchestration, however it is initiated, through an application, an agent, or some other mechanism, we could theoretically plug new capabilities into that framework over time, expanding our enterprise-level technology capabilities and creating exponential opportunity to make more of our technology investments.  The goal is to break down all the silos and make every capability we have accessible to be orchestrated on behalf of an end user or the organization.
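A toy sketch of that framework, with hypothetical capability names: every exposed service registers itself, and a plan, however it was derived (from a verbal command, an agent, or an application), is executed against the registry.  Adding a new capability expands what can be orchestrated without changing the engine itself.

```python
from typing import Callable

CAPABILITIES: dict[str, Callable[[dict], dict]] = {}

def capability(name: str):
    """Register an exposed service under a stable, addressable name."""
    def register(fn: Callable[[dict], dict]):
        CAPABILITIES[name] = fn
        return fn
    return register

@capability("hr.start-onboarding")          # hypothetical capability
def start_onboarding(ctx: dict) -> dict:
    return {**ctx, "onboarding": "started"}

@capability("it.provision-laptop")          # hypothetical capability
def provision_laptop(ctx: dict) -> dict:
    return {**ctx, "laptop": "ordered"}

def orchestrate(plan: list[str], ctx: dict) -> dict:
    """Execute a derived plan, step by step, against the registry."""
    for step in plan:
        ctx = CAPABILITIES[step](ctx)
    return ctx

# e.g., the plan derived from "onboard my new hire starting Monday":
result = orchestrate(["hr.start-onboarding", "it.provision-laptop"],
                     {"employee": "J. Doe"})
```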

I met with a business partner not that long ago who was a strong advocate for “liberating our data”.  My argument would be that the future of an intelligent enterprise should be to “liberate all of our capabilities”.

Insights, Agents, and Experts (3)

Having focused on orchestration, which is a key capability within agentic solutions, I did want to come back to three roles that I believe AI can fulfill in an enterprise ecosystem of the future:

  • Insights – observations or recommendations meant to inform a user to make them more productive, effective, or safer
  • Agents – applications that orchestrate one or more activities on behalf of or in concert with an end user
  • Experts – applications that act as a reference for learning and development and serve as a representation of the “ideal” state, either within a given domain (e.g., a Procurement “Expert” may have accumulated knowledge of best practices, market data, and internal KPIs and goals that allow end users and applications to interact with it as an interactive knowledge base meant to help optimize performance) or across domains (i.e., extending the role of a domain-based expert to focus on enterprise-level objectives and to help calibrate the goals of individual domains to achieve those overall outcomes more effectively)

I’m not aware of “Expert”-type capabilities existing today, for the most part, but I do believe that having more of an autonomous entity that can provide support, guidance, and benchmarking to help optimize the performance of individuals and systems could be a compelling way to leverage AI in the future.

AI as a Service (4)

I will address how AI should be integrated into an application portfolio in the next article, but I felt it was important to clarify that, while AI is being discussed as an objective, a product, and an outcome in many cases today, I believe it is important to think of it as a service that lives and is developed as part of a data and analytics capability.  This feels like the right logical association because the insights and capabilities associated with AI are largely data-centric and heavily model-dependent, and that should live separately from the applications meant to express those insights and capabilities to an end user.

Where the complicating factor could arise, from my experience, is in how the work is approached and in the capabilities of the leaders charged with AI implementation, something I will address in the seventh article in this series on organizational considerations.

Suffice it to say that I see AI as an application-oriented capability, even though it is heavily dependent on data and your underlying model.  To the extent that a number of data leaders come from backgrounds focused on the storage, optimization, and performance of traditional or even advanced analytics/data science capabilities, they may not be ideal candidates to establish the vision for AI, given it benefits from more of an outside-in (consumer-driven) mindset than an inside-out (data-focused) approach.

Summing Up

With all the attention being given to AI, the main purpose of breaking it down in the manner I have above is to try to think about how we integrate and leverage it within and across an enterprise and, most importantly, not to treat it as a silo or a one-off.  That is not the right way to approach AI moving forward.  It will absolutely become part of the way people work, but it is a capability like many others in technology, and it is critically important that we continue to start with the consumers of technology and how we are making them more productive, effective, safe, and so on.

The next two articles will focus on how we integrate AI into the application and data environments.

Up Next: Evolving Applications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/28/2025

The Intelligent Enterprise 2.0 – A Framework for the Future

Why does it take so much time to do anything strategic?

This is not an uncommon question to hear in technology and, more often than not, the answer is relatively simple: because the focus in the organization is on delivering projects, not on establishing an environment that facilitates accelerated delivery at scale.  Those are two very different things and, unfortunately, it requires more thought, partnership, and collaboration between business and technology teams than headlines like “let’s implement product teams” or “let’s do an Agile transformation” imply.  Can those mechanisms be part of how you execute in a scaled environment?  Absolutely, but choosing an operating approach and methodology shouldn’t precede or take priority over having a blueprint to begin with, and that’s the focus of this article.

This is the second post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

User-Centered Design (1)

A firm conducts a research project on behalf of a fast-food chain.  There is always a “coffee slick” in the drive thru, where customers stop and pour out part of their beverage.  Is there something wrong with the coffee?  No.  Customers are worried about burning themselves.  The employees are constantly overfilling the drink, assuming that they are being generous, but actually creating a safety concern by accident.  Customers don’t want to spill their coffee, so that excess immediately goes to waste.

In the world of technology, the idea of putting enabling technology in the hands of “empowered” end users, or delivering that one more “game changing” tool or application, is tempting, but often doesn’t deliver the value that is originally expected.  This can occur for a multitude of reasons, but what can often be the case is an inadequate understanding of the end user’s mental model and approach to performing their work, or the mistaken belief that more technology in the hands of a user is always a good thing (it definitely isn’t).

One learning that came out of the dotcom era was the value to be derived from investing in user-centered design: thinking through users’ needs and workflow, and designing experiences around them.  Another was the cost of assuming that a disruptive technology (the internet, in this case) was cause to spin out a separate organization (the “eBusiness” teams of the time) meant to incubate and accelerate development of capabilities that embraced the new technology.  These teams generally lacked a broader understanding of the “traditional” business and its associated operating requirements, and thus began the “bricks versus clicks” issues and channel conflict that eventually led to these capabilities being folded back into the broader organization, but only after having spent time and (in many cases) a considerable amount of money experimenting without producing sustainable business value.

In the case of artificial intelligence, it’s tempting to want to stand up new organizations or repurpose existing ones to mobilize the technology, or to assume everything will eventually be relegated to a natural language-based interface where a user provides a description of what they want to an agent acting as their personal virtual assistant, with the system deriving the appropriate workflow and orchestrating the necessary actions to support the request.  While that may be a part of our future reality, taking an approach similar to the dotcom era would be a mistake, and there will be lost opportunity where this is the chosen path.

To be effective in a future digital world with AI, we need to think of how we want things integrated at the outset, starting with that critical understanding of the end user and how they want to work.  Technology is meant to enable and support them, not the other way around, and leading with technology versus a need is never going to be an optimal approach… a lesson we’ve been shown many times over the years, no matter how disruptive the technology advancement has been.

I will address some of the organizational implications of the model in the seventh article in this series, so the remainder of this post will be on the technology framework itself.

Designing Around Connected Ecosystems (2)

Domain-driven design is not a new concept in technology.

As I mentioned in the first article, the technology footprint of most medium- to large-scale organizations is complex and normally steeped in redundancies, varied architectures, unclear boundaries between custom and purchased software, a mix of hosted and cloud-based environments, hard-coded integrations, and a lack of standard ways of moving data within and across domains.

While package solutions offer some level of logical separation of concerns between different business capabilities, the natural tendency in the product and platform space is to move towards more vertically or horizontally integrated solutions that create customer lock-in and make interoperability very challenging, particularly in the ERP space.  What also tends to occur is that an organization’s data model is biased to conform to what works best for a package or platform, which is not necessarily the best representation of their business or their consumers of technology (something I will address in article 5 on Data & Analytics).

In terms of custom solutions, given they are “home grown”, there is a reasonable probability that, unless they were well-architected at the outset, they provide multiple capabilities without clear separation of concerns, in ways that make them difficult to integrate with other systems in “standard” ways.

While there is nothing unique about these kinds of challenges, the problem comes when new technology capabilities like AI become available and we want either to replace or to integrate things in a different way.  This is where the lack of enterprise-level design and a broader, component-based architecture takes its toll, because there likely will be significant remediation, refactoring, and modernization required to enable existing systems to interoperate with the new capabilities.  These things add time, risk, and ultimately cost to our ability to respond when these opportunities arise, and no one wants to put new plumbing in a house that is already built with a family living in it.

On the other hand, in an environment with defined, component-based ecosystems that uses standard integration patterns, replacing individual components becomes considerably easier and faster, with much less disruption at both a local- and an enterprise-level.  In a well-defined, component-based environment, I should be able to replace my Talent Acquisition application without having to impact my Performance Management, Learning & Development, or Compensation & Benefits solutions from an HR standpoint.  Similarly, I shouldn’t need to make changes to my Order Management application within my Sales ecosystem because I’m transitioning to a different CRM package.  To the extent that you are using standard business objects to support integration across systems, the need to update downstream systems in other domains should be minimized as well.  Said differently, if you want to be fast, be disciplined in your design.
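To illustrate with a hypothetical standard business object for the HR example above: if every component in the ecosystem integrates against a shared shape like this, replacing the Talent Acquisition vendor changes the producer, not the consumers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateHired:
    """Hypothetical standard business object shared across the HR ecosystem.

    Performance Management, Learning & Development, and Compensation &
    Benefits all consume this shape, so swapping the Talent Acquisition
    application only changes who produces it.
    """
    candidate_id: str
    legal_name: str
    start_date: str   # ISO 8601, e.g., "2026-01-05"
    job_code: str
```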

Modular, Composable, Standardized (3)

Beyond designing towards a component-based environment, it is also important to think about capabilities independent of a given process so you have more agility in how you ultimately leverage and integrate different things over time. 

Using a simple example from personal lines insurance, I want to be able to support a third-party rating solution by exposing a “GetQuote” function that takes the necessary customer information and coverage-related parameters and sends back a price.  From a carrier standpoint, the process may involve ordering credit and pulling a DMV report (for accidents and violations) as inputs.  I don’t necessarily want these capabilities to be developed internal to the larger “GetQuote”, because I may want to leverage them for any one of a number of other reasons, so those smaller-grained (more atomic) transactions should also be defined as services that can be leveraged by the larger one.  While this is a fairly trivial case, there are often situations where delivery efforts move at such a rapid pace that things are tightly coupled or built together that really should be discrete and separate, providing more flexibility and leverage of those individual services over time.
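A stubbed sketch of that composition (the signatures and pricing logic below are invented for illustration): the atomic services are exposed in their own right, and “GetQuote” is just one composition of them, not the only place they can be used.

```python
def order_credit(customer: dict) -> int:
    """Atomic service: order a credit score (stubbed for illustration)."""
    return 720

def pull_dmv_report(customer: dict) -> dict:
    """Atomic service: pull accidents/violations (stubbed for illustration)."""
    return {"accidents": 0, "violations": 1}

def get_quote(customer: dict, coverage: dict) -> float:
    """Coarse-grained service composed from the atomic ones above."""
    credit = order_credit(customer)
    dmv = pull_dmv_report(customer)
    base = 1000.0 * coverage.get("liability_factor", 1.0)  # invented pricing
    surcharge = 75.0 * (dmv["accidents"] + dmv["violations"])
    discount = 100.0 if credit >= 700 else 0.0
    return base + surcharge - discount
```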

This also can occur in the data and analytics space, where there are normally many different tools and platforms between the storage and consumption layers, and ideally you want to optimize data movement and computing resources such that only the relevant capabilities are included in a data pipeline, based on specific customer needs.

The flexibility described above is predicated on a well-defined architecture that is service-based and composable, with standard integration patterns, that leverages common business objects for as many transactions as practical.  That isn’t to say there are never times when the economics make sense to custom-code something or to leverage point-to-point integration; rather, thinking about reuse and standardized approaches up front is a good delivery practice to avoid downstream cost and complexity, especially when the rate of new technologies being introduced is as high as it is today.

Leveraging Standard Integration (4)

Having mentioned standard integration above, my underlying assumption is that we’re heading towards a near real-time environment where streaming infrastructure and publish/subscribe models are going to be critical to enabling delivery of key insights and capabilities to consumers of technology.  To the extent that we want that infrastructure to scale and work efficiently and consistently, there is a built-in incentive to be intentional about the data we transmit (whether that is standard business objects or smaller data sets coming from connected equipment and devices) as well as the ways we connect to these pipelines across application and data solutions.  Adding a data publisher or consumer shouldn’t require rewriting anything per se, any more than plugging a new appliance into a power outlet in your home should require you to unplug something else or change the breaker panel and wiring itself (except in extreme cases).
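A toy publish/subscribe sketch makes the point: adding a publisher or consumer means registering against a topic, not rewiring what already exists (the topic name below is a hypothetical standard business object).

```python
from collections import defaultdict
from typing import Callable

_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    """A new consumer plugs in without touching existing producers or consumers."""
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    """A producer emits a standard business object to whoever is listening."""
    for handler in _subscribers[topic]:
        handler(event)

subscribe("sales.order-created", lambda e: print("fulfillment saw", e))
publish("sales.order-created", {"order_id": "A-100", "amount": 250.0})
```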

Summing Up

I began this article with the observation about delivering projects by comparison with establishing an environment for delivering repeatably at scale.  In my experience, depending on the scale of an organization, some level of many of the things I’ve mentioned above will be in place, along with a potentially large set of pain points and opportunities across the footprint where things are suboptimal.

This is not about boiling the ocean or suggesting we should start over.  The point of starting with the framework itself is to raise awareness that the way we establish the overall environment has a significant ripple effect into our ability to do things we want to do downstream to leverage new capabilities and get the most out of our technology investments later on.  The time spent in design is well worth the investment, so long as it doesn’t become analysis paralysis.

To that end, in summary:

  • Design from the end user and their needs first
  • Think and design with connected ecosystems in mind
  • Be purposeful in how you design and layer services to promote reuse and composability
  • Leverage standards in how you integrate solutions to enable near real-time processing

It is important to note that, while there are important considerations in terms of hosting, security, and data movement across platforms, I’m focusing largely on the organization and integration of the portfolios needed to support an enterprise.  From a physical standpoint, the conceptual diagram isn’t meant to suggest that any or all of these components or connected ecosystems need to be managed and/or hosted internal to an organization.  My overall belief is that, the more we move to a service-driven environment, the more a producer/consumer model will emerge where corporations largely act as an integrator and orchestrator (aka “consumer”) of services provided by third parties (the “producers”).  To the extent that the architecture and standards referenced above are in place, there shouldn’t be any significant barriers to moving from a more insourced and hosted environment to a more consumption-based, outsourced, and cloud-native environment in the future.

With the overall framework in place, the next three articles will focus on the individual elements of the environment of the future, in terms of AI, applications, and data.

Up Next: Integrating Artificial Intelligence

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/26/2025

The Intelligent Enterprise 2.0 – The Cost of Complexity

How did we get here…?

The challenges involved in managing a technology footprint at any medium-to-large organization today are significant, for a multitude of reasons:

  • Proliferation of technologies and solutions that are disconnected or integrated in inconsistent ways, making simplification or modernization efforts difficult to deliver
  • Mergers and acquisitions that bring new systems into the landscape that aren’t rationalized with or migrated to existing systems, creating redundancy, duplication of capabilities, and cost
  • “Speed-to-market” initiatives involving unique solution approaches that increase complexity and cost of ownership
  • A blend of in-house and purchased software solutions, hosted across various platforms (including multi-cloud), increasing complexity and cost of security, integration, performance monitoring, and data movement
  • Technologies, especially artificial intelligence (AI), advancing at a rate that makes it impossible for organizations to integrate them quickly enough in a consistent manner
  • Decentralized or federated technology organizations that operate with relative autonomy, independent of standards, frameworks, or governance, which increases complexity and cost

The result of any of the above factors can be enough cost and complexity that the focus within a technology organization shifts from innovation and value creation to struggling to keep the lights on and maintain a reliable and secure operating environment.

This article will be the first in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Why It Matters

Before getting into the dimensions of the future state, I wanted to first clarify how these technology challenges manifest themselves in meaningful ways, because complexity isn’t just an IT problem; it’s a business issue, and partnership is important in making thoughtful choices in how we approach future solutions.

 

Lost Productivity

A leadership team at a manufacturing facility meets first thing in the morning.  It is the first of multiple meetings they will have throughout the course of the day.  They are setting priorities for the day collectively because the systems that support them (a combination of applications, analytics solutions, equipment diagnostics, and AI tools) are all providing different perspectives on priorities and potential issues, in disconnected ways, and it is now on the leadership team to decide which of these should receive attention and priority in the interest of making their production targets for the day.  Are they making the best choices in terms of promoting efficiency, quality, and safety?  There’s no way to know.

Is this an unusual situation?  Not at all.  Today’s technology landscape is often a tapestry of applications with varied levels of integration and data sharing, data apps and dashboards meant to provide insights and suggestions, and now AI tools to “assist” or make certain activities more efficient for an end user.

The problem is what happens when all these pieces end up on someone’s desktop, browser, or mobile device, and they are left to copy data from one solution to another, arbitrate which of various alerts and notifications is most important, and identify dependencies to make sure they are taking the right actions in the right sequence (in a case like directed work activity).  Quite often that time is lost productivity in itself, regardless of which path they take, and the impact may be amplified further given that retention and/or high turnover are real issues in some jobs, reducing the experience available to navigate these challenges successfully.

 

Lower Profitability

The result of this lost productivity and ever-expanding technology footprint is both lost revenue (to the extent it hinders production or effective resource utilization) and higher operating cost, especially to the degree that organizations introduce the next new thing without retiring or replacing what was already in place, or integrating things effectively.  Speed-to-market is a short-term concept that tends to cause longer-term cost of ownership issues (as I previously discussed in the article “Fast and Cheap Isn’t Good”), especially to the degree that there isn’t a larger blueprint in place to make sure such advancements are done in a thoughtful, deliberate manner.

To this end, how we do something can be as important as what we intend to do, and there is an argument for thinking through the operating implications of new technology efforts with a more holistic mindset than, in my experience, a single project tends to take.

 

Lost Competitive Advantage

Beyond the financial implications, all of the varied solutions, accumulated technologies and complexity, and custom or interim band-aids built to connect one solution to the next eventually catch up with you in the form of what one organization used to call “waxy buildup” that prevents you from moving quickly on anything.  What seems on paper to be a simple addition or replacement becomes a lengthy process of analysis and design that is cumbersome and expensive, and the lost opportunity is speed-to-market in an increasingly competitive marketplace.

This is where new market entrants thrive and succeed, because they don’t carry the legacy debt and complexity of entrenched market players who are either too slow to respond or too resistant to change to truly transform at a level that allows them to sustain competitive advantage.  Agility gives way to a “death by a thousand paper cuts”: tactical decisions that were appropriate and rational in the moment, but that created significant technical debt that inevitably must be paid.

 

A Vision for the Future

So where does this leave us?  Pack up the tent and go home?  Of course not.

We are at a significant inflection point with AI technology that affords us the opportunity to examine where we are and to start adjusting our course toward a more thoughtful and integrated future state, one where AI, applications, and data and analytics solutions work in concert and harmony with each other rather than in a disconnected reality of confusion.

It begins with the consumers of these capabilities, supported by connected ecosystems of intelligent applications, enabled by insights, agents, and experts, that infuse intelligence into the work people do: making them more productive, making businesses agile and competitive, and deriving value from technology investments at a level disproportionate to what we can achieve today.

The remaining articles in this series will focus on various dimensions of what the above conceptual model means, as a framework, in terms of AI, applications, and data, and then how we approach that transition and think about it from an IT organizational perspective.

Up Next: Establishing the Framework for the future…

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/24/2025

Four Years In: The Patterns That Define Performance and Leadership

Having had this blog for nearly four years, I took a look at the nature of the articles written to date, and the subjects included therein, wondering whether any patterns emerged.  I found the resulting chart (above) interesting as a reflection of the relative importance I associate with certain topics overall.  To that end, I thought I’d provide some perspective on what’s been written to date before moving on to the next article, whatever that may be.

 

Leadership and Culture

The two largest focus areas were leadership and culture, which isn’t surprising given I’ve worked for many years across corporate and consulting environments and have seen the impact that both can have on overall organizational performance.  Nearly two-thirds of my articles to date touch on leadership and one-half on culture, because they are fundamental to setting the stage for everything else you want to accomplish.

In the case of organizational excellence, courageous leadership has to be at the top of the list, given that difficult decisions and a level of fearlessness are required to achieve great things.  By contrast, hesitancy and complacency will almost always lead to suboptimized results, because there will be apprehension about innovating, challenging the status quo, and effectively managing relationships where the ability to be a partner and advisor may require difficult conversations at times.

With leadership firmly rooted, it becomes possible to establish a culture that promotes integrity, respect, collaboration, innovation, productivity, and results.  Where one or more of these dimensions is missing, it is nearly impossible to be effective without compromising performance somewhere.  That isn’t to say that you can’t deliver in an unhealthy environment; you certainly can, and many organizations do.  It is very likely, however, that those gains will be short-lived and difficult to repeat or sustain because of the consequential impact of those issues on the people working in those conditions over time.  In this case, the metrics will likely tell the tale, between delivery performance, customer feedback, solution quality, and voluntary attrition (to name a few).

 

Delivery and Innovation

With the above foundation in place, the next two areas of focus were delivery and innovation, which is reassuring given that I believe strongly in the concept of actionable strategy versus one that is largely theoretical in nature.  Having worked in environments that leaned heavily on innovation without enough substantive delivery, as well as ones that delivered consistently but didn’t innovate enough, I’ve found the answer is to ensure both are occurring on a continual basis and are managed in a very deliberate way.

Said differently, if you innovate without delivering, you won’t create tangible business value.  If you deliver without ever innovating, at some point, you will lose competitive advantage or risk obsolescence in some form or other.

 

The Role of Discipline

While not called out as a topic in itself, in most cases where I discuss delivery or IT operations, I mention discipline as well, because I believe it is a critical component of pursuing excellence in anything.  The odd contradiction is the notion that having discipline somehow implies bureaucracy or moving slowly, when the reality is the exact opposite.

Without defined, measurable, and repeatable processes, it is nearly impossible to drive continuous improvement and establish a more predictable operating environment over time.  From a delivery standpoint, having a methodology isn’t about being prescriptive to the point that you lose agility; it’s about having an understood approach that you can estimate and plan against effectively.  It also defines rules of engagement within and across teams so that you can partner and execute efficiently in a repeatable fashion.  Having consistent processes also allows for monitoring, governing, and improving the efficiency and efficacy of how things are done over time.

The same can be said for leveraging architectural frameworks, common services, and design patterns.  There is a cost to establishing these things, but if you amortize these investments over time, they ultimately improve speed, reduce risk, improve quality, and thereby reduce the TCO and complexity of an environment once they are in place, because every team isn’t inventing its own way of doing things and creating complexity that needs to be maintained and supported down the road.  Said differently, it is very difficult to have reliable estimation metrics when you never do something in a consistent way and analyze the variance.
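
To illustrate that last point, here is a minimal sketch of the kind of estimate-versus-actual variance analysis that only becomes meaningful once work follows a consistent, repeatable approach.  The delivery figures are hypothetical and purely illustrative:

    from statistics import mean, stdev

    # Hypothetical (estimated_hours, actual_hours) pairs for deliveries that
    # followed the same methodology; consistency is what makes them comparable.
    deliveries = [(100, 120), (80, 88), (150, 160), (60, 75), (200, 210)]

    # Ratio of actuals to estimates for each delivery.
    ratios = [actual / estimate for estimate, actual in deliveries]

    bias = mean(ratios)    # systematic over- or under-estimation factor
    spread = stdev(ratios) # predictability: lower means more reliable estimates

    print(f"Actuals average {bias:.0%} of estimates (spread: {spread:.2f})")
    print(f"Suggested planning buffer: {bias - 1:.0%}")

With five deliveries done the same way, the output is actionable (in this made-up sample, estimates run about 13% light, so plan a buffer accordingly); with five deliveries done five different ways, the same arithmetic is just noise.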

 

Mental Models and Visualization

The articles also reflect that I prefer having a logical construct and visualizations to organize, illustrate, analyze, and evaluate complex situations, such as AI and data strategy, workforce and sourcing strategy, and digital manufacturing facilities.  Each of these topics involves many dimensions and layers of associated complexity.  Having a mental model, whether it is a functional decomposition, component model, or some other framework, is helpful both for identifying the dimensions of a problem and for surfacing dependencies and relationships in the interest of driving transformation.

Visualizations also can help facilitate alignment across broader groups of stakeholders where a level of parallel execution is required, making dependencies and relationships more evident and easier to coordinate.

 

Wrapping Up

Overall, the purpose of writing this article was simply to pause and reflect on what has become a fairly substantive body of work over the last several years, and to recognize the themes that recur time and again because they matter when excellence is your goal.  Achieving great things consistently is a byproduct of having vision, effective leadership, discipline, commitment, and a lot of tenacity.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/14/2025

Why Excellence Matters

A new leader in an organization once asked to understand my role.  My answer was very simple: “My role is to change mindsets.”

I’m fairly sure the expectation was something different: a laundry list of functional responsibilities, goals, in-flight activities or tasks that were top of mind, the makeup of my team, etc.  All relevant aspects of a job, to be sure, but not my primary focus.

I explained that my goal was to help transform the organization, and if I couldn’t change people’s mindsets, everything else that needed to be done was going to be much more difficult.  That’s how it is with change.

Complacency is the enemy. Excellence is a journey and you are never meant to reach the destination.

Having been part of and worked with organizations that enjoyed tremendous market share but then encountered adversity and lost their advantage, I’ve observed common characteristics, starting with basking in the glow of that success too long and losing the hunger and drive that made them successful in the first place.

The remainder of this article will explore the topic further in three dimensions: leadership, innovation, and transformation, in the interest of providing some perspective on the things to look for when excellence is your goal.

Fall short of excellence, you can still be great.  Try to be great and fail?  You’re going to be average… and who wants to be part of something average?  No one who wants to win.

Courageous Leadership

As with anything, excellence has to start with leadership.  There is always resistance and friction associated with change.  That’s healthy and good because it surfaces questions and risks; in a perfect world, the more points of view you can leverage in setting direction, the more likely you are to avoid blind spots or mistakes made simply for lack of awareness or understanding of what you are doing.

There is a level of discipline needed to accomplish great things over time, and courage is a requirement, because there will inevitably be challenges, surprises, and setbacks.  How leaders respond to that adversity, through their adaptability, tenacity, and resilience, will ultimately have a substantial influence on what is possible overall.

Some questions to consider:

  • Is there enough risk tolerance to create space to try new ideas, fail, learn, and try again?
  • Is there discipline in your approach so that business choices are thoughtful, reasoned, intentional, measured, and driven towards clear outcomes?
  • Is there a healthy level of humility to understand that, no matter how much success there is right now, without continuing to evolve, there will always be a threat of obsolescence?

Relentless Innovation

In my article on Excellence by Design, I was deliberate in choosing the word “relentless” in terms of innovation, because I’ve seen so many instances over time of the next silver bullet meant to be a “game changer” or “disruptor”, only to see it overtaken by the next big thing a year or so later.

One of the best things about working in technology is that it constantly gives us opportunities to do new things: to be more productive and effective, produce better outcomes, create more customer value, and be more competitive.

Some people see that as a threat, because it requires a willingness to continue to evolve, adapt, and learn.  You can’t place too much value on a deep understanding of X technology, because tomorrow Y may come along and make that knowledge fairly obsolete.  While there is an aspect of that argument that is true at an implementation level, it gives too much importance to the tools and not enough to the problems we’re ultimately trying to solve, namely creating a better customer experience, delivering a better product or service, and so on.

We need to plan as if the most important thing right now won’t be the most important thing six months or even a year from now.  Assume we will want to replace it, or integrate something new to work with it, improving our overall capability and creating even more value over time.

What does that do?  In a disciplined environment, it should change our mindset about how we approach implementing new tools and technologies in the first place.  It should also influence how much exposure we create in the dependencies we place upon those tools in the process of utilizing them.
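
As a minimal sketch of what limiting that exposure can look like (the provider names and interface below are hypothetical, purely illustrative), one common pattern is to place an internal contract between your workflows and any given tool, so the tool can be swapped without rippling through everything built on top of it:

    from abc import ABC, abstractmethod

    class CompletionProvider(ABC):
        """Internal contract our workflows depend on, not any vendor's API."""
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class VendorXProvider(CompletionProvider):
        """Thin adapter around today's tool; the only code that knows about it."""
        def complete(self, prompt: str) -> str:
            # Call vendor X's SDK here and translate its response to plain text.
            return f"[vendor X response to: {prompt}]"

    class VendorYProvider(CompletionProvider):
        """Tomorrow's replacement implements the same contract."""
        def complete(self, prompt: str) -> str:
            return f"[vendor Y response to: {prompt}]"

    def summarize_shift_report(provider: CompletionProvider, report: str) -> str:
        # Business logic depends only on the internal contract.
        return provider.complete(f"Summarize this shift report: {report}")

    # Swapping tools becomes a one-line change at the composition point.
    print(summarize_shift_report(VendorXProvider(), "Line 2 ran at 92% of target."))

The discipline isn’t in the code itself; it’s in deciding up front that no workflow takes a direct dependency on a tool we fully expect to replace someday.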

To take what could be a fairly controversial example: I’ve written multiple articles on Artificial Intelligence (AI), how to approach it, and how I think about it in various dimensions, including where it is going.  The hype surrounding these technologies is deservedly very high right now; there is a surge in investment, and a significant number of tools are and will be hitting the market.  It’s also reasonable to assume a number of “agentic” solutions will pop up, meant to solve this problem and that… ok… now what happens then?  Are things better, worse, or just different?  What is the sum of an organization that is fully deployed with all of the latest tools?  I don’t believe we have any idea, and I also believe it will be terribly inefficient if we don’t ask this question right now.

As a comparison, what history has taught us is that there will be a user plugged into these future ecosystems somewhere, with some role and responsibilities, working in concert (and ideally in harmony) with all this automation (physical and virtual) that we’ve brought to bear on everyone’s behalf.  How will they make sense of it all?  If we drop in an agent for everything, is it any different than giving someone a bunch of new applications, all of which spit recommendations, notifications, and alerts at them, saying “this is what you need to do”, while leaving them to figure out which of those disconnected pieces of advice makes the most sense, which should be the priority, and how not to be overwhelmed?  Maybe not, because the future state might be a combination of intelligent applications (something I wrote about in The Intelligent Enterprise) and purpose-built agents that fill gaps those applications don’t cover.

Ok, so why does any of that matter?  I’m not making an argument against experimenting with and leveraging AI.  My point is that, every time there is a surge towards the next technology advancement, we seldom think about the reality that it will eventually evolve or be replaced by something else, and we should take that into consideration as we integrate those new technologies to begin with.  The only constant is change, and that’s a good thing, but we also need to be disciplined in how we think about it on an ongoing basis.

Some questions to consider:

  • Is there a thoughtful and disciplined approach to innovation in place?
  • Is there a full lifecycle-oriented view when introducing new technologies, considering how to integrate them so they can later be replaced, and how to retire existing, potentially redundant solutions once the new ones are introduced?
  • Are the new technologies being vetted, reviewed, and integrated as part of a defined ecosystem with an eye towards managing technical debt over time?

Continual Transformation

In the spirit of fostering change, it is very common for a “strategy” conversation to be rooted in a vision.  A vision sets the stage for what the future environment is meant to look like.  It is ideally compelling enough to create a clear understanding of the desired outcome and to generate momentum in the pursuit of that goal (or set of goals)… and yet experience has taught me that it is actually NOT the first or only important consideration in that first step.

Sustainable change isn’t just about having a vision, it is about having the right culture.

The process for strategy definition isn’t terribly complicated at an overall level: define a vision, understand the current state, identify the gaps, develop a roadmap to fill those gaps, execute, adapt, and govern until you’re done.

The problem is that large transformation efforts are extremely difficult to deliver.  I don’t believe that difficulty is usually rooted in the lack of a clear vision, nor is it as simple as execution issues undermining success.  I believe successful transformation isn’t a destination to begin with.  Transformation should be a continual journey towards excellence.

How that excellence is manifested can be articulated through one or more “visions” that communicate concepts of the desired state, but that picture can and will evolve as capabilities emerge through automation, process, and organizational change.  What’s most important is having the courageous leadership and innovation mindset mentioned above, but also a culture driven to sustain that competitive advantage and hunger for success.

Said differently: With the right culture, you can likely accomplish almost any vision, but only some visions will be achievable without the right culture.

Some questions to consider in this regard:

  • Is there a vision in place for where the organization is heading today?
  • What was the “previous” vision, what happened to it, did it succeed or fail and, if so, why?
  • Is the current change viewed as a “project” or a “different way of working”? (I would argue the latter is the desired state in nearly all cases)

Wrapping Up

Having shared the above thoughts, it remains difficult to communicate what is most fundamental to excellence: the passion it takes to succeed in the first place.

Excellence is a choice.  Success is a commitment.  It takes tenacity and grit to make it happen and that isn’t always easy or popular. 

There is always room to be better, even in some of the most mundane things we do every day.  That’s why courageous leadership is so important and where culture becomes critical in providing the foundation for longer-term success.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 06/05/2025

For the Graduates…

First of all, know that character, values, and integrity matter.  They are the foundation of who you are and the reputation you will have with others.  Our beliefs and intentions often make their way to our words and actions, so strive to do what’s right, treat others with respect, take accountability for your choices, and know that, in the long term, those who bring kindness and a positive attitude into the world will succeed far more than those who don’t.  They will also find themselves surrounded by many others, because a good heart and kindness are forces that will attract others to you over time.

Have faith, no matter what life brings.  There will be times when life is challenging and it’s important to know that we are never alone, that God (in all His forms) has a plan, and we will find our way through, as long as we take one day at a time and keep moving forward.  There is a great quote from Winston Churchill, “When you’re going through Hell, keep going”, that I always remember in this regard.  Faith is our greatest source of hope… and with hope, anything can be possible.  With faith and hope, your possibilities in life will be limited only by your capacity to dream.

Work hard.  It’s a simple point, but one that isn’t evident to everyone at a time that many seem to feel entitled.  Earning your success is both an exercise in diligence and commitment as well as persistence and leadership.  Oftentimes that effort is not glamorous, requires sacrifice, and will drag you through difficulty, but in struggling and overcoming those obstacles, we find out who we are and the strength we have inside us.  No one can give that confidence and experience to you, you simply have to earn it, and it is well worth the effort over time.

Never stop learning.   There is always something to understand about other people, new ideas or subjects, and the world around us.  Always be looking for the people who can guide and advise you in the different aspects of your life.  You will never reach a point where there isn’t an opportunity to grow as a person, and it will make you so much more aware, fulfilled, and worth knowing as time goes on.

Believe in yourself and speak your truth.  In the great debate that life can be at times, you should know that your voice matters.  At a time when so many take a free pass and just parrot the words, ideology, or biases of others, you do yourself and the world a service to educate yourself, form your own opinion, and respectfully speak your truth, including the times you speak for those who are afraid to do so on their own.  Diversity in thought and opinion gives us strength and creates room for change.  Let your voice be heard.  You can make a difference.

Be humble and be kind.  In concert with the previous point, strive to listen as well as you speak.  Seek compassion and understanding, including for those who differ most from you.  They have their own form of truth, and it can be worth learning what that is, whether you agree with it or not.  In a world consumed with egocentric thinking, what we do for others brings the world a little closer, creates the connections that bind us together, and reduces the divisiveness that so many waste their days promoting.

Never give up on your dreams but be ready to pursue new ones when you see them.  Life can be like a series of bridges, taking you from one part of your journey to the next, and we often can’t see past the bridge that is immediately in front of us.  While it takes tenacity and courage to pursue your life’s passion, understand that your goals will evolve as time progresses, and that’s not a bad thing.

Build upon your successes, learn from your failures.  Remember that it’s relatively easy to succeed when you’re not doing anything worth doing or particularly difficult.  Again, this is a relatively simple point, but it’s easy to lose the perspective that failures are a means to learn and become better, and they definitely come with taking risks in life.  There is no benefit to beating yourself up endlessly over your mistakes.  Be thankful for the opportunity to learn and move forward, or life will likely give you the opportunity to learn that lesson again down the road.

Understand that true leaders emerge in adversity.  Aspire to be the light that can lead others out of darkness to a better place, whether that is in your personal or professional life.  It is easy to lead when everything is going well.  It is when things go wrong that poor leaders assign blame and make excuses, and strong leaders take the reins, solve problems, and seek to inspire.  It’s a choice that takes courage, but it’s worth remembering that it is also where character is built, reputations are made, and results are either accomplished or not.

Accept that life is rarely what we expect it to be.  It’s the journey, along with its peaks and valleys, that makes it so worthwhile.  Where possible, the best you can do for yourself and for others is to know when to set aside distractions, be present, and engage in the moments you have throughout your day.   Make the most of the experience and don’t be a passenger in your own life.

Finally, take the time to express your care for those who matter to you.  Life is unpredictable and you will never run out of love to give to others who are truly deserving of it.  We spend far too much time waiting for “the right moment” when that time could be right now.  Express your gratitude, express your love, express your support… both you and whoever receives those gifts will be better for it, and you will have an endless supply of them to give tomorrow as well, so there is no need to hold them in reserve.

I hope the words were helpful… all the best in the steps you take, in the choices you make, in finding happiness, and living the life of your dreams.

-CJG 05/27/2018

Conducting Effective Workshops

Overview

Having led and participated in many workshops and facilitated sessions over time, I wanted to share some thoughts on what tends to make them effective. 

Unfortunately, there can be a perception that assembling a group of people in a room with a given topic (for any length of time) can automatically produce collaboration and meaningful outcomes.  This is definitely not the case.

Before getting into the key dimensions, I suppose a definition of a “workshop” is worthwhile, given there can be many manifestations of what that means in practice.  From my perspective, a workshop is a set of one or more facilitated sessions of any duration with a group of participants that is intended to foster collaboration and produce a specified set of outcomes.  By this definition, a workshop could be as short as a one-hour meeting and also span many days.  The point is that it is facilitated, collaborative, and produces results.

By this definition, a meeting used to disseminate information is not a workshop.  A “training session” could contain a workshop component, to the degree there are exercises that involve collaboration and solutioning, but in general, they would not be considered workshops because they are primarily focused on disseminating information.

Given the above definition, there are five factors necessary for a successful workshop:

  • Demonstrating Agility and Flexibility
  • Having the Appropriate Focus
  • Ensuring the Right Participation
  • Driving Engagement
  • Creating Actionable Outcomes

Demonstrating Agility and Flexibility

Workshops are fluid, evolving things, where there is an ebb and flow to the discussion and to the energy of the participants.  As such, beyond any procedural or technical aspect of running a workshop, it’s critically important to think about and to be aware of the group dynamics and to adjust the approach as needed.

What works:

  • Soliciting feedback on the agenda, objectives, and participants in advance, both to make adjustments as needed and to identify potential issues that could arise in the session itself
  • Doing pulse checks on progress and sentiment throughout to identify adjustments that may be appropriate
  • Asking for feedback after a session to identify opportunities for improvement in the future

What to watch out for:

  • The tone of discussion from participants, level of engagement, and other intangibles can tend to signal that something is off in a session
    • Tactics to address: Call a break, pulse check the group for feedback
  • Topics or issues not on the agenda that arise multiple times and have a relationship to the overall objectives or desired outcomes of the session itself
    • Tactics to address: Adjust the agenda to include a discussion of the relevant topic or issue, or surface the issue and put it in a parking lot to be addressed either during or after the session
  • Priorities or precedence order of topics not aligning in practice to how they are organized in the session agenda
    • Tactics to address: Reorder the agenda to align the flow of discussion to the natural order of the solutioning. Insert a segment to provide a high-level end-to-end structure, then resume discussing individual topics.  Even if out of sequence, that could help contextualize the conversations more effectively

Having the Appropriate Focus

Workshops are not suitable for every situation.  Topics that involve significant research, rigor, investigation, or cross-organizational input, or that don’t require collaboration (such as detailed planning), are better handled through offline mechanisms; workshops can then be used to review, solicit input on, and align the outcomes of that distributed process.

What works:

  • Ensuring scope is relatively well-defined, minimally at a directional level, to enable brainstorming and effective solutioning
  • Conducting a kickoff and/or providing the participants with any pre-read material required for the session up front, along with expectations for “what to prepare” so they can contribute effectively
  • Choosing topics where the necessary expertise is available and can participate in the workshop

What to watch out for:

  • Unclear session objectives or desired outcomes
    • Tactics to address: Have a discussion with the session sponsor and/or participants to obtain the necessary clarity and send out a revised agenda/objectives as needed
  • Topics that are too broad or too vague to be shaped or scoped by the workshop participants
    • Tactics to address: Same as previous issue
  • An agenda that doesn’t provide a clear line of sight between the scope of the session or individual agenda items and desired outcomes
    • Tactics to address: Map the agenda topics to specific outcomes or deliverables and ensure they are connected in a tangible way. Adjust as needed

Ensuring the Right Participation

Workshops aren’t solely about producing content; they are about establishing a shared understanding and ownership.  To that end, having the right people in the room to both inform the discussion and own the outcomes is critical to establishing momentum post-session.

What works:

  • Ensuring the right level of subject matter expertise to address the workshop scope and objectives
  • Having cross-functional representation to identify implications, offer alternate points of view, challenge ideas, and suggest other paradigms and mental models that could foster innovation
  • Bringing in “outside” expertise to the degree that what is being discussed is new or there is limited organizational knowledge of the subject area where external input can enhance the discussion

What to watch out for:

  • People jumping in and out of sessions to the point that it either becomes a distraction to other participants or there is a loss of continuity and effectiveness in the session as a whole
    • Tactics to address: Manage part-time participants deliberately to minimize disruptions. Realign sessions to organize their participation into consecutive blocks of time with continuous input rather than sporadic engagement, or see what can be done to either solicit full participation or identify alternate contributors who can participate in a dedicated capacity
  • A knowledge gap that makes effective discussion difficult or impossible. The lack of the right people in the discussion will tend to drain momentum from a session
    • Tactics to address: Document and validate assumptions made in the absence of the right experts being present. Investigate participation of necessary subject matter experts in key sessions focused on their areas of contribution
  • Limiting participants to those who are “like minded”, which may constrain the outcomes
    • Tactics to address: Explore involving a more diverse group of participants to provide a means for more potential approaches and solutions

Driving Engagement

Having the right people in the room and the right focus is critical to putting the right foundation in place, but making the most of the time you have is where the value is created, and that’s all about energy and engagement.

What works:

  • Leveraging an experienced facilitator, who is both engaging and engaged. The person leading the workshop needs to have a contagious enthusiasm that translates to the participants
  • Ensuring an inclusive discussion where all members of the session have a chance to contribute and have their ideas heard and considered, even if they aren’t ultimately utilized
  • Managing the agenda deliberately so that the energy and focus in the discussion is what it needs to be to produce the desired outcomes

What to watch out for:

  • A lack of energy or lack of the right pace from the facilitator will likely reduce the effectiveness of the session
    • Tactics to address: Switch up facilitators as needed to keep the energy high, pulse check the group on how they feel the workshop is going and make adjustments as needed
  • A lack of collaboration or participation from all attendees
    • Tactics to address: Active facilitation to engage quieter voices in the room and to manage anyone who is outspoken or dominating discussion
  • A lack of energy “in the room” that is drawing down the pace or productivity of the session
    • Tactics to address: Call breaks as needed to give the participants a reset, and balance the amount of active engagement in the event there is too much “presentation” going on, where information is being shared and not enough discussion is occurring

Creating Actionable Outcomes

One of the worst experiences you can have is a highly energized session that builds excitement, but then leads to no follow up action.  Unfortunately, I’ve seen and experienced this many times over the course of my career, and it’s very frustrating, both when you lead workshops and as a participant, when you spend your time and provide insights that ultimately go to waste.  Workshops are generally meant to help launch, accelerate, and build momentum through collaboration.  To the extent that a team comes together and uses a session to establish direction, it is critical that the work not go to waste, not only to make the most of that effort, but also to provide reassurance that future sessions will be productive as well. If workshops become about process without outcome, they will lose efficacy very quickly and people will stop taking them seriously as a mechanism to facilitate and accelerate change.

What works:

  • Tracking the completion of workshop objectives throughout the process itself and making adjustments to the outcomes as required
  • Leaving the session with clear owners of any next steps
  • Establishing a checkpoint post-session to take a pulse on where things stand on the outcomes, next steps, and recommended actions

What to watch out for:

  • Getting to the end of a workshop and having any uncertainty in terms of whether the session objectives were met
    • Tactics to address: Objectives should be reviewed throughout the workshop to ensure alignment of the participants and commitment to the desired outcomes. There shouldn’t be any surprises waiting by the end
  • Leaving a session not having identified owners of the next steps
    • Tactics to address: In the event that no one “signs up” to own next steps or the means to perform the assignment is unclear for some reason, the facilitator can offer to review the next steps with the workshop sponsor and get back to the group with how the next steps will be taken forward
  • Assigning ownership of next steps without any general timeframe in which those actions were intended to be taken
    • Tactics to address: Setting a checkpoint at a specified point post-session to understand progress, review conflicting priorities, clear barriers, etc.

Wrapping Up

Going back to the original reason for writing this article, I believe workshops are an invaluable tool for defining vision, designing solutions, and facilitating change.  Taking steps to ensure they are effective, engaging, and create impact is what ultimately drives their value.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 05/09/2025