Developing Application Strategy – Executing the Process

Ok, I have the scope identified, but what do I do now?

Having recently written about the intangibles and scope associated with simplification, the focus of this article is the process of rationalization itself, with an eye towards reducing complexity and operating cost.

The next sections will break down the steps in the process flow above, highlighting various dimensions and potential issues that can occur throughout a rationalization effort.  I will focus primarily on the first three steps (i.e., the analysis and solutioning), given that this is where the bulk of the work occurs.  The last two steps are largely dedicated to socializing and executing on the plan, which is more standard delivery and governance work.  I will then provide a conceptual manufacturing technology example to illustrate some ways the exercise could play out in a more tangible way. 

 

Understand

The first step of the process is about getting a thorough understanding of the footprint in place to enable reasonable analysis and solutioning.  This does not need to be exhaustive and can be prioritized based on the scope and complexity of the environment.

 

Clarify Ownership

What’s Involved:

  • Identifying the technology owners of sets of applications, however those sets are organized (hereinafter referred to as portfolio owners)
  • Identifying primary business customers for those applications (business owners)
  • Identifying specific individuals who have responsibility for each application (application owners)
  • Portfolio and application owners can be the same individual, but in larger organizations, they likely won’t be, given the scope of an individual portfolio and the way it is managed

Why It Matters:

  • Subject matter knowledge will be needed relative to applications and the portfolios in which they are organized, the value they provide, their alignment to business needs, etc.
  • Opportunities will need to be discussed and decisions made related to ongoing work and the future of the footprint, which will require involvement of these stakeholders over time

Key Considerations:

  • Depending on the size of the organization and the scope of the various portfolios in place, it may be difficult to engage the right leaders in the process. In that case, a designate should be identified who can serve as a day-to-day representative of the larger organization and is empowered to provide input and make recommendations on behalf of their respective area.
  • In these cases, a separate process step will need to be added to socialize and confirm the outcomes of the process with the ultimate owners of the applications to ensure alignment, regardless of the designated responsibilities of the people participating in the process itself. Given the criticality of simplification work, there is substantial risk in making broad assumptions about organizational support and alignment, so some form of additional checkpoint would be a good idea in nearly all cases where this occurs

 

Inventory Applications

What’s Involved:

  • Working with Portfolio Owners to identify the assets across the organization and create as much transparency as possible into the current state environment

Why It Matters:

  • There are two things that should come from this activity: an improved understanding of what is in place, and an intangible understanding of the volatility, variability, and level of opacity in the environment itself. On the latter point, if I find that I have substantially more applications across a set of facilities or operating units than I expected, and that those vary greatly by business, it should inform how I think about the future state environment and the governance model I want in place to manage that proliferation going forward.  This relates to my point on being a “historian” in the process, from the previous article on managing the intangibles of the work.

Key Considerations:

  • Catalogue the unique applications in production, providing a general description of what they do, users of the technology (business units, individual facilities, customer segments/groups), primary business function(s)/capabilities provided, criticality of the solution (e.g., whether it is a mission-critical/“core” or supporting/“fringe” application), teams that support the application, number of application instances (see the next point), key owners (in line with the roles mentioned above), mapping to financials (covered in the next subsection), mapping to ongoing delivery efforts (also described below), and any other critical considerations where appropriate (e.g., running on a technology platform that is near end of life). A minimal record sketch follows this list
  • In concert with the above, identify the number of application instances in production, specifically the number of different configurations of a base application running on separate infrastructure, supporting various operations or facilities with unique rules and processes, or anything that would be akin to a “copy-paste-modify” version of a production application. This is critical to understand and differentiate, because the simplification process needs to consider reducing these instance counts in the interest of streamlining the future state.  That simplification effort can be a separate and time-consuming activity on top of reducing the number of unique applications as a whole
  • Whether to include hosting and the technology stack of a given application is a key consideration in the inventory process itself. In general, I would avoid going too deep, too early in the rationalization process, because these kinds of issues will surface during the analysis effort anyway, and putting them in the first step of the process could slow down the work by documenting details for applications that aren’t ultimately the top priority for simplification
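
To make the catalogue concrete, here is a minimal sketch of what a single inventory record could look like; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationRecord:
    """One row in the application inventory (illustrative fields only)."""
    name: str
    description: str
    users: list[str]                 # business units, facilities, customer segments
    business_functions: list[str]    # primary capabilities provided
    criticality: str                 # e.g., "core" or "fringe"
    support_teams: list[str]
    instance_count: int              # copy-paste-modify configurations in production
    portfolio_owner: str
    business_owner: str
    application_owner: str
    annual_cost_estimate: float      # mapped from the financials step
    active_projects: list[str] = field(default_factory=list)
    notes: str = ""                  # e.g., "platform near end of life"

# Example entry for a hypothetical MES application
mes = ApplicationRecord(
    name="Application V",
    description="Manufacturing execution system",
    users=["Facility 1", "Facility 2"],
    business_functions=["Production scheduling", "Shop-floor tracking"],
    criticality="core",
    support_teams=["MES Support"],
    instance_count=2,
    portfolio_owner="Mfg Portfolio Lead",
    business_owner="BU1 Operations",
    application_owner="MES App Owner",
    annual_cost_estimate=1_200_000.0,
)
```

Keeping the record this small is deliberate: a handful of reliably maintained fields will carry the analysis further than a sprawling schema no one keeps current.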

 

Understand Financials

What’s Involved:

  • Providing a directionally accurate understanding of the direct and indirect costs attributable to individual applications across the portfolio
  • Providing a lens on the expected cost of any discretionary projects targeted at enhancing, replacing, or modernizing individual applications (to the extent there is work identified)

Why It Matters:

  • Simplification is done primarily to save or redistribute cost and accelerate delivery and innovation. If you don’t understand the cost associated with your footprint, it will be difficult, if not impossible, to size the relative benefit of different changes you might make; as such, the financial model is fundamental to the eventual business case meant to come as an output of the exercise

Key Considerations:

  • Direct cost related to dedicated teams, licensing, and hosted solutions can be relatively straightforward and easy to gather, along with the estimated cost of any planned initiatives for a specific application
  • Direct cost can be more difficult to ascertain when a team or third party supports a set of applications, in which case some form of cost apportionment may be needed to estimate individual application costs (e.g., allocate cost based on the number of production tickets closed by application within a portfolio of systems; see the sketch after this list)
  • Indirect expenses related to infrastructure and security in particular can be difficult to understand depending on the hosting model (e.g., dedicated versus shared EC2 instances in the cloud versus on premises, managed hardware) and how costs for hardware, network, cyber security tools, and other shared services are allocated and tracked back to the portfolio
  • As I mentioned in my article on the intangibles associated with rationalization, directional accuracy is more important than precision in this activity, because the goal at this early stage of the process is to identify redundancies where there is material cost savings potential, not to build out a precise cost allocation for infrastructure in the current state
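
As a minimal sketch of the ticket-based apportionment example above, a shared team’s cost can be split across applications in proportion to production tickets closed; the figures and application names are invented for illustration.

```python
# Apportion a shared support team's annual cost across applications
# in proportion to production tickets closed (illustrative numbers).
shared_team_cost = 900_000.0

tickets_closed = {"App F": 450, "App G": 300, "App H": 150}
total_tickets = sum(tickets_closed.values())

apportioned = {
    app: shared_team_cost * count / total_tickets
    for app, count in tickets_closed.items()
}

for app, cost in apportioned.items():
    print(f"{app}: ${cost:,.0f}")
# App F: $450,000 / App G: $300,000 / App H: $150,000
```

Any reasonable allocation driver (ticket volume, user counts, transaction volume) works here; the point is directional accuracy, not accounting-grade precision.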

 

Evaluate Cloud Strategy

What’s Involved:

  • Clarifying the intended direction in terms of enterprise hosting and the cloud overall, along with the approach being taken where cloud migration is in progress or planned at some level moving forward

Why It Matters:

  • Hosting costs change when moving from a hosted to a cloud-based environment, which could affect the ultimate business case, depending on the level of change planned in the footprint (and associated hosting assumptions)

Key Considerations:

  • There is a major difference in hosting costs depending on whether you are planning to use a lift-and-shift, modernize, or “containerize”-type approach to the cloud
  • Not all applications will be suited to the last approach in particular, and it’s important to understand whether this will play into your application strategy as you evaluate the portfolio and identify future alternatives
  • If there is no major shift planned (e.g., because the footprint is already cloud-hosted and modernized or containerized), it could be that this is a non-issue, but likely it does need to be considered somewhere in the process, minimally from a risk management and business case development standpoint
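
One way to keep this visible during analysis is to model hosting cost per application under each migration approach; the multipliers below are placeholders to illustrate the mechanics, not benchmarks.

```python
# Illustrative relative-cost model for cloud migration approaches.
# Multipliers are placeholders, not benchmarks; replace with real estimates.
APPROACH_RUN_RATE_FACTOR = {
    "lift_and_shift": 0.95,  # modest change versus current hosting
    "modernize":      0.70,  # re-platformed, lower steady-state cost
    "containerize":   0.55,  # densest utilization, but not all apps qualify
}

def projected_hosting_cost(current_annual_cost: float, approach: str) -> float:
    """Project steady-state hosting cost for an application under an approach."""
    return current_annual_cost * APPROACH_RUN_RATE_FACTOR[approach]

for approach in APPROACH_RUN_RATE_FACTOR:
    print(approach, projected_hosting_cost(500_000.0, approach))
```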

 

Evaluate AI Strategy

What’s Involved:

  • Understanding whether AI applications and agentic AI solutions are meant to be core components of the future application portfolio and enterprise footprint, along with any primary touchpoints for those capabilities as appropriate
  • Understanding any high-opportunity areas from an end user standpoint where AI could aid in improving productivity and effectiveness

Why It Matters:

  • Any longer-term strategy for enterprise technology today needs to contemplate and articulate how AI is meant to integrate and align with what is going to be in place, particularly if agentic AI is meant to be included as part of the future state; otherwise, you risk having to iterate your entire blueprint relatively quickly, which could lead to issues in stakeholder confidence and momentum

Key Considerations:

  • If agentic AI is meant to be a material component of the future state, the evaluation process for targeted applications should include their API model and whether they are effectively “open” platforms that can be orchestrated and remotely operated as part of an agentic flow (a simple readiness checklist sketch follows this list). The larger the overall scope of the strategy and the longer the implementation is expected to take, the more important this aspect becomes in the analysis process itself, because orchestration is going to become more critical in large enterprises over time under almost any circumstances
  • Understanding the role AI is anticipated to play is also important to the extent that it could play a critical role in facilitating transition during the implementation process itself, particularly if it becomes an integrated part of the end user presentment or the education and training environment. This could both help reduce implementation costs and accelerate deployment and adoption, depending on how AI is (or isn’t) leveraged
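
If agentic readiness becomes part of the evaluation, it can be captured as a simple per-application checklist. The criteria below are my own illustrative assumptions, not a standard:

```python
# Hypothetical agentic-readiness checklist applied per application.
AGENTIC_CRITERIA = [
    "documented_public_api",       # can the app be driven programmatically?
    "granular_authentication",     # can an agent be given scoped credentials?
    "event_or_webhook_support",    # can the app participate in orchestrated flows?
    "machine_readable_responses",  # structured outputs an agent can consume
]

def agentic_readiness(app_flags: dict[str, bool]) -> float:
    """Return the fraction of criteria an application satisfies."""
    return sum(app_flags.get(c, False) for c in AGENTIC_CRITERIA) / len(AGENTIC_CRITERIA)

# Example: a hypothetical application meeting three of four criteria
print(agentic_readiness({
    "documented_public_api": True,
    "granular_authentication": True,
    "event_or_webhook_support": False,
    "machine_readable_responses": True,
}))  # 0.75
```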

 

Assess Ongoing Work

What’s Involved:

  • The final aspect to understanding the current state is obtaining a snapshot of the ongoing delivery portfolio and upcoming pipeline

Why It Matters:

  • Understanding anticipated changes, enhancements, replacements, or retirements and the associated investments is important to evaluating volatility and also determining the financial consequences of decisions made as part of the strategy

Key Considerations:

  • Gather a list of active and upcoming projects, applications in scope, the scope of work, business criticality, any significant associated risk, relative cost, and anticipated benefits
  • Review the list with owners identified in the initial step with a mindset of “go”, “stop”, and “pause” given the desire to simplify overall. It may be the case that some inflight work needs to be completed and handled as sunk cost, but there could be cost avoidance opportunity early on that can help fund more beneficial changes that improve the health of the footprint overall

 

Evaluate

With a firm understanding of the environment and a chosen set of applications to be explored further (which could be everything), the process pivots to assessing what is in place and identifying opportunities to simplify.

 

Assess Portfolio Quality

What’s Involved:

  • Work with business, portfolio, and application owners to apply a methodology, like Gartner’s TIME model, to evaluate the quality of solutions in place. In general, this would involve looking at both business and technology fit in the interest of differentiating what does and doesn’t work, what needs to change, and what requirements are critical to the future state

Why It Matters:

  • Rationalization efforts can be conducted over the course of weeks or months, depending on the scope and goals of the activity. Consequently, the level of detail that can be considered in the analysis will change based on the time and resources available to support the effort. Regardless, it is important for there to be a fact-based foundation to support the opportunities identified, even if only at an anecdotal level

Key Considerations:

  • There are generally two levels of this kind of analysis: a higher-level activity like the TIME model, which provides more of a directional perspective on the underlying applications, and a more detailed, gap-analysis-type activity that evaluates features and functionality in the interest of vetting alternatives and identifying gaps that may need to be addressed in the rationalization process itself. The more detailed activity would typically be performed as part of an implementation process, not upstream in the strategy definition phase.  The gap analysis could be performed leveraging a standard package evaluation process (replacing external packages with the applications in place), assuming one exists within the organization
  • The technical criteria for the TIME model evaluation should include things like AI readiness, platform strategy, and the underlying technical stack, along with other key dimensions, weighted by how critical those individual elements are, as surfaced during the initial stage of the work (a scoring sketch follows below)
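
A minimal sketch of how a TIME-style classification could be scripted, assuming each application has been scored (say, 1–10) on business fit and technical fit; the threshold and quadrant mapping are a simplified reading of the Gartner model:

```python
def time_quadrant(business_fit: int, technical_fit: int, threshold: int = 5) -> str:
    """Map business/technical fit scores (1-10) to a TIME quadrant."""
    if business_fit > threshold:
        # Valuable to the business: keep investing, or migrate off weak technology
        return "Invest" if technical_fit > threshold else "Migrate"
    # Low business value: tolerate sound technology, eliminate the rest
    return "Tolerate" if technical_fit > threshold else "Eliminate"

# Illustrative scores for a few hypothetical applications
scores = {"App A": (9, 7), "App I": (7, 3), "App H": (2, 2), "App Z": (3, 8)}
for app, (biz, tech) in scores.items():
    print(app, "->", time_quadrant(biz, tech))
# App A -> Invest, App I -> Migrate, App H -> Eliminate, App Z -> Tolerate
```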

 

Identify Redundancies

What’s Involved:

  • Assuming some level of functional categories and application descriptions were identified during the data gathering phase of the work, it should be relatively straightforward to identify potential redundancies that exist in the environment
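
As a minimal sketch, assuming the inventory captured a functional category for each application, redundancy candidates fall out of a simple grouping (the application names and categories below are illustrative):

```python
from collections import defaultdict

# (application, functional category) pairs from a hypothetical inventory
inventory = [
    ("Application D", "CRM"), ("Application E", "CRM"),
    ("Application F", "Accounting"), ("Application I", "Accounting"),
    ("Application Y", "WMS"),
]

by_category = defaultdict(list)
for app, category in inventory:
    by_category[category].append(app)

# Any category with more than one application is a redundancy candidate
redundancies = {cat: apps for cat, apps in by_category.items() if len(apps) > 1}
print(redundancies)  # {'CRM': [...], 'Accounting': [...]}
```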

Why It Matters:

  • Redundancies create opportunities for simplification, but also for improved capabilities. The simplification process doesn’t necessarily mean that those having an application replaced will be “giving up” existing capabilities.  It could be the case that the solution to which a given user group is being migrated provides more capabilities than what they currently have in place

Key Considerations:

  • Not all groups within a large organization have equal means to invest in systems capabilities. There can be situations where migrating smaller entities to solutions in use by larger and more well-funded pieces of the organization allows them to leverage new functionality not available in what they have
  • In situations where organizations move from independent to shared/leveraged solutions, it is important to consider not only how the shift will affect cost allocation, but also the prioritization and management of those platforms post-implementation. A concern can often arise in these scenarios that costs will be apportioned in a way that burdens smaller entities with a greater share of funding than they can sustain, or that their needs will not be prioritized effectively once they are in a shared environment with others.  Working through these mechanics is a critical aspect of making simplification work at an enterprise level.  There needs to be a win-win environment to the maximum extent possible, or it will be difficult to incent teams to move in a more common direction

 

Surface Opportunities

What’s Involved:

  • With redundancies identified, costs aligned, and some level of application quality/fit understood, it should be possible to look for opportunities to replace and retire solutions that either aren’t in use/creating value or that don’t provide the same level of capability in relation to cost as others in the environment

Why It Matters:

  • The goal of rationalization is to reduce complexity and cost while making it easier and faster to deliver capabilities moving forward. Where cost is consumed in maintaining solutions that are redundant or that don’t create value, it hampers efforts to innovate and create competitive advantage, which is the overall goal of this kind of effort

Key Considerations:

  • Generally speaking, the opportunities to simplify will be identified at a high-level during the analysis phase of a rationalization effort. The detailed/feature-level analysis of individual solutions is an important thing to include in the planning of subsequent design and implementation work to surface critical gaps, integration points, and workflow dependencies between systems to facilitate transition to the desired future state environment

 

Strategize

Having completed the analysis effort and surfaced opportunities to simplify the footprint, the process shifts to identifying the target future state environment and mapping out the approach to transition.

 

Define Future Blueprint(s)

What’s Involved:

  • Assuming some representation of the current state environment has been created as a byproduct of the first two steps of the process, the goal of this activity is to define the conceptual end state footprint for the organization
  • To the extent that there are corporate shared services, multiple business/commercial entities, operating units, facilities, locations, etc. to be considered, the blueprint should show the simplified application landscape post-transition, organized by operating entity, where one or more operating unit could be mapped into a common element of the future blueprint (e.g., organized by facility type versus individual locations, lower complexity business units versus larger entities)

Why It Matters:

  • A relatively clear, conceptual representation of the future state environment is needed to facilitate discussion and understanding of the difference between the current environment, the intended future state, and the value for changes being proposed

Key Considerations:

  • Depending on the breadth and depth of the organization itself, the representation of the blueprint may need to be defined at multiple levels
  • The approach to organizing the blueprint itself could also provide insight into how the implementation approach and roadmap is constructed, as well as how stakeholders are identified and aligned to those efforts

 

Map Solutions

What’s Involved:

  • With opportunities identified and a future state operating blueprint, the next step is to map retained solutions into the future state blueprint and project the future run rate of the application footprint

Why It Matters:

  • The output of this activity will both provide a vision of the end state and act as input to socializing the vision and approach with key stakeholders in the interest of moving the effort forward

Key Considerations:

  • There is a bit of art and science when it comes to rationalization, because too much standardization could limit agility if not managed in a thoughtful way. I will provide an example of this in the scenario following the process, but a simple illustration is to consider whether maintaining separate instances of a core application is appropriate in situations where speed to market matters or where individual operating units need greater flexibility and autonomy than they would have operating off a single, shared instance of one application
  • I mentioned in the article on the intangibles of simplification that it is a good idea to take an aggressive approach to the future state, because likely not everything will work in practice, and the entire goal of the exercise is to try to optimize as much as possible in terms of value in relation to cost
  • From a financial standpoint, it is important to be conservative in assumptions related to changes in operating expense. That should manifest itself in allowing for contingency in implementation schedule and costs as well as assuming the decommissioning of solutions will take longer than expected (it most likely will).  It is far better to be ahead of a conservative plan than to be perpetually behind an overly aggressive one

 

Define Change Strategy

What’s Involved:

  • With the current and future blueprints identified, the next step would be to identify the “building blocks” (in conceptual terms) of the eventual roadmap. This is essentially a combination of three things: application instances to be consolidated, replacement of one application by another, and retirement of applications that are either unused or that don’t create enough value to continue supporting them
  • Opportunities can also be segregated into big bets that affect core systems and material cost/change, those that are more operational and less substantial in nature, and those that are essentially cleanup of what exists. The segregation of opportunities can help inform the ultimate roadmap to be created, the governance model established, and program management approach to delivery (e.g., how different workstreams are organized and managed)

Why It Matters:

  • Roadmaps are generally fluid beyond a near-term window because things inevitably occur during implementation and business priorities change. Given there can be a lot of socializing of a roadmap and iteration involved in strategic planning, I believe it’s a good idea to separate the individual transitions from the overall roadmap itself, which can be composed in various ways, depending on how you ultimately want to tackle the strategy.  At a conceptual level, you can think of it as a set of Post-it notes representing individual efforts that can be organized in a number of legitimate ways with different cost, benefit, and risk profiles

Key Considerations:

  • Individual transitions can be assessed in terms of risk, business implications, priority, relative cost and benefits, and so forth as a means to help determine slotting in the overall roadmap for implementation
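
Pulling the last two subsections together, each building block can be represented as a small record and scored for roadmap slotting; the fields, weights, and example figures below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    """One roadmap building block (illustrative fields)."""
    name: str
    kind: str           # "consolidate", "replace", or "retire"
    segment: str        # "big bet", "operational", or "cleanup"
    risk: int           # 1 (low) to 5 (high)
    est_cost: float     # implementation cost, $M
    est_benefit: float  # annual benefit, $M

def slotting_score(t: Transition) -> float:
    """Naive priority score: benefit relative to cost, discounted by risk."""
    return (t.est_benefit / t.est_cost) / t.risk

transitions = [
    Transition("Consolidate WMS instances", "consolidate", "operational", 2, 0.5, 0.3),
    Transition("Replace BU2/BU3 ERPs with Application A", "replace", "big bet", 5, 8.0, 4.0),
    Transition("Retire Application H", "retire", "cleanup", 1, 0.1, 0.2),
]
for t in sorted(transitions, key=slotting_score, reverse=True):
    print(f"{t.name}: score {slotting_score(t):.2f}")
```

A single score like this is only a conversation starter; slotting decisions in practice also weigh dependencies, business timing, and change saturation.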

 

Develop Roadmap

What’s Involved:

  • With the individual building blocks for transition identified, the final step in the strategy definition stage is to develop one or more roadmaps that assemble those blocks, exploring as many implementation strategies as appropriate

Why It Matters:

  • The roadmap is a critical artifact in the formation of an implementation plan, though it will generally change quite a bit over time depending on the time horizon, scope, complexity, and scale of the program itself

Key Considerations:

  • Ensure that all work is included and represented, including any foundational or kickoff-related activities that will serve the program as a whole (e.g., establishing a governance model, PMO, etc.)
  • Include retirements (not just new solution deployments), minimally as milestones, in the roadmap so they are planned and accounted for. In my experience, this is often missed with new system deployments
  • Depending on the scale of implementation, explore various business scenarios (e.g., low risk work up front, big bets first, balanced approaches, etc.) to ascertain the relative cost, benefit, implementation requirements, and risks of each and determine the “best case” scenario to be socialized
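
As a lightweight way to compare those sequencing scenarios, a sketch like the following could tabulate rough cost, benefit, and risk per option and rank them; the scenario names and figures are invented for illustration.

```python
# Hypothetical roadmap scenarios with rough cost/benefit/risk ratings ($M)
scenarios = [
    {"name": "Low-risk first", "cost": 4.0, "annual_benefit": 2.5, "risk": "low"},
    {"name": "Big bets first", "cost": 6.5, "annual_benefit": 4.0, "risk": "high"},
    {"name": "Balanced",       "cost": 5.0, "annual_benefit": 3.2, "risk": "medium"},
]

# Rank by a simple benefit-to-cost ratio (purely illustrative)
for s in sorted(scenarios, key=lambda s: s["annual_benefit"] / s["cost"], reverse=True):
    print(f'{s["name"]}: ratio {s["annual_benefit"] / s["cost"]:.2f}, risk {s["risk"]}')
```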

 

Socialize and Mobilize

Important footnote: I’ve generally assumed that the process above would be IT-led with a level of ongoing business participation, given much of the data gathering and analysis can be performed within IT itself.  That isn’t to say that solutioning and development of a roadmap needs to be created and socialized in the sequential manner outlined here.  It could also be the case that opportunities are surfaced out of the evaluation effort and the strategy and socialization are then handled through a collaborative/workshop process; it depends on the scope of the exercise and the nature of the organization.

With the alternatives and future state recommendations prepared, the remaining steps of the process are fairly standard, in terms of socializing and iterating the vision and roadmap, establishing a governance model and launching the work with clear goals for 30, 60, and 90 days in mind.  As part of the ongoing governance process, it is assumed that some level of iteration of the overall roadmap and goals will be performed based on learnings gathered early in the implementation process.

 

Putting Ideas into Practice – An Example

The Conceptual Example – Manufacturing

If you’ve made it this far, I wanted to move beyond the theory to a conceptual scenario to help illustrate various situations that could occur in the course of a simplification exercise.  The example diagram represents the flow of data across the three initial steps of the process outlined above.  The data is logically consistent and traceable across steps in the process, should that be helpful in understanding the situation.  I limited the number of application types (lower left corner of the diagram) so I could explore multiple scenarios without making the data too overwhelming.  In practice, there would be multiple domains and many components in each domain to consider (e.g., HR is a domain with many components, represented as a single application category here), depending on the level of granularity being used for the rationalization effort.

From here, I’ll provide some observations on each major step in the hopes of making some example outcomes clear.  I’m not covering the financial analysis, given it would make things even more complicated to represent, but for the sake of argument, we can assume that there is financial opportunity associated with reducing the number of applications and instances in place.

 

Notes on the Current State

Some observations on the current state based on the data collected:

  • The organization has a limited set of corporate applications for Finance, Procurement, and HR, but most of the core applications are relegated to individual business units (there are three in this example) and manufacturing facilities (there are four)
  • Business Operation 1 is the largest commercial entity, using the same HR and Procurement solutions as Corporate, though with unique copies of its own; a different instance of the core accounting system that is managed separately; two facilities (1 and 2) using different instances of the same MES system and a common WMS system; and a set of unique fringe applications in most other functional categories, some of which overlap or complement those at the business unit level. Despite these differences in footprint, facilities 1 and 2 are highly similar from an operational/business process standpoint
  • Business Operations 2 and 3 are smaller commercial entities, running a different HR system and a different instance of the Procurement solution than Corporate; a different, separately managed instance of the core accounting system in one and a unique accounting system in the other; one facility each (3 and 4), using different MES systems and different instances of the same WMS system; and a set of unique fringe applications in most other functional categories, some of which overlap or complement those at the business unit level. Despite these differences in footprint, facilities 3 and 4 are highly similar from an operational/business process standpoint
  • All three business entities operate on unique ERP solutions. Two of them leverage the same CRM system, though on separate instances, so there is no enterprise-level view of the customer, and financials need to be consolidated at corporate across all three entities using something like Hyperion or OneStream
  • The facilities utilize three different EAM solutions for Asset Health today, with two of them (2 and 3) using the same software
  • The fringe applications for accounting, EH&S, HR, and Procurement largely exist because of capability gaps in the solutions already available from the corporate or business unit applications

All things considered, the current environment includes 29 unique applications and 15 application instances.

Sounds complicated, doesn’t it? 

Well, while this is entirely a made-up scenario meant to help illustrate various simplification opportunities, the fact is that these things do actually happen, especially as you scale up and out an organization, have acquisitions, or partially roll out technology over time.

 

Notes on the Evaluation

Observations based on the analysis performed:

  • Having worked with business, portfolio, and application owners to classify and assess the applications in place, a set of systems surfaced as creating higher levels of business value, spanning both mission-critical core (ERP, CRM, Accounting, MES) and supporting/fringe (Procurement, HR, WMS, EH&S, EAM) applications
  • Application A, having been implemented by the largest commercial entity, provides the most capability of any of the solutions in place
  • Application D, as the current CRM system in use by two of the units today, likely offers the best potential platform for a future enterprise standard
  • Application F would likely make sense as an enterprise standard platform for accounting, though Application I, currently in Facility 3, provides unique capability at a day-to-day level
  • Application V is the best of the MES solutions from a fit and technology standpoint and is in place at two of the facilities today, though running on separate instances
  • Application K is already in place to support Procurement across most of the enterprise, though instances are varied and Applications L and M exist at the facility level because of gaps in capability today
  • Applications M and O surface as the best technical solutions in the EH&S space, with all of the others providing equal or lesser business value and technical quality
  • Application S stands out among other HR solutions as being a very solid technology platform
  • Application AB is the best of the EAM solutions both in terms of business capability and technical quality

 

Notes on the Strategy

The overall simplification strategy begins with the desire to standardize operations for smaller business entities 2 and 3 (operating blueprint B) and to run facilities in a more standard way between those supporting the larger commercial unit (facility blueprint A) and those supporting the smaller ones (facility blueprint B).

From a big bets standpoint:

  • ERP: Make improvements to Application A supporting business operation 1 so that the company can move from three ERPs to one, using a separate instance for the smaller operating units.
  • CRM: Make any necessary enhancements to Application D so that it can be run as a single enterprise application supporting all three business units (removing it from their footprint to manage), providing a mechanism to have a single view of the customer and reduced operating complexity and cost
  • Accounting: Given it is already largely in place across the businesses, make improvements to Application F so it can serve as a single enterprise finance instance and remove it from the footprint of the individual units. For the facility-level requirements, make updates to accounting Application I and standardize on that application for the small business manufacturing facilities. 
  • MES: Finally, standardize on Application V across facilities, with a unique instance being used to operate large and small business facilities respectively

 

For Operational Improvements:

  • Procurement and HR: Similar to CRM and Accounting, standardize on Application K and S so that they can be maintained and operated at the enterprise level
  • EH&S: Assuming there are differences in how they operate, standardize on Applications M and O as solutions for large and smaller units respectively, eliminating all other applications in place
  • WMS: Y is already the standard for large facilities, so no action is needed there. For smaller facilities, consolidate to a single instance to support both facilities rather than maintain two versions of Application Z
  • EAM: standardize to a single, improved version of Application AB and eliminate other applications currently in place
  • Finally, for low-value applications like H and M, review to ensure no dependencies or issues exist, then sunset those applications outright to reduce complexity and any associated cost

Post-implementation, the future environment would include 12 unique applications and 2 application instances, which is a net reduction of 17 applications (59%) and 13 instances (87%), likely with a substantial cost impact as well.
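
For anyone who wants to check the arithmetic behind those figures, a quick sketch:

```python
current_apps, current_instances = 29, 15
future_apps, future_instances = 12, 2

app_reduction = current_apps - future_apps                 # 17
instance_reduction = current_instances - future_instances  # 13
print(f"{app_reduction} apps ({app_reduction / current_apps:.0%}), "
      f"{instance_reduction} instances ({instance_reduction / current_instances:.0%})")
# 17 apps (59%), 13 instances (87%)
```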

 

Wrapping Up

I realized in chalking out this article that it would be a substantial amount of information, but it is aimed at practitioners in the interest of sharing some perspective on considerations involved in doing rationalization work.  In my experience, what seems fairly straightforward on paper (including in my example above) generally isn’t for many reasons that are organizational and process-driven in nature.  That being said, there is a lot of complexity in many organizations to be addressed and so hopefully some of the ideas covered will be helpful in making the process a little more manageable.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/26/2025

Developing Application Strategy – Laying the Foundation

I have all this complexity, and I’m not sure what I can do about it…

In medium- to large-scale organizations, complexity is part of daily life.  Systems build up over time, as does the integration between them, and eventually it is easy to have a very complex spider web of applications and data solutions across an enterprise, some of which provide significant value, some that are there for reasons no one can explain… but the solution is somehow still viewed to be essential.

Returning to a simplified technology footprint is often desirable for multiple reasons:

  • A significant portion of ongoing spend is required to support and maintain what’s in place, hampering efforts to innovate, improve, or create new, more highly valued capabilities
  • New delivery efforts take a long time, because there is so much to consider in implementation both to integrate what exists and also not to break something in the process of making changes
  • Technologies continue to advance and security exposures come about, and a large portion of IT spend becomes consumed in “modernization” of things that don’t create enough value to justify the spend, but in which you have no choice but to invest, given they also can’t be retired
  • People enter and leave the organization over time. Onboarding people takes time and cost, leading to suboptimized utilization for an extended period of time, and exits can orphan solutions where the risks for modifying or retiring something are difficult to evaluate

The problem is that simplification, like modernization, is often treated as a point-in-time activity and not an ongoing maintenance effort as part of an annual planning and portfolio management process.  As a result, assets accumulate, and the cost for addressing the associated complexity and technical debt increases substantially the longer it takes to address the situation.  I will address optimizing overall IT productivity in a separate article, but this is definitely an issue that exists in many organizations today.

Having recently written about the intangibles associated with simplification, the focus of this article is establishing the foundation upon which a rationalization effort can be built, because the way you define scope and the tools you use to manage the process matter.

 

A quick note on Enterprise Architecture…

While the focus of this series of articles is specifically on app rationalization, my point of view on enterprise architecture overall is no different than what is outlined in the recent 7-part series on “The Intelligent Enterprise in an AI World” (Intelligent Enterprise Recap).  The future of technology will move to a producer/consumer model, involving highly configurable, intelligent applications, with a heavy emphasis on standard integration and orchestration (largely through agentic AI), organized via connected domain-based ecosystems.

While the overall model and framework is the same, the focus in this series is the process for identifying and simplifying the application footprint, where applications are the “components” referenced above.

 

Establishing the Foundation

Infrastructure

In a perfect world, any medium or large organization should have some form of IT Asset Management (ITAM) solution in place to track and manage assets, identify dependencies, enable ongoing discovery and license management, and a host of other things.  Ideally, it can also be integrated with your IT Service Management (ITSM) infrastructure to provide a fully traceable mechanism linking assets to incidents, in the interest of improving reliability and operating performance over time.

Is this kind of infrastructure required for a simplification effort?  No.

The goal of rationalization is not boiling the ocean or establishing an end-to-end, business-aligned enterprise infrastructure for managing the application portfolio; the goal is identifying opportunities to simplify and optimize.

In a perfect world, an enterprise architecture strategy should start with business capabilities, map those to technology capabilities, then translate that into the infrastructure (components and services) required to enable those technology capabilities.  It would be wonderful and desirable to lay all of that out in the context of a rationalization effort, but those things take time and investment, and to the extent you are planning to replace and retire a potentially significant portion of what you have, it’s better to reduce the clutter and duplication first (assuming your footprint largely supports and aligns to your business needs) and then clean up and streamline your infrastructure as a secondary priority, with only the retained assets in scope.

In the rare situation where technology is fundamentally believed to be misaligned with core business needs, it could be necessary and appropriate to start with the business strategy and go top-down from there, but I would expect this to be the exception in most cases.  Said differently: if you can get your rationalization effort done with the simplest tools you have (e.g., Microsoft Excel), go ahead; just make sure the data is accurate.  Buy and leverage a more robust platform later, preferably when you know how you plan to use it and commit to what it will take to maintain the data, because that is ultimately where such platforms’ value is established and proven over time.

 

Scope

What is an application?

One of the deceptive things about rationalization work is how simple this question appears when you haven’t gone through the exercise before.  The problem is that we use a host of things in the course of doing work, built with various technologies in multiple ways, and the way we determine scope in the interest of simplification matters.

Specifically, here is a list of things you should potentially consider in a rationalization effort, and my point of view on whether I’d normally consider them to be in scope:

  • Business applications (ERPs, CRM, MES, etc.): This is a given and generally the primary focus.
  • Tools (Microsoft Word, Alteryx, Tableau): Generally no. Tools produce content.  They generally don’t enable business processes, which is what I consider core to an “application”.
  • Third-Party Websites and Platforms (WordPress, PluralSight): In scope, to the degree they support an essential business function or have been customized to provide required capabilities
  • Citizen-Developed Applications: Generally yes, to the degree there is associated cost
  • Analytics/BI Solutions: Generally no. While data marts, warehouses, and so on provide business functionality, I consider analytics and applications to be two distinct technology domains that should be analyzed and rationalized separately, while being integrated into a cohesive overall enterprise architecture strategy
  • SharePoint sites: No. SharePoint is used to manage content, not provide functionality.
  • SharePoint applications: Generally yes, because there is an associated workflow and business process being enabled.
  • Robotic Process Automation (RPA): Generally no. RPA tools tend to act as utilities that automate simple tasks, but don’t provide robust capabilities at the level of an application. There could be exceptions to this, but I would be concerned if an RPA tool was providing a critical capability and wasn’t actually architected, designed, and built as an application to begin with.  That is a mismatch from an EA standpoint and likely I’d want to investigate migrating to a more robust and well-architected solution
  • Agentic AI Solutions: Yes. While this may be limited today, it will become more prevalent in the coming months and the relationship to existing solutions needs to be understood.
  • AI Solutions/Packages: Yes. In this case, by comparison with agentic solutions overlapping applications in a footprint, I’d be looking for duplication and redundancy, which may occur because AI adoption is relatively new, governance models are largely immature (if they exist at all), and the probability of having multiple tools that perform essentially the same function in a medium- to large-organization is, or soon will be, relatively high
  • Vibe-Coded Solutions: Absolutely yes. These need to be tracked particularly from a security exposure standpoint given how new these technologies are and the associated risk for putting them into production at this stage of the technology’s evolution.
  • Mobile Applications: If it is a standalone application, yes, it should likely be included. If it is a different form of presentment for a web-enabled application, it theoretically should be covered as part of the source application, so no.  Depending on the criticality of mobility in the enterprise technology strategy, whether an app is mobile-enabled could be part of the inventory data gathered, but only if it is critical to the strategy; otherwise, I would leave it out
  • IoT Devices: No. Physical assets and devices should be managed and rationalized separately as part of a device strategy (again, that integrates with an overall enterprise architecture)
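
For teams running the inventory, the defaults above could be captured as a simple reference table. A minimal sketch, with the verdicts summarizing my defaults rather than hard rules:

```python
# Default scoping verdicts for a rationalization inventory (summarized from above)
SCOPE_DEFAULTS = {
    "business_application":   ("in",          "Primary focus (ERP, CRM, MES, etc.)"),
    "tool":                   ("out",         "Produces content; doesn't enable a business process"),
    "third_party_platform":   ("conditional", "In if it supports an essential function or is customized"),
    "citizen_developed_app":  ("conditional", "In to the degree there is associated cost"),
    "analytics_bi":           ("out",         "Rationalized separately as a distinct domain"),
    "sharepoint_site":        ("out",         "Content management, not functionality"),
    "sharepoint_application": ("in",          "Enables a workflow/business process"),
    "rpa":                    ("conditional", "Usually a utility; investigate if it carries a critical capability"),
    "agentic_ai":             ("in",          "Relationship to existing solutions must be understood"),
    "ai_solution":            ("in",          "Look for duplication driven by immature governance"),
    "vibe_coded":             ("in",          "Track, especially for security exposure"),
    "mobile_app":             ("conditional", "Standalone: yes; alternate presentment of a web app: no"),
    "iot_device":             ("out",         "Managed under a separate device strategy"),
}

verdict, rationale = SCOPE_DEFAULTS["rpa"]
print(verdict, "-", rationale)
```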

So, the list above contains more than “business applications”, which is why I said scope can be tricky in a simplification / rationalization effort.

Beyond the considerations mentioned above, a key question for whether any of these additional types of assets is in scope is whether there is associated cost to maintain and support the asset, cyber security exposure, or compliance or privacy-related considerations.  Ultimately, any or all of these dimensions need to be considered in the exercise, because simplification should not only reduce complexity and cost, it should reduce security exposure and business risk.

 

Wrapping Up

From here, the focus will pivot to the process itself and a practical example to help illustrate the concepts put into more of a “real world” scenario.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/24/2025

Developing Application Strategy – Managing the Intangibles

It ought to be easier (and cheaper) to run a business than this…

Complexity and higher than desirable operating cost are prevalent in most medium- to large-scale organizations.  With that, generally some interest follows in exploring ways to reduce and simplify the technology footprint, to reduce ongoing expenses, mitigate risk and limit security exposure, and free up capital either to reinvest in more differentiated and value-added activity, or to contribute directly to the bottom line overall.

The challenge is really in trying to find the right approach to simplification that is analytically sound while providing insight at speed so you can get to the work and not spend more time “analyzing” than is required at any step of the process.

In starting to outline the content for this article, aside from identifying the steps in a rationalization process and working through a practical example to illustrate some scenarios that can occur, I also started noting some other, more intangible aspects to the work that have come up in my experience.  When that list reached ten different dimensions, I realized that I needed to split what was intended to be a single article on this topic into two parts: one that addresses the process aspects of simplification and one that addresses the more intangible and organizational/change management-oriented dimensions.  This piece is focused on the intangibles, because the environment in which you operate is critical to setting the stage for the work and ultimate results you achieve.

The Remainder of this Article…

The dimensions that came to mind fell into three broader categories, into which they are organized below:

  • Leading and Managing Change
  • Guiding the Process and Setting Goals
  • Planning and Governance

For each dimension, I’ll try to provide some perspective on why it matters and some potential ideas to consider in the interest of addressing them in the context of a simplification effort overall.

Leading and Managing Change

At its core, simplification is a change management and transformational activity and needs to be approached as such.  It is as much about managing the intangibles and maintaining healthy relationships as anything to do with the process you follow or the opportunities you surface.  Certainly, the structural aspects and the methodology matter, but without giving attention to the items below, you will likely have some very rough sledding in execution, suboptimize your outcomes, or fail altogether.  Said differently: the steps you follow are only part of the work; improving your operating environment is critically important.

Leadership and Culture Matter

Like anything else that corresponds to establishing excellence in technology, courageous leadership and an enabling culture are fundamental to a simplification activity.  The entire premise associated with this work rests in change and, wherever change is required, there will be friction and resistance, and potentially significant resistance at that.

Some things to consider:

  • Putting the purpose and objective of the change front and center and reinforcing it often (likely reducing operating expense in the interest of improving profitability or freeing up capital for discretionary spending)
  • Working with a win-win mindset, looking for mutual advantage, building partnerships, listening with empathy, and seeking to enroll as many people in the cause as possible over time
  • Being laser-focused on impact, not solely on “delivery”, as the outcomes of the effort matter
  • Remaining resilient, humble (to the extent that there will be learnings along the way), and adaptable, working with key stakeholders to find the right balance between speed and value

It’s Not About the Process, It’s About Your Relationships

Much like portfolio management, it is easy to become overly focused on the process and data with simplification work and lose sight of the criticality of maintaining a healthy business/technology partnership.  If IT has historically operated in an order taker mode, suggesting potentially significant changes to core business applications that involve training large numbers of end users (and the associated productivity losses and operating disruptions that come with that) may go nowhere, regardless of how analytically sound your process is.

Some things to consider:

  • Know and engage your customer. Different teams have different needs, strategies, priorities, risk tolerance, and so on
  • You can gather data and analyze your environment (to a degree) independent of your business partners, but they need to be equally invested in the vision and plan for it to be successful
  • Establishing a cadence with key stakeholders, individually and collectively, aligned to the pace of the work, is important, minimally to maintain a healthy, transparent, and open dialogue on objectives, opportunities, risks, and required interventions and support

Be a Historian as Much as You are an Auditor

Back to the point above on improving the operating environment being as important as your process/ methodology, it is important to recognize something up front in the simplification process: you need to understand how you got where you are as part of the exercise, or you may end right back there as you try to make things “better”.  It could be that complexity is a result of a sequence of acquisitions, a set of decentralized decisions without effective oversight or governance, functional or capability gaps in enterprise solutions being addressed at a “local” level, underlying culture or delivery issues, etc.  Knowing the root causes matters.

As an example, I once saw a situation where two teams implemented different versions of the same application (in different configurations) purely because the technology leaders didn’t want to work with each other.  The same application could’ve supported both organizations, but the decisions were made without enterprise-level governance, the operating complexity and TCO increased, and the subsequent cost to consolidate into a single instance was deemed “lower priority” than continuing work.  While this is a very specific example, the point is that understanding how complexity is created can be very important in pivoting to a more streamlined environment.

Some things to consider:

  • As part of the inventory activity, look beyond pure data collection to having an opportunity to understand how the various portfolios of applications came about over time, the decisions that led to the complexity that exists, the pain points, and what is viewed as working well (and why)
  • Use the insights obtained to establish a set of criteria to consider in the formation of the vision and roadmap for the future so you have a sense whether the changes you’re making will be sustainable. These considerations could also help identify risks that could surface during implementation that could reintroduce the kind of complexity in place today

What Defines “Success”

Normally, a simplification strategy is based on a snapshot of a point in time, with an associated reduction in overall cost (or shift in overall spend distribution) and/or assets (applications, data solutions, etc.).  This is generally a good way to establish the case for change and the desired outcome of the activity itself, but it doesn’t necessarily cover what is “different” about the future state beyond a couple of core metrics.  I would argue that it is also important to consider what I mentioned in the previous point, which is how the organization developed a complex footprint to begin with.

As an example, if complexity was caused by a rapid series of acquisitions, even if I do a good job of reducing or simplifying the footprint in place, if I continue to acquire new assets, I will end up right back where I was, with a higher operating cost than I’d like.  In this case, part of your objective could be to have a more effective process for integrating acquisitions.

Some things to consider:

  • Beyond the financial and operating targets, identify any necessary process or organizational changes needed to facilitate sustainability of the environment overall
  • This could involve something as simple as reviewing enterprise-level governance processes, or more structural changes in how the underlying technology footprint is managed

Guiding the Process and Setting Goals

A Small Amount of Good Data is Considerably Better than a Lot of Bad

As with any business situation, it’s tempting to assume that having more data is automatically a good thing.  In the case of maintaining an asset inventory, the larger and more diverse an organization is, the more difficult it is to maintain the data with any accuracy.  To that end, I’m a very strong believer in maintaining as little information as possible, doing deep dives into detail only as required to support design-level work.

As an example, we could start the process by identifying functional redundancies (at a category/component level) and spend allocations within and across portfolios as a means to surface overall savings opportunity and target areas for further analysis.  That requires a critical, minimum set of data, but at a reasonable level of administrative overhead.  Once specific target areas are identified and prioritized, further data gathering in the interest of comparing different solutions, performing gap analyses, and identifying candidate future state solutions can be done as a separate process.  This approach is prioritizing going broad (to Define opportunities) versus going deep (to Design the solution), and I would argue it is a much more effective and efficient way to go about simplification, especially if the underlying footprint has any level of volatility where the more detailed information will become outdated relatively quickly.

Some things to consider:

  • Prioritize a critical, minimum set of data (primary functions served by an application, associated TCO, level of criticality, businesses/operating units supported, etc.) to understand spend allocation in relation to the operating and technology footprint
  • Deep dive into more particulars (functional differences across similar systems within a given category) as part of a specific design activity downstream of opportunity identification

Be Greedy, But Realistic

The simplification process is generally going to be iterative in nature, insofar as there may be a conceptual target for complexity and spend reduction/reallocation at the outset, some analysis is performed, the data provides insight on what is possible, the targets are adjusted, further analysis or implementation is performed, the picture is further refined, and so on.

In general, my experience is that there are always going to be issues in what you can practically pursue, and therefore, it is a good idea to overshoot your targets.  By this, I mean that we should strive to identify more than our original savings goals because if we limit the level of opportunities we identify to a preconceived goal or target, we may either suboptimize the business outcome if things go well, or fall short of expectations in the event we are able to pursue only a subset of what is originally identified for various business, technology, or implementation-related issues.

Some things to consider:

  • Review opportunities, asking what would be different if you could only pursue smaller, incremental efforts, had a target that was twice what you’ve identified, or could start from scratch and completely redefine your footprint with an “optimal case” in mind… and consider what, if anything, would change about your scope and approach

Planning and Governance

Approach Matters

Part of the challenge with simplification is knowing where to begin.  Do you cover all of the footprint, the fringe (lower priority assets), the higher cost/core systems?  The larger an organization is, the more important it is to target the right opportunities quickly in your approach and not try to boil the ocean.  That generally doesn’t work.

I would argue that the primary question to understand in terms of targeting a starting point is where you are overall from a business standpoint.  The first iteration of any new process tends to generate learnings and improvements, so there will be more disruption than expected the first time you execute the process end-to-end.  To that point, if there is a significant amount of business risk to making widespread, foundational changes, it may make sense to start on lower risk, clean up type activities on non-core/supporting applications (e.g., Treasury, Tax, EH&S, etc.) by comparison with core solutions (like an ERP, MES, Underwriting, Policy Admin, etc.).  On the other hand, if simplification is meant to help streamline core processes, enable speed-to-market and competitive advantage, or some form of business growth, it could be that focusing on core platforms first is the right approach to take.

The point is that the approach should not be developed independent of the overall business environment and strategy; the two need to align with each other.

Some things to consider:

  • As part of the application portfolio analysis, understand the business criticality of each application, level of planned changes and enhancements, how those enable upcoming strategic business goals, etc.
  • Consider how the roadmap will enable business outcomes over time, whether that is ideally a slow build of incremental gains or more of a set of big-bet, high-impact changes that materially affect business value and IT spend

 

Accuracy is More Important Than Precision

This may seem to contradict what I wrote earlier about having a smaller amount of good data, but the point here is that it’s important to acknowledge in a transformation effort that there is a directly proportional relationship between the degree of change involved in the effort and the associated level of uncertainty in the eventual outcome.  Said differently: the more you change, the less you can predict the result with any precision.

This is true because there is generally limited data available on the operating impact of changes to people, process, and technology.  Consequently, the more you change one or more of those elements, the more limited your ability will be to predict the exact outcome from a metrics standpoint (beyond a more anecdotal/conceptual level).  In line with the concepts that I shared in the recent “Intelligent Enterprise 2.0” series, I believe that with orchestration and AI we can gather, analyze, and leverage a greater base of this kind of data, but the infrastructure to do this largely doesn’t exist in most organizations I’ve seen today.

Some things to consider:

  • Be mindful not to “overanalyze” the impact of process changes up front in the simplification effort. The business case will generally be based on the overall reduction in assets/ complexity, changes in TCO, and shifts (or reductions) in staffing levels from the current state
  • It is very difficult to predict the end state when a large number of applications are transitioned as part of a simplification program, so allow for a degree of contingency in the planning process (in schedule and finances) rather than spending time trying to forecast every outcome. Some things that do not appear critical will reveal themselves to be so only in implementation, some applications that you believe you can decommission will remain for a host of reasons, and so on.  The best laid plans on paper rarely prove out exactly in the course of execution, depending on the complexity of the operating environment and culture in place

Expect Resistance and Expect a Mess

Any large program in my experience tends to go through an “optimism” phase: you identify a vision and a fairly significant, transformative goal, the business case and plan look good, they have been vetted and stakeholders are aligned, and you have all the normal “launch” related events that generate enthusiasm and momentum towards the future… and then reality sets in, and the optimism phase ends.

Having written more than once on Transformation, the reality is that it is messy and challenging for a multitude of reasons, starting with the patience, adaptability, and tenacity it takes to really facilitate change at a systemic level.  The status quo feels safe and comforting because it is known, and upsetting that reality will necessarily lead to friction, resistance, and obstacles throughout the process.

Some things to consider:

  • Set realistic goals for the program at the outset, acknowledge that it is a journey, that sustainable change takes time, the approach will evolve as you deliver and learn, and that engagement, communication, and commitment are the non-negotiables you need throughout to help inform the right decisions at the right time to promote success
  • Plan with the 30-, 60-, and 90-day goals in mind, but acknowledge that any roadmap beyond the immediate execution window will be informed by delivery and likely evolve over time. I’ve seen quite a lot of time wasted on detailed planning more than one year out, where a goal-based plan with conceptual milestones would’ve provided equal value from a planning and CBA standpoint

Govern Efficiently and Adjust Responsively

Given the scale and complexity of simplification efforts, it would be relatively easy to “over-report” on a program of this type and cause adverse impact on the work itself.  In line with previous articles that I’ve written on governance and transparency, my point of view is that the focus needs to be on enabling delivery and effective risk management, not administrative overhead.

Some things to consider:

  • Establish a cadence for governance early on to review ongoing delivery, identify needed interventions and support, capture learnings that can inform future planning, and adjust goals as needed
  • Large programs succeed or fail in my experience based on maintaining good transparency into where you are, identifying course corrections when needed, and making those adjustments quickly to minimize the cost of “the turns” when they inevitably happen. Momentum is so critical in transformation efforts that minimizing these impacts is essential to keeping things on track

Wrapping Up

Overall, separating the process from the change in addressing simplification was deliberate, because both aspects matter.  You can have a thoughtful, well-executed process and accomplish nothing in terms of change; equally, you can be very mindful of the environment and changes you want to bring about, but the execution model needs to be solid, or you will lose any momentum and good will you’ve built in support of your effort.

Ultimately, recognizing that you’re engaging in both a change and a delivery activity is the critical takeaway.  Most medium- to large-scale environments end up complex for a host of reasons.  You can change the footprint, but you need to change the environment as well, or it’s only a matter of time before you’ll find yourself right back where you started, perhaps with a different set of assets, but with a lot of the same problems you had in the first place.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/22/2025

Culture and Sustaining Success

“Something needs to change… we’re not growing, we’re too slow, we’re not competitive, we need to manage costs…”

Change is an inevitable reality in business.  Even the most successful company will face adversity, new competitors, market shifts, evolving customer needs, expense pressures, near-term shareholder expectations, etc.  While it’s important to focus on adjusting course and remedying issues, the question is whether you will find yourself in exactly the same (or at least a relatively similar) situation again (along with how quickly), and that comes down to leadership and culture.

Culture issues tend to beget other business issues, whether it’s delayed or misaligned decisions, complacency and a lack of innovation, a lack of collaboration and cooperation, risk averse behaviors, redundant or ineffective solutions, high turnover resulting in lost expertise, and so on.  The point is that excellence needs to start “at the top” and work its way throughout an organization, and the mechanism for that proliferation is culture.

The focus of this article will be to explore what it takes to evolve culture in an organization, and to provide ways to think about what can happen when it isn’t positioned or aligned effectively.

 

It Starts with the Right Intentions

Conformance versus Performance

Before attempting to change anything, the fundamental question to ask is why you want to have a culture in place to begin with.  Certainly, over the course of time and multiple organizations (and clients), I’ve seen culture emphasized to varying degrees, from being core to a company’s DNA to being relegated to a poster, web page, or item on everyone’s desk that is seldom noticed or referenced.

In cases where it’s rarely referenced, there is a missed opportunity to establish mission and purpose and to rally people around core concepts that can facilitate an effective work environment.

That being said, focusing on culture doesn’t necessarily create a greater good in itself, as I’ve seen environments where culture is used in almost a punitive way, suggesting there are norms to which everyone must adhere and specific language everyone needs to use, or there will be adverse consequences. 

That isn’t about establishing a productive work environment; it’s about control and conformance, and that can be toxic when you understand the fundamental issue it represents: employees aren’t trusted enough to do the right thing, be empowered, and enabled to act, so there needs to be a mechanism in place to drive conformity, enforce “common language”, and isolate those who don’t fit the mold to create a more homogenous organizational society.

So, what happens to innovation, diversity, and inclusion in these environments?  They are suppressed or destroyed, because the capabilities and gifts of the individual are lost to the push towards a unified, homogenized whole.  That is a fairly extreme outcome of such authoritarian environments, but the point is that a strong culture is not, in itself, automatically good if the focus is control and not performance and excellence.

I’ve written multiple articles on culture and values that I believe are important in organizations, so I won’t repeat those messages here, but the goal of establishing culture should be fostering leadership, innovation, growth, collaboration, and optimizing the contribution of everyone in an organization to serve the greater good.  If that doesn’t apply in equal measure to every employee, based on their individual capabilities and experience, that’s fine from my perspective, so long as they don’t detract from the performance of others in the process.  The point is that culture isn’t about the words on the wall, it’s about the behaviors that you are aspiring to engender within an organization and the degree to which you live into them every day.

 

Begin with Leadership

 

Words and Actions

It is fairly obvious to say that culture needs to start “at the top” and work its way outward, but there are so many issues I’ve seen over time in this step alone that it is worth repeating.

It is not uncommon for leaders to speak in town hall meetings or public settings and proclaim the merits of the company culture, asking others to follow the core values or principles as outlined, to the betterment of themselves and everyone else (customers and others included as appropriate).  Now, the question is: what happens when that person returns to their desk and makes their next set of decisions?  This is where culture is measured, and employees notice everything over time.

The challenge for leaders who want excellence and organizational performance is to take culture to heart and do what they can to live into it, even in the most difficult circumstances, which is where it tends to be needed the most.  I remember hearing a speaker suggest that the litmus test of the strength of your commitment to culture could be expressed in whether you would literally walk away from business rather than compromise your values.  That’s a pretty difficult bar to set in my experience, but an interesting way to think about the choices we make and their relative consequence.

 

Aligning Incentives versus Values

Building on the previous point, there is a difference between behaviors and values.  The latter is what you believe and prioritize; the former is how you act.  Behaviors are directly observable; values are observed indirectly through your words and actions.

Why is this important in the context of culture?  Because you can incent people in the interest of influencing their behavior, but you can’t change someone’s values, no matter how you incent them.  To the extent you want to set up a healthy, collaborative culture and there are individual motivations that don’t align with doing the right thing, organizational performance will suffer in some way, and the more senior the individual(s) are in the organization, the more significant the impact will likely be.

This point ultimately comes down to doing the right level of due diligence during the hiring process, but also being willing to make difficult decisions during the performance management process, because sometimes individual performers with unhealthy behaviors cause a more significant impact than is evident without some level of engagement and scrutiny from a leadership standpoint.

 

Have a Thoughtful Approach

Incubate -> Demonstrate -> Extend

As the diagram above suggests, culture doesn’t change overnight, and being deliberate in the approach to change will have a significant impact on how effective and sustainable it is.

In general, the approach I’d recommend is to start from “center” with leadership: raise awareness, educate on the intent and value of the changes proposed, and incubate there.  Broader communication about the proposed shift is likely useful in preparing the next group to be engaged in the process, but the point is to start small, begin “living into” the desired model, evaluate its efficacy, and demonstrate the value it can create, THEN extend to the next (likely adjacent) set of people, and repeat the process until the change has fully proliferated to the entire organization.  The length of any given iteration will likely vary depending on the size of the employee population and the degree of change involved (more substantial = longer windows of time), but the point is to be conscious and deliberate in how it is approached, so adjustments can be made along the way and so leaders can understand and internalize the “right” set of behaviors before being expected to advocate for and reinforce them in others.

 

An Example (Building an Architecture Capability)

 

To provide a simple example, when trying to establish an architecture capability across an organization, it would need to ultimately span from the central enterprise architecture team down to technical leads on individual implementation teams.  It would be impractical to implement the model all at once, so it would be more effective to stage it out, working from the top down: first defining roles and responsibilities across the entire operating model, then implementing one “layer” of roles at a time, until it is entirely in place.

Since architects are generally responsible for technical solution quality, but not execution, the deployment of the model would need to follow two coordinated paths: building the architecture capability itself and aligning it with the delivery leadership with which it is meant to collaborate and cooperate (e.g., project and program managers).  Trying to establish the role without alignment and support from people leading and participating on delivery teams likely would fail or lead to ineffective implementation, which is another reason why a more thoughtful and deliberate approach to the change is required.

What does this have to do with culture?  Well, architecture is fundamentally about solution quality in technology, reuse, managing complexity and cost of ownership, and enabling speed and disciplined innovation.  Establishing roles with an accountability for quality will test the appetite within an organization when it comes to working beyond individual project goals and constraints to looking at more strategic objectives for simplification, speed, and reuse.  Where courageous leadership and the right culture are not in place, evolving the IT operating model will be considerably more difficult, likely at every stage of the process.

 

Manage Reality

To this point, I’ve addressed change in a fairly uniform and somewhat idealistic manner, but reality is often quite different, so I wanted to explore a couple of situations and how I think about the implications of each.

Non-Uniform Execution

 

So, what happens when you change culture within your team, but it doesn’t extend to those who work directly with you?  It depends on the nature of the change itself, of course, but likely the farther “out” from center you go, the more difficult it will be for your team to capitalize on whatever benefits the change was intended to deliver.

My assumptions here are in relation to medium- to larger-scale organizations, where the effects are magnified and it is impractical to “be everywhere, all the time” to engage in ways that help facilitate the desired change.

In the case that there isn’t broader alignment to whatever cultural adjustments you want to make within your team, depending on the degree of difference from the broader company culture, it may be necessary to clarify how “we operate internally” versus how “we engage with others”.  The goal of drawing out that separation would be to drive performance improvement within your team without wasting energy and creating friction in your external interactions.

There is a potential risk in having teams with a very different culture than the broader organization if an “us and them” mentality develops, or a special-treatment situation arises where that team demonstrates unhealthy organizational behaviors or is held to different standards than others. Ultimately, those situations cause larger issues and should be avoided where possible.

 

Handling Disparate Cultures

 

Unlike the previous situation, where there is one broader culture and a team operates in a slightly different manner, I’ve also seen situations where groups operate with very different cultures within the same overall organization, and it can create substantial disconnects if not addressed effectively.  When not addressed, there can be a lot of internal friction, competition, and a lack of effective collaboration, which will hinder performance in one or more ways over time.

One way to manage a situation where there are multiple distinct cultures within a single organization would be to first look for some level of core, universally accepted operating principles that can be applied to everyone, but then to focus entirely on the points of engagement across organizations, clarify roles and responsibilities for each constituent group, and manage those dependencies the same as you would if working with a third-party provider or partner.  The overall operating performance opportunity may not be fully realized, but this kind of approach can provide clarity of expectations and reduce friction points to a large degree.

 

Wrapping Up

The purpose of this article was to come back to a core element that makes organizations successful over time, and that’s culture.  To the degree that there are gaps or issues, it is always possible to adapt and evolve, but it takes a thoughtful approach, the right leadership, and time to make sustainable change.  In my opinion, it is time worth spending to the degree that performance and excellence are your goals.  It will never be “perfect” for many reasons, but thinking about how you establish, reinforce, and evolve culture in a disciplined way can be the difference in remaining agile, competitive, and successful overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/03/2025

Visualizing Experience

“Are we set up for success?”

This is a question that I’ve heard asked many times, particularly when there is a strategic initiative or transformation effort being kicked off.  Normally, the answer is an enthusiastic “Yes”, because most programs start with a lot of optimism (which is a good thing), but not always a full understanding of risk.  The question is… How do you know whether you have the necessary capabilities to deliver?

In any type of organization, there is a blend of skills and experience, whether across a leadership team or within an individual team itself.  Given that reality and the ongoing tendency of organizations to evolve, realign, and reorganize, it is not uncommon to leverage some form of evaluation (such as a Kolbe assessment) to understand the natural strengths, communication, or leadership styles of various individuals to help facilitate understanding and improve collaboration.

But what about knowledge and experience?  This part I haven’t seen done as often, partially because, if not done well, it can lead to a cumbersome and manually intensive process that doesn’t create value.

The focus of this article is to suggest a means to understand and evaluate the breadth of knowledge and skills across a team.  To the extent we can visualize collective capability, it can be a useful tool to inform various things from a management standpoint, which are outlined in the second section below.

Necessary caveats: The example used is not meant to be prescriptive or exhaustive and this activity doesn’t need to be focused on IT alone.  The goal in the illustration used here was to provide enough specificity to help the reader visualize the concept at a practical level, but the data is entirely made up and not meant to be taken as a representation of an actual set of people.

On the Approach

Thinking through the Dimensions

The diagram above breaks out 27 dimensions from a knowledge and skills standpoint, ranging from business understanding to operations and execution.  The dimensions chosen for the purposes of this exercise don’t particularly matter, but I wanted to select a set that covered many of the aspects of an IT organization as a whole.

From an approach standpoint, the goal would be to identify what is being evaluated, select the right set of dimensions, define them, then determine “what good looks like” in terms of having a baseline for benchmarking (e.g., 10 means X, 8 means Y, 6 means Z, etc.).  With the criteria established, one should then explain the activity to the group being evaluated, prepare a simple survey, and gather the data.  The activity is meant to be rapid and directionally accurate, not to supplant individual performance evaluations, career development, or succession plans that should exist at a more detailed level.  Ideally the dimensions should also align to the competency model for an organization, but the goal of this activity is directional, so that step isn’t critical if it requires too much effort.

Once data has been collected, individual results can be plotted in a spider graph like the one below to provide a perspective on where there are overlaps and gaps across a team.
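
As a minimal sketch of how such a chart could be produced, the Python/matplotlib example below plots made-up 0-10 scores for a few team members across illustrative dimensions and overlays a collective outline, here approximated as the maximum score per dimension (akin to the green dotted outline described in the next section).  The dimensions, names, and scores are all assumptions for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative dimensions and 0-10 self-assessment scores; all names and
# numbers are made up, consistent with the caveats earlier in this article.
dimensions = ["Business Acumen", "Architecture", "Data & Analytics",
              "Security", "Delivery", "Operations", "AI/ML", "Vendor Mgmt"]
team = {
    "Leader":   [8, 6, 5, 4, 9, 7, 2, 8],
    "Member A": [4, 9, 7, 6, 5, 4, 3, 2],
    "Member B": [5, 3, 8, 7, 6, 8, 1, 4],
}

# Spread the dimensions evenly around the circle; repeat the first point
# at the end so each polygon closes.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, scores in team.items():
    ax.plot(angles, scores + scores[:1], label=name)

# Collective capability: here approximated as the max score per dimension.
collective = [max(s) for s in zip(*team.values())]
ax.plot(angles, collective + collective[:1], linestyle="--", label="Team (collective)")

ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions, fontsize=8)
ax.set_ylim(0, 10)
ax.legend(loc="upper right", bbox_to_anchor=(1.35, 1.1))
plt.show()
```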

Ways of Applying the Concept

With the individual inputs from a team having been provided, it’s possible to think about the data in two different respects: how it reflects individual capabilities, gaps, and overlaps, as well as what it shows about the collective experience of the team as a whole (the green dotted outline above).

With the data now assembled, there are a number of ways to leverage the information, outlined below.

Talent Development: The strengths and gaps in any individual view can be used to inform individual development plans or identify education needs for the team as a whole.  It can also be used to sanity check individual roles and accountability against the actual experience of individuals on the team.  This isn’t to suggest rotations and “learn on the job” situations aren’t a good thing, but rather to raise awareness of those situations so that they can be managed proactively with the individual or the team as a whole.  To the extent that a gap with one person is a strength in another, there could be cross-training opportunities that surface through the process.

Coordination and Collaboration: With overlaps and gaps visible across a team, there can be areas identified where individual team members see opportunities to consult with others who have a similar skillset, and also perhaps a different background that could surface different ways to approach and solve problems.  In larger organizations, it can often be difficult to know “who to invite” to a conversation, where the default becomes inviting everyone (or making everyone ‘mandatory’ versus ‘optional’), which ultimately can lead to less productive or over-attended conversations that lack focus.

Leaders and Teams: In the representative data above, I deliberately highlighted areas where team members were not as experienced as the person leading the team, but also the converse situation.  In my experience, it is almost never the case that the leader is the most experienced in everything within the scope of what a team has to do.  If that were the case, it could suggest that the potential of that team is limited to the leader’s individual capabilities and vision, because others lack the experience to help inform direction.  In the event that team members have more experience than their leader, there can also be opportunities for individuals to step up and provide direction, assuming the team leader creates space and a supportive environment for that to occur.  Again, the point of the activity is to identify these disparities where they exist and determine what, if anything, to do about them.

Sourcing Strategy: Where significant gaps exist (e.g., there is no one with substantial AI experience in the example data above), these could be areas where finding a preferred partner with a depth of experience in the topic could be beneficial while internal talent is acquired or developed (to the extent it is deemed strategic to the organization).

Business Partnership: The visibility could serve as input to a partnership discussion to align expectations for where business leaders expect support and capability from their technology counterparts versus areas where they are comfortable taking the lead or providing direction.  This isn’t always a very deliberate conversation in my experience, and sometimes that can lead to missed expectations in complex delivery situations.

Risk Management: One of the most important things to recognize about a visualization like this is not just what it shows about a team’s capability, but also what isn’t there.

Using Donald Rumsfeld’s now famous concept:

  • There are known knowns – things for which we have facts and experience
  • There are known unknowns – things we know are needed, but which are not yet clear
  • And there are unknown unknowns – things outside our experience, and therefore blind spots

The last category is where we should also focus in an activity like this, because the less experience that exists individually and collectively in a leadership team, the greater the risk, given the lack of awareness of all the “known unknowns” that can have a material impact on delivering solutions and operating IT.  To the extent that a team is relatively inexperienced, no matter how motivated it may be, there is an increased probability that something will be compromised, whether that is cost, quality, schedule, morale, or something else.  To that end, this tool can be an important mechanism to identify and manage risk.

Wrapping Up

Having recently written a fairly thorough set of articles on the future of enterprise technology, I wanted to back up and look at something a little less complex, but also with a focus on improving transparency and informing leadership discussions on risk, development, and coordination.

Whether through a mechanism like this or some other avenue, I believe there is value in understanding the breadth of capabilities that exist within a team and across a leadership group as a means for promoting excellence overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/24/2025

The Intelligent Enterprise 2.0 – IT Organizational Implications

“Go West, Young Man”

I remember a situation where an organization decided to aggressively offshore work to reduce costs.  The direction to IT leaders was no different than that of the Gold Rush above.  Leaders were given a mandate and a partner with whom to work, and a massive amount of contracting ensued.  The result?  A significant number of very small staff augmentation agreements (e.g., 1-3 FTEs), a reduction in fixed but a disproportionate increase in variable operating expenses, and a governance and administrative nightmare.  How did it happen?  Well, there was leadership support, a vision, and a desired benefit, but no commonly understood approach, plan, or governance.  The organization then spent a considerable amount of time transitioning all of the agreements in place to something more deliberate, establishing governance, and optimizing what had quickly become a very expensive endeavor.

The requirements of transformation are no different today than they ever have been.  You need a vision, but also the conditions to promote success, and that includes an enabling culture, a clear approach, and governance to keep things on track and make adjustments where needed.

This is the final post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Overall Approach

Writing about IT operating models can be very complex for various reasons, mostly because there are many ways to structure IT work based on the size and complexity of an organization, and there is nothing wrong with that in principle.  A small- to medium-size IT organization, as an example, likely has little separation of concerns and hierarchy, people playing multiple roles, a manageable project portfolio, and minimal coordination required in delivery.  Standardization is not as complex and requires considerably less “design” than in a multi-national or large-scale organization, where standardization and reuse need to be weighed against the cost of coordination and management involved as the footprint scales.  There can also be other considerations, such as one I experienced where simplification of a global technology footprint was driven off of operating similarities across countries rather than geographic or regional considerations.

What tends to be common across ways of structuring and organizing is the set of IT functions that exist (e.g., portfolio management, enterprise architecture, app development, data & analytics, infrastructure, IT operations), just at different scales and levels of complexity.  These can be capability-based, organized by business partner, with some capabilities centralized and some federated, etc., but the same essential functions are likely in place, at varying levels of maturity and scale based on the needs of the overall organization.  In multi-national or multi-entity organizations/conglomerates, these functions will likely be replicated across multiple IT organizations, with some or no shared capability existing at the parent/holding company or global level.

To that end, I am going to explore how I think about integrating the future state concepts described in the earlier articles of this series in terms of an approach and conceptual framework that, hopefully, can be applied to a broad range of IT organizations, regardless of their actual structure.

The critical challenge with moving from our current environment to one where AI, apps, and data are synthesized and integrated is doing it in a way that follows a consistent approach and architecture while not centralizing so much that we either constrain progress or limit innovation that can be obtained by spreading the work across a broader percentage of an organization (business teams included).  This is consistent with the templatized approach discussed in the prior article on Transition, but there can be many ways that that effort is planned and executed based on the scale and complexity of the organization undertaking the transformation itself.

 

Key Considerations

Define the Opportunities, Establish the Framework, Define the Standards

Before being concerned with how to organize and execute, we first need to have a mental model for how teams will engage with an AI-enabled technology environment in the future.  Minimally, I believe that will manifest itself in four ways:

  • Those who define and govern the overall framework
  • Those who leverage AI-enabled capabilities to do their work
  • Those who help create the future environment
  • Those who facilitate transition to the future state operating model

I will explore the individual roles in the next section, but an important first step is defining the level of standardization and reuse that is desirable from an enterprise architecture standpoint.  That question becomes considerably more complex in organizations with multiple entities, particularly when there are significant differences in the markets they serve, products/services they provide, etc.  That doesn’t mean, however, that reuse and optimization opportunities don’t exist, but rather that they need to be more thoughtfully defined and developed so as not to slow any individual organization down in its ability to innovate and promote competitive advantage.  There may be opportunities to identify common capabilities that make more sense to develop centrally and reuse, which can actually accelerate speed-to-market if built in a thoughtful manner.

Regardless of whether capabilities in a larger organization are designed and developed in a highly distributed manner, having a common approach to the overall architecture and standards (as discussed in the Framework article in this series) could be a way to facilitate learning and optimization in the future (within and across entities), which will be covered in the third portion of this article.

 

Clarify the Roles and Expectations, Educate Everyone

The table below is not meant to be exhaustive, but rather to act as a starting point to consider how different individuals and teams will engage with the future enterprise technology environment and highlight the opportunity to clarify those various roles and expectations in the interest of promoting efficiency and excellence.

While I’m not going to elaborate on the individual data points in the diagram itself, a few points are worth noting in relation to the table.

There is a key assumption that AI-related tools will be vetted, approved, and governed across an enterprise (in the above case, by the Enterprise Architecture function).  This is to promote consistency and effectiveness, manage risk, and consider issues related to privacy, IP-protection, compliance, security, and other critical concerns that otherwise would be difficult to manage without some formal oversight.

It is assumed that “low hanging fruit” tools like Copilot, LLMs, and other AI tools will be used to improve productivity and look for efficiency gains while implementing a broader, modern, and integrated future state technology footprint with integrated agents, insights, and experts.  The latter has touchpoints across an organization, which is why having a defined framework, standards, and governance is so important in creating the most cost-effective and rapid path to transforming the environment into one that creates disproportionate value and competitive advantage.

Finally, there are adjustments to be made in various operating aspects of running IT, which reinforces the idea that AI should not be a separate “thing” or a “silver bullet”; it needs to become an integrated part of the way an organization leverages and delivers technology capabilities.  To the extent it is treated as something different or special and separated from the ongoing portfolio of work and operational monitoring and management processes, it will eventually fail to integrate well, increase costs, and underdeliver on the value contributions that are so widely chased after today.  Everyone across the organization should also be made aware of the above roles and expectations, along with how these new AI-related capabilities are being leveraged, so they can help identify continuing opportunities to leverage these capabilities and improve their adoption across the enterprise.

 

Govern, Coordinate, Collaborate, Optimize

With a model in place and organizational roles and responsibilities clarified, there needs to be a mechanism to collect learnings, facilitate improvements to the “templatized approach” referenced in the previous article in this series, and drive continuous improvement in how the organization functions and leverages these new capabilities.

This can manifest in several approaches when spread across a medium to large organization, namely:

  • Teams can work in partnership or in parallel to try a new process or technology and develop learnings together
  • One team can take the lead to attempt a new approach or innovation and share learnings with others in a fast-follower approach
  • Teams can try different approaches to the same type of solution (when there isn’t a clear best option), benchmark, and select the preferred approach based on the learning across efforts

The point is that, especially when there is scale, centralizing too much can hamper learning and innovation; to achieve maximum efficiency, it is better to develop coordinated approaches that can be governed and leveraged than to have a “free for all” where the overall opportunity to innovate and capture efficiencies is compromised.

 

Summing Up

As I mentioned at the outset, the challenge in discussing IT implications from establishing a future enterprise environment with integrated AI is that there are so many possibilities for how companies can organize around it.  That being said, I do believe a framework for the future intelligent enterprise should be defined, roles across the organization should be clarified, and transition to that future state should be governed with an eye towards promoting value creation, efficiency, and learning.

This concludes the series on my point of view related to the future of enterprise technology with AI, applications, and data and analytics integrated into one, aligned strategy.  No doubt the concepts will evolve as we continue to learn and experiment with these capabilities, but I believe there is always room for strategy and a defined approach rather than an excess of ungoverned “experimentation”.  History has taught us that there are no silver bullets, and that is the case with AI as well.  We will obtain the maximum value from these new technologies when we are thoughtful and deliberate in how we integrate them with how we work and the assets and data we possess.  Treating them as something separate and distinct will only suboptimize the value we create over time and those who want to promote excellence will be well served to map out their strategy sooner rather than later.

I hope the ideas across this series were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/19/2025

The Intelligent Enterprise 2.0 – Managing Transition

A compelling vision will stand the test of time, but it will also evolve to meet the needs of the day.

There are inherent challenges with developing strategy for multiple reasons:

  • It is difficult to balance the long- and short-term goals of an organization
  • It generally takes significant investments to accomplish material outcomes
  • The larger the change, the more difficult it can be to align people to the goals
  • The time it takes to mobilize and execute can undermine the effort
  • The discipline needed to execute at scale requires a level of experience not always available
  • Seeking “perfection” can be counterproductive by comparison with aiming for “good enough”

The factors above, and others too numerous to list, should serve as reminders that excellence, speed, and agility require planning and discipline; they don’t happen by accident.  The focus of this article will be to break down a few aspects of transitioning to an integrated future state in the interest of increasing the probability of achieving predictable outcomes while also getting there quickly and cost-effectively, with quality.  That’s a fairly tall order, but if we don’t shoot for excellence, we are eventually choosing obsolescence.

This is the sixth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Key Considerations

Create the Template for Delivery

My assumption is that we care about three things in transitioning to a future, integrated environment:

  • Rapid, predictable delivery of capabilities over time (speed-to-market, competitive advantage)
  • Optimized costs (minimal waste, value disproportionate to expense)
  • Seamless integration of new capabilities as they emerge (ease of use and optimized value)

What these three things imply is the need for a consistent, repeatable approach and a standard architecture towards which we migrate capabilities over time.  Articles 2-5 of this series explore various dimensions of that conceptual architecture from an enterprise standpoint; the purpose here is to focus on the approach.

As was discussed in the article on the overall framework, the key is to start the design of the future environment around the various consumers of technology and the capabilities being provided to them, now and in the future, prioritizing the relative value of those capabilities over time.  That should be done on an internal and external level, so that individual roadmaps can be developed by domain to eventually surface and provide those capabilities as part of a portfolio management process.  In the course of identifying those capabilities, some key questions will be whether they are available today, whether they involve some level of AI services, whether they can/should be provided through a third party or internally, and whether the data ultimately exists to enable them.  This is where a significant inflection point should occur: the delivery of those capabilities should follow a consistent approach and pattern, so it can be repeated, leveraged, and made more efficient over time.

For instance, if an internally developed, AI-enabled capability is needed, the way the data is exposed and processed, the AI service (or data product) published, and the result integrated/consumed by a package or custom application should be exactly the same from a design standpoint, regardless of what the specific capability is.  That isn’t to say that the work needs to be done by only one team, as will be explored in the final article on IT organizational implications, but rather that we ideally want to determine the best “known path” to delivery, execute it repeatedly, and evolve it as required over time.

Taking this approach should provide a consistent end-user experience of services as they are brought online, a relatively streamlined delivery process as you are essentially mass-producing capabilities, and a relatively cost-optimized environment as you are eliminating the bloat and waste of operating multiple delivery silos that would eventually impede speed-to-market at scale and lead to technical debt.

From an architecture standpoint, without wanting to go too deep into the mechanics here, to the extent that the current state and future state enterprise architecture models are different, it would be worth evaluating things like virtualization of data as well as adapters/facades in the integration layer as a way to translate between the current and future models so there is logical consistency in the solution architecture even where the underlying physical implementations are varied.  Our goal in enterprise architecture should always be to facilitate rapid execution, but promote standards, simplification, and reduce complexity and technical debt wherever possible over time.
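
As a minimal sketch of the adapter/facade idea, the Python example below has consumers code against a hypothetical future-state model while an adapter translates a legacy record shape behind the scenes.  The CustomerProfile model, field names, and legacy client are all assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical future-state model that the enterprise architecture
# standardizes on; the fields are illustrative only.
@dataclass
class CustomerProfile:
    customer_id: str
    name: str
    segment: str

# The facade every consumer codes against, regardless of physical source.
class CustomerService(Protocol):
    def get_customer(self, customer_id: str) -> CustomerProfile: ...

# Adapter translating a legacy record shape into the future-state model.
# The legacy field names (CUST_NO, etc.) are made up for the example.
class LegacyCrmAdapter:
    def __init__(self, legacy_client):
        self._client = legacy_client  # existing system, left untouched

    def get_customer(self, customer_id: str) -> CustomerProfile:
        raw = self._client.fetch(customer_id)  # legacy API, different shape
        return CustomerProfile(
            customer_id=raw["CUST_NO"],
            name=raw["CUST_NAME"],
            segment=raw.get("SEG_CD", "UNKNOWN"),
        )
```

When the physical implementation eventually migrates to the future-state platform, the adapter is retired and consumers are untouched, which is the logical consistency described above.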

 

Govern Standards and Quality

With a templatized delivery model and target architecture in place, the next key aspect of transition is to govern the delivery of new capabilities, both to identify opportunities to develop and evolve standards and to evolve the “template” itself, whether that involves adding automation to the delivery, building reusable frameworks or components that can be leveraged, or creating other assets that can help reduce friction and ease future efforts.

Once new capabilities are coming online, the other key aspect is to review them for quality and performance, to also look for ways to evolve the approach, adjust the architecture, and continue to refine the understanding of how these integrated capabilities can best be delivered and leveraged on an end-to-end basis.

Again, the overall premise of this entire series of articles is to chart a path towards an integrated enterprise environment for applications, data, and AI in the future.  To be repeatable, we need to be consistent in how we plan, execute, govern, and evolve in the delivery of capabilities over time.

Certainly, there will be learnings that come from delivery, especially early in the adoption and integration of these capabilities. The way that we establish and enable the governance and evolution stemming from what we learn is critical in making delivery more predictable and ensuring more useful capabilities and insights over time.

 

Evolve in a Thoughtful Manner

With our learnings firmly in hand, the mental model I believe is worth considering is more of a scaled Agile-type approach, where we have “increment-level”/monthly scheduled retrospectives to review the learnings across multiple sprints/iterations (ideally across multiple products/domains) and to identify opportunities to adjust estimation metrics and design patterns, develop core/reusable services, and do anything else that would improve efficiency, cost, quality, or predictability.

Whether these occur monthly at the outset and eventually space out to a quarterly process as the environment and standards are better defined could depend on a number of factors, but the point is not to make the process too reactive to any individual implementation, and to try to look across a set of deliveries for consistent issues, patterns, and opportunities that will have a disproportionate impact on the environment as a whole.

The other key aspect to remember in relation to evolution is that the capabilities of AI in particular are evolving very rapidly at this point, which is another reason for thinking about the overall architecture, separation of concerns, and standards for how you integrate in a very deliberate way.  A new version of a tool or technology shouldn’t require us to rewrite a significant portion of our footprint; it ideally should be a matter of upgrading the previous version and having the new capabilities available everywhere that service is consumed, or of swapping out one technology for another with everything on the other side of an API remaining nearly unchanged to the extent possible.  This is understood to be a fairly “perfect world” description of what would likely happen in practice, depending on the nature of the underlying change, but the point is that, without allowing for these changes up front in the design, the level of disruption they cause will likely be amplified and will slow overall progress.
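
To illustrate, here is a minimal Python sketch of that separation; the SummarizationService contract, the vendor class, and the client SDK are all hypothetical.  Consumers depend only on the stable interface, so upgrading a model or swapping vendors is contained to one adapter class.

```python
from typing import Protocol

# The stable contract consumers depend on; deliberately names no vendor,
# model, or version. Everything here is a hypothetical illustration.
class SummarizationService(Protocol):
    def summarize(self, text: str, max_words: int = 100) -> str: ...

# One provider implementation. Upgrading the model or swapping vendors
# means changing (or replacing) this class only; every consumer of the
# SummarizationService contract stays unchanged.
class VendorASummarizer:
    def __init__(self, client, model: str = "model-v1"):
        self._client = client  # hypothetical vendor SDK client
        self._model = model

    def summarize(self, text: str, max_words: int = 100) -> str:
        # Only this adapter knows the vendor's request shape.
        prompt = f"Summarize in at most {max_words} words:\n{text}"
        return self._client.complete(model=self._model, prompt=prompt)
```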

 

Summing Up

Change is a constant in technology.  It’s a part of life given that things continue to evolve and improve, and that’s a good thing to the extent that it provides us with the ability to solve more complex problems, create more value, and respond more effectively to business needs as they evolve over time.  The challenge we face is in being disciplined enough to think through the approach so that we can become effective, fast, and repeatable.  Delivering individual projects isn’t the goal; it’s delivering capabilities rapidly and repeatably, at scale, with quality.  That’s an exercise in disciplined delivery.

Having now covered the technology and delivery-oriented aspects of the future state concept, the remaining article will focus on how to think about the organizational implications on IT.

 

Up Next: IT Organizational Implications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/15/2025

The Intelligent Enterprise 2.0 – Deconstructing Data-Centricity

Does having more data automatically make us more productive or effective?

When I wrote the original article on “The Intelligent Enterprise”, I noted that the overall ecosystem for analytics needed to change.  In many environments, data is moved from applications to secondary solutions, such as data lakes, marts, or warehouses, and enriched and integrated with other data sets to produce analytical outputs or dashboards that provide transparency into operating performance.  Much of this is reactive, ‘after-the-fact’ analysis of things we would rather do right or optimize the first time, as events occur.  The extension of that thought process was to move those insights to the front of the process, integrate them with the work as it is performed, and create a set of “intelligent applications” that would drive efficiency and effectiveness to different levels than we’ve been able to accomplish before.  Does this eliminate the need for downstream analytics, dashboards, and reporting?  No, for many reasons, but the point is to think about how we can make the future data and analytics environment one that enables insight-driven, orchestrated action.

This is the fifth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

 

Starting with the Consumer (1)

“Just give me all the data” is a request that isn’t unusual in technology.  Whether that is a byproduct of the challenges associated with completing analytics projects, a lack of clear requirements, or something else, these situations raise an immediate issue in practice: what is the quality of the underlying data, and what are we actually trying to do with it?

It’s tempting to start an analytics effort from the data storage and to work our way up the stack to the eventual consumer.  Arguably, this is a central premise in being “data-centric.” While I agree with the importance of data governance and management (the next topic), it doesn’t mean everything is relevant or useful to an end consumer, and too much data very likely just creates management overhead, technical complexity, and information overload.

A thoughtful approach needs to start with identifying the end consumers of the data, their relative priority, and their information and insight needs, and then developing a strategy to deliver those capabilities over time.  In a perfect world, that should leverage a common approach and delivery infrastructure so that it can be provided in iterations and elaborated to include broader data sets and capabilities across domains over time.  The end state should include an integrated data model and a consistent way of delivering data products and analytics services that can be consumed by intelligent applications, agents, and solutions supporting the end consumer.

As an interesting parallel, it is worth noting that OpenAI has looked to converge its reasoning and large language models from the 4.x generation into a single approach for the 5.x release, so that end customers don’t need to be concerned with having selected the “right” model for their inquiry.  It shouldn’t matter to the data consumer.  The engine should be smart enough to leverage the right capabilities based on the nature of the request, and that is what I am suggesting in this regard.
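
A minimal sketch of that idea in Python follows; the routing heuristics and handler names are entirely made up for illustration.  The consumer issues one request, and the engine selects the appropriate underlying capability.

```python
# Stub capabilities; in practice these would call real services.
def run_reasoning_model(q: str) -> str:
    return f"[reasoning] {q}"

def run_retrieval(q: str) -> str:
    return f"[retrieval] {q}"

def run_language_model(q: str) -> str:
    return f"[llm] {q}"

def handle_request(query: str) -> str:
    """Route a request to the right capability so the consumer never picks a model."""
    q = query.lower()
    if any(word in q for word in ("why", "explain", "cause")):
        return run_reasoning_model(query)  # deeper analysis path
    if q.startswith(("list", "show", "what is")):
        return run_retrieval(query)        # fast lookup over governed data products
    return run_language_model(query)       # general-purpose default

print(handle_request("Explain why line 3 output dropped last week"))
```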

 

Data Management and Governance (2)

Without the right level of business ownership, the infrastructure for data and analytics doesn’t really matter, because the value to be obtained from optimizing the technology stack will be limited by the quality of the data itself.

Starting with master data, it is critical to identify and establish data governance and management for the critical, minimum amount of data in each domain (e.g., customer in sales, chart of accounts in finance), and the relationship between those entities in terms of an enterprise data model.
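
As a minimal illustration of what a “critical, minimum” set of governed entities and their cross-domain relationships could look like, here is a Python sketch; the entities and fields are hypothetical, not a recommended model.

```python
from dataclasses import dataclass

# A deliberately minimal master-data sketch: one governed entity per domain
# and an explicit cross-domain relationship. All fields are illustrative.
@dataclass
class Customer:        # owned and governed by the Sales domain
    customer_id: str
    legal_name: str

@dataclass
class Account:         # owned and governed by Finance (chart of accounts)
    account_code: str
    description: str

@dataclass
class Invoice:         # a cross-domain fact linking the governed entities
    invoice_id: str
    customer_id: str   # reference into the Sales domain
    account_code: str  # reference into the Finance domain
    amount: float
```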

Governing data quality has a cost and requires time, depending on the level of tooling and infrastructure in place, and it is important to weigh the value of the expected outcomes in relation to the complexity of the operating environment overall (people, process, and technology combined).

 

From Content to Causation (3)

Finally, with the level of attention given to Generative AI and LLMs, it is important to note the value to be realized when we shift our focus from content to processes and transactions in the interest of understanding causation and influencing business outcomes.

In a manufacturing context, with the increasing level of interplay between digital equipment, digital sensors, robotics, applications, and digital workers, there is a significant opportunity to orchestrate, gather, and analyze increasing volumes of data, and ultimately optimize production capacity, avoid unplanned events, and increase the safety and efficacy of workers on the shop floor.  This requires deliberate and intentional design, with outcomes in mind.

The good news is that technologies are advancing in their ability to analyze large data sets and derive models that represent the characteristics and relationships across the various actors in play, and I believe we’ve only begun to scratch the surface on the potential for value creation in this regard.

 

Summing Up

Pulling back to the overall level, data is critical, but it’s not the endgame.  Designing the future enterprise technology environment is about exposing and delivering the services that enable insightful, orchestrated action on behalf of the consumers of that technology.  That environment will be a combination of applications, AI, and data and analytics, synthesized into one meaningful, seamless experience.  The question is how long it will take us to make that possible.  The sooner we begin the journey of designing that future state with agility, flexibility, and integration in mind, the better.

Having now elaborated the framework and each of the individual dimensions, the remaining two articles will focus on how to approach moving from the current to future state and how to think about the organizational implications on IT.

Up Next: Managing Transition

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/03/2025

The Intelligent Enterprise 2.0 – Evolving Applications

No one builds applications or uses new technology with the intention of making things worse… and yet we have and still do at times.

Why does this occur, time and again, with technology?  The latest thing was supposed to “disrupt” or “transform” everything.  I read something that suggested this was it: the thing we needed to do, that a large percentage of companies were planning, that was going to represent $Y billions of spending two years from now, generating disproportionate efficiency, profitability, and so on.  Two years later (if that), there was something else being discussed, a considerable number of “learnings” from the previous exercise, but the focus was no longer the same… whether that was Windows applications and client/server computing, the internet, enterprise middleware, CRM, Big Data, data lakes, SaaS, PaaS, microservices, mobile applications, converged infrastructure, or public cloud… the list is quite long, and I’m not sure that productivity and the value/cost equation for technology investments are any better in many cases.

The belief that technology can have such a major impact and the degree of continual change involved have always made the work challenging, inspiring, and fun.  That being said, the tendency to rush into the next advance without forming a thoughtful strategy or being deliberate about execution can be perilous in what it often leaves behind, which is generally frustration for the end users/consumers of those solutions and more unrealized benefits and technical debt for an organization.  We have to do better with AI, bringing intelligence into the way we work, not treating it as something separate entirely.  That’s when we will realize the full potential of the capabilities these technologies provide.  In the case of an application portfolio, this is about our evolution to a suite of intelligent applications that fit into the connected ecosystem framework I described earlier in the series.

This is the fourth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Before exploring various scenarios for how we will evolve the application landscape, it’s important to note a couple of overall assumptions:

  • End user needs and understanding should come first, then the capabilities
  • Not every application needs to evolve. There should be a benefit to doing so
  • I believe the vast majority of product/platform providers will eventually provide AI capabilities
  • Providing application services doesn’t mean I have to have a “front-end” in the future
  • Governance is critical, especially to the extent that citizen development is encouraged
  • If we’re not mindful of how many AI apps we deploy, we will cause confusion and productivity loss because of the fragmented experience

 

Purchased Software (1)

The diagram below highlights a few different scenarios for how I believe intelligence will find its way into applications.

In the case of purchased applications, between the market buzz and continuing desire for differentiation, it is extremely likely that a large share of purchased software products and platforms will have some level of “AI” included in the future, whether that is an AI/ML capability leveraging OLTP data that lives within its ecosystem, or something more causal and advanced in nature. 

I believe it is important to delineate between internally generated insights and ones coming as part of a package for several reasons. First, we may not always want to include proprietary data in purchased solutions, especially to the degree they are hosted in the public cloud and we don’t want to expose our internal data to that environment from a security, privacy, or compliance standpoint. Second, we may not want to expose the rules and IP associated with our decisioning and specific business processes to the solution provider. Third, to the degree we maintain these as separate things, we create the flexibility to migrate to a different platform more easily than if we are tightly woven into a specific package.  And, finally, the data ingress required to commingle a larger data set to expand what a package can provide “out of the box” may inflate the operating costs of the platforms unnecessarily (this can definitely be the case with ERP platforms).

The overall assumption is that, rather than requiring custom enhancements of a base product, the goal from an architecture standpoint would be for the application to be able to consume and display information from an external AI service provided by your organization.  This is available today within multiple ERP platforms, as an example.
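
As a minimal sketch of that pattern, assuming a hypothetical internal endpoint and response shape (neither reflects any real product’s API), the packaged application’s only job is to render what the organization’s AI service returns:

```python
import json
from urllib.request import Request, urlopen

# Hypothetical internal AI service; the URL and payload shape are
# assumptions for illustration, not a real API.
INSIGHTS_URL = "https://insights.internal/api/v1/recommendations"

def fetch_insights(order_id: str) -> list[dict]:
    """The packaged application calls out to the organization-owned AI
    service rather than embedding the model or proprietary data."""
    req = Request(f"{INSIGHTS_URL}?order_id={order_id}",
                  headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["insights"]

# The package's only responsibility is to display what comes back.
for insight in fetch_insights("ORD-1001"):
    print(f"{insight['severity']}: {insight['message']}")
```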

The graphic below shows two different migration paths towards a future state where applications have both package-provided and internally provided AI capabilities: one where the package provider moves first, internal capabilities are developed in parallel as a sidecar application and then eventually fully integrated into the platform as a service; and the reverse, where the internal capability is developed first, run in parallel, and then folded into the platform solution.

Custom-Developed Software (2)

In terms of custom software, the challenge is, first, evaluating whether there is value in introducing additional capabilities for the end user and, second, understanding the implications of integrating the capabilities into the application itself versus leaving them separate.

In the event that there is uncertainty about the end user value of having the capability, implementing the insights as a sidecar/standalone application first, then integrating them within the application as a second step, may be the best approach.

If a significant amount of redesign or modernization is required to directly integrate the capabilities, it may make sense either to evaluate market alternatives as a replacement for the internal application or to leave the insights separate entirely.  Similar to purchased products, the insights should be delivered as a service and integrated into the application versus being built as an enhancement, to provide greater flexibility for how they are leveraged and to simplify migrations to a different solution in the future.

The third scenario in the diagram above is meant to reflect a separate insights application that is then folded into the custom application as a service over time, creating a more seamless experience for the end user.

Either way, whether it be a purchased or custom-built solution, the important points are to decouple the insights from the applications to provide flexibility, and to offer both a front-end for users to interact with the applications and a service-based approach, so that an agent acting on behalf of the user, or the system itself, could orchestrate the various capabilities exposed by that application without the need for user intervention.
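
One way to make that dual exposure concrete (all names here are hypothetical): implement each capability as a plain service function, then let both the human-facing front-end and an agent call the same entry point:

```python
def reschedule_delivery(order_id: str, new_date: str) -> dict:
    """A single capability, exposed as a service; both the front-end
    and an agent call this same entry point."""
    # A real implementation would validate and persist the change.
    return {"order_id": order_id, "delivery_date": new_date, "status": "ok"}

# Path 1: a human triggers the capability through the application UI.
def on_reschedule_button_click(order_id: str, picked_date: str) -> None:
    print("UI confirmation:", reschedule_delivery(order_id, picked_date))

# Path 2: an agent invokes the same capability with no user in the loop.
CAPABILITIES = {"reschedule_delivery": reschedule_delivery}

def agent_plan_step(step: dict) -> dict:
    return CAPABILITIES[step["action"]](**step["args"])

on_reschedule_button_click("ORD-1001", "2025-08-15")
print(agent_plan_step({"action": "reschedule_delivery",
                       "args": {"order_id": "ORD-1002",
                                "new_date": "2025-08-20"}}))
```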

 

From Disconnected to Integrated Insights (3)

One of the reasons for separating out these various migration scenarios is to highlight the risk that introducing too many sidecar or special/single-purpose applications could cause significant complexity if not managed and governed carefully.  Insights should serve a process or need, and if the goal is to make a user more productive, effective, or safer, those capabilities should ultimately be used to create more intelligent applications that are easier to use.  To that end, there likely would be value in working through a full product lifecycle when introducing a new capability, to determine whether it is meant to be preserved, integrated with a core application (as a service), or tested and possibly decommissioned once a more integrated capability is available.

 

Summing Up

While the experience of a consumer of technology likely will change and (hopefully) become more intuitive and convenient with the introduction of AI and agents, the need to be thoughtful in how we develop an application architecture strategy, leverage components and services, and put the end user first will remain a priority if we are going to obtain the value of these capabilities at an enterprise level.  Intelligent applications are where we are headed, and our ability to work with an integrated vision of the future will be critical to realizing the benefits available in that world.

The next article will focus on how we should think about the data and analytics environment in the future state.

Up Next: Deconstructing Data-Centricity

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/30/2025

The Intelligent Enterprise 2.0 – Integrating Artificial Intelligence

“If only I could find an article that focused on AI”… said no one, any time recently.

In a perfect world, I don’t want “AI” anything, I want to be able to be more efficient, effective, and competitive.  I want all of my capabilities to be seamlessly folded into the way people work so they become part of the fabric of the future environment.  That is why having an enterprise-level blueprint for the future is so critically important.  Things should fit together seamlessly and they often don’t, especially when we don’t design with integration in mind from the start.  That friction slows us down, costs us more, and makes us less productive than we should be.

This is the third post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Natural Language First (1)

I don’t own an Alexa device, but I have certainly had the experience of talking to someone who does, and heard them say “Alexa, do this…”, then repeat themselves, then repeat themselves again, adjusting their word choice slightly or slowing down what they said, with increasing levels of frustration, until eventually the original thing happens.

These experiences of voice-to-text and natural language processing have been anything but frictionless: quite the opposite, in fact.  With the advent of large language models (LLMs), it’s likely that these kinds of interactions will become considerably easier and more accurate, and that written and spoken input will increasingly become a means to initiate one or more actions from an end user standpoint.

Is there a benefit?  Certainly.  Take the case of a medical care provider directing calls to a centralized number for post-operative and case management follow-ups.  A large volume of calls needs to be processed and there are qualified medical personnel available to handle them on a prioritized basis.  The technology can play the role of a silent listener, both recording key points of the conversation and recommended actions (saving time in documenting the calls) and making contextual observations integrated with the healthcare worker’s application (providing insights) to potentially help address any needs that arise mid-discussion.  The net impact could be a higher volume of calls processed, due to the reduction in time spent documenting calls, and improved quality of care from the additional insights provided to the healthcare professional.  Is this artificial intelligence replacing workers?  No, it is helping them be more productive and effective by integrating into the work they are already doing, reducing the lower-value-add activities and allowing them to focus more on patient care.
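
A simplified sketch of that “silent listener” flow, with a keyword scan standing in for whatever speech-to-text and LLM services would actually be used:

```python
def extract_actions(transcript: str) -> dict:
    """Placeholder for a real speech-to-text and LLM pipeline; a
    keyword scan is used here only to show the shape of the output."""
    notes, actions = [], []
    for line in transcript.splitlines():
        if "pain" in line.lower():
            notes.append(line.strip())
            actions.append("Flag for clinician review: reported pain")
    return {"notes": notes, "recommended_actions": actions}

call = """Patient reports mild pain near the incision site.
Patient confirms the medication schedule is being followed."""

summary = extract_actions(call)
print("Draft documentation:", summary["notes"])
print("Suggested follow-ups:", summary["recommended_actions"])
```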

If natural language processing can be integrated such that comprehension is highly accurate, I can foresee a large amount of end user input being provided this way in the future.  That being said, the mechanics of a process and the associated experience still need to be evaluated so that it doesn’t become as cumbersome as some voice response mechanisms in place today can be, asking you to “say or enter” a response, then confirming what you said back to you, then asking you to confirm that, only to repeat this kind of process multiple times.  No doubt, there is a spreadsheet somewhere indicating savings for organizations using this kind of technology by comparison with having someone answer a phone call.  The problem is that there is a very tedious and unpleasant customer experience on the other side of those savings, and that shouldn’t be the way we design our future environments.

Orchestration is King (2)

Where artificial intelligence becomes powerful is when it pivots from understanding to execution.

Submitting a natural language request, “I would like to…” or “Do the following on my behalf…”, having the underlying engine convert that request to a sequence of actions, and then ultimately executing those requests is where the power of orchestration comes in.
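
In code terms, the pattern might look something like the sketch below, where a stand-in intent parser produces an ordered plan that an orchestrator executes step by step (the actions and the request are invented for illustration):

```python
# Hypothetical action registry; in practice these would be services
# exposed by applications across the enterprise.
def create_purchase_order(item: str, qty: int) -> str:
    return f"PO created for {qty} x {item}"

def notify_approver(po_ref: str) -> str:
    return f"Approver notified about: {po_ref}"

ACTIONS = {"create_purchase_order": create_purchase_order,
           "notify_approver": notify_approver}

def parse_request(text: str) -> list[dict]:
    """Stand-in for an LLM converting 'I would like to...' into an
    ordered plan; a real parser would derive this from the text."""
    return [{"action": "create_purchase_order",
             "args": {"item": "safety gloves", "qty": 50}},
            {"action": "notify_approver", "args": {}}]

def orchestrate(text: str) -> None:
    result = None
    for step in parse_request(text):
        # Feed the prior step's output forward when a step takes no args.
        args = step["args"] or {"po_ref": result}
        result = ACTIONS[step["action"]](**args)
        print(result)

orchestrate("I would like to order 50 pairs of safety gloves for approval")
```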

Back to my earlier article on The Future of IT from March of 2024, I believe we will pivot from organizations needing to create, own, and manage a large percentage of their technology footprint to largely becoming consumers of technologies produced by others, that they configure to enable their business rules and constraints and that they orchestrate to align with their business processes.

Orchestration will exist on four levels in the future:

  • That which is done on behalf of the end user to enable and support their work (e.g., review messages, notifications, and calendar to identify priorities for my workday)
  • That which is done within a given domain to coordinate transaction processing and optimize leverage of various components within a given ecosystem (e.g., new hire onboarding within an HR ecosystem or supplier onboarding within the procurement domain)
  • That which is done across domains to coordinate activity that spans multiple domains (e.g., optimizing production plans coming from an ERP system to align with MES and EAM systems in Manufacturing given execution and maintenance needs)
  • Finally, that which is done within the data and analytics environment to minimize data movement and compute while leveraging the right services to generate a desired outcome (e.g., optimizing cost and minimizing the data footprint by comparison with more monolithic approaches)

Beyond the above, we will also see agents taking action on behalf of other, higher-level agents, in a more hierarchical relationship where a process is decomposed into subtasks executed (ideally in parallel) to serve an overall need.
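
A minimal sketch of that hierarchy (the agents and subtasks are invented for illustration): a higher-level agent decomposes a request into subtasks and fans them out in parallel before assembling the results:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical lower-level agents, each owning one subtask.
def check_inventory(sku: str) -> str:
    return f"{sku}: 240 units on hand"

def check_supplier_lead_time(sku: str) -> str:
    return f"{sku}: 12-day replenishment lead time"

def higher_level_agent(request: str) -> list[str]:
    """Decomposes the request into subtasks, fans them out in
    parallel, and assembles the results to serve the overall need."""
    subtasks = [(check_inventory, "SKU-88"),
                (check_supplier_lead_time, "SKU-88")]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, arg) for fn, arg in subtasks]
        return [f.result() for f in futures]

print(higher_level_agent("Can we fulfill the Q4 forecast for SKU-88?"))
```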

Each of these approaches refers back to the concept of leveraging defined ecosystems and standard integration as discussed in the previous article on the overarching framework.

What is critical is to think about this as a journey towards maturing and exposing organizational capabilities.  If we assume an end user wants to initiate a set of transactions through a verbal command, which is then turned into a process to be orchestrated on their behalf, we need to be able to expose the services required to ultimately enable that request, whether that involves applications, intelligence, data, or some combination of the three.  If we establish the underlying framework to enable this kind of orchestration, however it is initiated, through an application, an agent, or some other mechanism, we could theoretically plug new capabilities into that framework to expand our enterprise-level technology capabilities more and more over time, creating exponential opportunity to make more of our technology investments.  The goal is to break down all the silos and make every capability we have accessible to be orchestrated on behalf of an end user or the organization.

I met with a business partner not that long ago who was a strong advocate for “liberating our data”.  My argument would be that the future of an intelligent enterprise should be to “liberate all of our capabilities”.

Insights, Agents, and Experts (3)

Having focused on orchestration, which is a key capability within agentic solutions, I did want to come back to three roles that I believe AI can fulfill in an enterprise ecosystem of the future:

  • Insights – observations or recommendations meant to inform a user to make them more productive, effective, or safer
  • Agents – applications that orchestrate one or more activities on behalf of or in concert with an end user
  • Experts – applications that act as a reference for learning and development and serve as a representation of the “ideal” state, either within a given domain (e.g., a Procurement “Expert” may have accumulated knowledge of best practices, market data, and internal KPIs and goals that allow end users and applications to interact with it as an interactive knowledge base meant to help optimize performance) or across domains (i.e., extending the role of a domain-based expert to focus on enterprise-level objectives and to help calibrate the goals of individual domains so those overall outcomes are achieved more effectively)

I’m not aware of “Expert”-type capabilities existing for the most part today, but I do believe having more of an autonomous entity that can provide support, guidance, and benchmarking to help optimize the performance of individuals and systems could be a compelling way to leverage AI in the future.
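
To make the distinction between the three roles concrete, here is a sketch of them as minimal interfaces; the method names and the Procurement example are assumptions, not a standard:

```python
from abc import ABC, abstractmethod

class Insight(ABC):
    """Informs: observations or recommendations only; never acts."""
    @abstractmethod
    def observe(self, context: dict) -> str: ...

class Agent(ABC):
    """Acts: orchestrates activities on behalf of an end user."""
    @abstractmethod
    def execute(self, request: str) -> str: ...

class Expert(ABC):
    """Advises: an interactive knowledge base of practices and KPIs."""
    @abstractmethod
    def consult(self, question: str) -> str: ...

class ProcurementExpert(Expert):
    def consult(self, question: str) -> str:
        # Would draw on best practices, market data, and internal KPIs.
        return "Benchmark: top-quartile PO cycle time is under 3 days"

print(ProcurementExpert().consult("How does our PO cycle time compare?"))
```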

AI as a Service (4)

I will address how AI should be integrated into an application portfolio in the next article, but I felt it was important to clarify that, while AI is being discussed as an objective, a product, and an outcome in many cases today, I believe it is important to think of it as a service that lives within, and is developed as part of, a data and analytics capability.  This feels like the right logical association because the insights and capabilities associated with AI are largely data-centric and heavily model-dependent, and they should live separately from the applications meant to express those insights and capabilities to an end user.

In my experience, the complicating factor can arise in how the work is approached and in the capabilities of the leaders charged with AI implementation, something I will address in the seventh article in this series on organizational considerations.

Suffice it to say that I see AI as an application-oriented capability, even though it is heavily dependent on data and the underlying model.  To the extent that a number of data leaders come from a background focused on the storage, optimization, and performance of traditional or even advanced analytics/data science capabilities, they may not be ideal candidates to establish the vision for AI, given it benefits from more of an outside-in (consumer-driven) mindset than an inside-out (data-focused) approach.

Summing Up

With all the attention being given to AI, the main purpose of breaking it down in the manner I have above is to try to think about how we integrate and leverage it within and across an enterprise and, most importantly, not to treat it as a silo or a one-off.  That is not the right way to approach AI moving forward.  It will absolutely become part of the way people work, but it is a capability like many others in technology, and it is critically important that we continue to start with the consumers of technology and how we are making them more productive, effective, safe, and so on.

The next two articles will focus on how we integrate AI into the application and data environments.

Up Next: Evolving Applications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/28/2025