What It Is: Architecture provides the structure, standards, and framework for how technology strategy should be manifested and governed, including its alignment to business strategy, capabilities, and priorities. It should ideally be aligned at every level, from the enterprise to individual delivery projects
Why It Matters: Technology is a significant enabler and competitive differentiator in most organizations. Architecture provides a mental model and structure to ensure that technology delivery is aligned to overall strategies that create value. Lacking architecture discipline is like building a house without a blueprint… it will cost more to build, be less structurally sound, and be expensive to maintain
Key Dimensions
Operating Model – How you organize around and enable the capability
Enable Innovation – How you allow for rapid experimentation and introduction of capabilities
Accelerating Outcomes – How you promote speed-to-market through structured delivery
Optimizing Value/Cost – How you manage complexity, minimize waste, and modernize
Inspiring Practitioners – How you identify skills, motivate, enable, and retain a diverse workforce
Performing in Production – How you promote ongoing reliability, performance, and security
Operating Model
Design the model to provide both enterprise oversight and integrated support with delivery
Ensure there is two-way collaboration, so delivery informs strategy and standards and vice versa
Foster a “healthy” tension between doing things “rapidly” and doing things “right”
Innovate
Identify and thoughtfully evaluate emerging technologies that can provide new capabilities that promote competitive advantage
Engage in experimentation to ensure new capabilities can be productionized and scaled
Accelerate
Develop standards, promote reuse, avoid silos, and reduce complexity to enable rapid delivery
Ensure governance processes promote engagement while not gating or limiting delivery efficacy
Optimize
Identify approaches to enable simplification and modernization to promote cost efficiency
Support benchmarking where appropriate to ensure cost and quality of service is competitive
Inspire
Inform talent strategy ensuring the right skills to support ongoing innovation and modernization
Provide growth paths to enable fair and thoughtful movement across roles in the organization
Perform
Integrate cyber security throughout delivery processes and ensure integrated monitoring for reliability and performance in production, across hosted and cloud-based environments
What It Is: Governance (in a delivery context) is the process for reviewing a portfolio of ongoing work in the interest of enabling risk management, containing avoidable cost, and increasing on-time delivery. PMOs (Project/Program Management Offices) are the mechanism through which governance is typically implemented in many organizations
Why It Matters: As the volume, risk, and complexity in a portfolio increase, issues typically increase disproportionately, leading to cost overruns, missed expectations on scope or schedule (or both), and reduced productivity. PMOs, meant to be a mechanism to mitigate these issues, are often set up or executed poorly, becoming largely administrative rather than value-generating capabilities, which furthers or amplifies any underlying execution issues. In an environment where organizations want to transform while managing a high level of value/cost efficiency, a disciplined and effective governance environment is critical to promoting IT excellence
Key Concepts
There are countless ways to set up an operating model in relation to PMOs and governance, but the culture and intent have to be right, or the rest of what follows will be more difficult
There is a significant difference between a “governing” and an “enabling” PMO in how people perceive the capability itself. While PMOs are meant to accomplish both ends, the priority is enabling successful, on-time, quality delivery, not establishing a “police state”
Where the focus of a PMO becomes “governance” that doesn’t drive engagement and risk management, it can easily become an administrative entity that adds cost without creating value, ultimately undermining the credibility of the work as a whole
The structure of the overall operating model should align to the portfolio of work, scale of the organization, and alignment of customers to ongoing projects and programs
The execution of the governance model may adapt and change year-over-year but, if designed properly, the structure and infrastructure should remain leverageable, regardless of those adjustments
The remainder of this article will introduce a concept for how to think about portfolio composition and then various dimensions to consider in creating an operating model for governance
Framing the Portfolio
In chalking out an approach to this article, I had to consider how to frame the problem in a way that could account for the different ways that IT portfolios are constructed. Certainly, the makeup of work in a small- to medium-size organization is vastly different from that of a global, diversified organization. It would also be different when there are a large number of “enterprise” projects versus a set of highly siloed, customer-specific efforts. To that end, I’m going to introduce a way of thinking about the types of projects that typically make up an IT project portfolio, then an example governance model, the dimensions of which will be discussed in the next section.
The above graphic provides a conceptual way to organize delivery efforts, using the rocks, pebbles, and sand in a jar metaphor that is relatively well known, and also happens to apply to organizing technology delivery.
To establish effective governance, you generally first want to examine and classify delivery projects/programs based on scale (in effort and budget), risk, timeframe, and so on. This is important so as not to apply a “one size fits all” approach to how you track and govern projects that encumbers lower-complexity efforts with the same level of reporting that you would typically have on larger-scale transformation programs.
In the model above, I went with a simple structure of four project types (a simple classification sketch follows the list):
Sand – very low-risk projects, such as a rate change in insurance or a data change in analytics
Pebbles – medium complexity work like incremental enhancements or an Agile sprint
Rocks – something material, like a package implementation, new technology introduction, product upgrade, or new business or technology capability delivery
Boulders – high complexity, multi-year transformation programs, like an ERP implementation where there are multiple material, related projects under one larger delivery umbrella
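To make the classification concrete, here is a minimal sketch of how a portfolio tool might bucket projects; the field names and thresholds are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    budget_usd: float      # approved budget for the effort
    duration_months: int   # planned timeframe
    risk: str              # "low", "medium", or "high"

def classify(p: Project) -> str:
    """Bucket a project into the four types using illustrative thresholds."""
    if p.risk == "high" and p.duration_months > 12:
        return "Boulder"   # multi-year, high-complexity transformation program
    if p.budget_usd > 1_000_000 or p.risk == "high":
        return "Rock"      # material effort: package implementation, upgrade
    if p.budget_usd > 100_000 or p.risk == "medium":
        return "Pebble"    # incremental enhancement or sprint-scale work
    return "Sand"          # very low risk: rate change, data change
```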
The characteristics of these projects, the metrics you would ideally like to gather, and the level of “review” needed on an ongoing basis would vary greatly, which will be explored in the next section.
In a real-world scenario, it is possible that you might want to identify additional sub-categories to the degree it helps inform architecture or delivery governance processes (e.g., security, compliance, modernization, AI-related projects), most of which would likely be specialized kinds of “Pebbles” and “Rocks” in the above model. A governance process can become bloated very quickly, so I am generally a proponent of tuning the model to the work and asking only questions relevant to the type of project being discussed.
What about Agile/SAFe and Product team-oriented environments? In my experience, it is beneficial to segment delivery efforts because, even in product-based environments, there are normally a mix of projects that are more monolithic in nature (i.e., that would align to “Rocks” and “Boulders”). Sprints within iterative projects (for a given product team) would likely align to “Pebbles” in the above model and the question would be how to align the outcome of retrospectives into the overall governance model, which will be addressed below.
So, coming back to the diagram, for the purposes of illustration, the assumption we will use is that the portfolio we’re supporting is a mix of all four project types (the “Portfolio Makeup” at right above), so that we can discuss how the governance can be layered and integrated across the different categories expressed in the model itself.
For the remainder of this article, we will assume the work in the delivery portfolio is divided equally between two business customer groups (A and B), with delivery teams supporting each as represented in the below diagram.
If your individual scenario involves a common customer, the model below could be simplified to one branch of the two represented. If there were multiple groups, it could be scaled horizontally (adding branches for each additional organization); if there were multiple groups across various geographies, it could be scaled by replicating and sizing the entire structure by entity (e.g., work organized by country in a global organization or by operating company in a conglomerate with multiple OpCos) and then adding one additional layer for enterprise or global governance
Key Dimensions
There are many dimensions to consider in establishing an enterprise delivery governance model. The following breakdown is not intended to be exhaustive, but rather to highlight some key concepts that I believe are important to consider when designing the operating model for an IT organization.
General Design Principles
The goal is to enable decisions as close to the delivery as possible to improve efficiency and minimize the amount of “intervention” needed, unless it is a matter of securing additional resources (labor, funding, etc.) or addressing change control issues
The model should leverage a common operating infrastructure to the extent possible, to enable transparency and benchmarking across projects and portfolios. The more consistency and the more “plug and play” the infrastructure for monitoring and governance is, the faster (and more cost-effectively) projects and programs can typically be kicked off and accelerated into execution without having to define these processes independently
Metrics should move from summarized to more detailed as you move from oversight to execution, but the ability to “drill down” should ideally be supported, so there is traceability
Business and IT PMOs versus “One” consolidated model
There is a proverbial question as to whether it is better to have “one” integrated PMO construct, or an IT PMO separate from one that manages business dependencies (whether centralized or distributed)
From my perspective, this is a matter of scale and complexity. For smaller organizations, it may be efficient and practical to run everything through the same process, but as work scales, my inclination would be to separate concerns to keep the process from becoming too cumbersome and leverage the issue and risk management infrastructure to track and manage items relevant to the technology aspects of delivery. There should be linkage and coordination to the extent that parallel organizations exist, but I would generally operate them independently so they can focus on their scope of concerns and be as effective as possible
Portfolio Management Integration
I’m assuming that portfolio management processes would operate “upstream” of the governance process, informing which projects are being slotted and addressing overall resource management, utilization, and release strategy
To the extent that change control in the course of delivery affects a planned release, a reverse dependency exists from the governance process back to the portfolio management process to see if schedule changes necessitate any bumping or reprioritization because of resource contention or deployment issues
IT Operations Integration
The infrastructure used to track and monitor delivery should come via the IT Operations capability, theoretically connecting executive-level delivery metrics at the IT Scorecard to portfolio and project metrics tracked at the execution level
IT Operations should own (or minimally help establish) the standards for reporting across the entire operating model
Participation
IT Operations should facilitate centralized governance processes as represented in the “Unit-level” and “Enterprise” governance processes in the diagram above. Program-level governance for “Boulders” would likely be best run by the delivery leadership accountable for those efforts
Participation should include whoever is needed to engage and resolve 80% (anecdotally) of the issues and risks that could be raised, but be limited to only people who need to be there
Governance processes should never be a “visibility” or “me too” exercise; they are a risk management and issue resolution activity, meant to drive engagement and support for delivery. Notes and decisions can and should be distributed to a broader audience as appropriate so additional stakeholders are informed
In the context of a RACI model (Responsible, Accountable, Consulted, Informed), meetings should include only “R” and “A” parties, who can reach out to extended stakeholders as needed (“C”), but very rarely anyone who would be defined as an “I” only
It is very easy either to overload a meeting to the point it becomes ineffective or to omit the right participants such that it accomplishes nothing, so this is a critical consideration for making a governance model effective
Session Scope and Scheduling
I’ve already addressed participation, but scheduling should consider the pace and criticality of interventions. Said differently, a frequent, recurring process may make sense when there is a significant volume of work, while something more episodic may fit when there are only a limited number of major milestones at which it makes sense to review progress and check in
Where an ongoing process is intended, “Boulders” and “Rocks” should have a standing spot on the agenda given the criticality and risk profiles of those efforts are likely to be high. For “Pebbles”, some form of rotational involvement might make sense, such as including two of the four projects in the example above in every other meeting, or prioritizing any projects that are showing a Yellow or Red overall project health. In the case of the “Sand”, those projects are likely so low risk that, beyond reporting some very basic operating metrics, they should only be included in a governance process when there is an issue that requires intervention or a schedule change that involves potential downstream impacts
Governance Processes
I mentioned this in concert with the example portfolio structure above, but it is important to tailor the governance approach to the type of work so as not to create a cumbersome or bureaucratic environment where delivery teams focus on reporting rather than managing and delivering their work
Compliance and security projects, as an example, are different from AI, modernization, or other types of efforts and should be reviewed with that in mind. To the extent a team is asked to provide a set of information as input to a governance process that doesn’t align cleanly to what they are doing, it becomes a distraction that creates no value. That being said, there should be some core indicators and metrics that are collected regardless of the project type and reviewed consistently (as will be discussed in the next dimension)
The process should be designed and managed by IT Operations so it can be leveraged across an organization. While individual nuances can be applied that are specific to a particular delivery organization, it is important to have consistency to enable enterprise-level benchmarking and avoid the potential biases that can come from teams defining their own standards that could limit transparency and hinder effective risk management
Delivery Health and Metrics
I’ve written separately on Health and Transparency, but minimally every project should maintain a Red/Yellow/Green indicator on Overall Health, and second-level indicators on Schedule, Scope, Cost, Quality, and Resourcing that a project/program manager could supply very easily on an ongoing basis. That data should be collected at a defined interval to enable monitoring and inform governance processes on an ongoing basis, regardless of any other quantitative metrics gathered
Metrics on financials, resourcing, quality, issues, risks, and schedule can vary, but to the extent they can be drawn automatically from defined system(s) of record (e.g., MS Project, financial systems, a time tracking system with defined project coding, defect or incident management tools), the manual intervention required to enable governance should ideally be limited to data that teams should be utilizing on an ongoing basis anyway
In the event that there are multiple systems in place to track ongoing work, the IT Operations team should work with the delivery stakeholders to identify any enterprise-level standards required to normalize them for reporting and governance purposes. To give a specific example, I encountered a situation once where there were five different defect management systems in place across a highly diversified IT organization. In that case, the team developed a standard definition of how defects would be tracked and reported and the individual systems of record were mapped to that definition so that reporting was consistent across the organization
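As a minimal sketch of that normalization idea (the system names and severity scales here are hypothetical), each system of record gets a mapping into one enterprise-standard definition:

```python
# Hypothetical severity scales for two of the source systems; the enterprise
# standard and the mappings below are illustrative assumptions.
SYSTEM_MAPPINGS = {
    "tracker_a": {"P1": "Critical", "P2": "High", "P3": "Medium", "P4": "Low"},
    "tracker_b": {"Blocker": "Critical", "Major": "High",
                  "Minor": "Medium", "Trivial": "Low"},
}

def normalize_severity(system: str, raw_severity: str) -> str:
    """Translate a source-system severity into the enterprise-standard scale."""
    try:
        return SYSTEM_MAPPINGS[system][raw_severity]
    except KeyError:
        raise ValueError(f"No mapping defined for {system}/{raw_severity}")

# normalize_severity("tracker_b", "Major") -> "High"
```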
Change Control
Change is a critical area to monitor in any governance process because of the potential impact it has to resource consumption (labor, financials), customer delivery commitments, and schedule conflicts with other initiatives
Ideally, a governance process should have the right information available to understand the implications of change as and when it is being reviewed, as well as the right stakeholders present to make decisions based on that information
To the extent that schedule, financial, or resource considerations change, information would need to be sent back to the IT Portfolio Management process to remedy any potential issues or disruptions caused by the decisions made. In my experience, this is consistently missed in large delivery portfolios
Issue and Risk Management
Leveraging a common issue and risk management infrastructure not only promotes a consistent way to track and report on these items across delivery efforts, but also creates a repository of “learnings” that can be reviewed and harvested in the interest of evaluating the efficacy of different approaches taken for similar issues/risks and promoting delivery health over time
Dependency/Integrated Plan Management
There are two dimensions to consider when it comes to dependencies. First is whether they exist within a project/program or run from that effort to others in the portfolio or downstream of it. Second is whether the dependency occurs during the course of the effort or is connected to the delivery/deployment of the project
In my experience, teams are very good at covering project- or program-driven dependencies, but there can be major gaps in looking across delivery efforts to account for risks caused when things change. To that end, some form of dependency matrix should exist to identify and track dependencies across delivery efforts, separate from a release calendar that focuses solely on deployment and “T-minus” milestones as projects near deployment
Once these dependencies are being tracked, changes that surface through the governance process can be escalated back to the IT Portfolio Management process and other delivery teams to understand and coordinate any adjustments required
This can include situations where there are sequential dependencies, as an example, where a schedule overrun requires additional commitment from a critical resource needed to kick off or participate in another delivery effort. Without a means to identify these dependencies, the downstream effort may be delayed, or may not have time to explore alternate resourcing options, creating a ripple effect on that downstream delivery. This is part of the argument for leveraging named resource planning (versus exclusively FTE-/role-based) for critical resources when slotting during the portfolio management process
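As a sketch of what a lightweight dependency register could look like (the structure and project names are assumptions for illustration), even a simple upstream/downstream list lets the governance process flag ripple effects when a schedule slips:

```python
# A minimal cross-project dependency register: (upstream, downstream, shared item).
# Project names and dependencies are hypothetical.
DEPENDENCIES = [
    ("crm-upgrade", "data-migration", "named lead architect"),
    ("erp-phase-1", "erp-phase-2", "go-live milestone"),
]

def downstream_impacts(slipped: str) -> list[tuple[str, str]]:
    """List (affected project, shared dependency) pairs when a project slips."""
    return [(down, item) for up, down, item in DEPENDENCIES if up == slipped]

# downstream_impacts("crm-upgrade") -> [("data-migration", "named lead architect")]
```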
Partner/Vendor Management
The IT Operations function should ideally help ensure that partners leverage internal reporting mechanisms or minimally conform to reporting standards and plug into existing governance processes where appropriate to do so
In the case of “Rocks” and “Boulders” that are largely partner-driven, they likely will have a standalone governance process that leverages whatever process the partner has in place, but the goal should be to integrate and leverage whatever enterprise tools and standards exist so that work can be benchmarked across delivery partners and compared to internally-led efforts as well
It is very tempting to treat sourced work differently than projects delivered internal to IT, but who delivers a project should be secondary to whether the project is delivered on time, with quality, and meets its objectives. The standards of excellence should apply regardless of who does the work
Learnings and Best Practices
Part of the potential benefit of a shared infrastructure for executing governance discussions, by comparison with distributing the work, is that it enables you to see patterns in delivery (consistent bottlenecks, risks, and delays) and to leverage those learnings over time to improve delivery quality and predictability
Part of the governance process itself can also include having teams provide a post-mortem on their delivery efforts upon completion (successful or otherwise) so that other teams that participate in the governance process and the broader governance team can leverage those insights as appropriate
Change Management
While change management isn’t an explicit focus of a PMO/governance model, the dependency management surrounding deployment and learnings coming from various deployments should be coordinated with larger change management efforts and inform them on an ongoing basis in the interest of promoting more effective integration of new capabilities
Some Notes on Product Teams/Agile/SAFe Integration
It is tempting to treat product teams as isolated, independent, and discrete pieces of delivery. The issue with moving fully to that concept is that it becomes easy to lose the cross-delivery transparency and benchmarking that surface opportunities to more effectively manage risks and issues outside a given product/delivery team
To that end, part of the design process for the overall governance model should look at how to leverage and/or integrate the tooling for Agile projects with other enterprise project tracking tools as needed, along with integrating learnings from retrospectives with overall delivery improvement processes
Wrapping Up
Overall, there are many considerations that go into establishing an operating model for PMOs and delivery governance at an enterprise level. The most important takeaway is to be deliberate and intentional about what you put in place, keep it light, do everything you can to leverage data that is already available, and keep the balance between the project and the portfolio in mind at all times. The more project-centric you become, the more likely you will end up siloed and inefficient overall, and that will translate into missed dates, increased costs, and wasted utilization.
What It Is: An overall IT Strategy sets direction for an organization, providing a framework for the services IT provides, along with key dimensions and objectives, with flexibility to evolve over time
Why It Matters: With the ever-increasing demand for innovation in a competitive, but cost-conscious environment, a thoughtful strategy accelerates results, reduces cost and risk, and enables sustainability
Key Concepts
Technology strategy always needs to be rooted in a business-enabling approach
It is tempting to over-index on one dimension (e.g., cost management) and sacrifice capability
Excellence in IT is rooted in having business-aligned objectives, with a disciplined approach
This model is organized around five key dimensions, which should be defined and prioritized
A simple IT scorecard could be created using how business partners evaluate each dimension (a minimal sketch follows this list)
This article focuses on delivering IT objectives; IT Excellence focuses on “how to operate” in IT
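As a minimal sketch of that scorecard idea, reusing the five dimensions from the model above (the ratings are invented for illustration):

```python
from statistics import mean

# Hypothetical business-partner ratings (1-5) for each of the five dimensions.
ratings = {
    "Innovate": [4, 3, 5],
    "Accelerate": [3, 4, 4],
    "Optimize": [2, 3, 3],
    "Inspire": [4, 4, 5],
    "Perform": [5, 4, 4],
}

# The scorecard is simply the average rating per dimension.
scorecard = {dim: round(mean(vals), 1) for dim, vals in ratings.items()}
# e.g., {"Innovate": 4.0, "Accelerate": 3.7, "Optimize": 2.7, ...}
```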
Key Dimensions
Innovate – Promote Competitive Advantage
Map to business goals, establish a disciplined innovation process aligned to architecture strategy
What It Is: Excellence is core to creating sustainable value through technology in any organization
Why It Matters: Technology advances so rapidly that most organizations can’t keep up. Balancing agility and discipline, speed and quality, is essential to optimizing the value of IT at the right cost
Key Dimensions
Courageous Leadership
Excellence requires tenacity, agility, flexibility, risk appetite, humility, and discipline
Given leadership sets the tone and direction for everything else, this is critical to get right
Need to be an advocate, champion, and business partner, knowing when to say “no” if needed
Transformative Culture
Remaining competitive in a continually evolving world requires a culture that enables change
Culture is expressed in what people see as much or more than anything they hear in speeches
Core values need to be consistently demonstrated from leaders to individual contributors
Relentless Innovation
Consider what happens in the technology strategy if core solutions are obsolete in 18-24 months
Make disciplined innovation part of the ongoing portfolio strategy to maintain competitive edge
Plan for “urban” renewal so there is minimal need for large scale, disruptive modernization
Operating with Agility
Establish strong business partnerships to respond to changes in portfolio composition/priorities
Create a minimally invasive, highly transparent operating infrastructure to drive efficiencies
Leverage workforce and sourcing strategy to provide the right capabilities at the right cost
Framework-Centric Design
Leverage enterprise architecture to establish a connected enterprise of intelligent ecosystems
Develop standards to enable ongoing integration of best-of-breed technology capabilities
Integrate artificial intelligence in thoughtful ways that scale and provide sustainable value
Delivering at Speed
Create a disciplined and repeatable environment for delivering solutions that can scale
Design with architecture, quality, and security in mind, not as an afterthought
Understand that total cost of ownership is as important as speed-to-market most of the time
What It Is: Manufacturing continues to move rapidly down a continuum from the highly manual to the digital, from disconnected, asynchronous activities to integrated, orchestrated actions, across an ever-expanding and diverse set of components
Why It Matters: Defining a holistic strategy that enables agility and flexibility, that provides structure without limiting innovation, can be a highly complex activity, but one that is well worth the investment given the right strategy can unlock value in multiple ways (production capacity, productivity, improved quality and safety, etc.), particularly in situations where there is a diverse footprint in place
Key Concepts
Design with a framework in mind, that is intended to connect, monitor, track, orchestrate, and optimize performance within and across digital facilities
Establish data ownership, data management, and data governance to enable long-term value
Think of individual facilities as having varied configurations of logically common components
Manage individual components so that they can be relatively commoditized and replaced easily
Understand that the goal is to optimize the overall system, harmonizing workers and equipment
Design the framework to enable adding individual components rapidly, with minimal disruption
Leverage the framework to create an environment that can simulate changes pre-deployment
Define strategies to insulate legacy equipment so that it integrates the same as modern assets
Work with OEMs to facilitate the transition from bolt-on analytics to intelligent equipment
Integrate AR where it provides incremental value without adding complexity or distraction
Reduce complexity with AI, enabling operators to be more productive, effective, and safe
Integrate learning and development content dynamically based on operator experience
Approach
Provide required internal/external connectivity, infrastructure, and monitoring across locations
Identify connected components across facilities by function (equipment, devices, sensors, etc.)
Define relevant personas and capabilities to enable digital workers (shop floor to facility leaders)
Architect the environment to treat individual components as actors in a connected ecosystem
Identify integration standards and relevant characteristics per component to enable analytics
Design facility data solutions to allow for structured and unstructured data aligned to the cloud
Establish an infrastructure for orchestration that can coordinate activity across connected actors
Gather, analyze, and optimize processes given performance data and operating characteristics
Analyze observations centrally to leverage insights and opportunities across similar facilities
Extend the boundaries of orchestration to incorporate customers, suppliers, and partners
What It Is: Workforce and Sourcing Strategy is the long-term approach that an organization uses to provide the necessary skills, internal and external, to enable capabilities to deliver on business commitments and support the current and future technology footprint
Why It Matters: Having a deliberate and thoughtful strategy not only creates an agile and responsive workforce to meet ongoing and variable business demand, but also does so at the right cost. Where a defined strategy is not in place and being governed, there is very likely cost optimization opportunity
Key Concepts
Business and technology needs fluctuate. A strategy helps mitigate the cost impact of change
Leverage a competency model internally and externally to benchmark roles, capacity, and costs
Generally speaking, it’s better to align variable capacity to areas of variable demand
Benchmark internal cost of service against best-in-class providers, make adjustments as needed
Understand that not everything needs differentiated service; “keep the lights on” is valid in some cases
Invest in areas where technology creates competitive advantage and IP, outsource elsewhere
Actively manage and govern talent development and performance to optimize productivity
Never assume HC = FTE. Use named resources for capacity planning of critical roles vs FTEs
Source where technology is emerging and immature to facilitate experiments and early learning
It is a reasonable strategy to engage partners in simplification efforts through mutual incentives
Never assume shifting sourcing to captives for arbitrage benefits is a 1:1 FTE exchange, it isn’t
Be mindful in how you manage overall tenure. Motivated inexperience introduces risk and cost
Leverage role-based capacity agreements to shift contract labor costs to a defined model
Scrutinize contracting heavily to avoid inflated cost. Convert or hire for longer-term needs
Establish consistent contract language that aligns to service delivery roles and expectations
Define primary and secondary partners for individual sourcing needs, manage them consistently
Negotiate aggressively but fairly, “partnerships” produce more value than a “vendor” mentality
Benchmark and leverage consistent performance metrics across internal and external partners
Apply vendor management and governance processes to captives the same as external partners
Approach
Understand Current State – Benchmark capacity by role across sources of staff, including cost
Determine What You Need – Evaluate business and industry trends, do the same for technology
Define Sourcing Approach – Identify critical skills to retain and source, and where to get them
Refine Talent Strategy – Clarify gaps between current and future IT staffing, skills and capacity
Develop Transition Plan – Plan change to talent pool and make explicit sourcing decisions
What It Is: With the advent of AI, the question is how to integrate it effectively at an enterprise level. The long-term view should be a synthesis of applications, AI, and data, working in harmony, providing integrated capabilities that maximize effectiveness and productivity for the end users of technology
Why It Matters: Much like the .com era, there are lofty expectations of what AI can deliver without a fundamental strategy for how those capabilities will be integrated and leveraged at scale. Selecting the right approach that balances tactical gains with strategic infrastructure will be critical to optimizing and delivering differentiated value rapidly and consistently in a highly competitive business environment
Key Concepts
AI is a capability, not an end in itself. User-centered design is more important than ever
Resist the temptation to treat AI as a one-off and integrate it with existing portfolio processes
The end goal is to expose and harness all of an organization’s capabilities in a consistent way
Agentic solutions will become much more mainstream, along with orchestration of processes
The more agentic solutions become standard, the less application-specific front ends are needed
Natural language input will become common to reduce manual entry in various processes
We will shift from content via LLMs to optimizing processes and transactions via causal models
AI should help personalize solutions, reduce complexity, and improve productivity
Only a limited number of sidecar applications can be deployed before overwhelming end users
The less standardized the environment is, the longer it will take to achieve enterprise AI benefits
As with any transformation, don’t try to boil the ocean, have a strategy and migrate over time
Approach
Ensure architecture governance is in place quickly to avoid accruing significant technical debt
Design towards an enterprise architecture framework to enable rapid scaling and deployment
Migrate towards domain-based ecosystems to facilitate evolution and rapid scaling of capability
Enable rapid, disciplined, and governed experiments to explore tools and solution approaches
Place heavy emphasis on integration standards as a means to deploy new AI services with speed
Develop a conceptual “template” for how AI capabilities will be integrated to facilitate reuse
Organize AI services into insights (inform), agents (assist), and experts (benchmark, train, act)
Separate internal from package-provided AI services to provide agility and manage overall costs
Evaluate internal and external solutions by their ability to integrate services and enable agents
Reinforce data management and data governance processes to enable quality insights
Define roles and expectations for those in the organization who develop, use, and manage AI
What It Is: IT Value/Cost Optimization is the process of adjusting IT spend relative to the value being created through IT services in the interest of finding the optimal balance for an organization
Why It Matters: When organizations face financial challenges, there is often a desire to reduce expense. The challenge is that the activity is often managed as a cost cutting exercise focused on direct labor, without regard to other, less disruptive opportunities that would exist if a more holistic approach were taken
Key Concepts
Optimization should be a continual activity. Doing it periodically increases negative impacts
The activity requires a clear understanding of costs (direct and indirect) and value being created
Where spending isn’t governed, it is likely inflated and suboptimized
The scale and complexity of a technology footprint has a direct relationship to labor cost
Direct labor should be the last lever adjusted. It represents the potential to create value
Every $1MM you save in other ways is 8 headcount (at $125k each) you could retain to perform work
If you can eliminate >5% of your workforce for performance, you aren’t managing it effectively
In the event labor ever becomes “numbers on a spreadsheet”, ask someone else to manage cost
In my experience, people would take other levers more seriously if their headcount was in play
Approach
IT Operations – Provide critical, minimum data to enable benchmarking and governance
Portfolio Management – Ensure effective prioritization, slotting, and resource utilization
Release Strategy – Have a disciplined approach to minimize operating disruptions and optimize utilization
Enterprise Architecture – Establish a capability to develop blueprints, simplify, and standardize
Applications – Rationalize on an ongoing basis to manage costs and promote speed-to-market
Data – Promote interoperability, minimize data movement, and avoid monolithic solutions
Artificial Intelligence – Establish a disciplined and governed process for AI introduction and use
Technologies – Minimize duplication and manage end-of-life to avoid disruptive costs
Infrastructure – Unless there is a legal or compliance-related reason, shift to external providers
Cloud – Develop a FinOps capability to review and adjust resource consumption to avoid waste
Licensing – Establish an ongoing process to review, optimize, and manage license transitions
Modernization – Actively modernize solutions to avoid episodic efforts that increase costs
Services – Define a workforce and sourcing strategy, govern relationships, negotiate effectively
What It Is: App Rationalization is the process of reducing redundancies that exist in an application portfolio in the interest of reducing complexity, cost of ownership, and improving speed-to-market.
Why It Matters: Organizations typically spend anywhere from 50-80% of their IT budget maintaining and supporting systems in place. That limits investment in innovation and competitive advantage.
Key Concepts
Understand that rationalization is more about change management than technology
Ensure there are healthy relationships in place and strong leadership support for the work
Focus on critical areas of the portfolio that drive cost. Don’t boil the ocean
Don’t worry about creating the perfect infrastructure day one. Clean that up along the way
Start with how your business operates and simplify and standardize processes first
Align your future blueprint as cleanly to your desired operating footprint as possible
Consider your Artificial Intelligence (AI), cloud, and security strategies in the future vision
Simplification can come through reducing both unique applications and instances of applications
Address how systems will be supported and enhanced moving forward in your design
Explicitly include milestones for decommissioning in your roadmap. Don’t let that go undone
Expect the work to continually evolve and adapt. Plan for change and adjust responsively
Include rationalization as part of your ongoing portfolio strategy so it’s not a one-time event
Approach
Align – Obtain organizational support critical to defining vision, scope, and facilitating change
Understand – Gather an understanding of the current state and alignment to operations
Evaluate – Leverage something like the Gartner TIME model to evaluate portfolio quality and fit
Strategize – Develop a future state blueprint, cost-benefit analysis (CBA), and proposed changes to the environment
Socialize – Obtain feedback, iterate, clarify the vision, and finalize the initial roadmap
Mobilize – Launch first wave of delivery, realign ongoing work as required
Execute – Deliver on 30-, 60-, and 90-day goals, governing and adjusting the approach as you go
Ok, I have the scope identified, but what do I do now?
Having recently written about the intangibles and scope associated with simplification, the focus of this article is the process of rationalization itself, with an eye towards reducing complexity and operating cost.
The next sections will break down the steps in the process flow above, highlighting various dimensions and potential issues that can occur throughout a rationalization effort. I will focus primarily on the first three steps (i.e., the analysis and solutioning), given that is where the bulk of the work occurs. The last two steps are largely dedicated to socializing and executing on the plan, which is more standard delivery and governance work. I will then provide a conceptual manufacturing technology example to illustrate some ways the exercise could play out in a more tangible way.
Understand
The first step of the process is about getting a thorough understanding of the footprint in place to enable reasonable analysis and solutioning. This does not need to be exhaustive and can be prioritized based on the scope and complexity of the environment.
Clarify Ownership
What’s Involved:
Identifying technology owners of sets of applications, however they are organized (hereinafter referred to as portfolio owners)
Identifying primary business customers for those applications (business owners)
Identifying specific individuals who have responsibility for each application (application owners)
Portfolio and application owners can be the same individual but, in larger organizations, they likely won’t be, given the scope of an individual portfolio and the ways it is managed
Why It Matters:
Subject matter knowledge will be needed relative to applications and the portfolios in which they are organized, the value they provide, their alignment to business needs, etc.
Opportunities will need to be discussed and decisions made related to ongoing work and the future of the footprint, which will require involvement of these stakeholders over time
Key Considerations:
Depending on the size of the organization and scope of various portfolios in place, it may be difficult to engage the right leaders in the process, in which case a designate should be identified who can serve as a day-to-day representative of the larger organization and who is empowered to provide input and make recommendations on behalf of their respective area.
In these cases, a separate process step will need to be added to socialize and confirm the outcomes of the process with the ultimate owners of the applications to ensure alignment, regardless of the designated responsibilities of the people participating in the process itself. Given the criticality of simplification work, there could be substantial risk in making broad assumptions related to organizational support and alignment, so some form of additional checkpoints would be a good idea in nearly all cases where this occurs
Inventory Applications
What’s Involved:
Working with Portfolio Owners to identify the assets across the organization and create as much transparency as possible into the current state environment
Why It Matters:
There are two things that should come from this activity: an improved understanding of what is in place, and an intangible understanding of the volatility, variability, and level of opacity in the environment itself. In the case of the latter point, if I find that I have substantially more applications across a set of facilities or operating units than I expected, and those vary greatly by business, it should inform how I think about the future state environment and the governance model I want in place to manage that proliferation in the future. This is related to my point on being a “historian” in the process in the previous article on managing the intangibles of the process.
Key Considerations:
Catalogue the unique applications in production, providing a general description of what they do, users of the technology (business units, individual facilities, customer segments/groups), primary business function(s)/capabilities provided, criticality of the solution (e.g., whether it is a mission-critical/“core” or supporting/“fringe” application), teams that support the application, number of application instances (see the next point), key owners (in line with the roles mentioned above), mapping to financials (the next point after this), mapping to ongoing delivery efforts (also described below), and any other critical considerations where appropriate (e.g., on a technology platform that is near end of life)
In concert with the above, identify the number of application instances in production, specifically the number of different configurations of a base application running on separate infrastructure, supporting various operations or facilities with unique rules and processes, or anything that would be akin to a “copy-paste-modify” version of a production application. This is critical to understand and differentiate, because the simplification process needs to consider reducing these instance counts in the interest of streamlining the future state. That simplification effort can be a separate and time-consuming activity on top of reducing the number of unique applications as a whole
Whether to include hosting and the technology stack of a given application is a key consideration in the inventory process itself. In general, I would try to avoid going too deep, too early in the rationalization process, because these kinds of issues will surface during the analysis effort anyway, and putting them in the first step of the process could slow down the work by documenting details on applications that aren’t ultimately the top priority for simplification
Understand Financials
What’s Involved:
Providing a directionally accurate understanding of the direct and indirect costs attributable to individual applications across the portfolio
Providing a lens on the expected cost of any discretionary projects targeted at enhancing, replacing, or modernizing individual applications (to the extent there is work identified)
Why It Matters:
Simplification is done primarily to save or redistribute cost and accelerate delivery and innovation. If you don’t understand the cost associated with your footprint, it will be difficult, if not impossible, to size the relative benefit of different changes you might make and, as such, the financial model is fundamental to the eventual business case meant to come as an output of the exercise
Key Considerations:
Direct cost related to dedicated teams, licensing, and hosted solutions can be relatively straightforward and easy to gather, along with the estimated cost of any planned initiatives for a specific application
Direct cost can be more difficult to ascertain when a team or third party supports a set of applications, in which case some form of cost apportionment may be needed to estimate individual application costs (e.g., allocate cost based on the number of production tickets closed by application within a portfolio of systems; a simple sketch follows this list)
Indirect expenses related to infrastructure and security in particular can be difficult to understand depending on the hosting model (e.g., dedicated versus shared EC2 instances in the cloud versus on premises, managed hardware) and how costs for hardware, network, cyber security tools, and other shared services are allocated and tracked back to the portfolio
As I mentioned in my article on the intangibles associated with rationalization, directional accuracy is more important than precision in this activity, because the goal at this early stage of the process is to identify redundancies where there is material cost savings potential, not to build out a precise cost allocation for infrastructure in the current state
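A minimal sketch of the ticket-based apportionment mentioned above (the figures and application names are invented; any allocation driver with credible data would work the same way):

```python
def apportion_cost(team_cost: float, tickets_by_app: dict[str, int]) -> dict[str, float]:
    """Allocate a shared team's cost to applications by share of tickets closed."""
    total = sum(tickets_by_app.values())
    return {app: team_cost * n / total for app, n in tickets_by_app.items()}

# A hypothetical $600k support team covering three applications:
# apportion_cost(600_000, {"billing": 300, "claims": 200, "portal": 100})
# -> {"billing": 300000.0, "claims": 200000.0, "portal": 100000.0}
```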
Evaluate Cloud Strategy
What’s Involved:
Clarifying the intended direction in terms of enterprise hosting and the cloud overall, along with the approach being taken where cloud migration is in progress or planned at some level moving forward
Why It Matters:
Hosting costs change when moving from a hosted to a cloud-based environment, which could affect the ultimate business case, depending on the level of change planned in the footprint (and associated hosting assumptions)
Key Considerations:
There is a major difference in costs for hosting depending on whether you are planning to use a lift-and-shift, modernize, or “containerize”-type of approach to the cloud
Not all applications will be suitable for the last approach in particular, and it’s important to understand whether this will play into your application strategy as you are evaluating the portfolio and identifying future alternatives
If there is no major shift planned (e.g., because the footprint is already cloud-hosted and modernized or containerized), it could be that this is a non-issue, but likely it does need to be considered somewhere in the process, minimally from a risk management and business case development standpoint
Evaluate AI Strategy
What’s Involved:
Understanding the role AI applications and agentic AI solutions are meant to play as core components of the future application portfolio and enterprise footprint, along with any primary touchpoints for these capabilities as appropriate
Understanding any high opportunity areas from an end user standpoint where AI could aid in improving productivity and effectiveness
Why It Matters:
Any longer-term strategy for enterprise technology today needs to contemplate and articulate how AI is meant to integrate and align to what is going to be in place, particularly if agentic AI is meant to be included as part of the future state, otherwise you risk having to iterate your entire blueprint relatively quickly, which could lead to issues in stakeholder confidence and momentum
Key Considerations:
If agentic AI is meant to be a material component in the future state, the evaluation process for targeted applications should include their API model and whether they are effectively “open” platforms that can be orchestrated and remotely operated as part of an agentic flow. The larger the overall scope of the strategy and the longer the implementation is expected to take, the more important this consideration becomes in the analysis process itself, because orchestration is going to become more critical in large enterprises over time under almost any circumstances
Understanding the role AI is anticipated to play is also important to the extent that it could facilitate transition in the implementation process itself, particularly if it becomes an integrated part of the end user presentment or the education and training environment. This could both help reduce implementation costs and accelerate deployment and adoption, depending on how AI is (or isn’t) leveraged
Assess Ongoing Work
What’s Involved:
The final aspect to understanding the current state is obtaining a snapshot of the ongoing delivery portfolio and upcoming pipeline
Why It Matters:
Understanding anticipated changes, enhancements, replacements, or retirements and the associated investments is important to evaluating volatility and also determining the financial consequences of decisions made as part of the strategy
Key Considerations:
Gather a list of active and upcoming projects, applications in scope, the scope of work, business criticality, any significant associated risk, relative cost, and anticipated benefits
Review the list with owners identified in the initial step with a mindset of “go”, “stop”, and “pause” given the desire to simplify overall. It may be the case that some inflight work needs to be completed and handled as sunk cost, but there could be cost avoidance opportunity early on that can help fund more beneficial changes that improve the health of the footprint overall
Evaluate
With a firm understanding of the environment and a chosen set of applications to be explored further (which could be everything), the process pivots to assessing what is in place and identifying opportunities to simplify.
Assess Portfolio Quality
What’s Involved:
Work with business, portfolio, and application owners to apply a methodology, like Gartner’s TIME model, to evaluate the quality of solutions in place. In general, this would involve looking at both business and technology fit in the interest of differentiating what does and doesn’t work, what needs to change, and what requirements are critical to the future state
Why It Matters:
Rationalization efforts can be conducted over the course of months or weeks, depending on the scope and goals of the activity. Consequently, the level of detail that can be considered in the analysis will change based on the time and resources available to support the effort but, regardless of the time and effort available, it is important for there to be a fact-based foundation to support the opportunities identified, even if only at an anecdotal level
Key Considerations:
There are generally two levels of this kind of analysis: a higher-level activity like the TIME model, which provides more of a directional perspective on the underlying applications, and a more detailed gap analysis-type activity that evaluates features and functionality in the interest of vetting alternatives and identifying gaps that may need to be addressed in the rationalization process itself. The more detailed activity would typically be performed as part of an implementation process and not upstream in the strategy definition phase. The gap analysis could be performed leveraging a standard package evaluation process (replacing external packages with the applications in place), assuming one exists within the organization
The technical criteria for the TIME model evaluation should include things like AI readiness, platform strategy, underlying technical stack, and other key dimensions based on how critical those individual elements are, as surfaced during the initial stage of the work
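As a directional sketch of how a TIME-style evaluation might be scored (the 1-5 scale and threshold are assumptions; the model maps business and technical fit into Tolerate, Invest, Migrate, Eliminate):

```python
def time_quadrant(business_fit: float, technical_fit: float,
                  threshold: float = 3.0) -> str:
    """Map fit scores (assumed 1-5 scale) to a TIME quadrant."""
    if business_fit >= threshold:
        # Valuable to the business: keep investing, or migrate/modernize it.
        return "Invest" if technical_fit >= threshold else "Migrate"
    # Low business fit: tolerate if technically sound, otherwise eliminate.
    return "Tolerate" if technical_fit >= threshold else "Eliminate"
```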
Identify Redundancies
What’s Involved:
Assuming some level of functional categories and application descriptions were identified during the data gathering phase of the work, it should be relatively straightforward to identify potential redundancies that exist in the environment
Why It Matters:
Redundancies create opportunities for simplification, but also for improved capabilities. The simplification process doesn’t necessarily mean that those having an application replaced will be “giving up” existing capabilities. It could be the case that the solution to which a given user group is being migrated provides more capabilities than what they currently have in place
Key Considerations:
Not all groups within a large organization have equal means to invest in systems capabilities. There can be situations where migrating smaller entities to solutions in use by larger and more well-funded pieces of the organization allows them to leverage new functionality not available in what they have
In the situation where organizations move from independent to shared/leveraged solutions, it is important to consider not only how the shift will affect cost allocation, but also the prioritization and management of those platforms post-implementation. A concern can often arise in these scenarios that either costs will be apportioned in a way that burdens smaller entities with more funding than they can sustain, or that their needs may not be prioritized effectively once they are in a shared environment with others. Working through these mechanics is a critical aspect of making simplification work at an enterprise level. There needs to be a win-win environment to the maximum extent possible or it will be difficult to incent teams to move in a more common direction
Surface Opportunities
What’s Involved:
With redundancies identified, costs aligned, and some level of application quality/fit understood, it should be possible to look for opportunities to replace and retire solutions that either aren’t in use/creating value or that don’t provide the same level of capability in relation to cost as others in the environment
Why It Matters:
The goal of rationalization is to reduce complexity and cost while making it easier and faster to deliver capabilities moving forward. Where cost is consumed in maintaining solutions that are redundant or that don’t create value, they hamper efforts to innovate and create competitive advantage, which is the overall goal of this kind of effort
Key Considerations:
Generally speaking, the opportunities to simplify will be identified at a high-level during the analysis phase of a rationalization effort. The detailed/feature-level analysis of individual solutions is an important thing to include in the planning of subsequent design and implementation work to surface critical gaps, integration points, and workflow dependencies between systems to facilitate transition to the desired future state environment
Strategize
Having completed the Analysis effort and surfaced opportunities to simplify the footprint, the process shifts to identifying the target future state environment and mapping out the approach to transition.
Define Future Blueprint(s)
What’s Involved:
Assuming some representation of the current state environment has been created as a byproduct of the first two steps of the process, the goal of this activity is to define the conceptual end state footprint for the organization
To the extent that there are corporate shared services, multiple business/commercial entities, operating units, facilities, locations, etc. to be considered, the blueprint should show the simplified application landscape post-transition, organized by operating entity, where one or more operating units could be mapped into a common element of the future blueprint (e.g., organized by facility type versus individual locations, or lower-complexity business units versus larger entities)
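As one way to picture this, a simple mapping of operating entities to a small set of blueprint archetypes might look like the sketch below; the entity and blueprint names echo the example later in the article but are illustrative only.

```python
# Hypothetical blueprint mapping: operating units are mapped to a small set of
# blueprint archetypes rather than each keeping a bespoke footprint.
from collections import defaultdict

blueprint_assignment = {
    "Business Operation 1": "Operating Blueprint A",
    "Business Operation 2": "Operating Blueprint B",
    "Business Operation 3": "Operating Blueprint B",
    "Facility 1": "Facility Blueprint A",
    "Facility 2": "Facility Blueprint A",
    "Facility 3": "Facility Blueprint B",
    "Facility 4": "Facility Blueprint B",
}

# Invert the mapping to see which entities share each archetype.
archetypes = defaultdict(list)
for entity, blueprint in blueprint_assignment.items():
    archetypes[blueprint].append(entity)
for blueprint, entities in sorted(archetypes.items()):
    print(blueprint, "<-", ", ".join(entities))
```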
Why It Matters:
A relatively clear, conceptual representation of the future state environment is needed to facilitate discussion and understanding of the difference between the current environment and the intended future state, and the value of the changes being proposed
Key Considerations:
Depending on the breadth and depth of the organization itself, the representation of the blueprint may need to be defined at multiple levels
The approach to organizing the blueprint itself could also provide insight into how the implementation approach and roadmap are constructed, as well as how stakeholders are identified and aligned to those efforts
Map Solutions
What’s Involved:
With opportunities identified and a future state operating blueprint, the next step is to map retained solutions into the future state blueprint and project the future run rate of the application footprint
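A minimal sketch of the run-rate projection portion, assuming retained/retired decisions and annual costs are known; the figures are illustrative, not drawn from the article’s example.

```python
# Hypothetical run-rate projection: sum current annual costs, then the costs
# of only the retained applications, to estimate the post-transition run rate.
portfolio = {
    "App A": {"annual_cost": 1_200_000, "retained": True},
    "App B": {"annual_cost":   500_000, "retained": False},
    "App C": {"annual_cost":   300_000, "retained": False},
    "App D": {"annual_cost":   450_000, "retained": True},
}

current = sum(a["annual_cost"] for a in portfolio.values())
future = sum(a["annual_cost"] for a in portfolio.values() if a["retained"])
print(f"Current run rate: ${current:,}/yr")
print(f"Projected run rate: ${future:,}/yr "
      f"({(current - future) / current:.0%} reduction)")
```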
Why It Matters:
The output of this activity will both provide a vision of the end state and act as input to socializing the vision and approach with key stakeholders in the interest of moving the effort forward
Key Considerations:
There is a bit of art and science when it comes to rationalization, because too much standardization could limit agility if not managed thoughtfully. I will provide an example of this in the scenario following the process, but a simple version is to think about whether maintaining separate instances of a core application is appropriate in situations where speed to market matters or where individual operating units need greater autonomy than they would have operating off a single, shared instance of one application
I mentioned in the article on the intangibles of simplification that it is a good idea to take an aggressive approach to the future state, because not everything is likely to work in practice, and the entire goal of the exercise is to optimize as much as possible in terms of value in relation to cost
From a financial standpoint, it is important to be conservative in assumptions related to changes in operating expense. That should manifest itself in allowing for contingency in the implementation schedule and costs, as well as in assuming that the decommissioning of solutions will take longer than expected (it most likely will). It is far better to be ahead of a conservative plan than perpetually behind an overly aggressive one
Define Change Strategy
What’s Involved:
With the current and future blueprints identified, the next step would be to identify the “building blocks” (in conceptual terms) of the eventual roadmap. This is essentially a combination of three things: application instances to be consolidated, replacement of one application by another, and retirement of applications that are either unused or that don’t create enough value to continue supporting them
Opportunities can also be segregated into big bets that affect core systems and involve material cost/change, those that are more operational and less substantial in nature, and those that are essentially cleanup of what exists. The segregation of opportunities can help inform the ultimate roadmap to be created, the governance model established, and the program management approach to delivery (e.g., how different workstreams are organized and managed)
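One way to represent these building blocks, shown purely as a sketch with hypothetical entries, is a small structure capturing each transition’s action type and tier so blocks can later be grouped into workstreams and slotted into a roadmap.

```python
# Hypothetical representation of roadmap "building blocks": each transition
# records its action type and tier so blocks can be grouped into workstreams.
from dataclasses import dataclass

@dataclass
class TransitionBlock:
    name: str
    action: str  # "consolidate", "replace", or "retire"
    tier: str    # "big bet", "operational", or "cleanup"

blocks = [
    TransitionBlock("Consolidate ERP onto Application A", "consolidate", "big bet"),
    TransitionBlock("Replace facility MES with Application V", "replace", "big bet"),
    TransitionBlock("Standardize HR on Application S", "replace", "operational"),
    TransitionBlock("Sunset low-value Application H", "retire", "cleanup"),
]

# Group by tier to inform workstream organization and governance cadence.
for tier in ("big bet", "operational", "cleanup"):
    names = [b.name for b in blocks if b.tier == tier]
    print(f"{tier}: {names}")
```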
Why It Matters:
Roadmaps are generally fluid beyond a near-term window because things inevitably occur during implementation and business priorities change. Given there can be a lot of socializing of a roadmap and iteration involved in strategic planning, I believe it’s a good idea to separate the individual transitions from the overall roadmap itself, which can be composed in various ways, depending on how you ultimately want to tackle the strategy. At a conceptual level, you can think of it as a set of Post-it notes representing individual efforts that can be organized in a number of legitimate ways with different cost, benefit, and risk profiles
Key Considerations:
Individual transitions can be assessed in terms of risk, business implications, priority, relative cost and benefits, and so forth as a means to help determine slotting in the overall roadmap for implementation
Develop Roadmap
What’s Involved:
With the individual building blocks for transition identified, the final step in the strategy definition stage is to develop one or more roadmaps that assemble those blocks, exploring as many implementation strategies as appropriate
Why It Matters:
The roadmap is a critical artifact in the formation of an implementation plan, though it will generally change quite a bit over time, depending on the time horizon, scope, complexity, and scale of the program itself
Key Considerations:
Ensure that all work is included and represented, including any foundational or kickoff-related activities that will serve the program as a whole (e.g., establishing a governance model, PMO, etc.)
Include retirements (not just new solution deployments) in the roadmap, minimally as milestones, so they are planned and accounted for. In my experience, this is often missed in new system deployments
Depending on the scale of implementation, explore various business scenarios (e.g., low-risk work up front, big bets first, balanced approaches, etc.) to ascertain the relative cost, benefit, implementation requirements, and risks of each and determine the “best case” scenario to be socialized; a simple sketch of this kind of comparison follows this list
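Here is a minimal sketch of that kind of scenario comparison, assuming each block takes one planning period and its benefit accrues in every period after it completes; all names and figures are illustrative placeholders.

```python
# Hypothetical scenario comparison for roadmap sequencing. Assumes each block
# takes one planning period and its benefit accrues in every period after it
# completes. All names and figures are illustrative placeholders.
blocks = {
    "ERP consolidation": {"benefit": 8.0, "risk": "high"},
    "CRM consolidation": {"benefit": 4.0, "risk": "medium"},
    "Retire App H":      {"benefit": 0.5, "risk": "low"},
}

scenarios = {
    "big bets first": ["ERP consolidation", "CRM consolidation", "Retire App H"],
    "low risk first": ["Retire App H", "CRM consolidation", "ERP consolidation"],
}

def realized_benefit(ordering, horizon=4):
    # Block i finishes at the end of period i + 1 and pays out once per
    # remaining period within the horizon.
    return sum(blocks[name]["benefit"] * max(horizon - (i + 1), 0)
               for i, name in enumerate(ordering))

for name, ordering in scenarios.items():
    print(f"{name}: benefit realized over 4 periods = {realized_benefit(ordering)}, "
          f"opening risk = {blocks[ordering[0]]['risk']}")
```

Run as written, the two orderings realize very different benefit within the horizon and carry very different risk up front, which is precisely the trade-off the scenario exercise is meant to surface.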
Socialize and Mobilize
Important footnote: I’ve generally assumed that the process above would be IT-led, with a level of ongoing business participation, given much of the data gathering and analysis can be performed within IT itself. That isn’t to say that the solutioning and development of a roadmap need to be created and socialized in the sequential manner outlined here. It could also be the case that opportunities are surfaced out of the evaluation effort and the strategy and socialization are then handled through a collaborative/workshop process; it depends on the scope of the exercise and the nature of the organization.
With the alternatives and future state recommendations prepared, the remaining steps of the process are fairly standard in terms of socializing and iterating the vision and roadmap, establishing a governance model, and launching the work with clear goals for 30, 60, and 90 days in mind. As part of the ongoing governance process, it is assumed that some level of iteration of the overall roadmap and goals will be performed based on learnings gathered early in the implementation process.
Putting Ideas into Practice – An Example
The Conceptual Example – Manufacturing
If you’ve made it this far, I wanted to move beyond the theory to a conceptual scenario to help illustrate various situations that could occur in the course of a simplification exercise. The example diagram represents the flow of data across the three initial steps of the process outlined above. The data is logically consistent and traceable across steps in the process, should that be helpful in understanding the situation. I limited the number of application types (lower left corner of the diagram) so I could explore multiple scenarios without making the data too overwhelming. In practice, there would be multiple domains and many components in each domain to be considered (e.g., HR is a domain with many components, represented as a single application category here), depending on the level of granularity being used for the rationalization effort.
From here, I’ll provide some observations on each major step in the hopes of making some example outcomes clear. I’m not covering the financial analysis given it would make things even more complicated to represent, but for the sake of argument, we can assume that there is financial opportunity associated with reducing the number of applications and instances in place
Notes on the Current State
Some observations on the current state based on the data collected:
The organization has a limited set of corporate applications for Finance, Procurement, and HR, but most of the core applications are relegated to individual business units (there are three in this example) and manufacturing facilities (there are four)
Business Operation 1 is the largest commercial entity. It shares the same HR and Procurement solutions as Corporate (though on unique instances of its own) and runs a different instance of the core accounting system that is managed separately. Its two facilities (1 and 2) use different instances of the same MES system, a common WMS system, and a set of unique fringe applications in most other functional categories, some of which overlap or complement those at the business unit level. Despite these differences in footprint, facilities 1 and 2 are highly similar from an operational/business process standpoint
Business Operations 2 and 3 are smaller commercial entities. They run a different HR system and a different instance of the Procurement solution than Corporate, with a separately managed instance of the core accounting system in one and a unique accounting system in the other. Each has one facility (3 and 4, respectively); the facilities use different MES systems, different instances of the same WMS system, and a set of unique fringe applications in most other functional categories, some of which overlap or complement those at the business unit level. Despite these differences in footprint, facilities 3 and 4 are highly similar from an operational/business process standpoint
All three business entities operate on unique ERP solutions. Two of them leverage the same CRM system, though on separate instances, so there is no enterprise-level view of the customer, and financials need to be consolidated at corporate across all three entities using something like Hyperion or OneStream
The facilities utilize three different EAM solutions for Asset Health today, with two of them (2 and 3) using the same software
The fringe applications for accounting, EH&S, HR, and Procurement largely exist because of capability gaps in the solutions already available from the corporate or business unit applications
All things considered, the current environment includes 29 unique applications and 15 application instances.
Sounds complicated, doesn’t it?
Well, while this is entirely a made-up scenario meant to help illustrate various simplification opportunities, the fact is that these things do actually happen, especially as you scale up and out an organization, have acquisitions, or partially roll out technology over time.
Notes on the Evaluation
Observations based on the analysis performed:
Having worked with business, portfolio, and application owners to classify and assess the applications in place, a set of systems surfaced as creating higher levels of business value, across both mission-critical core (ERP, CRM, Accounting, MES) and supporting/fringe (Procurement, HR, WMS, EH&S, EAM) applications.
Application A, having been implemented by the largest commercial entity, provides the most capability of any of the solutions in place
Application D, as the current CRM system in use by two of the units today, likely offers the best potential platform for a future enterprise standard
Application F likely would make sense as an enterprise standard platform for accounting, though there is something about Application I currently in Facility 3 that provides unique capability at a day-to-day level
Application V is the best of the MES solutions from a fit and technology standpoint and is in place at two of the facilities today, though running on separate instances
Application K is already in place to support Procurement across most of the enterprise, though instances are varied and Applications L and M exist at the facility level because of gaps in capability today
Applications M and O surface as the best technical solutions in the EH&S space, with all of the others providing equal or lesser business value and technical quality
Application S stands out among other HR solutions as being a very solid technology platform
Application AB is the best of the EAM solutions both in terms of business capability and technical quality
Notes on the Strategy
The overall simplification strategy begins with the desire to standardize operations for smaller business entities 2 and 3 (operating blueprint B) and to run facilities in a more standard way between those supporting the larger commercial unit (facility blueprint A) and those supporting the smaller ones (facility blueprint B).
From a big bets standpoint:
ERP: Make improvements to Application A supporting business operation 1 so that the company can move from three ERPs to one, using a separate instance for the smaller operating units.
CRM: Make any necessary enhancements to Application D so that it can be run as a single enterprise application supporting all three business units (removing it from their footprint to manage), providing a mechanism to have a single view of the customer and reduced operating complexity and cost
Accounting: Given it is already largely in place across businesses, make improvements to Application F so it can serve as a single enterprise finance instance and remove it from the footprint of the individual units. For the facility-level requirements, make updates to accounting Application I and standardize on that application for the small business manufacturing facilities.
MES: Finally, standardize on Application V across facilities, with separate instances used to operate the large and small business facilities respectively
For Operational Improvements:
Procurement and HR: Similar to CRM and Accounting, standardize on Application K and S so that they can be maintained and operated at the enterprise level
EH&S: Assuming there are differences in how they operate, standardize on Applications M and O as the solutions for large and smaller units respectively, eliminating all other applications in place
WMS: Y is already the standard for large facilities, so no action is needed there. For smaller facilities, consolidate to a single instance to support both facilities rather than maintain two versions of Application Z
EAM: Standardize on a single, improved version of Application AB and eliminate the other applications currently in place
Finally, for low-value applications like H and M, review and ensure no dependencies or issues exist, then sunset those applications to reduce complexity and any associated cost outright
Post-implementation, the future environment would include 12 unique applications and 2 application instances, which is a net reduction of 17 applications (59%) and 13 instances (87%), likely with a substantial cost impact as well.
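A quick check of the arithmetic behind those figures:

```python
# Verify the stated reductions from the example current and future states.
apps_before, apps_after = 29, 12
instances_before, instances_after = 15, 2
print(f"Applications: -{apps_before - apps_after} "
      f"({(apps_before - apps_after) / apps_before:.0%})")   # -17 (59%)
print(f"Instances: -{instances_before - instances_after} "
      f"({(instances_before - instances_after) / instances_before:.0%})")  # -13 (87%)
```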
Wrapping Up
I realized in sketching out this article that it would contain a substantial amount of information, but it is aimed at practitioners in the interest of sharing some perspective on the considerations involved in doing rationalization work. In my experience, what seems fairly straightforward on paper (including in my example above) generally isn’t, for many reasons that are organizational and process-driven in nature. That being said, there is a lot of complexity in many organizations to be addressed, and hopefully some of the ideas covered will be helpful in making the process a little more manageable.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.