Developing Application Strategy – Executing the Process

Ok, I have the scope identified, but what do I do now?

Having recently written about the intangibles and scope associated with simplification, the focus of this article is the process of rationalization itself, with an eye towards reducing complexity and operating cost.

The next sections will break down the steps in the process flow above, highlighting various dimensions and potential issues that can occur throughout a rationalization effort.  I will focus primarily on the first three steps (i.e., the analysis and solutioning), given that is where the bulk of the work occurs.  The last two steps are largely dedicated to socializing and executing on the plan, which is more standard delivery and governance work.  I will then provide a conceptual manufacturing technology example to illustrate some ways the exercise could play out in a more tangible way.

 

Understand

The first step of the process is about getting a thorough understanding of the footprint in place to enable reasonable analysis and solutioning.  This does not need to be exhaustive and can be prioritized based on the scope and complexity of the environment.

 

Clarify Ownership

What’s Involved:

  • Identifying the technology owners of sets of applications, however they are organized (hereinafter referred to as portfolio owners)
  • Identifying primary business customers for those applications (business owners)
  • Identifying specific individuals who have responsibility for each application (application owners)
  • Portfolio and application owners can be the same individual, but in larger organizations they likely won’t be, given the scope of an individual portfolio and the ways it is managed

Why It Matters:

  • Subject matter knowledge will be needed relative to applications and the portfolios in which they are organized, the value they provide, their alignment to business needs, etc.
  • Opportunities will need to be discussed and decisions made related to ongoing work and the future of the footprint, which will require involvement of these stakeholders over time

Key Considerations:

  • Depending on the size of the organization and the scope of the various portfolios in place, it may be difficult to engage the right leaders in the process. In that case, a designate should be identified who can serve as a day-to-day representative of the larger organization and who is empowered to provide input and make recommendations on behalf of their respective area.
  • In these cases, a separate process step will need to be added to socialize and confirm the outcomes of the process with the ultimate owners of the applications to ensure alignment, regardless of the designated responsibilities of the people participating in the process itself. Given the criticality of simplification work, there is substantial risk in making broad assumptions about organizational support and alignment, so some form of additional checkpoints is a good idea in nearly all cases where this occurs

 

Inventory Applications

What’s Involved:

  • Working with Portfolio Owners to identify the assets across the organization and create as much transparency as possible into the current state environment

Why It Matters:

  • There are two things that should come from this activity: an improved understanding of what is in place, and an intangible understanding of the volatility, variability, and level of opacity in the environment itself. On the latter point, if I find that I have substantially more applications across a set of facilities or operating units than I expected, and those vary greatly by business, it should inform how I think about the future state environment and the governance model I want in place to manage that proliferation going forward.  This is related to my point on being a “historian” in the previous article on managing the intangibles of the process.

Key Considerations:

  • Catalogue the unique applications in production, providing a general description of what they do, users of the technology (business units, individual facilities, customer segments/groups), primary business function(s)/capabilities provided, criticality of the solution (e.g., whether it is a mission-critical/“core” or supporting/“fringe” application), teams that support the application, number of application instances (see the next point), key owners (in line with the roles mentioned above), mapping to financials (described in a later section), mapping to ongoing delivery efforts (also described below), and any other critical considerations where appropriate (e.g., a technology platform that is near end of life). A minimal record sketch follows this list
  • In concert with the above, identify the number of application instances in production, specifically the number of different configurations of a base application running on separate infrastructure, supporting various operations or facilities with unique rules and processes, or anything that would be akin to a “copy-paste-modify” version of a production application. This is critical to understand and differentiate, because the simplification process needs to consider reducing these instance counts in the interest of streamlining the future state.  That simplification effort can be a separate and time-consuming activity on top of reducing the number of unique applications as a whole
  • Whether to include hosting and the technology stack of a given application is a key consideration in the inventory process itself. In general, I would try to avoid going too deep, too early in the rationalization process, because these kinds of issues will surface during the analysis effort anyway, and putting them in the first step of the process could slow down the work by documenting details for applications that aren’t ultimately the top priority for simplification
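To make the inventory fields above a little more concrete, here is a minimal sketch of what a single catalogue entry might look like in code. The field names, types, and example values are purely illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApplicationRecord:
    """One row of a hypothetical application inventory (illustrative fields only)."""
    name: str
    description: str
    business_function: str       # primary capability provided (e.g., "Accounting")
    users: List[str]             # business units, facilities, or customer segments
    criticality: str             # "core" or "fringe"
    support_team: str
    instance_count: int          # "copy-paste-modify" variants running in production
    portfolio_owner: str
    business_owner: str
    application_owner: str
    annual_cost_estimate: float  # directional, not precise
    active_projects: List[str] = field(default_factory=list)
    notes: str = ""              # e.g., "platform near end of life"

# Example entry for a made-up MES application
mes = ApplicationRecord(
    name="Application V", description="Manufacturing execution system",
    business_function="MES", users=["Facility 1", "Facility 2"],
    criticality="core", support_team="Plant Systems", instance_count=2,
    portfolio_owner="Mfg Technology Lead", business_owner="VP Operations",
    application_owner="MES Product Owner", annual_cost_estimate=1_200_000.0,
    notes="Separate instances per facility",
)
print(mes.name, mes.instance_count)
```

In practice this would most likely live in a spreadsheet or an APM/CMDB tool rather than code; the point is simply that each application ends up with one consistent record covering description, ownership, criticality, instances, cost, and in-flight work.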

 

Understand Financials

What’s Involved:

  • Providing a directionally accurate understanding of the direct and indirect costs attributable to individual applications across the portfolio
  • Providing a lens on the expected cost of any discretionary projects targeted at enhancing, replacing, or modernizing individual applications (to the extent there is work identified)

Why It Matters:

  • Simplification is done primarily to save or redistribute cost and accelerate delivery and innovation. If you don’t understand the cost associated with your footprint, it will be difficult, if not impossible, to size the relative benefit of different changes you might make; as such, the financial model is fundamental to the eventual business case meant to come as an output of the exercise

Key Considerations:

  • Direct cost related to dedicated teams, licensing, and hosted solutions can be relatively straightforward and easy to gather, along with the estimated cost of any planned initiatives for a specific application
  • Direct cost can be more difficult to ascertain when a team or third party supports a set of applications, in which case some form of cost apportionment may be needed to estimate individual application costs (e.g., allocating cost based on the number of production tickets closed by application within a portfolio of systems; a simple apportionment sketch follows this list)
  • Indirect expenses related to infrastructure and security in particular can be difficult to understand, depending on the hosting model (e.g., dedicated versus shared EC2 instances in the cloud, versus on-premises managed hardware) and how costs for hardware, network, cyber security tools, and other shared services are allocated and tracked back to the portfolio
  • As I mentioned in my article on the intangibles associated with rationalization, directional accuracy is more important than precision in this activity, because the goal at this early stage of the process is to identify redundancies where there is material cost savings potential, not to build out a precise cost allocation for infrastructure in the current state
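As a simple illustration of the ticket-based apportionment mentioned above, the sketch below splits a shared support cost across applications in proportion to closed production tickets. The numbers, application names, and the choice of tickets as the allocation driver are hypothetical; any reasonable driver (users, transactions, compute) could be substituted.

```python
def apportion_cost(shared_cost: float, tickets_by_app: dict[str, int]) -> dict[str, float]:
    """Allocate a shared support cost in proportion to closed production tickets."""
    total_tickets = sum(tickets_by_app.values())
    return {app: shared_cost * count / total_tickets
            for app, count in tickets_by_app.items()}

# Hypothetical portfolio supported by one shared team costing $900k per year
allocation = apportion_cost(900_000, {"App F": 450, "App I": 300, "App H": 150})
print(allocation)  # {'App F': 450000.0, 'App I': 300000.0, 'App H': 150000.0}
```

Directional accuracy is the goal here, consistent with the point above about favoring reasonable estimates over false precision.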

 

Evaluate Cloud Strategy

What’s Involved:

  • Clarifying the intended direction in terms of enterprise hosting and the cloud overall, along with the approach being taken where cloud migration is in progress or planned at some level moving forward

Why It Matters:

  • Hosting costs change when moving from a hosted to a cloud-based environment, which could affect the ultimate business case, depending on the level of change planned in the footprint (and associated hosting assumptions)

Key Considerations:

  • There is a major difference in hosting costs depending on whether you are planning to use a lift-and-shift, modernize, or “containerize”-type of approach to the cloud
  • Not all applications will be suitable for the last approach in particular, and it’s important to understand whether this will play into your application strategy as you are evaluating the portfolio and identifying future alternatives
  • If there is no major shift planned (e.g., because the footprint is already cloud-hosted and modernized or containerized), it could be that this is a non-issue, but likely it does need to be considered somewhere in the process, minimally from a risk management and business case development standpoint

 

Evaluate AI Strategy

What’s Involved:

  • Understanding whether AI applications and agentic AI solutions are meant to be a core component of the future application portfolio and enterprise footprint, along with any primary touchpoints for these capabilities as appropriate
  • Understanding any high-opportunity areas from an end user standpoint where AI could aid in improving productivity and effectiveness

Why It Matters:

  • Any longer-term strategy for enterprise technology today needs to contemplate and articulate how AI is meant to integrate and align with what is going to be in place, particularly if agentic AI is meant to be included as part of the future state; otherwise, you risk having to iterate your entire blueprint relatively quickly, which could erode stakeholder confidence and momentum

Key Considerations:

  • If agentic AI is meant to be a material component in the future state, the evaluation process for targeted applications should include their API model and whether they are effectively “open” platforms that can be orchestrated and remotely operated as part of an agentic flow (a simple readiness screen is sketched after this list). The larger the overall scope of the strategy and the longer the implementation is expected to take, the more important this aspect becomes in the analysis process itself, because orchestration is going to become more critical in large enterprises over time under almost any circumstances
  • Understanding the role AI is anticipated to play is also important to the extent that it could play a critical role in facilitating transition during the implementation process itself, particularly if it becomes an integrated part of the end user presentment or the education and training environment. This could both help reduce implementation costs and accelerate deployment and adoption, depending on how AI is (or isn’t) leveraged
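As a hedged illustration of how API openness might be folded into the evaluation, the sketch below scores an application against a few hypothetical agentic-readiness criteria. The criteria names and weights are assumptions for illustration, not a standard or a recommended rubric.

```python
# Hypothetical agentic-readiness criteria and weights (illustrative only)
READINESS_CRITERIA = {
    "documented_api": 3,       # public, documented API surface
    "auth_delegation": 2,      # supports delegated/service authentication
    "event_or_webhook": 2,     # can emit or react to events outside its own UI
    "headless_operation": 3,   # key workflows usable without the native front end
}

def agentic_readiness(app_capabilities: set[str]) -> float:
    """Return a 0-1 score based on which criteria the application satisfies."""
    earned = sum(weight for criterion, weight in READINESS_CRITERIA.items()
                 if criterion in app_capabilities)
    return earned / sum(READINESS_CRITERIA.values())

print(agentic_readiness({"documented_api", "headless_operation"}))  # 0.6
```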

 

Assess Ongoing Work

What’s Involved:

  • The final aspect to understanding the current state is obtaining a snapshot of the ongoing delivery portfolio and upcoming pipeline

Why It Matters:

  • Understanding anticipated changes, enhancements, replacements, or retirements and the associated investments is important to evaluating volatility and also determining the financial consequences of decisions made as part of the strategy

Key Considerations:

  • Gather a list of active and upcoming projects, applications in scope, the scope of work, business criticality, any significant associated risk, relative cost, and anticipated benefits
  • Review the list with owners identified in the initial step with a mindset of “go”, “stop”, and “pause” given the desire to simplify overall. It may be the case that some inflight work needs to be completed and handled as sunk cost, but there could be cost avoidance opportunity early on that can help fund more beneficial changes that improve the health of the footprint overall

 

Evaluate

With a firm understanding of the environment and a chosen set of applications to be explored further (which could be everything), the process pivots to assessing what is in place and identifying opportunities to simplify.

 

Assess Portfolio Quality

What’s Involved:

  • Work with business, portfolio, and application owners to apply a methodology, like Gartner’s TIME model, to evaluate the quality of solutions in place. In general, this would involve looking at both business and technology fit in the interest of differentiating what does and doesn’t work, what needs to change, and what requirements are critical to the future state

Why It Matters:

  • Rationalization efforts can be conducted over the course of months or weeks, depending on the scope and goals of the activity. Consequently, the level of detail that can be considered in the analysis will change based on the time and resources available to support the effort but, regardless of the time and effort available, it is important for there to be a fact-based foundation to support the opportunities identified, even if only at an anecdotal level

Key Considerations:

  • There are generally two levels of this kind of analysis: a higher-level activity like the TIME model, which provides more of a directional perspective on the underlying applications, and a more detailed, gap analysis-type activity that evaluates features and functionality in the interest of vetting alternatives and identifying gaps that may need to be addressed in the rationalization process itself. The more detailed activity would typically be performed as part of an implementation process, not upstream in the strategy definition phase.  The gap analysis could leverage a standard package evaluation process (replacing external packages with the applications in place), assuming one exists within the organization
  • The technical criteria for the TIME model evaluation should include things like AI readiness, platform strategy, the underlying technical stack, and other key dimensions based on how critical those individual elements are, as surfaced during the initial stage of the work. A minimal scoring sketch follows this list
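A minimal sketch of the TIME-style classification follows, assuming applications have already been given directional business-fit and technical-fit scores on a 1–5 scale during the workshops described above, and using one common mapping of the quadrants. The scores, threshold, and application names are illustrative only.

```python
def time_quadrant(business_fit: float, technical_fit: float, threshold: float = 3.0) -> str:
    """Map 1-5 business/technical fit scores to TIME quadrants (one common mapping)."""
    if business_fit >= threshold:
        return "Invest" if technical_fit >= threshold else "Migrate"
    return "Tolerate" if technical_fit >= threshold else "Eliminate"

# Illustrative scores gathered with business, portfolio, and application owners
portfolio = {"App A": (4.5, 4.0), "App H": (1.5, 2.0), "App V": (4.0, 2.5)}
for app, (biz, tech) in portfolio.items():
    print(app, time_quadrant(biz, tech))
# App A Invest, App H Eliminate, App V Migrate
```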

 

Identify Redundancies

What’s Involved:

  • Assuming some level of functional categories and application descriptions were identified during the data gathering phase of the work, it should be relatively straightforward to identify potential redundancies that exist in the environment, as sketched below
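As a simple illustration of the grouping mentioned above, the sketch below buckets an inventory by functional category and flags any category served by more than one unique application. The data is hypothetical and would come from the inventory created earlier in the process.

```python
from collections import defaultdict

# Hypothetical (application, functional category) pairs from the inventory step
inventory = [("App F", "Accounting"), ("App G", "Accounting"), ("App I", "Accounting"),
             ("App D", "CRM"), ("App E", "CRM"), ("App Y", "WMS")]

by_function = defaultdict(set)
for app, function in inventory:
    by_function[function].add(app)

# Any category with more than one unique application is a candidate redundancy
redundancies = {f: apps for f, apps in by_function.items() if len(apps) > 1}
print(redundancies)  # Accounting and CRM surface; WMS does not
```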

Why It Matters:

  • Redundancies create opportunities for simplification, but also for improved capabilities. The simplification process doesn’t necessarily mean that those having an application replaced will be “giving up” existing capabilities.  It could be the case that the solution to which a given user group is being migrated provides more capabilities than what they currently have in place

Key Considerations:

  • Not all groups within a large organization have equal means to invest in systems capabilities. There can be situations where migrating smaller entities to solutions in use by larger and better-funded parts of the organization allows them to leverage new functionality not available in what they have
  • In situations where organizations move from independent to shared/leveraged solutions, it is important to consider not only how the shift will affect cost allocation, but also the prioritization and management of those platforms post-implementation. A concern that often arises in these scenarios is that costs will be apportioned in a way that burdens smaller entities with a greater level of funding than they can sustain, or that their needs may not be prioritized effectively once they are in a shared environment with others.  Working through these mechanics is a critical aspect of making simplification work at an enterprise level.  There needs to be a win-win environment to the maximum extent possible, or it will be difficult to incent teams to move in a more common direction

 

Surface Opportunities

What’s Involved:

  • With redundancies identified, costs aligned, and some level of application quality/fit understood, it should be possible to look for opportunities to replace and retire solutions that either aren’t in use/creating value or that don’t provide the same level of capability in relation to cost as others in the environment

Why It Matters:

  • The goal of rationalization is to reduce complexity and cost while making it easier and faster to deliver capabilities moving forward. Where cost is consumed maintaining solutions that are redundant or that don’t create value, it hampers efforts to innovate and create competitive advantage, which is the overall goal of this kind of effort

Key Considerations:

  • Generally speaking, the opportunities to simplify will be identified at a high-level during the analysis phase of a rationalization effort. The detailed/feature-level analysis of individual solutions is an important thing to include in the planning of subsequent design and implementation work to surface critical gaps, integration points, and workflow dependencies between systems to facilitate transition to the desired future state environment

 

Strategize

Having completed the Analysis effort and surfaced opportunities to simplify the footprint, the process shifts to identifying the target future state environment and mapping out the approach to transition.

 

Define Future Blueprint(s)

What’s Involved:

  • Assuming some representation of the current state environment has been created as a byproduct of the first two steps of the process, the goal of this activity is to define the conceptual end state footprint for the organization
  • To the extent that there are corporate shared services, multiple business/commercial entities, operating units, facilities, locations, etc. to be considered, the blueprint should show the simplified application landscape post-transition, organized by operating entity, where one or more operating unit could be mapped into a common element of the future blueprint (e.g., organized by facility type versus individual locations, lower complexity business units versus larger entities)

Why It Matters:

  • A relatively clear, conceptual representation of the future state environment is needed to facilitate discussion and understanding of the difference between the current environment and the intended future state, as well as the value of the changes being proposed

Key Considerations:

  • Depending on the breadth and depth of the organization itself, the representation of the blueprint may need to be defined at multiple levels
  • The approach to organizing the blueprint itself could also provide insight into how the implementation approach and roadmap is constructed, as well as how stakeholders are identified and aligned to those efforts

 

Map Solutions

What’s Involved:

  • With opportunities identified and a future state operating blueprint, the next step is to map retained solutions into the future state blueprint and project the future run rate of the application footprint

Why It Matters:

  • The output of this activity will both provide a vision of the end state and act as input to socializing the vision and approach with key stakeholders in the interest of moving the effort forward

Key Considerations:

  • There is a bit of art and science when it comes to rationalization, because too much standardization could limit agility if not managed in a thoughtful way. I will provide an example of this in the scenario following the process, but a simple example is to think about whether maintaining separate instances of a core application is appropriate in situations where speed to market or individual operating units need greater flexibility and autonomy than they would have if they had to operate off a single, shared instance of one application
  • As I mentioned in the article on the intangibles of simplification, it is a good idea to take an aggressive approach to the future state, because likely not everything will work in practice and the entire goal of the exercise is to try to optimize as much as possible in terms of value in relation to cost
  • From a financial standpoint, it is important to be conservative in assumptions related to changes in operating expense. That should manifest itself in allowing for contingency in implementation schedule and costs as well as assuming the decommissioning of solutions will take longer than expected (it most likely will).  It is far better to be ahead of a conservative plan than to be perpetually behind an overly aggressive one

 

Define Change Strategy

What’s Involved:

  • With the current and future blueprints identified, the next step would be to identify the “building blocks” (in conceptual terms) of the eventual roadmap. This is essentially a combination of three things: application instances to be consolidated, replacement of one application by another, and retirement of applications that are either unused or that don’t create enough value to continue supporting them
  • Opportunities can also be segregated into big bets that affect core systems and material cost/change, those that are more operational and less substantial in nature, and those that are essentially cleanup of what exists. The segregation of opportunities can help inform the ultimate roadmap to be created, the governance model established, and program management approach to delivery (e.g., how different workstreams are organized and managed)

Why It Matters:

  • Roadmaps are generally fluid beyond a near-term window because things inevitably occur during implementation and business priorities change. Given there can be a lot of socializing of a roadmap and iteration involved in strategic planning, I believe it’s a good idea to separate the individual transitions from the overall roadmap itself, which can be composed in various ways, depending on how you ultimately want to tackle the strategy.  At a conceptual level, you can think of it as a set of Post-it notes representing individual efforts that can be organized in a number of legitimate ways with different cost, benefit, and risk profiles

Key Considerations:

  • Individual transitions can be assessed in terms of risk, business implications, priority, and relative cost and benefits as a means to help determine slotting in the overall roadmap for implementation. A lightweight scoring sketch follows
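One lightweight way to support that slotting is to give each building block a rough score and sort on it. The fields, weights, and numbers below are illustrative assumptions rather than a formal prioritization method, and any real slotting decision would also weigh dependencies and business timing.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    benefit: float  # directional annual benefit estimate
    cost: float     # rough implementation cost
    risk: float     # 1 (low) to 5 (high)

def slotting_score(t: Transition) -> float:
    """Higher scores slot earlier: benefit relative to cost, discounted by risk."""
    return (t.benefit / max(t.cost, 1.0)) / t.risk

blocks = [
    Transition("Consolidate WMS instances", benefit=300_000, cost=150_000, risk=2),
    Transition("Replace ERP for smaller units", benefit=2_000_000, cost=4_000_000, risk=5),
    Transition("Retire unused fringe application", benefit=80_000, cost=10_000, risk=1),
]
for t in sorted(blocks, key=slotting_score, reverse=True):
    print(f"{t.name}: {slotting_score(t):.2f}")
```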

 

Develop Roadmap

What’s Involved:

  • With the individual building blocks for transition identified, the final step in the strategy definition stage is to develop one or more roadmaps that assemble those blocks, exploring as many implementation strategies as appropriate

Why It Matters:

  • The roadmap is a critical artifact in the formation of an implementation plan, though roadmaps generally change quite a bit over time depending on the time horizon, scope, complexity, and scale of the program itself

Key Considerations:

  • Ensure that all work is included and represented, including any foundational or kickoff-related activities that will serve the program as a whole (e.g., establishing a governance model, PMO, etc.)
  • Include retirements (not just new solution deployments), minimally as milestones, in the roadmap so they are planned and accounted for. There are many times this is missed in my experience with new system deployments
  • Depending on the scale of implementation, explore various business scenarios (e.g., low risk work up front, big bets first, balanced approaches, etc.) to ascertain the relative cost, benefit, implementation requirements, and risks of each and determine the “best case” scenario to be socialized

 

Socialize and Mobilize

Important footnote: I’ve generally assumed that the process above would be IT-led with a level of ongoing business participation, given much of the data gathering and analysis can be performed within IT itself.  That isn’t to say that solutioning and development of a roadmap needs to be created and socialized in a sequential manner as outlined here.  It could also be the case that opportunities are surfaced out of the evaluation effort and the strategy and socialization are then handled through a collaborative/workshop process; it depends on the scope of the exercise and the nature of the organization.

With the alternatives and future state recommendations prepared, the remaining steps of the process are fairly standard, in terms of socializing and iterating the vision and roadmap, establishing a governance model and launching the work with clear goals for 30, 60, and 90 days in mind.  As part of the ongoing governance process, it is assumed that some level of iteration of the overall roadmap and goals will be performed based on learnings gathered early in the implementation process.

 

Putting Ideas into Practice – An Example

The Conceptual Example – Manufacturing

If you’ve made it this far, I wanted to move beyond the theory to a conceptual scenario to help illustrate various situations that could occur in the course of a simplification exercise.  The example diagram represents the flow of data across the three initial steps of the process outlined above; the data is logically consistent and traceable across steps in the process, if that is helpful in understanding the situation.  I limited the number of application types (lower left corner of the diagram) so I could explore multiple scenarios without making the data too overwhelming.  In practice, there would be multiple domains and many components in each domain to be considered (e.g., HR is a domain with many components, represented as a single application category here), depending on the level of granularity being used for the rationalization effort.

From here, I’ll provide some observations on each major step in the hopes of making some example outcomes clear.  I’m not covering the financial analysis, given it would make things even more complicated to represent, but for the sake of argument, we can assume that there is financial opportunity associated with reducing the number of applications and instances in place.

 

Notes on the Current State

Some observations on the current state based on the data collected:

  • The organization has a limited set of corporate applications for Finance, Procurement, and HR, but most of the core applications are relegated to individual business units (there are three in this example) and manufacturing facilities (there are four)
  • Business Operation 1 is the largest commercial entity. It uses the same HR and Procurement solutions as Corporate (though with unique copies of its own) and runs a different instance of the core accounting system that is managed separately. Its two facilities (1 and 2) use different instances of the same MES system, a common WMS system, and a set of unique fringe applications in most other functional categories, some of which overlap or complement those at the business unit level. Despite these differences in footprint, facilities 1 and 2 are highly similar from an operational/business process standpoint
  • Business Operations 2 and 3 are smaller commercial entities. They run on a different HR system and a different instance of the Procurement solution than Corporate, with one of them running a different, separately managed instance of the core accounting system and the other a unique accounting system. Their facilities (one each, 3 and 4) use different MES systems, different instances of the same WMS system, and a set of unique fringe applications in most other functional categories, some of which overlap or complement those at the business unit level. Despite these differences in footprint, facilities 3 and 4 are highly similar from an operational/business process standpoint
  • All three business entities operate on unique ERP solutions. Two of them leverage the same CRM system, though on separate instances, so there is no enterprise-level view of the customer, and financials need to be consolidated at corporate across all three entities using something like Hyperion or OneStream
  • The facilities utilize three different EAM solutions for Asset Health today, with two of them (2 and 3) using the same software
  • The fringe applications for accounting, EH&S, HR, and Procurement largely exist because of capability gaps in the solutions already available from the corporate or business unit applications

All things considered, the current environment includes 29 unique applications and 15 application instances.

Sounds complicated, doesn’t it? 

Well, while this is entirely a made-up scenario meant to help illustrate various simplification opportunities, the fact is that these things do actually happen, especially as you scale up and out an organization, have acquisitions, or partially roll out technology over time.

 

Notes on the Evaluation

Observations based on the analysis performed:

  • Working with business, portfolio, and application owners to classify and assess the applications in place surfaced a set of systems that create higher levels of business value, spanning both mission-critical core (ERP, CRM, Accounting, MES) and supporting/fringe (Procurement, HR, WMS, EH&S, EAM) applications.
  • Application A, having been implemented by the largest commercial entity, provides the most capability of any of the solutions in place
  • Application D, as the current CRM system in use by two of the units today, likely offers the best potential platform for a future enterprise standard
  • Application F likely would make sense as an enterprise standard platform for accounting, though there is something about Application I currently in Facility 3 that provides unique capability at a day-to-day level
  • Application V is the best of the MES solutions from a fit and technology standpoint and is in place at two of the facilities today, though running on separate instances
  • Application K is already in place to support Procurement across most of the enterprise, though instances are varied and Applications L and M exist at the facility level because of gaps in capability today
  • Applications M and O surface as the best technical solutions in the EH&S space, with all of the others providing equal or lesser business value and technical quality
  • Application S stands out among other HR solutions as being a very solid technology platform
  • Application AB is the best of the EAM solutions both in terms of business capability and technical quality

 

Notes on the Strategy

The overall simplification strategy begins with the desire to standardize operations for smaller business entities 2 and 3 (operating blueprint B) and to run facilities in a more standard way between those supporting the larger commercial unit (facility blueprint A) and those supporting the smaller ones (facility blueprint B).

From a big bets standpoint:

  • ERP: Make improvements to Application A supporting business operation 1 so that the company can move from three ERPs to one, using a separate instance for the smaller operating units.
  • CRM: Make any necessary enhancements to Application D so that it can be run as a single enterprise application supporting all three business units (removing it from their footprint to manage), providing a mechanism to have a single view of the customer and reduced operating complexity and cost
  • Accounting: Given it is already largely in place across businesses, make improvements to Application F so it can serve as a single enterprise finance instance and remove it from the footprint of the individual units. For the facility-level requirements, make updates to accounting Application I and standardize on that application for the small business manufacturing facilities.
  • MES: Finally, standardize on Application V across facilities, with a unique instance being used to operate large and small business facilities respectively

 

For Operational Improvements:

  • Procurement and HR: Similar to CRM and Accounting, standardize on Application K and S so that they can be maintained and operated at the enterprise level
  • EH&S: Assuming there are differences in how they operate, standardize on Applications M and O as the solutions for large and smaller units respectively, eliminating all other applications in place
  • WMS: Application Y is already the standard for large facilities, so no action is needed there. For smaller facilities, consolidate to a single instance to support both facilities rather than maintaining two versions of Application Z
  • EAM: standardize to a single, improved version of Application AB and eliminate other applications currently in place
  • Finally, for low-value applications like H and M, review them to ensure no dependencies or issues exist, then sunset those applications to reduce complexity and any associated cost outright

Post-implementation, the future environment would include 12 unique applications and 2 application instances, which is a net reduction of 17 applications (59%) and 13 instances (87%), likely with a substantial cost impact as well.
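For anyone who wants to check the math, the reductions above follow directly from the made-up counts in the scenario:

```python
current_apps, future_apps = 29, 12
current_instances, future_instances = 15, 2

app_reduction = current_apps - future_apps
instance_reduction = current_instances - future_instances
print(app_reduction, round(app_reduction / current_apps * 100))                 # 17 59
print(instance_reduction, round(instance_reduction / current_instances * 100))  # 13 87
```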

 

Wrapping Up

I realized in chalking out this article that it would be a substantial amount of information, but it is aimed at practitioners in the interest of sharing some perspective on considerations involved in doing rationalization work.  In my experience, what seems fairly straightforward on paper (including in my example above) generally isn’t for many reasons that are organizational and process-driven in nature.  That being said, there is a lot of complexity in many organizations to be addressed and so hopefully some of the ideas covered will be helpful in making the process a little more manageable.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/26/2025

Transforming Manufacturing

Overview

Growing Up in Manufacturing

My father ran his own business when I was growing up.  His business had two components: first, he manufactured pinion wire (steel or brass rods of various diameters with teeth from which gears are cut) and, second, he produced specialty gears that were used in various applications (e.g., the timing mechanism of an oil pump).  It was something he used to raise and support a large Italian family and it was pretty much a one-man show, with help from his kids as needed, whether that was counting and quality testing gears with mating parts or cutting, packing, and shipping material to various customers across North America.  He acquired and learned how to operate screw machines to produce pinion wire but eventually shifted to a distribution business, where he would buy finished material in quantity and then distribute it to middle market customers at lower volumes at a markup.

His business was as low tech as you could get, with a little card file he maintained that had every order by customer written out on an index card, tracking the specific item/part, volume, and pricing so he had a way to understand history as new requests for quotes and orders came in and also understand purchase patterns over time.  It was largely a relationship business, and he took his customer commitments to heart at a level that dinner conversation could easily deflect into a worry about a disruption in his supply chain (e.g., something not making it to the electroplater on time) and whether he might miss his promised delivery date.  Integrity and accountability were things that mattered and it was very clear his customers knew it.  He had a note pad on which he’d jot things down to keep some subset of information on customer / prospect follow ups, active orders, pending quotes, and so on, but to say there was a system outside what he kept in his head would be unfair to his mental capacity, which was substantial.

It was a highly manual business and, as a person who taught myself how to write software in the third grade, I was always curious what he could do to make things a little easier, more structured, and less manual, even though that was an inescapable part of running his business on the whole.  That isn’t to say he had that issue or concern, given he’d developed a system and way of operating over many years, knew exactly how it worked, and was entirely comfortable with it.  There was also the small matter of my father being relatively stubborn, but that didn’t necessarily deter me from suggesting things could be improved.  I do like a challenge, after all…

As it happened, when I was in high school, we got our first home computer (somewhere in the mid-1980s) and, despite its relatively limited capacity, I thought it would be a good idea to figure out a way to make something about running his business a little easier with technology.  To that end, I wrote a piece of software that would take all of his order management off of the paper index cards and put it into an application.  The ability to look up customer history, enter new orders, look at pricing differences across customers, etc. were all things I figured would make life a lot easier, not to mention reducing the need to maintain this overstuffed card file that seemed highly inefficient to me.

By this point, I suspect it’s clear to the reader what happened… which is that, while my father appreciated the good intentions and concept, there was no interest in changing the way he’d been doing business for decades in favor of using technology he found a lot more confusing and intimidating than what he already knew (and that worked to his level of satisfaction).  I ended up relatively disappointed, but learned a valuable lesson, which is that the first challenge in transformation is changing mindsets… without that, the best vision in the world will fail, no matter what value it may create.

It Starts with Mindset

I wanted to start this article with the above story because, despite nearly forty years having passed since I tried to introduce a little bit of automation to my father’s business, manufacturing in today’s environment can be just as antiquated and resistant to change as it was then, seeing technology as an afterthought, a bolt-on, or a cost of doing business rather than the means to unlock the potential that exists to transform into a digital business in even the most “low tech” of operating environments.

While there is an inevitable and essential dependence on people, equipment, and processes, my belief is that we have a long way to go on understanding the critical role technology plays in unlocking the potential of all of those things to optimize capacity, improve quality, ensure safety, and increase performance in a production setting.

The Criticality of Discipline

Having spent a number of years understanding various approaches to digital manufacturing, one point that I wanted to raise prior to going into more of the particulars is the importance of operating with a holistic vision and striking the balance between agility and long-term value creation.  As I addressed in my article Fast and Cheap Isn’t Good, too much speed without quality can lead to complexity, uncontrolled and inflated TCO, and an inability to integrate and scale digital capabilities over time.  Wanting something “right now” isn’t an excuse not to do things the right way and eventually there is a price to pay for tactical thinking when solutions don’t scale or produce more than incremental gains. 

This is also related to “Framework-Driven Design” that I talk about in my article on Excellence by Design.  It is rarely the case that there is an opportunity to start from scratch in modernizing a manufacturing facility, but I do believe there is substantial value in making sure that investments are guided by an overall operating concept, technology strategy, and evolving standards that will, over time, transform the manufacturing environment as a whole and unlock a level of value that isn’t possible where incremental gains are always the goal.  Sustainable change takes time.

The remainder of this article will focus on a set of areas that I believe form the core of the future digital manufacturing environment.  Given this is a substantial topic, I will focus on the breadth of the subject versus going too deep into any one area.  Those can be follow-up articles as appropriate over time.

 

Leveraging Data Effectively

The Criticality of Standards

It is a foregone conclusion that you can’t optimize what you can’t track, measure, and analyze in real-time.  To that end, starting with data and standards is critical in transforming to a digital manufacturing environment.  Without standards, the ability to benchmark, correlate, and analyze performance will be severely compromised.  This can be as basic as how a camera system, autonomous vehicle, drone, conveyor, or digital sensor is integrated within a facility, to the representation of equipment hierarchies, or how operator roles and processes are tracked across a set of similar facilities.  Where standards for these things don’t exist, value will be constrained to a set of individual point solutions, use cases, and one-off successes, because the technical debt associated with retrofitting and mapping across the various standards in place will create a significant maintenance effort that limits focus on true innovation and optimization.  Where standards are implemented and scaled over time, however, the value opportunity will eventually cross over into exponential gains that aren’t otherwise possible.  This isn’t to suggest that there is a one-size-fits-all way of thinking about standards or that every solution needs to conform for the sake of an ivory tower ideal.  The point is that it’s worth slowing down the pace of “progress” at times to understand the value in designing solutions for longer-term value creation.

The Role of Data Governance

It’s impossible to discuss the criticality of standards without also highlighting the need for active, ongoing data governance: to ensure standards are followed, to ensure data quality at the local and enterprise level is given priority (especially to the degree that analytical insights and AI become core to informed decision making), and to help identify and surface additional areas of opportunity where standards may be needed to create further insights and value across the operating environment.  The upshot of this is that there need to be established roles and accountability for data stewards at the facility and enterprise level if there is an aspiration to drive excellence in manufacturing, no matter what the present level of automation is across facilities.

 

Modeling Distributed Operations

Applying Distributed Computing

There is a power in distributed computing that enables you to scale execution at a rate that is beyond the capacity you can achieve with a single machine (or processor).  The model requires an overall coordinator of activity to distribute work and monitor execution and then the individual processors to churn out calculations as rapidly as they are able.  As you increase processors, you increase capacity, so long as the orchestrator can continue to manage and coordinate the parallel activity effectively.

From a manufacturing standpoint, the concept applies well across a set of distributed facilities, where the overall goal is to optimize the performance and utilization of available capacity given varying demand signals, individual operating characteristics of each facility, cost considerations, preventative maintenance windows, etc.  It’s a system that can be measured, analyzed, and optimized, with data gathered and measured locally, a subset of which is used to inform and guide the macro-level process.

Striking the Balance

While I will dive into this a little further towards the tail end of this article, the overall premise from an operating standpoint is to have a model that optimizes the coordination of activity between individual operating units (facilities) that are running as autonomously as possible at peak efficiency, while distributing work across them in a way that optimizes production, availability, cost, or whatever other business parameters are most critical.

The key point is that the technology infrastructure for distributing and enabling production across and within facilities should ideally be driven by business parameters that can be input and adjusted at the macro-level, with the entire system of facilities adjusting in real-time in a seamless, integrated way.  Conversely, the system should be a closed loop where a disruption at the facility level can inform a change across the overall ecosystem such that workloads are redistributed (if possible) to minimize the impact on overall production.  This could manifest in anything from one or more micro-level events (e.g., a higher than expected occurrence of unplanned outages) that inform production scheduling and distribution of orders, to a major event (e.g., a fire or substantial facility outage) that redirects work across other facilities to minimize end customer impact.  Arguably there are elements that exist within ERP systems that can account for some of this today, but the level and degree of customization required to make it a robust and inclusive process would be substantial, given much of the data required to inform the model exists outside the ERP ecosystem itself, in equipment, devices, processes, and execution within individual facilities themselves.
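A minimal sketch of this closed-loop idea, assuming a greedy allocation by available capacity and entirely hypothetical facility names, capacities, and demand, is below. A real implementation would sit on far richer planning logic; the point is only that a disruption at one facility feeds back into a macro-level re-plan.

```python
def allocate(orders: int, capacity: dict[str, int]) -> dict[str, int]:
    """Greedily spread order volume across facilities by remaining capacity."""
    plan, remaining = {}, orders
    for facility, cap in sorted(capacity.items(), key=lambda kv: kv[1], reverse=True):
        take = min(cap, remaining)
        plan[facility] = take
        remaining -= take
    if remaining > 0:
        plan["unallocated"] = remaining  # demand exceeds total available capacity
    return plan

capacity = {"Facility 1": 500, "Facility 2": 400, "Facility 3": 300, "Facility 4": 300}
print(allocate(1200, capacity))

# Closed loop: a disruption at Facility 2 zeroes its capacity and triggers a re-plan
capacity["Facility 2"] = 0
print(allocate(1200, capacity))  # the shortfall of 100 is surfaced as unallocated
```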

Thinking about Mergers, Acquisitions, and Divestitures

As I mentioned in the previous section on data, establishing standards is critical to enabling a distributed paradigm for operations, the benefit of which is also the speed at which an acquisition could be leveraged effectively in concert with an existing set of facilities.  This assumes there is an ability to translate and integrate systems rapidly to make the new facility function as a logical extension of what is already in place, but ultimately a number of those technology-related challenges would have to be worked through in the interest of optimizing individual facility performance regardless.  The alternative to having this macro-level dynamic ecosystem functioning would likely be excess cost, inefficiency, and wasted production capacity.

 

Advancing the Digital Facility

The Role of the Digital Facility

At a time when data and analytics can inform meaningful action in real-time, the starting point for optimizing performance is the individual “processor”, which is a digital facility.  While the historical mental model would focus on IT and OT systems and integrating them in a secure way, the emergence of digital equipment, sensors, devices, and connected workers has led to more complex infrastructure and an exponential amount of available data that needs to be thoughtfully integrated to maximize the value it can contribute over time.  With this increased reliance on technology, likely some of which runs locally and some in the cloud, the reliability of wired and wireless connectivity has also become a critical imperative of operating and competing as a digital manufacturer.

Thinking About Auto Maintenance

Drawing on a consumer example, I brought my car in for maintenance recently.  The first thing the dealer did was plug in and download a set of diagnostic information that was gathered over the course of my road trips over the last year and a half.  The data was collected passively, provided the technicians with input on how various engine components were performing, and also some insight on settings that I could adjust given my driving habits that would enable the car to perform better (e.g., be more fuel efficient).  These diagnostics and safety systems are part of having a modern car and we take them for granted.

Turning back to a manufacturing facility, a similar mental model should apply for managing data at a local and enterprise level, which is that there should be a passive flow of data to a central repository that is mapped to processes, equipment, and operators in a way that enables ongoing analytics to help troubleshoot problems, identify optimization and maintenance opportunities, and look across facilities for efficiencies that could be leveraged at broader scale.
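To make the passive-flow idea concrete, a single telemetry record tagged to facility, equipment, process, and operator might look like the hypothetical sketch below; the field names are assumptions rather than any particular standard.

```python
import json
import time

def equipment_reading(facility: str, equipment_id: str, process: str,
                      operator: str, metrics: dict) -> str:
    """Build one telemetry record, tagged so it can be mapped at the enterprise level."""
    record = {
        "timestamp": time.time(),
        "facility": facility,
        "equipment_id": equipment_id,
        "process": process,
        "operator": operator,
        "metrics": metrics,  # e.g., temperature, vibration, cycle time
    }
    return json.dumps(record)  # in practice, published to a central repository or data platform

print(equipment_reading("Facility 3", "PRESS-07", "stamping", "shift-b-operator-12",
                        {"temperature_c": 61.4, "vibration_mm_s": 2.1, "cycle_time_s": 14.8}))
```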

Building Smarter Equipment

Taking things a step further… what if I were to attach a sensor under the hood of my car, take the data, build a model, and try to make driving decisions using that model and my existing dashboard as input?  The concept seems a little ridiculous given the systems already in place within a car to help make the driving experience safe and efficient.  That being said, in a manufacturing facility with legacy equipment, that intelligence isn’t always built in, and the role of analytics can become an informed guessing game of how a piece of equipment is functioning without the benefit of the knowledge of the people who built the equipment to begin with. 

Ultimately, the goal should be for the intelligence to be embedded within the equipment itself, to enable a level of self-healing or alerting, and then within control systems to look at operating conditions across a connected ecosystem to determine appropriate interventions as they occur, whether that be a minor adjustment to operating parameters or a level of preventative maintenance.

The Role of Edge Computing and Facility Data

The desire to optimize performance and safety at the individual facility level means that decisions need to be informed and actions taken in near real-time as much as possible.  This premise then suggests that facility data management and edge computing will continue to increase in criticality as more advanced uses of AI become part of everyday integrated work processes and facility operations.

 

Enabling Operators with Intelligence

The Knowledge Challenge

With the general labor shortage in the market and the retirement of experienced, skilled laborers, managing knowledge and accelerating productivity is a major issue to be addressed in manufacturing facilities.  There are a number of challenges associated with this situation, not the least of which can be safety related, depending on the nature of the manufacturing environment itself.  Beyond that, the longer it takes to make an operator productive in relation to their average tenure (something that statistics would suggest is continually shrinking over time), the more the effectiveness of the average worker becomes a limiting factor in the operating performance of a facility overall.

Understanding Operator Overload

One way that things have gotten worse is the proliferation of systems that comes with “modernizing” the manufacturing environment itself.  Confronted with an ever-expanding set of control, IT, ERP, and analytical systems, all of which can be sending alerts and requesting action (to varying degrees of criticality) on a relatively continuous basis, individual operators and supervisors in a facility face substantially more pressure (along with the availability of exponential amounts of data itself).  This is further complicated in situations where an individual “wears multiple hats” in terms of fulfilling multiple roles/personas within a given facility, and arbitrating which actions to take against that increased number of demands can be considerably more complex.

Why Digital Experience Matters

While the number of applications that are part of an operating environment may not be something that is easy to reduce or simplify without significant investment (and time to make change happen), it is possible to look at things like digital experience platforms (DXPs) as a means to bring multiple applications together into a single, integrated experience, inclusive of AR/VR/XR technologies as appropriate.  Organizing around an individual operator’s responsibilities can help reduce confusion, eliminate duplicated data entry, improve data quality, and ultimately improve productivity, safety, and effectiveness by extension.

The Role of the Intelligent Agent

With a foundation in place to organize and present relevant information and actions to an operator on a real-time basis, the next level of opportunity comes with the integration of intelligent agents (AI-enabled tools) into a digital worker platform to inform meaningful and guided actions that will ultimately create the most production and safety impact on an ongoing basis.  Again, there is a significant dependency on edge computing, wireless infrastructure, facility data, mobile devices, a delivery mechanism (the DXP mentioned above), and a sound underlying technology strategy to enable this at scale, but it is ultimately where AI tools can have a major impact in manufacturing moving forward.

 

Optimizing Performance through Orchestration

Why Orchestration Matters

Orchestration itself isn’t a new concept in manufacturing from my perspective, as legacy versions of it are likely inherent in control and MES systems themselves.  The challenge occurs when you want to scale that concept out to include digital equipment, digital workers, digital devices, control systems, and connected applications in one seamless, integrated end-to-end process.  Orchestration provides the means to establish configurable and dynamic workflows, and the associated rules, for how you operate and optimize performance within and across facilities in a digital enterprise.

While this is definitely a capability that would need to be developed and extended over time, the concept is to think of the manufacturing ecosystem as a seamless collaboration of operators and equipment to ultimately drive efficient and safe production of finished goods. Having established the infrastructure to coordinate and track activity, the process performance can be automatically recorded and analyzed to inform continuous improvement on an ongoing basis.
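As a sketch of the configurable-rules idea, the snippet below maps incoming events from equipment, vision systems, and vehicles to an ordered set of configured actions. The event names and actions are invented for illustration; in a real environment the dispatch would call into control systems, MES, or worker platforms rather than printing.

```python
# Hypothetical orchestration rules: event pattern -> ordered actions (illustrative only)
RULES = {
    "agv.path_blocked":     ["reroute_agv", "notify_supervisor"],
    "vision.safety_breach": ["halt_cell", "page_ehs_on_call"],
    "press.vibration_high": ["reduce_line_speed", "open_maintenance_ticket"],
}

def orchestrate(event: str) -> list[str]:
    """Return and dispatch the configured actions for an event (empty if no rule matches)."""
    actions = RULES.get(event, [])
    for action in actions:
        print(f"dispatching {action} for {event}")  # stand-in for real integrations
    return actions

orchestrate("press.vibration_high")
```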

Orchestrating within the Facility

The uses of orchestration within a facility range from something as simple as coordinating and optimizing material movement between autonomous vehicles and forklifts, to computer vision applications for safety and quality management.  With the increasing number of connected solutions within a facility, having the means to integrate and coordinate activity between and across them offers a significant opportunity in digital manufacturing moving forward.

Orchestrating across the Enterprise

Scaling back out to the enterprise level and looking across facilities, there are opportunities in areas like procurement of MRO supplies and optimization of inventory levels, management and optimization of production planning across similar facilities, and benchmarking and analysis of process performance to find improvements that can be applied across facilities in a way that creates substantially greater impact than is possible if the focus is limited to an individual facility alone.  Given that certain enterprise systems like ERPs tend to operate at a largely global versus local level, having infrastructure in place to coordinate activity across both can create visibility into improvement opportunities and thereby substantial value over time.

Coordinated Execution

Finally, to coordinate between the local and global levels of execution, a thoughtful approach to managing data and the associated analytics needs to be taken.  As was mentioned in the opening, the overall operating model is meant to leverage a configurable, distributed paradigm, so the data that is shared and analyzed within and across layers is important to calibrate as part of the evolving operating and technology strategy.

 

Wrapping Up

There is a considerable amount of complexity associated with moving from a legacy, process- and equipment-oriented mindset to one that is digitally enabled and based on insight-driven, orchestrated action.  That being said, the good news is that the value that can be unlocked with a thoughtful digital strategy is substantial, given we’re still on the front end of the evolution overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 02/12/2024