InPractice: Digital Manufacturing – Integration in the “Real World”

What It Is: Creating a digitally connected manufacturing environment that enables modernization and optimization requires a clear integration strategy that can align, leverage, and synthesize new and existing elements in a thoughtful and deliberate way

Why It Matters: It is often the case that we are in a “brownfield” environment where we don’t have the opportunity to “start from scratch” when it comes to modernization because of the sunk cost that exists in facilities across an enterprise.  By leveraging strategic integration, we maximize existing investments, provide resiliency, and create agility that will more rapidly generate ROI on new investments over time

Key Concepts

The above diagram will be used as a reference for the remainder of this article.  I have specifically left out facility data and enterprise components to focus largely on configuration management and integration.  To understand the blueprint overall and concepts related to orchestration, please see the links at the bottom of this article.  As there are many levels to this topic, I will focus on the main points in the interest of communicating the concepts overall.

  1. Framework-Driven Design – The foundation of the overall blueprint is the premise that manufacturing facilities can be treated as a configuration of logical components that can be managed through orchestration as part of a digitally connected ecosystem
  2. Building the right bridges – Considerable integration is built on a point-to-point basis, hard-wiring one component to the next, making things brittle, adding maintenance cost, and making change extremely difficult.  The goal in this approach is to identify processes where value can be created, selectively moving from a hard-wired to a model-driven integration approach that maximizes existing investments, increases resiliency, and promotes longer-term agility
  3. Wrap existing assets – To the maximum extent possible, accelerate the modernization process by instrumenting and adding a digital wrapper around legacy equipment to allow it to integrate seamlessly with more modern digital capabilities and equipment
  4. Standardize integration – Create an Asset Functional Interface (AFI) that establishes a “manage-by-contract” environment where APIs and data exchanged are standardized to insulate the central orchestration framework from the underlying equipment executing the instructions.  These standards then enable a much more plug-and-play and “certified” environment where partner components can be aligned to the specs, deemed compliant, and integrated more rapidly and seamlessly over time
  5. Leveraging a Unified Namespace (UNS) – Pivoting from a hard-wired to a more standardized integration architecture also creates the opportunity to shift relevant transactions towards a more modern, event-driven architecture that is more resilient and plug-and-play, where messaging and data standards can evolve over time, leveraging a topic structure that is partner-neutral and allows for ongoing modernization
  6. Layered integration – Equipment integration itself is accomplished first through a set of façades that are configured to expose and subscribe to the relevant portions of the AFI for a specific piece of connected equipment.  There is then a secondary layer of partner-specific drivers that provide for any additional data transformation required across various partner-specific devices or versions of devices that may be in place over time.  One important point to note from an implementation standpoint is to consider the latency involved in data publication across equipment, as downstream analytics will need a relatively consistent baseline to benchmark operating metrics.  Overall, the goal would be to have streaming data available as close to near-real-time as possible, regardless of the integration pattern
  7. Providing standardization in a non-standard world – By decoupling, integrating, and orchestrating high value processes between core systems and across critical assets, we create a strategic, reusable set of infrastructure assets that can work with new and old equipment, across facilities, and that can also be integrated with one or more ERP, MES, WMS, EAM, and other systems that may be in place across a heterogeneous facilities environment.  The speed-to-market for delivering new capabilities and ROI on investments associated with this level of reusable infrastructure is considerably higher over time than working within the highly constrained, tightly coupled and diverse environments in place across many manufacturing organizations today
  8. Enabling agentic integration – Moving to an API-centric environment that supports orchestration for critical, high value operations sets the stage for agentic integration for operators and end users.  Agents can only initiate and orchestrate processes and transactions that are exposed, and putting this core infrastructure in place would be a stepping stone towards an enterprise-ready agentic infrastructure environment within manufacturing facilities
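As a thumbnail of concepts 4-6 above, here is a minimal sketch of a façade that maps a vendor-specific driver onto a standardized AFI message and a partner-neutral, UNS-style topic.  The class names, topic structure, and vendor status codes are purely illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass

# Hypothetical partner-neutral topic builder (illustrative naming only).
def afi_topic(site: str, area: str, asset_id: str, channel: str) -> str:
    return f"{site}/{area}/{asset_id}/{channel}"

@dataclass
class AfiStateMessage:
    """Standardized AFI payload, independent of the underlying vendor."""
    asset_id: str
    status: str        # e.g., "running", "stopped", "fault"
    timestamp: float

class LegacyCncDriver:
    """Partner-specific driver: knows the vendor's raw representation."""
    def read_raw_status(self) -> int:
        return 2       # invented vendor code: 2 == stopped

class CncFacade:
    """Facade: exposes only the AFI contract, hiding the vendor driver."""
    _STATUS_MAP = {1: "running", 2: "stopped", 3: "fault"}

    def __init__(self, asset_id: str, driver: LegacyCncDriver):
        self.asset_id = asset_id
        self.driver = driver

    def state(self, now: float) -> AfiStateMessage:
        raw = self.driver.read_raw_status()
        return AfiStateMessage(self.asset_id,
                               self._STATUS_MAP.get(raw, "unknown"), now)

facade = CncFacade("cnc-07", LegacyCncDriver())
msg = facade.state(now=0.0)
print(afi_topic("plant-a", "machining", msg.asset_id, "state"), msg.status)
```

Swapping in a different vendor would mean writing a new driver and status map, while the topic structure and AFI payload stay stable for everything downstream.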

Key Components

While the previous article on the power of orchestration covers the conceptual role of these components, I wanted to review these four in the interest of setting up the two example scenarios that will follow:

  1. Configuration Management – Foundational to the model is the concept that each facility will have an asset registry that serves as the single source of truth on all connected components and key characteristics that are necessary to leverage them as part of a digitally connected and orchestrated ecosystem.  This asset registry then acts as a conceptual facility DNS that helps map the logical entities to the physical environment for the purposes of enabling orchestration.  Example attributes would be the logical component type, specific type of underlying equipment, product manufacturer and associated version (as appropriate), any connected devices (e.g., a legacy forklift may have multiple smart devices connected to it), etc.
  2. Actor Management – Once assets are registered and connected via Configuration Management, this component keeps the active state of all equipment, whether it is in service, requires maintenance, is awaiting work, its current workload/request queue, telemetry data, etc.
  3. Work Management – This component acts as an intelligent dispatcher, defining rules for what equipment should be used to accomplish different tasks based on status provided by Actor Management and various operating conditions associated with the request.  For example, only forklifts are used to service requests for certain zones in a warehouse, or in the event that no AMRs are available in a given zone to process requests, a forklift is automatically dispatched as a secondary approach to minimize production downtime
  4. Orchestration – This is the “brains” of the facility, driving and coordinating critical processes, ensuring service-level assumptions are met, and that desired outcomes are achieved, leveraging work management to assign tasks that are part of configured workflows
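To make the split between Configuration Management (the registry) and Actor Management (the live state) concrete, here is a minimal sketch.  The classes, attributes, and state values are illustrative placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    """One entry in the asset registry (the conceptual facility DNS)."""
    asset_id: str            # logical identity, e.g. "amr-01"
    component_type: str      # logical component type, e.g. "material_mover"
    equipment_type: str      # underlying equipment, e.g. "AMR" or "forklift"
    manufacturer: str = ""
    version: str = ""
    attached_devices: list = field(default_factory=list)  # e.g. gateways

class ConfigurationManagement:
    """Single source of truth mapping logical entities to physical assets."""
    def __init__(self):
        self._registry: dict[str, AssetRecord] = {}

    def register(self, record: AssetRecord):
        self._registry[record.asset_id] = record

    def lookup(self, component_type: str) -> list[AssetRecord]:
        return [r for r in self._registry.values()
                if r.component_type == component_type]

class ActorManagement:
    """Tracks the live state of registered assets."""
    def __init__(self):
        self._state: dict[str, str] = {}  # asset_id -> "in_service", etc.

    def set_state(self, asset_id: str, state: str):
        self._state[asset_id] = state

    def with_state(self, asset_ids, state):
        return [a for a in asset_ids if self._state.get(a) == state]

cfg = ConfigurationManagement()
cfg.register(AssetRecord("fork-01", "material_mover", "forklift",
                         attached_devices=["gps-gateway-3"]))
cfg.register(AssetRecord("amr-01", "material_mover", "AMR"))
actors = ActorManagement()
actors.set_state("fork-01", "in_service")
actors.set_state("amr-01", "awaiting_work")
movers = [r.asset_id for r in cfg.lookup("material_mover")]
print(actors.with_state(movers, "awaiting_work"))  # ['amr-01']
```

Note the separation of concerns: the registry answers "what exists and what is it," while actor state answers "what is it doing right now."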

Two Examples

In the interest of showing the relationships across components from a process standpoint, here are two simplified examples (I’m not trying to call out all the technical detail or steps, just the base concepts).

Start Up

First, assume that there are two CNC machines in a facility, one older piece of equipment and one smart CNC machine.  The legacy machine is fitted with a “digital backpack” of sensors and a gateway and registered, along with the smart CNC machine, so that they appear as two devices in the facility configuration.  Their state is tracked in the Actor Management component on a continual basis.

Scenario 1 – There was a recent shutdown and an operator wants to restart all connected CNC equipment (STEPS A-K in the diagram)

  1. The operator initiates a request through their connected device (assuming their persona has the necessary permissions)
  2. The request is published and received by the Orchestration component, which initiates a start up process. The first step is to publish a request for all stopped CNC machines
  3. Work Management receives the request and publishes a request for a list of all stopped CNC machines
  4. Actor Management receives the request and publishes a request for all registered CNC machines that is received by Configuration Management
  5. Configuration Management receives the request, interrogates its data store and publishes the list of connected machines back to the facility message bus
  6. Actor Management receives the message and checks the list against those machines presently represented in its facility model, creating new objects as required for missing items, and publishing the list of machines showing a status of “stopped” back to the queue
  7. Work Management receives the list, checks for any safety interlocks or other operating conditions present that would prevent starting up each item in the list, then publishes the resulting set of items back to the queue
  8. The Orchestrator receives the list and then sequentially processes each item
  9. The Orchestrator sends a standard Activate message
  10. The message is processed by the Façade for each piece of equipment and a result is provided in response to the request. If the machine doesn’t have the capability for an automated restart, a message is sent to the appropriate Operator via the connected worker environment to perform the activity manually
  11. The Orchestrator checks for any error conditions, notifies an operator and/or supervisor if required, then moves to the next piece of equipment until the process is complete. When the process is completed successfully, the transactional performance data is recorded and the workflow is completed
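The publish/subscribe relay across the four components can be sketched with a tiny in-memory message bus.  The topic names and payloads below are invented for illustration, and the request/response chain is condensed relative to the steps above:

```python
from collections import defaultdict

class Bus:
    """Tiny in-memory stand-in for the facility message bus."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, topic, fn):
        self.handlers[topic].append(fn)
    def publish(self, topic, payload):
        for fn in list(self.handlers[topic]):
            fn(payload)

bus = Bus()
results = []

# Configuration Management: answers the registry query.
registry = ["cnc-legacy-01", "cnc-smart-02"]
bus.subscribe("config/list_cnc",
              lambda _: bus.publish("actor/registered_cnc", registry))

# Actor Management: filters to machines currently stopped.
status = {"cnc-legacy-01": "stopped", "cnc-smart-02": "running"}
bus.subscribe("actor/registered_cnc",
              lambda ms: bus.publish("work/stopped_cnc",
                                     [m for m in ms if status[m] == "stopped"]))

# Work Management: removes anything blocked by a safety interlock.
interlocked = set()
bus.subscribe("work/stopped_cnc",
              lambda ms: bus.publish("orchestrator/startable",
                                     [m for m in ms if m not in interlocked]))

# Orchestrator: sends a standard Activate to each machine's facade in turn.
bus.subscribe("orchestrator/startable",
              lambda ms: results.extend(f"Activate -> {m}" for m in ms))

bus.publish("config/list_cnc", None)   # the operator request kicks this off
print(results)                         # ['Activate -> cnc-legacy-01']
```

The point of the sketch is that no component calls another directly: each only publishes and subscribes, which is what makes swapping or upgrading any one of them non-disruptive.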

In an agentic future, the entire process above could be issued via voice command through a connected worker solution, potentially even remotely (depending on the infrastructure in place).

Material Movement

Second, assume that there is a fleet of material movement equipment in a facility, from 20-year-old forklifts to modern AMRs.  The legacy forklifts are fitted with a “digital backpack” of sensors and a gateway and registered, along with the rest of the autonomous equipment, so that they appear as active devices in the facility configuration.  Their state is tracked in the Actor Management component on a continual basis.

Scenario 2 – There is a need to move materials in the facility (STEPS 1-11)

  1. The MES or WMS system publishes a MoveMaterial request
  2. The Orchestrator receives the request, initiates a material move workflow, and publishes a request to identify the appropriate piece of equipment to service it
  3. Work Management receives the request, evaluates any associated conditions (e.g., location, work zone) and publishes a request to identify all equipment of the correct type (could be one or more), along with their present status, to perform the activity
  4. Actor Management receives the request and publishes a request for a list of all registered equipment of the specified type(s)
  5. Configuration Management receives the request, interrogates its data store and publishes the list of connected equipment back to the facility message bus
  6. Actor Management receives the message and checks the list against the equipment presently represented in its model, creating new objects as required for missing items, and publishing the list of movement equipment along with their individual status back to the message bus
  7. Work Management receives the list and, based on configured dispatching rules, publishes an ordered list of equipment to service the request
  8. Orchestration receives the list and starts with the first piece of equipment provided in the list
  9. Orchestration sends a standard MoveMaterial request to initiate action
  10. The message is processed by the Façade for the appropriate piece of equipment and a result is provided in response to the request.  If the work was directed at a forklift, a message is sent to the appropriate Operator via the connected worker environment to perform the activity
  11. The Orchestrator checks for any error conditions.  If the task assignment wasn’t completed or wasn’t taken up within a pre-defined OLA, it cancels that request, moves to the next piece of equipment in the list, and sends a new request, until the process is complete.  If the task is completed successfully, the transactional performance data is recorded and material move workflow is completed
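The fallback behavior in steps 8-11 amounts to a simple loop over the ordered candidate list.  This is a hedged sketch that assumes a send_request callable returning whether the task was accepted within the OLA; the names and timeout are illustrative:

```python
def dispatch_move(ordered_equipment, send_request, ola_seconds=120):
    """Try each candidate in order until one accepts within the OLA."""
    for asset_id in ordered_equipment:
        accepted = send_request(asset_id, timeout=ola_seconds)
        if accepted:
            return asset_id   # task assigned; record performance data here
        # OLA breached or request rejected: cancel and try the next candidate
    return None               # escalate: no equipment could serve the request

# Illustrative responder: both AMRs are busy, the forklift accepts.
availability = {"amr-01": False, "amr-02": False, "fork-01": True}
chosen = dispatch_move(["amr-01", "amr-02", "fork-01"],
                       lambda a, timeout: availability[a])
print(chosen)  # fork-01
```

Because the Orchestrator only sees an ordered list of logical candidates, the "AMR first, forklift as backup" policy lives entirely in Work Management's dispatching rules and can be changed without touching this loop.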

Wrapping Up

Hopefully, the above examples provide a reasonable understanding of the nature of the interactions across the core framework for orchestration and the value of having a decoupled and abstracted approach to integration.  Simply said: the orchestrator manages the SLAs and high value outcomes, but it isn’t concerned with the physical equipment used to support its workflows.  That separation of concerns creates an incredible amount of flexibility and resiliency, particularly in the mixed environment that exists across many manufacturing organizations today.

For Additional Information: InPractice: Digital Manufacturing – The Blueprint, InPractice: Digital Manufacturing – The Power of Orchestration, InBrief: Defining Manufacturing Maturity

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 03/31/2026

InPractice: Digital Manufacturing – The Power of Orchestration

What It Is: Simply said, orchestration is the means by which a process is managed across a set of entities in a connected ecosystem

Why It Matters: From a digital manufacturing standpoint, thinking of a facility as a connected ecosystem of digital elements creates a framework by which we can manage, coordinate, analyze, and optimize processes for the purposes of improving operating performance

The above diagram highlights the elements of the blueprint from the previous article on Digital Manufacturing (link below) that will be discussed below

Key Concepts

  • Fundamental to leveraging the blueprint is the concept of treating a facility as a collection of digital entities that are part of a dynamic, integrated ecosystem
  • Because equipment can and likely will vary facility-to-facility (or even within a facility itself), there is a benefit to defining the architecture at a logical and abstract level, and then mapping it onto the individual physical elements present in the production environment.  This allows for significant flexibility in the execution layer, while providing a common, reusable design that can be leveraged across facilities for the purposes of enabling analytics and optimization that otherwise would be much more expensive and complex to deliver at scale
  • For the purposes of illustrating the concept, I will describe the base assumptions related to different components at each layer of the blueprint, using material movement as an example
  • The example is not meant to be exhaustive, but rather to call out some key assumptions that illustrate how the layers of the blueprint are meant to interact and interoperate at the macro-level.  There are definitely more components involved at an implementation-level (e.g., choices of whether to use OPC UA or MQTT for IT/OT integration at a facility)
  • The intention is not to suggest that the model in this example would be implemented in a big bang approach, but rather through a sequence of incremental steps as will be outlined below
  • The basis for how the model is intended to operate is fundamentally rooted in Object-Oriented Analysis and Design (OOA/OOD), so some familiarity with those concepts may be helpful when reviewing the next set of assumptions

The diagram above represents the subset of components that are relevant for this more detailed illustration of how the blueprint is intended to work.  I will break this down into three parts: how the model fits together overall, the role of each of the components that are called out (using the numbers above as a guide), and finally how this could be approached from an incremental standpoint over time.

Overall Concepts

  • The blueprint assumes a base framework for organizing, tracking, and managing work across a set of digital components in a connected factory environment
  • The means for coordinating work is largely about orchestration, which assumes business rules and workflow that govern how processes in the facility work, and for which performance data would be collected for the purposes of performing analytics
  • The more the infrastructure is extended to include additional processes, devices, and capabilities, the more optimization opportunities could be surfaced within and across facilities over time

Key Elements of the Model

  1. MES and WMS – The base assumption is that most or all of the material movement requests will originate in one of these two applications
  2. Movement Equipment – The assumption is that the integration for the various types of material movement equipment will be done so that as much standard data is exchanged as possible to facilitate cross-device analytics and process and performance optimization
  3. End User Access – For the purposes of making a forklift “appear” relatively equivalent to its autonomous counterparts, the assumption is both that the operator will be provided work assignments through a mobile device and that the forklift itself will be digitally “visible” through a set of sensors that are added to provide location and other available telemetry data
  4. Traffic Management and Location Services – It is assumed, given the near real-time nature of the devices, that location tracking and some level of traffic management will occur in the OT environment itself.  There is assumed to be a sufficient level of secure and reliable wireless connectivity available at a facility to enable this capability
  5. Configuration Management – At an overall level, from a modeling standpoint, the goal is to enable the framework by designing each of the individual components as an “actor” (an entity capable of interacting with the rest of the digital ecosystem) with a set of associated capabilities and operating characteristics that ultimately help to track and evaluate its function and performance within a given facility.  This is the core area where the object-oriented analysis and design of various components of the digital facility environment would be defined and built out over time.  Managing by configuration allows for an abstraction of the operating model from the physical equipment and actors at a given location
  6. Actor Management – As individual components are identified and integrated into the digital facility environment, there needs to be a mechanism to identify and assign the logical entities of the design to physical assets, along with tracking basic information about their state (for example a specific type of actor would exist for each derived type of autonomous vehicle, but that would need to be actively mapped to each physical device that is in place, so there is a way to translate the logical to the physical, and track things like battery status, whether the actor is presently addressing a task, awaiting work, completed a task successfully, etc.)
  7. Work Management – With various types of connected components identified and being managed, there needs to be a capability to establish rules for what kind of work should be served by which kinds of devices.  Having a separate component that is aware of both the existing configuration and types of actors available creates a dynamic way to analyze, adjust, and optimize that distribution of work as needed without needing to change any of the integration in place
  8. Orchestration – Having a configuration defined, a way to map logical entities to physical assets, and assign work to one or more components in a digitally connected ecosystem, the orchestration capability can provide a dynamic mechanism to manage material movement across connected actors, while tracking process and task performance
  9. Process Optimization – This capability would specifically look at data collected via the above components and look for optimization opportunities that could be fed back into work management and orchestration
  10. Facility Data & Analytics – This set of components is highlighted simply to note that data would be gathered at the edge in multiple formats to support local tracking and optimization, as well as model integration and execution
  11. Simulation – With data being published to the enterprise from individual facilities, the simulation component would be tasked with modeling various scenarios to identify performance improvement or cost optimization opportunities that could be fed back to the facility level
  12. Cross-Facility Optimization – Given the results of various simulation scenarios, a level of cross-facility optimization should be possible, whether that involves fleet composition, work assignment, or some other dimension surfaced through the simulation process
  13. Enterprise Data & Analytics – Different than the facility data environment, the enterprise solutions would have a broader focus across facilities for model development and process improvement identification
  14. Remote Monitoring – This capability is somewhat separate from those involving orchestration and optimization, but the point is that, once you have a standard for connecting and exposing telemetry and other data associated with digital components, a level of remote monitoring and support is possible that could provide efficiencies at scale
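In the OOA/OOD spirit noted above, the logical-to-physical mapping described under Actor Management can be sketched as a small class hierarchy, with a derived actor type per kind of vehicle and each instance bound to a physical device.  Names, attributes, and states are illustrative assumptions:

```python
class MaterialMoverActor:
    """Base logical actor for any material-movement equipment."""
    def __init__(self, device_id: str):
        self.device_id = device_id   # binding to the physical asset
        self.battery_pct = 100
        self.task = None             # None == awaiting work

    def assign(self, task: str):
        self.task = task

    def complete(self):
        self.task = None

class ForkliftActor(MaterialMoverActor):
    requires_operator = True         # work is routed to a human via device

class AmrActor(MaterialMoverActor):
    requires_operator = False        # work is executed autonomously

fleet = [ForkliftActor("fork-01"), AmrActor("amr-01")]
fleet[1].assign("move-pallet-123")
idle = [a.device_id for a in fleet if a.task is None]
print(idle)  # ['fork-01']
```

The shared base type is what lets Work Management and Orchestration treat a 20-year-old forklift and a modern AMR as interchangeable candidates for the same request.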

Phasing in Capabilities Over Time

  • As was stated in the overview above, the blueprint is meant to serve as a reference architecture to guide implementation efforts over time and promote long-term, sustainable value creation.  As such, it can be implemented in an incremental fashion, as long as certain steps are taken along the way to promote interoperability and extensibility of the design
  • This may not be how the implementation flow works in practice, but it provides one way to conceptualize building out the model in parts that create incremental value over time
  • Step 1: Model the Framework and Endpoints: This is arguably one of the most difficult steps, because it requires an understanding of the longer-term workings of the model overall.  In this case, that means designing to expose and capture telemetry, route, and process performance data across a set of material movement devices
  • Step 2: Standardize Integration: Once the model is developed, integration should be standardized to allow abstraction of the various material movement devices so that, from a process standpoint, the means by which material movement is performed is decoupled from the activity itself.  This provides longer-term flexibility in changing the makeup of the fleet without having to redesign the infrastructure for how material movement is accomplished in a given facility (or for a specific product setup)
  • Step 3: Incrementally Implement and Gather Data: The assumption is that one type of device could be brought online at a time to test and prove out the infrastructure (including route, task completion, and process performance data), then incrementally add more device types until all are digitally integrated and their data collected
  • Step 4: Expose to the Enterprise: Once the digital integration is accomplished within a facility (either at an individual or collective level depending on the business need), it can then be exposed to the enterprise to provide visibility on the behavior of the fleet at each location
  • Step 5: Add Remote Monitoring: Depending on the operating model, once the devices are digitally integrated, it should be possible to add a layer of remote monitoring to support ongoing maintenance and reliability activities across facilities
  • Step 6: Add Orchestration: With multiple types of connected devices, orchestration can be added to provide more of a dynamic capability for assigning directed work, whether that is to forklift operators or autonomous equipment
  • Step 7: Analyze and Optimize at a Facility-Level: Having gathered performance data and established a more dynamic means to assign and manage work, facility-level optimization can be done to improve material handling across all connected devices (individually and collectively)
  • Step 8: Integrate at the Enterprise-Level: With data gathered and analyzed across device types, it can be published to the enterprise data solutions to provide visibility into operating characteristics across individual facilities
  • Step 9: Analyze and Simulate at the Enterprise-Level: Given data gathered across multiple device types and locations, it becomes possible to run simulations to model different scenarios for fleet composition and the relative impact of changing the makeup of devices and assignments by facility
  • Step 10: Optimize Across Facilities: With the output of various simulation scenarios having been generated, a level of cross-facility optimization could be performed to further optimize enterprise-level operating performance

Wrapping Up

Hopefully, providing a more concrete example sheds more light on the power of managing a facility as a digitally connected ecosystem.  The manufacturing environment itself is fundamentally layered and complex, so the elegance and layering of the solution, along with how complexity is insulated and abstracted, are important in building out a resilient infrastructure that can operate and optimize across a variety of production settings that leverage highly varied pieces of equipment.


For Additional Information: In Practice: Digital Manufacturing – The Blueprint

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 03/24/2026

InPractice: Digital Manufacturing – The Blueprint

What It Is: This manufacturing blueprint is intended to serve as the overarching conceptual design for a future state digital factory environment.  It is meant to help inform and guide design decisions, adjusting as new capabilities are available, and allow for incremental evolution over time that conforms to an overall, integrated design.

Why It Matters: Designing a future state manufacturing environment is a complex activity for multiple reasons: the layering of technical and non-technical components, the interaction of human and machine elements, layers of processes (some or all of which may not be standardized), safety and failure points in the production process, operating conditions and constraints, limitations of technology infrastructure at physical locations, and so on.  Having a blueprint creates a “north star” concept that can inform individual implementation efforts in the interest of guiding investments, capturing the cumulative value of those efforts, and ultimately optimizing worker productivity, improving safety, reducing unplanned events, limiting waste, improving quality, and optimizing production over time in an intentional and disciplined way, driven by a combination of architecture and engineering.

Summary View – Concepts

  • Designing a connected ecosystem requires a broader view of the model itself, with a mindset geared towards reality, not an ivory tower that doesn’t exist and may never exist
  • That being said, the “building blocks” need to be identified so the future state model is architected and elements of the design can be leveraged in a way that components will “fit” and interoperate seamlessly, even if they are built through incremental efforts over time (if at all)
  • The above diagram is an evolution of the Purdue (ISA-95 Standard) model, meant to show the various connected groupings of elements that will comprise the future manufacturing environment, from the IT/OT environments within a facility to the enterprise elements and those that serve digital workers, customers, suppliers, and partners
  • The facility OT environment mimics Levels 0-2 of the Purdue model, with a separation of concerns between equipment (digital and non-digital), industrial control systems, the applications and data solutions, and ultimately the visibility layer that interacts directly with, and provides visibility into, that equipment.  It is a foregone conclusion that for many manufacturers where more than one facility is in operation, the machinery and composition of this environment will vary, especially as the footprint increases
  • The facility IT environment represents the applications and data solutions, some connected, some standalone, that operate at a facility at a logical level (i.e., there may be cloud-based applications utilized, but that are facility-focused), running on edge computing resources.  The separation of concerns between the application and data layers reflects the conceptual architecture I discuss further in my Intelligent Enterprise 2.0 article (linked below)
  • The enterprise level supports the applications and data solutions that provide capabilities that span facilities, serve broader needs (e.g., ERP, Finance, Procurement), and enable computing capabilities that would not make economic sense at each facility (e.g., model development)
  • All of these environments are supported by a common set of infrastructure elements that are secured and enabled to provide integrated monitoring and auditing, along with standard integration methods, between the IT/OT environments, the facility and cloud, and these internal capabilities and end users who need to access and consume the services
  • The end user layer is meant to represent and include all the stakeholders, internal and external to the organization (including contractors at a facility, as an example) who need secure, role-based access to the technologies across all three layers of the environment

Blueprint View – Concepts

The next level of elaboration of the model above includes a representative set of solution components in each category of the larger model.  Given the differences across types of manufacturing (discrete, process, etc.) and the wide variety of equipment that can come into play, the first step in applying this concept in a “real world” situation would be to tailor and adjust it to the right components that can and would apply (present or future state) to a facility or set of facilities.

Facility – OT Environment

  • As was mentioned in the introduction above, the variability in equipment itself can be very significant, so there is a representative set included for the purpose of illustration, but it would need to be modified as appropriate to a specific organization
  • The primary concepts are the sub-groupings within the equipment category, from entirely standalone, non-digital core assets to digitally enabled equipment, sensors, and robotics that can work in concert with those items
  • The supporting layers, from industrial controls to applications and interfaces, are largely consistent with the Purdue model, with additional elements for some more advanced capabilities that come into play as we evolve the broader model
  • The overall assumption with the OT environment remains that it is focused on execution of the manufacturing process and the real-time data and decision-making that enable workers on the shop floor

Facility – IT Environment

  • The facility IT environment brings in the data solutions and applications that provide a means to analyze, manage, and orchestrate the underlying shop floor activities
  • Data solutions need to be able to manage multiple types of data, from time series data coming via sensors and equipment to graph-oriented representation of equipment hierarchies and video files from computer vision systems
  • The application layer has traditional applications for manufacturing execution (MES), warehouse management (WMS), and so on, but with some additional capabilities that I will highlight in the next section of the article
  • Again, not all applications identified in the “facility” layer may necessarily be running in the facility environment on edge computing resources, but logically, the assumption is that some or all of those identified are likely being utilized
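To make the data-solution point concrete, here is a minimal sketch in Python of a facility data layer that keeps each of the three data shapes mentioned above (time series, equipment-hierarchy graph, and video references) in a fitting structure.  All class, field, and identifier names are illustrative assumptions, not part of the blueprint itself.

```python
from dataclasses import dataclass, field

# Illustrative only: modeling the three data shapes a facility data layer
# must manage.  All names here are assumptions for the sketch.

@dataclass
class TimeSeriesPoint:
    sensor_id: str
    timestamp: float          # epoch seconds from a sensor or machine
    value: float

@dataclass
class EquipmentNode:
    asset_id: str
    children: list = field(default_factory=list)  # graph-oriented hierarchy

    def descendants(self):
        """Walk the equipment hierarchy depth-first."""
        for child in self.children:
            yield child
            yield from child.descendants()

@dataclass
class VideoClip:
    camera_id: str
    uri: str                  # pointer to the stored file, not the bytes

class FacilityDataLayer:
    """Minimal store keeping each data shape in a fitting structure."""
    def __init__(self):
        self.series = {}      # sensor_id -> ordered list of points
        self.clips = []       # references to computer vision output

    def ingest_point(self, p: TimeSeriesPoint):
        self.series.setdefault(p.sensor_id, []).append(p)

    def latest(self, sensor_id: str):
        pts = self.series.get(sensor_id, [])
        return pts[-1].value if pts else None

# Usage with hypothetical identifiers
layer = FacilityDataLayer()
layer.ingest_point(TimeSeriesPoint("temp-01", 1700000000.0, 71.5))
layer.ingest_point(TimeSeriesPoint("temp-01", 1700000060.0, 72.1))
line = EquipmentNode("line-1", [EquipmentNode("press-1"), EquipmentNode("robot-1")])
print(layer.latest("temp-01"))                    # 72.1
print([n.asset_id for n in line.descendants()])   # ['press-1', 'robot-1']
```

A real implementation would sit on purpose-built stores (a time series database, a graph database, and object storage for video); the shape of the model is the point here.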

Enterprise Environment

  • The enterprise environment can and will include more applications than those listed; the ones shown are intended to reflect those that connect into the manufacturing ecosystem in one way or another
  • The data solutions would process larger volumes of data for purposes like model development (via a data lake), broader cross-facility analytics, simulation, and so on
  • Part of the role of the enterprise applications is to connect the activity and execution of any individual facility to its associated supply chain, other facilities, and so on

Supporting Layers

  • The Infrastructure layer, as represented, would provide connectivity and capabilities, from the individual facility to the enterprise, that create a secure, reliable, and performant environment in which other capabilities can be delivered
  • The Security layer would provide capabilities to enable end user access control and identity management, vulnerability management, and zero trust to the appropriate level from the OT layer to the enterprise
  • The Intelligent Monitoring capability is aligned to my recent article on this topic (link below)

End User Access

  • The end user layer is elaborated to include a representative set of end consumers of technology capabilities, from the shop floor worker and supervisors to customers, partners, and suppliers
  • The assumption is that technology capabilities would be delivered through a defined set of mechanisms and devices for which standards can be developed to promote a consistent experience, regardless of delivery channel

From here, I’ll highlight some key components that are part of the evolution of the model itself.

Looking Forward – Concepts

The above diagram highlights three dimensions that are part of the evolutionary story of the digital facility, moving from the physical to the digital, and from an “engineered” towards a more “architected” environment.  I will write additional articles to elaborate the concept of the execution model and each of these areas in more detail, so this is meant to provide a summary of the core concepts only.

Digital Equipment

  • The general premise is to evolve from a set of largely disconnected and non-instrumented core assets in a facility to a digital world of connected, intelligent assets that support and enable optimization and reliability
  • Many organizations have digital equipment and sensors of various types today.  The key aspect of the blueprint is how they are defined, modeled, and integrated into the larger ecosystem to enable the other capabilities described below
  • Given that reality is and always will be a blend of the new and the existing, I am planning a specific article on how I think about blending the two from an architecture standpoint, in the interest of maximizing the value of assets across facilities in a heterogeneous environment

Artificial Intelligence

  • While the term AI is thrown around more now than ever, identifying the touchpoints where it creates value in the manufacturing environment requires discipline and thought to obtain the most value from the investment
  • The core concept in relation to AI (beyond basic AI/ML capabilities) is that it plays five fundamental roles in the future manufacturing environment: optimizing the performance of equipment (via asset health); optimizing the performance of a facility (via process optimization); optimizing performance across a set of facilities and a broader application ecosystem involving manufacturing processes; optimizing product and facility investments (via digital twin and simulation capabilities); and optimizing digital worker safety, efficiency, and efficacy (via agents and orchestration capabilities)
  • Individual components related to the delivery of these capabilities are highlighted in the model above and will be explored in a future article on the future of distributed intelligence in manufacturing

Orchestration

  • The final component in a digital manufacturing future is the most critical: creating the capability to digitally orchestrate activity between human and machine elements, within and across facilities, in a way that is monitored, analyzed, and optimized over time
  • Various components identified on the diagram will be part of a third future article on the overall model for execution and orchestration.  The key element is that there will be facility and enterprise elements to how value is created over time, with the ability to scale these capabilities as new technologies and components become available within the broader, connected digital ecosystem

 

Wrapping Up

In drawing out this blueprint, the overarching idea is that the future world of manufacturing needs to be more connected, insight-driven, and orchestrated in the interest of optimizing performance, safety, quality, and efficiency.  I hope the concepts were thought-provoking.  Feedback is always welcome.

 

For Additional Information: InBrief: Defining Manufacturing Maturity, InBrief: Digital Manufacturing, InBrief: The Intelligent Enterprise 2.0, InBrief: Intelligent Monitoring

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 03/23/2026

InBrief: Defining Manufacturing Maturity

What It Is: Many models have been developed over the years to describe the factory environment and its structure and maturity, for various reasons, from the Purdue model to Industry 4.0.  The purpose of this summary article is to share a concept on how to unpack and discuss the evolution of a facility across multiple dimensions.  It is not meant to be prescriptive or “correct” from a content standpoint.  It is intended to be used as a tool to inform a structured conversation

Why It Matters: As we continue to add elements to the manufacturing environment, whether it is additional monitoring through sensors, automation through robotics, or analytics leveraging more powerful AI models, our ability to be disciplined and intentional in how we not only define our desired/”ideal” future state but also measure and map out our transition is critical in reaching those desired outcomes efficiently and in a cost-effective way

Key Concepts

  • Given the complexity and layers of process and automation involved, the “optimized” digital factory of the future needs to be architected, not just engineered
  • The diagram above is a representation meant to enable a structured discussion on the various operating dimensions that are important in the environment, some of which may not apply at all, depending on the nature of the manufacturing process and goals of the organization
  • The goal of the model is to break down the people, process, equipment, and technology dimensions (similar to the SIRI model from 2017) into relevant components and identify what a reasonable maturity curve looks like independent of other aspects of a facility.  That maturity process could vary significantly by organization (or by product/facility), but the overall goal would be to identify meaningful states along a continuum from unstructured and non-standard ways of working to those that are more standardized, supported through automation, and optimized to the extent possible
  • Through discussion of the various dimensions, a common vision and understanding of “what good looks like” can be reached that is much more actionable, and critical dependencies can be identified to help inform the prioritization of efforts in a modernization journey (e.g., the criticality of infrastructure to enabling digital capabilities)
  • With a general framework established, one or more “transition states” could be identified to create meaningful tollgates along an evolutionary journey towards whatever the desired state is, assuming more than one step may be required over a period of time
  • Once the framework is defined, it could be used to establish a baseline of where a given facility is along the maturity curve, the “ideal” state, and any transition states and efforts that would be part of an evolutionary journey within an overall, integrated transformation strategy
  • Operating benchmarks could also be established for each of the transition states to evaluate performance across a set of facilities to look for additional opportunities for improvement
  • In short, the transformation journey could be described as identifying the maturity curve and desired operating states, mapping the current state of facilities against the framework, then developing a roadmap for modernizing each facility (where required) as part of an overall transformation effort  
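As a simple illustration of baselining facilities against such a framework, the sketch below scores a hypothetical facility’s current state against a target state and surfaces the largest gaps first.  The dimension names and maturity levels are assumptions for illustration only; a real framework would use the dimensions defined through the structured discussion described above.

```python
# Illustrative only: baselining a facility against a maturity framework and
# surfacing the largest gaps.  Dimensions and levels are hypothetical.

MATURITY_LEVELS = ["unstructured", "standardized", "automated", "optimized"]

def level_index(level: str) -> int:
    return MATURITY_LEVELS.index(level)

def gap_report(current: dict, target: dict) -> list:
    """Return (dimension, levels-to-close) pairs, largest gap first."""
    gaps = [(dim, level_index(target[dim]) - level_index(current[dim]))
            for dim in current]
    return sorted(gaps, key=lambda g: -g[1])

# A hypothetical facility baseline versus its "ideal" state
facility = {"process": "unstructured", "equipment": "standardized",
            "technology": "standardized", "people": "unstructured"}
ideal = {"process": "automated", "equipment": "optimized",
         "technology": "automated", "people": "standardized"}

for dim, gap in gap_report(facility, ideal):
    print(f"{dim}: {gap} level(s) to close")
```

The same report, run across a set of facilities, would feed the roadmap and benchmarking steps described above.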

For Additional Information: InBrief: Digital Manufacturing

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 03/16/2026

InBrief: Intelligent Monitoring

What It Is: There is an opportunity to rethink how we approach monitoring of operations, to better understand dependencies across components, proactively surface risks to avoid issues, and more effectively manage incidents when they occur

Why It Matters: The complexity of operations, particularly in Manufacturing, has increased substantially, both due to the introduction of digital capabilities and automation at facilities, as well as movement to a hybrid cloud environment.  That complexity makes the effective identification of issues time consuming, which can cause significant and costly disruption to production and operating efficiency

Key Concepts

  • Define a conceptual framework for how your environment is organized first, along with what elements are important to monitor in an integrated fashion
  • Review existing toolsets in the market (cloud, APM, infrastructure monitoring) to determine whether they provide adequate capability to identify and manage issues effectively.  If so, there may be no business reason to look any further
  • If the complexity of the environment is such that there are a considerable number of critical components involved, the tools available don’t provide sufficient capability, and the cost of extended outages or inadequate operating performance is material, then it may make sense to explore a more integrated and intelligent solution
  • It is important to note that this is not an either/or proposition.  Even when pursuing a more elegant and integrated solution, for certain roles (e.g., security monitoring), existing tools should likely be leveraged in parallel with more advanced ones, so that capabilities already available in tools that may be in place need not be replicated
  • The goal is to aggregate data from across existing monitoring platforms (or from source components themselves), run analytics to determine dependencies across operating components, and then apply that understanding both to monitor and triage issues more effectively at the time of an incident and to streamline notification and problem resolution
  • Once the infrastructure is established to perform the base capabilities described herein, additional components and data sets could be added over time to enrich the capability overall

Key Dimensions of the Solution

  • Enterprise Repository – Aggregated availability, connectivity, performance, and security data
  • Intelligent Monitoring – An AI-informed process that identifies risks and accelerates resolution
  • Performance Analytics – AI capability that identifies dependencies and evaluates benchmarks
  • Integrated Alerting / Notifications – Process to reduce duplication and manage communication
  • Orchestration – Workflow rules for incident management depending on the existing conditions
  • Command Center – A holistic view of operations across connected solution components
  • End-User View – A subset of enterprise data, specific to the end user relevant to their need/role
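To illustrate the dependency-driven triage idea described above, here is a minimal sketch that collapses a cascade of alerts down to a probable root cause using known (or learned) dependencies.  The component names and the dependency map are hypothetical.

```python
# Illustrative only: collapsing a cascade of alerts to a probable root cause
# using a dependency map.  Component names and edges are hypothetical.

DEPENDS_ON = {                # component -> components it relies on
    "mes-app": ["facility-db", "edge-cluster"],
    "facility-db": ["edge-cluster"],
    "dashboard": ["mes-app"],
}

def probable_roots(alerting: set) -> set:
    """Keep only alerting components whose dependencies are all healthy."""
    return {c for c in alerting
            if not any(dep in alerting for dep in DEPENDS_ON.get(c, []))}

# An edge-cluster failure cascades; triage collapses four alerts to one root
alerts = {"dashboard", "mes-app", "facility-db", "edge-cluster"}
print(probable_roots(alerts))   # {'edge-cluster'}
```

In practice the dependency map would be derived by the performance analytics capability rather than hand-coded, but the triage logic is the same in spirit.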

 

For Additional Information: InBrief: The Intelligent Enterprise 2.0, InBrief: Digital Manufacturing, InBrief: IT Operations

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

 

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 03/03/2026

InBrief: Managing Cloud Migration

What It Is: Managing internally hosted infrastructure can be expensive and burdensome, which is one of the reasons that migrating workloads to the cloud has seen a significant surge over the last decade or so

Why It Matters: While the financial argument of shifting from “capex to opex” sounds favorable at first, a significant number of costs will be introduced when pivoting to a hybrid or cloud-native environment.  A structured approach provides a more predictable outcome, while also unlocking new cloud capabilities

Key Concepts

  • Cloud strategy isn’t “one size fits all” – A thoughtful approach should be taken to evaluate where cloud migration creates value or competitive advantage versus going “all in”
  • It’s also never just a “Lift and Shift” – The minute workloads move outside an internally hosted environment, the technical complexity of the footprint and the cyber risk materially increase, and additional consideration needs to be given to managing that exposure
  • Plan to govern from Day One – Moving quickly to the cloud can lead to immediate issues of orphaned assets and spiraling, unexpected costs.  Having a migration plan is crucial
  • Establish roles and accountability – While the concept of enabling a broader set of delivery stakeholders to provision assets via a console is attractive and responsive at one level, it also introduces considerable risk of things being handled improperly.  Thinking through the operating model and the expectations of various roles in relation to provisioning, managing, and utilizing cloud-based assets is critical

Key Dimensions

Educate

  • With any cloud migration, it is important to have the necessary expertise and educate anyone involved in the provisioning and utilization of cloud-based assets to ensure they do so properly

Plan

  • Like any other component of an enterprise footprint, cloud strategy should be thoughtful and integrated with the overall technology strategy and not implemented on an ad-hoc basis
  • Financial planning should allow for contingency cost until a disciplined FinOps process is in place

Secure

  • Cloud migration introduces considerable complexity to vulnerability management, securing transactions and data, managing identities, etc. Additional tooling and monitoring capabilities will be needed for environments, as well as changes to DevSecOps processes across teams

Tag

  • An effective tagging strategy (ideally automated) is crucial to establish ownership, provide transparency into asset utilization, manage cost allocation, and maintain the footprint over time
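As a small illustration of automated tag enforcement, the sketch below checks a proposed asset against a required-tag policy before provisioning would proceed.  The tag keys shown are assumptions; an actual policy would come from the organization’s standards.

```python
# Illustrative only: enforcing a required-tag policy before provisioning.
# The tag keys are assumptions; a real policy follows your standards.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(asset_tags: dict) -> set:
    """Return required tag keys that are absent or blank on an asset."""
    present = {k for k, v in asset_tags.items() if str(v).strip()}
    return REQUIRED_TAGS - present

# A proposed asset that would fail the policy check
proposed = {"owner": "plant-it", "environment": "prod", "cost-center": ""}
print(sorted(missing_tags(proposed)))   # ['cost-center']
```

The same check, wired into a provisioning pipeline, is what makes the cost-allocation and governance steps below workable at scale.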

Monitor

  • It is critical to establish a FinOps process for monitoring and managing spend to avoid waste
  • Cloud migration adds significant complexity to integrated monitoring across hosted, cloud, and edge-based computing environments.  Incident management processes will need to be updated

Govern

  • Establish ownership for monitoring and governance of cloud-based assets to promote consistent use and standards across an organization, even if assets are provisioned in a federated manner

 

For Additional Information: The Future of IT, InBrief: IT Value/Cost Optimization, The Intelligent Enterprise 2.0

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

 

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 02/16/2026

InBrief: The Role of Architecture

What It Is: Architecture provides the structure, standards, and framework for how technology strategy should be manifested and governed, including its alignment to business strategy, capabilities, and priorities.  It should ideally be aligned at every level, from the enterprise to individual delivery projects

Why It Matters: Technology is a significant enabler and competitive differentiator in most organizations.  Architecture provides a mental model and structure to ensure that technology delivery is aligned to overall strategies that create value.  A lack of architecture discipline is like building a house without a blueprint… it will cost more to build, will not be structurally sound, and will be expensive to maintain

Key Dimensions

  • Operating Model – How you organize around and enable the capability
  • Enabling Innovation – How you allow for rapid experimentation and introduction of capabilities
  • Accelerating Outcomes – How you promote speed-to-market through structured delivery
  • Optimizing Value/Cost – How you manage complexity, minimize waste, and modernize
  • Inspiring Practitioners – How you identify skills, motivate, enable, and retain a diverse workforce
  • Performing in Production – How you promote ongoing reliability, performance, and security

Operating Model

  • Design the model to provide both enterprise oversight and integrated support with delivery
  • Ensure there is two-way collaboration, so delivery informs strategy and standards and vice versa
  • Foster a “healthy” tension between doing things “rapidly” and doing things “right”

Innovate

  • Identify and thoughtfully evaluate emerging technologies that can provide new capabilities that promote competitive advantage
  • Engage in experimentation to ensure new capabilities can be productionized and scaled

Accelerate

  • Develop standards, promote reuse, avoid silos, and reduce complexity to enable rapid delivery
  • Ensure governance processes promote engagement while not gating or limiting delivery efficacy

Optimize

  • Identify approaches to enable simplification and modernization to promote cost efficiency
  • Support benchmarking where appropriate to ensure cost and quality of service are competitive

Inspire

  • Inform talent strategy, ensuring the right skills are in place to support ongoing innovation and modernization
  • Provide growth paths to enable fair and thoughtful movement across roles in the organization

Perform

  • Integrate cyber security throughout delivery processes and ensure integrated monitoring for reliability and performance in production, across hosted and cloud-based environments

 

For Additional Information: Enterprise Architecture in an Adaptive World, InBrief: The Intelligent Enterprise 2.0, The Future of IT

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

 

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 02/12/2026

InBrief: Transformation

What It Is: Transformation is the process of reshaping a way of working in the interest of achieving a significant and measurable impact in business results

Why It Matters: When faced with changing market conditions, competitive threats, technology advancements, etc., it is critical for organizations to reinvent themselves.  This typically involves changes to process, organization, and technology and doing so in a thoughtful manner is critically important

Critical Dimensions

  • A Compelling Vision – The desired future state should be clear enough to create emotional investment and the commitment required to overcome the inertia that naturally resists change
  • Clear Business Outcomes – There should be tangible goals established (along with key financial and operating metrics) to inform prioritization and guide critical decisions throughout execution
  • Courageous, Committed Leadership – Transformation efforts are complex and require resilience, decisiveness, persistence, and determination to work through adversity along the way
  • A Supportive Culture – The environment and culture within an organization will have an impact on the degree of change that is possible to achieve overall and the rate at which it can be done
  • A Thoughtful Approach – It is tempting in larger programs to initiate too much work, too quickly, causing significant disruption and suboptimal results (or even causing programs to fail overall).  Given that many learnings tend to occur in terms of processes, standards, and governance in initial delivery efforts, it is often more effective to incubate large programs with a smaller, expert team and extend and scale afterward to build momentum and reduce risk
  • Results-Orientation – Another problem in large transformation efforts is taking on too much scope too early, which can substantially defer benefit realization, compared with a consistent, incremental delivery environment that is both predictable and repeatable over time
  • Adaptability and Agility – Transformation efforts are messy, involve complexity, and typically run into a host of issues throughout execution.  It is critical to maintain a high level of transparency into ongoing work, have an active and engaged governance process, and to make decisions efficiently and thoughtfully as they are needed during execution.  Roadmaps produced at the start of large programs rarely remain unchanged for very long, and it’s important that every inflection point is taken as an opportunity to learn, improve, and respond versus react
  • Patience and Discipline – Sustainable change takes time.  While it’s possible to force a level of change in the short-term and achieve incremental benefits, systemic and holistic changes to the way an organization operates take time.  Managing the process in a disciplined way both helps achieve overall results faster as well as mitigate risk

 

For Additional Information: The Seeds of Transformation, The Criticality of Culture, On “Delivering at Speed”

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

 

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 01/28/2026

InBrief: Consulting vs GenAI

What It Is: Companies lean on consulting organizations to bring capabilities to bear in the interest of solving problems.  GenAI has caused disruption in how organizations are thinking about that relationship

Why It Matters: Organizations want to spend money effectively in the interest of fostering innovation, creating competitive advantage, and generating value.  The relatively low cost and speed with which one can write prompts and obtain rapid, well-formed responses challenges many aspects of the time and cost involved in traditional consulting efforts, which is the focus of this article

Where GenAI is Helpful

  • It’s important to acknowledge that leveraging AI can be a relatively easy and highly effective way to quickly surface content that is informative and pertinent to a given subject or request

Some Additional Considerations, however… in favor of Consultants

  • If every complex business problem were as simple as reading a book, that book would’ve been written (many have been, in fact), everyone would have read it, and peak effectiveness would already have been achieved.  Except that’s not the case.  We don’t operate in a perfect world of theoretical norms.  Real life involves context and specifics, including the dynamics, idiosyncrasies, and people that are part of organizations.  Part of the role consultants play is to translate and apply conceptual ideals to real-world environments, and that’s one of the ways they create value for their clients
  • Part of what you buy with consulting is also institutional knowledge that comes from experience working in a given industry, access to other organizations and the ways they operate, etc., that is not “publicly available”.  With the advent of AI, there will be a rise in proprietary data providers that cover gaps in what is available in the public domain, but if you are contracting to access those additional insights, you’re essentially still paying someone for outside perspective
  • Business challenges are complex, have multiple dimensions, and your priorities play a significant role in what you ultimately achieve.  Part of the value consultants provide is not just a laundry list of things to do or concepts to consider, but a thoughtful list of priorities and a way to navigate complex situations with your organization’s realities and your environment in mind
  • Another reason organizations look to consultants is to help foster and promote innovation.  Innovation, by definition, is not “best practice”.  It exists in the white space between what is and what could be, and that is not something that can be harvested from a knowledge base, no matter how quickly you can access it, or how well-formed it appears on a screen.  It is a creative act in itself, and having someone help facilitate that process can create substantial value
  • To the extent that a consultancy is providing experience on something that is outside of an organization’s core competency, understanding the quality of what comes from an AI-only solution could be difficult, if not impossible.  If that inquiry is critical to your business, the next question is whether you want to place a bet without understanding whether the answer is based in fact or potential “hallucinations” and the degree to which it is comprehensive at all
  • While it is not always the nature of consulting, having an unbiased perspective can be extremely valuable.  The result of an AI inquiry can be shaped by the nature of the prompt that was written and what the tool knows about the requestor themselves.  Strategy consultants are meant to eliminate a level of confirmation bias that could limit the potential of what is possible
  • Finally, while we may trust technology to varying degrees in everyday life, there is something to be said about the level of trust it takes to rely on it without objective and authoritative support.  Prompting can lead to continually refined results, but there is also value in having conversation, a true understanding of needs, and the comfort that comes with actual human interaction and discourse, and a solution that is right for “you”.

 

For Additional Information: Courageous Leadership, Relentless Innovation, and Pushing the Envelope

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

 

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 01/22/2026

InBrief: CIO and CTO Roles

What It Is: Technology organizations are sometimes led through a combination of CIO and CTO roles, working towards a shared vision, each having a clear focus in the interest of promoting IT excellence

Why It Matters: Technology continues to advance at a rate, particularly with the introduction of AI, that exceeds many organizations’ ability to respond effectively.  Having the right leadership in place to develop strategy, consider longer-term implications of decisions made in ongoing delivery, and govern execution can be key to avoiding technical debt, while promoting delivery excellence over time

Five Types of CTOs

  • Technology Strategist – This is the most common modern orientation, focused on enablement, simplification, optimization, and capability delivery
  • Mistitled CIO – This occurs when the CTO actually has all the typical CIO responsibilities and they are fulfilling that role in every way, leading IT, setting direction, etc. without the CIO title
  • Futurist – This occurs where the CTO plays a more directional, but not action-oriented, role, focused on white/position papers and ideation
  • Infrastructure Lead – This is the historical role of a CTO, focused more on hosting, networking, reliability, performance, and operations with the CIO covering applications and data
  • Lead Designer / Senior Developer – This is generally the case in start-up/smaller-scale product environments, where the CTO leads the product design and helps code the solution

High-Level Differences

  • The CIO focuses on the “what”: obtaining business alignment, identifying desired business and technology capabilities, focusing on the customer, and providing vision and direction
  • The CTO focuses on the “how”: understanding desired business capabilities, determining how to provide technical capabilities, and delivering on commitments in partnership with the broader team
  • Both roles participate in governance: the CIO provides and aligns business priorities, while the CTO provides and aligns technical priorities in support of the CIO

When It Makes Sense to Have a CTO in Addition to a CIO

  • Working with business partners demands enough of the CIO’s time that additional support is needed to define and evolve the technology strategy and work actively with delivery leaders
  • There is considerable complexity in the technology footprint, a high degree of transformation, or substantial integration required across ongoing delivery where having the CIO focused in the weeds of execution could result in underserving the business team and executive leadership
  • There is a need to move multiple levers (cloud platform migration, modernization, core platform implementation, AI integration, etc.), such that dedicated focus and oversight are needed to work through the risks and impacts of various strategies and define the best technology solution
  • When the scale of the organization in people, internal and external (including customers, suppliers, and partners), exceeds one person’s ability to manage relationships effectively
  • Where the CIO has a business background and it is helpful to supplement their capabilities with a more technology-focused leader overall

Benefits of Having a “Healthy Tension”

  • There can be a natural tension created when there is a separation of roles, because a CIO is generally incented to deliver new business capabilities at speed, while the CTO should be incented to do things “right”: minimizing long-term cost of ownership, managing technical debt, promoting standards and governance, and improving predictability of the delivery environment
  • Business delivery will generally be top priority, but having a CTO can mitigate the impact of tradeoffs made during delivery, particularly in large programs, where consequences are higher

The Downside

  • There is incremental cost associated with adding leadership roles, there can be confusion across the broader IT leadership if roles and responsibilities aren’t clear, and a strong CIO/CTO partnership is important to making the roles effective in practice
  • That being said, the value to any organization with a relatively large technology footprint would likely far exceed the cost of having a CTO focused on managing complexity and optimizing cost

The Difference Between CTO and Chief Architect

  • The CTO is the keeper of the overall technology strategy (applications, data, infrastructure, and security integration with all of the above, working with the CISO), inclusive of the delivery environment
  • A “Chief Architect” tends to be more narrowly focused on application and data architecture strategy, but with awareness of how to incorporate cloud and platform strategy as well
  • A Chief Architect could be a role reporting to the CTO, depending on the scale of the organization, focused more on defining, modernizing, or rationalizing the enterprise ecosystem of connected components, acting more like a designer than a strategist, where a CTO without this role would generally do both at the enterprise level

 

For Additional Information: InBrief: IT Excellence, Fast and Cheap, Isn’t Good

 

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 01/17/2026