In my recent article on Exploring Artificial Intelligence, I covered several dimensions of how I think about the direction of AI, including how models will evolve from general-purpose and broad-based to be more value-focused and end consumer-specific as the above diagram is intended to illustrate.
The purpose of this article is to dive a little deeper into a mental model for how I believe the technology could become more relevant and valuable in an end-user (or end consumer) specific context.
Before that, a few assertions related to the technology and end user application of the technology:
The more we can passively collect data in the interest of simplifying end-user tasks and informing models, the better. People can be both inconsistent and unreliable in how they capture data. In reality, our cell phones collect massive amounts of data on an ongoing basis that is used to drive targeted advertising and other capabilities without our involvement. In a business context, however, doing so can be met with significant privacy and other concerns, which is a shame: while data is being collected on our devices regardless, we aren't able to benefit from it in the context of doing our work.
Moving from a broad- or persona-based means of delivering technology capabilities to a consumer-specific approach is a potentially significant advancement in enabling productivity and effectiveness. This would be difficult or impossible to achieve without leveraging an adaptive approach that synthesizes various technologies (personalization, customization, dynamic code generation, role-based access control, AI/ML models, LLMs, content management, and so forth) to create a more cohesive and personalized user experience
While I am largely focusing on end-user application of the technology, I would argue that the same concepts and approach could be leveraged for the next generation of intelligent devices and digital equipment, such as robotics in factory automation scenarios
To make the technology both performant and relevant, part of the design challenge is to continually reduce and refine the level of "model" information that is needed at the next layer of processing so as not to overload the end computing device (presumably a cell phone or tablet) with a volume of data that isn't required to enable effective action on behalf of the data consumer.
The rest of this article will focus on providing a mental model for how to think about the relationship across the various kinds of models that may make up the future state of AI.
Starting with a “Real World” example
Having spent a good portion of my time off traveling across the U.S., I was reminded more than once of the trust I place in Google Maps (despite keeping a printed road atlas in my car), particularly when driving an "open range" gravel road with cattle roaming about in northwest Nebraska on my way to South Dakota. In many ways, navigation software represents a good starting point for where I believe intelligent applications will eventually go in the business environment.
Google Maps is useful as a tool because it synthesizes the data it has on roads and navigation options with specific information like my chosen destination, location, speed traps, delays, and accident information specific to my potential routes, allowing for a level of customization if I prefer to take routes that avoid tolls and so on. From an end-user perspective, it provides a next recommended action, remaining contextually relevant to where I am and what I need to do, along with how long until that action needs to be taken, the distance remaining, and the time I should arrive at my final destination.
In a connected setting, navigation software pulls pieces of its overall model and applies data on where I am and where I’m going, to (ideally) help me get where I’m going as efficiently as possible. The application is useful because it is specific to me, to my destination, and to my preferred route, and is different than what would be delivered to a car immediately behind me, despite leveraging the same application and infrastructure. This is the direction I believe we need to go with intelligent applications, to drive individual productivity and effectiveness.
Introducing the “Tree of Knowledge” concept
The Overall Model
The visual above is meant to represent the relationship of general-purpose and foundational models to what is ultimately delivered to an end-user (or piece of digital equipment) in a distributed fashion.
Conceptually, I think of the relationship across data sets as if it were a tree.
The general-purpose model (e.g., LLM) provides the trunk that establishes a foundation for downstream analytics
Domain-specific models (e.g., RAG) act as the branches that rely on the base model (i.e., the trunk) to provide process- or function-specific capabilities that can span a number of end-user applications, but have specific, targeted outcomes in mind
A “micro”-model is created when specific branches of the tree are deployed to an end-user based on their profile. This represents the subset that is relevant to that data consumer given their role, permissions, experience level, etc.
The data available at the end point (e.g., mobile device) then provides the leaves that populate the branches of the “micro”-models that have been deployed to create an adaptive model used to inform the end user and drive meaningful and productive action.
The adaptive model should also take into account user preferences (via customization options) and personalization to tune their experience as closely as possible to what they need and how they work.
In this way, the progression of models moves from general to very specific, end-user focused solutions that are contextualized with real-time data much the same as the navigation example above.
It is also worth noting that, in addition to delivering these capabilities, the mobile device (or endpoint) may collect and send data back to further inform and train the knowledge models by domain (e.g., process performance data) and potentially develop additional branches based on gaps that may surface in execution.
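The layering above can be sketched in code. This is a purely illustrative sketch of the tree concept, not a real API: all class names, fields, and role values below are hypothetical, invented only to show how each layer narrows the one before it (trunk to branches to a per-user "micro"-model populated with device-local "leaves").

```python
# Hypothetical sketch of the "Tree of Knowledge" layering described above.
# Every name here is illustrative, not a reference to any real framework.
from dataclasses import dataclass, field

@dataclass
class Branch:
    """A domain-specific model (e.g., a RAG pipeline over one knowledge domain)."""
    domain: str
    required_roles: set          # which user roles are entitled to this branch

@dataclass
class UserProfile:
    user_id: str
    roles: set
    preferences: dict = field(default_factory=dict)   # customization options

@dataclass
class MicroModel:
    """The subset of branches deployed to one user's device."""
    user: UserProfile
    branches: list
    leaves: dict = field(default_factory=dict)        # real-time endpoint data

def deploy_micro_model(trunk_branches, user):
    """Select only the branches the user's role and permissions entitle them to."""
    relevant = [b for b in trunk_branches if b.required_roles & user.roles]
    return MicroModel(user=user, branches=relevant)

def update_leaves(model, endpoint_data):
    """Attach fresh device-local data (the 'leaves') to the deployed branches."""
    model.leaves.update(endpoint_data)

# Example: a maintenance technician receives only the branch for their role.
branches = [
    Branch("equipment-maintenance", {"technician"}),
    Branch("financial-reporting", {"controller"}),
]
tech = UserProfile("u1", {"technician"})
micro = deploy_micro_model(branches, tech)
update_leaves(micro, {"location": "plant-3", "shift": "night"})
```

The point of the sketch is the direction of flow: the trunk and branches are built centrally, the micro-model is the role- and permission-filtered subset deployed outward, and the leaves are only ever populated at the endpoint.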
Applying the Model
Having set context on the overall approach, there are some notable ways these capabilities could create a different experience and level of productivity than today, namely:
Rather than delivering content and transactional capabilities based on an end-user's role and persona(s), those capabilities would be deployed to a user's device (the branches of the "micro"-model), but synthesized with other information (the "leaves") like the user's experience level, preferences, location, training needs, and equipment information (in a manufacturing-type context), to generate an interface specific to them that continually evolves to optimize their individual productivity
As new capabilities (i.e., "branches") are developed centrally, they could be deployed to targeted users, and their individual experiences would adapt to incorporate them in ways that work best for them and their given configuration, without having to relearn the underlying application(s)
Going Back to Navigation
On the last point above, a parallel example would be the introduction of weather information into navigation.
At least in Google Maps, while real-time elements like speed traps, traffic delays, and accidents are factored into the application, there is currently no mechanism to recognize or warn end users about significant weather events that may surface along the route. In practice, where severe weather is involved, this could represent a safety risk to the traveler. If the model were adapted to include a "branch" for this kind of data, one would hope that the application would behave the same from an end-user standpoint, but with the additional capability integrated into the application.
Wrapping Up
Understanding that we’re still early in the exploration of how AI will change the way we work, I believe that defining a framework for how various types of models can integrate and work across purposes would enable significant value and productivity if designed effectively.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
Having had multiple recent discussions related to portfolio management, I thought I'd share some thoughts on disciplined operations in that area, as well as on the associated toolsets. This is a substantial topic, but I'll try to hit the main points and address more detailed questions as and when they arise.
In getting started, given all the buzz around GenAI, I asked ChatGPT “What are the most important dimensions of portfolio management in technology?” What was interesting was that the response aligned with most discussions I’ve had over time, which is to say that it provided a process-oriented perspective on strategic alignment, financial management, and so on (a dozen dimensions overall), with a wonderfully summarized description of each (and it was both helpful and informative). The curious part was that it missed the two things I believe are most important: courageous leadership and culture.
The remainder of this article will focus more on the process dimensions (I’m not going to frame it the same as ChatGPT for simplicity), but I wanted to start with a fundamental point: these things have to be about partnership and value first and process second. If the focus becomes the process, there is generally something wrong in the partnership or the process is likely too cumbersome in how it is designed (or both).
Portfolio Management
Partnership
Portfolio management needs to start with a fundamental partnership and shared investment between business and technology leaders on the intended outcome. Fortunately or unfortunately, where the process tends to get the most attention (and part of why I've heard about it so much in the last couple years) is in a difficult market or economy where spend management is the focus, and the intention is largely related to optimizing costs. Broadly speaking, when times are good and businesses grow, the processes for prioritization and governance can become less rigorous in a speed-to-market mindset, the demand for IT services increases, and a significant amount of inefficiency, delivery, and quality issues can arise as a result. The reality is that discipline should always be a part of the process because it's in the best interest of creating value (long- and short-term) for an organization. That isn't to suggest artificial constraints, unnecessary gates in a process, or anything to hinder speed-to-market. Rather, the goal of portfolio management should be to have a framework in place to manage demand through delivery in a way that facilitates predictable, timely, and quality delivery and a healthy, secure, robust, and modern underlying technology footprint that creates significant business value and competitive advantage over time. That overall objective is just as relevant during a demand surge as it is when spending is constrained.
This is where courageous leadership becomes the other critical overall dimension. It's never possible to do everything and do it well. The key is to maintain the right mix of work, creating the right outcomes, at a sustainable pace, with quality. Where technology leaders become order takers, a significant amount of risk can be introduced that actually hurts a business over time: taking on too much without thoughtful planning can result in critical resources being spread too thin, missed delivery commitments, poor quality, and substantial technical debt, all of which eventually undermine the originally intended goal of being "responsive". This is why partnership and mutual investment in the intended outcomes matter. Not everything has to be "perfect" (the concept doesn't really exist in technology anyway), but the point is to make conscious choices on where to spend precious company resources to optimize the overall value created.
End-to-End Transparency
Shifting focus from the direction to the execution, portfolio management needs to start with visibility in three areas:
Demand management – the work being requested
Delivery monitoring – the work being executed
Value realization – the impact of what was delivered
In demand management, the focus should ideally be on both internal and external factors (e.g., business priorities, customer needs, competitive and industry trends), a thoughtful understanding of the short- and long-term value of the various opportunities, the requirements (internal and external) necessary to make them happen, and the desired timeframe for those results to be achieved. From a process standpoint, notice of involvement and request for estimate (RFE) processes tend to be important (depending on the scale and structure of an organization), along with ongoing resource allocation and forecast information to evaluate these opportunities as they arise.
Delivery monitoring is important, given the dependencies that can and do exist within and across efforts in a portfolio, the associated resource needs, and the expectations they place on customers, partners, or internal stakeholders once delivered. As and when things change, there should be awareness as to the impact of those changes on upcoming demand as well as other efforts within a managed portfolio.
Value realization is a generally underserved but relatively important part of portfolio management, especially in spending-constrained situations. This level of discipline (at an overall level) is important for two primary reasons: first, to understand the efficacy of estimation and planning processes in the interest of future prioritization and planning and, second, to ensure investments were made effectively in the right priorities. Where there is no "retrospective", a lot of learning that could serve continuous improvement and operational efficiency and effectiveness over time may be lost (ultimately having an adverse impact on business value created).
Maintaining a Balanced Portfolio
Two concepts that I believe are important to consider in how work is ultimately allocated/prioritized within an IT portfolio:
Portfolio allocation – the mix of work that is being executed on an ongoing basis
Prioritization – how work is ultimately selected and the process for doing so
A good mental model for portfolio allocation is a jigsaw puzzle. Some pieces fit together, others don't, and whatever pieces are selected, you are ultimately striving for an overall picture that matches what you originally saw "on the box". While you can operate in multiple areas of a puzzle at the same time, you generally can't focus on all of them concurrently and expect to be efficient on the whole.
What I believe a “good” portfolio should include is four key areas (with an optional fifth):
Innovation – testing and experimenting in areas where you may achieve significant competitive advantage or differentiation
Business Projects – developing solutions that create or enable new or enhanced business capabilities
Modernization – using an “urban renewal” mindset to continue to maintain, simplify, rationalize, and advance your infrastructure to avoid significant end of life, technical debt, or other adverse impacts from an aging or diverse technology footprint
Security – continuing to leverage tools and technologies that manage the ever increasing exposure associated with cyber security threats (internal and external)
Compliance (where appropriate) – investing in efforts to ensure appropriate conformance and controls in regulatory environments / industries
I would argue that, regardless of the level of overall funding, these categories should always be part of an IT portfolio. There can obviously be projects or programs that provide forward momentum in more than one category above, but where there isn't some level of investment in the "non-business-project" areas, there will likely be a significant correction needed at some point in time that could be very disruptive from a business standpoint. It is probably also worth noting that I am purposely not calling out a "technology projects" category above. From my perspective, if a project doesn't drive one of the other categories, I'd question what value it creates. There is no value in technology for technology's sake.
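The "always funded" nature of the core categories can be expressed as a simple balance check. This is a minimal sketch of the idea only: the function name, the zero-investment threshold, and the example percentages are all illustrative assumptions, not recommendations for an actual allocation mix.

```python
# Illustrative sketch of the "balanced portfolio" check described above.
# Category names come from the article; thresholds and percentages are
# hypothetical, chosen only to demonstrate the idea.

CORE_CATEGORIES = {"innovation", "business_projects", "modernization", "security"}

def check_portfolio_balance(allocation):
    """Return warnings for core categories with no investment, plus a
    sanity check that allocations sum to roughly 100%."""
    warnings = []
    for category in sorted(CORE_CATEGORIES):
        if allocation.get(category, 0) <= 0:
            warnings.append(f"no investment in core category: {category}")
    total = sum(allocation.values())
    if abs(total - 100.0) > 0.01:
        warnings.append(f"allocations sum to {total}%, expected 100%")
    return warnings

# Example mix (illustrative only): everything funneled into business projects.
mix = {"business_projects": 100.0}
for warning in check_portfolio_balance(mix):
    print(warning)
```

Run against the all-business-projects mix above, the check flags innovation, modernization, and security as unfunded, which is exactly the "significant correction needed at some point" scenario the article describes.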
From a prioritization standpoint, I've seen both ends of the spectrum over time: environments where there is no prioritization in place and everything with a positive business case (and even some without) is sent into execution, to ones where there is an elaborate "scoring" methodology, with weights, factors, and metrics organized into highly elaborate calculations that create a false sense of "rigor" in the efficacy of the process. My overall point of view is that, with the above portfolio allocation model in place ensuring some balance in each of the critical categories of spend, a prioritization process should include some level of metrics, with an emphasis on short- and long-term business/financial impact as well as a conscious integration of the resource commitments required to execute the effort by comparison with other alternatives. As important as any process, however, are the discussions that should be happening from a business standpoint to ensure the engagement, partnership, and overall business value being delivered through the portfolio (the picture on the box) in the decisions made.
Release Management
Part of arriving at the right set of work to do also comes down to release management. A good analogy for release management is the game Tetris. In Tetris, various shaped blocks drop continually into a grid, with the goal of rotating and aligning them to fit as cleanly as possible with what is already on the board. There are and always will be gaps and the fit will never be perfect, but you can certainly approach Tetris in a way that is efficient and well-aligned or in a way that is very wasteful of the overall real estate with which you have to work.
This is a great mental model for how project planning should occur. If you do a good job, resources are effectively utilized, outcomes are predictable, there is little waste, and things run fairly smoothly. If you don't think about the process and continually inject new work into a portfolio without thoughtful planning as to dependencies and ongoing commitments, there can and likely will be significant waste, inefficiency, collateral impact, and issues in execution.
Release management comes down to two fundamental components:
Release strategy – the approach to how you organize and deliver major and minor changes to various stakeholder groups over time
Release calendar – an ongoing view of what will be delivered at various times, along with any critical “T-minus” dates and/or delivery milestones that can be part of a progress monitoring or gating process used in conjunction with delivery governance processes
From a release strategy standpoint, it is tempting in a world of product teams, DevSecOps, and CI/CD pipelines to assume everything comes down to individual product plans and their associated release schedules. The two primary issues here are the time and effort it generally takes to deploy new technology and the associated change management impact on the end users who are expected to adopt those changes as and when they occur. The more fragmented the planning process, the more business risk there is that end users or customers will be either under- or overserved at any given point in time; a thoughtful release strategy can help create predictable, manageable, and sustainable levels of change over time across a diverse set of stakeholders being served.
The release calendar, aside from being an overall summary of what will be delivered when and to whom, also should ideally provide transparency into other critical milestones in the major delivery efforts so that, in the event something moves off plan (which is a very normal occurrence in technology and medium to larger portfolios), the relationship to other ongoing efforts can be evaluated from a governance standpoint to determine whether any rebalancing or slotting of work is required.
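The "T-minus" milestone idea above lends itself to a small worked example. This is a hedged sketch only: the milestone names, day offsets, and go-live date are hypothetical, invented to show how a release calendar can derive its gating dates by counting back from a delivery date.

```python
# Illustrative sketch of "T-minus" milestone dates on a release calendar.
# Milestone names and offsets are assumptions for demonstration purposes.
from datetime import date, timedelta

def t_minus_milestones(release_date, offsets_days):
    """Compute each milestone's date by counting back from the release date."""
    return {name: release_date - timedelta(days=days)
            for name, days in offsets_days.items()}

# Example: a hypothetical gating schedule counting back from a go-live date.
go_live = date(2025, 6, 30)
milestones = t_minus_milestones(go_live, {
    "code_freeze": 14,        # T-minus 14 days
    "uat_signoff": 7,         # T-minus 7 days
    "go_no_go_review": 2,     # T-minus 2 days
})
print(milestones["code_freeze"])   # 2025-06-16
```

The value of structuring the calendar this way is that when a go-live date slips, every dependent T-minus date recomputes with it, making it straightforward to evaluate the impact on other efforts in the portfolio from a governance standpoint.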
Change Management
While I won’t spend a significant amount of time on this point, change management is often an area where I’ve seen the process managed very well and relatively poorly. The easy part is generally managing change relative to a specific project or program and that governance often exists in my experience. The issue that can arise is when the leadership overseeing a specific project is only taking into account the implications of change on that effort alone, and not the potential ripple effect of a schedule, scope, or financial adjustment on the rest of the portfolio, future demand, or on end users in the event that releases are being adjusted.
On Tooling
Pivoting from processes to tools, at an overall level, I'm generally not a fan of over-engineering the infrastructure associated with portfolio management. It is very easy for such an infrastructure to take on a life of its own, become a significant administrative burden that creates little value (beyond transparency), or contain outdated and inaccurate information when the process demands too much data without clear ownership and usage of the data obtained.
The goal is the outcome, not the tools.
To the extent that a process is being established, I’d generally want to focus on transparency (demand through delivery) and a healthy ongoing discussion of priorities in the interest of making informed decisions. Beyond that, I’ve seen a lot of reporting that doesn’t generally result in any level of actions being taken, which I consider to be very ineffective from a leadership and operational standpoint.
Again, if the process is meant to highlight a relationship problem (for example, a dashboard requiring a large number of employees to capture timesheets, mark them to various projects, and roll them up, all to support a management discussion that concludes "we're over-allocated and burning out our teams"), my question would be why all of that data and effort was required to "prove" something, whether there is actual trust and partnership, whether there are other underlying delivery performance issues, and so on. The process and tools are there to enable effective execution and the creation of business value, not to drain effort and energy in administrivia that could be better applied in delivery.
Wrapping Up
Overall, having spent a number of years seeing well-developed and well-executed processes as well as less robust versions of the same, effective portfolio management comes down to value creation. When the focus becomes the process, the dashboard, the report, the metrics, something is amiss in my experience. It should be about informing engaged leadership, fostering partnership, enabling decisions, and creating value. That is not to say that average utilization of critical resources (as an example) isn't a good thing to monitor and keep in mind, but it's what you do with that information that matters.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
Having covered a couple future-oriented topics on Transforming Manufacturing and The Future of IT, I thought it would be good to come back to where we are with Enterprise Architecture as a critical function for promoting excellence in IT.
Overall, there is a critical balance to be struck in technology strategy today. Technology-driven capabilities are advancing faster than any organization can reasonably adopt and integrate them (as is the exposure in cyber security); even if you could, the change management burden you'd place on end users would be highly disruptive and thereby undermine your desired business outcomes. In practice, rapid but sustainable change is the goal, not any one particular "implementation" of the latest thing. This is what Relentless Innovation, referenced in my article on Excellence by Design, is about.
Connecting Architecture back to Strategy
In the article, Creating Value Through Strategy, I laid out a framework for thinking about IT strategy at an overall level that can be used to create some focal points for enterprise architecture efforts in practice, namely:
Innovate – leveraging technology advancements in ways that promote competitive advantage
Accelerate – increasing speed to market/value to be more responsive to changing needs
Optimize – improving the value/cost ratio to drive return on technology investments overall
Inspire – creating a workplace that promotes retention and enables the above objectives
Perform – ensuring reliability, security, and performance in the production environment
The remainder of this article will focus on how enterprise architecture (EA) plays a role in enabling each of these dimensions given the pace of change today.
Breaking it Down
Innovate
Adopting new technologies for maximum business advantage is certainly the desired end game in this dimension, but unless there is a very unique, one-off situation, the role of EA is fairly critical in making these advancements leverageable, scalable, and sustainable. It’s worth noting, by the way, that I’m specifically referring to “enterprise architecture” here, not “solution architecture”, which I would consider to be the architecture and design of a specific business solution. One should not exist without the other and, to the degree that solution architecture is emphasized without a governing enterprise architecture framework in place, the probability of significant technical debt, delivery issues, lack of reliability, and a host of other issues will skyrocket.
Where EA plays a role in promoting innovation is, minimally, in exploring market trends and looking for enabling technologies that can promote competitive advantage, but also, and very critically, in establishing the standards and guidelines by which new technologies should be introduced and integrated into the existing environment.
Using a "modern" example, I've seen a number of articles of late on the role of GenAI in "replacing" or "disrupting" application development, from low-code/no-code solutions to the SaaS/package software domain and beyond. While this sounds great in theory, it shouldn't take long for the enterprise architecture questions to surface:
How do I integrate that accumulated set of “point solutions” in any standard way?
How do I meaningfully run analytics on the data associated with these applications?
How do I secure these applications so that I'm not exposed to the kinds of vulnerabilities I would be with any open-source technology (i.e., they are generated by an engine that may have inherent security gaps)?
How do I manage the interoperability between these internally-developed/generated solutions and standard packages (ERP, CRM, etc.) that are likely a core part of any sizeable IT environment?
In the above example, even if I find a way to replace existing low-code/no-code solutions with a new technology, it doesn't mean I avoid the same challenges that exist with leveraging those technologies today.
In the case of innovation, the highest priorities for EA are therefore: looking for new disruptive technologies in the market, defining standards to enable their effective introduction and use, and then governing that delivery process to ensure standards are followed in practice.
Accelerate
Speed to market is a pressing reality in any environment I’ve seen, though it can lead to negative consequences as I discussed in Fast and Cheap… Isn’t Good. Certainly, one of the largest barriers to speed is complexity, and complexity can come in many forms depending on the makeup of the overall IT landscape, the standards, processes, and governance in place related to delivery, and the diversity in solutions, tools, and technologies that are involved in the ecosystem as a whole.
While I talk about standards, reuse, and governance in the broader article on IT strategy, I would argue that the largest priority for EA in terms of accelerating delivery is in rationalization of solutions, tools, and technologies in use overall.
The more diverse the enterprise ecosystem is, the more difficult it becomes to add, replace, or integrate new solutions over time, and ultimately this will slow delivery efforts down to a snail’s pace (not to mention making them much more expensive and higher risk over time).
Using an example of a company that has performed many acquisitions over time, looking for opportunities to simplify and standardize core systems (e.g., moving to a single ERP versus having multiple instances and running consolidations through a separate tool) can lead to significant reduction in complexity over time, not to mention making it possible to redeploy resources to new capability development versus being spread across multiple redundant production solutions.
Optimize
In the case of increasing the value/cost ratio, the ability to rationalize tools and solutions should definitely lead to reduced cost of ownership (beyond the delivery benefit mentioned above), but the largest priority should be in identifying ways to modernize on a continual basis.
Again, in my experience, modernization is difficult to prioritize and fund until there is an end-of-life or end-of-support scenario, at which point it becomes a “must do” priority, and causes a significant amount of delivery disruption in the process.
What I believe is a much better and healthier approach to modernization is a more disciplined, thoughtful approach akin to "urban renewal", where there is an annual allocation of work directed at modernization on a prioritized basis (the criteria for which should be established through EA, given an understanding of other business demand), such that significant "events" are mitigated and it becomes a way of working on a sustained basis. In this way, the delineation between "keep the lights on" (KTLO) support, maintenance (which is where modernization efforts belong), and enhancement/build-related work is important. In my experience, that second maintenance bucket is too often lumped into KTLO work, it is underserved and underfunded, and ultimately that creates periodic crises in IT to remediate things that should have been addressed far sooner (at a much lower cost) had a more disciplined portfolio management strategy been in place.
Inspire
In the interest of supporting the above objectives, having the right culture and skills to support ongoing evolution is imperative. To that end, the role of EA should be in helping to inform and guide the core skills needed to “lean forward” into advanced technology, while maintaining the right level of competency to support the footprint in place.
Again, this is where having a focus on modernization can help, as it creates a means to sunset legacy tools and technologies, to enable that continuous evolution of the skills the organization needs to operate (whether internally or externally sourced).
Perform
Finally, the role of EA in the production setting could be more or less difficult depending on how well the above capabilities are defined and supported in an enterprise. To the degree standards, rationalization, modernization, and the right culture and skills are in place, the role of EA would be helping to “tune” the environment to perform better and at a lower cost to operate.
Where there is a priority need for EA is ensuring there is an integrated approach to cyber security that aligns to development processes (e.g., DevSecOps) and a comprehensive, integrated strategy to monitor and manage performance in the production environment so that production incidents (using ITIL-speak) can be minimized and mitigated to the maximum degree possible.
Wrapping Up
Looking back on the various dimensions and priorities outlined above in relation to the role of EA, perhaps there isn’t much I can argue is very different from what the role entailed five or ten years ago… establish standards, simplify/rationalize, modernize, retool, govern… that being said, the pace at which these things need to be accomplished and the criticality of doing them well are greater than ever, given the increasing role technology plays in the digital enterprise. Like other dimensions required to establish excellence in IT, courageous leadership is where this needs to start, because it takes discipline to do things “right” while still doing them at a pace and with an agility that discerns the things that matter to an enterprise from those that are simply ivory tower thinking.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
I’ve been thinking about writing this article for a while, with the premise of “what does IT look like in the future?” In a digital economy, the role of technology in The Intelligent Enterprise will certainly continue to be creating value and competitive business advantage. That being said, one can reasonably assume a few things that are true today for medium to large organizations will continue to be part of that reality as well, namely:
The technology footprint will be complex and heterogeneous in its makeup. To the degree that there is a history of acquisitions, even more so
Cost will always be a concern, especially to the degree it exceeds value delivered (this is explored in my article on Optimizing the Value of IT)
Agility will be important in adopting and integrating new capabilities rapidly, especially given the rate of technology advancement only appears to be accelerating over time
Talent management will be complex given the highly diverse set of technologies present (something I’ve started to address in my Workforce and Sourcing Strategy Overview article)
My hope in this article is to provide some perspective on where I believe things will ultimately move in technology: in the underlying makeup of the footprint itself, in how we apply capabilities against it, and in how to think about moving from our current reality to that environment. Certainly, all five of the dimensions I outlined in my article on Creating Value Through Strategy will continue to apply at an overall strategy level (four of which are referenced in the bullet points above).
A Note on My Selfish Bias…
Before diving further into the topic at hand, I want to acknowledge that I am coming from a place where I love software development and the process surrounding it. I taught myself to program in the third grade (in Apple BASIC), got my degree in Computer Science, started as a software engineer, and taught myself Java and .NET for fun years after I stopped writing code as part of my “day job”. I love the creative process of conceptualizing a problem, taking a blank sheet of paper (or whiteboard), designing a solution, pulling up a keyboard, putting on some loud music, shutting out distractions, and ultimately having technology that solves that problem. It is a very fun and rewarding thing to explore the boundaries of what’s possible and balance the creative aspects of conceptual design with the practical realities and physical constraints of technology development.
All that being said, insofar as this article is concerned, when we conceptualize the future of IT, I wanted to put a foundational position statement forward to frame where I’m going from here, which is:
Just because something is cool and I can do it, doesn’t mean I should.
That is a very difficult thing to internalize for those of us who live and breathe technology professionally. Pride of authorship is a real thing and, if we’re to embrace the possibilities of a more capable future, we need to apply our energies in the right way to maximize the value we want to create in what we do.
The Producer/Consumer Model
Where the Challenge Exists Today
The fundamental problem I see in technology as a whole today (I realize I’m generalizing here) is that we tend to want to be good at everything, build too much, customize more than we should, and treat things like standards and governance as inconveniences that slow us down in the “deliver now” environment in which we generally operate (see my article Fast and Cheap, Isn’t Good for more on this point).
Where that leaves us is bloated, heavy, expensive, and slow… and it’s not good. For all of our good intentions, IT doesn’t always have the best reputation for understanding, articulating, or delivering value in business terms and, in quite a lot of situations I’ve seen over the years, our delivery story can be marred with issues that don’t create a lot of confidence when the next big idea comes along and we want to capitalize on the opportunity it presents.
I’m being relatively negative on purpose here, but the point is to start with the humility of acknowledging the situation that exists in a lot of medium to large IT environments, because charting a path to the future requires a willingness to accept that reality and to create sustainable change in its place. The good news, from my experience, is that there is one thing going for most IT organizations I’ve seen that can be a critical element in pivoting to where we need to be: a strong sense of ownership. That ownership may show up as frustration with the status quo depending on the organization itself, but I’ve rarely seen an IT environment where the practitioners themselves don’t feel ownership for the solutions they build, maintain, and operate or have a latent desire to make them better. There may be a lack of strategy or commitment to change in many organizations, but the underlying potential to improve is there, and that’s a very good thing if capitalized upon.
Challenging the Status Quo
Pivoting to the future state has to start with a few critical questions:
Where does IT create value for the organization?
Which of those capabilities are available through commercially available solutions?
To what degree are “differentiated” capabilities or features truly creating value? Are they exceptions or the norm?
Using an example from the past, a delivery team was charged with solving a set of business problems that they routinely addressed through custom solutions, even though the same capabilities could be accomplished through integration of one or more commercially available technologies. From an internal standpoint, the team promoted the idea that they had a rapid delivery process, were highly responsive to the business needs they were meant to address, and so on. The problem is that the custom approach actually cost more money to develop, maintain, and support, and was considerably more difficult to scale. Given that the solutions were also continually developed without standards, their ability to adopt or integrate new technologies available on the market was non-existent. Those situations inevitably led to new custom solutions, and the costs of ownership skyrocketed over time.
This situation raises the question: if it’s possible to deliver equivalent business capability without building anything “in house”, why not do just that?
In the proverbial “buy versus build” argument, these are the reasons I believe it is valid to ultimately build a solution:
There is nothing commercially available that provides the capability at a reasonable cost
I’m referencing cost here, but it’s critical to understand the TCO implications of building and maintaining a solution over time. They are very often underestimated.
There is a commercially available solution that can provide the capability, but something about privacy, IP, confidentiality, security, or compliance-related concerns makes that solution infeasible in a way that contractual terms can’t address
I mention contracting purposefully here, because I’ve seen viable solutions eliminated from consideration over a lack of willingness to contract effectively, and that seems suboptimal by comparison with the cost of building alternative solutions instead
Ultimately, we create value through business capabilities enabled by technology; “who” built them doesn’t matter.
Rethinking the Model
My assertion is that we will obtain the most value and acceleration of business capabilities when we shift towards a producer/consumer model in technology as a whole.
What that suggests is that “corporate IT” largely adopts the mindset of the consumer of technologies (specifically services or components) developed by producers focused purely on building configurable, leverageable components that can be integrated in compelling ways into a connected ecosystem (or enterprise) of the future.
What corporate IT “produces” should be limited to differentiated capabilities that are not commercially available, along with a limited set of foundational capabilities that will be outlined below. Producing less and thinking more like a consumer should shift the focus internally towards how technology can more effectively enable business capability and innovation, and externally towards understanding, evaluating, and selecting from the best-of-breed capabilities in the market that help deliver on those business needs.
The implication, of course, for those focused on custom development would be to move towards those differentiated capabilities or entirely to the producer side (in a product-focused environment), which honestly could be more satisfying than corporate IT for those with a strong development inclination.
The cumulative effect of these adjustments should lead to an influx of talent into the product community, an associated expansion of available advanced capabilities in the market, and an accelerated ability to eventually adopt and integrate those components in the corporate environment (assuming the right infrastructure is then in place), creating more business value than is currently possible where everyone tries to do too much and sub-optimizes their collective potential.
Learning from the Evolution of Infrastructure
The Infrastructure Journey
You don’t need to look very far back in time to remember when the role of a CTO was largely focused on managing data centers and infrastructure in an internally hosted environment. Along the way, third parties emerged to provide hosting services and alleviate the need to be concerned with routine maintenance, patching, and upgrades. Then converged infrastructure and the software-defined data center provided opportunities to consolidate and optimize that footprint and manage cost more effectively. With the rapid evolution of public and private cloud offerings, the arguments for managing much of your own infrastructure, beyond those related specifically to compliance or legal concerns, are very limited, and the trajectory of edge computing environments is still evolving fairly rapidly as specialized computing resources and appliances are developed. The lesson: it’s not what you manage in house that matters, it’s the services you provide relative to security, availability, scalability, and performance.
Ok, so what happens when we apply this conceptual model to data and applications? What if we were to become a consumer of services in these domains as well? The good news is that this journey is already underway; the question is how far we should take things in the interest of optimizing the value of IT within an organization.
The Path for Data and Analytics
In the case of data, I think about this area in two primary dimensions:
How we store, manage, and expose data
How we apply capabilities to that data and consume it
In terms of storage, the shift from hosted data to cloud-based solutions is already underway in many organizations. The key levers continue to be ensuring data quality and governance, finding ways to minimize data movement and optimize data sharing (while facilitating near real-time analytics), and establishing means to expose data in standard ways (e.g., virtualization) that enable downstream analytic capabilities and consumption methods to scale and work consistently across an enterprise. Certainly, the cost of ingress and egress of data across environments is a key consideration, especially where SaaS/PaaS solutions are concerned. Another opportunity continues to be the money wasted on building data lakes (beyond archival and unstructured data needs) when viable platform solutions in that space are available. From my perspective, the less time and resources spent moving and storing data to no business benefit, the more energy can be applied to exposing, analyzing, and consuming that data in ways that create actual value. Simply said, we don’t create value in how or where we store data; we create value in how we consume it.
On the consumption side, having a standards-based environment with a consistent method for exposing data and enabling integration will lend itself well to tapping into the ever-expanding range of analytical tools on the market, as well as swapping out one technology for another as those tools continue to evolve and advance in their capabilities over time. The other major pivot is to shift from “traditional” analytical reporting and business intelligence solutions to more dynamic data apps that leverage AI to inform meaningful end-user actions, whether for internal or external users of systems. Compliance-related needs aside, at an overall level, the primary goal of analytics should be informed action, not administrivia.
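To make the “consistent method for exposing data” point concrete, here is a minimal, illustrative sketch in Python. The `OrderStore` contract and all names in it are hypothetical, not drawn from any particular product; the point is simply that analytic consumers code against a standard interface, so the underlying store can be swapped without touching them.

```python
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass(frozen=True)
class Order:
    customer_id: str
    amount: float


class OrderStore(Protocol):
    """The standard contract every data source must expose."""
    def orders_for(self, customer_id: str) -> Iterable[Order]: ...


class WarehouseStore:
    """One backing implementation: an in-memory stand-in for a warehouse.
    A lake- or SaaS-backed store would implement the same contract."""
    def __init__(self, rows: list) -> None:
        self._rows = rows

    def orders_for(self, customer_id: str) -> Iterable[Order]:
        return [o for o in self._rows if o.customer_id == customer_id]


def total_spend(store: OrderStore, customer_id: str) -> float:
    """An analytic consumer: written once against the contract, so the
    underlying store can be swapped freely as tools evolve."""
    return sum(o.amount for o in store.orders_for(customer_id))
```

Swapping `WarehouseStore` for any other implementation of the contract leaves `total_spend` untouched, which is the agility the standards argument is after.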
The Shift In Applications
The challenge in the applications environment is arbitrating the balance between monolithic (“all in”) solutions, like ERPs, and a fully distributed component-based environment that requires potentially significant management and coordination from an IT standpoint.
Conceptually, for smaller organizations, where the core applications (like an ERP suite + CRM solution) represent the majority of the overall footprint and there aren’t a significant number of specialized applications that must interoperate with them, it likely would be appropriate and effective to standardize based on those solutions, their data model, and integration technologies.
On the other hand, the more diverse and complex the underlying footprint of a medium- to large-size organization, the more value there is in looking at ways to decompose these relatively monolithic environments to provide interoperability across solutions, enable rapid integration of new capabilities into a best-of-breed ecosystem, and facilitate analytics that span multiple platforms in ways that would be difficult, costly, or impossible to do within any one or two given solutions. What that translates to, in my mind, is an eventual decline of the monolithic ERP-centric environment in favor of a service-driven ecosystem where individually configured capabilities are orchestrated through data and integration standards, with components provided by various producers in the market. That doesn’t necessarily align with the product strategies of individual companies trying to grow through complementary vertical or horizontal solutions, but I would argue those products should create value at the individual component level and be configurable such that swapping out one component of a larger ecosystem remains feasible without having to abandon the other products in that application suite (which may individually be best-of-breed).
Whether shifting from a highly insourced to a highly outsourced/consumption-based model for data and applications will be feasible remains to be seen, but there was certainly a time not that long ago when hosting a substantial portion of an organization’s infrastructure footprint in the public cloud was a cultural challenge. Moving up the technology stack from the infrastructure layer to data and applications seems like a logical extension of that mindset, placing emphasis on capabilities provided and value delivered versus assets created over time.
Defining Critical Capabilities
Own Only What is Essential
Making an argument to shift to a consumption-oriented mindset in technology doesn’t mean there isn’t value in “owning” anything; rather, it’s meant to be a call to evaluate and challenge assumptions about where IT creates differentiated value and to apply our energies towards those things. What can be leveraged, configured, and orchestrated, I would buy and use. What should be built? Capabilities that are truly unique, create competitive advantage, can’t be sourced in the market overall, and that create a unified experience for end users. On the final point, I believe that shifting to a disaggregated applications environment could create complexity for end users in navigating end-to-end processes in intuitive ways, especially to the degree that data apps and integrated intelligence become a common way of working. To that end, building end user experiences that can leverage underlying capabilities provided by third parties feels like a thoughtful balance between a largely outsourced application environment and a highly effective and productive individual consumer of technology.
Recognize Orchestration is King
Workflow and business process management is not a new concept in the integration space, but it’s been elusive (in my experience) for many years for a number of reasons. What is clear at this point is that, with the rapid expansion in technology capabilities continuing to hit the market, our ability to synthesize a connected ecosystem that blends these unique technologies with existing core systems is critical. The more we can do this in consistent ways, and the more we shift towards a configurable, dynamic, framework-driven environment, the more business flexibility and agility we will provide… and that translates to innovation and competitive advantage over time. Orchestration is also about deciding which processes matter enough that they shouldn’t be relegated to the internal workings of a platform solution or ERP, but instead taken in-house, mapped out, and coordinated with the intention of creating differentiated value that can be measured, evaluated, and optimized over time. Clearly the scalability and performance of this component are critical, especially to the degree there is a significant amount of activity being managed through this infrastructure, but I believe the transparency, agility, and control afforded in this kind of environment would greatly outweigh the complexity involved in its implementation.
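As a toy illustration of the framework-driven idea, the sketch below treats a process as pure configuration over a registry of steps, so reordering or swapping steps requires no code changes. The step names and context fields are invented for the example; a production orchestrator would add error handling, persistence, and scale.

```python
from typing import Callable

# Each step is a plain function from context to context; the "process"
# itself is just a list of step names, i.e., configuration.
Step = Callable[[dict], dict]


def run_workflow(steps: list, registry: dict, ctx: dict) -> dict:
    """Execute the configured step names in order, threading a shared
    context through and recording an audit trail for later analysis."""
    for name in steps:
        ctx = registry[name](ctx)
        ctx.setdefault("audit", []).append(name)
    return ctx


# Hypothetical steps, for illustration only
def validate(ctx):
    ctx["valid"] = True
    return ctx

def price(ctx):
    ctx["price"] = 100 if ctx.get("valid") else None
    return ctx

def notify(ctx):
    ctx["notified"] = True
    return ctx


REGISTRY = {"validate": validate, "price": price, "notify": notify}

# Changing the process means editing this list, not the code.
result = run_workflow(["validate", "price", "notify"], REGISTRY, {})
```

The audit trail recorded per run is exactly the kind of process performance data the orchestration argument depends on for later measurement and optimization.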
Put Integration in the Center
In a service-driven environment, the infrastructure for integration, streaming in particular, along with a publish and subscribe model for event-driven processing, will clearly be critical for high-priority enterprise transactions. The challenge in integration conversations, in my experience, tends to be defining the transactions that “matter” in terms of facilitating interoperability and reuse, versus those that are suitable for point-to-point, one-off connections. There is ultimately a cost to reuse when you try to scale, and there is discipline needed to arbitrate those decisions to ensure they are appropriate to business needs.
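The publish and subscribe model mentioned above can be sketched in a few lines. This in-process bus is purely illustrative; real enterprise implementations would sit on a streaming platform or message broker, and the topic and event names here are made up for the example.

```python
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    """Minimal in-process publish/subscribe bus: producers emit events by
    topic, and any number of consumers react without the producer knowing
    who (or how many) they are. That decoupling is what enables reuse."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # Fan the event out to every registered consumer of the topic.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
received = []
bus.subscribe("order.created", received.append)   # e.g., a fulfillment service
bus.subscribe("order.created", lambda e: None)    # e.g., an analytics sink
bus.publish("order.created", {"id": 42})
```

Adding a new consumer of `order.created` requires no change to the publisher, which is the interoperability property that point-to-point connections lack.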
Reassess Your Applications/Services
With any medium to large organization, there is likely technology sprawl to be addressed, particularly if there is a material level of custom development (because component boundaries likely won’t be well architected) and acquired technology (because of the duplication it can cause in solutions and instances of solutions) in the landscape. Another complicating factor could be the diversity of technologies and architectures in place, depending on whether or not a disciplined modernization effort exists, the level of architecture governance in place, and the rate and means by which new technologies are introduced into the environment. All of these factors call for a thoughtful portfolio strategy, to identify critical business capabilities and ensure the technology solutions meant to enable them are modern, configurable, rationalized, and integrated effectively from an enterprise perspective.
Leverage Data and Insights, Then Optimize
With analytics and insights being critical to differentiated business performance, an effective data governance program with business stewardship, the selection of the right core, standard data sets to enable purposeful, actionable analytics, and the process performance data associated with orchestrated workflows are all critical components of any future IT infrastructure. This is not all data; it’s the subset that creates enough business value to justify the investment in making it actionable. As process performance data is gathered through the orchestration approach, analytics can be performed to look for opportunities to evolve processes, configurations, rules, and other characteristics of the environment based on key business metrics, improving performance over time.
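As one small example of making process performance data actionable, assume orchestration events are logged with per-step start and end timestamps (the event format here is invented for the sketch). Mean step durations, a basic signal for where to optimize a process, fall out directly:

```python
from collections import defaultdict
from statistics import mean


def step_durations(events: list) -> dict:
    """Compute the mean duration per workflow step from (assumed)
    orchestration log events carrying start/end timestamps."""
    by_step = defaultdict(list)
    for e in events:
        by_step[e["step"]].append(e["end"] - e["start"])
    return {step: mean(times) for step, times in by_step.items()}


# Illustrative events from two runs of a hypothetical process
events = [
    {"step": "underwrite", "start": 0.0, "end": 4.0},
    {"step": "underwrite", "start": 10.0, "end": 12.0},
    {"step": "book",       "start": 4.0, "end": 5.0},
]
```

In a real environment the same aggregation would run over the orchestrator’s event stream and feed the business metrics used to tune processes over time.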
Monitor and Manage
With the expansion of technologies and components, internal and external to the enterprise environment, having the ability to monitor and detect issues, proactively take action, and mitigate performance, security, or availability problems will become increasingly important. Today’s tools are too fragmented and siloed to achieve the holistic understanding that is needed across hosted and cloud-based environments, including visibility into internal and external security threats.
Secure “Everything”
With risk expanding at a rate that exceeds most organizations’ ability to mitigate it, treating security, from zero trust to vulnerability management, as a fundamental requirement of current and future IT environments is a given. The development of a purposeful cyber strategy, prioritizing areas for tooling and governance effectively, and continuing to evolve and adapt that infrastructure will be core to the DNA of operating successfully in any organization. Security is not a nice-to-have; it’s a requirement.
The Role of Standards and Governance
What makes the framework-driven environment of the future work is ultimately having meaningful standards and governance, particularly for data and integration, but extending into application and data architecture, along with how those environments are constructed and layered to facilitate evolution and change over time. Excellence takes discipline and, while that may require some additional investment in cost and time during the initial and ongoing stages of delivery, it will easily pay for itself in business agility, operating cost/cost of ownership, and risk/exposure to cyber incidents over time.
The Lending Example
Having spent time a number of years ago understanding and developing strategy in the consumer lending domain, I find the similarities in process between direct and indirect lending, prime and specialty/sub-prime, and simple products like credit cards versus more complex ones like mortgages difficult to ignore. That being said, it isn’t unusual for systems to exist in a fairly siloed manner, from application to booking, through document preparation, and into the servicing process itself.
What’s interesting, from my perspective, is where the differentiation actually exists across these product sets: in the rules and workflow applied across them, while the underlying functions themselves are relatively the same. As an example, one thing that differentiates a lender is their risk management policy, not necessarily the tool they use to implement their underwriting rules or scoring models per se. Similarly, whether pulling a credit score sits at the front end of the process (as in credit cards) or is an intermediate step (as in education lending), having a configurable workflow engine could enable origination across a diverse product set with essentially the same back-end capabilities, and likely at a lower operating cost.
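A rough sketch of that configurable workflow idea, with product flows and risk rules expressed as pure configuration over shared back-end steps. The products, step names, and score thresholds are illustrative only, not taken from any real lending system.

```python
# Shared back-end capabilities, expressed once; each product is just an
# ordering of those steps plus its own rules.
PRODUCT_FLOWS = {
    "credit_card": ["pull_credit", "apply_rules", "price", "book"],
    "education":   ["apply_rules", "pull_credit", "price", "book"],
}

# A lender differentiates on policy like this, not on the engine itself.
RISK_RULES = {
    "credit_card": {"min_score": 680},
    "education":   {"min_score": 640},
}


def originate(product: str, applicant: dict) -> bool:
    """Run the configured flow for a product; for brevity, only the
    rules step is actually evaluated in this sketch."""
    rules = RISK_RULES[product]
    for step in PRODUCT_FLOWS[product]:
        if step == "apply_rules" and applicant["score"] < rules["min_score"]:
            return False  # declined by policy
    return True  # all configured steps passed
```

Launching a new product in this model means adding a flow and a rule set, not building new origination code, which is the speed-to-market payoff described below.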
So why does it matter? Well, to the degree that the focus shifts from developing core components that implement relatively commoditized capability to the rules and processes that enable various products to be delivered to end consumers, the speed with which products can be developed, enhanced, modified, and deployed should be significantly improved.
Ok, Sounds Great, But Now What?
It Starts with Culture
At the end of the day, even the best designed solutions come down to culture. As I mentioned above, excellence takes discipline and, at times, patience and thoughtfulness that seems to contradict the speed with which we want to operate from a technology (and business) standpoint. That being said, given the challenges that ultimately arise when you operate without the right standards, discipline, and governance, the outcome is well worth the associated investments. This is why I placed courageous leadership as the first pillar in the five dimensions outlined in my article on Excellence by Design. Leadership is critical and, without it, everything else becomes much more difficult to accomplish.
Exploring the Right Operating Model
Once a strategy is established to define the desired future state and a culture to promote change and evolution is in place, looking at how to organize around managing that change is worth consideration. I don’t necessarily believe in “all in” operating approaches, whether plan/build/run, a product-based orientation, or some other relatively established model. I do believe that, given leadership and adaptability are critically needed for transformational change, it is worth exploring how the organization is aligned to maintaining and operating the legacy environment versus enabling the establishment of, and transition to, the future environment. As an example, rather than assuming a pure product-based orientation, which could mushroom into a bloated organization design where not all leaders are well suited to manage change effectively, I’d consider organizing around a defined set of “transformation teams” that operate in a product-oriented, iterative model. These teams would take on pieces of the technology environment; re-orient, optimize, modernize, and align them to the future operating model; then transition those working assets to other leaders who maintain and manage those solutions, freeing the transformation teams to move to the next set of targets. This should be done in concert with looking for ways to establish “common components” teams (where infrastructure like cloud platform enablement can be a component as well) that are driven to produce core, reusable services or assets that can be consumed in the interest of accelerating delivery and enabling wider adoption of the future operating model for IT.
Managing Transition
One of the consistent challenges with any kind of transformative change is moving from what is likely a very diverse, heterogeneous environment to one that is standards-based, governed, and relatively optimized. While it’s tempting to take on too much scope and ultimately undermine the aspirations of change, I believe there is a balance to be struck in defining and establishing some core delivery capabilities that are part of the future infrastructure while incrementally migrating individual capabilities into that future environment over time. This is another case where disciplined operations and disciplined delivery come into play, so that changes are delivered steadily but also in a way that is sustainable and consistent with the desired future state.
Wrapping Up
While a certain level of evolution is guaranteed as part of working in technology, the primary question is whether we will define and shape that future or be continually reacting and responding to it. My belief is that we can, through a level of thoughtful planning and strategy, influence and shape the future environment to be one that enables rapid evolution as well as accelerated integration of best-of-breed capabilities at a pace and scale that is difficult to deliver today. We may never truly move to a full producer/consumer environment that is service-based, standardized, governed, orchestrated, fully secured, and optimized, but falling short of excellence as an aspiration would still leave us in a considerably better place than where we are today… and it’s a journey worth making, in my opinion.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
My father ran his own business when I was growing up. His business had two components: first, manufacturing pinion wire (steel or brass rods of various diameters with teeth from which gears are cut) and, second, producing specialty gears used in various applications (e.g., the timing mechanism of an oil pump). It was something he used to raise and support a large Italian family and it was pretty much a one-man show, with help from his kids as needed, whether that was counting and quality testing gears with mating parts or cutting, packing, and shipping material to various customers across North America. He acquired and learned how to operate screw machines to produce pinion wire but eventually shifted to a distribution business, where he would buy finished material in quantity and then distribute it to middle market customers at lower volumes at a markup.
His business was as low tech as you could get, with a little card file he maintained that had every order by customer written out on an index card, tracking the specific item/part, volume, and pricing so he had a way to understand history as new requests for quotes and orders came in and also understand purchase patterns over time. It was largely a relationship business, and he took his customer commitments to heart at a level that dinner conversation could easily drift into a worry about a disruption in his supply chain (e.g., something not making it to the electroplater on time) and whether he might miss his promised delivery date. Integrity and accountability were things that mattered and it was very clear his customers knew it. He had a notepad on which he’d jot things down to keep some subset of information on customer/prospect follow-ups, active orders, pending quotes, and so on, but to say there was a system outside what he kept in his head would be unfair to his mental capacity, which was substantial.
It was a highly manual business and, as a person who taught myself how to write software in the third grade, I was always curious what he could do to make things a little easier, more structured, and less manual, even though that was an inescapable part of running his business on the whole. That isn’t to say he shared that concern, given he’d developed a system and way of operating over many years, knew exactly how it worked, and was entirely comfortable with it. There was also the small matter of my father being relatively stubborn, but that didn’t necessarily deter me from suggesting things could be improved. I do like a challenge, after all…
As it happened, when I was in high school, we got our first home computer (somewhere in the mid-1980s) and, despite its relatively limited capacity, I thought it would be a good idea to figure out a way to make something about running his business a little easier with technology. To that end, I wrote a piece of software that would take all of his order management off of the paper index cards and put it into an application. The ability to look up customer history, enter new orders, compare pricing across customers, and so on were all things I figured would make life a lot easier, not to mention reducing the need to maintain the overstuffed card file that seemed highly inefficient to me.
By this point, I suspect it’s clear to the reader what happened… while my father appreciated the good intentions and the concept, there was no interest in changing the way he’d been doing business for decades in favor of technology he found a lot more confusing and intimidating than what he already knew (and that worked to his level of satisfaction). I ended up relatively disappointed, but learned a valuable lesson: the first challenge in transformation is changing mindsets… without that, the best vision in the world will fail, no matter what value it may create.
It Starts with Mindset
I wanted to start this article with the above story because, despite nearly forty years having passed since I tried to introduce a little automation to my father’s business, manufacturing in today’s environment can be just as antiquated and resistant to change as it was then. Technology is still often seen as an afterthought, a bolt-on, or a cost of doing business rather than the means to unlock the potential that exists to transform a digital business in even the most “low tech” of operating environments.
While there is an inevitable and essential dependence on people, equipment, and processes, my belief is that we have a long way to go on understanding the critical role technology plays in unlocking the potential of all of those things to optimize capacity, improve quality, ensure safety, and increase performance in a production setting.
The Criticality of Discipline
Having spent a number of years understanding various approaches to digital manufacturing, one point that I wanted to raise prior to going into more of the particulars is the importance of operating with a holistic vision and striking the balance between agility and long-term value creation. As I addressed in my article Fast and Cheap Isn’t Good, too much speed without quality can lead to complexity, uncontrolled and inflated TCO, and an inability to integrate and scale digital capabilities over time. Wanting something “right now” isn’t an excuse not to do things the right way and eventually there is a price to pay for tactical thinking when solutions don’t scale or produce more than incremental gains.
This is also related to “Framework-Driven Design” that I talk about in my article on Excellence by Design. It is rarely the case that there is an opportunity to start from scratch in modernizing a manufacturing facility, but I do believe there is substantial value in making sure that investments are guided by an overall operating concept, technology strategy, and evolving standards that will, over time, transform the manufacturing environment as a whole and unlock a level of value that isn’t possible where incremental gains are always the goal. Sustainable change takes time.
The remainder of this article will focus on a set of areas that I believe form the core of the future digital manufacturing environment. Given this is a substantial topic, I will focus on the breadth of the subject versus going too deep into any one area. Those can be follow-up articles as appropriate over time.
Leveraging Data Effectively
The Criticality of Standards
It is a foregone conclusion that you can’t optimize what you can’t track, measure, and analyze in real time. To that end, starting with data and standards is critical in transforming to a digital manufacturing environment. Without standards, the ability to benchmark, correlate, and analyze performance will be severely compromised. This can be as basic as how a camera system, autonomous vehicle, drone, conveyor, or digital sensor is integrated within a facility, to the representation of equipment hierarchies, or how operator roles and processes are tracked across a set of similar facilities. Where standards for these things don’t exist, value will be constrained to a set of individual point solutions, use cases, and one-off successes, because the technical debt associated with retrofitting and mapping across the various standards in place creates a significant maintenance effort that limits focus on true innovation and optimization. Where standards are implemented and scaled over time, however, the value opportunity will eventually cross over into exponential gains that aren’t otherwise possible. This isn’t to suggest there is a one-size-fits-all way of thinking about standards, or that every solution needs to conform for the sake of an ivory tower ideal. The point is that it’s worth slowing down the pace of “progress” at times to design solutions for longer-term value creation.
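To make the idea of a standard equipment-hierarchy representation concrete, here is a minimal sketch, loosely modeled on the ISA-95 enterprise/site/area/line/unit levels. The level names and example identifiers are illustrative assumptions, not a prescribed standard:

```python
# A consistent, hierarchical equipment identifier lets data from any
# facility be benchmarked and correlated the same way everywhere.
LEVELS = ("enterprise", "site", "area", "line", "unit")

def make_tag(**parts: str) -> str:
    """Build a slash-delimited tag, enforcing that every level is present."""
    missing = [lvl for lvl in LEVELS if lvl not in parts]
    if missing:
        raise ValueError(f"missing hierarchy levels: {missing}")
    return "/".join(parts[lvl] for lvl in LEVELS)

# Hypothetical example: a specific press in a stamping area.
tag = make_tag(enterprise="acme", site="plant-a", area="stamping",
               line="line-2", unit="press-01")
```

The point of the enforcement is exactly the governance argument above: a tag that omits a level is rejected at the source, rather than becoming mapping debt later.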
The Role of Data Governance
It’s impossible to discuss the criticality of standards without also highlighting the need for active, ongoing data governance: to ensure standards are followed, to give data quality at the local and enterprise level priority (especially to the degree that analytical insights and AI become core to informed decision making), and to help identify and surface additional areas of opportunity where standards may be needed to create further insights and value across the operating environment. The upshot is that there need to be established roles and accountability for data stewards at the facility and enterprise level if there is an aspiration to drive excellence in manufacturing, no matter the present level of automation across facilities.
Modeling Distributed Operations
Applying Distributed Computing
There is a power in distributed computing that enables you to scale execution at a rate that is beyond the capacity you can achieve with a single machine (or processor). The model requires an overall coordinator of activity to distribute work and monitor execution and then the individual processors to churn out calculations as rapidly as they are able. As you increase processors, you increase capacity, so long as the orchestrator can continue to manage and coordinate the parallel activity effectively.
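The coordinator/worker pattern described above can be sketched in a few lines. This is a simplified greedy allocator, not a real scheduler; the worker names and capacities are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Worker:
    """An individual processor (or, by analogy, a facility)."""
    name: str
    capacity: int       # units of work it can absorb per cycle
    assigned: int = 0

def distribute(total_work: int, workers: list[Worker]) -> int:
    """Coordinator role: hand each worker work up to its capacity.

    Returns whatever work could not be assigned (demand beyond capacity).
    """
    remaining = total_work
    for w in workers:
        take = min(remaining, w.capacity)
        w.assigned = take
        remaining -= take
    return remaining

# Adding processors increases capacity, so long as the coordinator keeps up.
pool = [Worker("node-1", 40), Worker("node-2", 40), Worker("node-3", 40)]
unassigned = distribute(100, pool)
```

Note that the coordinator holds no capacity of its own; its entire job is allocation and monitoring, which is the same division of labor the manufacturing analogy below relies on.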
From a manufacturing standpoint, the concept applies well across a set of distributed facilities, where the overall goal is to optimize the performance and utilization of available capacity given varying demand signals, individual operating characteristics of each facility, cost considerations, preventative maintenance windows, etc. It’s a system that can be measured, analyzed, and optimized, with data gathered and measured locally, a subset of which is used to inform and guide the macro-level process.
Striking the Balance
While I will dive into this a little further towards the tail end of this article, the overall premise from an operating standpoint is to have a model that optimizes the coordination of activity between individual operating units (facilities) that are running as autonomously as possible at peak efficiency, while distributing work across them in a way that maximizes production, availability, cost, or whatever other business parameters are most critical.
The key point is that the technology infrastructure for distributing and enabling production across and within facilities should ideally be driven by business parameters that can be input and adjusted at the macro level, with the entire system of facilities adjusting in real time in a seamless, integrated way. Conversely, the system should be a closed loop, where a disruption at the facility level can inform a change across the overall ecosystem such that workloads are redistributed (if possible) to minimize the impact on overall production. This could range from one or more micro-level events (e.g., a higher than expected occurrence of unplanned outages) that inform production scheduling and the distribution of orders, to a major event (e.g., a fire or substantial facility outage) that redirects work across other facilities to minimize end customer impact. Arguably, elements within ERP systems can account for some of this today, but the level and degree of customization required to make it a robust and inclusive process would be substantial, given that much of the data required to inform the model exists outside the ERP ecosystem itself, in equipment, devices, processes, and execution within individual facilities.
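The closed-loop redistribution idea can be illustrated with a toy example: when one facility goes down, its workload is re-spread across the remaining facilities in order of spare capacity. The facility names, assignments, and capacities are hypothetical, and a real system would weigh cost, logistics, and qualification constraints, not just capacity:

```python
def redistribute(assignments: dict[str, int],
                 capacity: dict[str, int],
                 down: str) -> tuple[dict[str, int], int]:
    """Move a downed facility's workload onto facilities with spare capacity.

    Returns the new assignments plus any work that could not be placed.
    """
    displaced = assignments.get(down, 0)
    new = {f: units for f, units in assignments.items() if f != down}
    # Fill the facilities with the most headroom first.
    for f in sorted(new, key=lambda f: capacity[f] - new[f], reverse=True):
        moved = min(displaced, capacity[f] - new[f])
        new[f] += moved
        displaced -= moved
        if displaced == 0:
            break
    return new, displaced

# e.g., a fire takes plant-b offline; its 30 units are re-spread.
plan = {"plant-a": 50, "plant-b": 30, "plant-c": 40}
caps = {"plant-a": 80, "plant-b": 60, "plant-c": 60}
new_plan, unplaced = redistribute(plan, caps, down="plant-b")
```

Anything left in `unplaced` is the production impact that no amount of orchestration can hide, which is the signal the macro-level process would escalate.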
Thinking about Mergers, Acquisitions, and Divestitures
As I mentioned in the previous section on data, establishing standards is critical to enabling a distributed paradigm for operations, the benefit of which is also the speed at which an acquisition could be leveraged effectively in concert with an existing set of facilities. This assumes there is an ability to translate and integrate systems rapidly to make the new facility function as a logical extension of what is already in place, but ultimately a number of those technology-related challenges would have to be worked through in the interest of optimizing individual facility performance regardless. The alternative to having this macro-level dynamic ecosystem functioning would likely be excess cost, inefficiency, and wasted production capacity.
Advancing the Digital Facility
The Role of the Digital Facility
At a time when data and analytics can inform meaningful action in real-time, the starting point for optimizing performance is the individual “processor”, which is a digital facility. While the historical mental model would focus on IT and OT systems and integrating them in a secure way, the emergence of digital equipment, sensors, devices, and connected workers has led to more complex infrastructure and an exponential amount of available data that needs to be thoughtfully integrated to maximize the value it can contribute over time. With this increased reliance on technology, likely some of which runs locally and some in the cloud, the reliability of wired and wireless connectivity has also become a critical imperative of operating and competing as a digital manufacturer.
Thinking About Auto Maintenance
Drawing on a consumer example, I brought my car in for maintenance recently. The first thing the dealer did was plug in and download a set of diagnostic information that was gathered over the course of my road trips over the last year and a half. The data was collected passively, provided the technicians with input on how various engine components were performing, and also some insight on settings that I could adjust given my driving habits that would enable the car to perform better (e.g., be more fuel efficient). These diagnostics and safety systems are part of having a modern car and we take them for granted.
Turning back to a manufacturing facility, a similar mental model should apply for managing data at a local and enterprise level, which is that there should be a passive flow of data to a central repository that is mapped to processes, equipment, and operators in a way that enables ongoing analytics to help troubleshoot problems, identify optimization and maintenance opportunities, and look across facilities for efficiencies that could be leveraged at broader scale.
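A minimal sketch of that passive flow might look like the following: each reading carries its facility, equipment, and process context so the central repository can answer cross-cutting questions. The class names, metric names, and values are all illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Reading:
    """One passively collected data point, mapped to its operating context."""
    facility: str
    equipment_id: str   # keyed to the standard equipment hierarchy
    process: str
    metric: str
    value: float
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Repository:
    """Central store supporting simple analytics across context dimensions."""
    def __init__(self) -> None:
        self.readings: list[Reading] = []

    def ingest(self, r: Reading) -> None:
        self.readings.append(r)

    def average(self, metric: str, facility: Optional[str] = None) -> float:
        rows = [r.value for r in self.readings if r.metric == metric
                and (facility is None or r.facility == facility)]
        return sum(rows) / len(rows)

repo = Repository()
repo.ingest(Reading("plant-a", "press-01", "stamping", "vibration_mm_s", 2.1))
repo.ingest(Reading("plant-a", "press-02", "stamping", "vibration_mm_s", 3.9))
avg = repo.average("vibration_mm_s", facility="plant-a")
```

Because every reading is already mapped to equipment and process, the same store can serve local troubleshooting and cross-facility benchmarking without re-keying the data.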
Building Smarter Equipment
Taking things a step further… what if I were to attach a sensor under the hood of my car, take the data, build a model, and try to make driving decisions using that model and my existing dashboard as input? The concept seems a little ridiculous given the systems already in place within a car to help make the driving experience safe and efficient. That being said, in a manufacturing facility with legacy equipment, that intelligence isn’t always built in, and the role of analytics can become an informed guessing game of how a piece of equipment is functioning without the benefit of the knowledge of the people who built the equipment to begin with.
Ultimately, the goal should be for the intelligence to be embedded within the equipment itself, to enable a level of self-healing or alerting, and then within control systems to look at operating conditions across a connected ecosystem to determine appropriate interventions as they occur, whether that be a minor adjustment to operating parameters or a level of preventative maintenance.
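The kind of embedded check described above can be sketched as a simple decision function a smart piece of equipment might run on its own readings. The thresholds and action names are purely illustrative; real limits would come from the equipment manufacturer:

```python
def evaluate(temp_c: float, vibration_mm_s: float) -> str:
    """Self-assessment a piece of equipment could run on its own sensors.

    Escalates from a minor parameter adjustment, to preventative
    maintenance, to a self-protective stop. Thresholds are hypothetical.
    """
    if temp_c > 95 or vibration_mm_s > 8.0:
        return "shutdown"               # self-protective stop
    if temp_c > 85 or vibration_mm_s > 6.0:
        return "schedule_maintenance"   # preventative intervention
    if temp_c > 80:
        return "adjust_parameters"      # minor operating tweak
    return "ok"
```

The value of embedding this logic in the equipment itself, rather than inferring it downstream, is that the escalation reflects the builder's knowledge of the machine rather than an informed guess from analytics.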
The Role of Edge Computing and Facility Data
The desire to optimize performance and safety at the individual facility level means that decisions need to be informed and actions taken in near real-time as much as possible. This premise then suggests that facility data management and edge computing will continue to increase in criticality as more advanced uses of AI become part of everyday integrated work processes and facility operations.
Enabling Operators with Intelligence
The Knowledge Challenge
With the general labor shortage in the market and the retirement of experienced, skilled workers, managing knowledge and accelerating productivity is a major issue to be addressed in manufacturing facilities. There are a number of challenges associated with this situation, not the least of which can be safety related, depending on the nature of the manufacturing environment itself. Beyond that, the longer it takes to make an operator productive relative to their average tenure (something statistics suggest is continually shrinking), the more the effectiveness of the average worker becomes a limiting factor in the operating performance of the facility overall.
Understanding Operator Overload
One way things have gotten worse is the proliferation of systems that comes with “modernizing” the manufacturing environment itself. Confronted with an ever-expanding set of control, IT, ERP, and analytical systems, all of which can be sending alerts and requesting action (of varying criticality) on a relatively continuous basis, individual operators and supervisors in a facility are under substantially increased pressure, compounded by the exponential amounts of data now available. This is further complicated in situations where an individual “wears multiple hats,” fulfilling multiple roles/personas within a given facility, making it considerably more complex to arbitrate which actions to take against that increased number of demands.
Why Digital Experience Matters
While the number of applications that are part of an operating environment may not be easy to reduce or simplify without significant investment (and time to make change happen), it is possible to look at things like digital experience platforms (DXPs) as a means to consolidate multiple applications into a single, integrated experience, inclusive of AR/VR/XR technologies as appropriate. Organizing around an individual operator’s responsibilities can help reduce confusion, eliminate duplicated data entry, improve data quality, and, by extension, improve productivity, safety, and effectiveness.
The Role of the Intelligent Agent
With a foundation in place to organize and present relevant information and actions to an operator on a real-time basis, the next level of opportunity comes with the integration of intelligent agents (AI-enabled tools) into a digital worker platform to inform meaningful, guided actions that will ultimately create the most production and safety impact on an ongoing basis. Again, there is a significant dependency on edge computing, wireless infrastructure, facility data, mobile devices, a delivery mechanism (the DXP mentioned above), and a sound underlying technology strategy to enable this at scale, but it is ultimately where AI tools can have a major impact in manufacturing moving forward.
Optimizing Performance through Orchestration
Why Orchestration Matters
Orchestration itself isn’t a new concept in manufacturing from my perspective, as legacy versions of it are likely inherent in control and MES systems themselves. The challenge comes when you want to scale the concept out to include digital equipment, digital workers, digital devices, control systems, and connected applications in one seamless, integrated end-to-end process. Orchestration provides the means to build configurable, dynamic workflows and associated rules into how you operate and optimize performance within and across facilities in a digital enterprise.
While this is definitely a capability that would need to be developed and extended over time, the concept is to think of the manufacturing ecosystem as a seamless collaboration of operators and equipment to ultimately drive efficient and safe production of finished goods. Having established the infrastructure to coordinate and track activity, the process performance can be automatically recorded and analyzed to inform continuous improvement on an ongoing basis.
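A minimal sketch of configurable, rule-driven orchestration follows: events from equipment, workers, or applications are matched against rules that map them to coordinated actions. The event types, conditions, and action names are invented for illustration; a production system would persist its rules as configuration and record every triggered action for the continuous-improvement analysis described above:

```python
# Each rule is (event_type, condition, action) -- configurable, not hard-coded
# into any one system, which is the point of the orchestration layer.
def make_rules():
    return [
        ("vision.defect", lambda e: e["confidence"] > 0.9, "halt_line"),
        ("agv.blocked",   lambda e: True,                  "reroute_agv"),
        ("sensor.temp",   lambda e: e["value"] > 90,       "notify_operator"),
    ]

def orchestrate(event: dict, rules) -> list[str]:
    """Return the actions triggered by a single event."""
    return [action for etype, cond, action in rules
            if event["type"] == etype and cond(event)]

# A high-confidence defect from a computer vision system halts the line.
actions = orchestrate({"type": "vision.defect", "confidence": 0.97}, make_rules())
```

Because the rules live outside the individual systems, adding a new connected device means adding rules, not rewriting integrations.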
Orchestrating within the Facility
The uses of orchestration within a facility range from something as simple as coordinating and optimizing material movement between autonomous vehicles and forklifts to computer vision applications for safety and quality management. With the increasing number of connected solutions within a facility, having the means to integrate and coordinate activity between and across them offers a significant opportunity in digital manufacturing moving forward.
Orchestrating across the Enterprise
Scaling back out to the enterprise level, looking across facilities, there are opportunities in areas like procuring MRO supplies and optimizing inventory levels, managing and optimizing production planning across similar facilities, and benchmarking and analyzing process performance to find improvements that can be applied across facilities, creating substantially greater impact than is possible if the focus is limited to an individual facility alone. Given that certain enterprise systems like ERPs tend to operate at a largely global versus local level, having infrastructure in place to coordinate activity across both can create visibility into improvement opportunities and thereby substantial value over time.
Coordinated Execution
Finally, to coordinate between the local and global levels of execution, a thoughtful approach to managing data and the associated analytics needs to be taken. As was mentioned in the opening, the overall operating model is meant to leverage a configurable, distributed paradigm, so the data that is shared and analyzed within and across layers is important to calibrate as part of the evolving operating and technology strategy.
Wrapping Up
There is a considerable amount of complexity in moving from a legacy, process- and equipment-oriented mindset to one that is digitally enabled and based on insight-driven, orchestrated action. That being said, the good news is that the value that can be unlocked with a thoughtful digital strategy is substantial, given we’re still on the front end of the evolution overall.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.